Similar Articles
20 similar articles found (search time: 15 ms)
1.
The purpose of the current work is to introduce stratified bivariate ranked set sampling (SBVRSS) and investigate its performance for estimating the population mean using both naïve and ratio methods. The properties of the proposed estimator are derived along with the optimal allocation with respect to stratification. We conduct a simulation study to demonstrate the relative efficiency of SBVRSS as compared to stratified bivariate simple random sampling (SBVSRS) for ratio estimation. Data consisting of weights and bilirubin levels in the blood of 120 babies are used to illustrate the procedure on a real data set. Based on our simulation, SBVRSS for ratio estimation is more efficient than SBVSRS in all cases.
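As a minimal illustration of why ranked set sampling can beat simple random sampling for mean estimation (the unstratified, univariate building block behind the SBVRSS design above), the following sketch compares the two schemes on a synthetic population. The population, set size, and cycle count are illustrative assumptions, not taken from the paper.

```python
import random
import statistics

def rss_sample(population, set_size, cycles, rng):
    """Draw a ranked set sample: in each cycle, for each rank i,
    draw set_size units, rank them (perfect ranking assumed by
    sorting), and measure only the i-th order statistic."""
    measured = []
    for _ in range(cycles):
        for i in range(set_size):
            judged = sorted(rng.sample(population, set_size))
            measured.append(judged[i])
    return measured

rng = random.Random(42)
population = [rng.gauss(50, 10) for _ in range(10_000)]

# Compare SRS and RSS estimates of the mean over repeated draws;
# both schemes measure k * m = 30 units per draw.
n_reps, k, m = 500, 3, 10
srs_means = [statistics.mean(rng.sample(population, k * m)) for _ in range(n_reps)]
rss_means = [statistics.mean(rss_sample(population, k, m, rng)) for _ in range(n_reps)]

# RSS is typically less variable for the same number of measurements
print(statistics.pvariance(srs_means), statistics.pvariance(rss_means))
```

For a normal population with set size 3, the textbook relative efficiency of RSS over SRS is close to 2, which the simulated variances reflect.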

2.
Three simple transformations are proposed in the context of ratio and product methods of estimation, based on any probability sampling design, and the usual unbiased estimation under varying probability sampling. These transformations may be effected after the data are collected in a survey. The objective is to obtain improved estimators of the population total.

3.
We adapt the ratio estimation using ranked set sampling, suggested by Samawi and Muttlak (Biometr J 38:753–764, 1996), to the ratio estimator for the population mean, based on Prasad (Commun Stat Theory Methods 18:379–392, 1989), in simple random sampling. Theoretically, we show that the proposed ratio estimator for the population mean is more efficient than the ratio estimator of Prasad (1989) under all conditions. In addition, we support this theoretical result with a numerical example.
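The classical ratio estimator that these RSS variants build on can be sketched as follows under simple random sampling; the synthetic population and the assumption that the auxiliary mean is known are illustrative, not taken from the cited papers.

```python
import random
import statistics

rng = random.Random(7)

# Synthetic population where Y is roughly proportional to a cheap auxiliary X
X = [rng.uniform(10, 30) for _ in range(5000)]
Y = [2.0 * x + rng.gauss(0, 2) for x in X]
X_bar = statistics.mean(X)          # auxiliary population mean assumed known

def ratio_estimate(idx):
    """Classical ratio estimator of the Y mean: ybar * (X_bar / xbar)."""
    y_bar = statistics.mean(Y[i] for i in idx)
    x_bar = statistics.mean(X[i] for i in idx)
    return y_bar * (X_bar / x_bar)

n = 40
mu_Y = statistics.mean(Y)
errs_ratio, errs_srs = [], []
for _ in range(400):
    idx = rng.sample(range(5000), n)
    errs_ratio.append(ratio_estimate(idx) - mu_Y)
    errs_srs.append(statistics.mean(Y[i] for i in idx) - mu_Y)

# Mean squared errors: the ratio estimator exploits the strong X-Y correlation
print(statistics.mean(e * e for e in errs_ratio),
      statistics.mean(e * e for e in errs_srs))
```

Because Y is nearly proportional to X here, the ratio estimator's error is driven only by the residual noise, so its MSE is far below that of the plain sample mean.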

4.
Case–control design to assess the accuracy of a binary diagnostic test (BDT) is very frequent in clinical practice. This design consists of applying the diagnostic test to all of the individuals in a sample of those who have the disease and in another sample of those who do not have the disease. The sensitivity of the diagnostic test is estimated from the case sample and the specificity is estimated from the control sample. Another parameter which is used to assess the performance of a BDT is the weighted kappa coefficient. The weighted kappa coefficient depends on the sensitivity and specificity of the diagnostic test, on the disease prevalence and on the weighting index. In this article, confidence intervals are studied for the weighted kappa coefficient subject to a case–control design and a method is proposed to calculate the sample sizes to estimate this parameter. The results obtained were applied to a real example.
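A hedged sketch of how the weighted kappa coefficient of a binary diagnostic test depends on sensitivity, specificity, prevalence and the weighting index, assuming Kraemer's parameterization κ(c) = pqY / (p(1−Q)c + qQ(1−c)); the counts and the prevalence below are hypothetical, and the prevalence must come from external information since a case–control design does not estimate it.

```python
def weighted_kappa(se, sp, p, c):
    """Weighted kappa of a binary diagnostic test (Kraemer's form, an
    assumption here): kappa(c) = p*q*Y / (p*(1-Q)*c + q*Q*(1-c)),
    with q = 1-p, Youden index Y = se+sp-1, and Q = p*se + q*(1-sp)
    the marginal probability of a positive test."""
    q = 1.0 - p
    Y = se + sp - 1.0
    Q = p * se + q * (1.0 - sp)
    return p * q * Y / (p * (1.0 - Q) * c + q * Q * (1.0 - c))

# Case-control design: sensitivity from cases, specificity from controls
cases_pos, n_cases = 90, 100         # hypothetical counts
controls_neg, n_controls = 85, 100
se_hat = cases_pos / n_cases
sp_hat = controls_neg / n_controls
prevalence = 0.2                     # assumed known from external sources

print(round(weighted_kappa(se_hat, sp_hat, prevalence, c=0.5), 3))
```

Sanity checks: a perfect test (se = sp = 1) gives κ(c) = 1 for any c, and an uninformative test (Youden index 0) gives κ(c) = 0.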

5.
We investigate the relative performance of stratified bivariate ranked set sampling (SBVRSS), with respect to stratified simple random sampling (SSRS) for estimating the population mean with regression methods. The mean and variance of the proposed estimators are derived with the mean being shown to be unbiased. We perform a simulation study to compare the relative efficiency of SBVRSS to SSRS under various data-generating scenarios. We also compare the two sampling schemes on a real data set from trauma victims in a hospital setting. The results of our simulation study and the real data illustration indicate that using SBVRSS for regression estimation provides more efficiency than SSRS in most cases.

6.
The aim of this work was to evaluate whether the number of partitions of index components and the use of specific weights for each component influence the diagnostic accuracy of a composite index. Simulation studies were conducted to compare the sensitivity, specificity and area under the ROC curve (AUC) of indices constructed using an equal number of components but different numbers of partitions for all components. Moreover, the odds ratio obtained from the univariate logistic regression model for each component was proposed as a potential weight. The simulation results showed that the sensitivity, specificity and AUC of an index increase as the number of partitions of its components increases; however, the rate at which diagnostic accuracy improves diminishes as the number of partitions grows. In addition, the diagnostic accuracy of the weighted index developed using the proposed weights was higher than that of the corresponding unweighted index. The use of large-scale index components and of effect-size measures (i.e. odds ratios, ORs) of index components as weights is proposed in order to obtain indices with high diagnostic accuracy for a particular binary outcome.

7.
The composite likelihood is among the computational methods used for estimating the generalized linear mixed model (GLMM) in the context of bivariate meta-analysis of diagnostic test accuracy studies. Its advantage is that the likelihood can be derived conveniently under the assumption of independence between the random effects, but there has been no clear analysis of the merit or necessity of this method. For the synthesis of diagnostic test accuracy studies, a copula mixed model has been proposed in the biostatistics literature. This general model includes the GLMM as a special case and also allows for flexible dependence modelling, different from assuming simple linear correlation structures, normality and tail independence in the joint tails. A maximum likelihood (ML) method, based on evaluating the bi-dimensional integrals of the likelihood with quadrature methods, has been proposed; it eases the computational difficulty caused by the double integral in the likelihood function. Both methods are thoroughly examined with extensive simulations and illustrated with data from a published meta-analysis. It is shown that the ML method has no non-convergence issues or computational difficulties and at the same time allows estimation of the dependence between study-specific sensitivity and specificity, and thus prediction via summary receiver operating characteristic curves.

8.
Bayesian sample size estimation for equivalence and non-inferiority tests of diagnostic methods is considered. The goal of the study is to test whether a new screening test of interest is equivalent to, or not inferior to, the reference test, which may or may not be a gold standard. Sample sizes are chosen by the model performance criteria of average posterior variance, length and coverage probability. In the absence of a gold standard, sample sizes are evaluated by the ratio of marginal probabilities of the two screening tests; in the presence of a gold standard, they are evaluated by the measures of sensitivity and specificity.

9.
10.
It is well known that when a ranked set sampling (RSS) scheme is employed to estimate the mean of a population, it is more efficient than simple random sampling (SRS) with the same sample size. One can use an RSS analog of the SRS regression estimator to estimate the population mean of Y using its concomitant variable X when they are linearly related. Unfortunately, the variance of this estimator cannot be evaluated unless the distribution of X is known. We investigate the use of resampling methods to establish confidence intervals for the regression estimation of the population mean. Simulation studies show that the proposed methods perform well in a variety of situations when the assumption of linearity holds, and decently well under mild non-linearity.
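A percentile-bootstrap sketch of a confidence interval for the regression estimator ȳ + b(μ_X − x̄) with known auxiliary mean μ_X; for simplicity the pairs below come from a plain random sample rather than an RSS design, and all numbers (model, sample size, bootstrap replicates) are illustrative assumptions.

```python
import random
import statistics

def regression_estimate(pairs, mu_x):
    """Regression estimator of mean(Y): ybar + b * (mu_x - xbar),
    with b the least-squares slope of Y on X."""
    xs = [x for x, _ in pairs]
    ys = [y for _, y in pairs]
    xbar, ybar = statistics.mean(xs), statistics.mean(ys)
    sxx = sum((x - xbar) ** 2 for x in xs)
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    return ybar + (sxy / sxx) * (mu_x - xbar)

def bootstrap_ci(pairs, mu_x, level=0.95, B=2000, seed=1):
    """Percentile bootstrap CI: resample the (X, Y) pairs with
    replacement, recompute the estimator, and take quantiles."""
    rng = random.Random(seed)
    stats = sorted(
        regression_estimate([rng.choice(pairs) for _ in pairs], mu_x)
        for _ in range(B)
    )
    lo = stats[int((1 - level) / 2 * B)]
    hi = stats[int((1 + level) / 2 * B) - 1]
    return lo, hi

rng = random.Random(3)
mu_x = 5.0                      # auxiliary mean assumed known
sample = []
for _ in range(30):
    x = rng.gauss(mu_x, 1)
    sample.append((x, 3.0 + 2.0 * x + rng.gauss(0, 0.5)))

print(bootstrap_ci(sample, mu_x))   # interval around the true mean 13
```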

11.
Ranked set sampling (RSS) is a cost-efficient technique for data collection when the units in a population can be easily judgment ranked by any cheap method other than actual measurements. Using auxiliary information in developing statistical procedures for inference about different population characteristics is a well-known approach. In this work, we deal with quantile estimation from a population with known mean when data are obtained according to RSS scheme. Through the simple device of mean-correction (subtract off the sample mean and add on the known population mean), a modified estimator is constructed from the standard quantile estimator. Asymptotic normality of the new estimator and its asymptotic efficiency relative to the original estimator are derived. Simulation results for several underlying distributions show that the proposed estimator is more efficient than the traditional one.
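The mean-correction device described above (subtract the sample mean, add the known population mean, then take the usual sample quantile) can be sketched as follows; the normal population, the simple order-statistic quantile rule, and the use of plain random samples instead of RSS are all illustrative simplifications.

```python
import random
import statistics

def mean_corrected_quantile(data, p, known_mean):
    """Quantile estimator exploiting a known population mean:
    shift the data so its sample mean equals the known mean,
    then take the ordinary sample quantile of the shifted values."""
    shift = known_mean - statistics.mean(data)
    shifted = sorted(x + shift for x in data)
    k = max(0, min(len(shifted) - 1, int(p * len(shifted))))
    return shifted[k]

rng = random.Random(11)
true_mean, true_median = 0.0, 0.0    # standard normal population

errs_plain, errs_corrected = [], []
for _ in range(1000):
    data = [rng.gauss(true_mean, 1.0) for _ in range(25)]
    errs_plain.append(statistics.median(data) - true_median)
    errs_corrected.append(
        mean_corrected_quantile(data, 0.5, true_mean) - true_median)

# MSE comparison: the corrected estimator borrows strength from the mean
print(statistics.mean(e * e for e in errs_plain),
      statistics.mean(e * e for e in errs_corrected))
```

For the normal case the gain is easy to see analytically: the corrected median's error is (sample median − sample mean), whose variance is smaller than that of the sample median alone.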

12.
Optimal design methods have been proposed to determine the best sampling times when sparse blood sampling is required in clinical pharmacokinetic studies. However, the optimal blood sampling time points may not be feasible in clinical practice. Sampling windows, a time interval for blood sample collection, have been proposed to provide flexibility in blood sampling times while preserving efficient parameter estimation. Because of the complexity of the population pharmacokinetic models, which are generally nonlinear mixed effects models, there is no analytical solution available to determine sampling windows. We propose a method for determination of sampling windows based on MCMC sampling techniques. The proposed method attains a stationary distribution rapidly and provides time-sensitive windows around the optimal design points. The proposed method is applicable to determine sampling windows for any nonlinear mixed effects model although our work focuses on an application to population pharmacokinetic models.

13.
The estimation or prediction of population characteristics based on sample information is a key issue in survey sampling. If the sample sizes in subpopulations (domains) are large enough, methods similar to those used for the whole population can be used to estimate or predict subpopulation characteristics as well. To estimate or predict the characteristics of domains with small or even zero sample sizes, small area estimation methods that “borrow strength” from other subpopulations or time periods are widely used. We extend this problem and study methods for predicting future population and subpopulation characteristics based on longitudinal data.

14.
We propose a weighted empirical likelihood approach to inference with multiple samples, including stratified sampling, the estimation of a common mean using several independent and non-homogeneous samples and inference on a particular population using other related samples. The weighting scheme and the basic result are motivated and established under stratified sampling. We show that the proposed method can ideally be applied to the common mean problem and problems with related samples. The proposed weighted approach not only provides a unified framework for inference with multiple samples, including two-sample problems, but also facilitates asymptotic derivations and computational methods. A bootstrap procedure is also proposed in conjunction with the weighted approach to provide better coverage probabilities for the weighted empirical likelihood ratio confidence intervals. Simulation studies show that the weighted empirical likelihood confidence intervals perform better than existing ones.

15.
The odds ratio is a measure commonly used for expressing the association between an exposure and a binary outcome. A feature of the odds ratio is that its value depends on the choice of the distribution over which the probabilities in the odds ratio are evaluated. In particular, this means that an odds ratio conditional on a covariate may have a different value from an odds ratio marginal on the covariate, even if the covariate is not associated with the exposure (i.e. is not a confounder). We define the individual odds ratio (IOR) and the population odds ratio (POR) as the ratio of the odds of the outcome for a unit increase in the exposure, respectively, for an individual in the population and for the whole population, in which case the odds are averaged across the population. The attenuation of the conditional odds ratio, the marginal odds ratio, and the POR relative to the IOR is demonstrated in a realistic simulation exercise. The degree of attenuation differs in the whole population and in a case–control sample, and the property of invariance to outcome-dependent sampling holds only for the IOR. The relevance of the non-collapsibility of odds ratios in a range of methodological areas is discussed.
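The non-collapsibility described above can be shown analytically in a toy logistic model: even when the covariate Z is independent of the exposure X, the odds ratio marginal over Z is attenuated toward 1 relative to the conditional (individual-level) odds ratio exp(b1). The coefficients and the Bernoulli(0.5) covariate are arbitrary illustrative choices.

```python
import math

# Logistic model: logit P(Y=1 | X, Z) = b0 + b1*X + b2*Z,
# with binary exposure X and covariate Z independent of each other.
b0, b1, b2 = -1.0, 1.0, 2.0

def p_outcome(x, z):
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * x + b2 * z)))

# Conditional odds ratio for X is exp(b1) at every fixed Z
conditional_or = math.exp(b1)

# Marginal OR: average the risks over Z ~ Bernoulli(0.5), then form odds
def marginal_risk(x):
    return 0.5 * p_outcome(x, 0) + 0.5 * p_outcome(x, 1)

def odds(p):
    return p / (1.0 - p)

marginal_or = odds(marginal_risk(1)) / odds(marginal_risk(0))

print(conditional_or, marginal_or)  # marginal OR is attenuated toward 1
```

No confounding is involved: Z is independent of X by construction, yet the two odds ratios differ because averaging risks does not commute with forming odds.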

16.
When quantification of all sampling units is expensive but a set of units can be ranked without formal measurement, ranked set sampling (RSS) is a cost-efficient alternative to simple random sampling (SRS). In this paper, we study the Kaplan–Meier estimator of survival probability based on RSS under a random censoring setup, and propose nonparametric estimators of the population mean. We present a simulation study to compare the performance of the suggested estimators. It turns out that the RSS design can yield a substantial improvement in efficiency over the SRS design. Additionally, we apply the proposed methods to a real data set from an environmental study.
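The Kaplan–Meier building block used above can be sketched for a plain (non-RSS) right-censored sample as follows; the tiny data set is invented for illustration.

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival curve: at each distinct event time t,
    multiply the running survival estimate by (1 - d_t / n_t), where
    d_t is the number of events at t and n_t the number still at risk
    just before t. Censored times reduce the risk set but add no factor."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv, curve = 1.0, []
    i = 0
    while i < len(data):
        t = data[i][0]
        d = n = 0
        while i < len(data) and data[i][0] == t:
            n += 1
            d += data[i][1]
            i += 1
        if d:
            surv *= 1.0 - d / n_at_risk
            curve.append((t, surv))
        n_at_risk -= n
    return curve

# 1 = event observed, 0 = right-censored
times  = [3, 5, 5, 8, 10, 12]
events = [1, 1, 0, 1, 0, 1]
print(kaplan_meier(times, events))
```

On this sample the curve steps down at times 3, 5, 8 and 12; the censored observations at 5 and 10 shrink the risk set without producing a step.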

17.
Logistic regression is the most popular technique available for modeling dichotomous dependent variables. It has intensive application in the social, medical, behavioral and public health sciences. In this paper we propose a more efficient logistic regression analysis based on a moving extreme ranked set sampling (MERSSmin) scheme, with ranking based on an easily available auxiliary variable known to be associated with the variable of interest (response variable). The paper demonstrates that this approach provides a more powerful testing procedure, as well as more efficient odds ratio and parameter estimation, than simple random sampling (SRS). Theoretical derivations and simulation studies are provided. Data from the 2011 Youth Risk Behavior Surveillance System (YRBSS) are used to illustrate the procedures developed in this paper.

18.
We introduce a multi-step variance minimization algorithm for numerical estimation of Type I and Type II error probabilities in sequential tests. The algorithm can be applied to general test statistics and easily built into general design algorithms for sequential tests. Our simulation results indicate that the proposed algorithm is particularly useful for estimating tail probabilities, and may lead to significant computational efficiency gains over the crude Monte Carlo method.

19.
Based on theoretical research and application experience abroad, successive (rotation) sampling estimation methods are expected to have very broad application prospects in China. Taking these methods as the research object, this paper provides a theoretical and systematic review of existing foreign research, compares and analyzes the various successive sampling estimation methods, and summarizes open problems and future research directions. The aim is to provide a reference for subsequent research and to lay a theoretical foundation for the smooth application of these methods in actual survey work in China, thereby further advancing the reform and development of China's statistical survey methodology.

20.
In this article, we consider median ranked set sampling estimation and hypothesis testing for the mean of symmetric distributions. We suggest some alternative estimation strategies for the parameters based on shrinkage and pretest principles; it is advantageous to use non-sample information in the estimation process to construct alternative estimators of the parameter of interest. Large-sample properties of the suggested estimators are assessed numerically using computer simulation, and their relative performance for moderate and large samples is also simulated. For illustration purposes, the proposed methodology is applied to data collected from the Pepsi Cola production company in Al-Khobar, Saudi Arabia.
