Similar Literature: 20 similar records found
1.
The capture-recapture method is applied to estimate the population size of a target population based on ascertainment data in epidemiological applications. We generalize the three-list case of Chao & Tsay (1998) to situations where more than three lists are available. An estimation procedure is presented using the concept of sample coverage, which can be interpreted as a measure of overlap information among multiple list records. When there is enough overlap, an estimator of the total population size is proposed. The bootstrap method is used to construct a variance estimator and confidence interval. If the overlap rate is relatively low, then the population size cannot be precisely estimated and thus only a lower (upper) bound is proposed for positively (negatively) dependent lists. The proposed method is applied to two data sets, one with a high and one with a low overlap rate.

2.
The confidence interval is a basic form of interval estimation in statistics. For samples from a normal population with unknown mean and variance, the traditional way to construct a t-based confidence interval for the mean is to treat the n sampled units as n separate groups and build the interval from them. Here we propose a generalized method: we first divide the units into several equal-sized groups and then compute the confidence interval from the means of these groups. If "better" is defined in terms of the expected length of the confidence interval, the first method is better because its expected interval length is shorter; we prove this intuition theoretically. We also show that when the elements within each group are correlated, the first method is invalid, while the second still gives correct results in terms of coverage probability, and we illustrate this with analytical expressions. In practice, when the data set is extremely large and distributed across several data centers, the second method is a useful tool for obtaining confidence intervals in both the independent and the correlated cases. Simulations and real data analyses are presented to verify our theoretical results.
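The contrast between the two constructions described in this abstract can be sketched in a few lines. The code below is an illustrative sketch only, with a hypothetical sample and group size k; it builds the usual t interval from all n units and a second t interval from k equal-sized group means.

```python
# Minimal sketch (not the authors' code): t-intervals for a normal mean,
# built either from all n units or from k equal-sized group means.
import numpy as np
from scipy import stats

def t_interval(values, level=0.95):
    """Two-sided t interval for the mean of `values`."""
    m = len(values)
    mean = np.mean(values)
    se = np.std(values, ddof=1) / np.sqrt(m)
    tcrit = stats.t.ppf(0.5 + level / 2, df=m - 1)
    return mean - tcrit * se, mean + tcrit * se

rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=2.0, size=120)   # hypothetical sample

# Method 1: treat the n units individually (df = n - 1).
ci_units = t_interval(x)

# Method 2: split into k = 12 equal-sized groups and use the group means (df = k - 1).
k = 12
group_means = x.reshape(k, -1).mean(axis=1)
ci_groups = t_interval(group_means)

print("interval from units :", ci_units)
print("interval from groups:", ci_groups)   # typically wider, per the abstract
```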

3.
Based on the idea of generalized inference, a new kind of generalized confidence interval is derived for the among-group variance component in the heteroscedastic one-way random effects model. We construct structural equations for all variance components in the model from their minimal sufficient statistics; the fiducial generalized pivotal quantity (FGPQ) is then obtained by solving an implicit equation in the parameter of interest, and the confidence interval follows naturally from the FGPQ. Simulation results demonstrate that the new procedure performs very well in terms of both empirical coverage probability and average interval length.

4.
What is the interpretation of a confidence interval following estimation of a Box-Cox transformation parameter λ? Several authors have argued that confidence intervals for linear model parameters ψ can be constructed as if λ were known in advance, rather than estimated, provided the estimand is interpreted conditionally given $\hat{\lambda}$. If the estimand is defined as $\psi(\hat{\lambda})$, a function of the estimated transformation, can the nominal confidence level be regarded as a conditional coverage probability given $\hat{\lambda}$, where the interval is random and the estimand is fixed? Or should it be regarded as an unconditional probability, where both the interval and the estimand are random? This article investigates these questions via large-n approximations, small-σ approximations, and simulations. It is shown that, when the model assumptions are satisfied and n is large, the nominal confidence level closely approximates the conditional coverage probability. When n is small, this conditional approximation is still good for regression models with small error variance. The conditional approximation can be poor for regression models with moderate error variance and for single-factor ANOVA models with small to moderate error variance. In these situations the nominal confidence level still provides a good approximation to the unconditional coverage probability. This suggests that, while the estimand may be interpreted conditionally, the confidence level should sometimes be interpreted unconditionally.

5.
In scenarios where the variance of a response variable can be attributed to two sources of variation, a confidence interval for the ratio of variance components gives information about the relative importance of the two sources. For example, if measurements taken from different laboratories are nine times more variable than measurements taken within the laboratories, then 90% of the variance in the responses is due to variability among laboratories and 10% is due to variability within laboratories. Assuming normally distributed sources of variation, confidence intervals for variance components are readily available. In this paper, however, simulation studies are conducted to evaluate the performance of such confidence intervals under non-normal distributional assumptions. Confidence intervals based on the pivotal quantity method, fiducial inference, and the large-sample properties of the restricted maximum likelihood (REML) estimator are considered. Simulation results and an empirical example suggest that the REML-based confidence interval is favored over the other two procedures in the unbalanced one-way random effects model.
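The arithmetic in the laboratory example, and a method-of-moments (ANOVA) point estimate of the variance ratio from a balanced one-way random effects model, can be sketched as follows. This is an illustrative sketch under my own simulated setup, not the pivotal, fiducial, or REML interval procedures compared in the paper.

```python
# Illustrative sketch only: the abstract's arithmetic for a variance ratio,
# plus ANOVA-based point estimates from a balanced one-way random effects model.
import numpy as np

# If between-lab variance is 9 times the within-lab variance:
sigma2_between, sigma2_within = 9.0, 1.0
total = sigma2_between + sigma2_within
print("between-lab share:", sigma2_between / total)   # 0.9
print("within-lab share :", sigma2_within / total)    # 0.1

# ANOVA (method-of-moments) estimates from simulated balanced data
# (hypothetical setup; the paper studies interval procedures, not just point estimates).
rng = np.random.default_rng(1)
a, n = 10, 6                                   # labs, measurements per lab
lab_effects = rng.normal(0, np.sqrt(sigma2_between), size=a)
y = lab_effects[:, None] + rng.normal(0, np.sqrt(sigma2_within), size=(a, n))

ms_between = n * np.var(y.mean(axis=1), ddof=1)        # E[MSB] = sigma2_w + n*sigma2_b
ms_within = np.mean(np.var(y, axis=1, ddof=1))          # pooled within-lab variance
sig2_b_hat = max((ms_between - ms_within) / n, 0.0)
print("estimated ratio sigma2_b / sigma2_w:", sig2_b_hat / ms_within)
```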

6.
For a normal distribution with known variance, the standard confidence interval for the location parameter is derived from the classical Neyman procedure. When the parameter space is known to be restricted, the standard confidence interval is arguably unsatisfactory. Recent articles have addressed this problem and proposed confidence intervals for the mean of a normal distribution when the mean is restricted to be non-negative. In this article, we propose a new confidence interval, the rp interval, and derive the Bayesian credible interval and the likelihood ratio interval for a general restricted parameter space. We compare these intervals with the standard interval and the minimax interval, and simulation studies are undertaken to assess their performance.

7.
We consider the problem of making inferences about the common mean of several heterogeneous log-normal populations. We apply the parametric bootstrap (PB) approach and the method of variance estimate recovery (MOVER) to construct confidence intervals for the log-normal common mean, and compare the proposed intervals with existing ones via an extensive simulation study. Simulation results show that the proposed MOVER and PB confidence intervals can be recommended generally across different sample sizes and numbers of populations.
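As a rough illustration of the parametric bootstrap idea, simplified to a single log-normal population rather than the paper's common-mean setting (and not the MOVER construction), one can bootstrap the plug-in estimate of exp(μ + σ²/2):

```python
# Simplified sketch, not the paper's procedure: a parametric bootstrap percentile
# interval for a single log-normal mean exp(mu + sigma^2 / 2). The data are hypothetical.
import numpy as np

rng = np.random.default_rng(2)
x = rng.lognormal(mean=1.0, sigma=0.8, size=40)
logx = np.log(x)
n = len(x)
mu_hat, s2_hat = logx.mean(), logx.var(ddof=1)

B = 5000
draws = np.empty(B)
for b in range(B):
    # Simulate a bootstrap sample from the fitted log-normal model and re-estimate the mean.
    xb = rng.lognormal(mean=mu_hat, sigma=np.sqrt(s2_hat), size=n)
    lb = np.log(xb)
    draws[b] = np.exp(lb.mean() + lb.var(ddof=1) / 2)

lo, hi = np.percentile(draws, [2.5, 97.5])
print("point estimate       :", np.exp(mu_hat + s2_hat / 2))
print("approx 95% PB interval:", (lo, hi))
```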

8.
This paper concerns the problem of “improving” interval estimators of variance components in the one-way random model. The traditional construction relies on confidence intervals built from the corresponding sums of squares in the analysis of variance. We show how information from the estimator of the population mean can be used to “improve” the estimation of the variance components.

9.
10.
After the completion of the Human Genome Project, disease targets can be identified at the molecular level, and treatment modalities for these molecular targets can be developed. In practice, targeted clinical trials are usually conducted to evaluate the possibility and feasibility of individualized treatment of patients. However, the accuracy of the diagnostic devices used to identify such molecular targets is usually not perfect. Some patients enrolled in targeted clinical trials with a positive result from the diagnostic device may therefore not have the specific molecular targets, and hence the treatment effects of the targeted drugs estimated from such trials can be biased for the patient population that truly has the molecular targets. Under an enrichment design for targeted clinical trials, we propose to use the EM algorithm and the bootstrap method to obtain inferences about the treatment effects of the targeted drugs in the patient population that truly has the molecular targets. A simulation study was conducted to investigate empirically the bias and variability of the proposed estimator and the size and power of the proposed testing method. Simulation results demonstrate that the proposed estimator is unbiased with adequate precision and that the confidence interval provides satisfactory coverage probability. In addition, the proposed testing procedure adequately controls the size with sufficient power. A practical example illustrates the utility of the proposed method. Copyright © 2009 John Wiley & Sons, Ltd.

11.
In the model of type I censored exponential lifetimes, coverage probabilities are compared for a number of confidence interval constructions proposed in the literature. The coverage probabilities are calculated exactly for sample sizes up to 50 and for different degrees of censoring and different nominal confidence levels. If not only fair two-sided coverage is desired but also fair one-sided coverages, only a few methods are satisfactory. A likelihood-based interval and a third-root transformation to normality work almost perfectly, but the χ²-based method that is perfect under no censoring and under type II censoring can also be advocated.
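For concreteness, a likelihood-ratio interval for the exponential mean under type I censoring at a fixed time c can be sketched as below. The setup and values are hypothetical; this is not the paper's code, and the paper studies several additional constructions and exact coverages.

```python
# Sketch (assumptions mine, not the paper's code): likelihood-ratio interval
# for the exponential mean theta under type I censoring at a fixed time c.
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

rng = np.random.default_rng(3)
theta_true, c, n = 10.0, 8.0, 30          # hypothetical values
t = rng.exponential(theta_true, size=n)
obs = np.minimum(t, c)                    # observed (possibly censored) times
d = int(np.sum(t <= c))                   # number of uncensored failures
T = obs.sum()                             # total time on test

def loglik(theta):
    # Censored-exponential log-likelihood: -d*log(theta) - T/theta
    return -d * np.log(theta) - T / theta

theta_hat = T / d                          # MLE (assumes at least one failure)
cut = loglik(theta_hat) - chi2.ppf(0.95, df=1) / 2

g = lambda th: loglik(th) - cut            # roots of g bracket the LR interval
lower = brentq(g, 1e-6, theta_hat)
upper = brentq(g, theta_hat, 1e6)
print("MLE:", theta_hat, "95% LR interval:", (lower, upper))
```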

12.
Highly skewed, non-negative data can often be modeled by the delta-lognormal distribution in fisheries research. However, the coverage probabilities of existing interval estimation procedures are unsatisfactory for small sample sizes and highly skewed data. We propose a heuristic method for estimating confidence intervals for the mean of the delta-lognormal distribution, based on an asymptotic generalized pivotal quantity used to construct a generalized confidence interval. Simulation results show that the proposed interval estimation procedure yields satisfactory coverage probabilities, expected interval lengths, and reasonable relative biases. Finally, the proposed method is applied to red cod density data as a demonstration.
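The delta-lognormal mean combines the probability of a zero observation with the log-normal mean of the positive part, E[X] = (1 − δ)exp(μ + σ²/2). A minimal point-estimation sketch on hypothetical data (the paper's generalized-pivotal interval construction is not reproduced) looks like this:

```python
# Minimal sketch of the delta-lognormal point estimate of the mean,
# (1 - delta) * exp(mu + sigma^2 / 2); interval construction is not reproduced.
import numpy as np

rng = np.random.default_rng(4)
# Hypothetical catch-density data: many exact zeros, positives log-normal.
n, p_zero = 200, 0.3
positive = rng.lognormal(mean=0.5, sigma=1.0, size=n)
x = np.where(rng.random(n) < p_zero, 0.0, positive)

nonzero = x[x > 0]
delta_hat = 1.0 - len(nonzero) / len(x)          # estimated proportion of zeros
mu_hat = np.log(nonzero).mean()
s2_hat = np.log(nonzero).var(ddof=1)

mean_hat = (1.0 - delta_hat) * np.exp(mu_hat + s2_hat / 2.0)
print("estimated delta:", delta_hat)
print("estimated mean :", mean_hat)
```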

13.
Exact confidence intervals for variances rely on normal distribution assumptions. Alternatively, large-sample confidence intervals for the variance can be attained if one estimates the kurtosis of the underlying distribution. The method used to estimate the kurtosis has a direct impact on the performance of the interval and thus the quality of statistical inferences. In this paper the author considers a number of kurtosis estimators combined with large-sample theory to construct approximate confidence intervals for the variance. In addition, a nonparametric bootstrap resampling procedure is used to build bootstrap confidence intervals for the variance. Simulated coverage probabilities using different confidence interval methods are computed for a variety of sample sizes and distributions. A modification to a conventional estimator of the kurtosis, in conjunction with adjustments to the mean and variance of the asymptotic distribution of a function of the sample variance, improves the resulting coverage values for leptokurtically distributed populations.
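A Wald-type baseline version of such a kurtosis-adjusted interval can be sketched from the standard large-sample identity Var(s²) = (μ₄ − σ⁴(n−3)/(n−1))/n, plugging in simple moment estimates. This is only an illustration; the paper's specific kurtosis estimators and mean/variance adjustments are not reproduced here.

```python
# Sketch under a standard large-sample result (not the paper's specific adjustments):
# Var(s^2) ~ (m4 - (n-3)/(n-1) * s^4) / n, with m4 the estimated fourth central moment,
# giving a Wald-type interval for the variance.
import numpy as np
from scipy.stats import norm

def kurtosis_adjusted_ci(x, level=0.95):
    x = np.asarray(x, dtype=float)
    n = len(x)
    s2 = x.var(ddof=1)
    m4 = np.mean((x - x.mean()) ** 4)            # simple moment estimator of mu_4
    var_s2 = (m4 - (n - 3) / (n - 1) * s2 ** 2) / n
    z = norm.ppf(0.5 + level / 2)
    half = z * np.sqrt(max(var_s2, 0.0))
    return max(s2 - half, 0.0), s2 + half

rng = np.random.default_rng(5)
x = rng.standard_t(df=6, size=200)               # hypothetical heavy-tailed sample
print("approx 95% CI for the variance:", kurtosis_adjusted_ci(x))
```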

14.
If the unknown mean of a univariate population is sufficiently close to the value of an initial guess, then an appropriate shrinkage estimator has smaller average squared error than the sample mean. This principle has been known for some time, but it does not appear to have been extended to problems of interval estimation. The author presents valid two-sided 95% and 99% “shrinkage” confidence intervals for the mean of a normal distribution. These intervals are narrower than the usual interval based on the Student t distribution when the population mean lies in an “effective interval” around the guess; a reduction of 20% in the mean width of the interval is possible when the population mean is sufficiently close to the value of the guess. The author also describes a modification to existing shrinkage point estimators of the general univariate mean that enables the effective interval to be enlarged.
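The opening principle (smaller mean squared error near the guess) is easy to verify numerically. The sketch below uses a simple fixed-weight shrinkage of the sample mean toward a prior guess with hypothetical values; it is not the author's shrinkage confidence interval.

```python
# Illustration of the opening principle only (not the author's interval):
# a fixed-weight shrinkage of the sample mean toward a prior guess has smaller
# mean squared error when the true mean is close to the guess.
import numpy as np

rng = np.random.default_rng(6)
mu_true, mu_guess, sigma, n, c = 0.2, 0.0, 1.0, 25, 0.3   # hypothetical values

reps = 20000
xbar = rng.normal(mu_true, sigma / np.sqrt(n), size=reps)  # sampling dist. of the mean
shrunk = (1 - c) * xbar + c * mu_guess

mse_mean = np.mean((xbar - mu_true) ** 2)
mse_shrunk = np.mean((shrunk - mu_true) ** 2)
print("MSE of sample mean     :", mse_mean)
print("MSE of shrunk estimator:", mse_shrunk)   # smaller here, since mu_true is near the guess
```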

15.
The quantile function plays an important role in statistical inference, and intermediate quantiles are useful in risk management. It is known that the jackknife method fails to estimate the variance of a sample quantile. By assuming that the underlying distribution satisfies certain extreme value conditions, we show that the jackknife variance estimator is inconsistent for an intermediate order statistic. We further derive the asymptotic limit of the jackknife-Studentized intermediate order statistic, from which a confidence interval for an intermediate quantile can be obtained. A simulation study is conducted to compare this new confidence interval with existing ones in terms of coverage accuracy.
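The estimator in question is the delete-one jackknife variance of a sample quantile; a minimal sketch (with a hypothetical heavy-tailed sample) shows how it is computed:

```python
# Sketch of the delete-one jackknife variance estimator for a sample quantile
# (the object the abstract shows to be inconsistent for intermediate order statistics).
import numpy as np

def jackknife_variance(x, statistic):
    x = np.asarray(x)
    n = len(x)
    leave_one_out = np.array([statistic(np.delete(x, i)) for i in range(n)])
    return (n - 1) / n * np.sum((leave_one_out - leave_one_out.mean()) ** 2)

rng = np.random.default_rng(7)
x = rng.pareto(3.0, size=500)                        # hypothetical heavy-tailed sample
q90 = lambda v: np.quantile(v, 0.90)
print("sample 0.90 quantile       :", q90(x))
print("jackknife variance estimate:", jackknife_variance(x, q90))
```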

16.
The Behrens–Fisher problem concerns inference for the difference between the means of two independent normal populations without assuming equal variances. In this article, we compare three approximate confidence intervals and a generalized confidence interval for the Behrens–Fisher problem. We also show how to obtain simultaneous confidence intervals for the three-population case (analysis of variance, ANOVA) using the Bonferroni correction. We conduct an extensive simulation study to evaluate these methods with respect to their type I error rate, power, expected confidence interval width, and coverage probability. Finally, the considered methods are applied to two real data sets.
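One widely used approximation for the Behrens–Fisher problem is the Welch–Satterthwaite interval, sketched below on hypothetical data; whether it is among the three intervals compared in the article is not stated here, and the article's generalized confidence interval is not reproduced.

```python
# Sketch of the Welch-Satterthwaite approximate interval for the difference of two
# normal means with unequal variances (a standard Behrens-Fisher approximation).
import numpy as np
from scipy import stats

def welch_ci(x, y, level=0.95):
    nx, ny = len(x), len(y)
    vx, vy = np.var(x, ddof=1), np.var(y, ddof=1)
    diff = np.mean(x) - np.mean(y)
    se = np.sqrt(vx / nx + vy / ny)
    # Welch-Satterthwaite approximate degrees of freedom
    df = se ** 4 / ((vx / nx) ** 2 / (nx - 1) + (vy / ny) ** 2 / (ny - 1))
    t = stats.t.ppf(0.5 + level / 2, df)
    return diff - t * se, diff + t * se

rng = np.random.default_rng(8)
x = rng.normal(10.0, 1.0, size=15)     # hypothetical samples with unequal variances
y = rng.normal(9.0, 3.0, size=20)
print("95% Welch interval for the mean difference:", welch_ci(x, y))
```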

17.
A simple and unified prediction interval (PI) for the median of a future lifetime can be obtained through a power transformation. This interval usually possesses the correct coverage, at least asymptotically, when the transformation is known. However, when the transformation is unknown and is estimated from the data, a correction is required; a simple correction factor is derived based on large-sample theory. Simulation shows that the corrected unified PI performs well. Compared with existing frequentist PIs, it shows equivalent or better performance in terms of coverage probability and average interval length. Its nonparametric character and ease of use make it attractive to practitioners. Real data examples are provided for illustration.

18.
In this paper we use the empirical likelihood method to construct a confidence interval for the truncation parameter in the random truncation model. The empirical log-likelihood ratio is derived and its asymptotic distribution is shown to be a weighted chi-square. Simulation studies are used to compare confidence intervals based on empirical likelihood with those based on the normal approximation; the empirical likelihood method is found to provide an improved confidence interval.

19.
This article generates normal, multinomial, and binomial data under three generalizability theory designs, p×i, p×i×h, and p×(i:h), estimates the variance components, standard errors, and confidence intervals with both the jackknife method and the traditional method, and compares the performance of the two methods. The results show that (1) the jackknife method is quite accurate for both variance component estimation and standard error estimation; (2) compared with the traditional method, the jackknife method is slightly weaker at estimating confidence intervals for the variance components; and (3) compared with the traditional method, the accuracy of the jackknife estimates does not fluctuate with data type, study design, or variance component, so it is more robust.
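As a rough illustration of the quantities involved, the sketch below computes ANOVA variance-component estimates for a crossed p×i design and a delete-one-person jackknife standard error for the person component. The data and design sizes are hypothetical, and this is not the article's implementation.

```python
# Sketch only (assumptions mine): ANOVA variance-component estimates for a crossed
# p x i design, with a delete-one-person jackknife standard error for sigma^2_p.
import numpy as np

def pxi_components(X):
    """X: persons x items score matrix, one observation per cell."""
    n_p, n_i = X.shape
    grand = X.mean()
    ms_p = n_i * np.sum((X.mean(axis=1) - grand) ** 2) / (n_p - 1)
    ms_i = n_p * np.sum((X.mean(axis=0) - grand) ** 2) / (n_i - 1)
    resid = X - X.mean(axis=1, keepdims=True) - X.mean(axis=0, keepdims=True) + grand
    ms_pi = np.sum(resid ** 2) / ((n_p - 1) * (n_i - 1))
    return {"p": (ms_p - ms_pi) / n_i, "i": (ms_i - ms_pi) / n_p, "pi,e": ms_pi}

rng = np.random.default_rng(9)
n_persons, n_items = 50, 20                     # hypothetical normal data
persons = rng.normal(0, 1.0, size=(n_persons, 1))
items = rng.normal(0, 0.5, size=(1, n_items))
X = persons + items + rng.normal(0, 0.8, size=(n_persons, n_items))

est = pxi_components(X)
loo = np.array([pxi_components(np.delete(X, j, axis=0))["p"] for j in range(n_persons)])
se_jack = np.sqrt((n_persons - 1) / n_persons * np.sum((loo - loo.mean()) ** 2))
print("variance components      :", est)
print("jackknife SE of sigma^2_p:", se_jack)
```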

20.
The emphasis in the literature is on normalizing transformations, despite the greater importance of the homogeneity of variance in analysis. A strategy for a choice of variance-stabilizing transformation is suggested. The relevant component of variation must be identified and, when this is not within-subject variation, a major explanatory variable must also be selected to subdivide the data. A plot of group standard deviation against group mean, or log standard deviation against log mean, may identify a simple power transformation or shifted log transformation. In other cases, within the shifted Box-Cox family of transformations, a contour plot to show the region of minimum heterogeneity defined by an appropriate index is proposed to enable an informed choice of transformation. If used in conjunction with the maximum-likelihood contour plot for the normalizing transformation, then it is possible to assess whether or not there exists a transformation that satisfies both criteria.
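The log-standard-deviation-versus-log-mean diagnostic mentioned here has a simple numerical form: fit a line with slope b and consider the power transform y → y^(1−b), with a log transform when 1 − b is near zero. A sketch on hypothetical grouped data (an illustration of the diagnostic, not the article's contour-plot procedure):

```python
# Sketch of the log-SD-versus-log-mean diagnostic: a fitted slope b suggests the
# variance-stabilizing power y -> y**(1 - b), with log(y) when 1 - b is near zero.
import numpy as np

rng = np.random.default_rng(10)
# Groups whose standard deviation is roughly proportional to the mean (b near 1),
# for which a log transform is the usual choice.
group_means = np.array([2.0, 5.0, 10.0, 20.0, 50.0])
groups = [m + rng.normal(0, 0.4 * m, size=30) for m in group_means]

log_mean = np.log([g.mean() for g in groups])
log_sd = np.log([g.std(ddof=1) for g in groups])
b, intercept = np.polyfit(log_mean, log_sd, deg=1)

power = 1.0 - b
print("fitted slope b      :", round(b, 2))
print("suggested transform :", "log(y)" if abs(power) < 0.1 else f"y**{power:.2f}")
```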
