Similar Articles (20 results)
1.
This article introduces mean-minimum (MM) exact confidence intervals for a binomial probability. These intervals guarantee that both the mean and the minimum frequentist coverage never drop below specified values. For example, an MM 95[93]% interval has mean coverage at least 95% and minimum coverage at least 93%. In the conventional sense, such an interval can be viewed as an exact 93% interval that has mean coverage at least 95% or it can be viewed as an approximate 95% interval that has minimum coverage at least 93%. Graphical and numerical summaries of coverage and expected length suggest that the Blaker-based MM exact interval is an attractive alternative to, even an improvement over, commonly recommended approximate and exact intervals, including the Agresti–Coull approximate interval, the Clopper–Pearson (CP) exact interval, and the more recently recommended CP-, Blaker-, and Sterne-based mean-coverage-adjusted approximate intervals.
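For readers unfamiliar with the baseline intervals named above, the following sketch (in Python with scipy; the function names are ours) computes the Clopper–Pearson exact and Agresti–Coull approximate intervals that the MM interval is compared against. It does not implement the paper's mean-minimum adjustment.

```python
# Baseline binomial intervals referenced in the abstract: Clopper-Pearson (exact)
# and Agresti-Coull (approximate). Illustrative sketch only; the mean-minimum (MM)
# adjustment described in the paper is not implemented here.
from scipy.stats import beta, norm

def clopper_pearson(x, n, conf=0.95):
    """Exact CP interval via beta quantiles; endpoints are 0/1 at the boundaries."""
    a = (1 - conf) / 2
    lo = 0.0 if x == 0 else beta.ppf(a, x, n - x + 1)
    hi = 1.0 if x == n else beta.ppf(1 - a, x + 1, n - x)
    return lo, hi

def agresti_coull(x, n, conf=0.95):
    """Approximate interval: add z^2/2 pseudo-successes and failures, then Wald."""
    z = norm.ppf(1 - (1 - conf) / 2)
    n_t = n + z**2
    p_t = (x + z**2 / 2) / n_t
    half = z * (p_t * (1 - p_t) / n_t) ** 0.5
    return max(0.0, p_t - half), min(1.0, p_t + half)

print(clopper_pearson(7, 20))
print(agresti_coull(7, 20))
```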

2.
The author describes a method for improving standard “exact” confidence intervals in discrete distributions with respect to size while retaining correct level. The binomial, negative binomial, hypergeometric, and Poisson distributions are considered explicitly. Contrary to other existing methods, the author's solution possesses a natural nesting condition: if α < α′, the 1 − α′ confidence interval is included in the 1 − α interval. Nonparametric confidence intervals for a quantile are also considered.

3.
An EM algorithm is proposed for computing estimates of the parameters of the negative binomial distribution; the algorithm does not involve further iterations in the M-step, in contrast with the one given in Schader & Schmid (1985). The approach can be applied to the corresponding problem for the logarithmic series distribution. The convergence of the proposed scheme is investigated by simulation, the observed Fisher information is derived, and numerical examples based on real data are presented.
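The paper's M-step-free EM update is not reproduced here; as a point of reference only, the sketch below fits the negative binomial by direct numerical maximum likelihood (profiling out p given the shape r), which is the target any correct EM scheme for this model should converge to. The function name and the (r, p) parameterisation are our own choices.

```python
# Not the EM algorithm of the paper: a direct numerical maximum-likelihood fit of
# the negative binomial (size r, success prob p, counting failures), useful as a
# check on what an EM implementation should converge to.
import numpy as np
from scipy.special import gammaln
from scipy.optimize import minimize_scalar

def nb_mle(x):
    x = np.asarray(x, dtype=float)
    xbar = x.mean()

    def neg_loglik(log_r):
        r = np.exp(log_r)
        p = r / (r + xbar)            # profile MLE of p given r
        ll = (gammaln(x + r) - gammaln(r) - gammaln(x + 1)
              + r * np.log(p) + x * np.log1p(-p))
        return -ll.sum()

    res = minimize_scalar(neg_loglik, bounds=(-10, 10), method="bounded")
    r_hat = np.exp(res.x)
    return r_hat, r_hat / (r_hat + xbar)

rng = np.random.default_rng(0)
sample = rng.negative_binomial(n=3, p=0.4, size=500)
print(nb_mle(sample))   # estimates of (r, p), close to the true (3, 0.4)
```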

4.
The negative binomial group distribution was proposed in the literature, motivated by inverse sampling under group inspection: products are inspected group by group, and the number of non-conforming items in a group is recorded only once inspection of the whole group has finished. The non-conforming probability p of the population is thus the parameter of interest. In this paper, the construction of confidence intervals for this parameter is investigated. The common normal-approximation and exact methods are applied. To overcome the drawbacks of these commonly used methods, a composite method based on confidence intervals for the negative binomial distribution is proposed, which exploits the relationship between the negative binomial distribution and the negative binomial group distribution. Simulation studies are carried out to examine the performance of our methods. A real data example is also presented to illustrate the application of our method.
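As illustrative background only, the sketch below gives the exact (Clopper–Pearson-type) interval for p under plain negative binomial sampling, one building block such a composite method can draw on; the group-distribution adjustment described in the abstract is not shown, and the helper name is ours.

```python
# Exact interval for p under plain negative binomial (inverse) sampling: y
# "failures" (conforming items) observed before the r-th "success" (non-conforming
# item). Uses nbinom.cdf(y; r, p) = I_p(r, y + 1), the regularized incomplete beta.
from scipy.stats import beta

def nb_exact_ci(r, y, conf=0.95):
    """Exact CI for p after observing y failures before the r-th success."""
    a = (1 - conf) / 2
    lo = beta.ppf(a, r, y + 1)
    hi = 1.0 if y == 0 else beta.ppf(1 - a, r, y)
    return lo, hi

print(nb_exact_ci(5, 40))   # e.g. 5 non-conforming items found after 40 conforming
```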

5.
We consider the classic problem of interval estimation of a proportion p based on binomial sampling. The ‘exact’ Clopper–Pearson confidence interval for p is known to be unnecessarily conservative. We propose coverage adjustments of the Clopper–Pearson interval that incorporate prior or posterior beliefs into the interval. Using heatmap‐type plots for comparing confidence intervals, we show that the coverage‐adjusted intervals have satisfying coverage and shorter expected lengths than competing intervals found in the literature.
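Comparisons of this kind rest on exact coverage calculations; the sketch below (our own illustration, not the paper's code) shows how the coverage probability of a binomial interval such as Clopper–Pearson can be evaluated on a grid of true p values, the quantity behind heatmap-type coverage plots.

```python
# Exact coverage probability of a binomial interval at each true p: sum the
# binomial probabilities of the outcomes whose interval contains that p.
import numpy as np
from scipy.stats import binom, beta

def clopper_pearson(x, n, conf=0.95):
    a = (1 - conf) / 2
    lo = 0.0 if x == 0 else beta.ppf(a, x, n - x + 1)
    hi = 1.0 if x == n else beta.ppf(1 - a, x + 1, n - x)
    return lo, hi

def coverage(n, p_grid, conf=0.95):
    limits = np.array([clopper_pearson(x, n, conf) for x in range(n + 1)])
    cov = []
    for p in p_grid:
        inside = (limits[:, 0] <= p) & (p <= limits[:, 1])
        cov.append(binom.pmf(np.arange(n + 1), n, p)[inside].sum())
    return np.array(cov)

p_grid = np.linspace(0.01, 0.99, 99)
print(coverage(25, p_grid).min())   # CP coverage never drops below the nominal 0.95
```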

6.
The likelihood ratio statistic for testing pointwise hypotheses about the survival time distribution in the current status model can be inverted to yield confidence intervals (CIs). One advantage of this procedure is that CIs can be formed without estimating the unknown parameters that figure in the asymptotic distribution of the maximum likelihood estimator (MLE) of the distribution function. We discuss the likelihood ratio-based CIs for the distribution function and the quantile function and compare these intervals to several different intervals based on the MLE. The quantiles of the limiting distribution of the MLE are estimated using various methods including parametric fitting, kernel smoothing and subsampling techniques. Comparisons are carried out both for simulated data and on a data set involving time to immunization against rubella. The comparisons indicate that the likelihood ratio-based intervals are preferable from several perspectives.

7.
The standard method of obtaining a two-sided confidence interval for the Poisson mean produces an interval which is exact but can be shortened without violating the minimum coverage requirement. We classify the intervals that have been proposed as alternatives to the standard interval. The classification is carried out using two desirable properties of two-sided confidence intervals.
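For reference, the standard exact two-sided interval for a Poisson mean referred to above can be written in terms of chi-square quantiles; the sketch below shows only this baseline, without the proposed shortening, and the function name is ours.

```python
# Standard exact two-sided interval for a Poisson mean (Garwood-type), built
# from chi-square quantiles; the baseline that alternative intervals shorten.
from scipy.stats import chi2

def poisson_exact_ci(x, conf=0.95):
    a = (1 - conf) / 2
    lo = 0.0 if x == 0 else chi2.ppf(a, 2 * x) / 2
    hi = chi2.ppf(1 - a, 2 * (x + 1)) / 2
    return lo, hi

print(poisson_exact_ci(10))   # roughly (4.80, 18.39) for 10 observed events
```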

8.
We show that the confidence interval version of the extended exact unconditional Z test of Suissa and Shuster (1985) for testing the equality of two binomial proportions follows from general results of Buehler (1957), Sudakov (1974) and references cited there, and Harris and Soms (1984). We apply these results to obtain exact unconditional confidence intervals for the difference between two proportions, derive an explicit solution for the “best” outcome, make some comments on Buehler's (1957) method, and give a numerical example. The Appendix contains a listing of the necessary FORTRAN programs.

9.
In this note, we derive the exact distribution of S by using the method of generating functions and Bell polynomials, where S = X1 + X2 + ⋯ + Xn and each Xi follows a negative binomial distribution with arbitrary parameters. As a particular case, we also obtain the exact distribution of the convolution of geometric random variables.
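A closed form is what the note derives; as an independent numerical check, the distribution of the sum can also be obtained by convolving truncated negative binomial probability mass functions, as in this illustrative sketch (the parameter values are arbitrary examples).

```python
# Numerical check of the distribution of S = X1 + ... + Xn for independent
# negative binomial components: convolve truncated PMFs.
import numpy as np
from scipy.stats import nbinom

def sum_nb_pmf(params, k_max=200):
    """PMF of the sum of NB(r_i, p_i) variables, truncated at k_max."""
    support = np.arange(k_max + 1)
    pmf = nbinom.pmf(support, *params[0])
    for r, p in params[1:]:
        pmf = np.convolve(pmf, nbinom.pmf(support, r, p))[: k_max + 1]
    return pmf

# three components with arbitrary parameters (r_i, p_i); r_i = 1 gives a geometric
pmf = sum_nb_pmf([(2, 0.3), (5, 0.6), (1, 0.5)])
print(pmf[:5], pmf.sum())   # total mass is close to 1 when k_max is large enough
```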

10.
The negative binomial distribution offers an alternative view to the binomial distribution for modeling count data. This alternative view is particularly useful when the probability of success is very small, because, unlike the fixed sampling scheme of the binomial distribution, the inverse sampling approach allows one to collect enough data in order to adequately estimate the proportion of success. However, despite work that has been done on the joint estimation of two binomial proportions from independent samples, there is little, if any, similar work for negative binomial proportions. In this paper, we construct and investigate three confidence regions for two negative binomial proportions based on three statistics: the Wald (W), score (S) and likelihood ratio (LR) statistics. For large-to-moderate sample sizes, this paper finds that all three regions have good coverage properties, with comparable average areas for large sample sizes but with the S method producing the smaller regions for moderate sample sizes. In the small sample case, the LR method has good coverage properties, but often at the expense of comparatively larger areas. Finally, we apply these three regions to some real data for the joint estimation of liver damage rates in patients taking one of two drugs.

11.
Starting from the compound Poisson INGARCH models, we introduce in this paper a new family of integer-valued models suitable for describing count data without zeros, which we name zero-truncated CP-INGARCH (ZTCP-INGARCH) processes. For this class of models, a probabilistic study concerning the existence of moments, stationarity and ergodicity is developed. The conditional quasi-maximum likelihood method is introduced to consistently estimate the parameters of a wide zero-truncated compound Poisson subclass of models. The conditional maximum likelihood method is also used to estimate the parameters of ZTCP-INGARCH processes associated with well-specified conditional laws. A simulation study that compares some of these estimators and illustrates their finite-sample behaviour, together with a real-data application, concludes the paper.

12.
We consider the problem of simultaneously estimating Poisson rate differences via the stepwise confidence interval method of Hsu and Berger (termed HBM), where comparisons to a common reference group are performed. We discuss continuity-corrected confidence intervals (CIs) and investigate the performance of the HBM with a moment-based CI and with Wald and pooled CIs, both uncorrected and corrected for continuity. Using simulations, we compare nine individual CIs in terms of coverage probability, and the HBM with the nine intervals in terms of family-wise error rate (FWER) and overall and local power. The simulations show that these statistical properties depend strongly on the parameter settings.
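As a hedged illustration of one building block, the sketch below computes a Wald-type interval for a single difference of Poisson rates with an optional simple continuity correction; the correction term used here is our assumption (several variants exist in the literature), and the stepwise HBM procedure itself is not implemented.

```python
# Wald-type interval for lambda1 - lambda2 from counts x_i over exposures t_i.
# The continuity correction below is one simple choice, not necessarily the
# paper's; several corrected variants exist in the literature.
from scipy.stats import norm

def poisson_diff_wald(x1, t1, x2, t2, conf=0.95, continuity=False):
    z = norm.ppf(1 - (1 - conf) / 2)
    d = x1 / t1 - x2 / t2
    se = (x1 / t1**2 + x2 / t2**2) ** 0.5
    cc = (1 / t1 + 1 / t2) / 2 if continuity else 0.0   # assumed correction term
    return d - z * se - cc, d + z * se + cc

print(poisson_diff_wald(15, 100, 25, 120))
print(poisson_diff_wald(15, 100, 25, 120, continuity=True))
```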

13.
When the data are discrete, standard approximate confidence limits often have coverage well below nominal for some parameter values. While ad hoc adjustments may largely solve this problem in particular cases, Kabaila & Lloyd (1997) gave a more systematic method of adjustment which leads to tight upper limits, whose coverage is never below nominal and which are as small as possible within a particular class. However, their computation is infeasible for all but the simplest models. This paper suggests modifying tight upper limits by an initial replacement of the unknown nuisance parameter vector by its profile maximum likelihood estimator. While the resulting limits no longer possess the optimal properties of tight limits exactly, the paper presents both numerical and theoretical evidence that the resulting coverage function is close to optimal. Moreover, these profile upper limits are much (possibly many orders of magnitude) easier to compute than tight upper limits.

14.
Extended Poisson process modelling is generalised to allow for covariate-dependent dispersion as well as a covariate-dependent mean response. This is done by a re-parameterisation that uses approximate expressions for the mean and variance. Such modelling allows under- and over-dispersion, or a combination of both, in the same data set to be accommodated within the same modelling framework. All the necessary calculations can be done numerically, enabling maximum likelihood estimation of all model parameters to be carried out. The modelling is applied to re-analyse two published data sets, where there is evidence of covariate-dependent dispersion, with the modelling leading to more informative analyses of these data and more appropriate measures of the precision of any estimates.

15.
In this article, we develop four explicit asymptotic two-sided confidence intervals for the difference between two Poisson rates via a hybrid method. The basic idea of the proposed method is to estimate, or recover, the variances of the two Poisson rate estimates, which are required for constructing the confidence interval for the rate difference, from the confidence limits for the two individual Poisson rates. The basic building blocks of the approach are therefore reliable confidence limits for the two individual Poisson rates. Four confidence interval estimators that have explicit solutions and good coverage levels are employed: the normal interval with continuity correction, and the Rao score, Freeman–Tukey, and Jeffreys confidence intervals. Using simulation studies, we examine the performance of the four hybrid confidence intervals and compare them with three existing confidence intervals: the non-informative prior Bayes confidence interval, the t confidence interval based on Satterthwaite's degrees of freedom, and the Bayes confidence interval based on Student's t confidence coefficient. Simulation results show that the proposed hybrid Freeman–Tukey and hybrid Jeffreys confidence intervals can be highly recommended because they outperform the others in terms of coverage probabilities and widths; the other methods tend to be too conservative and produce wider confidence intervals. The application of these confidence intervals is illustrated with three real data sets.
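A sketch of the variance-recovery idea described above, using Jeffreys intervals for the individual rates (one of the four choices mentioned) combined in MOVER style; the paper's exact formulas may differ in detail, and the function names are ours.

```python
# MOVER-style hybrid interval for lambda1 - lambda2: recover the variability of
# each rate estimate from the limits of its own interval (here Jeffreys).
from scipy.stats import gamma

def jeffreys_poisson(x, t, conf=0.95):
    """Jeffreys interval for a Poisson rate: posterior Gamma(x + 0.5, rate t)."""
    a = (1 - conf) / 2
    return gamma.ppf(a, x + 0.5) / t, gamma.ppf(1 - a, x + 0.5) / t

def hybrid_rate_diff(x1, t1, x2, t2, conf=0.95):
    r1, r2 = x1 / t1, x2 / t2
    l1, u1 = jeffreys_poisson(x1, t1, conf)
    l2, u2 = jeffreys_poisson(x2, t2, conf)
    d = r1 - r2
    lower = d - ((r1 - l1) ** 2 + (u2 - r2) ** 2) ** 0.5
    upper = d + ((u1 - r1) ** 2 + (r2 - l2) ** 2) ** 0.5
    return lower, upper

print(hybrid_rate_diff(15, 100, 25, 120))
```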

16.
Haibing (2009) proposed a procedure for successive comparisons between ordered treatment effects in a one-way layout and showed that it has greater power than the procedure of Lee and Spurrier (1995). The critical constants required for the procedure were estimated using Monte Carlo simulation, and only a few values of the constants were tabulated, which limits the applicability of the procedure. In this article, a numerical method based on recursive integration is discussed for computing the critical constants; it works efficiently for a large number of treatments, and extensive tables of critical constants are provided for the use of practitioners. Power comparisons of Haibing's and Lee and Spurrier's procedures are also discussed.

17.
Parametric confidence intervals are given for linear combinations of the means of independent Poisson variables and for their continuous versions. The performance of the intervals is assessed using simulation. A real data set is used to compare the proposed intervals with known ones. The proposed intervals are shown to be superior to known ones and comparable to exact intervals.

18.
The distribution arising from an inverse sampling process is also known as the negative binomial distribution and is used extensively in epidemiological research and in studies of the distribution of binary variables. This paper therefore proposes two gradient-statistic-based methods for constructing confidence intervals for the risk difference under inverse sampling, based respectively on the maximum likelihood estimator (MLE) and the uniformly minimum variance unbiased estimator (UMVUE) of the risk difference. Compared with the existing Wald and score methods, the proposed intervals have the advantage that their construction requires neither the Fisher information matrix nor its inverse, which greatly simplifies computation. A Monte Carlo simulation study of the proposed gradient-statistic-based intervals shows that they achieve good coverage probability and relatively short interval widths.

19.
Bayesian methods are often used to reduce the sample sizes and/or increase the power of clinical trials. The right choice of the prior distribution is a critical step in Bayesian modeling. If the prior is not completely specified, historical data may be used to estimate it. In an empirical Bayesian analysis, the resulting prior can be used to produce the posterior distribution. In this paper, we describe a Bayesian Poisson model with a conjugate Gamma prior. The parameters of the Gamma distribution are estimated in the empirical Bayesian framework under two estimation schemes. The straightforward numerical search for the maximum likelihood (ML) solution using the marginal negative binomial distribution is occasionally infeasible, so we propose a simplification of the maximization procedure. The Markov chain Monte Carlo method is used to create a set of Poisson parameters from the historical count data; these Poisson parameters are used to uniquely define the Gamma likelihood function, and easily computable approximation formulae may then be used to find the ML estimates of the parameters of the Gamma distribution. For the sample size calculations, the ML solution is replaced by its upper confidence limit to reflect the incomplete exchangeability of historical trials as opposed to current studies; the exchangeability is measured by the confidence interval for the historical rate of the events. With this prior, the formula for the sample size calculation is completely defined.
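The conjugate structure underlying this model is simple to state: with a Gamma(a, b) prior (shape a, rate b) and Poisson counts x1, ..., xn, the posterior is Gamma(a + Σxi, b + n). The sketch below shows only this update and a posterior credible interval; the empirical Bayes estimation of (a, b) from historical data and the paper's sample-size formula are not reproduced, and the function name is ours.

```python
# Conjugate Gamma-Poisson update: Gamma(a, b) prior (shape a, rate b) plus
# Poisson counts x_1..x_n gives posterior Gamma(a + sum(x), b + n).
import numpy as np
from scipy.stats import gamma

def gamma_poisson_posterior(counts, a, b):
    counts = np.asarray(counts)
    return a + counts.sum(), b + counts.size

a_post, b_post = gamma_poisson_posterior([3, 5, 2, 4], a=2.0, b=1.0)
post_mean = a_post / b_post
ci = gamma.ppf([0.025, 0.975], a_post, scale=1 / b_post)   # 95% credible interval
print(post_mean, ci)
```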

20.
It is shown how various exact non-parametric inferences based on order statistics in one or two random samples can be generalized to situations with progressive type-II censoring, which is a kind of evolutionary right censoring. Ordinary type-II right censoring is a special case of such progressive censoring. These inferences include confidence intervals for a given parent quantile, prediction intervals for a given order statistic of a future sample, and related two-sample inferences based on exceedance probabilities. The proposed inferences are valid for any parent distribution with continuous distribution function. The key result is that each observable uncensored order statistic that becomes available with progressive type-II censoring can be represented as a mixture, with known weights, of underlying ordinary order statistics. The importance of this mixture representation lies in the fact that various properties of such observable order statistics can be deduced immediately from well-known properties of ordinary order statistics.
