Similar Articles
20 similar articles found (search time: 31 ms)
1.
Guogen Shan, Statistics, 2018, 52(5): 1086-1095
In addition to a point estimate for the probability of response in a two-stage design (e.g. Simon's two-stage design for binary endpoints), confidence limits should be computed and reported. The current method of inverting the p-value function to compute the confidence interval does not guarantee coverage probability in a two-stage setting. The existing exact approach to calculating one-sided limits orders the sample space by the overall number of responses. This approach can be conservative because many sample points share the same limits. We propose a new exact one-sided interval that orders the sample space by p-value. Exact intervals are computed using binomial distributions directly, rather than a normal approximation. Both exact intervals preserve the nominal confidence level. The proposed p-value-based exact interval generally performs better than the other exact interval in terms of both expected length and simple average length of confidence intervals.

2.
ABSTRACT

For interval estimation of a binomial proportion and a Poisson mean, matching pseudocounts are derived that give one-sided Wald confidence intervals with second-order accuracy. These confidence intervals remove the bias in the coverage probabilities of the score confidence intervals. Poor behavior of the matching-pseudocount intervals in part of the parameter space is corrected by hybrid methods that fall back on the score confidence interval depending on the sample values.

3.
The well-known Wilson and Agresti–Coull confidence intervals for a binomial proportion p are centered around a Bayesian estimator. Using this as a starting point, similarities between frequentist confidence intervals for proportions and Bayesian credible intervals based on low-informative priors are studied using asymptotic expansions. A Bayesian motivation for a large class of frequentist confidence intervals is provided. It is shown that the likelihood ratio interval for p approximates a Bayesian credible interval based on Kerman's neutral noninformative conjugate prior up to O(n^(-1)) in the confidence bounds. For significance levels α ≤ 0.317, the Bayesian interval based on the Jeffreys prior is then shown to be a compromise between the likelihood ratio and Wilson intervals. Supplementary materials for this article are available online.
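The shared Bayesian-style center of the two intervals is easy to make concrete. A minimal Python sketch (ours, not the paper's) of the Wilson and Agresti–Coull intervals, both built around the shrinkage estimate p~ = (x + z²/2)/(n + z²):

```python
import math

def wilson_interval(x, n, z=1.96):
    """Wilson score interval for p, given x successes in n trials."""
    center = (x + z**2 / 2) / (n + z**2)
    half = (z / (n + z**2)) * math.sqrt(x * (n - x) / n + z**2 / 4)
    return center - half, center + half

def agresti_coull_interval(x, n, z=1.96):
    """Agresti-Coull: a Wald interval built around the same shrunken center."""
    n_t = n + z**2
    p_t = (x + z**2 / 2) / n_t
    half = z * math.sqrt(p_t * (1 - p_t) / n_t)
    return p_t - half, p_t + half
```

Both functions return the same center; the Agresti–Coull interval is never shorter than the Wilson interval, which is one way the two procedures differ in practice.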

4.
We consider the classic problem of interval estimation of a proportion p based on binomial sampling. The 'exact' Clopper–Pearson confidence interval for p is known to be unnecessarily conservative. We propose coverage adjustments of the Clopper–Pearson interval that incorporate prior or posterior beliefs into the interval. Using heatmap-type plots for comparing confidence intervals, we show that the coverage-adjusted intervals have satisfying coverage and shorter expected lengths than competing intervals found in the literature.
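For reference, the unadjusted Clopper–Pearson interval that the paper starts from has a standard closed form via beta quantiles. This sketch (assuming SciPy is available) shows that baseline form, not the authors' coverage-adjusted version:

```python
# Standard Clopper-Pearson interval via beta quantiles; the endpoints
# invert the binomial tail probabilities exactly, hence its conservatism.
from scipy.stats import beta

def clopper_pearson(x, n, alpha=0.05):
    """'Exact' two-sided interval for p given x successes in n trials."""
    lo = 0.0 if x == 0 else beta.ppf(alpha / 2, x, n - x + 1)
    hi = 1.0 if x == n else beta.ppf(1 - alpha / 2, x + 1, n - x)
    return lo, hi
```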

5.
Control charts are widely used for monitoring quality characteristics of high-yield processes. In such processes, where a large number of zero observations exists in the count data, zero-inflated binomial (ZIB) models are more appropriate than ordinary binomial models. In ZIB models, random shocks occur with probability θ, and upon the occurrence of a random shock, the number of non-conforming items in a sample of size n follows a binomial distribution with proportion p. In the present article, we study in more detail the exponentially weighted moving average control chart based on the ZIB distribution (ZIB-EWMA), and we also propose a new control chart based on the double exponentially weighted moving average statistic for monitoring ZIB data (ZIB-DEWMA). The two control charts are studied for detecting upward shifts in θ or p individually, as well as in both parameters simultaneously. Through a simulation study, we compare the performance of the proposed chart with the ZIB-Shewhart, ZIB-EWMA and ZIB-CUSUM charts. Finally, an illustrative example is presented to display the practical application of the ZIB charts.
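A minimal sketch of the data model and the single-EWMA recursion described above; the smoothing constant and parameter values are illustrative choices, not the article's optimized chart constants:

```python
import random

def zib_sample(theta, n, p, rng):
    """One ZIB count: 0 with probability 1 - theta, else a Binomial(n, p) draw."""
    if rng.random() >= theta:
        return 0
    return sum(rng.random() < p for _ in range(n))

def ewma_path(counts, lam=0.2, z0=0.0):
    """EWMA recursion Z_t = lam * X_t + (1 - lam) * Z_{t-1}."""
    zs, z = [], z0
    for x in counts:
        z = lam * x + (1 - lam) * z
        zs.append(z)
    return zs

rng = random.Random(1)
counts = [zib_sample(0.1, 50, 0.05, rng) for _ in range(200)]
path = ewma_path(counts)
```

A chart would signal when the path crosses a control limit calibrated to a target in-control ARL; the DEWMA variant applies the same recursion a second time to the EWMA output.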

6.
This paper compares the Bayesian and frequentist approaches to testing a one-sided hypothesis about a multivariate mean. First, it proposes a simple way to assign a Bayesian posterior probability to one-sided hypotheses about a multivariate mean. The approach is to use (almost) the exact posterior probability under the assumption that the data have a multivariate normal distribution, under either a conjugate prior in large samples or a vague Jeffreys prior. This is also approximately the Bayesian posterior probability of the hypothesis based on a suitably flat Dirichlet process prior over an unknown distribution generating the data. The Bayesian approach and a frequentist approach to testing the one-sided hypothesis are then compared, with results that show a major difference between Bayesian and frequentist reasoning: the Bayesian posterior probability can be substantially smaller than the frequentist p-value. A class of examples is given in which the Bayesian posterior probability is essentially 0 while the frequentist p-value is essentially 1; the Bayesian posterior probability in these examples seems the more reasonable measure. Other drawbacks of the frequentist p-value as a measure of whether the one-sided hypothesis is true are also discussed.

7.
In recent years, there has been considerable interest in regression models based on zero-inflated distributions. These models are commonly encountered in many disciplines, such as medicine, public health, and the environmental sciences. The zero-inflated Poisson (ZIP) model has typically been considered for these types of problems. However, the ZIP model can fail if the non-zero counts are overdispersed relative to the Poisson distribution, in which case the zero-inflated negative binomial (ZINB) model may be more appropriate. In this paper, we present a Bayesian approach for fitting the ZINB regression model. This model assumes that an observed zero may come either from a point mass at zero or from the negative binomial component. The likelihood function is used not only to compute Bayesian model selection measures but also to develop Bayesian case-deletion influence diagnostics based on q-divergence measures. The approach can be easily implemented using standard Bayesian software, such as WinBUGS. The performance of the proposed method is evaluated with a simulation study. Further, a real data set is analyzed, where we show that ZINB regression models seem to fit the data better than their Poisson counterparts.
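The ZINB mixture density the model is built on can be written down in a few lines. This sketch uses the (size r, success probability q) parameterization of the negative binomial, which is an illustrative choice; the paper's regression model links the parameters to covariates:

```python
import math

def nb_pmf(y, r, q):
    """Negative binomial pmf with mean r * (1 - q) / q (integer size r here)."""
    return math.comb(y + r - 1, y) * q**r * (1 - q) ** y

def zinb_pmf(y, pi, r, q):
    """ZINB: extra point mass at zero with probability pi, else NegBin(r, q)."""
    base = nb_pmf(y, r, q)
    return pi + (1 - pi) * base if y == 0 else (1 - pi) * base
```

Note how an observed zero has two sources: the structural point mass (probability pi) and an ordinary negative binomial zero, exactly the decomposition the influence diagnostics exploit.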

8.
孟生旺, 杨亮, 《统计研究》(Statistical Research), 2015, 32(11): 97-103
Claim frequency prediction is a key component of ratemaking in non-life insurance. The most commonly used claim frequency models are Poisson regression and negative binomial regression, together with their zero-inflated counterparts. However, when the observed claim counts are both zero-inflated and dependent within groups, none of these models fits real data well. This paper therefore develops random-effects zero-inflated claim count regression models under the Poisson, negative binomial, generalized Poisson, and NB-P (type-P negative binomial) distributions. To improve predictive performance, quadratic smoothing terms are introduced for continuous explanatory variables, and the proportion of structural zeros is itself regressed on the explanatory variables. An empirical analysis of a real set of claim count data shows that the proposed models significantly improve on the fit of existing models.

9.
Clinical trials often use paired binomial data as their clinical endpoint. The confidence interval is frequently used to estimate the treatment performance. Tang et al. (2009) proposed exact and approximate unconditional methods for constructing a confidence interval in the presence of incomplete paired binary data. Their approach can be overly conservative, with large expected confidence interval width (ECIW), in some situations. We propose a profile likelihood-based method with a Jeffreys prior correction to construct the confidence interval. This approach generates confidence intervals with much better coverage probability and shorter ECIWs. The performance of the method and its corrections is demonstrated through extensive simulation. Finally, three real-world data sets are analyzed by all the methods. SAS code to execute the profile likelihood-based methods is also presented.

10.
The problem of constructing control charts for fuzzy data has been considered in the literature. The proposed transformation approaches and direct fuzzy approaches each have advantages and disadvantages; representative-values charts based on transformation methods are often recommended in practice. When representing a fuzzy set by a crisp value, the weights of importance of members assigned different membership levels should be considered, and possibility theory can be employed to deal with this problem. In this article, we propose to employ the weighted possibilistic mean (WPM) and weighted interval-valued possibilistic mean (WIVPM) of a fuzzy number as representative values for fuzzy attribute data, and establish new fuzzy control charts based on WPM and WIVPM. The performance of the charts is compared with that of existing fuzzy charts using a fuzzy c-chart example, via a newly defined average number of inspections for variation of the control state.

11.
The Poisson–Lindley distribution is a compound discrete distribution that can be used as an alternative to other discrete distributions, like the negative binomial. This paper develops approximate one-sided and equal-tailed two-sided tolerance intervals for the Poisson–Lindley distribution. Practical applications of the Poisson–Lindley distribution frequently involve large samples, thus we utilize large-sample Wald confidence intervals in the construction of our tolerance intervals. A coverage study is presented to demonstrate the efficacy of the proposed tolerance intervals. The tolerance intervals are also demonstrated using two real data sets. The R code developed for our discussion is briefly highlighted and included in the tolerance package.
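The Poisson–Lindley pmf has the closed form P(X = k) = θ²(θ + 2 + k)/(θ + 1)^(k+3) for k = 0, 1, 2, …, with mean (θ + 2)/(θ(θ + 1)). A quick numerical check of those two standard facts (our sketch; the paper itself works in R):

```python
def poisson_lindley_pmf(k, theta):
    """P(X = k) = theta^2 * (theta + 2 + k) / (theta + 1)^(k + 3)."""
    return theta**2 * (theta + 2 + k) / (theta + 1) ** (k + 3)

# For theta = 1 the mass sums to 1 and the mean is (1 + 2) / (1 * 2) = 1.5.
total = sum(poisson_lindley_pmf(k, 1.0) for k in range(400))
mean = sum(k * poisson_lindley_pmf(k, 1.0) for k in range(400))
```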

12.
Inference concerning the negative binomial dispersion parameter, denoted by c, is important in many biological and biomedical investigations. Properties of the maximum-likelihood estimator of c and its bias-corrected version have been studied extensively, mainly in terms of bias and efficiency [W.W. Piegorsch, Maximum likelihood estimation for the negative binomial dispersion parameter, Biometrics 46 (1990), pp. 863–867; S.J. Clark and J.N. Perry, Estimation of the negative binomial parameter κ by maximum quasi-likelihood, Biometrics 45 (1989), pp. 309–316; K.K. Saha and S.R. Paul, Bias corrected maximum likelihood estimator of the negative binomial dispersion parameter, Biometrics 61 (2005), pp. 179–185]. However, not much work has been done on the construction of confidence intervals (C.I.s) for c. The purpose of this paper is to study the behaviour of some C.I. procedures for c. We study, by simulation, three Wald-type C.I. procedures based on the asymptotic distributions of the method of moments estimate (mme), the maximum-likelihood estimate (mle) and the bias-corrected mle (bcmle) [Saha and Paul, 2005] of c. All three methods show serious under-coverage. We further study parametric bootstrap procedures based on these estimates of c, which significantly improve the coverage probabilities. The bootstrap C.I.s based on the mle (Boot-MLE method) and the bcmle (Boot-BCM method) have coverage significantly better (empirical coverage close to nominal) than the corresponding bootstrap C.I. based on the mme, especially for small sample sizes and highly over-dispersed data. However, simulation results on the lengths of the C.I.s show that all three bootstrap procedures have larger average lengths. Therefore, for practical data analysis, the Boot-MLE or Boot-BCM bootstrap C.I. should be used; Boot-MLE seems preferable to Boot-BCM in terms of both coverage and length, and it also needs less computation.
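The parametric-bootstrap recipe can be sketched compactly. To keep the code short, this sketch plugs in the method-of-moments estimate of c (variance = m + c·m², so ĉ = (s² − x̄)/x̄²) where the paper's Boot-MLE/Boot-BCM variants would use the (bias-corrected) MLE; the seed, clipping constant, and bootstrap size are illustrative:

```python
import math
import random
import statistics

def mme_dispersion(data):
    """Method-of-moments estimate of c in Var = m + c*m^2, clipped near 0."""
    m = statistics.mean(data)
    v = statistics.variance(data)
    if m <= 0:
        return 1e-8
    return max((v - m) / m**2, 1e-8)

def nb_sample(m, c, rng):
    """Negative binomial draw via the Gamma-Poisson mixture (shape 1/c, mean m)."""
    lam = rng.gammavariate(1.0 / c, c * m)
    k, p, target = 0, rng.random(), math.exp(-lam)
    while p > target:                 # Knuth's Poisson sampler
        k += 1
        p *= rng.random()
    return k

def bootstrap_ci(data, n_boot=200, alpha=0.05, seed=7):
    """Percentile parametric-bootstrap C.I. for the dispersion parameter c."""
    rng = random.Random(seed)
    m, c = statistics.mean(data), mme_dispersion(data)
    reps = sorted(
        mme_dispersion([nb_sample(m, c, rng) for _ in data])
        for _ in range(n_boot)
    )
    return reps[int(n_boot * alpha / 2)], reps[int(n_boot * (1 - alpha / 2)) - 1]
```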

13.
We specify three classes of one-sided and two-sided 1 − α confidence intervals with certain monotonicity and symmetry conditions on the confidence limits for the probability of success, the parameter of a binomial distribution. For each class of one-sided confidence intervals, the smallest interval, in the sense of set inclusion, is obtained based on direct analysis of the coverage probability functions. A simple necessary and sufficient condition for the existence of the smallest two-sided confidence interval is provided, and the smallest interval is derived when it exists. The proposed intervals are thus uniformly most accurate, and have uniformly minimum expected length as well.

14.
In this article, we develop four explicit asymptotic two-sided confidence intervals for the difference between two Poisson rates via a hybrid method. The basic idea is to estimate or recover the variances of the two Poisson rate estimates, which are required for constructing the confidence interval for the rate difference, from the confidence limits for the two individual rates. The basic building blocks of the approach are therefore reliable confidence limits for the individual Poisson rates. Four confidence interval estimators that have explicit solutions and good coverage levels are employed: the normal interval with continuity correction, and the Rao score, Freeman–Tukey, and Jeffreys intervals. Using simulation studies, we examine the performance of the four hybrid confidence intervals and compare them with three existing confidence intervals: the non-informative prior Bayes interval, the t interval based on Satterthwaite's degrees of freedom, and the Bayes interval based on Student's t confidence coefficient. Simulation results show that the hybrid Freeman–Tukey and hybrid Jeffreys confidence intervals can be highly recommended because they outperform the others in terms of coverage probabilities and widths; the other methods tend to be too conservative and produce wider intervals. The application of these confidence intervals is illustrated with three real data sets.
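The recover-the-variance idea is the familiar MOVER-style square-and-add construction. This sketch (assuming SciPy) shows it with Jeffreys limits for each rate; it is our rendering of the general recipe, not the paper's exact estimators, and the counts/exposures are illustrative inputs:

```python
from scipy.stats import gamma

def jeffreys_poisson_ci(x, t, alpha=0.05):
    """Jeffreys limits for a Poisson rate: Gamma(x + 1/2) quantiles / exposure t."""
    lo = gamma.ppf(alpha / 2, x + 0.5) / t
    hi = gamma.ppf(1 - alpha / 2, x + 0.5) / t
    return lo, hi

def hybrid_diff_ci(x1, t1, x2, t2, alpha=0.05):
    """Hybrid interval for rate1 - rate2, recovering variances from single-rate limits."""
    r1, r2 = x1 / t1, x2 / t2
    l1, u1 = jeffreys_poisson_ci(x1, t1, alpha)
    l2, u2 = jeffreys_poisson_ci(x2, t2, alpha)
    d = r1 - r2
    lo = d - ((r1 - l1) ** 2 + (u2 - r2) ** 2) ** 0.5
    hi = d + ((u1 - r1) ** 2 + (r2 - l2) ** 2) ** 0.5
    return lo, hi
```

Swapping `jeffreys_poisson_ci` for any other explicit single-rate interval yields the other hybrid variants the paper studies.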

15.
Despite the simplicity of the Bernoulli process, developing good confidence interval procedures for its parameter, the probability of success p, is deceptively difficult. The binary data yield a discrete number of successes from a discrete number of trials, n. This discreteness results in actual coverage probabilities that oscillate with n for fixed values of p (and with p for fixed n). Moreover, this oscillation necessitates a large sample size to guarantee good coverage probability when p is close to 0 or 1.

It is well known that the Wilson procedure is superior to many existing procedures because it is less sensitive to p than other procedures, and is therefore less costly. The procedures proposed in this article work as well as the Wilson procedure when 0.1 ≤ p ≤ 0.9, and are even less sensitive (i.e., more robust) than the Wilson procedure when p is close to 0 or 1. Specifically, when the nominal coverage probability is 0.95, the Wilson procedure requires a sample size of 1021 to guarantee that the coverage probabilities stay above 0.92 for any 0.001 ≤ min{p, 1 − p} < 0.01. By contrast, our procedures guarantee the same coverage probabilities with a sample size of only 177, without increasing either the expected interval width or the standard deviation of the interval width.
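The coverage oscillation described above is easy to compute exactly for any interval procedure: sum the Binomial(n, p) mass over the outcomes x whose interval covers p. A self-contained sketch using the Wilson interval (varying p at fixed n, or n at fixed p, traces out the oscillation):

```python
import math

def wilson(x, n, z=1.96):
    """Wilson score interval for p given x successes in n trials."""
    c = (x + z**2 / 2) / (n + z**2)
    h = (z / (n + z**2)) * math.sqrt(x * (n - x) / n + z**2 / 4)
    return c - h, c + h

def coverage(p, n, z=1.96):
    """Exact coverage probability: Binomial(n, p) mass on covering outcomes."""
    total = 0.0
    for x in range(n + 1):
        lo, hi = wilson(x, n, z)
        if lo <= p <= hi:
            total += math.comb(n, x) * p**x * (1 - p) ** (n - x)
    return total
```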

16.
We investigate three interval estimators for binomial misclassification rates in a complementary Poisson model where the data may be misclassified: a Wald-based interval, a score-based interval, and an interval based on the profile log-likelihood statistic. We investigate the coverage and average width properties of these intervals via a simulation study. For small Poisson counts and small misclassification rates, the intervals can perform poorly in terms of coverage. The profile log-likelihood confidence interval (CI) generally outperforms the other intervals, with good coverage and width properties. Lastly, we apply the CIs to a real data set involving traffic accident data that contain misclassified counts.

17.
We develop quality control charts for attributes using the maxima nomination sampling (MNS) method and compare them with the usual control charts based on simple random sampling (SRS), using average run length (ARL) performance, the sample size required to detect quality improvement, and the non-existence region for control limits. We study the effects of the sample size, the set size, and the nonconformity proportion on the performance of MNS control charts using ARL curves. We show that the MNS control chart can serve as a better benchmark for indicating quality improvement or deterioration than its SRS counterpart. We consider MNS charts from a cost perspective, and we also develop MNS attribute control charts using randomized tests. A computer program is designed to determine the optimal control limits for an MNS p-chart such that, assuming known parameter values, the absolute deviation between the ARL and a specific nominal value is minimized. We provide good approximations for the optimal MNS control limits using regression analysis. Theoretical results are augmented with numerical evaluations, which show that MNS-based control charts can yield substantial improvement over the usual control charts based on SRS.

18.
Control charts for counted data are commonly designed assuming that counts follow Poisson dynamics. However, in various real situations, the true underlying dynamics of the events are more properly modelled by a negative binomial process. This paper examines the consequences of the Poisson approximation to negative binomial dynamics for counts under CUSUM-type schemes. It is essentially found that, on setting up Poisson dynamics for an underlying negative binomial data structure, the real in-control average run length decreases, whereas the sensitivity of the chart is less affected. These results warn against the routine use of the Poisson assumption in planning control charts for counts.
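The phenomenon can be reproduced in a few lines: run the same upper CUSUM, S_t = max(0, S_{t−1} + X_t − k), on Poisson counts and on overdispersed negative binomial counts with the same mean, and compare the simulated in-control run lengths. The reference value k, limit h, and dispersion below are illustrative, not the paper's settings:

```python
import math
import random

def nb_count(mean, disp, rng):
    """NegBin via Gamma-Poisson mixture (Var = mean + disp*mean^2); disp = 0 -> Poisson."""
    lam = rng.gammavariate(1.0 / disp, disp * mean) if disp > 0 else mean
    k, p, target = 0, rng.random(), math.exp(-lam)
    while p > target:                 # Knuth's Poisson sampler
        k += 1
        p *= rng.random()
    return k

def run_length(mean, disp, k_ref, h, rng):
    """Steps until the CUSUM S_t = max(0, S_{t-1} + X_t - k_ref) exceeds h."""
    s, t = 0.0, 0
    while s <= h:
        t += 1
        s = max(0.0, s + nb_count(mean, disp, rng) - k_ref)
    return t

rng = random.Random(11)
arl_pois = sum(run_length(4.0, 0.0, 5.0, 4.0, rng) for _ in range(200)) / 200
arl_nb = sum(run_length(4.0, 0.25, 5.0, 4.0, rng) for _ in range(200)) / 200
```

With the heavier-tailed negative binomial counts, the simulated in-control ARL drops well below the Poisson-design value, which is the paper's warning in miniature.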

19.
ABSTRACT

In this paper, we consider the problem of constructing nonparametric confidence intervals for the mean of a positively skewed distribution. We suggest calibrated, smoothed bootstrap upper and lower percentile confidence intervals. On the theoretical side, we show that the proposed one-sided confidence intervals have coverage probability α + O(n^(-3/2)). This is an improvement over traditional bootstrap confidence intervals in terms of coverage probability. A smoothed version of the approach is also considered for constructing a two-sided confidence interval, and its theoretical properties are studied as well. A simulation study illustrates the performance of our confidence interval methods, which we then apply to a real data set.
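The smoothing step is the distinctive ingredient: each bootstrap resample is perturbed with a small Gaussian kernel before the statistic is recomputed. This sketch shows an uncalibrated percentile version only; the paper's calibration step is omitted, and the bandwidth rule is an illustrative choice:

```python
import random
import statistics

def smoothed_boot_ci(data, n_boot=2000, alpha=0.05, seed=5):
    """Smoothed bootstrap percentile interval for the mean (no calibration)."""
    rng = random.Random(seed)
    n = len(data)
    h = statistics.stdev(data) * n ** (-1 / 5)   # rough kernel bandwidth
    means = sorted(
        statistics.mean(rng.choice(data) + rng.gauss(0.0, h) for _ in range(n))
        for _ in range(n_boot)
    )
    return means[int(n_boot * alpha / 2)], means[int(n_boot * (1 - alpha / 2)) - 1]
```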

20.
Processes of serially dependent Poisson counts are commonly observed in real-world applications and can often be modeled by the first-order integer-valued autoregressive (INAR) model. For detecting positive shifts in the mean of a Poisson INAR(1) process, we propose the one-sided s-EWMA (exponentially weighted moving average) control chart, which is based on a new type of rounding operation. The s-EWMA chart allows average run lengths (ARLs) to be computed exactly and efficiently with a Markov chain approach. Using an implementation of this procedure for ARL computation, the s-EWMA chart is easily designed, which is demonstrated with a real-data example. Based on an extensive study of ARLs, the out-of-control performance of the chart is analyzed and compared with that of a c chart and a one-sided cumulative sum (CUSUM) chart. We also investigate the robustness of the chart against departures from the assumed Poisson marginal distribution.
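The two ingredients, a Poisson INAR(1) path (binomial thinning plus Poisson innovations) and an EWMA whose statistic is rounded back to a lattice so that a finite Markov chain can describe it, can be sketched as follows. The naive round-to-nearest-step rule here stands in for the paper's more refined s-rounding, and all constants are illustrative:

```python
import math
import random

def poisson(lam, rng):
    """Knuth's Poisson sampler; fine for moderate lam."""
    k, p, target = 0, rng.random(), math.exp(-lam)
    while p > target:
        k += 1
        p *= rng.random()
    return k

def inar1_path(alpha, mu, length, rng):
    """X_t = alpha . X_{t-1} + eps_t: binomial thinning + Poisson(mu*(1-alpha))."""
    x, path = poisson(mu, rng), []
    for _ in range(length):
        thinned = sum(rng.random() < alpha for _ in range(x))  # binomial thinning
        x = thinned + poisson(mu * (1 - alpha), rng)
        path.append(x)
    return path

def rounded_ewma(path, lam=0.2, step=0.25):
    """EWMA rounded back onto a step-lattice after each update."""
    z, out = float(path[0]), []
    for x in path:
        z = round((lam * x + (1 - lam) * z) / step) * step
        out.append(z)
    return out
```

Because the rounded statistic lives on a finite lattice, its evolution is a Markov chain, which is what makes the exact ARL computation mentioned above tractable.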


Copyright©北京勤云科技发展有限公司  京ICP备09084417号