Similar Documents
20 similar documents were retrieved.
1.
For a normal model with a conjugate prior, we provide an in-depth examination of the effects of the hyperparameters on the long-run frequentist properties of posterior point and interval estimates. Under an assumed sampling model for the data-generating mechanism, we examine how hyperparameter values affect the mean-squared error (MSE) of posterior means and the true coverage of credible intervals. We develop two types of hyperparameter optimality. MSE-optimal hyperparameters minimize the MSE of posterior point estimates. Credible-interval-optimal hyperparameters result in credible intervals of minimum length that still retain nominal coverage. A poor choice of hyperparameters has more severe consequences for credible-interval coverage than for the MSE of posterior point estimates. We give an example to demonstrate how our results can be used to evaluate the potential consequences of hyperparameter choices.
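A minimal simulation sketch of this kind of frequentist evaluation, for a normal mean with known sampling variance and a conjugate normal prior; the hyperparameter values, true mean, and sample size below are illustrative assumptions, not the settings studied in the paper.

```python
import numpy as np
from scipy import stats

def posterior(xbar, n, sigma2, mu0, tau2):
    """Conjugate normal-normal posterior mean and variance for the mean."""
    prec = 1.0 / tau2 + n / sigma2
    mean = (mu0 / tau2 + n * xbar / sigma2) / prec
    return mean, 1.0 / prec

def long_run_properties(theta_true, n, sigma2, mu0, tau2,
                        level=0.95, reps=20000, seed=0):
    """MSE of the posterior mean and true coverage of the central credible
    interval under repeated sampling from N(theta_true, sigma2)."""
    rng = np.random.default_rng(seed)
    z = stats.norm.ppf(0.5 + level / 2)
    xbar = rng.normal(theta_true, np.sqrt(sigma2 / n), size=reps)
    m, v = posterior(xbar, n, sigma2, mu0, tau2)
    mse = np.mean((m - theta_true) ** 2)
    half = z * np.sqrt(v)
    cover = np.mean((m - half <= theta_true) & (theta_true <= m + half))
    return mse, cover

# Illustrative settings: prior centred away from the true mean.
for tau2 in (0.1, 1.0, 10.0):
    mse, cover = long_run_properties(theta_true=1.0, n=10, sigma2=1.0,
                                     mu0=0.0, tau2=tau2)
    print(f"tau2={tau2:5.1f}  MSE={mse:.4f}  coverage={cover:.3f}")
```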

2.
We investigate the sample size problem when a binomial parameter is to be estimated, but some degree of misclassification is possible. The problem is especially challenging when the degree to which misclassification occurs is not exactly known. Motivated by a Canadian survey of the prevalence of toxoplasmosis infection in pregnant women, we examine the situation where it is desired that a marginal posterior credible interval for the prevalence of width w has coverage 1−α, using a Bayesian sample size criterion. The degree to which the misclassification probabilities are known a priori can have a very large effect on sample size requirements, and in some cases achieving a coverage of 1−α is impossible, even with an infinite sample size. Therefore, investigators must carefully evaluate the degree to which misclassification can occur when estimating sample size requirements.
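To make the role of misclassification concrete, here is a hedged sketch in which the sensitivity and specificity are treated as exactly known, so an apparent positive occurs with probability pi*Se + (1 − pi)*(1 − Sp) and the posterior for the prevalence pi can be computed on a grid. The Se/Sp values, prior, and data below are illustrative assumptions rather than the survey's figures.

```python
import numpy as np
from scipy import stats

def prevalence_posterior(y, n, se, sp, a=1.0, b=1.0, grid_size=2001):
    """Grid posterior for the prevalence pi when the test has known sensitivity
    se and specificity sp; y apparent positives out of n, Beta(a, b) prior."""
    grid = np.linspace(0.0, 1.0, grid_size)
    p_apparent = grid * se + (1.0 - grid) * (1.0 - sp)
    log_post = stats.beta.logpdf(grid, a, b) + stats.binom.logpmf(y, n, p_apparent)
    post = np.exp(log_post - log_post.max())
    return grid, post / post.sum()

def central_interval(grid, post, level=0.95):
    """Equal-tailed credible interval from a discretised posterior."""
    cdf = np.cumsum(post)
    lo = grid[np.searchsorted(cdf, (1 - level) / 2)]
    hi = grid[np.searchsorted(cdf, 1 - (1 - level) / 2)]
    return lo, hi

# Illustrative data: 30 apparent positives out of 500, Se = 0.90, Sp = 0.95.
grid, post = prevalence_posterior(y=30, n=500, se=0.90, sp=0.95)
lo, hi = central_interval(grid, post)
print(f"95% credible interval for the prevalence: ({lo:.3f}, {hi:.3f}), width {hi - lo:.3f}")
```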

3.
There exist various methods for providing confidence intervals for unknown parameters of interest on the basis of a random sample. Generally, the bounds are derived from a system of non-linear equations. In this article, we present a general solution for obtaining an unbiased confidence interval with confidence coefficient 1 − α in one-parameter exponential families. We also discuss two Bayesian credible intervals, the highest posterior density (HPD) and relative surprise (RS) credible intervals. Standard criteria such as interval length and coverage probability are used to assess the performance of the HPD and RS credible intervals. Simulation studies and real data applications are presented for illustrative purposes.

4.
The poor performance of the Wald method for constructing confidence intervals (CIs) for a binomial proportion has been demonstrated in a vast literature. The related problem of sample size determination needs to be updated, and comparative studies are essential to understanding the performance of alternative methods. In this paper, the sample size is obtained for the Clopper–Pearson, Bayesian (Uniform and Jeffreys priors), Wilson, Agresti–Coull, Anscombe, and Wald methods. Two two-step procedures are used: one based on the expected length (EL) of the CI and another on its first-order approximation. In the first step, all possible solutions that satisfy the optimal criterion are obtained. In the second step, a single solution is proposed according to a new criterion (e.g. highest coverage probability (CP)). In practice, some reduction of the planned sample size is to be expected, so we also explore the behavior of the methods when 30% and 50% of observations are lost. For all the methods, the ELs are inflated, as expected, but the coverage probabilities remain close to the original target (with few exceptions). It is not easy to suggest a method that is optimal throughout the range (0, 1) for p. Different recommendations are made depending on whether the goal is to achieve a CP approximately equal to, or above, the nominal level.
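The expected-length criterion can be illustrated directly. The sketch below computes the exact expected length of the Wilson and Clopper–Pearson intervals for a given p by summing over the binomial distribution of the count, and searches for the smallest n meeting a target length; the target, the value of p, and the simple linear search are illustrative assumptions, and the paper's two-step procedure and loss-adjusted variants are not reproduced.

```python
import numpy as np
from scipy import stats

def wilson_expected_length(n, p, level=0.95):
    """Exact expected length of the Wilson interval when X ~ Bin(n, p)."""
    z = stats.norm.ppf(0.5 + level / 2)
    x = np.arange(n + 1)
    phat = x / n
    half = z * np.sqrt(phat * (1 - phat) / n + z**2 / (4 * n**2)) / (1 + z**2 / n)
    return float(np.sum(stats.binom.pmf(x, n, p) * 2 * half))

def clopper_pearson_expected_length(n, p, level=0.95):
    """Exact expected length of the Clopper-Pearson interval when X ~ Bin(n, p)."""
    alpha = 1 - level
    x = np.arange(n + 1)
    lo = np.zeros(n + 1)
    hi = np.ones(n + 1)
    lo[1:] = stats.beta.ppf(alpha / 2, x[1:], n - x[1:] + 1)
    hi[:-1] = stats.beta.ppf(1 - alpha / 2, x[:-1] + 1, n - x[:-1])
    return float(np.sum(stats.binom.pmf(x, n, p) * (hi - lo)))

def smallest_n(expected_length, p, target, level=0.95, n_max=2000):
    """Smallest n whose expected CI length does not exceed the target."""
    for n in range(5, n_max + 1):
        if expected_length(n, p, level) <= target:
            return n
    return None

# Illustrative: expected length at most 0.10 for a 95% interval when p = 0.3.
print("Wilson         :", smallest_n(wilson_expected_length, 0.3, 0.10))
print("Clopper-Pearson:", smallest_n(clopper_pearson_expected_length, 0.3, 0.10))
```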

5.
Bayesian sample size estimation for equivalence and non-inferiority tests for diagnostic methods is considered. The goal of the study is to test whether a new screening test of interest is equivalent to, or non-inferior to, the reference test, which may or may not be a gold standard. Sample sizes are chosen by the model performance criteria of average posterior variance, length, and coverage probability. In the absence of a gold standard, sample sizes are evaluated by the ratio of marginal probabilities of the two screening tests; in the presence of a gold standard, sample sizes are evaluated by the measures of sensitivity and specificity.

6.
The problem of sample size determination in the context of Bayesian analysis is considered. For the familiar and practically important parameter of a geometric distribution with a beta prior, three different Bayesian approaches based on the highest posterior density intervals are discussed. A computer program handles all computational complexities and is available upon request.
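A rough sketch of one HPD-based criterion of this type: for geometric data (failures before the first success) with a Beta prior, the posterior is again a Beta distribution, and the average HPD length over the prior predictive can be tracked as a function of n. The prior, target scale, and grid of candidate n below are illustrative assumptions, not the three approaches of the paper.

```python
import numpy as np
from scipy import stats, optimize

def beta_hpd(a, b, level=0.95):
    """HPD interval of a Beta(a, b) distribution (assumes a unimodal density,
    i.e. a > 1 and b > 1)."""
    def width(lo_prob):
        return stats.beta.ppf(lo_prob + level, a, b) - stats.beta.ppf(lo_prob, a, b)
    res = optimize.minimize_scalar(width, bounds=(0.0, 1.0 - level), method="bounded")
    return stats.beta.ppf(res.x, a, b), stats.beta.ppf(res.x + level, a, b)

def average_hpd_length(n, a, b, level=0.95, reps=500, seed=1):
    """Average HPD length over the prior predictive: p ~ Beta(a, b), then n
    geometric observations (failures before the first success)."""
    rng = np.random.default_rng(seed)
    lengths = []
    for _ in range(reps):
        p = rng.beta(a, b)
        x = rng.geometric(p, size=n) - 1     # numpy counts trials, so subtract 1
        lo, hi = beta_hpd(a + n, b + x.sum(), level)
        lengths.append(hi - lo)
    return float(np.mean(lengths))

# Illustrative: average HPD length under a Beta(2, 2) prior for a few n values.
for n in (10, 20, 40, 80):
    print(f"n={n:3d}  average HPD length={average_hpd_length(n, 2.0, 2.0):.3f}")
```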

7.
The optimal sample size for comparing two Poisson rates when the counts are underreported is investigated. We consider two sampling scenarios. We first consider the case where only underreported data will be sampled and rely on informative prior distributions to obtain posterior identifiability. We also consider the case where an expensive infallible search method and a fallible method are available. An interval-based sample size criterion is used in both sampling scenarios. Since the posterior distributions of the two rates are functions of confluent hypergeometric and hypergeometric functions, simulation-based methods are necessary to perform the sample size determination scheme.

8.
The paper develops some objective priors for the correlation coefficient of the bivariate normal distribution. The criterion used is the asymptotic matching of coverage probabilities of Bayesian credible intervals with the corresponding frequentist coverage probabilities. The paper uses various matching criteria, namely quantile matching, highest posterior density matching, and matching via inversion of test statistics. Each matching criterion leads to a different prior for the parameter of interest. We evaluate their performance by comparing credible intervals through simulation studies. In addition, inference through several likelihood-based methods is discussed.

9.
In this paper, we develop a matching prior for the product of means in several normal distributions with unrestricted means and unknown variances. Properly assigning priors for the product of normal means has been an issue in this problem because of the presence of nuisance parameters. Matching priors, which are priors matching the posterior probabilities of certain regions with their frequentist coverage probabilities, are commonly used but difficult to derive in this problem. We developed first-order probability matching priors for this problem; however, these matching priors are improper. Thus, we apply an alternative method and derive a matching prior based on a modification of the profile likelihood. Simulation studies show that the derived matching prior performs better than the uniform prior and Jeffreys' prior in meeting the target coverage probabilities, and that it meets the target coverage probabilities well even for small sample sizes. In addition, to evaluate the validity of the proposed matching prior, the Bayesian credible interval for the product of normal means using the matching prior is compared to the Bayesian credible intervals using the uniform prior and Jeffreys' prior, and to the confidence interval using the method of Yfantis and Flatman.

10.
Standard multivariate control charts usually employ fixed sample sizes at equal sampling intervals to monitor a process. In this study, a multivariate exponentially weighted moving average (MEWMA) chart with adaptive sample sizes is investigated. A performance measure of the adaptive-sample-size MEWMA chart is obtained through a Markov chain approach. The performance of the adaptive-sample-size MEWMA chart is compared with the fixed-sample-size control chart in terms of steady-state average run length for different magnitudes of shift in the process mean. It is shown that the adaptive-sample-size chart is more efficient than the fixed-sample-size MEWMA control chart in detecting shifts in the process mean.
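As a point of reference, the sketch below simulates the run length of an ordinary fixed-sample-size MEWMA chart (not the adaptive-sample-size scheme, and not the Markov-chain calculation used in the paper); the smoothing constant, control limit, and shift sizes are illustrative placeholders.

```python
import numpy as np

def mewma_run_length(shift, lam=0.1, h=8.79, p=2, rng=None, max_steps=100_000):
    """Run length of a fixed-sample-size MEWMA chart for p-variate N(shift, I)
    data, using the asymptotic covariance lam / (2 - lam) * I of the MEWMA
    statistic. The control limit h = 8.79 is only a placeholder, not a
    calibrated value."""
    rng = rng if rng is not None else np.random.default_rng()
    z = np.zeros(p)
    scale = lam / (2.0 - lam)                  # asymptotic Var(Z) = scale * I
    for t in range(1, max_steps + 1):
        x = rng.normal(loc=shift, scale=1.0, size=p)
        z = lam * x + (1.0 - lam) * z
        t2 = z @ z / scale                     # Z' Sigma_Z^{-1} Z with Sigma = I
        if t2 > h:
            return t
    return max_steps

def average_run_length(shift, reps=2000, seed=7, **kwargs):
    rng = np.random.default_rng(seed)
    return np.mean([mewma_run_length(shift, rng=rng, **kwargs) for _ in range(reps)])

# Illustrative: in-control (zero shift) versus a shift of size 1 in one mean.
print("ARL, no shift :", average_run_length(np.zeros(2)))
print("ARL, shift 1.0:", average_run_length(np.array([1.0, 0.0])))
```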

11.
A simulation study was done to compare seven confidence interval methods, based on the normal approximation, for the difference of two binomial probabilities. Cases considered included minimum expected cell sizes ranging from 2 to 15 and smallest group sizes (NMIN) ranging from 6 to 100. Our recommendation is to use a continuity correction of 1/(2 NMIN) combined with the use of (N − 1) rather than N in the estimate of the standard error. For all of the cases considered with minimum expected cell size of at least 3, this method gave coverage probabilities close to or greater than the nominal 90% and 95%. The Yates method is also acceptable, but it is slightly more conservative. At the other extreme, the usual method (with no continuity correction) does not provide adequate coverage even at the larger sample sizes. For the 99% intervals, our recommended method and the Yates correction performed equally well and are reasonable for minimum expected cell sizes of at least 5. None of the methods performed consistently well for a minimum expected cell size of 2.
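A sketch of the recommended interval as described above: Wald-type limits with (N − 1) in each variance term plus a continuity correction of 1/(2 NMIN), together with a small coverage check; the specific proportions and group sizes below are illustrative.

```python
import numpy as np
from scipy import stats

def diff_ci(x1, n1, x2, n2, level=0.95):
    """Normal-approximation CI for p1 - p2 using (N - 1) in each variance term
    and a continuity correction of 1 / (2 * NMIN), as described above."""
    z = stats.norm.ppf(0.5 + level / 2)
    p1, p2 = x1 / n1, x2 / n2
    se = np.sqrt(p1 * (1 - p1) / (n1 - 1) + p2 * (1 - p2) / (n2 - 1))
    cc = 1.0 / (2 * min(n1, n2))
    d = p1 - p2
    return d - (z * se + cc), d + (z * se + cc)

def coverage(p1, p2, n1, n2, level=0.95, reps=20000, seed=3):
    """Simulated coverage of the interval at a fixed pair of true proportions."""
    rng = np.random.default_rng(seed)
    x1 = rng.binomial(n1, p1, size=reps)
    x2 = rng.binomial(n2, p2, size=reps)
    hits = 0
    for a, b in zip(x1, x2):
        lo, hi = diff_ci(a, n1, b, n2, level)
        hits += lo <= p1 - p2 <= hi
    return hits / reps

# Illustrative small-sample configuration.
print("coverage:", coverage(p1=0.3, p2=0.5, n1=20, n2=30))
```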

12.
Cochran's rule for the minimum sample size to ensure adequate coverage of nominal 95% confidence intervals is derived by using the Edgeworth expansion for the distribution function of the standardized sample mean. The rule is extended to confidence intervals based on the Studentized sample mean. The performance of the rule and of the Edgeworth approximations for smaller sample sizes is examined by simulation.
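One common statement of Cochran's rule is n ≥ 25·G1², where G1 is the population skewness; treating that form as an assumption, the sketch below applies it to an exponential population and checks the coverage of the Studentized-mean interval by simulation.

```python
import numpy as np
from scipy import stats

def cochran_n(skewness):
    """Sample size from the rule n >= 25 * skewness**2 (treated here as an
    assumption about the form of Cochran's rule)."""
    return int(np.ceil(25 * skewness**2))

def t_interval_coverage(sampler, true_mean, n, level=0.95, reps=20000, seed=11):
    """Simulated coverage of the Studentized-mean interval for samples of size n."""
    rng = np.random.default_rng(seed)
    tcrit = stats.t.ppf(0.5 + level / 2, df=n - 1)
    hits = 0
    for _ in range(reps):
        x = sampler(rng, n)
        half = tcrit * x.std(ddof=1) / np.sqrt(n)
        hits += abs(x.mean() - true_mean) <= half
    return hits / reps

# Exponential(1) population: skewness 2, so the rule suggests n >= 100.
n_rule = cochran_n(2.0)
sampler = lambda rng, n: rng.exponential(1.0, size=n)
print("rule n:", n_rule)
for n in (10, 30, n_rule):
    print(f"n={n:4d}  coverage={t_interval_coverage(sampler, 1.0, n):.3f}")
```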

13.
Noninformative priors are used for estimating the reliability of a stress-strength system. Several reference priors (cf. Berger and Bernardo 1989, 1992) are derived. A class of priors is found by matching the coverage probabilities of one-sided Bayesian credible intervals with the corresponding frequentist coverage probabilities. It turns out that none of the reference priors is a matching prior. Sufficient conditions for propriety of posteriors under reference priors and matching priors are provided. A simple matching prior is compared with three reference priors when sample sizes are small. The study shows that the matching prior performs better than Jeffreys's prior and reference priors in meeting the target coverage probabilities.

14.
For the unbalanced one-way random effects model with heterogeneous error variances, we propose noninformative priors for the between-group variance and develop first- and second-order matching priors. It turns out that second-order matching priors do not exist, and that the reference prior and Jeffreys prior do not satisfy the first-order matching criterion. We also show, through a simulation study, that the first-order matching prior meets the frequentist target coverage probabilities much better than the Jeffreys prior and the reference prior, and examples show that the Bayesian credible intervals based on the matching prior and the reference prior are shorter than the existing confidence intervals.

15.
We investigate the exact coverage and expected length properties of the model averaged tail area (MATA) confidence interval proposed by Turek and Fletcher (CSDA, 2012) in the context of two nested normal linear regression models. The simpler model is obtained by applying a single linear constraint on the regression parameter vector of the full model. For a given length of the response vector and nominal coverage of the MATA confidence interval, we consider all possible models of this type and all possible true parameter values, together with a wide class of design matrices and parameters of interest. Our results show that, while not ideal, MATA confidence intervals perform surprisingly well in our regression scenario, provided that the minimum weight in the class of weights we consider is placed on the simpler model.

16.
Motivated by a study comparing the sensitivities and specificities of two diagnostic tests in a paired design when the sample size is small, we first derive an Edgeworth expansion for the studentized difference between two binomial proportions of paired data. The Edgeworth expansion helps explain why the usual Wald interval for the difference has poor coverage performance at small sample sizes. Based on the Edgeworth expansion, we then derive a transformation-based confidence interval for the difference. The new interval removes the skewness in the Edgeworth expansion; it is easy to compute, and its coverage probability converges to the nominal level at a rate of O(n^{-1/2}). Numerical results indicate that the new interval has an average coverage probability very close to the nominal level even for sample sizes as small as 10, and that it has better average coverage accuracy than the best existing intervals in finite sample sizes.
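For context, here is a sketch of the usual Wald interval for the difference of two paired binomial proportions and a small-sample coverage check; the cell probabilities and number of pairs are illustrative, and the transformation-based interval proposed in the paper is not implemented here.

```python
import numpy as np
from scipy import stats

def paired_wald_ci(n10, n01, n, level=0.95):
    """Wald interval for p1 - p2 from paired binary data: n10 pairs are
    (positive, negative), n01 pairs are (negative, positive), n pairs in total."""
    d = (n10 - n01) / n
    se = np.sqrt(((n10 + n01) / n - d**2) / n)
    z = stats.norm.ppf(0.5 + level / 2)
    return d - z * se, d + z * se

def wald_coverage(p11, p10, p01, n, reps=20000, seed=5):
    """Simulated coverage when pair outcomes are multinomial with the given
    cell probabilities (p00 is the remainder)."""
    rng = np.random.default_rng(seed)
    counts = rng.multinomial(n, [p11, p10, p01, 1 - p11 - p10 - p01], size=reps)
    true_diff = p10 - p01
    hits = 0
    for c in counts:
        lo, hi = paired_wald_ci(c[1], c[2], n)
        hits += lo <= true_diff <= hi
    return hits / reps

# Illustrative configuration with only n = 10 pairs.
print("Wald coverage, n = 10:", wald_coverage(p11=0.3, p10=0.3, p01=0.1, n=10))
```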

17.
In non-normal populations, it is more convenient to use the coefficient of quartile variation rather than the coefficient of variation. This study compares the percentile and t-bootstrap confidence intervals with Bonett's confidence interval for the quartile variation. We show that the empirical coverage of the bootstrap confidence intervals is closer to the nominal coverage (0.95) for small sample sizes (n = 5, 6, 7, 8, 9, 10, and 15) for most of the distributions studied. The bootstrap confidence intervals also have smaller average width. Thus, we propose using bootstrap confidence intervals for the coefficient of quartile variation when the sample size is small.
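A minimal sketch of the percentile bootstrap interval for the coefficient of quartile variation, (Q3 − Q1)/(Q3 + Q1); the lognormal example, sample size, and number of bootstrap resamples are illustrative assumptions.

```python
import numpy as np

def cqv(x):
    """Coefficient of quartile variation: (Q3 - Q1) / (Q3 + Q1)."""
    q1, q3 = np.percentile(x, [25, 75])
    return (q3 - q1) / (q3 + q1)

def percentile_bootstrap_ci(x, stat=cqv, level=0.95, b=2000, seed=13):
    """Percentile bootstrap confidence interval for a statistic."""
    rng = np.random.default_rng(seed)
    boot = np.array([stat(rng.choice(x, size=len(x), replace=True))
                     for _ in range(b)])
    alpha = 1 - level
    return np.percentile(boot, [100 * alpha / 2, 100 * (1 - alpha / 2)])

# Illustrative: a small sample (n = 10) from a skewed (lognormal) population.
rng = np.random.default_rng(0)
x = rng.lognormal(mean=0.0, sigma=1.0, size=10)
print("CQV estimate:", round(cqv(x), 3))
print("95% percentile bootstrap CI:", np.round(percentile_bootstrap_ci(x), 3))
```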

18.
We study Poisson confidence procedures that potentially lead to short confidence intervals, investigating the class of all minimal cardinality procedures. We consider how length minimization should be properly defined, and show that Casella and Robert's (1989) criterion for comparing Poisson confidence procedures leads to a contradiction. We provide an alternative criterion for comparing length performance, identify the unique length optimal minimal cardinality procedure by this criterion, and propose a modification that eliminates an important drawback it possesses. We focus on procedures whose coverage never falls below the nominal level and discuss the case in which the nominal level represents mean coverage.

19.
We formulate Bayesian approaches to the problems of determining the required sample size for Bayesian interval estimators of a predetermined length for a single Poisson rate, for the difference between two Poisson rates, and for the ratio of two Poisson rates. We demonstrate the efficacy of our Bayesian-based sample-size determination method with two real-data quality-control examples and compare the results to frequentist sample-size determination methods.
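A hedged sketch of an average-length criterion for the single-rate case: with a Gamma prior, the posterior is again Gamma, and the smallest n whose average credible-interval length meets a target can be found by simulating from the prior predictive; the prior, target length, and equal-tailed (rather than HPD) interval are illustrative choices, not necessarily those of the paper.

```python
import numpy as np
from scipy import stats

def average_interval_length(n, a, b, level=0.95, reps=5000, seed=17):
    """Average length of the equal-tailed posterior credible interval for a
    Poisson rate under a Gamma(a, b) prior (shape a, rate b), averaging over
    the prior predictive distribution of the data."""
    rng = np.random.default_rng(seed)
    lam = rng.gamma(a, 1.0 / b, size=reps)        # rates drawn from the prior
    s = rng.poisson(n * lam)                      # total count from n observations
    lo = stats.gamma.ppf((1 - level) / 2, a + s, scale=1.0 / (b + n))
    hi = stats.gamma.ppf(1 - (1 - level) / 2, a + s, scale=1.0 / (b + n))
    return float(np.mean(hi - lo))

def required_n(a, b, target, n_max=2000, **kwargs):
    """Smallest n whose average credible-interval length is at most the target."""
    for n in range(1, n_max + 1):
        if average_interval_length(n, a, b, **kwargs) <= target:
            return n
    return None

# Illustrative: Gamma(2, 1) prior and a target interval length of 0.5.
print("required n:", required_n(a=2.0, b=1.0, target=0.5))
```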

20.
The theoretical foundation for a number of model selection criteria is established in the context of inhomogeneous point processes and under various asymptotic settings: infill, increasing domain, and combinations of these. For inhomogeneous Poisson processes we consider Akaike's information criterion and the Bayesian information criterion, and in particular we identify the point process analogue of 'sample size' needed for the Bayesian information criterion. Considering general inhomogeneous point processes we derive new composite likelihood and composite Bayesian information criteria for selecting a regression model for the intensity function. The proposed model selection criteria are evaluated using simulations of Poisson processes and cluster point processes.
