Similar Articles
20 similar articles found.
1.
We derive a computationally convenient formula for the large sample coverage probability of a confidence interval for a scalar parameter of interest following a preliminary hypothesis test that a specified vector parameter takes a given value in a general regression model. Previously, this large sample coverage probability could only be estimated by simulation. Our formula only requires the evaluation, by numerical integration, of either a double or a triple integral, irrespective of the dimension of this specified vector parameter. We illustrate the application of this formula to a confidence interval for the odds ratio of myocardial infarction when the exposure is recent oral contraceptive use, following a preliminary test where two specified interactions in a logistic regression model are zero. For this real-life data, we compare this large sample coverage probability with the actual coverage probability of this confidence interval, obtained by simulation.

2.
Pharmacokinetic studies are commonly performed using the two-stage approach. The first stage involves estimation of pharmacokinetic parameters, such as the area under the concentration versus time curve (AUC), for each analysis subject separately, and the second stage uses the individual parameter estimates for statistical inference. This two-stage approach is not applicable in sparse sampling situations where only one sample is available per analysis subject, as in non-clinical in vivo studies. In a serial sampling design, only one sample is taken from each analysis subject. A simulation study was carried out to assess the coverage, power and type I error of seven methods for constructing two-sided 90% confidence intervals for the ratio of two AUCs assessed in a serial sampling design, which can be used to assess bioequivalence in this parameter.
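As a rough illustration of the serial-sampling setting (not a reproduction of any of the seven methods compared in the paper), the sketch below estimates each AUC by applying the trapezoidal rule to the time-point means, in the spirit of Bailer's approach, and forms a two-sided 90% interval for the AUC ratio by a delta method on the log scale. The time points, group data and the helper name auc_serial are hypothetical.

```python
# Hedged sketch: AUC from a serial sampling design via the trapezoidal rule on
# time-point means, and a delta-method 90% CI for the ratio of two AUCs on the
# log scale. Data and time points are illustrative, not from the paper.
import numpy as np
from scipy import stats

def auc_serial(times, samples):
    """samples[j] holds the concentrations of the subjects sampled at times[j]."""
    means = np.array([np.mean(s) for s in samples])
    var_means = np.array([np.var(s, ddof=1) / len(s) for s in samples])
    # trapezoidal weights: w_j multiplies the mean at time t_j
    w = np.zeros(len(times))
    w[0] = (times[1] - times[0]) / 2
    w[-1] = (times[-1] - times[-2]) / 2
    w[1:-1] = (times[2:] - times[:-2]) / 2
    return np.sum(w * means), np.sum(w**2 * var_means)

rng = np.random.default_rng(1)
times = np.array([0.5, 1, 2, 4, 8])
grp_a = [rng.lognormal(np.log(c), 0.3, size=5) for c in (2.0, 3.5, 3.0, 1.8, 0.7)]
grp_b = [rng.lognormal(np.log(c), 0.3, size=5) for c in (1.8, 3.2, 2.9, 1.7, 0.6)]

auc_a, v_a = auc_serial(times, grp_a)
auc_b, v_b = auc_serial(times, grp_b)
ratio = auc_a / auc_b
se_log = np.sqrt(v_a / auc_a**2 + v_b / auc_b**2)   # delta method on log(AUC_A/AUC_B)
z = stats.norm.ppf(0.95)                            # two-sided 90% interval
ci = np.exp(np.log(ratio) + np.array([-1, 1]) * z * se_log)
print(f"AUC ratio {ratio:.3f}, 90% CI ({ci[0]:.3f}, {ci[1]:.3f})")
```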

3.
The purpose of our study is to propose a procedure for determining the sample size at each stage of repeated group significance tests intended to compare the efficacy of two treatments when the response variable is normal. A procedure for reducing the maximum sample size is needed because large sample sizes are often required in group sequential tests. In order to reduce the sample size at each stage, we construct repeated confidence boundaries which enable us to find which of the two treatments is more effective at an early stage. We then use recursive numerical integration formulae to determine the sample size at each intermediate stage. We compare our procedure with Pocock's in terms of maximum sample size and average sample size in simulations.

4.
The standard approach to constructing nonparametric tolerance intervals is to use the appropriate order statistics, provided a minimum sample size requirement is met. However, it is well known that this traditional approach is conservative with respect to the nominal level. One way to improve the coverage probabilities is to use interpolation. However, the extension to two-sided tolerance intervals, as well as to the case when the minimum sample size requirement is not met, has not been studied. In this paper, an approach using linear interpolation is proposed for improving coverage probabilities in the two-sided setting. When the minimum sample size requirement is not met, coverage probabilities are shown to improve by using linear extrapolation. A discussion of the effect on coverage probabilities and expected lengths when transforming the data is also presented. The applicability of this approach is demonstrated using three real data sets.
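For context, the traditional order-statistic construction mentioned here has an exact confidence level that can be computed from a Beta distribution; the short sketch below (with illustrative n, p and confidence level, and a hypothetical helper tolerance_confidence) shows that calculation and the resulting minimum sample size requirement.

```python
# Confidence attached to the two-sided nonparametric tolerance interval
# (X_(r), X_(s)) for covering at least a proportion p of the population,
# using F(X_(s)) - F(X_(r)) ~ Beta(s - r, n - s + r + 1). Values of n, p and
# gamma are illustrative only.
from scipy import stats

def tolerance_confidence(n, r, s, p):
    """P(the interval (X_(r), X_(s)) contains at least a proportion p of the population)."""
    return stats.beta.sf(p, s - r, n - s + r + 1)

n, p, gamma = 50, 0.90, 0.95
conf = tolerance_confidence(n, r=1, s=n, p=p)   # interval from the sample min to max
print(f"n={n}: confidence that (X_(1), X_(n)) covers {p:.0%} of the population = {conf:.3f}")

# minimum n so that (X_(1), X_(n)) is a (p, gamma) tolerance interval
m = 2
while tolerance_confidence(m, 1, m, p) < gamma:
    m += 1
print(f"minimum sample size for a ({p:.0%}, {gamma:.0%}) interval: {m}")
```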

5.
In reliability and survival analysis, several failure modes may cause a system to fail. It is usually assumed that the causes of failure are independent of each other, although this assumption does not always hold. In this paper we consider dependent competing risks from the Marshall-Olkin bivariate Weibull distribution under a Type-I progressive interval censoring scheme. We derive the likelihood function and obtain the maximum likelihood estimates, the 95% bootstrap confidence intervals and the 95% coverage percentages of the parameters when the shape parameter is known, and apply the EM algorithm when the shape parameter is unknown. A Monte Carlo simulation is given to illustrate the theoretical analysis and the behaviour of the parameter estimates under different sample sizes. Finally, a data set is analyzed for illustrative purposes.
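For readers unfamiliar with the model, the sketch below generates dependent competing-risks data from a Marshall-Olkin bivariate Weibull via the usual three-shock construction. The parameter values are illustrative, and the censoring step is a simple Type-I truncation rather than the progressive interval scheme used in the paper.

```python
# Hedged sketch: Marshall-Olkin bivariate Weibull with common shape alpha and
# joint survival exp(-l1*x^a - l2*y^a - l0*max(x,y)^a), built from three
# independent Weibull "shocks". Parameters and censoring time are illustrative.
import numpy as np

def rmobw(n, alpha, lam1, lam2, lam0, rng):
    # V_i has survival exp(-lam_i * t**alpha); inverse transform from Exp(1) shocks
    e = rng.exponential(size=(3, n))
    v1 = (e[0] / lam1) ** (1 / alpha)
    v2 = (e[1] / lam2) ** (1 / alpha)
    v0 = (e[2] / lam0) ** (1 / alpha)
    return np.minimum(v1, v0), np.minimum(v2, v0)

rng = np.random.default_rng(7)
t1, t2 = rmobw(2000, alpha=1.5, lam1=0.8, lam2=0.5, lam0=0.3, rng=rng)
time = np.minimum(t1, t2)                                 # system failure time
cause = np.where(t1 < t2, 1, np.where(t2 < t1, 2, 0))     # 0 = simultaneous (common shock)
tau = 2.0                                                 # illustrative Type-I censoring time
observed = time <= tau
print("failure-cause proportions:", np.bincount(cause[observed]) / observed.sum())
print("censoring proportion:", 1 - observed.mean())
```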

6.
A challenge for implementing performance-based Bayesian sample size determination is selecting which of several methods to use. We compare three Bayesian sample size criteria: the average coverage criterion (ACC), which controls the coverage rate of fixed-length credible intervals over the predictive distribution of the data; the average length criterion (ALC), which controls the length of credible intervals with a fixed coverage rate; and the worst outcome criterion (WOC), which ensures the desired coverage rate and interval length over all (or a subset of) possible datasets. For most models, the WOC produces the largest sample size among the three criteria, and the sample sizes obtained by the ACC and the ALC are not the same. For Bayesian sample size determination for normal means and differences between normal means, we investigate, for the first time, the direction and magnitude of differences between the ACC and ALC sample sizes. For fixed hyperparameter values, we show that the difference between the ACC and ALC sample sizes depends on the nominal coverage, and not on the nominal interval length. There exists a threshold value of the nominal coverage level such that below the threshold the ALC sample size is larger than the ACC sample size, and above the threshold the ACC sample size is larger. Furthermore, the ACC sample size is more sensitive to changes in the nominal coverage. We also show that for fixed hyperparameter values, there exists an asymptotic constant ratio between the WOC sample size and the ALC (ACC) sample size. Simulation studies are conducted to show that similar relationships among the ACC, ALC, and WOC may hold for estimating binomial proportions. We provide a heuristic argument that the results can be generalized to a larger class of models.

7.
The coverage rate of the original data by the prediction interval in simple linear regression is obtained by computer simulation. The results show that for small sample sizes, the coverage rate is higher than the assigned prediction coverage rate (confidence level). The two coverage rates begin to converge when the sample size is larger than 50, and the convergence rate depends very little on the distribution of the independent variable. Theoretical results on the asymptotic coverage rate and on the absolute minimum bounds are also obtained.
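A minimal simulation along these lines is sketched below: it estimates, for several sample sizes, the proportion of the original observations that fall inside their own 95% prediction intervals in a simple linear regression. The design for x and the error distribution are illustrative assumptions, not those of the paper.

```python
# Sketch: proportion of original observations covered by their own 95%
# prediction intervals in simple linear regression, averaged over replications.
import numpy as np
from scipy import stats

def coverage_of_original(n, reps=2000, level=0.95, rng=None):
    rng = rng or np.random.default_rng(0)
    tcrit = stats.t.ppf((1 + level) / 2, df=n - 2)
    rates = np.empty(reps)
    for r in range(reps):
        x = rng.uniform(0, 10, n)
        y = 1.0 + 2.0 * x + rng.normal(0, 1, n)
        X = np.column_stack([np.ones(n), x])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        s2 = resid @ resid / (n - 2)
        h = np.sum(X * (X @ np.linalg.inv(X.T @ X)), axis=1)   # leverages
        half = tcrit * np.sqrt(s2 * (1 + h))                    # prediction half-widths
        rates[r] = np.mean(np.abs(resid) <= half)
    return rates.mean()

for n in (10, 20, 50, 100):
    print(n, round(coverage_of_original(n), 4))
```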

8.
In many diagnostic studies, multiple diagnostic tests are performed on each subject or multiple disease markers are available. Commonly, this information is combined to improve diagnostic accuracy. We consider the problem of comparing the discriminatory abilities between two groups of biomarkers. Specifically, this article focuses on confidence interval estimation of the difference between paired AUCs based on optimally combined markers under the assumption of multivariate normality. Simulation studies demonstrate that the proposed generalized variable approach provides confidence intervals with satisfactory coverage probabilities at finite sample sizes. The proposed method can also easily provide P-values for hypothesis testing. Application to analysis of a subset of data from a study on coronary heart disease illustrates the utility of the method in practice.

9.
Volume 3 of Analysis of Messy Data by Milliken & Johnson (2002) provides detailed recommendations about sequential model development for the analysis of covariance. In his review of this volume, Koehler (2002) asks whether users should be concerned about the effect of this sequential model development on the coverage probabilities of confidence intervals for comparing treatments. We present a general methodology for the examination of these coverage probabilities in the context of the two-stage model selection procedure that uses two F tests and is proposed in Chapter 2 of Milliken & Johnson (2002). We apply this methodology to an illustrative example from this volume and show that these coverage probabilities are typically very far below nominal. Our conclusion is that users should be very concerned about the coverage probabilities of confidence intervals for comparing treatments constructed after this two-stage model selection procedure.

10.
This paper uses graphical methods to illustrate and compare the coverage properties of a number of methods for calculating confidence intervals for the difference between two independent binomial proportions. We investigate both small-sample and large-sample properties of both two-sided and one-sided coverage, with an emphasis on asymptotic methods. In terms of aligning the smoothed coverage probability surface with the nominal confidence level, we find that the score-based methods on the whole have the best two-sided coverage, although they have slight deficiencies for confidence levels of 90% or lower. For an easily taught, hand-calculated method, the Brown-Li 'Jeffreys' method appears to perform reasonably well, and in most situations, it has better one-sided coverage than the widely recommended alternatives. In general, we find that the one-sided properties of many of the available methods are surprisingly poor. In fact, almost none of the existing asymptotic methods achieve equal coverage on both sides of the interval, even with large sample sizes, and consequently if used as a non-inferiority test, the type I error rate (which is equal to the one-sided non-coverage probability) can be inflated. The only exception is the Gart-Nam 'skewness-corrected' method, which we express using modified notation in order to include a bias correction for improved small-sample performance, and an optional continuity correction for those seeking more conservative coverage. Using a weighted average of two complementary methods, we also define a new hybrid method that almost matches the performance of the Gart-Nam interval.
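To make the notion of a coverage surface concrete, the sketch below computes the exact two-sided coverage of the plain Wald interval and of a Jeffreys-adjusted Wald interval (plugging in (x+0.5)/(n+1), in the spirit of the Brown-Li method, though not necessarily their exact formulation) for a few illustrative parameter values, by summing over all possible outcomes.

```python
# Exact coverage of Wald-type intervals for the difference of two independent
# binomial proportions, computed by summing binomial probabilities over all
# outcomes. The "adjusted" variant is an assumption-labelled stand-in for a
# Jeffreys-style plug-in, not any paper's exact formula.
import numpy as np
from scipy import stats

def exact_coverage(n1, n2, p1, p2, level=0.95, adjust=False):
    z = stats.norm.ppf((1 + level) / 2)
    x1 = np.arange(n1 + 1)[:, None]
    x2 = np.arange(n2 + 1)[None, :]
    if adjust:
        q1, q2 = (x1 + 0.5) / (n1 + 1), (x2 + 0.5) / (n2 + 1)
    else:
        q1, q2 = x1 / n1, x2 / n2
    se = np.sqrt(q1 * (1 - q1) / n1 + q2 * (1 - q2) / n2)
    d = q1 - q2
    covered = (d - z * se <= p1 - p2) & (p1 - p2 <= d + z * se)
    prob = stats.binom.pmf(x1, n1, p1) * stats.binom.pmf(x2, n2, p2)
    return float(np.sum(prob * covered))

for p1, p2 in [(0.1, 0.1), (0.3, 0.2), (0.5, 0.5)]:
    wald = exact_coverage(25, 25, p1, p2)
    adj = exact_coverage(25, 25, p1, p2, adjust=True)
    print(f"p1={p1}, p2={p2}: Wald {wald:.3f}, adjusted {adj:.3f}")
```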

11.
The paper develops empirical Bayes (EB) confidence intervals for population means with distributions belonging to the natural exponential family-quadratic variance function (NEF-QVF) family when the sample size for a particular population is moderate or large. The basis for such development is to find an interval centred around the posterior mean which meets the target coverage probability asymptotically, and then show that the difference between the coverage probabilities of the Bayes and EB intervals is negligible up to a certain order. The approach taken is Edgeworth expansion so that the sample sizes from the different populations need not be significantly large. The proposed intervals meet the target coverage probabilities asymptotically, and are easy to construct. We illustrate use of these intervals in the context of small area estimation both through real and simulated data. The proposed intervals are different from the bootstrap intervals. The latter can be applied quite generally, but the order of accuracy of these intervals in meeting the desired coverage probability is unknown.

12.
The interval between two prespecified order statistics of a sample provides a distribution-free confidence interval for a population quantile. However, due to discreteness, only a small set of exact coverage probabilities is available. Interpolated confidence intervals are designed to expand the set of available coverage probabilities. However, we show here that the infimum of the coverage probability for an interpolated confidence interval is either the coverage probability for the inner interval or the coverage probability obtained by removing the more likely of the two extreme subintervals from the outer interval. Thus, without additional assumptions, interpolated intervals do not expand the set of available guaranteed coverage probabilities.
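The exact coverage probabilities referred to here come from a binomial sum; a brief sketch (with illustrative n, p and order-statistic pairs) is given below to show how few distinct values are achievable.

```python
# Exact distribution-free coverage of (X_(r), X_(s)) for the p-th quantile of a
# continuous population: a binomial sum, so only a small set of values is achievable.
from scipy import stats

def order_stat_coverage(n, r, s, p):
    """P(X_(r) <= xi_p <= X_(s)) for a continuous population, 1 <= r < s <= n."""
    return stats.binom.cdf(s - 1, n, p) - stats.binom.cdf(r - 1, n, p)

n, p = 20, 0.5
for r, s in [(6, 15), (7, 14), (8, 13)]:
    print(f"(X_({r}), X_({s})): coverage = {order_stat_coverage(n, r, s, p):.4f}")
```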

13.
A number of authors have proposed clinical trial designs involving the comparison of several experimental treatments with a control treatment in two or more stages. At the end of the first stage, the most promising experimental treatment is selected, and all other experimental treatments are dropped from the trial. Provided it is good enough, the selected experimental treatment is then compared with the control treatment in one or more subsequent stages. The analysis of data from such a trial is problematic because of the treatment selection and the possibility of stopping at interim analyses. These aspects lead to bias in the maximum-likelihood estimate of the advantage of the selected experimental treatment over the control and to inaccurate coverage for the associated confidence interval. In this paper, we evaluate the bias of the maximum-likelihood estimate and propose a bias-adjusted estimate. We also propose an approach to the construction of a confidence region for the vector of advantages of the experimental treatments over the control based on an ordering of the sample space. These regions are shown to have accurate coverage, although they are also shown to be necessarily unbounded. Confidence intervals for the advantage of the selected treatment are obtained from the confidence regions and are shown to have more accurate coverage than the standard confidence interval based upon the maximum-likelihood estimate and its asymptotic standard error.
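The selection bias discussed here is easy to see in a small Monte Carlo sketch: below, the arm with the largest stage-1 estimate is selected and the naive estimate of its advantage over control is compared with the truth. The number of arms, effect sizes and stage-1 sample size are illustrative assumptions, and no interim stopping rule is modelled.

```python
# Sketch: upward bias of the naive (maximum-likelihood) estimate when the
# best-looking experimental arm is selected at stage 1. All settings illustrative.
import numpy as np

rng = np.random.default_rng(42)
k, n1, sigma = 3, 50, 1.0
true_adv = np.array([0.2, 0.2, 0.2])        # true advantage of each arm over control
reps = 20000

sel_est, sel_true = np.empty(reps), np.empty(reps)
for r in range(reps):
    control = rng.normal(0.0, sigma / np.sqrt(n1))
    arms = rng.normal(true_adv, sigma / np.sqrt(n1))   # stage-1 arm means
    est = arms - control                               # naive advantage estimates
    j = np.argmax(est)                                 # select the best-looking arm
    sel_est[r], sel_true[r] = est[j], true_adv[j]

bias = (sel_est - sel_true).mean()
print(f"mean naive estimate {sel_est.mean():.3f}, true value {sel_true.mean():.3f}, bias {bias:.3f}")
```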

14.
Motivated by a study comparing sensitivities and specificities of two diagnostic tests in a paired design when the sample size is small, we first derive an Edgeworth expansion for the studentized difference between two binomial proportions of paired data. The Edgeworth expansion helps explain why the usual Wald interval for the difference has poor coverage performance at small sample sizes. Based on the Edgeworth expansion, we then derive a transformation-based confidence interval for the difference. The new interval removes the skewness in the Edgeworth expansion; it is easy to compute, and its coverage probability converges to the nominal level at a rate of O(n^{-1/2}). Numerical results indicate that the new interval has average coverage probability very close to the nominal level even for sample sizes as small as 10. Numerical results also indicate that this new interval has better average coverage accuracy than the best existing intervals at finite sample sizes.
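For context, the sketch below estimates by simulation the small-sample coverage of the usual Wald interval for a paired difference of proportions; the cell probabilities and sample sizes are illustrative, and the paper's transformation-based interval is not reproduced.

```python
# Sketch: simulated coverage of the standard Wald interval for the difference of
# correlated (paired) proportions, built from a 2x2 multinomial table.
import numpy as np
from scipy import stats

def wald_paired_coverage(n, probs, level=0.95, reps=20000, rng=None):
    """probs = (p11, p10, p01, p00); the true difference is p10 - p01."""
    rng = rng or np.random.default_rng(3)
    z = stats.norm.ppf((1 + level) / 2)
    p11, p10, p01, p00 = probs
    true_d = p10 - p01
    counts = rng.multinomial(n, probs, size=reps)        # rows: (n11, n10, n01, n00)
    b, c = counts[:, 1], counts[:, 2]
    d = (b - c) / n
    var = (b + c - (b - c) ** 2 / n) / n**2              # usual Wald variance estimate
    half = z * np.sqrt(np.maximum(var, 0))
    return np.mean((d - half <= true_d) & (true_d <= d + half))

for n in (10, 20, 40):
    cov = wald_paired_coverage(n, probs=(0.4, 0.25, 0.15, 0.2))
    print(f"n={n}: Wald coverage ~ {cov:.3f}")
```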

15.
This study constructs a simultaneous confidence region for two combinations of coefficients of linear models and their ratios based on the concept of generalized pivotal quantities. Many biological studies, such as those in genetics, assessment of drug effectiveness, and health economics, are interested in comparing several dose groups with a placebo group, as well as in the group ratios. The Bonferroni correction and the plug-in method based on the multivariate-t distribution have been proposed for simultaneous region estimation. However, the two methods are asymptotic procedures, and their performance at finite sample sizes has not been thoroughly investigated. Based on the concept of the generalized pivotal quantity, we propose a Bonferroni correction procedure and a generalized variable (GV) procedure to construct the simultaneous confidence regions. To address a genetic concern, the dominance ratio, we conduct a simulation study to empirically investigate the coverage probability and expected length of the methods for various combinations of sample sizes and values of the dominance ratio. The simulation results demonstrate that the simultaneous confidence region based on the GV procedure provides sufficient coverage probability and reasonable expected length. Thus, it can be recommended in practice. Numerical examples using published data sets illustrate the proposed methods.

16.
The author proposes an adaptive method which produces confidence intervals that are often narrower than those obtained by the traditional procedures. The proposed methods use both a weighted least squares approach to reduce the length of the confidence interval and a permutation technique to ensure that its coverage probability is near the nominal level. The author reports simulations comparing the adaptive intervals to the traditional ones for the difference between two population means, for the slope in a simple linear regression, and for the slope in a multiple linear regression having two correlated exogenous variables. He is led to recommend adaptive intervals for sample sizes greater than 40 when the error distribution is not known to be Gaussian.

17.
In ranked-set sampling (RSS), a stratification by ranks is used to obtain a sample that tends to be more informative than a simple random sample of the same size. Previous work has shown that if the rankings are perfect, then one can use RSS to obtain Kolmogorov–Smirnov type confidence bands for the CDF that are narrower than those obtained under simple random sampling. Here we develop Kolmogorov–Smirnov type confidence bands that work well whether the rankings are perfect or not. These confidence bands are obtained by using a smoothed bootstrap procedure that takes advantage of special features of RSS. We show through a simulation study that the coverage probabilities are close to nominal even for samples with just two or three observations. A new algorithm allows us to avoid the bootstrap simulation step when sample sizes are relatively small.
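The sampling scheme itself is easy to sketch: under perfect rankings, each cycle draws k sets of k units and measures the i-th judged-smallest unit of the i-th set. The code below (with a hypothetical draw function and illustrative k and number of cycles) generates such a balanced RSS and compares its empirical CDF with that of a simple random sample of the same size; the smoothed bootstrap band construction is not reproduced.

```python
# Sketch: balanced ranked-set sampling under perfect rankings, compared with a
# simple random sample via the empirical CDF on a small grid.
import numpy as np

def ranked_set_sample(draw, k, cycles, rng):
    """Balanced RSS of size k*cycles under perfect rankings; `draw(m, rng)` samples the population."""
    out = []
    for _ in range(cycles):
        for i in range(k):
            candidates = np.sort(draw(k, rng))   # rank the i-th set of k units
            out.append(candidates[i])            # measure its (i+1)-th order statistic
    return np.array(out)

def ecdf(sample, x):
    return np.mean(sample[:, None] <= x[None, :], axis=0)

rng = np.random.default_rng(5)
draw = lambda m, rng: rng.normal(0, 1, m)        # hypothetical standard normal population
rss = ranked_set_sample(draw, k=3, cycles=10, rng=rng)
srs = draw(rss.size, rng)
grid = np.linspace(-3, 3, 7)
print("RSS ECDF:", np.round(ecdf(rss, grid), 2))
print("SRS ECDF:", np.round(ecdf(srs, grid), 2))
```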

18.
In the case of three ordinal diagnostic groups, the important measures of diagnostic accuracy are the volume under the surface (VUS) and the partial volume under the surface (PVUS), which are extended forms of the area under the curve (AUC) and the partial area under the curve (PAUC). This article addresses confidence interval estimation of the difference in paired VUSs and the difference in paired PVUSs. To focus especially on studies with small to moderate sample sizes, we propose an approach based on the concepts of generalized inference. A Monte Carlo study demonstrates that the proposed approach generally can provide confidence intervals with reasonable coverage probabilities even at small sample sizes. The proposed approach is compared to a parametric bootstrap approach and a large sample approach through simulation. Finally, the proposed approach is illustrated via an application to a data set of blood test results of anemia patients.

19.
20.
In traditional bootstrap applications the size of a bootstrap sample equals the parent sample size, n say. Recent studies have shown that using a bootstrap sample size different from n may sometimes provide a more satisfactory solution. In this paper we apply the latter approach to correct for coverage error in construction of bootstrap confidence bounds. We show that the coverage error of a bootstrap percentile method confidence bound, which is typically of order O(n^{-1/2}), can be reduced to O(n^{-1}) by use of an optimal bootstrap sample size. A simulation study is conducted to illustrate our findings, which also suggest that the new method yields intervals of shorter length and greater stability compared to competitors of similar coverage accuracy.
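A hedged sketch of the m-out-of-n idea is given below: a one-sided percentile bootstrap upper bound for a mean is computed with a resample size m that may differ from n, and coverage is compared for m = n and an illustrative choice m = n^(2/3), which is not the paper's optimal rule.

```python
# Sketch: one-sided percentile bootstrap upper bound for a mean with bootstrap
# resample size m possibly different from n. The rule m = n**(2/3) is purely
# illustrative, not an optimal choice from the paper.
import numpy as np

def percentile_upper_bound(x, rng, level=0.95, m=None, n_boot=1000):
    """Upper percentile bootstrap confidence bound for the mean, resample size m."""
    m = m or len(x)
    boot_means = rng.choice(x, size=(n_boot, m), replace=True).mean(axis=1)
    return np.quantile(boot_means, level)

rng = np.random.default_rng(2)
true_mean, reps, n = 1.0, 500, 30
hit_n = hit_m = 0
for _ in range(reps):
    x = rng.exponential(true_mean, n)              # a skewed population (illustrative)
    hit_n += percentile_upper_bound(x, rng, m=n) >= true_mean
    hit_m += percentile_upper_bound(x, rng, m=int(n ** (2 / 3))) >= true_mean
print(f"coverage with m=n: {hit_n/reps:.3f}, with m=n^(2/3): {hit_m/reps:.3f}")
```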
