Similar Documents
20 similar documents retrieved.
1.
Consider the problem of finding an upper 1 − α confidence limit for a scalar parameter of interest φ in the presence of a nuisance parameter vector θ when the data are discrete. Approximate upper limits T may be found by approximating the relevant unknown finite-sample distribution by its limiting distribution. Such approximate upper limits typically have coverage probabilities below, sometimes far below, 1 − α for certain values of (θ, φ). This paper remedies that defect by shifting the possible values t of T so that they are as small as possible, subject both to the minimum coverage probability being greater than or equal to 1 − α and to the shifted values being in the same order as the unshifted t's. The resulting upper limits are called 'tight'. Under very weak and easily checked regularity conditions, a formula is developed for the tight upper limits.
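In the simplest setting, with no nuisance parameter and the ordering given by the observed count itself, the tight construction can be made concrete. The sketch below (Python with numpy/scipy; the binomial model and all function names are illustrative assumptions, not the paper's general algorithm) computes the tight upper limit for a binomial proportion, which in this special case coincides with the Clopper-Pearson upper limit, and brute-forces the minimum coverage to confirm it stays at or above 1 − α.

```python
import numpy as np
from scipy import stats

def tight_upper(x, n, alpha=0.05):
    """Smallest upper limit, non-decreasing in x, whose coverage never
    falls below 1 - alpha. With T = X and no nuisance parameter this
    reproduces the Clopper-Pearson upper limit for a binomial p."""
    return 1.0 if x == n else stats.beta.ppf(1 - alpha, x + 1, n - x)

def min_coverage(n, alpha=0.05, n_grid=2000):
    """Brute-force check that the minimum coverage is >= 1 - alpha."""
    xs = np.arange(n + 1)
    limits = np.array([tight_upper(x, n, alpha) for x in xs])
    worst = 1.0
    for p in np.linspace(1e-4, 1 - 1e-4, n_grid):
        worst = min(worst, stats.binom.pmf(xs, n, p)[limits >= p].sum())
    return worst

print(tight_upper(3, 20))   # upper limit after 3 successes in 20 trials
print(min_coverage(20))     # >= 0.95 up to grid resolution
```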

2.
We consider the problem of finding an upper 1 − α confidence limit (α < ½) for a scalar parameter of interest θ in the presence of a nuisance parameter vector ψ when the data are discrete. Using a statistic T as a starting point, Kabaila & Lloyd (1997) define what they call the tight upper limit with respect to T. This tight upper limit possesses certain attractive properties. However, these properties provide very little guidance on the choice of T itself. The practical recommendation made by Kabaila & Lloyd (1997) is that T be an approximate upper 1 − α confidence limit for θ rather than, say, an approximately median-unbiased estimator of θ. We derive a large-sample approximation which provides strong theoretical support for this recommendation.

3.
This paper concerns prediction from the frequentist point of view. The aim is to define a well-calibrated predictive distribution giving prediction intervals, and in particular prediction limits, with coverage probability equal or close to the target nominal value. This predictive distribution can be used in a range of situations, including discrete data and non-regular cases, and it is founded on the idea of calibrating prediction limits to control the associated coverage probability. Whenever exact computation of the proposed distribution is not feasible, it can be approximated by a suitable bootstrap simulation procedure or by high-order asymptotic expansions, recovering predictive distributions already known in the literature. Examples and applications in different contexts show the wide applicability and very good performance of the proposed predictive distribution.
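As a minimal illustration of the calibration idea, the following sketch (Python with numpy/scipy; the normal model, the level grid, and all names are assumptions, not the paper's construction) parametrically bootstraps the coverage of the naive upper prediction limit x̄ + z·s and re-tunes its nominal level until the estimated coverage reaches the target.

```python
import numpy as np
from scipy import stats

def calibrated_upper_pl(x, alpha=0.05, B=4000, seed=1):
    """Bootstrap-calibrate the naive upper prediction limit xbar + z*s for
    one future normal observation: pick the smallest nominal level whose
    bootstrap-estimated coverage reaches 1 - alpha."""
    rng = np.random.default_rng(seed)
    n, xbar, s = len(x), x.mean(), x.std(ddof=1)
    levels = np.linspace(0.80, 0.999, 100)   # candidate nominal levels
    z = stats.norm.ppf(levels)
    cover = np.zeros(levels.size)
    for _ in range(B):
        xb = rng.normal(xbar, s, n)          # bootstrap sample
        xf = rng.normal(xbar, s)             # bootstrap "future" observation
        cover += xf <= xb.mean() + z * xb.std(ddof=1)
    cover /= B
    # first level whose estimated coverage reaches the target
    # (assumes some level on the grid attains it)
    idx = np.argmax(cover >= 1 - alpha)
    return xbar + stats.norm.ppf(levels[idx]) * s
```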

4.
This article explores the calculation of tolerance limits for the Poisson regression model based on the profile likelihood methodology and on small-sample asymptotic corrections that improve the coverage probability performance. The data consist of n counts, where the mean or expected rate depends upon covariates via the log regression function. The article evaluates upper tolerance limits as a function of the covariates; the upper tolerance limits are obtained from upper confidence limits of the mean. To compute upper confidence limits, three methodologies are considered: likelihood-based asymptotic methods, small-sample asymptotic refinements of the likelihood-based methodology, and the delta method. Two applications are discussed: one relating to defects in semiconductor wafers due to plasma etching and the other examining the number of surface faults in upper seams of coal mines. All three methodologies are illustrated for the two applications.
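A sketch of the simplest of the three routes, the delta method, is given below (Python; statsmodels and scipy assumed, with the 0.95 content level and all names invented for illustration). It follows the composition the article describes: an upper confidence limit for the mean at covariates x0, then a Poisson quantile at that UCL as the upper tolerance limit.

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

def poisson_upper_limits(y, X, x0, alpha=0.05, content=0.95):
    """Delta-method 1 - alpha upper confidence limit for the mean count at
    covariate vector x0, and the induced upper tolerance limit obtained as
    a Poisson quantile evaluated at that UCL.
    y: counts (n,); X: design matrix incl. intercept (n, k); x0: (k,)."""
    fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
    eta = x0 @ fit.params                     # estimated linear predictor
    se = np.sqrt(x0 @ fit.cov_params() @ x0)  # its delta-method std. error
    mu_ucl = np.exp(eta + stats.norm.ppf(1 - alpha) * se)
    return mu_ucl, stats.poisson.ppf(content, mu_ucl)
```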

5.
One standard summary of a clinical trial is a confidence limit for the effect of the treatment. Unfortunately, standard approximate limits may have poor frequentist properties, even for quite large sample sizes. It has been known since Buehler (1957) that an imperfect confidence limit can be adjusted to have exact coverage. These "tight" limits are the gold-standard frequentist confidence limit. Computing tight limits requires exact calculation of certain tail probabilities and optimisation of potentially erratic functions of the nuisance parameter. Naive implementation is both computationally unreliable and highly burdensome, which perhaps explains why tight limits are not in common use. For clinical trials, however, where the data and parameter have dimension two, the difficulties can be fully surmounted. This paper brings together several results in the area and applies them to simple two-dimensional problems. It is shown how to reduce the computational burden by an order of magnitude. Difficulties with optimisation reliability are mitigated by applying two different computational strategies, which tend to break down under different conditions, and taking the less stringent of the two computed limits. The paper specifically develops limits for the relative risk in a clinical trial, but it should be clear to the reader that the method extends to arbitrary measures of treatment effect without essential modification.
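To see where the computational burden comes from, the sketch below (Python/scipy; the log-Wald limit, the 0.5 adjustment, and the grid sizes are illustrative choices, not the paper's algorithm) evaluates the exact coverage of an approximate upper limit for the relative risk and minimises it over a grid of the nuisance parameter; this is the potentially erratic function of the nuisance parameter that a tight limit must keep at or above 1 − α.

```python
import numpy as np
from scipy import stats

def rr_upper_wald(x1, x2, n1, n2, alpha=0.05):
    """Approximate log-Wald upper limit for the relative risk p1/p2
    (0.5 added throughout to avoid division by zero; illustrative)."""
    p1, p2 = (x1 + 0.5) / (n1 + 0.5), (x2 + 0.5) / (n2 + 0.5)
    se = np.sqrt((1 - p1) / (n1 * p1) + (1 - p2) / (n2 * p2))
    return (p1 / p2) * np.exp(stats.norm.ppf(1 - alpha) * se)

def min_coverage(rr, n1=15, n2=15, alpha=0.05, n_grid=40):
    """Exact coverage of the approximate limit, minimised over a grid of
    the nuisance parameter p2: the quantity a tight limit controls."""
    worst = 1.0
    for p2 in np.linspace(0.05, min(0.95, 0.99 / rr), n_grid):
        p1 = rr * p2
        f1 = stats.binom.pmf(np.arange(n1 + 1), n1, p1)
        f2 = stats.binom.pmf(np.arange(n2 + 1), n2, p2)
        cov = sum(f1[a] * f2[b]
                  for a in range(n1 + 1) for b in range(n2 + 1)
                  if rr_upper_wald(a, b, n1, n2, alpha) >= rr)
        worst = min(worst, cov)
    return worst

print(min_coverage(2.0))   # often noticeably below the nominal 0.95
```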

6.
This paper considers a linear regression model with regression parameter vector β. The parameter of interest is θ = a^Tβ, where a is specified. When, as a first step, a data-based variable selection procedure (e.g. minimum Akaike information criterion) is used to select a model, it is common statistical practice to then carry out inference about θ, using the same data, based on the (false) assumption that the selected model had been provided a priori. The paper considers a confidence interval for θ with nominal coverage 1 − α constructed on this (false) assumption, and calls this the naive 1 − α confidence interval. The minimum coverage probability of this confidence interval can be calculated for simple variable selection procedures involving only a single variable. However, the kinds of variable selection procedures used in practice are typically much more complicated. For the real-life data presented in this paper, there are 20 variables, each of which is to be either included or not, leading to 2^20 different models. The coverage probability at any given value of the parameters provides an upper bound on the minimum coverage probability of the naive confidence interval. This paper derives a new Monte Carlo simulation estimator of the coverage probability, which uses conditioning for variance reduction. For these real-life data, the gain in efficiency of this Monte Carlo simulation due to conditioning ranged from 2 to 6. The paper also presents a simple one-dimensional search strategy for parameter values at which the coverage probability is relatively small. For these real-life data, this search leads to parameter values for which the coverage probability of the naive 0.95 confidence interval is 0.79 for variable selection using the Akaike information criterion and 0.70 for variable selection using the Bayes information criterion, showing that these confidence intervals are completely inadequate.
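The sketch below (Python; a two-regressor toy model with invented coefficients, and plain Monte Carlo rather than the paper's conditioned, variance-reduced estimator) shows the quantity being estimated: the coverage of the naive CI for β1 after AIC decides whether to keep the second regressor.

```python
import numpy as np
from scipy import stats

def naive_ci_coverage(beta2, n=50, n_sim=2000, alpha=0.05, seed=3):
    """Monte Carlo estimate of the coverage of the naive 1 - alpha CI for
    beta1 after AIC chooses between the designs {x1} and {x1, x2}."""
    rng = np.random.default_rng(seed)
    x1 = rng.standard_normal(n)
    x2 = 0.8 * x1 + 0.6 * rng.standard_normal(n)   # correlated regressors
    designs = [np.column_stack([x1]), np.column_stack([x1, x2])]
    hits = 0
    for _ in range(n_sim):
        y = 1.0 * x1 + beta2 * x2 + rng.standard_normal(n)
        best = None
        for X in designs:                          # AIC model selection
            b = np.linalg.lstsq(X, y, rcond=None)[0]
            rss = np.sum((y - X @ b) ** 2)
            aic = n * np.log(rss / n) + 2 * X.shape[1]
            if best is None or aic < best[0]:
                best = (aic, X, b, rss)
        _, X, b, rss = best
        k = X.shape[1]
        se = np.sqrt(rss / (n - k) * np.linalg.inv(X.T @ X)[0, 0])
        tq = stats.t.ppf(1 - alpha / 2, n - k)     # naive CI for beta1
        hits += (b[0] - tq * se <= 1.0 <= b[0] + tq * se)
    return hits / n_sim

print(naive_ci_coverage(beta2=0.3))   # can fall well below 0.95
```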

7.
Reference‐scaled average bioequivalence (RSABE) approaches for highly variable drugs are based on linearly scaling the bioequivalence limits according to the reference formulation within‐subject variability. RSABE methods have type I error control problems around the value where the limits change from constant to scaled. In all these methods, the probability of type I error has only one absolute maximum at this switching variability value. This allows adjusting the significance level to obtain statistically correct procedures (that is, those in which the probability of type I error remains below the nominal significance level), at the expense of some potential power loss. In this paper, we explore adjustments to the EMA and FDA regulatory RSABE approaches, and to a possible improvement of the original EMA method, designated as HoweEMA. The resulting adjusted methods are completely correct with respect to type I error probability. The power loss is generally small and tends to become irrelevant for moderately large (affordable in real studies) sample sizes.

8.
In 1957, R.J. Buehler gave a method of constructing honest upper confidence limits for a parameter that are as small as possible subject to a pre‐specified ordering restriction. In reliability theory, these 'Buehler bounds' play a central role in setting upper confidence limits for failure probabilities. Despite their stated strong optimality property, Buehler bounds remain virtually unknown to the wider statistical audience. This paper has two purposes. First, it points out that Buehler's construction is not well defined in general. However, a slightly modified version of the Buehler construction is minimal in a slightly weaker, but still compelling, sense. A proof is presented of the optimality of this modified Buehler construction under minimal regularity conditions. Second, the paper demonstrates that Buehler bounds can be expressed as the supremum of Buehler bounds conditional on any nuisance parameters, under very weak assumptions. This result is then used to demonstrate that Buehler bounds reduce to a trivial construction for the location‐scale model. This places important practical limits on the application of Buehler bounds and explains why they are not as well known as they deserve to be.

9.
We study Poisson confidence procedures that potentially lead to short confidence intervals, investigating the class of all minimal cardinality procedures. We consider how length minimization should be properly defined, and show that Casella and Robert's (1989) criterion for comparing Poisson confidence procedures leads to a contradiction. We provide an alternative criterion for comparing length performance, identify the unique length optimal minimal cardinality procedure by this criterion, and propose a modification that eliminates an important drawback it possesses. We focus on procedures whose coverage never falls below the nominal level and discuss the case in which the nominal level represents mean coverage.
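For orientation, the sketch below (Python/scipy; names and the truncation point are illustrative) computes the exact coverage function of the classical Garwood interval, a standard member of the class whose coverage never falls below the nominal level; the paper's minimal cardinality procedures refine this baseline.

```python
import numpy as np
from scipy import stats

def garwood_ci(x, alpha=0.05):
    """Classical exact (Garwood) Poisson interval: coverage >= 1 - alpha."""
    lo = 0.0 if x == 0 else 0.5 * stats.chi2.ppf(alpha / 2, 2 * x)
    hi = 0.5 * stats.chi2.ppf(1 - alpha / 2, 2 * x + 2)
    return lo, hi

def coverage(lam, alpha=0.05, x_max=300):
    """Exact coverage at lambda: sum the pmf over all x whose interval
    contains lambda (x_max chosen so truncation error is negligible)."""
    xs = np.arange(x_max)
    ok = np.array([lo <= lam <= hi
                   for lo, hi in (garwood_ci(int(x), alpha) for x in xs)])
    return stats.poisson.pmf(xs, lam)[ok].sum()

print(coverage(4.2))   # oscillates with lambda but never drops below 0.95
```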

10.
Conventional bootstrap-t intervals for density functions based on kernel density estimators exhibit poor coverage due to the failure of the bootstrap to estimate the bias correctly. The problem can be resolved either by estimating the bias explicitly or by undersmoothing the kernel density estimate so that its bias is asymptotically negligible. The resulting bias-corrected intervals have an optimal coverage error of order arbitrarily close to second order for a sufficiently smooth density function. We investigate the effects on coverage error of both bias-corrected intervals when the nominal coverage level is calibrated by the iterated bootstrap. In either case, an asymptotic reduction of coverage error is possible provided that the bias terms are handled using an extra round of smoothed bootstrapping. Under appropriate smoothness conditions, the optimal coverage error of the iterated bootstrap-t intervals has order arbitrarily close to third order. Examples with both simulated and real data illustrate the iterated bootstrap procedures.
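A minimal sketch of the undersmoothing variant follows (Python with numpy/scipy; the n^(-1/3) bandwidth, the Gaussian kernel, and all names are illustrative assumptions, and the iterated-bootstrap calibration layer is omitted).

```python
import numpy as np
from scipy import stats

R = 1 / (2 * np.sqrt(np.pi))   # roughness of the Gaussian kernel

def fhat(x0, data, h):
    """Gaussian kernel density estimate at the point x0."""
    return stats.norm.pdf((x0 - data) / h).mean() / h

def boot_t_interval(data, x0, alpha=0.05, B=999, seed=7):
    """Undersmoothed bootstrap-t interval for f(x0): bandwidth of order
    n^(-1/3), smaller than the MSE-optimal n^(-1/5), so the bias becomes
    negligible relative to the standard error (illustrative choices)."""
    rng = np.random.default_rng(seed)
    n = data.size
    h = data.std(ddof=1) * n ** (-1 / 3)    # undersmoothing
    f0 = fhat(x0, data, h)
    se0 = np.sqrt(f0 * R / (n * h))         # asymptotic std. error
    t = []
    for _ in range(B):
        d = rng.choice(data, n, replace=True)
        fb = fhat(x0, d, h)
        t.append((fb - f0) / np.sqrt(max(fb, 1e-12) * R / (n * h)))
    lo, hi = np.quantile(t, [alpha / 2, 1 - alpha / 2])
    return f0 - hi * se0, f0 - lo * se0     # bootstrap-t inversion
```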

11.
In several statistical problems, nonparametric confidence intervals for population quantiles can be constructed and their coverage probabilities computed exactly, but the coverage cannot in general be made equal to a pre-determined level. The same difficulty arises for the coverage probabilities of nonparametric prediction intervals for future observations. One solution is to interpolate between the intervals whose coverage probabilities are closest to the pre-determined level from above and from below. In this paper, confidence intervals for population quantiles are constructed from interpolated upper and lower records, and prediction intervals for future upper records are then obtained from interpolated upper records. Additionally, we derive upper bounds for the coverage error of these confidence and prediction intervals. Finally, the results are applied to some real data sets, and a simulation study compares the proposed intervals with similar classical intervals from the literature.
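The interpolation step can be sketched for ordinary order statistics (Python/scipy; the paper works with records rather than order statistics, so this shows only the underlying idea, with invented names).

```python
import numpy as np
from scipy import stats

def pair_coverage(n, i, j, p):
    """Exact coverage of (X_(i), X_(j)) as a CI for the p-th quantile of a
    continuous F: P{ i <= #observations below the quantile <= j - 1 }."""
    b = stats.binom(n, p)
    return b.cdf(j - 1) - b.cdf(i - 1)

def interpolated_ci(x, p=0.5, level=0.95):
    """Linearly interpolate between the symmetric order-statistic pairs
    whose exact coverages bracket the target level."""
    xs = np.sort(x)
    n = xs.size
    for i in range(1, n // 2):
        c_wide = pair_coverage(n, i, n + 1 - i, p)     # pair (i, n+1-i)
        c_narrow = pair_coverage(n, i + 1, n - i, p)   # next pair inward
        if c_narrow <= level <= c_wide:
            lam = (level - c_narrow) / (c_wide - c_narrow)
            lo = lam * xs[i - 1] + (1 - lam) * xs[i]
            hi = lam * xs[n - i] + (1 - lam) * xs[n - i - 1]
            return lo, hi
    raise ValueError("no symmetric pair brackets the requested level")
```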

12.
The method of constructing confidence intervals by inverting hypothesis tests is studied for the case of a single unknown parameter, and is proved to yield confidence intervals with coverage probability at least the nominal level. The confidence intervals obtained by the method in several different contexts compare favorably with those obtained by traditional methods, whose coverage probability is seen to fall below the nominal level in several instances. The method can be applied to all confidence interval problems and reduces to the traditional method when an exact pivotal statistic is known.
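A concrete instance of the test-inversion recipe (Python/scipy; the binomial case and the grid resolution are illustrative choices): retain every parameter value that a level-α equal-tailed exact test fails to reject.

```python
import numpy as np
from scipy import stats

def ci_by_test_inversion(x, n, alpha=0.05, n_grid=4000):
    """CI for a binomial p obtained by inverting equal-tailed exact tests:
    keep every p0 whose two one-sided p-values both exceed alpha/2. The
    result matches the Clopper-Pearson interval up to grid resolution."""
    grid = np.linspace(1e-6, 1 - 1e-6, n_grid)
    keep = [p0 for p0 in grid
            if stats.binom.cdf(x, n, p0) > alpha / 2        # lower tail ok
            and stats.binom.sf(x - 1, n, p0) > alpha / 2]   # upper tail ok
    return keep[0], keep[-1]

print(ci_by_test_inversion(7, 20))   # ~ (0.154, 0.592) at alpha = 0.05
```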

13.
The problems of estimating the mean and an upper percentile of a lognormal population with nonnegative values are considered. For estimating the mean of such a population based on data that include zeros, a simple confidence interval (CI) is proposed, obtained by modifying Tian's generalized CI [Inferences on the mean of zero-inflated lognormal data: the generalized variable approach. Stat Med. 2005;24:3223–3232]. A fiducial upper confidence limit (UCL) and a closed-form approximate UCL for an upper percentile are also developed. Our simulation studies indicate that the proposed methods are very satisfactory in terms of coverage probability and precision, and better than existing methods at maintaining balanced tail error rates. The proposed CI and UCL are simple and easy to calculate. All the methods considered are illustrated using samples of data on airborne chlorine concentrations and on diagnostic test costs.
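For the no-zeros special case, a closed-form bound is easy to state. The sketch below (Python/scipy; names invented, and this is the classical noncentral-t construction on the log scale rather than the article's zero-inflation-adjusted limits) computes a 1 − α UCL for an upper lognormal percentile.

```python
import numpy as np
from scipy import stats

def lognormal_percentile_ucl(x, p=0.95, alpha=0.05):
    """Closed-form 1 - alpha UCL for the 100p-th lognormal percentile with
    no zeros in the sample: the normal-theory upper tolerance bound on the
    log scale, built from a noncentral-t quantile."""
    y = np.log(np.asarray(x, dtype=float))
    n, m, s = y.size, y.mean(), y.std(ddof=1)
    k = stats.nct.ppf(1 - alpha, df=n - 1,
                      nc=stats.norm.ppf(p) * np.sqrt(n)) / np.sqrt(n)
    return np.exp(m + k * s)
```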

14.
The Bootstrap and Kriging Prediction Intervals
Kriging is a method for spatial prediction that, given observations of a spatial process, gives the optimal linear predictor of the process at a new specified point. The kriging predictor may be used to define a prediction interval for the value of interest. The coverage of the prediction interval will, however, equal the nominal desired coverage only if it is constructed using the correct underlying covariance structure of the process. If this is unknown, it must be estimated from the data. We study the effect on the coverage accuracy of the prediction interval of substituting estimators for the true covariance parameters, and the effect of bootstrap calibration on the coverage properties of the resulting 'plug-in' interval. We demonstrate that plug-in and bootstrap-calibrated intervals are asymptotically accurate in some generality, and that bootstrap calibration appears to have a significant effect in improving the rate of convergence of the coverage error.
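A minimal plug-in version for a one-dimensional, zero-mean process is sketched below (Python with numpy/scipy; the exponential covariance and all names are assumptions, and the bootstrap calibration step is only described in the comments).

```python
import numpy as np
from scipy import stats

def exp_cov(d, sill, corr_len):
    """Exponential covariance: sill * exp(-d / corr_len)."""
    return sill * np.exp(-d / corr_len)

def kriging_pi(x_obs, y_obs, x_new, sill, corr_len, nugget, alpha=0.05):
    """Simple-kriging predictor and plug-in prediction interval at x_new,
    treating (sill, corr_len, nugget) as plugged-in estimates. Bootstrap
    calibration would simulate fields from the fitted covariance,
    re-estimate the parameters each time, and adjust the nominal level."""
    D = np.abs(x_obs[:, None] - x_obs[None, :])
    K = exp_cov(D, sill, corr_len) + nugget * np.eye(x_obs.size)
    k = exp_cov(np.abs(x_obs - x_new), sill, corr_len)
    w = np.linalg.solve(K, k)            # kriging weights
    pred = w @ y_obs
    var = sill + nugget - k @ w          # plug-in kriging variance
    half = stats.norm.ppf(1 - alpha / 2) * np.sqrt(max(var, 0.0))
    return pred, (pred - half, pred + half)
```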

15.
We provide a comprehensive and critical review of Yates' continuity correction for the normal approximation to the binomial distribution, emphasizing in particular its poor ability to approximate extreme tail probabilities. As an alternative, we also review Cressie's finely tuned continuity correction. In addition, we demonstrate how Yates' continuity correction is used to improve the coverage probability of binomial confidence limits, and propose new confidence limits by applying Cressie's continuity correction. These continuity correction methods are numerically compared and illustrated with data examples from industry and medicine.
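The correction itself is one line; the sketch below (Python/scipy, illustrative names) compares the exact binomial tail with the plain and Yates-corrected normal approximations, including an extreme-tail case of the kind the review highlights.

```python
import math
from scipy import stats

def binom_tail_approx(x, n, p):
    """P(X <= x) for X ~ Bin(n, p): exact value, plain normal approximation,
    and the Yates version (add 0.5 to x before standardising)."""
    mu, sd = n * p, math.sqrt(n * p * (1 - p))
    exact = stats.binom.cdf(x, n, p)
    plain = stats.norm.cdf((x - mu) / sd)
    yates = stats.norm.cdf((x + 0.5 - mu) / sd)
    return exact, plain, yates

print(binom_tail_approx(12, 30, 0.5))   # near the centre: correction helps
print(binom_tail_approx(3, 30, 0.5))    # extreme tail: approximation poor
```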

16.
The Buehler 1 − α upper confidence limit is as small as possible, subject to the constraints that its coverage probability is at least 1 − α and that it is a non‐decreasing function of a pre‐specified statistic T. This confidence limit has important biostatistical and reliability applications. Previous research has examined how the choice of T affects the efficiency of the Buehler 1 − α upper confidence limit for a given value of α. This paper considers how T should be chosen when the Buehler limit is to be computed for a range of values of α. If T is allowed to depend on α, then the Buehler limit is not necessarily a non‐increasing function of α, i.e. the limit is 'non‐nesting'. Furthermore, non‐nesting occurs in standard and practical examples. Therefore, if the limit is to be computed for a range [α_L, α_U] of values of α, this paper suggests that T should be a carefully chosen approximate 1 − α_L upper limit for θ. This choice leads to Buehler limits that have high statistical efficiency and are nesting.

17.
We describe a general family of contingent response models. These models have ternary outcomes constructed from two Bernoulli outcomes, where one outcome is observed only if the other outcome is positive. The family is represented in a canonical form which yields general results for its Fisher information. A bivariate extreme value distribution illustrates the model and the optimal design results. To provide a motivating context, we call the two binary events that compose the contingent responses toxicity and efficacy. Efficacy or lack thereof is assumed to be observable only in the absence of toxicity, resulting in the ternary response (toxicity, efficacy without toxicity, neither efficacy nor toxicity). The rate of toxicity, and the rate of efficacy conditional on no toxicity, are assumed to increase with dose. While optimal designs for contingent response models are found numerically, limiting optimal designs can be expressed in closed form. In particular, in the family of four-parameter bivariate location-scale models we study, as the marginal probability functions of toxicity and no efficacy diverge, limiting D-optimal designs are shown to consist of a mixture of the D-optimal designs for each failure type (toxicity and no efficacy) taken univariately. Limiting designs are also obtained for the case of equal scale parameters.

18.
Suppose that X is a discrete random variable whose possible values are {0, 1, 2, …} and whose probability mass function belongs to a family indexed by the scalar parameter θ. This paper presents a new algorithm for finding a 1 − α confidence interval for θ based on X which possesses the following three properties: (i) the infimum over θ of the coverage probability is 1 − α; (ii) the confidence interval cannot be shortened without violating the coverage requirement; (iii) the lower and upper endpoints of the confidence intervals are increasing functions of the observed value x. The algorithm is applied to the particular case in which X has a negative binomial distribution.

19.
The standard approach to constructing nonparametric tolerance intervals is to use the appropriate order statistics, provided a minimum sample size requirement is met. However, it is well known that this traditional approach is conservative with respect to the nominal level. One way to improve the coverage probabilities is to use interpolation. However, extensions to two-sided tolerance intervals, and to the case where the minimum sample size requirement is not met, have not been studied. In this paper, an approach using linear interpolation is proposed for improving coverage probabilities in the two-sided setting. When the minimum sample size requirement is not met, coverage probabilities are shown to improve by using linear extrapolation. A discussion of the effect on coverage probabilities and expected lengths when transforming the data is also presented. The applicability of this approach is demonstrated using three real data sets.
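The classical two-sided calculation behind the minimum sample size requirement can be sketched directly (Python/scipy; the paper's interpolation and extrapolation refinements are not shown): for continuous data, the content of (X_(r), X_(s)) has a Beta distribution.

```python
from scipy import stats

def ti_confidence(n, r, s, p):
    """Confidence that (X_(r), X_(s)) contains at least a proportion p of a
    continuous population: F(X_(s)) - F(X_(r)) ~ Beta(s - r, n - s + r + 1)."""
    return stats.beta.sf(p, s - r, n - s + r + 1)

def min_sample_size(p=0.90, conf=0.95):
    """Smallest n for which the extremes (X_(1), X_(n)) give a (p, conf)
    two-sided tolerance interval: the usual minimum-n requirement."""
    n = 2
    while ti_confidence(n, 1, n, p) < conf:
        n += 1
    return n

print(min_sample_size())   # 46 for a (0.90, 0.95) tolerance interval
```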

20.
A challenge for implementing performance-based Bayesian sample size determination is selecting which of several methods to use. We compare three Bayesian sample size criteria: the average coverage criterion (ACC), which controls the coverage rate of fixed-length credible intervals over the predictive distribution of the data; the average length criterion (ALC), which controls the length of credible intervals with a fixed coverage rate; and the worst outcome criterion (WOC), which ensures the desired coverage rate and interval length over all (or a subset of) possible datasets. For most models, the WOC produces the largest sample size among the three criteria, and the sample sizes obtained by the ACC and the ALC are not the same. For Bayesian sample size determination for normal means and differences between normal means, we investigate, for the first time, the direction and magnitude of the differences between the ACC and ALC sample sizes. For fixed hyperparameter values, we show that the difference between the ACC and ALC sample sizes depends on the nominal coverage and not on the nominal interval length. There exists a threshold value of the nominal coverage level such that below the threshold the ALC sample size is larger than the ACC sample size, and above the threshold the ACC sample size is larger. Furthermore, the ACC sample size is more sensitive to changes in the nominal coverage. We also show that, for fixed hyperparameter values, there exists an asymptotic constant ratio between the WOC sample size and the ALC (ACC) sample size. Simulation studies show that similar relationships among the ACC, ALC, and WOC may hold for estimating binomial proportions. We provide a heuristic argument that the results can be generalized to a larger class of models.
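A stripped-down instance (Python/scipy; known-variance normal mean with a conjugate prior and invented parameter values) shows the basic length computation. Note that with known variance the credible interval length is data-free, so the ACC and ALC coincide here; the paper's contrasts only emerge once the length depends on the data.

```python
import math
from scipy import stats

def alc_sample_size(sigma, tau, max_len, level=0.95):
    """Smallest n for which the level-credible interval for a normal mean
    (known sigma, N(mu0, tau^2) prior) has length <= max_len. Posterior
    variance is 1 / (n / sigma^2 + 1 / tau^2), independent of the data,
    so ACC and ALC agree in this special case."""
    z = stats.norm.ppf((1 + level) / 2)
    n = 0
    while 2 * z * math.sqrt(1 / (n / sigma**2 + 1 / tau**2)) > max_len:
        n += 1
    return n

print(alc_sample_size(sigma=1.0, tau=2.0, max_len=0.5))
```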
