Similar Articles
20 similar articles found.
1.
We develop an approach to evaluating frequentist model averaging procedures by considering them in a simple situation in which there are two nested linear regression models over which we average. We introduce a general class of model-averaged confidence intervals, obtain exact expressions for the coverage and the scaled expected length of these intervals, and use them to compute these quantities for the model-averaged profile likelihood (MPL) and model-averaged tail area confidence intervals proposed by D. Fletcher and D. Turek. We show that the MPL confidence intervals can perform more poorly than the standard confidence interval used after model selection but ignoring the model selection process. The model-averaged tail area confidence intervals perform better than the MPL and post-model-selection confidence intervals but, for the examples that we consider, offer little advantage over simply using the standard confidence interval for θ under the full model, with the same nominal coverage.
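To make the pretesting effect concrete, the following minimal Python sketch approximates by Monte Carlo the kind of coverage the paper derives exactly: a nominal 95% interval for θ = β₁ computed after a pretest on β₂ in two nested linear models. All settings (sample size, correlated regressors, β₂ = 0.6) are illustrative choices, not the paper's design.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, alpha, n_sim = 50, 0.05, 5000
beta = np.array([1.0, 2.0, 0.6])      # intercept, theta = beta1, beta2 (moderate)
theta_true = beta[1]

# correlated regressors make the pretest damaging
x1 = rng.normal(size=n)
x2 = 0.8 * x1 + 0.6 * rng.normal(size=n)
Xf = np.column_stack([np.ones(n), x1, x2])   # full model
Xr = Xf[:, :2]                               # reduced model (beta2 = 0)

def ols_ci(X, y, j):
    """OLS fit; returns the 1-alpha t-interval for coefficient j."""
    p = X.shape[1]
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ b
    s2 = resid @ resid / (len(y) - p)
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[j, j])
    tq = stats.t.ppf(1 - alpha / 2, len(y) - p)
    return b[j] - tq * se, b[j] + tq * se

cover_post = cover_full = 0
for _ in range(n_sim):
    y = Xf @ beta + rng.normal(size=n)
    ci_full = ols_ci(Xf, y, 1)
    ci_b2 = ols_ci(Xf, y, 2)
    keep_x2 = not (ci_b2[0] <= 0.0 <= ci_b2[1])   # pretest on beta2
    ci = ci_full if keep_x2 else ols_ci(Xr, y, 1)
    cover_post += ci[0] <= theta_true <= ci[1]
    cover_full += ci_full[0] <= theta_true <= ci_full[1]

print(f"post-selection coverage: {cover_post / n_sim:.3f}")
print(f"full-model coverage:     {cover_full / n_sim:.3f}")
```

With these illustrative settings the post-selection interval typically covers well below the nominal 0.95, while the full-model interval stays on target.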

2.
Volume 3 of Analysis of Messy Data by Milliken & Johnson (2002) provides detailed recommendations about sequential model development for the analysis of covariance. In his review of this volume, Koehler (2002) asks whether users should be concerned about the effect of this sequential model development on the coverage probabilities of confidence intervals for comparing treatments. We present a general methodology for the examination of these coverage probabilities in the context of the two-stage model selection procedure that uses two F tests and is proposed in Chapter 2 of Milliken & Johnson (2002). We apply this methodology to an illustrative example from this volume and show that these coverage probabilities are typically very far below nominal. Our conclusion is that users should be very concerned about the coverage probabilities of confidence intervals for comparing treatments constructed after this two-stage model selection procedure.
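For intuition, the sketch below uses Monte Carlo to approximate the coverage of a nominal 95% interval for the treatment difference after a two-stage (two F tests) ANCOVA selection: first testing equality of slopes, then whether the common slope is zero. The data-generating settings and the group covariate shifts are illustrative assumptions, not the volume's example, and the paper's computation is analytic rather than simulated.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
n, alpha, n_sim = 15, 0.05, 4000          # per-group size, test/CI level

# two-group ANCOVA; focus parameter is the treatment difference at x = 0
x = np.concatenate([rng.uniform(-1.0, 0.6, n), rng.uniform(-0.6, 1.0, n)])
g = np.repeat([0.0, 1.0], n)              # group indicator
a1, theta, b1, b2 = 0.0, 1.0, 1.0, 1.6    # slightly unequal slopes

def fit(X, y):
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    return b, np.sum((y - X @ b) ** 2)

def f_reject(rss0, rss1, df_extra, df_resid):
    F = (rss0 - rss1) / df_extra / (rss1 / df_resid)
    return F > stats.f.ppf(1 - alpha, df_extra, df_resid)

one = np.ones(2 * n)
X_full = np.column_stack([one, g, x, g * x])   # separate slopes
X_comm = np.column_stack([one, g, x])          # common slope
X_mean = np.column_stack([one, g])             # no covariate

cover = 0
for _ in range(n_sim):
    y = a1 + theta * g + b1 * x + (b2 - b1) * g * x + rng.normal(size=2 * n)
    rss_f, rss_c, rss_m = fit(X_full, y)[1], fit(X_comm, y)[1], fit(X_mean, y)[1]
    if f_reject(rss_c, rss_f, 1, 2 * n - 4):      # stage 1: unequal slopes?
        X = X_full
    elif f_reject(rss_m, rss_c, 1, 2 * n - 3):    # stage 2: keep common slope?
        X = X_comm
    else:
        X = X_mean
    b, rss = fit(X, y)
    df = 2 * n - X.shape[1]
    se = np.sqrt(rss / df * np.linalg.inv(X.T @ X)[1, 1])
    tq = stats.t.ppf(1 - alpha / 2, df)
    cover += b[1] - tq * se <= theta <= b[1] + tq * se

print(f"post two-stage-selection coverage: {cover / n_sim:.3f} (nominal 0.95)")
```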

3.
Generalized additive models represented using low rank penalized regression splines, estimated by penalized likelihood maximisation and with smoothness selected by generalized cross validation or similar criteria, provide a computationally efficient general framework for practical smooth modelling. Various authors have proposed approximate Bayesian interval estimates for such models, based on extensions of the work of Wahba, G. (1983) [Bayesian confidence intervals for the cross validated smoothing spline. J. R. Statist. Soc. B 45, 133–150] and Silverman, B.W. (1985) [Some aspects of the spline smoothing approach to nonparametric regression curve fitting. J. R. Statist. Soc. B 47, 1–52] on smoothing spline models of Gaussian data, but testing of such intervals has been rather limited and there is little supporting theory for the approximations used in the generalized case. This paper aims to improve this situation by providing simulation tests and obtaining asymptotic results supporting the approximations employed for the generalized case. The simulation results suggest that while across-the-model performance is good, component-wise coverage probabilities are not as reliable. Since this is likely to result from the neglect of smoothing parameter variability, a simple and efficient simulation method is proposed to account for smoothing parameter uncertainty: this is demonstrated to substantially improve the performance of component-wise intervals.

4.
We study the focused information criterion and frequentist model averaging and their application to post-model-selection inference for weighted composite quantile regression (WCQR) in the context of additive partial linear models. With the non-parametric functions approximated by polynomial splines, we show that, under certain conditions, the asymptotic distribution of the frequentist model averaging WCQR estimator of a focused parameter is a non-linear mixture of normal distributions. This asymptotic distribution is used to construct confidence intervals that achieve the nominal coverage probability. With properly chosen weights, the WCQR estimators based on the focused information criterion are not only robust to outliers and non-normal residuals but can also achieve efficiency close to that of the maximum likelihood estimator, without assuming the true error distribution. Simulation studies and a real data analysis illustrate the effectiveness of the proposed procedure.

5.
The large deviation modified likelihood ratio statistic is studied for testing a variance component equal to a specified value. Formulas are presented in the general balanced case, whereas in the unbalanced case only the one-way random effects model is studied. Simulation studies are presented, showing that the normal approximation to the large deviation modified likelihood ratio statistic gives confidence intervals for variance components with coverage probabilities very close to the nominal confidence coefficient.

6.
Statistical inference methods for the Weibull parameters and their functions usually depend on extensive tables, and hence are rather inconvenient in practical applications. In this paper, we propose a general method for constructing confidence intervals for the Weibull parameters and their functions, which eliminates the need for the extensive tables. The method is applied to obtain confidence intervals for the scale parameter, the mean time to failure, the percentile function, and the reliability function. Monte Carlo simulation shows that these intervals possess excellent finite-sample properties, having coverage probabilities very close to their nominal levels, irrespective of the sample size and the degree of censoring.
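A minimal sketch of the simulation-calibrated pivotal idea for the complete-data (uncensored) case: because the log of a Weibull variable lies in a location-scale family, the quantity k̂·log(λ̂/λ) is pivotal, so its distribution can be tabulated once by simulating from Weibull(1, 1). The sample size, replication count, and use of scipy's MLE are illustrative assumptions, not the paper's exact construction, which also covers censored data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, alpha, B = 30, 0.05, 5000

def weibull_mle(x):
    c, _, scale = stats.weibull_min.fit(x, floc=0)   # shape k, scale lam
    return c, scale

# Pivotal W = k_hat * log(lam_hat / lam): its distribution does not depend
# on (k, lam), so calibrate it once by simulating from Weibull(1, 1).
w = np.empty(B)
for b in range(B):
    k_b, lam_b = weibull_mle(rng.weibull(1.0, size=n))   # unit shape and scale
    w[b] = k_b * np.log(lam_b)
w_lo, w_hi = np.quantile(w, [alpha / 2, 1 - alpha / 2])

# Apply to an observed sample (here simulated with k=2, lam=5 for checking).
x = 5.0 * rng.weibull(2.0, size=n)
k_hat, lam_hat = weibull_mle(x)
# w_lo <= k_hat*log(lam_hat/lam) <= w_hi inverts to the interval below.
ci = (lam_hat * np.exp(-w_hi / k_hat), lam_hat * np.exp(-w_lo / k_hat))
print(f"95% CI for the Weibull scale: ({ci[0]:.2f}, {ci[1]:.2f})")
```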

7.
In this article, we discuss the utility of tolerance intervals for various regression models. We begin with a discussion of tolerance intervals for linear and nonlinear regression models. We then introduce a novel method for constructing nonparametric regression tolerance intervals by extending the well-established procedure for univariate data. Simulation results and application to real datasets are presented to help visualize regression tolerance intervals and to demonstrate that the methods we discuss have coverage probabilities very close to the specified nominal confidence level.

8.
The problems of constructing tolerance intervals for the binomial and Poisson distributions are considered. Closed-form approximate equal-tailed tolerance intervals (that control percentages in both tails) are proposed for both distributions. Exact coverage probabilities and expected widths are evaluated for the proposed equal-tailed tolerance intervals and the existing intervals. Furthermore, an adjustment to the nominal confidence level is suggested so that an equal-tailed tolerance interval can be used as a tolerance interval which includes a specified proportion of the population, but does not necessarily control percentages in both tails. Comparison of such coverage-adjusted tolerance intervals with respect to coverage probabilities and expected widths indicates that the closed-form approximate tolerance intervals are comparable with others, and less conservative, with minimum coverage probabilities close to the nominal level in most cases. The approximate tolerance intervals are simple and easy to compute using a calculator, and they can be recommended for practical applications. The methods are illustrated using two practical examples.
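A hedged sketch of one standard way to build an equal-tailed binomial tolerance interval (Clopper-Pearson confidence limits for p plugged into binomial quantiles), together with the exact coverage evaluation by enumeration over the sample space; this is not necessarily the closed-form approximation the paper proposes, and all parameter values are illustrative.

```python
import numpy as np
from scipy import stats

n, m = 50, 30                # past sample size, future sample size
beta_c, alpha = 0.90, 0.05   # content and one minus confidence

def cp_limits(x, n, alpha):
    """Clopper-Pearson 1-alpha confidence limits for p."""
    lo = 0.0 if x == 0 else stats.beta.ppf(alpha / 2, x, n - x + 1)
    hi = 1.0 if x == n else stats.beta.ppf(1 - alpha / 2, x + 1, n - x)
    return lo, hi

def et_tolerance(x):
    """Equal-tailed (beta_c, 1-alpha) tolerance interval for Bin(m, p)."""
    p_lo, p_hi = cp_limits(x, n, alpha)
    L = stats.binom.ppf((1 - beta_c) / 2, m, p_lo)       # lower tolerance limit
    U = stats.binom.ppf(1 - (1 - beta_c) / 2, m, p_hi)   # upper tolerance limit
    return int(L), int(U)

def exact_coverage(p):
    """P over X ~ Bin(n, p) that both Bin(m, p) tail conditions hold."""
    cov = 0.0
    for x in range(n + 1):
        L, U = et_tolerance(x)
        lower_ok = stats.binom.cdf(L - 1, m, p) <= (1 - beta_c) / 2
        upper_ok = stats.binom.sf(U, m, p) <= (1 - beta_c) / 2
        if lower_ok and upper_ok:
            cov += stats.binom.pmf(x, n, p)
    return cov

for p in (0.1, 0.3, 0.5):
    print(f"p = {p}: exact coverage = {exact_coverage(p):.4f}")
```

This plug-in construction is conservative by design, which is exactly the behavior the coverage enumeration makes visible.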

9.
We obtain approximate Bayes confidence intervals for a scalar parameter based on the directed likelihood. The posterior probabilities of these intervals agree with their unconditional coverage probabilities to fourth order, and with their conditional coverage probabilities to third order. These intervals are constructed for arbitrary smooth prior distributions. A key feature of the construction is that log-likelihood derivatives beyond second order are not required, unlike the asymptotic expansions of Severini.

10.
Multi-sample inference for simple-tree alternatives with ranked-set samples
This paper develops a non-parametric multi-sample inference for simple-tree alternatives for ranked-set samples. The multi-sample inference provides simultaneous one-sample sign confidence intervals for the population medians. The decision rule compares these intervals to achieve the desired type I error. For the specified upper bounds on the experiment-wise error rates, corresponding individual confidence coefficients are presented. It is shown that the testing procedure is distribution-free. To achieve the desired confidence coefficients for multi-sample inference, a non-parametric confidence interval is constructed by interpolating the adjacent order statistics. Interpolation coefficients and coverage probabilities are provided, along with the nominal levels.
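The following sketch illustrates the one-sample sign interval for a median and interpolation between adjacent order statistics. It uses simple linear interpolation of the exact coverages as a stand-in, whereas the paper derives specific interpolation coefficients; the sample and level are illustrative.

```python
import numpy as np
from scipy import stats

def sign_ci_median(x, level=0.95):
    """One-sample sign CI for the median, linearly interpolating between
    adjacent order statistics to approximate the nominal level."""
    x = np.sort(np.asarray(x))
    n = len(x)

    # Exact coverage of [x_(d), x_(n+1-d)] is 1 - 2 * P(Bin(n, 1/2) <= d - 1).
    def coverage(d):
        return 1.0 - 2.0 * stats.binom.cdf(d - 1, n, 0.5)

    # Largest d whose order-statistic interval still reaches the nominal level.
    d = max(dd for dd in range(1, n // 2 + 1) if coverage(dd) >= level)
    g_out, g_in = coverage(d), coverage(d + 1)   # bracketing exact coverages
    # Simple linear interpolation weight (the paper derives exact coefficients).
    lam = (g_out - level) / (g_out - g_in)
    lo = (1 - lam) * x[d - 1] + lam * x[d]       # 0-indexed order statistics
    hi = (1 - lam) * x[n - d] + lam * x[n - d - 1]
    return lo, hi, g_out

rng = np.random.default_rng(2)
sample = rng.standard_cauchy(25)    # heavy tails: median inference is natural
lo, hi, outer_cov = sign_ci_median(sample)
print(f"interpolated 95% CI for the median: ({lo:.2f}, {hi:.2f})")
print(f"outer order-statistic coverage: {outer_cov:.4f}")
```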

11.
We propose and compare several methods of constructing wavelet-based confidence intervals for the self-similarity parameter in heavy-tailed observations. We use empirical coverage probabilities to assess the procedures by applying them to Linear Fractional Stable Motion with many choices of parameters. We find that the asymptotic confidence intervals provide empirical coverage often much lower than nominal. We recommend the use of resampling confidence intervals. We also propose a procedure for monitoring the constancy of the self-similarity parameter and apply it to Ethernet data sets.

12.
Confidence intervals for parameters of distributions with discrete sample spaces will be less conservative (i.e. have smaller coverage probabilities that are closer to the nominal level) when defined by inverting a test that does not require equal probability in each tail. However, the P-value obtained from such tests can exhibit undesirable properties, which in turn result in undesirable properties in the associated confidence intervals. We illustrate these difficulties using P-values for binomial proportions and the difference between binomial proportions.
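To see how the inversion choice affects conservatism, this sketch computes exact coverage, by enumeration over the sample space, of the equal-tailed Clopper-Pearson interval and the Wilson score interval (which does not impose equal tail probabilities). The sample size and grid of p values are illustrative.

```python
import numpy as np
from scipy import stats

n, alpha = 40, 0.05
z = stats.norm.ppf(1 - alpha / 2)
xs = np.arange(n + 1)

# Clopper-Pearson: inverts two equal-tailed exact binomial tests.
cp_lo = np.where(xs == 0, 0.0, stats.beta.ppf(alpha / 2, xs, n - xs + 1))
cp_hi = np.where(xs == n, 1.0, stats.beta.ppf(1 - alpha / 2, xs + 1, n - xs))

# Wilson: inverts the score test, with no equal-tail constraint.
ph = xs / n
mid = (ph + z**2 / (2 * n)) / (1 + z**2 / n)
half = z * np.sqrt(ph * (1 - ph) / n + z**2 / (4 * n**2)) / (1 + z**2 / n)
wi_lo, wi_hi = mid - half, mid + half

def exact_coverage(lo, hi, p):
    """Sum Bin(n, p) probabilities of the x values whose interval covers p."""
    pmf = stats.binom.pmf(xs, n, p)
    return pmf[(lo <= p) & (p <= hi)].sum()

for p in (0.05, 0.1, 0.3, 0.5):
    print(f"p={p:4}: CP {exact_coverage(cp_lo, cp_hi, p):.4f}  "
          f"Wilson {exact_coverage(wi_lo, wi_hi, p):.4f}")
```

Clopper-Pearson stays at or above nominal everywhere, while Wilson oscillates around it: exactly the conservatism trade-off the abstract describes.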

13.

We consider a sieve bootstrap procedure to quantify the estimation uncertainty of long-memory parameters in stationary functional time series. We use a semiparametric local Whittle estimator to estimate the long-memory parameter. In the local Whittle estimator, the discrete Fourier transform and the periodogram are constructed from the first set of principal component scores via a functional principal component analysis. The sieve bootstrap procedure uses a general vector autoregressive representation of the estimated principal component scores. It generates bootstrap replicates that adequately mimic the dependence structure of the underlying stationary process. We first compute the estimated first set of principal component scores for each bootstrap replicate and then apply the semiparametric local Whittle estimator to estimate the memory parameter. By taking quantiles of the estimated memory parameters from these bootstrap replicates, we can nonparametrically construct confidence intervals for the long-memory parameter. As measured by the difference between empirical and nominal coverage probabilities at three significance levels, we demonstrate the advantage of the sieve bootstrap over asymptotic confidence intervals based on normality.

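A heavily simplified univariate sketch of the sieve bootstrap pipeline: an AR(p) sieve fitted by least squares, residual resampling, and the local Whittle estimator applied to each replicate. The functional principal component step is omitted, the toy AR(1) data and the bandwidth and order choices (m = n^0.65, p = n^(1/3)) are illustrative assumptions rather than the paper's settings.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(3)

def local_whittle_d(x, m=None):
    """Local Whittle estimator of the memory parameter d (Robinson-style)."""
    n = len(x)
    m = m or int(n ** 0.65)                      # illustrative bandwidth
    lam = 2 * np.pi * np.arange(1, m + 1) / n    # Fourier frequencies
    I = np.abs(np.fft.fft(x - x.mean())[1:m + 1]) ** 2 / (2 * np.pi * n)
    def R(d):
        return np.log(np.mean(lam ** (2 * d) * I)) - 2 * d * np.mean(np.log(lam))
    return minimize_scalar(R, bounds=(-0.49, 0.99), method="bounded").x

def ar_sieve_bootstrap(x, p, B):
    """Generate B bootstrap series from a fitted AR(p) with resampled residuals."""
    n = len(x)
    xc = x - x.mean()
    Y = xc[p:]                                   # least-squares AR(p) fit
    Z = np.column_stack([xc[p - k:n - k] for k in range(1, p + 1)])
    phi = np.linalg.lstsq(Z, Y, rcond=None)[0]
    eps = Y - Z @ phi
    eps -= eps.mean()
    reps = []
    for _ in range(B):
        e = rng.choice(eps, size=n + 100, replace=True)   # 100 burn-in steps
        xb = np.zeros(n + 100)
        for t in range(p, n + 100):
            xb[t] = phi @ xb[t - p:t][::-1] + e[t]
        reps.append(xb[100:] + x.mean())
    return reps

# toy AR(1) series stands in for the principal component scores
n = 500
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.6 * x[t - 1] + rng.normal()

d_hat = local_whittle_d(x)
d_boot = [local_whittle_d(xb)
          for xb in ar_sieve_bootstrap(x, p=int(n ** (1 / 3)), B=200)]
lo, hi = np.quantile(d_boot, [0.025, 0.975])
print(f"d_hat = {d_hat:.3f}, 95% sieve-bootstrap CI = ({lo:.3f}, {hi:.3f})")
```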

14.
Inverse sampling is an appropriate design for the second phase of capture-recapture experiments, providing an exactly unbiased estimator of the population size. However, the sampling distribution of the resulting estimator tends to be highly right-skewed for small recapture samples, so the traditional Wald-type confidence intervals appear to be inappropriate. The objective of this paper is to study the performance of interval estimators for the population size under inverse recapture sampling without replacement. To this aim, we consider the Wald-type, logarithmic transformation-based, Wilson score, likelihood ratio and exact methods. We also propose some bootstrap confidence intervals for the population size, including the with-replacement bootstrap (BWR), the without-replacement bootstrap (BWO), and the Rao-Wu rescaling method. A Monte Carlo simulation is employed to evaluate the performance of the suggested methods in terms of coverage probability, error rates and standardized average length. Our results show that the likelihood ratio and exact confidence intervals are preferred to the other competitors, having coverage probabilities close to the desired nominal level for any sample size, with more balanced error rates for the exact method and shorter length for the likelihood ratio method. The BWO and Rao-Wu rescaling methods may also provide good intervals in some situations; however, their coverage probabilities are not invariant with respect to the population parameters, so they must be used with care.

15.
A stratified analysis of the differences in proportions has been widely employed in epidemiological research, social sciences, and drug development. It provides a useful framework for combining data across strata to produce a common effect. However, for rare events with incidence rates close to zero, popular confidence intervals for risk differences in a stratified analysis may not have appropriate coverage probabilities that approach the nominal confidence levels and the algorithms may fail to produce a valid confidence interval because of zero events in both the arms of a stratum. The main objective of this study is to evaluate the performance of certain methods commonly employed to construct confidence intervals for stratified risk differences when the response probabilities are close to a boundary value of zero or one. Additionally, we propose an improved stratified Miettinen–Nurminen confidence interval that exhibits a superior performance over standard methods while avoiding computational difficulties involving rare events. The proposed method can also be employed when the response probabilities are close to one.

16.
This article studies the construction of a Bayesian confidence interval for the ratio of marginal probabilities in matched-pair designs. Under a Dirichlet prior distribution, the exact posterior distribution of the ratio is derived. The tail confidence interval and the highest posterior density (HPD) interval are studied, and their frequentist performances are investigated by simulation in terms of mean coverage probability and mean expected length. An advantage of the Bayesian confidence interval is that it is always well defined for any data structure and has a shorter mean expected width. We also find that the Bayesian tail interval under the Jeffreys prior performs as well as or better than the frequentist confidence intervals.
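A short sketch of the tail (equal-tailed) credible interval: with a conjugate Dirichlet prior the posterior of the four cell probabilities is again Dirichlet, so the ratio's posterior quantiles can be read off Monte Carlo draws. The counts and the Jeffreys-type prior below are illustrative, and the HPD interval is not shown.

```python
import numpy as np

rng = np.random.default_rng(4)

# matched-pair counts: (yes,yes), (yes,no), (no,yes), (no,no) -- illustrative
counts = np.array([25, 10, 4, 11])
prior = np.full(4, 0.5)           # Jeffreys-type Dirichlet(1/2,...,1/2) prior

# posterior is Dirichlet(counts + prior); sample the cell probabilities
p = rng.dirichlet(counts + prior, size=100_000)
ratio = (p[:, 0] + p[:, 1]) / (p[:, 0] + p[:, 2])   # p_{1+} / p_{+1}

lo, hi = np.quantile(ratio, [0.025, 0.975])         # equal-tailed interval
print(f"posterior median ratio: {np.median(ratio):.3f}")
print(f"95% tail credible interval: ({lo:.3f}, {hi:.3f})")
```

Because the posterior is proper even when some cells are empty, the interval is well defined for any data structure, which is the advantage noted above.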

17.
In this paper, we investigate four existing and three new confidence interval estimators for the negative binomial proportion (i.e., the proportion under inverse/negative binomial sampling). An extensive and systematic comparative study of these confidence interval estimators through Monte Carlo simulations is presented. Their performance is evaluated in terms of coverage probabilities and expected interval widths. Our simulation studies suggest that the confidence interval estimator based on the saddlepoint approximation is more appealing for large coverage levels (i.e., significance levels of 1% or less), whereas the score confidence interval estimator is more desirable at commonly used coverage levels (significance levels above 1%). We illustrate these confidence interval construction methods with a real data set from a maternal congenital heart disease study.

18.
This article examines confidence intervals for the single coefficient of variation and the difference of coefficients of variation in the two-parameter exponential distributions, using the method of variance of estimates recovery (MOVER), the generalized confidence interval (GCI), and the asymptotic confidence interval (ACI). In simulation, the results indicate that coverage probabilities of the GCI maintain the nominal level in general. The MOVER performs well in terms of coverage probability when data only consist of positive values, but it has wider expected length. The coverage probabilities of the ACI satisfy the target for large sample sizes. We also illustrate our confidence intervals using a real-world example in the area of medical science.
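A sketch of the GCI construction for a single coefficient of variation, using the standard pivotal results for the two-parameter exponential: 2n(X_(1) - mu)/sigma ~ chi-square(2) and 2n*sigma_hat/sigma ~ chi-square(2n - 2), independently. The data and sample size are illustrative, and the MOVER and difference-of-CV intervals discussed above are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(5)
n, B = 40, 50_000

# illustrative data from Exp(mu=2, sigma=1); true CV = sigma/(mu+sigma) = 1/3
x = 2.0 + rng.exponential(1.0, size=n)
mu_hat, sig_hat = x.min(), x.mean() - x.min()

# generalized pivotal quantities built from the two chi-square pivots
U = rng.chisquare(2 * n - 2, size=B)
V = rng.chisquare(2, size=B)
sig_G = 2 * n * sig_hat / U            # GPQ for sigma
mu_G = mu_hat - sig_G * V / (2 * n)    # GPQ for mu
cv_G = sig_G / (mu_G + sig_G)          # GPQ for the coefficient of variation

lo, hi = np.quantile(cv_G, [0.025, 0.975])
print(f"95% GCI for the coefficient of variation: ({lo:.3f}, {hi:.3f})")
```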

19.
The primary goal of this paper is to examine the small sample coverage probability and size of jackknife confidence intervals centered at a Stein-rule estimator. A Monte Carlo experiment is used to explore the coverage probabilities and lengths of nominal 90% and 95% delete-one and infinitesimal jackknife confidence intervals centered at the Stein-rule estimator; these are compared to those obtained using a bootstrap procedure.
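A sketch of one common delete-one jackknife interval (based on pseudo-values) for a single coefficient of a positive-part Stein-rule estimator. The shrinkage form, design, and level are illustrative assumptions; the paper's exact Stein-rule variant and the infinitesimal jackknife are not reproduced.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
n, p, alpha = 40, 5, 0.10

X = rng.normal(size=(n, p))
beta = np.array([1.0, 0.5, 0.0, -0.5, 0.25])
y = X @ beta + rng.normal(size=n)

def stein_coef(X, y, j=0):
    """Stein-rule shrinkage of the OLS vector; returns coefficient j."""
    n_, p_ = X.shape
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    s2 = np.sum((y - X @ b) ** 2) / (n_ - p_)
    shrink = 1.0 - (p_ - 2) * s2 / (b @ (X.T @ X) @ b)
    return max(shrink, 0.0) * b[j]        # positive-part variant

theta_hat = stein_coef(X, y)
# delete-one jackknife pseudo-values
pseudo = np.array([
    n * theta_hat - (n - 1) * stein_coef(np.delete(X, i, 0), np.delete(y, i))
    for i in range(n)
])
se = pseudo.std(ddof=1) / np.sqrt(n)
tq = stats.t.ppf(1 - alpha / 2, n - 1)
print(f"90% jackknife CI for beta_1: "
      f"({pseudo.mean() - tq * se:.3f}, {pseudo.mean() + tq * se:.3f})")
```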

20.
We construct new pivotals to obtain confidence bounds and confidence intervals for the mean of a stationary process, following the approach based on estimating functions. The new pivotals are compared with the standard pivotal based on studentization. We study the first four cumulants of each of these pivotals and explain why the pivotals based on the estimating-function approach yield better coverage probabilities. Simulation results comparing these pivotals are reported.
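For contrast, here is the standard studentized baseline in its simplest batch-means form; the estimating-function pivotals the abstract proposes are not implemented, and the AR(1) data and batch count are illustrative choices.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n, b = 1000, 20                      # series length, number of batches

# AR(1) series with mean 3 as a stand-in stationary process
x = np.empty(n)
x[0] = 3.0
for t in range(1, n):
    x[t] = 3.0 + 0.5 * (x[t - 1] - 3.0) + rng.normal()

means = x.reshape(b, n // b).mean(axis=1)    # non-overlapping batch means
center, se = means.mean(), means.std(ddof=1) / np.sqrt(b)
tq = stats.t.ppf(0.975, b - 1)
print(f"studentized 95% CI for the mean: "
      f"({center - tq * se:.3f}, {center + tq * se:.3f})")
```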
