Similar Documents
Found 20 similar documents (search time: 824 ms)
1.
The generalized bootstrap is a parametric bootstrap method in which the underlying distribution function is estimated by fitting a generalized lambda distribution to the observed data. In this study, the generalized bootstrap is compared with the traditional parametric and non-parametric bootstrap methods for estimating quantiles at different levels, especially high quantiles. The performance of the three methods is evaluated in terms of the coverage rate, average interval width, and standard deviation of the width of the 95% bootstrap confidence intervals. Simulation results show that the generalized bootstrap has overall better performance than the non-parametric bootstrap in high quantile estimation.
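The parametric/non-parametric comparison above can be sketched in a few lines of NumPy. This is a minimal illustration, not the article's method: a fitted normal distribution stands in for the generalized lambda fit, which would require a dedicated GLD estimation routine.

```python
import numpy as np

def bootstrap_quantile_ci(x, q=0.95, B=2000, parametric=False, seed=0):
    """95% percentile bootstrap CI for the q-th quantile.

    parametric=True resamples from a fitted normal distribution (a
    stand-in here for the generalized lambda fit of the article);
    parametric=False is the ordinary non-parametric bootstrap.
    """
    rng = np.random.default_rng(seed)
    n = len(x)
    mu, sigma = x.mean(), x.std(ddof=1)
    stats = np.empty(B)
    for b in range(B):
        xb = rng.normal(mu, sigma, n) if parametric else rng.choice(x, n, replace=True)
        stats[b] = np.quantile(xb, q)
    return np.quantile(stats, [0.025, 0.975])

rng = np.random.default_rng(1)
x = rng.normal(size=200)
lo, hi = bootstrap_quantile_ci(x, q=0.95)                    # non-parametric
plo, phi_ = bootstrap_quantile_ci(x, q=0.95, parametric=True)  # parametric
```

For high quantiles the non-parametric version can only recycle the few largest observations, which is exactly where a good parametric fit helps.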

2.
One of the indicators for evaluating the capability of a process is the process capability index. In this article, bootstrap confidence intervals of the generalized process capability index (GPCI) proposed by Maiti et al. are studied through simulation when the underlying distributions are the Lindley and Power Lindley distributions. The maximum likelihood method is used to estimate the parameters of the models. Three bootstrap confidence intervals, namely the standard bootstrap (SB), percentile bootstrap (PB), and bias-corrected percentile bootstrap (BCPB), are considered for obtaining confidence intervals of the GPCI. A Monte Carlo simulation is used to investigate the estimated coverage probabilities and average widths of the bootstrap confidence intervals. Simulation results show that the estimated coverage probabilities of the percentile bootstrap and bias-corrected percentile bootstrap confidence intervals are closer to the nominal confidence level than those of the standard bootstrap confidence interval. Finally, three real datasets are analyzed for illustrative purposes.
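The three interval types named above (SB, PB, BCPB) can be sketched generically for any plug-in statistic. This sketch uses the sample mean on toy data rather than the GPCI under (Power) Lindley models studied in the article:

```python
import numpy as np
from statistics import NormalDist

def bootstrap_cis(x, stat=np.mean, B=2000, alpha=0.05, seed=0):
    """Standard (SB), percentile (PB), and bias-corrected percentile
    (BCPB) bootstrap confidence intervals for stat(x)."""
    rng = np.random.default_rng(seed)
    theta = stat(x)
    boot = np.array([stat(rng.choice(x, len(x), replace=True)) for _ in range(B)])
    z = NormalDist().inv_cdf(1 - alpha / 2)
    # SB: point estimate +/- z * bootstrap standard error
    sb = (theta - z * boot.std(ddof=1), theta + z * boot.std(ddof=1))
    # PB: empirical quantiles of the bootstrap distribution
    pb = tuple(np.quantile(boot, [alpha / 2, 1 - alpha / 2]))
    # BCPB: shift the quantile levels by the bias-correction constant z0
    z0 = NormalDist().inv_cdf(float(np.mean(boot < theta)))
    bcpb = tuple(np.quantile(boot, [NormalDist().cdf(2 * z0 - z),
                                    NormalDist().cdf(2 * z0 + z)]))
    return sb, pb, bcpb

rng = np.random.default_rng(5)
x = rng.exponential(size=60)        # skewed sample
sb, pb, bcpb = bootstrap_cis(x)
```

When the bootstrap distribution is skewed, PB and BCPB adapt their endpoints while SB stays symmetric, which is why the former tend to hold the nominal level better.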

3.
Generalized order statistics (gos) were introduced by Kamps [1995. A Concept of Generalized Order Statistics. Teubner, Stuttgart] to unify several models of ordered random variables (rv's), e.g., (ordinary) order statistics (oos), records, and sequential order statistics (sos). For a wide subclass of gos that includes oos and sos, the possible limit distribution functions (df's) of the maximum gos were obtained in Nasri-Roudsari [1996. Extreme value theory of generalized order statistics. J. Statist. Plann. Inference 55, 281–297]. In this paper, for this subclass, necessary and sufficient conditions for weak convergence, as well as the forms of the possible limit df's of extreme, intermediate, and central gos, are derived. These results are extended to a wider subclass.

4.
Missing observations due to non-response are commonly encountered in data collected from sample surveys. The focus of this article is on item non-response, which is often handled by filling in (or imputing) missing values using the observed responses (donors). Random imputation (single or fractional) is used within homogeneous imputation classes that are formed on the basis of categorical auxiliary variables observed on all the sampled units. A uniform response rate within classes is assumed, but that rate is allowed to vary across classes. We construct confidence intervals (CIs) for a population parameter that is defined as the solution to a smooth estimating equation with data collected using stratified simple random sampling. The imputation classes are assumed to be formed across strata. Fractional imputation with a fixed number of random draws is used to obtain an imputed estimating function. An empirical likelihood inference method under the fractional imputation is proposed and its asymptotic properties are derived. Two asymptotically correct bootstrap methods are developed for constructing the desired CIs. In a simulation study, the proposed bootstrap methods are shown to outperform traditional bootstrap methods and some non-bootstrap competitors under various simulation settings. The Canadian Journal of Statistics 47: 281–301; 2019 © 2019 Statistical Society of Canada

5.

This article investigates the finite sample properties of a range of inference methods for propensity score-based matching and weighting estimators frequently applied to evaluate the average treatment effect on the treated. We analyze both asymptotic approximations and bootstrap methods for computing variances and confidence intervals in our simulation designs, which are based on German register data and U.S. survey data. We vary the designs with respect to treatment selectivity, effect heterogeneity, share of treated, and sample size. The results suggest that, in general, theoretically justified bootstrap procedures (i.e., the wild bootstrap for pair matching and the standard bootstrap for “smoother” treatment effect estimators) dominate the asymptotic approximations in terms of coverage rates for both matching and weighting estimators. Most findings are robust across simulation designs and estimators.

6.
Based on recent developments in the field of operations research, we propose two adaptive resampling algorithms for estimating bootstrap distributions. One algorithm applies the principle of the recently proposed cross-entropy (CE) method for rare event simulation and does not require calculating the resampling probability weights via numerical optimization methods (e.g., Newton's method), whereas the other can be viewed as a multi-stage extension of the classical two-step variance minimization approach. The two algorithms can easily be used as part of a general algorithm for Monte Carlo calculation of bootstrap confidence intervals and tests, and are especially useful in estimating rare event probabilities. We analyze the theoretical properties of both algorithms in an idealized setting and carry out simulation studies to demonstrate their performance. Empirical results on one-sample and two-sample problems, as well as a real survival data set, show that the proposed algorithms are not only superior to traditional approaches but may also provide more than an order-of-magnitude gain in computational efficiency.

7.
In this article, we propose a new technique for constructing confidence intervals for the mean of a noisy sequence with multiple change-points. We use the weighted bootstrap to generalize the bootstrap aggregating, or bagging, estimator. A standard deviation formula for the bagging estimator is introduced, based on which smoothed confidence intervals are constructed. To further improve the performance of the smoothed interval for weak signals, we suggest a strategy of adaptively choosing between the percentile intervals and the smoothed intervals. A new intensity plot is proposed to visualize the pattern of the change-points. We also propose a new change-point estimator based on the intensity plot, which has superior performance in comparison with state-of-the-art segmentation methods. The finite sample performance of the confidence intervals and the change-point estimator is evaluated through Monte Carlo studies and illustrated with a real data example.

8.
In this study, we investigate the concept of the mean response for a treatment group, as well as its estimation and prediction, in generalized linear models with a subject-wise random effect. Generalized linear models are commonly used to analyze categorical data. The model-based mean for a treatment group usually estimates the response at the mean covariate. However, the mean response of the treatment group for the studied population is at least equally important in the context of clinical trials. New methods have been proposed to estimate such a mean response in generalized linear models, but only when there are no random effects in the model. We suggest that, in a generalized linear mixed model (GLMM), there are at least two possible definitions of a treatment group mean response that can serve as estimation/prediction targets. The estimation of these treatment group means is important for healthcare professionals to understand the absolute benefit versus risk. For both treatment group means, we propose a new set of methods for estimating/predicting them in a GLMM with a univariate subject-wise random effect. Our methods also suggest an easy way of constructing corresponding confidence and prediction intervals for both possible treatment group means. Simulations show that the proposed confidence and prediction intervals provide correct empirical coverage probability under most circumstances. The proposed methods have also been applied to analyze hypoglycemia data from diabetes clinical trials.

9.
Standard algorithms for the construction of iterated bootstrap confidence intervals are computationally very demanding, requiring nested levels of bootstrap resampling. We propose an alternative approach to constructing double bootstrap confidence intervals that involves replacing the inner level of resampling by an analytical approximation. This approximation is based on saddlepoint methods and a tail probability approximation of DiCiccio and Martin (1991). Our technique significantly reduces the computational expense of iterated bootstrap calculations. A formal algorithm for the construction of our approximate iterated bootstrap confidence intervals is presented, and some crucial practical issues arising in its implementation are discussed. Our procedure is illustrated in the case of constructing confidence intervals for ratios of means using both real and simulated data. We repeat an experiment of Schenker (1985) involving the construction of bootstrap confidence intervals for a variance and demonstrate that our technique makes feasible the construction of accurate bootstrap confidence intervals in that context. Finally, we investigate the use of our technique in a more complex setting, that of constructing confidence intervals for a correlation coefficient.
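The expense being avoided can be made concrete with the brute-force nested loop. The sketch below estimates the actual coverage of a percentile interval for the mean via a full inner resampling level; it is this inner loop, B × M resamples, that the article replaces with a saddlepoint approximation (the mean statistic and sample sizes are illustrative choices, not the article's examples):

```python
import numpy as np

def percentile_interval_coverage(x, B=200, M=100, nominal=0.95, seed=0):
    """Brute-force double bootstrap: for each outer resample, build a
    percentile interval for the mean from M inner resamples and check
    whether it covers the original estimate.  The resulting coverage
    estimate is what calibrates the interval's nominal level."""
    rng = np.random.default_rng(seed)
    n, a = len(x), (1 - nominal) / 2
    theta_hat = x.mean()
    hits = 0
    for _ in range(B):
        xb = rng.choice(x, n, replace=True)                       # outer level
        inner = np.array([rng.choice(xb, n, replace=True).mean()  # inner level
                          for _ in range(M)])
        lo, hi = np.quantile(inner, [a, 1 - a])
        hits += lo <= theta_hat <= hi
    return hits / B

rng = np.random.default_rng(6)
cov = percentile_interval_coverage(rng.normal(size=60))
```

Even this small example performs B × M = 20,000 resamples; replacing the inner loop by an analytical approximation cuts the cost back to a single bootstrap level.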

10.
In this paper, we investigate four existing and three new confidence interval estimators for the negative binomial proportion (i.e., the proportion under inverse/negative binomial sampling). An extensive and systematic comparative study of these confidence interval estimators through Monte Carlo simulations is presented. The performance of these confidence intervals is evaluated in terms of their coverage probabilities and expected interval widths. Our simulation studies suggest that the confidence interval estimator based on the saddlepoint approximation is more appealing for large coverage levels (e.g., nominal level ≤ 1%), whereas the score confidence interval estimator is more desirable for commonly used coverage levels (e.g., nominal level > 1%). We illustrate these confidence interval construction methods with a real data set from a maternal congenital heart disease study.
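The evaluation criteria used here, coverage probability and expected width, are straightforward to estimate by Monte Carlo. As a minimal sketch, the harness below evaluates the ordinary binomial Wald interval (a simpler stand-in; the article's negative binomial score and saddlepoint intervals are not reproduced):

```python
import numpy as np

def coverage_and_width(ci_fn, p=0.3, n=50, reps=2000, seed=0):
    """Monte Carlo estimate of coverage probability and expected width
    of a confidence interval procedure ci_fn(x, n) for a proportion p."""
    rng = np.random.default_rng(seed)
    cover, width = 0, 0.0
    for _ in range(reps):
        x = rng.binomial(n, p)            # simulate one sample
        lo, hi = ci_fn(x, n)
        cover += lo <= p <= hi            # did the interval cover p?
        width += hi - lo
    return cover / reps, width / reps

def wald_ci(x, n, z=1.96):
    """Ordinary Wald interval for a binomial proportion."""
    ph = x / n
    h = z * np.sqrt(ph * (1 - ph) / n)
    return ph - h, ph + h

cov, w = coverage_and_width(wald_ci)
```

Swapping in a different `ci_fn` is all it takes to rank competing interval estimators on the same simulated samples.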

11.
Various bootstrap methods for variance estimation and confidence intervals in complex survey data, where sampling is done without replacement, have been proposed in the literature. The oldest, and perhaps the most intuitively appealing, is the without-replacement bootstrap (BWO) method proposed by Gross (1980). Unfortunately, the BWO method is only applicable to very simple sampling situations. We first introduce extensions of the BWO method to more complex sampling designs. The performance of the BWO and two other bootstrap methods, the rescaling bootstrap (Rao and Wu 1988) and the mirror-match bootstrap (Sitter 1992), is then compared through a simulation study. Together, these three methods encompass the various bootstrap proposals.

12.
This article deals with the bootstrap as an alternative method for constructing confidence intervals for the hyperparameters of structural models. The bootstrap procedure considered is the classical nonparametric bootstrap applied to the residuals of the fitted model, following a well-known approach. The performance of this procedure is evaluated empirically through Monte Carlo simulations implemented in Ox. Asymptotic and percentile bootstrap confidence intervals for the hyperparameters are built and compared by means of their coverage percentages. The results are similar, but the bootstrap procedure performs better for small sample sizes. The methods are applied to a real time series, and confidence intervals are built for the hyperparameters.
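The residual bootstrap idea, refit the model, resample its centered residuals, and rebuild artificial series recursively, can be sketched on a simple AR(1) model standing in for the structural (state-space) models of the article:

```python
import numpy as np

def ar1_residual_bootstrap(y, B=500, seed=0):
    """Percentile bootstrap CI for an AR(1) coefficient, resampling the
    centered residuals of the fitted model (an AR(1) stand-in for the
    structural-model hyperparameters discussed in the article)."""
    rng = np.random.default_rng(seed)
    phi = np.polyfit(y[:-1], y[1:], 1)[0]        # slope = AR(1) estimate
    resid = y[1:] - phi * y[:-1]
    resid = resid - resid.mean()                 # center the residuals
    phis = np.empty(B)
    for b in range(B):
        e = rng.choice(resid, len(resid), replace=True)
        yb = np.empty_like(y)
        yb[0] = y[0]
        for t in range(1, len(y)):               # rebuild the series
            yb[t] = phi * yb[t - 1] + e[t - 1]
        phis[b] = np.polyfit(yb[:-1], yb[1:], 1)[0]
    return phi, tuple(np.quantile(phis, [0.025, 0.975]))

rng = np.random.default_rng(7)
y = np.zeros(300)
for t in range(1, 300):                          # simulate AR(1), phi = 0.5
    y[t] = 0.5 * y[t - 1] + rng.normal()
phi_hat, (lo, hi) = ar1_residual_bootstrap(y)
```

Resampling residuals rather than observations preserves the model's dynamic structure, which is the point of the residual bootstrap for time series models.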

13.
Generalized partially linear varying-coefficient models (GPLVCM) are frequently used in statistical modeling. However, statistical inference for the GPLVCM, such as confidence region/interval construction, has not been well developed. In this article, empirical likelihood-based inference for the parametric components of the GPLVCM is investigated. Based on local linear estimators of the GPLVCM, an estimated empirical likelihood-based statistic is proposed. We show that the resulting statistic is asymptotically non-standard chi-squared. Confidence regions for the parametric components are constructed by the proposed empirical likelihood method. In addition, when some components of the parameter are of particular interest, the construction of their confidence intervals is also considered. A simulation study is undertaken to compare the empirical likelihood method with existing methods in terms of coverage accuracy and average length. The proposed method is applied to a real example.

14.
The generalized half-normal (GHN) distribution and progressive type-II censoring are considered in this article for studying statistical inference in constant-stress accelerated life testing. The EM algorithm is used to calculate the maximum likelihood estimates. The Fisher information matrix is obtained via the missing information principle and is used to construct asymptotic confidence intervals. Interval estimation is further discussed through bootstrap intervals. The Tierney and Kadane method, an importance sampling procedure, and the Metropolis-Hastings algorithm are used to compute Bayesian estimates. Furthermore, predictive estimates for censored data and the related prediction intervals are obtained. We consider three optimality criteria to find the optimal stress level. A real data set is used to illustrate the value of the GHN distribution as an alternative lifetime model to well-known distributions. Finally, a simulation study is provided with discussion.

15.
This paper considers problems of interval estimation and hypothesis testing for the generalized Lorenz curve under the Pareto distribution. Our approach is based on the concepts of generalized test variables and generalized pivotal quantities. The merits of the proposed procedures are assessed numerically and compared with asymptotic and bootstrap methods. Empirical evidence shows that the coverage accuracy of the proposed confidence intervals and the type I error control of the proposed exact tests are satisfactory. For illustration purposes, a real data set on the median income of 20 occupations in the United States Census of Population is analysed.

16.
Confidence intervals obtained by bootstrap methods and by normal approximation are compared, based on output data from terminating and steady-state simulations. Bootstrap intervals are equal to or better than normal approximation intervals in actual coverage probability. Furthermore, bootstrap methods capture the skewness in the distribution of outputs and are therefore more desirable than the normal approximation.
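The contrast can be sketched directly: a symmetric normal-approximation interval versus a percentile bootstrap interval on skewed simulation output (exponential data here as an illustrative stand-in for simulation output such as waiting times):

```python
import numpy as np

def normal_ci(y, z=1.96):
    """Normal-approximation 95% CI for the mean of simulation output."""
    h = z * y.std(ddof=1) / np.sqrt(len(y))
    return y.mean() - h, y.mean() + h

def percentile_ci(y, B=2000, seed=0):
    """Percentile bootstrap 95% CI; unlike the normal interval it need
    not be symmetric, so it can reflect skewness in the output."""
    rng = np.random.default_rng(seed)
    means = np.array([rng.choice(y, len(y), replace=True).mean()
                      for _ in range(B)])
    return tuple(np.quantile(means, [0.025, 0.975]))

rng = np.random.default_rng(2)
y = rng.exponential(size=80)        # skewed simulation output
nlo, nhi = normal_ci(y)
blo, bhi = percentile_ci(y)
```

With skewed output, the bootstrap endpoints sit asymmetrically around the sample mean, matching the shape of the sampling distribution rather than forcing symmetry.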

17.
In survey sampling, policy decisions regarding the allocation of resources to sub-groups of a population depend on reliable predictors of their underlying parameters. However, in some sub-groups, called small areas due to small sample sizes relative to the population, the information needed for reliable estimation is typically not available. Consequently, data on a coarser scale are used to predict the characteristics of small areas. Mixed models are the primary tools in small area estimation (SAE) and also borrow information from alternative sources (e.g., previous surveys and administrative and census data sets). In many circumstances, small area predictors are associated with location. For instance, in the case of chronic disease or cancer, it is important for policy makers to understand spatial patterns of disease in order to determine small areas with high risk of disease and establish prevention strategies. The literature considering SAE with spatial random effects is sparse and mostly in the context of spatial linear mixed models. In this article, small area models are proposed for the class of spatial generalized linear mixed models to obtain small area predictors and corresponding second-order unbiased mean squared prediction errors via Taylor expansion and a parametric bootstrap approach. The performance of the proposed approach is evaluated through simulation studies and application of the models to a real esophageal cancer data set from Minnesota, U.S.A. The Canadian Journal of Statistics 47: 426–437; 2019 © 2019 Statistical Society of Canada

18.
Twenty-four-hour urinary excretion of nicotine equivalents, a biomarker for exposure to cigarette smoke, has been widely used in biomedical studies in recent years. An accurate estimate is important for examining human exposure to tobacco smoke. The objective of this article is to compare bootstrap confidence intervals for nicotine equivalents with the standard confidence intervals derived from a linear mixed model (LMM) and a generalized estimating equation (GEE). We use the percentile bootstrap method because it has practical value for real-life applications and works well with nicotine data. To preserve the within-subject correlation of nicotine equivalents between repeated measures, we bootstrap the repeated measures of each subject as a vector. The results indicate that the bootstrapped estimates in most cases are better than those from the LMM and GEE without the bootstrap.
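Bootstrapping "the repeated measures of each subject as a vector" is a cluster bootstrap over subjects. A minimal sketch on simulated repeated-measures data (the subjects-by-repeats layout and effect sizes are illustrative assumptions, not the nicotine data):

```python
import numpy as np

def subject_bootstrap_ci(data, B=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for the overall mean of repeated measures.
    Each subject's row (vector of repeats) is resampled as a unit,
    preserving the within-subject correlation."""
    rng = np.random.default_rng(seed)
    data = np.asarray(data)                       # shape (subjects, repeats)
    n = data.shape[0]
    means = np.array([data[rng.integers(0, n, n)].mean() for _ in range(B)])
    return tuple(np.quantile(means, [alpha / 2, 1 - alpha / 2]))

rng = np.random.default_rng(3)
subj = rng.normal(size=(40, 1))                       # subject random effects
data = subj + rng.normal(scale=0.3, size=(40, 4))     # 4 correlated repeats each
lo, hi = subject_bootstrap_ci(data)
```

Resampling individual observations instead of whole subjects would break the within-subject correlation and understate the variance, which is why the subject is the resampling unit here.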

19.
Risk estimation is an important statistical question for the purposes of selecting a good estimator (i.e., model selection) and assessing its performance (i.e., estimating generalization error). This article introduces a general framework for cross-validation and derives distributional properties of cross-validated risk estimators in the context of estimator selection and performance assessment. Arbitrary classes of estimators are considered, including density estimators and predictors for both continuous and polychotomous outcomes. Results are provided for general full data loss functions (e.g., absolute and squared error, indicator, negative log density). A broad definition of cross-validation is used in order to cover leave-one-out cross-validation, V-fold cross-validation, Monte Carlo cross-validation, and bootstrap procedures. For estimator selection, finite sample risk bounds are derived and applied to establish the asymptotic optimality of cross-validation, in the sense that a selector based on a cross-validated risk estimator performs asymptotically as well as an optimal oracle selector based on the risk under the true, unknown data generating distribution. The asymptotic results are derived under the assumption that the size of the validation sets converges to infinity and hence do not cover leave-one-out cross-validation. For performance assessment, cross-validated risk estimators are shown to be consistent and asymptotically linear for the risk under the true data generating distribution and confidence intervals are derived for this unknown risk. Unlike previously published results, the theorems derived in this and our related articles apply to general data generating distributions, loss functions (i.e., parameters), estimators, and cross-validation procedures.
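The cross-validated risk estimator at the heart of this framework reduces, in its V-fold form, to a short loop: fit on V-1 folds, average the loss on the held-out fold, then average over folds. A minimal sketch with squared-error loss and a least-squares line as the candidate estimator (illustrative choices only, the framework covers arbitrary estimators and losses):

```python
import numpy as np

def vfold_risk(x, y, fit, loss, V=5, seed=0):
    """V-fold cross-validated risk estimate: average held-out loss of the
    estimator returned by fit(x_train, y_train)."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(x)), V)
    risks = []
    for v in range(V):
        test = folds[v]
        train = np.concatenate([folds[u] for u in range(V) if u != v])
        model = fit(x[train], y[train])           # fit on V-1 folds
        risks.append(loss(y[test], model(x[test])).mean())
    return float(np.mean(risks))

# Candidate estimator: least-squares line; loss: squared error.
fit = lambda x, y: (lambda c: (lambda t: c[0] * t + c[1]))(np.polyfit(x, y, 1))
loss = lambda y, yhat: (y - yhat) ** 2

rng = np.random.default_rng(4)
x = rng.uniform(-1, 1, 200)
y = 2 * x + rng.normal(scale=0.5, size=200)
risk = vfold_risk(x, y, fit, loss)
```

Comparing `vfold_risk` across several candidate `fit` functions on the same folds is exactly the estimator-selection use of cross-validation analyzed in the article.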

20.
The topic of this article is the estimation uncertainty of the Stock–Watson and Gonzalo–Granger permanent-transitory decompositions in the framework of the co-integrated vector autoregression. We suggest an approach to construct the confidence interval of the transitory component estimate in a given period (e.g., the latest observation) by conditioning on the observed data in that period. To calculate asymptotically valid confidence intervals, we use the delta method and two bootstrap variants. As an illustration, we analyze the uncertainty of (U.S.) output gap estimates in a system of output, consumption, and investment.
