Similar Literature
20 similar documents were retrieved (search time: 62 ms).
1.
The primary purpose of this paper is to develop a sequential Monte Carlo approximation to an ideal bootstrap estimate of the parameter of interest. Using the concept of fixed-precision approximation, we construct a sequential stopping rule for determining the number of bootstrap samples to be taken in order to achieve a specified precision of the Monte Carlo approximation. It is shown that the sequential Monte Carlo approximation is asymptotically efficient in the problems of estimating the bias and standard error of a given statistic. Efficient bootstrap resampling is discussed, and a numerical study is carried out to illustrate the theoretical results.
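A minimal sketch of the fixed-precision idea, not the paper's exact rule: bootstrap replicates are drawn in batches until the Monte Carlo error of the standard-error estimate is small relative to the estimate itself. The batch size, the relative-precision target eps, and the normal-theory error approximation are illustrative assumptions.

    import numpy as np

    def sequential_bootstrap_se(x, stat=np.mean, eps=0.01, batch=100, max_b=100_000, rng=None):
        # Illustrative stopping rule: stop once the approximate Monte Carlo error
        # of the bootstrap SE estimate drops below eps * SE. All tuning constants
        # here are assumptions, not values from the paper.
        rng = np.random.default_rng(rng)
        x = np.asarray(x)
        reps = []
        while len(reps) < max_b:
            reps.extend(stat(rng.choice(x, size=len(x), replace=True)) for _ in range(batch))
            b = len(reps)
            se_hat = np.std(reps, ddof=1)              # bootstrap SE estimate so far
            mc_err = se_hat / np.sqrt(2 * (b - 1))     # rough MC error of se_hat
            if mc_err < eps * se_hat:
                break
        return se_hat, len(reps)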

2.
Quasi-random sequences are known to give efficient numerical integration rules in many Bayesian statistical problems where the posterior distribution can be transformed into periodic functions on the n-dimensional hypercube. From this idea we develop a quasi-random approach to the generation of resamples used for Monte Carlo approximations to bootstrap estimates of bias, variance and distribution functions. We demonstrate a major difference between quasi-random bootstrap resamples, which are generated by deterministic algorithms and have no true randomness, and the usual pseudo-random bootstrap resamples generated by the classical bootstrap approach. Various quasi-random approaches are considered and are shown via a simulation study to result in approximants that are competitive in terms of efficiency when compared with other bootstrap Monte Carlo procedures such as balanced and antithetic resampling.
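As a rough illustration of the quasi-random idea (not the authors' constructions), one can map points of a scrambled Sobol sequence to bootstrap resample indices instead of using a pseudo-random generator. The statistic, resample count, and use of scipy.stats.qmc are assumptions made for the sketch.

    import numpy as np
    from scipy.stats import qmc

    def quasi_random_bootstrap(x, stat=np.mean, n_resamples=256):
        # Each n-dimensional quasi-random point in [0, 1)^n defines one resample of
        # size n by converting coordinates into observation indices.
        x = np.asarray(x)
        n = len(x)
        u = qmc.Sobol(d=n, scramble=True).random(n_resamples)   # n_resamples x n points
        idx = np.floor(u * n).astype(int)                       # indices in {0, ..., n-1}
        reps = np.array([stat(x[row]) for row in idx])
        return reps.mean() - stat(x), reps.std(ddof=1)          # bias and SE approximants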

3.
Alternative methods of estimating properties of unknown distributions include the bootstrap and the smoothed bootstrap. In the standard bootstrap setting, Johns (1988) introduced an importance resampling procedure that results in more accurate approximation to the bootstrap estimate of a distribution function or a quantile. With a suitable “exponential tilting” similar to that used by Johns, we derive a smoothed version of importance resampling in the framework of the smoothed bootstrap. Smoothed importance resampling procedures are developed for the estimation of distribution functions of the Studentized mean, the Studentized variance, and the correlation coefficient. Implementations of these procedures are presented via simulation results which concentrate on the problem of estimating the distribution functions of the Studentized mean and Studentized variance for different sample sizes and various pre-specified smoothing bandwidths for normal data; additional simulations were conducted for the estimation of quantiles of the distribution of the Studentized mean under an optimal smoothing bandwidth when the original data were simulated from three different parent populations: lognormal, t(3) and t(10). These results suggest that, in cases where it is advantageous to use the smoothed bootstrap rather than the standard bootstrap, the amount of resampling necessary might be substantially reduced by the use of importance resampling methods, and the efficiency gains depend on the bandwidth used in the kernel density estimation.
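A simplified sketch of importance resampling via exponential tilting for the standard (unsmoothed) bootstrap, in the spirit of the Johns-type procedure mentioned above, not the smoothed variant developed in the paper. The tilting constant lam and the upper-tail target are illustrative assumptions; weights are computed in log space for stability.

    import numpy as np

    def tilted_tail_probability(x, t, lam=1.5, n_resamples=2000, rng=None):
        # Estimate the bootstrap probability P*(resampled mean > t): draw resamples
        # from exponentially tilted cell probabilities, then re-weight each resample
        # by its likelihood ratio against uniform resampling.
        rng = np.random.default_rng(rng)
        x = np.asarray(x, float)
        n = len(x)
        score = (x - x.mean()) / x.std(ddof=1)
        p = np.exp(lam * score / np.sqrt(n))            # lam is an illustrative choice
        p /= p.sum()                                    # tilted resampling probabilities
        est = 0.0
        for _ in range(n_resamples):
            idx = rng.choice(n, size=n, replace=True, p=p)
            log_w = np.sum(np.log(1.0 / n) - np.log(p[idx]))   # log likelihood ratio
            est += np.exp(log_w) * (x[idx].mean() > t)
        return est / n_resamples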

4.
Importance resampling is an approach that uses exponential tilting to reduce the resampling necessary for the construction of nonparametric bootstrap confidence intervals. The properties of bootstrap importance confidence intervals are well established when the data is a smooth function of means and when there is no censoring. However, in the framework of survival or time-to-event data, the asymptotic properties of importance resampling have not been rigorously studied, mainly because of the unduly complicated theory incurred when data is censored. This paper uses extensive simulation to show that, for parameter estimates arising from fitting Cox proportional hazards models, importance bootstrap confidence intervals can be constructed if the importance resampling probabilities of the records for the n individuals in the study are determined by the empirical influence function for the parameter of interest. Our results show that, compared to uniform resampling, importance resampling improves the relative mean-squared-error (MSE) efficiency by a factor of nine (for n = 200). The efficiency increases significantly with sample size, is mildly associated with the amount of censoring, but decreases slightly as the number of bootstrap resamples increases. The extra CPU time requirement for calculating importance resamples is negligible when compared to the large improvement in MSE efficiency. The method is illustrated through an application to data on chronic lymphocytic leukemia, which highlights that the bootstrap confidence interval is the preferred alternative to large sample inferences when the distribution of a specific covariate deviates from normality. Our results imply that, because of its computational efficiency, importance resampling is recommended whenever bootstrap methodology is implemented in a survival framework. Its use is particularly important when complex covariates are involved or the survival problem to be solved is part of a larger problem; for instance, when determining confidence bounds for models linking survival time with clusters identified in gene expression microarray data.

5.
Based on recent developments in the field of operations research, we propose two adaptive resampling algorithms for estimating bootstrap distributions. One algorithm applies the principle of the recently proposed cross-entropy (CE) method for rare event simulation, and does not require calculation of the resampling probability weights via numerical optimization methods (e.g., Newton's method), whereas the other algorithm can be viewed as a multi-stage extension of the classical two-step variance minimization approach. The two algorithms can be easily used as part of a general algorithm for Monte Carlo calculation of bootstrap confidence intervals and tests, and are especially useful in estimating rare event probabilities. We analyze theoretical properties of both algorithms in an idealized setting and carry out simulation studies to demonstrate their performance. Empirical results on both one-sample and two-sample problems as well as a real survival data set show that the proposed algorithms are not only superior to traditional approaches, but may also provide computational efficiency gains of more than an order of magnitude.
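A bare-bones sketch of a single cross-entropy update for multinomial resampling weights when estimating a bootstrap rare-event probability. This only illustrates the CE principle named above and is not the authors' algorithm; the pilot-sample size, the "elite" criterion (resampled mean exceeding t), and the mean statistic are assumptions. In practice the update would be iterated, moving the level t toward the target.

    import numpy as np

    def ce_update(x, t, p, n_pilot=500, rng=None):
        # One CE iteration: resample under the current weights p, keep "elite"
        # resamples whose mean exceeds t, and set the new weight of observation i
        # proportional to its likelihood-ratio-weighted frequency among the elites.
        # This closed-form update needs no numerical optimization.
        rng = np.random.default_rng(rng)
        x = np.asarray(x, float)
        n = len(x)
        counts = np.zeros(n)
        for _ in range(n_pilot):
            idx = rng.choice(n, size=n, replace=True, p=p)
            if x[idx].mean() > t:                                    # elite resample
                log_w = np.sum(np.log(1.0 / n) - np.log(p[idx]))     # ratio vs uniform resampling
                counts += np.exp(log_w) * np.bincount(idx, minlength=n)
        return counts / counts.sum() if counts.sum() > 0 else p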

6.
Variance estimation under systematic sampling with probability proportional to size is known to be a difficult problem. We attempt to tackle this problem with the bootstrap resampling method. It is shown that the usual way to bootstrap fails to give satisfactory variance estimates. As a remedy, we propose a double bootstrap method which is based on certain working models and involves two levels of resampling. Unlike existing methods, which deal exclusively with the Horvitz–Thompson estimator, the double bootstrap method can be used to estimate the variance of any statistic. We illustrate this within the context of both mean and median estimation. Empirical results based on five natural populations are encouraging.

7.
For estimating the distribution of a standardized statistic, the bootstrap estimate is known to be local asymptotic minimax. Various computational techniques have been developed to improve on the simulation efficiency of uniform resampling, the standard Monte Carlo approach to approximating the bootstrap estimate. Two new approaches are proposed which give accurate yet simple approximations to the bootstrap estimate. The second of the approaches even improves the convergence rate of the simulation error. A simulation study examines the performance of these two approaches in comparison with other modified bootstrap estimates.

8.
This article presents a new test for unit roots based on least absolute deviation estimation, specially designed to work for time series with autoregressive errors. The methodology used is a bootstrap scheme based on estimating a model and then the innovations. The resampling part is performed under the null hypothesis and, as is customary in bootstrap procedures, is automatic and does not rely on the calculation of any nuisance parameter. The validity of the procedure is established, and the statistic proposed is shown to converge to the correct asymptotic distribution. To analyze the performance of the test for finite samples, a Monte Carlo study is conducted, showing very good behavior in many different situations.
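A schematic residual bootstrap under the unit-root null, loosely following the recipe in the abstract (fit a model, estimate the innovations, resample them under H0). It uses a plain OLS Dickey–Fuller-style coefficient statistic purely for illustration, not the LAD-based statistic proposed in the article; the autoregressive-error structure is also omitted.

    import numpy as np

    def df_coefficient_stat(y):
        # T * rho_hat from the regression of diff(y) on the lagged level.
        dy, ylag = np.diff(y), y[:-1]
        return len(ylag) * np.sum(ylag * dy) / np.sum(ylag ** 2)

    def bootstrap_unit_root_test(y, n_boot=999, rng=None):
        rng = np.random.default_rng(rng)
        y = np.asarray(y, float)
        observed = df_coefficient_stat(y)
        resid = np.diff(y) - np.diff(y).mean()       # innovations estimated under H0
        null_stats = np.empty(n_boot)
        for b in range(n_boot):
            e = rng.choice(resid, size=len(resid), replace=True)
            y_star = np.concatenate(([y[0]], y[0] + np.cumsum(e)))   # random walk under H0
            null_stats[b] = df_coefficient_stat(y_star)
        p_value = np.mean(null_stats <= observed)    # reject for very negative statistics
        return observed, p_value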

9.
We consider fitting the so-called Emax model to continuous response data from clinical trials designed to investigate the dose–response relationship for an experimental compound. When there is insufficient information in the data to estimate all of the parameters because the high-dose asymptote is ill defined, maximum likelihood estimation fails to converge. We explore the use of either bootstrap resampling or the profile likelihood to make inferences about effects and doses required to give a particular effect, using limits on the parameter values to obtain the value of the maximum likelihood when the high-dose asymptote is ill defined. The results show these approaches to be comparable with, or better than, some others that have been used when maximum likelihood estimation fails to converge, and that the profile likelihood method outperforms the bootstrap resampling method used.
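A minimal sketch of the three-parameter Emax dose–response curve and a bounded nonlinear least-squares fit. The bound on ED50 reflects the idea of limiting parameter values when the high-dose asymptote is ill defined, but the specific cap, starting values, and use of scipy.optimize.curve_fit are illustrative assumptions, not the authors' implementation.

    import numpy as np
    from scipy.optimize import curve_fit

    def emax_model(dose, e0, emax, ed50):
        # E(dose) = E0 + Emax * dose / (ED50 + dose)
        return e0 + emax * dose / (ed50 + dose)

    def fit_emax(dose, response, ed50_upper=1e4):
        dose, response = np.asarray(dose, float), np.asarray(response, float)
        ed50_start = min(max(np.median(dose), 1e-3), ed50_upper)
        p0 = [response.min(), response.max() - response.min(), ed50_start]
        bounds = ([-np.inf, -np.inf, 1e-6], [np.inf, np.inf, ed50_upper])  # cap ED50
        params, _ = curve_fit(emax_model, dose, response, p0=p0, bounds=bounds)
        return params  # (e0, emax, ed50); resample subjects and refit for bootstrap CIs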

10.
In many applications, the parameters of interest are estimated by solving non-smooth estimating functions with U-statistic structure. Because the asymptotic covariance matrix of the estimator generally involves the underlying density function, resampling methods are often used to bypass the difficulty of non-parametric density estimation. Despite its simplicity, the resulting covariance matrix estimator depends on the nature of the resampling, and the method can be time-consuming when the number of replications is large. Furthermore, the inferences are based on the normal approximation, which may not be accurate for practical sample sizes. In this paper, we propose a jackknife empirical likelihood-based inferential procedure for non-smooth estimating functions. Standard chi-square distributions are used to calculate the p-value and to construct confidence intervals. Extensive simulation studies and two real examples are provided to illustrate its practical utility.

11.
This paper presents a new random weighting-based adaptive importance resampling method to estimate the sampling distribution of a statistic. A random weighting-based cross-entropy procedure is developed to iteratively calculate the optimal resampling probability weights by minimizing the Kullback-Leibler distance between the optimal importance resampling distribution and a family of parameterized distributions. Subsequently, the random weighting estimation of the sampling distribution is constructed from the obtained optimal importance resampling distribution. The convergence of the proposed method is rigorously proved. Simulation and experimental results demonstrate that the proposed method can effectively estimate the sampling distribution of a statistic.

12.
Jun Shao, Statistics, 2013, 47(3-4): 203–237
This article reviews the applications of three resampling methods, the jackknife, the balanced repeated replication, and the bootstrap, in sample surveys. The sampling design under consideration is a stratified multistage sampling design. We discuss the implementation of the resampling methods; for example, the construction of balanced repeated replications and approximated balanced repeated replication estimators; four modified bootstrap algorithms to generate bootstrap samples; and three different ways of applying the resampling methods in the presence of imputed missing values. Asymptotic properties of the resampling estimators are discussed for two types of important survey estimators, functions of weighted averages and sample quantiles.

13.
The maximum likelihood, jackknife and bootstrap estimators of linkage disequilibrium, a measure of association in population genetics, are derived and compared. It is found that for point estimation, the resampling methods generate almost identical mean square errors. The maximum likelihood estimator could have larger or smaller mean square errors depending on the parameters of the underlying population. However, the bootstrap confidence interval is superior to the other two, as the intervals are shorter or the probability that the 95% confidence intervals include the true parameter is closer to 0.95. Although the standardised measure of linkage disequilibrium has a range from -1 to 1 regardless of marginal frequencies, it is shown that the distribution of this standardised measure is still not allele frequency independent under the multinomial sampling scheme.
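A small sketch of bootstrapping a standardised linkage-disequilibrium measure (D') from haplotype data. The 0/1 coding of the two loci, the use of D' as the standardised measure, and the resample count are illustrative assumptions made for the sketch.

    import numpy as np

    def d_prime(haplotypes):
        # haplotypes: (n, 2) array of 0/1 alleles at two loci.
        a, b = haplotypes[:, 0], haplotypes[:, 1]
        pA, pB = a.mean(), b.mean()
        d = (a * b).mean() - pA * pB                              # raw disequilibrium D
        d_max = min(pA * (1 - pB), (1 - pA) * pB) if d > 0 else min(pA * pB, (1 - pA) * (1 - pB))
        return d / d_max if d_max > 0 else 0.0                    # standardised measure in [-1, 1]

    def bootstrap_d_prime(haplotypes, n_boot=1000, rng=None):
        rng = np.random.default_rng(rng)
        h = np.asarray(haplotypes)
        n = len(h)
        return np.array([d_prime(h[rng.integers(0, n, n)]) for _ in range(n_boot)])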

14.
Resampling methods are a common means of estimating the variance of a statistic of interest when the data are subject to nonresponse and imputation is used as compensation. Applying resampling methods usually means that subsamples are drawn from the original sample and that variance estimates are computed from the point estimators of several subsamples. However, newer resampling methods such as the rescaling bootstrap of Chipperfield and Preston [Efficient bootstrap for business surveys. Surv Methodol. 2007;33:167–172] include all elements of the original sample in the computation of the point estimator. Thus, the usual procedures for accounting for imputation in resampling methods cannot be applied directly, and modifications are necessary. This paper presents an approach for applying these newer resampling methods to imputed data. The Monte Carlo simulation study conducted in the paper shows that the proposed approach leads to reliable variance estimates, in contrast to other modifications.

15.
The bootstrap is a computer-intensive method originally devoted mainly to estimating the standard deviation, confidence intervals and bias of the statistic under study. This technique is useful in a wide variety of statistical procedures; however, its use for hypothesis testing, when the data structure is complex, is not straightforward, and each case must be treated individually. A general bootstrap method for hypothesis testing is studied. The considered method preserves the data structure of each group independently, and the null hypothesis is used only to compute the bootstrap statistic values (not in the resampling, as is usual). The asymptotic distribution is developed and several case studies are discussed.
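A minimal two-sample illustration of the strategy described above: each group is resampled separately, preserving its own structure, and the null hypothesis of equal means enters only through centring of the bootstrap statistic, not through the resampling. This is an illustrative special case under assumed independent samples, not the paper's general construction.

    import numpy as np

    def bootstrap_mean_difference_test(x, y, n_boot=2000, rng=None):
        rng = np.random.default_rng(rng)
        x, y = np.asarray(x, float), np.asarray(y, float)
        observed = x.mean() - y.mean()
        boot = np.empty(n_boot)
        for b in range(n_boot):
            xs = rng.choice(x, size=len(x), replace=True)    # resample within group 1
            ys = rng.choice(y, size=len(y), replace=True)    # resample within group 2
            boot[b] = (xs.mean() - ys.mean()) - observed     # centred so H0 holds
        return np.mean(np.abs(boot) >= abs(observed))        # two-sided p-value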

16.
Resampling methods are proposed to estimate the distributions of sums of m-dependent, possibly differently distributed, real-valued random variables. The random variables are allowed to have varying mean values. A nonparametric resampling method based on the moving blocks bootstrap is proposed for the case in which the mean values are smoothly varying or 'asymptotically equal'. The idea is to resample blocks in pairs. It is also confirmed that a 'circular' block resampling scheme can be used in the case where the mean values are 'asymptotically equal'. A central limit resampling theorem for each of the two cases is proved. The resampling methods have a potential application to time series analysis, to distinguish between two different forecasting models. This is illustrated with an example using Swedish export prices of coated paper products.
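A compact sketch of the circular block resampling scheme mentioned above, applied to the sum of a dependent series: blocks of fixed length are drawn from the series wrapped around a circle, so every observation is equally likely to appear. The block length and number of resamples are illustrative tuning choices, and the paired-block variant from the paper is not shown.

    import numpy as np

    def circular_block_bootstrap_sum(x, block_len=10, n_boot=1000, rng=None):
        rng = np.random.default_rng(rng)
        x = np.asarray(x, float)
        n = len(x)
        n_blocks = int(np.ceil(n / block_len))
        sums = np.empty(n_boot)
        for b in range(n_boot):
            starts = rng.integers(0, n, size=n_blocks)
            idx = (starts[:, None] + np.arange(block_len)) % n   # wrap blocks around the circle
            sums[b] = x[idx.ravel()[:n]].sum()                   # keep exactly n observations
        return sums   # empirical bootstrap distribution of the sum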

17.
Positive quadrant dependence is a specific dependence structure that is of practical importance in, for example, modelling dependencies in insurance and actuarial science. This dependence structure imposes a constraint on the copula function. The interest in this paper is to test for positive quadrant dependence. One way to assess the distribution of the test statistics under the null hypothesis of positive quadrant dependence is to resample from a constrained copula. This requires constrained estimation of a copula function. We show that this use of resampling under a constrained copula considerably improves the power of existing testing procedures. We propose two resampling procedures, one based on a parametric constrained copula estimation and one relying on nonparametric estimation of a positive quadrant dependence copula, and discuss their properties. The finite-sample performances of the resulting testing procedures are evaluated via a simulation study that also includes comparisons with existing tests. Finally, a data set of Danish fire insurance claims is tested for positive quadrant dependence.

18.
The empirical best linear unbiased prediction approach is a popular method for the estimation of small area parameters. However, reliable estimation of the mean squared prediction error (MSPE) of the estimated best linear unbiased predictors (EBLUP) is a complicated process. In this paper we study the use of resampling methods for MSPE estimation of the EBLUP. A cross-sectional and time-series stationary small area model is used to provide estimates in small areas. Under this model, a parametric bootstrap procedure and a weighted jackknife method are introduced. A Monte Carlo simulation study is conducted in order to compare the performance of different resampling-based measures of uncertainty of the EBLUP with the analytical approximation. Our empirical results show that the proposed resampling-based approaches performed better than the analytical approximation in several situations, although in some cases they tend to underestimate the true MSPE of the EBLUP in a larger number of small areas.

19.
Demonstrated equivalence between a categorical regression model based on case-control data and an I-sample semiparametric selection bias model leads to a new goodness-of-fit test. The proposed test statistic is an extension of an existing Kolmogorov–Smirnov-type statistic and is the weighted average of the absolute differences between two estimated distribution functions in each response category. The paper establishes an optimal property for the maximum semiparametric likelihood estimator of the parameters in the I-sample semiparametric selection bias model. It also presents a bootstrap procedure, some simulation results and an analysis of two real datasets.

20.
In statistics of extremes, the estimation of parameters of extreme or even rare events is usually done under a semi-parametric framework. The estimators are based on the largest k order statistics in the sample or on the excesses over a high level u. Although showing good asymptotic properties, most of those estimators present a strong dependence on k or u, with high bias when k increases or the level u decreases. The use of resampling methodologies has proved promising for reducing the bias and for choosing k or u. Different approaches to resampling need to be considered depending on whether we are in an independent or a dependent setup. A great amount of investigation has been carried out for the independent situation. The main objective of this article is to use bootstrap and jackknife methods in the context of dependence to obtain more stable estimators of a parameter that characterizes the degree of local dependence in the extremes, the so-called extremal index. A simulation study illustrates the application of those methods.
