Similar literature (20 records found)
1.
The primary purpose of this paper is to develop a sequential Monte Carlo approximation to an ideal bootstrap estimate of the parameter of interest. Using the concept of fixed-precision approximation, we construct a sequential stopping rule for determining the number of bootstrap samples to be taken in order to achieve a specified precision of the Monte Carlo approximation. It is shown that the sequential Monte Carlo approximation is asymptotically efficient in the problems of estimating the bias and standard error of a given statistic. Efficient bootstrap resampling is discussed, and a numerical study is carried out to illustrate the theoretical results.
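As an illustrative sketch only (not the authors' stopping rule), a fixed-precision sequential bootstrap for the standard error of a statistic might grow the number of replicates in batches until a crude Monte Carlo error bound falls below a tolerance. The batch size, the tolerance `eps`, and the normal-theory approximation `se / sqrt(2(B-1))` for the Monte Carlo error of the SE estimate are all assumptions made here for illustration:

```python
import numpy as np

def sequential_bootstrap_se(data, stat, eps=0.005, batch=200, max_b=20000, seed=0):
    """Add bootstrap replicates in batches until the (approximate) Monte
    Carlo standard error of the bootstrap SE estimate drops below eps."""
    rng = np.random.default_rng(seed)
    n = len(data)
    reps = []
    while len(reps) < max_b:
        for _ in range(batch):
            reps.append(stat(rng.choice(data, size=n, replace=True)))
        r = np.asarray(reps)
        se_hat = r.std(ddof=1)                    # bootstrap SE of the statistic
        # crude normal-theory MC error of se_hat (illustrative assumption)
        mc_err = se_hat / np.sqrt(2 * (len(r) - 1))
        if mc_err < eps:
            break
    return se_hat, len(reps)

rng = np.random.default_rng(1)
x = rng.normal(size=50)
se, B = sequential_bootstrap_se(x, np.mean)
```

The stopping rule here trades extra batches for a guaranteed precision of the Monte Carlo approximation, which is the general idea the abstract describes.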

2.
This paper characterizes the finite-sample bias of the maximum likelihood estimator (MLE) in a reduced rank vector autoregression and suggests two simulation-based bias corrections. One is a simple bootstrap implementation that approximates the bias at the MLE. The other is an iterative root-finding algorithm implemented using stochastic approximation methods. Both algorithms are shown to be improvements over the MLE, measured in terms of mean square error and mean absolute deviation. An illustration using US macroeconomic time series is given.

3.
Approximate normality and unbiasedness of the maximum likelihood estimate (MLE) of the long-memory parameter H of a fractional Brownian motion hold reasonably well for sample sizes as small as 20 if the mean and scale parameter are known. We show in a Monte Carlo study that if the latter two parameters are unknown the bias and variance of the MLE of H both increase substantially. We also show that the bias can be reduced by using a parametric bootstrap procedure. In very large samples, maximum likelihood estimation becomes problematic because of the large dimension of the covariance matrix that must be inverted. To overcome this difficulty, we propose a maximum likelihood method based upon first differences of the data. These first differences form a short-memory process. We split the data into a number of contiguous blocks consisting of a relatively small number of observations. Computation of the likelihood function in a block then presents no computational problem. We form a pseudo-likelihood function consisting of the product of the likelihood functions in each of the blocks and provide a formula for the standard error of the resulting estimator of H. This formula is shown in a Monte Carlo study to provide a good approximation to the true standard error. The computation time required to obtain the estimate and its standard error from large data sets is an order of magnitude less than that required to obtain the widely used Whittle estimator. Application of the methodology is illustrated on two data sets.
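The block pseudo-likelihood idea can be sketched as follows, under several assumptions not in the abstract: the first differences are treated as fractional Gaussian noise with the standard autocovariance formula, the scale is profiled out per block, the block size (50) is an arbitrary choice, and H is maximized by grid search rather than a numerical optimizer; the paper's standard-error formula is not reproduced here.

```python
import numpy as np

def fgn_cov(m, H):
    """Autocovariance matrix of unit-scale fractional Gaussian noise:
    gamma(k) = 0.5 * (|k+1|^{2H} - 2|k|^{2H} + |k-1|^{2H})."""
    k = np.abs(np.subtract.outer(np.arange(m), np.arange(m))).astype(float)
    return 0.5 * ((k + 1) ** (2 * H) - 2 * k ** (2 * H) + np.abs(k - 1) ** (2 * H))

def block_pseudo_loglik(diffs, H, block=50):
    """Sum of Gaussian log-likelihoods over contiguous blocks of the
    first differences, with the scale parameter profiled out."""
    S = fgn_cov(block, H)
    _, logdet = np.linalg.slogdet(S)
    ll = 0.0
    for i in range(0, len(diffs) - block + 1, block):
        d = diffs[i:i + block]
        q = d @ np.linalg.solve(S, d)
        ll += -0.5 * (block * np.log(q / block) + logdet)
    return ll

def estimate_H(path, grid=np.linspace(0.05, 0.95, 91), block=50):
    """Maximize the block pseudo-likelihood over a grid of H values."""
    diffs = np.diff(path)
    lls = [block_pseudo_loglik(diffs, H, block) for H in grid]
    return grid[int(np.argmax(lls))]
```

Because each block only requires inverting a small covariance matrix, the cost stays linear in the sample size, which is the computational point the abstract makes.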

4.
We develop nearly unbiased estimators for the Kumaraswamy distribution proposed by Kumaraswamy [A generalized probability density function for double-bounded random processes, J. Hydrol. 46 (1980), pp. 79–88], which has received considerable attention in hydrology and related areas. We derive modified maximum-likelihood estimators that are bias-free to second order. As an alternative to the analytically bias-corrected estimators, we consider a bias correction mechanism based on the parametric bootstrap. We conduct Monte Carlo simulations to investigate the performance of the corrected estimators. The numerical results show that the bias correction scheme yields nearly unbiased estimates.
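The parametric-bootstrap alternative mentioned in the abstract is the classical correction theta_bc = 2*theta_hat - mean(theta_hat*). A minimal sketch, with all function names (`kw_mle`, `kw_rvs`, etc.), the log-scale parametrization, and the Nelder-Mead optimizer being illustrative assumptions rather than the authors' implementation:

```python
import numpy as np
from scipy.optimize import minimize

def kw_negloglik(theta, x):
    """Negative log-likelihood of the Kumaraswamy(a, b) density
    f(x) = a b x^{a-1} (1 - x^a)^{b-1}, parametrized on the log scale."""
    a, b = np.exp(theta)
    return -np.sum(np.log(a) + np.log(b) + (a - 1) * np.log(x)
                   + (b - 1) * np.log1p(-x ** a))

def kw_mle(x):
    res = minimize(kw_negloglik, x0=np.zeros(2), args=(x,), method="Nelder-Mead")
    return np.exp(res.x)

def kw_rvs(a, b, n, rng):
    """Inverse-CDF sampling: F(x) = 1 - (1 - x^a)^b."""
    u = rng.uniform(size=n)
    return (1 - (1 - u) ** (1 / b)) ** (1 / a)

def bootstrap_bias_corrected(x, B=200, seed=0):
    """Parametric bootstrap bias correction: 2*theta_hat - mean(theta_hat*)."""
    rng = np.random.default_rng(seed)
    theta_hat = kw_mle(x)
    a_hat, b_hat = theta_hat
    boot = np.array([kw_mle(kw_rvs(a_hat, b_hat, len(x), rng)) for _ in range(B)])
    return 2 * theta_hat - boot.mean(axis=0)
```

Resampling from the fitted distribution (rather than from the data) is what makes the correction parametric.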

5.
Concerning the estimation of linear parameters in small areas, a nested-error regression model is assumed for the values of the target variable in the units of a finite population. A bootstrap procedure is then proposed for estimating the mean squared error (MSE) of the EBLUP under the finite population setup. The consistency of the bootstrap procedure is studied, and a simulation experiment is carried out to compare the performance of two different bootstrap estimators with the approximation given by Prasad and Rao [Prasad, N.G.N. and Rao, J.N.K., 1990, The estimation of the mean squared error of small-area estimators. Journal of the American Statistical Association, 85, 163–171]. In the numerical results, one of the bootstrap estimators shows better bias behavior than the Prasad–Rao approximation for some of the small areas and is not much worse in any case. Further, it shows smaller MSE in situations of moderate heteroscedasticity and under misspecification of the error distribution as normal when the true distribution is logistic or Gumbel. The proposed bootstrap method can be applied to more general types of parameters (linear or not) and predictors.

6.
In this paper we compare Bartlett-corrected, bootstrap, and fast double bootstrap tests on maximum likelihood estimates of cointegration parameters. The key result is that both the bootstrap and the Bartlett-corrected tests must be based on the unrestricted estimates of the cointegrating vectors: procedures based on the restricted estimates have almost no power. The size bias of the asymptotic test in small samples appears so severe as to advise strongly against its use with the sample sizes commonly available; the fast double bootstrap test minimizes size bias, while the Bartlett-corrected test is somewhat more powerful.

7.
Comments     


8.
Alternative methods of estimating properties of unknown distributions include the bootstrap and the smoothed bootstrap. In the standard bootstrap setting, Johns (1988) introduced an importance resampling procedure that results in more accurate approximation to the bootstrap estimate of a distribution function or a quantile. With a suitable "exponential tilting" similar to that used by Johns, we derive a smoothed version of importance resampling in the framework of the smoothed bootstrap. Smoothed importance resampling procedures are developed for the estimation of distribution functions of the Studentized mean, the Studentized variance, and the correlation coefficient. Implementation of these procedures is illustrated via simulation results, which concentrate on estimating the distribution functions of the Studentized mean and Studentized variance for different sample sizes and various pre-specified smoothing bandwidths for normal data. Additional simulations were conducted for the estimation of quantiles of the distribution of the Studentized mean under an optimal smoothing bandwidth when the original data were simulated from three different parent populations: lognormal, t(3) and t(10). These results suggest that, in cases where it is advantageous to use the smoothed bootstrap rather than the standard bootstrap, the amount of resampling necessary might be substantially reduced by the use of importance resampling methods, and that the efficiency gains depend on the bandwidth used in the kernel density estimation.

9.
This paper considers the issue of estimating the covariance matrix of ordinary least squares estimates in a linear regression model when heteroskedasticity is suspected. We perform Monte Carlo simulation on the White estimator, which is commonly used in empirical research, and also on some alternatives based on different bootstrapping schemes. Our results reveal that the White estimator can be considerably biased when the sample size is not very large, that bias correction via bootstrap does not work well, and that the weighted bootstrap estimators tend to display smaller biases than the White estimator and its variants, under both homoskedasticity and heteroskedasticity. Our results also reveal that the presence of (potentially) influential observations in the design matrix plays an important role in the finite-sample performance of the heteroskedasticity-consistent estimators.
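For concreteness, here is a minimal sketch of the White (HC0) covariance estimator alongside one bootstrap alternative; the Rademacher wild bootstrap shown is a common weighted-bootstrap scheme, assumed here for illustration rather than taken from the article's specific designs:

```python
import numpy as np

def hc0_cov(X, y):
    """White's heteroskedasticity-consistent (HC0) covariance of OLS:
    (X'X)^{-1} (sum_i e_i^2 x_i x_i') (X'X)^{-1}."""
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    e = y - X @ beta
    meat = X.T @ (X * e[:, None] ** 2)
    return beta, XtX_inv @ meat @ XtX_inv

def wild_bootstrap_cov(X, y, B=500, seed=0):
    """Wild-bootstrap covariance of OLS with Rademacher weights:
    y* = X beta_hat + e * v, v_i in {-1, +1}."""
    rng = np.random.default_rng(seed)
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    e = y - X @ beta
    draws = np.empty((B, X.shape[1]))
    for b in range(B):
        v = rng.choice([-1.0, 1.0], size=len(y))
        draws[b] = XtX_inv @ X.T @ (X @ beta + e * v)
    return np.cov(draws, rowvar=False)
```

Because the wild bootstrap perturbs each residual in place, it preserves the heteroskedasticity pattern of the design, which is why it is a natural competitor to the White estimator.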

10.
We analyse the finite-sample behaviour of two second-order bias-corrected alternatives to the maximum-likelihood estimator of the parameters in a multivariate normal regression model with general parametrization proposed by Patriota and Lemonte [A.G. Patriota and A.J. Lemonte, Bias correction in a multivariate normal regression model with general parameterization, Stat. Prob. Lett. 79 (2009), pp. 1655–1662]. The two finite-sample corrections we consider are the conventional second-order bias-corrected estimator and the bootstrap bias correction. We present numerical results comparing the performance of these estimators. Our results reveal that the analytical bias correction outperforms the numerical bias corrections obtained from bootstrapping schemes.

11.
A statistical test procedure is proposed to check whether the parameters in the parametric component of the partially linear spatial autoregressive models satisfy certain linear constraint conditions, in which a residual-based bootstrap procedure is suggested to derive the p-value of the test. Some simulations are conducted to assess the performance of the test and the results show that the bootstrap approximation to the null distribution of the test statistic is valid and the test is of satisfactory power. Furthermore, a real-world example is given to demonstrate the application of the proposed test.

12.
During recent years, analysts have been relying on approximate methods of inference to estimate multilevel models for binary or count data. In an earlier study of random-intercept models for binary outcomes we used simulated data to demonstrate that one such approximation, known as marginal quasi-likelihood, leads to a substantial attenuation bias in the estimates of both fixed and random effects whenever the random effects are non-trivial. In this paper, we fit three-level random-intercept models to actual data for two binary outcomes, to assess whether refined approximation procedures, namely penalized quasi-likelihood and second-order improvements to marginal and penalized quasi-likelihood, also underestimate the underlying parameters. The extent of the bias is assessed by two standards of comparison: exact maximum likelihood estimates, based on a Gauss–Hermite numerical quadrature procedure, and a set of Bayesian estimates, obtained from Gibbs sampling with diffuse priors. We also examine the effectiveness of a parametric bootstrap procedure for reducing the bias. The results indicate that second-order penalized quasi-likelihood estimates provide a considerable improvement over the other approximations, but all the methods of approximate inference result in a substantial underestimation of the fixed and random effects when the random effects are sizable. We also find that the parametric bootstrap method can eliminate the bias but is computationally very intensive.

13.
The bootstrap variance estimate is widely used in semiparametric inferences. However, its theoretical validity is a well-known open problem. In this paper, we provide a first theoretical study on the bootstrap moment estimates in semiparametric models. Specifically, we establish the bootstrap moment consistency of the Euclidean parameter, which immediately implies the consistency of the t-type bootstrap confidence set. It is worth pointing out that the only additional cost to achieve the bootstrap moment consistency in contrast with the distribution consistency is to simply strengthen the L1 maximal inequality condition required in the latter to the Lp maximal inequality condition for p≥1. The general Lp multiplier inequality developed in this paper is also of independent interest. These general conclusions hold for the bootstrap methods with exchangeable bootstrap weights, for example, non-parametric bootstrap and Bayesian bootstrap. Our general theory is illustrated in the celebrated Cox regression model.
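The t-type bootstrap confidence set the abstract refers to uses the bootstrap second moment as a variance estimate. A generic sketch for a Euclidean parameter (shown for a simple statistic rather than the Cox model, and with the normal critical value as an assumption):

```python
import numpy as np
from statistics import NormalDist

def t_type_bootstrap_ci(x, stat, level=0.95, B=1000, seed=0):
    """t-type interval: theta_hat +/- z * se_boot, where se_boot is the
    square root of the bootstrap second moment about theta_hat."""
    rng = np.random.default_rng(seed)
    theta = stat(x)
    reps = np.array([stat(rng.choice(x, size=len(x), replace=True))
                     for _ in range(B)])
    se = np.sqrt(np.mean((reps - theta) ** 2))   # bootstrap second moment
    z = NormalDist().inv_cdf(0.5 + level / 2)
    return theta - z * se, theta + z * se
```

Consistency of this interval is exactly what bootstrap moment consistency buys beyond distribution consistency: the second moment of the bootstrap draws must converge, not just their law.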

14.
Summary: This paper investigates mean squared errors for unobserved states in state space models when estimation uncertainty of hyperparameters is taken into account. Three alternative approximations to mean squared errors with estimation uncertainty are compared in a Monte Carlo experiment, where the random walk with noise model serves as the DGP: a naive method which neglects estimation uncertainty completely, an approximation based on an expansion around the true state with respect to the estimated parameters, and a bootstrap approach. Overall, the bootstrap method performs best in the simulations. However, the gains are not systematic, and the computational burden of this method is relatively high. *This paper represents the author's personal opinions and does not necessarily reflect the views of the Deutsche Bundesbank. I am grateful to Malte Knüppel, Jeong-Ryeol Kurz-Kim, Karl-Heinz Tödter and a referee for helpful comments. The computer programs for this paper were written in Ox and SsfPack; see Doornik (1998) and Koopman et al. (1999). The SsfPack version used is 2.2.

15.
Götze & Künsch (1990) announced that a certain version of the bootstrap percentile-t method, used in conjunction with the blocking method, can improve on the normal approximation to the distribution of a Studentized statistic computed from dependent data. This paper shows that this result depends fundamentally on the method of Studentization. Indeed, if the percentile-t method is implemented naively for dependent data, then it does not improve by an order of magnitude on the much simpler normal approximation, despite all the computational effort required to implement it. On the other hand, if the variance estimator used for the percentile-t bootstrap is adjusted appropriately, then percentile-t can improve substantially on the normal approximation.
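The Studentization point can be made concrete with a sketch: a moving-block bootstrap percentile-t interval for the mean in which the variance estimator, both in the sample and inside each resample, accounts for dependence via a block (batch-means style) estimate. The block length, batch-means variance formula, and overall construction are illustrative assumptions, not the paper's exact adjustment:

```python
import numpy as np

def block_var(x, l):
    """Block-based (batch-means style) variance estimate of the sample
    mean, using non-overlapping blocks of length l."""
    k = len(x) // l
    means = x[:k * l].reshape(k, l).mean(axis=1)
    return means.var(ddof=1) / k

def mbb_percentile_t(x, l=10, B=1000, level=0.95, seed=0):
    """Moving-block bootstrap percentile-t interval for the mean; the
    Studentizing variance is block-based inside each resample too."""
    rng = np.random.default_rng(seed)
    n = len(x)
    xbar, v = x.mean(), block_var(x, l)
    starts_all = np.arange(n - l + 1)
    k = n // l
    t_stars = np.empty(B)
    for b in range(B):
        starts = rng.choice(starts_all, size=k)
        xs = np.concatenate([x[s:s + l] for s in starts])
        t_stars[b] = (xs.mean() - xbar) / np.sqrt(block_var(xs, l))
    lo, hi = np.quantile(t_stars, [(1 - level) / 2, (1 + level) / 2])
    return xbar - hi * np.sqrt(v), xbar - lo * np.sqrt(v)
```

Replacing `block_var` with the naive iid variance estimator is precisely the "naive implementation" the abstract warns loses the higher-order refinement.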

16.
In Statistics of Extremes, the estimation of parameters of extreme or even rare events is usually done under a semi-parametric framework. The estimators are based on the largest k order statistics in the sample or on the excesses over a high level u. Although they show good asymptotic properties, most of these estimators depend strongly on k or u, with high bias as k increases or the level u decreases. The use of resampling methodologies has proved promising in reducing the bias and in choosing k or u. Different approaches to resampling need to be considered depending on whether we are in an independent or a dependent setup. A great deal of investigation has been carried out for the independent situation. The main objective of this article is to use bootstrap and jackknife methods in the context of dependence to obtain more stable estimators of a parameter that characterizes the degree of local dependence in extremes, the so-called extremal index. A simulation study illustrates the application of these methods.

17.
An extensive simulation study is conducted to compare the performance of balanced and antithetic resampling for the bootstrap in estimation of bias, variance, and percentiles when the statistic of interest is the median, the square root of the absolute value of the mean, or the median absolute deviation from the median. Simulation results reveal that balanced resampling provides better efficiencies in most cases; however, antithetic resampling is superior in estimating the bias of the median. We also investigate the possibility of combining an existing efficient bootstrap computation of Efron (1990) with balanced or antithetic resampling for percentile estimation. Results indicate that the combination method does indeed offer gains in performance, though the gains are much more dramatic for the bootstrap t statistic than for any of the three statistics described above.
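The two resampling schemes being compared can be sketched in a few lines; the rank-reflection form of antithetic resampling used below is one standard construction, assumed here for illustration:

```python
import numpy as np

def balanced_bootstrap(x, stat, B=500, seed=0):
    """Balanced resampling: concatenate B copies of the indices, permute,
    and split into B resamples, so every observation appears exactly B
    times overall. This removes one source of Monte Carlo variation."""
    rng = np.random.default_rng(seed)
    n = len(x)
    idx = np.tile(np.arange(n), B)
    rng.shuffle(idx)
    return np.array([stat(x[idx[b * n:(b + 1) * n]]) for b in range(B)])

def antithetic_bootstrap(x, stat, B=500, seed=0):
    """Antithetic resampling: pair each resample of ranks with its
    reflected-rank mirror and average the paired statistics."""
    rng = np.random.default_rng(seed)
    n = len(x)
    order = np.argsort(x)                  # order[r] = index of r-th smallest
    out = np.empty(B)
    for b in range(B):
        r = rng.integers(0, n, size=n)     # resampled ranks
        s1 = stat(x[order[r]])
        s2 = stat(x[order[n - 1 - r]])     # antithetic mirror
        out[b] = 0.5 * (s1 + s2)
    return out
```

For the sample mean, balancing makes the average of the bootstrap replicates equal the sample mean exactly, so the Monte Carlo component of the bias estimate vanishes.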

18.
ABSTRACT

We derive analytic expressions for the biases, to O(n⁻¹), of the maximum likelihood estimators of the parameters of the generalized Pareto distribution. Using these expressions to bias-correct the estimators in a selective manner is found to be extremely effective in terms of bias reduction, and can also result in a small reduction in relative mean squared error (MSE). In terms of remaining relative bias, the analytic bias-corrected estimators are somewhat less effective than their counterparts obtained by using a parametric bootstrap bias correction. However, the analytic correction outperforms the bootstrap correction in terms of remaining %MSE. It also performs credibly relative to other recently proposed estimators for this distribution. Taking into account the relative computational costs, this leads us to recommend the selective use of the analytic bias adjustment for most practical situations.
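The bootstrap competitor referred to here is the usual parametric bootstrap correction. A minimal sketch using SciPy's GPD fit with the location fixed at zero; the inverse-CDF sampler and the choice of B are assumptions for illustration, and the paper's analytic O(n⁻¹) bias expressions are not reproduced:

```python
import numpy as np
from scipy.stats import genpareto

def gpd_rvs(c, s, n, rng):
    """Inverse-CDF sampling from GPD(shape c != 0, scale s):
    Q(u) = (s / c) * ((1 - u)^{-c} - 1)."""
    u = rng.uniform(size=n)
    return s / c * ((1 - u) ** (-c) - 1)

def gpd_bias_corrected(x, B=100, seed=0):
    """Parametric bootstrap bias correction of the GPD MLE (shape, scale):
    theta_bc = 2 * theta_hat - mean(theta_hat*)."""
    rng = np.random.default_rng(seed)
    c_hat, _, s_hat = genpareto.fit(x, floc=0)
    boot = np.empty((B, 2))
    for b in range(B):
        xs = gpd_rvs(c_hat, s_hat, len(x), rng)
        cb, _, sb = genpareto.fit(xs, floc=0)
        boot[b] = cb, sb
    return 2 * np.array([c_hat, s_hat]) - boot.mean(axis=0)
```

The B refits of the MLE are the computational cost that the abstract weighs against the closed-form analytic adjustment.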

19.
Boundary and Bias Correction in Kernel Hazard Estimation
A new class of local linear hazard estimators based on weighted least square kernel estimation is considered. The class includes the kernel hazard estimator of Ramlau-Hansen (1983), which has the same boundary correction property as the local linear regression estimator (see Fan & Gijbels, 1996). It is shown that all the local linear estimators in the class have the same pointwise asymptotic properties. We derive the multiplicative bias correction of the local linear estimator. In addition we propose a new bias correction technique based on bootstrap estimation of additive bias. This latter method has excellent theoretical properties. Based on an extensive simulation study where we compare the performance of competing estimators, we also recommend the use of the additive bias correction in applied work.

20.
A collection of six novel bootstrap algorithms, applied to probability-proportional-to-size samples, is explored for variance estimation, confidence interval and p-value production. Developed according to bootstrap fundamentals such as the mimicking principle and the plug-in rule, these algorithms make use of an empirical bootstrap population informed by sampled units each with assigned weight. Starting from the natural choice of Horvitz–Thompson (HT)-type weights, improvements based on calibration to known population features are fostered. Focusing on the population total as the parameter to be estimated and on the distribution of the HT estimator as the target of bootstrap estimation, simulation results are presented with the twofold objective of checking practical implementation and of investigating the statistical properties of the bootstrap estimates supplied by the algorithms explored.
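To fix ideas about the setting (though this is not one of the article's six algorithms), here is a naive with-replacement sketch for the Hansen–Hurwitz version of the PPS total, where resampling the sampled (y, p) pairs directly serves as the bootstrap; all names and the simulation design are assumptions for illustration:

```python
import numpy as np

def hh_total(y, p):
    """Hansen-Hurwitz estimator of the population total under
    with-replacement PPS sampling: mean of y_i / p_i, with p_i the
    single-draw selection probability of unit i."""
    return np.mean(y / p)

def bootstrap_variance_hh(y, p, B=2000, seed=0):
    """Naive bootstrap of the HH total: resample the n (y_i, p_i) pairs
    with replacement and recompute the estimator each time."""
    rng = np.random.default_rng(seed)
    n = len(y)
    totals = np.empty(B)
    for b in range(B):
        idx = rng.integers(0, n, size=n)
        totals[b] = hh_total(y[idx], p[idx])
    return totals.var(ddof=1)
```

The article's algorithms go further by building an explicit pseudo-population from HT-type weights and calibrating it to known population features; the sketch above only shows the baseline that such refinements improve upon.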
