Similar Articles
20 similar articles found (search time: 31 ms)
1.
This paper considers the issue of estimating the covariance matrix of ordinary least squares estimates in a linear regression model when heteroskedasticity is suspected. We perform Monte Carlo simulation on the White estimator, which is commonly used in empirical research, and also on some alternatives based on different bootstrapping schemes. Our results reveal that the White estimator can be considerably biased when the sample size is not very large, that bias correction via bootstrap does not work well, and that the weighted bootstrap estimators tend to display smaller biases than the White estimator and its variants, under both homoskedasticity and heteroskedasticity. Our results also reveal that the presence of (potentially) influential observations in the design matrix plays an important role in the finite-sample performance of the heteroskedasticity-consistent estimators.
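The White (HC0) estimator and a weighted (wild) bootstrap alternative of the kind compared above can be sketched as follows; the simulated design, sample size, and Rademacher weights are illustrative assumptions, not the paper's actual Monte Carlo setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated heteroskedastic linear model (illustrative design)
n = 50
x = rng.uniform(1, 5, n)
X = np.column_stack([np.ones(n), x])
y = 1.0 + 2.0 * x + rng.normal(0, 0.5 * x)  # error variance grows with x

# OLS fit and residuals
beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta
XtX_inv = np.linalg.inv(X.T @ X)

# White (HC0) estimator: (X'X)^{-1} X' diag(e_i^2) X (X'X)^{-1}
white_cov = XtX_inv @ (X.T @ (resid[:, None] ** 2 * X)) @ XtX_inv

# Wild bootstrap with Rademacher weights as a weighted-bootstrap alternative
B = 500
slopes = np.empty(B)
for b in range(B):
    y_star = X @ beta + resid * rng.choice([-1.0, 1.0], size=n)
    slopes[b] = np.linalg.lstsq(X, y_star, rcond=None)[0][1]

print(white_cov[1, 1], slopes.var(ddof=1))
```

Conditionally on the data, the wild-bootstrap variance of the slope targets the same quantity as the HC0 entry, so the two printed numbers should be close up to Monte Carlo noise.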

2.
This paper considers the nonparametric regression model with an additive error that is correlated with the explanatory variables. Motivated by empirical studies in epidemiology and economics, it also supposes that valid instrumental variables are observed. However, the estimation of a nonparametric regression function by instrumental variables is an ill-posed linear inverse problem with an unknown but estimable operator. We provide a new estimator of the regression function that is based on projection onto finite-dimensional spaces and that includes an iterative regularisation method (the Landweber–Fridman method). The optimal number of iterations and the convergence of the mean square error of the resulting estimator are derived under both strong and weak source conditions. A Monte Carlo exercise shows the impact of some parameters on the estimator and confirms its reasonable finite-sample performance.

3.
In this article, we develop a novel kernel density estimator for a sum of weighted averages from a single population, based on combining the well-defined kernel density estimator with classic inversion theory. The idea is then extended to a kernel density estimator for the difference of weighted averages from two independent populations. The resulting estimator is “bootstrap-like” in its properties with respect to deriving approximate confidence intervals via a “plug-in” approach. Unlike the bootstrap methodology, it is analytically and computationally feasible to provide an exact estimate of the distribution function through direct calculation. Our approach thus eliminates the error due to Monte Carlo resampling that arises in the simulation-based approaches often needed to derive bootstrap-based confidence intervals for statistics involving weighted averages of i.i.d. random variables. We provide several examples and carry out a simulation study to show that our kernel density estimator outperforms the standard central limit theorem based approximation in terms of coverage probability.

4.
The operation of resampling from a bootstrap resample, encountered in applications of the double bootstrap, may be viewed as resampling directly from the sample but using probability weights that are proportional to the numbers of times that sample values appear in the resample. This suggests an approximate approach to double-bootstrap Monte Carlo simulation, where weighted bootstrap methods are used to circumvent much of the labour involved in compounded Monte Carlo approximation. In the case of distribution estimation or, equivalently, confidence interval calibration, the new method may be used to reduce the computational labour. Moreover, it produces the same order of magnitude of coverage error for confidence intervals, or level error for hypothesis tests, as a full application of the double bootstrap.
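The equivalence described above, treating second-level resampling as weighted resampling from the original sample, can be sketched in a few lines; the exponential sample and its size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
sample = rng.exponential(size=40)
n = sample.size

# First-level resample, stored as multiplicities of the original observations
counts = np.bincount(rng.integers(0, n, size=n), minlength=n)

# "Resampling from the resample" done directly on the original sample,
# with probability weights proportional to the multiplicities; observations
# absent from the first-level resample get weight zero, as they should
probs = counts / counts.sum()
second_level = rng.choice(sample, size=n, p=probs)

print(second_level.mean())
```

Drawing with these weights is distributionally identical to resampling uniformly from the first-level resample itself, which is what makes the weighted shortcut valid.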

5.
In this article, we propose a new technique for constructing confidence intervals for the mean of a noisy sequence with multiple change-points. We use the weighted bootstrap to generalize the bootstrap aggregating or bagging estimator. A standard deviation formula for the bagging estimator is introduced, based on which smoothed confidence intervals are constructed. To further improve the performance of the smoothed interval for weak signals, we suggest a strategy of adaptively choosing between the percentile intervals and the smoothed intervals. A new intensity plot is proposed to visualize the pattern of the change-points. We also propose a new change-point estimator based on the intensity plot, which has superior performance in comparison with the state-of-the-art segmentation methods. The finite-sample performance of the confidence intervals and the change-point estimator is evaluated through Monte Carlo studies and illustrated with a real data example.

6.
In this paper, the delete-m_j jackknife estimator is proposed. This estimator is based on samples obtained from the original sample by successively removing mutually exclusive groups of unequal size. In a Monte Carlo simulation study, a hierarchical linear model was used to evaluate the role of non-normal residuals and sample size in the bias and efficiency of this estimator. It is shown that bias is reduced in exchange for a minor reduction in efficiency. The accompanying jackknife variance estimator even improves on both bias and efficiency and, moreover, is mean-squared-error consistent, whereas the maximum likelihood equivalents are not.
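A minimal sketch of a grouped jackknife with unequal group sizes, using pseudo-values weighted by h_j = n/m_j from one standard delete-m_j formulation (assumed here; the abstract does not spell out the formulas, and the partition below is arbitrary). The sample mean serves as a toy statistic, for which the jackknife point estimate reproduces the mean exactly.

```python
import numpy as np

rng = np.random.default_rng(2)
data = rng.gamma(2.0, size=30)
n = data.size

def theta(x):
    # statistic of interest; the sample mean keeps the example transparent
    return x.mean()

# Mutually exclusive groups of unequal size (an illustrative partition)
groups = [np.arange(0, 5), np.arange(5, 12), np.arange(12, 30)]
g = len(groups)
m = np.array([idx.size for idx in groups])

theta_hat = theta(data)
theta_del = np.array([theta(np.delete(data, idx)) for idx in groups])

# Delete-m_j jackknife point estimator from the leave-group-out replicates
theta_jack = g * theta_hat - ((1 - m / n) * theta_del).sum()

# Companion variance estimator from pseudo-values weighted by h_j = n / m_j
h = n / m
pseudo = h * theta_hat - (h - 1) * theta_del
var_jack = ((pseudo - theta_jack) ** 2 / (h - 1)).sum() / g

print(theta_jack, var_jack)
```

Since the sample mean is unbiased, the delete-m_j point estimate collapses back to the mean itself; the bias correction only bites for nonlinear statistics.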

7.
Empirical researchers face a trade-off between the lower resource costs associated with smaller samples and the increased confidence in the results gained from larger samples. Choice of sampling strategy is one tool researchers can use to reduce costs yet still attain desired confidence levels. This study uses Monte Carlo simulation to examine the impact of nine sampling strategies on the finite sample performance of the maximum likelihood logit estimator. The results show that stratified random sampling with balanced strata sizes and a bias correction for choice-based sampling outperforms all other sampling strategies with respect to four small-sample performance measures.

8.
This is a study of the behaviors of the naive bootstrap and the Bayesian bootstrap clones designed to approximate the sampling distribution of the Aalen–Johansen estimator of a non-homogeneous censored Markov chain. The study shows that the approximations based on the Bayesian bootstrap clones and the naive bootstrap are first-order asymptotically equivalent. The two bootstrap methods are illustrated by a marketing example, and their performance is validated by a Monte Carlo experiment.
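For a simple statistic such as the sample mean (a stand-in for the Aalen–Johansen estimator, which requires censored Markov chain data), the two bootstrap schemes compared above can be sketched as follows; the sample and replication counts are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
sample = rng.exponential(2.0, size=60)
n, B = sample.size, 2000

# Naive bootstrap: resample observations with replacement
naive = np.array([rng.choice(sample, n).mean() for _ in range(B)])

# Bayesian bootstrap: keep all observations, reweight them with
# flat Dirichlet(1, ..., 1) weights and form the weighted mean
weights = rng.dirichlet(np.ones(n), size=B)
bayes = weights @ sample

print(naive.std(), bayes.std())
```

The two Monte Carlo standard deviations should agree closely, reflecting the first-order asymptotic equivalence the abstract reports.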

9.
Asymptotically valid inference in linear regression models is easily achieved under mild conditions using the well-known Eicker–White heteroskedasticity-robust covariance matrix estimator or one of its variants. In finite samples, however, such inferences can suffer from substantial size distortion. Indeed, it is well established in the literature that the finite-sample accuracy of a test may depend on which variant of the Eicker–White estimator is used, on the underlying data generating process (DGP) and on the desired level of the test.

This paper develops a new variant of the Eicker–White estimator which explicitly aims to minimize the finite-sample null error in rejection probability (ERP) of the test. This is made possible by selecting the transformation of the squared residuals which results in the smallest possible ERP through a numerical algorithm based on the wild bootstrap. Monte Carlo evidence indicates that this new procedure achieves a level of robustness to the DGP, sample size and nominal testing level unequaled by any other Eicker–White estimator based asymptotic test.


10.
Two new implementations of the EM algorithm are proposed for maximum likelihood fitting of generalized linear mixed models. Both methods use random (independent and identically distributed) sampling to construct Monte Carlo approximations at the E-step. One approach involves generating random samples from the exact conditional distribution of the random effects (given the data) by rejection sampling, using the marginal distribution as a candidate. The second method uses a multivariate t importance sampling approximation. In many applications the two methods are complementary. Rejection sampling is more efficient when sample sizes are small, whereas importance sampling is better with larger sample sizes. Monte Carlo approximation using random samples allows the Monte Carlo error at each iteration to be assessed by using standard central limit theory combined with Taylor series methods. Specifically, we construct a sandwich variance estimate for the maximizer at each approximate E-step. This suggests a rule for automatically increasing the Monte Carlo sample size after iterations in which the true EM step is swamped by Monte Carlo error. In contrast, techniques for assessing Monte Carlo error have not been developed for use with alternative implementations of Monte Carlo EM algorithms utilizing Markov chain Monte Carlo E-step approximations. Three different data sets, including the infamous salamander data of McCullagh and Nelder, are used to illustrate the techniques and to compare them with the alternatives. The results show that the methods proposed can be considerably more efficient than those based on Markov chain Monte Carlo algorithms. However, the methods proposed may break down when the intractable integrals in the likelihood function are of high dimension.

11.
In this article, we propose a variant of the Kaplan–Meier estimator that aims at reducing bias by adding a bootstrap-based correction term to the pertaining cumulative hazard function. For the mean lifetime, a simulation study demonstrates that the new estimator also has a smaller variance.

12.
We present a bootstrap Monte Carlo algorithm for computing the power function of the generalized correlation coefficient. The proposed method makes no assumptions about the form of the underlying probability distribution and may be used with observed data to approximate the power function, or with pilot data for sample size determination. In particular, the bootstrap power functions of the Pearson product moment correlation and the Spearman rank correlation are examined. Monte Carlo experiments indicate that the proposed algorithm is reliable and compares well with asymptotic values. An example demonstrates how the method can be used for sample size determination and power calculations.
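A minimal sketch of the bootstrap power idea for the Pearson correlation, resampling pairs from pilot data and counting rejections at each candidate sample size; the Fisher z statistic and the pilot design below are assumptions, not necessarily the article's choices.

```python
import numpy as np

rng = np.random.default_rng(4)

# Pilot data with a genuine association (illustrative)
x = rng.normal(size=30)
y = 0.6 * x + rng.normal(size=30)

def bootstrap_power(x, y, n_new, B=1000):
    """Approximate the power of a nominal-5% two-sided test of zero Pearson
    correlation at sample size n_new by resampling pairs from the pilot data
    and counting rejections (Fisher z used as the test statistic)."""
    rejections = 0
    for _ in range(B):
        idx = rng.integers(0, x.size, size=n_new)
        r = np.corrcoef(x[idx], y[idx])[0, 1]
        stat = np.sqrt(n_new - 3) * np.arctanh(r)
        rejections += abs(stat) > 1.959964  # two-sided 5% normal critical value
    return rejections / B

power_30 = bootstrap_power(x, y, n_new=30)
power_100 = bootstrap_power(x, y, n_new=100)
print(power_30, power_100)
```

Evaluating the function over a grid of `n_new` values traces out an empirical power curve, which is how the pilot data feed into sample size determination.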

13.
The performance of data-driven bandwidth selection procedures in local polynomial regression is investigated by using asymptotic methods and simulation. The bandwidth selection procedures considered are based on minimizing 'prelimit' approximations to the (conditional) mean-squared error (MSE) when the MSE is considered as a function of the bandwidth h. We first consider approximations to the MSE that are based on Taylor expansions around h=0 of the bias part of the MSE. These approximations lead to estimators of the MSE that are accurate only for small bandwidths h. We also consider a bias estimator which, instead of using small-h approximations, naïvely estimates bias as the difference of two local polynomial estimators of different order; we show that this estimator performs well only for moderate to large h. We next define a hybrid bias estimator which equals the Taylor-expansion-based estimator for small h and the difference estimator for moderate to large h. We find that the MSE estimator based on this hybrid bias estimator leads to a bandwidth selection procedure with good asymptotic and, for our Monte Carlo examples, finite sample properties.

14.
In this paper, we consider the maximum likelihood estimator (MLE) of the scale parameter of the generalized exponential (GE) distribution under a random censoring model, where the censoring distribution is also assumed to follow a GE distribution. Since the MLE has no explicit closed-form solution, we propose a simple method of deriving an explicit estimator by approximating the likelihood function. Monte Carlo simulation is conducted to compare the performance of the estimators; the results show that the MLE and the approximate MLE are almost identical in terms of bias and variance.

15.
Boundary and Bias Correction in Kernel Hazard Estimation
A new class of local linear hazard estimators based on weighted least square kernel estimation is considered. The class includes the kernel hazard estimator of Ramlau-Hansen (1983), which has the same boundary correction property as the local linear regression estimator (see Fan & Gijbels, 1996). It is shown that all the local linear estimators in the class have the same pointwise asymptotic properties. We derive the multiplicative bias correction of the local linear estimator. In addition we propose a new bias correction technique based on bootstrap estimation of additive bias. This latter method has excellent theoretical properties. Based on an extensive simulation study where we compare the performance of competing estimators, we also recommend the use of the additive bias correction in applied work.

16.
The aim of this paper is to compare passenger (pax) demand between airports based on the arithmetic mean pax demand (MPD) and the median pax demand (MePD). A three-phase approach is applied. In the first phase, we use bootstrap procedures to estimate the distribution of the MPD and the MePD for each block of route distances; in the second, we use the percentile, standard, bias-corrected, and bias-corrected accelerated methods to calculate bootstrap confidence bands for the MPD and the MePD; and in the third, we implement Monte Carlo (MC) experiments to analyse the finite sample performance of the applied bootstrap. Our results indicate that it is more meaningful to use the estimation of the MePD rather than the MPD in the air transport industry. The MC experiments demonstrate that the bootstrap methods produce coverages close to the nominal level for both the MPD and the MePD.
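The first two phases — bootstrapping the distributions of the MPD and the MePD and forming percentile confidence bands — can be sketched as follows; the lognormal demand sample stands in for real route-block data, and only the percentile method (one of the four the paper uses) is shown.

```python
import numpy as np

rng = np.random.default_rng(5)

# Illustrative pax-demand sample for one block of route distances (skewed)
demand = rng.lognormal(mean=8.0, sigma=1.0, size=200)
B = 2000

# Bootstrap distributions of the arithmetic mean (MPD) and the median (MePD)
boot = rng.choice(demand, size=(B, demand.size), replace=True)
mpd_dist = boot.mean(axis=1)
mepd_dist = np.median(boot, axis=1)

# 95% percentile confidence bands for each statistic
mpd_ci = np.percentile(mpd_dist, [2.5, 97.5])
mepd_ci = np.percentile(mepd_dist, [2.5, 97.5])
print(mpd_ci, mepd_ci)
```

With skewed demand data like this, the mean band sits well above the median band, which is one reason a robust location measure such as the MePD can be the more meaningful summary.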

17.
This article proposes a fast approximation for the small sample bias correction of the iterated bootstrap. The approximation adapts existing fast approximation techniques for the bootstrap p-value and quantile functions to the problem of estimating the bias function. We show an optimality result which holds under general conditions that do not require an asymptotic pivot. Monte Carlo evidence from the linear instrumental variable model and the nonlinear GMM suggests that, in addition to its computational appeal and success in reducing the mean and median bias in identified models, the fast approximation provides scope for bias reduction in weakly identified configurations.

18.
We consider the issue of performing accurate small sample inference in the beta autoregressive moving average model, which is useful for modeling and forecasting continuous variables that assume values in the interval (0, 1). Inferences based on conditional maximum likelihood estimation have good asymptotic properties, but their performance in small samples may be poor. We therefore propose bootstrap bias corrections of the point estimators and different bootstrap strategies for confidence interval improvements. Our Monte Carlo simulations show that finite sample inference based on bootstrap corrections is much more reliable than the usual inferences. We also present an empirical application.

19.
Resampling methods are a common means of estimating the variance of a statistic of interest when data are subject to nonresponse and imputation is used as compensation. Applying resampling methods usually means that subsamples are drawn from the original sample and that variance estimates are computed from point estimators of several subsamples. However, newer resampling methods such as the rescaling bootstrap of Chipperfield and Preston [Efficient bootstrap for business surveys. Surv Methodol. 2007;33:167–172] include all elements of the original sample in the computation of the point estimator. Thus, procedures that account for imputation in resampling methods cannot be applied in the ordinary way, and modifications are necessary. This paper presents an approach applying newer resampling methods to imputed data. The Monte Carlo simulation study conducted in the paper shows that the proposed approach leads to reliable variance estimates, in contrast to other modifications.

20.
Multiple hypothesis testing is widely used to evaluate scientific studies involving statistical tests. However, for many of these tests, p values are not available and are thus often approximated using Monte Carlo tests such as permutation tests or bootstrap tests. This article presents a simple algorithm based on Thompson Sampling to test multiple hypotheses. It works with arbitrary multiple testing procedures, in particular with step-up and step-down procedures. Its main feature is to sequentially allocate Monte Carlo effort, generating more Monte Carlo samples for tests whose decisions are so far less certain. A simulation study demonstrates that for a low computational effort, the new approach yields a higher power and a higher degree of reproducibility of its results than previously suggested methods.
