Similar Documents
20 similar documents found.
1.
The bootstrap variance estimate is widely used in semiparametric inferences. However, its theoretical validity is a well-known open problem. In this paper, we provide a first theoretical study on the bootstrap moment estimates in semiparametric models. Specifically, we establish the bootstrap moment consistency of the Euclidean parameter, which immediately implies the consistency of the t-type bootstrap confidence set. It is worth pointing out that the only additional cost to achieve the bootstrap moment consistency, in contrast with the distribution consistency, is to strengthen the L1 maximal inequality condition required in the latter to the Lp maximal inequality condition for p ≥ 1. The general Lp multiplier inequality developed in this paper is also of independent interest. These general conclusions hold for bootstrap methods with exchangeable bootstrap weights, for example, the non-parametric bootstrap and the Bayesian bootstrap. Our general theory is illustrated in the celebrated Cox regression model.
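As a concrete illustration of the exchangeable-weight schemes mentioned above, here is a minimal sketch contrasting nonparametric-bootstrap and Bayesian-bootstrap moment (variance) estimates. It assumes a plain sample mean as the Euclidean parameter rather than the paper's Cox regression setting:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=200)   # observed sample
B = 2000

# Nonparametric bootstrap: multinomial weights (resampling with replacement).
np_means = np.array([rng.choice(x, size=x.size, replace=True).mean()
                     for _ in range(B)])

# Bayesian bootstrap: Dirichlet(1, ..., 1) weights on the observations.
dir_w = rng.dirichlet(np.ones(x.size), size=B)
bb_means = dir_w @ x

# Bootstrap second-moment (variance) estimates of the estimator x-bar.
print("nonparametric bootstrap var:", np_means.var(ddof=1))
print("Bayesian bootstrap var:     ", bb_means.var(ddof=1))
print("analytic estimate s^2 / n:  ", x.var(ddof=1) / x.size)
```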

2.
We investigate by simulation how the wild bootstrap and the pairs bootstrap perform in t and F tests of regression parameters in the stochastic regression model, where explanatory variables are stochastic rather than fixed and there is no heteroskedasticity. The wild bootstrap procedure of Davidson and Flachaire [The wild bootstrap, tamed at last, Working paper, IER#1000, Queen's University, 2001] with restricted residuals works best, but its dominance is weaker than that found by Flachaire [Bootstrapping heteroskedastic regression models: wild bootstrap vs. pairs bootstrap, Comput. Statist. Data Anal. 49 (2005), pp. 361–376] in the fixed regression model, where explanatory variables are fixed and heteroskedasticity is present.
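To make the two competing schemes concrete, the following sketch runs a wild bootstrap with restricted residuals and a pairs bootstrap for a slope t test. The simulation design, sample size, and Rademacher weights are illustrative choices, not the authors' exact setup:

```python
import numpy as np

rng = np.random.default_rng(1)
n, B = 50, 999
x = rng.normal(size=n)                        # stochastic regressor
y = 1.0 + 0.0 * x + rng.normal(size=n)        # H0: slope = 0 holds
X = np.column_stack([np.ones(n), x])

def slope_t(X, y, null=0.0):
    """OLS t statistic for the slope against the value `null`."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = resid @ resid / (X.shape[0] - X.shape[1])
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
    return (beta[1] - null) / se, beta

t_obs, beta_hat = slope_t(X, y)

# Wild bootstrap with *restricted* residuals: refit under H0 (intercept only),
# then perturb the restricted residuals with Rademacher signs.
u0 = y - y.mean()
t_wild = np.array([slope_t(X, y.mean() + u0 * rng.choice([-1.0, 1.0], size=n))[0]
                   for _ in range(B)])

# Pairs bootstrap: resample (x_i, y_i) jointly; centre at the original estimate.
t_pairs = np.empty(B)
for b in range(B):
    idx = rng.integers(0, n, size=n)
    t_pairs[b] = slope_t(X[idx], y[idx], null=beta_hat[1])[0]

p_wild = np.mean(np.abs(t_wild) >= abs(t_obs))
p_pairs = np.mean(np.abs(t_pairs) >= abs(t_obs))
print(f"wild bootstrap p = {p_wild:.3f}, pairs bootstrap p = {p_pairs:.3f}")
```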

3.
The autoregressive Cauchy estimator uses the sign of the first lag as instrumental variable (IV); under independent and identically distributed (i.i.d.) errors, the resulting IV t-type statistic is known to have a standard normal limiting distribution in the unit root case. With unconditional heteroskedasticity, the ordinary least squares (OLS) t statistic is affected in the unit root case; but the paper shows that, by using some nonlinear transformation behaving asymptotically like the sign as instrument, limiting normality of the IV t-type statistic is maintained when the series to be tested has no deterministic trends. Neither estimation of the so-called variance profile nor bootstrap procedures are required to this end. The Cauchy unit root test has power in the same 1/T neighborhoods as the usual unit root tests, also for a wide range of magnitudes for the initial value. It is furthermore shown to be competitive with other, bootstrap-based, robust tests. When the series exhibit a linear trend, however, the null distribution of the Cauchy test for a unit root becomes nonstandard, reminiscent of the Dickey-Fuller distribution. In this case, inference robust to nonstationary volatility is obtained via the wild bootstrap.
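A minimal sketch of the Cauchy IV t-type statistic in the no-trend, i.i.d.-error case. The instrument here is literally the sign of the first lag; the paper's more general nonlinear instruments and the wild-bootstrap treatment of the trend case are not implemented:

```python
import numpy as np

rng = np.random.default_rng(2)
T = 500
u = rng.normal(size=T)
y = np.cumsum(u)                 # random walk: the unit root null holds

y_lag, dy = y[:-1], np.diff(y)
z = np.sign(y_lag)               # Cauchy instrument: sign of the first lag

# IV t-type statistic for H0: rho = 1. With instrument z_t = sign(y_{t-1}),
# t = sum(z_t * dy_t) / (sigma_hat * sqrt(T)), asymptotically N(0, 1)
# under i.i.d. errors (sigma_hat estimated from the differences).
sigma_hat = np.std(dy, ddof=1)
t_cauchy = (z * dy).sum() / (sigma_hat * np.sqrt(dy.size))
print(f"Cauchy IV t statistic: {t_cauchy:.3f}  (compare with N(0,1) quantiles)")
```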

4.
Traditional resampling methods for estimating sampling distributions sometimes fail, and alternative approaches are then needed. For example, if the classical central limit theorem does not hold and the naïve bootstrap fails, the m/n bootstrap, based on smaller-sized resamples, may be used as an alternative. The sufficient bootstrap, which uses only the distinct observations in a bootstrap sample, is another recently proposed alternative intended to reduce the computational burden associated with bootstrapping. It works whenever the naïve bootstrap does; however, if the naïve bootstrap fails, so does the sufficient bootstrap. In this paper, we propose combining the sufficient bootstrap with the m/n bootstrap in order both to regain consistent estimation of sampling distributions and to reduce the computational burden of the bootstrap. We obtain necessary and sufficient conditions for asymptotic normality of the proposed method, and propose new values for the resample size m. We compare the proposed method with the naïve bootstrap, the sufficient bootstrap, and the m/n bootstrap by simulation.
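A sketch of the combined scheme on the sample maximum of a Uniform(0, 1) sample, a textbook case where the naïve bootstrap fails. The resample size m below is an illustrative m = o(n) choice; the paper derives its own recommended values:

```python
import numpy as np

rng = np.random.default_rng(3)
n, B = 1000, 2000
x = rng.uniform(0.0, 1.0, size=n)     # estimating theta = 1 by max(x)
m = 4 * int(n ** 0.5)                 # illustrative resample size, m = o(n)

def sufficient_mn_max(x, m, rng):
    """One m/n resample reduced to its distinct observations ('sufficient')."""
    idx = rng.integers(0, x.size, size=m)
    distinct = x[np.unique(idx)]      # for the max, dropping duplicates changes
    return distinct.max()             # nothing numerically -- the gain is
                                      # purely computational

# m * (max(x) - max*) should approximate the Exp(1) limiting law of
# n * (theta - max(x)) when theta = 1; the naive n-out-of-n version does not.
stats_mn = np.array([m * (x.max() - sufficient_mn_max(x, m, rng))
                     for _ in range(B)])
print("sufficient m/n bootstrap 90% quantile:", np.quantile(stats_mn, 0.90))
print("Exp(1) limit 90% quantile:            ", -np.log(0.10))
```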

5.
Gōtze & Kūnsch (1990) announced that a certain version of the bootstrap percentile-t method, and the blocking method, can be used to improve on the normal approximation to the distribution of a Studentized statistic computed from dependent data. This paper shows that this result depends fundamentally on the method of Studentization. Indeed, if the percentile-t method is implemented naively, for dependent data, then it does not improve by an order of magnitude on the much simpler normal approximation despite all the computational effort that is required to implement it. On the other hand, if the variance estimator used for the percentile-t bootstrap is adjusted appropriately, then percentile-t can improve substantially on the normal approximation.  相似文献   

6.
Zhuqing Yu, Statistics, 2017, 51(2): 277–293
It has been found, under a smooth function model setting, that the n out of n bootstrap is inconsistent at stationary points of the smooth function, but that the m out of n bootstrap is consistent, provided that the correct convergence rate of the plug-in smooth function estimator is specified. By considering a more general moving-parameter framework, we show that neither of the above bootstrap methods is consistent uniformly over neighbourhoods of stationary points, so that anomalies often arise in the coverage of bootstrap sets over certain subsets of parameter values. We propose a recentred bootstrap procedure for constructing confidence sets with uniformly correct coverage over compact sets containing stationary points. A weighted bootstrap procedure is also proposed as an alternative under more general circumstances. Unlike the m out of n bootstrap, neither procedure requires knowledge of the convergence rate of the smooth function estimator. Empirical performance of our procedures is illustrated with numerical examples.
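The inconsistency at a stationary point, and the m out of n remedy, can be reproduced with the classic smooth-function example g(t) = t² at μ = 0. This is an illustrative fixed-parameter case; the paper's recentred and weighted procedures are not sketched here:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n, B = 2000, 2000
x = rng.normal(size=n)              # mu = 0 is a stationary point of g(t) = t^2
m = int(n ** 0.6)                   # illustrative m = o(n)
g = lambda t: t ** 2
theta_hat = g(x.mean())

boot_n = np.empty(B)                # naive n-out-of-n bootstrap
boot_m = np.empty(B)                # m-out-of-n bootstrap
for b in range(B):
    boot_n[b] = n * (g(rng.choice(x, n).mean()) - theta_hat)
    boot_m[b] = m * (g(rng.choice(x, m).mean()) - theta_hat)

# Limit law at mu = 0: n * g(xbar) -> sigma^2 * chi^2_1 (sigma = 1 here).
q = 0.95
print("chi^2_1 95% quantile:", stats.chi2.ppf(q, df=1))
print("naive bootstrap     :", np.quantile(boot_n, q))   # stays off target
print("m/n bootstrap       :", np.quantile(boot_m, q))   # close to the limit
```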

7.
Econometric Reviews, 2013, 32(3): 215–228
Decisions based on econometric model estimates may not have the expected effect if the model is misspecified. Thus, specification tests should precede any analysis. Bierens' specification test is consistent and has optimality properties against some local alternatives. A shortcoming is that the test statistic is not distribution free, even asymptotically, which makes the test infeasible in practice. There have been many suggestions to circumvent this problem, including the use of upper bounds for the critical values. However, these suggestions lead to tests that lose power and optimality against local alternatives. In this paper we show that bootstrap methods allow us to recover the power and optimality of Bierens' original test. The bootstrap also provides reliable p-values, which have a central role in Fisher's theory of hypothesis testing. The paper also includes a discussion of the properties of the bootstrap nonlinear least squares estimator under local alternatives.

8.
In this paper, we review and propose several interval estimators for estimating the difference of means of two skewed populations. Estimators include the ordinary t, two versions proposed by Welch [17] and Satterthwaite [15], three versions proposed by Zhou and Dinh [18], Johnson [9], Hall [8], empirical likelihood (EL), a bootstrap version of EL, the median t proposed by Baklizi and Kibria [2], and a bootstrap version of the median t. A Monte Carlo simulation study is conducted to compare the performance of the proposed interval estimators. Some real-life health-related data are considered to illustrate the application of the methods. Based on our findings, some good interval estimators for estimating the mean difference of two populations are recommended to researchers.

9.
A method of bootstrapping the two-sample t-test after a Box-Cox transformation is proposed. The procedure is shown to be consistent and asymptotically as efficient as the non-bootstrapped Box-Cox t-test. Because the bootstrap samples are drawn without the assumption of the same distributional shapes, the procedure may be more robust against violation of this assumption. Simulation results support this conjecture.
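A hedged sketch of the procedure. The pooled ML estimate of the Box-Cox λ and the centring device for the bootstrap p-value are standard choices, not necessarily the authors' exact algorithm; note that each group is resampled separately, so no common-shape assumption is imposed:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
x = rng.lognormal(mean=0.0, sigma=0.8, size=40)   # two positive, skewed samples
y = rng.lognormal(mean=0.0, sigma=1.2, size=50)
B = 999

def boxcox_t(x, y):
    """Two-sample t statistic after a common ML Box-Cox transformation."""
    _, lam = stats.boxcox(np.concatenate([x, y]))  # pooled ML estimate of lambda
    tx = stats.boxcox(x, lmbda=lam)
    ty = stats.boxcox(y, lmbda=lam)
    return stats.ttest_ind(tx, ty, equal_var=True).statistic

t_obs = boxcox_t(x, y)

# Bootstrap: resample each group separately (shapes may differ) and re-estimate
# lambda and t each time; centre the resampled statistics at t_obs.
t_star = np.array([boxcox_t(rng.choice(x, x.size), rng.choice(y, y.size))
                   for _ in range(B)])
p = np.mean(np.abs(t_star - t_obs) >= np.abs(t_obs))
print(f"t = {t_obs:.3f}, bootstrap p = {p:.3f}")
```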

10.
Distributions of a response y (height, for example) differ with values of a factor t (such as age). Given a response y* for a subject of unknown t*, the objective of inverse prediction is to infer the value of t* and to provide a defensible confidence set for it. Training data provide values of y observed on subjects at known values of t. Models relating the mean and variance of y to t can be formulated as mixed (fixed and random) models in terms of sets of functions of t, such as polynomial spline functions. A confidence set on t* can then be obtained as those hypothetical values of t for which y* is not detected as an outlier when compared to the model fit to the training data. With nonconstant variance, the p-values for these tests are approximate. This article describes how versatile models for this problem can be formulated so that the computations can be accomplished with widely available software for mixed models, such as SAS PROC MIXED. Coverage probabilities of confidence sets on t* are illustrated in an example.
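A simplified sketch of the inversion idea: fit the training model, then keep those hypothetical t values at which y* is not flagged as an outlier. This version assumes a fixed-effects polynomial with constant variance and made-up training data; the article's mixed-model formulation with spline terms and nonconstant variance is not reproduced:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
t_train = np.linspace(2, 18, 120)                     # e.g. age (hypothetical)
y_train = 80 + 9 * t_train - 0.18 * t_train ** 2 + rng.normal(0, 4, t_train.size)

def design(t, deg=2):
    t = np.atleast_1d(t)
    return np.vander(t, deg + 1, increasing=True)     # columns [1, t, t^2]

X = design(t_train)
beta, *_ = np.linalg.lstsq(X, y_train, rcond=None)
resid = y_train - X @ beta
df = X.shape[0] - X.shape[1]
s2 = resid @ resid / df
XtX_inv = np.linalg.inv(X.T @ X)

def outlier_t(y_new, t0):
    """Studentized discrepancy of y_new against the fit at hypothetical t0."""
    x0 = design(t0)[0]
    se = np.sqrt(s2 * (1.0 + x0 @ XtX_inv @ x0))      # prediction s.e.
    return (y_new - x0 @ beta) / se

# Confidence set on t*: hypothetical t values where y* is NOT detected as an
# outlier at level alpha (inverting the prediction-interval test on a grid).
y_star, alpha = 160.0, 0.05
grid = np.linspace(2, 18, 1601)
crit = stats.t.ppf(1 - alpha / 2, df)
accept = np.array([abs(outlier_t(y_star, t0)) <= crit for t0 in grid])
if accept.any():   # the set can be a union of intervals; min/max for brevity
    print(f"95% confidence set for t*: approximately "
          f"[{grid[accept].min():.2f}, {grid[accept].max():.2f}]")
else:
    print("95% confidence set for t* is empty at this level")
```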

11.
Multivariate mixture regression models can be used to investigate the relationships between two or more response variables and a set of predictor variables by taking into consideration unobserved population heterogeneity. It is common to take multivariate normal distributions as mixing components, but this mixing model is sensitive to heavy-tailed errors and outliers. Although normal mixture models can approximate any distribution in principle, the number of components needed to account for heavy-tailed distributions can be very large. Mixture regression models based on the multivariate t distributions can be considered as a robust alternative approach. Missing data are inevitable in many situations and parameter estimates could be biased if the missing values are not handled properly. In this paper, we propose a multivariate t mixture regression model with missing information to model heterogeneity in regression function in the presence of outliers and missing values. Along with the robust parameter estimation, our proposed method can be used for (i) visualization of the partial correlation between response variables across latent classes and heterogeneous regressions, and (ii) outlier detection and robust clustering even under the presence of missing values. We also propose a multivariate t mixture regression model using MM-estimation with missing information that is robust to high-leverage outliers. The proposed methodologies are illustrated through simulation studies and real data analysis.

12.
In this paper we consider and propose some confidence intervals for estimating the mean or difference of means of skewed populations. We extend the median t interval to the two-sample problem. Further, we suggest using the bootstrap to find the critical points for use in the calculation of median t intervals. A simulation study has been conducted to compare the performance of the intervals, and a real-life example is considered to illustrate the application of the methods.
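A sketch of the bootstrap-critical-point device, shown on the ordinary two-sample t pivot. The paper's median t replaces the means with medians and a matching scale estimate, but the resampling recipe for the critical points is the same:

```python
import numpy as np

rng = np.random.default_rng(8)
x = rng.lognormal(0.0, 1.0, size=35)     # skewed populations
y = rng.lognormal(0.3, 1.0, size=45)
B, alpha = 1999, 0.05

def se_diff(x, y):
    return np.sqrt(x.var(ddof=1) / x.size + y.var(ddof=1) / y.size)

d_hat = x.mean() - y.mean()
se_hat = se_diff(x, y)

# Bootstrap the t-type pivot to get data-driven critical points in place of
# Student-t quantiles.
t_star = np.empty(B)
for b in range(B):
    xb, yb = rng.choice(x, x.size), rng.choice(y, y.size)
    t_star[b] = ((xb.mean() - yb.mean()) - d_hat) / se_diff(xb, yb)

lo, hi = np.quantile(t_star, [alpha / 2, 1 - alpha / 2])
print(f"95% bootstrap-t CI for mu_x - mu_y: "
      f"[{d_hat - hi * se_hat:.3f}, {d_hat - lo * se_hat:.3f}]")
```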

13.
In this article, we assume that the distribution of the error terms is skew t in two-way analysis of variance (ANOVA). The skew t distribution is very flexible for modeling both symmetric and skewed datasets, since it reduces to the well-known normal, skew normal, and Student's t distributions. We obtain the estimators of the model parameters by using the maximum likelihood (ML) and the modified maximum likelihood (MML) methodologies. We also propose new test statistics based on these estimators for testing the equality of the treatment and block means as well as the interaction effect. The efficiencies of the ML and MML estimators and the power values of the test statistics based on them are compared with the corresponding normal-theory results via a Monte Carlo simulation study. Simulation results show that the proposed methodologies are preferable. We also show that the test statistics based on the ML estimators are more powerful than those based on the MML estimators, as expected; however, the power values of the MML-based test statistics are very close to those of the corresponding ML-based statistics. At the end of the study, a real-life example is given to show the implementation of the proposed methodologies.

14.
Traditionally, when applying the two-sample t test, some pre-testing occurs: the theory-based assumptions of normal distributions and of homogeneity of the variances are often tested in the applied sciences before the intended t test. But this paper shows that such pre-testing leads to unknown final type-I and type-II risks if the respective statistical tests are performed using the same set of observations. To get an impression of the extent of the resulting misinterpreted risks, some theoretical deductions are given and, in particular, a systematic simulation study is carried out. As a result, we propose applying no pre-tests for the t test, and no t test at all, but instead using the Welch test as a standard test: its power comes close to that of the t test when the variances are homogeneous, and for unequal variances and skewness values |γ1| < 3 it keeps the so-called 20% robustness, whereas neither the t test nor Wilcoxon's U test can be recommended for most cases.
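A minimal example of adopting the Welch test as the default, with no pre-testing of normality or variance homogeneity (scipy's equal_var=False gives the Welch statistic; the data are illustrative):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
a = rng.normal(0.0, 1.0, size=30)
b = rng.normal(0.0, 3.0, size=50)       # unequal variances, unequal sizes

# Welch's test applied directly -- no preliminary normality or variance test.
res = stats.ttest_ind(a, b, equal_var=False)
print(f"Welch t = {res.statistic:.3f}, p = {res.pvalue:.3f}")
```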

15.
A Bayesian analysis is provided for the Wilcoxon signed-rank statistic (T+). The Bayesian analysis is based on a sign-bias parameter φ on the (0, 1) interval. For the case of a uniform prior probability distribution for φ and for small sample sizes (i.e., 6 ≤ n ≤ 25), values for the statistic T+ are computed that enable probabilistic statements about φ. For larger sample sizes, approximations are provided for the asymptotic likelihood function P(T+|φ) as well as for the posterior distribution P(φ|T+). Power analyses are examined both for properly specified Gaussian sampling and for misspecified non-Gaussian models. The new Bayesian metric has high power efficiency in the range of 0.9–1 relative to a standard t test when there is Gaussian sampling. But if the sampling is from an unknown and misspecified distribution, then the new statistic still has high power; in some cases, the power can be higher than that of the t test (especially for probability mixtures and heavy-tailed distributions). The new Bayesian analysis is thus a useful and robust method for applications where the usual parametric assumptions are questionable. These properties further enable a generic Bayesian analysis for many non-Gaussian distributions that currently lack a formal Bayesian model.
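A sketch of the small-sample computation under one natural reading of the sign-bias model — each rank i enters T+ independently with probability φ — using an exact dynamic-programming likelihood and a grid posterior with a uniform prior. The observed n and T+ below are hypothetical:

```python
import numpy as np

def signed_rank_pmf(n, phi):
    """P(T+ = t | phi) when T+ = sum of ranks i carrying independent Bern(phi) signs."""
    pmf = np.zeros(n * (n + 1) // 2 + 1)
    pmf[0] = 1.0
    for i in range(1, n + 1):            # convolve in the contribution of rank i
        new = pmf * (1.0 - phi)          # rank i not included
        new[i:] += pmf[: pmf.size - i] * phi   # rank i included, shifts sum by i
        pmf = new
    return pmf

# Grid posterior over the sign-bias parameter phi, uniform prior on (0, 1).
n, t_plus = 15, 95                       # hypothetical observed Wilcoxon T+
phi_grid = np.linspace(0.001, 0.999, 999)
like = np.array([signed_rank_pmf(n, p)[t_plus] for p in phi_grid])
post = like / like.sum()                 # uniform prior: normalised likelihood
print("posterior mean of phi:", (phi_grid * post).sum())
print("P(phi > 0.5 | T+):    ", post[phi_grid > 0.5].sum())
```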

16.
The Lagrange Multiplier (LM) test is one of the principal tools to detect ARCH and GARCH effects in financial data analysis. However, when the underlying data are non-normal, which is often the case in practice, the asymptotic LM test, based on the χ²-approximation of critical values, is known to perform poorly, particularly for small and moderate sample sizes. In this paper we propose to employ two re-sampling techniques to find critical values of the LM test, namely permutation and bootstrap. We derive the properties of exactness and asymptotic correctness for the permutation and bootstrap LM tests, respectively. Our numerical studies indicate that the proposed re-sampled algorithms significantly improve the size and power of the LM test in both skewed and heavy-tailed processes. We also illustrate our new approaches with an application to the analysis of the Euro/USD currency exchange rates and the German stock index. The Canadian Journal of Statistics 40: 405–426; 2012 © 2012 Statistical Society of Canada
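A sketch of the permutation version (the bootstrap version differs only in resampling with replacement). Permuting the series destroys any conditional heteroskedasticity while preserving the marginal distribution exactly; the LM statistic is Engle's T·R² form:

```python
import numpy as np

rng = np.random.default_rng(10)
T, q, B = 300, 2, 999
r = rng.standard_t(df=5, size=T)         # heavy-tailed returns, no ARCH (H0)

def lm_arch(e, q):
    """Engle's LM statistic: (T - q) * R^2 from regressing e_t^2 on its q lags."""
    e2 = e ** 2
    Y = e2[q:]
    X = np.column_stack([np.ones(Y.size)] +
                        [e2[q - j:-j] for j in range(1, q + 1)])
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ beta
    r2 = 1.0 - resid @ resid / ((Y - Y.mean()) @ (Y - Y.mean()))
    return Y.size * r2

lm_obs = lm_arch(r, q)
# Permutation null distribution of the LM statistic.
lm_perm = np.array([lm_arch(rng.permutation(r), q) for _ in range(B)])
p_perm = (1 + np.sum(lm_perm >= lm_obs)) / (B + 1)
print(f"LM = {lm_obs:.2f}, permutation p = {p_perm:.3f}")
```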

17.
We study various bootstrap and permutation methods for matched pairs, whose distributions can have different shapes even under the null hypothesis of no treatment effect. Although the data may not be exchangeable under the null, we investigate different permutation approaches as valid procedures for finite sample sizes. It will be shown that permutation or bootstrap schemes, which neglect the dependency structure in the data, are asymptotically valid. Simulation studies show that these new tests improve the power of the t-test under non-normality.
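A sketch of the standard sign-flipping scheme for matched pairs. Sign-flipping is exact only under symmetry of the paired differences; the paper's point is precisely that schemes of this kind, which neglect parts of the shape or dependence structure, remain asymptotically valid more generally:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
n, B = 30, 4999
pre = rng.exponential(1.0, size=n)        # skewed, matched observations
post = 0.6 * pre + rng.exponential(0.7, size=n)
d = post - pre                            # within-pair differences

t_obs = stats.ttest_1samp(d, 0.0).statistic

# Sign-flipping permutation: randomly flip the sign of each within-pair
# difference; the pairing itself (the dependence) is never broken.
signs = rng.choice([-1.0, 1.0], size=(B, n))
t_perm = np.array([stats.ttest_1samp(s * d, 0.0).statistic for s in signs])
p = (1 + np.sum(np.abs(t_perm) >= abs(t_obs))) / (B + 1)
print(f"paired t = {t_obs:.3f}, permutation p = {p:.3f}")
```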

18.
A major use of the bootstrap methodology is in the construction of nonparametric confidence intervals. Although no consensus has yet been reached on the best way to proceed, theoretical and empirical evidence indicates that bootstrap-t intervals provide a reasonable solution to this problem. However, when applied to small data sets, these intervals can be unusually wide and unstable. The author presents techniques for stabilizing bootstrap-t intervals for small samples. His methods are motivated theoretically and investigated through simulations.

19.
In this paper we evaluate the power of the Mann-Whitney test in the shift model G(x) = F(x + θ) for all x, where the distribution G is obtained by shifting F by an amount θ.

The bootstrap method is used to evaluate the power of the Mann-Whitney test. A comparison among the bootstrap power, the asymptotic power of the Mann-Whitney test, and the power of the t-test shows that the bootstrap is a better technique because it does not require the assumption of normality.
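A sketch of estimating the Mann-Whitney test's power by bootstrap, resampling from a single skewed reference sample under the shift model. The reference distribution, shift θ, and sample sizes are illustrative; the paper's exact resampling design may differ:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(12)
m = n = 25
alpha, theta, B = 0.05, 0.8, 1000

# Reference sample standing in for F (no normality assumed anywhere).
x0 = rng.exponential(1.0, size=200)

def reject_once(rng):
    xs = rng.choice(x0, size=m)            # resample from F
    ys = rng.choice(x0, size=n) - theta    # shift model: G(x) = F(x + theta)
    return stats.mannwhitneyu(xs, ys, alternative="two-sided").pvalue < alpha

# Bootstrap power estimate: rejection frequency over resampled data sets.
power = np.mean([reject_once(rng) for _ in range(B)])
print(f"estimated Mann-Whitney power at theta = {theta}: {power:.3f}")
```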

20.
Consider an inhomogeneous Poisson process X on [0, T] whose unknown intensity function "switches" from a lower function g* to an upper function h* at some unknown point θ* that has to be identified. We consider two known continuous functions g and h such that g*(t) ≤ g(t) < h(t) ≤ h*(t) for 0 ≤ t ≤ T. We describe the behavior of the generalized likelihood ratio and Wald's tests constructed on the basis of a misspecified model in the asymptotics of large samples. The power functions are studied under local alternatives and compared numerically with the help of simulations. We also show the following robustness result: the Type I error rate is preserved even though a misspecified model is used to construct the tests.
