Similar Literature
 20 similar documents found.
1.
The linear regression model is commonly used in applications. One of the assumptions made is that the error variances are constant across all observations. This assumption, known as homoskedasticity, is frequently violated in practice. A commonly used strategy is to estimate the regression parameters by ordinary least squares and to compute standard errors that deliver asymptotically valid inference under both homoskedasticity and heteroskedasticity of an unknown form. Several consistent standard errors have been proposed in the literature, and evaluated in numerical experiments based on their point estimation performance and on the finite sample behaviour of associated hypothesis tests. We build upon the existing literature by constructing heteroskedasticity-consistent interval estimators and numerically evaluating their finite sample performance. Different bootstrap interval estimators are also considered. The numerical results favour the HC4 interval estimator.
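A minimal sketch of how such interval estimators can be computed (the helper name hc_interval and the NumPy/SciPy implementation are illustrative assumptions, not the paper's code; the HC4 exponent delta_i = min(4, n*h_i/p) follows the form usually attributed to Cribari-Neto):

    import numpy as np
    from scipy.stats import norm

    def hc_interval(X, y, j=1, kind="HC4", level=0.95):
        """OLS fit plus a heteroskedasticity-consistent CI for coefficient j."""
        n, p = X.shape
        XtX_inv = np.linalg.inv(X.T @ X)
        beta = XtX_inv @ X.T @ y
        e = y - X @ beta                                  # OLS residuals
        h = np.einsum("ij,jk,ik->i", X, XtX_inv, X)       # leverages h_i
        if kind == "HC0":
            omega = e**2
        elif kind == "HC1":
            omega = e**2 * n / (n - p)
        elif kind == "HC2":
            omega = e**2 / (1 - h)
        elif kind == "HC3":
            omega = e**2 / (1 - h)**2
        else:                                             # HC4
            delta = np.minimum(4.0, n * h / p)
            omega = e**2 / (1 - h)**delta
        cov = XtX_inv @ X.T @ np.diag(omega) @ X @ XtX_inv   # sandwich estimator
        se = np.sqrt(cov[j, j])
        z = norm.ppf(0.5 + level / 2)                     # asymptotic normal quantile
        return beta[j] - z * se, beta[j] + z * se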

2.
The assumption that all random errors in the linear regression model share the same variance (homoskedasticity) is often violated in practice. The ordinary least squares estimator of the vector of regression parameters remains unbiased, consistent and asymptotically normal under unequal error variances. Many practitioners then choose to base their inferences on such an estimator. The usual practice is to couple it with an asymptotically valid estimation of its covariance matrix, and then carry out hypothesis tests that are valid under heteroskedasticity of unknown form. We use numerical integration methods to compute the exact null distributions of some quasi-t test statistics, and propose a new covariance matrix estimator. The numerical results favor testing inference based on the estimator we propose.
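The exact-distribution computation rests on a standard device: under normal errors, the squared quasi-t statistic is a ratio of quadratic forms in the error vector. In notation we assume here (not taken verbatim from the paper), with $\hat\beta-\beta=(X'X)^{-1}X'\varepsilon$ and $a = X(X'X)^{-1}c$ for the null hypothesis $c'\beta = 0$,

$$ t^2 = \frac{(c'\hat\beta)^2}{\widehat{\operatorname{Var}}(c'\hat\beta)} = \frac{\varepsilon' a a' \varepsilon}{\varepsilon' B \varepsilon}, \qquad \Pr(t^2 \le \gamma) = \Pr\{\varepsilon'(aa' - \gamma B)\varepsilon \le 0\}, $$

where $B$ is the matrix implied by the chosen covariance estimator (each HC weight is itself a quadratic form in $\hat e = M\varepsilon$). The right-hand side is the probability of a single quadratic form in normal variates being nonpositive, which can be evaluated by numerical integration, e.g. via Imhof's (1961) algorithm.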

3.
We consider the issue of performing testing inferences on the parameters that index the linear regression model under heteroskedasticity of unknown form. Quasi-t test statistics use asymptotically correct standard errors obtained from heteroskedasticity-consistent covariance matrix estimators. An alternative approach involves making an assumption about the functional form of the response variances and jointly modelling mean and dispersion effects. In this paper we compare the accuracy of testing inferences made using the two approaches. We consider several different quasi-t tests and also z tests performed after estimated generalized least squares estimation, carried out using three different estimation strategies. The numerical evidence shows that some quasi-t tests are typically considerably less size distorted in small samples than the tests carried out after jointly modelling mean and dispersion effects. Finally, we present and discuss two empirical applications.

4.
The paper examines the behavior of a generalized version of the nonlinear IV unit root test proposed by Chang (2002) when the series’ errors exhibit nonstationary volatility. The leading case of such nonstationary volatility concerns structural breaks in the error variance. We show that the generalized test is not robust to variance changes in general, and illustrate the extent of the resulting size distortions in finite samples. More importantly, we show that pivotality is recovered when using Eicker-White heteroskedasticity-consistent standard errors. This contrasts with the case of Dickey-Fuller unit root tests, for which Eicker-White standard errors do not produce robustness and thus require computationally costly corrections such as the (wild) bootstrap or estimation of the so-called variance profile. The pivotal versions of the generalized IV tests – with or without the correct standard errors – do, however, have no power in $1/T$-neighbourhoods of the null. We also study the validity of panel versions of the tests considered here.

5.
In this paper, we investigate the properties of the Granger causality test in stationary and stable vector autoregressive models in the presence of spillover effects, that is, causality in variance. The Wald test and the WW test (the Wald test with White's heteroskedasticity-consistent covariance matrix estimator imposed) are analyzed. The investigation is undertaken using Monte Carlo simulation with two different sample sizes and six different kinds of data-generating processes. The results show that the Wald test over-rejects the null hypothesis both with and without the spillover effect, and that the over-rejection in the latter case is more severe in larger samples. The size properties of the WW test are satisfactory when there is spillover between the variables. Only when there is feedback in the variance is the size of the WW test slightly affected. The Wald test is shown to have higher power than the WW test when the errors follow a GARCH(1,1) process without a spillover effect. When there is a spillover, the power of both tests deteriorates, which implies that the spillover has a negative effect on the causality tests.
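A sketch of the two statistics being compared, in a minimal bivariate VAR(1) equation (the function name, lag length and NumPy implementation are illustrative assumptions, not the paper's Monte Carlo design):

    import numpy as np

    def granger_wald(y1, y2, robust=True):
        """Wald test that lagged y2 does not Granger-cause y1.

        robust=False: standard Wald test; robust=True: the 'WW' idea, i.e.
        the same statistic with White's heteroskedasticity-consistent
        covariance matrix estimator."""
        Y = y1[1:]
        X = np.column_stack([np.ones(len(Y)), y1[:-1], y2[:-1]])
        n, p = X.shape
        XtX_inv = np.linalg.inv(X.T @ X)
        b = XtX_inv @ X.T @ Y
        e = Y - X @ b
        if robust:
            V = XtX_inv @ X.T @ np.diag(e**2) @ X @ XtX_inv  # White (HC0)
        else:
            V = XtX_inv * (e @ e) / (n - p)                  # homoskedastic
        R = np.array([[0.0, 0.0, 1.0]])          # H0: coefficient on y2[t-1] = 0
        w = (R @ b) @ np.linalg.inv(R @ V @ R.T) @ (R @ b)   # chi2(1) under H0
        return float(w)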

6.
Variational Bayes (VB) estimation is a fast alternative to Markov chain Monte Carlo for performing approximate Bayesian inference. This procedure can be an efficient and effective means of analyzing large datasets. However, VB estimation is often criticised, typically on empirical grounds, for being unable to produce valid statistical inferences. In this article we refute this criticism for one of the simplest models where Bayesian inference is not analytically tractable, namely the Bayesian linear model (for a particular choice of priors). We prove that under mild regularity conditions, VB-based estimators enjoy some desirable frequentist properties, such as consistency, and can be used to obtain asymptotically valid standard errors. In addition to these results we introduce two VB information criteria: the variational Akaike information criterion and the variational Bayesian information criterion. We show that the variational Akaike information criterion is asymptotically equivalent to the frequentist Akaike information criterion and that the variational Bayesian information criterion is first-order equivalent to the Bayesian information criterion in linear regression. These results motivate the potential use of the variational information criteria for more complex models. We support our theoretical results with numerical examples.
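A minimal coordinate-ascent sketch for the simplest concrete case — the Bayesian linear model with beta ~ N(0, gI) independent of the noise precision tau ~ Gamma(a0, b0). These priors and all names are illustrative assumptions; the article's particular prior choice may differ:

    import numpy as np

    def vb_linear(X, y, g=100.0, a0=0.01, b0=0.01, iters=50):
        """Mean-field VB, q(beta, tau) = q(beta) q(tau), for y = X beta + eps."""
        n, p = X.shape
        XtX, Xty = X.T @ X, X.T @ y
        e_tau = a0 / b0                                     # initial E_q[tau]
        for _ in range(iters):
            S = np.linalg.inv(e_tau * XtX + np.eye(p) / g)  # q(beta) covariance
            m = e_tau * S @ Xty                             # q(beta) mean
            a = a0 + 0.5 * n
            b = b0 + 0.5 * (np.sum((y - X @ m)**2) + np.trace(XtX @ S))
            e_tau = a / b                                   # E_q[tau] update
        return m, S, a, b

The VB standard errors are sqrt(diag(S)); consistency results of the kind described above are what license treating m_j ± 1.96·sqrt(S_jj) as an asymptotically valid interval.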

7.
The linear regression model is commonly used by practitioners to model the relationship between the variable of interest and a set of explanatory variables. The assumption that all error variances are the same (homoskedasticity) is oftentimes violated. Consistent regression standard errors can be computed using the heteroskedasticity-consistent covariance matrix estimator proposed by White (1980). Such standard errors, however, typically display nonnegligible systematic errors in finite samples, especially under leveraged data. Cribari-Neto et al. (2000) improved upon the White estimator by defining a sequence of bias-adjusted estimators with increasing accuracy. In this paper, we improve upon their main result by defining an alternative sequence of adjusted estimators whose biases vanish at a much faster rate. Hypothesis testing inference is also addressed. An empirical illustration is presented.

8.
We examine the asymptotic and small sample properties of model-based and robust tests of the null hypothesis of no randomized treatment effect based on the partial likelihood arising from an arbitrarily misspecified Cox proportional hazards model. When the distribution of the censoring variable is either conditionally independent of the treatment group given covariates or conditionally independent of covariates given the treatment group, the numerators of the partial likelihood treatment score and Wald tests have asymptotic mean equal to 0 under the null hypothesis, regardless of whether or how the Cox model is misspecified. We show that the model-based variance estimators used in the calculation of the model-based tests are not, in general, consistent under model misspecification, yet using analytic considerations and simulations we show that their true sizes can be as close to the nominal value as tests calculated with robust variance estimators. As a special case, we show that the model-based log-rank test is asymptotically valid. When the Cox model is misspecified and the distribution of censoring depends on both treatment group and covariates, the asymptotic distributions of the resulting partial likelihood treatment score statistic and maximum partial likelihood estimator do not, in general, have a zero mean under the null hypothesis. Here neither the fully model-based tests, including the log-rank test, nor the robust tests will be asymptotically valid, and we show through simulations that the distortion to test size can be substantial.

9.
Hotelling's T² test is known to be optimal under multivariate normality and is reasonably validity-robust when the assumption fails. However, some recently introduced robust test procedures have superior power properties and reasonable type I error control with non-normal populations. These, including the tests due to Tiku & Singh (1982), Tiku & Balakrishnan (1988) and Mudholkar & Srivastava (1999b, c), are asymptotically valid but are useful with moderate size samples only if the population dimension is small. A class of B-optimal modifications of the stepwise alternatives to Hotelling's T² introduced by Mudholkar & Subbaiah (1980) is simple to implement and essentially equivalent to the T² test even with small samples. In this paper we construct and study the robust versions of these modified stepwise tests using trimmed means instead of sample means. We use the robust one- and two-sample trimmed-t procedures as in Mudholkar et al. (1991) and propose statistics based on combining them. The results of an extensive Monte Carlo experiment show that the robust alternatives provide excellent type I error control and a substantial gain in power.
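For reference, the classical one-sample statistic that these procedures modify is $T^2 = n\,(\bar{x}-\mu_0)' S^{-1} (\bar{x}-\mu_0)$, with $\bar{x}$ the sample mean vector and $S$ the sample covariance matrix; the robust versions discussed here replace $\bar{x}$ by a trimmed mean and studentize accordingly (this schematic description is ours, not the papers' exact construction).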

10.
The article studies a time-varying coefficient time series model in which some of the covariates are measured with additive errors. To overcome the bias in the estimators of the coefficient functions that arises when measurement errors are ignored, we propose a modified least squares estimator based on wavelet procedures. The advantage of the wavelet method is that it avoids the restrictive smoothness requirements imposed on the varying-coefficient functions by traditional smoothing approaches, such as kernel and local polynomial methods. The asymptotic properties of the proposed wavelet estimators are established under α-mixing conditions and without specifying the error distribution. These results can be used to make asymptotically valid statistical inference.

11.
Joachim Bellach. Statistics, 2013, 47(2): 277–291
Some ways of Studentizing the parametric c-sample tests (c ≥ 2) for location are examined and their asymptotic properties established. A way of Studentizing the c-sample Puri test based on the ranks of the observations is proposed. The resulting test is shown to be asymptotically valid and consistent for a reasonable class of alternatives.

12.
A commonly used procedure in a wide class of empirical applications is to impute unobserved regressors, such as expectations, from an auxiliary econometric model. This two-step (T-S) procedure fails to account for the fact that imputed regressors are measured with sampling error, so hypothesis tests based on the estimated covariance matrix of the second-step estimator are biased, even in large samples. We present a simple yet general method of calculating asymptotically correct standard errors in T-S models. The procedure may be applied even when joint estimation methods, such as full information maximum likelihood, are inappropriate or computationally infeasible. We present two examples from recent empirical literature in which these corrections have a major impact on hypothesis testing.

13.
The assumption that all errors share the same variance (homoskedasticity) is commonly violated in empirical analyses carried out using the linear regression model. A widely adopted modeling strategy is to perform point estimation by ordinary least squares and then perform testing inference based on these point estimators and heteroskedasticity-consistent standard errors. These tests, however, tend to be size-distorted when the sample size is small and the data contain atypical observations. Furno (1996, Journal of Statistical Computation and Simulation, 54: 115–128) suggested performing point estimation using a weighted least squares mechanism in order to attenuate the effect of leverage points on the associated inference. In this article, we follow up on her proposal and define heteroskedasticity-consistent covariance matrix estimators based on residuals obtained using robust estimation methods. We report Monte Carlo simulation results (size and power) on the finite sample performance of different heteroskedasticity-robust tests. Overall, the results favor inference based on HC0 tests constructed using robust residuals.
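A sketch of the core idea — plug residuals from a robust fit into the HC0 sandwich (the choice of statsmodels' Huber M-estimator here is an illustrative stand-in for Furno's weighted least squares step, not her exact scheme):

    import numpy as np
    import statsmodels.api as sm

    def hc0_robust_residuals(X, y):
        """HC0-type sandwich built from Huber M-estimation residuals, so
        atypical observations influence the variance weights less than OLS
        residuals would."""
        Xc = sm.add_constant(X)
        rfit = sm.RLM(y, Xc, M=sm.robust.norms.HuberT()).fit()
        r = rfit.resid                                    # robust residuals
        XtX_inv = np.linalg.inv(Xc.T @ Xc)
        cov = XtX_inv @ Xc.T @ np.diag(r**2) @ Xc @ XtX_inv
        return rfit.params, np.sqrt(np.diag(cov))         # estimates, robust SEs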

14.
To address the problem of heteroskedasticity in linear regression, many different heteroskedasticity-consistent covariance matrix estimators have been proposed, including the HC0 estimator and its variants, such as HC1, HC2, HC3, HC4, HC5 and HC4m. Each variant of the HC0 estimator aims at correcting its tendency to underestimate the true variances. In this paper, a new variant of the HC0 estimator, HC5m, which combines HC5 and HC4m, is proposed. Both the numerical analysis and the empirical analysis show that quasi-t inference based on HC5m is typically more reliable than inferences based on other covariance matrix estimators, regardless of the presence of high leverage points.

15.
The asymptotic distribution of the augmented Dickey–Fuller [ADF] test computed using heteroscedasticity-consistent (White) standard errors is examined. Conditions are given under which the so-called DF-White test and the usual ADF test are asymptotically equivalent under the null hypothesis and under a local alternative. While the small-sample distributions of both tests react sensitively to the degree of persistence in the conditional variance, this is not the case with simple combinations of the ADF and the DF-White tests.
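A sketch of the DF-White construction — the usual ADF regression, with the unit-root t-ratio studentized by Eicker-White standard errors (the lag choice, names and statsmodels implementation are illustrative assumptions):

    import numpy as np
    import statsmodels.api as sm

    def df_white(y, lags=1):
        """t-ratio on y[t-1] in an ADF regression, computed with HC0 SEs."""
        dy = np.diff(y)
        rhs = [y[lags:-1]]                                # y_{t-1}
        for j in range(1, lags + 1):                      # lagged differences
            rhs.append(dy[lags - j:len(dy) - j])
        X = sm.add_constant(np.column_stack(rhs))
        fit = sm.OLS(dy[lags:], X).fit(cov_type="HC0")
        return fit.tvalues[1]                             # DF-White statistic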

16.
Several tests for heteroskedasticity in linear regression models are examined. Asymptotic robustness to heterokurtosis, nonnormality and skewness is discussed. The finite sample reliability of asymptotically valid tests is investigated using Monte Carlo experiments. It is found that asymptotic critical values cannot, in general, be relied upon to give good agreement between nominal and actual finite sample significance levels. The use of the bootstrap overcomes this problem for general approaches that lead to asymptotically pivotal test statistics. Power comparisons are made for bootstrap tests, and modified Glejser and Koenker tests are recommended.
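A sketch of one such bootstrap test — Koenker's studentized n·R² statistic with its null distribution approximated by resampling centred OLS residuals (this simple iid resampling scheme is one possibility; the paper evaluates several bootstrap approaches):

    import numpy as np

    def koenker_stat(X, y):
        """n*R^2 from regressing squared OLS residuals on X (incl. constant)."""
        b = np.linalg.lstsq(X, y, rcond=None)[0]
        e2 = (y - X @ b)**2
        g = np.linalg.lstsq(X, e2, rcond=None)[0]
        r2 = 1 - np.sum((e2 - X @ g)**2) / np.sum((e2 - e2.mean())**2)
        return len(y) * r2

    def bootstrap_pvalue(X, y, B=999, seed=0):
        """Bootstrap p-value under H0 (homoskedasticity)."""
        rng = np.random.default_rng(seed)
        b = np.linalg.lstsq(X, y, rcond=None)[0]
        e = y - X @ b
        e -= e.mean()                                     # centre the residuals
        stat = koenker_stat(X, y)
        boot = [koenker_stat(X, X @ b + rng.choice(e, len(y), replace=True))
                for _ in range(B)]
        return (1 + sum(s >= stat for s in boot)) / (B + 1)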

17.
When some explanatory variables in a regression are correlated with the disturbance term, instrumental variable methods are typically employed to make reliable inferences. Furthermore, to avoid difficulties associated with weak instruments, identification-robust methods are often proposed. However, it is hard to assess whether an instrumental variable is valid in practice, because instrument validity rests on the questionable assumption that at least some of the instruments are exogenous. In this paper, we focus on structural models and analyze the effects of instrument endogeneity on two identification-robust procedures, the Anderson–Rubin (1949, AR) and the Kleibergen (2002, K) tests, with or without weak instruments. Two main setups are considered: (1) the level of “instrument” endogeneity is fixed (does not depend on the sample size), and (2) the instruments are locally exogenous, i.e. the parameter which controls instrument endogeneity approaches zero as the sample size increases. In the first setup, we show that both test procedures are in general consistent against the presence of invalid instruments (hence asymptotically invalid for the hypothesis of interest), whether the instruments are “strong” or “weak”. We also describe cases where test consistency may not hold, but the asymptotic distribution is modified in a way that would lead to size distortions in large samples. These include, in particular, cases where the 2SLS estimator remains consistent, but the AR and K tests are asymptotically invalid. In the second setup, we find (non-degenerate) asymptotic non-central chi-square distributions in all cases, and describe cases where the non-centrality parameter is zero and the asymptotic distribution remains the same as in the case of valid instruments (despite the presence of invalid instruments). Overall, our results underscore the importance of checking for the presence of possibly invalid instruments when applying “identification-robust” tests.
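For context, the Anderson–Rubin statistic in notation we assume here (ours, not necessarily the paper's): with an $n \times k$ instrument matrix $Z$, $P_Z = Z(Z'Z)^{-1}Z'$ and $M_Z = I - P_Z$,

$$ \mathrm{AR}(\beta_0) = \frac{(y - Y\beta_0)'\, P_Z\, (y - Y\beta_0)/k}{(y - Y\beta_0)'\, M_Z\, (y - Y\beta_0)/(n-k)}, $$

which is asymptotically pivotal whatever the instrument strength — provided the instruments are exogenous. That proviso is exactly what the fixed-endogeneity results above show to fail.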

18.
In the presence of missing covariates, standard model validation procedures may result in misleading conclusions. By building generalized score statistics on augmented inverse probability weighted complete-case estimating equations, we develop a new model validation procedure to assess the adequacy of a prescribed analysis model when covariate data are missing at random. The asymptotic distribution and local alternative efficiency for the test are investigated. Under certain conditions, our approach provides not only valid but also asymptotically optimal results. A simulation study for both linear and logistic regression illustrates the applicability and finite sample performance of the methodology. Our method is also employed to analyse a coronary artery disease diagnostic dataset.

19.
This article considers the issue of performing tests in linear heteroskedastic models when the test statistic employs a consistent variance estimator. Several different estimators are considered, namely: HC0, HC1, HC2, HC3, and their bias-adjusted versions. The numerical evaluation is performed using numerical integration methods; the Imhof algorithm is used to that end. The results show that bias-adjusting the variance estimators used to construct test statistics delivers more reliable tests for the HC0 and HC1 estimators, but the same does not hold for the HC3 estimator. Overall, the most reliable test is the HC3-based one.

20.
Typical panel data models make use of the assumption that the regression parameters are the same for each individual cross-sectional unit. We propose tests for slope heterogeneity in panel data models. Our tests are based on the conditional Gaussian likelihood function in order to avoid the incidental parameters problem induced by the inclusion of individual fixed effects for each cross-sectional unit. We derive the Conditional Lagrange Multiplier test, which is valid in cases where N → ∞ and T is fixed. The test applies to both balanced and unbalanced panels. We extend the test to account for general heteroskedasticity, where each cross-sectional unit has its own form of heteroskedasticity. The modification is possible if T is large enough to estimate the regression coefficients for each cross-sectional unit, using the MINQUE unbiased estimator for the regression variances under heteroskedasticity. All versions of the test have a standard normal distribution under general assumptions on the error distribution as N → ∞. A Monte Carlo experiment shows that the test has very good size properties under all specifications considered, including heteroskedastic errors. In addition, the power of our test is very good relative to existing tests, particularly when T is not large.
