Similar Literature
20 similar documents found.
1.
We analyze the use of a quasi-likelihood ratio statistic for a mixture model to test the null hypothesis of one regime versus the alternative of two regimes in a Markov regime-switching context. This test exploits mixture properties implied by the regime-switching process, but ignores certain implied serial correlation properties. When formulated in the natural way, the setting is nonstandard, involving nuisance parameters on the boundary of the parameter space, nuisance parameters identified only under the alternative, or approximations using derivatives higher than second order. We exploit recent advances by Andrews (2001) and contribute to the literature by extending the scope of mixture models, obtaining asymptotic null distributions different from those in the literature. We further provide critical values for popular models or bounds for tail probabilities that are useful in constructing conservative critical values for regime-switching tests. We compare the size and power of our statistics to other useful tests for regime switching via Monte Carlo methods and find relatively good performance. We apply our methods to reexamine the classic cartel study of Porter (1983) and reaffirm Porter's findings.
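Below is a minimal sketch of the mixture side of this idea, assuming serial correlation in the regime process is ignored (as the abstract describes): a two-component normal mixture is fitted by EM and compared with a single normal fit through a quasi-likelihood ratio. The data-generating process, starting values, and EM details are illustrative, and the nonstandard critical values discussed above are not reproduced.

import numpy as np
from scipy import stats

def qlr_one_vs_two_regimes(y, n_iter=200, tol=1e-8):
    """Quasi-LR statistic: two-component normal mixture vs. a single normal fit."""
    y = np.asarray(y, dtype=float)
    n = y.size

    # One-regime (single normal) quasi-log-likelihood.
    ll_one = stats.norm.logpdf(y, y.mean(), y.std()).sum()

    # Two-regime mixture fitted by EM, started from a split at the median.
    srt = np.sort(y)
    mu = np.array([srt[: n // 2].mean(), srt[n // 2:].mean()])
    sig = np.array([y.std(), y.std()])
    p = 0.5
    ll_old = -np.inf
    for _ in range(n_iter):
        dens = np.column_stack([p * stats.norm.pdf(y, mu[0], sig[0]),
                                (1 - p) * stats.norm.pdf(y, mu[1], sig[1])])
        ll_two = np.log(dens.sum(axis=1)).sum()
        if ll_two - ll_old < tol:
            break
        ll_old = ll_two
        w = dens / dens.sum(axis=1, keepdims=True)      # E-step: posterior regime probabilities
        p = w[:, 0].mean()                              # M-step: update weight, means, variances
        for j in range(2):
            mu[j] = np.average(y, weights=w[:, j])
            sig[j] = np.sqrt(np.average((y - mu[j]) ** 2, weights=w[:, j]))
    # Truncate at zero: the one-regime model is nested in the mixture.
    return max(0.0, 2.0 * (ll_two - ll_one))

rng = np.random.default_rng(0)
y_null = rng.normal(size=300)          # data generated under the one-regime null
print("QLR statistic:", qlr_one_vs_two_regimes(y_null))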

2.
This paper establishes the asymptotic distribution of an extremum estimator when the true parameter lies on the boundary of the parameter space. The boundary may be linear, curved, and/or kinked. Typically the asymptotic distribution is a function of a multivariate normal distribution in models without stochastic trends and a function of a multivariate Brownian motion in models with stochastic trends. The results apply to a wide variety of estimators and models. Examples treated in the paper are: (i) quasi-ML estimation of a random coefficients regression model with some coefficient variances equal to zero and (ii) LS estimation of an augmented Dickey-Fuller regression with unit root and time trend parameters on the boundary of the parameter space.
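As a small illustration of the boundary effect in example (i), the sketch below simulates the simplest version of the problem, assuming the parameter is a nonnegative "excess variance" whose true value is zero, so the constrained estimator is the unrestricted estimate censored at zero. The design is a stylized stand-in, not the paper's model: the estimator sits exactly on the boundary in roughly half of the samples, and its scaled distribution is a censored normal rather than a normal.

import numpy as np

# True model: y ~ N(0, 1 + theta) with theta >= 0 and true theta = 0 (boundary case).
rng = np.random.default_rng(0)
n, reps = 500, 5000
theta_hat = np.empty(reps)
for r in range(reps):
    y = rng.normal(size=n)
    theta_hat[r] = max(0.0, y.var() - 1.0)   # unrestricted estimate censored at the boundary

scaled = np.sqrt(n) * theta_hat
print("share of estimates exactly on the boundary:", np.mean(theta_hat == 0.0))   # about 1/2
print("mean of sqrt(n)*theta_hat:", scaled.mean(),
      " limit E[max(0, Z)], Z ~ N(0, 2):", np.sqrt(1.0 / np.pi))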

3.
We consider tests of a simple null hypothesis on a subset of the coefficients of the exogenous and endogenous regressors in a single-equation linear instrumental variables regression model with potentially weak identification. Existing methods of subset inference (i) rely on the assumption that the parameters not under test are strongly identified, or (ii) are based on projection-type arguments. We show that, under homoskedasticity, the subset Anderson and Rubin (1949) test that replaces unknown parameters by limited information maximum likelihood estimates has correct asymptotic size without imposing additional identification assumptions, but that the corresponding subset Lagrange multiplier test is size distorted asymptotically.
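For orientation, here is a minimal sketch of the homoskedastic Anderson-Rubin statistic for the full coefficient vector; in the subset version discussed above, the coefficients not under test would first be replaced by their limited information maximum likelihood estimates before forming the same quadratic form (that step is not implemented here). The simulated design and variable names are illustrative.

import numpy as np
from scipy import stats

def anderson_rubin(y, X, Z, beta0):
    """Homoskedastic AR statistic for H0: beta = beta0 in y = X beta + u with instruments Z."""
    n, k = Z.shape
    e0 = y - X @ beta0
    PZ_e0 = Z @ np.linalg.solve(Z.T @ Z, Z.T @ e0)
    ar = ((e0 @ PZ_e0) / k) / ((e0 @ (e0 - PZ_e0)) / (n - k))
    pval = 1.0 - stats.f.cdf(ar, k, n - k)       # F(k, n - k) reference distribution
    return ar, pval

# Illustrative weak-instrument design.
rng = np.random.default_rng(0)
n, k = 400, 4
Z = rng.normal(size=(n, k))
v = rng.normal(size=n)
u = 0.5 * v + rng.normal(size=n)
x = Z @ (0.1 * np.ones(k)) + v                   # weak first stage
y = 1.0 * x + u
print(anderson_rubin(y, x[:, None], Z, np.array([1.0])))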

4.
Wavelet analysis is a new mathematical method developed as a unified field of science over the last decade or so. As a spatially adaptive analytic tool, wavelets are useful for capturing serial correlation where the spectrum has peaks or kinks, as can arise from persistent dependence, seasonality, and other kinds of periodicity. This paper proposes a new class of generally applicable wavelet-based tests for serial correlation of unknown form in the estimated residuals of a panel regression model, where error components can be one-way or two-way, individual and time effects can be fixed or random, and regressors may contain lagged dependent variables or deterministic/stochastic trending variables. Our tests are applicable to unbalanced heterogeneous panel data. They have a convenient null limit N(0,1) distribution. No formulation of an alternative model is required, and our tests are consistent against serial correlation of unknown form even in the presence of substantial inhomogeneity in serial correlation across individuals. This is in contrast to existing serial correlation tests for panel models, which ignore inhomogeneity in serial correlation across individuals by assuming a common alternative, and thus have no power against the alternatives where the average of serial correlations among individuals is close to zero. We propose and justify a data-driven method to choose the smoothing parameter, namely the finest scale in wavelet spectral estimation, making the tests completely operational in practice. The data-driven finest scale automatically converges to zero under the null hypothesis of no serial correlation and diverges to infinity as the sample size increases under the alternative, ensuring the consistency of our tests. Simulation shows that our tests perform well in finite samples relative to some existing tests.

5.
We introduce the class of conditional linear combination tests, which reject null hypotheses concerning model parameters when a data-dependent convex combination of two identification-robust statistics is large. These tests control size under weak identification and have a number of optimality properties in a conditional problem. We show that the conditional likelihood ratio test of Moreira (2003) is a conditional linear combination test in models with one endogenous regressor, and that the class of conditional linear combination tests is equivalent to a class of quasi-conditional likelihood ratio tests. We suggest using minimax regret conditional linear combination tests and propose a computationally tractable class of tests that plug in an estimator for a nuisance parameter. These plug-in tests perform well in simulation and have optimal power in many strongly identified models, thus allowing powerful identification-robust inference in a wide range of linear and nonlinear models without sacrificing efficiency if identification is strong.
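A schematic sketch of the linear-combination idea follows, assuming the two identification-robust statistics are asymptotically independent χ2 variables under the null (as for the score-type K statistic and its complementary J statistic in the homoskedastic linear IV model), and taking the weight as given rather than chosen by the minimax-regret rule described above.

import numpy as np

def clc_critical_value(a, df_k, df_j, alpha=0.05, reps=200_000, seed=0):
    """Null critical value of a*K + (1-a)*J, with K ~ chi2(df_k), J ~ chi2(df_j) independent."""
    rng = np.random.default_rng(seed)
    draws = a * rng.chisquare(df_k, reps) + (1.0 - a) * rng.chisquare(df_j, reps)
    return np.quantile(draws, 1.0 - alpha)

def clc_test(k_stat, j_stat, a, df_k, df_j, alpha=0.05):
    """Reject H0 when the convex combination exceeds its simulated null quantile."""
    stat = a * k_stat + (1.0 - a) * j_stat
    return stat, stat > clc_critical_value(a, df_k, df_j, alpha)

# Example: one tested parameter (df_k = 1), five overidentifying restrictions (df_j = 5).
print(clc_test(k_stat=4.1, j_stat=3.0, a=0.7, df_k=1, df_j=5))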

6.
It is well known that the finite-sample properties of tests of hypotheses on the co-integrating vectors in vector autoregressive models can be quite poor, and that current solutions based on Bartlett-type corrections or on bootstrap methods that employ unrestricted parameter estimators are unsatisfactory, in particular in those cases where the asymptotic χ2 tests also fail most severely. In this paper, we solve this inference problem by showing the novel result that a bootstrap test where the null hypothesis is imposed on the bootstrap sample is asymptotically valid. That is, not only does it have asymptotically correct size, but, in contrast to what is claimed in the existing literature, it is consistent under the alternative. Compared to the theory for bootstrap tests on the co-integration rank (Cavaliere, Rahbek, and Taylor, 2012), establishing the validity of the bootstrap in the framework of hypotheses on the co-integrating vectors requires new theoretical developments, including the introduction of multivariate Ornstein–Uhlenbeck processes with random (reduced rank) drift parameters. Finally, as documented by Monte Carlo simulations, the bootstrap test outperforms existing methods.

7.
Seemingly absent from the arsenal of currently available "nearly efficient" testing procedures for the unit root hypothesis, that is, tests whose asymptotic local power functions are virtually indistinguishable from the Gaussian power envelope, is a test admitting a (quasi-)likelihood ratio interpretation. We study the large sample properties of a quasi-likelihood ratio unit root test based on a Gaussian likelihood and show that this test is nearly efficient.
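A bare-bones sketch of a Gaussian quasi-likelihood ratio unit root statistic in a driftless AR(1) is given below, with the nonstandard null distribution approximated by simulation; the nearly efficient tests studied in the paper handle deterministic components and serial correlation that this toy version omits.

import numpy as np

def qlr_unit_root(y):
    """2*(loglik_unrestricted - loglik_restricted) for H0: rho = 1 in y_t = rho*y_{t-1} + e_t."""
    y = np.asarray(y, dtype=float)
    ylag, ycur = y[:-1], y[1:]
    T = ycur.size
    rho_hat = (ylag @ ycur) / (ylag @ ylag)
    sig2_u = np.mean((ycur - rho_hat * ylag) ** 2)   # unrestricted Gaussian variance estimate
    sig2_r = np.mean((ycur - ylag) ** 2)             # restricted: rho = 1
    return T * (np.log(sig2_r) - np.log(sig2_u))

# Simulated null critical value (rho = 1), then the statistic on one draw.
rng = np.random.default_rng(0)
T, reps = 250, 2000
null_draws = np.array([qlr_unit_root(np.cumsum(rng.normal(size=T))) for _ in range(reps)])
crit = np.quantile(null_draws, 0.95)
y_obs = np.cumsum(rng.normal(size=T))                # one sample generated under the null
print("QLR:", qlr_unit_root(y_obs), " 5% simulated critical value:", crit)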

8.
This paper discusses a consistent bootstrap implementation of the likelihood ratio (LR) co-integration rank test and associated sequential rank determination procedure of Johansen (1996). The bootstrap samples are constructed using the restricted parameter estimates of the underlying vector autoregressive (VAR) model that obtain under the reduced rank null hypothesis. A full asymptotic theory is provided that shows that, unlike the bootstrap procedure in Swensen (2006) where a combination of unrestricted and restricted estimates from the VAR model is used, the resulting bootstrap data are I(1) and satisfy the null co-integration rank, regardless of the true rank. This ensures that the bootstrap LR test is asymptotically correctly sized and that the probability that the bootstrap sequential procedure selects a rank smaller than the true rank converges to zero. Monte Carlo evidence suggests that our bootstrap procedures work very well in practice.
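The sketch below illustrates only the sequential rank-determination logic, using the asymptotic trace test from statsmodels; the bootstrap procedure described above would replace the tabulated critical values with bootstrap critical values generated under each reduced-rank null, which is not implemented here. The simulated system is illustrative.

import numpy as np
from statsmodels.tsa.vector_ar.vecm import coint_johansen

def select_rank_sequential(endog, det_order=0, k_ar_diff=1, cv_col=1):
    """Test rank <= r for r = 0, 1, ... and return the first rank that is not rejected.

    cv_col selects the critical-value column of coint_johansen (0/1/2 = 90%/95%/99%).
    """
    res = coint_johansen(endog, det_order, k_ar_diff)
    p = endog.shape[1]
    for r in range(p):
        if res.lr1[r] < res.cvt[r, cv_col]:     # trace statistic vs. tabulated critical value
            return r
    return p

# Illustrative system: two I(1) series sharing one common trend (true co-integration rank 1).
rng = np.random.default_rng(0)
T = 500
trend = np.cumsum(rng.normal(size=T))
y1 = trend + rng.normal(scale=0.5, size=T)
y2 = 0.5 * trend + rng.normal(scale=0.5, size=T)
print("selected co-integration rank:", select_rank_sequential(np.column_stack([y1, y2])))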

9.
This paper considers inference on functionals of semi/nonparametric conditional moment restrictions with possibly nonsmooth generalized residuals, which include all of the (nonlinear) nonparametric instrumental variables (IV) models as special cases. These models are often ill-posed and hence it is difficult to verify whether a (possibly nonlinear) functional is root-n estimable or not. We provide computationally simple, unified inference procedures that are asymptotically valid regardless of whether a functional is root-n estimable or not. We establish the following new useful results: (1) the asymptotic normality of a plug-in penalized sieve minimum distance (PSMD) estimator of a (possibly nonlinear) functional; (2) the consistency of simple sieve variance estimators for the plug-in PSMD estimator, and hence the asymptotic chi-square distribution of the sieve Wald statistic; (3) the asymptotic chi-square distribution of an optimally weighted sieve quasi-likelihood ratio (QLR) test under the null hypothesis; (4) the asymptotic tight distribution of a non-optimally weighted sieve QLR statistic under the null; (5) the consistency of generalized residual bootstrap sieve Wald and QLR tests; (6) local power properties of sieve Wald and QLR tests and of their bootstrap versions; (7) the asymptotic properties of sieve Wald and QLR statistics for functionals of increasing dimension. Simulation studies and an empirical illustration of a nonparametric quantile IV regression are presented.
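As a concrete special case, the sketch below implements a plain series (sieve) estimator for the nonparametric IV model y = h(x) + u with E[u | w] = 0 and evaluates a plug-in point functional h(x0); the penalty term, the data-driven choice of sieve dimension, and the sieve variance/QLR machinery described above are omitted, and the simulated design is illustrative.

import numpy as np

def poly_basis(v, degree):
    """Simple polynomial sieve basis [1, v, v^2, ..., v^degree]."""
    return np.column_stack([v ** j for j in range(degree + 1)])

def sieve_npiv(y, x, w, deg_x=3, deg_w=5):
    """Series 2SLS estimate of h in y = h(x) + u with E[u | w] = 0."""
    P = poly_basis(x, deg_x)                 # sieve for the unknown function h
    B = poly_basis(w, deg_w)                 # sieve for the instrument space
    PB = B @ np.linalg.lstsq(B, P, rcond=None)[0]   # projection of P onto the instrument sieve
    coef = np.linalg.lstsq(PB, y, rcond=None)[0]    # minimum-distance / 2SLS coefficients
    return lambda x0: poly_basis(np.atleast_1d(x0), deg_x) @ coef

# Simulated design with endogeneity: x depends on the instrument w and on the error u.
rng = np.random.default_rng(0)
n = 2000
w = rng.normal(size=n)
u = rng.normal(size=n)
x = 0.8 * w + 0.5 * u + 0.3 * rng.normal(size=n)
y = np.sin(x) + u                             # true h(x) = sin(x)
h_hat = sieve_npiv(y, x, w)
print("h_hat(0.5) =", float(h_hat(0.5)[0]), " true h(0.5) =", np.sin(0.5))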

10.
This paper shows that the problem of testing hypotheses in moment condition models without any assumptions about identification may be considered as a problem of testing with an infinite-dimensional nuisance parameter. We introduce a sufficient statistic for this nuisance parameter in a Gaussian problem and propose conditional tests. These conditional tests have uniformly correct asymptotic size for a large class of models and test statistics. We apply our approach to construct tests based on quasi-likelihood ratio statistics, which we show are efficient in strongly identified models and perform well relative to existing alternatives in two examples.

11.
This paper analyzes the properties of standard estimators, tests, and confidence sets (CS's) for parameters that are unidentified or weakly identified in some parts of the parameter space. The paper also introduces methods to make the tests and CS's robust to such identification problems. The results apply to a class of extremum estimators and corresponding tests and CS's that are based on criterion functions that satisfy certain asymptotic stochastic quadratic expansions and that depend on the parameter that determines the strength of identification. This covers a class of models estimated using maximum likelihood (ML), least squares (LS), quantile, generalized method of moments, generalized empirical likelihood, minimum distance, and semi-parametric estimators. The consistency/lack-of-consistency and asymptotic distributions of the estimators are established under a full range of drifting sequences of true distributions. The asymptotic sizes (in a uniform sense) of standard and identification-robust tests and CS's are established. The results are applied to the ARMA(1, 1) time series model estimated by ML and to the nonlinear regression model estimated by LS. In companion papers, the results are applied to a number of other models.

12.
We analyze an autoregressive model with Markov regime switching in order to examine the properties of the quasi-likelihood ratio test developed by Cho and White (2007). For such a model, we show that consistency of the quasi-maximum likelihood estimator for the population parameter values, on which consistency of the test is based, does not hold. We describe a condition that ensures consistency of the estimator and discuss the consistency of the test in the absence of consistency of the estimator.

13.
This paper considers tests of the parameter on an endogenous variable in an instrumental variables regression model. The focus is on determining tests that have some optimal power properties. We start by considering a model with normally distributed errors and a known error covariance matrix. We consider tests that are similar and satisfy a natural rotational invariance condition. We determine a two-sided power envelope for invariant similar tests. This allows us to assess and compare the power properties of tests such as the conditional likelihood ratio (CLR), the Lagrange multiplier, and the Anderson–Rubin tests. We find that the CLR test is quite close to being uniformly most powerful invariant among a class of two-sided tests. The finite-sample results of the paper are extended to the case of an unknown error covariance matrix and possibly nonnormal errors via weak instrument asymptotics. Strong instrument asymptotic results are also provided because we seek tests that perform well under both weak and strong instruments.
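A sketch of conditional likelihood ratio inference in the one-endogenous-regressor model with homoskedastic errors follows, using the standard (S, T) rotation of the instrument-by-reduced-form moments and simulating the critical value conditional on Q_T; here the reduced-form error covariance is estimated rather than known, and the design is illustrative.

import numpy as np

def clr_test(y, x, Z, beta0, alpha=0.05, reps=100_000, seed=0):
    """Conditional LR test of H0: beta = beta0 with one endogenous regressor."""
    n, k = Z.shape
    Y = np.column_stack([y, x])
    MZ_Y = Y - Z @ np.linalg.solve(Z.T @ Z, Z.T @ Y)
    Omega = (MZ_Y.T @ MZ_Y) / (n - k)                 # reduced-form error covariance estimate
    b0 = np.array([1.0, -beta0])
    a0 = np.array([beta0, 1.0])
    L = np.linalg.cholesky(Z.T @ Z)                   # orthonormalize: C = (Z'Z)^{-1/2} Z'Y
    C = np.linalg.solve(L, Z.T @ Y)
    S = C @ b0 / np.sqrt(b0 @ Omega @ b0)
    T = C @ np.linalg.solve(Omega, a0) / np.sqrt(a0 @ np.linalg.solve(Omega, a0))
    QS, QT, QST = S @ S, T @ T, S @ T
    lr = 0.5 * (QS - QT + np.sqrt((QS - QT) ** 2 + 4.0 * QST ** 2))
    # Conditional null distribution of LR given Q_T: q1 ~ chi2(1), q2 ~ chi2(k-1) independent.
    rng = np.random.default_rng(seed)
    q1, q2 = rng.chisquare(1, reps), rng.chisquare(k - 1, reps)
    lr_null = 0.5 * (q1 + q2 - QT + np.sqrt((q1 + q2 + QT) ** 2 - 4.0 * q2 * QT))
    return lr, np.quantile(lr_null, 1.0 - alpha)

rng = np.random.default_rng(1)
n, k = 300, 5
Z = rng.normal(size=(n, k))
v = rng.normal(size=n)
u = 0.6 * v + rng.normal(size=n)
x = Z @ np.full(k, 0.15) + v
y = 0.5 * x + u
print(clr_test(y, x, Z, beta0=0.5))                   # (statistic, conditional 5% critical value)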

14.
This paper examines the problem of testing and confidence set construction for one-dimensional functions of the coefficients in autoregressive (AR(p)) models with potentially persistent time series. The primary example concerns inference on impulse responses. A new asymptotic framework is suggested and some new theoretical properties of known procedures are demonstrated. I show that the likelihood ratio (LR) and LR± statistics for a linear hypothesis in an AR(p) can be uniformly approximated by a weighted average of local-to-unity and normal distributions. The corresponding weights depend on the weight placed on the largest root in the null hypothesis. The suggested approximation is uniform over the set of all linear hypotheses. The same family of distributions approximates the LR and LR± statistics for tests about impulse responses, and the approximation is uniform over the horizon of the impulse response. I establish the size properties of tests about impulse responses proposed by Inoue and Kilian (2002) and Gospodinov (2004), and theoretically explain some of the empirical findings of Pesavento and Rossi (2007). An adaptation of the grid bootstrap for impulse response functions is suggested and its properties are examined.
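Below is a stripped-down sketch of a grid bootstrap confidence interval for the impulse response ρ^h in an AR(1): for each ρ on a grid, data are regenerated with that ρ imposed, the bootstrap distribution of the t-statistic for ρ is tabulated, and the non-rejected grid points are mapped into an interval for ρ^h. Lag-length selection, deterministic terms, and the uniformity theory developed in the paper are not reproduced.

import numpy as np

def ar1_tstat(y, rho0):
    """OLS estimate, residuals, and t-statistic for H0: rho = rho0 in y_t = rho*y_{t-1} + e_t."""
    ylag, ycur = y[:-1], y[1:]
    rho_hat = (ylag @ ycur) / (ylag @ ylag)
    resid = ycur - rho_hat * ylag
    se = np.sqrt(resid @ resid / (ycur.size - 1) / (ylag @ ylag))
    return rho_hat, resid, (rho_hat - rho0) / se

def grid_bootstrap_irf_ci(y, horizon, grid=None, B=399, alpha=0.10, seed=0):
    """Grid bootstrap interval for the impulse response rho**horizon in an AR(1)."""
    rng = np.random.default_rng(seed)
    grid = np.linspace(0.5, 1.0, 51) if grid is None else grid
    resid = ar1_tstat(y, 0.0)[1]
    accepted = []
    for rho in grid:
        t_obs = ar1_tstat(y, rho)[2]
        t_boot = np.empty(B)
        for b in range(B):                               # regenerate the series with rho imposed
            e = rng.choice(resid, size=y.size - 1, replace=True)
            yb = np.empty(y.size)
            yb[0] = y[0]
            for t in range(1, y.size):
                yb[t] = rho * yb[t - 1] + e[t - 1]
            t_boot[b] = ar1_tstat(yb, rho)[2]
        lo, hi = np.quantile(t_boot, [alpha / 2, 1 - alpha / 2])
        if lo <= t_obs <= hi:
            accepted.append(rho)
    if not accepted:
        return np.nan, np.nan
    acc = np.array(accepted)
    return acc.min() ** horizon, acc.max() ** horizon    # interval for the impulse response

rng = np.random.default_rng(1)
T, rho_true = 200, 0.95
y = np.zeros(T)
for t in range(1, T):
    y[t] = rho_true * y[t - 1] + rng.normal()
print("90% grid bootstrap CI for the horizon-8 impulse response:", grid_bootstrap_irf_ci(y, 8))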

15.
In this article we introduce efficient Wald tests for testing the null hypothesis of a unit root against the alternative of a fractional unit root. In a local alternative framework, the proposed tests are locally asymptotically equivalent to the optimal Robinson Lagrange multiplier tests. Our results contrast with the tests for fractional unit roots introduced by Dolado, Gonzalo, and Mayoral, which are inefficient. In the presence of short-range serial correlation, we propose a simple and efficient two-step test that avoids the estimation of a nonlinear regression model. In addition, the first-order asymptotic properties of the proposed tests are not affected by the pre-estimation of short- or long-memory parameters.

16.
This paper considers issues related to estimation, inference, and computation with multiple structural changes that occur at unknown dates in a system of equations. Changes can occur in the regression coefficients and/or the covariance matrix of the errors. We also allow arbitrary restrictions on these parameters, which permits the analysis of partial structural change models, common breaks that occur in all equations, breaks that occur in a subset of equations, and so forth. The method of estimation is quasi-maximum likelihood based on normal errors. The limiting distributions are obtained under more general assumptions than in previous studies. For testing, we propose likelihood-ratio-type statistics to test the null hypothesis of no structural change and to select the number of changes. Structural change tests with restrictions on the parameters can be constructed to achieve higher power when prior information is present. For computation, an efficient algorithm is proposed to construct the estimates and test statistics. We also introduce a novel locally ordered breaks model, which allows the breaks in different equations to be related yet not occur at the same dates.
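A stripped-down, single-equation, single-break illustration of the estimation idea appears below: the break date is chosen by minimizing the sum of squared residuals over candidate dates, and a sup-F style statistic compares the no-break fit with the best segmented fit. The method described above handles systems, multiple breaks, covariance breaks, and parameter restrictions via an efficient dynamic-programming algorithm that this sketch does not implement, and the sup-F critical values are nonstandard and not tabulated here.

import numpy as np

def ssr(y, X):
    """Sum of squared residuals from an OLS fit of y on X."""
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    e = y - X @ b
    return e @ e

def sup_f_single_break(y, X, trim=0.15):
    """Grid search over break dates for y = X*b1 (t <= Tb) and y = X*b2 (t > Tb)."""
    n, k = X.shape
    ssr_full = ssr(y, X)
    best = (np.inf, None)
    for tb in range(int(trim * n), int((1 - trim) * n)):
        s = ssr(y[:tb], X[:tb]) + ssr(y[tb:], X[tb:])
        if s < best[0]:
            best = (s, tb)
    ssr_break, tb_hat = best
    sup_f = ((ssr_full - ssr_break) / k) / (ssr_break / (n - 2 * k))
    return sup_f, tb_hat

# Mean shift at t = 120 in a series of length 200.
rng = np.random.default_rng(0)
y = np.concatenate([rng.normal(0.0, 1.0, 120), rng.normal(1.0, 1.0, 80)])
X = np.ones((200, 1))
print(sup_f_single_break(y, X))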

17.
This paper applies some general concepts in decision theory to a linear panel data model. A simple version of the model is an autoregression with a separate intercept for each unit in the cross section, with errors that are independent and identically distributed with a normal distribution. There is a parameter of interest γ and a nuisance parameter τ, an N×K matrix, where N is the cross-section sample size. The focus is on dealing with the incidental parameters problem created by a potentially high-dimensional nuisance parameter. We adopt a "fixed-effects" approach that seeks to protect against any sequence of incidental parameters. We transform τ to (δ, ρ, ω), where δ is a J×K matrix of coefficients from the least-squares projection of τ on an N×J matrix x of strictly exogenous variables, ρ is a K×K symmetric, positive semidefinite matrix obtained from the residual sums of squares and cross-products in the projection of τ on x, and ω is an (N−J)×K matrix whose columns are orthogonal and have unit length. The model is invariant under the actions of a group on the sample space and the parameter space, and we find a maximal invariant statistic. The distribution of the maximal invariant statistic does not depend upon ω. There is a unique invariant distribution for ω. We use this invariant distribution as a prior distribution to obtain an integrated likelihood function. It depends upon the observation only through the maximal invariant statistic. We use the maximal invariant statistic to construct a marginal likelihood function, so we can eliminate ω by integration with respect to the invariant prior distribution or by working with the marginal likelihood function. The two approaches coincide. Decision rules based on the invariant distribution for ω have a minimax property. Given a loss function that does not depend upon ω and given a prior distribution for (γ, δ, ρ), we show how to minimize the average, taken with respect to the prior distribution for (γ, δ, ρ), of the maximum risk, where the maximum is with respect to ω. There is a family of prior distributions for (δ, ρ) that leads to a simple closed form for the integrated likelihood function. This integrated likelihood function coincides with the likelihood function for a normal, correlated random-effects model. Under random sampling, the corresponding quasi-maximum likelihood estimator is consistent for γ as N→∞, with a standard limiting distribution. The limit results do not require normality or homoskedasticity (conditional on x) assumptions.

18.
We propose a novel statistic for conducting joint tests on all the structural parameters in instrumental variables regression. The statistic is straightforward to compute and equals a quadratic form of the score of the concentrated log-likelihood. It therefore attains its minimal value of zero at the maximum likelihood estimator. The statistic has a χ2 limiting distribution with a degrees-of-freedom parameter equal to the number of structural parameters. The limiting distribution does not depend on nuisance parameters. The statistic overcomes the deficiencies of the Anderson–Rubin statistic, whose limiting distribution has a degrees-of-freedom parameter equal to the number of instruments, and of the likelihood-based Wald, likelihood ratio, and Lagrange multiplier statistics, whose limiting distributions depend on nuisance parameters. Size and power comparisons reveal that the statistic is an asymptotically size-corrected likelihood ratio statistic. We apply the statistic to the Angrist–Krueger (1991) data and find results similar to those in Staiger and Stock (1997).
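The sketch below computes a score-type statistic of this kind for a single endogenous regressor under homoskedasticity, re-estimating the first-stage coefficient with a regressor orthogonalized against the null-restricted residual; the exact construction follows my reading of the standard presentation of such statistics and should be treated as an approximation to, not a reproduction of, the paper's proposal. The simulated design is illustrative.

import numpy as np
from scipy import stats

def k_statistic(y, x, Z, beta0):
    """Score-type statistic for H0: beta = beta0, one endogenous regressor, k instruments."""
    n, k = Z.shape
    e0 = y - beta0 * x
    MZ = lambda v: v - Z @ np.linalg.solve(Z.T @ Z, Z.T @ v)
    s_ee = (e0 @ MZ(e0)) / (n - k)
    s_ev = (e0 @ MZ(x)) / (n - k)
    # First-stage coefficient re-estimated with the null-orthogonalized regressor.
    x_tilde = x - e0 * (s_ev / s_ee)
    pi_tilde = np.linalg.solve(Z.T @ Z, Z.T @ x_tilde)
    zpi = Z @ pi_tilde
    K = (e0 @ zpi) ** 2 / (zpi @ zpi) / s_ee
    return K, 1.0 - stats.chi2.cdf(K, df=1)      # chi2(1) reference: one structural parameter

rng = np.random.default_rng(0)
n, k = 400, 6
Z = rng.normal(size=(n, k))
v = rng.normal(size=n)
u = 0.5 * v + rng.normal(size=n)
x = Z @ np.full(k, 0.2) + v
y = 1.0 * x + u
print(k_statistic(y, x, Z, beta0=1.0))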

19.
This paper considers tests for structural instability of short duration, such as at the end of the sample. The key feature of the testing problem is that the number, m, of observations in the period of potential change is relatively small, possibly as small as one. The well-known F test of Chow (1960) for this problem only applies in a linear regression model with normally distributed iid errors and strictly exogenous regressors, even when the total number of observations, n+m, is large. We generalize the F test to cover regression models with much more general error processes, regressors that are not strictly exogenous, and estimation by instrumental variables as well as least squares. In addition, we extend the F test to nonlinear models estimated by generalized method of moments and maximum likelihood. Asymptotic critical values that are valid as n→∞ with m fixed are provided using a subsampling-like method. The results apply quite generally to processes that are strictly stationary and ergodic under the null hypothesis of no structural instability.
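For context, here is a sketch of the classical predictive Chow test that the paper generalizes, for a linear regression with m post-break observations; under iid normal errors and strictly exogenous regressors it has an exact F(m, n − k) distribution, which is the benchmark the paper relaxes. Variable names and data are illustrative.

import numpy as np
from scipy import stats

def ssr(y, X):
    """Sum of squared residuals from an OLS fit of y on X."""
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    e = y - X @ b
    return e @ e

def predictive_chow(y, X, m):
    """F test for instability in the last m observations of y = X*b + e (n + m rows in total)."""
    n = y.size - m
    k = X.shape[1]
    ssr_first = ssr(y[:n], X[:n])                 # fit on the stable first n observations
    ssr_pooled = ssr(y, X)                        # fit on all n + m observations
    F = ((ssr_pooled - ssr_first) / m) / (ssr_first / (n - k))
    return F, 1.0 - stats.f.cdf(F, m, n - k)

rng = np.random.default_rng(0)
n, m = 200, 6
X = np.column_stack([np.ones(n + m), rng.normal(size=n + m)])
y = X @ np.array([1.0, 2.0]) + rng.normal(size=n + m)
y[-m:] += 1.5                                     # small end-of-sample shift
print(predictive_chow(y, X, m))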

20.
This paper applies some general concepts in decision theory to a simple instrumental variables model. There are two endogenous variables linked by a single structural equation; k of the exogenous variables are excluded from this structural equation and provide the instrumental variables (IV). The reduced-form distribution of the endogenous variables conditional on the exogenous variables corresponds to independent draws from a bivariate normal distribution with linear regression functions and a known covariance matrix. A canonical form of the model has parameter vector (ρ, φ, ω), where φ is the parameter of interest and is normalized to be a point on the unit circle. The reduced-form coefficients on the instrumental variables are split into a scalar parameter ρ and a parameter vector ω, which is normalized to be a point on the (k−1)-dimensional unit sphere; ρ measures the strength of the association between the endogenous variables and the instrumental variables, and ω is a measure of direction. A prior distribution is introduced for the IV model. The parameters φ, ρ, and ω are treated as independent random variables. The distribution for φ is uniform on the unit circle; the distribution for ω is uniform on the unit sphere of dimension k−1. These choices arise from the solution of a minimax problem. The prior for ρ is left general. It turns out that, given any positive value for ρ, the Bayes estimator of φ does not depend on ρ; it equals the maximum-likelihood estimator. This Bayes estimator has constant risk; because it minimizes average risk with respect to a proper prior, it is minimax. The same general concepts are applied to obtain confidence intervals. The prior distribution is used in two ways. The first way is to integrate out the nuisance parameter ω in the IV model. This gives an integrated likelihood function with two scalar parameters, φ and ρ. Inverting a likelihood ratio test based on the integrated likelihood function provides a confidence interval for φ. This lacks finite-sample optimality, but invariance arguments show that the risk function depends only on ρ and not on φ or ω. The second approach to confidence sets aims for finite-sample optimality by setting up a loss function that trades off coverage against the length of the interval. The automatic uniform priors are used for φ and ω, but a prior is also needed for the scalar ρ, and no guidance is offered on this choice. The Bayes rule is a highest posterior density set. Invariance arguments show that the risk function depends only on ρ and not on φ or ω. The optimality result combines average risk and maximum risk. The confidence set minimizes the average, taken with respect to the prior distribution for ρ, of the maximum risk, where the maximization is with respect to φ and ω.
