Similar Literature
20 similar documents found.
1.
An asymptotic theory is developed for nonlinear regression with integrated processes. The models allow for nonlinear effects from unit root time series and therefore deal with the case of parametric nonlinear cointegration. The theory covers integrable and asymptotically homogeneous functions. Sufficient conditions for weak consistency are given and a limit distribution theory is provided. The rates of convergence depend on the properties of the nonlinear regression function, and are shown to be as slow as n^{1/4} for integrable functions, and to be generally polynomial in n^{1/2} for homogeneous functions. For regressions with integrable functions, the limiting distribution theory is mixed normal with mixing variates that depend on the sojourn time of the limiting Brownian motion of the integrated process.
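
A quick simulation (hypothetical code, not from the paper) makes the slow convergence concrete: with an integrable transformation f(x) = exp(−x²) of a random-walk regressor, the least-squares error shrinks at roughly the n^{1/4} rate rather than the familiar n^{1/2}.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_error(n, beta=1.0, reps=200):
    """Median |beta_hat - beta| for y_t = beta*f(x_t) + u_t, x_t a random walk."""
    errs = []
    for _ in range(reps):
        x = np.cumsum(rng.standard_normal(n))   # integrated (unit root) regressor
        f = np.exp(-x**2)                       # integrable transformation
        y = beta * f + rng.standard_normal(n)
        beta_hat = (f @ y) / (f @ f)            # OLS, since the model is linear in beta
        errs.append(abs(beta_hat - beta))
    return np.median(errs)

for n in (400, 1600, 6400):
    print(n, simulate_error(n))
# Under an n^{1/2} rate, quadrupling n would roughly halve the error;
# with an integrable f the improvement is visibly slower (about n^{1/4}).
```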

2.
This paper develops an asymptotic theory of inference for an unrestricted two-regime threshold autoregressive (TAR) model with an autoregressive unit root. We find that the asymptotic null distribution of Wald tests for a threshold are nonstandard and different from the stationary case, and suggest basing inference on a bootstrap approximation. We also study the asymptotic null distributions of tests for an autoregressive unit root, and find that they are nonstandard and dependent on the presence of a threshold effect. We propose both asymptotic and bootstrap-based tests. These tests and distribution theory allow for the joint consideration of nonlinearity (thresholds) and nonstationarity (unit roots). Our limit theory is based on a new set of tools that combine unit root asymptotics with empirical process methods. We work with a particular two-parameter empirical process that converges weakly to a two-parameter Brownian motion. Our limit distributions involve stochastic integrals with respect to this two-parameter process. This theory is entirely new and may find applications in other contexts. We illustrate the methods with an application to the U.S. monthly unemployment rate. We find strong evidence of a threshold effect. The point estimates suggest that the threshold effect is in the short-run dynamics, rather than in the dominant root. While the conventional ADF test for a unit root is insignificant, our TAR unit root tests are arguably significant. The evidence is quite strong that the unemployment rate is not a unit root process, and there is considerable evidence that the series is a stationary TAR process.
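
The following sketch illustrates the threshold-testing side of the paper in a deliberately simplified stationary setting: a sup-Wald statistic over a grid of candidate thresholds in an AR(1), with a residual bootstrap under the linear null. The paper's actual design (unit roots, threshold in a stationary transform of the data) is richer; all names here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def sup_wald(y, grid_quantiles=np.linspace(0.15, 0.85, 15)):
    """Sup-Wald test for a threshold in an AR(1): compare the linear fit
    with the best two-regime split on the lagged level."""
    y0, y1 = y[:-1], y[1:]                     # y_{t-1}, y_t
    n = len(y1)
    X = np.column_stack([np.ones(n), y0])
    ssr0 = np.sum((y1 - X @ np.linalg.lstsq(X, y1, rcond=None)[0])**2)
    best = 0.0
    for g in np.quantile(y0, grid_quantiles):  # candidate thresholds
        lo = y0 <= g
        Xg = np.column_stack([X * lo[:, None], X * (~lo)[:, None]])
        ssr1 = np.sum((y1 - Xg @ np.linalg.lstsq(Xg, y1, rcond=None)[0])**2)
        best = max(best, n * (ssr0 - ssr1) / ssr1)
    return best

def bootstrap_pvalue(y, B=199):
    """Residual bootstrap under the linear AR(1) null."""
    y0, y1 = y[:-1], y[1:]
    X = np.column_stack([np.ones(len(y1)), y0])
    b = np.linalg.lstsq(X, y1, rcond=None)[0]
    e = y1 - X @ b
    e -= e.mean()
    stat = sup_wald(y)
    count = 0
    for _ in range(B):
        ys = np.empty(len(y))
        ys[0] = y[0]
        eb = rng.choice(e, size=len(y1), replace=True)
        for t in range(1, len(y)):             # rebuild the series under the null
            ys[t] = b[0] + b[1] * ys[t - 1] + eb[t - 1]
        count += sup_wald(ys) >= stat
    return (1 + count) / (1 + B)

y = np.zeros(300)                              # linear AR(1) data: the null holds
for t in range(1, 300):
    y[t] = 0.6 * y[t - 1] + rng.standard_normal()
print(bootstrap_pvalue(y))
```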

3.
This paper establishes the asymptotic distribution of an extremum estimator when the true parameter lies on the boundary of the parameter space. The boundary may be linear, curved, and/or kinked. Typically the asymptotic distribution is a function of a multivariate normal distribution in models without stochastic trends and a function of a multivariate Brownian motion in models with stochastic trends. The results apply to a wide variety of estimators and models. Examples treated in the paper are: (i) quasi-ML estimation of a random coefficients regression model with some coefficient variances equal to zero and (ii) LS estimation of an augmented Dickey-Fuller regression with unit root and time trend parameters on the boundary of the parameter space.

4.
We consider the bootstrap unit root tests based on finite order autoregressive integrated models driven by iid innovations, with or without deterministic time trends. A general methodology is developed to approximate asymptotic distributions for the models driven by integrated time series, and used to obtain asymptotic expansions for the Dickey–Fuller unit root tests. The second-order terms in their expansions are of stochastic orders O_p(n^{-1/4}) and O_p(n^{-1/2}), and involve functionals of Brownian motions and normal random variates. The asymptotic expansions for the bootstrap tests are also derived and compared with those of the Dickey–Fuller tests. We show in particular that the bootstrap offers asymptotic refinements for the Dickey–Fuller tests, i.e., it corrects their second-order errors. More precisely, it is shown that the critical values obtained by the bootstrap resampling are correct up to the second-order terms, and the errors in rejection probabilities are of order o(n^{-1/2}) if the tests are based upon the bootstrap critical values. Through simulations, we investigate how effective the bootstrap correction is in small samples.
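
A minimal sketch of a residual-based bootstrap Dickey–Fuller test, assuming iid innovations as in the paper: fit the null AR model to the differences, resample centered residuals, re-integrate so the unit root holds in every bootstrap sample, and compare the observed t-statistic to the bootstrap distribution. The lag order p and the helper names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def adf_tstat(y, p=2):
    """ADF t-statistic: regress dy_t on y_{t-1}, p lagged differences, and a constant."""
    dy = np.diff(y)
    n = len(dy) - p
    X = np.column_stack([y[p:-1]]
                        + [dy[p - j - 1:-j - 1] for j in range(p)]
                        + [np.ones(n)])
    z = dy[p:]
    beta, _, _, _ = np.linalg.lstsq(X, z, rcond=None)
    e = z - X @ beta
    s2 = e @ e / (n - X.shape[1])
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[0, 0])
    return beta[0] / se

def bootstrap_adf(y, p=2, B=399):
    """Bootstrap the ADF t-stat with the unit root imposed on the resampled data."""
    stat = adf_tstat(y, p)
    dy = np.diff(y)
    # Fit AR(p) to the differences (the null model), keep centered residuals.
    X = np.column_stack([dy[p - j - 1:len(dy) - j - 1] for j in range(p)]
                        + [np.ones(len(dy) - p)])
    phi = np.linalg.lstsq(X, dy[p:], rcond=None)[0]
    res = dy[p:] - X @ phi
    res -= res.mean()
    stats = []
    for _ in range(B):
        e = rng.choice(res, size=len(dy) + 50, replace=True)
        d = np.zeros(len(e))
        for t in range(p, len(e)):             # simulate AR(p) differences
            d[t] = phi[:p] @ d[t - p:t][::-1] + phi[-1] + e[t]
        yb = np.concatenate([[0.0], np.cumsum(d[50:])])  # integrate: unit root holds
        stats.append(adf_tstat(yb, p))
    return stat, np.mean(np.array(stats) <= stat)    # left-tail bootstrap p-value

y = np.cumsum(rng.standard_normal(250))
print(bootstrap_adf(y))
```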

5.
This paper considers large N and large T panel data models with unobservable multiple interactive effects, which are correlated with the regressors. In earnings studies, for example, workers' motivation, persistence, and diligence combine with innate ability to influence earnings. In macroeconomics, interactive effects represent unobservable common shocks and their heterogeneous impacts on cross sections. We consider identification, consistency, and the limiting distribution of the interactive-effects estimator. Under both large N and large T, the estimator is shown to be consistent, and this consistency holds in the presence of correlations and heteroskedasticities of unknown form in both dimensions. We also derive the constrained estimator and its limiting distribution, imposing additivity coupled with interactive effects. The problem of testing additive versus interactive effects is also studied. In addition, we consider identification and estimation of models in the presence of a grand mean, time-invariant regressors, and common regressors. Given identification, the rate of convergence and limiting results continue to hold.
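
The interactive-effects estimator can be computed by iterating between least squares given the factors and principal components given the slope; the sketch below illustrates this standard iteration on simulated data (names and the convergence tolerance are illustrative).

```python
import numpy as np

rng = np.random.default_rng(3)

def interactive_effects(Y, X, r, iters=100):
    """Iterate between OLS given factors and principal components given beta.
    Y: (N,T); X: (k,N,T); r: number of factors."""
    k = X.shape[0]
    Xf = X.reshape(k, -1).T                       # (N*T, k) stacked design
    beta = np.linalg.lstsq(Xf, Y.ravel(), rcond=None)[0]   # start: pooled OLS
    for _ in range(iters):
        W = Y - np.tensordot(beta, X, axes=1)     # residual panel (N,T)
        vals, vecs = np.linalg.eigh(W.T @ W)      # principal components of W'W
        F = vecs[:, -r:] * np.sqrt(W.shape[1])    # factors, normalized F'F/T = I
        L = W @ F / W.shape[1]                    # least-squares loadings (N,r)
        beta_new = np.linalg.lstsq(Xf, (Y - L @ F.T).ravel(), rcond=None)[0]
        if np.max(np.abs(beta_new - beta)) < 1e-8:
            beta = beta_new
            break
        beta = beta_new
    return beta

# Simulated panel with one factor correlated with the regressor.
N, T = 100, 50
f0 = rng.standard_normal(T)
lam0 = rng.standard_normal(N)
X = np.empty((1, N, T))
X[0] = np.outer(lam0, f0) + rng.standard_normal((N, T))  # regressor loads on the factor
Y = 2.0 * X[0] + np.outer(lam0, f0) + rng.standard_normal((N, T))
print(interactive_effects(Y, X, r=1))             # close to 2.0 despite the factor
```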

6.
This paper characterizes empirically achievable limits for time series econometric modeling and forecasting. The approach involves the concept of minimal information loss in time series regression and the paper shows how to derive bounds that delimit the proximity of empirical measures to the true probability measure (the DGP) in models that are of econometric interest. The approach utilizes joint probability measures over the combined space of parameters and observables and the results apply for models with stationary, integrated, and cointegrated data. A theorem due to Rissanen is extended so that it applies directly to probabilities about the relative likelihood (rather than averages), a new way of proving results of the Rissanen type is demonstrated, and the Rissanen theory is extended to nonstationary time series with unit roots, near unit roots, and cointegration of unknown order. The corresponding bound for the minimal information loss in empirical work is shown not to be a constant, in general, but to be proportional to the logarithm of the determinant of the (possibly stochastic) Fisher information matrix. In fact, the bound that determines proximity to the DGP is generally path dependent, and it depends specifically on the type as well as the number of regressors. For practical purposes, the proximity bound has the asymptotic form (K/2) log n, where K is a new dimensionality factor that depends on the nature of the data as well as the number of parameters in the model. When 'good' model selection principles are employed in modeling time series data, we are able to show that our proximity bound quantifies empirical limits even in situations where the models may be incorrectly specified. One of the main implications of the new result is that time trends are more costly than stochastic trends, which are more costly in turn than stationary regressors in achieving proximity to the true density. Thus, in a very real sense and quantifiable manner, the DGP is more elusive when there is nonstationarity in the data. The implications for prediction are explored and a second proximity theorem is given, which provides a bound that measures how close feasible predictors can come to the optimal predictor. Again, the bound has the asymptotic form (K/2) log n, showing that forecasting trends is fundamentally more difficult than forecasting stationary time series, even when the correct form of the model for the trends is known.

7.
We demonstrate the asymptotic equivalence between commonly used test statistics for out-of-sample forecasting performance and conventional Wald statistics. This equivalence greatly simplifies the computational burden of calculating recursive out-of-sample test statistics and their critical values. For the case with nested models, we show that the limit distribution, which has previously been expressed through stochastic integrals, has a simple representation in terms of χ²-distributed random variables and we derive its density. We also generalize the limit theory to cover local alternatives and characterize the power properties of the test.
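
The computational point can be seen in miniature: for nested linear models, a full-sample Wald statistic on the extra coefficients replaces a recursive out-of-sample evaluation loop. The sketch below computes that Wald statistic; it illustrates the Wald side of the equivalence, not the paper's exact recursive statistic, and all names are illustrative.

```python
import numpy as np

def wald_nested(y, X_small, X_big):
    """Full-sample Wald statistic for the restriction that the extra
    regressors in the larger (nesting) model have zero coefficients."""
    def ssr(X):
        b = np.linalg.lstsq(X, y, rcond=None)[0]
        e = y - X @ b
        return e @ e
    n = len(y)
    ssr_r, ssr_u = ssr(X_small), ssr(X_big)
    return n * (ssr_r - ssr_u) / ssr_u        # asymptotically chi^2 under the null

rng = np.random.default_rng(4)
n = 500
x1, x2 = rng.standard_normal(n), rng.standard_normal(n)
y = 0.5 * x1 + rng.standard_normal(n)         # x2 is irrelevant: the null holds
Xs = np.column_stack([np.ones(n), x1])
Xb = np.column_stack([Xs, x2])
print(wald_nested(y, Xs, Xb))
```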

8.
In this paper, we propose a simple bias-reduced log-periodogram regression estimator, d̂_r, of the long-memory parameter, d, that eliminates the first- and higher-order biases of the Geweke and Porter-Hudak (1983) (GPH) estimator. The bias-reduced estimator is the same as the GPH estimator except that one includes the frequencies raised to the power 2k, for k = 1, …, r and some positive integer r, as additional regressors in the pseudo-regression model that yields the GPH estimator. The reduction in bias is obtained using assumptions on the spectrum only in a neighborhood of the zero frequency. Following the work of Robinson (1995b) and Hurvich, Deo, and Brodsky (1998), we establish the asymptotic bias, variance, and mean-squared error (MSE) of d̂_r, determine the asymptotic MSE-optimal choice of the number of frequencies, m, to include in the regression, and establish the asymptotic normality of d̂_r. These results show that the bias of d̂_r goes to zero at a faster rate than that of the GPH estimator when the normalized spectrum at zero is sufficiently smooth, but that its variance is increased only by a multiplicative constant. We show that the bias-reduced estimator d̂_r attains the optimal rate of convergence for a class of spectral densities that includes those that are smooth of order s ≥ 1 at zero when r ≥ (s−2)/2 and m is chosen appropriately. For s > 2, the GPH estimator does not attain this rate. The proof uses results of Giraitis, Robinson, and Samarov (1997). We specify a data-dependent plug-in method for selecting the number of frequencies m to minimize asymptotic MSE for a given value of r. Some Monte Carlo simulation results for stationary Gaussian ARFIMA(1, d, 1) and (2, d, 0) models show that the bias-reduced estimators perform well relative to the standard log-periodogram regression estimator.
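
A compact version of the pseudo-regression, with illustrative names: regress the log periodogram on a constant and −2 log λ_j (the GPH regression) and, for the bias-reduced variant, add the powers λ_j^{2k}, k = 1, …, r, as extra regressors.

```python
import numpy as np

def gph_bias_reduced(x, m, r=1):
    """Log-periodogram regression for the memory parameter d.
    r = 0 gives the plain GPH estimator; r >= 1 adds the frequency powers
    lambda_j^{2k}, k = 1..r, as bias-reducing regressors."""
    n = len(x)
    lam = 2 * np.pi * np.arange(1, m + 1) / n      # first m Fourier frequencies
    I = np.abs(np.fft.fft(x)[1:m + 1])**2 / (2 * np.pi * n)   # periodogram
    cols = [np.ones(m), -2 * np.log(lam)]          # GPH pseudo-regression
    cols += [lam**(2 * k) for k in range(1, r + 1)]  # bias-reducing terms
    X = np.column_stack(cols)
    beta = np.linalg.lstsq(X, np.log(I), rcond=None)[0]
    return beta[1]                                 # coefficient on -2*log(lambda_j)

rng = np.random.default_rng(5)
x = rng.standard_normal(2048)                      # benchmark series with d = 0
print(gph_bias_reduced(x, m=200, r=0), gph_bias_reduced(x, m=200, r=1))
```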

9.
This paper presents a new approach to estimation and inference in panel data models with a general multifactor error structure. The unobserved factors and the individual-specific errors are allowed to follow arbitrary stationary processes, and the number of unobserved factors need not be estimated. The basic idea is to filter the individual-specific regressors by means of cross-section averages such that asymptotically as the cross-section dimension (N) tends to infinity, the differential effects of unobserved common factors are eliminated. The estimation procedure has the advantage that it can be computed by least squares applied to auxiliary regressions where the observed regressors are augmented with cross-sectional averages of the dependent variable and the individual-specific regressors. A number of estimators (referred to as common correlated effects (CCE) estimators) are proposed and their asymptotic distributions are derived. The small sample properties of mean group and pooled CCE estimators are investigated by Monte Carlo experiments, showing that the CCE estimators have satisfactory small sample properties even under a substantial degree of heterogeneity and dynamics, and for relatively small values of N and T.
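
A sketch of the pooled CCE estimator under the setup above, with illustrative names: projecting each unit's data off the cross-sectional averages of the dependent variable and the regressors removes the common-factor contamination.

```python
import numpy as np

def cce_pooled(Y, X):
    """Pooled common correlated effects estimator.
    Y: (N,T); X: (N,T,k). Each unit's data are projected off
    H = [1, cross-section mean of y, cross-section means of x]."""
    N, T, k = X.shape
    H = np.column_stack([np.ones(T), Y.mean(axis=0), X.mean(axis=0)])  # (T, 2+k)
    M = np.eye(T) - H @ np.linalg.pinv(H)         # annihilator of the averages
    A = np.zeros((k, k))
    b = np.zeros(k)
    for i in range(N):
        Xi = X[i]                                 # (T,k)
        A += Xi.T @ M @ Xi
        b += Xi.T @ M @ Y[i]
    return np.linalg.solve(A, b)

# Panel whose common factor loads on both y and x.
rng = np.random.default_rng(6)
N, T, k = 80, 40, 1
f = rng.standard_normal(T)
gam, lam = rng.standard_normal(N), rng.standard_normal(N)
X = gam[:, None, None] * f[None, :, None] + rng.standard_normal((N, T, k))
Y = X @ np.array([1.5]) + lam[:, None] * f[None, :] + rng.standard_normal((N, T))
print(cce_pooled(Y, X))                           # close to 1.5
```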

10.
This paper presents a solution to an important econometric problem, namely the root n consistent estimation of nonlinear models with measurement errors in the explanatory variables, when one repeated observation of each mismeasured regressor is available. While a root n consistent estimator has been derived for polynomial specifications (see Hausman, Ichimura, Newey, and Powell (1991)), such an estimator for general nonlinear specifications has so far not been available. Using the additional information provided by the repeated observation, the suggested estimator separates the measurement error from the "true" value of the regressors thanks to a useful property of the Fourier transform: The Fourier transform converts the integral equations that relate the distribution of the unobserved "true" variables to the observed variables measured with error into algebraic equations. The solution to these equations yields enough information to identify arbitrary moments of the "true," unobserved variables. The value of these moments can then be used to construct any estimator that can be written in terms of moments, including traditional linear and nonlinear least squares estimators, or general extremum estimators. The proposed estimator is shown to admit a representation in terms of an influence function, thus establishing its root n consistency and asymptotic normality. Monte Carlo evidence and an application to Engel curve estimation illustrate the usefulness of this new approach.
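
The simplest instance of the identification idea, assuming classical measurement error (errors independent of the true regressor and of each other, with mean zero): cross-moments of the two repeated measurements identify moments of the unobserved regressor. This is a moment-level illustration, not the paper's Fourier-transform machinery.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200_000
x_star = rng.gamma(2.0, 1.0, n)                # unobserved "true" regressor
x1 = x_star + rng.standard_normal(n)           # two noisy repeated measurements
x2 = x_star + rng.standard_normal(n)           # with independent mean-zero errors

# Cross-moments identify moments of x*: E[x1] = E[x*], E[x1*x2] = E[x*^2],
# and E[x1^2*x2] = E[x*^3] + E[e1^2]*E[x*], with E[e1^2] = E[x1^2] - E[x1*x2].
m1 = x1.mean()
m2 = (x1 * x2).mean()
var_e1 = (x1**2).mean() - m2
m3 = (x1**2 * x2).mean() - var_e1 * m1

print("E[x*]  :", x_star.mean(), "est:", m1)
print("E[x*^2]:", (x_star**2).mean(), "est:", m2)
print("E[x*^3]:", (x_star**3).mean(), "est:", m3)
```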

11.
We consider semiparametric estimation of the memory parameter in a model that includes as special cases both long-memory stochastic volatility and fractionally integrated exponential GARCH (FIEGARCH) models. Under our general model the logarithms of the squared returns can be decomposed into the sum of a long-memory signal and a white noise. We consider periodogram-based estimators using a local Whittle criterion function. We allow the optional inclusion of an additional term to account for possible correlation between the signal and noise processes, as would occur in the FIEGARCH model. We also allow for potential nonstationarity in volatility by allowing the signal process to have a memory parameter d* ≥ 1/2. We show that the local Whittle estimator is consistent for d* ∈ (0,1). We also show that the local Whittle estimator is asymptotically normal for d* ∈ (0,3/4) and essentially recovers the optimal semiparametric rate of convergence for this problem. In particular, if the spectral density of the short-memory component of the signal is sufficiently smooth, a convergence rate of n^{2/5−δ} for d* ∈ (0,3/4) can be attained, where n is the sample size and δ > 0 is arbitrarily small. This represents a strong improvement over the performance of existing semiparametric estimators of persistence in volatility. We also prove that the standard Gaussian semiparametric estimator is asymptotically normal if d* = 0. This yields a test for long memory in volatility.
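
A minimal local Whittle estimator over the first m periodogram ordinates (the basic Gaussian semiparametric criterion, without the paper's optional noise-correlation term); in the volatility application it would be applied to the log squared returns. SciPy is assumed available.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def local_whittle(x, m):
    """Local Whittle estimate of the memory parameter d using the
    first m periodogram ordinates."""
    n = len(x)
    lam = 2 * np.pi * np.arange(1, m + 1) / n
    I = np.abs(np.fft.fft(x)[1:m + 1])**2 / (2 * np.pi * n)
    mean_log_lam = np.log(lam).mean()

    def R(d):                                  # concentrated Whittle objective
        return np.log(np.mean(lam**(2 * d) * I)) - 2 * d * mean_log_lam

    return minimize_scalar(R, bounds=(-0.49, 0.99), method="bounded").x

rng = np.random.default_rng(8)
x = rng.standard_normal(4096)                  # short-memory benchmark: d = 0
print(local_whittle(x, m=300))                 # should be near 0
```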

12.
Nonseparable panel models are important in a variety of economic settings, including discrete choice. This paper gives identification and estimation results for nonseparable models under time-homogeneity conditions that are like "time is randomly assigned" or "time is an instrument." Partial-identification results for average and quantile effects are given for discrete regressors, under static or dynamic conditions, in fully nonparametric and in semiparametric models, with time effects. It is shown that the usual, linear, fixed-effects estimator is not a consistent estimator of the identified average effect, and a consistent estimator is given. A simple estimator of identified quantile treatment effects is given, providing a solution to the important problem of estimating quantile treatment effects from panel data. Bounds for overall effects in static and dynamic models are given. The dynamic bounds provide a partial-identification solution to the important problem of estimating the effect of state dependence in the presence of unobserved heterogeneity. The impact of T, the number of time periods, is shown by deriving shrinkage rates for the identified set as T grows. We also consider semiparametric, discrete-choice models and find that semiparametric panel bounds can be much tighter than nonparametric bounds. Computationally convenient methods for semiparametric models are presented. We propose a novel inference method that applies in panel data and other settings and show that it produces uniformly valid confidence regions in large samples. We give empirical illustrations.

13.
Correctly specifying the dynamics of stock-market asset prices is of fundamental importance for asset pricing, portfolio choice, and risk management. Based on nonparametric statistical inference for stochastic processes, this paper constructs test statistics from power variations and threshold power variations and, for the first time, tests and analyzes the semimartingale components of asset prices in China's stock market. Empirical results using intraday high-frequency data on the Shanghai Composite Index (上证指数), the Shenzhen Component Index (深证成指), and the CSI 300 Index (沪深300) show that the asset-price process in China's stock market contains both a diffusion component driven by Brownian motion and a jump component, including both finite-activity large Poisson jumps and infinite-activity small Lévy jumps. Only by incorporating an infinite-activity Lévy jump process into the jump-diffusion model can the dynamics of asset prices in China's stock market be fully characterized. These results provide a basic empirical foundation for related research.
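
A simplified relative of the power-variation statistics used here is the bipower-variation jump test of the Barndorff-Nielsen–Shephard type, sketched below on simulated intraday returns (the paper's threshold power variations additionally truncate large increments; the constants below follow the standard adjusted ratio statistic).

```python
import math
import numpy as np

def bns_jump_stat(r):
    """Ratio-type jump test from realized variance (RV) and bipower
    variation (BV); r is one day's intraday log returns."""
    n = len(r)
    a = np.abs(r)
    rv = np.sum(r**2)                              # picks up diffusion + jumps
    mu1 = math.sqrt(2 / math.pi)                   # E|Z| for Z ~ N(0,1)
    bv = np.sum(a[1:] * a[:-1]) / mu1**2           # robust to jumps
    mu43 = 2**(2 / 3) * math.gamma(7 / 6) / math.gamma(1 / 2)
    tq = n * np.sum((a[2:] * a[1:-1] * a[:-2])**(4 / 3)) / mu43**3  # quarticity proxy
    theta = math.pi**2 / 4 + math.pi - 5
    z = (1 - bv / rv) / math.sqrt(theta * max(1.0, tq / bv**2) / n)
    return z                                       # approx N(0,1) with no jumps

rng = np.random.default_rng(9)
r = 0.01 * rng.standard_normal(390)                # pure-diffusion day (1-min returns)
r_jump = r.copy()
r_jump[200] += 0.05                                # add one large jump
print(bns_jump_stat(r), bns_jump_stat(r_jump))     # second should be clearly larger
```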

14.
In weighted moment condition models, we show a subtle link between identification and estimability that limits the practical usefulness of estimators based on these models. In particular, if it is necessary for (point) identification that the weights take arbitrarily large values, then the parameter of interest, though point identified, cannot be estimated at the regular (parametric) rate and is said to be irregularly identified. This rate depends on relative tail conditions and can be as slow in some examples as n^{-1/4}. This nonstandard rate of convergence can lead to numerical instability and/or large standard errors. We examine two weighted model examples: (i) the binary response model under mean restriction introduced by Lewbel (1997) and further generalized to cover endogeneity and selection, where the estimator in this class of models is weighted by the density of a special regressor, and (ii) the treatment effect model under exogenous selection (Rosenbaum and Rubin (1983)), where the resulting estimator of the average treatment effect is one that is weighted by a variant of the propensity score. Without strong relative support conditions, these models, similar to well-known "identified at infinity" models, lead to estimators that converge at slower than parametric rate, since essentially, to ensure point identification, one requires some variables to take values on sets with arbitrarily small probabilities, or thin sets. For the two models above, we derive some rates of convergence and propose that one conducts inference using rate adaptive procedures that are analogous to Andrews and Schafgans (1998) for the sample selection model.
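
Example (ii) in miniature, with hypothetical names: the inverse-propensity-weighted average treatment effect is driven by observations with propensities near 0 or 1, and trimming those thin sets trades a little bias for stability. The true propensity is used here purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(10)
n = 20_000
x = rng.standard_normal(n)
p = 1 / (1 + np.exp(-2.5 * x))                 # propensities close to 0/1 in the tails
d = rng.uniform(size=n) < p                    # treatment under exogenous selection
y = 1.0 * d + x + rng.standard_normal(n)       # true ATE = 1.0

def ipw_ate(y, d, p, trim=0.0):
    """Inverse-propensity-weighted ATE, optionally trimming observations
    with propensities within `trim` of 0 or 1 (the thin-set problem)."""
    keep = (p > trim) & (p < 1 - trim)
    yk, dk, pk = y[keep], d[keep], p[keep]
    return np.mean(dk * yk / pk - (1 - dk) * yk / (1 - pk))

print(ipw_ate(y, d, p))             # unstable: weights 1/p, 1/(1-p) can explode
print(ipw_ate(y, d, p, trim=0.01))  # trimming stabilizes at the cost of some bias
```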

15.
Fixed effects estimators of panel models can be severely biased because of the well-known incidental parameters problem. We show that this bias can be reduced by using a panel jackknife or an analytical bias correction motivated by large T. We give bias corrections for averages over the fixed effects, as well as model parameters. We find large bias reductions from using these approaches in examples. We consider asymptotics where T grows with n, as an approximation to the properties of the estimators in econometric applications. We show that if T grows at the same rate as n, the fixed effects estimator is asymptotically biased, so that asymptotic confidence intervals are incorrect, but that they are correct for the panel jackknife. We show T growing faster than n^{1/3} suffices for correctness of the analytic correction, a property we also conjecture for the jackknife.
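
A sketch of the jackknife idea in the textbook Nickell-bias example, using a half-panel split (a variant of the paper's panel jackknife): since the fixed-effects bias is O(1/T), combining the full-panel and half-panel estimates removes the leading bias term.

```python
import numpy as np

rng = np.random.default_rng(11)

def within_ar1(Y):
    """Within (fixed-effects) estimator of rho in y_it = a_i + rho*y_{i,t-1} + e_it."""
    y1, y0 = Y[:, 1:], Y[:, :-1]
    y1 = y1 - y1.mean(axis=1, keepdims=True)   # demean within each unit
    y0 = y0 - y0.mean(axis=1, keepdims=True)
    return np.sum(y0 * y1) / np.sum(y0 * y0)

def half_panel_jackknife(Y):
    """Bias-corrected estimator: 2*rho_hat minus the average of the two
    half-panel estimates (the O(1/T) bias terms cancel)."""
    T = Y.shape[1]
    full = within_ar1(Y)
    halves = 0.5 * (within_ar1(Y[:, :T // 2]) + within_ar1(Y[:, T // 2:]))
    return 2 * full - halves

N, T, rho = 500, 20, 0.5
a = rng.standard_normal(N)
Y = np.zeros((N, T))
Y[:, 0] = a + rng.standard_normal(N)
for t in range(1, T):
    Y[:, t] = a + rho * Y[:, t - 1] + rng.standard_normal(N)
print(within_ar1(Y), half_panel_jackknife(Y))  # correction moves estimate toward 0.5
```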

16.
We propose a semiparametric two-step inference procedure for a finite-dimensional parameter based on moment conditions constructed from high-frequency data. The population moment conditions take the form of temporally integrated functionals of state-variable processes that include the latent stochastic volatility process of an asset. In the first step, we nonparametrically recover the volatility path from high-frequency asset returns. The nonparametric volatility estimator is then used to form sample moment functions in the second-step GMM estimation, which requires the correction of a high-order nonlinearity bias from the first step. We show that the proposed estimator is consistent and asymptotically mixed Gaussian and propose a consistent estimator for the conditional asymptotic variance. We also construct a Bierens-type consistent specification test. These infill asymptotic results are based on a novel empirical-process-type theory for general integrated functionals of noisy semimartingale processes.
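
The first step in miniature, with illustrative names: recover the volatility path on local windows and plug it into a temporally integrated moment. The deliberately naive plug-in below exhibits exactly the kind of nonlinearity bias (from g evaluated at a noisy volatility estimate) that the paper's correction removes.

```python
import numpy as np

rng = np.random.default_rng(14)

# Simulate one day of returns with time-varying volatility.
n = 23_400                                      # one-second sampling
dt = 1.0 / n
sigma2 = 0.5 + 0.4 * np.sin(np.linspace(0, np.pi, n))   # spot variance path
r = np.sqrt(sigma2 * dt) * rng.standard_normal(n)

def integrated_functional(r, dt, g, k=300):
    """First-step plug-in for the integral of g(spot variance) dt: estimate
    the variance on local windows of k returns, then sum g(c_hat) * window length."""
    m = len(r) // k
    total = 0.0
    for j in range(m):
        c_hat = np.sum(r[j * k:(j + 1) * k]**2) / (k * dt)  # local spot variance
        total += g(c_hat) * k * dt
    return total

est = integrated_functional(r, dt, g=lambda c: c**2)  # integrated quarticity
truth = np.sum(sigma2**2) * dt
print(est, truth)   # close, but the naive plug-in carries an O(1/k) nonlinearity bias
```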

17.
ARCH and GARCH models directly address the dependency of conditional second moments, and have proved particularly valuable in modelling processes where a relatively large degree of fluctuation is present. These include financial time series, which can be particularly heavy tailed. However, little is known about properties of ARCH or GARCH models in the heavy-tailed setting, and no methods are available for approximating the distributions of parameter estimators there. In this paper we show that, for heavy-tailed errors, the asymptotic distributions of quasi-maximum likelihood parameter estimators in ARCH and GARCH models are nonnormal, and are particularly difficult to estimate directly using standard parametric methods. Standard bootstrap methods also fail to produce consistent estimators. To overcome these problems we develop percentile-t, subsample bootstrap approximations to estimator distributions. Studentizing is employed to approximate scale, and the subsample bootstrap is used to estimate shape. The good performance of this approach is demonstrated both theoretically and numerically.
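
A sketch of the percentile-t subsample bootstrap on the simplest heavy-tailed problem, a studentized mean, rather than a GARCH QMLE: studentizing handles the scale, and the subsample distribution of the studentized statistic supplies the shape.

```python
import numpy as np

rng = np.random.default_rng(12)

def percentile_t_subsample_ci(x, b, alpha=0.05, B=500):
    """Percentile-t confidence interval for the mean using subsamples of
    size b: studentize within each subsample, read off quantiles of the
    studentized statistic, scale by the full-sample standard error."""
    n = len(x)
    xbar, se = x.mean(), x.std(ddof=1) / np.sqrt(n)
    t_stats = np.empty(B)
    for i in range(B):
        s = x[rng.choice(n, size=b, replace=False)]   # subsample w/o replacement
        t_stats[i] = (s.mean() - xbar) / (s.std(ddof=1) / np.sqrt(b))
    lo, hi = np.quantile(t_stats, [alpha / 2, 1 - alpha / 2])
    return xbar - hi * se, xbar - lo * se

x = rng.standard_t(df=2.5, size=2000)          # heavy-tailed sample (no 4th moment)
print(percentile_t_subsample_ci(x, b=100))
```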

18.
This paper studies the problem of identification and estimation in nonparametric regression models with a misclassified binary regressor where the measurement error may be correlated with the regressors. We show that the regression function is nonparametrically identified in the presence of an additional random variable that is correlated with the unobserved true underlying variable but unrelated to the measurement error. Identification for semiparametric and parametric regression functions follows straightforwardly from the basic identification result. We propose a kernel estimator based on the identification strategy, derive its large sample properties, and discuss alternative estimation procedures. We also propose a test for misclassification in the model based on an exclusion restriction that is straightforward to implement.

19.
We propose an estimation method for models of conditional moment restrictions, which contain finite-dimensional unknown parameters (θ) and infinite-dimensional unknown functions (h). Our proposal is to approximate h with a sieve and to estimate θ and the sieve parameters jointly by applying the method of minimum distance. We show that: (i) the sieve estimator of h is consistent with a rate faster than n^{-1/4} under a certain metric; (ii) the estimator of θ is √n-consistent and asymptotically normally distributed; (iii) the estimator for the asymptotic covariance of the θ estimator is consistent and easy to compute; and (iv) the optimally weighted minimum distance estimator of θ attains the semiparametric efficiency bound. We illustrate our results with two examples: a partially linear regression with an endogenous nonparametric part, and a partially additive IV regression with a link function.
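
A minimal sieve-least-squares sketch for the exogenous partially linear case (simpler than the paper's endogenous examples, which require instruments and minimum-distance weighting): approximate h with a polynomial sieve and estimate θ and the sieve coefficients jointly.

```python
import numpy as np

rng = np.random.default_rng(13)

def sieve_partially_linear(y, x, z, degree=5):
    """Joint least squares for y = x*theta + h(z) + e, approximating the
    unknown h with a polynomial sieve in z; theta and the sieve
    coefficients are estimated together."""
    sieve = np.column_stack([z**j for j in range(degree + 1)])  # 1, z, ..., z^degree
    X = np.column_stack([x, sieve])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    return beta[0]                              # the finite-dimensional parameter

n = 5000
z = rng.uniform(-2, 2, n)
x = z + rng.standard_normal(n)                  # regressor correlated with z
y = 1.0 * x + np.sin(2 * z) + rng.standard_normal(n)
print(sieve_partially_linear(y, x, z))          # close to 1.0
```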

20.
This paper considers inference in a broad class of nonregular models. The models considered are nonregular in the sense that standard test statistics have asymptotic distributions that are discontinuous in some parameters. It is shown in Andrews and Guggenberger (2009a) that standard fixed critical value, subsampling, and m out of n bootstrap methods often have incorrect asymptotic size in such models. This paper introduces general methods of constructing tests and confidence intervals that have correct asymptotic size. In particular, we consider a hybrid subsampling/fixed-critical-value method and size-correction methods. The paper discusses two examples in detail. They are (i) confidence intervals in an autoregressive model with a root that may be close to unity and conditional heteroskedasticity of unknown form and (ii) tests and confidence intervals based on a post-conservative model selection estimator.
