Similar Documents
20 similar documents retrieved.
1.
We show that it is possible to adapt to nonparametric disturbance autocorrelation in time series regression in the presence of long memory in both regressors and disturbances by using a smoothed nonparametric spectrum estimate in frequency-domain generalized least squares. When the collective memory in regressors and disturbances is sufficiently strong, ordinary least squares is not only asymptotically inefficient but asymptotically non-normal and has a slow rate of convergence, whereas generalized least squares is asymptotically normal and Gauss–Markov efficient with the standard convergence rate. Despite the anomalous behavior of nonparametric spectrum estimates near a spectral pole, we are able to justify a standard construction of frequency-domain generalized least squares, earlier considered in the case of short-memory disturbances. A small Monte Carlo study of finite sample performance is included.
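The frequency-domain GLS construction described above can be sketched in a few lines: smooth the periodogram of the OLS residuals to estimate the disturbance spectrum, then reweight the frequency-domain cross-products by its inverse. A minimal Python sketch, assuming an AR(1) error process and an arbitrary smoothing bandwidth m purely for the demonstration (neither comes from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 512
x = rng.standard_normal(n).cumsum() * 0.05 + rng.standard_normal(n)  # regressor
u = np.zeros(n)                      # AR(1) disturbances (short-memory stand-in)
for t in range(1, n):
    u[t] = 0.8 * u[t - 1] + rng.standard_normal()
y = 1.0 + 2.0 * x + u

X = np.column_stack([np.ones(n), x])
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta_ols

# Smoothed periodogram of the OLS residuals: average the raw periodogram
# over a moving window of 2m+1 Fourier frequencies (circular padding).
I_u = np.abs(np.fft.fft(resid)) ** 2 / (2 * np.pi * n)
m = 8
kernel = np.ones(2 * m + 1) / (2 * m + 1)
f_hat = np.convolve(np.concatenate([I_u[-m:], I_u, I_u[:m]]), kernel, mode="valid")

# Frequency-domain GLS: weight the cross-products by 1 / f_hat(lambda_j).
w_y = np.fft.fft(y)
w_X = np.fft.fft(X, axis=0)
W = 1.0 / f_hat
A = (w_X.conj().T * W) @ w_X      # weighted X'X in the frequency domain
b = (w_X.conj().T * W) @ w_y      # weighted X'y
beta_gls = np.linalg.solve(A.real, b.real)
print("OLS:", beta_ols, " FD-GLS:", beta_gls)
```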

2.
This paper develops an asymptotic theory of inference for an unrestricted two-regime threshold autoregressive (TAR) model with an autoregressive unit root. We find that the asymptotic null distributions of Wald tests for a threshold are nonstandard and different from the stationary case, and suggest basing inference on a bootstrap approximation. We also study the asymptotic null distributions of tests for an autoregressive unit root, and find that they are nonstandard and dependent on the presence of a threshold effect. We propose both asymptotic and bootstrap-based tests. These tests and distribution theory allow for the joint consideration of nonlinearity (thresholds) and nonstationarity (unit roots). Our limit theory is based on a new set of tools that combine unit root asymptotics with empirical process methods. We work with a particular two-parameter empirical process that converges weakly to a two-parameter Brownian motion. Our limit distributions involve stochastic integrals with respect to this two-parameter process. This theory is entirely new and may find applications in other contexts. We illustrate the methods with an application to the U.S. monthly unemployment rate. We find strong evidence of a threshold effect. The point estimates suggest that the threshold effect is in the short-run dynamics, rather than in the dominant root. While the conventional ADF test for a unit root is insignificant, our TAR unit root tests are arguably significant. The evidence is quite strong that the unemployment rate is not a unit root process, and there is considerable evidence that the series is a stationary TAR process.
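The sup-Wald idea behind the threshold test can be illustrated as follows: fit the two-regime model by least squares over a trimmed grid of candidate thresholds and record the largest Wald statistic for equality of the regime coefficients. This is a rough sketch, not the paper's exact statistic; the simulated process and the 15% grid trimming are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400
y = np.zeros(n)
for t in range(1, n):                      # a simple two-regime TAR(1)
    rho = -0.4 if y[t - 1] <= 0.0 else -0.05
    y[t] = y[t - 1] + rho * y[t - 1] + rng.standard_normal()

dy, ylag = np.diff(y), y[:-1]

def ssr(mask):
    """SSR from regressing dy on (1, ylag) within the given regime."""
    X = np.column_stack([np.ones(mask.sum()), ylag[mask]])
    b = np.linalg.lstsq(X, dy[mask], rcond=None)[0]
    return ((dy[mask] - X @ b) ** 2).sum()

ssr_linear = ssr(np.ones_like(ylag, dtype=bool))       # restricted: one regime
grid = np.quantile(ylag, np.linspace(0.15, 0.85, 50))  # trimmed threshold grid
wald = []
for g in grid:
    lo = ylag <= g
    ssr_tar = ssr(lo) + ssr(~lo)
    wald.append(len(dy) * (ssr_linear - ssr_tar) / ssr_tar)

print("sup-Wald over threshold grid:", max(wald))
# In practice the critical value comes from a bootstrap that resamples
# residuals under the null, since the limit distribution is nonstandard.
```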

3.
Threshold models have a wide variety of applications in economics. Direct applications include models of separating and multiple equilibria. Other applications include empirical sample splitting when the sample split is based on a continuously-distributed variable such as firm size. In addition, threshold models may be used as a parsimonious strategy for nonparametric function estimation. For example, the threshold autoregressive model (TAR) is popular in the nonlinear time series literature. Threshold models also emerge as special cases of more complex statistical frameworks, such as mixture models, switching models, Markov switching models, and smooth transition threshold models. It may be important to understand the statistical properties of threshold models as a preliminary step in the development of statistical tools to handle these more complicated structures. Despite the large number of potential applications, the statistical theory of threshold estimation is undeveloped. It is known that threshold estimates are super-consistent, but a distribution theory useful for testing and inference has yet to be provided. This paper develops a statistical theory for threshold estimation in the regression context. We allow for either cross-section or time series observations. Least squares estimation of the regression parameters is considered. An asymptotic distribution theory for the regression estimates (the threshold and the regression slopes) is developed. It is found that the distribution of the threshold estimate is nonstandard. A method to construct asymptotic confidence intervals is developed by inverting the likelihood ratio statistic. It is shown that this yields asymptotically conservative confidence regions. Monte Carlo simulations are presented to assess the accuracy of the asymptotic approximations. The empirical relevance of the theory is illustrated through an application to the multiple equilibria growth model of Durlauf and Johnson (1995).
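A minimal sketch of the least-squares threshold estimator and the likelihood-ratio inversion for the threshold's confidence region. The data-generating process below is illustrative; the 95% critical value −2 log(1 − √0.95) ≈ 7.35 is Hansen's asymptotic value under homoskedastic errors:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
q = rng.uniform(-2, 2, n)                 # threshold variable (e.g., firm size)
x = rng.standard_normal(n)
gamma0 = 0.5
y = 1.0 + (0.5 + 2.0 * (q > gamma0)) * x + rng.standard_normal(n)

def sse(gamma):
    """Least-squares fit with the slope allowed to differ across regimes."""
    X = np.column_stack([np.ones(n), x, x * (q > gamma)])
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    return ((y - X @ b) ** 2).sum()

grid = np.quantile(q, np.linspace(0.1, 0.9, 200))
sse_grid = np.array([sse(g) for g in grid])
g_hat = grid[sse_grid.argmin()]

# Likelihood-ratio statistic for the threshold; the confidence region
# collects all candidate thresholds that the LR test does not reject.
lr = n * (sse_grid - sse_grid.min()) / sse_grid.min()
ci = grid[lr <= 7.35]                     # -2*log(1 - sqrt(0.95)) ~= 7.35
print(f"threshold estimate {g_hat:.3f}, 95% CI [{ci.min():.3f}, {ci.max():.3f}]")
```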

4.
We consider model-based inference in a fractionally cointegrated (or cofractional) vector autoregressive model, based on the Gaussian likelihood conditional on initial values. We give conditions on the parameters such that the process Xt is fractional of order d and cofractional of order d − b; that is, there exist vectors β for which β′Xt is fractional of order d − b, and no other fractionality order is possible. For b = 1, the model nests the I(d − 1) vector autoregressive model. We define the statistical model by 0 < b ≤ d, but conduct inference when the true values satisfy 0 ≤ d0 − b0 < 1/2 and b0 ≠ 1/2, for which β0′Xt is (asymptotically) a stationary process. Our main technical contribution is the proof of consistency of the maximum likelihood estimators. To this end, we prove weak convergence of the conditional likelihood as a continuous stochastic process in the parameters when errors are independent and identically distributed with suitable moment conditions and initial values are bounded. Because the limit is deterministic, this implies uniform convergence in probability of the conditional likelihood function. If the true value b0 > 1/2, we prove that the limit distribution of β̂ is mixed Gaussian, while for the remaining parameters it is Gaussian. The limit distribution of the likelihood ratio test for cointegration rank is a functional of fractional Brownian motion of type II. If b0 < 1/2, all limit distributions are Gaussian or chi-squared. We derive similar results for the model with d = b, allowing for a constant term.
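The fractional orders here refer to the fractional difference operator Δ^d = (1 − L)^d. A sketch of simulating a cofractional pair by truncated ("type II") fractional filters; the design below (a shared I(d) trend plus I(d − b) errors) only illustrates the definitions and is not the paper's model:

```python
import numpy as np

def frac_diff_weights(d, n):
    """Coefficients of (1 - L)^d: pi_0 = 1, pi_j = pi_{j-1} * (j - 1 - d) / j."""
    w = np.ones(n)
    for j in range(1, n):
        w[j] = w[j - 1] * (j - 1 - d) / j
    return w

def simulate_Id(d, n, rng):
    """Type II I(d) process: apply the expansion of (1 - L)^(-d) to white noise."""
    psi = frac_diff_weights(-d, n)          # weights of (1-L)^(-d): pass -d
    eps = rng.standard_normal(n)
    return np.array([psi[:t + 1][::-1] @ eps[:t + 1] for t in range(n)])

rng = np.random.default_rng(3)
d, b = 0.8, 0.6
common = simulate_Id(d, 500, rng)           # shared I(d) trend
x1 = common + simulate_Id(d - b, 500, rng)  # each series = trend + I(d-b) noise
x2 = 0.5 * common + simulate_Id(d - b, 500, rng)
# x1 and x2 are fractional of order d, while x1 - 2*x2 = e1 - 2*e2 is
# fractional of order d - b: a cofractional relation with beta = (1, -2)'.
print("var(x1):", np.var(x1), " var(x1 - 2*x2):", np.var(x1 - 2 * x2))
```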

5.
We develop results for the use of Lasso and post-Lasso methods to form first-stage predictions and estimate optimal instruments in linear instrumental variables (IV) models with many instruments, p. Our results apply even when p is much larger than the sample size, n. We show that the IV estimator based on using Lasso or post-Lasso in the first stage is root-n consistent and asymptotically normal when the first stage is approximately sparse, that is, when the conditional expectation of the endogenous variables given the instruments can be well-approximated by a relatively small set of variables whose identities may be unknown. We also show that the estimator is semiparametrically efficient when the structural error is homoscedastic. Notably, our results allow for imperfect model selection, and do not rely upon the unrealistic "beta-min" conditions that are widely used to establish validity of inference following model selection (see also Belloni, Chernozhukov, and Hansen (2011b)). In simulation experiments, the Lasso-based IV estimator with a data-driven penalty performs well compared to recently advocated many-instrument robust procedures. In an empirical example dealing with the effect of judicial eminent domain decisions on economic outcomes, the Lasso-based IV estimator outperforms an intuitive benchmark. Optimal instruments are conditional expectations. In developing the IV results, we establish a series of new results for Lasso and post-Lasso estimators of nonparametric conditional expectation functions which are of independent theoretical and practical interest. We construct a modification of Lasso designed to deal with non-Gaussian, heteroscedastic disturbances that uses a data-weighted ℓ1-penalty function. By innovatively using moderate deviation theory for self-normalized sums, we provide convergence rates for the resulting Lasso and post-Lasso estimators that are as sharp as the corresponding rates in the homoscedastic Gaussian case under the condition that log p = o(n^{1/3}). We also provide a data-driven method for choosing the penalty level that must be specified in obtaining Lasso and post-Lasso estimates and establish its asymptotic validity under non-Gaussian, heteroscedastic disturbances.
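The two steps can be sketched with off-the-shelf tools; here scikit-learn's cross-validated Lasso stands in for the paper's data-driven, heteroscedasticity-robust penalty (it is not equivalent to it), and the sparse first stage is simulated:

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(4)
n, p = 200, 300                          # more instruments than observations
Z = rng.standard_normal((n, p))
pi = np.zeros(p)
pi[:5] = 1.0                             # approximately sparse first stage
v = rng.standard_normal(n)
u = 0.8 * v + rng.standard_normal(n)     # endogeneity: u correlated with v
x = Z @ pi + v                           # endogenous regressor
y = 1.5 * x + u

# Step 1: Lasso selects instruments; post-Lasso refits OLS on the selection.
sel = np.flatnonzero(LassoCV(cv=5).fit(Z, x).coef_)
x_hat = Z[:, sel] @ np.linalg.lstsq(Z[:, sel], x, rcond=None)[0]

# Step 2: IV with the fitted first stage as the (estimated optimal) instrument.
beta_iv = (x_hat @ y) / (x_hat @ x)
beta_ols = (x @ y) / (x @ x)
print(f"selected {sel.size} instruments; OLS {beta_ols:.3f}, IV {beta_iv:.3f}")
```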

6.
We consider semiparametric estimation of the memory parameter in a model that includes as special cases both long-memory stochastic volatility and fractionally integrated exponential GARCH (FIEGARCH) models. Under our general model the logarithms of the squared returns can be decomposed into the sum of a long-memory signal and a white noise. We consider periodogram-based estimators using a local Whittle criterion function. We allow the optional inclusion of an additional term to account for possible correlation between the signal and noise processes, as would occur in the FIEGARCH model. We also allow for potential nonstationarity in volatility by allowing the signal process to have a memory parameter d* ≥ 1/2. We show that the local Whittle estimator is consistent for d*∈(0,1). We also show that the local Whittle estimator is asymptotically normal for d*∈(0,3/4) and essentially recovers the optimal semiparametric rate of convergence for this problem. In particular, if the spectral density of the short-memory component of the signal is sufficiently smooth, a convergence rate of n^{2/5−δ} for d*∈(0,3/4) can be attained, where n is the sample size and δ>0 is arbitrarily small. This represents a strong improvement over the performance of existing semiparametric estimators of persistence in volatility. We also prove that the standard Gaussian semiparametric estimator is asymptotically normal if d*=0. This yields a test for long memory in volatility.
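A sketch of the basic local Whittle estimator applied to log squared returns, using the plain Gaussian semiparametric objective without the paper's optional signal-noise correlation term; the signal-plus-noise process and the bandwidth m = n^0.65 are illustrative choices:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def local_whittle(x, m):
    """Basic local Whittle estimate of the memory parameter using the
    first m Fourier frequencies of the periodogram."""
    n = len(x)
    lam = 2 * np.pi * np.arange(1, m + 1) / n
    I = np.abs(np.fft.fft(x - x.mean())[1:m + 1]) ** 2 / (2 * np.pi * n)
    def R(d):
        return np.log(np.mean(lam ** (2 * d) * I)) - 2 * d * np.mean(np.log(lam))
    return minimize_scalar(R, bounds=(-0.49, 0.99), method="bounded").x

# Demo: returns r_t = sigma_t * z_t where log sigma_t^2 is an I(d*) signal,
# so log r_t^2 = signal + white noise (d* = 0.4 here).
rng = np.random.default_rng(5)
n, d_true = 4096, 0.4
w = np.ones(n)
for j in range(1, n):
    w[j] = w[j - 1] * (j - 1 + d_true) / j   # weights of (1-L)^(-d)
h = np.convolve(rng.standard_normal(n), w)[:n] * 0.3
r = np.exp(h / 2) * rng.standard_normal(n)
print("local Whittle d estimate:", local_whittle(np.log(r ** 2), m=int(n ** 0.65)))
```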

7.
This paper explores a two-stage input control system for fixed capacity make-to-order manufacturing systems (with heavy job tardiness penalties) that selectively accepts incoming orders and holds the accepted ones in a pre-shop queue prior to releasing them to the shop floor. Single-stage input control systems that only allow orders to be delayed in a pre-shop queue (i.e., they do not allow some orders to be rejected) have been previously investigated and found to negatively impact overall due-date performance. The hypothesis motivating this research is that judiciously rejecting a subset of incoming orders can prevent the order release queue from being overloaded when a surge of demand occurs. The input control system is evaluated via experiments using a discrete-event simulation model of a fixed capacity manufacturing system. The experiments reported here suggest that holding orders in the pre-shop queue does not improve due-date performance, and that judiciously rejecting orders on its own is a viable alternative mechanism of input control that can deliver improved performance.
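A toy version of the comparison can be run in a few lines: a single-machine make-to-order shop where, under the rejection policy, an arriving order is turned away when the outstanding backlog exceeds a cap. All parameters (utilization, due-date allowance, cap) are illustrative, and this is far simpler than the simulation model in the paper:

```python
import numpy as np

rng = np.random.default_rng(6)

def simulate(workload_cap=None, n_jobs=20000, util=0.95, allowance=8.0):
    """Single-machine make-to-order shop, FCFS. Each order is due
    'allowance' time units after arrival. If workload_cap is set, an
    arriving order is rejected when the outstanding backlog exceeds it."""
    t = backlog_done = 0.0
    tardy = accepted = rejected = 0
    for _ in range(n_jobs):
        t += rng.exponential(1.0 / util)       # Poisson arrivals, rate = util
        proc = rng.exponential(1.0)            # unit-mean processing times
        backlog = max(backlog_done - t, 0.0)   # work still queued at arrival
        if workload_cap is not None and backlog > workload_cap:
            rejected += 1
            continue
        accepted += 1
        backlog_done = t + backlog + proc      # FCFS completion time
        tardy += backlog_done > t + allowance
    return tardy / accepted, rejected / n_jobs

for cap in (None, 6.0):
    frac_tardy, frac_rej = simulate(cap)
    print(f"cap={cap}: tardy {frac_tardy:.1%} of accepted, rejected {frac_rej:.1%}")
```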

8.
This paper discusses a consistent bootstrap implementation of the likelihood ratio (LR) co-integration rank test and associated sequential rank determination procedure of Johansen (1996). The bootstrap samples are constructed using the restricted parameter estimates of the underlying vector autoregressive (VAR) model that obtain under the reduced rank null hypothesis. A full asymptotic theory is provided that shows that, unlike the bootstrap procedure in Swensen (2006) where a combination of unrestricted and restricted estimates from the VAR model is used, the resulting bootstrap data are I(1) and satisfy the null co-integration rank, regardless of the true rank. This ensures that the bootstrap LR test is asymptotically correctly sized and that the probability that the bootstrap sequential procedure selects a rank smaller than the true rank converges to zero. Monte Carlo evidence suggests that our bootstrap procedures work very well in practice.

9.
A new method is proposed for constructing confidence intervals in autoregressive models with linear time trend. Interest focuses on the sum of the autoregressive coefficients because this parameter provides a useful scalar measure of the long-run persistence properties of an economic time series. Since the type of the limiting distribution of the corresponding OLS estimator, as well as the rate of its convergence, depend in a discontinuous fashion upon whether the true parameter is less than one or equal to one (that is, trend-stationary case or unit root case), the construction of confidence intervals is notoriously difficult. The crux of our method is to recompute the OLS estimator on smaller blocks of the observed data, according to the general subsampling idea of Politis and Romano (1994a), although some extensions of the standard theory are needed. The method is more general than previous approaches in that it works for arbitrary parameter values, but also because it allows the innovations to be a martingale difference sequence rather than i.i.d. Some simulation studies examine the finite sample performance.
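The subsampling recipe is easy to sketch in the AR(1) case (where the sum of coefficients is just the slope), using studentized statistics on overlapping blocks so that the unknown rate of convergence drops out. The intercept-only regression and the block size are simplifications; the paper also handles a linear trend:

```python
import numpy as np

def ols_ar1(y):
    """OLS slope and standard error in a regression of y_t on (1, y_{t-1})."""
    X = np.column_stack([np.ones(len(y) - 1), y[:-1]])
    b = np.linalg.lstsq(X, y[1:], rcond=None)[0]
    e = y[1:] - X @ b
    se = np.sqrt(e @ e / (len(e) - 2) * np.linalg.inv(X.T @ X)[1, 1])
    return b[1], se

rng = np.random.default_rng(7)
n, rho = 600, 0.95
y = np.zeros(n)
for t in range(1, n):
    y[t] = rho * y[t - 1] + rng.standard_normal()

rho_hat, se_hat = ols_ar1(y)
block = 60                                   # subsample size (judgment call)
# Studentized statistics on all overlapping blocks; their quantiles proxy
# the sampling distribution without knowing the rate of convergence.
t_stats = []
for s in range(n - block + 1):
    r_b, se_b = ols_ar1(y[s:s + block])
    t_stats.append((r_b - rho_hat) / se_b)
lo, hi = np.quantile(t_stats, [0.025, 0.975])
print(f"rho_hat={rho_hat:.3f}, 95% subsampling CI "
      f"[{rho_hat - hi * se_hat:.3f}, {rho_hat - lo * se_hat:.3f}]")
```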

10.
This paper reviews a set of recent studies that have attempted to measure the causal effect of education on labor market earnings by using institutional features of the supply side of the education system as exogenous determinants of schooling outcomes. A simple theoretical model that highlights the role of comparative advantage in the optimal schooling decision is presented and used to motivate an extended discussion of econometric issues, including the properties of ordinary least squares and instrumental variables estimators. A review of studies that have used compulsory schooling laws, differences in the accessibility of schools, and similar features as instrumental variables for completed education, reveals that the resulting estimates of the return to schooling are typically as big or bigger than the corresponding ordinary least squares estimates. One interpretation of this finding is that marginal returns to education among the low-education subgroups typically affected by supply-side innovations tend to be relatively high, reflecting their high marginal costs of schooling, rather than low ability that limits their return to education.

11.
A systems cointegration rank test is proposed that is applicable for vector autoregressive (VAR) processes with a structural shift at unknown time. The structural shift is modeled as a simple shift in the level of the process. It is proposed to estimate the break date first on the basis of a full unrestricted VAR model. Two alternative estimators are considered and their asymptotic properties are derived. In the next step the deterministic part of the process including the shift size is estimated and the series are adjusted by subtracting the estimated deterministic part. A Johansen type test for the cointegrating rank is applied to the adjusted series. The test statistic is shown to have a well-known asymptotic null distribution that does not depend on the break date. The performance of the procedure in small samples is investigated by simulations.

12.
This paper investigates asymptotic properties of the maximum likelihood estimator and the quasi-maximum likelihood estimator for the spatial autoregressive model. The rates of convergence of those estimators may depend on some general features of the spatial weights matrix of the model. It is therefore important to distinguish between different spatial scenarios. Under the scenario that each unit will be influenced by only a few neighboring units, the estimators may have √n-rate of convergence and be asymptotically normal. When each unit can be influenced by many neighbors, irregularity of the information matrix may occur and various components of the estimators may have different rates of convergence.
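A sketch of the (quasi-)maximum likelihood estimator, concentrating out β and σ² and scanning the spatial parameter λ; the ring-shaped, row-normalized weights matrix below is one simple example of the "few neighbors" scenario:

```python
import numpy as np

rng = np.random.default_rng(8)
n = 100
# Row-normalized 'few neighbors' weights: each unit linked to its 2 neighbors
# on a ring (the sparse-neighbor scenario under which sqrt(n) rates arise).
W = np.zeros((n, n))
for i in range(n):
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.5

lam0, beta0 = 0.4, 2.0
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
y = np.linalg.solve(np.eye(n) - lam0 * W, X @ [1.0, beta0] + rng.standard_normal(n))

def neg_conc_loglik(lam):
    """Log-likelihood of y = lam*W*y + X*b + e with b, sigma^2 concentrated out."""
    A = np.eye(n) - lam * W
    Ay = A @ y
    b = np.linalg.lstsq(X, Ay, rcond=None)[0]
    sigma2 = np.mean((Ay - X @ b) ** 2)
    logdet = np.linalg.slogdet(A)[1]
    return -(logdet - n / 2 * np.log(sigma2))

grid = np.linspace(-0.9, 0.9, 181)
lam_hat = grid[np.argmin([neg_conc_loglik(lam) for lam in grid])]
b_hat = np.linalg.lstsq(X, (np.eye(n) - lam_hat * W) @ y, rcond=None)[0]
print(f"lambda_hat={lam_hat:.3f} (true {lam0}), beta_hat={b_hat}")
```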

13.
In this paper we investigate methods for testing the existence of a cointegration relationship among the components of a nonstationary fractionally integrated (NFI) vector time series. Our framework generalizes previous studies restricted to unit root integrated processes and permits simultaneous analysis of spurious and cointegrated NFI vectors. We propose a modified F-statistic, based on a particular studentization, which converges weakly under both hypotheses, despite the fact that OLS estimates are only consistent under cointegration. This statistic leads to a Wald-type test of cointegration when combined with a narrow band GLS-type estimate. Our semiparametric methodology allows consistent testing of the spurious regression hypothesis against the alternative of fractional cointegration without prior knowledge of the memory of the original series, their short-run properties, the cointegrating vector, or the degree of cointegration. This semiparametric aspect of the modeling does not lead to an asymptotic loss of power, permitting the Wald statistic to diverge faster under the alternative of cointegration than when testing for a hypothesized cointegration vector. In our simulations we show that the method has comparable power to customary procedures under the unit root cointegration setup, and maintains good properties in a general framework where other methods may fail. We illustrate our method by testing the cointegration hypothesis of nominal GNP and simple-sum (M1, M2, M3) monetary aggregates.

14.
An asymptotic theory is developed for nonlinear regression with integrated processes. The models allow for nonlinear effects from unit root time series and therefore deal with the case of parametric nonlinear cointegration. The theory covers integrable and asymptotically homogeneous functions. Sufficient conditions for weak consistency are given and a limit distribution theory is provided. The rates of convergence depend on the properties of the nonlinear regression function, and are shown to be as slow as n^{1/4} for integrable functions, and to be generally polynomial in n^{1/2} for homogeneous functions. For regressions with integrable functions, the limiting distribution theory is mixed normal with mixing variates that depend on the sojourn time of the limiting Brownian motion of the integrated process.

15.
This paper considers inference in a broad class of nonregular models. The models considered are nonregular in the sense that standard test statistics have asymptotic distributions that are discontinuous in some parameters. It is shown in Andrews and Guggenberger (2009a) that standard fixed critical value, subsampling, and m out of n bootstrap methods often have incorrect asymptotic size in such models. This paper introduces general methods of constructing tests and confidence intervals that have correct asymptotic size. In particular, we consider a hybrid subsampling/fixed-critical-value method and size-correction methods. The paper discusses two examples in detail. They are (i) confidence intervals in an autoregressive model with a root that may be close to unity and conditional heteroskedasticity of unknown form and (ii) tests and confidence intervals based on a post-conservative model selection estimator.

16.
Cryptosporidium human dose-response data from seven species/isolates are used to investigate six models of varying complexity that estimate infection probability as a function of dose. Previous models attempt to explicitly account for virulence differences among C. parvum isolates, using three or six species/isolates. Four (two new) models assume species/isolate differences are insignificant and three of these (all but the exponential) allow for variable human susceptibility. These three human-focused models (fractional Poisson, exponential with immunity, and beta-Poisson) are relatively simple yet fit the data significantly better than the more complex isolate-focused models. Among these three, the one-parameter fractional Poisson model is the simplest but assumes that all Cryptosporidium oocysts used in the studies were capable of initiating infection. The exponential with immunity model does not require such an assumption and includes the fractional Poisson as a special case. The fractional Poisson model is an upper bound of the exponential with immunity model and applies when all oocysts are capable of initiating infection. The beta-Poisson model does not allow an immune human subpopulation; thus infection probability approaches 100% as dose becomes huge. All three of these models predict significantly (>10×) greater risk at the low doses that consumers might receive if exposed through drinking water or other environmental exposure (e.g., 72% vs. 4% infection probability for a one-oocyst dose) than previously predicted. This new insight into Cryptosporidium risk suggests additional inactivation and removal via treatment may be needed to meet any specified risk target, such as a suggested 10^{−4} annual risk of Cryptosporidium infection.
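The three human-focused models reduce to simple closed-form curves, sketched below; the parameter values are placeholders chosen only to display the qualitative low-dose behavior, not the fitted values from the article:

```python
import numpy as np

def fractional_poisson(dose, P):
    """P(infection) = P * (1 - exp(-dose)): Poisson-distributed oocyst counts,
    every oocyst infective, only a fraction P of people susceptible."""
    return P * (1.0 - np.exp(-dose))

def exponential_with_immunity(dose, P, r):
    """Nests the fractional Poisson at r = 1; r is the per-oocyst
    probability of initiating infection in a susceptible host."""
    return P * (1.0 - np.exp(-r * dose))

def beta_poisson(dose, alpha, beta):
    """No immune subpopulation: P(infection) -> 1 as the dose grows."""
    return 1.0 - (1.0 + dose / beta) ** (-alpha)

# Placeholder parameters only; the article fits these curves to the
# seven-species/isolate human challenge data by maximum likelihood.
for d in (1.0, 10.0, 100.0):
    print(f"dose {d:6.1f}: fP {fractional_poisson(d, P=0.72):.3f}  "
          f"exp+imm {exponential_with_immunity(d, P=0.72, r=0.5):.3f}  "
          f"betaP {beta_poisson(d, alpha=0.1, beta=0.5):.3f}")
```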

17.
In general linear modeling, an alternative to the method of least squares (LS) is the least absolute deviations (LAD) procedure. Although LS is more widely used, the LAD approach yields better estimates in the presence of outliers. In this paper, we examine the performance of LAD estimators for the parameters of the first-order autoregressive model in the presence of outliers. A simulation study compared these estimates with those given by LS. The general conclusion is that LAD does not deal successfully with additive outliers. A simple procedure is proposed which allows exception reporting when outliers occur.
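The comparison is straightforward to reproduce: fit the AR(1) slope by least squares and by least absolute deviations on data contaminated with additive outliers (the contamination scheme below is illustrative):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(9)
n, phi = 300, 0.6
z = np.zeros(n)
for t in range(1, n):
    z[t] = phi * z[t - 1] + rng.standard_normal()
y = z.copy()
idx = rng.choice(n, size=6, replace=False)
y[idx] += rng.choice([-8.0, 8.0], size=6)      # additive outliers

x, yy = y[:-1], y[1:]
X = np.column_stack([np.ones(n - 1), x])
ls = np.linalg.lstsq(X, yy, rcond=None)[0]     # least squares

# LAD: minimize the sum of absolute residuals, started from the LS fit.
lad = minimize(lambda b: np.abs(yy - X @ b).sum(), ls, method="Nelder-Mead").x
print(f"true phi={phi}; LS slope {ls[1]:.3f}; LAD slope {lad[1]:.3f}")
# Additive outliers corrupt both y_t and its own lag, which is why LAD's
# usual robustness to outliers in the response does not carry over here.
```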

18.
Local to unity limit theory is used in applications to construct confidence intervals (CIs) for autoregressive roots through inversion of a unit root test (Stock (1991)). Such CIs are asymptotically valid when the true model has an autoregressive root that is local to unity (ρ = 1 + c/n), but are shown here to be invalid at the limits of the domain of definition of the localizing coefficient c because of a failure in tightness and the escape of probability mass. Failure at the boundary implies that these CIs have zero asymptotic coverage probability in the stationary case and in vicinities of unity that are wider than O(n^{−1/3}). The inversion methods of Hansen (1999) and Mikusheva (2007) are asymptotically valid in such cases. Implications of these results for predictive regression tests are explored. When the predictive regressor is stationary, the popular Campbell and Yogo (2006) CIs for the regression coefficient have zero coverage probability asymptotically, and their predictive test statistic Q erroneously indicates predictability with probability approaching unity when the null of no predictability holds. These results have obvious cautionary implications for the use of the procedures in empirical practice.

19.
The delta method and continuous mapping theorem are among the most extensively used tools in asymptotic derivations in econometrics. Extensions of these methods are provided for sequences of functions that are commonly encountered in applications and where the usual methods sometimes fail. Important examples of failure arise in the use of simulation-based estimation methods such as indirect inference. The paper explores the application of these methods to the indirect inference estimator (IIE) in first order autoregressive estimation. The IIE uses a binding function that is sample size dependent. Its limit theory relies on a sequence-based delta method in the stationary case and a sequence-based implicit continuous mapping theorem in unit root and local to unity cases. The new limit theory shows that the IIE achieves much more than (partial) bias correction. It changes the limit theory of the maximum likelihood estimator (MLE) when the autoregressive coefficient is in the locality of unity, reducing the bias and the variance of the MLE without affecting the limit theory of the MLE in the stationary case. Thus, in spite of the fact that the IIE is a continuously differentiable function of the MLE, the limit distribution of the IIE is not simply a scale multiple of the MLE, but depends implicitly on the full binding function mapping. The unit root case therefore represents an important example of the failure of the delta method and shows the need for an implicit mapping extension of the continuous mapping theorem.
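A sketch of the indirect inference estimator for the AR(1) coefficient: tabulate the sample-size-dependent binding function b_n(ρ) = E[ρ̂ | ρ] by simulation over a grid, then invert it at the observed estimate. Grid and simulation sizes below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(10)

def ar1_ols(y):
    """OLS/Gaussian MLE of the AR(1) coefficient (no intercept)."""
    return (y[:-1] @ y[1:]) / (y[:-1] @ y[:-1])

def simulate_ar1(rho, n, rng):
    y = np.zeros(n)
    for t in range(1, n):
        y[t] = rho * y[t - 1] + rng.standard_normal()
    return y

n, rho0 = 100, 0.95
y_obs = simulate_ar1(rho0, n, rng)
rho_mle = ar1_ols(y_obs)                      # downward-biased near unity

# Binding function b_n(rho) = E[rho_hat | rho], tabulated by simulation,
# then inverted at the observed estimate: the indirect inference step.
grid = np.linspace(0.5, 0.999, 60)
binding = np.array([
    np.mean([ar1_ols(simulate_ar1(r, n, rng)) for _ in range(400)])
    for r in grid
])
rho_ii = grid[np.argmin(np.abs(binding - rho_mle))]
print(f"true {rho0}, OLS/MLE {rho_mle:.4f}, indirect inference {rho_ii:.4f}")
```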

20.
Risk-based, background, and laboratory quantitation limit-derived standards for carcinogenic polycyclic aromatic hydrocarbons (cPAHs) in residential and nonresidential soils vary across the northeast region of the United States. The magnitude and extent of this variation, however, have not been systematically studied. This article examines the technical basis and methodology used by eight northeastern states in the development of risk-based screening values, guidelines, and standards for cPAHs in soils. Exposure pathways, human receptors, algorithms, and input variables used by each state in the calculation of acceptable human health risks are identified and reviewed within the context of environmental policy and regulatory impacts. Emphasis is placed on a comparative analysis of multipathway exposures (incidental ingestion, dermal contact, and particulate inhalation) and key science-policy decisions that have led to the promulgation and adoption of different exposure criteria for cPAHs in the Northeast. More than 425 data points and 20 distinct exposure factors across eight state programs, 18 age subgroups, six activity scenarios, and three exposure pathways were systematically evaluated. Risk-based values for one state varied either above or below risk-based, background or laboratory quantitation limit-derived standards of another state for the same cPAH and receptor. Standards for cPAHs in soils were found to differ significantly across the northeast region, in some cases by one or two orders of magnitude. While interstate differences can be expected to persist, future changes in federal guidance could mean a shift in risk drivers, compliance status, or calculated cumulative risks for individual properties impacted by PAH releases.
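For reference, the generic incidental-ingestion calculation that underlies such risk-based values solves the standard carcinogenic risk equation for the soil concentration at a target risk level. The sketch below uses typical residential exposure defaults and the historical IRIS oral slope factor for benzo[a]pyrene; none of these stand for any particular state's promulgated inputs:

```python
# Risk-based soil concentration for a carcinogen via incidental ingestion:
#   Risk = (C * IR * CF * EF * ED * SF) / (BW * AT)
# solved for C at a target risk level. Exposure defaults below are typical
# residential assumptions, not any specific state's adopted values.

def soil_screening_level(target_risk, slope_factor,
                         IR=100.0,       # soil ingestion rate, mg/day
                         CF=1e-6,        # mg -> kg conversion
                         EF=350.0,       # exposure frequency, days/year
                         ED=30.0,        # exposure duration, years
                         BW=70.0,        # body weight, kg
                         AT=70 * 365.0): # averaging time, days
    """Soil concentration (mg/kg) meeting the target excess cancer risk."""
    intake_per_conc = IR * CF * EF * ED / (BW * AT)  # (mg/kg-day) per (mg/kg soil)
    return target_risk / (slope_factor * intake_per_conc)

# Benzo[a]pyrene with the historical IRIS oral slope factor of 7.3 (mg/kg-day)^-1:
for tr in (1e-6, 1e-5):
    print(f"target risk {tr:.0e}: {soil_screening_level(tr, 7.3):.2f} mg/kg")
```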

