Similar Documents
20 similar documents were retrieved (search time: 46 ms).
1.
We analyze the modifications that occur in indirect inference when a nuisance parameter is not identified under the null hypothesis. We develop a testing procedure adapted to this simulation-based estimation method and detail its use for detecting the threshold effect in threshold moving average models with contemporaneous and lagged asymmetries. In contrast to existing threshold models, these models account for asymmetric effects of both current and lagged random shocks. We use them to measure the persistence of shocks to U.S. output.

2.
In this paper, we present a maximum likelihood estimation procedure for the Box-Cox model when a lagged dependent variable is included among the explanatory variables and the first observation of the dependent variable is random. A numerical example shows that a test on the coefficient of the lagged dependent variable is sensitive to whether the first observation of the dependent variable is treated as random or not.
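As a rough illustration of this kind of estimation (and only that: the abstract's treatment of the random first observation is not reproduced, and the data and the λ grid below are hypothetical), a concentrated-likelihood sketch for a Box-Cox model with one lagged dependent variable might look as follows, conditioning on the first observation.

```python
import numpy as np

def box_cox(y, lam):
    """Box-Cox transform; y must be strictly positive."""
    return np.log(y) if abs(lam) < 1e-8 else (y**lam - 1.0) / lam

def concentrated_loglik(y, lam):
    """Profile log-likelihood of lambda for z_t = b0 + b1*z_{t-1} + e_t,
    where z = BoxCox(y, lambda) and e_t is Gaussian; the first observation
    is conditioned on rather than treated as random."""
    z = box_cox(y, lam)
    zt, zlag = z[1:], z[:-1]
    X = np.column_stack([np.ones_like(zlag), zlag])
    beta, *_ = np.linalg.lstsq(X, zt, rcond=None)
    resid = zt - X @ beta
    n = len(zt)
    sigma2 = resid @ resid / n
    # Gaussian log-likelihood (up to constants) plus the Jacobian of the transform
    return -0.5 * n * np.log(sigma2) + (lam - 1.0) * np.sum(np.log(y[1:]))

rng = np.random.default_rng(0)
y = np.exp(np.cumsum(rng.normal(0.02, 0.05, 200)) + 3.0)   # hypothetical positive series

grid = np.linspace(-1.0, 2.0, 61)
lam_hat = grid[np.argmax([concentrated_loglik(y, lam) for lam in grid])]
print("lambda maximising the concentrated likelihood:", round(float(lam_hat), 2))
```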

3.
This article applies different approaches to distinguish state dependence from unobserved heterogeneity and serial correlation and, hence, test for state dependence in consumer brand choices. First, we apply a simple method proposed by Chamberlain, which involves lagged exogenous variables only. Second, we also estimate a lagged-dependent-variable specification proposed by Wooldridge. Third, we use the estimation approach suggested by Wooldridge to estimate a model with both lagged dependent and exogenous variables to distinguish between the two different sources of choice dynamics, state dependence and lagged effects of the exogenous variables. Our analysis reveals that the best approach is to use models with both lagged dependent and exogenous variables. Our findings include strong evidence for state dependence in five out of the six product categories studied in this article.

4.
We propose a modification of local polynomial estimation which improves the efficiency of the conventional method when the observation errors are correlated. The procedure is based on a pre-transformation of the data as a generalization of the pre-whitening procedure introduced by Xiao et al. [(2003), ‘More Efficient Local Polynomial Estimation in Nonparametric Regression with Autocorrelated Errors’, Journal of the American Statistical Association, 98, 980–992]. While these authors assumed a linear process representation for the error process, we avoid any structural assumption. We further allow the regressors and the errors to be dependent. More importantly, we show that the inclusion of both leading and lagged variables in the approximation of the error terms outperforms the best approximation based on lagged variables only. Establishing its asymptotic distribution, we show that the proposed estimator is more efficient than the standard local polynomial estimator. As a by-product we prove a suitable version of a central limit theorem which allows us to improve the asymptotic normality result for local polynomial estimators by Masry and Fan [(1997), ‘Local Polynomial Estimation of Regression Functions for Mixing Processes’, Scandinavian Journal of Statistics, 24, 165–179]. A simulation study confirms the efficiency of our estimator on finite samples. An application to climate data also shows that our new method leads to an estimator with decreased variability.
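The building block of the method is ordinary local polynomial (here local linear) regression, sketched below with plain numpy; the pre-transformation based on leading and lagged error terms, which is the paper's actual contribution, is not implemented, and the Gaussian kernel and bandwidth are arbitrary illustrative choices.

```python
import numpy as np

def local_linear(x, y, x0, h):
    """Local linear estimate of E[y | x = x0] with a Gaussian kernel and bandwidth h."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)           # kernel weights
    X = np.column_stack([np.ones_like(x), x - x0])   # local linear design
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return beta[0]                                    # intercept = fitted value at x0

rng = np.random.default_rng(1)
n = 300
x = np.sort(rng.uniform(0, 1, n))
e = np.zeros(n)
for t in range(1, n):                                 # AR(1) errors: correlated observations
    e[t] = 0.6 * e[t - 1] + rng.normal(scale=0.1)
y = np.sin(2 * np.pi * x) + e

grid = np.linspace(0.05, 0.95, 10)
print(np.round([local_linear(x, y, x0, h=0.08) for x0 in grid], 2))
```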

5.
This paper introduces a novel way of differentiating a unit root from stationary alternatives using so-called “Bridge” estimators; this estimation procedure can potentially generate exact zero estimates of parameters. We exploit this property and treat unit-root testing as a model selection problem. We show that Bridge estimators select the correct model with probability tending to 1: when the parameter on the lagged dependent variable is zero (nonstationarity), they estimate it as exactly zero, and when it is nonzero (stationarity), they estimate the coefficient with a standard normal limit. In this sense, we also extend the statistics literature, which has dealt with model selection only among stationary variables.
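A Bridge estimator penalises each coefficient by |β|^q with 0 < q < 1, a penalty whose infinite slope at zero can snap small estimates to exactly zero. The toy sketch below applies the idea to the coefficient on the lagged level in a Dickey-Fuller-type regression; the penalty weight, exponent, and data are hypothetical choices, not the paper's, and the outcome depends on the tuning and the particular draw.

```python
import numpy as np

def bridge_rho(y, lam=100.0, q=0.5):
    """Grid-minimise the Bridge objective for the regression  dy_t = rho * y_{t-1} + e_t.
    An estimate of exactly 0 for rho is read as selecting the unit-root model."""
    dy, ylag = np.diff(y), y[:-1]
    grid = np.arange(-950, 101) / 1000.0      # includes 0.0 exactly
    obj = [np.sum((dy - r * ylag) ** 2) + lam * abs(r) ** q for r in grid]
    return grid[int(np.argmin(obj))]

rng = np.random.default_rng(2)
rw = np.cumsum(rng.normal(size=400))          # unit-root series
ar = np.zeros(400)
for t in range(1, 400):
    ar[t] = 0.5 * ar[t - 1] + rng.normal()    # stationary AR(1)

print("random walk -> rho_hat =", bridge_rho(rw))   # often exactly 0.0
print("stationary  -> rho_hat =", bridge_rho(ar))   # typically clearly negative
```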

6.
In the case where lagged dependent variables are included in the regression model, it is known that the ordinary least squares estimates (OLSE) are biased in small samples and that the bias increases as the number of irrelevant variables increases. In this paper, based on bootstrap methods, an attempt is made to obtain unbiased estimates in autoregressive and non-Gaussian cases. We propose a residual-based bootstrap method. Simulation studies are performed to examine whether the proposed estimation procedure works well. We find that it is possible to recover the true parameter values and that the proposed procedure yields less biased estimators than OLSE. This paper is a substantial revision of Tanizaki (2000); the normality assumption adopted there is not required here. The authors are grateful to an anonymous referee for valuable suggestions and comments. This research was partially supported by the Japan Society for the Promotion of Science, Grants-in-Aid for Scientific Research (C)(2) #14530033, 2002–2005, for H. Tanizaki, and Grants-in-Aid for the 21st Century COE Program.
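A stripped-down version of the idea (not the authors' exact algorithm) for an AR(1) with intercept: estimate by OLS, resample the centred residuals to rebuild bootstrap series, and subtract the estimated bias from the OLS coefficient. The sample size, number of replications, and true parameters below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def ols_ar1(y):
    """OLS of y_t on (1, y_{t-1}); returns the coefficients and the residuals."""
    X = np.column_stack([np.ones(len(y) - 1), y[:-1]])
    beta, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
    return beta, y[1:] - X @ beta

def build_ar1(c, rho, shock_pool, y0, T):
    """Rebuild a series of length T, drawing innovations from shock_pool with replacement."""
    y = np.empty(T)
    y[0] = y0
    shocks = rng.choice(shock_pool, size=T, replace=True)
    for t in range(1, T):
        y[t] = c + rho * y[t - 1] + shocks[t]
    return y

T, c0, rho0 = 60, 0.5, 0.8                              # hypothetical data-generating process
y = build_ar1(c0, rho0, rng.normal(size=5000), 0.0, T)

beta_hat, resid = ols_ar1(y)
resid = resid - resid.mean()                            # centre residuals before resampling

boot = np.array([ols_ar1(build_ar1(beta_hat[0], beta_hat[1], resid, y[0], T))[0][1]
                 for _ in range(999)])
bias = boot.mean() - beta_hat[1]                        # estimated small-sample bias
print("OLS rho:", round(beta_hat[1], 3), " bias-corrected rho:", round(beta_hat[1] - bias, 3))
```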

7.
Exploratory methods for determining appropriate lagged variables in a vector nonlinear time series model are investigated. The first is a multivariate extension of the R statistic considered by Granger and Lin (1994), which is based on an estimate of the mutual information criterion. The second method uses Kendall's ρ and partial ρ statistics for lag determination. The methods provide nonlinear analogues of the autocorrelation and partial autocorrelation matrices for a vector time series. Simulation studies indicate that the R statistic reliably identifies appropriate lagged nonlinear moving average terms in a vector time series, while Kendall's ρ and partial ρ statistics have some power in identifying appropriate lagged nonlinear moving average and autoregressive terms, respectively, when the nonlinear relationship between lagged variables is monotonic. For illustration, the methods are applied to a set of annual temperature and tree ring measurements at Campito Mountain in California.
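A minimal sketch of the second idea, assuming only that the rank correlation between a series and its own lags is informative: compute Kendall's rank correlation (tau here) at each lag and look for the lags that stand out. The partial-statistic refinement and the mutual-information R statistic are not implemented, and the data, lag range, and monotone nonlinearity are hypothetical.

```python
import numpy as np

def kendall_tau(x, y):
    """Kendall's tau-a by pairwise comparison (O(n^2); fine for short series)."""
    n = len(x)
    net = 0                                   # +1 per concordant pair, -1 per discordant pair
    for i in range(n - 1):
        net += np.sum(np.sign(x[i + 1:] - x[i]) * np.sign(y[i + 1:] - y[i]))
    return 2.0 * net / (n * (n - 1))

def lagged_taus(y, x, max_lag=6):
    """Kendall's tau between y_t and x_{t-k} for k = 1..max_lag."""
    return {k: round(kendall_tau(y[k:], x[:-k]), 2) for k in range(1, max_lag + 1)}

rng = np.random.default_rng(4)
e = rng.normal(size=500)
y = np.zeros(500)
for t in range(2, 500):                       # monotone nonlinear dependence on lag 2
    y[t] = 0.8 * np.tanh(y[t - 2]) + e[t]

print(lagged_taus(y, y))   # tau is usually largest at lag 2 (and, more weakly, its multiples)
```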

8.
Farmers in Sub-Saharan Africa have lower agricultural technology adoption rates than the rest of the world. It is believed that the past season's yield affects a farmer's capacity to take on the riskier improved seed variety, but this effect has not been studied. We quantify the effect of past season yield on improved corn seed use in future seasons while addressing the impact of the seed variety on yield. We develop a maximum likelihood method that addresses the fact that farmers self-select into a technology, so that its effect on yield is endogenous. The method is unique in that it models both lagged and endogenous effects in correlated discrete and continuous outcomes simultaneously. Because of the presence of the lagged effect in a three-year dataset, we also propose a solution to the initial conditions problem and demonstrate its effectiveness with simulations. We used longitudinal survey data collected from Kenyan corn farmers over three years. Our results show that higher past season yield increased the likelihood of adoption in future seasons. The simulation and empirical studies indicate that ignoring the self-selection of improved seed use biases the results; we obtain a different sign for the covariance.

9.
Evidence presented by Fomby and Guilkey (1983) suggests that Hatanaka's estimator of the coefficients in the lagged dependent variable–serial correlation regression model performs poorly, not because of poor selection of the estimate of the autocorrelation coefficient, but because of the lack of a first observation correction. This study conducts a Monte Carlo investigation of the small-sample efficiency gains obtainable from a first observation correction suggested by Harvey (1981). Results presented here indicate that substantial gains result from the first observation correction. However, in comparing Hatanaka's procedure with the first observation correction to maximum likelihood search, it appears that ignoring the determinantal term of the full likelihood function causes some loss of small-sample efficiency. Thus, when computer costs and programming constraints are not binding, maximum likelihood search is to be recommended. In contrast, users who have access to only rudimentary least squares programs would be well served by Hatanaka's two-step procedure with the first observation correction.
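The first observation correction in question is, in essence, the familiar GLS-style treatment of the first row: quasi-difference observations 2..T and scale the first observation by sqrt(1 − ρ²) rather than discarding it. The sketch below shows only that transformation for a static regression with a known ρ; Hatanaka's two-step estimation of ρ and the lagged-dependent-variable part of the model are not reproduced, and the data are hypothetical.

```python
import numpy as np

def quasi_difference(y, X, rho):
    """AR(1)-error GLS transformation that keeps the first observation:
    rows 2..T are quasi-differenced, row 1 is scaled by sqrt(1 - rho^2)."""
    ys, Xs = np.empty_like(y), np.empty_like(X)
    ys[0] = np.sqrt(1 - rho**2) * y[0]
    Xs[0] = np.sqrt(1 - rho**2) * X[0]
    ys[1:] = y[1:] - rho * y[:-1]
    Xs[1:] = X[1:] - rho * X[:-1]
    return ys, Xs

rng = np.random.default_rng(12)
T, rho = 100, 0.7                                     # rho would be estimated in practice
X = np.column_stack([np.ones(T), rng.normal(size=T)])
u = np.zeros(T)
for t in range(1, T):
    u[t] = rho * u[t - 1] + rng.normal()              # AR(1) disturbances
y = X @ np.array([1.0, 2.0]) + u

ys, Xs = quasi_difference(y, X, rho)
beta, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
print("GLS-style estimates with the first observation retained:", np.round(beta, 2))
```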

10.
In this paper, we present a test of independence between the response variable, which can be discrete or continuous, and a continuous covariate after adjusting for heteroscedastic treatment effects. The method first augments each pair of the data for all treatments with a fixed number of nearest neighbours as pseudo-replicates. A test statistic is then constructed by taking the difference of two quadratic forms. The statistic is equivalent to the average lagged correlations between the response and nearest neighbour local estimates of the conditional mean of the response given the covariate for each treatment group. This approach effectively eliminates the need to estimate the nonlinear regression function. The asymptotic distribution of the proposed test statistic is obtained under the null and local alternatives. Although using a fixed number of nearest neighbours poses significant difficulties for the inference compared with letting the number of nearest neighbours go to infinity, the parametric standardizing rate is obtained for our test statistics. Numerical studies show that the new test procedure has robust power to detect nonlinear dependency in the presence of outliers that might result from highly skewed distributions. The Canadian Journal of Statistics 38: 408–433; 2010 © 2010 Statistical Society of Canada

11.
The lagged identification rate is the probability of identifying an individual given its identification some time lag earlier. The lagged association rate is the probability that two individuals are associated given their association some time lag earlier. Models of lagged identification and association rates fitted by maximizing sums of non-independent log-likelihoods have approximately unbiased parameter estimates. Simulations suggest that the Akaike Information Criterion often selects the true model for lagged identification rate data, that the quasi-Akaike Information Criterion performs better for lagged association rates, and that confidence intervals for parameters are best obtained by bootstrap methods for lagged identification rates and by quasi-likelihood or jackknife methods for lagged association rates.
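As a concrete reading of the first definition, an empirical lagged identification rate can be computed directly from a table of (time, individuals identified) records, as in the sketch below; the simulated sighting data are hypothetical, and the model fitting, information criteria, and bootstrap/jackknife steps from the abstract are omitted.

```python
import numpy as np

def lagged_identification_rate(sightings, lag):
    """Estimate P(an individual identified at time t is identified again at t + lag)
    by counting over all individuals and all sampling occasions."""
    hits, trials = 0, 0
    for t in sorted(sightings):
        if t + lag not in sightings:
            continue
        later = sightings[t + lag]
        for ind in sightings[t]:
            trials += 1
            hits += ind in later
    return hits / trials if trials else np.nan

# hypothetical survey: 50 individuals, each seen on a given day with probability 0.3,
# and each leaving the study area permanently with probability 0.02 per day
rng = np.random.default_rng(5)
present = np.ones(50, dtype=bool)
sightings = {}
for day in range(100):
    present &= rng.random(50) > 0.02
    sightings[day] = set(np.where(present & (rng.random(50) < 0.3))[0].tolist())

for lag in (1, 5, 20, 60):
    print("lag", lag, "->", round(lagged_identification_rate(sightings, lag), 3))
```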

12.
We deal with smoothed estimators for conditional probability functions of discrete-valued time series { Yt } under two different settings. When the conditional distribution of Yt given its lagged values falls in a parametric family and depends on exogenous random variables, a smoothed maximum (partial) likelihood estimator for the unknown parameter is proposed. When there is no prior information on the distribution, various nonparametric estimation methods have been compared and the adjusted Nadaraya–Watson estimator stands out as it shares the advantages of both Nadaraya–Watson and local linear regression estimators. The asymptotic normality of the estimators proposed has been established in the manner of sparse asymptotics, which shows that the smoothed methods proposed outperform their conventional, unsmoothed, parametric counterparts under very mild conditions. Simulation results lend further support to this assertion. Finally, the new method is illustrated via a real data set concerning the relationship between the number of daily hospital admissions and the levels of pollutants in Hong Kong in 1994–1995. An ad hoc model selection procedure based on a local Akaike information criterion is proposed to select the significant pollutant indices.
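The simplest member of the family of smoothers compared in the abstract is the plain Nadaraya–Watson estimate of a conditional probability given a lagged covariate; a sketch is below. The adjusted Nadaraya–Watson and local linear variants, the smoothed partial-likelihood estimator, and the local AIC selection are not implemented, and the logistic data-generating process and bandwidth are hypothetical.

```python
import numpy as np

def nw_conditional_prob(y, x, x0, h):
    """Nadaraya-Watson estimate of P(Y = 1 | X = x0) with a Gaussian kernel."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    return np.sum(w * y) / np.sum(w)

rng = np.random.default_rng(11)
n = 1000
x = rng.normal(size=n)                                  # e.g. a lagged pollutant level
p = 1.0 / (1.0 + np.exp(-(0.5 + 1.5 * x)))              # true conditional probability
y = (rng.random(n) < p).astype(float)                   # binary response series

for x0 in (-1.0, 0.0, 1.0):
    est = nw_conditional_prob(y, x, x0, h=0.3)
    true = 1.0 / (1.0 + np.exp(-(0.5 + 1.5 * x0)))
    print(f"x0 = {x0:+.1f}: NW estimate {est:.2f}  vs  true {true:.2f}")
```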

13.
Until recently, a difficulty with applying the Durbin-Watson (DW) test to the dynamic linear regression model has been the lack of appropriate critical values. Inder (1986) used a modified small-disturbance distribution (SDD) to find approximate critical values. King and Wu (1991) showed that the exact SDD of the DW statistic is equivalent to the distribution of the DW statistic from the regression with the lagged dependent variables replaced by their means. Unfortunately, these means are unknown although they could be estimated by the actual variable values. This provides a justification for using the exact critical values of the DW statistic from the regression with the lagged dependent variables treated as non-stochastic regressors. Extensive Monte Carlo experiments are reported in this paper. They show that this approach leads to reasonably accurate critical values, particularly when two lags of the dependent variable are present. Robustness to non-normality is also investigated.
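Operationally, the recommendation amounts to computing the usual DW statistic from the dynamic regression and then comparing it with exact critical values obtained as if the lagged dependent variables were fixed regressors. The sketch below only computes the statistic itself on hypothetical data; obtaining the exact critical values requires the numerical procedures discussed in the paper.

```python
import numpy as np

def durbin_watson(y, X):
    """Durbin-Watson statistic computed from the OLS residuals of y on X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ beta
    return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

rng = np.random.default_rng(6)
T = 120
x = rng.normal(size=T)
u = np.zeros(T)
for t in range(1, T):                        # AR(1) disturbances
    u[t] = 0.5 * u[t - 1] + rng.normal()
y = np.zeros(T)
for t in range(1, T):                        # dynamic model with one lag of y
    y[t] = 1.0 + 0.6 * y[t - 1] + 0.8 * x[t] + u[t]

# the design matrix treats the lagged dependent variable like any other regressor
X = np.column_stack([np.ones(T - 1), y[:-1], x[1:]])
print("DW =", round(durbin_watson(y[1:], X), 3))   # typically below 2 under positive autocorrelation
```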

15.
Asymmetric behaviour in both mean and variance is often observed in real time series. The approach we adopt is based on the double threshold autoregressive conditionally heteroscedastic (DTARCH) model with normal innovations. This model allows threshold nonlinearity in mean and volatility to be modelled as a result of the impact of lagged changes in assets and squared shocks, respectively. A methodology for building DTARCH models is proposed based on genetic algorithms (GAs). The most important structural parameters, that is, regimes and thresholds, are searched for by GAs, while the remaining structural parameters, that is, the delay parameters and model orders, vary in some pre-specified intervals and are determined by exhaustive search and an Akaike Information Criterion (AIC)-like criterion. For each trial set of structural parameters, a DTARCH model is fitted that maximizes the (penalized) likelihood (AIC criterion); for this purpose the iteratively weighted least squares algorithm is used. The best model according to the AIC criterion is then chosen. Extension to the double threshold generalized ARCH (DTGARCH) model is also considered. The proposed methodology is checked using both simulated and market index data. Our findings show that the GA-based procedure yields results comparable to those reported in the literature for real time series. As far as artificial time series are concerned, the proposed procedure seems able to fit the data quite well. In particular, a comparison is performed between the present procedure and the method proposed by Tsay [Tsay, R.S., 1989, Testing and modeling threshold autoregressive processes. Journal of the American Statistical Association, Theory and Methods, 84, 231–240] for estimating the delay parameter. The former almost always yields better results than the latter. However, adopting Tsay's procedure as a preliminary stage for finding the appropriate delay parameter may save computational time, especially if the delay parameter can vary over a large interval.

16.
A new family of statistics is proposed to test for the presence of serial correlation in linear regression models. The tests are based on partial sums of lagged cross-products of regression residuals that define a class of interesting Gaussian processes. These processes are characterized in terms of regressor functions, the serial-correlation structure, the distribution of the noise process, and the order of the lag of the cross-products of residuals. It is shown that these four factors affect the lagged residual processes independently. Large-sample distributional results are presented for test statistics under the null hypothesis of no serial correlation or for alternatives from a range of interesting hypotheses. Some indication of the circumstances to which the asymptotic results apply in finite-sample situations, and of those to which they should be applied with some caution, is obtained through a simulation study. Tables of selected quantiles of the proposed tests are also given. The tests are illustrated with two examples taken from the empirical literature. It is also proposed that plots of lagged residual processes be used as diagnostic tools to gain insight into the correlation structure of residuals derived from regression fits.
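The ingredients of these tests are easy to compute: take the OLS residuals e_t and, for a chosen lag k, form the partial-sum process of the cross-products e_t · e_{t−k}. The sketch below builds these processes on hypothetical data under the null of no serial correlation; the standardisation shown is one natural choice, and the actual test statistics and their quantiles are those tabulated in the paper.

```python
import numpy as np

def lagged_residual_process(y, X, k):
    """Partial sums of the lagged cross-products e_t * e_{t-k}, crudely standardised."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ beta
    cross = e[k:] * e[:-k]
    return np.cumsum(cross) / (np.mean(e ** 2) * np.sqrt(len(e)))

rng = np.random.default_rng(7)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0]) + rng.normal(size=n)   # i.i.d. errors: the null hypothesis

for k in (1, 2, 3):
    proc = lagged_residual_process(y, X, k)
    print(f"lag {k}: max |partial sum| = {np.abs(proc).max():.2f}")
```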

17.
It is well-known that Ordinary Least Squares (OLS) yields inconsistent estimates if applied to a regression equation with lagged dependent variables and correlated errors. Bias expressions which appear in the literature usually assume the exogenous variables to be non-stochastic. Due to this assumption the numerical sizes of these expressions cannot be determined. Further, the analysis is mostly restricted to very simple models. In this paper the problem of calculating the asymptotic bias of OLS is generalized to stationary dynamic regression models, where the errors follow a stationary ARMA process. A general bias expression is derived and a method is introduced by which its actual size can be computed numerically.
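The practical consequence is easy to see by brute force. For the special case y_t = α y_{t−1} + u_t with AR(1) errors u_t = φ u_{t−1} + e_t, the OLS estimate of α converges to (α + φ)/(1 + αφ) rather than to α, and a long simulated series makes that visible; the parameter values below are arbitrary, and the general ARMA-error case with exogenous regressors is what the paper treats analytically.

```python
import numpy as np

rng = np.random.default_rng(8)

def ols_slope(y):
    """OLS slope of y_t on y_{t-1} (no intercept)."""
    return (y[:-1] @ y[1:]) / (y[:-1] @ y[:-1])

alpha, phi, T = 0.5, 0.5, 100_000
e = rng.normal(size=T)
u = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    u[t] = phi * u[t - 1] + e[t]
    y[t] = alpha * y[t - 1] + u[t]

# probability limit of OLS in this special case: (alpha + phi) / (1 + alpha * phi) = 0.8
print("true alpha:", alpha, " OLS estimate on a very long series:", round(ols_slope(y), 3))
```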

18.
We develop an efficient way to select the best subset autoregressive model with exogenous variables and generalized autoregressive conditional heteroscedasticity errors. One main feature of our method is that it selects important autoregressive and exogenous variables and, at the same time, estimates the unknown parameters. The proposed method uses the stochastic search idea. By adopting Markov chain Monte Carlo techniques, we can identify the best subset model from a large number of possible choices. A simulation experiment shows that the method is very effective. Misspecification in the mean equation can also be detected by our model selection method. In the application to the stock-market data of seven countries, the one-period lagged US return is found to have a strong influence on the other stock-market returns.

19.
Asymmetric unit root testing has become an important research area in time series analysis. When the random disturbances exhibit general serial correlation, the lag order used in the asymmetric unit root test equation has a crucial impact on the power of the test statistic. We therefore use the residual block bootstrap (RBB) to improve the asymmetric unit root EG test and investigate the applicability of the RBB method by simulation. The results show that the RBB method not only reduces the size distortion of the test to some extent, but also greatly improves the power of the EG test.
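A generic residual block bootstrap of the kind described above, assuming a much-simplified (symmetric) Dickey-Fuller-type test regression and an arbitrary block length: fit the test regression, cut the residuals into overlapping blocks, paste resampled blocks together to rebuild unit-root pseudo-series, and use the bootstrap distribution of the t-statistic in place of asymptotic critical values. The asymmetric EG test equation itself is not reproduced here, and the data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(9)

def unit_root_tstat(y):
    """t-statistic of rho in the test regression  dy_t = rho * y_{t-1} + e_t."""
    dy, ylag = np.diff(y), y[:-1]
    rho = (ylag @ dy) / (ylag @ ylag)
    e = dy - rho * ylag
    se = np.sqrt((e @ e) / (len(e) - 1) / (ylag @ ylag))
    return rho / se, e

def block_resample(e, block_len, size):
    """Rebuild a residual series from randomly chosen overlapping blocks."""
    starts = rng.integers(0, len(e) - block_len + 1, size // block_len + 1)
    return np.concatenate([e[s:s + block_len] for s in starts])[:size]

inc = np.zeros(300)                                  # hypothetical series with
eps = rng.normal(size=300)                           # serially correlated increments
for t in range(1, 300):
    inc[t] = 0.4 * inc[t - 1] + eps[t]
y = np.cumsum(inc)

t_obs, resid = unit_root_tstat(y)
boot = []
for _ in range(499):
    e_star = block_resample(resid - resid.mean(), block_len=10, size=len(resid))
    y_star = np.concatenate([[y[0]], y[0] + np.cumsum(e_star)])   # impose the unit-root null
    boot.append(unit_root_tstat(y_star)[0])

crit = np.quantile(boot, 0.05)
print("observed t:", round(t_obs, 2), " bootstrap 5% critical value:", round(crit, 2))
print("reject unit root" if t_obs < crit else "do not reject unit root")
```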

20.
Standard serial correlation tests are derived assuming that the disturbances are homoscedastic, but this study shows that asymptotic critical values are not accurate when this assumption is violated. Asymptotic critical values for the ARCH(2)-corrected LM, BP and BL tests are valid only when the underlying ARCH process is strictly stationary, whereas Wooldridge's robust LM test has good properties overall. These tests exhibit similar behaviour even when the underlying process is GARCH(1,1). When the regressors include lagged dependent variables, the rejection frequencies under both the null and alternative hypotheses depend on the coefficients of the lagged dependent variables and the other model parameters. They appear to be robust across various disturbance distributions under the null hypothesis.
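For reference, the uncorrected LM (Breusch-Godfrey) statistic that these comparisons start from is simple to compute: regress the OLS residuals on the original regressors plus p lagged residuals and use n·R². The ARCH corrections and Wooldridge's heteroscedasticity-robust version discussed in the abstract are not reproduced, and the data below are hypothetical.

```python
import numpy as np

def bg_lm_stat(y, X, p=1):
    """Breusch-Godfrey LM statistic: n * R^2 from the auxiliary regression of the OLS
    residuals on X and p lagged residuals (pre-sample lags set to zero)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ beta
    lags = np.column_stack([np.concatenate([np.zeros(k), e[:-k]]) for k in range(1, p + 1)])
    Z = np.column_stack([X, lags])
    gamma, *_ = np.linalg.lstsq(Z, e, rcond=None)
    v = e - Z @ gamma
    r2 = 1.0 - (v @ v) / (e @ e)      # e has mean zero because X contains a constant
    return len(y) * r2                 # approximately chi-square(p) under the null

rng = np.random.default_rng(10)
n = 300
X = np.column_stack([np.ones(n), rng.normal(size=n)])
u = np.zeros(n)
for t in range(1, n):
    u[t] = 0.4 * u[t - 1] + rng.normal()          # serially correlated disturbances
y = X @ np.array([1.0, 0.5]) + u

print("LM statistic (p=1):", round(bg_lm_stat(y, X, p=1), 2), " chi2(1) 5% critical value: 3.84")
```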
