Similar Articles
20 similar articles found (search time: 15 ms)
1.
It is well known that the results of Johansen's multiple cointegration tests, and of Johansen and Juselius' tests for restrictions on cointegrating vectors and their weights, have far-reaching implications for economic modelling and analysis. It is therefore important to ensure that the tests have desirable finite-sample properties. Although the statistics are derived under a Gaussian distribution, the asymptotic results hold under a much wider class of distributions. Using simulation, this paper investigates the effect of non-normal disturbances on these tests in finite samples. Further, ARCH/GARCH-type conditional heteroskedasticity is present in many economic and financial time series. This paper examines the finite-sample properties of the tests when the error term follows ARCH/GARCH-type processes. From the evidence, it appears that researchers should not be overly concerned by the possibility of small departures from normality when using Johansen's suggested techniques, even in finite samples. ARCH and GARCH effects may be more problematic, however. In particular, it becomes more important to test whether the restriction implicit in the integrated (or near-integrated) ARCH-type process actually holds in the time series before applying the cointegration rank tests and the test for restrictions on cointegrating weights. The tests for restrictions on cointegrating vectors appear to be robust to non-normal errors and to all ARCH- and GARCH-type processes considered.
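For readers wanting a feel for such a simulation design, a minimal sketch of a GARCH(1,1) error generator of the kind used in these Monte Carlo studies follows; the GARCH(1,1) form and all parameter values are illustrative assumptions, not the paper's exact specification:

```python
import numpy as np

def simulate_garch11(n, omega=0.1, alpha=0.2, beta=0.7, burn=200, seed=0):
    """Simulate GARCH(1,1) innovations e_t = sqrt(h_t) * z_t with
    h_t = omega + alpha * e_{t-1}^2 + beta * h_{t-1}, z_t ~ N(0, 1)."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n + burn)
    h = np.empty(n + burn)
    e = np.empty(n + burn)
    h[0] = omega / (1.0 - alpha - beta)      # start at the unconditional variance
    e[0] = np.sqrt(h[0]) * z[0]
    for t in range(1, n + burn):
        h[t] = omega + alpha * e[t - 1] ** 2 + beta * h[t - 1]
        e[t] = np.sqrt(h[t]) * z[t]
    return e[burn:]                          # discard burn-in

errors = simulate_garch11(1000)
```

These errors would then drive the VAR used to compute the Johansen statistics, with rejection frequencies compared against the Gaussian case.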

2.
We obtain semiparametric efficiency bounds for estimation of a location parameter in a time series model where the innovations are stationary and ergodic conditionally symmetric martingale differences but otherwise possess general dependence and distributions of unknown form. We then describe an iterative estimator that achieves this bound when the conditional density functions of the sample are known. Finally, we develop a “semi-adaptive” estimator that achieves the bound when these densities are unknown to the investigator. This estimator employs nonparametric kernel estimates of the densities. Monte Carlo results are reported.

3.
《Econometric Reviews》2013,32(3):229-257
We obtain semiparametric efficiency bounds for estimation of a location parameter in a time series model where the innovations are stationary and ergodic conditionally symmetric martingale differences but otherwise possess general dependence and distributions of unknown form. We then describe an iterative estimator that achieves this bound when the conditional density functions of the sample are known. Finally, we develop a “semi-adaptive” estimator that achieves the bound when these densities are unknown to the investigator. This estimator employs nonparametric kernel estimates of the densities. Monte Carlo results are reported.

4.
This paper proposes a wavelet (spectral) approach to estimate the parameters of a linear regression model where the regressand and the regressors are persistent processes and contain a measurement error. We propose a wavelet filtering approach which does not require instruments and yields unbiased estimates for the intercept and the slope parameters. Our Monte Carlo results also show that the wavelet approach is particularly effective when measurement errors for the regressand and the regressor are serially correlated. With this paper, we hope to bring a fresh perspective and stimulate further theoretical research in this area.
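The flavour of the approach can be illustrated with a toy errors-in-variables experiment: a low-pass wavelet filter concentrates the persistent signal while spreading the measurement noise, reducing attenuation bias. The repeated Haar smooth, the simulated data, and all parameter values below are assumptions for illustration, not the authors' estimator:

```python
import numpy as np

def haar_smooth(x):
    """One level of Haar scaling coefficients (pairwise low-pass filter)."""
    n = len(x) - len(x) % 2
    return (x[0:n:2] + x[1:n:2]) / np.sqrt(2.0)

def wavelet_ols_slope(y, x, levels=3):
    """OLS slope after low-pass filtering both series `levels` times."""
    for _ in range(levels):
        y, x = haar_smooth(y), haar_smooth(x)
    xc = x - x.mean()
    return float(xc @ (y - y.mean()) / (xc @ xc))

rng = np.random.default_rng(1)
n = 4096
signal = np.cumsum(rng.standard_normal(n)) / 10.0      # persistent "true" regressor
x_obs = signal + 3.0 * rng.standard_normal(n)          # large measurement error
y_obs = 2.0 * signal + rng.standard_normal(n)          # true slope is 2
naive = wavelet_ols_slope(y_obs, x_obs, levels=0)      # plain OLS: attenuated
filtered = wavelet_ols_slope(y_obs, x_obs, levels=3)   # filtered: less biased
```

Each smoothing level roughly doubles the signal-to-noise ratio here, since adjacent values of the persistent signal are nearly equal while the i.i.d. noise does not accumulate.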

5.
The problem of whether seasonal decomposition should be used prior to or after aggregation of time series is quite old. We tackle the problem by using a state-space representation and the variance/covariance structure of a simplified one-component model. The variances of the estimated components in a two-series system are compared for direct and indirect approaches and also to a multivariate method. The covariance structure between the two time series is important for the relative efficiency. Indirect estimation is always best when the series are independent, but when the series or the measurement errors are negatively correlated, direct estimation may be much better in the above sense. Some covariance structures indicate that direct estimation should be used while others indicate that an indirect approach is more efficient. Signal-to-noise ratios and relative variances are used for inference.

6.
Standard methods for inference in cointegrating systems require all the variables to have exact unit roots and are not at all robust even to slight violations of this condition. In this article, I consider an alternative approach to inference in a cointegrating system. This involves testing the hypothesis that a cointegrating vector takes on a specified value by testing for the stationarity of the associated residual. Confidence sets for the cointegrating vector can be constructed by exploiting the equivalence between tests and confidence sets. This method has the advantage that it remains valid even if the regressors have roots that are not exactly equal to unity.
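A rough sketch of the test-inversion idea follows, using a KPSS-type statistic as a stand-in for whatever residual stationarity test the article actually employs; the statistic, critical value, grid, and simulated data are all illustrative assumptions:

```python
import numpy as np

def kpss_stat(u):
    """Level-stationarity KPSS-type statistic (no lag correction; toy version)."""
    u = u - u.mean()
    s = np.cumsum(u)
    return float(np.sum(s ** 2) / (len(u) ** 2 * np.var(u)))

def confidence_set(y, x, grid, crit=0.739):
    """Keep every candidate beta whose residual y - beta*x passes the
    stationarity check (0.739 is the 1% KPSS critical value)."""
    return [b for b in grid if kpss_stat(y - b * x) < crit]

rng = np.random.default_rng(2)
n = 500
x = np.cumsum(rng.standard_normal(n))     # unit-root regressor
y = 1.5 * x + rng.standard_normal(n)      # cointegrated, true beta = 1.5
cs = confidence_set(y, x, np.arange(0.0, 3.01, 0.05))
```

Betas far from 1.5 leave a random-walk component in the residual and are excluded; the retained grid points form the confidence set.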

7.
This article studies dynamic panel data models in which the long run outcome for a particular cross-section is affected by a weighted average of the outcomes in the other cross-sections. We show that imposing such a structure implies a model with several cointegrating relationships that, unlike in the standard case, are nonlinear in the coefficients to be estimated. Assuming that the weights are exogenously given, we extend the dynamic ordinary least squares methodology and provide a dynamic two-stage least squares estimator. We derive the large sample properties of our proposed estimator under a set of low-level assumptions. Then our methodology is applied to US financial market data, which consist of credit default swap spreads, as well as firm-specific and industry data. We construct the economic space using a “closeness” measure for firms based on input–output matrices. Our estimates show that this particular form of spatial correlation of credit default swap spreads is substantial and highly significant.

8.
Bootstrapping time series models   (Cited by: 1; self-citations: 0; citations by others: 1)
This paper surveys recent developments in bootstrap methods and the modifications needed for their applicability in time series models. The paper discusses some guidelines for empirical researchers in econometric analysis of time series. Different sampling schemes for bootstrap data generation and different forms of bootstrap test statistics are discussed. The paper also discusses the applicability of direct bootstrapping of data in dynamic models and cointegrating regression models. It is argued that bootstrapping residuals is the preferable approach. The bootstrap procedures covered include the recursive bootstrap, the moving block bootstrap and the stationary bootstrap.
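Of the procedures mentioned, the moving block bootstrap is the easiest to sketch: resampling overlapping blocks preserves short-range dependence within each block. The block length, simulated data, and the use of the resamples to estimate the standard error of the mean are illustrative choices, not prescriptions from the survey:

```python
import numpy as np

def moving_block_bootstrap(x, block_len, rng):
    """One moving-block bootstrap resample: glue together randomly chosen
    overlapping blocks of length block_len, then trim to the original length."""
    n = len(x)
    n_blocks = -(-n // block_len)                       # ceil(n / block_len)
    starts = rng.integers(0, n - block_len + 1, size=n_blocks)
    return np.concatenate([x[s:s + block_len] for s in starts])[:n]

rng = np.random.default_rng(0)
x = 0.1 * np.cumsum(rng.standard_normal(200)) + rng.standard_normal(200)
boot_means = np.array([moving_block_bootstrap(x, 10, rng).mean()
                       for _ in range(500)])
se_hat = boot_means.std(ddof=1)     # bootstrap s.e. of the sample mean
```

For residual-based schemes such as the recursive bootstrap, one would instead resample estimated model residuals and rebuild the series from the fitted dynamics.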

9.
Efficient sequential estimation of the intensity rates of a continuous-time finite Markov process is discussed. An information inequality which gives a lower bound for the variance of an unbiased estimator of a function of the intensity rates is obtained and it is used to define an efficient estimator. All closed efficient sequential sampling schemes are characterized.

10.
This paper surveys recent developments in bootstrap methods and the modifications needed for their applicability in time series models. The paper discusses some guidelines for empirical researchers in econometric analysis of time series. Different sampling schemes for bootstrap data generation and different forms of bootstrap test statistics are discussed. The paper also discusses the applicability of direct bootstrapping of data in dynamic models and cointegrating regression models. It is argued that bootstrapping residuals is the preferable approach. The bootstrap procedures covered include the recursive bootstrap, the moving block bootstrap and the stationary bootstrap.

11.
Since structural changes in a possibly transformed financial time series may contain important information for investors and analysts, we consider the following problem of sequential econometrics. For a given time series we aim at detecting the first change-point where a jump of size a occurs, i.e., the mean changes from, say, m_0 to m_0 + a and returns to m_0 after a possibly short period s. To address this problem, we study a Shewhart-type control chart based on a sequential version of the sigma filter, which extends kernel smoothers by employing stochastic weights depending on the process history to detect jumps in the data more accurately than classical approaches. We study both theoretical properties and performance issues. Concerning the statistical properties, it is important to know whether the normed delay time of the considered control chart is bounded, at least asymptotically. Extending known results for linear statistics employing deterministic weighting schemes, we establish an upper bound which holds if the memory of the chart tends to infinity. The performance of the proposed control charts is studied by simulations. We confine ourselves to some special models which try to mimic important features of real time series. Our empirical results provide some evidence that jump-preserving weights are preferable under certain circumstances.

12.
We consider cross-sectional aggregation of time series with long-range dependence. This question arises for instance from the statistical analysis of networks where aggregation is defined via routing matrices. Asymptotically, aggregation turns out to increase dependence substantially, transforming a hyperbolic decay of autocorrelations to a slowly varying rate. This effect has direct consequences for statistical inference. For instance, unusually slow rates of convergence for nonparametric trend estimators and nonstandard formulas for optimal bandwidths are obtained. The situation changes, when time-dependent aggregation is applied. Suitably chosen time-dependent aggregation schemes can preserve a hyperbolic rate or even eliminate autocorrelations completely.

13.
An effective and efficient search algorithm has been developed to select from an I(1) system zero-non-zero patterned cointegrating and loading vectors in a subset VECM, B_q(1)y(t-1) + B_{q-1}(L)Δy(t) = ε(t), where the long-run impact matrix B_q(1) contains zero entries. The algorithm can be applied to higher-order integrated systems. The Finnish money-output model presented by Johansen and Juselius (1990) and the United States balanced growth model presented by King, Plosser, Stock and Watson (1991) are used to demonstrate the usefulness of this algorithm in examining the cointegrating relationships in vector time series.

14.
The aim of this paper is to study the concept of separability in multiple nonstationary time series displaying both common stochastic trends and common stochastic cycles. When modeling the dynamics of multiple time series for a panel of several entities such as countries, sectors, or firms, imposing some form of separability and commonalities is often required to restrict the dimension of the parameter space. For this purpose we introduce the concept of common feature separation and investigate the relationships between separation in cointegration and separation in serial correlation common features. Loosely speaking, we investigate whether a set of time series can be partitioned into subsets such that there are serial correlation common features within the sub-groups only. The paper investigates three issues. First, it provides conditions for separating joint cointegrating vectors into marginal cointegrating vectors as well as separating joint short-term dynamics into marginal short-term dynamics. Second, conditions for making permanent-transitory decompositions based on marginal systems are given. Third, issues of weak exogeneity are considered. Likelihood ratio type tests for the different hypotheses under study are proposed. An empirical analysis of the link between economic fluctuations in the United States and Canada shows the practical relevance of the approach proposed in this paper.

15.
We address the task of choosing prior weights for models that are to be used for weighted model averaging. Models that are very similar should usually be given smaller weights than models that are quite distinct. Otherwise, the importance of a model in the weighted average could be increased by augmenting the set of models with duplicates of the model or virtual duplicates of it. Similarly, the importance of a particular model feature (a certain covariate, say) could be exaggerated by including many models with that feature. Ways of forming a correlation matrix that reflects the similarity between models are suggested. Then, weighting schemes are proposed that assign prior weights to models on the basis of this matrix. The weighting schemes give smaller weights to models that are more highly correlated. Other desirable properties of a weighting scheme are identified, and we examine the extent to which these properties are held by the proposed methods. The weighting schemes are applied to real data, and prior weights, posterior weights and Bayesian model averages are determined. For these data, empirical Bayes methods were used to form the correlation matrices that yield the prior weights. Predictive variances are examined, as empirical Bayes methods can result in unrealistically small variances.
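One simple weighting scheme in the spirit of the abstract (not necessarily the authors' proposal) down-weights each model in proportion to its total correlation with the whole set, so near-duplicates split what would otherwise be one model's weight:

```python
import numpy as np

def prior_weights(corr):
    """Assign each model a prior weight inversely proportional to its total
    (absolute) correlation with all models, then normalize to sum to one."""
    raw = 1.0 / np.abs(corr).sum(axis=1)
    return raw / raw.sum()

# Models A and B are near-duplicates; model C is distinct.
corr = np.array([[1.00, 0.95, 0.10],
                 [0.95, 1.00, 0.10],
                 [0.10, 0.10, 1.00]])
w = prior_weights(corr)      # C receives the largest prior weight
```

Adding another duplicate of A would raise the row sums of A and its twins, shrinking their individual weights rather than inflating the cluster's total influence.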

16.
An effective and efficient search algorithm has been developed to select from an I(1) system zero-non-zero patterned cointegrating and loading vectors in a subset VECM, B_q(1)y(t-1) + B_{q-1}(L)Δy(t) = ε(t), where the long-run impact matrix B_q(1) contains zero entries. The algorithm can be applied to higher-order integrated systems. The Finnish money-output model presented by Johansen and Juselius (1990) and the United States balanced growth model presented by King, Plosser, Stock and Watson (1991) are used to demonstrate the usefulness of this algorithm in examining the cointegrating relationships in vector time series.

17.
The problem of the estimation of the linear combination of weights, c′w, in a singular spring balance weighing design when the error structure takes the form E(ee′) = σ²G has been studied. A lower bound for the variance of the estimated linear combination of weights is obtained and a necessary and sufficient condition for this lower bound to be attained is given. The general results are applied to the case of the total of the weights. For a specified form for G, some optimum spring balance weighing designs for the estimated total weight are found.

18.
This paper surveys various shrinkage, smoothing and selection priors from a unifying perspective and shows how to combine them for Bayesian regularisation in the general class of structured additive regression models. As a common feature, all regularisation priors are conditionally Gaussian, given further parameters regularising model complexity. Hyperpriors for these parameters encourage shrinkage, smoothness or selection. It is shown that these regularisation (log-) priors can be interpreted as Bayesian analogues of several well-known frequentist penalty terms. Inference can be carried out with unified and computationally efficient MCMC schemes, estimating regularised regression coefficients and basis function coefficients simultaneously with complexity parameters and measuring uncertainty via corresponding marginal posteriors. For variable and function selection we discuss several variants of spike and slab priors which can also be cast into the framework of conditionally Gaussian priors. The performance of the Bayesian regularisation approaches is demonstrated in a hazard regression model and a high-dimensional geoadditive regression model.
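The stated correspondence between conditionally Gaussian priors and frequentist penalties is easy to verify in the simplest case: an i.i.d. Gaussian prior on the coefficients makes the posterior mode a ridge estimate with penalty λ = σ²/τ². The data and hyperparameter values below are illustrative:

```python
import numpy as np

def ridge_map(X, y, sigma2, tau2):
    """Posterior mode under y ~ N(Xb, sigma2*I) and b ~ N(0, tau2*I):
    algebraically identical to ridge regression with lambda = sigma2/tau2."""
    lam = sigma2 / tau2
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

rng = np.random.default_rng(4)
X = rng.standard_normal((100, 5))
beta = np.array([2.0, 0.0, -1.0, 0.0, 0.5])
y = X @ beta + rng.standard_normal(100)
b_map = ridge_map(X, y, sigma2=1.0, tau2=10.0)      # shrunk towards zero
b_ols = np.linalg.solve(X.T @ X, X.T @ y)           # no shrinkage
```

As the prior variance τ² grows, the penalty vanishes and the posterior mode approaches the OLS estimate, which is the frequentist limit of the Bayesian analogy.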

19.
This paper investigates the lag length selection problem of a vector error correction model by using a convergent information criterion and tools based on the Box–Pierce methodology recently proposed in the literature. The performances of these approaches for selecting the optimal lag length are compared via Monte Carlo experiments. The effects of misspecified deterministic trend or cointegrating rank on the lag length selection are studied. Noting that processes often exhibit nonlinearities, the cases of iid and conditionally heteroscedastic errors will be considered. Strategies that can avoid misleading situations are proposed.

20.
Consider a life testing experiment in which n units are put on test, successive failure times are recorded, and the observation is terminated either at a specified number r of failures or at a specified time T, whichever is reached first. This mixture of Type I and Type II censoring schemes, called hybrid censoring, is of wide use. Under this censoring scheme and the assumption of an exponential life distribution, the distribution of the maximum likelihood estimator of the mean life θ is derived. It is then used to construct an exact lower confidence bound for θ.
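Under the exponential assumption the MLE itself has a simple closed form: total time on test divided by the number of observed failures, where unfailed units contribute their censored exposure. A sketch follows (the simulated data and design parameters are illustrative; the paper's contribution is the exact distribution of this estimator, which is not reproduced here):

```python
import numpy as np

def hybrid_censored_mle(lifetimes, r, T):
    """MLE of the exponential mean when observation stops at the r-th failure
    or at time T, whichever comes first; requires at least one failure."""
    x = np.sort(np.asarray(lifetimes))
    n = len(x)
    stop = min(x[r - 1], T)              # actual stopping time
    observed = x[x <= stop]              # failures seen before stopping
    d = len(observed)
    if d == 0:
        raise ValueError("no failures before the stopping time")
    # total time on test: failures plus (n - d) units censored at `stop`
    return (observed.sum() + (n - d) * stop) / d

rng = np.random.default_rng(6)
data = rng.exponential(scale=10.0, size=50)     # true mean theta = 10
theta_hat = hybrid_censored_mle(data, r=30, T=8.0)
```

The exact lower confidence bound in the paper is then obtained by inverting the derived distribution of this statistic.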

