Similar Documents
10 similar documents found.
1.
This paper provides a theoretical overview of Wald tests for Granger causality in levels vector autoregressions (VARs) and Johansen-type error correction models (ECMs). The theory is based on results in Toda and Phillips (1991a) and allows for stochastic and deterministic trends as well as arbitrary degrees of cointegration. We recommend some operational procedures for conducting Granger causality tests that are based on Gaussian maximum likelihood estimation of ECMs. These procedures are applicable in the important practical case of testing the causal effects of one variable on another group of variables and vice versa. This paper also investigates the sampling properties of these testing procedures through simulation exercises. Three sequential causality tests in ECMs are compared with conventional causality tests in levels and differences VARs.
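The conventional levels-VAR causality test that these sequential procedures are benchmarked against can be sketched as a restricted-vs-unrestricted F test. The bivariate data-generating process, lag order, and coefficients below are hypothetical, chosen only to illustrate the mechanics; this is not the Toda-Phillips ECM procedure itself.

```python
import numpy as np
from scipy import stats

def granger_ftest(y, x, p):
    """F test of H0: lags 1..p of x do not help predict y (levels regression)."""
    n = len(y)
    Y = y[p:]
    lags_y = np.column_stack([y[p - i:n - i] for i in range(1, p + 1)])
    lags_x = np.column_stack([x[p - i:n - i] for i in range(1, p + 1)])
    X_full = np.column_stack([np.ones(n - p), lags_y, lags_x])
    X_rest = np.column_stack([np.ones(n - p), lags_y])   # x lags excluded under H0
    rss_full = np.sum((Y - X_full @ np.linalg.lstsq(X_full, Y, rcond=None)[0]) ** 2)
    rss_rest = np.sum((Y - X_rest @ np.linalg.lstsq(X_rest, Y, rcond=None)[0]) ** 2)
    df2 = n - p - X_full.shape[1]
    F = (rss_rest - rss_full) / p / (rss_full / df2)
    return F, stats.f.sf(F, p, df2)

# Hypothetical DGP in which x Granger-causes y
rng = np.random.default_rng(0)
T = 500
x = np.empty(T); y = np.empty(T)
x[0] = y[0] = 0.0
for t in range(1, T):
    x[t] = 0.5 * x[t - 1] + rng.normal()
    y[t] = 0.4 * y[t - 1] + 0.6 * x[t - 1] + rng.normal()

F, pval = granger_ftest(y, x, p=2)   # should strongly reject H0 here
```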

2.
Over the thirty-one years of development since the start of reform and opening-up, financial development in Anhui has promoted economic growth to a certain extent. The empirical analysis shows that the expansion of financial scale and the development of the stock market have promoted economic growth to differing degrees. Although the contribution of stock market development to growth only overtook that of financial-scale expansion after a lag of seven periods, its contribution has risen steadily. This indicates that while the expansion of financial scale still contributes more to growth, the stock market, established only a little over a decade ago, has great potential. The efficiency of financial development, by contrast, has contributed little to growth and in some periods has even had a negative effect, and the development of the insurance market has shown no significant growth-promoting effect. Based on these empirical results, for Anhui's financial development to promote economic growth faster and more comprehensively, stock market development should be given higher priority than traditional financial intermediation; corporate financing from the stock market should continue to be expanded; a diversified financial system should be developed vigorously to raise the efficiency of financial development; and reform of insurance institutions should be accelerated to improve the operating capacity of the insurance industry.

3.
Previous literature has shown that adding an untested surplus lag to a Granger causality test yields results that are highly robust to stationary, nonstationary, long-memory, and structural-break processes in the forcing variables. This study extends the approach to the partial unit root framework by simulation. The results show good size and power; the surplus-lag approach is therefore also robust to partial unit root processes.
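The surplus-lag idea (in the spirit of Toda-Yamamoto lag augmentation) can be sketched as follows: include d surplus lags beyond the p lags of interest in the regression, but restrict the causality test to the first p lags of the forcing variable, so the untested surplus lags absorb possible unit-root behaviour. The DGP, lag orders, and coefficients below are hypothetical.

```python
import numpy as np
from scipy import stats

def surplus_lag_test(y, x, p, d=1):
    """Regress y on p + d lags of y and x; F-test only the first p lags of x."""
    k = p + d
    n = len(y)
    Y = y[k:]
    ly = np.column_stack([y[k - i:n - i] for i in range(1, k + 1)])
    lx = np.column_stack([x[k - i:n - i] for i in range(1, k + 1)])
    X_full = np.column_stack([np.ones(n - k), ly, lx])
    # restricted model drops only lags 1..p of x; the d surplus lags stay untested
    X_rest = np.column_stack([np.ones(n - k), ly, lx[:, p:]])
    def rss(X):
        return np.sum((Y - X @ np.linalg.lstsq(X, Y, rcond=None)[0]) ** 2)
    rss_f, rss_r = rss(X_full), rss(X_rest)
    df2 = n - k - X_full.shape[1]
    F = (rss_r - rss_f) / p / (rss_f / df2)
    return F, stats.f.sf(F, p, df2)

# Hypothetical DGP: the forcing variable x is a unit-root process, yet x causes y
rng = np.random.default_rng(1)
T = 600
x = np.cumsum(rng.normal(size=T))        # I(1) forcing variable
y = np.empty(T); y[0] = 0.0
for t in range(1, T):
    y[t] = 0.3 * y[t - 1] + 0.5 * x[t - 1] + rng.normal()

F, pval = surplus_lag_test(y, x, p=1, d=1)
```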

4.
Lu Xuefa, Shen Huifang. 《统计教育》 (Statistics Education), 2008, (9): 11-13, 51
Based on cointegration theory and an error correction model, this paper uses annual data for 1978-2007 to conduct an empirical analysis of the cointegration relationship between urban residents' income and consumption, taking Hangzhou as a case study. The results show a one-way, long-run, stable causal relationship between the per capita consumption and income of urban households in Hangzhou: changes in per capita disposable income cause changes in per capita consumption, while changes in per capita consumption do not cause changes in per capita disposable income. In the long run, therefore, stimulating urban residents' consumption to drive economic growth requires raising their disposable income.
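The kind of two-step cointegration/ECM analysis described above can be sketched in Engle-Granger style: a long-run levels regression, then an error-correction regression in first differences whose adjustment coefficient should be negative. The data below are simulated (not the Hangzhou series), and all names and parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 300
income = np.cumsum(rng.normal(0.5, 1.0, T))      # trending I(1) income series
cons = 0.8 * income + rng.normal(0.0, 1.0, T)    # consumption cointegrated with income

# Step 1: long-run (cointegrating) regression of consumption on income
X = np.column_stack([np.ones(T), income])
beta = np.linalg.lstsq(X, cons, rcond=None)[0]
ecm_resid = cons - X @ beta                      # deviation from the long-run path

# Step 2: error-correction model in first differences
dc, dy = np.diff(cons), np.diff(income)
Z = np.column_stack([np.ones(T - 1), dy, ecm_resid[:-1]])
gamma = np.linalg.lstsq(Z, dc, rcond=None)[0]
adj_speed = gamma[2]   # negative: past deviations from equilibrium are corrected
```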

5.
In this paper, we use simulated data to investigate the power of different causality tests in a two-dimensional vector autoregressive (VAR) model. The data are generated in a nonlinear environment that is modelled using a logistic smooth transition autoregressive function. We use both linear and nonlinear causality tests to investigate the unidirectional causal relationship and compare the power of these tests. The linear test is the commonly used Granger causality F test. The nonlinear test is a non-parametric test based on Baek and Brock [A general test for non-linear Granger causality: Bivariate model. Tech. Rep., Iowa State University and University of Wisconsin, Madison, WI, 1992] and Hiemstra and Jones [Testing for linear and non-linear Granger causality in the stock price–volume relation, J. Finance 49(5) (1994), pp. 1639–1664]. When implementing the nonlinear test, we use separately the original data, the linear VAR filtered residuals, and the wavelet decomposed series based on wavelet multiresolution analysis. The VAR filtered residuals and the wavelet decomposed series are used to extract the nonlinear structure of the original data. The simulation results show that the non-parametric test based on the wavelet decomposed series (a model-free approach) has the highest power to detect the causality relationship in nonlinear models.
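A logistic smooth transition (LSTAR-type) data-generating process of the kind used in such power studies can be sketched as below: the autoregressive regime of y switches smoothly, via a logistic weight, as a function of the lagged forcing variable. All parameter values are hypothetical and chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
T = 400
gamma_, c = 5.0, 0.0          # transition smoothness and location (hypothetical)
x = np.empty(T); y = np.empty(T)
x[0] = y[0] = 0.0
for t in range(1, T):
    x[t] = 0.5 * x[t - 1] + rng.normal()
    # logistic transition weight in (0, 1), driven by the lagged causal variable
    G = 1.0 / (1.0 + np.exp(-gamma_ * (x[t - 1] - c)))
    # regime 1 when G ~ 0, regime 2 when G ~ 1; x affects y only nonlinearly
    y[t] = (1 - G) * (0.8 * y[t - 1]) \
         + G * (-0.4 * y[t - 1] + 0.9 * x[t - 1]) + rng.normal()
```

A linear Granger F test applied to such data can miss the regime-dependent dependence, which is the motivation for the non-parametric tests compared in the paper.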

6.
Fitting cross-classified multilevel models with binary response is challenging. In this setting a promising method is Bayesian inference through Integrated Nested Laplace Approximations (INLA), which performs well in several latent variable models. We devise a systematic simulation study to assess the performance of INLA with cross-classified binary data under different scenarios defined by the magnitude of the variances of the random effects, the number of observations, the number of clusters, and the degree of cross-classification. In the simulations INLA is systematically compared with the popular method of Maximum Likelihood via Laplace Approximation. By an application to the classical salamander mating data, we compare INLA with the best performing methods. Given the computational speed and the generally good performance, INLA turns out to be a valuable method for fitting logistic cross-classified models.
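INLA itself is provided by the R-INLA package in R; as a language-neutral illustration, the structure of the cross-classified binary data such a simulation study generates (two crossed random effects on the logit scale, as in the salamander design) can be sketched as follows. Cluster counts, variances, and the intercept are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)
n_a, n_b = 20, 15                  # clusters in the two crossed factors (hypothetical)
sd_a, sd_b = 1.0, 0.5              # random-effect standard deviations (hypothetical)
u_a = rng.normal(0, sd_a, n_a)     # random effects for factor A
u_b = rng.normal(0, sd_b, n_b)     # random effects for factor B

# fully crossed design: every level of A is paired with every level of B
A, B = np.meshgrid(np.arange(n_a), np.arange(n_b), indexing="ij")
A, B = A.ravel(), B.ravel()
eta = -0.5 + u_a[A] + u_b[B]       # linear predictor with fixed intercept -0.5
prob = 1.0 / (1.0 + np.exp(-eta))  # logit link
ybin = rng.binomial(1, prob)       # cross-classified binary response
```

Each observation belongs simultaneously to one A cluster and one B cluster, so the grouping is crossed rather than nested, which is what makes likelihood-based fitting hard.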

7.

Ordinal data are often modeled using a continuous latent response distribution, which is partially observed through windows of adjacent intervals defined by cutpoints. In this paper we propose the beta distribution as a model for the latent response. The beta distribution has several advantages over the other commonly used distributions, e.g., normal and logistic. In particular, it enables separate modeling of location and dispersion effects, which is essential in the Taguchi method of robust design. First, we study the problem of estimating the location and dispersion parameters of a single beta distribution (representing a single treatment) from ordinal data assuming known equispaced cutpoints. Two methods of estimation are compared: the maximum likelihood method and the method of moments. Two methods of treating the data are considered: in raw discrete form and in smoothed continuousized form. A large scale simulation study is carried out to compare the different methods. The mean square errors of the estimates are obtained under a variety of parameter configurations. Comparisons are made based on the ratios of the mean square errors (called the relative efficiencies). No method is universally the best, but the maximum likelihood method using continuousized data is found to perform generally well, especially for estimating the dispersion parameter. This method is also computationally much faster than the other methods and does not experience convergence difficulties in case of sparse or empty cells. Next, the problem of estimating unknown cutpoints is addressed. Here the multiple treatments setup is considered, since in an actual application cutpoints are common to all treatments and must be estimated from all the data. A two-step iterative algorithm is proposed for estimating the location and dispersion parameters of the treatments, and the cutpoints.
The proposed beta model and McCullagh's (1980) proportional odds model are compared by fitting them to two real data sets.
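The method of moments for a beta distribution is simple when the latent continuous responses are observed directly; a minimal sketch of that baseline case (ignoring the cutpoint and continuousization machinery of the paper, with hypothetical parameters) follows.

```python
import numpy as np

def beta_mom(sample):
    """Method-of-moments estimates (alpha, beta) from a sample on (0, 1).

    Matches the sample mean m and variance v to the beta moments:
    alpha = m * (m(1-m)/v - 1),  beta = (1-m) * (m(1-m)/v - 1).
    """
    m, v = sample.mean(), sample.var()
    common = m * (1 - m) / v - 1.0   # v < m(1-m) holds for any beta sample
    return m * common, (1 - m) * common

rng = np.random.default_rng(5)
latent = rng.beta(2.0, 5.0, size=20000)   # hypothetical latent responses
a_hat, b_hat = beta_mom(latent)           # should be near (2, 5)
```

With ordinal data, the same moments would instead have to be approximated from the interval counts, which is where the raw-discrete versus continuousized treatments in the paper differ.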

8.
This article considers the problem of statistical classification involving multivariate normal populations and compares the performance of the linear discriminant function (LDF) and the Euclidean distance function (EDF). Although the LDF is quite popular and robust, it has been established (Marco, Young and Turner, 1989) that under certain non-trivial conditions the EDF is "equivalent" to the LDF in terms of equal probabilities of misclassification (error rates). It follows that under those conditions the sample EDF could perform better than the sample LDF, since the sample EDF involves estimation of fewer parameters. Simulation results, also from the above paper, seemed to support this hypothesis. This article compares the two sample discriminant functions through asymptotic expansions of error rates, and identifies situations in which the sample EDF should perform better than the sample LDF. Results from simulation experiments are also reported and discussed.
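A minimal simulation in the spirit of the comparison above: under a spherical common covariance (one situation in which the population EDF and LDF error rates coincide), the two sample rules should perform very similarly. Dimensions, means, and sample sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(6)
d, n = 4, 500
mu1 = np.full(d, 1.5)                       # class separation (hypothetical)

# training data: identity covariance, so the EDF loses nothing in population
X0 = rng.normal(size=(n, d))
X1 = rng.normal(size=(n, d)) + mu1
m0, m1 = X0.mean(axis=0), X1.mean(axis=0)

Sinv = np.linalg.inv(np.cov(np.vstack([X0 - m0, X1 - m1]).T))
w_ldf = Sinv @ (m1 - m0)    # sample LDF direction (uses pooled covariance)
w_edf = m1 - m0             # sample EDF direction (no covariance estimated)
mid = 0.5 * (m0 + m1)

def err(w):
    """Average misclassification rate of the rule w'(x - mid) > 0 on fresh data."""
    t0 = rng.normal(size=(2000, d))
    t1 = rng.normal(size=(2000, d)) + mu1
    e0 = np.mean((t0 - mid) @ w > 0)    # class-0 points assigned to class 1
    e1 = np.mean((t1 - mid) @ w <= 0)   # class-1 points assigned to class 0
    return 0.5 * (e0 + e1)

err_ldf, err_edf = err(w_ldf), err(w_edf)
```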

9.
The purpose of this study was to evaluate the effect of residual variability and carryover on average bioequivalence (ABE) studies performed under a 2×2 crossover design. ABE is usually assessed by means of the confidence interval inclusion principle. Here, the interval under consideration was the standard 'shortest' interval, which is the mainstream approach in practice. The evaluation was performed by means of a simulation study under different combinations of carryover and residual variability, in addition to formulation effect and sample size, and was made in terms of percentage of ABE declaration, coverage, and interval precision. As is well known, high levels of variability distort ABE procedures, particularly their type II error control (i.e. high variability makes it difficult to declare bioequivalence when it holds). The effect of carryover is modulated by variability and is especially disturbing for type I error control. In the presence of carryover, the risk of erroneously declaring bioequivalence may become high, especially for low variabilities and large sample sizes. We end with some hints concerning the controversy about pretesting for carryover before performing the ABE analysis.
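The confidence-interval inclusion rule with the standard shortest 90% interval can be sketched on paired log-scale differences from a 2×2 crossover, assuming no carryover: declare ABE when the whole interval lies inside the usual (0.80, 1.25) limits on the ratio scale. Sample size, effect, and variability below are hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n = 24   # subjects in the 2x2 crossover (hypothetical)
# within-subject differences of log-AUC, test minus reference, no carryover assumed
diff = rng.normal(loc=np.log(1.02), scale=0.15, size=n)

mean_d = diff.mean()
se = diff.std(ddof=1) / np.sqrt(n)
t_crit = stats.t.ppf(0.95, df=n - 1)        # 90% two-sided shortest interval
ci = (mean_d - t_crit * se, mean_d + t_crit * se)

# inclusion principle: the whole 90% CI must fall inside the log-scale limits
bioequivalent = np.log(0.80) < ci[0] and ci[1] < np.log(1.25)
```

Unmodeled carryover would bias `mean_d`, which is how it inflates the type I error of this rule in the paper's simulations.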

10.
Relative risks (RRs) are often considered the preferred measure of association in randomized controlled trials, especially when the binary outcome of interest is common. To estimate RRs directly, log-binomial regression has been recommended. Although log-binomial regression is a special case of generalized linear models, it does not respect the natural parameter constraints, and maximum likelihood estimation is often subject to numerical instability that leads to convergence problems. Alternative methods for solving log-binomial regression convergence problems have been proposed. A Bayesian approach was also introduced, but the comparison between this method and frequentist methods has not been fully explored. We compared five frequentist methods and one Bayesian method for estimating RRs under a variety of scenarios. Based on our simulation study, no single method performs well on all statistical properties, but COPY 1000 and modified log-Poisson regression can be considered in practice.
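As a baseline for the regression methods compared above, the unadjusted RR from a 2×2 table with its large-sample (Katz) log-scale confidence interval can be sketched as follows. This is not the log-binomial, COPY, or modified log-Poisson method; arm sizes, baseline risk, and the true RR are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(8)
n1 = n0 = 1000
p0, rr_true = 0.20, 1.5                    # common outcome, hypothetical true RR
events1 = rng.binomial(n1, p0 * rr_true)   # exposed arm
events0 = rng.binomial(n0, p0)             # control arm

rr_hat = (events1 / n1) / (events0 / n0)
# large-sample standard error of log(RR) for a 2x2 table (Katz method)
se_log = np.sqrt(1 / events1 - 1 / n1 + 1 / events0 - 1 / n0)
ci = np.exp(np.log(rr_hat) + np.array([-1.96, 1.96]) * se_log)
```

Regression methods such as log-binomial fitting generalize this estimate to adjust for covariates, which is where the convergence problems discussed above arise.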
