Similar Literature
20 similar documents retrieved
1.
The unit root problem plays a central role in empirical applications in the time-series econometric literature. However, significance tests developed in the frequentist tradition present various conceptual problems that jeopardize their power, especially in small samples. Bayesian alternatives, although precisely defined and admitting interesting interpretations, run into difficulty because the hypothesis of interest in this case is sharp, or precise. The Bayesian significance test used in this article for the unit root hypothesis is based solely on the posterior density function, without the need to assign positive probability to sets of zero Lebesgue measure, and it is conducted in strict observance of the likelihood principle. It was designed mainly for testing sharp null hypotheses and is called the FBST, for Full Bayesian Significance Test.
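As a hedged illustration of the FBST idea only (not the article's own implementation), the sketch below computes the e-value for the unit root hypothesis in an AR(1) model, assuming a flat prior and a Gaussian approximation to the posterior of the autoregressive coefficient; all names and settings are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
T = 200
y = np.cumsum(rng.normal(size=T))           # simulate a random walk (H0 true)

ylag, ycur = y[:-1], y[1:]
rho_hat = ylag @ ycur / (ylag @ ylag)       # OLS estimate of rho
resid = ycur - rho_hat * ylag
se = np.sqrt(resid @ resid / (len(ycur) - 1) / (ylag @ ylag))

post = stats.norm(rho_hat, se)              # approximate posterior of rho
# tangential set {rho : p(rho | y) > p(1 | y)}; for a symmetric unimodal
# posterior this is the interval |rho - rho_hat| < |1 - rho_hat|
half = abs(1.0 - rho_hat)
ev_H0 = 1.0 - (post.cdf(rho_hat + half) - post.cdf(rho_hat - half))
print(f"rho_hat = {rho_hat:.4f}, FBST e-value for the unit root = {ev_H0:.4f}")
```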

2.
Although no universally accepted definition of causality exists, in practice one is often faced with the question of statistically assessing causal relationships in different settings. We present a uniform general approach to causality problems derived from the axiomatic foundations of the Bayesian statistical framework. In this approach, causality statements are viewed as hypotheses, or models, about the world, and the fundamental object to be computed is the posterior distribution of the causal hypotheses, given the data and the background knowledge. Computation of the posterior, illustrated here in simple examples, may involve complex probabilistic modeling, but this is no different from any other Bayesian modeling situation. The main advantage of the approach is its connection to the axiomatic foundations of the Bayesian framework, and the general uniformity with which it can be applied to a variety of causality settings, ranging from specific to general cases, or from causes of effects to effects of causes.
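A minimal sketch of the "causal hypotheses as models" viewpoint, under assumptions not taken from the paper: two hypotheses about a binary treatment are scored by Beta-Binomial marginal likelihoods and converted into a posterior over the hypotheses (the binomial coefficients are identical under both hypotheses and cancel in the odds, so they are omitted).

```python
import numpy as np
from scipy.special import betaln

def log_marglik(successes, trials, a=1.0, b=1.0):
    """log integral of the Binomial kernel against a Beta(a, b) prior."""
    return betaln(a + successes, b + trials - successes) - betaln(a, b)

# hypothetical data: 30/50 recover under treatment, 15/50 under control
s_t, n_t, s_c, n_c = 30, 50, 15, 50

# H_cause: treatment and control have separate recovery rates
logml_cause = log_marglik(s_t, n_t) + log_marglik(s_c, n_c)
# H_null: one common rate (treatment causally irrelevant)
logml_null = log_marglik(s_t + s_c, n_t + n_c)

# posterior over the two hypotheses under equal prior odds
logs = np.array([logml_cause, logml_null])
post = np.exp(logs - logs.max()); post /= post.sum()
print(f"P(causal effect | data) = {post[0]:.3f}")
```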

3.
李素芳, 朱慧明. 《统计研究》 2013, 30(1): 96-104
Existing threshold cointegration tests are hampered by likelihood functions that are multimodal and discontinuous, which makes the nuisance parameters difficult to identify and the optimization relatively complex. This paper proposes a Bayesian threshold cointegration analysis based on a nonlinear error-correction model: an MCMC sampling scheme is designed from the conditional posterior distributions of the parameters, and a Bayesian threshold cointegration test is conducted. Monte Carlo simulation of the finite-sample behavior shows that the Bayesian threshold cointegration test has good finite-sample properties. An empirical study of US interest-rate series of different maturities finds threshold cointegration between the 1-month and 3-month rates, between the 3-month and 6-month rates, and between the 3-month and 1-year rates. The results indicate that the Bayesian threshold cointegration test resolves the nuisance-parameter identification problem, makes the computation comparatively simple, and improves both the precision of estimation and the accuracy of the test.
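The following is a deliberately simplified sketch of one ingredient of such an analysis, not the authors' sampler: with flat priors on the regime slopes and a Jeffreys prior on the error variance, those parameters integrate out analytically, leaving a "griddy" posterior for the threshold that can be evaluated directly or embedded as one Gibbs step.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 300
z = np.empty(T); z[0] = 0.0
for t in range(1, T):                       # simulate a two-regime adjustment
    adj = -0.5 if abs(z[t - 1]) > 1.0 else -0.05
    z[t] = z[t - 1] + adj * z[t - 1] + rng.normal(scale=0.3)

dz, zlag = np.diff(z), z[:-1]
n, k = len(dz), 2

def log_marglik(gamma):
    """log p(data | gamma) with slopes and sigma^2 integrated out."""
    X = np.column_stack([zlag * (np.abs(zlag) <= gamma),
                         zlag * (np.abs(zlag) > gamma)])
    beta, *_ = np.linalg.lstsq(X, dz, rcond=None)
    r = dz - X @ beta
    _, logdet = np.linalg.slogdet(X.T @ X)
    return -0.5 * logdet - 0.5 * (n - k) * np.log(r @ r)

# grid over interior quantiles keeps both regimes nonempty
grid = np.quantile(np.abs(zlag), np.linspace(0.15, 0.85, 60))
logp = np.array([log_marglik(g) for g in grid])
post = np.exp(logp - logp.max()); post /= post.sum()  # flat prior on the grid
print("posterior mean of the threshold:", round(float(grid @ post), 3))
```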

4.
Interval censoring arises when the event of interest is known only to have occurred within a random time interval. Estimation and hypothesis-testing procedures for interval-censored data are surveyed. We distinguish between frequentist and Bayesian approaches. Computational aspects of every proposed method are described, and S-Plus implementations are mentioned whenever feasible. Three real data sets are analyzed.

5.
We propose a Bayesian computation and inference method for the Pearson-type chi-squared goodness-of-fit test with right-censored survival data. Our test statistic is derived from the classical Pearson chi-squared test using the differences between the observed and expected counts in the partitioned bins. In the Bayesian paradigm, we generate posterior samples of the model parameter using a Markov chain Monte Carlo procedure. By replacing the maximum likelihood estimator in the quadratic form with a random observation from the posterior distribution of the model parameter, we can easily construct a chi-squared test statistic. The degrees of freedom of the test equal the number of bins and are thus independent of the dimensionality of the underlying parameter vector. The test statistic recovers the conventional Pearson-type chi-squared structure. Moreover, the proposed algorithm circumvents the burden of evaluating the Fisher information matrix, its inverse, and the rank of the variance-covariance matrix. We examine the proposed model diagnostic method in simulation studies and illustrate it with a real data set from a prostate cancer study.
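A single-draw toy version of this construction, under assumptions chosen for brevity rather than taken from the paper (exponential survival times, administrative censoring at tau, conjugate Gamma posterior for the rate); the full procedure would repeat this over many posterior draws. Note that the degrees of freedom equal the number of bins K, as stated above.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n, tau = 500, 2.0
t = rng.exponential(scale=1.0, size=n)      # true rate = 1
obs_time = np.minimum(t, tau)               # administrative censoring at tau
event = t <= tau

a0, b0 = 0.01, 0.01                         # vague Gamma prior on the rate
post = stats.gamma(a=a0 + event.sum(), scale=1.0 / (b0 + obs_time.sum()))
lam = post.rvs(random_state=rng)            # one posterior draw replaces the MLE

edges = np.linspace(0, tau, 6)              # K = 5 bins on [0, tau]
O, _ = np.histogram(t[event], bins=edges)   # observed event counts per bin
S = np.exp(-lam * edges)                    # survival at the bin edges
E = n * (S[:-1] - S[1:])                    # expected event counts per bin
chi2 = np.sum((O - E) ** 2 / E)
K = len(O)                                  # df = number of bins
print(f"chi2 = {chi2:.2f}, p-value = {stats.chi2.sf(chi2, df=K):.3f}")
```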

6.
A message coming out of the recent Bayesian literature on cointegration is that it is important to elicit a prior on the space spanned by the cointegrating vectors (as opposed to a particular identified choice for these vectors). In previous work, such priors have been found to greatly complicate computation. In this article, we develop algorithms to carry out efficient posterior simulation in cointegration models. In particular, we develop a collapsed Gibbs sampling algorithm that can be used with just-identified models and demonstrate that it has very large computational advantages over existing approaches. For over-identified models, we develop a parameter-augmented Gibbs sampling algorithm and demonstrate that it also has attractive computational properties.

7.
In this article, we propose Bayesian methodology for obtaining parameter estimates of mixtures of distributions belonging to the normal and biparametric Weibull families, modeling both the mean and the variance parameters. Simulation studies and applications show the performance of the proposed models.

8.
This article deals with the Granger non-causality test in cointegrated vector autoregressive processes. We propose a new testing procedure that yields an asymptotically standard distribution and performs well in small samples by combining the standard Wald test with the generalized-inverse procedure. We also propose a few simple modifications to the test statistics to help our procedure perform better in finite samples. Monte Carlo simulations show that our procedure works better than the conventional approach.
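A schematic of the generalized-inverse device in isolation (the article's own modifications to the statistic are not reproduced): numpy's Moore-Penrose pseudoinverse replaces the ordinary inverse, and the degrees of freedom become the rank of the restricted covariance rather than the number of restrictions.

```python
import numpy as np
from scipy import stats

def wald_pinv(R, b, V, tol=1e-10):
    """Wald test of H0: R b = 0 using a generalized inverse of R V R'."""
    Rb = R @ b
    M = R @ V @ R.T
    stat = float(Rb @ np.linalg.pinv(M, rcond=tol) @ Rb)
    rank = int(np.linalg.matrix_rank(M, tol=tol))   # df = rank(M)
    return stat, rank, stats.chi2.sf(stat, df=rank)

# example: test that the first two of three coefficients are zero
b = np.array([0.8, -0.1, 0.3])
V = np.array([[0.04, 0.01, 0.0],
              [0.01, 0.05, 0.0],
              [0.0,  0.0,  0.02]])
R = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
print(wald_pinv(R, b, V))
```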

9.
In this work, an approach to Bayesian estimation in the bisexual Galton-Watson process is considered. We first study an important parametric case, assuming an offspring distribution belonging to the bivariate power series family of distributions, and then investigate the nonparametric case. In both situations, Bayes estimators under a weighted squared error loss function are obtained for the means, variances, and covariance of the offspring distribution. For the superadditive case, Bayes estimation of the asymptotic growth rate is also considered. Illustrative examples are given.

10.
Regularization methods for simultaneous variable selection and coefficient estimation have been shown to be effective in quantile regression for improving prediction accuracy. In this article, we propose the Bayesian bridge for variable selection and coefficient estimation in quantile regression. A simple and efficient Gibbs sampling algorithm is developed for posterior inference using a scale mixture of uniforms representation of the Bayesian bridge prior. This is the first work to discuss regularized quantile regression with the bridge penalty. Both simulated and real data examples show that the proposed method often outperforms quantile regression without regularization, lasso quantile regression, and Bayesian lasso quantile regression.
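The sketch below substitutes a plain random-walk Metropolis sampler for the paper's scale-mixture-of-uniforms Gibbs sampler, purely to make the posterior target transparent: an asymmetric-Laplace (check-loss) working likelihood for quantile p combined with a bridge prior exp(-lam * sum |beta_j|^alpha); lam and alpha are fixed here rather than estimated, and all settings are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
n, d, p = 200, 5, 0.5
X = rng.normal(size=(n, d))
beta_true = np.array([2.0, 0.0, -1.5, 0.0, 0.0])   # sparse truth
y = X @ beta_true + rng.normal(size=n)

def log_post(beta, lam=2.0, alpha=0.5):
    u = y - X @ beta
    check = np.sum(u * (p - (u < 0)))       # check (pinball) loss
    return -check - lam * np.sum(np.abs(beta) ** alpha)   # bridge log-prior

beta = np.zeros(d)
draws, lp = [], log_post(beta)
for it in range(20000):                     # random-walk Metropolis
    prop = beta + 0.05 * rng.normal(size=d)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        beta, lp = prop, lp_prop
    if it >= 10000:                         # keep post-burn-in draws
        draws.append(beta.copy())
print("posterior means:", np.mean(draws, axis=0).round(2))
```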

11.
The scientific attitude towards statistical method has always pursued two basic objectives: identifying false assumptions, and selecting, among the likely assertions, those most consistent with a given system. The methodological demarcation between rejecting a statistical statement because it is "false" and excluding it because it is "least probable" lies in the fundamental premises of the inferential procedures. In the first class we find the methods proposed by Fisher, Neyman and Pearson; in the second, the Bayesian techniques. Even if different inferential theories may coexist, any particular solution has a limit of validity strictly bounded by the conventional procedural rules on which it is based. Invited paper at the Conference on "Statistical Tests: Methodology and Econometric Applications", held in Bologna, Italy, 27-28 May 1993.

12.
Since the pioneering work of Koenker and Bassett [27], quantile regression models and their applications have become increasingly popular and important for research in many areas. In this paper, a random effects ordinal quantile regression model is proposed for the analysis of longitudinal data with an ordinal outcome of interest. An efficient Gibbs sampling algorithm is derived for fitting the model to the data, based on a location-scale mixture representation of the skewed double-exponential distribution. The proposed approach is illustrated using simulated data and a real data example. This is the first work to discuss quantile regression for the analysis of longitudinal data with an ordinal outcome.

13.
When the test equation contains deterministic trends, residual-based panel cointegration tests that remove the deterministic trend by full-sample regression produce test statistics contaminated by Nickell bias, which must be corrected with standardization coefficients; those coefficients, however, depend on the true parameters of the data-generating process. This paper applies the partial-sample detrending approach of Breitung and Das (2005) and others to panel cointegration testing and proposes a panel cointegration test based on quasi-residuals. We prove the asymptotic normality of the new test statistic, and simulation results show that it has very good small-sample performance.
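As a loose illustration of the partial-sample idea only (the paper's statistic and its normalization are not reproduced here), the helpers below construct recursively demeaned and recursively detrended series, where the adjustment at time t uses only observations up to and including t.

```python
import numpy as np

def recursive_demean(y):
    """y_t minus the mean of observations up to and including t."""
    t = np.arange(1, len(y) + 1)
    return y - np.cumsum(y) / t

def recursive_detrend(y):
    """OLS-detrend each prefix y_1..y_t and keep the residual at t."""
    out = np.empty(len(y))
    for t in range(len(y)):
        if t < 2:
            out[t] = 0.0                    # trend line fits exactly so far
            continue
        s = np.arange(t + 1, dtype=float)
        coef = np.polyfit(s, y[:t + 1], 1)  # intercept + trend on the prefix
        out[t] = y[t] - np.polyval(coef, s[-1])
    return out
```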

14.
The paper is concerned with direct tests of the rational expectations hypothesis (REH) in the presence of stationary and non-stationary variables. Alternative methods of converting qualitative survey responses into quantitative expectations series are examined. Tests of orthogonality, and the issue of generated regressors for models estimated by two-step methods, are re-evaluated when the variable to be explained is stationary. A methodological approach for testing the REH is provided for models using qualitative response data when there are unit roots and cointegration, and alternative reasons for rejecting the null hypothesis of orthogonality are examined. The usefulness of cointegration analysis for both the probability and regression conversion procedures is also analysed. Cointegration is found to be directly applicable to the probability conversion approach with uniform, normal and logistic distributions of expectations, and to the linear regression conversion approach. In the light of new techniques, an existing empirical example testing the REH for British manufacturing firms is re-examined and tested over an extended data set.
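A hedged sketch of the standard probability (Carlson-Parkin-type) conversion under the normal distribution mentioned above: given the survey shares expecting a rise or a fall, and an indifference threshold delta assumed known or calibrated, the implied mean and dispersion of expectations are recovered by inverting the normal CDF.

```python
from scipy import stats

def carlson_parkin(frac_up, frac_down, delta=0.5):
    """Recover (mu, sigma) of N(mu, sigma^2) expectations from survey shares.

    P(x > delta) = frac_up and P(x < -delta) = frac_down imply
    (delta - mu)/sigma = za and (-delta - mu)/sigma = zb below.
    """
    za = stats.norm.ppf(1.0 - frac_up)
    zb = stats.norm.ppf(frac_down)
    sigma = 2.0 * delta / (za - zb)
    mu = -delta * (za + zb) / (za - zb)
    return mu, sigma

# hypothetical survey: 42% expect a rise, 18% expect a fall
mu, sigma = carlson_parkin(frac_up=0.42, frac_down=0.18)
print(f"implied expected change: {mu:.3f} (sd {sigma:.3f})")
```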

16.
This paper tests for a long-run equilibrium relationship between China's nominal interest rate and inflation rate, first within a linear Engle-Granger cointegration model and then within a nonlinear exponential smooth transition autoregressive error-correction model (ESTAR-ECM). The linear cointegration model fails to capture a long-run equilibrium between the nominal interest rate and inflation. Under the ESTAR-ECM, by contrast, whether the 1-year commercial bank lending rate or the 7-day interbank offered rate is used as the proxy for the nominal rate, a stable long-run equilibrium between the nominal interest rate and inflation is confirmed, indicating that the Fisher effect holds in China. Because the Fisher-effect coefficient is less than one, however, only a weak Fisher effect exists between the nominal interest rate and inflation. The implication is that China's interest rate policy has some positive effect on stabilizing inflation expectations and restraining inflation, but since interest rates underreact to inflation, relying on interest rate policy alone to control the currently high inflation is difficult.
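A stylized illustration of the ESTAR adjustment mechanism, not the paper's estimation: the equilibrium error z (e.g., the deviation of the nominal rate from a fitted Fisher relation) reverts with strength governed by the transition function G(z) = 1 - exp(-gamma * z^2), and the parameters are recovered by nonlinear least squares on simulated data.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(4)
T = 400
gamma_true, phi_true = 2.0, -0.6
z = np.empty(T); z[0] = 0.0
for t in range(1, T):                       # ESTAR error-correction dynamics
    G = 1.0 - np.exp(-gamma_true * z[t - 1] ** 2)
    z[t] = z[t - 1] + phi_true * z[t - 1] * G + rng.normal(scale=0.2)

dz, zlag = np.diff(z), z[:-1]

def estar_step(zl, phi, gamma):
    """One-step adjustment: phi * z * G(z; gamma)."""
    return phi * zl * (1.0 - np.exp(-gamma * zl ** 2))

(phi_hat, gamma_hat), _ = curve_fit(estar_step, zlag, dz, p0=(-0.1, 1.0))
print(f"phi = {phi_hat:.2f}, gamma = {gamma_hat:.2f}")
```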

17.
In this paper we introduce a class of prior distributions for contingency tables with given marginals; we are interested in the structure of concordance/discordance of such tables. There is a minor limitation in that the marginals are required to take only rational values, though we argue that this is not a serious drawback for practical purposes. The posterior and predictive distributions given an M-sample are computed. Examples of Bayesian estimates of some classical indices of concordance are also given. Moreover, we show how simulation can be used to overcome certain difficulties that arise in computing the posterior distribution.

18.
Tests of significance are often made in situations where the standard assumptions underlying the probability calculations do not hold. As a result, the reported significance levels become difficult to interpret. This article sketches an alternative interpretation of a reported significance level, valid in considerable generality. This level locates the given data set within the spectrum of other data sets derived from the given one by an appropriate class of transformations. If the null hypothesis being tested holds, the derived data sets should be equivalent to the original one. Thus, a small reported significance level indicates an unusual data set. This development parallels that of randomization tests, but there is a crucial technical difference: our approach involves permuting observed residuals; the classical randomization approach involves permuting unobservable, or perhaps nonexistent, stochastic disturbance terms.
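A concrete instance of the residual-permutation recipe for a simple regression slope, with all settings illustrative: residuals are computed under the null (intercept-only) fit, permuted to generate derived data sets, and the observed statistic is located within the resulting spectrum.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 80
x = rng.normal(size=n)
y = 0.3 * x + rng.normal(size=n)

def slope(x, y):
    xc = x - x.mean()
    return xc @ (y - y.mean()) / (xc @ xc)

t_obs = abs(slope(x, y))
resid = y - y.mean()                        # residuals under H0: slope = 0
# derived data sets: null fit plus permuted residuals
t_perm = np.array([abs(slope(x, y.mean() + rng.permutation(resid)))
                   for _ in range(5000)])
print("permutation significance level:",
      (1 + np.sum(t_perm >= t_obs)) / (1 + len(t_perm)))
```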

19.
A multivariate model that allows for both a time-varying cointegrating matrix and time-varying cointegrating rank is presented. The model addresses the issue that, in real data, the validity of a constant cointegrating relationship may be questionable. The model nests the submodels implied by alternative cointegrating matrix ranks and allows for transitions between stationarity and nonstationarity, and between cointegrating and noncointegrating relationships, in accordance with the observed behavior of the data. A Bayesian test of cointegration is also developed. The model is used to assess the validity of the Fisher effect and is also applied to equity market data.

20.
This report is about the analysis of stochastic processes of the form R = S + N, where S is a "smooth" functional and N is noise. The proposed methods derive from the assumption that the observed values of R and the unobserved values of R that are the inferential objectives of the analysis are linearly related through Taylor series expansions of the observed about the unobserved values. The expansion errors and all other a priori unspecified quantities have a joint multivariate normal distribution that expresses the prior uncertainty about their values. The results include interpolators, predictors, and derivative estimates, with credibility-interval estimates automatically generated in each case. An analysis of an acid-rain wet-deposition time series is included to indicate the efficacy of the proposed method; it was this problem that led to the methodological developments reported in this paper.
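A small Gaussian-process analogue of this smooth-signal-plus-noise setup (the squared-exponential prior and all hyperparameters here are illustrative choices, not the report's): the posterior mean serves as interpolator/predictor, and the posterior variance yields credibility intervals automatically.

```python
import numpy as np

rng = np.random.default_rng(6)
t_obs = np.sort(rng.uniform(0, 10, 30))     # observation times
r_obs = np.sin(t_obs) + rng.normal(scale=0.2, size=t_obs.size)
t_new = np.linspace(0, 10, 200)             # where R is to be inferred

def k(a, b, ell=1.0, s2=1.0):               # squared-exponential covariance
    return s2 * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

Koo = k(t_obs, t_obs) + 0.2 ** 2 * np.eye(t_obs.size)   # + noise variance
Kno = k(t_new, t_obs)
mean = Kno @ np.linalg.solve(Koo, r_obs)    # interpolator / predictor
var = np.diag(k(t_new, t_new)) - np.einsum('ij,ji->i', Kno,
                                           np.linalg.solve(Koo, Kno.T))
lo = mean - 1.96 * np.sqrt(var)             # 95% credibility band
hi = mean + 1.96 * np.sqrt(var)
print("band width at t = 5:", float((hi - lo)[100]))
```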
