Similar documents
 20 similar documents found (search time: 31 ms)
1.
This article considers the order selection problem of periodic autoregressive models. Our main goal is to adapt the Bayesian Predictive Density Criterion (PDC), established by Djurić and Kay (1992, "Order selection of autoregressive models," IEEE Transactions on Signal Processing, 40: 2829–2833) for selecting the order of a stationary autoregressive model, to the order identification problem of a periodic autoregressive model. The performance of the resulting criterion (P-PDC) is compared, via simulation studies, with the performances of several well-known existing criteria.
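Since the P-PDC formula is not reproduced in the abstract, the following is only an illustrative sketch of the underlying task: selecting a separate autoregressive order for each season of a simulated periodic AR series, with BIC used as a stand-in criterion. The data, coefficients, and maximum order are assumptions, not the article's setup.

```python
# Illustrative sketch only: season-by-season order selection for a periodic AR,
# with BIC standing in for the article's P-PDC criterion.
import numpy as np

rng = np.random.default_rng(4)
S, n = 4, 2000                                   # 4 "seasons", series length
phi = {0: [0.6], 1: [0.3, 0.3], 2: [-0.4], 3: [0.5, -0.2]}   # true per-season orders 1,2,1,2
x = np.zeros(n)
for t in range(2, n):
    coefs = phi[t % S]
    x[t] = sum(c * x[t - i - 1] for i, c in enumerate(coefs)) + rng.standard_normal()

def bic_order(season, p_max=4):
    """Pick the AR order for one season by least-squares fit + BIC on a common sample."""
    best_p, best_bic = 0, np.inf
    rows = np.array([t for t in range(p_max, n) if t % S == season])
    for p in range(1, p_max + 1):
        X = np.column_stack([x[rows - i] for i in range(1, p + 1)])
        y = x[rows]
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        sigma2 = resid @ resid / rows.size
        bic = rows.size * np.log(sigma2) + p * np.log(rows.size)
        if bic < best_bic:
            best_p, best_bic = p, bic
    return best_p

print("selected per-season orders:", [bic_order(s) for s in range(S)])
```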

2.
In order for predictive regression tests to deliver asymptotically valid inference, account has to be taken of the degree of persistence of the predictors under test. There is also a maintained assumption that any predictability in the variable of interest is purely attributable to the predictors under test. Violation of this assumption by the omission of relevant persistent predictors renders the predictive regression invalid, and potentially also spurious, as both the finite-sample and asymptotic size of the predictability tests can be significantly inflated. In response, we propose a predictive regression invalidity test based on a stationarity testing approach. To allow for an unknown degree of persistence in the putative predictors, and for heteroscedasticity in the data, we implement our proposed test using a fixed regressor wild bootstrap procedure. We demonstrate the asymptotic validity of the proposed bootstrap test by proving that the limit distribution of the bootstrap statistic, conditional on the data, is the same as the limit null distribution of the statistic computed on the original data, conditional on the predictor. This corrects a long-standing error in the bootstrap literature whereby it is incorrectly argued that, for strongly persistent regressors and test statistics akin to ours, the validity of the fixed regressor bootstrap obtains through equivalence to an unconditional limit distribution. Our bootstrap results are therefore of interest in their own right and are likely to have applications beyond the present context. An illustration is given by reexamining the results relating to U.S. stock returns data in Campbell and Yogo (2006, "Efficient Tests of Stock Return Predictability," Journal of Financial Economics, 81: 27–60). Supplementary materials for this article are available online.
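As a purely illustrative companion, the sketch below applies a fixed regressor wild bootstrap to a simple KPSS-type partial-sum statistic computed from predictive-regression residuals. The statistic, the crude variance normalization, and the simulated data are assumptions for illustration and do not reproduce the article's exact test.

```python
# Generic fixed regressor wild bootstrap sketch (not the article's test).
import numpy as np

rng = np.random.default_rng(6)
n = 250
x = np.cumsum(rng.standard_normal(n))            # persistent putative predictor
y = rng.standard_normal(n - 1)                   # returns; no predictability under the null

def kpss_stat(resid):
    """Partial-sum (KPSS-type) statistic with a simple variance normalization."""
    e = resid - resid.mean()
    s = np.cumsum(e)
    return np.sum(s ** 2) / (e.size ** 2 * e.var())

# residuals from the predictive regression of y_t on x_{t-1}
X = np.column_stack([np.ones(n - 1), x[:-1]])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
stat = kpss_stat(resid)

# fixed regressor wild bootstrap: keep X fixed, regenerate y* = resid * w,
# re-run the regression, and recompute the statistic on each bootstrap draw
boot = []
for _ in range(999):
    y_star = resid * rng.standard_normal(resid.size)
    b, *_ = np.linalg.lstsq(X, y_star, rcond=None)
    boot.append(kpss_stat(y_star - X @ b))
p_value = np.mean(np.array(boot) >= stat)
print(f"statistic={stat:.3f}, bootstrap p-value={p_value:.3f}")
```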

3.
4.
This article develops a novel asymptotic theory for panel models with common shocks. We assume that contemporaneous correlation can be generated both by the presence of common regressors among units and by weak spatial dependence among the error terms. Several characteristics of the panel are considered: the cross-sectional and time-series dimensions can be either fixed or large; factors can be either observable or unobservable; and the factor model can describe a cointegration relationship or a spurious regression, with the stationary case also considered. We derive the rate of convergence and the limit distributions of the ordinary least squares (OLS) estimates of the model parameters in all the aforementioned cases.

5.
Typical panel data models make use of the assumption that the regression parameters are the same for each individual cross-sectional unit. We propose tests for slope heterogeneity in panel data models. Our tests are based on the conditional Gaussian likelihood function in order to avoid the incidental parameters problem induced by the inclusion of individual fixed effects for each cross-sectional unit. We derive the conditional Lagrange multiplier test, which is valid in cases where N → ∞ and T is fixed. The test applies to both balanced and unbalanced panels. We extend the test to account for general heteroskedasticity where each cross-sectional unit has its own form of heteroskedasticity. The modification is possible if T is large enough to estimate regression coefficients for each cross-sectional unit by using the MINQUE unbiased estimator for regression variances under heteroskedasticity. All versions of the test have a standard normal distribution under general assumptions on the error distribution as N → ∞. A Monte Carlo experiment shows that the test has very good size properties under all specifications considered, including heteroskedastic errors. In addition, the power of our test is very good relative to existing tests, particularly when T is not large.

6.
Stochastic volatility models have been widely appreciated in empirical finance, for example in option pricing and risk management. Recent advances in Markov chain Monte Carlo (MCMC) techniques have made it possible to fit stochastic volatility models of increasing complexity within a Bayesian framework. In this article, we propose a new Bayesian model selection procedure, based on the Bayes factor and a classical thermodynamic integration technique named path sampling, to select an appropriate stochastic volatility model. The performance of the developed procedure is illustrated with an application to the daily pound/dollar exchange rate data set.
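To make the path-sampling idea concrete, here is a minimal sketch of thermodynamic integration for a log marginal likelihood in a toy conjugate Normal model rather than a stochastic volatility model; the temperature grid, prior, and simulated data are all assumptions.

```python
# Path sampling / thermodynamic integration sketch on a toy conjugate model:
# log p(y) = integral over t in [0,1] of E_{p_t}[log p(y | mu)] dt,
# where p_t(mu | y) is the power posterior proportional to p(y|mu)^t p(mu).
import numpy as np

rng = np.random.default_rng(1)
y = rng.normal(1.0, 1.0, size=50)          # data, sigma = 1 treated as known
n, sigma2 = y.size, 1.0
m0, s02 = 0.0, 10.0                        # N(m0, s02) prior on the mean

def loglik(mu):
    return (-0.5 * n * np.log(2 * np.pi * sigma2)
            - 0.5 * np.sum((y - mu[:, None]) ** 2, axis=1) / sigma2)

temps = np.linspace(0.0, 1.0, 21)
expected_ll = []
for t in temps:
    # the power posterior stays Normal in this conjugate toy model
    prec = t * n / sigma2 + 1.0 / s02
    mean = (t * y.sum() / sigma2 + m0 / s02) / prec
    draws = rng.normal(mean, np.sqrt(1.0 / prec), size=2000)
    expected_ll.append(loglik(draws).mean())

# trapezoid rule over the temperature path gives the log marginal likelihood
log_marginal = np.trapz(expected_ll, temps)
print("path-sampling log marginal likelihood:", round(log_marginal, 3))
```

A Bayes factor between two candidate models is then the exponentiated difference of their path-sampling log marginal likelihoods.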

7.
Generalized method of moments (GMM) has been an important innovation in econometrics. Its usefulness has motivated a search for good inference procedures based on GMM. This article presents a novel method of bootstrapping for GMM based on resampling from the empirical likelihood distribution that imposes the moment restrictions. We show that this approach yields a large-sample improvement and is efficient, and give examples. We also discuss the development of GMM and other recent work on improved inference.

8.
In modern financial economics, continuous-time models provide a convenient description of the dynamics of key economic variables such as stock prices, exchange rates, and interest rates. This paper proposes a two-stage, high-frequency-data-driven estimation method for continuous-time models, which increases the flexibility and practicality of extended continuous-time specifications. Taking the Vasicek model as an example, the diffusion parameter is estimated in the first stage by the realized volatility method, and the drift parameters are then estimated from the forward equation of the stationary distribution of the observed data. The method depends only weakly on the initial model specification and on the optimization algorithm, and the results are relatively stable and reliable.
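A minimal sketch of the two-stage idea, using a simulated Vasicek path rather than real high-frequency data; the parameter values, sampling frequency, and the moment-matching form of the drift step are illustrative assumptions.

```python
# Two-stage sketch for the Vasicek model dX_t = kappa*(theta - X_t) dt + sigma dW_t.
import numpy as np

rng = np.random.default_rng(0)

# --- simulate high-frequency observations from a Vasicek process ---
kappa, theta, sigma = 2.0, 0.05, 0.1
dt, n = 1.0 / (252 * 48), 252 * 48 * 5          # 5 years of 48 intraday observations/day
x = np.empty(n); x[0] = theta
for t in range(1, n):
    x[t] = x[t-1] + kappa * (theta - x[t-1]) * dt + sigma * np.sqrt(dt) * rng.standard_normal()

# --- stage 1: diffusion parameter from realized volatility ---
# realized (quadratic) variation over [0, T] converges to sigma^2 * T
rv = np.sum(np.diff(x) ** 2)
sigma_hat = np.sqrt(rv / (n * dt))

# --- stage 2: drift parameters from the stationary (forward-equation) density ---
# the Vasicek stationary law is N(theta, sigma^2 / (2*kappa)), so match moments
theta_hat = x.mean()
kappa_hat = sigma_hat ** 2 / (2.0 * x.var())

print(f"sigma_hat={sigma_hat:.4f}, theta_hat={theta_hat:.4f}, kappa_hat={kappa_hat:.4f}")
```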

9.
A Bayesian model consists of two elements: a sampling model and a prior density. The problem of selecting a prior density is nothing but the problem of selecting a Bayesian model in which the sampling model is fixed. A predictive approach is used through a decision problem where the loss function is the squared L2 distance between the sampling density and the posterior predictive density, because the aim of the method is to choose the prior that provides a posterior predictive density that is as good as possible. An algorithm based on Lavine's linearization technique is developed for solving the problem.

10.
This paper provides a semiparametric framework for modeling multivariate conditional heteroskedasticity. We put forward latent stochastic volatility (SV) factors as capturing the commonality in the joint conditional variance matrix of asset returns. This approach is in line with common features as studied by Engle and Kozicki (1993), and it allows us to focus on identification of factors and factor loadings through first- and second-order conditional moments only. We assume that the time-varying part of risk premiums is based on constant prices of factor risks, and we consider a factor SV-in-mean model. Additional specification of both the expectation and the volatility of the future volatility of factors provides conditional moment restrictions, through which all the parameters of the model are identified. These conditional moment restrictions pave the way for instrumental variables estimation and GMM inference.

11.
This paper provides a semiparametric framework for modeling multivariate conditional heteroskedasticity. We put forward latent stochastic volatility (SV) factors as capturing the commonality in the joint conditional variance matrix of asset returns. This approach is in line with common features as studied by Engle and Kozicki (1993, "Testing for Common Features," Journal of Business and Economic Statistics, 11(4): 369–395), and it allows us to focus on identification of factors and factor loadings through first- and second-order conditional moments only. We assume that the time-varying part of risk premiums is based on constant prices of factor risks, and we consider a factor SV-in-mean model. Additional specification of both the expectation and the volatility of the future volatility of factors provides conditional moment restrictions, through which all the parameters of the model are identified. These conditional moment restrictions pave the way for instrumental variables estimation and GMM inference.

12.
Fuel coefficients of cement production—one for each process of production—are estimated by explicitly accounting for the multiple-kiln structure of cement plants. Unobserved heterogeneity across plants is found to be significant. Furthermore, since the estimable model is nonlinear in exogenous variables and parameters, a fixed-effects estimator for nonlinear regression is used to obtain the estimates.

13.
Mixed linear models describe the dependence in multivariate normal survival data via random effects. Recently they have received considerable attention in the biomedical literature. They model the conditional survival times, whereas the alternative frailty model uses the conditional hazard rate. We develop an inferential method for the mixed linear model via Lee and Nelder's (1996) hierarchical likelihood (h-likelihood). A simulation study and a practical example are presented to illustrate the new method.

14.
Research on compiling a coal big data index and an empirical mode decomposition model
A big data index compiled from open data sources and continuously observed multivariate data differs from traditional survey-based statistical indices not only in the unbounded expansion of the data themselves, but also in the compilation methodology and in the rules and models used for decomposition analysis. Against this big data background, this paper makes a first, exploratory attempt to define a big data index and its data assumptions, and introduces an "Internet big data index" into the coal trading price index to compile a composite Taiyuan coal trading big data index that reflects the trend of coal prices. An empirical mode decomposition model is then applied to decompose the compiled coal big data index and to compare it with the traditional survey-based index. The results show that the newly compiled coal price big data index is more sensitive and responds more quickly than the Taiyuan coal trading price index, and better reflects the trend of coal prices. As the "Internet Plus" and big data strategies spread, composite indices compiled from Internet big data will affect more fields and serve as barometers and indicators across economic management and social development; they will gradually merge with, complement, or upgrade traditional survey-based indices and become an important component of macroeconomic big data indices.
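Since the abstract centres on empirical mode decomposition (EMD), the following is a minimal sketch of EMD by sifting applied to a synthetic series standing in for a price index; the signal, the fixed number of sifting iterations, and the stopping rule are simplifying assumptions.

```python
# Simplified EMD sifting: spline envelopes through extrema, subtract their mean,
# iterate to extract intrinsic mode functions (IMFs), leaving a slow residual/trend.
import numpy as np
from scipy.interpolate import CubicSpline

def mean_envelope(x):
    """Cubic-spline envelopes through local maxima/minima, then their mean."""
    t = np.arange(x.size)
    maxima = np.where((x[1:-1] > x[:-2]) & (x[1:-1] > x[2:]))[0] + 1
    minima = np.where((x[1:-1] < x[:-2]) & (x[1:-1] < x[2:]))[0] + 1
    if maxima.size < 2 or minima.size < 2:
        return None                                   # too few extrema: residual reached
    upper = CubicSpline(np.r_[0, maxima, x.size - 1], np.r_[x[0], x[maxima], x[-1]])(t)
    lower = CubicSpline(np.r_[0, minima, x.size - 1], np.r_[x[0], x[minima], x[-1]])(t)
    return (upper + lower) / 2.0

def emd(x, max_imfs=4, sift_iters=10):
    imfs, residual = [], x.astype(float).copy()
    for _ in range(max_imfs):
        h = residual.copy()
        for _ in range(sift_iters):                   # fixed-iteration sifting (a simplification)
            m = mean_envelope(h)
            if m is None:
                return imfs, residual
            h = h - m
        imfs.append(h)
        residual = residual - h
    return imfs, residual

# synthetic daily "price index": slow trend + cycle + high-frequency noise
t = np.arange(500)
series = 0.02 * t + 5 * np.sin(2 * np.pi * t / 60) + np.random.default_rng(3).normal(0, 1, t.size)
imfs, trend = emd(series)
print(f"extracted {len(imfs)} IMFs; residual (trend) range: {trend.min():.2f}..{trend.max():.2f}")
```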

15.
Order selection is an important step in the application of finite mixture models. Classical methods such as AIC and BIC discourage complex models with a penalty directly proportional to the number of mixing components. In contrast, Chen and Khalili propose to link the penalty to two types of overfitting. In particular, they introduce a regularization penalty to merge similar subpopulations in a mixture model, in which the shrinkage idea of regularized regression is seamlessly employed. However, the new method requires an effective and efficient algorithm. When the popular expectation-maximization (EM) algorithm is used, we need to maximize a nonsmooth and nonconcave objective function in the M-step, which is computationally challenging. In this article, we show that such an objective function can be transformed into a sum of univariate auxiliary functions. We then design an iterative thresholding descent (ITD) algorithm to efficiently solve the associated optimization problem. Unlike many existing numerical approaches, the new algorithm leads to sparse solutions and thereby avoids undesirable ad hoc steps. We establish the convergence of the ITD algorithm and further assess its empirical performance using both simulations and real data examples.

16.
This article describes a method for computing approximate statistics for large data sets, when exact computations may not be feasible. Such situations arise in applications such as climatology, data mining, and information retrieval (search engines). The key to our approach is a modular approximation to the cumulative distribution function (cdf) of the data. Approximate percentiles (as well as many other statistics) can be computed from this approximate cdf. This enables the reduction of a potentially overwhelming computational exercise into smaller, manageable modules. We illustrate the properties of this algorithm using a simulated data set. We also examine the approximation characteristics of the approximate percentiles, using a von Mises functional type approach. In particular, it is shown that the maximum error between the approximate cdf and the actual cdf of the data is never more than 1% (or any other preset level). We also show that under assumptions of underlying smoothness of the cdf, the approximation error is much lower in an expected sense. Finally, we derive bounds for the approximation error of the percentiles themselves. Simulation experiments show that these bounds can be quite tight in certain circumstances.
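A minimal sketch of the modular idea: each chunk contributes a histogram on a shared grid, the merged counts give an approximate cdf, and percentiles are read off by inverting it. The equal-width grid, the assumed data range, and the error behaviour here are illustrative assumptions rather than the article's exact construction.

```python
# Modular approximate cdf: per-chunk histograms on a common grid, merged counts,
# then approximate percentiles by inverting the resulting step-function cdf.
import numpy as np

rng = np.random.default_rng(2)
chunks = [rng.gamma(2.0, 1.5, size=200_000) for _ in range(10)]   # "too big" data, in modules

lo, hi, n_bins = 0.0, 50.0, 1000                 # shared grid (assumed known data range)
edges = np.linspace(lo, hi, n_bins + 1)

counts = np.zeros(n_bins)
total = 0
for chunk in chunks:                             # each module is processed independently
    counts += np.histogram(chunk, bins=edges)[0]
    total += chunk.size

cdf = np.concatenate([[0.0], np.cumsum(counts) / total])   # approximate cdf on the grid

def approx_percentile(p):
    """Invert the approximate cdf at probability p (conservative upper bin edge)."""
    idx = int(np.searchsorted(cdf, p))
    return float(edges[min(idx, n_bins)])

exact = np.percentile(np.concatenate(chunks), [50, 90, 99])
approx = [approx_percentile(p) for p in (0.5, 0.9, 0.99)]
print("approximate:", np.round(approx, 3), " exact:", np.round(exact, 3))
```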

17.
We propose a global smoothing method based on polynomial splines for the estimation of functional coefficient regression models for non-linear time series. Consistency and rate of convergence results are given to support the proposed estimation method. Methods for automatic selection of the threshold variable and significant variables (or lags) are discussed. The estimated model is used to produce multi-step-ahead forecasts, including interval forecasts and density forecasts. The methodology is illustrated by simulations and two real data examples.
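As an illustration of the spline-based estimation step, here is a minimal sketch for a functional-coefficient AR model y_t = a(y_{t-2}) y_{t-1} + e_t, with the coefficient function expanded in a truncated-power quadratic spline basis; the simulated model, knot placement, and basis choice are assumptions, not the article's specification.

```python
# Spline-based estimation of a functional-coefficient AR model by least squares.
import numpy as np

rng = np.random.default_rng(5)
n = 1500
a = lambda u: 0.5 * np.exp(-u ** 2)              # true smooth coefficient function
y = np.zeros(n)
for t in range(2, n):
    y[t] = a(y[t - 2]) * y[t - 1] + rng.standard_normal()

def spline_basis(u, knots):
    """Quadratic truncated-power spline basis [1, u, u^2, (u - k)_+^2, ...]."""
    cols = [np.ones_like(u), u, u ** 2] + [np.clip(u - k, 0, None) ** 2 for k in knots]
    return np.column_stack(cols)

resp, lag1, u = y[2:], y[1:-1], y[:-2]           # response, lagged value, threshold variable
knots = np.quantile(u, [0.25, 0.5, 0.75])
X = spline_basis(u, knots) * lag1[:, None]       # basis-expanded coefficient on y_{t-1}
beta, *_ = np.linalg.lstsq(X, resp, rcond=None)

# recover the estimated coefficient function on a grid and compare with the truth
grid = np.linspace(-2, 2, 9)
a_hat = spline_basis(grid, knots) @ beta
print(np.round(np.c_[grid, a_hat, a(grid)], 3))
```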

18.
Very little is known about the local power of second-generation panel unit root tests that are robust to cross-section dependence. This article derives the local asymptotic power functions of the cross-section augmented Dickey–Fuller (CADF) and CIPS tests of Pesaran (2007), which are among the most popular such tests.

19.
Much recent methodological progress in the analysis of infectious disease data has been due to Markov chain Monte Carlo (MCMC) methodology. In this paper, it is illustrated that rejection sampling can also be applied to a family of inference problems in the context of epidemic models, avoiding the issues of convergence associated with MCMC methods. Specifically, we consider models for epidemic data arising from a population divided into households. The models allow individuals to be potentially infected both from outside and from within the household. We develop methodology for selection between competing models via the computation of Bayes factors. We also demonstrate how an initial sample can be used to adjust the algorithm and improve efficiency. The data are assumed to consist of the final numbers ultimately infected within a sample of households in some community. The methods are applied to data taken from outbreaks of influenza.

20.
This paper examines the use of Dirichlet process mixtures for curve fitting. An important modelling aspect in this setting is the choice between constant and covariate‐dependent weights. By examining the problem of curve fitting from a predictive perspective, we show the advantages of using covariate‐dependent weights. These advantages are a result of the incorporation of covariate proximity in the latent partition. However, closer examination of the partition yields further complications, which arise from the vast number of total partitions. To overcome this, we propose to modify the probability law of the random partition to strictly enforce the notion of covariate proximity, while still maintaining certain properties of the Dirichlet process. This allows the distribution of the partition to depend on the covariate in a simple manner and greatly reduces the total number of possible partitions, resulting in improved curve fitting and faster computations. Numerical illustrations are presented.

