Similar Articles
20 similar articles found (search time: 218 ms)
1.
In this paper we examine the relative increase in mean square forecast error from fitting a weakly stationary process to the series of interest when in fact the true model is a so-called perturbed long-memory process recently introduced by Granger and Marmol (1997). This model has the property of being unidentifiable from a white noise process on the basis of the correlogram and the usual rules of thumb of the Box-Jenkins methodology. We prove that this kind of misspecification can lead to serious errors in terms of forecasting. We also show that corrections based on the AR(1) model can in some cases partially solve the problem. Received: March 15, 1999; revised version: February 14, 2000

2.
3.
Approximate Bayesian computation (ABC) is a popular technique for analysing data for complex models where the likelihood function is intractable. It involves using simulation from the model to approximate the likelihood, with this approximate likelihood then being used to construct an approximate posterior. In this paper, we consider methods that estimate the parameters by maximizing the approximate likelihood used in ABC. We give a theoretical analysis of the asymptotic properties of the resulting estimator. In particular, we derive results analogous to those of consistency and asymptotic normality for standard maximum likelihood estimation. We also discuss how sequential Monte Carlo methods provide a natural method for implementing our likelihood‐based ABC procedures.
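As a toy illustration of the maximize-the-approximate-likelihood idea (not the paper's estimator or its sequential Monte Carlo implementation), the sketch below builds an ABC-style likelihood for the mean of a unit-variance normal by counting simulations whose sample mean lands within a tolerance `eps` of the observed one, then maximizes it over a grid; all names, tolerances, and sample sizes are our own illustrative choices:

```python
import math
import random
import statistics

def abc_loglik(theta, data_mean, n, n_sims=2000, eps=0.1, seed=0):
    """ABC-style approximate log-likelihood for the mean of a unit-variance
    normal: log of the fraction of simulated sample means landing within
    eps of the observed sample mean (a fixed seed gives common random
    numbers across candidate thetas, which smooths the grid search)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sims):
        sim_mean = statistics.fmean(rng.gauss(theta, 1.0) for _ in range(n))
        if abs(sim_mean - data_mean) < eps:
            hits += 1
    return math.log(hits + 1e-9)  # guard against log(0)

# maximise the approximate likelihood over a coarse grid of candidate means
data_mean, n = 1.3, 50
grid = [i / 10 for i in range(-10, 31)]
theta_hat = max(grid, key=lambda t: abc_loglik(t, data_mean, n))
```

With a sample mean of 1.3 observed, the grid maximizer lands at (or next to) 1.3, mimicking maximum likelihood up to the ABC approximation error controlled by `eps`.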

4.
In this article, we analyze the transient behavior of the workload process in a Lévy-driven queue. We are interested in the value of the workload process at a random epoch; this epoch is distributed as the sum of independent exponential random variables. We consider both cases of spectrally one-sided Lévy input processes, for which we succeed in deriving explicit results. As an application, we approximate the mean and the Laplace transform of the workload process after a deterministic time.

5.
Many recent articles have found that atheoretical forecasting methods using many predictors give better predictions for key macroeconomic variables than various small-model methods. The practical relevance of these results is open to question, however, because these articles generally use ex post revised data not available to forecasters and because no comparison is made to best actual practice. We provide some evidence on both of these points using a new large dataset of vintage data synchronized with the Fed’s Greenbook forecast. This dataset consists of a large number of variables as observed at the time of each Greenbook forecast since 1979. We compare real-time, large dataset predictions to both simple univariate methods and to the Greenbook forecast. For inflation we find that univariate methods are dominated by the best atheoretical large dataset methods and that these, in turn, are dominated by Greenbook. For GDP growth, in contrast, we find that once one takes account of Greenbook’s advantage in evaluating the current state of the economy, neither large dataset methods, nor the Greenbook process offers much advantage over a univariate autoregressive forecast.

6.
Application of the Lee-Carter Model to Mortality Forecasting with Limited Data
The Lee-Carter model is the most widely used model for mortality modelling and forecasting. The traditional Lee-Carter model performs well only when the sample is large, whereas Chinese mortality data are limited and observations for some years are missing, which makes accurate forecasts difficult to achieve. Building on the limited-data mortality modelling method of Li et al. (2004), and taking the effect of the small sample size into account, this paper adopts the "double stochastic process" approach of Han et al. (2010) to construct a forecasting model for Chinese mortality under limited data, and applies it to forecast future mortality trends and life expectancy. Finally, the forecasts are compared with the mortality improvement factors used by insurance companies and with the annuity payout divisors (in months) used in the individual accounts of social pension insurance, and several conclusions and suggestions on mortality risk management are given.

7.
In this article, we develop a mixed frequency dynamic factor model in which the disturbances of both the latent common factor and of the idiosyncratic components have time-varying stochastic volatilities. We use the model to investigate business cycle dynamics in the euro area and present three sets of empirical results. First, we evaluate the impact of macroeconomic releases on point and density forecast accuracy and on the width of forecast intervals. Second, we show how our setup allows us to make a probabilistic assessment of the contribution of releases to forecast revisions. Third, we examine point and density out-of-sample forecast accuracy. We find that introducing stochastic volatility in the model contributes to an improvement in both point and density forecast accuracy. Supplementary materials for this article are available online.

8.
In this paper, we study the detailed distributional properties of integrated non-Gaussian Ornstein–Uhlenbeck (intOU) processes. Both exact and approximate results are given. We emphasize the study of the tail behaviour of the intOU process. Our results have many potential applications in financial economics, as OU processes are used as models of instantaneous variance in stochastic volatility (SV) models. In this case, an intOU process can be regarded as a model of integrated variance. Hence, the tail behaviour of the intOU process will determine the tail behaviour of returns generated by SV models.

9.
In this paper, we propose a data-driven model selection approach for the nonparametric estimation of covariance functions under very general moment assumptions on the stochastic process. Observing i.i.d. replications of the process at fixed observation points, we select the best estimator among a set of candidates using a penalized least squares estimation procedure with a fully data-driven penalty function, extending the work in Bigot et al. (Electron J Stat 4:822–855, 2010). We then provide a practical application of this estimate for a Kriging interpolation procedure to forecast rainfall data.

10.
In this article, we investigate an algorithm for the fast O(N) and approximate simulation of long memory (LM) processes of length N using the discrete wavelet transform. The algorithm generates stationary processes and is based on the notion that we can improve standard wavelet-based simulation schemes by noting that the decorrelation property of wavelet transforms is not perfect for certain LM processes. The method involves the simulation of a circular autoregressive process of order one. We demonstrate some of the statistical properties of the processes generated, with some focus on four commonly used LM processes. We compare this simulation method with the white noise wavelet simulation scheme of Percival and Walden [Percival, D. and Walden, A., 2000, Wavelet Methods for Time Series Analysis (Cambridge: Cambridge University Press)].

11.
Pattern Matching     
An important subtask of the pattern discovery process is pattern matching, where the pattern sought is already known and we want to determine how often and where it occurs in a sequence. In this paper we review the most practical techniques to find patterns of different kinds. We show how regular expressions can be searched for with general techniques, and how simpler patterns can be dealt with more simply and efficiently. We consider exact as well as approximate pattern matching. Also we cover both sequential searching, where the sequence cannot be preprocessed, and indexed searching, where we have a data structure built over the sequence to speed up the search.
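The exact/approximate distinction the abstract draws can be sketched with generic textbook machinery (not the specific methods surveyed in the paper): a regex scan for exact occurrences, and the classic O(mn) dynamic program (Sellers' algorithm) for matching with at most k edits:

```python
import re

def exact_occurrences(text, pattern):
    """All starting positions of pattern in text (overlapping matches allowed,
    via a zero-width lookahead)."""
    return [m.start() for m in re.finditer(f"(?={re.escape(pattern)})", text)]

def approx_occurrences(text, pattern, k):
    """End positions j where pattern matches a substring of text ending at j
    with at most k edits (insertion, deletion, substitution): Sellers' O(mn)
    dynamic program, keeping a single column of edit distances."""
    m = len(pattern)
    col = list(range(m + 1))          # distance of pattern prefix vs empty text
    hits = []
    for j, c in enumerate(text):
        prev, col[0] = col[0], 0      # a match may start at any text position
        for i in range(1, m + 1):
            cur = min(col[i] + 1,     # skip a text character
                      col[i - 1] + 1,  # skip a pattern character
                      prev + (pattern[i - 1] != c))  # match / substitute
            prev, col[i] = col[i], cur
        if col[m] <= k:
            hits.append(j)
    return hits
```

For example, `exact_occurrences("abracadabra", "abra")` finds both occurrences including the overlap-free pair at 0 and 7, while `approx_occurrences("pattern matching", "matchng", 1)` matches "matching" despite the missing letter.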

12.
Most existing reduced-form macroeconomic multivariate time series models employ elliptical disturbances, so that the forecast densities produced are symmetric. In this article, we use a copula model with asymmetric margins to produce forecast densities with the scope for severe departures from symmetry. Empirical and skew t distributions are employed for the margins, and a high-dimensional Gaussian copula is used to jointly capture cross-sectional and (multivariate) serial dependence. The copula parameter matrix is given by the correlation matrix of a latent stationary and Markov vector autoregression (VAR). We show that the likelihood can be evaluated efficiently using the unique partial correlations, and estimate the copula using Bayesian methods. We examine the forecasting performance of the model for four U.S. macroeconomic variables between 1975:Q1 and 2011:Q2 using quarterly real-time data. We find that the point and density forecasts from the copula model are competitive with those from a Bayesian VAR. During the recent recession the forecast densities exhibit substantial asymmetry, avoiding some of the pitfalls of the symmetric forecast densities from the Bayesian VAR. We show that the asymmetries in the predictive distributions of GDP growth and inflation are similar to those found in the probabilistic forecasts from the Survey of Professional Forecasters. Last, we find that unlike the linear VAR model, our fitted Gaussian copula models exhibit nonlinear dependencies between some macroeconomic variables. This article has online supplementary material.

13.
In this article, we propose a new empirical information criterion (EIC) for model selection which penalizes the likelihood of the data by a non-linear function of the number of parameters in the model. It is designed to be used where there are a large number of time series to be forecast. However, a bootstrap version of the EIC can be used where there is a single time series to be forecast. The EIC provides a data-driven model selection tool that can be tuned to the particular forecasting task.

We compare the EIC with other model selection criteria including Akaike’s information criterion (AIC) and Schwarz’s Bayesian information criterion (BIC). The comparisons show that for the M3 forecasting competition data, the EIC outperforms both the AIC and BIC, particularly for longer forecast horizons. We also compare the criteria on simulated data and find that the EIC does better than existing criteria in that case also.
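The abstract does not give the EIC's non-linear penalty, but the baseline criteria it is compared against can be illustrated. A minimal sketch, under our own assumptions (a simulated AR(1) series, a choice between white noise and an AR(1) fitted by ordinary least squares), using AIC = -2 log L + 2k and BIC = -2 log L + k log n:

```python
import math
import random

def gauss_loglik(resid):
    """Conditional Gaussian log-likelihood evaluated at the MLE residual variance."""
    n = len(resid)
    s2 = sum(r * r for r in resid) / n
    return -0.5 * n * (math.log(2 * math.pi * s2) + 1.0)

def ar_fit_loglik(x, p):
    """OLS fit of an AR(p), p in {0, 1}, and its log-likelihood
    (white noise also drops the first observation, to match sample sizes)."""
    if p == 0:
        return gauss_loglik(x[1:])
    phi = sum(a * b for a, b in zip(x[1:], x)) / sum(a * a for a in x[:-1])
    resid = [x[t] - phi * x[t - 1] for t in range(1, len(x))]
    return gauss_loglik(resid)

# simulate an AR(1) series with coefficient 0.8
rng = random.Random(42)
x = [0.0]
for _ in range(500):
    x.append(0.8 * x[-1] + rng.gauss(0.0, 1.0))

n = len(x) - 1
scores = {}
for p in (0, 1):
    ll, k = ar_fit_loglik(x, p), p + 1   # +1 parameter for the innovation variance
    scores[p] = {"AIC": -2 * ll + 2 * k, "BIC": -2 * ll + k * math.log(n)}

best_aic = min((0, 1), key=lambda p: scores[p]["AIC"])
best_bic = min((0, 1), key=lambda p: scores[p]["BIC"])
```

Both criteria penalize the log-likelihood linearly in the parameter count k; the EIC's contribution, per the abstract, is to replace that linear penalty with a data-driven non-linear one tuned to the forecasting task.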

14.
One solution to the dimensionality problem raised by projection of individual age-specific fertility rates is the use of parametric curves to approximate the annual age-specific rates and a multivariate time series model to forecast the curve parameters. Such a method reduces the number of time series to be modeled for women 14-45 years of age from 32 to 4 (the number of curve parameters). In addition, the curves force even long-term fertility projections to exhibit the same smooth distribution across age as historical data. The database used to illustrate this approach was age-specific fertility rates for US white women in 1921-84. An important advantage of this model is that it permits investigation of the interactions among the total fertility rate, the mean age of childbearing, and the standard deviation of age at childbearing. In the analysis of this particular database, the contemporaneous relationship between the mean and standard deviation of age at childbearing was the only significant relationship. The addition of bias forecasts to the forecast gamma curve improves forecast accuracy, especially 1-2 years ahead. The most recent US Census Bureau projections have combined a time series model with long-term projections based on demographic judgment. These official projections yielded a slightly higher ultimate mean age and slightly lower standard deviation than those resulting from the model described in this paper.

15.
We derive an exact formula for the covariance between the sampled autocovariances at any two lags for a finite time series realisation from a general stationary autoregressive moving average process. We indicate, through one particular example, how this result can be used to deduce analogous formulae for any nonstationary model of the ARUMA class, a generalisation of the ARIMA models. Such formulae then allow us to obtain approximate expressions for the covariances between all pairs of serial correlations for finite realisations from the ARUMA model. We also note that, in the limit as the series length n → ∞, our results for the ARMA class retrieve those of Bartlett (1946). Finally, we investigate an improvement to the approximation that is obtained by applying Bartlett's general asymptotic formula to finite series realisations. That such an improvement should exist can immediately be seen by consideration of our results for the simplest case of a white noise process. However, we deduce the final improved approximation, for general models, in two ways: from (corrected) results due to Davies and Newbold (1980), and by an alternative approach to theirs.

16.
Eden UT, Brown EN. Statistica Sinica 2008, 18(4): 1293-1310
Neural spike trains, the primary communication signals in the brain, can be accurately modeled as point processes. For many years, significant theoretical work has been done on the construction of exact and approximate filters for state estimation from point process observations in continuous-time. We have previously developed approximate filters for state estimation from point process observations in discrete-time and applied them in the study of neural systems. Here, we present a coherent framework for deriving continuous-time filters from their discrete-time counterparts. We present an accessible derivation of the well-known unnormalized conditional density equation for state evolution, construct a new continuous-time filter based on a Gaussian approximation, and propose a method for assessing the validity of the approximation following an approach by Brockett and Clark. We apply these methods to the problem of reconstructing arm reaching movements from simulated neural spiking activity from the primary motor cortex. This work makes explicit the connections between adaptive point process filters for analyzing neural spiking activity in continuous-time, and standard continuous-time filters for state estimation from continuous and point process observations.

17.
The exact distribution of a renewal counting process is not easy to compute and is rarely of closed form. In this article, we approximate the distribution of a renewal process using families of generalized Poisson distributions. We first compute approximations to the first several moments of the renewal process. In some cases, a closed-form approximation is obtained. It is found that each family considered has its own strengths and weaknesses. Some new families of generalized Poisson distributions are recommended. Theorems are obtained that determine when the variance-to-mean ratios of these distributions are less than (or exceed) one, without having to find the mean and variance. Some numerical comparisons are also made.
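A quick Monte Carlo check (not the paper's generalized-Poisson approximation) shows why the variance-to-mean ratio matters here: with Erlang-2 inter-arrival times, the renewal count is under-dispersed, so its ratio falls below one and a plain Poisson fit would be inappropriate. The horizon, shape, and simulation size below are arbitrary illustrative choices:

```python
import random

def renewal_count(t, draw):
    """Number of renewals in (0, t] given an i.i.d. inter-arrival sampler draw()."""
    s, n = 0.0, 0
    while True:
        s += draw()
        if s > t:
            return n
        n += 1

rng = random.Random(1)
t, sims = 10.0, 20000
# Erlang-2 inter-arrivals with mean 1 (gamma shape 2, scale 0.5)
counts = [renewal_count(t, lambda: rng.gammavariate(2.0, 0.5)) for _ in range(sims)]

mean = sum(counts) / sims
var = sum((c - mean) ** 2 for c in counts) / sims
ratio = var / mean   # under-dispersion: noticeably below 1
```

With mean inter-arrival 1 the count averages close to t = 10, while the simulated variance-to-mean ratio comes out near the squared coefficient of variation of the inter-arrival distribution (1/2 for Erlang-2).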

18.
A number of volatility forecasting studies have led to the perception that ARCH- and Stochastic Volatility-type models provide poor out-of-sample forecasts of volatility. This is primarily based on the use of traditional forecast evaluation criteria concerning the accuracy and the unbiasedness of forecasts. In this paper we provide an analytical assessment of volatility forecasting performance. We use the volatility and log-volatility frameworks to prove how the inherent noise in the approximation of the true, unobservable volatility by the squared return results in a misleading forecast evaluation, inflating the observed mean squared forecast error and invalidating the Diebold-Mariano statistic. We analytically characterize this noise and explicitly quantify its effects assuming normal errors. We extend our results using more general error structures such as the Compound Normal and the Gram-Charlier classes of distributions. We argue that evaluation problems are likely to be exacerbated by non-normality of the shocks and that non-linear and utility-based criteria can be more suitable for the evaluation of volatility forecasts.
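The inflation effect is easy to reproduce in a stripped-down setting of our own choosing: if the squared return r² = σ²z² (z standard normal) proxies a constant true variance σ², then even the perfect forecast σ² incurs mean squared error E[(σ² - σ²z²)²] = 2σ⁴ against the proxy, despite zero error against the truth:

```python
import random

rng = random.Random(7)
true_var, n = 2.0, 100_000

# squared returns r^2 = sigma^2 * z^2 as the usual noisy volatility proxy
sq_returns = [true_var * rng.gauss(0.0, 1.0) ** 2 for _ in range(n)]

# the perfect variance forecast still shows a large "error" against r^2:
# E[(sigma^2 - sigma^2 z^2)^2] = 2 * sigma^4 = 8 with sigma^2 = 2
mse_vs_proxy = sum((true_var - r2) ** 2 for r2 in sq_returns) / n
mse_vs_truth = 0.0   # against the unobservable true variance
```

The simulated MSE against the proxy settles near 8 even though the forecast is exactly right, which is the inflation of the observed mean squared forecast error that the paper characterizes analytically.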

19.
The Bernoulli and Poisson processes are two popular discrete count processes; however, both rely on strict assumptions. We instead propose a generalized homogeneous count process (which we name the Conway–Maxwell–Poisson or COM-Poisson process) that not only includes the Bernoulli and Poisson processes as special cases, but also serves as a flexible mechanism to describe count processes that approximate data with over- or under-dispersion. We introduce the process and an associated generalized waiting time distribution with several real-data applications to illustrate its flexibility for a variety of data structures. We consider model estimation under different scenarios of data availability, and assess performance through simulated and real datasets. This new generalized process will enable analysts to better model count processes where data dispersion exists in a more accommodating and flexible manner.
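A minimal sketch of the COM-Poisson probability mass function, P(k) ∝ λᵏ/(k!)^ν: ν = 1 recovers the ordinary Poisson, ν > 1 gives under-dispersion and ν < 1 over-dispersion. The truncation point and parameter values are our own illustrative choices, and the computation is done on the log scale for stability:

```python
import math

def com_poisson_pmf(lam, nu, kmax=100):
    """Truncated COM-Poisson pmf, P(k) proportional to lam**k / (k!)**nu,
    computed via log-weights to avoid overflow in k! for large k."""
    logw = [k * math.log(lam) - nu * math.lgamma(k + 1) for k in range(kmax + 1)]
    mx = max(logw)
    w = [math.exp(lw - mx) for lw in logw]
    z = sum(w)                     # truncated normalizing constant Z(lam, nu)
    return [wk / z for wk in w]

def mean_var(pmf):
    m = sum(k * p for k, p in enumerate(pmf))
    return m, sum((k - m) ** 2 * p for k, p in enumerate(pmf))

m1, v1 = mean_var(com_poisson_pmf(4.0, 1.0))   # nu = 1: ordinary Poisson(4)
m2, v2 = mean_var(com_poisson_pmf(4.0, 2.0))   # nu > 1: under-dispersed
m3, v3 = mean_var(com_poisson_pmf(4.0, 0.5))   # nu < 1: over-dispersed
```

At ν = 1 the mean and variance both equal λ = 4, while raising or lowering ν pushes the variance below or above the mean, which is the dispersion flexibility the abstract describes.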

20.
We consider a two-class processor sharing queueing system with impatient customers. The system operates under discriminatory processor sharing (DPS) scheduling. The arrival process of each class of customers is Poisson, and the service requirement of a customer is exponentially distributed. The reneging rate of a customer is constant. To analyze the performance of the system, we develop a time-scale decomposition approach to approximate the joint queue-length distribution of the two classes. Via a numerical experiment, we show that the time-scale decomposition approach gives a fairly good approximation of the queue-length distribution and the expected queue length.


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号