Similar Literature
20 similar documents found.
1.
2.
Wu Jianhua et al. Statistical Research (《统计研究》), 2015, 32(9): 97-103
Nonlinear time series with time-varying parameters are widespread in macroeconomics and in financial capital markets, yet current research focuses mainly on estimating state space models with static parameters. By introducing changepoint analysis, this paper improves the particle learning filter for static parameters and proposes a changepoint particle learning filter for estimating state space models with time-varying parameters. Simulation experiments comparing it with the classical variable-structure IMM filter show that the proposed changepoint particle learning filter has a clear advantage in tracking dynamically simulated sample data, and it can be used for real-time simulated tracking of the joint dynamic trajectory of stock prices and trading volume.
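The paper's changepoint particle learning algorithm is not reproduced here, but the core idea of tracking a latent quantity with a particle cloud can be sketched with a plain bootstrap particle filter on a toy local-level model. All function names, noise variances and particle counts below are illustrative assumptions, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(T=200, q=0.05, r=0.5):
    """Toy local-level model: the latent level theta_t follows a random walk."""
    theta = np.cumsum(rng.normal(0, np.sqrt(q), T))
    y = theta + rng.normal(0, np.sqrt(r), T)
    return theta, y

def bootstrap_particle_filter(y, n_particles=1000, q=0.05, r=0.5):
    """Track the latent level with a plain bootstrap particle filter."""
    particles = rng.normal(0, 1, n_particles)
    estimates = np.empty(len(y))
    for t, obs in enumerate(y):
        # propagate: random-walk transition for the latent level
        particles = particles + rng.normal(0, np.sqrt(q), n_particles)
        # weight by the Gaussian observation density
        log_w = -0.5 * (obs - particles) ** 2 / r
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        estimates[t] = np.sum(w * particles)
        # multinomial resampling to fight weight degeneracy
        particles = rng.choice(particles, size=n_particles, p=w)
    return estimates

theta, y = simulate()
est = bootstrap_particle_filter(y)
print("RMSE of filtered level:", np.sqrt(np.mean((est - theta) ** 2)))
```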

3.
Several bivariate beta distributions have been proposed in the literature. In particular, Olkin and Liu [A bivariate beta distribution. Statist Probab Lett. 2003;62(4):407–412] proposed a 3-parameter bivariate beta model which Arnold and Ng [Flexible bivariate beta distributions. J Multivariate Anal. 2011;102(8):1194–1202] extend to 5- and 8-parameter models. The 3-parameter model allows for only positive correlation, while the latter models can accommodate both positive and negative correlation. However, this flexibility comes at the expense of a density that is mathematically intractable. The focus of this research is on Bayesian estimation for the 5- and 8-parameter models. Since the likelihood does not exist in closed form, we apply approximate Bayesian computation, a likelihood-free approach. Simulation studies have been carried out for the 5- and 8-parameter cases under various priors and tolerance levels. We apply the 5-parameter model to a real data set by allowing it to serve as a prior for the correlated proportions of a bivariate beta-binomial model. Results and comparisons are then discussed.
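A minimal rejection-ABC sketch conveys the likelihood-free idea used above, here for the 3-parameter model only (which can be simulated as ratios of independent gammas, the usual Olkin–Liu construction). The summary statistics, priors and tolerance are illustrative assumptions, not the settings used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def rbivariate_beta(n, a1, a2, a3):
    """Simulate from the 3-parameter bivariate beta via gamma ratios:
    (U, V) = (G1/(G1+G3), G2/(G2+G3)) with independent G_i ~ Gamma(a_i, 1)."""
    g1, g2, g3 = (rng.gamma(a, 1.0, n) for a in (a1, a2, a3))
    return g1 / (g1 + g3), g2 / (g2 + g3)

def summaries(u, v):
    """Summary statistics used to compare observed and simulated data."""
    return np.array([u.mean(), v.mean(), u.std(), v.std(), np.corrcoef(u, v)[0, 1]])

def abc_rejection(u_obs, v_obs, n_draws=20000, tol=0.05):
    """Plain rejection ABC with uniform priors on the three shape parameters."""
    s_obs = summaries(u_obs, v_obs)
    accepted = []
    for _ in range(n_draws):
        theta = rng.uniform(0.5, 5.0, size=3)        # prior draw (a1, a2, a3)
        u, v = rbivariate_beta(len(u_obs), *theta)   # simulate pseudo-data
        if np.linalg.norm(summaries(u, v) - s_obs) < tol:
            accepted.append(theta)
    return np.array(accepted)

u_obs, v_obs = rbivariate_beta(500, a1=2.0, a2=3.0, a3=1.5)
post = abc_rejection(u_obs, v_obs)
print("accepted draws:", len(post),
      "posterior mean:", post.mean(axis=0) if len(post) else None)
```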

4.
The two-parameter estimator proposed by Özkale and Kaçıranlar [The restricted and unrestricted two parameter estimators. Comm Statist Theory Methods. 2007;36(15):2707–2725] is a general estimator which includes the ordinary least squares, the ridge and the Liu estimators as special cases. In the present paper we introduce an Almon two-parameter estimator, based on the two-parameter estimation procedure, to deal with the problem of multicollinearity in distributed lag models. This estimator outperforms the Almon estimator according to the matrix mean square error criterion. Moreover, a numerical example and a Monte Carlo simulation experiment are presented using different estimators of the biasing parameters.
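The following sketch shows one common way of writing the two-parameter estimator and how it collapses to OLS, ridge and Liu; the exact parameterization is recalled from the ridge/Liu literature and should be treated as an assumption rather than a quote from the paper.

```python
import numpy as np

def two_parameter_estimator(X, y, k, d):
    """One common form of the two-parameter (k, d) estimator:
        beta(k, d) = (X'X + k I)^{-1} (X'y + k d beta_OLS),
    which reduces to OLS (k = 0), ridge (d = 0) and Liu (k = 1)."""
    XtX = X.T @ X
    Xty = X.T @ y
    beta_ols = np.linalg.solve(XtX, Xty)
    p = X.shape[1]
    return np.linalg.solve(XtX + k * np.eye(p), Xty + k * d * beta_ols)

# Collinear toy design to see the shrinkage effect
rng = np.random.default_rng(2)
z = rng.normal(size=(100, 1))
X = np.hstack([z + 0.01 * rng.normal(size=(100, 1)) for _ in range(3)])
y = X @ np.array([1.0, 2.0, -1.0]) + rng.normal(size=100)

print("OLS  :", two_parameter_estimator(X, y, k=0.0, d=0.0))
print("ridge:", two_parameter_estimator(X, y, k=1.0, d=0.0))
print("TPE  :", two_parameter_estimator(X, y, k=1.0, d=0.5))
```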

5.
We demonstrate how to perform direct simulation from the posterior distribution of a class of multiple changepoint models where the number of changepoints is unknown. The class of models assumes independence between the posterior distribution of the parameters associated with segments of data between successive changepoints. This approach is based on the use of recursions, and is related to work on product partition models. The computational complexity of the approach is quadratic in the number of observations, but an approximate version, which introduces negligible error, and whose computational cost is roughly linear in the number of observations, is also possible. Our approach can be useful, for example within an MCMC algorithm, even when the independence assumptions do not hold. We demonstrate our approach on coal-mining disaster data and on well-log data. Our method can cope with a range of models, and exact simulation from the posterior distribution is possible in a matter of minutes.
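The exact posterior recursions of the abstract are not reproduced here; as a loosely related illustration of the same O(n²) "recurse over the last changepoint" structure, the sketch below runs a deterministic penalized-cost dynamic program (optimal partitioning) with a Gaussian segment cost. This finds a single segmentation rather than simulating from the posterior, and the penalty and cost choices are assumptions for the example.

```python
import numpy as np

def gaussian_segment_cost(y):
    """Precompute -2 * max log-likelihood of y[s:t] treated as one Gaussian segment."""
    n = len(y)
    cs = np.concatenate([[0.0], np.cumsum(y)])
    cs2 = np.concatenate([[0.0], np.cumsum(y ** 2)])
    cost = np.full((n + 1, n + 1), np.inf)
    for s in range(n):
        for t in range(s + 1, n + 1):
            m = t - s
            mean = (cs[t] - cs[s]) / m
            var = max((cs2[t] - cs2[s]) / m - mean ** 2, 1e-8)
            cost[s, t] = m * (np.log(2 * np.pi * var) + 1)
    return cost

def optimal_partition(y, penalty):
    """O(n^2) dynamic program over the location of the last changepoint."""
    n = len(y)
    cost = gaussian_segment_cost(y)
    F = np.full(n + 1, np.inf)
    F[0] = -penalty
    last = np.zeros(n + 1, dtype=int)
    for t in range(1, n + 1):
        cands = F[:t] + cost[:t, t] + penalty
        last[t] = int(np.argmin(cands))
        F[t] = cands[last[t]]
    # backtrack the changepoint locations
    cps, t = [], n
    while t > 0:
        t = last[t]
        if t > 0:
            cps.append(t)
    return sorted(cps)

rng = np.random.default_rng(3)
y = np.concatenate([rng.normal(0, 1, 100), rng.normal(3, 1, 100), rng.normal(0, 2, 100)])
print("estimated changepoints:", optimal_partition(y, penalty=3 * np.log(len(y))))
```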

6.
Fitting stochastic kinetic models represented by Markov jump processes within the Bayesian paradigm is complicated by the intractability of the observed-data likelihood. There has therefore been considerable attention given to the design of pseudo-marginal Markov chain Monte Carlo algorithms for such models. However, these methods are typically computationally intensive, often require careful tuning and must be restarted from scratch upon receipt of new observations. Sequential Monte Carlo (SMC) methods on the other hand aim to efficiently reuse posterior samples at each time point. Despite their appeal, applying SMC schemes in scenarios with both dynamic states and static parameters is made difficult by the problem of particle degeneracy. A principled approach for overcoming this problem is to move each parameter particle through a Metropolis-Hastings kernel that leaves the target invariant. This rejuvenation step is key to a recently proposed SMC² algorithm, which can be seen as the pseudo-marginal analogue of an idealised scheme known as iterated batch importance sampling. Computing the parameter weights in SMC² requires running a particle filter over dynamic states to unbiasedly estimate the intractable observed-data likelihood up to the current time point. In this paper, we propose to use an auxiliary particle filter inside the SMC² scheme. Our method uses two recently proposed constructs for sampling conditioned jump processes, and we find that the resulting inference schemes typically require fewer state particles than when using a simple bootstrap filter. Using two applications, we compare the performance of the proposed approach with various competing methods, including two global MCMC schemes.

7.
In the present paper, minimum Hellinger distance estimates for parameters of a bilinear time series model are presented. The probabilistic properties such as stationarity, existence of moments of the stationary distribution and strong mixing property of the model are well known (see for instance [J. Liu, A note on causality and invertibility of a general bilinear time series model, Adv. Appl. Probab. 22 (1990) 247–250; J. Liu, P.J. Brockwell, On the general bilinear time series model, J. Appl. Probab. 25 (1988) 553–564; D.T. Pham, The mixing property of bilinear and generalised random coefficients autoregressive models, Stoch. Process Appl. 23 (1986) 291–300]). We establish, under some mild conditions, the consistency and the asymptotic normality of the minimum Hellinger distance estimates of the parameters of the model.
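The minimum Hellinger distance principle itself is easy to illustrate on a much simpler model than the bilinear process treated in the paper: replace the unknown data density by a kernel estimate and choose the parameter minimizing the squared Hellinger distance to the model density. The Gaussian location model, grid and bandwidth choices below are illustrative assumptions only.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm, gaussian_kde

rng = np.random.default_rng(4)

def mhd_estimate_mean(x, grid_size=400):
    """Minimum Hellinger distance estimate of a Gaussian location parameter.
    Minimizes H^2(theta) = 1 - integral sqrt(f_theta(y) * g_hat(y)) dy,
    where f_theta is N(theta, 1) and g_hat is a kernel density estimate."""
    kde = gaussian_kde(x)
    grid = np.linspace(x.min() - 3, x.max() + 3, grid_size)
    g_hat = kde(grid)

    def hellinger_sq(theta):
        f_theta = norm.pdf(grid, loc=theta, scale=1.0)
        affinity = np.trapz(np.sqrt(f_theta * g_hat), grid)
        return 1.0 - affinity

    res = minimize_scalar(hellinger_sq, bounds=(x.min(), x.max()), method="bounded")
    return res.x

# 10% gross outliers: the MHD estimate stays near the true location, the mean does not
x = np.concatenate([rng.normal(2.0, 1.0, 180), rng.normal(15.0, 1.0, 20)])
print("sample mean:", x.mean(), " MHD estimate:", mhd_estimate_mean(x))
```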

8.
Asymmetric behaviour in both mean and variance is often observed in real time series. The approach we adopt is based on the double threshold autoregressive conditionally heteroscedastic (DTARCH) model with normal innovations. This model allows threshold nonlinearity in mean and volatility to be modelled as a result of the impact of lagged changes in assets and squared shocks, respectively. A methodology for building DTARCH models is proposed based on genetic algorithms (GAs). The most important structural parameters, that is, the regimes and thresholds, are searched for by GAs, while the remaining structural parameters, that is, the delay parameters and model orders, vary in some pre-specified intervals and are determined using exhaustive search and an Asymptotic Information Criterion (AIC)-like criterion. For each trial set of structural parameters, a DTARCH model is fitted that maximizes the (penalized) likelihood (AIC criterion). For this purpose the iteratively weighted least squares algorithm is used. Then the best model according to the AIC criterion is chosen. Extension to the double threshold generalized ARCH (DTGARCH) model is also considered. The proposed methodology is checked using both simulated and market index data. Our findings show that our GA-based procedure yields results comparable to those reported in the literature for real time series. As far as artificial time series are concerned, the proposed procedure seems able to fit the data quite well. In particular, a comparison is performed between the present procedure and the method proposed by Tsay [Tsay, R.S., 1989, Testing and modeling threshold autoregressive processes. Journal of the American Statistical Association, Theory and Methods, 84, 231–240.] for estimating the delay parameter. The former almost always yields better results than the latter. However, adopting Tsay's procedure as a preliminary stage for finding the appropriate delay parameter may save computational time, especially if the delay parameter can vary over a large interval.

9.
Real count data time series often exhibit underdispersion or overdispersion. In this paper, we develop two extensions of the first-order integer-valued autoregressive process with Poisson innovations, based on binomial thinning, for modeling integer-valued time series with equidispersion, underdispersion, and overdispersion. The main properties of the models are derived. The methods of conditional maximum likelihood, Yule–Walker, and conditional least squares are used for estimating the parameters, and their asymptotic properties are established. We also use a test based on our processes for checking whether the count time series considered is overdispersed or underdispersed. The proposed models are fitted to time series of the weekly number of syphilis cases and monthly counts of family violence, illustrating their ability to handle overdispersed and underdispersed count data.
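The building block referred to above, a first-order INAR process with binomial thinning and Poisson innovations, together with a conditional least squares fit, can be sketched as follows; the specific extensions proposed in the paper are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(5)

def simulate_inar1(n, alpha, lam):
    """INAR(1) with binomial thinning: X_t = alpha o X_{t-1} + eps_t,
    where alpha o X draws Binomial(X, alpha) survivors and eps_t ~ Poisson(lam)."""
    x = np.empty(n, dtype=int)
    x[0] = rng.poisson(lam / (1 - alpha))            # start near the stationary mean
    for t in range(1, n):
        x[t] = rng.binomial(x[t - 1], alpha) + rng.poisson(lam)
    return x

def cls_inar1(x):
    """Conditional least squares: regress X_t on X_{t-1};
    the slope estimates alpha and the intercept estimates lambda."""
    y, z = x[1:], x[:-1]
    alpha_hat = np.cov(y, z, bias=True)[0, 1] / np.var(z)
    lam_hat = y.mean() - alpha_hat * z.mean()
    return alpha_hat, lam_hat

x = simulate_inar1(2000, alpha=0.6, lam=2.0)
print("CLS estimates (alpha, lambda):", cls_inar1(x))
```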

10.
In this paper, changepoint analysis is applied to stochastic volatility (SV) models with the aim of understanding the locations and movements of high-frequency FX financial time series. Bayesian inference via Markov chain Monte Carlo is performed using a variable-dimension process for the SV parameters. An interesting result is that the FX series contain locations at which one or more positions of the sequence correspond to systemic changes, and to overall non-stationarity, in the returns process. Furthermore, we found that the changepoint locations provide an informative estimate for all FX series. Importantly, in most cases the detected changepoints can be identified with economic factors relevant to the country concerned. This supports the view that macroeconomic news and movements in financial prices are positively related.

11.
Even though integer-valued time series are common in practice, methods for their analysis have been developed only in the recent past. Several models for stationary processes with discrete marginal distributions have been proposed in the literature. Such processes assume the parameters of the model to remain constant throughout the time period. However, this need not be true in practice. In this paper, we introduce non-stationary integer-valued autoregressive (INAR) models with structural breaks to model situations where the parameters of the INAR process do not remain constant over time. Such models are useful when modelling count data time series with structural breaks. Bayesian and Markov chain Monte Carlo (MCMC) procedures for estimating the parameters and break points of such models are discussed. We illustrate the model and estimation procedure with the help of a simulation study. The proposed model is applied to two real biometrical data sets.

12.
Time-varying coefficient models with autoregressive and moving-average–generalized autoregressive conditional heteroscedasticity structure are proposed for examining the time-varying effects of risk factors in longitudinal studies. Compared with existing models in the literature, the proposed models give explicit patterns for the time-varying coefficients. Maximum likelihood and marginal likelihood (based on a Laplace approximation) are used to estimate the parameters in the proposed models. Simulation studies are conducted to evaluate the performance of these two estimation methods, which is measured in terms of the Kullback–Leibler divergence and the root mean square error. The marginal likelihood approach leads to the more accurate parameter estimates, although it is more computationally intensive. The proposed models are applied to the Framingham Heart Study to investigate the time-varying effects of covariates on coronary heart disease incidence. The Bayesian information criterion is used for specifying the time series structures of the coefficients of the risk factors.

13.
One way that has been used for identifying and estimating threshold autoregressive (TAR) models for nonlinear time series follows the Markov chain Monte Carlo (MCMC) approach via the Gibbs sampler. This route has major computational difficulties, specifically in achieving convergence to the parameter distributions. In this article, a new procedure for identifying a TAR model and for estimating its parameters is developed by following the reversible jump MCMC procedure. It is found that the proposed procedure yields a Markov chain with the desired convergence properties.
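The reversible jump sampler itself is not sketched here; the snippet below only simulates the model class being identified, a two-regime self-exciting TAR process, and checks that the two regimes display different lag-1 dependence. All parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

def simulate_setar(n, phi_low, phi_high, threshold=0.0, delay=1, sigma=1.0):
    """Two-regime self-exciting TAR model:
        X_t = phi_low  * X_{t-1} + e_t   if X_{t-delay} <= threshold
        X_t = phi_high * X_{t-1} + e_t   otherwise."""
    x = np.zeros(n + delay)
    for t in range(delay, n + delay):
        phi = phi_low if x[t - delay] <= threshold else phi_high
        x[t] = phi * x[t - 1] + rng.normal(0, sigma)
    return x[delay:]

x = simulate_setar(1000, phi_low=0.7, phi_high=-0.4)
below = x[:-1] <= 0.0
print("lag-1 autocorr below threshold:", np.corrcoef(x[1:][below], x[:-1][below])[0, 1])
print("lag-1 autocorr above threshold:", np.corrcoef(x[1:][~below], x[:-1][~below])[0, 1])
```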

14.
A new method for detecting parameter changes in the generalized autoregressive conditional heteroskedasticity GARCH(1,1) model is proposed. In the proposed method, the time series observations are divided into several segments and a GARCH(1,1) model is fitted to each segment. The goodness-of-fit of the global model composed of these local GARCH(1,1) models is evaluated using the corresponding information criterion (IC). The division that minimizes the IC defines the best model. Furthermore, since the simultaneous estimation of all possible models requires a huge amount of computation, a new time-saving algorithm is proposed. Simulation results and empirical results both indicate that the proposed method is useful in analysing financial data.
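A much-simplified version of this segmentation idea compares the information criterion of one global GARCH(1,1) fit with the summed criterion of two segment-wise fits at a single candidate breakpoint. The sketch assumes the third-party `arch` package; the paper's full search over all segmentations and its time-saving algorithm are not reproduced.

```python
import numpy as np
from arch import arch_model

rng = np.random.default_rng(7)

def simulate_garch(n, omega, alpha, beta):
    """Simulate a GARCH(1,1) return series with Gaussian innovations."""
    r, s2 = np.zeros(n), np.zeros(n)
    s2[0] = omega / (1 - alpha - beta)
    for t in range(1, n):
        s2[t] = omega + alpha * r[t - 1] ** 2 + beta * s2[t - 1]
        r[t] = np.sqrt(s2[t]) * rng.standard_normal()
    return r

def aic_garch(r):
    """AIC of a GARCH(1,1) fit to one segment."""
    return arch_model(r, vol="Garch", p=1, q=1).fit(disp="off").aic

# returns with a parameter change halfway through
r = np.concatenate([simulate_garch(1000, 0.05, 0.05, 0.90),
                    simulate_garch(1000, 0.20, 0.15, 0.70)])
split = 1000
aic_global = aic_garch(r)
aic_split = aic_garch(r[:split]) + aic_garch(r[split:])
print("global AIC:", round(aic_global, 1), " segmented AIC:", round(aic_split, 1))
```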

15.
In this article, variance stabilizing filters are discussed. A new filter with nice properties is proposed which makes use of moving averages and moving standard deviations, the latter smoothed with the Hodrick-Prescott filter. This filter is compared to a GARCH-type filter. An ARIMA model is estimated for the filtered GDP series, and the parameter estimates are used in forecasting the unfiltered series. These forecasts compare well with those of ARIMA, ARFIMA, and GARCH models based on the unfiltered data. The filter does not color white noise.
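A rough sketch of the kind of filter described above: centre the series with a moving average, estimate a moving standard deviation, smooth that volatility estimate with the Hodrick-Prescott filter, and divide. The window lengths and HP smoothing parameter are illustrative assumptions, not the paper's choices.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.filters.hp_filter import hpfilter

rng = np.random.default_rng(8)

# toy series whose volatility drifts over time
n = 600
vol = 0.5 + 0.4 * np.sin(np.linspace(0, 4 * np.pi, n)) ** 2
y = pd.Series(rng.normal(0, 1, n) * vol)

window = 25
moving_mean = y.rolling(window, center=True, min_periods=1).mean()
moving_std = y.rolling(window, center=True, min_periods=1).std().bfill().ffill()

# smooth the moving standard deviation with the HP filter (keep the trend component)
_, smooth_std = hpfilter(moving_std, lamb=1600)

filtered = (y - moving_mean) / smooth_std
print("raw std, first/second half:", round(y[:n // 2].std(), 2), round(y[n // 2:].std(), 2))
print("filtered std, first/second half:", round(filtered[:n // 2].std(), 2),
      round(filtered[n // 2:].std(), 2))
```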

16.

Two-piece location-scale models are used for modeling data presenting departures from symmetry. In this paper, we propose an objective Bayesian methodology for the tail parameter of two particular distributions in the above family: the skewed exponential power distribution and the skewed generalised logistic distribution. We apply the proposed objective approach to time series models and linear regression models where the error terms follow the distributions under study. The performance of the proposed approach is illustrated through simulation experiments and real data analysis. The methodology yields improvements in density forecasts, as shown by our analysis of electricity prices in Nordpool markets.


17.
It is well known that in a traditional outlier-free situation, the generalized quasi-likelihood (GQL) approach [B.C. Sutradhar, On exact quasilikelihood inference in generalized linear mixed models, Sankhya: Indian J. Statist. 66 (2004), pp. 261–289] performs very well in obtaining consistent as well as efficient estimates of the parameters involved in generalized linear mixed models (GLMMs). In this paper, we first examine the effect of the presence of one or more outliers on GQL estimation of the parameters in such GLMMs, especially in two important cases, count and binary mixed models. The outliers appear to cause serious biases and hence inconsistency in the estimation. As a remedy, we then propose a robust GQL (RGQL) approach in order to obtain consistent estimates of the parameters in GLMMs in the presence of one or more outliers. An extensive simulation study is conducted to examine the consistency performance of the proposed RGQL approach.

18.
Negative binomial regression (NBR) and Poisson regression (PR) have become very popular for the analysis of count data in recent years. However, if there is a high degree of relationship between the independent variables, the problem of multicollinearity arises in these models. We introduce new two-parameter estimators (TPEs) for the NBR and the PR models by unifying the two-parameter estimator (TPE) of Özkale and Kaçıranlar [The restricted and unrestricted two parameter estimators. Commun Stat Theory Methods. 2007;36:2707–2725]. These new estimators are general estimators which include the maximum likelihood (ML) estimator, the ridge estimator (RE), the Liu estimator (LE) and the contraction estimator (CE) as special cases. Furthermore, biasing parameters for these estimators are given and a Monte Carlo simulation is conducted to evaluate the performance of the estimators using the mean square error (MSE) criterion. The benefits of the new TPEs are also illustrated in an empirical application. The results show that the new proposed TPEs for the NBR and the PR models are better than the ML estimator, the RE and the LE.

19.
In this paper we are concerned with the recursive estimation of bilinear models. Some methods from linear time-invariant systems are adapted to suit bilinear time series models. The time-varying Kalman filter and the associated parameter estimation algorithm are applied to bilinear time series models. The methods are illustrated with examples.
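The recursive-estimation idea can be sketched with a scalar Kalman filter that treats a regression coefficient as a slowly varying state: the abstract applies this to bilinear models, while the example below uses a plain AR(1) to keep the code short. The noise variances q and r are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(9)

def recursive_ar1_estimate(y, q=1e-4, r=1.0):
    """Track phi_t in y_t = phi_t * y_{t-1} + e_t with a scalar Kalman filter,
    modelling phi_t as a random walk."""
    phi, P = 0.0, 1.0                     # state estimate and its variance
    path = np.empty(len(y) - 1)
    for t in range(1, len(y)):
        P += q                            # predict: random-walk drift in phi
        H = y[t - 1]                      # observation "design" is the lagged value
        S = H * P * H + r                 # innovation variance
        K = P * H / S                     # Kalman gain
        phi += K * (y[t] - H * phi)       # update with the one-step prediction error
        P *= (1 - K * H)
        path[t - 1] = phi
    return path

# AR(1) whose coefficient drifts from 0.2 to 0.8 halfway through the sample
n = 1000
phi_true = np.where(np.arange(n) < n // 2, 0.2, 0.8)
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi_true[t] * y[t - 1] + rng.normal()

path = recursive_ar1_estimate(y)
print("estimate near the end of each half:", round(path[n // 2 - 2], 2), round(path[-1], 2))
```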

20.

Structural change in any time series is practically unavoidable, and thus correctly detecting breakpoints plays a pivotal role in statistical modelling. This research considers segmented autoregressive models with exogenous variables and asymmetric GARCH errors, using GJR-GARCH and exponential-GARCH specifications, which exploit the leverage phenomenon to capture asymmetric responses to positive and negative shocks. The proposed models incorporate the skew Student-t distribution and demonstrate the advantages of the fat-tailed skew Student-t distribution over other distributions when structural changes appear in financial time series. We employ Bayesian Markov chain Monte Carlo methods to make inferences about the locations of structural change points and the model parameters, and we use the deviance information criterion to determine the optimal number of breakpoints via a sequential approach. Our models can accurately detect the number and locations of structural change points in simulation studies. For real data analysis, we examine the impacts of daily gold returns and the VIX on S&P 500 returns during 2007–2019. The proposed methods are able to integrate structural changes through the model parameters and to capture the variability of a financial market more efficiently.
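The leverage asymmetry mentioned above can be illustrated with a GJR-GARCH(1,1) recursion, where negative shocks contribute an extra term to next period's variance. The parameter values below are illustrative assumptions, not estimates from the paper.

```python
import numpy as np

rng = np.random.default_rng(10)

def simulate_gjr_garch(n, omega=0.05, alpha=0.04, gamma=0.10, beta=0.88):
    """GJR-GARCH(1,1):
    sigma2_t = omega + (alpha + gamma * I[r_{t-1} < 0]) * r_{t-1}^2 + beta * sigma2_{t-1}."""
    r = np.zeros(n)
    s2 = np.full(n, omega / (1 - alpha - gamma / 2 - beta))  # unconditional variance
    for t in range(1, n):
        lever = gamma if r[t - 1] < 0 else 0.0
        s2[t] = omega + (alpha + lever) * r[t - 1] ** 2 + beta * s2[t - 1]
        r[t] = np.sqrt(s2[t]) * rng.standard_normal()
    return r, s2

r, s2 = simulate_gjr_garch(20000)
neg, pos = r[:-1] < 0, r[:-1] > 0
print("mean next-period variance after negative shocks:", round(s2[1:][neg].mean(), 3))
print("mean next-period variance after positive shocks:", round(s2[1:][pos].mean(), 3))
```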

