Similar Articles (20 results)
1.

Structural change in any time series is practically unavoidable, so correctly detecting breakpoints plays a pivotal role in statistical modelling. This research considers segmented autoregressive models with exogenous variables and asymmetric GARCH errors, under GJR-GARCH and exponential-GARCH specifications, which use the leverage effect to capture asymmetric responses to positive and negative shocks. The proposed models incorporate the skew Student-t distribution and demonstrate the advantages of this fat-tailed distribution over alternatives when structural changes appear in financial time series. We employ Bayesian Markov chain Monte Carlo methods to make inferences about the locations of structural change points and the model parameters, and use the deviance information criterion to determine the optimal number of breakpoints via a sequential approach. The models accurately detect the number and locations of structural change points in simulation studies. For real data analysis, we examine the impacts of daily gold returns and the VIX on S&P 500 returns during 2007–2019. The proposed methods are able to integrate structural changes through the model parameters and to capture the variability of a financial market more efficiently.
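
A minimal sketch of the asymmetric-volatility building block described above: simulating a GJR-GARCH(1,1) process, in which the leverage parameter gamma adds extra variance after negative shocks. Gaussian innovations stand in for the paper's skew Student-t distribution, the segmented AR-X mean and the Bayesian estimation are omitted, and all parameter values are illustrative.

```python
import numpy as np

def simulate_gjr_garch(n, omega=0.05, alpha=0.05, gamma=0.10, beta=0.85, seed=0):
    """Simulate returns from a GJR-GARCH(1,1): the indicator term adds
    (gamma * r^2) to next-period variance only after negative shocks."""
    rng = np.random.default_rng(seed)
    r = np.zeros(n)
    # unconditional variance for symmetric innovations (E[indicator] = 1/2)
    h = np.full(n, omega / (1.0 - alpha - gamma / 2.0 - beta))
    for t in range(1, n):
        neg = 1.0 if r[t - 1] < 0.0 else 0.0
        h[t] = omega + (alpha + gamma * neg) * r[t - 1] ** 2 + beta * h[t - 1]
        r[t] = np.sqrt(h[t]) * rng.standard_normal()
    return r, h

returns, cond_var = simulate_gjr_garch(2000)
```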


2.
Many naturally occurring phenomena can be effectively modeled using self-similar processes. In such applications, accurate estimation of the scaling exponent is vital, since it is this index which characterizes the nature of the self-similarity. Although estimation of the scaling exponent has been extensively studied, previous work has generally assumed that this parameter is constant. Such an assumption may be unrealistic in settings where it is evident that the nature of the self-similarity changes as the phenomenon evolves. For such applications, the scaling exponent must be allowed to vary as a function of time, and a procedure must be available which provides a statistical characterization of this progression. In what follows, we propose and describe such a procedure. Our method uses wavelets to construct local estimates of time-varying scaling exponents for locally self-similar processes. We establish a consistency result for these estimates. We investigate the effectiveness of our procedure in a simulation study, and demonstrate its applicability in the analyses of a hydrological and a geophysical time series, each of which exhibits locally self-similar behavior.
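
A rough sketch of the wavelet idea, assuming the observed series is sampled fractional Brownian motion, for which the variance of level-j Haar detail coefficients scales as 2^(j(2H+1)); regressing log2 variance on level within sliding windows then yields a local estimate of the scaling exponent H. The window, step and levels are illustrative, and this is not the paper's estimator.

```python
import numpy as np

def haar_detail_var(x, level):
    """Variance of decimated Haar wavelet coefficients at a given level."""
    s = 2 ** level
    n = (len(x) // (2 * s)) * (2 * s)
    blocks = x[:n].reshape(-1, 2 * s)
    d = (blocks[:, :s].sum(axis=1) - blocks[:, s:].sum(axis=1)) / np.sqrt(2 * s)
    return d.var()

def local_hurst(x, window=512, step=64, levels=(1, 2, 3, 4, 5)):
    """Time-varying Hurst estimate: slope of log2 wavelet variance vs. level
    inside each sliding window (slope = 2H + 1 under the fBm assumption)."""
    centers, estimates = [], []
    for start in range(0, len(x) - window + 1, step):
        seg = x[start:start + window]
        logvar = [np.log2(haar_detail_var(seg, j)) for j in levels]
        slope = np.polyfit(levels, logvar, 1)[0]
        centers.append(start + window // 2)
        estimates.append((slope - 1.0) / 2.0)
    return np.array(centers), np.array(estimates)
```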

3.
The GARCH and stochastic volatility (SV) models are two competing, well-known and often used models to explain the volatility of financial series. In this paper, we consider a closed-form estimator for a stochastic volatility model and derive its asymptotic properties. We confirm our theoretical results by a simulation study. In addition, we propose a set of simple, strongly consistent decision rules to compare the ability of the GARCH and the SV model to fit the characteristic features observed in high-frequency financial data, such as high kurtosis and a slowly decaying autocorrelation function of the squared observations. These rules are based on a number of moment conditions that is allowed to increase with sample size. We show that our selection procedure leads to choosing the model that fits best, or the simplest model under equivalence, with probability one as the sample size increases. The finite-sample behavior of our procedure is analyzed via simulations. Finally, we provide an application to stocks in the Dow Jones Industrial Average index.
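
The decision rules contrast sample moments with their counterparts implied by each fitted model; a minimal sketch of the sample side, assuming a plain NumPy return series (the model-implied side, and the device of letting the number of moment conditions grow with the sample size, are not reproduced):

```python
import numpy as np

def acf(x, max_lag):
    """Sample autocorrelation function at lags 1..max_lag."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    denom = x @ x
    return np.array([x[:-k] @ x[k:] / denom for k in range(1, max_lag + 1)])

def moment_diagnostics(returns, max_lag=50):
    """The two features the selection rules target: kurtosis of the returns
    (about 3 for Gaussian data) and the ACF of the squared observations."""
    r = np.asarray(returns, dtype=float)
    z = r - r.mean()
    kurtosis = np.mean(z ** 4) / np.mean(z ** 2) ** 2
    return kurtosis, acf(r ** 2, max_lag)
```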

4.
In this article, we propose a new empirical information criterion (EIC) for model selection which penalizes the likelihood of the data by a non-linear function of the number of parameters in the model. It is designed to be used where there are a large number of time series to be forecast. However, a bootstrap version of the EIC can be used where there is a single time series to be forecast. The EIC provides a data-driven model selection tool that can be tuned to the particular forecasting task.

We compare the EIC with other model selection criteria, including Akaike's information criterion (AIC) and Schwarz's Bayesian information criterion (BIC). The comparisons show that for the M3 forecasting competition data, the EIC outperforms both the AIC and BIC, particularly for longer forecast horizons. We also compare the criteria on simulated data and find that the EIC does better than the existing criteria in that case as well.
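
A minimal sketch of the shared structure of such criteria for AR order selection, assuming a conditional Gaussian likelihood fitted by OLS: each criterion is -2 log-likelihood plus a penalty in the parameter count k. The "EIC-like" penalty below is purely hypothetical; in the paper the penalty function is estimated from the data (e.g., by the bootstrap).

```python
import numpy as np

def ar_loglik(x, p):
    """Conditional Gaussian log-likelihood of an OLS-fitted AR(p) model."""
    x = np.asarray(x, dtype=float)
    y = x[p:]
    X = np.column_stack([np.ones(len(y))] + [x[p - k:-k] for k in range(1, p + 1)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / len(y)
    return -0.5 * len(y) * (np.log(2 * np.pi * sigma2) + 1.0)

def criterion(x, p, penalty):
    """Generic penalized-likelihood criterion: -2 loglik + penalty(k)."""
    k = p + 2  # AR coefficients + intercept + innovation variance
    return -2.0 * ar_loglik(x, p) + penalty(k)

rng = np.random.default_rng(1)
x = rng.standard_normal(500)
n = len(x)
for name, pen in [("AIC", lambda k: 2 * k),
                  ("BIC", lambda k: k * np.log(n)),
                  ("EIC-like", lambda k: 1.5 * k ** 1.1)]:  # hypothetical tuned penalty
    best = min(range(1, 9), key=lambda p: criterion(x, p, pen))
    print(name, "selects AR order", best)
```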

5.
This paper introduces a multivariate long-memory model with structural breaks. In the proposed framework, the time series exhibits possibly fractional orders of integration which are allowed to differ in each subsample. The break date is endogenously determined using a procedure that minimizes the residual sum of squares (RSS). Monte Carlo experiments show that this method for detecting breaks performs well in large samples. As an illustration, we estimate a trivariate VAR including prices, employment and GDP in both the US and Mexico. For the subsample preceding the break, our findings for both countries are similar to those of earlier studies based on a standard VAR approach, with the variables exhibiting integer orders of integration. By contrast, the series are found to be fractionally integrated after the break, with the fractional differencing parameters being higher than one in the case of Mexico.
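
The break-date step can be illustrated in a generic least-squares setting: a minimal sketch that grid-searches the single break date minimizing the combined RSS of two separate OLS fits. The paper's fractional-integration estimation within each regime is not reproduced, and the trimming fraction is illustrative.

```python
import numpy as np

def break_date_rss(y, X, trim=0.15):
    """Return the break index k minimizing RSS(y[:k] on X[:k]) + RSS(y[k:] on X[k:])."""
    def rss(yy, XX):
        beta, *_ = np.linalg.lstsq(XX, yy, rcond=None)
        r = yy - XX @ beta
        return r @ r
    n = len(y)
    lo, hi = int(trim * n), int((1 - trim) * n)
    totals = [rss(y[:k], X[:k]) + rss(y[k:], X[k:]) for k in range(lo, hi)]
    return lo + int(np.argmin(totals))
```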

6.
Environmental variables have an important effect on the reliability of many products such as coatings and polymeric composites. Long-term prediction of the performance or service life of such products must take into account the probabilistic/stochastic nature of the outdoor weather. In this article, we propose a procedure for modeling time series of daily accumulated degradation. Daily accumulated degradation is the total amount of degradation accrued within one day and can be obtained by using a degradation rate model for the product together with the weather data. The fitted time series model can then be used to estimate the future distribution of cumulative degradation over a period of time, and to compute reliability measures such as the probability of failure. The modeling technique and estimation method are illustrated using the degradation of a solar reflector material. We also provide a method to construct approximate confidence intervals for the probability of failure.
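
A minimal sketch of the final prediction step, assuming (purely for illustration) that daily accumulated degradation follows a fitted AR(1) around a mean level: Monte Carlo paths of cumulative degradation give the probability of crossing a failure threshold. The paper's actual degradation-rate model and weather inputs are not represented, and all parameter values are hypothetical.

```python
import numpy as np

def prob_failure(mu, phi, sigma, horizon_days, threshold, n_sims=2000, seed=0):
    """Monte Carlo probability that cumulative degradation reaches the
    failure threshold within the horizon, under an AR(1) daily model."""
    rng = np.random.default_rng(seed)
    d = np.zeros((n_sims, horizon_days))
    d[:, 0] = mu
    for t in range(1, horizon_days):
        d[:, t] = mu + phi * (d[:, t - 1] - mu) + sigma * rng.standard_normal(n_sims)
    daily = np.clip(d, 0.0, None)            # degradation accrues, never reverses
    cumulative = np.cumsum(daily, axis=1)
    return (cumulative[:, -1] >= threshold).mean()

print(prob_failure(mu=0.02, phi=0.5, sigma=0.01, horizon_days=5 * 365, threshold=40.0))
```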

7.
This paper concerns model selection for autoregressive time series when the observations are contaminated with trend. We propose an adaptive least absolute shrinkage and selection operator (LASSO) type model selection method, in which the trend is estimated by B-splines, the detrended residuals are calculated, and these residuals are then used as if they were observations to optimize an adaptive LASSO type objective function. The oracle properties of this adaptive LASSO model selection procedure are established; that is, the proposed method can identify the true model with probability approaching one as the sample size increases, and the asymptotic properties of the estimators are not affected by the replacement of observations with detrended residuals. Intensive simulation studies of several constrained and unconstrained autoregressive models confirm the theoretical results. The method is illustrated with two time series data sets: annual U.S. tobacco production and annual tree-ring width measurements.
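
A compact sketch of the two stages under simplifying assumptions: the trend is removed with a truncated-power spline basis (a stand-in for the paper's B-splines), and the adaptive LASSO is implemented by the usual column-rescaling trick around scikit-learn's cross-validated Lasso rather than the paper's tuning. Knot count and lag order are illustrative.

```python
import numpy as np
from sklearn.linear_model import LassoCV

def detrend_spline(y, n_knots=8, degree=3):
    """Fit and subtract a smooth trend using a truncated-power spline basis."""
    n = len(y)
    t = np.linspace(0.0, 1.0, n)
    knots = np.linspace(0.0, 1.0, n_knots + 2)[1:-1]
    X = np.column_stack([t ** d for d in range(degree + 1)] +
                        [np.clip(t - c, 0.0, None) ** degree for c in knots])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

def adaptive_lasso_ar(resid, max_lag=8):
    """Adaptive-LASSO lag selection on the detrended residuals: rescale the
    lag columns by initial OLS estimates, fit one Lasso, then undo the scaling."""
    y = resid[max_lag:]
    X = np.column_stack([resid[max_lag - k:-k] for k in range(1, max_lag + 1)])
    ols, *_ = np.linalg.lstsq(X, y, rcond=None)
    w = np.abs(ols) + 1e-8                    # adaptive weights
    fit = LassoCV(cv=5).fit(X * w, y)
    return fit.coef_ * w                      # zero entries = excluded lags
```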

8.
Whether an individual player or team enjoys periods of good form, and when these occur, is a widely observed phenomenon typically called 'streakiness'. It is of interest to assess which teams and which players are streaky. Such competitors might have a large number of successes during some periods and few or no successes during other periods, so their success rate is not constant over time. We provide a Bayesian binary segmentation procedure for locating changepoints and the associated success rates simultaneously for these competitors. The procedure is based on a series of nested hypothesis tests, each using the Bayes factor or the Bayesian information criterion. At each stage, we only need to compare a model with one changepoint against a model with a constant success rate. Thus, the method circumvents the computational complexity that we would normally face in problems with an unknown number of changepoints. We apply the procedure to data on sports teams and players from basketball, golf and baseball.
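
A minimal sketch of the binary-segmentation recursion for a 0/1 success sequence, using BIC comparisons (one of the two options the procedure allows) and maximum-likelihood fits in place of full Bayesian computation; the minimum segment length is illustrative.

```python
import numpy as np

def bern_loglik(x):
    """Log-likelihood of i.i.d. Bernoulli trials at the MLE success rate."""
    n, s = len(x), int(x.sum())
    if s == 0 or s == n:
        return 0.0
    p = s / n
    return s * np.log(p) + (n - s) * np.log(1.0 - p)

def binary_segment(x, start=0, min_seg=10, found=None):
    """Compare a constant-rate model with the best single-changepoint model
    by BIC; if the split wins, record it and recurse on both halves."""
    if found is None:
        found = []
    n = len(x)
    if n < 2 * min_seg:
        return found
    bic_const = -2.0 * bern_loglik(x) + np.log(n)          # one rate parameter
    best_k, best_bic = None, np.inf
    for k in range(min_seg, n - min_seg):
        ll = bern_loglik(x[:k]) + bern_loglik(x[k:])
        bic = -2.0 * ll + 3.0 * np.log(n)                  # two rates + changepoint
        if bic < best_bic:
            best_k, best_bic = k, bic
    if best_bic < bic_const:
        found.append(start + best_k)
        binary_segment(x[:best_k], start, min_seg, found)
        binary_segment(x[best_k:], start + best_k, min_seg, found)
    return sorted(found)
```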

9.
Procedures for detecting change points in sequences of correlated observations (e.g., time series) can help elucidate their complicated structure. Current literature on the detection of multiple change points emphasizes the analysis of sequences of independent random variables. We address the problem of an unknown number of variance changes in the presence of long-range dependence (e.g., long-memory processes). Our results are also applicable to time series whose spectrum varies slowly across octave bands. An iterated cumulative sum of squares procedure is introduced to examine the multiscale stationarity of a time series, that is, the variance structure of the wavelet coefficients on a scale-by-scale basis. The discrete wavelet transform enables us to analyze a given time series on a series of physical scales. The result is a partitioning of the wavelet coefficients into locally stationary regions. Simulations are performed to validate the ability of this procedure to detect and locate multiple variance changes. A 'time' series of vertical ocean shear measurements is also analyzed, where a variety of nonstationary features are identified.
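
One step of the cumulative-sum-of-squares idea in isolation: for a vector of coefficients, the maximizer of |D_k| locates a candidate variance change. Applying this to the wavelet coefficients of each scale, and iterating on the resulting segments, gives a procedure of the kind described above; the comparison of the statistic with a critical value is left out of this sketch.

```python
import numpy as np

def css_break(d):
    """Centered cumulative sum of squares: returns the candidate changepoint
    and the normalized statistic max_k sqrt(n/2) |C_k / C_n - k / n|."""
    d = np.asarray(d, dtype=float)
    n = len(d)
    c = np.cumsum(d ** 2)
    D = c / c[-1] - np.arange(1, n + 1) / n
    k = int(np.abs(D).argmax())
    return k, np.sqrt(n / 2.0) * np.abs(D[k])
```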

10.
This article develops an asymmetric volatility model that takes into account structural breaks in the volatility process. Break points and the other parameters of the model are estimated using MCMC and Gibbs sampling techniques. Models with different numbers of break points are compared using the Bayes factor and BIC. We provide a formal test, and hence a new procedure for Bayes factor computation, to choose between models with different numbers of breaks. The procedure is illustrated using simulated as well as real data sets. The analysis shows evidence that the financial crisis starting in the first week of September 2008 caused a significant break in the structure of the return series of two major NYSE indices, the S&P 500 and the Dow Jones. Analysis of USD/EUR exchange rate data also shows evidence of a structural break around the same time.

11.
The article considers nonparametric inference for quantile regression models with time-varying coefficients. The errors and covariates of the regression are assumed to belong to a general class of locally stationary processes and are allowed to be cross-dependent. Simultaneous confidence tubes (SCTs) and integrated squared difference tests (ISDTs) are proposed for simultaneous nonparametric inference of the latter models with asymptotically correct coverage probabilities and Type I error rates. Our methodologies are shown to possess certain asymptotically optimal properties. Furthermore, we propose an information criterion that performs consistent model selection for nonparametric quantile regression models of nonstationary time series. For implementation, a wild bootstrap procedure is proposed, which is shown to be robust to the dependent and nonstationary data structure. Our method is applied to studying the asymmetric and time-varying dynamic structures of the U.S. unemployment rate since the 1940s. Supplementary materials for this article are available online.
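
A crude stand-in for the time-varying aspect, assuming statsmodels is available: re-fitting an ordinary quantile regression on rolling windows traces coefficient paths over time. This local-constant device is not the paper's kernel-based estimator, and it provides none of the simultaneous confidence tubes or tests; window width and quantile level are illustrative.

```python
import numpy as np
import statsmodels.api as sm

def rolling_quantile_coefs(y, x, q=0.5, window=120):
    """Quantile-regression coefficients of y on (1, x), re-fitted on each
    rolling window; row t holds the local intercept and slope."""
    coefs = []
    for start in range(len(y) - window + 1):
        X = sm.add_constant(x[start:start + window])
        res = sm.QuantReg(y[start:start + window], X).fit(q=q)
        coefs.append(np.asarray(res.params))
    return np.array(coefs)
```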

12.
We consider the estimation of a large number of GARCH models, of the order of several hundreds. Our interest lies in the identification of common structures in the volatility dynamics of the univariate time series. To do so, we classify the series into an unknown number of clusters. Within a cluster, the series share the same model and the same parameters, so each cluster contains similar series; we do not know a priori which series belongs to which cluster. The model is a finite mixture of distributions, where the component weights are unknown parameters and each component distribution has its own conditional mean and variance. Inference is carried out in a Bayesian framework, using data augmentation techniques. Simulations and an illustration using data on U.S. stocks are provided.


15.
Abrupt changes often occur in environmental and financial time series, most often due to human intervention. Change point analysis is a statistical tool for analyzing sudden changes in observations along a time series. In this paper, we propose a Bayesian extreme-value model for environmental and economic datasets that exhibit typical change point behavior, addressing the situation in which more than one change point can occur. Since block maxima are analyzed, the distribution within each regime is a generalized extreme value distribution. The change points are unknown and are treated as parameters to be estimated. Simulations of extremes with two change points showed that the proposed algorithm recovers the true parameter values and detects the true change points under different configurations. The number of change points itself must also be chosen, and the Bayesian estimation correctly identified it in each application. Analyses of environmental and financial data showed the importance of accounting for change points and revealed that these regime changes raised the return levels, increasing flood risk in cities along the rivers; the stock market series required a model with three different regimes.
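
A maximum-likelihood sketch of the single-changepoint case, assuming SciPy's generalized extreme value distribution: the summed GEV log-likelihood of the two regimes is profiled over candidate split points. The paper treats the change points (and their number) as parameters within a Bayesian model, which this grid search does not attempt; the minimum segment length is illustrative.

```python
import numpy as np
from scipy.stats import genextreme

def gev_changepoint(maxima, min_seg=15):
    """Best single split of a series of block maxima into two GEV regimes."""
    maxima = np.asarray(maxima, dtype=float)
    def seg_ll(seg):
        params = genextreme.fit(seg)           # (shape, loc, scale) by ML
        return genextreme.logpdf(seg, *params).sum()
    best_k, best_ll = None, -np.inf
    for k in range(min_seg, len(maxima) - min_seg):
        ll = seg_ll(maxima[:k]) + seg_ll(maxima[k:])
        if ll > best_ll:
            best_k, best_ll = k, ll
    return best_k, best_ll
```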

16.
VARIABLE SELECTION IN NONPARAMETRIC ADDITIVE MODELS
We consider a nonparametric additive model of a conditional mean function in which the number of variables and additive components may be larger than the sample size but the number of nonzero additive components is "small" relative to the sample size. The statistical problem is to determine which additive components are nonzero. The additive components are approximated by truncated series expansions with B-spline bases. With this approximation, the problem of component selection becomes that of selecting the groups of coefficients in the expansion. We apply the adaptive group Lasso to select nonzero components, using the group Lasso to obtain an initial estimator and reduce the dimension of the problem. We give conditions under which the group Lasso selects a model whose number of components is comparable with the underlying model, and the adaptive group Lasso selects the nonzero components correctly with probability approaching one as the sample size increases and achieves the optimal rate of convergence. The results of Monte Carlo experiments show that the adaptive group Lasso procedure works well with samples of moderate size. A data example is used to illustrate the application of the proposed method.
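
A compact sketch of the group-Lasso building block, assuming the design matrix already holds the B-spline expansions and each group indexes one additive component: proximal gradient descent with block soft-thresholding, which zeroes out whole groups at once. The adaptive reweighting step and the paper's tuning and theory are not reproduced.

```python
import numpy as np

def group_lasso(X, y, groups, lam, n_iter=500):
    """Minimize (1/2n)||y - Xb||^2 + lam * sum_g ||b_g|| by ISTA;
    'groups' is a list of index arrays, one per additive component."""
    n, p = X.shape
    lr = n / np.linalg.norm(X, 2) ** 2         # 1 / Lipschitz constant
    beta = np.zeros(p)
    for _ in range(n_iter):
        beta = beta - lr * (X.T @ (X @ beta - y) / n)
        for g in groups:                        # block soft-thresholding
            norm = np.linalg.norm(beta[g])
            scale = max(0.0, 1.0 - lr * lam / norm) if norm > 0 else 0.0
            beta[g] = scale * beta[g]
    return beta
```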

17.
Recursive methods are commonly used to solve Yule-Walker equations for autoregressive parameters given an autocovariance function. The reverse procedure can be extended to the efficient solution of various sets of equations which arise in time series analysis. Those presented in this paper include computation of the autocovariance function of an ARMA model, and the Cramér-Wold factorization.
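
The canonical recursion of this kind is Levinson-Durbin; a minimal sketch of the forward direction, solving the Yule-Walker equations for the AR coefficients and innovation variance given autocovariances at lags 0..p (the reversed and extended recursions the paper develops are not shown):

```python
import numpy as np

def levinson_durbin(acov):
    """Solve the Yule-Walker equations recursively. acov[0..p] are
    autocovariances; returns AR(p) coefficients and innovation variance."""
    acov = np.asarray(acov, dtype=float)
    p = len(acov) - 1
    phi = np.zeros(p)
    v = acov[0]
    for k in range(1, p + 1):
        kappa = (acov[k] - phi[:k - 1] @ acov[k - 1:0:-1]) / v  # reflection coefficient
        phi[:k - 1] = phi[:k - 1] - kappa * phi[:k - 1][::-1]
        phi[k - 1] = kappa
        v *= 1.0 - kappa ** 2
    return phi, v

# Example: an AR(1) with coefficient 0.5 and unit innovation variance has
# autocovariances gamma(h) = (4/3) * 0.5**h.
print(levinson_durbin([4.0 / 3.0, 2.0 / 3.0]))  # ~ (array([0.5]), 1.0)
```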

18.
Comparing two time series is an important problem in many applications. In this paper, a computational bootstrap procedure is proposed to test whether two dependent stationary time series have the same autocovariance structure. The blocks-of-blocks bootstrap on the bivariate series is employed to estimate the covariance matrix that is needed to construct the proposed test statistic. Without much additional effort, the bootstrap critical values can also be computed as a byproduct of the same bootstrap procedure. The asymptotic distribution of the test statistic under the null hypothesis is obtained. A simulation study is conducted to examine the finite-sample performance of the test; the results show that the proposed procedure with bootstrap critical values performs well empirically and is especially useful when time series are short and non-normal. The proposed test is applied to a real data set to understand the relationship between the input and output signals of a chemical process.
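
A minimal sketch of the testing idea under simplifications: a plain moving-block bootstrap resamples the two series jointly (preserving cross-dependence), and equality of autocovariance structure is judged from the squared distance between the two sample autocovariance vectors. This is an informal stand-in for the paper's blocks-of-blocks scheme and covariance-matrix-based statistic; lag count and block length are illustrative.

```python
import numpy as np

def sample_acov(x, max_lag):
    """Sample autocovariances at lags 0..max_lag."""
    x = x - x.mean()
    n = len(x)
    return np.array([x[:n - k] @ x[k:] / n for k in range(max_lag + 1)])

def acov_equality_test(x, y, max_lag=5, block=20, n_boot=999, seed=0):
    """Bootstrap p-value for H0: the two series share one autocovariance structure."""
    rng = np.random.default_rng(seed)
    n = len(x)
    observed = sample_acov(x, max_lag) - sample_acov(y, max_lag)
    stat = observed @ observed
    starts = np.arange(n - block + 1)
    boot = np.empty(n_boot)
    for b in range(n_boot):
        picks = rng.choice(starts, size=n // block + 1)
        idx = np.concatenate([np.arange(s, s + block) for s in picks])[:n]
        diff = sample_acov(x[idx], max_lag) - sample_acov(y[idx], max_lag)
        diff = diff - observed                # center at the observed difference
        boot[b] = diff @ diff
    return stat, (boot >= stat).mean()
```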

19.
We describe a model-based approach to analyse space-time surveillance data on meningococcal disease. Such data typically comprise a number of time series of disease counts, each representing a specific geographical area. We propose a hierarchical formulation, where latent parameters capture temporal, seasonal and spatial trends in disease incidence. We then add, for each area, a hidden Markov model to describe potential additional (autoregressive) effects of the number of cases at the previous time point. Different specifications for the functional form of this autoregressive term are compared, involving the number of cases in the same or in neighbouring areas. The two states of the Markov chain can be interpreted as representing an 'endemic' and a 'hyperendemic' state. The methodology is applied to a data set of monthly counts of the incidence of meningococcal disease in the 94 départements of France from 1985 to 1997. Inference is carried out using Markov chain Monte Carlo simulation techniques in a fully Bayesian framework. A central feature of our model is the possibility of calculating, for each region and each time point, the posterior probability of being in a hyperendemic state, adjusted for global spatial and temporal trends, which we believe is of particular public health interest.
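
A minimal sketch of the quantity highlighted at the end, for a single area and with everything simplified to a two-state Poisson hidden Markov model with fixed, hypothetical parameters (the paper estimates all of this, with trends, by MCMC): the forward-backward algorithm returns the posterior probability of the 'hyperendemic' state at each time point.

```python
import numpy as np
from scipy.stats import poisson

def hyperendemic_probability(counts, lam=(2.0, 10.0),
                             trans=((0.95, 0.05), (0.10, 0.90)), init=(0.9, 0.1)):
    """Posterior P(state = hyperendemic) at each time point, via the scaled
    forward-backward recursions of a 2-state Poisson HMM."""
    counts = np.asarray(counts)
    T = len(counts)
    A = np.asarray(trans)
    emis = np.stack([poisson.pmf(counts, l) for l in lam], axis=1)  # T x 2
    alpha = np.zeros((T, 2))
    alpha[0] = np.asarray(init) * emis[0]
    alpha[0] /= alpha[0].sum()
    for t in range(1, T):                      # forward pass, normalized each step
        alpha[t] = (alpha[t - 1] @ A) * emis[t]
        alpha[t] /= alpha[t].sum()
    beta = np.ones((T, 2))
    for t in range(T - 2, -1, -1):             # backward pass
        beta[t] = A @ (emis[t + 1] * beta[t + 1])
        beta[t] /= beta[t].sum()
    post = alpha * beta
    post /= post.sum(axis=1, keepdims=True)
    return post[:, 1]
```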

20.
Time series within fields such as finance and economics are often modelled using long-memory processes. Alternative studies of the same data can suggest that a series may instead contain a 'changepoint' (a point within the time series where the data-generating process has changed). These two classes of models have been shown to share features, for instance in their spectra, so that without prior knowledge it is difficult to assess which model is more appropriate. We demonstrate that considering this problem in a time-varying environment, via the time-varying spectrum, removes this ambiguity. Using the wavelet spectrum, we then use a classification approach to determine the more appropriate model (long memory or changepoint). Simulation results are presented across a number of models, followed by applications to stock cross-correlations and US inflation. The results indicate that the proposed classification outperforms an existing hypothesis-testing approach on a number of models and performs comparably across others.
