Similar documents
20 similar documents found (search time: 727 ms)
1.
Principal component regression uses principal components (PCs) as regressors. It is particularly useful in prediction settings with high-dimensional covariates. The existing literature on Bayesian approaches is relatively sparse. We introduce a Bayesian approach that is robust to outliers in both the dependent variable and the covariates. Outliers can be thought of as observations that are not in line with the general trend. The proposed approach automatically penalises these observations so that their impact on the posterior gradually vanishes as they move further away from the general trend, a property known in Bayesian statistics as whole robustness. The predictions produced are thus consistent with the bulk of the data. The approach also exploits the geometry of PCs to efficiently identify those that are significant. Individual predictions obtained from the resulting models are consolidated through model averaging to account for model uncertainty. The approach is evaluated on real data and compared to its nonrobust Bayesian counterpart, the traditional frequentist approach and a commonly employed robust frequentist method. Detailed guidelines to automate the entire statistical procedure are provided. All required code is made available; see arXiv:1711.06341.
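The nonrobust backbone of this approach, ordinary principal component regression, is easy to sketch. The snippet below is a generic NumPy illustration with my own naming (`pcr_fit`, `pcr_predict`), not the authors' robust Bayesian procedure: it extracts the top-k PCs by SVD and regresses the centred response on the PC scores.

```python
import numpy as np

def pcr_fit(X, y, k):
    """Ordinary (nonrobust) principal component regression with k components:
    project the centred covariates onto the top-k PCs, then least squares
    on the scores."""
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xc = X - x_mean
    # PCs via SVD of the centred design matrix
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    V = Vt[:k].T                      # loadings of the top-k PCs
    Z = Xc @ V                        # PC scores used as regressors
    gamma = np.linalg.lstsq(Z, y - y_mean, rcond=None)[0]
    beta = V @ gamma                  # coefficients on the original scale
    return beta, x_mean, y_mean

def pcr_predict(X, beta, x_mean, y_mean):
    return (X - x_mean) @ beta + y_mean

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.1, size=200)
beta, xm, ym = pcr_fit(X, y, k=10)    # k = p recovers ordinary least squares
pred = pcr_predict(X, beta, xm, ym)
```

Choosing k < p is where the method's PC-selection and model-averaging machinery would come in; this sketch simply fixes k.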

2.
We develop a discrete-time affine stochastic volatility model with time-varying conditional skewness (SVS). Importantly, we disentangle the dynamics of conditional volatility and conditional skewness in a coherent way. Our approach allows current asset returns to be asymmetric conditional on current factors and past information, which we term contemporaneous asymmetry. Conditional skewness is an explicit combination of the conditional leverage effect and contemporaneous asymmetry. We derive analytical formulas for various return moments that are used for generalized method of moments (GMM) estimation. Applying our approach to S&P 500 index daily returns and option data, we show that one- and two-factor SVS models provide a better fit for both the historical and the risk-neutral distribution of returns than existing affine generalized autoregressive conditional heteroscedasticity (GARCH) and stochastic volatility with jumps (SVJ) models. Our results are not due to overparameterization of the model: the one-factor SVS models have the same number of parameters as their one-factor GARCH competitors and fewer than the SVJ benchmark.

3.
There has recently been growing interest in modeling and estimating alternative continuous time multivariate stochastic volatility models. We propose a continuous time fractionally integrated Wishart stochastic volatility (FIWSV) process and derive the conditional Laplace transform of the FIWSV model in order to obtain a closed-form expression for the moments. A two-step procedure is used: the parameter of fractional integration is estimated via the local Whittle estimator in the first step, and the remaining parameters via the generalized method of moments in the second. Monte Carlo results show that the procedure performs reasonably well in finite samples. The empirical results for the S&P 500 and FTSE 100 indexes show that the data favor the new FIWSV process over the one-factor and two-factor Wishart autoregressive models for the covariance structure.
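The first step of such two-step procedures, local Whittle estimation of the memory parameter, can be illustrated generically. The sketch below implements a standard Robinson-type local Whittle objective by grid search; the bandwidth choice m = n^0.65 and the grid are my assumptions for illustration, not the paper's settings.

```python
import numpy as np

def local_whittle(x, m=None):
    """Grid-search local Whittle estimate of the fractional integration
    parameter d, using the periodogram at the first m Fourier frequencies."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    m = m or int(n ** 0.65)                       # bandwidth (assumption)
    lam = 2 * np.pi * np.arange(1, m + 1) / n     # Fourier frequencies
    dft = np.fft.fft(x - x.mean())[1:m + 1]
    I = np.abs(dft) ** 2 / (2 * np.pi * n)        # periodogram ordinates

    def obj(d):
        # concentrated local Whittle objective R(d)
        return (np.log(np.mean(lam ** (2 * d) * I))
                - 2 * d * np.mean(np.log(lam)))

    grid = np.linspace(-0.49, 0.99, 600)
    return grid[np.argmin([obj(d) for d in grid])]

# white noise has memory parameter d = 0
rng = np.random.default_rng(1)
d_hat = local_whittle(rng.normal(size=4096))
```

A production implementation would use a proper numerical optimizer instead of a grid, but the grid keeps the objective explicit.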

4.
This article develops an asymmetric volatility model that takes into account structural breaks in the volatility process. Break points and the other parameters of the model are estimated using MCMC and Gibbs sampling techniques. Models with different numbers of break points are compared using the Bayes factor and the BIC. We provide a formal test, and hence a new procedure for Bayes factor computation, to choose between models with different numbers of breaks. The procedure is illustrated using simulated as well as real data sets. The analysis provides evidence that the financial crisis that hit the market in the first week of September 2008 caused a significant break in the structure of the return series of two major NYSE indices, the S&P 500 and the Dow Jones. Analysis of the USD/EUR exchange rate data also shows evidence of a structural break around the same time.

5.
We propose a novel Dirichlet-based Pólya tree (D-P tree) prior on the copula and, based on it, a nonparametric Bayesian inference procedure. Through theoretical analysis and simulations, we show that the flexibility of the D-P tree prior ensures its consistency in copula estimation, so it can detect more subtle and complex copula structures than earlier nonparametric Bayesian models, such as a Gaussian copula mixture. Furthermore, the continuity of the imposed D-P tree prior leads to a more favourable smoothing effect in copula estimation than classic frequentist methods, especially with small sets of observations. We also apply our method to copula prediction between the S&P 500 index and IBM stock prices during the 2007–08 financial crisis, finding that D-P tree-based methods enjoy strong robustness and flexibility over classic methods under such irregular market behaviour.

6.
We propose a state-space approach for GARCH models with time-varying parameters that can deal with the non-stationarity usually observed in a wide variety of time series. The parameters of the non-stationary model are allowed to vary smoothly over time through non-negative deterministic functions. We implement the estimation of the time-varying parameters in the time domain through the Kalman filter recursive equations, finding a state-space representation of a class of time-varying GARCH models. We provide prediction intervals for time-varying GARCH models and, additionally, propose a simple methodology for handling missing values. Finally, the proposed methodology is applied to the Chilean stock market index (IPSA) and to the American Standard & Poor's 500 index (S&P 500).
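The Kalman filter recursions that drive this kind of time-domain estimation are easiest to see on a scalar local-level model. This is a textbook sketch, not the article's GARCH state-space representation: state s_t = a·s_{t-1} + w_t observed as y_t = s_t + v_t.

```python
import numpy as np

def kalman_filter(y, a, q, r, m0=0.0, p0=1e4):
    """Scalar Kalman filter for  s_t = a*s_{t-1} + w_t,  y_t = s_t + v_t,
    with w_t ~ N(0, q) and v_t ~ N(0, r). Returns the filtered state means."""
    m, p = m0, p0
    out = []
    for yt in y:
        # predict step
        m, p = a * m, a * a * p + q
        # update step
        k = p / (p + r)               # Kalman gain
        m = m + k * (yt - m)
        p = (1 - k) * p
        out.append(m)
    return np.array(out)

rng = np.random.default_rng(2)
s = np.cumsum(rng.normal(scale=0.1, size=500))    # slowly varying true state
y = s + rng.normal(scale=1.0, size=500)           # noisy observations
m = kalman_filter(y, a=1.0, q=0.01, r=1.0)
```

The filtered path tracks the hidden state far more closely than the raw observations, which is what makes the recursion useful for smoothly time-varying parameters.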

7.
This article proposes a new directional dependence method based on the Gaussian copula beta regression model. In particular, we consider an asymmetric generalized autoregressive conditional heteroscedasticity (GARCH) model for the marginal distribution of standardized residuals, which transforms conditionally heteroscedastic data into a white noise process. We verify the proposed directional dependence method with simulated data generated by an asymmetric bivariate copula. For multivariate directional dependence with the Gaussian copula beta regression model, we employ a three-dimensional Archimedean copula to generate trivariate data and then show the directional dependence of one random variable given the two others. With the West Texas Intermediate daily price (WTI) and the Standard & Poor's 500 (S&P 500), our proposed method reveals that the directional dependence from WTI to the S&P 500 is greater than that from the S&P 500 to WTI. To validate this empirical result, a Granger causality test is conducted, confirming the same conclusion.
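The Granger causality check used for validation can be sketched with plain OLS: compare the restricted autoregression of y on its own lags with the unrestricted one that adds lags of x, via an F statistic. The lag length p = 2 and the simulated system below are illustrative assumptions, not the article's data.

```python
import numpy as np

def granger_f(x, y, p=2):
    """F statistic for 'x Granger-causes y' with p lags: the restricted
    model regresses y_t on its own lags; the unrestricted model adds
    lags of x."""
    n = len(y)
    Y = y[p:]
    lags_y = np.column_stack([y[p - k:n - k] for k in range(1, p + 1)])
    lags_x = np.column_stack([x[p - k:n - k] for k in range(1, p + 1)])
    ones = np.ones((n - p, 1))

    def rss(Z):
        beta = np.linalg.lstsq(Z, Y, rcond=None)[0]
        e = Y - Z @ beta
        return e @ e

    rss_r = rss(np.hstack([ones, lags_y]))
    rss_u = rss(np.hstack([ones, lags_y, lags_x]))
    df = n - p - 1 - 2 * p                   # residual df, unrestricted model
    return ((rss_r - rss_u) / p) / (rss_u / df)

rng = np.random.default_rng(3)
n = 1000
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.3 * y[t - 1] + 0.8 * x[t - 1] + rng.normal()
f_xy = granger_f(x, y, p=2)   # x drives y with one lag: large statistic
f_yx = granger_f(y, x, p=2)   # no feedback: near its null level
```

Comparing f_xy against an F(p, df) critical value gives the usual test decision.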

8.

We discuss the multivariate (2L-variate) correlation structure and the asymptotic distribution of the group-sequential weighted logrank statistics formulated when monitoring two correlated event-time outcomes in clinical trials. The asymptotic distribution and the variance–covariance of the 2L-variate weighted logrank statistic are derived for various group-sequential trial designs. These results are used to determine a group-sequential testing procedure based on calendar times or information fractions. We apply the theoretical results to a group-sequential method for monitoring a clinical trial with early stopping for efficacy when the trial is designed to evaluate the joint effect on two correlated event-time outcomes. We illustrate the method with an application to a clinical trial and describe how to calculate the required sample sizes and numbers of events.


9.
This paper is concerned with testing and dating structural breaks in the dependence structure of multivariate time series. We consider a cumulative sum (CUSUM) type test for constant copula-based dependence measures, such as Spearman's rank correlation and quantile dependencies. The asymptotic null distribution is not known in closed form, so critical values are estimated by an i.i.d. bootstrap procedure. We analyze size and power properties in a simulation study under different dependence settings, including skewed and fat-tailed distributions. To date breakpoints and to decide whether two estimated break locations belong to the same break event, we propose a pivot confidence interval procedure. Finally, we apply the test to historical data on 10 large financial firms covering the last financial crisis, from 2002 to mid-2013.
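A minimal version of a CUSUM-type test with i.i.d. bootstrap critical values can be sketched as follows. The dependence ingredient used here, products of normalised ranks (the building block of Spearman's rho), the sample sizes, and the number of bootstrap replicates are my illustrative choices, not the paper's exact statistic.

```python
import numpy as np

def cusum_stat(u, v):
    """Max-CUSUM of the centred products of normalised ranks."""
    n = len(u)
    ru = np.argsort(np.argsort(u)) / (n - 1)   # ranks scaled to [0, 1]
    rv = np.argsort(np.argsort(v)) / (n - 1)
    g = ru * rv
    return np.max(np.abs(np.cumsum(g - g.mean()) / np.sqrt(n)))

def bootstrap_pvalue(u, v, B=200, seed=0):
    """i.i.d. resampling of pairs destroys any change-point structure while
    preserving the average dependence, approximating the null distribution."""
    rng = np.random.default_rng(seed)
    t0 = cusum_stat(u, v)
    n = len(u)
    boot = []
    for _ in range(B):
        i = rng.integers(0, n, n)
        boot.append(cusum_stat(u[i], v[i]))
    return np.mean(np.array(boot) >= t0)

rng = np.random.default_rng(4)
z = rng.normal(size=(1000, 2))
u = z[:, 0]
v = np.concatenate([z[:500, 1], u[500:]])  # dependence switches on mid-sample
p_break = bootstrap_pvalue(u, v)
```

A small p-value signals that the dependence level is not constant over the sample, as engineered here.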

10.
With a parametric model, a measure of departure for an interest parameter is often easily constructed but frequently depends in distribution on nuisance parameters; eliminating such nuisance parameter effects is a central problem of statistical inference. Fraser & Wong (1993) proposed a nuisance-averaging, or approximate Studentization, method for eliminating nuisance parameter effects. They showed that, for many standard problems where an exact answer is available, the averaging method reproduces it. They also showed that, if the exact answer is unavailable, as, say, in the gamma-mean problem, the averaging method provides a simple approximation that is very close to the one obtained from third-order asymptotic theory. The general asymptotic accuracy of the method, however, has not been examined. In this paper, we show in a general asymptotic context that the averaging method is a second-order procedure for eliminating the effects of nuisance parameters.

11.
This paper introduces a new method to estimate the spectral distribution of a population covariance matrix from high-dimensional data. The method is founded on a meaningful generalization of the seminal Marčenko–Pastur equation, originally defined in the complex plane, to the real line. Beyond its easy implementation and established asymptotic consistency, the new estimator outperforms two existing estimators from the literature in almost all the situations tested in a simulation experiment. An application to the analysis of the correlation matrix of S&P 500 daily stock returns is also given.

12.
We present a new semi-parametric model for the prediction of implied volatility surfaces that can be estimated using machine learning algorithms. Given a reasonable starting model, a boosting algorithm based on regression trees sequentially minimizes generalized residuals computed as differences between observed and estimated implied volatilities. To overcome the poor predictive power of existing models, we include a grid in the region of interest and implement a cross-validation strategy to find an optimal stopping value for the boosting procedure. Backtesting the out-of-sample performance on a large data set of implied volatilities from S&P 500 options, we provide empirical evidence of the strong predictive power of our model.
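The core mechanism, boosting that sequentially fits residuals with regression trees, can be illustrated with depth-one trees (stumps) under squared error. The learning rate and number of rounds below are arbitrary, and no cross-validated early stopping is implemented; this is a generic sketch, not the paper's estimator.

```python
import numpy as np

def fit_stump(x, r):
    """Best single-split regression stump for residuals r on feature x."""
    order = np.argsort(x)
    xs, rs = x[order], r[order]
    csum = np.cumsum(rs)
    total, n = csum[-1], len(rs)
    best_score, best = np.inf, None
    for i in range(1, n):
        if xs[i] == xs[i - 1]:
            continue
        lm = csum[i - 1] / i                    # left-leaf mean
        rm = (total - csum[i - 1]) / (n - i)    # right-leaf mean
        score = -(i * lm ** 2 + (n - i) * rm ** 2)   # -explained sum of squares
        if score < best_score:
            best_score = score
            best = ((xs[i - 1] + xs[i]) / 2, lm, rm)
    return best

def boost(x, y, rounds=200, lr=0.1):
    """L2 boosting with stumps: each round fits the current residuals and
    adds a shrunken correction to the prediction."""
    pred = np.full_like(y, y.mean())
    stumps = []
    for _ in range(rounds):
        split, lm, rm = fit_stump(x, y - pred)
        pred = pred + lr * np.where(x <= split, lm, rm)
        stumps.append((split, lm, rm))
    return pred, stumps

rng = np.random.default_rng(9)
x = rng.uniform(size=300)
y = np.where(x < 0.5, 0.0, 1.0) + rng.normal(scale=0.05, size=300)
pred, stumps = boost(x, y)
```

In the paper's setting, the stopping round would be chosen by cross-validation rather than fixed in advance.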

13.
We propose a modification of the local polynomial estimation procedure to account for the “within-subject” correlation present in panel data. The proposed procedure is simple to compute and has a closed-form expression. We study the asymptotic bias and variance of the proposed procedure and show that it outperforms the working-independence estimator uniformly up to the first order. A simulation study shows that the efficiency gains of the proposed method in the presence of “within-subject” correlation can be significant in small samples. For illustration, the procedure is applied to explore the impact of market concentration on airfares.
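For reference, the working-independence local linear estimator that such modifications improve upon looks as follows; the Gaussian kernel and bandwidth are illustrative choices, and the within-subject reweighting of the paper is not included.

```python
import numpy as np

def local_linear(x, y, x0, h):
    """Working-independence local linear smoother at points x0 with a
    Gaussian kernel of bandwidth h, solved as weighted least squares."""
    out = []
    for t in np.atleast_1d(x0):
        w = np.exp(-0.5 * ((x - t) / h) ** 2)          # kernel weights
        X = np.column_stack([np.ones_like(x), x - t])  # local linear design
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)[0]
        out.append(beta[0])        # intercept = fitted value at t
    return np.array(out)

rng = np.random.default_rng(5)
x = np.sort(rng.uniform(0, 1, 400))
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.2, size=400)
grid = np.linspace(0.1, 0.9, 9)
fhat = local_linear(x, y, grid, h=0.05)
```

The closed-form expression in the abstract is of this weighted least squares type, with the kernel weights replaced by correlation-adjusted ones.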

14.
The Heston-STAR model is a new class of stochastic volatility models, defined by generalizing the Heston model so that the volatility of the volatility process, as well as the correlation between asset log-returns and variance shocks, can change across regimes via smooth transition autoregressive (STAR) functions. The form of the STAR functions is very flexible, much more so than the functions introduced in Jones (J Econom 116:181–224, 2003), and provides the framework for a wide range of stochastic volatility models. A Bayesian approach using data augmentation techniques is used to estimate the parameters of our model. We also explore the goodness of fit of the Heston-STAR model. Our analysis of the S&P 500 and the VIX index demonstrates that the Heston-STAR model copes with large market fluctuations (such as those in 2008) better than the standard Heston model.

15.
This article is an empirical application of the search model with an unknown distribution, as introduced by Rothschild in 1974. For searchers who hold Dirichlet priors, we develop a novel characterization of optimal search behavior. Our solution delivers easily computable formulas for the ex ante purchase probabilities as outcomes of search, as required by discrete-choice-based estimation. Using our method, we investigate the consequences of consumer learning for the properties of search-generated demand. Holding search costs constant, the search model with a known distribution predicts larger price elasticities, mainly for lower-priced products. We estimate a search model with Dirichlet priors on a dataset of prices and market shares of S&P 500 mutual funds. We find that assuming no uncertainty in consumer priors leads to substantial biases in search cost estimates.

16.
For nonstationary processes, the time-varying correlation structure provides useful insights into the underlying model dynamics. We study estimation and inference for the local autocorrelation process in locally stationary time series. Our simultaneous confidence band can be used to address important hypothesis testing problems, such as whether the local autocorrelation process is indeed time-varying and whether the local autocorrelation is zero. In particular, our result provides an important generalization of the R function acf() to locally stationary Gaussian processes. Simulation studies and two empirical applications are presented. For the global temperature series, we find that the local autocorrelations are time-varying and have a “V” shape during 1910–1960. For the S&P 500 index, we conclude that the returns satisfy the efficient-market hypothesis, whereas the magnitudes of the returns show significant local autocorrelations.
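The stationary special case, the sample ACF with the usual white-noise band as produced by R's acf(), can be sketched in a few lines; the paper's contribution replaces this with a time-varying local estimate and a simultaneous (rather than pointwise) band.

```python
import numpy as np

def acf(x, nlags=20):
    """Sample autocorrelations up to nlags, plus the usual pointwise
    +/- 1.96/sqrt(n) white-noise band."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    n = len(x)
    denom = np.dot(x, x)
    rho = np.array([np.dot(x[:n - k], x[k:]) / denom
                    for k in range(nlags + 1)])
    return rho, 1.96 / np.sqrt(n)

rng = np.random.default_rng(6)
rho, band = acf(rng.normal(size=2000))   # white noise: lags 1+ inside the band
```

For white noise, roughly 95% of the nonzero lags should fall inside the band, which is the visual check acf() plots provide.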

17.
When historical data are available, incorporating them optimally into the current data analysis can improve the quality of statistical inference. In Bayesian analysis, this can be achieved using the quality-adjusted priors of Zellner or the power priors of Ibrahim and coauthors. These rules are constructed by raising the prior and/or the sample likelihood to exponent values that act as measures of the compatibility of their quality or of the proximity of the historical data to the current data. This paper presents a general optimal procedure that unifies these rules and is derived by minimizing a Kullback–Leibler divergence under a divergence constraint. We show that the exponent values are directly related to the divergence constraint set by the user and investigate the effect of this choice theoretically and through sensitivity analysis. We show that this approach yields ‘100% efficient’ information processing rules in the sense of Zellner. Monte Carlo experiments are conducted to investigate the effect of the historical and current sample sizes on the optimal rule. Finally, we illustrate these methods by applying them to real data sets.
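The effect of the likelihood exponent is easiest to see in a conjugate toy case. The sketch below applies a power prior with exponent a0 to a normal mean with known variance, so the historical sample enters with its effective size scaled by a0; the specific model and hyperparameters are my assumptions for illustration, not the paper's general KL-optimal rule.

```python
import numpy as np

def power_prior_posterior(y, y0, a0, sigma2=1.0, mu0=0.0, tau2=1e6):
    """Posterior mean and variance for a normal mean with known variance
    sigma2, initial prior N(mu0, tau2), and the historical likelihood of
    y0 raised to the power a0 in [0, 1]."""
    n, n0 = len(y), len(y0)
    prec = 1 / tau2 + a0 * n0 / sigma2 + n / sigma2   # posterior precision
    mean = (mu0 / tau2
            + a0 * n0 * np.mean(y0) / sigma2
            + n * np.mean(y) / sigma2) / prec
    return mean, 1 / prec

rng = np.random.default_rng(7)
y0 = rng.normal(loc=1.0, size=200)      # historical sample, shifted mean
y = rng.normal(loc=0.0, size=50)        # current sample
m_full, v_full = power_prior_posterior(y, y0, a0=1.0)   # pool fully
m_none, v_none = power_prior_posterior(y, y0, a0=0.0)   # discard history
```

Intermediate a0 values interpolate between the two extremes; the paper's procedure chooses the exponent from a divergence constraint rather than fixing it.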

18.
In this work, we discuss the class of bilinear GARCH (BL-GARCH) models, which can capture simultaneously two key properties of non-linear time series: volatility clustering and leverage effects. It has often been observed that the marginal distributions of such time series have heavy tails; we therefore examine the BL-GARCH model in a general setting under some non-normal distributions. We investigate some probabilistic properties of this model and conduct a Monte Carlo experiment to evaluate the small-sample performance of maximum likelihood estimation (MLE) for various specifications. Finally, within-sample estimation properties are studied using S&P 500 daily returns, which exhibit the features of interest, volatility clustering and leverage effects. The main results suggest that the Student-t BL-GARCH is highly appropriate for describing the S&P 500 daily returns.

19.
In this article, we investigate the effects of carefully modeling the long-run dynamics of the volatilities of stock market returns on the conditional correlation structure. To this end, we allow the individual unconditional variances in conditional correlation generalized autoregressive conditional heteroscedasticity (CC-GARCH) models to change smoothly over time by incorporating a nonstationary component in the variance equations, as in the spline-GARCH model and the time-varying (TV-)GARCH model. The variance equations combine the long-run and the short-run dynamic behavior of the volatilities. The structure of the conditional correlation matrix is assumed to be either time-independent or time-varying. We apply our model to pairs of seven daily stock returns belonging to the S&P 500 composite index and traded on the New York Stock Exchange. The results suggest that accounting for deterministic changes in the unconditional variances improves the fit of multivariate CC-GARCH models to the data. The effect of careful specification of the variance equations on the estimated correlations varies: in some cases it is rather small, in others more discernible. We also show empirically that CC-GARCH models with time-varying unconditional variances based on the TV-GARCH model outperform the other models under study in terms of out-of-sample forecasting performance. In addition, we find that portfolio volatility-timing strategies based on time-varying unconditional variances often outperform the unmodeled long-run variances strategy out of sample. As a by-product, we generalize news impact surfaces to the situation in which both the GARCH equations and the conditional correlations contain a deterministic component that is a function of time.

20.
We apply multivariate shrinkage to estimate local area rates of unemployment and economic inactivity using UK Labour Force Survey data. The method exploits the similarity of the rates of claiming unemployment benefit and the unemployment rates as defined by the International Labour Organisation. This is done without any distributional assumptions, relying merely on the high correlation of the two rates. The estimation is integrated with a multiple-imputation procedure for the missing employment status of subjects in the database (item non-response). The hot deck method used in the imputations is adapted to reflect the uncertainty in the model for non-response. The method is motivated as an improvement on the current operational procedure, in which the imputed value is a non-stochastic function of the data. An extension of the procedure to subjects who are absent from the database (unit non-response) is proposed.
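A random hot deck of the kind adapted here can be sketched generically: each missing value is replaced by a donor drawn at random from the observed values in the same imputation class. The function name, the class structure, and the use of NaN as the missing-value marker are my illustrative choices.

```python
import numpy as np

def hot_deck_impute(values, classes, rng):
    """Random hot deck: every np.nan in `values` is replaced by a donor
    drawn uniformly from the observed values sharing its class label."""
    values = values.astype(float).copy()
    for c in np.unique(classes):
        mask = classes == c
        donors = values[mask & ~np.isnan(values)]
        missing = mask & np.isnan(values)
        if missing.any():
            values[missing] = rng.choice(donors, size=missing.sum())
    return values

rng = np.random.default_rng(8)
vals = np.array([1.0, 2.0, np.nan, 10.0, np.nan, 12.0])
cls = np.array([0, 0, 0, 1, 1, 1])
filled = hot_deck_impute(vals, cls, rng)
```

Because the donor is drawn at random, repeating the imputation yields different completed data sets, which is exactly the stochastic element multiple imputation needs to reflect non-response uncertainty.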
