Similar Documents
 (20 results)
1.
Using survey data, we characterize directly the impact of expected business conditions on expected excess stock returns. Expected business conditions consistently affect expected excess returns in a counter-cyclical fashion. Moreover, inclusion of expected business conditions in otherwise-standard predictive return regressions substantially reduces the explanatory power of the conventional financial predictors, including the dividend yield, default premium, and term premium, while simultaneously increasing R². Expected business conditions retain predictive power even when including the key nonfinancial predictor, the generalized consumption/wealth ratio. We argue that time-varying expected business conditions likely capture time-varying risk, whereas time-varying consumption/wealth may capture time-varying risk aversion.
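As a rough illustration of such a predictive regression, the sketch below fits excess returns on standard financial predictors with and without an expected-business-conditions series. All data are simulated placeholders rather than the survey data used in the paper, and the variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 300

# Placeholder predictors: dividend yield, default premium, term premium,
# and a survey-based expected-business-conditions index (all simulated).
div_yield, default_prem, term_prem, exp_bus_cond = rng.standard_normal((4, T))

# Simulated next-period excess returns with a counter-cyclical (negative)
# loading on expected business conditions.
excess_ret = 0.2 * div_yield - 0.5 * exp_bus_cond + rng.standard_normal(T)

def ols_r2(y, X):
    """OLS with intercept; returns coefficients and R^2."""
    Z = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    resid = y - Z @ beta
    return beta, 1 - resid.var() / y.var()

# Standard financial-predictor regression vs. one adding expected conditions.
_, r2_fin = ols_r2(excess_ret, np.column_stack([div_yield, default_prem, term_prem]))
_, r2_all = ols_r2(excess_ret, np.column_stack([div_yield, default_prem, term_prem, exp_bus_cond]))
print(f"R^2, financial predictors only: {r2_fin:.3f}; adding expected conditions: {r2_all:.3f}")
```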

2.
Making use of a characterization of multivariate normality by Hermitian polynomials, we propose a multivariate normality test. The approach is then applied to time series analysis by constructing a test for Gaussianity of a stationary univariate series. A simulation study shows that the proposed test has reasonable power and outperforms other tests available in the literature when the innovation series of the time series is symmetric but non-Gaussian.
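The abstract does not spell out the test statistic. As a hedged sketch of the underlying idea in the univariate case, the code below checks that sample means of low-order probabilists' Hermite polynomials of standardized data are near zero, as they should be under normality; with the third- and fourth-order polynomials this reduces to a Jarque–Bera-type statistic, which is only a stand-in for the authors' multivariate test.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval  # probabilists' Hermite polynomials
from scipy.stats import chi2

def hermite_normality_stat(x):
    """Statistic from 3rd- and 4th-order Hermite moments of standardized data.

    Under normality E[He_k(Z)] = 0 with Var[He_k(Z)] = k!, so the scaled sample
    means are asymptotically standard normal and the statistic is roughly chi^2(2).
    """
    z = (x - x.mean()) / x.std()
    he3 = hermeval(z, [0, 0, 0, 1]).mean()        # He_3(z) = z^3 - 3z
    he4 = hermeval(z, [0, 0, 0, 0, 1]).mean()     # He_4(z) = z^4 - 6z^2 + 3
    n = len(x)
    stat = n * (he3 ** 2 / 6 + he4 ** 2 / 24)
    return stat, chi2.sf(stat, df=2)              # statistic and p-value

rng = np.random.default_rng(1)
print(hermite_normality_stat(rng.standard_normal(500)))  # Gaussian: should not reject
print(hermite_normality_stat(rng.laplace(size=500)))     # symmetric but non-Gaussian
```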

3.
This article develops a method to estimate search frictions as well as preference parameters in differentiated product markets. Search costs are nonparametrically identified, which means our method can be used to estimate search costs in differentiated product markets that lack a suitable search cost shifter. We apply our model to the U.S. Medigap insurance market. We find that search costs are substantial: the estimated median cost of searching for an insurer is $30. Using the estimated parameters, we find that eliminating search costs could result in price decreases of as much as $71 (or 4.7%), along with increases in average consumer welfare of up to $374.

4.
5.
Various nonparametric approaches for Bayesian spectral density estimation of stationary time series have been suggested in the literature, mostly based on the Whittle likelihood approximation. A generalization of this approximation involving a nonparametric correction of a parametric likelihood has been proposed in the literature, with a proof of posterior consistency for spectral density estimation in combination with the Bernstein–Dirichlet process prior for Gaussian time series. In this article, we extend the posterior consistency result to non-Gaussian time series by employing a general consistency theorem for dependent data and misspecified models. As a special case, posterior consistency for the spectral density under the Whittle likelihood is also extended to non-Gaussian time series. Small-sample properties of this approach are illustrated with several examples of non-Gaussian time series.
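For reference, the Whittle approximation underlying these methods scores a candidate spectral density against the periodogram at the Fourier frequencies. Below is a minimal sketch of that building block; the function and parameter names are illustrative, and the nonparametric correction itself is not reproduced.

```python
import numpy as np

def whittle_loglik(x, spec_dens):
    """Whittle log-likelihood of a candidate spectral density f, computed from the
    periodogram I at the positive Fourier frequencies:
        l_W(f) = -sum_j [ log f(lambda_j) + I(lambda_j) / f(lambda_j) ]."""
    n = len(x)
    lam = 2 * np.pi * np.arange(1, (n - 1) // 2 + 1) / n
    I = np.abs(np.fft.fft(x)[1:(n - 1) // 2 + 1]) ** 2 / (2 * np.pi * n)
    f = spec_dens(lam)
    return -np.sum(np.log(f) + I / f)

# Example: score a white-noise spectral density f(lambda) = sigma^2 / (2*pi).
rng = np.random.default_rng(2)
x = rng.standard_normal(512)
print(whittle_loglik(x, lambda lam: np.full_like(lam, 1.0 / (2 * np.pi))))
```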

6.
We propose a simulation-based Bayesian approach to analyze multivariate time series with possible common long-range dependent factors. A state-space approach is used to represent the likelihood function in a tractable manner. The approach taken here allows for extension to fit a non-Gaussian multivariate stochastic volatility (MVSV) model with common long-range dependent components. The method is illustrated for a set of stock returns for companies having similar annual sales.

7.
The most common measure of dependence between two time series is the cross-correlation function. This measure gives a complete characterization of dependence for two linear and jointly Gaussian time series, but it often fails for nonlinear and non-Gaussian time series models, such as the ARCH-type models used in finance. The cross-correlation function is a global measure of dependence. In this article, we apply to bivariate time series the nonlinear local measure of dependence called local Gaussian correlation. It also generally works well for nonlinear models, and it can distinguish between positive and negative local dependence. We construct confidence intervals for the local Gaussian correlation and develop a test based on this measure of dependence. Asymptotic properties are derived for the parameter estimates, for the test functional, and for a block bootstrap procedure. For both simulated and financial index data, we construct confidence intervals and compare the proposed test with one based on the ordinary correlation and with one based on the Brownian distance correlation. Financial indexes are examined over a long time period and their local joint behavior, including tail behavior, is analyzed before, during, and after the financial crisis. Supplementary material for this article is available online.

8.
In this paper we use Monte Carlo simulation to compare the effectiveness of five multivariate quality control methods, namely the Hotelling T² chart, the multivariate Shewhart chart, discriminant analysis, the decomposition method, and the multivariate ridge residual chart (developed by the authors), for controlling the mean vector in a multivariate process. p-dimensional multivariate normal data are generated using different covariance structures. Various amounts of shift in the mean vector are induced and the resulting average run length (ARL) is computed. The effectiveness of each method with regard to ARL is discussed.
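A minimal Monte Carlo sketch in this spirit, restricted to the Hotelling T² chart with known parameters: run lengths are simulated in control and after a mean shift, then averaged to an ARL. The dimension, covariance structure, shift sizes, and control limit below are illustrative choices, not those of the paper.

```python
import numpy as np
from scipy.stats import chi2

def run_length(proc_mean, cov, cov_inv, ucl, rng, max_n=100_000):
    """Observations until T^2 = x' Sigma^{-1} x (target mean 0) exceeds the UCL."""
    for t in range(1, max_n + 1):
        x = rng.multivariate_normal(proc_mean, cov)
        if x @ cov_inv @ x > ucl:
            return t
    return max_n

p = 3
cov = np.array([[1.0, 0.5, 0.3],
                [0.5, 1.0, 0.4],
                [0.3, 0.4, 1.0]])
cov_inv = np.linalg.inv(cov)
ucl = chi2.ppf(0.995, df=p)              # known-parameter control limit
rng = np.random.default_rng(3)

for shift in (0.0, 1.0, 2.0):            # shift applied to the first mean component
    mean = np.zeros(p)
    mean[0] = shift
    arl = np.mean([run_length(mean, cov, cov_inv, ucl, rng) for _ in range(500)])
    print(f"shift = {shift}: estimated ARL = {arl:.1f}")
```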

9.
Real count data time series often exhibit underdispersion or overdispersion. In this paper, we develop two extensions of the first-order integer-valued autoregressive process with Poisson innovations, based on binomial thinning, for modeling integer-valued time series with equidispersion, underdispersion, and overdispersion. The main properties of the models are derived. The methods of conditional maximum likelihood, Yule–Walker, and conditional least squares are used for estimating the parameters, and their asymptotic properties are established. We also use a test based on our processes for checking whether the count time series under consideration is overdispersed or underdispersed. The proposed models are fitted to time series of the weekly number of syphilis cases and monthly counts of family violence, illustrating their ability to handle overdispersed and underdispersed count data.
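A minimal sketch of the baseline Poisson INAR(1) recursion with binomial thinning that the proposed models extend; the extensions replace the innovation distribution to allow under- or overdispersion, and the parameter values here are arbitrary.

```python
import numpy as np

def simulate_inar1(n, alpha, lam, rng):
    """Poisson INAR(1): X_t = alpha o X_{t-1} + eps_t, with binomial thinning."""
    x = np.zeros(n, dtype=int)
    x[0] = rng.poisson(lam / (1 - alpha))          # start near the stationary mean
    for t in range(1, n):
        survivors = rng.binomial(x[t - 1], alpha)  # thinning: alpha o X_{t-1}
        x[t] = survivors + rng.poisson(lam)        # plus Poisson(lam) innovation
    return x

rng = np.random.default_rng(4)
x = simulate_inar1(5000, alpha=0.4, lam=2.0, rng=rng)

# For the Poisson INAR(1) the stationary marginal is Poisson(lam / (1 - alpha)),
# so mean and variance coincide (equidispersion, dispersion index near 1).
print("mean:", x.mean(), "variance:", x.var(), "dispersion index:", x.var() / x.mean())
```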

10.
We consider stochastic volatility models that are defined by an Ornstein–Uhlenbeck (OU)-Gamma time change. These models are well suited to modeling financial time series and follow the general framework of the popular non-Gaussian OU models of Barndorff-Nielsen and Shephard. One current problem with these otherwise attractive, nontrivial models is, in general, the unavailability of a tractable likelihood-based statistical analysis for the returns of financial assets, which requires the ability to sample from a nontrivial joint distribution. We show that an OU process driven by an infinite-activity Gamma process, that is, an OU-Gamma process, exhibits unique features that allow one to explicitly describe and exactly sample from the relevant joint distributions. This is a consequence of the OU structure and the calculus of Gamma and Dirichlet processes. We develop a particle marginal Metropolis–Hastings algorithm for this type of continuous-time stochastic volatility model and check its performance using simulated data. For illustration, we finally fit the model to S&P500 index data.
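The exact sampling scheme described above relies on the calculus of Gamma and Dirichlet processes and is not reproduced here; the sketch below is only a crude Euler-type discretization of an OU process driven by a Gamma subordinator, useful for getting a feel for the paths. Step size and parameters are arbitrary.

```python
import numpy as np

def ou_gamma_path(T, dt, lam, a, b, rng):
    """Crude discretization of dX_t = -lam * X_t dt + dL_t, where L is a
    Gamma subordinator with increments Gamma(shape=a*dt, rate=b)."""
    n = int(T / dt)
    x = np.zeros(n)
    for t in range(1, n):
        jump = rng.gamma(shape=a * dt, scale=1.0 / b)  # increment of L over dt
        x[t] = np.exp(-lam * dt) * x[t - 1] + jump     # exponential decay plus jumps
    return x

rng = np.random.default_rng(5)
vol = ou_gamma_path(T=10.0, dt=0.001, lam=2.0, a=4.0, b=2.0, rng=rng)
# Stationary mean of this Gamma-driven OU process is roughly a / (b * lam) = 1 here.
print("path mean:", vol.mean(), "path min (should stay positive):", vol.min())
```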

11.
We propose a semiparametric approach to estimate the existence and location of a statistical change-point in a nonlinear multivariate time series contaminated with an additive noise component. In particular, we consider a p-dimensional stochastic process of independent multivariate normal observations whose mean function varies smoothly except at a single change-point. Our approach involves conducting a Bayesian analysis on the empirical detail coefficients of the original time series after a wavelet transform. If the mean function of our time series can be expressed as a multivariate step function, we find that our Bayesian-wavelet method performs comparably with classical parametric methods such as maximum likelihood estimation. The advantage of our multivariate change-point method is seen in how it applies to a much larger class of mean functions that require only general smoothness conditions.

12.
We propose a flexible generalized auto-regressive conditional heteroscedasticity (GARCH)-type model for the prediction of volatility in financial time series. The approach relies on the idea of using multivariate B-splines of lagged observations and volatilities. Estimation of such a B-spline basis expansion is constructed within the likelihood framework for non-Gaussian observations. As the dimension of the B-spline basis is large, i.e., there are many parameters, we use regularized and sparse model fitting with a boosting algorithm. Our method is computationally attractive and feasible for large dimensions. We demonstrate its strong predictive potential for financial volatility on simulated and real data, also in comparison with other approaches, and we present some supporting asymptotic arguments.

13.
In this paper we consider Sharpe's single-index model (Sharpe's model), assuming that the returns follow a multivariate t elliptical distribution. Given that the market returns are not observable, the statistical analysis is carried out in the context of an errors-in-variables model. In order to analyze the sensitivity of the maximum likelihood estimators to possible outliers and/or atypical returns, the local influence method of Cook (1986, Assessment of local influence, J. Roy. Statist. Soc. B, 48, 133–169) is implemented. The results are illustrated using a set of shares of companies belonging to the Chilean stock market. The main conclusion is that the t model with small degrees of freedom is able to accommodate possible outliers and influential returns in the data.

14.
Multivariate exponentially weighted moving average and cumulative sum charts are the most common memory-type multivariate control charts. They make use of present and past information to detect small shifts in the process parameter(s). In this article, we propose two new multivariate control charts using a mixed version of their design setups. The plotting statistics of the proposed charts are based on the cumulative sum of multivariate exponentially weighted moving averages. The performance of these schemes is evaluated in terms of average run length. The proposals are compared with their existing counterparts, including the Hotelling T², MCUSUM, MEWMA, and MC1 charts. An application example is also presented for practical considerations using a real dataset.
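One plausible reading of such a mixed design, sketched below with simulated data: apply a Crosier-type multivariate CUSUM recursion to the MEWMA-smoothed observations. The smoothing constant, reference value, and covariance choice are illustrative, and the exact design in the paper may differ.

```python
import numpy as np

def mixed_mewma_cusum(X, cov_inv, lam=0.2, k=0.5):
    """Crosier-type multivariate CUSUM applied to MEWMA-smoothed observations
    Z_t = lam * X_t + (1 - lam) * Z_{t-1}; returns the charting statistic."""
    p = X.shape[1]
    z = np.zeros(p)
    s = np.zeros(p)
    stats = []
    for x in X:
        z = lam * x + (1 - lam) * z                   # MEWMA smoothing step
        c = np.sqrt((s + z) @ cov_inv @ (s + z))      # norm of the tentative CUSUM
        s = np.zeros(p) if c <= k else (s + z) * (1 - k / c)
        stats.append(np.sqrt(s @ cov_inv @ s))
    return np.array(stats)

rng = np.random.default_rng(7)
p, n = 2, 300
X = rng.standard_normal((n, p))
X[150:] += 0.5                                        # small sustained shift at t = 150
stat = mixed_mewma_cusum(X, np.eye(p))
print("max statistic before shift:", round(stat[:150].max(), 2),
      "after shift:", round(stat[150:].max(), 2))
```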

15.
Copulas have proved to be very successful tools for the flexible modeling of dependence. Bivariate copulas have been deeply researched in recent years, while building higher-dimensional copulas is still recognized to be a difficult task. In this paper, we study higher-dimensional dependent reliability systems using a type of decomposition called a "vine," by which a multivariate distribution can be decomposed into a cascade of bivariate copulas. Equations for the reliability of parallel, series, and k-out-of-n systems are obtained and then decomposed based on C-vine and D-vine copulas. Finally, a shutdown system is considered to illustrate the results obtained in the paper.

16.
Estimating multivariate location and scatter with both affine equivariance and positive breakdown has always been difficult. A well-known estimator which satisfies both properties is the Minimum Volume Ellipsoid Estimator (MVE). Computing the exact MVE is often not feasible, so one usually resorts to an approximate algorithm. In the regression setup, algorithms for positive-breakdown estimators like Least Median of Squares typically recompute the intercept at each step, to improve the result. This approach is called intercept adjustment. In this paper we show that a similar technique, called location adjustment, can be applied to the MVE. For this purpose we use the Minimum Volume Ball (MVB), in order to lower the MVE objective function. An exact algorithm for calculating the MVB is presented. As an alternative to MVB location adjustment we propose L1 location adjustment, which does not necessarily lower the MVE objective function but yields more efficient estimates for the location part. Simulations compare the two types of location adjustment. We also obtain the maxbias curves of L1 and the MVB in the multivariate setting, revealing the superiority of L1.

17.
The usual covariance estimates for data X_1, …, X_n from a stationary zero-mean stochastic process {X_t} are the sample covariances. Both direct and resampling approaches are used to estimate the variance of the sample covariances. This paper compares the performance of these variance estimates. Using a direct approach, we show that a consistent windowed periodogram estimate for the spectrum is more effective than using the periodogram itself. A frequency domain bootstrap for time series is proposed and analyzed, and we introduce a frequency domain version of the jackknife that is shown to be asymptotically unbiased and consistent for Gaussian processes. Monte Carlo techniques show that the time domain jackknife and subseries method cannot be recommended. For a Gaussian underlying series a direct approach using a smoothed periodogram is best; for a non-Gaussian series the frequency domain bootstrap appears preferable. For small samples, the bootstraps are dangerous: both the direct approach and frequency domain jackknife are better.
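For concreteness, the two ingredients of the direct variance estimates, the sample covariances of a zero-mean series (c_k = n^(-1) * sum over t of x_t x_{t+k}) and a simple smoothed periodogram, can be computed as in the sketch below; the bootstrap and jackknife schemes themselves are not reproduced.

```python
import numpy as np

def sample_autocov(x, max_lag):
    """c_k = n^{-1} sum_{t=1}^{n-k} x_t x_{t+k} for a zero-mean series."""
    n = len(x)
    return np.array([np.sum(x[: n - k] * x[k:]) / n for k in range(max_lag + 1)])

def smoothed_periodogram(x, m=5):
    """Raw periodogram at the Fourier frequencies, then a moving-average window."""
    n = len(x)
    I = np.abs(np.fft.fft(x)[1:(n - 1) // 2 + 1]) ** 2 / (2 * np.pi * n)
    kernel = np.ones(2 * m + 1) / (2 * m + 1)
    return np.convolve(I, kernel, mode="same")

rng = np.random.default_rng(8)
n, phi = 1000, 0.5
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.standard_normal()      # zero-mean AR(1) test series

print(sample_autocov(x, 3))        # compare with gamma(k) = phi^k / (1 - phi^2)
print(smoothed_periodogram(x)[:5]) # spectrum estimate near frequency zero
```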

18.
The approximate likelihood function introduced by Whittle has been used to estimate the spectral density and certain parameters of a variety of time series models. In this note we attempt to empirically quantify the loss of efficiency of Whittle's method in nonstandard settings. A recently developed representation of some first-order non-Gaussian stationary autoregressive processes allows a direct comparison of the true likelihood function with that of Whittle. The conclusion is that Whittle's likelihood can produce unreliable estimates in the non-Gaussian case, even for moderate sample sizes. Moreover, for small samples, and if the autocorrelation of the process is high, Whittle's approximation is not efficient even in the Gaussian case. While these facts are known to some extent, the present study sheds more light on the degree of efficiency loss incurred by using Whittle's likelihood, in both the Gaussian and non-Gaussian cases.
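A small Monte Carlo sketch in this spirit, limited to the Gaussian AR(1) case with a short series and high autocorrelation (the paper's non-Gaussian representation is not reproduced): both the exact Gaussian likelihood and the Whittle likelihood are maximized over a grid and the resulting estimates compared.

```python
import numpy as np

def exact_ar1_negloglik(phi, x):
    """Exact Gaussian AR(1) negative log-likelihood, unit innovation variance."""
    n = len(x)
    rss = (1 - phi ** 2) * x[0] ** 2 + np.sum((x[1:] - phi * x[:-1]) ** 2)
    return 0.5 * (n * np.log(2 * np.pi) - np.log(1 - phi ** 2) + rss)

def whittle_ar1_negloglik(phi, x):
    """Whittle negative log-likelihood for an AR(1), unit innovation variance."""
    n = len(x)
    lam = 2 * np.pi * np.arange(1, (n - 1) // 2 + 1) / n
    I = np.abs(np.fft.fft(x)[1:(n - 1) // 2 + 1]) ** 2 / (2 * np.pi * n)
    f = 1.0 / (2 * np.pi * np.abs(1 - phi * np.exp(-1j * lam)) ** 2)
    return np.sum(np.log(f) + I / f)

rng = np.random.default_rng(10)
grid = np.linspace(-0.98, 0.98, 197)
n, phi_true, reps = 50, 0.9, 300          # short series, high autocorrelation
est_exact, est_whittle = [], []
for _ in range(reps):
    x = np.zeros(n)
    x[0] = rng.standard_normal() / np.sqrt(1 - phi_true ** 2)  # stationary start
    for t in range(1, n):
        x[t] = phi_true * x[t - 1] + rng.standard_normal()
    est_exact.append(grid[np.argmin([exact_ar1_negloglik(p, x) for p in grid])])
    est_whittle.append(grid[np.argmin([whittle_ar1_negloglik(p, x) for p in grid])])

for name, est in (("exact MLE", est_exact), ("Whittle", est_whittle)):
    est = np.array(est)
    print(f"{name}: mean {est.mean():.3f}, RMSE {np.sqrt(np.mean((est - phi_true) ** 2)):.3f}")
```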

19.
The problem of modelling time series driven by non-Gaussian innovations has been considered recently by Li and McLeod (1988). In this paper we discuss the identification of ARMA models with non-Gaussian innovations. Simulation experiments are used to study the applicability of the theoretical results.

20.
This paper reports an extensive Monte Carlo simulation study based on six estimators of the long-memory fractional parameter when the time series is non-stationary, i.e., an ARFIMA(p, d, q) process with d > 0.5. Parametric and semiparametric methods are compared. In addition, the effect on parameter estimation of small and large sample sizes and of non-Gaussian error innovations is investigated. The methodology is applied to a well-known data set, the UK short interest rates.
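One of the standard semiparametric estimators in such comparisons is the log-periodogram (GPH) regression. Below is a minimal sketch, applied to a simulated stationary ARFIMA(0, d, 0) series rather than the interest-rate data; handling the non-stationary d > 0.5 case (e.g., by differencing or tapering) is not covered here.

```python
import numpy as np
from scipy.special import gammaln

def simulate_arfima0d0(n, d, rng, trunc=2000):
    """Crude ARFIMA(0, d, 0) simulation by truncating the MA(infinity) expansion."""
    j = np.arange(trunc)
    psi = np.exp(gammaln(j + d) - gammaln(j + 1) - gammaln(d))  # MA coefficients
    eps = rng.standard_normal(n + trunc)
    return np.convolve(eps, psi, mode="full")[trunc:trunc + n]

def gph_estimate(x, m=None):
    """Log-periodogram (GPH) regression estimate of the memory parameter d."""
    n = len(x)
    m = int(np.sqrt(n)) if m is None else m
    lam = 2 * np.pi * np.arange(1, m + 1) / n
    I = np.abs(np.fft.fft(x - x.mean())[1:m + 1]) ** 2 / (2 * np.pi * n)
    slope = np.polyfit(np.log(4 * np.sin(lam / 2) ** 2), np.log(I), 1)[0]
    return -slope

rng = np.random.default_rng(9)
x = simulate_arfima0d0(5000, d=0.3, rng=rng)
print(f"GPH estimate of d: {gph_estimate(x):.3f} (true d = 0.3)")
```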
