Similar Documents
20 similar documents found (search time: 15 ms)
1.
In finance, the stochastic volatility (SV) model is a useful tool for modelling stock market returns. However, there is evidence that stock returns behave asymmetrically, and a threshold SV (THSV) model has been proposed to capture this behaviour. In this study, we introduce a robust model, built through empirical Bayesian analysis, to deal with the uncertainty between the SV and THSV models. A Markov chain Monte Carlo algorithm is applied to select the hyperparameters of the prior distribution empirically. The value at risk implied by the resulting predictive distribution is also given. Simulation studies show that the proposed empirical Bayes model not only yields acceptable predictions but also reduces the risk arising from model uncertainty.
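The empirical (historical-simulation) flavour of value at risk mentioned above can be sketched in a few lines. This is a generic illustration with made-up normal predictive draws, not the authors' THSV predictive distribution:

```python
import random

def value_at_risk(returns, alpha=0.05):
    """Empirical VaR: the negative of the alpha-quantile of the return draws."""
    ordered = sorted(returns)
    idx = int(alpha * len(ordered))
    return -ordered[min(idx, len(ordered) - 1)]

# Stand-in draws for a model's posterior predictive return distribution
random.seed(1)
draws = [random.gauss(0.0005, 0.02) for _ in range(10_000)]
var_95 = value_at_risk(draws, alpha=0.05)  # 95% VaR of the predictive draws
```

With a proper model, `draws` would come from the posterior predictive distribution rather than a fixed normal.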

2.
ABSTRACT

We introduce a semi-parametric Bayesian approach based on skewed Dirichlet process priors for the location parameters in the ordinal calibration problem. This approach allows the modelling of asymmetric error distributions. Conditional posterior distributions are derived, allowing the use of Markov chain Monte Carlo methods to generate the posterior distributions. The methodology is applied to both simulated and real data.

3.
This paper develops a novel and efficient algorithm for Bayesian inference in inverse gamma stochastic volatility models. It is shown that, by conditioning on auxiliary variables, all the volatilities can be sampled jointly and directly from their conditional posterior density, using simple distributions that are easy to draw from. Furthermore, the paper develops a generalized inverse gamma process with more flexible tails in the distribution of volatilities, which still allows simple and efficient calculations. Using several macroeconomic and financial datasets, it is shown that the inverse gamma and generalized inverse gamma processes can greatly outperform the commonly used log-normal volatility processes with Student's t errors or jumps in the mean equation.

4.
In many parametric problems the use of order restrictions among the parameters can lead to improved precision. Our interest is in the study of several multinomial populations under the stochastic order restriction (SOR) in univariate situations. We use Bayesian methods to show that, when the SOR is reasonable, it can yield larger gains in precision than methods that ignore it. Unlike frequentist order-restricted inference, our methodology permits analysis even when there is uncertainty about the SOR. Our method is sampling based, and we use simple and efficient rejection sampling. The Bayes factor in favor of the SOR is computed in a simple manner, and samples from the requisite posterior distributions are easily obtained. We use real data to illustrate the procedure and show that larger gains in precision are likely under the SOR.
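As a rough illustration of the rejection-sampling idea (a hypothetical two-population binomial case under independent uniform priors, not the authors' multinomial setup), one can draw from the unconstrained posteriors and keep only the draws satisfying the order restriction; a Bayes factor in favour of the restriction then follows from comparing posterior and prior odds:

```python
import random

random.seed(7)

# Hypothetical counts: successes out of trials in two populations
x1, n1 = 12, 50
x2, n2 = 22, 50

def posterior_draw(x, n):
    # Beta(1 + x, 1 + n - x) posterior under a uniform prior
    return random.betavariate(1 + x, 1 + n - x)

# Rejection sampling: keep only joint draws satisfying p1 <= p2
draws = 20_000
accepted = []
for _ in range(draws):
    p1, p2 = posterior_draw(x1, n1), posterior_draw(x2, n2)
    if p1 <= p2:
        accepted.append((p1, p2))

post_prob = len(accepted) / draws   # posterior probability of the restriction
prior_prob = 0.5                    # by symmetry under independent uniform priors
bayes_factor = (post_prob / max(1 - post_prob, 1e-12)) / (prior_prob / (1 - prior_prob))
```

The accepted pairs are draws from the order-restricted posterior, so restricted estimates are just averages over `accepted`.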

5.
An alternative distributional assumption is proposed for the stochastic volatility model. This results in extremely flexible tail behaviour of the sampling distribution for the observables, as well as in the availability of a simple Markov Chain Monte Carlo strategy for posterior analysis. By allowing the tail behaviour to be determined by a separate parameter, we reserve the parameters of the volatility process to dictate the degree of volatility clustering. Treatment of a mean function is formally integrated in the analysis.

Some empirical examples on both stock prices and exchange rates clearly indicate the presence of fat tails, in combination with high levels of volatility clustering. In addition, predictive distributions indicate a good fit with these typical financial data sets.

7.
A Bayesian mixture model for differential gene expression
Summary.  We propose model-based inference for differential gene expression, using a nonparametric Bayesian probability model for the distribution of gene intensities under various conditions. The probability model is a mixture of normal distributions. The resulting inference is similar to a popular empirical Bayes approach used for the same problem, but fully model-based inference mitigates some of the inherent limitations of the empirical Bayes method. We argue that inference is no more difficult than posterior simulation in traditional nonparametric mixture-of-normals models. The approach is motivated by a microarray experiment that was carried out to identify genes that are differentially expressed between normal tissue and colon cancer tissue samples. We also carried out a small simulation study to verify the proposed methods. In the motivating case studies we show how the nonparametric Bayes approach facilitates the evaluation of posterior expected false discovery rates, and how inference can proceed even in the absence of a null sample of known non-differentially expressed scores. This highlights the difference from alternative empirical Bayes approaches that are based on plug-in estimates.

8.
Summary.  A method of Bayesian model selection for join point regression models is developed. Given a set of K+1 join point models M0, M1, …, MK with 0, 1, …, K join points respectively, the posterior distributions of the parameters and of the competing models Mk are computed by Markov chain Monte Carlo simulation. The Bayes information criterion (BIC) is used to select the model Mk with the smallest value of BIC as the best model. Another approach, based on the Bayes factor, selects the model Mk with the largest posterior probability as the best model when the prior distribution over the Mk is discrete uniform. Both methods are applied to analyse observed US cancer incidence rates for selected cancer sites. The graphs of the join point models fitted to the data are produced by the proposed methods and compared with the method of Kim and co-workers, which is based on a series of permutation tests. The analyses show that the Bayes factor is sensitive to the prior specification of the variance σ2, and that the model selected by BIC fits the data as well as the model selected by the permutation test while having the advantage of producing the posterior distribution of the join points. The Bayesian join point model and model selection method presented here will be integrated in the National Cancer Institute's join point software ( http://www.srab.cancer.gov/joinpoint/ ) and will be available to the public.
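A minimal sketch of the BIC comparison between zero and one join point, on synthetic data with a known join point. The actual method also treats the join point locations as unknown and samples them by MCMC; here the candidate location is fixed for illustration:

```python
import math
import random

def ols_rss(xs, ys):
    """Residual sum of squares of a least-squares line y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    a = my - b * mx
    return sum((y - a - b * x) ** 2 for x, y in zip(xs, ys))

def bic(rss, n, n_params):
    # Profile Gaussian log-likelihood: -2*logL = n*log(rss/n) + constant
    return n * math.log(rss / n) + n_params * math.log(n)

random.seed(3)
xs = list(range(40))
# Simulated rates whose trend changes slope at the join point x = 20
ys = [x + random.gauss(0, 1) if x < 20 else 40 - x + random.gauss(0, 1) for x in xs]

n = len(xs)
rss0 = ols_rss(xs, ys)                                        # M0: a single line
rss1 = ols_rss(xs[:20], ys[:20]) + ols_rss(xs[20:], ys[20:])  # M1: join at x = 20
bic0 = bic(rss0, n, 2)
bic1 = bic(rss1, n, 5)  # two intercepts, two slopes, join point location
best = 0 if bic0 < bic1 else 1
```

On this tent-shaped data the one-join-point model wins decisively, mirroring how BIC picks Mk in the paper.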

9.
Summary.  We develop Markov chain Monte Carlo methodology for Bayesian inference for non-Gaussian Ornstein–Uhlenbeck stochastic volatility processes. The approach introduced involves expressing the unobserved stochastic volatility process in terms of a suitable marked Poisson process. We introduce two specific classes of Metropolis–Hastings algorithms which correspond to different ways of jointly parameterizing the marked point process and the model parameters. The performance of the methods is investigated for different types of simulated data. The approach is extended to consider the case where the volatility process is expressed as a superposition of Ornstein–Uhlenbeck processes. We apply our methodology to the US dollar–Deutschmark exchange rate.

11.
The analysis of failure time data often involves two strong assumptions. The proportional hazards assumption postulates that hazard rates corresponding to different levels of explanatory variables are proportional. The additive effects assumption specifies that the effect associated with a particular explanatory variable does not depend on the levels of other explanatory variables. A hierarchical Bayes model is presented, under which both assumptions are relaxed. In particular, time-dependent covariate effects are explicitly modelled, and the additivity of effects is relaxed through the use of a modified neural network structure. The hierarchical nature of the model is useful in that it parsimoniously penalizes violations of the two assumptions, with the strength of the penalty being determined by the data.

12.
In this study, we propose a prior on restricted Vector Autoregressive (VAR) models. The prior setting permits efficient Markov Chain Monte Carlo (MCMC) sampling from the posterior of the VAR parameters and estimation of the Bayes factor. Numerical simulations show that when the sample size is small, the Bayes factor is more effective in selecting the correct model than the commonly used Schwarz criterion. We conduct Bayesian hypothesis testing of VAR models on the macroeconomic, state-, and sector-specific effects of employment growth.

13.
ABSTRACT

Inference for epidemic parameters can be challenging, partly because the data are intrinsically stochastic and tend to be observed through discrete-time sampling, so the records are incomplete. The problem is particularly acute when the likelihood of the data is computationally intractable, since standard statistical techniques then become too complicated to implement effectively. In this work, we develop a method within the Bayesian paradigm for susceptible–infected–removed stochastic epidemic models via data-augmented Markov chain Monte Carlo. The technique samples all missing values as well as the model parameters, treating both as random variables. The routines are based on approximating the discrete-time epidemic by a diffusion process. We illustrate the techniques using simulated epidemics and finally apply them to real data from the Eyam plague.
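For intuition, here is a minimal discrete-time stochastic SIR simulator of the kind of data such augmentation methods target. This is a chain-binomial sketch with made-up parameters, not the paper's diffusion approximation or its MCMC scheme:

```python
import math
import random

def simulate_sir(n, i0, beta, gamma, steps, rng):
    """Discrete-time chain-binomial SIR: each susceptible is infected with
    probability 1 - exp(-beta*I/N) per step; each infective recovers
    with probability gamma per step."""
    s, i, r = n - i0, i0, 0
    path = [(s, i, r)]
    for _ in range(steps):
        p_inf = 1.0 - math.exp(-beta * i / n)
        new_inf = sum(rng.random() < p_inf for _ in range(s))
        new_rec = sum(rng.random() < gamma for _ in range(i))
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        path.append((s, i, r))
    return path

path = simulate_sir(n=200, i0=5, beta=0.6, gamma=0.2, steps=30,
                    rng=random.Random(11))
```

In the inference problem only partial snapshots of such a `path` are observed, and the data-augmented sampler imputes the unobserved transitions alongside `beta` and `gamma`.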

14.
Summary.  Road safety has recently become a major concern in most modern societies. The identification of sites that are more dangerous than others (black spots) can help in better scheduling road safety policies. This paper proposes a methodology for ranking sites according to their level of hazard. The model is innovative in at least two respects. Firstly, it makes use of all relevant information per accident location, including the total number of accidents and the number of fatalities, as well as the number of slight and serious injuries. Secondly, the model includes the use of a cost function to rank the sites with respect to their total expected cost to society. Bayesian estimation for the model via a Markov chain Monte Carlo approach is proposed. Accident data from 519 intersections in Leuven (Belgium) are used to illustrate the methodology proposed. Furthermore, different cost functions are used to show the effect of the proposed method on the use of different costs per type of injury.

15.
Bayesian estimates of the two unknown parameters and of the reliability function of the exponentiated Weibull model are obtained based on generalized order statistics. Markov chain Monte Carlo (MCMC) methods are used to compute the Bayes estimates of the target parameters. Our computations are based on the balanced loss function, which contains the symmetric and asymmetric loss functions as special cases. The results are specialized to progressively Type-II censored data and upper record values. Comparisons are made between Bayesian and maximum likelihood estimators via Monte Carlo simulation.
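Under balanced squared-error loss, the Bayes rule is a weighted mix of a target classical estimator and the posterior mean. A sketch for the textbook normal-mean case with known variance, to show the loss function's effect (an illustration of the balanced-loss idea only, not the exponentiated Weibull computation):

```python
def balanced_loss_estimate(xbar, n, sigma2, mu0, tau2, w):
    """Bayes rule for a normal mean under balanced squared-error loss:
    a weight-w mix of the MLE (sample mean) and the conjugate posterior mean."""
    mle = xbar
    # Conjugate N(mu0, tau2) prior, known sampling variance sigma2
    precision = n / sigma2 + 1.0 / tau2
    post_mean = (n * xbar / sigma2 + mu0 / tau2) / precision
    return w * mle + (1.0 - w) * post_mean

est = balanced_loss_estimate(xbar=2.0, n=25, sigma2=4.0, mu0=0.0, tau2=1.0, w=0.3)
```

Setting `w = 1` recovers the classical estimator and `w = 0` the usual posterior mean, which is how the symmetric and asymmetric cases arise as special cases.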

16.
Bayesian semiparametric inference is considered for a log-linear model. This model consists of a parametric component for the regression coefficients and a nonparametric component for the unknown error distribution. Bayesian analysis is studied for the case of a parametric prior on the regression coefficients and a mixture-of-Dirichlet-processes prior on the unknown error distribution. A Markov chain Monte Carlo (MCMC) method is developed to compute the features of the posterior distribution. A model selection method for obtaining a more parsimonious set of predictors is also studied: indicator variables are added to the regression equation, with the set of indicator variables representing all the possible subsets to be considered, and an MCMC method is developed to search stochastically for the best subset. These procedures are applied to two examples, one with censored data.

17.
This paper presents an efficient Monte Carlo simulation scheme, based on variance reduction methods, to evaluate arithmetic-average Asian options under a double Heston stochastic volatility model with jumps. The paper consists of two parts. The first presents a new flexible stochastic volatility model, namely the double Heston model with jumps. In the second, by combining two variance reduction procedures, we propose an efficient Monte Carlo simulation scheme for pricing arithmetic-average Asian options under this model. Numerical results illustrate the efficiency of our method.
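A stripped-down version of one common variance reduction idea, antithetic variates, applied to an arithmetic-average Asian call. For simplicity this assumes plain geometric Brownian motion rather than the double Heston model with jumps, and the parameter values are made up:

```python
import math
import random

def asian_call_mc(s0, k, r, sigma, t, n_steps, n_paths, rng, antithetic=True):
    """Monte Carlo price of an arithmetic-average Asian call under GBM,
    optionally pairing each path with its antithetic mirror (-z shocks)."""
    dt = t / n_steps
    drift = (r - 0.5 * sigma ** 2) * dt
    vol = sigma * math.sqrt(dt)
    payoffs = []
    for _ in range(n_paths):
        zs = [rng.gauss(0.0, 1.0) for _ in range(n_steps)]
        for sign in ([1.0, -1.0] if antithetic else [1.0]):
            s, total = s0, 0.0
            for z in zs:
                s *= math.exp(drift + vol * sign * z)
                total += s
            payoffs.append(max(total / n_steps - k, 0.0))
    return math.exp(-r * t) * sum(payoffs) / len(payoffs)

price = asian_call_mc(100.0, 100.0, 0.05, 0.2, 1.0, 12, 2_000, random.Random(5))
```

Because the antithetic path reuses the same shocks with flipped signs, each pair of payoffs is negatively correlated, which lowers the variance of the averaged estimator at essentially no extra sampling cost.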

18.
Standard methods for maximum likelihood parameter estimation in latent variable models rely on the Expectation–Maximization algorithm and its Monte Carlo variants. Our approach is different and is motivated by considerations similar to simulated annealing; that is, we build a sequence of artificial distributions whose support concentrates on the set of maximum likelihood estimates. We sample from these distributions using a sequential Monte Carlo approach. We demonstrate state-of-the-art performance on several applications of the proposed approach.
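The annealing idea can be caricatured with simple Metropolis moves through targets proportional to L(θ)^γ for an increasing ladder of γ values. The paper uses sequential Monte Carlo rather than a single chain; this toy Bernoulli example only shows how the mass concentrates on the MLE as γ grows:

```python
import math
import random

def log_lik(p, heads, n):
    """Bernoulli log-likelihood; -inf outside the open unit interval."""
    if not 0.0 < p < 1.0:
        return float("-inf")
    return heads * math.log(p) + (n - heads) * math.log(1.0 - p)

rng = random.Random(2)
heads, n = 30, 100   # toy data; the MLE is 0.30

# Metropolis moves targeting L(p)^gamma with gamma increasing, so the
# sampler's mass concentrates near the maximum likelihood estimate
p = 0.5
for gamma in (1, 5, 25, 125, 625):
    for _ in range(2_000):
        prop = p + rng.gauss(0.0, 0.05)
        log_acc = gamma * (log_lik(prop, heads, n) - log_lik(p, heads, n))
        if math.log(rng.random()) < log_acc:
            p = prop
```

At γ = 1 the target is the likelihood itself; by γ = 625 the target is so peaked that the chain sits within a fraction of a percent of the MLE, mimicking the concentration property the abstract describes.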

19.
Bayesian model learning based on a parallel MCMC strategy
We introduce a novel Markov chain Monte Carlo algorithm for estimating posterior probabilities over discrete model spaces. Our learning approach is applicable to families of models for which the marginal likelihood can be calculated analytically, either exactly or approximately, given any fixed structure. We argue that for certain model neighborhood structures the ordinary reversible Metropolis-Hastings algorithm does not yield an appropriate solution to the estimation problem, and we therefore develop an alternative, non-reversible algorithm that avoids the scaling effect of the neighborhood. To explore a model space efficiently, a finite number of interacting parallel stochastic processes are utilized. Our interaction scheme enables several local neighborhoods of a model space to be explored simultaneously, while preventing the absorption of any particular process into a relatively inferior state. We illustrate the advantages of our method with an application to a classification model: we use an extensive bacterial database and compare our results with those obtained by different methods on the same data.

20.
Dynamic regression models are widely used because they express and model the behaviour of a system over time. In this article, two dynamic regression models, the distributed lag (DL) model and the autoregressive distributed lag model, are evaluated with a focus on their lag lengths. From a classical statistics point of view, there are various methods for determining the number of lags, but none of them is best in all situations. This is a serious issue, since a wrong choice yields bad estimates of the effects of the regressors on the response variable. We present an alternative by taking a Bayesian approach. The posterior distributions of the numbers of lags are derived under an improper prior for the model parameters. The fractional Bayes factor technique [A. O'Hagan, Fractional Bayes factors for model comparison (with discussion), J. R. Statist. Soc. B 57 (1995), pp. 99–138] is used to handle the indeterminacy in the likelihood function caused by the improper prior, and the zero-one loss function is used to penalize wrong decisions. A naive method using a specified maximum number of distributed lags is also presented. Both the proposed and the naive methods are verified using simulated data, with promising results for the proposed method. An illustrative example with a real data set is provided.
