Similar Literature
20 similar records found (search time: 62 ms)
1.
Many competitions are paired in nature: respondents compare objects pairwise in a subjective manner. Bayesian statistics, in contrast to classical statistics, provides a generic tool for incorporating new experimental evidence and updating existing information. These and other properties have led statisticians to focus their attention on the Bayesian analysis of different paired comparison models. The present article focuses on the amended Davidson model for paired comparison, in which an amendment accommodates the option of not distinguishing the effects of two treatments when they are compared pairwise. Bayesian analysis of the amended Davidson model is performed using noninformative priors after a further small modification that incorporates an order-effect parameter. The joint and marginal posterior distributions of the parameters, their posterior estimates, and the predictive and posterior probabilities for comparing the treatment parameters are obtained.
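The abstract does not reproduce the model's formulas. As a point of reference, here is a minimal sketch of the preference and tie probabilities in the original (unamended) Davidson model, assuming the standard parameterization with worth parameters π and a tie parameter ν; the amended model with an order effect adds parameters not shown here.

```python
import math

def davidson_probs(pi_i, pi_j, nu):
    """Davidson-model probabilities for a single paired comparison:
    worth parameters pi_i, pi_j > 0; nu >= 0 governs the tie
    (no-preference) option."""
    denom = pi_i + pi_j + nu * math.sqrt(pi_i * pi_j)
    p_i = pi_i / denom                               # i preferred to j
    p_j = pi_j / denom                               # j preferred to i
    p_tie = nu * math.sqrt(pi_i * pi_j) / denom      # no preference
    return p_i, p_j, p_tie

p_i, p_j, p_tie = davidson_probs(0.6, 0.3, 0.5)
```

Setting ν = 0 recovers the Bradley–Terry model without ties.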

2.
This paper develops a new class of option price models and applies it to options on the Australian S&P200 Index. The class of models generalizes the traditional Black-Scholes framework by accommodating time-varying conditional volatility, skewness and excess kurtosis in the underlying returns process. An important property of these more general pricing models is that the computational requirements are essentially the same as those associated with the Black-Scholes model, with both methods being based on one-dimensional integrals. Bayesian inferential methods are used to evaluate a range of models nested in the general framework, using observed market option prices. The evaluation is based on posterior parameter distributions, as well as posterior model probabilities. Various fit and predictive measures, plus implied volatility graphs, are also used to rank the alternative models. The empirical results provide evidence that time-varying volatility, leptokurtosis and a small degree of negative skewness are priced in Australian stock market options.
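The one-dimensional-integral property can be illustrated on the nested Black-Scholes case: the closed-form call price and the price computed as a single numerical integral of the discounted payoff against the risk-neutral density (written in terms of a standard normal variable z) agree. All parameter values below are illustrative.

```python
import math

def bs_call_closed(S, K, r, sigma, T):
    """Standard Black-Scholes closed-form European call price."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))
    return S * N(d1) - K * math.exp(-r * T) * N(d2)

def bs_call_integral(S, K, r, sigma, T, n=20000):
    """Same price as a one-dimensional integral: discounted expected payoff
    under the risk-neutral lognormal terminal price, z ~ N(0, 1)."""
    total, lo, hi = 0.0, -10.0, 10.0
    h = (hi - lo) / n
    for k in range(n):                      # midpoint rule
        z = lo + (k + 0.5) * h
        ST = S * math.exp((r - 0.5 * sigma ** 2) * T + sigma * math.sqrt(T) * z)
        phi = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)
        total += max(ST - K, 0.0) * phi * h
    return math.exp(-r * T) * total

closed = bs_call_closed(100.0, 100.0, 0.05, 0.2, 1.0)
quad = bs_call_integral(100.0, 100.0, 0.05, 0.2, 1.0)
```

The generalized models in the paper replace the normal density in the integrand with a more flexible conditional returns density, leaving the computational structure unchanged.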

3.
For clustering mixed categorical and continuous data, Lawrence and Krzanowski (1996) proposed a finite mixture model in which component densities conform to the location model. In the graphical models literature the location model is known as the homogeneous Conditional Gaussian model. In this paper it is shown that their model is not identifiable without imposing additional restrictions. Specifically, for g groups and m locations, (g!)^(m-1) distinct sets of parameter values (not including permutations of the group mixing parameters) produce the same likelihood function. Excessive shrinkage of parameter estimates in a simulation experiment reported by Lawrence and Krzanowski (1996) is shown to be an artifact of the model's non-identifiability. Identifiable finite mixture models can be obtained by imposing restrictions on the conditional means of the continuous variables. These new identified models are assessed in simulation experiments. The conditional mean structure of the continuous variables in the restricted location mixture models is similar to that in the underlying variable mixture models proposed by Everitt (1988), but the restricted location mixture models are more computationally tractable.

4.
In this paper Bayesian methods are applied to a stochastic volatility model using both the prices of the asset and the prices of options written on the asset. Posterior densities for all model parameters, latent volatilities and the market price of volatility risk are produced via a Markov Chain Monte Carlo (MCMC) sampling algorithm. Candidate draws for the unobserved volatilities are obtained in blocks by applying the Kalman filter and simulation smoother to a linearization of a nonlinear state space representation of the model. Crucially, information from both the spot and option prices affects the draws via the specification of a bivariate measurement equation, with implied Black–Scholes volatilities used to proxy observed option prices in the candidate model. Alternative models nested within the Heston (1993) framework are ranked via posterior odds ratios, as well as via fit, predictive and hedging performance. The method is illustrated using Australian News Corporation spot and option price data.
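The linearization-plus-Kalman-filter step can be sketched in a simplified univariate setting (spot returns only, with assumed parameter values, not the paper's bivariate spot-and-option specification): squaring and taking logs of the returns yields an approximately linear Gaussian state space model, to which the standard Kalman recursions apply.

```python
import math, random

random.seed(1)

# simulate a basic SV model (all parameter values assumed):
#   h_t = mu + phi*(h_{t-1} - mu) + sig_eta*eta_t,   r_t = exp(h_t/2)*eps_t
mu, phi_ar, sig_eta, T = -1.0, 0.95, 0.2, 300
h, returns = mu, []
for _ in range(T):
    h = mu + phi_ar * (h - mu) + sig_eta * random.gauss(0, 1)
    returns.append(math.exp(h / 2.0) * random.gauss(0, 1))

# linearization: log r_t^2 = h_t + xi_t, where xi_t = log eps_t^2 has
# mean -1.2704 (corrected below) and variance pi^2/2
y = [math.log(r * r + 1e-12) + 1.2704 for r in returns]
R = math.pi ** 2 / 2.0                   # measurement noise variance
Q = sig_eta ** 2                         # state noise variance

# Kalman filter on the linear-Gaussian approximation
m, P = mu, Q / (1.0 - phi_ar ** 2)       # stationary initialization
filtered = []
for yt in y:
    m_pred = mu + phi_ar * (m - mu)      # predict
    P_pred = phi_ar ** 2 * P + Q
    K = P_pred / (P_pred + R)            # update with Kalman gain
    m = m_pred + K * (yt - m_pred)
    P = (1.0 - K) * P_pred
    filtered.append(m)
```

In the paper this approximation is used only to generate candidate blocks of volatilities, which are then accepted or rejected against the exact nonlinear model within the MCMC scheme.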

6.
In this paper we consider the problems of estimation and prediction when observed data from a lognormal distribution are based on lower record values and lower record values with inter-record times. We compute maximum likelihood estimates and asymptotic confidence intervals for model parameters. We also obtain Bayes estimates and the highest posterior density (HPD) intervals using noninformative and informative priors under square error and LINEX loss functions. Furthermore, for the problem of Bayesian prediction under one-sample and two-sample frameworks, we obtain predictive estimates and the associated predictive equal-tail and HPD intervals. Finally, for illustration purposes, a real data set is analyzed and a simulation study is conducted to compare the methods of estimation and prediction.

7.
Pricing of American options in discrete time is considered, where the option is allowed to be based on several underlying stocks. It is assumed that the price processes of the underlying stocks are given by Markov processes. We use the Monte Carlo approach to generate artificial sample paths of these price processes, and then use nonparametric regression estimates to estimate from these data so-called continuation values, which are defined as mean values of the American option for given values of the underlying stocks at time t, subject to the constraint that the option is not exercised at time t. As nonparametric regression estimates we use least squares estimates with complexity penalties, which include as special cases least squares spline estimates, least squares neural networks, smoothing splines and orthogonal series estimates. General results concerning the rate of convergence are presented and applied to derive results for the special cases mentioned above. Furthermore, the pricing of American options is illustrated by simulated data.
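A common concrete instance of this Monte Carlo-plus-regression scheme is the Longstaff–Schwartz least-squares method. The sketch below uses a plain polynomial basis rather than the penalized estimators studied in the paper, a single underlying stock, and illustrative market parameters.

```python
import math, random

random.seed(0)

def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with partial pivoting."""
    n = 3
    M = [A[i][:] + [b[i]] for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def lsm_american_put(S0, K, r, sigma, T, n_steps=25, n_paths=2000):
    """Price an American put by simulating GBM paths and regressing the
    discounted continuation value on (1, S, S^2) at each exercise date."""
    dt = T / n_steps
    disc = math.exp(-r * dt)
    paths = []
    for _ in range(n_paths):
        p = [S0]
        for _ in range(n_steps):
            z = random.gauss(0, 1)
            p.append(p[-1] * math.exp((r - 0.5 * sigma ** 2) * dt
                                      + sigma * math.sqrt(dt) * z))
        paths.append(p)
    cash = [max(K - p[-1], 0.0) for p in paths]           # payoff at maturity
    for t in range(n_steps - 1, 0, -1):
        cash = [c * disc for c in cash]                   # discount one step
        itm = [i for i in range(n_paths) if K - paths[i][t] > 0]
        if len(itm) < 3:
            continue
        # least-squares fit of the continuation value on in-the-money paths
        X = [[1.0, paths[i][t], paths[i][t] ** 2] for i in itm]
        yv = [cash[i] for i in itm]
        A = [[sum(row[a] * row[b] for row in X) for b in range(3)] for a in range(3)]
        rhs = [sum(X[k][a] * yv[k] for k in range(len(X))) for a in range(3)]
        beta = solve3(A, rhs)
        for k, i in enumerate(itm):
            cont = sum(beta[j] * X[k][j] for j in range(3))
            if K - paths[i][t] > cont:                    # exercise beats holding
                cash[i] = K - paths[i][t]
    return disc * sum(cash) / n_paths

price = lsm_american_put(100.0, 100.0, 0.06, 0.2, 1.0)
```

The paper's contribution is the convergence analysis when the regression step above is replaced by penalized least squares estimators (splines, neural networks, orthogonal series).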

8.
We use a Bayesian approach to fitting a linear regression model to transformations of the natural parameter for the exponential class of distributions. The usual Bayesian approach is to assume that a linear model exactly describes the relationship among the natural parameters. We assume only that a linear model is approximately in force. We approximate the theta-links by using a linear model obtained by minimizing the posterior expectation of a loss function. While some posterior results can be obtained analytically, considerable generality follows from an exact Monte Carlo method for obtaining random samples of parameter values, or functions of parameter values, from their respective posterior distributions. The approach presented here is justified for small samples, requires only one-dimensional numerical integrations, and allows for the use of regression matrices with less than full column rank. Two numerical examples are provided.

9.
Bayesian predictive power, the expectation of the power function with respect to a prior distribution for the true underlying effect size, is routinely used in drug development to quantify the probability of success of a clinical trial. Choosing the prior is crucial for the properties and interpretability of Bayesian predictive power. We review recommendations on the choice of prior for Bayesian predictive power and explore its features as a function of the prior. The density of power values induced by a given prior is derived analytically and its shape characterized. We find that for a typical clinical trial scenario, this density has a u-shape very similar, but not equal, to a β-distribution. Alternative priors are discussed, and practical recommendations to assess the sensitivity of Bayesian predictive power to its input parameters are provided. Copyright © 2016 John Wiley & Sons, Ltd.
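As a sketch of the basic computation (with an assumed normal prior and a one-sided z-test, not the paper's specific trial scenario), Bayesian predictive power is simply the prior-weighted average of the power curve:

```python
import math, random

random.seed(42)

def power(theta, se):
    """Power of a one-sided z-test at the 2.5% level when the true effect
    is theta and the effect estimate has known standard error se."""
    z_alpha = 1.959963984540054
    return 0.5 * (1 + math.erf((theta / se - z_alpha) / math.sqrt(2)))

# prior on the true effect (assumed values): N(0.3, 0.2^2); trial se = 0.1
prior_mean, prior_sd, se = 0.3, 0.2, 0.1
draws = [random.gauss(prior_mean, prior_sd) for _ in range(100000)]
bpp = sum(power(t, se) for t in draws) / len(draws)
```

For this normal-normal setup the average has a closed form, Φ((μ/se − z_α)/√(1 + (τ/se)²)), which the Monte Carlo estimate should match closely.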

10.
This paper extends the classical jump-diffusion option pricing model to incorporate serially correlated jump sizes which have been documented in recent empirical studies. We model the series of jump sizes by an autoregressive process and provide an analysis on the underlying stock return process. Based on this analysis, the European option price and the hedging parameters under the extended model are derived analytically. Through numerical examples, we investigate how the autocorrelation of jump sizes influences stock returns, option prices and hedging parameters, and demonstrate its effects on hedging portfolios and implied volatility smiles. A calibration example based on real market data is provided to show the advantage of incorporating the autocorrelation of jump sizes.
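The jump-size dynamics can be sketched by simulation, assuming an AR(1) form J_k = ρJ_{k-1} + ε_k for consecutive jump sizes (an assumed simplification; the paper's exact specification may differ). The lag-one sample autocorrelation of the realized jump sizes recovers ρ:

```python
import math, random

random.seed(7)

# jump-diffusion log-returns with AR(1) jump sizes (all values assumed):
mu, sigma, lam = 0.05, 0.2, 20.0    # drift, diffusion vol, jumps per year
rho, sig_J = 0.6, 0.02              # jump-size AR(1) coefficient, innovation sd
dt, n = 1.0 / 252, 252 * 40         # daily steps over 40 years

J_prev, jumps, returns = 0.0, [], []
for _ in range(n):
    r = (mu - 0.5 * sigma ** 2) * dt + sigma * math.sqrt(dt) * random.gauss(0, 1)
    if random.random() < lam * dt:          # at most one jump per small step
        J = rho * J_prev + sig_J * random.gauss(0, 1)
        jumps.append(J)
        J_prev = J
        r += J
    returns.append(r)

# lag-1 sample autocorrelation of the realized jump sizes
m = sum(jumps) / len(jumps)
num = sum((jumps[i] - m) * (jumps[i - 1] - m) for i in range(1, len(jumps)))
den = sum((j - m) ** 2 for j in jumps)
acf1 = num / den
```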

11.
In recent years, numerous statisticians have focused their attention on the Bayesian analysis of different paired comparison models. Among paired comparison techniques, the Davidson model is one of the best-known models in the literature. In this article, we introduce an amendment to the Davidson model that accommodates the option of not distinguishing the effects of two treatments when they are compared pairwise. Having made this amendment, Bayesian analysis of the amended Davidson model is performed using noninformative (uniform and Jeffreys’) and informative (Dirichlet–gamma–gamma) priors. To study the model and to illustrate the Bayesian analysis with an example, we obtain the joint and marginal posterior distributions of the parameters, their posterior estimates, graphical presentations of the marginal densities, preference and predictive probabilities, and the posterior probabilities for comparing the treatment parameters.

12.
The Box–Jenkins methodology for modeling and forecasting from univariate time series models has long been considered a standard to which other forecasting techniques have been compared. To a Bayesian statistician, however, the method lacks an important facet—a provision for modeling uncertainty about parameter estimates. We present a technique called sampling the future for including this feature in both the estimation and forecasting stages. Although it is relatively easy to use Bayesian methods to estimate the parameters in an autoregressive integrated moving average (ARIMA) model, there are severe difficulties in producing forecasts from such a model. The multiperiod predictive density does not have a convenient closed form, so approximations are needed. In this article, exact Bayesian forecasting is approximated by simulating the joint predictive distribution. First, parameter sets are randomly generated from the joint posterior distribution. These are then used to simulate future paths of the time series. This bundle of many possible realizations is used to project the future in several ways. Highest probability forecast regions are formed and portrayed with computer graphics. The predictive density's shape is explored. Finally, we discuss a method that allows the analyst to subjectively modify the posterior distribution on the parameters and produce alternate forecasts.
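The "sampling the future" recipe — draw a parameter set from the posterior, then simulate one future path conditional on that draw — can be sketched for an AR(1) model. The independent normal approximation to the posterior below is a stand-in for a genuine joint posterior sample, and all numbers are illustrative.

```python
import random

random.seed(3)

# AR(1): y_t = c + phi*y_{t-1} + e_t, with point estimates and (assumed)
# posterior standard errors from a fitted model
c_hat, phi_hat, sig_hat = 0.5, 0.8, 1.0
c_se, phi_se = 0.1, 0.05
y_last, horizon, n_draws = 2.0, 12, 4000

paths = []
for _ in range(n_draws):
    # 1) draw a parameter set from the (approximate) posterior
    c = random.gauss(c_hat, c_se)
    phi = random.gauss(phi_hat, phi_se)
    # 2) simulate one future path conditional on that draw
    y, path = y_last, []
    for _ in range(horizon):
        y = c + phi * y + random.gauss(0, sig_hat)
        path.append(y)
    paths.append(path)

# interval forecast at the final horizon from the simulated bundle
h12 = sorted(p[-1] for p in paths)
lo, hi = h12[int(0.05 * n_draws)], h12[int(0.95 * n_draws)]
```

The bundle `paths` plays the role of the article's "many possible realizations": any functional of the predictive distribution (regions, density shape) is read off the simulated paths.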

13.
Based on hybrid censored data, the problem of making statistical inference on the parameters of a two-parameter Burr Type XII distribution is taken up. The maximum likelihood estimates are developed for the unknown parameters using the EM algorithm. The Fisher information matrix is obtained by applying the missing value principle and is further utilized for constructing approximate confidence intervals. Some Bayes estimates and the corresponding highest posterior density intervals of the unknown parameters are also obtained. Lindley’s approximation method and a Markov Chain Monte Carlo (MCMC) technique have been applied to evaluate these Bayes estimates. Further, MCMC samples are utilized to construct the highest posterior density intervals as well. A numerical comparison is made between the proposed estimates in terms of their mean square error values, and comments are given. Finally, two data sets are analyzed using the proposed methods.

14.
In this article, an importance sampling (IS) method for the posterior expectation of a nonlinear function in a Bayesian vector autoregressive (VAR) model is developed. Most Bayesian inference problems involve the evaluation of the expectation of a function of interest, usually a nonlinear function of the model parameters, under the posterior distribution. Nonlinear functions in the Bayesian VAR setting are difficult to estimate and usually require numerical methods for their evaluation. A weighted IS estimator is used for the evaluation of the posterior expectation. With the cross-entropy (CE) approach, the IS density is chosen from a specified family of densities such that the CE distance, or the Kullback–Leibler divergence, between the optimal IS density and the importance density is minimal. The performance of the proposed algorithm is assessed in iterated multistep forecasting of US macroeconomic time series.
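A toy version of the CE-based IS idea, in an assumed illustrative setting rather than the article's VAR application: estimate a small tail probability under a standard normal, with the IS density chosen from the normal location family. A single CE step fits the IS mean to an elite subsample of a pilot run, and the weighted IS estimator then uses likelihood-ratio weights.

```python
import math, random

random.seed(11)

# target: ell = P(X > 4) for X ~ N(0,1), which plain Monte Carlo handles poorly
def phi(x, m=0.0, s=1.0):
    return math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))

# CE step: within the family N(m, 1), the KL-closest density to the optimal
# IS density is fit from the elite (largest) samples of a pilot run
pilot = sorted(random.gauss(0, 1) for _ in range(10000))
elite = pilot[-100:]                     # top 1% as the elite set
m_is = sum(elite) / len(elite)           # CE update: elite sample mean

# weighted IS estimator with importance density N(m_is, 1)
n, total = 200000, 0.0
for _ in range(n):
    x = random.gauss(m_is, 1.0)
    if x > 4.0:
        total += phi(x) / phi(x, m_is)   # likelihood-ratio weight
est = total / n
exact = 0.5 * math.erfc(4.0 / math.sqrt(2.0))   # true tail probability
```

Practical CE implementations iterate the elite-fitting step (raising the threshold gradually); a single step is enough to show the mechanics here.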

15.
Generally, the semiclosed-form option pricing formula for complex financial models depends on unobservable factors such as stochastic volatility and jump intensity. A popular practice is to use an estimate of these latent factors to compute the option price. However, in many situations this plug-and-play approximation does not yield the appropriate price. This article examines this bias and quantifies its impacts. We decompose the bias into terms that are related to the bias on the unobservable factors and to the precision of their point estimators. The approximated price is found to be highly biased when only the history of the stock price is used to recover the latent states. This bias is corrected when option prices are added to the sample used to recover the states' best estimate. We also show numerically that such a bias is propagated on calibrated parameters, leading to erroneous values. The Canadian Journal of Statistics 48: 8–35; 2020 © 2019 Statistical Society of Canada

16.
The zero truncated inverse Gaussian–Poisson model, obtained by first mixing the Poisson model assuming its expected value has an inverse Gaussian distribution and then truncating the model at zero, is very useful when modelling frequency count data. A Bayesian analysis based on this statistical model is implemented on the word frequency counts of various texts, and its validity is checked by exploring the posterior distribution of the Pearson errors and by implementing posterior predictive consistency checks. The analysis based on this model is useful because it allows one to use the posterior distribution of the model mixing density as an approximation of the posterior distribution of the word frequency density of the author's vocabulary, which helps characterize that author's style. The posterior distributions of the expectation and of measures of the variability of that mixing distribution can be used to assess the size and diversity of the author's vocabulary. An alternative analysis is proposed based on the inverse Gaussian-zero truncated Poisson mixture model, which is obtained by switching the order of the mixing and truncation stages. Even though this second model fits some of the word frequency data sets more accurately than the first model, in practice the analysis based on it is not as useful because it does not allow one to estimate the word frequency distribution of the vocabulary.

17.
The posterior predictive p value (ppp) was invented as a Bayesian counterpart to classical p values. The methodology can be applied to discrepancy measures involving both data and parameters and can, hence, be targeted to check for various modeling assumptions. The interpretation can, however, be difficult since the distribution of the ppp value under modeling assumptions varies substantially between cases. A calibration procedure has been suggested, treating the ppp value as a test statistic in a prior predictive test. In this paper, we suggest that a prior predictive test may instead be based on the expected posterior discrepancy, which is somewhat simpler, both conceptually and computationally. Since both these methods require the simulation of a large posterior parameter sample for each of an equally large prior predictive data sample, we furthermore suggest to look for ways to match the given discrepancy by a computation-saving conflict measure. This approach is also based on simulations but only requires sampling from two different distributions representing two contrasting information sources about a model parameter. The conflict measure methodology is also more flexible in that it handles non-informative priors without difficulty. We compare the different approaches theoretically in some simple models and in a more complex applied example.
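The simulation loop behind a ppp value can be sketched for a toy normal-mean model with a data-only discrepancy (the sample variance); the model, data and discrepancy are assumed for illustration, not taken from the paper.

```python
import math, random

random.seed(5)

# "observed" data (assumed example): generated with sd 1.5, while the model
# being checked says N(mu, 1) -- so the variance discrepancy should look extreme
data = [random.gauss(0.0, 1.5) for _ in range(50)]
n = len(data)
xbar = sum(data) / n

def disc(x):
    """Discrepancy: sample variance (a data-only discrepancy for simplicity)."""
    m = sum(x) / len(x)
    return sum((v - m) ** 2 for v in x) / (len(x) - 1)

# posterior of mu under a flat prior and unit variance: N(xbar, 1/n)
hits, n_rep = 0, 2000
for _ in range(n_rep):
    mu = random.gauss(xbar, 1.0 / math.sqrt(n))           # posterior draw
    rep = [random.gauss(mu, 1.0) for _ in range(n)]       # replicated data
    if disc(rep) >= disc(data):
        hits += 1
ppp = hits / n_rep
```

A small ppp flags the misfit; the calibration issue discussed in the paper is that the distribution of this quantity under a correct model is generally not uniform.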

18.
In this article, the Bayes estimates of the two-parameter gamma distribution are considered. It is well known that the Bayes estimators of the two-parameter gamma distribution do not have compact form. In this paper, it is assumed that the scale parameter has a gamma prior and the shape parameter has any log-concave prior, and that they are independently distributed. Under these priors, we use the Gibbs sampling technique to generate samples from the posterior density function. Based on the generated samples, we compute the Bayes estimates of the unknown parameters and construct HPD credible intervals. We also compute approximate Bayes estimates using Lindley's approximation under the assumption of gamma priors on the shape parameter. Monte Carlo simulations are performed to compare the performances of the Bayes estimators with the classical estimators. One data analysis is performed for illustrative purposes. We further discuss Bayesian prediction of a future observation based on the observed sample, and it is seen that the Gibbs sampling technique can be used quite effectively for estimating the posterior predictive density and for constructing predictive intervals of the order statistics from the future sample.
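A sketch of the sampler described above, with a Metropolis step standing in for exact sampling from the log-concave full conditional of the shape (log-concavity permits exact methods such as adaptive rejection sampling, not implemented here); priors and data are illustrative.

```python
import math, random

random.seed(9)

# simulated data from Gamma(shape=2, rate=1.5); gammavariate takes (shape, scale)
data = [random.gammavariate(2.0, 1.0 / 1.5) for _ in range(200)]
n, sx, slogx = len(data), sum(data), sum(math.log(x) for x in data)

a, b = 0.01, 0.01                      # gamma prior on the rate lam

def log_cond_shape(alpha, lam):
    """Full conditional of the shape (up to a constant), assuming an
    Exp(0.01) prior on the shape, which is log-concave."""
    return (n * alpha * math.log(lam) + (alpha - 1.0) * slogx
            - n * math.lgamma(alpha) - 0.01 * alpha)

alpha, lam = 1.0, 1.0
keep_a, keep_l = [], []
for it in range(3000):
    # conjugate Gibbs update: lam | alpha ~ Gamma(a + n*alpha, rate b + sum x)
    lam = random.gammavariate(a + n * alpha, 1.0 / (b + sx))
    # Metropolis update for alpha | lam via a log-normal random walk
    prop = alpha * math.exp(0.1 * random.gauss(0, 1))
    log_acc = (log_cond_shape(prop, lam) - log_cond_shape(alpha, lam)
               + math.log(prop) - math.log(alpha))   # proposal Jacobian
    if math.log(random.random()) < log_acc:
        alpha = prop
    if it >= 1000:                                   # discard burn-in
        keep_a.append(alpha)
        keep_l.append(lam)

alpha_hat = sum(keep_a) / len(keep_a)
lam_hat = sum(keep_l) / len(keep_l)
```

Posterior means and HPD intervals are then computed from the retained draws `keep_a`, `keep_l`.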

19.
In this paper, we propose a hidden Markov model for the analysis of the time series of bivariate circular observations, by assuming that the data are sampled from bivariate circular densities, whose parameters are driven by the evolution of a latent Markov chain. The model segments the data by accounting for redundancies due to correlations along time and across variables. A computationally feasible expectation maximization (EM) algorithm is provided for the maximum likelihood estimation of the model from incomplete data, by treating the missing values and the states of the latent chain as two different sources of incomplete information. Importance-sampling methods facilitate the computation of bootstrap standard errors of the estimates. The methodology is illustrated on a bivariate time series of wind and wave directions and compared with popular segmentation models for bivariate circular data, which ignore correlations across variables and/or along time.

20.
This paper introduces a multiscale Gaussian convolution model of a Gaussian mixture (MGC-GMM) via the convolution of the GMM and a multiscale Gaussian window function. It is found that the MGC-GMM is still a Gaussian mixture model, and its parameters can be mapped back to the parameters of the GMM. Meanwhile, the multiscale probability density function (MPDF) of the MGC-GMM can be viewed as the mathematical expectation of a random process induced by the Gaussian window function and the GMM, which can be directly estimated from sample data. Based on the estimated MPDF, a novel algorithm, denoted MGC, is proposed for model selection and parameter estimation in the GMM: the component number and the means of the GMM are determined, respectively, by the number and the locations of the maximum points of the MPDF, and numerical algorithms for the weight and variance parameters of the GMM are derived. The MGC is suitable for GMMs with diagonal covariance matrices. An MGC-EM algorithm is also presented for the generalized GMM, where the GMM is estimated using the EM algorithm with the estimates from the MGC as initial parameters. The proposed algorithms are tested on a series of simulated sample sets from given GMM models, and the results show that they can effectively estimate the GMM.
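The MPDF idea can be illustrated in its simplest empirical form: convolving the data-generating mixture with a Gaussian window of scale h corresponds, at the level of the sample-based estimate, to a Gaussian kernel density estimate with bandwidth h, whose local maxima suggest the component number and the component means. The data, scale and grid below are assumed for illustration.

```python
import math, random

random.seed(2)

# sample from a two-component GMM: weights .4/.6, means -3/2, sds 1/0.8
data = ([random.gauss(-3.0, 1.0) for _ in range(400)]
        + [random.gauss(2.0, 0.8) for _ in range(600)])

def mpdf(x, h):
    """Empirical multiscale density: a Gaussian KDE with window scale h."""
    return sum(math.exp(-0.5 * ((x - d) / h) ** 2) for d in data) / (
        len(data) * h * math.sqrt(2.0 * math.pi))

# local maxima of the MPDF on a grid: their count suggests the component
# number and their locations the component means
h = 0.8
grid = [-8.0 + 0.05 * i for i in range(321)]
vals = [mpdf(g, h) for g in grid]
peaks = [grid[i] for i in range(1, len(grid) - 1)
         if vals[i] > vals[i - 1] and vals[i] > vals[i + 1]]
```

In the paper's MGC-EM variant, peak locations like these would seed the means of an EM refinement of the full GMM.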


Copyright©北京勤云科技发展有限公司  京ICP备09084417号