Similar Documents
 20 similar documents were retrieved.
1.
ABSTRACT

Fernández-Durán [Circular distributions based on nonnegative trigonometric sums. Biometrics. 2004;60:499–503] developed a new family of circular distributions based on non-negative trigonometric sums that is suitable for modelling data sets that present skewness and/or multimodality. In this paper, a Bayesian approach to deriving estimates of the unknown parameters of this family of distributions is presented. Because the parameter space is the surface of a hypersphere and the dimension of the hypersphere is an unknown parameter of the distribution, the Bayesian inference must be based on transdimensional Markov Chain Monte Carlo (MCMC) algorithms to obtain samples from the high-dimensional posterior distribution. The MCMC algorithm explores the parameter space by moving along great circles on the surface of the hypersphere. The methodology is illustrated with real and simulated data sets.
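The sampler described above moves along great circles on the surface of a hypersphere. As a rough, self-contained illustration of what such a great-circle random-walk Metropolis step can look like (not the authors' algorithm), here is a minimal sketch; the target log-posterior, the step size, and the toy von Mises-Fisher-style example are all assumptions made for the illustration.

```python
import numpy as np

def great_circle_proposal(x, step, rng):
    """Propose a new point on the unit hypersphere by moving along a
    random great circle passing through the current point x."""
    v = rng.normal(size=x.shape)
    v -= np.dot(v, x) * x          # remove the radial component: v is now tangent at x
    v /= np.linalg.norm(v)         # unit tangent direction
    theta = step * rng.normal()    # random angle travelled along the great circle
    return np.cos(theta) * x + np.sin(theta) * v

def metropolis_on_sphere(log_post, x0, n_iter=5000, step=0.3, seed=0):
    """Random-walk Metropolis on the unit hypersphere (the proposal is symmetric)."""
    rng = np.random.default_rng(seed)
    x = x0 / np.linalg.norm(x0)
    lp = log_post(x)
    samples = np.empty((n_iter, x.size))
    for i in range(n_iter):
        y = great_circle_proposal(x, step, rng)
        lp_y = log_post(y)
        if np.log(rng.uniform()) < lp_y - lp:   # Metropolis accept/reject
            x, lp = y, lp_y
        samples[i] = x
    return samples

# Toy target: density proportional to exp(c . x) on the sphere (von Mises-Fisher-like).
c = np.array([2.0, 0.0, 1.0, -1.0])
draws = metropolis_on_sphere(lambda x: c @ x, x0=np.ones(4))
print(draws[1000:].mean(axis=0))
```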

2.
Markov chain Monte Carlo (MCMC) algorithms have been shown to be useful for estimation of complex item response theory (IRT) models. Although an MCMC algorithm can be very useful, it also requires care in use and interpretation of results. In particular, MCMC algorithms generally make extensive use of priors on model parameters. In this paper, MCMC estimation is illustrated using a simple mixture IRT model, a mixture Rasch model (MRM), to demonstrate how the algorithm operates and how results may be affected by some commonly used priors. Priors on the probabilities of mixtures, label switching, model selection, metric anchoring, and implementation of the MCMC algorithm using WinBUGS are described, and their effects on parameter recovery in practical testing situations are illustrated. In addition, an example is presented in which an MRM is fitted to a set of educational test data using the MCMC algorithm, and the results are compared with those from three existing maximum likelihood estimation methods.

3.
In the expectation–maximization (EM) algorithm for maximum likelihood estimation from incomplete data, Markov chain Monte Carlo (MCMC) methods have long been used in change-point inference when the expectation step is intractable. However, conventional MCMC algorithms tend to get trapped in local modes when simulating from the posterior distribution of change points. To overcome this problem, in this paper we propose a stochastic approximation Monte Carlo version of EM (SAMCEM), which combines adaptive Markov chain Monte Carlo with EM for maximum likelihood estimation. SAMCEM is compared with the stochastic approximation version of EM and the reversible jump Markov chain Monte Carlo version of EM on simulated and real datasets. The numerical results indicate that SAMCEM outperforms the other two methods, producing markedly more accurate parameter estimates and recovering change-point positions and parameter estimates simultaneously.

4.
In this paper, efficient importance sampling (EIS) is used to perform a classical and Bayesian analysis of univariate and multivariate stochastic volatility (SV) models for financial return series. EIS provides a highly generic and very accurate procedure for the Monte Carlo (MC) evaluation of high-dimensional interdependent integrals. It can be used to carry out maximum likelihood (ML) estimation of SV models as well as simulation smoothing, where the latent volatilities are sampled all at once. Based on this EIS simulation smoother, a Bayesian Markov chain Monte Carlo (MCMC) posterior analysis of the parameters of SV models can be performed.
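EIS constructs its importance density by an iterative least-squares fit, which is beyond a short snippet; the sketch below only illustrates the plain importance-sampling identity it refines, namely estimating a likelihood of the form p(y) = ∫ p(y | h) p(h) dh from draws of an auxiliary density m(h). The toy SV-style model and the choice of m are assumptions, not the authors' procedure.

```python
import numpy as np
from scipy import stats

def is_likelihood(y, m_mean, m_sd, n_draws=10000, seed=0):
    """Importance-sampling estimate of p(y) = E_{p(h)}[p(y | h)] for a toy
    SV-style model: h ~ N(0, 1) (latent log-volatility), y | h ~ N(0, exp(h))."""
    rng = np.random.default_rng(seed)
    h = rng.normal(m_mean, m_sd, size=n_draws)            # draws from the importance density m
    log_w = (stats.norm.logpdf(y, scale=np.exp(h / 2))    # measurement density p(y | h)
             + stats.norm.logpdf(h)                        # prior p(h)
             - stats.norm.logpdf(h, m_mean, m_sd))         # importance density m(h)
    return np.exp(log_w).mean()                            # average of importance weights

print(is_likelihood(y=0.5, m_mean=0.0, m_sd=1.2))
```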

5.

6.
7.
This article designs a Sequential Monte Carlo (SMC) algorithm for estimation of a Bayesian semi-parametric stochastic volatility model for financial data. In particular, it makes use of one of the most recent particle filters, called Particle Learning (PL). SMC methods are especially well suited to state-space models and can be seen as a cost-efficient alternative to Markov chain Monte Carlo (MCMC), since they allow for online inference: the posterior distributions are updated as new data are observed, which is exceedingly costly with MCMC. Also, PL allows for consistent online model comparison using sequential predictive log Bayes factors. A simulated dataset is used to compare the posterior outputs for the PL and MCMC schemes, which are shown to be almost identical. Finally, a short real-data application is included.
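Particle Learning adds parameter learning via sufficient statistics on top of a particle filter, which is too involved to reproduce here. As a loose illustration of the sequential (online) updating the abstract refers to, the following is a plain bootstrap particle filter for a basic stochastic volatility state equation; the model parametrization and parameter values are assumptions for the example only.

```python
import numpy as np

def bootstrap_filter_sv(y, mu, phi, sigma_eta, n_part=2000, seed=0):
    """Bootstrap particle filter for the basic SV model:
        h_t = mu + phi * (h_{t-1} - mu) + sigma_eta * eps_t,  y_t | h_t ~ N(0, exp(h_t)).
    Returns the filtered means of the log-volatility h_t."""
    rng = np.random.default_rng(seed)
    stat_sd = sigma_eta / np.sqrt(1 - phi**2)
    h = rng.normal(mu, stat_sd, size=n_part)          # particles from the stationary law
    filt_means = []
    for yt in y:
        # propagate particles through the state equation
        h = mu + phi * (h - mu) + sigma_eta * rng.normal(size=n_part)
        # weight by the measurement density of y_t given h_t (constants dropped)
        logw = -0.5 * (h + yt**2 * np.exp(-h))
        w = np.exp(logw - logw.max())
        w /= w.sum()
        filt_means.append(np.sum(w * h))
        # multinomial resampling
        h = h[rng.choice(n_part, size=n_part, p=w)]
    return np.array(filt_means)

# Toy run on simulated returns
rng = np.random.default_rng(1)
T, mu, phi, sig = 200, -1.0, 0.95, 0.2
h = np.empty(T); h[0] = mu
for t in range(1, T):
    h[t] = mu + phi * (h[t - 1] - mu) + sig * rng.normal()
y = np.exp(h / 2) * rng.normal(size=T)
print(bootstrap_filter_sv(y, mu, phi, sig)[:5])
```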

8.
An automated (Markov chain) Monte Carlo EM algorithm
We present an automated Monte Carlo EM (MCEM) algorithm which efficiently assesses Monte Carlo error in the presence of dependent Monte Carlo, particularly Markov chain Monte Carlo, E-step samples and chooses an appropriate Monte Carlo sample size to minimize this Monte Carlo error with respect to progressive EM step estimates. Monte Carlo error is gauged through an application of the central limit theorem during renewal periods of the MCMC sampler used in the E-step. The resulting normal approximation allows us to construct a rigorous and adaptive rule for updating the Monte Carlo sample size at each iteration of the MCEM algorithm. We illustrate our automated routine and compare its performance with competing MCEM algorithms in an analysis of a data set fit by a generalized linear mixed model.
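The automated rule in the paper is based on a central limit theorem at renewal periods of the E-step sampler; reproducing it faithfully would take more than a few lines. The sketch below shows only the generic MCEM idea on a toy right-censored normal model, with a crude deterministic increase of the Monte Carlo sample size standing in for the paper's error-driven rule; the model and the growth factor are assumptions.

```python
import numpy as np
from scipy import stats

def mcem_censored_normal(y_obs, c, n_cens, sigma=1.0, n_iter=25, m0=100, seed=0):
    """Toy MCEM for the mean mu of N(mu, sigma^2) data when n_cens observations
    are only known to exceed the censoring point c. The E-step expectation is
    replaced by a Monte Carlo average over draws from the truncated normal, and
    the MC sample size grows each iteration (a crude stand-in for an adaptive rule)."""
    rng = np.random.default_rng(seed)
    mu = y_obs.mean()
    m = m0
    for _ in range(n_iter):
        # MC E-step: impute censored values from N(mu, sigma^2) truncated to (c, inf)
        a = (c - mu) / sigma
        z = stats.truncnorm.rvs(a, np.inf, loc=mu, scale=sigma,
                                size=(m, n_cens), random_state=rng)
        # M-step: maximize the complete-data log-likelihood (sample mean of all values)
        mu = (y_obs.sum() + z.mean(axis=0).sum()) / (len(y_obs) + n_cens)
        m = int(m * 1.1)   # increase Monte Carlo effort as the estimates stabilise
    return mu

rng = np.random.default_rng(2)
x = rng.normal(1.0, 1.0, size=500)
c = 1.5
print(mcem_censored_normal(x[x <= c], c, n_cens=int((x > c).sum())))
```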

9.
The maximum likelihood and Bayesian approaches for parameter estimation and prediction of future record values are considered for the two-parameter Burr Type XII distribution, based on record values together with the number of trials following the record values (inter-record times). First, the Bayes estimates are obtained based on a joint bivariate prior for the shape parameters. In this case, the Bayes estimates of the parameters are developed using Lindley's approximation and the Markov Chain Monte Carlo (MCMC) method, owing to the lack of explicit forms under the squared error and linear-exponential loss functions. The MCMC method is also used to construct highest posterior density credible intervals. Second, the Bayes estimates are obtained with respect to a discrete prior for the first shape parameter and a conjugate prior for the other shape parameter. The Bayes and maximum likelihood estimates are compared in terms of estimated risk through Monte Carlo simulations. We further consider non-Bayesian and Bayesian prediction of future lower record values arising from the Burr Type XII distribution based on record data. The comparison of the derived predictors is carried out using Monte Carlo simulations. Real data are analysed for illustration purposes.

10.
Markov chain Monte Carlo (MCMC) algorithms have revolutionized Bayesian practice. In their simplest form (i.e., when parameters are updated one at a time) they are, however, often slow to converge when applied to high-dimensional statistical models. A remedy for this problem is to block the parameters into groups, which are then updated simultaneously using either a Gibbs or Metropolis-Hastings step. In this paper we construct several (partially and fully blocked) MCMC algorithms for minimizing the autocorrelation in MCMC samples arising from important classes of longitudinal data models. We exploit an identity used by Chib (1995) in the context of Bayes factor computation to show how the parameters in a general linear mixed model may be updated in a single block, improving convergence and producing essentially independent draws from the posterior of the parameters of interest. We also investigate the value of blocking in non-Gaussian mixed models, as well as in a class of binary response data longitudinal models. We illustrate the approaches in detail with three real-data examples.
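The point of blocking is that jointly updating highly correlated parameters reduces autocorrelation in the MCMC output. A minimal way to see this effect (a toy illustration, not the paper's mixed-model blocking scheme) is to compare single-site and blocked Gibbs updates on a strongly correlated bivariate normal target:

```python
import numpy as np

def gibbs_bivariate_normal(rho, n_iter=20000, blocked=False, seed=0):
    """Sample from a bivariate normal with correlation rho, either one coordinate
    at a time (single-site Gibbs) or both coordinates jointly (a blocked update).
    Strong correlation makes the single-site chain mix slowly."""
    rng = np.random.default_rng(seed)
    cov = np.array([[1.0, rho], [rho, 1.0]])
    x = np.zeros(2)
    out = np.empty((n_iter, 2))
    for i in range(n_iter):
        if blocked:
            x = rng.multivariate_normal(np.zeros(2), cov)        # exact joint draw
        else:
            x[0] = rng.normal(rho * x[1], np.sqrt(1 - rho**2))   # x1 | x2
            x[1] = rng.normal(rho * x[0], np.sqrt(1 - rho**2))   # x2 | x1
        out[i] = x
    return out

def lag1_autocorr(s):
    return np.corrcoef(s[:-1], s[1:])[0, 1]

for blocked in (False, True):
    s = gibbs_bivariate_normal(rho=0.95, blocked=blocked)
    print("blocked" if blocked else "single-site", lag1_autocorr(s[:, 0]))
```

With rho = 0.95 the single-site chain shows high lag-1 autocorrelation, while the blocked chain produces essentially independent draws, which is the behaviour the abstract describes for the blocked mixed-model updates.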

11.
In this paper, we present an adaptive evolutionary Monte Carlo algorithm (AEMC), which combines a tree-based predictive model with an evolutionary Monte Carlo sampling procedure for the purpose of global optimization. Our development is motivated by sensor placement applications in engineering, which require optimizing complicated “black-box” objective functions. The proposed method enhances optimization efficiency and effectiveness compared to a few alternative strategies. AEMC falls into the category of adaptive Markov chain Monte Carlo (MCMC) algorithms and is the first adaptive MCMC algorithm that simulates multiple Markov chains in parallel. A theorem about the ergodicity of the AEMC algorithm is stated and proven. We demonstrate the advantages of the proposed method by applying it to a sensor placement problem in a manufacturing process, as well as to the standard Griewank test function.

12.
When functional data are not homogeneous, for example, when there are multiple classes of functional curves in the dataset, traditional estimation methods may fail. In this article, we propose a new estimation procedure for the mixture of Gaussian processes to incorporate both the functional and the inhomogeneous properties of the data. Our method can be viewed as a natural extension of high-dimensional normal mixtures. The key difference, however, is that smoothed structures are imposed on both the mean and covariance functions. The model is shown to be identifiable and can be estimated efficiently by combining ideas from the expectation-maximization (EM) algorithm, kernel regression, and functional principal component analysis. Our methodology is empirically justified by Monte Carlo simulations and illustrated by an analysis of a supermarket dataset.

13.
Feature selection arises in many areas of modern science. For example, in genomic research, we want to find the genes that can be used to separate tissues of different classes (e.g. cancer and normal). One approach is to fit regression/classification models with certain penalization. In the past decade, hyper-LASSO penalties (priors) have received increasing attention in the literature. However, fully Bayesian methods that use Markov chain Monte Carlo (MCMC) for regression/classification with hyper-LASSO priors remain underdeveloped. In this paper, we introduce an MCMC method for learning multinomial logistic regression with hyper-LASSO priors. Our MCMC algorithm uses Hamiltonian Monte Carlo within a restricted Gibbs sampling framework. We use simulation studies and real data to demonstrate the superior performance of hyper-LASSO priors compared to LASSO, and to investigate the issues of choosing the heaviness and scale of hyper-LASSO priors.
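The paper's sampler runs Hamiltonian Monte Carlo within a restricted Gibbs framework for multinomial logistic regression; that full scheme is not reproduced here. As a hedged sketch of the HMC ingredient, the following applies a generic leapfrog-based HMC update to a binary logistic regression with heavy-tailed Cauchy priors, used only as a rough stand-in for the heavy tails of hyper-LASSO priors; the data, priors, and tuning constants are all assumptions.

```python
import numpy as np

def hmc_sample(log_post, grad, beta0, n_iter=2000, eps=0.05, L=20, seed=0):
    """Basic Hamiltonian Monte Carlo with a leapfrog integrator."""
    rng = np.random.default_rng(seed)
    beta = beta0.copy()
    draws = np.empty((n_iter, beta.size))
    for i in range(n_iter):
        p = rng.normal(size=beta.size)                    # refresh momentum
        b, g = beta.copy(), grad(beta)
        current_H = -log_post(beta) + 0.5 * p @ p         # Hamiltonian at the start
        for _ in range(L):                                # leapfrog trajectory
            p = p + 0.5 * eps * g
            b = b + eps * p
            g = grad(b)
            p = p + 0.5 * eps * g
        proposed_H = -log_post(b) + 0.5 * p @ p
        if np.log(rng.uniform()) < current_H - proposed_H:
            beta = b
        draws[i] = beta
    return draws

# Toy target: logistic regression with standard Cauchy priors on the coefficients,
# a loose heavy-tailed stand-in for a hyper-LASSO prior.
rng = np.random.default_rng(3)
X = rng.normal(size=(200, 5))
true_beta = np.array([2.0, 0.0, 0.0, -1.5, 0.0])
y = rng.binomial(1, 1 / (1 + np.exp(-X @ true_beta)))

def log_post(beta):
    eta = X @ beta
    loglik = y @ eta - np.logaddexp(0, eta).sum()         # Bernoulli log-likelihood
    logprior = -np.log1p(beta**2).sum()                   # standard Cauchy log-prior (up to constants)
    return loglik + logprior

def grad(beta):
    mu = 1 / (1 + np.exp(-(X @ beta)))
    return X.T @ (y - mu) - 2 * beta / (1 + beta**2)

draws = hmc_sample(log_post, grad, beta0=np.zeros(5))
print(draws[1000:].mean(axis=0))
```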

14.
Bayesian analysis of panel data using an MTAR model
Bayesian analysis of panel data using a class of momentum threshold autoregressive (MTAR) models is considered. Posterior estimation of the parameters of the MTAR models is done using a simple Markov Chain Monte Carlo (MCMC) algorithm. Selection of appropriate differenced variables and tests for asymmetry and unit roots are recast as model selection problems, and a simple way of computing posterior probabilities of the candidate models is proposed. The proposed method is applied to the yearly unemployment rates of 51 US states, and the results show strong evidence of stationarity and asymmetry.

15.
ABSTRACT

The maximum likelihood and Bayesian approaches for estimating the parameters and predicting future record values for the Kumaraswamy distribution are considered when the lower record values, along with the number of observations following the record values (inter-record times), have been observed. The Bayes estimates are obtained based on a joint bivariate prior for the shape parameters. In this case, the Bayes estimates of the parameters are developed using Lindley's approximation and the Markov Chain Monte Carlo (MCMC) method, owing to the lack of explicit forms under the squared error and linear-exponential loss functions. The MCMC method is also used to construct highest posterior density credible intervals. The Bayes and maximum likelihood estimates are compared using the estimated risk through Monte Carlo simulations. We further consider non-Bayesian and Bayesian prediction of future lower record values arising from the Kumaraswamy distribution, based on record values with their corresponding inter-record times and on record values only. The comparison of the derived predictors is carried out using Monte Carlo simulations. Real data are analysed for an illustration of the findings.

16.
We develop Bayesian procedures to make inference about the parameters of a statistical design with autocorrelated error terms. Modelling treatment effects can be complex in the presence of other factors such as time, for example in longitudinal data. In this paper, Markov chain Monte Carlo (MCMC) methods, namely the Metropolis-Hastings algorithm and the Gibbs sampler, are used to facilitate the Bayesian analysis of real-life data when the error structure can be expressed as an autoregressive model of order p. We illustrate our analysis with real data.

17.
In this paper, we discuss fully Bayesian quantile inference using the Markov Chain Monte Carlo (MCMC) method for longitudinal data models with random effects. Under the assumption that the error term follows an asymmetric Laplace distribution, we establish a hierarchical Bayesian model and obtain the posterior distribution of the unknown parameters at the τ-th level. We overcome the current computational limitations using two approaches: one is the general MCMC technique with the Metropolis–Hastings algorithm, and the other is Gibbs sampling from the full conditional distribution. These two methods outperform the traditional frequentist methods under a wide array of simulated data models and are flexible enough to easily accommodate changes in the number of random effects and in their assumed distribution. We apply the Gibbs sampling method to analyse a mouse growth dataset and reach some conclusions that differ from those in the literature.
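The paper's models include random effects and use both Metropolis–Hastings and Gibbs updates; the sketch below strips this down to a cross-sectional quantile regression with the asymmetric Laplace working likelihood (scale fixed at 1), a flat prior on the coefficients, and a random-walk Metropolis update, purely to illustrate the basic idea. The simulated data and tuning constants are assumptions.

```python
import numpy as np

def check_loss(u, tau):
    """Asymmetric (check) loss underlying the asymmetric Laplace likelihood."""
    return u * (tau - (u < 0))

def bayes_quantile_reg(X, y, tau, n_iter=10000, step=0.05, seed=0):
    """Random-walk Metropolis for quantile regression at level tau, using the
    asymmetric Laplace working likelihood (scale 1) and flat priors on beta."""
    rng = np.random.default_rng(seed)
    beta = np.zeros(X.shape[1])

    def log_post(b):
        return -check_loss(y - X @ b, tau).sum()

    lp = log_post(beta)
    draws = np.empty((n_iter, beta.size))
    for i in range(n_iter):
        prop = beta + step * rng.normal(size=beta.size)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            beta, lp = prop, lp_prop
        draws[i] = beta
    return draws

# Toy data: the intercept estimate targets the 0.75 quantile of the t_3 errors.
rng = np.random.default_rng(4)
X = np.column_stack([np.ones(300), rng.normal(size=300)])
y = X @ np.array([1.0, 2.0]) + rng.standard_t(df=3, size=300)
print(bayes_quantile_reg(X, y, tau=0.75)[5000:].mean(axis=0))
```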

18.
Summary  In panel studies, binary outcome measures together with time-stationary and time-varying explanatory variables are collected over time on the same individual. Therefore, a regression analysis for this type of data must allow for the correlation among the outcomes of an individual. The multivariate probit model of Ashford and Sowden (1970) was the first regression model for multivariate binary responses. However, a likelihood analysis of the multivariate probit model with a general correlation structure in higher dimensions is intractable due to the maximization over high-dimensional integrals, which has severely restricted its applicability so far. Czado (1996) developed a Markov Chain Monte Carlo (MCMC) algorithm to overcome this difficulty. In this paper we present an application of this algorithm to unemployment data from the Panel Study of Income Dynamics involving 11 waves of the panel study. In addition, we adapt Bayesian model checking techniques based on the posterior predictive distribution (see, for example, Gelman et al. (1996)) for the multivariate probit model. These help to identify mean and correlation specifications which fit the data well. C. Czado was supported by research grant OGP0089858 of the Natural Sciences and Engineering Research Council of Canada.

19.
In spatial generalized linear mixed models (SGLMMs), statistical inference is difficult because the random effects in the model lead to high-dimensional integrals in the marginal likelihood function. In this article, we temporarily treat the parameters as random variables and express the marginal likelihood function as a posterior expectation. Hence, the marginal likelihood function is approximated using samples from the posterior density of the latent variables and parameters given the data. However, in this setting, misspecification of the prior distribution of the correlation function parameter and problems associated with the convergence of Markov chain Monte Carlo (MCMC) methods could adversely affect the likelihood approximation. To avoid these challenges, we use an empirical Bayes approach to estimate the prior hyperparameters. We also use a computationally efficient hybrid algorithm that combines the inverse Bayes formula (IBF) and Gibbs sampler procedures. A simulation study is conducted to assess the performance of our method. Finally, we illustrate the method by applying it to a dataset of standard penetration tests of soil in an area in southern Iran.

20.
One form of data collected in the study of infectious diseases concerns the transmission of a disease within households. We consider a model which allows the rate of disease transmission to vary between households. A Bayesian hierarchical approach to fitting the model is proposed and is implemented via the Metropolis–Hastings algorithm, a standard Markov chain Monte Carlo (MCMC) method. Results are presented for both simulated epidemic chain data and the Providence measles data, illustrating the potential that MCMC methods have for dealing with heterogeneity in infectious disease transmission.
