Similar Documents
20 similar documents retrieved (search time: 0 ms)
1.
A Bayesian approach to modeling a rich class of nonconjugate problems is presented. An adaptive Monte Carlo integration technique known as the Gibbs sampler is proposed as a mechanism for implementing a conceptually and computationally simple solution in such a framework. The result is a general strategy for obtaining marginal posterior densities under changing specification of the model error densities and related prior densities. We illustrate the approach in a nonlinear regression setting, comparing the merits of three candidate error distributions.
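A minimal sketch of the Gibbs mechanism this abstract refers to: alternate draws from each full conditional distribution. The example below targets a bivariate normal with correlation rho, where both full conditionals are univariate normals; it illustrates the general technique only, not the paper's nonlinear-regression implementation, and all function and variable names are ours.

```python
import numpy as np

def gibbs_bivariate_normal(rho, n_iter=5000, seed=0):
    """Two-block Gibbs sampler for a standard bivariate normal with
    correlation rho: each full conditional is N(rho * other, 1 - rho**2)."""
    rng = np.random.default_rng(seed)
    x = y = 0.0                          # arbitrary starting values
    draws = np.empty((n_iter, 2))
    for t in range(n_iter):
        x = rng.normal(rho * y, np.sqrt(1.0 - rho**2))  # draw x | y
        y = rng.normal(rho * x, np.sqrt(1.0 - rho**2))  # draw y | x
        draws[t] = (x, y)
    return draws

draws = gibbs_bivariate_normal(rho=0.8)
print(np.corrcoef(draws[1000:].T))       # sample correlation approaches 0.8
```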

2.
In this paper we propose fast approximate methods for computing posterior marginals in spatial generalized linear mixed models. We consider the common geostatistical case with a high dimensional latent spatial variable and observations at known registration sites. The methods of inference are deterministic, using no simulation-based inference. The first proposed approximation is fast to compute and is 'practically sufficient', meaning that results do not show any bias or dispersion effects that might affect decision making. Our second approximation, an improvement of the first version, is 'practically exact', meaning that one would have to run MCMC simulations for very much longer than is typically done to detect any indication of error in the approximate results. For small-count data the approximations are slightly worse, but still very accurate. Our methods are limited to likelihood functions that give unimodal full conditionals for the latent variable. The methods help to expand the future scope of non-Gaussian geostatistical models as illustrated by applications of model choice, outlier detection and sampling design. The approximations take seconds or minutes of CPU time, in sharp contrast to overnight MCMC runs for solving such problems.

3.
Generalized Gibbs samplers can update along arbitrary directions, not only the coordinate directions of the parameters of the objective function. We study how to choose such directions optimally in a random scan Gibbs sampler setting. We consider optimal directions to be those that minimize the Kullback–Leibler divergence of two Markov chain Monte Carlo steps. Two distributions over directions are proposed for the multivariate Normal objective function. The resulting algorithms are used to simulate from a truncated multivariate Normal distribution, and their performance is compared with that of two algorithms based on the Gibbs sampler.
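For a multivariate Normal target, the full conditional along any direction through the current point is itself a univariate normal, which is what makes direction-based Gibbs updates easy to implement. The sketch below draws directions uniformly on the sphere rather than from the optimized direction distributions proposed in the paper; function and variable names are illustrative.

```python
import numpy as np

def random_direction_gibbs(mu, Sigma, n_iter=5000, seed=0):
    """Generalized Gibbs sampler for N(mu, Sigma): at each step pick a random
    direction d and draw the step size t from the exact univariate normal
    conditional of the target along the line x + t*d."""
    rng = np.random.default_rng(seed)
    Q = np.linalg.inv(Sigma)             # precision matrix
    mu = np.asarray(mu, dtype=float)
    x = mu.copy()                        # start at the mean
    draws = np.empty((n_iter, mu.size))
    for i in range(n_iter):
        d = rng.normal(size=mu.size)
        d /= np.linalg.norm(d)           # uniform direction on the sphere
        dQd = d @ Q @ d
        mean_t = -(d @ Q @ (x - mu)) / dQd
        x = x + rng.normal(mean_t, np.sqrt(1.0 / dQd)) * d
        draws[i] = x
    return draws

draws = random_direction_gibbs([0.0, 0.0], [[1.0, 0.9], [0.9, 1.0]])
print(draws.mean(axis=0), np.cov(draws.T))  # should approach mu and Sigma
```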

4.
This paper presents a kernel estimation of the distribution of the scale parameter of the inverse Gaussian distribution under type II censoring together with the distribution of the remaining time. Estimation is carried out via the Gibbs sampling algorithm combined with a missing data approach. Estimates and confidence intervals for the parameters of interest are also presented.

5.
Item response theory (IRT) comprises a set of statistical models which are useful in many fields, especially when there is an interest in studying latent variables (or latent traits). Usually such latent traits are assumed to be random variables and a convenient distribution is assigned to them. A very common choice for such a distribution has been the standard normal. Recently, Azevedo et al. [Bayesian inference for a skew-normal IRT model under the centred parameterization, Comput. Stat. Data Anal. 55 (2011), pp. 353–365] proposed a skew-normal distribution under the centred parameterization (SNCP), as studied in [R.B. Arellano-Valle and A. Azzalini, The centred parametrization for the multivariate skew-normal distribution, J. Multivariate Anal. 99(7) (2008), pp. 1362–1382], to model the latent trait distribution. This approach allows one to represent any asymmetric behaviour concerning the latent trait distribution. They also developed a Metropolis–Hastings within Gibbs sampling (MHWGS) algorithm based on the density of the SNCP and showed that the algorithm recovers all parameters properly. Their results indicated that, in the presence of asymmetry, the proposed model and the estimation algorithm perform better than the usual model and estimation methods. Our main goal in this paper is to propose another type of MHWGS algorithm based on a stochastic representation (hierarchical structure) of the SNCP studied in [N. Henze, A probabilistic representation of the skew-normal distribution, Scand. J. Statist. 13 (1986), pp. 271–275]. Our algorithm has only one Metropolis–Hastings step, in contrast to the algorithm developed by Azevedo et al., which has two such steps. This not only makes the implementation easier but also reduces the number of proposal densities to be used, which can be a problem in the implementation of MHWGS algorithms, as can be seen in [R.J. Patz and B.W. Junker, A straightforward approach to Markov Chain Monte Carlo methods for item response models, J. Educ. Behav. Stat. 24(2) (1999), pp. 146–178; R.J. Patz and B.W. Junker, The applications and extensions of MCMC in IRT: Multiple item types, missing data, and rated responses, J. Educ. Behav. Stat. 24(4) (1999), pp. 342–366; A. Gelman, G.O. Roberts, and W.R. Gilks, Efficient Metropolis jumping rules, Bayesian Stat. 5 (1996), pp. 599–607]. Moreover, we consider a modified beta prior (which generalizes the one considered by Azevedo et al.) and a Jeffreys prior for the asymmetry parameter. Furthermore, we study the sensitivity of these priors as well as the use of different kernel densities for this parameter. Finally, we assess the impact of the number of examinees, the number of items and the asymmetry level on parameter recovery. Results of the simulation study indicated that our approach performed as well as that of Azevedo et al. in terms of parameter recovery, mainly when the Jeffreys prior is used. They also indicated that the asymmetry level has the highest impact on parameter recovery, even though this impact is relatively small. A real data analysis is considered jointly with the development of model fitting assessment tools. The results are compared with those obtained by Azevedo et al. and indicate that the hierarchical approach makes MCMC algorithms easier to implement, facilitates convergence diagnostics and can be very useful for fitting more complex skew IRT models.
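The hierarchical structure mentioned above is Henze's (1986) stochastic representation: a skew-normal variate is a weighted combination of a half-normal and an independent normal variate. The sketch below uses the direct parameterization with shape parameter lambda rather than the full centred parameterization of the paper; names are illustrative.

```python
import numpy as np

def rskewnormal(n, shape, seed=0):
    """Draws from the standard skew-normal SN(shape) via Henze's (1986)
    representation Z = delta*|U| + sqrt(1 - delta**2)*V, with U, V iid N(0,1)
    and delta = shape / sqrt(1 + shape**2)."""
    rng = np.random.default_rng(seed)
    delta = shape / np.sqrt(1.0 + shape**2)
    u = np.abs(rng.normal(size=n))           # half-normal component
    v = rng.normal(size=n)                   # independent normal component
    return delta * u + np.sqrt(1.0 - delta**2) * v

z = rskewnormal(100_000, shape=3.0)
delta = 3.0 / np.sqrt(10.0)
print(z.mean(), delta * np.sqrt(2.0 / np.pi))  # theoretical mean of SN(shape)
```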

6.
Statistical Inference for Linear Regression Models with Missing Skewed Data
This paper studies parameter estimation for linear regression models with missing skewed data. To overcome the distortion of the sample distribution caused by the missing skewed observations and to improve estimation of the regression coefficients, the scale parameter and the skewness parameter, a modified regression imputation method suited to missing data in linear regression models for skewed data is proposed. Simulation studies and a real-data example, together with comparisons against mean imputation, regression imputation and random regression imputation, show that the proposed modified regression imputation method is effective and feasible.

7.
We consider a class of autoregressive models with exogenous variables and power-transformed threshold GARCH (ARX-PTTGARCH) errors, a natural generalization of standard and special GARCH models. We propose a Bayesian approach that combines the Gibbs sampler with the Metropolis-Hastings algorithm and show that it can successfully estimate the parameters of ARX-PTTGARCH models.

8.
A stochastic epidemic model with several kinds of susceptibles is used to analyse temporal disease outbreak data from a Bayesian perspective. Prior distributions are used to model uncertainty in the actual numbers of susceptibles initially present. The posterior distribution of the parameters of the model is explored via Markov chain Monte Carlo methods. The methods are illustrated using two datasets, and the results are compared where possible to results obtained by previous analyses.

9.
Linear mixed models are one of the main methods for non-life insurance ratemaking. The usual linear mixed model assumes that the random error term follows a normal distribution, whereas insurance loss data are typically right-skewed, which limits the model's usefulness in ratemaking. By assuming instead that the random error term follows a skewed distribution, a skew linear mixed model can be built, improving the reasonableness of the resulting rates. Based on a set of real insurance loss data, several different skew linear mixed models are fitted using Bayesian MCMC methods and compared with the linear mixed model under the normality assumption, providing empirical evidence of the advantages of skew linear mixed models in non-life insurance ratemaking.

10.
In this article, we consider local influence analysis for the skew-normal linear mixed model (SN-LMM). As the observed data log-likelihood associated with the SN-LMM is intractable, Cook's well-known approach cannot be applied to obtain measures of local influence. Instead, we develop local influence measures following the approach of Zhu and Lee (2001, J. Roy. Statist. Soc. Ser. B 63: 111–126). This approach is based on the use of an EM-type algorithm and is measurement invariant under reparametrizations. Four specific perturbation schemes are discussed. Results obtained for a simulated data set and a real data set are reported, illustrating the usefulness of the proposed methodology.

11.
Bayesian inference for fractionally integrated exponential generalized autoregressive conditional heteroscedastic (FIEGARCH) models using Markov chain Monte Carlo (MCMC) methods is described. A simulation study is presented to assess the performance of the procedure in the presence of long-memory in the volatility. Samples from FIEGARCH processes are obtained upon considering the generalized error distribution (GED) for the innovation process. Different values for the tail-thickness parameter ν are considered, covering both scenarios: innovation processes with lighter (ν > 2) and heavier (ν < 2) tails than the Gaussian distribution (ν = 2). A comparison between the performance of quasi-maximum likelihood (QML) and MCMC procedures is also discussed. An application of the MCMC procedure to estimate the parameters of a FIEGARCH model for the daily log-returns of the S&P500 U.S. stock market index is provided.

12.
Approximate Bayesian Inference for Survival Models
Bayesian analysis of time-to-event data, usually called survival analysis, has received increasing attention in recent years. In Cox-type models it allows information from the full likelihood to be used instead of the partial likelihood, so that the baseline hazard function and the model parameters can be estimated jointly. In general, Bayesian methods permit full and exact posterior inference for any parameter or predictive quantity of interest. On the other hand, Bayesian inference often relies on Markov chain Monte Carlo (MCMC) techniques which, from the user's point of view, may appear slow at delivering answers. In this article, we show how a new inferential tool named integrated nested Laplace approximations can be adapted and applied to many survival models, making Bayesian analysis both fast and accurate without having to rely on MCMC-based inference.

13.
This study takes up inference in linear models with generalized error and generalized t distributions. For the generalized error distribution, two computational algorithms are proposed. The first is based on indirect Bayesian inference using an approximating finite scale mixture of normal distributions. The second is based on Gibbs sampling. The Gibbs sampler involves only drawing random numbers from standard distributions. This is important because previously the impression has been that an exact analysis of the generalized error regression model using Gibbs sampling is not possible. Next, we describe computational Bayesian inference for linear models with generalized t disturbances based on Gibbs sampling, exploiting the fact that the model is a mixture of generalized error distributions with inverse generalized gamma distributions for the scale parameter. The linear model with this specification has also been thought not to be amenable to exact Bayesian analysis. All computational methods are applied to actual data involving the exchange rates of the British pound, the French franc, and the German mark relative to the U.S. dollar.
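One convenient building block for such samplers is that the generalized error (exponential power) distribution itself can be simulated by a gamma transform. Assuming the parameterization f(x) proportional to exp(-|x/sigma|^p) (conventions vary across papers), the sketch below raises a Gamma(1/p, 1) draw to the power 1/p and attaches a random sign; it is an illustrative helper, not the authors' Gibbs scheme.

```python
import numpy as np

def rged(n, p, sigma=1.0, seed=0):
    """Draws from the exponential power / generalized error density
    f(x) = p / (2 * sigma * Gamma(1/p)) * exp(-|x/sigma|**p):
    if W ~ Gamma(1/p, 1), then sigma * sign * W**(1/p) has this law."""
    rng = np.random.default_rng(seed)
    w = rng.gamma(shape=1.0 / p, scale=1.0, size=n)
    sign = rng.choice([-1.0, 1.0], size=n)
    return sigma * sign * w ** (1.0 / p)

x = rged(200_000, p=1.5)                    # heavier tails than Gaussian (p < 2)
print(x.mean(), np.mean(np.abs(x) ** 1.5))  # mean near 0; E|X/sigma|**p = 1/p here
```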

14.
To study joint location and scale models with missing skewed data, and based on the characteristics of the distribution itself, an imputation method suited to joint modelling with missing skewed data, namely a modified random regression imputation method, is proposed; the method adjusts the model's skewness parameter very noticeably under missing data. Simulation studies and a real-data example, together with comparisons against regression imputation and random regression imputation, show that the proposed modified random regression imputation method is useful and effective.

15.
Spatial modeling is widely used in environmental sciences, biology, and epidemiology. Generalized linear mixed models employed to account for spatial variation in point-referenced data are called spatial generalized linear mixed models (SGLMMs). Frequentist analysis of this type of data is computationally difficult. On the other hand, the advent of the Markov chain Monte Carlo algorithm has made the Bayesian analysis of SGLMMs computationally convenient. The recent introduction of the method of data cloning, which leads to maximum likelihood estimates, has made frequentist analysis of mixed models equally computationally convenient. Data cloning has been employed to estimate model parameters in SGLMMs; however, the prediction of spatial random effects and kriging are also very important. In this article, we propose a frequentist approach based on data cloning to predict spatial random effects and perform kriging, together with prediction intervals. We illustrate this approach using a real dataset and also by a simulation study.
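Data cloning turns Bayesian machinery into a maximum likelihood device by pretending the data were observed K independent times: as K grows, the posterior concentrates at the MLE and its variance shrinks roughly like 1/K. The toy conjugate Poisson-gamma sketch below (not the spatial model of the paper) makes this concrete; all names and prior values are assumptions.

```python
import numpy as np

def cloned_posterior(y, a=0.5, b=0.5, K=1):
    """Poisson rate with a Gamma(a, b) prior: cloning the data K times gives
    a Gamma(a + K*sum(y), b + K*n) posterior, whose mean tends to the MLE
    (the sample mean) and whose variance shrinks roughly like 1/K."""
    y = np.asarray(y)
    shape = a + K * y.sum()
    rate = b + K * y.size
    return shape / rate, shape / rate**2      # posterior mean and variance

y = [2, 4, 3, 5, 1, 3]
for K in (1, 10, 100):
    print(K, cloned_posterior(y, K=K))
print("MLE:", np.mean(y))   # cloned posterior mean approaches this value
```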

16.
Linear mixed models have been widely used to analyze repeated measures data arising in many studies. In most applications, it is assumed that both the random effects and the within-subject errors are normally distributed. This can be extremely restrictive, obscuring important features of within- and among-subject variation. Here, quantile regression in the Bayesian framework for linear mixed models is described to carry out robust inference. We also relax the normality assumption for the random effects by using a multivariate skew-normal distribution, which includes the normal distribution as a special case and provides robust estimation in linear mixed models. For posterior inference, we propose a Gibbs sampling algorithm based on a mixture representation of the asymmetric Laplace distribution and the multivariate skew-normal distribution. The procedures are demonstrated by both simulated and real data examples.
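The Gibbs sampler for Bayesian quantile regression rests on writing the asymmetric Laplace error as an exponential-normal mixture. Assuming the representation Y = theta*V + tau*sqrt(V)*Z with V ~ Exp(1), Z ~ N(0,1), theta = (1-2p)/(p(1-p)) and tau^2 = 2/(p(1-p)) (a standard choice, stated here as an assumption rather than taken from the paper), the sketch below checks by simulation that the p-th quantile of Y is zero, which is what quantile regression at level p requires.

```python
import numpy as np

def ald_mixture_draws(p, n=500_000, seed=0):
    """Asymmetric Laplace draws at quantile level p via the
    exponential-normal mixture Y = theta*V + tau*sqrt(V)*Z."""
    rng = np.random.default_rng(seed)
    theta = (1.0 - 2.0 * p) / (p * (1.0 - p))
    tau = np.sqrt(2.0 / (p * (1.0 - p)))
    v = rng.exponential(scale=1.0, size=n)   # latent mixing variable
    z = rng.normal(size=n)
    return theta * v + tau * np.sqrt(v) * z

for p in (0.25, 0.5, 0.9):
    y = ald_mixture_draws(p)
    print(p, np.mean(y <= 0.0))   # empirical P(Y <= 0) should be close to p
```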

17.
We consider the estimation of a large number of GARCH models, of the order of several hundred. Our interest lies in the identification of common structures in the volatility dynamics of the univariate time series. To do so, we classify the series into an unknown number of clusters. Within a cluster, the series share the same model and the same parameters; each cluster therefore contains similar series. We do not know a priori which series belongs to which cluster. The model is a finite mixture of distributions, where the component weights are unknown parameters and each component distribution has its own conditional mean and variance. Inference is done by the Bayesian approach, using data augmentation techniques. Simulations and an illustration using data on U.S. stocks are provided.

18.
In this article two methods are proposed to make inferences about the parameters of a finite mixture of distributions in the context of partially identifiable censored data. The first method focuses on a mixture of location and scale models and relies on an asymptotic approximation to a suitably constructed augmented likelihood; the second method provides a full Bayesian analysis of the mixture based on a Gibbs sampler. Both methods make explicit use of latent variables and provide computationally efficient procedures compared to other methods which deal directly with the likelihood of the mixture. This may be crucial if the number of components in the mixture is not small. Our proposals are illustrated on a classical example on failure times for communication devices first studied by Mendenhall and Hader (1958, Biometrika 45: 504–520). In addition, we study the coverage of the confidence intervals obtained from each of the methods by means of a small simulation exercise.
