Similar Articles
 20 similar articles found (search time: 31 ms)
1.
This paper presents a Bayesian analysis of the projected normal distribution, which is a flexible and useful distribution for the analysis of directional data. We obtain samples from the posterior distribution using the Gibbs sampler after the introduction of suitably chosen latent variables. The procedure is illustrated using simulated data as well as a real data set previously analysed in the literature.
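The projected-normal sampler itself requires model-specific latent variables, but the Gibbs mechanics that this and the following abstracts rely on can be sketched on a toy target. A minimal sketch, assuming a standard bivariate normal with known correlation `rho` (all names and values are illustrative, not taken from the paper):

```python
import numpy as np

def gibbs_bivariate_normal(rho, n_iter=20000, seed=0):
    """Gibbs sampler for a standard bivariate normal with correlation rho.

    Each full conditional is itself normal:
        x | y ~ N(rho * y, 1 - rho^2)
        y | x ~ N(rho * x, 1 - rho^2)
    """
    rng = np.random.default_rng(seed)
    s = np.sqrt(1.0 - rho ** 2)  # conditional standard deviation
    x = y = 0.0
    draws = np.empty((n_iter, 2))
    for t in range(n_iter):
        x = rng.normal(rho * y, s)  # update x from its full conditional
        y = rng.normal(rho * x, s)  # update y given the *new* x
        draws[t] = (x, y)
    return draws

draws = gibbs_bivariate_normal(rho=0.8)
burned = draws[2000:]                  # discard burn-in
emp_rho = np.corrcoef(burned.T)[0, 1]  # should be close to 0.8
```

The scheme generalizes directly: latent-variable methods such as the one in this abstract augment the model so that every full conditional takes an easily sampled form like the two normals above.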

2.
Previous time series applications of qualitative response models have ignored features of the data, such as conditional heteroscedasticity, that are routinely addressed in time series econometrics of financial data. This article addresses this issue by adding Markov-switching heteroscedasticity to a dynamic ordered probit model of discrete changes in the bank prime lending rate and estimating via the Gibbs sampler. The dynamic ordered probit model of Eichengreen, Watson, and Grossman allows for serial correlation in probit analysis of a time series, and this article demonstrates the relative simplicity of estimating a dynamic ordered probit using the Gibbs sampler instead of the Eichengreen et al. maximum likelihood procedure. In addition, the extension to regime-switching parameters and conditional heteroscedasticity is easy to implement under Gibbs sampling. The article compares goodness-of-fit tests for dynamic ordered probit models of the prime rate with constant variance and with conditional heteroscedasticity.
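The dynamic ordered probit with Markov-switching variances is involved, but the data-augmentation idea it builds on can be sketched for a plain binary probit (the Albert–Chib scheme): draw latent normal utilities given the observed responses, then draw the coefficient given the latent data. The simulated data and all variable names below are illustrative:

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(1)

# Simulated probit data: y_i = 1{ x_i * beta + e_i > 0 }, true beta = 1.0
n, beta_true = 500, 1.0
x = rng.normal(size=n)
y = (x * beta_true + rng.normal(size=n) > 0).astype(int)

xtx_inv = 1.0 / np.dot(x, x)  # (X'X)^{-1} for a single covariate, flat prior
beta = 0.0
draws = []
for it in range(1500):
    # 1) latent utilities z_i | beta, y_i: N(x_i*beta, 1), truncated to the
    #    positive axis when y_i = 1 and to the negative axis when y_i = 0
    mu = x * beta
    lo = np.where(y == 1, -mu, -np.inf)  # standardized lower bounds
    hi = np.where(y == 1, np.inf, -mu)   # standardized upper bounds
    z = mu + truncnorm.rvs(lo, hi, random_state=rng)
    # 2) beta | z ~ N((X'X)^{-1} X'z, (X'X)^{-1}) under a flat prior
    beta = rng.normal(xtx_inv * np.dot(x, z), np.sqrt(xtx_inv))
    draws.append(beta)

beta_hat = np.mean(draws[500:])  # posterior mean, should be near 1.0
```

The ordered, dynamic, and regime-switching extensions in the abstract keep this two-step structure and add further blocks (bin boundaries, state variables, variances) to the scan.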

3.
We demonstrate the use of auxiliary (or latent) variables for sampling non-standard densities which arise in the context of the Bayesian analysis of non-conjugate and hierarchical models by using a Gibbs sampler. Their strategic use can result in a Gibbs sampler having easily sampled full conditionals. We propose such a procedure to simplify or speed up the Markov chain Monte Carlo algorithm. The strength of this approach lies in its generality and its ease of implementation. The aim of the paper, therefore, is to provide an alternative sampling algorithm to rejection-based methods and other sampling approaches such as the Metropolis–Hastings algorithm.

4.
Nonlinear time series models play an important role in the recent econometric literature, the bilinear model in particular. In this paper, we cast the bilinear time series model in a Bayesian framework and make inference by using the Gibbs sampler, a Monte Carlo method. The proposed methodology is illustrated using generated examples, two real data sets, and a simulation study. The results show that the Gibbs sampler provides a very encouraging option for analyzing bilinear time series.

5.
Implementation of the Gibbs sampler for estimating the accuracy of multiple binary diagnostic tests in one population has been investigated. This method, proposed by Joseph, Gyorkos and Coupal, makes use of a Bayesian approach and is used in the absence of a gold standard to estimate the prevalence, the sensitivity and the specificity of medical diagnostic tests. The expressions that allow this method to be implemented for an arbitrary number of tests are given. By using the convergence diagnostics procedure of Raftery and Lewis, the relation between the number of iterations of Gibbs sampling and the precision of the estimated quantiles of the posterior distributions is derived. An example concerning a data set of gastro-esophageal reflux disease patients collected to evaluate the accuracy of the water siphon test compared with 24 h pH-monitoring, endoscopy and histology tests is presented. The main message that emerges from our analysis is that implementation of the Gibbs sampler to estimate the parameters of multiple binary diagnostic tests can be critical, and convergence diagnostics are advised for this method. The factors which affect the convergence of the chains to the posterior distributions and those that influence the precision of their quantiles are analyzed.
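For a single test and a single population, the Joseph–Gyorkos–Coupal sampler alternates between imputing each subject's unknown true disease status and drawing the prevalence, sensitivity, and specificity from conjugate Beta full conditionals. A compressed sketch, assuming one test and informative Beta priors (all numeric values are illustrative; without informative priors the single-test model is not identifiable):

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated test results: true prevalence 0.30, sensitivity 0.85, specificity 0.95
n = 1000
d_true = rng.random(n) < 0.30
y = np.where(d_true, rng.random(n) < 0.85, rng.random(n) < 0.05).astype(int)

# Informative Beta priors on sensitivity/specificity; flat Beta(1,1) on prevalence
aS, bS, aC, bC = 85.0, 15.0, 95.0, 5.0
pi, S, C = 0.5, 0.8, 0.9
keep = []
for it in range(3000):
    # 1) latent disease status D_i | pi, S, C, y_i  (Bernoulli posterior odds)
    p1 = np.where(y == 1, pi * S, pi * (1 - S))          # D_i = 1 branch
    p0 = np.where(y == 1, (1 - pi) * (1 - C), (1 - pi) * C)  # D_i = 0 branch
    d = rng.random(n) < p1 / (p1 + p0)
    nd = d.sum()
    # 2) conjugate Beta updates for prevalence, sensitivity, specificity
    pi = rng.beta(1 + nd, 1 + n - nd)
    S = rng.beta(aS + (d & (y == 1)).sum(), bS + (d & (y == 0)).sum())
    C = rng.beta(aC + (~d & (y == 0)).sum(), bC + (~d & (y == 1)).sum())
    keep.append((pi, S, C))

post = np.array(keep[1000:]).mean(axis=0)  # posterior means of (pi, S, C)
```

The multiple-test version in the abstract extends step 1 to a product over tests (assuming conditional independence given disease status) and adds one Beta update per test in step 2.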

6.
Bayesian random effects models may be fitted using Gibbs sampling, but the Gibbs sampler can be slow mixing due to what might be regarded as lack of model identifiability. This slow mixing substantially increases the number of iterations required during Gibbs sampling. We present an analysis of data on immunity after Rubella vaccinations which results in a slow-mixing Gibbs sampler. We show that this problem of slow mixing can be resolved by transforming the random effects and then, if desired, expressing their joint prior distribution as a sequence of univariate conditional distributions. The resulting analysis shows that the decline in antibodies after Rubella vaccination is relatively shallow compared to the decline in antibodies which has been shown after Hepatitis B vaccination.

7.
Estimation in logistic-normal models for correlated and overdispersed binomial data is complicated by the numerical evaluation of often intractable likelihood functions. Penalized quasi-likelihood (PQL) estimators of fixed effects and variance components are known to be seriously biased for binary data. A simple correction procedure has been proposed to improve the performance of the PQL estimators. The proposed method is illustrated by analyzing infectious disease data. Its performance is compared, by means of simulations, with that of the Bayes approach using the Gibbs sampler.

8.
This article presents a novel Bayesian analysis for linear mixed-effects models. The analysis is based on the method of partial collapsing, which allows some components to be partially collapsed out of a model. The resulting partially collapsed Gibbs (PCG) sampler constructed to fit linear mixed-effects models is expected to exhibit much better convergence properties than the corresponding Gibbs sampler. In order to construct the PCG sampler without complicating component updates, we consider the reparameterization of model components by expressing a between-group variance in terms of a within-group variance in a linear mixed-effects model. The proposed method of partial collapsing with reparameterization is applied to Merton's jump-diffusion model as well as general linear mixed-effects models with proper prior distributions, and is illustrated using simulated data and longitudinal data on sleep deprivation.

9.
There are many well-known methods applied to classification problems for linear data with both known and unknown distributions. Here, we deal with classification involving data on the torus and the cylinder. A new method involving a generalized likelihood ratio test is developed for classifying observations into two populations using directional data. The approach assumes that one of the probabilities of misclassification is known. The procedure is constructed by applying the Gibbs sampler to the conditionally specified distribution. A parametric bootstrap approach is also presented. An application to data involving linear and circular measurements on human skulls from two tribal populations is given.

10.
Bayesian shrinkage methods have generated a lot of interest in recent years, especially in the context of high-dimensional linear regression. In recent work, a Bayesian shrinkage approach using generalized double Pareto priors has been proposed. Several useful properties of this approach, including the derivation of a tractable three-block Gibbs sampler to sample from the resulting posterior density, have been established. We show that the Markov operator corresponding to this three-block Gibbs sampler is not Hilbert–Schmidt. We propose a simpler two-block Gibbs sampler and show that the corresponding Markov operator is trace class (and hence Hilbert–Schmidt). Establishing the trace class property for the proposed two-block Gibbs sampler has several useful consequences. Firstly, it implies that the corresponding Markov chain is geometrically ergodic, thereby implying the existence of a Markov chain central limit theorem, which in turn enables computation of asymptotic standard errors for Markov chain-based estimates of posterior quantities. Secondly, because the proposed Gibbs sampler uses two blocks, standard recipes in the literature can be used to construct a sandwich Markov chain (by inserting an appropriate extra step) to gain further efficiency and to achieve faster convergence. The trace class property for the two-block sampler implies that the corresponding sandwich Markov chain is also trace class and thereby geometrically ergodic. Finally, it also guarantees that all eigenvalues of the sandwich chain are dominated by the corresponding eigenvalues of the Gibbs sampling chain (with at least one strict domination). Our results demonstrate that a minor change in the structure of a Markov chain can lead to fundamental changes in its theoretical properties. We illustrate the improvement in efficiency resulting from our proposed Markov chains using simulated and real examples.

11.
Efficient Markov chain Monte Carlo with incomplete multinomial data
We propose a block Gibbs sampling scheme for incomplete multinomial data. We show that the new approach facilitates maximal blocking, thereby reducing serial dependency and speeding up the convergence of the Gibbs sampler. We compare the efficiency of the new method with the standard, non-block Gibbs sampler via a number of numerical examples.  相似文献   

12.
Markov chain Monte Carlo methods, in particular the Gibbs sampler, are widely used algorithms in both applied and theoretical work in the classical and Bayesian paradigms. However, these algorithms are often computer intensive. Samawi et al. [Steady-state ranked Gibbs sampler. J. Stat. Comput. Simul. 2012;82(8):1223–1238. doi:10.1080/00949655.2011.575378] demonstrate through theory and simulation that the dependent steady-state Gibbs sampler is more efficient and accurate in model parameter estimation than the original Gibbs sampler. This paper proposes the independent steady-state Gibbs sampler (ISSGS) approach to improve the original Gibbs sampler in multidimensional problems. It is demonstrated that the ISSGS provides accurate, unbiased estimation and improves the performance and convergence of the Gibbs sampler in multidimensional problems.

13.
This article describes three methods for computing a discrete joint density from full conditional densities: the Gibbs sampler, a hybrid method, and an interaction-based method. The hybrid method uses the iterative proportional fitting algorithm and is derived from the mixed parameterization of a contingency table. The interaction-based approach is derived from the canonical parameters, while the Gibbs sampler can be regarded as based on the mean parameters. In short, different approaches are motivated by different parameterizations. The setting of a bivariate conditionally specified distribution is used as the premise for comparing the numerical accuracy of the three methods. Detailed comparisons of marginal distributions, odds ratios and expected values are reported. We give theoretical justifications as to why the hybrid method produces a better approximation than the Gibbs sampler. Generalizations to more than two variables are discussed. In practice, the Gibbs sampler has certain advantages: it is conceptually easy to understand and there are many software tools available. Nevertheless, the hybrid method and the interaction-based method are accurate and simple alternatives when the Gibbs sampler results in a slowly mixing chain and requires substantial simulation effort.
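Of the three methods, the Gibbs route is the easiest to sketch: fix a true 2×2 joint, derive its two full conditionals, and run the chain to recover the joint empirically. The particular joint below is illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

# A "true" 2x2 joint distribution p(x, y), chosen arbitrarily for illustration
P = np.array([[0.10, 0.30],
              [0.40, 0.20]])

# Full conditionals derived from the joint:
# p_y_given_x[x, y] = p(y | x),  p_x_given_y[x, y] = p(x | y)
p_y_given_x = P / P.sum(axis=1, keepdims=True)
p_x_given_y = P / P.sum(axis=0, keepdims=True)

x, y = 0, 0
counts = np.zeros((2, 2))
for t in range(200_000):
    x = int(rng.random() < p_x_given_y[1, y])  # draw x from p(x | y)
    y = int(rng.random() < p_y_given_x[x, 1])  # draw y from p(y | x)
    counts[x, y] += 1

P_hat = counts / counts.sum()  # empirical joint; should approximate P
```

The hybrid and interaction-based methods in the abstract compute the joint deterministically from the same two conditional tables, which is why they avoid the Monte Carlo error visible in `P_hat`.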

14.
The ordinal probit, univariate or multivariate, is a generalized linear model (GLM) structure that arises frequently in such disparate areas of statistical applications as medicine and econometrics. Despite the straightforwardness of its implementation using the Gibbs sampler, the ordinal probit may present challenges in obtaining satisfactory convergence. We present a multivariate Hastings-within-Gibbs update step for generating latent data and bin boundary parameters jointly, instead of individually from their respective full conditionals. When the latent data are parameters of interest, this algorithm substantially improves Gibbs sampler convergence for large datasets. We also discuss Markov chain Monte Carlo (MCMC) implementation of cumulative logit (proportional odds) and cumulative complementary log-log (proportional hazards) models with latent data.

15.
We analyse a hierarchical Bayes model which is related to the usual empirical Bayes formulation of James-Stein estimators. We consider running a Gibbs sampler on this model. Using previous results about convergence rates of Markov chains, we provide rigorous, numerical, reasonable bounds on the running time of the Gibbs sampler, for a suitable range of prior distributions. We apply these results to baseball data from Efron and Morris (1975). For a different range of prior distributions, we prove that the Gibbs sampler will fail to converge, and use this information to prove that in this case the associated posterior distribution is non-normalizable.

16.
We propose a two-stage algorithm for computing maximum likelihood estimates for a class of spatial models. The algorithm combines Markov chain Monte Carlo methods, such as the Metropolis–Hastings–Green algorithm and the Gibbs sampler, with stochastic approximation methods, such as the off-line average and adaptive search direction. A new criterion is built into the algorithm so that stopping is automatic once the desired precision has been set. Simulation studies and applications to some real data sets have been conducted with three spatial models. We compared the proposed algorithm with a direct application of the classical Robbins–Monro algorithm using Wiebe's wheat data and found that our procedure is at least 15 times faster.

17.
Demographic and Health Surveys collect child survival times that are clustered at the family and community levels. It is assumed that each cluster has a specific, unobservable, random frailty that induces an association in the survival times within the cluster. The Cox proportional hazards model, with family and community random frailties acting multiplicatively on the hazard rate, is presented. The estimation of the fixed effect and the association parameters of the modified model is then examined using the Gibbs sampler and the expectation–maximization (EM) algorithm. The methods are compared using child survival data collected in the 1992 Demographic and Health Survey of Malawi. The two methods lead to very similar estimates of fixed effect parameters. However, the estimates of random effect variances from the EM algorithm are smaller than those of the Gibbs sampler. Both estimation methods reveal considerable family variation in the survival of children, and very little variability over the communities.

18.
Bayesian inference for the multinomial probit model, using the Gibbs sampler with data augmentation, has been recently considered by some authors. The present paper introduces a modification of the sampling technique, by defining a hybrid Markov chain in which, after each Gibbs sampling cycle, a Metropolis step is carried out along a direction of constant likelihood. Examples with simulated data sets motivate and illustrate the new technique. A proof of the ergodicity of the hybrid Markov chain is also given.

19.
The Gibbs sampler has been used extensively in the statistics literature. It relies on iteratively sampling from a set of compatible conditional distributions, and the sampler is known to converge to a unique invariant joint distribution. However, the Gibbs sampler behaves rather differently when the conditional distributions are not compatible. Such applications have seen increasing use in areas such as multiple imputation. In this paper, we demonstrate that what a Gibbs sampler converges to is a function of the order of the sampling scheme. Besides providing the mathematical background of this behaviour, we also explain how it happens through a thorough analysis of examples.
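This order dependence can be verified exactly for two binary variables: build the one-sweep transition kernels for the two scan orders from a deliberately incompatible pair of conditionals and compare their stationary distributions. The conditional probabilities below are made up, chosen only so that the compatibility cross-ratio condition fails:

```python
import numpy as np

# Incompatible conditionals for binary X, Y (their cross-product ratios differ):
#   f[y] = P(X=1 | Y=y),  g[x] = P(Y=1 | X=x)
f = np.array([0.9, 0.2])
g = np.array([0.3, 0.8])

def scan_kernel(first_x):
    """One-sweep transition matrix on states (x, y), ordered 00, 01, 10, 11."""
    T = np.zeros((4, 4))
    for x in range(2):
        for y in range(2):
            for x2 in range(2):
                for y2 in range(2):
                    if first_x:  # update x from f(.|y), then y from g(.|new x)
                        p = (f[y] if x2 else 1 - f[y]) * (g[x2] if y2 else 1 - g[x2])
                    else:        # update y from g(.|x), then x from f(.|new y)
                        p = (g[x] if y2 else 1 - g[x]) * (f[y2] if x2 else 1 - f[y2])
                    T[2 * x + y, 2 * x2 + y2] = p
    return T

def stationary(T):
    v = np.full(4, 0.25)
    for _ in range(500):  # power iteration; the kernel is strictly positive
        v = v @ T
    return v

pi_xy = stationary(scan_kernel(first_x=True))   # limit of the x-then-y scan
pi_yx = stationary(scan_kernel(first_x=False))  # limit of the y-then-x scan
# With incompatible conditionals, the two scan orders converge to
# visibly different joint distributions.
```

With compatible conditionals the two stationary vectors would coincide with the common joint; here they differ substantially, which is exactly the order dependence the abstract analyzes.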

20.
Jump–diffusion processes involving diffusion processes with discontinuous movements, called jumps, are widely used to model time-series data that commonly exhibit discontinuity in their sample paths. The existing jump–diffusion models have been recently extended to multivariate time-series data. The models are, however, still limited by a single parametric jump-size distribution that is common across different subjects. Such strong parametric assumptions for the shape and structure of a jump-size distribution may be too restrictive and unrealistic for multiple subjects with different characteristics. This paper thus proposes an efficient Bayesian nonparametric method to flexibly model a jump-size distribution while borrowing information across subjects in a clustering procedure using a nested Dirichlet process. For efficient posterior computation, a partially collapsed Gibbs sampler is devised to fit the proposed model. The proposed methodology is illustrated through a simulation study and an application to daily stock price data for companies in the S&P 100 index from June 2007 to June 2017.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号