Similar Articles
 20 similar articles found (search time: 13 ms)
1.
2.
It is well known that the log-likelihood function for samples drawn from normal mixture distributions may present spurious maxima and singularities. For this reason we reformulate some of Hathaway's results and propose two constrained estimation procedures for multivariate normal mixture modelling within the likelihood approach. Their performances are illustrated by numerical simulations based on the EM algorithm. A comparison between multivariate normal mixtures and the hot-deck approach for missing data imputation is also considered. Salvatore Ingrassia carried out the research as part of the project Metodi Statistici e Reti Neuronali per l'Analisi di Dati Complessi (PRIN 2000, resp. G. Lunetta).
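The abstract does not spell out the constraint; as a hedged illustration under generic notation, one common Hathaway-type formulation for univariate normal mixtures bounds the ratios of component standard deviations away from zero (multivariate analogues in this line of work typically bound eigenvalues or eigenvalue ratios of the component covariance matrices instead):

\[
\hat\theta_c \;=\; \arg\max_{\theta}\; \sum_{i=1}^{n} \log\!\Big(\sum_{k=1}^{K} \pi_k\, \phi(x_i;\mu_k,\sigma_k^2)\Big)
\quad \text{subject to} \quad \min_{j\neq k}\; \frac{\sigma_j}{\sigma_k} \;\ge\; c > 0,
\]

which rules out the degenerate solutions in which one component's variance collapses to zero at a single data point.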

3.
We introduce a class of spatial random effects (SRE) models that have Markov random fields (MRFs) as latent processes. Calculating the maximum likelihood estimates of unknown parameters in SRE models is extremely difficult, because the normalizing factors of MRFs and the additional integrations over unobserved random effects are computationally prohibitive. We propose a stochastic approximation expectation-maximization (SAEM) algorithm to maximize the likelihood functions of SRE models. The SAEM algorithm integrates recent improvements in stochastic approximation algorithms; it also includes components of the Newton-Raphson algorithm and the expectation-maximization (EM) gradient algorithm. Convergence of the SAEM algorithm is guaranteed under mild conditions. We apply the SAEM algorithm to three examples representative of real-world applications: a state space model, a noisy Ising model, and the segmentation of magnetic resonance images (MRI) of the human brain. In each instance the SAEM algorithm gives satisfactory results in finding the maximum likelihood estimate of the SRE model.
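As a hedged sketch of the generic SAEM step (not necessarily the exact variant used by the authors): at iteration $k$, a draw $z^{(k)}$ of the latent field is simulated from an MCMC kernel targeting $p(z \mid y, \theta_{k-1})$, and the E-step surrogate is updated with a step size $\gamma_k$ satisfying $\sum_k \gamma_k = \infty$ and $\sum_k \gamma_k^2 < \infty$:

\[
\widehat Q_k(\theta) \;=\; \widehat Q_{k-1}(\theta) \;+\; \gamma_k\Big(\log f\big(y, z^{(k)};\theta\big) - \widehat Q_{k-1}(\theta)\Big),
\qquad
\theta_k \;=\; \arg\max_{\theta}\, \widehat Q_k(\theta),
\]

so a single simulated latent field per iteration replaces the intractable E-step expectation.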

4.
Non-Gaussian spatial responses are usually modeled using spatial generalized linear mixed models with spatial random effects. The likelihood function of such a model usually cannot be written in closed form, which makes the maximum likelihood approach very challenging. Numerical ways to maximize the likelihood function, such as the Monte Carlo Expectation Maximization and Quadrature Pairwise Expectation Maximization algorithms, can be applied but may be computationally very slow or even prohibitive. The Gauss–Hermite quadrature approximation is suitable only for low-dimensional latent variables, and its accuracy depends on the number of quadrature points. Here, we propose a new approximate pairwise maximum likelihood method for inference in the spatial generalized linear mixed model. This approximate method is fast and deterministic, using no sampling-based strategies. The performance of the proposed method is illustrated through two simulation examples, and practical aspects are investigated through a case study on a rainfall data set.
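The abstract does not give the objective explicitly; as a hedged sketch under generic notation, a pairwise (composite) log-likelihood for spatial data typically takes the form

\[
\ell_p(\theta) \;=\; \sum_{i<j:\; d(s_i,s_j)\le \rho} \log f\big(y_i, y_j;\theta\big),
\]

where $f(y_i,y_j;\theta)$ is the bivariate marginal density of the responses at sites $s_i$ and $s_j$, and $\rho$ is an optional distance cutoff restricting which pairs contribute. Maximizing $\ell_p$ in place of the full likelihood avoids the intractable high-dimensional integral over the latent spatial effects.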

5.
In this article, we revisit the problem of fitting a mixture model under the assumption that the mixture components are symmetric and log-concave. To this end, we first study the nonparametric maximum likelihood estimation (MLE) of a monotone log-concave probability density. To fit the mixture model, we propose a semiparametric EM (SEM) algorithm, which can be adapted to other semiparametric mixture models. In our numerical experiments, we compare our algorithm to that of Balabdaoui and Doss (2018, Inference for a two-component mixture of symmetric distributions under log-concavity. Bernoulli 24(2):1053–71) and to other mixture models, on both simulated and real-world datasets.

6.
Zhang Zhihua, Chan Kap Luk, Wu Yiming, Chen Chibiao. Statistics and Computing (2004) 14(4): 343–355
This paper is a contribution to the methodology of fully Bayesian inference in a multivariate Gaussian mixture model using the reversible jump Markov chain Monte Carlo algorithm. To satisfy the constraint that the first two moments be preserved across split and combine moves, we concentrate on a simplified multivariate Gaussian mixture model in which the covariance matrices of all components share a common eigenvector matrix. We then propose an approach to the construction of the reversible jump Markov chain Monte Carlo algorithm for this model. Experimental results on several data sets demonstrate the efficacy of our algorithm.

7.
Mixture models are flexible tools in density estimation and classification problems. Bayesian estimation of such models typically relies on sampling from the posterior distribution using Markov chain Monte Carlo. Label switching arises because the posterior is invariant to permutations of the component parameters. Methods for dealing with label switching have been studied fairly extensively in the literature, the most popular approaches being those based on loss functions. However, many of these algorithms turn out to be too slow in practice, and can become infeasible as the size and/or dimension of the data grow. We propose a new, computationally efficient algorithm based on a loss-function interpretation, and show that it scales well to large data sets. We then review earlier solutions that can also scale to large data sets, and compare their performances on simulated and real data sets. We conclude with a discussion and recommendations covering all the methods studied.
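As a hedged illustration of the loss-based relabelling idea (not the authors' algorithm), the sketch below relabels each posterior draw by the permutation minimizing the squared distance between its component means and a fixed reference draw, with the optimal permutation found by the Hungarian algorithm; the squared-error cost and the choice of reference draw are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def relabel_draws(mu_draws, reference):
    """Relabel MCMC draws of component means against a reference draw.

    mu_draws  : array of shape (T, K, d) -- T posterior draws of K means in d dimensions
    reference : array of shape (K, d)    -- e.g. the draw with highest posterior density
    Returns the relabelled draws (same shape).
    """
    T, K, _ = mu_draws.shape
    out = np.empty_like(mu_draws)
    for t in range(T):
        # cost[j, k] = squared distance between draw component j and reference component k
        cost = ((mu_draws[t][:, None, :] - reference[None, :, :]) ** 2).sum(axis=2)
        rows, cols = linear_sum_assignment(cost)   # optimal matching (Hungarian algorithm)
        perm = np.empty(K, dtype=int)
        perm[cols] = rows                          # draw component placed at reference label k
        out[t] = mu_draws[t][perm]
    return out

# Example usage with synthetic, randomly permuted draws:
rng = np.random.default_rng(0)
ref = np.array([[0.0], [5.0]])
draws = np.stack([ref[rng.permutation(2)] + 0.1 * rng.standard_normal((2, 1)) for _ in range(100)])
print(relabel_draws(draws, ref).mean(axis=0))      # close to the reference means after relabelling
```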

8.
Phage display is a biological process that is used to screen random peptide libraries for ligands that bind to a target of interest with high affinity. On the basis of a count data set from an innovative multistage phage display experiment, we propose a class of Bayesian mixture models to cluster peptide counts into three groups that exhibit different display patterns across stages. Among the three groups, the investigators are particularly interested in the one with an ascending display pattern in the counts, which implies that the peptides are likely to bind to the target with strong affinity. We apply a Bayesian false discovery rate approach to identify the peptides with the strongest affinity within that group. A list of peptides is obtained, among which important ones with meaningful functions are further validated by biologists. To examine the performance of the Bayesian model, we conduct a simulation study and obtain desirable results.

9.
Based on a random cluster representation, the Swendsen–Wang algorithm for the Ising and Potts distributions is extended to a class of continuous Markov random fields. The algorithm can be described briefly as follows. A given configuration is decomposed into clusters. Probabilities for flipping the values of the random variables in each cluster are calculated. According to these probabilities, the values of all the random variables in each cluster are either updated or kept unchanged, independently across clusters. A new configuration is then obtained. We show through a simulation study that, like the Swendsen–Wang algorithm for the Ising and Potts distributions, the cluster algorithm proposed here also outperforms the Gibbs sampler in overcoming the critical slowing down observed in some strongly correlated Markov random fields.
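For orientation, here is a hedged sketch of the classical Swendsen–Wang sweep for the two-dimensional Ising model with spins in {-1, +1} (the discrete case the paper generalizes), under the standard parameterisation in which equal-spin neighbours are bonded with probability 1 - exp(-2*beta); lattice size, boundary conditions, and the value of beta are illustrative choices, and the continuous-MRF extension of the paper is not shown.

```python
import numpy as np

def swendsen_wang_sweep(spins, beta, rng):
    """One Swendsen-Wang sweep for a 2-D Ising model with +/-1 spins (free boundaries)."""
    n, m = spins.shape
    parent = np.arange(n * m)                    # union-find structure over lattice sites

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]        # path compression
            i = parent[i]
        return i

    def union(i, j):
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj

    p_bond = 1.0 - np.exp(-2.0 * beta)           # open a bond between equal neighbours with this prob.
    for x in range(n):
        for y in range(m):
            i = x * m + y
            if x + 1 < n and spins[x, y] == spins[x + 1, y] and rng.random() < p_bond:
                union(i, (x + 1) * m + y)
            if y + 1 < m and spins[x, y] == spins[x, y + 1] and rng.random() < p_bond:
                union(i, x * m + y + 1)

    # assign each cluster a fresh spin, +1 or -1 with probability 1/2, independently across clusters
    roots = np.array([find(i) for i in range(n * m)])
    new_value = {r: rng.choice([-1, 1]) for r in np.unique(roots)}
    return np.array([new_value[r] for r in roots]).reshape(n, m)

rng = np.random.default_rng(1)
state = rng.choice([-1, 1], size=(32, 32))
for _ in range(100):
    state = swendsen_wang_sweep(state, beta=0.5, rng=rng)
print(state.mean())   # strong magnetisation is expected above the critical coupling
```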

10.
Data sets with excess zeroes are frequently analyzed in many disciplines. A common framework used to analyze such data is the zero-inflated (ZI) regression model, which mixes a degenerate distribution with point mass at zero and a non-degenerate distribution. The estimates from ZI models quantify the effects of covariates on the means of latent random variables, which are often not the quantities of primary interest. Recently, marginal zero-inflated Poisson (MZIP; Long et al. [A marginalized zero-inflated Poisson regression model with overall exposure effects. Stat. Med. 33 (2014), pp. 5151–5165]) and negative binomial (MZINB; Preisser et al., 2016) models have been introduced that model the mean response directly. These models yield covariate effects with simple interpretations that are, for many applications, more appealing than those available from ZI regression. This paper outlines a general framework for marginal zero-inflated models in which the latent distribution is a member of the exponential dispersion family, focusing on common distributions for count data. In particular, our discussion includes the marginal zero-inflated binomial (MZIB) model, which has not been discussed previously. The details of maximum likelihood estimation via the EM algorithm are presented, and the properties of the estimators as well as Wald and likelihood ratio-based inference are examined via simulation. Two examples illustrate the advantages of the MZIP, MZINB, and MZIB models for practical data analysis.
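As a hedged sketch of the parameterisation the abstract refers to (generic notation, not necessarily the paper's): the zero-inflated Poisson model sets

\[
P(Y_i=0) = \psi_i + (1-\psi_i)\,e^{-\lambda_i}, \qquad
P(Y_i=y) = (1-\psi_i)\,\frac{e^{-\lambda_i}\lambda_i^{y}}{y!}, \quad y = 1, 2, \ldots,
\]

and conventional ZI regression places the linear predictor on the latent Poisson mean, $\log\lambda_i = x_i^{\top}\beta$, so $\beta$ describes the non-degenerate component rather than the overall mean. The marginalized (MZIP-type) formulation instead models the marginal mean $\nu_i = E(Y_i) = (1-\psi_i)\lambda_i$ directly,

\[
\operatorname{logit}(\psi_i) = z_i^{\top}\gamma, \qquad \log\nu_i = x_i^{\top}\alpha,
\]

so that $\exp(\alpha_j)$ is a ratio of overall (marginal) means and is interpretable at the population level.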

11.
Medical costs in an ageing society increase substantially when the incidence of chronic diseases, disabilities and inability to live independently is high. Healthy lifestyles not only affect elderly individuals but also influence the entire community. When assessing treatment efficacy, survival and quality of life should be considered simultaneously. This paper proposes a joint likelihood approach for modelling survival and longitudinal binary covariates simultaneously. Because the model involves unobservable information, the Monte Carlo EM algorithm and the Metropolis-Hastings algorithm are used to obtain the estimators. Monte Carlo simulations are performed to evaluate the performance of the proposed model in terms of the accuracy and precision of the estimates. Real data are used to demonstrate the feasibility of the proposed model.

12.
In statistical models involving constrained or missing data, likelihoods containing integrals emerge. In the case of both constrained and missing data, the result is a ratio of integrals, which for multivariate data may defy exact or approximate analytic expression. To seek maximum-likelihood estimates in such settings, we propose Monte Carlo approximants for these integrals and subsequently maximize the resulting approximate likelihood. Iterating this strategy expedites the maximization, while the Gibbs sampler is useful for the required Monte Carlo generation. As a result, we handle a class of models broader than the customary EM setting without using an EM-type algorithm. Implementation of the methodology is illustrated in two numerical examples.
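As a hedged illustration of the kind of objective involved (generic notation, not the paper's): when the observations are known only to satisfy a constraint $Y \in C$ and also have missing components $z$, the likelihood can take the form of a ratio of integrals,

\[
L(\theta) \;=\; \frac{\displaystyle\int f\big(y, z;\theta\big)\,dz}{\displaystyle\Pr_\theta\big(Y \in C\big)}
\;=\; \frac{\displaystyle\int f\big(y, z;\theta\big)\,dz}{\displaystyle\int_{C} f\big(u;\theta\big)\,du},
\]

and each integral can be replaced by a Monte Carlo average over draws from a convenient sampler (for instance the Gibbs sampler mentioned in the abstract), after which the resulting approximate likelihood is maximized in $\theta$.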

13.
Repeated categorical outcomes frequently occur in clinical trials. Muenz and Rubinstein (1985) presented Markov chain models to analyze repeated binary data in a breast cancer study. We extend their method to the setting in which more than one repeated outcome variable is of interest. In a randomized clinical trial of breast cancer, we investigate the dependence of toxicities on predictor variables and the relationship among multiple toxic effects.

14.
The mixture transition distribution (MTD) model was introduced by Raftery to meet the need for parsimony in the modeling of high-order Markov chains in discrete time. The distinguishing feature of this model is that the effect of each lag on the present is treated separately and additively, so that the number of required parameters is drastically reduced. However, the MTD parameter estimation procedures proposed to date remain problematic because of the large number of constraints on the parameters. In this article, an iterative expectation–maximization (EM) procedure is developed for maximum likelihood estimation (MLE) of the MTD parameters. Applications of MTD modeling show that the proposed EM algorithm is easier to use than the algorithm developed by Berchtold. Moreover, EM estimates of high-order MTD models fitted to DNA sequences outperform the corresponding fully parametrized Markov chains in terms of the Bayesian information criterion. A software implementation of our algorithm is available in the library seq++ at http://stat.genopole.cnrs.fr/seqpp.
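For reference, Raftery's MTD model for an $\ell$-th order chain on $m$ states replaces the full transition tensor by a single transition matrix $Q = (q_{ij})$ and a vector of lag weights:

\[
P\big(X_t = i_0 \mid X_{t-1}=i_1,\ldots,X_{t-\ell}=i_\ell\big) \;=\; \sum_{g=1}^{\ell} \lambda_g\, q_{i_g\, i_0},
\qquad \sum_{g=1}^{\ell}\lambda_g = 1,
\]

so roughly $m(m-1) + (\ell-1)$ free parameters replace the $m^{\ell}(m-1)$ of a fully parametrized $\ell$-th order chain. The weights may be allowed to be negative provided all resulting probabilities stay in $[0,1]$, which is the source of the many estimation constraints mentioned above.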

15.
In this paper, we propose an adaptive algorithm that iteratively updates both the weights and the component parameters of a mixture importance sampling density so as to optimise the performance of importance sampling, as measured by an entropy criterion. The method, called M-PMC, is shown to be applicable to a wide class of importance sampling densities, including in particular mixtures of multivariate Student t distributions. The performance of the proposed scheme is studied on both artificial and real examples, highlighting in particular the benefit of a novel Rao-Blackwellisation device which can easily be incorporated in the updating scheme. This work was supported by the Agence Nationale de la Recherche (ANR) through the 2006–2008 project. The last two authors are grateful to the participants of the BIRS meeting on "Bioinformatics, Genetics and Stochastic Computation: Bridging the Gap", Banff, for their comments on an earlier version of this paper. The last author also acknowledges a helpful discussion with Geoff McLachlan. The authors wish to thank both referees for their encouraging comments.

16.
Generalized linear models with random effects and/or serial dependence are commonly used to analyze longitudinal data. However, the computation and interpretation of marginal covariate effects can be difficult. This led Heagerty (1999, 2002) to propose models for longitudinal binary data in which a logistic regression is first used to explain the average marginal response. The model is then completed by introducing a conditional regression that allows for the longitudinal, within-subject dependence, either via random effects or by regressing on previous responses. In this paper, the authors extend the work of Heagerty to handle multivariate longitudinal binary response data, using a triple of regression models that directly model the marginal mean response while taking into account dependence across time and across responses. Markov chain Monte Carlo methods are used for inference. Data from the Iowa Youth and Families Project are used to illustrate the methods.

17.
A longitudinal study commonly follows a set of variables measured repeatedly over time for each individual, and usually suffers from incomplete data. A common approach for dealing with longitudinal categorical responses is the generalized linear mixed model (GLMM). This model induces the relation between response variables over time via a vector of random effects, which are assumed to be shared parameters in the non-ignorable missingness mechanism. Most GLMMs assume that the random effects follow a normal or symmetric distribution, and this can lead to serious problems in real applications. In this paper, we propose GLMMs for the analysis of incomplete multivariate longitudinal categorical responses with a non-ignorable missingness mechanism, based on a shared-parameter framework with the less restrictive assumption of skew-normality for the random effects. These models accommodate incomplete data with both monotone and non-monotone missingness patterns. The performance of the model is evaluated using simulation studies, and a well-known longitudinal data set extracted from a fluvoxamine trial is analyzed to characterize the profile of fluvoxamine in ambulatory clinical psychiatric practice.
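For reference, the univariate skew-normal density commonly used to relax the normality assumption on random effects is, in Azzalini's parameterisation,

\[
f(b;\xi,\omega,\alpha) \;=\; \frac{2}{\omega}\,\phi\!\Big(\frac{b-\xi}{\omega}\Big)\,\Phi\!\Big(\alpha\,\frac{b-\xi}{\omega}\Big),
\]

where $\phi$ and $\Phi$ are the standard normal density and distribution function, $\xi$ is a location parameter, $\omega>0$ a scale parameter, and $\alpha$ controls skewness, with $\alpha=0$ recovering the normal distribution; a multivariate version built along the same lines is what would be used for vector-valued random effects.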

18.
In this article, we propose a semiparametric mixture of additive regression models, in which the regression functions are additive and nonparametric while the mixing proportions and variances are constant. Compared with the mixture of linear regression models, the proposed methodology is more flexible in modeling nonlinear relationships between the response and covariates. A two-step procedure based on the spline-backfitted kernel method is derived for computation. Moreover, we establish the asymptotic normality of the resulting estimators and demonstrate their good performance through a numerical example.

19.
The lasso is a popular technique for simultaneous estimation and variable selection in many research areas. When the regression coefficients have independent Laplace priors, the marginal posterior mode of the regression coefficients is equivalent to the estimates given by the non-Bayesian lasso. Because of its flexibility in statistical inference, the Bayesian approach has attracted a growing body of research in recent years. Current approaches primarily either carry out a fully Bayesian analysis using a Markov chain Monte Carlo (MCMC) algorithm or use Monte Carlo expectation maximization (MCEM) methods with an MCMC algorithm in each E-step. However, MCMC-based Bayesian methods involve a heavy computational burden and converge slowly. Tan et al. [An efficient MCEM algorithm for fitting generalized linear mixed models for correlated binary data. J Stat Comput Simul. 2007;77:929–943] proposed a non-iterative sampling approach, the inverse Bayes formula (IBF) sampler, for computing posteriors of a hierarchical model within the MCEM structure. Motivated by their paper, we develop the IBF sampler within the MCEM structure to obtain the marginal posterior mode of the regression coefficients for the Bayesian lasso, by adjusting the weights of importance sampling when the full conditional distribution is not explicit. Simulation experiments show that the computational time is much reduced with our EM-based method, and that our approach performs comparably with other Bayesian lasso methods in both prediction accuracy and variable selection accuracy, and even better when the sample size is relatively large.
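For reference, the equivalence mentioned at the start of the abstract takes the following standard form: with $y \mid \beta \sim N(X\beta, \sigma^2 I)$ and independent Laplace priors $\pi(\beta_j) \propto \exp(-\lambda|\beta_j|)$, the posterior mode of $\beta$ (for fixed $\sigma^2$) solves

\[
\hat\beta \;=\; \arg\min_{\beta}\; \frac{1}{2\sigma^{2}}\,\lVert y - X\beta\rVert_2^{2} \;+\; \lambda \sum_{j} \lvert\beta_j\rvert,
\]

which is the lasso objective with a penalty level determined by $\lambda$ and $\sigma^2$.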

20.
The Reed-Frost epidemic model is a simple stochastic process with parameter q that describes the spread of an infectious disease among a closed population. Given data on the final outcome of an epidemic, it is possible to perform Bayesian inference for q using a simple Gibbs sampler algorithm. In this paper it is shown that, by choosing the latent variables appropriately, certain monotonicity properties hold which facilitate the use of a perfect simulation algorithm. The methods are applied to real data.
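As a hedged sketch of the forward model only (not the Gibbs or perfect-simulation machinery of the paper): in the Reed-Frost chain-binomial model, each susceptible independently escapes infection from each current infective with probability q per generation, so the number of new infections in a generation is binomial with success probability 1 - q**I; the population size, q, and the number of replicates below are illustrative choices.

```python
import numpy as np

def simulate_reed_frost(s0, i0, q, rng):
    """Simulate one Reed-Frost epidemic; returns the final number ever infected."""
    s, i, total_infected = s0, i0, i0
    while i > 0 and s > 0:
        p_infect = 1.0 - q ** i                      # prob. a susceptible is infected this generation
        new_i = rng.binomial(s, p_infect)            # new infectives for the next generation
        s, i = s - new_i, new_i
        total_infected += new_i
    return total_infected

rng = np.random.default_rng(42)
finals = [simulate_reed_frost(s0=100, i0=1, q=0.97, rng=rng) for _ in range(2000)]
print(np.mean(finals), np.median(finals))            # summary of the final-size distribution
```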
