Similar Articles
20 similar articles found.
1.
Clustering gene expression data is an important step in providing information to biologists. A Bayesian clustering procedure using Fourier series with a Dirichlet process prior for clusters was developed. As a computational tool for this Bayesian approach, Gibbs sampling of a normal mixture with a Dirichlet process was implemented to calculate the posterior probabilities when the number of clusters is unknown. A Monte Carlo study showed that the model produces suitable clusterings. The proposed method was applied to the budding yeast Saccharomyces cerevisiae and provided biologically interpretable results.
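As background for a Dirichlet process prior on clusters, the partition it induces over observations can be simulated directly via the Chinese restaurant process, without fixing the number of clusters in advance. The sketch below is a generic illustration, not the paper's Gibbs sampler; all names are ours.

```python
import random

def crp_partition(n, alpha, rng=None):
    """Draw a random partition of n items from the Chinese restaurant
    process, the clustering prior induced by a Dirichlet process with
    precision (concentration) parameter alpha."""
    rng = rng or random.Random(0)
    assignments, counts = [], []  # counts[k] = current size of cluster k
    for i in range(n):
        # item i joins existing cluster k with probability counts[k]/(i+alpha),
        # or opens a new cluster with probability alpha/(i+alpha)
        r = rng.uniform(0, i + alpha)
        acc = 0.0
        for k, c in enumerate(counts):
            acc += c
            if r < acc:
                assignments.append(k)
                counts[k] += 1
                break
        else:
            assignments.append(len(counts))
            counts.append(1)
    return assignments
```

Larger alpha yields more clusters a priori, which is why the precision parameter matters so much in these models.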

2.
In hierarchical mixture models the Dirichlet process is used to specify latent patterns of heterogeneity, particularly when the distribution of latent parameters is thought to be clustered (multimodal). The parameters of a Dirichlet process are a precision parameter α and a base probability measure G0. In problems where α is unknown and must be estimated, inferences about the level of clustering can be sensitive to the choice of prior assumed for α. In this paper an approach is developed for computing a prior for the precision parameter α that can be used in the presence or absence of prior information about the level of clustering. The approach is illustrated with an analysis of counts of stream fishes. The results of this fully Bayesian analysis are compared with an empirical Bayes analysis of the same data and with a Bayesian analysis based on a commonly used alternative prior.
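The sensitivity to the prior on α comes from how directly α controls the prior level of clustering: under a Dirichlet process, the expected number of distinct clusters among n observations is E[K_n] = Σ_{i=1}^{n} α/(α+i−1), a standard identity. A minimal sketch (the helper name is ours):

```python
def expected_clusters(n, alpha):
    """Expected number of distinct clusters among n draws from a
    Dirichlet process with precision alpha:
    E[K_n] = sum_{i=1}^{n} alpha / (alpha + i - 1)."""
    return sum(alpha / (alpha + i - 1) for i in range(1, n + 1))
```

For alpha = 1 this reduces to the harmonic number H_n, so even n = 100 observations yield only about 5 clusters a priori; a prior on α effectively places a prior on this count.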

3.
The Dirichlet process prior allows flexible nonparametric mixture modeling. The number of mixture components is not specified in advance and can grow as new data arrive. However, analyses based on the Dirichlet process prior are sensitive to the choice of the parameters, including an infinite-dimensional distributional parameter G0. Most previous applications have either fixed G0 as a member of a parametric family or treated G0 in a Bayesian fashion, using parametric prior specifications. In contrast, we have developed an adaptive nonparametric method for constructing smooth estimates of G0. We combine this method with a technique for estimating α, the other Dirichlet process parameter, that is inspired by an existing characterization of its maximum-likelihood estimator. Together, these estimation procedures yield a flexible empirical Bayes treatment of Dirichlet process mixtures. Such a treatment is useful in situations where smooth point estimates of G0 are of intrinsic interest, or where the structure of G0 cannot be conveniently modeled with the usual parametric prior families. Analysis of simulated and real-world datasets illustrates the robustness of this approach.

4.
In functional data analysis, curves or surfaces are observed, up to measurement error, at a finite set of locations, for, say, a sample of n individuals. Often, the curves are homogeneous, except perhaps for individual-specific regions that exhibit heterogeneous behaviour (e.g. 'damaged' areas of irregular shape on an otherwise smooth surface). Motivated by applications with functional data of this nature, we propose a Bayesian mixture model, with the aim of dimension reduction, by representing the sample of n curves through a smaller set of canonical curves. We propose a novel prior on the space of probability measures for a random curve which extends the popular Dirichlet priors by allowing local clustering: non-homogeneous portions of a curve can be allocated to different clusters and the n individual curves can be represented as recombinations (hybrids) of a few canonical curves. More precisely, the proposed prior envisions a conceptual hidden factor with k levels that acts locally on each curve. We discuss several models incorporating this prior and illustrate its performance with simulated and real data sets. We examine theoretical properties of the proposed finite hybrid Dirichlet mixtures, specifically, their behaviour as the number of mixture components goes to ∞ and their connection with Dirichlet process mixtures.

5.
The article presents careful comparisons among several empirical Bayes estimates of the precision parameter of a Dirichlet process prior, in the setting of univariate observations and multigroup data. Specifically, the data arise from a two-stage compound sampling model in which the prior is a Dirichlet process, within a Bayesian nonparametric framework. The precision parameter α measures the strength of the prior belief, and several kinds of estimates are constructed from the observations, including the naive estimate, two calibrated naive estimates, and two maximum likelihood estimates derived from distinct distributions. We explore theoretical properties and provide explicit, detailed comparisons among these estimates in terms of bias, variance, and mean squared error. We also present the corresponding computational algorithms and numerical simulations to illustrate the theoretical results.
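One simple moment-style estimate of α, given only the observed number of distinct clusters, inverts the identity E[K_n] = Σ_{i=0}^{n-1} α/(α+i), which is increasing in α, so bisection applies. This is an illustrative sketch with our own names; it is not necessarily the paper's "naive estimate".

```python
def alpha_from_cluster_count(n, k_obs, lo=1e-8, hi=1e8):
    """Estimate the DP precision alpha by matching the expected number
    of clusters E[K_n] = sum_{i=0}^{n-1} alpha/(alpha+i) to the
    observed count k_obs.  E[K_n] is increasing in alpha, so a
    bisection on a log scale converges."""
    def expected_k(a):
        return sum(a / (a + i) for i in range(n))
    if not 1 < k_obs < n:
        raise ValueError("need 1 < k_obs < n")
    for _ in range(200):
        mid = (lo * hi) ** 0.5  # geometric midpoint suits the wide range
        if expected_k(mid) < k_obs:
            lo = mid
        else:
            hi = mid
    return (lo * hi) ** 0.5
```

More clusters observed for the same n imply a larger estimated α, matching α's role as the strength of prior belief in new clusters.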

6.
This paper considers the use of Dirichlet process prior distributions in the statistical analysis of network data. Dirichlet process prior distributions have the advantages of avoiding the parametric specifications for distributions, which are rarely known, and of facilitating a clustering effect, which is often applicable to network nodes. The approach is highlighted for two network models and is conveniently implemented using WinBUGS software.

7.
We develop clustering procedures for longitudinal trajectories based on a continuous-time hidden Markov model (CTHMM) and a generalized linear observation model. Specifically, in this article we carry out finite and infinite mixture model-based clustering for a CTHMM and achieve inference using Markov chain Monte Carlo (MCMC). For a finite mixture model with a prior on the number of components, we implement reversible-jump MCMC to facilitate the trans-dimensional move between models with different numbers of clusters. For a Dirichlet process mixture model, we utilize restricted Gibbs sampling split–merge proposals to improve the performance of the MCMC algorithm. We apply our proposed algorithms to simulated data as well as a real-data example, and the results demonstrate the desired performance of the new sampler.

8.
This paper presents a new Bayesian clustering approach based on an infinite mixture model, specifically designed for time-course microarray data. The problem is to group together genes which have "similar" expression profiles, given the set of noisy measurements of their expression levels over a specific time interval. In order to capture temporal variations of each curve, a non-parametric regression approach is used. Each expression profile is expanded over a set of basis functions, and the sets of coefficients of each curve are subsequently modeled through a Bayesian infinite mixture of Gaussian distributions. The task of finding clusters of genes with similar expression profiles is thereby reduced to the problem of grouping together genes whose coefficients are sampled from the same distribution in the mixture. A Dirichlet process prior is a natural choice in such models, since it automatically handles the uncertainty about the number of clusters. The posterior inference is carried out by a split-merge MCMC sampling scheme which integrates out the parameters of the component distributions and updates only the latent vector of cluster memberships. The final configuration is obtained via the maximum a posteriori estimator. The performance of the method is studied using synthetic and real microarray data and is compared with that of competing techniques.

9.
The Bayesian approach to inference stands out for naturally allowing borrowing of information across heterogeneous populations, with different samples possibly sharing the same distribution. A popular Bayesian nonparametric model for clustering probability distributions is the nested Dirichlet process, which, however, has the drawback of grouping distributions into a single cluster when ties are observed across samples. With the goal of achieving a flexible and effective clustering method for both samples and observations, we investigate a nonparametric prior that arises as the composition of two different discrete random structures and derive a closed-form expression for the induced distribution of the random partition, the fundamental tool regulating the clustering behavior of the model. On the one hand, this allows us to gain a deeper insight into the theoretical properties of the model; on the other hand, it yields an MCMC algorithm for evaluating Bayesian inferences of interest. Moreover, we identify limitations of this algorithm when working with more than two populations and, consequently, devise an alternative, more efficient sampling scheme, which, as a by-product, allows testing of homogeneity between different populations. Finally, we perform a comparison with the nested Dirichlet process and provide illustrative examples on both synthetic and real data.

10.
We provide Bayesian methodology to relax the assumption that all subpopulation effects in a linear mixed-effects model have, after adjustment for covariates, a common mean. We expand the model specification by assuming that the m subpopulation effects are allowed to cluster into d groups, where the value of d, 1 ≤ d ≤ m, and the composition of the d groups are unknown a priori. Specifically, for each partition of the m effects into d groups we only assume that the subpopulation effects in each group are exchangeable and are independent across the groups. We show that failure to take account of this clustering, as with the customary method, will lead to serious errors in inference about the variances and subpopulation effects, but the proposed, expanded, model leads to appropriate inferences. The efficacy of the proposed method is evaluated by contrasting it with both the customary method and the use of a Dirichlet process prior. We use data from small area estimation to illustrate our method.
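The partition space being averaged over grows quickly with m: there are S(m, d) ways to split the m effects into exactly d non-empty groups (a Stirling number of the second kind), and B(m) = Σ_d S(m, d) partitions in total (the Bell number). A quick way to tabulate these counts (helper names are ours):

```python
def stirling2(m, d):
    """Number of ways to partition m labeled items into exactly d
    non-empty groups (Stirling number of the second kind), via the
    recurrence S(i, j) = j*S(i-1, j) + S(i-1, j-1)."""
    S = [[0] * (d + 1) for _ in range(m + 1)]
    S[0][0] = 1
    for i in range(1, m + 1):
        for j in range(1, min(i, d) + 1):
            S[i][j] = j * S[i - 1][j] + S[i - 1][j - 1]
    return S[m][d]

def bell(m):
    """Total number of partitions of m items (summing over all d)."""
    return sum(stirling2(m, d) for d in range(1, m + 1))
```

Even m = 10 subpopulations already admit B(10) = 115975 partitions, which is why the paper places a prior over (d, partition) rather than enumerating.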

11.
Stylometry refers to the statistical analysis of literary style of authors based on the characteristics of expression in their writings. We propose an approach to stylometry based on a Bayesian Dirichlet process mixture model using multinomial word frequency data. The parameters of the multinomial distribution of word frequency data are the "word prints" of the author. Our approach is based on model-based clustering of the vectors of probability values of the multinomial distribution. The resultant clusters identify different writing styles that assist in author attribution for disputed works in a corpus. As a test case, the methodology is applied to the problem of authorship attribution involving the Federalist papers. Our results are consistent with previous stylometric analyses of these papers.
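To make the data structure concrete: a document's "word print" is a normalized vector of counts over a fixed vocabulary (typically function words), which the multinomial likelihood then models. A minimal illustration; the function and vocabulary are our own, not from the paper:

```python
from collections import Counter

def word_print(text, vocabulary):
    """Multinomial probability vector ('word print') of a text over a
    fixed vocabulary.  Words outside the vocabulary are ignored."""
    counts = Counter(text.lower().split())
    total = sum(counts[w] for w in vocabulary)
    if total == 0:
        return [0.0] * len(vocabulary)
    return [counts[w] / total for w in vocabulary]
```

Clustering these probability vectors across documents is then the model-based clustering step the abstract describes, with a Dirichlet process prior deciding how many style clusters appear.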

12.
We present a decision-theoretic formulation of product partition models (PPMs) that allows a formal treatment of different decision problems, such as estimation or hypothesis testing, together with clustering methods. A key observation in our construction is the fact that PPMs can be formulated in the context of model selection. The underlying partition structure in these models is closely related to that arising in connection with Dirichlet processes. This allows a straightforward adaptation to our framework of computational strategies originally devised for nonparametric Bayesian problems. The resulting algorithms are more flexible than competing alternatives used for problems involving PPMs. We propose an algorithm that yields Bayes estimates of the quantities of interest and the groups of experimental units. We explore the application of our methods to the detection of outliers in normal and Student t regression models, with clustering structure equivalent to that induced by a Dirichlet process prior. We also discuss the sensitivity of the results to different prior distributions for the partitions.

13.
We propose a more efficient version of the slice sampler for Dirichlet process mixture models described by Walker (Commun. Stat., Simul. Comput. 36:45–54, 2007). This new sampler allows for the fitting of infinite mixture models with a wide range of prior specifications. To illustrate this flexibility we consider priors defined through infinite sequences of independent positive random variables. Two applications are considered: density estimation using mixture models and hazard function estimation. In each case we show how the slice-efficient sampler can be applied to make inference in the models. In the mixture case, two submodels are studied in detail. The first assumes that the positive random variables are Gamma distributed and the second assumes that they are inverse-Gaussian distributed. Both priors have two hyperparameters, and we consider their effect on the prior distribution of the number of occupied clusters in a sample. Extensive computational comparisons are made with alternative "conditional" simulation techniques for mixture models using the standard Dirichlet process prior and our new priors. The properties of the new priors are illustrated on a density estimation problem.
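For context, the baseline Dirichlet process corresponds to stick-breaking with v_j ~ Beta(1, α); the priors studied here replace these Beta draws with transformations of independent Gamma or inverse-Gaussian variables. Sethuraman's standard construction can be sketched as follows (our helper, not the paper's sampler):

```python
import random

def stick_breaking_weights(alpha, k, rng=None):
    """First k mixture weights of a Dirichlet process via the
    stick-breaking construction: v_j ~ Beta(1, alpha) and
    w_j = v_j * prod_{i<j} (1 - v_i)."""
    rng = rng or random.Random(0)
    weights, remaining = [], 1.0
    for _ in range(k):
        v = rng.betavariate(1, alpha)
        weights.append(v * remaining)
        remaining *= 1.0 - v
    return weights
```

Slice samplers exploit this representation by introducing a latent uniform variable so that only finitely many of these weights need to be instantiated at each MCMC step.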

14.
The K-means algorithm and the normal mixture model method are two common clustering methods. The K-means algorithm is a popular heuristic which gives reasonable clustering results when the component clusters are ball-shaped. Currently, there are no analytical results for this algorithm when the component distributions deviate from the ball shape. This paper analytically studies how the K-means algorithm changes its classification rule as the normal component distributions become more elongated under the homoscedastic assumption, and compares this rule with the Bayes rule from the mixture model method. We show that the classification rules of both methods are linear, but that the slopes of the two classification lines change in opposite directions as the component distributions become more elongated. The classification performance of the K-means algorithm is then compared to that of the mixture model method via simulation. The comparison, which is limited to two clusters, shows that the K-means algorithm consistently provides poor classification performance as the component distributions become more elongated, while the mixture model method can potentially, but not necessarily, exploit this change and provide much better classification performance.
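A minimal pure-Python Lloyd's K-means is enough to replicate the kind of two-cluster simulation described; it performs well on ball-shaped groups precisely because its assignment step uses plain Euclidean distance. This is an illustrative sketch with our own names and toy data:

```python
def kmeans(points, centers, iters=20):
    """Lloyd's K-means: alternately assign each point to its nearest
    center (squared Euclidean distance) and recompute each center as
    the mean of its cluster."""
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:
            d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers]
            clusters[d.index(min(d))].append(p)
        centers = [
            tuple(sum(x) / len(cl) for x in zip(*cl)) if cl else c
            for cl, c in zip(clusters, centers)
        ]
    return centers, clusters
```

The spherical-distance assignment is exactly what breaks down for elongated components: points at the thin end of an elongated cluster can lie closer, in Euclidean terms, to the other cluster's center.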

15.
The forward search is a method of robust data analysis in which outlier free subsets of the data of increasing size are used in model fitting; the data are then ordered by closeness to the model. Here the forward search, with many random starts, is used to cluster multivariate data. These random starts lead to the diagnostic identification of tentative clusters. Application of the forward search to the proposed individual clusters leads to the establishment of cluster membership through the identification of non-cluster members as outlying. The method requires no prior information on the number of clusters and does not seek to classify all observations. These properties are illustrated by the analysis of 200 six-dimensional observations on Swiss banknotes. The importance of linked plots and brushing in elucidating data structures is illustrated. We also provide an automatic method for determining cluster centres and compare the behaviour of our method with model-based clustering. In a simulated example with eight clusters our method provides more stable and accurate solutions than model-based clustering. We consider the computational requirements of both procedures.

16.
The Dirichlet process has been used extensively in Bayesian nonparametric modeling and has proven to be very useful. In particular, mixed models with Dirichlet process random effects have been used in modeling many types of data and can often outperform their normal random effect counterparts. Here we examine the linear mixed model with Dirichlet process random effects from a classical view and derive the best linear unbiased estimator (BLUE) of the fixed effects. We are also able to calculate the resulting covariance matrix and find that the covariance is directly related to the precision parameter of the Dirichlet process, giving a new interpretation of this parameter. We also characterize the relationship between the BLUE and the ordinary least-squares (OLS) estimator and show how confidence intervals can be approximated.

17.
Let X1, …, Xn and Y1, …, YN be consecutive samples from a distribution function F which is itself randomly chosen according to the Ferguson (1973) Dirichlet process prior distribution on the space of distribution functions. Typically, prediction intervals employ the observations X1, …, Xn in the first sample in order to predict a specified function of the future sample Y1, …, YN. Here one- and two-sided prediction intervals for at least q of N future observations are developed for the situation in which, in addition to the previous sample, there is prior information available. The information is specified via the parameter α of the Dirichlet process prior distribution.
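For reference, in the classical no-prior-information case (iid continuous sampling), the probability that at least q of the N future observations fall below the j-th order statistic X_(j) has a closed, distribution-free form based on ranks: the number falling below X_(j) is beta-binomial. This sketch implements that standard baseline, not the paper's Dirichlet-prior-adjusted intervals; the function name is ours.

```python
from math import comb

def prob_at_least_q_below(n, j, N, q):
    """Distribution-free probability that at least q of N future iid
    continuous observations fall below the j-th order statistic of a
    past sample of size n.  Uses the rank-based identity
    P(exactly k below) = C(j+k-1, k) * C(n-j+N-k, N-k) / C(n+N, N)."""
    total = comb(n + N, N)
    return sum(
        comb(j + k - 1, k) * comb(n - j + N - k, N - k) / total
        for k in range(q, N + 1)
    )
```

The paper's contribution is to shift such coverage probabilities by injecting prior information through the Dirichlet process parameter α; with no prior information the formula above is the natural benchmark.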

18.
This paper compares the Bayesian and frequentist approaches to testing a one-sided hypothesis about a multivariate mean. First, the paper proposes a simple way to assign a Bayesian posterior probability to one-sided hypotheses about a multivariate mean. The approach is to use the (almost) exact posterior probability under the assumption that the data have a multivariate normal distribution, under either a conjugate prior in large samples or a vague Jeffreys prior. This is also approximately the Bayesian posterior probability of the hypothesis based on a suitably flat Dirichlet process prior over the unknown distribution generating the data. The Bayesian approach and a frequentist approach to testing the one-sided hypothesis are then compared, with results that show a major difference between Bayesian and frequentist reasoning. The Bayesian posterior probability can be substantially smaller than the frequentist p-value. A class of examples is given in which the Bayesian posterior probability is essentially 0 while the frequentist p-value is essentially 1. The Bayesian posterior probability in these examples seems the more reasonable measure. Other drawbacks of the frequentist p-value as a measure of whether the one-sided hypothesis is true are also discussed.
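The posterior probability of a one-sided hypothesis such as "every coordinate of μ is positive" can be estimated by Monte Carlo under the normal posterior. The sketch below simplifies to independent posterior coordinates (the paper's posterior is a full multivariate normal), and all names are ours:

```python
import random

def posterior_prob_positive(post_means, post_sds, draws=20000, rng=None):
    """Monte Carlo estimate of P(all coordinates of mu > 0) under a
    posterior with independent normal coordinates (a simplification
    of the multivariate-normal posterior)."""
    rng = rng or random.Random(0)
    hits = 0
    for _ in range(draws):
        if all(rng.gauss(m, s) > 0 for m, s in zip(post_means, post_sds)):
            hits += 1
    return hits / draws
```

Note that this probability is a direct statement about the hypothesis, unlike a p-value, which is computed under the boundary of the null; that difference is what drives the divergence the abstract describes.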

19.
We evaluate MCMC sampling schemes for a variety of link functions in generalized linear models with Dirichlet process random effects. First, we find that there is a large amount of variability in the performance of MCMC algorithms, with the slice sampler typically being less desirable than either a Kolmogorov–Smirnov mixture representation or a Metropolis–Hastings algorithm. Second, in fitting the Dirichlet process, dealing with the precision parameter has troubled model specifications in the past. Here we find that incorporating this parameter into the MCMC sampling scheme is not only computationally feasible but also results in a more robust set of estimates, in that they are marginalized over rather than conditioned upon. Applications are provided for social science problems in areas where the data can be difficult to model, and we find that the nonparametric nature of the Dirichlet process priors for the random effects leads to improved analyses with more reasonable inferences.

20.
In the field of molecular biology, it is often of interest to cluster genes in microarray data based on similar profiles of gene expression, in order to identify genes that are differentially expressed under multiple biological conditions. One notable characteristic of a gene expression profile is that it shows a cyclic curve over the course of time. To group sequences of similar molecular functions, we propose a Bayesian Dirichlet process mixture of linear regression models with a Fourier series for the regression coefficients, each of which is given a spike-and-slab prior. A full Gibbs-sampling algorithm is developed for efficient Markov chain Monte Carlo (MCMC) posterior computation. Because of the so-called "label-switching" problem and the varying number of clusters during the MCMC computation, the post-processing approach of Fritsch and Ickstadt (2009) is additionally applied to the MCMC samples to obtain an optimal single clustering estimate, by maximizing the posterior expected adjusted Rand index computed from the posterior probabilities that two observations are clustered together. The proposed method is illustrated with two simulated data sets and the real data on the physiological response of fibroblasts to serum from Iyer et al. (1999).
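The regression building block here, fitting a cyclic expression profile with a Fourier series, can be sketched as ordinary least squares on a sin/cos design matrix. This is a generic sketch with our own names; it omits the spike-and-slab selection and the Gibbs sampler of the paper:

```python
import math

def fourier_design(times, period, n_harmonics):
    """Design matrix of a Fourier basis at the given time points:
    intercept plus a sin/cos pair for each harmonic."""
    rows = []
    for t in times:
        row = [1.0]
        for h in range(1, n_harmonics + 1):
            w = 2 * math.pi * h * t / period
            row.extend([math.sin(w), math.cos(w)])
        rows.append(row)
    return rows

def lstsq_fit(X, y):
    """Least-squares coefficients via the normal equations
    (X^T X) b = X^T y, solved by Gaussian elimination with
    partial pivoting."""
    p = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(p)] for i in range(p)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(p)]
    for i in range(p):
        piv = max(range(i, p), key=lambda k: abs(A[k][i]))
        A[i], A[piv] = A[piv], A[i]
        b[i], b[piv] = b[piv], b[i]
        for k in range(i + 1, p):
            f = A[k][i] / A[i][i]
            for j in range(i, p):
                A[k][j] -= f * A[i][j]
            b[k] -= f * b[i]
    coef = [0.0] * p
    for i in range(p - 1, -1, -1):
        coef[i] = (b[i] - sum(A[i][j] * coef[j] for j in range(i + 1, p))) / A[i][i]
    return coef
```

In the paper each gene's Fourier coefficients are drawn within a Dirichlet process mixture, so genes sharing a coefficient distribution form a cluster; the least-squares fit above is only the deterministic core of that model.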


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号