Similar Articles
20 similar articles found.
1.
This paper presents a new Bayesian clustering approach, based on an infinite mixture model and specifically designed for time-course microarray data. The problem is to group together genes which have “similar” expression profiles, given the set of noisy measurements of their expression levels over a specific time interval. In order to capture temporal variations of each curve, a non-parametric regression approach is used. Each expression profile is expanded over a set of basis functions, and the sets of coefficients of each curve are subsequently modeled through a Bayesian infinite mixture of Gaussian distributions. The task of finding clusters of genes with similar expression profiles thus reduces to the problem of grouping together genes whose coefficients are sampled from the same distribution in the mixture. A Dirichlet process prior is naturally employed in such models, since it allows one to deal automatically with the uncertainty about the number of clusters. The posterior inference is carried out by a split-and-merge MCMC sampling scheme which integrates out the parameters of the component distributions and updates only the latent vector of cluster memberships. The final configuration is obtained via the maximum a posteriori estimator. The performance of the method is studied using synthetic and real microarray data and is compared with that of competing techniques.
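
As a rough illustration of the basis-expansion-plus-Dirichlet-process idea (not the paper's split-and-merge MCMC sampler), one can expand each noisy profile over a simple basis and cluster the coefficient vectors with a truncated Dirichlet-process Gaussian mixture; the data, the polynomial basis and all settings below are invented for the sketch.

```python
# Minimal sketch only: a polynomial basis stands in for the paper's basis
# functions, and sklearn's variational DP mixture stands in for the MCMC.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 12)                         # time points
# toy data: two groups of noisy time-course profiles
curves = np.vstack([np.sin(2 * np.pi * t) + 0.1 * rng.standard_normal((50, 12)),
                    np.cos(2 * np.pi * t) + 0.1 * rng.standard_normal((50, 12))])

B = np.vander(t, 4, increasing=True)              # cubic polynomial design matrix
coefs = np.linalg.lstsq(B, curves.T, rcond=None)[0].T   # per-curve basis coefficients

dpgmm = BayesianGaussianMixture(
    n_components=10,                              # truncation level, not the final K
    weight_concentration_prior_type="dirichlet_process",
    random_state=0,
).fit(coefs)
print("clusters actually used:", np.unique(dpgmm.predict(coefs)))
```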

2.
Genetic data are frequently categorical and have complex dependence structures that are not always well understood. For this reason, clustering and classification based on genetic data, while highly relevant, are challenging statistical problems. Here we consider a versatile U-statistics-based approach for non-parametric clustering that allows for an unconventional way of solving these problems. In this paper we propose a statistical test to assess group homogeneity, taking multiple-testing issues into account, and a clustering algorithm based on dissimilarities within and between groups that greatly speeds up the homogeneity test. We also propose a test to verify the significance of classifying a sample into one of two groups. We present Monte Carlo simulations that evaluate the size and power of the proposed tests under different scenarios. Finally, the methodology is applied to three different genetic data sets: global human genetic diversity, breast tumour gene expression and Dengue virus serotypes. These applications showcase the framework's ability to answer diverse biological questions in the high-dimension, low-sample-size setting while adapting to the specificities of the different data types.
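
For intuition only, a generic permutation test of group homogeneity built from within- and between-group dissimilarities is sketched below; the paper's actual test is U-statistics-based and addresses multiple testing, which this toy version does not.

```python
import numpy as np
from scipy.spatial.distance import cdist

def homogeneity_pvalue(x, labels, n_perm=999, seed=0):
    """Permutation p-value for H0: the groups are homogeneous (labels exchangeable)."""
    rng = np.random.default_rng(seed)
    d = cdist(x, x)                                   # pairwise dissimilarities

    def statistic(lab):
        same = lab[:, None] == lab[None, :]
        within = d[same & ~np.eye(len(lab), dtype=bool)].mean()
        between = d[~same].mean()
        return between - within                       # large values suggest heterogeneity

    obs = statistic(labels)
    perms = [statistic(rng.permutation(labels)) for _ in range(n_perm)]
    return (1 + sum(p >= obs for p in perms)) / (n_perm + 1)

rng = np.random.default_rng(1)
x = np.vstack([rng.normal(0, 1, (20, 5)), rng.normal(1, 1, (20, 5))])
labels = np.repeat([0, 1], 20)
print("homogeneity p-value:", homogeneity_pvalue(x, labels))
```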

3.
This paper extends the scedasticity comparison among several groups of observations, usually confined to the homoscedastic and heteroscedastic cases, in order to deal with data sets lying in an intermediate situation. As is well known, homoscedasticity corresponds to equality in orientation, shape and size of the group scatters. Here our attention is focused on two weaker requirements: scatters with the same orientation but different shape and size, or scatters with the same shape and size but different orientation. We introduce a multiple testing procedure that takes each of the above conditions into account. This approach discloses richer information on the underlying structure of the data than the classical method based only on homo-/heteroscedasticity. At the same time, it allows a more parsimonious parametrization whenever the patterned model is appropriate for the data at hand. The new inferential methodology is then applied to some well-known data sets from the multivariate literature to show the real gain in using this more informative approach. Finally, a wide simulation study illustrates and compares the performance of the proposal using data sets with gradual departures from homoscedasticity.

4.
We present a new class of models to fit longitudinal data, obtained through a suitable modification of the classical linear mixed-effects model. For each sample unit, the joint distribution of the random effect and the random error is a finite mixture of scale mixtures of multivariate skew-normal distributions. This extension allows us to model the data in a more flexible way, taking skewness, multimodality and discrepant observations into account at the same time. Scale mixtures of skew-normal distributions form an attractive class of asymmetric heavy-tailed distributions that includes the skew-normal, skew-Student-t, skew-slash and skew-contaminated normal distributions as special cases, providing a flexible alternative to the corresponding symmetric distributions in this type of model. A simple, efficient Gibbs-type MCMC algorithm is employed for posterior Bayesian inference. To illustrate the usefulness of the proposed methodology, two artificial and two real data sets are analyzed.

5.
The authors describe Bayesian estimation of the parameters of the bivariate gamma distribution due to Kibble (1941). The density of this distribution can be written as a mixture, which allows for a simple data augmentation scheme. The authors propose a Markov chain Monte Carlo algorithm to facilitate estimation. They show that the resulting chain is geometrically ergodic, so a regenerative sampling procedure is applicable, which allows the standard errors of the ergodic means to be estimated. They develop Bayesian hypothesis testing procedures to test both the hypothesis of dependence between the two variables and the hypothesis of equal means. They also propose a reversible jump Markov chain Monte Carlo algorithm to address the model selection problem. Finally, they use real and simulated data sets to illustrate the methodology.

6.
Extended Hazard Regression Model for Reliability and Survival Analysis
We propose an extended hazard regression model which allows the spread parameter to depend on covariates. This yields a broad class of models that includes the most common hazard models, such as the proportional hazards model, the accelerated failure time model and a proportional hazards/accelerated failure time hybrid model with constant spread parameter. Simulations based on sub-classes of this model suggest that maximum likelihood performs well even when only small or moderate-sized data sets are available and the censoring pattern is heavy. The methodology provides a broad framework for the analysis of reliability and survival data. Two numerical examples illustrate the results.

7.
In recent decades, the number of variables used to explain observations in practical applications has grown steadily. This has led to heavy computational burdens, despite the widespread use of preliminary variable-selection methods in data processing. As a result, more methodological techniques have appeared for reducing the number of explanatory variables without losing much of the information. Among these techniques, two distinct approaches stand out: 'shrinkage regression' and 'sufficient dimension reduction'. Surprisingly, there has been hardly any communication or comparison between these two methodological categories, and it is not clear when each of the two approaches is appropriate. In this paper, we fill some of this gap by first reviewing each category briefly, paying special attention to its most commonly used methods. We then compare commonly used methods from both categories based on their accuracy, computation time, and ability to select effective variables. A simulation study of the performance of the methods in each category is presented as well. The selected methods are also tested on two real data sets, which allows us to recommend conditions under which each approach is more appropriate for high-dimensional data.
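
To make the contrast concrete, the sketch below pits one shrinkage method (cross-validated lasso) against one sufficient-dimension-reduction method (a naive sliced inverse regression written from scratch); the data-generating model and all settings are invented and are not the paper's simulation design.

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
n, p = 300, 20
X = rng.standard_normal((n, p))
y = X[:, 0] + 0.5 * X[:, 1] ** 2 + 0.3 * rng.standard_normal(n)   # depends on 2 variables

# --- shrinkage: lasso selects variables by zeroing coefficients -------------
lasso = LassoCV(cv=5).fit(X, y)
print("lasso-selected variables:", np.nonzero(lasso.coef_)[0])

# --- sufficient dimension reduction: naive sliced inverse regression --------
def sir_directions(X, y, n_slices=10, n_dirs=2):
    Xc = X - X.mean(0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    W = Vt.T / s * np.sqrt(len(y))            # whitening matrix: Z = Xc @ W
    Z = Xc @ W
    order = np.argsort(y)
    slices = np.array_split(order, n_slices)
    M = np.zeros((X.shape[1], X.shape[1]))
    for idx in slices:                         # weighted covariance of slice means
        m = Z[idx].mean(0)
        M += len(idx) / len(y) * np.outer(m, m)
    vals, vecs = np.linalg.eigh(M)
    return W @ vecs[:, ::-1][:, :n_dirs]       # back-transform leading directions

dirs = sir_directions(X, y)
print("largest loadings of leading SIR direction:", np.argsort(-np.abs(dirs[:, 0]))[:3])
```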

8.
This paper is about techniques for clustering sequences such as nucleic or amino acid sequences. Our application is the definition of viral subtypes of HIV on the basis of similarities among V3 loop amino acid sequences of the envelope (env) gene. The techniques introduced here could apply with virtually no change to other HIV genes, as well as to other problems and data not necessarily of viral origin. As applied to quantitative data, these algorithms have found much use in engineering contexts for compressing images and speech. They are called vector quantization and involve a mapping from a large number of possible inputs to a much smaller number of outputs. Many implementations, in particular those that go by the name generalized Lloyd or k-means, exist for choosing sets of possible outputs and mappings. Each attempts to maximize similarity among inputs that map to any single output or, alternatively, to minimize some measure of distortion between input and output. Here, two standard types of vector quantization are brought to bear on the problem of clustering V3 loop amino acid sequences. The results of this clustering are compared with those of the well-known UPGMA algorithm, the unweighted pair group method with arithmetic mean.
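
A toy version of the vector-quantization idea: one-hot encode equal-length amino-acid strings and run k-means (the generalized Lloyd algorithm); the sequences below are invented and far shorter than real V3 loop data.

```python
import numpy as np
from sklearn.cluster import KMeans

ALPHABET = "ACDEFGHIKLMNPQRSTVWY"          # 20 amino acids

def one_hot(seq):
    m = np.zeros((len(seq), len(ALPHABET)))
    for i, aa in enumerate(seq):
        m[i, ALPHABET.index(aa)] = 1.0
    return m.ravel()

seqs = [
    "CTRPNNNTRKSI", "CTRPNNHTRKSI", "CTRPGNNTRKSI",   # toy group 1
    "CMRPGQAWYKGA", "CMRPGQAWYKGV", "CMRPGQTWYKGA",   # toy group 2
]
X = np.vstack([one_hot(s) for s in seqs])             # equal-length sequences only

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("cluster assignments:", km.labels_)
```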

9.
We propose a mixture of latent variables model for the model-based clustering, classification, and discriminant analysis of data comprising variables of mixed type. This approach is a generalization of latent variable analysis, and model fitting is carried out within the expectation-maximization framework. Our approach is outlined, and a simulation study is conducted to illustrate the effect of sample size and noise on the standard errors and on the recovery probabilities for the number of groups. Our modelling methodology is then applied to two real data sets, and its clustering and classification performance is discussed. We conclude with a discussion and suggestions for future work.

10.
Cluster analysis is the automated search for groups of homogeneous observations in a data set. A popular modeling approach for clustering is based on finite normal mixture models, which assume that each cluster is modeled as a multivariate normal distribution. However, the implied assumption that each component is symmetric is often unrealistic. Furthermore, normal mixture models are not robust against outliers; they often require extra components for modeling outliers and/or give a poor representation of the data. To address these issues, we propose a new class of distributions, multivariate t distributions with the Box-Cox transformation, for mixture modeling. This class generalizes the normal distribution with the heavier-tailed t distribution and introduces skewness via the Box-Cox transformation. As a result, it provides a unified framework to handle simultaneously the two interrelated issues of outlier identification and data transformation. We describe an expectation-maximization algorithm for parameter estimation along with transformation selection. We demonstrate the proposed methodology with three real data sets and simulation studies. Compared with a wealth of approaches, including the skew-t mixture model, the proposed t mixture model with the Box-Cox transformation performs favorably in terms of accuracy in the assignment of observations, robustness against model misspecification, and selection of the number of components.
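
A much-simplified, two-step stand-in for this approach (the paper estimates the Box-Cox transformation jointly with a multivariate-t mixture, not separately with a Gaussian one): transform each positive variable marginally, then fit a finite mixture to the transformed data. All data and settings below are invented.

```python
import numpy as np
from scipy import stats
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# toy positive, right-skewed data from two groups
X = np.vstack([rng.lognormal(mean=0.0, sigma=0.5, size=(150, 2)),
               rng.lognormal(mean=1.0, sigma=0.5, size=(150, 2))])

# marginal Box-Cox transformation, one lambda per variable
Z = np.empty_like(X)
lambdas = []
for j in range(X.shape[1]):
    Z[:, j], lam = stats.boxcox(X[:, j])
    lambdas.append(lam)

gm = GaussianMixture(n_components=2, random_state=0).fit(Z)
print("estimated lambdas:", np.round(lambdas, 2))
print("mixing proportions:", np.round(gm.weights_, 2))
```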

11.
Cross-validated likelihood is investigated as a tool for automatically determining the appropriate number of components (given the data) in finite mixture modeling, particularly in the context of model-based probabilistic clustering. The conceptual framework of the cross-validation approach to model selection is straightforward in the sense that models are judged directly on their estimated out-of-sample predictive performance. The cross-validation approach, as well as penalized likelihood and McLachlan's bootstrap method, is applied to two data sets, and the results from all three methods are in close agreement. The second data set involves a well-known clustering problem from the atmospheric science literature based on historical records of upper-atmosphere geopotential height in the Northern hemisphere. Cross-validated likelihood provides an interpretable and objective solution to the atmospheric clustering problem. The clusters found agree with prior analyses of the same data based on non-probabilistic clustering techniques.
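
The cross-validated-likelihood recipe itself is easy to sketch: for each candidate number of components, fit the mixture on training folds, score the average held-out log-likelihood, and keep the best-scoring value. The toy example below uses a Gaussian mixture and invented data, not the paper's data sets.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (150, 2)), rng.normal(2, 1, (150, 2))])  # 2 true clusters

def cv_loglik(X, k, n_splits=5):
    scores = []
    for train, test in KFold(n_splits, shuffle=True, random_state=0).split(X):
        gm = GaussianMixture(n_components=k, random_state=0).fit(X[train])
        scores.append(gm.score(X[test]))        # mean held-out log-likelihood
    return np.mean(scores)

best_k = max(range(1, 7), key=lambda k: cv_loglik(X, k))
print("selected number of components:", best_k)
```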

12.
Dimension reduction for model-based clustering
We introduce a dimension reduction method for visualizing the clustering structure obtained from a finite mixture of Gaussian densities. Information on the dimension reduction subspace is obtained from the variation in group means and, depending on the estimated mixture model, from the variation in group covariances. The proposed method aims at reducing the dimensionality by identifying a set of linear combinations of the original features, ordered by importance as quantified by the associated eigenvalues, which capture most of the cluster structure contained in the data. Observations may then be projected onto such a reduced subspace, providing summary plots which help to visualize the clustering structure. These plots can be particularly appealing in the case of high-dimensional data and noisy structure. The newly constructed variables capture most of the clustering information available in the data, and they can be further reduced to improve clustering performance. We illustrate the approach on both simulated and real data sets.
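
A rough, means-only sketch of the idea (the full method can also use the variation in group covariances): fit a Gaussian mixture, form the weighted between-cluster covariance of the component means, and take its leading generalized eigenvectors against the overall covariance as projection directions. Everything below is illustrative, not the authors' estimator.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 1, (100, 5))
               for m in ([0, 0, 0, 0, 0], [3, 0, 0, 0, 0], [0, 3, 0, 0, 0])])

gm = GaussianMixture(n_components=3, random_state=0).fit(X)
mu, pi = gm.means_, gm.weights_
mbar = pi @ mu                                   # overall mean of the mixture
# weighted between-group covariance of the component means
M = sum(p * np.outer(m - mbar, m - mbar) for p, m in zip(pi, mu))
S = np.cov(X, rowvar=False)                      # overall covariance

# generalized eigenproblem M v = lambda S v; leading directions carry most
# of the separation among the component means
vals, vecs = eigh(M, S)
dirs = vecs[:, ::-1][:, :2]
proj = (X - X.mean(0)) @ dirs                    # coordinates for summary plots
print("leading eigenvalues:", np.round(vals[::-1][:2], 3))
print("projected data shape:", proj.shape)
```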

13.
Often, ordinal categorical data are clustered by applying a well-defined similarity measure for this kind of data and then using a clustering algorithm not specifically developed for them. The aim of this article is to introduce a new clustering method designed specifically for ordinal data. Objects are grouped using a multinomial model, a cluster tree and a pruning strategy. Two types of pruning are analyzed through simulations. The proposed method overcomes two typical problems of cluster analysis: the choice of the number of groups and scale invariance.

14.
Clustering algorithms are used in the analysis of gene expression data to identify groups of genes with similar expression patterns. These algorithms group genes with respect to a predefined dissimilarity measure without using any prior classification of the data. Most clustering algorithms require the number of clusters as input, and all the objects in the data set are usually assigned to one of the clusters. We propose a clustering algorithm that finds clusters sequentially and allows for sporadic objects, i.e., objects that are not assigned to any cluster. The proposed sequential clustering algorithm has two steps. First, it finds candidates for cluster centers; multiple candidates are used to make the search for clusters more efficient. Second, it conducts a local search around the candidate centers to find the set of objects that defines a cluster. The candidate clusters are compared using a predefined score, the best cluster is removed from the data, and the procedure is repeated. We investigate the performance of this algorithm using simulated data, and we apply the method to analyze gene expression profiles in a study on the plasticity of dendritic cells.
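
A heavily simplified sketch of sequential clustering with sporadic objects: the densest remaining point serves as a single candidate centre, its neighbourhood is carved out as a cluster, and the loop stops when no sufficiently large cluster is left. The paper's multiple-candidate search and cluster score are not reproduced; every name and setting here is illustrative.

```python
import numpy as np
from scipy.spatial.distance import cdist

def sequential_clusters(X, radius=1.5, min_size=10):
    remaining = np.arange(len(X))
    labels = np.full(len(X), -1)                 # -1 marks sporadic objects
    k = 0
    while len(remaining) >= min_size:
        d = cdist(X[remaining], X[remaining])
        counts = (d < radius).sum(axis=1)
        centre = counts.argmax()                 # candidate centre = densest point
        members = remaining[d[centre] < radius]  # local search around the centre
        if len(members) < min_size:
            break
        labels[members] = k
        remaining = np.setdiff1d(remaining, members)
        k += 1
    return labels

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(5, 0.5, (50, 2)),
               rng.uniform(-3, 8, (10, 2))])     # two clusters plus scattered noise
labels = sequential_clusters(X)
print("cluster sizes:", {int(c): int((labels == c).sum()) for c in np.unique(labels)})
```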

15.
Model-based clustering using copulas with applications
The majority of model-based clustering techniques are based on multivariate normal models and their variants. In this paper, copulas are used to construct flexible families of models for clustering applications. The use of copulas in model-based clustering offers two direct advantages over current methods: (i) an appropriate choice of copulas provides the ability to obtain a range of exotic shapes for the clusters, and (ii) the explicit choice of marginal distributions for the clusters allows multivariate data of various modes (either discrete or continuous) to be modelled in a natural way. This paper introduces and studies the framework of copula-based finite mixture models for clustering applications. Estimation in the general case can be performed using standard EM and, depending on the mode of the data, more efficient procedures are provided that fully exploit the copula structure. The closure properties of the mixture models under marginalization are discussed, and for continuous, real-valued data, parametric rotations in the sample space are introduced, with a parallel discussion of parameter identifiability depending on the choice of copulas for the components. The exposition of the methodology is accompanied and motivated by the analysis of real and artificial data.

16.
Communications in Statistics - Theory and Methods, 2012, 41(16-17): 3211-3232
The analysis of microarray data is a widespread functional genomics approach that allows the expression of thousands of genes to be monitored at once. Analyzing the great amount of data generated in a microarray experiment requires powerful statistical techniques. One of the first tasks in the analysis of microarray data is to cluster the data into biologically meaningful groups according to their expression patterns. In this article, we discuss classical as well as recent clustering techniques for microarray data. We pay particular attention to both theoretical and practical issues and give some general indications that may be useful to practitioners.

17.
A bootstrap algorithm is provided for obtaining a confidence interval for the mean of a probability distribution when sequential data are considered. For this kind of data, the empirical distribution can be biased, but its bias is bounded by the coefficient of variation of the stopping rule associated with the sequential procedure. When this distribution is used for resampling, the validity of the bootstrap approach is established by means of a series expansion of the corresponding pivotal quantity. A simulation study is carried out using Wang and Tsiatis type tests and considering the normal and exponential distributions to generate the data. This study confirms that, for moderate coefficients of variation of the stopping rule, the bootstrap method yields adequate confidence intervals for the parameters, whatever the distribution of the data.
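
For orientation, a bare-bones percentile bootstrap interval for a mean is sketched below; the paper's contribution lies in justifying this kind of resampling when the sample size comes from a sequential stopping rule, a complication this toy example ignores.

```python
import numpy as np

def bootstrap_ci_mean(x, level=0.95, n_boot=2000, seed=0):
    """Percentile bootstrap confidence interval for the mean of x."""
    rng = np.random.default_rng(seed)
    means = np.array([rng.choice(x, size=len(x), replace=True).mean()
                      for _ in range(n_boot)])
    alpha = (1 - level) / 2
    return np.quantile(means, [alpha, 1 - alpha])

x = np.random.default_rng(1).exponential(scale=2.0, size=80)
print("bootstrap 95% CI for the mean:", np.round(bootstrap_ci_mean(x), 3))
```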

18.
This paper proposes a new approach to the treatment of item non-response in attitude scales. It combines the ideas of latent variable identification with the issues of non-response adjustment in sample surveys. The latent variable approach allows missing values to be included in the analysis and, equally importantly, allows information about attitude to be inferred from non-response. We present a symmetric, pattern-based methodology for handling item non-response in attitude scales. The methodology is symmetric in that all the variables are given equivalent status in the analysis (none is designated a 'dependent' variable) and pattern-based in that the pattern of responses and non-responses across individuals is a key element of the analysis. Our approach to the problem is through a latent variable model with two latent dimensions: one to summarize response propensity and the other to summarize attitude, ability or belief. The methodology presented here can handle binary, metric and mixed (binary and metric) manifest items with missing values. Examples using artificial data sets and two real data sets illustrate the mechanism and the advantages of the proposed methodology.

19.
We present a methodology for rating in real time the creditworthiness of public companies in the U.S. from the prices of traded assets. Our approach uses asset pricing data to impute a term structure of risk-neutral survival functions or default probabilities. Firms are then clustered into ratings categories based on their survival functions using a functional clustering algorithm. This allows all public firms whose assets are traded to be rated directly by market participants. We also show how firms whose assets are not traded can be rated indirectly, by matching them on observable characteristics to firms that are traded. We further show how the resulting ratings can be used to construct loss distributions for portfolios of bonds. Finally, we compare our ratings to those of Standard & Poor's and find that, over the period 2005 to 2011, our ratings lead theirs for firms that ultimately default.
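
As a crude stand-in for the functional clustering step, one can discretize each firm's risk-neutral survival curve on a common grid of horizons and group the curves with k-means; the constant-hazard curves below are invented purely for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
t = np.linspace(0.5, 10, 20)                      # horizons in years (0.5-year grid)
# toy survival curves S(t) = exp(-h t) for low- and high-risk firms
hazards = np.concatenate([rng.uniform(0.01, 0.03, 40), rng.uniform(0.10, 0.20, 40)])
curves = np.exp(-np.outer(hazards, t))            # one curve per firm

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(curves)
for c in range(2):
    members = curves[km.labels_ == c]
    print(f"rating bucket {c}: {len(members)} firms, "
          f"mean 5-year survival {members[:, 9].mean():.2f}")   # index 9 <-> t = 5 years
```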

20.
The kernel method of estimating curves is now popular and widely used in statistical applications. Kernel estimators suffer from boundary effects, however, when the support of the function to be estimated has finite endpoints. Several solutions to this problem have already been proposed. Here the authors develop a new method of boundary correction for kernel density estimation. Their technique is a kind of generalized reflection involving transformed data. It generates a class of boundary-corrected estimators having desirable properties such as local smoothness and nonnegativity. Simulations show that the proposed method performs quite well compared with existing methods for almost all shapes of densities. The authors present the theory behind this new methodology and determine the bias and variance of their estimators.
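
To fix ideas, the plain reflection correction for a density supported on [0, ∞) is sketched below; the paper's estimator generalizes this by reflecting suitably transformed data rather than the raw observations.

```python
import numpy as np

def kde_reflect(x_grid, data, h):
    """Gaussian kernel density estimate with reflection about the boundary at 0."""
    def kern(u):
        return np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)
    g, d = x_grid[:, None], data[None, :]
    # add back, onto [0, inf), the kernel mass of the reflected points -X_i
    return (kern((g - d) / h) + kern((g + d) / h)).sum(axis=1) / (len(data) * h)

rng = np.random.default_rng(0)
data = rng.exponential(scale=1.0, size=500)       # true density has a boundary at 0
x = np.linspace(0, 4, 81)
fhat = kde_reflect(x, data, h=0.3)
print("estimate at the boundary:", round(float(fhat[0]), 3))  # Exp(1) density is 1 at 0
```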
