Similar Literature

20 similar documents found.
1.
The microarray technology allows the measurement of expression levels of thousands of genes simultaneously. The dimension and complexity of gene expression data obtained from microarrays create challenging data-analysis and management problems, ranging from the analysis of images produced by microarray experiments to the biological interpretation of results. Statistical and computational approaches are therefore assuming a substantial position within molecular biology. We consider the problem of simultaneously clustering the genes and tissue samples (more generally, conditions) of a microarray data set. This can be useful for revealing groups of genes involved in the same molecular process as well as groups of conditions where this process takes place. The need to find a subset of genes and tissue samples defining a homogeneous block has led to the application of double clustering techniques to gene expression data. Here, we focus on an extension of standard K-means that simultaneously clusters the observations and features of a data matrix, namely the double K-means model introduced by Vichi (2000). We introduce this model in a probabilistic framework and discuss the advantages of this approach. We also develop a coordinate ascent algorithm and test its performance via simulation studies and a real data set. Finally, we validate the results obtained on the real data set by building resampling confidence intervals for the block centroids.
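Below is a minimal coordinate-ascent sketch of double K-means: alternate between updating the block-centroid matrix, the row (gene) labels, and the column (sample) labels until nothing changes. The random initialization and plain mean-based centroids are illustrative assumptions, not Vichi's (2000) exact algorithm or its probabilistic extension.

```python
import numpy as np

def double_kmeans(X, R, Q, n_iter=100, seed=0):
    """Alternating (coordinate-ascent) double K-means sketch: update row
    clusters, column clusters and the block-centroid matrix in turn."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    rows = rng.integers(0, R, n)          # gene cluster labels
    cols = rng.integers(0, Q, p)          # sample cluster labels
    for _ in range(n_iter):
        # Block centroids: mean of the entries falling in each (r, q) block.
        C = np.zeros((R, Q))
        for r in range(R):
            for q in range(Q):
                block = X[np.ix_(rows == r, cols == q)]
                C[r, q] = block.mean() if block.size else 0.0
        # Reassign each row to the row cluster minimizing its squared error.
        new_rows = np.argmin(
            [((X - C[r, cols]) ** 2).sum(axis=1) for r in range(R)], axis=0)
        # Reassign each column analogously, using the updated row labels.
        new_cols = np.argmin(
            [((X - C[new_rows, q][:, None]) ** 2).sum(axis=0) for q in range(Q)],
            axis=0)
        if np.array_equal(new_rows, rows) and np.array_equal(new_cols, cols):
            break                         # converged: labels are stable
        rows, cols = new_rows, new_cols
    return rows, cols, C
```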

2.
Clustered or correlated samples of categorical response data arise frequently in many fields of application. The method of generalized estimating equations (GEEs) introduced in Liang and Zeger [Longitudinal data analysis using generalized linear models, Biometrika 73 (1986), pp. 13–22] is often used to analyse this type of data. GEEs give consistent estimates of the regression parameters and their variance based upon the Pearson residuals. Park et al. [Alternative GEE estimation procedures for discrete longitudinal data, Comput. Stat. Data Anal. 28 (1998), pp. 243–256] considered a modification of the GEE approach using the Anscombe residual and the deviance residual. In this work, we propose to extend this idea to a family of generalized residuals. An extensive simulation study is conducted for correlated binary and Poisson outcomes, and two numerical illustrations are presented.
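For orientation, a standard Pearson-residual-based GEE fit on simulated clustered binary data using statsmodels; the generalized-residual estimation procedures proposed in the paper are not implemented in that library, and the data-generating process below is an arbitrary assumption.

```python
import numpy as np
import statsmodels.api as sm

# Simulated correlated binary outcomes: 100 clusters of 4 measurements each,
# with a shared cluster effect inducing within-cluster correlation (assumed DGP).
rng = np.random.default_rng(1)
n_clusters, m = 100, 4
cluster = np.repeat(np.arange(n_clusters), m)
x = rng.normal(size=n_clusters * m)
u = np.repeat(rng.normal(scale=0.5, size=n_clusters), m)
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(0.5 * x + u))))

X = sm.add_constant(x)
fit = sm.GEE(y, X, groups=cluster,
             family=sm.families.Binomial(),
             cov_struct=sm.cov_struct.Exchangeable()).fit()
print(fit.summary())
```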

3.
DNA microarrays allow for measuring the expression levels of a large number of genes across different experimental conditions and/or samples. Association rule mining (ARM) methods are helpful in finding associational relationships between genes. However, classical association rule mining (CARM) algorithms extract only a subset of the associations that exist among different binary states and can therefore infer only part of the relationships governing gene regulation. To resolve this problem, we developed an extended association rule mining (EARM) strategy together with a new definition of the association rule. Compared with the CARM method, our new approach extracted more frequent gene sets from a public microarray data set. The EARM method discovered some biologically interesting association rules that were not detected by CARM. EARM therefore provides an effective tool for exploring relationships among genes.
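A sketch of the classical (CARM) baseline on binarized expression data, using the third-party mlxtend library. The binarization rule (a gene is "on" when above its median) and all thresholds are illustrative assumptions, and the EARM extension over multiple binary states is not shown.

```python
import numpy as np
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

# Binarize expression: a gene is "on" in a sample if above its median level.
rng = np.random.default_rng(0)
expr = pd.DataFrame(rng.normal(size=(60, 8)),
                    columns=[f"gene{i}" for i in range(8)])
onehot = expr.gt(expr.median())  # boolean samples-by-genes matrix

# Classical Apriori: frequent gene sets, then confidence-filtered rules.
frequent = apriori(onehot, min_support=0.3, use_colnames=True)
rules = association_rules(frequent, metric="confidence", min_threshold=0.7)
print(rules[["antecedents", "consequents", "support", "confidence"]].head())
```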

4.
Multivariate control charts are used to monitor stochastic processes for changes and unusual observations. Hotelling's T2 statistic is calculated for each new observation and an out‐of‐control signal is issued if it goes beyond the control limits. However, this classical approach becomes unreliable as the number of variables p approaches the number of observations n, and impossible when p exceeds n. In this paper, we devise an improvement to the monitoring procedure in high‐dimensional settings. We regularise the covariance matrix to estimate the baseline parameter and incorporate a leave‐one‐out re‐sampling approach to estimate the empirical distribution of future observations. An extensive simulation study demonstrates that the new method outperforms the classical Hotelling T2 approach in power, and maintains appropriate false positive rates. We demonstrate the utility of the method using a set of quality control samples collected to monitor a gas chromatography–mass spectrometry apparatus over a period of 67 days.
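A sketch of the two ingredients: a shrinkage-regularized covariance estimate and leave-one-out scoring to build an empirical reference distribution for the control limit. The Ledoit–Wolf estimator and the 99% quantile are stand-in assumptions rather than the paper's exact choices.

```python
import numpy as np
from sklearn.covariance import LedoitWolf

def regularized_t2(baseline, new_obs):
    """Hotelling-type T^2 with a shrinkage covariance estimate, usable when
    p approaches or exceeds n and the sample covariance is singular. The
    Ledoit-Wolf estimator is a stand-in for the paper's regularization."""
    mu = baseline.mean(axis=0)
    prec = np.linalg.inv(LedoitWolf().fit(baseline).covariance_)
    d = new_obs - mu
    return float(d @ prec @ d)

# Leave-one-out scores on the baseline give an empirical reference
# distribution from which a control limit can be read off.
rng = np.random.default_rng(2)
baseline = rng.normal(size=(50, 80))        # n = 50 observations, p = 80 variables
scores = [regularized_t2(np.delete(baseline, i, axis=0), baseline[i])
          for i in range(baseline.shape[0])]
limit = np.quantile(scores, 0.99)           # empirical 99% control limit
```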

5.
Time-course gene sets are collections of predefined groups of genes measured in patients repeatedly over time. Testing which gene sets vary significantly over time is an important problem in genomic data analysis. In this paper, the method of generalized estimating equations (GEEs), a semi-parametric approach, is applied to time-course gene set data. We propose a special structure for the working correlation matrix to handle the association among the repeated measurements of each patient over time. The proposed working correlation matrix also permits estimation of the effects of the same gene across different patients. The proposed approach is applied to an HIV therapeutic vaccine trial (the DALIA-1 trial). This data set has two phases, pre-ATI and post-ATI, defined relative to the vaccination period. Using multiple testing, the significant gene sets in the pre-ATI phase are detected, and data on two randomly selected gene sets in the post-ATI phase are also analyzed. Simulation studies are performed to illustrate the proposed approach, and their results confirm its good performance.
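A rough analogue using statsmodels with an AR(1) working correlation for the repeated measurements over time; the paper's special correlation structure, which also links the same gene across patients, is not reproduced, and the simulated data are an arbitrary assumption.

```python
import numpy as np
import statsmodels.api as sm

# Repeated gene-set scores per patient over five equally spaced time points.
rng = np.random.default_rng(6)
n_patients, n_times = 40, 5
pid = np.repeat(np.arange(n_patients), n_times)
t = np.tile(np.arange(n_times, dtype=float), n_patients)
y = (0.3 * t + np.repeat(rng.normal(size=n_patients), n_times)
     + rng.normal(size=pid.size))

# AR(1) working correlation over the ordered within-patient measurements.
X = sm.add_constant(t)
res = sm.GEE(y, X, groups=pid,
             family=sm.families.Gaussian(),
             cov_struct=sm.cov_struct.Autoregressive(grid=True)).fit()
print(res.params)  # intercept and time trend under the AR(1) working correlation
```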

6.
We consider the situation where there is a known regression model that can be used to predict an outcome, Y, from a set of predictor variables X. A new variable B is expected to enhance the prediction of Y. A dataset of size n containing Y, X and B is available, and the challenge is to build an improved model for Y|X, B that uses both the available individual-level data and some summary information obtained from the known model for Y|X. We propose a synthetic data approach, which consists of creating m additional synthetic data observations, and then analyzing the combined dataset of size n + m to estimate the parameters of the Y|X, B model. This combined dataset of size n + m now has missing values of B for m of the observations, and is analyzed using methods that can handle missing data (e.g., multiple imputation). We present simulation studies and illustrate the method using data from the Prostate Cancer Prevention Trial. Though the synthetic data method is applicable to a general regression context, to provide some justification, we show in two special cases that the asymptotic variances of the parameter estimates in the Y|X, B model are identical to those from an alternative constrained maximum likelihood estimation approach. This correspondence in special cases and the method's broad applicability make it appealing for use across diverse scenarios. The Canadian Journal of Statistics 47: 580–603; 2019 © 2019 Statistical Society of Canada
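A toy version of the pipeline: append m synthetic observations generated from a hypothetical known Y|X model (with B missing), impute B, and refit. For brevity a single imputation is used where a real analysis would use multiple imputation with Rubin's rules; all coefficients below are invented for illustration.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)

# Individual-level data of size n with outcome Y, old predictor X, new variable B.
n, m = 200, 400
X = rng.normal(size=n)
B = 0.6 * X + rng.normal(scale=0.8, size=n)
Y = 1.0 + 0.5 * X + 0.7 * B + rng.normal(size=n)

# m synthetic rows drawn from the (assumed known) Y|X model; B is missing there.
xs = rng.normal(size=m)
ys = 1.0 + 0.92 * xs + rng.normal(scale=1.2, size=m)  # hypothetical known Y|X fit
synth = np.column_stack([ys, xs, np.full(m, np.nan)])

# Combine and impute B; a single imputation is used here for brevity.
combined = np.vstack([np.column_stack([Y, X, B]), synth])
imputed = IterativeImputer(random_state=0).fit_transform(combined)
fit = LinearRegression().fit(imputed[:, 1:], imputed[:, 0])
print(fit.intercept_, fit.coef_)  # parameters of the Y|X,B model
```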

7.
Lin, Tsung I., Lee, Jack C., and Ni, Huey F. Statistics and Computing (2004) 14(2): 119–130.
A finite mixture model using the multivariate t distribution has been shown to be a robust extension of normal mixtures. In this paper, we present a Bayesian approach to inference about the parameters of t-mixture models. The prior distributions are specified to be weakly informative so as to avoid nonintegrable posterior distributions. We present two efficient EM-type algorithms for computing the joint posterior mode with the observed data and an incomplete future vector as the sample. Markov chain Monte Carlo sampling schemes are also developed to obtain the target posterior distribution of the parameters. The advantages of the Bayesian approach over the maximum likelihood method are demonstrated on a real data set.
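For intuition about the model class, a plain maximum-likelihood EM for a univariate two-component t-mixture with fixed degrees of freedom; the paper's Bayesian machinery (weakly informative priors, posterior-mode EM-type algorithms, MCMC) is not reproduced, and all settings below are assumptions.

```python
import numpy as np
from scipy.stats import t as t_dist

def t_mixture_em(x, K=2, nu=4.0, iters=200, seed=0):
    """Maximum-likelihood EM for a univariate t-mixture with fixed degrees of
    freedom nu -- a frequentist sketch of the model class only."""
    rng = np.random.default_rng(seed)
    mu = rng.choice(x, K, replace=False)
    sig = np.full(K, x.std())
    pi = np.full(K, 1.0 / K)
    for _ in range(iters):
        dens = np.stack([pi[k] * t_dist.pdf(x, nu, mu[k], sig[k]) for k in range(K)])
        r = dens / dens.sum(axis=0)                    # responsibilities
        u = (nu + 1) / (nu + ((x - mu[:, None]) / sig[:, None]) ** 2)  # tail weights
        mu = (r * u * x).sum(axis=1) / (r * u).sum(axis=1)
        sig = np.sqrt((r * u * (x - mu[:, None]) ** 2).sum(axis=1) / r.sum(axis=1))
        pi = r.mean(axis=1)
    return pi, mu, sig

x = np.concatenate([np.random.default_rng(1).standard_t(4, 300) - 3,
                    np.random.default_rng(2).standard_t(4, 300) + 3])
print(t_mixture_em(x))
```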

8.
The aim of this paper is to formulate an analytical–informational–theoretical approach which, given the incomplete nature of the available micro-level data, can be used to provide disaggregated values of a given variable. A functional relationship between the variable to be disaggregated and the available variables/indicators at the area level is specified through a combination of different macro- and micro-data sources. Data disaggregation is accomplished by considering two different cases. In the first case, sub-area level information on the variable of interest is available, and a generalized maximum entropy approach is employed to estimate the optimal disaggregate model. In the second case, we assume that the sub-area level information is partial and/or incomplete, and we estimate the model on a smaller scale by developing a generalized cross-entropy-based formulation. The proposed spatial-disaggregation approach is used in relation to an Italian data set in order to compute the value-added per manufacturing sector of local labour systems within the Umbria region, by combining the available micro/macro-level data and by formulating a suitable set of constraints for the optimization problem in the presence of errors in micro-aggregates.
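A toy cross-entropy disaggregation in the same spirit: recover sub-area shares closest (in cross-entropy) to prior shares, subject to adding-up and sub-aggregate constraints. All numbers are invented, and the paper's full GME/GCE formulations with error components are not reproduced.

```python
import numpy as np
from scipy.optimize import minimize

# Split a known regional total into four sub-area values by minimizing the
# cross-entropy between estimated shares p and prior shares q (from an
# area-level indicator), subject to a known sub-aggregate.
total = 100.0
q = np.array([0.40, 0.30, 0.20, 0.10])   # prior shares
A = np.array([[1.0, 1.0, 0.0, 0.0]])     # areas 0 and 1 have a known sub-total
b = np.array([0.65])                     # their joint share of the total

cross_entropy = lambda p: float(np.sum(p * np.log(p / q)))
cons = [{"type": "eq", "fun": lambda p: p.sum() - 1.0},
        {"type": "eq", "fun": lambda p: A @ p - b}]
res = minimize(cross_entropy, q, bounds=[(1e-9, 1.0)] * 4, constraints=cons)
print(res.x * total)                     # disaggregated values
```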

9.
Currently there is much interest in using microarray gene-expression data to form prediction rules for the diagnosis of patient outcomes. A gene-selection step is usually carried out first to find the genes that are most useful, according to some criterion, for distinguishing between the given classes of tissue samples. However, a selection bias is introduced into the estimated performance of the final prediction rule when it is formed from a subset of genes selected according to some optimality criterion. In this paper, we focus on the bias that arises when the full data set is not available in the first instance and the prediction rule is formed subsequently by working with the top-ranked genes from the full set. We demonstrate how large the subset of top genes must be before this selection bias is of no practical consequence.
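The phenomenon is easy to reproduce: on pure-noise data, selecting genes on the full data set before cross-validation gives wildly optimistic accuracy, whereas re-selecting inside each training fold does not. This illustrates selection bias in general rather than the authors' specific analysis of top-ranked subsets; the sizes and the KNN classifier are arbitrary assumptions.

```python
import numpy as np
from scipy.stats import ttest_ind
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Pure-noise data: any accuracy above 0.5 is an artefact of selection bias.
rng = np.random.default_rng(9)
X = rng.normal(size=(60, 2000))
y = np.repeat([0, 1], 30)

# Biased protocol: pick the top genes on ALL the data, then cross-validate.
p = ttest_ind(X[y == 0], X[y == 1], axis=0).pvalue
top = np.argsort(p)[:20]
biased = cross_val_score(KNeighborsClassifier(), X[:, top], y, cv=5).mean()

# Honest protocol: redo the gene selection inside every training fold.
accs = []
for tr, te in StratifiedKFold(5, shuffle=True, random_state=0).split(X, y):
    p = ttest_ind(X[tr][y[tr] == 0], X[tr][y[tr] == 1], axis=0).pvalue
    top = np.argsort(p)[:20]
    accs.append(KNeighborsClassifier().fit(X[tr][:, top], y[tr])
                .score(X[te][:, top], y[te]))
print(biased, np.mean(accs))  # biased estimate far above chance; honest ~0.5
```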

10.
In this note we provide a general framework for describing interval-censored samples, including estimation of the magnitude and rank positions of data that have been interval-censored, so as to counteract the effect of censoring. This process of sample adjustment, or renovation, allows samples to be compared graphically using diagrams (such as boxplots) which are based on ranks. The renovation process is based on Buckley-James regression estimators for linear regression with censored data.

11.
Cluster analysis is the distribution of objects into different groups or, more precisely, the partitioning of a data set into subsets (clusters) so that the data within a subset share some common trait according to some distance measure. Unlike classification, in clustering one must first decide the optimum number of clusters and then assign the objects to the clusters. Solving such problems for a large number of high-dimensional data points is quite complicated, and most existing algorithms do not perform properly. In the present work a new clustering technique applicable to large data sets has been used to cluster the spectra of 702,248 galaxies and quasars, each with 1,540 points in the wavelength range imposed by the instrument. The proposed technique successfully discovered five clusters in this 702,248 × 1,540 data matrix.
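The paper's own algorithm is not reproduced here; for reference, mini-batch K-means is a standard scalable alternative, since it streams small random batches rather than repeatedly visiting all observations. The matrix below is scaled down purely for illustration.

```python
import numpy as np
from sklearn.cluster import MiniBatchKMeans

# Stand-in for the 702,248 x 1,540 spectra matrix (smaller here for illustration).
rng = np.random.default_rng(4)
spectra = rng.normal(size=(10_000, 1_540))

# Mini-batch K-means processes small random batches, so memory use stays flat
# even for hundreds of thousands of high-dimensional spectra.
km = MiniBatchKMeans(n_clusters=5, batch_size=1_000, random_state=0).fit(spectra)
print(np.bincount(km.labels_))  # cluster sizes
```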

12.
This paper presents a new Bayesian clustering approach based on an infinite mixture model, specifically designed for time-course microarray data. The problem is to group together genes which have "similar" expression profiles, given the set of noisy measurements of their expression levels over a specific time interval. In order to capture the temporal variation of each curve, a non-parametric regression approach is used: each expression profile is expanded over a set of basis functions, and the coefficient sets of the curves are then modeled through a Bayesian infinite mixture of Gaussian distributions. The task of finding clusters of genes with similar expression profiles is thus reduced to the problem of grouping together genes whose coefficients are sampled from the same distribution in the mixture. A Dirichlet process prior is naturally employed in such models, since it deals automatically with the uncertainty about the number of clusters. Posterior inference is carried out by a split-and-merge MCMC sampling scheme which integrates out the parameters of the component distributions and updates only the latent vector of cluster memberships. The final configuration is obtained via the maximum a posteriori estimator. The performance of the method is studied using synthetic and real microarray data and is compared with that of competing techniques.
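A rough analogue of the pipeline that swaps the split-and-merge MCMC for variational inference: expand each curve over a basis and cluster the coefficient vectors with a truncated Dirichlet-process Gaussian mixture. The Legendre basis, truncation level, and simulated curves are illustrative assumptions.

```python
import numpy as np
from numpy.polynomial import legendre
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(5)
t = np.linspace(0.0, 1.0, 12)             # common time grid

# Simulated noisy expression curves from two underlying temporal shapes.
shapes = np.stack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
profiles = shapes[rng.integers(0, 2, 300)] + rng.normal(scale=0.2, size=(300, 12))

# 1) Expand each curve over a small basis and keep the coefficient vectors.
basis = np.stack([legendre.Legendre.basis(k)(2 * t - 1) for k in range(4)], axis=1)
coefs = np.linalg.lstsq(basis, profiles.T, rcond=None)[0].T

# 2) Cluster the coefficients with a truncated Dirichlet-process Gaussian
#    mixture; the effective number of clusters is inferred from the data.
dpgmm = BayesianGaussianMixture(
    n_components=20, weight_concentration_prior_type="dirichlet_process",
    random_state=0).fit(coefs)
labels = dpgmm.predict(coefs)
print(np.unique(labels))
```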

13.
In sequence-homology search, the list of all functions found and the counts of reads aligned to them constitute the functional profile of a metagenomic sample. However, the short read lengths associated with many next-generation sequencing technologies pose significant obstacles to this approach, including artificial families, cross-annotations, length bias and conservation bias. Widely applied cut-off methods, such as the BLAST E-value, cannot solve these problems. Building on published procedures for the artificial-family and cross-annotation issues, we propose in this paper to use zero-truncated Poisson and Binomial (ZTP-Bin) hierarchical modelling to correct the length bias and the conservation bias. Goodness-of-fit checks of the model and cross-validation of its predictions on a bioinformatically simulated sample show the validity of this approach. Evaluated on an in vitro-simulated data set, the proposed modelling method outperforms traditional methods. All three steps were then applied sequentially to real-life metagenomic samples to show that the proposed framework leads to a more accurate functional profile of a short-read metagenomic sample.
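Because only functions with at least one aligned read are observed, the count model is zero-truncated. A minimal maximum-likelihood fit of the zero-truncated Poisson component is sketched below; the binomial layer of the ZTP-Bin hierarchy is omitted and the counts are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import gammaln

def ztp_negloglik(lam, x):
    """Negative log-likelihood of the zero-truncated Poisson, whose pmf is
    P(X = k) = lam**k * exp(-lam) / (k! * (1 - exp(-lam))) for k >= 1."""
    return -(np.sum(x * np.log(lam) - gammaln(x + 1))
             - x.size * (lam + np.log1p(-np.exp(-lam))))

counts = np.array([1, 1, 2, 1, 3, 1, 2, 5, 1, 2])  # hypothetical reads per function
fit = minimize_scalar(lambda l: ztp_negloglik(l, counts),
                      bounds=(1e-6, 50.0), method="bounded")
print("lambda_hat =", fit.x)
```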

14.
Most statistical and data-mining algorithms assume that data come from a stationary distribution. However, in many real-world classification tasks, data arrive over time and the target concept to be learned from the data stream may change accordingly. Many algorithms have been proposed for learning such drifting concepts. To deal with learning when the distribution generating the data changes over time, dynamic weighted majority was proposed as an ensemble method for concept drift. Unfortunately, that technique considers neither the age of the classifiers in the ensemble nor their past correct classifications. In this paper, we propose a method that takes into account an expert's age as well as its contribution to the global algorithm's accuracy. We evaluate the effectiveness of the proposed method using m classifiers trained on a collection of n-fold partitions of the data. Experimental results on a benchmark data set show that our method outperforms existing ones.
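A sketch of the general idea: weight each expert's vote by its running accuracy, discounted by its age. The specific weighting heuristic and the sklearn-style expert interface are assumptions for illustration, not the paper's exact rule.

```python
import numpy as np

class AgeAccuracyEnsemble:
    """Weighted-majority sketch for drifting streams: each expert's vote is
    weighted by its running accuracy discounted by its age. The weighting
    heuristic is an assumption for illustration, not the paper's exact rule."""

    def __init__(self, experts, decay=0.99):
        self.experts = list(experts)          # pre-fitted binary classifiers
        self.hits = np.zeros(len(self.experts))
        self.seen = np.zeros(len(self.experts))
        self.age = np.zeros(len(self.experts))
        self.decay = decay

    def predict(self, x):
        votes = np.array([e.predict([x])[0] for e in self.experts])
        acc = np.where(self.seen > 0, self.hits / np.maximum(self.seen, 1), 0.5)
        w = acc * self.decay ** self.age      # older experts count for less
        return int(np.average(votes, weights=w) >= 0.5)

    def update(self, x, y):
        # After the true label arrives, refresh accuracies and ages.
        for i, e in enumerate(self.experts):
            self.hits[i] += (e.predict([x])[0] == y)
        self.seen += 1
        self.age += 1
```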

15.
It is often necessary to compare two measurement methods in medicine and other experimental sciences. This problem covers a broad range of data. Many authors have explored ways of assessing the agreement of two sets of measurements. However, relatively little attention has been paid to the problem of determining the sample size for designing an agreement study. In this paper, a method using the interval approach for concordance is proposed to calculate the sample size for an agreement study. The philosophy behind this is that concordance is satisfied when no more than a pre-specified number k of discordances are found in a reasonably large sample of size n, since it is much easier to define a discordant pair. The goal is to find such a reasonably large sample size n. The sample size calculation is based on two quantities, the discordance rate and the tolerance probability, which in turn can be used to quantify an agreement study. The proposed approach is demonstrated on a real data set. Copyright © 2009 John Wiley & Sons, Ltd.
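One plausible reading of the criterion, sketched below: find the smallest n such that observing at most k discordant pairs would rule out a discordance rate of p0 or worse with the required tolerance probability. The paper's exact formulation may differ.

```python
from scipy.stats import binom

def agreement_sample_size(p0, k, conf=0.95):
    """Smallest n such that observing at most k discordant pairs among n
    would rule out a true discordance rate of p0 or worse with confidence
    conf -- one plausible reading of the discordance-rate / tolerance-
    probability criterion, not necessarily the paper's exact formulation."""
    n = k + 1
    while binom.cdf(k, n, p0) > 1 - conf:
        n += 1
    return n

print(agreement_sample_size(p0=0.05, k=2))  # required number of measurement pairs
```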

16.
In this paper, we apply empirical likelihood to two-sample problems with growing high dimensionality. Our results are demonstrated by constructing confidence regions for the difference of the means of two p-dimensional samples and for the difference between the coefficients of two p-dimensional linear models. We show that the empirical-likelihood-based estimator is efficient: as p → ∞ for high-dimensional data, the limiting distribution of the EL ratio statistic for the difference of the means of two samples, and for the difference between the coefficients of a two-sample linear model, is asymptotically normal. Furthermore, empirical likelihood (EL) gives an efficient estimator of the regression coefficients in linear models and can be as efficient as a parametric approach. The performance of the proposed method is illustrated via numerical simulations.
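For intuition, a one-sample, univariate empirical-likelihood ratio for a mean, computed by solving for the Lagrange multiplier; the paper's two-sample, growing-p setting and its coefficient-difference regions are far more involved.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

def el_stat_mean(x, mu):
    """-2 log empirical likelihood ratio for H0: E[X] = mu (one-sample,
    univariate sketch; the paper treats two-sample, growing-p settings)."""
    d = x - mu
    if d.min() >= 0 or d.max() <= 0:
        return np.inf                  # mu outside the convex hull of the data
    # Lagrange multiplier: weights p_i = 1 / (n (1 + lam * d_i)) must be > 0,
    # which confines lam to the open interval below.
    g = lambda lam: np.mean(d / (1 + lam * d))
    lo, hi = -1.0 / d.max(), -1.0 / d.min()
    lam = brentq(g, lo + 1e-10, hi - 1e-10)
    return 2.0 * np.sum(np.log1p(lam * d))

x = np.random.default_rng(0).normal(loc=0.2, size=100)
stat = el_stat_mean(x, 0.0)
print(stat, chi2.sf(stat, df=1))       # Wilks-type chi-square calibration
```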

17.
Nonlinear mixed-effects (NLME) models are flexible enough to handle repeated-measures data from various disciplines. In this article, we propose both maximum-likelihood and restricted maximum-likelihood estimation of NLME models using first-order conditional expansion (FOCE) and the expectation–maximization (EM) algorithm. The FOCE-EM algorithm implemented in the ForStat procedure SNLME is compared with the Lindstrom and Bates (LB) algorithm implemented in both the SAS macro NLINMIX and the S-Plus/R function nlme in terms of computational efficiency and statistical properties. Two real-world data sets, an orange tree data set and a Chinese fir (Cunninghamia lanceolata) data set, and a simulated data set were used for evaluation. FOCE-EM converged for all mixed models derived from the base model in the two real-world cases, while LB did not, especially for models in which random effects are simultaneously considered in several parameters to account for between-subject variation. However, both algorithms gave identical parameter estimates and fit statistics for the converged models. We therefore recommend using FOCE-EM for NLME models, particularly when convergence is a concern in model selection.

18.
In this paper, we study multi-class differential gene-expression detection for microarray data. We propose a likelihood-based approach to estimating an empirical null distribution that incorporates gene interactions and provides more accurate false-positive control than the commonly used permutation- or theoretical-null-based approaches. We propose to rank important genes by their p-values or local false discovery rates based on the estimated empirical null distribution. Through simulations and an application to lung transplant microarray data, we illustrate the competitive performance of the proposed method.
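A generic sketch in the spirit of empirical-null local fdr estimation (robust central fit of the null, kernel estimate of the marginal); it is not the authors' interaction-aware likelihood approach, and pi0 is assumed rather than estimated.

```python
import numpy as np
from scipy.stats import gaussian_kde, norm

def local_fdr(z, pi0=0.9):
    """Empirical-null sketch: estimate the null N(mu0, s0) robustly from the
    centre of the z-values (median and IQR), the marginal density f by a
    kernel estimate, and return pi0 * f0(z) / f(z). A generic heuristic in
    the spirit of Efron's local fdr, not the authors' likelihood model."""
    mu0 = np.median(z)
    s0 = (np.quantile(z, 0.75) - np.quantile(z, 0.25)) / 1.349  # robust sigma
    f = gaussian_kde(z)(z)
    return np.clip(pi0 * norm.pdf(z, mu0, s0) / f, 0.0, 1.0)

z = np.concatenate([np.random.default_rng(0).normal(size=950),
                    np.random.default_rng(1).normal(3.0, 1.0, size=50)])
ranked = np.argsort(local_fdr(z))  # genes ranked from most to least interesting
```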

19.
This is a comparative study of various clustering and classification algorithms applied to differentiate cancer and non-cancer protein samples using mass spectrometry data. Our study demonstrates the usefulness of a feature-selection step prior to applying a machine learning tool. A natural and common choice of feature-selection tool is the collection of marginal p-values obtained from t-tests of the intensity differences at each m/z ratio between the cancer and non-cancer samples. We study how the choice of cutoff, in terms of overall Type 1 error rate control, affects the performance of the clustering and classification algorithms that use the significant features. For the classification problem, we also consider m/z selection using the importance measures computed by Breiman's Random Forest algorithm. Using a data set of proteomic analyses of serum from ovarian cancer patients and from cancer-free individuals in the Food and Drug Administration and National Cancer Institute Clinical Proteomics Database, we undertake a comparative study of the net effect of the combination of machine learning algorithm, feature-selection tool, and cutoff criterion on performance, as measured by an appropriate error rate.
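A compact sketch of the two feature-selection routes on synthetic spectra: marginal t-test p-values with a Bonferroni cutoff (one possible Type 1 error control), and Random Forest importance ranking. Data and thresholds are invented for illustration.

```python
import numpy as np
from scipy.stats import ttest_ind
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(7)
X = rng.normal(size=(120, 500))     # intensities at 500 m/z ratios (toy scale)
y = rng.integers(0, 2, 120)         # cancer vs non-cancer labels
X[y == 1, :20] += 1.0               # plant a signal in the first 20 features

# Marginal t-test p-values; a Bonferroni cutoff is one way to control the
# overall Type 1 error rate (the paper studies how this choice matters).
pvals = ttest_ind(X[y == 0], X[y == 1], axis=0).pvalue
selected_t = np.flatnonzero(pvals < 0.05 / X.shape[1])

# Alternative ranking: Random Forest impurity-based importance measures.
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
selected_rf = np.argsort(rf.feature_importances_)[::-1][:selected_t.size]
print(selected_t, selected_rf)
```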

20.
This article considers the Phase I analysis of data when the quality of a process or product is characterized by a multiple linear regression model. This is usually referred to as the analysis of linear profiles in the statistical quality control literature. The literature includes several approaches for the analysis of simple linear regression profiles; little work, however, has been done on multiple linear regression profiles. This article proposes a new approach for the Phase I analysis of multiple linear regression profiles. Using this approach, regardless of the number of explanatory variables, the profile response is monitored using only three parameters: an intercept, a slope, and a variance. Using simulation, the performance of the proposed method is compared to that of existing methods for monitoring multiple linear profile data in terms of the probability of a signal. The advantage of the proposed method over existing methods is greatly improved detection of changes in the process parameters of high-dimensional linear profiles. The article also proposes useful diagnostic aids, based on F-statistics, to help identify the source of profile variation and the locations of out-of-control samples. Finally, the use of multiple linear profile methods is illustrated with a data set from a calibration application at the National Aeronautics and Space Administration (NASA) Langley Research Center.
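A generic Phase I sketch of the F-statistic idea: fit each sample's profile by OLS and flag samples whose coefficient vectors deviate from the pooled average. It does not reproduce the authors' three-parameter monitoring scheme; dimensions and the control limit are illustrative.

```python
import numpy as np
from scipy.stats import f as f_dist

rng = np.random.default_rng(8)
m, n, p = 30, 20, 3                         # samples, points per profile, coeffs
X = np.column_stack([np.ones(n), rng.uniform(size=(n, p - 1))])
B = np.tile([1.0, 2.0, -1.0], (m, 1))
B[-1] += 0.8                                # one out-of-control profile
Y = B @ X.T + rng.normal(scale=0.1, size=(m, n))

XtX = X.T @ X
b = Y @ X @ np.linalg.inv(XtX)              # per-sample OLS coefficients
b_bar = b.mean(axis=0)
s2 = ((Y - b @ X.T) ** 2).sum() / (m * (n - p))   # pooled error variance
# F-type statistic comparing each sample's coefficients with the pooled fit.
F = np.array([(bi - b_bar) @ XtX @ (bi - b_bar) / (p * s2) for bi in b])
print(np.flatnonzero(F > f_dist.ppf(0.999, p, m * (n - p))))  # flagged samples
```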
