Similar Documents
20 similar records found.
1.
A family of partial likelihood logistic models is proposed for clustered survival data that are reported in discrete time and that may be censored. The possible dependence of individual survival times within clusters is modeled, while distinct clusters are assumed to be independent. Two types of clusters are considered. First, all clusters have the same size and are identically distributed. Second, the clusters may vary in size. In both cases our asymptotic results apply to a large number of small independent clusters.
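The discrete-time logistic formulation above rests on a standard data expansion: each subject contributes one binary record per period at risk. A minimal sketch (not the authors' code) of that person-period expansion:

```python
# Person-period expansion for discrete-time logistic survival models:
# each subject yields one binary record per period at risk; the event
# indicator is 1 only in the final period if the subject failed,
# and 0 everywhere if the subject was censored.

def person_period(subjects):
    """subjects: list of (id, observed_time, event), event=1 for failure,
    0 for censoring. Returns rows (id, period, y)."""
    rows = []
    for sid, t, event in subjects:
        for period in range(1, t + 1):
            y = 1 if (event == 1 and period == t) else 0
            rows.append((sid, period, y))
    return rows

rows = person_period([("a", 3, 1), ("b", 2, 0)])
# subject "a" fails at period 3 -> (a,1,0), (a,2,0), (a,3,1)
# subject "b" censored at 2    -> (b,1,0), (b,2,0)
```

The expanded rows can then be fed to any logistic regression routine, with period indicators as covariates.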

2.
Clustering algorithms are used in the analysis of gene expression data to identify groups of genes with similar expression patterns. These algorithms group genes with respect to a predefined dissimilarity measure without using any prior classification of the data. Most clustering algorithms require the number of clusters as input, and all objects in the dataset are usually assigned to one of the clusters. We propose a clustering algorithm that finds clusters sequentially and allows for sporadic objects, that is, objects that are not assigned to any cluster. The proposed sequential clustering algorithm has two steps. First, it finds candidates for cluster centers; multiple candidates are used to make the search for clusters more efficient. Second, it conducts a local search around the candidate centers to find the set of objects that defines a cluster. The candidate clusters are compared using a predefined score, the best cluster is removed from the data, and the procedure is repeated. We investigate the performance of this algorithm using simulated data, and we apply the method to analyze gene expression profiles in a study on the plasticity of dendritic cells.
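The two-step sequential idea can be sketched in a toy form. This is an illustrative simplification under stated assumptions (a fixed search radius `r` and cluster size as the score, neither of which is specified in the abstract), not the authors' algorithm:

```python
import math

# Toy sequential clustering: every remaining point is a candidate centre,
# the local search collects points within radius r, the best-scoring
# candidate cluster is removed, and the loop repeats; points never
# captured by a cluster remain "sporadic".

def sequential_clusters(points, r=1.0, min_size=2):
    remaining = list(points)
    clusters = []
    while True:
        best = None
        for c in remaining:                      # candidate centres
            members = [p for p in remaining if math.dist(p, c) <= r]
            if best is None or len(members) > len(best):
                best = members
        if best is None or len(best) < min_size:
            break                                # no cluster worth keeping
        clusters.append(best)
        remaining = [p for p in remaining if p not in best]
    return clusters, remaining                   # remaining = sporadic objects

pts = [(0, 0), (0.5, 0), (0.2, 0.3), (10, 10), (10.4, 10), (50, 50)]
clusters, sporadic = sequential_clusters(pts)
# two tight clusters are found; (50, 50) is left as a sporadic object
```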

3.
We consider Dirichlet process mixture models in which the observed clusters in any particular dataset are not viewed as belonging to a finite set of possible clusters but rather as representatives of a latent structure in which objects belong to one of a potentially infinite number of clusters. As more information is revealed, the number of inferred clusters is allowed to grow. The precision parameter of the Dirichlet process is the crucial parameter controlling the number of clusters. We develop a framework for specifying the hyperparameters of the prior for the precision parameter that can be used in both the presence and absence of subjective prior information about the level of clustering. Our approach is illustrated in an analysis of clustering brands at the magazine Which?. The results are compared with the approach of Dorazio (2009) via a simulation study.
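The role of the precision parameter can be seen directly in the Chinese-restaurant-process representation of the Dirichlet process (a standard equivalence; the simulation below is illustrative, not the authors' hyperprior framework): larger alpha yields more clusters.

```python
import random

# Chinese restaurant process: customer i starts a new table with
# probability alpha/(alpha+i), otherwise joins table j with probability
# proportional to its current occupancy n_j. The number of tables is the
# number of clusters, and it grows with the precision parameter alpha.

def crp_partition(n, alpha, rng):
    counts = []                          # customers per table (cluster sizes)
    for i in range(n):
        u = rng.random() * (alpha + i)
        if u < alpha:
            counts.append(1)             # open a new table
        else:
            u -= alpha
            for j, c in enumerate(counts):
                if u < c:
                    counts[j] += 1       # join an existing table
                    break
                u -= c
    return counts

rng = random.Random(1)
small = [len(crp_partition(200, 0.5, rng)) for _ in range(50)]
large = [len(crp_partition(200, 10.0, rng)) for _ in range(50)]
# on average, alpha = 10 yields many more clusters than alpha = 0.5
```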

4.
In this study, an attempt has been made to classify textile fabrics based on their physical properties using statistical multivariate techniques, namely discriminant analysis and cluster analysis. Initially, discriminant functions were constructed for the classification of three known categories of fabrics made of polyester, lyocell/viscose and treated polyester. The classification yielded 100% accuracy. Each of the three categories of fabrics was then subjected to the K-means clustering algorithm, which yielded three clusters. These clusters were subjected to discriminant analysis, which again yielded 100% correct classification, indicating that the clusters are well separated. The properties of the clusters are also investigated with respect to the measurements.
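The K-means step used in the study can be sketched in one dimension (an illustrative simplification: the paper clusters multivariate fabric properties, here a single hypothetical property stands in):

```python
from statistics import mean

# Minimal 1-d K-means: alternate between assigning each value to its
# nearest centre and recomputing each centre as the mean of its group.

def kmeans_1d(xs, centres, iters=20):
    groups = [[] for _ in centres]
    for _ in range(iters):
        groups = [[] for _ in centres]
        for x in xs:
            i = min(range(len(centres)), key=lambda j: abs(x - centres[j]))
            groups[i].append(x)
        centres = [mean(g) if g else c for g, c in zip(groups, centres)]
    return centres, groups

centres, groups = kmeans_1d([1, 2, 1.5, 10, 11, 10.5, 20, 21],
                            centres=[0.0, 9.0, 25.0])
# converges to centres near 1.5, 10.5 and 20.5
```

The resulting groups could then be fed to a discriminant analysis, as in the study's second stage.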

5.
6.
We consider n individuals described by p variables, represented by points on the surface of the unit hypersphere. We suppose that the individuals are fixed and that the set of variables comes from a mixture of bipolar Watson distributions. For the mixture identification, we use the EM and dynamic-clusters algorithms, which enable us to obtain a partition of the set of variables into clusters of variables.

Our aim is to evaluate the clusters obtained by these algorithms, using measures of within-group and between-group variability, and to compare them with the clusters obtained by other clustering approaches, analyzing both simulated and real data.

7.

Cluster analysis is the assignment of objects to groups, or more precisely the partitioning of a data set into subsets (clusters), so that the data within each subset share some common trait according to a distance measure. Unlike classification, in clustering one must first decide on the optimum number of clusters and then assign the objects to the clusters. Solving such problems for a large number of high-dimensional data points is complicated, and most existing algorithms do not perform well. In the present work a new clustering technique applicable to large data sets has been used to cluster the spectra of 702,248 galaxies and quasars, each with 1,540 points in the wavelength range imposed by the instrument. The proposed technique successfully discovered five clusters in this 702,248 × 1,540 data matrix.

8.
Silhouette information evaluates the quality of the partition detected by a clustering technique. Since it is based on a measure of distance between the clustered observations, its standard formulation is not adequate when a density-based clustering technique is used. In this work we propose a suitable modification of the Silhouette information aimed at evaluating the quality of clusters in a density-based framework. It is based on estimating the posterior probabilities that the data belong to the clusters, and it may be used both to measure our confidence in the allocation of data to clusters and to choose the best among competing partitions.
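A sketch of the posterior-probability idea (this is an illustrative scoring rule, not the authors' exact formula): score each observation by the log-ratio of its largest to second-largest posterior cluster probability, normalised by the sample maximum, so values near 1 indicate confident allocation and values near 0 indicate ambiguity.

```python
import math

# Density-based confidence scores from posterior cluster probabilities:
# raw score = log(top posterior / runner-up posterior), then normalised
# by the maximum finite raw score in the sample.

def density_silhouette(posteriors):
    """posteriors: list of per-observation probability vectors."""
    raw = []
    for p in posteriors:
        top, second = sorted(p, reverse=True)[:2]
        raw.append(math.log(top / second) if second > 0 else float("inf"))
    finite = [r for r in raw if math.isfinite(r)] or [1.0]
    m = max(finite)
    return [1.0 if not math.isfinite(r) else (r / m if m > 0 else 0.0)
            for r in raw]

scores = density_silhouette([[0.98, 0.02], [0.55, 0.45], [0.70, 0.30]])
# scores[0] == 1.0 (most confident); scores[1] is the smallest (most ambiguous)
```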

9.
Accurate and efficient methods to detect unusual clusters of abnormal activity are needed in many fields such as medicine and business. Often the size of clusters is unknown; hence, multiple (variable) window scan statistics are used to identify clusters over a set of potential cluster sizes. We give an efficient method to compute the exact distribution of multiple-window discrete scan statistics for higher-order, multi-state Markovian sequences. We define a Markov chain to efficiently keep track of the probabilities needed to compute p-values for the statistic. The state space of the Markov chain is set up by a criterion developed to identify strings associated with observing the specified values of the statistic. Using our algorithm, we identify cases where the available approximations do not perform well. We demonstrate our methods by detecting unusual clusters of made free-throw shots by National Basketball Association players during the 2009-2010 regular season.
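The statistic itself is simple to state (the exact Markov-chain distribution theory of the paper is not reproduced here): for each window length w, the discrete scan statistic is the maximum number of events in any w consecutive trials.

```python
# Multiple-window discrete scan statistic over a 0/1 sequence: for each
# window length w, take the maximum count of events in any window of
# w consecutive trials; unusually large maxima flag clusters.

def scan_statistic(seq, w):
    return max(sum(seq[i:i + w]) for i in range(len(seq) - w + 1))

def multiple_window_scan(seq, windows):
    return {w: scan_statistic(seq, w) for w in windows}

made = [1, 1, 0, 1, 1, 1, 1, 0, 0, 1]   # e.g. made (1) vs missed (0) free throws
stats = multiple_window_scan(made, [3, 5])
# stats == {3: 3, 5: 4}
```

Computing a p-value then requires the distribution of these maxima under the null sequence model, which is what the paper's Markov-chain construction delivers exactly.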

10.
Two sampling strategies for estimating the population mean in overlapping clusters have been proposed. In the first strategy clusters are selected with equal probabilities, whereas in the second the selection probabilities are taken proportional to cluster size. The latter strategy is expected to be more efficient than the former.
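The two strategies can be compared in a small simulation. This sketch makes illustrative assumptions not in the abstract (non-overlapping clusters, with-replacement sampling, a Hansen-Hurwitz form of estimator): when cluster totals are roughly proportional to cluster size, the size-proportional (PPS) estimator has smaller variance.

```python
import random

# Hansen-Hurwitz estimation of the population mean from m cluster draws:
# each draw contributes total_i / (N * p_i), where p_i is the cluster's
# selection probability; under PPS, p_i is proportional to cluster size.

def simulate(clusters, m, pps, rng):
    N = sum(len(c) for c in clusters)
    if pps:
        weights = [len(c) / N for c in clusters]          # proportional to size
    else:
        weights = [1 / len(clusters)] * len(clusters)     # equal probabilities
    picks = rng.choices(range(len(clusters)), weights=weights, k=m)
    return sum(sum(clusters[i]) / (N * weights[i]) for i in picks) / m

rng = random.Random(0)
clusters = [[5] * 2, [5] * 10, [5] * 40]   # every element equals 5
est = [simulate(clusters, 3, True, rng) for _ in range(200)]
# with constant element values, the PPS estimator hits the true mean exactly
```

The zero-variance case above is the extreme of the general pattern: PPS gains most when cluster totals track cluster sizes.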

11.
Gi-Sung Lee & Daiho Uhm, Statistics, 2013, 47(3): 685-709
We propose new variants of Land et al.'s randomized response model [Estimation of a rare sensitive attribute using Poisson distribution. Statistics. 2011. DOI: 10.1080/02331888.2010.524300] for when a population consists of clusters, and for when the population is stratified with clusters in each stratum. The estimator of the mean number of persons possessing a rare sensitive attribute, its variance, and the variance estimator are derived both when the parameter of a rare unrelated attribute is assumed known and when it is unknown. The clusters are selected with and without replacement. When they are selected with replacement, the selection probabilities for each cluster are defined either proportional to cluster size or with equal probability. In addition, a variance comparison between probability-proportional-to-size (PPS) sampling and stratified PPS sampling is performed. When the parameters vary across clusters, stratified PPS is more efficient than PPS.

12.
Block clustering with collapsed latent block models
We introduce a Bayesian extension of the latent block model for model-based block clustering of data matrices. Our approach considers a block model in which the block parameters may be integrated out. The result is a posterior defined over the numbers of row and column clusters and the cluster memberships. The numbers of row and column clusters need not be known in advance, as these are sampled along with the cluster memberships using Markov chain Monte Carlo. This differs from existing work on latent block models, where the number of clusters is assumed known or chosen using some information criterion. We analyze both simulated and real data to validate the technique.

13.
The forward search is a method of robust data analysis in which outlier-free subsets of the data of increasing size are used in model fitting; the data are then ordered by closeness to the model. Here the forward search, with many random starts, is used to cluster multivariate data. These random starts lead to the diagnostic identification of tentative clusters. Applying the forward search to the proposed individual clusters establishes cluster membership by identifying non-cluster members as outlying. The method requires no prior information on the number of clusters and does not seek to classify all observations. These properties are illustrated by the analysis of 200 six-dimensional observations on Swiss banknotes. The importance of linked plots and brushing in elucidating data structures is illustrated. We also provide an automatic method for determining cluster centres and compare the behaviour of our method with model-based clustering. In a simulated example with eight clusters our method provides more stable and accurate solutions than model-based clustering. We consider the computational requirements of both procedures.
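The core forward-search loop can be sketched in one dimension (an illustrative toy: distance to the current subset mean stands in for the Mahalanobis distances used in the multivariate method, and the start is simply the values nearest the median rather than a true robust fit):

```python
from statistics import mean, median

# Toy 1-d forward search: start from a small clean subset, then repeatedly
# refit (recompute the subset mean) and admit the closest remaining
# observation. Outliers enter the subset last, which is the diagnostic
# signal the forward search exploits.

def forward_search_order(data, start_size=3):
    med = median(data)
    order = sorted(data, key=lambda x: abs(x - med))[:start_size]
    rest = [x for x in data if x not in order]
    entry = list(order)                       # order in which points enter
    while rest:
        centre = mean(order)
        nxt = min(rest, key=lambda x: abs(x - centre))
        rest.remove(nxt)
        order.append(nxt)
        entry.append(nxt)
    return entry

entry = forward_search_order([1.0, 1.2, 0.9, 1.1, 1.3, 8.0])
# the outlier 8.0 enters the subset last
```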

14.
Clustering streaming data is gaining importance as automatic data acquisition technologies are deployed in diverse applications. We propose a fully incremental projected divisive clustering method for high-dimensional data streams that is motivated by high density clustering. The method is capable of identifying clusters in arbitrary subspaces, estimating the number of clusters, and detecting changes in the data distribution which necessitate a revision of the model. The empirical evaluation of the proposed method on numerous real and simulated datasets shows that it is scalable in dimension and number of clusters, is robust to noisy and irrelevant features, and is capable of handling a variety of types of non-stationarity.

15.
Randomly generated points in R^d are connected to their nearest neighbours (in Euclidean distance). The resulting connected clusters of points are studied. This paper examines questions related to the collection of clusters formed and to the internal structure of a cluster. In particular, the one-dimensional structure is examined in detail.
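The construction is concrete in the one-dimensional case the paper examines: link each point to its nearest neighbour and read off the connected components of the resulting graph (a small illustration, using a union-find over the undirected nearest-neighbour edges):

```python
# Nearest-neighbour graph clusters in 1-d: each point is joined to its
# nearest neighbour; the connected components of the undirected graph
# are the clusters studied in the paper.

def nn_clusters(points):
    n = len(points)
    parent = list(range(n))                  # union-find forest
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]    # path halving
            i = parent[i]
        return i
    for i in range(n):
        j = min((k for k in range(n) if k != i),
                key=lambda k: abs(points[k] - points[i]))
        parent[find(i)] = find(j)            # union i with its nearest neighbour
    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(points[i])
    return sorted(groups.values(), key=min)

clusters = nn_clusters([0.0, 0.1, 0.25, 5.0, 5.2, 9.0, 9.05])
# -> [[0.0, 0.1, 0.25], [5.0, 5.2], [9.0, 9.05]]
```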

16.
We use simulations based on data on injury severity in car accidents to compare methods for analyzing very large data sets containing clusters of individuals for which the measured response is polytomous. Retrospective sampling of clusters is used to expedite the analysis of the large data set while at the same time obtaining information about rare but important outcomes. An additional complication in the analysis of such data sets is that there can be two types of covariates: those that vary within a cluster and those that vary only among clusters. Weighted generalized estimating equations are developed to obtain consistent estimates of the regression coefficients in a proportional-odds model, along with a weighted robust covariance matrix for estimating the variability of these estimated coefficients.

17.
In cluster-randomized trials, investigators randomize clusters of individuals such as households, medical practices, schools or classrooms, even though the unit of interest is the individual. This results in a loss of efficiency in estimating the unknown parameters as well as in the power of tests of treatment effects. To recoup this efficiency loss, some studies pair similar clusters and randomize treatment within pairs. However, the clusters within a treatment arm may be heterogeneous. In this article, we propose a locally optimal design that accounts for cluster heterogeneity and optimally allocates the subjects within each cluster. To address the dependence of the design on the unknown parameters, we also discuss Bayesian optimal designs. The performance of the proposed designs is investigated numerically through data examples.

18.
This article proposes a new spatial cluster detection method for longitudinal outcomes that detects neighborhoods and regions with elevated rates of disease while controlling for individual-level confounders. The proposed method, CumResPerm, uses cumulative geographic residuals in a permutation test to detect potential clusters, defined as sets of administrative regions such as a town or a group of administrative regions. Previous cluster detection methods cannot incorporate individual-level data, including covariate adjustment, while still defining potential clusters by informative neighborhood or town boundaries. Detecting such spatial clusters is often of interest because individuals residing in a town may have similar environmental exposures or socioeconomic backgrounds for administrative reasons, such as zoning laws. These boundaries can therefore be very informative and more relevant than arbitrary clusters such as the standard circle or square. The CumResPerm method is illustrated with the Home Allergens and Asthma prospective cohort study, analyzing the relationship between area or neighborhood of residence and a repeatedly measured outcome, occurrence of wheeze in the previous six months, while taking residential mobility into account.

19.
In epidemiologic studies where the outcome is binary, the data often arise as clusters, as when siblings, friends or neighbors are used as matched controls in a case-control study. Conditional logistic regression (CLR) is typically used in such studies to estimate the odds ratio for an exposure of interest. However, CLR assumes the exposure coefficient is the same in every cluster, and CLR-based inference can be badly biased when this homogeneity is violated. Existing goodness-of-fit tests for CLR are not designed to detect such violations. Good alternative methods of analysis exist if one suspects heterogeneity across clusters; however, routinely using robust alternatives when there is no appreciable heterogeneity can cost precision and be computationally difficult, particularly when the clusters are small. We propose a simple non-parametric test, the test of heterogeneous susceptibility (THS), to assess the assumption of homogeneity of a coefficient across clusters. The test is easy to apply and provides guidance as to the appropriate method of analysis. Simulations demonstrate that the THS has reasonable power to reveal violations of homogeneity. We illustrate by applying the THS to a study of periodontal disease.

20.
We consider the adjustment, based upon a sample of size n, of collections of vectors drawn from either an infinite or a finite population. The vectors may be judged to be either normally distributed or, more generally, second-order exchangeable. We develop the work of Goldstein and Wooff (1998) to show how the familiar univariate finite population corrections (FPCs) naturally generalise to individual quantities in the multivariate population. The types of information we gain by sampling are identified with the orthogonal canonical variable directions derived from a generalised eigenvalue problem. These canonical directions share the same co-ordinate representation for all sample sizes and, for equally defined individuals, all population sizes, enabling simple comparisons between the effects of different sample sizes and of different population sizes. We conclude by considering how the FPC is modified for multivariate cluster sampling with exchangeable clusters. In univariate two-stage cluster sampling, we may decompose the variance of the population mean into the sum of the variance of cluster means and the variance of the cluster members within clusters. The first term has an FPC relating to the sampling fraction of clusters, the second an FPC relating to the sampling fraction within clusters. We illustrate how this generalises to the multivariate case. We decompose the variance into two terms: the first relating to multivariate finite population sampling of clusters and the second to multivariate finite population sampling within clusters. We solve two generalised eigenvalue problems to show how the univariate result generalises to the multivariate one: each of the two FPCs attaches to one, and only one, of the two eigenbases.
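For reference, the familiar univariate FPC being generalised is the standard exact result for simple random sampling without replacement (the paper's multivariate machinery is not reproduced here): Var(sample mean) = (1 - n/N) S^2/n, with S^2 the population variance on N - 1 degrees of freedom.

```python
from itertools import combinations
from statistics import mean

# Univariate finite population correction: under SRS without replacement,
# Var(ybar) = (1 - n/N) * S^2 / n, where S^2 = sum((y - Ybar)^2)/(N - 1).
# The identity is exact, so it can be checked by enumerating every sample.

def var_mean_fpc(pop, n):
    N = len(pop)
    ybar = mean(pop)
    s2 = sum((y - ybar) ** 2 for y in pop) / (N - 1)
    return (1 - n / N) * s2 / n

pop = [2.0, 4.0, 7.0, 11.0, 16.0]
formula = var_mean_fpc(pop, 3)

# brute force: variance of the sample mean over all C(5, 3) samples
means = [mean(s) for s in combinations(pop, 3)]
brute = sum((m - mean(means)) ** 2 for m in means) / len(means)
# brute agrees with the formula exactly
```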
