Similar Articles
20 similar articles found (search time: 31 ms)
1.
The flexible Dirichlet (FD) distribution (Ongaro and Migliorati in J. Multivar. Anal. 114: 412–426, 2013) makes it possible to preserve many theoretical properties of the Dirichlet distribution without inheriting its lack of flexibility in modeling the various independence concepts appropriate for compositional data, i.e. data representing vectors of proportions. In this paper we explore the potential of the FD from an inferential and applicative viewpoint. In this regard, the key feature is the special structure defining its Dirichlet mixture representation. This structure determines a simple and clearly interpretable differentiation among mixture components which can capture the main features of a large variety of data sets. Furthermore, it allows substantially greater flexibility than the Dirichlet, accommodating both unimodality and a varying number of modes. Very importantly, this increased flexibility is obtained without incurring many of the inferential difficulties typical of general mixtures. Indeed, the FD displays the identifiability and likelihood behavior proper to common (non-mixture) models. Moreover, thanks to a novel non-random initialization based on the special FD mixture structure, an efficient and sound estimation procedure can be devised which suitably combines EM-type algorithms. Reliable complete-data likelihood-based estimators of standard errors can be provided as well.
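As an illustrative sketch of the mixture representation described above (a generic finite mixture of Dirichlet distributions with made-up parameters, not the paper's exact FD parameterization or its structured component constraints):

```python
import numpy as np

def sample_dirichlet_mixture(n, alphas, weights, rng=None):
    """Draw n points from a finite mixture of Dirichlet distributions.

    alphas:  list of Dirichlet parameter vectors, one per component
    weights: mixing proportions (must sum to 1)
    """
    rng = np.random.default_rng(rng)
    alphas = [np.asarray(a, dtype=float) for a in alphas]
    labels = rng.choice(len(alphas), size=n, p=weights)
    return np.array([rng.dirichlet(alphas[k]) for k in labels])

# Two components that differ in a single coordinate, loosely mimicking the
# FD's structured differentiation among components
X = sample_dirichlet_mixture(
    1000,
    alphas=[[2.0, 2.0, 2.0], [8.0, 2.0, 2.0]],
    weights=[0.5, 0.5],
    rng=0,
)
print(X.shape)  # every row is a composition on the 3-part simplex
```

Depending on the component parameters, such a mixture can be unimodal or multimodal, which is the flexibility gain the abstract refers to.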

2.
Mixtures of factor analyzers is a useful model-based clustering method that can avoid the curse of dimensionality in high-dimensional clustering. However, this approach is sensitive both to diverse non-normalities of the marginal variables and to outliers, which are commonly observed in multivariate experiments. We propose mixtures of Gaussian copula factor analyzers (MGCFA) for clustering high-dimensional data. This model has two advantages: (1) it allows different marginal distributions, giving the mixture model greater fitting flexibility; (2) it avoids the curse of dimensionality by embedding the factor-analytic structure in the component-correlation matrices of the mixture distribution. An EM algorithm is developed for fitting the MGCFA. The proposed method is free of the curse of dimensionality and allows any parametric marginal distribution that best fits the data. It is applied to both synthetic data and microarray gene expression data for clustering, and shows better performance than several existing methods.

3.
A folded-type model is developed for analysing compositional data. The proposed model involves an extension of the α‐transformation for compositional data and provides a new and flexible class of distributions for modelling data defined on the simplex sample space. Despite its seemingly complex structure, employment of the EM algorithm guarantees efficient parameter estimation. The model is validated through simulation studies and examples which illustrate that it captures the data structure better than the popular logistic normal distribution and can be advantageous over a similar model without folding.

4.
In this paper we show that fully likelihood-based estimation and comparison of multivariate stochastic volatility (SV) models can easily be performed via the freely available Bayesian software WinBUGS. Moreover, we introduce to the literature several new specifications that are natural extensions of certain existing models, one of which allows for time-varying correlation coefficients. The ideas are illustrated by fitting nine multivariate SV models to bivariate time series data of weekly exchange rates, including specifications with Granger causality in volatility, time-varying correlations, heavy-tailed error distributions, an additive factor structure, and a multiplicative factor structure. Empirical results suggest that the best specifications are those that allow for time-varying correlation coefficients.

6.
In mixture experiments, the properties of mixtures are usually studied by mixing the amounts of the mixture components required to obtain the necessary proportions. This paper considers the impact of inaccuracies in discharging the required amounts of the mixture components on the statistical analysis of the data. It shows how the regression calibration approach can be used to minimize the resulting bias in the model and in the estimates of the model parameters, as well as to obtain correct estimates of the corresponding variances. Its application is made difficult by the complex structure of these errors. We also show how knowledge of the form of the model bias allows one to choose a manufacturing setting for a mixture product that is unbiased and has a smaller signal-to-noise ratio.

7.
Colours and Cocktails: Compositional Data Analysis 2013 Lancaster Lecture
The different constituents of physical mixtures such as coloured paint, cocktails, geological and other samples can be represented by d‐dimensional vectors called compositions with non‐negative components that sum to one. Data in which the observations are compositions are called compositional data. There are a number of different ways of thinking about and consequently analysing compositional data. The log‐ratio methods proposed by Aitchison in the 1980s have become the dominant methods in the field. One reason for this is the development of normative arguments converting the properties of log‐ratio methods to ‘essential requirements’ or Principles for any method of analysis to satisfy. We discuss different ways of thinking about compositional data and interpret the development of the Principles in terms of these different viewpoints. We illustrate the properties on which the Principles are based, focussing particularly on the key subcompositional coherence property. We show that this Principle is based on implicit assumptions and beliefs that do not always hold. Moreover, it is applied selectively because it is not actually satisfied by the log‐ratio methods it is intended to justify. This implies that a more open statistical approach to compositional data analysis should be adopted.
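A minimal sketch of the log-ratio viewpoint discussed above (function names are illustrative, not from the lecture): the closure operation maps positive vectors to the simplex, and the centred log-ratio (clr) transform works with ratios, so ratios between parts survive the passage to a subcomposition.

```python
import numpy as np

def closure(x):
    """Rescale a positive vector so its parts sum to 1 (a composition)."""
    x = np.asarray(x, dtype=float)
    return x / x.sum()

def clr(x):
    """Centred log-ratio transform: log of each part relative to the
    geometric mean of all parts."""
    x = np.asarray(x, dtype=float)
    return np.log(x) - np.log(x).mean()

x = closure([10.0, 30.0, 60.0])
sub = closure(x[:2])  # subcomposition formed from the first two parts
print(np.isclose(sub[0] / sub[1], x[0] / x[1]))  # ratios are preserved
print(np.isclose(clr(x).sum(), 0.0))             # clr coordinates sum to zero
```

The first check is exactly the subcompositional coherence property the lecture scrutinizes: analyses built on ratios give the same answer whether or not parts are dropped.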

8.
For clustering mixed categorical and continuous data, Lawrence and Krzanowski (1996) proposed a finite mixture model in which component densities conform to the location model. In the graphical models literature the location model is known as the homogeneous Conditional Gaussian model. In this paper it is shown that their model is not identifiable without imposing additional restrictions. Specifically, for g groups and m locations, (g!)^(m−1) distinct sets of parameter values (not including permutations of the group mixing parameters) produce the same likelihood function. Excessive shrinkage of parameter estimates in a simulation experiment reported by Lawrence and Krzanowski (1996) is shown to be an artifact of the model's non-identifiability. Identifiable finite mixture models can be obtained by imposing restrictions on the conditional means of the continuous variables. These new identified models are assessed in simulation experiments. The conditional mean structure of the continuous variables in the restricted location mixture models is similar to that in the underlying variable mixture models proposed by Everitt (1988), but the restricted location mixture models are more computationally tractable.
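Taking the count (g!)^(m−1) from the abstract at face value, the size of the problem grows quickly with g and m (the helper below is hypothetical, not code from the paper):

```python
from math import factorial

def equivalent_parameter_sets(g, m):
    """Number of distinct parameter sets producing the same likelihood in
    the unrestricted location mixture model, per the abstract: (g!)^(m-1)."""
    return factorial(g) ** (m - 1)

# Even a modest model with 3 groups and 4 locations admits 216
# likelihood-equivalent solutions (6^3), beyond label permutations.
print(equivalent_parameter_sets(3, 4))  # 216
```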

9.
This article proposes a mixture double autoregressive model by introducing the flexibility of mixture models to the double autoregressive model, a conditional heteroscedastic model recently proposed in the literature. To make it more flexible, the mixing proportions are further assumed to be time-varying, and probabilistic properties including strict stationarity and higher-order moments are derived. Inference tools, including maximum likelihood estimation, an expectation–maximization (EM) algorithm for computing the estimator, and an information criterion for model selection, are carefully studied for the logistic mixture double autoregressive model, which has two components and is encountered more frequently in practice. Monte Carlo experiments give further support to the new models, and the analysis of an empirical example is also reported.

10.
The analysis of compositional data using the log-ratio approach is based on ratios between the compositional parts. Zeros in the parts thus cause serious difficulties for the analysis. This is a particular problem in the case of structural zeros, which cannot simply be replaced by a non-zero value, as is done, e.g., for values below the detection limit or missing values. Instead, zeros need to be incorporated into further statistical processing. The focus here is on exploratory tools for identifying outliers in compositional data sets with structural zeros. For this purpose, Mahalanobis distances are estimated, computed either directly for subcompositions determined by their zero patterns or by using imputation to improve the efficiency of the estimates, and the analysis then proceeds to the subcompositional and subgroup level. For this approach, new theory is formulated that allows covariances to be estimated for imputed compositional data and estimation to be carried out on subgroups using parts of this covariance matrix. Moreover, the zero-pattern structure is analyzed using principal component analysis for binary data to achieve a comprehensive view of the overall multivariate data structure. The proposed tools are applied to larger compositional data sets from official statistics, where the need for an appropriate treatment of zeros is obvious.
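A rough sketch of the Mahalanobis-distance step on log-ratio coordinates (using pivot ilr coordinates and the classical, not robust, covariance estimate; all names are illustrative and the paper's zero-pattern and imputation machinery is omitted):

```python
import numpy as np

def ilr(X):
    """Pivot isometric log-ratio coordinates of zero-free compositions X."""
    X = np.asarray(X, dtype=float)
    D = X.shape[1]
    L = np.log(X)
    coords = []
    for j in range(D - 1):
        # log geometric mean of the remaining parts j+1, ..., D-1
        g = L[:, j + 1:].mean(axis=1)
        coords.append(np.sqrt((D - j - 1) / (D - j)) * (L[:, j] - g))
    return np.column_stack(coords)

def mahalanobis_sq(Z):
    """Squared Mahalanobis distances of the rows of Z from their mean."""
    d = Z - Z.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(Z, rowvar=False))
    return np.einsum("ij,jk,ik->i", d, cov_inv, d)

rng = np.random.default_rng(1)
X = rng.dirichlet([3.0, 4.0, 5.0], size=50)  # toy zero-free compositions
md2 = mahalanobis_sq(ilr(X))
# Large md2 values flag candidate outliers (e.g. against a chi-square cutoff)
print(md2.shape)
```

In the paper this is done per zero pattern (on subcompositions) or after imputation; the sketch only shows the zero-free core computation.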

11.
In this paper, we propose a quantile approach to the multi-index semiparametric model for an ordinal response variable. Permitting non-parametric transformation of the response, the proposed method achieves a root-n rate of convergence and has attractive robustness properties. Further, the proposed model allows additional indices to model the remaining correlations between the covariates and the residuals from the single index, considerably reducing the error variance and thus leading to more efficient prediction intervals (PIs). The utility of the model is demonstrated by estimating PIs for the functional status of the elderly based on data from the Second Longitudinal Study of Aging. It is shown that the proposed multi-index model provides significantly narrower PIs than competing models. Our approach can be applied to other areas in which the distribution of future observations must be predicted from ordinal response data.

12.
Longitudinal studies of a binary outcome are common in the health, social, and behavioral sciences. In general, a feature of random effects logistic regression models for longitudinal binary data is that the marginal functional form, when integrated over the distribution of the random effects, is no longer of logistic form. Recently, Wang and Louis (2003) proposed a random intercept model in the clustered binary data setting where the marginal model has a logistic form. An acknowledged limitation of their model is that it allows only a single random effect that varies from cluster to cluster. In this paper, we propose a modification of their model to handle longitudinal data, allowing separate, but correlated, random intercepts at each measurement occasion. The proposed model allows for a flexible correlation structure among the random intercepts, where the correlations can be interpreted in terms of Kendall's τ. For example, the marginal correlations among the repeated binary outcomes can decline with increasing time separation, while the model retains the property of having matching conditional and marginal logit link functions. Finally, the proposed method is used to analyze data from a longitudinal study designed to monitor cardiac abnormalities in children born to HIV-infected women.

13.
The hidden Markov model (HMM) provides an attractive framework for modeling long-term persistence in a variety of applications including pattern recognition. Unlike typical mixture models, hidden Markov states can represent heterogeneity in the data, and the model can be extended to the multivariate case using a hierarchical Bayesian approach. This article provides a nonparametric Bayesian modeling approach to the multi-site HMM by placing stick-breaking priors on each row of an infinite state transition matrix. This extension has many advantages over a parametric HMM. For example, it can provide more flexible information for identifying the structure of the HMM, such as the number of states, than a parametric HMM analysis. We use a simulation example and a real dataset to evaluate the proposed approach.
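The stick-breaking construction mentioned above can be sketched as a truncated draw of Dirichlet-process weights (parameter names are illustrative; in the article one such vector would prior each row of the infinite transition matrix):

```python
import numpy as np

def stick_breaking(alpha, k, rng=None):
    """Draw a truncated stick-breaking weight vector.

    alpha: concentration parameter of the underlying Dirichlet process
    k:     truncation level (number of sticks kept)
    """
    rng = np.random.default_rng(rng)
    betas = rng.beta(1.0, alpha, size=k)          # fractions broken off
    # length of stick remaining before each break: 1, (1-b0), (1-b0)(1-b1), ...
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - betas[:-1])])
    return betas * remaining

w = stick_breaking(alpha=2.0, k=25, rng=0)
print(w.sum())  # close to, and never above, 1 for a truncated draw
```

Smaller alpha concentrates mass on a few sticks (few effective HMM states); larger alpha spreads it out, which is how the model lets the data inform the number of states.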

14.
Biplots of compositional data
The singular value decomposition and its interpretation as a linear biplot have proved to be a powerful tool for analysing many forms of multivariate data. Here we adapt biplot methodology to the specific case of compositional data consisting of positive vectors each of which is constrained to have unit sum. These relative variation biplots have properties relating to the special features of compositional data: the study of ratios, subcompositions and models of compositional relationships. The methodology is applied to a data set consisting of six-part colour compositions in 22 abstract paintings, showing how the singular value decomposition can achieve an accurate biplot of the colour ratios and how possible models interrelating the colours can be diagnosed.
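A minimal sketch of relative variation biplot coordinates (double-centred log-transformed data followed by an SVD; this is a generic form under assumed scaling choices, not necessarily the authors' exact construction):

```python
import numpy as np

def relative_variation_biplot(X):
    """Observation (row) and part (column) coordinates for a relative
    variation biplot of compositions X via SVD of the double-centred logs."""
    L = np.log(X)
    Z = L - L.mean(axis=1, keepdims=True)  # centre rows: clr coordinates
    Z = Z - Z.mean(axis=0, keepdims=True)  # centre columns: remove part means
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    rows = U[:, :2] * s[:2]                # observations in principal coords
    cols = Vt[:2].T                        # parts in standard coords
    return rows, cols

rng = np.random.default_rng(0)
X = rng.dirichlet([2.0, 3.0, 4.0, 5.0], size=30)  # toy 4-part compositions
rows, cols = relative_variation_biplot(X)
print(rows.shape, cols.shape)
```

In such a biplot the link between two part markers represents the log-ratio of those parts, which is what makes the display natural for compositional data.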

15.
Compositional tables represent a continuous counterpart to the well-known contingency tables. Their cells contain quantitatively expressed relative contributions to a whole, carry exclusively relative information, and are commonly represented as proportions or percentages. The resulting factors, corresponding to the rows and columns of the table, can be inspected similarly to contingency tables, e.g. for their mutually independent behaviour. The nature of compositional tables requires a specific geometrical treatment, represented by the Aitchison geometry on the simplex. The properties of the Aitchison geometry allow a decomposition of the original table into its independent and interactive parts. Moreover, the specific case of 2×2 compositional tables allows the construction of easily interpretable orthonormal coordinates (resulting from the isometric log-ratio transformation) for the original table and its decompositions. Consequently, for a sample of compositional tables, both exploratory statistical analysis, such as graphical inspection of the independent and interactive parts, and statistical inference (odds-ratio-like testing of independence) can be performed. Theoretical advances of the presented approach are demonstrated using two economic applications.

16.
The majority of the existing literature on model-based clustering deals with symmetric components. In some cases, especially when dealing with skewed subpopulations, the estimate of the number of groups can be misleading: if symmetric components are assumed, more than one component is needed to describe an asymmetric group. Existing mixture models, based on multivariate normal and multivariate t distributions, fit symmetric distributions, i.e. symmetric clusters. In the present paper, we propose the use of finite mixtures of the normal inverse Gaussian distribution (and its multivariate extensions). Such finite mixture models start from a density that allows for skewness and fat tails, generalize the existing models, are tractable, and have desirable properties. We examine both the univariate case, to gain insight, and the multivariate case, which is more useful in real applications. EM-type algorithms are described for fitting the models. Real data examples are used to demonstrate the potential of the new model in comparison with existing ones.

17.
Current methods for testing the equality of conditional correlations of bivariate data on a third variable of interest (a covariate) are limited by the need to discretize the covariate when it is continuous. In this study, we propose a linear model approach for estimation and hypothesis testing of the Pearson correlation coefficient, where the correlation itself can be modeled as a function of continuous covariates. The restricted maximum likelihood method is applied for parameter estimation, and the corrected likelihood ratio test is performed for hypothesis testing. This approach allows for flexible and robust inference and prediction of the conditional correlations based on the linear model. Simulation studies show that the proposed method is statistically more powerful and more flexible in accommodating complex covariate patterns than existing methods. In addition, we illustrate the approach by analyzing the correlation between the physical component summary and the mental component summary of the MOS SF-36 form across a fair number of covariates in national survey data.

18.
We propose a semiparametric modeling approach for mixtures of symmetric distributions. The mixture model is built from a common symmetric density with different components arising through different location parameters. This structure ensures identifiability for mixture components, which is a key feature of the model as it allows applications to settings where primary interest is inference for the subpopulations comprising the mixture. We focus on the two-component mixture setting and develop a Bayesian model using parametric priors for the location parameters and for the mixture proportion, and a nonparametric prior probability model, based on Dirichlet process mixtures, for the random symmetric density. We present an approach to inference using Markov chain Monte Carlo posterior simulation. The performance of the model is studied with a simulation experiment and through analysis of a rainfall precipitation data set as well as with data on eruptions of the Old Faithful geyser.

19.
Integro-difference equations (IDEs) provide a flexible framework for dynamic modeling of spatio-temporal data. The choice of kernel in an IDE model relates directly to the underlying physical process modeled, and it can affect model fit and predictive accuracy. We introduce Bayesian non-parametric methods to the IDE literature as a means to allow flexibility in modeling the kernel. We propose a mixture of normal distributions for the IDE kernel, built from a spatial Dirichlet process for the mixing distribution, which can model kernels with shapes that change with location. This allows the IDE model to capture non-stationarity with respect to location and to reflect a changing physical process across the domain. We address computational concerns for inference that leverage the use of Hermite polynomials as a basis for the representation of the process and the IDE kernel, and incorporate Hamiltonian Markov chain Monte Carlo steps in the posterior simulation method. An example with synthetic data demonstrates that the model can successfully capture location-dependent dynamics. Moreover, using a data set of ozone pressure, we show that the spatial Dirichlet process mixture model outperforms several alternative models for the IDE kernel, including the state of the art in the IDE literature, that is, a Gaussian kernel with location-dependent parameters.

20.
The zero-truncated inverse Gaussian–Poisson model, obtained by first mixing the Poisson model, assuming its expected value has an inverse Gaussian distribution, and then truncating the model at zero, is very useful when modelling frequency count data. A Bayesian analysis based on this statistical model is implemented on the word frequency counts of various texts, and its validity is checked by exploring the posterior distribution of the Pearson errors and by implementing posterior predictive consistency checks. The analysis based on this model is useful because it allows one to use the posterior distribution of the model mixing density as an approximation of the posterior distribution of the density of the word frequencies of the author's vocabulary, which is useful for characterizing the style of that author. The posterior distributions of the expectation and of measures of the variability of that mixing distribution can be used to assess the size and diversity of the author's vocabulary. An alternative analysis is proposed based on the inverse Gaussian–zero-truncated Poisson mixture model, which is obtained by switching the order of the mixing and truncation stages. Even though this second model fits some of the word frequency data sets more accurately than the first, in practice the analysis based on it is not as useful because it does not allow one to estimate the word frequency distribution of the vocabulary.
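The zero-truncation step on its own is easy to write down (a sketch of the truncated Poisson PMF only, not the full inverse Gaussian–Poisson mixture; the function name is illustrative):

```python
from math import exp, factorial

def zt_poisson_pmf(k, lam):
    """PMF of a zero-truncated Poisson: P(K = k | K >= 1) for k = 1, 2, ...

    Dividing by 1 - P(K = 0) = 1 - exp(-lam) renormalises the untruncated
    probabilities over the positive integers.
    """
    if k < 1:
        return 0.0
    return (lam ** k * exp(-lam) / factorial(k)) / (1.0 - exp(-lam))

# The truncated probabilities sum to one over k >= 1
lam = 1.5
total = sum(zt_poisson_pmf(k, lam) for k in range(1, 100))
print(round(total, 6))
```

Truncation at zero reflects the data-collection mechanism here: a word appearing zero times in a text is simply never observed in the frequency counts.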


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号