Similar Literature
Found 20 similar documents (search time: 31 ms)
1.
A sequence of nested hypotheses is presented for the examination of the assumption of autoregressive covariance structure in, for example, a repeated measures experiment. These hypotheses arise naturally by specifying the joint density of the underlying vector random variable as a product of conditional densities and the density of a subset of the vector random variable. The tests for all but one of the nested hypotheses are well known procedures, namely analysis of variance F-tests and Bartlett's test of equality of variances. While the procedure is based on tests of hypotheses, it may be viewed as an exploratory tool which can lead to model identification. An example is presented to illustrate the method.
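For reference, the first-order autoregressive structure examined above assumes Cov(Y_i, Y_j) = σ²ρ^|i−j|. A minimal numerical sketch of that pattern (the function name and parameter defaults are illustrative, not from the paper):

```python
import numpy as np

def ar1_cov(p, sigma2=1.0, rho=0.5):
    """AR(1) covariance matrix: Cov(Y_i, Y_j) = sigma2 * rho**|i - j|."""
    idx = np.arange(p)
    return sigma2 * rho ** np.abs(idx[:, None] - idx[None, :])

S = ar1_cov(4, sigma2=2.0, rho=0.6)
```

Under this structure the covariance decays geometrically with the lag between measurement occasions, which is the pattern the nested hypotheses are designed to assess.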

2.
Mixed‐effects models for repeated measures (MMRM) analyses using the Kenward‐Roger method for adjusting standard errors and degrees of freedom in an “unstructured” (UN) covariance structure are increasingly becoming common in primary analyses for group comparisons in longitudinal clinical trials. We evaluate the performance of an MMRM‐UN analysis using the Kenward‐Roger method when the variance of outcome between treatment groups is unequal. In addition, we provide alternative approaches for valid inferences in the MMRM analysis framework. Two simulations are conducted in cases with (1) unequal variance but equal correlation between the treatment groups and (2) unequal variance and unequal correlation between the groups. Our results in the first simulation indicate that MMRM‐UN analysis using the Kenward‐Roger method based on a common covariance matrix for the groups yields notably poor coverage probability (CP) with confidence intervals for the treatment effect when both the variance and the sample size between the groups are disparate. In addition, even when the randomization ratio is 1:1, the CP will fall seriously below the nominal confidence level if a treatment group with a large dropout proportion has a larger variance. Mixed‐effects models for repeated measures analysis with the Mancl and DeRouen covariance estimator shows relatively better performance than the traditional MMRM‐UN analysis method. In the second simulation, the traditional MMRM‐UN analysis leads to bias of the treatment effect and yields notably poor CP. Mixed‐effects models for repeated measures analysis fitting separate UN covariance structures for each group provides an unbiased estimate of the treatment effect and an acceptable CP. 
Although it is frequently seen in applications, we do not recommend MMRM‐UN analysis using the Kenward‐Roger method based on a common covariance matrix for the treatment groups when heteroscedasticity between the groups is apparent in incomplete longitudinal data.

3.
Longitudinal imaging studies have moved to the forefront of medical research due to their ability to characterize spatio-temporal features of biological structures across the lifespan. Valid inference in longitudinal imaging requires enough flexibility of the covariance model to allow reasonable fidelity to the true pattern. On the other hand, the existence of computable estimates demands a parsimonious parameterization of the covariance structure. Separable (Kronecker product) covariance models provide one such parameterization in which the spatial and temporal covariances are modeled separately. However, evaluating the validity of this parameterization in high dimensions remains a challenge. Here we provide a scientifically informed approach to assessing the adequacy of separable (Kronecker product) covariance models when the number of observations is large relative to the number of independent sampling units (sample size). We address both the general case, in which unstructured matrices are considered for each covariance model, and the structured case, which assumes a particular structure for each model. For the structured case, we focus on the situation where the within-subject correlation is believed to decrease exponentially in time and space, as is common in longitudinal imaging studies. However, the provided framework equally applies to all covariance patterns used within the more general multivariate repeated measures context. Our approach provides useful guidance for high dimension, low-sample size data that preclude using standard likelihood-based tests. Longitudinal medical imaging data of caudate morphology in schizophrenia illustrate the approach's appeal.
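A separable model factorizes the full spatio-temporal covariance as the Kronecker product of a temporal and a spatial component. A small sketch with hypothetical dimensions (the specific matrices and sizes are illustrative only):

```python
import numpy as np

# Hypothetical components: 2 visits (temporal) and 3 locations (spatial)
temporal = np.array([[1.0, 0.7],
                     [0.7, 1.0]])
spatial = np.array([[1.0, 0.4, 0.2],
                    [0.4, 1.0, 0.4],
                    [0.2, 0.4, 1.0]])

# Separable model: the full 6 x 6 covariance is the Kronecker product,
# built from the two small factors rather than estimated unstructured.
full = np.kron(temporal, spatial)
```

The parsimony is evident even at this toy scale: an unstructured 6 × 6 covariance has 21 free elements, while the separable model is determined by the elements of its two small factors.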

4.
Complex dependency structures are often conditionally modeled, where random effects parameters are used to specify the natural heterogeneity in the population. When interest is focused on the dependency structure, inferences can be made from a complex covariance matrix using a marginal modeling approach. In this marginal modeling framework, testing covariance parameters is not a boundary problem. Bayesian tests on covariance parameter(s) of the compound symmetry structure are proposed assuming multivariate normally distributed observations. Innovative proper prior distributions are introduced for the covariance components such that the positive definiteness of the (compound symmetry) covariance matrix is ensured. Furthermore, it is shown that the proposed priors on the covariance parameters lead to a balanced Bayes factor, in case of testing an inequality constrained hypothesis. As an illustration, the proposed Bayes factor is used for testing (non-)invariant intra-class correlations across different group types (public and Catholic schools), using the 1982 High School and Beyond survey data.
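The compound symmetry structure tested here has a common variance and a common covariance for all pairs of components. A minimal sketch of the structure and the positive-definiteness constraint that the priors must respect (the function name is illustrative, not from the paper):

```python
import numpy as np

def cs_cov(p, sigma2, rho):
    """Compound symmetry: sigma2 on the diagonal, sigma2*rho off it.
    Positive definite iff -1/(p - 1) < rho < 1."""
    return sigma2 * ((1.0 - rho) * np.eye(p) + rho * np.ones((p, p)))

S = cs_cov(5, sigma2=2.0, rho=0.3)
eigvals = np.linalg.eigvalsh(S)
```

The eigenvalues are σ²(1 − ρ) with multiplicity p − 1 and σ²(1 + (p − 1)ρ), which makes the lower bound −1/(p − 1) on the intra-class correlation explicit.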

5.
We investigate mixed analysis of covariance models for the 'one-step' assessment of conditional QT prolongation. Initially, we consider three different covariance structures for the data, where between-treatment covariance of repeated measures is modelled respectively through random effects, random coefficients, and through a combination of random effects and random coefficients. In all three of those models, an unstructured covariance pattern is used to model within-treatment covariance. In a fourth model, proposed earlier in the literature, between-treatment covariance is modelled through random coefficients but the residuals are assumed to be independent identically distributed (i.i.d.). Finally, we consider a mixed model with saturated covariance structure. We investigate the precision and robustness of those models by fitting them to a large group of real data sets from thorough QT studies. Our findings suggest: (i) Point estimates of treatment contrasts from all five models are similar. (ii) The random coefficients model with i.i.d. residuals is not robust; the model potentially leads to both under- and overestimation of standard errors of treatment contrasts and therefore cannot be recommended for the analysis of conditional QT prolongation. (iii) The combined random effects/random coefficients model does not always converge; in the cases where it converges, its precision is generally inferior to the other models considered. (iv) Both the random effects and the random coefficients model are robust. (v) The random effects, the random coefficients, and the saturated model have similar precision and all three models are suitable for the one-step assessment of conditional QT prolongation.

6.
Ante-dependence models can be used to model the covariance structure in problems involving repeated measures through time. They are conditional regression models which generalize Gabriel's constant-order ante-dependence model. Likelihood-based procedures are presented, together with simple expressions for likelihood ratio test statistics in terms of sums of squares from appropriate analyses of covariance. The estimation of the orders is approached as a model selection problem, and penalized likelihood criteria are suggested. Extensions of all the procedures discussed here to situations with a monotone pattern of missing data are presented.

7.
Efficient estimation of the regression coefficients in longitudinal data analysis requires a correct specification of the covariance structure. If misspecification occurs, it may lead to inefficient or biased estimators of the parameters in the mean. One of the most commonly used methods for handling the covariance matrix is based on simultaneous modeling of its Cholesky decomposition. In this paper, we therefore reparameterize covariance structures in longitudinal data analysis through a modified Cholesky decomposition of the within-subject covariance matrix. Under this decomposition, the within-subject covariance matrix factors into a unit lower triangular matrix involving moving average coefficients and a diagonal matrix involving innovation variances, which are modeled as linear functions of covariates. We then propose a fully Bayesian inference for joint mean and covariance models based on this decomposition. A computationally efficient Markov chain Monte Carlo method combining the Gibbs sampler and the Metropolis–Hastings algorithm is implemented to simultaneously obtain the Bayesian estimates of the unknown parameters, as well as their standard deviation estimates. Finally, several simulation studies and a real example are presented to illustrate the proposed methodology.
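The decomposition described above, Σ = L D Lᵀ with L unit lower triangular and D diagonal, can be obtained directly from the standard Cholesky factor. A minimal sketch (the function name is illustrative; this is the generic factorization, not the paper's full Bayesian machinery):

```python
import numpy as np

def modified_cholesky(Sigma):
    """Decompose Sigma = L @ D @ L.T, with L unit lower triangular
    (moving-average coefficients) and D diagonal (innovation variances)."""
    C = np.linalg.cholesky(Sigma)   # Sigma = C @ C.T
    d = np.diag(C) ** 2             # innovation variances
    L = C / np.diag(C)              # divide column j by C[j, j] -> unit diagonal
    return L, np.diag(d)

Sigma = np.array([[4.0, 2.0],
                  [2.0, 3.0]])
L, D = modified_cholesky(Sigma)
```

The appeal of this parameterization is that the below-diagonal entries of L and the (log) diagonal of D are unconstrained, so they can be modeled as linear functions of covariates without endangering positive definiteness.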

8.
The Liouville and Generalized Liouville families have been proposed as parametric models for data constrained to the simplex. These families have generated practical interest owing primarily to inadequacies, such as a completely negative covariance structure, that are inherent in the better-known Dirichlet class. Although there is some numerical evidence suggesting that the Liouville and Generalized Liouville families can produce completely positive and mixed covariance structures, no general paradigms have been developed. Research toward this end might naturally be focused on the many classical "positive dependence" concepts available in the literature, all of which imply a nonnegative covariance structure. However, in this article it is shown that no strictly positive distribution on the simplex can possess any of these classical dependence properties. The same result holds for Liouville and generalized Liouville distributions even if the condition of strict positivity is relaxed.

9.
We present a bivariate regression model for count data that allows for positive as well as negative correlation of the response variables. The covariance structure is based on the Sarmanov distribution and consists of a product of generalised Poisson marginals and a factor that depends on particular functions of the response variables. The closed form of the probability function is derived by means of the moment-generating function. The model is applied to a large real dataset on health care demand. Its performance is compared with alternative models presented in the literature. We find that our model is significantly better than or at least equivalent to the benchmark models. It gives insights into influences on the variance of the response variables.

10.
We propose an estimation method that incorporates the correlation/covariance structure between repeated measurements in covariate-adjusted regression models for distorted longitudinal data. In this distorted data setting, neither the longitudinal response nor (possibly time-varying) predictors are directly observable. The unobserved response and predictors are assumed to be distorted/contaminated by unknown functions of a common observable confounder. The proposed estimation methodology adjusts for the distortion effects both in estimation of the covariance structure and in the regression parameters using generalized least squares. The finite-sample performance of the proposed estimators is studied numerically by means of simulations. The consistency and convergence rates of the proposed estimators are also established. The proposed method is illustrated with an application to data from a longitudinal study of cognitive and social development in children.

11.
Scheffé’s mixed model, generalized for application to multivariate repeated measures, is known as the multivariate mixed model (MMM). The primary advantages of the MMM are (1) the minimum sample size required to conduct an analysis is smaller than for competing procedures and (2) for certain covariance structures, the MMM analysis is more powerful than its competitors. The primary disadvantage is that the MMM makes a very restrictive covariance assumption, namely multivariate sphericity. This paper shows, first, that even minor departures from multivariate sphericity inflate the size of MMM-based tests. Accordingly, MMM analyses, as computed in release 4.0 of SPSS MANOVA (SPSS Inc., 1990), cannot be recommended unless it is known that multivariate sphericity is satisfied. Second, it is shown that a new Box-type (Box, 1954) Δ-corrected MMM test adequately controls test size unless departure from multivariate sphericity is severe or the covariance matrix departs substantially from a multiplicative-Kronecker structure. Third, power functions of adjusted MMM tests for selected covariance and noncentrality structures are compared to those of doubly multivariate methods that do not require multivariate sphericity. Based on relative efficiency evaluations, the adjusted MMM analyses described in this paper can be recommended only when sample sizes are very small or there is reason to believe that multivariate sphericity is nearly satisfied. Neither the ε-adjusted analysis suggested in the SPSS MANOVA output (release 4.0) nor the adjusted analysis suggested by Boik (1988) can be recommended at all.

12.
To build a linear mixed effects model, one needs to specify the random effects and, often, the associated parametrized covariance matrix structure. Inappropriate specification of these structures can leave the covariance parameters of the model non-identifiable. Non-identifiability can result in extraordinarily wide confidence intervals and unreliable parameter inference. Software sometimes signals model non-identifiability, but not always: in our simulations fitting non-identifiable models, about half of the time the software output did not look abnormal. We derive necessary and sufficient conditions for covariance parameter identifiability that do not require any prior model fitting. The results are easy to implement and are applicable to commonly used covariance matrix structures.

13.
This paper proposes new classifiers, under the assumption of multivariate normality, for multivariate repeated measures data with Kronecker product covariance structures. These classifiers are especially effective when the number of observations is not large enough to estimate the covariance matrices, so that the traditional classifiers fail. A computational scheme for maximum likelihood estimation of the required class parameters is also given. The quality of these new classifiers is examined on some real data.

14.
The problem of whether seasonal decomposition should be used prior to or after aggregation of time series is quite old. We tackle the problem by using a state-space representation and the variance/covariance structure of a simplified one-component model. The variances of the estimated components in a two-series system are compared for direct and indirect approaches and also to a multivariate method. The covariance structure between the two time series is important for the relative efficiency. Indirect estimation is always best when the series are independent, but when the series or the measurement errors are negatively correlated, direct estimation may be much better in the above sense. Some covariance structures indicate that direct estimation should be used while others indicate that an indirect approach is more efficient. Signal-to-noise ratios and relative variances are used for inference.

15.
Traditionally, sphericity (i.e., independence and homoscedasticity for raw data) is put forward as the condition to be satisfied by the variance–covariance matrix of at least one of the two observation vectors analyzed for correlation, for the unmodified t test of significance to be valid under the Gaussian and constant population mean assumptions. In this article, the author proves that the sphericity condition is too strong and a weaker (i.e., more general) sufficient condition for valid unmodified t testing in correlation analysis is circularity (i.e., independence and homoscedasticity after linear transformation by orthonormal contrasts), to be satisfied by the variance–covariance matrix of one of the two observation vectors. Two other conditions (i.e., compound symmetry for one of the two observation vectors; absence of correlation between the components of one observation vector, combined with a particular pattern of joint heteroscedasticity in the two observation vectors) are also considered and discussed. When both observation vectors possess the same variance–covariance matrix up to a positive multiplicative constant, the circularity condition is shown to be necessary and sufficient. “Observation vectors” may designate partial realizations of temporal or spatial stochastic processes as well as profile vectors of repeated measures. From the proof, it follows that an effective sample size appropriately defined can measure the discrepancy from the more general sufficient condition for valid unmodified t testing in correlation analysis with autocorrelated and heteroscedastic sample data. The proof is complemented by a simulation study. 
Finally, the differences between the role of the circularity condition in the correlation analysis and its role in the repeated measures ANOVA (i.e., where it was first introduced) are scrutinized, and the link between the circular variance–covariance structure and the centering of observations with respect to the sample mean is emphasized.
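The circularity condition above requires C Σ Cᵀ = λI for an orthonormal contrast matrix C whose rows sum to zero. A sketch of a direct numerical check (the QR-based construction of the contrasts and the tolerance are illustrative choices, not from the article):

```python
import numpy as np

def is_circular(Sigma, tol=1e-8):
    """Check circularity: C @ Sigma @ C.T == lambda * I for an
    orthonormal contrast matrix C (rows orthonormal, summing to zero)."""
    p = Sigma.shape[0]
    # Orthonormal basis of the space orthogonal to the vector of ones:
    # QR of [ones | e_1 ... e_{p-1}]; columns 1.. of Q are the contrasts.
    ones = np.ones((p, 1)) / np.sqrt(p)
    Q, _ = np.linalg.qr(np.hstack([ones, np.eye(p)[:, :p - 1]]))
    C = Q[:, 1:].T                       # (p-1, p), rows sum to zero
    M = C @ Sigma @ C.T
    lam = np.trace(M) / (p - 1)
    return np.allclose(M, lam * np.eye(p - 1), atol=tol)
```

Compound symmetry implies circularity (the contrasts annihilate the common off-diagonal term, leaving C Σ Cᵀ = σ²(1 − ρ)I), whereas markedly unequal variances generally break the condition.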

16.
In this article we study the problem of classification of three-level multivariate data, where multiple q-variate observations are measured at u sites and over p time points, under the assumption of multivariate normality. The new classification rules with certain structured and unstructured mean vectors and covariance structures are very efficient in small-sample scenarios, when the number of observations is not adequate to estimate the unknown variance–covariance matrix. These classification rules successfully model the correlation structure on successive repeated measurements over time. Computational algorithms for maximum likelihood estimates of the unknown population parameters are presented. Simulation results show that the introduction of sites in the classification rules improves their performance over the existing classification rules without the sites.

17.
Longitudinal data frequently arise in various fields of applied science, where individuals are measured according to some ordered variable, e.g. time. A common approach to modelling such data is based on mixed models for repeated measures. This model provides an eminently flexible approach to modelling a wide range of mean and covariance structures. However, such models are forced into a rigidly defined class of mathematical formulas which may not be well supported by the data within the whole sequence of observations. A possible non-parametric alternative is the cubic smoothing spline, which is highly flexible and has useful smoothing properties. It can be shown that, under the normality assumption, the solution of the penalized log-likelihood equation is the cubic smoothing spline, and this solution can further be expressed as the solution of a linear mixed model. It is shown here how cubic smoothing splines can easily be used in the analysis of complete and balanced data. The analysis can be greatly simplified by using the unweighted estimator studied in the paper. It is shown that if the covariance structure of the random errors belongs to a certain class of matrices, the unweighted estimator is the solution to the penalized log-likelihood function. This result is new in the smoothing spline context and is not confined to growth curve settings. The connection to mixed models is used to develop a rough test of group profiles. Numerical examples are presented to illustrate the techniques proposed.

18.
Maximum likelihood estimation under constraints in the Wishart class of distributions is considered. It provides a unified approach to estimation in a variety of problems concerning covariance matrices. Virtually all covariance structures can be translated into constraints on the covariances. This includes covariance matrices with given structure, such as linearly patterned covariance matrices, covariance matrices with zeros, independent covariance matrices and structurally dependent covariance matrices. The methodology followed in this paper provides a useful and simple approach to directly obtaining the exact maximum likelihood estimates. These maximum likelihood estimates are obtained via an estimation procedure for the exponential class using constraints.

19.
Methods for linear regression with multivariate response variables are well described in statistical literature. In this study we conduct a theoretical evaluation of the expected squared prediction error in bivariate linear regression where one of the response variables contains missing data. We make the assumption of known covariance structure for the error terms. On this basis, we evaluate three well-known estimators: standard ordinary least squares, generalized least squares, and a James–Stein inspired estimator. Theoretical risk functions are worked out for all three estimators to evaluate under which circumstances it is advantageous to take the error covariance structure into account.

20.
We study the correlation structure for a mixture of ordinal and continuous repeated measures using a Bayesian approach. We assume a multivariate probit model for the ordinal variables and a normal linear regression for the continuous variables, where latent normal variables underlying the ordinal data are correlated with continuous variables in the model. Due to the probit model assumption, we are required to sample a covariance matrix with some of the diagonal elements equal to one. The key computational idea is to use parameter-extended data augmentation, which involves applying the Metropolis-Hastings algorithm to get a sample from the posterior distribution of the covariance matrix incorporating the relevant restrictions. The methodology is illustrated through a simulated example and through an application to data from the UCLA Brain Injury Research Center.
