Similar documents
20 similar documents found.
1.
The relationship between the mixed-model analysis and multivariate approach to a repeated measures design with multiple responses is presented. It is shown that by taking the trace of the appropriate submatrix of the hypothesis (error) sums of squares and crossproducts (SSCP) matrix obtained from the multivariate approach, one can get the hypothesis (error) SSCP matrix for the mixed-model analysis. Thus, when analyzing data from a multivariate repeated measures design, it is advantageous to use the multivariate approach because the result of the mixed-model analysis can also be obtained without additional computation.
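A minimal NumPy sketch of the mechanism (illustrative only, not the paper's derivation): the diagonal of an SSCP matrix holds the per-variable sums of squares, so its trace pools them into the quantity that a univariate mixed-model analysis accumulates.

    import numpy as np

    # Hypothetical data: 12 subjects by 4 repeated measures.
    rng = np.random.default_rng(0)
    Y = rng.normal(size=(12, 4))
    Yc = Y - Y.mean(axis=0)        # centre each column
    SSCP = Yc.T @ Yc               # 4 x 4 error SSCP matrix
    pooled_ss = np.trace(SSCP)     # trace = sum of the per-column sums of squares
    assert np.isclose(pooled_ss, (Yc ** 2).sum())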

2.
Despite tremendous effort on different designs with cross-sectional data, little research has been conducted on sample size calculation and power analysis under repeated measures designs. In addition to the time-averaged difference, the change in mean response over time (CIMROT) is a primary interest in repeated measures analysis. We generalized sample size calculation and power analysis equations for CIMROT to allow unequal sample sizes between groups for both continuous and binary measures, evaluated the performance of the proposed methods through simulation, and compared our approach with that of a two-stage model formulation. We also created a software procedure to implement the proposed methods.
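For orientation, the classical fixed-sample building block with unequal allocation (the standard two-sample mean comparison, not the paper's CIMROT equations) is, for allocation ratio \kappa = n_2/n_1, common variance \sigma^2, detectable difference \delta, two-sided level \alpha and power 1-\beta:

    n_1 = \frac{(z_{1-\alpha/2} + z_{1-\beta})^2 \, \sigma^2 \, (1 + 1/\kappa)}{\delta^2}, \qquad n_2 = \kappa\, n_1.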

3.
Summary.  Long-term experiments are commonly used tools in agronomy, soil science and other disciplines for comparing the effects of different treatment regimes over an extended length of time. Periodic measurements, typically annual, are taken on experimental units and are often analysed by using customary tools and models for repeated measures. These models contain nothing that accounts for the random environmental variations that typically affect all experimental units simultaneously and can alter treatment effects. This added variability can dominate that from all other sources and can adversely influence the results of a statistical analysis and interfere with its interpretation. The effect that this has on the standard repeated measures analysis is quantified by using an alternative model that allows for random variations over time. This model, however, is not useful for analysis because the random effects are confounded with fixed effects that are already in the repeated measures model. Possible solutions are reviewed and recommendations are made for improving statistical analysis and interpretation in the presence of these extra random variations.
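A hedged sketch of the issue, in notation assumed here rather than taken from the paper: a customary repeated measures model for treatment i, plot j and year k, say

    y_{ijk} = \mu + \tau_i + \gamma_k + (\tau\gamma)_{ik} + u_{ij} + \varepsilon_{ijk},

has no term for a random environmental effect common to all plots in year k; adding one, say e_k, makes it indistinguishable from the fixed year effect \gamma_k, which is the confounding of random and fixed effects described above.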

4.
In this paper, regressive models are proposed for modeling a sequence of transitions in longitudinal data. These models are employed to predict the future status of the outcome variable for individuals on the basis of their underlying background characteristics or risk factors. The estimation of parameters, together with estimates of conditional and unconditional probabilities, is shown for repeated measures. Goodness-of-fit tests based on the deviance and the Hosmer–Lemeshow procedures are extended to repeated measures. In addition, to assess the suitability of the proposed models for predicting disease status, the ROC curve approach is extended to repeated measures. The procedure is shown for conditional models of any order as well as for the unconditional model, to predict the outcome at the end of the study, and the corresponding test procedures are suggested. For testing differences between areas under the ROC curves in subsequent follow-ups, two test procedures are employed, one of which is based on a permutation test. An unconditional model is then built from the conditional models for the progression of depression among the elderly population in the USA, using longitudinal data from the Health and Retirement Survey. The illustration shows that the disease progression observed conditionally can be employed to predict the outcome, and that the selected variables and the previous outcomes can be utilized for predictive purposes. The results show that the percentage of correct predictions of disease is quite high and that the measures of sensitivity and specificity are also reasonably impressive. The extended measures of area under the ROC curve show that the models provide a reasonably good fit for predicting disease status over a long period of time. This procedure will have extensive applications in longitudinal data analysis where the objective is to obtain estimates of unconditional probabilities from a series of conditional transition models.
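In assumed notation (not necessarily the authors'), a first-order regressive logistic model of this kind conditions the current binary outcome on the previous one and on covariates,

    \operatorname{logit}\, P(Y_{ij} = 1 \mid y_{i,j-1}, \mathbf{x}_i) = \beta_0 + \beta_1 y_{i,j-1} + \mathbf{x}_i^{\top}\boldsymbol{\gamma},

and the unconditional probability of the outcome at the end of follow-up is obtained by summing the products of such conditional transition probabilities over the possible outcome paths.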

5.
Families of splitting criteria for classification trees
Several splitting criteria for binary classification trees are shown to be expressible as weighted sums of two values of divergence measures. This weighted-sum representation is then used to form two families of splitting criteria: one contains the chi-squared and entropy criteria, the other contains the mean posterior improvement criterion. Members of both families are shown to have the property of exclusive preference. Furthermore, the optimal splits based on the proposed families are studied, and the best splits are found to depend on the parameters of the families. The results reveal interesting differences among the various criteria, and examples are given to demonstrate the usefulness of both families.
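For orientation, the familiar impurity-reduction form of a binary splitting criterion (covering the entropy and Gini cases; the weighted-divergence representation studied in the paper is a rewriting of criteria of this kind) is

    \Delta i(s, t) = i(t) - p_L\, i(t_L) - p_R\, i(t_R),

where a split s sends proportions p_L and p_R of the cases in node t to the left and right child nodes t_L and t_R, and i(\cdot) is the impurity (e.g., entropy or the Gini index).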

6.
In this paper a unified approach is given to the distribution of scalar quadratic forms for dependent variables. Necessary and sufficient conditions are found for the sums of squares of the various hierarchical layers in ANOVA to be distributed as multiples of chi-square variables. Results concerning the usual univariate F-tests in ANOVA of repeated measurements are derived as a special case.
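A standard result of this type, stated here for the central nonsingular case only (the paper's conditions are more general): if y \sim N_p(0, \Sigma) with \Sigma positive definite and A symmetric, then

    y^{\top} A y \sim c\, \chi^2_k \quad \text{if and only if} \quad A \Sigma A = c\, A, \quad \text{with } k = \operatorname{rank}(A).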

7.
Models for repeated measures or growth curves consist of a mean response plus error, and the errors are usually correlated. Both maximum likelihood and residual maximum likelihood (REML) estimators of a regression model with dependent errors are derived for cases in which the variance matrix of the error model admits a convenient Cholesky factorisation. This factorisation may be linked to methods for producing recursive estimates of the regression parameters and recursive residuals, providing a convenient computational method. The method is used to develop a general approach to repeated measures analysis.
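A hedged sketch of the underlying computation (generic generalised least squares via a Cholesky factor, not the authors' recursive REML algorithm): with error covariance V = L L', pre-multiplying the model by the inverse of L whitens the errors, after which ordinary least squares applies.

    import numpy as np

    def gls_via_cholesky(y, X, V):
        # V = L @ L.T; solving against L whitens the response and design
        L = np.linalg.cholesky(V)
        y_star = np.linalg.solve(L, y)
        X_star = np.linalg.solve(L, X)
        beta, *_ = np.linalg.lstsq(X_star, y_star, rcond=None)
        return beta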

8.
A convenient recursive computational method for repeated measures analysis, provided by McGilchrist and Cullis (1990), has been extended by the authors to heterogeneous error structures and also to the repeated measures model with random coefficients. The approach is outlined briefly in this paper. A computing program for the approach has been written and used to obtain results for simulated data having various error structures. A summary of the results is given. The computing program together with some subroutines is available from the authors.

9.
Traditionally, sphericity (i.e., independence and homoscedasticity for raw data) is put forward as the condition to be satisfied by the variance–covariance matrix of at least one of the two observation vectors analyzed for correlation, for the unmodified t test of significance to be valid under the Gaussian and constant population mean assumptions. In this article, the author proves that the sphericity condition is too strong and a weaker (i.e., more general) sufficient condition for valid unmodified t testing in correlation analysis is circularity (i.e., independence and homoscedasticity after linear transformation by orthonormal contrasts), to be satisfied by the variance–covariance matrix of one of the two observation vectors. Two other conditions (i.e., compound symmetry for one of the two observation vectors; absence of correlation between the components of one observation vector, combined with a particular pattern of joint heteroscedasticity in the two observation vectors) are also considered and discussed. When both observation vectors possess the same variance–covariance matrix up to a positive multiplicative constant, the circularity condition is shown to be necessary and sufficient. “Observation vectors” may designate partial realizations of temporal or spatial stochastic processes as well as profile vectors of repeated measures. From the proof, it follows that an effective sample size appropriately defined can measure the discrepancy from the more general sufficient condition for valid unmodified t testing in correlation analysis with autocorrelated and heteroscedastic sample data. The proof is complemented by a simulation study. Finally, the differences between the role of the circularity condition in the correlation analysis and its role in the repeated measures ANOVA (i.e., where it was first introduced) are scrutinized, and the link between the circular variance–covariance structure and the centering of observations with respect to the sample mean is emphasized.
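In the notation assumed here, the two conditions contrasted in the article for a p \times p covariance matrix \Sigma are sphericity of the raw data, \Sigma = \sigma^2 I_p, and the weaker circularity condition

    C^{\top} \Sigma\, C = \lambda I_{p-1},

where C is any p \times (p-1) matrix of orthonormal contrasts.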

10.
In this article a general result is derived that, along with a functional central limit theorem for a sequence of statistics, can be employed in developing a nonparametric repeated significance test with adaptive target sample size. This method is used in deriving a repeated significance test with adaptive target sample size for the shift model. The repeated significance test is based on a functional central limit theorem for a sequence of partial sums of truncated observations. Based on numerical results presented in this article one can conclude that this nonparametric sequential test performs quite well.

11.
The main difficulty in parametric analysis of longitudinal data lies in specifying the covariance structure. Several covariance structures, which usually reflect a single series of measurements collected over time, have been presented in the literature. However, there is a lack of literature on covariance structures designed for repeated measures specified by more than one repeated factor. In this paper a new, general method of modelling covariance structure, based on the Kronecker product of underlying factor-specific covariance profiles, is presented. The method has an attractive interpretation in terms of independent factor-specific contributions to the overall within-subject covariance structure and can be easily adapted to standard software.
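Under the separable structure described, with factor-specific covariance profiles \Sigma_1 (t \times t) and \Sigma_2 (r \times r) for the two repeated factors (symbols assumed here for illustration), the overall within-subject covariance matrix is the Kronecker product

    \Omega = \Sigma_1 \otimes \Sigma_2, \qquad \operatorname{Cov}(y_{ij}, y_{i'j'}) = [\Sigma_1]_{i i'}\, [\Sigma_2]_{j j'},

so each repeated factor contributes its own covariance profile multiplicatively.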

12.
In this article, small area estimation under a multivariate linear model for repeated measures data is considered. The proposed model borrows strength both across small areas and over time, and accounts for repeated surveys, grouped response units, and random effects variation. Estimation of model parameters is discussed within a likelihood-based approach. Predictors of random effects, of small area means across time points, and of per-group units are derived. A parametric bootstrap method is proposed for estimating the mean squared error of the predicted small area means. The results are supported by a simulation study.

13.
Summary.  Longitudinal population-based surveys are widely used in the health sciences to study patterns of change over time. In many of these data sets unique patient identifiers are not publicly available, making it impossible to link the repeated measures from the same individual directly. This poses a statistical challenge for making inferences about time trends because repeated measures from the same individual are likely to be positively correlated: although the time trend estimated under the naïve assumption of independence is unbiased, an unbiased estimate of its variance cannot be obtained without knowledge of the subject identifiers linking repeated measures over time. We propose a simple method for obtaining a conservative estimate of variability for making inferences about trends in proportions over time, ensuring that the type I error is no greater than the specified level. The proposed method is illustrated using longitudinal data on diabetes hospitalization proportions in South Carolina.
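The direction of the conservatism can be seen from a general identity (not the paper's specific estimator): for proportions \hat{p}_1 and \hat{p}_2 estimated at two time points,

    \operatorname{Var}(\hat{p}_2 - \hat{p}_1) = \operatorname{Var}(\hat{p}_1) + \operatorname{Var}(\hat{p}_2) - 2\operatorname{Cov}(\hat{p}_1, \hat{p}_2) \le \operatorname{Var}(\hat{p}_1) + \operatorname{Var}(\hat{p}_2)

whenever the covariance induced by repeated individuals is non-negative, so a variance computed as if the proportions were independent can only overstate the true variability of the contrast and keeps the type I error at or below the nominal level.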

14.
In most surveys, inference for domains poses a difficult problem because of data shortage. This paper presents a probability sampling theory approach to some common types of statistical analysis for domains of a surveyed population. Simple and multiple regression analysis, and analysis of ratios, are considered. Two new methods are constructed and explored that can improve substantially on the common method based on sample-weighted sums of squares and products. These new methods use auxiliary variables whose importance depends on the extent to which they succeed in explaining certain patterns in the regression residuals. The theoretical conclusions are supported by empirical results from Monte Carlo experiments.

15.
Binary data are commonly used as responses to assess the effects of independent variables in longitudinal factorial studies. Such effects can be assessed in terms of the rate difference (RD), the odds ratio (OR), or the rate ratio (RR). Traditionally, logistic regression is the recommended method, with statistical comparisons made in terms of the OR; statistical inference in terms of the RD and RR can then be derived using the delta method. However, this approach is hard to realize when repeated measures occur. To obtain statistical inference in longitudinal factorial studies, the current article shows that the mixed-effects model for repeated measures, logistic regression for repeated measures, log-transformed regression for repeated measures, and rank-based methods are all valid methods that lead to inference in terms of the RD, OR, and RR, respectively. Asymptotic linear relationships between the estimators of the regression coefficients of these models are derived when the weight (working covariance) matrix is an identity matrix. Conditions for the Wald-type tests to be asymptotically equivalent in these models are provided, and powers are compared using simulation studies. A phase III clinical trial is used to illustrate the investigated methods, with the corresponding SAS® code supplied.
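For reference, with group response proportions p_1 and p_2 the three effect measures compared in the article are

    \mathrm{RD} = p_1 - p_2, \qquad \mathrm{RR} = \frac{p_1}{p_2}, \qquad \mathrm{OR} = \frac{p_1/(1-p_1)}{p_2/(1-p_2)}.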

16.
The validity conditions for univariate or multivariate analyses of repeated measures are highly sensitive to the usual assumptions. In cancer experiments, the data are frequently heteroscedastic and strongly correlated over time, and standard analyses do not perform well. Alternative non-parametric approaches can contribute to an analysis of these longitudinal data. This paper describes a method for such situations, using the results from a comparative experiment in which tumour volume is evaluated over time. First, we apply the non-parametric approach proposed by Raz to construct a randomization F test for comparing treatments. A local polynomial fit is then conducted to estimate the growth curves and confidence intervals for each treatment. Finally, this technique is used to estimate the velocity of tumour growth.
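A minimal sketch of a randomization test of this kind (a generic between-group statistic on the measurements, not Raz's exact test statistic): the treatment labels are permuted repeatedly and the statistic recomputed to build its null distribution.

    import numpy as np

    def randomization_p_value(values, groups, n_perm=5000, seed=0):
        values, groups = np.asarray(values, float), np.asarray(groups)
        rng = np.random.default_rng(seed)
        def stat(g):
            # variance of the group means: large when treatments differ
            return np.var([values[g == k].mean() for k in np.unique(g)])
        observed = stat(groups)
        perms = np.array([stat(rng.permutation(groups)) for _ in range(n_perm)])
        return (1 + np.sum(perms >= observed)) / (n_perm + 1)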

17.
An elementary approach to the analysis of variance for balanced designs is sketched and illustrated with an analysis of a repeated measures design. The approach is based on a conceptually simple algorithm that computes the usual linear decomposition of the data by repeatedly calculating and removing averages for groups of observations corresponding to the sources of variation in the design. An interactive computer program, written in Applesoft Basic, is available for use in teaching the algorithm.
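A minimal Python sketch of the sweep-out idea, for a one-way layout only (the program described in the paper handles general balanced designs and is written in Applesoft Basic):

    import numpy as np

    def sweep_decomposition(y, groups):
        # Decompose y into grand mean + group effects + residual by
        # repeatedly computing and removing averages.
        y = np.asarray(y, dtype=float)
        groups = np.asarray(groups)
        grand = np.full_like(y, y.mean())        # remove the grand mean
        resid = y - grand
        effect = np.empty_like(y)
        for k in np.unique(groups):              # remove group averages of what is left
            effect[groups == k] = resid[groups == k].mean()
        resid = resid - effect
        # in a balanced design the component sums of squares add to the total
        return grand, effect, resid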

18.
Non-likelihood-based methods for repeated measures analysis of binary data in clinical trials can result in biased estimates of treatment effects and associated standard errors when the dropout process is not completely at random. We tested the utility of a multiple imputation approach in reducing these biases. Simulations were used to compare performance of multiple imputation with generalized estimating equations and restricted pseudo-likelihood in five representative clinical trial profiles for estimating (a) overall treatment effects and (b) treatment differences at the last scheduled visit. In clinical trials with moderate to high (40–60%) dropout rates with dropouts missing at random, multiple imputation led to less biased and more precise estimates of treatment differences for binary outcomes based on underlying continuous scores.

19.
This paper presents a unified method of influence analysis to deal with random effects appearing in additive nonlinear regression models for repeated measurement data. The basic idea is to apply the Q-function, the conditional expectation of the complete-data log-likelihood function obtained from the EM algorithm, in place of the observed-data log-likelihood function used in standard influence analysis. Diagnostic measures are derived based on the case-deletion approach and the local influence approach. Two real examples and a simulation study are examined to illustrate our methodology.
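In the notation assumed here, the Q-function on which such diagnostics are built is the EM quantity

    Q(\theta \mid \hat{\theta}) = E\bigl[\, \ell_c(\theta;\, y_{\mathrm{obs}}, b) \mid y_{\mathrm{obs}}, \hat{\theta} \,\bigr],

the conditional expectation of the complete-data log-likelihood (with the random effects b treated as missing data) given the observed data and the current estimate \hat{\theta}; case-deletion and local influence measures are then computed from Q in place of the observed-data log-likelihood.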

20.
Random effect models have often been used in longitudinal data analysis since they allow for association among repeated measurements due to unobserved heterogeneity. Various approaches have been proposed to extend mixed models for repeated count data to include dependence on baseline counts. Dependence between baseline counts and individual-specific random effects results in a complex form of the (conditional) likelihood. An approximate solution can be achieved by ignoring this dependence, but this approach could result in biased parameter estimates and incorrect inferences. We propose a computationally feasible approach to overcome this problem, leaving the random effect distribution unspecified. In this context, we show how the EM algorithm for nonparametric maximum likelihood (NPML) can be extended to deal with dependence of repeated measures on baseline counts.
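In NPML estimation the unspecified random-effect distribution is represented by a discrete distribution on K mass points, so the marginal likelihood (in assumed notation) has the finite-mixture form

    L(\theta) = \prod_{i} \sum_{k=1}^{K} \pi_k\, f(y_i \mid b_k, \theta),

with locations b_k and masses \pi_k updated within the EM iterations alongside the other parameters.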
