Similar Articles
20 similar articles found
1.
The occurrence of missing data is an often unavoidable consequence of repeated measures studies. Fortunately, multivariate general linear models such as growth curve models and linear mixed models with random effects have been well developed to analyze incomplete normally-distributed repeated measures data. Most statistical methods have assumed that the missing data occur at random. This assumption may include two types of missing data mechanism: missing completely at random (MCAR) and missing at random (MAR) in the sense of Rubin (1976). In this paper, we develop a test procedure for distinguishing these two types of missing data mechanism for incomplete normally-distributed repeated measures data. The proposed test is similar in spirit to the test of Park and Davis (1992). We derive the test for incomplete normally-distributed repeated measures data using linear mixed models, while Park and Davis (1992) derived the test for incomplete repeated categorical data in the framework of Grizzle, Starmer, and Koch (1969). The proposed procedure can be applied easily to any other multivariate general linear model which allows for missing data. The test is illustrated using the hip-replacement patient data from Crowder and Hand (1990).

2.
Rank tests are considered that compare t treatments in repeated measures designs. A statistic is given that contains as special cases several that have been proposed for this problem, including one that corresponds to the randomized block ANOVA statistic applied to the rank transformed data. Another statistic is proposed, having a null distribution holding under more general conditions, that is the rank transform of the Hotelling statistic for repeated measures. A statistic of this type is also given for data that are ordered categorical rather than fully ranked. Unlike the Friedman statistic, the statistics discussed in this article utilize a single ranking of the entire sample. Power calculations for an underlying normal distribution indicate that the rank transformed ANOVA test can be substantially more powerful than the Friedman test.
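The key distinction in this abstract is that the proposed statistics use a single ranking of the entire sample rather than within-block ranks. A minimal sketch of that idea, using hypothetical simulated data (not the article's statistics), ranks the whole n × t matrix once and applies the ordinary randomized-block ANOVA to the ranks, with the Friedman test shown for contrast:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, t = 12, 3  # subjects x treatments (illustrative sizes)
data = rng.normal(size=(n, t)) + np.array([0.0, 0.5, 1.0])  # shifted treatments

# Friedman test: ranks are taken separately within each subject (block).
friedman_stat, friedman_p = stats.friedmanchisquare(*data.T)

# Rank transform: a SINGLE ranking of the entire sample, then the usual
# randomized-block ANOVA F statistic computed on the ranks.
ranks = stats.rankdata(data).reshape(n, t)
grand = ranks.mean()
ss_treat = n * ((ranks.mean(axis=0) - grand) ** 2).sum()
ss_block = t * ((ranks.mean(axis=1) - grand) ** 2).sum()
ss_total = ((ranks - grand) ** 2).sum()
ss_err = ss_total - ss_treat - ss_block
f_stat = (ss_treat / (t - 1)) / (ss_err / ((n - 1) * (t - 1)))
p_val = stats.f.sf(f_stat, t - 1, (n - 1) * (t - 1))
```

The F reference distribution here is the usual rank-transform approximation; the article's Hotelling-type statistic is a different (more robust) construction.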

3.
In this paper, we discuss the derivation of the first and second moments for the proposed small area estimators under a multivariate linear model for repeated measures data. The aim is to use these moments to estimate the mean-squared errors (MSE) for the predicted small area means as a measure of precision. At the first stage, we derive the MSE when the covariance matrices are known. At the second stage, a method based on parametric bootstrap is proposed for bias correction and for prediction error that reflects the uncertainty when the unknown covariance is replaced by its suitable estimator.

4.
Four nonparametric test statistics for the change-point problem with repeated measures data are proposed. In a Monte Carlo simulation study, critical values for the proposed test statistics are simulated and the performances of the proposed tests are compared with the performances of some competitive tests in terms of asymptotic behavior and power. We provide appropriate recommendations for different occurrences of the change-point and illustrate the testing methods using a set of real data.

5.
In biomedical and public health research, both repeated measures of biomarkers Y as well as times T to key clinical events are often collected for a subject. The scientific question is how the distribution of the responses [T, Y | X] changes with covariates X. [T | X] may be the focus of the estimation where Y can be used as a surrogate for T. Alternatively, T may be the time to drop-out in a study in which [Y | X] is the target for estimation. Also, the focus of a study might be on the effects of covariates X on both T and Y or on some underlying latent variable which is thought to be manifested in the observable outcomes. In this paper, we present a general model for the joint analysis of [T, Y | X] and apply the model to estimate [T | X] and other related functionals by using the relevant information in both T and Y. We adopt a latent variable formulation like that of Fawcett and Thomas and use it to estimate several quantities of clinical relevance to determine the efficacy of a treatment in a clinical trial setting. We use a Markov chain Monte Carlo algorithm to estimate the model's parameters. We illustrate the methodology with an analysis of data from a clinical trial comparing risperidone with a placebo for the treatment of schizophrenia.

6.
In this paper, we consider a nonparametric test procedure for multivariate data with grouped components under the two-sample problem setting. For the construction of the test statistic, we use linear rank statistics which were derived by applying the likelihood ratio principle for each component. For the null distribution of the test statistic, we apply the permutation principle for small or moderate sample sizes and derive the limiting distribution for the large sample case. Also, we illustrate our test procedure with an example and compare it with other procedures through a simulation study. Finally, we discuss some additional interesting features as concluding remarks.

7.
The asymptotic efficiencies are computed for several popular two-sample rank tests when the underlying distributions are Poisson, binomial, discrete uniform, and negative binomial. The rank tests examined include the Mann-Whitney test, the van der Waerden test, and the median test. Three methods for handling ties are discussed and compared. The computed asymptotic efficiencies apply also to the k-sample extensions of the above tests, such as the Kruskal-Wallis test.
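For discrete distributions such as those above, ties occur with positive probability, so the tie-handling method matters. The sketch below is a hypothetical finite-sample analogue of the abstract's asymptotic comparison: a small Monte Carlo estimate of the power of the Mann-Whitney test (scipy's implementation uses midranks for ties) against the two-sample t-test under shifted Poisson alternatives. The sample sizes and rates are illustrative choices, not taken from the article:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, reps, alpha = 30, 1000, 0.05
rej_mw = rej_t = 0
for _ in range(reps):
    x = rng.poisson(3.0, n)  # discrete data -> many ties
    y = rng.poisson(4.0, n)  # shifted alternative
    # mannwhitneyu resolves ties by the average-rank (midrank) method
    _, p_mw = stats.mannwhitneyu(x, y, alternative="two-sided")
    _, p_t = stats.ttest_ind(x, y)
    rej_mw += p_mw < alpha
    rej_t += p_t < alpha
power_mw, power_t = rej_mw / reps, rej_t / reps
```

Empirical powers like these approach the asymptotic relative efficiencies the article computes as n grows.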

8.
Ruiqin Tian, Statistics, 2017, 51(5): 988–1005
In this paper, empirical likelihood inference for longitudinal data within the framework of partial linear regression models is investigated. The proposed procedures take into consideration the correlation within groups without involving direct estimation of nuisance parameters in the correlation matrix. The empirical likelihood method is used to estimate the regression coefficients and the baseline function, and to construct confidence intervals. A nonparametric version of Wilks' theorem for the limiting distribution of the empirical likelihood ratio is derived. Compared with methods based on normal approximations, the empirical likelihood does not require consistent estimators for the asymptotic variance and bias. The finite sample behaviour of the proposed method is evaluated with simulation and illustrated with an AIDS clinical trial data set.

9.
The main difficulty in parametric analysis of longitudinal data lies in specifying the covariance structure. Several covariance structures, which usually reflect one series of measurements collected over time, have been presented in the literature. However, there is a lack of literature on covariance structures designed for repeated measures specified by more than one repeated factor. In this paper a new, general method of modelling covariance structure based on the Kronecker product of underlying factor specific covariance profiles is presented. The method has an attractive interpretation in terms of independent factor specific contribution to overall within subject covariance structure and can be easily adapted to standard software.
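The Kronecker-product construction above is easy to sketch directly. The following example (with illustrative factor profiles, not the article's) builds an overall within-subject covariance from an AR(1)-type time profile and a compound-symmetry spatial profile; the Kronecker product of two positive definite matrices is again positive definite:

```python
import numpy as np

# Factor-specific covariance profiles (hypothetical choices):
def ar1_cov(n, rho, sigma2=1.0):
    """AR(1)-type profile: correlation decays geometrically with lag."""
    idx = np.arange(n)
    return sigma2 * rho ** np.abs(idx[:, None] - idx[None, :])

def cs_cov(n, rho, sigma2=1.0):
    """Compound symmetry: equal correlation between all pairs."""
    return sigma2 * ((1 - rho) * np.eye(n) + rho * np.ones((n, n)))

time_cov = ar1_cov(4, 0.6)   # 4 time points
space_cov = cs_cov(3, 0.3)   # 3 units of the second repeated factor

# Overall within-subject covariance: Kronecker product of the profiles,
# i.e. each factor contributes its covariance structure independently.
overall = np.kron(space_cov, time_cov)  # 12 x 12
```

Software that supports structured covariances (e.g. via separable "direct product" structures) fits exactly this kind of model.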

10.
Under the assumption of multivariate normality the likelihood ratio test is derived to test a hypothesis for Kronecker product structure on a covariance matrix in the context of multivariate repeated measures data. Although the proposed hypothesis testing can be computationally performed by indirect use of Proc Mixed of SAS, the Proc Mixed algorithm often fails to converge. We provide an alternative algorithm. The algorithm is illustrated with two real data sets. A simulation study is also conducted for the purpose of sample size consideration.

11.
In longitudinal studies, as repeated observations are made on the same individual the response variables will usually be correlated. In analyzing such data, this dependence must be taken into account to avoid misleading inferences. The focus of this paper is to apply a logistic marginal model with Markovian dependence proposed by Azzalini [A. Azzalini, Logistic regression for autocorrelated data with application to repeated measures, Biometrika 81 (1994) 767–775] to the study of the influence of time-dependent covariates on the marginal distribution of the binary response in serially correlated binary data. We have shown how to construct the model so that the covariates relate only to the mean value of the process, independent of the association parameters. After formulating the proposed model for repeated measures data, the same approach is applied to missing data. An application is provided to the diabetes mellitus data of registered patients at the Bangladesh Institute of Research and Rehabilitation in Diabetes, Endocrine and Metabolic Disorders (BIRDEM) in 1984, using both time stationary and time varying covariates.

12.
In recent years various sophisticated methods have been developed for the analysis of repeated measures, or longitudinal data. The more traditional approach, based on a normal likelihood function, has been shown to be unsatisfactory, in the sense of yielding asymptotically biased estimates when the covariance structure is misspecified. More recent methodology, based on generalized linear models and quasi-likelihood estimation, has gained widespread acceptance as 'generalized estimating equations'. However, this also has theoretical problems. In this paper a suggestion is made for improving the asymptotic behaviour of estimators by using the older approach, implemented via Gaussian estimation. The resulting estimating equations include the quasi-score function as one component, so the methodology proposed can be viewed as a combination of Gaussian estimation and generalized estimating equations which has a firmer asymptotic basis than either alone has.

13.
Longitudinal imaging studies have moved to the forefront of medical research due to their ability to characterize spatio-temporal features of biological structures across the lifespan. Valid inference in longitudinal imaging requires enough flexibility of the covariance model to allow reasonable fidelity to the true pattern. On the other hand, the existence of computable estimates demands a parsimonious parameterization of the covariance structure. Separable (Kronecker product) covariance models provide one such parameterization in which the spatial and temporal covariances are modeled separately. However, evaluating the validity of this parameterization in high dimensions remains a challenge. Here we provide a scientifically informed approach to assessing the adequacy of separable (Kronecker product) covariance models when the number of observations is large relative to the number of independent sampling units (sample size). We address both the general case, in which unstructured matrices are considered for each covariance model, and the structured case, which assumes a particular structure for each model. For the structured case, we focus on the situation where the within-subject correlation is believed to decrease exponentially in time and space as is common in longitudinal imaging studies. However, the provided framework equally applies to all covariance patterns used within the more general multivariate repeated measures context. Our approach provides useful guidance for high dimension, low sample size data that preclude using standard likelihood-based tests. Longitudinal medical imaging data of caudate morphology in schizophrenia illustrate the approach's appeal.

14.
In this article, we consider statistical inference for longitudinal partial linear models when the response variable is sometimes missing with missingness probability depending on the covariate that is measured with error. A generalized empirical likelihood (GEL) method is proposed by combining correction attenuation and quadratic inference functions. The method that takes into consideration the correlation within groups is used to estimate the regression coefficients. Furthermore, residual-adjusted empirical likelihood (EL) is employed for estimating the baseline function so that undersmoothing is avoided. The empirical log-likelihood ratios are proven to be asymptotically chi-squared, and the corresponding confidence regions for the parameters of interest are then constructed. Compared with methods based on normal approximations, the GEL does not require consistent estimators for the asymptotic variance and bias. The numerical study is conducted to compare the performance of the EL and the normal approximation-based method, and a real example is analysed.

15.
The exact inference and prediction intervals for the K-sample exponential scale parameter under doubly Type-II censored samples are derived using an algorithm of Huffer and Lin [Huffer, F.W. and Lin, C.T., 2001, Computing the joint distribution of general linear combinations of spacings or exponential variates. Statistica Sinica, 11, 1141–1157.]. This approach provides a simple way to determine the exact percentage points of the pivotal quantity based on the best linear unbiased estimator in order to develop exact inference for the scale parameter as well as to construct exact prediction intervals for failure times unobserved in the ith sample. Similarly, exact prediction intervals for failure times of units from a future sample can also be easily obtained.
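The article's setting (doubly Type-II censored, K samples, BLUE-based pivot) is more involved, but the underlying pivotal-quantity idea can be sketched in the simplest complete-sample, one-sample case: for i.i.d. exponential data with scale theta, 2*sum(X)/theta follows a chi-square distribution with 2n degrees of freedom, so inverting its percentage points gives an exact confidence interval. All numbers below are illustrative:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
theta = 5.0  # true scale (mean) of the exponential, for simulation only
n = 25
x = rng.exponential(theta, n)

# Pivotal quantity: 2*sum(X)/theta ~ chi-square(2n). An exact 95% CI for
# theta comes from inverting the chi-square percentage points.
s = x.sum()
alpha = 0.05
lo = 2 * s / stats.chi2.ppf(1 - alpha / 2, 2 * n)
hi = 2 * s / stats.chi2.ppf(alpha / 2, 2 * n)
```

In the article, the Huffer-Lin algorithm plays the role that the chi-square quantiles play here: it supplies exact percentage points for the (more complicated) pivot built from the best linear unbiased estimator under double censoring.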

16.
In this article, small area estimation under a multivariate linear model for repeated measures data is considered. The proposed model borrows strength both across small areas and over time. The model accounts for repeated surveys, grouped response units, and random effects variations. Estimation of model parameters is discussed within a likelihood based approach. Prediction of random effects, small area means across time points, and per group units are derived. A parametric bootstrap method is proposed for estimating the mean squared error of the predicted small area means. Results are supported by a simulation study.
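The parametric bootstrap step for MSE estimation can be sketched in a deliberately simplified toy model (independent normal areas with a common variance, rather than the article's multivariate repeated measures model): fit the model, resimulate data from the fit, refit, and average the squared differences between bootstrap predictors and the original fit. Everything here is a hypothetical illustration of the recipe, not the article's estimator:

```python
import numpy as np

rng = np.random.default_rng(3)
areas, n_per = 5, 10
true_means = np.array([1.0, 1.5, 2.0, 2.5, 3.0])  # illustrative only
y = true_means[:, None] + rng.normal(0.0, 1.0, (areas, n_per))

# Stage 1: fit the (toy) model -- area means and a pooled variance.
mu_hat = y.mean(axis=1)
resid = y - mu_hat[:, None]
sigma2_hat = (resid ** 2).sum() / (areas * (n_per - 1))

# Stage 2: parametric bootstrap -- simulate from the fitted model, refit,
# and average squared deviations to estimate the MSE of each predictor.
B = 500
mse_boot = np.zeros(areas)
for _ in range(B):
    y_b = mu_hat[:, None] + rng.normal(0.0, np.sqrt(sigma2_hat), (areas, n_per))
    mu_b = y_b.mean(axis=1)
    mse_boot += (mu_b - mu_hat) ** 2
mse_boot /= B

# In this toy model the analytic answer is Var(ybar_i) = sigma^2 / n,
# so the bootstrap estimate can be checked directly.
analytic = sigma2_hat / n_per
```

In the article's setting the refit step involves the full multivariate model and the bootstrap additionally supplies a bias correction; the toy model only shows the mechanics.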

17.
This paper considers the optimal design problem for multivariate mixed-effects logistic models with longitudinal data. A decomposition method of the binary outcome and the penalized quasi-likelihood are used to obtain the information matrix. The D-optimality criterion based on the approximate information matrix is minimized under different cost constraints. The results show that the autocorrelation coefficient plays a significant role in the design. To overcome the dependence of the D-optimal designs on the unknown fixed-effects parameters, the Bayesian D-optimality criterion is proposed. The relative efficiencies of designs reveal that both the cost ratio and autocorrelation coefficient play an important role in the optimal designs.

18.
When carrying out data analysis, a practitioner has to decide on a suitable test for hypothesis testing, and as such, would look for a test that has a high relative power. Tests for paired data are usually conducted using the t-test, Wilcoxon signed-rank test or the sign test. Some adaptive tests have also been suggested in the literature by O'Gorman, who found that no single member of that family performed well for all sample sizes and different tail weights, and hence recommended that the choice of a member of that family be made depending on both the sample size and the tail weight. In this paper, we propose a new adaptive test. Simulation studies for n=25 and n=50 show that it works well for nearly all tail weights, ranging from the light-tailed beta and uniform distributions to the t(4) distribution. More precisely, our test has both robustness of level (in keeping the empirical levels close to the nominal level) and efficiency of power. The results of our study contribute to the area of statistical inference.
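The general shape of an adaptive paired test is: measure a feature of the differences (here, tail weight) and let that measurement select the test. The selector below is a hypothetical illustration of that pattern, not O'Gorman's test or the article's proposal; the kurtosis threshold of 1.0 is an arbitrary choice for the sketch:

```python
import numpy as np
from scipy import stats

def adaptive_paired_test(x, y):
    """Illustrative adaptive selector (NOT the article's test): use the
    paired t-test for light-tailed differences and the Wilcoxon
    signed-rank test for heavy-tailed ones, judged by sample kurtosis."""
    d = np.asarray(x) - np.asarray(y)
    tail_weight = stats.kurtosis(d)  # excess kurtosis as a tail measure
    if tail_weight > 1.0:  # heavy tails -> switch to the rank-based test
        _, p = stats.wilcoxon(d)
        return "wilcoxon", p
    _, p = stats.ttest_rel(x, y)
    return "t-test", p

rng = np.random.default_rng(4)
x = rng.normal(1.0, 1.0, 25)
y = rng.normal(0.0, 1.0, 25)
name, p = adaptive_paired_test(x, y)
```

A real adaptive test must account for the selection step when establishing its level, which is exactly the robustness-of-level property the abstract verifies by simulation.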

19.
Models for fitting longitudinal binary responses are explored by using a panel study of voting intentions. A standard multilevel repeated measures logistic model is shown to be inadequate owing to a substantial proportion of respondents who maintain a constant response over time. A multivariate binary response model is shown to be a better fit to the data.

20.
We propose a method for saddlepoint approximating the distribution of estimators in single lag subset autoregressive models of order one. By viewing the estimator as the root of an appropriate estimating equation, the approach circumvents the difficulty inherent in more standard methods that require an explicit expression for the estimator to be available. Plots of the densities reveal that the distributions of the Burg and maximum likelihood estimators are nearly identical. We show that one possible reason for this is the fact that Burg enjoys the property of estimation equation optimality among a class of estimators expressible as a ratio of quadratic forms in normal random variables, which includes Yule–Walker and least squares. By inverting a two-sided hypothesis test, we show how small sample confidence intervals for the parameters can be constructed from the saddlepoint approximations. Simulation studies reveal that the resulting intervals generally outperform traditional ones based on asymptotics and have good robustness properties with respect to heavy-tailed and skewed innovations. The applicability of the models is illustrated by analyzing a longitudinal data set in a novel manner.
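For the AR(1) case, the "ratio of quadratic forms" class mentioned above is concrete: both the Yule–Walker and Burg estimators of the lag-one coefficient are such ratios, differing only in their denominators. A minimal sketch on hypothetical simulated data:

```python
import numpy as np

rng = np.random.default_rng(5)
n, phi = 200, 0.6
# Simulate an AR(1) series x_t = phi * x_{t-1} + e_t (illustrative values)
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal()

# Both estimators are ratios of quadratic forms in the data:
num = np.sum(x[1:] * x[:-1])
phi_yw = num / np.sum(x ** 2)                          # Yule-Walker
phi_burg = 2 * num / np.sum(x[1:] ** 2 + x[:-1] ** 2)  # Burg
```

The two denominators differ only by the end terms x_0^2 and x_{n-1}^2, which is why the estimates are typically very close; the article's saddlepoint densities make that near-identity (with the MLE as well) precise in small samples.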
