Similar Articles
20 similar articles retrieved (search time: 15 ms)
1.
Summary.  The paper investigates a Bayesian hierarchical model for the analysis of categorical longitudinal data from a large social survey of immigrants to Australia. Data for each subject are observed on three separate occasions, or waves, of the survey. One of the features of the data set is that observations for some variables are missing for at least one wave. A model for the employment status of immigrants is developed by introducing, at the first stage of a hierarchical model, a multinomial model for the response and then subsequent terms are introduced to explain wave and subject effects. To estimate the model, we use the Gibbs sampler, which allows missing data for both the response and the explanatory variables to be imputed at each iteration of the algorithm, given some appropriate prior distributions. After accounting for significant covariate effects in the model, results show that the relative probability of remaining unemployed diminished with time following arrival in Australia.

2.
Summary.  The main advantage of longitudinal studies is that they can distinguish changes over time within individuals (longitudinal effects) from differences between subjects at the start of the study (base-line characteristics; cross-sectional effects). Often, especially in observational studies, subjects are very heterogeneous at base-line, and one may want to correct for this when making inferences about the longitudinal trends. Three procedures for base-line correction are compared in the context of linear mixed models for continuous longitudinal data. All procedures are illustrated extensively by using data from a study of the relationship between the post-operative evolution of the functional status of elderly hip fracture patients and their preoperative neurocognitive status.

3.
We propose a robust estimation procedure for the analysis of longitudinal data including a hidden process to account for unobserved heterogeneity between subjects in a dynamic fashion. We show how to perform estimation by an expectation–maximization-type algorithm in the hidden Markov regression literature. We show that the proposed robust approaches work comparably to the maximum-likelihood estimator when there are no outliers and the error is normal and outperform it when there are outliers or the error is heavy tailed. A real data application is used to illustrate our proposal. We also provide details on a simple criterion to choose the number of hidden states.

4.
Functional linear models are useful in longitudinal data analysis. They include many classical and recently proposed statistical models for longitudinal data and other functional data. Recently, smoothing spline and kernel methods have been proposed for estimating their coefficient functions nonparametrically, but these methods are either computationally intensive or inefficient in performance. To overcome these drawbacks, this paper proposes a simple and powerful two-step alternative. In particular, the implementation of the proposed approach via local polynomial smoothing is discussed. Methods for estimating the standard deviations of the estimated coefficient functions are also proposed. Some asymptotic results for the local polynomial estimators are established. Two longitudinal data sets, one of which involves time-dependent covariates, are used to demonstrate the proposed approach. Simulation studies show that our two-step approach improves on the kernel method proposed by Hoover and co-workers in several respects, such as accuracy, computational time and visual appeal of the estimators.
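The core building block of such two-step estimators is an ordinary local polynomial smoother. As an illustration only (the paper's actual procedure is more elaborate), a minimal local linear smoother with a Gaussian kernel might look as follows; the function name `local_linear`, the test function and the bandwidth choice are ours, not the authors':

```python
import numpy as np

def local_linear(x, y, x0, h):
    """Local linear (degree-1 local polynomial) estimate at x0: weighted
    least squares of y on (x - x0) with Gaussian kernel bandwidth h."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)          # kernel weights
    X = np.column_stack([np.ones_like(x), x - x0])  # local design matrix
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return beta[0]                                  # intercept = fitted value at x0

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.1, x.size)   # noisy test curve
fit = np.array([local_linear(x, y, x0, h=0.05) for x0 in x])
```

Unlike a kernel (local constant) smoother, the local linear fit has no first-order boundary bias, which is one reason local polynomials are attractive for coefficient-function estimation.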

5.
The objective of this research was to demonstrate a framework for drawing inference from sensitivity analyses of incomplete longitudinal clinical trial data via a re‐analysis of data from a confirmatory clinical trial in depression. A likelihood‐based approach that assumed missing at random (MAR) was the primary analysis. Robustness to departure from MAR was assessed by comparing the primary result to those from a series of analyses that employed varying missing not at random (MNAR) assumptions (selection models, pattern mixture models and shared parameter models) and to MAR methods that used inclusive models. The key sensitivity analysis used multiple imputation assuming that after dropout the trajectory of drug‐treated patients was that of placebo‐treated patients with a similar outcome history (placebo multiple imputation). This result was used as the worst reasonable case to define the lower limit of plausible values for the treatment contrast. The endpoint contrast from the primary analysis was -2.79 (p = .013). In placebo multiple imputation, the result was -2.17. Results from the other sensitivity analyses ranged from -2.21 to -3.87 and were symmetrically distributed around the primary result. Hence, no clear evidence of bias from missing not at random data was found. In the worst reasonable case scenario, the treatment effect was 80% of the magnitude of the primary result. Therefore, it was concluded that a treatment effect existed. The structured sensitivity framework, in which a worst reasonable case result based on a controlled imputation approach with transparent and debatable assumptions is supplemented by a series of plausible alternative models under varying assumptions, was useful in this specific situation and holds promise as a generally useful framework. Copyright © 2012 John Wiley & Sons, Ltd.

6.
Although generalized linear mixed models are recognized to be of major practical importance, it is also known that they can be computationally demanding. The problem is the evaluation of the integral in calculating the marginalized likelihood. The straightforward method is the Gauss–Hermite technique, based on Gaussian quadrature points. Another approach is provided by the class of penalized quasi-likelihood methods. It is commonly believed that the Gauss–Hermite method works relatively well in simple situations but fails in more complicated structures. However, we present here a strikingly simple example of a logistic random-intercepts model in the context of a longitudinal clinical trial where the method gives valid results only for a high number of quadrature points (Q). As a consequence, this result warns the practitioner to examine routinely the dependence of the results on Q. The adaptive Gaussian quadrature, as implemented in the new SAS procedure NLMIXED, offered the solution to our problem. However, even the adaptive version of Gaussian quadrature needs careful handling to ensure convergence.

7.
In biomedical and public health research, both repeated measures of biomarkers Y and times T to key clinical events are often collected for a subject. The scientific question is how the distribution of the responses [T, Y | X] changes with covariates X. [T | X] may be the focus of the estimation, where Y can be used as a surrogate for T. Alternatively, T may be the time to drop-out in a study in which [Y | X] is the target for estimation. Also, the focus of a study might be on the effects of covariates X on both T and Y, or on some underlying latent variable which is thought to be manifested in the observable outcomes. In this paper, we present a general model for the joint analysis of [T, Y | X] and apply the model to estimate [T | X] and other related functionals by using the relevant information in both T and Y. We adopt a latent variable formulation like that of Fawcett and Thomas and use it to estimate several quantities of clinical relevance to determine the efficacy of a treatment in a clinical trial setting. We use a Markov chain Monte Carlo algorithm to estimate the model's parameters. We illustrate the methodology with an analysis of data from a clinical trial comparing risperidone with a placebo for the treatment of schizophrenia.

8.
Summary.  In many longitudinal studies, a subject's response profile is closely associated with his or her risk of experiencing a related event. Examples of such event risks include recurrence of disease, relapse, drop-out and non-compliance. When evaluating the effect of a treatment, it is sometimes of interest to consider the joint process consisting of both the response and the risk of an associated event. Motivated by a prevention of depression study among patients with malignant melanoma, we examine a joint model that incorporates the risk of discontinuation into the analysis of serial depression measures. We present a maximum likelihood estimator for the mean response and event risk vectors. We test hypotheses about functions of mean depression and withdrawal risk profiles from our joint model, predict depression from updated patient histories, characterize associations between components of the joint process and estimate the probability that a patient's depression and risk of withdrawal exceed specified levels. We illustrate the application of our joint model by using the depression data.

9.
For longitudinal data, the within-subject dependence structure and covariance parameters may be of practical and theoretical interest. The estimation of covariance parameters has received much attention and has been studied mainly in the framework of generalized estimating equations (GEEs). The GEE method, however, is sensitive to outliers. In this paper, an alternative set of robust generalized estimating equations for both the mean and covariance parameters is proposed in the partial linear model for longitudinal data. The asymptotic properties of the proposed estimators of the regression parameters, non-parametric function and covariance parameters are obtained. Simulation studies are conducted to evaluate the performance of the proposed estimators under different contaminations. The proposed method is illustrated with a real data analysis.

10.
Different longitudinal study designs require different statistical analysis methods and different methods of sample size determination. Statistical power analysis is a flexible approach to sample size determination for longitudinal studies. However, different statistical tests require different power analyses, reflecting the differences among the underlying statistical methods. In this paper, simulation-based power calculations of F-tests with Containment, Kenward-Roger or Satterthwaite approximations of the degrees of freedom are examined for sample size determination in the context of a special case of linear mixed models (LMMs), which is frequently used in the analysis of longitudinal data. Essentially, the roles of several factors on the statistical power of approximate F-tests in the LMM are examined together, which has not been considered previously: the variance–covariance structure of the random effects [unstructured UN or factor-analytic FA0], the autocorrelation structure among errors over time [independent IND, first-order autoregressive AR1 or first-order moving average MA1], the parameter estimation method [maximum likelihood ML or restricted maximum likelihood REML] and the iterative algorithm [ridge-stabilized Newton-Raphson or quasi-Newton]. The greatest factor affecting statistical power is found to be the variance–covariance structure of the random effects in the LMM. It appears that the simulation-based analysis in this study gives an interesting insight into the statistical power of approximate F-tests for fixed effects in LMMs for longitudinal data.
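The logic of simulation-based power calculation can be sketched independently of the LMM machinery: simulate many trials under the alternative, apply the test to each, and report the rejection rate. The toy example below deliberately substitutes a two-sample Welch-type test for the approximate F-tests of the abstract, so its numbers are not comparable to the paper's; the function name and all settings are ours:

```python
import numpy as np

def simulated_power(n_per_group, effect, sd, n_sims=2000, crit=1.98, seed=1):
    """Monte Carlo power: the fraction of simulated two-arm trials in which
    the absolute Welch t statistic exceeds an approximate two-sided 5%
    critical value (1.98 for ~126 degrees of freedom)."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_sims):
        a = rng.normal(0.0, sd, n_per_group)       # control arm
        b = rng.normal(effect, sd, n_per_group)    # treatment arm
        se = np.sqrt(a.var(ddof=1) / n_per_group + b.var(ddof=1) / n_per_group)
        if abs((b.mean() - a.mean()) / se) > crit:
            rejections += 1
    return rejections / n_sims

power = simulated_power(n_per_group=64, effect=0.5, sd=1.0)
```

For a standardized effect of 0.5 with 64 subjects per arm, analytic power is close to 0.80, so the simulated rate should land nearby; in the paper's setting the same loop would instead fit the LMM and apply the chosen degrees-of-freedom approximation at each iteration.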

11.
This paper considers a finite mixture model for longitudinal data, which can be used to study the dependency of the shape of the respective follow-up curves on treatments or other influential factors and to classify these curves. An EM algorithm to obtain the ML estimate of the model is given. The capabilities of the model are demonstrated using data from a clinical trial.
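As a hedged illustration of the EM idea, for a far simpler model than the longitudinal curve mixture in the paper, the following fits a two-component univariate Gaussian mixture by EM; the function name, initialization and all settings are ours:

```python
import numpy as np

def em_two_gaussians(x, n_iter=200):
    """EM for a two-component univariate Gaussian mixture: a toy stand-in
    for the EM fit of the finite mixture model for follow-up curves."""
    mu = np.array([x.min(), x.max()])   # deterministic, well-separated start
    sd = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point
        dens = np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
        resp = pi * dens
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: responsibility-weighted updates of proportions, means, sds
        nk = resp.sum(axis=0)
        pi = nk / x.size
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sd = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return pi, mu, sd

rng = np.random.default_rng(42)
x = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 1.0, 200)])
pi, mu, sd = em_two_gaussians(x)
```

The same E-step/M-step alternation carries over to the longitudinal case, with the per-point Gaussian density replaced by the density of a whole follow-up curve under each component.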

12.
Liang and Zeger (1986) proposed an extension of generalized linear models to the analysis of longitudinal data. In their formulation, a common dispersion parameter is assumed across observation times. However, this assumption is not expected to hold in most situations. Park (1993) proposed a simple extension of Liang and Zeger's formulation that allows a different dispersion parameter for each time point. The proposed model is easy to apply without heavy computation and is useful when the degree of over-dispersion varies over time. In this paper, we focus on evaluating the effect of the additional dispersion parameters on the estimators of the model parameters. Through a Monte Carlo simulation study, the efficiency of Park's method is compared with that of Liang and Zeger's method.
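A crude way to see why time-specific dispersion parameters matter is a Pearson-type moment estimate of the dispersion at each time point. The sketch below is our illustration, not Park's estimator; it assumes a Poisson-type variance function V(mu) = mu and generates over-dispersed counts via a gamma-Poisson mixture:

```python
import numpy as np

def dispersion_by_time(y, mu):
    """Pearson-type moment estimate of a separate dispersion parameter at
    each time point: phi_t = mean over subjects of (y - mu)^2 / mu, under a
    Poisson-type variance function V(mu) = mu."""
    return np.mean((y - mu) ** 2 / mu, axis=0)

rng = np.random.default_rng(7)
n, mu0 = 500, 5.0
phi_true = [1.0, 2.0, 4.0]                 # over-dispersion grows over time
cols = []
for phi in phi_true:
    if phi == 1.0:
        cols.append(rng.poisson(mu0, n))   # plain Poisson: phi = 1
    else:
        # gamma-Poisson mixture: Var = mu * (1 + scale) = phi * mu
        lam = rng.gamma(mu0 / (phi - 1.0), phi - 1.0, n)
        cols.append(rng.poisson(lam))
y = np.column_stack(cols)                  # subjects x time points
phi_hat = dispersion_by_time(y, mu0)
```

A single common dispersion parameter would average these time-specific values and thus misstate the variance at every time point, which is the inefficiency Park's extension addresses.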

13.
The occurrence of missing data is an often unavoidable consequence of repeated measures studies. Fortunately, multivariate general linear models such as growth curve models and linear mixed models with random effects have been well developed to analyze incomplete normally-distributed repeated measures data. Most statistical methods have assumed that the missing data occur at random. This assumption may include two types of missing data mechanism: missing completely at random (MCAR) and missing at random (MAR) in the sense of Rubin (1976). In this paper, we develop a test procedure for distinguishing these two types of missing data mechanism for incomplete normally-distributed repeated measures data. The proposed test is similar in spirit to the test of Park and Davis (1992). We derive the test for incomplete normally-distributed repeated measures data using linear mixed models, whereas Park and Davis (1992) derived their test for incomplete repeated categorical data in the framework of Grizzle, Starmer, and Koch (1969). The proposed procedure can be applied easily to any other multivariate general linear model which allows for missing data. The test is illustrated using the hip-replacement patient data from Crowder and Hand (1990).

14.
15.
A within-subject monotonicity index is proposed to summarize the follow-up for each subject in a longitudinal study where the response need only be ordinal. The individual measures can be weighted to detect different trend behaviors of interest. Asymptotically normal tests for single-group alternatives or for comparing the means of two groups are derived and illustrated with clinical data.
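One natural concrete form of such an index (our illustrative reading, not necessarily the authors' exact definition) is a weighted average of the signs of successive within-subject differences, which uses only the ordinal information in the response:

```python
import numpy as np

def monotonicity_index(y, weights=None):
    """Within-subject monotonicity index: weighted average of the signs of
    successive differences of an (at least ordinal) response.
    +1 = strictly increasing follow-up, -1 = strictly decreasing, 0 = no trend."""
    d = np.sign(np.diff(np.asarray(y, dtype=float)))
    w = np.ones_like(d) if weights is None else np.asarray(weights, dtype=float)
    return float(np.sum(w * d) / np.sum(w))

idx_up = monotonicity_index([1, 2, 3, 4])   # strictly increasing -> 1.0
idx_mix = monotonicity_index([1, 2, 1, 2])  # mixed trend -> 1/3
```

The `weights` argument mirrors the abstract's remark that individual measures can be weighted, e.g. to emphasize late-study changes over early ones.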

16.
It is now standard practice to replace missing data in longitudinal surveys with imputed values, but there is still much uncertainty about the best approach to adopt. Using data from a real survey, we compared different strategies combining multiple imputation and the chained equations method, the two main objectives being (1) to explore the impact of the explanatory variables in the chained regression equations and (2) to study the effect of imputation on causality between successive waves of the survey. Results were very stable from one simulation to another, and no systematic bias appeared. The critical points of the method lay in the proper choice of covariates and in respecting the temporal relation between the variables.

17.
18.
The potency of antiretroviral agents in AIDS clinical trials can be assessed on the basis of a viral response such as viral decay rate or change in viral load (number of HIV RNA copies in plasma). Linear, nonlinear, and nonparametric mixed-effects models have been proposed to estimate such parameters in viral dynamic models. However, there are two critical questions that stand out: whether these models achieve consistent estimates for viral decay rates, and which model is more appropriate for use in practice. Moreover, one often assumes that a model random error is normally distributed, but this assumption may be unrealistic, obscuring important features of within- and among-subject variations. In this article, we develop a skew-normal (SN) Bayesian linear mixed-effects (SN-BLME) model, an SN Bayesian nonlinear mixed-effects (SN-BNLME) model, and an SN Bayesian semiparametric nonlinear mixed-effects (SN-BSNLME) model that relax the normality assumption by considering model random error to have an SN distribution. We compare the performance of these SN models, and also compare their performance with the corresponding normal models. An AIDS dataset is used to test the proposed models and methods. It was found that there is a significant incongruity in the estimated viral decay rates. The results indicate that the SN-BSNLME model is preferred to the other models, implying that an arbitrary data truncation is not necessary. The findings also suggest that it is important to assume a model with an SN distribution in order to achieve reasonable results when the data exhibit skewness.
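The skew-normal distribution at the heart of these models can be simulated without any special library via its standard stochastic representation, X = delta·|Z1| + sqrt(1 - delta²)·Z2 with delta = alpha / sqrt(1 + alpha²). The sketch below is illustrative only; `rskewnorm` is our name, and the parameter values are not from the paper:

```python
import numpy as np

def rskewnorm(n, alpha, rng):
    """Draw n standard skew-normal(alpha) variates via the stochastic
    representation X = delta*|Z1| + sqrt(1 - delta^2)*Z2,
    where delta = alpha / sqrt(1 + alpha^2) controls the skewness."""
    delta = alpha / np.sqrt(1.0 + alpha ** 2)
    z1 = rng.normal(size=n)
    z2 = rng.normal(size=n)
    return delta * np.abs(z1) + np.sqrt(1.0 - delta ** 2) * z2

rng = np.random.default_rng(3)
x = rskewnorm(100_000, 5.0, rng)
# theoretical mean of a standard skew-normal(alpha) is delta * sqrt(2/pi)
```

Using such draws as the model random error, rather than symmetric normal draws, is exactly the relaxation the SN-BLME/SN-BNLME/SN-BSNLME family makes.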

19.
We consider approximate Bayesian inference about the quantity R = P[Y2 > Y1] when both random variables Y1 and Y2 have expectations that depend on certain explanatory variables. Our interest centers on certain characteristics of the posterior of R under Jeffreys's prior, such as its mean, variance and percentiles. Since the posterior of R is not available in closed form, several approximation procedures are introduced, and their relative performance is assessed using two real datasets.
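When a quantity such as R is not available in closed form, the simplest approximation is plain Monte Carlo. The sketch below evaluates R = P[Y2 > Y1] for fixed normal parameters; in the Bayesian setting of the abstract the draws would instead come from the posterior (predictive) distribution, so this is only a simplified illustration with names and values of our choosing:

```python
import numpy as np

def prob_y2_exceeds_y1(mu1, mu2, sd1, sd2, n=200_000, seed=0):
    """Monte Carlo approximation of R = P[Y2 > Y1] for independent normal
    responses: draw both variables and count how often Y2 exceeds Y1."""
    rng = np.random.default_rng(seed)
    y1 = rng.normal(mu1, sd1, n)
    y2 = rng.normal(mu2, sd2, n)
    return float(np.mean(y2 > y1))

R = prob_y2_exceeds_y1(0.0, 1.0, 1.0, 1.0)
```

For this normal case the exact value is Phi((mu2 - mu1) / sqrt(sd1^2 + sd2^2)) = Phi(1/sqrt(2)) ≈ 0.760, which the Monte Carlo estimate should reproduce to two or three decimals.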

20.
The last observation carried forward (LOCF) approach is commonly utilized to handle missing values in the primary analysis of clinical trials. However, recent evidence suggests that likelihood‐based analyses developed under the missing at random (MAR) framework are sensible alternatives. The objective of this study was to assess the Type I error rates from a likelihood‐based MAR approach – mixed‐model repeated measures (MMRM) – compared with LOCF when estimating treatment contrasts for mean change from baseline to endpoint (Δ). Data emulating neuropsychiatric clinical trials were simulated in a 4 × 4 factorial arrangement of scenarios, using four patterns of mean changes over time and four strategies for deleting data to generate subject dropout via an MAR mechanism. In data with no dropout, estimates of Δ and SEΔ from MMRM and LOCF were identical. In data with dropout, the Type I error rates (averaged across all scenarios) for MMRM and LOCF were 5.49% and 16.76%, respectively. In 11 of the 16 scenarios, the Type I error rate from MMRM was at least 1.00% closer to the expected rate of 5.00% than the corresponding rate from LOCF. In no scenario did LOCF yield a Type I error rate that was at least 1.00% closer to the expected rate than the corresponding rate from MMRM. The average estimate of SEΔ from MMRM was greater in data with dropout than in complete data, whereas the average estimate of SEΔ from LOCF was smaller in data with dropout than in complete data, suggesting that standard errors from MMRM better reflected the uncertainty in the data. The results from this investigation support those from previous studies, which found that MMRM provided reasonable control of Type I error even in the presence of MNAR missingness. No universally best approach to analysis of longitudinal data exists. However, likelihood‐based MAR approaches have been shown to perform well in a variety of situations and are a sensible alternative to the LOCF approach. 
MNAR methods can be used within a sensitivity analysis framework to test the potential presence and impact of MNAR data, thereby assessing robustness of results from an MAR method. Copyright © 2004 John Wiley & Sons, Ltd.
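For reference, the LOCF rule itself is a one-line idea: carry each subject's last observed value forward over subsequent missing visits. A minimal sketch (ours, not code from the study):

```python
import numpy as np

def locf(y):
    """Last observation carried forward along each row of a subjects x visits
    array: NaNs after a subject's last observed value are replaced by that
    value; NaNs before the first observation are left as NaN."""
    y = np.array(y, dtype=float)
    for i in range(y.shape[0]):
        last = np.nan
        for j in range(y.shape[1]):
            if np.isnan(y[i, j]):
                y[i, j] = last
            else:
                last = y[i, j]
    return y

nan = float("nan")
completed = locf([[10.0, 8.0, nan, nan],    # drops out after visit 2
                  [12.0, nan, 9.0, 7.0]])   # one intermittent missing visit
```

The comparison in the abstract hinges on exactly this behavior: LOCF freezes a dropout's trajectory at its last value, whereas MMRM models all observed data jointly and lets the standard error grow to reflect the missing information.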


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号