1.
Subject dropout is an inevitable problem in longitudinal studies. It makes the analysis challenging when the main interest is the change in outcome from baseline to the endpoint of the study. The last observation carried forward (LOCF) method is a very common approach for handling this problem. It assumes that the last measured outcome is frozen in time after the point of dropout, an unrealistic assumption in the presence of any time trend. Though the existence and direction of the bias can sometimes be anticipated, the more important statistical question involves the actual magnitude of the bias, and this requires computation. This paper provides explicit expressions for the exact bias in the LOCF estimates of mean change and its variance when the longitudinal data follow a linear mixed-effects model with linear time trajectories. General dropout patterns are considered that may depend on treatment group and subject-specific trajectories, and that may follow different time-to-dropout distributions. In our case studies, the magnitude of the bias in the mean change estimators increases linearly as time to dropout decreases. The bias depends heavily on the dropout interval. The variance term is always underestimated.
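The direction of the bias is easy to see numerically. The sketch below (illustrative parameters, not the paper's exact bias expressions) generates linear subject trajectories with a positive time trend, imposes random early dropout on half the subjects, and compares the LOCF estimate of mean change from baseline with the true value; freezing the last observation pulls the estimate toward zero.

```python
# A minimal sketch (illustrative parameters, not the paper's exact
# bias expressions): LOCF mean-change bias under a linear mixed model.
import numpy as np

rng = np.random.default_rng(0)
n, times = 2000, np.arange(5)            # visits at t = 0..4
beta0, beta1, sd_b1, sd_e = 10.0, 1.0, 0.3, 1.0

b1 = rng.normal(0, sd_b1, n)             # subject-specific slopes
y = beta0 + np.outer(beta1 + b1, times) + rng.normal(0, sd_e, (n, len(times)))

# Half the subjects drop out after a random intermediate visit.
drop = rng.random(n) < 0.5
last_visit = np.where(drop, rng.integers(1, len(times) - 1, n), len(times) - 1)

# LOCF endpoint: freeze the last observed value at the final visit.
endpoint_locf = y[np.arange(n), last_visit]
mean_change_locf = np.mean(endpoint_locf - y[:, 0])
true_mean_change = beta1 * times[-1]     # 4.0 here

print(f"true mean change : {true_mean_change:.2f}")
print(f"LOCF estimate    : {mean_change_locf:.2f}")  # biased toward 0
```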
2.
The topic of this paper was prompted by a study for which one of us was the statistician. It was submitted to Annals of Internal Medicine. The paper received positive reviewer comments; however, the statistical reviewer stated that for the analysis to be acceptable for publication, the missing data had to be accounted for in the analysis by carrying the baseline value forward as a last-observation-carried-forward imputation. We discuss the issues associated with this form of imputation and recommend that it should not be undertaken as a primary analysis.
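For concreteness, here is a minimal sketch of the baseline-carried-forward imputation at issue, on a hypothetical wide-format data set; the column names and values are assumptions for illustration only.

```python
# Hypothetical illustration of baseline-carried-forward imputation
# (the approach the paper argues against as a primary analysis).
import numpy as np
import pandas as pd

df = pd.DataFrame({"baseline": [5.0, 7.0, 6.0],
                   "endpoint": [3.0, np.nan, np.nan]})  # NaN = dropout

# Replace a missing endpoint with the subject's baseline value,
# i.e. assume dropouts experienced exactly zero change.
df["endpoint_bocf"] = df["endpoint"].fillna(df["baseline"])
print(df)
```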
3.
Wen Su, Journal of Applied Statistics, 2018, 45(11): 1978-1993
We focus on regression analysis of irregularly observed longitudinal data, which often arise in medical follow-up studies and observational investigations. The model for such data involves two processes: a longitudinal response process of interest and an observation process controlling the observation times. Previous work on analysing such longitudinal data imposed restrictive models and questionable assumptions, such as a Poisson assumption on the observation process and an independent censoring time. In this paper, we propose a more general model together with a robust estimation approach for longitudinal data with informative observation times and censoring times, and the asymptotic normality of the proposed estimators is established. Both simulation studies and a real data application indicate that the proposed method is promising.
4.
Longitudinal studies suffer from patient dropout. The dropout process may be informative if there exists an association between dropout patterns and the rate of change in the response over time. Multiple patterns are plausible in that different causes of dropout might contribute to different patterns. These multiple patterns can be dichotomized into two groups: quantitative and qualitative interaction. Quantitative interaction indicates that each of the multiple sources biases the estimate of the rate of change in the same direction, although with differing magnitudes. Alternatively, qualitative interaction results in the multiple sources biasing the estimate of the rate of change in opposing directions. Qualitative interaction is of special concern, since it is less likely to be detected by conventional methods and can lead to highly misleading slope estimates. We explore a test for qualitative interaction based on simultaneous confidence intervals. The test accommodates the realistic situation where reasons for dropout are not fully understood, or even entirely unknown. It allows for an additional level of clustering among participating subjects. We apply these methods to a study exploring tumor growth rates in mice, as well as to a longitudinal study of rates of change in cognitive functioning in Alzheimer's patients.
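The idea behind such a test can be sketched in a few lines: estimate the rate of change within each dropout-pattern group, form simultaneous confidence intervals (a Bonferroni adjustment is used below for simplicity), and flag qualitative interaction only when some intervals lie entirely above zero and others entirely below. The estimates, standard errors, and adjustment are illustrative assumptions, not the authors' exact procedure.

```python
# Illustrative sketch: flag qualitative interaction when simultaneous
# (Bonferroni) CIs for group-specific slopes sit on opposite sides of 0.
# Hypothetical estimates; not the authors' exact test.
import numpy as np
from scipy import stats

slopes = np.array([0.8, 0.5, -0.6])   # slope estimate per dropout pattern
ses    = np.array([0.2, 0.15, 0.2])   # standard errors
k = len(slopes)
z = stats.norm.ppf(1 - 0.05 / (2 * k))  # Bonferroni simultaneous level

lo, hi = slopes - z * ses, slopes + z * ses
some_positive = np.any(lo > 0)         # a CI entirely above zero
some_negative = np.any(hi < 0)         # a CI entirely below zero
print("qualitative interaction flagged:", some_positive and some_negative)
```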
5.
In this note, we highlight the fact that type I and type II error rates should not simply be set at traditional levels in the phase II clinical trial setting, but should take into account the relative success rate of previous trials in a given disease. For diseases in which it is rare that a new compound is active, we argue that more stringent type I error rates in the phase II setting may be more important than relaxed type II error rates. The paper is more of a 'thought experiment' on this topic, and specific clinical trial settings will require specific applications of this approach, in part because the real-world decision process for moving from phase II to phase III trials is more complex than our basic illustrative model.
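The argument can be made concrete with a simple Bayes calculation: when few compounds in a disease area are truly active, the probability that a positive phase II result reflects a genuinely active drug is driven largely by the type I error rate. The prior activity rate and power below are illustrative assumptions, not values from the paper.

```python
# Sketch: posterior probability a "positive" phase II trial is a true
# positive, as a function of alpha, when true activity is rare.
prior_active = 0.05           # rare that a new compound is active
power = 0.80                  # 1 - type II error rate

for alpha in (0.10, 0.05, 0.01):
    p_positive = prior_active * power + (1 - prior_active) * alpha
    ppv = prior_active * power / p_positive
    print(f"alpha={alpha:.2f}: P(active | positive) = {ppv:.2f}")
```

With these numbers, tightening alpha from 0.10 to 0.01 raises the probability that a positive result is a true positive from about 0.30 to about 0.81, which is the sense in which stringency on type I error matters more in rarely-active disease settings.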
6.
Lane P, Pharmaceutical Statistics, 2008, 7(2): 93-106
This study compares two methods for handling missing data in longitudinal trials: the last-observation-carried-forward (LOCF) method and a multivariate or mixed model for repeated measurements (MMRM). Using data sets simulated to match six actual trials, I imposed several drop-out mechanisms and compared the methods in terms of bias in the treatment difference and power of the treatment comparison. With equal drop-out in the Active and Placebo arms, LOCF generally underestimated the treatment effect; but with unequal drop-out, bias could be much larger and in either direction. In contrast, bias with the MMRM method was much smaller; and whereas MMRM rarely changed the power of the treatment comparison by more than 20%, LOCF did so in nearly half the simulations. Use of the LOCF method is therefore likely to misrepresent the results of a trial seriously, and so is not a good choice for primary analysis. In contrast, the MMRM method is unlikely to result in serious misinterpretation, unless the drop-out mechanism is missing not at random (MNAR) and drop-out is substantially unequal. Moreover, MMRM is clearly more reliable and better grounded statistically. Neither method can deal on its own with trials involving MNAR drop-out mechanisms, for which sensitivity analysis using more complex methods is needed.
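A compressed version of such a comparison can be sketched as follows: simulate a two-arm trial with unequal drop-out, then estimate the endpoint treatment effect once by LOCF and once by a mixed model (a random intercept-and-slope MixedLM is used here as a simple stand-in for a full MMRM with unstructured covariance). All parameters below are illustrative assumptions, not the simulated trials of the paper.

```python
# Sketch: two-arm trial with unequal drop-out; endpoint treatment
# effect estimated by LOCF and by a mixed model (MMRM stand-in).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_per_arm, times = 100, np.arange(4)

rows = []
for arm in (0, 1):                      # 0 = Placebo, 1 = Active
    for i in range(n_per_arm):
        slope = 1.0 + 0.5 * arm + rng.normal(0, 0.3)
        # Unequal drop-out: Placebo subjects drop out more often.
        last = rng.integers(1, 3) if rng.random() < (0.5 - 0.3 * arm) else 3
        for t in times[: last + 1]:
            rows.append((f"{arm}-{i}", arm, t,
                         slope * t + rng.normal(0, 1.0)))
df = pd.DataFrame(rows, columns=["id", "arm", "time", "y"])

# LOCF endpoint contrast: last observed value per subject.
last_obs = df.sort_values("time").groupby("id").tail(1)
locf_effect = (last_obs.loc[last_obs.arm == 1, "y"].mean()
               - last_obs.loc[last_obs.arm == 0, "y"].mean())

# Mixed-model estimate of the arm-by-time interaction (slope difference).
fit = smf.mixedlm("y ~ time * arm", df, groups=df["id"],
                  re_formula="~time").fit()
print(f"true endpoint effect : {0.5 * times[-1]:.2f}")
print(f"LOCF estimate        : {locf_effect:.2f}")
print(f"MMRM-style estimate  : {fit.params['time:arm'] * times[-1]:.2f}")
```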
7.
For any decision-making study, there are two sorts of errors that can be made: declaring a positive result when the truth is negative, and declaring a negative result when the truth is positive. Traditionally, the primary analysis of a study is a two-sided hypothesis test with the type I error rate set to 5%, and the study is designed to give suitably low type II error, typically 10% or 20%, to detect a given effect size. These values are standard, arbitrary and, other than the choice between 10% and 20%, do not reflect the context of the study, such as the relative costs of making type I and type II errors and the prior belief that the drug will be placebo-like. Several authors have challenged this paradigm, typically for the scenario where the planned analysis is frequentist. When resource is limited, there will always be a trade-off between the type I and type II error rates, and this article explores optimising this trade-off for a study with a planned Bayesian statistical analysis. This work provides a scientific basis for a discussion between stakeholders as to what type I and type II error rates may be appropriate, and some algebraic results for normally distributed data.
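A toy version of the trade-off for normally distributed data: with the design fixed, moving the critical value trades type I against type II error, and one can minimise a weighted expected cost that encodes the prior belief that the drug is placebo-like and the relative costs of the two errors. The weights, prior, and design values below are illustrative assumptions, not the paper's algebraic results.

```python
# Toy sketch (illustrative weights): pick the critical value that
# minimises  prior_null * cost_I * alpha + prior_alt * cost_II * beta
# for a one-sided z-test of a normal mean with known variance.
import numpy as np
from scipy import stats

effect, sd, n = 0.5, 1.0, 50          # assumed effect size and design
se = sd / np.sqrt(n)
prior_null, cost_I, cost_II = 0.7, 1.0, 1.0   # belief drug is placebo-like

crit = np.linspace(0.0, 4.0, 4001)    # candidate z critical values
alpha = 1 - stats.norm.cdf(crit)                # type I error rate
beta = stats.norm.cdf(crit - effect / se)       # type II error rate
cost = prior_null * cost_I * alpha + (1 - prior_null) * cost_II * beta

best = crit[np.argmin(cost)]
print(f"optimal critical value z = {best:.2f}")
print(f"implied alpha = {1 - stats.norm.cdf(best):.3f}, "
      f"beta = {stats.norm.cdf(best - effect / se):.3f}")
```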
8.
In previous work, non-response adjustments based on calibration weighting have been proposed for estimating gross flows in economic activity status from the quarterly Labour Force Survey. However, even after adjustment there may be residual non-response bias. The weighting is based on estimates of cross-sectional distributions and so cannot adjust for bias if non-response is associated with individual flows between quarters. To investigate this possibility, it was decided to apply models for estimating gross flows when non-response depends on the flows. This paper has two aims: first, to describe the many problems encountered when attempting to implement these models; and second, to outline a solution to the major problem that arose, namely, that comparing the model results directly with the weighting results was not possible. A simulation study was used to compare the results indirectly, and it was tentatively concluded that non-response is not strongly associated with the flows and that the weighting provides an adequate adjustment.
9.
In many prospective clinical and biomedical studies, longitudinal biomarkers are repeatedly measured as health indicators to evaluate disease progression while patients are followed up over a period of time. Patient visiting times can be referred to as informative observation times if they are assumed to carry information in addition to that of the longitudinal biomarker measures alone. Irregular visiting times may reflect compliance with physician instructions, disease progression and symptom severity. When follow-up may be stopped by competing terminal events, patient observation times may correlate with the competing terminal events themselves, making the observation process difficult to assess. To explicitly account for the impact of competing terminal events and dependent observation times on the longitudinal data analysis in such complex data, we propose a joint model using latent random effects to describe the association among them. A likelihood-based approach is derived for statistical inference. Extensive simulation studies reveal that the proposed approach performs well in practical situations, and an analysis of patients with chronic kidney disease from a cohort study is presented to illustrate the proposed method.
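The association structure is easiest to see in a data-generating mechanism. The simulation sketch below (all distributions and parameters are illustrative assumptions, not the authors' model) uses one shared latent random effect that shifts the biomarker level, scales the visit intensity, and scales the hazard of the terminal event, so the three processes are mutually dependent.

```python
# Illustrative data generation for a shared-latent-effect joint model:
# one random effect links the biomarker level, the visit intensity and
# the terminal-event hazard.  All parameters are assumptions.
import numpy as np

rng = np.random.default_rng(2)
tau = 5.0                                   # administrative follow-up end

def one_subject():
    b = rng.normal(0, 0.5)                  # shared latent effect
    death = rng.exponential(1 / (0.2 * np.exp(0.8 * b)))   # terminal event
    end = min(death, tau)
    # Visit times: Poisson process with a subject-specific rate.
    t, visits = 0.0, []
    while True:
        t += rng.exponential(1 / (2.0 * np.exp(0.5 * b)))
        if t > end:
            break
        visits.append(t)
    y = [1.0 + 0.3 * s + b + rng.normal(0, 0.5) for s in visits]  # biomarker
    return b, end, visits, y

b, end, visits, y = one_subject()
print(f"latent b={b:.2f}, follow-up ends at {end:.2f}, "
      f"{len(visits)} informative visits")
```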
10.
Chandan Saha, Michael P. Jones, Journal of the Royal Statistical Society, Series B (Statistical Methodology), 2005, 67(1): 167-182
In longitudinal studies, missingness of data is often an unavoidable problem. Estimators from the linear mixed effects model assume that missing data are missing at random; however, the estimators are biased when this assumption is not met. In this paper, theoretical results for the asymptotic bias are established under non-ignorable drop-out, drop-in and other missing data patterns. The asymptotic bias is large when the drop-out subjects have only one or no observation, especially for slope-related parameters of the linear mixed effects model. In the drop-in case, intercept-related parameter estimators show substantial asymptotic bias when subjects enter late in the study. Eight other missing data patterns are considered, and these produce asymptotic biases of a variety of magnitudes.
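The flavour of the slope result can be reproduced numerically: simulate a non-ignorable drop-out pattern in which subjects with steep latent slopes contribute only a baseline observation, fit a standard linear mixed-effects model (which assumes missingness at random), and observe the downward bias in the slope estimate. All parameters below are illustrative assumptions, not the paper's asymptotic derivations.

```python
# Illustrative sketch: non-ignorable drop-out (drop-out depends on the
# subject's latent slope) biases the slope estimate from a standard
# linear mixed-effects fit.  Parameters are assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n, times, beta1 = 300, np.arange(5), 1.0

rows = []
for i in range(n):
    b1 = rng.normal(0, 0.5)
    # Subjects with steep latent slopes keep only the baseline visit.
    last = 0 if b1 > 0.5 else len(times) - 1
    for t in times[: last + 1]:
        rows.append((i, t, (beta1 + b1) * t + rng.normal(0, 1.0)))
df = pd.DataFrame(rows, columns=["id", "time", "y"])

fit = smf.mixedlm("y ~ time", df, groups=df["id"], re_formula="~time").fit()
print(f"true slope      : {beta1:.2f}")
print(f"estimated slope : {fit.params['time']:.2f}")  # biased downward
```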
11.
Peter C. Austin, George Leckie, Journal of Statistical Computation and Simulation, 2018, 88(16): 3151-3163
When using multilevel regression models that incorporate cluster-specific random effects, the Wald and likelihood ratio (LR) tests are used to test the null hypothesis that the variance of the random effects distribution is equal to zero. We conducted a series of Monte Carlo simulations to examine the effect of the number of clusters and the number of subjects per cluster on the statistical power to detect a non-null random effects variance, and to compare the empirical type I error rates of the Wald and LR tests. Statistical power increased with increasing numbers of clusters and subjects per cluster. Statistical power was greater for the LR test than for the Wald test. These results applied to both linear and logistic regression models, but were more pronounced for the latter. The LR test is therefore preferable to the Wald test.
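For reference, here is a minimal sketch of the LR test in this setting on simulated clustered data (sizes and parameters are illustrative assumptions). Because the null hypothesis places the variance on the boundary of the parameter space, the naive chi-square(1) reference distribution is conservative; a common correction treats the statistic as a 50:50 mixture of chi-square(0) and chi-square(1), which halves the p-value.

```python
# Sketch: likelihood-ratio test that the random-intercept variance is 0,
# using simulated clustered data.  Both models are fitted by maximum
# likelihood (reml=False) so the deviances are comparable.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(4)
n_clusters, m = 30, 20
cluster = np.repeat(np.arange(n_clusters), m)
u = rng.normal(0, 0.5, n_clusters)           # true random effects
x = rng.normal(size=n_clusters * m)
y = 1.0 + 0.5 * x + u[cluster] + rng.normal(size=n_clusters * m)
df = pd.DataFrame({"y": y, "x": x, "cluster": cluster})

ml1 = smf.mixedlm("y ~ x", df, groups=df["cluster"]).fit(reml=False)
ml0 = smf.ols("y ~ x", df).fit()             # null model: variance = 0
lr = 2 * (ml1.llf - ml0.llf)
p = 0.5 * stats.chi2.sf(lr, 1)               # boundary-corrected p-value
print(f"LR statistic = {lr:.2f}, mixture p-value = {p:.4g}")
```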
12.
Review of guidelines and literature for handling missing data in longitudinal clinical trials with a case study
Missing data in clinical trials are inevitable. We highlight the ICH guidelines and the CPMP points to consider on missing data. Specifically, we outline how missing data issues should be considered when designing, planning and conducting studies so as to minimize their impact. We also go beyond the coverage of these two documents: we provide a more detailed review of the basic concepts of missing data and frequently used terminology, give examples of typical missing data mechanisms, and discuss technical details and literature for several frequently used statistical methods and associated software. Finally, we provide a case study in which the principles outlined in this paper are applied to one clinical program at the protocol design, data analysis plan and other stages of a clinical trial.