Similar Documents (20 results)
1.
The objective of this research was to demonstrate a framework for drawing inference from sensitivity analyses of incomplete longitudinal clinical trial data via a re-analysis of data from a confirmatory clinical trial in depression. A likelihood-based approach that assumed missing at random (MAR) was the primary analysis. Robustness to departure from MAR was assessed by comparing the primary result to those from a series of analyses that employed varying missing not at random (MNAR) assumptions (selection models, pattern mixture models and shared parameter models) and to MAR methods that used inclusive models. The key sensitivity analysis used multiple imputation assuming that after dropout the trajectory of drug-treated patients was that of placebo-treated patients with a similar outcome history (placebo multiple imputation). This result was used as the worst reasonable case to define the lower limit of plausible values for the treatment contrast. The endpoint contrast from the primary analysis was -2.79 (p = .013). In placebo multiple imputation, the result was -2.17. Results from the other sensitivity analyses ranged from -2.21 to -3.87 and were symmetrically distributed around the primary result. Hence, no clear evidence of bias from missing not at random data was found. In the worst reasonable case scenario, the treatment effect was 80% of the magnitude of the primary result. Therefore, it was concluded that a treatment effect existed. The structured sensitivity framework of using a worst reasonable case result based on a controlled imputation approach with transparent and debatable assumptions, supplemented by a series of plausible alternative models under varying assumptions, was useful in this specific situation and holds promise as a generally useful framework. Copyright © 2012 John Wiley & Sons, Ltd.
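The placebo multiple imputation idea lends itself to a compact sketch. The snippet below is a minimal illustration under assumptions of my own (simulated data, a single pre-dropout summary `last_obs`, a linear imputation model), not the paper's implementation; a proper analysis would also draw the imputation-model coefficients from their posterior and pool estimates and variances with Rubin's rules.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical trial: change from baseline at the last pre-dropout visit
# ("last_obs") and at endpoint; NaN endpoint marks dropout.
n = 200
arm = np.repeat(["placebo", "drug"], n // 2)
last_obs = rng.normal(-4, 3, n) + np.where(arm == "drug", -1.5, 0.0)
endpoint = last_obs + rng.normal(-2, 2, n) + np.where(arm == "drug", -1.0, 0.0)
endpoint[rng.random(n) < 0.25] = np.nan  # roughly 25% dropout

def placebo_mi_contrast(arm, last_obs, endpoint, n_imp=100):
    """Impute every missing endpoint from a model fitted on observed
    placebo data only, then average the drug-placebo contrast."""
    obs_pbo = ~np.isnan(endpoint) & (arm == "placebo")
    beta = np.polyfit(last_obs[obs_pbo], endpoint[obs_pbo], 1)
    resid_sd = np.std(endpoint[obs_pbo] - np.polyval(beta, last_obs[obs_pbo]))
    contrasts = []
    for _ in range(n_imp):
        y = endpoint.copy()
        miss = np.isnan(y)
        # Dropouts in BOTH arms are assumed to follow the placebo trajectory.
        y[miss] = np.polyval(beta, last_obs[miss]) + rng.normal(0, resid_sd, miss.sum())
        contrasts.append(y[arm == "drug"].mean() - y[arm == "placebo"].mean())
    return np.mean(contrasts)

print(f"placebo-MI treatment contrast: {placebo_mi_contrast(arm, last_obs, endpoint):.2f}")
```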

2.
Missing data in clinical trials is a well-known problem, and the classical statistical methods used can be overly simple. This case study shows how well-established missing data theory can be applied to efficacy data collected in a long-term open-label trial with a discontinuation rate of almost 50%. Satisfaction with treatment in chronically constipated patients was the efficacy measure assessed at baseline and every 3 months postbaseline. The improvement in treatment satisfaction from baseline was originally analyzed with a paired t-test, ignoring missing data and discarding the correlation structure of the longitudinal data. As the original analysis started from missing completely at random assumptions regarding the missing data process, the satisfaction data were re-examined, and several missing at random (MAR) and missing not at random (MNAR) techniques resulted in adjusted estimates for the improvement in satisfaction over 12 months. Throughout the different sensitivity analyses, the effect sizes remained significant and clinically relevant. Thus, even for an open-label trial design, sensitivity analysis, with different assumptions for the nature of dropouts (MAR or MNAR) and with different classes of models (selection, pattern-mixture, or multiple imputation models), has been found useful and provides evidence towards the robustness of the original analyses; additional sensitivity analyses could be undertaken to further qualify robustness. Copyright © 2012 John Wiley & Sons, Ltd.

3.
Pattern-mixture models provide a general and flexible framework for sensitivity analyses of nonignorable missing data in longitudinal studies. The placebo-based pattern-mixture model handles missing data in a transparent and clinically interpretable manner. We extend this model to include a sensitivity parameter that characterizes the gradual departure of the missing data mechanism from being missing at random toward being missing not at random under the standard placebo-based pattern-mixture model. We derive the treatment effect implied by the extended model. We propose to utilize the primary analysis based on a mixed-effects model for repeated measures to draw inference about the treatment effect under the extended placebo-based pattern-mixture model. We use simulation studies to confirm the validity of the proposed method. We apply the proposed method to a clinical study of major depressive disorders. Copyright © 2013 John Wiley & Sons, Ltd.
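To make the sensitivity parameter concrete, here is one plausible reading of it as an interpolation weight: delta = 0 reproduces the MAR (on-treatment) profile and delta = 1 the standard placebo-based pattern-mixture profile. The function and example values below are hypothetical, not taken from the paper.

```python
import numpy as np

def delta_adjusted_profile(mu_drug, mu_placebo, delta):
    """Post-dropout mean profile as an interpolation between the
    MAR/on-treatment prediction (delta=0) and the placebo-based
    pattern-mixture prediction (delta=1). A sketch of the sensitivity
    parameter idea, not the paper's exact formulation."""
    return (1.0 - delta) * np.asarray(mu_drug) + delta * np.asarray(mu_placebo)

# Hypothetical visit-wise mean changes after dropout for a drug-arm patient
mu_drug = [-3.0, -4.0, -5.0]      # what MAR would assume
mu_placebo = [-2.0, -2.5, -3.0]   # what the placebo-based model would assume

for delta in (0.0, 0.5, 1.0):
    print(delta, delta_adjusted_profile(mu_drug, mu_placebo, delta))
```

Sweeping delta from 0 to 1 yields a tipping-point analysis: the smallest delta at which the estimated treatment effect loses significance measures how far from MAR the conclusions survive.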

4.
5.
Missing data, and the bias they can cause, are an almost ever-present concern in clinical trials. The last observation carried forward (LOCF) approach has been frequently utilized to handle missing data in clinical trials, and is often specified in conjunction with analysis of variance (LOCF ANOVA) for the primary analysis. Considerable advances in statistical methodology, and in our ability to implement these methods, have been made in recent years. Likelihood-based, mixed-effects model approaches implemented under the missing at random (MAR) framework are now easy to implement, and are commonly used to analyse clinical trial data. Furthermore, such approaches are more robust to the biases from missing data, and provide better control of Type I and Type II errors than LOCF ANOVA. Empirical research and analytic proof have demonstrated that the behaviour of LOCF is uncertain, and in many situations it has not been conservative. Using LOCF as a composite measure of safety, tolerability and efficacy can lead to erroneous conclusions regarding the effectiveness of a drug. This approach also violates the fundamental basis of statistics as it involves testing an outcome that is not a physical parameter of the population, but rather a quantity that can be influenced by investigator behaviour, trial design, etc. Practice should shift away from using LOCF ANOVA as the primary analysis and focus on likelihood-based, mixed-effects model approaches developed under the MAR framework, with missing not at random methods used to assess robustness of the primary analysis. Copyright © 2004 John Wiley & Sons, Ltd.
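LOCF is easy to state in code, which also makes its main weakness visible: a value observed early in the trial is analysed as if it were the endpoint value. A minimal pandas sketch with hypothetical columns:

```python
import pandas as pd

# Hypothetical long-format trial data; NaN marks visits after dropout.
df = pd.DataFrame({
    "subject": [1, 1, 1, 2, 2, 2],
    "visit":   [1, 2, 3, 1, 2, 3],
    "score":   [10.0, 8.0, None, 9.0, None, None],
})

# LOCF: within each subject, carry the last observed score forward.
df["score_locf"] = (df.sort_values(["subject", "visit"])
                      .groupby("subject")["score"]
                      .ffill())

# The "endpoint" analysis then compares visit-3 LOCF values between arms,
# treating subject 2's visit-1 value as if it were a visit-3 outcome.
print(df)
```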

6.
The trimmed mean is a method of dealing with patient dropout in clinical trials that considers early discontinuation of treatment a bad outcome rather than a source of missing data. The present investigation is the first comprehensive assessment of the approach across a broad set of simulated clinical trial scenarios. In the trimmed mean approach, all patients who discontinue treatment prior to the primary endpoint are excluded from analysis by trimming an equal percentage of bad outcomes from each treatment arm. The untrimmed values are used to calculate means or mean changes. An explicit intent of trimming is to favor the group with lower dropout because having more completers is a beneficial effect of the drug, or conversely, higher dropout is a bad effect. In the simulation study, the difference between treatments estimated from trimmed means was greater than the corresponding effects estimated from untrimmed means when dropout favored the experimental group, and vice versa. The trimmed mean estimates a unique estimand. Therefore, comparisons with other methods are difficult to interpret, and the utility of the trimmed mean hinges on the reasonableness of its assumptions: dropout is an equally bad outcome in all patients, and adherence decisions in the trial are sufficiently similar to clinical practice in order to generalize the results. Trimming might be applicable to other inter-current events such as switching to or adding rescue medicine. Given the well-known biases in some methods that estimate effectiveness, such as baseline observation carried forward and non-responder imputation, the trimmed mean may be a useful alternative when its assumptions are justifiable.
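One plausible implementation of the trimming rule described above: dropouts are ranked as the worst possible outcomes, and the same fraction (set by the arm with more dropout) is trimmed from the bad end of each arm before taking means. Names and data are hypothetical.

```python
import numpy as np

def trimmed_mean(values, trim_frac):
    """Mean after trimming `trim_frac` of the worst outcomes from one arm,
    counting dropouts (NaN) as the worst possible outcomes. Assumes lower
    values are better; ties and rounding are handled naively."""
    values = np.asarray(values, dtype=float)
    ranked = np.where(np.isnan(values), np.inf, values)  # dropouts rank worst
    order = np.argsort(ranked)                           # best ... worst
    n_keep = len(values) - int(round(trim_frac * len(values)))
    return values[order[:n_keep]].mean()

# Hypothetical change-from-baseline data (lower = better); NaN = dropout.
drug = [-8, -6, -5, -4, np.nan, np.nan]         # 2/6 dropped out
placebo = [-5, -4, -3, np.nan, np.nan, np.nan]  # 3/6 dropped out
trim = max(2 / 6, 3 / 6)  # equal trim fraction, set by the worse arm
print(trimmed_mean(drug, trim) - trimmed_mean(placebo, trim))
```

Because the arm with less dropout must trim genuinely observed bad outcomes to match the other arm's trimming fraction, better retention is rewarded, exactly the behaviour the abstract describes.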

7.
An important evolution in the missing data arena has been the recognition of need for clarity in objectives. The objectives of primary focus in clinical trials can often be categorized as assessing efficacy or effectiveness. The present investigation illustrated a structured framework for choosing estimands and estimators when testing investigational drugs to treat the symptoms of chronic illnesses. Key issues were discussed and illustrated using a reanalysis of the confirmatory trials from a new drug application in depression. The primary analysis used a likelihood-based approach to assess efficacy: mean change to the planned endpoint of the trial assuming patients stayed on drug. Secondarily, effectiveness was assessed using a multiple imputation approach. The imputation model, derived solely from the placebo group, was used to impute missing values for both the drug and placebo groups. Therefore, this so-called placebo multiple imputation (a.k.a. controlled imputation) approach assumed patients had reduced benefit from the drug after discontinuing it. Results from the example data provided clear evidence of efficacy for the experimental drug and characterized its effectiveness. Data after discontinuation of study medication were not required for these analyses. Given the idiosyncratic nature of drug development, no estimand or approach is universally appropriate. However, the general practice of pairing efficacy and effectiveness estimands may often be useful in understanding the overall risks and benefits of a drug. Controlled imputation approaches, such as placebo multiple imputation, can be a flexible and transparent framework for formulating primary analyses of effectiveness estimands and sensitivity analyses for efficacy estimands. Copyright © 2012 John Wiley & Sons, Ltd.

8.
We examine three pattern-mixture models for making inference about parameters of the distribution of an outcome of interest Y that is to be measured at the end of a longitudinal study when this outcome is missing in some subjects. We show that these pattern-mixture models also have an interpretation as selection models. Because these models make unverifiable assumptions, we recommend that inference about the distribution of Y be repeated under a range of plausible assumptions. We argue that, of the three models considered, only one admits a parameterization that facilitates the examination of departures from the assumption of sequential ignorability. The three models are nonparametric in the sense that they do not impose restrictions on the class of observed data distributions. Owing to the curse of dimensionality, the assumptions that are encoded in these models are sufficient for identification but not for inference. We describe additional flexible and easily interpretable assumptions under which it is possible to construct estimators that are well behaved with moderate sample sizes. These assumptions define semiparametric models for the distribution of the observed data. We describe a class of estimators which, up to asymptotic equivalence, comprise all the consistent and asymptotically normal estimators of the parameters of interest under the postulated semiparametric models. We illustrate our methods with the analysis of data from a randomized clinical trial of contracepting women.

9.
Missing observations are a common problem that complicates the analysis of clustered data. In the Connecticut child surveys of childhood psychopathology, it was possible to identify reasons why outcomes were not observed. Of note, some of these causes of missingness may be assumed to be ignorable, whereas others may be non-ignorable. We consider logistic regression models for incomplete bivariate binary outcomes and propose mixture models that permit estimation assuming that there are two distinct types of missingness mechanisms: one that is ignorable; the other non-ignorable. A feature of the mixture modelling approach is that additional analyses to assess the sensitivity to assumptions about the missingness are relatively straightforward to incorporate. The methods were developed for analysing data from the Connecticut child surveys, where there are missing informant reports of child psychopathology and different reasons for missingness can be distinguished.

10.
Multiple imputation is a common approach for dealing with missing values in statistical databases. The imputer fills in missing values with draws from predictive models estimated from the observed data, resulting in multiple, completed versions of the database. Researchers have developed a variety of default routines to implement multiple imputation; however, there has been limited research comparing the performance of these methods, particularly for categorical data. We use simulation studies to compare repeated sampling properties of three default multiple imputation methods for categorical data, including chained equations using generalized linear models, chained equations using classification and regression trees, and a fully Bayesian joint distribution based on Dirichlet process mixture models. We base the simulations on categorical data from the American Community Survey. In the circumstances of this study, the results suggest that default chained equations approaches based on generalized linear models are dominated by the default regression tree and Bayesian mixture model approaches. They also suggest competing advantages for the regression tree and Bayesian mixture model approaches, making both reasonable default engines for multiple imputation of categorical data. Supplementary material for this article is available online.
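A sketch of the chained-equations-with-trees idea in the spirit of the CART-based default compared above, using scikit-learn; the column handling and iteration count are my own simplifications, and a real MI run would produce M completed datasets and pool results.

```python
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

def fcs_cart_impute(df, n_iter=10, seed=0):
    """One completed dataset via chained equations with classification
    trees: repeatedly model each categorical column on all others and
    redraw its missing entries. A sketch, not a production MI engine."""
    rng = np.random.default_rng(seed)
    out = df.copy()
    miss = df.isna()
    # Initialize missing cells by sampling observed values per column.
    for c in out.columns:
        obs = out[c].dropna().to_numpy()
        out.loc[miss[c], c] = rng.choice(obs, miss[c].sum())
    for _ in range(n_iter):
        for c in out.columns:
            if not miss[c].any():
                continue
            X = pd.get_dummies(out.drop(columns=c))
            tree = DecisionTreeClassifier(min_samples_leaf=5)
            tree.fit(X[~miss[c]], out.loc[~miss[c], c])
            # Draw from leaf-level class probabilities (not the argmax),
            # so imputations carry uncertainty.
            proba = tree.predict_proba(X[miss[c]])
            out.loc[miss[c], c] = [rng.choice(tree.classes_, p=p) for p in proba]
    return out

# Demo on a tiny categorical dataset with holes
df = pd.DataFrame({"edu": ["hs", "ba", None, "ba", "hs", "ma"],
                   "emp": ["y", None, "n", "y", "y", None]})
print(fcs_cart_impute(df))
```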

11.
Statistical analyses of recurrent event data have typically been based on the missing at random assumption. One implication of this is that, if data are collected only when patients are on their randomized treatment, the resulting de jure estimator of treatment effect corresponds to the situation in which the patients adhere to this regime throughout the study. For confirmatory analysis of clinical trials, sensitivity analyses are required to investigate alternative de facto estimands that depart from this assumption. Recent publications have described the use of multiple imputation methods based on pattern mixture models for continuous outcomes, where imputation for the missing data for one treatment arm (e.g. the active arm) is based on the statistical behaviour of outcomes in another arm (e.g. the placebo arm). This has been referred to as controlled imputation or reference-based imputation. In this paper, we use the negative multinomial distribution to apply this approach to analyses of recurrent events and other similar outcomes. The methods are illustrated by a trial in severe asthma where the primary endpoint was rate of exacerbations and the primary analysis was based on the negative binomial model. Copyright © 2014 John Wiley & Sons, Ltd.
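The reference-based idea for recurrent events can be sketched as follows: for an active-arm subject who withdraws, the unobserved remainder of follow-up is imputed at the event rate estimated from the reference (placebo) arm. The Poisson draw below is a deliberate simplification of the paper's negative multinomial machinery, which additionally respects subject-level overdispersion; all names and numbers are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def impute_reference_based(events, exposure, planned, placebo_rate, n_imp=1000):
    """For each subject, impute events over the unobserved part of
    follow-up (planned - exposure) at the reference-arm rate, and return
    the average completed count per subject across imputations."""
    gap = np.clip(np.asarray(planned) - np.asarray(exposure), 0, None)
    totals = [np.asarray(events) + rng.poisson(placebo_rate * gap)
              for _ in range(n_imp)]
    return np.mean(totals, axis=0)

# Hypothetical active-arm subjects: observed exacerbations, years observed,
# planned years of follow-up; the placebo rate is estimated elsewhere.
events = [1, 0, 3]
exposure = [1.0, 0.4, 1.0]   # subject 2 withdrew early
planned = [1.0, 1.0, 1.0]
print(impute_reference_based(events, exposure, planned, placebo_rate=2.0))
```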

12.
It is well known that analyses of missing data depend on untestable assumptions; as a consequence, sensitivity analyses are often sensible in such settings. One such class of analyses assesses the dependence of conclusions on an explicit missing value mechanism. Inevitably, there is an association between such dependence and the actual (but unknown) distribution of the missing data. Within a particular parametric framework for dropout, this paper presents an approach that reduces (but never removes) the impact of incorrect assumptions on the form of the association. It is shown how these models can be formulated and fitted relatively simply using hierarchical likelihood. These models are applied directly to an example involving mastitis in dairy cattle, and an extensive simulation study is described to show the properties of the methods.

13.
Missing data are a prevalent data-analytic issue, and previous studies have performed simulations to compare the performance of missing data methods in various contexts and for various models. One context that has yet to receive much attention in the literature, however, is the handling of missing data with small samples, particularly when the missingness is arbitrary. Prior studies have either compared methods for small samples with the monotone missingness commonly found in longitudinal studies, or have investigated the performance of a single method for handling arbitrary missingness with small samples; studies have yet to compare the relative performance of commonly implemented missing data methods for small samples with arbitrary missingness. This study conducts a simulation study to compare and assess the small-sample performance of maximum likelihood, listwise deletion, joint multiple imputation, and fully conditional specification multiple imputation for a single-level regression model with a continuous outcome. Results showed that, provided assumptions are met, joint multiple imputation unanimously performed best of the methods examined in the conditions under study.
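A toy version of such a comparison, contrasting listwise deletion with a stochastic chained-equations imputation (scikit-learn's IterativeImputer as a stand-in for full multiple imputation) in a small-sample regression; the data-generating mechanism here is MCAR and entirely hypothetical.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(2)
n, beta = 30, 0.5                 # small sample, true slope
reps, est_del, est_mi = 200, [], []

for _ in range(reps):
    x = rng.normal(size=n)
    y = beta * x + rng.normal(size=n)
    x_obs = x.copy()
    x_obs[rng.random(n) < 0.3] = np.nan   # ~30% arbitrary missingness in x

    # Listwise deletion: fit only on complete cases.
    keep = ~np.isnan(x_obs)
    est_del.append(np.polyfit(x_obs[keep], y[keep], 1)[0])

    # Single stochastic imputation using both x and y in the model.
    imp = IterativeImputer(sample_posterior=True, random_state=0)
    x_imp = imp.fit_transform(np.column_stack([x_obs, y]))[:, 0]
    est_mi.append(np.polyfit(x_imp, y, 1)[0])

print(f"deletion: mean={np.mean(est_del):.3f}, sd={np.std(est_del):.3f}")
print(f"imputed:  mean={np.mean(est_mi):.3f}, sd={np.std(est_mi):.3f}")
```

Full MI would generate several completed datasets per replication and pool with Rubin's rules; the single-imputation stand-in above understates uncertainty.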

14.
In the past, many clinical trials have withdrawn subjects from the study when they prematurely stopped their randomised treatment and have therefore only collected 'on-treatment' data. Thus, analyses addressing a treatment policy estimand have been restricted to imputing missing data under assumptions drawn from these data only. Many confirmatory trials are now continuing to collect data from subjects in a study even after they have prematurely discontinued study treatment as this event is irrelevant for the purposes of a treatment policy estimand. However, despite efforts to keep subjects in a trial, some will still choose to withdraw. Recent publications for sensitivity analyses of recurrent event data have focused on the reference-based imputation methods commonly applied to continuous outcomes, where imputation for the missing data for one treatment arm is based on the observed outcomes in another arm. However, the existence of data from subjects who have prematurely discontinued treatment but remained in the study has now raised the opportunity to use this 'off-treatment' data to impute the missing data for subjects who withdraw, potentially allowing more plausible assumptions for the missing post-study-withdrawal data than reference-based approaches. In this paper, we introduce a new imputation method for recurrent event data in which the missing post-study-withdrawal event rate for a particular subject is assumed to reflect that observed from subjects during the off-treatment period. The method is illustrated in a trial in chronic obstructive pulmonary disease (COPD) where the primary endpoint was the rate of exacerbations, analysed using a negative binomial model.
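The contrast with reference-based imputation can be shown in a few lines: here the imputation rate comes from subjects' own observed off-treatment follow-up rather than from the placebo arm. Numbers and names are hypothetical, and the Poisson draw stands in for the negative binomial used in the paper's analysis model.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical pooled data: events and person-years observed while subjects
# were off treatment but still being followed in the study.
off_events, off_years = 42, 18.5
off_rate = off_events / off_years      # estimated off-treatment event rate

# Impute a withdrawn subject's unobserved 0.6 years at that rate
# (a negative binomial draw would also propagate overdispersion).
imputed = rng.poisson(off_rate * 0.6, size=1000)
print(f"off-treatment rate {off_rate:.2f}/yr; mean imputed events {imputed.mean():.2f}")
```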

15.
Missing data, a common but challenging issue in most studies, may lead to biased and inefficient inferences if handled inappropriately. As a natural and powerful way of dealing with missing data, the Bayesian approach has received much attention in the literature. This paper reviews the recent developments and applications of Bayesian methods for dealing with ignorable and non-ignorable missing data. We first introduce missing data mechanisms and the Bayesian framework for dealing with missing data, and then introduce missing data models under ignorable and non-ignorable missing data circumstances based on the literature. After that, important issues of Bayesian inference, including prior construction, posterior computation, model comparison and sensitivity analysis, are discussed. Finally, several future issues that deserve further research are summarized.

16.
In many clinical research applications the time to occurrence of one event of interest, which may be obscured by another (so-called competing) event, is investigated. Specific interventions can only have an effect on the endpoint they address, or research questions might focus on risk factors for a certain outcome. Different approaches for the analysis of time-to-event data in the presence of competing risks have been introduced in recent decades, including some newer methodologies that are not yet frequently used in the analysis of competing risks data. Cause-specific hazard regression, subdistribution hazard regression, mixture models, vertical modelling and the analysis of time-to-event data based on pseudo-observations are described in this article and are applied to a dataset from a cohort study intended to establish risk stratification for cardiac death after myocardial infarction. Data analysts are encouraged to use the appropriate methods for their specific research questions by comparing different regression approaches in the competing risks setting with regard to assumptions, methodology and interpretation of the results. Notes on applying the mentioned methods using the statistical software R are presented, and extensions to the presented standard methods proposed in the statistical literature are mentioned.
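Of the approaches listed, cause-specific hazard regression is the most direct to sketch: fit a Cox model for the event of interest, treating competing events as censored at their occurrence time. The example below uses the Python lifelines package and invented data (the article itself works in R); subdistribution (Fine-Gray) regression would instead keep subjects with competing events in the risk set.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical competing-risks data: event = 0 censored, 1 cardiac death,
# 2 non-cardiac death; `age` is a baseline covariate.
df = pd.DataFrame({
    "time":  [2.1, 3.5, 0.8, 4.2, 1.9, 3.1, 2.7, 0.5, 1.2, 3.9, 2.4, 4.0],
    "event": [1,   0,   2,   0,   1,   2,   0,   1,   1,   0,   2,   0],
    "age":   [70,  62,  75,  58,  68,  80,  55,  72,  66,  60,  78,  57],
})

# Cause-specific hazard for cardiac death: competing events are treated
# as censoring at the time they occur.
df["cardiac"] = (df["event"] == 1).astype(int)
cph = CoxPHFitter()
cph.fit(df[["time", "cardiac", "age"]], duration_col="time", event_col="cardiac")
cph.print_summary()
```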

17.
Effective component relabeling in Bayesian analyses of mixture models is critical to the routine use of mixtures in classification when analysis is based on Markov chain Monte Carlo methods. The classification-based relabeling approach here is computationally attractive and statistically effective, and scales well with sample size and number of mixture components, enabling routine analyses of increasingly large data sets. Building on the best of existing methods, practical relabeling aims to match the data-to-component classification indicators in MCMC iterates with those of a defined reference mixture distribution. The method performs as well as or better than existing methods in small dimensional problems, while being practically superior in problems with larger data sets because the approach is scalable. We describe examples and computational benchmarks, and provide supporting code with an efficient computational implementation of the algorithm that will be of use to others in practical applications of mixture models.
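The core of classification-based relabeling can be posed as an assignment problem: for each MCMC iterate, find the label permutation that best matches a reference classification. The sketch below uses a simple disagreement-count cost and the Hungarian algorithm; the paper's exact cost and reference construction may differ.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def relabel(z_iter, z_ref, k):
    """Permute the labels in one MCMC iterate `z_iter` so its
    classification agrees maximally with the reference `z_ref`.
    cost[i, j] counts disagreements if iterate-label i is mapped to
    reference-label j; the Hungarian algorithm finds the best map."""
    cost = np.zeros((k, k), dtype=int)
    for i in range(k):
        for j in range(k):
            cost[i, j] = np.sum((z_iter == i) & (z_ref != j))
    _, perm = linear_sum_assignment(cost)
    return perm[z_iter]

z_ref = np.array([0, 0, 1, 1, 2, 2])
z_iter = np.array([2, 2, 0, 0, 1, 1])  # same clustering, labels switched
print(relabel(z_iter, z_ref, k=3))     # -> [0 0 1 1 2 2]
```

Because the cost matrix is only k-by-k, the per-iterate work is trivial, which is what makes this style of relabeling scale to large data sets.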

18.
Likelihood-based, mixed-effects models for repeated measures (MMRMs) are occasionally used in primary analyses for group comparisons of incomplete continuous longitudinal data. Although MMRM analysis is generally valid under missing-at-random assumptions, it is invalid under not-missing-at-random (NMAR) assumptions. We consider the possibility of bias in the estimated treatment effect using standard MMRM analysis in a motivating case, and propose simple and easily implementable pattern mixture models within the framework of mixed-effects modeling, to handle NMAR data with differential missingness between treatment groups. The proposed models are a new form of pattern mixture model that employ a categorical time variable when modeling the outcome and a continuous time variable when modeling the missingness-data patterns. The models can directly provide an overall estimate of the treatment effect of interest using the average of the distribution of the missingness indicator and a categorical time variable in the same manner as MMRM analysis. Our simulation results indicate that the bias of the treatment effect for MMRM analysis was considerably larger than that for the pattern mixture model analysis under NMAR assumptions. In the case study, it would be dangerous to interpret only the results of the MMRM analysis, and the proposed pattern mixture model would be useful as a sensitivity analysis for treatment effect evaluation.

19.
This study compares two methods for handling missing data in longitudinal trials: one using the last-observation-carried-forward (LOCF) method and one based on a multivariate or mixed model for repeated measurements (MMRM). Using data sets simulated to match six actual trials, I imposed several drop-out mechanisms, and compared the methods in terms of bias in the treatment difference and power of the treatment comparison. With equal drop-out in Active and Placebo arms, LOCF generally underestimated the treatment effect; but with unequal drop-out, bias could be much larger and in either direction. In contrast, bias with the MMRM method was much smaller; and whereas MMRM rarely caused a difference in power of greater than 20%, LOCF caused a difference in power of greater than 20% in nearly half the simulations. Use of the LOCF method is therefore likely to misrepresent the results of a trial seriously, and so is not a good choice for primary analysis. In contrast, the MMRM method is unlikely to result in serious misinterpretation, unless the drop-out mechanism is missing not at random (MNAR) and there is substantially unequal drop-out. Moreover, MMRM is clearly more reliable and better grounded statistically. Neither method is capable of dealing on its own with trials involving MNAR drop-out mechanisms, for which sensitivity analysis is needed using more complex methods.

20.
Novel imaging techniques are playing an increasingly important role in drug development, providing insight into the mechanism of action of new chemical entities. The data sets obtained by these methods can be large, with complex inter-relationships, but the most appropriate statistical analysis for handling these data is often uncertain, precisely because of the exploratory nature of the way the data are collected. We present an example from a clinical trial using magnetic resonance imaging to assess changes in atherosclerotic plaques following treatment with a tool compound with established clinical benefit. We compared two specific approaches to handling the correlations due to physical location and repeated measurements: two-level and four-level multilevel models. The two methods identified similar structural variables, but higher-level multilevel models had the advantage of explaining a greater proportion of variation, and the modeling assumptions appeared to be better satisfied.
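For flavor, here is a sketch of the kind of comparison described, using statsmodels on invented data: a model with subject random intercepts only versus one that adds a variance component for plaque location nested within subject (a stand-in for the paper's deeper four-level structure; all column names are hypothetical).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)

# Hypothetical long data: repeated wall-thickness measurements nested in
# plaque locations nested in subjects.
n_subj, n_loc, n_rep = 20, 4, 3
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subj), n_loc * n_rep),
    "location": np.tile(np.repeat(np.arange(n_loc), n_rep), n_subj),
    "visit": np.tile(np.arange(n_rep), n_subj * n_loc),
})
subj_eff = rng.normal(0, 1.0, n_subj)[df["subject"]]
loc_eff = rng.normal(0, 0.5, (n_subj, n_loc))[df["subject"], df["location"]]
df["thickness"] = 3 + 0.1 * df["visit"] + subj_eff + loc_eff + rng.normal(0, 0.3, len(df))

# Two-level model: subject random intercept only.
m2 = smf.mixedlm("thickness ~ visit", df, groups="subject").fit()
# Adding a variance component for location nested within subject.
m3 = smf.mixedlm("thickness ~ visit", df, groups="subject",
                 vc_formula={"location": "0 + C(location)"}).fit()
print(m2.summary())
print(m3.summary())
```

Comparing the residual variances of the two fits shows how much of the within-subject variation the location component absorbs, which is the sense in which the higher-level model "explains a greater proportion of variation".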
