Similar articles
A total of 20 similar articles were found.
1.
Missing data cause challenging issues, particularly in phase III registration trials, as highlighted by the European Medicines Agency (EMA) and the US National Research Council. We explore, as a case study, how the issues arising from missing data were tackled in a double-blind phase III trial in subjects with autosomal dominant polycystic kidney disease. A total of 1445 subjects were randomized in a 2:1 ratio to receive active treatment (tolvaptan) or placebo. The primary outcome, the rate of change in total kidney volume, favored tolvaptan (P < .0001). The key secondary efficacy endpoints of clinical progression of disease and rate of decline in kidney function also favored tolvaptan. However, as highlighted by the Food and Drug Administration and the EMA, the interpretation of results was hampered by a high number of unevenly distributed dropouts, particularly early dropouts. In this paper, we outline the analyses undertaken to address the issue of missing data thoroughly. “Tipping point analyses” were performed to explore how extreme and detrimental the outcomes among subjects with missing data would have to be to overturn the positive treatment effect attained in the subjects who had complete data. Nonparametric rank-based analyses accounting for missing data were also performed. In conclusion, straightforward and transparent analyses that directly take missing data into account convincingly support the robustness of the preplanned analyses of the primary and secondary endpoints. Tolvaptan was confirmed to be effective in slowing total kidney volume growth, which is considered an efficacy endpoint by the EMA, and in lessening the decline in renal function in patients with autosomal dominant polycystic kidney disease.
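A minimal sketch of the tipping point idea described above, on fully simulated data (not the trial's): outcomes for active-arm dropouts are imputed from a placebo-like distribution shifted by an increasingly unfavourable penalty delta until the two-sample comparison loses significance. All numbers, the test (a simple t test), and the 0.05 threshold are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulated annualised % change in total kidney volume (lower is better).
active_obs = rng.normal(2.8, 4.0, size=600)      # completers, active arm
placebo_obs = rng.normal(5.5, 4.0, size=400)     # completers, placebo arm
n_active_dropouts = 150                          # hypothetical number of active-arm dropouts

for delta in np.arange(0.0, 10.5, 0.5):
    # Impute dropouts as placebo-like outcomes worsened by an extra penalty delta.
    imputed = rng.normal(5.5 + delta, 4.0, size=n_active_dropouts)
    active_full = np.concatenate([active_obs, imputed])
    t_stat, p_value = stats.ttest_ind(active_full, placebo_obs)
    print(f"delta={delta:4.1f}  diff={active_full.mean() - placebo_obs.mean():6.2f}  p={p_value:.4f}")
    if p_value > 0.05:
        print("Tipping point reached: the positive conclusion would be overturned here.")
        break
```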

2.
A draft addendum to ICH E9 was released for public consultation in August 2017. The addendum focuses on two topics particularly relevant for randomized confirmatory clinical trials: estimands and sensitivity analyses. The need to amend ICH E9 grew out of the realization that the objectives of a clinical trial stated in the protocol are often not aligned with the accompanying quantification of the “treatment effect” reported in a regulatory submission. We embed time-to-event endpoints in the estimand framework and discuss how the four estimand attributes described in the addendum apply to time-to-event endpoints. We point out that if the proportional hazards assumption is not met, the estimand targeted by the most prevalent methods used to analyze time-to-event endpoints, the logrank test and Cox regression, depends on the censoring distribution. For a large randomized clinical trial, we discuss how the analyses of the primary and secondary endpoints, as well as the sensitivity analyses actually performed in the trial, can be seen in the context of the addendum. To the best of our knowledge, this is the first attempt to do so for a trial with a time-to-event endpoint. Questions that remain open with the addendum for time-to-event endpoints and beyond are formulated, and recommendations for the planning of future trials are given. We hope that this will contribute to developing a common framework, based on the final version of the addendum, that can be applied to designs, protocols, statistical analysis plans, and clinical study reports in the future.
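The censoring-dependence point can be illustrated with a small simulation under assumed (hypothetical) Weibull event-time distributions whose hazards cross; statsmodels' PHReg is used here as a stand-in for Cox regression. With non-proportional hazards, the fitted log hazard ratio changes with the administrative follow-up time:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 5000
treat = rng.integers(0, 2, size=n)

# Non-proportional hazards: different Weibull shapes in the two arms (hazards cross).
shape = np.where(treat == 1, 0.7, 1.5)
scale = np.where(treat == 1, 3.0, 2.5)
event_time = scale * rng.weibull(shape, size=n)

for follow_up in (1.0, 5.0):                       # two administrative censoring times
    time = np.minimum(event_time, follow_up)
    status = (event_time <= follow_up).astype(int)
    res = sm.PHReg(time, treat.astype(float).reshape(-1, 1), status=status).fit()
    print(f"follow-up {follow_up}: estimated log hazard ratio = {res.params[0]:+.3f}")
```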

3.
The complementary roles fulfilled by observational studies and randomized controlled trials in the population science research agenda are illustrated using results from the Women’s Health Initiative (WHI). Comparative and joint analyses of clinical trial and observational study data can enhance observational study design and analysis choices, and can augment the implications of randomized trials. These concepts are described in the context of findings from the WHI randomized trials of postmenopausal hormone therapy and of a low-fat dietary pattern, especially in relation to coronary heart disease, stroke, and breast cancer. The role of biomarkers of exposure and outcome, including high-dimensional genomic and proteomic biomarkers, in the elucidation of disease associations is also discussed in these same contexts.

4.
Selection of treatments to fit the specific needs of an individual patient is a major challenge in modern medicine. Personalized treatments rely on established patient–treatment interactions. In recent years, various statistical methods for the identification and estimation of interactions between relevant covariates and treatment have been proposed. In this article, different available methods for the detection and estimation of a covariate–treatment interaction for a time-to-event outcome, namely the standard Cox regression model assuming a linear interaction, the fractional polynomials approach for interaction, the modified outcome approach, the local partial-likelihood approach, and STEPP (Subpopulation Treatment Effect Pattern Plots), were applied to data from the SPACE trial, a randomized clinical trial comparing stent-protected angioplasty (CAS) with carotid endarterectomy (CEA) in patients with symptomatic stenosis, with the aim of analysing the interaction between age and treatment. Time from the primary intervention to the first relevant event (any stroke or death) was considered as the outcome parameter. The analyses suggest a qualitative interaction between patient age and treatment, indicating a lower risk after treatment with CAS compared with CEA for younger patients, while for elderly patients a lower risk after CEA was observed. Differences between the statistical methods regarding the observed results, applicability, and interpretation are discussed.
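As a hedged illustration of the simplest method listed (a Cox model with a linear age-by-treatment interaction), the sketch below fits such a model to simulated data with statsmodels' PHReg; it does not use the SPACE trial data, and the qualitative interaction is built into the simulation.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 1200
age = rng.uniform(50, 85, size=n)
cas = rng.integers(0, 2, size=n)                  # 1 = CAS, 0 = CEA (hypothetical coding)

# Simulated qualitative interaction: CAS favourable below ~68 years, CEA above.
log_hazard = 0.03 * (age - 68) + cas * 0.04 * (age - 68)
event_time = rng.exponential(1.0 / np.exp(log_hazard - 2.0))
cens_time = rng.uniform(0.5, 3.0, size=n)
time = np.minimum(event_time, cens_time)
status = (event_time <= cens_time).astype(int)

X = pd.DataFrame({"cas": cas, "age_c": age - 68, "cas_x_age": cas * (age - 68)})
res = sm.PHReg(time, X, status=status).fit()
print(res.summary())                               # the cas_x_age row is the linear interaction
for a in (60, 80):
    hr = np.exp(res.params[0] + res.params[2] * (a - 68))
    print(f"Estimated HR (CAS vs CEA) at age {a}: {hr:.2f}")
```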

5.
Statistical analyses of recurrent event data have typically been based on the missing at random assumption. One implication of this is that, if data are collected only when patients are on their randomized treatment, the resulting de jure estimator of treatment effect corresponds to the situation in which the patients adhere to this regime throughout the study. For confirmatory analysis of clinical trials, sensitivity analyses are required to investigate alternative de facto estimands that depart from this assumption. Recent publications have described the use of multiple imputation methods based on pattern mixture models for continuous outcomes, where imputation for the missing data for one treatment arm (e.g. the active arm) is based on the statistical behaviour of outcomes in another arm (e.g. the placebo arm). This has been referred to as controlled imputation or reference-based imputation. In this paper, we use the negative multinomial distribution to apply this approach to analyses of recurrent events and other similar outcomes. The methods are illustrated by a trial in severe asthma where the primary endpoint was rate of exacerbations and the primary analysis was based on the negative binomial model. Copyright © 2014 John Wiley & Sons, Ltd.
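For orientation, a minimal sketch of the kind of primary analysis mentioned (a negative binomial model for exacerbation counts with follow-up time as exposure) is shown below on simulated data using statsmodels; it does not implement the reference-based multiple imputation itself, and the dispersion parameter is fixed rather than estimated.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(11)
n = 800
treat = rng.integers(0, 2, size=n)
follow_up = rng.uniform(0.5, 1.0, size=n)          # years of observed follow-up

# Gamma-Poisson mixture gives negative binomial counts; true rate ratio = exp(-0.35).
mu = follow_up * np.exp(0.6 - 0.35 * treat)
frailty = rng.gamma(shape=2.0, scale=0.5, size=n)
exacerbations = rng.poisson(mu * frailty)

X = sm.add_constant(pd.DataFrame({"treat": treat}))
fit = sm.GLM(exacerbations, X,
             family=sm.families.NegativeBinomial(alpha=0.5),   # alpha fixed here for brevity
             exposure=follow_up).fit()
print(fit.summary())
print("Estimated exacerbation rate ratio:", round(np.exp(fit.params["treat"]), 3))
```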

6.
For a trial with primary endpoint overall survival for a molecule with curative potential, statistical methods that rely on the proportional hazards assumption may underestimate the power and the time to final analysis. We show how a cure proportion model can be used to get the necessary number of events and appropriate timing via simulation. If phase 1 results for the new drug are exceptional and/or the medical need in the target population is high, a phase 3 trial might be initiated after phase 1. Building a futility interim analysis into such a pivotal trial may mitigate the uncertainty of moving directly to phase 3. However, if cure is possible, overall survival might not be mature enough at the interim to support a futility decision. We propose to base this decision on an intermediate endpoint that is sufficiently associated with survival. Planning for such an interim can be interpreted as making a randomized phase 2 trial a part of the pivotal trial: if stopped at the interim, the trial data would be analyzed, and a decision on a subsequent phase 3 trial would be made. If the trial continues at the interim, then the phase 3 trial is already underway. To select a futility boundary, a mechanistic simulation model that connects the intermediate endpoint and survival is proposed. We illustrate how this approach was used to design a pivotal randomized trial in acute myeloid leukemia and discuss the historical data that informed the simulation model and the operational challenges in implementing it.
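A rough sketch of the planning idea, under purely hypothetical design assumptions (cure fractions, control median, hazard ratio among non-cured patients, accrual): simulate accrual and overall survival from a cure proportion model and count how many events have accrued by a given calendar time.

```python
import numpy as np

rng = np.random.default_rng(5)
n_per_arm, cure_ctrl, cure_exp = 300, 0.20, 0.35    # assumed cure fractions
median_ctrl, hr_noncured = 12.0, 0.75               # months; HR among non-cured patients

def expected_events_by(calendar_months, n_sims=2000):
    counts = []
    for _ in range(n_sims):
        accrual = rng.uniform(0, 18, size=2 * n_per_arm)         # 18-month uniform accrual
        cure_prob = np.r_[np.full(n_per_arm, cure_ctrl), np.full(n_per_arm, cure_exp)]
        cured = rng.random(2 * n_per_arm) < cure_prob
        # Exponential survival for non-cured patients; control first, experimental second.
        scale = median_ctrl / np.log(2) * np.r_[np.ones(n_per_arm),
                                                np.full(n_per_arm, 1 / hr_noncured)]
        death = rng.exponential(scale)
        death[cured] = np.inf                                     # cured patients contribute no event
        counts.append(np.sum(accrual + death <= calendar_months))
    return np.mean(counts)

for t in (24, 36, 48, 60):
    print(f"expected OS events by month {t}: {expected_events_by(t):.0f}")
```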

7.
The power of randomized controlled clinical trials to demonstrate the efficacy of a drug compared with a control group depends not just on how efficacious the drug is, but also on the variation in patients' outcomes. Adjusting for prognostic covariates in the trial analysis can reduce this variation. For this reason, the primary statistical analysis of a clinical trial is often based on a regression model that, besides terms for treatment and some further terms (e.g., stratification factors used in the randomization scheme of the trial), also includes a baseline (pre-treatment) assessment of the primary outcome. We suggest including a “super-covariate”, that is, a patient-specific prediction of the control-group outcome, as a further covariate (but not as an offset). We train a prognostic model, or an ensemble of such models, on the individual patient (or aggregate) data of other studies in similar patients, but not on the new trial under analysis. This has the potential to use historical data to increase the power of clinical trials and avoids the concern of type I error inflation associated with Bayesian approaches, while, in contrast to them, offering a greater benefit for larger sample sizes. It is important for the prognostic models behind “super-covariates” to generalize well across different patient populations, so that they reduce unexplained variability to a similar degree whether or not the trials used to develop the model are identical to the new trial. In an example in neovascular age-related macular degeneration we saw efficiency gains from the use of a “super-covariate”.
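A toy illustration of the super-covariate idea, with everything simulated (no real AMD data): a prognostic model is trained on external data only, its prediction of the control-group outcome is added as a covariate in the new trial's analysis, and the standard error of the treatment effect shrinks relative to the unadjusted analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
beta = np.array([2.0, -1.0, 1.5, 0.5])             # assumed prognostic effects

def simulate_controls(n):
    x = rng.normal(size=(n, 4))                    # baseline prognostic factors
    y = x @ beta + rng.normal(0, 3, size=n)        # control-group outcome
    return x, y

# "Historical" data used only to train the prognostic model (not the new trial).
x_hist, y_hist = simulate_controls(5000)
prognostic = sm.OLS(y_hist, sm.add_constant(x_hist)).fit()

# New trial: a constant treatment effect of 3.0 is added for treated patients.
x_new, y_ctrl = simulate_controls(300)
treat = rng.integers(0, 2, size=300)
y_new = y_ctrl + 3.0 * treat
super_cov = prognostic.predict(sm.add_constant(x_new))   # patient-specific control prediction

unadj = sm.OLS(y_new, sm.add_constant(pd.DataFrame({"treat": treat}))).fit()
adj = sm.OLS(y_new, sm.add_constant(pd.DataFrame({"treat": treat,
                                                  "super_cov": super_cov}))).fit()
print("Unadjusted treatment-effect SE:   ", round(unadj.bse["treat"], 3))
print("Super-covariate adjusted SE:      ", round(adj.bse["treat"], 3))
```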

8.
Conducting a clinical trial at multiple study centres raises the issue of whether and how to adjust for centre heterogeneity in the statistical analysis. In this paper, we address this issue for multicentre clinical trials with a time-to-event outcome. Based on simulations, we show that the current practice of ignoring centre heterogeneity can be seriously misleading, and we illustrate the performance of the frailty modelling approach relative to competing methods. Special attention is paid to the problem of misspecification of the frailty distribution. The appendix provides sample code in R and in SAS to perform the analyses in this paper. Copyright © 2014 John Wiley & Sons, Ltd.

9.
A convention in designing randomized clinical trials has been to choose sample sizes that yield specified statistical power when testing hypotheses about treatment response. Manski and Tetenov recently critiqued this convention and proposed enrollment of sufficiently many subjects to enable near-optimal treatment choices. This article develops a refined version of that analysis applicable to trials comparing aggressive treatment of patients with surveillance. The need for a refined analysis arises because the earlier work assumed that there is only a primary health outcome of interest, without secondary outcomes. An important aspect of the choice between surveillance and aggressive treatment is that the latter may have side effects. One should then consider how the primary outcome and side effects jointly determine patient welfare. This requires a new analysis of sample design. As a case study, we reconsider a trial comparing nodal observation and lymph node dissection when treating patients with cutaneous melanoma. Using a statistical power calculation, the investigators assigned 971 patients to dissection and 968 to observation. We conclude that assigning 244 patients to each option would yield findings that enable suitably near-optimal treatment choice. Thus, a much smaller sample size would have sufficed to inform clinical practice.
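For contrast, the conventional power-based calculation that the abstract critiques can be sketched as follows; the outcome proportions, power, and alpha below are hypothetical and are not the assumptions of the melanoma trial, and the near-optimal (minimax-regret) calculation of Manski and Tetenov is not reproduced here.

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# Hypothetical design assumptions for a two-proportion comparison.
p_surveillance, p_dissection = 0.80, 0.85
effect = proportion_effectsize(p_dissection, p_surveillance)
n_per_arm = NormalIndPower().solve_power(effect_size=effect, alpha=0.05,
                                         power=0.80, alternative="two-sided")
print(f"Conventional power-based sample size: about {n_per_arm:.0f} patients per arm")
```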

10.
Subgroup-by-treatment interaction assessments are routinely performed when analysing clinical trials and are particularly important for phase 3 trials, where the results may affect regulatory labelling. Interpretation of such interactions is particularly difficult: on the one hand, a subgroup finding can be due to chance, but equally such analyses are known to have a low chance of detecting differential treatment effects across subgroup levels and so may overlook important differences in therapeutic efficacy. The EMA has therefore issued draft guidance on the use of subgroup analyses in this setting. Although this guidance provides clear proposals on the importance of pre-specifying likely subgroup effects and on how to use this pre-specification when interpreting trial results, it is less clear about which analysis methods are reasonable and about how to interpret apparent subgroup effects in terms of whether further evaluation or action is necessary. A PSI/EFSPI Working Group has therefore been investigating a focused set of analysis approaches for assessing treatment effect heterogeneity across subgroups in confirmatory clinical trials that take account of the number of subgroups explored, and has also been investigating the ability of each method to detect such subgroup heterogeneity. This evaluation has shown that plotting standardised effects, the bias-adjusted bootstrapping method, and the SIDES method all perform more favourably than traditional approaches such as investigating all subgroup-by-treatment interactions individually or applying a global test of interaction. These approaches should therefore be considered to aid interpretation and provide context for observed results from subgroup analyses conducted for phase 3 clinical trials.
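A simplified sketch of one approach mentioned, plotting (here, tabulating) standardised subgroup treatment effects: the treatment effect is estimated within each level of several subgrouping factors and divided by its standard error, so its spread can be judged against what chance alone would produce. The data are simulated with a homogeneous true effect, and this is not the Working Group's exact implementation (no bias adjustment or multiplicity correction is applied).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 2000
df = pd.DataFrame({
    "treat": rng.integers(0, 2, size=n),
    "sex": rng.choice(["F", "M"], size=n),
    "region": rng.choice(["EU", "US", "Asia"], size=n),
    "age_grp": rng.choice(["<65", ">=65"], size=n),
})
df["y"] = 1.0 * df["treat"] + rng.normal(0, 4, size=n)   # homogeneous true effect of 1.0

rows = []
for factor in ["sex", "region", "age_grp"]:
    for level, sub in df.groupby(factor):
        fit = smf.ols("y ~ treat", data=sub).fit()
        est, se = fit.params["treat"], fit.bse["treat"]
        rows.append((f"{factor}={level}", est, est / se))
print(pd.DataFrame(rows, columns=["subgroup", "effect", "standardised effect"]))
```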

11.
In phase III clinical trials, some adverse events may not be rare or unexpected and can be considered as a primary measure of safety, particularly in trials of life-threatening conditions such as stroke or traumatic brain injury. In some clinical areas, efficacy endpoints may be highly correlated with safety endpoints, yet interim efficacy analyses under group sequential designs usually do not formally consider safety measures. Furthermore, safety is often statistically monitored more frequently than efficacy measures. Because early termination of a trial in this situation can be triggered by either efficacy or safety, the impact of safety monitoring on the error probabilities of the efficacy analyses may be nontrivial if the original design does not take the multiplicity effect into account. We estimate the actual error probabilities for a bivariate binary efficacy-safety response in large confirmatory group sequential trials. The estimated probabilities are verified by Monte Carlo simulation. Our findings suggest that the type I error for efficacy analyses decreases as the efficacy-safety correlation or the between-group difference in the safety event rate increases. In addition, although power for efficacy is robust to misspecification of the efficacy-safety correlation, it decreases dramatically as the between-group difference in the safety event rate increases.
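The Monte Carlo logic can be sketched as follows on a deliberately simplified two-look design: correlated binary efficacy and safety responses are generated through a Gaussian copula, the trial stops if an interim safety test signals harm, and the realised probability of a false-positive efficacy claim is tabulated across assumed correlations and safety-rate differences. All design constants are illustrative, not those studied in the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(13)

def correlated_binary(n, p_eff, p_safe, rho):
    """Correlated binary (efficacy, safety) responses via a Gaussian copula."""
    z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)
    return ((z[:, 0] < stats.norm.ppf(p_eff)).astype(int),
            (z[:, 1] < stats.norm.ppf(p_safe)).astype(int))

def prop_z(x, y):
    """Two-sample z statistic for a difference in proportions (pooled variance)."""
    p1, p2, p = x.mean(), y.mean(), np.concatenate([x, y]).mean()
    return (p1 - p2) / np.sqrt(p * (1 - p) * (1 / len(x) + 1 / len(y)))

def efficacy_type1(rho, safety_diff, n_per_arm=400, n_sims=2000):
    declared = 0
    for _ in range(n_sims):
        # Null efficacy in both arms; safety rate possibly elevated in the active arm.
        eff_a, safe_a = correlated_binary(n_per_arm, 0.30, 0.10 + safety_diff, rho)
        eff_c, safe_c = correlated_binary(n_per_arm, 0.30, 0.10, rho)
        half = n_per_arm // 2
        if prop_z(safe_a[:half], safe_c[:half]) > 1.96:    # interim safety stop for harm
            continue
        declared += abs(prop_z(eff_a, eff_c)) > 1.96       # final efficacy test, two-sided 0.05
    return declared / n_sims

for rho, diff in [(0.0, 0.0), (0.6, 0.0), (0.0, 0.05), (0.6, 0.05)]:
    print(f"rho={rho}, safety diff={diff}: efficacy type I error ~ {efficacy_type1(rho, diff):.3f}")
```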

12.
The objective of this research was to demonstrate a framework for drawing inference from sensitivity analyses of incomplete longitudinal clinical trial data via a re-analysis of data from a confirmatory clinical trial in depression. A likelihood-based approach that assumed missing at random (MAR) was the primary analysis. Robustness to departures from MAR was assessed by comparing the primary result with those from a series of analyses that employed varying missing not at random (MNAR) assumptions (selection models, pattern mixture models, and shared parameter models) and with MAR methods that used inclusive models. The key sensitivity analysis used multiple imputation assuming that, after dropout, the trajectory of drug-treated patients was that of placebo-treated patients with a similar outcome history (placebo multiple imputation). This result was used as the worst reasonable case to define the lower limit of plausible values for the treatment contrast. The endpoint contrast from the primary analysis was −2.79 (p = .013). In placebo multiple imputation, the result was −2.17. Results from the other sensitivity analyses ranged from −2.21 to −3.87 and were symmetrically distributed around the primary result. Hence, no clear evidence of bias from missing not at random data was found. In the worst reasonable case scenario, the treatment effect was 80% of the magnitude of the primary result. Therefore, it was concluded that a treatment effect existed. This structured sensitivity framework, in which a worst reasonable case result based on a controlled imputation approach with transparent and debatable assumptions is supplemented by a series of plausible alternative models under varying assumptions, was useful in this specific situation and holds promise as a generally useful framework. Copyright © 2012 John Wiley & Sons, Ltd.
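A much-simplified, single-visit sketch of placebo multiple imputation on simulated data: missing endpoints in the drug arm are drawn from a model fitted to placebo-arm data, each completed data set is analysed, and the results are pooled with Rubin's rules. The actual trial analysis was longitudinal (MMRM-type), and for brevity the imputation below is "improper" (parameter uncertainty in the imputation model is ignored).

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(9)
n = 200
arm = np.repeat([0, 1], n)                         # 0 = placebo, 1 = drug
baseline = rng.normal(24, 4, size=2 * n)           # e.g. a baseline depression score
endpoint = baseline - 6 - 2.5 * arm + rng.normal(0, 5, size=2 * n)
missing = (arm == 1) & (rng.random(2 * n) < 0.25)  # toy example: dropouts in the drug arm only

# Placebo-based imputation model (endpoint regressed on baseline in the placebo arm).
plc = sm.OLS(endpoint[arm == 0], sm.add_constant(baseline[arm == 0])).fit()

m = 20
estimates, variances = [], []
for _ in range(m):
    y = endpoint.copy()
    mu = plc.params[0] + plc.params[1] * baseline[missing]
    y[missing] = mu + rng.normal(0, np.sqrt(plc.scale), size=missing.sum())
    fit = sm.OLS(y - baseline, sm.add_constant(arm)).fit()   # change-from-baseline comparison
    estimates.append(fit.params[1])
    variances.append(fit.bse[1] ** 2)

# Rubin's rules: within-imputation variance plus inflated between-imputation variance.
qbar = np.mean(estimates)
total_var = np.mean(variances) + (1 + 1 / m) * np.var(estimates, ddof=1)
print(f"Pooled treatment contrast: {qbar:.2f} (SE {np.sqrt(total_var):.2f})")
```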

13.
In some randomized (drug versus placebo) clinical trials, the estimand of interest is the between-treatment difference in population means of a clinical endpoint that is free from the confounding effects of “rescue” medication (e.g., HbA1c change from baseline at 24 weeks that would be observed without rescue medication regardless of whether or when the assigned treatment was discontinued). In such settings, a missing data problem arises if some patients prematurely discontinue from the trial or initiate rescue medication while in the trial, the latter necessitating the discarding of post-rescue data. We caution that the commonly used mixed-effects model repeated measures analysis with the embedded missing at random assumption can deliver an exaggerated estimate of the aforementioned estimand of interest. This happens, in part, due to implicit imputation of an overly optimistic mean for “dropouts” (i.e., patients with missing endpoint data of interest) in the drug arm. We propose an alternative approach in which the missing mean for the drug arm dropouts is explicitly replaced with either the estimated mean of the entire endpoint distribution under placebo (primary analysis) or a sequence of increasingly more conservative means within a tipping point framework (sensitivity analysis); patient-level imputation is not required. A supplemental “dropout = failure” analysis is considered in which a common poor outcome is imputed for all dropouts followed by a between-treatment comparison using quantile regression. All analyses address the same estimand and can adjust for baseline covariates. Three examples and simulation results are used to support our recommendations.
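A small numeric sketch of the explicit mean-replacement idea (no patient-level imputation), with invented HbA1c-style numbers: the drug-arm mean is reconstructed as a weighted combination of the observed completer mean and a substituted dropout mean, first the estimated placebo mean (primary analysis) and then a sequence of increasingly conservative values (tipping point). Standard errors are omitted to keep the arithmetic visible.

```python
import numpy as np

n_drug, n_drug_dropout = 300, 60                 # drug arm size and its dropouts (hypothetical)
drug_completer_mean = -1.00                      # observed mean HbA1c change among drug completers
placebo_mean = -0.30                             # estimated mean of the full placebo distribution

def drug_arm_mean(substituted_dropout_mean):
    # Weighted combination of completer mean and the substituted dropout mean.
    w = n_drug_dropout / n_drug
    return (1 - w) * drug_completer_mean + w * substituted_dropout_mean

primary = drug_arm_mean(placebo_mean) - placebo_mean
print(f"Primary analysis contrast (dropout mean = placebo mean): {primary:.3f}")

for shift in np.arange(0.0, 1.01, 0.25):          # tipping-point sweep toward worse dropout means
    contrast = drug_arm_mean(placebo_mean + shift) - placebo_mean
    print(f"dropout mean shifted by +{shift:.2f} from placebo: contrast = {contrast:.3f}")
```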

14.
15.
In this paper we present an approach to using historical control data to augment information from a randomized controlled clinical trial when it is not possible to continue the control regimen long enough to obtain the most reliable and valid assessment of long-term treatment effects. Using an adjustment procedure applied to the historical control data, we investigate a method of estimating the long-term survival function for the clinical trial control group and of evaluating the long-term treatment effect. The suggested method is simple to interpret and is particularly well motivated in clinical trial settings in which ethical considerations preclude the long-term follow-up of placebo controls. A simulation study reveals that the bias in parameter estimates that arises in the setting of group sequential monitoring is attenuated when long-term historical control information is used in the proposed manner. Data from the first and second National Wilms' Tumor studies are used to illustrate the method.

16.
For trials with repeated measurements of the outcome, analyses often focus on univariate outcomes, such as analysis of summary measures or of the last on-treatment observation. Methods that model the whole data set provide a rich source of approaches to analysis. For continuous data, mixed-effects modelling is increasingly used. For binary and categorical data, models based on generalized estimating equations account for intra-subject correlation and allow exploration of the time course of response, as well as providing a useful way to account for missing data when such data can be maintained as missing in the analysis. The utility of this approach is illustrated by an example from a trial in influenza. Copyright © 2004 John Wiley & Sons Ltd.
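A brief sketch of the GEE approach described, using statsmodels with a binomial family and an exchangeable working correlation for a binary outcome measured at repeated visits; the influenza-style data are simulated, and the variable names are invented.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(8)
n_subj, n_visits = 300, 4
subj = np.repeat(np.arange(n_subj), n_visits)
visit = np.tile(np.arange(1, n_visits + 1), n_subj)
treat = np.repeat(rng.integers(0, 2, size=n_subj), n_visits)
subj_effect = np.repeat(rng.normal(0, 1, size=n_subj), n_visits)   # induces intra-subject correlation

logit = -1.0 + 0.4 * visit + 0.8 * treat + subj_effect
prob = 1 / (1 + np.exp(-logit))
df = pd.DataFrame({"subject": subj, "visit": visit, "treat": treat,
                   "resolved": rng.binomial(1, prob)})

# GEE with a binomial family and an exchangeable working correlation structure.
model = smf.gee("resolved ~ treat + visit", groups="subject", data=df,
                family=sm.families.Binomial(),
                cov_struct=sm.cov_struct.Exchangeable())
print(model.fit().summary())
```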

17.
This paper describes how a multistage analysis strategy for a clinical trial can assess a sequence of hypotheses that pertain to successively more stringent criteria for excess risk exclusion or superiority for a primary endpoint with a low event rate. The criteria for assessment can correspond to excess risk of an adverse event or to a guideline for sufficient efficacy as in the case of vaccine trials. The proposed strategy is implemented through a set of interim analyses, and success for one or more of the less stringent criteria at an interim analysis can be the basis for a regulatory submission, whereas the clinical trial continues to accumulate information to address the more stringent, but not futile, criteria. Simulations show that the proposed strategy is satisfactory for control of type I error, sufficient power, and potential success at interim analyses when the true relative risk is more favorable than assumed for the planned sample size. Copyright © 2013 John Wiley & Sons, Ltd.

18.
The term “intercurrent events” has recently been used to describe events in clinical trials that may complicate the definition and calculation of the treatment effect estimand. This paper focuses on the use of an attributable estimand to address intercurrent events. Events that are considered to be adversely related to randomized treatment (eg, discontinuation due to adverse events or lack of efficacy) are considered attributable and handled with a composite estimand strategy, while a hypothetical estimand strategy is used for intercurrent events not considered to be related to randomized treatment (eg, unrelated adverse events). We explore several options for implementing this approach and compare them with hypothetical “efficacy” and treatment policy estimand strategies through a series of simulation studies whose design is inspired by recent trials in chronic obstructive pulmonary disease (COPD), and we illustrate the approach through an analysis of a recently completed COPD trial.

19.
Mixed treatment comparison (MTC) models rely on estimates of relative effectiveness from randomized clinical trials so as to respect randomization across treatment arms. This approach could potentially be simplified by an alternative parameterization of the way effectiveness is modeled. We introduce a treatment-based parameterization of the MTC model that estimates outcomes at both the study and treatment levels. We compare the proposed model with commonly used MTC models using a simulation study as well as three randomized clinical trial datasets from published systematic reviews comparing (i) treatments for bleeding after cirrhosis, (ii) the impact of antihypertensive drugs in diabetes mellitus, and (iii) smoking cessation strategies. The simulation results suggest similar, or sometimes better, performance of the treatment-based MTC model. Moreover, in the real data analyses, little difference was observed in the inferences drawn from the two models. Overall, our proposed MTC approach performed as well as, or better than, the commonly applied indirect and MTC models, and it is simpler, faster, and easier to implement in standard statistical software. Copyright © 2015 John Wiley & Sons, Ltd.

20.
We present new statistical analyses of data arising from a clinical trial designed to compare two-stage dynamic treatment regimes (DTRs) for advanced prostate cancer. The trial protocol mandated that patients be initially randomized among four chemotherapies, and that those who responded poorly be rerandomized to one of the remaining candidate therapies. The primary aim was to compare the DTRs' overall success rates, with success defined by the occurrence of successful responses in each of two consecutive courses of the patient's therapy. Of the one hundred and fifty study participants, forty-seven did not complete their therapy per the algorithm. However, thirty-five of them did so for reasons that precluded further chemotherapy, i.e., toxicity and/or progressive disease. Consequently, rather than comparing the overall success rates of the DTRs in the unrealistic event that these patients had remained on their assigned chemotherapies, we conducted an analysis that compared viable switch rules defined by the per-protocol rules but with the additional provision that patients who developed toxicity or progressive disease switch to a non-prespecified therapeutic or palliative strategy. This modification involved consideration of bivariate per-course outcomes encoding both efficacy and toxicity. We used numerical scores elicited from the trial's Principal Investigator to quantify the clinical desirability of each bivariate per-course outcome, and defined one endpoint as their average over all courses of treatment. Two other, simpler sets of scores, as well as log survival time, were also used as endpoints. Estimation of each DTR-specific mean score was conducted using inverse probability weighted methods that assumed that missingness in the twelve remaining drop-outs was informative but explainable, in that it depended only on past recorded data. We conducted additional worst-best case analyses to evaluate the sensitivity of our findings to extreme departures from the explainable drop-out assumption.
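A schematic sketch of the inverse-probability-weighting step on fully invented toy data: the probability of remaining "complete" is modelled from recorded past data (here, course-1 toxicity only), completers are weighted by its inverse, and a weighted mean desirability score is computed per initial therapy. The actual analysis also handled rerandomization, the two-stage regime structure, and elicited per-course scores.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(20)
n = 150
df = pd.DataFrame({
    "first_line": rng.integers(0, 4, size=n),          # four initial chemotherapies
    "toxicity_course1": rng.binomial(1, 0.3, size=n),  # recorded past data
    "score": rng.uniform(0, 100, size=n),              # per-course desirability score, averaged (toy)
})
# Drop-out depends only on recorded course-1 toxicity ("informative but explainable").
p_drop = 1 / (1 + np.exp(-(-2.0 + 1.5 * df["toxicity_course1"])))
df["complete"] = rng.binomial(1, 1 - p_drop)

# Model the probability of remaining complete, then weight completers by its inverse.
X = sm.add_constant(df[["toxicity_course1"]])
p_complete = sm.Logit(df["complete"], X).fit(disp=0).predict(X)
df["ipw"] = np.where(df["complete"] == 1, 1.0 / p_complete, 0.0)

for arm, sub in df.groupby("first_line"):
    est = np.sum(sub["ipw"] * sub["score"]) / np.sum(sub["ipw"])
    print(f"first-line therapy {arm}: IPW mean desirability score = {est:.1f}")
```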
