Similar Articles
20 similar articles found.
1.
The estimand framework included in the addendum to the ICH E9 guideline facilitates discussions to ensure alignment between the key question of interest, the analysis, and interpretation. Therapeutic knowledge and drug mechanism play a crucial role in determining the strategy and defining the estimand for clinical trial designs. Clinical trials in patients with hematological malignancies often present unique challenges for trial design due to the complexity of treatment options and the existence of potentially curative but high-risk procedures, for example, stem cell transplant, or treatment sequences across different phases (induction, consolidation, maintenance). Here, we illustrate how to apply the estimand framework in hematological clinical trials and how the estimand framework can address potential difficulties in trial result interpretation. This paper is a result of a cross-industry collaboration to connect the International Council for Harmonisation (ICH) E9 addendum concepts to applications. Three randomized phase 3 trials will be used to consider common challenges, including intercurrent events in hematologic oncology trials, and to illustrate different scientific questions and the consequences of the estimand choice for trial design, data collection, analysis, and interpretation. Template language for describing estimands in both study protocols and statistical analysis plans is suggested for statisticians' reference.

2.
The addendum of the ICH E9 guideline on the statistical principles for clinical trials introduced the estimand framework. The framework is designed to strengthen the dialog between different stakeholders, to introduce greater clarity in the clinical trial objectives, and to provide alignment between the estimand and the statistical analysis. Publications related to the estimand framework have thus far mainly focused on randomized clinical trials. The intention of the Early Development Estimand Nexus (EDEN), a task force of the cross-industry Oncology Estimand Working Group (www.oncoestimand.org), is to apply it to single-arm Phase 1b or Phase 2 trials designed to detect a treatment-related efficacy signal, typically measured by objective response rate. Key recommendations regarding the estimand attributes include that, in a single-arm early clinical trial, the treatment attribute should start when the first dose is received by the participant. Focusing on the estimation of an absolute effect, the population-level summary measure should reflect only the property used for the estimation. Another major component introduced in the ICH E9 addendum is the definition of intercurrent events and the associated possible ways to handle them. Different strategies reflect different clinical questions of interest that can be answered based on the journeys an individual subject can take during a trial. We provide detailed strategy recommendations for intercurrent events typically seen in early-stage oncology. We highlight where implicit assumptions should be made transparent; for example, whenever follow-up is suspended, a while-on-treatment strategy is implied.

3.
The draft addendum to the ICH E9 regulatory guideline asks for explicit definition of the treatment effect to be estimated in clinical trials. The draft guideline also introduces the concept of intercurrent events to describe events that occur post-randomisation and may affect the efficacy assessment. Composite estimands allow incorporation of intercurrent events in the definition of the endpoint. A common example of an intercurrent event is discontinuation of randomised treatment, and use of a composite strategy would assess the treatment effect based on a variable that combines the outcome variable of interest with discontinuation of randomised treatment. Use of a composite estimand may avoid the need for imputation, which would be required by a treatment policy estimand. The draft guideline gives the example of a binary approach for specifying a composite estimand. When the variable is measured on a non-binary scale, other options are available in which the intercurrent event is given an extreme unfavourable value, for example comparison of median values or analysis based on categories of response. This paper reviews approaches to deriving a composite estimand and contrasts the use of this estimand with the treatment policy estimand. The benefits of using each strategy are discussed, and examples of the use of the different approaches are given for a clinical trial in nasal polyposis and a steroid reduction trial in severe asthma.
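As a rough illustration of the non-binary composite strategy described above, the sketch below assigns an extreme unfavourable value to patients who discontinue randomised treatment and then compares the arms on medians and ranks. The simulated scores, the penalty value, and the Mann-Whitney comparison are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)

# Hypothetical trial data: change in a symptom score (lower = better),
# with a flag for discontinuation of randomised treatment (the intercurrent event).
n = 100
active = {"score": rng.normal(-2.0, 3.0, n), "discontinued": rng.random(n) < 0.15}
control = {"score": rng.normal(-0.5, 3.0, n), "discontinued": rng.random(n) < 0.30}

def composite_values(arm, penalty):
    """Composite strategy: discontinuation is folded into the endpoint by
    assigning an extreme unfavourable value (here, a large positive change)."""
    vals = arm["score"].copy()
    vals[arm["discontinued"]] = penalty
    return vals

penalty = 15.0  # chosen to be worse than any plausible observed value
a = composite_values(active, penalty)
c = composite_values(control, penalty)

# Once the penalty is imposed the scale is effectively ordinal,
# so compare medians / ranks rather than means.
print("medians (active, control):", np.median(a), np.median(c))
print("Mann-Whitney p-value:", mannwhitneyu(a, c, alternative="two-sided").pvalue)
```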

4.
The estimand framework requires a precise definition of the clinical question of interest (the estimand) as different ways of accounting for “intercurrent” events post randomization may result in different scientific questions. The initiation of subsequent therapy is common in oncology clinical trials and is considered an intercurrent event if the start of such therapy occurs prior to a recurrence or progression event. Three possible ways to account for this intercurrent event in the analysis are to censor at initiation, consider recurrence or progression events (including death) that occur before and after the initiation of subsequent therapy, or consider the start of subsequent therapy as an event in and of itself. The new estimand framework clarifies that these analyses address different questions (“does the drug delay recurrence if no patient had received subsequent therapy?” vs “does the drug delay recurrence with or without subsequent therapy?” vs “does the drug delay recurrence or start of subsequent therapy?”). The framework facilitates discussions during clinical trial planning and design to ensure alignment between the key question of interest, the analysis, and interpretation. This article is a result of a cross-industry collaboration to connect the International Council for Harmonisation E9 addendum concepts to applications. Data from previously reported randomized phase 3 studies in the renal cell carcinoma setting are used to consider common intercurrent events in solid tumor studies, and to illustrate different scientific questions and the consequences of the estimand choice for study design, data collection, analysis, and interpretation.
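To make the three accounting choices concrete, the sketch below derives an analysis time and event indicator for each strategy from hypothetical patient-level data. The data frame, column names, and function names are illustrative assumptions rather than code from the cited studies.

```python
import pandas as pd
import numpy as np

# Hypothetical patient-level data (times in months); column names are illustrative.
df = pd.DataFrame({
    "t_progression": [10.0, np.nan, 7.5, np.nan],   # recurrence/progression (or death) time
    "t_subsequent":  [np.nan, 6.0, 5.0, np.nan],    # start of subsequent anticancer therapy
    "t_lastfu":      [10.0, 14.0, 12.0, 18.0],      # last follow-up
})

def censor_at_initiation(r):
    # "...if no patient had received subsequent therapy?" -> censor at initiation
    if not np.isnan(r.t_subsequent) and (np.isnan(r.t_progression) or r.t_subsequent < r.t_progression):
        return r.t_subsequent, 0
    return (r.t_progression, 1) if not np.isnan(r.t_progression) else (r.t_lastfu, 0)

def treatment_policy(r):
    # "...with or without subsequent therapy?" -> ignore the intercurrent event
    return (r.t_progression, 1) if not np.isnan(r.t_progression) else (r.t_lastfu, 0)

def composite(r):
    # "...delay recurrence or start of subsequent therapy?" -> earliest of the two is an event
    times = [t for t in (r.t_progression, r.t_subsequent) if not np.isnan(t)]
    return (min(times), 1) if times else (r.t_lastfu, 0)

for name, f in [("censor-at-initiation", censor_at_initiation),
                ("treatment-policy", treatment_policy),
                ("composite", composite)]:
    derived = df.apply(lambda r: pd.Series(f(r), index=["time", "event"]), axis=1)
    print(name, "\n", derived, "\n")
```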

5.
The International Council for Harmonisation (ICH) guideline E9 Statistical Principles for Clinical Trials (1) was issued in 1998. In October 2014, an addendum to ICH E9 was proposed on statistical principles relating to estimands and sensitivity analyses. The final version of the addendum to ICH E9 (R1) (2) was issued in November 2019. This virtual edition of Pharmaceutical Statistics takes a closer look at some of the progress that has been made since 2018 in implementing the estimand framework within clinical research. The articles discussed in this virtual issue are not new, but rather a compilation from previous issues. This article will act as a refresher for those not familiar with the topic and discuss the ABCs of estimands and their proposed deployment for improving the quality of clinical research. An overview of the more recent Pharmaceutical Statistics articles on estimands will be provided, signifying areas where progress has been made. The articles should be considered as contributions to the ongoing discussions rather than the final word. Finally, a personal perspective on the estimand success story and the remaining challenges, with proposed solutions, will be discussed.

6.
Over the past years, significant progress has been made in developing statistically rigorous methods to implement clinically interpretable sensitivity analyses for assumptions about the missingness mechanism in clinical trials for continuous and (to a lesser extent) for binary or categorical endpoints. Studies with time-to-event outcomes have received much less attention. However, such studies can be similarly challenged with respect to the robustness and integrity of primary analysis conclusions when a substantial number of subjects withdraw from treatment prematurely prior to experiencing an event of interest. We discuss how the methods that are widely used for primary analyses of time-to-event outcomes could be extended in a clinically meaningful and interpretable way to stress-test the assumption of ignorable censoring. We focus on a ‘tipping point’ approach, the objective of which is to postulate sensitivity parameters with a clear clinical interpretation and to identify a setting of these parameters unfavorable enough towards the experimental treatment to nullify a conclusion that was favorable to that treatment. Robustness of primary analysis results can then be assessed based on the clinical plausibility of the scenario represented by the tipping point. We study several approaches for conducting such analyses based on multiple imputation using parametric, semi-parametric, and non-parametric imputation models and evaluate their operating characteristics via simulation. We argue that these methods are valuable tools for sensitivity analyses of time-to-event data and conclude that the method based on a piecewise exponential imputation model for survival has some advantages over the other methods studied here. Copyright © 2016 John Wiley & Sons, Ltd.
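A minimal sketch of the tipping-point idea is given below, assuming an exponential working model and simulated data: censored subjects in the experimental arm are imputed with a hazard inflated by a sensitivity parameter delta, and the analysis is repeated over a grid of delta values to see when a favorable conclusion would be overturned. Summarizing by the median p-value across imputations is a crude shortcut, and none of the paper's parametric, semi-parametric, or non-parametric imputation models are implemented here.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

def exp_rate(time, event):
    """Exponential MLE hazard: events / total exposure."""
    return event.sum() / time.sum()

def loghr_pvalue(tE, tC):
    """Wald test of the log hazard ratio under an exponential working model
    (all times are treated as events after imputation)."""
    loghr = np.log(exp_rate(tE, np.ones_like(tE)) / exp_rate(tC, np.ones_like(tC)))
    se = np.sqrt(1 / len(tE) + 1 / len(tC))
    return loghr, 2 * norm.sf(abs(loghr / se))

# Hypothetical trial: experimental arm truly better, with administrative censoring.
n = 200
tE_true = rng.exponential(1 / 0.06, n); tC_true = rng.exponential(1 / 0.10, n)
cens = rng.uniform(5, 30, n)
tE, eE = np.minimum(tE_true, cens), tE_true <= cens
tC, eC = np.minimum(tC_true, cens), tC_true <= cens
lam_e, lam_c = exp_rate(tE, eE), exp_rate(tC, eC)

# Tipping-point scan: censored experimental-arm subjects get residual times drawn from
# a hazard inflated by delta; control-arm censored subjects are imputed ignorably.
# delta = 1 corresponds to ignorable censoring in both arms.
for delta in [1.0, 1.5, 2.0, 3.0, 5.0]:
    pvals = []
    for _ in range(100):  # crude summary across imputations; Rubin's rules omitted
        impE = tE + np.where(eE, 0.0, rng.exponential(1 / (delta * lam_e), n))
        impC = tC + np.where(eC, 0.0, rng.exponential(1 / lam_c, n))
        pvals.append(loghr_pvalue(impE, impC)[1])
    print(f"delta={delta:4.1f}  median imputed-data p-value={np.median(pvals):.4f}")
```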

7.
The International Council for Harmonization (ICH) E9(R1) addendum recommends choosing an appropriate estimand based on the study objectives in advance of trial design. One defining attribute of an estimand is the handling of intercurrent events: specifically, what is considered an intercurrent event and how it should be handled. The primary objective of a clinical study is usually to assess a product's effectiveness and safety based on the planned treatment regimen rather than the actual treatment received. The estimand using the treatment policy strategy, which collects and analyzes data regardless of the occurrence of intercurrent events, is therefore usually utilized. In this article, we explain how missing data can be handled using the treatment policy strategy, from the authors' viewpoint, in connection with antihyperglycemic product development programs. The article discusses five statistical methods to impute missing data occurring after intercurrent events. All five methods are applied within the framework of the treatment policy strategy. The article compares the five methods via Markov chain Monte Carlo simulations and showcases how three of these five methods have been applied to estimate the treatment effects published in the labels of three antihyperglycemic agents currently on the market.

8.
9.
Clinical trials of experimental treatments must be designed with primary endpoints that directly measure clinical benefit for patients. In many disease areas, the recognised gold-standard primary endpoint can take many years to mature, leading to challenges in the conduct and quality of clinical studies. There is increasing interest in using shorter-term surrogate endpoints as substitutes for costly long-term clinical trial endpoints; such surrogates need to be selected according to biological plausibility, as well as the ability to reliably predict the unobserved treatment effect on the long-term endpoint. A number of statistical methods to evaluate this prediction have been proposed; this paper uses a simulation study to explore one such method in the context of time-to-event surrogates for a time-to-event true endpoint. This two-stage meta-analytic copula method has been extensively studied for time-to-event surrogate endpoints with one event of interest, but thus far has not been explored for the assessment of surrogates which have multiple events of interest, such as those incorporating information directly from the true clinical endpoint. We assess the sensitivity of the method to various factors, including the strength of association between endpoints, the quantity of data available, and the effect of censoring. In particular, we consider scenarios where very little data are available on which to assess surrogacy. Results show that the two-stage meta-analytic copula method performs well under certain circumstances and could be considered useful in practice, but demonstrates limitations that may prevent universal use.

10.
In some randomized (drug versus placebo) clinical trials, the estimand of interest is the between-treatment difference in population means of a clinical endpoint that is free from the confounding effects of “rescue” medication (e.g., HbA1c change from baseline at 24 weeks that would be observed without rescue medication regardless of whether or when the assigned treatment was discontinued). In such settings, a missing data problem arises if some patients prematurely discontinue from the trial or initiate rescue medication while in the trial, the latter necessitating the discarding of post-rescue data. We caution that the commonly used mixed-effects model repeated measures analysis with the embedded missing at random assumption can deliver an exaggerated estimate of the aforementioned estimand of interest. This happens, in part, due to implicit imputation of an overly optimistic mean for “dropouts” (i.e., patients with missing endpoint data of interest) in the drug arm. We propose an alternative approach in which the missing mean for the drug arm dropouts is explicitly replaced with either the estimated mean of the entire endpoint distribution under placebo (primary analysis) or a sequence of increasingly more conservative means within a tipping point framework (sensitivity analysis); patient-level imputation is not required. A supplemental “dropout = failure” analysis is considered in which a common poor outcome is imputed for all dropouts, followed by a between-treatment comparison using quantile regression. All analyses address the same estimand and can adjust for baseline covariates. Three examples and simulation results are used to support our recommendations.
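The arithmetic of the proposed mean replacement is simple enough to sketch directly. The summary statistics below are invented for illustration, and the tipping-point shifts form an assumed grid rather than values from the paper.

```python
import numpy as np

# Hypothetical summary statistics for HbA1c change from baseline at week 24
# (values are illustrative, not from any real trial).
mean_drug_completers, n_drug_completers, n_drug_dropouts = -1.10, 160, 40
mean_placebo_all = -0.20   # estimated mean of the entire placebo endpoint distribution

def drug_arm_mean(dropout_mean):
    """Drug-arm mean with an explicit mean substituted for dropouts
    (no patient-level imputation required)."""
    n = n_drug_completers + n_drug_dropouts
    return (n_drug_completers * mean_drug_completers + n_drug_dropouts * dropout_mean) / n

# Primary analysis: drug-arm dropouts are assigned the overall placebo mean.
primary_effect = drug_arm_mean(mean_placebo_all) - mean_placebo_all
print(f"primary treatment difference: {primary_effect:.3f}")

# Sensitivity analysis: a sequence of increasingly conservative dropout means
# (tipping-point framework) to see when the estimated benefit disappears.
for shift in [0.0, 0.2, 0.4, 0.6, 0.8]:
    eff = drug_arm_mean(mean_placebo_all + shift) - mean_placebo_all
    print(f"dropout mean = placebo mean + {shift:.1f}: difference = {eff:.3f}")
```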

11.
Clinical trials with event-time outcomes as co-primary contrasts are common in many areas such as infectious disease, oncology, and cardiovascular disease. We discuss methods for calculating the sample size for randomized superiority clinical trials with two correlated time-to-event outcomes as co-primary contrasts when the time-to-event outcomes are exponentially distributed. The approach is simple and easily applied in practice. Copyright © 2012 John Wiley & Sons, Ltd.
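For orientation only, the sketch below computes a standard Schoenfeld-type number of events for each endpoint and a joint power for the co-primary requirement under the simplifying assumption of independent test statistics; the paper's method accounts for the correlation between the two exponential endpoints, which this sketch deliberately ignores. The hazard ratios, alpha, and power are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def events_required(hr, alpha=0.025, power=0.80, alloc=0.5):
    """Schoenfeld-type number-of-events formula for one time-to-event endpoint
    (one-sided alpha, log-rank / exponential approximation)."""
    za, zb = norm.ppf(1 - alpha), norm.ppf(power)
    return (za + zb) ** 2 / (alloc * (1 - alloc) * np.log(hr) ** 2)

def joint_power(hr1, hr2, d1, d2, alpha=0.025, alloc=0.5):
    """Power to declare BOTH co-primary endpoints significant, computed here
    under the simplifying assumption of independent test statistics."""
    za = norm.ppf(1 - alpha)
    p = 1.0
    for hr, d in [(hr1, d1), (hr2, d2)]:
        mu = abs(np.log(hr)) * np.sqrt(alloc * (1 - alloc) * d)
        p *= norm.cdf(mu - za)
    return p

# Illustrative hazard ratios for the two co-primary endpoints.
d1, d2 = events_required(0.70), events_required(0.75)
print(f"events needed individually: {d1:.0f} and {d2:.0f}")
print(f"joint power with those event counts (independence approximation): "
      f"{joint_power(0.70, 0.75, d1, d2):.3f}")
```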

12.
In clinical trials, missing data commonly arise through nonadherence to the randomized treatment or to study procedures. For trials in which recurrent event endpoints are of interest, conventional analyses using the proportional intensity model or the count model assume that the data are missing at random, which cannot be tested using the observed data alone. Thus, sensitivity analyses are recommended. We implement control-based multiple imputation as a sensitivity analysis for recurrent event data. We model the recurrent events using a piecewise exponential proportional intensity model with frailty and sample the parameters from the posterior distribution. We impute the number of events after dropout and correct the variance estimation using a bootstrap procedure. We apply the method to data from a sitagliptin study.
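The sketch below conveys the control-based idea in a heavily simplified form: unobserved post-dropout follow-up is imputed from a Poisson model with the control-arm event rate in both arms. It uses a plug-in rate on simulated data and does not implement the paper's piecewise exponential frailty model, posterior sampling, or bootstrap variance correction.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical recurrent-event data: observed event counts, the fraction of planned
# follow-up actually observed (dropouts have unobserved exposure to impute).
n = 150
exp_plan = np.full(n, 1.0)                       # planned 1 year of follow-up per patient
obs_frac_trt = np.where(rng.random(n) < 0.2, rng.uniform(0.3, 0.9, n), 1.0)
obs_frac_ctl = np.where(rng.random(n) < 0.2, rng.uniform(0.3, 0.9, n), 1.0)
events_trt = rng.poisson(0.8 * obs_frac_trt)     # true treated rate 0.8 / year
events_ctl = rng.poisson(1.4 * obs_frac_ctl)     # true control rate 1.4 / year

rate_ctl = events_ctl.sum() / obs_frac_ctl.sum()   # observed control event rate (plug-in)

def control_based_impute(events, obs_frac, m=100):
    """Impute the unobserved post-dropout counts from a Poisson with the CONTROL
    rate (reference-based assumption), m times; return imputed overall rates."""
    missing_exposure = exp_plan - obs_frac
    totals = []
    for _ in range(m):
        imputed = rng.poisson(rate_ctl * missing_exposure)
        totals.append((events + imputed).sum() / exp_plan.sum())
    return np.array(totals)

rate_trt_cbi = control_based_impute(events_trt, obs_frac_trt)
rate_ctl_cbi = control_based_impute(events_ctl, obs_frac_ctl)
print("imputed rate ratio (treated/control):",
      round(np.mean(rate_trt_cbi) / np.mean(rate_ctl_cbi), 3))
```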

13.
A randomized trial allows estimation of the causal effect of an intervention compared to a control in the overall population and in subpopulations defined by baseline characteristics. Often, however, clinical questions also arise regarding the treatment effect in subpopulations of patients who would experience clinical or disease-related events post-randomization. Events that occur after treatment initiation and potentially affect the interpretation or the existence of the measurements are called intercurrent events in the ICH E9(R1) guideline. If the intercurrent event is a consequence of treatment, randomization alone is no longer sufficient to meaningfully estimate the treatment effect. Analyses comparing the subgroups of patients without the intercurrent event on the intervention and control arms will not estimate a causal effect. This is well known, but post-hoc analyses of this kind are commonly performed in drug development. An alternative approach is the principal stratum strategy, which classifies subjects according to their potential occurrence of an intercurrent event on both study arms. We illustrate with examples that questions formulated through principal strata occur naturally in drug development and argue that approaching these questions with the ICH E9(R1) estimand framework has the potential to lead to more transparent assumptions as well as more adequate analyses and conclusions. In addition, we provide an overview of the assumptions required for estimation of effects in principal strata. Most of these assumptions are unverifiable and should hence be based on solid scientific understanding. Sensitivity analyses are needed to assess the robustness of conclusions.

14.
15.
For clinical trials with multiple endpoints, the primary interest is usually to evaluate the relationship between these endpoints and treatment interventions. Studying the correlation of two clinical trial endpoints can also be of interest. For example, the association between a patient-reported outcome and a clinically assessed endpoint could answer important research questions and also generate interesting hypotheses for future research. However, it is not straightforward to quantify such an association. In this article, we propose a multiple event approach to profile such an association with a temporal correlation function, visualized by a correlation function plot over time with a confidence band. We developed this approach by extending existing methodology in the recurrent event literature. This approach was shown to be generally unbiased and could be a useful tool for data visualization and inference. We demonstrate the use of this method with data from a real clinical trial. Although this approach was developed to evaluate the association between patient-reported outcomes and adverse events, it can also be used to evaluate the association of any two endpoints that can be translated to time-to-event endpoints. Copyright © 2015 John Wiley & Sons, Ltd.

16.
The analysis of time-to-event data typically makes the censoring at random assumption, ie, that—conditional on covariates in the model—the distribution of event times is the same, whether they are observed or unobserved (ie, right censored). When patients who remain in follow-up stay on their assigned treatment, then analysis under this assumption broadly addresses the de jure, or “while on treatment strategy” estimand. In such cases, we may well wish to explore the robustness of our inference to more pragmatic, de facto or “treatment policy strategy,” assumptions about the behaviour of patients post-censoring. This is particularly the case when censoring occurs because patients change, or revert, to the usual (ie, reference) standard of care. Recent work has shown how such questions can be addressed for trials with continuous outcome data and longitudinal follow-up, using reference-based multiple imputation. For example, patients in the active arm may have their missing data imputed assuming they reverted to the control (ie, reference) intervention on withdrawal. Reference-based imputation has two advantages: (a) it avoids the user specifying numerous parameters describing the distribution of patients' postwithdrawal data and (b) it is, to a good approximation, information anchored, so that the proportion of information lost due to missing data under the primary analysis is held constant across the sensitivity analyses. In this article, we build on recent work in the survival context, proposing a class of reference-based assumptions appropriate for time-to-event data. We report a simulation study exploring the extent to which the multiple imputation estimator (using Rubin's variance formula) is information anchored in this setting and then illustrate the approach by reanalysing data from a randomized trial, which compared medical therapy with angioplasty for patients presenting with angina.
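The sketch below illustrates one reference-based assumption of this kind in its simplest form: under an exponential working model, active-arm patients censored at withdrawal have residual event times imputed from the control (reference) hazard, so they "jump to reference" after censoring. The simulated data and the crude averaging of log hazard ratios are illustrative assumptions; Rubin's variance formula and the information-anchoring assessment of the paper are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)

def exp_rate(time, event):
    """Exponential MLE hazard: events / total exposure."""
    return event.sum() / time.sum()

# Hypothetical data: event times with some patients censored at treatment withdrawal.
n = 250
t_act = rng.exponential(1 / 0.05, n); t_ctl = rng.exponential(1 / 0.08, n)
wd_act = rng.uniform(2, 40, n); wd_ctl = rng.uniform(2, 40, n)
obs_act, e_act = np.minimum(t_act, wd_act), t_act <= wd_act
obs_ctl, e_ctl = np.minimum(t_ctl, wd_ctl), t_ctl <= wd_ctl

lam_ctl = exp_rate(obs_ctl, e_ctl)   # reference (control) hazard

# "Jump to reference": active-arm patients censored at withdrawal are assumed to revert
# to the control hazard afterwards; control-arm censored times are imputed from their own arm.
M, log_hrs = 100, []
for _ in range(M):
    imp_act = obs_act + np.where(e_act, 0.0, rng.exponential(1 / lam_ctl, n))
    imp_ctl = obs_ctl + np.where(e_ctl, 0.0, rng.exponential(1 / lam_ctl, n))
    log_hrs.append(np.log((n / imp_act.sum()) / (n / imp_ctl.sum())))

print("reference-based log hazard ratio (exponential sketch):",
      round(float(np.mean(log_hrs)), 3))
```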

17.
A variety of primary endpoints are used in clinical trials treating patients with severe infectious diseases, and existing guidelines do not provide a consistent recommendation. We propose to study two primary endpoints, cure and death, simultaneously in a comprehensive multistate cure-death model as the starting point for a treatment comparison. This technique enables us to study the temporal dynamics of the patient-relevant probability of being cured and alive. We describe and compare traditional and innovative methods suitable for a treatment comparison based on this model. Traditional analyses using risk differences focus on one prespecified timepoint only. A restricted logrank-based test of treatment effect is sensitive to ordered categories of responses and integrates information on duration of response. The pseudo-value regression provides a direct regression model for examination of the treatment effect via differences in transition probabilities. Applied to a topical real data example and simulation scenarios, we demonstrate advantages and limitations and provide an insight into how these methods can handle different kinds of treatment imbalances. The cure-death model provides a suitable framework to gain a better understanding of how a new treatment influences the time-dynamic cure and death process. This might help the future planning of randomised clinical trials, sample size calculations, and data analyses.
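To show what "the probability of being cured and alive over time" means in such a model, the simulation sketch below uses a three-state cure-death structure with constant transition hazards and compares two hypothetical treatments. The hazards and sample sizes are invented, and the paper's restricted logrank-based test and pseudo-value regression are not implemented.

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate_cure_death(n, h_cure, h_death_ill, h_death_cured):
    """Simulate a three-state cure-death model (ill -> cured, ill -> dead, cured -> dead)
    with constant hazards; times in days. Returns cure indicator, cure time, death time."""
    t_cure = rng.exponential(1 / h_cure, n)
    t_death_ill = rng.exponential(1 / h_death_ill, n)
    cured = t_cure < t_death_ill
    t_death = np.where(cured, t_cure + rng.exponential(1 / h_death_cured, n), t_death_ill)
    return cured, t_cure, t_death

def prob_cured_alive(cured, t_cure, t_death, t):
    """Patient-relevant probability of being cured AND alive at time t."""
    return np.mean(cured & (t_cure <= t) & (t_death > t))

# Hypothetical hazards: the new treatment cures faster, with the same death hazards.
arm_ctl = simulate_cure_death(50_000, h_cure=1/14, h_death_ill=1/40, h_death_cured=1/400)
arm_trt = simulate_cure_death(50_000, h_cure=1/9,  h_death_ill=1/40, h_death_cured=1/400)

for t in (7, 14, 28, 60):
    print(f"day {t:>2}: P(cured & alive) control={prob_cured_alive(*arm_ctl, t):.3f} "
          f"treatment={prob_cured_alive(*arm_trt, t):.3f}")
```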

18.
In drug development, we ask ourselves which population, endpoint, and treatment comparison should be investigated. In this context, we also debate what matters most to the different stakeholders involved in clinical drug development, for example, patients, physicians, regulators, and payers. With the publication of the draft ICH E9 addendum on estimands in 2017, we now have a common framework and language to discuss such questions in an informed and transparent way. This has led to the estimand discussion becoming a key element in study development, including the design, analysis, and interpretation of a treatment effect. At an invited session at the 2018 PSI annual conference, PSI hosted a role-play debate whose aim was to mimic a regulatory and payer scientific advice discussion for a COPD drug, including role-play views from an industry sponsor, a patient, a regulator, and a payer. This paper presents the invented COPD case-study design, and considerations relating to appropriate estimands are discussed by each of the stakeholders from their differing viewpoints, with the additional inclusion of a technical (academic) perspective. The rationale for each perspective on approaches for handling intercurrent events is presented, with a key emphasis on the application of while-on-treatment and treatment policy estimands in this context. It is increasingly recognised that the treatment effect estimated by the treatment policy approach may not always be of primary clinical interest and may not appropriately communicate to patients the efficacy they can expect if they take the treatment as directed.

19.
For clinical trials with time-to-event endpoints, predicting the accrual of the events of interest with precision is critical in determining the timing of interim and final analyses. For example, overall survival (OS) is often chosen as the primary efficacy endpoint in oncology studies, with planned interim and final analyses at a pre-specified number of deaths. Often, correlated surrogate information, such as time-to-progression (TTP) and progression-free survival, is also collected as secondary efficacy endpoints. It would be appealing to borrow strength from the surrogate information to improve the precision of the analysis time prediction. Currently available methods in the literature for predicting analysis timings do not consider utilizing the surrogate information. In this article, using OS and TTP as an example, a general parametric model for OS and TTP is proposed, with the assumption that disease progression could change the course of overall survival. Progression-free survival, related to both OS and TTP, will be handled separately, as it can be derived from OS and TTP. The authors seek to develop a prediction procedure using a Bayesian method and provide detailed implementation strategies under certain assumptions. Simulations are performed to evaluate the performance of the proposed method. An application to a real study is also provided. Copyright © 2015 John Wiley & Sons, Ltd.
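A much-simplified Bayesian event-prediction sketch is shown below: an exponential death hazard with a conjugate gamma posterior is used to simulate residual survival for patients still at risk and predict when the target number of deaths will be reached. The interim snapshot, prior, and target are illustrative assumptions, and the paper's joint OS/TTP model that borrows strength from progression information is not implemented.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical interim snapshot: 300 enrolled, 120 deaths so far, exposures in months.
n, d_obs = 300, 120
exposure = rng.uniform(1, 24, n)            # current follow-up per patient (illustrative)
dead = np.zeros(n, dtype=bool); dead[:d_obs] = True
target_deaths = 200

# Conjugate update for an exponential death hazard: Gamma(a0, b0) prior,
# posterior Gamma(a0 + deaths, b0 + total exposure).
a0, b0 = 0.1, 0.1
a_post, b_post = a0 + dead.sum(), b0 + exposure.sum()

def predict_time_to_target(n_sims=2000):
    """Simulate residual survival for patients still at risk and return percentiles of
    the additional calendar time until the target death count is reached."""
    at_risk = ~dead
    preds = []
    for _ in range(n_sims):
        lam = rng.gamma(a_post, 1 / b_post)                  # draw hazard from posterior
        residual = rng.exponential(1 / lam, at_risk.sum())   # residual time for survivors
        needed = target_deaths - dead.sum()
        preds.append(np.sort(residual)[needed - 1])          # time of the needed-th extra death
    return np.percentile(preds, [10, 50, 90])

lo, med, hi = predict_time_to_target()
print(f"predicted additional months to {target_deaths} deaths: "
      f"median {med:.1f} (80% interval {lo:.1f} to {hi:.1f})")
```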

20.
The stratified Cox model is commonly used for stratified clinical trials with time-to-event endpoints. The estimated log hazard ratio is approximately a weighted average of the corresponding stratum-specific Cox model estimates using inverse-variance weights; the latter are optimal only under the (often implausible) assumption of a constant hazard ratio across strata. Focusing on trials with limited sample sizes (50-200 subjects per treatment), we propose an alternative approach in which stratum-specific estimates are obtained using a refined generalized logrank (RGLR) approach and then combined using either sample size or minimum risk weights for overall inference. Our proposal extends the work of Mehrotra et al to incorporate the RGLR statistic, which outperforms the Cox model in the setting of proportional hazards and small samples. This work also entails the development of a remarkably accurate plug-in formula for the variance of RGLR-based estimated log hazard ratios. We demonstrate using simulations that our proposed two-step RGLR analysis delivers notably better results, through smaller estimation bias and mean squared error and larger power, than the stratified Cox model analysis when there is a treatment-by-stratum interaction, with similar performance when there is no interaction. Additionally, our method controls the type I error rate in small samples, while the stratified Cox model does not. We illustrate our method using data from a clinical trial comparing two treatments for colon cancer.
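The contrast between inverse-variance and pre-specified sample-size weights can be sketched in a few lines. The stratum-specific log hazard ratios, variances, and sample sizes below are invented, and they stand in for the RGLR-based estimates of the paper, which are not implemented here.

```python
import numpy as np

# Hypothetical stratum-specific estimated log hazard ratios, their variances,
# and per-stratum sample sizes (placeholders for RGLR-based estimates).
log_hr  = np.array([-0.45, -0.10, -0.60])
var     = np.array([0.090, 0.040, 0.150])
n_strat = np.array([60, 120, 40])

def combine(estimates, weights):
    """Weighted average of independent stratum estimates and its standard error."""
    w = weights / weights.sum()
    return np.sum(w * estimates), np.sqrt(np.sum(w**2 * var))

# Inverse-variance weights: optimal only if the hazard ratio is constant across strata.
iv_est, iv_se = combine(log_hr, 1 / var)
# Sample-size weights: pre-specified, more robust to a treatment-by-stratum interaction.
ss_est, ss_se = combine(log_hr, n_strat.astype(float))

print(f"inverse-variance: logHR={iv_est:.3f} (SE {iv_se:.3f})")
print(f"sample-size:      logHR={ss_est:.3f} (SE {ss_se:.3f})")
```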
