Similar Articles
20 similar articles found.
1.
The current guideline for the evaluation of non-antiarrhythmic compounds, ICH E14, requires a 'thorough' QT study (TQT) conducted during clinical development (ICH Guidance for Industry E14, 2005). Owing to the regulatory choice of margin (10 ms), TQT studies must be conducted to rigorous standards to ensure that variability is minimized. Some of the key sources of variation can be controlled by use of randomization, a crossover design, standardization of electrocardiogram (ECG) recording conditions and collection of replicate ECGs at each time point. However, one of the key factors in these studies is the baseline measurement, which, if not controlled and consistent across studies, could lead to significant misinterpretation. In this article, we examine three types of baseline methods widely used in TQT studies to derive a change from baseline in QTc (time-matched, time-averaged and pre-dose-averaged baseline). We discuss the impact of the baseline values on the guidance-recommended 'largest time-matched' analyses. Using simulation, we have shown the impact of these baseline approaches on the type I error and power for both crossover and parallel-group designs. We also show that the power of the study decreases as the number of time points tested in a TQT study increases. A time-matched baseline method is recommended by several authors (Drug Saf. 2005; 28(2):115-125; Health Canada guidance document: guide for the analysis and review of QT/QTc interval data, 2006) owing to the circadian rhythm in QT. However, the impact of the time-matched baseline method on statistical inference and sample size should be considered carefully during the design of a TQT study. The time-averaged baseline had the highest power in comparison with the other baseline approaches.
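As a concrete illustration of the three baseline definitions compared above, the following minimal sketch (not taken from the study; the time points and QTc values are hypothetical) derives the change from baseline in QTc under each approach.

```python
# Hypothetical QTc values (ms) at matched nominal time points; a sketch of the
# three baseline corrections discussed above, not code from the paper.
import pandas as pd

baseline_day = pd.Series([402.0, 398.0, 405.0, 400.0], index=["0.5h", "1h", "2h", "4h"])
dosing_day   = pd.Series([404.0, 407.0, 412.0, 403.0], index=["0.5h", "1h", "2h", "4h"])
pre_dose_replicates = pd.Series([401.0, 399.0, 400.0])   # replicate ECGs before dosing

# 1) time-matched: subtract the baseline value taken at the same nominal time point
delta_time_matched = dosing_day - baseline_day

# 2) time-averaged: subtract the mean of all baseline-day measurements
delta_time_averaged = dosing_day - baseline_day.mean()

# 3) pre-dose-averaged: subtract the mean of the replicate pre-dose ECGs
delta_pre_dose_averaged = dosing_day - pre_dose_replicates.mean()

print(pd.DataFrame({"time-matched": delta_time_matched,
                    "time-averaged": delta_time_averaged,
                    "pre-dose-averaged": delta_pre_dose_averaged}))
```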

2.
The revised ICH E14 Questions and Answers (R3) document issued in December 2015 enables pharmaceutical companies to use concentration-QTc (C-QTc) modeling as the primary analysis for assessing the QTc prolongation risk of new drugs. A new approach that includes a time effect in the current C-QTc model is introduced. Through a simulation study, we evaluated the performance of C-QTc models with different dependent variables, covariates, and covariance structures. This simulation study shows that C-QTc models with ΔQTc as the dependent variable and no time effect inflate the false negative rate, and that fitting C-QTc models with different dependent variables, covariates, and covariance structures affects the control of the false negative and false positive rates. Appropriate C-QTc modeling strategies with good control of the false negative rate and false positive rate are recommended.
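A hedged sketch of what a C-QTc mixed model with a categorical time effect might look like, roughly in the spirit of the approach described above; the simulated data, column names, true slope, and use of statsmodels are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
rows = []
for subj in range(24):
    u = rng.normal(0, 4)                                   # subject-level random intercept
    for t in [1, 2, 4, 8]:                                 # nominal post-dose hours
        conc = rng.lognormal(1.0, 0.3) * np.exp(-0.2 * t)  # hypothetical drug concentration
        dqtc = 2.0 + 1.5 * conc + u + rng.normal(0, 5)     # true slope: 1.5 ms per unit
        rows.append({"subject": subj, "time": t, "conc": conc, "dQTc": dqtc})
df = pd.DataFrame(rows)

# ΔQTc regressed on concentration plus a categorical time effect,
# with a random intercept per subject.
fit = smf.mixedlm("dQTc ~ conc + C(time)", df, groups=df["subject"]).fit()
print(fit.summary())
```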

3.
The QTc interval of the electrocardiogram is a pharmacodynamic biomarker for drug-induced cardiac toxicity. The ICH E14 guideline Questions and Answers offer a solution for evaluating a concentration-QTc relationship in early clinical studies as an alternative to conducting a thorough QT/QTc study. We focus on the covariance structure of QTc intervals on the baseline day and the dosing day (the two-day covariance structure) and propose a two-day QTc model to analyze the concentration-QTc relationship in placebo-controlled parallel-group phase 1 single ascending dose studies. The proposed two-day QTc model is based on a constrained longitudinal data analysis model and a mixed-effects model, allowing various variance components to capture the two-day covariance structure. We also propose a one-day QTc model for the situation where no baseline day or only a pre-dose baseline is available, and models for multiple ascending dose studies where concentration and QTc intervals are available over multiple days. A simulation study shows that the proposed models control the false negative rate for positive drugs and have both higher accuracy and power for negative drugs than existing models in a variety of settings for the two-day covariance structure. The proposed models will promote early and accurate evaluation of the cardiac safety of new drugs.

4.
ICH E14 calls for public comment by epidemiologists and statisticians on the practical implications of thresholds to be used for regulatory decision-making. Readers involved in QT/QTc assessment in drug development and those with an interest in this area are encouraged to give the topic some thought and to be prepared to engage in public debate on the proposed ICH E14 guidance in late 2004 and early 2005. Copyright © 2004 John Wiley & Sons Ltd.

5.
Owing to increased costs and competitive pressure, drug development is becoming more and more challenging. There is therefore a strong need to improve the efficiency of clinical research by developing and applying methods for quantitative decision making. In this context, the integrated planning of phase II/III programs plays an important role, as numerous quantities that are crucial for cost, benefit, and program success can be varied. Recently, a utility-based framework has been proposed for the optimal planning of phase II/III programs that puts the choice of decision boundaries and phase II sample sizes on a quantitative basis. However, this method is restricted to studies with a single time-to-event endpoint. We generalize this procedure to the setting of clinical trials with multiple endpoints and (asymptotically) normally distributed test statistics. Optimal phase II sample sizes and go/no-go decision rules are provided for both the "all-or-none" and "at-least-one" win criteria. Application of the proposed method is illustrated by drug development programs in the fields of Alzheimer disease and oncology.
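As a small illustration of the two win criteria mentioned above, the following Monte Carlo sketch (effect sizes, correlation, and significance level are hypothetical, and this is not the proposed optimization framework itself) estimates the probability of a "win" for two correlated, asymptotically normal test statistics.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n_sim = 100_000
mean = [2.2, 1.8]                       # noncentrality of the two endpoint z-statistics
cov = [[1.0, 0.5], [0.5, 1.0]]          # correlation between the two endpoints
z = rng.multivariate_normal(mean, cov, size=n_sim)
crit = stats.norm.ppf(0.975)            # one-sided 2.5% critical value

all_or_none  = np.mean((z > crit).all(axis=1))   # win only if every endpoint is significant
at_least_one = np.mean((z > crit).any(axis=1))   # win if any endpoint is significant
print(f"P(win | all-or-none):  {all_or_none:.3f}")
print(f"P(win | at-least-one): {at_least_one:.3f}")
```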

6.
One of the primary purposes of an oncology dose-finding trial is to identify an optimal dose (OD) that is both tolerable and shows an indication of therapeutic benefit for subjects in subsequent clinical trials. In addition, it is quite important to accelerate early-stage trials in order to shorten the entire period of drug development. However, it is often challenging to make adaptive dose escalation and de-escalation decisions in a timely manner because of fast accrual rates, different outcome evaluation periods for efficacy and toxicity, and late-onset outcomes. To solve these issues, we propose the time-to-event Bayesian optimal interval design to accelerate dose finding based on cumulative and pending data on both efficacy and toxicity. The new design, named the "TITE-BOIN-ET" design, is a nonparametric, model-assisted design. It is therefore robust, much simpler, and easier to implement in actual oncology dose-finding trials than model-based approaches. These characteristics are quite useful from a practical point of view. A simulation study shows that the TITE-BOIN-ET design has advantages over model-based approaches in both the percentage of correct OD selection and the average number of patients allocated to the ODs across a variety of realistic settings. In addition, the TITE-BOIN-ET design significantly shortens the trial duration compared with designs without sequential enrollment and therefore has the potential to accelerate early-stage dose-finding trials.
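For readers unfamiliar with the underlying interval idea, the following sketch computes the escalation/de-escalation boundaries of the standard BOIN design (Liu and Yuan); it is not the TITE-BOIN-ET design itself, which additionally handles efficacy and pending time-to-event outcomes, and the target DLT rate shown is hypothetical.

```python
import math

def boin_boundaries(phi, phi1=None, phi2=None):
    """Standard BOIN cut-offs on the observed DLT rate at the current dose."""
    phi1 = 0.6 * phi if phi1 is None else phi1    # highest rate deemed sub-therapeutic
    phi2 = 1.4 * phi if phi2 is None else phi2    # lowest rate deemed overly toxic
    lam_e = math.log((1 - phi1) / (1 - phi)) / math.log(phi * (1 - phi1) / (phi1 * (1 - phi)))
    lam_d = math.log((1 - phi) / (1 - phi2)) / math.log(phi2 * (1 - phi) / (phi * (1 - phi2)))
    return lam_e, lam_d

lam_e, lam_d = boin_boundaries(phi=0.30)   # target DLT probability of 30%
print(f"escalate if observed DLT rate <= {lam_e:.3f}")      # ~0.236
print(f"de-escalate if observed DLT rate >= {lam_d:.3f}")   # ~0.358
# e.g. 1/6 DLTs (0.167) -> escalate; 3/6 DLTs (0.500) -> de-escalate
```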

7.
A draft addendum to ICH E9 was released for public consultation in August 2017. The addendum focuses on two topics particularly relevant for randomized confirmatory clinical trials: estimands and sensitivity analyses. The need to amend ICH E9 grew out of the realization of a lack of alignment between the objectives of a clinical trial stated in the protocol and the accompanying quantification of the "treatment effect" reported in a regulatory submission. We embed time-to-event endpoints in the estimand framework and discuss how the four estimand attributes described in the addendum apply to time-to-event endpoints. We point out that if the proportional hazards assumption is not met, the estimand targeted by the most prevalent methods used to analyze time-to-event endpoints, the logrank test and Cox regression, depends on the censoring distribution. We discuss, for a large randomized clinical trial, how the analyses for the primary and secondary endpoints as well as the sensitivity analyses actually performed in the trial can be seen in the context of the addendum. To the best of our knowledge, this is the first attempt to do so for a trial with a time-to-event endpoint. Questions that remain open with the addendum for time-to-event endpoints and beyond are formulated, and recommendations for the planning of future trials are given. We hope that this will contribute to developing a common framework, based on the final version of the addendum, that can be applied to designs, protocols, statistical analysis plans, and clinical study reports in the future.
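The censoring-dependence of the Cox estimand under non-proportional hazards can be seen in a small simulation; this hedged sketch is not from the paper, the delayed-effect scenario, rates, and follow-up times are invented, and lifelines is assumed to be available for fitting the Cox model.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(3)
n = 5000

def simulate(follow_up_months):
    # control arm: exponential event times, rate 0.10 per month
    t_control = rng.exponential(1 / 0.10, n)
    # treatment arm: no effect for 6 months, then rate 0.05 per month
    # (a delayed effect, i.e. non-proportional hazards)
    early = rng.exponential(1 / 0.10, n)
    t_treat = np.where(early < 6, early, 6 + rng.exponential(1 / 0.05, n))
    t = np.concatenate([t_control, t_treat])
    arm = np.repeat([0, 1], n)
    return pd.DataFrame({"time": np.minimum(t, follow_up_months),      # administrative censoring
                         "event": (t <= follow_up_months).astype(int),
                         "arm": arm})

for follow_up in (12, 36):
    cph = CoxPHFitter().fit(simulate(follow_up), duration_col="time", event_col="event")
    print(f"follow-up {follow_up:>2} months: Cox HR = {np.exp(cph.params_['arm']):.2f}")
```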

8.
Historically, early-phase oncology drug development programmes have been based on the belief that "more is better". Furthermore, rule-based study designs such as the "3 + 3" design are still often used to identify the MTD. Phillips and Clark argue that newer Bayesian model-assisted designs such as the BOIN design should become the go-to designs for statisticians for MTD finding. This short communication goes one stage further and argues that Bayesian model-assisted designs such as BOIN12, which balances risk and benefit, should be included among the go-to designs for early-phase oncology trials, depending on the study objectives. Identifying the optimal biological dose for future research for many modern targeted drugs, immunotherapies, cell therapies and vaccine therapies can save significant time and resources.

9.
The estimand framework requires a precise definition of the clinical question of interest (the estimand), as different ways of accounting for "intercurrent" events post randomization may result in different scientific questions. The initiation of subsequent therapy is common in oncology clinical trials and is considered an intercurrent event if the start of such therapy occurs prior to a recurrence or progression event. Three possible ways to account for this intercurrent event in the analysis are to censor at initiation, to consider recurrence or progression events (including death) that occur before and after the initiation of subsequent therapy, or to consider the start of subsequent therapy as an event in and of itself. The new estimand framework clarifies that these analyses address different questions ("does the drug delay recurrence if no patient received subsequent therapy?" vs "does the drug delay recurrence with or without subsequent therapy?" vs "does the drug delay recurrence or the start of subsequent therapy?"). The framework facilitates discussions during clinical trial planning and design to ensure alignment between the key question of interest, the analysis, and the interpretation. This article is the result of a cross-industry collaboration to connect the International Council for Harmonisation E9 addendum concepts to applications. Data from previously reported randomized phase 3 studies in the renal cell carcinoma setting are used to consider common intercurrent events in solid tumor studies, and to illustrate different scientific questions and the consequences of the estimand choice for study design, data collection, analysis, and interpretation.
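As a toy illustration of the three strategies contrasted above, the following sketch (hypothetical patient records and column names) derives a different time-to-event endpoint from the same data under each choice.

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "progression_time":   [10.0, 14.0, np.nan],   # months; NaN = not observed
    "subsequent_rx_time": [np.nan, 8.0, 6.0],     # start of subsequent therapy
    "last_follow_up":     [10.0, 14.0, 18.0],
})

# 1) censor at initiation of subsequent therapy
df["t_censor_at_rx"] = np.fmin(df["progression_time"].fillna(df["last_follow_up"]),
                               df["subsequent_rx_time"].fillna(np.inf))
df["e_censor_at_rx"] = (df["progression_time"] <= df["subsequent_rx_time"].fillna(np.inf)).astype(int)

# 2) treatment-policy style: count progression whenever it occurs, before or after subsequent therapy
df["t_policy"] = df["progression_time"].fillna(df["last_follow_up"])
df["e_policy"] = df["progression_time"].notna().astype(int)

# 3) composite: the earlier of progression or start of subsequent therapy is the event
df["t_composite"] = np.fmin(df["progression_time"], df["subsequent_rx_time"]).fillna(df["last_follow_up"])
df["e_composite"] = (df["progression_time"].notna() | df["subsequent_rx_time"].notna()).astype(int)

print(df)
```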

10.
Evidence-based quantitative methodologies have been proposed to inform decision making in drug development, such as metrics for go/no-go decisions or predictions of success, typically identified with statistical significance of future clinical trials. While these methodologies appropriately address some critical questions on the potential of a drug, they either consider the past evidence without predicting the outcome of the future trials or focus only on efficacy, failing to account for the multifaceted aspects of a successful drug development. As quantitative benefit-risk assessments could enhance decision making, we propose a more comprehensive approach using a composite definition of success based not only on the statistical significance of the treatment effect on the primary endpoint but also on its clinical relevance and on a favorable benefit-risk balance in the next pivotal studies. For one drug, several development strategies can thus be compared, before starting the pivotal trials, in terms of their predictive probability of success. The predictions are based on the available evidence from the previous trials, to which new hypotheses about the future development can be added. The resulting predictive probability of composite success provides a useful summary to support the discussions of the decision makers. We present a fictitious, but realistic, example in major depressive disorder inspired by a real decision-making case.
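The following hedged Monte Carlo sketch illustrates the idea of a predictive probability of composite success; the prior on the treatment effect, the sample size, the clinical-relevance threshold, and the benefit-risk rule are all invented for illustration and are not the authors' case study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
n_sim, n_per_arm, sd = 50_000, 150, 8.0
se = sd * np.sqrt(2 / n_per_arm)                       # SE of the treatment difference

# uncertainty about the true effect, summarised from previous trials
theta = rng.normal(2.5, 1.0, n_sim)
# hypothetical true risk difference for a key adverse event (drug minus placebo)
risk_diff = rng.normal(0.02, 0.01, n_sim)

theta_hat = rng.normal(theta, se)                      # simulated pivotal-trial estimate

significant = theta_hat / se > stats.norm.ppf(0.975)   # statistically significant
relevant    = theta_hat > 2.0                          # clinically relevant estimate
favorable   = risk_diff < 0.05                         # acceptable benefit-risk balance

print(f"P(statistical significance only): {significant.mean():.2f}")
print(f"P(composite success):             {(significant & relevant & favorable).mean():.2f}")
```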

11.
As described in the ICH E5 guidelines, a bridging study is an additional study executed in a new geographical region or subpopulation to link or "build a bridge" from global clinical trial outcomes to the new region. The regulatory and scientific goals of a bridging study are to evaluate potential subpopulation differences while minimizing duplication of studies and meeting unmet medical needs expeditiously. Use of historical data (borrowing) from global studies is an attractive approach for meeting these conflicting goals. Here, we propose a practical and relevant approach to guide the choice of the optimal borrowing rate (percent of subjects borrowed from earlier studies) and the number of subjects in the new regional bridging study. We address the limitations in global/regional exchangeability through use of a Bayesian power prior method and then optimize the bridging study design from a return-on-investment viewpoint. The method is demonstrated using clinical data from global and Japanese trials of dapagliflozin for type 2 diabetes.
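A minimal sketch of a normal-mean power prior, as one simple instance of the Bayesian borrowing idea described above; the global-study summary, the regional data, and the borrowing rates are hypothetical numbers, not the dapagliflozin results.

```python
import numpy as np

def power_prior_posterior(y_new, sigma, hist_mean, hist_n, alpha0, vague_var=1e6):
    """Posterior mean/variance for a normal mean with known sigma, where the
    historical likelihood is raised to the power alpha0 (0 = no borrowing,
    1 = full pooling) on top of a vague normal prior centred at 0."""
    prec = 1.0 / vague_var
    prec += alpha0 * hist_n / sigma**2          # discounted information from the global studies
    prec += len(y_new) / sigma**2               # information from the new regional study
    post_var = 1.0 / prec
    post_mean = post_var * (alpha0 * hist_n * hist_mean / sigma**2
                            + len(y_new) * np.mean(y_new) / sigma**2)
    return post_mean, post_var

rng = np.random.default_rng(5)
y_regional = rng.normal(-0.55, 1.0, size=60)    # e.g. HbA1c change in the new region
for a0 in (0.0, 0.5, 1.0):
    m, v = power_prior_posterior(y_regional, sigma=1.0, hist_mean=-0.60, hist_n=400, alpha0=a0)
    print(f"alpha0 = {a0:.1f}: posterior mean {m:.3f}, posterior sd {np.sqrt(v):.3f}")
```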

12.
The main purpose of dose-escalation trials is to identify the dose(s) that is/are safe and efficacious for further investigation in later studies. In this paper, we introduce dose-escalation designs that incorporate both dose-limiting events and toxicities (DLTs) and indicative efficacy responses into the procedure. A flexible nonparametric model is used for the continuous efficacy responses, while a logistic model is used for the binary DLTs. Escalation decisions are based on the combination of the probabilities of DLTs and the expected efficacy through a gain function. On the basis of this setup, we then introduce two types of Bayesian adaptive dose-escalation strategies. The first type of procedure, called "single objective," aims to identify and recommend a single dose: either the maximum tolerated dose (the highest dose that is considered safe) or the optimal dose (a safe dose that gives the optimum benefit-risk). The second type, called "dual objective," aims to jointly estimate both the maximum tolerated dose and the optimal dose accurately. The recommended doses obtained under these dose-escalation procedures provide information about the safety and efficacy profile of the novel drug to facilitate later studies. We evaluate the different strategies via simulations based on an example constructed from a real trial in patients with type 2 diabetes, and the use of stopping rules is assessed. We find that the nonparametric model estimates the efficacy responses well for different underlying true shapes. The dual-objective designs give better results in terms of identifying the two real target doses than the single-objective designs.
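To make the gain-function idea concrete, here is a deliberately simplified sketch: the dose-toxicity and dose-efficacy curves, the penalty weight, and the safety cut-off are all invented, and no Bayesian updating or nonparametric efficacy model is included.

```python
import numpy as np

doses = np.array([1.0, 2.0, 4.0, 8.0, 16.0])

def p_dlt(d):
    """Hypothetical logistic dose-toxicity curve."""
    return 1.0 / (1.0 + np.exp(-(np.log(d) - np.log(10.0)) / 0.5))

def expected_efficacy(d):
    """Hypothetical Emax-type dose-efficacy curve (arbitrary units)."""
    return 100.0 * d / (d + 5.0)

penalty = 150.0                                  # trade-off between benefit and toxicity risk
gain = expected_efficacy(doses) - penalty * p_dlt(doses)

safe = p_dlt(doses) < 0.33                       # doses considered safe
optimal_dose = doses[safe][np.argmax(gain[safe])]
print({float(d): round(float(g), 1) for d, g in zip(doses, gain)})
print(f"dose maximizing the gain among safe doses: {optimal_dose}")
```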

13.
Early phase 2 tuberculosis (TB) trials are conducted to characterize the early bactericidal activity (EBA) of anti-TB drugs. The EBA of anti-TB drugs has conventionally been calculated as the rate of decline in colony-forming unit (CFU) count during the first 14 days of treatment. The measurement of CFU count, however, is expensive and prone to contamination. As an alternative to CFU count, time to positivity (TTP), a potential biomarker for the long-term efficacy of anti-TB drugs, can be used to characterize EBA. The current Bayesian nonlinear mixed-effects (NLME) regression model for TTP data, however, lacks robustness to the gross outliers that are often present in the data. The conventional way of handling such outliers involves identifying them by visual inspection and then excluding them from the analysis. However, this process can be questioned because of its subjective nature. For this reason, we fitted robust versions of the Bayesian nonlinear mixed-effects regression model to a wide range of TTP datasets. The performance of the explored models was assessed through model comparison statistics and a simulation study. We conclude that fitting a robust model to TTP data obviates the need for explicit identification and subsequent "deletion" of outliers while ensuring that gross outliers exert no undue influence on the model fits. We recommend that the current practice of fitting conventional normal-theory models be abandoned in favor of fitting robust models to TTP data.
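As a much-simplified, non-Bayesian illustration of why a heavy-tailed error model downweights gross outliers, the sketch below fits a linear TTP trend by maximum likelihood under normal and Student-t errors; the data, the t degrees of freedom, and the linear model are invented and are not the NLME model of the paper.

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(2)
days = np.arange(15.0)
ttp = 100.0 + 5.0 * days + rng.normal(0, 4, days.size)   # TTP (hours) rising under treatment
ttp[-1] += 80.0                                          # one gross outlier at the last visit

def neg_loglik(params, logpdf):
    intercept, slope, log_scale = params
    resid = ttp - (intercept + slope * days)
    return -np.sum(logpdf(resid, scale=np.exp(log_scale)))

fits = {
    "normal errors": lambda r, scale: stats.norm.logpdf(r, scale=scale),
    "t(4) errors":   lambda r, scale: stats.t.logpdf(r, df=4, scale=scale),
}
for name, logpdf in fits.items():
    res = optimize.minimize(neg_loglik, x0=[100.0, 3.0, 1.5], args=(logpdf,), method="Nelder-Mead")
    print(f"{name}: estimated EBA slope = {res.x[1]:.2f} (true value 5.0)")
```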

14.
The cost and time of pharmaceutical drug development continue to grow at rates that many say are unsustainable. These trends have an enormous impact on which treatments get to patients, when they get them and how they are used. The statistical framework for supporting decisions in the regulated clinical development of new medicines has followed a traditional path of frequentist methodology. Trials using hypothesis tests of "no treatment effect" are done routinely, and a p-value < 0.05 is often the determinant of what constitutes a "successful" trial. Many drugs fail in clinical development, adding to the cost of new medicines, and some evidence points to deficiencies of the frequentist paradigm as a contributing factor. An unknown number of effective medicines may have been abandoned because trials were declared "unsuccessful" owing to a p-value exceeding 0.05. Recently, the Bayesian paradigm has shown utility in the clinical drug development process through its probability-based inference. We argue for a Bayesian approach that employs data from other trials as a "prior" for Phase 3 trials, so that synthesized evidence across trials can be used to compute probability statements that are valuable for understanding the magnitude of the treatment effect. Such a Bayesian paradigm provides a promising framework for improving statistical inference and regulatory decision making.
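A minimal conjugate-normal sketch of the kind of probability statement argued for above, treating an earlier-trial summary as the prior for a Phase 3 effect estimate; all numbers are hypothetical.

```python
import numpy as np
from scipy import stats

prior_mean, prior_sd = 0.30, 0.15   # treatment effect summarised from earlier trials (the "prior")
est, se = 0.22, 0.10                # Phase 3 point estimate and its standard error

post_prec = 1 / prior_sd**2 + 1 / se**2
post_sd = np.sqrt(1 / post_prec)
post_mean = (prior_mean / prior_sd**2 + est / se**2) / post_prec

print(f"posterior for the treatment effect: mean {post_mean:.3f}, sd {post_sd:.3f}")
print(f"P(effect > 0   | all trials) = {stats.norm.sf(0.0, loc=post_mean, scale=post_sd):.3f}")
print(f"P(effect > 0.2 | all trials) = {stats.norm.sf(0.2, loc=post_mean, scale=post_sd):.3f}")
```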

15.
16.
The analysis of covariance (ANCOVA) is often used in analyzing clinical trials that make use of a "baseline" response. Unlike Crager [1987. Analysis of covariance in parallel-group clinical trials with pretreatment baseline. Biometrics 43, 895–901], we show that, for a random baseline covariate, the ordinary least squares (OLS)-based ANCOVA method provides invalid unconditional inference for the test of treatment effect when the regression on the baseline covariate is heterogeneous across treatments. To correctly address the random nature of the baseline response, we propose to model the pre- and post-treatment measurements directly as repeated outcome values of a subject. This bivariate modeling method is evaluated and compared with the ANCOVA method in a simulation study under a wide variety of settings. We find that the bivariate modeling method, applying the Kenward–Roger approximation and assuming a distinct general variance–covariance matrix for each treatment, performs best in analyzing a clinical trial that makes use of random baseline measurements.

17.
The standard methods for analyzing data arising from a 'thorough QT/QTc study' are based on multivariate normal models with a common variance structure for both drug and placebo. Such modeling assumptions may be violated, and when the sample sizes are small, the statistical inference can be sensitive to these stringent assumptions. This article proposes a flexible class of parametric models to address the above-mentioned limitations of the currently used models. A Bayesian methodology is used for data analysis, and models are compared using the deviance information criterion. Superior performance of the proposed models over the current models is illustrated with a real dataset obtained from a 'thorough QT/QTc study' conducted by GlaxoSmithKline (GSK). Copyright © 2010 John Wiley & Sons, Ltd.

18.
Regulatory agencies typically evaluate the efficacy and safety of new interventions and grant commercial approval based on randomized controlled trials (RCTs). Other major healthcare stakeholders, such as insurance companies and health technology assessment agencies, while basing initial access and reimbursement decisions on RCT results, are also keenly interested in whether results observed in idealized trial settings will translate into comparable outcomes in real world settings, that is, into so-called "real world" effectiveness. Unfortunately, evidence of real world effectiveness for new interventions is not available at the time of initial approval. To bridge this gap, statistical methods are available to extend the treatment effect estimated in an RCT to a target population. The generalization is done by weighting the subjects who participated in the RCT so that the weighted trial population resembles the target population. We evaluate a variety of alternative estimation and weight construction procedures using both simulations and a real world data example based on two clinical trials of an investigational intervention for Alzheimer's disease. Our results suggest that the optimal approach to estimation depends on the characteristics of the source and target populations, including the degree of selection bias and treatment effect heterogeneity.
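A hedged sketch of the basic weighting idea described above, using inverse odds of trial participation estimated by logistic regression; the single covariate, the simulated selection and effect heterogeneity, and the use of scikit-learn are illustrative assumptions rather than the authors' exact procedures.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(9)
n_trial, n_target = 500, 5000

# one prognostic covariate (e.g. baseline severity); the trial under-samples severe patients
x_trial = rng.normal(0.0, 1.0, n_trial)
x_target = rng.normal(0.8, 1.0, n_target)
treat = rng.integers(0, 2, n_trial)
# treatment effect heterogeneity: benefit shrinks as severity increases
y = 1.0 + treat * (2.0 - 1.0 * x_trial) + rng.normal(0, 1, n_trial)

# model P(trial membership | x) on the stacked sample, then weight trial subjects by the inverse odds
x_all = np.concatenate([x_trial, x_target]).reshape(-1, 1)
in_trial = np.concatenate([np.ones(n_trial), np.zeros(n_target)])
p = LogisticRegression().fit(x_all, in_trial).predict_proba(x_trial.reshape(-1, 1))[:, 1]
w = (1 - p) / p

naive = y[treat == 1].mean() - y[treat == 0].mean()
weighted = (np.average(y[treat == 1], weights=w[treat == 1])
            - np.average(y[treat == 0], weights=w[treat == 0]))
print(f"effect in the trial population:  {naive:.2f}")
print(f"effect reweighted to the target: {weighted:.2f}  (true target value is about 1.2)")
```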

19.
Randomized controlled trials (RCTs) are the gold standard for evaluating the efficacy and safety of investigational interventions. If every patient in an RCT were to adhere to the randomized treatment, one could simply analyze the complete data to infer the treatment effect. However, intercurrent events (ICEs), including the use of concomitant medication for unsatisfactory efficacy, treatment discontinuation due to adverse events, or lack of efficacy, may lead to interventions that deviate from the original treatment assignment. Therefore, defining the appropriate estimand (the parameter to be estimated) based on the primary objective of the study is critical prior to determining the statistical analysis method and analyzing the data. The International Council for Harmonisation (ICH) E9 (R1) addendum, adopted on November 20, 2019, provides five strategies for defining the estimand: treatment policy, hypothetical, composite variable, while on treatment, and principal stratum. In this article, we propose an estimand that uses a mix of strategies in handling ICEs. This estimand is an average of the "null" treatment difference for those with ICEs potentially related to safety and the treatment difference for the other patients had they completed the assigned treatments. Two examples from clinical trials evaluating antidiabetes treatments are provided to illustrate the estimation of this proposed estimand and to compare it with the estimates for estimands using hypothetical and treatment policy strategies in handling ICEs.

20.
In some randomized (drug versus placebo) clinical trials, the estimand of interest is the between-treatment difference in population means of a clinical endpoint that is free from the confounding effects of "rescue" medication (e.g., the HbA1c change from baseline at 24 weeks that would be observed without rescue medication, regardless of whether or when the assigned treatment was discontinued). In such settings, a missing data problem arises if some patients prematurely discontinue from the trial or initiate rescue medication while in the trial, the latter necessitating the discarding of post-rescue data. We caution that the commonly used mixed-effects model repeated measures analysis, with its embedded missing-at-random assumption, can deliver an exaggerated estimate of the aforementioned estimand of interest. This happens, in part, because an overly optimistic mean is implicitly imputed for "dropouts" (i.e., patients with missing endpoint data of interest) in the drug arm. We propose an alternative approach in which the missing mean for the drug-arm dropouts is explicitly replaced with either the estimated mean of the entire endpoint distribution under placebo (primary analysis) or a sequence of increasingly more conservative means within a tipping-point framework (sensitivity analysis); patient-level imputation is not required. A supplemental "dropout = failure" analysis is considered, in which a common poor outcome is imputed for all dropouts, followed by a between-treatment comparison using quantile regression. All analyses address the same estimand and can adjust for baseline covariates. Three examples and simulation results are used to support our recommendations.
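A small numerical sketch of the placebo-based replacement and tipping-point analyses described above, working with arm-level means only (no patient-level imputation); the summary values are hypothetical.

```python
import numpy as np

placebo_mean_all = -0.20        # estimated mean of the entire endpoint distribution under placebo
drug_mean_observed = -1.10      # observed mean among drug-arm patients with endpoint data
p_dropout_drug = 0.25           # proportion of drug-arm dropouts (endpoint data missing)

def drug_arm_mean(assumed_dropout_mean):
    """Mix the observed mean with an assumed mean for the drug-arm dropouts."""
    return (1 - p_dropout_drug) * drug_mean_observed + p_dropout_drug * assumed_dropout_mean

# primary analysis: dropouts assigned the overall placebo mean
print(f"primary treatment difference: {drug_arm_mean(placebo_mean_all) - placebo_mean_all:.2f}")

# tipping-point sensitivity analysis: make the assumed dropout mean increasingly conservative
for shift in np.arange(0.0, 1.01, 0.25):
    diff = drug_arm_mean(placebo_mean_all + shift) - placebo_mean_all
    print(f"dropout mean shifted by +{shift:.2f}: difference = {diff:.2f}")
```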
