Similar Documents
20 similar documents retrieved.
1.
Phase I clinical trials aim to identify a maximum tolerated dose (MTD), the highest possible dose that does not cause an unacceptable amount of toxicity in the patients. In trials of combination therapies, however, many different dose combinations may have a similar probability of causing a dose-limiting toxicity, and hence a number of MTDs may exist. Furthermore, escalation strategies in combination trials are more complex, with possible escalation/de-escalation of either or both drugs. This paper investigates the properties of two previously proposed Bayesian adaptive models for combination-therapy dose-escalation when a number of different escalation strategies are applied. We assess operating characteristics through a series of simulation studies and show that strategies that only allow ‘non-diagonal’ moves in the escalation process (that is, both drugs cannot increase simultaneously) are inefficient and identify fewer MTDs for Phase II comparisons. Such strategies tend to escalate a single agent first while keeping the other agent fixed, which can be a severe restriction when exploring dose surfaces using a limited sample size. Meanwhile, escalation designs based on Bayesian D-optimality allow more varied experimentation around the dose space and, consequently, are better at identifying more MTDs. We argue that for Phase I combination trials it is sensible to take forward a number of identified MTDs for Phase II experimentation so that their efficacy can be directly compared. Researchers, therefore, need to carefully consider the escalation strategy and model that best allows the identification of these MTDs. Copyright © 2012 John Wiley & Sons, Ltd.
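A minimal sketch of the move restriction discussed above: enumerating which dose pairs are admissible from the current combination, with and without diagonal moves. The dose indexing and step set are illustrative assumptions, not the paper's exact rules.

```python
# Sketch: admissible next-dose moves for a two-drug escalation strategy.
# Illustrative assumptions only, not the authors' exact algorithm.

def admissible_moves(i, j, n_i, n_j, allow_diagonal=True):
    """Return dose pairs reachable from combination (i, j) in one step.

    i, j           -- current dose indices of drugs A and B (0-based)
    n_i, n_j       -- number of dose levels of drugs A and B
    allow_diagonal -- if False, both drugs may not escalate together
                      (the 'non-diagonal' restriction discussed above)
    """
    steps = [(-1, 0), (1, 0), (0, -1), (0, 1), (0, 0)]  # single-drug moves or stay
    if allow_diagonal:
        steps += [(1, 1), (-1, -1)]                     # simultaneous moves
    moves = []
    for di, dj in steps:
        ni, nj = i + di, j + dj
        if 0 <= ni < n_i and 0 <= nj < n_j:
            moves.append((ni, nj))
    return moves

print(admissible_moves(1, 1, 4, 4, allow_diagonal=False))
```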

2.
Nowadays, treatment regimens for cancer often involve a combination of drugs. The determination of the doses of each of the combined drugs in phase I dose-escalation studies poses methodological challenges. The most common phase I design, the classic ‘3+3’ design, has been criticized for poorly estimating the maximum tolerated dose (MTD) and for treating too many subjects at doses below the MTD. In addition, the classic ‘3+3’ design cannot address the challenges posed by combinations of drugs. Here, we assume that a control drug (commonly used and well studied) is administered at a fixed dose in combination with a new agent (the experimental drug) whose appropriate dose has to be determined. We propose a randomized design in which subjects are assigned to the control or to the combination of the control and the experimental drug. The MTD is determined using a model-based Bayesian technique based on the difference in the probability of dose-limiting toxicities (DLTs) between the control and the combination arm. We show, through a simulation study, that this approach provides more accurate estimates of the MTD. We argue that this approach can differentiate between an extremely high probability of DLT in the control arm and a high probability of DLT for the combination. We also report on a fictitious (simulated) analysis based on published data from a phase I trial of ifosfamide combined with sunitinib. Copyright © 2014 John Wiley & Sons, Ltd.
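A minimal sketch of the arm-difference idea, assuming conjugate Beta-Binomial updating per arm and Monte Carlo sampling of the difference in DLT probability; the counts and priors below are hypothetical, not the paper's model.

```python
# Sketch: posterior for the difference in DLT probability between the
# combination arm and the control arm. Counts and priors are made up.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# hypothetical data: (DLTs, patients) per arm
dlt_ctrl, n_ctrl = 2, 12      # control drug alone
dlt_comb, n_comb = 5, 12      # control + experimental

# Beta(1, 1) priors -> Beta posteriors, sampled by Monte Carlo
p_ctrl = stats.beta(1 + dlt_ctrl, 1 + n_ctrl - dlt_ctrl).rvs(100_000, random_state=rng)
p_comb = stats.beta(1 + dlt_comb, 1 + n_comb - dlt_comb).rvs(100_000, random_state=rng)

diff = p_comb - p_ctrl        # excess toxicity attributable to the new agent
print("posterior mean difference:", diff.mean().round(3))
print("P(difference > 0.25):", (diff > 0.25).mean().round(3))
```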

3.
Because of the recent regulatory emphasis on issues related to drug-induced cardiac repolarization that can potentially lead to sudden death, QT interval analysis has received much attention in the clinical trial literature. The analysis of QT data is complicated by the fact that the QT interval is correlated with heart rate and other prognostic factors. Several attempts have been made in the literature to derive an optimal method for correcting the QT interval for heart rate; however, the QT correction formulae obtained are not universal because of substantial variability observed across different patient populations. It is demonstrated in this paper that the widely used fixed QT correction formulae do not provide an adequate fit to QT and RR data and bias estimates of treatment effect. It is also shown that QT correction formulae derived from baseline data in clinical trials are likely to lead to Type I error rate inflation. This paper develops a QT interval analysis framework based on repeated-measures models accommodating the correlation between QT interval and heart rate and the correlation among QT measurements collected over time. The proposed method of QT analysis controls the Type I error rate and is at least as powerful as traditional QT correction methods with respect to detecting drug-related QT interval prolongation. Copyright © 2003 John Wiley & Sons, Ltd.
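For reference, the two most widely used fixed corrections are Bazett (QTc = QT/RR^(1/2)) and Fridericia (QTc = QT/RR^(1/3)), the kind of formulae the paper shows can fit poorly; the values below are illustrative.

```python
# Sketch of the common fixed QT correction formulae:
# Bazett: QTc = QT / RR^0.5, Fridericia: QTc = QT / RR^(1/3),
# with QT in ms and RR in seconds. Numbers are illustrative only.
import numpy as np

qt = np.array([380.0, 400.0, 420.0])   # QT interval, ms
rr = np.array([0.80, 1.00, 1.20])      # RR interval, s (60 / heart rate)

qtc_bazett = qt / np.sqrt(rr)
qtc_fridericia = qt / np.cbrt(rr)
print(qtc_bazett.round(1), qtc_fridericia.round(1))
```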

4.
We investigate mixed analysis of covariance models for the 'one-step' assessment of conditional QT prolongation. Initially, we consider three different covariance structures for the data, where between-treatment covariance of repeated measures is modelled respectively through random effects, random coefficients, and through a combination of random effects and random coefficients. In all three of those models, an unstructured covariance pattern is used to model within-treatment covariance. In a fourth model, proposed earlier in the literature, between-treatment covariance is modelled through random coefficients but the residuals are assumed to be independent identically distributed (i.i.d.). Finally, we consider a mixed model with saturated covariance structure. We investigate the precision and robustness of those models by fitting them to a large group of real data sets from thorough QT studies. Our findings suggest: (i) Point estimates of treatment contrasts from all five models are similar. (ii) The random coefficients model with i.i.d. residuals is not robust; the model potentially leads to both under- and overestimation of standard errors of treatment contrasts and therefore cannot be recommended for the analysis of conditional QT prolongation. (iii) The combined random effects/random coefficients model does not always converge; in the cases where it converges, its precision is generally inferior to the other models considered. (iv) Both the random effects and the random coefficients model are robust. (v) The random effects, the random coefficients, and the saturated model have similar precision and all three models are suitable for the one-step assessment of conditional QT prolongation.
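A hedged sketch of fitting the random-effects and random-coefficients structures with statsmodels MixedLM on synthetic data. Note that MixedLM assumes i.i.d. residuals, so the richer within-treatment covariance patterns compared in the paper are not all expressible here; all data and column names are made up.

```python
# Sketch: two of the covariance structures discussed above, on fake data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
times, rows = np.array([1.0, 2.0, 4.0, 8.0]), []
for s in range(20):
    u = rng.normal(0, 5)                       # subject-level random intercept
    for trt in (0, 1):                         # both treatments within subject
        for t in times:
            qtc = 400 + u + 3 * trt + rng.normal(0, 4)
            rows.append((s, trt, t, qtc))
df = pd.DataFrame(rows, columns=["subject", "treatment", "time", "qtc"])

# random intercept per subject (the 'random effects' structure)
m_re = smf.mixedlm("qtc ~ treatment + time", df, groups=df["subject"]).fit()
# random intercept and slope in time (the 'random coefficients' structure);
# MixedLM keeps i.i.d. residuals, the assumption the paper flags as fragile
m_rc = smf.mixedlm("qtc ~ treatment + time", df,
                   groups=df["subject"], re_formula="~time").fit()
print(m_re.params["treatment"], m_rc.params["treatment"])
```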

5.
Treatment during cancer clinical trials sometimes involves the combination of multiple drugs. In addition, in recent years there has been a trend toward phase I/II trials, in which a phase I and a phase II trial are combined into a single trial to accelerate drug development. Methods for the seamless combination of the phase I and phase II parts are currently under investigation. In the phase II part, adaptive randomization on the basis of patient efficacy outcomes allocates more patients to the dose combinations considered to have higher efficacy. Patient toxicity outcomes are used for determining admissibility of each dose combination and are not used for selection of the dose combination itself. When the objective is to find the optimum dose combination with respect to both toxicity and efficacy, rather than efficacy alone, patients need to be allocated to dose combinations with consideration of the trade-off between toxicity and efficacy. We propose a Bayesian hierarchical model and an adaptive randomization that take both toxicity and efficacy into account. Using the toxicity and efficacy outcomes of patients, the Bayesian hierarchical model is used to estimate the toxicity probability and efficacy probability of each dose combination. We then use Bayesian moving-reference adaptive randomization on the basis of a desirability computed from the obtained estimates. Computer simulations suggest that the proposed method recommends a higher percentage of target dose combinations than a previously proposed method.
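A minimal sketch of outcome-adaptive randomization driven by a desirability score, assuming allocation probabilities proportional to a power of desirability; the numbers and tempering power are illustrative, and the hierarchical model itself is not shown.

```python
# Sketch: allocate the next patients with probability proportional to a
# power of each admissible combination's estimated desirability.
import numpy as np

rng = np.random.default_rng(2)
desirability = np.array([0.12, 0.30, 0.45, 0.25])   # per admissible dose pair
power = 0.5                                          # tempers early imbalance

probs = desirability**power / np.sum(desirability**power)
next_assignments = rng.choice(len(probs), size=6, p=probs)
print(probs.round(3), next_assignments)
```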

6.
Many phase I drug-combination designs have been proposed to find the maximum tolerated combination (MTC). Because of the two-dimensional nature of drug-combination trials, these designs typically require complicated statistical modeling and estimation, which limits their use in practice. In this article, we propose an easy-to-implement Bayesian phase I combination design, called the Bayesian adaptive linearization method (BALM), to simplify dose finding for drug-combination trials. BALM takes a dimension-reduction approach. It selects a subset of combinations, through a procedure called linearization, to convert the two-dimensional dose matrix into a string of combinations that are fully ordered in toxicity. As a result, existing single-agent dose-finding methods can be used directly to find the MTC. In case the selected linear path does not contain the MTC, a dose-insertion procedure is performed to add new doses whose expected toxicity rate equals the target toxicity rate. Our simulation studies show that the proposed BALM design performs better than competing, more complicated combination designs.
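A toy illustration of the linearization idea: choosing a path through the dose matrix along which prior toxicity is non-decreasing, so a single-agent method could run on it. The prior matrix and greedy rule are assumptions, not the BALM procedure.

```python
# Sketch: convert a 2D dose matrix into a fully ordered string of
# combinations by walking a monotone path through assumed prior toxicities.
import numpy as np

prior_tox = np.array([[0.05, 0.10, 0.20],
                      [0.10, 0.20, 0.30],
                      [0.20, 0.30, 0.45]])  # rows: drug A levels, cols: drug B

def staircase_path(mat):
    """Greedy monotone path from (0, 0) to the highest combination."""
    i = j = 0
    path = [(0, 0)]
    while (i, j) != (mat.shape[0] - 1, mat.shape[1] - 1):
        right = mat[i, j + 1] if j + 1 < mat.shape[1] else np.inf
        up = mat[i + 1, j] if i + 1 < mat.shape[0] else np.inf
        if right <= up:
            j += 1
        else:
            i += 1
        path.append((i, j))
    return path  # toxicity is non-decreasing along this path

print(staircase_path(prior_tox))
```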

7.
Various statistical models have been proposed for two-dimensional dose finding in drug-combination trials. However, it is often a dilemma to decide which model to use when conducting a particular drug-combination trial. We make a comprehensive comparison of four dose-finding methods and, for fairness, apply the same dose-finding algorithm under the four model structures. Through extensive simulation studies, we compare the operating characteristics of these methods in various practical scenarios. The results show that different models may lead to different design properties and that no single model performs uniformly better in all scenarios. As a result, we propose using Bayesian model averaging to overcome the arbitrariness of the model specification and enhance the robustness of the design. We assign a discrete probability mass to each model as the prior model probability and then estimate the toxicity probabilities of combined doses in the Bayesian model averaging framework. During the trial, we adaptively allocate each new cohort of patients to the most appropriate dose combination by comparing the posterior estimates of the toxicity probabilities with the prespecified toxicity target. The simulation results demonstrate that the Bayesian model averaging approach is robust under various scenarios. Copyright © 2015 John Wiley & Sons, Ltd.
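A minimal sketch of Bayesian model averaging over two candidate dose-toxicity links, with marginal likelihoods approximated by crude Monte Carlo over the prior; the models, priors, and data are illustrative assumptions, not those compared in the paper.

```python
# Sketch: posterior model probabilities for two dose-toxicity links.
import numpy as np
from scipy import stats
from scipy.special import expit

rng = np.random.default_rng(7)
dose = np.array([0.2, 0.4, 0.6])            # standardized combined doses tried
n = np.array([3, 3, 3]); tox = np.array([0, 1, 2])  # cohorts and DLT counts
links = {"logistic": expit, "probit": stats.norm.cdf}

def marginal_likelihood(link, draws=20_000):
    # average the binomial likelihood over prior draws of the parameters
    a = rng.normal(0, 2, size=draws); b = rng.normal(0.5, 1, size=draws)
    p = np.clip(link(a[:, None] + b[:, None] * dose[None, :]), 1e-9, 1 - 1e-9)
    lik = np.prod(p**tox * (1 - p)**(n - tox), axis=1)
    return lik.mean()

ml = {k: marginal_likelihood(f) for k, f in links.items()}
z = sum(ml.values())
print({k: round(v / z, 3) for k, v in ml.items()})  # posterior model probs (equal priors)
```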

8.
The development of a new drug is a major undertaking, and it is important to consider carefully the key decisions in the development process. Decisions are made in the presence of uncertainty, and outcomes such as the probability of successful drug registration depend on the clinical development programme. The Rheumatoid Arthritis Drug Development Model was developed to support key decisions for drugs in development for the treatment of rheumatoid arthritis. It is configured to simulate Phase 2b and 3 trials based on the efficacy of new drugs at the end of Phase 2a, evidence about the efficacy of existing treatments, and expert opinion regarding key safety criteria. The model evaluates the performance of different development programmes with respect to the duration of disease of the target population, Phase 2b and 3 sample sizes, the dose(s) of the experimental treatment, the choice of comparator, the duration of the Phase 2b clinical trial, the primary efficacy outcome, and decision criteria for successfully passing Phases 2b and 3. It uses Bayesian clinical trial simulation to calculate the probability of successful drug registration based on the uncertainty about parameters of interest, thereby providing a more realistic assessment of the likely outcomes of individual trials and sequences of trials for the purpose of decision making. In this case study, the results show that, depending on the trial design, the new treatment has assurances of successful drug registration in the range 0.044–0.142 for an ACR20 outcome and 0.057–0.213 for an ACR50 outcome. Copyright © 2009 John Wiley & Sons, Ltd.
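A minimal sketch of assurance by simulation: average the probability of trial success over draws of the true treatment effect from its prior. The effect scale, prior, and decision rule below are hypothetical and far simpler than the Rheumatoid Arthritis Drug Development Model.

```python
# Sketch: 'assurance' (unconditional probability of trial success) by
# averaging the power of a one-sided z-test over a prior on the effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
n_per_arm, alpha, sigma = 150, 0.025, 1.0

def assurance(draws=10_000):
    delta = rng.normal(0.15, 0.10, size=draws)   # prior on the true effect
    se = sigma * np.sqrt(2 / n_per_arm)
    # probability the test succeeds, given each drawn effect, then average
    power = stats.norm.sf(stats.norm.isf(alpha) - delta / se)
    return power.mean()

print(round(assurance(), 3))
```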

9.
A diverse range of non-cardiovascular drugs are associated with QT interval prolongation, which may be associated with a potentially fatal ventricular arrhythmia known as torsade de pointes. QT interval has been assessed for two recent submissions at GlaxoSmithKline. Meta-analyses of ECG data from several clinical pharmacology studies were conducted for the two submissions. A general fixed effects meta-analysis approach using summaries of the individual studies was used to calculate a pooled estimate and 90% confidence interval for the difference between each active dose and placebo following both single and repeat dosing separately. The meta-analysis approach described provided a pragmatic solution to pooling complex and varied studies, and is a good way of addressing regulatory questions on QTc prolongation. Copyright © 2002 John Wiley & Sons, Ltd.
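A minimal sketch of fixed-effects (inverse-variance) pooling with a 90% confidence interval as described above; the study-level estimates are made-up numbers.

```python
# Sketch: inverse-variance pooling of study-level QTc differences.
import numpy as np
from scipy import stats

est = np.array([2.1, 3.5, 1.2, 4.0])   # ms, active minus placebo, per study
se = np.array([1.5, 2.0, 1.8, 2.5])    # standard errors per study

w = 1 / se**2                           # inverse-variance weights
pooled = np.sum(w * est) / np.sum(w)
pooled_se = np.sqrt(1 / np.sum(w))
z = stats.norm.isf(0.05)                # two-sided 90% CI
print(f"pooled = {pooled:.2f} ms, "
      f"90% CI ({pooled - z*pooled_se:.2f}, {pooled + z*pooled_se:.2f})")
```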

10.
Baseline adjustment is an important consideration in thorough QT studies for non-antiarrhythmic drugs. For crossover studies with period-specific pre-dose baselines, we propose a by-time-point analysis of covariance model with change from pre-dose baseline as response, treatment as a fixed effect, pre-dose baseline for the current treatment and pre-dose baseline averaged across treatments as covariates, and subject as a random effect. Additional factors such as period and sex should be included in the model as appropriate. Multiple pre-dose measurements can be averaged to obtain a pre-dose-averaged baseline and used in the model. We provide conditions under which the proposed model is more efficient than other models. We demonstrate the efficiency and robustness of the proposed model both analytically and through simulation studies. The advantage of the proposed model is also illustrated using the data from a real clinical trial. Copyright © 2014 John Wiley & Sons, Ltd.
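A hedged sketch of the described covariate structure at a single time point, fit with statsmodels MixedLM on synthetic crossover data; the column names and data are hypothetical.

```python
# Sketch: change from pre-dose baseline ~ treatment + baseline covariates,
# with subject as a random effect, on fake two-period crossover data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(8)
rows = []
for s in range(24):                      # each subject receives both treatments
    subj_mean = rng.normal(400, 10)
    for trt in (0, 1):                   # 0 = placebo, 1 = active
        base = subj_mean + rng.normal(0, 4)          # period-specific pre-dose baseline
        post = base + 4 * trt + rng.normal(0, 4)     # one post-dose time point
        rows.append((s, trt, base, post - base))
df = pd.DataFrame(rows, columns=["subject", "treatment", "base_current", "change"])
df["base_avg"] = df.groupby("subject")["base_current"].transform("mean")
# base_avg: pre-dose baseline averaged across treatments, as in the model above

fit = smf.mixedlm("change ~ treatment + base_current + base_avg",
                  df, groups=df["subject"]).fit()
print(fit.params["treatment"])
```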

11.
We investigate mixed models for repeated measures data from cross-over studies in general, but in particular for data from thorough QT studies. We extend both the conventional random effects model and the saturated covariance model for univariate cross-over data to repeated measures cross-over (RMC) data; the resulting models we call the RMC model and Saturated model, respectively. Furthermore, we consider a random effects model for repeated measures cross-over data previously proposed in the literature. We assess the standard errors of point estimates and the coverage properties of confidence intervals for treatment contrasts under the various models. Our findings suggest: (i) Point estimates of treatment contrasts from all models considered are similar; (ii) Confidence intervals for treatment contrasts under the random effects model previously proposed in the literature do not have adequate coverage properties; the model therefore cannot be recommended for analysis of marginal QT prolongation; (iii) The RMC model and the Saturated model have similar precision and coverage properties; both models are suitable for assessment of marginal QT prolongation; and (iv) The Akaike Information Criterion (AIC) is not a reliable criterion for selecting a covariance model for RMC data in the following sense: the model with the smallest AIC is not necessarily associated with the highest precision for the treatment contrasts, even if the model with the smallest AIC value is also the most parsimonious model.

12.
One of the primary purposes of an oncology dose-finding trial is to identify an optimal dose (OD) that is both tolerable and has an indication of therapeutic benefit for subjects in subsequent clinical trials. In addition, it is quite important to accelerate early-stage trials to shorten the entire period of drug development. However, it is often challenging to make adaptive decisions of dose escalation and de-escalation in a timely manner because of the fast accrual rate, the difference in outcome evaluation periods for efficacy and toxicity, and late-onset outcomes. To solve these issues, we propose the time-to-event Bayesian optimal interval design to accelerate dose-finding based on cumulative and pending data for both efficacy and toxicity. The new design, named "TITE-BOIN-ET", is a nonparametric, model-assisted design; thus, it is robust, much simpler, and easier to implement in actual oncology dose-finding trials than model-based approaches. These characteristics are quite useful from a practical point of view. A simulation study shows that the TITE-BOIN-ET design has advantages over model-based approaches in both the percentage of correct OD selection and the average number of patients allocated to the ODs across a variety of realistic settings. In addition, the TITE-BOIN-ET design significantly shortens the trial duration compared with designs without sequential enrollment and therefore has the potential to accelerate early-stage dose-finding trials.
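For context, a sketch of the interval boundaries used by BOIN-type designs, the model-assisted family that TITE-BOIN-ET extends: escalate when the observed DLT rate falls below lambda_e and de-escalate above lambda_d. The time-to-event weighting of pending outcomes is not reproduced here.

```python
# Sketch: BOIN escalation/de-escalation boundaries for target DLT rate phi,
# with the conventional defaults phi1 = 0.6*phi and phi2 = 1.4*phi.
import numpy as np

def boin_boundaries(phi, phi1=None, phi2=None):
    phi1 = 0.6 * phi if phi1 is None else phi1   # highest subtherapeutic rate
    phi2 = 1.4 * phi if phi2 is None else phi2   # lowest overly toxic rate
    lam_e = np.log((1 - phi1) / (1 - phi)) / np.log(phi * (1 - phi1) / (phi1 * (1 - phi)))
    lam_d = np.log((1 - phi) / (1 - phi2)) / np.log(phi2 * (1 - phi) / (phi * (1 - phi2)))
    return lam_e, lam_d

print(boin_boundaries(0.30))   # approximately (0.236, 0.358)
```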

13.
Many new anticancer agents can be combined with existing drugs, as combining a number of drugs may be expected to have a better therapeutic effect than monotherapy owing to synergistic effects. Furthermore, to drive drug development and to reduce the associated cost, there has been a growing tendency to combine these as phase I/II trials. With respect to phase I/II oncology trials for the assessment of dose combinations, in the existing methodologies in which efficacy based on tumor response and safety based on toxicity are modeled as binary outcomes, it is not possible to enroll and treat the next cohort of patients until the best overall response has been determined in the current cohort. Thus, the trial duration might be extended to an unacceptable degree. In this study, we propose a method that randomizes the next cohort of patients in the phase II part to a dose combination based on the estimated response rate, using all available observed data, as soon as the overall response has been determined in the current cohort. We compared the proposed method to the existing method using simulation studies. These demonstrated that the percentage of optimal dose combinations selected by the proposed method is not less than that of the existing method and that the trial duration under the proposed method is shortened compared to the existing method. The proposed method meets both ethical and financial requirements, and we believe it has the potential to help expedite drug development.

14.
Dose-escalation trials commonly assume a homogeneous trial population to identify a single recommended dose of the experimental treatment for use in future trials. Wrongly assuming a homogeneous population can lead to a diluted treatment effect. Equally, exclusion of a subgroup that could in fact benefit from the treatment can cause a beneficial treatment effect to be missed. Accounting for a potential subgroup effect (i.e., a difference in reaction to the treatment between subgroups) in dose-escalation can increase the chance of finding the treatment to be efficacious in a larger patient population. A standard Bayesian model-based method of dose-escalation is extended to account for a subgroup effect by including covariates for subgroup membership in the dose-toxicity model. A stratified design performs well but uses the available data inefficiently and makes no inferences concerning the presence of a subgroup effect. A hypothesis test could potentially rectify this problem, but the small sample sizes result in a low-powered test. As an alternative, the use of spike and slab priors for variable selection is proposed. This method continually assesses the presence of a subgroup effect, enabling efficient use of the available trial data throughout escalation and in identifying the recommended dose(s). A simulation study, based on real trial data, was conducted, and this design was found to be both promising and feasible.
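A minimal sketch of the spike-and-slab idea reduced to one coefficient: compare the marginal likelihood of 'no subgroup effect' (a spike at zero) with that of an effect drawn from a slab prior, and report the posterior inclusion probability. The estimate, its standard error, and the priors are hypothetical, and the full dose-toxicity model is not reproduced.

```python
# Sketch: posterior inclusion probability for a subgroup coefficient
# under a spike-and-slab prior, via simple Monte Carlo.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
# hypothetical subgroup log-odds shift estimate and its standard error
beta_hat, se = 0.9, 0.45

def ml_spike():                      # likelihood of beta_hat if the true effect is 0
    return stats.norm.pdf(beta_hat, loc=0, scale=se)

def ml_slab(draws=100_000):          # average likelihood over a N(0, 1) slab prior
    b = rng.normal(0, 1, size=draws)
    return stats.norm.pdf(beta_hat, loc=b, scale=se).mean()

m0, m1 = ml_spike(), ml_slab()
prior_inclusion = 0.5
post_inclusion = m1 * prior_inclusion / (m1 * prior_inclusion + m0 * (1 - prior_inclusion))
print(round(post_inclusion, 3))      # probability that a subgroup effect exists
```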

15.
The main purpose of dose-escalation trials is to identify the dose(s) that is/are safe and efficacious for further investigation in later studies. In this paper, we introduce dose-escalation designs that incorporate both dose-limiting toxicities (DLTs) and indicative efficacy responses into the procedure. A flexible nonparametric model is used for modelling the continuous efficacy responses, while a logistic model is used for the binary DLTs. Escalation decisions are based on the combination of the probabilities of DLTs and expected efficacy through a gain function. On the basis of this setup, we then introduce two types of Bayesian adaptive dose-escalation strategies. The first type, called “single objective”, aims to identify and recommend a single dose: either the maximum tolerated dose (the highest dose that is considered safe) or the optimal dose (a safe dose that gives the optimum benefit-risk balance). The second type, called “dual objective”, aims to jointly estimate both the maximum tolerated dose and the optimal dose accurately. The recommended doses obtained under these dose-escalation procedures provide information about the safety and efficacy profile of the novel drug to facilitate later studies. We evaluate the different strategies via simulations based on an example constructed from a real trial in patients with type 2 diabetes, and the use of stopping rules is assessed. We find that the nonparametric model estimates the efficacy responses well for different underlying true shapes. The dual-objective designs give better results in terms of identifying the two target doses than the single-objective designs.
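A toy gain function of the general kind described above, trading expected efficacy against the probability of a DLT; the functional forms and penalty weight are illustrative assumptions, not the paper's specification.

```python
# Sketch: pick the dose maximizing a gain that rewards expected efficacy
# and penalizes toxicity. Models and weights are made up.
import numpy as np
from scipy.special import expit

doses = np.array([1.0, 2.0, 4.0, 8.0])
p_dlt = expit(-4.0 + 0.5 * doses)          # assumed logistic toxicity model
eff = 1.8 * np.log1p(doses)                # stand-in for the nonparametric efficacy fit

gain = eff * (1 - p_dlt) - 2.0 * p_dlt     # penalize toxic doses
print(doses[np.argmax(gain)])              # candidate 'optimal dose'
```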

16.
Model-based dose-finding methods for a combination therapy involving two agents in phase I oncology trials typically include four design aspects, namely: the size of the patient cohort, the three-parameter dose-toxicity model, the choice of start-up rule, and whether or not to include a restriction on dose-level skipping. The effect of each design aspect on the operating characteristics of the dose-finding method has not been adequately studied, although some studies have compared the performance of rival dose-finding methods using the design aspects outlined in the original studies. In this study, we considered these four well-known design aspects and evaluated the independent effect of each on the operating characteristics of dose-finding methods that include them. We performed simulation studies to examine the effect of these design aspects on the determination of the true maximum tolerated dose combinations (MTDCs) as well as on exposure to unacceptably toxic dose combinations (UTDCs). The results demonstrated that the selection rates of MTDCs and UTDCs vary depending on the patient cohort size and restrictions on dose-level skipping, whereas the three-parameter dose-toxicity models and start-up rules did not affect these parameters. Copyright © 2016 John Wiley & Sons, Ltd.

17.
Incorporating historical data has great potential to improve the efficiency of phase I clinical trials and to accelerate drug development. For model-based designs, such as the continual reassessment method (CRM), this can be conveniently carried out by specifying a "skeleton," that is, the prior estimate of the dose limiting toxicity (DLT) probability at each dose. In contrast, little work has been done to incorporate historical data into model-assisted designs, such as the Bayesian optimal interval (BOIN), Keyboard, and modified toxicity probability interval (mTPI) designs. This has led to the misconception that model-assisted designs cannot incorporate prior information. In this paper, we propose a unified framework that allows for incorporating historical data into model-assisted designs. The proposed approach uses the well-established "skeleton" approach, combined with the concept of prior effective sample size, thus it is easy to understand and use. More importantly, our approach maintains the hallmark of model-assisted designs: simplicity—the dose escalation/de-escalation rule can be tabulated prior to trial conduct. Extensive simulation studies show that the proposed method can effectively incorporate prior information to improve the operating characteristics of model-assisted designs, similarly to model-based designs.
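A minimal sketch of one way to encode a skeleton plus a prior effective sample size as Beta pseudo-counts per dose, in the spirit of the framework described above; the paper's exact construction may differ, and the numbers are hypothetical.

```python
# Sketch: skeleton + prior ESS -> Beta priors per dose for an
# interval-type (model-assisted) design.
skeleton = [0.08, 0.15, 0.30, 0.45]    # hypothetical prior DLT probabilities
prior_ess = 3                          # historical information worth 3 patients/dose

priors = [(p * prior_ess, (1 - p) * prior_ess) for p in skeleton]
for dose, (a, b) in enumerate(priors, start=1):
    print(f"dose {dose}: Beta({a:.2f}, {b:.2f})")
# Posterior after observing x DLTs in n patients at a dose:
# Beta(a + x, b + n - x), which the pre-tabulated escalation rules can use.
```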

18.
In early phase dose-finding cancer studies, the objective is to determine the maximum tolerated dose, defined as the highest dose with an acceptable dose-limiting toxicity rate. Finding this dose for drug-combination trials is complicated because of drug–drug interactions, and many trial designs have been proposed to address this issue. These designs rely on complicated statistical models that typically are not familiar to clinicians, and are rarely used in practice. The aim of this paper is to propose a Bayesian dose-finding design for drug combination trials based on standard logistic regression. Under the proposed design, we continuously update the posterior estimates of the model parameters to make the decisions of dose assignment and early stopping. Simulation studies show that the proposed design is competitive and outperforms some existing designs. We also extend our design to handle delayed toxicities. Copyright © 2014 John Wiley & Sons, Ltd.
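A minimal sketch of the kind of two-drug logistic model such a design builds on, P(DLT) = logit^{-1}(b0 + b1*d1 + b2*d2), fit here by maximum likelihood as a stand-in for the paper's posterior updating; the doses and outcomes are hypothetical.

```python
# Sketch: standard logistic regression of DLT on two standardized doses.
import numpy as np
import statsmodels.api as sm

d1 = np.array([0.2, 0.2, 0.4, 0.4, 0.6, 0.6])   # standardized dose, drug 1
d2 = np.array([0.2, 0.4, 0.2, 0.4, 0.4, 0.6])   # standardized dose, drug 2
y = np.array([0, 1, 0, 0, 1, 1])                # DLT indicator per patient

X = sm.add_constant(np.column_stack([d1, d2]))
fit = sm.GLM(y, X, family=sm.families.Binomial()).fit()
print(fit.params)         # b0, b1, b2
print(fit.predict(X))     # estimated DLT probability at each tried pair
```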

19.
Drug combinations in preclinical tumor xenograft studies are often assessed using fixed doses. Assessing the joint action of drug combinations with fixed doses has not been well developed in the literature. Here, an interaction index is proposed for fixed-dose drug combinations in a subcutaneous tumor xenograft model. Furthermore, a bootstrap percentile interval of the interaction index is also developed. The joint action of two drugs can be assessed on the basis of confidence limits of the interaction index. Tumor xenograft data from actual two-drug combination studies are analyzed to illustrate the proposed method. Copyright © 2013 John Wiley & Sons, Ltd.
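A minimal sketch of a bootstrap percentile interval for an interaction index. The index below compares the observed combination effect with a Bliss-independence prediction from the two monotherapies, a generic stand-in rather than the paper's xenograft-specific index; all data are made up.

```python
# Sketch: bootstrap percentile CI for a combination interaction index.
import numpy as np

rng = np.random.default_rng(3)
# hypothetical fractional tumor-growth inhibition per animal
drug_a = np.array([0.35, 0.42, 0.30, 0.38, 0.41])
drug_b = np.array([0.28, 0.25, 0.33, 0.30, 0.27])
combo  = np.array([0.60, 0.66, 0.58, 0.71, 0.63])

def index(a, b, c):
    bliss = a.mean() + b.mean() - a.mean() * b.mean()   # expected under independence
    return c.mean() / bliss          # >1 suggests synergy, <1 antagonism (this convention)

boot = [index(rng.choice(drug_a, drug_a.size, replace=True),
              rng.choice(drug_b, drug_b.size, replace=True),
              rng.choice(combo, combo.size, replace=True))
        for _ in range(5_000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"index = {index(drug_a, drug_b, combo):.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```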

20.
Baseline adjustment is an important consideration in thorough QT studies for nonantiarrhythmic drugs. For crossover studies with period-specific baseline days, we propose an analysis of covariance model with change from time-matched baseline as response, time-matched baseline for the current treatment, day-averaged baseline for the current treatment, time-matched baseline averaged across treatments, and day-averaged baseline averaged across treatments as covariates. This model adjusts for within-subject diurnal effects for each treatment and is more efficient than commonly used models for treatment comparisons. We illustrate the benefit using real clinical trial data. Copyright © 2013 John Wiley & Sons, Ltd.
