Similar Literature
20 similar records retrieved.
1.
Various statistical models have been proposed for two‐dimensional dose finding in drug‐combination trials. However, it is often a dilemma to decide which model to use when conducting a particular drug‐combination trial. We make a comprehensive comparison of four dose‐finding methods, and for fairness, we apply the same dose‐finding algorithm under the four model structures. Through extensive simulation studies, we compare the operating characteristics of these methods in various practical scenarios. The results show that different models may lead to different design properties and that no single model performs uniformly better in all scenarios. As a result, we propose using Bayesian model averaging to overcome the arbitrariness of the model specification and enhance the robustness of the design. We assign a discrete probability mass to each model as the prior model probability and then estimate the toxicity probabilities of combined doses in the Bayesian model averaging framework. During the trial, we adaptively allocate each new cohort of patients to the most appropriate dose combination by comparing the posterior estimates of the toxicity probabilities with the prespecified toxicity target. The simulation results demonstrate that the Bayesian model averaging approach is robust under various scenarios. Copyright © 2015 John Wiley & Sons, Ltd.
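As a hedged illustration of the model-averaging step, the sketch below applies the same idea to a simplified single-agent setting (the paper treats drug combinations): several candidate skeletons under a one-parameter power model are weighted by their posterior model probabilities, and the averaged toxicity estimates drive the next dose choice. The skeletons, prior, and data are illustrative, not taken from the paper.

```python
import numpy as np

# Candidate models: each a skeleton (prior guess of the DLT probability at each dose)
# under a one-parameter power model p_j = skeleton_j ** exp(a), with a ~ N(0, prior_sd^2).
skeletons = [
    np.array([0.05, 0.10, 0.20, 0.30, 0.45]),
    np.array([0.10, 0.20, 0.30, 0.45, 0.60]),
    np.array([0.02, 0.06, 0.12, 0.20, 0.30]),
]
prior_model_prob = np.full(len(skeletons), 1.0 / len(skeletons))
target, prior_sd = 0.30, 2.0

# Observed data: patients treated and DLTs per dose level.
n = np.array([3, 6, 3, 0, 0])
y = np.array([0, 1, 1, 0, 0])

grid = np.linspace(-5, 5, 2001)                       # grid over the model parameter a
da = grid[1] - grid[0]
prior_a = np.exp(-0.5 * (grid / prior_sd) ** 2)

marg_lik, post_tox = [], []
for skel in skeletons:
    p = np.clip(skel[None, :] ** np.exp(grid)[:, None], 1e-12, 1 - 1e-12)
    loglik = (y * np.log(p) + (n - y) * np.log(1 - p)).sum(axis=1)
    w = np.exp(loglik) * prior_a
    marg_lik.append((w * da).sum())                   # unnormalised marginal likelihood
    w /= w.sum()                                      # posterior weights for a under this model
    post_tox.append((w[:, None] * p).sum(axis=0))     # posterior mean toxicity per dose

post_model_prob = prior_model_prob * np.array(marg_lik)
post_model_prob /= post_model_prob.sum()

# Bayesian model averaging: weight each model's estimates by its posterior probability.
bma_tox = sum(pm * est for pm, est in zip(post_model_prob, post_tox))
next_dose = int(np.argmin(np.abs(bma_tox - target)))
print("posterior model probabilities:", np.round(post_model_prob, 3))
print("BMA toxicity estimates:", np.round(bma_tox, 3), "-> next dose index:", next_dose)
```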

2.
The continual reassessment method (CRM) is a commonly used dose-finding design for phase I clinical trials. Practical applications of this method have been restricted by two limitations: (1) the requirement that the toxicity outcome needs to be observed shortly after the initiation of the treatment; and (2) the potential sensitivity to the prespecified toxicity probability at each dose. To overcome these limitations, we naturally treat the unobserved toxicity outcomes as missing data, and use the expectation-maximization (EM) algorithm to estimate the dose toxicity probabilities based on the incomplete data to direct dose assignment. To enhance the robustness of the design, we propose prespecifying multiple sets of toxicity probabilities, each set corresponding to an individual CRM model. We carry out these multiple CRMs in parallel, across which model selection and model averaging procedures are used to make more robust inference. We evaluate the operating characteristics of the proposed robust EM-CRM designs through simulation studies and show that the proposed methods satisfactorily resolve both limitations of the CRM. Besides improving the MTD selection percentage, the new designs dramatically shorten the duration of the trial, and are robust to the prespecification of the toxicity probabilities.
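The toy sketch below illustrates the EM idea on a one-parameter CRM with some pending (missing) toxicity outcomes. It is a deliberate simplification: the published design weights pending patients by their follow-up time, whereas this version treats the missingness as ignorable; the skeleton and data are made up.

```python
import numpy as np
from scipy.optimize import minimize_scalar

skeleton = np.array([0.05, 0.12, 0.22, 0.35, 0.50])   # prespecified toxicity probabilities
target = 0.25

# Each patient: (dose index, outcome); outcome is 1 = DLT, 0 = no DLT, None = still pending.
patients = [(0, 0), (0, 0), (1, 0), (1, 1), (2, None), (2, None), (2, 0)]
dose = np.array([d for d, _ in patients])
y_obs = np.array([np.nan if o is None else float(o) for _, o in patients])
observed = ~np.isnan(y_obs)

def tox_prob(theta):
    return skeleton ** np.exp(theta)

theta = 0.0
for _ in range(200):                                   # EM iterations
    p = tox_prob(theta)[dose]
    # E-step: fill each pending outcome with its expected value under the current fit.
    # (The published design would additionally weight by follow-up time.)
    y_fill = np.where(observed, y_obs, p)

    # M-step: maximize the completed-data log-likelihood over theta.
    def neg_loglik(t):
        q = np.clip(tox_prob(t)[dose], 1e-10, 1 - 1e-10)
        return -np.sum(y_fill * np.log(q) + (1 - y_fill) * np.log(1 - q))

    theta_new = minimize_scalar(neg_loglik, bounds=(-5, 5), method="bounded").x
    if abs(theta_new - theta) < 1e-8:
        theta = theta_new
        break
    theta = theta_new

est = tox_prob(theta)
next_dose = int(np.argmin(np.abs(est - target)))
print("estimated toxicity probabilities:", np.round(est, 3), "-> next dose index:", next_dose)
```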

3.

Bayesian monitoring strategies based on predictive probabilities are widely used in phase II clinical trials that involve a single binary efficacy variable. The essential idea is to control the predictive probability that the trial will show a conclusive result at the scheduled end of the study, given the information at the interim stage and the prior beliefs. In this paper, we present an extension of this approach to incorporate toxicity considerations in single-arm phase II trials. We consider two binary endpoints representing response and toxicity of the experimental treatment and define the result as successful at the conclusion of the study if the posterior probability of a high efficacy and that of a small toxicity are both sufficiently large. At any interim look, the Multinomial-Dirichlet distribution provides the predictive probability of each possible combination of future efficacy and toxicity outcomes. This is exploited to obtain the predictive probability that the trial will yield a positive outcome, if it continues to the planned end. Different possible interim situations are considered to investigate the behaviour of the proposed predictive rules, and the differences with the monitoring strategies based on posterior probabilities are highlighted. Simulation studies are also performed to evaluate the frequentist operating characteristics of the proposed design and to calibrate the design parameters.
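A minimal sketch of the interim calculation described above, assuming illustrative prior parameters, interim counts, and decision thresholds: future efficacy and toxicity outcomes are enumerated under the Dirichlet-multinomial predictive distribution, and the trial is counted as successful when both marginal posterior probabilities at the planned end exceed their thresholds.

```python
import numpy as np
from itertools import product
from scipy.special import gammaln
from scipy.stats import beta

# Cells indexed by (efficacy, toxicity): 0=(1,1), 1=(1,0), 2=(0,1), 3=(0,0).
alpha = np.array([0.25, 0.25, 0.25, 0.25])    # Dirichlet prior (illustrative)
x = np.array([2, 8, 1, 5])                    # interim counts per cell
n_max, p0, q0 = 40, 0.40, 0.30                # planned total, efficacy/toxicity limits
lam_e, lam_t = 0.90, 0.80                     # posterior thresholds for declaring success
m = n_max - x.sum()                           # patients still to be enrolled
a_post = alpha + x

def dirichlet_multinomial_logpmf(f, a):
    """Log pmf of future cell counts f under a Dirichlet-multinomial with parameter a."""
    n = f.sum()
    return (gammaln(n + 1) - gammaln(f + 1).sum()
            + gammaln(a.sum()) - gammaln(a.sum() + n)
            + (gammaln(a + f) - gammaln(a)).sum())

def success(counts):
    """Final-analysis rule: both marginal posterior probabilities sufficiently large."""
    a = alpha + counts
    a_eff, b_eff = a[0] + a[1], a[2] + a[3]   # marginal Beta for the response probability
    a_tox, b_tox = a[0] + a[2], a[1] + a[3]   # marginal Beta for the toxicity probability
    return (beta.sf(p0, a_eff, b_eff) > lam_e) and (beta.cdf(q0, a_tox, b_tox) > lam_t)

pp = 0.0
for f0, f1, f2 in product(range(m + 1), repeat=3):   # enumerate possible future outcomes
    if f0 + f1 + f2 > m:
        continue
    f = np.array([f0, f1, f2, m - f0 - f1 - f2])
    if success(x + f):
        pp += np.exp(dirichlet_multinomial_logpmf(f, a_post))

print(f"predictive probability of a successful trial: {pp:.3f}")
# e.g. continue the trial only if pp lies between prespecified futility and efficacy bounds
```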


4.
This paper studies the notion of coherence in interval‐based dose‐finding methods. An incoherent decision is either (a) a recommendation to escalate the dose following an observed dose‐limiting toxicity or (b) a recommendation to deescalate the dose following a non–dose‐limiting toxicity. In a simulated example, we illustrate that the Bayesian optimal interval method and the Keyboard method are not coherent. We generated dose‐limiting toxicity outcomes under an assumed set of true probabilities for a trial of n=36 patients in cohorts of size 1, and we counted the number of incoherent dosing decisions that were made throughout this simulated trial. Each of the methods studied resulted in 13/36 (36%) incoherent decisions in the simulated trial. Additionally, for two different target dose‐limiting toxicity rates, 20% and 30%, and a sample size of n=30 patients, we randomly generated 100 dose‐toxicity curves and tabulated the number of incoherent decisions made by each method in 1000 simulated trials under each curve. For each method studied, the probability of incurring at least one incoherent decision during the conduct of a single trial is greater than 75%. Coherency is an important principle in the conduct of dose‐finding trials. Interval‐based methods violate this principle for cohorts of size 1 and require additional modifications to overcome this shortcoming. Researchers need to take a closer look at the dose assignment behavior of interval‐based methods when using them to plan dose‐finding studies.
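The following sketch reproduces this kind of bookkeeping for the Bayesian optimal interval (BOIN) rule with cohorts of size 1, using the standard default boundaries. It omits BOIN's overdose-elimination rule, and the true dose-toxicity curve and random seed are arbitrary rather than those used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
phi = 0.30                        # target DLT rate
phi1, phi2 = 0.6 * phi, 1.4 * phi
lam_e = np.log((1 - phi1) / (1 - phi)) / np.log(phi * (1 - phi1) / (phi1 * (1 - phi)))
lam_d = np.log((1 - phi) / (1 - phi2)) / np.log(phi2 * (1 - phi) / (phi * (1 - phi2)))
# default BOIN boundaries: lam_e ~ 0.236, lam_d ~ 0.358

true_tox = np.array([0.08, 0.15, 0.30, 0.45, 0.60])   # assumed true DLT curve
n_patients, n_dose = 36, len(true_tox)

def run_trial():
    n = np.zeros(n_dose)          # patients treated per dose
    y = np.zeros(n_dose)          # DLTs observed per dose
    d, incoherent = 0, 0
    for _ in range(n_patients):
        dlt = rng.random() < true_tox[d]
        n[d] += 1
        y[d] += dlt
        phat = y[d] / n[d]
        if phat <= lam_e:
            new_d = min(d + 1, n_dose - 1)
        elif phat >= lam_d:
            new_d = max(d - 1, 0)
        else:
            new_d = d
        # incoherent: escalate right after a DLT, or de-escalate right after a non-DLT
        if (new_d > d and dlt) or (new_d < d and not dlt):
            incoherent += 1
        d = new_d
    return incoherent

counts = np.array([run_trial() for _ in range(1000)])
print("mean incoherent decisions per trial:", counts.mean())
print("P(at least one incoherent decision):", (counts > 0).mean())
```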

5.
We develop a transparent and efficient two-stage nonparametric (TSNP) phase I/II clinical trial design to identify the optimal biological dose (OBD) of immunotherapy. We propose a nonparametric approach to derive the closed-form estimates of the joint toxicity–efficacy response probabilities under the monotonic increasing constraint for the toxicity outcomes. These estimates are then used to measure the immunotherapy's toxicity–efficacy profiles at each dose and guide the dose finding. The first stage of the design aims to explore the toxicity profile. The second stage aims to find the OBD, which can achieve the optimal therapeutic effect by considering both the toxicity and efficacy outcomes through a utility function. The closed-form estimates and concise dose-finding algorithm make the TSNP design appealing in practice. The simulation results show that the TSNP design yields superior operating characteristics compared with existing Bayesian parametric designs. User-friendly computational software is freely available to facilitate the application of the proposed design to real trials. We provide comprehensive illustrations and examples of implementing the proposed design with the associated software.
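The TSNP closed-form estimates are not reproduced here; the sketch below only illustrates the two ingredients the abstract names, under assumed data and an assumed utility: a monotone (isotonic) toxicity estimate obtained by pool-adjacent-violators, and a utility-based OBD selection among doses with acceptable toxicity.

```python
import numpy as np

def pava(y, w):
    """Weighted pool-adjacent-violators for a non-decreasing fit."""
    blocks = [[yi, wi, 1] for yi, wi in zip(y, w)]   # [block mean, weight, size]
    i = 0
    while i < len(blocks) - 1:
        if blocks[i][0] > blocks[i + 1][0] + 1e-12:
            m1, w1, s1 = blocks[i]
            m2, w2, s2 = blocks[i + 1]
            blocks[i:i + 2] = [[(m1 * w1 + m2 * w2) / (w1 + w2), w1 + w2, s1 + s2]]
            i = max(i - 1, 0)
        else:
            i += 1
    fit = []
    for m, _, s in blocks:
        fit.extend([m] * s)
    return np.array(fit)

n = np.array([6, 6, 9, 6, 3])                 # patients per dose (illustrative)
tox = np.array([0, 1, 2, 2, 2])               # DLTs per dose
eff = np.array([1, 2, 5, 3, 2])               # responses per dose

p_tox = pava(tox / n, n)                      # monotone non-decreasing toxicity estimates
p_eff = eff / n                               # efficacy left unconstrained here
phi_t = 0.35                                  # maximum acceptable toxicity

# Simple utility: reward efficacy, penalize toxicity (the weights are illustrative,
# not the TSNP utility). Assumes at least one dose is admissible.
utility = p_eff - 1.5 * p_tox
admissible = p_tox <= phi_t
obd = int(np.argmax(np.where(admissible, utility, -np.inf)))
print("isotonic toxicity estimates:", np.round(p_tox, 3))
print("selected OBD index:", obd)
```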

6.
Model‐based phase I dose‐finding designs rely on a single model throughout the study for estimating the maximum tolerated dose (MTD). Thus, one major concern is the choice of the most suitable model to be used. This is important because the dose allocation process and the MTD estimation depend on whether or not the model is reliable, or whether or not it gives a better fit to the toxicity data. The aim of our work was to propose a method that would remove the need for a model choice prior to the trial onset and instead allow it sequentially at each patient's inclusion. In this paper, we describe a model checking approach based on the posterior predictive check and a model comparison approach based on the deviance information criterion, in order to identify a more reliable or better model during the course of a trial and to support clinical decision making. Further, we present two model switching designs for a phase I cancer trial based on the aforementioned approaches, and perform a comparison between designs with or without model switching through a simulation study. The results showed that the proposed designs had the advantage of decreasing certain risks, such as those of poor dose allocation and failure to find the MTD, which could occur if the model is misspecified. Copyright © 2013 John Wiley & Sons, Ltd.
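As a hedged illustration of the DIC-based comparison, the sketch below computes the DIC of two candidate one-parameter power-model CRMs on the same toxicity data via a grid posterior; the skeletons, prior standard deviation, and data are illustrative, and the posterior predictive check is not shown.

```python
import numpy as np

def dic_power_model(skeleton, dose_idx, dlt, prior_sd=1.34):
    """DIC of a one-parameter power-model CRM (p_j = skeleton_j ** exp(a)), grid posterior."""
    grid = np.linspace(-4, 4, 2001)
    prior = np.exp(-0.5 * (grid / prior_sd) ** 2)

    def loglik(a):
        p = np.clip(skeleton ** np.exp(a), 1e-12, 1 - 1e-12)
        pt = p[dose_idx]
        return np.sum(dlt * np.log(pt) + (1 - dlt) * np.log(1 - pt))

    ll = np.array([loglik(a) for a in grid])
    w = np.exp(ll) * prior
    w /= w.sum()                                    # normalized posterior weights on the grid
    d_bar = np.sum(w * (-2.0 * ll))                 # posterior mean deviance
    a_bar = np.sum(w * grid)                        # posterior mean of the parameter
    d_at_mean = -2.0 * loglik(a_bar)
    return d_bar + (d_bar - d_at_mean)              # DIC = D_bar + p_D

dose_idx = np.array([0, 0, 1, 1, 1, 2, 2, 2, 2])     # dose given to each patient
dlt = np.array([0, 0, 0, 1, 0, 1, 1, 0, 1])          # observed DLT outcomes
model_A = np.array([0.05, 0.15, 0.30, 0.45, 0.60])   # two candidate working models
model_B = np.array([0.20, 0.30, 0.40, 0.50, 0.60])

dics = {"A": dic_power_model(model_A, dose_idx, dlt),
        "B": dic_power_model(model_B, dose_idx, dlt)}
print(dics, "-> continue the trial with model", min(dics, key=dics.get))
```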

7.
Patient heterogeneity may complicate dose‐finding in phase 1 clinical trials if the dose‐toxicity curves differ between subgroups. Conducting separate trials within subgroups may lead to infeasibly small sample sizes in subgroups having low prevalence. Alternatively, it is not obvious how to conduct a single trial while accounting for heterogeneity. To address this problem, we consider a generalization of the continual reassessment method on the basis of a hierarchical Bayesian dose‐toxicity model that borrows strength between subgroups under the assumption that the subgroups are exchangeable. We evaluate a design using this model that includes subgroup‐specific dose selection and safety rules. A simulation study is presented that includes comparison of this method to 3 alternative approaches, on the basis of nonhierarchical models, that make different types of assumptions about within‐subgroup dose‐toxicity curves. The simulations show that the hierarchical model‐based method is recommended in settings where the dose‐toxicity curves are exchangeable between subgroups. We present practical guidelines for application and provide computer programs for trial simulation and conduct.

8.
Incorporating historical data has a great potential to improve the efficiency of phase I clinical trials and to accelerate drug development. For model-based designs, such as the continual reassessment method (CRM), this can be conveniently carried out by specifying a “skeleton,” that is, the prior estimate of dose limiting toxicity (DLT) probability at each dose. In contrast, little work has been done to incorporate historical data into model-assisted designs, such as the Bayesian optimal interval (BOIN), Keyboard, and modified toxicity probability interval (mTPI) designs. This has led to the misconception that model-assisted designs cannot incorporate prior information. In this paper, we propose a unified framework that allows for incorporating historical data into model-assisted designs. The proposed approach uses the well-established “skeleton” approach, combined with the concept of prior effective sample size, thus it is easy to understand and use. More importantly, our approach maintains the hallmark of model-assisted designs, namely simplicity: the dose escalation/de-escalation rule can be tabulated prior to the trial conduct. Extensive simulation studies show that the proposed method can effectively incorporate prior information to improve the operating characteristics of model-assisted designs, similarly to model-based designs.
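The sketch below is one plausible rendering of the skeleton-plus-prior-effective-sample-size idea, not necessarily the paper's exact rule: the skeleton and a prior ESS define informative Beta pseudo-counts at each dose, which are then folded into a Keyboard-style interval decision, so the rule could still be tabulated before the trial.

```python
import numpy as np
from scipy.stats import beta

target, key_width = 0.30, 0.10
skeleton = np.array([0.10, 0.20, 0.30, 0.45, 0.60])   # prior DLT estimates (e.g. from history)
prior_ess = 3.0                                        # prior effective sample size per dose

# Informative Beta prior at each dose: prior mean = skeleton, prior "sample size" = ESS.
a0 = skeleton * prior_ess
b0 = (1 - skeleton) * prior_ess

# Keyboard-style keys: equal-width intervals with the target key centred at the target rate.
half = key_width / 2
left = np.arange(target - half, 0, -key_width)[::-1]
right = np.arange(target + half, 1, key_width)
edges = np.concatenate(([0.0], left, right, [1.0]))
target_key = np.searchsorted(edges, target) - 1

def decision(dose, n, y):
    """Interval ('keyboard') decision with historical information folded into the prior."""
    a, b = a0[dose] + y, b0[dose] + (n - y)
    key_prob = beta.cdf(edges[1:], a, b) - beta.cdf(edges[:-1], a, b)
    strongest = int(np.argmax(key_prob))
    if strongest < target_key:
        return "escalate"
    if strongest > target_key:
        return "de-escalate"
    return "stay"

# e.g. 2 DLTs in 6 patients at dose level 2 (0-based); the informative prior shifts the call
print(decision(dose=2, n=6, y=2))
```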

9.
In studies of combinations of agents in phase I oncology trials, the dose–toxicity relationship may not be monotone for all combinations, in which case the toxicity probabilities follow a partial order. The continual reassessment method for partial orders (PO‐CRM) is a design for phase I trials of combinations that leans upon identifying possible complete orders associated with the partial order. This article addresses some practical design considerations not previously undertaken when describing the PO‐CRM. We describe an approach in choosing a proper subset of possible orderings, formulated according to the known toxicity relationships within a matrix of combination therapies. Other design issues, such as working model selection and stopping rules, are also discussed. We demonstrate the practical ability of PO‐CRM as a phase I design for combinations through its use in a recent trial designed at the University of Virginia Cancer Center. Copyright © 2013 John Wiley & Sons, Ltd.

10.
In this article, we extend a previously formulated threshold dose-response model with random litter effects that was applied to a data set from a developmental toxicity study. The dose-response pattern of the data indicates that a threshold dose level may exist. Additionally, there is noticeable variation between the responses across the dose levels. With threshold estimation being critical, the assumed variability structure should adequately model the variation while not taking away from the estimation of the threshold as well as the other parameters directly involved in the dose-response relationship. In the prior formulation, the random effect was modeled assuming identical variation in the interlitter response probabilities across all dose levels, that is, the model had a single parameter to account for the interlitter variability. In this new model, the random effect is modeled as having different response variability across dose levels, that is, multiple interlitter variability parameters. We performed the likelihood ratio test (LRT) to compare our extended model to the previous model. We conducted a simulation study to compare the bias of each model when fit to data generated with the underlying parametric structure of the opposing model. The extended threshold dose-response model with multiple response variation was less biased.

11.
Phase I studies of a cytotoxic agent often aim to identify the dose that provides an investigator specified target dose-limiting toxicity (DLT) probability. In practice, an initial cohort receives a dose with a putative low DLT probability, and subsequent dosing follows by consecutively deciding whether to retain the current dose, escalate to the adjacent higher dose, or de-escalate to the adjacent lower dose. This article proposes a Phase I design derived using a Bayesian decision-theoretic approach to this sequential decision-making process. The design consecutively chooses the action that minimizes posterior expected loss where the loss reflects the distance on the log-odds scale between the target and the DLT probability of the dose that would be given to the next cohort under the corresponding action. A logistic model is assumed for the log odds of a DLT at the current dose with a weakly informative t-distribution prior centered at the target. The key design parameters are the pre-specified odds ratios for the DLT probabilities at the adjacent higher and lower doses. Dosing rules may be pre-tabulated, as these only depend on the outcomes at the current dose, which greatly facilitates implementation. The recommended default version of the proposed design improves dose selection relative to many established designs across a variety of scenarios.
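The sketch below is a speculative reconstruction of the decision rule described above: a grid posterior for the log odds of DLT at the current dose under a t prior centred at the target, with an absolute-error loss on the log-odds scale and assumed odds ratios to the adjacent doses. The prior degrees of freedom, scale, odds ratios, loss form, and data are all assumptions, not the paper's defaults.

```python
import numpy as np
from scipy.stats import t as t_dist
from scipy.special import expit, logit

target = 0.30
log_or_up, log_or_down = np.log(2.0), np.log(2.0)    # assumed odds ratios to adjacent doses
y, n = 2, 9                                          # DLTs / patients at the current dose

grid = np.linspace(-8, 8, 4001)                      # theta = log-odds of DLT at current dose
prior = t_dist.pdf(grid, df=3, loc=logit(target), scale=2.0)   # weakly informative t prior
p = expit(grid)
lik = p ** y * (1 - p) ** (n - y)
post = prior * lik
post /= post.sum()                                   # normalized posterior weights on the grid

# Loss of an action = |logit(target) - log-odds of the dose the next cohort would receive|.
shift = {"de-escalate": -log_or_down, "stay": 0.0, "escalate": log_or_up}
exp_loss = {a: np.sum(post * np.abs(logit(target) - (grid + s))) for a, s in shift.items()}
print(exp_loss, "-> action:", min(exp_loss, key=exp_loss.get))
```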

12.
Treatment during cancer clinical trials sometimes involves the combination of multiple drugs. In addition, in recent years there has been a trend toward phase I/II trials in which a phase I and a phase II trial are combined into a single trial to accelerate drug development. Methods for the seamless combination of the phase I and phase II parts are currently under investigation. In the phase II part, adaptive randomization on the basis of patient efficacy outcomes allocates more patients to the dose combinations considered to have higher efficacy. Patient toxicity outcomes are used for determining admissibility to each dose combination and are not used for selection of the dose combination itself. When the objective is to find the optimum dose combination with respect to both toxicity and efficacy rather than efficacy alone, patients need to be allocated to dose combinations in a way that balances the trade‐off between toxicity and efficacy. We propose a Bayesian hierarchical model and an adaptive randomization scheme that account for this relationship between toxicity and efficacy. Using the toxicity and efficacy outcomes of patients, the Bayesian hierarchical model is used to estimate the toxicity probability and efficacy probability in each of the dose combinations. Here, we use Bayesian moving‐reference adaptive randomization on the basis of desirability computed from the obtained estimators. Computer simulations suggest that the proposed method will likely recommend a higher percentage of target dose combinations than a previously proposed method.

13.
The goal of a phase I clinical trial in oncology is to find a dose with acceptable dose‐limiting toxicity rate. Often, when a cytostatic drug is investigated or when the maximum tolerated dose is defined using a toxicity score, the main endpoint in a phase I trial is continuous. We propose a new method to use in a dose‐finding trial with continuous endpoints. The new method selects the right dose on par with other methods and provides more flexibility in assigning patients to doses in the course of the trial when the rate of accrual is fast relative to the follow‐up time. Copyright © 2014 John Wiley & Sons, Ltd.

14.
A standard two-arm randomised controlled trial usually compares an intervention to a control treatment with equal numbers of patients randomised to each treatment arm and only data from within the current trial are used to assess the treatment effect. Historical data are used when designing new trials and have recently been considered for use in the analysis when the required number of patients under a standard trial design cannot be achieved. Incorporating historical control data could lead to more efficient trials, reducing the number of controls required in the current study when the historical and current control data agree. However, when the data are inconsistent, there is potential for biased treatment effect estimates, inflated type I error and reduced power. We introduce two novel approaches for binary data which discount historical data based on the agreement with the current trial controls, an equivalence approach and an approach based on tail area probabilities. An adaptive design is used where the allocation ratio is adapted at the interim analysis, randomising fewer patients to control when there is agreement. The historical data are down-weighted in the analysis using the power prior approach with a fixed power. We compare operating characteristics of the proposed design to historical data methods in the literature: the modified power prior; commensurate prior; and robust mixture prior. The equivalence probability weight approach is intuitive and the operating characteristics can be calculated exactly. Furthermore, the equivalence bounds can be chosen to control the maximum possible inflation in type I error.
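A minimal sketch of the power prior down-weighting used in the analysis, with a fixed power and illustrative counts; in the proposed designs the amount of borrowing would instead be governed by the equivalence or tail-area agreement measure.

```python
from scipy.stats import beta

# Historical and current control data (binary endpoint); the numbers are illustrative.
y_hist, n_hist = 18, 60
y_curr, n_curr = 8, 30
a0_weight = 0.5            # fixed power-prior weight; in the paper's designs this degree of
                           # borrowing is tied to an agreement (equivalence / tail-area) measure

# Power prior with a Beta(1, 1) initial prior: historical counts enter with weight a0_weight.
a_prior, b_prior = 1.0, 1.0
a_post = a_prior + a0_weight * y_hist + y_curr
b_post = b_prior + a0_weight * (n_hist - y_hist) + (n_curr - y_curr)

posterior = beta(a_post, b_post)
print(f"posterior mean control rate: {posterior.mean():.3f}")
print("95% credible interval:", posterior.ppf([0.025, 0.975]))
```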

15.
In modern oncology drug development, adaptive designs have been proposed to identify the recommended phase 2 dose. Conventional dose finding designs focus on the identification of the maximum tolerated dose (MTD). However, designs ignoring efficacy could put patients at risk by pushing to the MTD. Especially in immuno-oncology and cell therapy, the complex dose-toxicity and dose-efficacy relationships make such MTD-driven designs more questionable. Additionally, it is not uncommon to have data available from other studies that target a similar mechanism of action and patient population. Given the high variability of phase I trials, it is beneficial to borrow historical study information into the design when available. This helps to increase model efficiency and accuracy and provides dose-specific recommendation rules that avoid toxic dose levels and increase the chance of allocating patients to potentially efficacious dose levels. In this paper, we propose the iBOIN-ET design, which uses prior distributions extracted from historical studies to minimize the probability of decision error. The proposed design utilizes the concept of a skeleton for both toxicity and efficacy data, coupled with a prior effective sample size to control the amount of historical information to be incorporated. Extensive simulation studies across a variety of realistic settings are reported, including a comparison of the iBOIN-ET design to other model-based and model-assisted approaches. The proposed novel design demonstrates superior performance in the percentage of selecting the correct optimal dose (OD), the average number of patients allocated to the correct OD, and overdosing control during the dose escalation process.

16.
Designs for early phase dose finding clinical trials typically are either phase I based on toxicity, or phase I-II based on toxicity and efficacy. These designs rely on the implicit assumption that the dose of an experimental agent chosen using these short-term outcomes will maximize the agent's long-term therapeutic success rate. In many clinical settings, this assumption is not true. A dose selected in an early phase oncology trial may give suboptimal progression-free survival or overall survival time, often due to a high rate of relapse following response. To address this problem, a new family of Bayesian generalized phase I-II designs is proposed. First, a conventional phase I-II design based on short-term outcomes is used to identify a set of candidate doses, rather than selecting one dose. Additional patients then are randomized among the candidates, patients are followed for a predefined longer time period, and a final dose is selected to maximize the long-term therapeutic success rate, defined in terms of duration of response. Dose-specific sample sizes in the randomization are determined adaptively to obtain a desired level of selection reliability. The design was motivated by a phase I-II trial to find an optimal dose of natural killer cells as targeted immunotherapy for recurrent or treatment-resistant B-cell hematologic malignancies. A simulation study shows that, under a range of scenarios in the context of this trial, the proposed design has much better performance than two conventional phase I-II designs.

17.
A robust Bayesian design is presented for a single-arm phase II trial with an early stopping rule to monitor a time to event endpoint. The assumed model is a piecewise exponential distribution with non-informative gamma priors on the hazard parameters in subintervals of a fixed follow up interval. As an additional comparator, we also define and evaluate a version of the design based on an assumed Weibull distribution. Except for the assumed models, the piecewise exponential and Weibull model based designs are identical to an established design that assumes an exponential event time distribution with an inverse gamma prior on the mean event time. The three designs are compared by simulation under several log-logistic and Weibull distributions having different shape parameters, and for different monitoring schedules. The simulations show that, compared to the exponential inverse gamma model based design, the piecewise exponential design has substantially better performance, with much higher probabilities of correctly stopping the trial early, and shorter and less variable trial duration, when the assumed median event time is unacceptably low. Compared to the Weibull model based design, the piecewise exponential design does a much better job of maintaining small incorrect stopping probabilities in cases where the true median survival time is desirably large.
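The sketch below shows one plausible form of such a piecewise exponential monitoring rule under assumed hyperparameters and interim data: conjugate Gamma updates of the interval hazards, Monte Carlo draws of the implied median event time, and an early stop when the posterior probability of an unacceptably low median is high. The exact stopping criterion in the paper may be formulated differently.

```python
import numpy as np

rng = np.random.default_rng(2)
cuts = np.array([0.0, 3.0, 6.0, 12.0])        # follow-up split into subintervals (months)
a0, b0 = 0.01, 0.01                            # near non-informative Gamma(a0, b0) hazard priors
median_null = 6.0                              # unacceptably low median event time
stop_cut = 0.90                                # stop early if P(median < median_null | data) > 0.90

# Interim data per subinterval: events observed and total patient-time at risk (illustrative).
events = np.array([4, 3, 2])
exposure = np.array([30.0, 22.0, 25.0])

# Conjugate update: lambda_k | data ~ Gamma(a0 + events_k, b0 + exposure_k).
lam = rng.gamma(a0 + events, 1.0 / (b0 + exposure), size=(20000, len(events)))

def median_time(lam_row):
    """Median of a piecewise exponential: time at which the cumulative hazard reaches log 2."""
    widths = np.diff(cuts)
    cumhaz = np.concatenate(([0.0], np.cumsum(lam_row * widths)))
    target_h = np.log(2.0)
    if cumhaz[-1] < target_h:                 # median beyond the last cut: extend the last hazard
        return cuts[-1] + (target_h - cumhaz[-1]) / lam_row[-1]
    k = np.searchsorted(cumhaz, target_h) - 1
    return cuts[k] + (target_h - cumhaz[k]) / lam_row[k]

medians = np.apply_along_axis(median_time, 1, lam)
p_low = np.mean(medians < median_null)
print(f"P(median event time < {median_null} | data) = {p_low:.3f}",
      "-> stop early" if p_low > stop_cut else "-> continue")
```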

18.
We propose a phase I clinical trial design that seeks to determine the cumulative safety of a series of administrations of a fixed dose of an investigational agent. In contrast with traditional phase I trials that are designed solely to find the maximum tolerated dose of the agent, our design instead identifies a maximum tolerated schedule that includes a maximum tolerated dose as well as a vector of recommended administration times. Our model is based on a non-mixture cure model that constrains the probability of dose limiting toxicity for all patients to increase monotonically with both dose and the number of administrations received. We assume a specific parametric hazard function for each administration and compute the total hazard of dose limiting toxicity for a schedule as a sum of individual administration hazards. Throughout a variety of settings motivated by an actual study in allogeneic bone marrow transplant recipients, we demonstrate that our approach has excellent operating characteristics and performs as well as the only other currently published design for schedule finding studies. We also present arguments for the preference of our non-mixture cure model over the existing model.

19.
Sequential administration of immunotherapy following radiotherapy (immunoRT) has attracted much attention in cancer research. Due to its unique feature that radiotherapy upregulates the expression of a predictive biomarker for immunotherapy, novel clinical trial designs are needed for immunoRT to identify patient subgroups and the optimal dose for each subgroup. In this article, we propose a Bayesian phase I/II design for immunotherapy administered after standard-dose radiotherapy for this purpose. We construct a latent subgroup membership variable and model it as a function of the baseline and pre-post radiotherapy change in the predictive biomarker measurements. Conditional on the latent subgroup membership of each patient, we jointly model the continuous immune response and the binary efficacy outcome using plateau models, and model toxicity using the equivalent toxicity score approach to account for toxicity grades. During the trial, based on accumulating data, we continuously update model estimates and adaptively randomize patients to admissible doses. Simulation studies and an illustrative trial application show that our design has good operating characteristics in terms of identifying both patient subgroups and the optimal dose for each subgroup.

20.
This study investigates the use of stratification to improve discrimination when prior probabilities vary across strata of a population of interest. Sources of heterogeneity in prior probabilities include differences in geographic locale, age differences in the population studied, or differences in the time component of the data collected. The article suggests using logistic regression both to identify the underlying stratification and to estimate prior probabilities. A simulation study compares misclassification rates under two alternative stratification schemes with the traditional discriminant approach that ignores stratification in favor of pooled prior estimates. The simulations show that large asymptotic gains can be realized by stratification, and that these gains can be realized in finite samples, given moderate differences in prior probabilities.
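A small sketch of the two-step idea under simulated data: logistic regression on the stratum indicator estimates the stratum-specific prior probabilities, which are then plugged into a Gaussian discriminant rule in place of pooled priors. The data-generating numbers are arbitrary illustrations, not those of the article's simulation study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from scipy.stats import multivariate_normal

rng = np.random.default_rng(3)

# Simulated training data: class prevalence differs across two strata (e.g. geographic regions).
n = 2000
stratum = rng.integers(0, 2, n)
prior_by_stratum = np.array([0.2, 0.6])                 # true P(class = 1 | stratum)
y = rng.random(n) < prior_by_stratum[stratum]
X = rng.normal(loc=np.where(y, 1.0, 0.0)[:, None], scale=1.0, size=(n, 2))

# Step 1: logistic regression on the stratum indicator estimates the prior in each stratum.
prior_model = LogisticRegression().fit(stratum.reshape(-1, 1), y)
est_prior = prior_model.predict_proba(np.array([[0], [1]]))[:, 1]

# Step 2: Gaussian class-conditional densities (shared across strata), stratum-specific priors.
mu0, mu1 = X[~y].mean(axis=0), X[y].mean(axis=0)
cov = np.cov(X.T)

def classify(x, s):
    """Discriminant rule using the estimated prior for the observation's stratum."""
    p1 = est_prior[s]
    f1 = multivariate_normal.pdf(x, mu1, cov) * p1
    f0 = multivariate_normal.pdf(x, mu0, cov) * (1 - p1)
    return int(f1 > f0)

print("estimated stratum priors:", np.round(est_prior, 3))
print("classify x=[0.5, 0.2] in stratum 0:", classify([0.5, 0.2], 0),
      "| in stratum 1:", classify([0.5, 0.2], 1))
```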
