Similar documents
20 similar documents retrieved (search time: 46 ms)
1.
Conditional power calculations are frequently used to guide the decision whether or not to stop a trial for futility or to modify planned sample size. These ignore the information in short‐term endpoints and baseline covariates, and thereby do not make fully efficient use of the information in the data. We therefore propose an interim decision procedure based on the conditional power approach which exploits the information contained in baseline covariates and short‐term endpoints. We will realize this by considering the estimation of the treatment effect at the interim analysis as a missing data problem. This problem is addressed by employing specific prediction models for the long‐term endpoint which enable the incorporation of baseline covariates and multiple short‐term endpoints. We show that the proposed procedure leads to an efficiency gain and a reduced sample size, without compromising the Type I error rate of the procedure, even when the adopted prediction models are misspecified. In particular, implementing our proposal in the conditional power approach enables earlier decisions relative to standard approaches, whilst controlling the probability of an incorrect decision. This time gain results in a lower expected number of recruited patients in case of stopping for futility, such that fewer patients receive the futile regimen. We explain how these methods can be used in adaptive designs with unblinded sample size re‐assessment based on the inverse normal P‐value combination method to control Type I error. We support the proposal by Monte Carlo simulations based on data from a real clinical trial.  相似文献   
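
The procedure above extends the standard conditional-power calculation. As a point of reference, a minimal sketch of that standard calculation for a normal endpoint (without the covariate and short-term-endpoint adjustment the abstract proposes) might look as follows; the effect size, information fraction, and futility threshold are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def conditional_power(z_interim, info_frac, delta, sigma, n_per_arm, alpha=0.025):
    """Conditional power for a two-arm trial with a normal endpoint.

    z_interim : observed z-statistic at the interim analysis
    info_frac : fraction of the total information available at interim (0 < t < 1)
    delta     : treatment effect assumed for the remainder of the trial
    sigma     : common standard deviation of the endpoint
    n_per_arm : planned final sample size per arm
    """
    theta = delta * np.sqrt(n_per_arm / 2.0) / sigma       # expected z-statistic at full information
    b_t = z_interim * np.sqrt(info_frac)                   # Brownian-motion value at information time t
    z_crit = norm.ppf(1 - alpha)
    return norm.cdf((b_t + theta * (1 - info_frac) - z_crit) / np.sqrt(1 - info_frac))

# Example: halfway through a trial with 150 patients per arm
cp = conditional_power(z_interim=1.1, info_frac=0.5, delta=0.3, sigma=1.0, n_per_arm=150)
print(f"Conditional power: {cp:.3f}")  # stop for futility if below a pre-specified threshold, e.g. 0.2
```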

2.
ABSTRACT

The cost and time of pharmaceutical drug development continue to grow at rates that many say are unsustainable. These trends have enormous impact on what treatments get to patients, when they get them and how they are used. The statistical framework for supporting decisions in regulated clinical development of new medicines has followed a traditional path of frequentist methodology. Trials using hypothesis tests of “no treatment effect” are done routinely, and the p-value < 0.05 is often the determinant of what constitutes a “successful” trial. Many drugs fail in clinical development, adding to the cost of new medicines, and some evidence places blame on the deficiencies of the frequentist paradigm. An unknown number of effective medicines may have been abandoned because trials were declared “unsuccessful” due to a p-value exceeding 0.05. Recently, the Bayesian paradigm has shown utility in the clinical drug development process for its probability-based inference. We argue for a Bayesian approach that employs data from other trials as a “prior” for Phase 3 trials so that synthesized evidence across trials can be utilized to compute probability statements that are valuable for understanding the magnitude of treatment effect. Such a Bayesian paradigm provides a promising framework for improving statistical inference and regulatory decision making.
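
A minimal sketch of the kind of calculation this paradigm relies on, assuming the evidence from earlier trials and the Phase 3 result can each be summarised as a normal distribution for the treatment effect; the numbers and the clinically relevant margin are illustrative, not taken from any trial.

```python
import numpy as np
from scipy.stats import norm

# Prior synthesised from earlier trials (e.g. Phase 2): treatment effect ~ N(prior_mean, prior_sd^2)
prior_mean, prior_sd = 0.25, 0.15
# Phase 3 data summarised as an estimate and its standard error
est, se = 0.18, 0.08

# Normal-normal conjugate update
post_var = 1.0 / (1.0 / prior_sd**2 + 1.0 / se**2)
post_mean = post_var * (prior_mean / prior_sd**2 + est / se**2)
post_sd = np.sqrt(post_var)

print(f"P(effect > 0)    = {1 - norm.cdf(0.0, post_mean, post_sd):.3f}")
print(f"P(effect > 0.15) = {1 - norm.cdf(0.15, post_mean, post_sd):.3f}")  # clinically relevant margin (assumed)
```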

3.
The development of a new drug is a major undertaking and it is important to consider carefully the key decisions in the development process. Decisions are made in the presence of uncertainty and outcomes such as the probability of successful drug registration depend on the clinical development programme. The Rheumatoid Arthritis Drug Development Model was developed to support key decisions for drugs in development for the treatment of rheumatoid arthritis. It is configured to simulate Phase 2b and 3 trials based on the efficacy of new drugs at the end of Phase 2a, evidence about the efficacy of existing treatments, and expert opinion regarding key safety criteria. The model evaluates the performance of different development programmes with respect to the duration of disease of the target population, Phase 2b and 3 sample sizes, the dose(s) of the experimental treatment, the choice of comparator, the duration of the Phase 2b clinical trial, the primary efficacy outcome and decision criteria for successfully passing Phases 2b and 3. It uses Bayesian clinical trial simulation to calculate the probability of successful drug registration based on the uncertainty about parameters of interest, thereby providing a more realistic assessment of the likely outcomes of individual trials and sequences of trials for the purpose of decision making. In this case study, the results show that, depending on the trial design, the new treatment has assurances of successful drug registration in the range 0.044–0.142 for an ACR20 outcome and 0.057–0.213 for an ACR50 outcome. Copyright © 2009 John Wiley & Sons, Ltd.
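
The assurance figures quoted above are unconditional probabilities of trial success, obtained by averaging power over a prior for the treatment effect. A generic Monte Carlo sketch of such a calculation is shown below with illustrative prior parameters; it is not the Rheumatoid Arthritis Drug Development Model itself, whose inputs are far richer.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

def assurance(prior_mean, prior_sd, sigma, n_per_arm, alpha=0.025, n_sims=100_000):
    """Assurance: unconditional probability of a 'successful' trial,
    averaging the frequentist power over the prior for the treatment effect."""
    delta = rng.normal(prior_mean, prior_sd, n_sims)        # draw plausible true effects
    se = sigma * np.sqrt(2.0 / n_per_arm)                   # standard error of the effect estimate
    power = 1 - norm.cdf(norm.ppf(1 - alpha) - delta / se)  # power for each drawn effect
    return power.mean()

print(f"Assurance: {assurance(prior_mean=0.2, prior_sd=0.25, sigma=1.0, n_per_arm=200):.3f}")
```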

4.
Modelling and simulation are buzz words in clinical drug development. But is clinical trial simulation (CTS) really a revolutionary technique? There is not much more to CTS than applying standard methods of modelling, statistics and decision theory. However, doing this in a systematic way can mean a significant improvement in pharmaceutical research. This paper describes in simple examples how modelling could be used in clinical development. Four steps are identified: gathering relevant information about a drug and the disease; building a mathematical model; predicting the results of potential future trials; and optimizing clinical trials and the entire clinical programme. We discuss these steps and give a number of examples of model components, demonstrating that relatively unsophisticated models may also prove useful. We stress that modelling and simulation are decision tools and point out the benefits of integrating them with decision analysis. Copyright © 2005 John Wiley & Sons, Ltd.  相似文献   

5.
Traditional investment-appraisal methods, typified by the net present value (NPV) approach, no longer keep pace with rapid changes in the external environment and are increasingly unable to assess the true value of investment opportunities. To address the shortcomings of the traditional methods in project investment decision making, this paper investigates how real options theory can be used to correct them and offers several suggestions for firms applying the real options approach: properly recognise the uncertainty of the external environment, apply the real options method appropriately to investment project decisions, develop managers' ability to recognise, analyse and handle risk, and choose among decision methods according to the uncertainty of the project and the firm's resources and capabilities.
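
To make the contrast concrete, the toy comparison below values the same hypothetical project once by a static NPV rule and once as an option to defer investment by one year, using a one-step binomial model; every number is an illustrative assumption.

```python
# Illustrative comparison of a static NPV rule with a simple real-options
# (defer-the-investment) valuation using a one-step binomial model.
investment = 100.0            # upfront cost
pv_up, pv_down = 160.0, 70.0  # project value next year in the good / bad scenario
p_up = 0.5                    # risk-neutral probability of the good scenario (assumed)
r = 0.05                      # risk-free rate

# Static NPV: invest now against the expected project value
expected_pv = (p_up * pv_up + (1 - p_up) * pv_down) / (1 + r)
npv_now = expected_pv - investment

# Option to wait one year and invest only if the project turns out well
option_value = (p_up * max(pv_up - investment, 0.0)
                + (1 - p_up) * max(pv_down - investment, 0.0)) / (1 + r)

print(f"NPV if invested now:       {npv_now:7.2f}")
print(f"Value of waiting (option): {option_value:7.2f}")
# A positive gap shows how uncertainty can make deferral more valuable than immediate investment.
```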

6.
This paper illustrates an approach to setting the decision framework for a study in early clinical drug development. It shows how the criteria for a go and a stop decision are calculated based on pre‐specified target and lower reference values. The framework can lead to a three‐outcome approach by including a consider zone; this could enable smaller studies to be performed in early development, with other information either external to or within the study used to reach a go or stop decision. In this way, Phase I/II trials can be geared towards providing actionable decision‐making rather than the traditional focus on statistical significance. The example provided illustrates how the decision criteria were calculated for a Phase II study, including an interim analysis, and how the operating characteristics were assessed to ensure the decision criteria were robust. Copyright © 2016 John Wiley & Sons, Ltd.  相似文献   
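
One common way to operationalise such a framework is to compare an interval estimate against the pre-specified target value (TV) and lower reference value (LRV). The sketch below implements one illustrative variant of this dual-criterion rule; the confidence level, TV, and LRV are assumptions, not the values used in the cited study.

```python
from scipy.stats import norm

def decision_zone(est, se, tv, lrv, ci_level=0.80):
    """Three-outcome (go / consider / stop) decision from an interval estimate.

    est, se : treatment-effect estimate and its standard error
    tv      : target value (effect worth developing)
    lrv     : lower reference value (minimum effect of interest)
    One illustrative variant of interval-based dual criteria.
    """
    z = norm.ppf(0.5 + ci_level / 2.0)
    lower, upper = est - z * se, est + z * se
    if lower > lrv and est >= tv:
        return "go"
    if upper < tv:          # little chance the effect reaches the target
        return "stop"
    return "consider"

print(decision_zone(est=0.35, se=0.10, tv=0.30, lrv=0.15))  # -> 'go'
print(decision_zone(est=0.10, se=0.10, tv=0.30, lrv=0.15))  # -> 'stop'
print(decision_zone(est=0.25, se=0.12, tv=0.30, lrv=0.15))  # -> 'consider'
```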

7.
Different arguments have been put forward as to why drug developers should commit themselves early to what they are planning to do for children. By EU regulation, paediatric investigation plans should be agreed on in early phases of drug development in adults. Here, extrapolation from adults to children is widely applied to reduce the burden and avoid unnecessary clinical trials in children, but early regulatory decisions on how far extrapolation can be used may be highly uncertain. Under special circumstances, the regulatory process should allow for adaptive paediatric investigation plans explicitly foreseeing a re‐evaluation of the early decision based on the information accumulated later from adults or elsewhere. A small step towards adaptivity and learning from experience may improve the quality of regulatory decisions, in particular with regard to how much information can be borrowed from adults. Copyright © 2016 John Wiley & Sons, Ltd.

8.
Decision making is a critical component of a new drug development process. Based on results from an early clinical trial such as a proof of concept trial, the sponsor can decide whether to continue, stop, or defer the development of the drug. To simplify and harmonize the decision‐making process, decision criteria have been proposed in the literature. One of them is to examine the location of a confidence bar relative to the target value and lower reference value of the treatment effect. In this research, we modify an existing approach by moving some of the “stop” decisions to “consider” decisions so that the chance of directly terminating the development of a potentially valuable drug can be reduced. As Bayesian analysis offers certain flexibilities and can borrow historical information through an inferential prior, we apply Bayesian analysis to trial planning and decision making. Via a design prior, we can also calculate the probabilities of various decision outcomes in relation to the sample size and the other parameters to help the study design. An example and a series of computations are used to illustrate the applications, assess the operating characteristics, and compare the performances of different approaches.
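
A rough sketch of this style of analysis is given below: a normal-normal posterior drives a go/consider/stop rule, and a design prior is used to simulate the probabilities of each decision outcome for a given sample size. The thresholds, priors, and rule are illustrative and are not the modified criteria proposed in the abstract.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)

def bayes_decision(est, se, tv, lrv, prior_mean=0.0, prior_sd=10.0, p_lrv=0.90, p_tv=0.50):
    """Illustrative Bayesian go/consider/stop rule:
    go if P(effect > LRV) >= p_lrv and P(effect > TV) >= p_tv; stop if both criteria fail."""
    post_var = 1 / (1 / prior_sd**2 + 1 / se**2)       # inferential prior, here nearly flat
    post_mean = post_var * (prior_mean / prior_sd**2 + est / se**2)
    post_sd = np.sqrt(post_var)
    pr_lrv = 1 - norm.cdf(lrv, post_mean, post_sd)
    pr_tv = 1 - norm.cdf(tv, post_mean, post_sd)
    if pr_lrv >= p_lrv and pr_tv >= p_tv:
        return "go"
    if pr_lrv < p_lrv and pr_tv < p_tv:
        return "stop"
    return "consider"

def outcome_probs(n_per_arm, sigma=1.0, design_mean=0.3, design_sd=0.1, n_sims=20_000):
    """Probabilities of each decision outcome under a design prior for the true effect."""
    se = sigma * np.sqrt(2 / n_per_arm)
    true_effects = rng.normal(design_mean, design_sd, n_sims)
    ests = rng.normal(true_effects, se)
    decisions = [bayes_decision(e, se, tv=0.3, lrv=0.15) for e in ests]
    return {d: decisions.count(d) / n_sims for d in ("go", "consider", "stop")}

print(outcome_probs(n_per_arm=50))
```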

9.
For oncology drug development, phase II proof‐of‐concept studies have played a key role in determining whether or not to advance to a confirmatory phase III trial. With the increasing number of immunotherapies, efficient design strategies are crucial in moving successful drugs quickly to market. Our research examines drug development decision making under the framework of maximizing resource investment, characterized by benefit cost ratios (BCRs). In general, benefit represents the likelihood that a drug is successful, and cost is characterized by the risk adjusted total sample size of the phases II and III studies. Phase III studies often include a futility interim analysis; this sequential component can also be incorporated into BCRs. Under this framework, multiple scenarios can be considered. For example, for a given drug and cancer indication, BCRs can yield insights into whether to use a randomized control trial or a single‐arm study. Importantly, any uncertainty in historical control estimates that are used to benchmark single‐arm studies can be explicitly incorporated into BCRs. More complex scenarios, such as restricted resources or multiple potential cancer indications, can also be examined. Overall, BCR analyses indicate that single‐arm trials are favored for proof‐of‐concept trials when there is low uncertainty in historical control data and smaller phase III sample sizes. Otherwise, especially if the most likely to succeed tumor indication can be identified, randomized controlled trials may be a better option. While the findings are consistent with intuition, we provide a more objective approach.  相似文献   
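
As a bare-bones illustration of the benefit-cost-ratio idea, the sketch below compares two hypothetical phase II design options, taking benefit as the overall probability of success and cost as the risk-adjusted expected total sample size; all inputs are made-up assumptions, not the paper's values.

```python
def bcr(p_phase2_success, p_phase3_success, n_phase2, n_phase3):
    """Benefit = overall probability the drug succeeds;
    cost = risk-adjusted expected total sample size (phase III enrolled only after a phase II go)."""
    benefit = p_phase2_success * p_phase3_success
    expected_n = n_phase2 + p_phase2_success * n_phase3
    return benefit / expected_n

single_arm = bcr(p_phase2_success=0.60, p_phase3_success=0.55, n_phase2=40, n_phase3=500)
randomized = bcr(p_phase2_success=0.50, p_phase3_success=0.65, n_phase2=80, n_phase3=500)

print(f"BCR single-arm:  {single_arm:.5f}")
print(f"BCR randomized:  {randomized:.5f}")
# The design with the larger ratio makes better use of patient resources under these assumptions.
```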

10.
In current industry practice, it is difficult to assess QT effects at potential therapeutic doses based on Phase I dose‐escalation trials in oncology due to data scarcity, particularly in combination trials. In this paper, we propose to use dose‐concentration and concentration‐QT models jointly to model the exposures and effects of multiple drugs in combination. The fitted models can then be used to make early predictions of QT prolongation to aid in choosing recommended dose combinations for further investigation. The models consider potential correlation between concentrations of test drugs and potential drug–drug interactions at the PK and QT levels. In addition, this approach allows for the assessment of the probability of QT prolongation exceeding given thresholds of clinical significance. The performance of this approach was examined via simulation under practical scenarios for dose‐escalation trials for a combination of two drugs. The simulation results show that invaluable information on QT effects at therapeutic dose combinations can be gained by the proposed approaches. Early detection of dose combinations with substantial QT prolongation is achieved effectively through the CIs of the predicted peak QT prolongation at each dose combination. Furthermore, the probability of QT prolongation exceeding a certain threshold is also computed to support early detection of safety signals while accounting for uncertainty associated with data from Phase I studies. While the prediction of QT effects is sensitive to the dose escalation process, this sensitivity and the limited sample size should be considered when providing support to the decision‐making process for further developing certain dose combinations. Copyright © 2016 John Wiley & Sons, Ltd.
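
The concentration-QT half of such an analysis can be sketched very simply: fit a linear C-QT model to pooled data and propagate the parameter uncertainty to the predicted QT prolongation at a dose combination's expected Cmax. The sketch below uses hypothetical single-drug data and omits the dose-concentration model and drug-drug interaction terms described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical pooled concentration-QT data: conc in ng/mL, delta_qtcf in ms (baseline-adjusted)
conc = np.array([50, 120, 250, 400, 600, 850, 1100, 1400.0])
dqtc = np.array([1.2, 2.0, 3.5, 4.1, 6.0, 7.8, 9.5, 11.9])

# Linear concentration-QT model (intercept + slope), as in a typical C-QT analysis
X = np.column_stack([np.ones_like(conc), conc])
beta, res, *_ = np.linalg.lstsq(X, dqtc, rcond=None)
sigma2 = res[0] / (len(conc) - 2)
cov = sigma2 * np.linalg.inv(X.T @ X)

# Predicted QT prolongation at the expected Cmax of a candidate dose (parameter uncertainty by simulation)
cmax = 900.0
draws = rng.multivariate_normal(beta, cov, 10_000)
pred = draws @ np.array([1.0, cmax])
lo, hi = np.percentile(pred, [5, 95])
print(f"Predicted dQTc at Cmax={cmax:.0f}: {beta[0] + beta[1] * cmax:.1f} ms (90% CI {lo:.1f} to {hi:.1f})")
print(f"P(dQTc > 10 ms) = {(pred > 10).mean():.2f}")
```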

11.
Evidence‐based quantitative methodologies have been proposed to inform decision‐making in drug development, such as metrics to make go/no‐go decisions or predictions of success, identified with statistical significance of future clinical trials. While these methodologies appropriately address some critical questions on the potential of a drug, they either consider the past evidence without predicting the outcome of the future trials or focus only on efficacy, failing to account for the multifaceted aspects of a successful drug development. As quantitative benefit‐risk assessments could enhance decision‐making, we propose a more comprehensive approach using a composite definition of success based not only on the statistical significance of the treatment effect on the primary endpoint but also on its clinical relevance and on a favorable benefit‐risk balance in the next pivotal studies. For one drug, we can thus study several development strategies before starting the pivotal trials by comparing their predictive probability of success. The predictions are based on the available evidence from the previous trials, to which new hypotheses on the future development could be added. The resulting predictive probability of composite success provides a useful summary to support the discussions of the decision‐makers. We present a fictive, but realistic, example in major depressive disorder inspired by a real decision‐making case.  相似文献   
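
A stripped-down sketch of a predictive probability of composite success is shown below: the current evidence on efficacy and on a key risk is summarised by normal distributions, the next pivotal trial is simulated, and success requires statistical significance, a clinically relevant estimate, and an acceptable excess adverse-event rate. All distributions, margins, and the composite definition itself are illustrative assumptions, not the authors' benefit-risk framework.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(11)

def prob_composite_success(eff_mean, eff_sd, ae_mean, ae_sd, n_per_arm,
                           sigma=1.0, margin=0.20, ae_limit=0.10, n_sims=50_000):
    """Predictive probability that the next pivotal trial is a composite success:
    (i) significant at one-sided 2.5%, (ii) estimated effect clinically relevant,
    (iii) excess rate of a key adverse event below an acceptability limit."""
    se_eff = sigma * np.sqrt(2.0 / n_per_arm)
    true_eff = rng.normal(eff_mean, eff_sd, n_sims)   # draw from current posterior for efficacy
    true_ae = rng.normal(ae_mean, ae_sd, n_sims)      # ... and for the excess adverse-event risk
    est_eff = rng.normal(true_eff, se_eff)            # simulated trial estimates
    se_ae = np.sqrt(0.5 / n_per_arm)                  # rough standard error for a risk difference
    est_ae = rng.normal(true_ae, se_ae)
    significant = est_eff / se_eff > norm.ppf(0.975)
    relevant = est_eff > margin
    tolerable = est_ae < ae_limit
    return np.mean(significant & relevant & tolerable)

print(f"P(composite success) = {prob_composite_success(0.30, 0.10, 0.03, 0.02, n_per_arm=200):.3f}")
```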

12.
This article deals with the estimation of continuous-time stochastic volatility models of option pricing. We argue that option prices are much more informative about the parameters than are asset prices. This is confirmed in a Monte Carlo experiment that compares two very simple strategies based on the different information sets. Both approaches are based on indirect inference and avoid any discretization bias by simulating the continuous-time model. We assume an Ornstein-Uhlenbeck process for the log of the volatility, a zero-volatility risk premium, and no leverage effect. We do not pursue asymptotic efficiency or specification issues; rather, we stick to a framework with no overidentifying restrictions and show that, given our option-pricing model, estimation based on option prices is much more precise in samples of typical size, without increasing the computational burden.  相似文献   

13.
The conventional phase II trial design paradigm is to make the go/no-go decision based on the hypothesis testing framework. Statistical significance alone, however, may not be sufficient to establish that the drug is clinically effective enough to warrant confirmatory phase III trials. We propose the Bayesian optimal phase II trial design with dual-criterion decision making (BOP2-DC), which incorporates both statistical significance and clinical relevance into decision making. Based on the posterior probability that the treatment effect reaches the lower reference value (statistical significance) and the clinically meaningful value (clinical significance), BOP2-DC allows for go/consider/no-go decisions, rather than a binary go/no-go decision. BOP2-DC is highly flexible and accommodates various types of endpoints, including binary, continuous, time-to-event, multiple, and coprimary endpoints, in single-arm and randomized trials. The decision rule of BOP2-DC is optimized to maximize the probability of a go decision when the treatment is effective or to minimize the expected sample size when the treatment is futile. Simulation studies show that the BOP2-DC design yields desirable operating characteristics. The software to implement BOP2-DC is freely available at www.trialdesign.org.
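
For a single-arm binary endpoint, a decision rule in the spirit of BOP2-DC can be sketched with a Beta posterior and two posterior-probability criteria, as below. The prior, thresholds, and cut-offs are illustrative and are not the optimized values produced by the BOP2-DC design or the www.trialdesign.org software.

```python
from scipy.stats import beta

def dual_criterion_decision(responses, n, lrv=0.20, tv=0.35,
                            a0=0.5, b0=0.5, c_go=0.90, c_consider=0.80):
    """Illustrative dual-criterion decision for a binary endpoint.

    Go       if P(p > LRV | data) >= c_go and P(p > TV | data) >= 0.5
    No-go    if P(p > LRV | data) <  c_consider
    Consider otherwise
    """
    a, b = a0 + responses, b0 + n - responses  # Beta posterior for the response rate
    pr_lrv = 1 - beta.cdf(lrv, a, b)           # posterior prob. of exceeding the lower reference value
    pr_tv = 1 - beta.cdf(tv, a, b)             # posterior prob. of exceeding the clinically meaningful value
    if pr_lrv >= c_go and pr_tv >= 0.5:
        return "go", round(pr_lrv, 3), round(pr_tv, 3)
    if pr_lrv < c_consider:
        return "no-go", round(pr_lrv, 3), round(pr_tv, 3)
    return "consider", round(pr_lrv, 3), round(pr_tv, 3)

for r in (5, 12, 14):
    print(f"{r} / 40 responses ->", dual_criterion_decision(r, 40))
```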

14.
This paper studies the notion of coherence in interval‐based dose‐finding methods. An incoherent decision is either (a) a recommendation to escalate the dose following an observed dose‐limiting toxicity or (b) a recommendation to deescalate the dose following a non–dose‐limiting toxicity. In a simulated example, we illustrate that the Bayesian optimal interval method and the Keyboard method are not coherent. We generated dose‐limiting toxicity outcomes under an assumed set of true probabilities for a trial of n=36 patients in cohorts of size 1, and we counted the number of incoherent dosing decisions that were made throughout this simulated trial. Each of the methods studied resulted in 13/36 (36%) incoherent decisions in the simulated trial. Additionally, for two different target dose‐limiting toxicity rates, 20% and 30%, and a sample size of n=30 patients, we randomly generated 100 dose‐toxicity curves and tabulated the number of incoherent decisions made by each method in 1000 simulated trials under each curve. For each method studied, the probability of incurring at least one incoherent decision during the conduct of a single trial is greater than 75%. Coherency is an important principle in the conduct of dose‐finding trials. Interval‐based methods violate this principle for cohorts of size 1 and require additional modifications to overcome this shortcoming. Researchers need to take a closer look at the dose assignment behavior of interval‐based methods when using them to plan dose‐finding studies.  相似文献   
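
The counting exercise described above can be reproduced in outline with a simple simulation: run an interval-based rule with cohorts of size 1 and count escalations that immediately follow a dose-limiting toxicity (DLT) or de-escalations that immediately follow a non-DLT. The boundaries below are illustrative BOIN-type values for a 30% target, and the toxicity scenario is hypothetical; this is a generic interval rule, not an exact reimplementation of the methods studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

def count_incoherent(true_tox, n_patients=36, lam_e=0.236, lam_d=0.358, start=0):
    """Simulate a simple interval-based escalation rule with cohorts of size 1 and
    count incoherent moves: escalating right after a DLT, or de-escalating right
    after a non-DLT."""
    n_dose = len(true_tox)
    n_treated = np.zeros(n_dose, int)
    n_dlt = np.zeros(n_dose, int)
    dose, incoherent = start, 0
    for _ in range(n_patients):
        dlt = rng.random() < true_tox[dose]
        n_treated[dose] += 1
        n_dlt[dose] += dlt
        rate = n_dlt[dose] / n_treated[dose]
        if rate <= lam_e:
            new_dose = min(dose + 1, n_dose - 1)
        elif rate >= lam_d:
            new_dose = max(dose - 1, 0)
        else:
            new_dose = dose
        if (dlt and new_dose > dose) or (not dlt and new_dose < dose):
            incoherent += 1
        dose = new_dose
    return incoherent

true_tox = [0.05, 0.15, 0.30, 0.45, 0.60]
counts = np.array([count_incoherent(true_tox) for _ in range(1000)])
print(f"Mean incoherent decisions per trial: {counts.mean():.1f}")
print(f"P(at least one incoherent decision): {(counts > 0).mean():.2f}")
```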

15.
Optimal designs for logistic models generally require prior information about the values of the regression parameters. However, experimenters usually do not have full knowledge of these parameters. We propose a design that is D-optimal on a restricted design region. This design assigns an equal weight to design points that contain more information and ignores those design points that contain less information about the regression parameters. The design can be constructed in practice by means of the rank order of the outcome variances. A numerical study compares the proposed design with the D-optimal and completely balanced designs in terms of efficiency.  相似文献   
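
The sketch below shows the basic machinery such a proposal builds on: the Fisher information of a two-parameter logistic model evaluated at a parameter guess, the D-criterion of a weighted design, and the ranking of candidate points by outcome variance. It simply evaluates two example designs on a restricted region; it does not reproduce the paper's specific construction.

```python
import numpy as np

def d_criterion(points, weights, beta0, beta1):
    """D-criterion (determinant of the Fisher information) for a two-parameter
    logistic model P(y=1|x) = 1/(1 + exp(-(beta0 + beta1*x))), evaluated at a
    guess of the parameters (the locally optimal design perspective)."""
    M = np.zeros((2, 2))
    for x, w in zip(points, weights):
        p = 1.0 / (1.0 + np.exp(-(beta0 + beta1 * x)))
        f = np.array([1.0, x])
        M += w * p * (1 - p) * np.outer(f, f)  # p(1-p) is the outcome variance at x
    return np.linalg.det(M)

beta0, beta1 = -2.0, 1.0                       # assumed best guess of the parameters
candidates = np.linspace(-1.0, 5.0, 13)        # candidate points on a restricted design region
p = 1 / (1 + np.exp(-(beta0 + beta1 * candidates)))
variances = p * (1 - p)

# Rank candidate points by outcome variance and put equal weight on the top four
top4 = candidates[np.argsort(variances)[-4:]]
print("Highest-variance design points:", np.sort(top4))
print("D-criterion, equal weights on those points:",
      round(d_criterion(top4, [0.25] * 4, beta0, beta1), 4))
print("D-criterion, equal weights spread over the region:",
      round(d_criterion([-1.0, 1.0, 3.0, 5.0], [0.25] * 4, beta0, beta1), 4))
```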

16.
Decision making with adaptive utility provides a generalisation of classical Bayesian decision theory, allowing the creation of a normative theory for decision selection when preferences are initially uncertain. In this paper we address some of the foundational issues of adaptive utility as seen from the perspective of a Bayesian statistician. The implications that such a generalisation has for the traditional utility concepts of value of information and risk aversion are also explored, and a new concept of trial aversion is introduced that is similar to risk aversion but concerns a decision maker's aversion to selecting decisions with high uncertainty over the resulting utility.

17.
This article is addressed to those interested in how Bayesian approaches can be brought to bear on research and development planning and management issues. It provides a conceptual framework for estimating the value of information to environmental policy decisions. The methodology is applied to assess the expected value of research concerning the effects of acidic deposition on forests. To calculate the expected value of research requires modeling the possible actions of policymakers under conditions of uncertainty. Information is potentially valuable only if it leads to actions that differ from the actions that would be taken without the information. The relevant issue is how research on forest effects would change choices of emissions controls from those that would be made in the absence of such research. The approach taken is to model information with a likelihood function embedded in a decision tree describing possible policy options. The value of information is then calculated as a function of information accuracy. The results illustrate how accurate the information must be to have an impact on the choice of policy options. The results also illustrate situations in which additional research can have a negative value.  相似文献   
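
The logic of the calculation can be illustrated with a two-state, two-action decision tree in which a study reports the true state with a given accuracy; the expected value of the research is the gain in expected payoff from being able to condition the policy choice on the study result. The states, actions, payoffs, and prior below are all hypothetical, chosen only to mimic the structure described in the abstract.

```python
import numpy as np

# Two states of nature (acidic deposition does / does not cause serious forest damage),
# two policy actions, and a payoff (negative cost) matrix. All numbers are hypothetical.
prior = np.array([0.3, 0.7])                  # P(damage), P(no damage)
payoff = np.array([[-50.0, -20.0],            # strict emission controls
                   [-120.0, -5.0]])           # lenient controls: very costly only if damage is real

def expected_value_of_information(accuracy):
    """Value of a study that reports the true state with the given accuracy
    (symmetric likelihood), relative to deciding on prior information alone."""
    value_without = max(payoff @ prior)                  # best action under the prior
    like = np.array([[accuracy, 1 - accuracy],           # P(study signal | state)
                     [1 - accuracy, accuracy]])
    value_with = 0.0
    for s in range(2):                                   # possible study results
        p_signal = like[s] @ prior
        posterior = like[s] * prior / p_signal           # Bayes' rule
        value_with += p_signal * max(payoff @ posterior) # best action given that result
    return value_with - value_without

for acc in (0.5, 0.7, 0.9, 1.0):
    print(f"accuracy {acc:.1f}: expected value of research = {expected_value_of_information(acc):6.2f}")
# Low-accuracy research never changes the decision and therefore has zero value.
```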

18.
Phase I clinical trials aim to identify a maximum tolerated dose (MTD), the highest possible dose that does not cause an unacceptable amount of toxicity in the patients. In trials of combination therapies, however, many different dose combinations may have a similar probability of causing a dose‐limiting toxicity, and hence, a number of MTDs may exist. Furthermore, escalation strategies in combination trials are more complex, with possible escalation/de‐escalation of either or both drugs. This paper investigates the properties of two existing proposed Bayesian adaptive models for combination therapy dose‐escalation when a number of different escalation strategies are applied. We assess operating characteristics through a series of simulation studies and show that strategies that only allow ‘non‐diagonal’ moves in the escalation process (that is, both drugs cannot increase simultaneously) are inefficient and identify fewer MTDs for Phase II comparisons. Such strategies tend to escalate a single agent first while keeping the other agent fixed, which can be a severe restriction when exploring dose surfaces using a limited sample size. Meanwhile, escalation designs based on Bayesian D‐optimality allow more varied experimentation around the dose space and, consequently, are better at identifying more MTDs. We argue that for Phase I combination trials it is sensible to take forward a number of identified MTDs for Phase II experimentation so that their efficacy can be directly compared. Researchers, therefore, need to carefully consider the escalation strategy and model that best allows the identification of these MTDs. Copyright © 2012 John Wiley & Sons, Ltd.  相似文献   

19.
The typical approach in change-point theory is to perform the statistical analysis based on a sample of fixed size. Alternatively, one observes some random phenomenon sequentially and takes action as soon as one observes some statistically significant deviation from the "normal" behaviour. Based on the, perhaps, more realistic situation that the process can only be partially observed, we consider the counting process related to the original process observed at equidistant time points, after which action is taken or not depending on the number of observations between those time points. In order for the procedure to stop also when everything is in order, we introduce a fixed time horizon n at which we stop declaring "no change" if the observed data did not suggest any action until then. We propose some stopping rules and consider their asymptotics under the null hypothesis as well as under alternatives. The main basis for the proofs are strong invariance principles for renewal processes and extreme value asymptotics for Gaussian processes.  相似文献   

20.
Model‐informed drug discovery and development offers the promise of more efficient clinical development, with increased productivity and reduced cost through scientific decision making and risk management. Go/no‐go development decisions in the pharmaceutical industry are often driven by effect size estimates, with the goal of meeting commercially generated target profiles. Sufficient efficacy is critical for eventual success, but the decision to advance development phase is also dependent on adequate knowledge of appropriate dose and dose‐response. Doses which are too high or low pose risk of clinical or commercial failure. This paper addresses this issue and continues the evolution of formal decision frameworks in drug development. Here, we consider the integration of both efficacy and dose‐response estimation accuracy into the go/no‐go decision process, using a model‐based approach. Using prespecified target and lower reference values associated with both efficacy and dose accuracy, we build a decision framework to more completely characterize development risk. Given the limited knowledge of dose response in early development, our approach incorporates a set of dose‐response models and uses model averaging. The approach and its operating characteristics are illustrated through simulation. Finally, we demonstrate the decision approach on a post hoc analysis of the phase 2 data for naloxegol (a drug approved for opioid‐induced constipation).  相似文献   
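
One ingredient of such a framework, model averaging over a small set of candidate dose-response models, can be sketched as below: two models are fitted to hypothetical phase 2 data, combined with AIC-based weights, and the averaged curve is used to read off a target dose. The models, data, weights, and target are illustrative; the paper's decision criteria on efficacy and dose accuracy are not implemented here.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(21)

# Hypothetical phase 2 dose-response data (mean response per dose group)
doses = np.array([0.0, 5.0, 10.0, 20.0, 40.0])
resp = np.array([0.05, 0.22, 0.40, 0.55, 0.62]) + rng.normal(0, 0.03, 5)

def emax(d, e0, emax_, ed50):
    return e0 + emax_ * d / (ed50 + d)

def linlog(d, e0, slope):
    return e0 + slope * np.log1p(d)

models = [(emax, [0.0, 0.7, 10.0]), (linlog, [0.0, 0.2])]
grid = np.linspace(0, 40, 81)
preds, aics = [], []
for f, p0 in models:
    popt, _ = curve_fit(f, doses, resp, p0=p0, maxfev=10_000)
    rss = np.sum((resp - f(doses, *popt)) ** 2)
    n, k = len(doses), len(popt) + 1
    aics.append(n * np.log(rss / n) + 2 * k)   # AIC up to a constant
    preds.append(f(grid, *popt))

w = np.exp(-0.5 * (np.array(aics) - min(aics)))
w /= w.sum()                                   # model-averaging weights
avg = w[0] * preds[0] + w[1] * preds[1]        # model-averaged dose-response curve

target = 0.45                                  # clinically targeted response (assumed)
td = grid[np.argmax(avg >= target)]            # smallest grid dose whose averaged response reaches the target
print(f"Model weights (Emax, log-linear): {w.round(2)}")
print(f"Model-averaged estimate of the target dose: {td:.1f}")
```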
