Similar Literature
1.
Owing to increased costs and competition pressure, drug development is becoming more and more challenging. Therefore, there is a strong need to improve the efficiency of clinical research by developing and applying methods for quantitative decision making. In this context, the integrated planning of phase II/III programs plays an important role, as numerous quantities that are crucial for cost, benefit, and program success can be varied. Recently, a utility‐based framework has been proposed for the optimal planning of phase II/III programs that puts the choice of decision boundaries and phase II sample sizes on a quantitative basis. However, this method is restricted to studies with a single time‐to‐event endpoint. We generalize this procedure to the setting of clinical trials with multiple endpoints and (asymptotically) normally distributed test statistics. Optimal phase II sample sizes and go/no‐go decision rules are provided for both the “all‐or‐none” and “at‐least‐one” win criteria. Application of the proposed method is illustrated by drug development programs in the fields of Alzheimer disease and oncology.

2.
In drug development, after completion of phase II proof‐of‐concept trials, the sponsor needs to make a go/no‐go decision to start expensive phase III trials. The probability of statistical success (PoSS) of the phase III trials based on data from earlier studies is an important factor in that decision‐making process. Instead of statistical power, the predictive power of a phase III trial, which takes into account the uncertainty in the estimation of treatment effect from earlier studies, has been proposed to evaluate the PoSS of a single trial. However, regulatory authorities generally require statistical significance in two (or more) trials for marketing licensure. We show that the predictive statistics of two future trials are statistically correlated through use of the common observed data from earlier studies. Thus, the joint predictive power should not be evaluated as a simplistic product of the predictive powers of the individual trials. We develop the relevant formulae for the appropriate evaluation of the joint predictive power and provide numerical examples. Our methodology is further extended to the more complex phase III development scenario comprising more than two (K > 2) trials, that is, the evaluation of the PoSS of at least k0 (k0 ≤ K) trials from a program of K total trials. Copyright © 2013 John Wiley & Sons, Ltd.
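A small Monte Carlo sketch in Python (not the authors' closed-form formulae) can illustrate the coupling: both future phase III trials condition on the same phase II estimate, so the joint predictive power is the expectation of the product of the conditional powers, not the product of the expectations. All numerical inputs are hypothetical.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

delta_hat, se_ph2 = 0.30, 0.12   # phase II effect estimate and its standard error (assumed)
se_ph3 = 0.08                    # standard error each phase III trial is expected to achieve (assumed)
z_crit = norm.ppf(0.975)         # two-sided 5% significance per trial

# Uncertainty about the true effect carried forward from phase II
delta = rng.normal(delta_hat, se_ph2, size=200_000)

# Power of one phase III trial conditional on each plausible true effect
power_given_delta = norm.cdf(delta / se_ph3 - z_crit)

single_pred_power = power_given_delta.mean()
joint_pred_power = np.mean(power_given_delta ** 2)   # trials are independent only given delta
naive_product = single_pred_power ** 2               # the simplistic product the paper warns against

print(f"predictive power of one trial: {single_pred_power:.3f}")
print(f"joint predictive power:        {joint_pred_power:.3f}")
print(f"naive product (too low):       {naive_product:.3f}")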

3.
This paper illustrates an approach to setting the decision framework for a study in early clinical drug development. It shows how the criteria for a go and a stop decision are calculated based on pre‐specified target and lower reference values. The framework can lead to a three‐outcome approach by including a consider zone; this could enable smaller studies to be performed in early development, with other information either external to or within the study used to reach a go or stop decision. In this way, Phase I/II trials can be geared towards providing actionable decision‐making rather than the traditional focus on statistical significance. The example provided illustrates how the decision criteria were calculated for a Phase II study, including an interim analysis, and how the operating characteristics were assessed to ensure the decision criteria were robust. Copyright © 2016 John Wiley & Sons, Ltd.
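A hedged Python sketch of such a three-outcome rule, using a normal approximation to the posterior of the treatment effect; the target value (TV), lower reference value (LRV), and probability thresholds below are illustrative choices, not the study's pre-specified criteria.

from scipy.stats import norm

def decision(effect_hat, se, tv=0.5, lrv=0.2, p_go=0.90, p_stop=0.50):
    """Return 'go', 'consider', or 'stop' from a point estimate and its standard error."""
    p_above_lrv = 1 - norm.cdf(lrv, loc=effect_hat, scale=se)  # P(effect > LRV | data)
    p_above_tv = 1 - norm.cdf(tv, loc=effect_hat, scale=se)    # P(effect > TV | data)
    if p_above_lrv >= p_go and p_above_tv >= p_stop:
        return "go"
    if p_above_lrv < p_go and p_above_tv < p_stop:
        return "stop"
    return "consider"                                          # the 'consider' zone in between

print(decision(effect_hat=0.45, se=0.15))   # hypothetical interim estimate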

4.
The conventional phase II trial design paradigm is to make the go/no-go decision based on the hypothesis testing framework. Statistical significance alone, however, may not be sufficient to establish that the drug is clinically effective enough to warrant confirmatory phase III trials. We propose the Bayesian optimal phase II trial design with dual-criterion decision making (BOP2-DC), which incorporates both statistical significance and clinical relevance into decision making. Based on the posterior probability that the treatment effect reaches the lower reference value (statistical significance) and the clinically meaningful value (clinical significance), BOP2-DC allows for go/consider/no-go decisions, rather than a binary go/no-go decision. BOP2-DC is highly flexible and accommodates various types of endpoints, including binary, continuous, time-to-event, multiple, and coprimary endpoints, in single-arm and randomized trials. The decision rule of BOP2-DC is optimized to maximize the probability of a go decision when the treatment is effective or minimize the expected sample size when the treatment is futile. Simulation studies show that the BOP2-DC design yields desirable operating characteristics. The software to implement BOP2-DC is freely available at www.trialdesign.org.
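For a binary endpoint, the dual-criterion idea can be sketched in Python with a Beta-Binomial posterior checked against both a null response rate and a clinically meaningful rate; the prior, rates, and probability cut-offs below are illustrative, whereas the actual BOP2-DC design optimises its cut-offs as described above.

from scipy.stats import beta

def dual_criterion_decision(responders, n, rate_null=0.20, rate_clin=0.35,
                            c_stat=0.90, c_clin=0.80, a=0.5, b=0.5):
    post = beta(a + responders, b + n - responders)  # Beta(0.5, 0.5) prior (assumed)
    p_stat = 1 - post.cdf(rate_null)                 # P(rate > null rate | data): statistical criterion
    p_clin = 1 - post.cdf(rate_clin)                 # P(rate > clinically meaningful rate | data)
    if p_stat >= c_stat and p_clin >= c_clin:
        return "go"
    if p_stat >= c_stat:
        return "consider"
    return "no-go"

print(dual_criterion_decision(responders=14, n=40))  # hypothetical single-arm result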

5.
In recent years, high failure rates in phase III trials have been observed. One of the main reasons is overoptimistic assumptions in the planning of phase III, resulting from limited phase II information and/or unawareness of realistic success probabilities. We present an approach for planning a phase II trial in a time‐to‐event setting that considers the whole phase II/III clinical development programme. We derive stopping boundaries after phase II that minimise the number of events, subject to conditions on the conditional probabilities of a correct go/no‐go decision after phase II as well as on the conditional success probabilities for phase III. In addition, we give general recommendations for the choice of phase II sample size. Our simulations show that the unconditional probabilities of a go/no‐go decision as well as the unconditional success probabilities for phase III are influenced by the number of events observed in phase II. However, choosing more than 150 events in phase II does not seem necessary, as the impact on these probabilities then becomes quite small. We recommend considering aspects such as the number of compounds in phase II and the resources available when determining the sample size. The fewer the compounds and the lower the resources available for phase III, the higher the investment in phase II should be. Copyright © 2015 John Wiley & Sons, Ltd.

6.
Model‐informed drug discovery and development offers the promise of more efficient clinical development, with increased productivity and reduced cost through scientific decision making and risk management. Go/no‐go development decisions in the pharmaceutical industry are often driven by effect size estimates, with the goal of meeting commercially generated target profiles. Sufficient efficacy is critical for eventual success, but the decision to advance to the next development phase also depends on adequate knowledge of the appropriate dose and dose‐response. Doses that are too high or too low pose a risk of clinical or commercial failure. This paper addresses this issue and continues the evolution of formal decision frameworks in drug development. Here, we consider the integration of both efficacy and dose‐response estimation accuracy into the go/no‐go decision process, using a model‐based approach. Using prespecified target and lower reference values associated with both efficacy and dose accuracy, we build a decision framework to more completely characterize development risk. Given the limited knowledge of dose response in early development, our approach incorporates a set of dose‐response models and uses model averaging. The approach and its operating characteristics are illustrated through simulation. Finally, we demonstrate the decision approach on a post hoc analysis of the phase 2 data for naloxegol (a drug approved for opioid‐induced constipation).
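A hedged Python sketch of the model-averaging step only: two candidate dose-response models (linear and Emax) are fitted to hypothetical phase 2 summary data and combined with AIC-based weights. This conveys the mechanics; the paper's framework additionally ties the averaged estimates to target and lower reference values for efficacy and dose accuracy.

import numpy as np
from scipy.optimize import curve_fit

dose = np.array([0.0, 5.0, 10.0, 20.0, 40.0])
resp = np.array([0.1, 0.9, 1.4, 1.8, 2.1])      # hypothetical mean responses by dose

def linear(d, e0, slope):
    return e0 + slope * d

def emax(d, e0, emax_, ed50):
    return e0 + emax_ * d / (ed50 + d)

def fit_and_aic(model, p0):
    par, _ = curve_fit(model, dose, resp, p0=p0, maxfev=10_000)
    rss = np.sum((resp - model(dose, *par)) ** 2)
    n, k = len(dose), len(par)
    return n * np.log(rss / n) + 2 * k, par      # AIC up to an additive constant

aic_lin, par_lin = fit_and_aic(linear, [0.0, 0.05])
aic_emx, par_emx = fit_and_aic(emax, [0.0, 2.0, 10.0])

aics = np.array([aic_lin, aic_emx])
w = np.exp(-0.5 * (aics - aics.min()))
w /= w.sum()                                     # model-averaging weights

check_doses = np.array([10.0, 20.0, 30.0])
avg_pred = w[0] * linear(check_doses, *par_lin) + w[1] * emax(check_doses, *par_emx)
print(f"weights (linear, Emax): {np.round(w, 3)}")
print(f"model-averaged effect at doses {check_doses}: {np.round(avg_pred, 2)}")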

7.
In current industry practice, it is difficult to assess QT effects at potential therapeutic doses based on Phase I dose‐escalation trials in oncology due to data scarcity, particularly in combination trials. In this paper, we propose to use dose‐concentration and concentration‐QT models jointly to model the exposures and effects of multiple drugs in combination. The fitted models can then be used to make early predictions of QT prolongation to aid in choosing recommended dose combinations for further investigation. The models consider potential correlation between concentrations of test drugs and potential drug–drug interactions at the PK and QT levels. In addition, this approach allows for the assessment of the probability of QT prolongation exceeding given thresholds of clinical significance. The performance of this approach was examined via simulation under practical scenarios for dose‐escalation trials of a combination of two drugs. The simulation results show that valuable information on QT effects at therapeutic dose combinations can be gained with the proposed approach. Early detection of dose combinations with substantial QT prolongation is achieved effectively through the CIs of the predicted peak QT prolongation at each dose combination. Furthermore, the probability of QT prolongation exceeding a certain threshold is also computed to support early detection of safety signals while accounting for the uncertainty associated with data from Phase I studies. While the prediction of QT effects is sensitive to the dose‐escalation process, this sensitivity and the limited sample size should be considered when supporting the decision‐making process for further developing certain dose combinations. Copyright © 2016 John Wiley & Sons, Ltd.

8.
This paper describes the distinction between the concept of statistical power and the probability of obtaining a successful trial. While one can choose a very high statistical power to detect a certain treatment effect, a high statistical power does not necessarily translate into a high success probability if the treatment effect to be detected is based on the perceived ability of the drug candidate. The crucial factor hinges on our knowledge of the drug's ability to deliver the effect used to power the study. The paper discusses a framework to calculate the 'average success probability' and demonstrates how uncertainty about the treatment effect can affect the average success probability for a confirmatory trial. It complements an earlier work by O'Hagan et al. (Pharmaceutical Statistics 2005; 4:187-201) published in this journal. Computer code to calculate the average success probability is included.
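The distinction can be made concrete with a short Python sketch under a normal prior for the true effect: conventional power fixes the effect at one assumed value, while the average success probability integrates the power over that prior. The numbers and the closed form used are illustrative assumptions, not taken from the paper.

from math import sqrt
from scipy.stats import norm

mu0, tau = 0.30, 0.15   # prior mean and prior SD of the true effect (assumed)
sigma = 0.10            # standard error the planned confirmatory trial will achieve (assumed)
z = norm.ppf(0.975)     # success = one-sided significance at 2.5%

power_at_mu0 = norm.cdf(mu0 / sigma - z)                              # power at the assumed effect
avg_success = norm.cdf((mu0 - z * sigma) / sqrt(sigma**2 + tau**2))   # marginal P(significant result)

print(f"power at the assumed effect: {power_at_mu0:.3f}")
print(f"average success probability: {avg_success:.3f}")   # lower once effect uncertainty is admitted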

9.
A late‐stage clinical development program typically contains multiple trials. Conventionally, the program's success or failure may not be known until the completion of all trials. Nowadays, interim analyses are often used to allow evaluation of early success and/or futility for each individual study by calculating conditional power, predictive power, and other indexes. This presents a good opportunity to estimate the probability of program success (POPS) for the entire clinical development program earlier. The sponsor may abandon the program early if the estimated POPS is very low, thereby permitting resource savings and reallocation to other products. We provide a method to calculate the probability of success (POS) at an individual study level as well as the POPS for clinical programs with multiple trials with binary outcomes. Methods for calculating variation and confidence measures of POS and POPS and the timing of interim analyses are discussed and evaluated through simulations. We also illustrate our approaches retrospectively on historical data from a completed clinical program for depression. Copyright © 2015 John Wiley & Sons, Ltd.
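A hedged simulation sketch, in Python, of POS and POPS for a program of two identically designed trials with a binary endpoint: uncertainty about the true response rates is drawn from Beta posteriors on hypothetical earlier-phase data, and both future trials reuse the same draws, which is what links their successes. Every input below is made up for illustration and is not the authors' method.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)
n_sim, n_per_arm = 50_000, 300

# Beta(1, 1) priors updated with hypothetical earlier-phase data: 45/120 vs 30/120 responders
p_trt = rng.beta(1 + 45, 1 + 75, n_sim)
p_ctl = rng.beta(1 + 30, 1 + 90, n_sim)

def trial_success(p_t, p_c):
    x_t = rng.binomial(n_per_arm, p_t)
    x_c = rng.binomial(n_per_arm, p_c)
    pooled = (x_t + x_c) / (2 * n_per_arm)
    se = np.sqrt(pooled * (1 - pooled) * 2 / n_per_arm)
    z = (x_t - x_c) / n_per_arm / np.where(se > 0, se, np.inf)
    return z > norm.ppf(0.975)                 # one-sided 2.5% success

s1 = trial_success(p_trt, p_ctl)
s2 = trial_success(p_trt, p_ctl)               # the second trial shares the same true-rate draws

print(f"POS of a single trial: {s1.mean():.3f}")
print(f"POPS (both trials):    {(s1 & s2).mean():.3f}")
print(f"naive product:         {s1.mean() * s2.mean():.3f}")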

10.
Benefit-risk assessment is a fundamental element of drug development with the aim of strengthening decision making for the benefit of public health. Appropriate benefit-risk assessment can provide useful information for proactive intervention in health care settings, which could save lives, reduce litigation, improve patient safety and health care outcomes, and, furthermore, lower overall health care costs. Recent developments in this area present challenges and opportunities to statisticians in the pharmaceutical industry. We review these developments and examine statistical issues in comparative benefit-risk assessment. We argue that a structured benefit-risk assessment should be a multi-disciplinary effort involving experts in clinical science, safety assessment, decision science, health economics, epidemiology, and statistics. Well-planned and well-conducted analyses with clear consideration of benefit and risk are critical for appropriate benefit-risk assessment. Pharmaceutical statisticians should extend their knowledge to relevant areas such as pharmaco-epidemiology, decision analysis, modeling, and simulation to play an increasingly important role in comparative benefit-risk assessment.

11.
‘Success’ in drug development is bringing to patients a new medicine that has an acceptable benefit–risk profile and that is also cost‐effective. Cost‐effectiveness means that the incremental clinical benefit is deemed worth paying for by a healthcare system, and it has an important role in enabling manufacturers to get new medicines to patients as soon as possible following regulatory approval. Subgroup analyses are increasingly being utilised by decision‐makers in the determination of the cost‐effectiveness of new medicines when making recommendations. This paper highlights the statistical considerations when using subgroup analyses to support cost‐effectiveness for a health technology assessment. The key principles published by Paget et al. for subgroup analyses supporting clinical effectiveness are evaluated with respect to subgroup analyses supporting cost‐effectiveness. A health technology assessment case study is included to highlight the importance of subgroup analyses when incorporated into cost‐effectiveness analyses. In summary, we recommend planning subgroup analyses for cost‐effectiveness analyses early in the drug development process and adhering to good statistical principles when using subgroup analyses in this context. In particular, we consider it important to provide transparency in how subgroups are defined, to be able to demonstrate the robustness of the subgroup results, and to be able to quantify the uncertainty in the subgroup analyses of cost‐effectiveness. Copyright © 2014 John Wiley & Sons, Ltd.

12.
Immuno‐oncology has emerged as an exciting new approach to cancer treatment. Common immunotherapy approaches include cancer vaccine, effector cell therapy, and T‐cell–stimulating antibody. Checkpoint inhibitors such as cytotoxic T lymphocyte–associated antigen 4 and programmed death‐1/L1 antagonists have shown promising results in multiple indications in solid tumors and hematology. However, the mechanisms of action of these novel drugs pose unique statistical challenges in the accurate evaluation of clinical safety and efficacy, including late‐onset toxicity, dose optimization, evaluation of combination agents, pseudoprogression, and delayed and lasting clinical activity. Traditional statistical methods may not be the most accurate or efficient. It is highly desirable to develop the most suitable statistical methodologies and tools to efficiently investigate cancer immunotherapies. In this paper, we summarize these issues and discuss alternative methods to meet the challenges in the clinical development of these novel agents. For safety evaluation and dose‐finding trials, we recommend the use of a time‐to‐event model‐based design to handle late toxicities, a simple 3‐step procedure for dose optimization, and flexible rule‐based or model‐based designs for combination agents. For efficacy evaluation, we discuss alternative endpoints/designs/tests including the time‐specific probability endpoint, the restricted mean survival time, the generalized pairwise comparison method, the immune‐related response criteria, and the weighted log‐rank or weighted Kaplan‐Meier test. The benefits and limitations of these methods are discussed, and some recommendations are provided for applied researchers to implement these methods in clinical practice.

13.
The option to stop a project is fundamental in drug development. The majority of drugs do not reach the market. Furthermore, many marketed drugs do not repay their development costs. It is therefore crucial to optimize the value of the option to stop. We formulate two examples of statistical models. One is based on success/failure in a series of trials; the other assumes that the commercial value evolves as a stochastic process as more information becomes available. These models are used to study a number of issues: the number and timing of decision points; the value of information; the speed of development; and the order of trials. The results quantify the value of options. They show that early information that can change key decisions is most valuable. That is, we should nip bad projects in the bud. Modelling is also useful for analysing more complex decisions, for example, weighing the value of decision points against the cost of information or the speed of development. Copyright © 2003 John Wiley & Sons, Ltd.

14.
The development of a new drug is a major undertaking and it is important to consider carefully the key decisions in the development process. Decisions are made in the presence of uncertainty, and outcomes such as the probability of successful drug registration depend on the clinical development programme. The Rheumatoid Arthritis Drug Development Model was developed to support key decisions for drugs in development for the treatment of rheumatoid arthritis. It is configured to simulate Phase 2b and 3 trials based on the efficacy of new drugs at the end of Phase 2a, evidence about the efficacy of existing treatments, and expert opinion regarding key safety criteria. The model evaluates the performance of different development programmes with respect to the duration of disease of the target population, Phase 2b and 3 sample sizes, the dose(s) of the experimental treatment, the choice of comparator, the duration of the Phase 2b clinical trial, the primary efficacy outcome, and decision criteria for successfully passing Phases 2b and 3. It uses Bayesian clinical trial simulation to calculate the probability of successful drug registration based on the uncertainty about parameters of interest, thereby providing a more realistic assessment of the likely outcomes of individual trials and sequences of trials for the purpose of decision making. In this case study, the results show that, depending on the trial design, the new treatment has assurances of successful drug registration in the range 0.044–0.142 for an ACR20 outcome and 0.057–0.213 for an ACR50 outcome. Copyright © 2009 John Wiley & Sons, Ltd.

15.
For oncology drug development, phase II proof‐of‐concept studies have played a key role in determining whether or not to advance to a confirmatory phase III trial. With the increasing number of immunotherapies, efficient design strategies are crucial for moving successful drugs quickly to market. Our research examines drug development decision making under the framework of maximizing resource investment, characterized by benefit‐cost ratios (BCRs). In general, benefit represents the likelihood that a drug is successful, and cost is characterized by the risk‐adjusted total sample size of the phase II and III studies. Phase III studies often include a futility interim analysis; this sequential component can also be incorporated into BCRs. Under this framework, multiple scenarios can be considered. For example, for a given drug and cancer indication, BCRs can yield insights into whether to use a randomized controlled trial or a single‐arm study. Importantly, any uncertainty in historical control estimates that are used to benchmark single‐arm studies can be explicitly incorporated into BCRs. More complex scenarios, such as restricted resources or multiple potential cancer indications, can also be examined. Overall, BCR analyses indicate that single‐arm trials are favored for proof‐of‐concept trials when there is low uncertainty in historical control data and smaller phase III sample sizes. Otherwise, especially if the tumor indication most likely to succeed can be identified, randomized controlled trials may be a better option. While the findings are consistent with intuition, we provide a more objective approach.
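A toy Python calculation (not the authors' exact formulation) of how a benefit-cost ratio might compare a single-arm phase II plan with a randomized one: benefit is taken as the probability the whole program succeeds, and cost as a risk-adjusted expected total sample size in which phase III enrollment is incurred only after a phase II go. All probabilities and sample sizes are invented.

def bcr(p_phase2_go, p_phase3_success, n_phase2, n_phase3):
    benefit = p_phase2_go * p_phase3_success            # P(program success)
    expected_n = n_phase2 + p_phase2_go * n_phase3      # risk-adjusted total sample size
    return benefit / expected_n

single_arm = bcr(p_phase2_go=0.55, p_phase3_success=0.60, n_phase2=60, n_phase3=600)
randomized = bcr(p_phase2_go=0.45, p_phase3_success=0.70, n_phase2=120, n_phase3=600)

print(f"single-arm phase II plan: BCR = {single_arm:.5f}")
print(f"randomized phase II plan: BCR = {randomized:.5f}")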

16.
The cost and time of pharmaceutical drug development continue to grow at rates that many say are unsustainable. These trends have an enormous impact on which treatments get to patients, when they get them, and how they are used. The statistical framework for supporting decisions in the regulated clinical development of new medicines has followed a traditional path of frequentist methodology. Trials using hypothesis tests of “no treatment effect” are done routinely, and a p-value < 0.05 is often the determinant of what constitutes a “successful” trial. Many drugs fail in clinical development, adding to the cost of new medicines, and some evidence points blame at the deficiencies of the frequentist paradigm. An unknown number of effective medicines may have been abandoned because trials were declared “unsuccessful” due to a p-value exceeding 0.05. Recently, the Bayesian paradigm has shown utility in the clinical drug development process for its probability-based inference. We argue for a Bayesian approach that employs data from other trials as a “prior” for Phase 3 trials so that synthesized evidence across trials can be utilized to compute probability statements that are valuable for understanding the magnitude of the treatment effect. Such a Bayesian paradigm provides a promising framework for improving statistical inference and regulatory decision making.
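A minimal conjugate-normal sketch, in Python, of the kind of synthesis being argued for: evidence from earlier trials enters the Phase 3 analysis as a prior, and inference is reported as posterior probabilities about the treatment effect rather than a p-value. The prior, estimate, and thresholds are all assumed for illustration.

from scipy.stats import norm

prior_mean, prior_sd = 0.25, 0.20   # effect synthesized from other trials (assumed)
effect_hat, se = 0.18, 0.09         # Phase 3 estimate and its standard error (assumed)

post_prec = 1 / prior_sd**2 + 1 / se**2
post_sd = post_prec ** -0.5
post_mean = (prior_mean / prior_sd**2 + effect_hat / se**2) / post_prec

print(f"P(effect > 0    | all evidence): {1 - norm.cdf(0.0, post_mean, post_sd):.3f}")
print(f"P(effect > 0.15 | all evidence): {1 - norm.cdf(0.15, post_mean, post_sd):.3f}")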

17.
Phase II trials evaluate whether a new drug or therapy is worth pursuing further, or whether certain treatments are feasible. A typical phase II trial is a single-arm (open-label) trial with a binary clinical endpoint (response to therapy). Although many oncology phase II clinical trials are designed with a two-stage procedure, multi-stage designs for phase II cancer clinical trials are now feasible due to the increased capability of data capture. Such designs adjust for multiple analyses and variations in analysis time, and provide greater flexibility, such as minimizing the number of patients treated on an ineffective therapy and identifying the minimum number of patients needed to evaluate whether the trial warrants further development. In most NIH-sponsored studies, the early stopping rule is determined so that the number of patients treated on an ineffective therapy is minimized. In pharmaceutical trials, it is also important to know as early as possible whether the trial is highly promising and how likely the early conclusion is to be sustained. Although various methods are available to address these issues, practitioners often use disparate methods for different issues and do not realize that a single unified method exists. This article shows how to use a unified approach, via a fully sequential procedure, the sequential conditional probability ratio test, to address the multiple needs of a phase II trial. We show that the fully sequential program can be used to derive an optimized, efficient multi-stage design for either low or high activity, to identify the minimum number of patients required to assess whether a new drug warrants further study, and to adjust for unplanned interim analyses. In addition, we calculate a probability of discordance, that is, the probability that the statistical test would reach the opposite conclusion should the trial continue to its planned end, which is usually the sample size of a fixed-sample design. This probability can be used to aid decision making in a drug development program. All computations are based on the exact binomial distribution.
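The role of exact binomial calculations can be illustrated with a much simpler object than the fully sequential procedure described above: the stage-1 futility rule of a two-stage design, evaluated under a null and an alternative response rate. The boundary and rates in this Python sketch are assumptions, not the article's design.

from scipy.stats import binom

n1, r1 = 15, 3            # stage-1 sample size and futility boundary: stop if <= r1 responses (assumed)
p_null, p_alt = 0.20, 0.40

early_stop_null = binom.cdf(r1, n1, p_null)   # P(stop early | ineffective therapy)
early_stop_alt = binom.cdf(r1, n1, p_alt)     # P(stop early | effective therapy)

print(f"P(early futility stop | p = {p_null}): {early_stop_null:.3f}")
print(f"P(early futility stop | p = {p_alt}): {early_stop_alt:.3f}")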

18.
Using historical maps, geographers study the evolution of land cover in order to take a prospective view of the future landscape; predictions of future land cover, based on older maps and environmental variables, are usually made with a GIS (Geographic Information System). We propose here to compare this classical geographical approach with two statistical approaches: a linear parametric model (polychotomous regression modeling) and a nonparametric one (a multilayer perceptron). These methodologies have been tested on two real areas for which the land cover is known at various dates; this allows us to emphasize the benefits of these two statistical approaches compared with GIS and to discuss how GIS could be improved by the use of statistical models.

19.
Often, single‐arm trials are used in phase II to gather the first evidence of an oncological drug's efficacy, with drug activity determined through tumour response using the RECIST criteria. Provided the null hypothesis of ‘insufficient drug activity’ is rejected, the next step could be a randomised two‐arm trial. However, single‐arm trials may provide a biased treatment effect because of patient selection, and thus this development plan may not be an efficient use of resources. Therefore, we compare the performance of development plans consisting of single‐arm trials followed by randomised two‐arm trials with stand‐alone single‐stage or group‐sequential randomised two‐arm trials. Through this, we are able to investigate the utility of single‐arm trials and determine the most efficient drug development plans, setting our work in the context of a published single‐arm non‐small‐cell lung cancer trial. Reference priors, reflecting the opinions of ‘sceptical’ and ‘enthusiastic’ investigators, are used to quantify and guide the suitability of single‐arm trials in this setting. We observe that the explored development plans incorporating single‐arm trials are often non‐optimal. Moreover, even the most pessimistic reference priors assign a considerable probability in favour of alternative plans. Analysis suggests that expected sample size savings of up to 25% could have been made, and the issues associated with single‐arm trials avoided, for the non‐small‐cell lung cancer treatment through direct progression to a group‐sequential randomised two‐arm trial. Careful consideration should thus be given to the use of single‐arm trials in oncological drug development when a randomised trial will follow. Copyright © 2015 The Authors. Pharmaceutical Statistics published by John Wiley & Sons Ltd.

20.
Clinical trials with multiple strata are increasingly used in drug development. They may sometimes be the only option to study a new treatment, for example in small populations and rare diseases. In early phase trials, where data are often sparse, good statistical inference and subsequent decision‐making can be challenging. Inferences from simple pooling or stratification are known to be inferior to hierarchical modeling methods, which build on exchangeable strata parameters and allow borrowing information across strata. However, the standard exchangeability (EX) assumption bears the risk of too much shrinkage and excessive borrowing for extreme strata. We propose the exchangeability–nonexchangeability (EXNEX) approach as a robust mixture extension of the standard EX approach. It allows each stratum‐specific parameter to be exchangeable with other similar strata parameters or nonexchangeable with any of them. While EXNEX computations can be performed easily with standard Bayesian software, model specifications and prior distributions are more demanding and require a good understanding of the context. Two case studies from phases I and II (with three and four strata) show promising results for EXNEX. Data scenarios reveal tempered degrees of borrowing for extreme strata, and frequentist operating characteristics perform well for estimation (bias, mean‐squared error) and testing (less type‐I error inflation). Copyright © 2015 John Wiley & Sons, Ltd.
