Similar Documents
20 similar documents retrieved (search time: 578 ms)
1.
Decision making is a critical component of the new drug development process. Based on results from an early clinical trial, such as a proof of concept trial, the sponsor can decide whether to continue, stop, or defer the development of the drug. To simplify and harmonize the decision-making process, decision criteria have been proposed in the literature. One of them is to examine the location of a confidence bar relative to the target value and lower reference value of the treatment effect. In this research, we modify an existing approach by reclassifying some "stop" decisions as "consider" decisions, so that the chance of directly terminating the development of a potentially valuable drug is reduced. Because Bayesian analysis is flexible and can borrow historical information through an inferential prior, we apply it to trial planning and decision making. Via a design prior, we can also calculate the probabilities of the various decision outcomes in relation to the sample size and other parameters to help with study design. An example and a series of computations are used to illustrate the applications, assess the operating characteristics, and compare the performance of the different approaches.
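A rule of this kind can be sketched in a few lines. The sketch below is illustrative, not the authors' exact criterion: it assumes a normal posterior for the treatment effect, and the 0.90/0.10 probability thresholds are hypothetical choices.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def decide(post_mean, post_sd, lrv, tv, go_prob=0.90, stop_prob=0.10):
    """Go/Consider/Stop rule based on a normal posterior for the treatment
    effect theta, a lower reference value (LRV), and a target value (TV)."""
    p_gt_lrv = 1.0 - norm_cdf((lrv - post_mean) / post_sd)  # P(theta > LRV)
    p_gt_tv = 1.0 - norm_cdf((tv - post_mean) / post_sd)    # P(theta > TV)
    if p_gt_lrv >= go_prob:
        return "Go"        # confident the effect clears the LRV
    if p_gt_tv <= stop_prob:
        return "Stop"      # little chance of reaching the TV
    return "Consider"      # ambiguous evidence: defer or gather more data
```

Reclassifying part of the "stop" region as "consider", as the abstract proposes, corresponds to tightening `stop_prob`, so that fewer borderline results terminate development outright.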

2.
Development of new pharmacological treatments for osteoarthritis that address unmet medical needs in a competitive marketplace is challenging. Bayesian approaches to trial design offer advantages in defining treatment benefits by addressing clinically relevant magnitudes of effect relative to comparators and in optimizing efficiency in analysis. Such advantages are illustrated by a motivating case study: a proof-of-concept and dose-finding study in patients with osteoarthritis. Patients with osteoarthritis were randomized to receive placebo, celecoxib, or 1 of 4 doses of galcanezumab. The primary outcome measure was change from baseline in WOMAC pain after 8 weeks of treatment. A literature review of clinical trials with targeted comparator therapies quantified treatment effects versus placebo. Two success criteria were defined: one to address superiority to placebo with adequate precision and another to ensure a clinically relevant treatment effect. Trial simulations used a Bayesian dose-response and longitudinal model. An interim analysis for futility was incorporated. Simulations indicated the study had ≥85% power to detect a 14-mm improvement and ≤1% risk of a placebo-like drug passing. The addition of the second success criterion substantially reduced the risk of an inadequate, weakly efficacious drug proceeding to further development. The study was terminated at the interim analysis due to inadequate analgesic efficacy. A Bayesian approach using probabilistic statements enables clear understanding of success criteria, leading to informed decisions about study conduct. Incorporating an interim analysis can effectively reduce sample size, save resources, and minimize the exposure of patients to an inadequate treatment.
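The operating characteristics quoted above come from trial simulation. The sketch below shows the general idea for a two-arm trial with a single futility interim; it is a simplified stand-in, not the study's Bayesian dose-response and longitudinal model, and all numbers (sample sizes, effect, interim rule) are hypothetical.

```python
import math
import random

def simulate_power(n_per_arm, delta, sigma, interim_frac=0.5,
                   futility_z=0.0, z_final=1.959964, sims=5000, seed=7):
    """Monte Carlo probability of final success for a two-arm trial with one
    futility interim: stop if the interim z-statistic falls below futility_z."""
    rng = random.Random(seed)
    wins = 0
    n1 = int(n_per_arm * interim_frac)  # patients per arm at the interim
    n2 = n_per_arm - n1
    for _ in range(sims):
        se1 = sigma * math.sqrt(2.0 / n1)
        d1 = rng.gauss(delta, se1)          # stage 1 estimated difference
        if d1 / se1 < futility_z:
            continue                        # stopped for futility
        d2 = rng.gauss(delta, sigma * math.sqrt(2.0 / n2))
        d = (n1 * d1 + n2 * d2) / n_per_arm # pooled full-data estimate
        if d / (sigma * math.sqrt(2.0 / n_per_arm)) > z_final:
            wins += 1
    return wins / sims
```

Running the same function under the null (`delta=0`) and the alternative shows, as in the abstract, both the low risk of a placebo-like drug passing and the power against a real effect.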

3.
For oncology drug development, phase II proof-of-concept studies have played a key role in determining whether or not to advance to a confirmatory phase III trial. With the increasing number of immunotherapies, efficient design strategies are crucial for moving successful drugs quickly to market. Our research examines drug development decision making under a framework of maximizing resource investment, characterized by benefit-cost ratios (BCRs). In general, benefit represents the likelihood that a drug is successful, and cost is characterized by the risk-adjusted total sample size of the phase II and phase III studies. Phase III studies often include a futility interim analysis; this sequential component can also be incorporated into BCRs. Under this framework, multiple scenarios can be considered. For example, for a given drug and cancer indication, BCRs can yield insights into whether to use a randomized controlled trial or a single-arm study. Importantly, any uncertainty in the historical control estimates used to benchmark single-arm studies can be explicitly incorporated into BCRs. More complex scenarios, such as restricted resources or multiple potential cancer indications, can also be examined. Overall, BCR analyses indicate that single-arm trials are favored for proof-of-concept studies when there is low uncertainty in the historical control data and the phase III sample size is smaller. Otherwise, especially if the tumor indication most likely to succeed can be identified, randomized controlled trials may be the better option. While the findings are consistent with intuition, we provide a more objective approach.
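The BCR idea reduces to simple arithmetic once the ingredients are estimated. The following is a schematic, not the paper's exact formulation; the probabilities and sample sizes are hypothetical numbers chosen only to show how a single-arm versus randomized comparison might be made.

```python
def bcr(p_success, n_phase2, p_advance, n_phase3):
    """Benefit-cost ratio: probability of overall success divided by the
    risk-adjusted expected total sample size (phase III enrolls only if the
    phase II result advances the drug)."""
    expected_n = n_phase2 + p_advance * n_phase3
    return p_success / expected_n

# Hypothetical comparison: a small single-arm phase II versus a randomized
# phase II that is larger but may offer a different probability of success.
single_arm = bcr(p_success=0.25, n_phase2=40, p_advance=0.45, n_phase3=400)
randomized = bcr(p_success=0.28, n_phase2=100, p_advance=0.40, n_phase3=400)
```

Consistent with the abstract, greater uncertainty in the historical control would lower `p_success` for the single-arm option and could flip the comparison toward the randomized design.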

4.
This paper illustrates how the design and statistical analysis of the primary endpoint of a proof-of-concept study can be formulated within a Bayesian framework and is motivated by and illustrated with a Pfizer case study in chronic kidney disease. It is shown how decision criteria for success can be formulated, and how the study design can be assessed in relation to these, both using the traditional approach of probability of success conditional on the true treatment difference and also using Bayesian assurance and pre-posterior probabilities. The case study illustrates how an informative prior on placebo response can have a dramatic effect in reducing sample size, saving time and resource, and we argue that in some cases, it can be considered unethical not to include relevant literature data in this way. Copyright © 2015 John Wiley & Sons, Ltd.
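Assurance, mentioned above, is power averaged over a prior on the true treatment difference. The sketch below illustrates this for a normal endpoint with known sigma; it is a generic illustration under stated assumptions, not the case-study model.

```python
import math
import random

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def assurance(n_per_arm, sigma, prior_mean, prior_sd,
              z_crit=1.959964, sims=20000, seed=42):
    """Assurance = frequentist power averaged over a N(prior_mean, prior_sd^2)
    prior on the true treatment difference (two-arm trial, known sigma)."""
    rng = random.Random(seed)
    se = sigma * math.sqrt(2.0 / n_per_arm)  # SE of the estimated difference
    total = 0.0
    for _ in range(sims):
        delta = rng.gauss(prior_mean, prior_sd)
        total += 1.0 - norm_cdf(z_crit - delta / se)  # power given this delta
    return total / sims
```

With `prior_sd = 0` this collapses to ordinary power at the prior mean; widening the prior pulls assurance toward intermediate values, which is why assurance often gives a more sober view of a design than conditional power does.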

5.
Intention-to-treat (ITT) analysis is widely used to establish efficacy in randomized clinical trials. However, in a long-term outcomes study where non-adherence to study drug is substantial, the on-treatment effect of the study drug may be underestimated using the ITT analysis. The analyses presented herein are from the EVOLVE trial, a double-blind, placebo-controlled, event-driven cardiovascular outcomes study conducted to assess whether a treatment regimen including cinacalcet, compared with placebo in addition to other conventional therapies, reduces the risk of mortality and major cardiovascular events in patients receiving hemodialysis with secondary hyperparathyroidism. Pre-specified sensitivity analyses were performed to assess the impact of non-adherence on the estimated effect of cinacalcet. These analyses included lag-censoring, inverse probability of censoring weights (IPCW), the rank-preserving structural failure time model (RPSFTM), and iterative parameter estimation (IPE). The relative hazard (cinacalcet versus placebo) of mortality and major cardiovascular events was 0.93 (95% confidence interval 0.85, 1.02) using the ITT analysis; 0.85 (0.76, 0.95) using lag-censoring analysis; 0.81 (0.70, 0.92) using IPCW; 0.85 (0.66, 1.04) using RPSFTM; and 0.85 (0.75, 0.96) using IPE. These analyses, while not providing definitive evidence, suggest that the intervention may have an effect while subjects are receiving treatment. The ITT method remains the established method to evaluate the efficacy of a new treatment; however, additional analyses should be considered to assess the on-treatment effect when substantial non-adherence to study drug is expected or observed. Copyright © 2015 John Wiley & Sons, Ltd.

6.
In early clinical development of new medicines, a single-arm study with a limited number of patients is often used to provide a preliminary assessment of a response rate. A multi-stage design may be indicated, especially when the first stage should include only very few patients so as to enable rapid identification of an ineffective drug. We used decision rules based on several types of nominal confidence intervals to evaluate a three-stage design for a study that includes at most 30 patients. For each decision rule, we used exact binomial calculations to determine the probability of continuing to further stages as well as to evaluate Type I and Type II error rates. Examples are provided to illustrate the methods for evaluating alternative decision rules and to provide guidance on how to extend the methods to situations with modifications to the number of stages or the number of patients per stage in the study design. Copyright © 2004 John Wiley & Sons, Ltd.
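Exact binomial operating characteristics for a multi-stage single-arm design can be computed by propagating the distribution of cumulative responders through the stages. The stage sizes and cutoffs below are hypothetical, not the paper's decision rules.

```python
from math import comb

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

def prob_declare_promising(stage_sizes, min_responders, p):
    """Exact probability that a multi-stage single-arm trial passes every
    stage, where the trial continues past stage i only if the cumulative
    responder count is at least min_responders[i]."""
    dist = {0: 1.0}  # cumulative responders, among trials still running
    for n_i, r_i in zip(stage_sizes, min_responders):
        new = {}
        for s, pr in dist.items():
            for k in range(n_i + 1):
                new[s + k] = new.get(s + k, 0.0) + pr * binom_pmf(k, n_i, p)
        dist = {s: pr for s, pr in new.items() if s >= r_i}  # futility stop otherwise
    return sum(dist.values())

# Hypothetical 3-stage design with at most 30 patients (5 + 10 + 15):
design = ([5, 10, 15], [1, 4, 9])
type1 = prob_declare_promising(*design, p=0.20)  # under an uninteresting rate
power = prob_declare_promising(*design, p=0.45)  # under a promising rate
```

Evaluating the same design at the null and alternative response rates gives the exact Type I and Type II error rates; intermediate stage probabilities can be read off the same recursion.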

7.
A study design with two or more doses of a test drug and placebo is frequently used in clinical drug development. Multiplicity issues arise when there are multiple comparisons between doses of the test drug and placebo, and also when doses are compared with one another. An appropriate analysis strategy needs to be specified in advance to avoid spurious results through insufficient control of Type I error, as well as to avoid loss of power due to excessively conservative adjustments for multiplicity. To evaluate alternative strategies with possibly complex management of multiplicity, we compare the performance of several testing procedures on simulated data representing various patterns of treatment differences. The purpose is to identify which methods perform better or more robustly than others, and under what conditions. Copyright © 2005 John Wiley & Sons, Ltd.
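Two standard procedures of the kind such comparisons typically include are Bonferroni and Holm; the abstract does not list its exact set, so these are shown only as representative examples.

```python
def bonferroni(p_values, alpha=0.05):
    """Single-step: reject H_i when p_i <= alpha/m (simple but conservative)."""
    m = len(p_values)
    return [p <= alpha / m for p in p_values]

def holm(p_values, alpha=0.05):
    """Holm step-down: uniformly more powerful than Bonferroni while still
    controlling the familywise error rate under any dependence structure."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    reject = [False] * m
    for rank, i in enumerate(order):
        if p_values[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break  # once one test fails, all larger p-values fail too
    return reject
```

For p-values [0.010, 0.020, 0.030] with three dose-placebo comparisons, Bonferroni rejects only the first hypothesis while Holm rejects all three, illustrating the power that an overly conservative adjustment gives away.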

8.
Phase II clinical trials designed to evaluate a drug's treatment effect can be either single-arm or double-arm. A single-arm design tests the null hypothesis that the response rate of a new drug is lower than a fixed threshold, whereas a double-arm scheme makes a more objective comparison of the response rate between the new treatment and the standard of care through randomization. Although the randomized design is the gold standard for efficacy assessment, various situations may arise where a single-arm pilot study prior to a randomized trial is necessary. To combine the single- and double-arm phases and pool their information for better decision making, we propose a single-to-double arm transition design (START) with switching hypothesis tests, where the first stage compares the new drug's response rate with a minimum required level and imposes a continuation criterion, and the second stage uses randomization to determine the treatment's superiority. We develop a software package in R to calibrate the frequentist error rates and perform simulation studies to assess the trial characteristics. Finally, a metastatic pancreatic cancer trial is used to illustrate the decision rules under the proposed START design.

9.
In this paper, we extend the use of assurance for a single study to explore how meeting a study's pre-defined success criteria could update our beliefs about the true treatment effect and impact the assurance of subsequent studies. This concept of conditional assurance, the assurance of a subsequent study conditional on success in an initial study, can be used to assess the de-risking potential of the study requiring immediate investment, to ensure it provides value within the overall development plan. If the planned study does not discharge sufficient later-phase risk, alternative designs and/or success criteria should be explored. By transparently laying out the different design options and the associated risks, this allows decision makers to make quantitative investment choices based on their risk tolerance levels and potential return on investment. This paper lays out the derivation of conditional assurance, discusses how changing the design of a planned study impacts the conditional assurance of a future study, and presents a simple illustrative example of how this methodology could be used to transparently compare development plans to aid decision making within an organisation.
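Conditional assurance can be estimated by joint simulation: draw the true effect from the prior, then draw both study results given that effect. The normal approximations, critical values, and standard errors below are illustrative assumptions, not the paper's derivation.

```python
import math
import random

def conditional_assurance(prior_mean, prior_sd, se2, se3,
                          z2=1.645, z3=1.959964, sims=50000, seed=3):
    """P(phase III success | phase II success): sample the true effect delta
    from the prior, then the two independent study estimates given delta.
    Success in each study = estimate/SE exceeding its critical value."""
    rng = random.Random(seed)
    succ2 = both = 0
    for _ in range(sims):
        delta = rng.gauss(prior_mean, prior_sd)
        if rng.gauss(delta, se2) / se2 > z2:      # phase II success
            succ2 += 1
            if rng.gauss(delta, se3) / se3 > z3:  # phase III success
                both += 1
    return both / succ2 if succ2 else float("nan")
```

If a candidate phase II success criterion barely lifts this conditional probability above the unconditional assurance of phase III, the early study discharges little later-phase risk, which is exactly the warning sign the paper describes.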

10.
Background: Inferentially seamless studies are one of the best-known adaptive trial designs. Statistical inference for these studies is a well-studied problem. Regulatory guidance suggests that statistical issues associated with study conduct are not as well understood. Some of these issues are caused by the need for early pre-specification of the phase III design and the absence of sponsor access to unblinded data. Before statisticians decide to choose a seamless IIb/III design for their programme, they should consider whether these pitfalls will be an issue for their programme. Methods: We consider four case studies. Each design met with varying degrees of success. We explore the reasons for this variation to identify characteristics of drug development programmes that lend themselves well to inferentially seamless trials and other characteristics that warn of difficulties. Results: Seamless studies require increased upfront investment and planning to enable the phase III design to be specified at the outset of phase II. Pivotal, inferentially seamless studies are unlikely to allow meaningful sponsor access to unblinded data before study completion. This limits a sponsor's ability to reflect new information in the phase III portion. Conclusions: When few clinical data have been gathered about a drug, phase II data will answer many unresolved questions. Committing to phase III plans and study designs before phase II begins introduces extra risk to drug development. However, seamless pivotal studies may be an attractive option when the clinical setting and development programme allow, for example, when revisiting dose selection. Copyright © 2014 John Wiley & Sons, Ltd.

11.
Evidence-based quantitative methodologies have been proposed to inform decision making in drug development, such as metrics for go/no-go decisions or predictions of success, defined as statistical significance of future clinical trials. While these methodologies appropriately address some critical questions about the potential of a drug, they either consider past evidence without predicting the outcome of future trials or focus only on efficacy, failing to account for the multifaceted aspects of successful drug development. As quantitative benefit-risk assessments could enhance decision making, we propose a more comprehensive approach using a composite definition of success based not only on the statistical significance of the treatment effect on the primary endpoint but also on its clinical relevance and on a favorable benefit-risk balance in the next pivotal studies. For one drug, we can thus study several development strategies before starting the pivotal trials by comparing their predictive probabilities of success. The predictions are based on the available evidence from previous trials, to which new hypotheses about the future development can be added. The resulting predictive probability of composite success provides a useful summary to support the discussions of decision makers. We present a fictitious, but realistic, example in major depressive disorder inspired by a real decision-making case.

12.
Clinical studies with small numbers of patients are conducted by pharmaceutical companies and research institutions. Examples of constraints that lead to a small clinical study include a single investigative site with highly specialized expertise or equipment, rare diseases, and limited time and budget. We consider the following topics, which we believe will be helpful for the investigator and statistician working together on the design and analysis of small clinical studies: definitions of various types of small studies (exploratory, pilot, proof of concept); bias and ways to mitigate it; commonly used study designs for randomized and nonrandomized studies, and some less commonly used designs; potential ethical issues associated with small underpowered clinical studies; sample size for small studies; and statistical analysis methods for different types of variables and multiplicity issues. We conclude the paper with recommendations made by an Institute of Medicine committee, which was asked to assess the current methodologies and the appropriate situations for conducting small clinical studies.

13.
Phase II trials evaluate whether a new drug or therapy is worth pursuing further, or whether certain treatments are feasible. A typical phase II trial is a single-arm (open-label) trial with a binary clinical endpoint (response to therapy). Although many oncology phase II clinical trials are designed with a two-stage procedure, multi-stage designs for phase II cancer clinical trials are now feasible owing to the increased capability of data capture. Such designs adjust for multiple analyses and variations in analysis time, and provide greater flexibility, such as minimizing the number of patients treated on an ineffective therapy and identifying the minimum number of patients needed to evaluate whether the trial warrants further development. In most NIH-sponsored studies, the early stopping rule is chosen so that the number of patients treated on an ineffective therapy is minimized. In pharmaceutical trials, it is also important to know as early as possible whether the trial is highly promising and how likely the early conclusion is to be sustained. Although various methods are available to address these issues, practitioners often use disparate methods for different issues without realizing that a single unified method exists. This article shows how to use a unified approach, a fully sequential procedure called the sequential conditional probability ratio test, to address the multiple needs of a phase II trial. We show that the fully sequential program can be used to derive an optimized, efficient multi-stage design for either low or high activity, to identify the minimum number of patients required to assess whether a new drug warrants further study, and to adjust for unplanned interim analyses. In addition, we calculate a probability of discordance, that is, the probability that the statistical test would reach the opposite conclusion should the trial continue to its planned end, usually at the sample size of a fixed-sample design. This probability can be used to aid decision making in a drug development program. All computations are based on the exact binomial distribution.
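The sequential conditional probability ratio test itself is specialized, but the underlying boundary idea can be shown with its classical cousin, the Wald SPRT for a binomial response rate. This sketch is a simplified stand-in, not the article's procedure, and the rates and error levels are illustrative.

```python
import math

def sprt_binomial(successes, n, p0, p1, alpha=0.05, beta=0.10):
    """Wald SPRT for H0: response rate = p0 vs H1: rate = p1 (p1 > p0).
    Returns the sequential decision after n patients."""
    llr = (successes * math.log(p1 / p0)
           + (n - successes) * math.log((1 - p1) / (1 - p0)))
    a = math.log(beta / (1 - alpha))    # lower boundary: stop for futility
    b = math.log((1 - beta) / alpha)    # upper boundary: stop, promising
    if llr <= a:
        return "stop: not promising"
    if llr >= b:
        return "stop: promising"
    return "continue"
```

Because the log-likelihood ratio can be updated after every patient, the same machinery supports both early stopping for an ineffective therapy and early declaration of a highly promising one, the two needs the article unifies.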

14.
Bayesian methods are increasingly used in proof-of-concept studies. An important benefit of these methods is the potential to use informative priors, thereby reducing sample size. This is particularly relevant for treatment arms where there is a substantial amount of historical information, such as placebo and active comparators. One issue with using an informative prior is the possibility of a mismatch between the informative prior and the observed data, referred to as prior-data conflict. We focus on two methods for dealing with this: a testing approach and a mixture prior approach. The testing approach assesses prior-data conflict by comparing the observed data to the prior predictive distribution and resorting to a non-informative prior if prior-data conflict is declared. The mixture prior approach uses a prior with a precise component and a diffuse component. We assess these approaches for the normal case via simulation and show that they have some attractive features compared with the standard one-component informative prior. For example, when the discrepancy between the prior and the data is sufficiently marked, and intuitively one feels less certain about the results, both the testing and mixture approaches typically yield wider posterior credible intervals than when there is no discrepancy. In contrast, when there is no discrepancy, the results of these approaches are typically similar to those of the standard approach. Whilst for any specific study the operating characteristics of the selected approach should be assessed and agreed at the design stage, we believe these two approaches are each worthy of consideration. Copyright © 2015 John Wiley & Sons, Ltd.
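For the normal case, the mixture prior update has a closed form: each component is updated by conjugacy, and the mixture weights are re-weighted by how well each component predicted the data. The sketch below assumes a two-component normal mixture and a normal likelihood with known standard error; the specific parameter values in the tests are hypothetical.

```python
import math

def norm_pdf(x, m, s):
    return math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))

def mixture_posterior(ybar, se, w, m_inf, s_inf, m_vag, s_vag):
    """Posterior under prior w*N(m_inf, s_inf^2) + (1-w)*N(m_vag, s_vag^2)
    with data ybar ~ N(theta, se^2). Returns the updated weight on the
    informative component and each component's posterior (mean, sd)."""
    def post(m0, s0):
        prec = 1 / s0**2 + 1 / se**2
        return (m0 / s0**2 + ybar / se**2) / prec, math.sqrt(1 / prec)
    # marginal (prior-predictive) likelihood of ybar under each component
    like_inf = norm_pdf(ybar, m_inf, math.sqrt(s_inf**2 + se**2))
    like_vag = norm_pdf(ybar, m_vag, math.sqrt(s_vag**2 + se**2))
    w_post = w * like_inf / (w * like_inf + (1 - w) * like_vag)
    return w_post, post(m_inf, s_inf), post(m_vag, s_vag)
```

When the data agree with the informative component, its weight grows; under marked prior-data conflict the weight collapses onto the diffuse component, which is what produces the wider, more honest credible intervals described above.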

15.
In May 2012, the Committee for Medicinal Products for Human Use (CHMP) issued a concept paper on the need to review the points-to-consider document on multiplicity issues in clinical trials. In preparation for the release of the updated guidance document, Statisticians in the Pharmaceutical Industry held a one-day expert group meeting in January 2013. Topics debated included multiplicity and the drug development process, the usefulness and limitations of newly developed strategies to deal with multiplicity, multiplicity issues arising from interim decisions and multiregional development, and the need for simultaneous confidence intervals (CIs) corresponding to multiple test procedures. A clear message from the meeting was that multiplicity adjustments need to be considered when the intention is to make a formal statement about efficacy or safety based on hypothesis tests. Statisticians have a key role when designing studies to assess what adjustment really means in the context of the research being conducted. More thought during the planning phase needs to be given to multiplicity adjustments for secondary endpoints, given these are increasing in importance in differentiating products in the marketplace. No consensus was reached on the role of simultaneous CIs in the context of superiority trials. It was argued that unadjusted intervals should be employed as the primary purpose of the intervals is estimation, while the purpose of hypothesis testing is to formally establish an effect. The opposing view was that CIs should correspond to the test decision whenever possible. Copyright © 2013 John Wiley & Sons, Ltd.

16.
An important evolution in the missing data arena has been the recognition of the need for clarity in objectives. The objectives of primary focus in clinical trials can often be categorized as assessing efficacy or effectiveness. The present investigation illustrated a structured framework for choosing estimands and estimators when testing investigational drugs to treat the symptoms of chronic illnesses. Key issues were discussed and illustrated using a reanalysis of the confirmatory trials from a new drug application in depression. The primary analysis used a likelihood-based approach to assess efficacy: mean change to the planned endpoint of the trial assuming patients stayed on drug. Secondarily, effectiveness was assessed using a multiple imputation approach. The imputation model, derived solely from the placebo group, was used to impute missing values for both the drug and placebo groups. Therefore, this so-called placebo multiple imputation (a.k.a. controlled imputation) approach assumed patients had reduced benefit from the drug after discontinuing it. Results from the example data provided clear evidence of efficacy for the experimental drug and characterized its effectiveness. Data after discontinuation of study medication were not required for these analyses. Given the idiosyncratic nature of drug development, no estimand or approach is universally appropriate. However, the general practice of pairing efficacy and effectiveness estimands may often be useful in understanding the overall risks and benefits of a drug. Controlled imputation approaches, such as placebo multiple imputation, can be a flexible and transparent framework for formulating primary analyses of effectiveness estimands and sensitivity analyses for efficacy estimands. Copyright © 2012 John Wiley & Sons, Ltd.

17.
A 3-arm trial design that includes an experimental treatment, an active reference treatment, and a placebo is useful for assessing the noninferiority of an experimental treatment. The inclusion of a placebo arm enables the assessment of assay sensitivity and internal validation, in addition to the testing of the noninferiority of the experimental treatment compared with the reference treatment. In 3-arm noninferiority trials, various statistical test procedures have been considered to evaluate the following 3 hypotheses: (i) superiority of the experimental treatment over the placebo, (ii) superiority of the reference treatment over the placebo, and (iii) noninferiority of the experimental treatment compared with the reference treatment. However, hypothesis (ii) can be insufficient and may not accurately assess the assay sensitivity for the noninferiority comparison. Thus, it can be necessary to demonstrate that the superiority of the reference treatment over the placebo exceeds the noninferiority margin (that is, to rule out near-nonsuperiority of the reference treatment relative to the placebo). Here, we propose log-rank statistical procedures for evaluating data obtained from 3-arm noninferiority trials to assess assay sensitivity with a prespecified margin Δ. In addition, we derive the approximate sample size and the optimal allocation that hierarchically minimize the total sample size and the placebo arm sample size.
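The three hypotheses can be written as simple one-sided tests. The paper works with log-rank statistics for survival data; the sketch below is a simplified normal-endpoint analogue with a common standard error, shown only to make the logical structure concrete.

```python
def three_arm_tests(est_exp, est_ref, est_pbo, se, margin, z_crit=1.959964):
    """Normal-approximation analogue of the three tests in a three-arm
    non-inferiority trial:
      (i)   experimental superior to placebo,
      (ii') reference beats placebo by MORE than the margin (the stricter
            assay-sensitivity requirement the abstract argues for),
      (iii) experimental non-inferior to reference within the margin.
    est_* are estimated treatment effects; se is the SE of a difference."""
    sup_exp = (est_exp - est_pbo) / se > z_crit
    assay = (est_ref - est_pbo - margin) / se > z_crit
    noninf = (est_exp - est_ref + margin) / se > z_crit
    return sup_exp, assay, noninf
```

Note that (ii') is deliberately stronger than plain reference-versus-placebo superiority: unless the reference clears placebo by more than the margin, "non-inferior within the margin" could be satisfied by a treatment no better than placebo.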

18.
In recent years, global collaboration has become a conventional strategy for new drug development. To accelerate the development process and shorten approval time, the design of multi-regional clinical trials (MRCTs) incorporates subjects from many countries/regions around the world under the same protocol. After showing the overall efficacy of a drug in a global trial, one can also simultaneously evaluate the possibility of applying the overall trial results to all regions and subsequently support drug registration in each region. However, most of the recent approaches developed for the design and evaluation of MRCTs focus on establishing criteria to examine whether the overall results from the MRCT can be applied to a specific region. In this paper, we use the consistency criterion of Method 1 from the Japanese Ministry of Health, Labour and Welfare (MHLW) guidance to assess whether the overall results from the MRCT can be applied to all regions. Sample size determination for the MRCT is also provided, taking the consistency criteria of all the individual regions into account. Numerical examples are given to illustrate applications of the proposed approach.
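The Method 1 consistency check is a simple proportional-retention condition. The sketch below applies it to every region, as the paper does; the MHLW guidance states the criterion for the Japanese subgroup, and the region names and effects here are hypothetical.

```python
def method1_consistency(regional_effects, overall_effect, pi=0.5):
    """Method 1 style criterion: a region is judged consistent with the
    overall MRCT result if its observed effect retains at least a fraction
    pi (commonly 0.5) of the overall effect."""
    return {region: d >= pi * overall_effect
            for region, d in regional_effects.items()}

# Hypothetical observed effects by region, overall effect 0.50:
consistency = method1_consistency(
    {"Japan": 0.60, "EU": 0.20, "US": 0.55}, overall_effect=0.50)
```

Requiring every region to satisfy the criterion simultaneously with high probability is what drives the sample size determination: small regions have noisy effect estimates, so their consistency probability, not overall power, can become the binding constraint.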

19.
In modern oncology drug development, adaptive designs have been proposed to identify the recommended phase 2 dose. Conventional dose-finding designs focus on identification of the maximum tolerated dose (MTD). However, designs that ignore efficacy can put patients at risk by pushing to the MTD. Especially in immuno-oncology and cell therapy, the complex dose-toxicity and dose-efficacy relationships make such MTD-driven designs more questionable. Additionally, it is not uncommon to have data available from other studies that target a similar mechanism of action and patient population. Because of the high variability of phase I trials, it is beneficial to borrow historical study information into the design when available. This helps to increase model efficiency and accuracy and provides dose-specific recommendation rules that avoid toxic dose levels and increase the chance of allocating patients to potentially efficacious dose levels. In this paper, we propose the iBOIN-ET design, which uses prior distributions extracted from historical studies to minimize the probability of decision error. The proposed design utilizes the concept of a skeleton for both toxicity and efficacy data, coupled with a prior effective sample size to control the amount of historical information incorporated. Extensive simulation studies across a variety of realistic settings are reported, including a comparison of the iBOIN-ET design to other model-based and model-assisted approaches. The proposed novel design demonstrates superior performance in the percentage of trials selecting the correct optimal dose (OD), the average number of patients allocated to the correct OD, and overdosing control during the dose-escalation process.
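For context, the standard BOIN design that iBOIN-ET builds on compares the observed toxicity rate at the current dose with two fixed boundaries derived from the target rate. The sketch below computes those standard boundaries (Liu and Yuan's formulation); it does not implement the informative priors or efficacy modeling that iBOIN-ET adds.

```python
import math

def boin_boundaries(phi, phi1=None, phi2=None):
    """Standard BOIN escalation/de-escalation boundaries for a target
    toxicity rate phi: escalate if the observed toxicity rate at the current
    dose is <= lam_e, de-escalate if it is >= lam_d, otherwise stay.
    Defaults phi1 = 0.6*phi and phi2 = 1.4*phi follow common practice."""
    phi1 = 0.6 * phi if phi1 is None else phi1
    phi2 = 1.4 * phi if phi2 is None else phi2
    lam_e = (math.log((1 - phi1) / (1 - phi))
             / math.log(phi * (1 - phi1) / (phi1 * (1 - phi))))
    lam_d = (math.log((1 - phi) / (1 - phi2))
             / math.log(phi2 * (1 - phi) / (phi * (1 - phi2))))
    return lam_e, lam_d
```

For a target rate of 0.30 this gives the familiar boundaries of about 0.236 and 0.358; an informative prior, as in iBOIN-ET, effectively shifts these dose-specific decision rules toward what the historical data support.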

20.
In this article, we extend a previously formulated threshold dose-response model with random litter effects that was applied to a data set from a developmental toxicity study. The dose-response pattern of the data indicates that a threshold dose level may exist. Additionally, there is noticeable variation between the responses across the dose levels. With threshold estimation being critical, the assumed variability structure should adequately model the variation without taking away from the estimation of the threshold and the other parameters directly involved in the dose-response relationship. In the prior formulation, the random effect was modeled assuming identical variation in the interlitter response probabilities across all dose levels; that is, the model had a single parameter to account for the interlitter variability. In the new model, the random effect is modeled as having different response variability across dose levels, that is, multiple interlitter variability parameters. We performed a likelihood ratio test (LRT) to compare our extended model to the previous model. We conducted a simulation study to compare the bias of each model when fit to data generated under the underlying parametric structure of the opposing model. The extended threshold dose-response model with multiple response variation was less biased.
