Similar literature: 20 results.
1.
Evidence-based quantitative methodologies have been proposed to inform decision-making in drug development, such as metrics for go/no-go decisions or predictions of success, where success is identified with statistical significance in future clinical trials. While these methodologies appropriately address some critical questions about a drug's potential, they either consider past evidence without predicting the outcome of future trials or focus only on efficacy, failing to account for the multifaceted aspects of successful drug development. Since quantitative benefit-risk assessments can enhance decision-making, we propose a more comprehensive approach using a composite definition of success based not only on the statistical significance of the treatment effect on the primary endpoint but also on its clinical relevance and on a favorable benefit-risk balance in the next pivotal studies. For a given drug, we can thus study several development strategies before starting the pivotal trials by comparing their predictive probabilities of success. The predictions are based on the available evidence from previous trials, to which new hypotheses about the future development can be added. The resulting predictive probability of composite success provides a useful summary to support the discussions of the decision-makers. We present a fictitious, but realistic, example in major depressive disorder inspired by a real decision-making case.
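
A minimal Monte Carlo sketch of such a predictive probability of composite success, assuming normal posteriors for the efficacy effect and for an adverse-event risk difference; all numbers, margins, and the simple benefit-risk rule are hypothetical placeholders, not the authors' model:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)

# Hypothetical inputs: posteriors of the true efficacy effect and of the
# adverse-event risk difference from completed trials, plus the pivotal design.
post_mean, post_sd = 2.5, 1.0          # posterior of the true effect
risk_mean, risk_sd = 0.02, 0.015       # posterior of the AE risk difference
n_per_arm, sigma = 150, 8.0            # pivotal design and assumed outcome SD
relevance_margin, risk_margin = 2.0, 0.10

n_sim = 100_000
true_eff = rng.normal(post_mean, post_sd, n_sim)
true_risk = rng.normal(risk_mean, risk_sd, n_sim)

se = sigma * np.sqrt(2 / n_per_arm)         # SE of the estimated difference
est_eff = rng.normal(true_eff, se)          # simulated pivotal-trial estimate

significant = est_eff / se > norm.ppf(0.975)     # one-sided 2.5% significance
relevant = est_eff >= relevance_margin           # clinical relevance
favourable = true_risk <= risk_margin            # simplistic benefit-risk rule

print("predictive P(composite success):",
      np.mean(significant & relevant & favourable))
```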

2.
This paper illustrates how the design and statistical analysis of the primary endpoint of a proof-of-concept study can be formulated within a Bayesian framework; it is motivated by and illustrated with a Pfizer case study in chronic kidney disease. It is shown how decision criteria for success can be formulated and how the study design can be assessed against them, both using the traditional approach of probability of success conditional on the true treatment difference and using Bayesian assurance and pre-posterior probabilities. The case study illustrates how an informative prior on the placebo response can have a dramatic effect in reducing sample size, saving time and resources, and we argue that in some cases it can be considered unethical not to include relevant literature data in this way. Copyright © 2015 John Wiley & Sons, Ltd.
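
Under a normal approximation, assurance has a simple closed form; the sketch below contrasts it with the conditional probability of success evaluated at the prior mean. The prior and standard error are made-up values, not those of the Pfizer case study:

```python
import numpy as np
from scipy.stats import norm

def assurance(prior_mean, prior_sd, se_trial, alpha=0.025):
    """Unconditional probability of a significant one-sided result when the
    true difference has a N(prior_mean, prior_sd^2) prior and the trial
    estimate has standard error se_trial (normal approximation)."""
    z = norm.ppf(1 - alpha)
    return norm.cdf((prior_mean - z * se_trial) / np.hypot(se_trial, prior_sd))

# Conditional power at the prior mean vs. assurance (values hypothetical):
se = 0.35
print("power at prior mean:", 1 - norm.cdf(norm.ppf(0.975) - 1.0 / se))
print("assurance:          ", assurance(1.0, 0.5, se))
```

Assurance averages the power curve over the prior, so it is always pulled towards 50% relative to the power at the prior mean when the prior is diffuse.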

3.
Modelling and simulation are buzzwords in clinical drug development. But is clinical trial simulation (CTS) really a revolutionary technique? There is not much more to CTS than applying standard methods of modelling, statistics, and decision theory. However, doing this in a systematic way can mean a significant improvement in pharmaceutical research. This paper describes, with simple examples, how modelling can be used in clinical development. Four steps are identified: gathering relevant information about the drug and the disease; building a mathematical model; predicting the results of potential future trials; and optimizing clinical trials and the entire clinical programme. We discuss these steps and give a number of examples of model components, demonstrating that relatively unsophisticated models may also prove useful. We stress that modelling and simulation are decision tools and point out the benefits of integrating them with decision analysis. Copyright © 2005 John Wiley & Sons, Ltd.
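
As a toy illustration of steps two and three, one might predict a future placebo-controlled trial from an Emax dose-response model with parameter uncertainty; every value here is invented:

```python
import numpy as np

rng = np.random.default_rng(2)

# Step 2: a simple Emax dose-response model with parameter uncertainty.
# Step 3: predict a future placebo-controlled trial at one candidate dose.
n_sim, n_per_arm, sigma, dose = 10_000, 60, 1.2, 80.0
emax = rng.normal(1.5, 0.3, n_sim)             # uncertain maximal effect
ed50 = rng.lognormal(np.log(50), 0.4, n_sim)   # uncertain potency

diff = emax * dose / (ed50 + dose)    # model-predicted drug-placebo difference
se = sigma * np.sqrt(2 / n_per_arm)
est = rng.normal(diff, se)            # simulated future trial estimate

print("predicted probability of a significant trial:", np.mean(est / se > 1.96))
```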

4.
The use of Bayesian approaches in the regulated world of pharmaceutical drug development has not been without its difficulties or its critics. The recent Food and Drug Administration regulatory guidance on the use of Bayesian approaches in device submissions has mandated an investigation into the operating characteristics of Bayesian approaches and has suggested how to make adjustments so that the proposed approaches are, in a sense, calibrated. In this paper, I present examples of frequentist calibration of Bayesian procedures and argue that we need not aim for perfect calibration but should be allowed to use procedures that are well calibrated, a position supported by the guidance. Copyright © 2016 John Wiley & Sons, Ltd.
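
A sketch of what such frequentist calibration can look like: simulate the type I error of a Bayesian decision rule (here, "declare success if the posterior probability of a positive effect exceeds a cut-off", with an optimistic prior of my own choosing) and then raise the cut-off until the error rate matches a nominal level. All numbers are hypothetical:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# Hypothetical setup: normal data, known sigma, optimistic N(0.3, 0.2^2) prior.
n, sigma = 50, 1.0
prior_mean, prior_sd = 0.3, 0.2
se = sigma / np.sqrt(n)

def post_prob_positive(xbar):
    w = 1 / (1 / prior_sd**2 + 1 / se**2)               # posterior variance
    m = w * (prior_mean / prior_sd**2 + xbar / se**2)   # posterior mean
    return 1 - norm.cdf(0, loc=m, scale=np.sqrt(w))

# Operating characteristic under the null (true difference = 0):
xbar_null = rng.normal(0.0, se, 200_000)
for cut in (0.95, 0.975, 0.99):
    type1 = np.mean(post_prob_positive(xbar_null) > cut)
    print(f"decision cut-off {cut}: simulated type I error = {type1:.4f}")
# Calibration: raise the cut-off until the type I error matches, say, 2.5%.
```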

5.
The aim of a phase II clinical trial is to decide whether or not to develop an experimental therapy further through phase III clinical evaluation. In this paper, we present a Bayesian approach to the phase II trial, although we assume that subsequent phase III clinical trials will have standard frequentist analyses. The decision whether to conduct the phase III trial is based on the posterior predictive probability of a significant result being obtained. This fusion of Bayesian and frequentist techniques accepts the current paradigm for expressing objective evidence of therapeutic value, while optimizing the form of the phase II investigation that leads to it. By using prior information, we can assess whether a phase II study is needed at all, and how much or what sort of evidence is required. The proposed approach is illustrated by the design of a phase II clinical trial of a multi-drug resistance modulator used in combination with standard chemotherapy in the treatment of metastatic breast cancer. Copyright © 2005 John Wiley & Sons, Ltd.
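
A minimal sketch of the posterior predictive probability of a significant phase III result, using binary phase II data with flat Beta priors; the counts and the planned phase III size are hypothetical, and the paper's optimization of the phase II design itself is not reproduced:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)

# Hypothetical phase II counts (responders / treated), flat Beta(1, 1) priors.
x_t, n_t, x_c, n_c = 14, 30, 8, 30
n3 = 200                                   # planned phase III size per arm
n_sim = 50_000

p_t = rng.beta(1 + x_t, 1 + n_t - x_t, n_sim)   # posterior draws, treatment
p_c = rng.beta(1 + x_c, 1 + n_c - x_c, n_sim)   # posterior draws, control

y_t = rng.binomial(n3, p_t) / n3           # simulated phase III response rates
y_c = rng.binomial(n3, p_c) / n3
pbar = (y_t + y_c) / 2
se = np.sqrt(pbar * (1 - pbar) * 2 / n3)   # pooled two-proportion z-test
z = (y_t - y_c) / se

print("predictive P(significant phase III):", np.mean(z > norm.ppf(0.975)))
```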

6.
For the case of a one-sample experiment with known variance σ² = 1, it has been shown that the sample size (SS) may be increased at interim analysis by any arbitrary amount provided that: (1) the conditional power (CP) at interim is ≥ 50%, and (2) there can be no decision to decrease the SS (i.e., to stop the trial early). In this paper we verify this result for the case of a two-sample experiment with proportional SS in the treatment groups and an arbitrary common variance. Numerous authors have presented the formula for the CP at interim for a two-sample test with equal SS in the treatment groups and an arbitrary common variance, for both one- and two-sided hypothesis tests. Here we derive the corresponding formula for unequal but proportional SS in the treatment groups, for both one-sided superiority and two-sided hypothesis tests. Finally, we present an SAS macro for this calculation and provide a worked-out hypothetical example. In the discussion we note that this type of trial design trades the ability to stop early (for lack of efficacy) for the elimination of the Type I error penalty. The loss of early stopping requires that such a design employ a data monitoring committee, blinding of the sponsor to the interim calculations, and pre-planning of how much and under what conditions to increase the SS, all formally written into an interim analysis plan before the start of the study. Copyright © 2009 John Wiley & Sons, Ltd.
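
The standard B-value form of this conditional power calculation, extended to a treatment:control allocation ratio, is sketched below in Python rather than SAS; it is my own rendering of the usual formula, not the authors' macro:

```python
import numpy as np
from scipy.stats import norm

def conditional_power(z_interim, info_frac, delta, sigma, n_control, ratio=1.0,
                      alpha=0.025):
    """One-sided conditional power for a two-sample z-test with allocation
    n_treatment = ratio * n_control at the final analysis. Uses the B-value
    identity B(t) = Z(t) * sqrt(t), where B(1) | B(t) is normal with mean
    B(t) + theta * (1 - t) and variance 1 - t, and the drift is
    theta = (delta / sigma) * sqrt(ratio * n_control / (1 + ratio))."""
    theta = (delta / sigma) * np.sqrt(ratio * n_control / (1 + ratio))
    t = info_frac
    b_t = z_interim * np.sqrt(t)
    return norm.cdf((b_t + theta * (1 - t) - norm.ppf(1 - alpha)) / np.sqrt(1 - t))

# Hypothetical interim: halfway through, z = 1.1, 2:1 allocation, CP evaluated
# at the originally assumed effect size.
print(conditional_power(z_interim=1.1, info_frac=0.5, delta=0.3,
                        sigma=1.0, n_control=120, ratio=2.0))
```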

7.
The option to stop a project is fundamental in drug development. The majority of drugs do not reach the market, and many marketed drugs do not repay their development costs. It is therefore crucial to optimize the value of the option to stop. We formulate two examples of statistical models: one based on success/failure in a series of trials, the other assuming that the commercial value evolves as a stochastic process as more information becomes available. These models are used to study a number of issues: the number and timing of decision points; the value of information; the speed of development; and the order of trials. The results quantify the value of options. They show that early information that can change key decisions is most valuable; that is, we should nip bad projects in the bud. Modelling is also useful for analysing more complex decisions, for example, weighing the value of decision points against the cost of information or the speed of development. Copyright © 2003 John Wiley & Sons, Ltd.
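
The value of the option to stop can be made concrete with a two-stage decision tree; the probabilities and cash values below are purely illustrative:

```python
# A minimal two-stage go/no-go valuation sketch (all numbers hypothetical):
# the option to stop after phase II is worth the difference between deciding
# with and without the interim information.
p2, p3 = 0.5, 0.6          # success probabilities of phase II and phase III
c2, c3 = 20, 100           # trial costs (in $M)
market = 600               # value if the drug reaches the market

# With the option: run phase III only if phase II succeeds.
value_with_option = -c2 + p2 * (-c3 + p3 * market)
# Without the option: commit to both trials up front.
value_without = -c2 - c3 + p2 * p3 * market

print("expected value with stop option:   ", value_with_option)
print("expected value without stop option:", value_without)
print("value of the option to stop:       ", value_with_option - value_without)
```

Here the option value equals the phase III cost avoided on failed projects, (1 - p2) * c3, which is exactly the "nip bad projects in the bud" effect.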

8.
With the advancement of technologies such as genomic sequencing, predictive biomarkers have become a useful tool for the development of personalized medicine. Predictive biomarkers can be used to select the subsets of patients most likely to benefit from a treatment. A number of approaches for subgroup identification have been proposed in recent years; however, although overviews of subgroup identification methods are available, systematic comparisons of their performance in simulation studies are rare. Using a structured approach, we compared five such methods in a simulation study: interaction trees (IT), model-based recursive partitioning, subgroup identification based on differential effect, the simultaneous threshold interaction modeling algorithm (STIMA), and adaptive refinement by directed peeling. In order to identify a target population for subsequent trials, a selection among the identified subgroups is needed. We therefore propose a subgroup criterion leading to a target subgroup consisting of the identified subgroups with an estimated treatment difference no less than a pre-specified threshold. In our simulation study, we evaluated these methods using measures for binary classification, such as sensitivity and specificity. In settings with large effects or huge sample sizes, most methods perform well. For more realistic settings in drug development involving data from a single trial only, however, none of the methods seems suitable for selecting a target population. Using the subgroup criterion as an alternative to the proposed pruning procedures, STIMA and IT can improve their performance in some settings. The methods and the subgroup criterion are illustrated by an application in amyotrophic lateral sclerosis.
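
A sketch of the proposed subgroup criterion in isolation (the identification step is abstracted away): keep every identified subgroup whose estimated treatment difference reaches a threshold, take the union as the target population, and score it against a simulated ground truth. Subgroups, estimates, and patient sets are all invented:

```python
# Hypothetical output of a subgroup identification method:
# subgroup name -> (estimated treatment difference, member patient IDs).
threshold = 0.3
subgroups = {
    "biomarker_high": (0.45, {0, 1, 2, 3}),
    "age_lt_65":      (0.10, {2, 3, 4, 5}),
    "prior_therapy":  (0.35, {5, 6}),
}

# Subgroup criterion: union of subgroups with difference >= threshold.
target = set().union(*(m for d, m in subgroups.values() if d >= threshold))

# Binary-classification scoring against a simulated ground truth.
true_benefiters = {0, 1, 2, 3, 6}     # known only in a simulation study
all_patients = set(range(8))
sensitivity = len(target & true_benefiters) / len(true_benefiters)
non_benefiters = all_patients - true_benefiters
specificity = len(non_benefiters - target) / len(non_benefiters)
print(f"target = {sorted(target)}, "
      f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
```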

9.
For a group-sequential trial with two pre-planned analyses, stopping boundaries can be calculated using a simple SAS® programme on the basis of the asymptotic bivariate normality of the interim and final test statistics. Given the simplicity and transparency of this approach, researchers can apply their own bespoke spending function as long as the rate of alpha spend is pre-specified. One such application is an oncology trial where progression-free survival (PFS) is the primary endpoint and overall survival (OS) is also assessed, both at the time of the PFS analysis and later, following further patient follow-up. In many circumstances it is likely, if PFS is significantly extended, that the protocol will be amended to allow patients in the control arm to start receiving the experimental regimen; such an eventuality is likely to result in the diminution of any effect on OS. It is shown that spending a greater proportion of alpha at the first analysis of OS, using either Pocock or bespoke boundaries, will maintain, and in some cases increase, power for a fixed number of events. Copyright © 2009 John Wiley & Sons, Ltd.
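
A sketch of the underlying calculation, assuming the usual corr(Z1, Z2) = sqrt(t) between the interim and final statistics: fix a bespoke first-stage spend, then solve for the final boundary that preserves the overall one-sided alpha (Python standing in for the SAS programme):

```python
import numpy as np
from scipy.stats import multivariate_normal, norm
from scipy.optimize import brentq

def final_boundary(c1, t, alpha=0.025):
    """Given a first-stage boundary c1 at information fraction t, solve for
    the final boundary c2 that preserves the overall one-sided level alpha,
    using corr(Z1, Z2) = sqrt(t)."""
    joint = multivariate_normal(mean=[0.0, 0.0],
                                cov=[[1.0, np.sqrt(t)], [np.sqrt(t), 1.0]])
    spent = 1 - norm.cdf(c1)                      # alpha spent at stage 1
    def excess(c2):
        # P(Z1 < c1, Z2 >= c2) = P(Z1 < c1) - P(Z1 < c1, Z2 < c2)
        stage2 = norm.cdf(c1) - joint.cdf(np.array([c1, c2]))
        return spent + stage2 - alpha
    return brentq(excess, 0.5, 5.0)

# Bespoke spend: 60% of a one-sided 2.5% alpha at the first OS analysis,
# conducted at half the final number of events (t = 0.5).
c1 = norm.ppf(1 - 0.6 * 0.025)
print("c1 = %.3f, c2 = %.3f" % (c1, final_boundary(c1, t=0.5)))
```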

10.
11.
In clinical trials with survival data, investigators may wish to re-estimate the sample size based on the observed effect size while the trial is ongoing. Besides the inflation of the type I error rate due to sample size re-estimation, the method for calculating the sample size at an interim analysis should be carefully considered, because the data in each stage are mutually dependent in trials with survival data. Although the interim hazard estimate is commonly used to re-estimate the sample size, that estimate can by chance be considerably higher or lower than the hypothesized hazard. We propose an interim hazard ratio estimate that can be used to re-estimate the sample size under those circumstances. The proposed method is demonstrated through a simulation study and illustrated with an actual clinical trial. The effect of the shape parameter of the Weibull survival distribution on the sample size re-estimation is also presented.
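
For context, the event count that a re-estimation would produce can be computed from Schoenfeld's approximation; the design and interim hazard ratios below are hypothetical, and the paper's stabilised interim estimator itself is not reproduced:

```python
import numpy as np
from scipy.stats import norm

def required_events(hr, alpha=0.05, power=0.8, ratio=1.0):
    """Schoenfeld's approximation for the number of events needed by a
    two-sided log-rank test, with allocation ratio `ratio` (treatment:control).
    For 1:1 allocation this reduces to 4 * (z_a + z_b)^2 / log(hr)^2."""
    za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)
    return (za + zb) ** 2 * (1 + ratio) ** 2 / (ratio * np.log(hr) ** 2)

# Design assumed HR = 0.70; suppose the interim estimate comes out at 0.78.
print(f"events at design HR 0.70 : {required_events(0.70):.0f}")
print(f"events at interim HR 0.78: {required_events(0.78):.0f}")
```

The steep growth of the required event count as the hazard ratio drifts towards 1 is precisely why a noisy interim estimate can derail the re-estimation.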

12.
In phase II single-arm studies, the response rate of the experimental treatment is typically compared with a fixed target value that should ideally represent the true response rate of the standard-of-care therapy. Generally, this target value is estimated from previous data, but the inherent variability of the historical response rate is not taken into account. In this paper, we present a Bayesian procedure for constructing single-arm two-stage designs that allows uncertainty in the response rate of the standard treatment to be incorporated. In both stages, the sample size determination criterion is based on the concepts of conditional and predictive Bayesian power functions. Different kinds of prior distributions, which play different roles in the designs, are introduced, and some guidelines for their elicitation are described. Finally, some numerical results on the performance of the designs are provided and a real data example is illustrated. Copyright © 2016 John Wiley & Sons, Ltd.
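
A stripped-down version of the predictive ingredient of such designs: with a beta prior, the stage-2 response count is beta-binomial, so the predictive probability of final success is available in closed form. The design constants and the single fixed success threshold are my simplifications; the paper's criteria also involve priors on the standard-of-care rate:

```python
from scipy.stats import betabinom

def predictive_power(x1, n1, n2, r_final, a=1, b=1):
    """Predictive probability that the trial ends in success (total responses
    > r_final) after observing x1/n1 in stage 1, with a Beta(a, b) prior on
    the response rate: stage-2 responses are beta-binomial."""
    a_post, b_post = a + x1, b + n1 - x1
    needed = r_final - x1 + 1                  # stage-2 responses still needed
    if needed <= 0:
        return 1.0
    return float(betabinom.sf(needed - 1, n2, a_post, b_post))

# Hypothetical design: n1 = 20, n2 = 25, success if > 14 responses overall.
print(predictive_power(x1=7, n1=20, n2=25, r_final=14))
```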

13.
Conditional power calculations are frequently used to guide the decision whether to stop a trial for futility or to modify the planned sample size. They ignore the information in short-term endpoints and baseline covariates, and thereby do not make fully efficient use of the data. We therefore propose an interim decision procedure, based on the conditional power approach, that exploits the information contained in baseline covariates and short-term endpoints. We realize this by treating the estimation of the treatment effect at the interim analysis as a missing data problem, which we address by employing prediction models for the long-term endpoint that incorporate baseline covariates and multiple short-term endpoints. We show that the proposed procedure leads to an efficiency gain and a reduced sample size without compromising the Type I error rate, even when the adopted prediction models are misspecified. In particular, implementing our proposal in the conditional power approach enables earlier decisions than standard approaches whilst controlling the probability of an incorrect decision. This time gain results in a lower expected number of recruited patients in case of stopping for futility, so that fewer patients receive the futile regimen. We explain how these methods can be used in adaptive designs with unblinded sample size re-assessment based on the inverse normal P-value combination method to control the Type I error. We support the proposal with Monte Carlo simulations based on data from a real clinical trial.
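
A sketch of the prediction idea only, with invented data: fit an arm-specific linear prediction model for the long-term endpoint on patients who already have it, predict it for those who do not yet, and estimate the interim treatment effect from the completed data (which would then feed a standard conditional power formula, such as the one sketched under item 6):

```python
import numpy as np

rng = np.random.default_rng(21)

# Invented interim data: 200 randomized, long-term endpoint observed for 120.
n, n_obs = 200, 120
arm = rng.integers(0, 2, n)
x = rng.normal(0, 1, n)                          # baseline covariate
short = 0.4 * arm + 0.5 * x + rng.normal(0, 1, n)
long_ = 0.5 * arm + 0.6 * short + 0.3 * x + rng.normal(0, 1, n)
observed = np.zeros(n, bool)
observed[:n_obs] = True

filled = long_.copy()
for a in (0, 1):                                 # arm-specific prediction model
    idx = observed & (arm == a)
    X = np.column_stack([np.ones(idx.sum()), short[idx], x[idx]])
    beta, *_ = np.linalg.lstsq(X, long_[idx], rcond=None)
    miss = ~observed & (arm == a)
    filled[miss] = np.column_stack(
        [np.ones(miss.sum()), short[miss], x[miss]]) @ beta

est = filled[arm == 1].mean() - filled[arm == 0].mean()
print("interim treatment-effect estimate from completed data:", round(est, 3))
```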

14.
For clinical trials with time-to-event endpoints, predicting the accrual of the events of interest with precision is critical in determining the timing of interim and final analyses. For example, overall survival (OS) is often chosen as the primary efficacy endpoint in oncology studies, with planned interim and final analyses at a pre-specified number of deaths. Often, correlated surrogate information, such as time-to-progression (TTP) and progression-free survival (PFS), is also collected as secondary efficacy endpoints, and it is appealing to borrow strength from this surrogate information to improve the precision of the analysis-time prediction. Currently available methods for predicting analysis timings do not utilize surrogate information. In this article, using OS and TTP as an example, a general parametric model for OS and TTP is proposed, under the assumption that disease progression can change the course of overall survival. Progression-free survival, related to both OS and TTP, is handled separately, as it can be derived from OS and TTP. The authors develop a prediction procedure using a Bayesian method and provide detailed implementation strategies under certain assumptions. Simulations are performed to evaluate the performance of the proposed method, and an application to a real study is provided. Copyright © 2015 John Wiley & Sons, Ltd.
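
A sketch of the prediction step alone, assuming a plain exponential OS model with a fixed hazard (a fully Bayesian version, and the paper's joint OS-TTP model, would also draw the parameters from their posterior); the trial state below is invented:

```python
import numpy as np

rng = np.random.default_rng(11)

# Predict the calendar time at which the target death count is reached,
# assuming exponential OS. All inputs are hypothetical.
hazard = np.log(2) / 18.0          # assumed median OS of 18 months
target_deaths, deaths_so_far, at_risk = 300, 210, 250
needed = target_deaths - deaths_so_far

n_sim = 10_000
# Memoryless property: each at-risk patient's residual survival is again
# exponential, regardless of follow-up accrued so far.
residual = rng.exponential(1 / hazard, size=(n_sim, at_risk))
months_to_final = np.sort(residual, axis=1)[:, needed - 1]

print("months to final analysis: median %.1f, 90%% interval (%.1f, %.1f)" %
      (np.median(months_to_final),
       np.quantile(months_to_final, 0.05),
       np.quantile(months_to_final, 0.95)))
```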

15.
Clinical trials are often designed to compare continuous non-normal outcomes. The conventional statistical method for such a comparison is the non-parametric Mann–Whitney test, which provides a P-value for testing the hypothesis that the distributions of the two treatment groups are identical, but does not provide a simple and straightforward estimate of treatment effect. For that purpose, Hodges and Lehmann proposed estimating the shift parameter between two populations and its confidence interval (CI). However, the shift parameter does not have a straightforward interpretation, and its CI may contain zero in some cases where the Mann–Whitney test produces a significant result. To overcome these problems, we introduce the win ratio for analysing such data. Patients in the new-treatment and control groups are formed into all possible pairs. For each pair, the new-treatment patient is labelled a 'winner' or a 'loser' if it is known who had the more favourable outcome. The win ratio is the total number of winners divided by the total number of losers, and a 95% CI for the win ratio can be obtained using the bootstrap method. Statistical properties of the win ratio statistic are investigated using two real trial data sets and six simulation studies. Results show that the win ratio method has about the same power as the Mann–Whitney method. We recommend the win ratio method for estimating the treatment effect (and its CI) and the Mann–Whitney method for calculating the P-value when comparing continuous non-normal outcomes with a small number of tied pairs. Copyright © 2016 John Wiley & Sons, Ltd.
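
The win ratio computation itself is a few lines; the sketch below uses invented lognormal outcomes and a percentile bootstrap for the 95% CI, resampling within each arm:

```python
import numpy as np

rng = np.random.default_rng(5)

def win_ratio(new, ctrl):
    """All-pairs win ratio for a continuous outcome where larger is better:
    wins / losses over every (new, control) pair; ties count for neither."""
    diff = new[:, None] - ctrl[None, :]
    return np.sum(diff > 0) / np.sum(diff < 0)

# Hypothetical skewed outcomes for illustration.
new = rng.lognormal(0.4, 1.0, 80)
ctrl = rng.lognormal(0.0, 1.0, 80)
print("win ratio:", round(win_ratio(new, ctrl), 2))

# Percentile bootstrap 95% CI, resampling with replacement within each arm.
boot = [win_ratio(rng.choice(new, new.size), rng.choice(ctrl, ctrl.size))
        for _ in range(2000)]
print("95% CI:", np.round(np.quantile(boot, [0.025, 0.975]), 2))
```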

16.
Non-inferiority trials aim to demonstrate that an experimental therapy is not unacceptably worse than an active reference therapy already in use. When applicable, a three-arm non-inferiority trial, including an experimental therapy, an active reference therapy, and a placebo, is often recommended to assess the assay sensitivity and internal validity of a trial. In this paper, we share some practical considerations based on our experience from a phase III three-arm non-inferiority trial. First, we discuss the determination of the total sample size and its optimal allocation based on the overall power of the non-inferiority testing procedure, and provide ready-to-use R code for implementation. Second, we consider the non-inferiority goal of 'capturing all possibilities' and show that it naturally corresponds to a simple two-step testing procedure. Finally, using this two-step non-inferiority testing procedure as an example, we extensively compare commonly used frequentist p-value methods with the Bayesian posterior probability approach. Copyright © 2016 John Wiley & Sons, Ltd.
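
A simulation sketch of the overall power of a two-step procedure of this general kind (experimental superior to placebo, then experimental non-inferior to the reference), for one candidate allocation; all means, the margin, and the allocation are hypothetical, and this is not the paper's R code:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(9)

# Hypothetical design: normal endpoint with known SD, non-inferiority margin
# m, candidate allocation (E, R, P). Both steps tested one-sided at 2.5%.
mu_e, mu_r, mu_p, sd, m = 1.0, 1.0, 0.0, 2.0, 0.6
n_e, n_r, n_p = 200, 200, 100
z_crit = norm.ppf(0.975)
n_sim = 100_000

xe = rng.normal(mu_e, sd / np.sqrt(n_e), n_sim)   # simulated arm means
xr = rng.normal(mu_r, sd / np.sqrt(n_r), n_sim)
xp = rng.normal(mu_p, sd / np.sqrt(n_p), n_sim)

se_ep = sd * np.sqrt(1 / n_e + 1 / n_p)
se_er = sd * np.sqrt(1 / n_e + 1 / n_r)
step1 = (xe - xp) / se_ep > z_crit                # E superior to placebo
step2 = (xe - xr + m) / se_er > z_crit            # E non-inferior to R

print("overall power of the two-step procedure:", np.mean(step1 & step2))
```

Re-running this over a grid of (n_e, n_r, n_p) at a fixed total is one simple way to search for the allocation that maximizes overall power.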

17.
To gain regulatory approval, a new medicine must demonstrate that its benefits outweigh any potential risks, i.e., that the benefit-risk balance is favourable towards the new medicine. For transparency and clarity of the decision, a structured and consistent approach to benefit-risk assessment that quantifies uncertainties and accounts for underlying dependencies is desirable. This paper proposes two approaches to benefit-risk evaluation, both based on the idea of jointly modelling mixed outcomes that are potentially dependent at the subject level. Using Bayesian inference, the two approaches offer interpretability and efficiency to enhance qualitative frameworks. Simulation studies show that accounting for correlation leads to a more accurate assessment of the strength of evidence supporting benefit-risk profiles of interest. Several graphical approaches that can be used to communicate the benefit-risk balance to project teams are proposed. Finally, the two approaches are illustrated in a case study using real clinical trial data.
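
A toy version of why subject-level dependence matters: generate a continuous benefit and a binary risk from a Gaussian copula and compare the joint probability of "benefit without harm" with the product obtained under an independence assumption. Parameters are invented and the paper's actual joint models are richer:

```python
import numpy as np

rng = np.random.default_rng(17)

# Subject-level dependence between a continuous benefit and a binary risk
# via a latent bivariate normal (Gaussian copula); all values hypothetical.
n_sim, rho = 200_000, 0.5
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n_sim)

benefit = 0.6 + z[:, 0]            # continuous efficacy outcome
risk = z[:, 1] > 1.282             # roughly 10% adverse-event probability

joint = np.mean((benefit > 0) & ~risk)
independent = np.mean(benefit > 0) * np.mean(~risk)
print("P(benefit and no AE), joint model:   %.3f" % joint)
print("same quantity assuming independence: %.3f" % independent)
```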

18.
There has recently been increasing demand for better designs to conduct first‐into‐man dose‐escalation studies more efficiently, more accurately and more quickly. The authors look into the Bayesian decision‐theoretic approach and use simulation as a tool to investigate the impact of compromises with conventional practice that might make the procedures more acceptable for implementation. Copyright © 2005 John Wiley & Sons, Ltd.

19.
In recent years, significant progress has been made in developing statistically rigorous methods for clinically interpretable sensitivity analyses of assumptions about the missingness mechanism in clinical trials with continuous and (to a lesser extent) binary or categorical endpoints. Studies with time-to-event outcomes have received much less attention, yet such studies can be similarly challenged with respect to the robustness and integrity of primary-analysis conclusions when a substantial number of subjects withdraw from treatment prematurely, prior to experiencing an event of interest. We discuss how the methods widely used for primary analyses of time-to-event outcomes can be extended in a clinically meaningful and interpretable way to stress-test the assumption of ignorable censoring. We focus on a 'tipping point' approach, the objective of which is to postulate sensitivity parameters with a clear clinical interpretation and to identify a setting of these parameters unfavorable enough towards the experimental treatment to nullify a conclusion that was favorable to that treatment. Robustness of the primary analysis results can then be assessed based on the clinical plausibility of the scenario represented by the tipping point. We study several approaches for conducting such analyses based on multiple imputation with parametric, semi-parametric, and non-parametric imputation models, and evaluate their operating characteristics via simulation. We argue that these methods are valuable tools for sensitivity analyses of time-to-event data and conclude that the method based on a piecewise exponential imputation model of survival has some advantages over the other methods studied here. Copyright © 2016 John Wiley & Sons, Ltd.
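
A bare-bones tipping-point scan in this spirit, with several simplifications flagged: invented data, a plain exponential imputation model instead of the paper's piecewise exponential one, a single stochastic imputation per delta instead of full multiple imputation, and a parametric hazard-ratio test instead of a log-rank test:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(13)

def exp_hr_pvalue(t_trt, e_trt, t_ctrl, e_ctrl):
    """Two-sided p-value for the log hazard ratio under exponential survival:
    log(events / person-time) has approximate variance 1 / events."""
    lam_t = e_trt.sum() / t_trt.sum()
    lam_c = e_ctrl.sum() / t_ctrl.sum()
    z = np.log(lam_t / lam_c) / np.sqrt(1 / e_trt.sum() + 1 / e_ctrl.sum())
    return 2 * norm.sf(abs(z))

# Invented trial: true HR about 0.78, non-informative withdrawal in the
# treated arm only (a simplification), full follow-up in the control arm.
n = 250
t_ctrl, e_ctrl = rng.exponential(10.0, n), np.ones(n)
event = rng.exponential(12.8, n)
withdraw = rng.exponential(30.0, n)
t_trt = np.minimum(event, withdraw)
e_trt = (event <= withdraw).astype(float)
cens = e_trt == 0

# Tipping-point scan: impute residual event times for withdrawn subjects from
# an exponential whose hazard is delta times the treated-arm estimate
# (delta = 1 roughly corresponds to ignorable censoring).
base_haz = e_trt.sum() / t_trt.sum()
for delta in (1.0, 1.25, 1.5, 2.0, 3.0):
    t_imp = t_trt.copy()
    t_imp[cens] += rng.exponential(1 / (delta * base_haz), cens.sum())
    p = exp_hr_pvalue(t_imp, np.ones(n), t_ctrl, e_ctrl)
    print(f"delta = {delta:>4}: p = {p:.4f}")
```

The delta at which the p-value first crosses the significance level is the tipping point; robustness is then judged by whether that delta is clinically plausible.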

20.
Clinical noninferiority trials with at least three groups have received much attention recently, perhaps because regulatory agencies often require that a placebo group be evaluated along with a new experimental drug and an active control. The authors discuss likelihood ratio tests for binary endpoints and various noninferiority hypotheses. They find that, depending on the particular hypothesis, the test reduces asymptotically either to the intersection-union test or to a test that asymptotically follows a mixture of generalized chi-squared distributions. They investigate the performance of the asymptotic test and provide an exact modification, showing that this test considerably outperforms multiple-testing methods such as the Bonferroni adjustment with respect to power. They illustrate their methods with a cancer study comparing antiemetic agents, and discuss the extension of the results to other settings, such as Gaussian endpoints.
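
For one common hypothesis formulation the asymptotic test is the intersection-union test; below is a minimal sketch of that IUT for invented binary counts (each component is tested at the full level alpha, which is what makes an IUT valid without multiplicity adjustment):

```python
import numpy as np
from scipy.stats import norm

def prop_z(x1, n1, x2, n2, margin=0.0):
    """Wald z-statistic for a difference in proportions, shifted by `margin`
    (margin > 0 corresponds to a non-inferiority comparison)."""
    p1, p2 = x1 / n1, x2 / n2
    se = np.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return (p1 - p2 + margin) / se

# Hypothetical counts: experimental (E), reference (R), placebo (P).
x_e, n_e, x_r, n_r, x_p, n_p = 130, 200, 128, 200, 90, 200
z_sup = prop_z(x_e, n_e, x_p, n_p)               # E superior to placebo
z_ni = prop_z(x_e, n_e, x_r, n_r, margin=0.10)   # E non-inferior to R
crit = norm.ppf(0.975)

# IUT: reject the composite null only if every component test rejects.
print(f"z_sup = {z_sup:.2f}, z_ni = {z_ni:.2f}, "
      f"success = {min(z_sup, z_ni) > crit}")
```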
