Similar articles
20 similar articles found.
1.
The aim of a phase II clinical trial is to decide whether or not to develop an experimental therapy further through phase III clinical evaluation. In this paper, we present a Bayesian approach to the phase II trial, although we assume that subsequent phase III clinical trials will have standard frequentist analyses. The decision whether to conduct the phase III trial is based on the posterior predictive probability of a significant result being obtained. This fusion of Bayesian and frequentist techniques accepts the current paradigm for expressing objective evidence of therapeutic value, while optimizing the form of the phase II investigation that leads to it. By using prior information, we can assess whether a phase II study is needed at all, and how much or what sort of evidence is required. The proposed approach is illustrated by the design of a phase II clinical trial of a multi‐drug resistance modulator used in combination with standard chemotherapy in the treatment of metastatic breast cancer. Copyright © 2005 John Wiley & Sons, Ltd.
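Under normal approximations, a predictive probability of this kind has a closed form. The sketch below is an illustration, not the paper's actual model: it assumes a posterior N(m, s²) for the treatment effect after phase II and a one-sided two-arm z-test with known SD in phase III.

```python
from math import sqrt
from statistics import NormalDist

def predictive_prob_success(m, s, sigma, n_per_arm, alpha=0.025):
    """Posterior predictive probability that a future two-arm phase III
    trial (n_per_arm per group, known SD sigma, one-sided level alpha)
    yields a significant frequentist z-test, when the current posterior
    for the treatment effect is N(m, s**2)."""
    se3 = sigma * sqrt(2.0 / n_per_arm)       # phase III standard error
    z_crit = NormalDist().inv_cdf(1 - alpha)  # frequentist cut-off
    # Marginally, the phase III estimate is N(m, s^2 + se3^2)
    pred_sd = sqrt(s * s + se3 * se3)
    return 1 - NormalDist(m, pred_sd).cdf(z_crit * se3)
```

With a point-mass posterior at zero effect, the predictive probability collapses to the type I error rate, which is a quick sanity check on the formula.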

2.
This paper illustrates an approach to setting the decision framework for a study in early clinical drug development. It shows how the criteria for a go and a stop decision are calculated based on pre‐specified target and lower reference values. The framework can lead to a three‐outcome approach by including a consider zone; this could enable smaller studies to be performed in early development, with other information either external to or within the study used to reach a go or stop decision. In this way, Phase I/II trials can be geared towards providing actionable decision‐making rather than the traditional focus on statistical significance. The example provided illustrates how the decision criteria were calculated for a Phase II study, including an interim analysis, and how the operating characteristics were assessed to ensure the decision criteria were robust. Copyright © 2016 John Wiley & Sons, Ltd.
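A minimal sketch of a three-outcome rule of this kind, assuming a normal approximation for the estimated treatment effect; the confidence cut-offs and the exact form of the published criteria are assumptions here, not the paper's values.

```python
from statistics import NormalDist

def decision(estimate, se, tv, lrv, conf_go=0.80, conf_stop=0.90):
    """Three-outcome go/consider/stop rule built from a target value (TV)
    and lower reference value (LRV); cut-offs are illustrative.
    go:   high confidence the true effect exceeds the LRV
    stop: high confidence the true effect falls short of the TV
    """
    z = NormalDist()
    p_gt_lrv = 1 - z.cdf((lrv - estimate) / se)  # P(effect > LRV), normal approx.
    p_lt_tv = z.cdf((tv - estimate) / se)        # P(effect < TV)
    if p_gt_lrv >= conf_go:
        return "go"
    if p_lt_tv >= conf_stop:
        return "stop"
    return "consider"
```

The "consider" zone is simply what is left when neither confidence condition is met, which is what allows smaller studies to defer the final call to other information.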

3.
There has recently been increasing demand for better designs to conduct first‐into‐man dose‐escalation studies more efficiently, more accurately and more quickly. The authors look into the Bayesian decision‐theoretic approach and use simulation as a tool to investigate the impact of compromises with conventional practice that might make the procedures more acceptable for implementation. Copyright © 2005 John Wiley & Sons, Ltd.

4.
Evidence‐based quantitative methodologies have been proposed to inform decision‐making in drug development, such as metrics to make go/no‐go decisions or predictions of success, defined in terms of the statistical significance of future clinical trials. While these methodologies appropriately address some critical questions on the potential of a drug, they either consider the past evidence without predicting the outcome of the future trials or focus only on efficacy, failing to account for the multifaceted aspects of a successful drug development. As quantitative benefit‐risk assessments could enhance decision‐making, we propose a more comprehensive approach using a composite definition of success based not only on the statistical significance of the treatment effect on the primary endpoint but also on its clinical relevance and on a favorable benefit‐risk balance in the next pivotal studies. For one drug, we can thus study several development strategies before starting the pivotal trials by comparing their predictive probabilities of success. The predictions are based on the available evidence from the previous trials, to which new hypotheses on the future development can be added. The resulting predictive probability of composite success provides a useful summary to support the discussions of the decision‐makers. We present a fictitious but realistic example in major depressive disorder, inspired by a real decision‐making case.

5.
Decision making is a critical component of a new drug development process. Based on results from an early clinical trial, such as a proof of concept trial, the sponsor can decide whether to continue, stop, or defer the development of the drug. To simplify and harmonize the decision‐making process, decision criteria have been proposed in the literature. One of them is to examine the location of a confidence bar relative to the target value and the lower reference value of the treatment effect. In this research, we modify an existing approach by moving part of the “stop” region to the “consider” region, so that the chance of directly terminating the development of a potentially valuable drug can be reduced. As Bayesian analysis has certain flexibilities and can borrow historical information through an inferential prior, we apply Bayesian analysis to the trial planning and decision making. Via a design prior, we can also calculate the probabilities of the various decision outcomes in relation to the sample size and the other parameters, to help the study design. An example and a series of computations are used to illustrate the applications, assess the operating characteristics, and compare the performances of different approaches.
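A sketch of the Bayesian variant for a binomial response rate under a conjugate beta inferential prior; the go/stop probability cut-offs are illustrative assumptions, not the paper's. For integer beta parameters, the beta CDF reduces to a binomial tail sum, so no special functions are needed.

```python
from math import comb

def beta_cdf(x, a, b):
    """Regularized incomplete beta I_x(a, b) for integer a, b >= 1,
    via the identity I_x(a, b) = P(Binomial(a+b-1, x) >= a)."""
    n = a + b - 1
    return sum(comb(n, j) * x**j * (1 - x)**(n - j) for j in range(a, n + 1))

def bayes_decision(x, n, a, b, tv, lrv, go_cut=0.9, stop_cut=0.9):
    """Go/consider/stop from a Beta(a, b) inferential prior and x
    responders out of n; cut-offs are illustrative."""
    post_a, post_b = a + x, b + n - x  # conjugate beta posterior
    p_gt_lrv = 1 - beta_cdf(lrv, post_a, post_b)
    p_gt_tv = 1 - beta_cdf(tv, post_a, post_b)
    if p_gt_lrv >= go_cut:
        return "go"
    if p_gt_tv <= 1 - stop_cut:
        return "stop"
    return "consider"
```

Replacing the inferential Beta(a, b) with a prior fitted to historical data is how borrowing enters; sweeping x over 0..n for a candidate n gives the decision operating characteristics under a design prior.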

6.
7.
In this paper we set out what we consider to be a set of best practices for statisticians in the reporting of pharmaceutical industry‐sponsored clinical trials. We make eight recommendations covering: author responsibilities and recognition; publication timing; conflicts of interest; freedom to act; full author access to data; trial registration and independent review. These recommendations are made in the context of the prominent role played by statisticians in the design, conduct, analysis and reporting of pharmaceutical sponsored trials and the perception of the reporting of these trials in the wider community. Copyright © 2010 John Wiley & Sons, Ltd.

8.
This paper considers optimal parametric designs, i.e. designs represented by probability measures determined by a set of parameters, for nonlinear models and illustrates their use in designs for pharmacokinetic (PK) and pharmacokinetic/pharmacodynamic (PK/PD) trials. For some practical problems, such as designs for modelling a PK/PD relationship, this is often the only feasible type of design, as the design points follow a PK model and cannot be directly controlled. Even for ordinary design problems the parametric designs have some advantages over the traditional designs, which often have too few design points for model checking and may not be robust to model and parameter misspecifications. We first describe methods and algorithms to construct the parametric design for ordinary nonlinear design problems and show that the parametric designs are robust to parameter misspecification and have good power for model discrimination. Then we extend this design method to construct optimal repeated measurement designs for nonlinear mixed models. We also use this parametric design for modelling a PK/PD relationship and propose a simulation-based algorithm. The application of parametric designs is illustrated with a three-parameter open one-compartment PK model for the ordinary design and repeated measurement design, and an Emax model for the pharmacokinetic/pharmacodynamic trial design.

9.
Compound optimal designs are considered where one component of the design criterion is a traditional optimality criterion such as the D‐optimality criterion, and the other component accounts for higher efficacy with low toxicity. With reference to the dose‐finding problem, we suggest the technique to choose weights for the two components that makes the optimization problem simpler than the traditional penalized design. We allow general bivariate responses for efficacy and toxicity. We then extend the procedure in the presence of nondesignable covariates such as age, sex, or other health conditions. A new breast cancer treatment is considered to illustrate the procedures. Copyright © 2013 John Wiley & Sons, Ltd.

10.
This article is concerned with the simulation of one‐day cricket matches. Given that only a finite number of outcomes can occur on each ball that is bowled, a discrete generator on a finite set is developed where the outcome probabilities are estimated from historical data involving one‐day international cricket matches. The probabilities depend on the batsman, the bowler, the number of wickets lost, the number of balls bowled and the innings. The proposed simulator appears to do a reasonable job at producing realistic results. The simulator allows investigators to address complex questions involving one‐day cricket matches. The Canadian Journal of Statistics © 2009 Statistical Society of Canada
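The per-ball discrete generator can be sketched as follows. The outcome probabilities below are invented placeholders; in the paper they are estimated from historical ODI data and depend on the batsman, the bowler, wickets lost, balls bowled and the innings.

```python
import random

# Illustrative per-ball outcomes: runs scored, or a wicket ("W").
OUTCOMES = [0, 1, 2, 3, 4, 6, "W"]
PROBS = [0.45, 0.30, 0.08, 0.01, 0.09, 0.03, 0.04]  # placeholder estimates

def simulate_innings(rng, balls=300, wickets=10):
    """Simulate one ODI innings ball by ball with a discrete generator
    on a finite outcome set; stops at 50 overs or all out."""
    runs, lost = 0, 0
    for _ in range(balls):
        out = rng.choices(OUTCOMES, weights=PROBS)[0]
        if out == "W":
            lost += 1
            if lost == wickets:
                break
        else:
            runs += out
    return runs, lost
```

Running many simulated innings and summarizing the totals is how such a simulator is used to answer questions about match outcomes.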

11.
One of the primary purposes of an oncology dose‐finding trial is to identify an optimal dose (OD) that is both tolerable and has an indication of therapeutic benefit for subjects in subsequent clinical trials. In addition, it is quite important to accelerate early stage trials to shorten the entire period of drug development. However, it is often challenging to make adaptive decisions of dose escalation and de‐escalation in a timely manner because of the fast accrual rate, the difference in outcome evaluation periods for efficacy and toxicity, and late‐onset outcomes. To solve these issues, we propose the time‐to‐event Bayesian optimal interval design to accelerate dose‐finding, based on cumulative and pending data on both efficacy and toxicity. The new design, named the “TITE‐BOIN‐ET” design, is a nonparametric, model‐assisted design. It is therefore robust, much simpler, and easier to implement in actual oncology dose‐finding trials than model‐based approaches. These characteristics are quite useful from a practical point of view. A simulation study shows that the TITE‐BOIN‐ET design has advantages over model‐based approaches in both the percentage of correct OD selection and the average number of patients allocated to the OD across a variety of realistic settings. In addition, the TITE‐BOIN‐ET design significantly shortens the trial duration compared with designs without sequential enrollment and therefore has the potential to accelerate early stage dose‐finding trials.
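The flavour of an interval-based, model-assisted decision can be sketched with the standard toxicity-only BOIN boundaries. This is a deliberate simplification, not the TITE-BOIN-ET rule, which additionally uses efficacy and pending late-onset outcomes.

```python
from math import log

def boin_decision(n_tox, n_treated, target=0.30, eps1=0.6, eps2=1.4):
    """Simplified BOIN-style interval rule for toxicity only.
    Escalate if the observed toxicity rate is at or below lambda_e,
    de-escalate if at or above lambda_d, otherwise stay."""
    phi, phi1, phi2 = target, eps1 * target, eps2 * target
    # Standard BOIN optimal boundaries for sub/over-toxic alternatives
    lam_e = log((1 - phi1) / (1 - phi)) / log(phi * (1 - phi1) / (phi1 * (1 - phi)))
    lam_d = log((1 - phi) / (1 - phi2)) / log(phi2 * (1 - phi) / (phi * (1 - phi2)))
    rate = n_tox / n_treated
    if rate <= lam_e:
        return "escalate"
    if rate >= lam_d:
        return "de-escalate"
    return "stay"
```

Because the rule only compares an observed rate with two pre-computed boundaries, it can be tabulated in the protocol in advance, which is what makes interval designs easy to run.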

12.
Modelling of the relationship between concentration (PK) and response (PD) plays an important role in drug development. The modelling becomes complicated when the drug concentration and response measurements are not taken simultaneously and/or hysteresis occurs between the response and the concentration. A model‐based approach fits a joint pharmacokinetic (PK) and concentration–response (PK/PD) model, including an effect compartment if necessary, to concentration and response data. However, this approach relies on the PK data being well described by a common PK model. We propose an algorithm for a semi‐parametric approach to fitting nonlinear mixed PK/PD models including an effect compartment using linear interpolation and extrapolation for concentration data. This approach is independent of the PK model, and the algorithm can easily be implemented using SAS PROC NLMIXED. Practical issues in programming and computing are also discussed. The properties of this approach are examined using simulations. This approach is used to analyse data from a study of the PK/PD relationship between insulin and glucose levels. Copyright © 2005 John Wiley & Sons, Ltd.
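The model-free concentration step can be sketched directly: evaluate the observed concentrations at the PD sampling times by linear interpolation, extending the end segments for extrapolation. This mirrors the interpolation idea only, not the paper's full SAS PROC NLMIXED implementation.

```python
def interp_conc(t, times, concs):
    """Linearly interpolate (and extrapolate at the ends) observed
    concentrations so PK can be evaluated at PD sampling times without
    assuming a parametric PK model.  `times` must be sorted ascending."""
    if t <= times[0]:
        i = 0                  # extrapolate from the first segment
    elif t >= times[-1]:
        i = len(times) - 2     # extrapolate from the last segment
    else:
        i = max(j for j in range(len(times) - 1) if times[j] <= t)
    t0, t1 = times[i], times[i + 1]
    c0, c1 = concs[i], concs[i + 1]
    return c0 + (c1 - c0) * (t - t0) / (t1 - t0)
```

The interpolated concentration then feeds the effect-compartment or direct PD model in place of a fitted PK curve.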

13.
We discuss 3 alternative approaches to sample size calculation: traditional sample size calculation based on power to show a statistically significant effect, sample size calculation based on assurance, and sample size based on a decision‐theoretic approach. These approaches are compared head‐to‐head for clinical trial situations in rare diseases. Specifically, we consider 3 case studies of rare diseases (Lyell disease, adult‐onset Still disease, and cystic fibrosis) with the aim to plan the sample size for an upcoming clinical trial. We outline in detail the reasonable choice of parameters for these approaches for each of the 3 case studies and calculate sample sizes. We stress that the influence of the input parameters needs to be investigated in all approaches and recommend investigating different sample size approaches before deciding finally on the trial size. The sample size is highly sensitive to the choice of the treatment effect parameter in all approaches, and to the parameter for the additional cost of the new treatment in the decision‐theoretic approach; these choices should therefore be discussed extensively.
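The first, power-based approach has the familiar closed form for a two-arm one-sided z-test; a sketch assuming a known common SD (the case studies' actual parameters are not reproduced here).

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(delta, sigma, power=0.80, alpha=0.025):
    """Classical per-arm sample size for a two-arm z-test: power to
    detect an assumed effect delta at one-sided level alpha, common
    known SD sigma."""
    z = NormalDist()
    za, zb = z.inv_cdf(1 - alpha), z.inv_cdf(power)
    return ceil(2 * ((za + zb) * sigma / delta) ** 2)
```

Halving the assumed effect roughly quadruples the required size, which is why the choice of the treatment effect parameter dominates all three approaches.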

14.
Phase II clinical trials designed for evaluating a drug's treatment effect can be either single‐arm or double‐arm. A single‐arm design tests the null hypothesis that the response rate of a new drug is lower than a fixed threshold, whereas a double‐arm scheme takes a more objective comparison of the response rate between the new treatment and the standard of care through randomization. Although the randomized design is the gold standard for efficacy assessment, various situations may arise where a single‐arm pilot study prior to a randomized trial is necessary. To combine the single‐ and double‐arm phases and pool the information together for better decision making, we propose a Single‐To‐double ARm Transition design (START) with switching hypotheses tests, where the first stage compares the new drug's response rate with a minimum required level and imposes a continuation criterion, and the second stage utilizes randomization to determine the treatment's superiority. We develop a software package in R to calibrate the frequentist error rates and perform simulation studies to assess the trial characteristics. Finally, a metastatic pancreatic cancer trial is used for illustrating the decision rules under the proposed START design.
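A stage-1 continuation criterion of the kind described might be sketched as an exact binomial test against the minimum required response rate p0; the published START design's actual boundary and calibration may differ.

```python
from math import comb

def binom_tail(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p), computed exactly."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

def stage1_continue(responders, n, p0, alpha=0.10):
    """Illustrative single-arm stage-1 rule: continue to the randomized
    stage only if the exact binomial test rejects 'response rate <= p0'
    at level alpha.  Alpha here is a hypothetical choice."""
    return binom_tail(responders, n, p0) <= alpha
```

In the combined design, the overall error rates would then be calibrated jointly over both stages rather than fixing alpha per stage as done here.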

15.
The two‐stage design is very useful in clinical trials for evaluating the activity of a specific treatment regimen. When the second stage is allowed to continue, the method used to estimate the response rate based on the results of both stages is critical for the subsequent design. The often‐used sample proportion has an evident upward bias. However, the maximum likelihood estimator or the moment estimator tends to underestimate the response rate. A mean‐square error weighted estimator is considered here; its performance is thoroughly investigated via Simon's optimal and minimax designs and Shuster's design. Compared with the sample proportion, the proposed method has a smaller bias, and compared with the maximum likelihood estimator, the proposed method has a smaller mean‐square error. Copyright © 2010 John Wiley & Sons, Ltd.
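The upward bias of the naive sample proportion, conditional on passing the first stage, is easy to check by simulation. The design parameters below are illustrative, not Simon's published ones, and the mean-square-error-weighted estimator itself is not reproduced.

```python
import random

def simulate_bias(p, n1, r1, n2, reps=20000, seed=7):
    """Monte Carlo check of the bias of the pooled sample proportion in a
    Simon-type two-stage design: estimate p only in trials that continue
    past stage 1 (more than r1 responses among the first n1 patients)."""
    rng = random.Random(seed)
    total, count = 0.0, 0
    for _ in range(reps):
        x1 = sum(rng.random() < p for _ in range(n1))
        if x1 <= r1:
            continue                    # trial stopped at stage 1
        x2 = sum(rng.random() < p for _ in range(n2))
        total += (x1 + x2) / (n1 + n2)  # naive pooled sample proportion
        count += 1
    return total / count                # mean estimate given continuation
```

Conditioning on continuation truncates the low tail of the stage-1 count, which is exactly where the upward bias comes from.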

16.
Simon's two‐stage design is the most commonly applied multi‐stage design in phase IIA clinical trials. It combines the sample sizes at the two stages in order to minimize either the expected or the maximum sample size. When the uncertainty about pre‐trial beliefs on the expected or desired response rate is high, a Bayesian alternative should be considered, since it allows one to deal with the entire distribution of the parameter of interest in a more natural way. In this setting, a crucial issue is how to construct a distribution from the available summaries to use as a clinical prior in a Bayesian design. In this work, we explore the Bayesian counterparts of Simon's two‐stage design based on the predictive version of the single threshold design. This design requires specifying two prior distributions: the analysis prior, which is used to compute the posterior probabilities, and the design prior, which is employed to obtain the prior predictive distribution. While the usual approach is to build beta priors for carrying out a conjugate analysis, we derive both the analysis and the design distributions through linear combinations of B‐splines. The motivating example is the planning of the phase IIA two‐stage trial on an anti‐HER2 DNA vaccine in breast cancer, where initial beliefs formed from elicited experts' opinions and historical data showed a high level of uncertainty. In a sample size determination problem, the impact of different priors is evaluated.

17.
A longitudinal mixture model for classifying patients into responders and non‐responders is established using both likelihood‐based and Bayesian approaches. The model takes into consideration responders in the control group. Therefore, it is especially useful in situations where the placebo response is strong, or in equivalence trials where the drug in development is compared with a standard treatment. Under our model, a treatment shows evidence of being effective if it increases the proportion of responders or increases the response rate among responders in the treated group compared with the control group. Therefore, the model has flexibility to accommodate different situations. The proposed method is illustrated using simulation and a depression clinical trial dataset for the likelihood‐based approach, and the same depression clinical trial dataset for the Bayesian approach. The likelihood‐based and Bayesian approaches generated consistent results for the depression trial data. In both the placebo group and the treated group, patients are classified into two components with distinct response rates. The proportion of responders is shown to be significantly higher in the treated group compared with the control group, suggesting that the treatment, paroxetine, is effective. Copyright © 2014 John Wiley & Sons, Ltd.
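A toy version of the two-component classification idea: EM for a mixture of two normals with a common SD on a single continuous response, rather than the paper's longitudinal likelihood-based or Bayesian model. Everything below is illustrative.

```python
from math import exp, pi, sqrt

def norm_pdf(x, mu, sd):
    return exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * sqrt(2 * pi))

def em_two_normals(data, iters=200):
    """EM for a two-component normal mixture (common SD), labelling the
    higher-mean component as 'responders'.  Returns (weight of responder
    component, non-responder mean, responder mean)."""
    mu1, mu2, sd, w = min(data), max(data), 1.0, 0.5
    for _ in range(iters):
        # E-step: posterior probability each subject is a responder
        r = []
        for x in data:
            a = w * norm_pdf(x, mu2, sd)
            b = (1 - w) * norm_pdf(x, mu1, sd)
            r.append(a / (a + b))
        # M-step: update weight, component means and the common SD
        s = sum(r)
        w = s / len(data)
        mu2 = sum(ri * x for ri, x in zip(r, data)) / s
        mu1 = sum((1 - ri) * x for ri, x in zip(r, data)) / (len(data) - s)
        var = sum(ri * (x - mu2) ** 2 + (1 - ri) * (x - mu1) ** 2
                  for ri, x in zip(r, data)) / len(data)
        sd = sqrt(var)
    return w, mu1, mu2
```

Comparing the fitted responder weight w between treated and control arms is the mixture-model analogue of comparing response proportions.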

18.
Patient heterogeneity may complicate dose‐finding in phase 1 clinical trials if the dose‐toxicity curves differ between subgroups. Conducting separate trials within subgroups may lead to infeasibly small sample sizes in subgroups having low prevalence. Alternatively, it is not obvious how to conduct a single trial while accounting for heterogeneity. To address this problem, we consider a generalization of the continual reassessment method on the basis of a hierarchical Bayesian dose‐toxicity model that borrows strength between subgroups under the assumption that the subgroups are exchangeable. We evaluate a design using this model that includes subgroup‐specific dose selection and safety rules. A simulation study is presented that includes comparison of this method to 3 alternative approaches, based on nonhierarchical models, that make different types of assumptions about within‐subgroup dose‐toxicity curves. The simulations show that the hierarchical model‐based method is recommended in settings where the dose‐toxicity curves are exchangeable between subgroups. We present practical guidelines for application and provide computer programs for trial simulation and conduct.

19.
Clinical phase II trials in oncology are conducted to determine whether the activity of a new anticancer treatment is promising enough to merit further investigation. Two‐stage designs are commonly used for this situation to allow for early termination. Designs proposed in the literature so far have the common drawback that the sample sizes for the two stages have to be specified in the protocol and have to be adhered to strictly during the course of the trial. As a consequence, designs that allow a higher extent of flexibility are desirable. In this article, we propose a new adaptive method that allows an arbitrary modification of the sample size of the second stage using the results of the interim analysis or external information while controlling the type I error rate. If the sample size is not changed during the trial, the proposed design shows very similar characteristics to the optimal two‐stage design proposed by Chang et al. (Biometrics 1987; 43:865–874). However, the new design allows the use of mid‐course information for the planning of the second stage, thus meeting practical requirements when performing clinical phase II trials in oncology. Copyright © 2012 John Wiley & Sons, Ltd.

20.
Conventional clinical trial design involves considerations of power, and sample size is typically chosen to achieve a desired power conditional on a specified treatment effect. In practice, there is considerable uncertainty about what the true underlying treatment effect may be, and so power does not give a good indication of the probability that the trial will demonstrate a positive outcome. Assurance is the unconditional probability that the trial will yield a ‘positive outcome’. A positive outcome usually means a statistically significant result, according to some standard frequentist significance test. The assurance is then the prior expectation of the power, averaged over the prior distribution for the unknown true treatment effect. We argue that assurance is an important measure of the practical utility of a proposed trial, and indeed that it will often be appropriate to choose the size of the sample (and perhaps other aspects of the design) to achieve a desired assurance, rather than to achieve a desired power conditional on an assumed treatment effect. We extend the theory of assurance to two‐sided testing and equivalence trials. We also show that assurance is straightforward to compute in some simple problems of normal, binary and gamma distributed data, and that the method is not restricted to simple conjugate prior distributions for parameters. Several illustrations are given. Copyright © 2005 John Wiley & Sons, Ltd.
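For the simplest normal case, assurance has a closed form: with a N(mu0, tau²) prior on the treatment effect and a one-sided two-arm z-test with known SD sigma, the trial estimate is marginally normal, so averaging the power over the prior reduces to one normal probability. This is a sketch of that special case only.

```python
from math import sqrt
from statistics import NormalDist

def assurance(n, mu0, tau, sigma, alpha=0.025):
    """Assurance (unconditional probability of a significant result) for a
    one-sided two-arm z-test with n per arm, known SD sigma, and prior
    N(mu0, tau**2) on the true treatment effect."""
    se = sigma * sqrt(2.0 / n)               # SE of the trial estimate
    crit = NormalDist().inv_cdf(1 - alpha) * se
    # Marginally the estimate is N(mu0, se^2 + tau^2)
    return 1 - NormalDist(mu0, sqrt(se * se + tau * tau)).cdf(crit)
```

Note that as n grows, assurance does not approach 1 but is capped by the prior probability of a positive effect, which is why a desired assurance may be unattainable by sample size alone.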
