Similar Documents
20 similar documents found.
1.
Recently, molecularly targeted agents and immunotherapy have advanced the treatment of patients with relapsed or refractory cancer, and progression-free survival or event-free survival is often the primary endpoint in such trials. However, existing methods for two-stage single-arm phase II trials with a time-to-event endpoint assume an exponential survival distribution, which limits their applicability to real trial designs. In this paper, we develop an optimal two-stage design that accommodates four commonly used parametric survival distributions. The proposed method has advantages over existing methods in that the choice of underlying survival model is more flexible and the power of the study is more adequately addressed. The proposed two-stage design can therefore be used routinely for single-arm phase II trials with a time-to-event endpoint, as a complement to the commonly used Simon's two-stage design for binary outcomes.
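For context, the operating characteristics of the binary Simon-type two-stage design that the abstract above complements can be computed exactly from the binomial distribution. The sketch below uses illustrative design parameters (stop after 9 patients if there are no responses; declare the drug promising if more than 2 of 17 respond), not values taken from the paper:

```python
from math import comb

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

def binom_cdf(k, n, p):
    return sum(binom_pmf(i, n, p) for i in range(k + 1))

def simon_reject_prob(p, n1, r1, n, r):
    """Probability of declaring the drug promising (rejecting H0) under
    true response rate p, for a two-stage design that stops at stage 1
    if responses <= r1 out of n1, and finally rejects H0 if total
    responses exceed r out of n."""
    total = 0.0
    for x1 in range(r1 + 1, n1 + 1):      # paths that continue to stage 2
        needed = r + 1 - x1               # further responses still needed
        if needed <= 0:
            tail = 1.0
        else:
            tail = 1.0 - binom_cdf(needed - 1, n - n1, p)
        total += binom_pmf(x1, n1, p) * tail
    return total

# Illustrative design: n1=9, r1=0, n=17, r=2, with p0=0.05 vs p1=0.25.
alpha = simon_reject_prob(0.05, 9, 0, 17, 2)  # Type I error rate
power = simon_reject_prob(0.25, 9, 0, 17, 2)  # power at p1
```

The same enumeration extends directly to expected sample size by weighting the stage-1 stopping probability.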

2.
Conditional power calculations are frequently used to guide the decision whether or not to stop a trial for futility or to modify planned sample size. These ignore the information in short-term endpoints and baseline covariates, and thereby do not make fully efficient use of the information in the data. We therefore propose an interim decision procedure based on the conditional power approach which exploits the information contained in baseline covariates and short-term endpoints. We will realize this by considering the estimation of the treatment effect at the interim analysis as a missing data problem. This problem is addressed by employing specific prediction models for the long-term endpoint which enable the incorporation of baseline covariates and multiple short-term endpoints. We show that the proposed procedure leads to an efficiency gain and a reduced sample size, without compromising the Type I error rate of the procedure, even when the adopted prediction models are misspecified. In particular, implementing our proposal in the conditional power approach enables earlier decisions relative to standard approaches, whilst controlling the probability of an incorrect decision. This time gain results in a lower expected number of recruited patients in case of stopping for futility, such that fewer patients receive the futile regimen. We explain how these methods can be used in adaptive designs with unblinded sample size re-assessment based on the inverse normal P-value combination method to control Type I error. We support the proposal by Monte Carlo simulations based on data from a real clinical trial.
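The conditional power calculation that the procedure above builds on can be sketched under the standard Brownian-motion model of the test statistic: with information fraction t, the observed process value is B(t) = z_t*sqrt(t), and the remaining increment is normal with mean theta*(1-t) and variance 1-t. The drift theta (the expected z-value at full information) is an assumption the analyst supplies, e.g. the current trend z_t/sqrt(t); this sketch does not reproduce the paper's covariate- and short-term-endpoint-based prediction models:

```python
from math import sqrt
from statistics import NormalDist

def conditional_power(z_t, t, theta, alpha=0.025):
    """Conditional power of a one-sided level-alpha test, given interim
    z-statistic z_t at information fraction t, assuming drift theta
    (the expected z-value at full information) for the remainder."""
    nd = NormalDist()
    z_alpha = nd.inv_cdf(1 - alpha)
    # P( B(1) > z_alpha | B(t) = z_t*sqrt(t) ) under drift theta
    return nd.cdf((z_t * sqrt(t) + theta * (1 - t) - z_alpha) / sqrt(1 - t))
```

A common futility rule stops the trial when conditional power under the current trend falls below a threshold such as 0.1 or 0.2.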

3.
Phase II clinical trials designed to evaluate a drug's treatment effect can be either single-arm or double-arm. A single-arm design tests the null hypothesis that the response rate of a new drug is lower than a fixed threshold, whereas a double-arm scheme makes a more objective comparison of response rates between the new treatment and the standard of care through randomization. Although the randomized design is the gold standard for efficacy assessment, various situations arise in which a single-arm pilot study prior to a randomized trial is necessary. To combine the single- and double-arm phases and pool their information for better decision making, we propose a Single-To-double ARm Transition design (START) with switching hypothesis tests, where the first stage compares the new drug's response rate with a minimum required level and imposes a continuation criterion, and the second stage uses randomization to determine the treatment's superiority. We develop a software package in R to calibrate the frequentist error rates and perform simulation studies to assess the trial characteristics. Finally, a metastatic pancreatic cancer trial is used to illustrate the decision rules under the proposed START design.

4.
A two-stage group acceptance sampling plan based on a truncated life test is proposed, which can be used regardless of the underlying lifetime distribution when multi-item testers are employed. The decision upon lot acceptance can be made in the first or second stage according to the number of failures from each group. The design parameters of the proposed plan such as number of groups required and the acceptance number for each of two stages are determined independently of an underlying lifetime distribution so as to satisfy the consumer's risk at the specified unreliability. Single-stage group sampling plans are also considered as special cases of the proposed plan and compared with the proposed plan in terms of the average sample number and the operating characteristics. Some important distributions are considered to explain the procedure developed here.
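A minimal sketch of the operating-characteristic (OC) curve for the single-stage special case, under the assumption that a lot is accepted only when every one of g groups of r items has at most c failures by the truncation time; the exact accept/reject rule of the proposed two-stage plan may differ:

```python
from math import comb

def binom_cdf(c, n, p):
    """P(X <= c) for X ~ Binomial(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

def oc_single_stage_group(p, g, r, c):
    """Lot acceptance probability for a single-stage group plan:
    g groups of r items each are put on a truncated life test, and the
    lot is accepted only if every group shows at most c failures.
    p is the per-item probability of failing before the truncation time
    (the 'unreliability'); groups are assumed independent."""
    return binom_cdf(c, r, p) ** g
```

The design problem in the abstract is essentially to choose g and c so that this acceptance probability stays below the consumer's risk at the specified unreliability.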

5.
The statistical analysis of late-stage variety evaluation trials using a mixed model is described, with one- or two-stage approaches to the analysis. Two sets of trials, from Australia and the UK, were used to provide realistic scenarios for a simulation study to evaluate the different methods of analysis. This study showed that a one-stage approach gave the most accurate predictions of variety performance overall or within each environment, across a range of models, as measured by mean squared error of prediction or realized genetic gain. A weighted two-stage approach performed adequately for variety predictions both overall and within environments, but a two-stage unweighted approach performed poorly in both cases. A generalized heritability measure was developed to compare methods.

6.
Seamless phase II/III clinical trials are conducted in two stages with treatment selection at the first stage. In the first stage, patients are randomized to a control or one of k > 1 experimental treatments. At the end of this stage, interim data are analysed, and a decision is made concerning which experimental treatment should continue to the second stage. If the primary endpoint is observable only after some period of follow-up, at the interim analysis data may be available on some early outcome on a larger number of patients than those for whom the primary endpoint is available. These early endpoint data can thus be used for treatment selection. For two previously proposed approaches, the power has been shown to be greater for one or other method depending on the true treatment effects and correlations. We propose a new approach that builds on the previously proposed approaches and uses data available at the interim analysis to estimate these parameters and then, on the basis of these estimates, chooses the treatment selection method with the highest probability of correctly selecting the most effective treatment. This method is shown to perform well compared with the two previously described methods for a wide range of true parameter values. In most cases, the performance of the new method is either similar to or, in some cases, better than either of the two previously proposed methods. © 2014 The Authors. Pharmaceutical Statistics published by John Wiley & Sons Ltd.

7.
With the development of molecular targeted drugs, predictive biomarkers have played an increasingly important role in identifying patients who are likely to receive clinically meaningful benefits from experimental drugs (i.e., sensitive subpopulation) even in early clinical trials. For continuous biomarkers, such as mRNA levels, it is challenging to determine cutoff value for the sensitive subpopulation, and widely accepted study designs and statistical approaches are not currently available. In this paper, we propose the Bayesian adaptive patient enrollment restriction (BAPER) approach to identify the sensitive subpopulation while restricting enrollment of patients from the insensitive subpopulation based on the results of interim analyses, in a randomized phase 2 trial with time-to-endpoint outcome and a single biomarker. Applying a four-parameter change-point model to the relationship between the biomarker and hazard ratio, we calculate the posterior distribution of the cutoff value that exhibits the target hazard ratio and use it for the restriction of the enrollment and the identification of the sensitive subpopulation. We also consider interim monitoring rules for termination because of futility or efficacy. Extensive simulations demonstrated that our proposed approach reduced the number of enrolled patients from the insensitive subpopulation, relative to an approach with no enrollment restriction, without reducing the likelihood of a correct decision for next trial (no-go, go with entire population, or go with sensitive subpopulation) or correct identification of the sensitive subpopulation. Additionally, the four-parameter change-point model had a better performance over a wide range of simulation scenarios than a commonly used dichotomization approach. Copyright © 2016 John Wiley & Sons, Ltd.

8.
In 2008, this group published a paper on approaches for two-stage crossover bioequivalence (BE) studies that allowed for the reestimation of the second-stage sample size based on the variance estimated from the first-stage results. The sequential methods considered used an assumed GMR of 0.95 as part of the method for determining power and sample size. This note adds results for an assumed GMR = 0.90. Two of the methods recommended for GMR = 0.95 in the earlier paper have some unacceptable increases in Type I error rate when the GMR is changed to 0.90. If a sponsor wants to assume 0.90 for the GMR, Method D is recommended. Copyright © 2011 John Wiley & Sons, Ltd.

9.
Two-stage design is very useful in clinical trials for evaluating the validity of a specific treatment regimen. When the second stage is allowed to continue, the method used to estimate the response rate based on the results of both stages is critical for the subsequent design. The often-used sample proportion has an evident upward bias. However, the maximum likelihood estimator or the moment estimator tends to underestimate the response rate. A mean-square error weighted estimator is considered here; its performance is thoroughly investigated via Simon's optimal and minimax designs and Shuster's design. Compared with the sample proportion, the proposed method has a smaller bias, and compared with the maximum likelihood estimator, the proposed method has a smaller mean-square error. Copyright © 2010 John Wiley & Sons, Ltd.
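The upward bias of the pooled sample proportion, conditional on the trial continuing to stage 2, can be verified by exact enumeration over the binomial outcomes. The design parameters below (n1 = 9, stop if 0 responses, then 8 more patients) are illustrative assumptions, not taken from the paper:

```python
from math import comb

def pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

def second_stage_mean(p, n1, r1, n2):
    """Exact expectation of the pooled sample proportion (x1+x2)/(n1+n2),
    conditional on the trial continuing to stage 2 (i.e. x1 > r1),
    together with the probability of continuing. Because continuation
    requires a large first-stage count, this conditional mean typically
    exceeds the true response rate p."""
    num = 0.0
    p_cont = 0.0
    for x1 in range(r1 + 1, n1 + 1):       # only continuing paths
        p1 = pmf(x1, n1, p)
        p_cont += p1
        for x2 in range(n2 + 1):
            num += p1 * pmf(x2, n2, p) * (x1 + x2) / (n1 + n2)
    return num / p_cont, p_cont
```

Running this for a true rate of p = 0.10 shows the conditional mean of the naive estimator sitting above 0.10, which is the bias the weighted estimator in the abstract is designed to shrink.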

10.
Phase II trials evaluate whether a new drug or therapy is worth pursuing further, or whether certain treatments are feasible. A typical phase II trial is a single-arm (open-label) trial with a binary clinical endpoint (response to therapy). Although many oncology phase II clinical trials are designed with a two-stage procedure, multi-stage designs for phase II cancer trials are now feasible owing to increased data-capture capability. Such designs adjust for multiple analyses and variations in analysis time, and provide greater flexibility, such as minimizing the number of patients treated on an ineffective therapy and identifying the minimum number of patients needed to evaluate whether the trial warrants further development. In most NIH-sponsored studies, the early stopping rule is chosen so that the number of patients treated on an ineffective therapy is minimized. In pharmaceutical trials, it is also important to know as early as possible whether the trial is highly promising and how likely the early conclusion is to be sustained. Although various methods are available to address these issues, practitioners often use disparate methods for different issues without realizing that a single unified method exists. This article shows how a unified approach via a fully sequential procedure, the sequential conditional probability ratio test, can address the multiple needs of a phase II trial. We show that the fully sequential program can be used to derive an optimized, efficient multi-stage design for either low or high activity, to identify the minimum number of patients required to assess whether a new drug warrants further study, and to adjust for unplanned interim analyses. In addition, we calculate a probability of discordance, that is, the probability that the statistical test would conclude otherwise should the trial continue to its planned end, usually the sample size of a fixed-sample design. This probability can aid decision making in a drug development program. All computations are based on the exact binomial distribution.
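The sequential conditional probability ratio test mentioned above is a truncated relative of Wald's classic sequential probability ratio test (SPRT). The simpler Wald version for a binomial response rate, sketched below, illustrates the fully sequential idea of monitoring a log-likelihood ratio against two boundaries after every patient:

```python
from math import log

def sprt_bernoulli(data, p0, p1, alpha=0.05, beta=0.20):
    """Wald's SPRT for a Bernoulli response rate, H0: p = p0 vs
    H1: p = p1 (with p1 > p0). Returns a decision string and the
    number of observations used. Boundaries use Wald's approximations
    A = (1-beta)/alpha and B = beta/(1-alpha) on the likelihood ratio."""
    upper = log((1 - beta) / alpha)   # cross -> accept H1
    lower = log(beta / (1 - alpha))   # cross -> accept H0
    llr = 0.0
    for i, x in enumerate(data, start=1):
        # accumulate the log-likelihood-ratio contribution of response x
        llr += log(p1 / p0) if x else log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "accept H1", i
        if llr <= lower:
            return "accept H0", i
    return "continue", len(data)
```

For p0 = 0.1 vs p1 = 0.3, an early run of successes crosses the upper boundary after only a few patients, which is exactly the early-decision behaviour the abstract exploits.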

12.
Repeated confidence intervals (RCIs) are an important tool for the design and monitoring of group sequential trials, in which the trial need not stop according to planned statistical stopping rules. In this article, we derive RCIs when data from the stages of the trial are not independent, so the underlying process is no longer Brownian motion (BM). Under this assumption, a larger class of stochastic processes, fractional Brownian motion (FBM), is considered. RCI widths and sample size requirements are compared with those under Brownian motion for different analysis times, Type I error rates, and numbers of interim analyses. Power-family spending functions, including Pocock and O'Brien-Fleming design types, are considered in these simulations. Interim data from BHAT and oncology trials are used to illustrate how to derive RCIs under FBM for efficacy and futility monitoring.

13.
Methods for analysing unbalanced factorial designs can be traced back to the work of Frank Yates in the 1930s. Yet the question of how his methods of fitting constants (Type II) and weighted squares of means (Type III) behave when negligible or insignificant interactions exist remains unanswered. In this paper, by means of a simulation study, Type II and Type III ANOVA results are examined for all unbalanced structures originating from a 2x3 balanced factorial design within homogeneous groups (design types), accounting for structure, the number of observations lost, and which cells contained the missing observations. The two-level factor is further analysed to test the null hypothesis, for both Type II and Type III analyses, that the unbalanced structures within each design type provide comparable F values. The results are summarised, and the conclusions agree with statements made by Yates; Burdick and Herr; and Shaw and Mitchell-Olds, although some results require further investigation.

14.
In pharmaceutical research, clinical trial methods are usually used to identify valuable treatments and compare their efficacy with that of a standard control therapy. Although clinical trials are essential for ensuring the efficacy and postmarketing safety of a drug, conducting them is usually costly and time-consuming. Moreover, allocating patients to treatments with little therapeutic effect is inappropriate for ethical and cost reasons. Hence, several two-stage designs in the literature use the conditional power obtained from interim analysis results to decide whether the less efficacious treatments should continue to the next stage, thereby reducing cost and shortening trial duration. However, the literature lacks discussion of the factors that influence the conditional power of a trial at the design stage. In this article, we calculate the optimal conditional power via the receiver operating characteristic curve method to assess these influences on the quality of a two-stage design with multiple treatments, and we propose an optimal design that minimizes the expected sample size for choosing the best or most promising treatment(s) among several treatments under an optimal conditional power constraint. We provide tables of the two-stage design subject to optimal conditional power for various combinations of design parameters and use an example to illustrate our methods.

15.
Missing data, and the bias they can cause, are an almost ever-present concern in clinical trials. The last observation carried forward (LOCF) approach has been frequently utilized to handle missing data in clinical trials, and is often specified in conjunction with analysis of variance (LOCF ANOVA) for the primary analysis. Considerable advances in statistical methodology, and in our ability to implement these methods, have been made in recent years. Likelihood-based, mixed-effects model approaches implemented under the missing at random (MAR) framework are now easy to implement, and are commonly used to analyse clinical trial data. Furthermore, such approaches are more robust to the biases from missing data, and provide better control of Type I and Type II errors than LOCF ANOVA. Empirical research and analytic proof have demonstrated that the behaviour of LOCF is uncertain, and in many situations it has not been conservative. Using LOCF as a composite measure of safety, tolerability and efficacy can lead to erroneous conclusions regarding the effectiveness of a drug. This approach also violates the fundamental basis of statistics as it involves testing an outcome that is not a physical parameter of the population, but rather a quantity that can be influenced by investigator behaviour, trial design, etc. Practice should shift away from using LOCF ANOVA as the primary analysis and focus on likelihood-based, mixed-effects model approaches developed under the MAR framework, with missing not at random methods used to assess robustness of the primary analysis. Copyright © 2004 John Wiley & Sons, Ltd.
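The LOCF imputation mechanism that the abstract argues against is simple to state; a minimal sketch, with missing observations encoded as None:

```python
def locf(values):
    """Last observation carried forward: replace each missing value
    (None) with the most recent observed value. Leading missing values
    stay None, since nothing has been observed yet to carry forward."""
    out, last = [], None
    for v in values:
        if v is not None:
            last = v
        out.append(last)
    return out
```

The mechanism's simplicity is exactly the problem the abstract identifies: the imputed series freezes at the last visit, so its endpoint depends on when and why subjects drop out rather than on any population parameter.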

16.
Immunotherapy—treatments that enlist the immune system to battle tumors—has received widespread attention in cancer research. Due to its unique features and mechanisms for treating cancer, immunotherapy requires novel clinical trial designs. We propose a Bayesian seamless phase I/II randomized design for immunotherapy trials (SPIRIT) to find the optimal biological dose (OBD) defined in terms of the restricted mean survival time. We jointly model progression-free survival and the immune response. Progression-free survival is used as the primary endpoint to determine the OBD, and the immune response is used as an ancillary endpoint to quickly screen out futile doses. Toxicity is monitored throughout the trial. The design consists of two seamlessly connected stages. The first stage identifies a set of safe doses. The second stage adaptively randomizes patients to the safe doses identified and uses their progression-free survival and immune response to find the OBD. The simulation study shows that the SPIRIT has desirable operating characteristics and outperforms the conventional design.

17.
With the increased costs of drug development, the need for efficient studies has become critical. A key decision point on the development pathway is the proof-of-concept study. These studies must provide clear information to project teams to enable decisions about further developing a drug candidate, and also evidence that any effect size is sufficient to warrant that development given the current market environment. Our case study outlines one such proof-of-concept trial in which a new candidate therapy for neuropathic pain was investigated to assess dose-response and to evaluate the magnitude of its effect compared with placebo. A Normal Dynamic Linear Model was used to estimate the dose-response, enforcing some smoothness while allowing for the possibility that the dose-response is non-monotonic. A pragmatic, parallel-group study design was used, with interim analyses scheduled to allow the sponsor to drop ineffective doses or to stop the study. Simulations were performed to assess the operating characteristics of the study design. The study results are presented. Significant cost savings were made when it transpired that the new candidate drug did not show superior efficacy compared with placebo, and the study was stopped.

18.
The success rate of drug development has declined dramatically in recent years, and the current paradigm of drug development is no longer functioning well. A major undertaking in breakthrough strategies and design methodology is required to minimize sample sizes and shorten development duration. We propose an alternative phase II/III design based on continuous efficacy endpoints, which consists of two stages: a selection stage and a confirmation stage. In the selection stage, a randomized parallel design with several doses and a placebo group is employed to select doses. After the best dose is chosen, the patients in the selected dose group and the placebo group continue into the confirmation stage. New patients are also recruited and randomized to the selected dose or the placebo group. The final analysis is performed on the cumulative data from patients in both stages. With pre-specified probabilities of rejecting the drug at each stage, sample sizes and critical values for both stages can be determined. Because it is a single trial that controls the overall Type I and Type II error rates, the proposed phase II/III adaptive design may not only reduce the sample size but also improve the success rate. An example illustrates the application of the proposed phase II/III adaptive design. Copyright © 2010 John Wiley & Sons, Ltd.

19.
One of the main aims of early-phase clinical trials is to identify a safe dose with an indication of therapeutic benefit to administer to subjects in further studies. Ideally, therefore, dose-limiting events (DLEs) and responses indicative of efficacy should both be considered in the dose-escalation procedure. Several methods have been suggested for incorporating both DLEs and efficacy responses in early-phase dose-escalation trials. In this paper, we describe and evaluate a Bayesian adaptive approach based on one binary response (occurrence of a DLE) and one continuous response (a measure of potential efficacy) per subject. A logistic regression and a linear log-log relationship are used, respectively, to model the binary DLEs and the continuous efficacy responses. A gain function concerning both the DLEs and the efficacy responses is used to determine the dose to administer to the next cohort of subjects. Stopping rules are proposed to enable efficient decision making. Simulation results show that our approach performs better than one accounting for DLE responses alone. To assess the robustness of the approach, we also consider scenarios where the efficacy responses are generated from an Emax model but modelled by the linear log-log relationship. This evaluation shows that the simpler log-log model leads to robust recommendations even under such misspecification, making it a useful approximation given the difficulty of estimating the Emax model. Additionally, we find comparable performance to alternative approaches that use both efficacy and safety for dose-finding. Copyright © 2015 John Wiley & Sons, Ltd.
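The gain-function dose selection described above can be sketched with a logistic model for the DLE probability and a linear log-log model for efficacy. All parameter values and the specific gain form below are illustrative assumptions for the sketch, not the paper's fitted models or its Bayesian updating:

```python
from math import exp, log

def prob_dle(dose, a=-4.0, b=1.2):
    """Logistic model for P(DLE): logit = a + b*log(dose).
    Parameter values are illustrative only."""
    eta = a + b * log(dose)
    return 1.0 / (1.0 + exp(-eta))

def efficacy(dose, c=0.5, d=0.4):
    """Linear log-log efficacy model: log(E) = c + d*log(dose)."""
    return exp(c + d * log(dose))

def best_dose(doses, tox_cap=0.33, weight=5.0):
    """Pick the dose maximizing a simple gain = efficacy - weight*P(DLE),
    restricted to doses whose modelled DLE probability is below tox_cap.
    Returns None if no dose is admissible."""
    admissible = [x for x in doses if prob_dle(x) < tox_cap]
    if not admissible:
        return None
    return max(admissible, key=lambda x: efficacy(x) - weight * prob_dle(x))
```

With these assumed parameters the gain rises with dose until the modelled toxicity penalty and the admissibility cap take over, so an intermediate dose is recommended rather than the highest one.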

20.
Various exact tests for statistical inference are available for powerful and accurate decision rules provided that corresponding critical values are tabulated or evaluated via Monte Carlo methods. This article introduces a novel hybrid method for computing p-values of exact tests by combining Monte Carlo simulations and statistical tables generated a priori. To use the data from Monte Carlo generations and tabulated critical values jointly, we employ kernel density estimation within Bayesian-type procedures. The p-values are linked to the posterior means of quantiles. In this framework, we present relevant information from the Monte Carlo experiments via likelihood-type functions, whereas tabulated critical values are used to reflect prior distributions. The local maximum likelihood technique is employed to compute functional forms of prior distributions from statistical tables. Empirical likelihood functions are proposed to replace parametric likelihood functions within the structure of the posterior mean calculations to provide a Bayesian-type procedure with a distribution-free set of assumptions. We derive the asymptotic properties of the proposed nonparametric posterior means of quantiles process. Using the theoretical propositions, we calculate the minimum number of needed Monte Carlo resamples for desired level of accuracy on the basis of distances between actual data characteristics (e.g. sample sizes) and characteristics of data used to present corresponding critical values in a table. The proposed approach makes practical applications of exact tests simple and rapid. Implementations of the proposed technique are easily carried out via the recently developed STATA and R statistical packages.
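A minimal sketch of the plain Monte Carlo p-value that the hybrid method above starts from, using the standard add-one correction; the kernel-density and Bayesian tabulation steps of the proposed method are not reproduced here:

```python
import random

def mc_pvalue(t_obs, simulate_stat, b=2000, rng=None):
    """Monte Carlo p-value with the standard +1 correction:
    p = (1 + #{T* >= t_obs}) / (b + 1), where simulate_stat(rng)
    draws one test statistic under the null hypothesis. The +1 keeps
    the estimated p-value strictly positive and the test valid."""
    rng = rng or random.Random()
    exceed = sum(1 for _ in range(b) if simulate_stat(rng) >= t_obs)
    return (1 + exceed) / (b + 1)
```

The trade-off the abstract addresses is visible here: the accuracy of this estimate is governed entirely by the number of resamples b, which is what blending in pre-tabulated critical values lets the hybrid method reduce.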
