Similar Literature
20 similar articles found.
1.
Both treatment efficacy and safety are typically the primary endpoints in Phase II, and even in some Phase III, clinical trials. Efficacy is frequently measured by time to response, death, or some other milestone event and thus is a continuous, possibly censored, outcome. Safety, however, is frequently measured on a discrete scale; in Eastern Cooperative Oncology Group clinical trial E2290, it was measured as the number of weekly rounds of chemotherapy that were tolerable to colorectal cancer patients. For the joint analysis of efficacy and safety, we propose a non-parametric, computationally simple estimator for the bivariate survival function when one time-to-event is continuous, one is discrete, and both are subject to right-censoring. The bivariate censoring times may depend on each other, but they are assumed to be independent of both event times. We derive a closed-form covariance estimator for the survivor function, which allows inference to be based on any of several possible statistics of interest. In addition, we derive its covariance with respect to calendar time of analysis, allowing for its use in sequential studies.

2.
Because phase I clinical trials are unable to estimate the frequency of toxicity precisely, Bryant and Day proposed incorporating toxicity considerations into two-stage designs for phase II clinical trials. Conaway and Petroni further pointed out that it is important to evaluate clinical activity and safety simultaneously when studying cancer treatments with more toxic chemotherapies in a phase II trial, and they developed multi-stage designs with two dependent binary endpoints. However, the usual sample sizes in phase II trials make it difficult for these designs to control the type I error rate at the desired level over the entire null region while retaining sufficient power against reasonable alternatives. In this paper, the curtailed sampling procedure summarized by Phatak and Bhatt is therefore applied to the two-stage designs with two dependent binary endpoints to reduce sample sizes and speed up the drug development process.

3.
During a new drug development process, it is desirable to detect potential safety signals in a timely manner. For this purpose, repeated meta-analyses may be performed sequentially on accumulating safety data. Moreover, if the amount of safety data from the originally planned program is not enough to ensure adequate power to test a specific hypothesis (e.g., the noninferiority hypothesis for an event of interest), the total sample size may be increased by adding new studies to the program. Without appropriate adjustment, it is well known that the type I error rate will be inflated because of the repeated analyses and the sample size adjustment. In this paper, we discuss potential issues associated with adaptive and repeated cumulative meta-analyses of safety data conducted during a drug development process. We consider both frequentist and Bayesian approaches. A new drug development example is used to demonstrate the application of the methods.
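A minimal simulation sketch of this type I error inflation, assuming a simple two-arm binomial safety endpoint with hypothetical study sizes and event rates (not the drug development example or the adjustment methods discussed in the paper): an unadjusted two-proportion test is repeated on the cumulative data each time a new study is added, and the chance of at least one nominally significant finding under the null is estimated.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2024)

def cumulative_metaanalysis_type1(n_sims=20000, n_studies=5, n_per_arm=200, p_event=0.05):
    """Estimate the probability of at least one nominally significant safety
    finding when an unadjusted test is repeated after each new study is added,
    with no true difference between arms (all numbers hypothetical)."""
    any_reject = 0
    for _ in range(n_sims):
        cum_t = cum_c = 0          # cumulative event counts per arm
        cum_n = 0                  # cumulative patients per arm
        rejected = False
        for _ in range(n_studies):
            cum_t += rng.binomial(n_per_arm, p_event)
            cum_c += rng.binomial(n_per_arm, p_event)
            cum_n += n_per_arm
            # pooled two-proportion z-test on the cumulative data
            p_pool = (cum_t + cum_c) / (2 * cum_n)
            se = np.sqrt(2 * p_pool * (1 - p_pool) / cum_n)
            if se > 0:
                z = (cum_t / cum_n - cum_c / cum_n) / se
                if abs(z) > stats.norm.ppf(0.975):
                    rejected = True
        any_reject += rejected
    return any_reject / n_sims

print("Familywise type I error over 5 unadjusted looks:",
      cumulative_metaanalysis_type1())   # noticeably above the nominal 0.05
```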

4.
The success rate of drug development has declined dramatically in recent years, and the current paradigm of drug development is no longer functioning well. A major effort is required to develop breakthrough strategies and design methodology that minimize sample sizes and shorten the duration of development. We propose an alternative phase II/III design based on continuous efficacy endpoints, which consists of two stages: a selection stage and a confirmation stage. In the selection stage, a randomized parallel design comparing several doses with a placebo group is employed for dose selection. After the best dose is chosen, patients in the selected dose group and the placebo group continue into the confirmation stage. New patients are also recruited and randomized to the selected dose or placebo. The final analysis is performed on the cumulative data of patients from both stages. With pre-specified probabilities of rejecting the drug at each stage, sample sizes and critical values for both stages can be determined. Because it is a single trial that controls the overall type I and type II error rates, the proposed phase II/III adaptive design may not only reduce the sample size but also improve the success rate. An example illustrates the application of the proposed phase II/III adaptive design.
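A rough simulation sketch of the selection-then-confirmation structure, using hypothetical sample sizes and a naive unadjusted final test; it illustrates why the overall type I error must be controlled when stage 1 data are reused after picking the best-looking dose (the paper instead derives adjusted critical values).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

def seamless_select_then_confirm(n_sims=20000, k_doses=3, n1=50, n2=100,
                                 sigma=1.0, alpha=0.025):
    """Under the global null (all doses equal to placebo), estimate the type I
    error of naively testing the selected dose against placebo on the
    cumulative (stage 1 + stage 2) data at the unadjusted level alpha.
    Sample sizes are hypothetical, not the paper's design constants."""
    rejections = 0
    for _ in range(n_sims):
        # stage 1: k doses and placebo, n1 patients per arm, common null mean 0
        stage1_doses = rng.normal(0.0, sigma, size=(k_doses, n1))
        stage1_pbo = rng.normal(0.0, sigma, size=n1)
        best = np.argmax(stage1_doses.mean(axis=1))    # pick the best-looking dose
        # stage 2: continue only the selected dose and placebo
        stage2_dose = rng.normal(0.0, sigma, size=n2)
        stage2_pbo = rng.normal(0.0, sigma, size=n2)
        trt = np.concatenate([stage1_doses[best], stage2_dose])
        pbo = np.concatenate([stage1_pbo, stage2_pbo])
        z = (trt.mean() - pbo.mean()) / np.sqrt(sigma**2 / trt.size + sigma**2 / pbo.size)
        rejections += z > stats.norm.ppf(1 - alpha)
    return rejections / n_sims

print("Naive one-sided type I error:", seamless_select_then_confirm())
# exceeds the nominal 0.025, which is why adjusted critical values are needed
```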

5.
We introduce a multi-step variance minimization algorithm for numerical estimation of Type I and Type II error probabilities in sequential tests. The algorithm can be applied to general test statistics and easily built into general design algorithms for sequential tests. Our simulation results indicate that the proposed algorithm is particularly useful for estimating tail probabilities, and may lead to significant computational efficiency gains over the crude Monte Carlo method.
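For reference, the crude Monte Carlo baseline against which the proposed algorithm is compared can be sketched as follows, for a hypothetical two-look sequential z-test with illustrative boundaries.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)

def sequential_error_rates(theta, n_sims=50000, n_per_look=(50, 100),
                           boundaries=(2.30, 2.00)):
    """Crude Monte Carlo estimate of the rejection probability of a two-look
    sequential one-sample z-test with hypothetical stopping boundaries.
    theta = 0 gives the type I error; theta > 0 gives power (1 - type II error)."""
    rejections = 0
    for _ in range(n_sims):
        data = rng.normal(theta, 1.0, size=max(n_per_look))
        for n, b in zip(n_per_look, boundaries):
            z = data[:n].mean() * np.sqrt(n)
            if z > b:                 # cross the efficacy boundary: stop and reject
                rejections += 1
                break
    return rejections / n_sims

print("Estimated type I error :", sequential_error_rates(theta=0.0))
print("Estimated power at 0.3 :", sequential_error_rates(theta=0.3))
```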

6.
In this paper we propose an importance sampling method for evaluating probabilities associated with the margins of a 2 × 2 table in a fixed or multistage design. These probabilities arise in the work of Conaway and Petroni (1994) on sequential phase II trials with bivariate endpoints. The importance sampling method proposed here provides an efficient way of estimating these probabilities relative to exact enumeration and to Monte Carlo techniques without variance reduction. The method can also be adapted to estimate probabilities for the designs of Jennison and Turnbull (1993).

7.
In studies with recurrent event endpoints, misspecified assumptions about event rates or dispersion can lead to underpowered trials or overexposure of patients. Specification of overdispersion is often a particular problem, as it is usually not reported in clinical trial publications. Changing event rates over the years have been described for some diseases, adding to the uncertainty in planning. To mitigate the risks of inadequate sample sizes, internal pilot study designs have been proposed, with a preference for blinded sample size reestimation procedures, as they generally do not affect the type I error rate and maintain trial integrity. Blinded sample size reestimation procedures are available for trials with recurrent events as endpoints. However, the variance in the reestimated sample size can be considerable, in particular with early sample size reviews. Motivated by a randomized controlled trial in paediatric multiple sclerosis, a rare neurological condition in children, we apply the concept of blinded continuous monitoring of information, which is known to reduce the variance in the resulting sample size. Assuming negative binomial distributions for the counts of recurrent relapses, we derive information criteria and propose blinded continuous monitoring procedures. Their operating characteristics are assessed in Monte Carlo trial simulations, demonstrating favourable properties with regard to type I error rate, power, and stopping time, i.e., sample size.
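A minimal sketch of the blinded monitoring idea under simplifying assumptions (1:1 randomisation, equal follow-up, method-of-moments estimates, hypothetical rates and target effect; the paper derives its information criteria formally): the pooled, blinded counts are used to estimate the event rate and overdispersion, which are translated into the approximate information accrued for the log rate ratio and compared with the information required for the planned power.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def blinded_information(counts, followup):
    """Blinded (pooled) estimate of the information for the log rate ratio,
    assuming 1:1 randomisation and a negative binomial count model with
    Var = mu + k*mu^2; rate and dispersion come from method-of-moments fits."""
    m, v = counts.mean(), counts.var(ddof=1)
    rate = m / followup
    k = max((v - m) / m**2, 0.0)               # overdispersion, floored at 0
    n_per_arm = len(counts) / 2
    var_log_rr = (1.0 / (rate * followup) + k) * (2.0 / n_per_arm)
    return 1.0 / var_log_rr

# target information for 90% power to detect a rate ratio of 0.7 at two-sided 5%
delta = np.log(0.7)
info_target = (stats.norm.ppf(0.975) + stats.norm.ppf(0.9))**2 / delta**2

# hypothetical blinded interim data: pooled annual relapse counts, 1 year follow-up
interim_counts = rng.negative_binomial(n=2, p=2 / (2 + 0.85), size=180)
info_now = blinded_information(interim_counts, followup=1.0)
print(f"blinded information {info_now:.1f} / target {info_target:.1f}"
      f" -> {'stop recruiting' if info_now >= info_target else 'continue'}")
```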

8.
A draft addendum to ICH E9 was released for public consultation in August 2017. The addendum focuses on two topics particularly relevant for randomized confirmatory clinical trials: estimands and sensitivity analyses. The need to amend ICH E9 grew out of the realization of a lack of alignment between the objectives of a clinical trial stated in the protocol and the accompanying quantification of the “treatment effect” reported in a regulatory submission. We embed time-to-event endpoints in the estimand framework and discuss how the four estimand attributes described in the addendum apply to time-to-event endpoints. We point out that if the proportional hazards assumption is not met, the estimand targeted by the most prevalent methods used to analyze time-to-event endpoints, the logrank test and Cox regression, depends on the censoring distribution. We discuss, for a large randomized clinical trial, how the analyses for the primary and secondary endpoints as well as the sensitivity analyses actually performed in the trial can be seen in the context of the addendum. To the best of our knowledge, this is the first attempt to do so for a trial with a time-to-event endpoint. Questions that remain open with the addendum for time-to-event endpoints and beyond are formulated, and recommendations for the planning of future trials are given. We hope that this will provide a contribution to developing a common framework, based on the final version of the addendum, that can be applied to designs, protocols, statistical analysis plans, and clinical study reports in the future.

9.
Stochastic curtailment has been considered for the interim monitoring of group sequential trials (Davis and Hardy, 1994). The statistical boundaries in Davis and Hardy (1994) were derived using the theory of Brownian motion. In some clinical trials, however, the conditions for forming a Brownian motion may not be satisfied. In this paper, we extend the computations of Brownian motion based boundaries, expected stopping times, and type I and type II error rates to fractional Brownian motion (FBM), which includes Brownian motion as a special case. Designs under FBM are compared to those under Brownian motion and to O’Brien–Fleming type tests. One- and two-sided boundaries for efficacy and futility monitoring are also discussed. Results show that boundary values decrease and error rates deviate from design levels as the Hurst parameter increases from 0.1 to 0.9; these changes should be considered when designing a study under FBM.
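The effect of the Hurst parameter on boundary-crossing probabilities can be illustrated by simulating FBM directly from its covariance kernel; the sketch below uses O'Brien-Fleming-type boundaries calibrated for Brownian motion (H = 0.5) and hypothetical look times, not the boundaries derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

def fbm_crossing_prob(hurst, looks=(0.25, 0.5, 0.75, 1.0), n_sims=200000):
    """Probability that the standardised statistic Z(t) = B_H(t) / t^H crosses
    O'Brien-Fleming-type boundaries 2.024 / sqrt(t) at any of the looks when
    there is no drift, i.e., the attained type I error under the null."""
    t = np.asarray(looks)
    # covariance of fractional Brownian motion at the look times:
    # Cov(B_H(s), B_H(t)) = 0.5 * (s^2H + t^2H - |t - s|^2H)
    cov = 0.5 * (t[:, None] ** (2 * hurst) + t[None, :] ** (2 * hurst)
                 - np.abs(t[:, None] - t[None, :]) ** (2 * hurst))
    L = np.linalg.cholesky(cov)
    paths = rng.standard_normal((n_sims, len(t))) @ L.T     # FBM at the looks
    z = paths / t ** hurst                                   # standardised statistics
    boundaries = 2.024 / np.sqrt(t)                          # calibrated for H = 0.5
    return np.mean((z > boundaries).any(axis=1))

for h in (0.3, 0.5, 0.7):
    print(f"H = {h}: attained one-sided error {fbm_crossing_prob(h):.4f}")
# H = 0.5 reproduces (approximately) the 0.025 level; other H values deviate
```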

10.
Adaptive trial methodology for multiarmed trials and enrichment designs has been extensively discussed in the past. A general principle for constructing test procedures that control the family-wise Type I error rate in the strong sense is based on combination tests within a closed test. With survival data, a problem arises when adaptive decisions are based on information from patients who are still at risk at the interim analysis. With the currently available testing procedures, either no testing of hypotheses at interim analyses is possible, or there are restrictions on the interim data that can be used in the adaptation decisions because, essentially, only the interim test statistics of the primary endpoint may be used. We propose a general adaptive testing procedure, covering multiarmed and enrichment designs, which does not have these restrictions. An important application is clinical trials in which short-term surrogate endpoints are used as the basis for trial adaptations, and we illustrate how such trials can be designed. We propose statistical models to assess the impact of effect sizes, the correlation structure between the short-term and the primary endpoint, the sample size, the timing of interim analyses, and the selection rule on the operating characteristics.
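A minimal sketch of the underlying combination-test-within-closed-test machinery for a two-arm selection design (not the authors' general procedure); the stage-wise p-values, the equal weights, and the Bonferroni intersection test are illustrative placeholders.

```python
import numpy as np
from scipy import stats

def inverse_normal_combination(p1, p2, w1=np.sqrt(0.5), w2=np.sqrt(0.5)):
    """Combine stage-wise p-values with prespecified weights (w1^2 + w2^2 = 1).
    The combination remains valid under design adaptations after stage 1 as long
    as p1 and p2 are (conditionally) valid p-values."""
    z = w1 * stats.norm.ppf(1 - p1) + w2 * stats.norm.ppf(1 - p2)
    return 1 - stats.norm.cdf(z)

def closed_test_two_arms(p_stage1, p_stage2, alpha=0.025):
    """Closed test for elementary hypotheses H1, H2 in a two-arm selection design.
    p_stage1 / p_stage2 hold p-values for 'H1', 'H2' and the intersection 'H12'
    (e.g., from a Bonferroni or Simes test per stage). If an arm is dropped at
    the interim, its stage-2 p-value is set to 1."""
    combined = {h: inverse_normal_combination(p_stage1[h], p_stage2[h])
                for h in ('H1', 'H2', 'H12')}
    return {
        'H1': combined['H1'] <= alpha and combined['H12'] <= alpha,
        'H2': combined['H2'] <= alpha and combined['H12'] <= alpha,
    }

# hypothetical stage-wise p-values; arm 2 was dropped at the interim (p2 set to 1);
# the intersection p-value per stage uses Bonferroni: min(1, 2 * min(p))
stage1 = {'H1': 0.04, 'H2': 0.30, 'H12': 0.08}
stage2 = {'H1': 0.01, 'H2': 1.00, 'H12': 0.01}
print(closed_test_two_arms(stage1, stage2))
```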

11.
In clinical trials, missing data commonly arise through nonadherence to the randomized treatment or to study procedures. For trials in which recurrent event endpoints are of interest, conventional analyses using the proportional intensity model or a count model assume that the data are missing at random, an assumption that cannot be tested using the observed data alone. Thus, sensitivity analyses are recommended. We implement control-based multiple imputation as a sensitivity analysis for recurrent event data. We model the recurrent events using a piecewise exponential proportional intensity model with frailty and sample the parameters from the posterior distribution. We impute the number of events after dropout and correct the variance estimation using a bootstrap procedure. We apply the method to data from a sitagliptin study.
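A simplified sketch of the control-based imputation idea, using a plain Poisson working model with hypothetical data rather than the paper's piecewise exponential frailty model and Bayesian posterior draws: post-dropout follow-up is imputed from the control-arm rate, each completed data set is analysed with a simple rate comparison, and the results are pooled with Rubin's rules (the paper's bootstrap variance correction is omitted here).

```python
import numpy as np

rng = np.random.default_rng(8)

def control_based_imputation(events, exposure, arm, total_followup=1.0, n_imp=50):
    """Jump-to-reference style imputation for recurrent-event counts under a
    simple Poisson working model: unobserved follow-up in both arms is imputed
    using the control-arm event rate. Returns the Rubin's-rules estimate and
    standard error of the log rate ratio (treatment vs control)."""
    ctrl = arm == 0
    rate_ctrl = events[ctrl].sum() / exposure[ctrl].sum()
    missing_time = np.clip(total_followup - exposure, 0.0, None)
    est, var = [], []
    for _ in range(n_imp):
        imputed = events + rng.poisson(rate_ctrl * missing_time)
        r1 = imputed[~ctrl].sum() / (total_followup * (~ctrl).sum())
        r0 = imputed[ctrl].sum() / (total_followup * ctrl.sum())
        est.append(np.log(r1 / r0))
        var.append(1.0 / imputed[~ctrl].sum() + 1.0 / imputed[ctrl].sum())  # delta method
    est, var = np.array(est), np.array(var)
    within, between = var.mean(), est.var(ddof=1)
    total_se = np.sqrt(within + (1 + 1 / n_imp) * between)                  # Rubin's rules
    return est.mean(), total_se

# hypothetical trial: 100 patients per arm, some drop out before 1 year
n = 200
arm = np.repeat([0, 1], n // 2)
exposure = np.where(rng.random(n) < 0.2, rng.uniform(0.2, 1.0, n), 1.0)
events = rng.poisson(np.where(arm == 1, 0.6, 1.0) * exposure)
log_rr, se = control_based_imputation(events, exposure, arm)
print(f"log rate ratio {log_rr:.3f} (95% CI {log_rr - 1.96*se:.3f}, {log_rr + 1.96*se:.3f})")
```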

12.
For clinical trials with multiple endpoints, the primary interest is usually to evaluate the relationship between these endpoints and the treatment interventions. Studying the correlation between two clinical trial endpoints can also be of interest. For example, the association between a patient-reported outcome and a clinically assessed endpoint could answer important research questions and also generate interesting hypotheses for future research. However, it is not straightforward to quantify such an association. In this article, we proposed a multiple event approach to profile this association with a temporal correlation function, visualized by a correlation function plot over time with a confidence band. We developed this approach by extending existing methodology in the recurrent event literature. The approach was shown to be generally unbiased and could be a useful tool for data visualization and inference. We demonstrated the use of this method with data from a real clinical trial. Although this approach was developed to evaluate the association between patient-reported outcomes and adverse events, it can also be used to evaluate the association of any two endpoints that can be translated into time-to-event endpoints.

13.
In May 2012, the Committee for Medicinal Products for Human Use issued a concept paper on the need to review the points to consider document on multiplicity issues in clinical trials. In preparation for the release of the updated guidance document, Statisticians in the Pharmaceutical Industry held a one-day expert group meeting in January 2013. Topics debated included multiplicity and the drug development process, the usefulness and limitations of newly developed strategies to deal with multiplicity, multiplicity issues arising from interim decisions and multiregional development, and the need for simultaneous confidence intervals (CIs) corresponding to multiple test procedures. A clear message from the meeting was that multiplicity adjustments need to be considered when the intention is to make a formal statement about efficacy or safety based on hypothesis tests. Statisticians have a key role when designing studies to assess what adjustment really means in the context of the research being conducted. More thought during the planning phase needs to be given to multiplicity adjustments for secondary endpoints, given that these are increasingly important in differentiating products in the market place. No consensus was reached on the role of simultaneous CIs in the context of superiority trials. It was argued that unadjusted intervals should be employed, as the primary purpose of the intervals is estimation, while the purpose of hypothesis testing is to formally establish an effect. The opposing view was that CIs should correspond to the test decision whenever possible.

14.
In randomized clinical trials with time-to-event outcomes, the hazard ratio is commonly used to quantify the treatment effect relative to a control. The Cox regression model is commonly used to adjust for relevant covariates to obtain more accurate estimates of the hazard ratio between treatment groups. However, it is well known that the treatment hazard ratio based on a covariate-adjusted Cox regression model is conditional on the specific covariates and differs from the unconditional hazard ratio, which is an average across the population. Therefore, covariate-adjusted Cox models cannot be used when unconditional inference is desired. In addition, the covariate-adjusted Cox model requires the relatively strong assumption of proportional hazards for each covariate. To overcome these challenges, a nonparametric randomization-based analysis of covariance method was proposed to estimate covariate-adjusted hazard ratios for multivariate time-to-event outcomes. However, the performance (power and type I error rate) of this method has not been evaluated empirically. Although the method is derived for multivariate situations, for most registration trials the primary endpoint is a univariate outcome. Therefore, this approach is applied to univariate outcomes, and its performance is evaluated through a simulation study in this paper. Stratified analysis is also investigated. As an illustration of the method, we also apply the covariate-adjusted and unadjusted analyses to an oncology trial.

15.
The analysis of recurrent event data in clinical trials presents a number of difficulties. The statistician is faced with issues of event dependency, composite endpoints, unbalanced follow-up times, and informative dropout. It is not unusual, therefore, for statisticians charged with providing reliable and valid analyses to need to derive new methods specific to the clinical indication under investigation. One method is proposed that appears to have possible advantages over those often used in the analysis of recurrent event data in clinical trials. Based on an approach that counts periods of time with events instead of single event counts, the proposed method adjusts for patient time on study and incorporates heterogeneity by estimating an individual per-patient risk of experiencing a morbid event. Monte Carlo simulations using real clinical study data demonstrate that the proposed method consistently outperforms other measures of morbidity.
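A sketch of the general flavour of such a measure, under assumed definitions (30-day periods, a rank test for the group comparison, simulated follow-up and event times); the paper's actual per-patient risk estimator is not reproduced here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(13)

def periods_with_events(event_times, followup, period=30.0):
    """Per-patient morbidity score: the proportion of completed on-study periods
    (e.g., 30-day windows) containing at least one event, which adjusts the raw
    event count for unequal follow-up."""
    n_periods = max(int(np.floor(followup / period)), 1)
    bins = np.floor(np.asarray(event_times) / period).astype(int)
    hit = len(set(b for b in bins if b < n_periods))
    return hit / n_periods

def simulate_patient(rate, followup):
    """Simulate recurrent event times from a homogeneous Poisson process."""
    times, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / rate)
        if t > followup:
            return times
        times.append(t)

# hypothetical data: up to 360 days of follow-up, treatment halves the event rate
followups = rng.uniform(90, 360, size=200)
scores_ctrl = [periods_with_events(simulate_patient(1 / 60, f), f) for f in followups[:100]]
scores_trt = [periods_with_events(simulate_patient(1 / 120, f), f) for f in followups[100:]]
print(stats.mannwhitneyu(scores_trt, scores_ctrl, alternative='less'))
```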

16.
17.
In this article, it is shown how to compute, approximately, the probabilities of Type I error and Type II error of sequential Bayesian procedures for testing one-sided null hypotheses. First, some theoretical results are obtained, and then an algorithm is developed for applying these results. The prior predictive density plays a central role in this study.
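As a point of reference, the frequentist error probabilities of a sequential Bayesian rule can always be approximated by brute-force simulation, which is the kind of computation the article's approximations replace; a minimal sketch for a one-sided test of a normal mean with a conjugate prior and a hypothetical posterior-probability stopping rule:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(21)

def bayes_sequential_reject_prob(theta_true, n_sims=20000, looks=(20, 40, 60, 80, 100),
                                 prior_mean=0.0, prior_sd=1.0, sigma=1.0, threshold=0.98):
    """Probability that a sequential Bayesian test of H0: theta <= 0 stops and
    'rejects' (posterior P(theta > 0 | data) > threshold) at some look.
    theta_true = 0 gives the Type I error at the boundary of H0;
    theta_true > 0 gives power, i.e., one minus the Type II error."""
    rejections = 0
    for _ in range(n_sims):
        data = rng.normal(theta_true, sigma, size=max(looks))
        for n in looks:
            # conjugate normal-normal posterior after n observations
            post_prec = 1 / prior_sd**2 + n / sigma**2
            post_mean = (prior_mean / prior_sd**2 + data[:n].sum() / sigma**2) / post_prec
            post_sd = np.sqrt(1 / post_prec)
            if 1 - stats.norm.cdf(0, loc=post_mean, scale=post_sd) > threshold:
                rejections += 1
                break
    return rejections / n_sims

print("Type I error (theta = 0)  :", bayes_sequential_reject_prob(0.0))
print("Power       (theta = 0.3):", bayes_sequential_reject_prob(0.3))
```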

18.
For clinical trials with time-to-event endpoints, predicting the accrual of the events of interest with precision is critical in determining the timing of interim and final analyses. For example, overall survival (OS) is often chosen as the primary efficacy endpoint in oncology studies, with planned interim and final analyses at a pre-specified number of deaths. Often, correlated surrogate information, such as time-to-progression (TTP) and progression-free survival, is also collected as a secondary efficacy endpoint. It would be appealing to borrow strength from the surrogate information to improve the precision of the analysis time prediction. Currently available methods in the literature for predicting analysis timings do not consider utilizing the surrogate information. In this article, using OS and TTP as an example, a general parametric model for OS and TTP is proposed, with the assumption that disease progression could change the course of overall survival. Progression-free survival, which is related to both OS and TTP, is handled separately, as it can be derived from OS and TTP. The authors develop a prediction procedure using a Bayesian method and provide detailed implementation strategies under certain assumptions. Simulations are performed to evaluate the performance of the proposed method. An application to a real study is also provided.

19.
In monitoring clinical trials, the question of futility, or whether the data thus far suggest that the results at the final analysis are unlikely to be statistically successful, is regularly of interest over the course of a study. However, the opposite viewpoint of whether the study is sufficiently demonstrating proof of concept (POC) and should continue is a valuable consideration and ultimately should be addressed with high POC power so that a promising study is not prematurely terminated. Conditional power is often used to assess futility, and this article interconnects the ideas of assessing POC for the purpose of study continuation with conditional power, while highlighting the importance of the POC type I error and the POC type II error for study continuation or not at the interim analysis. Methods for analyzing subgroups motivate the interim analyses to maintain high POC power via an adjusted interim POC significance level criterion for study continuation or testing against an inferiority margin. Furthermore, two versions of conditional power based on the assumed effect size or the observed interim effect size are considered. Graphical displays illustrate the relationship of the POC type II error for premature study termination to the POC type I error for study continuation and the associated conditional power criteria.
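The two conditional power versions mentioned above follow from the standard B-value formulation; a short sketch with hypothetical interim results (the POC-style continuation threshold shown at the end is illustrative only):

```python
from scipy import stats

def conditional_power(z_interim, info_frac, drift, alpha_final=0.025):
    """Conditional power of crossing the final one-sided critical value, given
    the interim z-statistic at information fraction t and a drift parameter
    theta = E[Z(1)] describing the assumed future effect."""
    z_alpha = stats.norm.ppf(1 - alpha_final)
    b_t = z_interim * info_frac**0.5                      # B-value at the interim
    return stats.norm.cdf((b_t + drift * (1 - info_frac) - z_alpha)
                          / (1 - info_frac) ** 0.5)

# hypothetical interim look: z = 1.1 at 50% information, trial designed for 90% power
t, z_t = 0.5, 1.1
drift_design = stats.norm.ppf(0.975) + stats.norm.ppf(0.90)   # assumed (design) effect
drift_observed = z_t / t**0.5                                  # observed interim trend
print(f"CP under design effect  : {conditional_power(z_t, t, drift_design):.2f}")
print(f"CP under observed trend : {conditional_power(z_t, t, drift_observed):.2f}")
# a POC-style rule might, for example, continue the study only if the chosen
# conditional power version exceeds a prespecified threshold such as 0.20
```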

20.
This paper describes how a multistage analysis strategy for a clinical trial can assess a sequence of hypotheses that pertain to successively more stringent criteria for excess risk exclusion or superiority for a primary endpoint with a low event rate. The criteria for assessment can correspond to excess risk of an adverse event or to a guideline for sufficient efficacy, as in the case of vaccine trials. The proposed strategy is implemented through a set of interim analyses, and success for one or more of the less stringent criteria at an interim analysis can be the basis for a regulatory submission, whereas the clinical trial continues to accumulate information to address the more stringent, but not futile, criteria. Simulations show that the proposed strategy is satisfactory for control of type I error, sufficient power, and potential success at interim analyses when the true relative risk is more favorable than assumed for the planned sample size.
