Similar Documents
 20 similar documents found.
1.
In organ transplantation, placebo-controlled clinical trials are not possible for ethical reasons, and hence non-inferiority trials are used to evaluate new drugs. Patients with a transplanted kidney typically receive three to four immunosuppressant drugs to prevent organ rejection. In the non-inferiority trial described here, the dose of one of these immunosuppressants is changed and another is replaced by an investigational drug. This test regimen is compared with the active control regimen. Justifying the non-inferiority margin is challenging because the putative placebo has never been studied in a clinical trial. We propose the use of a random-effects meta-regression in which each immunosuppressant component of the regimen enters as a covariate. This allows us to make inferences about the difference between the putative placebo and the active control, from which various methods can then be used to derive the non-inferiority margin. A hybrid of the 95/95 and synthesis approaches is suggested. Data from 51 trials with a total of 17,002 patients were used in the meta-regression. Our approach was motivated by a recent large confirmatory trial in kidney transplantation. The results and the methodological documents of this evaluation were submitted to the Food and Drug Administration, which accepted our proposed non-inferiority margin and our rationale.
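As a rough illustration of how a margin might be derived from such a meta-analytic estimate, the sketch below combines a fixed-margin ("95/95"-style) calculation with a Rothmann-type synthesis statistic. The effect estimates, standard errors and the 50% retention fraction are hypothetical placeholders, not values from the trial or the meta-regression described above.

```python
# Minimal sketch of deriving a non-inferiority margin from a meta-analytic
# estimate of the active control effect versus a putative placebo.
# All numbers below are hypothetical placeholders, not values from the paper.
import numpy as np
from scipy.stats import norm

# Hypothetical meta-regression output: log hazard ratio of putative placebo
# versus active control (positive = placebo worse), with its standard error.
log_effect_hat = 0.40
se_effect = 0.10

# Fixed-margin ("95/95") step: take the conservative bound of the 95% CI of
# the control-vs-placebo effect, then require retention of a fraction of it.
retention = 0.50                                # fraction of control effect to preserve
lower_95 = log_effect_hat - norm.ppf(0.975) * se_effect
margin_fixed = (1.0 - retention) * lower_95     # NI margin on the log scale

# Synthesis step: fold the uncertainty of the historical estimate directly
# into the test statistic for the new-trial comparison (test vs control).
def synthesis_z(log_hr_test_vs_control, se_trial):
    """Z-statistic testing that the test drug retains `retention` of the
    control effect, combining trial and historical variances."""
    num = (1.0 - retention) * log_effect_hat - log_hr_test_vs_control
    den = np.sqrt(se_trial**2 + ((1.0 - retention) * se_effect)**2)
    return num / den

print(f"fixed-margin NI margin (log HR): {margin_fixed:.3f}")
print(f"synthesis z for HR=1.10, SE=0.08: {synthesis_z(np.log(1.10), 0.08):.2f}")
```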

2.
Two multiple decision procedures are proposed for unifying the non-inferiority, equivalence and superiority tests in a comparative clinical trial of a new drug against an active control. The first is a confidence-set method with confidence coefficient 0.95 that improves on the conventional 0.95 confidence interval with respect to the producer's risk and, in some cases, the consumer's risk. It requires the region to include 0 as well as to clear the non-inferiority margin, so that a trial that tries to prove non-inferiority of an actually inferior drug by enrolling a somewhat large number of subjects and adopting an inappropriately large non-inferiority margin will be unsuccessful. The second is a closed testing procedure that combines one- and two-sided tests by applying the partitioning principle and justifies the switching procedure by unifying the non-inferiority, equivalence and superiority tests. Regarding non-inferiority in particular, the proposed method simultaneously justifies the old Japanese statistical guideline (one-sided 0.05 test) and the international guideline ICH E9 (one-sided 0.025 test). The method is particularly attractive in that it grades the strength of the evidence for the relative efficacy of the test drug against the control at five levels according to what the clinical trial achieves. The meaning of the non-inferiority test and the rationale for switching from it to a superiority test are also discussed.
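The decision rule sketched below is a simplified, hypothetical rendering of the switching idea: a single 0.95 confidence interval is checked against the non-inferiority margin, and, as described for the confidence-set method above, non-inferiority is declared only when the interval also contains 0. The paper's full multiple decision process and five-level grading of evidence are not reproduced.

```python
# Simplified sketch of confidence-interval-based switching between
# non-inferiority and superiority for a difference in means (test minus
# control, larger values better). All inputs are hypothetical.
from scipy.stats import norm

def classify(diff_hat, se, margin, alpha=0.05):
    """Classify the trial outcome from a two-sided (1 - alpha) confidence interval."""
    z = norm.ppf(1 - alpha / 2)
    lo, hi = diff_hat - z * se, diff_hat + z * se
    if lo > 0:
        return "superiority", (round(lo, 3), round(hi, 3))
    # stricter confidence-set style rule: clear the margin AND include 0
    if lo > -margin and hi >= 0:
        return "non-inferiority", (round(lo, 3), round(hi, 3))
    return "not demonstrated", (round(lo, 3), round(hi, 3))

# hypothetical example: estimated difference 0.8, SE 0.5, NI margin 1.5
print(classify(diff_hat=0.8, se=0.5, margin=1.5))
```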

3.
Superiority claims of improved efficacy are the backbone of clinical development of new therapies. However, not every new therapy in development allows for such a claim. Some therapies do not aim to improve efficacy further but instead concentrate on important aspects of safety or convenience. Such improvements can be equally important to patients, and development strategies should be available for such compounds. A three-arm design with placebo, active control and experimental treatment may be viewed as the gold standard for such compounds; however, it may be difficult, if not impossible, to add a placebo arm in certain diseases. In such situations, non-inferiority designs are the only development option left. This paper highlights some of the key issues with such designs in practice and reports experience from two studies in different therapeutic areas intended for regulatory submission.

4.
Background: Inferentially seamless studies are one of the best-known adaptive trial designs. Statistical inference for these studies is a well-studied problem. Regulatory guidance suggests that the statistical issues associated with study conduct are not as well understood. Some of these issues arise from the need to pre-specify the phase III design early and from the absence of sponsor access to unblinded data. Before deciding to choose a seamless IIb/III design for their programme, statisticians should consider whether these pitfalls will be an issue. Methods: We consider four case studies whose designs met with varying degrees of success. We explore the reasons for this variation to identify characteristics of drug development programmes that lend themselves well to inferentially seamless trials and other characteristics that warn of difficulties. Results: Seamless studies require increased upfront investment and planning so that the phase III design can be specified at the outset of phase II. Pivotal, inferentially seamless studies are unlikely to allow meaningful sponsor access to unblinded data before study completion, which limits a sponsor's ability to reflect new information in the phase III portion. Conclusions: When few clinical data have been gathered about a drug, phase II data will answer many unresolved questions. Committing to phase III plans and study designs before phase II begins introduces extra risk to drug development. However, seamless pivotal studies may be an attractive option when the clinical setting and development programme allow, for example, when revisiting dose selection. Copyright © 2014 John Wiley & Sons, Ltd.

5.
Since the first properly randomized controlled trial of streptomycin for pulmonary tuberculosis in the late 1940s, society has made great advances in combating bacterial infections and in developing vaccines to prevent such infections. One constant challenge that anti-bacterial clinical development must grapple with is determining the potential benefit of newer agents over existing agents in an era when anti-bacterial resistance is a constantly shifting target. By contrast, the development of anti-fungal agents went into high gear only in the late 1980s and early 1990s, in an effort to manage fungal infections in cancer patients receiving chemotherapy, especially patients with haematologic malignancies, bone marrow transplantation, or lymphoma. The pursuit of anti-fungal agents intensified with the AIDS epidemic. The evaluation of anti-fungal agents often faces complications brought on by competing risks in situations where the underlying infections are associated with a high chance of mortality or severe morbidity. In this paper, we use four case studies to illustrate some of the challenges and opportunities in developing anti-bacterial and anti-fungal agents. The illustrations touch not only on statistical issues but also on issues related to the availability of new anti-bacterials in the future. Some suggestions on how statisticians could take advantage of the opportunities and respond to the challenges are also included. Copyright © 2005 John Wiley & Sons, Ltd.

6.
In oncology, it may not always be possible to evaluate the efficacy of new medicines in placebo-controlled trials. Furthermore, while some newer, biologically targeted anti-cancer treatments may be expected to deliver therapeutic benefit in terms of better tolerability or improved symptom control, they may not always be expected to provide increased efficacy relative to existing therapies. This naturally leads to the use of active-control, non-inferiority trials to evaluate such treatments. In recent evaluations of anti-cancer treatments, the non-inferiority margin has often been defined in terms of demonstrating that at least 50% of the active control effect has been retained by the new drug, using methods such as those described by Rothmann et al. (Statistics in Medicine 2003; 22:239-264) and Wang and Hung (Controlled Clinical Trials 2003; 24:147-155). However, this approach can lead to prohibitively large clinical trials, and it results in a tendency to dichotomize the trial outcome as either 'success' or 'failure', which oversimplifies interpretation. With relatively modest modification, these methods can be used to define a stepwise approach to design and analysis. In the first design step, the trial is sized to show indirectly that the new drug would have beaten placebo; in the second analysis step, the probability that the new drug is superior to placebo is assessed and, if sufficiently high, in the third and final step the relative efficacy of the new drug to control is assessed on a continuum of effect retention via an 'effect retention likelihood plot'. This stepwise approach is likely to provide a more complete assessment of relative efficacy so that the value of new treatments can be better judged.
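The snippet below is a schematic stand-in for an 'effect retention likelihood plot': over a grid of retention fractions, it evaluates a synthesis-type statistic under a normal approximation. All inputs are hypothetical, and the exact construction used in the paper may differ.

```python
# Schematic stand-in for an "effect retention likelihood plot": for a grid of
# retention fractions rho, evaluate how compatible the trial data are with the
# hypothesis that the new drug retains at least rho of the control effect.
import numpy as np
from scipy.stats import norm
import matplotlib.pyplot as plt

theta_hist, se_hist = 0.35, 0.08     # hypothetical control-vs-placebo log HR (placebo worse)
theta_trial, se_trial = 0.05, 0.10   # hypothetical trial log HR, test vs control

rho = np.linspace(0.0, 1.0, 101)
# synthesis-style z-statistic for H0: the test drug retains less than rho of the effect
z = ((1 - rho) * theta_hist - theta_trial) / np.sqrt(se_trial**2 + ((1 - rho) * se_hist)**2)
support = norm.cdf(z)                # one-sided "support" for retention >= rho

plt.plot(rho, support)
plt.xlabel("retention fraction of the active control effect")
plt.ylabel("one-sided support (normal approximation)")
plt.axhline(0.975, linestyle="--")
plt.show()
```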

7.
Adaptive designs for multi-armed clinical trials have become increasingly popular recently because of their potential to shorten development times and to increase patient response. However, developing response-adaptive designs that offer patient benefit while ensuring that the resulting trial provides a statistically rigorous and unbiased comparison of the treatments included is highly challenging. In this paper, the theory of multi-armed bandit problems is used to define near-optimal adaptive designs in the context of a clinical trial with a normally distributed endpoint of known variance. We report the operating characteristics (type I error, power, bias) and patient benefit of these approaches and of alternative designs using simulation studies based on an ongoing trial. These results are then compared with those recently published in the context of Bernoulli endpoints. Many limitations and advantages are similar in both cases, but there are also important differences, especially with respect to type I error control. This paper proposes a simulation-based testing procedure to correct for the type I error inflation that bandit-based and adaptive rules can induce.
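To illustrate the simulation-based correction, the sketch below calibrates the critical value of the final z-test for a Thompson-sampling-like two-arm allocation rule by simulating trials under the null hypothesis. The allocation rule, sample size and number of simulations are illustrative assumptions, not the designs studied in the paper.

```python
# Minimal sketch of simulation-based type I error calibration for a
# response-adaptive (Thompson-sampling-like) two-arm trial with normal
# outcomes of known variance. Design parameters are illustrative only.
import numpy as np

rng = np.random.default_rng(2024)
N, sigma = 100, 1.0

def run_trial(mu0, mu1):
    """Allocate each patient by a posterior draw of which arm is better,
    then return the final z-statistic comparing the arm means."""
    y = [[], []]
    for _ in range(N):
        # flat-prior normal posteriors; fall back to random allocation early on
        if min(len(y[0]), len(y[1])) < 2:
            arm = int(rng.integers(2))
        else:
            draws = [rng.normal(np.mean(y[a]), sigma / np.sqrt(len(y[a]))) for a in range(2)]
            arm = int(draws[1] > draws[0])
        y[arm].append(rng.normal([mu0, mu1][arm], sigma))
    n0, n1 = len(y[0]), len(y[1])
    return (np.mean(y[1]) - np.mean(y[0])) / (sigma * np.sqrt(1 / n0 + 1 / n1))

# calibrate: take the null critical value from simulation, not the N(0,1) quantile
null_z = np.array([run_trial(0.0, 0.0) for _ in range(1000)])
crit = np.quantile(null_z, 0.975)
print(f"simulation-calibrated critical value: {crit:.2f} (vs 1.96 nominal)")
```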

8.
The development of a new pneumococcal conjugate vaccine involves assessing the responses of the new serotypes included in the vaccine. The World Health Organization guidance states that the response from each new serotype in the new vaccine should be compared with the aggregate response from the existing vaccine to evaluate non-inferiority. However, no details are provided on how to define and estimate the aggregate response and what methods to use for non-inferiority comparisons. We investigate several methods to estimate the aggregate response based on binary data including simple average, model-based, and lowest response methods. The response of each new serotype is then compared with the estimated aggregate response for non-inferiority. The non-inferiority test p-value and confidence interval are obtained from Miettinen and Nurminen's method, using an effective sample size. The methods are evaluated using simulations and demonstrated with a real clinical trial example.
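The sketch below illustrates the simple-average and lowest-response aggregate estimates with hypothetical serotype data; for brevity it uses a plain Wald interval for the non-inferiority comparison, as a simplified stand-in for the Miettinen and Nurminen method with an effective sample size described above.

```python
# Minimal sketch of comparing one new serotype's response rate with an
# aggregate response from the existing vaccine. All data are hypothetical.
import numpy as np
from scipy.stats import norm

# hypothetical seroresponse counts and sample sizes for existing serotypes
existing = {"4": (460, 500), "6B": (430, 500), "9V": (450, 500)}
new_serotype = (440, 500)          # responders, n for one new serotype
margin = 0.10                      # NI margin on the difference in proportions

rates = np.array([x / n for x, n in existing.values()])
aggregate_avg = rates.mean()       # simple-average aggregate response
aggregate_low = rates.min()        # lowest-response aggregate

p_new = new_serotype[0] / new_serotype[1]
se = np.sqrt(p_new * (1 - p_new) / new_serotype[1]
             + aggregate_avg * (1 - aggregate_avg) / sum(n for _, n in existing.values()))
lower = (p_new - aggregate_avg) - norm.ppf(0.975) * se   # Wald stand-in, not MN

print(f"aggregate (average) = {aggregate_avg:.3f}, lowest = {aggregate_low:.3f}")
print(f"non-inferiority concluded (Wald stand-in): {lower > -margin}")
```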

9.
We present likelihood methods for defining the non-inferiority margin and measuring the strength of evidence in non-inferiority trials using the 'fixed-margin' framework. Likelihood methods are used to (1) evaluate and combine the evidence from historical trials to define the non-inferiority margin, (2) assess and report the smallest non-inferiority margin supported by the data, and (3) assess potential violations of the constancy assumption. Data from six aspirin-controlled trials for acute coronary syndrome and data from an active-controlled trial for acute coronary syndrome, the Organisation to Assess Strategies for Ischemic Syndromes (OASIS-2) trial, are used for illustration. The likelihood framework offers important theoretical and practical advantages when measuring the strength of evidence in non-inferiority trials. Besides eliminating the influence of sample spaces and prior probabilities on the 'strength of evidence in the data', the likelihood approach maintains good frequentist properties. Violations of the constancy assumption can be assessed in the likelihood framework when it is appropriate to assume a unifying regression model for the trial data and a constant control effect, including a control rate parameter and a placebo rate parameter, across the historical placebo-controlled trials and the non-inferiority trial. In situations where the statistical non-inferiority margin is data driven, lower likelihood support interval limits provide plausibly conservative candidate margins.
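As a small illustration of the support-interval idea, the sketch below computes a 1/8 likelihood support interval and a likelihood ratio under a normal approximation to the likelihood of a log relative risk; the effect estimate and standard error are hypothetical, not the aspirin or OASIS-2 data.

```python
# Minimal sketch of a likelihood support interval for a treatment effect,
# under a normal approximation to the log relative-risk likelihood.
import numpy as np

theta_hat, se = -0.25, 0.09        # hypothetical log relative risk and its SE
k = 8                              # support level (likelihood ratio 1/k)

# L(theta)/L(theta_hat) >= 1/k  <=>  |theta - theta_hat| <= se * sqrt(2 ln k)
half_width = se * np.sqrt(2 * np.log(k))
print(f"1/{k} likelihood support interval for log RR: "
      f"({theta_hat - half_width:.3f}, {theta_hat + half_width:.3f})")

def lr(theta_a, theta_b):
    """Likelihood ratio L(theta_a)/L(theta_b) under the normal approximation."""
    return np.exp(-((theta_hat - theta_a) ** 2 - (theta_hat - theta_b) ** 2) / (2 * se ** 2))

# strength of evidence for "no effect" relative to the maximum likelihood estimate
print(f"LR of no effect vs the MLE: {lr(0.0, theta_hat):.3g}")
```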

10.
The primary objective of a phase I clinical trial is to identify the maximum tolerated dose (MTD) of a drug, and the accuracy of the MTD estimate affects the results of the subsequent phase II and phase III clinical trials. Phase I trials of anti-tumour drugs are characterized by being conducted directly in patients and by small sample sizes, which poses challenges for constructing statistical designs that estimate the MTD accurately while providing safety safeguards. Three commonly used phase I trial designs are reviewed: the 3+3 design, the CRM design, and the mTPI design. The 3+3 design is the widely used traditional method, while the latter two are currently popular Bayesian adaptive trial designs. Through extensive simulation studies, the three methods are comprehensively examined with respect to optimal allocation, safety, and the accuracy of MTD estimation, and, taking the situation in China into account, it is concluded that the mTPI design is the most suitable phase I clinical trial design to recommend.
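For reference, the snippet below encodes the classical 3+3 escalation rule mentioned above; the CRM and mTPI designs require model-based machinery that is not reproduced here.

```python
# Minimal sketch of the classical 3+3 dose-escalation rule. It returns the
# decision for one dose level given DLT counts from cohorts of 3 patients.
def three_plus_three(dlt_first_cohort, dlt_second_cohort=None):
    """Decision for one dose level under the standard 3+3 rule."""
    if dlt_second_cohort is None:             # only the first 3 patients treated
        if dlt_first_cohort == 0:
            return "escalate"
        if dlt_first_cohort == 1:
            return "add 3 more patients at this dose"
        return "de-escalate; MTD is below this dose"
    total = dlt_first_cohort + dlt_second_cohort
    if total <= 1:                            # at most 1 of 6 patients with a DLT
        return "escalate"
    return "de-escalate; MTD is below this dose"

print(three_plus_three(1, 0))   # 1/3 then 0/3 -> escalate
print(three_plus_three(2))      # 2/3 -> de-escalate
```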

11.
Multiple-arm dose-response superiority trials have been widely studied for continuous and binary endpoints, while non-inferiority designs have been studied more recently in two-arm trials. In this paper, a unified asymptotic formulation of the sample size calculation for k-arm (k>0) trials with different endpoints (continuous, binary and survival) is derived for both superiority and non-inferiority designs. The proposed method covers the sample size calculation for single-arm and k-arm (k≥2) designs with survival endpoints, which had not been covered in the statistical literature. A simple, closed form for power and sample size calculations is derived from a contrast test. Application examples are provided. The effect of the choice of contrast on the power is discussed, and a SAS program for sample size calculation is provided and ready to use.
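A minimal sketch of the contrast-based calculation for a k-arm trial with a normally distributed endpoint is given below; the means, contrast coefficients and standard deviation are illustrative, and the unified formulation described above also covers binary and survival endpoints.

```python
# Minimal sketch of a contrast-based sample size calculation for a k-arm
# trial with a normally distributed endpoint (illustrative values only).
import numpy as np
from scipy.stats import norm

def n_per_arm(mu, contrast, sigma, margin=0.0, alpha=0.025, power=0.9):
    """Smallest equal per-arm n so the one-sided contrast test has the target power.

    Tests H0: sum(c_i * mu_i) <= margin against the one-sided alternative,
    assuming equal allocation and common standard deviation sigma.
    """
    mu, c = np.asarray(mu, float), np.asarray(contrast, float)
    effect = c @ mu - margin
    if effect <= 0:
        raise ValueError("contrast effect must exceed the margin")
    z = norm.ppf(1 - alpha) + norm.ppf(power)
    n = (z * sigma) ** 2 * np.sum(c ** 2) / effect ** 2
    return int(np.ceil(n))

# linear dose-response contrast across 4 arms (placebo plus 3 doses)
print(n_per_arm(mu=[0.0, 0.3, 0.5, 0.6], contrast=[-3, -1, 1, 3], sigma=1.2))
```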

12.
In a non-inferiority trial assessing a new investigational treatment, an indirect comparison with placebo, made through the active control in the current trial, may need to be considered. We can therefore use the fact that there is a common active control in the comparisons of the investigational treatment and placebo. In analysing a non-inferiority trial, the ABC of Assay sensitivity, Bias minimisation and Constancy assumption needs to be considered. It is highlighted how the ABC assumptions can potentially fail when there is placebo creep or a shift in the patient population. In this situation, the belief about the placebo response, expressed as a prior probability in a Bayesian formulation, can be used together with the observed treatment effects to set the non-inferiority limit.

13.
14.
Two-stage designs offer substantial advantages for early phase II studies. The interim analysis following the first stage allows the study to be stopped for futility, or more positively, it might lead to early progression to the trials needed for late phase II and phase III. If the study is to continue to its second stage, then there is an opportunity for a revision of the total sample size. Two-stage designs have been implemented widely in oncology studies in which there is a single treatment arm and patient responses are binary. In this paper the case of two-arm comparative studies in which responses are quantitative is considered. This setting is common in therapeutic areas other than oncology. It will be assumed that observations are normally distributed, but that there is some doubt concerning their standard deviation, motivating the need for sample size review. The work reported has been motivated by a study in diabetic neuropathic pain, and the development of the design for that trial is described in detail.
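The sketch below illustrates the sample size review step for a two-arm comparison of normally distributed responses: the total sample size is recomputed with the standard deviation estimated at the interim. All numbers are illustrative, not those of the diabetic neuropathic pain study.

```python
# Minimal sketch of a sample size review after stage 1 of a two-stage,
# two-arm trial with a quantitative endpoint: re-estimate the standard
# deviation and recompute the total sample size (illustrative values only).
import numpy as np
from scipy.stats import norm

def total_n(delta, sd, alpha=0.025, power=0.9):
    """Total sample size (both arms, 1:1) for a one-sided two-sample z-test."""
    z = norm.ppf(1 - alpha) + norm.ppf(power)
    n_per_arm = 2 * (z * sd / delta) ** 2
    return int(np.ceil(n_per_arm)) * 2

planned_sd, target_diff = 2.0, 1.0
print("planned total n:", total_n(target_diff, planned_sd))

# after stage 1: the pooled SD estimated from the interim data turns out larger
stage1_sd = 2.6
print("revised total n:", total_n(target_diff, stage1_sd))
```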

15.
Tuberculosis (TB) is one of the biggest killers among infectious diseases worldwide. Together with identifying drugs that can benefit patients, a key challenge in TB is optimising the duration of treatment. While the conventional duration of treatment in TB is 6 months, there is evidence that shorter durations might be as effective while being associated with fewer side effects and better adherence. Building on a recent proposal for an adaptive order-restricted superiority design that exploits the ordering assumptions across different durations of the same drug, we propose a non-inferiority adaptive design (non-inferiority being the comparison typically used in TB trials) that makes effective use of the ordering assumption. Together with the general construction of the hypothesis tests and expressions for the type I and type II errors, we focus on how the novel design was proposed for a TB trial concept. We consider a number of practical aspects, such as the choice of design parameters, randomisation ratios, and the timing of the interim analyses, and how these were discussed with the clinical team.

16.
We consider outcome-adaptive phase II or phase II/III trials to identify the best treatment for further development. Unlike many other multi-arm multi-stage designs, we borrow best-arm identification approaches from the multi-armed bandit (MAB) literature developed for machine learning and adapt them for clinical trial purposes. Best-arm identification in MAB focuses on the error rate of identification at the end of the trial, but we are also interested in the cumulative benefit to trial patients, for example, the proportion of patients treated with the best treatment. In particular, we consider Top-Two Thompson Sampling (TTTS) and propose an acceleration approach for better performance in drug development scenarios, in which the sample size is much smaller than that considered in machine learning applications. We also propose a variant of TTTS (TTTS2) which is simpler, easier to implement, and has comparable performance in small-sample settings. An extensive simulation study was conducted to evaluate the performance of the proposed approach in multiple typical drug development scenarios.
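The sketch below shows the basic Top-Two Thompson Sampling step for binary responses with Beta(1,1) priors; the acceleration approach and the TTTS2 variant proposed in the paper are not reproduced, and the response rates and tuning parameter are hypothetical.

```python
# Minimal sketch of Top-Two Thompson Sampling (TTTS) for best-arm
# identification with binary responses and Beta(1, 1) priors.
import numpy as np

rng = np.random.default_rng(7)
true_p = [0.30, 0.40, 0.55]                 # hypothetical response rates
succ = np.ones(len(true_p))                 # prior pseudo-successes
fail = np.ones(len(true_p))                 # prior pseudo-failures
beta_top = 0.5                              # probability of keeping the leader

for _ in range(120):                        # hypothetical small trial
    leader = int(np.argmax(rng.beta(succ, fail)))
    arm = leader
    if rng.random() >= beta_top:
        for _ in range(1000):               # re-sample until a challenger leads
            arm = int(np.argmax(rng.beta(succ, fail)))
            if arm != leader:
                break
    response = rng.random() < true_p[arm]
    succ[arm] += response
    fail[arm] += 1 - response

post_mean = succ / (succ + fail)
print("posterior mean response rates:", np.round(post_mean, 3))
print("identified best arm:", int(np.argmax(post_mean)))
```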

17.
In modern oncology drug development, adaptive designs have been proposed to identify the recommended phase 2 dose. Conventional dose-finding designs focus on identifying the maximum tolerated dose (MTD). However, designs that ignore efficacy can put patients at risk by pushing to the MTD. Especially in immuno-oncology and cell therapy, the complex dose-toxicity and dose-efficacy relationships make such MTD-driven designs more questionable. Additionally, it is not uncommon to have data available from other studies that target a similar mechanism of action and patient population. Because of the high variability in phase I trials, it is beneficial to borrow historical study information into the design when available. This helps to increase model efficiency and accuracy and provides dose-specific recommendation rules that avoid toxic dose levels and increase the chance of allocating patients to potentially efficacious dose levels. In this paper, we propose the iBOIN-ET design, which uses prior distributions extracted from historical studies to minimize the probability of decision error. The proposed design utilizes the concept of a skeleton for both toxicity and efficacy data, coupled with a prior effective sample size to control the amount of historical information incorporated. Extensive simulation studies across a variety of realistic settings are reported, including a comparison of the iBOIN-ET design with other model-based and model-assisted approaches. The proposed design demonstrates superior performance in the percentage of trials selecting the correct optimal dose (OD), the average number of patients allocated to the correct OD, and overdose control during the dose-escalation process.

18.
When evaluating potential interventions for cancer prevention, it is necessary to compare benefits and harms. With new study designs, new statistical approaches may be needed to facilitate this comparison. A case in point arose in a proposed genetic substudy of a randomized trial of tamoxifen versus placebo in asymptomatic women who were at high risk for breast cancer. Although the randomized trial showed that tamoxifen substantially reduced the risk of breast cancer, the harms from tamoxifen were serious and some were life threatening. In hopes of finding a subset of women with inherited risk genes who derive greater benefits from tamoxifen, we proposed a nested case–control study to test some trial subjects for various genes and new statistical methods to extrapolate benefits and harms to the general population. An important design question is whether or not the study should target common low-penetrance genes. Our calculations show that useful results are only likely with rare high-penetrance genes.

19.
Incorporating historical data has great potential to improve the efficiency of phase I clinical trials and to accelerate drug development. For model-based designs, such as the continual reassessment method (CRM), this can be conveniently carried out by specifying a "skeleton", that is, the prior estimate of the dose-limiting toxicity (DLT) probability at each dose. In contrast, little work has been done to incorporate historical data into model-assisted designs, such as the Bayesian optimal interval (BOIN), Keyboard, and modified toxicity probability interval (mTPI) designs. This has led to the misconception that model-assisted designs cannot incorporate prior information. In this paper, we propose a unified framework that allows historical data to be incorporated into model-assisted designs. The proposed approach uses the well-established "skeleton" approach, combined with the concept of prior effective sample size, so it is easy to understand and use. More importantly, our approach maintains the hallmark of model-assisted designs, namely simplicity: the dose escalation/de-escalation rule can be tabulated before the trial is conducted. Extensive simulation studies show that the proposed method can effectively incorporate prior information to improve the operating characteristics of model-assisted designs, similarly to model-based designs.
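The sketch below illustrates one way the "skeleton plus prior effective sample size" idea can be written down: skeleton DLT probabilities are converted into Beta pseudo-counts and pooled with observed data. The actual construction of the escalation/de-escalation boundaries in the proposed framework is more involved and is not reproduced here.

```python
# Minimal sketch of folding historical information into dose finding via a
# "skeleton" and a prior effective sample size: each dose's skeleton DLT
# probability and the prior ESS become Beta pseudo-counts that are pooled
# with the observed DLT data (all numbers are hypothetical).
import numpy as np

skeleton = np.array([0.05, 0.10, 0.20, 0.30, 0.45])  # prior DLT probability per dose
prior_ess = 3.0                                       # information worth ~3 patients per dose

a0 = prior_ess * skeleton            # prior pseudo-DLTs
b0 = prior_ess * (1 - skeleton)      # prior pseudo-non-DLTs

# hypothetical observed data: patients treated and DLTs per dose so far
n_obs = np.array([3, 6, 9, 3, 0])
dlt_obs = np.array([0, 1, 2, 2, 0])

post_mean = (a0 + dlt_obs) / (a0 + b0 + n_obs)
for j, (s, m) in enumerate(zip(skeleton, post_mean), start=1):
    print(f"dose {j}: skeleton {s:.2f} -> posterior DLT estimate {m:.2f}")
```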

20.
Response-adaptive (RA) allocation designs can skew the allocation of incoming subjects toward the better-performing treatment group based on the previously accrued responses. While unstable estimators and increased variability can adversely affect adaptation in the early stages of a trial, Bayesian methods can be implemented with decreasingly informative priors (DIPs) to overcome these difficulties. DIPs have previously been used for binary outcomes to constrain adaptation early in the trial yet gradually increase adaptation as subjects accrue. We extend the DIP approach to RA designs for continuous outcomes, primarily in the normal conjugate family, by functionalizing the prior effective sample size to equal the unobserved sample size. We compare this effective-sample-size DIP approach with other DIP formulations. Further, we consider various allocation equations and assess their behavior when DIPs are used. Simulated clinical trials comparing the behavior of these approaches with traditional frequentist and Bayesian RA designs as well as balanced designs show that the natural lead-in approaches maintain improved treatment allocation with lower variability and greater power.
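The sketch below is one possible rendering of the effective-sample-size DIP for normal outcomes with known variance: the prior effective sample size is set to the number of subjects not yet enrolled, so the prior dominates early and fades as data accrue. The allocation equation and all numeric settings are illustrative assumptions, not necessarily the paper's exact formulation.

```python
# Minimal sketch of a decreasingly informative prior (DIP) for response-
# adaptive allocation with normal outcomes and known variance. The prior is
# centred at a common null value with effective sample size equal to the
# number of subjects not yet enrolled (hypothetical settings throughout).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(11)
N_max, sigma, mu_null = 80, 1.0, 0.0
true_mu = [0.0, 0.5]                        # hypothetical arm means

y = [[], []]
for t in range(N_max):
    post_mean, post_var = [], []
    for a in range(2):
        n_a, n0 = len(y[a]), max(N_max - t, 1)   # DIP: prior ESS = unobserved n
        ybar = np.mean(y[a]) if n_a else mu_null
        post_mean.append((n_a * ybar + n0 * mu_null) / (n_a + n0))
        post_var.append(sigma**2 / (n_a + n0))
    # allocate by the posterior probability that arm 1 has the larger mean
    p1 = norm.cdf((post_mean[1] - post_mean[0]) / np.sqrt(post_var[0] + post_var[1]))
    arm = int(rng.random() < p1)
    y[arm].append(rng.normal(true_mu[arm], sigma))

print("final allocation:", len(y[0]), "vs", len(y[1]))
```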
