Similar Literature: 20 results
1.
A three-arm trial including an experimental treatment, an active reference treatment, and a placebo is often used to assess the non-inferiority (NI) with assay sensitivity of an experimental treatment. Various hypothesis-test-based approaches via a fraction or pre-specified margin have been proposed to assess NI with assay sensitivity in a three-arm trial, but little work has been done on confidence intervals in this setting. This paper develops a hybrid approach to construct simultaneous confidence intervals for assessing NI and assay sensitivity in a three-arm trial. For comparison, we present normal-approximation-based and bootstrap-resampling-based simultaneous confidence intervals. Simulation studies show that the hybrid approach with the Wilson score statistic performs better than the other approaches in terms of empirical coverage probability and mesial non-coverage probability. An example is used to illustrate the proposed approaches.
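To make the Wilson-score idea concrete, the following minimal R sketch recombines two single-proportion Wilson intervals into a hybrid (MOVER-type) interval for a single difference of proportions. It is not the authors' simultaneous three-arm procedure; the counts and the 0.10 margin are hypothetical.

```r
# Wilson score interval for one binomial proportion
wilson_ci <- function(x, n, alpha = 0.05) {
  z    <- qnorm(1 - alpha / 2)
  p    <- x / n
  mid  <- (p + z^2 / (2 * n)) / (1 + z^2 / n)
  half <- z * sqrt(p * (1 - p) / n + z^2 / (4 * n^2)) / (1 + z^2 / n)
  c(lower = mid - half, upper = mid + half)
}

# MOVER-style hybrid interval for p1 - p2, recombining the Wilson limits
hybrid_diff_ci <- function(x1, n1, x2, n2, alpha = 0.05) {
  p1 <- x1 / n1; p2 <- x2 / n2
  ci1 <- wilson_ci(x1, n1, alpha); ci2 <- wilson_ci(x2, n2, alpha)
  c(lower = p1 - p2 - sqrt((p1 - ci1[["lower"]])^2 + (ci2[["upper"]] - p2)^2),
    upper = p1 - p2 + sqrt((ci1[["upper"]] - p1)^2 + (p2 - ci2[["lower"]])^2))
}

# Example: experimental 240/300 vs reference 246/300, NI margin 0.10 (hypothetical)
hybrid_diff_ci(240, 300, 246, 300)[["lower"]] > -0.10   # TRUE => conclude NI
```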

2.
A placebo-controlled randomized clinical trial is required to demonstrate that an experimental treatment is superior to its corresponding placebo on multiple coprimary endpoints. This is particularly true in neurology, where clinical trials need to show the superiority of an experimental treatment over a placebo on two coprimary endpoints. Unfortunately, these trials often fail to detect a true treatment effect owing to an unexpectedly high placebo response rate. The sequential parallel comparison design (SPCD) can be used to address this problem, but it has not yet been studied for clinical trials with coprimary endpoints. In this article, we develop a hypothesis-testing method and a corresponding sample size calculation method for the SPCD with two coprimary endpoints. Simulations show that the proposed hypothesis-testing method achieves the nominal type I error rate and power and that the proposed sample size calculation method has adequate power accuracy. The usefulness of our methods is further confirmed by applying them to an SPCD trial with a single primary endpoint in Alzheimer disease-related agitation.
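A much-simplified power simulation for an SPCD with two co-primary binary endpoints might look as follows. The endpoints are treated as independent, the weight w, response rates, and sample sizes are hypothetical, and the weighted statistic is only a sketch of the general SPCD test, not the authors' proposal.

```r
# Weighted SPCD statistic: combine the stage-1 and stage-2 mean differences
spcd_z <- function(g1, g0, h1, h0, w) {
  d1 <- mean(g1) - mean(g0); d2 <- mean(h1) - mean(h0)
  v  <- w^2 * (var(g1) / length(g1) + var(g0) / length(g0)) +
        (1 - w)^2 * (var(h1) / length(h1) + var(h0) / length(h0))
  (w * d1 + (1 - w) * d2) / sqrt(v)
}

# One trial "wins" only if the SPCD test rejects for BOTH endpoints
one_trial <- function(n = 150, p_t = .40, p_p = .22, w = .6, alpha = .025) {
  all(sapply(1:2, function(ep) {
    g1 <- rbinom(n, 1, p_t); g0 <- rbinom(n, 1, p_p)       # stage 1
    m  <- sum(g0 == 0)                                     # placebo non-responders
    h1 <- rbinom(m %/% 2, 1, p_t)                          # stage 2, re-randomized
    h0 <- rbinom(m - m %/% 2, 1, p_p)
    spcd_z(g1, g0, h1, h0, w) > qnorm(1 - alpha)
  }))
}

set.seed(9)
mean(replicate(2000, one_trial()))   # empirical power for the co-primary win
```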

3.
We consider the problem of proving noninferiority when the comparison is based on ordered categorical data. We apply a rank test based on the Wilcoxon–Mann–Whitney effect, where the asymptotic variance is estimated consistently under the alternative, and a small-sample approximation is given. We give the associated 100(1 − α)% confidence interval and propose a formula for sample size determination. Finally, we illustrate the procedure and possible choices of the noninferiority margin using data from a clinical trial. Copyright © 2003 John Wiley & Sons, Ltd.
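The Wilcoxon–Mann–Whitney effect itself is easy to compute. The sketch below pairs the point estimate with a percentile bootstrap interval in place of the paper's consistent variance estimator and small-sample approximation; the ordinal data and the 0.45 margin on the effect scale are hypothetical.

```r
# WMW effect P(X < Y) + 0.5 P(X = Y), estimated from all pairwise comparisons
wmw_effect <- function(x, y)
  mean(outer(x, y, "<")) + 0.5 * mean(outer(x, y, "=="))

set.seed(1)
# Hypothetical ordinal responses on a 5-point scale
ref <- sample(1:5, 60, replace = TRUE, prob = c(.10, .20, .30, .25, .15))
new <- sample(1:5, 60, replace = TRUE, prob = c(.08, .20, .30, .26, .16))

est  <- wmw_effect(ref, new)
boot <- replicate(4000,
  wmw_effect(sample(ref, replace = TRUE), sample(new, replace = TRUE)))
ci <- quantile(boot, c(.025, .975))          # percentile bootstrap 95% CI
ci[[1]] > 0.45   # conclude noninferiority if the lower limit exceeds the margin
```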

4.
Non-inferiority trials aim to demonstrate that an experimental therapy is not unacceptably worse than an active reference therapy already in use. When applicable, a three-arm non-inferiority trial, including an experimental therapy, an active reference therapy, and a placebo, is often recommended to assess the assay sensitivity and internal validity of a trial. In this paper, we share some practical considerations based on our experience from a phase III three-arm non-inferiority trial. First, we discuss the determination of the total sample size and its optimal allocation based on the overall power of the non-inferiority testing procedure and provide ready-to-use R code for implementation. Second, we consider the non-inferiority goal of ‘capturing all possibilities’ and show that it naturally corresponds to a simple two-step testing procedure. Finally, using this two-step non-inferiority testing procedure as an example, we extensively compare commonly used frequentist p-value methods with the Bayesian posterior probability approach. Copyright © 2016 John Wiley & Sons, Ltd.
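One way to compute the overall power of such a two-step procedure (reference superior to placebo, then experimental noninferior to reference) is via the bivariate normal distribution of the two test statistics. The sketch below is not the paper's ready-to-use code: it assumes normal endpoints with known common sigma, a fixed margin, the mvtnorm package, and hypothetical means and 2:2:1 allocation.

```r
library(mvtnorm)

# Joint power of: (1) reference superior to placebo, (2) E noninferior to R,
# each tested at one-sided alpha; the statistics share the reference arm.
power_3arm <- function(n, w, muE, muR, muP, sigma, delta, alpha = 0.025) {
  nE <- n * w[1]; nR <- n * w[2]; nP <- n * w[3]
  z  <- qnorm(1 - alpha)
  m1 <- (muR - muP) / (sigma * sqrt(1 / nR + 1 / nP))          # R vs P
  m2 <- (muE - muR + delta) / (sigma * sqrt(1 / nE + 1 / nR))  # NI of E vs R
  rho <- -(1 / nR) / sqrt((1 / nR + 1 / nP) * (1 / nE + 1 / nR))
  R <- matrix(c(1, rho, rho, 1), 2)
  pmvnorm(lower = c(z, z), upper = rep(Inf, 2), mean = c(m1, m2), corr = R)[1]
}

# Crude search for the smallest total n at a fixed 2:2:1 allocation
w <- c(2, 2, 1) / 5
n <- 50
while (power_3arm(n, w, muE = 0, muR = 0, muP = -1,
                  sigma = 2.5, delta = 0.8) < 0.80) n <- n + 5
n
```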

5.
Noninferiority trials intend to show that a new treatment is 'not worse' than a standard-of-care active control and can be used as an alternative when the new treatment is likely to cause fewer side effects than the active control. In the case of time-to-event endpoints, existing methods of sample size calculation assume either proportional hazards between the two study arms or exponentially distributed lifetimes. In scenarios where these assumptions do not hold, there are few reliable methods for calculating the sample size of a time-to-event noninferiority trial. Additionally, the choice of the non-inferiority margin is obtained from a meta-analysis of prior studies, strongly justifiable 'expert opinion', or a 'well conducted' definitive large-sample study. Thus, when historical data do not support the traditional assumptions, it would not be appropriate to use these methods to design a noninferiority trial. For such scenarios, an alternative method of sample size calculation based on the assumption of proportional time is proposed. This method uses the generalized gamma ratio distribution to perform the sample size calculations. A practical example is discussed, followed by insights on the choice of the non-inferiority margin and the indirect testing of superiority of treatment compared with placebo.
Keywords: generalized gamma; noninferiority; non-proportional hazards; proportional time; relative time; sample size

6.
The FDA released the final guidance on noninferiority trials in November 2016. In noninferiority trials, the validity of the assessment of the efficacy of the test treatment depends on the control treatment's efficacy. Therefore, it is critically important that there be a reliable estimate of the control treatment effect, which is generally obtained from historical trials and often assumed to hold in the current setting (the assay constancy assumption). Validating the constancy assumption requires clinical data, which are typically lacking. The guidance acknowledges that “lack of constancy can occur for many reasons.” We clarify the objectives of noninferiority trials and conclude that correction for bias, rather than assay constancy, is critical to conducting valid noninferiority trials. We propose that assay constancy not be assumed and that discounting or thresholds be used to address concern about loss of historical efficacy. Examples are provided for illustration.
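Discounting can be illustrated with the familiar fixed-margin ("two confidence interval") construction on the log hazard ratio scale, with an extra multiplicative discount for constancy concerns. The historical estimate and the discount factor below are hypothetical; the article itself does not prescribe these particular numbers.

```r
# Fixed-margin construction with extra discounting, on the log-HR scale.
hr <- 0.70; ci <- c(0.60, 0.82)          # historical control-vs-placebo estimate
se <- (log(ci[2]) - log(ci[1])) / (2 * qnorm(0.975))
M1 <- -(log(hr) + qnorm(0.975) * se)     # conservative effect: magnitude of log(0.82)
disc <- 0.80                             # extra discount for constancy concerns
M2 <- 0.5 * disc * M1                    # require retention of 50% of the discounted effect
exp(M2)                                  # NI margin on the hazard ratio scale (about 1.08)
```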

7.
In the absence of placebo‐controlled trials, the efficacy of a test treatment can be alternatively examined by showing its non‐inferiority to an active control; that is, the test treatment is not worse than the active control by a pre‐specified margin. The margin is based on the effect of the active control over placebo in historical studies. In other words, the non‐inferiority setup involves a network of direct and indirect comparisons between test treatment, active controls, and placebo. Given this framework, we consider a Bayesian network meta‐analysis that models the uncertainty and heterogeneity of the historical trials into the non‐inferiority trial in a data‐driven manner through the use of the Dirichlet process and power priors. Depending on whether placebo was present in the historical trials, two cases of non‐inferiority testing are discussed that are analogs of the synthesis and fixed‐margin approach. In each of these cases, the model provides a more reliable estimate of the control given its effect in other trials in the network, and, in the case where placebo was only present in the historical trials, the model can predict the effect of the test treatment over placebo as if placebo had been present in the non‐inferiority trial. It can further answer other questions of interest, such as comparative effectiveness of the test treatment among its comparators. More importantly, the model provides an opportunity for disproportionate randomization or the use of small sample sizes by allowing borrowing of information from a network of trials to draw explicit conclusions on non‐inferiority. Copyright © 2015 John Wiley & Sons, Ltd.

8.
The internal pilot study design allows for modifying the sample size during an ongoing study based on a blinded estimate of the variance, thus maintaining trial integrity. Various blinded sample size re-estimation procedures have been proposed in the literature. We compare blinded sample size re-estimation procedures based on the one-sample variance of the pooled data with a blinded procedure that uses the randomization block information, with respect to the bias and variance of the variance estimators and the distribution of the resulting sample sizes, power, and actual type I error rate. For reference, sample size re-estimation based on the unblinded variance is also included in the comparison. It is shown that using an unbiased variance estimator (such as the one using the randomization block information) for sample size re-estimation does not guarantee that the desired power is achieved. Moreover, in situations common in clinical trials, the variance estimator that employs the randomization block length shows higher variability than the simple one-sample estimator, and so, in turn, does the sample size resulting from the related re-estimation procedure. This higher variability can lead to lower power, as demonstrated in the setting of noninferiority trials. In summary, the one-sample estimator obtained from the pooled data is extremely simple to apply, shows good performance, and is therefore recommended for application. Copyright © 2013 John Wiley & Sons, Ltd.
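A blinded re-estimation step based on the one-sample variance is only a few lines of R. The sketch below assumes a two-sample z-test with 1:1 allocation and a known target difference delta; note that the lumped variance overestimates sigma^2 by about delta^2/4, which is the bias the comparison above refers to.

```r
# Blinded re-estimation: recompute the per-group n of a two-sample z-test
# from the one-sample ("lumped") variance of the pooled interim data.
blinded_ssr <- function(pooled, delta, alpha = 0.025, beta = 0.2) {
  s2 <- var(pooled)   # blinded variance; E[s2] is roughly sigma^2 + delta^2 / 4
  ceiling(2 * s2 * (qnorm(1 - alpha) + qnorm(1 - beta))^2 / delta^2)
}

# Interim data pooled over (unknown) arms; true difference delta = 1 by construction
set.seed(42)
interim <- c(rnorm(30, 0, 2), rnorm(30, 1, 2))   # blinded: labels unavailable
blinded_ssr(interim, delta = 1)                  # re-estimated n per group
```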

9.
The author considers studies with multiple dependent primary endpoints. Testing hypotheses with multiple primary endpoints may require unmanageably large populations. Composite endpoints consisting of several binary events may be used to reduce a trial to a manageable size. The primary difficulties with composite endpoints are that different endpoints may have different clinical importance and that higher‐frequency variables may overwhelm effects of smaller, but equally important, primary outcomes. To compensate for these inconsistencies, we weight each type of event, and the total number of weighted events is counted. To reflect the mutual dependency of primary endpoints and to make the weighting method effective in small clinical trials, we use the Bayesian approach. We assume a multinomial distribution of multiple endpoints with Dirichlet priors and apply the Bayesian test of noninferiority to the calculation of weighting parameters. We use composite endpoints to test hypotheses of superiority in single‐arm and two‐arm clinical trials. The composite endpoints have a beta distribution. We illustrate this technique with an example. The results provide a statistical procedure for creating composite endpoints. Published 2013. This article is a U.S. Government work and is in the public domain in the USA.
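The Dirichlet-multinomial machinery behind such weighting is compact: with a multinomial likelihood over event types and a Dirichlet prior, posterior draws are normalized gammas. The sketch below only averages a weighted event score over the posterior; the counts, weights, and flat prior are hypothetical, and the authors' calibration of the weights through a Bayesian noninferiority test is not reproduced.

```r
# One Dirichlet(a) draw via independent gammas (avoids any extra package)
rdirichlet1 <- function(a) { g <- rgamma(length(a), a); g / sum(g) }

counts  <- c(death = 2, stroke = 5, rehosp = 18)   # hypothetical event counts
weights <- c(1.0, 0.7, 0.3)                        # clinical-importance weights
prior   <- rep(1, 3)                               # flat Dirichlet prior

set.seed(7)
# Posterior of the weighted composite score under Dirichlet(prior + counts)
draws <- replicate(4000, sum(weights * rdirichlet1(prior + counts)))
quantile(draws, c(.025, .5, .975))                 # posterior summary
```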

10.
Clinical noninferiority trials with at least three groups have received much attention recently, perhaps due to the fact that regulatory agencies often require that a placebo group be evaluated along with a new experimental drug and an active control. The authors discuss likelihood ratio tests for binary endpoints and various noninferiority hypotheses. They find that, depending on the particular hypothesis, the test reduces asymptotically either to the intersection‐union test or to a test which follows asymptotically a mixture of generalized chi‐squared distributions. They investigate the performance of this asymptotic test and provide an exact modification. They show that this test considerably outperforms multiple testing methods such as the Bonferroni adjustment with respect to power. They illustrate their methods with a cancer study to compare antiemetic agents. Finally, they discuss the extension of the results to other settings, such as Gaussian endpoints.
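The intersection-union principle itself is simple to state in code: reject the global noninferiority hypothesis only if every pairwise test rejects, i.e., if the maximum p-value is below alpha. The sketch below uses plain Wald statistics rather than the authors' likelihood ratio tests; counts and margins are hypothetical.

```r
# One-sided Wald p-value for H0: p1 - p0 <= -margin (margin = 0 gives superiority)
ni_wald_p <- function(x1, n1, x0, n0, margin) {
  p1 <- x1 / n1; p0 <- x0 / n0
  se <- sqrt(p1 * (1 - p1) / n1 + p0 * (1 - p0) / n0)
  pnorm((p1 - p0 + margin) / se, lower.tail = FALSE)
}

# Experimental vs reference (NI, margin .10) and reference vs placebo (superiority)
p_ER <- ni_wald_p(85, 100, 83, 100, margin = 0.10)
p_RP <- ni_wald_p(83, 100, 60, 100, margin = 0)
max(p_ER, p_RP) < 0.025   # intersection-union decision at one-sided 2.5%
```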

11.
Applied statisticians and pharmaceutical researchers are frequently involved in the design and analysis of clinical trials where at least one of the outcomes is binary. Treatments are judged by the probability of a positive binary response. A typical example is the noninferiority trial, where it is tested whether a new experimental treatment is practically not inferior to an active comparator with a prespecified margin δ. Except for the special case of δ = 0, no exact conditional test is available although approximate conditional methods (also called second‐order methods) can be applied. However, in some situations, the approximation can be poor and the logical argument for approximate conditioning is not compelling. The alternative is to consider an unconditional approach. Standard methods like the pooled z‐test are already unconditional although approximate. In this article, we review and illustrate unconditional methods with a heavy emphasis on modern methods that can deliver exact, or near exact, results. For noninferiority trials based on either rate difference or rate ratio, our recommendation is to use the so‐called E‐procedure, based on either the score or likelihood ratio statistic. This test is effectively exact, computationally efficient, and respects monotonicity constraints in practice. We support our assertions with a numerical study, and we illustrate the concepts developed in theory with a clinical example in pulmonary oncology; R code to conduct all these analyses is available from the authors.
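A score statistic for the rate-difference noninferiority hypothesis, with the nuisance parameter profiled out under the restriction p1 − p2 = −δ, can be sketched as follows. The restricted MLE is found numerically rather than by the Farrington–Manning closed form, and the exact (E-procedure) p-value that would be built on top of this statistic is not shown; counts and margin are hypothetical.

```r
# Score test for H0: p1 - p2 = -delta, with the restricted MLE found by optimize()
score_stat <- function(x1, n1, x2, n2, delta) {
  nll <- function(p2) {            # restricted neg. log-likelihood (p1 = p2 - delta)
    p1 <- p2 - delta
    -(dbinom(x1, n1, p1, log = TRUE) + dbinom(x2, n2, p2, log = TRUE))
  }
  lo  <- max(1e-8, delta + 1e-8)   # keep both p1 and p2 inside (0, 1)
  hi  <- min(1 - 1e-8, 1 + delta - 1e-8)
  p2t <- optimize(nll, c(lo, hi))$minimum
  p1t <- p2t - delta
  (x1 / n1 - x2 / n2 + delta) / sqrt(p1t * (1 - p1t) / n1 + p2t * (1 - p2t) / n2)
}

# NI of new (86/100) vs active control (90/100) with margin 0.10
z <- score_stat(86, 100, 90, 100, delta = 0.10)
pnorm(z, lower.tail = FALSE)       # asymptotic one-sided p-value
```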

12.
Traditionally, noninferiority hypotheses have been tested using a frequentist method with a fixed margin. Given that information for the control group is often available from previous studies, it is interesting to consider a Bayesian approach in which information is “borrowed” for the control group to improve efficiency. However, construction of an appropriate informative prior can be challenging. In this paper, we consider a hybrid Bayesian approach for testing noninferiority hypotheses in studies with a binary endpoint. To account for heterogeneity between the historical information and the current trial for the control group, a dynamic P value–based power prior parameter is proposed to adjust the amount of information borrowed from the historical data. This approach extends the simple test‐then‐pool method to allow a continuous discounting power parameter. An adjusted α level is also proposed to better control the type I error. Simulations are conducted to investigate the performance of the proposed method and to make comparisons with other methods including test‐then‐pool and hierarchical modeling. The methods are illustrated with data from vaccine clinical trials.
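One simple instance of a dynamic power prior for a binary control rate ties the borrowing weight a0 to a congruence p-value between the historical and current control data. The mapping a0 = p used below is one crude choice for illustration, not the paper's calibrated parameter, and the counts are hypothetical.

```r
# Dynamic power prior: weight the historical control likelihood by a0
x_h <- 45; n_h <- 100   # historical control (hypothetical)
x_c <- 30; n_c <- 80    # current control (hypothetical)

pval <- prop.test(c(x_h, x_c), c(n_h, n_c))$p.value
a0   <- pval            # borrow more when the two sources agree (crude mapping)

# Beta posterior for the control rate: flat Beta(1, 1) prior, historical data
# discounted by a0, current data at full weight
post <- c(shape1 = 1 + a0 * x_h + x_c,
          shape2 = 1 + a0 * (n_h - x_h) + (n_c - x_c))
qbeta(c(.025, .975), post[1], post[2])   # posterior interval for the control rate
```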

13.
In the traditional design of a single-arm phase II cancer clinical trial, the one-sample log-rank test has been frequently used. A common practice in sample size calculation is to assume that the event time under the new treatment follows an exponential distribution. Such a design may not be suitable for immunotherapy cancer trials, where both long-term survivors (or even patients cured of the disease) and a delayed treatment effect are present, because the exponential distribution is not appropriate to describe such data and could consequently lead to a severely underpowered trial. In this research, we propose a piecewise proportional hazards cure rate model with a random delayed treatment effect for designing single-arm phase II immunotherapy cancer trials. To improve test power, we propose a new weighted one-sample log-rank test and provide a sample size calculation formula for designing trials. Our simulation study shows that the proposed log-rank test performs well and is robust to misspecification of the weight, and that the sample size calculation formula also performs well.
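The unweighted one-sample log-rank statistic compares observed events with the cumulative hazard expected under a reference distribution; a weighted version simply pushes a weight function through the counting-process integrals. The sketch below uses a constant reference hazard and an ad hoc ramp weight for a delayed effect, not the paper's cure rate model or proposed weight; the data are simulated.

```r
# Weighted one-sample log-rank test against a reference exponential hazard.
# w(t) down-weights early follow-up to reflect a delayed treatment effect.
wlogrank1 <- function(time, status, lambda0, w = function(t) pmin(t / 6, 1)) {
  O <- sum(w(time) * status)                   # weighted observed events
  intw  <- sapply(time, function(x)            # weighted expected events
    integrate(function(t) w(t) * lambda0, 0, x)$value)
  intw2 <- sapply(time, function(x)            # martingale variance term
    integrate(function(t) w(t)^2 * lambda0, 0, x)$value)
  z <- (sum(intw) - O) / sqrt(sum(intw2))      # z > 0 favours the new treatment
  c(z = z, p = pnorm(z, lower.tail = FALSE))
}

set.seed(3)
tt   <- rexp(70, rate = 0.08)                  # true hazard below the reference 0.10
cens <- runif(70, 6, 24)
wlogrank1(pmin(tt, cens), as.numeric(tt <= cens), lambda0 = 0.10)
```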

14.
Here we present a case study of how re-randomization tests were performed as sensitivity analyses in two randomized, controlled clinical trials, as recommended by the United States Food and Drug Administration in the context of adaptive randomization. This was done to confirm the primary conclusions of immunological noninferiority of an investigational new fully liquid presentation of a quadrivalent cross-reacting material conjugate meningococcal vaccine (MenACWY-CRM) relative to its licensed lyophilized/liquid presentation. In two phase 2b studies (Study #1: NCT03652610; Study #2: NCT03433482), noninferiority of the fully liquid presentation of MenACWY-CRM to the licensed presentation was assessed and demonstrated for immune responses against meningococcal serogroup A (MenA), the only vaccine component modified from lyophilized to liquid in the new presentation. The original vaccine assignment algorithm, with a minimization procedure accounting for center or center within age strata, was used to re-randomize participants belonging to the fully liquid and licensed vaccine groups while keeping antibody responses, covariates, and entry order as observed. Test statistics under re-randomization were generated according to the ANCOVA model used in the primary analysis. To confirm immunological noninferiority following re-randomization, the corresponding p-values had to be <0.025. For both studies and all primary objective evaluations, the re-randomization p-values were well below 0.025 (0.0004 for Study #1; 0.0001 for the two co-primary endpoints in Study #2). Re-randomization tests performed to comply with a regulatory request thus confirmed the primary conclusions of immunological noninferiority of the fully liquid presentation compared with the licensed vaccine presentation for MenA.
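Stripped of the minimization algorithm, a re-randomization test is a short loop: hold the responses fixed, redraw the assignments under the randomization scheme, and recompute the statistic. The sketch below uses plain 1:1 re-randomization and a t-statistic on simulated log-titre responses, standing in for the studies' minimization procedure and ANCOVA model.

```r
set.seed(11)
# Hypothetical log-titre responses; a modest true difference between groups
y   <- rnorm(120, mean = rep(c(0.2, 0.0), each = 60))
trt <- rep(c(1, 0), each = 60)
obs <- t.test(y[trt == 1], y[trt == 0])$statistic   # observed test statistic

# Re-randomize the labels, keeping the responses as observed
reps <- replicate(5000, {
  tr <- sample(trt)
  t.test(y[tr == 1], y[tr == 0])$statistic
})
mean(abs(reps) >= abs(obs))   # two-sided re-randomization p-value
```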

15.
A large-sample problem of demonstrating the noninferiority of an experimental treatment to a referent treatment for binary outcomes is considered. The methods involve constructing the lower two-sided confidence bound for the difference between the binomial proportions of the experimental and referent treatments and comparing it with the negative of the noninferiority margin. The three considered methods, Anbar, Falk–Koch, and Reduced Falk–Koch, handle the comparison in an asymmetric way; that is, only the referent proportion is directly involved in the expression for the variance of the difference between the two sample proportions. Five continuity corrections (including zero) are considered for each approach, and the key properties of the corresponding methods are evaluated via simulations. First, the uncorrected two-sided confidence intervals can have smaller coverage probability than the nominal level even for moderately large sample sizes, for example, 150 per group. Next, the 15 testing methods are discussed in terms of their type I error rate and power. In settings with a relatively small referent proportion (about 0.4 or smaller), the Anbar approach with Yates' continuity correction is recommended for balanced designs and the Falk–Koch method with Yates' correction for unbalanced designs. For relatively moderate (about 0.6) or large (about 0.8 or greater) referent proportions, the uncorrected Reduced Falk–Koch method is recommended, although in this case all methods tend to be over-conservative. These results are expected to be used at the design stage of a noninferiority study when asymmetric comparisons are envisioned. Copyright © 2013 John Wiley & Sons, Ltd.
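The common skeleton of these methods, comparing a lower confidence bound for pE − pR against the negative margin, is sketched below with an ordinary Wald variance and an optional Yates-type correction. The Anbar and Falk–Koch variants differ precisely in replacing this variance with referent-driven versions, which are not reproduced here; counts and the 0.10 margin are hypothetical.

```r
# Lower two-sided confidence bound for pE - pR, compared with -margin.
lower_bound <- function(xE, nE, xR, nR, alpha = 0.05, cc = TRUE) {
  pE <- xE / nE; pR <- xR / nR
  se <- sqrt(pE * (1 - pE) / nE + pR * (1 - pR) / nR)   # plain Wald variance
  corr <- if (cc) (1 / nE + 1 / nR) / 2 else 0          # Yates-type correction
  pE - pR - qnorm(1 - alpha / 2) * se - corr
}

lower_bound(225, 300, 231, 300) > -0.10   # TRUE => conclude noninferiority
```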

16.
Often, single‐arm trials are used in phase II to gather the first evidence of an oncological drug's efficacy, with drug activity determined through tumour response using the RECIST criterion. Provided the null hypothesis of ‘insufficient drug activity’ is rejected, the next step could be a randomised two‐arm trial. However, single‐arm trials may provide a biased treatment effect because of patient selection, and thus, this development plan may not be an efficient use of resources. Therefore, we compare the performance of development plans consisting of single‐arm trials followed by randomised two‐arm trials with stand‐alone single‐stage or group sequential randomised two‐arm trials. Through this, we are able to investigate the utility of single‐arm trials and determine the most efficient drug development plans, setting our work in the context of a published single‐arm non‐small‐cell lung cancer trial. Reference priors, reflecting the opinions of ‘sceptical’ and ‘enthusiastic’ investigators, are used to quantify and guide the suitability of single‐arm trials in this setting. We observe that the explored development plans incorporating single‐arm trials are often non‐optimal. Moreover, even the most pessimistic reference priors have a considerable probability in favour of alternative plans. Analysis suggests expected sample size savings of up to 25% could have been made, and the issues associated with single‐arm trials avoided, for the non‐small‐cell lung cancer treatment through direct progression to a group sequential randomised two‐arm trial. Careful consideration should thus be given to the use of single‐arm trials in oncological drug development when a randomised trial will follow. Copyright © 2015 The Authors. Pharmaceutical Statistics published by John Wiley & Sons Ltd.

17.
The current practice of designing single‐arm phase II survival trials is limited under the exponential model. Trial design under the exponential model may not be appropriate when a portion of patients are cured. There is no literature available for designing single‐arm phase II trials under the parametric cure model. In this paper, a test statistic is proposed, and a sample size formula is derived for designing single‐arm phase II trials under a class of parametric cure models. Extensive simulations showed that the proposed test and sample size formula perform very well under different scenarios. Copyright © 2015 John Wiley & Sons, Ltd.

18.
Molecularly targeted, genomic-driven, and immunotherapy-based clinical trials continue to be advanced for the treatment of patients with relapsed or refractory cancer, where the growth modulation index (GMI) is often considered a primary endpoint of treatment efficacy. However, little literature is available on trial design with GMI as the primary endpoint. In this article, we derive a sample size formula for the score test under a log-linear model of the GMI. Study designs using the derived sample size formula are illustrated under a bivariate exponential model, the Weibull frailty model, and the generalized treatment effect size. The proposed designs provide sound statistical methods for a single-arm phase II trial with GMI as the primary endpoint.

19.
During a new drug development process, it is desirable to detect potential safety signals in a timely manner. For this purpose, repeated meta-analyses may be performed sequentially on accumulating safety data. Moreover, if the amount of safety data from the originally planned program is not enough to ensure adequate power to test a specific hypothesis (e.g., the noninferiority hypothesis for an event of interest), the total sample size may be increased by adding new studies to the program. Without appropriate adjustment, it is well known that the type I error rate will be inflated because of repeated analyses and sample size adjustment. In this paper, we discuss potential issues associated with adaptive and repeated cumulative meta-analyses of safety data conducted during a drug development process. We consider both frequentist and Bayesian approaches. A new drug development example is used to demonstrate the application of the methods. Copyright © 2015 John Wiley & Sons, Ltd.
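The type I error inflation from repeated, unadjusted looks is easy to reproduce by simulation. The sketch below applies a one-sided z-test to accumulating standard normal data at five looks and reports how often at least one look rejects; all design numbers are hypothetical.

```r
# Repeated looks at accumulating data inflate the type I error when no
# adjustment is made: under the null, the chance that ANY of K sequential
# z-tests rejects exceeds the nominal one-sided alpha.
set.seed(5)
K <- 5; n_per_look <- 40; alpha <- 0.025
hit <- replicate(10000, {
  x <- rnorm(K * n_per_look)                 # no true safety signal
  looks <- sapply(1:K, function(k) {
    m <- k * n_per_look
    mean(x[1:m]) * sqrt(m)                   # z-statistic at look k (known sd = 1)
  })
  any(looks > qnorm(1 - alpha))
})
mean(hit)                                    # well above 0.025: inflated error rate
```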

20.
Clinical trials are often designed to compare several treatments with a common control arm in pairwise fashion. In this paper we study optimal designs for such studies, based on minimizing the total number of patients required to achieve a given level of power. A common approach when designing studies to compare several treatments with a control is to achieve the desired power for each individual pairwise treatment comparison. However, it is often more appropriate to characterize power in terms of the family of null hypotheses being tested, and to control the probability of rejecting all, or alternatively any, of these individual hypotheses. While all approaches lead to unbalanced designs with more patients allocated to the control arm, it is found that the optimal design and required number of patients can vary substantially depending on the chosen characterization of power. The methods make allowance for both continuous and binary outcomes and are illustrated with reference to two clinical trials, one involving multiple doses compared to placebo and the other involving combination therapy compared to mono-therapies. In one example a 55% reduction in sample size is achieved through an optimal design combined with the appropriate characterization of power.
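For the per-comparison characterization of power, the classical result is that allocating the control arm sqrt(k) times as many patients as each of the k treatment arms is optimal; the sketch below compares equal allocation with the sqrt(k) rule for normal outcomes with hypothetical design values. The family-wise ("all" or "any") characterizations discussed in the paper would require the joint distribution of the k statistics and are not shown.

```r
# Per-comparison power with total N split among k treatment arms and a control
# arm holding r times the per-treatment group size; r = sqrt(k) is the
# classical optimum for many-to-one comparisons.
pair_power <- function(N, k, r, delta, sigma, alpha = 0.025) {
  n_t <- N / (k + r)                  # patients per treatment arm
  n_c <- r * n_t                      # patients on control
  se  <- sigma * sqrt(1 / n_t + 1 / n_c)
  pnorm(delta / se - qnorm(1 - alpha))
}

# Three treatments: equal allocation vs the sqrt(k) rule at the same total N
pair_power(400, 3, 1,       delta = 0.5, sigma = 1.5)
pair_power(400, 3, sqrt(3), delta = 0.5, sigma = 1.5)   # slightly higher power
```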
