Similar Documents

20 similar documents retrieved.
1.
Formal proof of efficacy of a drug requires that, in a prospective experiment, superiority over placebo, or either superiority or at least non-inferiority to an established standard, is demonstrated. Traditionally one primary endpoint is specified, but various diseases exist where treatment success needs to be based on the assessment of two primary endpoints. With co-primary endpoints, both need to be “significant” as a prerequisite to claim study success. Here, no adjustment of the study-wise type I error is needed, but the sample size is often increased to maintain the pre-defined power. Designs using an at-least-one concept have also been proposed, where study success is claimed if superiority for at least one of the endpoints is demonstrated. This is sometimes called the dual primary endpoint concept, and an appropriate adjustment of the study-wise type I error is required. This concept is not covered in the European guideline on multiplicity, because study success can be claimed if one endpoint shows significant superiority despite a possible deterioration in the other. In line with Röhmel's strategy, we discuss an alternative approach that includes non-inferiority hypothesis testing and avoids obvious contradictions in decision-making. This approach leads back to the co-primary endpoint assessment and has the advantage that minimum requirements for the endpoints can be modeled flexibly for several practical needs. Our simulations show that, if the planning assumptions are correct, the proposed additional requirements improve interpretation with only a limited impact on power, that is, on sample size.
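The sample size increase for co-primary endpoints described above can be quantified under a simplifying assumption not made explicit in the abstract: with two independent endpoints, the joint power is the product of the marginal powers, so each endpoint must be tested at a higher marginal power. A minimal sketch (the independence assumption and the z-test approximation are ours, not the paper's):

```python
from statistics import NormalDist

def per_endpoint_power(joint_power: float, k: int) -> float:
    """Power each of k independent co-primary endpoints must have so that
    all are simultaneously significant with probability `joint_power`."""
    return joint_power ** (1 / k)

def inflation_factor(joint_power: float, k: int, alpha: float = 0.025) -> float:
    """Approximate sample-size inflation for a one-sided z-test when moving
    from a single endpoint at power `joint_power` to k independent
    co-primary endpoints with the same target joint power."""
    z = NormalDist()
    za = z.inv_cdf(1 - alpha)
    return ((za + z.inv_cdf(per_endpoint_power(joint_power, k))) /
            (za + z.inv_cdf(joint_power))) ** 2
```

For two independent endpoints and 80% joint power, each endpoint needs roughly 89% marginal power, which inflates the sample size by about 31%; correlated endpoints need less.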

2.
We consider the problem of sample size calculation for non-inferiority based on the hazard ratio in time-to-event trials where overall study duration is fixed and subject enrollment is staggered with variable follow-up. An adaptation of previously developed formulae for the superiority framework is presented that specifically allows for effect reversal under the non-inferiority setting, and its consequent effect on variance. Empirical performance is assessed through a small simulation study, and an example based on an ongoing trial is presented. The formulae are straightforward to program and may prove a useful tool in planning trials of this type.
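The adaptation described above targets staggered entry and variable follow-up; as a simpler point of reference, the required total number of events for a non-inferiority log-rank comparison under a standard Schoenfeld-type approximation (a textbook sketch, not the authors' formula) is:

```python
from math import ceil, log
from statistics import NormalDist

def ni_events(hr_margin: float, hr_alt: float = 1.0, alloc: float = 0.5,
              alpha: float = 0.025, power: float = 0.8) -> int:
    """Total events for a non-inferiority log-rank test, Schoenfeld-type.
    hr_margin: non-inferiority margin on the hazard-ratio scale (> 1);
    hr_alt:    assumed true hazard ratio (1.0 = treatments equivalent);
    alloc:     proportion of subjects allocated to the experimental arm."""
    z = NormalDist()
    num = (z.inv_cdf(1 - alpha) + z.inv_cdf(power)) ** 2
    den = alloc * (1 - alloc) * (log(hr_margin) - log(hr_alt)) ** 2
    return ceil(num / den)
```

For a margin of 1.3, a true hazard ratio of 1, one-sided alpha 0.025 and 80% power, this gives 457 events; converting events to subjects is where the paper's fixed-duration, staggered-entry machinery comes in.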

3.
Despite tremendous effort on different designs with cross-sectional data, little research has been conducted on sample size calculation and power analysis under repeated measures designs. In addition to the time-averaged difference, the change in mean response over time (CIMROT) is a primary interest in repeated measures analysis. We generalized sample size calculation and power analysis equations for CIMROT to allow unequal sample sizes between groups for both continuous and binary measures; through simulation, we evaluated the performance of the proposed methods and compared our approach to that of a two-stage model formulation. We also created a software procedure to implement the proposed methods.

4.
Sample size calculation is a critical issue in clinical trials because a small sample size leads to biased inference and a large sample size increases the cost. With the development of advanced medical technology, some patients can be cured of certain chronic diseases, and the proportional hazards mixture cure model has been developed to handle survival data with potential cure information. Given the needs of survival trials with potential cure proportions, a corresponding sample size formula based on the log-rank test statistic for binary covariates has been proposed by Wang et al. [25]. However, a sample size formula for continuous covariates has not been developed. Herein, we present sample size and power calculations for the mixture cure model with continuous covariates based on the log-rank method, further modified by Ewell's method. The proposed approaches were evaluated in simulation studies with synthetic data from exponential and Weibull distributions. A program for calculating the necessary sample size for continuous covariates in a mixture cure model was implemented in R.

5.
In clinical trials with binary endpoints, the required sample size depends not only on the specified type I error rate, the desired power, and the treatment effect, but also on the overall event rate, which is usually uncertain. The internal pilot study design has been proposed to overcome this difficulty: nuisance parameters required for sample size calculation are re-estimated during the ongoing trial and the sample size is recalculated accordingly. We performed extensive simulation studies to investigate the characteristics of the internal pilot study design for two-group superiority trials where the treatment effect is captured by the relative risk. As the performance of the sample size recalculation procedure crucially depends on the accuracy of the applied sample size formula, we first explored the precision of three approximate sample size formulae proposed in the literature for this situation. It turned out that the unequal variance asymptotic normal formula outperforms the other two, especially in the case of unbalanced sample size allocation. Using this formula for sample size recalculation in the internal pilot study design ensures that the desired power is achieved even if the overall rate is mis-specified in the planning phase. The maximum inflation of the type I error rate observed for the internal pilot study design is small and lies below the maximum excess that occurred for the fixed sample size design.
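For orientation, a normal-approximation sample size formula for the relative risk with per-group (unequal) variances can be sketched as follows. This is a generic asymptotic formula on the log relative risk and is not necessarily the exact "unequal variance" formula the study evaluated:

```python
from math import ceil, log
from statistics import NormalDist

def n_log_rr(p_control: float, rr: float, ratio: float = 1.0,
             alpha: float = 0.05, power: float = 0.8):
    """Group sizes for testing H0: RR = 1 via the log relative risk,
    using separate per-group variance terms.
    ratio = n_treatment / n_control (allocation ratio)."""
    p_treat = rr * p_control
    z = NormalDist()
    zsum = z.inv_cdf(1 - alpha / 2) + z.inv_cdf(power)
    # delta-method variance of log(RR-hat), scaled to one control subject
    var = (1 - p_control) / p_control + (1 - p_treat) / (ratio * p_treat)
    n_control = ceil(zsum ** 2 * var / log(rr) ** 2)
    return n_control, ceil(ratio * n_control)
```

With a 20% control event rate and a target relative risk of 1.5 under balanced allocation, the sketch gives 303 subjects per group; an internal pilot design would revise this as the pooled event rate is re-estimated.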

6.
Because phase I clinical trials are often unable to estimate the frequency of toxicity precisely, Bryant and Day proposed incorporating toxicity considerations into two-stage designs for phase II clinical trials. Conaway and Petroni further pointed out that it is important to evaluate clinical activity and safety simultaneously when studying cancer treatments with more toxic chemotherapies in a phase II clinical trial, and they developed multi-stage designs with two dependent binary endpoints. However, the usual sample sizes in phase II trials make it difficult for these designs to control the type I error rate at a desired level over the entire null region while retaining sufficient power against reasonable alternatives. Therefore, in this paper the curtailed sampling procedure summarized by Phatak and Bhatt is applied to two-stage designs with two dependent binary endpoints to reduce sample sizes and speed up the drug development process.

7.
A placebo-controlled randomized clinical trial is required to demonstrate that an experimental treatment is superior to its corresponding placebo on multiple coprimary endpoints. This is particularly true in the field of neurology. In fact, clinical trials for neurological disorders need to show the superiority of an experimental treatment over a placebo in two coprimary endpoints. Unfortunately, these trials often fail to detect a true treatment effect for the experimental treatment versus the placebo owing to an unexpectedly high placebo response rate. Sequential parallel comparison design (SPCD) can be used to address this problem. However, the SPCD has not yet been discussed in relation to clinical trials with coprimary endpoints. In this article, our aim was to develop a hypothesis-testing method and a method for calculating the corresponding sample size for the SPCD with two coprimary endpoints. In a simulation, we show that the proposed hypothesis-testing method achieves the nominal type I error rate and power and that the proposed sample size calculation method has adequate power accuracy. In addition, the usefulness of our methods is confirmed by returning to an SPCD trial with a single primary endpoint of Alzheimer disease-related agitation.

8.
Two-stage k-sample designs for the ordered alternative problem
In preclinical studies and clinical dose-ranging trials, the Jonckheere-Terpstra test is widely used in the assessment of dose-response relationships. Hewett and Spurrier (1979) presented a two-stage analog of the test in the context of large sample sizes. In this paper, we propose an exact test based on Simon's minimax and optimal design criteria originally used in one-arm phase II designs based on binary endpoints. The convergence rate of the joint distribution of the first and second stage test statistics to the limiting distribution is studied, and design parameters are provided for a variety of assumed alternatives. The behavior of the test is also examined in the presence of ties, and the proposed designs are illustrated through application in the planning of a hypercholesterolemia clinical trial. The minimax and optimal two-stage procedures are shown to be preferable as compared with the one-stage procedure because of the associated reduction in expected sample size for given error constraints.
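The Jonckheere-Terpstra statistic at the core of these designs is simply the sum of pairwise Mann-Whitney counts over all ordered pairs of groups; a minimal implementation (using the common mid-count convention of 1/2 for ties, which may differ from the paper's treatment of ties):

```python
from itertools import combinations

def jonckheere_terpstra(groups):
    """Jonckheere-Terpstra statistic for an ordered alternative:
    over every pair of groups (i < j in the hypothesized dose order),
    count pairs (x, y) with x from the lower group and y from the higher
    group where x < y, scoring ties as 1/2."""
    jt = 0.0
    for g_low, g_high in combinations(groups, 2):
        for x in g_low:
            for y in g_high:
                jt += 1.0 if x < y else (0.5 if x == y else 0.0)
    return jt
```

A perfectly ordered response such as [[1, 2], [3, 4], [5, 6]] attains the maximum value (here 12), which is what the one-sided test rewards.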

9.
Noninferiority trials intend to show that a new treatment is 'not worse' than a standard-of-care active control and can be used as an alternative when it is likely to cause fewer side effects than the active control. In the case of time-to-event endpoints, existing methods of sample size calculation assume either proportional hazards between the two study arms or exponentially distributed lifetimes. In scenarios where these assumptions do not hold, there are few reliable methods for calculating the sample size of a time-to-event noninferiority trial. Additionally, the choice of the non-inferiority margin is obtained either from a meta-analysis of prior studies, from strongly justifiable 'expert opinion', or from a 'well conducted' definitive large-sample study. Thus, when historical data do not support the traditional assumptions, it would not be appropriate to use these methods to design a noninferiority trial. For such scenarios, an alternate method of sample size calculation based on the assumption of proportional time is proposed. This method uses the generalized gamma ratio distribution to perform the sample size calculations. A practical example is discussed, followed by insights on the choice of the non-inferiority margin and the indirect testing of superiority of treatment compared to placebo.

Keywords: generalized gamma, noninferiority, non-proportional hazards, proportional time, relative time, sample size

10.

In this paper, we propose a Bayesian two-stage design with a changing hypothesis test, bridging a single-arm study and a double-arm randomized trial in one phase II clinical trial based on continuous rather than binary endpoints. The design is calibrated with respect to both frequentist and Bayesian error rates. The proposed design minimizes the Bayesian expected sample size when the new candidate has low or high efficacy, subject to constraints on the error rates from both frequentist and Bayesian perspectives. Tables of designs for various combinations of design parameters are also provided.

11.
The choice between single-arm designs versus randomized double-arm designs has been contentiously debated in the literature of phase II oncology trials. Recently, as a compromise, the single-to-double arm transition design was proposed, combining the two designs into one trial over two stages. Successful implementation of the two-stage transition design requires a suspension period at the end of the first stage to collect the response data of the already enrolled patients. When the evaluation of the primary efficacy endpoint is overly long, the between-stage suspension period may unfavorably prolong the trial duration and cause a delay in treating future eligible patients. To accelerate the trial, we propose a Bayesian single-to-double arm design with short-term endpoints (BSDS), where an intermediate short-term endpoint is used for making early termination decisions at the end of the single-arm stage, followed by an evaluation of the long-term endpoint at the end of the subsequent double-arm stage. Bayesian posterior probabilities are used as the primary decision-making tool at the end of the trial. Design calibration steps are proposed for this Bayesian monitoring process to control the frequentist operating characteristics and minimize the expected sample size. Extensive simulation studies have demonstrated that our design has comparable power and average sample size but a much shorter trial duration than the conventional single-to-double arm design. Applications of the design are illustrated using two phase II oncology trials with binary endpoints.

12.
When counting the number of chemical parts in air pollution studies or when comparing the occurrence of congenital malformations between a uranium mining town and a control population, we often assume a Poisson distribution for the number of these rare events. Some discussions of sample size calculation under the Poisson model appear elsewhere, but these focus on testing equality rather than testing equivalence. We discuss sample size and power calculation on the basis of the exact distribution under Poisson models for testing non-inferiority and equivalence with respect to the mean incidence rate ratio. On the basis of large sample theory, we further develop an approximate sample size calculation formula using the normal approximation of a proposed test statistic for testing non-inferiority, and an approximate power calculation formula for testing equivalence. We find that these approximation formulae tend to underestimate the minimum required sample size obtained from the exact test procedure. On the other hand, we find that the power corresponding to the approximate sample sizes can actually be accurate (with respect to type I error and power) when the asymptotic test procedure based on the normal distribution is applied. We tabulate, for a variety of situations, the minimum mean incidence needed in the standard (or control) population, which can easily be employed to calculate the minimum required sample size for each comparison group when testing non-inferiority and equivalence between two Poisson populations.
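The normal-approximation route mentioned above can be sketched for the non-inferiority case as follows. This is a generic delta-method formula on the log rate ratio under unit exposure per subject; it is a sketch of the approximate approach only, not the paper's exact-distribution procedure:

```python
from math import ceil, log
from statistics import NormalDist

def n_poisson_ni(rate_control: float, margin: float, true_ratio: float = 1.0,
                 alpha: float = 0.05, power: float = 0.8) -> int:
    """Per-group sample size for non-inferiority on the Poisson incidence
    rate ratio via a normal approximation to the log rate ratio.
    margin: non-inferiority bound on the ratio scale (> 1, 'worse' side);
    true_ratio: assumed true rate ratio (1.0 = identical rates)."""
    rate_treat = true_ratio * rate_control
    z = NormalDist()
    zsum = z.inv_cdf(1 - alpha) + z.inv_cdf(power)
    var = 1 / rate_control + 1 / rate_treat  # per-subject var of log ratio
    return ceil(zsum ** 2 * var / (log(margin) - log(true_ratio)) ** 2)
```

As the paper observes for its approximate formula, sketches of this kind can underestimate the sample size required by the exact test, so the result should be checked against an exact procedure.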

13.
Progression-free survival is recognized as an important endpoint in oncology clinical trials. In clinical trials aimed at new drug development, the target population often comprises patients who are refractory to standard therapy and whose tumors show rapid progression. This situation increases the bias of the hazard ratio calculated for progression-free survival, resulting in decreased power for such patients. Therefore, countermeasures are needed at the sample size estimation stage to prevent this loss of power. Here, I propose a novel procedure for calculating the assumed hazard ratio for progression-free survival using the Cox proportional hazards model, which can be applied in sample size calculation. The hazard ratios derived by the proposed procedure were almost identical to those obtained by simulation. The hazard ratio calculated by the proposed procedure is applicable to sample size calculation and attains the nominal power. Methods that compensate for the lack of power due to biases in the hazard ratio are also discussed from a practical point of view.

14.
In single-arm clinical trials with survival outcomes, the Kaplan–Meier estimator and its confidence interval are widely used to assess survival probability and median survival time. Since the asymptotic normality of the Kaplan–Meier estimator is a well-known result, sample size calculation methods have not been studied in depth. An existing sample size calculation method is founded on the asymptotic normality of the Kaplan–Meier estimator under the log transformation. However, the small-sample properties of the log-transformed estimator are quite poor at small sample sizes (which are typical in single-arm trials), and the existing method uses an inappropriate standard normal approximation to calculate sample sizes. These issues can seriously affect the accuracy of the results. In this paper, we propose alternative methods to determine sample sizes based on a valid standard normal approximation with several transformations that may give an accurate normal approximation even with small sample sizes. In numerical evaluations via simulations, some of the proposed methods provided more accurate results, and the empirical power of the proposed method with the arcsine square-root transformation tended to be closer to the prescribed power than that of the other transformations. These results were supported when the methods were applied to data from three clinical trials.
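To illustrate the arcsine square-root idea, here is a minimal one-arm sketch under the strong simplifying assumption of no censoring before the landmark time, so the Kaplan–Meier estimate reduces to a binomial proportion and the transformed estimator has variance approximately 1/(4n). The paper's methods handle censoring and are not reproduced here:

```python
from math import asin, ceil, sqrt
from statistics import NormalDist

def n_arcsine(s0: float, s1: float,
              alpha: float = 0.05, power: float = 0.8) -> int:
    """One-arm sample size for H0: S(t) = s0 vs H1: S(t) = s1 using the
    arcsine square-root transformation of the survival probability.
    Assumes no censoring before t, so Var(asin(sqrt(S-hat))) ~ 1/(4n)."""
    z = NormalDist()
    zsum = z.inv_cdf(1 - alpha) + z.inv_cdf(power)
    return ceil(zsum ** 2 / (4 * (asin(sqrt(s1)) - asin(sqrt(s0))) ** 2))
```

One attraction of this transformation, consistent with the abstract's findings, is that its variance is free of the unknown survival probability, which tends to stabilize small-sample behavior.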

15.
In studies with recurrent event endpoints, misspecified assumptions of event rates or dispersion can lead to underpowered trials or overexposure of patients. Specification of overdispersion is often a particular problem, as it is usually not reported in clinical trial publications. Changing event rates over the years have been described for some diseases, adding to the uncertainty in planning. To mitigate the risks of inadequate sample sizes, internal pilot study designs have been proposed, with a preference for blinded sample size reestimation procedures, as they generally do not affect the type I error rate and maintain trial integrity. Blinded sample size reestimation procedures are available for trials with recurrent events as endpoints. However, the variance in the reestimated sample size can be considerable, in particular with early sample size reviews. Motivated by a randomized controlled trial in paediatric multiple sclerosis, a rare neurological condition in children, we apply the concept of blinded continuous monitoring of information, which is known to reduce the variance in the resulting sample size. Assuming negative binomial distributions for the counts of recurrent relapses, we derive information criteria and propose blinded continuous monitoring procedures. Their operating characteristics are assessed in Monte Carlo trial simulations, demonstrating favourable properties with regard to type I error rate, power, and stopping time, i.e., sample size.
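For context, the fixed-design per-group sample size that such a trial would start from can be sketched with a delta-method variance for the log rate ratio of negative binomial counts. This is a generic textbook-style formula, not the blinded monitoring procedure itself; `kappa` denotes the overdispersion in Var(count) = mu + kappa * mu^2:

```python
from math import ceil, log
from statistics import NormalDist

def n_negbin(rate_control: float, rate_ratio: float, kappa: float,
             alpha: float = 0.05, power: float = 0.8) -> int:
    """Per-group sample size for comparing negative binomial event rates
    via the log rate ratio over a unit follow-up period.
    rate_ratio: assumed true treatment/control rate ratio (< 1 = benefit);
    kappa:      overdispersion parameter (0 recovers the Poisson case)."""
    rate_treat = rate_ratio * rate_control
    z = NormalDist()
    zsum = z.inv_cdf(1 - alpha / 2) + z.inv_cdf(power)
    # delta-method variance of log(rate-hat), one subject per group
    var = (1 / rate_control + kappa) + (1 / rate_treat + kappa)
    return ceil(zsum ** 2 * var / log(rate_ratio) ** 2)
```

The sensitivity of the result to `kappa` is exactly why the abstract emphasizes that unreported overdispersion makes blinded reestimation attractive.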

16.
Historical control trials compare an experimental treatment with a previously conducted control treatment. By assigning all recruited subjects to the experimental arm, historical control trials can identify promising treatments in early-phase trials better than randomized control trials can. Existing designs of historical control trials with survival endpoints are based on the asymptotic normal distribution. However, it remains unclear whether the asymptotic distribution of the test statistic is close enough to the true distribution given the relatively small sample sizes of early-phase trials. In this article, we address this question by introducing an exact design approach for exponentially distributed survival endpoints and comparing it with an asymptotic design in both real and simulated examples. Simulation results show that the asymptotic test can bias the sample size estimate. We conclude that the proposed exact design should be used in the design of historical control trials.

17.
Response-adaptive (RA) allocation designs can skew the allocation of incoming subjects toward the better performing treatment group based on the previously accrued responses. While unstable estimators and increased variability can adversely affect adaptation in early trial stages, Bayesian methods can be implemented with decreasingly informative priors (DIP) to overcome these difficulties. DIPs have been previously used for binary outcomes to constrain adaptation early in the trial, yet gradually increase adaptation as subjects accrue. We extend the DIP approach to RA designs for continuous outcomes, primarily in the normal conjugate family, by functionalizing the prior effective sample size to equal the unobserved sample size. We compare this effective sample size DIP approach to other DIP formulations. Further, we considered various allocation equations and assessed their behavior utilizing DIPs. Simulated clinical trials comparing the behavior of these approaches with traditional frequentist and Bayesian RA as well as balanced designs show that the natural lead-in approaches maintain improved treatment with lower variability and greater power.

18.
The author considers studies with multiple dependent primary endpoints. Testing hypotheses with multiple primary endpoints may require unmanageably large populations. Composite endpoints consisting of several binary events may be used to reduce a trial to a manageable size. The primary difficulties with composite endpoints are that different endpoints may have different clinical importance and that higher-frequency variables may overwhelm effects of smaller, but equally important, primary outcomes. To compensate for these inconsistencies, we weight each type of event, and the total number of weighted events is counted. To reflect the mutual dependency of primary endpoints and to make the weighting method effective in small clinical trials, we use the Bayesian approach. We assume a multinomial distribution of multiple endpoints with Dirichlet priors and apply the Bayesian test of noninferiority to the calculation of weighting parameters. We use composite endpoints to test hypotheses of superiority in single-arm and two-arm clinical trials. The composite endpoints have a beta distribution. We illustrate this technique with an example. The results provide a statistical procedure for creating composite endpoints. Published 2013. This article is a U.S. Government work and is in the public domain in the USA.

19.
20.
A sample size justification is a vital part of any investigation. However, estimating the number of participants required to give meaningful results is not always straightforward. A number of components are required to facilitate a suitable sample size calculation. In this paper, the steps for conducting sample size calculations for superiority trials are summarised. Practical advice and examples are provided illustrating how to carry out the calculations by hand and using the app SampSize. Copyright © 2015 John Wiley & Sons, Ltd.
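For a continuous endpoint with equal allocation, the hand calculation described above reduces to the familiar formula n = 2 * sigma^2 * (z_{1-alpha/2} + z_{1-beta})^2 / delta^2 per group; a minimal sketch:

```python
from math import ceil
from statistics import NormalDist

def n_superiority(delta: float, sigma: float,
                  alpha: float = 0.05, power: float = 0.9) -> int:
    """Per-group sample size for a two-arm superiority trial with a
    continuous endpoint (two-sided test, equal allocation).
    delta: clinically relevant mean difference; sigma: common SD."""
    z = NormalDist()
    return ceil(2 * (sigma / delta) ** 2 *
                (z.inv_cdf(1 - alpha / 2) + z.inv_cdf(power)) ** 2)
```

For example, detecting a standardized difference of 0.5 with two-sided alpha 0.05 and 90% power requires 85 participants per group (63 at 80% power), which matches the kind of worked example such tutorials present.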
