Similar Literature
1.
Conditional power calculations are frequently used to guide the decision whether or not to stop a trial for futility or to modify the planned sample size. These calculations ignore the information in short-term endpoints and baseline covariates, and thereby do not make fully efficient use of the data. We therefore propose an interim decision procedure based on the conditional power approach which exploits the information contained in baseline covariates and short-term endpoints. We realize this by treating the estimation of the treatment effect at the interim analysis as a missing data problem, which we address with prediction models for the long-term endpoint that incorporate baseline covariates and multiple short-term endpoints. We show that the proposed procedure leads to an efficiency gain and a reduced sample size, without compromising the Type I error rate, even when the adopted prediction models are misspecified. In particular, implementing our proposal in the conditional power approach enables earlier decisions relative to standard approaches, whilst controlling the probability of an incorrect decision. This time gain results in a lower expected number of recruited patients in case of stopping for futility, so that fewer patients receive the futile regimen. We explain how these methods can be used in adaptive designs with unblinded sample size re-assessment based on the inverse normal P-value combination method to control the Type I error rate. We support the proposal by Monte Carlo simulations based on data from a real clinical trial.
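As an illustration of the inverse normal P-value combination mentioned above, the following sketch combines two hypothetical stage-wise one-sided p-values with equal pre-specified weights; the weights and p-values are assumptions for illustration, not values from the paper:

```python
from math import sqrt, erf

def norm_cdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def inv_norm_cdf(p):
    # Standard normal quantile by bisection (ample accuracy for illustration)
    lo, hi = -10.0, 10.0
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if norm_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def inverse_normal_combination(p1, p2, w1=0.5, w2=0.5):
    # Combine stage-wise one-sided p-values with pre-specified weights
    # (w1 + w2 = 1); small combined values favour rejection.
    z = sqrt(w1) * inv_norm_cdf(1.0 - p1) + sqrt(w2) * inv_norm_cdf(1.0 - p2)
    return 1.0 - norm_cdf(z)

# Hypothetical stage-wise p-values from the two stages of an adaptive trial
p_combined = inverse_normal_combination(0.04, 0.03)
print(round(p_combined, 4))
```

Because the weights are fixed before the interim analysis, the combined test keeps its Type I error level even after data-driven design modifications.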

2.
Adaptive trial methodology for multiarmed trials and enrichment designs has been extensively discussed in the past. A general principle for constructing test procedures that control the family-wise Type I error rate in the strong sense is based on combination tests within a closed test. With survival data, a problem arises when information from patients who are still at risk at the interim analysis is used for adaptive decision making. With the currently available testing procedures, either no testing of hypotheses in interim analyses is possible or there are restrictions on the interim data that can be used in the adaptation decisions, as essentially only the interim test statistics of the primary endpoint may be used. We propose a general adaptive testing procedure, covering multiarmed and enrichment designs, which does not have these restrictions. An important application is clinical trials where short-term surrogate endpoints are used as the basis for trial adaptations, and we illustrate how such trials can be designed. We propose statistical models to assess the impact of effect sizes, the correlation structure between the short-term and the primary endpoint, the sample size, the timing of interim analyses, and the selection rule on the operating characteristics.

3.
Adaptation of clinical trial design generates many issues that have not been resolved for practical applications, though statistical methodology has advanced greatly. This paper focuses on some methodological issues. In one type of adaptation, such as sample size re-estimation, only the postulated value of a parameter for planning the trial size may be altered. In another type, the originally intended hypothesis for testing may be modified using the internal data accumulated at an interim time of the trial, such as changing the primary endpoint or dropping a treatment arm. For sample size re-estimation, we contrast an adaptive test that weights the two-stage test statistics with the statistical information given by the original design against the original sample mean test with a properly corrected critical value. We point out the difficulty of planning a confirmatory trial based on the crude information generated by exploratory trials. With regard to selecting a primary endpoint, we argue that a selection process that allows switching from one endpoint to the other with the internal data of the trial is unlikely to gain a power advantage over the simple process of selecting one of the two endpoints by testing both with an equal split of alpha (Bonferroni adjustment). For dropping a treatment arm, distributing the remaining sample size of the discontinued arm to the other treatment arms can substantially improve the statistical power of identifying a superior treatment arm in the design. A common difficult methodological issue is how to select an adaptation rule at the trial planning stage. Pre-specification of the adaptation rule is important for practical reasons. Changing the originally intended hypothesis for testing with the internal data raises great concern among clinical trial researchers.
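The "equal split of alpha" comparator described above is a plain Bonferroni adjustment across the two candidate endpoints; a minimal sketch with hypothetical p-values (the decision rule shown is the generic Bonferroni rule, not the paper's adaptive selection procedure):

```python
def bonferroni_two_endpoints(p_a, p_b, alpha=0.05):
    # Test each of two candidate primary endpoints at alpha/2;
    # the trial is positive if either endpoint succeeds at its
    # adjusted level, keeping the family-wise error rate at alpha.
    level = alpha / 2.0
    return {"endpoint_A": p_a <= level,
            "endpoint_B": p_b <= level,
            "trial_positive": p_a <= level or p_b <= level}

# Hypothetical observed p-values for the two endpoints
result = bonferroni_two_endpoints(0.018, 0.060)
print(result)
```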

4.
The choice between single-arm designs versus randomized double-arm designs has been contentiously debated in the literature of phase II oncology trials. Recently, as a compromise, the single-to-double arm transition design was proposed, combining the two designs into one trial over two stages. Successful implementation of the two-stage transition design requires a suspension period at the end of the first stage to collect the response data of the already enrolled patients. When evaluation of the primary efficacy endpoint takes a long time, the between-stage suspension period may unfavorably prolong the trial duration and cause a delay in treating future eligible patients. To accelerate the trial, we propose a Bayesian single-to-double arm design with short-term endpoints (BSDS), where an intermediate short-term endpoint is used for making early termination decisions at the end of the single-arm stage, followed by an evaluation of the long-term endpoint at the end of the subsequent double-arm stage. Bayesian posterior probabilities are used as the primary decision-making tool at the end of the trial. Design calibration steps are proposed for this Bayesian monitoring process to control the frequentist operating characteristics and minimize the expected sample size. Extensive simulation studies have demonstrated that our design has comparable power and average sample size but a much shorter trial duration than the conventional single-to-double arm design. Applications of the design are illustrated using two phase II oncology trials with binary endpoints.
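The end-of-trial decision quantity in such Bayesian monitoring can be illustrated with a beta-binomial model; the Beta(1,1) priors, the response counts, and the Monte Carlo approximation below are illustrative assumptions, not the calibrated BSDS design:

```python
import random

def posterior_prob_superior(x_t, n_t, x_c, n_c, a=1.0, b=1.0,
                            draws=100000, seed=1):
    # With Beta(a, b) priors and binomial data, the posterior of each
    # response rate is Beta(a + x, b + n - x); estimate
    # Pr(p_treatment > p_control | data) by Monte Carlo.
    rng = random.Random(seed)
    wins = 0
    for _ in range(draws):
        pt = rng.betavariate(a + x_t, b + n_t - x_t)
        pc = rng.betavariate(a + x_c, b + n_c - x_c)
        if pt > pc:
            wins += 1
    return wins / draws

# Hypothetical double-arm stage data: 18/40 responses vs 10/40
prob = posterior_prob_superior(18, 40, 10, 40)
print(round(prob, 2))
```

In a calibrated design, the trial would be declared positive when this posterior probability exceeds a threshold chosen to control the frequentist Type I error rate.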

5.
Several researchers have proposed solutions to control the type I error rate in sequential designs. The use of Bayesian sequential designs is becoming more common; however, these designs are subject to inflation of the type I error rate. We propose a Bayesian sequential design for a binary outcome that uses an alpha-spending function to control the overall type I error rate. Algorithms are presented for calculating critical values and power for the proposed designs. We also propose a new stopping rule for futility. Sensitivity analysis is implemented to assess the effects of varying the parameters of the prior distribution and the maximum total sample size on the critical values. Alpha-spending functions are compared using power and actual sample size through simulations. Further simulations show that, when the total sample size is fixed, the proposed design has greater power than the traditional Bayesian sequential design, which sets equal stopping bounds at all interim analyses. We also find that the proposed design with the new futility stopping rule yields greater power and can stop earlier with a smaller actual sample size, compared with the traditional futility stopping rule when all other conditions are held constant. Finally, we apply the proposed method to a real data set and compare the results with traditional designs.
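Alpha-spending functions of the kind compared above can be sketched as follows; the Lan-DeMets O'Brien-Fleming-type and Pocock-type functions below are the standard textbook examples, not necessarily the specific functions studied in the paper:

```python
from math import sqrt, erf, log, exp

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def inv_norm_cdf(p):
    # Standard normal quantile by bisection
    lo, hi = -10.0, 10.0
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if norm_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def obf_spending(t, alpha=0.05):
    # Lan-DeMets O'Brien-Fleming-type spending function:
    # alpha*(t) = 2 - 2 * Phi(z_{1 - alpha/2} / sqrt(t))
    return 2.0 - 2.0 * norm_cdf(inv_norm_cdf(1.0 - alpha / 2.0) / sqrt(t))

def pocock_spending(t, alpha=0.05):
    # Lan-DeMets Pocock-type spending: alpha * ln(1 + (e - 1) * t)
    return alpha * log(1.0 + (exp(1.0) - 1.0) * t)

# Cumulative alpha spent at 50% information and at the final analysis
for t in (0.5, 1.0):
    print(round(obf_spending(t), 4), round(pocock_spending(t), 4))
```

Both functions spend the full alpha by the final analysis (t = 1); the O'Brien-Fleming type spends far less early, so earlier looks need stronger evidence to stop.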

6.
Clinical trials of experimental treatments must be designed with primary endpoints that directly measure clinical benefit for patients. In many disease areas, the recognised gold standard primary endpoint can take many years to mature, leading to challenges in the conduct and quality of clinical studies. There is increasing interest in using shorter‐term surrogate endpoints as substitutes for costly long‐term clinical trial endpoints; such surrogates need to be selected according to biological plausibility, as well as the ability to reliably predict the unobserved treatment effect on the long‐term endpoint. A number of statistical methods to evaluate this prediction have been proposed; this paper uses a simulation study to explore one such method in the context of time‐to‐event surrogates for a time‐to‐event true endpoint. This two‐stage meta‐analytic copula method has been extensively studied for time‐to‐event surrogate endpoints with one event of interest, but thus far has not been explored for the assessment of surrogates which have multiple events of interest, such as those incorporating information directly from the true clinical endpoint. We assess the sensitivity of the method to various factors including strength of association between endpoints, the quantity of data available, and the effect of censoring. In particular, we consider scenarios where there exist very little data on which to assess surrogacy. Results show that the two‐stage meta‐analytic copula method performs well under certain circumstances and could be considered useful in practice, but demonstrates limitations that may prevent universal use.

7.
We propose a two-stage design for a single-arm clinical trial with an early stopping rule for futility. This design employs different endpoints to assess early stopping and efficacy. The early stopping rule is based on a criterion that can be determined more quickly than that for efficacy. These separate criteria are also nested in the sense that efficacy is a special case of, but usually not identical to, the early stopping endpoint. The design readily allows for planning in terms of statistical significance, power, expected sample size, and expected duration. The method is illustrated with a phase II design comparing rates of disease progression in elderly patients treated for lung cancer with rates found using a historical control. In this example, the early stopping rule is based on the number of patients who exhibit progression-free survival (PFS) at 2 months post-treatment follow-up. Efficacy is judged by the number of patients who have PFS at 6 months. We demonstrate that our design has expected sample size and power comparable with those of the Simon two-stage design but exhibits shorter expected duration under a range of useful parameter values.
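The planning quantities mentioned above (probability of early stopping, expected sample size) can be sketched for a generic two-stage single-arm design with a binary early-stopping endpoint; the stage sizes and futility threshold below are hypothetical, not the paper's design:

```python
from math import comb

def binom_cdf(k, n, p):
    # Pr(X <= k) for X ~ Binomial(n, p)
    return sum(comb(n, i) * p**i * (1.0 - p)**(n - i) for i in range(k + 1))

def two_stage_operating_chars(n1, r1, n2, p):
    # Stop for futility after stage 1 if at most r1 of n1 patients are
    # progression-free; otherwise enrol n2 further patients.
    p_stop = binom_cdf(r1, n1, p)
    expected_n = n1 + (1.0 - p_stop) * n2
    return p_stop, expected_n

# Hypothetical design: 15 patients in stage 1, futility if <= 4 are
# progression-free at 2 months, then 28 more patients in stage 2
for p in (0.2, 0.4):
    p_stop, en = two_stage_operating_chars(15, 4, 28, p)
    print(p, round(p_stop, 3), round(en, 1))
```

Under the poor (null-like) rate the trial usually stops early with a small expected sample size, while under the promising rate it usually continues to full enrolment.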

8.
A placebo‐controlled randomized clinical trial is required to demonstrate that an experimental treatment is superior to its corresponding placebo on multiple coprimary endpoints. This is particularly true in the field of neurology. In fact, clinical trials for neurological disorders need to show the superiority of an experimental treatment over a placebo in two coprimary endpoints. Unfortunately, these trials often fail to detect a true treatment effect for the experimental treatment versus the placebo owing to an unexpectedly high placebo response rate. Sequential parallel comparison design (SPCD) can be used to address this problem. However, the SPCD has not yet been discussed in relation to clinical trials with coprimary endpoints. In this article, our aim was to develop a hypothesis‐testing method and a method for calculating the corresponding sample size for the SPCD with two coprimary endpoints. In a simulation, we show that the proposed hypothesis‐testing method achieves the nominal type I error rate and power and that the proposed sample size calculation method has adequate power accuracy. In addition, the usefulness of our methods is confirmed by returning to an SPCD trial with a single primary endpoint of Alzheimer disease‐related agitation.

9.
Multiple-arm dose-response superiority trials are widely studied for continuous and binary endpoints, while non-inferiority designs have been studied recently in two-arm trials. In this paper, a unified asymptotic formulation of sample size calculation for k-arm (k > 0) trials with different endpoints (continuous, binary, and survival) is derived for both superiority and non-inferiority designs. The proposed method covers sample size calculation for single-arm and k-arm (k ≥ 2) designs with survival endpoints, which has not previously been covered in the statistical literature. A simple, closed form for power and sample size calculations is derived from a contrast test. Application examples are provided. The effect of the contrasts on the power is discussed, and a ready-to-use SAS program for sample size calculation is provided.
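A closed-form contrast-based sample size calculation of the kind described can be sketched for the continuous-endpoint case; the per-arm formula below assumes equal allocation and a common known standard deviation, and the effect sizes and contrast coefficients are hypothetical:

```python
from math import sqrt, erf, ceil

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def inv_norm_cdf(p):
    lo, hi = -10.0, 10.0
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if norm_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def contrast_sample_size(mu, c, sigma, alpha=0.025, power=0.8):
    # Per-arm n for a one-sided contrast test with equal allocation:
    # n = (z_{1-alpha} + z_{power})^2 * sigma^2 * sum(c_i^2)
    #     / (sum(c_i * mu_i))^2
    assert abs(sum(c)) < 1e-9, "contrast coefficients must sum to zero"
    effect = sum(ci * mi for ci, mi in zip(c, mu))
    z = inv_norm_cdf(1.0 - alpha) + inv_norm_cdf(power)
    return ceil(z**2 * sigma**2 * sum(ci**2 for ci in c) / effect**2)

# Hypothetical 3-arm dose-response trial with a linear contrast
n = contrast_sample_size(mu=[0.0, 0.5, 1.0], c=[-1.0, 0.0, 1.0], sigma=2.0)
print(n)  # → 63 patients per arm
```

The choice of contrast coefficients drives the noncentrality of the test, which is why, as the abstract notes, the contrast strongly affects power.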

10.
Crossover designs have some advantages over standard clinical trial designs and they are often used in trials evaluating the efficacy of treatments for infertility. However, clinical trials of infertility treatments violate a fundamental condition of crossover designs, because women who become pregnant in the first treatment period are not treated in the second period. In previous research, to deal with this problem, some new designs, such as re‐randomization designs, and analysis methods including the logistic mixture model and the beta‐binomial mixture model were proposed. Although the performance of these designs and methods has previously been evaluated in large‐scale clinical trials with sample sizes of more than 1000 per group, the actual sample sizes of infertility treatment trials are usually around 100 per group. The most appropriate design and analysis for these moderate‐scale clinical trials are currently unclear. In this study, we conducted simulation studies to determine the appropriate design and analysis method of moderate‐scale clinical trials for irreversible endpoints by evaluating the statistical power and bias in the treatment effect estimates. The Mantel–Haenszel method had similar power and bias to the logistic mixture model. The crossover designs had the highest power and the smallest bias. We recommend using a combination of the crossover design and the Mantel–Haenszel method for two‐period, two‐treatment clinical trials with irreversible endpoints. Copyright © 2015 John Wiley & Sons, Ltd.
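The Mantel-Haenszel estimator recommended above can be sketched for period-stratified 2x2 tables; the pregnancy counts below are hypothetical, chosen only to show the mechanics:

```python
def mantel_haenszel_or(strata):
    # strata: list of 2x2 tables (a, b, c, d) =
    # (treated success, treated failure, control success, control failure);
    # returns the Mantel-Haenszel common odds ratio across strata.
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return num / den

# Hypothetical period-stratified counts from a two-period crossover
# (women pregnant in period 1 do not contribute to period 2)
tables = [(30, 70, 18, 82),   # period 1
          (22, 60, 12, 68)]   # period 2
print(round(mantel_haenszel_or(tables), 2))
```

Stratifying by period keeps the comparison within each period, which is what makes the method usable when the second period contains only the women who did not reach the irreversible endpoint.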

11.
Formal proof of efficacy of a drug requires that in a prospective experiment, superiority over placebo, or either superiority or at least non-inferiority to an established standard, is demonstrated. Traditionally one primary endpoint is specified, but various diseases exist where treatment success needs to be based on the assessment of two primary endpoints. With co-primary endpoints, both need to be "significant" as a prerequisite to claim study success. Here, no adjustment of the study-wise type I error is needed, but sample size is often increased to maintain the pre-defined power. Studies using an at-least-one concept have been proposed, in which study success is claimed if superiority is demonstrated for at least one of the endpoints. This is sometimes also called the dual primary endpoint concept, and an appropriate adjustment of the study-wise type I error is required. This concept is not covered in the European Guideline on multiplicity because study success can be claimed if one endpoint shows significant superiority, despite a possible deterioration in the other. In line with Röhmel's strategy, we discuss an alternative approach including non-inferiority hypothesis testing that avoids obvious contradictions to proper decision-making. This approach leads back to the co-primary endpoint assessment, and has the advantage that minimum requirements for endpoints can be modeled flexibly for several practical needs. Our simulations show that, if planning assumptions are correct, the proposed additional requirements improve interpretation with only a limited impact on power, that is, on sample size.

12.
Molecularly targeted, genomic-driven, and immunotherapy-based clinical trials continue to be advanced for the treatment of relapsed or refractory cancer patients, where the growth modulation index (GMI) is often considered a primary endpoint of treatment efficacy. However, little literature is available on trial designs with GMI as the primary endpoint. In this article, we derive a sample size formula for the score test under a log-linear model of the GMI. Study designs using the derived sample size formula are illustrated under a bivariate exponential model, the Weibull frailty model, and the generalized treatment effect size. The proposed designs provide sound statistical methods for a single-arm phase II trial with GMI as the primary endpoint.

13.
Many assumptions, including assumptions regarding treatment effects, are made at the design stage of a clinical trial for power and sample size calculations. It is desirable to check these assumptions during the trial by using blinded data. Methods for sample size re-estimation based on blinded data analyses have been proposed for normal and binary endpoints. However, there is a debate that no reliable estimate of the treatment effect can be obtained in a typical clinical trial situation. In this paper, we consider the case of a survival endpoint and investigate the feasibility of estimating the treatment effect in an ongoing trial without unblinding. We incorporate information from a surrogate endpoint and investigate three estimation procedures, including a classification method and two expectation–maximization (EM) algorithms. Simulations and a clinical trial example are used to assess the performance of the procedures. Our studies show that the EM algorithms depend strongly on the initial estimates of the model parameters. Despite utilization of a surrogate endpoint, all three methods have large variations in the treatment effect estimates and hence fail to provide a precise conclusion about the treatment effect. Copyright © 2012 John Wiley & Sons, Ltd.
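The mechanics of EM-based blinded estimation can be illustrated with a toy model; the abstract's setting is a survival endpoint with a surrogate, whereas the sketch below uses a deliberately favourable 50/50 normal-mixture case with known common variance and well-separated components (all values hypothetical). In realistic, overlapping settings the paper finds such estimates highly sensitive to the starting values:

```python
import random
from math import exp, sqrt, pi

def normal_pdf(x, mu, sd):
    return exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * sqrt(2.0 * pi))

def em_blinded_effect(data, mu0, mu1, sd, iters=100):
    # EM for a 50/50 two-component normal mixture with known common sd;
    # the blinded sample pools both arms, and the treatment effect is
    # the difference of the estimated component means.
    for _ in range(iters):
        # E-step: posterior probability each observation came from arm 1
        w = [normal_pdf(x, mu1, sd) /
             (normal_pdf(x, mu0, sd) + normal_pdf(x, mu1, sd))
             for x in data]
        # M-step: weighted component means
        mu1 = sum(wi * x for wi, x in zip(w, data)) / sum(w)
        mu0 = sum((1.0 - wi) * x for wi, x in zip(w, data)) / (len(data) - sum(w))
    return mu1 - mu0

# Simulated blinded data: control N(0,1) and treatment N(2,1), pooled
rng = random.Random(7)
blinded = ([rng.gauss(0.0, 1.0) for _ in range(150)] +
           [rng.gauss(2.0, 1.0) for _ in range(150)])
est = em_blinded_effect(blinded, mu0=0.5, mu1=1.5, sd=1.0)
print(round(est, 2))
```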

14.
Failure to adjust for informative non‐compliance, a common phenomenon in endpoint trials, can lead to a considerably underpowered study. However, standard methods for sample size calculation assume that non‐compliance is non‐informative. One existing method to account for informative non‐compliance, based on a two‐subpopulation model, is limited with respect to the degree of association between the risk of non‐compliance and the risk of a study endpoint that can be modelled, and with respect to the maximum allowable rates of non‐compliance and endpoints. In this paper, we introduce a new method that largely overcomes these limitations. This method is based on a model in which time to non‐compliance and time to endpoint are assumed to follow a bivariate exponential distribution. Parameters of the distribution are obtained by equating them with the study design parameters. The impact of informative non‐compliance is investigated across a wide range of conditions, and the method is illustrated by recalculating the sample size of a published clinical trial. Copyright © 2005 John Wiley & Sons, Ltd.

15.
In this article, we systematically study the optimal truncated group sequential test on binomial proportions. Through analysis of the cost structure, average test cost is introduced as a new optimality criterion. According to the new criterion, optimal tests are defined over the design parameters, including the boundaries, success discriminant value, stage sample vector, stage size, and maximum sample size. Since the computation time for finding optimal designs by exhaustive search is intolerably long, a group sequential sample space sorting method and procedures are developed to find near-optimal ones. In comparison with the international standard ISO 2859-1, the truncated group sequential designs proposed in this article can reduce average test costs by around 20%.

16.
The clinical efficacy of a new treatment may often be better evaluated by two or more co-primary endpoints. Recently, in pharmaceutical drug development, there has been increasing discussion regarding establishing statistically significant favorable results on more than one endpoint in comparisons between treatments, which is referred to as a problem of multiple co-primary endpoints. Several methods have been proposed for calculating the sample size required to design a trial with multiple correlated co-primary endpoints. However, because these methods require users to have considerable mathematical sophistication and knowledge of programming techniques, their application and spread may be restricted in practice. To improve the convenience of these methods, in this paper, we provide a useful formula with accompanying numerical tables for sample size calculations to design clinical trials with two treatments, where the efficacy of a new treatment is demonstrated on continuous co-primary endpoints. In addition, we provide some examples to illustrate the sample size calculations made using the formula. Using the formula and the tables, which can be read according to the patterns of correlations and effect size ratios expected for the multiple co-primary endpoints, makes it straightforward to evaluate the required sample size.
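Power for co-primary endpoints, where both tests must be significant, can be approximated by Monte Carlo; the standardized effect sizes, correlation, and two-sample z-test framing below are assumptions for illustration, not the paper's closed formula or tables:

```python
import random
from math import sqrt, erf

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def inv_norm_cdf(p):
    lo, hi = -10.0, 10.0
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if norm_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def coprimary_power(n_per_arm, d1, d2, rho, alpha=0.025,
                    sims=40000, seed=3):
    # Monte Carlo power when BOTH one-sided z-tests must be significant;
    # d1, d2 are standardized effects, rho the endpoint correlation.
    rng = random.Random(seed)
    z_a = inv_norm_cdf(1.0 - alpha)
    s1 = d1 * sqrt(n_per_arm / 2.0)
    s2 = d2 * sqrt(n_per_arm / 2.0)
    hits = 0
    for _ in range(sims):
        u = rng.gauss(0.0, 1.0)
        v = rho * u + sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
        if u + s1 > z_a and v + s2 > z_a:
            hits += 1
    return hits / sims

# Hypothetical effects of 0.4 on both endpoints, correlation 0.5
for n in (100, 130):
    print(n, round(coprimary_power(n, 0.4, 0.4, 0.5), 3))
```

Requiring joint significance lowers power below the single-endpoint level, so the sample size must exceed the single-endpoint requirement; higher correlation between the endpoints reduces the penalty.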

17.
We consider two-stage adaptive designs for clinical trials where data from the two stages are dependent. This occurs when additional data are obtained from patients during their second stage follow-up. While the proposed flexible approach allows modifications of trial design, sample size, or statistical analysis using the first stage data, there is no need for a complete prespecification of the adaptation rule. Methods are provided for an adaptive closed testing procedure, for calculating overall adjusted p-values, and for obtaining unbiased estimators and confidence bounds for parameters that are invariant to modifications. A motivating example is used to illustrate these methods.

18.
Recently, molecularly targeted agents and immunotherapy have been advanced for the treatment of relapsed or refractory cancer patients, where disease progression-free survival or event-free survival is often a primary endpoint for the trial design. However, existing methods for two-stage single-arm phase II trials with a time-to-event endpoint assume an exponential distribution, which limits their application in real trial designs. In this paper, we develop an optimal two-stage design that can be applied with four commonly used parametric survival distributions. The proposed method has advantages over existing methods in that the choice of the underlying survival model is more flexible and the power of the study is more adequately addressed. The proposed two-stage design can therefore be routinely used for single-arm phase II trial designs with a time-to-event endpoint, as a complement to the commonly used Simon's two-stage design for binary outcomes.

19.
Multiple testing procedures defined by directed, weighted graphs have recently been proposed as an intuitive visual tool for constructing multiple testing strategies that reflect the often complex contextual relations between hypotheses in clinical trials. Many well-known sequentially rejective tests, such as (parallel) gatekeeping tests or hierarchical testing procedures, are special cases of the graph-based tests. We generalize these graph-based multiple testing procedures to adaptive trial designs with an interim analysis. These designs permit mid-trial design modifications based on unblinded interim data as well as external information, while providing strong family-wise error rate control. To maintain the family-wise error rate, the adaptation rule need not be prespecified in detail. Because the adaptive test does not require knowledge of the multivariate distribution of test statistics, it is applicable in a wide range of scenarios, including trials with multiple treatment comparisons, endpoints, or subgroups, or combinations thereof. Examples of adaptations are dropping of treatment arms, selection of subpopulations, and sample size reassessment. If, at the interim analysis, it is decided to continue the trial as planned, the adaptive test reduces to the originally planned multiple testing procedure. Only if adaptations are actually implemented does an adjusted test need to be applied. The procedure is illustrated with a case study, and its operating characteristics are investigated by simulations. Copyright © 2014 John Wiley & Sons, Ltd.
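The sequentially rejective graph-based procedure underlying such tests (in its nonadaptive, Bonferroni-based form) can be sketched as follows; the two-hypothesis hierarchical graph at the end is a hypothetical example of the kind of strategy the abstract describes:

```python
def graph_based_test(p, w, g, alpha=0.05):
    # Sequentially rejective graph procedure: reject H_j whenever
    # p_j <= w_j * alpha, then propagate H_j's weight along the
    # transition matrix g and update g itself for the remaining
    # hypotheses. Returns the set of rejected hypothesis indices.
    m = len(p)
    active = set(range(m))
    rejected = set()
    while True:
        cand = [j for j in active if w[j] > 0 and p[j] <= w[j] * alpha]
        if not cand:
            return rejected
        j = cand[0]
        active.discard(j)
        rejected.add(j)
        w_new = w[:]
        g_new = [row[:] for row in g]
        for k in active:
            w_new[k] = w[k] + w[j] * g[j][k]
        for l in active:
            for k in active:
                if l == k:
                    continue
                denom = 1.0 - g[l][j] * g[j][l]
                g_new[l][k] = ((g[l][k] + g[l][j] * g[j][k]) / denom
                               if denom > 0 else 0.0)
        w, g = w_new, g_new

# Hypothetical hierarchical graph: all alpha on the primary H1,
# passed on to the secondary H2 once H1 is rejected
p = [0.01, 0.04]
w = [1.0, 0.0]
g = [[0.0, 1.0], [0.0, 0.0]]
print(sorted(graph_based_test(p, w, g)))  # → [0, 1]
```

With these weights the procedure is exactly hierarchical testing: H2 can only be tested (at the full alpha) after H1 is rejected, illustrating how familiar strategies arise as special cases of the graph.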

20.
In some exceptional circumstances, as in very rare diseases, nonrandomized one-arm trials are the sole source of evidence to demonstrate efficacy and safety of a new treatment. The design of such studies needs a sound methodological approach in order to provide reliable information, and the determination of the appropriate sample size still represents a critical step of this planning process. As, to our knowledge, no method exists for sample size calculation in one-arm trials with a recurrent event endpoint, we propose here a closed sample size formula. It is derived assuming a mixed Poisson process, and it is based on the asymptotic distribution of the one-sample robust nonparametric test recently developed for the analysis of recurrent event data. The validity of this formula in handling heterogeneity of event rates, both in time and between patients, and a time-varying treatment effect was demonstrated through extensive simulation studies. Moreover, although the method requires the specification of a process for event generation, it appears robust to erroneous definition of this process, provided that the number of events at the end of the study is similar to the one assumed in the planning phase. The motivating clinical context is a nonrandomized one-arm study of gene therapy in a very rare immunodeficiency in children (ADA-SCID), where a major endpoint is the recurrence of severe infections. Copyright © 2012 John Wiley & Sons, Ltd.
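The mixed Poisson assumption above can be simulated to show the between-patient heterogeneity (overdispersion) the robust test must handle; the base rate, follow-up, and gamma frailty parameters below are hypothetical, chosen only to illustrate the model:

```python
import random
from math import exp

def poisson_draw(rng, lam):
    # Knuth's multiplication method (adequate for moderate lam)
    L = exp(-lam)
    k, prod = 0, rng.random()
    while prod > L:
        k += 1
        prod *= rng.random()
    return k

def simulate_mixed_poisson(n_patients, base_rate, followup,
                           frailty_shape, seed=11):
    # Each patient's event rate is base_rate * Z, with a gamma frailty Z
    # of mean 1 and variance 1/frailty_shape; counts are Poisson given Z.
    rng = random.Random(seed)
    return [poisson_draw(rng, base_rate * followup *
                         rng.gammavariate(frailty_shape, 1.0 / frailty_shape))
            for _ in range(n_patients)]

counts = simulate_mixed_poisson(2000, base_rate=1.5, followup=2.0,
                                frailty_shape=2.0)
mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / (len(counts) - 1)
print(round(mean, 1), round(var, 1))  # variance well above the mean
```

Marginally the counts are negative binomial, so the sample variance clearly exceeds the mean; a plain Poisson sample size calculation would understate the variability that the robust nonparametric test accounts for.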


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号