Similar Literature
20 similar records retrieved.
1.
An important question that arises in clinical trials is how many additional observations, if any, are required beyond those originally planned. This has been answered satisfactorily for two-treatment double-blind clinical experiments. However, one may be interested in comparing a new treatment with more than one competitor. This problem is addressed in this investigation for responses from arbitrary distributions in which the mean and the variance are not functionally related. First, a solution is obtained for determining the initial sample size for a specified level of significance and power at a specified alternative. It is then shown that when the initial sample size is large, the nominal level of significance and the power at the pre-specified alternative are fairly robust under the proposed sample size re-estimation procedure. The results are applied to the blood coagulation functionality problem considered by Kropf et al. [Multiple comparisons of treatments with stable multivariate tests in a two-stage adaptive design, including a test for non-inferiority, Biom. J. 42(8) (2000), pp. 951–965].

2.
Optimal three-stage designs with equal sample sizes at each stage are presented and compared to fixed sample designs, fully sequential designs, designs restricted to use the fixed sample critical value at the final stage, and to modifications of other group sequential designs previously proposed in the literature. Typically, the greatest savings realized with interim analyses are obtained by the first interim look. More than 50% of the savings possible with a fully sequential design can be realized with a simple two-stage design. Three-stage designs can realize as much as 75% of the possible savings. Without much loss in efficiency, the designs can be modified so that the critical value at the final stage equals the usual fixed sample value while maintaining the overall level of significance, alleviating some potential confusion should a final stage be necessary. Some common group sequential designs, modified to allow early acceptance of the null hypothesis, are shown to be nearly optimal in some settings while performing poorly in others. An example is given to illustrate the use of several three-stage plans in the design of clinical trials.

3.
Baseline-adjusted analyses are commonly encountered in practice, and regulatory guidelines endorse this practice. Sample size calculations for such analyses require knowledge of the magnitude of nuisance parameters that are usually not given when the results of clinical trials are reported in the literature. It is therefore quite natural to start with a preliminary sample size calculated from the sparse information available in the planning phase and to re-estimate the value of the nuisance parameters (and with them the sample size) once a portion of the planned number of patients have completed the study. We investigate the characteristics of this internal pilot study design when an analysis of covariance with a normally distributed outcome and one random covariate is applied. For this purpose we first assess the accuracy of four approximate sample size formulae within the fixed sample size design. Then the performance of the recalculation procedure with respect to its actual Type I error rate and power characteristics is examined. The results of simulation studies show that this approach has favorable properties with respect to the Type I error rate and power. Together with its simplicity, these features should make it attractive for practical application.
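To make the internal pilot idea concrete, the following minimal Python sketch uses a standard large-sample ANCOVA approximation, n ≈ 2σ²(1 − ρ²)(z_{1−α/2} + z_{1−β})²/Δ² per arm, which is not necessarily one of the four formulae assessed in the paper; the function name and numerical values are purely illustrative.

```python
import math
from scipy.stats import norm

def ancova_n_per_arm(delta, sigma, rho, alpha=0.05, power=0.8):
    """Approximate per-arm sample size for a two-arm comparison adjusted for one
    baseline covariate: adjusting for a covariate with correlation rho reduces
    the outcome variance by the factor (1 - rho**2)."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    return math.ceil(2 * sigma**2 * (1 - rho**2) * (z_a + z_b)**2 / delta**2)

# planning-stage guess based on sparse prior information
n_planned = ancova_n_per_arm(delta=5.0, sigma=12.0, rho=0.5)

# internal pilot: re-estimate the nuisance parameters from the interim data
# and plug them back into the same formula
n_recalc = ancova_n_per_arm(delta=5.0, sigma=14.5, rho=0.4)
n_final = max(n_planned, n_recalc)   # a common "never decrease" restriction
```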

4.
5.
In clinical trials with a time-to-event endpoint, subjects are often at risk for events other than the one of interest. When the occurrence of one type of event precludes observation of any later events or alters the probability of subsequent events, the situation is one of competing risks. During the planning stage of a clinical trial with competing risks, it is important to take all possible events into account. This paper gives expressions for the power and sample size for competing risks based on a flexible parametric Weibull model. Nonuniform accrual to the study is considered, and an allocation ratio other than one may be used. Results are also provided for the case where two or more of the competing risks are of primary interest.
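As a rough illustration of how competing events enter such a calculation, the sketch below covers only an exponential special case with uniform accrual and administrative censoring, using the control-arm cause-specific hazards throughout; the paper's Weibull-based expressions with nonuniform accrual are more general, and all names and parameter values here are assumptions.

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

def n_total_competing_risks(hr1, lam1, lam2, accrual, followup,
                            alpha=0.05, power=0.8):
    """Rough total sample size for a 1:1 trial whose primary comparison is the
    cause-specific hazard of event type 1 in the presence of a competing event
    type 2 (exponential control-arm hazards lam1 and lam2, uniform accrual over
    [0, accrual], administrative censoring only).

    Step 1: Schoenfeld's formula gives the required number of type-1 events.
    Step 2: divide by the probability that a subject contributes such an event.
    """
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    d1 = 4 * z**2 / np.log(hr1)**2              # required type-1 events

    lam = lam1 + lam2
    def p_event1(u):                            # subject enrolled at time u
        tau = accrual + followup - u            # administrative follow-up time
        return lam1 / lam * (1 - np.exp(-lam * tau))
    prob1, _ = quad(p_event1, 0, accrual)
    prob1 /= accrual                            # average over uniform accrual
    return int(np.ceil(d1 / prob1))

# e.g. hazard ratio 0.7 for the event of interest, yearly hazards 0.10 and 0.05,
# 2 years of accrual and 3 further years of follow-up:
# n_total_competing_risks(hr1=0.7, lam1=0.10, lam2=0.05, accrual=2, followup=3)
```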

6.
In this paper, we propose a design that uses a short-term endpoint for accelerated approval at the interim analysis and a long-term endpoint for full approval at the final analysis, with sample size adaptation based on the long-term endpoint. Two sample size adaptation rules are compared: an adaptation rule that maintains the conditional power at a prespecified level, and a step-function adaptation rule that better addresses the bias issue. Three testing procedures are proposed: alpha splitting between the two endpoints; alpha exhaustive across the endpoints; and alpha exhaustive with an improved critical value based on the correlation. The family-wise error rate is proved to be strongly controlled for the two endpoints, the sample size adaptation, and the two analysis time points with the proposed designs. We show that using alpha-exhaustive designs greatly improves the power when both endpoints are effective, and that the power difference between the two adaptation rules is minimal. The proposed design can be extended to more general settings.

7.
Current survival techniques do not provide a good method for handling clinical trials with a large percentage of censored observations. This research proposes using time-dependent surrogates of survival as outcome variables, in conjunction with observed survival time, to improve the precision in comparing the relative effects of two treatments on the distribution of survival time. This is in contrast to the standard method used today, which uses only the marginal density of survival time, T, or only the marginal density of a surrogate, X, thereby ignoring some available information. The surrogate measure X may be a fixed value or a time-dependent variable, X(t). X is a summary measure of some of the covariates measured throughout the trial that provide additional information on a subject's survival time. It is possible to model these time-dependent covariate values and relate the parameters in the model to the parameters in the distribution of T given X. The result is that three new models are available for the analysis of clinical trials. All three models use the joint density of survival time and a surrogate measure. Given one of three different assumed mechanisms for the potential treatment effect, each of the three methods improves the precision of the treatment estimate.

8.
In drug development, bioequivalence studies are used to indirectly demonstrate clinical equivalence of a test formulation and a reference formulation of a specific drug by establishing their equivalence in bioavailability. These studies are typically run as crossover studies. In the planning phase of such trials, investigators and sponsors are often faced with a high variability in the coefficients of variation of the typical pharmacokinetic endpoints such as the area under the concentration curve or the maximum plasma concentration. Adaptive designs have recently been considered to deal with this uncertainty by adjusting the sample size based on the accumulating data. Because regulators generally favor sample size re-estimation procedures that maintain the blinding of the treatment allocations throughout the trial, we propose in this paper a blinded sample size re-estimation strategy and investigate its error rates. We show that the procedure, although blinded, can lead to some inflation of the type I error rate. In the context of an example, we demonstrate how this inflation of the significance level can be adjusted for to achieve control of the type I error rate at a pre-specified level. Furthermore, some refinements of the re-estimation procedure are proposed to improve the power properties, in particular in scenarios with small sample sizes.
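The sketch below illustrates the two ingredients such a strategy combines, under simplifying assumptions that are not taken from the paper: a z-approximation to the 2×2 crossover bioequivalence sample size (true test/reference ratio assumed to be exactly 1) and a blinded within-subject variance estimate pooled over both sequences. The comment on the bias of that estimate hints at why some type I error inflation can arise.

```python
import numpy as np
from scipy.stats import norm

def n_total_crossover_be(sigma_w, alpha=0.05, power=0.9, theta=np.log(1.25)):
    """Rough z-approximation to the total sample size of a 2x2 crossover
    bioequivalence trial with log-scale within-subject SD sigma_w, assuming the
    true test/reference ratio is exactly 1."""
    z = norm.ppf(1 - alpha) + norm.ppf(1 - (1 - power) / 2)
    return int(np.ceil(2 * sigma_w**2 * z**2 / theta**2))

def blinded_sigma_w(log_period1, log_period2):
    """Blinded estimate of the within-subject SD from period differences pooled
    over both sequences without using the treatment allocation. Its target,
    2*sigma_w^2, is inflated by a term driven by the (unknown) treatment
    effect -- one reason a blinded procedure can inflate the type I error."""
    d = np.asarray(log_period2) - np.asarray(log_period1)
    return np.sqrt(d.var(ddof=1) / 2)

# interim re-estimation after, say, the first 20 subjects:
# sigma_hat = blinded_sigma_w(p1, p2)
# n_new = max(n_planned, n_total_crossover_be(sigma_hat))
```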

9.
Owing to increased costs and competitive pressure, drug development is becoming more and more challenging. There is therefore a strong need to improve the efficiency of clinical research by developing and applying methods for quantitative decision making. In this context, the integrated planning of phase II/III programs plays an important role, as numerous quantities that are crucial for cost, benefit, and program success can be varied. Recently, a utility-based framework has been proposed for the optimal planning of phase II/III programs that puts the choice of decision boundaries and phase II sample sizes on a quantitative basis. However, this method is restricted to studies with a single time-to-event endpoint. We generalize this procedure to the setting of clinical trials with multiple endpoints and (asymptotically) normally distributed test statistics. Optimal phase II sample sizes and go/no-go decision rules are provided for both the “all-or-none” and “at-least-one” win criteria. Application of the proposed method is illustrated by drug development programs in the fields of Alzheimer disease and oncology.
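As a toy version of the utility-based idea (not the authors' framework, which handles multiple endpoints and data-driven phase III sizing), the Monte Carlo sketch below evaluates the expected utility of a single-endpoint program with a fixed phase III size over a small grid of phase II sample sizes and go thresholds; all costs, gains, and parameter values are invented for illustration.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(11)

def expected_utility(n2, kappa, theta=0.3, sigma=1.0, n3=500,
                     c2=0.01, c3=0.015, gain=30.0, alpha=0.025, n_sims=200_000):
    """Toy expected utility of a phase II/III program: run phase II with n2
    patients per arm, go to a fixed-size phase III if the estimated effect
    exceeds kappa, and earn `gain` only if phase III is then significant.
    Costs are per patient; all values are in arbitrary units."""
    theta2_hat = rng.normal(theta, sigma * np.sqrt(2 / n2), n_sims)   # phase II estimates
    go = theta2_hat > kappa
    z3 = rng.normal(theta / (sigma * np.sqrt(2 / n3)), 1.0, n_sims)   # phase III z-statistics
    success = go & (z3 > norm.ppf(1 - alpha))
    return -c2 * 2 * n2 - c3 * 2 * n3 * go.mean() + gain * success.mean()

# crude grid search over the phase II sample size and the go threshold
best = max(((expected_utility(n2, k), n2, k)
            for n2 in (50, 100, 150, 200)
            for k in (0.0, 0.1, 0.2)),
           key=lambda t: t[0])
```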

10.
The internal pilot study design allows the sample size to be modified during an ongoing study based on a blinded estimate of the variance, thus maintaining the trial integrity. Various blinded sample size re-estimation procedures have been proposed in the literature. We compare blinded sample size re-estimation procedures based on the one-sample variance of the pooled data with a blinded procedure that uses the randomization block information, with respect to the bias and variance of the variance estimators, the distribution of the resulting sample sizes, power, and the actual type I error rate. For reference, sample size re-estimation based on the unblinded variance is also included in the comparison. It is shown that using an unbiased variance estimator (such as the one based on the randomization block information) for sample size re-estimation does not guarantee that the desired power is achieved. Moreover, in situations that are common in clinical trials, the variance estimator that employs the randomization block length shows higher variability than the simple one-sample estimator, and so, in turn, does the sample size resulting from the related re-estimation procedure. This higher variability can lead to lower power, as demonstrated in the setting of noninferiority trials. In summary, the one-sample estimator obtained from the pooled data is extremely simple to apply, shows good performance, and is therefore recommended for application.
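A minimal sketch of the two blinded estimators being compared, assuming the outcomes are stored in randomization order and each block is balanced 1:1; variable names are illustrative, not the paper's notation.

```python
import numpy as np

def blinded_var_one_sample(y):
    """Blinded one-sample estimator: the sample variance of the pooled outcomes,
    ignoring treatment labels; it is biased upwards by a term involving the
    (unknown) treatment difference."""
    return np.asarray(y).var(ddof=1)

def blinded_var_block(y, block_size):
    """Blinded estimator using the randomization block structure: with 1:1
    allocation inside every block, each block total has the same mean, so the
    variance of the block totals equals block_size * sigma^2."""
    y = np.asarray(y)
    n_blocks = len(y) // block_size
    totals = y[: n_blocks * block_size].reshape(n_blocks, block_size).sum(axis=1)
    return totals.var(ddof=1) / block_size
```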

11.
For two-arm randomized phase II clinical trials, previous literature proposed an optimal design that minimizes the total sample sizes subject to multiple constraints on the standard errors of the estimated event rates and their difference. The original design is limited to trials with dichotomous endpoints. This paper extends the original approach to be applicable to phase II clinical trials with endpoints from the exponential dispersion family distributions. The proposed optimal design minimizes the total sample sizes needed to provide estimates of population means of both arms and their difference with pre-specified precision. Its applications on data from specific distribution families are discussed under multiple design considerations.

12.
Just as Bayes extensions of the frequentist optimal allocation design have been developed for the two-group case, we provide a Bayes extension of optimal allocation in the three-group case. We use the optimal allocations derived by Jeon and Hu [Optimal adaptive designs for binary response trials with three treatments. Statist Biopharm Res. 2010;2(3):310–318] and estimate the success probabilities for each treatment arm using a Bayes estimator. We also introduce a natural lead-in design that allows adaptation to begin as early in the trial as possible. Simulation studies show that the Bayesian adaptive designs simultaneously increase the power and the expected number of successfully treated patients compared to the balanced design. Compared to the standard adaptive design, the natural lead-in design introduced in this study produces a higher expected number of successes whilst preserving power.
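A generic sketch of the kind of Bayesian response-adaptive step described here; the target allocation function used below (proportional to √p) is only a placeholder for the Jeon–Hu optimal three-arm allocations, and the simple loop that adapts from the first patient only loosely mirrors the natural lead-in idea.

```python
import numpy as np

rng = np.random.default_rng(3)

def bayes_allocation_weights(successes, failures, a0=1.0, b0=1.0,
                             target=np.sqrt):
    """Generic response-adaptive step: replace the unknown success probabilities
    by Beta-posterior means and plug them into a target allocation function.
    np.sqrt is only a placeholder here; the paper plugs the Bayes estimates
    into the Jeon-Hu optimal three-arm allocations instead."""
    p_hat = (a0 + successes) / (a0 + b0 + successes + failures)
    w = target(p_hat)
    return w / w.sum()

# start from a uniform prior and adapt from the very first patient
true_rates = [0.3, 0.4, 0.5]                    # hypothetical scenario
successes, failures = np.zeros(3), np.zeros(3)
for _ in range(60):
    w = bayes_allocation_weights(successes, failures)
    arm = rng.choice(3, p=w)
    response = rng.random() < true_rates[arm]
    successes[arm] += response
    failures[arm] += 1 - response
```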

13.
A sample size justification is a vital part of any investigation. However, estimating the number of participants required to give meaningful results is not always straightforward. A number of components are required to facilitate a suitable sample size calculation. In this paper, the steps for conducting sample size calculations for superiority trials are summarised. Practical advice and examples are provided illustrating how to carry out the calculations by hand and using the app SampSize.
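For the continuous-outcome case, the superiority calculation reduces to the familiar textbook formula, sketched below; this is the standard approximation, not code taken from the paper or from the SampSize app.

```python
from math import ceil
from scipy.stats import norm

def n_per_arm_superiority(delta, sigma, alpha=0.05, power=0.9):
    """Per-arm sample size for a two-arm superiority trial with a normally
    distributed outcome and a two-sided test at level alpha."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return ceil(2 * sigma**2 * z**2 / delta**2)

# e.g. detecting a 10-point difference with SD 30 at 90% power:
# n_per_arm_superiority(10, 30)   # 190 per arm, before any drop-out inflation
```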

14.
The Bayesian paradigm provides an ideal platform to update uncertainties and carry them over into the future in the presence of data. Bayesian predictive power (BPP) reflects our belief in the eventual success of a clinical trial to meet its goals. In this paper we derive mathematical expressions for the most common types of outcomes in order to make BPP accessible to practitioners, to facilitate fast computations in adaptive trial design simulations that use interim futility monitoring, and to propose an organized BPP-based phase II-to-phase III design framework.
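A simulation-based sketch of BPP for a normal endpoint with a vague prior on the treatment effect is given below; the closed-form expressions derived in the paper would replace this Monte Carlo step, and all parameter names are illustrative.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

def bayesian_predictive_power(theta_hat1, n1, n, sigma, alpha=0.025,
                              n_sims=100_000):
    """Monte Carlo Bayesian predictive power at an interim analysis of a
    two-arm trial with a normal endpoint and a vague prior on the effect.

    theta_hat1 : interim estimate of the treatment difference (n1 per arm)
    n          : planned final sample size per arm
    """
    se1 = sigma * np.sqrt(2 / n1)                       # SE of the interim estimate
    theta = rng.normal(theta_hat1, se1, n_sims)         # posterior draws of the effect
    n2 = n - n1
    theta_hat2 = rng.normal(theta, sigma * np.sqrt(2 / n2))   # predicted stage-2 estimates
    theta_final = (n1 * theta_hat1 + n2 * theta_hat2) / n     # pooled final estimate
    z_final = theta_final / (sigma * np.sqrt(2 / n))
    return np.mean(z_final > norm.ppf(1 - alpha))

# e.g. bayesian_predictive_power(theta_hat1=0.2, n1=100, n=300, sigma=1.0)
```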

15.
Nonparametric approaches to the analysis of multiple endpoints in clinical studies can be of particular value when the endpoints are heterogeneous or distributional assumptions are suspect. We describe a multivariate Terpstra-Jonckheere U-statistic for assessing multiple endpoints with ordered alternatives, and illustrate its use with data arising from a recent clinical study.
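For a single endpoint the Terpstra-Jonckheere statistic is just a sum of pairwise Mann–Whitney counts, as the sketch below shows; the multivariate combination across endpoints used in the paper is not reproduced here.

```python
import numpy as np

def jonckheere_terpstra(samples):
    """Terpstra-Jonckheere statistic for a single endpoint: the sum of
    Mann-Whitney counts over all ordered pairs of groups, with ties counted as
    1/2. Large values support an increasing trend across the ordered groups."""
    jt = 0.0
    for i in range(len(samples)):
        for j in range(i + 1, len(samples)):
            x = np.asarray(samples[i], dtype=float)[:, None]
            y = np.asarray(samples[j], dtype=float)[None, :]
            jt += np.sum(y > x) + 0.5 * np.sum(y == x)
    return jt

# one statistic per endpoint; a multivariate test then standardizes and
# combines them jointly (the combination step is not shown here)
groups_endpoint1 = [[3, 5, 4], [6, 5, 7], [8, 9, 7]]
jt1 = jonckheere_terpstra(groups_endpoint1)
```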

16.
The feasibility of a new clinical trial may be increased by incorporating historical data from previous trials. In the particular case where only data from a single historical trial are available, there exists no clear recommendation in the literature regarding the most favorable approach. A main problem with the incorporation of historical data is the possible inflation of the type I error rate. A way to control this type of error is the so-called power prior approach. This Bayesian method does not “borrow” the full historical information but uses a parameter 0 ≤ δ ≤ 1 to determine the amount of borrowed data. Based on the methodology of the power prior, we propose a frequentist framework that allows incorporation of historical data from both arms of two-armed trials with binary outcome, while simultaneously controlling the type I error rate. It is shown that for any specific trial scenario a value δ > 0 can be determined such that the type I error rate falls below the prespecified significance level. The magnitude of this value of δ depends on the characteristics of the data observed in the historical trial. Conditionally on these characteristics, an increase in power as compared to a trial without borrowing may result. Similarly, we propose methods by which the required sample size can be reduced. The results are discussed and compared to those obtained in a Bayesian framework. Application is illustrated by a clinical trial example.
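The basic mechanics of the power prior are easiest to see in the conjugate binary case, sketched below; this is only the Bayesian building block, not the authors' frequentist calibration of δ for type I error control.

```python
from scipy.stats import beta

def power_prior_posterior(x_cur, n_cur, x_hist, n_hist, delta, a0=1.0, b0=1.0):
    """Posterior of a response rate under a fixed-delta power prior with a
    conjugate Beta(a0, b0) initial prior: raising the binomial historical
    likelihood to the power delta simply down-weights the historical successes
    and failures by delta."""
    a_post = a0 + x_cur + delta * x_hist
    b_post = b0 + (n_cur - x_cur) + delta * (n_hist - x_hist)
    return beta(a_post, b_post)

# e.g. borrow 30% of a historical arm with 40/100 responders:
# post = power_prior_posterior(x_cur=18, n_cur=50, x_hist=40, n_hist=100, delta=0.3)
# post.mean(), post.interval(0.95)
```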

17.
18.
A sample size justification is a vital part of any trial design. However, estimating the number of participants required to give a meaningful result is not always straightforward. A number of components are required to facilitate a suitable sample size calculation. In this paper, the steps for conducting sample size calculations for non-inferiority and equivalence trials are summarised. Practical advice and examples are provided that illustrate how to carry out the calculations by hand and using the app SampSize.
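For continuous outcomes, the corresponding textbook approximations are sketched below, assuming a true difference of zero; again these are the standard formulas, not code from the paper or the SampSize app.

```python
from math import ceil
from scipy.stats import norm

def n_per_arm_noninferiority(margin, sigma, alpha=0.025, power=0.9):
    """Per-arm size for a non-inferiority comparison of two means with a
    one-sided test at level alpha, assuming the true difference is zero."""
    z = norm.ppf(1 - alpha) + norm.ppf(power)
    return ceil(2 * sigma**2 * z**2 / margin**2)

def n_per_arm_equivalence(margin, sigma, alpha=0.05, power=0.9):
    """Approximate per-arm size for an equivalence (two one-sided tests)
    comparison, again assuming the true difference is zero."""
    z = norm.ppf(1 - alpha) + norm.ppf(1 - (1 - power) / 2)
    return ceil(2 * sigma**2 * z**2 / margin**2)
```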

19.
One of the most important steps in the design of a pharmaceutical clinical trial is the estimation of the sample size. For a superiority trial, the sample size formula (to achieve a stated power) would be based on a given clinically meaningful difference and a value for the population variance. The formula is typically used as though this population variance is known, whereas in reality it is unknown and is replaced by an estimate with its associated uncertainty. The variance estimate would be derived from an earlier similarly designed study (or an overall estimate from several previous studies), and its precision would depend on its degrees of freedom. This paper provides a solution for the calculation of sample sizes that allows for the imprecision in the estimate of the sample variance. It shows that traditional formulae give sample sizes that are too small because they do not allow for this uncertainty, a deficiency that becomes more acute as the degrees of freedom decrease. It is recommended that the methodology described in this paper be used whenever the sample variance has fewer than 200 degrees of freedom.
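The shortfall can be illustrated with a short Monte Carlo sketch: size the trial with the traditional formula, but let the plugged-in variance estimate carry a finite number of degrees of freedom and average the resulting true power. This only demonstrates the phenomenon; it is not the adjustment method proposed in the paper.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)

def expected_power_plug_in(sigma=1.0, delta=0.5, df=20, alpha=0.05,
                           target_power=0.9, n_sims=100_000):
    """Average achieved power when the trial is sized with the traditional
    formula but the variance is replaced by an estimate carrying `df` degrees
    of freedom; the shortfall versus the target grows as df shrinks."""
    z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(target_power)
    s2 = sigma**2 * rng.chisquare(df, n_sims) / df       # sampled variance estimates
    n = np.ceil(2 * s2 * (z_a + z_b)**2 / delta**2)      # plug-in per-arm sizes
    return norm.cdf(delta / (sigma * np.sqrt(2 / n)) - z_a).mean()

# expected_power_plug_in(df=10)    # clearly below the 0.90 target
# expected_power_plug_in(df=200)   # close to the nominal 0.90
```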

20.
Clinical trials are often designed to compare several treatments with a common control arm in pairwise fashion. In this paper we study optimal designs for such studies, based on minimizing the total number of patients required to achieve a given level of power. A common approach when designing studies to compare several treatments with a control is to achieve the desired power for each individual pairwise treatment comparison. However, it is often more appropriate to characterize power in terms of the family of null hypotheses being tested, and to control the probability of rejecting all, or alternatively any, of these individual hypotheses. While all approaches lead to unbalanced designs with more patients allocated to the control arm, it is found that the optimal design and required number of patients can vary substantially depending on the chosen characterization of power. The methods make allowance for both continuous and binary outcomes and are illustrated with reference to two clinical trials, one involving multiple doses compared to placebo and the other involving combination therapy compared to mono-therapies. In one example a 55% reduction in sample size is achieved through an optimal design combined with the appropriate characterization of power.
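When power is required for each pairwise comparison separately, the optimization has a well-known closed form, sketched below under the simplifying assumptions of a known variance, a one-sided z-test, and no multiplicity adjustment; the paper's family-wise characterizations of power shift this optimum, which is exactly its point.

```python
from math import ceil, sqrt
from scipy.stats import norm

def sqrt_k_allocation(k, delta, sigma, alpha=0.025, power=0.9):
    """Per-arm sizes for comparing k treatments with a shared control when the
    power requirement is imposed on each individual pairwise comparison
    (one-sided alpha; no multiplicity adjustment shown). Minimizing the total
    sample size under this criterion gives the familiar control:treatment
    allocation ratio of sqrt(k):1."""
    z = norm.ppf(1 - alpha) + norm.ppf(power)
    n_t = ceil(sigma**2 * z**2 * (1 + 1 / sqrt(k)) / delta**2)  # each treatment arm
    n_c = ceil(sqrt(k) * n_t)                                   # control arm
    return n_c, n_t

# e.g. three active doses versus placebo: sqrt_k_allocation(3, delta=0.4, sigma=1.0)
```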
