Similar Documents
20 similar documents found (search time: 46 ms)
1.
Adaptive sample size redetermination (SSR) for clinical trials consists of examining early subsets of on‐trial data to adjust prior estimates of statistical parameters and sample size requirements. Blinded SSR, in particular, while in use already, seems poised to proliferate even further because it obviates many logistical complications of unblinded methods and it generally introduces little or no statistical or operational bias. On the other hand, current blinded SSR methods offer little to no new information about the treatment effect (TE); the obvious resulting problem is that the TE estimate scientists might simply ‘plug in’ to the sample size formulae could be severely wrong. This paper proposes a blinded SSR method that formally synthesizes sample data with prior knowledge about the TE and the within‐treatment variance. It evaluates the method in terms of the type 1 error rate, the bias of the estimated TE, and the average deviation from the targeted power. The method is shown to reduce this average deviation, in comparison with another established method, over a range of situations. The paper illustrates the use of the proposed method with an example. Copyright © 2012 John Wiley & Sons, Ltd.

2.
Information before unblinding regarding the success of confirmatory clinical trials is highly uncertain. Current techniques that use point estimates of auxiliary parameters for estimating the expected blinded sample size: (i) fail to describe the range of likely sample sizes obtained after the anticipated data are observed, and (ii) fail to adjust to the changing patient population. Sequential MCMC-based algorithms are implemented for the purpose of sample size adjustment. The uncertainty arising from clinical trials is characterized by filtering later auxiliary parameters through their earlier counterparts and employing posterior distributions to estimate sample size and power. The use of approximate expected power estimates to determine the required additional sample size is closely related to techniques employing Simple Adjustments or the EM algorithm. By contrast with these, our proposed methodology provides intervals for the expected sample size using the posterior distribution of auxiliary parameters. Future decisions about additional subjects are better informed due to our ability to account for subject response heterogeneity over time. We apply the proposed methodologies to a depression trial. Our proposed blinded procedures should be considered for most studies due to their ease of implementation.

3.
In studies with recurrent event endpoints, misspecified assumptions of event rates or dispersion can lead to underpowered trials or overexposure of patients. Specification of overdispersion is often a particular problem as it is usually not reported in clinical trial publications. Changing event rates over the years have been described for some diseases, adding to the uncertainty in planning. To mitigate the risks of inadequate sample sizes, internal pilot study designs have been proposed with a preference for blinded sample size reestimation procedures, as they generally do not affect the type I error rate and maintain trial integrity. Blinded sample size reestimation procedures are available for trials with recurrent events as endpoints. However, the variance in the reestimated sample size can be considerable, in particular with early sample size reviews. Motivated by a randomized controlled trial in paediatric multiple sclerosis, a rare neurological condition in children, we apply the concept of blinded continuous monitoring of information, which is known to reduce the variance in the resulting sample size. Assuming negative binomial distributions for the counts of recurrent relapses, we derive information criteria and propose blinded continuous monitoring procedures. The operating characteristics of these are assessed in Monte Carlo trial simulations demonstrating favourable properties with regard to type I error rate, power, and stopping time, i.e., sample size.
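A one-shot version of a blinded sample size review for recurrent events can be sketched as follows. This is an illustrative simplification, not the continuous information-monitoring procedure above: a standard negative binomial sample size formula is assumed, the rate and overdispersion are moment estimates from the pooled counts (deliberately ignoring the between-arm mean difference), and the function name and interim data are invented.

```python
from math import ceil, log
from statistics import NormalDist, mean, variance

def blinded_nb_ssr(counts, t, theta, alpha=0.05, power=0.8):
    """Blinded re-estimation of the per-group sample size for a negative
    binomial endpoint.  The overall event rate and the overdispersion k
    are estimated by moments from the pooled counts (treatment labels
    unknown); the trial is assumed to test a rate ratio theta with equal
    follow-up time t per patient."""
    xbar, s2 = mean(counts), variance(counts)
    lam = xbar / t                           # blinded overall event rate
    k = max((s2 - xbar) / xbar ** 2, 0.0)    # moment estimate of dispersion
    lam1 = lam * 2 / (1 + theta)             # implied arm rates under the
    lam2 = lam * 2 * theta / (1 + theta)     # assumed rate ratio theta
    z = NormalDist().inv_cdf
    var_total = (1 / (lam1 * t) + k) + (1 / (lam2 * t) + k)
    n = (z(1 - alpha / 2) + z(power)) ** 2 * var_total / log(theta) ** 2
    return ceil(n)                           # per-group sample size

# interim look: pooled relapse counts from 10 patients, 2 years follow-up,
# assumed rate ratio 0.5
n_new = blinded_nb_ssr([0, 1, 1, 2, 2, 2, 3, 3, 4, 6], 2.0, 0.5)
```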

4.
In drug development, bioequivalence studies are used to indirectly demonstrate clinical equivalence of a test formulation and a reference formulation of a specific drug by establishing their equivalence in bioavailability. These studies are typically run as crossover studies. In the planning phase of such trials, investigators and sponsors are often faced with a high variability in the coefficients of variation of the typical pharmacokinetic endpoints such as the area under the concentration curve or the maximum plasma concentration. Adaptive designs have recently been considered to deal with this uncertainty by adjusting the sample size based on the accumulating data. Because regulators generally favor sample size re‐estimation procedures that maintain the blinding of the treatment allocations throughout the trial, we propose in this paper a blinded sample size re‐estimation strategy and investigate its error rates. We show that the procedure, although blinded, can lead to some inflation of the type I error rate. In the context of an example, we demonstrate how this inflation of the significance level can be adjusted for to achieve control of the type I error rate at a pre‐specified level. Furthermore, some refinements of the re‐estimation procedure are proposed to improve the power properties, in particular in scenarios with small sample sizes. Copyright © 2014 John Wiley & Sons, Ltd.

5.
For binary endpoints, the required sample size depends not only on the known values of significance level, power and clinically relevant difference but also on the overall event rate. However, the overall event rate may vary considerably between studies and, as a consequence, the assumptions made in the planning phase on this nuisance parameter are to a great extent uncertain. The internal pilot study design is an appealing strategy to deal with this problem. Here, the overall event probability is estimated during the ongoing trial based on the pooled data of both treatment groups and, if necessary, the sample size is adjusted accordingly. From a regulatory viewpoint, besides preserving blinding, it is required that any consequences for the Type I error rate be explained. We present analytical computations of the actual Type I error rate for the internal pilot study design with binary endpoints and compare them with the actual level of the chi‐square test for the fixed sample size design. A method is given that permits control of the specified significance level for the chi‐square test under blinded sample size recalculation. Furthermore, the properties of the procedure with respect to power and expected sample size are assessed. Throughout the paper, both the situation of equal sample size per group and unequal allocation ratio are considered. The method is illustrated with application to a clinical trial in depression. Copyright © 2004 John Wiley & Sons Ltd.
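A minimal sketch of the blinded recalculation step for a binary endpoint, assuming the standard normal-approximation sample size formula for comparing two proportions; the function name and the interim numbers are illustrative, not taken from the paper.

```python
from math import ceil
from statistics import NormalDist

def blinded_ssr_binary(pooled_events, pooled_n, delta, alpha=0.05, power=0.8):
    """Re-estimate the per-group sample size for a two-arm trial with a
    binary endpoint, using only the blinded (pooled) overall event rate
    as the nuisance parameter."""
    p_bar = pooled_events / pooled_n        # blinded overall event rate
    p1 = p_bar + delta / 2                  # implied arm rates under the
    p2 = p_bar - delta / 2                  # assumed difference delta
    z = NormalDist().inv_cdf
    n = (z(1 - alpha / 2) + z(power)) ** 2 \
        * (p1 * (1 - p1) + p2 * (1 - p2)) / delta ** 2
    return ceil(n)                          # per-group sample size

# interim review: 60 events among the first 200 pooled patients,
# assumed clinically relevant difference of 0.15
n_new = blinded_ssr_binary(60, 200, 0.15)
```

The key point is that only the pooled rate enters the calculation, so the treatment allocation stays blinded.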

6.
The internal pilot study design allows for modifying the sample size during an ongoing study based on a blinded estimate of the variance, thus maintaining the trial integrity. Various blinded sample size re‐estimation procedures have been proposed in the literature. We compare the blinded sample size re‐estimation procedures based on the one‐sample variance of the pooled data with a blinded procedure using the randomization block information with respect to bias and variance of the variance estimators, and the distribution of the resulting sample sizes, power, and actual type I error rate. For reference, sample size re‐estimation based on the unblinded variance is also included in the comparison. It is shown that using an unbiased variance estimator (such as the one using the randomization block information) for sample size re‐estimation does not guarantee that the desired power is achieved. Moreover, in situations that are common in clinical trials, the variance estimator that employs the randomization block length shows a higher variability than the simple one‐sample estimator, and so, in turn, does the sample size resulting from the related re‐estimation procedure. This higher variability can lead to a lower power, as was demonstrated in the setting of noninferiority trials. In summary, the one‐sample estimator obtained from the pooled data is extremely simple to apply, shows good performance, and is therefore recommended for application. Copyright © 2013 John Wiley & Sons, Ltd.
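The recommended one-sample estimator is simple enough to show in a few lines. A sketch, assuming the usual normal-approximation sample size formula for a two-arm comparison of means; the function name and inputs are illustrative.

```python
from math import ceil
from statistics import NormalDist, variance

def blinded_ssr_normal(pooled_obs, delta, alpha=0.05, power=0.8):
    """Blinded sample size re-estimation from the one-sample variance of
    the pooled data (treatment labels unknown).  Note this estimator
    overstates the within-group variance by roughly delta**2 / 4, which
    is one reason it tends to err on the conservative side."""
    s2 = variance(pooled_obs)               # one-sample (pooled) variance
    z = NormalDist().inv_cdf
    n = 2 * s2 * (z(1 - alpha / 2) + z(power)) ** 2 / delta ** 2
    return ceil(n)                          # per-group sample size
```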

7.
We present a bootstrap Monte Carlo algorithm for computing the power function of the generalized correlation coefficient. The proposed method makes no assumptions about the form of the underlying probability distribution and may be used with observed data to approximate the power function and with pilot data for sample size determination. In particular, the bootstrap power functions of the Pearson product moment correlation and the Spearman rank correlation are examined. Monte Carlo experiments indicate that the proposed algorithm is reliable and compares well with the asymptotic values. An example which demonstrates how this method can be used for sample size determination and power calculations is provided.
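A minimal sketch of such a bootstrap power algorithm for the Pearson correlation, assuming a Fisher-z test of rho = 0; the paper's generalized correlation coefficient covers more cases, and the names and defaults here are illustrative.

```python
import math
import random
from statistics import NormalDist, mean

def pearson_r(x, y):
    """Pearson product-moment correlation of two equal-length sequences."""
    mx, my = mean(x), mean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def bootstrap_power(pairs, n, n_boot=2000, alpha=0.05, seed=1):
    """Approximate the power of the test of rho = 0 at sample size n by
    resampling n pairs (with replacement) from the pilot data and applying
    the Fisher z-test to each bootstrap sample."""
    rng = random.Random(seed)
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    rejections = 0
    for _ in range(n_boot):
        x, y = zip(*(rng.choice(pairs) for _ in range(n)))
        r = min(max(pearson_r(x, y), -0.999999), 0.999999)  # guard atanh
        if abs(math.atanh(r)) * math.sqrt(n - 3) > z_crit:
            rejections += 1
    return rejections / n_boot
```

Running the function over a grid of candidate n values and picking the smallest one whose estimated power exceeds the target gives the sample size determination use described in the abstract.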

8.
In single-arm clinical trials with survival outcomes, the Kaplan–Meier estimator and its confidence interval are widely used to assess survival probability and median survival time. Since the asymptotic normality of the Kaplan–Meier estimator is a common result, the sample size calculation methods have not been studied in depth. An existing sample size calculation method is founded on the asymptotic normality of the Kaplan–Meier estimator using the log transformation. However, the small sample properties of the log transformed estimator are quite poor with small sample sizes (typical in single-arm trials), and the existing method uses an inappropriate standard normal approximation to calculate sample sizes. These issues can seriously influence the accuracy of results. In this paper, we propose alternative methods to determine sample sizes based on a valid standard normal approximation with several transformations that may give an accurate normal approximation even with small sample sizes. In numerical evaluations via simulations, some of the proposed methods provided more accurate results, and the empirical power of the proposed method with the arcsine square-root transformation tended to be closer to a prescribed power than the other transformations. These results were supported when methods were applied to data from three clinical trials.
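To illustrate why the choice of transformation matters, here is a sketch of an arcsine-square-root based sample size calculation for a one-sample test of a landmark survival probability. Censoring is ignored, so the Kaplan–Meier estimate reduces to a binomial proportion; this is a simplification of the paper's setting, and the function name is invented.

```python
from math import asin, ceil, sqrt
from statistics import NormalDist

def n_arcsine(s0, s1, alpha=0.05, power=0.8):
    """Sample size for a one-sample test of the survival probability at a
    landmark time, H0: S(t) = s0 versus the alternative s1, based on the
    arcsine-square-root (variance-stabilising) transformation, under which
    the estimator's variance is approximately 1 / (4n)."""
    z = NormalDist().inv_cdf
    effect = asin(sqrt(s1)) - asin(sqrt(s0))   # effect on transformed scale
    return ceil((z(1 - alpha / 2) + z(power)) ** 2 / (4 * effect ** 2))
```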

9.
The poor performance of the Wald method for constructing confidence intervals (CIs) for a binomial proportion has been demonstrated in a vast literature. The related problem of sample size determination needs to be updated, and comparative studies are essential to understanding the performance of alternative methods. In this paper, the sample size is obtained for the Clopper–Pearson, Bayesian (Uniform and Jeffreys priors), Wilson, Agresti–Coull, Anscombe, and Wald methods. Two two-step procedures are used: one based on the expected length (EL) of the CI and another on its first-order approximation. In the first step, all possible solutions that satisfy the optimal criterion are obtained. In the second step, a single solution is proposed according to a new criterion (e.g. highest coverage probability (CP)). In practice, a sample size reduction is expected; therefore, we explore the behavior of the methods admitting 30% and 50% losses. For all the methods, the ELs are inflated, as expected, but the coverage probabilities remain close to the original target (with few exceptions). It is not easy to suggest a method that is optimal throughout the range (0, 1) for p. Depending on whether the goal is to achieve CP approximately or above the nominal level, different recommendations are made.
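A first-order version of the length-based criterion is easy to sketch: evaluate the interval length at a plausible p (rather than averaging over the binomial distribution of the observed proportion, as the exact expected-length criterion does) and search for the smallest n meeting a target length. The function names are illustrative; only the Wilson and Wald lengths are shown.

```python
from math import sqrt
from statistics import NormalDist

Z = NormalDist().inv_cdf(0.975)   # 95% confidence

def wilson_length(n, p):
    """Length of the Wilson score interval at observed proportion p."""
    return 2 * Z * sqrt(n) / (n + Z ** 2) * sqrt(p * (1 - p) + Z ** 2 / (4 * n))

def wald_length(n, p):
    """Length of the Wald interval at observed proportion p."""
    return 2 * Z * sqrt(p * (1 - p) / n)

def n_for_length(target, p, method=wilson_length):
    """Smallest n whose interval length at p is at most the target --
    a first-order stand-in for the expected-length criterion."""
    n = 1
    while method(n, p) > target:
        n += 1
    return n
```

Comparing the two at p = 0.5 shows the Wilson interval reaching a given length with slightly fewer observations than the Wald interval.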

10.
For time-to-event outcomes, current methods for sample size determination are based on the proportional hazards model. However, if the proportionality assumption fails to capture the relationship between the event time and covariates, the proportional hazards model is not suitable for analyzing survival data. The accelerated failure time (AFT) model is an alternative method for dealing with survival data. In this paper, we address the setting in which the relationship between the event time and the treatment effect follows the AFT model when designing a multiregional trial. The log-rank test is employed to deal with heterogeneous effect sizes among regions. The test statistic for the overall treatment effect is used to determine the total sample size for a multiregional trial, and the proposed criteria are used to rationally partition the sample size across regions.

11.
This paper presents practical approaches to the problem of sample size re-estimation in the case of clinical trials with survival data when proportional hazards can be assumed. When data are readily available at the time of the review, on a full range of survival experiences across the recruited patients, it is shown that, as expected, performing a blinded re-estimation procedure is straightforward and can help to maintain the trial's pre-specified error rates. Two alternative methods for dealing with the situation where limited survival experiences are available at the time of the sample size review are then presented and compared. In this instance, extrapolation is required in order to undertake the sample size re-estimation. Worked examples, together with results from a simulation study are described. It is concluded that, as in the standard case, use of either extrapolation approach successfully protects the trial error rates.

12.
Conditional power calculations are frequently used to guide the decision whether or not to stop a trial for futility or to modify planned sample size. These ignore the information in short‐term endpoints and baseline covariates, and thereby do not make fully efficient use of the information in the data. We therefore propose an interim decision procedure based on the conditional power approach which exploits the information contained in baseline covariates and short‐term endpoints. We will realize this by considering the estimation of the treatment effect at the interim analysis as a missing data problem. This problem is addressed by employing specific prediction models for the long‐term endpoint which enable the incorporation of baseline covariates and multiple short‐term endpoints. We show that the proposed procedure leads to an efficiency gain and a reduced sample size, without compromising the Type I error rate of the procedure, even when the adopted prediction models are misspecified. In particular, implementing our proposal in the conditional power approach enables earlier decisions relative to standard approaches, whilst controlling the probability of an incorrect decision. This time gain results in a lower expected number of recruited patients in case of stopping for futility, such that fewer patients receive the futile regimen. We explain how these methods can be used in adaptive designs with unblinded sample size re‐assessment based on the inverse normal P‐value combination method to control Type I error. We support the proposal by Monte Carlo simulations based on data from a real clinical trial.

13.
Phase II clinical trials investigate whether a new drug or treatment has sufficient evidence of effectiveness against the disease under study. Two-stage designs are popular for phase II since they can stop in the first stage if the drug is ineffective. Investigators often face difficulties in determining the target response rates, and adaptive designs can help to set the target response rate tested in the second stage based on the number of responses observed in the first stage. Popular adaptive designs consider two alternate response rates, and they generally minimise the expected sample size at the maximum uninteresting response rate. Moreover, these designs consider only futility as the reason for early stopping and have high expected sample sizes if the provided drug is effective. Motivated by this problem, we propose an adaptive design that enables us to terminate the single-arm trial at the first stage for efficacy and conclude which alternate response rate to choose. Comparing the proposed design with a popular adaptive design from the literature reveals that the expected sample size decreases notably if either of the two target response rates is correct. In contrast, the expected sample size remains almost the same under the null hypothesis.

14.
The planning of bioequivalence (BE) studies, as for any clinical trial, requires a priori specification of an effect size for the determination of power and an assumption about the variance. The specified effect size may be overly optimistic, leading to an underpowered study. The assumed variance can be either too small or too large, leading, respectively, to studies that are underpowered or overly large. There has been much work in the clinical trials field on various types of sequential designs that include sample size reestimation after the trial is started, but these have seen little use in BE studies. The purpose of this work was to validate at least one such method for crossover design BE studies. Specifically, we considered sample size reestimation for a two-stage trial based on the variance estimated from the first stage. We identified two methods based on Pocock's method for group sequential trials that met our requirement for at most a negligible increase in type I error rate.

15.
Monitoring clinical trials in nonfatal diseases where ethical considerations do not dictate early termination upon demonstration of efficacy often requires examining the interim findings to assure that the protocol-specified sample size will provide sufficient power against the null hypothesis when the alternative hypothesis is true. The sample size may be increased, if necessary, to assure adequate power. This paper presents a new method for carrying out such interim power evaluations for observations from normal distributions without unblinding the treatment assignments or discernibly affecting the Type 1 error rate. Simulation studies confirm the expected performance of the method.

16.
This article considers experimental costs, in addition to power evaluation, in order to determine the sample size of an experiment. We focus on the use of standard tools of decision theory in the context of sample size determination. The loss function is defined from the perspective of an experimenter who adopts the classical frequentist approach, and the risk function is computed. We then show the behavior of the risk function for the two-sample t-test in small, medium-sized, and large sample experimental settings. Moreover, an objective criterion for a convenient sample size choice is introduced. Finally, a practical example of sample size determination, which also considers risk computation, is shown.

17.
A new method of statistical classification (discrimination) is proposed. The method is most effective for high dimension, low sample size data. It uses a robust mean difference as the direction vector and locates the classification boundary by minimizing the error rates. Asymptotic results for assessment and comparison with several popular methods are obtained using a type of asymptotics with finite sample size and infinite dimension. The value of the proposed approach is demonstrated by simulations. Real data examples are used to illustrate the performance of different classification methods.
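The mean-difference direction rule at the heart of such classifiers is simple to sketch. This toy version uses ordinary (non-robust) means and places the boundary at the midpoint rather than minimizing error rates, so it is only a simplified illustration of the idea.

```python
from statistics import mean

def mean_difference_classifier(class1, class2):
    """Train a linear rule: project observations onto the difference of the
    class mean vectors and classify by which side of the midpoint score the
    projection falls."""
    m1 = [mean(col) for col in zip(*class1)]
    m2 = [mean(col) for col in zip(*class2)]
    w = [a - b for a, b in zip(m1, m2)]                       # direction vector
    cut = sum(v * (a + b) / 2 for v, a, b in zip(w, m1, m2))  # midpoint score
    return lambda x: 1 if sum(v * xi for v, xi in zip(w, x)) > cut else 2

# two tiny 2-dimensional training classes
classify = mean_difference_classifier([(2, 2), (3, 3)], [(0, 0), (1, 1)])
```

Because the rule depends on the data only through the two mean vectors, it remains well defined even when the dimension far exceeds the sample size, which is what makes this family of methods attractive in that regime.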

18.
A variable delay process sampling procedure is considered in a Markov chain structure. The paper extends the basic sampling method given in Arnold (1970). Analytic properties of the process are developed for the expected sample size and its distribution. A primary concern of the paper is the development of an objective function to enhance the ability to select optimal sampling policies. The objective function involves the cost of sampling and protection costs for detecting undesirable conditions.

19.
Many assumptions, including assumptions regarding treatment effects, are made at the design stage of a clinical trial for power and sample size calculations. It is desirable to check these assumptions during the trial by using blinded data. Methods for sample size re‐estimation based on blinded data analyses have been proposed for normal and binary endpoints. However, there is a debate that no reliable estimate of the treatment effect can be obtained in a typical clinical trial situation. In this paper, we consider the case of a survival endpoint and investigate the feasibility of estimating the treatment effect in an ongoing trial without unblinding. We incorporate information of a surrogate endpoint and investigate three estimation procedures, including a classification method and two expectation–maximization (EM) algorithms. Simulations and a clinical trial example are used to assess the performance of the procedures. Our studies show that the expectation–maximization algorithms highly depend on the initial estimates of the model parameters. Despite utilization of a surrogate endpoint, all three methods have large variations in the treatment effect estimates and hence fail to provide a precise conclusion about the treatment effect. Copyright © 2012 John Wiley & Sons, Ltd.

20.
The utility of blinded sample size re-estimation for clinical trials depends on the ability to estimate variability without providing information about the true treatment difference, and on some reasonable assurance that the method is not likely to cause the sample size to be increased when the treatment effect is better than anticipated. We show that violations of these properties are unlikely to occur in practice.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号