1.
The current practice of designing single‐arm phase II survival trials is limited under the exponential model. Trial design under the exponential model may not be appropriate when a portion of patients are cured. There is no literature available for designing single‐arm phase II trials under the parametric cure model. In this paper, a test statistic is proposed, and a sample size formula is derived for designing single‐arm phase II trials under a class of parametric cure models. Extensive simulations showed that the proposed test and sample size formula perform very well under different scenarios. Copyright © 2015 John Wiley & Sons, Ltd.
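The abstract does not specify which members of the class of parametric cure models are covered; the standard mixture cure model, S(t) = π + (1 − π)S0(t) with a parametric latency distribution S0, is the most common example. The sketch below (an illustration, not the paper's test statistic) evaluates this survival function with a Weibull latency; the cure rate, shape, and scale values are invented.

```python
import math

def mixture_cure_survival(t, cure_rate, shape, scale):
    """Mixture cure model survival: S(t) = pi + (1 - pi) * S0(t),
    with a Weibull latency S0(t) = exp(-(t / scale) ** shape) for
    the susceptible (non-cured) patients."""
    s0 = math.exp(-((t / scale) ** shape))
    return cure_rate + (1.0 - cure_rate) * s0

s_start = mixture_cure_survival(0.0, 0.3, 1.2, 2.0)   # ~1.0: everyone alive at t = 0
s_late = mixture_cure_survival(50.0, 0.3, 1.2, 2.0)   # plateaus near the cure rate 0.3
```

Because a fraction π of patients never experiences the event, the survival curve plateaus at π instead of decaying to zero, which is exactly why exponential-model designs can be inappropriate here.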

2.
In the traditional study design of a single‐arm phase II cancer clinical trial, the one‐sample log‐rank test has been frequently used. A common practice in sample size calculation is to assume that the event time under the new treatment follows an exponential distribution. Such a study design may not be suitable for immunotherapy cancer trials in which both long‐term survivors (or even patients cured of the disease) and a delayed treatment effect are present, because the exponential distribution is not appropriate for such data and could consequently lead to a severely underpowered trial. In this research, we proposed a piecewise proportional hazards cure rate model with a random delayed treatment effect for designing single‐arm phase II immunotherapy cancer trials. To improve test power, we proposed a new weighted one‐sample log‐rank test and provided a sample size calculation formula for designing trials. Our simulation study showed that the proposed log‐rank test performs well and is robust to a misspecified weight, and that the sample size calculation formula also performs well.
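For reference, the classical (unweighted) one-sample log-rank statistic that this abstract generalizes compares the observed number of events with the number expected under a reference cumulative hazard Λ0. A minimal sketch with made-up follow-up data and an exponential null:

```python
import math

def one_sample_logrank(times, events, cum_hazard):
    """Classical one-sample log-rank test: compares the observed
    number of events O with the expected number E = sum of the
    reference cumulative hazard at each follow-up time; Z is
    approximately standard normal under the null."""
    observed = sum(events)
    expected = sum(cum_hazard(t) for t in times)
    z = (observed - expected) / math.sqrt(expected)
    return z, observed, expected

# Made-up follow-up data; null hypothesis: exponential survival with
# rate 0.1 per time unit, so Lambda0(t) = 0.1 * t.
times = [2.0, 5.0, 7.5, 10.0, 12.0]
events = [1, 0, 1, 1, 0]      # 1 = event observed, 0 = censored
z, obs, exp_events = one_sample_logrank(times, events, lambda t: 0.1 * t)
```

The weighted version proposed in the paper replaces the equal contributions with weights chosen to target the delayed-effect, cure-rate alternative.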

3.
The internal pilot study design allows the sample size to be modified during an ongoing study based on a blinded estimate of the variance, thus maintaining trial integrity. Various blinded sample size re‐estimation procedures have been proposed in the literature. We compare blinded sample size re‐estimation procedures based on the one‐sample variance of the pooled data with a blinded procedure that uses the randomization block information, with respect to the bias and variance of the variance estimators, and the distribution of the resulting sample sizes, power, and actual type I error rate. For reference, sample size re‐estimation based on the unblinded variance is also included in the comparison. It is shown that using an unbiased variance estimator (such as the one using the randomization block information) for sample size re‐estimation does not guarantee that the desired power is achieved. Moreover, in situations that are common in clinical trials, the variance estimator that employs the randomization block length shows higher variability than the simple one‐sample estimator, and so, in turn, does the sample size resulting from the related re‐estimation procedure. This higher variability can lead to lower power, as demonstrated in the setting of noninferiority trials. In summary, the one‐sample estimator obtained from the pooled data is extremely simple to apply, shows good performance, and is therefore recommended for application. Copyright © 2013 John Wiley & Sons, Ltd.
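The recommended one-sample procedure can be sketched as follows: the interim data are pooled without treatment labels, the usual sample variance is computed, and it is plugged into the standard two-sample formula. This is a generic illustration; the assumed difference `delta` and the data are invented, and the compared papers' exact re-estimation rules may differ.

```python
import math
from statistics import NormalDist

def blinded_reestimated_n(pooled_data, delta, alpha=0.05, power=0.9):
    """Blinded sample size re-estimation: estimate the variance from
    the pooled interim data with the simple one-sample estimator
    (no treatment labels needed), then plug it into the standard
    per-group two-sample z-test formula. `delta` is the assumed
    treatment difference (an illustrative input, not from the paper)."""
    n = len(pooled_data)
    mean = sum(pooled_data) / n
    s2 = sum((x - mean) ** 2 for x in pooled_data) / (n - 1)
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    return math.ceil(2 * s2 * (z_a + z_b) ** 2 / delta ** 2)

# Invented blinded interim data:
n_per_group = blinded_reestimated_n([1, 2, 3, 4, 5, 6], delta=1.0)
```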

4.
Prior information is often incorporated informally when planning a clinical trial. Here, we present an approach for incorporating prior information, such as data from historical clinical trials, into the nuisance parameter–based sample size re‐estimation in a design with an internal pilot study. We focus on trials with continuous endpoints in which the outcome variance is the nuisance parameter. For planning and analyzing the trial, frequentist methods are considered. Moreover, the external information on the variance is summarized by the Bayesian meta‐analytic‐predictive approach. To incorporate external information into the sample size re‐estimation, we propose to update the meta‐analytic‐predictive prior based on the results of the internal pilot study and to re‐estimate the sample size using an estimator from the posterior. By means of a simulation study, we compare the operating characteristics, such as power and sample size distribution, of the proposed procedure with the traditional sample size re‐estimation approach that uses the pooled variance estimator. The simulation study shows that, if no prior‐data conflict is present, incorporating external information into the sample size re‐estimation improves the operating characteristics compared to the traditional approach. In the case of a prior‐data conflict, that is, when the variance of the ongoing clinical trial is unequal to the prior location, the performance of the traditional sample size re‐estimation procedure is in general superior, even when the prior information is robustified. When considering whether to include prior information in sample size re‐estimation, the potential gains should be balanced against the risks.
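As a toy stand-in for the meta-analytic-predictive machinery, the conjugate update of a single inverse-gamma prior on the outcome variance shows the mechanics of combining prior information with internal pilot data. The actual MAP prior is a mixture and is updated differently; the prior parameters below are invented.

```python
def update_inverse_gamma(a, b, pilot_data):
    """Conjugate update of an inverse-gamma prior IG(a, b) on the
    outcome variance, using the centered sum of squares from the
    internal pilot data (the mean is treated as a nuisance with a
    flat prior, hence n - 1 degrees of freedom). A single IG
    component is a simplification of the MAP mixture prior."""
    n = len(pilot_data)
    mean = sum(pilot_data) / n
    ss = sum((x - mean) ** 2 for x in pilot_data)
    a_post = a + (n - 1) / 2
    b_post = b + ss / 2
    post_mean_var = b_post / (a_post - 1)  # posterior mean of the variance
    return a_post, b_post, post_mean_var

# Invented prior and pilot data:
a_post, b_post, v_hat = update_inverse_gamma(3.0, 4.0, [1.0, 2.0, 3.0])
```

The posterior-mean variance `v_hat` would then replace the pooled variance estimate in the sample size formula.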

5.
The number of subjects in a pharmacokinetic two‐period two‐treatment crossover bioequivalence study is typically small, most often less than 60. The most common approach to testing for bioequivalence is the two one‐sided tests procedure. No explicit mathematical formula for the power function in the context of the two one‐sided tests procedure exists in the statistical literature, although the exact power based on Owen's special case of the bivariate noncentral t‐distribution has been tabulated and graphed. Several approximations have previously been published for the probability of rejection in the two one‐sided tests procedure for crossover bioequivalence studies. These approximations and the associated sample size formulas are reviewed in this article and compared, for various parameter combinations, with exact power formulas derived here, which are computed analytically as univariate integrals and which have been validated by Monte Carlo simulations. The exact formulas for power and sample size are shown to improve markedly over the previous approximations in realistic parameter settings. Copyright © 2014 John Wiley & Sons, Ltd.
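When the variance is treated as known, TOST power has a simple closed form that conveys the structure of the exact calculation (the paper's exact formulas instead integrate over the noncentral t-distribution to handle the estimated variance). A sketch with the usual ln(0.8)–ln(1.25) bioequivalence margins; the standard-error model for the within-subject differences is an assumption for illustration.

```python
import math
from statistics import NormalDist

def tost_power_known_variance(delta, se, theta_l, theta_u, alpha=0.05):
    """Power of the two one-sided tests (TOST) procedure treating the
    variance as known (z-tests): bioequivalence is declared iff the
    estimated difference falls in (theta_l + z*se, theta_u - z*se).
    The paper's exact formulas use the noncentral t instead."""
    z = NormalDist().inv_cdf(1 - alpha)
    phi = NormalDist().cdf
    return max(0.0, phi((theta_u - delta) / se - z)
                    - phi((theta_l - delta) / se + z))

# ln(0.8) / ln(1.25) margins; assumed SD of within-subject differences:
sigma_d = 0.3
p40 = tost_power_known_variance(0.0, sigma_d / math.sqrt(40),
                                math.log(0.8), math.log(1.25))
p20 = tost_power_known_variance(0.0, sigma_d / math.sqrt(20),
                                math.log(0.8), math.log(1.25))
```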

6.
In some exceptional circumstances, as in very rare diseases, nonrandomized one‐arm trials are the sole source of evidence to demonstrate efficacy and safety of a new treatment. The design of such studies needs a sound methodological approach in order to provide reliable information, and the determination of the appropriate sample size still represents a critical step of this planning process. As, to our knowledge, no method exists for sample size calculation in one‐arm trials with a recurrent event endpoint, we propose here a closed‐form sample size formula. It is derived assuming a mixed Poisson process, and it is based on the asymptotic distribution of the one‐sample robust nonparametric test recently developed for the analysis of recurrent event data. The validity of this formula in managing a situation with heterogeneity of event rates, both in time and between patients, and a time‐varying treatment effect was demonstrated with extensive simulation studies. Moreover, although the method requires the specification of a process for event generation, it seems to be robust to misspecification of this process, provided that the number of events at the end of the study is similar to the one assumed in the planning phase. The motivating clinical context is a nonrandomized one‐arm study of gene therapy in a very rare immunodeficiency in children (ADA‐SCID), where a major endpoint is the recurrence of severe infections. Copyright © 2012 John Wiley & Sons, Ltd.
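A mixed Poisson process with a gamma frailty is the standard way to generate between-patient heterogeneity in event rates; the sketch below simulates event counts under such a model (this is illustrative event generation, not the paper's test or sample size formula, and all parameter values are invented).

```python
import math
import random

def poisson_knuth(lam, rng):
    """Poisson variate via Knuth's multiplication method
    (adequate for the small means used here)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def simulate_mixed_poisson_counts(n_patients, rate, followup, shape, seed=1):
    """Event counts under a gamma-mixed Poisson process: each patient
    draws a frailty u ~ Gamma(shape, scale = 1/shape) (mean 1), then a
    count ~ Poisson(u * rate * followup). Smaller `shape` means more
    between-patient heterogeneity (overdispersion)."""
    rng = random.Random(seed)
    return [poisson_knuth(rng.gammavariate(shape, 1.0 / shape) * rate * followup, rng)
            for _ in range(n_patients)]

counts = simulate_mixed_poisson_counts(2000, rate=0.8, followup=2.0, shape=2.0)
mean_count = sum(counts) / len(counts)   # close to 0.8 * 2.0 = 1.6
```

The frailty inflates the variance above the mean, which is the heterogeneity the robust nonparametric test is designed to accommodate.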

7.
For the case of a one‐sample experiment with known variance σ² = 1, it has been shown that at interim analysis the sample size (SS) may be increased by any arbitrary amount provided: (1) the conditional power (CP) at interim is ≥50% and (2) there can be no decision to decrease the SS (stop the trial early). In this paper we verify this result for the case of a two‐sample experiment with proportional SS in the treatment groups and an arbitrary common variance. Numerous authors have presented the formula for the CP at interim for a two‐sample test with equal SS in the treatment groups and an arbitrary common variance, for both one‐ and two‐sided hypothesis tests. In this paper we derive the corresponding formula for the case of unequal but proportional SS in the treatment groups, for both one‐sided superiority and two‐sided hypothesis tests. Finally, we present an SAS macro for doing this calculation and provide a worked‐out hypothetical example. In discussion we note that this type of trial design trades the ability to stop early (for lack of efficacy) for the elimination of the Type I error penalty. The loss of early stopping requires that such a design employ a data monitoring committee, blinding of the sponsor to the interim calculations, and pre‐planning of how much and under what conditions to increase the SS, and that all of this be formally written into an interim analysis plan before the start of the study. Copyright © 2009 John Wiley & Sons, Ltd.
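The CP calculation underlying rule (1) can be sketched with the standard Brownian-motion conditional power formula for a z-test (a generic formula, not the paper's SAS macro; the interim values are invented).

```python
import math
from statistics import NormalDist

def conditional_power(z_interim, info_frac, drift, alpha=0.025):
    """Conditional power of a one-sided fixed-sample z-test, given the
    interim statistic observed at information fraction info_frac.
    `drift` is E[Z] at the final analysis under the assumed effect;
    using z_interim / sqrt(info_frac) gives the 'current trend'
    version. Standard Brownian-motion formula."""
    nd = NormalDist()
    z_a = nd.inv_cdf(1 - alpha)
    num = z_a - math.sqrt(info_frac) * z_interim - (1 - info_frac) * drift
    return 1.0 - nd.cdf(num / math.sqrt(1 - info_frac))

# Invented interim look halfway through the trial, current-trend drift:
z_t, t = 1.5, 0.5
cp = conditional_power(z_t, t, drift=z_t / math.sqrt(t))
# Under rule (1) above, the SS could be increased only if cp >= 0.5.
```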

8.
We consider blinded sample size re‐estimation based on the simple one‐sample variance estimator at an interim analysis. We characterize the exact distribution of the standard two‐sample t‐test statistic at the final analysis. We describe a simulation algorithm for evaluating the probability of rejecting the null hypothesis at a given treatment effect. We compare the blinded sample size re‐estimation method with two unblinded methods with respect to the empirical type I error, the empirical power, and the empirical distribution of the standard deviation estimator and final sample size. We characterize the type I error inflation across the range of standardized non‐inferiority margins for non‐inferiority trials, and derive the adjusted significance level that ensures type I error control for a given sample size of the internal pilot study. We show that the adjusted significance level increases as the sample size of the internal pilot study increases. Copyright © 2016 John Wiley & Sons, Ltd.

9.
In this paper, we review the adaptive design methodology of Li et al. (Biostatistics 3:277–287) for two‐stage trials with mid‐trial sample size adjustment. We argue that it is closer in principle to a group sequential design, in spite of its obvious adaptive element. Several extensions are proposed that aim to make it an even more attractive and transparent alternative to a standard (fixed sample size) trial for funding bodies to consider. These enable a cap to be placed on the maximum sample size and allow the trial data to be analysed using standard methods at its conclusion. The regulatory view of trials incorporating unblinded sample size re‐estimation is also discussed. © 2014 The Authors. Pharmaceutical Statistics published by John Wiley & Sons, Ltd.

10.
A challenge arising in cancer immunotherapy trial design is the presence of a delayed treatment effect, wherein the proportional hazards assumption no longer holds. As a result, a traditional survival trial design based on the standard log‐rank test, which ignores the delayed treatment effect, will lead to substantial loss of statistical power. Recently, a piecewise weighted log‐rank test was proposed to incorporate the delayed treatment effect into the trial design. However, because its sample size formula was derived under a sequence of local alternative hypotheses, it yields an underestimated sample size when the hazard ratio is relatively small for a balanced trial design, and inaccurate sample size estimates for an unbalanced design. In this article, we derived a new sample size formula under a fixed alternative hypothesis for the delayed treatment effect model. Simulation results show that the new formula provides accurate sample size estimation for both balanced and unbalanced designs.
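The delayed treatment effect model can be illustrated by simulating treatment-arm survival times whose hazard equals the control hazard before the delay and drops by the hazard ratio afterwards (inverse-transform sampling from the piecewise-exponential cumulative hazard; all parameter values are invented).

```python
import math
import random

def delayed_effect_time(lam, hr, delay, rng):
    """Treatment-arm survival time under a delayed effect: the hazard
    is lam up to `delay` and hr * lam afterwards. Inverse-transform
    sampling from the piecewise-exponential cumulative hazard."""
    e = -math.log(1.0 - rng.random())   # unit-exponential draw
    if e <= lam * delay:
        return e / lam                  # event occurs before the effect starts
    return delay + (e - lam * delay) / (hr * lam)

rng = random.Random(7)
times = [delayed_effect_time(0.1, 0.5, 6.0, rng) for _ in range(20000)]
mean_time = sum(times) / len(times)
# Theoretical mean: (1 - exp(-0.6)) / 0.1 + exp(-0.6) / 0.05, about 15.49,
# versus 1 / 0.1 = 10 in the control arm, even though hr = 0.5 applies
# only after the 6-unit delay.
```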

11.
A bioequivalence test compares bioavailability parameters, such as the maximum observed concentration (Cmax) or the area under the concentration‐time curve (AUC), between a test drug and a reference drug. Planning a bioequivalence test requires an assumption about the variance of Cmax or AUC for the estimation of sample size. Since the variance is unknown, current 2‐stage designs use the variance estimated from stage 1 data to determine the sample size for stage 2. However, this estimate is unstable and may result in too large or too small a sample size for stage 2. The problem is magnified in bioequivalence tests with a serial sampling schedule, under which only one sample is collected from each individual, making a correct variance assumption even more difficult. To solve this problem, we propose 3‐stage designs. Our designs increase sample sizes gradually over stages, so that extremely large sample sizes do not occur. With one more stage of data, power is increased. Moreover, the variance estimated using data from both stages 1 and 2 is more stable than that using data from stage 1 only in a 2‐stage design. These features of the proposed designs are demonstrated by simulations. Testing significance levels are adjusted to control the overall type I error at the same level for all the multistage designs.

12.
Most long memory estimators for stationary fractionally integrated time series models are known to experience non‐negligible bias in small and finite samples. Simple moment estimators are also vulnerable to such bias but can easily be corrected. In this article, the authors propose bias reduction methods for a lag‐one sample autocorrelation‐based moment estimator. To reduce the bias of the moment estimator, the authors explicitly obtain the exact bias of the lag‐one sample autocorrelation up to order 1/n. An example is presented in which the exact first‐order bias can be noticeably more accurate than its asymptotic counterpart, even for large samples. The authors show via a simulation study that the proposed methods are promising and effective in reducing the bias of the moment estimator with minimal variance inflation. The proposed methods are applied to the northern hemisphere data. The Canadian Journal of Statistics 37: 476–493; 2009 © 2009 Statistical Society of Canada
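The moment estimator in question is built from the lag-one sample autocorrelation, which is straightforward to compute (the bias-correction terms derived in the paper are not reproduced here).

```python
def lag_one_autocorrelation(x):
    """Lag-one sample autocorrelation, the moment statistic whose
    exact O(1/n) bias the paper derives and corrects:
    r1 = sum((x_t - m)(x_{t+1} - m)) / sum((x_t - m)^2)."""
    n = len(x)
    m = sum(x) / n
    num = sum((x[t] - m) * (x[t + 1] - m) for t in range(n - 1))
    den = sum((v - m) ** 2 for v in x)
    return num / den

r1 = lag_one_autocorrelation([1.0, 2.0, 3.0, 4.0])
```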

13.
This paper proposes the use of the integrated likelihood for inference on the mean effect in small‐sample meta‐analysis for continuous outcomes. The method eliminates the nuisance parameters given by the variance components through integration with respect to a suitable weight function, with no need to estimate them. The integrated likelihood approach properly accounts for the estimation uncertainty of the within‐study variances, thus providing confidence intervals with empirical coverage closer to nominal levels than standard likelihood methods. The improvement is remarkable when either (i) the number of studies is small to moderate or (ii) the small sample sizes of the studies do not allow the within‐study variances to be treated as known, as is common in applications. Moreover, the use of the integrated likelihood avoids numerical pitfalls related to the estimation of variance components that can affect alternative likelihood approaches. The proposed methodology is illustrated via simulation and applied to a meta‐analysis study in nutritional science.

14.
Molecularly targeted, genomic‐driven, and immunotherapy‐based clinical trials continue to be advanced for the treatment of relapsed or refractory cancer patients, where the growth modulation index (GMI) is often considered a primary endpoint of treatment efficacy. However, little literature is available on trial design with GMI as the primary endpoint. In this article, we derived a sample size formula for the score test under a log‐linear model of the GMI. Study designs using the derived sample size formula are illustrated under a bivariate exponential model, the Weibull frailty model, and the generalized treatment effect size. The proposed designs provide sound statistical methods for a single‐arm phase II trial with GMI as the primary endpoint.

15.
Positive and negative predictive values describe the performance of a diagnostic test. Several methods exist to test the equality of predictive values in paired designs. However, these methods are premised on large‐sample theory and may not be suitable for small clinical trials because of inflation of the type 1 error rate. In this study, we propose an exact test that strictly controls the type 1 error rate when conducting a small clinical trial that investigates the equality of predictive values in paired designs. In addition, we conduct simulation studies to evaluate the performance of the proposed exact test and existing methods in small clinical trials. The proposed test yields an exact P value, and in our simulations its empirical type 1 error rate did not exceed the significance level in any setting, while its empirical power was not much different from that of the other methods based on large‐sample theory. The proposed exact test is therefore useful when the type 1 error rate needs to be controlled strictly.

16.
Assuming that the frequency of occurrence follows the Poisson distribution, we develop sample size calculation procedures for testing equality based on an exact test procedure and an asymptotic test procedure under an AB/BA crossover design. We employ Monte Carlo simulation to demonstrate the use of these sample size formulae and evaluate the accuracy of sample size calculation formula derived from the asymptotic test procedure with respect to power in a variety of situations. We note that when both the relative treatment effect of interest and the underlying intraclass correlation between frequencies within patients are large, the sample size calculation based on the asymptotic test procedure can lose accuracy. In this case, the sample size calculation procedure based on the exact test is recommended. On the other hand, if the relative treatment effect of interest is small, the minimum required number of patients per group will be large, and the asymptotic test procedure will be valid for use. In this case, we may consider use of the sample size calculation formula derived from the asymptotic test procedure to reduce the number of patients needed for the exact test procedure. We include an example regarding a double‐blind randomized crossover trial comparing salmeterol with a placebo in exacerbations of asthma to illustrate the practical use of these sample size formulae. Copyright © 2013 John Wiley & Sons, Ltd.
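One way to see why an exact test is available in this setting: conditional on a patient pair's total count with equal exposure under both treatments (and ignoring period effects for this sketch), the count under treatment A is binomial with success probability determined by the rate ratio, so equality of rates reduces to an exact test of a binomial proportion of 1/2. A minimal sketch, not necessarily the exact procedure used in the paper:

```python
import math

def exact_conditional_pvalue(x_total, n_total):
    """Exact conditional test of equal Poisson rates: with equal
    exposure under both treatments (period effects ignored in this
    sketch), the treatment-A count is Binomial(n_total, 1/2) under H0
    given the total. Two-sided p-value: twice the smaller tail,
    capped at 1."""
    def tail(k_from, k_to):
        return sum(math.comb(n_total, k) * 0.5 ** n_total
                   for k in range(k_from, k_to + 1))
    lower = tail(0, x_total)
    upper = tail(x_total, n_total)
    return min(1.0, 2.0 * min(lower, upper))

p_unbalanced = exact_conditional_pvalue(8, 10)  # 8 of 10 events under A
p_balanced = exact_conditional_pvalue(5, 10)    # perfectly balanced split
```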

17.
The stratified Cox model is commonly used for stratified clinical trials with time‐to‐event endpoints. The estimated log hazard ratio is approximately a weighted average of the corresponding stratum‐specific Cox model estimates using inverse‐variance weights; the latter are optimal only under the (often implausible) assumption of a constant hazard ratio across strata. Focusing on trials with limited sample sizes (50‐200 subjects per treatment), we propose an alternative approach in which stratum‐specific estimates are obtained using a refined generalized logrank (RGLR) approach and then combined using either sample size or minimum risk weights for overall inference. Our proposal extends the work of Mehrotra et al. to incorporate the RGLR statistic, which outperforms the Cox model in the setting of proportional hazards and small samples. This work also entails development of a remarkably accurate plug‐in formula for the variance of RGLR‐based estimated log hazard ratios. We demonstrate using simulations that our proposed two‐step RGLR analysis delivers notably better results than the stratified Cox model analysis, with smaller estimation bias, smaller mean squared error, and larger power, when there is a treatment‐by‐stratum interaction, with similar performance when there is no interaction. Additionally, our method controls the type I error rate in small samples, while the stratified Cox model does not. We illustrate our method using data from a clinical trial comparing two treatments for colon cancer.

18.
A cancer clinical trial with an immunotherapy often has two special features: patients may be cured of the cancer, and the immunotherapy may begin to take clinical effect only after a certain delay time. Existing testing methods may be inadequate for immunotherapy clinical trials because they do not appropriately account for both features at the same time, and hence have low power to detect the true treatment effect. In this paper, we proposed a piecewise proportional hazards cure rate model with a random delay time to fit the data, and a new weighted log‐rank test to detect the treatment effect of an immunotherapy over a chemotherapy control. We showed that the proposed weight is nearly optimal under mild conditions. Our simulation study showed a substantial gain in power for the proposed test over existing tests, and robustness of the test to a misspecified weight. We also introduced a sample size calculation formula for designing immunotherapy clinical trials using the proposed weighted log‐rank test.

19.
Progression‐free survival is recognized as an important endpoint in oncology clinical trials. In trials aimed at new drug development, the target population often comprises patients who are refractory to standard therapy and whose tumors show rapid progression. This situation increases the bias of the hazard ratio calculated for progression‐free survival, resulting in decreased power. Therefore, measures are needed at the sample size estimation stage to prevent this loss of power. Here, I propose a novel calculation procedure for deriving the hazard ratio for progression‐free survival under the Cox proportional hazards model, which can be applied in sample size calculation. The hazard ratios derived by the proposed procedure were almost identical to those obtained by simulation. The hazard ratio calculated by the proposed procedure is applicable to sample size calculation and attains the nominal power. Methods that compensate for the loss of power due to biases in the hazard ratio are also discussed from a practical point of view.

20.
Adaptive sample size redetermination (SSR) for clinical trials consists of examining early subsets of on‐trial data to adjust prior estimates of statistical parameters and sample size requirements. Blinded SSR in particular, while already in use, seems poised to proliferate even further because it obviates many logistical complications of unblinded methods and generally introduces little or no statistical or operational bias. On the other hand, current blinded SSR methods offer little to no new information about the treatment effect (TE); the obvious resulting problem is that the TE estimate scientists might simply 'plug in' to the sample size formulae could be severely wrong. This paper proposes a blinded SSR method that formally synthesizes sample data with prior knowledge about the TE and the within‐treatment variance. It evaluates the method in terms of the type 1 error rate, the bias of the estimated TE, and the average deviation from the targeted power. The method is shown to reduce this average deviation, in comparison with another established method, over a range of situations. The paper illustrates the use of the proposed method with an example. Copyright © 2012 John Wiley & Sons, Ltd.
