Similar Articles
20 similar articles found (search time: 15 ms)
1.
In clinical trials with survival data, investigators may wish to re-estimate the sample size based on the observed effect size while the trial is ongoing. Besides the inflation of the type I error rate due to sample size re-estimation, the method for calculating the sample size at an interim analysis should be considered carefully, because in trials with survival data the data from the stages are mutually dependent. Although the interim hazard estimate is commonly used to re-estimate the sample size, that estimate can by chance be considerably higher or lower than the hypothesized hazard. We propose an interim hazard ratio estimate that can be used to re-estimate the sample size under those circumstances. The proposed method is demonstrated through a simulation study and illustrated with an actual clinical trial. The effect of the shape parameter of the Weibull survival distribution on the sample size re-estimation is also presented.
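The abstract does not give its sample size formula, but the standard starting point for hazard-ratio-based (re-)calculation is Schoenfeld's approximation for the number of events required by a log-rank test. A minimal sketch, in which an interim hazard ratio estimate would simply replace the hypothesized one:

```python
from math import ceil, log
from statistics import NormalDist

def required_events(hr, alpha=0.05, power=0.80, alloc=0.5):
    """Approximate number of events to detect hazard ratio `hr` with a
    two-sided log-rank test (Schoenfeld's formula); `alloc` is the
    fraction of patients on the experimental arm."""
    z = NormalDist()
    za = z.inv_cdf(1 - alpha / 2)
    zb = z.inv_cdf(power)
    return ceil((za + zb) ** 2 / (alloc * (1 - alloc) * log(hr) ** 2))

# planned design vs. re-estimation with a (hypothetical) interim HR of 0.80
print(required_events(0.7))   # 247 events under the design assumption
print(required_events(0.8))   # more events needed if the interim HR is weaker
```

This is only the generic formula, not the estimator proposed in the paper; converting events to patients additionally requires the event probability, which depends on accrual, follow-up and the survival distribution (e.g. the Weibull shape parameter mentioned above).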

2.
In clinical trials with binary endpoints, the required sample size depends not only on the specified type I error rate, the desired power and the treatment effect, but also on the overall event rate, which is usually uncertain. The internal pilot study design has been proposed to overcome this difficulty: nuisance parameters required for the sample size calculation are re-estimated during the ongoing trial, and the sample size is recalculated accordingly. We performed extensive simulation studies to investigate the characteristics of the internal pilot study design for two-group superiority trials in which the treatment effect is captured by the relative risk. As the performance of the sample size recalculation procedure depends crucially on the accuracy of the applied sample size formula, we first explored the precision of three approximate sample size formulae proposed in the literature for this situation. It turned out that the unequal-variance asymptotic normal formula outperforms the other two, especially in the case of unbalanced sample size allocation. Using this formula for sample size recalculation in the internal pilot study design ensures that the desired power is achieved even if the overall rate is mis-specified in the planning phase. The maximum inflation of the type I error rate observed for the internal pilot study design is small and lies below the maximum excess that occurred for the fixed sample size design.

3.
Since the treatment effect of an experimental drug is generally not known at the onset of a clinical trial, it may be wise to allow for an adjustment of the sample size after an interim analysis of the unblinded data. Using a particular adaptive test statistic, we demonstrate a procedure for finding the optimal design. Both the timing of the interim analysis and the way the sample size is adjusted can influence the power of the resulting procedure. The adaptive test statistic can yield a smaller average sample size, even if the initial estimate of the treatment effect is wrong, than a standard test statistic without an interim look used with a correct initial estimate of the effect. Copyright © 2002 John Wiley & Sons, Ltd.
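The abstract does not name its adaptive test statistic; a common choice in this literature is the weighted inverse-normal combination of stage-wise statistics, whose fixed design-stage weights preserve the type I error rate even when the stage-2 sample size is adapted. A minimal sketch under that assumption:

```python
from math import sqrt
from statistics import NormalDist

def combination_z(z1, z2, w1, w2):
    """Weighted inverse-normal combination of independent stage-wise
    z-statistics. Because w1, w2 are fixed at the design stage, the
    null distribution stays standard normal regardless of how the
    stage-2 sample size was chosen after the interim look."""
    return (w1 * z1 + w2 * z2) / sqrt(w1 ** 2 + w2 ** 2)

# weights proportional to square roots of the *planned* stage sizes
z = combination_z(z1=1.2, z2=1.8, w1=sqrt(50), w2=sqrt(50))
reject = z > NormalDist().inv_cdf(0.975)  # two-sided 5% level
```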

4.
Repeated confidence intervals (RCIs) are an important tool for the design and monitoring of group sequential trials, as they allow monitoring without committing to planned statistical stopping rules. In this article, we derive RCIs when the data from the stages of the trial are not independent, so the underlying process is no longer Brownian motion (BM). Under this assumption, a larger class of stochastic processes, fractional Brownian motion (FBM), is considered. RCI widths and sample size requirements are compared with those under Brownian motion for different analysis times, type I error rates and numbers of interim analyses. Power-family spending functions, including the Pocock and O'Brien-Fleming design types, are considered in these simulations. Interim data from the BHAT and oncology trials are used to illustrate how to derive RCIs under FBM for efficacy and futility monitoring.

5.
We propose an efficient group sequential monitoring rule for clinical trials. At each interim analysis, both efficacy and futility are evaluated through a specified loss structure together with the predicted power. The proposed design is robust to a wide range of priors and achieves the specified power with a smaller sample size than existing adaptive designs. A method is also proposed to obtain a reduced-bias estimator of the treatment difference for the proposed design. The new approaches hold great potential for efficiently selecting the more effective treatment in comparative trials. Operating characteristics are evaluated and compared with those of other group sequential designs in empirical studies. An example is provided to illustrate the application of the method.

6.
In group sequential clinical trials, several sample size re-estimation methods proposed in the literature allow the sample size to be changed at the interim analysis. Most of these methods are based on either the conditional error function or the interim effect size. Our simulation studies compared the operating characteristics of three commonly used sample size re-estimation methods: Chen et al. (2004), Cui et al. (1999), and Muller and Schafer (2001). Gao et al. (2008) extended the CDL method and provided analytical expressions for the lower and upper thresholds of conditional power within which the type I error is preserved. Recently, Mehta and Pocock (2010) argued extensively that the real benefit of the adaptive approach is to invest the sample size resources in stages, increasing the sample size only if the interim results fall in the so-called “promising zone” defined in their article. We incorporated this concept in our simulations while comparing the three methods. To test the robustness of these methods, we explored the impact of an incorrect variance assumption on the operating characteristics. We found that the operating characteristics of the three methods are very comparable. In addition, the promising zone concept, as suggested by Mehta and Pocock, gives the desired power with a smaller average sample size, and thus increases the efficiency of the trial design.
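The promising-zone idea hinges on conditional power computed under the current trend. For a z-statistic z1 observed at information fraction t, the standard current-trend formula is CP = Φ((z1/√t − z_{1−α})/√(1−t)); a minimal sketch (the zone boundaries 0.30 and 0.80 below are illustrative, not the values from Mehta and Pocock):

```python
from math import sqrt
from statistics import NormalDist

def conditional_power(z1, t, alpha=0.025):
    """Conditional power at information fraction t, assuming the
    currently observed trend continues; one-sided level alpha."""
    nd = NormalDist()
    za = nd.inv_cdf(1 - alpha)
    return nd.cdf((z1 / sqrt(t) - za) / sqrt(1 - t))

cp = conditional_power(z1=1.2, t=0.5)
# increase the sample size only when the interim result is 'promising':
# neither clearly futile nor already adequately powered
promising = 0.30 <= cp < 0.80
```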

7.
The planning of bioequivalence (BE) studies, as for any clinical trial, requires the a priori specification of an effect size for the determination of power and an assumption about the variance. The specified effect size may be overly optimistic, leading to an underpowered study. The assumed variance can be either too small or too large, leading, respectively, to studies that are underpowered or overly large. There has been much work in the clinical trials field on sequential designs that include sample size re-estimation after the trial has started, but these have seen little use in BE studies. The purpose of this work was to validate at least one such method for crossover-design BE studies. Specifically, we considered sample size re-estimation for a two-stage trial based on the variance estimated from the first stage. We identified two methods, based on Pocock's method for group sequential trials, that met our requirement of at most a negligible increase in the type I error rate.

8.
The study of the effect of a treatment may involve the evaluation of a variable at a number of time points. When a smooth curve is assumed for the mean response over time, estimation can be carried out by spline regression in the context of generalized additive models. The novelty of our work lies in the construction of hypothesis tests to compare the curves of two treatments over any interval of time for several types of response variables. The within-subject correlation is not modeled, but valid inferences are obtained through the bootstrap. We propose both semiparametric and nonparametric bootstrap approaches, based on resampling vectors of residuals or responses, respectively. Simulation studies revealed good performance of the tests for outcomes with different distribution functions in the exponential family and varying correlation between observations over time. We show that the sizes of the bootstrap tests are close to the nominal value, with tests based on a standardized statistic having slightly better size properties. The power increases as the distance between curves increases and decreases as the correlation gets higher. The usefulness of these statistical tools was confirmed with real data, allowing us to detect changes in fish behavior after exposure to the toxin microcystin-RR.
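A highly simplified sketch of the semiparametric residual-bootstrap idea, under several assumptions not taken from the paper: a polynomial fit stands in for the spline smoother, residuals are resampled pointwise (the paper resamples whole within-subject vectors to respect correlation), and the test statistic is the maximum absolute difference between the two fitted curves:

```python
import numpy as np

rng = np.random.default_rng(1)

def fit(t, y, deg=3):
    # polynomial fit as a stand-in for the spline/GAM smoother
    return np.polynomial.Polynomial.fit(t, y, deg)

def curve_test(t, y1, y2, B=500, deg=3):
    """Residual-bootstrap test of H0: the two mean curves coincide."""
    grid = np.linspace(t.min(), t.max(), 50)
    stat = np.max(np.abs(fit(t, y1, deg)(grid) - fit(t, y2, deg)(grid)))
    # under H0 both groups share the curve fitted to the pooled data
    common = fit(np.r_[t, t], np.r_[y1, y2], deg)
    r1, r2 = y1 - common(t), y2 - common(t)
    count = 0
    for _ in range(B):
        b1 = common(t) + rng.choice(r1, size=r1.size, replace=True)
        b2 = common(t) + rng.choice(r2, size=r2.size, replace=True)
        bstat = np.max(np.abs(fit(t, b1, deg)(grid) - fit(t, b2, deg)(grid)))
        count += bstat >= stat
    return (count + 1) / (B + 1)

t = np.linspace(0, 1, 40)
y1 = np.sin(2 * t) + rng.normal(0, 0.2, t.size)
y2 = np.sin(2 * t) + 0.8 * t + rng.normal(0, 0.2, t.size)  # shifted curve
p = curve_test(t, y1, y2)
```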

9.
This paper presents a goodness-of-fit test for the semiparametric random censorship model proposed by Dikta (1998). The test statistic is derived from a model-based process which is asymptotically Gaussian. In addition to being consistent, the proposed test can detect local alternatives at distance n^{-1/2} from the null hypothesis. Due to the intractability of the asymptotic null distribution of the test statistic, we turn to two resampling approximations. We first use the well-known bootstrap method to approximate the critical values of the test. We then introduce a so-called random symmetrization method for carrying out the test. Both methods perform very well with samples of moderate size. A simulation study shows that the latter possesses better empirical power and size for small samples.

10.
This paper presents a new random weighting-based adaptive importance resampling method to estimate the sampling distribution of a statistic. A random weighting-based cross-entropy procedure is developed to iteratively calculate the optimal resampling probability weights by minimizing the Kullback-Leibler distance between the optimal importance resampling distribution and a family of parameterized distributions. Subsequently, the random weighting estimation of the sampling distribution is constructed from the obtained optimal importance resampling distribution. The convergence of the proposed method is rigorously proved. Simulation and experimental results demonstrate that the proposed method can effectively estimate the sampling distribution of a statistic.

11.
We consider blinded sample size re-estimation based on the simple one-sample variance estimator at an interim analysis. We characterize the exact distribution of the standard two-sample t-test statistic at the final analysis. We describe a simulation algorithm for evaluating the probability of rejecting the null hypothesis at a given treatment effect. We compare the blinded sample size re-estimation method with two unblinded methods with respect to the empirical type I error, the empirical power, and the empirical distributions of the standard deviation estimator and the final sample size. We characterize the type I error inflation across the range of standardized non-inferiority margins for non-inferiority trials, and derive the adjusted significance level that ensures type I error control for a given sample size of the internal pilot study. We show that the adjusted significance level increases as the sample size of the internal pilot study increases. Copyright © 2016 John Wiley & Sons, Ltd.
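The simple one-sample variance approach can be sketched in a few lines: the interim data are pooled without treatment labels, their sample variance is computed, and the per-group size is recalculated from the usual two-sample normal approximation. (Note, as the literature discusses, that the blinded variance overestimates the true variance by Δ²/4 under the alternative; the simple estimator below ignores this, which is part of what drives the small type I error effects studied above.)

```python
from math import ceil
from statistics import NormalDist, variance

def reestimated_n(blinded_data, delta, alpha=0.05, power=0.90):
    """Recalculate the per-group sample size from the blinded (pooled)
    one-sample variance estimator at the interim analysis."""
    z = NormalDist()
    za, zb = z.inv_cdf(1 - alpha / 2), z.inv_cdf(power)
    s2 = variance(blinded_data)  # treatment labels are never used
    return ceil(2 * s2 * (za + zb) ** 2 / delta ** 2)

# hypothetical interim data from an internal pilot study
interim = [4.1, 5.0, 3.8, 6.2, 5.5, 4.7, 6.0, 3.9, 5.2, 4.4]
n_new = reestimated_n(interim, delta=1.0)
```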

12.
In this article, an attempt is made to settle the question of the existence of an unbiased estimator of the key parameter p of the quasi-binomial distributions of Type I (QBD I) and Type II (QBD II), with and without knowledge of the other parameter φ appearing in the expressions for the probability functions of the QBDs. This is studied with reference to a single observation, a random sample of finite size m, and samples drawn by suitably defined sequential sampling rules.

13.
The primary purpose of this paper is to develop a sequential Monte Carlo approximation to an ideal bootstrap estimate of the parameter of interest. Using the concept of fixed-precision approximation, we construct a sequential stopping rule for determining the number of bootstrap samples to be taken in order to achieve a specified precision of the Monte Carlo approximation. It is shown that the sequential Monte Carlo approximation is asymptotically efficient for the problems of estimating the bias and standard error of a given statistic. Efficient bootstrap resampling is discussed, and a numerical study is carried out to illustrate the theoretical results.
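The general shape of such a procedure can be sketched as follows: draw bootstrap replicates in batches and stop once the Monte Carlo error of the quantity being approximated falls below a tolerance. The stopping rule here (MC error of a standard deviation ≈ SD/√(2B)) is a generic normal-theory approximation, not the specific rule of the paper:

```python
import random
import statistics

def sequential_bootstrap_se(data, stat, eps=0.02, batch=200, max_b=20000):
    """Draw bootstrap replicates of `stat` until the (approximate)
    Monte Carlo standard error of the bootstrap SE estimate < eps."""
    rng = random.Random(7)
    reps = []
    while len(reps) < max_b:
        for _ in range(batch):
            reps.append(stat(rng.choices(data, k=len(data))))
        se = statistics.stdev(reps)            # bootstrap SE estimate
        mc_err = se / (2 * len(reps)) ** 0.5   # rough MC error of an SD
        if mc_err < eps:
            break
    return se, len(reps)

data = list(range(1, 21))
se, b_used = sequential_bootstrap_se(data, statistics.mean)
```

For the mean of this toy sample the bootstrap SE should land near σ̂/√n ≈ 1.29, and the rule stops after a data-driven number of replicates rather than a fixed B.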

14.
Suboptimal Bayesian sequential methods for choosing the best (i.e., largest-probability) multinomial cell are considered, and their performance is studied using Monte Carlo simulation. Performance characteristics, such as the probability of correct selection and others associated with the sample size distribution, are evaluated assuming a maximum sample size. Single-observation sequential rules, rules where groups of observations are taken, and fixed sample size rules are discussed.
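For the simplest of the rules compared above, the fixed sample size rule, the probability of correct selection is easy to estimate by Monte Carlo; a minimal sketch (the Bayesian sequential rules themselves are not reproduced here):

```python
import random

def prob_correct_selection(p, n, reps=1000, seed=3):
    """Monte Carlo estimate of P(correct selection) for the fixed
    sample size rule: select the cell with the largest observed count."""
    rng = random.Random(seed)
    best = max(range(len(p)), key=lambda i: p[i])
    hits = 0
    for _ in range(reps):
        counts = [0] * len(p)
        for _ in range(n):            # one multinomial draw of size n
            u, acc = rng.random(), 0.0
            for i, pi in enumerate(p):
                acc += pi
                if u < acc:
                    counts[i] += 1
                    break
        hits += max(range(len(p)), key=lambda i: counts[i]) == best
    return hits / reps

pcs = prob_correct_selection([0.6, 0.2, 0.2], n=100)
```

With a well-separated configuration like (0.6, 0.2, 0.2), n = 100 already selects the best cell almost surely; sequential rules aim to reach a comparable PCS with a smaller expected sample size.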

15.
Stute (1993, Consistent estimation under random censorship when covariables are present. Journal of Multivariate Analysis 45, 89–103) proposed a new method to estimate regression models with a censored response variable using least squares and showed the consistency and asymptotic normality of his estimator. This article proposes a new bootstrap-based methodology that improves the performance of asymptotic interval estimation for small sample sizes. We compare the behavior of Stute's asymptotic confidence interval with that of several confidence intervals based on bootstrap resampling techniques. To build these confidence intervals, we propose a new bootstrap resampling method adapted to the case of censored regression models. We use simulations to study the improvement in performance of the proposed bootstrap-based confidence intervals compared with the asymptotic proposal. Simulation results indicate that, for the new proposals, coverage percentages are closer to the nominal values and, in addition, the intervals are narrower.

16.
Baseline-adjusted analyses are commonly encountered in practice, and regulatory guidelines endorse this practice. Sample size calculations for such analyses require knowledge of the magnitude of nuisance parameters that are usually not given when the results of clinical trials are reported in the literature. It is therefore quite natural to start with a preliminary sample size based on the sparse information available in the planning phase and to re-estimate the value of the nuisance parameters (and with it the sample size) once a portion of the planned number of patients have completed the study. We investigate the characteristics of this internal pilot study design when an analysis of covariance with a normally distributed outcome and one random covariate is applied. For this purpose, we first assess the accuracy of four approximate sample size formulae within the fixed sample size design. Then the performance of the recalculation procedure with respect to its actual type I error rate and power characteristics is examined. The results of simulation studies show that this approach has favorable properties with respect to the type I error rate and power. Together with its simplicity, these features should make it attractive for practical application. Copyright © 2009 John Wiley & Sons, Ltd.
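The abstract does not reproduce its four formulae, but the simplest widely used approximation for ANCOVA with one baseline covariate replaces the outcome variance σ² by the residual variance σ²(1 − ρ²), where ρ is the outcome-baseline correlation; both σ and ρ are exactly the nuisance parameters an internal pilot would re-estimate. A minimal sketch under that approximation:

```python
from math import ceil
from statistics import NormalDist

def n_ancova(sigma, rho, delta, alpha=0.05, power=0.90):
    """Per-group sample size for baseline-adjusted ANCOVA, using the
    covariate-adjusted residual variance sigma^2 * (1 - rho^2)."""
    z = NormalDist()
    za, zb = z.inv_cdf(1 - alpha / 2), z.inv_cdf(power)
    return ceil(2 * sigma ** 2 * (1 - rho ** 2) * (za + zb) ** 2 / delta ** 2)

print(n_ancova(sigma=10, rho=0.5, delta=5))  # 64 per group
```

A correlation of 0.5 cuts the required sample size by 25% relative to the unadjusted comparison, which is why baseline adjustment is endorsed in the first place.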

17.
Monitoring clinical trials in nonfatal diseases, where ethical considerations do not dictate early termination upon demonstration of efficacy, often requires examining the interim findings to ensure that the protocol-specified sample size will provide sufficient power against the null hypothesis when the alternative hypothesis is true. The sample size may be increased, if necessary, to ensure adequate power. This paper presents a new method for carrying out such interim power evaluations for observations from normal distributions without unblinding the treatment assignments or discernibly affecting the type I error rate. Simulation studies confirm the expected performance of the method.

18.
In many industrial quality control experiments and destructive stress tests, the only available data are successive minima (or maxima), i.e., record-breaking data. There are two sampling schemes used to collect record-breaking data: random sampling and inverse sampling. In random sampling, the total sample size is predetermined and the number of records is a random variable, while in inverse sampling the number of records to be observed is predetermined and thus the sample size is a random variable. The purpose of this paper is to determine, via simulations, which of the two schemes, if either, is more efficient. Since the two schemes are asymptotically equivalent, the simulations were carried out for small to moderate-sized record-breaking samples. Simulated biases and mean square errors of the maximum likelihood estimators of the parameters under the two sampling schemes were compared. In general, it was found that if the estimators were well behaved, there was no significant difference between the mean square errors of the estimates for the two schemes. However, for certain distributions described by both a shape and a scale parameter, random sampling led to estimators that were inconsistent, whereas the estimates obtained from inverse sampling were always consistent. Moreover, for moderate-sized record-breaking samples, the total sample size that needs to be observed is smaller for inverse sampling than for random sampling.
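The two collection schemes can be sketched directly: under random sampling the records are extracted from a stream of fixed length, while under inverse sampling the stream is consumed until a prespecified number of records has occurred (the `10 ** 6` cap below is just a safety bound for the generator, not part of the scheme):

```python
import random

def records(seq):
    """Successive minima (record-breaking values) of a sequence:
    the random-sampling scheme with predetermined total sample size."""
    recs = []
    for x in seq:
        if not recs or x < recs[-1]:
            recs.append(x)
    return recs

def inverse_sample(stream, r):
    """Observe items until r records occur (inverse sampling); return
    the records and the random total sample size actually observed."""
    recs, n = [], 0
    for x in stream:
        n += 1
        if not recs or x < recs[-1]:
            recs.append(x)
            if len(recs) == r:
                break
    return recs, n

print(records([5, 3, 4, 2, 6, 1]))  # [5, 3, 2, 1]
rng = random.Random(0)
recs, n_used = inverse_sample((rng.random() for _ in range(10 ** 6)), r=5)
```

Since the expected number of records among n observations grows only like log n, the random total sample size under inverse sampling grows roughly exponentially in the requested number of records.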

19.
In this study, we propose a group sequential procedure that allows the necessary sample size to be changed at an intermediate stage of a sequential test. In the procedure, we formulate the conditional power used in the decision rules to judge whether a change of sample size is necessary. Furthermore, we present an integral formula for the power of the test and show how to change the necessary sample size using it. In simulation studies, we investigate the characteristics of the sample size changes and the pattern of decisions across all stages based on generated normal random numbers.
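A minimal sketch of conditional-power-driven sample size change, under assumptions not taken from the paper: a one-sample z-test with known σ = 1, and the observed stage-1 effect taken as the truth for the projection. The stage-2 size is increased until the conditional power reaches the target:

```python
from math import sqrt
from statistics import NormalDist

nd = NormalDist()

def cond_power(z1, n1, n2, alpha=0.025):
    """Conditional power of a one-sample z-test (sigma = 1), given the
    stage-1 statistic z1 from n1 observations, if n2 more are taken and
    the observed effect theta_hat = z1 / sqrt(n1) is the true effect."""
    theta = z1 / sqrt(n1)
    za = nd.inv_cdf(1 - alpha)
    return nd.cdf((z1 * sqrt(n1) + n2 * theta - za * sqrt(n1 + n2)) / sqrt(n2))

def reestimate_n2(z1, n1, target=0.90, n2_max=2000):
    """Smallest stage-2 size whose conditional power reaches the target."""
    for n2 in range(1, n2_max + 1):
        if cond_power(z1, n1, n2) >= target:
            return n2
    return n2_max

n2 = reestimate_n2(z1=1.5, n1=50)
```

Note that naively testing the pooled statistic at the nominal level after such a data-driven increase inflates the type I error; the procedure above only illustrates the sample size arithmetic, not the adjusted decision rules of the paper.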

20.
Two-stage designs are widely used to determine whether a clinical trial should be terminated early. In such trials, a maximum likelihood estimate is often adopted to describe the difference in efficacy between the experimental and reference treatments; however, this estimator is known to display conditional bias. To reduce such bias, a conditional mean-adjusted estimator (CMAE) has been proposed, although the remaining bias may be non-negligible when a trial is stopped for efficacy at the interim analysis. We propose a new estimator that adjusts the conditional bias of the treatment effect by extending the idea of the CMAE. This estimator is calculated by weighting the maximum likelihood estimate obtained at the interim analysis and the effect size prespecified when calculating the sample size. We evaluate the performance of the proposed estimator through analytical and simulation studies in various settings in which a trial is stopped for efficacy or futility at the interim analysis. We find that the conditional bias of the proposed estimator is smaller than that of the CMAE when the information time at the interim analysis is small. In addition, the mean-squared error of the proposed estimator is also smaller than that of the CMAE. In conclusion, we recommend the use of the proposed estimator for trials that are terminated early for efficacy or futility.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.) · 京ICP备09084417号