Similar Documents
20 similar documents retrieved.
1.
The planning of bioequivalence (BE) studies, as for any clinical trial, requires a priori specification of an effect size for the determination of power and an assumption about the variance. The specified effect size may be overly optimistic, leading to an underpowered study. The assumed variance can be either too small or too large, leading, respectively, to studies that are underpowered or overly large. There has been much work in the clinical trials field on various types of sequential designs that include sample size reestimation after the trial is started, but these have seen little use in BE studies. The purpose of this work was to validate at least one such method for crossover-design BE studies. Specifically, we considered sample size reestimation for a two-stage trial based on the variance estimated from the first stage. We identified two methods based on Pocock's method for group sequential trials that met our requirement of at most a negligible increase in the type I error rate.
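As a rough illustration of such a two-stage procedure, the sketch below follows a Potvin-style "method B" flow: test at Pocock's adjusted level at stage 1 and, if the result is inconclusive, re-estimate the total sample size from the stage-1 CV and test again on the combined data. The adjusted level of 0.0294, the power target, and the helper names are illustrative assumptions, not a verbatim rendering of the paper's methods.

```python
import numpy as np
from scipy import stats

LN_LO, LN_HI = np.log(0.80), np.log(1.25)

def tost_pass(dbar, se, df, alpha=0.0294):
    """TOST via the (1 - 2*alpha) CI inclusion rule on the log scale."""
    t = stats.t.ppf(1 - alpha, df)
    return (dbar - t * se >= LN_LO) and (dbar + t * se <= LN_HI)

def n_for_power(cv, gmr=0.95, alpha=0.0294, target=0.80):
    """Smallest total n (2x2 crossover) reaching the target TOST power,
    using the usual shifted-t approximation."""
    s = np.sqrt(np.log(1.0 + cv ** 2))        # within-subject SD, log scale
    for n in range(4, 1002, 2):
        df, se = n - 2, s * np.sqrt(2.0 / n)  # SE of the formulation contrast
        tcrit = stats.t.ppf(1 - alpha, df)
        power = (stats.t.cdf((LN_HI - np.log(gmr)) / se - tcrit, df)
                 + stats.t.cdf((np.log(gmr) - LN_LO) / se - tcrit, df) - 1)
        if power >= target:
            return n
    return 1000

def two_stage_decision(stage1, alpha=0.0294, gmr=0.95):
    """Stage-1 decision: declare BE, or return the re-estimated total n.
    stage1 = (dbar, se, df, cv, n1) from the first-stage crossover fit."""
    dbar, se, df, cv, n1 = stage1
    if tost_pass(dbar, se, df, alpha):
        return "BE at stage 1"
    return max(n_for_power(cv, gmr, alpha), n1 + 2)  # add at least 2 subjects
```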

2.
The internal pilot study design allows the sample size to be modified during an ongoing study based on a blinded estimate of the variance, thus maintaining trial integrity. Various blinded sample size re-estimation procedures have been proposed in the literature. We compare blinded sample size re-estimation procedures based on the one-sample variance of the pooled data with a blinded procedure using the randomization block information, with respect to the bias and variance of the variance estimators and the distribution of the resulting sample sizes, power, and actual type I error rate. For reference, sample size re-estimation based on the unblinded variance is also included in the comparison. It is shown that using an unbiased variance estimator (such as the one using the randomization block information) for sample size re-estimation does not guarantee that the desired power is achieved. Moreover, in situations that are common in clinical trials, the variance estimator that employs the randomization block length shows higher variability than the simple one-sample estimator, and so, in turn, does the sample size resulting from the related re-estimation procedure. This higher variability can lead to lower power, as demonstrated in the setting of noninferiority trials. In summary, the one-sample estimator obtained from the pooled data is extremely simple to apply, shows good performance, and is therefore recommended for application.
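To make the two blinded estimators concrete, here is a minimal sketch. The one-sample variance of the pooled responses is upward-biased by Δ²/4 under 1:1 allocation (Δ being the true treatment difference), while the block-based estimator is unbiased because each complete block sum contains the treatment effect as the same constant. The block-sum form shown is one standard construction and is an assumption here, not a quotation of the paper's formulas.

```python
import numpy as np

def one_sample_var(pooled):
    """Simple blinded estimator: sample variance of all pooled responses.
    Under 1:1 allocation its expectation is sigma^2 + delta^2/4 (upward bias)."""
    return np.var(pooled, ddof=1)

def block_sum_var(blocks):
    """Blinded estimator using randomization block information: each complete
    block of size m with equal allocation has a sum whose variance is m*sigma^2,
    because the treatment effect enters every block sum as the same constant."""
    sums = np.array([np.sum(b) for b in blocks])
    m = len(blocks[0])
    return np.var(sums, ddof=1) / m
```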

3.
The purpose of this study was to evaluate the effect of residual variability and carryover on average bioequivalence (ABE) studies performed under a 2×2 crossover design. ABE is usually assessed by means of the confidence interval inclusion principle. Here, the interval under consideration was the standard 'shortest' interval, which is the mainstream approach in practice. The evaluation was performed by means of a simulation study under different combinations of carryover and residual variability, in addition to formulation effect and sample size, and was made in terms of the percentage of ABE declarations, coverage, and interval precision. As is well known, high levels of variability distort the ABE procedure, particularly its type II error control (i.e., high variability makes it difficult to declare bioequivalence when it holds). The effect of carryover is modulated by variability and is especially disturbing for type I error control. In the presence of carryover, the risk of erroneously declaring bioequivalence may become high, especially for low variabilities and large sample sizes. We conclude with some hints concerning the controversy about pretesting for carryover before performing an ABE analysis.
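For orientation, simulations of this kind typically generate data from the standard 2×2 crossover model with first-order carryover; the notation below is generic and assumed, not copied from the paper:

```latex
Y_{ijk} = \mu + F_{d(i,k)} + P_k + C_{d(i,k-1)} + s_{ij} + e_{ijk},
\qquad s_{ij} \sim N(0,\sigma_s^2), \quad e_{ijk} \sim N(0,\sigma_e^2),
```

where d(i,k) is the formulation administered to sequence i in period k, F is the formulation effect, P_k the period effect, C_{d(i,k-1)} the first-order carryover of the formulation given in the previous period (zero in period 1), s_{ij} a random subject effect, and e_{ijk} the within-subject error. ABE is then declared when the shortest 90% confidence interval for exp(F_T - F_R) lies entirely within [0.80, 1.25].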

4.
In drug development, bioequivalence studies are used to indirectly demonstrate clinical equivalence of a test formulation and a reference formulation of a specific drug by establishing their equivalence in bioavailability. These studies are typically run as crossover studies. In the planning phase of such trials, investigators and sponsors are often faced with high variability in the coefficients of variation of the typical pharmacokinetic endpoints, such as the area under the concentration curve or the maximum plasma concentration. Adaptive designs have recently been considered to deal with this uncertainty by adjusting the sample size based on the accumulating data. Because regulators generally favor sample size re-estimation procedures that maintain the blinding of the treatment allocations throughout the trial, we propose in this paper a blinded sample size re-estimation strategy and investigate its error rates. We show that the procedure, although blinded, can lead to some inflation of the type I error rate. In the context of an example, we demonstrate how this inflation can be adjusted for to achieve control of the type I error rate at a pre-specified level. Furthermore, some refinements of the re-estimation procedure are proposed to improve the power properties, in particular in scenarios with small sample sizes.

5.
In 2008, this group published a paper on approaches for two-stage crossover bioequivalence (BE) studies that allowed for the reestimation of the second-stage sample size based on the variance estimated from the first-stage results. The sequential methods considered used an assumed GMR of 0.95 as part of the method for determining power and sample size. This note adds results for an assumed GMR of 0.90. Two of the methods recommended for GMR = 0.95 in the earlier paper show unacceptable increases in the type I error rate when the GMR is changed to 0.90. If a sponsor wants to assume 0.90 for the GMR, Method D is recommended.

6.
Similar to Schuirmann's two one-sided tests procedure for the assessment of bioequivalence in average bioavailability (Schuirmann, 1987), Liu and Chow proposed a two one-sided tests procedure for the assessment of equivalence of variability of bioavailability. Their procedure is derived from the correlation between crossover differences and subject totals. In this paper, we examine the performance of their test procedure in terms of its size and power in various situations where the intersubject and intrasubject variabilities of the test drug product are larger than, similar to, or smaller than the intrasubject variability of the reference drug product.
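The correlation device rests on the identity Cov(X − Y, X + Y) = Var(X) − Var(Y) for paired observations, so equal variances correspond to zero correlation between crossover differences and subject totals. The sketch below shows the classical point-null version of this test (the Pitman-Morgan test); Liu and Chow's equivalence procedure replaces the point null with two one-sided hypotheses.

```python
import numpy as np
from scipy import stats

def pitman_morgan(x, y):
    """Test Var(X) = Var(Y) for paired data via corr(X - Y, X + Y) = 0."""
    d, s = x - y, x + y
    r = np.corrcoef(d, s)[0, 1]
    n = len(x)
    t = r * np.sqrt((n - 2) / (1 - r ** 2))   # t-test for zero correlation
    return t, 2 * stats.t.sf(abs(t), n - 2)
```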

7.
In prior work, this group demonstrated the feasibility of valid adaptive sequential designs for crossover bioequivalence studies. In this paper, we extend the prior work to optimize adaptive sequential designs over a range of geometric mean test/reference ratios (GMRs) of 70–143% within each of two ranges of intra-subject coefficient of variation (10–30% and 30–55%). These designs also introduce a futility decision for stopping the study after the first stage if there is a sufficiently low likelihood of meeting the bioequivalence criteria were the second stage completed, as well as an upper limit on total study size. The optimized designs exhibited substantially improved performance characteristics over our previous adaptive sequential designs. Even though the optimized designs avoided undue inflation of the type I error and maintained power at 80%, their average sample sizes were similar to or less than those of conventional single-stage designs.

8.
In studies with recurrent event endpoints, misspecified assumptions about event rates or dispersion can lead to underpowered trials or overexposure of patients. Specification of overdispersion is often a particular problem, as it is usually not reported in clinical trial publications. Changing event rates over the years have been described for some diseases, adding to the uncertainty in planning. To mitigate the risks of inadequate sample sizes, internal pilot study designs have been proposed, with a preference for blinded sample size reestimation procedures, as they generally do not affect the type I error rate and maintain trial integrity. Blinded sample size reestimation procedures are available for trials with recurrent events as endpoints. However, the variance of the reestimated sample size can be considerable, in particular with early sample size reviews. Motivated by a randomized controlled trial in paediatric multiple sclerosis, a rare neurological condition in children, we apply the concept of blinded continuous monitoring of information, which is known to reduce the variance of the resulting sample size. Assuming negative binomial distributions for the counts of recurrent relapses, we derive information criteria and propose blinded continuous monitoring procedures. Their operating characteristics are assessed in Monte Carlo trial simulations, demonstrating favourable properties with regard to type I error rate, power, and stopping time, i.e., sample size.

9.
A test for assessing the equivalence of the two variances of a bivariate normal vector is constructed. It is uniformly more powerful than the two one-sided tests procedure, and the power improvement is substantial. Numerical studies show that it has a type I error close to the test level at most boundary points of the null hypothesis space. One can apply this test in paired difference experiments or 2×2 crossover designs to compare the variances of two populations with two correlated samples. The application of this test to bioequivalence in variability is presented. We point out that bioequivalence in intra-variability implies bioequivalence in variability; however, the test of the latter has greater power.

10.
In a human bioequivalence (BE) study, the conclusion of BE is usually based on the ratio of geometric means of pharmacokinetic parameters between a test and a reference product. The "Guideline for Bioequivalence Studies of Generic Products" (2012) issued by the Japanese health authority, like similar guidelines across the world, requires a 90% confidence interval (CI) of the ratio to fall entirely within the range 0.80 to 1.25 for the conclusion of BE. If prerequisite conditions are satisfied, the Japanese guideline provides for a secondary BE criterion that requires the point estimate of the ratio to fall within the range 0.90 to 1.11. We investigated the statistical properties of the "switching decision rule", wherein the secondary criterion is applied only when the CI criterion fails. The behavior of the switching decision rule differed from that of either of its component criteria and displayed an apparent type I error rate inflation when the prerequisite conditions were not considered. The degree of inflation became greater as the true variability increased relative to the assumed variability used in the sample size calculation. To our knowledge, this is the first report in which the overall behavior of the combination of the two component criteria has been investigated. The implications of the in vitro tests for human BE, and the accuracy of the assumed intra-subject variability, have an impact on the appropriate planning and interpretation of BE studies utilizing the switching decision rule.
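In code, the switching decision rule reduces to a short disjunction; the guideline's prerequisite conditions (e.g., supporting in vitro data) are deliberately not encoded in this sketch:

```python
def be_switching_rule(pe, ci90):
    """Switching decision rule: primary 90% CI criterion, with the
    secondary point-estimate criterion applied only when the CI fails."""
    lo, hi = ci90
    if 0.80 <= lo and hi <= 1.25:   # primary: CI entirely within 0.80-1.25
        return True
    return 0.90 <= pe <= 1.11       # secondary: point estimate only
```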

11.
Bayesian dynamic borrowing designs facilitate borrowing information from historical studies. Historical data, when perfectly commensurate with current data, have been shown to reduce trial duration and sample size, while inflation of the type I error and reduction in power have been reported when they are imperfectly commensurate. These results, however, were obtained without considering that Bayesian designs are calibrated to meet regulatory requirements in practice, and that even no-borrowing designs may use information from historical data in the calibration. This implicit borrowing of historical data suggests that imperfectly commensurate historical data may similarly impact no-borrowing designs negatively. We provide a fair appraisal of Bayesian dynamic borrowing and no-borrowing designs. We used a published selective adaptive randomization design and a real clinical trial setting and conducted simulation studies under varying degrees of imperfectly commensurate historical control scenarios. The type I error was inflated under the null scenario of no intervention effect, with larger inflation noted with borrowing. The larger inflation in type I error under the null setting can be offset by the greater probability of stopping early correctly under the alternative. Response rates were estimated more precisely, and the average sample size was smaller, with borrowing. The expected increase in bias with borrowing was noted but was negligible. Using Bayesian dynamic borrowing designs may improve trial efficiency by stopping trials early correctly and reducing trial length, at the small cost of an inflated type I error.

12.
The Conway–Maxwell–Poisson estimator is considered in this paper as the population size estimator. The benefit of using the Conway–Maxwell–Poisson distribution is that it includes the Bernoulli, the geometric, and the Poisson distributions as special cases and, furthermore, allows for heterogeneity. Little emphasis is often placed on the variability associated with the population size estimate. This paper provides a deep and extensive comparison of bootstrap methods in the capture–recapture setting. It covers the classical bootstrap approach using the true population size (the "true" bootstrap) and the classical bootstrap using the observed sample size (the "reduced" bootstrap). Furthermore, the imputed bootstrap, as well as approximating forms for the standard errors and confidence intervals of the population size under the Conway–Maxwell–Poisson distribution, are investigated and discussed. These methods are illustrated in a simulation study and on benchmark real data examples.
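A minimal sketch of the "reduced" bootstrap variant follows: resample the observed sample at its observed size n, re-estimate N on each replicate, and summarize the spread. The estimator `estimate_N` (for instance, one based on a fitted zero-truncated Conway-Maxwell-Poisson model) is a user-supplied placeholder, not an implementation of the paper's estimator.

```python
import numpy as np

def reduced_bootstrap(counts, estimate_N, B=2000, seed=0):
    """'Reduced' bootstrap for capture-recapture: resample the observed
    counts (size n, not the unknown N) and re-estimate the population size.
    `estimate_N` is a hypothetical user-supplied estimator."""
    rng = np.random.default_rng(seed)
    counts = np.asarray(counts)
    n = len(counts)
    est = np.array([estimate_N(rng.choice(counts, size=n, replace=True))
                    for _ in range(B)])
    return est.std(ddof=1), np.percentile(est, [2.5, 97.5])
```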

13.
The carryover effect is a recurring issue in the pharmaceutical field, and it may strongly influence the final outcome of an average bioequivalence study. Testing a null hypothesis of zero carryover is useless: not rejecting it does not guarantee the non-existence of carryover, and rejecting it is not informative of the true degree of carryover and its influence on the validity of the final outcome of the bioequivalence study. We propose a more consistent approach: even if some carryover is present, is it enough to seriously distort the study conclusions, or is it negligible? This is the central aim of this paper, which focuses on average bioequivalence studies based on 2×2 crossover designs and on the main problem associated with carryover: type I error inflation. We propose an equivalence testing approach to these questions and suggest reasonable negligibility or relevance limits for carryover. Finally, we illustrate this approach on some real datasets.
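In a 2×2 crossover, carryover is aliased with the sequence effect and is assessed from subject totals (each subject's sum over both periods). A minimal sketch of a TOST-style negligibility check is shown below; the margin, the α level, and the function shape are illustrative assumptions, since the paper proposes its own limits.

```python
import numpy as np
from scipy import stats

def carryover_negligible(tot_seq1, tot_seq2, margin, alpha=0.05):
    """TOST for negligible carryover in a 2x2 crossover, based on the
    difference in mean subject totals between sequences. Equivalent to
    checking that the (1 - 2*alpha) CI lies within [-margin, +margin]."""
    n1, n2 = len(tot_seq1), len(tot_seq2)
    lam = np.mean(tot_seq1) - np.mean(tot_seq2)   # carryover estimate
    sp2 = ((n1 - 1) * np.var(tot_seq1, ddof=1)
           + (n2 - 1) * np.var(tot_seq2, ddof=1)) / (n1 + n2 - 2)
    se = np.sqrt(sp2 * (1 / n1 + 1 / n2))
    t = stats.t.ppf(1 - alpha, n1 + n2 - 2)
    return (lam - t * se >= -margin) and (lam + t * se <= margin)
```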

14.
We consider blinded sample size re-estimation based on the simple one-sample variance estimator at an interim analysis. We characterize the exact distribution of the standard two-sample t-test statistic at the final analysis and describe a simulation algorithm for evaluating the probability of rejecting the null hypothesis at a given treatment effect. We compare the blinded sample size re-estimation method with two unblinded methods with respect to the empirical type I error, the empirical power, and the empirical distributions of the standard deviation estimator and the final sample size. We characterize the type I error inflation across the range of standardized non-inferiority margins for non-inferiority trials and derive the adjusted significance level that ensures type I error control for a given sample size of the internal pilot study. We show that the adjusted significance level increases as the sample size of the internal pilot study increases.
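The simulation algorithm can be sketched as follows for a noninferiority comparison: draw the internal pilot, compute the blinded one-sample variance of the pooled data, re-estimate the per-group sample size by the normal approximation (here at an assumed true difference of zero), complete the trial, and apply the final t-test. All design constants are placeholders, not the paper's settings.

```python
import numpy as np
from scipy import stats

def rejection_prob(delta, sigma, n1, margin, alpha=0.025, power=0.80,
                   n_max=500, reps=5000, seed=1):
    """Monte Carlo rejection probability of a noninferiority t-test with
    blinded sample size re-estimation at an internal pilot (sketch).
    H0: mu_t - mu_c <= -margin is rejected for large t-statistics."""
    rng = np.random.default_rng(seed)
    z2 = (stats.norm.ppf(1 - alpha) + stats.norm.ppf(power)) ** 2
    rejected = 0
    for _ in range(reps):
        xt = rng.normal(delta, sigma, n1)              # treatment, pilot
        xc = rng.normal(0.0, sigma, n1)                # control, pilot
        s2 = np.var(np.concatenate([xt, xc]), ddof=1)  # blinded variance
        n = int(np.clip(np.ceil(2 * s2 * z2 / margin ** 2), n1, n_max))
        xt = np.concatenate([xt, rng.normal(delta, sigma, n - n1)])
        xc = np.concatenate([xc, rng.normal(0.0, sigma, n - n1)])
        sp2 = (np.var(xt, ddof=1) + np.var(xc, ddof=1)) / 2
        tstat = (xt.mean() - xc.mean() + margin) / np.sqrt(2 * sp2 / n)
        rejected += tstat > stats.t.ppf(1 - alpha, 2 * n - 2)
    return rejected / reps
```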

15.
The study design was a multi-center, multiple-dose, randomized, open-label, 2×2 crossover study in patients with advanced solid tumors. Each patient was randomized to receive the test formulation or the reference formulation of the drug. The primary objective of the study was to demonstrate the bioequivalence of the test formulation T relative to the reference formulation R. The primary pharmacokinetic endpoints were AUC and Cmax. Since there were different bioequivalence criteria and different endpoints, with different and highly variable coefficients of variation, an adaptive design was implemented, with a stopping rule for establishing bioequivalence early as well as early stopping for futility, using flexible information-based monitoring based on an error-spending approach, to manage uncertainty in the assumptions about variability and the expected slow enrollment rate.

16.
Preliminary tests of significance on crucial assumptions are often done before drawing the inferences of primary interest. In a factorial trial, the data may be pooled across the columns or rows for making inferences concerning the efficacy of the drugs (the simple effect) in the absence of interaction. Pooling the data has the advantage of higher power due to the larger sample size. On the other hand, in the presence of interaction, such pooling may seriously inflate the type I error rate in testing for the simple effect.

A preliminary test for interaction is therefore in order. If this preliminary test is not significant at some prespecified level of significance, the data are pooled for testing the efficacy of the drugs at a specified α level; otherwise, the corresponding cell means are used for testing the efficacy of the drugs at the specified α. This paper demonstrates that this adaptive procedure may seriously inflate the overall type I error rate. Such inflation happens even in the absence of interaction.

One interesting result is that the type I error rate of the adaptive procedure depends on the interaction and the square root of the sample size only through their product. One consequence of this result is as follows. No matter how small the non-zero interaction might be, the inflation of the type I error rate of the always-pool procedure will eventually become unacceptable as the sample size increases. Therefore, in a very large study, even though the interaction is suspected to be very small but non-zero, the always-pool procedure may seriously inflate the type I error rate in testing for the simple effects.

It is concluded that the 2 × 2 factorial design is not an efficient design for detecting simple effects, unless the interaction is negligible.
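The inflation described above is easy to reproduce by simulation. The sketch below implements the adaptive pool-if-no-interaction procedure for the simple effect of drug A, with a true simple effect of zero; all design constants are illustrative assumptions.

```python
import numpy as np
from scipy import stats

def adaptive_type1(inter, sigma=1.0, n=50, a_pre=0.05, a_test=0.05,
                   reps=20000, seed=7):
    """Type I error of the pool-if-no-interaction procedure for the simple
    effect of drug A (true simple effect zero; `inter` is the interaction,
    which shifts only the A+B+ cell)."""
    rng = np.random.default_rng(seed)
    rej = 0
    for _ in range(reps):
        y00 = rng.normal(0.0, sigma, n)    # A-, B-
        y10 = rng.normal(0.0, sigma, n)    # A+, B-  (simple effect = 0)
        y01 = rng.normal(0.0, sigma, n)    # A-, B+
        y11 = rng.normal(inter, sigma, n)  # A+, B+  (shifted by interaction)
        # preliminary interaction test: contrast of the four cell means
        c = y11.mean() - y01.mean() - y10.mean() + y00.mean()
        sp2 = np.mean([v.var(ddof=1) for v in (y00, y10, y01, y11)])
        p_int = 2 * stats.t.sf(abs(c / np.sqrt(4 * sp2 / n)), 4 * n - 4)
        if p_int > a_pre:   # pool over B: all A+ vs all A-
            _, p = stats.ttest_ind(np.concatenate([y10, y11]),
                                   np.concatenate([y00, y01]))
        else:               # use cell means: A+B- vs A-B- only
            _, p = stats.ttest_ind(y10, y00)
        rej += p < a_test
    return rej / reps
```

With `inter = 0` the procedure holds its level, while increasing `inter` at fixed `n` (or `n` at fixed small `inter`) drives the rejection rate above the nominal α, illustrating the product dependence described above.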

17.
For normally distributed data analyzed with linear models, it is well known that measurement error on an independent variable leads to attenuation of the effect of the independent variable on the dependent variable. However, for time-to-event variables such as progression-free survival (PFS), the effect of variability in the underlying measurements defining the event is less well understood. We conducted a simulation study to evaluate the impact of measurement variability in tumor assessment on the treatment effect hazard ratio for PFS, and on the median PFS time, for different tumor assessment frequencies. Our results show that scan measurement variability can attenuate the treatment effect (i.e., move the hazard ratio closer to one) and that the extent of attenuation may increase with more frequent scan assessments. This attenuation leads to inflation of the type II error. Therefore, scan measurement variability should be minimized as far as possible in order to reveal a treatment effect that is closest to the truth. In disease settings where the measurement variability is shown to be large, consideration may be given to inflating the sample size of the study to maintain statistical power.

18.
Reference-scaled average bioequivalence (RSABE) approaches for highly variable drugs are based on linearly scaling the bioequivalence limits according to the reference formulation's within-subject variability. RSABE methods have type I error control problems around the variability value where the limits change from constant to scaled. In all of these methods, the probability of type I error has a single absolute maximum at this switching variability value. This allows the significance level to be adjusted to obtain statistically correct procedures (that is, procedures in which the probability of type I error remains below the nominal significance level), at the expense of some potential power loss. In this paper, we explore adjustments to the EMA and FDA regulatory RSABE approaches, and to a possible improvement of the original EMA method, designated HoweEMA. The resulting adjusted methods are completely correct with respect to the type I error probability. The power loss is generally small and tends to become irrelevant for moderately large (affordable in real studies) sample sizes.
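For reference, the unadjusted EMA scaling rule that these adjusted methods modify can be written in a few lines; the type I error maximum discussed above sits at the switching point CVwR = 30%. The constant k = 0.760 and the 50% cap are the published EMA values; the paper's adjusted procedures would additionally replace the nominal significance level near the switching point.

```python
import numpy as np

def ema_rsabe_limits(s_wr, k=0.760):
    """EMA reference-scaled ABE limits: constant [0.80, 1.25] while
    CVwR <= 30%, then exp(+-k*s_wR), with widening capped at CVwR = 50%
    (limits of about 69.84% to 143.19%)."""
    cv = np.sqrt(np.exp(s_wr ** 2) - 1)        # within-subject CV of R
    if cv <= 0.30:
        return 0.80, 1.25
    s_cap = np.sqrt(np.log(0.50 ** 2 + 1))     # s_wR at CVwR = 50%
    s = min(s_wr, s_cap)
    return float(np.exp(-k * s)), float(np.exp(k * s))
```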

19.
In this paper, we briefly review different zero-inflated probability distributions. We compare the performance of the estimates of the Poisson, generalized Poisson, ZIP, ZIGP, and ZINB models through mean square error (MSE), bias, and standard error (SE) when the samples are generated from a ZIP distribution. We propose a new estimator, referred to as the probability estimator (PE), of the inflation parameter of the ZIP distribution, based on the moment estimator (ME) of the mean parameter, and compare its performance with the ME and the maximum likelihood estimator (MLE) through a simulation study. We use the PE, along with the ME and MLE, to fit the ZIP distribution to various zero-inflated datasets and observe that the results do not differ significantly. We recommend using the PE in place of the MLE, since it is easy to calculate and the simulation study in this paper demonstrates that the PE performs as well as the MLE irrespective of the sample size.
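As a sketch of how such a probability-type estimator can be built, here is one natural construction consistent with the abstract: match the observed zero fraction, plugging in a moment estimator of the Poisson mean. This is an assumption for illustration; the paper's exact definition may differ.

```python
import numpy as np

def zip_pe(x):
    """Probability-type estimator of the ZIP inflation parameter pi.
    ZIP moments: E X = (1-pi)*lam, Var X = (1-pi)*lam*(1+pi*lam), and
    P(X=0) = pi + (1-pi)*exp(-lam); match the observed zero fraction."""
    x = np.asarray(x)
    m, v = x.mean(), x.var(ddof=1)
    lam = m + v / m - 1                   # moment estimator of lambda
    p0 = np.mean(x == 0)                  # observed zero fraction
    pi = (p0 - np.exp(-lam)) / (1 - np.exp(-lam))
    return float(np.clip(pi, 0.0, 1.0)), float(lam)
```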

20.
Pseudo-values have proven very useful in censored data analysis in complex settings such as multi-state models. They were originally suggested by Andersen et al. (Biometrika, 90, 2003, 335), who also proposed estimating standard errors using classical generalized estimating equation results. These results were studied more formally by Graw et al. (Lifetime Data Anal., 15, 2009, 241), who derived some key results based on a second-order von Mises expansion. However, results concerning the large sample properties of estimates based on regression models for pseudo-values still seem unclear. In this paper, we study these large sample properties in the simple setting of survival probabilities and show that the estimating function can be written as a second-order U-statistic, giving rise to an additional term that does not vanish asymptotically. We further show that previously advocated standard error estimates will typically be too large, although in many practical applications the difference will be of minor importance. We show how to correctly estimate the variability of the estimator. This is further studied in some simulation studies.
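For the survival-probability setting discussed here, the pseudo-observations are the usual jackknife quantities; below is a minimal sketch, assuming the lifelines package for the Kaplan-Meier fits.

```python
import numpy as np
from lifelines import KaplanMeierFitter

def pseudo_survival(time, event, t0):
    """Jackknife pseudo-observations for S(t0):
    pv_i = n * S_hat(t0) - (n - 1) * S_hat_{-i}(t0)  (Andersen et al., 2003)."""
    time, event = np.asarray(time, float), np.asarray(event, int)
    n = len(time)
    s_full = (KaplanMeierFitter().fit(time, event)
              .survival_function_at_times(t0).iloc[0])
    pv = np.empty(n)
    for i in range(n):
        keep = np.arange(n) != i          # leave subject i out
        s_i = (KaplanMeierFitter().fit(time[keep], event[keep])
               .survival_function_at_times(t0).iloc[0])
        pv[i] = n * s_full - (n - 1) * s_i
    return pv
```

The pseudo-values can then be regressed on covariates with a GEE; the paper's point is that the sandwich standard errors usually reported from that step tend to be conservative.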
