Similar Literature
20 similar articles found.
1.
The planning of bioequivalence (BE) studies, as for any clinical trial, requires a priori specification of an effect size for the determination of power and an assumption about the variance. The specified effect size may be overly optimistic, leading to an underpowered study. The assumed variance can be either too small or too large, leading, respectively, to studies that are underpowered or overly large. There has been much work in the clinical trials field on various types of sequential designs that include sample size reestimation after the trial is started, but these have seen little use in BE studies. The purpose of this work was to validate at least one such method for crossover design BE studies. Specifically, we considered sample size reestimation for a two-stage trial based on the variance estimated from the first stage. We identified two methods based on Pocock's method for group sequential trials that met our requirement of at most a negligible increase in the type I error rate.
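The two-stage logic described in this abstract can be sketched as follows: run stage 1, test average bioequivalence at a Pocock-type adjusted level, and, if bioequivalence is not yet shown, re-estimate the total sample size from the stage-1 variance before continuing. The sketch below is only an illustration of that flow, assuming an adjusted alpha of 0.0294 at both stages and a shifted central-t power approximation; it is not the authors' validated procedure, and the helper functions are hypothetical.

```python
import numpy as np
from scipy import stats

LIM = np.log(1.25)  # bioequivalence limit on the log scale

def tost_pass(diff_log, se, df, alpha):
    """Two one-sided tests (TOST) for average BE against limits +/- log(1.25)."""
    t_lower = (diff_log + LIM) / se          # H0: diff <= -LIM
    t_upper = (diff_log - LIM) / se          # H0: diff >= +LIM
    return stats.t.sf(t_lower, df) < alpha and stats.t.cdf(t_upper, df) < alpha

def n_for_power(cv, alpha, target_power=0.80, true_ratio=0.95):
    """Smallest even total n for a 2x2 crossover reaching the target TOST power
    (shifted central-t approximation)."""
    sigma_w = np.sqrt(np.log(1.0 + cv ** 2))   # within-subject SD on the log scale
    delta = np.log(true_ratio)
    for n in range(6, 1002, 2):
        se = sigma_w * np.sqrt(2.0 / n)
        df = n - 2
        t_crit = stats.t.ppf(1 - alpha, df)
        power = (stats.t.cdf((LIM - delta) / se - t_crit, df)
                 - stats.t.cdf(-(LIM + delta) / se + t_crit, df))
        if power >= target_power:
            return n
    return 1000

def two_stage_be(stage1_diff, stage1_cv, n1, alpha_adj=0.0294):
    """Pocock-style two-stage decision: test at stage 1; otherwise re-estimate
    the total sample size from the stage-1 coefficient of variation."""
    sigma_w = np.sqrt(np.log(1.0 + stage1_cv ** 2))
    se1 = sigma_w * np.sqrt(2.0 / n1)
    if tost_pass(stage1_diff, se1, n1 - 2, alpha_adj):
        return "BE concluded at stage 1", n1
    n_total = max(n_for_power(stage1_cv, alpha_adj), n1 + 2)
    return "continue to stage 2", n_total

print(two_stage_be(stage1_diff=np.log(0.97), stage1_cv=0.25, n1=12))
```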

2.
We consider the comparison of two formulations in terms of average bioequivalence using the 2 × 2 cross-over design. In a bioequivalence study, the primary outcome is a pharmacokinetic measure, such as the area under the plasma concentration by time curve, which is usually assumed to have a lognormal distribution. The criterion typically used for claiming bioequivalence is that the 90% confidence interval for the ratio of the means should lie within the interval (0.80, 1.25), or equivalently the 90% confidence interval for the difference in the means on the natural log scale should lie within the interval (−0.2231, 0.2231). We compare the gold standard method for calculation of the sample size, based on the non-central t distribution, with those based on the central t and normal distributions. In practice, the differences between the various approaches are likely to be small. Further approximations to the power function are sometimes used to simplify the calculations. These approximations should be used with caution, because the sample size required for a desirable level of power might be under- or overestimated compared to the gold standard method. However, in some situations the approximate methods produce very similar sample sizes to the gold standard method. Copyright © 2005 John Wiley & Sons, Ltd.
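For intuition, here is a small sketch contrasting the non-central t power calculation with the normal approximation for a 2 × 2 crossover TOST at the (0.80, 1.25) limits. The inputs (true ratio 0.95, within-subject CV 25%) are my illustrative assumptions, and the non-central t version uses the usual two-distribution approximation rather than Owen's exact Q function.

```python
import numpy as np
from scipy import stats

LIM = np.log(1.25)          # = 0.2231, BE limit on the log scale

def power_nct(n, cv, delta, alpha=0.05):
    """TOST power from two non-central t distributions (close to the exact value)."""
    sigma_w = np.sqrt(np.log(1 + cv ** 2))
    se = sigma_w * np.sqrt(2.0 / n)
    df = n - 2
    t_crit = stats.t.ppf(1 - alpha, df)
    ncp_lower = (delta + LIM) / se          # for H0: delta <= -LIM
    ncp_upper = (delta - LIM) / se          # for H0: delta >= +LIM
    return max(0.0, stats.nct.cdf(-t_crit, df, ncp_upper)
                    - stats.nct.cdf(t_crit, df, ncp_lower))

def power_normal(n, cv, delta, alpha=0.05):
    """Normal approximation to the TOST power (known-variance formula)."""
    sigma_w = np.sqrt(np.log(1 + cv ** 2))
    se = sigma_w * np.sqrt(2.0 / n)
    z = stats.norm.ppf(1 - alpha)
    return max(0.0, stats.norm.cdf((LIM - delta) / se - z)
                    + stats.norm.cdf((LIM + delta) / se - z) - 1)

def n_required(power_fn, cv, delta, target=0.80, alpha=0.05):
    """Smallest even total sample size reaching the target power."""
    for n in range(6, 1002, 2):
        if power_fn(n, cv, delta, alpha) >= target:
            return n
    return None

cv, delta = 0.25, np.log(0.95)
print("non-central t :", n_required(power_nct, cv, delta))
print("normal approx :", n_required(power_normal, cv, delta))
```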

3.
In prior work, this group demonstrated the feasibility of valid adaptive sequential designs for crossover bioequivalence studies. In this paper, we extend the prior work to optimize adaptive sequential designs over a range of geometric mean test/reference ratios (GMRs) of 70–143% within each of two ranges of intra-subject coefficient of variation (10–30% and 30–55%). These designs also introduce a futility rule for stopping the study after the first stage if there is a sufficiently low likelihood of meeting the bioequivalence criteria were the second stage completed, as well as an upper limit on total study size. The optimized designs exhibited substantially improved performance characteristics over our previous adaptive sequential designs. While the optimized designs avoided undue inflation of the type I error rate and maintained power at 80%, their average sample sizes were similar to or smaller than those of conventional single-stage designs. Copyright © 2015 John Wiley & Sons, Ltd.

4.
In group sequential clinical trials, several sample size re-estimation methods proposed in the literature allow the sample size to be changed at the interim analysis. Most of these methods are based on either the conditional error function or the interim effect size. Our simulation studies compared the operating characteristics of three commonly used sample size re-estimation methods: Chen et al. (2004), Cui et al. (1999), and Müller and Schäfer (2001). Gao et al. (2008) extended the CDL method and provided an analytical expression for the lower and upper thresholds of conditional power within which the type I error rate is preserved. Recently, Mehta and Pocock (2010) argued extensively that the real benefit of the adaptive approach is to invest the sample size resources in stages, increasing the sample size only if the interim results fall in the so-called "promising zone" defined in their article. We incorporated this concept in our simulations while comparing the three methods. To test the robustness of these methods, we explored the impact of an incorrect variance assumption on the operating characteristics. We found that the operating characteristics of the three methods are very comparable. In addition, the promising-zone concept, as suggested by MP, gives the desired power with a smaller average sample size, and thus increases the efficiency of the trial design.
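The promising-zone idea can be illustrated with a simple normal-approximation sketch: compute the conditional power at the interim under the current trend, and increase the sample size (up to a cap) only when that conditional power falls in an intermediate band. The band limits and example values below are illustrative assumptions, not the thresholds derived in the cited papers, and the conventional final test is used rather than a weighted combination statistic.

```python
import numpy as np
from scipy.stats import norm

def conditional_power(z1, t, alpha=0.025):
    """Conditional power under the 'current trend' assumption for a two-stage design,
    given the interim z statistic z1 observed at information fraction t."""
    z_a = norm.ppf(1 - alpha)
    drift_rest = z1 * np.sqrt((1 - t) / t)                 # expected second-stage increment
    crit_rest = (z_a - np.sqrt(t) * z1) / np.sqrt(1 - t)   # what the increment must exceed
    return norm.cdf(drift_rest - crit_rest)

def promising_zone_decision(z1, t, n_planned, n_max, cp_lo=0.36, cp_hi=0.80):
    """Increase the sample size only when the interim result lands in the promising zone."""
    cp = conditional_power(z1, t)
    if cp < cp_lo:
        return "unfavourable zone: keep n (or stop for futility)", n_planned, cp
    if cp >= cp_hi:
        return "favourable zone: keep n", n_planned, cp
    n1 = int(round(t * n_planned))                         # patients already observed
    for n_new in range(n_planned, n_max + 1):
        if conditional_power(z1, n1 / n_new) >= cp_hi:     # new information fraction n1/n_new
            return "promising zone: increase n", n_new, cp
    return "promising zone: increase to the cap", n_max, cp

print(promising_zone_decision(z1=1.3, t=0.5, n_planned=200, n_max=400))
```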

5.
Response-adaptive randomisation (RAR) can considerably improve the chances of a successful treatment outcome for patients in a clinical trial by skewing the allocation probability towards better performing treatments as data accumulate. There is considerable interest in using RAR designs in drug development for rare diseases, where traditional designs are either not feasible or are ethically questionable. In this paper, we discuss and address a major criticism levelled at RAR: namely, type I error inflation due to an unknown time trend over the course of the trial. The most common cause of this phenomenon is changes in the characteristics of recruited patients, referred to as patient drift. This is a realistic concern for clinical trials in rare diseases because of their lengthy accrual periods. We compute the type I error inflation as a function of the magnitude of the time trend to determine in which contexts the problem is most exacerbated. We then assess the ability of different correction methods to preserve the type I error rate in these contexts, as well as their performance in terms of other operating characteristics, including patient benefit and power. We make recommendations as to which correction methods are most suitable in the rare disease context for several RAR rules, differentiating between the 2-armed and the multi-armed case. We further propose a RAR design for multi-armed clinical trials that is computationally efficient and robust to the several time trends considered.
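A quick way to see the drift problem is to simulate a two-arm trial under the null with a shared success probability that rises over the accrual period, allocate patients by a simple RAR rule, and apply an unadjusted final test. The sketch below uses a Thompson-sampling-style rule and a pooled z-test purely for illustration; the specific RAR rules and correction methods evaluated in the paper are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

def trial_rejects(n=200, drift=0.2, burn_in=20):
    """One two-arm trial under H0 (no treatment effect) with a linear time trend
    (patient drift). Allocation: Thompson-sampling-style RAR with Beta(1,1) priors
    after a 1:1 burn-in."""
    s, f = np.zeros(2), np.zeros(2)            # successes / failures per arm
    for i in range(n):
        p_base = 0.3 + drift * i / n           # both arms share the same drifting rate
        if i < burn_in:
            arm = i % 2
        else:
            arm = int(rng.beta(1 + s[1], 1 + f[1]) > rng.beta(1 + s[0], 1 + f[0]))
        y = rng.random() < p_base
        s[arm] += y
        f[arm] += 1 - y
    n0, n1 = s[0] + f[0], s[1] + f[1]
    p0, p1 = s[0] / n0, s[1] / n1
    p_pool = (s[0] + s[1]) / n
    se = np.sqrt(p_pool * (1 - p_pool) * (1 / n0 + 1 / n1))
    return abs(p1 - p0) / se > 1.96            # unadjusted two-sided z-test at the 5% level

for d in (0.0, 0.2, 0.4):
    rate = np.mean([trial_rejects(drift=d) for _ in range(1000)])
    print(f"drift={d:.1f}: empirical type I error {rate:.3f}")
```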

6.

Sufficient dimension reduction (SDR) provides a framework for reducing the predictor space dimension in statistical regression problems. We consider SDR in the context of dimension reduction for deterministic functions of several variables such as those arising in computer experiments. In this context, SDR can reveal low-dimensional ridge structure in functions. Two algorithms for SDR—sliced inverse regression (SIR) and sliced average variance estimation (SAVE)—approximate matrices of integrals using a sliced mapping of the response. We interpret this sliced approach as a Riemann sum approximation of the particular integrals arising in each algorithm. We employ the well-known tools from numerical analysis—namely, multivariate numerical integration and orthogonal polynomials—to produce new algorithms that improve upon the Riemann sum-based numerical integration in SIR and SAVE. We call the new algorithms Lanczos–Stieltjes inverse regression (LSIR) and Lanczos–Stieltjes average variance estimation (LSAVE) due to their connection with Stieltjes’ method—and Lanczos’ related discretization—for generating a sequence of polynomials that are orthogonal with respect to a given measure. We show that this approach approximates the desired integrals, and we study the behavior of LSIR and LSAVE with two numerical examples. The quadrature-based LSIR and LSAVE eliminate the first-order algebraic convergence rate bottleneck resulting from the Riemann sum approximation, thus enabling high-order numerical approximations of the integrals when appropriate. Moreover, LSIR and LSAVE perform as well as the best-case SIR and SAVE implementations (e.g., adaptive partitioning of the response space) when low-order numerical integration methods (e.g., simple Monte Carlo) are used.

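For readers unfamiliar with the slicing step being replaced, here is a compact sketch of classical sliced inverse regression (SIR) applied to a toy ridge function. This is the plain slicing estimator, not the Lanczos–Stieltjes variants proposed in the paper, and the toy function is my own example.

```python
import numpy as np

def sir_directions(X, y, n_slices=10):
    """Classical SIR: eigenvectors of the between-slice covariance of the
    standardized predictors, ordered by eigenvalue."""
    n, p = X.shape
    mu, cov = X.mean(axis=0), np.cov(X, rowvar=False)
    L = np.linalg.cholesky(cov)
    Z = np.linalg.solve(L, (X - mu).T).T           # standardize to identity covariance
    order = np.argsort(y)
    M = np.zeros((p, p))
    for idx in np.array_split(order, n_slices):
        m = Z[idx].mean(axis=0)
        M += (len(idx) / n) * np.outer(m, m)       # weighted outer products of slice means
    vals, vecs = np.linalg.eigh(M)
    vecs = vecs[:, ::-1]                           # descending eigenvalue order
    dirs = np.linalg.solve(L.T, vecs)              # map back to the original coordinates
    return vals[::-1], dirs / np.linalg.norm(dirs, axis=0)

# toy ridge function f(x) = exp(a^T x): SIR should recover the direction a
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))
a = np.array([1.0, 2.0, 0.0, 0.0, -1.0])
a /= np.linalg.norm(a)
y = np.exp(X @ a)
vals, dirs = sir_directions(X, y)
print("leading direction:", np.round(dirs[:, 0], 2))
```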

7.
Bayesian sequential and adaptive randomization designs are gaining popularity in clinical trials thanks to their potential to reduce the number of required participants and save resources. We propose a Bayesian sequential design with adaptive randomization rates that allocates newly recruited patients to the treatment arms more efficiently. In this paper, we consider 2-arm clinical trials. Patients are allocated to the 2 arms with a randomization rate chosen to achieve minimum variance for the test statistic. Algorithms are presented to calculate the optimal randomization rate, critical values, and power for the proposed design. A sensitivity analysis is implemented to check the influence of the choice of prior distributions on the design. Simulation studies compare the proposed method and traditional methods in terms of power and actual sample size. The simulations show that, when the total sample size is fixed, the proposed design can achieve greater power and/or a smaller actual sample size than the traditional Bayesian sequential design. Finally, we apply the proposed method to a real data set and compare the results with the Bayesian sequential design without adaptive randomization in terms of sample size. The proposed method can further reduce the required sample size.
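The variance-minimizing allocation for comparing two arm means at a fixed total sample size is the classical Neyman allocation, proportional to the arm standard deviations. The sketch below plugs posterior-mean variances into that rate as one possible Bayesian flavour; it is an illustration of the idea, not the authors' algorithm, and the normal/inverse-gamma model is my own assumption.

```python
import numpy as np

def neyman_rate(sd_treat, sd_control):
    """Allocation probability to the treatment arm that minimizes
    Var(mean_T - mean_C) for a fixed total sample size (Neyman allocation)."""
    return sd_treat / (sd_treat + sd_control)

def update_rate(y_treat, y_control, prior_a=1.0, prior_b=1.0):
    """Recompute the randomization rate from accumulated data, plugging in
    posterior-mean variances under an inverse-gamma(prior_a, prior_b) prior."""
    def post_sd(y):
        y = np.asarray(y, dtype=float)
        a = prior_a + len(y) / 2
        b = prior_b + 0.5 * np.sum((y - np.mean(y)) ** 2)
        return np.sqrt(b / (a - 1))            # posterior mean of the variance
    return neyman_rate(post_sd(y_treat), post_sd(y_control))

# example: a noisier treatment arm receives a larger share of new patients
rng = np.random.default_rng(0)
yt = rng.normal(1.0, 2.0, size=30)
yc = rng.normal(0.0, 1.0, size=30)
print(f"randomization rate to treatment: {update_rate(yt, yc):.2f}")
```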

8.
In some situations an experimenter may desire equally spaced design points. Three methods of obtaining such points on the interval [−1, 1], namely systematic random sampling, centrally located systematic sampling, and a purposive systematic sampling method that includes the endpoints −1 and 1 as two of the design points, are evaluated under the D-optimality and G-optimality criteria. These methods are also compared to the optimal designs in polynomial regression and to the limiting designs of Kiefer and Studden (1976).
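As a concrete point of comparison, the D-efficiency of an equally spaced purposive design (endpoints included) can be computed against a D-optimal reference for a quadratic model on [−1, 1], whose continuous optimum puts equal weight on {−1, 0, 1}. The 7-point reference below is only a rounding of that continuous design, so the number it produces is illustrative and not a result from the paper.

```python
import numpy as np

def d_criterion(points, degree):
    """Determinant of the per-point information matrix X'X/n for polynomial regression."""
    X = np.vander(np.asarray(points, dtype=float), degree + 1, increasing=True)
    return np.linalg.det(X.T @ X / len(points))

def d_efficiency(points, reference, degree):
    p = degree + 1
    return (d_criterion(points, degree) / d_criterion(reference, degree)) ** (1.0 / p)

n, degree = 7, 2
equispaced = np.linspace(-1.0, 1.0, n)          # purposive design: endpoints -1 and 1 included
# continuous D-optimal design for a quadratic on [-1, 1]: equal weight at -1, 0, 1;
# rounded here to a 7-point exact design (an approximation of that optimum)
reference = np.array([-1.0, -1.0, 0.0, 0.0, 0.0, 1.0, 1.0])
print(f"D-efficiency of {n} equally spaced points (quadratic model): "
      f"{d_efficiency(equispaced, reference, degree):.3f}")
```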

9.
Before carrying out a full-scale bioequivalence trial, it is desirable to conduct a pilot trial to decide whether a generic drug product shows promise of bioequivalence. The purpose of a pilot trial is to screen test formulations, and hence small sample sizes can be used. Based on the outcome of the pilot trial, one can decide whether or not a full-scale pivotal trial should be carried out to assess bioequivalence. This article deals with the design of a pivotal trial based on the evidence from the pilot trial. A two-stage adaptive procedure is developed to determine the sample size and the decision rule for the pivotal trial, for testing average bioequivalence using the two one-sided tests (TOST) procedure. Numerical implementation of the procedure is discussed in detail, and the required tables are provided. Numerical results indicate that the required sample sizes can be smaller than that recommended by the FDA for a single trial, especially when the pilot study provides strong evidence in favor of bioequivalence.

10.
In this paper, we review the adaptive design methodology of Li et al. (Biostatistics 3:277–287) for two-stage trials with mid-trial sample size adjustment. We argue that it is closer in principle to a group sequential design, in spite of its obvious adaptive element. Several extensions are proposed that aim to make it an even more attractive and transparent alternative to a standard (fixed sample size) trial for funding bodies to consider. These enable a cap to be placed on the maximum sample size and allow the trial data to be analysed using standard methods at its conclusion. The regulatory view of trials incorporating unblinded sample size re-estimation is also discussed. © 2014 The Authors. Pharmaceutical Statistics published by John Wiley & Sons, Ltd.

11.
Pharmacokinetic studies are commonly performed using the two-stage approach. The first stage involves estimation of pharmacokinetic parameters, such as the area under the concentration versus time curve (AUC), for each analysis subject separately, and the second stage uses the individual parameter estimates for statistical inference. This two-stage approach is not applicable in sparse sampling situations where only one sample is available per analysis subject, as is typical of non-clinical in vivo studies. In such a serial sampling design, only one sample is taken from each analysis subject. A simulation study was carried out to assess the coverage, power, and type I error of seven methods for constructing two-sided 90% confidence intervals for the ratio of two AUCs assessed in a serial sampling design, which can be used to assess bioequivalence in this parameter.
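In a serial sampling design the AUC and its variance can be estimated directly from the time-point means, as in Bailer's method, and a confidence interval for the ratio of two AUCs then follows from, for example, a delta-method approximation on the log scale. The sketch below shows one such construction on made-up data; it is only one of the several interval methods a comparison like the one above would cover.

```python
import numpy as np
from scipy.stats import norm

def auc_bailer(times, conc):
    """Bailer-type estimate for serial sampling: AUC as a weighted sum of the mean
    concentration at each time point (linear trapezoidal weights), with its variance."""
    times = np.asarray(times, dtype=float)
    w = np.zeros(len(times))
    w[0] = (times[1] - times[0]) / 2
    w[-1] = (times[-1] - times[-2]) / 2
    w[1:-1] = (times[2:] - times[:-2]) / 2
    means = np.array([np.mean(c) for c in conc])
    vars_ = np.array([np.var(c, ddof=1) / len(c) for c in conc])
    return np.sum(w * means), np.sum(w ** 2 * vars_)

def ratio_ci_log_delta(auc_t, var_t, auc_r, var_r, level=0.90):
    """Two-sided CI for AUC_test / AUC_ref via a delta-method approximation on the log scale."""
    log_ratio = np.log(auc_t) - np.log(auc_r)
    se = np.sqrt(var_t / auc_t ** 2 + var_r / auc_r ** 2)
    z = norm.ppf(0.5 + level / 2)
    return np.exp(log_ratio - z * se), np.exp(log_ratio + z * se)

# toy serial-sampling data: at each time point, samples come from different subjects
times = [0.5, 1, 2, 4, 8]
test = [[1.1, 0.9, 1.2], [2.0, 1.8, 2.2], [1.6, 1.5, 1.9], [0.9, 0.8, 1.0], [0.3, 0.4, 0.2]]
ref = [[1.0, 1.1, 1.0], [2.1, 1.9, 2.0], [1.7, 1.8, 1.6], [1.0, 0.9, 1.1], [0.4, 0.3, 0.3]]
auc_t, v_t = auc_bailer(times, test)
auc_r, v_r = auc_bailer(times, ref)
print("90% CI for the AUC ratio:", ratio_ci_log_delta(auc_t, v_t, auc_r, v_r))
```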

12.
The 2 × 2 crossover is commonly used to establish average bioequivalence of two treatments. In practice, the sample size for this design is often calculated under the supposition that the true average bioavailabilities of the two treatments are almost identical. However, the average bioequivalence analysis that is subsequently carried out does not reflect this prior belief, which leads to a loss in efficiency. We propose an alternative average bioequivalence analysis that avoids this inefficiency. The validity and substantial power advantages of our proposed method are illustrated by simulations, and two numerical examples with real data are provided.

13.
This article is devoted to the construction and asymptotic study of adaptive, group-sequential, covariate-adjusted randomized clinical trials analysed through the prism of the semiparametric methodology of targeted maximum likelihood estimation. We show how to build, as the data accrue group-sequentially, a sampling design that targets a user-supplied optimal covariate-adjusted design. We also show how to carry out sound statistical inference based on such an adaptive sampling scheme (thereby extending results so far known only in the independent and identically distributed setting), and how group-sequential testing applies on top of it. The procedure is robust (i.e. consistent even if the working model is mis-specified). A simulation study confirms the theoretical results and supports the conjecture that the procedure may also be efficient.

14.
A biosimilar is a biological product that is highly similar to, and has no clinically meaningful differences from, a licensed reference product in terms of safety, purity, and potency. Biosimilar study design is essential to demonstrate equivalence between the biosimilar and the reference product. However, existing designs and assessment methods are primarily based on binary and continuous endpoints. We propose a Bayesian adaptive design for biosimilarity trials with a time-to-event endpoint. The features of the proposed design are twofold. First, we employ the calibrated power prior to precisely borrow relevant information from historical data on the reference drug. Second, we propose a two-stage procedure using the Bayesian biosimilarity index (BBI) to allow early stopping and improve efficiency. Extensive simulations are conducted to demonstrate the operating characteristics of the proposed method in contrast with a naive method. Sensitivity analyses and extensions with respect to the assumptions are presented.

15.
We propose an efficient group sequential monitoring rule for clinical trials. At each interim analysis, both efficacy and futility are evaluated through a specified loss structure together with the predicted power. The proposed design is robust to a wide range of priors and achieves the specified power with a saving in sample size compared to existing adaptive designs. A method is also proposed to obtain a reduced-bias estimator of the treatment difference under the proposed design. The new approaches hold great potential for efficiently selecting the more effective treatment in comparative trials. Operating characteristics are evaluated and compared with other group sequential designs in empirical studies. An example is provided to illustrate the application of the method.

16.
A standard two-arm randomised controlled trial usually compares an intervention to a control treatment, with equal numbers of patients randomised to each treatment arm, and only data from within the current trial are used to assess the treatment effect. Historical data are used when designing new trials and have recently been considered for use in the analysis when the required number of patients under a standard trial design cannot be achieved. Incorporating historical control data could lead to more efficient trials, reducing the number of controls required in the current study when the historical and current control data agree. However, when the data are inconsistent, there is potential for biased treatment effect estimates, inflated type I error, and reduced power. We introduce two novel approaches for binary data which discount historical data based on their agreement with the current trial controls: an equivalence approach and an approach based on tail-area probabilities. An adaptive design is used in which the allocation ratio is adapted at the interim analysis, randomising fewer patients to control when there is agreement. The historical data are down-weighted in the analysis using the power prior approach with a fixed power. We compare the operating characteristics of the proposed design to historical data methods in the literature: the modified power prior, the commensurate prior, and the robust mixture prior. The equivalence probability weight approach is intuitive, and its operating characteristics can be calculated exactly. Furthermore, the equivalence bounds can be chosen to control the maximum possible inflation in type I error.
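The fixed-power down-weighting step is easy to sketch for binary data: raising the historical likelihood to a power a0 in [0, 1] under a Beta prior keeps Beta-Binomial conjugacy, so the historical counts simply enter the posterior multiplied by a0. The example below is a minimal illustration of that mechanism only; the weight-selection rules proposed in the paper (equivalence bounds, tail-area agreement) are not implemented here.

```python
from scipy.stats import beta

def power_prior_posterior(y_cur, n_cur, y_hist, n_hist, a0, a=1.0, b=1.0):
    """Posterior Beta parameters for the control response rate when historical data are
    raised to the power a0 (a0=0 discards them, a0=1 pools them fully); Beta(a, b) initial prior."""
    post_a = a + a0 * y_hist + y_cur
    post_b = b + a0 * (n_hist - y_hist) + (n_cur - y_cur)
    return post_a, post_b

# historical controls: 40/100 responders; current controls: 12/40 responders
for a0 in (0.0, 0.5, 1.0):
    pa, pb = power_prior_posterior(12, 40, 40, 100, a0)
    print(f"a0={a0:.1f}: posterior mean={pa / (pa + pb):.3f}, "
          f"95% CrI=({beta.ppf(0.025, pa, pb):.3f}, {beta.ppf(0.975, pa, pb):.3f})")
```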

17.
Statistical bioequivalence has recently attracted much attention. This is perhaps due to the importance of setting a reasonable criterion on the part of a regulatory agency, such as the FDA in the US, in regulating the manufacturing of drugs (especially generic drugs). Pharmaceutical companies are obviously interested in the criterion since large profits are involved. Various criteria and various types of bioequivalence have been proposed. At present, the FDA recommends testing for average bioequivalence; it is, however, considering replacing average bioequivalence with individual bioequivalence. We focus on the criterion of individual bioequivalence proposed earlier by Anderson and Hauck (J. Pharmacokinetics and Biopharmaceutics 18 (1990) 259) and Wellek (Medizinische Informatik und Statistik, vol. 71, Springer, Berlin, 1989, pp. 95–99; Biometrical J. 35 (1993) 47). For their criterion, they proposed TIER (test of individual equivalence ratios). Other tests were also proposed by Phillips (J. Biopharmaceutical Statist. 3 (1993) 185) and Liu and Chow (J. Biopharmaceutical Statist. 7 (1997) 49). In this paper, we propose an alternative test, called the nearly unbiased test, which is shown numerically to have substantially larger power than the existing tests. We also show that our test works for various models, including 2×3 and 2×4 crossover designs.

18.
The problem of modeling the relationship between a set of covariates and a multivariate response with correlated components often arises in areas of research such as genetics, psychometrics, and signal processing. In the linear regression framework, this task can be addressed using a number of existing methods. In the high-dimensional sparse setting, most of these methods rely on penalization in order to efficiently estimate the regression matrix. Examples of such methods include the lasso, the group lasso, the adaptive group lasso, and the simultaneous variable selection (SVS) method. Crucially, a suitably chosen penalty also allows efficient exploitation of the correlation structure within the multivariate response. In this paper we introduce a novel variant of the SVS method, called the adaptive SVS, which is closely linked with the adaptive group lasso. Via a simulation study we investigate its performance in the high-dimensional sparse regression setting. We provide a comparison with a number of other popular methods under different scenarios and show that the adaptive SVS is a powerful tool for efficient recovery of the signal in this setting. The methods are applied to genetic data.
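For intuition about the simultaneous-selection idea, a row-wise l2/l1 penalty on the coefficient matrix selects the same predictors across all responses, which is how scikit-learn's MultiTaskLasso behaves. The sketch below uses that estimator as a stand-in on simulated data; it illustrates the penalty structure only and does not include the adaptive weighting proposed in the paper.

```python
import numpy as np
from sklearn.linear_model import MultiTaskLasso

rng = np.random.default_rng(0)
n, p, k = 200, 50, 3                 # samples, predictors, correlated responses
X = rng.normal(size=(n, p))
B = np.zeros((p, k))
B[:4] = rng.normal(size=(4, k))      # only the first 4 predictors are active, in every response
Y = X @ B + 0.5 * rng.normal(size=(n, k))

model = MultiTaskLasso(alpha=0.1).fit(X, Y)
# the l2/l1 penalty zeroes out entire rows of the coefficient matrix,
# i.e. a predictor is either selected for all responses or for none
selected = np.where(np.linalg.norm(model.coef_.T, axis=1) > 1e-8)[0]
print("selected predictors:", selected)
```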

19.
In a response-adaptive design, the trial is reviewed and updated on the basis of accumulating outcomes in order to achieve a specific goal. Response-adaptive designs for clinical trials are usually constructed to achieve a single objective. In this paper, we develop a new adaptive allocation rule that improves current strategies for building response-adaptive, multiple-objective repeated measurement designs. This new rule is designed to increase estimation precision and treatment benefit by assigning more patients to a better treatment sequence. We demonstrate that designs constructed under the new allocation rule can be nearly as efficient as fixed optimal designs in terms of mean squared error, while leading to improved patient care.

20.

Just as Bayes extensions of the frequentist optimal allocation design have been developed for the two-group case, we provide a Bayes extension of optimal allocation in the three-group case. We use the optimal allocations derived by Jeon and Hu [Optimal adaptive designs for binary response trials with three treatments. Statist Biopharm Res. 2010;2(3):310–318] and estimate the success probability of each treatment arm using a Bayes estimator. We also introduce a natural lead-in design that allows adaptation to begin as early in the trial as possible. Simulation studies show that the Bayesian adaptive designs simultaneously increase the power and the expected number of successfully treated patients compared to the balanced design. Compared to the standard adaptive design, the natural lead-in design introduced in this study produces a higher expected number of successes whilst preserving power.

