Similar Articles
20 similar articles found (search time: 78 ms)
1.
Traditional bioavailability studies assess average bioequivalence (ABE) between the test (T) and reference (R) products under the crossover design with TR and RT sequences. With highly variable (HV) drugs, whose intrasubject coefficient of variation in pharmacokinetic measures is 30% or greater, asserting ABE becomes difficult because of the large sample sizes needed to achieve adequate power. In 2011, the FDA adopted a more relaxed, yet complex, ABE criterion and supplied a procedure to assess this criterion exclusively under TRR-RTR-RRT and TRTR-RTRT designs. However, designs with more than 2 periods are not always feasible. The present work investigates how to evaluate HV drugs under TR-RT designs. A mixed model with heterogeneous residual variances is used to fit data from TR-RT designs. Under the assumption of zero subject-by-formulation interaction, this basic model is comparable to the FDA-recommended model for TRR-RTR-RRT and TRTR-RTRT designs, suggesting the conceptual plausibility of our approach. To overcome the distributional dependency among summary statistics of model parameters, we develop statistical tests via the generalized pivotal quantity (GPQ). A real-world data example is given to illustrate the utility of the resulting procedures. Our simulation study identifies a GPQ-based testing procedure that evaluates HV drugs under practical TR-RT designs with a desirable type I error rate and reasonable power. In comparison to the FDA's approach, this GPQ-based procedure gives similar performance when the product's intersubject standard deviation is low (≤0.4) and is most useful when practical considerations restrict the crossover design to 2 periods.
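The GPQ idea can be sketched in a much simpler setting than the paper's TR-RT mixed model: a Monte Carlo GPQ interval for the difference of two normal means with unequal variances, applied to hypothetical log-AUC data (all data and the parallel-group simplification are assumptions for illustration, not the paper's actual procedure).

```python
import math, random

random.seed(1)

def gpq_mean_diff(x, y, draws=20000, level=0.90):
    """Monte Carlo GPQ interval for mu_x - mu_y with unequal variances.

    Each mean's GPQ is  xbar - Z * (s/sqrt(n)) / sqrt(U/(n-1)),
    with Z standard normal and U chi-square(n-1), drawn independently."""
    def summary(v):
        n = len(v)
        m = sum(v) / n
        s2 = sum((t - m) ** 2 for t in v) / (n - 1)
        return n, m, s2

    def chi2(df):                      # chi-square draw via Gamma(df/2, scale=2)
        return random.gammavariate(df / 2, 2)

    nx, mx, s2x = summary(x)
    ny, my, s2y = summary(y)
    sims = []
    for _ in range(draws):
        gx = mx - random.gauss(0, 1) * math.sqrt(s2x / nx) / math.sqrt(chi2(nx - 1) / (nx - 1))
        gy = my - random.gauss(0, 1) * math.sqrt(s2y / ny) / math.sqrt(chi2(ny - 1) / (ny - 1))
        sims.append(gx - gy)
    sims.sort()
    return (sims[int(draws * (1 - level) / 2)],
            sims[int(draws * (1 + level) / 2)])

# hypothetical log-AUC values for test (T) and reference (R)
logT = [4.51, 4.60, 4.43, 4.55, 4.48, 4.58, 4.52, 4.49]
logR = [4.47, 4.56, 4.50, 4.53, 4.44, 4.59, 4.46, 4.54]
lo, hi = gpq_mean_diff(logT, logR)
# BE-style decision: GPQ interval inside the log(0.8)..log(1.25) limits
print(lo, hi, -math.log(1.25) < lo and hi < math.log(1.25))
```

The empirical quantiles of the simulated pivotal quantity play the role of a confidence interval even though the exact sampling distribution of the summary statistics is intractable, which is the paper's motivation for using GPQs.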

2.
The purpose of this study was to evaluate the effect of residual variability and carryover on average bioequivalence (ABE) studies performed under a 2×2 crossover design. ABE is usually assessed by means of the confidence interval inclusion principle. Here, the interval under consideration was the standard 'shortest' interval, which is the mainstream approach in practice. The evaluation was performed by means of a simulation study under different combinations of carryover and residual variability, in addition to formulation effect and sample size. The evaluation was made in terms of percentage of ABE declaration, coverage and interval precision. As is well known, high levels of variability distort the ABE procedures, particularly their type II error control (i.e. high variabilities make it difficult to declare bioequivalence when it holds). The effect of carryover is modulated by variability and is especially disturbing for the type I error control. In the presence of carryover, the risk of erroneously declaring bioequivalence may become high, especially for low variabilities and large sample sizes. We conclude with some hints concerning the controversy about pretesting for carryover before performing ABE analysis.
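The "percentage of ABE declaration" simulation described here can be sketched minimally (a sketch under simplifying assumptions: no period or carryover effects, a fixed n = 24 with its t critical value 1.714 hard-coded, and hypothetical CV values; the paper's simulation is far richer):

```python
import math, random
from statistics import mean, stdev

random.seed(7)

def abe_declared(deltas, t_crit=1.714):
    """90% CI inclusion test for the mean log(T/R) difference.

    deltas: per-subject within-subject log differences from a 2x2
    crossover (period/carryover effects assumed absent for simplicity).
    t_crit defaults to t_{0.95, 23}, matching n = 24 subjects."""
    n = len(deltas)
    se = stdev(deltas) / math.sqrt(n)
    m = mean(deltas)
    return -math.log(1.25) < m - t_crit * se and m + t_crit * se < math.log(1.25)

def declaration_rate(cv, n=24, delta=0.0, sims=2000):
    """Fraction of simulated trials declaring ABE when it truly holds."""
    sw = math.sqrt(math.log(1 + cv ** 2))    # within-subject SD, log scale
    sd_diff = math.sqrt(2) * sw              # SD of a within-subject difference
    hits = 0
    for _ in range(sims):
        d = [random.gauss(delta, sd_diff) for _ in range(n)]
        hits += abe_declared(d)
    return hits / sims

for cv in (0.15, 0.30, 0.50):
    print(cv, declaration_rate(cv))
```

Running this shows the abstract's point about type II error: the declaration rate collapses as the residual CV grows, even though bioequivalence truly holds in every simulated trial.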

3.
An Erratum has been published for this article in Pharmaceutical Statistics 2004; 3(3): 232 Since the early 1990s, average bioequivalence (ABE) has served as the international standard for demonstrating that two formulations of drug product will provide the same therapeutic benefit and safety profile. Population (PBE) and individual (IBE) bioequivalence have been the subject of intense international debate since methods for their assessment were proposed in the late 1980s. Guidance has been proposed by the Food and Drug Administration (FDA) for the implementation of these techniques in the pioneer and generic pharmaceutical industries. Hitherto no consensus among regulators, academia and industry has been established on the use of the IBE and PBE metrics. The need for more stringent bioequivalence criteria has not been demonstrated, and it is known that the PBE and IBE criteria proposed by the FDA are actually less stringent under certain conditions. The statistical properties of method of moments and restricted maximum likelihood modelling in replicate designs will be summarized, and the application of these techniques in the assessment of ABE, IBE and PBE will be considered based on a database of 51 replicate design studies and using simulation. Copyright © 2004 John Wiley & Sons, Ltd.

4.
The planning of bioequivalence (BE) studies, as for any clinical trial, requires a priori specification of an effect size for the determination of power and an assumption about the variance. The specified effect size may be overly optimistic, leading to an underpowered study. The assumed variance can be either too small or too large, leading, respectively, to studies that are underpowered or overly large. There has been much work in the clinical trials field on various types of sequential designs that include sample size reestimation after the trial is started, but these have seen little use in BE studies. The purpose of this work was to validate at least one such method for crossover design BE studies. Specifically, we considered sample size reestimation for a two-stage trial based on the variance estimated from the first stage. We identified two methods based on Pocock's method for group sequential trials that met our requirement of at most a negligible increase in the type I error rate.
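The core mechanics of such a two-stage design can be sketched with a normal-approximation TOST sample size formula and a Pocock-style adjusted alpha of 0.0294 at each stage (the adjusted alpha, GMR of 0.95, and the stage-1 data below are illustrative assumptions; the paper's validated methods include additional decision rules, such as interim power checks, that are omitted here):

```python
import math
from statistics import NormalDist

def tost_n(sw, alpha=0.0294, power=0.80, gmr=0.95, limit=1.25):
    """Approximate total sample size for TOST in a 2x2 crossover
    (normal approximation; sw = within-subject SD on the log scale)."""
    z_a = NormalDist().inv_cdf(1 - alpha)
    z_b = NormalDist().inv_cdf(power)
    margin = math.log(limit) - abs(math.log(gmr))
    return math.ceil(2 * sw ** 2 * (z_a + z_b) ** 2 / margin ** 2)

def two_stage_n(stage1_log_diffs, n1, alpha=0.0294, power=0.80):
    """Stage-2 size from the stage-1 variance estimate, testing at the
    Pocock-adjusted alpha at both stages."""
    n = len(stage1_log_diffs)
    m = sum(stage1_log_diffs) / n
    s2_diff = sum((d - m) ** 2 for d in stage1_log_diffs) / (n - 1)
    sw = math.sqrt(s2_diff / 2)          # var of a difference is 2 * sw^2
    return max(tost_n(sw, alpha=alpha, power=power) - n1, 0)

# hypothetical stage-1 per-subject log(T/R) differences, n1 = 12
d1 = [0.42, -0.30, 0.60, -0.10, 0.24, -0.56,
      0.36, 0.04, -0.44, 0.20, -0.16, 0.50]
print("additional stage-2 subjects:", two_stage_n(d1, n1=12))
```

When the stage-1 variance estimate turns out small, `two_stage_n` returns 0 and the trial can stop at stage 1, which is exactly the resource-saving behavior these designs are meant to provide.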

5.
Bioequivalence (BE) is required for approving a generic drug. The two one-sided tests procedure (TOST, or the 90% confidence interval approach) has been used as the mainstream methodology to test average BE (ABE) on pharmacokinetic parameters such as the area under the blood concentration-time curve and the peak concentration. However, for highly variable drugs (%CV > 30%), it is difficult to demonstrate ABE in a standard cross-over study with the typical number of subjects using the TOST because of lack of power. Recently, the US Food and Drug Administration and the European Medicines Agency recommended similar but not identical reference-scaled average BE (RSABE) approaches to address this issue. Although the power is improved, the new approaches may not guarantee a high level of confidence for the true difference between two drugs at the ABE boundaries. It is also difficult for these approaches to address the issues of population BE (PBE) and individual BE (IBE). We advocate the use of a likelihood approach for representing and interpreting BE data as evidence. Using example data from a full replicate 2 × 4 cross-over study, we demonstrate how to present evidence using the profile likelihoods for the mean difference and standard deviation ratios of the two drugs for the pharmacokinetic parameters. With this approach, we present evidence for PBE and IBE as well as ABE within a unified framework. Our simulations show that the operating characteristics of the proposed likelihood approach are comparable with the RSABE approaches when the same criteria are applied. Copyright © 2014 John Wiley & Sons, Ltd.
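The TOST procedure that this abstract takes as its baseline can be shown in a few lines: two one-sided t-tests on the within-subject log differences, each at level 0.05, which is equivalent to checking that the 90% confidence interval lies inside the 0.80-1.25 limits (the data below and the fixed critical value t_{0.95,17} ≈ 1.740 are illustrative assumptions):

```python
import math
from statistics import mean, stdev

def tost(log_diffs, t_crit, limit=math.log(1.25)):
    """Two one-sided tests on per-subject log(T/R) differences.

    H01: mu <= -limit   vs   H11: mu > -limit
    H02: mu >= +limit   vs   H12: mu < +limit
    ABE is concluded only if both nulls are rejected at level 0.05,
    equivalent to the 90% CI lying inside (-limit, +limit)."""
    n = len(log_diffs)
    se = stdev(log_diffs) / math.sqrt(n)
    t_lower = (mean(log_diffs) + limit) / se   # test against the lower bound
    t_upper = (mean(log_diffs) - limit) / se   # test against the upper bound
    return t_lower > t_crit and t_upper < -t_crit

# hypothetical within-subject log(Cmax) differences, n = 18
d = [0.02, -0.05, 0.11, 0.04, -0.08, 0.06, 0.01, -0.02, 0.09,
     -0.04, 0.03, 0.07, -0.06, 0.05, 0.00, 0.08, -0.01, 0.02]
print(tost(d, t_crit=1.740))   # t_{0.95, 17} ~ 1.740
```

With low variability, as here, TOST passes comfortably; the abstract's point is that for %CV > 30% the same procedure loses power, motivating the RSABE and likelihood alternatives.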

6.
Since the early 1990s, average bioequivalence (ABE) studies have served as the international regulatory standard for demonstrating that two formulations of drug product will provide the same therapeutic benefit and safety profile when used in the marketplace. Population (PBE) and individual (IBE) bioequivalence have been the subject of intense international debate since methods for their assessment were proposed in the late 1980s and since their use was proposed in United States Food and Drug Administration guidance in 1997. Guidance has since been proposed and finalized by the Food and Drug Administration for the implementation of such techniques in the pioneer and generic pharmaceutical industries. The current guidance calls for the use of replicate-design cross-over studies (cross-overs with sequences TRTR and RTRT, where T is the test and R is the reference formulation) for selected drug products, and proposes restricted maximum likelihood and method-of-moments techniques for parameter estimation. In general, marketplace access will be granted if the products demonstrate ABE based on a restricted maximum likelihood model. Study sponsors have the option of using PBE or IBE if the use of these criteria can be justified to the regulatory authority. Novel and previously proposed SAS®-based approaches to the modelling of pharmacokinetic data from replicate design studies will be summarized. Restricted maximum likelihood and method-of-moments modelling results are compared and contrasted based on the analysis of data available from previously performed replicate design studies, and practical issues involved in the application of replicate designs to demonstrate ABE are characterized. It is concluded that replicate designs may be used effectively to demonstrate ABE for highly variable drug products. Statisticians should exercise caution in the choice of modelling procedure. Copyright © 2002 John Wiley & Sons, Ltd.

7.
8.
Statistical bioequivalence has recently attracted considerable attention. This is perhaps due to the importance, for a regulatory agency such as the US FDA, of setting a reasonable criterion for regulating the manufacture of drugs (especially generic drugs). Pharmaceutical companies are obviously interested in the criterion since substantial profits are involved. Various criteria and various types of bioequivalence have been proposed. At present, the FDA recommends testing for average bioequivalence. The FDA, however, is considering replacing average bioequivalence by individual bioequivalence. We focus on the criterion of individual bioequivalence proposed earlier by Anderson and Hauck (J. Pharmacokinetics and Biopharmaceutics 18 (1990) 259) and Wellek (Medizinische Informatik und Statistik, vol. 71, Springer, Berlin, 1989, pp. 95–99; Biometrical J. 35 (1993) 47). For their criterion, they proposed TIER (test of individual equivalence ratios). Other tests were also proposed by Phillips (J. Biopharmaceutical Statist. 3 (1993) 185), and Liu and Chow (J. Biopharmaceutical Statist. 7 (1997) 49). In this paper, we propose an alternative test, called the nearly unbiased test, which is shown numerically to have power substantially larger than existing tests. We also show that our test works for various models including 2×3 and 2×4 crossover designs.
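The TIER idea mentioned here, testing the proportion of subjects whose individual T/R ratios fall within the equivalence range, can be sketched as an exact binomial test (this is a simplified illustration of the concept only; the published Anderson-Hauck TIER procedure differs in its details, and the threshold p0 = 0.75 below is an arbitrary assumption):

```python
import math

def tier_test(log_ratios, p0=0.75, limit=math.log(1.25), alpha=0.05):
    """Binomial sketch of a test of individual equivalence ratios.

    Counts subjects whose individual log(T/R) lies inside the
    equivalence range and tests H0: true in-range proportion <= p0
    against H1: proportion > p0 with an exact binomial tail."""
    n = len(log_ratios)
    k = sum(-limit < r < limit for r in log_ratios)
    # one-sided p-value: P(X >= k) under Binomial(n, p0)
    pval = sum(math.comb(n, j) * p0 ** j * (1 - p0) ** (n - j)
               for j in range(k, n + 1))
    return pval < alpha

# hypothetical individual log(T/R) ratios from a replicate design
print(tier_test([0.05, -0.10, 0.12, 0.02, -0.06, 0.15, -0.03, 0.08,
                 0.01, -0.09, 0.30, 0.04, -0.02, 0.07, 0.11, -0.05,
                 0.06, -0.01, 0.03, 0.09]))
```

Framing individual bioequivalence subject-by-subject like this is what distinguishes IBE-type criteria from the purely mean-based ABE comparison.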

9.
In 2008, this group published a paper on approaches for two-stage crossover bioequivalence (BE) studies that allowed for the reestimation of the second-stage sample size based on the variance estimated from the first-stage results. The sequential methods considered used an assumed GMR of 0.95 as part of the method for determining power and sample size. This note adds results for an assumed GMR = 0.90. Two of the methods recommended for GMR = 0.95 in the earlier paper have some unacceptable increases in Type I error rate when the GMR is changed to 0.90. If a sponsor wants to assume 0.90 for the GMR, Method D is recommended. Copyright © 2011 John Wiley & Sons, Ltd.

10.
A modified large-sample (MLS) approach and a generalized confidence interval (GCI) approach are proposed for constructing confidence intervals for intraclass correlation coefficients. Two particular intraclass correlation coefficients are considered in a reliability study. Both subjects and raters are assumed to be random effects in a balanced two-factor design, which includes subject-by-rater interaction. Computer simulation is used to compare the coverage probabilities of the proposed MLS approach (GiTTCH) and GCI approaches with the Leiva and Graybill [1986. Confidence intervals for variance components in the balanced two-way model with interaction. Comm. Statist. Simulation Comput. 15, 301–322] method. The competing approaches are illustrated with data from a gauge repeatability and reproducibility study. The GiTTCH method maintains at least the stated confidence level for interrater reliability. For intrarater reliability, the coverage is accurate in several circumstances but can be liberal in some circumstances. The GCI approach provides reasonable coverage for lower confidence bounds on interrater reliability, but its corresponding upper bounds are too liberal. Regarding intrarater reliability, the GCI approach is not recommended because the lower bound coverage is liberal. Comparing the overall performance of the three methods across a wide array of scenarios, the proposed modified large-sample approach (GiTTCH) provides the most accurate coverage for both interrater and intrarater reliability.

11.
Before carrying out a full scale bioequivalence trial, it is desirable to conduct a pilot trial to decide if a generic drug product shows promise of bioequivalence. The purpose of a pilot trial is to screen test formulations, and hence small sample sizes can be used. Based on the outcome of the pilot trial, one can decide whether or not a full scale pivotal trial should be carried out to assess bioequivalence. This article deals with the design of a pivotal trial, based on the evidence from the pilot trial. A two-stage adaptive procedure is developed in order to determine the sample size and the decision rule for the pivotal trial, for testing average bioequivalence using the two one-sided test (TOST). Numerical implementation of the procedure is discussed in detail, and the required tables are provided. Numerical results indicate that the required sample sizes could be smaller than those recommended by the FDA for a single trial, especially when the pilot study provides strong evidence in favor of bioequivalence.

12.
In mixed linear models, it is frequently of interest to test hypotheses on the variance components. The F-test and likelihood ratio test (LRT) are commonly used for such purposes. Current LRTs available in the literature are based on limiting distribution theory. With the development of finite sample distribution theory, it has become possible to derive the exact test for the likelihood ratio statistic. In this paper, we consider the problem of testing null hypotheses on the variance component in a one-way balanced random effects model. We use the exact test for the likelihood ratio statistic and compare the performance of the F-test and LRT. Simulations provide strong support for the equivalence between these two tests. Furthermore, we prove the equivalence between these two tests mathematically.
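The F-test in this setting is simply MSB/MSW in the balanced one-way layout, and since its null distribution under H0: sigma_a^2 = 0 is free of the unknown mean and variance, an exact Monte Carlo p-value can be computed without F tables (the data and simulation budget below are illustrative assumptions; the paper works with the analytic finite-sample theory):

```python
import random
from statistics import mean

random.seed(3)

def f_stat(groups):
    """F = MSB / MSW for a balanced one-way layout."""
    k = len(groups)
    n = len(groups[0])
    grand = mean(x for g in groups for x in g)
    msb = n * sum((mean(g) - grand) ** 2 for g in groups) / (k - 1)
    msw = sum((x - mean(g)) ** 2 for g in groups for x in g) / (k * (n - 1))
    return msb / msw

def null_pvalue(groups, sims=2000):
    """Monte Carlo p-value for H0: sigma_a^2 = 0.

    Under H0 the observations are i.i.d. normal and the F statistic's
    null distribution does not depend on the unknown mean or variance,
    so standard normal data suffice for the simulation."""
    k, n = len(groups), len(groups[0])
    obs = f_stat(groups)
    exceed = 0
    for _ in range(sims):
        sim = [[random.gauss(0, 1) for _ in range(n)] for _ in range(k)]
        exceed += f_stat(sim) >= obs
    return exceed / sims

# hypothetical balanced one-way data, k = 3 groups of n = 3
groups = [[5.1, 4.9, 5.0], [7.2, 7.0, 7.1], [6.0, 6.1, 5.9]]
print(f_stat(groups), null_pvalue(groups))
```

Because F = MSB/MSW is a monotone function of the likelihood ratio in this balanced model, the same simulated reference distribution also calibrates the exact LRT, which is the equivalence the abstract establishes.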

13.
In vitro permeation tests (IVPT) offer accurate and cost-effective development pathways for locally acting drugs, such as topical dermatological products. For assessment of bioequivalence, the FDA draft guidance on generic acyclovir 5% cream introduces a new experimental design, namely the single-dose, multiple-replicate per treatment group design, as the IVPT pivotal study design. We examine the statistical properties of its hypothesis testing method, namely the mixed scaled average bioequivalence (MSABE). Meanwhile, some adaptive design features in clinical trials can help researchers make a decision earlier with fewer subjects or boost power, saving resources, while controlling the impact on the family-wise error rate. Therefore, we incorporate MSABE in an adaptive design combining the group sequential design and sample size re-estimation. Simulation studies are conducted to study the passing rates of the proposed methods, both within and outside the average bioequivalence limits. We further consider modifications to the adaptive designs applied for IVPT BE trials, such as Bonferroni's adjustment and conditional power function. Finally, a case study with real data demonstrates the advantages of such adaptive methods.

14.
A bioequivalence test compares bioavailability parameters, such as the maximum observed concentration (Cmax) or the area under the concentration-time curve, for a test drug and a reference drug. Planning a bioequivalence test requires an assumption about the variance of Cmax or the area under the concentration-time curve for sample size estimation. Since the variance is unknown, current 2-stage designs use the variance estimated from stage 1 data to determine the sample size for stage 2. However, the estimation of variance with the stage 1 data is unstable and may result in a stage 2 sample size that is too large or too small. This problem is magnified in bioequivalence tests with a serial sampling schedule, by which only one sample is collected from each individual, so that a correct assumption about the variance becomes even more difficult. To solve this problem, we propose 3-stage designs. Our designs increase sample sizes gradually over the stages, so that extremely large sample sizes do not occur. With one more stage of data, the power is increased. Moreover, the variance estimated using data from both stages 1 and 2 is more stable than that using data from stage 1 only in a 2-stage design. These features of the proposed designs are demonstrated by simulations. Significance levels are adjusted to control the overall type I error at the same level for all the multistage designs.

15.
Spatio-temporal processes are often high-dimensional, exhibiting complicated variability across space and time. Traditional state-space model approaches to such processes in the presence of uncertain data have been shown to be useful. However, estimation of state-space models in this context is often problematic since parameter vectors and matrices are of high dimension and can have complicated dependence structures. We propose a spatio-temporal dynamic model formulation with parameter matrices restricted based on prior scientific knowledge and/or common spatial models. Estimation is carried out via the expectation–maximization (EM) algorithm or general EM algorithm. Several parameterization strategies are proposed and analytical or computational closed form EM update equations are derived for each. We apply the methodology to a model based on an advection–diffusion partial differential equation in a simulation study and also to a dimension-reduced model for a Palmer Drought Severity Index (PDSI) data set.

16.
In drug development, bioequivalence studies are used to indirectly demonstrate clinical equivalence of a test formulation and a reference formulation of a specific drug by establishing their equivalence in bioavailability. These studies are typically run as crossover studies. In the planning phase of such trials, investigators and sponsors are often faced with a high variability in the coefficients of variation of the typical pharmacokinetic endpoints such as the area under the concentration curve or the maximum plasma concentration. Adaptive designs have recently been considered to deal with this uncertainty by adjusting the sample size based on the accumulating data. Because regulators generally favor sample size re-estimation procedures that maintain the blinding of the treatment allocations throughout the trial, we propose in this paper a blinded sample size re-estimation strategy and investigate its error rates. We show that the procedure, although blinded, can lead to some inflation of the type I error rate. In the context of an example, we demonstrate how this inflation of the significance level can be adjusted for to achieve control of the type I error rate at a pre-specified level. Furthermore, some refinements of the re-estimation procedure are proposed to improve the power properties, in particular in scenarios with small sample sizes. Copyright © 2014 John Wiley & Sons, Ltd.
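The blinding trick can be sketched as follows: pool the per-subject period differences without unblinding which sequence each subject was in, then subtract the assumed treatment effect from their variance before recomputing the sample size (a minimal sketch under strong assumptions: a balanced TR/RT trial, a normal-approximation TOST formula, an unadjusted alpha of 0.05, and hypothetical data; the abstract's point is precisely that such a procedure needs a significance-level adjustment in practice):

```python
import math
from statistics import NormalDist, pvariance

def blinded_reestimate_n(period_diffs, alpha=0.05, power=0.80,
                         gmr=0.95, limit=1.25):
    """Blinded sample size re-estimation for a 2x2 crossover (sketch).

    period_diffs: pooled per-subject (period 1 - period 2) log
    differences, with treatment labels still blinded. In a balanced
    TR/RT trial their variance is roughly sigma_d^2 + delta^2, so the
    assumed treatment effect delta is subtracted out before use."""
    delta = abs(math.log(gmr))                 # assumed effect under blinding
    v_d = max(pvariance(period_diffs) - delta ** 2, 1e-8)
    z = NormalDist().inv_cdf
    margin = math.log(limit) - delta
    n = v_d * (z(1 - alpha) + z(power)) ** 2 / margin ** 2
    return math.ceil(n)                        # total subjects (normal approx.)

# hypothetical blinded period differences from an internal pilot
blinded = [0.40, -0.30, 0.25, -0.35, 0.50, -0.45, 0.10, -0.15]
print(blinded_reestimate_n(blinded))
```

Because only the pooled variance is touched, the interim look never reveals which formulation any subject received, which is why regulators view this style of re-estimation favorably despite the residual type I error inflation the paper quantifies.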

17.
Psychometric growth curve modeling techniques are used to describe a person's latent ability and how that ability changes over time based on a specific measurement instrument. However, the same instrument cannot always be used over a period of time to measure that latent ability. This is often the case when measuring traits longitudinally in children. Reasons may be that over time some measurement tools that were difficult for young children become too easy as they age, resulting in floor effects, ceiling effects, or both. We propose a Bayesian hierarchical model for such a scenario. Within the Bayesian model we combine information from multiple instruments used at different age ranges and having different scoring schemes to examine growth in latent ability over time. The model includes between-subject variance and within-subject variance and does not require linking item-specific difficulty between the measurement tools. The model's utility is demonstrated on a study of language ability in children from ages one to ten who are hard of hearing, where measurement-tool-specific growth and subject-specific growth are shown in addition to a group-level latent growth curve comparing the hard of hearing children to children with normal hearing.
KEYWORDS: Bayesian hierarchical models, psychometric modeling, language ability, growth curve modeling, longitudinal analysis

18.
Reference-scaled average bioequivalence (RSABE) approaches for highly variable drugs are based on linearly scaling the bioequivalence limits according to the reference formulation within-subject variability. RSABE methods have type I error control problems around the value where the limits change from constant to scaled. In all these methods, the probability of type I error has only one absolute maximum at this switching variability value. This allows adjusting the significance level to obtain statistically correct procedures (that is, those in which the probability of type I error remains below the nominal significance level), at the expense of some potential power loss. In this paper, we explore adjustments to the EMA and FDA regulatory RSABE approaches, and to a possible improvement of the original EMA method, designated as HoweEMA. The resulting adjusted methods are completely correct with respect to type I error probability. The power loss is generally small and tends to become irrelevant for moderately large (affordable in real studies) sample sizes.
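The "limits change from constant to scaled" behavior discussed here can be made concrete with the EMA-style widened limits: constant ln(0.80)/ln(1.25) up to a reference CV of 30%, then ±0.760·s_wR, capped at CV = 50% (a sketch of the limit computation only; it omits the rest of the EMA decision rule and the FDA variant, whose scaled criterion is structured differently):

```python
import math

def ema_scaled_limits(cv_wr):
    """EMA-style reference-scaled (ABEL) acceptance limits for log(T/R).

    cv_wr: within-subject CV of the reference product.
    Constant limits up to CV = 30%; beyond that they widen as
    +/- 0.760 * s_wR, with the scaling capped at CV = 50%."""
    cv = min(max(cv_wr, 0.30), 0.50)       # clamp to the scaling window
    if cv_wr <= 0.30:
        half = math.log(1.25)
    else:
        s_wr = math.sqrt(math.log(1 + cv ** 2))   # CV -> log-scale SD
        half = 0.760 * s_wr
    return -half, half

for cv in (0.20, 0.35, 0.60):
    lo, hi = ema_scaled_limits(cv)
    print(cv, round(math.exp(lo), 4), round(math.exp(hi), 4))
```

The kink at CV = 30%, visible in the printed limits, is exactly the switching point where the paper locates the maximum of the type I error probability and where its significance-level adjustment is targeted.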

19.
In this paper, we study the bioequivalence (BE) inference problem motivated by pharmacokinetic data that were collected using the serial sampling technique. In serial sampling designs, subjects are independently assigned to one of the two drugs; each subject can be sampled only once, and data are collected at K distinct timepoints from multiple subjects. We consider design and hypothesis testing for the parameter of interest: the area under the concentration–time curve (AUC). Decision rules for demonstrating BE were established using an equivalence test for either the ratio or logarithmic difference of two AUCs. The proposed t-test can deal with cases where the two AUCs have unequal variances. To control the type I error rate, the degrees of freedom were adjusted using Satterthwaite's approximation. A power formula was derived to allow the determination of necessary sample sizes. Simulation results show that, when the two AUCs have unequal variances, the type I error rate is better controlled by the proposed method compared with a method that only handles equal variances. We also propose an unequal subject allocation method that improves the power relative to that of the equal and symmetric allocation. The methods are illustrated using practical examples.
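The Satterthwaite adjustment at the core of this approach is the Welch–Satterthwaite approximation for the difference of two means with unequal variances; it can be sketched directly (a sketch of the df computation only, on hypothetical values; the paper applies it to AUC estimators built from serial sampling, not to raw observations):

```python
import math
from statistics import mean, variance

def welch_t(x, y):
    """Welch's t statistic and Satterthwaite-approximated degrees of
    freedom for the difference of two means with unequal variances."""
    nx, ny = len(x), len(y)
    vx, vy = variance(x) / nx, variance(y) / ny   # variances of the two means
    t = (mean(x) - mean(y)) / math.sqrt(vx + vy)
    df = (vx + vy) ** 2 / (vx ** 2 / (nx - 1) + vy ** 2 / (ny - 1))
    return t, df

# sanity check: with equal sample variances, df equals nx + ny - 2 = 8
print(welch_t([1, 2, 3, 4, 5], [2, 3, 4, 5, 6]))
```

When the two variances differ, the approximated df drops below nx + ny - 2, widening the reference t distribution; this is what keeps the type I error rate controlled where an equal-variance test would be anticonservative.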

20.
In this paper we examine the small sample distribution of the likelihood ratio test in the random effects model which is often recommended for meta-analyses. We find that this distribution depends strongly on the true value of the heterogeneity parameter (between-study variance) of the model, and that the correct p-value may be quite different from its large sample approximation. We recommend that the dependence on the heterogeneity parameter be examined for the data at hand and suggest a (simulation) method for this. Our setup allows for explanatory variables on the study level (meta-regression) and we discuss other possible applications, too. Two data sets are analyzed and two simulation studies are performed for illustration.
