Similar Articles
Found 20 similar articles (search time: 15 ms)
1.
An Erratum has been published for this article in Pharmaceutical Statistics 2004; 3(3): 232. Since the early 1990s, average bioequivalence (ABE) has served as the international standard for demonstrating that two formulations of drug product will provide the same therapeutic benefit and safety profile. Population (PBE) and individual (IBE) bioequivalence have been the subject of intense international debate since methods for their assessment were proposed in the late 1980s. Guidance has been proposed by the Food and Drug Administration (FDA) for the implementation of these techniques in the pioneer and generic pharmaceutical industries. Hitherto no consensus among regulators, academia and industry has been established on the use of the IBE and PBE metrics. The need for more stringent bioequivalence criteria has not been demonstrated, and it is known that the PBE and IBE criteria proposed by the FDA are actually less stringent under certain conditions. The statistical properties of method of moments and restricted maximum likelihood modelling in replicate designs will be summarized, and the application of these techniques in the assessment of ABE, IBE and PBE will be considered based on a database of 51 replicate design studies and using simulation. Copyright © 2004 John Wiley & Sons, Ltd.

2.
Bioequivalence (BE) is required for approving a generic drug. The two one‐sided tests procedure (TOST, or the 90% confidence interval approach) has been used as the mainstream methodology to test average BE (ABE) on pharmacokinetic parameters such as the area under the blood concentration‐time curve and the peak concentration. However, for highly variable drugs (%CV > 30%), it is difficult to demonstrate ABE in a standard cross‐over study with the typical number of subjects using the TOST because of lack of power. Recently, the US Food and Drug Administration and the European Medicines Agency recommended similar but not identical reference‐scaled average BE (RSABE) approaches to address this issue. Although the power is improved, the new approaches may not guarantee a high level of confidence for the true difference between two drugs at the ABE boundaries. It is also difficult for these approaches to address the issues of population BE (PBE) and individual BE (IBE). We advocate the use of a likelihood approach for representing and interpreting BE data as evidence. Using example data from a full replicate 2 × 4 cross‐over study, we demonstrate how to present evidence using the profile likelihoods for the mean difference and standard deviation ratios of the two drugs for the pharmacokinetic parameters. With this approach, we present evidence for PBE and IBE as well as ABE within a unified framework. Our simulations show that the operating characteristics of the proposed likelihood approach are comparable with the RSABE approaches when the same criteria are applied. Copyright © 2014 John Wiley & Sons, Ltd.
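As a concrete illustration of the TOST (90% confidence interval) decision rule mentioned above, here is a minimal sketch in Python. It assumes log-normally distributed PK parameters, uses hypothetical within-subject log(T) − log(R) differences, and substitutes a large-sample normal quantile for the t quantile a real analysis would use:

```python
from statistics import NormalDist, mean, stdev
from math import exp, log, sqrt

def tost_abe(log_diffs, theta=log(1.25), alpha=0.05):
    """Two one-sided tests for ABE on within-subject log(T) - log(R)
    differences. Large-sample sketch: a normal quantile stands in for
    the t quantile used in practice."""
    n = len(log_diffs)
    d = mean(log_diffs)
    se = stdev(log_diffs) / sqrt(n)
    z = NormalDist().inv_cdf(1 - alpha)   # two one-sided 5% tests -> 90% CI
    ci_lo, ci_hi = d - z * se, d + z * se # 90% CI for the mean log-ratio
    # ABE declared iff the whole CI lies inside (-ln 1.25, ln 1.25)
    return exp(ci_lo), exp(ci_hi), (-theta < ci_lo and ci_hi < theta)

# hypothetical paired log-differences from a 2x2 crossover
diffs = [0.05, -0.02, 0.10, 0.01, -0.06, 0.08, 0.03, -0.01, 0.04, 0.02]
lo, hi, be = tost_abe(diffs)
```

A production analysis would instead fit the full crossover model (sequence, period, subject effects) and use the t distribution; this sketch only shows the CI-inclusion logic.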

3.
Viewpoint: observations on scaled average bioequivalence
The two one-sided test procedure (TOST) has been used for average bioequivalence testing since 1992 and is required when marketing new formulations of an approved drug. TOST is known to require comparatively large numbers of subjects to demonstrate bioequivalence for highly variable drugs, defined as those drugs having intra-subject coefficients of variation greater than 30%. However, TOST has been shown to protect public health when multiple generic formulations enter the marketplace following patent expiration. Recently, scaled average bioequivalence (SABE) has been proposed as an alternative statistical analysis procedure for such products by multiple regulatory agencies. SABE testing requires that a three-period partial replicate cross-over or full replicate cross-over design be used. Following a brief summary of SABE analysis methods applied to existing data, we will consider three statistical ramifications of the proposed additional decision rules and the potential impact of implementation of scaled average bioequivalence in the marketplace using simulation. It is found that a constraint being applied is biased, that bias may also result from the common problem of missing data and that the SABE methods allow for much greater changes in exposure when generic-generic switching occurs in the marketplace.
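The scaling idea behind SABE can be sketched as follows. The constant ln(1.25)/0.25 and the 0.25 cutoff follow the FDA-style formulation; the full agency decision rules add a 95% upper confidence bound on the scaled criterion and a point-estimate constraint, both of which this sketch omits:

```python
from math import log, exp

def sabe_limits(s_wr, s_w0=0.25, cap=log(1.25)):
    """Widened (scaled) BE limits for highly variable drugs, as a function
    of the within-subject SD of the reference, s_wr. A sketch of the
    scaling idea only, not any agency's full decision rule."""
    k = cap / s_w0                                  # scaling constant ~0.893
    half_width = k * s_wr if s_wr > s_w0 else cap   # scale only above cutoff
    return exp(-half_width), exp(half_width)

# for a highly variable reference (s_wr = 0.5) the limits widen past 0.80-1.25
lo, hi = sabe_limits(0.5)
```

The widened limits are what allows the "much greater changes in exposure" noted in the abstract when switching between generics.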

4.
The inverse Weibull (IW) distribution is one of the most widely used probability distributions for nonnegative data modelling, specifically for describing degradation phenomena of mechanical components. In this paper, by compounding the IW and power series distributions, we introduce a new lifetime distribution. The compounding procedure follows the same set-up carried out by Adamidis and Loukas [A lifetime distribution with decreasing failure rate. Stat Probab Lett. 1998;39:35–42]. We provide mathematical properties of this new distribution, such as moments, estimation by maximum likelihood with censored data, large-sample inference and the EM algorithm to determine the maximum likelihood estimates of the parameters. Furthermore, we characterize the proposed distributions using a simple relationship between two truncated moments and the maximum entropy principle under suitable constraints. Finally, to show the flexibility of this type of distribution, we demonstrate applications to two real data sets.

5.
Average bioequivalence (ABE) has been the regulatory standard for bioequivalence (BE) since the 1990s. BE studies are commonly two-period crossovers, but may also use replicated designs; a replicated crossover provides greater power for the ABE assessment. The FDA has recommended that ABE analysis of replicated crossovers use a model which includes separate within- and between-subject variance components for each formulation and which allows for a subject × formulation interaction component. Our simulation study compares the performance of four alternative mixed effects models: the FDA model, a three-variance-component model proposed by Ekbohm and Melander (EM), a random intercepts and slopes model (RIS) proposed by Patterson and Jones, and a simple model that contains only two variance components. The simple model fails (when not 'true') to provide adequate coverage and accepts the hypothesis of equivalence too often. The FDA and EM models are frequently indistinguishable and often provide the best performance with respect to coverage and probability of concluding BE. The RIS model concludes equivalence too often when both the within- and between-subject variance components differ between formulations. The FDA analysis model is recommended because it provides the most detail regarding components of variability and has a slight advantage over the EM model in confidence interval length.

6.
Emrah Altun. Statistics, 2019, 53(2): 364–386
In this paper, we introduce a new distribution, called the generalized Gudermannian (GG) distribution, and its skew extension for GARCH models in modelling daily Value-at-Risk (VaR). Basic structural properties of the proposed distribution are obtained, including the probability density and cumulative distribution functions, moments, and a stochastic representation. The maximum likelihood method is used to estimate the unknown parameters of the proposed model, and the finite-sample performance of the maximum likelihood estimates is evaluated by means of a Monte-Carlo simulation study. A real-data application to the Nikkei 225 index is given to demonstrate the performance of the GARCH model specified under the skew extension of the GG innovation distribution against the normal, Student's-t, skew normal, generalized error and skew generalized error distributions in terms of the accuracy of VaR forecasts. The empirical results show that the GARCH model with GG innovation distribution produces the most accurate VaR forecasts for all confidence levels.

7.
Random effects models play a critical role in modelling longitudinal data. However, there have been few studies of kernel-based maximum likelihood methods for semiparametric random effects models. In this paper, based on kernel and likelihood methods, we propose a pooled global maximum likelihood method for partial linear random effects models. The pooled global maximum likelihood method employs local approximations of the nonparametric function at a group of grid points simultaneously, instead of at a single point. Gaussian quadrature is used to approximate the integration of the likelihood with respect to the random effects. The asymptotic properties of the proposed estimators are rigorously studied. Simulation studies are conducted to demonstrate the performance of the proposed approach. We also apply the proposed method to analyse correlated medical costs in the Medical Expenditure Panel Survey data set.

8.
Traditional bioavailability studies assess average bioequivalence (ABE) between the test (T) and reference (R) products under the crossover design with TR and RT sequences. With highly variable (HV) drugs whose intrasubject coefficient of variation in pharmacokinetic measures is 30% or greater, assertion of ABE becomes difficult due to the large sample sizes needed to achieve adequate power. In 2011, the FDA adopted a more relaxed, yet complex, ABE criterion and supplied a procedure to assess this criterion exclusively under TRR‐RTR‐RRT and TRTR‐RTRT designs. However, designs with more than 2 periods are not always feasible. This present work investigates how to evaluate HV drugs under TR‐RT designs. A mixed model with heterogeneous residual variances is used to fit data from TR‐RT designs. Under the assumption of zero subject‐by‐formulation interaction, this basic model is comparable to the FDA‐recommended model for TRR‐RTR‐RRT and TRTR‐RTRT designs, suggesting the conceptual plausibility of our approach. To overcome the distributional dependency among summary statistics of model parameters, we develop statistical tests via the generalized pivotal quantity (GPQ). A real‐world data example is given to illustrate the utility of the resulting procedures. Our simulation study identifies a GPQ‐based testing procedure that evaluates HV drugs under practical TR‐RT designs with desirable type I error rate and reasonable power. In comparison to the FDA's approach, this GPQ‐based procedure gives similar performance when the product's intersubject standard deviation is low (≤0.4) and is most useful when practical considerations restrict the crossover design to 2 periods.

9.
The purpose of this study was to evaluate the effect of residual variability and carryover on average bioequivalence (ABE) studies performed under a 2×2 crossover design. ABE is usually assessed by means of the confidence interval inclusion principle. Here, the interval under consideration was the standard 'shortest' interval, which is the mainstream approach in practice. The evaluation was performed by means of a simulation study under different combinations of carryover and residual variability, in addition to formulation effect and sample size, and was made in terms of percentage of ABE declaration, coverage and interval precision. As is well known, high levels of variability distort the ABE procedures, particularly their type II error control (i.e. high variability makes it difficult to declare bioequivalence when it holds). The effect of carryover is modulated by variability and is especially disturbing for type I error control. In the presence of carryover, the risk of erroneously declaring bioequivalence may become high, especially for low variabilities and large sample sizes. We end with some hints concerning the controversy about pretesting for carryover before performing an ABE analysis.
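A toy Monte-Carlo sketch of this kind of evaluation follows. All parameter values are illustrative, subject effects are assumed to difference away, and a normal quantile stands in for the t quantile; it shows how a systematic shift of the mean log-difference (as unequal carryover would induce) changes the percentage of ABE declarations:

```python
import random
from math import log, sqrt
from statistics import NormalDist, mean, stdev

def abe_declaration_rate(n=24, sd=0.25, bias=0.0, reps=300, seed=11):
    """Fraction of simulated 2x2 crossover studies declaring ABE.
    Each subject contributes one T-R log-difference; 'bias' shifts the
    mean difference, e.g. from unequal carryover. Large-sample normal
    quantile used in place of the t quantile."""
    rng = random.Random(seed)
    z = NormalDist().inv_cdf(0.95)     # 90% two-sided CI
    theta = log(1.25)                  # BE limits on the log scale
    hits = 0
    for _ in range(reps):
        d = [rng.gauss(bias, sd) for _ in range(n)]
        m, se = mean(d), stdev(d) / sqrt(n)
        if -theta < m - z * se and m + z * se < theta:
            hits += 1
    return hits / reps

# the declaration rate collapses once a shift moves the mean difference
rate_clean = abe_declaration_rate(bias=0.0)
rate_shift = abe_declaration_rate(bias=0.30)
```

Note this toy set-up only shows a downward effect on power under a shift; reproducing the abstract's type I error inflation requires simulating the sequence-specific carryover structure of the full 2×2 design.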

10.
Estimating the parameters of multivariate mixed Poisson models is an important problem in image processing applications, especially for active imaging or astronomy. The classical maximum likelihood approach cannot be used for these models since the corresponding masses cannot be expressed in a simple closed form. This paper studies a maximum pairwise likelihood approach to estimate the parameters of multivariate mixed Poisson models when the mixing distribution is a multivariate Gamma distribution. The consistency and asymptotic normality of this estimator are derived. Simulations conducted on synthetic data illustrate these results and show that the proposed estimator outperforms classical estimators based on the method of moments. An application to change detection in low-flux images is also investigated.

11.
In this paper, a new discrete distribution called the Uniform-Geometric distribution is proposed. Several distributional properties, including the survival function, moments, skewness, kurtosis, entropy and hazard rate function, are discussed. Estimation of the distribution parameter is studied by the methods of moments, proportions and maximum likelihood. A simulation study is performed to compare the performance of the different estimates in terms of bias and mean square error. Two real data applications are also presented, showing that the new distribution is useful in modelling data.

12.
Motivated by problems of modelling torsional angles in molecules, Singh, Hnizdo & Demchuk (2002) proposed a bivariate circular model which is a natural torus analogue of the bivariate normal distribution and a natural extension of the univariate von Mises distribution to the bivariate case. The authors present here a multivariate extension of the bivariate model of Singh, Hnizdo & Demchuk (2002). They study the conditional distributions and investigate the shapes of the marginal distributions for a special case. The methods of moments and pseudo‐likelihood are considered for the estimation of the parameters of the new distribution. The authors investigate the efficiency of the pseudo‐likelihood approach in three dimensions. They illustrate their methods with protein data on conformational angles.

13.
Weibull distributions have received wide-ranging applications in many areas, including reliability, hydrology and communication systems. Many estimation methods have been proposed for Weibull distributions, but there has been no comprehensive comparison of them: most studies have focused on comparing maximum likelihood estimation (MLE) with one other approach. In this paper, we first propose an L-moment estimator for the Weibull distribution. Then, a comprehensive comparison is made of the following methods: maximum likelihood estimation (MLE), the method of logarithmic moments, the percentile method, the method of moments and the method of L-moments.
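For the two-parameter Weibull, an L-moment estimator of the kind proposed here has a convenient closed form, since the L-moment ratio satisfies λ2/λ1 = 1 − 2^(−1/k). The following sketch (not necessarily the paper's exact estimator) inverts that relation:

```python
from math import log, gamma

def weibull_lmom(sample):
    """Closed-form L-moment estimator for the two-parameter Weibull
    (shape k, scale eta); a sketch assuming lambda2/lambda1 = 1 - 2**(-1/k)."""
    x = sorted(sample)
    n = len(x)
    b0 = sum(x) / n
    # b1 = mean of ((rank-1)/(n-1)) * x_(rank); enumerate index i = rank-1
    b1 = sum((i / (n - 1)) * xi for i, xi in enumerate(x)) / n
    l1, l2 = b0, 2 * b1 - b0          # first two sample L-moments
    t = l2 / l1                       # L-moment ratio tau
    k = -log(2) / log(1 - t)          # invert tau = 1 - 2**(-1/k)
    eta = l1 / gamma(1 + 1 / k)       # lambda1 = eta * Gamma(1 + 1/k)
    return k, eta

# quasi-sample: Weibull(k=2, eta=1) quantiles on a midpoint grid
xs = [(-log(1 - (i + 0.5) / 1000)) ** 0.5 for i in range(1000)]
k_hat, eta_hat = weibull_lmom(xs)
```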

14.
Penalized logistic regression is a useful tool for classifying samples and for feature selection. Although the methodology has been widely used in various fields of research, its performance deteriorates sharply in the presence of outliers, since logistic regression is based on the maximum log-likelihood method, which is sensitive to outliers. This implies that we cannot accurately classify samples or find the important factors carrying crucial information for classification. To overcome the problem, we propose a robust penalized logistic regression based on a weighted likelihood methodology. We also derive an information criterion for choosing the tuning parameters, in line with generalized information criteria, which is a vital matter in robust penalized logistic regression modelling. We demonstrate through Monte Carlo simulations and a real-world example that the proposed robust modelling strategies perform well for sparse logistic regression modelling even in the presence of outliers.

15.
The problem of testing the similarity of two normal populations is reconsidered, in this article, from a nonclassical point of view. We introduce a test statistic based on the maximum likelihood estimate of Weitzman's overlapping coefficient. Simulated critical points are provided for the proposed test for various sample sizes and significance levels. Statistical powers of the proposed test are computed via simulation studies and compared to those of existing tests. Furthermore, the Type-I error robustness of the proposed and existing tests is studied via simulation when the underlying distributions are non-normal. Two data sets are analyzed for illustration purposes. Finally, the proposed test is applied to assess the bioequivalence of two drug formulations.
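Weitzman's overlapping coefficient for two normal densities is OVL = ∫ min(f1, f2) dx, and the test statistic plugs maximum likelihood estimates of the means and SDs into it. A numerical sketch:

```python
from statistics import NormalDist

def ovl_normal(m1, s1, m2, s2, grid=20000):
    """Weitzman's overlapping coefficient for two normal densities,
    by trapezoidal integration of min(f1, f2). Plug in ML estimates
    of the means and SDs to get the plug-in statistic."""
    f1, f2 = NormalDist(m1, s1), NormalDist(m2, s2)
    lo = min(m1 - 6 * s1, m2 - 6 * s2)   # cover both densities to ~6 SDs
    hi = max(m1 + 6 * s1, m2 + 6 * s2)
    h = (hi - lo) / grid
    xs = [lo + i * h for i in range(grid + 1)]
    ys = [min(f1.pdf(x), f2.pdf(x)) for x in xs]
    return h * (sum(ys) - 0.5 * (ys[0] + ys[-1]))  # trapezoid rule

# identical populations overlap completely; shifted ones only partially
ovl_same = ovl_normal(0, 1, 0, 1)
ovl_shift = ovl_normal(0, 1, 1, 1)   # equals 2*Phi(-1/2) analytically
```

For equal variances the coefficient reduces to 2Φ(−|μ1−μ2|/(2σ)), which makes a handy check on the integration.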

16.
A numerically feasible algorithm is proposed for maximum likelihood estimation of the parameters of the Dirichlet distribution. The performance of the proposed method is compared with the method of moments using bias ratio and squared errors by Monte Carlo simulation. For these criteria, it is found that even in small samples maximum likelihood estimation has advantages over the method of moments.
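A standard numerically feasible scheme for the Dirichlet MLE is the fixed-point iteration α_k ← ψ⁻¹(ψ(Σ_j α_j) + mean log x_k) (Minka's scheme, not necessarily this paper's algorithm). The sketch below implements it with a hand-rolled digamma:

```python
import random
from math import log

def digamma(x):
    """Psi function via recurrence plus an asymptotic series."""
    r = 0.0
    while x < 6:
        r -= 1.0 / x
        x += 1.0
    inv2 = 1.0 / (x * x)
    return r + log(x) - 0.5 / x - inv2 * (1/12 - inv2 * (1/120 - inv2 / 252))

def inv_digamma(y):
    """Invert the (increasing) digamma by bisection."""
    lo, hi = 1e-9, 1e6
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if digamma(mid) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def dirichlet_mle(rows, iters=100):
    """Fixed-point iteration alpha_k <- psi^-1(psi(sum alpha) + mean log x_k)."""
    n, K = len(rows), len(rows[0])
    meanlog = [sum(log(r[k]) for r in rows) / n for k in range(K)]
    alpha = [1.0] * K
    for _ in range(iters):
        s = digamma(sum(alpha))
        alpha = [inv_digamma(s + meanlog[k]) for k in range(K)]
    return alpha

# simulate Dirichlet(2, 3, 5) draws via normalized Gamma variates
rng = random.Random(3)
rows = []
for _ in range(2000):
    g = [rng.gammavariate(a, 1.0) for a in (2.0, 3.0, 5.0)]
    t = sum(g)
    rows.append([v / t for v in g])
a_hat = dirichlet_mle(rows)
```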

17.
Researchers in the medical, health, and social sciences routinely encounter ordinal variables such as self‐reports of health or happiness. When modelling ordinal outcome variables, it is common to have covariates, for example, attitudes, family income, retrospective variables, measured with error. As is well known, ignoring even random error in covariates can bias coefficients and hence prejudice the estimates of effects. We propose an instrumental variable approach to the estimation of a probit model with an ordinal response and mismeasured predictor variables. We obtain likelihood‐based and method of moments estimators that are consistent and asymptotically normally distributed under general conditions. These estimators are easy to compute, perform well and are robust against the normality assumption for the measurement errors in our simulation studies. The proposed method is applied to both simulated and real data. The Canadian Journal of Statistics 47: 653–667; 2019 © 2019 Statistical Society of Canada

18.
For the first time, a five-parameter distribution, called the Kumaraswamy Burr XII (KwBXII) distribution, is defined and studied. The new distribution contains as special models some well-known distributions discussed in the lifetime literature, such as the logistic, Weibull and Burr XII distributions, among several others. We obtain the complete moments, incomplete moments, generating and quantile functions, mean deviations, Bonferroni and Lorenz curves and reliability of the KwBXII distribution. We provide two representations for the moments of the order statistics. The method of maximum likelihood and a Bayesian procedure are adopted for estimating the model parameters. Various simulation studies are performed for different parameter settings and sample sizes to assess the performance of the estimates. Three applications to real data sets demonstrate the usefulness of the proposed distribution and suggest that it may attract wider application in lifetime data analysis.

19.
The use of Bayesian approaches in the regulated world of pharmaceutical drug development has not been without its difficulties or its critics. The recent Food and Drug Administration regulatory guidance on the use of Bayesian approaches in device submissions has mandated an investigation into the operating characteristics of Bayesian approaches and has suggested how to make adjustments in order that the proposed approaches are in a sense calibrated. In this paper, I present examples of frequentist calibration of Bayesian procedures and argue that we need not necessarily aim for perfect calibration but should be allowed to use procedures, which are well‐calibrated, a position supported by the guidance. Copyright © 2016 John Wiley & Sons, Ltd.

20.
A parametric robust test is proposed for comparing several coefficients of variation. This test is derived by properly correcting the normal likelihood function according to the technique suggested by Royall and Tsou. The proposed test statistic is asymptotically valid for general random variables, as long as their underlying distributions have finite fourth moments.

Simulation studies and real data analyses are provided to demonstrate the effectiveness of the novel robust procedure.
