Similar Literature
A total of 20 similar records were retrieved.
1.
Combinations of drugs are increasingly being used for a wide variety of diseases and conditions. A pre-clinical study may allow the investigation of the response at a large number of dose combinations. In determining the response to a drug combination, interest may lie in seeking evidence of 'synergism', in which the joint action is greater than the actions of the individual drugs, or of 'antagonism', in which it is less. Two well-known response surface models representing no interaction are Loewe additivity and Bliss independence, and Loewe or Bliss synergism or antagonism is defined relative to these. We illustrate an approach to fitting these models for the case in which the marginal single drug dose-response relationships are represented by four-parameter logistic curves with common upper and lower limits, and where the response variable is normally distributed with a common variance about the dose-response curve. When the dose-response curves are not parallel, the relative potency of the two drugs varies according to the magnitude of the desired effect, and the models for Loewe additivity and synergism/antagonism cannot be explicitly expressed. We present an iterative approach to fitting these models without the assumption of parallel dose-response curves. A goodness-of-fit test based on residuals is also described. Implementation using the SAS NLIN procedure is demonstrated with data from a pre-clinical study.
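As a minimal illustration of the marginal model (not the authors' SAS NLIN implementation), a four-parameter logistic curve can be fitted by nonlinear least squares; the doses, responses, and starting values below are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(dose, lower, upper, log_ec50, hill):
    """Four-parameter logistic dose-response curve."""
    return lower + (upper - lower) / (1.0 + np.exp(hill * (np.log(dose) - log_ec50)))

# Hypothetical monotherapy data for one drug
dose = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])
resp = np.array([98.0, 91.0, 72.0, 41.0, 18.0, 9.0])

p0 = [5.0, 100.0, np.log(2.0), 1.5]  # rough starting values
params, cov = curve_fit(four_pl, dose, resp, p0=p0)
print("lower, upper, log(EC50), Hill slope:", params)
```

Jointly fitting two such curves with shared upper and lower limits, plus an interaction term, is the iterative step the abstract describes.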

2.
Potency bioassays are used to measure biological activity. Consequently, potency is considered a critical quality attribute in manufacturing. Relative potency is measured by comparing the concentration-response curve of a manufactured test batch with that of a reference standard. If the curve shapes are deemed similar, the test batch is said to exhibit constant relative potency with the reference standard, a critical requirement for calibrating the potency of the final drug product. Outliers in bioassay potency data may result in the false acceptance/rejection of a bad/good sample and, if accepted, may yield a biased relative potency estimate. To avoid these issues, USP <1032> recommends screening bioassay data for outliers prior to performing a relative potency analysis. In a recently published work, the effects of one or more outliers, outlier size, and outlier type on similarity testing and estimation of relative potency were thoroughly examined, confirming the USP <1032> outlier guidance. As a follow-up, several outlier detection methods, including those proposed by USP <1010>, are evaluated and compared in this work through computer simulation. Two novel outlier detection methods are also proposed. The effects of outlier removal on similarity testing and estimation of relative potency were evaluated, resulting in recommendations for best practice.
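Among the classical screens discussed in USP <1010> is Grubbs' test; a minimal sketch on hypothetical replicate responses follows (the paper's comparison covers more methods, including two novel ones).

```python
import numpy as np
from scipy import stats

def grubbs_outlier(x, alpha=0.05):
    """One-pass Grubbs test: return the index of the most extreme point
    if it is a significant outlier, else None."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    g = np.abs(x - x.mean()) / x.std(ddof=1)
    i = int(np.argmax(g))
    t = stats.t.ppf(1 - alpha / (2 * n), n - 2)
    g_crit = (n - 1) / np.sqrt(n) * np.sqrt(t**2 / (n - 2 + t**2))
    return i if g[i] > g_crit else None

replicates = [102.1, 98.7, 101.4, 99.8, 61.3, 100.6]  # hypothetical assay replicates
print(grubbs_outlier(replicates))                      # flags index 4
```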

3.
The identification of synergistic interactions between combinations of drugs is an important area within drug discovery and development. Pre-clinically, large numbers of screening studies to identify synergistic pairs of compounds can often be run, necessitating efficient and robust experimental designs. We consider experimental designs for detecting interaction between two drugs in a pre-clinical in vitro assay in the presence of uncertainty about the monotherapy response. The monotherapies are assumed to follow the Hill equation with common lower and upper asymptotes, and a common variance. The optimality criterion used is the variance of the interaction parameter. We focus on ray designs and investigate two algorithms for selecting the optimum set of dose combinations. The first is a forward algorithm in which design points are added sequentially. This is found to give useful solutions in simple cases but can lack robustness when knowledge about the monotherapy parameters is insufficient. The second algorithm is a more pragmatic approach where the design points are constrained to be distributed log-normally along the rays and monotherapy doses. We find that the pragmatic algorithm is more stable than the forward algorithm, and even when the forward algorithm has converged, the pragmatic algorithm can still out-perform it. Practically, we find that good designs for detecting an interaction have equal numbers of points on monotherapies and combination therapies, with those points typically placed in positions where a 50% response is expected. More uncertainty in monotherapy parameters leads to an optimal design with design points that are more spread out. Copyright © 2015 John Wiley & Sons, Ltd.
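The optimality criterion, the asymptotic variance of the interaction parameter, can be computed for any candidate design from the model Jacobian as sigma^2 [(J'J)^-1]. The sketch below uses a deliberately simple toy response surface rather than the paper's Hill model; all parameter values and design points are hypothetical.

```python
import numpy as np

def param_variance(model, theta, designs, sigma2=1.0, idx=-1, eps=1e-6):
    """Asymptotic variance of parameter `idx` for a candidate design:
    sigma^2 * [(J'J)^{-1}]_{idx,idx}, with J the numerical Jacobian."""
    theta = np.asarray(theta, dtype=float)
    J = np.empty((len(designs), len(theta)))
    for j in range(len(theta)):
        tp, tm = theta.copy(), theta.copy()
        tp[j] += eps
        tm[j] -= eps
        J[:, j] = [(model(d, tp) - model(d, tm)) / (2 * eps) for d in designs]
    return sigma2 * np.linalg.inv(J.T @ J)[idx, idx]

def toy_surface(d, theta):
    """Toy two-drug surface with an interaction term tau (not the Hill model)."""
    e0, b1, b2, tau = theta
    x1, x2 = np.log1p(d[0]), np.log1p(d[1])
    return e0 + b1 * x1 + b2 * x2 + tau * x1 * x2

theta = [0.0, 1.0, 1.0, 0.2]
design_a = [(0, 1), (0, 4), (1, 0), (4, 0), (1, 1), (4, 4)]
design_b = [(0, 2), (2, 0), (1, 1), (2, 2), (4, 4), (8, 8)]
for name, dsg in [("A", design_a), ("B", design_b)]:
    print(name, param_variance(toy_surface, theta, dsg))  # smaller is better
```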

4.
The authors propose a method for comparing two samples of curves. The notion of similarity between two curves is the basis of three statistics they suggest for testing the null hypothesis of no difference between the two groups. They exploit standard tools from functional data analysis to preprocess the observed curves and use the permutation distribution under the null hypothesis to obtain p-values for their tests. They explore the operating characteristics of these tests through simulations and, as an application, compare the ganglioside distribution in brain tissue between old and young rats.
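A minimal version of such a permutation test, using the L2 distance between group mean curves as the statistic (the authors propose three related statistics); the pre-smoothed curves below are simulated stand-ins.

```python
import numpy as np

rng = np.random.default_rng(1)

def perm_test(curves_a, curves_b, n_perm=2000):
    """Permutation test for a difference between two samples of curves."""
    def stat(a, b):
        return np.sum((a.mean(axis=0) - b.mean(axis=0)) ** 2)
    pooled = np.vstack([curves_a, curves_b])
    n_a = len(curves_a)
    obs = stat(curves_a, curves_b)
    count = 0
    for _ in range(n_perm):
        idx = rng.permutation(len(pooled))
        if stat(pooled[idx[:n_a]], pooled[idx[n_a:]]) >= obs:
            count += 1
    return obs, (count + 1) / (n_perm + 1)

# Hypothetical curves evaluated on a common grid of 50 points
t = np.linspace(0, 1, 50)
a = np.sin(2 * np.pi * t) + rng.normal(0, 0.3, (12, 50))
b = np.sin(2 * np.pi * t) + 0.4 + rng.normal(0, 0.3, (12, 50))
print(perm_test(a, b))  # (statistic, permutation p-value)
```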

5.
Model-informed drug discovery and development offers the promise of more efficient clinical development, with increased productivity and reduced cost through scientific decision making and risk management. Go/no-go development decisions in the pharmaceutical industry are often driven by effect size estimates, with the goal of meeting commercially generated target profiles. Sufficient efficacy is critical for eventual success, but the decision to advance development phase is also dependent on adequate knowledge of appropriate dose and dose-response. Doses that are too high or too low pose a risk of clinical or commercial failure. This paper addresses this issue and continues the evolution of formal decision frameworks in drug development. Here, we consider the integration of both efficacy and dose-response estimation accuracy into the go/no-go decision process, using a model-based approach. Using prespecified target and lower reference values associated with both efficacy and dose accuracy, we build a decision framework to more completely characterize development risk. Given the limited knowledge of dose response in early development, our approach incorporates a set of dose-response models and uses model averaging. The approach and its operating characteristics are illustrated through simulation. Finally, we demonstrate the decision approach on a post hoc analysis of the phase 2 data for naloxegol (a drug approved for opioid-induced constipation).
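Model averaging over a candidate set of dose-response models is commonly done with information-criterion weights; the paper's exact weighting scheme may differ. A sketch with hypothetical AIC values and per-model ED90 estimates:

```python
import numpy as np

def aic_weights(aics):
    """Akaike weights: one common way to average over a candidate model set."""
    aics = np.asarray(aics, dtype=float)
    delta = aics - aics.min()
    w = np.exp(-0.5 * delta)
    return w / w.sum()

# Hypothetical AICs and ED90 estimates from four fitted dose-response models
aic = [212.4, 210.1, 215.8, 211.0]
ed90 = [14.2, 11.8, 19.5, 12.6]
w = aic_weights(aic)
print("weights:", np.round(w, 3))
print("model-averaged ED90:", float(np.dot(w, ed90)))
```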

6.
Our paper proposes a methodological strategy to select optimal sampling designs for phenotyping studies including a cocktail of drugs. A cocktail approach is of high interest for determining the simultaneous activity of enzymes responsible for drug metabolism and pharmacokinetics, and is therefore useful in anticipating drug–drug interactions and in personalized medicine. Phenotyping indexes, which are areas under the concentration-time curve, can be derived from a few samples using nonlinear mixed effect models and maximum a posteriori estimation. Because of clinical constraints in phenotyping studies, the number of samples that can be collected in individuals is limited and the sampling times must be as flexible as possible. Therefore, to optimize a joint design for several drugs (i.e., to determine a compromise between informative times that best characterize each drug's kinetics), we proposed to use a compound optimality criterion based on the expected population Fisher information matrix in nonlinear mixed effect models. This criterion allows weighting different models, which might be useful to take into account the importance accorded to each target in a phenotyping test. We also computed windows around the optimal times based on recursive random sampling and Monte-Carlo simulation while maintaining a reasonable level of efficiency for parameter estimation. We illustrated this strategy for two drugs often included in phenotyping cocktails, midazolam (probe for CYP3A) and digoxin (P-glycoprotein), based on the data of a previous study, and were able to find a sparse and flexible design. The obtained design was evaluated by clinical trial simulations and shown to be efficient for the estimation of population and individual parameters. Copyright © 2015 John Wiley & Sons, Ltd.
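The compound criterion weights the information gained about each drug's model across a shared set of sampling times. The sketch below is a strong simplification: it uses an individual (not population) Fisher information matrix and a hypothetical one-compartment model for both probes, with made-up parameters.

```python
import itertools
import numpy as np

def one_cpt(t, theta):
    """One-compartment oral PK model (hypothetical stand-in for both probes)."""
    ka, ke, v = theta
    return (ka / (v * (ka - ke))) * (np.exp(-ke * t) - np.exp(-ka * t))

def log_det_fim(times, model, theta, eps=1e-6):
    """log|J'J| of the individual-level information at the candidate times."""
    theta = np.asarray(theta, dtype=float)
    J = np.empty((len(times), len(theta)))
    for j in range(len(theta)):
        tp, tm = theta.copy(), theta.copy()
        tp[j] += eps
        tm[j] -= eps
        J[:, j] = (model(np.asarray(times), tp) - model(np.asarray(times), tm)) / (2 * eps)
    sign, ld = np.linalg.slogdet(J.T @ J)
    return ld if sign > 0 else -np.inf

# Hypothetical parameters for the two probes, and a weight reflecting priorities
theta_mdz, theta_dig = [1.2, 0.20, 30.0], [0.9, 0.05, 200.0]
w = 0.5
grid = [0.5, 1, 2, 4, 6, 9, 12, 24, 48]  # candidate sampling times (h)
best = max(itertools.combinations(grid, 4),
           key=lambda ts: w * log_det_fim(ts, one_cpt, theta_mdz)
                        + (1 - w) * log_det_fim(ts, one_cpt, theta_dig))
print("compound-optimal joint times:", best)
```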

7.
Since the implementation of the International Conference on Harmonization (ICH) E14 guideline in 2005, regulators have required a "thorough QTc" (TQT) study for evaluating the effects of investigational drugs on delayed cardiac repolarization as manifested by a prolonged QTc interval. However, TQT studies have increasingly been viewed unfavorably because of their low cost effectiveness. Several researchers have noted that a robust drug concentration-QTc (conc-QTc) modeling assessment in early phase development should, in most cases, obviate the need for a subsequent TQT study. In December 2015, ICH released an "E14 Q&As (R3)" document supporting the use of conc-QTc modeling for regulatory decisions. In this article, we propose a simple improvement of two popular conc-QTc assessment methods for typical first-in-human crossover-like single ascending dose clinical pharmacology trials. The improvement is achieved, in part, by leveraging routinely encountered (and expected) intrasubject correlation patterns encountered in such trials. A real example involving a single ascending dose and corresponding TQT trial, along with results from a simulation study, illustrate the strong performance of the proposed method. The improved conc-QTc assessment will further enable highly reliable go/no-go decisions in early phase clinical development and deliver results that support subsequent TQT study waivers by regulators.
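For context, a generic linear mixed-effects conc-QTc model with a random intercept per subject (the kind of baseline the paper improves on by exploiting intrasubject correlation) might look as follows; the data are simulated and all names and values are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)

# Hypothetical SAD-style data: baseline-corrected QTc change vs concentration,
# with repeated measurements per subject
n_subj, n_obs = 24, 6
subj = np.repeat(np.arange(n_subj), n_obs)
conc = rng.uniform(0, 500, n_subj * n_obs)
u = rng.normal(0, 3, n_subj)  # subject-level random intercepts
dqtc = 1.0 + 0.01 * conc + u[subj] + rng.normal(0, 4, n_subj * n_obs)

df = pd.DataFrame({"subj": subj, "conc": conc, "dqtc": dqtc})
fit = smf.mixedlm("dqtc ~ conc", df, groups=df["subj"]).fit()
print(fit.params)  # the conc slope estimates the conc-QTc relationship
```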

8.
Similarity in bioassays means that the test preparation behaves as a dilution of the standard preparation with respect to their biological effect. Thus, similarity must be investigated to confirm this biological property. Historically, this was typically conducted with traditional hypothesis testing, but this has received substantial criticism. Failing to reject similarity does not imply that the 2 preparations are similar. Also, rejecting similarity when bioassay variability is small might simply demonstrate a nonrelevant deviation in similarity. To remedy these concerns, equivalence testing has been proposed as an alternative to traditional hypothesis testing, and it has found its way into official guidelines. However, similarity has been discussed mainly in terms of the parameters in the dose-response curves of the standard and test preparations, but the consequences of nonsimilarity on the relative bioactivity have never been investigated. This article provides a general equivalence approach to evaluate similarity that is directly related to bioequivalence on the relative bioactivity of the standard and test preparations. Bioequivalence on the relative bioactivity can only be guaranteed for positive (only nonblanks) and finite dose intervals. The approach is demonstrated on 4 case studies in which we also show how to calculate a sample size and how to investigate the power of equivalence on similarity.
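Equivalence testing is typically operationalized as two one-sided tests (TOST) of a parameter difference against a pre-specified margin; a minimal sketch with hypothetical numbers:

```python
from scipy import stats

def tost(diff_hat, se, df, margin, alpha=0.05):
    """Two one-sided tests for |parameter difference| < margin.
    Equivalence is concluded when both one-sided p-values < alpha;
    the overall p-value is their maximum."""
    t_low = (diff_hat + margin) / se   # H0: diff <= -margin
    t_high = (diff_hat - margin) / se  # H0: diff >= +margin
    p_low = stats.t.sf(t_low, df)
    p_high = stats.t.cdf(t_high, df)
    return max(p_low, p_high)

# Hypothetical slope difference between test and standard preparations
print(tost(diff_hat=0.08, se=0.05, df=40, margin=0.25))  # small p => similar
```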

9.
In Sections 49 and 50 of the Design of Experiments, Fisher discusses an experiment designed to compare the effects of several types of manure on yield. Each type of manure is applied at three dosage levels: zero, single, and double doses. Fisher points out that the usual contrasts constructed for a factorial experiment are unsatisfactory in this setting. In particular, since the response curves necessarily meet at the zero dose, the usual notion of interaction as a lack of parallelism cannot apply. Fisher then gives an appropriate definition for interaction in this setting. This paper is concerned with a class of orthogonal polynomials that can be used as an aid in the detection of this modified definition of interaction.
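One standard construction of orthogonal polynomial contrasts takes the QR decomposition of a centered Vandermonde matrix; the paper develops a modified class adapted to response curves forced to meet at the zero dose, which this sketch does not reproduce. Shown for the zero/single/double dose levels:

```python
import numpy as np

def orthogonal_poly(x, degree):
    """Orthonormal polynomial contrasts over the points x, built from the
    QR decomposition of the centered Vandermonde matrix."""
    x = np.asarray(x, dtype=float)
    V = np.vander(x - x.mean(), degree + 1, increasing=True)
    Q, _ = np.linalg.qr(V)
    return Q[:, 1:]  # drop the constant column

# Linear and quadratic contrasts for zero, single, and double doses
print(np.round(orthogonal_poly([0.0, 1.0, 2.0], 2), 3))
```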

10.
Cell-based potency assays play an important role in the characterization of biopharmaceuticals, but they can be challenging to develop, in part because of greater inherent variability than other analytical methods. Our objective is to select concentrations on a dose–response curve that will enhance assay robustness. We apply the maximin D-optimal design concept to the four-parameter logistic (4PL) model and then derive and compute the maximin D-optimal design for a challenging bioassay using curves representative of assay variation. The selected concentration points from this 'best worst case' design adequately fit a variety of 4PL shapes and demonstrate improved robustness. Copyright © 2015 John Wiley & Sons, Ltd.
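A brute-force sketch of the maximin idea: over candidate concentration sets, maximize the worst-case log-determinant of the 4PL information matrix across a few curves representing assay variation. The curves, concentration grid, and design size below are all hypothetical.

```python
import itertools
import numpy as np

def four_pl(x, lo, hi, logec50, slope):
    return lo + (hi - lo) / (1 + np.exp(slope * (np.log(x) - logec50)))

def log_det_info(conc, theta, eps=1e-6):
    """log|J'J| for the 4PL model at a candidate concentration set."""
    theta = np.asarray(theta, dtype=float)
    x = np.asarray(conc)
    J = np.empty((len(x), 4))
    for j in range(4):
        tp, tm = theta.copy(), theta.copy()
        tp[j] += eps
        tm[j] -= eps
        J[:, j] = (four_pl(x, *tp) - four_pl(x, *tm)) / (2 * eps)
    sign, ld = np.linalg.slogdet(J.T @ J)
    return ld if sign > 0 else -np.inf

# Hypothetical curves spanning the variation seen during development
curves = [(0, 100, np.log(1.0), 1.0),
          (5, 95, np.log(3.0), 1.8),
          (0, 110, np.log(0.5), 0.8)]
grid = np.geomspace(0.01, 100, 12)
best = max(itertools.combinations(grid, 6),
           key=lambda c: min(log_det_info(c, th) for th in curves))
print("maximin 6-point design:", np.round(best, 3))
```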

11.
Bioequivalence (BE) is required for approving a generic drug. The two one-sided tests procedure (TOST, or the 90% confidence interval approach) has been used as the mainstream methodology to test average BE (ABE) on pharmacokinetic parameters such as the area under the blood concentration-time curve and the peak concentration. However, for highly variable drugs (%CV > 30%), it is difficult to demonstrate ABE in a standard cross-over study with the typical number of subjects using the TOST because of lack of power. Recently, the US Food and Drug Administration and the European Medicines Agency recommended similar but not identical reference-scaled average BE (RSABE) approaches to address this issue. Although the power is improved, the new approaches may not guarantee a high level of confidence for the true difference between two drugs at the ABE boundaries. It is also difficult for these approaches to address the issues of population BE (PBE) and individual BE (IBE). We advocate the use of a likelihood approach for representing and interpreting BE data as evidence. Using example data from a full replicate 2 × 4 cross-over study, we demonstrate how to present evidence using the profile likelihoods for the mean difference and standard deviation ratios of the two drugs for the pharmacokinetic parameters. With this approach, we present evidence for PBE and IBE as well as ABE within a unified framework. Our simulations show that the operating characteristics of the proposed likelihood approach are comparable with the RSABE approaches when the same criteria are applied. Copyright © 2014 John Wiley & Sons, Ltd.
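For the mean difference under a simple normal model, the profile likelihood has a closed form once the common mean and variance are maximized out; the 1/8 likelihood interval below is a conventional benchmark in the likelihood paradigm. The two samples are simulated stand-ins for log-transformed PK parameters, not the paper's replicate cross-over data.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(0.0, 1.0, 20)  # hypothetical log-AUC, reference
y = rng.normal(0.1, 1.0, 20)  # hypothetical log-AUC, test

def profile_loglik(delta):
    """Profile log-likelihood of the mean difference delta, maximizing over
    a common mean and common variance (x ~ N(mu, s2), y ~ N(mu + delta, s2))."""
    n = len(x) + len(y)
    mu = (x.sum() + (y - delta).sum()) / n
    s2 = (((x - mu) ** 2).sum() + ((y - delta - mu) ** 2).sum()) / n
    return -0.5 * n * (np.log(2 * np.pi * s2) + 1)

grid = np.linspace(-1, 1, 201)
ll = np.array([profile_loglik(d) for d in grid])
keep = ll >= ll.max() + np.log(1 / 8)  # 1/8 likelihood interval
print("MLE:", grid[ll.argmax()], "1/8 interval:", grid[keep][[0, -1]])
```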

12.
Nowadays, treatment regimens for cancer often involve a combination of drugs. The determination of the doses of each of the combined drugs in phase I dose escalation studies poses methodological challenges. The most common phase I design, the classic '3+3' design, has been criticized for poorly estimating the maximum tolerated dose (MTD) and for treating too many subjects at doses below the MTD. In addition, the classic '3+3' is not able to address the challenges posed by combinations of drugs. Here, we assume that a control drug (commonly used and well-studied) is administered at a fixed dose in combination with a new agent (the experimental drug) of which the appropriate dose has to be determined. We propose a randomized design in which subjects are assigned to the control or to the combination of the control and the experimental drug. The MTD is determined using a model-based Bayesian technique based on the difference in the probability of dose limiting toxicities (DLT) between the control and the combination arm. We show, through a simulation study, that this approach provides more accurate estimates of the MTD. We argue that this approach may differentiate between an extremely high probability of DLT observed from the control and a high probability of DLT of the combination. We also report on a fictitious (simulated) analysis based on published data of a phase I trial of ifosfamide combined with sunitinib. Copyright © 2014 John Wiley & Sons, Ltd.
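A minimal sketch of the model-based idea: with Beta priors, the posterior of the difference in DLT probability between the combination and control arms is easy to simulate. The paper's Bayesian model is richer; the counts and threshold here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical cohort data: DLTs / patients on control and on combination
dlt_ctrl, n_ctrl = 1, 12
dlt_comb, n_comb = 4, 12

# Beta(1, 1) priors; posterior draws for each arm's DLT probability
p_ctrl = rng.beta(1 + dlt_ctrl, 1 + n_ctrl - dlt_ctrl, 100_000)
p_comb = rng.beta(1 + dlt_comb, 1 + n_comb - dlt_comb, 100_000)

excess = p_comb - p_ctrl  # toxicity attributable to the experimental agent
print("P(excess toxicity > 0.20):", np.mean(excess > 0.20))
# Escalate/stay/de-escalate rules can be driven by such posterior probabilities.
```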

13.
In current industry practice, it is difficult to assess QT effects at potential therapeutic doses based on Phase I dose-escalation trials in oncology due to data scarcity, particularly in combination trials. In this paper, we propose to use dose-concentration and concentration-QT models jointly to model the exposures and effects of multiple drugs in combination. The fitted models can then be used to make early predictions of QT prolongation to aid in choosing recommended dose combinations for further investigation. The models consider potential correlation between concentrations of the test drugs and potential drug–drug interactions at the PK and QT levels. In addition, this approach allows for the assessment of the probability of QT prolongation exceeding given thresholds of clinical significance. The performance of this approach was examined via simulation under practical scenarios for dose-escalation trials for a combination of two drugs. The simulation results show that valuable information about QT effects at therapeutic dose combinations can be gained with the proposed approach. Early detection of dose combinations with substantial QT prolongation is evaluated effectively through the CIs of the predicted peak QT prolongation at each dose combination. Furthermore, the probability of QT prolongation exceeding a certain threshold is also computed to support early detection of safety signals while accounting for uncertainty associated with data from Phase I studies. Because the prediction of QT effects is sensitive to the dose escalation process, this sensitivity and the limited sample size should be considered when supporting decisions about further development of particular dose combinations. Copyright © 2016 John Wiley & Sons, Ltd.
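The joint-modelling idea can be caricatured as pushing parameter uncertainty through a dose-concentration model and a concentration-QT model by Monte Carlo to obtain P(peak dQTc exceeds a clinical threshold). Every parameter value below is hypothetical, and the models are deliberately simplistic.

```python
import numpy as np

rng = np.random.default_rng(5)

def p_qt_exceeds(dose1, dose2, n_draw=50_000, threshold=10.0):
    """Monte Carlo sketch of P(peak dQTc > threshold, in ms) at a dose pair."""
    # Dose-concentration: correlated lognormal peak concentrations
    mu = np.log([0.5 * dose1, 0.8 * dose2])
    cov = np.array([[0.09, 0.03], [0.03, 0.09]])
    c = np.exp(rng.multivariate_normal(mu, cov, n_draw))
    # Concentration-QT: uncertain slopes plus a drug-drug interaction term
    b1 = rng.normal(0.08, 0.02, n_draw)
    b2 = rng.normal(0.05, 0.02, n_draw)
    b12 = rng.normal(0.001, 0.0005, n_draw)
    dqtc = b1 * c[:, 0] + b2 * c[:, 1] + b12 * c[:, 0] * c[:, 1]
    return np.mean(dqtc > threshold)

print(p_qt_exceeds(40, 60))
```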

14.
The article studies the log-logistic class of dose–response bioassay models in the binomial set-up. The dose is identified by the potency-adjusted mixing proportions of two similar compounds. Models for both absence and presence of interaction between the compounds have been considered. The aim is to investigate the D- and Ds-optimal mixture designs for the estimation of the full set of parameters or for the estimation of potency for a best guess of the parameter values. We also indicate how to find the optimal design to estimate the mixing proportions at which the probability of success attains a given value in the absence of the interaction effect.

15.
Understanding the dose–response relationship is a key objective in Phase II clinical development. Yet, designing a dose-ranging trial is a challenging task, as it requires identifying the therapeutic window and the shape of the dose–response curve for a new drug on the basis of a limited number of doses. Adaptive designs have been proposed as a solution to improve both the quality and the efficiency of Phase II trials, as they make it possible to select the doses to be tested as the trial progresses. In this article, we present a 'shape-based' two-stage adaptive trial design where the doses to be tested in the second stage are determined based on the correlation observed between the efficacy of the doses tested in the first stage and a set of pre-specified candidate dose–response profiles. At the end of the trial, the data are analyzed using the generalized MCP-Mod approach in order to account for model uncertainty. A simulation study shows that this approach gives more precise estimates of a desired target dose (e.g. ED70) than a single-stage (fixed-dose) design and performs as well as a two-stage D-optimal design. We present the results of an adaptive model-based dose-ranging trial in multiple sclerosis that motivated this research and was conducted using the presented methodology. Copyright © 2015 John Wiley & Sons, Ltd.
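The first-stage selection step can be sketched as ranking pre-specified candidate profiles by their correlation with the observed dose means; the doses, observed means, and candidate profiles below are hypothetical.

```python
import numpy as np

# Hypothetical stage-1 mean responses at the first-stage doses
doses = np.array([0.0, 10.0, 40.0, 160.0])
obs = np.array([0.1, 0.9, 1.6, 1.8])

# Pre-specified candidate dose-response profiles evaluated at the same doses
profiles = {
    "emax":      doses / (doses + 20.0),
    "linear":    doses / doses.max(),
    "loglinear": np.log1p(doses) / np.log1p(doses.max()),
}

# Rank candidate shapes by correlation with the observed means; the
# best-matching shapes then drive the choice of second-stage doses
scores = {k: np.corrcoef(obs, v)[0, 1] for k, v in profiles.items()}
print(max(scores, key=scores.get), scores)
```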

16.
In this paper, we present a test of independence between the response variable, which can be discrete or continuous, and a continuous covariate after adjusting for heteroscedastic treatment effects. The method involves first augmenting each pair of the data for all treatments with a fixed number of nearest neighbours as pseudo-replicates. A test statistic is then constructed by taking the difference of two quadratic forms. The statistic is equivalent to the average lagged correlations between the response and nearest neighbour local estimates of the conditional mean of the response given the covariate for each treatment group. This approach effectively eliminates the need to estimate the nonlinear regression function. The asymptotic distribution of the proposed test statistic is obtained under the null and local alternatives. Although using a fixed number of nearest neighbours poses significant difficulty in the inference compared with allowing the number of nearest neighbours to go to infinity, the parametric standardizing rate for our test statistics is obtained. Numerical studies show that the new test procedure has robust power to detect nonlinear dependency in the presence of outliers that might result from highly skewed distributions. The Canadian Journal of Statistics 38: 408–433; 2010 © 2010 Statistical Society of Canada

17.
This paper studies the notion of coherence in interval-based dose-finding methods. An incoherent decision is either (a) a recommendation to escalate the dose following an observed dose-limiting toxicity or (b) a recommendation to de-escalate the dose following a non-dose-limiting toxicity. In a simulated example, we illustrate that the Bayesian optimal interval method and the Keyboard method are not coherent. We generated dose-limiting toxicity outcomes under an assumed set of true probabilities for a trial of n=36 patients in cohorts of size 1, and we counted the number of incoherent dosing decisions that were made throughout this simulated trial. Each of the methods studied resulted in 13/36 (36%) incoherent decisions in the simulated trial. Additionally, for two different target dose-limiting toxicity rates, 20% and 30%, and a sample size of n=30 patients, we randomly generated 100 dose-toxicity curves and tabulated the number of incoherent decisions made by each method in 1000 simulated trials under each curve. For each method studied, the probability of incurring at least one incoherent decision during the conduct of a single trial is greater than 75%. Coherency is an important principle in the conduct of dose-finding trials. Interval-based methods violate this principle for cohorts of size 1 and require additional modifications to overcome this shortcoming. Researchers need to take a closer look at the dose assignment behavior of interval-based methods when using them to plan dose-finding studies.
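Counting incoherent decisions from a simulated dosing trace is straightforward; the trace below is hypothetical.

```python
def count_incoherent(doses, dlts):
    """Count incoherent moves in a cohorts-of-size-1 trace: escalation right
    after a DLT, or de-escalation right after a non-DLT."""
    bad = 0
    for i in range(1, len(doses)):
        if dlts[i - 1] and doses[i] > doses[i - 1]:
            bad += 1  # escalated after a DLT
        elif not dlts[i - 1] and doses[i] < doses[i - 1]:
            bad += 1  # de-escalated after a non-DLT
    return bad

# Hypothetical dose assignments and DLT outcomes, one patient at a time
doses = [1, 2, 2, 3, 2, 3, 3, 4]
dlts  = [0, 0, 1, 1, 0, 0, 0, 1]
print(count_incoherent(doses, dlts))  # -> 1 (escalation after the DLT at dose 2)
```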

18.
The modeling of the dose–adverse event relationship in clinical studies of drugs that must be titrated is complicated by confounded dose escalation and exposure time effects. We analyze the dose–adverse event relationship over time for hypotension-related adverse events (dizziness, hypotension, postural hypotension, syncope, vertigo) in placebo-controlled benign prostatic hyperplasia studies of terazosin using two different methods. The first method uses a Cox regression model with time-dependent covariates to evaluate the time-to-first-event data. The second method uses a logistic regression model with parameters estimated using generalized estimating equations to analyze multiple events. Doses were assigned to both placebo and terazosin patients according to the titration scheme in each study. Three combined titration-to-efficacy response studies (231 placebo, 230 terazosin patients) had a significant difference in the incidence of hypotension-related adverse events between placebo (7.4%) and terazosin (21.7%); however, they did not exhibit a significant difference between treatments in the rate of adverse events by dose. Applying these methods to a larger, longer-duration titration-to-response study (1031 placebo, 1053 terazosin patients) produced similar results.
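The first method, a Cox model with the current (titrated) dose as a time-dependent covariate, can be sketched with the lifelines library; the long-format data below (one row per subject-interval) are hypothetical.

```python
import pandas as pd
from lifelines import CoxTimeVaryingFitter

# Hypothetical long-format data: the current dose changes across intervals
df = pd.DataFrame({
    "id":    [1, 1, 2, 2, 3, 3, 3, 4, 4],
    "start": [0, 10, 0, 10, 0, 10, 18, 0, 10],
    "stop":  [10, 20, 10, 25, 10, 18, 30, 10, 30],
    "dose":  [1, 2, 1, 2, 1, 2, 5, 1, 2],
    "event": [0, 1, 0, 0, 0, 0, 1, 0, 0],  # first hypotension-related AE
})

ctv = CoxTimeVaryingFitter()
ctv.fit(df, id_col="id", event_col="event", start_col="start", stop_col="stop")
ctv.print_summary()  # hazard ratio per unit of the time-varying dose
```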

19.
Let F(x) and F(x+θ) be log dose-response curves for a standard preparation and a test preparation, respectively, in a parallel quantal bioassay designed to test the relative potency of a drug, toxicant, or some other substance, and suppose the form of F is unknown. Several estimators of the shift parameter θ, or relative potency, are compared, including some generalized and trimmed Spearman-Kärber estimators and a nonparametric maximum likelihood estimator. Both point and interval estimation are discussed. Some recommendations concerning the choice of estimator are offered.
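The basic (untrimmed) Spearman-Kärber estimator averages adjacent log-dose midpoints weighted by the increments in response proportions; the difference between the two preparations' estimates gives the shift θ, i.e. the log relative potency. The assay data below are hypothetical, and the trimmed and generalized variants modify how the tails are handled.

```python
import numpy as np

def spearman_karber(log_dose, p):
    """Basic Spearman-Karber estimate of the mean of the tolerance
    distribution; assumes monotone p with p[0]=0 and p[-1]=1."""
    log_dose, p = np.asarray(log_dose, float), np.asarray(p, float)
    return np.sum(np.diff(p) * (log_dose[:-1] + log_dose[1:]) / 2)

# Hypothetical quantal assay: proportions responding at each log10 dose
x = np.log10([1, 2, 4, 8, 16, 32])
p_std = np.array([0.0, 0.10, 0.35, 0.70, 0.90, 1.0])
p_tst = np.array([0.0, 0.05, 0.20, 0.55, 0.85, 1.0])

# Estimated shift theta, i.e. the log10 relative potency
print(spearman_karber(x, p_tst) - spearman_karber(x, p_std))
```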

20.
Drug combinations in preclinical tumor xenograft studies are often assessed using fixed doses. Assessing the joint action of drug combinations with fixed doses has not been well developed in the literature. Here, an interaction index is proposed for fixed-dose drug combinations in a subcutaneous tumor xenograft model. Furthermore, a bootstrap percentile interval of the interaction index is also developed. The joint action of two drugs can be assessed on the basis of confidence limits of the interaction index. Tumor xenograft data from actual two-drug combination studies are analyzed to illustrate the proposed method. Copyright © 2013 John Wiley & Sons, Ltd.
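The bootstrap percentile interval itself is generic: resample each treatment group with replacement, recompute the index, and take empirical percentiles. The toy Bliss-style ratio and xenograft volumes below are hypothetical stand-ins, not the paper's interaction index.

```python
import numpy as np

rng = np.random.default_rng(6)

def boot_percentile_ci(index_fn, *samples, n_boot=5000, level=0.95):
    """Bootstrap percentile interval for a statistic of several samples,
    resampling each sample independently with replacement."""
    stats = []
    for _ in range(n_boot):
        resampled = [s[rng.integers(0, len(s), len(s))] for s in samples]
        stats.append(index_fn(*resampled))
    lo, hi = np.percentile(stats, [(1 - level) / 2 * 100, (1 + level) / 2 * 100])
    return lo, hi

# Hypothetical end-of-study tumor volumes (control, drug A, drug B, A+B)
ctrl = rng.lognormal(7.0, 0.3, 10)
a    = rng.lognormal(6.6, 0.3, 10)
b    = rng.lognormal(6.7, 0.3, 10)
ab   = rng.lognormal(6.0, 0.3, 10)

def interaction_index(c, x, y, xy):
    """Toy Bliss-style index on mean relative volumes: < 1 suggests synergy."""
    return (xy.mean() / c.mean()) / ((x.mean() / c.mean()) * (y.mean() / c.mean()))

print(boot_percentile_ci(interaction_index, ctrl, a, b, ab))
```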
