Similar Articles (20 results)
1.
We consider fitting Emax models to the primary endpoint of a parallel group dose–response clinical trial. Such models can be difficult to fit using maximum likelihood if the data give little information about the maximum possible response. Consequently, we consider alternative models that can be derived as limiting cases and that can usually be fitted. Furthermore, we propose two model selection procedures for choosing between the different models, and we compare them with two model selection procedures that have previously been used. In a simulation study we find that the model selection procedure that performs best depends on the underlying true situation. One of the new model selection procedures may be regarded as the most robust of the procedures.
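As an illustrative sketch of the fitting problem described above (not the authors' code; the doses, parameter values, and bounds are invented for illustration), a three-parameter Emax model can be fitted by nonlinear least squares. When the high-dose plateau is poorly identified, the fit drifts toward the ED50 bound, which is the signal to fall back to a limiting model:

```python
import numpy as np
from scipy.optimize import curve_fit

def emax(dose, e0, emax_, ed50):
    # three-parameter Emax model: E0 + Emax * dose / (ED50 + dose)
    return e0 + emax_ * dose / (ed50 + dose)

rng = np.random.default_rng(0)
doses = np.repeat([0.0, 5.0, 25.0, 100.0], 20)   # four parallel dose groups
y = emax(doses, 2.0, 10.0, 15.0) + rng.normal(0, 1.0, doses.size)

# Bounds keep ED50 positive; a fit driven to the upper ED50 bound is the
# signal that a limiting (e.g. linear-in-dose) model may be the better choice.
popt, _ = curve_fit(emax, doses, y, p0=[0.0, 5.0, 10.0],
                    bounds=([-np.inf, 0.0, 1e-6], [np.inf, np.inf, 1e4]))
print(np.round(popt, 2))   # recovers values near the true (2, 10, 15)
```

Here the top of the curve is well covered by the design, so the fit is stable; with a flatter observed dose range, the same call would push ED50 toward its bound.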

2.
This article proposes an extension of the continual reassessment method to determine the maximum tolerated dose (MTD) in the presence of patient heterogeneity in phase I clinical trials. To start with a simple case, we consider the covariate to be a binary variable representing two groups of patients. A logistic regression model is used to establish the dose–response relationship, and the design is based on the Bayesian framework. Simulation studies for six plausible dose–response scenarios show that the proposed design is likely to determine the MTD more accurately than a design that does not take the covariate into consideration.
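A hypothetical sketch of the key idea (not the authors' exact design, and with invented parameter values in place of the Bayesian updating that the actual method performs as patients accrue): a logistic dose–toxicity model with a binary covariate gives each patient group its own estimated MTD, taken as the dose whose DLT probability is closest to the target:

```python
import numpy as np

doses = np.array([1.0, 2.0, 3.0, 5.0, 8.0])
target = 0.25                                    # target DLT probability

def p_dlt(dose, group, a=-4.0, b=0.9, c=1.2):
    # group in {0, 1}; c shifts the toxicity curve for the more sensitive group
    return 1.0 / (1.0 + np.exp(-(a + b * dose + c * group)))

mtds = {}
for g in (0, 1):
    probs = p_dlt(doses, g)
    mtds[g] = float(doses[np.argmin(np.abs(probs - target))])
    print(f"group {g}: DLT probs {np.round(probs, 2)}, MTD = {mtds[g]}")
```

With these assumed coefficients the sensitive group's MTD lands one level below the other group's, which is exactly the behaviour a covariate-free design cannot reproduce.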

3.
With the increased costs of drug development, the need for efficient studies has become critical. A key decision point on the development pathway is the proof of concept study. These studies must provide clear information to project teams to enable decisions about further developing a drug candidate, and also evidence that any effect is large enough to warrant that development given the current market environment. Our case study outlines one such proof of concept trial, in which a new candidate therapy for neuropathic pain was investigated to assess dose-response and to evaluate the magnitude of its effect compared with placebo. A Normal Dynamic Linear Model was used to estimate the dose-response, enforcing some smoothness while allowing for the fact that the dose-response may be non-monotonic. A pragmatic, parallel group study design was used, with interim analyses scheduled to allow the sponsor to drop ineffective doses or to stop the study. Simulations were performed to assess the operating characteristics of the study design. The study results are presented. Significant cost savings were made when it transpired that the new candidate drug did not show superior efficacy compared with placebo and the study was stopped.

4.
A sensitivity analysis of air-scattered neutron dose at several distances from a particle accelerator, with respect to shield thickness and neutron yield distributions, has been carried out. We illustrate the successful use of Response Surface Methodology in studying the behaviour of the sensitivity coefficients for main effects and interaction terms. A comparison of the full six-factor design and the orthogonal Central Composite Design has been made. The overhead shield is found to be the most sensitive parameter, followed by the high-energy part of the neutron energy distribution.

5.
The main purpose of dose‐escalation trials is to identify the dose(s) that is/are safe and efficacious for further investigation in later studies. In this paper, we introduce dose‐escalation designs that incorporate both dose‐limiting toxicities (DLTs) and indicative responses of efficacy into the procedure. A flexible nonparametric model is used for modelling the continuous efficacy responses, while a logistic model is used for the binary DLTs. Escalation decisions are based on the combination of the probabilities of DLTs and expected efficacy through a gain function. On the basis of this setup, we then introduce 2 types of Bayesian adaptive dose‐escalation strategies. The first type of procedure, called “single objective,” aims to identify and recommend a single dose: either the maximum tolerated dose, the highest dose that is considered safe, or the optimal dose, a safe dose that gives optimum benefit risk. The second type, called “dual objective,” aims to jointly estimate both the maximum tolerated dose and the optimal dose accurately. The recommended doses obtained under these dose‐escalation procedures provide information about the safety and efficacy profile of the novel drug to facilitate later studies. We evaluate the different strategies via simulations based on an example constructed from a real trial on patients with type 2 diabetes, and the use of stopping rules is assessed. We find that the nonparametric model estimates the efficacy responses well for different underlying true shapes. The dual‐objective designs give better results in terms of identifying the 2 real target doses compared to the single‐objective designs.
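A minimal sketch of a gain-function recommendation rule under assumed per-dose estimates (the specific gain, cap, and numbers below are illustrative, not the paper's model): doses whose DLT probability exceeds a safety cap are excluded, and the recommended "optimal dose" maximizes efficacy penalized by toxicity:

```python
import numpy as np

doses = np.array([10, 20, 40, 80, 160])
p_dlt = np.array([0.02, 0.05, 0.12, 0.22, 0.45])    # assumed toxicity estimates
efficacy = np.array([0.5, 1.4, 2.6, 3.1, 3.3])      # assumed expected efficacy

gain = efficacy * (1.0 - p_dlt)        # one simple benefit-risk gain function
gain[p_dlt > 0.30] = -np.inf           # exclude doses over the safety cap
optimal = int(doses[np.argmax(gain)])
print(optimal)
```

The top dose has the highest raw efficacy but is ruled out by the safety cap, so the rule settles on the next level down: the essence of trading efficacy against toxicity through a gain function.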

6.
Phase I clinical trials are conducted in order to find the maximum tolerated dose (MTD) of a given drug from a finite set of doses. For ethical reasons, these studies are usually sequential, treating patients or groups of patients with the optimal dose according to the current knowledge, with the hope that this will lead to using the true MTD from some time on. However, the first result proved here is that this goal is infeasible, and that such designs, and, more generally, designs that concentrate on one dose from some time on, cannot provide consistent estimators for the MTD unless very strong parametric assumptions hold. Allowing some non-MTD treatment, we construct a randomized design that assigns the MTD with probability that approaches one as the size of the experiment goes to infinity and estimates the MTD consistently. We compare the suggested design with several methods by simulations, studying their performances in terms of correct estimation of the MTD and the proportion of individuals treated with the MTD.

7.
Drug-combination studies have become increasingly popular in oncology. One of the critical concerns in phase I drug-combination trials is the uncertainty in toxicity evaluation. Most of the existing phase I designs aim to identify the maximum tolerated dose (MTD) by reducing the two-dimensional searching space to one dimension via a prespecified model or splitting the two-dimensional space into multiple one-dimensional subspaces based on the partially known toxicity order. Nevertheless, both strategies often lead to complicated trials which may either be sensitive to model assumptions or induce longer trial durations due to subtrial split. We develop two versions of dynamic ordering design (DOD) for dose finding in drug-combination trials, where the dose-finding problem is cast in the Bayesian model selection framework. The toxicity order of dose combinations is continuously updated via a two-dimensional pool-adjacent-violators algorithm, and then the dose assignment for each incoming cohort is selected based on the optimal model under the dynamic toxicity order. We conduct extensive simulation studies to evaluate the performance of DOD in comparison with four other commonly used designs under various scenarios. Simulation results show that the two versions of DOD possess competitive performances in terms of correct MTD selection as well as safety, and we apply both versions of DOD to two real oncology trials for illustration.
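The design updates toxicity orderings with a two-dimensional pool-adjacent-violators algorithm (PAVA). As a self-contained sketch of the one-dimensional building block (the 2-D version and the surrounding Bayesian model selection are not reproduced here), PAVA turns raw per-dose DLT rates into a monotone non-decreasing sequence:

```python
def pava(y, w):
    """Weighted isotonic (non-decreasing) regression by pooling adjacent violators."""
    blocks = [[yi, wi, 1] for yi, wi in zip(y, w)]   # each block: [mean, weight, n_points]
    i = 0
    while i < len(blocks) - 1:
        if blocks[i][0] > blocks[i + 1][0]:          # violation: pool the two blocks
            m1, w1, n1 = blocks[i]
            m2, w2, n2 = blocks[i + 1]
            blocks[i:i + 2] = [[(m1 * w1 + m2 * w2) / (w1 + w2), w1 + w2, n1 + n2]]
            i = max(i - 1, 0)                        # pooling may create a new violation behind us
        else:
            i += 1
    fitted = []
    for m, _, npts in blocks:
        fitted.extend([m] * npts)                    # expand pooled means back to the doses
    return fitted

# raw per-dose DLT rates with a violation between doses 2 and 3
rates = [0.10, 0.30, 0.20, 0.40, 0.50]
n_pat = [10, 10, 10, 10, 10]
fitted = pava(rates, n_pat)
print(fitted)   # the 0.30/0.20 violators are pooled to 0.25
```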

8.
This paper considers the maximin approach for designing clinical studies. A maximin efficient design maximizes the smallest efficiency when compared with a standard design, as the parameters vary in a specified subset of the parameter space. To specify this subset of parameters in a real situation, a four‐step procedure using elicitation based on expert opinions is proposed. Further, we describe why and how we extend the initially chosen subset of parameters to a much larger set in our procedure. By this procedure, the maximin approach becomes feasible for dose‐finding studies. Maximin efficient designs have been shown to be numerically difficult to construct. However, a new algorithm, the H‐algorithm, considerably simplifies the construction of these designs. We exemplify the maximin efficient approach by considering a sigmoid Emax model describing a dose–response relationship and compare inferential precision with that obtained when using a uniform design. The design obtained is shown to be at least 15% more efficient than the uniform design. © 2014 The Authors. Pharmaceutical Statistics Published by John Wiley & Sons Ltd.

9.
In this paper, the application of the intersection–union test method in fixed‐dose combination drug studies is discussed. An approximate sample size formula for the problem of testing the efficacy of a combination drug using intersection–union tests is proposed. The sample sizes obtained from the formula are found to be reasonably accurate in terms of attaining the target power 1 − β for a specified β. Copyright © 2003 John Wiley & Sons, Ltd.
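A rough, assumption-laden sketch of the sizing idea (NOT the paper's formula): in an intersection–union test the combination must beat both components, so, if one treats the two comparisons as independent normal tests with known variance, each needs marginal power sqrt(1 − β) for joint power 1 − β, and the usual two-sample formula then gives a per-arm size:

```python
import numpy as np
from scipy.stats import norm

def n_per_arm(delta, sigma, alpha=0.025, beta=0.20):
    # joint power (1 - beta) for two independent tests needs sqrt(1 - beta) each
    per_test_power = (1.0 - beta) ** 0.5
    z_a, z_b = norm.ppf(1.0 - alpha), norm.ppf(per_test_power)
    return int(np.ceil(2.0 * ((z_a + z_b) * sigma / delta) ** 2))

n = n_per_arm(delta=0.5, sigma=1.0)
print(n)
```

The independence assumption is optimistic (the comparisons share the combination arm), which is why the paper's formula and its accuracy assessment matter.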

10.
For a dose finding study in cancer, the most successful dose (MSD), among a group of available doses, is the dose at which the overall success rate is highest. This rate is the product of the rate of seeing non-toxicities and the rate of tumor response. A successful dose finding trial in this context is one where we manage to identify the MSD in an efficient manner. In practice we may also need to consider algorithms for identifying the MSD that can incorporate certain restrictions, the most common being to keep the estimated toxicity rate alone below some maximum rate. In this case the MSD may correspond to a different level than the unconstrained MSD and, in providing a final recommendation, it is important to underline that it is subject to the given constraint. We work with the approach described in O'Quigley et al. [Biometrics 2001; 57(4):1018-1029]. The focus of that work was dose finding in HIV, where information on both toxicity and efficacy was almost immediately available. Recent cancer studies are beginning to fall under this same heading where, as before, toxicity can be quickly evaluated and, in addition, we can rely on biological markers or other measures of tumor response. Mindful of the particular context of cancer, our purpose here is to consider the methodology developed by O'Quigley et al. and its practical implementation. We also carry out a study on the doubly under-parameterized model, developed by O'Quigley et al. but not …
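The MSD definition above reduces to a small computation once per-dose rates are in hand. A sketch under assumed (not fitted) rates, also showing how a toxicity cap can shift the recommendation to a different level than the unconstrained MSD:

```python
import numpy as np

p_tox = np.array([0.05, 0.10, 0.20, 0.35, 0.55])       # assumed toxicity rates
p_resp = np.array([0.10, 0.25, 0.45, 0.60, 0.70])      # assumed response rates
success = (1 - p_tox) * p_resp                         # overall success per level

msd_unconstrained = int(np.argmax(success))
allowed = p_tox <= 0.30                                # toxicity constraint
msd_constrained = int(np.argmax(np.where(allowed, success, -1.0)))
print(np.round(success, 3), msd_unconstrained, msd_constrained)
```

With these numbers the unconstrained MSD is level 3, but the 30% toxicity cap moves the constrained recommendation down to level 2, illustrating why a constrained recommendation must be reported as such.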

11.
We consider fitting the so‐called Emax model to continuous response data from clinical trials designed to investigate the dose–response relationship for an experimental compound. When there is insufficient information in the data to estimate all of the parameters because of the high dose asymptote being ill defined, maximum likelihood estimation fails to converge. We explore the use of either bootstrap resampling or the profile likelihood to make inferences about effects and doses required to give a particular effect, using limits on the parameter values to obtain the value of the maximum likelihood when the high dose asymptote is ill defined. The results obtained show these approaches to be comparable with or better than some others that have been used when maximum likelihood estimation fails to converge and that the profile likelihood method outperforms the method of bootstrap resampling used. Copyright © 2014 John Wiley & Sons, Ltd.

12.
Phase I studies of a cytotoxic agent often aim to identify the dose that provides an investigator specified target dose-limiting toxicity (DLT) probability. In practice, an initial cohort receives a dose with a putative low DLT probability, and subsequent dosing follows by consecutively deciding whether to retain the current dose, escalate to the adjacent higher dose, or de-escalate to the adjacent lower dose. This article proposes a Phase I design derived using a Bayesian decision-theoretic approach to this sequential decision-making process. The design consecutively chooses the action that minimizes posterior expected loss where the loss reflects the distance on the log-odds scale between the target and the DLT probability of the dose that would be given to the next cohort under the corresponding action. A logistic model is assumed for the log odds of a DLT at the current dose with a weakly informative t-distribution prior centered at the target. The key design parameters are the pre-specified odds ratios for the DLT probabilities at the adjacent higher and lower doses. Dosing rules may be pre-tabulated, as these only depend on the outcomes at the current dose, which greatly facilitates implementation. The recommended default version of the proposed design improves dose selection relative to many established designs across a variety of scenarios.
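A simplified grid-based sketch of the decision rule described above (the prior scale, the adjacent-dose odds ratio, and the cohort outcomes are all assumed for illustration, and the grid posterior stands in for whatever computation the actual design uses):

```python
import numpy as np
from scipy.stats import t as t_dist

target = 0.25
lo_t = np.log(target / (1 - target))            # target on the log-odds scale
grid = np.linspace(lo_t - 6, lo_t + 6, 2001)    # log-odds of DLT at current dose
prior = t_dist.pdf((grid - lo_t) / 1.5, df=3)   # weakly informative t prior at target

# assumed outcomes at the current dose: 1 DLT among 6 patients
p = 1.0 / (1.0 + np.exp(-grid))
post = prior * p * (1 - p) ** 5
post /= post.sum()

# prespecified odds ratio between adjacent doses shifts the log-odds by +/- log(2)
log_or = np.log(2.0)
actions = {"de-escalate": -log_or, "stay": 0.0, "escalate": +log_or}

# loss = |log-odds of the dose the next cohort would get - target log-odds|
exp_loss = {a: float(np.sum(post * np.abs(grid + s - lo_t)))
            for a, s in actions.items()}
decision = min(exp_loss, key=exp_loss.get)
print(decision)
```

With 1 DLT in 6 (below the 25% target), de-escalation carries the largest posterior expected loss, so the rule stays or escalates; because the rule depends only on outcomes at the current dose, such decisions can be pre-tabulated.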

13.
There has recently been increasing demand for better designs to conduct first‐into‐man dose‐escalation studies more efficiently, more accurately and more quickly. The authors look into the Bayesian decision‐theoretic approach and use simulation as a tool to investigate the impact of compromises with conventional practice that might make the procedures more acceptable for implementation. Copyright © 2005 John Wiley & Sons, Ltd.

14.
To estimate the effective dose level EDα in the common binary response model, several parametric and nonparametric estimators have been proposed in the literature. In the present article, we focus on nonparametric methods and present a detailed numerical comparison of four different approaches to estimate the EDα nonparametrically. The methods are briefly reviewed and their finite sample properties are studied by means of a detailed simulation study. Moreover, a data example is presented to illustrate the different concepts.
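One concrete nonparametric route to an EDα estimate (a sketch of the general idea, not necessarily one of the four approaches the article compares): smooth the binary responses with a Nadaraya–Watson kernel estimate of the dose–response curve, then read off the smallest dose whose smoothed response reaches α:

```python
import numpy as np

def nw_curve(doses, y, grid, h):
    """Nadaraya-Watson estimate of P(response | dose) on a grid."""
    k = np.exp(-0.5 * ((grid[:, None] - doses[None, :]) / h) ** 2)
    return (k @ y) / k.sum(axis=1)

rng = np.random.default_rng(1)
doses = np.repeat(np.linspace(0, 10, 11), 30)     # 11 dose levels, 30 subjects each
p_true = 1.0 / (1.0 + np.exp(-(doses - 5.0)))     # simulated truth: ED50 = 5
y = rng.binomial(1, p_true)

grid = np.linspace(0, 10, 501)
phat = nw_curve(doses, y, grid, h=1.0)
ed50 = float(grid[np.argmax(phat >= 0.5)])        # smallest grid dose reaching 0.5
print(round(ed50, 2))
```

The bandwidth h governs the usual bias–variance trade-off; inverting a monotonized (e.g. isotonic) version of the smoothed curve is a common refinement.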

15.
Clinical trials are usually designed with the implicit assumption that data analysis will occur only after the trial is completed. It is challenging for the sponsor to evaluate drug efficacy in the middle of the study without breaking the randomization codes. In this article, the randomized response model and the mixture model are introduced to analyze the data while masking the randomization codes of the crossover design. Given the probability of treatment sequence, the test based on the mixture model provides higher power than the test based on the randomized response model, which proves inadequate in the example. The paired t-test has higher power than both models if the randomization codes are broken. The sponsor may stop the trial early to claim effectiveness of the study drug if the mixture model yields a positive result.

16.
In this paper, the two-sample scale problem is addressed within the rank framework, which does not require specifying the underlying continuous distribution. However, since the power of a rank test depends on the underlying distribution, it is very useful for the researcher to have some information about it in order to choose the most suitable test. A two-stage adaptive design is used with adaptive tests, where the data from the first stage are used to compute a selector statistic that selects the test statistic for stage 2. More precisely, an adaptive scale test due to Hall and Padmanabhan and its components are considered in one-stage and in several adaptive and non-adaptive two-stage procedures. A simulation study shows that the two-stage test with the adaptive choice in the second stage and with Liptak combination, when it is not more powerful than the corresponding one-stage test, nevertheless shows quite similar power behavior. The test procedures are illustrated using two ecological applications and a clinical trial.
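The Liptak combination mentioned above is the weighted inverse-normal merge of the stage-wise p-values. A sketch with invented p-values and square-root-of-sample-size weights (a common but not the only weighting choice):

```python
import numpy as np
from scipy.stats import norm

def liptak(p1, p2, n1, n2):
    """Weighted inverse-normal (Liptak) combination of two stage-wise p-values."""
    w1, w2 = np.sqrt(n1), np.sqrt(n2)
    z = (w1 * norm.isf(p1) + w2 * norm.isf(p2)) / np.sqrt(w1 ** 2 + w2 ** 2)
    return float(norm.sf(z))

p_comb = liptak(0.04, 0.03, n1=25, n2=50)
print(round(p_comb, 4))   # combined p is smaller than either stage-wise p
```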

17.
Understanding the dose–response relationship is a key objective in Phase II clinical development. Yet, designing a dose‐ranging trial is a challenging task, as it requires identifying the therapeutic window and the shape of the dose–response curve for a new drug on the basis of a limited number of doses. Adaptive designs have been proposed as a solution to improve both the quality and the efficiency of Phase II trials, as they give the possibility to select the dose to be tested as the trial goes. In this article, we present a ‘shape‐based’ two‐stage adaptive trial design where the doses to be tested in the second stage are determined based on the correlation observed between the efficacy of the doses tested in the first stage and a set of pre‐specified candidate dose–response profiles. At the end of the trial, the data are analyzed using the generalized MCP‐Mod approach in order to account for model uncertainty. A simulation study shows that this approach gives more precise estimates of a desired target dose (e.g. ED70) than a single‐stage (fixed‐dose) design and performs as well as a two‐stage D‐optimal design. We present the results of an adaptive model‐based dose‐ranging trial in multiple sclerosis that motivated this research and was conducted using the presented methodology. Copyright © 2015 John Wiley & Sons, Ltd.
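A sketch of the 'shape-based' stage-1 step under assumed numbers (the candidate profiles, doses, and observed efficacies are invented; the real design goes on to pick stage-2 doses from the winning profile and analyzes the final data with generalized MCP-Mod):

```python
import numpy as np

doses = np.array([0.0, 10.0, 30.0, 100.0])
observed = np.array([0.1, 0.9, 1.6, 1.7])        # assumed stage-1 efficacy estimates

candidates = {                                    # pre-specified candidate profiles
    "linear": doses / doses.max(),
    "emax": doses / (doses + 20.0),
    "step": (doses >= 30).astype(float),
}
corr = {name: float(np.corrcoef(observed, shape)[0, 1])
        for name, shape in candidates.items()}
best = max(corr, key=corr.get)                    # best-matching profile guides stage 2
print(best, round(corr[best], 3))
```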

18.
CVX‐based numerical algorithms are widely and freely available for solving convex optimization problems but their applications to solve optimal design problems are limited. Using the CVX programs in MATLAB, we demonstrate their utility and flexibility over traditional algorithms in statistics for finding different types of optimal approximate designs under a convex criterion for nonlinear models. They are generally fast and easy to implement for any model and any convex optimality criterion. We derive theoretical properties of the algorithms and use them to generate new A‐, c‐, D‐ and E‐optimal designs for various nonlinear models, including multi‐stage and multi‐objective optimal designs. We report properties of the optimal designs and provide sample CVX program codes for some of our examples that users can amend to find tailored optimal designs for their problems. The Canadian Journal of Statistics 47: 374–391; 2019 © 2019 Statistical Society of Canada
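The paper works with CVX in MATLAB; as a dependency-free sketch of the same kind of optimization problem, here is the classical multiplicative algorithm (a traditional alternative to CVX, not the paper's code) for a D-optimal approximate design for quadratic regression f(x) = (1, x, x²) on [−1, 1], whose known optimum puts weight 1/3 on each of {−1, 0, 1}:

```python
import numpy as np

xs = np.linspace(-1.0, 1.0, 41)                        # candidate design points
F = np.column_stack([np.ones_like(xs), xs, xs ** 2])   # rows f(x) = (1, x, x^2)
w = np.full(xs.size, 1.0 / xs.size)                    # start from the uniform design

for _ in range(2000):
    M = F.T @ (w[:, None] * F)                         # information matrix of design w
    d = np.einsum("ij,jk,ik->i", F, np.linalg.inv(M), F)   # variance function d(x)
    w *= d / F.shape[1]                                # multiplicative update; sum(w) stays 1

support = xs[w > 1e-3]
print(np.round(support, 2), np.round(w[w > 1e-3], 3))
```

The iteration is monotone in log det M and here concentrates essentially all the weight on the three-point optimum; a CVX/cvxpy formulation would instead maximize log det M directly over the weight simplex.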

19.
Assessing dose response from flexible‐dose clinical trials is problematic. The true dose effect may be obscured, and even reversed, in observed data because dose is related to both previous and subsequent outcomes. To remove this selection bias, we propose marginal structural models (MSMs) with inverse probability of treatment weighting (IPTW). Potential clinical outcomes are compared across dose groups using an MSM based on a weighted pooled repeated measures analysis (generalized estimating equations with robust estimates of standard errors), with dose effect represented by current dose and recent dose history, and with weights estimated from the data (via logistic regression) as products of (i) the inverse probability of receiving the dose assignments actually received and (ii) the inverse probability of remaining on treatment by this time. In simulations, this method led to almost unbiased estimates of the true dose effect under various scenarios. Results were compared with those obtained by unweighted analyses and by weighted analyses under various model specifications. The simulations showed that the IPTW MSM methodology is highly sensitive to model misspecification even when the weights are known. Practitioners applying MSMs should be cautious about the challenges of implementing them with real clinical data. Clinical trial data are used to illustrate the methodology. Copyright © 2012 John Wiley & Sons, Ltd.
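A toy one-time-point sketch of the IPTW idea (not the full MSM with dose history, GEE, or logistic-regression weights; the confounder here is a single binary stratum with invented effect sizes): weighting each subject by the inverse of the estimated propensity removes the bias of the naive treated-vs-untreated contrast:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 20000
x = rng.binomial(1, 0.5, n)                        # severity (the confounder)
a = rng.binomial(1, np.where(x == 1, 0.8, 0.2))    # sicker patients get dosed more often
y = 1.0 * a - 2.0 * x + rng.normal(0, 1, n)        # true treatment effect = 1.0

naive = y[a == 1].mean() - y[a == 0].mean()        # biased by confounding

ps = np.array([a[x == 0].mean(), a[x == 1].mean()])[x]   # propensity within stratum
wts = a / ps + (1 - a) / (1 - ps)                  # inverse probability of treatment weights
iptw = (np.sum(wts * a * y) / np.sum(wts * a)
        - np.sum(wts * (1 - a) * y) / np.sum(wts * (1 - a)))
print(round(naive, 2), round(iptw, 2))             # naive is badly biased; IPTW is near 1.0
```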

20.
Since the implementation of the International Conference on Harmonization (ICH) E14 guideline in 2005, regulators have required a “thorough QTc” (TQT) study for evaluating the effects of investigational drugs on delayed cardiac repolarization as manifested by a prolonged QTc interval. However, TQT studies have increasingly been viewed unfavorably because of their low cost effectiveness. Several researchers have noted that a robust drug concentration‐QTc (conc‐QTc) modeling assessment in early phase development should, in most cases, obviate the need for a subsequent TQT study. In December 2015, ICH released an “E14 Q&As (R3)” document supporting the use of conc‐QTc modeling for regulatory decisions. In this article, we propose a simple improvement of two popular conc‐QTc assessment methods for typical first‐in‐human crossover‐like single ascending dose clinical pharmacology trials. The improvement is achieved, in part, by leveraging routinely encountered (and expected) intrasubject correlation patterns encountered in such trials. A real example involving a single ascending dose and corresponding TQT trial, along with results from a simulation study, illustrate the strong performance of the proposed method. The improved conc‐QTc assessment will further enable highly reliable go/no‐go decisions in early phase clinical development and deliver results that support subsequent TQT study waivers by regulators.
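A stripped-down sketch of the basic conc-QTc decision rule (ordinary least squares on simulated data, ignoring the intrasubject correlation that the proposed improvement exploits; units and effect sizes are assumed): regress baseline-adjusted QTc change on concentration and check whether the one-sided 95% upper bound of the predicted effect at the highest observed concentration stays below the 10 ms regulatory threshold:

```python
import numpy as np
from scipy.stats import t as t_dist

rng = np.random.default_rng(3)
conc = rng.uniform(0, 1000, 120)                           # plasma concentration (assumed units)
dqtc = 1.0 + 0.004 * conc + rng.normal(0, 4, conc.size)    # baseline-adjusted QTc change, ms

X = np.column_stack([np.ones_like(conc), conc])
beta, *_ = np.linalg.lstsq(X, dqtc, rcond=None)
resid = dqtc - X @ beta
s2 = resid @ resid / (conc.size - 2)                       # residual variance
cov = s2 * np.linalg.inv(X.T @ X)                          # covariance of the estimates

x0 = np.array([1.0, conc.max()])                           # highest observed concentration
pred = float(x0 @ beta)
se = float(np.sqrt(x0 @ cov @ x0))
upper = pred + t_dist.ppf(0.95, conc.size - 2) * se        # one-sided 95% upper bound
print(round(pred, 1), round(upper, 1), upper < 10.0)       # below 10 ms: supports a TQT waiver
```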


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号