Similar Documents (20 results)
1.
This article reviews currently used approaches for establishing dose proportionality in Phase I dose escalation studies. A review of relevant literature between 2002 and 2006 found that the power model was the preferred choice for assessing dose proportionality in about one-third of the articles. This article promotes the use of the power model and a conceptually appealing extension: a criterion based on comparing the 90% confidence interval for the ratio of predicted mean values from the extremes of the dose range (R_dnm) to a pre-defined equivalence criterion (θ_L, θ_U). The bioequivalence default values θ_L = 0.8 and θ_U = 1.25 seem reasonable for dose levels only a doubling apart, but are impractically strict when applied over the complete dose range. Power calculations are used to show that this prescribed criterion lacks power to conclude dose proportionality in typical Phase I dose-escalation studies. A more lenient criterion with values θ_L = 0.5 and θ_U = 2 is proposed for exploratory dose-proportionality assessments across the complete dose range.
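The power-model criterion described above is easy to sketch: fit ln(AUC) = a + b·ln(dose) by least squares, convert the 90% confidence interval for the slope b into an interval for R_dnm = r^(b−1), where r is the ratio of the highest to the lowest dose, and compare it with (θ_L, θ_U). The following is a minimal illustration, not the article's implementation; the function name and defaults are assumptions.

```python
import numpy as np
from scipy import stats

def dose_proportionality(dose, auc, theta=(0.5, 2.0), level=0.90):
    """Power-model assessment of dose proportionality.

    Fits ln(AUC) = a + b*ln(dose) by OLS, then converts the CI for the
    slope b into a CI for R_dnm = r**(b - 1), the ratio of dose-normalized
    predicted means at the extremes of the dose range (r = Dmax/Dmin).
    """
    dose, auc = np.asarray(dose, float), np.asarray(auc, float)
    x, y = np.log(dose), np.log(auc)
    n = len(x)
    b, a = np.polyfit(x, y, 1)                      # slope, intercept
    resid = y - (a + b * x)
    se_b = np.sqrt(resid @ resid / (n - 2) / ((x - x.mean()) ** 2).sum())
    tcrit = stats.t.ppf(0.5 + level / 2, n - 2)
    r = dose.max() / dose.min()
    ci = (r ** (b - tcrit * se_b - 1), r ** (b + tcrit * se_b - 1))
    # dose proportionality is concluded if the CI sits inside (theta_L, theta_U)
    return b, ci, (theta[0] < ci[0] and ci[1] < theta[1])
```

With perfectly proportional data (AUC a constant multiple of dose) the slope is 1, R_dnm is 1, and proportionality is concluded for any sensible (θ_L, θ_U).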

2.
Model-based phase I dose-finding designs rely on a single model throughout the study for estimating the maximum tolerated dose (MTD). Thus, one major concern is the choice of the most suitable model, because the dose allocation process and the MTD estimate depend on whether the model is reliable and fits the toxicity data well. The aim of our work was to propose a method that removes the need for a model choice prior to trial onset and instead allows the choice to be made sequentially at each patient's inclusion. In this paper, we describe a model-checking approach based on the posterior predictive check and a model-comparison approach based on the deviance information criterion, in order to identify a more reliable or better-fitting model during the course of a trial and to support clinical decision making. Further, we present two model-switching designs for a phase I cancer trial based on these approaches, and compare designs with and without model switching through a simulation study. The results show that the proposed designs decrease certain risks, such as poor dose allocation and failure to find the MTD, which can occur if the model is misspecified.

3.
Dose proportionality/linearity is a desirable property in pharmacokinetic studies. Various methods have been proposed for its assessment. When dose proportionality is not established, it is of interest to evaluate the degree of departure from dose linearity. In this paper, we propose a measure of departure from dose linearity and derive an asymptotic test under a repeated measures incomplete block design using a slope approach. Simulation studies show that the proposed method has a satisfactory small sample performance in terms of size and power.

4.
Drug-combination studies have become increasingly popular in oncology. One of the critical concerns in phase I drug-combination trials is the uncertainty in toxicity evaluation. Most existing phase I designs aim to identify the maximum tolerated dose (MTD) either by reducing the two-dimensional search space to one dimension via a prespecified model or by splitting the two-dimensional space into multiple one-dimensional subspaces based on the partially known toxicity order. Both strategies, however, often lead to complicated trials that are either sensitive to model assumptions or prolonged by the subtrial split. We develop two versions of the dynamic ordering design (DOD) for dose finding in drug-combination trials, where the dose-finding problem is cast in the Bayesian model selection framework. The toxicity order of dose combinations is continuously updated via a two-dimensional pool-adjacent-violators algorithm, and the dose assignment for each incoming cohort is then selected under the optimal model given the dynamic toxicity order. Extensive simulation studies comparing DOD with four other commonly used designs under various scenarios show that both versions of DOD perform competitively in terms of correct MTD selection and safety, and we apply both versions of DOD to two real oncology trials for illustration.
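The two-dimensional pool-adjacent-violators step used above builds on the classical one-dimensional PAVA for isotonic regression, sketched below; the two-dimensional version in the paper extends this to the partial order of the dose-combination grid. This is a generic textbook implementation, not the authors' code.

```python
def pava(y, w=None):
    """Pool-adjacent-violators algorithm: returns the nondecreasing
    sequence f minimizing sum_i w_i * (y_i - f_i)**2 (isotonic regression).
    """
    if w is None:
        w = [1.0] * len(y)
    blocks = []  # each block: [fitted value, total weight, run length]
    for yi, wi in zip(y, w):
        blocks.append([float(yi), float(wi), 1])
        # pool adjacent blocks while the monotonicity constraint is violated
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            v2, w2, c2 = blocks.pop()
            v1, w1, c1 = blocks.pop()
            blocks.append([(w1 * v1 + w2 * v2) / (w1 + w2), w1 + w2, c1 + c2])
    fit = []
    for v, _, c in blocks:
        fit += [v] * c
    return fit
```

For example, observed toxicity rates [0.1, 0.3, 0.2, 0.4] violate monotonicity at the middle pair, which PAVA pools into a common value of 0.25.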

5.
Crossover designs are used for a variety of different applications. While these designs have a number of attractive features, they also induce a number of special problems and concerns. One of these is the possible presence of carryover effects. Even with the use of washout periods, which are for many applications widely accepted as an indispensable component, the effect of a treatment from a previous period may not be completely eliminated. A model that has recently received renewed attention in the literature is the model in which first-order carryover effects are assumed to be proportional to direct treatment effects. Under this model, assuming that the constant of proportionality is known, we identify optimal and efficient designs for the direct effects for different values of the constant of proportionality. We also consider the implication of these results for the case that the constant of proportionality is not known.

6.
In this paper, a new test method for analyzing unreplicated factorial designs is proposed and illustrated with examples. An extensive simulation with the standard 16-run designs was carried out to compare the proposed method with three existing methods. Besides the usual power criterion, three further versions of power, Power I–III, were also used to evaluate the performance of the compared methods. The simulation study shows that the proposed method is better able than the three competing methods to identify all active effects without misidentifying any inactive effects as active.

7.
We introduce a new method for generating optimal split-plot designs. These designs are optimal in the sense that they are efficient for estimating the fixed effects of the statistical model that is appropriate given the split-plot design structure. One advantage of the method is that it does not require the prior specification of a candidate set. This makes the production of split-plot designs computationally feasible in situations where the candidate set is too large to be tractable. The method allows for flexible choice of the sample size and supports inclusion of both continuous and categorical factors. The model can be any linear regression model and may include arbitrary polynomial terms in the continuous factors and interaction terms of any order. We demonstrate the usefulness of this flexibility with a 100-run polypropylene experiment involving 11 factors where we found a design that is substantially more efficient than designs that are produced by using other approaches.

8.
In choice experiments, the decision-making process can be more complex than that assumed by the Multinomial Logit Model (MNL). In these scenarios, models such as the Nested Multinomial Logit Model (NMNL) are often employed to capture more complex decision-making. Understanding the decision-making process is important in fields such as marketing, and precise estimation of the models is crucial to that understanding; this in turn requires optimal experimental designs, for which the information matrix is key. Previous research developed the expression for the information matrix of the two-level NMNL model with two nests: an alternatives nest (J alternatives) and a no-choice nest (1 alternative). In this paper, we develop the likelihood function for a two-stage NMNL model with M nests and present the expression for the information matrix for 2 nests with any number of alternatives in them. We also present alternative D-optimal designs for no-choice scenarios with similar relative efficiency but less complex alternatives, which can help obtain more reliable answers, and we give one application of these designs.

9.
We describe a general family of contingent response models. These models have ternary outcomes constructed from two Bernoulli outcomes, where one outcome is only observed if the other outcome is positive. This family is represented in a canonical form which yields general results for its Fisher information. A bivariate extreme value distribution illustrates the model and optimal design results. To provide a motivating context, we call the two binary events that compose the contingent responses toxicity and efficacy. Efficacy or lack thereof is assumed only to be observable in the absence of toxicity, resulting in the ternary response (toxicity, efficacy without toxicity, neither efficacy nor toxicity). The rate of toxicity, and the rate of efficacy conditional on no toxicity, are assumed to increase with dose. While optimal designs for contingent response models are numerically found, limiting optimal designs can be expressed in closed forms. In particular, in the family of four parameter bivariate location-scale models we study, as the marginal probability functions of toxicity and no efficacy diverge, limiting D optimal designs are shown to consist of a mixture of the D optimal designs for each failure (toxicity and no efficacy) univariately. Limiting designs are also obtained for the case of equal scale parameters.

10.
Understanding the dose–response relationship is a key objective in Phase II clinical development. Yet designing a dose-ranging trial is a challenging task, as it requires identifying the therapeutic window and the shape of the dose–response curve for a new drug on the basis of a limited number of doses. Adaptive designs have been proposed as a way to improve both the quality and the efficiency of Phase II trials, as they allow the doses to be tested to be selected as the trial goes. In this article, we present a 'shape-based' two-stage adaptive trial design in which the doses to be tested in the second stage are determined from the correlation observed between the efficacy of the doses tested in the first stage and a set of pre-specified candidate dose–response profiles. At the end of the trial, the data are analyzed using the generalized MCP-Mod approach in order to account for model uncertainty. A simulation study shows that this approach gives more precise estimates of a desired target dose (e.g. ED70) than a single-stage (fixed-dose) design and performs as well as a two-stage D-optimal design. We present the results of an adaptive model-based dose-ranging trial in multiple sclerosis that motivated this research and was conducted using the presented methodology.

11.
Dose-finding designs for phase-I trials aim to determine the recommended phase-II dose (RP2D) for further phase-II drug development. If the trial includes patients for whom several lines of standard therapy failed or if the toxicity of the investigated agent does not necessarily increase with dose, optimal dose-finding designs should limit the frequency of treatment with suboptimal doses. We propose a two-stage design strategy with a run-in intra-patient dose escalation part followed by a more traditional dose-finding design. We conduct simulation studies to compare the 3 + 3 design, the Bayesian Optimal Interval Design (BOIN) and the Continual Reassessment Method (CRM) with and without intra-patient dose escalation. The endpoints are accuracy, sample size, safety, and therapeutic efficiency. For scenarios where the correct RP2D is the highest dose, inclusion of an intra-patient dose escalation stage generally increases accuracy and therapeutic efficiency. However, for scenarios where the correct RP2D is below the highest dose, intra-patient dose escalation designs lead to increased risk of overdosing and an overestimation of RP2D. The magnitude of the change in operating characteristics after including an intra-patient stage is largest for the 3 + 3 design, decreases for the BOIN and is smallest for the CRM.
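For reference, the BOIN design mentioned above reduces escalation decisions to two fixed cut-offs on the observed DLT rate at the current dose. A sketch using the published boundary formulas, with BOIN's conventional default tolerances φ1 = 0.6φ and φ2 = 1.4φ:

```python
import math

def boin_boundaries(target, phi1=None, phi2=None):
    """BOIN escalation/de-escalation boundaries for a target DLT rate.

    Escalate if the observed DLT rate at the current dose is <= lam_e;
    de-escalate if it is >= lam_d; otherwise stay at the current dose.
    """
    phi1 = 0.6 * target if phi1 is None else phi1  # highest rate deemed under-dosing
    phi2 = 1.4 * target if phi2 is None else phi2  # lowest rate deemed over-dosing
    lam_e = (math.log((1 - phi1) / (1 - target))
             / math.log(target * (1 - phi1) / (phi1 * (1 - target))))
    lam_d = (math.log((1 - target) / (1 - phi2))
             / math.log(phi2 * (1 - target) / (target * (1 - phi2))))
    return lam_e, lam_d
```

For a target rate of 0.3 this yields the familiar boundaries λ_e ≈ 0.236 and λ_d ≈ 0.358: with a cohort of 3, escalate on 0 DLTs, stay on 1, de-escalate on 2 or more.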

12.
The purpose of screening experiments is to identify the dominant variables from a set of many potentially active variables which may affect some characteristic y. Edge designs, recently introduced in the literature, are constructed from conference matrices and have been shown to be robust. We introduce a new class of edge designs constructed from skew-symmetric supplementary difference sets. These designs are particularly useful since they can be applied to experiments with an even number of factors and may exist for orders where conference matrices do not. Using this methodology, we construct new edge designs for 6, 14, 22, 26, 38, 42, 46, 58, and 62 factors. Of special interest are the new edge designs for studying 22 and 58 factors, since edge designs with these parameters had not previously been constructed: conference matrices of the corresponding order do not exist. The new edge designs achieve the same model-robustness as traditional edge designs. We also suggest the use of a mirror-edge method as a test for the linearity of the true underlying model, give the details of the methodology, and provide illustrative examples of this new approach. We also show that the new designs have good D-efficiencies when applied to first-order models.

13.
We introduce a new design for dose-finding in the context of toxicity studies for which it is assumed that toxicity increases with dose. The goal is to identify the maximum tolerated dose, which is taken to be the dose associated with a prespecified "target" toxicity rate. The decision to decrease, increase or repeat a dose for the next subject depends on how far an estimated toxicity rate at the current dose is from the target. The size of the window within which the current dose will be repeated is obtained based on the theory of Markov chains as applied to group up-and-down designs. But whereas the treatment allocation rule in Markovian group up-and-down designs is only based on information from the current cohort of subjects, the treatment allocation rule for the proposed design is based on the cumulative information at the current dose. We then consider an extension of this new design for clinical trials in which the subject's outcome is not known immediately. The new design is compared to the continual reassessment method.
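The allocation rule described above can be sketched as follows: compare the cumulative toxicity rate at the current dose with the target, repeat the dose inside a window of half-width delta around the target, and otherwise move one dose level. This is an illustrative skeleton only; in the actual design the window is derived from the Markov-chain theory of group up-and-down designs, not chosen freely.

```python
def next_dose(dose, n_tox, n_treated, target, delta, n_doses):
    """Decide the dose level (0-indexed) for the next subject from the
    cumulative toxicity count at the current dose.

    Repeat if the estimated rate is within `delta` of `target`;
    otherwise move one level, staying within [0, n_doses - 1].
    """
    p_hat = n_tox / n_treated
    if p_hat < target - delta:
        return min(dose + 1, n_doses - 1)  # too little toxicity: escalate
    if p_hat > target + delta:
        return max(dose - 1, 0)            # too much toxicity: de-escalate
    return dose                            # within the window: repeat
```

For a target of 0.3 with delta = 0.1: 0/3 toxicities escalates, 1/3 repeats, 2/3 de-escalates.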

14.
Patient heterogeneity may complicate dose-finding in phase 1 clinical trials if the dose-toxicity curves differ between subgroups. Conducting separate trials within subgroups may lead to infeasibly small sample sizes in subgroups having low prevalence. Alternatively, it is not obvious how to conduct a single trial while accounting for heterogeneity. To address this problem, we consider a generalization of the continual reassessment method on the basis of a hierarchical Bayesian dose-toxicity model that borrows strength between subgroups under the assumption that the subgroups are exchangeable. We evaluate a design using this model that includes subgroup-specific dose selection and safety rules. A simulation study is presented that includes comparison of this method to 3 alternative approaches, on the basis of nonhierarchical models, that make different types of assumptions about within-subgroup dose-toxicity curves. The simulations show that the hierarchical model-based method is recommended in settings where the dose-toxicity curves are exchangeable between subgroups. We present practical guidelines for application and provide computer programs for trial simulation and conduct.

15.
The main purpose of dose-escalation trials is to identify the dose(s) that is/are safe and efficacious for further investigation in later studies. In this paper, we introduce dose-escalation designs that incorporate both dose-limiting toxicities (DLTs) and indicative efficacy responses into the procedure. A flexible nonparametric model is used for the continuous efficacy responses, while a logistic model is used for the binary DLTs. Escalation decisions are based on combining the probability of a DLT with the expected efficacy through a gain function. On the basis of this setup, we then introduce two types of Bayesian adaptive dose-escalation strategies. The first type, called "single objective", aims to identify and recommend a single dose: either the maximum tolerated dose, the highest dose that is considered safe, or the optimal dose, a safe dose that gives optimum benefit risk. The second type, called "dual objective", aims to jointly estimate both the maximum tolerated dose and the optimal dose accurately. The recommended doses obtained under these dose-escalation procedures provide information about the safety and efficacy profile of the novel drug to facilitate later studies. We evaluate different strategies via simulations based on an example constructed from a real trial in patients with type 2 diabetes, and assess the use of stopping rules. We find that the nonparametric model estimates the efficacy responses well for different underlying true shapes, and that the dual-objective designs identify the two target doses better than the single-objective designs.

16.
Consider the problem of estimating a dose with a certain response rate. Many multistage dose-finding designs for this problem were originally developed for oncology studies where the mean dose–response is strictly increasing in dose. In non-oncology phase II dose-finding studies, the dose–response curve often plateaus in the range of interest, and there are several doses with the mean response equal to the target. In this case, it is usually of interest to find the lowest of these doses because higher doses might have higher adverse event rates. It is often desirable to compare the response rate at the estimated target dose with a placebo and/or active control. We investigate which of the several known dose-finding methods developed for oncology phase I trials is the most suitable when the dose–response curve plateaus. Some of the designs tend to spread the allocation among the doses on the plateau. Others, such as the continual reassessment method and the t-statistic design, concentrate allocation at one of the doses, with the t-statistic design selecting the lowest dose on the plateau more frequently.

17.
Phase I studies of a cytotoxic agent often aim to identify the dose that provides an investigator specified target dose-limiting toxicity (DLT) probability. In practice, an initial cohort receives a dose with a putative low DLT probability, and subsequent dosing follows by consecutively deciding whether to retain the current dose, escalate to the adjacent higher dose, or de-escalate to the adjacent lower dose. This article proposes a Phase I design derived using a Bayesian decision-theoretic approach to this sequential decision-making process. The design consecutively chooses the action that minimizes posterior expected loss where the loss reflects the distance on the log-odds scale between the target and the DLT probability of the dose that would be given to the next cohort under the corresponding action. A logistic model is assumed for the log odds of a DLT at the current dose with a weakly informative t-distribution prior centered at the target. The key design parameters are the pre-specified odds ratios for the DLT probabilities at the adjacent higher and lower doses. Dosing rules may be pre-tabulated, as these only depend on the outcomes at the current dose, which greatly facilitates implementation. The recommended default version of the proposed design improves dose selection relative to many established designs across a variety of scenarios.
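The decision step described above can be illustrated with a small sketch. Given posterior draws of the DLT probability at the current dose (here taken as given, rather than derived from the logistic model with t-prior the article uses), each action maps to a dose whose log-odds differ from the current dose by the log of a pre-specified odds ratio, and the action minimizing the posterior mean absolute log-odds distance to the target is chosen. Function and action names are illustrative.

```python
import math

def choose_action(post_samples, target, or_up, or_down):
    """Pick the action minimizing posterior expected loss, where the loss
    is the absolute log-odds distance between the target and the DLT
    probability of the dose the next cohort would receive.

    or_up / or_down are the pre-specified odds ratios for the adjacent
    higher / lower doses relative to the current dose.
    """
    logit = lambda p: math.log(p / (1 - p))
    t = logit(target)
    losses = {}
    for action, orr in (("de-escalate", or_down), ("stay", 1.0), ("escalate", or_up)):
        # shift each posterior draw's log-odds by the action's odds ratio
        losses[action] = sum(abs(t - (logit(p) + math.log(orr)))
                             for p in post_samples) / len(post_samples)
    return min(losses, key=losses.get)
```

If the posterior sits well below the target, escalation wins; if it is centered on the target, staying wins.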

18.
This paper presents a unified method of constructing change-over designs that permit the estimation of direct effects orthogonal to all other effects when the residual effects of treatments last for two consecutive periods. Explicit methods of analysis of these designs have been obtained for the situations where the first period observations are omitted from the analysis and where the first period observations are included.

19.
The results of a computer search for saturated designs for 2^n factorial experiments in n runs are reported, where n ≡ 2 mod 4. A complete search of the design space is avoided by focussing on designs constructed from cyclic generators, and a method of searching quickly for the best generators is given. The resulting designs are as good as, and sometimes better than, designs obtained via search algorithms reported in the literature. The addition of a further factor having three levels is also considered; here, too, a complete search is avoided by restricting attention to the most efficient part of the design space under p-efficiency.

20.
Balanced factorial designs are introduced for cDNA microarray experiments. Single replicate designs obtained using the classical method of confounding are shown to be particularly useful for deriving suitable balanced designs for cDNA microarrays. Classical factorial designs obtained using methods other than the method of confounding are also shown to be useful. The paper provides a systematic method of deriving designs for microarray experiments as opposed to algorithmic and ad-hoc methods and generalizes several of the microarray designs given recently in the literature.
