Similar Documents
Found 20 similar documents (search time: 15 ms)
1.
T-cell engagers are a class of oncology drugs that engage T-cells to initiate an immune response against malignant cells. T-cell engagers have features unlike prior classes of oncology drugs (e.g., chemotherapies or targeted therapies), because (1) the starting dose level often must be conservative due to immune-related side effects such as cytokine release syndrome (CRS); (2) the dose level can usually be safely titrated higher as a result of the subject's immune system adapting after first exposure to a lower dose; and (3) due to preventive management of CRS, these safety events rarely worsen to become dose-limiting toxicities (DLTs). It is generally believed that for T-cell engagers the dose intensity of the starting dose and the peak dose intensity both correlate with improved efficacy. Existing dose-finding methodologies are not designed to efficiently identify both the initial starting dose and the peak dose intensity in a single trial. In this study, we propose a new trial design, the dose intra-subject escalation to an event (DIETE) design, that can (1) estimate the maximum tolerated initial dose level (MTD1); and (2) incorporate systematic intra-subject dose escalation to estimate, with a survival analysis approach, the maximum tolerated dose level subsequent to adaptation induced by the initial dose level (MTD2). We compare our framework to similar methodologies and evaluate their key operating characteristics.

2.
Designing Phase I clinical trials is challenging when accrual is slow or the sample size is limited. The key question is: how can the maximum tolerated dose (MTD) be identified efficiently and reliably using as small a sample size as possible? We propose model-assisted and model-based designs with adaptive intrapatient dose escalation (AIDE) to address this challenge. AIDE is adaptive in that the decision to conduct intrapatient dose escalation depends on both the patient's individual safety data and the other enrolled patients' safety data. When both indicate reasonable safety, a patient may undergo intrapatient dose escalation, generating toxicity data at more than one dose. This strategy not only provides patients the opportunity to receive higher, potentially more effective doses, but also enables efficient statistical learning of the dose-toxicity profile of the treatment, which dramatically reduces the required sample size. Simulation studies show that the proposed designs are safe, robust, and efficient in identifying the MTD with a sample size substantially smaller than that of conventional interpatient dose escalation designs. Practical considerations are provided, and R code for implementing AIDE is available upon request.

3.
This article proposes an extension of the continual reassessment method to determine the maximum tolerated dose (MTD) in the presence of patient heterogeneity in phase I clinical trials. To start with a simple case, we consider the covariate as a binary variable representing two groups of patients. A logistic regression model is used to establish the dose–response relationship, and the design is based on the Bayesian framework. Simulation studies for six plausible dose–response scenarios show that the proposed design is likely to determine the MTD more accurately than a design that does not take the covariate into consideration.
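The continual reassessment method at the core of designs like this can be sketched with a one-parameter power model, p_d(a) = skeleton_d ** exp(a), whose posterior is evaluated on a grid. The skeleton, target rate, and prior variance below are illustrative assumptions, not values from the article, and the binary-covariate extension is omitted:

```python
import numpy as np

skeleton = np.array([0.05, 0.10, 0.20, 0.30, 0.45])  # assumed prior toxicity guesses
target = 0.25                                        # assumed target DLT rate

def crm_next_dose(doses_given, tox, grid=np.linspace(-3, 3, 601)):
    """One-parameter power-model CRM with a N(0, 1.34) prior on `a`, evaluated
    on a grid; returns the index of the dose whose posterior-mean toxicity
    probability is closest to the target."""
    prior = np.exp(-grid ** 2 / (2 * 1.34))
    like = np.ones_like(grid)
    for d, y in zip(doses_given, tox):            # binary DLT outcomes y in {0, 1}
        p = skeleton[d] ** np.exp(grid)
        like *= p ** y * (1 - p) ** (1 - y)
    post = prior * like
    post /= post.sum()
    p_hat = [(skeleton[d] ** np.exp(grid) * post).sum() for d in range(len(skeleton))]
    return int(np.argmin(np.abs(np.array(p_hat) - target)))
```

For instance, three DLTs in three patients at the lowest dose drive the posterior toxicity estimates up at every dose, so the recommendation stays at dose 0.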

4.
Drug-combination studies have become increasingly popular in oncology. One of the critical concerns in phase I drug-combination trials is the uncertainty in toxicity evaluation. Most existing phase I designs aim to identify the maximum tolerated dose (MTD) by reducing the two-dimensional search space to one dimension via a prespecified model, or by splitting the two-dimensional space into multiple one-dimensional subspaces based on the partially known toxicity order. Nevertheless, both strategies often lead to complicated trials that may either be sensitive to model assumptions or induce longer trial durations due to the subtrial split. We develop two versions of the dynamic ordering design (DOD) for dose finding in drug-combination trials, where the dose-finding problem is cast in the Bayesian model selection framework. The toxicity order of dose combinations is continuously updated via a two-dimensional pool-adjacent-violators algorithm, and the dose assignment for each incoming cohort is then selected based on the optimal model under the dynamic toxicity order. We conduct extensive simulation studies to evaluate the performance of DOD in comparison with four other commonly used designs under various scenarios. Simulation results show that the two versions of DOD possess competitive performance in terms of correct MTD selection as well as safety, and we apply both versions of DOD to two real oncology trials for illustration.

5.
The main purpose of dose-escalation trials is to identify the dose(s) that is/are safe and efficacious for further investigation in later studies. In this paper, we introduce dose-escalation designs that incorporate both dose-limiting toxicities (DLTs) and responses indicative of efficacy into the procedure. A flexible nonparametric model is used for modelling the continuous efficacy responses, while a logistic model is used for the binary DLTs. Escalation decisions are based on the combination of the probabilities of DLTs and expected efficacy through a gain function. On the basis of this setup, we then introduce two types of Bayesian adaptive dose-escalation strategies. The first type, called "single objective," aims to identify and recommend a single dose: either the maximum tolerated dose (the highest dose that is considered safe) or the optimal dose (a safe dose that gives the optimum benefit-risk balance). The second type, called "dual objective," aims to jointly estimate both the maximum tolerated dose and the optimal dose accurately. The recommended doses obtained under these dose-escalation procedures provide information about the safety and efficacy profile of the novel drug to facilitate later studies. We evaluate the different strategies via simulations based on an example constructed from a real trial on patients with type 2 diabetes, and the use of stopping rules is assessed. We find that the nonparametric model estimates the efficacy responses well for different underlying true shapes. The dual-objective designs give better results in terms of identifying the two target doses compared to the single-objective designs.

6.
Model-based dose-finding methods for a combination therapy involving two agents in phase I oncology trials typically include four design aspects: the size of the patient cohort, the three-parameter dose-toxicity model, the choice of start-up rule, and whether or not to include a restriction on dose-level skipping. The effect of each design aspect on the operating characteristics of the dose-finding method has not been adequately studied, although some studies have compared the performance of rival dose-finding methods using the design aspects outlined in the original studies. In this study, we featured these four well-known design aspects and evaluated the independent effect of each on the operating characteristics of the dose-finding method. We performed simulation studies to examine the effect of these design aspects on the determination of the true maximum tolerated dose combinations (MTDCs) as well as exposure to unacceptable toxic dose combinations (UTDCs). The results demonstrated that the selection rates of MTDCs and UTDCs vary depending on the patient cohort size and restrictions on dose-level skipping, whereas the three-parameter dose-toxicity models and start-up rules did not affect these operating characteristics. Copyright © 2016 John Wiley & Sons, Ltd.

7.
Nowadays, treatment regimens for cancer often involve a combination of drugs. Determining the doses of each of the combined drugs in phase I dose-escalation studies poses methodological challenges. The most common phase I design, the classic '3+3' design, has been criticized for poorly estimating the maximum tolerated dose (MTD) and for treating too many subjects at doses below the MTD. In addition, the classic '3+3' is not able to address the challenges posed by combinations of drugs. Here, we assume that a control drug (commonly used and well-studied) is administered at a fixed dose in combination with a new agent (the experimental drug) whose appropriate dose has to be determined. We propose a randomized design in which subjects are assigned to the control or to the combination of the control and the experimental drug. The MTD is determined using a model-based Bayesian technique based on the difference in the probability of dose-limiting toxicities (DLTs) between the control and the combination arm. We show, through a simulation study, that this approach provides better and more accurate estimates of the MTD. We argue that this approach can differentiate between an extremely high probability of DLT observed in the control arm and a high probability of DLT in the combination arm. We also report on a fictive (simulated) analysis based on published data from a phase I trial of ifosfamide combined with sunitinib. Copyright © 2014 John Wiley & Sons, Ltd.
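A between-arm comparison of DLT probabilities can be sketched with independent conjugate Beta posteriors and Monte Carlo draws. This is a simplified stand-in for the article's model-based Bayesian technique; the uniform Beta(1,1) priors and the function name are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def prob_excess_dlt(n_ctrl, x_ctrl, n_comb, x_comb, delta=0.0, n_draws=100_000):
    """Posterior probability that the combination arm's DLT rate exceeds the
    control arm's by more than `delta`, under independent Beta(1,1) priors:
    p_ctrl | data ~ Beta(1 + x_ctrl, 1 + n_ctrl - x_ctrl), likewise for p_comb."""
    p_c = rng.beta(1 + x_ctrl, 1 + n_ctrl - x_ctrl, n_draws)
    p_t = rng.beta(1 + x_comb, 1 + n_comb - x_comb, n_draws)
    return float(np.mean(p_t - p_c > delta))
```

For example, 1/10 DLTs on control versus 8/10 on the combination gives a posterior probability near 1 that the experimental agent adds toxicity, whereas equal rates give roughly 0.5.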

8.
For phase I cancer clinical trials, toxicity is a major concern. Commonly, toxicity is categorized into five levels of severity. In addition to the traditional standard dose-escalation design (STD), the continual reassessment method (CRM) provides a promising alternative for estimating the maximum tolerated dose (MTD) of a drug. However, in both the STD and the CRM, the severity level of a previous patient's grade 3/4 toxicity is not a differentiating factor for the next dose-level assignment. In this study, we extend both procedures by placing unequal weights on the assessments of grade 3 and grade 4 toxicity in the dose escalation. The simulation results show that by taking the impact of grade 4 toxicity into account, the proposed extended procedures, for both the STD and the CRM, reduce the chance of recommending higher dose levels. Similar trends are observed for patient allocation to the higher levels. Additionally, the CRM estimates the MTD more accurately than the STD, and the proposed extended CRM maintains this characteristic.

9.
One of the main aims of early phase clinical trials is to identify a safe dose with an indication of therapeutic benefit to administer to subjects in further studies. Ideally, therefore, dose-limiting events (DLEs) and responses indicative of efficacy should both be considered in the dose-escalation procedure. Several methods have been suggested for incorporating both DLEs and efficacy responses in early phase dose-escalation trials. In this paper, we describe and evaluate a Bayesian adaptive approach based on one binary response (occurrence of a DLE) and one continuous response (a measure of potential efficacy) per subject. A logistic regression model and a linear log-log relationship are used, respectively, to model the binary DLEs and the continuous efficacy responses. A gain function concerning both the DLEs and the efficacy responses is used to determine the dose to administer to the next cohort of subjects. Stopping rules are proposed to enable efficient decision making. Simulation results show that our approach performs better than taking account of DLE responses alone. To assess the robustness of the approach, we also consider scenarios where the efficacy responses of subjects are generated from an Emax model but modelled by the linear log-log model. This evaluation shows that the simpler log-log model leads to robust recommendations even under this misspecification, making it a useful approximation given the difficulty of estimating the Emax model. Additionally, we find comparable performance to alternative approaches that use efficacy and safety for dose finding. Copyright © 2015 John Wiley & Sons, Ltd.

10.
The primary objective of an oncology dose-finding trial for novel therapies, such as molecularly targeted agents and immuno-oncology therapies, is to identify the optimal dose (OD) that is tolerable and therapeutically beneficial for subjects in subsequent clinical trials. Pharmacokinetic (PK) information is considered an appropriate indicator for evaluating the level of drug intervention in humans from a pharmacological perspective. Several novel anticancer agents have been shown to have significant exposure-efficacy relationships, and some PK information has been considered an important predictor of efficacy. This paper proposes a Bayesian optimal interval design for dose optimization with a randomization scheme based on PK outcomes in oncology. A simulation study shows that the proposed design has advantages over the other designs in the percentage of correct OD selection and the average number of patients allocated to the OD in various realistic settings.
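The Bayesian optimal interval (BOIN) machinery underlying such a design reduces each dose decision to comparing the observed DLT rate against two fixed boundaries. A minimal sketch of the standard boundary formulas, assuming the common defaults phi1 = 0.6*phi and phi2 = 1.4*phi (the article's PK-based randomization layer is not reproduced):

```python
import math

def boin_boundaries(phi, phi1=None, phi2=None):
    """BOIN escalation/de-escalation boundaries for target DLT rate `phi`.
    phi1/phi2 are the rates deemed clearly sub-therapeutic / overly toxic;
    defaults follow the usual phi1 = 0.6*phi, phi2 = 1.4*phi convention."""
    phi1 = 0.6 * phi if phi1 is None else phi1
    phi2 = 1.4 * phi if phi2 is None else phi2
    lam_e = math.log((1 - phi1) / (1 - phi)) / \
        math.log(phi * (1 - phi1) / (phi1 * (1 - phi)))
    lam_d = math.log((1 - phi) / (1 - phi2)) / \
        math.log(phi2 * (1 - phi) / (phi * (1 - phi2)))
    return lam_e, lam_d
```

A cohort's observed DLT rate below lam_e triggers escalation, above lam_d triggers de-escalation, and otherwise the current dose is retained; for a target rate of 0.30 these work out to roughly 0.236 and 0.359.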

11.
In studies of combinations of agents in phase I oncology trials, the dose–toxicity relationship may not be monotone for all combinations, in which case the toxicity probabilities follow a partial order. The continual reassessment method for partial orders (PO-CRM) is a design for phase I trials of combinations that leans upon identifying possible complete orders associated with the partial order. This article addresses some practical design considerations not previously undertaken when describing the PO-CRM. We describe an approach to choosing a proper subset of possible orderings, formulated according to the known toxicity relationships within a matrix of combination therapies. Other design issues, such as working model selection and stopping rules, are also discussed. We demonstrate the practical ability of PO-CRM as a phase I design for combinations through its use in a recent trial designed at the University of Virginia Cancer Center. Copyright © 2013 John Wiley & Sons, Ltd.

12.
Consider the problem of estimating a dose with a certain response rate. Many multistage dose-finding designs for this problem were originally developed for oncology studies, where the mean dose–response is strictly increasing in dose. In non-oncology phase II dose-finding studies, the dose–response curve often plateaus in the range of interest, and there are several doses with mean response equal to the target. In this case, it is usually of interest to find the lowest of these doses, because higher doses might have higher adverse event rates. It is often desirable to compare the response rate at the estimated target dose with a placebo and/or active control. We investigate which of several known dose-finding methods developed for oncology phase I trials is the most suitable when the dose–response curve plateaus. Some of the designs tend to spread the allocation among the doses on the plateau. Others, such as the continual reassessment method and the t-statistic design, concentrate allocation at one of the doses, with the t-statistic design selecting the lowest dose on the plateau more frequently. Copyright © 2013 John Wiley & Sons, Ltd.

13.
The randomized block design is routinely employed in the social and biopharmaceutical sciences. With no missing values, analysis of variance (AOV) can be used to analyze such experiments. However, if some data are missing, the AOV formulae are no longer applicable, and iterative methods such as restricted maximum likelihood (REML) are recommended, assuming block effects are treated as random. Despite the well-known advantages of REML, methods like AOV based on complete cases (blocks) only (CC-AOV) continue to be used by researchers, particularly in situations where routinely only a few missing values are encountered. Reasons for this appear to include a natural proclivity for non-iterative, summary-statistic-based methods, and a presumption that CC-AOV is only trivially less efficient than REML with only a few missing values (say, ≤10%). The purpose of this note is twofold: first, to caution that CC-AOV can be considerably less powerful than REML even with only a few missing values; second, to offer a summary-statistic-based, pairwise-available-case-estimation (PACE) alternative to CC-AOV. PACE, which is identical to AOV (and REML) with no missing values, outperforms CC-AOV in terms of statistical power. However, it is recommended in lieu of REML only if software to implement the latter is unavailable, or if the use of a "transparent" formula-based approach is deemed necessary. An example using real data is provided for illustration.
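One plausible reading of pairwise-available-case estimation is sketched below: each pairwise treatment difference is estimated from only the blocks in which both treatments were observed. The function name and this exact estimator are illustrative assumptions, not the note's own formulas:

```python
import numpy as np

def pace_diffs(y):
    """Pairwise-available-case estimates of treatment differences in a
    randomized block layout `y` (rows = blocks, columns = treatments,
    NaN = missing): each difference d[i, j] averages y[:, i] - y[:, j]
    over the blocks where both treatments are observed."""
    b, k = y.shape
    d = np.full((k, k), np.nan)
    for i in range(k):
        for j in range(k):
            both = ~np.isnan(y[:, i]) & ~np.isnan(y[:, j])
            if both.any():
                d[i, j] = np.mean(y[both, i] - y[both, j])
    return d
```

Under a purely additive block-plus-treatment model, each entry recovers the true treatment-effect difference exactly, even with missing cells, because block effects cancel within each pair.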

14.
This article reviews currently used approaches for establishing dose proportionality in Phase I dose-escalation studies. A review of the relevant literature between 2002 and 2006 found that the power model was the preferred choice for assessing dose proportionality in about one-third of the articles. This article promotes the use of the power model and a conceptually appealing extension: a criterion based on comparing the 90% confidence interval for the ratio of predicted mean values from the extremes of the dose range (R(dnm)) to a pre-defined equivalence criterion (theta(L), theta(U)). The choice of the bioequivalence default values theta(L)=0.8 and theta(U)=1.25 seems reasonable for dose levels only a doubling apart, but is impractically strict when applied over the complete dose range. Power calculations are used to show that this prescribed criterion lacks the power to conclude dose proportionality in typical Phase I dose-escalation studies. A more lenient criterion with values theta(L)=0.5 and theta(U)=2 is proposed for exploratory dose-proportionality assessments across the complete dose range.
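The power-model criterion can be sketched as follows: regress log(AUC) on log(dose), form a confidence interval for the ratio of predicted dose-normalised means at the extremes of the dose range, r**(beta - 1), and conclude proportionality if that interval lies within (theta_L, theta_U). This is a minimal sketch, not the article's exact procedure; a normal quantile is used in place of the t critical value for simplicity:

```python
import numpy as np
from statistics import NormalDist

def dose_proportionality(dose, auc, theta=(0.5, 2.0), conf=0.90):
    """Power-model assessment: fit log(auc) = a + beta * log(dose) by least
    squares, then compare a CI for rho = r**(beta - 1), with
    r = max(dose)/min(dose), against the equivalence bounds `theta`."""
    x, y = np.log(dose), np.log(auc)
    n = len(x)
    X = np.column_stack([np.ones(n), x])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    s2 = resid @ resid / (n - 2)                       # residual variance
    se_b = np.sqrt(s2 / np.sum((x - x.mean()) ** 2))   # SE of the slope
    z = NormalDist().inv_cdf(0.5 + conf / 2)           # normal approx to t
    r = dose.max() / dose.min()
    lo = r ** (coef[1] - z * se_b - 1.0)
    hi = r ** (coef[1] + z * se_b - 1.0)
    return (lo, hi), (theta[0] < lo and hi < theta[1])
```

With the lenient bounds (0.5, 2), exactly proportional data pass over an 8-fold dose range, while a dose exponent of 1.5 fails, since 8**0.5 ≈ 2.83 falls outside the bounds.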

15.
In this paper, we study the performance of a soccer player by analysing an incomplete data set. To achieve this aim, we fit the bivariate Rayleigh distribution to the soccer data set by the maximum likelihood method. In this way, the missing-data and right-censoring problems that usually arise in such studies are accounted for. Our aim is to make inferences about the performance of a soccer player by considering stress and strength components: the first goal by the player of interest in a match is taken as the stress component, and the second goal of the match as the strength component. We propose methods to overcome the incomplete-data problem and use them to draw inferences about the player's performance.
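For Rayleigh stress X (scale a) and strength Y (scale b), the stress-strength reliability R = P(X < Y) has the closed form b**2 / (a**2 + b**2) when the components are independent; the sketch below checks this by Monte Carlo. Independence is a simplifying assumption here, and the article's bivariate model and its censoring handling are not reproduced:

```python
import numpy as np

rng = np.random.default_rng(7)

def stress_strength_rayleigh(a, b, n=200_000):
    """Monte Carlo estimate of R = P(X < Y) for independent Rayleigh stress X
    (scale a) and strength Y (scale b); closed form: b**2 / (a**2 + b**2)."""
    x = rng.rayleigh(a, n)
    y = rng.rayleigh(b, n)
    return float(np.mean(x < y))
```

With a = 1 and b = 2 the closed form gives R = 4/5 = 0.8, and the simulation agrees to within Monte Carlo error.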

16.
A growth curve analysis is often applied to estimate patterns of change in a given characteristic across different individuals. It is also used to find out whether variations in growth rates among individuals are due to the effects of certain covariates. In this paper, a random coefficient linear regression model, as a special case of growth curve analysis, is generalized to accommodate the situation where the set of influential covariates is not known a priori. Two different approaches for selecting influential covariates for the random slope coefficient of a linear regression model with unbalanced data are proposed: a weighted stepwise selection procedure and a modified version of Rao and Wu's selection criterion. The performance of these methods is evaluated by means of Monte Carlo simulation. In addition, several methods for estimating the parameters of the selected model (maximum likelihood, restricted maximum likelihood, pseudo maximum likelihood, and the method of moments) are compared. The proposed variable selection schemes and estimators are applied to the actual industrial problem which motivated this investigation.

17.
Stochastic Models, 2013, 29(2-3): 643-668
Abstract

We investigate polynomial factorization as a classical analysis method for servers with semi-Markov arrival and service processes. The modeling approach is directly applicable to queueing systems and servers in production lines and telecommunication networks, where the flexibility in adaptation to autocorrelated processes is essential.

Although the method offers a compact form of the solution with favourable computation-time complexity, enabling consideration of large state spaces and system equations of high degree, numerical stability is not guaranteed for this approach. We therefore apply interval arithmetic in order to obtain verified results for the workload distributions, or otherwise to indicate that the precision of the computation has to be improved. The paper gives an overview of numerical and performance aspects of factorization in comparison to alternative methods.

18.
Many real-world samples come from normal distributions with unknown mean and variance, for which it is common to assume a conjugate normal-inverse-gamma prior. In two theorems, we derive the empirical Bayes estimators of the mean and variance parameters of the normal distribution under a conjugate normal-inverse-gamma prior, by the method of moments and by maximum likelihood estimation (MLE). We then illustrate the two theorems using the monthly simple returns of the Shanghai Stock Exchange Composite Index.
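The conjugate normal-inverse-gamma update that underlies such estimators can be sketched directly. The hyperparameter defaults below are illustrative placeholders, and the empirical Bayes step of estimating them from the data (by moments or MLE, as in the article) is not shown:

```python
import numpy as np

def nig_posterior(x, mu0=0.0, kappa0=1.0, alpha0=2.0, beta0=1.0):
    """Conjugate update for the normal-inverse-gamma prior
    mu | sigma^2 ~ N(mu0, sigma^2/kappa0), sigma^2 ~ InvGamma(alpha0, beta0);
    returns the posterior means E[mu | x] and E[sigma^2 | x]."""
    x = np.asarray(x, float)
    n, xbar = len(x), x.mean()
    ss = np.sum((x - xbar) ** 2)
    kappa_n = kappa0 + n
    mu_n = (kappa0 * mu0 + n * xbar) / kappa_n
    alpha_n = alpha0 + n / 2
    beta_n = beta0 + 0.5 * ss + kappa0 * n * (xbar - mu0) ** 2 / (2 * kappa_n)
    return mu_n, beta_n / (alpha_n - 1)
```

For a large sample the posterior means shrink toward the sample mean and sample variance, as the data swamp the prior.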

19.
This article discusses a consistent and almost unbiased estimation approach for the parameters of interest in partial linear regression when the regressors are contaminated with a mixture of Berkson and classical errors. Advantages of the presented procedure are: (1) the random errors and observations need not follow parametric settings; and (2) there is no need to use additional sample information or to consider the estimation of nuisance parameters. We examine the performance of the presented estimator in a variety of numerical examples through Monte Carlo simulation. The proposed approach is also illustrated in the analysis of air pollution data.

20.
The adjusted r2 algorithm is a popular automated method for selecting the start time of the terminal disposition phase (tz) when conducting a noncompartmental pharmacokinetic data analysis. Using simulated data, the performance of the algorithm was assessed in relation to the ratio of the slopes of the preterminal and terminal disposition phases, the point of intercept of the terminal disposition phase with the preterminal disposition phase, the length of the terminal disposition phase captured in the concentration-time profile, the number of data points present in the terminal disposition phase, and the level of variability in concentration measurement. The adjusted r2 algorithm was unable to identify tz accurately when there were more than three data points present in a profile's terminal disposition phase. The terminal disposition phase rate constant (λz) calculated based on the value of tz selected by the algorithm had a positive bias in all simulation data conditions. Tolerable levels of bias (median bias less than 5%) were achieved under conditions of low measurement variability. When measurement variability was high, tolerable levels of bias were attained only when the terminal phase time span was 4 multiples of t1/2 or longer. A comparison of the performance of the adjusted r2 algorithm, a simple r2 algorithm, and tz selection by visual inspection was conducted using a subset of the simulation data. In the comparison, the simple r2 algorithm performed as well as the adjusted r2 algorithm and the visual inspection method outperformed both algorithms. Recommendations concerning the use of the various tz selection methods are presented.
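The adjusted r2 algorithm can be sketched as follows: fit log-concentration against time for every candidate tail of the profile and keep the longest tail whose adjusted r2 is within a small tolerance of the best fit. The function name, the 1e-4 tolerance, and the omission of the usual Cmax exclusion are illustrative assumptions:

```python
import numpy as np

def select_lambda_z(t, c, min_pts=3, tol=1e-4):
    """Adjusted-r2 selection of the terminal phase: regress log(c) on t for each
    candidate tail (last 3, 4, ... points) and keep the longest tail whose
    adjusted r2 is within `tol` of the maximum; returns (lambda_z, n_points)."""
    t = np.asarray(t, float)
    logc = np.log(np.asarray(c, float))
    fits = []
    for n in range(min_pts, len(t) + 1):
        x, y = t[-n:], logc[-n:]
        slope, intercept = np.polyfit(x, y, 1)
        resid = y - (slope * x + intercept)
        r2 = 1.0 - resid @ resid / np.sum((y - y.mean()) ** 2)
        adj = 1.0 - (1.0 - r2) * (n - 1) / (n - 2)       # adjusted r2
        fits.append((adj, n, -slope))                     # lambda_z = -slope
    best = max(f[0] for f in fits)
    adj, n, lam = max((f for f in fits if f[0] >= best - tol), key=lambda f: f[1])
    return lam, n
```

On a clean mono-exponential profile every tail fits perfectly, so the rule keeps all points and recovers the true rate constant; bias of the kind the article reports arises once preterminal curvature or measurement noise enters the tail.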
