Similar Literature
1.
Patient heterogeneity may complicate dose‐finding in phase 1 clinical trials if the dose‐toxicity curves differ between subgroups. Conducting separate trials within subgroups may lead to infeasibly small sample sizes in subgroups having low prevalence. Alternatively, it is not obvious how to conduct a single trial while accounting for heterogeneity. To address this problem, we consider a generalization of the continual reassessment method on the basis of a hierarchical Bayesian dose‐toxicity model that borrows strength between subgroups under the assumption that the subgroups are exchangeable. We evaluate a design using this model that includes subgroup‐specific dose selection and safety rules. A simulation study is presented that includes comparison of this method to 3 alternative approaches, on the basis of nonhierarchical models, that make different types of assumptions about within‐subgroup dose‐toxicity curves. The simulations show that the hierarchical model‐based method is recommended in settings where the dose‐toxicity curves are exchangeable between subgroups. We present practical guidelines for application and provide computer programs for trial simulation and conduct.
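To make the borrowing-of-strength idea concrete, the following is a minimal numerical sketch (not the authors' programs) of a hierarchical power-model CRM for two subgroups, with the posterior evaluated on grids. The skeleton, prior settings, and data are all invented for illustration.

```python
import numpy as np

# Minimal sketch (not the authors' software) of a hierarchical power-model CRM:
# subgroup curves p_d = skeleton_d ** exp(theta_g), theta_g | mu ~ N(mu, sigma^2),
# mu ~ N(0, tau^2).  Posteriors are evaluated on grids; all numbers are invented.
skeleton = np.array([0.05, 0.12, 0.25, 0.40])   # prior DLT-probability guesses
target, sigma, tau = 0.25, 0.5, 1.0

# invented data per subgroup: (dose indices given, 0/1 DLT outcomes)
data = [(np.array([0, 0, 1]), np.array([0, 0, 1])),
        (np.array([0, 1, 1]), np.array([0, 0, 0]))]

theta = np.linspace(-3, 3, 121)                 # grid for each subgroup parameter
mu = np.linspace(-3, 3, 61)                     # grid for the shared hypermean

def lik(doses, tox):
    """Binomial likelihood of one subgroup's data on the theta grid."""
    p = skeleton[doses][:, None] ** np.exp(theta)[None, :]
    return np.prod(np.where(tox[:, None], p, 1 - p), axis=0)

L = [lik(*d) for d in data]
K = np.exp(-0.5 * (theta[None, :] - mu[:, None]) ** 2 / sigma**2)  # p(theta | mu) kernel

for g in range(2):
    # weight each mu by its prior and by the other subgroup's integrated
    # likelihood, then marginalize: this is where borrowing of strength happens
    w = np.exp(-0.5 * mu**2 / tau**2) * (K @ L[1 - g])
    post = (w[:, None] * K * L[g][None, :]).sum(axis=0)
    post /= post.sum()
    p_hat = (skeleton[:, None] ** np.exp(theta)[None, :] * post).sum(axis=1)
    print(f"subgroup {g}: posterior DLT rates {np.round(p_hat, 3)}, "
          f"recommended dose index {np.argmin(np.abs(p_hat - target))}")
```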

2.
The continual reassessment method (CRM) was first introduced by O’Quigley et al. [1990. Continual reassessment method: a practical design for Phase I clinical trials in cancer. Biometrics 46, 33–48]. Many articles followed adding to the original ideas, among which are articles by Babb et al. [1998. Cancer Phase I clinical trials: efficient dose escalation with overdose control. Statist. Med. 17, 1103–1120], Braun [2002. The bivariate-continual reassessment method. Extending the CRM to phase I trials of two competing outcomes. Controlled Clin. Trials 23, 240–256], Chevret [1993. The continual reassessment method in cancer phase I clinical trials: a simulation study. Statist. Med. 12, 1093–1108], Faries [1994. Practical modifications of the continual reassessment method for phase I cancer clinical trials. J. Biopharm. Statist. 4, 147–164], Goodman et al. [1995. Some practical improvements in the continual reassessment method for phase I studies. Statist. Med. 14, 1149–1161], Ishizuka and Ohashi [2001. The continual reassessment method and its applications: a Bayesian methodology for phase I cancer clinical trials. Statist. Med. 20, 2661–2681], Legedza and Ibrahim [2002. Longitudinal design for phase I trials using the continual reassessment method. Controlled Clin. Trials 21, 578–588], Mahmood [2001. Application of preclinical data to initiate the modified continual reassessment method for maximum tolerated dose-finding trial. J. Clin. Pharmacol. 41, 19–24], Moller [1995. An extension of the continual reassessment method using a preliminary up and down design in a dose finding study in cancer patients in order to investigate a greater number of dose levels. Statist. Med. 14, 911–922], O’Quigley [1992. Estimating the probability of toxicity at the recommended dose following a Phase I clinical trial in cancer. Biometrics 48, 853–862], O’Quigley and Shen [1996. Continual reassessment method: a likelihood approach. Biometrics 52, 163–174], O’Quigley et al. (1999), O’Quigley et al. [2002. Non-parametric optimal design in dose finding studies. Biostatistics 3, 51–56], O’Quigley and Paoletti [2003. Continual reassessment method for ordered groups. Biometrics 59, 429–439], Piantadosi et al. [1998. Practical implementation of a modified continual reassessment method for dose-finding trials. Cancer Chemother. Pharmacol. 41, 429–436] and Whitehead and Williamson [1998. Bayesian decision procedures based on logistic regression models for dose-finding studies. J. Biopharm. Statist. 8, 445–467]. The method is broadly described by Storer [1989. Design and analysis of Phase I clinical trials. Biometrics 45, 925–937]. Whether likelihood or Bayesian based, inference poses particular theoretical difficulties in view of the working models being under-parameterized. Nonetheless, CRM models have proven themselves to be of practical use, and in this work the aim is to turn the spotlight on the main theoretical ideas underpinning the approach, obtaining results that can provide guidance in practice. Stemming from this theoretical framework are a number of results and some further developments, in particular a way to structure randomized allocation of subjects as well as a more robust approach to the problem of patient heterogeneity.
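As a concrete illustration of the machinery discussed here, below is a bare-bones likelihood-based CRM update in the spirit of O’Quigley and Shen (1996). It is a sketch, not any cited paper's reference implementation; the skeleton and target are invented, and the likelihood approach needs at least one toxicity and one non-toxicity before the maximum likelihood estimate exists.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Bare-bones likelihood CRM sketch: power model p_d(a) = skeleton_d ** exp(a).
# Skeleton and target are invented; real trials add safety constraints.
skeleton = np.array([0.05, 0.10, 0.20, 0.30, 0.50])
target = 0.20

def next_dose(doses, tox):
    """doses: indices given so far; tox: 0/1 DLT outcomes.  Needs at least one
    toxicity and one non-toxicity for the MLE to exist."""
    doses, tox = np.asarray(doses), np.asarray(tox)
    def negloglik(a):
        p = skeleton[doses] ** np.exp(a)
        return -np.sum(tox * np.log(p) + (1 - tox) * np.log(1 - p))
    a_hat = minimize_scalar(negloglik, bounds=(-5, 5), method="bounded").x
    p_hat = skeleton ** np.exp(a_hat)
    return int(np.argmin(np.abs(p_hat - target)))   # dose closest to the target

# e.g. no DLT at dose 0, then a DLT at dose 2:
print(next_dose([0, 2], [0, 1]))
```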

3.
There is a growing need for study designs that can evaluate efficacy and toxicity outcomes simultaneously in phase I or phase I/II cancer clinical trials. Many dose‐finding approaches have been proposed; however, most of them assume binary efficacy and toxicity outcomes, such as dose‐limiting toxicities (DLTs) and objective responses. DLTs are often defined over short time periods. In contrast, objective responses are often defined over longer periods because of practical limitations on confirmation and the criteria used to define ‘confirmation’. This means that studies have to be carried out for unacceptably long periods of time. Previous studies have not proposed a satisfactory solution to this specific problem. Furthermore, this problem may be a barrier for practitioners who want to implement notable previous dose‐finding approaches. To cope with this problem, we propose an approach that uses unconfirmed early responses as a surrogate efficacy outcome for the confirmed outcome. Because it is reasonable to expect moderate positive correlation between the two outcomes, and because the method replaces the surrogate outcome with the confirmed outcome once it becomes available, the proposed approach can reduce irrelevant dose selection and the accumulation of bias. Moreover, it is expected to significantly shorten study duration. Using simulation studies, we demonstrate the positive utility of the proposed approach and provide three variations of it, all of which can be easily implemented with modified likelihood functions and outcome variable definitions. Copyright © 2013 John Wiley & Sons, Ltd.

4.
Compound optimal designs are considered where one component of the design criterion is a traditional optimality criterion, such as the D‐optimality criterion, and the other component accounts for higher efficacy with low toxicity. With reference to the dose‐finding problem, we suggest a technique for choosing weights for the two components that makes the optimization problem simpler than the traditional penalized design. We allow general bivariate responses for efficacy and toxicity. We then extend the procedure to the presence of nondesignable covariates such as age, sex, or other health conditions. A new breast cancer treatment is considered to illustrate the procedures. Copyright © 2013 John Wiley & Sons, Ltd.
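A toy illustration of a compound criterion of this kind follows. It is not the paper's procedure, merely a sketch mixing a D-optimality term for a logistic toxicity model with a weighted efficacy-minus-toxicity utility; the doses, parameters, and mixing weight lam are all invented.

```python
import numpy as np
from scipy.optimize import minimize

# Toy compound-design sketch (illustrative only, not the paper's procedure):
# maximize  lam * log det M(w)  +  (1 - lam) * sum_i w_i * u(x_i),
# where M is the logistic-model information for toxicity and u rewards
# efficacy without toxicity.  Doses, parameters, and lam are invented.
doses = np.linspace(0, 1, 11)                      # candidate design points
b_tox, b_eff = (-3.0, 4.0), (-1.0, 3.0)            # assumed true logistic parameters

def logistic(b, x):
    return 1.0 / (1.0 + np.exp(-(b[0] + b[1] * x)))

p_tox, p_eff = logistic(b_tox, doses), logistic(b_eff, doses)
utility = p_eff * (1.0 - p_tox)                    # higher efficacy, lower toxicity

def neg_criterion(z, lam=0.7):
    w = np.exp(z) / np.exp(z).sum()                # softmax keeps weights on the simplex
    X = np.stack([np.ones_like(doses), doses])     # 2 x k design rows
    M = (X * (w * p_tox * (1 - p_tox))) @ X.T      # 2x2 logistic information matrix
    return -(lam * np.linalg.slogdet(M)[1] + (1 - lam) * w @ utility)

res = minimize(neg_criterion, np.zeros(len(doses)), method="Nelder-Mead",
               options={"maxiter": 20000})
w = np.exp(res.x) / np.exp(res.x).sum()
print(np.round(w, 3))                              # compound-optimal weights per dose
```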

5.
We introduce a new design for dose-finding in the context of toxicity studies for which it is assumed that toxicity increases with dose. The goal is to identify the maximum tolerated dose, which is taken to be the dose associated with a prespecified “target” toxicity rate. The decision to decrease, increase or repeat a dose for the next subject depends on how far an estimated toxicity rate at the current dose is from the target. The size of the window within which the current dose is repeated is obtained from the theory of Markov chains as applied to group up-and-down designs. But whereas the treatment allocation rule in Markovian group up-and-down designs is based only on information from the current cohort of subjects, the treatment allocation rule for the proposed design is based on the cumulative information at the current dose. We then consider an extension of this new design for clinical trials in which the subject's outcome is not known immediately. The new design is compared to the continual reassessment method.
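The allocation rule described here can be sketched in a few lines. The window half-width delta below is arbitrary; in the paper it is derived from the Markov-chain theory of group up-and-down designs.

```python
# Sketch of the cumulative allocation rule described above.  The window
# half-width delta is arbitrary here; the paper derives it from Markov-chain
# theory for group up-and-down designs.
def next_dose(current, n_tox, n_treated, target=0.25, delta=0.09, n_levels=6):
    """Escalate, deescalate, or repeat based on all data at the current dose."""
    p_hat = n_tox / n_treated                  # cumulative toxicity estimate
    if p_hat <= target - delta:                # well below target: escalate
        return min(current + 1, n_levels - 1)
    if p_hat >= target + delta:                # well above target: deescalate
        return max(current - 1, 0)
    return current                             # inside the window: repeat

# e.g. 1 DLT among 6 subjects at dose 2: p_hat = 0.167 falls inside
# (0.16, 0.34), so the dose is repeated
print(next_dose(2, 1, 6))                      # -> 2
```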

6.
We describe a general family of contingent response models. These models have ternary outcomes constructed from two Bernoulli outcomes, where one outcome is observed only if the other outcome is positive. This family is represented in a canonical form which yields general results for its Fisher information. A bivariate extreme value distribution illustrates the model and optimal design results. To provide a motivating context, we call the two binary events that compose the contingent responses toxicity and efficacy. Efficacy, or the lack thereof, is assumed to be observable only in the absence of toxicity, resulting in the ternary response (toxicity, efficacy without toxicity, neither efficacy nor toxicity). The rate of toxicity, and the rate of efficacy conditional on no toxicity, are assumed to increase with dose. While optimal designs for contingent response models are found numerically, limiting optimal designs can be expressed in closed form. In particular, in the family of four-parameter bivariate location-scale models we study, as the marginal probability functions of toxicity and no efficacy diverge, limiting D-optimal designs are shown to consist of a mixture of the D-optimal designs for each failure (toxicity and no efficacy) univariately. Limiting designs are also obtained for the case of equal scale parameters.
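A simplified stand-in for the contingent-response likelihood is sketched below, using independent logistic marginals for toxicity and conditional efficacy rather than the paper's bivariate extreme value model; the parameters and counts are invented.

```python
import numpy as np

# Simplified stand-in for the contingent-response likelihood, with independent
# logistic marginals instead of the paper's bivariate extreme value model.
# Ternary counts per dose: (toxicity, efficacy without toxicity, neither).
def cell_probs(d, a_t, b_t, a_e, b_e):
    p_tox = 1 / (1 + np.exp(-(a_t + b_t * d)))     # P(toxicity)
    p_eff = 1 / (1 + np.exp(-(a_e + b_e * d)))     # P(efficacy | no toxicity)
    return np.array([p_tox, (1 - p_tox) * p_eff, (1 - p_tox) * (1 - p_eff)])

def loglik(theta, doses, counts):
    """counts[i] = ternary counts at doses[i]; theta = (a_t, b_t, a_e, b_e)."""
    return sum(c @ np.log(cell_probs(d, *theta)) for d, c in zip(doses, counts))

# invented data at two dose levels
print(loglik((-2.0, 1.5, -1.0, 2.0), [0.5, 1.0],
             [np.array([1, 3, 2]), np.array([2, 3, 1])]))
```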

7.
Age-conditional probabilities of developing a first cancer represent the transition from being cancer-free to developing a first cancer. Natural inputs into their calculation are rates of first cancer per person-years alive and cancer-free. However, these rates are not readily available because they require information on the cancer-free population. Instead, rates of first cancer per person-years alive, calculated using as denominator the mid-year populations available from census data, can easily be obtained from cancer registry data. Methods have been developed to estimate age-conditional probabilities of developing cancer based on these easily available rates per person-years alive, which do not directly account for the cancer-free population (DevCan: Probability of Developing or Dying of Cancer Software, Version 6.0, 2005). In the last few years, models (Merrill et al., Int J Epidemiol 29(2):197-207, 2000; Mariotto et al., SEER Cancer Statistics Review, 2002; Clegg et al., Biometrics 58(3):684-688, 2002; Gigli et al., Stat Methods Med Res 15(3):235-253, 2006) and software (ComPrev: Complete Prevalence Software, Version 1.0, 2005) have been developed that allow estimation of cancer prevalence. Estimates of population-based cancer prevalence allow for the estimation of the cancer-free population and consequently of rates per person-years alive and cancer-free. In this paper we present a method that directly estimates the age-conditional probabilities of developing a first cancer using rates per person-years alive and cancer-free obtained from prevalence estimates. We explore conditions under which the previous and the new estimators give similar or different values, using real data from the Surveillance, Epidemiology and End Results (SEER) program.
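The following sketch illustrates the kind of calculation involved, treating first-cancer incidence and other-cause death among the cancer-free as constant competing hazards within each age band. It is a simplified stand-in for the estimator discussed above, and all rates are invented.

```python
import numpy as np

# Simplified stand-in for the estimator discussed above: within each age band,
# first-cancer incidence and other-cause death among the cancer-free are
# treated as constant competing hazards.  All rates are invented.
ages = np.array([50, 55, 60, 65])          # band start ages (5-year bands)
inc = np.array([0.004, 0.006, 0.009])      # first cancers per PY alive & cancer-free
mort = np.array([0.005, 0.008, 0.013])     # other-cause deaths per PY cancer-free

def prob_develop(first_band, last_band):
    """P(first cancer between the two band starts | cancer-free at the first)."""
    total, surv = 0.0, 1.0
    for j in range(first_band, last_band):
        h = inc[j] + mort[j]                       # total exit hazard
        width = ages[j + 1] - ages[j]
        total += surv * (inc[j] / h) * (1 - np.exp(-h * width))
        surv *= np.exp(-h * width)                 # still alive and cancer-free
    return total

print(f"P(first cancer in 50-65 | cancer-free at 50) = {prob_develop(0, 3):.4f}")
```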

8.
Following on from the work of O'Quigley & Flandre (1994) and, more recently, O'Quigley & Xu (2000), we develop a measure, R2, of the predictive ability of a stratified proportional hazards regression model. The extension of this earlier work to the stratified case is relatively straightforward, both conceptually and in its practical implementation. The extension is nonetheless important in that the stratified model makes weaker assumptions than the full multivariate model. Formulae are given that can be readily incorporated into standard software routines, since the component parts of the calculations are routinely provided by most packages. We give examples on the predictability of survival in breast cancer data, modelled via proportional hazards and stratified proportional hazards models, the latter being necessary in view of effects of a non-proportional hazards nature.

9.
In early phase dose‐finding cancer studies, the objective is to determine the maximum tolerated dose, defined as the highest dose with an acceptable dose‐limiting toxicity rate. Finding this dose for drug‐combination trials is complicated because of drug–drug interactions, and many trial designs have been proposed to address this issue. These designs rely on complicated statistical models that typically are not familiar to clinicians, and are rarely used in practice. The aim of this paper is to propose a Bayesian dose‐finding design for drug combination trials based on standard logistic regression. Under the proposed design, we continuously update the posterior estimates of the model parameters to make the decisions of dose assignment and early stopping. Simulation studies show that the proposed design is competitive and outperforms some existing designs. We also extend our design to handle delayed toxicities. Copyright © 2014 John Wiley & Sons, Ltd.
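To show the flavor of the posterior update under a standard logistic regression, here is a grid-posterior sketch. The paper's design adds dose-assignment and early-stopping rules that are omitted here, and the doses, priors, and outcomes are invented.

```python
import numpy as np
from itertools import product

# Grid-posterior sketch of the logistic-regression update (the paper's design
# adds dose-assignment and early-stopping rules omitted here).  Standardized
# doses, priors, and outcomes are all invented.
d1 = np.array([0.2, 0.4, 0.6])                 # standardized doses, agent 1
d2 = np.array([0.25, 0.5])                     # standardized doses, agent 2
obs = [((0.2, 0.25), 0), ((0.4, 0.25), 0), ((0.4, 0.5), 1)]   # ((d1, d2), DLT)

b0, b1, b2 = np.meshgrid(np.linspace(-6, 2, 41),   # intercept grid
                         np.linspace(0, 5, 41),    # slopes kept positive
                         np.linspace(0, 5, 41), indexing="ij")

def p_tox(x1, x2):                             # logit P(DLT) = b0 + b1*x1 + b2*x2
    return 1 / (1 + np.exp(-(b0 + b1 * x1 + b2 * x2)))

logpost = -0.5 * ((b0 + 2)**2 / 4 + (b1 - 1)**2 / 4 + (b2 - 1)**2 / 4)  # normal priors
for (x1, x2), y in obs:
    p = p_tox(x1, x2)
    logpost += np.where(y, np.log(p), np.log(1 - p))
w = np.exp(logpost - logpost.max())
w /= w.sum()

for x1, x2 in product(d1, d2):                 # posterior mean DLT rate per combination
    print(f"({x1:.1f}, {x2:.2f}): {np.sum(w * p_tox(x1, x2)):.3f}")
```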

10.
Many phase I drug combination designs have been proposed to find the maximum tolerated combination (MTC). Due to the two‐dimensional nature of drug combination trials, these designs typically require complicated statistical modeling and estimation, which limits their use in practice. In this article, we propose an easy‐to‐implement Bayesian phase I combination design, called the Bayesian adaptive linearization method (BALM), to simplify dose finding for drug combination trials. BALM takes a dimension-reduction approach. It selects a subset of combinations, through a procedure called linearization, to convert the two‐dimensional dose matrix into a string of combinations that are fully ordered in toxicity. As a result, existing single‐agent dose‐finding methods can be directly used to find the MTC. In case the selected linear path does not contain the MTC, a dose‐insertion procedure is performed to add new doses whose expected toxicity rate is equal to the target toxicity rate. Our simulation studies show that the proposed BALM design performs better than competing, more complicated combination designs.
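The linearization step can be sketched as a walk through the dose matrix that always moves to the adjacent combination with the smaller guessed toxicity, yielding a string of combinations fully ordered in toxicity. The guess matrix below is invented, and the paper's dose-insertion step is omitted.

```python
import numpy as np

# Sketch of the linearization idea: walk from the lowest to the highest
# combination, always stepping to the adjacent combination (one level up in
# either agent) with the smaller guessed toxicity, so the path is fully
# ordered in toxicity.  Guess matrix invented; dose insertion omitted.
guess = np.array([[0.05, 0.10, 0.20],          # prior DLT guesses, agent 1 = rows
                  [0.10, 0.20, 0.35],
                  [0.20, 0.35, 0.55]])

def linearize(g):
    i = j = 0
    path = [(0, 0)]
    while (i, j) != (g.shape[0] - 1, g.shape[1] - 1):
        up = g[i + 1, j] if i + 1 < g.shape[0] else np.inf
        right = g[i, j + 1] if j + 1 < g.shape[1] else np.inf
        i, j = (i + 1, j) if up <= right else (i, j + 1)
        path.append((i, j))
    return path

print(linearize(guess))   # a string of combinations a single-agent method can use
```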

11.
Ring-recovery methodology has been widely used to estimate survival rates in multi-year ringing studies of wildlife and fish populations (Youngs & Robson, 1975; Brownie et al., 1985). The Brownie et al. (1985) methodology is often used, but its formulation does not account for the fact that rings may be returned in two ways. Sometimes hunters are solicited by a wildlife management officer or scientist and asked if they shot any ringed birds. Alternatively, a hunter may voluntarily report the ring to the Bird Banding Laboratory (US Fish and Wildlife Service, Laurel, MD, USA), as is requested on the ring. Because the Brownie et al. (1985) models only consider reported rings, Conroy (1985) and Conroy et al. (1989) generalized their models to permit solicited rings. Pollock et al. (1991) considered a very similar model for fish tagging studies, which might be combined with angler surveys. Pollock et al. (1994) showed how to apply their generalized formulation, with some modification to allow for crippling losses, to wildlife ringing studies. Provided an estimate of ring reporting rate is available, separation of hunting and natural mortality estimates is possible, which provides important management information. Here we review this material and then discuss possible methods of estimating reporting rate, which include: (1) reward ring studies; (2) use of planted rings; (3) hunter surveys; and (4) pre- and post-hunting season ringings. We compare and contrast the four methods in terms of their model assumptions and practicality. We also discuss the estimation of crippling loss using pre- and post-season ringing in combination with a reward ringing study to estimate reporting rate.
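Method (1), the reward ring study, reduces to simple arithmetic: if reward rings are assumed to be reported with probability one, the ratio of standard-ring to reward-ring recovery rates estimates the reporting rate. A sketch with invented numbers:

```python
# Sketch of method (1): if reward rings are assumed to be reported with
# probability one, the ratio of standard-ring to reward-ring recovery rates
# estimates the reporting rate.  All numbers are invented.
ringed_std, recovered_std = 2000, 90       # standard rings released / reported
ringed_rwd, recovered_rwd = 500, 45        # reward rings released / reported

f_std = recovered_std / ringed_std         # 0.045
f_rwd = recovered_rwd / ringed_rwd         # 0.090
lam_hat = f_std / f_rwd                    # estimated reporting rate
print(f"estimated reporting rate: {lam_hat:.2f}")   # -> 0.50
```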

12.
In this note, we consider data subject to middle censoring, where the variable of interest becomes unobservable when it falls within an interval of censorship. We demonstrate that the nonparametric maximum likelihood estimator (NPMLE) of the distribution function can be obtained by using Turnbull's (1976) EM algorithm or the self-consistent estimating equation of Jammalamadaka and Mangalam (2003), with an initial estimator that puts mass only on the innermost intervals. The consistency of the NPMLE can be established based on the asymptotic properties of self-consistent estimators (SCEs) with mixed interval-censored data (Yu et al., 2000, 2001).
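A generic Turnbull-type self-consistency (EM) iteration for mixed exact and middle-censored data is sketched below on a hand-supplied candidate support; identifying the innermost intervals that the NPMLE actually charges, the point of the note, is not attempted here, and all data are invented.

```python
import numpy as np

# Generic Turnbull-type self-consistency (EM) iteration for a mix of exact and
# middle-censored observations, on a hand-supplied candidate support set.
# Identifying the innermost intervals that the NPMLE actually charges (the
# point of the note above) is not attempted here.  All data are invented.
exact = np.array([1.0, 2.5, 4.0])                 # fully observed values
intervals = [(2.0, 3.0), (3.5, 5.0)]              # censoring intervals (l, r)
support = np.array([1.0, 2.5, 4.0, 4.5])          # candidate support points

# alpha[i, k] = 1 if observation i is consistent with support point k
alpha = np.vstack([(support == x).astype(float) for x in exact] +
                  [((support > l) & (support < r)).astype(float)
                   for l, r in intervals])

p = np.full(len(support), 1 / len(support))       # uniform starting masses
for _ in range(500):
    resp = alpha * p                              # E-step: membership weights
    resp /= resp.sum(axis=1, keepdims=True)
    p = resp.mean(axis=0)                         # M-step: average responsibilities
print(np.round(p, 4))                             # estimated masses on the support
```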

13.
This paper studies the notion of coherence in interval‐based dose‐finding methods. An incoherent decision is either (a) a recommendation to escalate the dose following an observed dose‐limiting toxicity or (b) a recommendation to deescalate the dose following a non–dose‐limiting toxicity. In a simulated example, we illustrate that the Bayesian optimal interval method and the Keyboard method are not coherent. We generated dose‐limiting toxicity outcomes under an assumed set of true probabilities for a trial of n=36 patients in cohorts of size 1, and we counted the number of incoherent dosing decisions that were made throughout this simulated trial. Each of the methods studied resulted in 13/36 (36%) incoherent decisions in the simulated trial. Additionally, for two different target dose‐limiting toxicity rates, 20% and 30%, and a sample size of n=30 patients, we randomly generated 100 dose‐toxicity curves and tabulated the number of incoherent decisions made by each method in 1000 simulated trials under each curve. For each method studied, the probability of incurring at least one incoherent decision during the conduct of a single trial is greater than 75%. Coherence is an important principle in the conduct of dose‐finding trials. Interval‐based methods violate this principle for cohorts of size 1 and require additional modifications to overcome this shortcoming. Researchers need to take a closer look at the dose-assignment behavior of interval‐based methods when using them to plan dose‐finding studies.
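The bookkeeping behind these counts is simple; here is a sketch for cohorts of size 1, with an invented dosing sequence.

```python
# Sketch of the bookkeeping used above: count incoherent moves in a sequence
# of dose assignments for cohorts of size 1.  The sequence is invented.
def count_incoherent(doses, tox):
    """doses[i]: dose for patient i; tox[i]: 0/1 DLT outcome for patient i."""
    bad = 0
    for i in range(len(doses) - 1):
        if tox[i] == 1 and doses[i + 1] > doses[i]:   # escalation after a DLT
            bad += 1
        if tox[i] == 0 and doses[i + 1] < doses[i]:   # deescalation after no DLT
            bad += 1
    return bad

print(count_incoherent([1, 2, 2, 3, 2], [0, 1, 0, 0, 1]))   # -> 1
```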

14.
Immuno‐oncology has emerged as an exciting new approach to cancer treatment. Common immunotherapy approaches include cancer vaccine, effector cell therapy, and T‐cell–stimulating antibody. Checkpoint inhibitors such as cytotoxic T lymphocyte–associated antigen 4 and programmed death‐1/L1 antagonists have shown promising results in multiple indications in solid tumors and hematology. However, the mechanisms of action of these novel drugs pose unique statistical challenges in the accurate evaluation of clinical safety and efficacy, including late‐onset toxicity, dose optimization, evaluation of combination agents, pseudoprogression, and delayed and lasting clinical activity. Traditional statistical methods may not be the most accurate or efficient. It is highly desirable to develop the most suitable statistical methodologies and tools to efficiently investigate cancer immunotherapies. In this paper, we summarize these issues and discuss alternative methods to meet the challenges in the clinical development of these novel agents. For safety evaluation and dose‐finding trials, we recommend the use of a time‐to‐event model‐based design to handle late toxicities, a simple 3‐step procedure for dose optimization, and flexible rule‐based or model‐based designs for combination agents. For efficacy evaluation, we discuss alternative endpoints/designs/tests including the time‐specific probability endpoint, the restricted mean survival time, the generalized pairwise comparison method, the immune‐related response criteria, and the weighted log‐rank or weighted Kaplan‐Meier test. The benefits and limitations of these methods are discussed, and some recommendations are provided for applied researchers to implement these methods in clinical practice.
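As one concrete example of the endpoints mentioned, the restricted mean survival time (RMST) is the area under the Kaplan-Meier curve up to a cutoff tau; a small sketch with invented data follows.

```python
import numpy as np

# Sketch of one endpoint named above: restricted mean survival time (RMST),
# the area under the Kaplan-Meier curve up to a cutoff tau.  Data invented.
def rmst(time, event, tau):
    order = np.argsort(time, kind="stable")        # stable sort keeps input order in ties
    at_risk, surv, last_t, area = len(time), 1.0, 0.0, 0.0
    for ti, di in zip(time[order], event[order]):
        if ti > tau:
            break
        area += surv * (ti - last_t)               # rectangle under the current step
        if di:                                     # KM curve drops at event times only
            surv *= 1 - 1 / at_risk
        at_risk -= 1
        last_t = ti
    return area + surv * (tau - last_t)

time = np.array([2.0, 4.0, 4.0, 7.0, 9.0, 12.0, 15.0])
event = np.array([1, 1, 0, 1, 0, 1, 0])            # 1 = death, 0 = censored
print(f"RMST up to t=12: {rmst(time, event, 12.0):.2f}")
```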

15.
Sargent et al (J Clin Oncol 23: 8664–8670, 2005) concluded that 3-year disease-free survival (DFS) can be considered a valid surrogate (replacement) endpoint for 5-year overall survival (OS) in clinical trials of adjuvant chemotherapy for colorectal cancer. We address the question of whether this conclusion holds for trials involving classes of treatments other than those considered by Sargent et al. Additionally, we assess whether the 3-year cutpoint is an optimal one. To this aim, we investigate whether the results reported by Sargent et al. could have been used to predict treatment effects in three centrally randomized adjuvant colorectal cancer trials performed by the Japanese Foundation for Multidisciplinary Treatment for Cancer (JFMTC) (Sakamoto et al. J Clin Oncol 22:484–492, 2004). Our analysis supports the conclusion of Sargent et al. and shows that using DFS at 2 or 3 years would be the best option for the prediction of OS at 5 years.

16.
Model‐based dose‐finding methods for a combination therapy involving two agents in phase I oncology trials typically include four design aspects: the size of the patient cohort, the three‐parameter dose‐toxicity model, the choice of start‐up rule, and whether or not to include a restriction on dose‐level skipping. The effect of each design aspect on the operating characteristics of the dose‐finding method has not been adequately studied. However, some studies have compared the performance of rival dose‐finding methods using design aspects outlined by the original studies. In this study, we featured these four well‐known design aspects and evaluated the independent effect of each on the operating characteristics of dose‐finding methods that include them. We performed simulation studies to examine the effect of these design aspects on the determination of the true maximum tolerated dose combinations (MTDCs) as well as exposure to unacceptable toxic dose combinations (UTDCs). The results demonstrated that the selection rates of MTDCs and UTDCs vary depending on the patient cohort size and restrictions on dose‐level skipping. However, the three‐parameter dose‐toxicity models and start‐up rules did not affect these rates. Copyright © 2016 John Wiley & Sons, Ltd.

17.
In this article, we consider permutation methods for multivariate testing on ordered categorical variables based on the nonparametric combination of dependent permutation tests (NPC; Pesarin and Salmaso, 2010). Furthermore, an extension of the nonparametric combination of dependent rankings (Arboretti et al., 2007) is proposed in order to construct a synthesis of composite indicators.

The methodological approaches are applied to a study of risk factors for skin cancer in a cohort of adult heart-transplant patients followed for a minimum of three years after transplantation (Belloni et al., 2004) and to a survey of tourists' opinions about the “Tre Cime” Park (District of Sesto Dolomites/Alta Pusteria, Italy).
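A minimal sketch of the NPC idea with Fisher's combining function is given below: partial permutation tests are computed on the same permutations, converted to permutation significance levels, and combined. It uses invented ordinal scores and difference-of-means partial tests, and is not the authors' software.

```python
import numpy as np

# Minimal NPC sketch with Fisher's combining function; not the authors'
# software.  Partial tests: absolute difference of group means per variable,
# all computed on the same permutations.  Ordinal scores are invented.
rng = np.random.default_rng(0)
X = rng.integers(1, 5, size=(40, 3)).astype(float)   # 3 ordered categorical scores
g = np.repeat([0, 1], 20)                            # two groups of 20 subjects

B = 2000
def stats(perm):
    return np.abs(X[perm == 1].mean(axis=0) - X[perm == 0].mean(axis=0))

T = np.vstack([stats(g)] + [stats(rng.permutation(g)) for _ in range(B)])
# partial significance of each statistic within its own permutation distribution
lam = (T[None, :, :] >= T[:, None, :]).mean(axis=1)
psi = -2 * np.log(lam).sum(axis=1)                   # Fisher combining function
print(f"global NPC p-value: {np.mean(psi[1:] >= psi[0]):.4f}")
```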

18.
In recent years, seamless phase I/II clinical trials have drawn much attention, as they consider both toxicity and efficacy endpoints in finding an optimal dose (OD). Engaging an appropriate number of patients in a trial is a challenging task. This paper proposes a dynamic stopping rule to save resources in phase I/II trials; that is, the stopping rule aims to spare patients from unnecessarily toxic or subtherapeutic doses. We allow a trial to stop early when the widths of the confidence intervals for the dose-response parameters become sufficiently narrow, or when the sample size reaches a predefined cap, whichever comes first. A simulation study of dose-response scenarios in various settings demonstrates that the proposed stopping rule engages an appropriate number of patients. Therefore, we suggest its use in clinical trials.
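A sketch of such a stopping check appears below, using Wald interval widths. The tolerance, cap, and standard errors are invented; the rule in the paper is tied to the dose-response model fitted there.

```python
import numpy as np

# Sketch of the stopping check: stop when the Wald confidence intervals for
# the dose-response parameters are all narrow enough, or at the cap n_max.
# The tolerance, cap, and standard errors below are invented.
def stop_now(se_params, n, n_max=60, width_tol=1.0, z=1.96):
    """se_params: current standard errors of the fitted model parameters."""
    widths = 2 * z * np.asarray(se_params)         # full CI widths
    return n >= n_max or bool(np.all(widths <= width_tol))

print(stop_now([0.40, 0.22], n=33))                # False: intervals still wide
print(stop_now([0.20, 0.15], n=33))                # True: widths 0.78 and 0.59
```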

19.
Regression Type Estimators Using Multiple Auxiliary Information
In this paper we consider a practical situation where information on two auxiliary variables related to the study variable is available at different levels. Following Kiregyera (1980, 1984), who obtained a chain ratio-to-regression estimator and a regression-to-regression estimator, we study several estimators that arise naturally in this context and compare them under the mean square error criterion. We extend these results to the case when multiple auxiliary information is available.
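As an illustration of the chained use of two auxiliary variables, here is a sketch of a Kiregyera-type regression-to-regression (chain) estimator under two-phase sampling, with invented data: y is observed only in the second-phase subsample, x in the larger first-phase sample, and the population mean of z is known.

```python
import numpy as np

# Sketch of a Kiregyera-type chain regression estimator under two-phase
# sampling: y is seen only in the second-phase subsample, x in the larger
# first-phase sample, and the population mean of z is known.  Data invented.
rng = np.random.default_rng(1)
z1 = rng.normal(10, 2, 200)                          # first phase: z and x observed
x1 = 2 + 1.5 * z1 + rng.normal(0, 1, 200)
idx = rng.choice(200, 40, replace=False)             # second phase: y also observed
x2 = x1[idx]
y2 = 5 + 0.8 * x2 + rng.normal(0, 1, 40)
Z_bar = 10.0                                         # known population mean of z

b_yx = np.cov(y2, x2)[0, 1] / np.var(x2, ddof=1)     # regression of y on x
b_xz = np.cov(x1, z1)[0, 1] / np.var(z1, ddof=1)     # regression of x on z
x_chain = x1.mean() + b_xz * (Z_bar - z1.mean())     # x-mean improved through z
y_hat = y2.mean() + b_yx * (x_chain - x2.mean())     # chained regression estimate
print(f"chain regression estimate of mean(y): {y_hat:.3f}")
```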
