Similar Articles
20 similar articles retrieved.
1.
Patient heterogeneity may complicate dose‐finding in phase 1 clinical trials if the dose‐toxicity curves differ between subgroups. Conducting separate trials within subgroups may lead to infeasibly small sample sizes in subgroups having low prevalence. Alternatively, it is not obvious how to conduct a single trial while accounting for heterogeneity. To address this problem, we consider a generalization of the continual reassessment method based on a hierarchical Bayesian dose‐toxicity model that borrows strength between subgroups under the assumption that the subgroups are exchangeable. We evaluate a design using this model that includes subgroup‐specific dose selection and safety rules. A simulation study is presented that compares this method with three alternative approaches, based on nonhierarchical models, that make different assumptions about within‐subgroup dose‐toxicity curves. The simulations show that the hierarchical model‐based method is recommended in settings where the dose‐toxicity curves are exchangeable between subgroups. We present practical guidelines for application and provide computer programs for trial simulation and conduct.
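As a concrete (and entirely illustrative) sketch of the borrowing-of-strength idea, the snippet below implements a two-subgroup, one-parameter power-model CRM: the exchangeable prior theta_g ~ N(mu, sigma^2), mu ~ N(0, tau^2) integrates to a bivariate normal on (theta_1, theta_2), so the joint posterior can be evaluated on a grid. The skeleton, hyperparameters, and data are made up; the authors' actual model and programs will differ.

```python
# Minimal sketch, not the authors' implementation: hierarchical two-subgroup CRM.
import numpy as np
from scipy.stats import multivariate_normal

skeleton = np.array([0.05, 0.10, 0.20, 0.30, 0.45])  # prior toxicity guesses per dose
sigma2, tau2, target = 0.5, 0.5, 0.25                # hypothetical hyperparameters

# toxicity data per subgroup: (dose index, n treated, n toxicities)
data = {0: [(0, 3, 0), (1, 3, 1)], 1: [(0, 3, 0), (1, 3, 2)]}

g1, g2 = np.meshgrid(np.linspace(-3, 3, 201), np.linspace(-3, 3, 201))
# integrating out mu gives Cov(theta_1, theta_2) = tau^2
prior = multivariate_normal.pdf(
    np.dstack([g1, g2]), mean=[0, 0],
    cov=[[sigma2 + tau2, tau2], [tau2, sigma2 + tau2]])

loglik = np.zeros_like(g1)
for g, theta in ((0, g1), (1, g2)):
    for d, n, y in data[g]:
        p = skeleton[d] ** np.exp(theta)             # power-model dose-toxicity curve
        loglik += y * np.log(p) + (n - y) * np.log(1 - p)

post = prior * np.exp(loglik - loglik.max())
post /= post.sum()

# subgroup-specific recommendation: dose whose posterior-mean toxicity is nearest target
for g, theta in ((0, g1), (1, g2)):
    ptox = np.array([(skeleton[d] ** np.exp(theta) * post).sum()
                     for d in range(len(skeleton))])
    print(f"subgroup {g}: posterior mean tox {ptox.round(3)}, "
          f"recommend dose {int(np.argmin(np.abs(ptox - target)))}")
```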

2.
With the development of molecular targeted drugs, predictive biomarkers have played an increasingly important role in identifying patients who are likely to receive clinically meaningful benefits from experimental drugs (i.e., the sensitive subpopulation), even in early clinical trials. For continuous biomarkers, such as mRNA levels, it is challenging to determine a cutoff value for the sensitive subpopulation, and widely accepted study designs and statistical approaches are not currently available. In this paper, we propose the Bayesian adaptive patient enrollment restriction (BAPER) approach to identify the sensitive subpopulation while restricting enrollment of patients from the insensitive subpopulation based on the results of interim analyses, in a randomized phase 2 trial with a time‐to‐event outcome and a single biomarker. Applying a four‐parameter change‐point model to the relationship between the biomarker and the hazard ratio, we calculate the posterior distribution of the cutoff value that exhibits the target hazard ratio and use it for restricting enrollment and identifying the sensitive subpopulation. We also consider interim monitoring rules for termination because of futility or efficacy. Extensive simulations demonstrated that our proposed approach reduced the number of enrolled patients from the insensitive subpopulation, relative to an approach with no enrollment restriction, without reducing the likelihood of a correct decision for the next trial (no‐go, go with the entire population, or go with the sensitive subpopulation) or correct identification of the sensitive subpopulation. Additionally, the four‐parameter change‐point model performed better than a commonly used dichotomization approach over a wide range of simulation scenarios. Copyright © 2016 John Wiley & Sons, Ltd.
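The abstract does not give the exact parameterization of the four-parameter change-point model; the sketch below shows one plausible flat-linear-flat shape for the biomarker-to-log-hazard-ratio relationship, with hypothetical parameter names (g_low, g_high, x1, x2), and reads a cutoff off the curve at a target hazard ratio.

```python
# Illustrative only: one plausible four-parameter change-point shape for the
# biomarker -> log hazard-ratio relationship (flat, then linear, then flat).
import numpy as np

def log_hr(x, g_low, g_high, x1, x2):
    """Piecewise log hazard ratio: g_low below x1, g_high above x2,
    linear interpolation in between (np.interp clamps outside [x1, x2])."""
    return np.interp(np.asarray(x, dtype=float), [x1, x2], [g_low, g_high])

grid = np.linspace(0, 10, 1001)
hr = np.exp(log_hr(grid, 0.0, -1.2, 3.0, 7.0))   # HR ~ 1 below x1, ~ 0.30 above x2
# a sensitive-subpopulation cutoff can be read off as the smallest biomarker
# value whose hazard ratio falls below a target, e.g. 0.7:
print("cutoff estimate:", grid[np.argmax(hr <= 0.7)])
```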

3.
With the advancement of technologies such as genomic sequencing, predictive biomarkers have become a useful tool for the development of personalized medicine. Predictive biomarkers can be used to select subsets of patients that are most likely to benefit from a treatment. A number of approaches for subgroup identification have been proposed in recent years. Although overviews of subgroup identification methods are available, systematic comparisons of their performance in simulation studies are rare. We compared interaction trees (IT), model‐based recursive partitioning, subgroup identification based on differential effect, the simultaneous threshold interaction modeling algorithm (STIMA), and adaptive refinement by directed peeling in a simulation study using a structured approach. In order to identify a target population for subsequent trials, a selection of the identified subgroups is needed. We therefore propose a subgroup criterion leading to a target subgroup consisting of the identified subgroups with an estimated treatment difference no less than a pre‐specified threshold; a sketch of this criterion is given below. In our simulation study, we evaluated these methods using measures for binary classification, such as sensitivity and specificity. In settings with large effects or huge sample sizes, most methods perform well. For more realistic settings in drug development involving data from a single trial only, however, none of the methods seems suitable for selecting a target population. Using the subgroup criterion as an alternative to the proposed pruning procedures, STIMA and IT can improve their performance in some settings. The methods and the subgroup criterion are illustrated by an application in amyotrophic lateral sclerosis.
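The proposed subgroup criterion is simple to state in code. The sketch below (with hypothetical data structures) pools all identified subgroups whose estimated treatment difference reaches the threshold into the target subgroup, then scores the result by sensitivity and specificity as in the paper's evaluation.

```python
# Minimal sketch of the subgroup criterion; data structures are hypothetical.
import numpy as np

def target_subgroup(memberships, est_diffs, threshold):
    """memberships: (n_patients, n_subgroups) boolean matrix of identified
    subgroups; est_diffs: estimated treatment difference per subgroup."""
    keep = np.asarray(est_diffs) >= threshold
    return memberships[:, keep].any(axis=1)   # patient is in the target subgroup

rng = np.random.default_rng(1)
memberships = rng.random((8, 3)) < 0.4
selected = target_subgroup(memberships, est_diffs=[0.05, 0.30, 0.22], threshold=0.2)

# binary-classification summary against a (simulated) true sensitive subgroup
truth = rng.random(8) < 0.5
sens = (selected & truth).sum() / max(truth.sum(), 1)
spec = (~selected & ~truth).sum() / max((~truth).sum(), 1)
print(selected, round(sens, 2), round(spec, 2))
```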

4.
Two‐stage clinical trial designs may be efficient in pharmacogenetics research when there is some but inconclusive evidence of effect modification by a genomic marker. Two‐stage designs allow early stopping for efficacy or futility and can offer the additional opportunity to enrich the study population to a specific patient subgroup after an interim analysis. This study compared sample size requirements for fixed parallel group, group sequential, and adaptive selection designs with equal overall power and control of the family‐wise type I error rate. The designs were evaluated across scenarios that defined the effect sizes in the marker‐positive and marker‐negative subgroups and the prevalence of marker‐positive patients in the overall study population. Effect sizes were chosen to reflect realistic planning scenarios, where at least some effect is present in the marker‐negative subgroup. In addition, scenarios were considered in which the assumed 'true' subgroup effects (i.e., the postulated effects) differed from those hypothesized at the planning stage. As expected, both two‐stage designs generally required fewer patients than a fixed parallel group design, and the advantage increased as the difference between subgroups increased. The adaptive selection design added little further reduction in sample size, as compared with the group sequential design, when the postulated effect sizes were equal to those hypothesized at the planning stage. However, when the postulated effects deviated strongly in favor of enrichment, the comparative advantage of the adaptive selection design increased, which precisely reflects the adaptive nature of the design. Copyright © 2013 John Wiley & Sons, Ltd.
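For intuition on why enrichment can save randomized patients, here is a back-of-the-envelope comparison (not the paper's exact computation), using the standard two-sample z-test formula n per arm = 2(z_alpha + z_beta)^2 / delta^2 for a standardized effect delta; the subgroup effects and prevalence below are assumptions.

```python
# Rough fixed-design vs enriched-design sample sizes; numbers are illustrative.
from scipy.stats import norm

def n_per_arm(delta, alpha=0.025, power=0.9):
    return 2 * (norm.ppf(1 - alpha) + norm.ppf(power)) ** 2 / delta ** 2

d_pos, d_neg, prev = 0.5, 0.2, 0.4            # assumed subgroup effects, prevalence
d_overall = prev * d_pos + (1 - prev) * d_neg

n_full = n_per_arm(d_overall)                 # fixed design, full population
n_enriched = n_per_arm(d_pos)                 # marker-positive patients only
print(round(n_full), "randomized per arm vs", round(n_enriched), "marker-positive;")
print("enrichment must screen about", round(n_enriched / prev), "patients per arm")
```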

5.
The phase II basket trial in oncology is a novel design that enables the simultaneous assessment of the treatment effects of one anti-cancer targeted agent in multiple cancer types. Biomarkers may be associated with clinical outcomes and can re-define clinically meaningful treatment effects. It is therefore natural to develop a biomarker-based basket design that allows prospective enrichment of the trial through adaptive selection of the biomarker-positive (BM+) subjects who are most sensitive to the experimental treatment. We propose a two-stage phase II adaptive biomarker basket (ABB) design based on a potential predictive biomarker measured on a continuous scale. At Stage 1, the design incorporates a biomarker cutoff estimation procedure via a hierarchical Bayesian model with the biomarker as a covariate (HBMbc). At Stage 2, the design enrolls only BM+ subjects, defined as those with biomarker values exceeding the biomarker cutoff within each cancer type, and subsequently assesses early efficacy and/or futility stopping through pre-defined interim analyses. At the end of the trial, the response rate of all BM+ subjects for each cancer type can guide drug development, while the data from all subjects can be used to further model the relationship between the biomarker value and the clinical outcome for potential future research. Extensive simulation studies show that the ABB design produces a good estimate of the biomarker cutoff, selects BM+ subjects with high accuracy, and can outperform the existing phase II basket biomarker cutoff design under various scenarios.
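A much-simplified, single-cancer-type, maximum-likelihood analogue of the Stage 1 cutoff step is sketched below (the paper's HBMbc model is hierarchical and Bayesian): fit a logistic response-versus-biomarker curve and invert it at a target response rate. All numbers are invented.

```python
# Simplified stand-in for the Stage 1 cutoff estimation; not the HBMbc model.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit, logit

rng = np.random.default_rng(7)
x = rng.uniform(0, 1, 60)                         # biomarker values
y = rng.random(60) < expit(-3 + 6 * x)            # binary response

def nll(b):                                       # logistic negative log-likelihood
    p = np.clip(expit(b[0] + b[1] * x), 1e-10, 1 - 1e-10)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

b0, b1 = minimize(nll, x0=[0.0, 1.0]).x
target = 0.30                                     # target response rate
cutoff = (logit(target) - b0) / b1                # solve p(cutoff) = target
print("estimated biomarker cutoff:", round(cutoff, 3))
# Stage 2 would then enroll only subjects with biomarker values above the cutoff.
```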

6.
In an affected‐sib‐pair genetic linkage analysis, identical‐by‐descent (IBD) data for affected sib pairs are routinely collected at a large number of markers along chromosomes. Under very general genetic assumptions, the IBD distribution at each marker satisfies the possible triangle constraint. Statistical analysis of IBD data should thus utilize this information to improve efficiency. At the same time, this constraint renders the usual regularity conditions for likelihood‐based statistical methods unsatisfied. In this paper, the authors study the asymptotic properties of the likelihood ratio test (LRT) under the possible triangle constraint. They derive the limiting distribution of the LRT statistic based on data from a single locus. They investigate the precision of the asymptotic distribution and the power of the test by simulation. They also study the test based on the supremum of the LRT statistics over the markers distributed throughout a chromosome. Instead of deriving a limiting distribution for this test, they use a mixture of chi‐squared distributions to approximate its true distribution. Their simulation results show that this approach has desirable simplicity and satisfactory precision.
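Approximating a constrained LRT null by a chi-squared mixture reduces, in code, to a weighted tail probability. The helper below is generic; the placeholder weights are not the paper's possible-triangle values, which depend on the constraint geometry.

```python
# Generic chi-bar-squared tail probability; the mixing weights are placeholders.
from scipy.stats import chi2

def chibar_sf(x, weights):
    """P(T > x) for T ~ sum_k weights[k] * chi2_k, where chi2_0 is a point
    mass at zero (it exceeds x only when x <= 0)."""
    p = weights[0] * (x <= 0)
    for df, w in enumerate(weights[1:], start=1):
        p += w * chi2.sf(x, df)
    return p

print(chibar_sf(3.84, weights=[0.25, 0.5, 0.25]))   # hypothetical mixing weights
```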

7.
Testing for homogeneity in finite mixture models has been investigated by many researchers. The asymptotic null distribution of the likelihood ratio test (LRT) is very complex and difficult to use in practice. We propose a modified LRT for homogeneity in finite mixture models with a general parametric kernel distribution family. The modified LRT has a χ²-type null limiting distribution and is asymptotically most powerful under local alternatives. Simulations show that it performs better than competing tests. They also reveal that the limiting distribution, with some adjustment, can satisfactorily approximate the quantiles of the test statistic, even for moderate sample sizes.
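As a hedged illustration of the modified-LRT idea, the sketch below penalizes the mixing proportion of a two-component normal mean mixture by C·log(4p(1−p)), a Chen-style penalty that keeps p off the boundary; the kernel, penalty constant, and starting values are illustrative rather than the paper's exact construction.

```python
# Penalized (modified) LRT for homogeneity in a unit-variance normal mean mixture.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def penalized_nll(par, x, C=1.0):
    p = 1 / (1 + np.exp(-par[0]))                     # mixing proportion in (0, 1)
    dens = p * norm.pdf(x, par[1]) + (1 - p) * norm.pdf(x, par[2])
    return -(np.sum(np.log(dens)) + C * np.log(4 * p * (1 - p)))

rng = np.random.default_rng(3)
x = rng.normal(0.0, 1.0, 200)                         # data generated under H0

# H1: best penalized two-component fit (a few restarts to dodge local optima)
fits = [minimize(penalized_nll, [s, -0.5, 0.5], args=(x,)) for s in (-1.0, 0.0, 1.0)]
ll1 = -min(f.fun for f in fits)
# H0: single normal; the penalty vanishes at p = 1/2 with equal means
ll0 = np.sum(norm.logpdf(x, x.mean()))
print("modified LRT statistic:", round(2 * (ll1 - ll0), 3))
```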

8.
Predictive enrichment strategies use biomarkers to selectively enroll oncology patients into clinical trials to more efficiently demonstrate therapeutic benefit. Because the enriched population differs from the patient population eligible for screening with the biomarker assay, there is potential for bias when estimating clinical utility for the screening eligible population if the selection process is ignored. We write estimators of clinical utility as integrals averaging regression model predictions over the conditional distribution of the biomarker scores defined by the assay cutoff and discuss the conditions under which consistent estimation can be achieved while accounting for some nuances that may arise as the biomarker assay progresses toward a companion diagnostic. We outline and implement a Bayesian approach in estimating these clinical utility measures and use simulations to illustrate performance and the potential biases when estimation naively ignores enrichment. Results suggest that the proposed integral representation of clinical utility in combination with Bayesian methods provide a practical strategy to facilitate cutoff decision‐making in this setting. Copyright © 2015 John Wiley & Sons, Ltd.
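A Monte Carlo version of the integral representation is shown below with a hypothetical stand-in regression model: clinical utility is the average of model predictions over biomarker scores conditional on exceeding the assay cutoff (the paper's approach is Bayesian; this is only the plug-in idea).

```python
# Monte Carlo form of the integral: average predictions over scores above cutoff.
import numpy as np

rng = np.random.default_rng(11)
biomarker = rng.lognormal(0.0, 0.5, 10_000)     # screen-eligible biomarker scores

def predicted_response(x, treated):
    # stand-in regression model for P(response | biomarker, arm)
    return 1 / (1 + np.exp(-(-1.5 + 0.8 * x + 0.6 * treated)))

cutoff = 1.2
selected = biomarker[biomarker >= cutoff]       # enriched (trial-eligible) scores

utility = (predicted_response(selected, 1) - predicted_response(selected, 0)).mean()
print("estimated clinical utility above cutoff:", round(utility, 3))
```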

9.
Dose‐escalation trials commonly assume a homogeneous trial population to identify a single recommended dose of the experimental treatment for use in future trials. Wrongly assuming a homogeneous population can lead to a diluted treatment effect. Equally, exclusion of a subgroup that could in fact benefit from the treatment can cause a beneficial treatment effect to be missed. Accounting for a potential subgroup effect (i.e., a difference in reaction to the treatment between subgroups) in dose‐escalation can increase the chance of finding the treatment to be efficacious in a larger patient population. A standard Bayesian model‐based method of dose‐escalation is extended to account for a subgroup effect by including covariates for subgroup membership in the dose‐toxicity model. A stratified design performs well but uses the available data inefficiently and makes no inference concerning the presence of a subgroup effect. A hypothesis test could potentially rectify this problem, but the small sample sizes result in a low‐powered test. As an alternative, the use of spike and slab priors for variable selection is proposed. This method continually assesses the presence of a subgroup effect, enabling efficient use of the available trial data throughout escalation and in identifying the recommended dose(s). A simulation study, based on real trial data, was conducted and this design was found to be both promising and feasible.
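A brute-force illustration of the spike-and-slab idea (not the trial's actual model or priors): the subgroup coefficient c in logit P(tox) = a + b·dose + c·subgroup is either exactly zero (spike, prior probability 1/2) or drawn from a normal slab, and the posterior inclusion probability follows from the two marginal likelihoods, computed here by grid integration.

```python
# Posterior probability of a subgroup effect via spike-and-slab grid integration.
import numpy as np
from scipy.special import expit
from scipy.stats import norm

dose = np.array([-1.0, -1.0, 0.0, 0.0, 1.0, 1.0])   # standardized doses
grp  = np.array([0, 1, 0, 1, 0, 1])                  # subgroup membership
tox  = np.array([0, 0, 0, 1, 1, 1])                  # observed toxicities

agrid = np.linspace(-4, 4, 41)
bgrid = np.linspace(0.05, 4, 41)                     # slope restricted positive
cgrid = np.linspace(-4, 4, 41)
a, b, c = np.meshgrid(agrid, bgrid, cgrid, indexing="ij")

def log_lik(cc):
    ll = np.zeros_like(a)
    for d, g, y in zip(dose, grp, tox):
        p = expit(a + b * d + cc * g)
        ll += np.log(p) if y else np.log(1 - p)
    return ll

vol = (agrid[1] - agrid[0]) * (bgrid[1] - bgrid[0])
prior_ab = norm.pdf(a, 0, 2) * norm.pdf(b, 1, 1)     # hypothetical priors on a, b
m_spike = np.sum((prior_ab * np.exp(log_lik(0.0)))[:, :, 0]) * vol
m_slab = np.sum(prior_ab * norm.pdf(c, 0, 2) * np.exp(log_lik(c))) \
    * vol * (cgrid[1] - cgrid[0])
print("posterior P(subgroup effect) =", round(m_slab / (m_slab + m_spike), 3))
```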

10.
The zero-inflated negative binomial (ZINB) model is used to account for the overdispersion commonly detected in data that are initially analyzed under the zero-inflated Poisson (ZIP) model. Tests for overdispersion (the Wald test, likelihood ratio test [LRT], and score test) based on the ZINB model have been developed for use in ZIP regression models. Owing to its similarity to the ZINB model, we consider the zero-inflated generalized Poisson (ZIGP) model as an alternative model for overdispersed zero-inflated count data. The score test has an advantage over the LRT and the Wald test in that it only requires the parameter of interest to be estimated under the null hypothesis. This paper proposes score tests for overdispersion based on the ZIGP model and shows that the derived score statistics are exactly the same as those under the ZINB model. A simulation study indicates that the proposed score statistics are preferred to the other tests because of their higher empirical power. In practice, based on the approximate mean–variance relationship in the data, either the ZINB or the ZIGP model can be considered, and a formal score test based on the asymptotic standard normal distribution can be employed to assess overdispersion in the ZIP model. We provide an example to illustrate the procedures for data analysis.
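The zero-inflated score statistics themselves are involved; as a simpler, well-known analogue, here is the classical score test of Poisson equidispersion against negative binomial overdispersion, T = Σ[(y−μ̂)² − y] / √(2Σμ̂²), asymptotically standard normal under the null, shown for an intercept-only model where μ̂ is the sample mean.

```python
# Classical (non-zero-inflated) overdispersion score test as a simple analogue.
import numpy as np
from scipy.stats import norm

def overdispersion_score_test(y):
    mu = y.mean()                     # intercept-only Poisson MLE
    t = np.sum((y - mu) ** 2 - y) / np.sqrt(2 * len(y) * mu ** 2)
    return t, norm.sf(t)              # one-sided p-value for overdispersion

rng = np.random.default_rng(5)
y = rng.negative_binomial(n=2, p=0.4, size=300)   # overdispersed counts
t, p = overdispersion_score_test(y)
print(f"score statistic {t:.2f}, p-value {p:.4f}")
```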

11.
Sequential administration of immunotherapy following radiotherapy (immunoRT) has attracted much attention in cancer research. Because radiotherapy upregulates the expression of a predictive biomarker for immunotherapy, novel clinical trial designs are needed for immunoRT to identify patient subgroups and the optimal dose for each subgroup. In this article, we propose a Bayesian phase I/II design for immunotherapy administered after standard-dose radiotherapy for this purpose. We construct a latent subgroup membership variable and model it as a function of the baseline and pre–post radiotherapy change in the predictive biomarker measurements. Conditional on the latent subgroup membership of each patient, we jointly model the continuous immune response and the binary efficacy outcome using plateau models, and model toxicity using the equivalent toxicity score approach to account for toxicity grades. During the trial, based on accumulating data, we continuously update model estimates and adaptively randomize patients to admissible doses. Simulation studies and an illustrative trial application show that our design has good operating characteristics in terms of identifying both patient subgroups and the optimal dose for each subgroup.
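The plateau idea is easy to sketch. Below is one illustrative piecewise-linear plateau dose-response shape; the parameterization is an assumption for illustration, not the authors' exact model.

```python
# Illustrative plateau dose-response: rises with dose, flat beyond the plateau point.
import numpy as np

def plateau_mean(dose, beta0, beta1, tau):
    """beta0 + beta1 * dose below the plateau location tau, flat afterwards."""
    return beta0 + beta1 * np.minimum(dose, tau)

doses = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
print(plateau_mean(doses, beta0=0.2, beta1=0.3, tau=2.0))
# In such a design, each latent subgroup would carry its own (beta, tau), so the
# optimal dose for a subgroup is roughly the smallest dose reaching its plateau.
```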

12.
Overdispersion is a common phenomenon in Poisson modeling. The generalized Poisson (GP) regression model accommodates both overdispersion and underdispersion in count data modeling, and is an increasingly popular platform for modeling overdispersed count data. The Poisson model is a special case of the family of models that may be specified by GP regression. Thus, we may derive a test of overdispersion that compares the equidispersed Poisson model against the more general GP regression model. The score test has an advantage over the likelihood ratio test (LRT) and the Wald test in that it only requires the parameter of interest to be estimated under the null hypothesis (the Poisson model). Herein, we propose a score test for overdispersion based on the GP model (specifically the GP-2 model) and compare the power of the test with the LRT and Wald tests. A simulation study indicates that the proposed score test, based on the asymptotic standard normal distribution, is more appropriate in practical applications.
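For contrast with the score test, the LRT requires fitting the larger model too. The sketch below uses a negative binomial (NB2, Var = μ + αμ²) alternative as a stand-in for GP-2, since both encode overdispersion through an extra parameter, with the boundary-corrected null 0.5χ²₀ + 0.5χ²₁ for testing α = 0.

```python
# LRT of Poisson against an overdispersed alternative (NB2 as a GP-2 stand-in).
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import chi2, nbinom, poisson

rng = np.random.default_rng(9)
y = rng.negative_binomial(n=2, p=0.4, size=300)
mu = y.mean()                                    # intercept-only MLE of the mean

def nb_nll(log_alpha):
    n = np.exp(-log_alpha)                       # NB size parameter = 1/alpha
    return -nbinom.logpmf(y, n, n / (n + mu)).sum()

fit = minimize_scalar(nb_nll, bounds=(-10, 5), method="bounded")
lrt = 2 * (-fit.fun - poisson.logpmf(y, mu).sum())
print("LRT =", round(lrt, 2), " p =", round(0.5 * chi2.sf(lrt, 1), 4))
```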

13.
The median is a commonly used parameter to characterize biomarker data. In particular, with two vastly different underlying distributions, comparing medians provides different information than comparing means; however, very few tests for medians are available. We propose a series of two‐sample median‐specific tests using empirical likelihood methodology and investigate their properties. We present the technical details of incorporating the relevant constraints into the empirical likelihood function for in‐depth median testing. An extensive Monte Carlo study shows that the proposed tests have excellent operating characteristics even in unfavourable situations such as non‐exchangeability under the null hypothesis. We apply the proposed methods to analyze biomarker data from Western blot analysis, comparing normal cells with bronchial epithelial cells from a case–control study. The Canadian Journal of Statistics 39: 671–689; 2011. © 2011 Statistical Society of Canada
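As a hedged sketch of empirical-likelihood median testing (not the authors' exact constrained construction): for a single sample, the EL ratio for "median = m" has the closed form −2 log R = −2[k log(n/(2k)) + (n−k) log(n/(2(n−k)))], where k = #{xᵢ ≤ m}; profiling a common m over a grid and summing across the two samples gives a simple test calibrated against χ²₁.

```python
# Two-sample empirical-likelihood median comparison, profiled over a common median.
import numpy as np
from scipy.stats import chi2

def el_median_stat(x, m):
    n, k = len(x), np.count_nonzero(x <= m)
    if k == 0 or k == n:                   # median constraint infeasible at this m
        return np.inf
    return -2 * (k * np.log(n / (2 * k)) + (n - k) * np.log(n / (2 * (n - k))))

def two_sample_el_median_test(x, y):
    grid = np.unique(np.concatenate([x, y]))
    stat = min(el_median_stat(x, m) + el_median_stat(y, m) for m in grid)
    return stat, chi2.sf(stat, df=1)

rng = np.random.default_rng(2)
x, y = rng.exponential(1.0, 80), rng.exponential(1.6, 80)
print(two_sample_el_median_test(x, y))
```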

14.
Subgroup detection has received increasing attention recently in different fields such as clinical trials, public management, and market segmentation analysis. In these fields, people often face time‐to‐event data, which are commonly subject to right censoring. This paper proposes a semiparametric Logistic‐Cox mixture model for subgroup analysis when the outcome of interest is an event time subject to right censoring. The proposed method mainly consists of a likelihood ratio‐based testing procedure for the existence of subgroups. An expectation–maximization iteration is applied to improve the testing power, and a model‐based bootstrap approach is developed to implement the testing procedure. When subgroups exist, one can also use the proposed model to estimate the subgroup effect and construct predictive scores for subgroup membership. The large sample properties of the proposed method are studied. The finite sample performance of the proposed method is assessed by simulation studies. A real data example is also provided for illustration.
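The model-based bootstrap recipe is generic even though the paper's model is a Logistic-Cox mixture: fit the null model (no subgroup), compute the LRT against a mixture alternative via EM, then calibrate by simulating from the fitted null. It is sketched below for a two-component normal mixture as a stand-in for the semiparametric survival model.

```python
# Model-based (parametric) bootstrap calibration of a mixture LRT; Gaussian stand-in.
import numpy as np
from scipy.stats import norm

def mixture_loglik(x, iters=100):
    # EM for a two-component normal mixture with a common variance
    p, m1, m2, s = 0.5, np.quantile(x, 0.25), np.quantile(x, 0.75), x.std()
    for _ in range(iters):
        w1 = p * norm.pdf(x, m1, s)
        w = w1 / (w1 + (1 - p) * norm.pdf(x, m2, s))
        p = np.clip(w.mean(), 0.05, 0.95)          # keep away from the boundary
        m1, m2 = np.average(x, weights=w), np.average(x, weights=1 - w)
        s = np.sqrt(np.mean(w * (x - m1) ** 2 + (1 - w) * (x - m2) ** 2))
    return np.log(p * norm.pdf(x, m1, s) + (1 - p) * norm.pdf(x, m2, s)).sum()

def lrt_stat(x):
    return 2 * (mixture_loglik(x) - norm.logpdf(x, x.mean(), x.std()).sum())

rng = np.random.default_rng(4)
x = rng.normal(0, 1, 150)                          # no true subgroup structure
obs = lrt_stat(x)
# bootstrap: simulate from the fitted null model and recompute the LRT
boot = [lrt_stat(rng.normal(x.mean(), x.std(), x.size)) for _ in range(199)]
pval = (1 + sum(b >= obs for b in boot)) / 200
print(f"LRT = {obs:.2f}, bootstrap p = {pval:.3f}")
```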

15.
In this paper, we consider testing the effects of treatment on survival time when a subject experiences an immediate intermediate event (IE) prior to death or a predetermined endpoint. A two-stage model incorporating both (i) the effects of the covariates on the immediate IE and (ii) survival regression on the immediate IE and other covariates is presented. We study the likelihood ratio test (LRT) for testing the treatment effect based on the proposed two-stage model. We propose two procedures for approximating the null distribution of the LRT: an asymptotic-based procedure and a resampling-based procedure. We numerically show the advantages of the two-stage modeling over the existing single-stage survival model with interactions between the covariates and the immediate IE. In addition, an illustrative empirical example is provided.

16.
Umbrella trials are an innovative trial design in which different treatments are matched with subtypes of a disease, with the matching typically based on a set of biomarkers. Consequently, when patients can be positive for more than one biomarker, they may be eligible for multiple treatment arms. In practice, different approaches could be applied to allocate patients who are positive for multiple biomarkers to treatments. However, to date there has been little exploration of how these approaches compare statistically. We conduct a simulation study to compare five approaches to handling treatment allocation in the presence of multiple biomarkers: equal randomisation; randomisation with fixed probability of allocation to control; Bayesian adaptive randomisation (BAR); constrained randomisation; and hierarchy of biomarkers. We evaluate these approaches under different scenarios in the context of a hypothetical phase II biomarker-guided umbrella trial. We define the pairings representing the pre-trial expectations on efficacy as linked pairs, and the other biomarker-treatment pairings as unlinked. The hierarchy and BAR approaches have the highest power to detect a treatment-biomarker linked interaction. However, the hierarchy procedure performs poorly if the pre-specified treatment-biomarker pairings are incorrect. The BAR method allocates a higher proportion of patients who are positive for multiple biomarkers to promising treatments when an unlinked interaction is present. In most scenarios, the constrained randomisation approach best balances allocation to all treatment arms. Pre-specification of an approach to deal with treatment allocation in the presence of multiple biomarkers is important, especially when overlapping subgroups are likely.
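Two of the compared rules are easy to state in code. The sketch below allocates a patient who is positive for several biomarkers under a fixed hierarchy and under a simple Beta-binomial version of BAR; the rankings and data are hypothetical, and the trial's actual BAR rule may differ.

```python
# Hierarchy vs a simple Bayesian adaptive randomisation rule for a multi-BM+ patient.
import numpy as np

responses = {"A": (3, 10), "B": (6, 10), "C": (1, 10)}    # (successes, n) so far
eligible = ["A", "B"]                                     # patient is BM+ for A and B

# hierarchy of biomarkers: fixed pre-trial ranking picks the top eligible arm
hierarchy = ["B", "A", "C"]
print("hierarchy allocates:", next(a for a in hierarchy if a in eligible))

# BAR: allocate with probability proportional to each eligible arm's
# posterior mean response rate under a Beta(1, 1) prior
rng = np.random.default_rng(0)
post_mean = np.array([(1 + responses[a][0]) / (2 + responses[a][1]) for a in eligible])
probs = post_mean / post_mean.sum()
print("BAR allocates:", rng.choice(eligible, p=probs), "with probs", probs.round(2))
```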

17.
In clinical studies, researchers measure patients' responses longitudinally. In recent studies, mixed models have been used to estimate effects at the individual level. On the other hand, Henderson et al. [3,4] developed a joint likelihood function that combines the likelihood functions of longitudinal biomarkers and survival times. They include random effects in the longitudinal component to determine whether a longitudinal biomarker is associated with time to an event. In this paper, we treat a longitudinal biomarker as a growth curve and extend Henderson's method to determine whether a longitudinal biomarker is associated with time to an event for multivariate survival data.

18.
Generalized exponential, geometric extreme exponential, and Weibull distributions are three non-negative skewed distributions that are suitable for analysing lifetime data. We present diagnostic tools based on the likelihood ratio test (LRT) and the minimum Kolmogorov distance (KD) method to discriminate between these models. The probability of correct selection was calculated for each model and for several combinations of shape parameters and sample sizes using Monte Carlo simulation. Application of the LRT and KD discrimination methods to some real data sets is also studied.
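A minimal sketch of both diagnostics for two of the three candidates: pick the model with the larger maximized likelihood (the LRT criterion) or the smaller Kolmogorov distance. The generalized exponential (GE) density, with cdf (1 − e^(−λx))^α, is coded directly; for brevity the KD is evaluated at the ML fits, whereas the paper's minimum-KD method minimizes the distance over the parameters.

```python
# Discriminating Weibull vs generalized exponential by likelihood and by KD.
import numpy as np
from scipy import stats
from scipy.optimize import minimize

rng = np.random.default_rng(6)
x = rng.weibull(1.5, 200) * 2.0                      # lifetimes, Weibull-generated

# Weibull MLE (location fixed at 0), its log-likelihood, and its KD
c, loc, scale = stats.weibull_min.fit(x, floc=0)
ll_w = stats.weibull_min.logpdf(x, c, 0, scale).sum()
kd_w = stats.kstest(x, stats.weibull_min(c, 0, scale).cdf).statistic

# GE MLE by direct optimization of the log-likelihood
def ge_nll(par):
    a, lam = np.exp(par)                             # keep parameters positive
    return -np.sum(np.log(a * lam) - lam * x + (a - 1) * np.log1p(-np.exp(-lam * x)))

fit = minimize(ge_nll, [0.0, 0.0])
a, lam = np.exp(fit.x)
ll_ge = -fit.fun
kd_ge = stats.kstest(x, lambda t: (1 - np.exp(-lam * t)) ** a).statistic

print("LRT criterion picks:", "Weibull" if ll_w > ll_ge else "GE")
print("KD criterion picks:", "Weibull" if kd_w < kd_ge else "GE")
```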

19.
Biomarkers that predict efficacy and safety for a given drug therapy are becoming increasingly important for treatment strategy and drug evaluation in personalized medicine. Methodology for appropriately identifying and validating such biomarkers is critically needed, although it is very challenging to develop, especially in trials of terminal diseases with survival endpoints. The marker‐by‐treatment predictiveness curve serves this need by visualizing the treatment effect on survival as a function of the biomarker for each treatment. In this article, we propose the weighted predictiveness curve (WPC). Depending on the nature of the data, it generates predictiveness curves using either parametric or nonparametric approaches. Especially for nonparametric predictiveness curves, by incorporating local assessment techniques, it requires minimal model assumptions and provides great flexibility to visualize the marker‐by‐treatment relationship. The WPC can be used to compare biomarkers and identify the one with the highest potential impact. Equally important, by simultaneously viewing several treatment‐specific predictiveness curves across the biomarker range, the WPC can also guide biomarker‐based treatment regimens. Simulations representing various scenarios are employed to evaluate the performance of the WPC. Application to a well‐known liver cirrhosis trial sheds new light on the data and leads to the discovery of novel patterns of treatment–biomarker interactions. Copyright © 2015 John Wiley & Sons, Ltd.
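The nonparametric backbone of a predictiveness curve can be sketched as a kernel-weighted local mean of outcome versus biomarker, computed per treatment arm (Nadaraya-Watson); the gap between the arms' curves visualizes the treatment effect across the biomarker range. The WPC adds weighting and survival outcomes on top of this idea, so the code below is only an illustration with made-up data.

```python
# Treatment-specific local-mean curves as a simple predictiveness-curve backbone.
import numpy as np

def local_mean(x, y, grid, h):
    w = np.exp(-0.5 * ((grid[:, None] - x[None, :]) / h) ** 2)  # Gaussian kernel
    return (w * y).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(8)
x = rng.uniform(0, 1, 400)                           # biomarker
trt = rng.integers(0, 2, 400)
y = 0.3 + 0.5 * x * trt + rng.normal(0, 0.2, 400)    # benefit grows with biomarker

grid = np.linspace(0.05, 0.95, 10)
curve1 = local_mean(x[trt == 1], y[trt == 1], grid, h=0.1)
curve0 = local_mean(x[trt == 0], y[trt == 0], grid, h=0.1)
print("treatment effect along the biomarker:", (curve1 - curve0).round(2))
```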

20.
In mixed linear models, it is frequently of interest to test hypotheses on the variance components. The F-test and the likelihood ratio test (LRT) are commonly used for such purposes. Current LRTs available in the literature are based on limiting distribution theory. With the development of finite sample distribution theory, it has become possible to derive the exact test for the likelihood ratio statistic. In this paper, we consider the problem of testing null hypotheses on the variance component in a one-way balanced random effects model. We use the exact test for the likelihood ratio statistic and compare the performance of the F-test and the LRT. Simulations provide strong support for the equivalence of these two tests. Furthermore, we prove the equivalence between these two tests mathematically.
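The F-test half of the equivalence follows directly from the ANOVA mean squares: for the balanced one-way random effects model y_ij = μ + a_i + e_ij, H₀: σ²_a = 0 is rejected for large F = MSB/MSW on (a−1, a(n−1)) degrees of freedom, as in the sketch below (the exact-LRT side requires the finite-sample machinery of the paper).

```python
# F-test for the variance component in a balanced one-way random effects model.
import numpy as np
from scipy.stats import f

rng = np.random.default_rng(10)
a, n = 6, 8                                        # groups, replicates per group
y = rng.normal(0, 1, (a, n)) + rng.normal(0, 0.8, (a, 1))   # true sigma_a = 0.8

msb = n * np.var(y.mean(axis=1), ddof=1)           # between-group mean square
msw = np.mean(np.var(y, axis=1, ddof=1))           # within-group (pooled) mean square
F = msb / msw
print("F =", round(F, 2), " p =", round(f.sf(F, a - 1, a * (n - 1)), 4))
# The paper shows the exact LRT for sigma_a^2 = 0 is a monotone function of F,
# hence equivalent to this test.
```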
