Similar Documents
20 similar documents found (search time: 578 ms)
1.
With the development of molecular targeted drugs, predictive biomarkers have played an increasingly important role in identifying patients who are likely to receive clinically meaningful benefits from experimental drugs (i.e., the sensitive subpopulation), even in early clinical trials. For continuous biomarkers, such as mRNA levels, it is challenging to determine a cutoff value for the sensitive subpopulation, and widely accepted study designs and statistical approaches are not currently available. In this paper, we propose the Bayesian adaptive patient enrollment restriction (BAPER) approach to identify the sensitive subpopulation while restricting enrollment of patients from the insensitive subpopulation based on the results of interim analyses, in a randomized phase 2 trial with a time-to-event endpoint and a single biomarker. Applying a four-parameter change-point model to the relationship between the biomarker and the hazard ratio, we calculate the posterior distribution of the cutoff value that exhibits the target hazard ratio and use it to restrict enrollment and to identify the sensitive subpopulation. We also consider interim monitoring rules for termination because of futility or efficacy. Extensive simulations demonstrated that our proposed approach reduced the number of enrolled patients from the insensitive subpopulation, relative to an approach with no enrollment restriction, without reducing the likelihood of a correct decision for the next trial (no-go, go with the entire population, or go with the sensitive subpopulation) or of correct identification of the sensitive subpopulation. Additionally, the four-parameter change-point model performed better over a wide range of simulation scenarios than a commonly used dichotomization approach. Copyright © 2016 John Wiley & Sons, Ltd.
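The four-parameter change-point idea can be sketched numerically. The sketch below assumes a hypothetical parameterization (two plateau values for the log hazard ratio and two change points); it is not the authors' exact model or its Bayesian posterior, only the deterministic curve and the cutoff that attains a target hazard ratio:

```python
import numpy as np

def log_hazard_ratio(x, lhr0, lhr1, c0, c1):
    """Four-parameter change-point curve for the log hazard ratio:
    flat at lhr0 below c0 (no benefit), flat at lhr1 above c1
    (full benefit), changing linearly in between."""
    x = np.asarray(x, dtype=float)
    slope = (lhr1 - lhr0) / (c1 - c0)
    return np.clip(lhr0 + slope * (x - c0), min(lhr0, lhr1), max(lhr0, lhr1))

def cutoff_for_target(target_lhr, lhr0, lhr1, c0, c1):
    """Biomarker value at which the modeled log HR reaches the target."""
    slope = (lhr1 - lhr0) / (c1 - c0)
    return c0 + (target_lhr - lhr0) / slope

# Hypothetical example: log HR falls from 0 (HR = 1) to log(0.5)
# as the biomarker rises from 2 to 6.
pars = dict(lhr0=0.0, lhr1=np.log(0.5), c0=2.0, c1=6.0)
cut = cutoff_for_target(np.log(0.7), **pars)  # cutoff exhibiting HR = 0.7
```

In the paper's design, a posterior distribution over such cutoffs (rather than a point value) drives enrollment restriction.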

2.
When a candidate predictive marker is available but evidence on its predictive ability is not sufficiently reliable, all-comers trials with marker stratification are frequently conducted. We propose a framework for planning and evaluating prospective testing strategies in confirmatory, phase III marker-stratified clinical trials based on a natural assumption about the heterogeneity of treatment effects across marker-defined subpopulations, where weak rather than strong control is permitted for multiple population tests. For phase III marker-stratified trials, it is expected that treatment efficacy is established in a particular patient population, possibly a marker-defined subpopulation, and that the marker's accuracy is assessed when the marker is used to restrict the indication or labelling of the treatment to a marker-based subpopulation, i.e., assessment of the clinical validity of the marker. In this paper, we develop statistical testing strategies based on criteria explicitly designed for the marker assessment, including criteria examining treatment effects in marker-negative patients. As existing and newly developed statistical testing strategies can assert treatment efficacy for either the overall patient population or the marker-positive subpopulation, we also develop criteria for evaluating the operating characteristics of the statistical testing strategies based on the probabilities of asserting treatment efficacy across marker subpopulations. Numerical evaluations comparing the statistical testing strategies based on the developed criteria are provided.

3.
With the advancement of technologies such as genomic sequencing, predictive biomarkers have become a useful tool for the development of personalized medicine. Predictive biomarkers can be used to select subsets of patients that are most likely to benefit from a treatment. A number of approaches for subgroup identification have been proposed in recent years. Although overviews of subgroup identification methods are available, systematic comparisons of their performance in simulation studies are rare. Interaction trees (IT), model-based recursive partitioning, subgroup identification based on differential effect, the simultaneous threshold interaction modeling algorithm (STIMA), and adaptive refinement by directed peeling have all been proposed for subgroup identification. We compared these methods in a simulation study using a structured approach. In order to identify a target population for subsequent trials, a selection of the identified subgroups is needed. Therefore, we propose a subgroup criterion leading to a target subgroup consisting of the identified subgroups with an estimated treatment difference no less than a pre-specified threshold. In our simulation study, we evaluated these methods using measures for binary classification, such as sensitivity and specificity. In settings with large effects or huge sample sizes, most methods perform well. For more realistic settings in drug development involving data from a single trial only, however, none of the methods seems suitable for selecting a target population. Using the subgroup criterion as an alternative to the proposed pruning procedures, STIMA and IT can improve their performance in some settings. The methods and the subgroup criterion are illustrated by an application in amyotrophic lateral sclerosis.
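The proposed subgroup criterion, selecting identified subgroups whose estimated treatment difference is at least a pre-specified threshold, amounts to a simple filter. A minimal sketch with hypothetical subgroup names and effect estimates:

```python
def target_subgroup(subgroups, threshold):
    """Keep identified subgroups whose estimated treatment difference is
    no less than the pre-specified threshold; their union forms the
    target population (hypothetical helper illustrating the rule only)."""
    return [name for name, effect in subgroups.items() if effect >= threshold]

# Hypothetical estimated treatment differences per identified subgroup.
estimates = {"low biomarker": 0.02, "mid biomarker": 0.15, "high biomarker": 0.40}
selected = target_subgroup(estimates, threshold=0.10)
```

The target population for a subsequent trial would then be the union of the selected subgroups.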

4.
In recent years, a great deal of literature has been published concerning the identification of predictive biomarkers, and indeed an increasing number of therapies have been licenced on this basis. However, this progress has been made almost exclusively on the basis of biomarkers measured prior to exposure to treatment. There are quite different challenges when the responding population can only be identified on the basis of outcomes observed following exposure to treatment, especially if it represents only a small proportion of patients. The purpose of this paper is to explore whether or when a treatment could be licenced on the basis of post-treatment predictive biomarkers (PTPB); the focus is on oncology, but the concepts should apply to all therapeutic areas. We review the potential pitfalls in hypothesising the presence of a PTPB. We also present challenges in the trial design required to confirm and licence on the basis of a PTPB: what is the appropriate control population? Could there be a detriment to non-responders through exposure to the new treatment? Can responders be identified rapidly? Could prior exposure to the new treatment adversely affect the performance of the control in responders? Nevertheless, if the patients to be treated could be rapidly identified after prior exposure to treatment, and without harm to non-responders, in appropriately designed and analysed trials, then more targeted therapies might be made available to patients. Copyright © 2014 John Wiley & Sons, Ltd.

5.
From a survival analysis perspective, bank failure data are often characterized by small default rates and heavy censoring. This empirical evidence can be explained by the existence of a subpopulation of banks likely immune from bankruptcy. In this regard, we use a mixture cure model to separate the factors influencing the susceptibility to default from those affecting the survival time of susceptible banks. In this paper, we extend a semi-parametric proportional hazards cure model to time-varying covariates and propose a variable selection technique based on its penalized likelihood. By means of a simulation study, we show that this technique performs reasonably well. Finally, we illustrate an application to commercial bank failures in the United States over the period 2006–2016.

6.
Traditional bioavailability studies assess average bioequivalence (ABE) between the test (T) and reference (R) products under the crossover design with TR and RT sequences. With highly variable (HV) drugs, whose intrasubject coefficient of variation in pharmacokinetic measures is 30% or greater, assertion of ABE becomes difficult due to the large sample sizes needed to achieve adequate power. In 2011, the FDA adopted a more relaxed, yet complex, ABE criterion and supplied a procedure to assess this criterion exclusively under TRR-RTR-RRT and TRTR-RTRT designs. However, designs with more than 2 periods are not always feasible. The present work investigates how to evaluate HV drugs under TR-RT designs. A mixed model with heterogeneous residual variances is used to fit data from TR-RT designs. Under the assumption of zero subject-by-formulation interaction, this basic model is comparable to the FDA-recommended model for TRR-RTR-RRT and TRTR-RTRT designs, suggesting the conceptual plausibility of our approach. To overcome the distributional dependency among summary statistics of model parameters, we develop statistical tests via the generalized pivotal quantity (GPQ). A real-world data example is given to illustrate the utility of the resulting procedures. Our simulation study identifies a GPQ-based testing procedure that evaluates HV drugs under practical TR-RT designs with a desirable type I error rate and reasonable power. In comparison to the FDA's approach, this GPQ-based procedure gives similar performance when the product's intersubject standard deviation is low (≤0.4) and is most useful when practical considerations restrict the crossover design to 2 periods.
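As a rough illustration of the GPQ idea, here is a simplified sketch for the mean of within-subject log-ratio differences in a paired setting: chi-square and normal pivots substitute for the unknown variance and mean, and the resulting draws yield an interval to compare against the usual 0.80–1.25 ABE limits. This is not the paper's mixed model with heterogeneous residual variances, and all summary statistics below are invented:

```python
import numpy as np

rng = np.random.default_rng(1)

def gpq_interval(mean_d, s2_d, n, draws=20000, alpha=0.10):
    """Generalized pivotal quantity for a mean with unknown variance
    (simplified paired-data sketch, not the FDA procedure):
    draw pivots for the variance and the mean, then take quantiles."""
    z = rng.standard_normal(draws)
    u = rng.chisquare(n - 1, draws)
    r_var = (n - 1) * s2_d / u                 # pivot for the variance
    r_mu = mean_d - z * np.sqrt(r_var / n)     # pivot for the mean
    return np.quantile(r_mu, [alpha / 2, 1 - alpha / 2])

# Invented summaries: mean log T/R difference 0.05, variance 0.09, n = 24.
lo, hi = gpq_interval(mean_d=0.05, s2_d=0.09, n=24)
abe = (lo > np.log(0.8)) and (hi < np.log(1.25))   # unscaled ABE check
```

The appeal of the GPQ construction is that it sidesteps the distributional dependency among the summary statistics by working directly with simulated pivots.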

7.
We developed a flexible non-parametric Bayesian model for regional disease-prevalence estimation based on cross-sectional data that are obtained from several subpopulations or clusters such as villages, cities, or herds. The subpopulation prevalences are modeled with a mixture distribution that allows for zero prevalence. The distribution of prevalences among diseased subpopulations is modeled as a mixture of finite Polya trees. Inferences can be obtained for (1) the proportion of diseased subpopulations in a region, (2) the distribution of regional prevalences, (3) the mean and median prevalence in the region, (4) the prevalence of any sampled subpopulation, and (5) predictive distributions of prevalences for regional subpopulations not included in the study, including the predictive probability of zero prevalence. We focus on prevalence estimation using data from a single diagnostic test, but we also briefly discuss the scenario where two conditionally dependent (or independent) diagnostic tests are used. Simulated data demonstrate the utility of our non-parametric model over parametric analysis. An example involving brucellosis in cattle is presented.

8.
Biomarkers that predict efficacy and safety for a given drug therapy are becoming increasingly important for treatment strategy and drug evaluation in personalized medicine. Methodology for appropriately identifying and validating such biomarkers is critically needed, although it is very challenging to develop, especially in trials of terminal diseases with survival endpoints. The marker-by-treatment predictiveness curve serves this need by visualizing the treatment effect on survival as a function of the biomarker for each treatment. In this article, we propose the weighted predictiveness curve (WPC). Based on the nature of the data, it generates predictiveness curves by utilizing either parametric or nonparametric approaches. Especially for nonparametric predictiveness curves, by incorporating local assessment techniques, it requires minimal model assumptions and provides great flexibility to visualize the marker-by-treatment relationship. The WPC can be used to compare biomarkers and identify the one with the highest potential impact. Equally important, by simultaneously viewing several treatment-specific predictiveness curves across the biomarker range, the WPC can also guide biomarker-based treatment regimens. Simulations representing various scenarios are employed to evaluate the performance of the WPC. Application to a well-known liver cirrhosis trial sheds new light on the data and leads to the discovery of novel patterns of treatment-biomarker interactions. Copyright © 2015 John Wiley & Sons, Ltd.
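A minimal nonparametric sketch of treatment-specific predictiveness curves, using a Gaussian-kernel local mean per arm. The kernel smoother and the simulated arms are assumptions for illustration; the WPC's actual weighting and local assessment techniques are richer:

```python
import numpy as np

def local_mean_curve(x, y, grid, bandwidth):
    """Nadaraya-Watson style local average of outcome y against
    biomarker x, evaluated on a grid: one predictiveness curve."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    w = np.exp(-0.5 * ((grid[:, None] - x[None, :]) / bandwidth) ** 2)
    return (w @ y) / w.sum(axis=1)

rng = np.random.default_rng(0)
x_trt = rng.uniform(0, 1, 400)
y_trt = 0.2 + 0.6 * x_trt + rng.normal(0, 0.05, 400)   # benefit grows with marker
x_ctl = rng.uniform(0, 1, 400)
y_ctl = 0.3 + rng.normal(0, 0.05, 400)                 # flat under control
grid = np.linspace(0.1, 0.9, 9)
# Viewing the two curves together reveals the marker-by-treatment interaction.
effect = local_mean_curve(x_trt, y_trt, grid, 0.1) - local_mean_curve(x_ctl, y_ctl, grid, 0.1)
```

Plotting both arm-specific curves over the biomarker range (rather than just their difference) is what guides biomarker-based treatment regimens.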

9.
Before biomarkers can be used in clinical trials or patient management, the laboratory assays that measure their levels have to go through development and analytical validation. One of the most critical performance metrics in validating any assay is the minimum value it can detect; any measurement below this limit is said to be below the limit of detection (LOD). Most of the existing approaches that model such biomarkers, restricted by the LOD, are parametric in nature. These parametric models, however, depend heavily on distributional assumptions and can lose precision under model or distributional misspecification. Using an example from a prostate cancer clinical trial, we show how a critical relationship between a serum androgen biomarker and a prognostic factor of overall survival is completely missed by the widely used parametric Tobit model. Motivated by this example, we implement a semiparametric approach, through a pseudo-value technique, that effectively captures the important relationship between the LOD-restricted serum androgen and the prognostic factor. Our simulations show that the pseudo-value-based semiparametric model outperforms a commonly used parametric model for modeling below-LOD biomarkers by having lower mean square errors of estimation.
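The pseudo-value technique replaces each subject's contribution to a statistic with its jackknife pseudo-value, n·θ̂ − (n−1)·θ̂₋ᵢ, which can then be regressed on covariates with standard tools. A transparent sketch using the sample mean as the statistic (for which the pseudo-values reduce exactly to the observations):

```python
import numpy as np

def pseudo_values(x, stat):
    """Jackknife pseudo-values: n*stat(all) - (n-1)*stat(leave-one-out).
    Regressing these on covariates is the core of the pseudo-value
    technique; the sample mean is used here only for transparency."""
    x = np.asarray(x, float)
    n = len(x)
    full = stat(x)
    loo = np.array([stat(np.delete(x, i)) for i in range(n)])
    return n * full - (n - 1) * loo

x = np.array([1.0, 2.0, 3.0, 4.0])
pv = pseudo_values(x, np.mean)   # for the mean, pv equals x exactly
```

For LOD-restricted or censored data, `stat` would instead be an estimator that handles the restriction (e.g., a survival-function estimate), and the pseudo-values then enter a regression in place of the incompletely observed responses.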

10.
A model-based predictive estimator is proposed for the population proportions of a polychotomous response variable, based on a sample from the population and on auxiliary variables, whose values are known for the entire population. The responses for the non-sample units are predicted using a multinomial logit model, which is a parametric function of the auxiliary variables. A bootstrap estimator is proposed for the variance of the predictive estimator, its consistency is proved and its small sample performance is compared with that of an analytical estimator. The proposed predictive estimator is compared with other available estimators, including model-assisted ones, both in a simulation study involving different sampling designs and model mis-specification, and using real data from an opinion survey. The results indicate that the prediction approach appears to use auxiliary information more efficiently than the model-assisted approach.
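The predictive estimator combines observed responses for sampled units with model predictions for non-sample units whose auxiliary values are known. In the sketch below the auxiliary variable is categorical, so a saturated multinomial logit reduces to within-stratum sample proportions; this keeps the illustration dependency-free and is an assumption, not the paper's general parametric setup:

```python
import numpy as np

def predictive_proportions(y_sample, x_sample, x_nonsample, classes):
    """Model-based predictive estimator of population proportions:
    count observed responses, then add predicted class probabilities
    for each non-sample unit based on its auxiliary stratum."""
    N = len(y_sample) + len(x_nonsample)
    counts = {c: float(np.sum(y_sample == c)) for c in classes}  # observed part
    for x in x_nonsample:                                        # predicted part
        in_stratum = (x_sample == x)
        for c in classes:
            counts[c] += np.mean(y_sample[in_stratum] == c)
    return {c: counts[c] / N for c in classes}

y = np.array(["a", "a", "b", "a", "b", "b"])    # sampled responses
x = np.array([0, 0, 0, 1, 1, 1])                # auxiliary, sampled units
x_rest = np.array([0, 1, 1])                    # auxiliary, non-sample units
p = predictive_proportions(y, x, x_rest, ["a", "b"])
```

With a continuous auxiliary variable, the stratum proportions would be replaced by fitted multinomial logit probabilities, but the observed-plus-predicted decomposition is the same.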

11.
In a pharmacokinetic drug interaction study using a three-period, three-treatment (drug A, drug B, and drugs A and B concomitantly) crossover design, pharmacokinetic parameters for either drug are only measured in two of the three periods. Similar missing data problems can arise for a four-period, four-treatment crossover pharmacokinetic comparability study. This paper investigates whether the usual ANOVA model for the crossover design can be applied under this pattern of missing data. It is shown that the model can still be used, contrary to a belief that a new one is needed. The effect of this type of missing data pattern on the statistical properties of treatment, period and carryover effect estimates was derived and illustrated by means of simulations and an example. Copyright © 2003 John Wiley & Sons, Ltd.

12.
In recent years, multidimensional psychological tests have been widely used in many kinds of assessment. Although the latent traits (or dimensions) measured by a test as a whole are known when the test is constructed, the specific dimensions measured by individual items still need to be determined. Exploiting the relationship between multidimensional item response theory (MIRT) models and generalized linear models, two variable selection methods, the LASSO and the elastic net, can be used to solve this dimension-identification problem for test items. Simulation studies show that the LASSO identifies dimensions better than the elastic net, achieving high dimension-identification accuracy for different types of multidimensional tests.
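A rough illustration of dimension identification via penalized regression: treating one item's scores as the response and the latent traits as predictors, a LASSO fit shrinks the loadings on unmeasured dimensions to zero. The sketch below uses a linear link and a hand-rolled coordinate-descent LASSO rather than the MIRT likelihood, so it only conveys the selection mechanism:

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Coordinate-descent LASSO (linear link): soft-threshold each
    coefficient against its partial residual correlation. A zero
    coefficient means the item does not load on that dimension."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ beta + X[:, j] * beta[j]          # partial residual
            rho = X[:, j] @ r / n
            z = (X[:, j] @ X[:, j]) / n
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / z
    return beta

rng = np.random.default_rng(0)
theta = rng.standard_normal((300, 3))            # three latent dimensions
# Simulated item loading on dimensions 1 and 3 only.
item = 1.2 * theta[:, 0] + 0.8 * theta[:, 2] + rng.normal(0, 0.3, 300)
b = lasso_cd(theta, item, lam=0.1)
loads = np.abs(b) > 0.05                         # identified dimensions
```

In the MIRT setting the response would be binary item scores with a logit link, but the selection principle (zero vs. nonzero penalized loadings) is the same.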

13.
Early phase 2 tuberculosis (TB) trials are conducted to characterize the early bactericidal activity (EBA) of anti-TB drugs. The EBA of anti-TB drugs has conventionally been calculated as the rate of decline in colony forming unit (CFU) count during the first 14 days of treatment. The measurement of CFU count, however, is expensive and prone to contamination. As an alternative to CFU count, time to positivity (TTP), a potential biomarker for the long-term efficacy of anti-TB drugs, can be used to characterize EBA. The current Bayesian nonlinear mixed-effects (NLME) regression model for TTP data, however, lacks robustness to the gross outliers that are often present in the data. The conventional way of handling such outliers involves their identification by visual inspection and subsequent exclusion from the analysis. However, this process can be questioned because of its subjective nature. For this reason, we fitted robust versions of the Bayesian nonlinear mixed-effects regression model to a wide range of TTP datasets. The performance of the explored models was assessed through model comparison statistics and a simulation study. We conclude that fitting a robust model to TTP data obviates the need for explicit identification and subsequent “deletion” of outliers and ensures that gross outliers exert no undue influence on model fits. We recommend that the current practice of fitting conventional normal theory models be abandoned in favor of fitting robust models to TTP data.
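Why a heavy-tailed likelihood helps can be seen in one dimension: under a Student-t error model, a gross outlier barely moves the location estimate, whereas the sample mean is dragged toward it. A minimal sketch (grid-search MLE under an assumed df and scale; not the authors' full Bayesian NLME model for TTP):

```python
import numpy as np

def t_location(x, df=4.0, scale=1.0):
    """Location estimate under a Student-t error model, found by a
    grid search over candidate locations. The log(1 + z^2/df) loss
    flattens out for large residuals, so gross outliers get little say."""
    grid = np.linspace(x.min(), x.max(), 4001)
    nll = np.array([np.sum(np.log(1 + ((x - m) / scale) ** 2 / df)) for m in grid])
    return grid[np.argmin(nll)]

x = np.array([9.8, 10.1, 10.3, 9.9, 10.0, 50.0])   # one gross outlier
robust = t_location(x)        # stays near the bulk of the data
naive = x.mean()              # dragged toward the outlier
```

In the robust NLME models, the same substitution (t or similar heavy-tailed residual distributions in place of the normal) is what removes the need to hand-delete outliers.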

14.
Sequential administration of immunotherapy following radiotherapy (immunoRT) has attracted much attention in cancer research. Due to its unique feature that radiotherapy upregulates the expression of a predictive biomarker for immunotherapy, novel clinical trial designs are needed for immunoRT to identify patient subgroups and the optimal dose for each subgroup. In this article, we propose a Bayesian phase I/II design for immunotherapy administered after standard-dose radiotherapy for this purpose. We construct a latent subgroup membership variable and model it as a function of the baseline and pre-post radiotherapy change in the predictive biomarker measurements. Conditional on the latent subgroup membership of each patient, we jointly model the continuous immune response and the binary efficacy outcome using plateau models, and model toxicity using the equivalent toxicity score approach to account for toxicity grades. During the trial, based on accumulating data, we continuously update model estimates and adaptively randomize patients to admissible doses. Simulation studies and an illustrative trial application show that our design has good operating characteristics in terms of identifying both patient subgroups and the optimal dose for each subgroup.

15.
Varying-coefficient models have been widely used to investigate the possible time-dependent effects of covariates when the response variable comes from a normal distribution. Much progress has been made on inference and variable selection in the framework of such models. However, the identification of model structure, that is, how to identify which covariates have time-varying effects and which have fixed effects, remains a challenging and unsolved problem, especially when the dimension of the covariates is much larger than the sample size. In this article, we consider the structural identification and variable selection problems in varying-coefficient models for high-dimensional data. Using a modified basis expansion approach and group variable selection methods, we propose a unified procedure to simultaneously identify the model structure, select important variables, and estimate the coefficient curves. The unique feature of the proposed approach is that we do not have to specify the model structure in advance; therefore, it is more realistic and appropriate for real data analysis. Asymptotic properties of the proposed estimators are derived under regularity conditions. Furthermore, we evaluate the finite-sample performance of the proposed methods with Monte Carlo simulation studies and a real data analysis.

16.
An important problem in statistical practice is the selection of a suitable statistical model. Several model selection strategies are available in the literature, having different asymptotic and small sample properties, depending on the characteristics of the data generating mechanism. These characteristics are difficult to check in practice and there is a need for a data-driven adaptive procedure to identify an appropriate model selection strategy for the data at hand. We call such an identification a model metaselection, and we base it on the analysis of recursive prediction residuals obtained from each strategy with increasing sample sizes. Graphical tools are proposed in order to study these recursive residuals. Their use is illustrated on real and simulated data sets. When necessary, an automatic metaselection can be performed by simply accumulating predictive losses. Asymptotic and small sample results are presented.
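The accumulation of predictive losses from recursive prediction residuals can be sketched directly: grow the sample one observation at a time, let each candidate strategy forecast the next point, and total the squared errors. The two strategies below are hypothetical stand-ins for full model selection strategies:

```python
import numpy as np

def accumulated_losses(y, strategies, burn_in=5):
    """Model metaselection sketch: each strategy predicts the next
    observation from the data seen so far; squared one-step-ahead
    prediction losses are accumulated, and the smallest total wins."""
    losses = {name: 0.0 for name in strategies}
    for t in range(burn_in, len(y) - 1):
        for name, predict in strategies.items():
            losses[name] += (predict(y[: t + 1]) - y[t + 1]) ** 2
    return losses

rng = np.random.default_rng(2)
y = np.cumsum(rng.normal(0.5, 1.0, 80))        # trending series
strategies = {
    "mean": lambda h: np.mean(h),              # ignores the trend
    "last": lambda h: h[-1],                   # random-walk forecast
}
loss = accumulated_losses(y, strategies)
best = min(loss, key=loss.get)
```

Plotting the running totals against sample size is essentially the graphical tool the abstract describes; the automatic metaselection just reads off the final minimum.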

17.
Predictive enrichment strategies use biomarkers to selectively enroll oncology patients into clinical trials to more efficiently demonstrate therapeutic benefit. Because the enriched population differs from the patient population eligible for screening with the biomarker assay, there is potential for bias when estimating clinical utility for the screening eligible population if the selection process is ignored. We write estimators of clinical utility as integrals averaging regression model predictions over the conditional distribution of the biomarker scores defined by the assay cutoff and discuss the conditions under which consistent estimation can be achieved while accounting for some nuances that may arise as the biomarker assay progresses toward a companion diagnostic. We outline and implement a Bayesian approach in estimating these clinical utility measures and use simulations to illustrate performance and the potential biases when estimation naively ignores enrichment. Results suggest that the proposed integral representation of clinical utility in combination with Bayesian methods provide a practical strategy to facilitate cutoff decision-making in this setting. Copyright © 2015 John Wiley & Sons, Ltd.

18.
Tree-based methods are frequently used in studies with censored survival time. Their structure and ease of interpretability make them useful to identify prognostic factors and to predict conditional survival probabilities given an individual's covariates. The existing methods are tailor-made to deal with a survival time variable that is measured continuously. However, survival variables measured on a discrete scale are often encountered in practice. The authors propose a new tree construction method specifically adapted to such discrete-time survival variables. The splitting procedure can be seen as an extension, to the case of right-censored data, of the entropy criterion for a categorical outcome. The selection of the final tree is made through a pruning algorithm combined with a bootstrap correction. The authors also present a simple way of potentially improving the predictive performance of a single tree through bagging. A simulation study shows that single trees and bagged-trees perform well compared to a parametric model. A real data example investigating the usefulness of personality dimensions in predicting early onset of cigarette smoking is presented. The Canadian Journal of Statistics 37: 17-32; 2009 © 2009 Statistical Society of Canada

19.
For survival endpoints in subgroup selection, a score conversion model is often used to convert the set of biomarkers for each patient into a univariate score, with the median of the univariate scores dividing the patients into biomarker-positive and biomarker-negative subgroups. However, this may lead to bias in patient subgroup identification with regard to two issues: (1) treatment is equally effective for all patients and/or there is no subgroup difference; (2) the median of the univariate scores may be an inappropriate cutoff if the sizes of the two subgroups differ substantially. We utilize a univariate composite score method to convert each patient's set of candidate biomarkers to a univariate response score. To address the first issue, we propose applying the likelihood ratio test (LRT) to assess the homogeneity of the sampled patients. In the context of identifying the subgroup of responders in an adaptive design to demonstrate improvement of treatment efficacy (adaptive power), we suggest that subgroup selection be carried out only if the LRT is significant. For the second issue, we utilize a likelihood-based change-point algorithm to find an optimal cutoff. Our simulation study shows that the type I error is generally controlled, while the overall adaptive power to detect treatment effects is reduced by approximately 4.5% for the simulation designs considered when the LRT is performed; furthermore, the change-point algorithm outperforms the median cutoff considerably when the subgroup sizes differ substantially.
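A likelihood-based change-point search for the cutoff can be sketched with a binary response standing in for the survival endpoint: for each candidate cutoff, split patients into marker-negative and marker-positive groups and maximize the two-group binomial log-likelihood. The simulated data and grid are assumptions for illustration:

```python
import numpy as np

def binom_loglik(y):
    """Binomial log-likelihood of a 0/1 vector at its MLE proportion."""
    p = y.mean()
    if p in (0.0, 1.0):
        return 0.0
    return y.sum() * np.log(p) + (len(y) - y.sum()) * np.log(1 - p)

def optimal_cutoff(score, response, grid):
    """Likelihood-based change-point search: pick the cutoff whose
    two-group split maximizes the summed binomial log-likelihood."""
    best, best_ll = None, -np.inf
    for c in grid:
        lo, hi = response[score < c], response[score >= c]
        if len(lo) == 0 or len(hi) == 0:
            continue
        ll = binom_loglik(lo) + binom_loglik(hi)
        if ll > best_ll:
            best, best_ll = c, ll
    return best

rng = np.random.default_rng(3)
score = rng.uniform(0, 1, 500)
# True change point at 0.7: response rate 0.2 below, 0.8 above.
response = (rng.uniform(size=500) < np.where(score > 0.7, 0.8, 0.2)).astype(float)
cut = optimal_cutoff(score, response, np.linspace(0.05, 0.95, 91))
```

Here only about 30% of patients are marker-positive, exactly the situation in which the abstract notes a median cutoff is misleading; the likelihood search recovers a cutoff near the true change point instead.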

20.
This study considers the detection of treatment-by-subset interactions in a stratified, randomised clinical trial with a binary response variable. The focus lies on the detection of qualitative interactions. In addition, the presented method is useful more generally, as it can assess the inconsistency of the treatment effects among strata by using an a priori defined inconsistency margin. The methodology presented is based on the construction of ratios of treatment effects. In addition to multiplicity-adjusted p-values, simultaneous confidence intervals are recommended for detecting the source and the amount of a potential qualitative interaction. The proposed method is demonstrated on a multi-regional trial using the open-source statistical software R. Copyright © 2014 John Wiley & Sons, Ltd.
