Similar Literature
(20 matching records found)
1.
To assess the value of a continuous marker in predicting the risk of a disease, a graphical tool called the predictiveness curve has been proposed. It characterizes the marker's predictiveness, or capacity to risk stratify the population, by displaying the distribution of risk endowed by the marker. Methods for making inference about the curve and for comparing curves in a general population have been developed. However, knowledge of a marker's performance in the general population alone is not enough. Since a marker's effect on the risk model and its distribution can both differ across subpopulations, its predictiveness may vary when applied to different subpopulations. Moreover, information about the predictiveness of a marker conditional on baseline covariates is valuable for individual decision making about having the marker measured or not. Therefore, to fully realize the usefulness of a risk prediction marker, it is important to study its performance conditional on covariates. In this article, we propose semiparametric methods for estimating covariate-specific predictiveness curves for a continuous marker. Unmatched and matched case-control study designs are accommodated. We illustrate application of the methodology by evaluating serum creatinine as a predictor of risk of renal artery stenosis.
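A minimal sketch of the basic (unconditional) predictiveness curve the abstract builds on, not the paper's semiparametric covariate-specific estimator: the empirical curve simply plots the sorted risk values against their quantile levels, so an informative marker spreads risk widely around the prevalence while an uninformative one assigns everyone a risk near it. The example risks below are illustrative, not from the paper.

```python
import numpy as np

def predictiveness_curve(risks, v):
    """Empirical predictiveness curve R(v): the risk value at quantile
    level v of the marker-induced risk distribution."""
    risks = np.sort(np.asarray(risks, dtype=float))
    # linear interpolation between order statistics
    return np.quantile(risks, v)

# Illustrative (assumed) risks with prevalence about 0.10:
flat_risks = np.full(100, 0.10)            # uninformative marker
spread_risks = np.linspace(0.0, 0.2, 100)  # marker that stratifies risk
v = np.array([0.1, 0.5, 0.9])
print(predictiveness_curve(flat_risks, v))    # constant at 0.10
print(predictiveness_curve(spread_risks, v))  # spreads around 0.10
```

A steep curve indicates good risk stratification; a flat curve at the prevalence indicates none.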

2.
One of the objectives of personalized medicine is to base treatment decisions on a biomarker measurement. Therefore, it is often of interest to evaluate how well a biomarker can predict the response to a treatment. To do so, a popular methodology consists of using a regression model and testing for an interaction between treatment assignment and biomarker. However, the existence of an interaction is necessary but not sufficient for a biomarker to be predictive. Hence, the use of the marker‐by‐treatment predictiveness curve has been recommended. Besides evaluating how well a single continuous biomarker predicts treatment response, it can further help to define an optimal threshold. This curve displays the risk of a binary outcome as a function of the quantiles of the biomarker, for each treatment group. Methods that assume a binary outcome or rely on a proportional hazards model for a time‐to‐event outcome have been proposed to estimate this curve. In this work, we propose some extensions for censored data. They rely on a time‐dependent logistic model, which we propose to estimate via inverse probability of censoring weighting. We present simulation results and three applications to prostate cancer, liver cirrhosis, and lung cancer data. They suggest that a large number of events need to be observed to define a threshold with sufficient accuracy for clinical usefulness. They also illustrate that when the treatment effect varies with the time horizon that defines the outcome, the optimal threshold also depends on this time horizon.

3.
Time‐to‐event data are common in clinical trials that evaluate the survival benefit of a new drug, biological product, or device. The commonly used parametric models, including the exponential, Weibull, Gompertz, log‐logistic, and log‐normal, are simply not flexible enough to capture the complex survival curves observed in clinical and medical research studies. On the other hand, the nonparametric Kaplan–Meier (KM) method is very flexible and successful at capturing the various shapes of survival curves, but it lacks the ability to predict future events, such as the time until a certain number of events, the number of events at a certain time, and the risk of events (eg, death) over time beyond the span of the available data from clinical trials. It is obvious that neither the nonparametric KM method nor the current parametric distributions can fulfill the need for fitting survival curves with characteristics useful for prediction. In this paper, a fully parametric distribution constructed as a mixture of three Weibull components is explored and recommended for fitting survival data; it is as flexible as KM for the observed data but has desirable features beyond the trial time, such as predicting future events, survival probabilities, and the hazard function.
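The mixture survival function itself is simple to write down. Below is a hedged sketch of a three-component Weibull mixture, S(t) = Σᵢ wᵢ exp(−(t/scaleᵢ)^shapeᵢ); the parameter values are illustrative only (the paper would estimate them by maximum likelihood on censored data, maximizing Σ_events log f(tᵢ) + Σ_censored log S(tᵢ), which is omitted here).

```python
import numpy as np

def weibull_mixture_sf(t, weights, shapes, scales):
    """Survival function of a k-component Weibull mixture:
    S(t) = sum_i w_i * exp(-(t / scale_i) ** shape_i),
    with the w_i nonnegative and summing to one."""
    t = np.atleast_1d(np.asarray(t, dtype=float))
    w = np.asarray(weights, dtype=float)
    a = np.asarray(shapes, dtype=float)
    b = np.asarray(scales, dtype=float)
    # rows: time points; columns: mixture components
    return np.exp(-np.power(t[:, None] / b, a)) @ w

# Illustrative (not fitted) parameters: an early-failure component, a
# constant-hazard component (shape = 1), and a late wear-out component.
w = [0.2, 0.5, 0.3]
shapes = [0.5, 1.0, 3.0]
scales = [2.0, 10.0, 20.0]
t = np.linspace(0.0, 40.0, 9)
print(np.round(weibull_mixture_sf(t, w, shapes, scales), 3))
```

Because each component can bend the curve differently, the mixture can track a KM curve closely while still extrapolating smoothly past the last observed event time.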

4.
In biomedical research, two or more biomarkers may be available for the diagnosis of a particular disease. Selecting a single biomarker that best discriminates a diseased group from a healthy group is a challenge in the diagnostic process. Frequently, the accuracy measure used to choose the best diagnostic marker among the available markers is the area under the receiver operating characteristic (ROC) curve. Some authors have tried to combine multiple markers through an optimal linear combination to increase the discriminatory power. In this paper, we propose an alternative method that combines two continuous biomarkers by direct bivariate modeling of the ROC curve under a log-normality assumption. The proposed method is applied to a simulated data set and a prostate cancer diagnostic biomarker data set.
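For context, the "optimal linear combination" the abstract contrasts against is commonly the Su–Liu combination: under (log-transformed) bivariate normality with a common covariance matrix Σ, the AUC-maximizing weights are a = Σ⁻¹(μ₁ − μ₀), giving AUC = Φ(√(δ′Σ⁻¹δ/2)). A hedged sketch with illustrative parameters (this is the comparator method, not the paper's bivariate ROC model):

```python
import numpy as np
from math import erf, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def best_linear_combo(mu0, mu1, cov):
    """Su-Liu optimal linear combination of two normal markers with
    common covariance: a = cov^{-1} (mu1 - mu0); the combined marker's
    AUC is Phi(sqrt(delta' cov^{-1} delta / 2))."""
    delta = np.asarray(mu1, dtype=float) - np.asarray(mu0, dtype=float)
    a = np.linalg.solve(np.asarray(cov, dtype=float), delta)
    auc = norm_cdf(sqrt(float(delta @ a) / 2.0))
    return a, auc

# Illustrative (assumed) parameters for log-marker values:
a, auc = best_linear_combo(mu0=[0.0, 0.0], mu1=[1.0, 0.5],
                           cov=[[1.0, 0.3], [0.3, 1.0]])
print(a, round(auc, 3))
```

Under log-normality one applies this to the log-marker values, which is where the common-covariance normal assumption is placed.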

5.
Drug‐induced organ toxicity (DIOT) that leads to the removal of marketed drugs or the termination of candidate drugs has been a leading concern for regulatory agencies and pharmaceutical companies. In safety studies, genomic assays are conducted after treatment, so drug‐induced adverse effects may already have occurred. Two types of biomarkers are observed: biomarkers of susceptibility and biomarkers of response. This paper presents a statistical model to distinguish the two types of biomarkers and procedures to identify susceptible subpopulations. The biomarkers identified are used to develop a classification model for identifying the susceptible subpopulation. Two methods for identifying susceptibility biomarkers were evaluated in terms of predictive performance in subpopulation identification, including sensitivity, specificity, and accuracy. Method 1 considered the traditional linear model with a variable‐by‐treatment interaction term, and Method 2 considered fitting a single‐predictor‐variable model using only treatment data. Monte Carlo simulation studies were conducted to evaluate the performance of the two methods and the impact of subpopulation prevalence, probability of DIOT, and sample size on predictive performance. Method 2 appeared to outperform Method 1, owing to the lack of power for testing the interaction effect. Important statistical issues and challenges regarding the identification of preclinical DIOT biomarkers are discussed. In summary, the identification of predictive biomarkers for treatment determination depends highly on the subpopulation prevalence. When the proportion of the susceptible subpopulation is 1% or less, a very large sample size is needed to ensure observing a sufficient number of DIOT responses for biomarker and/or subpopulation identification. Copyright © 2015 John Wiley & Sons, Ltd.

6.
Identifying important biomarkers that are predictive of cancer patients' prognosis is key to gaining better insights into the biological influences on the disease and has become a critical component of precision medicine. The emergence of large-scale biomedical survival studies, which typically involve an excessive number of biomarkers, has created high demand for efficient screening tools for selecting predictive biomarkers. The vast number of biomarkers defies any existing variable selection method via regularization. The recently developed variable screening methods, though powerful in many practical settings, fail to incorporate prior information on the importance of each biomarker and are less powerful in detecting marginally weak but jointly important signals. We propose a new conditional screening method for survival outcome data that computes the marginal contribution of each biomarker given previously known biological information. This is based on the premise that some biomarkers are known a priori to be associated with disease outcomes. Our method possesses sure screening properties and a vanishing false selection rate. The utility of the proposal is further confirmed with extensive simulation studies and analysis of a diffuse large B-cell lymphoma dataset. We are pleased to dedicate this work to Jack Kalbfleisch, who has made instrumental contributions to the development of modern methods of analyzing survival data.

7.
In survival analysis, treatment effects are commonly evaluated based on survival curves and hazard ratios as causal treatment effects. In observational studies, these estimates may be biased due to confounding factors. The inverse probability of treatment weighted (IPTW) method based on the propensity score is one of the approaches utilized to adjust for confounding factors between binary treatment groups. As a generalization of this methodology, we developed an exact formula for an IPTW log‐rank test based on the generalized propensity score for survival data. This makes it possible to compare the group differences of IPTW Kaplan–Meier estimators of survival curves using an IPTW log‐rank test for multi‐valued treatments. As a causal treatment effect, the hazard ratio can also be estimated using the IPTW approach. If the treatments correspond to ordered levels of a treatment, the proposed method can be easily extended to the analysis of treatment effect patterns with contrast statistics. In this paper, the proposed method is illustrated with data from the Kyushu Lipid Intervention Study (KLIS), which investigated the primary preventive effects of pravastatin on coronary heart disease (CHD). The results of the proposed method suggested that pravastatin treatment reduces the risk of CHD and that compliance with pravastatin treatment is important for the prevention of CHD. Copyright © 2009 John Wiley & Sons, Ltd.
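The IPTW Kaplan–Meier estimator referred to here weights each subject by the inverse of its (estimated) probability of receiving its own treatment. A hedged sketch, assuming the propensity scores have already been estimated elsewhere and reduced to per-subject weights; the paper's exact IPTW log-rank formula is not reproduced:

```python
import numpy as np

def iptw_kaplan_meier(time, event, weights):
    """Kaplan-Meier estimator with inverse-probability-of-treatment
    weights: at each distinct event time, S *= 1 - (weighted deaths) /
    (weighted number at risk). With all weights equal to 1 this reduces
    to the ordinary Kaplan-Meier estimator."""
    time = np.asarray(time, dtype=float)
    event = np.asarray(event, dtype=int)
    w = np.asarray(weights, dtype=float)
    order = np.argsort(time)
    time, event, w = time[order], event[order], w[order]
    times, surv = [], []
    s = 1.0
    at_risk = w.sum()
    for t in np.unique(time):
        mask = time == t
        d = (w * event)[mask].sum()    # weighted events at time t
        if d > 0:
            s *= 1.0 - d / at_risk
            times.append(t)
            surv.append(s)
        at_risk -= w[mask].sum()       # remove events and censorings
    return np.array(times), np.array(surv)

# Illustrative treated-arm data with assumed stabilized weights
# w = Pr(A = 1) / e(X); the propensity model itself is not shown.
t = [2, 3, 3, 5, 8]; e = [1, 1, 0, 1, 1]; wts = [1.2, 0.8, 1.0, 1.1, 0.9]
print(iptw_kaplan_meier(t, e, wts))
```

Comparing the weighted curves across treatment groups is then what the IPTW log-rank test formalizes.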

8.
In clinical studies, researchers measure patients' responses longitudinally. In recent studies, mixed models have been used to determine effects at the individual level. On the other hand, Henderson et al. [3,4] developed a joint likelihood function that combines the likelihood functions of longitudinal biomarkers and survival times. They include random effects in the longitudinal component to determine whether a longitudinal biomarker is associated with time to an event. In this paper, we treat a longitudinal biomarker as a growth curve and extend Henderson's method to determine whether a longitudinal biomarker is associated with time to an event for multivariate survival data.

9.
Identifying an optimal cutoff value for a continuous biomarker is often useful in medical applications. For a binary outcome, commonly used cutoff-finding criteria include Youden's index, classification accuracy, and the Euclidean distance to the upper left corner of the ROC curve. We extend these three criteria to accommodate censored survival times subject to competing risks. We provide various definitions of the time-dependent true positive rate and false positive rate and estimate those quantities using nonparametric methods. In simulation studies, the Euclidean distance to the upper left corner of the ROC curve shows the best overall performance.
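In the simple binary-outcome case, the three criteria can be scanned over candidate cutoffs as below. This is a hedged sketch of the classical versions only; the paper's contribution is replacing the empirical true/false positive rates with nonparametric time-dependent, competing-risks estimates, which is not shown here.

```python
import numpy as np

def cutoff_criteria(marker, disease):
    """Find the optimal cutoff (rule: positive if marker >= cutoff) by
    Youden's index (maximize TPR - FPR), classification accuracy, and
    Euclidean distance to the ROC corner (0, 1) (minimize)."""
    marker = np.asarray(marker, dtype=float)
    disease = np.asarray(disease, dtype=bool)
    prev = disease.mean()
    best = {"youden": (None, -np.inf),
            "accuracy": (None, -np.inf),
            "distance": (None, np.inf)}
    for c in np.unique(marker):
        pos = marker >= c
        tpr = pos[disease].mean()
        fpr = pos[~disease].mean()
        youden = tpr - fpr
        acc = prev * tpr + (1 - prev) * (1 - fpr)
        dist = np.hypot(fpr, 1 - tpr)
        if youden > best["youden"][1]:
            best["youden"] = (c, youden)
        if acc > best["accuracy"][1]:
            best["accuracy"] = (c, acc)
        if dist < best["distance"][1]:
            best["distance"] = (c, dist)
    return best

marker = [0.1, 0.2, 0.3, 0.4, 0.6, 0.7, 0.8, 0.9]
disease = [0, 0, 0, 0, 1, 1, 1, 1]
print(cutoff_criteria(marker, disease))  # all three agree at 0.6 here
```

The three criteria agree under perfect separation but can disagree when prevalence is far from 50%, since accuracy weights the two error rates by prevalence while Youden's index weights them equally.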

10.
A simple approach for analyzing longitudinally measured biomarkers is to calculate summary measures such as the area under the curve (AUC) for each individual and then compare the mean AUC between treatment groups using methods such as the t-test. This two-step approach is difficult to implement when there are missing data, since the AUC cannot be directly calculated for individuals with missing measurements. Simple methods for dealing with missing data include the complete-case analysis and imputation. A recent study showed that the estimated mean AUC difference between treatment groups based on the linear mixed model (LMM), rather than on individually calculated AUCs with simple imputation, has negligible bias under random missingness assumptions and only small bias when missingness is not at random. However, this model assumes the outcome to be normally distributed, which is often violated in biomarker data. In this paper, we propose to use an LMM on log-transformed biomarkers, based on which statistical inference for the ratio, rather than the difference, of AUC between treatment groups is provided. The proposed method can not only handle potential baseline imbalance in a randomized trial but also circumvent the estimation of the nuisance variance parameters in the log-normal model. The proposed model is applied to a recently completed large randomized trial studying the effect of nicotine reduction on biomarker exposure in smokers.
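On complete data, the two-step approach is just a trapezoidal AUC per subject followed by a two-sample comparison; working on the log scale makes the group contrast a (geometric-mean) AUC ratio rather than a difference. A hedged sketch with toy trajectories (the paper's LMM, which handles missing visits, is not implemented here):

```python
import numpy as np
from math import sqrt

def individual_auc(times, values):
    """Trapezoidal area under one subject's biomarker trajectory."""
    times = np.asarray(times, dtype=float)
    values = np.asarray(values, dtype=float)
    return float(np.sum(0.5 * (values[1:] + values[:-1]) * np.diff(times)))

def welch_t(x, y):
    """Welch two-sample t statistic (unequal variances)."""
    se = sqrt(np.var(x, ddof=1) / len(x) + np.var(y, ddof=1) / len(y))
    return (np.mean(x) - np.mean(y)) / se

# Toy trajectories (illustrative, not trial data): per-subject AUC of
# the log-biomarker, then compare arms with a t-test.
t = np.array([0.0, 4.0, 8.0, 12.0])
arm_a = np.array([[3.1, 2.6, 2.2, 2.0], [2.9, 2.5, 2.3, 2.1]])
arm_b = np.array([[3.0, 3.0, 2.9, 2.9], [3.2, 3.1, 3.0, 3.0]])
auc_a = [individual_auc(t, np.log(v)) for v in arm_a]
auc_b = [individual_auc(t, np.log(v)) for v in arm_b]
print(welch_t(auc_a, auc_b))
```

Exponentiating the mean difference of log-scale AUCs (divided by the follow-up length) gives the ratio-type summary that the paper's inference targets.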

11.
It is well known that, when sample observations are independent, the area under the receiver operating characteristic (ROC) curve corresponds to the Wilcoxon statistic if the area is calculated by the trapezoidal rule. Correlated ROC curves arise often in medical research and have been studied by various parametric methods. On the basis of the Mann–Whitney U-statistic for clustered data proposed by Rosner and Grove, we construct an average ROC curve and derive nonparametric methods to estimate the area under the average curve for correlated ROC curves obtained from multiple readers. For the more complicated case where, in addition to multiple readers examining results on the same set of individuals, two or more diagnostic tests are involved, we derive analytic methods to compare the areas under correlated average ROC curves for these diagnostic tests. We demonstrate our methods in an example and compare our results with those obtained by other methods. The nonparametric average ROC curve and the analytic methods that we propose are easy to explain and simple to implement.
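The opening identity is easy to verify numerically: the trapezoidal area under the empirical ROC curve equals the Mann–Whitney estimate P(X_case > X_control) + ½·P(tie). A small self-contained check (illustrative data, independent observations only; the clustered-data extension is the paper's subject):

```python
import numpy as np

def auc_trapezoid(marker, disease):
    """Area under the empirical ROC curve via the trapezoidal rule."""
    marker = np.asarray(marker, dtype=float)
    disease = np.asarray(disease, dtype=bool)
    # sweep thresholds from +inf down through each distinct value
    thresholds = np.concatenate(([np.inf], np.sort(np.unique(marker))[::-1]))
    tpr = np.array([(marker[disease] >= c).mean() for c in thresholds])
    fpr = np.array([(marker[~disease] >= c).mean() for c in thresholds])
    return float(np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2.0))

def auc_mann_whitney(marker, disease):
    """Mann-Whitney form: P(case > control) + 0.5 * P(tie)."""
    disease = np.asarray(disease, dtype=bool)
    x = np.asarray(marker, dtype=float)[disease]    # cases
    y = np.asarray(marker, dtype=float)[~disease]   # controls
    greater = (x[:, None] > y[None, :]).mean()
    ties = (x[:, None] == y[None, :]).mean()
    return float(greater + 0.5 * ties)

m = [0.2, 0.4, 0.4, 0.5, 0.7, 0.9]
d = [0, 0, 1, 0, 1, 1]
print(auc_trapezoid(m, d), auc_mann_whitney(m, d))  # identical values
```

The tie term is exactly what the trapezoidal rule contributes over a tied threshold, which is why the two computations agree even with ties present.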

12.
Many commonly used statistical methods for data analysis or clinical trial design rely on incorrect assumptions or assume an over‐simplified framework that ignores important information. Such statistical practices may lead to incorrect conclusions about treatment effects or clinical trial designs that are impractical or that do not accurately reflect the investigator's goals. Bayesian nonparametric (BNP) models and methods are a very flexible new class of statistical tools that can overcome such limitations. This is because BNP models can accurately approximate any distribution or function and can accommodate a broad range of statistical problems, including density estimation, regression, survival analysis, graphical modeling, neural networks, classification, clustering, population models, forecasting and prediction, spatiotemporal models, and causal inference. This paper describes 3 illustrative applications of BNP methods, including a randomized clinical trial to compare treatments for intraoperative air leaks after pulmonary resection, estimating survival time with different multi‐stage chemotherapy regimes for acute leukemia, and evaluating joint effects of targeted treatment and an intermediate biological outcome on progression‐free survival time in prostate cancer.

13.
In longitudinal studies of biological markers, different individuals may have different underlying patterns of response. In some applications, a subset of individuals experiences latent events, causing an instantaneous change in the level or slope of the marker trajectory. The paper presents a general mixture of hierarchical longitudinal models for serial biomarkers. Interest centres both on the time of the event and on levels of the biomarker before and after the event. In observational studies where marker series are incomplete, the latent event can be modelled by a survival distribution. Risk factors for the occurrence of the event can be investigated by including covariates in the survival distribution. A combination of Gibbs, Metropolis–Hastings and reversible jump Markov chain Monte Carlo sampling is used to fit the models to serial measurements of forced expiratory volume from lung transplant recipients.

14.
Flexible incorporation of both geographical patterning and risk effects in cancer survival models is becoming increasingly important, due in part to the recent availability of large cancer registries. Most spatial survival models stochastically order survival curves from different subpopulations. However, it is common for survival curves from two subpopulations to cross in epidemiological cancer studies, and thus interpretable standard survival models cannot be used without some modification. Common fixes are the inclusion of time-varying regression effects in the proportional hazards model or fully nonparametric modeling, either of which destroys any easy interpretability from the fitted model. To address this issue, we develop a generalized accelerated failure time model which allows stratification on continuous or categorical covariates, as well as providing per-variable tests for whether stratification is necessary via novel approximate Bayes factors. The model is interpretable in terms of how median survival changes and is able to capture crossing survival curves in the presence of spatial correlation. A detailed Markov chain Monte Carlo algorithm is presented for posterior inference, and a freely available function frailtyGAFT is provided to fit the model in the R package spBayesSurv. We apply our approach to a subset of the prostate cancer data gathered for Louisiana by the Surveillance, Epidemiology, and End Results Program of the National Cancer Institute.

15.
A novel class of hierarchical nonparametric Bayesian survival regression models for time-to-event data with uninformative right censoring is introduced. The survival curve is modeled as a random function whose prior distribution is defined using the beta-Stacy (BS) process. The prior mean of each survival probability and its prior variance are linked to a standard parametric survival regression model. This nonparametric survival regression can thus be anchored to any reference parametric form, such as a proportional hazards or an accelerated failure time model, allowing substantial departures of the predictive survival probabilities when the reference model is not supported by the data. Also, under this formulation the predictive survival probabilities will be close to the empirical survival distribution near the mode of the reference model and they will be shrunken towards its probability density in the tails of the empirical distribution.

16.
For survival endpoints in subgroup selection, a score conversion model is often used to convert the set of biomarkers for each patient into a univariate score, with the median of the univariate scores dividing the patients into biomarker‐positive and biomarker‐negative subgroups. However, this may bias patient subgroup identification in two respects: (1) treatment may be equally effective for all patients and/or there may be no subgroup difference; (2) the median of the univariate scores may be an inappropriate cutoff if the sizes of the two subgroups differ substantially. We utilize a univariate composite score method to convert each patient's set of candidate biomarkers into a univariate response score. To address the first issue, we propose applying the likelihood ratio test (LRT) to assess the homogeneity of the sampled patients. In the context of identifying the subgroup of responders in an adaptive design to demonstrate improvement of treatment efficacy (adaptive power), we suggest that subgroup selection be carried out only if the LRT is significant. For the second issue, we utilize a likelihood‐based change‐point algorithm to find an optimal cutoff. Our simulation study shows that the type I error is generally controlled, while the overall adaptive power to detect treatment effects sacrifices approximately 4.5% for the simulation designs considered by performing the LRT; furthermore, the change‐point algorithm outperforms the median cutoff considerably when the subgroup sizes differ substantially.
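The change-point idea can be illustrated with a binary response: scan every candidate cutoff on the univariate score, compute the two-subgroup Bernoulli log-likelihood at each, and keep the maximizer; the gain over the single-group model is the LRT statistic for homogeneity. This is a hedged toy sketch (binary response, no censoring), not the paper's survival-endpoint algorithm.

```python
import numpy as np

def bernoulli_loglik(y):
    """Log-likelihood of a Bernoulli sample at its MLE p = mean(y)."""
    y = np.asarray(y, dtype=float)
    p = y.mean()
    if p in (0.0, 1.0):
        return 0.0
    k = y.sum()
    return float(k * np.log(p) + (len(y) - k) * np.log(1.0 - p))

def changepoint_cutoff(score, response):
    """Pick the cutoff maximizing the two-subgroup Bernoulli
    log-likelihood; also return the LRT statistic against the
    single-group (homogeneous) model."""
    score = np.asarray(score, dtype=float)
    y = np.asarray(response, dtype=float)
    ll0 = bernoulli_loglik(y)
    best_cut, best_ll = None, -np.inf
    for c in np.unique(score)[1:]:  # skip min so both subgroups are non-empty
        ll = bernoulli_loglik(y[score < c]) + bernoulli_loglik(y[score >= c])
        if ll > best_ll:
            best_cut, best_ll = c, ll
    return best_cut, 2.0 * (best_ll - ll0)

# Toy data with a clear change point at score = 5: responders above it.
score = np.arange(10.0)
resp = (score >= 5).astype(float)
print(changepoint_cutoff(score, resp))
```

A large LRT statistic signals heterogeneity (issue 1), and the fitted cutoff replaces the median split (issue 2) when the subgroup sizes are unbalanced.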

17.
In this article, we discuss how to identify longitudinal biomarkers in survival analysis under the accelerated failure time model, and we also discuss the effectiveness of biomarkers under this model. Two methods proposed by Schemper et al. are deployed to measure the efficacy of biomarkers. We use simulations to explore how various factors influence the power of a score test to detect the association between a longitudinal biomarker and survival time. These factors include the functional form of the random effects of the longitudinal biomarker, the number of individuals, and the number of time points per individual. The simulations also explore how the number of individuals and the number of time points per individual influence the effectiveness of the biomarker in predicting survival at a given endpoint under the accelerated failure time model. We illustrate our methods using the prothrombin index as a predictor of survival in liver cirrhosis patients.

18.
Under the Loewe additivity, constant relative potency between two drugs is a sufficient condition for the two drugs to be additive. Implicit in this condition is that one drug acts like a dilution of the other. Geometrically, it means that the dose‐response curve of one drug is a copy of another that is shifted horizontally by a constant over the log‐dose axis. Such phenomenon is often referred to as parallelism. Thus, testing drug additivity is equivalent to the demonstration of parallelism between two dose‐response curves. Current methods used for testing parallelism are usually based on significance tests for differences between parameters in the dose‐response curves of the monotherapies. A p‐value of less than 0.05 is indicative of non‐parallelism. The p‐value‐based methods, however, may be fundamentally flawed because an increase in either sample size or precision of the assay used to measure drug effect may result in more frequent rejection of parallel lines for a trivial difference. Moreover, similarity (difference) between model parameters does not necessarily translate into the similarity (difference) between the two response curves. As a result, a test may conclude that the model parameters are similar (different), yet there is little assurance on the similarity between the two dose‐response curves. In this paper, we introduce a Bayesian approach to directly test the hypothesis that the two drugs have a constant relative potency. An important utility of our proposed method is in aiding go/no‐go decisions concerning two drug combination studies. It is illustrated with both a simulated example and a real‐life example. Copyright © 2015 John Wiley & Sons, Ltd.

19.
The Lorenz curve and the Gini coefficient are important tools for studying disparities in the distribution of income in a society. Income distribution is a complex process, and estimating the Lorenz curve with as accurate a curve as possible, and from it the Gini coefficient, has long been a goal of statisticians and economists. Based on the idea of combining parametric and nonparametric methods, we present a semiparametric estimator of the Lorenz curve, derive from it an estimator of the Gini coefficient, and carry out an empirical analysis on this basis.
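The two quantities are linked by G = 1 − 2∫₀¹ L(p) dp, where L(p) is the cumulative income share of the poorest fraction p. A minimal empirical sketch (a step-function Lorenz curve with trapezoidal integration, not the semiparametric estimator the abstract proposes):

```python
import numpy as np

def lorenz_gini(income):
    """Empirical Lorenz curve and Gini coefficient.
    L(p) = cumulative income share of the poorest fraction p;
    G = 1 - 2 * integral_0^1 L(p) dp (trapezoidal rule)."""
    x = np.sort(np.asarray(income, dtype=float))
    cum = np.concatenate(([0.0], np.cumsum(x))) / x.sum()
    p = np.linspace(0.0, 1.0, len(x) + 1)
    area = np.sum(0.5 * (cum[1:] + cum[:-1]) * np.diff(p))
    return p, cum, 1.0 - 2.0 * area

_, _, g_equal = lorenz_gini([1.0, 1.0, 1.0, 1.0])    # perfect equality
_, _, g_unequal = lorenz_gini([0.0, 0.0, 0.0, 10.0])  # one person has all
print(round(g_equal, 6), round(g_unequal, 6))
```

With n observations the empirical Gini tops out at (n − 1)/n rather than 1, one of the small-sample effects that smoother (parametric or semiparametric) Lorenz-curve estimators address.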

20.
To assess the classification accuracy of a continuous diagnostic result, the receiver operating characteristic (ROC) curve is commonly used in applications. The partial area under the ROC curve (pAUC) is one of the widely accepted summary measures due to its generality and ease of probability interpretation. In the life sciences, a direct extension of the pAUC to the time-to-event setting can be used to measure the usefulness of a biomarker for disease detection over time. Without using a trapezoidal rule, we propose nonparametric estimators for the time-dependent pAUC, which are easily computed and have closed-form expressions. The asymptotic Gaussian processes of the estimators are established, and the estimated variance-covariance functions are provided, which are essential in the construction of confidence intervals. The finite-sample performance of the proposed inference procedures is investigated through a series of simulations. Our method is further applied to evaluate the classification ability of CD4 cell counts for patients' survival time in the AIDS Clinical Trials Group (ACTG) 175 study. In addition, the inferences can be generalized to compare the time-dependent pAUCs between patients who received prior antiretroviral therapy and those who did not.
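The "no trapezoids" point can be seen already in the binary-outcome analogue: the pAUC over an FPR band is a restricted Mann–Whitney sum in closed form, counting only the comparisons against the top-ranked controls that occupy that band. A hedged sketch of this analogue only; the paper's time-dependent version replaces the fixed case/control sets with survival-defined ones.

```python
import numpy as np

def partial_auc(cases, controls, f0, f1):
    """Nonparametric pAUC over the FPR range (f0, f1]: restrict the
    Mann-Whitney comparisons to the controls at FPR positions f0..f1
    (the top-ranked controls). Closed form, no trapezoidal rule."""
    x = np.sort(np.asarray(cases, dtype=float))
    y = np.sort(np.asarray(controls, dtype=float))[::-1]  # descending
    n0 = len(y)
    j0, j1 = int(round(f0 * n0)), int(round(f1 * n0))
    band = y[j0:j1]                       # controls inside the FPR band
    wins = (x[:, None] > band[None, :]).sum()
    ties = (x[:, None] == band[None, :]).sum()
    return float((wins + 0.5 * ties) / (len(x) * n0))

# Perfect separation: ROC(u) = 1 for u > 0, so pAUC over (0, 0.5]
# equals the band width 0.5.
cases = np.arange(10, 20, dtype=float)
controls = np.arange(0, 10, dtype=float)
print(partial_auc(cases, controls, 0.0, 0.5))
```

Taking f0 = 0 and f1 = 1 recovers the full Mann–Whitney AUC, which is a convenient sanity check on any pAUC implementation.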
