Similar Literature
11 similar documents found.
1.
The ROC (receiver operating characteristic) curve is frequently used to describe the effectiveness of a diagnostic marker or test. Classical estimation of the ROC curve uses independent, identically distributed samples taken randomly from the healthy and diseased populations. Frequently, not all subjects undergo a definitive gold-standard assessment of disease status (verification). Estimation of the ROC curve based only on data from subjects with verified disease status may be badly biased (verification bias). In this work we investigate the properties of the doubly robust (DR) method for estimating the ROC curve adjusted for covariates (ROC regression) under verification bias. We develop the estimator's asymptotic distribution and examine its finite-sample properties via a simulation study. We apply this procedure to fingerstick postprandial blood glucose measurement data, adjusting for age.
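To make the verification-bias mechanism concrete, here is a minimal sketch (not the authors' doubly robust estimator): it contrasts a complete-case ROC estimate with an inverse-probability-weighted one, assuming the verification probability is known (in practice it would be estimated, e.g. by logistic regression). All names and simulated values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a marker T, true disease status D, and a verification indicator V
# whose probability depends on the marker (verification missing at random).
n = 5000
d = rng.binomial(1, 0.3, n)                      # true disease status
t = rng.normal(loc=1.0 * d, scale=1.0, size=n)   # diagnostic marker
p_verify = 1 / (1 + np.exp(-(-0.5 + 1.5 * t)))   # higher markers are verified more often
v = rng.binomial(1, p_verify)                    # 1 = disease status verified

def weighted_roc(marker, disease, w, grid):
    """Weighted empirical FPR/TPR over a grid of thresholds."""
    tpr = np.array([(w * disease * (marker > c)).sum() / (w * disease).sum() for c in grid])
    fpr = np.array([(w * (1 - disease) * (marker > c)).sum() / (w * (1 - disease)).sum() for c in grid])
    return fpr, tpr

def auc_from_curve(fpr, tpr):
    """Trapezoidal area under the (FPR, TPR) curve."""
    order = np.argsort(fpr)
    f, s = fpr[order], tpr[order]
    return np.sum(np.diff(f) * (s[1:] + s[:-1]) / 2)

grid = np.linspace(t.min(), t.max(), 200)

# Complete-case estimate: verified subjects only -> verification bias.
cc_fpr, cc_tpr = weighted_roc(t[v == 1], d[v == 1], np.ones(v.sum()), grid)

# Inverse-probability-weighted estimate: each verified subject weighted by 1 / P(V=1 | T).
ipw_fpr, ipw_tpr = weighted_roc(t[v == 1], d[v == 1], 1 / p_verify[v == 1], grid)

print("complete-case AUC:", round(auc_from_curve(cc_fpr, cc_tpr), 3))
print("IPW-corrected AUC:", round(auc_from_curve(ipw_fpr, ipw_tpr), 3))
```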

2.
The ROC curve is a fundamental evaluation tool in medical research and survival analysis. Estimation of the ROC curve has been studied extensively for complete data and right-censored survival data, but these methods are not suitable for length-biased and right-censored data. Since such data carry the auxiliary information that the truncation time and the residual time share the same distribution, two new estimators of the ROC curve are proposed that exploit this information to improve estimation efficiency. Numerical simulation studies under various settings and a real-data analysis are conducted.

3.
Based on the SCAD penalty and the area under the ROC curve (AUC), we propose a new method for selecting and combining biomarkers for disease classification and prediction. The proposed estimator of the biomarker combination has an oracle property; that is, in terms of discriminative power, the estimated combination performs as well as it would if the biomarkers significantly associated with the outcome had been known in advance. The proposed estimator is computationally feasible, n^{1/2}-consistent and asymptotically normal. Simulation studies show that the proposed method performs better than existing methods. We illustrate the proposed methodology in the acoustic startle response study. The Canadian Journal of Statistics 39: 324–343; 2011 © 2011 Statistical Society of Canada
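As a toy illustration of the general idea (not the paper's algorithm, penalty tuning, or asymptotic theory), the sketch below maximizes a sigmoid-smoothed empirical AUC of a linear biomarker combination subject to an SCAD penalty, fixing the norm of the coefficient vector for identifiability. All names, penalty constants, and data are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Five candidate biomarkers; only the first two are truly related to disease status.
n, p = 300, 5
x = rng.normal(size=(n, p))
risk = 1.5 * x[:, 0] + 1.0 * x[:, 1]
y = rng.binomial(1, 1 / (1 + np.exp(-risk)))

def scad_penalty(beta, lam=0.2, a=3.7):
    """SCAD penalty applied componentwise and summed."""
    b = np.abs(beta)
    return np.where(
        b <= lam, lam * b,
        np.where(b <= a * lam,
                 (2 * a * lam * b - b ** 2 - lam ** 2) / (2 * (a - 1)),
                 (a + 1) * lam ** 2 / 2),
    ).sum()

def smoothed_auc(beta, x_dis, x_non, h=0.1):
    """Sigmoid-smoothed empirical AUC of the linear score x @ beta."""
    diff = np.subtract.outer(x_dis @ beta, x_non @ beta)   # diseased minus non-diseased scores
    return (1 / (1 + np.exp(-diff / h))).mean()

def objective(beta):
    b = beta / (np.linalg.norm(beta) + 1e-12)   # AUC is scale-invariant: fix ||beta|| = 1
    return -smoothed_auc(b, x[y == 1], x[y == 0]) + scad_penalty(b)

res = minimize(objective, x0=np.full(p, 0.1), method="Nelder-Mead",
               options={"maxiter": 5000, "xatol": 1e-6, "fatol": 1e-8})
print("estimated (normalized) combination:", np.round(res.x / np.linalg.norm(res.x), 2))
```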

4.
In settings with three ordinal diagnostic groups, the key measures of diagnostic accuracy are the volume under the ROC surface (VUS) and the partial volume under the ROC surface (PVUS), which extend the area under the curve (AUC) and the partial area under the curve (PAUC). This article addresses confidence interval estimation of the difference in paired VUSs and the difference in paired PVUSs. To focus especially on studies with small to moderate sample sizes, we propose an approach based on the concepts of generalized inference. A Monte Carlo study demonstrates that the proposed approach generally provides confidence intervals with reasonable coverage probabilities even at small sample sizes. The proposed approach is compared with a parametric bootstrap approach and a large-sample approach through simulation. Finally, the proposed approach is illustrated via an application to a data set of blood test results from anemia patients.
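The empirical VUS has a simple interpretation: it is the proportion of triples, one marker value from each ordinal group, that appear in the correct order (chance level 1/6). A small sketch of that computation on simulated data (not the generalized-inference interval procedure itself; group sizes and distributions are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

# Marker values for three ordinal diagnostic groups (e.g. healthy < mild < severe).
x1 = rng.normal(0.0, 1.0, 80)
x2 = rng.normal(1.0, 1.0, 60)
x3 = rng.normal(2.0, 1.0, 70)

def empirical_vus(x1, x2, x3):
    """Proportion of cross-group triples in the correct order, i.e. P(X1 < X2 < X3)."""
    a = x1[:, None, None]
    b = x2[None, :, None]
    c = x3[None, None, :]
    return np.mean((a < b) & (b < c))

print("empirical VUS:", round(empirical_vus(x1, x2, x3), 3))   # 1/6 would mean a useless marker
```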

5.
A virologic marker, the number of HIV RNA copies (viral load), is currently used to evaluate antiretroviral (ARV) therapies in AIDS clinical trials. This marker can be used to assess the antiviral potency of therapies, but during long-term treatment evaluation it may be strongly affected by clinical factors such as drug exposure and drug resistance, as well as by baseline characteristics. HIV dynamic studies have contributed significantly to the understanding of HIV pathogenesis and ARV treatment strategies. Viral dynamic models can be formulated through differential equations, but there has been only limited development of statistical methodologies for estimating such models or assessing their agreement with observed data. This paper develops mechanism-based nonlinear differential equation models for characterizing long-term viral dynamics under ARV therapy. In this model we incorporate not only clinical factors (drug exposure and susceptibility) but also baseline covariates (baseline viral load, CD4 count, weight, or age) into a function of treatment efficacy. A Bayesian nonlinear mixed-effects modeling approach is investigated and applied to an AIDS clinical trial. The effects of confounding interactions between clinical factors and the covariate-based models are compared using the deviance information criterion (DIC), a Bayesian counterpart of the classical deviance for model assessment designed for complex hierarchical model settings. Relationships between baseline covariates, confounding clinical factors, and drug efficacy are explored. In addition, we compare models incorporating each of four baseline covariates through DIC, and some interesting findings are presented. Our results suggest that modeling HIV dynamics and virologic responses with consideration of time-varying clinical factors as well as baseline characteristics may play an important role in understanding HIV pathogenesis and in designing new treatment strategies for the long-term care of AIDS patients.
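As a hedged illustration of the kind of differential-equation model involved (a standard target-cell/infected-cell/virus system, not necessarily the authors' exact formulation), the sketch below solves such a system with a time-varying treatment efficacy gamma(t) reducing the infection rate. All parameter values and the efficacy function are purely illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Target cells T, productively infected cells I, free virus V; treatment efficacy
# gamma(t) in [0, 1] reduces the infection rate. All parameter values are illustrative.
lam, d, k, delta, N, c = 100.0, 0.1, 2.4e-5, 0.5, 1000.0, 3.0

def gamma(t):
    """Efficacy that wanes over time (e.g. declining drug exposure or rising resistance)."""
    return 0.9 * np.exp(-0.005 * t)

def hiv_ode(t, y):
    T, I, V = y
    infection = (1 - gamma(t)) * k * T * V
    return [lam - d * T - infection,      # uninfected target cells
            infection - delta * I,        # infected cells
            N * delta * I - c * V]        # free virus (viral load)

sol = solve_ivp(hiv_ode, (0, 400), [1000.0, 10.0, 5e4],
                t_eval=np.linspace(0, 400, 201), method="LSODA")
log10_vl = np.log10(sol.y[2])
print("log10 viral load at days 0, 100, 400:",
      round(log10_vl[0], 2), round(log10_vl[50], 2), round(log10_vl[200], 2))
```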

6.
In this article, we consider statistical inference for longitudinal partially linear models when the response variable is sometimes missing, with the missingness probability depending on a covariate that is measured with error. A generalized empirical likelihood (GEL) method is proposed by combining attenuation correction with quadratic inference functions. The method, which takes the within-group correlation into account, is used to estimate the regression coefficients. Furthermore, a residual-adjusted empirical likelihood (EL) is employed for estimating the baseline function so that undersmoothing is avoided. The empirical log-likelihood ratios are proven to be asymptotically chi-squared, and the corresponding confidence regions for the parameters of interest are then constructed. Compared with methods based on normal approximations (NA), the GEL does not require consistent estimators of the asymptotic variance and bias. A numerical study is conducted to compare the performance of the EL and the normal approximation-based method, and a real example is analysed.

7.
The concordance statistic (C-statistic) is commonly used to assess the predictive performance (discriminatory ability) of a logistic regression model. Although several versions of the C-statistic exist, their ability to quantify the improvement in predictive accuracy gained by adding novel risk factors or biomarkers to the model has been heavily criticized in the literature. This paper proposes a model-based concordance-type index, CK, for use with logistic regression models. CK and its asymptotic sampling distribution are derived following Gonen and Heller's approach for the Cox proportional hazards model for survival data, with the modifications necessary for binary data. Unlike existing C-statistics for the logistic model, it quantifies the concordance probability using the difference in predicted risks between the two subjects in a pair rather than their ranking, and is therefore able to quantify the incremental value of a new risk factor or marker. A simulation study revealed that CK performs well when the model parameters are correctly estimated in large samples, and that it shows greater improvement in quantifying the additional predictive value of a new risk factor or marker than existing C-statistics. Furthermore, illustrations using three datasets support the findings of the simulation study.
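For reference, the rank-based C-statistic that the paper criticizes is simply the proportion of (event, non-event) pairs in which the event subject receives the higher predicted risk. A minimal sketch of that quantity for a fitted logistic model is shown below (the proposed CK index itself is not reproduced here); data and names are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 500
x = rng.normal(size=(n, 2))
y = rng.binomial(1, 1 / (1 + np.exp(-(0.8 * x[:, 0] - 0.5 * x[:, 1]))))

# Predicted risks from a fitted logistic regression model.
risk = LogisticRegression().fit(x, y).predict_proba(x)[:, 1]

def c_statistic(risk, y):
    """Proportion of (event, non-event) pairs in which the event has the higher
    predicted risk; ties count one half (the usual rank-based AUC)."""
    diff = np.subtract.outer(risk[y == 1], risk[y == 0])
    return ((diff > 0) + 0.5 * (diff == 0)).mean()

print("C-statistic:", round(c_statistic(risk, y), 3))
```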

8.
A standard two-arm randomised controlled trial usually compares an intervention to a control treatment, with equal numbers of patients randomised to each treatment arm, and only data from within the current trial are used to assess the treatment effect. Historical data are used when designing new trials and have recently been considered for use in the analysis when the required number of patients under a standard trial design cannot be achieved. Incorporating historical control data could lead to more efficient trials, reducing the number of controls required in the current study when the historical and current control data agree. However, when the data are inconsistent, there is potential for biased treatment effect estimates, inflated type I error and reduced power. We introduce two novel approaches for binary data which discount historical data based on their agreement with the current trial controls: an equivalence approach and an approach based on tail-area probabilities. An adaptive design is used in which the allocation ratio is adapted at the interim analysis, randomising fewer patients to control when there is agreement. The historical data are down-weighted in the analysis using the power prior approach with a fixed power. We compare the operating characteristics of the proposed design to historical data methods in the literature: the modified power prior, the commensurate prior and the robust mixture prior. The equivalence probability weight approach is intuitive, and its operating characteristics can be calculated exactly. Furthermore, the equivalence bounds can be chosen to control the maximum possible inflation in type I error.
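For binary outcomes with a conjugate beta prior, a power prior with a fixed power has a closed form: the historical successes and failures simply enter the posterior multiplied by the weight a0. A minimal sketch under that standard formulation (the adaptive weighting and allocation rules described above are not reproduced); all counts are illustrative.

```python
from scipy.stats import beta

# Historical and current control data (responders / sample size) -- illustrative numbers.
y_hist, n_hist = 30, 100
y_curr, n_curr = 14, 50

def power_prior_posterior(y_curr, n_curr, y_hist, n_hist, a0, a=1.0, b=1.0):
    """Posterior for the control response rate when the historical binomial
    likelihood is raised to a fixed power a0 in [0, 1] (Beta(a, b) initial prior)."""
    return beta(a + y_curr + a0 * y_hist,
                b + (n_curr - y_curr) + a0 * (n_hist - y_hist))

for a0 in (0.0, 0.5, 1.0):
    post = power_prior_posterior(y_curr, n_curr, y_hist, n_hist, a0)
    print(f"a0={a0}: posterior mean={post.mean():.3f}, "
          f"95% CrI=({post.ppf(0.025):.3f}, {post.ppf(0.975):.3f})")
```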

9.
We implement a joint model for mixed multivariate longitudinal measurements, applied to the prediction of time until lung transplant or death in idiopathic pulmonary fibrosis. Specifically, we formulate a unified Bayesian joint model for the mixed longitudinal responses and time-to-event outcomes. For the longitudinal model of continuous and binary responses, we investigate multivariate generalized linear mixed models with shared random effects. The longitudinal and time-to-event data are assumed to be independent conditional on the available covariates and shared parameters. A Markov chain Monte Carlo algorithm, implemented in OpenBUGS, is used for parameter estimation. To illustrate practical considerations in choosing a final model, we fit 37 candidate models using all possible combinations of random effects and employ the deviance information criterion to select the best-fitting model. We demonstrate the prediction of future event probabilities within a fixed time interval for patients, using baseline data, post-baseline longitudinal responses, and the time-to-event outcome. The performance of our joint model is also evaluated in simulation studies.

10.
Missing data in clinical trials are inevitable. We highlight the ICH guidelines and the CPMP points to consider on missing data. Specifically, we outline how missing data issues should be considered when designing, planning and conducting studies so as to minimize their impact. We also go beyond the coverage of these two documents, providing a more detailed review of the basic concepts of missing data and frequently used terminology, examples of typical missing data mechanisms, and a discussion of technical details and literature for several frequently used statistical methods and associated software. Finally, we provide a case study in which the principles outlined in this paper are applied to a clinical programme at the protocol design, data analysis plan and other stages of a clinical trial.
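A small illustration of the three standard missingness mechanisms (MCAR, MAR, MNAR) that such reviews refer to, showing how a complete-case mean stays unbiased only under MCAR; all data are simulated and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 2000
x = rng.normal(size=n)              # fully observed covariate
y = 0.5 * x + rng.normal(size=n)    # outcome that may be missing (true mean 0)

def missing_mask(mechanism):
    """Boolean mask of missing y-values under each standard mechanism."""
    if mechanism == "MCAR":         # missingness independent of everything
        p = np.full(n, 0.3)
    elif mechanism == "MAR":        # depends only on the observed covariate x
        p = 1 / (1 + np.exp(-(-1.0 + 1.5 * x)))
    else:                           # MNAR: depends on the unobserved value of y itself
        p = 1 / (1 + np.exp(-(-1.0 + 1.5 * y)))
    return rng.binomial(1, p).astype(bool)

for mech in ("MCAR", "MAR", "MNAR"):
    miss = missing_mask(mech)
    print(f"{mech}: complete-case mean of y = {y[~miss].mean():+.3f}")
```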

11.
A longitudinal mixture model for classifying patients into responders and non-responders is established using both likelihood-based and Bayesian approaches. The model takes into consideration responders in the control group. It is therefore especially useful in situations where the placebo response is strong, or in equivalence trials where the drug in development is compared with a standard treatment. Under our model, a treatment shows evidence of being effective if it increases the proportion of responders or increases the response rate among responders in the treated group compared with the control group, so the model has the flexibility to accommodate different situations. The proposed method is illustrated using simulation and a depression clinical trial dataset for the likelihood-based approach, and the same dataset for the Bayesian approach. The likelihood-based and Bayesian approaches generated consistent results for the depression trial data. In both the placebo group and the treated group, patients are classified into two components with distinct response rates. The proportion of responders is shown to be significantly higher in the treated group than in the control group, suggesting that the treatment paroxetine is effective. Copyright © 2014 John Wiley & Sons, Ltd.
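As a drastically simplified, hedged illustration of the responder/non-responder idea (not the longitudinal mixture model itself), the sketch below fits a two-component binomial mixture to per-patient response counts by EM; all names, parameter values, and data are illustrative.

```python
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(5)

# Each patient contributes m binary visit outcomes; responders have a higher
# per-visit response probability than non-responders. All values are illustrative.
m, n = 8, 400
true_pi, p_resp, p_non = 0.4, 0.7, 0.2
is_resp = rng.binomial(1, true_pi, n).astype(bool)
y = rng.binomial(m, np.where(is_resp, p_resp, p_non))    # responses out of m visits

# EM algorithm for a two-component binomial mixture.
pi_hat, p1, p0 = 0.5, 0.6, 0.3                           # starting values
for _ in range(200):
    # E-step: posterior probability that each patient is a responder.
    num = pi_hat * binom.pmf(y, m, p1)
    tau = num / (num + (1 - pi_hat) * binom.pmf(y, m, p0))
    # M-step: update the mixing proportion and the component response rates.
    pi_hat = tau.mean()
    p1 = (tau * y).sum() / (m * tau.sum())
    p0 = ((1 - tau) * y).sum() / (m * (1 - tau).sum())

print(f"estimated responder proportion: {pi_hat:.2f} (true {true_pi})")
print(f"responder / non-responder rates: {p1:.2f} / {p0:.2f} (true {p_resp} / {p_non})")
```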
