Similar Documents
20 similar documents found.
1.
The diagnostic odds ratio is defined as the ratio of the odds of a positive diagnostic test result in the diseased population to the corresponding odds in the non-diseased population. It is a function of sensitivity and specificity and serves as a single summary of diagnostic accuracy when evaluating a biomarker/test. The naïve estimator of the diagnostic odds ratio fails when either sensitivity or specificity is close to one, because the denominator of the diagnostic odds ratio then approaches zero. We propose several methods to adjust for this situation. Agresti and Coull's adjustment is a common and straightforward correction for extreme binomial proportions. Alternatively, estimation methods based on a more advanced sampling design can be applied, which systematically selects samples from the underlying population based on judgment ranks. Under such a design, the odds can be estimated by a sum of indicator functions, which avoids division by zero and yields a valid estimate. The asymptotic means and variances of the proposed estimators are derived. All methods are readily applied to confidence interval estimation and hypothesis testing for the diagnostic odds ratio. A simulation study is conducted to compare the efficiency of the proposed methods. Finally, the proposed methods are illustrated using a real dataset.
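As a minimal sketch (the correction constant and function name are ours, not the paper's), here is the naive estimator with a Haldane-style continuity correction in the spirit of the Agresti–Coull adjustment:

```python
def diagnostic_odds_ratio(tp, fn, fp, tn, correction=0.5):
    """Diagnostic odds ratio from a 2x2 table.

    DOR = [sens/(1-sens)] / [(1-spec)/spec] = (tp*tn) / (fp*fn).
    When fp or fn is zero (specificity or sensitivity equal to one),
    the naive estimator is undefined; adding a small constant to every
    cell -- a continuity correction in the spirit of the Agresti-Coull
    adjustment mentioned above -- keeps the estimate finite.
    """
    if min(tp, fn, fp, tn) == 0:
        tp, fn, fp, tn = (x + correction for x in (tp, fn, fp, tn))
    return (tp * tn) / (fp * fn)

# Example: perfect specificity in the control sample (fp = 0)
print(diagnostic_odds_ratio(45, 5, 0, 50))  # finite, thanks to the correction
```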

2.
The area under the Receiver Operating Characteristic (ROC) curve (AUC) and related summary indices are widely used to assess the accuracy of an individual diagnostic system and to compare the performances of several systems in many areas, including studies of human perception, decision making, and the regulatory approval process for new diagnostic technologies. Many investigators have suggested implementing the bootstrap approach to estimate the variability of AUC-based indices. The corresponding bootstrap quantities are typically estimated by sampling a bootstrap distribution. Such a process, frequently termed the Monte Carlo bootstrap, is often computationally burdensome and imposes an additional sampling error on the resulting estimates. In this article, we demonstrate that the exact or ideal (sampling-error-free) bootstrap variances of the nonparametric estimator of the AUC can be computed directly, i.e., without resampling the original data, and we develop easy-to-use formulas to compute them. We derive formulas for the variances of the AUC corresponding to a single given or random reader, and to the average over several given or randomly selected readers. The derived formulas provide an algorithm for computing the ideal bootstrap variances exactly and hence improve many bootstrap methods proposed earlier for analyzing AUCs by eliminating the sampling error, and sometimes the burdensome computations, associated with a Monte Carlo (MC) approximation. In addition, the availability of closed-form solutions permits an analytical assessment of the properties of bootstrap variance estimators. Applications of the proposed method are shown on two experimentally ascertained datasets that illustrate settings commonly encountered in diagnostic imaging. In the context of the two examples we also demonstrate the magnitude of the effect of the MC estimators' sampling error on the resulting inferences.
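For contrast with the closed-form approach, a sketch of the nonparametric (Mann–Whitney) AUC estimator and the Monte Carlo bootstrap variance whose sampling error the ideal-bootstrap formulas eliminate (synthetic data; the exact formulas themselves are in the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

def auc_mw(x, y):
    # Nonparametric (Mann-Whitney) AUC: P(X > Y) + 0.5*P(X = Y),
    # x = scores of diseased cases, y = scores of non-diseased cases.
    d = x[:, None] - y[None, :]
    return (d > 0).mean() + 0.5 * (d == 0).mean()

def mc_bootstrap_var(x, y, B=2000):
    # Monte Carlo bootstrap variance of the AUC: resample cases and
    # controls independently.  This resampling noise is exactly what
    # the exact (ideal) bootstrap formulas remove.
    reps = [auc_mw(rng.choice(x, x.size), rng.choice(y, y.size))
            for _ in range(B)]
    return np.var(reps, ddof=1)

x = rng.normal(1.0, 1.0, 50)   # diseased
y = rng.normal(0.0, 1.0, 50)   # non-diseased
print(auc_mw(x, y), mc_bootstrap_var(x, y))
```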

3.
The use of the area under the receiver-operating characteristic (ROC) curve, the AUC, as an index of diagnostic accuracy is pervasive in fields such as biomedical science and machine learning, and a larger AUC value has effectively become synonymous with better performance. Functional transformation of the marker values has been proposed in the specialized literature as a procedure for increasing the AUC and hence the diagnostic accuracy. However, the classification process rests on regions (classification subsets) that support the decision made: a subject is classified as positive if its marker value falls within this region and as negative otherwise. In this paper we study the capacity of functional transformations to improve the classification performance of univariate biomarkers, and the impact of such transformations on the final classification regions, based on a real-world dataset. In particular, we consider the problem of determining the gender of a subject from the mode frequency of his/her voice. The shape of the cumulative distribution function of this characteristic in the male and female groups makes the resulting classification problem useful for illustrating the difference between having useful diagnostic rules and attaining an optimal AUC value. Our point is that improving the AUC by means of a functional transformation can produce classification regions with no practical interpretability. We propose to improve the classification accuracy by making the selection of the classification subsets more flexible while preserving their interpretability. In addition, we provide several graphical approximations that allow a better understanding of the classification problem.
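A toy illustration of the trade-off on synthetic data (not the voice dataset): a non-monotone transformation can raise the AUC while turning the positive-classification region into a union of disjoint intervals:

```python
import numpy as np

rng = np.random.default_rng(2)

def auc_mw(x, y):
    d = x[:, None] - y[None, :]
    return (d > 0).mean() + 0.5 * (d == 0).mean()

# Negatives sit in the middle, positives in two outer lobes: any single
# threshold on the raw marker discriminates poorly.
pos = np.concatenate([rng.normal(-3, 1, 200), rng.normal(3, 1, 200)])
neg = rng.normal(0, 1, 400)
print("raw AUC:", auc_mw(pos, neg))

# The non-monotone transform t(x) = |x| boosts the AUC, but "t(x) > c"
# corresponds to (-inf, -c) U (c, inf) on the original scale -- two
# disjoint intervals, which are harder to interpret in practice.
print("transformed AUC:", auc_mw(np.abs(pos), np.abs(neg)))
```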

4.
Implementation of the Gibbs sampler for estimating the accuracy of multiple binary diagnostic tests in one population is investigated. This method, proposed by Joseph, Gyorkos and Coupal, takes a Bayesian approach and is used in the absence of a gold standard to estimate the prevalence and the sensitivity and specificity of medical diagnostic tests. The expressions that allow the method to be implemented for an arbitrary number of tests are given. Using the convergence diagnostics procedure of Raftery and Lewis, the relation between the number of Gibbs sampling iterations and the precision of the estimated quantiles of the posterior distributions is derived. An example is presented concerning a data set of gastro-esophageal reflux disease patients collected to evaluate the accuracy of the water siphon test compared with 24 h pH-monitoring, endoscopy and histology tests. The main message that emerges from our analysis is that implementation of the Gibbs sampler to estimate the parameters of multiple binary diagnostic tests can be problematic, and convergence diagnostics are advised for this method. The factors that affect the convergence of the chains to the posterior distributions and those that influence the precision of their quantiles are analyzed.
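A compact sketch of one iteration of such a Gibbs sampler for the latent-class model, assuming conditional independence of the tests given disease status and flat Beta(1, 1) priors (illustrative choices, not necessarily those of the paper):

```python
import numpy as np

rng = np.random.default_rng(3)

def gibbs_step(T, prev, se, sp):
    """One Gibbs iteration for K binary tests without a gold standard.

    T : (n, K) array of 0/1 test results
    prev, se, sp : current prevalence, sensitivities (K,), specificities (K,)
    Assumes conditional independence of the tests given true disease
    status and flat Beta(1, 1) priors.
    """
    n, K = T.shape
    # 1. Impute each latent disease status from its full conditional.
    like1 = prev * np.prod(se**T * (1 - se)**(1 - T), axis=1)
    like0 = (1 - prev) * np.prod((1 - sp)**T * sp**(1 - T), axis=1)
    d = rng.binomial(1, like1 / (like1 + like0))
    # 2. Conjugate Beta updates given the imputed statuses.
    prev = rng.beta(1 + d.sum(), 1 + n - d.sum())
    for k in range(K):
        tk_dis, tk_non = T[d == 1, k], T[d == 0, k]
        se[k] = rng.beta(1 + tk_dis.sum(), 1 + tk_dis.size - tk_dis.sum())
        sp[k] = rng.beta(1 + tk_non.size - tk_non.sum(), 1 + tk_non.sum())
    return d, prev, se, sp
```

Iterating this step, discarding a burn-in, and checking the chains with a diagnostic such as Raftery–Lewis is the workflow the abstract advocates.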

5.
In diagnostic trials, the performance of a product is most frequently measured in terms of sensitivity, specificity and the area under the ROC curve (AUC). In multiple-reader trials, correlated data arise naturally because the same patient is observed under different conditions by several readers, and the repeated measures may have quite an involved correlation structure. Even though sensitivity, specificity and the AUC are all assessments of diagnostic ability, no unified approach exists for analyzing all such measurements while allowing for an arbitrary correlation structure. This paper therefore presents a unified approach for these three effect measures of diagnostic ability, built on the fact that sensitivity and specificity are particular AUCs. Because the presented theory can also be used in set-ups with correlated binomial random variables, its application may extend beyond diagnostic trials.

6.
The area under the ROC curve (AUC) can be interpreted as the probability that, for a randomly sampled pair of subjects, the classification score of a diseased subject is larger than that of a non-diseased subject. From the perspective of classification, we want to separate the two groups as distinctly as possible via the AUC. When the difference between the scores of a marker is small, its impact on classification is less important. We therefore propose a new diagnostic/classification measure based on a modified area under the ROC curve (mAUC), defined as a weighted sum of two AUCs in which the AUC with the smaller difference receives the lower weight, and vice versa. The mAUC is robust in the sense that it increases as the AUC increases, as long as the two are not equal. Moreover, in many diagnostic situations only a specific range of specificity is of interest. Under normal distributions, we show that if the AUCs of two markers are within similar ranges, the larger mAUC implies the larger partial AUC for a given specificity. This property helps to identify the marker with the higher partial AUC even when the AUCs are similar. Two nonparametric estimates of the mAUC and their variances are given. We also suggest using the mAUC as the objective function for classification, with the gradient Lasso algorithm for classifier construction and marker selection. Applications to simulated datasets and real microarray gene expression datasets show that our method finds a linear classifier with a higher ROC curve than some existing linear classifiers, especially in the range of low false positive rates.

7.
Because it relates directly to sensitivity and specificity and provides an optimal cut-point that maximizes overall classification effectiveness, the Youden index is frequently used in biomedical diagnostic practice. Its current application, however, is limited to two diagnostic groups, whereas many disease processes include a transitional intermediate stage. Early recognition of this intermediate stage is vital because it opens an optimal window for therapeutic intervention. In this article, we extend the Youden index to assess diagnostic accuracy when there are three ordinal diagnostic groups. Parametric and nonparametric methods are presented to estimate the optimal Youden index, the underlying optimal cut-points, and the associated confidence intervals. Extensive simulation studies covering representative distributional assumptions are reported to compare the performance of the proposed methods. A real example illustrates the usefulness of the Youden index in evaluating the discriminating ability of diagnostic tests.
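A nonparametric grid-search sketch for one common definition of the three-group Youden index (treat the exact normalization as an assumption of this sketch, not necessarily the paper's):

```python
import numpy as np

def youden3(x1, x2, x3, grid=None):
    """Nonparametric three-group Youden index (one common definition):

        J3 = (1/2) * max_{t1 < t2} [F1(t1) - F2(t1) + F2(t2) - F3(t2)]

    where F1, F2, F3 are the empirical CDFs of the healthy,
    intermediate and diseased groups (assumed stochastically ordered).
    Returns J3 and the optimal cut-point pair (t1, t2).
    """
    if grid is None:
        grid = np.unique(np.concatenate([x1, x2, x3]))
    F = lambda x, t: (x[:, None] <= t[None, :]).mean(axis=0)
    f1, f2, f3 = F(x1, grid), F(x2, grid), F(x3, grid)
    best, cuts = -np.inf, (None, None)
    for i in range(len(grid) - 1):        # candidate t1 = grid[i]
        j = np.arange(i + 1, len(grid))   # candidate t2 > t1
        val = f1[i] - f2[i] + f2[j] - f3[j]
        k = val.argmax()
        if val[k] > best:
            best, cuts = val[k], (grid[i], grid[j[k]])
    return 0.5 * best, cuts
```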

8.
The case–control design for assessing the accuracy of a binary diagnostic test (BDT) is very common in clinical practice. It consists of applying the diagnostic test to all individuals in a sample of subjects who have the disease and in another sample of subjects who do not. The sensitivity of the diagnostic test is estimated from the case sample and the specificity from the control sample. Another parameter used to assess the performance of a BDT is the weighted kappa coefficient, which depends on the sensitivity and specificity of the test, on the disease prevalence and on the weighting index. In this article, confidence intervals for the weighted kappa coefficient under a case–control design are studied, and a method is proposed to calculate the sample sizes needed to estimate this parameter. The results are applied to a real example.
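A sketch using one parameterization of the weighted kappa coefficient of a BDT that appears in this literature; the formula should be read as an assumption of the sketch rather than as the paper's exact expression:

```python
def weighted_kappa(se, sp, p, c):
    """Weighted kappa of a binary diagnostic test (one common
    parameterization; treat the formula as an assumption here).

    se, sp : sensitivity and specificity
    p      : disease prevalence, q = 1 - p
    c      : weighting index in [0, 1], balancing the relative loss of
             a false positive versus a false negative
    """
    q = 1 - p
    youden = se + sp - 1
    Q = p * se + q * (1 - sp)          # probability of a positive test
    return p * q * youden / (c * p * (1 - Q) + (1 - c) * q * Q)

# kappa(c) for a test with Se = 0.90, Sp = 0.95 at 10% prevalence
print([round(weighted_kappa(0.90, 0.95, 0.10, c), 3) for c in (0.0, 0.5, 1.0)])
```

Sanity checks built into this form: a perfect test (Se = Sp = 1) gives kappa = 1 for any c, and an uninformative test (Se + Sp = 1) gives kappa = 0.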

9.
Accurate diagnosis of a molecularly defined subtype of cancer is often an important step toward its effective control and treatment. For some cancer subtypes, a gold standard with perfect sensitivity and specificity may be unavailable, and tumor subtype status is then commonly measured by multiple imperfect diagnostic markers. In many such studies, moreover, some subjects are measured by only a subset of the diagnostic tests, and the missingness probabilities may depend on the unknown disease status. In this paper, we present statistical methods based on the EM algorithm to evaluate incomplete multiple imperfect diagnostic tests under a missing-at-random assumption and under one missing-not-at-random scenario. We apply the proposed methods to a real data set from the National Cancer Institute (NCI) colon cancer family registry on diagnosing microsatellite instability for hereditary non-polyposis colorectal cancer, estimating the diagnostic accuracy parameters (i.e. sensitivities and specificities), the prevalence, and potential differential missingness probabilities for 11 biomarker tests. Simulations are also conducted to evaluate the small-sample performance of our methods.
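A stripped-down EM sketch for the complete-data core of such a model (conditional independence of the markers given true status is assumed; the MAR/MNAR missingness machinery of the paper is omitted):

```python
import numpy as np

def em_latent_class(T, n_iter=200):
    """EM estimates of prevalence, sensitivities and specificities for
    K binary tests without a gold standard, assuming the tests are
    conditionally independent given true disease status.  T is an
    (n, K) 0/1 matrix with no missing entries; identifiability
    generally requires K >= 3 tests.
    """
    n, K = T.shape
    prev, se, sp = 0.5, np.full(K, 0.8), np.full(K, 0.8)
    for _ in range(n_iter):
        # E-step: posterior probability that each subject is diseased.
        l1 = prev * np.prod(se**T * (1 - se)**(1 - T), axis=1)
        l0 = (1 - prev) * np.prod((1 - sp)**T * sp**(1 - T), axis=1)
        w = l1 / (l1 + l0)
        # M-step: prevalence and weighted true/false positive fractions.
        prev = w.mean()
        se = (w[:, None] * T).sum(axis=0) / w.sum()
        sp = ((1 - w)[:, None] * (1 - T)).sum(axis=0) / (1 - w).sum()
    return prev, se, sp
```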

10.
Cohen’s kappa, a special case of the weighted kappa, is a chance‐corrected index used extensively to quantify inter‐rater agreement in validation and reliability studies. In this paper, it is shown that for inter‐rater agreement in 2 × 2 tables, when the two raters have the same number of opposite (discordant) ratings, the weighted kappa, Cohen’s kappa, and the Peirce, Yule, Maxwell–Pilliner and Fleiss indices are identical; the weights in the weighted kappa are therefore of little consequence under this condition. Equivalently, it is shown that for two partitions of the same data set, produced by two clustering algorithms with the same number of clusters and equal cluster sizes, these similarity indices coincide. Hence an important characterisation is formulated relating equal numbers of clusters with the same cluster sizes to the presence or absence of a trait in a reliability study. Two numerical examples exemplify the implications of this relationship.
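The 2 × 2 case is easy to verify numerically: with equal off-diagonal counts the marginals are symmetric, the expected disagreements in the two off-diagonal cells coincide, and the weights cancel out of the weighted kappa. A minimal check (disagreement-weight convention assumed):

```python
def weighted_kappa_2x2(a, b, c, d, w12=1.0, w21=1.0):
    """Weighted kappa for a 2x2 agreement table [[a, b], [c, d]],
    with disagreement weights w12, w21 on the off-diagonal cells
    (diagonal weights zero, as in Cohen's 1968 convention)."""
    n = a + b + c + d
    p12, p21 = b / n, c / n
    e12 = (a + b) * (b + d) / n**2   # expected disagreement, cell (1,2)
    e21 = (a + c) * (c + d) / n**2   # expected disagreement, cell (2,1)
    return 1 - (w12 * p12 + w21 * p21) / (w12 * e12 + w21 * e21)

# With equal off-diagonal counts (b = c) the marginals are symmetric,
# e12 = e21, and the weights cancel: every choice gives the same value.
for w in [(1, 1), (1, 5), (9, 2)]:
    print(weighted_kappa_2x2(40, 7, 7, 46, *w))
```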

11.
This article examines the forecasting accuracy of various methods used by Federal Reserve Banks to estimate real value added by regional manufacturing industries. Using Texas manufacturing data and weighted forecasting accuracy measures consistent with index-number construction for Texas, the results support the use of very simple methods based on the assumption of product exhaustion, allowing for technical change. More complex methods using Cobb–Douglas production functions estimated by Bayesian techniques did not perform as well, not for lack of conceptual sophistication or appropriate prior information but probably because of the small number of observations and the collinearity of the data available when constructing regional production indices. These results must be qualified: the weighted forecasting accuracy measures tend to obscure the fact that no single method is uniformly superior for all industries, and with industry weights different from those for Texas the results could be reversed. Confirmation of the conclusions awaits the results of other regional manufacturing studies.

12.
When missing data occur in studies designed to compare the accuracy of diagnostic tests, a common though naive practice is to base the comparison of sensitivity and specificity, as well as of positive and negative predictive values, on some subset of the data that fits the methods implemented in standard statistical packages. Such methods are usually valid only under the strong missing-completely-at-random (MCAR) assumption and may generate biased and less precise estimates. We review models that use the dependence structure of the completely observed cases to incorporate the information in the partially categorized observations into the analysis, and show how they may be fitted via a two-stage hybrid process involving maximum likelihood in the first stage and weighted least squares in the second. We indicate how computational subroutines written in R may be used to fit the proposed models and illustrate the different analysis strategies with observational data collected to compare the accuracy of three distinct non-invasive diagnostic methods for endometriosis. The results indicate that even when the MCAR assumption is plausible, the naive partial analyses should be avoided.
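A small simulation (with assumed numbers, not the paper's data) showing why a complete-case analysis fails when verification depends on the test result, i.e. when MCAR does not hold:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100_000

d = rng.binomial(1, 0.3, n)                     # true disease status
t = np.where(d == 1, rng.binomial(1, 0.85, n),  # test: Se = 0.85
                     rng.binomial(1, 0.10, n))  #       1 - Sp = 0.10

# Disease status is verified more often after a positive test (missing
# at random given the test, but not MCAR): 90% verified if positive,
# 20% if negative.
verified = rng.binomial(1, np.where(t == 1, 0.9, 0.2), n).astype(bool)

se_cc = t[verified & (d == 1)].mean()           # complete-case estimate
print("complete-case Se:", round(se_cc, 3), "(true value 0.85)")
# Prints roughly 0.96: the naive partial analysis is badly biased upward.
```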

13.
It is customary to use two groups of indices to evaluate a diagnostic method with a binary outcome: validity indices computed against a standard rater (sensitivity, specificity, and positive or negative predictive values) and reliability indices computed without a standard rater (positive, negative and overall agreement). However, none of these classic indices is chance-corrected, which may distort the analysis (especially in comparative studies). One way of chance-correcting these indices is the Delta model (an alternative to the Kappa model), but this requires a computer program to carry out the calculations. This paper gives an asymptotic version of the Delta model, yielding simple expressions for the estimator of each of the above-mentioned chance-corrected indices, as well as for its standard error.

14.
Accurate diagnosis of disease is a critical part of health care, and new diagnostic and screening tests must be evaluated on their ability to discriminate diseased from non‐diseased conditions. For a continuous‐scale diagnostic test, a popular summary index of the receiver operating characteristic (ROC) curve is the area under the curve (AUC); when the focus is on a certain region of false positive rates, however, the partial AUC is often used instead. In this paper we derive the asymptotic normal distribution of the non‐parametric estimator of the partial AUC with an explicit variance formula. The empirical likelihood (EL) ratio for the partial AUC is defined, and its limiting distribution is shown to be a scaled chi‐square distribution. Hybrid bootstrap and EL confidence intervals for the partial AUC are proposed using the newly developed EL theory. Extensive simulation studies compare the relative performance of the proposed intervals with existing intervals for the partial AUC, and a real example illustrates the application of the recommended intervals. The Canadian Journal of Statistics 39: 17–33; 2011 © 2011 Statistical Society of Canada
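A sketch of the nonparametric partial AUC estimator on synthetic data, paired with a plain percentile bootstrap for comparison (the hybrid bootstrap and EL intervals of the paper are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(5)

def partial_auc(x, y, u0):
    """Nonparametric partial AUC over false positive rates in (0, u0):

        pAUC = (1/mn) * sum_i sum_j I(x_i > y_j) * I(y_j >= q),

    where q is the empirical (1 - u0) quantile of the non-diseased
    scores y (ties ignored for brevity)."""
    q = np.quantile(y, 1 - u0)
    return ((x[:, None] > y[None, :]) & (y[None, :] >= q)).mean()

x = rng.normal(1, 1, 200)  # diseased
y = rng.normal(0, 1, 200)  # non-diseased
est = partial_auc(x, y, u0=0.2)

# Simple percentile-bootstrap interval, as a baseline against which the
# hybrid bootstrap and empirical-likelihood intervals are compared.
boot = [partial_auc(rng.choice(x, x.size), rng.choice(y, y.size), 0.2)
        for _ in range(2000)]
print(est, np.percentile(boot, [2.5, 97.5]))
```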

15.
The semiparametric LABROC approach of fitting a binormal model has been justified for estimating the AUC as a global index of accuracy (except for bimodal forms), but for estimating a local index of accuracy such as the true positive fraction (TPF) it may be biased under severe departures from binormality. We extend parametric ROC analysis for quantitative data to the case where one or both members of the pair of distributions is a mixture of Gaussians (MG), in particular for bimodal forms. We show analytically that the overall AUC and TPF are weighted mixtures of the AUCs and TPFs of the components of the underlying distributions. In a simulation study of six configurations of MG distributions, comprising {bimodal, normal} and {bimodal, bimodal} pairs, the parameters of the MG distributions were estimated using the EM algorithm. The results showed that the estimated AUC from our proposed model was essentially unbiased, and that the bias in the estimated TPF at a clinically relevant range of FPF was roughly 0.01 for a sample size of n=100/100. In practice, with severe departures from binormality, we recommend an extension of LABROC, and software development in future research, to allow each member of the pair of distributions to be a mixture of Gaussians, a more flexible parametric form.
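The mixture representation is straightforward to compute: the AUC between two Gaussian-mixture markers is a weighted sum of component-pair binormal AUCs. A minimal sketch:

```python
import numpy as np
from scipy.stats import norm

def auc_gaussian_mixtures(w1, mu1, sd1, w0, mu0, sd0):
    """AUC = P(X > Y) when the diseased marker X and the non-diseased
    marker Y are each finite Gaussian mixtures: a weighted sum of
    component-pair binormal AUCs, mirroring the mixture representation
    described above.
    """
    w1, mu1, sd1 = map(np.asarray, (w1, mu1, sd1))
    w0, mu0, sd0 = map(np.asarray, (w0, mu0, sd0))
    delta = mu1[:, None] - mu0[None, :]
    scale = np.sqrt(sd1[:, None]**2 + sd0[None, :]**2)
    return (w1[:, None] * w0[None, :] * norm.cdf(delta / scale)).sum()

# Bimodal diseased group versus unimodal non-diseased group
print(auc_gaussian_mixtures([0.5, 0.5], [1.0, 4.0], [1.0, 1.0],
                            [1.0], [0.0], [1.0]))
```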

16.
In this paper, we provide a method for constructing confidence intervals for accuracy with correlated observations, where one sample of patients is rated by two or more diagnostic tests. Confidence intervals for other measures of diagnostic tests, such as sensitivity, specificity, positive predictive value, and negative predictive value, have already been developed for clustered or correlated observations using the generalized estimating equations (GEE) method. Here, we use the GEE and delta methods to construct confidence intervals for accuracy, the proportion of patients who are correctly classified. Simulation results verify that the estimated confidence intervals exhibit appropriate coverage rates.
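As a simpler stand-in for the GEE/delta-method construction (not the paper's estimator), a cluster-level Wald interval that respects within-patient correlation:

```python
import numpy as np
from scipy.stats import norm

def accuracy_ci_clustered(correct, cluster, alpha=0.05):
    """Wald confidence interval for accuracy (the proportion correctly
    classified) that respects within-patient correlation by collapsing
    the ratings to one summary per patient.  A cluster-level sketch for
    illustration only; the paper's GEE/delta-method intervals differ.

    correct : 0/1 array, one entry per rating
    cluster : patient identifier for each rating
    """
    ids = np.unique(cluster)
    means = np.array([correct[cluster == i].mean() for i in ids])
    acc = means.mean()                              # equal cluster weights
    se = means.std(ddof=1) / np.sqrt(len(ids))      # between-cluster SE
    z = norm.ppf(1 - alpha / 2)
    return acc, (acc - z * se, acc + z * se)
```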

17.
Many contemporary classifiers are constructed to provide good performance for very high-dimensional data. However, an issue at least as important as good classification is determining which of the many potential variables provide key information for good decisions. Addressing this issue can help us determine which aspects of the data-generating mechanism (e.g. which genes in a genomic study) are of greatest importance in distinguishing between populations. We introduce tilting methods for addressing this problem. We apply weights to the components of data vectors, rather than to the data vectors themselves (as is common in related work), and we tilt in a way that is governed by the L2-distance between weight vectors, rather than by the more commonly used Kullback–Leibler distance. It is shown that this approach, together with the added constraint that the weights be non-negative, produces an algorithm that eliminates vector components with little influence on the classification decision. In particular, use of the L2-distance in this problem produces properties reminiscent of those that arise when L1-penalties are employed to eliminate explanatory variables in very high-dimensional prediction problems, e.g. those involving the lasso. We introduce techniques that can be implemented very rapidly, and we show how to use bootstrap methods to assess the accuracy of our variable ranking and variable elimination procedures.

18.
Receiver operating characteristic (ROC) curves can be used to assess the accuracy of tests measured on ordinal or continuous scales, and the most commonly used measure of overall diagnostic accuracy is the area under the ROC curve (AUC). Estimating the AUC ordinarily requires a gold standard (GS) test for the true disease status, but a GS test may be too expensive or infeasible, and in many medical studies the true disease status of the subjects remains unknown. Under the assumption that the test results in each disease group are normally distributed, we propose a heuristic method for estimating confidence intervals for the difference in paired AUCs of two diagnostic tests in the absence of a GS reference. The method proceeds in three stages, combining the expectation-maximization (EM) algorithm, the bootstrap, and an estimation based on asymptotic generalized pivotal quantities (GPQs) to construct generalized confidence intervals for the difference in paired AUCs. Simulation results show that the proposed interval estimation procedure yields satisfactory coverage probabilities and expected interval lengths. A numerical example using a published dataset illustrates the method.
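A sketch of only the first (EM) stage for a single test: a two-component normal mixture recovers the latent groups without a gold standard (the bootstrap and GPQ stages for the paired-AUC difference are omitted):

```python
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(6)

# Test results from an unlabeled mix of non-diseased and diseased subjects
scores = np.concatenate([rng.normal(0, 1, 300), rng.normal(1.5, 1, 200)])

# EM fit of a two-component normal mixture recovers the latent group
# parameters in the absence of a gold standard.
gm = GaussianMixture(n_components=2, random_state=0).fit(scores[:, None])
mu = gm.means_.ravel()
sd = np.sqrt(gm.covariances_.ravel())
order = np.argsort(mu)                 # label the lower-mean component "healthy"
(m0, m1), (s0, s1) = mu[order], sd[order]

# Binormal AUC from the recovered components.
print("AUC:", norm.cdf((m1 - m0) / np.hypot(s0, s1)))
```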

19.
This paper examines the use of Dirichlet process mixtures for curve fitting. An important modelling aspect in this setting is the choice between constant and covariate‐dependent weights. By examining the problem of curve fitting from a predictive perspective, we show the advantages of using covariate‐dependent weights. These advantages are a result of the incorporation of covariate proximity in the latent partition. However, closer examination of the partition yields further complications, which arise from the vast number of total partitions. To overcome this, we propose to modify the probability law of the random partition to strictly enforce the notion of covariate proximity, while still maintaining certain properties of the Dirichlet process. This allows the distribution of the partition to depend on the covariate in a simple manner and greatly reduces the total number of possible partitions, resulting in improved curve fitting and faster computations. Numerical illustrations are presented.

20.
The problem of heavy tails in regression models is studied. We propose that regression models be estimated by a standard procedure and that a statistical check for heavy tails, based on the residuals, be conducted as a regression diagnostic. Using the peaks-over-threshold approach, the generalized Pareto distribution quantifies the degree of heavy-tailedness through the extreme value index. The number of excesses is determined by means of an innovative threshold model which partitions the random sample into extreme values and ordinary values. The overall decision on a significant heavy tail is justified by both a statistical test and a quantile–quantile plot. The approach is useful for assessing the goodness of fit of the estimated regression model and for quantifying the occurrence of extremal events. The proposed methodology is illustrated with surface ozone levels in the city centre of Leeds.
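A sketch of the peaks-over-threshold step, with a simple quantile rule standing in for the paper's threshold model:

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(7)

# Stand-in for regression residuals; a heavy-tailed error distribution
# should be flagged by a clearly positive extreme value index.
resid = rng.standard_t(df=3, size=2000)

# Peaks-over-threshold: keep the excesses above a high threshold and
# fit a generalized Pareto distribution to them.  The 95% quantile rule
# below is illustrative, not the paper's threshold model.
u = np.quantile(resid, 0.95)
excess = resid[resid > u] - u
xi, _, beta = genpareto.fit(excess, floc=0)   # xi = extreme value index
print(f"threshold u = {u:.3f}, xi = {xi:.3f}, scale = {beta:.3f}")
# xi > 0 indicates a heavy (Pareto-type) upper tail; follow up with a
# formal test and a quantile-quantile plot, as described above.
```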
