1.
The weighted kappa coefficient of a binary diagnostic test is a measure of the beyond-chance agreement between the diagnostic test and the gold standard, and it allows us to assess and compare the performance of binary diagnostic tests. In the presence of partial disease verification, the comparison of the weighted kappa coefficients of two or more binary diagnostic tests cannot be carried out by ignoring the individuals with an unknown disease status, since the resulting estimators would be affected by verification bias. In this article, we propose a global hypothesis test based on the chi-square distribution to simultaneously compare the weighted kappa coefficients when, in the presence of partial disease verification, the missing-data mechanism is ignorable. Simulation experiments were carried out to study the type I error and the power of the global hypothesis test. The results were applied to the diagnosis of coronary disease.
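The article derives its own test statistic; as a rough sketch of the general shape of such a global comparison, a Wald-type chi-square statistic can be formed from the vector of estimated weighted kappas and their estimated covariance matrix. The function and inputs below (`kappa_hat`, `cov_hat`) are hypothetical illustrations, not the authors' method:

```python
import numpy as np
from scipy import stats

def global_wald_test(kappa_hat, cov_hat):
    """Wald-type global test of H0: kappa_1 = ... = kappa_J.

    kappa_hat : length-J vector of estimated weighted kappas.
    cov_hat   : JxJ estimated covariance matrix of those estimates.
    """
    J = len(kappa_hat)
    # Contrast matrix whose rows encode kappa_j - kappa_J = 0.
    A = np.hstack([np.eye(J - 1), -np.ones((J - 1, 1))])
    d = A @ kappa_hat
    Q = float(d @ np.linalg.solve(A @ cov_hat @ A.T, d))
    p_value = stats.chi2.sf(Q, df=J - 1)   # chi-square with J-1 df
    return Q, p_value

# Hypothetical estimates for three diagnostic tests:
Q, p = global_wald_test(np.array([0.62, 0.55, 0.58]),
                        np.diag([0.002, 0.003, 0.0025]))
```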
2.
A case–control design for assessing the accuracy of a binary diagnostic test (BDT) is very frequent in clinical practice. This design consists of applying the diagnostic test to all of the individuals in a sample of those who have the disease and in another sample of those who do not. The sensitivity of the diagnostic test is estimated from the case sample and the specificity from the control sample. Another parameter used to assess the performance of a BDT is the weighted kappa coefficient, which depends on the sensitivity and specificity of the diagnostic test, on the disease prevalence, and on the weighting index. In this article, confidence intervals are studied for the weighted kappa coefficient under a case–control design, and a method is proposed to calculate the sample sizes needed to estimate this parameter. The results are applied to a real example.
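For reference, one common parameterization (following Kraemer) writes the weighted kappa of a BDT in terms of exactly these four quantities. The sketch below assumes that parameterization; the article's own definition should be checked against it:

```python
def weighted_kappa_bdt(se, sp, p, c):
    """Weighted kappa of a binary diagnostic test, in one common
    parameterization (an assumption here, not necessarily the
    article's): kappa(c) = p*q*Y / (c*p*(1-Q) + (1-c)*q*Q).
    """
    q = 1.0 - p
    Y = se + sp - 1.0            # Youden index
    Q = p * se + q * (1.0 - sp)  # P(test result positive)
    return p * q * Y / (c * p * (1.0 - Q) + (1.0 - c) * q * Q)

# Example: se=0.90, sp=0.80, prevalence 0.15, weighting index 0.5
k = weighted_kappa_bdt(0.90, 0.80, 0.15, 0.5)
```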
3.
The kappa coefficient is a widely used measure for assessing agreement on a nominal scale. Weighted kappa is an extension of Cohen's kappa that is commonly used for measuring agreement on an ordinal scale. In this article, it is shown that weighted kappa can be computed as a function of unweighted kappas. The latter coefficients are kappa coefficients that correspond to smaller contingency tables that are obtained by merging categories.
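As an illustration of the objects involved, the sketch below computes an unweighted Cohen's kappa, a linearly weighted kappa, and the unweighted kappas of collapsed tables obtained by merging categories; the exact weights that combine the collapsed-table kappas are given in the article itself:

```python
import numpy as np

def cohen_kappa(table):
    """Unweighted Cohen's kappa from a square contingency table."""
    P = table / table.sum()
    po = np.trace(P)             # observed agreement
    pe = P.sum(1) @ P.sum(0)     # chance agreement from the marginals
    return (po - pe) / (1 - pe)

def linear_weighted_kappa(table):
    """Cohen's kappa with linear (absolute-distance) disagreement weights."""
    P = table / table.sum()
    k = P.shape[0]
    i, j = np.indices((k, k))
    w = np.abs(i - j) / (k - 1)  # 0 on the diagonal, 1 in the corners
    e = np.outer(P.sum(1), P.sum(0))
    return 1 - (w * P).sum() / (w * e).sum()

def merge(table, groups):
    """Collapse categories; groups is a list of index lists."""
    return np.array([[table[np.ix_(g, h)].sum() for h in groups]
                     for g in groups])

T = np.array([[20, 5, 1], [4, 15, 6], [2, 3, 18]])
k_full = linear_weighted_kappa(T)
# Unweighted kappas of collapsed 2x2 tables of the kind the
# paper's decomposition combines:
k_12 = cohen_kappa(merge(T, [[0, 1], [2]]))
k_23 = cohen_kappa(merge(T, [[0], [1, 2]]))
```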
4.
It is often of interest to measure the agreement between a number of raters when an outcome is nominal or ordinal. The kappa statistic is used as a measure of agreement. The statistic is highly sensitive to the distribution of the marginal totals and can produce unreliable results. Other statistics such as the proportion of concordance, maximum attainable kappa and prevalence and bias adjusted kappa should be considered to indicate how well the kappa statistic represents agreement in the data. Each kappa should be considered and interpreted based on the context of the data being analysed.
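These companion indices are straightforward to compute for a 2x2 table; a minimal sketch using the standard definitions of maximum attainable kappa, prevalence and bias adjusted kappa (PABAK), and the prevalence and bias indices follows:

```python
def agreement_diagnostics(a, b, c, d):
    """Indices for a 2x2 agreement table [[a, b], [c, d]],
    where a and d are agreements, b and c disagreements."""
    n = a + b + c + d
    po = (a + d) / n                      # proportion of concordance
    r1, r2 = (a + b) / n, (c + d) / n     # rater-1 marginals
    c1, c2 = (a + c) / n, (b + d) / n     # rater-2 marginals
    pe = r1 * c1 + r2 * c2                # chance agreement
    kappa = (po - pe) / (1 - pe)
    po_max = min(r1, c1) + min(r2, c2)    # best po given the margins
    kappa_max = (po_max - pe) / (1 - pe)  # maximum attainable kappa
    pabak = 2 * po - 1                    # prevalence/bias adjusted kappa
    return dict(po=po, kappa=kappa, kappa_max=kappa_max, pabak=pabak,
                prevalence_index=abs(a - d) / n,
                bias_index=abs(b - c) / n)

res = agreement_diagnostics(a=85, b=5, c=5, d=5)  # rare second category
```

With these invented counts, kappa is depressed by the skewed marginals even though raw agreement is 90%, which is exactly the behaviour the abstract warns about.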
5.
An assumed hypothetical consensus category corresponding to each case being classified provides a basis for assessing the reliability of judges. Equivalent judges are characterised by the joint probability distribution of the judge assignment and the consensus category. Estimates of the conditional probabilities of judge assignment given the consensus category, and of the consensus category given the judge assignments, are indices of reliability. All parameters can be estimated if the data include classifications of a number of cases by three or more judges. Restrictive assumptions are imposed to obtain models for data from classifications by two judges. Maximum likelihood estimation is discussed and illustrated by example for the case of three or more judges.
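A latent-class model of this kind can be fitted by an EM algorithm. The sketch below is a hypothetical implementation for equivalent judges under conditional independence given the consensus category, not the authors' exact estimation procedure:

```python
import numpy as np

def em_consensus(X, K, iters=200):
    """EM for a latent consensus-category model with equivalent judges.

    X : (N, J) integer array, X[i, j] = category judge j gives case i.
    K : number of categories (consensus takes values 0..K-1).
    Returns (pi, theta): pi[k] = P(consensus = k) and
    theta[k, l] = P(a judge assigns l | consensus = k).
    As the abstract notes, identifiable with three or more judges.
    """
    N, J = X.shape
    # counts[i, l] = how many judges assigned category l to case i
    counts = np.stack([(X == l).sum(axis=1) for l in range(K)], axis=1)
    pi = np.full(K, 1.0 / K)
    theta = 0.6 * np.eye(K) + 0.4 / K   # start near "judges track consensus"
    for _ in range(iters):
        # E-step: responsibility of each consensus category, in logs.
        logw = np.log(pi) + counts @ np.log(theta).T     # (N, K)
        logw -= logw.max(axis=1, keepdims=True)
        w = np.exp(logw)
        w /= w.sum(axis=1, keepdims=True)
        # M-step: update prevalence and judge-given-consensus probabilities.
        pi = w.mean(axis=0)
        theta = (w.T @ counts) / (J * w.sum(axis=0))[:, None]
    return pi, theta
```

The row `w[i]` after convergence estimates the conditional probabilities of the consensus category given the judge assignments for case i, the second reliability index the abstract describes.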
6.
Algebraic exact inference for rater agreement models
In recent years, a method for sampling from conditional distributions for categorical data has been presented by Diaconis and Sturmfels. Their algorithm is based on the algebraic theory of toric ideals, which is used to create so-called "Markov bases". The Diaconis–Sturmfels algorithm leads to a non-asymptotic Markov chain Monte Carlo algorithm for exact inference on some classes of models, such as log-linear models. In this paper we apply the Diaconis–Sturmfels algorithm to a set of models arising from the rater agreement problem, with special attention to the multi-rater case. The relevant Markov bases are explicitly computed and some results that simplify the computation are presented. An extended example on a real data set shows the wide applicability of this methodology.
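The Markov bases for the rater-agreement models are computed in the paper itself. As a simpler illustration of how a Markov basis drives the chain, the sketch below runs the walk for the plain two-way independence model, whose basis is known to consist of the elementary 2x2 swap moves:

```python
import numpy as np
from math import lgamma

def fiber_walk(table, steps=10000, seed=1):
    """Metropolis walk over nonnegative integer tables sharing the
    margins of `table`, using the 2x2 swap moves (+1,-1;-1,+1) that
    form a Markov basis for the two-way independence model. Target:
    the conditional (hypergeometric) distribution under independence,
    with weight proportional to prod_ij 1/x_ij!."""
    rng = np.random.default_rng(seed)
    x = table.astype(np.int64).copy()
    I, J = x.shape
    logw = lambda t: -sum(lgamma(v + 1) for v in t.ravel())
    lw = logw(x)
    for _ in range(steps):
        i1, i2 = rng.choice(I, size=2, replace=False)
        j1, j2 = rng.choice(J, size=2, replace=False)
        e = int(rng.choice([-1, 1]))
        y = x.copy()
        y[i1, j1] += e; y[i2, j2] += e
        y[i1, j2] -= e; y[i2, j1] -= e
        if y.min() >= 0:                     # move stays in the fiber
            lw_y = logw(y)
            if np.log(rng.random()) < lw_y - lw:
                x, lw = y, lw_y
        yield x

def chi2_stat(t):
    e = np.outer(t.sum(1), t.sum(0)) / t.sum()
    return ((t - e) ** 2 / e).sum()

# Exact conditional p-value for a small invented table:
obs = np.array([[10, 2], [3, 9]])
chain = list(fiber_walk(obs, steps=5000))
p_exact = np.mean([chi2_stat(t) >= chi2_stat(obs) for t in chain[500:]])
```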
7.
Category Distinguishability and Observer Agreement
It is common in the medical, biological, and social sciences for the categories into which an object is classified not to have a fully objective definition. Theoretically speaking, the categories are therefore not completely distinguishable. The practical extent of their distinguishability can be measured when two expert observers classify the same sample of objects. It is shown, under reasonable assumptions, that the matrix of joint classification probabilities is quasi-symmetric, and that the symmetric matrix component is non-negative definite. The degree of distinguishability between two categories is defined and is used to give a measure of overall category distinguishability. It is argued that the kappa measure of observer agreement is unsatisfactory as a measure of overall category distinguishability.
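As a hedged sketch, assuming the degree of distinguishability of categories i and j is defined as 1 - p_ij p_ji / (p_ii p_jj) (this should be checked against the article's own definition), the pairwise degrees and a simple eigenvalue check on the symmetric part of the joint classification matrix can be computed as follows:

```python
import numpy as np

def distinguishability(P):
    """Pairwise degree-of-distinguishability matrix under the assumed
    definition delta_ij = 1 - (p_ij * p_ji) / (p_ii * p_jj), plus the
    eigenvalues of the symmetric part (P + P.T)/2, used here as a
    simple proxy check of non-negative definiteness."""
    P = P / P.sum()
    S = (P + P.T) / 2
    eig = np.linalg.eigvalsh(S)          # should all be >= 0
    k = P.shape[0]
    delta = np.zeros((k, k))
    for i in range(k):
        for j in range(k):
            if i != j:
                delta[i, j] = 1 - P[i, j] * P[j, i] / (P[i, i] * P[j, j])
    return delta, eig
```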
8.
The authors describe a model-based kappa statistic for binary classifications which is interpretable in the same manner as Scott's pi and Cohen's kappa, yet does not suffer from the same flaws. They compare this statistic with the data-driven and population-based forms of Scott's pi in a population-based setting where many raters and subjects are involved, and inference regarding the underlying diagnostic procedure is of interest. The authors show that Cohen's kappa and Scott's pi seriously underestimate agreement between experts classifying subjects for a rare disease; in contrast, the new statistic is robust to changes in prevalence. The performance of the three statistics is illustrated with simulations and prostate cancer data.
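The contrast between the two chance corrections is easy to reproduce: Cohen's kappa uses the product of the two raters' marginals, while Scott's pi pools them. The sketch below, with invented counts for a rare condition, shows both dropping far below the raw agreement:

```python
import numpy as np

def kappa_and_pi(table):
    """Cohen's kappa and Scott's pi for a square agreement table."""
    P = table / table.sum()
    po = np.trace(P)
    r, c = P.sum(1), P.sum(0)
    pe_kappa = r @ c                     # Cohen: product of marginals
    pe_pi = (((r + c) / 2) ** 2).sum()   # Scott: pooled marginals
    return ((po - pe_kappa) / (1 - pe_kappa),
            (po - pe_pi) / (1 - pe_pi))

# Invented counts: 96% raw agreement, but the "diseased" cell is rare.
T = np.array([[940, 20], [20, 20]])
kappa, pi = kappa_and_pi(T)   # both near 0.48 despite po = 0.96
```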
9.
Cohen's kappa coefficient is traditionally used to quantify the degree of agreement between two raters on a nominal scale. Correlated kappas occur in many settings (e.g., repeated agreement by raters on the same individuals, or concordance between diagnostic tests and a gold standard) and often need to be compared. While different techniques are now available to model correlated kappa coefficients, they are generally not easy to implement in practice. The present paper describes a simple alternative method based on the bootstrap for comparing correlated kappa coefficients. The method is illustrated by examples and its type I error is studied using simulations. The method is also compared with the second-order generalized estimating equations and weighted least-squares methods.
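A minimal version of such a bootstrap resamples whole subjects, so that the correlation between the two kappas (same subjects, same reference rater) is preserved. The percentile interval below is one simple choice; the paper's exact procedure may differ:

```python
import numpy as np

def cohen_kappa(x, y):
    """Cohen's kappa for two integer-coded rating vectors."""
    K = int(max(x.max(), y.max())) + 1
    P = np.zeros((K, K))
    np.add.at(P, (x, y), 1)              # cross-tabulate the pair
    P /= P.sum()
    po = np.trace(P)
    pe = P.sum(1) @ P.sum(0)
    return (po - pe) / (1 - pe)

def bootstrap_kappa_diff(x, y1, y2, B=2000, seed=0):
    """Percentile bootstrap CI for kappa(x, y1) - kappa(x, y2),
    e.g. two diagnostic tests y1, y2 against a gold standard x,
    all observed on the same subjects."""
    rng = np.random.default_rng(seed)
    n = len(x)
    diffs = np.empty(B)
    for b in range(B):
        idx = rng.integers(0, n, n)      # resample whole subjects
        diffs[b] = (cohen_kappa(x[idx], y1[idx])
                    - cohen_kappa(x[idx], y2[idx]))
    return np.percentile(diffs, [2.5, 97.5])
```

An interval excluding zero indicates a difference between the two correlated kappas at the 5% level.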
10.
Agreement measures are designed to assess consistency between different instruments rating measurements of interest. When the individual responses are correlated, with a multilevel structure of nestings and clusters, traditional approaches are not readily available for estimating the inter- and intra-agreement in such complex multilevel settings. Our research stems from conformity evaluation between optometric devices with measurements on both eyes, equality tests of agreement in high myopic status between monozygotic and dizygotic twins, and assessment of reliability for different pathologists in dysplasia. In this paper, we focus on applying a Bayesian hierarchical correlation model, incorporating adjustment for explanatory variables and nested correlation structures, to assess the inter- and intra-agreement through correlations of random effects from various sources. This Bayesian generalized linear mixed-effects model (GLMM) is further compared with the approximate intra-class correlation coefficients and kappa measures obtained from the traditional Cohen's kappa statistic and the generalized estimating equations (GEE) approach. The results of the comparison studies reveal that the Bayesian GLMM provides a reliable and stable procedure for estimating inter- and intra-agreement simultaneously after adjusting for covariates and correlation structures, in marked contrast to Cohen's kappa and the GEE approach.
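The paper's Bayesian GLMM is beyond a short sketch, but the approximate intra-class correlation it is compared against can be illustrated with a frequentist random-intercept model. The data frame below is invented, and `statsmodels` is one possible tool:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical long-format data: one row per (subject, rater) measurement.
df = pd.DataFrame({
    "subject": np.repeat(np.arange(50), 3),
    "rater":   np.tile(["A", "B", "C"], 50),
    "y":       np.random.default_rng(0).normal(size=150),
})

# One-way random-effects model y_ij = mu + u_i + e_ij with a random
# intercept per subject; the approximate intra-class correlation is
# the share of total variance attributable to subjects.
model = sm.MixedLM.from_formula("y ~ 1", groups="subject", data=df)
fit = model.fit()
var_subject = float(fit.cov_re.iloc[0, 0])   # between-subject variance
var_resid = fit.scale                        # residual variance
icc = var_subject / (var_subject + var_resid)
```

Unlike the paper's Bayesian GLMM, this simple ICC does not adjust for covariates or nesting beyond the subject level, which is precisely the limitation the abstract highlights.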