Statistical inference of agreement coefficient between two raters with binary outcomes
Authors:Tetsuji Ohyama
Institution:Biostatistics Center, Kurume University, Fukuoka, Japan (ohyama_tetsuji@med.kurume-u.ac.jp)
Abstract:

Scott’s pi and Cohen’s kappa are widely used for assessing the degree of agreement between two raters with binary outcomes. However, many authors have pointed out their paradoxical behavior, which arises from their dependence on the prevalence of the trait under study. To overcome this limitation, Gwet [Computing inter-rater reliability and its variance in the presence of high agreement. British Journal of Mathematical and Statistical Psychology 61(1):29–48] proposed an alternative and more stable agreement coefficient referred to as the AC1. In this article, we discuss likelihood-based inference for the AC1 in the case of two raters with binary outcomes. The construction of confidence intervals is the main focus; hypothesis testing and sample size estimation are also presented.
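For readers unfamiliar with the coefficients named above, the following minimal sketch (not taken from the article) shows the standard two-rater, binary-outcome definitions of Cohen’s kappa, Scott’s pi, and Gwet’s AC1 computed from a 2×2 contingency table; the likelihood-based inference developed in the paper is not reproduced here.

```python
# Sketch, assuming the standard definitions: each coefficient has the form
# (pa - pe) / (1 - pe), differing only in the chance-agreement term pe.
def agreement_coefficients(n11, n10, n01, n00):
    n = n11 + n10 + n01 + n00
    pa = (n11 + n00) / n            # observed agreement
    p1 = (n11 + n10) / n            # rater 1's marginal proportion of "positive"
    p2 = (n11 + n01) / n            # rater 2's marginal proportion of "positive"
    pi_bar = (p1 + p2) / 2          # average marginal proportion

    pe_kappa = p1 * p2 + (1 - p1) * (1 - p2)    # Cohen's chance agreement
    pe_pi = pi_bar**2 + (1 - pi_bar)**2         # Scott's chance agreement
    pe_ac1 = 2 * pi_bar * (1 - pi_bar)          # Gwet's chance agreement

    kappa = (pa - pe_kappa) / (1 - pe_kappa)
    scott_pi = (pa - pe_pi) / (1 - pe_pi)
    ac1 = (pa - pe_ac1) / (1 - pe_ac1)
    return kappa, scott_pi, ac1

# Illustrative table with a highly prevalent trait: kappa and Scott's pi are
# pulled down by the skewed marginals, while AC1 remains comparatively stable.
print(agreement_coefficients(n11=80, n10=5, n01=5, n00=10))
```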
Keywords:AC1  Agreement  Inter-rater reliability  Kappa coefficient  Scott’s pi