Statistical inference of agreement coefficient between two raters with binary outcomes |
| |
Authors: | Tetsuji Ohyama |
| |
Institution: | 1. Biostatistics Center, Kurume University, Fukuoka, Japanohyama_tetsuji@med.kurume-u.ac.jp |
| |
Abstract: | Scott’s pi and Cohen’s kappa are widely used for assessing the degree of agreement between two raters with binary outcomes. However, many authors have pointed out their paradoxical behavior, which stems from their dependence on the prevalence of the trait under study. To overcome this limitation, Gwet [Computing inter-rater reliability and its variance in the presence of high agreement. British Journal of Mathematical and Statistical Psychology 61(1):29–48] proposed an alternative and more stable agreement coefficient referred to as the AC1. In this article, we discuss likelihood-based inference for the AC1 in the case of two raters with binary outcomes. Construction of confidence intervals is mainly discussed. In addition, hypothesis testing and sample size estimation are also presented. |
| |
Keywords: | AC1; Agreement; Inter-rater reliability; Kappa coefficient; Scott’s pi |
|
|
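The coefficients named in the abstract all share the form (observed agreement − chance agreement) / (1 − chance agreement) but differ in how chance agreement is computed. As a minimal illustrative sketch (not the paper's own method), the standard formulas for two raters with binary outcomes can be written as follows; the function name and example table are hypothetical:

```python
import numpy as np

def agreement_coefficients(table):
    """Observed agreement, Scott's pi, Cohen's kappa, and Gwet's AC1
    from a 2x2 contingency table of two raters' binary ratings.
    Illustrative sketch using the standard textbook formulas."""
    p = np.asarray(table, dtype=float)
    p /= p.sum()
    po = p[0, 0] + p[1, 1]            # observed agreement
    row = p.sum(axis=1)               # rater 1 marginals
    col = p.sum(axis=0)               # rater 2 marginals
    # Cohen's kappa: chance agreement from each rater's own marginals
    pe_kappa = row[0] * col[0] + row[1] * col[1]
    kappa = (po - pe_kappa) / (1 - pe_kappa)
    # Scott's pi: chance agreement from the averaged marginal prevalence
    pi1 = (row[0] + col[0]) / 2
    pe_pi = pi1 ** 2 + (1 - pi1) ** 2
    scott_pi = (po - pe_pi) / (1 - pe_pi)
    # Gwet's AC1: chance agreement 2*pi*(1-pi), stable at extreme prevalence
    pe_ac1 = 2 * pi1 * (1 - pi1)
    ac1 = (po - pe_ac1) / (1 - pe_ac1)
    return po, scott_pi, kappa, ac1
```

A table with very high prevalence of one category, e.g. `[[90, 5], [5, 0]]`, reproduces the paradox mentioned in the abstract: observed agreement is 0.90, yet kappa and pi are near zero because their chance-agreement terms approach 1, while AC1 remains high.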