Calculating power for the comparison of dependent κ-coefficients

Authors: Hung-Mo Lin, John M. Williamson, Stuart R. Lipsitz

Institutions: Penn State University, Hershey, USA; Centers for Disease Control and Prevention, Atlanta, USA; Medical University of South Carolina, Charleston, USA

Abstract: In the psychosocial and medical sciences, some studies are designed to assess the agreement between different raters and/or different instruments. Often the same sample will be used to compare the agreement between two or more assessment methods, for simplicity and to take advantage of the positive correlation of the ratings. Although sample size calculations have become an important element in the design of research projects, such methods for agreement studies are scarce. We adapt the generalized estimating equations approach for modelling dependent κ-statistics to estimate the sample size that is required for dependent agreement studies. We calculate the power based on a Wald test for the equality of two dependent κ-statistics. The Wald test statistic has a non-central χ²-distribution with a non-centrality parameter that can be estimated with minimal assumptions. The method proposed is useful for agreement studies with two raters and two instruments, and is easily extendable to multiple raters and multiple instruments. Furthermore, the method proposed allows for rater bias. Power calculations for binary ratings under various scenarios are presented. Analyses of two biomedical studies are used for illustration.
| |
Keywords: Agreement; Generalized estimating equations; κ; Power; Sample size
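The abstract states that the Wald statistic for testing equality of two dependent κ-statistics has a non-central χ²-distribution. As a minimal sketch (not the authors' code), the power of a 1-degree-of-freedom Wald test at level α, given a non-centrality parameter λ, can be computed from the standard-normal equivalence χ²₁(λ) = Z² with Z ~ N(√λ, 1); the per-subject non-centrality used in the comment is a hypothetical quantity that would come from the GEE variance estimate:

```python
import math
from statistics import NormalDist

def wald_power(lam: float, alpha: float = 0.05) -> float:
    """Power of a 1-df Wald chi-square test with non-centrality lam.

    Uses the identity chi2_1(lam) = Z**2 with Z ~ N(sqrt(lam), 1), so
    power = P(|Z| > z_{1 - alpha/2}) under the alternative hypothesis.
    """
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)   # sqrt of the chi2_1 critical value
    delta = math.sqrt(lam)
    return (1 - nd.cdf(z_crit - delta)) + nd.cdf(-z_crit - delta)

# The non-centrality parameter scales linearly with sample size n, so the
# n required for a target power is lam_target / lam_per_subject, where
# lam_per_subject is a hypothetical per-subject non-centrality that a
# GEE-based variance estimate would supply.
```

For example, `wald_power(7.849)` is approximately 0.80, the familiar non-centrality for 80% power at α = 0.05, and `wald_power(0.0)` reduces to the nominal level 0.05.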