
Inter-rater reliability

Inter-rater reliability, R-code / 24.12.2014

Inter-rater reliability

If you want to obtain an inter-rater reliability measure for dichotomous ratings by more than two raters, where not all raters rated all items, Fleiss and Cuzick (1979) is the reference you will find. For example, we asked about 60 researchers (more than two raters) whether they consider a given model a process model (dichotomous rating), and not everyone was familiar with every model (not all raters rated all items). Fleiss and Cuzick proposed a kappa-type measure that ranges from a data-dependent minimum up to 1. Below is a function to calculate their kappa measure in R.
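
The following is a minimal sketch of the Fleiss-Cuzick estimator (the function name fleiss_cuzick_kappa and the argument names are illustrative, not from the original paper). It assumes that for each rated item you know n, the number of raters who rated it, and x, the number of positive ratings it received:

```r
# Fleiss-Cuzick (1979) kappa for dichotomous ratings with
# unequal numbers of raters per item.
#
#   n: vector, number of raters who rated each item (n_i)
#   x: vector, number of positive ratings per item  (x_i)
#
# Estimator:
#   kappa = 1 - [ sum_i x_i (n_i - x_i) / n_i ] / [ N (nbar - 1) pbar qbar ]
# where N is the number of items, nbar the mean number of raters per
# item, pbar = sum(x) / sum(n) the overall proportion of positive
# ratings, and qbar = 1 - pbar.
fleiss_cuzick_kappa <- function(n, x) {
  stopifnot(length(n) == length(x), all(x >= 0), all(x <= n))
  N     <- length(n)         # number of items
  n_bar <- mean(n)           # average number of raters per item
  p_bar <- sum(x) / sum(n)   # overall proportion of positive ratings
  q_bar <- 1 - p_bar
  # within-item disagreement, weighted by item size
  within <- sum(x * (n - x) / n)
  1 - within / (N * (n_bar - 1) * p_bar * q_bar)
}
```

A quick illustration with made-up data, five models rated by varying numbers of raters:

```r
n <- c(10, 8, 12, 9, 10)   # raters per model
x <- c( 9, 2, 11, 8,  1)   # "yes, a process model" votes per model
fleiss_cuzick_kappa(n, x)  # approx. 0.50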