A method for measuring interrater agreement on checklists.
Sinacore, J M; Connell, K J; Olthoff, A J; Friedman, M H; Gecht, M R.
Affiliation
  • Sinacore JM; University of Illinois at Chicago, Department of Family Medicine 60612-7248, USA. Sinacore@uic.edu
Eval Health Prof ; 22(2): 221-34, 1999 Jun.
Article in En | MEDLINE | ID: mdl-10557857
A method for measuring interrater agreement on checklists is presented. This technique does not assign individual scores to raters, but computes a single agreement score from the concordance of their check mark configurations. An overall coefficient of agreement, called phi, is derived. The agreement coefficient that is expected by chance and the statistical significance of phi are determined by statistical simulation. Despite the dichotomous nature of the checklist agreement (raters either agree or disagree on items), we show that the binomial distribution does not provide a means for testing the statistical significance of phi. A medical education study is used to illustrate the phi methodology.
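The abstract's core idea, a single agreement score computed from the concordance of check mark configurations, with chance-expected agreement and significance obtained by statistical simulation rather than the binomial distribution, can be sketched as follows. This is a minimal illustration, not the paper's actual phi: it assumes a simplified phi equal to the proportion of items on which two raters' check marks match, and a Monte Carlo null in which each simulated rater keeps their total number of check marks but places them on randomly chosen items.

```python
import random

def phi_agreement(marks_a, marks_b):
    """Simplified stand-in for phi: the proportion of checklist items
    on which two raters' check marks agree (both checked or both blank)."""
    assert len(marks_a) == len(marks_b)
    matches = sum(a == b for a, b in zip(marks_a, marks_b))
    return matches / len(marks_a)

def simulate_phi(marks_a, marks_b, n_sims=10_000, seed=0):
    """Monte Carlo estimate of the chance-expected agreement and a
    p-value for the observed agreement, mirroring the abstract's use
    of statistical simulation instead of the binomial distribution.

    Each simulated rater keeps their marginal count of check marks but
    scatters them over the items at random (a hypothetical null model,
    not necessarily the one used in the paper)."""
    rng = random.Random(seed)
    n = len(marks_a)
    observed = phi_agreement(marks_a, marks_b)
    sims = []
    for _ in range(n_sims):
        sim_a = [1] * sum(marks_a) + [0] * (n - sum(marks_a))
        sim_b = [1] * sum(marks_b) + [0] * (n - sum(marks_b))
        rng.shuffle(sim_a)
        rng.shuffle(sim_b)
        sims.append(phi_agreement(sim_a, sim_b))
    expected = sum(sims) / n_sims            # chance-expected agreement
    p_value = sum(s >= observed for s in sims) / n_sims
    return observed, expected, p_value

# Two raters' check marks on a 10-item checklist (1 = checked)
a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 0]
b = [1, 1, 0, 1, 1, 1, 0, 0, 1, 0]
obs, exp, p = simulate_phi(a, b)
```

The simulation makes explicit why a binomial test is inadequate here: the per-item probability of a match depends on both raters' marginal check rates, so the null distribution of the agreement score must be generated empirically.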
Collection: 01-internacional
Database: MEDLINE
Main subject: Physical Examination / Program Evaluation / Education, Medical, Undergraduate / Educational Measurement
Type of study: Evaluation studies
Limits: Humans
Language: English
Journal: Eval Health Prof
Year: 1999
Document type: Article
Affiliation country: United States
Country of publication: United States