Full Resolution Simulation for Evaluation of Critical Care Imaging Interpretation; Part 2: Random Effects Reveal the Interplay Between Case Difficulty, Resident Competence, and the Training Environment.
Sistrom, Chris L; Slater, Roberta M; Rajderkar, Dhanashree A; Grajo, Joseph R; Rees, John H; Mancuso, Anthony A.
Affiliation
  • Sistrom CL; University of Florida, Gainesville, Florida. Electronic address: sistrc@radiology.ufl.edu.
  • Slater RM; Department of Radiology University of Florida Health Center, Gainesville, Florida.
  • Rajderkar DA; Department of Radiology University of Florida Health Center, Gainesville, Florida.
  • Grajo JR; Department of Radiology University of Florida Health Center, Gainesville, Florida.
  • Rees JH; Department of Radiology University of Florida Health Center, Gainesville, Florida.
  • Mancuso AA; Department of Radiology University of Florida Health Center, Gainesville, Florida.
Acad Radiol; 27(7): 1016-1024, Jul 2020.
Article in English | MEDLINE | ID: mdl-32402787
RATIONALE AND OBJECTIVES: To further characterize empirical data from a full-resolution simulation of critical care imaging coupled with post hoc grading of residents' interpretations by senior radiologists, and to present results from estimating the random-effects terms in a comprehensive mixed (hierarchical) regression model.

MATERIALS AND METHODS: After accounting for the 9 fixed effects detailed in Part 1 of this paper, we estimated normally distributed random effects, expressed as score offsets for each case, resident, program, and grader.

RESULTS: The fixed effects alone explained 8.8% of score variation; adding the random effects increased the model's explanatory power to 36% of score variation. As quantified by the intraclass correlation coefficient (ICC = 28.5%; CI: 25.1-31.6), the largest share of score variation is directly attributable to the case at hand. This "case difficulty" measure has a reliability of 95%. Individual residents accounted for much of the remaining score variation (ICC = 5.3%; CI: 4.6-5.9) after adjusting for all other effects, including level of training; the reliability of this "resident competence" measure is 82%. The influence of the residency training program on scores was small (ICC = 1.1%; CI: 0.42-1.7). Although a few programs with significantly high or low scores can be identified, this measure's reliability of 73% counsels caution. At the same time, the low intraprogram variation is very encouraging. Variation attributable to differences between graders was minimal (ICC = 0.58%; CI: 0.0-1.2), which reassures us that the scoring method is reliable, consistent, and likely extensible.

CONCLUSION: Full-resolution simulation-based evaluation of critical care radiology interpretation is being conducted remotely and efficiently at large scale. A comprehensive mixed model of the resulting scores reliably quantifies case difficulty and resident competence.
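The ICC values reported above follow directly from the variance components of the fitted mixed model: each random effect's ICC is its variance divided by the total variance across all components. The sketch below illustrates that calculation; the variance values are hypothetical placeholders chosen only to roughly mirror the reported ICC magnitudes, not the study's actual estimates.

```python
# Sketch of ICC computation from mixed-model variance components.
# All numbers here are hypothetical illustrations, not study data.

def icc(component: float, components: dict) -> float:
    """ICC for one random effect: its variance over the total variance."""
    total = sum(components.values())
    return component / total

# Hypothetical variance components for each random-effect grouping
variance = {
    "case": 2.85,
    "resident": 0.53,
    "program": 0.11,
    "grader": 0.058,
    "residual": 6.45,
}

for term, var in variance.items():
    if term != "residual":
        print(f"ICC({term}) = {icc(var, variance):.3f}")
```

In a real analysis the variance components would come from fitting the hierarchical model itself (e.g., via restricted maximum likelihood), and confidence intervals for each ICC would be obtained from the model rather than computed pointwise as here.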

Full text: 1 Database: MEDLINE Main subject: Radiology / Internship and Residency Study type: Clinical_trials / Prognostic_studies Limits: Humans Language: En Year of publication: 2020 Document type: Article