Natural Language Processing of Learners' Evaluations of Attendings to Identify Professionalism Lapses.
Eval Health Prof. 2023 Sep;46(3):225-232.
Article in English | MEDLINE | ID: mdl-36826805
ABSTRACT
Unprofessional faculty behaviors negatively impact the well-being of trainees yet are infrequently reported through established reporting systems. Manual review of narrative faculty evaluations provides an additional avenue for identifying unprofessional behavior but is time- and resource-intensive, and therefore of limited value for identifying and remediating faculty with professionalism concerns. Natural language processing (NLP) techniques may provide a mechanism for streamlining manual review processes to identify faculty professionalism lapses. In this retrospective cohort study of 15,432 narrative evaluations of medical faculty by medical trainees, we identified professionalism lapses using automated analysis of the text of faculty evaluations. We used multiple NLP approaches to develop and validate several classification models, which were evaluated primarily on positive predictive value (PPV) and secondarily on calibration. An NLP model combining sentiment analysis (quantifying the subjectivity of the text) with keywords (using an ensemble technique) had the best overall performance, with a PPV of 49% (CI 38%-59%). These findings highlight how NLP can be used to screen narrative evaluations of faculty to identify unprofessional faculty behaviors. Incorporating NLP into faculty review workflows enables a more focused manual review of comments, providing a supplemental mechanism to identify faculty professionalism lapses.
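The ensemble approach described above can be illustrated with a minimal sketch. This is not the authors' code: the keyword list, the subjectivity proxy, and the flagging threshold are all hypothetical placeholders, and the sketch simply shows the shape of the method (flag an evaluation if either a keyword screen or a sentiment-style score fires, then measure PPV against manually labeled comments).

```python
# Illustrative sketch only; keyword lists, lexicon, and threshold are
# hypothetical, not taken from the study.

UNPROFESSIONAL_KEYWORDS = {"belittle", "demean", "yell", "inappropriate", "disrespect"}
NEGATIVE_LEXICON = {"rude", "hostile", "dismissive", "unfair", "angry"}


def subjectivity_score(text: str) -> float:
    """Toy proxy for a sentiment/subjectivity score: the fraction of
    words drawn from a negative-sentiment lexicon."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    if not words:
        return 0.0
    return sum(w in NEGATIVE_LEXICON for w in words) / len(words)


def flag_evaluation(text: str, threshold: float = 0.05) -> bool:
    """Ensemble screen: flag a narrative evaluation for manual review if
    EITHER the keyword screen OR the sentiment score exceeds threshold."""
    words = {w.strip(".,!?") for w in text.lower().split()}
    keyword_hit = bool(words & UNPROFESSIONAL_KEYWORDS)
    return keyword_hit or subjectivity_score(text) >= threshold


def ppv(predictions: list[bool], labels: list[bool]) -> float:
    """Positive predictive value = true positives / all predicted positives."""
    tp = sum(p and y for p, y in zip(predictions, labels))
    pred_pos = sum(predictions)
    return tp / pred_pos if pred_pos else float("nan")


# Tiny usage example with made-up evaluations and manual-review labels.
evals = [
    ("He would yell at residents daily.", True),
    ("Great teacher, very supportive.", False),
    ("Often rude and dismissive to students.", True),
]
predictions = [flag_evaluation(text) for text, _ in evals]
print(ppv(predictions, [label for _, label in evals]))  # 1.0 on this toy set
```

In practice the screen is tuned for high PPV so that the (much smaller) flagged subset can be manually reviewed, which is the workflow the abstract describes.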
Keywords
Full text: 1
Database: MEDLINE
Main subject: Students, Medical / Professionalism
Study type: Observational_studies / Prognostic_studies / Risk_factors_studies
Limit: Humans
Language: English
Publication year: 2023
Document type: Article