Results 1 - 3 of 3
1.
Acad Med; 99(4S Suppl 1): S42-S47, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38166201

ABSTRACT

Medical education assessment faces multifaceted challenges, including data complexity, resource constraints, bias, feedback translation, and educational continuity. Traditional approaches often fail to adequately address these issues, creating stressful and inequitable learning environments. This article introduces the concept of precision education, a data-driven paradigm aimed at personalizing the educational experience for each learner. It explores how artificial intelligence (AI), including its subsets machine learning (ML) and deep learning (DL), can augment this model to tackle the inherent limitations of traditional assessment methods.

AI can enable proactive data collection, offering consistent and objective assessments while reducing resource burdens. It has the potential to revolutionize not only competency assessment but also participatory interventions, such as personalized coaching and predictive analytics for at-risk trainees. The article also discusses key challenges and ethical considerations in integrating AI into medical education, such as algorithmic transparency, data privacy, and the potential for bias propagation.

AI's capacity to process large datasets and identify patterns allows for a more nuanced, individualized approach to medical education. It offers promising avenues not only to improve the efficiency of educational assessments but also to make them more equitable. However, the ethical and technical challenges must be diligently addressed. The article concludes that embracing AI in medical education assessment is a strategic move toward creating a more personalized, effective, and fair educational landscape. This necessitates collaborative, multidisciplinary research and ethical vigilance to ensure that the technology serves educational goals while upholding social justice and ethical integrity.
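The article describes predictive analytics for at-risk trainees only in general terms. The sketch below illustrates one plausible minimal form of such analytics; the feature names, training data, labels, and 0.5 threshold are all hypothetical, not drawn from the article.

```python
# Illustrative only: a minimal "at-risk trainee" predictor of the kind
# the abstract gestures at. All features, labels, and the flagging
# threshold are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical per-trainee features: exam z-score, mean milestone
# rating, and count of low-rated workplace-based assessments.
X = np.array([
    [-1.2, 2.0, 5],
    [ 0.3, 3.5, 1],
    [ 0.9, 4.0, 0],
    [-0.7, 2.5, 4],
])
y = np.array([1, 0, 0, 1])  # 1 = later needed remediation (hypothetical)

model = LogisticRegression().fit(X, y)
risk = model.predict_proba(X)[:, 1]  # probability a trainee is at risk
flagged = risk > 0.5                 # flag for early, personalized coaching
print(flagged)
```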


Subjects
Education, Medical; Mentoring; Humans; Artificial Intelligence; Educational Status; Educational Measurement
2.
J Gen Intern Med; 37(9): 2230-2238, 2022 Jul.
Article in English | MEDLINE | ID: mdl-35710676

ABSTRACT

BACKGROUND: Residents receive infrequent feedback on their clinical reasoning (CR) documentation. While machine learning (ML) and natural language processing (NLP) have been used to assess CR documentation in standardized cases, no studies have described similar use in the clinical environment.

OBJECTIVE: The authors developed an ML model for automated assessment of CR documentation quality in residents' admission notes and validated it using Kane's framework.

DESIGN, PARTICIPANTS, MAIN MEASURES: Internal medicine residents' and subspecialty fellows' admission notes at one medical center from July 2014 to March 2020 were extracted from the electronic health record. Using a validated CR documentation rubric, the authors rated 414 notes for the ML development dataset. Notes were truncated to isolate the relevant portion; NLP software (cTAKES) extracted disease/disorder named entities, and human review generated CR terms. The final model had three input variables and classified notes as demonstrating low- or high-quality CR documentation. The ML model was applied to a retrospective dataset (9591 notes) for human validation and data analysis. Reliability between human and ML ratings was assessed on 205 of these notes with Cohen's kappa. CR documentation quality by post-graduate year (PGY) was evaluated with the Mantel-Haenszel test of trend.

KEY RESULTS: The top-performing logistic regression model had an area under the receiver operating characteristic curve of 0.88, a positive predictive value of 0.68, and an accuracy of 0.79. Cohen's kappa was 0.67. Of the 9591 notes, 31.1% demonstrated high-quality CR documentation; quality increased from 27.0% (PGY1) to 31.0% (PGY2) to 39.0% (PGY3) (p < .001 for trend). Validity evidence was collected in each domain of Kane's framework (scoring, generalization, extrapolation, and implications).

CONCLUSIONS: The authors developed and validated a high-performing ML model that classifies CR documentation quality in resident admission notes in the clinical environment, a novel application of ML and NLP with many potential use cases.
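The abstract names the model family (logistic regression on three input variables) and the metrics reported (AUROC, PPV, accuracy, Cohen's kappa), but not the feature definitions. A minimal sketch of that pipeline, using synthetic feature values in place of the study's cTAKES-derived variables and scoring on the training set purely for illustration:

```python
# Sketch of the abstract's evaluation pipeline: logistic regression on
# three note-level variables, reported via AUROC, PPV, accuracy, and
# human-vs-model Cohen's kappa. Feature values here are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import (accuracy_score, cohen_kappa_score,
                             precision_score, roc_auc_score)

rng = np.random.default_rng(0)
X = rng.normal(size=(414, 3))  # three input variables per note (synthetic)
y = (X.sum(axis=1) + rng.normal(size=414) > 0).astype(int)  # 1 = high-quality CR

model = LogisticRegression().fit(X, y)
pred = model.predict(X)
prob = model.predict_proba(X)[:, 1]

print("AUROC:   ", roc_auc_score(y, prob))
print("PPV:     ", precision_score(y, pred))   # positive predictive value
print("Accuracy:", accuracy_score(y, pred))
# Agreement between human ratings and model ratings on a review subset:
print("Kappa:   ", cohen_kappa_score(y, pred))
```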


Subjects
Clinical Reasoning; Documentation; Electronic Health Records; Humans; Machine Learning; Natural Language Processing; Reproducibility of Results; Retrospective Studies
3.
J Gen Intern Med; 37(3): 507-512, 2022 Feb.
Article in English | MEDLINE | ID: mdl-33945113

ABSTRACT

BACKGROUND: Residents and fellows receive little feedback on their clinical reasoning documentation. Barriers include the lack of a shared mental model and variability in the reliability and validity of existing assessment tools. Among existing tools, the IDEA assessment tool offers a robust assessment of clinical reasoning documentation across four elements (interpretive summary, differential diagnosis, explanation of reasoning for the lead diagnosis, and explanation of alternative diagnoses) but lacks descriptive anchors, threatening its reliability.

OBJECTIVE: Our goal was to develop a valid and reliable assessment tool for clinical reasoning documentation, building on the IDEA assessment tool.

DESIGN, PARTICIPANTS, AND MAIN MEASURES: The Revised-IDEA assessment tool was developed by four clinician educators through iterative review of admission notes written by medicine residents and fellows, and it was subsequently piloted with additional faculty to ensure response process validity. A random sample of 252 notes from July 2014 to June 2017, written by 30 trainees across several chief complaints, was rated. Three raters rated 20% of the notes to demonstrate internal structure validity. A quality cut-off score was determined using Hofstee standard setting.

KEY RESULTS: The Revised-IDEA assessment tool includes the same four domains as the IDEA assessment tool, with more detailed descriptive prompts, new Likert scale anchors, and a score range of 0-10. Intraclass correlation for the notes rated by all three raters was high, 0.84 (95% CI 0.74-0.90). Scores ≥6 were determined to demonstrate high-quality clinical reasoning documentation. Only 53% of notes (134/252) were high quality.

CONCLUSIONS: The Revised-IDEA assessment tool is reliable and easy to use for feedback on clinical reasoning documentation in resident and fellow admission notes, with descriptive anchors that facilitate a shared mental model for feedback.
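A small sketch of how the Revised-IDEA cutoff might be applied in code. Only the four domains, the 0-10 total, and the ≥6 high-quality threshold come from the abstract; the per-domain point allocations and field names below are hypothetical.

```python
# Illustrative Revised-IDEA scoring record. The abstract specifies four
# domains, a 0-10 total, and a Hofstee-derived cutoff of >=6; how the
# 10 points split across domains is assumed here, not taken from the paper.
from dataclasses import dataclass

@dataclass
class RevisedIdeaScore:
    interpretive_summary: int      # hypothetical subscore
    differential_diagnosis: int    # hypothetical subscore
    explains_lead_diagnosis: int   # hypothetical subscore
    explains_alternatives: int     # hypothetical subscore

    def total(self) -> int:
        s = (self.interpretive_summary + self.differential_diagnosis
             + self.explains_lead_diagnosis + self.explains_alternatives)
        assert 0 <= s <= 10, "Revised-IDEA totals range from 0 to 10"
        return s

    def is_high_quality(self) -> bool:
        return self.total() >= 6   # cutoff from the study's standard setting

note = RevisedIdeaScore(2, 2, 1, 1)
print(note.total(), note.is_high_quality())  # 6 True
```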


Subjects
Clinical Competence; Clinical Reasoning; Documentation; Feedback; Humans; Models, Psychological; Reproducibility of Results