Results 1 - 4 of 4
1.
medRxiv; 2024 Aug 07.
Article in English | MEDLINE | ID: mdl-39148822

ABSTRACT

Importance: Large language model (LLM) artificial intelligence (AI) systems have shown promise in diagnostic reasoning, but their utility in management reasoning, where there are no clear right answers, is unknown.
Objective: To determine whether LLM assistance improves physician performance on open-ended management reasoning tasks compared with conventional resources.
Design: Prospective, randomized controlled trial conducted from 30 November 2023 to 21 April 2024.
Setting: Multi-institutional study from Stanford University, Beth Israel Deaconess Medical Center, and the University of Virginia involving physicians from across the United States.
Participants: 92 practicing attending physicians and residents with training in internal medicine, family medicine, or emergency medicine.
Intervention: Five expert-developed clinical case vignettes were presented with multiple open-ended management questions and scoring rubrics created through a Delphi process. Physicians were randomized to use either GPT-4 via ChatGPT Plus in addition to conventional resources (e.g., UpToDate, Google) or conventional resources alone.
Main Outcomes and Measures: The primary outcome was the difference in total score between groups on expert-developed scoring rubrics. Secondary outcomes included domain-specific scores and time spent per case.
Results: Physicians using the LLM scored higher than those using conventional resources (mean difference 6.5%, 95% CI 2.7-10.2, p<0.001). Significant improvements were seen in the management decisions (6.1%, 95% CI 2.5-9.7, p=0.001), diagnostic decisions (12.1%, 95% CI 3.1-21.0, p=0.009), and case-specific (6.2%, 95% CI 2.4-9.9, p=0.002) domains. GPT-4 users spent more time per case (mean difference 119.3 seconds, 95% CI 17.4-221.2, p=0.02). There was no significant difference between GPT-4-augmented physicians and GPT-4 alone (-0.9%, 95% CI -9.0 to 7.2, p=0.8).
Conclusions and Relevance: LLM assistance improved physician management reasoning compared with conventional resources, with particular gains in contextual and patient-specific decision-making. These findings indicate that LLMs can augment management decision-making in complex cases.
Trial registration: ClinicalTrials.gov Identifier: NCT06208423; https://classic.clinicaltrials.gov/ct2/show/NCT06208423.
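The abstract reports the primary outcome as a between-group mean difference with a 95% CI but does not state which statistical model was used. The minimal sketch below shows one conventional way such a comparison could be computed (a Welch two-sample comparison of rubric scores); the score arrays are hypothetical placeholders, not the trial's data.

```python
# Sketch only: Welch-style comparison of rubric scores between two groups.
# The trial's actual analysis is not described in the abstract; the numbers
# below are illustrative placeholders, not study data.
import numpy as np
from scipy import stats

llm = np.array([72.0, 68.5, 80.2, 75.1, 69.8])           # hypothetical rubric scores (%)
conventional = np.array([65.3, 70.1, 66.8, 62.4, 71.0])   # hypothetical rubric scores (%)

diff = llm.mean() - conventional.mean()

# Welch (unequal-variance) standard error and degrees of freedom
v1 = llm.var(ddof=1) / llm.size
v2 = conventional.var(ddof=1) / conventional.size
se = np.sqrt(v1 + v2)
dof = (v1 + v2) ** 2 / (v1 ** 2 / (llm.size - 1) + v2 ** 2 / (conventional.size - 1))

ci_low, ci_high = diff + np.array([-1, 1]) * stats.t.ppf(0.975, dof) * se
t_stat, p_value = stats.ttest_ind(llm, conventional, equal_var=False)

print(f"mean difference {diff:.1f}% (95% CI {ci_low:.1f} to {ci_high:.1f}), p={p_value:.3f}")
```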

2.
BMJ Qual Saf; 30(12): 1002-1009, 2021 12.
Article in English | MEDLINE | ID: mdl-34417335

ABSTRACT

BACKGROUND: Errors in reasoning are a common cause of diagnostic error, yet performance is difficult to improve, partly because providers receive little feedback on their diagnostic performance. Examining how to provide consistent feedback and enable continuous improvement may offer novel insights into diagnostic performance.
METHODS: We developed a model for improving diagnostic performance through feedback using a six-step qualitative research process: a review of existing models from within and outside of medicine, a survey, semistructured interviews with individuals working in and outside of medicine, development of the new model, an interdisciplinary consensus meeting, and refinement of the model.
RESULTS: We applied theory and knowledge from other fields to conceptualise learning and comparison and to translate that knowledge into an applied diagnostic context. The resulting model, the Diagnosis Learning Cycle, illustrates the need for clinicians to receive feedback on both their confidence and their reasoning in a diagnosis, and to be able to seamlessly compare diagnostic hypotheses with outcomes. This information would be stored in a repository so that it remains accessible. Such a process would standardise diagnostic feedback and help providers learn from their practice and improve diagnostic performance. This model adds to existing models of diagnosis by including a detailed picture of diagnostic reasoning and the elements required to improve outcomes and calibration.
CONCLUSION: A consistent, standard programme of feedback that includes representations of clinicians' confidence and reasoning is a common element in non-medical fields that could be applied to medicine. Adapting this approach to diagnosis in healthcare is a promising next step; the information must be stored reliably and accessed consistently. Next steps include testing the Diagnosis Learning Cycle in clinical settings.
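The Diagnosis Learning Cycle is described only conceptually; the paper does not specify any data model. Purely as a hypothetical illustration of the kind of record a diagnostic-feedback repository might hold (diagnosis, confidence, reasoning, and eventual outcome), a minimal sketch follows; all field and function names are assumptions, not part of the published model.

```python
# Hypothetical sketch of a diagnostic-feedback record and a crude calibration
# summary, loosely following the Diagnosis Learning Cycle's emphasis on storing
# confidence and reasoning and comparing hypotheses with outcomes.
from dataclasses import dataclass
from typing import Optional

@dataclass
class DiagnosticRecord:
    case_id: str
    working_diagnosis: str
    confidence: float                        # clinician's stated confidence, 0.0-1.0
    reasoning: str                           # brief rationale recorded at the time of diagnosis
    final_diagnosis: Optional[str] = None    # filled in once follow-up is available

    def is_resolved(self) -> bool:
        return self.final_diagnosis is not None

    def was_correct(self) -> bool:
        return self.is_resolved() and self.working_diagnosis == self.final_diagnosis

def calibration_summary(records: list[DiagnosticRecord]) -> tuple[float, float]:
    """Return (mean confidence, accuracy) over resolved cases as crude calibration feedback."""
    resolved = [r for r in records if r.is_resolved()]
    if not resolved:
        return (0.0, 0.0)
    mean_conf = sum(r.confidence for r in resolved) / len(resolved)
    accuracy = sum(r.was_correct() for r in resolved) / len(resolved)
    return (mean_conf, accuracy)
```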


Subjects
Delivery of Health Care, Diagnostic Errors, Feedback, Humans, Qualitative Research
3.
Diagnosis (Berl); 7(3): 307-312, 2020 08 27.
Article in English | MEDLINE | ID: mdl-32697754

ABSTRACT

Teamwork is fundamental to high-quality clinical reasoning and diagnosis, and many different individuals are involved in the diagnostic process. However, there are substantial gaps in how these individuals work as members of teams; often, work is done in parallel rather than in an integrated, collaborative fashion. To understand how individuals work together to create knowledge in the clinical context, it is important to consider social cognitive theories, including situated cognition and distributed cognition. In this article, the authors describe these existing gaps, outline the relevant theories and common structures of teams in health care, and offer ideas for future study and improvement.


Subjects
Clinical Competence, Clinical Reasoning, Cognition, Delivery of Health Care, Humans
4.
J Hosp Med; 14(10): 622-625, 2019 10 01.
Article in English | MEDLINE | ID: mdl-31433779

ABSTRACT

Appropriate calibration of clinical reasoning is critical to becoming a competent physician, and the lack of follow-up after transitions of care can be a barrier to calibration. This study aimed to implement structured feedback on clinical reasoning for residents performing overnight admissions, measure the frequency of diagnostic changes, and determine how feedback affects learners' self-efficacy. Trainees shared feedback via a structured form within their electronic health record's secure messaging system, and the forms were analyzed for diagnostic changes. Surveys evaluated comfort with sharing feedback, self-efficacy in identifying and mitigating the negative effects of cognitive biases, and the perceived educational value of night admissions; all of these improved after implementation. Analysis of 544 forms revealed a 43.7% diagnostic change rate across the transition from night-shift to day-shift providers; of the changes made, 29% (12.7% of cases overall) were major. This study suggests that structured feedback on clinical reasoning for overnight admissions is a promising approach to improving residents' diagnostic calibration, particularly given how often diagnostic changes occur.
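As a quick arithmetic check of the reported rates, the sketch below reconstructs the approximate counts behind the 43.7% change rate and the 12.7% overall major-change rate; the intermediate counts are rounded reconstructions, not figures reported in the study.

```python
# Arithmetic check on the reported rates (percentages taken from the abstract;
# the derived counts are approximate reconstructions, not reported figures).
total_forms = 544
change_rate = 0.437          # 43.7% of reviewed admissions had a diagnostic change
major_share = 0.29           # 29% of those changes were major

changed = total_forms * change_rate   # ~238 cases with any diagnostic change
major = changed * major_share         # ~69 cases with a major change
print(f"changed ≈ {changed:.0f}, major ≈ {major:.0f}, "
      f"major as share of all cases ≈ {major / total_forms:.1%}")  # ≈ 12.7%
```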


Subjects
Clinical Decision-Making, Internal Medicine/education, Internship and Residency/organization & administration, Patient Handoff/organization & administration, Attitude of Health Personnel, Clinical Competence, Diagnostic Errors/prevention & control, Feedback, Humans, Patient Handoff/standards, Prospective Studies, Self Efficacy