Surg Obes Relat Dis. 2024 Jul;20(7):609-613.
Article in English | MEDLINE | ID: mdl-38782611

ABSTRACT

BACKGROUND: The American Society for Metabolic and Bariatric Surgery (ASMBS) textbook serves as a comprehensive resource for bariatric surgery, covering recent advancements and clinical questions. Testing artificial intelligence (AI) engines against this authoritative source ensures accurate and up-to-date information and provides insight into its potential implications for surgical education and training.
OBJECTIVES: To assess the quality of, and compare, different large language models' (LLMs') responses to textbook questions relating to bariatric surgery.
SETTING: Remote.
METHODS: Prompts entered into the LLMs were multiple-choice questions from "The ASMBS Textbook of Bariatric Surgery, Second Edition." The prompts were queried into 3 LLMs: OpenAI's ChatGPT-4, Microsoft's Bing, and Google's Bard. The generated responses were assessed on overall accuracy, the number of correct answers by subject matter, and the number of correct answers by question type. Statistical analysis was performed to determine the number of correct responses per LLM per category.
RESULTS: Two hundred questions were used to query the AI models. There was an overall significant difference in answer accuracy: 83.0% for ChatGPT-4, followed by Bard (76.0%) and Bing (65.0%). Subgroup analysis revealed a significant difference between the models' performance across question categories, with ChatGPT-4 demonstrating the highest proportion of correct answers in questions related to treatment and surgical procedures (83.1%) and complications (91.7%). There was also a significant difference in performance across question types, with ChatGPT-4 showing superior performance on inclusionary questions. Bard and Bing were unable to answer certain questions, whereas ChatGPT-4 left no questions unanswered.
CONCLUSIONS: LLMs, particularly ChatGPT-4, demonstrated promising accuracy when answering clinical questions related to bariatric surgery. Continued AI advancement and research are required to elucidate the potential applications of LLMs in training and education.
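
As an illustration of the evaluation workflow described in METHODS, the sketch below shows one way to query each model with multiple-choice prompts, tally correct answers overall and per category, and compare accuracy across models. This is a minimal sketch, not the study's actual pipeline: query_model is a hypothetical placeholder (the abstract does not specify the prompting interface), the chi-square test is an assumed choice for the unspecified statistical analysis, and the counts are back-calculated from the reported percentages for illustration only.

# Minimal sketch in Python, assuming scipy is installed. query_model is a
# hypothetical placeholder; the abstract does not describe the actual API calls,
# prompt wording, or the specific statistical test used.
from collections import defaultdict
from scipy.stats import chi2_contingency

def query_model(model_name, question, choices):
    """Placeholder: send one multiple-choice prompt to an LLM and return its chosen option."""
    raise NotImplementedError("Replace with a real API call for ChatGPT-4, Bing, or Bard.")

def evaluate(models, questions):
    """Tally correct answers per model, overall and per question category."""
    overall = defaultdict(int)                            # model -> correct count
    by_category = defaultdict(lambda: defaultdict(int))   # model -> category -> correct count
    for q in questions:   # each q: {"text", "choices", "answer", "category"}
        for model in models:
            if query_model(model, q["text"], q["choices"]) == q["answer"]:
                overall[model] += 1
                by_category[model][q["category"]] += 1
    return overall, by_category

# Illustrative comparison of overall accuracy across the three models, using
# counts derived from the reported percentages (200 questions each).
correct = {"ChatGPT-4": 166, "Bard": 152, "Bing": 130}    # 83.0%, 76.0%, 65.0% of 200
table = [[c, 200 - c] for c in correct.values()]          # correct vs. incorrect per model
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")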


Subjects
Artificial Intelligence, Bariatric Surgery, Bariatric Surgery/education, Humans, Textbooks as Topic, United States, Societies, Medical, Clinical Competence