Acad Radiol; 2024 Jun 04.
Article in English | MEDLINE | ID: mdl-38839458

ABSTRACT

RATIONALE AND OBJECTIVES: This study aimed to evaluate the accuracy and reliability of educational patient pamphlets created by ChatGPT, a large language model, for common interventional radiology (IR) procedures.

METHODS AND MATERIALS: Twenty frequently performed IR procedures were selected, and five users were tasked to independently request ChatGPT to generate educational patient pamphlets for each procedure using identical commands. Two independent radiologists then assessed the content, quality, and accuracy of the pamphlets. The review focused on identifying potential errors, inaccuracies, and the consistency of the pamphlets.

RESULTS: In a thorough analysis of the educational pamphlets, we identified shortcomings in 30% (30/100) of pamphlets, with a total of 34 specific inaccuracies, including missing information about sedation for the procedure (10/34) and inaccuracies related to specific procedure-related complications (8/34). A keyword co-occurrence network showed consistent themes within each group of pamphlets, while a line-by-line comparison at the level of users and across different procedures showed statistically significant inconsistencies (P < 0.001).

CONCLUSION: ChatGPT-generated educational pamphlets demonstrated potential clinical relevance and fairly consistent terminology; however, the pamphlets were not entirely accurate and exhibited some shortcomings and inter-user structural variabilities. To ensure patient safety, future improvements and refinements in large language models are warranted, while maintaining human supervision and expert validation.
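The keyword co-occurrence network mentioned in the Results can be illustrated with a minimal sketch: count how often pairs of keywords appear together in the same pamphlet, which yields the weighted edges of such a network. The pamphlet texts and keyword list below are invented for illustration; the abstract does not describe the authors' actual pipeline or tooling.

```python
# Minimal sketch of a keyword co-occurrence network (hypothetical data,
# not the study's actual pamphlets or keyword set).
from collections import Counter
from itertools import combinations

def cooccurrence_edges(documents, keywords):
    """Count how often each pair of keywords appears in the same document.

    Returns a Counter mapping sorted keyword pairs to co-occurrence counts;
    these counts serve as edge weights in a co-occurrence network.
    """
    edges = Counter()
    for text in documents:
        present = sorted(k for k in keywords if k in text.lower())
        for pair in combinations(present, 2):
            edges[pair] += 1
    return edges

# Toy pamphlet sentences (invented) mentioning procedure-related terms.
pamphlets = [
    "You will receive sedation before the biopsy.",
    "Risks include bleeding and infection after the biopsy.",
    "Sedation helps you relax; bleeding is a rare complication.",
]
keywords = {"sedation", "biopsy", "bleeding", "infection"}

edges = cooccurrence_edges(pamphlets, keywords)
print(edges[("biopsy", "sedation")])   # 1
print(edges[("biopsy", "bleeding")])   # 1
```

In a full analysis, these weighted pairs would typically be loaded into a graph library to compare thematic structure across the pamphlets generated by different users.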
