Results 1 - 2 of 2
1.
Aesthetic Plast Surg; 48(13): 2389-2398, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38684536

ABSTRACT

BACKGROUND: ChatGPT is a free artificial intelligence (AI) language model developed and released by OpenAI in late 2022. This study evaluated how accurately ChatGPT answers the clinical questions (CQs) in the Guideline for the Management of Blepharoptosis published by the American Society of Plastic Surgeons (ASPS) in 2022.

METHODS: The CQs in the guideline were posed to ChatGPT in both English and Japanese. For each question, ChatGPT's answer, stated evidence quality, recommendation strength, reference match, and answer word count were recorded, and performance on each component was compared between the English and Japanese queries.

RESULTS: A total of 11 questions were included in the final analysis, and ChatGPT answered 61.3% of them correctly. ChatGPT answered English CQs more accurately than Japanese CQs (76.4% versus 46.4%; p = 0.004) and gave longer answers in English (123 words versus 35.9 words; p = 0.004). No statistically significant differences were noted in evidence quality, recommendation strength, or reference match. Of the 697 references ChatGPT proposed, only 216 (31.0%) actually existed.

CONCLUSIONS: ChatGPT shows potential as an adjunctive tool in the management of blepharoptosis. However, the current AI model has distinct limitations, and its primary role should be to complement the expertise of medical professionals.

LEVEL OF EVIDENCE V: Observational study under respected authorities. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors at www.springer.com/00266.
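The paper does not publish its querying pipeline, so the sketch below only illustrates the bilingual setup the METHODS section describes: the same CQ is sent to the chat API once in English and once in Japanese, and a rough word count of each answer is recorded. The model name, prompt wording, and the example question are assumptions, and whitespace word counts undercount Japanese, which is written without spaces.

```python
# Illustrative sketch only: the study's actual prompts, model version, and
# scoring procedure are not published. Requires the `openai` package and an
# OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

def ask(question: str) -> tuple[str, int]:
    """Send one clinical question and return (answer text, rough word count)."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumption: the free ChatGPT model of the study period
        messages=[{"role": "user", "content": question}],
    )
    answer = response.choices[0].message.content
    # Whitespace splitting is a crude proxy; it undercounts unsegmented Japanese.
    return answer, len(answer.split())

# Hypothetical example CQ, paraphrased in both languages (not taken from the guideline).
cq_en = "What is the recommended surgical treatment for involutional blepharoptosis?"
cq_ja = "退行性眼瞼下垂に対して推奨される手術治療は何ですか？"

for label, cq in (("EN", cq_en), ("JA", cq_ja)):
    answer, n_words = ask(cq)
    print(f"[{label}] ~{n_words} words: {answer[:80]}...")
```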


Subject(s)
Artificial Intelligence, Blepharoptosis, Practice Guidelines as Topic, Blepharoptosis/surgery, Humans, Blepharoplasty/methods, Japan
2.
Aesthetic Plast Surg; 2024 Sep 25.
Article in English | MEDLINE | ID: mdl-39322837

ABSTRACT

BACKGROUND: The Generative Pre-trained Transformer (GPT) series, which includes ChatGPT, is a family of large language models that produce human-like text dialogue. This study evaluated how well AI chatbots answer clinical questions based on practical rhinoplasty guidelines.

METHODS: Clinical questions (CQs) developed from the guidelines were used as question sources. For each question, GPT-4 and GPT-3.5 (ChatGPT), developed by OpenAI, were asked to provide an answer together with the Policy Level, Aggregate Evidence Quality, Level of Confidence in Evidence, and References, and the performance of the two artificial intelligence (AI) chatbots was compared.

RESULTS: A total of 10 questions were included in the final analysis, and the AI chatbots answered 90.0% of them correctly. GPT-4 answered CQs slightly less accurately than GPT-3.5, although the difference was not statistically significant (86.0% vs. 94.0%; p = 0.05), whereas GPT-4 reported the Level of Confidence in Evidence significantly more accurately than GPT-3.5 (52.0% vs. 28.0%; p < 0.01). No statistically significant differences were observed in Policy Level, Aggregate Evidence Quality, or Reference Match. In addition, GPT-4 cited references that actually exist at a significantly higher rate than GPT-3.5 (36.9% vs. 24.1%; p = 0.01).

CONCLUSIONS: The overall performance of GPT-4 was similar to that of GPT-3.5; however, GPT-4 provided existing references at a higher rate and thus has the potential to supply more accurate references in professional fields, including rhinoplasty.

LEVEL OF EVIDENCE V: This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors at www.springer.com/00266.
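The abstract reports p-values but names neither the statistical test nor the per-model denominators. As a purely hypothetical reconstruction, the sketch below back-calculates correct/incorrect counts from the reported accuracy rates under an assumed denominator of 50 graded answers per model and compares them with a Fisher exact test; the authors' actual test and counts may differ.

```python
# Hypothetical reconstruction, not the authors' analysis. The test choice
# (Fisher exact) and the denominator of 50 graded answers per model are
# assumptions chosen only to match the reported 86.0% and 94.0% rates.
from scipy.stats import fisher_exact

N = 50                           # assumed graded answers per model
gpt4_correct = round(0.86 * N)   # 43
gpt35_correct = round(0.94 * N)  # 47

contingency = [
    [gpt4_correct, N - gpt4_correct],    # GPT-4: correct vs. incorrect
    [gpt35_correct, N - gpt35_correct],  # GPT-3.5: correct vs. incorrect
]
odds_ratio, p_value = fisher_exact(contingency)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
```

Because the true denominators are unknown, the p-value from such a reconstruction need not match the published p = 0.05; the sketch shows only the shape of the comparison.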
