Results 1 - 2 of 2
1.
Article in English | MEDLINE | ID: mdl-38955859

ABSTRACT

OBJECTIVE: The purpose of this study was to assess how well ChatGPT, an AI-powered chatbot, performed in helping to manage pediatric sialadenitis and in identifying when sialendoscopy was necessary.

METHODS: 49 clinical cases of pediatric sialadenitis were retrospectively reviewed. ChatGPT was given patient data and asked to offer differential diagnoses, propose further tests, and suggest treatments. Its answers were compared with the decisions made by the treating otolaryngologists, and ChatGPT's response consistency and the judges' interrater reliability were analyzed.

RESULTS: ChatGPT showed 78.57% accuracy for the primary diagnosis, and in a further 17.35% of cases its diagnosis was considered likely. ChatGPT recommended significantly more additional examinations than the otolaryngologists did (111 vs. 60, p < 0.001), and agreement between ChatGPT and the otolaryngologists on additional exams was poor. Only 28.57% of cases received a pertinent and essential treatment plan from ChatGPT, indicating that its treatment recommendations were frequently lacking. Interrater reliability among the judges was highest for treatment ratings (Kendall's tau = 0.824, p < 0.001). ChatGPT's response consistency was, for the most part, high.

CONCLUSIONS: Although ChatGPT has the potential to correctly diagnose pediatric sialadenitis, it has a number of noteworthy limitations in suggesting further testing and treatment regimens. More research and validation are required before widespread clinical use. A critical viewpoint is needed to ensure that chatbots are used properly and effectively to supplement human expertise rather than to replace it.
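The abstract reports interrater reliability with Kendall's tau. A minimal sketch of that kind of check, in Python with SciPy, follows; the ratings below are hypothetical, since the study's case-level data are not reproduced in the abstract.

# Hypothetical interrater-reliability check between two judges'
# ordinal treatment-plan ratings (made-up data, not from the study).
from scipy.stats import kendalltau

judge_a = [3, 4, 2, 5, 1, 4, 3, 5, 2, 4]
judge_b = [3, 5, 2, 4, 1, 4, 3, 5, 1, 4]

# Kendall's tau measures how consistently the two judges rank the cases.
tau, p_value = kendalltau(judge_a, judge_b)
print(f"Kendall's tau = {tau:.3f}, p = {p_value:.4f}")

A tau close to 1 means the judges ordered the cases almost identically, which is how a value such as the reported 0.824 should be read.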

2.
Fr J Urol; 34(7-8): 102666, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38849035

ABSTRACT

OBJECTIVES: Artificial intelligence (AI) applications are increasingly used by both patients and physicians to access medical information. This study focused on the urolithiasis section (kidney and ureteral stones) of the European Association of Urology (EAU) guideline, a key reference for urologists.

MATERIAL AND METHODS: We directed inquiries to four distinct AI chatbots to assess how well their responses adhered to the guideline. A total of 115 recommendations were transformed into questions, and the responses were evaluated by two urologists with at least 5 years of experience using a 5-point Likert scale (1 - False, 2 - Inadequate, 3 - Sufficient, 4 - Correct, 5 - Very correct).

RESULTS: The mean scores for Perplexity and ChatGPT 4.0 were 4.68 (SD: 0.80) and 4.80 (SD: 0.47), respectively, both significantly higher than the scores of Bing and Bard (Bing vs. Perplexity, P<0.001; Bard vs. Perplexity, P<0.001; Bing vs. ChatGPT, P<0.001; Bard vs. ChatGPT, P<0.001). Bing had a mean score of 4.21 (SD: 0.96), while Bard scored 3.56 (SD: 1.14), a significant difference (Bing vs. Bard, P<0.001); Bard had the lowest score of all the chatbots. Analysis of references revealed that Perplexity and Bing cited the guideline most frequently (47.3% and 30%, respectively).

CONCLUSION: Our findings demonstrate that ChatGPT 4.0 and, notably, Perplexity align well with EAU guideline recommendations. These continuously evolving applications may play a crucial role in delivering information to physicians in the future, especially for urolithiasis.
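The abstract compares mean Likert scores across chatbots answering the same 115 questions but does not name the statistical test used. A minimal sketch of one reasonable analysis for paired ordinal data, a Wilcoxon signed-rank test via SciPy, follows; the scores are simulated, not the study's data.

# Hypothetical paired comparison of two chatbots' 1-5 Likert scores
# on the same question set (simulated data, not from the study).
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
n_questions = 115

# Simulated score distributions: one skewed high, one more spread out.
perplexity = rng.choice([3, 4, 5, 5, 5], size=n_questions)
bard = rng.choice([1, 2, 3, 4, 5], size=n_questions)

# Wilcoxon signed-rank test on the per-question score differences.
stat, p_value = wilcoxon(perplexity, bard)
print(f"Perplexity mean = {perplexity.mean():.2f}, Bard mean = {bard.mean():.2f}")
print(f"Wilcoxon statistic = {stat:.1f}, p = {p_value:.4g}")

A paired test is a natural fit here because each chatbot answered the identical set of guideline-derived questions.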


Subjects
Artificial Intelligence, Guideline Adherence, Practice Guidelines as Topic, Urolithiasis, Urology, Humans, Urolithiasis/therapy, Urology/standards, Europe