Results 1 - 2 of 2

1.
Int Braz J Urol; 50(2): 192-198, 2024.
Article in English | MEDLINE | ID: mdl-38386789

ABSTRACT

PURPOSE: One of the many artificial intelligence based tools that has gained popularity is the Chat-Generative Pre-Trained Transformer (ChatGPT). Because of this popularity, incorrect information provided by ChatGPT can contribute to patient misinformation. Furthermore, it may lead to misconduct, as ChatGPT can mislead physicians in the decision-making pathway. Therefore, the aim of this study is to evaluate the accuracy and reproducibility of ChatGPT answers regarding urological diagnoses. MATERIALS AND METHODS: ChatGPT version 3.5 was used. The questions posed to the program involved Primary Megaureter (pMU), Enuresis and Vesicoureteral Reflux (VUR). There were three queries for each topic. Each query was submitted twice, and both responses were recorded to examine the reproducibility of ChatGPT's answers. Afterwards, both answers were combined. Finally, these were evaluated qualitatively by a board of three specialists. A descriptive analysis was performed. RESULTS AND CONCLUSION: ChatGPT simulated general knowledge on the researched topics. Regarding Enuresis, the provided definition was partially correct, as the generic response allowed for misinterpretation. For VUR, the response was considered appropriate. For pMU, it was partially correct, lacking essential aspects of its definition such as the diameter of the dilatation of the ureter. Unnecessary exams were suggested for Enuresis and pMU. Regarding the treatment of the conditions mentioned, it specified treatments for Enuresis that are ineffective, such as bladder training. Therefore, ChatGPT responses combine accurate information with incomplete, ambiguous and, occasionally, misleading details.


Subject(s)
Nocturnal Enuresis, Physicians, Urology, Humans, Artificial Intelligence, Reproducibility of Results
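The reproducibility protocol described in the abstract (each query submitted twice, with the paired responses compared) can be sketched as follows. The original study relied on qualitative review by three specialists; the word-overlap metric, the threshold, and the example queries below are illustrative assumptions, not part of the study.

```python
# Sketch of the duplicate-query reproducibility check. The original study
# compared responses qualitatively; the Jaccard word-overlap metric,
# threshold, and example queries here are illustrative assumptions.
from typing import Callable, Dict, List, Tuple


def jaccard_similarity(a: str, b: str) -> float:
    """Word-level Jaccard similarity between two response strings."""
    words_a, words_b = set(a.lower().split()), set(b.lower().split())
    if not words_a and not words_b:
        return 1.0
    return len(words_a & words_b) / len(words_a | words_b)


# Hypothetical queries per topic (the paper used three per topic).
QUERIES: Dict[str, List[str]] = {
    "Enuresis": ["What is enuresis?"],
    "VUR": ["What is vesicoureteral reflux?"],
    "pMU": ["What is primary megaureter?"],
}


def check_reproducibility(
    ask: Callable[[str], str],
    queries: Dict[str, List[str]],
    threshold: float = 0.6,
) -> Dict[Tuple[str, str], bool]:
    """Submit each query twice via `ask` and flag pairs whose
    similarity falls below `threshold` as non-reproducible."""
    results: Dict[Tuple[str, str], bool] = {}
    for topic, questions in queries.items():
        for question in questions:
            first, second = ask(question), ask(question)
            results[(topic, question)] = (
                jaccard_similarity(first, second) >= threshold
            )
    return results
```

In practice, `ask` would wrap a call to the chat model's API; with a deterministic stub, every pair compares as identical, which makes the comparison logic easy to verify offline.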
2.
Int. braz. j. urol; 50(2): 192-198, Mar.-Apr. 2024. tab
Article in English | LILACS-Express | LILACS | ID: biblio-1558057

