Results 1 - 3 of 3
2.
Foot Ankle Surg; 2024 Aug 06.
Article in English | MEDLINE | ID: mdl-39117535

ABSTRACT

BACKGROUND: This study evaluates the accuracy and readability of responses from Google, ChatGPT-3.5, and ChatGPT-4.0 (two versions of an artificial intelligence model) to common questions regarding bunion surgery. METHODS: A Google search of "bunionectomy" was performed, and the first ten questions under "People Also Ask" were recorded. ChatGPT-3.5 and 4.0 were each asked these ten questions individually, and their answers were analyzed using the Flesch-Kincaid Reading Ease and Gunning-Fog Level algorithms. RESULTS: Compared to Google, ChatGPT-3.5 and 4.0 had larger word counts, with 315 ± 39 words (p < .0001) and 294 ± 39 words (p < .0001), respectively. A significant difference in Flesch-Kincaid Reading Ease was found between ChatGPT-3.5 and 4.0 compared to Google (p < .0001). CONCLUSIONS: Our findings demonstrate that ChatGPT provided significantly lengthier responses than Google, and that reading ease differed significantly between the platforms. Both platforms exceeded the seventh- to eighth-grade reading level recommended in the U.S. LEVEL OF EVIDENCE: N/A.
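For context, both readability metrics named in the methods are fixed formulas over word, sentence, and syllable counts. Below is a minimal Python sketch of the two scores; the syllable counter is a crude vowel-group heuristic (published readability tools use dictionary-backed syllable counts, and the study does not specify its software), and the sample answer text is hypothetical.

```python
import re

def count_syllables(word: str) -> int:
    """Approximate syllables as runs of vowels (crude heuristic)."""
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    # A trailing silent 'e' usually does not add a syllable.
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def readability(text: str) -> dict:
    """Flesch Reading Ease and Gunning Fog for a non-empty passage."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    complex_words = sum(1 for w in words if count_syllables(w) >= 3)
    wps = len(words) / len(sentences)   # mean words per sentence
    spw = syllables / len(words)        # mean syllables per word
    flesch = 206.835 - 1.015 * wps - 84.6 * spw
    fog = 0.4 * (wps + 100 * complex_words / len(words))
    return {"words": len(words), "flesch_reading_ease": flesch, "gunning_fog": fog}

if __name__ == "__main__":
    # Hypothetical answer text, for illustration only.
    answer = ("A bunionectomy is a surgical procedure that removes the bony "
              "bump at the base of the big toe and realigns the joint.")
    print(readability(answer))
```

Lower Flesch Reading Ease scores and higher Gunning Fog scores both indicate harder text, which is how longer, more complex ChatGPT answers register as less readable.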

3.
Hand (N Y); 15589447241247332, 2024 Apr 25.
Article in English | MEDLINE | ID: mdl-38660977

ABSTRACT

BACKGROUND: ChatGPT, an artificial intelligence technology, has the potential to be a useful patient aid, though the accuracy and appropriateness of its responses and recommendations on common hand surgical pathologies and procedures must be understood. Comparing the sources referenced and the characteristics of responses from ChatGPT and an established search engine (Google) on carpal tunnel surgery allows an assessment of the utility of ChatGPT for patient education. METHODS: A Google search of "carpal tunnel release surgery" was performed, and the "frequently asked questions (FAQs)" were recorded with their answers and sources. ChatGPT was then asked to answer the same FAQs. The two sets of FAQs were compared, and answer content was analyzed by word count, readability, and content source. RESULTS: There was 40% concordance between the questions asked by the two programs. Google answered each question with one source per answer, whereas ChatGPT's answers drew on two sources per answer. ChatGPT's answers were significantly longer than Google's, and multiple readability algorithms found ChatGPT responses to be statistically significantly more difficult to read and at a higher grade level than Google's. ChatGPT always recommended "contacting your surgeon." CONCLUSION: A comparison of ChatGPT's responses to Google's FAQ responses revealed that ChatGPT's answers were more in-depth, drew on multiple sources, and came from a higher proportion of academic websites. However, ChatGPT's answers were more difficult to understand. Further study is needed to determine whether the differences in responses between the programs correspond to differences in patient comprehension.
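The length and readability comparisons in both abstracts reduce to a two-sample test on per-answer scores. A minimal sketch follows, assuming Welch's t-test via SciPy and hypothetical per-answer word counts; neither abstract reproduces its raw data or names the exact test used.

```python
from scipy import stats

# Hypothetical word counts per answer for ten FAQs from each platform;
# these values are illustrative, not the studies' actual data.
google_words = [58, 72, 64, 81, 55, 69, 77, 60, 66, 73]
chatgpt_words = [310, 295, 342, 288, 301, 276, 330, 298, 315, 284]

# Welch's t-test (no equal-variance assumption) for a difference in mean length.
t_stat, p_value = stats.ttest_ind(chatgpt_words, google_words, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.2g}")
```

The same comparison applies to per-answer readability scores in place of word counts, which is how "statistically significantly more difficult to read" would be established.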
