Results 1 - 3 of 3
1.
Front Surg ; 11: 1373843, 2024.
Article in English | MEDLINE | ID: mdl-38903865

ABSTRACT

Purpose: This study aims to evaluate the effectiveness of ChatGPT-4, an artificial intelligence (AI) chatbot, in providing accurate and comprehensible information to patients regarding otosclerosis surgery. Methods: On October 20, 2023, 15 hypothetical questions were posed to ChatGPT-4 to simulate physician-patient interactions about otosclerosis surgery. Responses were evaluated by three independent ENT specialists using the DISCERN scoring system. The readability was evaluated using multiple indices: Flesch Reading Ease (FRE), Flesch-Kincaid Grade Level (FKGL), Gunning Fog Index (Gunning FOG), Simple Measure of Gobbledygook (SMOG), Coleman-Liau Index (CLI), and Automated Readability Index (ARI). Results: The responses from ChatGPT-4 received DISCERN scores ranging from poor to excellent, with an overall score of 50.7 ± 8.2. The readability analysis indicated that the texts were above the 6th-grade level, suggesting they may not be easily comprehensible to the average reader. There was a significant positive correlation between the referees' scores. Despite providing correct information in over 90% of the cases, the study highlights concerns regarding the potential for incomplete or misleading answers and the high readability level of the responses. Conclusion: While ChatGPT-4 shows potential in delivering health information accurately, its utility is limited by the level of readability of its responses. The study underscores the need for continuous improvement in AI systems to ensure the delivery of information that is both accurate and accessible to patients with varying levels of health literacy. Healthcare professionals should supervise the use of such technologies to enhance patient education and care.
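The readability indices reported here follow standard published formulas. As a rough illustration only (this is not the authors' pipeline or the WebFX tool they cite elsewhere; the syllable counter is a naive heuristic), the two Flesch measures can be sketched in Python:

```python
import re

def count_syllables(word: str) -> int:
    # Naive heuristic: count groups of consecutive vowels, drop a likely
    # silent trailing "e"; every word counts as at least one syllable.
    word = word.lower()
    n = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and not word.endswith(("le", "ee")) and n > 1:
        n -= 1
    return max(n, 1)

def _counts(text: str):
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return sentences, len(words), syllables

def flesch_reading_ease(text: str) -> float:
    # FRE = 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)
    # Higher scores mean easier text; 90+ is roughly 5th-grade level.
    s, w, syl = _counts(text)
    return 206.835 - 1.015 * (w / s) - 84.6 * (syl / w)

def flesch_kincaid_grade(text: str) -> float:
    # FKGL = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59
    # Output approximates the US school grade needed to read the text.
    s, w, syl = _counts(text)
    return 0.39 * (w / s) + 11.8 * (syl / w) - 15.59
```

A short, monosyllabic sentence scores well above 90 on FRE and below the sixth-grade FKGL threshold that these studies use as the benchmark for patient materials; long sentences with polysyllabic medical terms push FKGL well above it.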

2.
J Craniofac Surg ; 2024 Jun 11.
Article in English | MEDLINE | ID: mdl-38861337

ABSTRACT

OBJECTIVE: This study aimed to evaluate the utility and efficacy of ChatGPT in addressing questions related to thyroid surgery, taking into account accuracy, readability, and relevance. METHODS: A simulated physician-patient consultation on thyroidectomy surgery was conducted by posing 21 hypothetical questions to ChatGPT. Responses were evaluated using the DISCERN score by 3 independent ear, nose and throat specialists. Readability measures including the Flesch Reading Ease, Flesch-Kincaid Grade Level, Gunning Fog Index, Simple Measure of Gobbledygook, Coleman-Liau Index, and Automated Readability Index were also applied. RESULTS: The majority of ChatGPT responses were rated fair or above using the DISCERN system, with an average score of 45.44 ± 11.24. However, the readability scores were consistently higher than the recommended grade 6 level, indicating the information may not be easily comprehensible to the general public. CONCLUSION: While ChatGPT exhibits potential in answering patient queries related to thyroid surgery, its current formulation is not yet optimally tailored for patient comprehension. Further refinements are necessary for its efficient application in the medical domain.

3.
Front Surg ; 11: 1327793, 2024.
Article in English | MEDLINE | ID: mdl-38327547

ABSTRACT

Purpose: This study aimed to assess the readability indices of websites containing educational materials on otosclerosis. Methods: We performed a Google search on 19 April 2023 using the term "otosclerosis." The first 50 hits were collected and analyzed. The websites were categorized into two groups: websites for health professionals and general websites for patients. Readability indices were calculated using the website https://www.webfx.com/tools/read-able/. Results: A total of 33 websites were eligible and analyzed (20 health professional-oriented and 13 patient-oriented websites). When patient-oriented and health professional-oriented websites were analyzed separately, mean Flesch Reading Ease scores were 52.16 ± 14.34 and 46.62 ± 10.07, respectively, with no statistically significant difference between the two groups. Conclusion: Current patient educational material available online related to otosclerosis is written beyond the recommended sixth-grade reading level. Even a high-quality website is of little value to patients if they cannot comprehend its text.
