Results 1 - 20 of 758
1.
Clin Ophthalmol; 18: 2647-2655, 2024.
Article in English | MEDLINE | ID: mdl-39323727

ABSTRACT

Purpose: To compare the accuracy and readability of responses to oculoplastics patient questions provided by Google and ChatGPT. Additionally, to assess the ability of ChatGPT to create customized patient education materials. Methods: We executed a Google search to identify the 3 most frequently asked patient questions (FAQs) related to 10 oculoplastics conditions. FAQs were entered into both the Google search engine and the ChatGPT tool and responses were recorded. Responses were graded for readability using five validated readability indices and for accuracy by six oculoplastics surgeons. ChatGPT was instructed to create patient education materials at various reading levels for 8 oculoplastics procedures. The accuracy and readability of ChatGPT-generated procedural explanations were assessed. Results: ChatGPT responses to patient FAQs were written at a significantly higher average grade level than Google responses (grade 15.6 vs 10.0, p < 0.001). ChatGPT responses (93% accuracy) were significantly more accurate (p < 0.001) than Google responses (78% accuracy) and were preferred by expert panelists (79%). ChatGPT accurately explained oculoplastics procedures at an above average reading level. When instructed to rewrite patient education materials at a lower reading level, grade level was reduced by approximately 4 (15.7 vs 11.7, respectively, p < 0.001) without sacrificing accuracy. Conclusion: ChatGPT has the potential to provide patients with accurate information regarding their oculoplastics conditions. ChatGPT may also be utilized by oculoplastic surgeons as an accurate tool to provide customizable patient education for patients with varying health literacy. A better understanding of oculoplastics conditions and procedures amongst patients can lead to informed eye care decisions.

2.
Ann Surg Open; 5(3): e465, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39310356

ABSTRACT

Objective: To assess the accuracy, quality, and readability of patient-focused breast cancer websites using expert evaluation and validated tools. Background: Ensuring access to accurate, high-quality, and readable online health information supports informed decision-making and health equity but has not been recently evaluated. Methods: A qualitative analysis on 50 websites was conducted; the first 10 eligible websites for the following search terms were included: "breast cancer," "breast surgery," "breast reconstructive surgery," "breast chemotherapy," and "breast radiation therapy." Websites were required to be in English and not intended for healthcare professionals. Accuracy was evaluated by 5 breast cancer specialists. Quality was evaluated through the DISCERN questionnaire. Readability was measured using 9 standardized tests. Mean readability was compared with the American Medical Association and National Institutes of Health 6th grade recommendation. Results: Nonprofit hospital websites had the highest accuracy (mean = 4.06, SD = 0.42); however, no statistical differences were observed in accuracy by website affiliation (P = 0.08). The overall mean quality score was 50.8 ("fair"/"good" quality) with no significant differences among website affiliations (P = 0.10). Mean readability was at the 10th grade reading level, the lowest being for commercial websites with a mean 9th grade reading level (SD = 2.38). All websites exceeded the American Medical Association- and National Institutes of Health-recommended reading level by 4.4 levels (P < 0.001). Websites with higher accuracy tended to have lower readability levels, whereas those with lower accuracy had higher readability levels. Conclusion: As breast cancer treatment has become increasingly complex, improving online quality and readability while maintaining high accuracy is essential to promote health equity and empower patients to make informed decisions about their care.

3.
J Surg Res; 303: 89-94, 2024 Sep 19.
Article in English | MEDLINE | ID: mdl-39303650

ABSTRACT

INTRODUCTION: Online patient educational materials (OPEMs) help patients engage in their health care. The American Medical Association (AMA) recommends OPEM be written at or below the 6th grade reading level. This study assessed the readability of deep venous thrombosis OPEM in English and Spanish. METHODS: Google searches were conducted in English and Spanish using "deep venous thrombosis" and "trombosis venosa profunda," respectively. The top 25 patient-facing results were recorded for each and categorized by source type (hospital, professional society, other). Readability of English OPEM was measured using several scales, including the Flesch Reading Ease Readability Formula and Flesch-Kincaid Grade Level. Readability of Spanish OPEM was measured using the Fernández-Huerta Index and INFLESZ Scale. Readability was compared to the AMA recommendation, between languages, and across source types. RESULTS: Only one (4%) Spanish OPEM was written at an easy reading level, compared with 7 (28%) English OPEM, a significant difference in the reading-difficulty breakdown between the two languages (P = 0.04). The average readability scores for English and Spanish OPEM across all scales were significantly greater than the recommended level (P < 0.01). Only four total articles (8%) met the AMA recommendation, with no significant difference between English and Spanish OPEM (P = 0.61). CONCLUSIONS: Nearly all English and Spanish deep venous thrombosis OPEM analyzed were above the recommended reading level. English resources had overall easier readability compared to Spanish, which may represent a barrier to care. To limit health disparities, information should be presented at accessible reading levels.
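For reference, the two English-language measures cited in this and several later entries, Flesch Reading Ease and Flesch-Kincaid Grade Level, reduce to simple ratios of sentences, words, and syllables. The sketch below uses the standard published coefficients; the syllable counter and the sample sentence are illustrative assumptions, not material from the cited study.

```python
# Minimal sketch of the Flesch Reading Ease and Flesch-Kincaid Grade Level
# formulas with their standard published coefficients. The syllable counter
# is a crude vowel-group heuristic; published tools use dictionaries or
# more careful rules.
import re

def count_syllables(word: str) -> int:
    # Rough heuristic for illustration only.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(words: int, sentences: int, syllables: int) -> float:
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

def flesch_kincaid_grade(words: int, sentences: int, syllables: int) -> float:
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

text = "Deep venous thrombosis is a blood clot that forms in a deep vein."
tokens = re.findall(r"[A-Za-z]+", text)
n_words = len(tokens)
n_sentences = max(1, len(re.findall(r"[.!?]+", text)))
n_syllables = sum(count_syllables(w) for w in tokens)

print(round(flesch_reading_ease(n_words, n_sentences, n_syllables), 1))
print(round(flesch_kincaid_grade(n_words, n_sentences, n_syllables), 1))
```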

4.
Cleft Palate Craniofac J: 10556656241281453, 2024 Sep 09.
Article in English | MEDLINE | ID: mdl-39246230

ABSTRACT

OBJECTIVE: The American Medical Association (AMA) recommends patient education materials (PEMs) be written at or below a sixth grade reading level. This study seeks to determine the quality, readability, and content of available alveolar bone grafting (ABG) PEMs and determine if artificial intelligence can improve PEM readability. DESIGN: Review of free online PEMs. SETTING: Online ABG PEMs were retrieved from different authoring body types (hospital/academic center, medical society, or private practice). PATIENTS, PARTICIPANTS: None. INTERVENTIONS: Content was assessed by screening PEMs for specific ABG-related topics. Quality was evaluated with the Patient Education Material Assessment Tool (PEMAT), which has measures of understandability and actionability. Open-access readability software (WebFX) determined readability with Flesch Reading Ease, Flesch-Kincaid Grade Level, and Gunning-Fog Index. PEMs were rewritten with ChatGPT, and readability metrics were reassessed. MAIN OUTCOME MEASURE(S): Quality, readability, and content of ABG PEMs. RESULTS: 34 PEMs were analyzed. Regarding quality, the average PEMAT-understandability score was 67.0 ± 16.2%, almost at the minimum acceptable score of 70.0% (p = 0.281). The average PEMAT-actionability score was low at 33.0 ± 24.1%. Regarding readability, the average Flesch Reading Ease score was 64.6 ± 12.8, categorized as "standard/plain English." The average Flesch-Kincaid Grade Level was 8.0 ± 2.3, significantly higher than AMA recommendations (p < 0.0001). PEM rewriting with ChatGPT improved Flesch-Kincaid Grade Level to 6.1 ± 1.3 (p < 0.0001). CONCLUSIONS: Available ABG PEMs are above the recommended reading level, yet ChatGPT can improve PEM readability. Future studies should improve areas of ABG PEMs that are most lacking, such as actionability.
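The PEMAT understandability and actionability percentages reported here and in several other entries are computed as points earned over applicable items. A minimal sketch of that scoring convention follows; the item names are placeholders, not the actual PEMAT wording.

```python
# Hedged sketch of PEMAT-style percentage scoring: each item is rated
# 1 (agree), 0 (disagree), or marked not applicable, and the score is
# points earned over applicable items, expressed as a percentage.
from typing import Optional

def pemat_score(ratings: dict[str, Optional[int]]) -> float:
    applicable = [v for v in ratings.values() if v is not None]
    if not applicable:
        raise ValueError("No applicable items were rated.")
    return 100.0 * sum(applicable) / len(applicable)

# Placeholder item labels for a text-only handout.
understandability_items = {
    "purpose_is_evident": 1,
    "uses_common_language": 1,
    "uses_visual_aids": None,   # not applicable
    "breaks_into_sections": 0,
}
print(round(pemat_score(understandability_items), 1))  # 66.7
```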

5.
Laryngoscope Investig Otolaryngol; 9(5): e70009, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39257728

ABSTRACT

Objectives: Artificial intelligence is evolving and significantly impacting health care, promising to transform access to medical information. With the rise of medical misinformation and frequent internet searches for health-related advice, there is a growing demand for reliable patient information. This study assesses the effectiveness of ChatGPT in providing information and treatment options for chronic rhinosinusitis (CRS). Methods: Six inputs were entered into ChatGPT regarding the definition, prevalence, causes, symptoms, treatment options, and postoperative complications of CRS. The International Consensus Statement on Allergy and Rhinology guidelines for rhinosinusitis served as the gold standard for evaluating the answers. The inputs were grouped into three categories, and Flesch-Kincaid readability metrics, ANOVA, and trend analysis were used to assess them. Results: Although some discrepancies were found regarding CRS, ChatGPT's answers were largely in line with existing literature. Mean Flesch Reading Ease, Flesch-Kincaid Grade Level, and passive voice percentage were 40.7, 12.15, and 22.5%, respectively, for the basic information and prevalence category; 47.5, 11.2, and 11.1% for the causes and symptoms category; 33.05, 13.05, and 22.25% for the treatment and complications category; and 40.42, 12.13, and 18.62% across all categories. ANOVA indicated no statistically significant differences in readability across the categories (p-values: Flesch Reading Ease = 0.385, Flesch-Kincaid Grade Level = 0.555, Passive Sentences = 0.601). Trend analysis revealed that readability varied slightly, with a general increase in complexity. Conclusion: ChatGPT is a developing tool potentially useful for patients and medical professionals to access medical information. However, caution is advised, as its answers may not be fully accurate compared to clinical guidelines or suitable for patients with varying educational backgrounds. Level of evidence: 4.

6.
Sensors (Basel); 24(17), 2024 Aug 24.
Article in English | MEDLINE | ID: mdl-39275402

ABSTRACT

In the manufacture of ancient books, it was quite common to insert written scraps belonging to earlier library material into bookbindings. For scholars such as codicologists and paleographers, it is extremely important to be able to read the text on such scraps without dismantling the book. In this paper, we report on the detection of these texts by means of infrared (IR) pulsed thermography (PT), which in recent years has proven to be an effective tool for the investigation of Cultural Heritage. In particular, we present a quantitative analysis based, for the first time, on PT images obtained from books of historical relevance preserved at the Biblioteca Angelica in Rome. The analysis was carried out by means of a theoretical model for the PT signal that makes use of two image parameters, namely the distortion and the contrast, related to the IR readability of the buried texts. As shown in this paper, the good agreement between the experimental data obtained from the historical books and the theoretical analysis demonstrates that the adopted PT method can be fruitfully applied, in real case studies, to the detection of buried texts and to the quantitative characterization of the parameters affecting their thermal readability.

7.
BMC Health Serv Res; 24(1): 1124, 2024 Sep 27.
Article in English | MEDLINE | ID: mdl-39334340

ABSTRACT

BACKGROUND: The quality and safety of information provided on online platforms for migraine treatment remain uncertain. In this cross-sectional study, we evaluated the top 10 trending websites accessed annually by Turkish patients seeking solutions for migraine treatment and assessed information quality, security, and readability. METHODS: A comprehensive search strategy was conducted using Google starting in 2015, considering Türkiye's internet usage trends. Websites were evaluated using the DISCERN measurement tool and the Ateşman Turkish readability index. RESULTS: Ninety websites were evaluated between 2015 and 2024. According to the DISCERN measurement tool, most websites exhibited low quality and security levels. Readability analysis showed that half of the websites were understandable by readers with 9th - 10th grade educational levels. The author distribution varied, with neurologists being the most common. A significant proportion of the websites were for profit. Treatment of attacks and preventive measures were frequently mentioned, but some important treatments, such as greater occipital nerve blockade, were rarely discussed. CONCLUSION: This study highlights the low quality and reliability of online information websites on migraine treatment in Türkiye. These websites' readability level remains a concern, potentially hindering patients' access to accurate information and posing a barrier to migraine care for both patients and physicians. Better supervision and cooperation with reputable medical associations are needed to ensure the dissemination of reliable information to the public.


Subjects
Comprehension, Consumer Health Information, Internet, Migraine Disorders, Migraine Disorders/drug therapy, Migraine Disorders/therapy, Humans, Turkey, Cross-Sectional Studies, Consumer Health Information/standards, Reproducibility of Results, Health Literacy
9.
Curr Probl Cardiol; 49(11): 102797, 2024 Nov.
Article in English | MEDLINE | ID: mdl-39159709

ABSTRACT

BACKGROUND: Patient education plays a crucial role in improving the quality of life for patients with heart failure. As artificial intelligence continues to advance, new chatbots are emerging as valuable tools across various aspects of life. One prominent example is ChatGPT, a widely used chatbot among the public. Our study aims to evaluate the readability of ChatGPT answers to common patient questions about heart failure. METHODS: We performed a comparative analysis between ChatGPT responses and existing heart failure educational materials from top US cardiology institutes. Validated readability calculators were employed to assess and compare the reading difficulty and grade level of the materials. Furthermore, a blinded assessment using the Patient Education Materials Assessment Tool (PEMAT) was performed by four advanced heart failure attendings to evaluate the readability and actionability of each resource. RESULTS: Our study revealed that responses generated by ChatGPT were longer and more challenging to read compared to other materials. Additionally, these responses were written at a higher educational level (undergraduate and 9th-10th grade), similar to those from the Heart Failure Society of America. Despite achieving a competitive PEMAT readability score (75%), surpassing the American Heart Association score (68%), ChatGPT's actionability score was the lowest (66.7%) among all materials included in our study. CONCLUSION: Despite their current limitations, artificial intelligence chatbots have the potential to revolutionize patient education, especially given their ongoing improvement. However, further research is necessary to ensure the integrity and reliability of these chatbots before endorsing them as reliable resources for patient education.


Subjects
Heart Failure, Patient Education as Topic, Humans, Cardiology/education, Comprehension, Health Literacy, Heart Failure/psychology, Heart Failure/therapy, Patient Education as Topic/methods, Quality of Life, United States
10.
Cureus; 16(7): e64114, 2024 Jul.
Article in English | MEDLINE | ID: mdl-39119408

ABSTRACT

INTRODUCTION: ChatGPT (OpenAI, San Francisco, CA, USA) is a novel artificial intelligence (AI) application that is used by millions of people, and the numbers are growing by the day. Because it has the potential to be a source of patient information, the study aimed to evaluate the ability of ChatGPT to answer frequently asked questions (FAQs) about asthma with consistent reliability, acceptability, and easy readability. METHODS: We collected 30 FAQs about asthma from the Global Initiative for Asthma website. ChatGPT was asked each question twice, by two different users, to assess for consistency. The responses were evaluated by five board-certified internal medicine physicians for reliability and acceptability. The consistency of responses was determined by the differences in evaluation between the two answers to the same question. The readability of all responses was measured using the Flesch Reading Ease Scale (FRES), the Flesch-Kincaid Grade Level (FKGL), and the Simple Measure of Gobbledygook (SMOG). RESULTS: Sixty responses were collected for evaluation. Fifty-six (93.33%) of the responses were of good reliability. The average rating of the responses was 3.65 out of 4 total points. 78.3% (n=47) of the responses were found acceptable by the evaluators to be the only answer for an asthmatic patient. Only two (6.67%) of the 30 questions had inconsistent answers. The average readability of all responses was determined to be 33.50±14.37 on the FRES, 12.79±2.89 on the FKGL, and 13.47±2.38 on the SMOG. CONCLUSION: Compared to online websites, we found that ChatGPT can be a reliable and acceptable source of information for asthma patients in terms of information quality. However, all responses were of difficult readability, and none followed the recommended readability levels. Therefore, the readability of this AI application requires improvement to be more suitable for patients.

11.
Cleft Palate Craniofac J: 10556656241266368, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39091088

ABSTRACT

INTRODUCTION: The application of artificial intelligence (AI) in healthcare has expanded in recent years, and tools such as ChatGPT that generate patient-facing information have garnered particular interest. Online cleft lip and palate (CL/P) surgical information supplied by academic/professional (A/P) sources was therefore evaluated against ChatGPT regarding accuracy, comprehensiveness, and clarity. METHODS: 11 plastic and reconstructive surgeons and 29 non-medical individuals blindly compared responses written by ChatGPT or A/P sources to 30 frequently asked CL/P surgery questions. Surgeons indicated preference, determined accuracy, and scored comprehensiveness and clarity. Non-medical individuals indicated preference. Readability scores were calculated using seven readability formulas. Statistical analysis of CL/P surgical online information was performed using paired t-tests. RESULTS: Surgeons blindly preferred ChatGPT-generated material over A/P sources 60.88% of the time. Additionally, surgeons consistently indicated that ChatGPT-generated material was more comprehensive and had greater clarity. No significant difference was found between ChatGPT and resources provided by professional organizations in terms of accuracy. Among individuals with no medical background, ChatGPT-generated materials were preferred 60.46% of the time. For materials from both ChatGPT and A/P sources, readability scores surpassed advised levels for patient proficiency across all seven readability formulas. CONCLUSION: As the prominence of ChatGPT-based language tools rises in the healthcare space, potential applications of these tools should be assessed by experts against existing high-quality sources. Our results indicate that ChatGPT is capable of producing material that both plastic surgeons and individuals with no medical background preferred for its accuracy, comprehensiveness, and clarity.

12.
Indian J Crit Care Med; 28(6): 561-568, 2024 Jun.
Article in English | MEDLINE | ID: mdl-39130387

ABSTRACT

Background: End-of-life care (EOLC) is a critical aspect of healthcare, yet accessing reliable information remains challenging, particularly in culturally diverse contexts like India. Objective: This study investigates the potential of artificial intelligence (AI) in addressing the informational gap by analyzing patient information leaflets (PILs) generated by AI chatbots on EOLC. Methodology: Using a comparative research design, PILs generated by ChatGPT and Google Gemini were evaluated for readability, sentiment, accuracy, completeness, and suitability. Readability was assessed using established metrics, sentiment analysis determined emotional tone, accuracy, and completeness were rated by subject experts, and suitability was evaluated using the Patient Education Materials Assessment Tool (PEMAT). Results: Google Gemini PILs exhibited superior readability and actionability compared to ChatGPT PILs. Both conveyed positive sentiments and high levels of accuracy and completeness, with Google Gemini PILs showing slightly lower accuracy scores. Conclusion: The findings highlight the promising role of AI in enhancing patient education in EOLC, with implications for improving care outcomes and promoting informed decision-making in diverse cultural settings. Ongoing refinement and innovation in AI-driven patient education strategies are needed to ensure compassionate and culturally sensitive EOLC. How to cite this article: Gondode PG, Khanna P, Sharma P, Duggal S, Garg N. End-of-life Care Patient Information Leaflets-A Comparative Evaluation of Artificial Intelligence-generated Content for Readability, Sentiment, Accuracy, Completeness, and Suitability: ChatGPT vs Google Gemini. Indian J Crit Care Med 2024;28(6):561-568.
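The abstract above does not name the sentiment tool used to score the leaflets' emotional tone. As a purely illustrative sketch, the snippet below shows how such a positive/negative/neutral rating could be produced with NLTK's VADER analyzer; the leaflet excerpt is invented.

```python
# Hedged sketch of leaflet sentiment scoring; the choice of NLTK/VADER and
# the excerpt text are assumptions for illustration, not the study's method.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)

leaflet_excerpt = (
    "Our team will support you and your family with comfort-focused care, "
    "clear information, and respect for your wishes."
)
scores = SentimentIntensityAnalyzer().polarity_scores(leaflet_excerpt)
print(scores)  # e.g. {'neg': 0.0, 'neu': ..., 'pos': ..., 'compound': ...}
```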

13.
Stud Health Technol Inform; 316: 1079-1083, 2024 Aug 22.
Article in English | MEDLINE | ID: mdl-39176976

ABSTRACT

Laboratory (lab) tests can assist in the diagnosis, treatment, and monitoring of illness and health. Lab results are one of the most commonly accessible types of personal health information, yet they can be difficult for consumers (e.g., patients, laypeople, citizens) to understand. Consequently, many consumers turn to digital educational resources (e.g., websites, mobile applications) to make sense of their tests and results. In this study, we compared the understandability and readability of four different consumer-targeted webpages with information about a commonly ordered blood test, the Complete Blood Count (CBC). The webpages varied in terms of understandability, and only one met the threshold. None of the webpages provided any information about how to respond to lab results. Although all four webpages were quite readable, some were much longer than others. The length of a webpage may affect users' attention, their ability to locate information, and their judgment of what is most important. Future work is warranted to better understand users' information needs and the usability and user experience of these types of websites.


Subjects
Consumer Health Information, Internet, Humans, Comprehension, Health Literacy
14.
Cureus; 16(7): e63865, 2024 Jul.
Article in English | MEDLINE | ID: mdl-39099896

ABSTRACT

BACKGROUND: Artificial intelligence (AI) is a burgeoning new field that has increased in popularity over the past couple of years, coinciding with the public release of large language model (LLM)-driven chatbots. These chatbots, such as ChatGPT, can be engaged directly in conversation, allowing users to ask them questions or issue other commands. Since LLMs are trained on large amounts of text data, they can also answer questions reliably and factually, an ability that has allowed them to serve as a source for medical inquiries. This study seeks to assess the readability of patient education materials on cardiac catheterization across four of the most common chatbots: ChatGPT, Microsoft Copilot, Google Gemini, and Meta AI. METHODOLOGY: A set of 10 questions regarding cardiac catheterization was developed using website-based patient education materials on the topic. We then asked these questions in consecutive order to four of the most common chatbots: ChatGPT, Microsoft Copilot, Google Gemini, and Meta AI. The Flesch Reading Ease Score (FRES) was used to assess the readability score. Readability grade levels were assessed using six tools: Flesch-Kincaid Grade Level (FKGL), Gunning Fog Index (GFI), Coleman-Liau Index (CLI), Simple Measure of Gobbledygook (SMOG) Index, Automated Readability Index (ARI), and FORCAST Grade Level. RESULTS: The mean FRES across all four chatbots was 40.2, while overall mean grade levels for the four chatbots were 11.2, 13.7, 13.7, 13.3, 11.2, and 11.6 across the FKGL, GFI, CLI, SMOG, ARI, and FORCAST indices, respectively. Mean reading grade levels across the six tools were 14.8 for ChatGPT, 12.3 for Microsoft Copilot, 13.1 for Google Gemini, and 9.6 for Meta AI. Further, FRES values for the four chatbots were 31, 35.8, 36.4, and 57.7, respectively. CONCLUSIONS: This study shows that AI chatbots are capable of providing answers to medical questions regarding cardiac catheterization. However, the responses across the four chatbots had overall mean reading grade levels at the 11th-13th-grade level, depending on the tool used. This means that the materials were at the high school and even college reading level, which far exceeds the recommended sixth-grade level for patient education materials. Further, there is significant variability in the readability levels provided by different chatbots as, across all six grade-level assessments, Meta AI had the lowest scores and ChatGPT generally had the highest.
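The study above averages grade levels across several indices for each chatbot. One way to reproduce that kind of comparison is with the open-source textstat package, as in the sketch below; the use of textstat, the sample answer, and the omission of FORCAST are assumptions, since the authors do not state which software they used.

```python
# Hedged sketch: score one chatbot answer with several grade-level indices
# and average them, as the study does across responses. textstat is an
# assumption about tooling; FORCAST is omitted here.
import statistics
import textstat

def grade_levels(text: str) -> dict[str, float]:
    return {
        "FKGL": textstat.flesch_kincaid_grade(text),
        "GFI": textstat.gunning_fog(text),
        "CLI": textstat.coleman_liau_index(text),
        "SMOG": textstat.smog_index(text),
        "ARI": textstat.automated_readability_index(text),
    }

chatbot_answer = (
    "Cardiac catheterization is a procedure in which a thin tube is guided "
    "through a blood vessel to the heart to diagnose or treat certain conditions."
)
levels = grade_levels(chatbot_answer)
print(levels)
print("mean grade level:", round(statistics.mean(levels.values()), 1))
```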

15.
Cureus; 16(7): e63800, 2024 Jul.
Article in English | MEDLINE | ID: mdl-39099997

ABSTRACT

Introduction The internet is increasingly the first port of call for patients introduced to new treatments. Unfortunately, many websites are of poor quality, thereby limiting patients' ability to make informed health decisions. Within thoracic surgery, the treatment options for pneumothoraces may be less intuitive for patients to understand compared to procedures such as lobectomies and wedge resections. Therefore, patients must receive high-quality information to make informed treatment decisions. No study to date has evaluated online information regarding pneumothorax surgery. Knowledge regarding the same may allow physicians to recommend appropriate websites to patients and supplement remaining knowledge gaps. Objective This study aims to evaluate the content, readability, and reliability of online information regarding pneumothorax surgery. Methods A total of 11 search terms including "pneumothorax surgery," "pleurectomy," and "pleurodesis" were each entered into Google, Bing, and Yahoo. The top 20 websites found through each search were screened, yielding 660 websites. Only free websites designed for patient consumption that provided information on pneumothorax surgery were included. This criterion excluded 581 websites, leaving 79 websites to be evaluated. To evaluate website reliability, the Journal of American Medical Association (JAMA) and DISCERN benchmark criteria were applied. To evaluate the readability, 10 standardized tools were utilized including the Flesch-Kincaid Reading Ease Score. To evaluate website content, a novel, self-designed 10-part questionnaire was utilized to assess whether information deemed essential by the authors was included. It evaluated whether websites comprehensively described the surgery process for patients, including pre- and post-operative care. Website authorship and year of publication were also noted. Results The mean JAMA score was 1.69 ± 1.29 out of 4, with only nine websites achieving all four reliability criteria. The median readability score was 13.42 (IQR: 11.48-16.23), which corresponded to a 13th-14th school grade standard. Only four websites were written at a sixth-grade reading level. In the novel content questionnaire, 31.6% of websites (n = 25) did not mention any side effects of pneumothorax surgery. Similarly, 39.2% (n = 31) did not mention alternative treatment options. There was no correlation between the date of website update and JAMA (r = 0.158, p = 0.123), DISCERN (r = 0.098, p = 0.341), or readability (r = 0.053, p = 0.606) scores. Conclusion Most websites were written above the sixth-grade reading level, as recommended by the US Department of Health and Human Services. Furthermore, the exclusion of essential information regarding pneumothorax surgery from websites highlights the current gaps in online information. These findings emphasize the need to create and disseminate comprehensive, reliable websites on pneumothorax surgery that enable patients to make informed health decisions.

16.
Article in English | MEDLINE | ID: mdl-39105460

ABSTRACT

OBJECTIVE: To use an artificial intelligence (AI)-powered large language model (LLM) to improve readability of patient handouts. STUDY DESIGN: Review of online material modified by AI. SETTING: Academic center. METHODS: Five handout materials obtained from the American Rhinologic Society (ARS) and the American Academy of Facial Plastic and Reconstructive Surgery websites were assessed using validated readability metrics. The handouts were inputted into OpenAI's ChatGPT-4 after prompting: "Rewrite the following at a 6th-grade reading level." The understandability and actionability of both native and LLM-revised versions were evaluated using the Patient Education Materials Assessment Tool (PEMAT). Results were compared using Wilcoxon rank-sum tests. RESULTS: The mean readability scores of the standard (ARS, American Academy of Facial Plastic and Reconstructive Surgery) materials corresponded to "difficult," with reading categories ranging between high school and university grade levels. Conversely, the LLM-revised handouts had an average seventh-grade reading level. LLM-revised handouts had better readability in nearly all metrics tested: Flesch-Kincaid Reading Ease (70.8 vs 43.9; P < .05), Gunning Fog Score (10.2 vs 14.42; P < .05), Simple Measure of Gobbledygook (9.9 vs 13.1; P < .05), Coleman-Liau (8.8 vs 12.6; P < .05), and Automated Readability Index (8.2 vs 10.7; P = .06). PEMAT scores were significantly higher in the LLM-revised handouts for understandability (91 vs 74%; P < .05) with similar actionability (42 vs 34%; P = .15) when compared to the standard materials. CONCLUSION: Patient-facing handouts can be augmented by ChatGPT with simple prompting to tailor information with improved readability. This study demonstrates the utility of LLMs to aid in rewriting patient handouts and may serve as a tool to help optimize education materials. LEVEL OF EVIDENCE: Level VI.
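The study above describes revising handouts by prompting ChatGPT-4 with "Rewrite the following at a 6th-grade reading level." A minimal sketch of that kind of prompting through the OpenAI Python client follows; the model name and handout text are placeholders, not the study's setup (the authors used the ChatGPT interface, not necessarily the API).

```python
# Hedged sketch of the rewrite prompt described above, using the OpenAI
# Python client. Model name and handout text are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

handout_text = "Functional endoscopic sinus surgery is performed to ..."

response = client.chat.completions.create(
    model="gpt-4o",  # assumption; the study used ChatGPT-4 via its web interface
    messages=[
        {"role": "user",
         "content": f"Rewrite the following at a 6th-grade reading level.\n\n{handout_text}"},
    ],
)
print(response.choices[0].message.content)
```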

17.
J Surg Res; 302: 200-207, 2024 Aug 03.
Article in English | MEDLINE | ID: mdl-39098118

ABSTRACT

INTRODUCTION: Presenting health information at a sixth-grade reading level is advised to accommodate the general public's abilities. Breast cancer (BC) is the second-most common malignancy in women, but the readability of online BC information in English and Spanish, the two most commonly spoken languages in the United States, is uncertain. METHODS: Three search engines were queried using: "how to do a breast examination," "when do I need a mammogram," and "what are the treatment options for breast cancer" in English and Spanish. Sixty websites in each language were studied and classified by source type and origin. Three readability frameworks in each language were applied: Flesch Kincaid Reading Ease, Flesch Kincaid Grade Level, and Simple Measure of Gobbledygook (SMOG) for English, and Fernández-Huerta, Spaulding, and Spanish adaptation of SMOG for Spanish. Median readability scores were calculated, and corresponding grade level determined. The percentage of websites requiring reading abilities >sixth grade level was calculated. RESULTS: English-language websites were predominantly hospital-affiliated (43.3%), while Spanish websites predominantly originated from foundation/advocacy sources (43.3%). Reading difficulty varied across languages: English websites ranged from 5th-12th grade (Flesch Kincaid Grade Level/Flesch Kincaid Reading Ease: 78.3%/98.3% above sixth grade), while Spanish websites spanned 4th-10th grade (Spaulding/Fernández-Huerta: 95%/100% above sixth grade). SMOG/Spanish adaptation of SMOG scores showed lower reading difficulty for Spanish, with few websites exceeding sixth grade (1.7% and 0% for English and Spanish, respectively). CONCLUSIONS: Online BC resources have reading difficulty levels that exceed the recommended sixth grade, although these results vary depending on readability framework. Efforts should be made to establish readability standards that can be translated into Spanish to enhance accessibility for this patient population.

18.
J Vitreoretin Dis; 8(4): 421-427, 2024.
Article in English | MEDLINE | ID: mdl-39148568

ABSTRACT

Purpose: To evaluate the readability, accountability, accessibility, and source of online patient education materials for treatment of age-related macular degeneration (AMD) and to quantify public interest in Syfovre and geographic atrophy after US Food and Drug Administration (FDA) approval. Methods: Websites were classified into 4 categories by information source. Readability was assessed using 5 validated readability indices. Accountability was assessed using 4 benchmarks of the Journal of the American Medical Association (JAMA). Accessibility was evaluated using 3 established criteria. The Google Trends tool was used to evaluate temporal trends in public interest in "Syfovre" and "geographic atrophy" in the months after FDA approval. Results: Of 100 websites analyzed, 22% were written below the recommended sixth-grade reading level. The mean (±SD) grade level of analyzed articles was 9.76 ± 3.35. Websites averaged 1.40 ± 1.39 (of 4) JAMA accountability metrics. The majority of articles (67%) were from private practice/independent organizations. A significant increase in the public interest in the terms "Syfovre" and "geographic atrophy" after FDA approval was found with the Google Trends tool (P < .001). Conclusions: Patient education materials related to AMD treatment are often written at inappropriate reading levels and lack established accountability and accessibility metrics. Articles from national organizations ranked highest on accessibility metrics but were less visible on a Google search, suggesting the need for visibility-enhancing measures. Patient education materials related to the term "Syfovre" had the highest average reading level and low accountability, suggesting the need to modify resources to best address the needs of an increasingly curious public.

19.
Cureus; 16(7): e64616, 2024 Jul.
Article in English | MEDLINE | ID: mdl-39149636

ABSTRACT

Background The internet has become an increasingly popular tool for patients to find information pertaining to medical procedures. Although the information is easily accessible, data show that many online educational materials pertaining to surgical subspecialties are written far above the average reading level in the United States. The aim of this study was to evaluate English- and Spanish-language online materials for the deep inferior epigastric perforator (DIEP) flap reconstruction procedure. Methods The first eight institutional or organizational websites that provided information on the DIEP procedure in English and Spanish were included. Each website was evaluated using the Patient Education Materials Assessment Tool (PEMAT), the Cultural Sensitivity Assessment Tool (CSAT), and either the Simple Measure of Gobbledygook (SMOG) for English websites or the Spanish Orthographic Length (SOL) for Spanish websites. Results The English websites had a statistically lower CSAT score compared to the Spanish websites (p=0.006). However, Spanish websites had a statistically higher percentage of complex words compared to English sources (p<0.001). An analysis of reading grade levels through SMOG and SOL scores revealed that Spanish websites had statistically lower scores (p<0.001). There were no statistically significant differences in the understandability or actionability scores between the English and Spanish websites. Conclusions Online educational materials on the DIEP flap reconstruction procedure should be readable, understandable, actionable, and culturally sensitive. Our analysis revealed that improvements can be made in the understandability and actionability of these websites. Plastic surgeons should be aware of what constitutes a high-quality online educational resource and which online educational materials their patients will have access to.

20.
Cureus; 16(7): e64880, 2024 Jul.
Article in English | MEDLINE | ID: mdl-39156464

ABSTRACT

BACKGROUND: Osteoporosis is a prevalent metabolic bone disease in the Middle East. Middle Easterners rely on the Internet as a source of information about osteoporosis and its treatment. Adequate awareness can help to prevent osteoporosis and its complications. Websites covering osteoporosis in Arabic must be of good quality and readability to be beneficial for people in the Middle East. METHODS: Two Arabic terms for osteoporosis were searched on Google.com (Google Inc., Mountain View, CA), and the first 100 results for each term were examined for eligibility. Two independent raters evaluated the websites using the DISCERN and the Journal of the American Medical Association (JAMA) criteria for quality and reliability. The Flesch Kincaid grade level (FKGL), Simple Measure of Gobbledygook (SMOG), and Flesch Reading Ease (FRE) scale were used to evaluate the readability of each website's content. RESULTS: Twenty-five websites were included and evaluated in our study. The average DISCERN score was 28.36±12.18 out of a possible 80 points. The average JAMA score was 1.05±1.15 out of a possible 4 points. The readability scores of all websites were, on average, 50.71±21.96 on the FRE scale, 9.25±4.89 on the FKGL, and 9.74±2.94 on the SMOG. There was a significant difference (p = 0.026 and p = 0.044, respectively) in the DISCERN and JAMA scores between the websites on the first Google page and the websites seen on later pages. CONCLUSION: The study found Arabic websites covering osteoporosis to be of low quality and difficult readability. Because these websites are a major source of patient education, improving their quality and readability is a must. The use of simpler language is needed, as is covering more aspects of the disease, such as prevention.
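For context on the quality scores used in this and earlier entries: DISCERN has 16 items rated 1-5 (maximum 80), and the JAMA benchmarks count how many of four criteria (authorship, attribution, disclosure, currency) a website satisfies. The sketch below illustrates those scoring conventions; the example ratings are invented.

```python
# Hedged sketch of DISCERN and JAMA benchmark scoring conventions.
# Example ratings are hypothetical, not data from the cited study.
def discern_total(item_ratings: list[int]) -> int:
    assert len(item_ratings) == 16 and all(1 <= r <= 5 for r in item_ratings)
    return sum(item_ratings)

def jama_score(authorship: bool, attribution: bool,
               disclosure: bool, currency: bool) -> int:
    return sum([authorship, attribution, disclosure, currency])

ratings = [2, 1, 2, 3, 1, 2, 2, 1, 2, 2, 1, 2, 2, 1, 2, 2]  # hypothetical website
print(discern_total(ratings))                 # 28, in the range reported above
print(jama_score(True, False, False, False))  # 1
```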
