Results 1 - 20 of 763
1.
Nano Lett ; 2024 Oct 04.
Article in English | MEDLINE | ID: mdl-39364886

ABSTRACT

Multiplexed optical techniques with multichannel patterns provide powerful strategies for high-capacity anti-counterfeiting. However, it remains a major challenge to achieve high encryption levels, excellent readability, and simple preparation simultaneously. Herein, we use a multistep imprinting technique, leveraging surface work-hardening, to mass-produce multiplexed encrypted patterns with hierarchical structures. These patterns, with coupled nano- and microstructures, can be instantaneously decoded into different pieces of information at different viewing angles under white-light illumination. By incorporating perpendicular nano- and microgratings, we achieve four-channel encoded patterns, enhancing anti-counterfeiting capacity. This versatile method works on various metal/polymer materials, offering high-density information storage, direct visibility, broad material compatibility, and low-cost mass production. Our high-performance anti-counterfeiting patterns show significant potential for real-world applications.

2.
J Surg Res ; 299: 103-111, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38749313

ABSTRACT

INTRODUCTION: The quality and readability of online health information are sometimes suboptimal, limiting its usefulness to patients. Manual evaluation of online medical information is time-consuming and error-prone. This study automates content analysis and readability improvement of private-practice plastic surgery webpages using ChatGPT. METHODS: The first 70 Google search results of "breast implant size factors" and "breast implant size decision" were screened. ChatGPT 3.5 and 4.0 were utilized with two prompts (one general, one specific) to automate content analysis and rewrite webpages with improved readability. ChatGPT content analysis outputs were classified as hallucination (false positive), accurate (true positive or true negative), or omission (false negative) using human-rated scores as a benchmark. Six readability metric scores of original and revised webpage texts were compared. RESULTS: Seventy-five webpages were included. Significant improvements from baseline were achieved in all six readability metric scores using the specific-instruction prompt with ChatGPT 3.5 (all P ≤ 0.05). No further improvements in readability scores were achieved with ChatGPT 4.0. Rates of hallucination, accuracy, and omission in ChatGPT content scoring varied widely between decision-making factors. Compared to ChatGPT 3.5, average accuracy rates increased and omission rates decreased with ChatGPT 4.0 content analysis output. CONCLUSIONS: ChatGPT offers an innovative approach to enhancing the quality of online medical information and expanding the capabilities of plastic surgery research and practice. Automation of content analysis is limited by ChatGPT 3.5's high omission rates and ChatGPT 4.0's high hallucination rates. Our results also underscore the importance of iterative prompt design to optimize ChatGPT performance in research tasks.


Subject(s)
Comprehension, Plastic Surgery, Humans, Plastic Surgery/standards, Internet, Consumer Health Information/standards
3.
J Surg Res ; 296: 711-719, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38367522

ABSTRACT

INTRODUCTION: This study evaluated the readability of surgical clinical trial consent forms and compared readability across surgical specialties. METHODS: We conducted a cross-sectional analysis of surgical clinical trial consent forms available on ClinicalTrials.gov to quantitatively evaluate readability, word count, and length variations among different specialties. The analysis was performed between November 2022 and January 2023. A total of 386 surgical clinical trial consent forms across 14 surgical specialties were included. RESULTS: The main outcomes were language complexity (measured using Flesch-Kincaid Grade Level), number of words (measured as word count), time to read (at a reading speed of 240 words per min), and readability (measured by Flesch Reading Ease Score, Gunning Fog Index, Simple Measure of Gobbledygook Index, FORCAST, and Automated Readability Index). The surgical consent forms were a mean (standard deviation) of 2626 (1668) words long, with a mean of 12:53 min to read at 240 words per min. None of the surgical specialties had an average readability level of sixth grade or lower across all six indices, and only 16 out of 386 (4%) clinical trials met the recommended reading level. Furthermore, there was no significant difference in reading grade level between surgical specialties based on the Flesch-Kincaid Grade Level and Flesch Reading Ease indices. CONCLUSIONS: Our findings suggest that current surgical clinical trial consent documents are too long and complex, exceeding the recommended sixth-grade reading level. Ensuring readable clinical trial consent forms is not only ethically responsible but also crucial for protecting patients' rights and well-being by facilitating informed decision-making.
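The time-to-read outcome above is a simple word-count conversion. A minimal sketch of that calculation (the function name and m:ss formatting are illustrative, not taken from the study):

```python
def reading_time(word_count: int, wpm: int = 240) -> str:
    """Convert a word count to an m:ss reading-time string at a given words-per-minute rate."""
    total_seconds = round(word_count * 60 / wpm)
    minutes, seconds = divmod(total_seconds, 60)
    return f"{minutes}:{seconds:02d}"
```

For example, a 600-word consent form at 240 words/min yields "2:30".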


Subject(s)
Consent Forms, Surgical Specialties, Humans, Comprehension, Cross-Sectional Studies, Informed Consent, Internet
4.
J Surg Res ; 303: 89-94, 2024 Sep 19.
Article in English | MEDLINE | ID: mdl-39303650

ABSTRACT

INTRODUCTION: Online patient educational materials (OPEMs) help patients engage in their health care. The American Medical Association (AMA) recommends OPEMs be written at or below the 6th-grade reading level. This study assessed the readability of deep venous thrombosis OPEMs in English and Spanish. METHODS: Google searches were conducted in English and Spanish using "deep venous thrombosis" and "trombosis venosa profunda," respectively. The top 25 patient-facing results were recorded for each and categorized by source type (hospital, professional society, other). Readability of English OPEMs was measured using several scales, including the Flesch Reading Ease formula and the Flesch-Kincaid Grade Level. Readability of Spanish OPEMs was measured using the Fernández-Huerta Index and INFLESZ Scale. Readability was compared to the AMA recommendation, between languages, and across source types. RESULTS: Only one (4%) Spanish OPEM was written at an easy reading level, compared to seven (28%) English OPEMs, a significant difference in reading difficulty between languages (P = 0.04). The average readability scores for English and Spanish OPEMs across all scales were significantly greater than the recommended level (P < 0.01). Only four articles (8%) in total met the AMA recommendation, with no significant difference between English and Spanish OPEMs (P = 0.61). CONCLUSIONS: Nearly all of the English and Spanish deep venous thrombosis OPEMs analyzed were above the recommended reading level. English resources had overall easier readability than Spanish ones, which may represent a barrier to care. To limit health disparities, information should be presented at accessible reading levels.

5.
J Surg Res ; 293: 727-732, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37862852

ABSTRACT

INTRODUCTION: Appropriate education and information are the keystones of patient autonomy. Surgical societies support this goal through online informational publications. Despite these recommendations, many of these sources are not written at the appropriate reading level for the average patient. Multiple national organizations, including the American Medical Association (AMA) and National Institutes of Health (NIH), have recommended that such materials be written at or below a 6th-grade level. We therefore aimed to evaluate the readability of patient information publications provided by the American Society for Metabolic and Bariatric Surgery (ASMBS). METHODS: Patient information publications were collected from the ASMBS webpage (https://asmbs.org/patients) and evaluated for readability. Microsoft Office was used to calculate Flesch Reading Ease (FRE) and Flesch-Kincaid Grade Level (FKGL) scores. FRE is a 0-100 score, with higher scores equating to easier reading (≥80 = 6th-grade reading level). FKGL rates text on a US grade-school level. Qualitative and univariate analyses were performed. RESULTS: Eleven patient information publications were evaluated. None achieved an FRE score of 80 or an FKGL at the 6th-grade reading level. The average FRE score was 35.8 (range 14.9-53.6). The average FKGL score was 13.1 (range 10.1-17.5). The publication with the highest FRE and lowest FKGL (best readability) was that on the benefits of weight loss. The brochure with the lowest FRE and highest FKGL (worst readability) was that on medical tourism. CONCLUSIONS: Although the ASMBS patient information publications are a trusted source of patient literature, none of the 11 publications met the recommended criteria for patient readability. Further refinement will be needed to provide the appropriate reading level for the average patient.
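The two Flesch metrics reported here are fixed linear formulas over word, sentence, and syllable counts. A minimal sketch using the standard published coefficients (syllable counting itself, the hard part in practice, is left to the caller):

```python
def flesch_reading_ease(words: int, sentences: int, syllables: int) -> float:
    # FRE: 0-100 scale, higher = easier; >= 80 roughly corresponds to a 6th-grade level.
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

def flesch_kincaid_grade(words: int, sentences: int, syllables: int) -> float:
    # FKGL: maps the same three counts onto a US school grade level.
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
```

For example, a 100-word sample with 5 sentences and 130 syllables scores in the mid-70s on FRE (plain English, roughly a 7th-grade FKGL).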


Subject(s)
Comprehension, Health Literacy, Humans, United States, Educational Status, Internet
6.
J Surg Res ; 302: 200-207, 2024 Aug 03.
Article in English | MEDLINE | ID: mdl-39098118

ABSTRACT

INTRODUCTION: Presenting health information at a sixth-grade reading level is advised to accommodate the general public's abilities. Breast cancer (BC) is the second-most common malignancy in women, but the readability of online BC information in English and Spanish, the two most commonly spoken languages in the United States, is uncertain. METHODS: Three search engines were queried using "how to do a breast examination," "when do I need a mammogram," and "what are the treatment options for breast cancer" in English and Spanish. Sixty websites in each language were studied and classified by source type and origin. Three readability frameworks per language were applied: Flesch Reading Ease, Flesch-Kincaid Grade Level, and Simple Measure of Gobbledygook (SMOG) for English, and Fernández-Huerta, Spaulding, and the Spanish adaptation of SMOG for Spanish. Median readability scores were calculated, and the corresponding grade levels determined. The percentage of websites requiring reading abilities above the sixth-grade level was calculated. RESULTS: English-language websites were predominantly hospital-affiliated (43.3%), while Spanish websites predominantly originated from foundation/advocacy sources (43.3%). Reading difficulty varied across languages: English websites ranged from 5th to 12th grade (Flesch-Kincaid Grade Level/Flesch Reading Ease: 78.3%/98.3% above sixth grade), while Spanish websites spanned 4th to 10th grade (Spaulding/Fernández-Huerta: 95%/100% above sixth grade). SMOG and its Spanish adaptation showed lower reading difficulty, with few websites exceeding sixth grade (1.7% and 0% for English and Spanish, respectively). CONCLUSIONS: Online BC resources have reading difficulty levels that exceed the recommended sixth grade, although results vary by readability framework. Efforts should be made to establish readability standards that can be translated into Spanish to enhance accessibility for this patient population.

7.
J Surg Res ; 301: 540-546, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39047386

ABSTRACT

INTRODUCTION: Parathyroidectomy is recommended for severe secondary hyperparathyroidism (SHPT) due to end-stage kidney disease (ESKD), but surgery is underutilized. High-quality and accessible online health information, recommended to be at a 6th-grade reading level, is vital to improving patient health literacy. This study evaluated available online resources for SHPT from ESKD based on information quality and readability. METHODS: Three search engines were queried using the terms "parathyroidectomy for secondary hyperparathyroidism," "parathyroidectomy kidney/renal failure," "parathyroidectomy dialysis patients," "should I have surgery for hyperparathyroidism due to kidney failure?," and "do I need surgery for hyperparathyroidism due to kidney failure if I do not have symptoms?" Websites were categorized by source and origin. Two independent reviewers determined information quality using the JAMA (0-4) and DISCERN (1-5) frameworks, and scores were averaged. Cohen's kappa was used to evaluate inter-rater reliability. Readability was determined using the Flesch Reading Ease, Flesch-Kincaid Grade Level, and Simple Measure of Gobbledygook tools. Median readability scores were calculated, and the corresponding grade levels determined. The percentage of websites with reading difficulty above the 6th-grade level was calculated. RESULTS: Thirty-one (86.1%) websites originated from the U.S., with most from hospital-associated (63.9%) and foundation/advocacy (30.6%) sources. The mean JAMA and DISCERN scores for all websites were 1.3 ± 1.4 and 2.6 ± 0.7, respectively. Readability scores ranged from grade level 5 to college level, and most websites scored above the recommended 6th-grade level. CONCLUSIONS: Patient-oriented websites addressing SHPT from ESKD are written at a reading level higher than recommended, and the quality of the information is low. Efforts must be made to improve the accessibility and quality of information for all patients.


Subject(s)
Comprehension, Health Literacy, Secondary Hyperparathyroidism, Chronic Kidney Failure, Humans, Health Literacy/statistics & numerical data, Chronic Kidney Failure/therapy, Chronic Kidney Failure/complications, Secondary Hyperparathyroidism/etiology, Secondary Hyperparathyroidism/surgery, Internet, Parathyroidectomy, Patient Education as Topic, Consumer Health Information/standards
8.
Headache ; 64(4): 410-423, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38525832

ABSTRACT

OBJECTIVE: To assess the readability and the comprehensiveness of patient-reported outcome measures (PROMs) utilized in primary headache disorders literature. BACKGROUND: As the health-care landscape has evolved toward a patient-centric model, numerous PROMs have been developed to capture treatment outcomes in patients with headache disorders. For these PROMs to advance our understanding of headache disorders and their treatment impact, they must be easy to understand (i.e., reading grade level 6 or less) and comprehensively capture what matters to patients with headache. The aim of this study was to (a) assess the readability of PROMs utilized in headache disorders literature, and (b) assess the comprehensiveness of PROMs by mapping their content to a health-related quality of life framework. METHODS: In this scoping review, recently published systematic reviews were used to identify PROMs used in primary headache disorders literature. Readability analysis was performed at the level of individual items and full PROM using established readability metrics. The content of the PROMs was mapped against a health-related quality-of-life framework by two independent reviewers. RESULTS: In total, 22 PROMs (15 headache disorders related, 7 generic) were included. The median reading grade level varied between 7.1 (interquartile range [IQR] 6.3-7.8) and 12.7 (IQR 11.8-13.2). None of the PROMs were below the recommended reading grade level for patient-facing material (grade 6). Three PROMs, the Migraine-Treatment Assessment Questionnaire, the Eurolight, and the European Quality of Life 5 Dimensions 3 Level Version, were between reading grade levels 7 and 8; the remaining 19 PROMs were above reading grade level 8. In total, the PROMs included 425 items. Most items (n = 134, 32%) assessed physical function (e.g., work, activities of daily living). 
The remaining items assessed physical symptoms (n = 127, 30%; e.g., pain, nausea), treatment effects on symptoms (n = 65, 15%; e.g., accompanying symptoms relief, headache relief), treatment impact (n = 56, 13%; e.g., function, side effects), psychological well-being (n = 41, 10%; e.g., anger, frustration), social well-being (n = 29, 7%; e.g., missing out on social activities, relationships), psychological impact (n = 14, 3%; e.g., feeling [not] in control, feeling like a burden), and sexual well-being (n = 3, 1%; e.g., sexual activity, sexual interest). Some of the items pertained to treatment (n = 27, 6%), of which most were about treatment type and use (n = 12, 3%; e.g., medication, botulinum toxin), treatment access (n = 10, 2%; e.g., health-care utilization, cost of medication), and treatment experience (n = 9, 2%; e.g., treatment satisfaction, confidence in treatment). CONCLUSION: The PROMs used in studies of headache disorders may be challenging for some patients to understand, leading to inaccurate or missing data. Furthermore, no available PROM comprehensively measures the health-related quality-of-life impact of headache disorders or their treatment, resulting in a limited understanding of patient-reported outcomes. The development of an easy-to-understand, comprehensive, and validated headache disorders-specific PROM is warranted.


Subject(s)
Comprehension, Headache Disorders, Patient Reported Outcome Measures, Quality of Life, Humans, Headache Disorders/therapy, Headache Disorders/diagnosis
9.
Surg Endosc ; 38(9): 5259-5265, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39009725

ABSTRACT

INTRODUCTION: Health literacy is the ability of individuals to use basic health information and services to make well-informed decisions. Low health literacy among surgical patients has been associated with nonadherence to preoperative and/or discharge instructions as well as poor comprehension of surgery. It likely poses a barrier to patients considering foregut surgery, which requires an understanding of different treatment options and specific diet instructions. The objective of this study was to assess and compare the readability of online patient education materials (PEMs) for foregut surgery. METHODS: Using Google, the terms "anti-reflux surgery," "GERD surgery," and "foregut surgery" were searched, and a total of 30 webpages from universities and national organizations were selected. The readability of the text was assessed with seven instruments: Flesch Reading Ease formula (FRE), Gunning Fog (GF), Flesch-Kincaid Grade Level (FKGL), Coleman-Liau Index (CL), Simple Measure of Gobbledygook (SMOG), Automated Readability Index (ARI), and Linsear Write Formula (LWF). Mean readability scores were calculated with standard deviations. We performed a qualitative analysis gathering characteristics such as type of information (preoperative or postoperative), organization, use of multimedia, and inclusion of a version in another language. RESULTS: The overall average readability of the top PEMs for foregut surgery was 12th grade. Only one resource was at the recommended sixth-grade reading level. Nearly half of the PEMs included some form of multimedia. CONCLUSIONS: The American Medical Association and National Institutes of Health have recommended that PEMs be written at the 5th-6th grade level. The majority of online PEMs for foregut surgery are above the recommended reading level. This may be a barrier for patients seeking foregut surgery.
Surgeons should be aware of potential gaps in their patients' understanding to help them make informed decisions and improve overall health outcomes.
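Of the seven instruments listed, SMOG has the simplest closed form: it needs only the number of polysyllabic (3+ syllable) words and the sentence count. A sketch of McLaughlin's standard formula:

```python
import math

def smog_grade(polysyllables: int, sentences: int) -> float:
    # SMOG grade = 1.0430 * sqrt(polysyllable count normalized to 30 sentences) + 3.1291
    return 1.0430 * math.sqrt(polysyllables * (30 / sentences)) + 3.1291
```

For example, a 30-sentence sample containing 30 polysyllabic words yields a grade of about 8.8, well above the recommended sixth-grade level.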


Subject(s)
Comprehension, Health Literacy, Internet, Patient Education as Topic, Humans, Patient Education as Topic/methods
10.
Surg Endosc ; 38(5): 2522-2532, 2024 May.
Article in English | MEDLINE | ID: mdl-38472531

ABSTRACT

BACKGROUND: The readability of online bariatric surgery patient education materials (PEMs) often surpasses the recommended 6th-grade level. Large language models (LLMs), like ChatGPT and Bard, have the potential to revolutionize PEM delivery. We aimed to evaluate the readability of PEMs produced by U.S. medical institutions compared to LLMs, as well as the ability of LLMs to simplify their responses. METHODS: Responses to frequently asked questions (FAQs) related to bariatric surgery were gathered from top-ranked health institutions. FAQ responses were also generated from GPT-3.5, GPT-4, and Bard. LLMs were then prompted to improve the readability of their initial responses. The readability of institutional responses, initial LLM responses, and simplified LLM responses was graded using validated readability formulas. Accuracy and comprehensiveness of initial and simplified LLM responses were also compared. RESULTS: Responses to 66 FAQs were included. All institutional and initial LLM responses had poor readability, with average reading levels ranging from 9th grade to college graduate. Simplified LLM responses had significantly improved readability, with reading levels ranging from 6th grade to college freshman. Among the simplified responses, GPT-4's demonstrated the highest readability, with reading levels ranging from 6th to 9th grade. Accuracy was similar between initial and simplified responses from all LLMs. Comprehensiveness was similar between initial and simplified responses from GPT-3.5 and GPT-4. However, 34.8% of Bard's simplified responses were graded as less comprehensive than its initial responses. CONCLUSION: Our study highlights the efficacy of LLMs in enhancing the readability of bariatric surgery PEMs. GPT-4 outperformed the other models, generating simplified PEMs at 6th- to 9th-grade reading levels. Unlike those of GPT-3.5 and GPT-4, Bard's simplified responses were graded as less comprehensive.
We advocate for future studies examining the potential role of LLMs as dynamic and personalized sources of PEMs for diverse patient populations across all literacy levels.


Subject(s)
Bariatric Surgery, Comprehension, Patient Education as Topic, Humans, Patient Education as Topic/methods, Internet, Health Literacy, Language, United States
11.
Qual Life Res ; 33(5): 1267-1274, 2024 May.
Article in English | MEDLINE | ID: mdl-38441716

ABSTRACT

PURPOSE: In this study, we evaluated the readability and understandability of nine French-language Patient-Reported Outcome Measures (PROMs) currently used in a contemporary longitudinal cohort of breast cancer survivors, as part of an effort to improve equity in cancer care and research. METHODS: Readability of the PROMs was assessed using the Flesch Reading Ease Score (FRES), the Gunning Fog Index (FOG), and the Fry graph. Readability was considered ideal if the mean score was at or below the 6th-grade level and acceptable if between the 6th- and 8th-grade levels. Understandability was evaluated using the Patient Education Materials Assessment Tool (PEMAT) and defined as ideal if the PEMAT score was ≥ 80%. The Evaluative Linguistic Framework for Questionnaires (ELF-Q) provided additional qualitative elements for assessing understandability. Plain-language best practice was met if both readability and understandability were ideal. RESULTS: None of the nine PROMs evaluated had ideal readability scores, and only one had an acceptable score. Understandability ranged from 55% to 91%, and only three PROMs had ideal scores. The ELF-Q identified points for improvement in several understandability dimensions of the PROMs. None of the instruments met the definition of plain-language best practice. CONCLUSION: None of the studied PROMs met the standards of readability and understandability. Future development and translation of PROMs should follow comprehensive linguistic and cultural frameworks to ensure plain-language standards and enhance equitable patient-centered care and research.
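The Gunning Fog index used alongside FRES above combines average sentence length with the share of "complex" (3+ syllable) words. A minimal sketch of the standard English-language formula (the complex-word classification is left to the caller, and adaptations for other languages such as French vary):

```python
def gunning_fog(words: int, sentences: int, complex_words: int) -> float:
    # FOG grade = 0.4 * (average sentence length + percentage of complex words)
    return 0.4 * ((words / sentences) + 100 * (complex_words / words))
```

For example, a 100-word passage with 5 sentences and 10 complex words scores a FOG grade of 12, far above a sixth-grade target.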


Subject(s)
Comprehension, Patient Reported Outcome Measures, Humans, Female, Surveys and Questionnaires, Breast Neoplasms/psychology, Cohort Studies, Cancer Survivors/psychology, Middle Aged, Longitudinal Studies, Health Literacy, Survivorship, Quality of Life
12.
Graefes Arch Clin Exp Ophthalmol ; 262(9): 3047-3052, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38639789

ABSTRACT

PURPOSE: This study investigated whether websites regarding diabetic retinopathy are readable for patients and adequately designed to be found by search engines. METHODS: The term "diabetic retinopathy" was queried in the Google search engine. Patient-oriented websites from the first 10 pages were categorized by search result page number and website organization type. Metrics of search engine optimization (SEO) and readability were then calculated. RESULTS: Among the 71 sites meeting inclusion criteria, informational and organizational sites were best optimized for search engines, and informational sites were the most visited. Better optimization as measured by authority score was correlated with lower Flesch-Kincaid Grade Level (r = 0.267, P = 0.024). There was a significant increase in Flesch-Kincaid Grade Level with successive search result pages (r = 0.275, P = 0.020). Only 2 sites met the AMA-recommended 6th-grade reading level by Flesch-Kincaid Grade Level; the average reading level was 10.5. There was no significant difference in readability between website categories. CONCLUSION: While the readability of diabetic retinopathy patient information was poor, better readability correlated with better SEO metrics. While we cannot assess causality, we recommend websites improve their readability, which may increase uptake of their resources.


Subject(s)
Comprehension, Diabetic Retinopathy, Internet, Search Engine, Humans, Diabetic Retinopathy/diagnosis, Patient Education as Topic, Consumer Health Information/standards, Health Literacy
13.
Photodermatol Photoimmunol Photomed ; 40(2): e12958, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38489300

ABSTRACT

BACKGROUND/PURPOSE: Vitiligo is a depigmenting disorder that affects up to 2% of the population. Due to the relatively high prevalence of this disease and its psychological impact on patients, decisions concerning treatment can be difficult. As patients increasingly seek health information online, the caliber of online health information (OHI) becomes crucial in patients' decisions regarding their care. We aimed to assess the quality and readability of OHI regarding phototherapy in the management of vitiligo. METHODS: Similar to previously published studies assessing OHI, we used 5 medical search terms as a proxy for online searches made by patients. Results for each search term were assessed using an enhanced DISCERN analysis, Health On the Net code of conduct (HONcode) accreditation guidelines, and several readability indices. The DISCERN analysis is a validated questionnaire used to assess the quality of OHI, while HONcode accreditation is a marker of site reliability. RESULTS: Of the 500 websites evaluated, 174 (35%) were HONcode-accredited. Mean DISCERN scores for all websites were 58.9% and 51.7% for the website reliability and treatment sections, respectively. Additionally, none of the 130 websites analyzed for readability scored at the NIH-recommended sixth-grade reading level. CONCLUSION: These analyses shed light on the shortcomings of OHI regarding phototherapy treatment for vitiligo, which could exacerbate disparities for patients who are already at higher risk of worse health outcomes.


Subject(s)
Consumer Health Information, Vitiligo, Humans, Comprehension, Vitiligo/therapy, Reproducibility of Results, Phototherapy, Internet
14.
Lung ; 202(5): 741-751, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39060416

ABSTRACT

OBJECTIVES: The readability of patient-facing information on the oral antibiotics in the WHO all-oral short (6-month, 9-month) treatment regimens has not been described to date. The aim of this study was therefore to examine (i) how readable patient-facing TB antibiotic information is compared to readability reference standards and (ii) whether readability differs between high-incidence and low-incidence countries. METHODS: Ten antibiotics were investigated: bedaquiline, clofazimine, ethambutol, ethionamide, isoniazid, levofloxacin, linezolid, moxifloxacin, pretomanid, and pyrazinamide. TB antibiotic information sources were examined, consisting of 85 Patient Information Leaflets (PILs) and 40 antibiotic web resources. Of these 85 PILs, 72 were taken from the National Medicines Regulators of six countries (3 TB high-incidence [Rwanda, Malaysia, South Africa] + 3 TB low-incidence [UK, Ireland, Malta] countries). Readability data were grouped into three categories: (i) high TB-incidence countries (n = 33 information sources), (ii) low TB-incidence countries (n = 39 information sources), and (iii) web information (n = 53). Readability was calculated using Readable software to obtain four readability scores [(i) Flesch Reading Ease (FRE), (ii) Flesch-Kincaid Grade Level (FKGL), (iii) Gunning Fog Index, and (iv) SMOG Index], as well as two text metrics [words/sentence, syllables/word]. RESULTS: Mean readability scores of patient-facing TB antibiotic information were 47.4 ± 12.6 (sd) for FRE (target ≥ 60) and 9.2 ± 2.0 for FKGL (target ≤ 8.0). There was no significant difference in readability between low-incidence countries and web resources, but readability was significantly poorer for PILs from high-incidence countries versus low-incidence countries (FRE: p = 0.0056; FKGL: p = 0.0095). CONCLUSIONS: Readability of TB antibiotic PILs is poor.
Improving the readability of PILs should be an important objective when preparing patient-facing written materials, thereby improving patient health/treatment literacy.


Subject(s)
Antitubercular Agents, Comprehension, Patient Education as Topic, Multidrug-Resistant Tuberculosis, Humans, South Africa, Administration, Oral, Multidrug-Resistant Tuberculosis/drug therapy, Patient Education as Topic/standards, Antitubercular Agents/administration & dosage, Antitubercular Agents/therapeutic use, World Health Organization, Ireland, Malaysia, Incidence, Pamphlets, Anti-Bacterial Agents/administration & dosage, Anti-Bacterial Agents/therapeutic use, Health Literacy
15.
Can J Anaesth ; 71(8): 1092-1102, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38773007

ABSTRACT

PURPOSE: Guidelines recommend that health-related information for patients should be written at or below the sixth-grade level. We sought to evaluate the readability level and quality of online patient education materials regarding epidural and spinal anesthesia. METHODS: We evaluated webpages with content written specifically about either spinal or epidural anesthesia, identified using 11 relevant search terms, with seven commonly used readability formulas: Flesch-Kincaid Grade Level (FKGL), Gunning Fog Index (GFI), Coleman-Liau Index (CLI), Automated Readability Index (ARI), Simple Measure of Gobbledygook (SMOG), Flesch Reading Ease (FRE), and New Dale-Chall (NDC). Two evaluators assessed the quality of the reading materials using the Brief DISCERN tool. RESULTS: We analyzed 261 webpages. The mean (standard deviation) readability scores were: FKGL = 8.8 (1.9), GFI = 11.2 (2.2), CLI = 10.3 (1.9), ARI = 8.1 (2.2), SMOG = 11.6 (1.6), FRE = 55.7 (10.8), and NDC = 5.4 (1.0). The mean grade level was higher than the recommended sixth-grade level when calculated with six of the seven readability formulas. The average Brief DISCERN score was 16.0. CONCLUSION: Readability levels of online patient education materials pertaining to epidural and spinal anesthesia are higher than recommended. When we evaluated the quality of the information using a validated tool, the materials were found to be just below the threshold of what is considered good quality. Authors of educational materials should provide not only readable but also good-quality information to enhance patient understanding.




Subject(s)
Anesthesia, Epidural , Anesthesia, Spinal , Comprehension , Internet , Patient Education as Topic , Humans , Patient Education as Topic/standards , Patient Education as Topic/methods , Anesthesia, Epidural/standards , Anesthesia, Epidural/methods , Health Literacy
16.
BMC Health Serv Res ; 24(1): 1124, 2024 Sep 27.
Article in English | MEDLINE | ID: mdl-39334340

ABSTRACT

BACKGROUND: The quality and safety of information provided on online platforms for migraine treatment remains uncertain. We evaluated the top 10 trending websites accessed annually by Turkish patients seeking solutions for migraine treatment and assessed information quality, security, and readability in this cross-sectional study. METHODS: A comprehensive search strategy was conducted using Google starting in 2015, considering Türkiye's internet usage trends. Websites were evaluated using the DISCERN measurement tool and Atesman Turkish readability index. RESULTS: Ninety websites were evaluated between 2015 and 2024. According to the DISCERN measurement tool, most websites exhibited low quality and security levels. Readability analysis showed that half of the websites were understandable by readers with 9th - 10th grade educational levels. The author distribution varied, with neurologists being the most common. A significant proportion of the websites were for profit. Treatment of attacks and preventive measures were frequently mentioned, but some important treatments, such as greater occipital nerve blockade, were rarely discussed. CONCLUSION: This study highlights the low quality and reliability of online information websites on migraine treatment in Türkiye. These websites' readability level remains a concern, potentially hindering patients' access to accurate information. This can be a barrier to migraine care for both patients with migraine and the physician. Better supervision and cooperation with reputable medical associations are needed to ensure the dissemination of reliable information to the public.


Subject(s)
Comprehension , Consumer Health Information , Internet , Migraine Disorders , Migraine Disorders/drug therapy , Migraine Disorders/therapy , Humans , Turkey , Cross-Sectional Studies , Consumer Health Information/standards , Reproducibility of Results , Health Literacy
17.
Neurosurg Focus ; 57(1): E6, 2024 07.
Article in English | MEDLINE | ID: mdl-38950429

ABSTRACT

OBJECTIVE: Concussions are self-limited forms of mild traumatic brain injury (TBI). Gradual return to play (RTP) is crucial to minimizing the risk of second impact syndrome. Online patient educational materials (OPEM) are often used to guide decision-making. Previous literature has reported that grade-level readability of OPEM is higher than recommended by the American Medical Association and the National Institutes of Health. The authors evaluated the readability of OPEM on concussion and RTP. METHODS: An online search engine was used to identify websites providing OPEM on concussion and RTP. Text specific to concussion and RTP was extracted from each website and readability was assessed using the following six standardized indices: Flesch Reading Ease (FRE), Flesch-Kincaid Grade Level, Gunning Fog Index, Coleman-Liau Index, Simple Measure of Gobbledygook Index, and Automated Readability Index. One-way ANOVA and Tukey's post hoc test were used to compare readability across sources of information. RESULTS: There were 59 concussion and RTP articles, and readability levels exceeded the recommended 6th grade level, irrespective of the source of information. Academic institutions published OPEM at simpler readability levels (higher FRE scores). Private organizations published OPEM at more complex (higher) grade-level readability levels in comparison with academic and nonprofit institutions (p < 0.05). CONCLUSIONS: The readability of OPEM on RTP after concussions exceeds the literacy of the average American. There is a critical need to modify the concussion and RTP OPEM to improve comprehension by a broad audience.
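Of the six indices used in this study, SMOG is the one driven purely by polysyllabic-word density. A minimal sketch of McLaughlin's formula, again with a naive vowel-group syllable count rather than the authors' actual instrument:

```python
import math
import re

def smog_grade(text: str) -> float:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    # Count words of three or more syllables (vowel-group approximation).
    poly = sum(1 for w in words
               if len(re.findall(r"[aeiouy]+", w.lower())) >= 3)
    # McLaughlin's formula, normalized to a 30-sentence sample.
    return 1.0430 * math.sqrt(poly * (30 / len(sentences))) + 3.1291
```

Note the floor of 3.1291: even text with no polysyllabic words never scores below roughly a 3rd-grade level on this index.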


Subject(s)
Brain Concussion , Comprehension , Patient Education as Topic , Brain Concussion/prevention & control , Humans , Patient Education as Topic/methods , Patient Education as Topic/standards , Internet , Return to Sport , Reading
18.
J Ren Nutr ; 34(2): 170-176, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37839591

ABSTRACT

OBJECTIVE: The American Medical Association recommends that health information be written at a 6th-grade reading level. Our aim was to determine whether artificial intelligence can outperform the existing health information on kidney stone prevention and treatment. METHODS: The top 50 search results for "Kidney Stone Prevention" and "Kidney Stone Treatment" on Google, Bing, and Yahoo were selected. Duplicate webpages, advertisements, pages intended for health professionals such as science articles, links to videos, paid subscription pages, and links unrelated to kidney stone prevention and/or treatment were excluded. Included pages were categorized into academic, hospital-affiliated, commercial, nonprofit foundations, and other. Quality and readability of webpages were evaluated using validated tools, and the reading level was descriptively compared with ChatGPT-generated health information on kidney stone prevention and treatment. RESULTS: 50 webpages on kidney stone prevention and 49 on stone treatment were included in this study. The reading level was determined to equate to that of a 10th to 12th grade student. Quality was measured as "fair," with no pages scoring "excellent" and only 20% receiving a "good" rating. There was no significant difference between pages from academic, hospital-affiliated, commercial, and nonprofit foundation publications. The text generated by ChatGPT was considerably easier to understand, with readability levels measured as low as 5th grade. CONCLUSIONS: The language used in existing information on kidney stone disease is of subpar quality and too complex to understand. Machine learning tools could aid in generating information that is comprehensible by the public.


Subject(s)
Artificial Intelligence , Kidney Calculi , United States , Humans , Comprehension , Kidney Calculi/prevention & control , Internet
19.
Vascular ; : 17085381241240550, 2024 Mar 18.
Article in English | MEDLINE | ID: mdl-38500300

ABSTRACT

OBJECTIVES: Generative artificial intelligence (AI) has emerged as a promising tool to engage with patients. The objective of this study was to assess the quality of AI responses to common patient questions regarding vascular surgery disease processes. METHODS: OpenAI's ChatGPT-3.5 and Google Bard were queried with 24 mock patient questions spanning seven vascular surgery disease domains. Six experienced vascular surgery faculty at a tertiary academic center independently graded AI responses on their accuracy (rated 1-4 from completely inaccurate to completely accurate), completeness (rated 1-4 from totally incomplete to totally complete), and appropriateness (binary). Responses were also evaluated with three readability scales. RESULTS: ChatGPT responses were rated, on average, more accurate than Bard responses (3.08 ± 0.33 vs 2.82 ± 0.40, p < .01). ChatGPT responses were scored, on average, more complete than Bard responses (2.98 ± 0.34 vs 2.62 ± 0.36, p < .01). Most ChatGPT responses (75.0%, n = 18) and almost half of Bard responses (45.8%, n = 11) were unanimously deemed appropriate. Almost one-third of Bard responses (29.2%, n = 7) were deemed inappropriate by at least two reviewers (29.2%), and two Bard responses (8.4%) were considered inappropriate by the majority. The mean Flesch Reading Ease, Flesch-Kincaid Grade Level, and Gunning Fog Index of ChatGPT responses were 29.4 ± 10.8, 14.5 ± 2.2, and 17.7 ± 3.1, respectively, indicating that responses were readable with a post-secondary education. Bard's mean readability scores were 58.9 ± 10.5, 8.2 ± 1.7, and 11.0 ± 2.0, respectively, indicating that responses were readable with a high-school education (p < .0001 for three metrics). ChatGPT's mean response length (332 ± 79 words) was higher than Bard's mean response length (183 ± 53 words, p < .001). There was no difference in the accuracy, completeness, readability, or response length of ChatGPT or Bard between disease domains (p > .05 for all analyses). 
CONCLUSIONS: AI offers a novel means of educating patients that avoids the inundation of information from "Dr Google" and the time barriers of physician-patient encounters. ChatGPT provides largely valid, though imperfect, responses to myriad patient questions at the expense of readability. While Bard responses are more readable and concise, their quality is poorer. Further research is warranted to better understand failure points for large language models in vascular surgery patient education.
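The grade-level gap between the two models' answers (FKGL 14.5 vs 8.2; Fog 17.7 vs 11.0) comes down to sentence length and polysyllabic-word density, the two inputs to the Gunning Fog Index. For reference, a minimal sketch, using a naive complex-word count (the official definition also excludes proper nouns, compound words, and some common suffixes):

```python
import re

def gunning_fog(text: str) -> float:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    # "Complex" words approximated as three or more vowel groups.
    complex_words = sum(1 for w in words
                        if len(re.findall(r"[aeiouy]+", w.lower())) >= 3)
    return 0.4 * (len(words) / len(sentences)
                  + 100 * complex_words / len(words))
```

A Fog score of 17.7 corresponds to text requiring post-secondary education, consistent with the ChatGPT results reported above.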

20.
J Med Internet Res ; 26: e54072, 2024 Aug 28.
Article in English | MEDLINE | ID: mdl-39196637

ABSTRACT

BACKGROUND: Halitosis, characterized by an undesirable mouth odor, represents a common concern. OBJECTIVE: This study aims to assess the quality and readability of web-based Arabic health information on halitosis as the internet is becoming a prominent global source of medical information. METHODS: A total of 300 Arabic websites were retrieved from Google using 3 commonly used phrases for halitosis in Arabic. The quality of the websites was assessed using benchmark criteria established by the Journal of the American Medical Association, the DISCERN tool, and the presence of the Health on the Net Foundation Code of Conduct (HONcode). The assessment of readability (Flesch Reading Ease [FRE], Simple Measure of Gobbledygook, and Flesch-Kincaid Grade Level [FKGL]) was conducted using web-based readability indexes. RESULTS: A total of 127 websites were examined. Regarding quality assessment, 87.4% (n=111) of websites failed to fulfill any Journal of the American Medical Association requirements, highlighting a lack of authorship (authors' contributions), attribution (references), disclosure (sponsorship), and currency (publication date). The DISCERN tool had a mean score of 34.55 (SD 7.46), with the majority (n=72, 56.6%) rated as moderate quality, 43.3% (n=55) as having a low score, and none receiving a high DISCERN score, indicating a general inadequacy in providing quality health information to make decisions and treatment choices. No website had HONcode certification, emphasizing the concern over the credibility and trustworthiness of these resources. Regarding readability assessment, Arabic halitosis websites had high readability scores, with 90.5% (n=115) receiving an FRE score ≥80, 98.4% (n=125) receiving a Simple Measure of Gobbledygook score <7, and 67.7% (n=86) receiving an FKGL score <7. There were significant correlations between the DISCERN scores and the quantity of words (P<.001) and sentences (P<.001) on the websites. 
Additionally, there was a significant relationship (P<.001) between the number of sentences and FKGL and FRE scores. CONCLUSIONS: While readability was found to be very good, indicating that the information is accessible to the public, the quality of Arabic halitosis websites was poor, reflecting a significant gap in providing reliable and comprehensive health information. This highlights the need for improving the availability of high-quality materials to ensure Arabic-speaking populations have access to reliable information about halitosis and its treatment options, tying quality and availability together as critical for effective health communication.


Subject(s)
Comprehension , Halitosis , Internet , Humans , Halitosis/therapy , Consumer Health Information/standards