Results 1 - 20 of 811
1.
Respir Res ; 25(1): 334, 2024 Sep 09.
Article in English | MEDLINE | ID: mdl-39252025

ABSTRACT

BACKGROUND: The internet is a common source of health information for patients and caregivers. To date, the content and information quality of YouTube videos on sarcoidosis have not been studied. The aim of our study was to investigate the content and quality of information on sarcoidosis provided by YouTube videos. METHODS: Of the first 200 results under the search term "sarcoidosis," all English-language videos with content directed at patients were included. Two independent investigators assessed the content of the videos based on 25 predefined key features (content score of 0-25 points), as well as reliability and quality (HONCode score of 0-8 points, DISCERN score of 1-5 points). Misinformation contained in the videos was described qualitatively. RESULTS: The majority of the 85 included videos were from an academic or governmental source (n = 63, 74%), and median time since upload was 33 months (IQR 10-55). Median video duration was 8 min (IQR 3-13), and videos had a median of 2,044 views (IQR 504-13,203). Quality assessment suggested partially sufficient information: mean HONCode score was 4.4 (SD 0.9), with 91% of videos having a medium-quality HONCode evaluation. Mean DISCERN score was 2.3 (SD 0.5). Video content was generally poor, with a mean of 10.5 points (SD 0.6). Frequently absent key features included information on the course of disease (6%), the presence of substantial geographical variation (7%), and the importance of screening for extrapulmonary manifestations (11%). HONCode scores were higher in videos from academic or governmental sources (p = 0.003), particularly regarding "transparency of sponsorship" (p < 0.001). DISCERN and content scores did not differ by video category. CONCLUSIONS: Most YouTube videos present incomplete information, reflected in a poor content score, especially regarding screening for extrapulmonary manifestations. Quality was partially sufficient, with higher scores in videos from academic or governmental sources, but videos often lacked references and citations of specific evidence. Improving patient access to trustworthy and up-to-date information is needed.


Subject(s)
Sarcoidosis, Social Media, Video Recording, Humans, Social Media/standards, Video Recording/methods, Video Recording/standards, Sarcoidosis/diagnosis, Patient Education as Topic/methods, Patient Education as Topic/standards, Consumer Health Information/standards, Consumer Health Information/methods, Information Dissemination/methods, Internet/standards, Information Sources
2.
Liver Int ; 44(6): 1373-1382, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38441405

ABSTRACT

BACKGROUND & AIMS: Short videos, crucial for disseminating health information on metabolic dysfunction-associated steatotic liver disease (MASLD), lack a clear evaluation of quality and reliability. This study aimed to assess the quality and reliability of MASLD-related videos on Chinese platforms. METHODS: Video samples were collected from three platforms (TikTok, Kwai and Bilibili) during the period from November 2019 to July 2023. Two independent reviewers evaluated the integrity of the information contained therein by scoring six key aspects of its content: definition, epidemiology, risk factors, outcomes, diagnosis and treatment. The quality and reliability of the videos were assessed using the Journal of the American Medical Association (JAMA) criteria, the Global Quality Score (GQS) and the modified DISCERN score. RESULTS: A total of 198 videos were included. The video content exhibited an overall unsatisfactory quality, with a primary emphasis on risk factors and treatment, while diagnosis and epidemiology were seldom addressed. Regarding the sources of the videos, the GQS and modified DISCERN scores varied significantly between the platforms (p = .003), although they had generally similar JAMA scores (p = .251). Videos created by medical professionals differed significantly in terms of JAMA scores (p = .046) compared to those created by nonmedical professionals, but there were no statistically significant differences in GQS (p = .923) or modified DISCERN scores (p = .317). CONCLUSIONS: The overall quality and reliability of the videos were poor and varied between platforms and uploaders. Platforms and healthcare professionals should strive to provide more reliable health-related information regarding MASLD.


Subject(s)
Video Recording, Humans, Reproducibility of Results, China/epidemiology, Risk Factors, Non-alcoholic Fatty Liver Disease/epidemiology, Non-alcoholic Fatty Liver Disease/therapy, Fatty Liver/diagnosis, Fatty Liver/therapy, Consumer Health Information/standards
3.
J Natl Compr Canc Netw ; 22(2D)2024 May 15.
Article in English | MEDLINE | ID: mdl-38749478

ABSTRACT

BACKGROUND: Internet-based health education is increasingly vital in patient care. However, the readability of online information often exceeds the average reading level of the US population, limiting accessibility and comprehension. This study investigates the use of chatbot artificial intelligence to improve the readability of cancer-related patient-facing content. METHODS: We used ChatGPT 4.0 to rewrite content about breast, colon, lung, prostate, and pancreas cancer across 34 websites associated with NCCN Member Institutions. Readability was analyzed using Fry Readability Score, Flesch-Kincaid Grade Level, Gunning Fog Index, and Simple Measure of Gobbledygook. The primary outcome was the mean readability score for the original and artificial intelligence (AI)-generated content. As secondary outcomes, we assessed the accuracy, similarity, and quality using F1 scores, cosine similarity scores, and section 2 of the DISCERN instrument, respectively. RESULTS: The mean readability level across the 34 websites was equivalent to a university freshman level (grade 13±1.5). However, after ChatGPT's intervention, the AI-generated outputs had a mean readability score equivalent to a high school freshman education level (grade 9±0.8). The overall F1 score for the rewritten content was 0.87, the precision score was 0.934, and the recall score was 0.814. Compared with their original counterparts, the AI-rewritten content had a cosine similarity score of 0.915 (95% CI, 0.908-0.922). The improved readability was attributed to simpler words and shorter sentences. The mean DISCERN score of the random sample of AI-generated content was equivalent to "good" (28.5±5), with no significant differences compared with their original counterparts. CONCLUSIONS: Our study demonstrates the potential of AI chatbots to improve the readability of patient-facing content while maintaining content quality. The decrease in requisite literacy after AI revision emphasizes the potential of this technology to reduce health care disparities caused by a mismatch between educational resources available to a patient and their health literacy.


Subject(s)
Artificial Intelligence, Comprehension, Health Literacy, Internet, Neoplasms, Humans, Health Literacy/methods, Health Literacy/standards, Patient Education as Topic/methods, Patient Education as Topic/standards, Consumer Health Information/standards, Consumer Health Information/methods
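The grade-level metrics reported in the study above (Flesch-Kincaid Grade Level, Gunning Fog, SMOG) are simple formulas over word, sentence, and syllable counts. The sketch below is a minimal illustration, not the authors' pipeline; it uses a crude vowel-group syllable heuristic, so its numbers only approximate those from validated tools.

```python
# Minimal readability sketch: compute FKGL, Gunning Fog, and SMOG from raw text.
import math
import re

def count_syllables(word: str) -> int:
    # Crude heuristic: count groups of consecutive vowels; real tools use
    # dictionaries or richer rules, so treat results as approximate.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability(text: str) -> dict:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    complex_words = sum(1 for w in words if count_syllables(w) >= 3)
    wps = len(words) / len(sentences)   # words per sentence
    spw = syllables / len(words)        # syllables per word
    return {
        # Flesch-Kincaid Grade Level: 0.39*(words/sent) + 11.8*(syll/word) - 15.59
        "fkgl": 0.39 * wps + 11.8 * spw - 15.59,
        # Gunning Fog: 0.4 * ((words/sent) + 100*(complex words/words))
        "fog": 0.4 * (wps + 100 * complex_words / len(words)),
        # SMOG (formally requires >= 30 sentences): 1.043*sqrt(30*poly/sent) + 3.1291
        "smog": 1.0430 * math.sqrt(complex_words * 30 / len(sentences)) + 3.1291,
    }

original = "The patient should undergo evaluation for metastatic disease prior to resection."
simplified = "We check whether the cancer has spread before surgery."
print(readability(original)["fkgl"], readability(simplified)["fkgl"])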
4.
J Natl Compr Canc Netw ; 22(7): 475-481, 2024 08 16.
Article in English | MEDLINE | ID: mdl-39151450

ABSTRACT

BACKGROUND: Individuals with a history of cancer increasingly seek health information from online resources, including NCI-designated Cancer Center websites. Centers receive NCI designation because they provide excellent care and engage in cutting-edge research. However, the information presented on these webpages and their accessibility are unknown. An evaluation of the survivorship-focused webpages from NCI-designated Cancer Centers is needed to assess survivorship information and the accessibility of these webpages. METHODS: We conducted an evaluation of the survivorship-focused webpages from 64 NCI-designated Cancer Centers. We evaluated where survivorship-focused webpages were housed, whether there was a survivorship clinic or program, the target audience of the webpage, how "cancer survivor" was defined, contact methods, and available resources. Accessibility outcomes included readability, font type, font size, color scheme, and alternative text (alt text) descriptors. An artificial intelligence (AI) audit was conducted to assess whether each webpage was compliant with national accessibility guidelines. RESULTS: Most cancer centers had a survivorship-focused webpage, with 72% located on the cancer center's website and 28% on a health system website. Survivorship information available varied considerably and was often lacking in detail. Although three-quarters of webpages targeted patients only, variable definitions of cancer survivor were observed. Accessibility issues identified included inconsistent use of alt text descriptors, font size smaller than 15 points, and color schemes without adequate contrast. The average reading level of the information presented was above the 12th grade. Only 9% of webpages were compliant with online accessibility guidelines, 72% were semicompliant, and 21% were noncompliant. CONCLUSIONS: Information presented on NCI-designated Cancer Center survivorship-focused webpages was inconsistent, often lacking, and inaccessible. NCI-designated Cancer Centers are role models for cancer research in the United States and have an obligation to provide survivorship information. Changes to content and website design are needed to provide better information for individuals seeking resources and health information relative to their cancer and care.


Subject(s)
Cancer Care Facilities, Internet, National Cancer Institute (U.S.), Neoplasms, Survivorship, Humans, United States, Neoplasms/therapy, Neoplasms/mortality, Cancer Care Facilities/standards, Cancer Care Facilities/organization & administration, Cancer Survivors/statistics & numerical data, Access to Information, Consumer Health Information/standards, Consumer Health Information/methods
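One of the accessibility checks mentioned above, adequate color contrast, can be computed directly from the WCAG 2.x definitions of relative luminance and contrast ratio. The sketch below is a hypothetical illustration of that single check, not the AI audit tool used in the study.

```python
# Minimal WCAG contrast check for a text/background color pair.
def relative_luminance(hex_color: str) -> float:
    r, g, b = (int(hex_color.lstrip("#")[i:i + 2], 16) / 255 for i in (0, 2, 4))
    def linearize(c: float) -> float:
        # sRGB linearization per the WCAG 2.x relative-luminance definition.
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = linearize(r), linearize(g), linearize(b)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: str, bg: str) -> float:
    lighter, darker = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# Example: mid-grey text on white fails the 4.5:1 WCAG AA threshold for normal text.
ratio = contrast_ratio("#999999", "#FFFFFF")
print(f"{ratio:.2f}:1", "pass" if ratio >= 4.5 else "fail")
```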
5.
J Surg Res ; 299: 103-111, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38749313

ABSTRACT

INTRODUCTION: The quality and readability of online health information are sometimes suboptimal, reducing their usefulness to patients. Manual evaluation of online medical information is time-consuming and error-prone. This study automates content analysis and readability improvement of private-practice plastic surgery webpages using ChatGPT. METHODS: The first 70 Google search results of "breast implant size factors" and "breast implant size decision" were screened. ChatGPT 3.5 and 4.0 were utilized with two prompts (1: general, 2: specific) to automate content analysis and rewrite webpages with improved readability. ChatGPT content analysis outputs were classified as hallucination (false positive), accurate (true positive or true negative), or omission (false negative) using human-rated scores as a benchmark. Six readability metric scores of original and revised webpage texts were compared. RESULTS: Seventy-five webpages were included. Significant improvements were achieved from baseline in six readability metric scores using a specific-instruction prompt with ChatGPT 3.5 (all P ≤ 0.05). No further improvements in readability scores were achieved with ChatGPT 4.0. Rates of hallucination, accuracy, and omission in ChatGPT content scoring varied widely between decision-making factors. Compared to ChatGPT 3.5, average accuracy rates increased while omission rates decreased with ChatGPT 4.0 content analysis output. CONCLUSIONS: ChatGPT offers an innovative approach to enhancing the quality of online medical information and expanding the capabilities of plastic surgery research and practice. Automation of content analysis is limited by ChatGPT 3.5's high omission rates and ChatGPT 4.0's high hallucination rates. Our results also underscore the importance of iterative prompt design to optimize ChatGPT performance in research tasks.


Subject(s)
Comprehension, Plastic Surgery, Humans, Plastic Surgery/standards, Internet, Consumer Health Information/standards
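The error taxonomy above maps onto a confusion matrix: a decision-making factor reported by ChatGPT but absent per the human raters is a false positive (hallucination), and a factor the raters found but ChatGPT missed is a false negative (omission). The sketch below illustrates that tally; the data format and the factor names are invented assumptions, not the study's coding scheme.

```python
# Classify model judgements against a human benchmark per decision-making factor.
from collections import Counter

def classify(human: dict[str, bool], model: dict[str, bool]) -> Counter:
    tally = Counter()
    for factor, truth in human.items():
        predicted = model.get(factor, False)
        if predicted and not truth:
            tally["hallucination"] += 1   # false positive
        elif truth and not predicted:
            tally["omission"] += 1        # false negative
        else:
            tally["accurate"] += 1        # true positive or true negative
    return tally

# Hypothetical factors for one webpage, coded as present/absent.
human_rated = {"body frame": True, "lifestyle": True, "cost": False}
chatgpt_rated = {"body frame": True, "lifestyle": False, "cost": True}
counts = classify(human_rated, chatgpt_rated)
total = sum(counts.values())
print({label: n / total for label, n in counts.items()})
```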
6.
J Surg Res ; 301: 540-546, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39047386

ABSTRACT

INTRODUCTION: Parathyroidectomy is recommended for severe secondary hyperparathyroidism (SHPT) due to end-stage kidney disease (ESKD), but surgery is underutilized. High-quality and accessible online health information, recommended to be at a 6th-grade reading level, is vital to improving patient health literacy. This study evaluated available online resources for SHPT from ESKD based on information quality and readability. METHODS: Three search engines were queried using the terms "parathyroidectomy for secondary hyperparathyroidism," "parathyroidectomy kidney/renal failure," "parathyroidectomy dialysis patients," "should I have surgery for hyperparathyroidism due to kidney failure?," and "do I need surgery for hyperparathyroidism due to kidney failure if I do not have symptoms?" Websites were categorized by source and origin. Two independent reviewers determined information quality using the JAMA (0-4) and DISCERN (1-5) frameworks, and scores were averaged. Cohen's kappa evaluated inter-rater reliability. Readability was determined using the Flesch Reading Ease, Flesch-Kincaid Grade Level, and Simple Measure of Gobbledygook tools. Median readability scores were calculated, and the corresponding grade level was determined. The proportion of websites with reading difficulty above the 6th-grade level was calculated. RESULTS: Thirty-one websites (86.1%) originated from the U.S., with most from hospital-associated (63.9%) and foundation/advocacy sources (30.6%). The mean JAMA and DISCERN scores for all websites were 1.3 ± 1.4 and 2.6 ± 0.7, respectively. Readability scores ranged from a 5th-grade to a college reading level, and most websites scored above the recommended 6th-grade level. CONCLUSIONS: Patient-oriented websites addressing SHPT from ESKD are written at a reading level higher than recommended, and the quality of the information is low. Efforts must be made to improve the accessibility and quality of information for all patients.


Subject(s)
Comprehension, Health Literacy, Secondary Hyperparathyroidism, Chronic Kidney Failure, Humans, Health Literacy/statistics & numerical data, Chronic Kidney Failure/therapy, Chronic Kidney Failure/complications, Secondary Hyperparathyroidism/etiology, Secondary Hyperparathyroidism/surgery, Internet, Parathyroidectomy, Patient Education as Topic, Consumer Health Information/standards
7.
J Surg Res ; 300: 93-101, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38805846

ABSTRACT

INTRODUCTION: Patients use the internet to learn more about health conditions. Non-English-speaking patients may face additional challenges. The quality of online information on breast cancer, the most common cancer in women, is uncertain. This study aims to examine the quality of online breast cancer information for English- and non-English-speaking patients. METHODS: Three search engines were queried using the terms "how to do a breast examination," "when do I need a mammogram," and "what are the treatment options for breast cancer" in English, Spanish, and Chinese. For each language, 60 unique websites were included and classified by type and information source. Two language-fluent reviewers evaluated website quality using the Journal of the American Medical Association (JAMA) benchmark criteria (0-4) and the DISCERN tool (1-5), with higher scores representing higher quality. Scores were averaged for each language. Health On the Net code presence was noted. Inter-rater reliability between reviewers was assessed. RESULTS: English and Spanish websites most commonly originated from US sources (92% and 80%, respectively) compared to Chinese websites (33%, P < 0.001). The most common website type was hospital-affiliated for English (43%) and foundation/advocacy for Spanish and Chinese (43% and 45%, respectively). English websites had the highest and Chinese websites the lowest mean JAMA (2.2 ± 1.4 versus 1.0 ± 0.8, P = 0.002) and DISCERN scores (3.5 ± 0.9 versus 2.3 ± 0.6, P < 0.001). Health On the Net code was present on 16 (8.9%) websites. Inter-rater reliability ranged from moderate to substantial agreement. CONCLUSIONS: The quality of online information on breast cancer across all three languages is poor. Information quality was poorest for Chinese websites. Improvements to enhance the reliability of breast cancer information across languages are needed.


Subject(s)
Breast Neoplasms, Internet, Humans, Breast Neoplasms/diagnosis, Breast Neoplasms/therapy, Female, Multilingualism, Consumer Health Information/standards, Consumer Health Information/statistics & numerical data, Language, Translation
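Inter-rater agreement in studies like the one above is commonly summarized with Cohen's kappa, where values of 0.41-0.60 are conventionally read as moderate and 0.61-0.80 as substantial agreement. The sketch below computes unweighted kappa on invented ratings; the abstract does not state which agreement statistic was used, so treat this as illustrative only.

```python
# Cohen's kappa: chance-corrected agreement between two raters.
from collections import Counter

def cohens_kappa(rater_a: list, rater_b: list) -> float:
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement from the two raters' marginal category frequencies.
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical JAMA benchmark scores (0-4) assigned by two reviewers to 8 websites.
reviewer_1 = [2, 3, 1, 4, 2, 2, 0, 3]
reviewer_2 = [2, 3, 2, 4, 2, 1, 0, 3]
print(round(cohens_kappa(reviewer_1, reviewer_2), 2))  # ~0.67: substantial agreement
```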
8.
Eur J Vasc Endovasc Surg ; 67(5): 738-745, 2024 May.
Article in English | MEDLINE | ID: mdl-38185375

ABSTRACT

OBJECTIVE: This study aimed to assess the quality of patient information material regarding elective abdominal aortic aneurysm (AAA) repair on the internet using the Modified Ensuring Quality Information for Patients (MEQIP) tool. METHODS: A qualitative assessment of internet-based patient information was performed. The 12 most used search terms relating to AAA repair were identified using Google Trends, with the first 10 pages of websites retrieved for each term searched. Duplicates were removed, and information for patients undergoing elective AAA repair was selected. Further exclusion criteria were marketing material, academic journals, videos, and non-English-language sites. The remaining websites were then MEQIP scored independently by two reviewers, producing a final score by consensus. RESULTS: A total of 1,297 websites were identified, with 235 (18.1%) eligible for analysis. The median MEQIP score was 18 (interquartile range [IQR] 14, 21) out of a possible 36. The highest score was 33. Websites scoring in the 99th percentile had MEQIP scores > 27; four of these six sites were online copies of hospital patient information leaflets, although hospital sites overall had lower median MEQIP scores than most other institution types. MEQIP subdomain median scores were: content, 8 (IQR 6, 11); identification, 3 (IQR 1, 3); and structure, 7 (IQR 6, 9). Of the analysed websites, 77.9% originated from the USA (median score 17) and 12.8% originated in the UK (median score 22). Search engine ranking was related to website institution type but had no correlation with MEQIP score. CONCLUSION: When assessed by the MEQIP tool, most websites regarding elective AAA repair are of questionable quality. This is in keeping with studies in other surgical and medical fields. Search engine ranking is not a reliable measure of the quality of patient information material regarding elective AAA repair. Health practitioners should be aware of this issue as well as the whereabouts of high-quality material to which patients can be directed.


Subject(s)
Abdominal Aortic Aneurysm, Consumer Health Information, Elective Surgical Procedures, Internet, Patient Education as Topic, Abdominal Aortic Aneurysm/surgery, Humans, Elective Surgical Procedures/standards, Patient Education as Topic/standards, Consumer Health Information/standards, Vascular Surgical Procedures/standards
9.
Colorectal Dis ; 26(5): 1014-1027, 2024 May.
Article in English | MEDLINE | ID: mdl-38561871

ABSTRACT

AIM: The aim was to examine the quality of online patient information resources for patients considering parastomal hernia treatment. METHODS: A Google search was conducted using lay search terms for patient facing sources on parastomal hernia. The quality of the content was assessed using the validated DISCERN instrument. Readability of written content was established using the Flesch-Kincaid score. Sources were also assessed against the essential content and process standards from the National Institute for Health and Care Excellence (NICE) framework for shared decision making support tools. Content analysis was also undertaken to explore what the sources covered and to identify any commonalities across the content. RESULTS: Fourteen sources were identified and assessed using the identified tools. The mean Flesch-Kincaid reading ease score was 43.61, suggesting that the information was difficult to read. The overall quality of the identified sources was low based on the pooled analysis of the DISCERN and Flesch-Kincaid scores, and when assessed against the criteria in the NICE standards framework for shared decision making tools. Content analysis identified eight categories encompassing 59 codes, which highlighted considerable variation between sources. CONCLUSIONS: The current information available to patients considering parastomal hernia treatment is of low quality and often does not contain enough information on treatment options for patients to be able to make an informed decision about the best treatment for them. There is a need for high-quality information, ideally co-produced with patients, to provide patients with the necessary information to allow them to make informed decisions about their treatment options when faced with a symptomatic parastomal hernia.


Subject(s)
Internet, Patient Education as Topic, Humans, Consumer Health Information/standards, Surgical Stomas/adverse effects, Incisional Hernia/surgery, Comprehension, Herniorrhaphy
10.
Support Care Cancer ; 32(8): 540, 2024 Jul 24.
Article in English | MEDLINE | ID: mdl-39044016

ABSTRACT

BACKGROUND: Breast cancer-related lymphedema in the upper limb remains one of the most distressing complications of breast cancer treatment. YouTube is considered a potential digital resource for population health and decision making. However, access to inadequate information or misinformation could have undesirable impacts. This cross-sectional study aimed to evaluate the reliability, quality and content of YouTube videos on lymphedema as an information source for Spanish-speaking breast cancer survivors. METHODS: A search of YouTube was conducted in January 2023 using the key words "breast cancer lymphedema" and "lymphedema arm breast cancer." Reliability and quality of the videos were evaluated using the DISCERN tool; content, source of production, number of likes, comments, views, duration, Video Power Index, likes ratio, view ratio and age on the platform were also recorded. RESULTS: Among the 300 Spanish-language videos identified on YouTube, 35 were selected for analysis based on the inclusion and exclusion criteria. Of the 35 selected videos, 82.9% (n = 29) were developed by healthcare or academic professionals and 17.1% (n = 9) by others. Reliability (p < 0.017) and quality (p < 0.03) were higher in the videos made by professionals. The DISCERN total score (r = 0.476; p = 0.004), reliability (r = 0.472; p = 0.004) and quality (r = 0.469; p = 0.004) were positively correlated with the duration of the videos. CONCLUSIONS: Our findings provide a strong rationale for educating breast cancer survivors seeking lymphedema information to select videos made by healthcare or academic professionals. Standardised evaluation prior to video publication is needed to ensure that end users receive accurate and quality information from YouTube.


Subject(s)
Breast Neoplasms, Cancer Survivors, Social Media, Video Recording, Humans, Cross-Sectional Studies, Female, Breast Neoplasms/complications, Reproducibility of Results, Lymphedema/etiology, Consumer Health Information/standards, Consumer Health Information/methods, Middle Aged, Information Dissemination/methods, Adult, Information Sources
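The engagement metrics listed above are usually simple ratios. The sketch below uses one commonly cited set of definitions (like ratio from likes and dislikes, view ratio as views per day online, and Video Power Index as their product scaled by 100); the paper may define them differently, so both the formulas and the example figures should be read as assumptions.

```python
# One commonly used (but not universal) definition of the Video Power Index.
from dataclasses import dataclass

@dataclass
class Video:
    views: int
    likes: int
    dislikes: int
    days_online: int

def video_power_index(v: Video) -> float:
    like_ratio = v.likes / (v.likes + v.dislikes) * 100   # percentage of positive reactions
    view_ratio = v.views / v.days_online                  # views per day since upload
    return like_ratio * view_ratio / 100

# Hypothetical video: 12,000 views, 300 likes, 10 dislikes, online for 400 days.
print(round(video_power_index(Video(12_000, 300, 10, 400)), 1))
```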
11.
Surg Endosc ; 38(5): 2887-2893, 2024 May.
Article in English | MEDLINE | ID: mdl-38443499

ABSTRACT

INTRODUCTION: Generative artificial intelligence (AI) chatbots have recently been posited as potential sources of online medical information for patients making medical decisions. Existing online patient-oriented medical information has repeatedly been shown to be of variable quality and difficult readability. Therefore, we sought to evaluate the content and quality of AI-generated medical information on acute appendicitis. METHODS: A modified DISCERN assessment tool, comprising 16 distinct criteria each scored on a 5-point Likert scale (score range 16-80), was used to assess AI-generated content. Readability was determined using the Flesch Reading Ease (FRE) and Flesch-Kincaid Grade Level (FKGL) scores. Four popular chatbots, ChatGPT-3.5, ChatGPT-4, Bard, and Claude-2, were prompted to generate medical information about appendicitis. Three investigators independently scored the generated texts, blinded to the identity of the AI platforms. RESULTS: ChatGPT-3.5, ChatGPT-4, Bard, and Claude-2 had overall mean (SD) quality scores of 60.7 (1.2), 62.0 (1.0), 62.3 (1.2), and 51.3 (2.3), respectively, on a scale of 16-80. Inter-rater reliability was 0.81, 0.75, 0.81, and 0.72, respectively, indicating substantial agreement. Claude-2 demonstrated a significantly lower mean quality score compared to ChatGPT-4 (p = 0.001), ChatGPT-3.5 (p = 0.005), and Bard (p = 0.001). Bard was the only AI platform that listed verifiable sources, while Claude-2 provided fabricated sources. All chatbots except for Claude-2 advised readers to consult a physician if experiencing symptoms. Regarding readability, the FKGL and FRE scores of ChatGPT-3.5, ChatGPT-4, Bard, and Claude-2 were 14.6 and 23.8, 11.9 and 33.9, 8.6 and 52.8, and 11.0 and 36.6, respectively, indicating difficult readability at a college reading level. CONCLUSION: AI-generated medical information on appendicitis scored favorably upon quality assessment, but most chatbots either fabricated sources or did not provide any altogether. Additionally, overall readability far exceeded recommended levels for the public. Generative AI platforms demonstrate measured potential for patient education and engagement about appendicitis.


Subject(s)
Appendicitis, Artificial Intelligence, Humans, Comprehension, Internet, Consumer Health Information/standards, Patient Education as Topic/methods
12.
Graefes Arch Clin Exp Ophthalmol ; 262(9): 3047-3052, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38639789

ABSTRACT

PURPOSE: This study investigated whether websites regarding diabetic retinopathy are readable for patients, and adequately designed to be found by search engines. METHODS: The term "diabetic retinopathy" was queried in the Google search engine. Patient-oriented websites from the first 10 pages were categorized by search result page number and website organization type. Metrics of search engine optimization (SEO) and readability were then calculated. RESULTS: Among the 71 sites meeting inclusion criteria, informational and organizational sites were best optimized for search engines, and informational sites were the most visited. Better optimization as measured by authority score was correlated with lower Flesch Kincaid Grade Level (r = 0.267, P = 0.024). There was a significant increase in Flesch Kincaid Grade Level with successive search result pages (r = 0.275, P = 0.020). Only 2 sites met the 6th grade reading level AMA recommendation by Flesch Kincaid Grade Level; the average reading level was 10.5. There was no significant difference in readability between website categories. CONCLUSION: While the readability of diabetic retinopathy patient information was poor, better readability was correlated to better SEO metrics. While we cannot assess causality, we recommend websites improve their readability, which may increase uptake of their resources.


Subject(s)
Comprehension, Diabetic Retinopathy, Internet, Search Engine, Humans, Diabetic Retinopathy/diagnosis, Patient Education as Topic, Consumer Health Information/standards, Health Literacy
13.
Dermatol Surg ; 50(10): 904-907, 2024 Oct 01.
Article in English | MEDLINE | ID: mdl-38843457

ABSTRACT

BACKGROUND: As internet access continues to expand, online health care information is increasingly influencing patient decisions. Mohs micrographic surgery (MMS) is commonly used in the field of dermatology but may be unfamiliar to many patients. OBJECTIVE: The purpose of this study was to identify and analyze online educational resources regarding MMS and learn how to optimize the understanding and informational content of MMS for patients and their families. MATERIALS AND METHODS: Thirty-two websites were evaluated for authorship, quality, and readability using DISCERN, JAMA Benchmark Criteria, and Flesch-Kincaid tests. RESULTS: Physician-authored content showed a trend toward higher quality (p = .058). Google scored higher on specific DISCERN questions when overlapping websites were excluded. Bing scored higher on JAMA criteria (p = .03), particularly authorship and currency. Higher DISCERN scores correlated with lower readability. CONCLUSION: Physician involvement improves content quality, raising questions about physicians' responsibility in online resource creation. Correlations between content quality and readability highlight potential challenges for certain demographics. Balancing medical accuracy with comprehensibility is crucial for equitable patient education. This study underscores the need to refine online resources, ensuring accurate, transparent, and accessible health care information.


Subject(s)
Comprehension, Internet, Mohs Surgery, Patient Education as Topic, Humans, Mohs Surgery/education, Patient Education as Topic/standards, Consumer Health Information/standards, Health Literacy
14.
BMC Public Health ; 24(1): 2620, 2024 Sep 27.
Article in English | MEDLINE | ID: mdl-39334090

ABSTRACT

BACKGROUND: Considering the adverse clinical consequences of pathologic tachycardia and the potential anxiety caused by physiological tachycardia in some healthy individuals, it is imperative to disseminate health information related to tachycardia to promote early diagnosis and appropriate management. YouTube has been increasingly used to access health care information. The aim of this study was to assess the quality and reliability of English-language YouTube videos focusing on tachycardia and to explore strategies to enhance the quality of online health resources. METHODS: We conducted a search using the key word "tachycardia" in the YouTube online library on December 2, 2023. The first 150 videos, ranked by "relevance", were initially recorded. After exclusions, a total of 113 videos were included. Characteristics were extracted for all videos, which were categorized by topic, source, and content. Two independent raters assessed the videos using the Journal of the American Medical Association (JAMA) benchmark criteria, the Modified DISCERN (mDISCERN) tool, the Global Quality Scale (GQS) and the Tachycardia-Specific Scale (TSS), followed by statistical analyses. All continuous data in the study were presented as median (interquartile range). RESULTS: The videos had a median JAMA score of 2.00 (1.00), mDISCERN of 3.00 (1.00), GQS of 2.00 (1.00), and TSS of 6.00 (4.50). There were significant differences in JAMA (P < 0.001), mDISCERN (P = 0.004), GQS (P = 0.001) and TSS (P < 0.001) scores among different sources. mDISCERN (P = 0.002), GQS (P < 0.001) and TSS (P = 0.030) scores differed significantly among content categories. No significant differences were observed in any of the scores among video topics. Spearman correlation analysis revealed that the Video Power Index (VPI) exhibited significant correlations with quality and reliability. Multiple linear regression analysis suggested that longer video duration and sources of academics and healthcare professionals were independent predictors of higher reliability and quality, while ECG-specific content was an independent predictor of lower quality. CONCLUSIONS: The reliability and educational quality of current tachycardia-related videos on YouTube are low. Longer video duration and sources of academics and healthcare professionals were closely associated with higher video reliability and quality. Improving the quality of internet medical information and optimizing online patient education necessitate collaborative efforts.


Subject(s)
Social Media, Tachycardia, Video Recording, Humans, Reproducibility of Results, Tachycardia/diagnosis, Consumer Health Information/standards, Information Dissemination/methods, Internet
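The correlation step described above (VPI against quality and reliability scores) is a rank correlation over per-video measurements. The sketch below shows that step with scipy on invented numbers; the variable names and values are illustrative only, not the study's data.

```python
# Spearman rank correlation between an engagement metric and a quality score.
from scipy.stats import spearmanr

vpi = [1.2, 4.5, 0.8, 9.3, 2.1, 6.7, 3.3, 5.0]   # hypothetical Video Power Index per video
gqs = [2, 3, 1, 4, 2, 4, 3, 3]                    # hypothetical rater-assigned GQS (1-5)

rho, p_value = spearmanr(vpi, gqs)
print(f"rho={rho:.2f}, p={p_value:.3f}")
```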
15.
BMC Public Health ; 24(1): 1594, 2024 Jun 14.
Article in English | MEDLINE | ID: mdl-38877432

ABSTRACT

BACKGROUND: YouTube, a widely recognized global video platform, is inaccessible in China, whereas Bilibili and TikTok are popular platforms for long and short videos, respectively. There are many videos related to laryngeal carcinoma on these platforms. This study aims to identify the upload sources, content, and features of these videos on YouTube, Bilibili, and TikTok, and further evaluate video quality. METHODS: On January 1, 2024, we searched the top 100 videos by default sort order on each platform (300 videos in total), using the terms "laryngeal carcinoma" and "throat cancer" on YouTube and their Chinese equivalents on Bilibili and TikTok. Videos were screened for relevance and similarity. Video characteristics were documented, and quality was assessed by using the Patient Education Materials Assessment Tool (PEMAT), Video Information and Quality Index (VIQI), Global Quality Score (GQS), and modified DISCERN (mDISCERN). RESULTS: The analysis included 99 YouTube videos, 76 from Bilibili, and 73 from TikTok. Median video lengths were 193 s (YouTube), 136 s (Bilibili), and 42 s (TikTok). TikTok videos demonstrated higher audience interaction. Bilibili had the lowest proportion of original content (69.7%). Treatment was the most popular topic on YouTube and Bilibili, while prognosis was the most popular on TikTok. Solo narration was the most common video style across all platforms. Video uploaders were predominantly non-profit organizations (YouTube), self-media (Bilibili), and doctors (TikTok), with TikTok authors having the highest certification rate (83.3%). Video quality, assessed using PEMAT, VIQI, GQS, and mDISCERN, varied across platforms, with YouTube generally showing the highest scores. Videos from professional authors performed better than videos from non-professionals based on the GQS and mDISCERN scores. Spearman correlation analysis showed no strong relationships between video quality and audience interaction. CONCLUSIONS: Videos on social media platforms can help the public learn about laryngeal cancer to some extent. TikTok achieves the greatest reach, but videos on YouTube are of the best quality. However, video quality across all platforms still needs enhancement. More professional uploaders are needed to improve the quality of videos related to laryngeal carcinoma. Content creators should also attend to certification, originality, and the style of video shooting. For the platforms, refining recommendation algorithms would allow users to receive more high-quality videos.


Subject(s)
Laryngeal Neoplasms, Social Media, Video Recording, Humans, Social Media/statistics & numerical data, Cross-Sectional Studies, China, Information Dissemination/methods, Consumer Health Information/standards
16.
BMC Health Serv Res ; 24(1): 1124, 2024 Sep 27.
Article in English | MEDLINE | ID: mdl-39334340

ABSTRACT

BACKGROUND: The quality and safety of information provided on online platforms for migraine treatment remain uncertain. In this cross-sectional study, we evaluated the top 10 trending websites accessed annually by Turkish patients seeking solutions for migraine treatment and assessed information quality, security, and readability. METHODS: A comprehensive search strategy was conducted using Google starting in 2015, considering Türkiye's internet usage trends. Websites were evaluated using the DISCERN measurement tool and the Atesman Turkish readability index. RESULTS: Ninety websites were evaluated between 2015 and 2024. According to the DISCERN measurement tool, most websites exhibited low quality and security levels. Readability analysis showed that half of the websites were understandable by readers with 9th- to 10th-grade educational levels. The author distribution varied, with neurologists being the most common. A significant proportion of the websites were for-profit. Treatment of attacks and preventive measures were frequently mentioned, but some important treatments, such as greater occipital nerve blockade, were rarely discussed. CONCLUSION: This study highlights the low quality and reliability of online information websites on migraine treatment in Türkiye. These websites' readability level remains a concern, potentially hindering patients' access to accurate information. This can be a barrier to migraine care for both patients with migraine and their physicians. Better supervision and cooperation with reputable medical associations are needed to ensure the dissemination of reliable information to the public.


Subject(s)
Comprehension, Consumer Health Information, Internet, Migraine Disorders, Migraine Disorders/drug therapy, Migraine Disorders/therapy, Humans, Turkey, Cross-Sectional Studies, Consumer Health Information/standards, Reproducibility of Results, Health Literacy
17.
J Med Internet Res ; 26: e54072, 2024 Aug 28.
Article in English | MEDLINE | ID: mdl-39196637

ABSTRACT

BACKGROUND: Halitosis, characterized by an undesirable mouth odor, represents a common concern. OBJECTIVE: This study aims to assess the quality and readability of web-based Arabic health information on halitosis as the internet is becoming a prominent global source of medical information. METHODS: A total of 300 Arabic websites were retrieved from Google using 3 commonly used phrases for halitosis in Arabic. The quality of the websites was assessed using benchmark criteria established by the Journal of the American Medical Association, the DISCERN tool, and the presence of the Health on the Net Foundation Code of Conduct (HONcode). The assessment of readability (Flesch Reading Ease [FRE], Simple Measure of Gobbledygook, and Flesch-Kincaid Grade Level [FKGL]) was conducted using web-based readability indexes. RESULTS: A total of 127 websites were examined. Regarding quality assessment, 87.4% (n=111) of websites failed to fulfill any Journal of the American Medical Association requirements, highlighting a lack of authorship (authors' contributions), attribution (references), disclosure (sponsorship), and currency (publication date). The DISCERN tool had a mean score of 34.55 (SD 7.46), with the majority (n=72, 56.6%) rated as moderate quality, 43.3% (n=55) as having a low score, and none receiving a high DISCERN score, indicating a general inadequacy in providing quality health information to make decisions and treatment choices. No website had HONcode certification, emphasizing the concern over the credibility and trustworthiness of these resources. Regarding readability assessment, Arabic halitosis websites had high readability scores, with 90.5% (n=115) receiving an FRE score ≥80, 98.4% (n=125) receiving a Simple Measure of Gobbledygook score <7, and 67.7% (n=86) receiving an FKGL score <7. There were significant correlations between the DISCERN scores and the quantity of words (P<.001) and sentences (P<.001) on the websites. Additionally, there was a significant relationship (P<.001) between the number of sentences and FKGL and FRE scores. CONCLUSIONS: While readability was found to be very good, indicating that the information is accessible to the public, the quality of Arabic halitosis websites was poor, reflecting a significant gap in providing reliable and comprehensive health information. This highlights the need for improving the availability of high-quality materials to ensure Arabic-speaking populations have access to reliable information about halitosis and its treatment options, tying quality and availability together as critical for effective health communication.


Subject(s)
Comprehension, Halitosis, Internet, Humans, Halitosis/therapy, Consumer Health Information/standards
18.
J Med Internet Res ; 26: e48257, 2024 Sep 12.
Article in English | MEDLINE | ID: mdl-39265162

ABSTRACT

BACKGROUND: Health information consumers increasingly rely on question-and-answer (Q&A) communities to address their health concerns. However, the quality of questions posted significantly impacts the likelihood and relevance of received answers. OBJECTIVE: This study aims to improve our understanding of the quality of health questions within web-based Q&A communities. METHODS: We develop a novel framework for defining and measuring question quality within web-based health communities, incorporating content- and language-based variables. This framework leverages k-means clustering and establishes automated metrics to assess overall question quality. To validate our framework, we analyze questions related to kidney disease from expert-curated and community-based Q&A platforms. Expert evaluations confirm the validity of our quality construct, while regression analysis helps identify key variables. RESULTS: High-quality questions were more likely to include demographic and medical information than lower-quality questions (P<.001). In contrast, asking questions at the various stages of disease development was less likely to reflect high-quality questions (P<.001). Low-quality questions were generally shorter with lengthier sentences than high-quality questions (P<.01). CONCLUSIONS: Our findings empower consumers to formulate more effective health information questions, ultimately leading to better engagement and more valuable insights within web-based Q&A communities. Furthermore, our findings provide valuable insights for platform developers and moderators seeking to enhance the quality of user interactions and foster a more trustworthy and informative environment for health information exchange.


Subject(s)
Consumer Health Information, Humans, Consumer Health Information/standards, Language, Internet, Surveys and Questionnaires/standards
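The clustering idea above amounts to encoding each health question as a vector of content- and language-based features and grouping the vectors. The sketch below shows a minimal version with scikit-learn; the feature set and values are invented stand-ins for the variables the authors describe, not their actual framework.

```python
# Cluster question feature vectors with k-means to separate candidate quality groups.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Columns (hypothetical): has_demographics, has_medical_history, word_count, avg_sentence_length
questions = np.array([
    [1, 1, 120, 14.0],
    [1, 0,  95, 16.5],
    [0, 0,  20, 20.0],
    [0, 0,  15, 15.0],
    [1, 1, 140, 12.5],
    [0, 1,  60, 18.0],
])

features = StandardScaler().fit_transform(questions)           # put features on a common scale
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print(labels)  # cluster assignments; which cluster is "high quality" is judged by inspection
```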
19.
Lasers Med Sci ; 39(1): 183, 2024 Jul 17.
Article in English | MEDLINE | ID: mdl-39014050

ABSTRACT

Just as tattoos continue to increase in popularity, many people with tattoos also seek removal, often due to career concerns. Prospective clients interested in laser tattoo removal may research the procedure online, as the internet increasingly becomes a resource for preliminary health information. However, it is important that online health information on the topic be of high quality and accessible to all patients. To assess this, we analyzed 77 websites from a Google search query using the terms "Laser tattoo removal patient Information" and "Laser tattoo removal patient Instructions". The websites were evaluated for comprehensiveness and for readability using multiple validated indices. We found that websites had a broad readability range, from elementary to college level, though most were above the recommended eighth-grade reading level. Less than half of the websites adequately discussed the increased risk of pigmentary complications in clients with skin of color or emphasized the importance of consulting with a board-certified dermatologist/plastic surgeon before the procedure. Over 90% of the websites noted that multiple laser treatments are likely needed for complete clearance of tattoos. The findings from our study underscore a significant gap in the accessibility and quality of online information for patients considering laser tattoo removal, particularly in addressing specific risks for patients with darker skin tones and emphasizing the need to consult a board-certified physician before undergoing the procedure. It is important that online resources for laser tattoo removal be appropriately written to allow better decision-making, expectations, and future satisfaction for potential clients interested in the procedure.


Subject(s)
Comprehension, Internet, Tattooing, Humans, Consumer Health Information/standards, Patient Education as Topic, Laser Therapy/methods, Health Literacy
20.
Eye Contact Lens ; 50(6): 243-248, 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38477759

ABSTRACT

OBJECTIVES: To determine the compliance of online vendors with the UK Opticians Act 1989 Section 27 requirements and safety regulations for cosmetic contact lens (CCL) sales, and the quality of online CCL health information. METHODS: The top 50 websites selling CCLs on each of three search engines, namely Google, Yahoo, and Bing, were selected. Duplicates were removed, and the remaining websites were systematically analyzed in February 2023. UK legal authorization for CCL sales was assessed using the Opticians Act Section 27, and compliance with safety regulations was determined by the presence of Conformité Européenne (CE) marking. The quality and reliability of online information were graded using the DISCERN (16-80) and JAMA (0-4) scores by two independent reviewers. RESULTS: Forty-seven eligible websites were analyzed. Only six (12.7%) met the UK legal authorization requirements for CCL sales. Forty-nine different brands of CCLs were sold on these websites, of which 13 (26.5%) had no CE marking. The mean DISCERN and JAMA benchmark scores were 26 ± 12.2 and 1.3 ± 0.6, respectively (intraclass correlation scores: 0.99 for both). CONCLUSIONS: A significant number of websites provide consumers with easy, unsafe, and unregulated access to CCLs. Most online stores do not meet the requirements set out in the Opticians Act for CCL sales in the United Kingdom. A significant number of CCLs lack CE marking, and the average quality of information on websites selling CCLs is poor. Together, these findings pose a risk to consumers purchasing CCLs from unregulated websites, and therefore more stringent regulations on the online sale of these products are needed.


Subject(s)
Consumer Health Information, Internet, Humans, United Kingdom, Consumer Health Information/standards, Cosmetics/standards, Contact Lenses, Consumer Product Safety/legislation & jurisprudence, Consumer Product Safety/standards