Results 1 - 2 of 2
1.
Cureus; 16(2): e54929, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38544628

ABSTRACT

Introduction Colorectal cancer (CRC) remains a significant global public health challenge; its pathogenesis involves the transformation of benign adenomas into malignant carcinomas. Although advances in screening and early detection have significantly improved outcomes, the rise of digital platforms such as YouTube for disseminating health information presents new challenges. Concerns over the accuracy and reliability of this content underline the need for rigorous evaluation of digital health education tools.

Methods Our study was conducted at Nassau University Medical Center, East Meadow, New York. We analyzed YouTube videos on "colon cancer screening awareness," applying strict selection criteria to ensure relevance and quality and focusing on English-language content with pertinent audio. Videos were evaluated on quantitative and qualitative attributes: views, subscriber counts, likes/dislikes, comments, and content type (classified as scholarly or personal). Credibility and scientific accuracy were assessed with the DISCERN instrument (reliability and quality), the Global Quality Score (GQS; overall quality), and the Patient Education Materials Assessment Tool (PEMAT; understandability and actionability), and consistency among the seven raters was verified with the intraclass correlation coefficient. Using descriptive and inferential statistics, we compared content quality between academic and private institutions, employing t-tests to identify statistically significant differences. Data processing and analysis used Microsoft Excel (version 16.73; Microsoft Corporation, Redmond, Washington, United States) and IBM SPSS Statistics for Windows, version 29.0 (released 2022; IBM Corp., Armonk, New York, United States) to evaluate the educational value and trustworthiness of the examined YouTube content.

Results Our study of 156 YouTube videos, split between academic sources (68 videos) and private sources (88 videos), revealed significant quality differences. On the DISCERN, PEMAT, and GQS metrics, academic videos consistently outperformed private ones by wide margins: DISCERN (54.61 vs. 34.76), PEMAT (3.02 vs. 2.11), and GQS (3.90 vs. 2.02), with low p-values indicating statistically significant differences. These findings suggest that the source of content (academic versus private) plays a crucial role in determining the quality and reliability of educational materials on platforms like YouTube, highlighting the academic sector's commitment to higher educational standards.

Conclusion The study emphasizes the critical role of credible sources in the quality of health education content on YouTube, particularly for CRC screening. The superiority of academic institutions in providing high-quality content suggests that viewers should critically assess the source of information. It also calls for enhanced regulatory oversight and measures to ensure the accuracy and reliability of health information online.
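As a minimal sketch of the group comparison described in the Methods above, the Python snippet below runs an independent-samples t-test (scipy.stats.ttest_ind) on simulated DISCERN scores. Only the group sizes (68 vs. 88) and the reported means (54.61 vs. 34.76) come from the abstract; the normal distributions, the standard deviation, and the random seed are illustrative assumptions, not the study's data.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)  # fixed seed, for reproducibility of the sketch only

# Simulated DISCERN scores: sizes and means follow the abstract,
# the spread (scale=8.0) is an assumed placeholder.
academic = rng.normal(loc=54.61, scale=8.0, size=68)
private = rng.normal(loc=34.76, scale=8.0, size=88)

# Independent-samples t-test comparing the two source groups.
t_stat, p_value = stats.ttest_ind(academic, private)
print(f"t = {t_stat:.2f}, p = {p_value:.3g}")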

2.
Cureus; 16(1): e51848, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38327910

ABSTRACT

Introduction Artificial intelligence (AI) integration in healthcare, specifically in gastroenterology, has opened new avenues for enhanced patient care and medical decision-making. This study assesses the reliability and accuracy of two prominent AI tools, ChatGPT 4.0 and Google Bard, in answering gastroenterology-related queries, thereby evaluating their potential utility in medical settings.

Methods The study employed a structured approach in which typical gastroenterology questions were input into ChatGPT 4.0 and Google Bard. Independent reviewers evaluated the responses using a Likert scale and cross-referenced them with guidelines from authoritative gastroenterology bodies. Statistical analysis, including the Mann-Whitney U test, was conducted to assess the significance of differences in ratings.

Results ChatGPT 4.0 demonstrated higher reliability and accuracy in its responses than Google Bard, as indicated by higher mean ratings and statistically significant p-values in hypothesis testing. However, limitations in the data structure, such as the inability to conduct detailed correlation analysis, were noted.

Conclusion The study concludes that ChatGPT 4.0 outperforms Google Bard in providing reliable and accurate responses to gastroenterology-related queries, underscoring the potential of AI tools like ChatGPT in enhancing healthcare delivery. The study also highlights the need for a broader and more diverse assessment of AI capabilities in healthcare to fully leverage their potential in clinical practice.
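As a minimal sketch of the rating comparison described in the Methods above, the Python snippet below applies a Mann-Whitney U test (scipy.stats.mannwhitneyu), the nonparametric test the study names for comparing ordinal Likert ratings between the two tools. The ratings themselves are invented placeholders, not the study's data.

from scipy import stats

# Placeholder 5-point Likert ratings of each tool's responses
# (higher = more reliable and accurate); invented for illustration.
chatgpt_ratings = [5, 4, 5, 4, 5, 4, 4, 5, 3, 5]
bard_ratings = [3, 4, 3, 2, 4, 3, 3, 4, 2, 3]

# Two-sided Mann-Whitney U test on the two rating samples.
u_stat, p_value = stats.mannwhitneyu(chatgpt_ratings, bard_ratings,
                                     alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.3g}")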
