Results 1 - 13 of 13
1.
Healthcare (Basel) ; 12(15)2024 Aug 05.
Article in English | MEDLINE | ID: mdl-39120251

ABSTRACT

BACKGROUND: In recent years, the integration of large language models (LLMs) into healthcare has emerged as a revolutionary approach to enhancing doctor-patient communication, particularly in the management of diseases such as prostate cancer. METHODS: Our paper evaluated the effectiveness of three prominent LLMs-ChatGPT (3.5), Gemini (Pro), and Co-Pilot (the free version)-against the official Romanian Patient's Guide on prostate cancer. Employing a randomized and blinded method, our study engaged eight medical professionals to assess the responses of these models based on accuracy, timeliness, comprehensiveness, and user-friendliness. The primary objective was to explore whether LLMs, when operating in Romanian, offer comparable or superior performance to the Patient's Guide, considering their potential to personalize communication and enhance informational accessibility for patients. RESULTS: Results indicated that LLMs, particularly ChatGPT, generally provided more accurate and user-friendly information than the Guide. CONCLUSIONS: The findings suggest significant potential for LLMs to enhance healthcare communication by providing accurate and accessible information. However, variability in performance across models underscores the need for tailored implementation strategies. We highlight the importance of integrating LLMs with a nuanced understanding of their capabilities and limitations to optimize their use in clinical settings.

2.
Cureus ; 16(7): e63865, 2024 Jul.
Article in English | MEDLINE | ID: mdl-39099896

ABSTRACT

BACKGROUND: Artificial intelligence (AI) is a burgeoning field that has grown in popularity over the past few years, coinciding with the public release of large language model (LLM)-driven chatbots. These chatbots, such as ChatGPT, can be engaged directly in conversation, allowing users to ask questions or issue other commands. Because LLMs are trained on large amounts of text data, they can often answer factual questions, which has led to their use as a source for medical inquiries. This study assesses the readability of patient education materials on cardiac catheterization generated by four of the most common chatbots: ChatGPT, Microsoft Copilot, Google Gemini, and Meta AI. METHODOLOGY: A set of 10 questions regarding cardiac catheterization was developed using website-based patient education materials on the topic. These questions were then posed in consecutive order to each of the four chatbots. The Flesch Reading Ease Score (FRES) was used to assess readability. Readability grade levels were assessed using six tools: Flesch-Kincaid Grade Level (FKGL), Gunning Fog Index (GFI), Coleman-Liau Index (CLI), Simple Measure of Gobbledygook (SMOG) Index, Automated Readability Index (ARI), and FORCAST Grade Level. RESULTS: The mean FRES across all four chatbots was 40.2, while overall mean grade levels were 11.2, 13.7, 13.7, 13.3, 11.2, and 11.6 for the FKGL, GFI, CLI, SMOG, ARI, and FORCAST indices, respectively. Mean reading grade levels across the six tools were 14.8 for ChatGPT, 12.3 for Microsoft Copilot, 13.1 for Google Gemini, and 9.6 for Meta AI. FRES values for the four chatbots were 31, 35.8, 36.4, and 57.7, respectively. CONCLUSIONS: This study shows that AI chatbots are capable of answering medical questions about cardiac catheterization. However, the responses across the four chatbots had overall mean reading grade levels at the 11th-13th-grade level, depending on the tool used, placing the material at a high school or even college reading level, which far exceeds the recommended sixth-grade level for patient education materials. Readability also varied substantially between chatbots: across all six grade-level assessments, Meta AI had the lowest scores and ChatGPT generally had the highest.
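For readers unfamiliar with the metrics cited above, the Flesch formulas combine average sentence length and average syllables per word. The sketch below shows one way the FRES and FKGL could be computed; the syllable counter is a crude heuristic and real readability tools use more elaborate rules, so treat the output as approximate.

```python
import re

def count_syllables(word: str) -> int:
    # Rough heuristic: count vowel groups, ignore a trailing silent 'e'.
    word = word.lower()
    n = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def readability(text: str) -> dict:
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z']+", text)
    if not words:
        return {"FRES": 0.0, "FKGL": 0.0}
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / sentences   # average words per sentence
    spw = syllables / len(words)   # average syllables per word
    return {
        # Flesch Reading Ease: higher means easier (60-70 is roughly plain English).
        "FRES": 206.835 - 1.015 * wps - 84.6 * spw,
        # Flesch-Kincaid Grade Level: approximate U.S. school grade.
        "FKGL": 0.39 * wps + 11.8 * spw - 15.59,
    }

print(readability("Cardiac catheterization is a procedure used to examine the heart."))
```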

3.
J Eval Clin Pract ; 2024 Jul 03.
Article in English | MEDLINE | ID: mdl-38959373

ABSTRACT

RATIONALE: Artificial intelligence (AI) large language models (LLMs) are tools capable of generating human-like text responses to user queries across topics. The use of these language models in various medical contexts is currently being studied; however, their performance and content quality have not been evaluated in many specific medical fields. AIMS AND OBJECTIVES: This study aimed to compare the performance of the AI LLMs ChatGPT, Gemini and Copilot in providing information to parents about chronic kidney disease (CKD) and to compare the accuracy and quality of that information with a reference source. METHODS: Forty frequently asked questions about CKD were identified. The accuracy and quality of the answers were evaluated against the Kidney Disease: Improving Global Outcomes guidelines. The accuracy of the responses generated by the LLMs was assessed using F1, precision and recall scores, and their quality was evaluated using a five-point global quality score (GQS). RESULTS: ChatGPT and Gemini achieved high F1 scores of 0.89 and 1, respectively, in the diagnosis and lifestyle categories, demonstrating notable success in generating accurate responses. Both models also achieved high precision values in these categories. In terms of recall, all LLMs performed strongly in the diagnosis, treatment and lifestyle categories. Average GQS scores were 3.46 ± 0.55, 1.93 ± 0.63 and 2.02 ± 0.69 for Gemini, ChatGPT 3.5 and Copilot, respectively. In all categories, Gemini performed better than ChatGPT and Copilot. CONCLUSION: Although LLMs can provide parents with highly accurate information about CKD, they remain limited compared with a reference source. These limitations can lead to misinformation and potential misinterpretation, so patients and parents should exercise caution when using these models.
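As a point of reference for how the accuracy metrics above can be derived, the sketch below computes precision, recall, and F1 from hypothetical tallies of correct, incorrect, and omitted statements judged against a reference guideline; the counting scheme is an assumption, since the abstract does not specify how statements were tallied.

```python
def precision_recall_f1(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
    # tp: reference facts the model stated correctly
    # fp: statements the model made that are wrong or unsupported by the reference
    # fn: reference facts the model omitted
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if precision + recall else 0.0
    return precision, recall, f1

# Hypothetical tallies for one category (e.g. "diagnosis"):
print(precision_recall_f1(tp=16, fp=2, fn=0))
```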

4.
Cureus ; 16(6): e62471, 2024 Jun.
Article in English | MEDLINE | ID: mdl-39015855

ABSTRACT

PURPOSE: To evaluate the efficiency of three artificial intelligence (AI) chatbots (ChatGPT-3.5 (OpenAI, San Francisco, California, United States), Bing Copilot (Microsoft Corporation, Redmond, Washington, United States), and Google Gemini (Google LLC, Mountain View, California, United States)) in assisting the ophthalmologist in the diagnostic approach and management of challenging ophthalmic cases, and to compare their performance with that of a practicing human ophthalmic specialist. A secondary aim was to assess the short- and medium-term consistency of ChatGPT's responses. METHODS: Eleven ophthalmic case scenarios of variable complexity were presented to the AI chatbots and to an ophthalmic specialist in a stepwise fashion. Each was asked for advice on the initial differential diagnosis, the final diagnosis, further investigation, and management. One month later, the same process was repeated twice on the same day for ChatGPT only. RESULTS: The individual diagnostic performance of all three AI chatbots was inferior to that of the ophthalmic specialist; however, they provided useful complementary input in the diagnostic algorithm, especially ChatGPT and Bing Copilot. ChatGPT exhibited reasonable short- and medium-term consistency, with the mean Jaccard similarity coefficient of responses varying between 0.58 and 0.76. CONCLUSION: AI chatbots may act as useful assisting tools in the diagnosis and management of challenging ophthalmic cases; however, their responses should be scrutinized for potential inaccuracies, and they are by no means a replacement for consultation with an ophthalmic specialist.
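The Jaccard similarity coefficient reported above measures the overlap between two responses as the size of their intersection divided by the size of their union. A minimal sketch, assuming each response is reduced to a set of unique lowercase words (the study's actual tokenization is not described):

```python
def jaccard(a: str, b: str) -> float:
    # Jaccard similarity of two responses treated as sets of unique lowercase words.
    set_a, set_b = set(a.lower().split()), set(b.lower().split())
    if not set_a and not set_b:
        return 1.0
    return len(set_a & set_b) / len(set_a | set_b)

# Hypothetical initial and repeated responses to the same case:
first = "likely diagnosis is acute angle-closure glaucoma; check intraocular pressure"
repeat = "acute angle-closure glaucoma is the likely diagnosis; measure intraocular pressure"
print(round(jaccard(first, repeat), 2))
```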

5.
Bioengineering (Basel) ; 11(7)2024 Jun 27.
Article in English | MEDLINE | ID: mdl-39061736

ABSTRACT

This study assesses the effectiveness of chatbots powered by Large Language Models (LLMs)-ChatGPT 3.5, CoPilot, and Gemini-in delivering prostate cancer information, compared to the official Patient's Guide. Using 25 expert-validated questions, we conducted a comparative analysis to evaluate accuracy, timeliness, completeness, and understandability through a Likert scale. Statistical analyses were used to quantify the performance of each model. Results indicate that ChatGPT 3.5 consistently outperformed the other models, establishing itself as a robust and reliable source of information. CoPilot also performed effectively, albeit slightly less so than ChatGPT 3.5. Despite the strengths of the Patient's Guide, the advanced capabilities of LLMs like ChatGPT significantly enhance educational tools in healthcare. The findings underscore the need for ongoing innovation and improvement in AI applications within health sectors, especially considering the ethical implications underscored by the forthcoming EU AI Act. Future research should focus on investigating potential biases in AI-generated responses and their impact on patient outcomes.

6.
BMC Med Inform Decis Mak ; 24(1): 211, 2024 Jul 29.
Article in English | MEDLINE | ID: mdl-39075513

ABSTRACT

BACKGROUND: To evaluate the accuracy, reliability, quality, and readability of responses generated by ChatGPT-3.5, ChatGPT-4, Gemini, and Copilot in relation to orthodontic clear aligners. METHODS: Questions frequently asked by patients/laypersons about clear aligners were identified on websites using the Google search tool and posed to the ChatGPT-3.5, ChatGPT-4, Gemini, and Copilot AI models. Responses were assessed using a five-point Likert scale for accuracy, the modified DISCERN scale for reliability, the Global Quality Scale (GQS) for quality, and the Flesch Reading Ease Score (FRES) for readability. RESULTS: ChatGPT-4 responses had the highest mean Likert score (4.5 ± 0.61), followed by Copilot (4.35 ± 0.81), ChatGPT-3.5 (4.15 ± 0.75) and Gemini (4.1 ± 0.72); the differences between the models' Likert scores were not statistically significant (p > 0.05). Copilot had significantly higher modified DISCERN and GQS scores than Gemini, ChatGPT-4 and ChatGPT-3.5 (p < 0.05), and Gemini's modified DISCERN and GQS scores were significantly higher than those of ChatGPT-3.5 (p < 0.05). Gemini also had a significantly higher FRES than ChatGPT-4, Copilot and ChatGPT-3.5 (p < 0.05). The mean FRES was 38.39 ± 11.56 for ChatGPT-3.5, 43.88 ± 10.13 for ChatGPT-4 and 41.72 ± 10.74 for Copilot, indicating that these responses were difficult to read, whereas the mean FRES for Gemini was 54.12 ± 10.27, indicating that its responses were more readable than those of the other chatbots. CONCLUSIONS: All chatbot models provided generally accurate, moderately reliable answers of moderate to good quality to questions about clear aligners; however, the responses were difficult to read. ChatGPT, Gemini and Copilot have significant potential as patient information tools in orthodontics, but to be fully effective they need to be supplemented with more evidence-based information and improved readability.


Subject(s)
Artificial Intelligence , Orthodontics , Humans , Orthodontics/standards , Patient Education as Topic/methods , Patient Education as Topic/standards , Reproducibility of Results
7.
Cureus ; 16(5): e59457, 2024 May.
Article in English | MEDLINE | ID: mdl-38826991

ABSTRACT

Background The rapid advancements in natural language processing have brought about the widespread use of large language models (LLMs) across various medical domains. However, their effectiveness in specialized fields, such as naturopathy, remains relatively unexplored. Objective The study aimed to assess the capability of freely available LLM chatbots in providing naturopathy consultations for various types of diseases and disorders. Methods Five free LLMs (Gemini, Copilot, ChatGPT, Claude, and Perplexity) were presented with 20 clinical cases simulating real-world scenarios. Each case included the case details and questions pertinent to naturopathy. The responses were presented to three naturopathy doctors with more than five years of practice, who rated them on a five-point Likert-like scale for language fluency, coherence, accuracy, and relevancy. The average of these four attributes is termed perfection in this study. Results The overall scores of the LLMs were Gemini 3.81 ± 0.23, Copilot 4.34 ± 0.28, ChatGPT 4.43 ± 0.2, Claude 3.8 ± 0.26, and Perplexity 3.91 ± 0.28 (ANOVA F[3.034, 57.64] = 33.47, P < 0.0001). Together, they showed approximately 80% overall perfection in consultation. The average-measures intraclass correlation coefficient among the LLMs for the overall score was 0.463 (95% CI = -0.028 to 0.76), P = 0.03. Conclusion Although the LLM chatbots could help provide naturopathy and yoga treatment consultations with an overall fair level of perfection, their answers to a given case varied across chatbots, and agreement among them was very low.
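To illustrate how the "perfection" score and the between-model comparison above could be computed, the sketch below averages four simulated attribute ratings per case into a per-case score and runs a Friedman test across models as a non-parametric stand-in for the repeated-measures ANOVA reported; all ratings here are synthetic, not the study's data.

```python
import numpy as np
from scipy.stats import friedmanchisquare

# Hypothetical ratings: rows = 20 cases, columns = fluency, coherence, accuracy,
# relevancy, each already averaged over the three raters (1-5 Likert-like scale).
rng = np.random.default_rng(0)
ratings = {name: np.clip(rng.normal(mu, 0.4, size=(20, 4)), 1, 5)
           for name, mu in [("Gemini", 3.8), ("Copilot", 4.3), ("ChatGPT", 4.4),
                            ("Claude", 3.8), ("Perplexity", 3.9)]}

# "Perfection" per case = mean of the four attributes; report mean ± SD per model.
perfection = {name: r.mean(axis=1) for name, r in ratings.items()}
for name, p in perfection.items():
    print(f"{name}: {p.mean():.2f} ± {p.std(ddof=1):.2f}")

# Friedman test across the five models over the same 20 cases.
stat, p_value = friedmanchisquare(*perfection.values())
print(f"Friedman chi-square = {stat:.2f}, p = {p_value:.4f}")
```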

8.
Cureus ; 16(4): e57795, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38721180

ABSTRACT

Artificial Intelligence (AI) in healthcare marks a new era of innovation and efficiency, characterized by the emergence of sophisticated language models such as ChatGPT (OpenAI, San Francisco, CA, USA), Gemini Advanced (Google LLC, Mountain View, CA, USA), and Co-pilot (Microsoft Corp, Redmond, WA, USA). This review explores the transformative impact of these AI technologies on various facets of healthcare, from enhancing patient care and treatment protocols to revolutionizing medical research and tackling intricate health science challenges. ChatGPT, with its advanced natural language processing capabilities, leads the way in providing personalized mental health support and improving chronic condition management. Gemini Advanced extends the boundary of AI in healthcare through data analytics, facilitating early disease detection and supporting medical decision-making. Co-pilot, by integrating seamlessly with healthcare systems, optimizes clinical workflows and encourages a culture of innovation among healthcare professionals. Additionally, the review highlights the significant contributions of AI in accelerating medical research, particularly in genomics and drug discovery, thus paving the path for personalized medicine and more effective treatments. The pivotal role of AI in epidemiology, especially in managing infectious diseases such as COVID-19, is also emphasized, demonstrating its value in enhancing public health strategies. However, the integration of AI technologies in healthcare comes with challenges. Concerns about data privacy, security, and the need for comprehensive cybersecurity measures are discussed, along with the importance of regulatory compliance and transparent consent management to uphold ethical standards and patient autonomy. The review points out the necessity for seamless integration, interoperability, and the maintenance of AI systems' reliability and accuracy to fully leverage AI's potential in advancing healthcare.

9.
Cureus ; 16(5): e59960, 2024 May.
Article in English | MEDLINE | ID: mdl-38726360

ABSTRACT

Background Large language models (LLMs), such as ChatGPT-4, Gemini, and Microsoft Copilot, have been instrumental in various domains, including healthcare, where they enhance health literacy and aid in patient decision-making. Given the complexities involved in breast imaging procedures, accurate and comprehensible information is vital for patient engagement and compliance. This study aims to evaluate the readability and accuracy of the information provided by three prominent LLMs, ChatGPT-4, Gemini, and Microsoft Copilot, in response to frequently asked questions in breast imaging, assessing their potential to improve patient understanding and facilitate healthcare communication. Methodology We collected the most common questions on breast imaging from clinical practice and posed them to LLMs. We then evaluated the responses in terms of readability and accuracy. Responses from LLMs were analyzed for readability using the Flesch Reading Ease and Flesch-Kincaid Grade Level tests and for accuracy through a radiologist-developed Likert-type scale. Results The study found significant variations among LLMs. Gemini and Microsoft Copilot scored higher on readability scales (p < 0.001), indicating their responses were easier to understand. In contrast, ChatGPT-4 demonstrated greater accuracy in its responses (p < 0.001). Conclusions While LLMs such as ChatGPT-4 show promise in providing accurate responses, readability issues may limit their utility in patient education. Conversely, Gemini and Microsoft Copilot, despite being less accurate, are more accessible to a broader patient audience. Ongoing adjustments and evaluations of these models are essential to ensure they meet the diverse needs of patients, emphasizing the need for continuous improvement and oversight in the deployment of artificial intelligence technologies in healthcare.

10.
Patient Educ Couns ; 126: 108307, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38743965

ABSTRACT

OBJECTIVE: To evaluate artificial intelligence (AI) language models (ChatGPT-4, BARD, Microsoft Copilot) in simplifying radiology reports, assessing readability, understandability, actionability, and urgency classification. METHODS: This study evaluated the effectiveness of these AI models in translating radiology reports into patient-friendly language and providing understandable and actionable suggestions and urgency classifications. Thirty radiology reports were processed using the AI tools, and their outputs were assessed for readability (Flesch Reading Ease, Flesch-Kincaid Grade Level), understandability (PEMAT), and the accuracy of urgency classification. ANOVA and chi-square tests were performed to compare the models' performances. RESULTS: All three AI models successfully transformed medical jargon into more accessible language, with BARD showing superior readability scores. In terms of understandability, all models achieved scores above 70%, with ChatGPT-4 and BARD leading (p < 0.001, both). However, the AI models varied in the accuracy of their urgency recommendations, with no statistically significant difference between them (p = 0.284). CONCLUSION: AI language models have proven effective in simplifying radiology reports, thereby potentially improving patient comprehension and engagement in their health decisions. However, their variable accuracy in assessing the urgency of medical conditions based on radiology reports suggests a need for further refinement. PRACTICE IMPLICATIONS: Incorporating AI into radiology communication can empower patients, but further development is crucial for comprehensive and actionable patient support.
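The PEMAT understandability score referenced above is commonly reported as the percentage of applicable items rated "agree" by reviewers. A minimal sketch under that assumption, with hypothetical item ratings for one simplified report:

```python
def pemat_score(item_ratings: list[str]) -> float:
    # PEMAT items are rated "agree", "disagree", or "n/a"; the score is the
    # percentage of applicable (non-"n/a") items rated "agree".
    applicable = [r for r in item_ratings if r != "n/a"]
    if not applicable:
        return 0.0
    return 100.0 * sum(r == "agree" for r in applicable) / len(applicable)

# Hypothetical ratings for one AI-simplified radiology report:
print(pemat_score(["agree", "agree", "disagree", "agree", "n/a", "agree"]))  # -> 80.0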


Subject(s)
Artificial Intelligence , Comprehension , Humans , Radiology , Language
11.
J Med Syst ; 48(1): 38, 2024 Apr 03.
Article in English | MEDLINE | ID: mdl-38568432

ABSTRACT

The aim of the study is to evaluate and compare the quality and readability of responses generated by five different artificial intelligence (AI) chatbots-ChatGPT, Bard, Bing, Ernie, and Copilot-to the top searched queries of erectile dysfunction (ED). Google Trends was used to identify ED-related relevant phrases. Each AI chatbot received a specific sequence of 25 frequently searched terms as input. Responses were evaluated using DISCERN, Ensuring Quality Information for Patients (EQIP), and Flesch-Kincaid Grade Level (FKGL) and Reading Ease (FKRE) metrics. The top three most frequently searched phrases were "erectile dysfunction cause", "how to erectile dysfunction," and "erectile dysfunction treatment." Zimbabwe, Zambia, and Ghana exhibited the highest level of interest in ED. None of the AI chatbots achieved the necessary degree of readability. However, Bard exhibited significantly higher FKRE and FKGL ratings (p = 0.001), and Copilot achieved better EQIP and DISCERN ratings than the other chatbots (p = 0.001). Bard exhibited the simplest linguistic framework and posed the least challenge in terms of readability and comprehension, and Copilot's text quality on ED was superior to the other chatbots. As new chatbots are introduced, their understandability and text quality increase, providing better guidance to patients.


Subject(s)
Artificial Intelligence , Erectile Dysfunction , Male , Humans , Software , Benchmarking , Linguistics
12.
Sci Rep ; 14(1): 8233, 2024 04 08.
Article in English | MEDLINE | ID: mdl-38589613

ABSTRACT

With the release of ChatGPT at the end of 2022, a new era of thinking and technology use has begun. Artificial intelligence models (AIs) like Gemini (Bard), Copilot (Bing), and ChatGPT-3.5 have the potential to impact every aspect of our lives, including laboratory data interpretation. This study aimed to assess the accuracy of ChatGPT-3.5, Copilot, and Gemini responses in evaluating biochemical data. Biochemical laboratory data for ten simulated patients, including serum urea, creatinine, glucose, cholesterol, triglycerides, low-density lipoprotein (LDL-c), and high-density lipoprotein (HDL-c), in addition to HbA1c, were interpreted by the three AIs (Copilot, Gemini, and ChatGPT-3.5), and the interpretations were then evaluated by three raters. The study was carried out using two approaches: the first encompassed all biochemical data, and the second contained only the kidney function data. In the first approach, Copilot showed the highest level of accuracy, followed by Gemini and ChatGPT-3.5. The Friedman test with Dunn's post-hoc analysis revealed that Copilot had the highest mean rank; the pairwise comparisons showed significant differences for Copilot vs. ChatGPT-3.5 (P = 0.002) and vs. Gemini (P = 0.008). In the second approach, Copilot again showed the highest accuracy, with the highest mean rank in the Friedman test with Dunn's post-hoc analysis. The Wilcoxon signed-rank test showed no significant difference in Copilot's performance between the full laboratory dataset and the kidney-function-only dataset (P = 0.5). Copilot is more accurate in interpreting biochemical data than Gemini and ChatGPT-3.5, and its consistent responses across different data subsets highlight its reliability in this context.
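As a rough illustration of the interpretation task given to the chatbots, the sketch below flags simulated analyte values against placeholder adult reference intervals; both the intervals and the patient values are illustrative assumptions and do not come from the study, and real reference ranges vary by laboratory, sex, and age.

```python
# Illustrative adult reference intervals (conventional units); placeholders only.
REFERENCE_RANGES = {
    "urea_mg_dl": (7, 20),
    "creatinine_mg_dl": (0.6, 1.3),
    "glucose_fasting_mg_dl": (70, 99),
    "cholesterol_total_mg_dl": (0, 200),
    "triglycerides_mg_dl": (0, 150),
    "ldl_mg_dl": (0, 100),
    "hdl_mg_dl": (40, 1000),   # "above 40 desirable" modeled as a wide upper bound
    "hba1c_percent": (4.0, 5.6),
}

def interpret(panel: dict[str, float]) -> dict[str, str]:
    # Flag each analyte as low / normal / high relative to its reference interval.
    flags = {}
    for analyte, value in panel.items():
        low, high = REFERENCE_RANGES[analyte]
        flags[analyte] = "low" if value < low else "high" if value > high else "normal"
    return flags

simulated_patient = {"urea_mg_dl": 48, "creatinine_mg_dl": 2.1,
                     "glucose_fasting_mg_dl": 126, "cholesterol_total_mg_dl": 210,
                     "triglycerides_mg_dl": 180, "ldl_mg_dl": 130,
                     "hdl_mg_dl": 38, "hba1c_percent": 6.9}
print(interpret(simulated_patient))
```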


Subject(s)
Artificial Intelligence , Humans , Pilot Projects , Reproducibility of Results , Blood Urea Nitrogen , Creatinine
13.
Actas Urol Esp (Engl Ed) ; 48(5): 398-403, 2024 Jun.
Article in English, Spanish | MEDLINE | ID: mdl-38373482

ABSTRACT

INTRODUCTION AND OBJECTIVE: Generative artificial intelligence makes it possible to ask about medical pathologies in dialog boxes. Our objective was to analyze the quality of information about the most common urological pathologies provided by ChatGPT (OpenAI), BARD (Google), and Copilot (Microsoft). METHODS: We analyzed the information provided by the AIs on the following pathologies and their treatments: prostate cancer, kidney cancer, bladder cancer, urinary lithiasis, and benign prostatic hypertrophy (BPH). Questions in English and Spanish were posed in the dialog boxes; the answers were collected, analyzed with the DISCERN questionnaire, and rated for overall appropriateness. Responses concerning surgical procedures were additionally assessed with an informed consent questionnaire. RESULTS: The responses from the three chatbots explained the pathology, detailed risk factors, and described treatments. One difference is that BARD and Copilot cite external sources of information, whereas ChatGPT does not. The highest DISCERN scores, in absolute numbers, were obtained by Copilot; however, on the appropriateness scale its responses were not the most appropriate. The best scores for surgical treatment were obtained by BARD, followed by ChatGPT, and finally Copilot. CONCLUSIONS: The answers obtained from generative AI on urological diseases depended on the formulation of the question. The information provided had significant biases depending on the pathology, the language, and, above all, the chatbot consulted.


Subject(s)
Language , Urologic Diseases , Humans , Artificial Intelligence , Surveys and Questionnaires , Internet