1.
Cureus ; 16(2): e53441, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38435177

ABSTRACT

Introduction: Uncontrolled hypertension significantly contributes to the development and deterioration of various medical conditions, such as myocardial infarction, chronic kidney disease, and cerebrovascular events. Despite hypertension being the most common preventable risk factor for all-cause mortality, only a fraction of affected individuals keep their blood pressure in the desired range. In recent times, there has been growing reliance on online platforms for medical information. While these provide a convenient source of information, distinguishing reliable from unreliable information can be daunting for the layperson, and false information can hinder timely diagnosis and management of medical conditions. The surge in accessibility of generative artificial intelligence (GeAI) technology has led to its increased use for obtaining health-related information. This has sparked debate among healthcare providers about the potential for misuse and misinformation, while recognizing the role of GeAI in improving health literacy. This study aims to investigate the accuracy of AI-generated information specifically related to hypertension and to explore the reproducibility of the information GeAI provides.

Method: A nonhuman-subject qualitative study was devised to evaluate the accuracy of information provided by ChatGPT regarding hypertension and its secondary complications. Frequently asked questions on hypertension were compiled by three study staff, internal medicine residents at an ACGME-accredited program, and then reviewed by a physician experienced in treating hypertension, resulting in a final set of 100 questions. Each question was posed to ChatGPT three times, once by each staff member, and the majority response was then assessed against the recommended guidelines. A board-certified internal medicine physician with over eight years of experience further reviewed the responses and classified each as appropriate (in line with clinical recommendations) or inappropriate (containing errors). Descriptive statistical analysis was employed to assess ChatGPT's responses for accuracy and reproducibility.

Result: An initial pool of 130 questions was gathered, from which a final set of 100 questions was selected for this study. When assessed against accepted standard responses, ChatGPT's answers were appropriate in 92.5% of cases and inappropriate in 7.5%. Furthermore, ChatGPT had a reproducibility score of 93%, meaning it consistently produced answers conveying similar meanings across multiple runs.

Conclusion: ChatGPT showed commendable accuracy in addressing commonly asked questions about hypertension. These results underscore the potential of GeAI to provide valuable information to patients. However, continued research and refinement are essential to further evaluate the reliability and broader applicability of ChatGPT within the medical field.
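The scoring procedure the abstract describes (three runs per question, a majority response judged appropriate or inappropriate, and a reproducibility check across runs) can be sketched as follows. This is a minimal illustration, not the study's actual analysis code; the data layout, labels, and function names are assumptions.

```python
from collections import Counter

def majority_response(runs):
    """Pick the response given in the majority of the three runs."""
    return Counter(runs).most_common(1)[0][0]

def evaluate(questions):
    """Compute accuracy (% of majority responses judged appropriate) and
    reproducibility (% of questions where all three runs agreed in meaning)."""
    n = len(questions)
    appropriate = sum(q["judgement"] == "appropriate" for q in questions)
    reproducible = sum(len(set(q["runs"])) == 1 for q in questions)
    return 100 * appropriate / n, 100 * reproducible / n

# Toy data: four questions, three runs each. The letter labels stand in for
# semantically summarized responses; everything here is illustrative.
qs = [
    {"runs": ["A", "A", "A"], "judgement": "appropriate"},
    {"runs": ["A", "B", "A"], "judgement": "appropriate"},
    {"runs": ["C", "C", "C"], "judgement": "inappropriate"},
    {"runs": ["D", "D", "D"], "judgement": "appropriate"},
]
accuracy, reproducibility = evaluate(qs)
print(majority_response(qs[1]["runs"]))  # "A" — the label seen in two of three runs
```

On this toy data both metrics come out to 75%; the study's reported 92.5% accuracy and 93% reproducibility follow the same counting logic over its 100 questions.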

2.
Cureus ; 15(11): e48919, 2023 Nov.
Article in English | MEDLINE | ID: mdl-38024047

ABSTRACT

Introduction and aim: The surging incidence of type 2 diabetes has become a growing concern for the healthcare sector. This chronic ailment, characterized by a complex blend of genetic and lifestyle determinants, has increased notably in recent times, exerting substantial pressure on healthcare resources. As more individuals turn to online platforms for health guidance and embrace Chat Generative Pre-trained Transformer (ChatGPT; San Francisco, CA: OpenAI), a text-generating AI (TGAI), for insights into their well-being, evaluating its effectiveness and reliability becomes crucial. This research primarily aimed to evaluate the correctness of TGAI responses to type 2 diabetes (T2DM) inquiries posed via ChatGPT. It further aimed to examine the consistency of TGAI in addressing common queries about T2DM complications for patient education.

Material and methods: Questions on T2DM were formulated by experienced physicians and screened by research personnel before being submitted to ChatGPT. Each question was posed three times, and the collected answers were summarized. The responses were then sorted by two seasoned physicians into three categories: (a) appropriate, (b) inappropriate, and (c) unreliable. In instances of differing opinions, a third physician was consulted to reach consensus.

Results: From the initial set of 110 T2DM questions, 40 were dismissed by the experts as not relevant, leaving a final count of 70. An overwhelming 98.5% of the AI's answers were judged appropriate, underscoring its reliability relative to traditional online search engines. Nonetheless, the 1.5% rate of inappropriate responses underlines the importance of ongoing AI improvement and strict adherence to medical protocols.

Conclusion: TGAI provides medical information of high quality and reliability. This study underscores TGAI's effectiveness in delivering reliable information about T2DM, with 98.5% of responses aligning with the standard of care. These results hold promise for integrating AI platforms as supplementary tools to enhance patient education and outcomes.
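The categorization step above (two independent physician raters, with a third consulted only on disagreement) amounts to a simple adjudication rule, sketched below. The function name and validation are illustrative assumptions, not details taken from the study.

```python
CATEGORIES = {"appropriate", "inappropriate", "unreliable"}

def adjudicate(rating_a, rating_b, tiebreaker=None):
    """Return the consensus category for one question.

    Two raters classify independently; when they disagree,
    a third rater's judgement decides.
    """
    for r in (rating_a, rating_b):
        if r not in CATEGORIES:
            raise ValueError(f"unknown category: {r}")
    if rating_a == rating_b:
        return rating_a
    if tiebreaker is None:
        raise ValueError("raters disagree; a third rater is required")
    return tiebreaker

# Agreement case: no third rater needed
print(adjudicate("appropriate", "appropriate"))  # appropriate
```

Requiring the tiebreaker only on disagreement mirrors the study's protocol and keeps the third physician's workload limited to contested questions.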
