ChatGPT in Answering Queries Related to Lifestyle-Related Diseases and Disorders.
Mondal, Himel; Dash, Ipsita; Mondal, Shaikat; Behera, Joshil Kumar.
Affiliation
  • Mondal H; Physiology, All India Institute of Medical Sciences, Deoghar, IND.
  • Dash I; Biochemistry, Saheed Laxman Nayak Medical College and Hospital, Koraput, IND.
  • Mondal S; Physiology, Raiganj Government Medical College and Hospital, Raiganj, IND.
  • Behera JK; Physiology, Nagaland Institute of Medical Science and Research, Kohima, IND.
Cureus; 15(11): e48296, 2023 Nov.
Article in En | MEDLINE | ID: mdl-38058315
Background: Lifestyle-related diseases and disorders have become a significant global health burden, yet a majority of the population ignores them or does not consult a doctor about them. An artificial intelligence (AI)-based large language model (LLM) such as ChatGPT (GPT-3.5) can generate customized responses to a user's queries and could therefore act as a virtual telehealth agent. Its capability to answer queries about lifestyle-related diseases and disorders has not been explored.

Objective: This study aimed to evaluate the effectiveness of ChatGPT, an LLM, in answering queries related to lifestyle-related diseases and disorders.

Methods: A set of 20 lifestyle-related disease or disorder cases covering a wide range of topics, such as obesity, diabetes, cardiovascular health, and mental health, was prepared, each with four questions. Each case and its questions were presented to ChatGPT, which was asked to answer them. Two physicians rated the content for accuracy on a three-point Likert-like scale: accurate (2), partially accurate (1), or inaccurate (0). The content was further rated as adequate (2), inadequate (1), or misguiding (0) to assess its suitability as guidance for patients. The readability of the text was analyzed with the Flesch-Kincaid Ease Score (FKES).

Results: Across the 20 cases, the mean accuracy score was 1.83±0.37 and the mean guidance score was 1.9±0.21; both were higher than the hypothetical median of 1.5 (p=0.004 and p<0.0001, respectively). ChatGPT answered the questions in a natural tone in 11 cases and in a positive tone in nine. With a mean FKES of 27.8±5.74, the text was understandable to college graduates.

Conclusion: The content-accuracy analysis showed that ChatGPT provided reasonably accurate information in the majority of cases, successfully addressing queries related to lifestyle-related diseases and disorders. Patients can therefore obtain initial guidance about their condition when they have little time to consult a doctor or are waiting for an appointment.
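The statistical comparison and readability scoring described in Methods can be illustrated with a short Python sketch. This is a hedged reconstruction, not the authors' analysis code: the rating values and sample text are hypothetical placeholders, the abstract does not state which signed-rank implementation was used, and the FKES is assumed to be the standard Flesch Reading Ease formula as implemented in the textstat package.

import numpy as np
import textstat
from scipy.stats import wilcoxon

# Hypothetical accuracy ratings for 20 cases on the study's 0-2 scale
# (accurate = 2, partially accurate = 1, inaccurate = 0); placeholder values.
accuracy = np.array([2, 2, 2, 1, 2, 2, 2, 2, 1, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 2])

# One-sample Wilcoxon signed-rank test against the hypothetical median of 1.5,
# implemented as a signed-rank test on the differences from 1.5.
stat, p = wilcoxon(accuracy - 1.5)
print(f"mean = {accuracy.mean():.2f} +/- {accuracy.std(ddof=1):.2f}, p = {p:.4f}")

# Flesch Reading Ease = 206.835 - 1.015*(words/sentence) - 84.6*(syllables/word);
# scores below ~30 indicate college-graduate-level text, consistent with the
# reported mean FKES of 27.8.
sample_answer = "A placeholder standing in for one of ChatGPT's answers."
print("Flesch Reading Ease:", textstat.flesch_reading_ease(sample_answer))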
Full text: 1 Database: MEDLINE Language: En Year of publication: 2023 Document type: Article
