Can large language models provide secondary reliable opinion on treatment options for dermatological diseases?
Iqbal, Usman; Lee, Leon Tsung-Ju; Rahmanti, Annisa Ristya; Celi, Leo Anthony; Li, Yu-Chuan Jack.
Affiliation
  • Iqbal U; School of Population Health, Faculty of Medicine and Health, University of New South Wales (UNSW), Sydney, NSW 2052, Australia.
  • Lee LT; Department of Health, Tasmania 7000, Australia.
  • Rahmanti AR; Global Health and Health Security Department, College of Public Health, Taipei Medical University, Taipei 110, Taiwan.
  • Celi LA; Graduate Institute of Clinical Medicine, Taipei Medical University, Taipei 110, Taiwan.
  • Li YJ; Department of Dermatology, Taipei Medical University Hospital, Taipei Medical University, Taipei 110, Taiwan.
J Am Med Inform Assoc ; 31(6): 1341-1347, 2024 May 20.
Article in En | MEDLINE | ID: mdl-38578616
ABSTRACT

OBJECTIVE:

To investigate the consistency and reliability of medication recommendations provided by ChatGPT for common dermatological conditions, highlighting the potential for ChatGPT to offer second opinions in patient treatment while also delineating possible limitations.

MATERIALS AND METHODS:

In this mixed-methods study, conducted in April 2023, we used survey questions to elicit drug recommendations from ChatGPT and compared them with data from secondary databases, namely Taiwan's National Health Insurance Research Database and a US medical center database, with validation by dermatologists. The methodology included preprocessing queries, executing them multiple times, and evaluating ChatGPT responses against the databases and dermatologist judgment. The ChatGPT-generated responses were analyzed statistically in a disease-drug matrix, considering disease-medication associations (Q-value) and expert evaluation.
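To illustrate the workflow described above, the following is a minimal sketch of how repeated queries could be aggregated into a disease-drug matrix for later comparison against the reference databases and dermatologist review. The ask_chatgpt() stub, the canned responses, and the drug-extraction step are hypothetical placeholders, not the authors' code or the actual ChatGPT interface used in April 2023.

```python
from collections import Counter, defaultdict

# Hypothetical stand-in for the ChatGPT interface used in the study.
# Returns canned responses so the sketch runs end to end without API access.
def ask_chatgpt(disease: str) -> str:
    canned = {
        "acne vulgaris": "Recommended: benzoyl peroxide (D10AE01), adapalene (D10AD03).",
        "atopic dermatitis": "Recommended: hydrocortisone (D07AA02), tacrolimus (D11AH01).",
    }
    return canned.get(disease, "Recommended: none.")

def extract_drugs(response: str) -> list[str]:
    # Illustrative parser: pull drug names that precede the parenthesized ATC codes.
    drugs = []
    for part in response.split("Recommended:")[-1].split(","):
        name = part.split("(")[0].strip().rstrip(".").lower()
        if name and name != "none":
            drugs.append(name)
    return drugs

def build_disease_drug_matrix(diseases: list[str], runs: int = 5) -> dict[str, Counter]:
    """Query each disease several times and count how often each drug is suggested."""
    matrix: dict[str, Counter] = defaultdict(Counter)
    for disease in diseases:
        for _ in range(runs):  # repeated executions, mirroring the study design
            for drug in extract_drugs(ask_chatgpt(disease)):
                matrix[disease][drug] += 1
    return matrix

if __name__ == "__main__":
    matrix = build_disease_drug_matrix(["acne vulgaris", "atopic dermatitis"])
    for disease, counts in matrix.items():
        print(disease, dict(counts))
```

The resulting counts per disease-drug pair are what would then be scored against database-derived association values and expert ratings.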

RESULTS:

ChatGPT achieved a 98.87% dermatologist approval rate for common dermatological medication recommendations. We evaluated its drug suggestions using the Q-value, showing that agreement based on human expert validation surpassed agreement based on Q-value cutoffs. When the cutoff value for disease-medication associations was varied, a cutoff of 3 achieved 95.14% accurate prescriptions, a cutoff of 5 yielded 85.42%, and a cutoff of 10 resulted in 72.92%. While ChatGPT offered accurate drug advice, it occasionally included incorrect ATC codes, leading to issues such as incorrect drug use and type, nonexistent codes, repeated errors, and incomplete medication codes.
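The cutoff analysis can be sketched as follows, assuming the Q-value is a precomputed disease-medication association score looked up from the reference databases; the pairs and scores below are placeholders, not study data.

```python
# Placeholder Q-values for (disease, drug) pairs; in the study these association
# scores came from the reference databases, not from this toy dictionary.
q_values = {
    ("acne vulgaris", "benzoyl peroxide"): 12.0,
    ("acne vulgaris", "adapalene"): 7.5,
    ("atopic dermatitis", "hydrocortisone"): 9.0,
    ("atopic dermatitis", "tacrolimus"): 4.2,
}

# ChatGPT-recommended (disease, drug) pairs to be scored against the cutoffs.
recommendations = list(q_values.keys())

def agreement_at_cutoff(pairs, scores, cutoff):
    """Fraction of recommended pairs whose association score meets the cutoff."""
    hits = sum(1 for p in pairs if scores.get(p, 0.0) >= cutoff)
    return hits / len(pairs) if pairs else 0.0

for cutoff in (3, 5, 10):
    print(f"cutoff {cutoff}: {agreement_at_cutoff(recommendations, q_values, cutoff):.2%}")
```

The reported accuracies (95.14%, 85.42%, and 72.92% at cutoffs 3, 5, and 10) would correspond to this fraction computed over the study's full disease-drug matrix rather than these illustrative pairs.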

CONCLUSION:

ChatGPT can provide medication recommendations as a second opinion in dermatology treatment, but its reliability and comprehensiveness need refinement for greater accuracy. In the future, integrating a medical domain-specific knowledge base for training, together with ongoing optimization, could enhance the precision of ChatGPT's results.

Full text: 1 Collections: 01-international Database: MEDLINE Main subject: Skin Diseases Limit: Humans Country/Region as subject: Asia Language: En Journal: J Am Med Inform Assoc Journal subject: MEDICAL INFORMATICS Year of publication: 2024 Document type: Article Country of affiliation: Australia