Artificial Intelligence and Public Health: Evaluating ChatGPT Responses to Vaccination Myths and Misconceptions.
Deiana, Giovanna; Dettori, Marco; Arghittu, Antonella; Azara, Antonio; Gabutti, Giovanni; Castiglia, Paolo.
Affiliation
  • Deiana G; Department of Biomedical Sciences, University of Sassari, 07100 Sassari, Italy.
  • Dettori M; Department of Medical, Surgical and Experimental Sciences, University Hospital of Sassari, 07100 Sassari, Italy.
  • Arghittu A; Department of Medical, Surgical and Experimental Sciences, University Hospital of Sassari, 07100 Sassari, Italy.
  • Azara A; Department of Medicine, Surgery and Pharmacy, University of Sassari, 07100 Sassari, Italy.
  • Gabutti G; Department of Restorative, Pediatric and Preventive Dentistry, University of Bern, 3012 Bern, Switzerland.
  • Castiglia P; Department of Medicine, Surgery and Pharmacy, University of Sassari, 07100 Sassari, Italy.
Vaccines (Basel) ; 11(7)2023 Jul 07.
Article in English | MEDLINE | ID: mdl-37515033
Artificial intelligence (AI) tools, such as ChatGPT, are the subject of intense debate regarding their possible applications in contexts such as health care. This study evaluates the Correctness, Clarity, and Exhaustiveness of the answers provided by ChatGPT on the topic of vaccination. The World Health Organization's 11 "myths and misconceptions" about vaccinations were administered to both the free (GPT-3.5) and paid (GPT-4.0) versions of ChatGPT. The AI tool's responses were evaluated qualitatively and quantitatively, with reference to the myths and misconceptions provided by the WHO, independently by two expert Raters. The agreement between the Raters was significant for both versions (p of K < 0.05). Overall, ChatGPT responses were easy to understand and 85.4% accurate, although one of the questions was misinterpreted. Qualitatively, the GPT-4.0 responses were superior to the GPT-3.5 responses in terms of Correctness, Clarity, and Exhaustiveness (Δ = 5.6%, 17.9%, and 9.3%, respectively). The study shows that, if appropriately questioned, AI tools can represent a useful aid in the health care field. However, when consulted by non-expert users without the support of expert medical advice, these tools are not free from the risk of eliciting misleading responses. Moreover, given the existing social divide in information access, the improved accuracy of answers from the paid version raises further ethical issues.
Full text: 1 Database: MEDLINE Language: English Year of publication: 2023 Document type: Article