Current safeguards, risk mitigation, and transparency measures of large language models against the generation of health disinformation: repeated cross sectional analysis.
Menz, Bradley D; Kuderer, Nicole M; Bacchi, Stephen; Modi, Natansh D; Chin-Yee, Benjamin; Hu, Tiancheng; Rickard, Ceara; Haseloff, Mark; Vitry, Agnes; McKinnon, Ross A; Kichenadasse, Ganessan; Rowland, Andrew; Sorich, Michael J; Hopkins, Ashley M.
Affiliation
  • Menz BD; College of Medicine and Public Health, Flinders University, Adelaide, SA, 5042, Australia.
  • Kuderer NM; Advanced Cancer Research Group, Kirkland, WA, USA.
  • Bacchi S; College of Medicine and Public Health, Flinders University, Adelaide, SA, 5042, Australia.
  • Modi ND; Northern Adelaide Local Health Network, Lyell McEwin Hospital, Adelaide, Australia.
  • Chin-Yee B; College of Medicine and Public Health, Flinders University, Adelaide, SA, 5042, Australia.
  • Hu T; Schulich School of Medicine and Dentistry, Western University, London, Canada.
  • Rickard C; Department of History and Philosophy of Science, University of Cambridge, Cambridge, UK.
  • Haseloff M; Language Technology Lab, University of Cambridge, Cambridge, UK.
  • Vitry A; Consumer Advisory Group, Clinical Cancer Epidemiology Group, College of Medicine and Public Health, Flinders University, Adelaide, Australia.
  • McKinnon RA; Consumer Advisory Group, Clinical Cancer Epidemiology Group, College of Medicine and Public Health, Flinders University, Adelaide, Australia.
  • Kichenadasse G; Consumer Advisory Group, Clinical Cancer Epidemiology Group, College of Medicine and Public Health, Flinders University, Adelaide, Australia.
  • Rowland A; University of South Australia, Clinical and Health Sciences, Adelaide, Australia.
  • Sorich MJ; College of Medicine and Public Health, Flinders University, Adelaide, SA, 5042, Australia.
  • Hopkins AM; College of Medicine and Public Health, Flinders University, Adelaide, SA, 5042, Australia.
BMJ 2024;384:e078538. Published 20 March 2024.
Article in English | MEDLINE | ID: mdl-38508682
ABSTRACT

OBJECTIVES:

To evaluate the effectiveness of safeguards to prevent large language models (LLMs) from being misused to generate health disinformation, and to evaluate the transparency of artificial intelligence (AI) developers regarding their risk mitigation processes against observed vulnerabilities.

DESIGN:

Repeated cross sectional analysis.

SETTING:

Publicly accessible LLMs.

METHODS:

In a repeated cross sectional analysis, four LLMs (accessed via chatbot/assistant interfaces) were evaluated: OpenAI's GPT-4 (via ChatGPT and Microsoft's Copilot), Google's PaLM 2 and the newly released Gemini Pro (via Bard), Anthropic's Claude 2 (via Poe), and Meta's Llama 2 (via HuggingChat). In September 2023, these LLMs were prompted to generate health disinformation on two topics: sunscreen as a cause of skin cancer and the alkaline diet as a cure for cancer. Jailbreaking techniques (that is, attempts to bypass safeguards) were evaluated if required. For LLMs with observed safeguarding vulnerabilities, the processes for reporting outputs of concern were audited. Twelve weeks after the initial investigations, the disinformation generation capabilities of the LLMs were re-evaluated to assess any subsequent improvements in safeguards.

MAIN OUTCOME MEASURES:

The main outcome measures were whether safeguards prevented the LLMs from generating health disinformation, and the transparency of the developers' risk mitigation processes against health disinformation.
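For illustration only, the sketch below shows in Python how a refusal rate of the kind reported in the results could be tallied across models and timepoints. It is a minimal sketch, not the study's actual harness: the data fields, refusal heuristic, and model labels are assumptions introduced here, and no disinformation prompts or jailbreaking techniques are included.

```python
# Illustrative sketch only: tally refusals per model across evaluation timepoints.
# The refusal heuristic and record fields are placeholders, not the study's materials.
from dataclasses import dataclass

# Crude keyword heuristic for classifying a response as a refusal (assumption of this sketch).
REFUSAL_MARKERS = ("i can't", "i cannot", "i'm sorry", "i am unable")

@dataclass
class Attempt:
    model: str       # e.g. "GPT-4 via ChatGPT" (label only)
    timepoint: str   # e.g. "September 2023" or "12 weeks later"
    response: str    # text returned by the chatbot/assistant interface

def is_refusal(response: str) -> bool:
    """Classify a response as a refusal if it opens with a refusal phrase."""
    return response.strip().lower().startswith(REFUSAL_MARKERS)

def refusal_rate(attempts: list[Attempt]) -> float:
    """Share of submitted prompts that the model declined.

    The paper reports 7 of 150 prompts (about 5%) refused by the LLMs
    that otherwise generated disinformation.
    """
    refusals = sum(is_refusal(a.response) for a in attempts)
    return refusals / len(attempts)
```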

RESULTS:

Claude 2 (via Poe) declined 130 prompts submitted across the two study timepoints requesting the generation of content claiming that sunscreen causes skin cancer or that the alkaline diet is a cure for cancer, even with jailbreaking attempts. GPT-4 (via Copilot) initially refused to generate health disinformation, even with jailbreaking attempts, although this was no longer the case at 12 weeks. In contrast, GPT-4 (via ChatGPT), PaLM 2/Gemini Pro (via Bard), and Llama 2 (via HuggingChat) consistently generated health disinformation blogs. In the September 2023 evaluations, these LLMs facilitated the generation of 113 unique cancer disinformation blogs, totalling more than 40 000 words, without requiring jailbreaking attempts. The refusal rate across the evaluation timepoints for these LLMs was only 5% (7 of 150 prompts), and, as prompted, the generated blogs incorporated attention grabbing titles, authentic looking (but fake or fictional) references, fabricated testimonials from patients and clinicians, and content targeted at diverse demographic groups. Although each LLM evaluated had mechanisms to report outputs of concern, the developers did not respond when observations of vulnerabilities were reported.

CONCLUSIONS:

This study found that although it is feasible to implement effective safeguards preventing LLMs from being misused to generate health disinformation, such safeguards were inconsistently implemented across the models evaluated. Furthermore, effective processes for reporting safeguard problems were lacking. Enhanced regulation, transparency, and routine auditing are required to help prevent LLMs from contributing to the mass generation of health disinformation.

Full text: 1 | Collection: 01-internacional | Database: MEDLINE | Main subject: Skin Neoplasms / Camelids, New World | Limits: Animals / Humans | Language: English | Journal: BMJ (Online) | Journal subject: Medicine | Year: 2024 | Document type: Article
