Gender Bias in Artificial Intelligence-Written Letters of Reference.
Farlow, Janice L; Abouyared, Marianne; Rettig, Eleni M; Kejner, Alexandra; Patel, Rusha; Edwards, Heather A.
Affiliation
  • Farlow JL; Department of Otolaryngology-Head and Neck Surgery, Indiana University School of Medicine, Indianapolis, Indiana, USA.
  • Abouyared M; Department of Otolaryngology-Head and Neck Surgery, University of California Davis, Sacramento, California, USA.
  • Rettig EM; Department of Otolaryngology-Head and Neck Surgery, Harvard Medical School, Boston, Massachusetts, USA.
  • Kejner A; Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, Charleston, South Carolina, USA.
  • Patel R; Department of Otolaryngology-Head and Neck Surgery, University of Oklahoma College of Medicine, Oklahoma City, Oklahoma, USA.
  • Edwards HA; Department of Otolaryngology-Head and Neck Surgery, Boston University Chobanian & Avedisian School of Medicine, Boston, Massachusetts, USA.
Article in En | MEDLINE | ID: mdl-38716794
ABSTRACT

OBJECTIVE:

Letters of reference (LORs) play an important role in postgraduate residency applications. Human-written LORs have been shown to carry implicit gender bias, such as using more agentic versus communal words for men, and more frequent doubt-raisers and references to appearance and personal life for women. This can result in inequitable access to residency opportunities for women. Given the gendered language often unconsciously inserted into human-written LORs, we sought to identify whether LORs generated by artificial intelligence exhibit gender bias.

STUDY DESIGN:

Observational study.

SETTING:

Multicenter academic collaboration.

METHODS:

Prompts describing otherwise identical male and female applicants to Otolaryngology residency positions were created and provided to ChatGPT to generate LORs. These letters were analyzed with a gender-bias calculator that assesses the proportion of male- versus female-associated words.
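The specific calculator and lexicon used in the study are not detailed in this abstract; the following Python sketch only illustrates the general approach of scoring a letter by the proportion of gendered words that are male-associated, using short, purely illustrative word lists that do not reproduce the study's tool.

import re

# Illustrative word lists only; the study's calculator uses its own lexicon.
MALE_ASSOCIATED = {"leader", "competitive", "confident", "independent", "analytical"}
FEMALE_ASSOCIATED = {"compassionate", "nurturing", "supportive", "warm", "collaborative"}

def gender_bias_score(letter_text: str) -> dict:
    """Count male- and female-associated words and return the percentage
    of gendered words that are male-associated."""
    words = re.findall(r"[a-z]+", letter_text.lower())
    male = sum(w in MALE_ASSOCIATED for w in words)
    female = sum(w in FEMALE_ASSOCIATED for w in words)
    total = male + female
    pct_male = 100.0 * male / total if total else 0.0
    return {"male": male, "female": female, "percent_male_associated": pct_male}

# Example: gender_bias_score("She is a compassionate and confident leader.")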

RESULTS:

Regardless of gender, school, research, or other activities, all LORs generated by ChatGPT showed a bias toward male-associated words. There was no significant difference in the percentage of male-biased words between letters written for women versus men (39.15% vs 37.85%, P = .77). Significant differences in gender bias were found across each of the other discrete variables (school, research, and other activities).

CONCLUSION:

While ChatGPT-generated LORs all showed a male bias in the language used, there was no difference in gender bias between letters produced using traditionally masculine versus feminine names and pronouns. Other variables did, however, induce gendered language. ChatGPT is a promising tool for drafting LORs, but users must be aware of potential biases introduced or propagated through these technologies.
Full text: 1 Database: MEDLINE Language: En Year: 2024 Type: Article