Association of reviewer experience with discriminating human-written versus ChatGPT-written abstracts.
Levin, Gabriel; Pareja, Rene; Viveros-Carreño, David; Sanchez Diaz, Emmanuel; Yates, Elise Mann; Zand, Behrouz; Ramirez, Pedro T.
Affiliation
  • Levin G; Division of Gynecologic Oncology, Jewish General Hospital, McGill University, Montreal, Quebec, Canada. Gabriel.levin2@mail.mcgill.ca.
  • Pareja R; Gynecologic Oncology, Clinica ASTORGA, Medellin, and Instituto Nacional de Cancerología, Bogotá, Colombia.
  • Viveros-Carreño D; Unidad Ginecología Oncológica, Grupo de Investigación GIGA, Centro de Tratamiento e Investigación sobre Cáncer Luis Carlos Sarmiento Angulo - CTIC, Bogotá, Colombia.
  • Sanchez Diaz E; Department of Gynecologic Oncology, Clínica Universitaria Colombia, Bogotá, Colombia.
  • Yates EM; Universidad Pontificia Bolivariana Clinica Universitaria Bolivariana, Medellin, Colombia.
  • Zand B; Obstetrics and Gynecology, Houston Methodist Hospital, Houston, Texas, USA.
  • Ramirez PT; Gynecologic Oncology, Houston Methodist, Shenandoah, Texas, USA.
Int J Gynecol Cancer; 34(5): 669-674, 2024 May 06.
Article in En | MEDLINE | ID: mdl-38627032
ABSTRACT

OBJECTIVE:

To determine if reviewer experience impacts the ability to discriminate between human-written and ChatGPT-written abstracts.

METHODS:

Thirty reviewers (10 senior reviewers, 10 junior reviewers, and 10 residents) were asked to differentiate between 10 ChatGPT-written and 10 human-written (fabricated) abstracts. For the study, 10 gynecologic oncology abstracts were fabricated by the authors. For each human-written abstract, a matching ChatGPT-written abstract was generated using the same title and the same fabricated results. A web-based questionnaire was used to gather demographic data and to record the reviewers' evaluation of the 20 abstracts. Comparative statistics and multivariable regression were used to identify factors associated with a higher correct identification rate, along the lines of the sketch below.
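For illustration only, a multivariable regression of this kind could be set up in Python with pandas and statsmodels as sketched here. The column names (correct_rate, experience_level, ai_familiarity, english_training) and the data values are hypothetical placeholders, not the study dataset or the authors' actual code.

# Hypothetical sketch of a multivariable analysis of correct identification rates.
import pandas as pd
import statsmodels.formula.api as smf

# One row per reviewer: correct identification rate (%) and candidate predictors (invented values).
reviewers = pd.DataFrame({
    "correct_rate": [55.0, 60.0, 45.0, 40.0, 65.0, 50.0],   # % of the 20 abstracts identified correctly
    "experience_level": [2, 2, 1, 0, 2, 1],                  # 0=resident, 1=junior, 2=senior
    "ai_familiarity": [1, 1, 0, 0, 1, 0],                    # 1=familiar with AI tools, 0=not
    "english_training": [1, 0, 1, 0, 1, 0],                  # 1=trained mostly in an English-speaking country
})

# Multivariable linear regression of the correct identification rate on the three predictors.
model = smf.ols(
    "correct_rate ~ experience_level + ai_familiarity + english_training",
    data=reviewers,
).fit()
print(model.summary())  # beta coefficients with 95% CIs and p-values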

RESULTS:

The 30 reviewers each evaluated the 20 abstracts, giving a total of 600 abstract evaluations. The reviewers correctly identified 300/600 (50%) of the abstracts: 139/300 (46.3%) of the ChatGPT-generated abstracts and 161/300 (53.7%) of the human-written abstracts (p=0.07). Human-written abstracts had a higher rate of correct identification (median (IQR) 56.7% (49.2-64.1%) vs 45.0% (43.2-48.3%), p=0.023). Senior reviewers had a higher correct identification rate (60%) than junior reviewers and residents (45% each; p=0.043 and p=0.002, respectively). In a linear regression model including the experience level of the reviewer, familiarity with artificial intelligence (AI), and the country in which the majority of medical training was completed (English speaking vs non-English speaking), reviewer experience (β=10.2 (95% CI 1.8 to 18.7)) and familiarity with AI (β=7.78 (95% CI 0.6 to 15.0)) were independently associated with the correct identification rate (p=0.019 and p=0.035, respectively). In a correlation analysis, the number of publications by the reviewer was positively correlated with the correct identification rate (r(28)=0.61, p<0.001).
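For orientation only, the reported correlation r(28)=0.61 is a Pearson correlation across the 30 reviewers (degrees of freedom n-2=28). A minimal sketch of such a computation is shown below; the publication counts and identification rates are invented, not the study data.

# Minimal sketch of the correlation analysis; numbers are hypothetical.
from scipy.stats import pearsonr

publications = [2, 5, 8, 12, 20, 35]                   # hypothetical publication counts per reviewer
correct_rate = [40.0, 45.0, 50.0, 55.0, 60.0, 70.0]    # hypothetical correct identification rates (%)

r, p_value = pearsonr(publications, correct_rate)
df = len(publications) - 2                             # degrees of freedom, reported as r(df)
print(f"r({df}) = {r:.2f}, p = {p_value:.3f}")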

CONCLUSION:

A total of 46.3% of the abstracts written by ChatGPT were correctly identified by reviewers. The correct identification rate increased with reviewer experience and publication record.

Full text: 1 Collections: 01-international Database: MEDLINE Main subject: Abstracting and Indexing Limit: Female / Humans Language: En Journal: Int J Gynecol Cancer Year of publication: 2024 Document type: Article