Assessing the Reproducibility of the Structured Abstracts Generated by ChatGPT and Bard Compared to Human-Written Abstracts in the Field of Spine Surgery: Comparative Analysis.
Kim, Hong Jin; Yang, Jae Hyuk; Chang, Dong-Gune; Lenke, Lawrence G; Pizones, Javier; Castelein, René; Watanabe, Kota; Trobisch, Per D; Mundis, Gregory M; Suh, Seung Woo; Suk, Se-Il.
Affiliation
  • Kim HJ; Department of Orthopedic Surgery, Inje University Sanggye Paik Hospital, College of Medicine, Inje University, Seoul, Republic of Korea.
  • Yang JH; Department of Orthopedic Surgery, Korea University Anam Hospital, College of Medicine, Korea University, Seoul, Republic of Korea.
  • Chang DG; Department of Orthopedic Surgery, Inje University Sanggye Paik Hospital, College of Medicine, Inje University, Seoul, Republic of Korea.
  • Lenke LG; Department of Orthopedic Surgery, The Daniel and Jane Och Spine Hospital, Columbia University, New York, NY, United States.
  • Pizones J; Department of Orthopedic Surgery, Hospital Universitario La Paz, Madrid, Spain.
  • Castelein R; Department of Orthopedic Surgery, University Medical Centre Utrecht, Utrecht, Netherlands.
  • Watanabe K; Department of Orthopedic Surgery, Keio University School of Medicine, Tokyo, Japan.
  • Trobisch PD; Department of Spine Surgery, Eifelklinik St. Brigida, Simmerath, Germany.
  • Mundis GM; Department of Orthopaedic Surgery, Scripps Clinic, La Jolla, CA, United States.
  • Suh SW; Department of Orthopedic Surgery, Korea University Guro Hospital, College of Medicine, Korea University, Seoul, Republic of Korea.
  • Suk SI; Department of Orthopedic Surgery, Inje University Sanggye Paik Hospital, College of Medicine, Inje University, Seoul, Republic of Korea.
J Med Internet Res; 26: e52001, 2024 Jun 26.
Article in English | MEDLINE | ID: mdl-38924787
ABSTRACT

BACKGROUND:

Due to recent advances in artificial intelligence (AI), language model applications can generate logical text output that is difficult to distinguish from human writing. ChatGPT (OpenAI) and Bard (subsequently rebranded as "Gemini"; Google AI) were developed using distinct approaches, but little is known about how they differ in their ability to generate scientific abstracts. The use of AI to write scientific abstracts in the field of spine surgery is at the center of much debate and controversy.

OBJECTIVE:

The objective of this study is to assess the reproducibility of the structured abstracts generated by ChatGPT and Bard compared to human-written abstracts in the field of spine surgery.

METHODS:

In total, 60 abstracts on spine surgery topics were randomly selected from 7 reputable journals, and their titles were supplied to ChatGPT and Bard as input to generate abstracts. A total of 174 abstracts, divided into human-written, ChatGPT-generated, and Bard-generated abstracts, were evaluated for compliance with the structured format of the journal guidelines and for consistency of content. The likelihood of plagiarism and of AI-generated output was assessed using the iThenticate and ZeroGPT programs, respectively. A total of 8 reviewers in the spine field evaluated 30 randomly extracted abstracts to determine whether each was produced by AI or by human authors.
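The abstract does not specify the exact prompts or interface the authors used; the following is a minimal sketch, assuming the OpenAI Python SDK, of how a structured abstract might be generated from a paper title alone. The model name, prompt wording, word limit, and the example title are illustrative assumptions, not details taken from the study.

```python
# Minimal sketch (not the authors' actual pipeline): generating a structured
# abstract from a paper title alone, assuming the OpenAI Python SDK.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical paper title used only for illustration.
title = "Outcomes of Posterior Fusion for Adolescent Idiopathic Scoliosis"

# Assumed prompt; the study's actual input statements and word limits are not given here.
prompt = (
    "Write a structured scientific abstract (Background, Objective, Methods, "
    "Results, Conclusions) of no more than 450 words for a spine surgery paper "
    f"with the title: {title}"
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model version
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```

A comparable call to Bard/Gemini would follow the same pattern with Google's API, after which the outputs could be checked against the target journal's section headings and word limits.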

RESULTS:

The proportion of abstracts that met journal formatting guidelines was greater among ChatGPT abstracts (34/60, 56.6%) compared with those generated by Bard (6/54, 11.1%; P<.001). However, a higher proportion of Bard abstracts (49/54, 90.7%) had word counts that met journal guidelines compared with ChatGPT abstracts (30/60, 50%; P<.001). The similarity index was significantly lower among ChatGPT-generated abstracts (20.7%) compared with Bard-generated abstracts (32.1%; P<.001). The AI-detection program predicted that 21.7% (13/60) of the human group, 63.3% (38/60) of the ChatGPT group, and 87% (47/54) of the Bard group were possibly generated by AI, with an area under the curve value of 0.863 (P<.001). The mean detection rate by human reviewers was 53.8% (SD 11.2%), achieving a sensitivity of 56.3% and a specificity of 48.4%. A total of 56.3% (63/112) of the actual human-written abstracts and 48.4% (62/128) of the AI-generated abstracts were recognized as human-written and AI-generated by the human reviewers, respectively.
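The reviewer-performance figures can be reproduced directly from the reported counts. The short sketch below does so in Python, under the assumption (not stated explicitly in the abstract) that "human-written" is treated as the positive class, which is consistent with 63/112 ≈ 56.3% and 62/128 ≈ 48.4%.

```python
# Worked check (not the paper's code): reviewer performance derived from the
# counts reported in the Results, treating "human-written" as the positive class.
human_total, human_called_human = 112, 63   # human-written abstracts, correctly judged human
ai_total, ai_called_ai = 128, 62            # AI-generated abstracts, correctly judged AI

sensitivity = human_called_human / human_total   # 63/112 ≈ 0.563
specificity = ai_called_ai / ai_total            # 62/128 ≈ 0.484

# Pooled accuracy over all 240 judgments; note the abstract instead reports a
# mean detection rate per reviewer (53.8%, SD 11.2%), which is a different summary.
pooled_accuracy = (human_called_human + ai_called_ai) / (human_total + ai_total)

print(f"sensitivity={sensitivity:.1%}, specificity={specificity:.1%}, "
      f"pooled accuracy={pooled_accuracy:.1%}")
```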

CONCLUSIONS:

Both ChatGPT and Bard can be used to help write abstracts, but most AI-generated abstracts are currently considered unethical due to high plagiarism and AI-detection rates. ChatGPT-generated abstracts appear to be superior to Bard-generated abstracts in meeting journal formatting guidelines. Because humans are unable to accurately distinguish abstracts written by humans from those produced by AI programs, it is crucial to exercise special caution and examine the ethical boundaries of using AI programs, including ChatGPT and Bard.

Full text: 1 | Collection: 01-international | Database: MEDLINE | Main subject: Spine / Abstracting and Indexing | Limits: Humans | Language: English | Journal: J Med Internet Res | Journal subject: Medical Informatics | Year: 2024 | Document type: Article | Country of publication: Canada
