Improving Readability and Automating Content Analysis of Plastic Surgery Webpages With ChatGPT.
Fanning, James E; Escobar-Domingo, Maria J; Foppiani, Jose; Lee, Daniela; Miller, Amitai S; Janis, Jeffrey E; Lee, Bernard T.
Affiliation
  • Fanning JE; Division of Plastic and Reconstructive Surgery, Department of Surgery, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, Massachusetts.
  • Escobar-Domingo MJ; Division of Plastic and Reconstructive Surgery, Department of Surgery, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, Massachusetts.
  • Foppiani J; Division of Plastic and Reconstructive Surgery, Department of Surgery, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, Massachusetts.
  • Lee D; Division of Plastic and Reconstructive Surgery, Department of Surgery, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, Massachusetts.
  • Miller AS; Division of Plastic and Reconstructive Surgery, Department of Surgery, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, Massachusetts.
  • Janis JE; Department of Plastic and Reconstructive Surgery, Ohio State University Wexner Medical Center, Columbus, Ohio.
  • Lee BT; Division of Plastic and Reconstructive Surgery, Department of Surgery, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, Massachusetts. Electronic address: blee3@bidmc.harvard.edu.
J Surg Res ; 299: 103-111, 2024 Jul.
Article in En | MEDLINE | ID: mdl-38749313
ABSTRACT

INTRODUCTION:

The quality and readability of online health information are sometimes suboptimal, reducing their usefulness to patients. Manual evaluation of online medical information is time-consuming and error-prone. This study automates content analysis and readability improvement of private-practice plastic surgery webpages using ChatGPT.

METHODS:

The first 70 Google search results of "breast implant size factors" and "breast implant size decision" were screened. ChatGPT 3.5 and 4.0 were utilized with two prompts (1 general, 2 specific) to automate content analysis and rewrite webpages with improved readability. ChatGPT content analysis outputs were classified as hallucination (false positive), accurate (true positive or true negative), or omission (false negative) using human-rated scores as a benchmark. Six readability metric scores of original and revised webpage texts were compared.
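The classification scheme above maps directly onto a confusion-matrix tally: a model label with no human-rated support is a hallucination (false positive), a human-rated item the model misses is an omission (false negative), and agreement in either direction is accurate. A minimal sketch of that scoring, using hypothetical boolean labels rather than any study data:

```python
from collections import Counter


def classify(human: bool, model: bool) -> str:
    """Compare one model judgment to the human-rated benchmark."""
    if model and not human:
        return "hallucination"   # false positive
    if human and not model:
        return "omission"        # false negative
    return "accurate"            # true positive or true negative


def rates(pairs):
    """Fraction of each outcome over (human, model) label pairs."""
    counts = Counter(classify(h, m) for h, m in pairs)
    total = len(pairs)
    return {k: counts[k] / total
            for k in ("accurate", "hallucination", "omission")}


# Hypothetical example: four decision-making factors scored on one webpage
example = [(True, True), (False, True), (True, False), (False, False)]
print(rates(example))  # {'accurate': 0.5, 'hallucination': 0.25, 'omission': 0.25}
```

The per-factor rates reported in the Results would then be these fractions aggregated across webpages for each decision-making factor.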

RESULTS:

Seventy-five webpages were included. Significant improvements were achieved from baseline in six readability metric scores using a specific-instruction prompt with ChatGPT 3.5 (all P ≤ 0.05). No further improvements in readability scores were achieved with ChatGPT 4.0. Rates of hallucination, accuracy, and omission in ChatGPT content scoring varied widely between decision-making factors. Compared to ChatGPT 3.5, average accuracy rates increased while omission rates decreased with ChatGPT 4.0 content analysis output.

CONCLUSIONS:

ChatGPT offers an innovative approach to enhancing the quality of online medical information and expanding the capabilities of plastic surgery research and practice. Automation of content analysis is limited by ChatGPT 3.5's high omission rates and ChatGPT 4.0's high hallucination rates. Our results also underscore the importance of iterative prompt design to optimize ChatGPT performance in research tasks.

Full text: 1 Collection: 01-internacional Database: MEDLINE Main subject: Surgery, Plastic / Comprehension Limits: Humans Language: En Journal: J Surg Res Year: 2024 Document type: Article Country of publication: United States