Consulting the Digital Doctor: Efficacy of ChatGPT-3.5 in Answering Questions Related to Diabetic Foot Ulcer Care.
Rohrich, Rachel N; Li, Karen R; Lava, Christian X; Snee, Isabel; Alahmadi, Sami; Youn, Richard C; Steinberg, John S; Atves, Jayson M; Attinger, Christopher E; Evans, Karen K.
Affiliation
  • Rohrich RN; Department of Plastic and Reconstructive Surgery, MedStar Georgetown University Hospital, Washington DC.
  • Li KR; Department of Plastic and Reconstructive Surgery, MedStar Georgetown University Hospital, Washington DC.
  • Lava CX; Georgetown University School of Medicine, Washington DC.
  • Snee I; Department of Plastic and Reconstructive Surgery, MedStar Georgetown University Hospital, Washington DC.
  • Alahmadi S; Georgetown University School of Medicine, Washington DC.
  • Youn RC; Georgetown University School of Medicine, Washington DC.
  • Steinberg JS; Georgetown University School of Medicine, Washington DC.
  • Atves JM; Department of Plastic and Reconstructive Surgery, MedStar Georgetown University Hospital, Washington DC.
  • Attinger CE; Department of Podiatric Surgery, MedStar Georgetown University Hospital, Washington DC.
  • Evans KK; Department of Podiatric Surgery, MedStar Georgetown University Hospital, Washington DC.
Adv Skin Wound Care ; 38(9): E74-E80, 2025 Oct 01.
Article in English | MEDLINE | ID: mdl-40539754
BACKGROUND: Diabetic foot ulcer (DFU) care is a challenge in reconstructive surgery. Artificial intelligence (AI) tools represent a new resource for patients with DFUs to seek information.
OBJECTIVE: To evaluate the efficacy of ChatGPT-3.5 in responding to frequently asked questions related to DFU care.
METHODS: Researchers posed 11 DFU care questions to ChatGPT-3.5 in December 2023. Questions were divided into the topic categories of wound care, concerning symptoms, and surgical management. Four plastic surgeons in the authors' wound care department evaluated responses on a 10-point Likert-type scale for accuracy, comprehensiveness, and danger, in addition to providing qualitative feedback. Readability was assessed using 10 readability indexes.
RESULTS: ChatGPT-3.5 answered questions with a mean accuracy of 8.7±0.3, comprehensiveness of 8.0±0.7, and danger of 2.2±0.6. ChatGPT-3.5 answered at a mean grade level of 11.9±1.8. Physician reviewers complimented the simplicity of the responses (n=11/11) and the AI's ability to provide general information (n=4/11). Three responses presented incorrect information, and the majority of responses (n=10/11) omitted key information, such as deep vein thrombosis symptoms and comorbid conditions affecting limb salvage.
CONCLUSIONS: The researchers observed that ChatGPT-3.5 provided misinformation, omitted crucial details, and responded at nearly 4 grade levels above the American average. However, ChatGPT-3.5 was sufficient in its ability to provide general information, which may enable patients with DFUs to make more informed decisions and better engage in their care. Physicians must proactively address the potential benefits and limitations of AI.
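The grade-level results above come from standard readability indices. As a rough illustration of how such a score is produced, the sketch below implements one common formula, the Flesch-Kincaid Grade Level; this is not the authors' actual tooling, and the syllable counter is a simple vowel-group heuristic, not a dictionary-accurate one.

```python
import re

def count_syllables(word: str) -> int:
    # Heuristic: each run of consecutive vowels counts as one syllable;
    # every word is credited with at least one.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_kincaid_grade(text: str) -> float:
    # Flesch-Kincaid Grade Level:
    # 0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)
```

A score of 11.9, as reported for ChatGPT-3.5's responses, corresponds to roughly a 12th-grade reading level, well above the commonly cited recommendation that patient-facing material target about a 6th- to 8th-grade level.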
Full text: 1 Collection: 01-international Main subject: Artificial Intelligence / Diabetic Foot Study type: Qualitative_research Limits: Female / Humans / Male Language: English Journal: Adv Skin Wound Care Journal subject: Nursing Year: 2025 Document type: Article