TrachGPT: Appraisal of tracheostomy care recommendations from an artificial intelligence chatbot.
Ayo-Ajibola, Oluwatobiloba; Davis, Ryan J; Lin, Matthew E; Vukkadala, Neelaysh; O'Dell, Karla; Swanson, Mark S; Johns, Michael M; Shuman, Elizabeth A.
Affiliations
  • Ayo-Ajibola O; Keck School of Medicine of the University of Southern California, Los Angeles, California, USA.
  • Davis RJ; Keck School of Medicine of the University of Southern California, Los Angeles, California, USA.
  • Lin ME; Department of Head and Neck Surgery, University of California Los Angeles, Los Angeles, California, USA.
  • Vukkadala N; Department of Head and Neck Surgery, University of California Los Angeles, Los Angeles, California, USA.
  • O'Dell K; Caruso Department of Otolaryngology-Head and Neck Surgery, University of Southern California, Los Angeles, California, USA.
  • Swanson MS; Caruso Department of Otolaryngology-Head and Neck Surgery, University of Southern California, Los Angeles, California, USA.
  • Johns MM; Caruso Department of Otolaryngology-Head and Neck Surgery, University of Southern California, Los Angeles, California, USA.
  • Shuman EA; Caruso Department of Otolaryngology-Head and Neck Surgery, University of Southern California, Los Angeles, California, USA.
Laryngoscope Investig Otolaryngol; 9(4): e1300, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39015552
ABSTRACT

Objective:

Safe home tracheostomy care requires engagement and troubleshooting by patients, who may turn to online, AI-generated information sources. This study assessed the quality of ChatGPT responses to such queries.

Methods:

In this cross-sectional study, ChatGPT was prompted with 10 hypothetical tracheostomy care questions across three domains (complication management, self-care advice, and lifestyle adjustment). Responses were graded by four otolaryngologists for appropriateness, accuracy, and overall score. Readability was evaluated using the Flesch Reading Ease (FRE) and Flesch-Kincaid Reading Grade Level (FKRGL). Descriptive statistics and ANOVA testing were performed, with statistical significance set at p < .05.
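The FRE and FKRGL scores named above are standard readability formulas computed from word, sentence, and syllable counts. As an illustrative sketch only (the study presumably used dedicated readability software; the example counts below are hypothetical, not taken from the paper):

```python
def flesch_reading_ease(words: int, sentences: int, syllables: int) -> float:
    """Flesch Reading Ease: higher = easier to read.
    Roughly: 90-100 ~ 5th grade, 60-70 ~ 8th-9th grade, below 50 ~ college."""
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

def flesch_kincaid_grade(words: int, sentences: int, syllables: int) -> float:
    """Flesch-Kincaid Grade Level: approximate U.S. school grade required."""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

# Hypothetical passage: 100 words, 5 sentences, 130 syllables.
fre = flesch_reading_ease(100, 5, 130)
fkgl = flesch_kincaid_grade(100, 5, 130)
```

Under these formulas, longer sentences and more syllables per word both push FRE down and FKRGL up, which is why the dense, polysyllabic prose typical of chatbot answers lands at the college level reported in the Results.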

Results:

Appropriateness and overall score were rated on a 5-point scale (5 = best) and accuracy on a 4-point scale (4 = most accurate). Responses exhibited moderately high appropriateness (mean = 4.10, SD = 0.90), high accuracy (mean = 3.55, SD = 0.50), and moderately high overall scores (mean = 4.02, SD = 0.86). Scoring between response categories (self-care recommendations, complication recommendations, lifestyle adjustments, and special device considerations) revealed no significant differences. Suboptimal responses lacked nuance and contained incorrect information and recommendations. Readability scores indicated college and advanced reading levels for both FRE (mean = 39.5, SD = 7.17) and FKRGL (mean = 13.1, SD = 1.47), higher than the sixth-grade level recommended for patient-targeted resources by the NIH.

Conclusion:

While ChatGPT-generated tracheostomy care responses may exhibit acceptable appropriateness, incomplete or misleading information may have dire clinical consequences. Further, inappropriately high reading levels may limit patient comprehension and accessibility. At this point in its technological infancy, AI-generated information should not be solely relied upon as a direct patient care resource.
Full text: 1 Collections: 01-international Database: MEDLINE Language: En Journal: Laryngoscope Investig Otolaryngol Publication year: 2024 Document type: Article
