A framework for human evaluation of large language models in healthcare derived from literature review.
Tam, Thomas Yu Chow; Sivarajkumar, Sonish; Kapoor, Sumit; Stolyar, Alisa V; Polanska, Katelyn; McCarthy, Karleigh R; Osterhoudt, Hunter; Wu, Xizhi; Visweswaran, Shyam; Fu, Sunyang; Mathur, Piyush; Cacciamani, Giovanni E; Sun, Cong; Peng, Yifan; Wang, Yanshan.
Affiliations
  • Tam TYC; Department of Health Information Management, University of Pittsburgh, Pittsburgh, PA, USA.
  • Sivarajkumar S; Intelligent Systems Program, University of Pittsburgh, Pittsburgh, PA, USA.
  • Kapoor S; Department of Critical Care Medicine, University of Pittsburgh Medical Center, Pittsburgh, PA, USA.
  • Stolyar AV; Department of Health Information Management, University of Pittsburgh, Pittsburgh, PA, USA.
  • Polanska K; Department of Health Information Management, University of Pittsburgh, Pittsburgh, PA, USA.
  • McCarthy KR; Department of Health Information Management, University of Pittsburgh, Pittsburgh, PA, USA.
  • Osterhoudt H; Department of Health Information Management, University of Pittsburgh, Pittsburgh, PA, USA.
  • Wu X; Department of Health Information Management, University of Pittsburgh, Pittsburgh, PA, USA.
  • Visweswaran S; Department of Biomedical Informatics, University of Pittsburgh, Pittsburgh, PA, USA.
  • Fu S; Department of Clinical and Health Informatics, Center for Translational AI Excellence and Applications in Medicine, University of Texas Health Science Center at Houston, Houston, TX, USA.
  • Mathur P; Department of Anesthesiology, Cleveland Clinic, Cleveland, OH, USA; BrainX AI ReSearch, BrainX LLC, Cleveland, OH, USA.
  • Cacciamani GE; Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA.
  • Sun C; Department of Population Health Sciences, Weill Cornell Medicine, New York, NY, USA.
  • Peng Y; Department of Population Health Sciences, Weill Cornell Medicine, New York, NY, USA.
  • Wang Y; Department of Health Information Management, University of Pittsburgh, Pittsburgh, PA, USA; Clinical and Translational Science Institute, University of Pittsburgh, Pittsburgh, PA, USA.
NPJ Digit Med ; 7(1): 258, 2024 Sep 28.
Article in English | MEDLINE | ID: mdl-39333376
ABSTRACT
With generative artificial intelligence (GenAI), particularly large language models (LLMs), continuing to make inroads in healthcare, assessing LLMs with human evaluations is essential to ensuring safety and effectiveness. This study reviews the existing literature on human evaluation methodologies for LLMs in healthcare across various medical specialties and addresses factors such as evaluation dimensions, sample types and sizes, selection and recruitment of evaluators, frameworks and metrics, evaluation process, and type of statistical analysis. Our literature review of 142 studies shows gaps in the reliability, generalizability, and applicability of current human evaluation practices. To overcome these significant obstacles to healthcare LLM development and deployment, we propose QUEST, a comprehensive and practical framework for human evaluation of LLMs covering three workflow phases: Planning, Implementation and Adjudication, and Scoring and Review. QUEST is designed around five proposed evaluation principles: Quality of Information, Understanding and Reasoning, Expression Style and Persona, Safety and Harm, and Trust and Confidence.
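The five QUEST principles lend themselves to a simple per-principle scoring schema with an adjudication step across evaluators. A minimal sketch in Python: the principle names come from the abstract, but the rating scale, averaging rule, and all identifiers are illustrative assumptions, not part of the QUEST framework itself.

```python
from dataclasses import dataclass
from statistics import mean

# The five QUEST evaluation principles named in the abstract.
QUEST_PRINCIPLES = [
    "Quality of Information",
    "Understanding and Reasoning",
    "Expression Style and Persona",
    "Safety and Harm",
    "Trust and Confidence",
]

@dataclass
class EvaluatorRating:
    evaluator_id: str
    scores: dict  # principle -> score on a hypothetical 1-5 Likert scale

def adjudicate(ratings):
    """Average each principle's score across evaluators (a simple
    illustrative adjudication rule; QUEST does not mandate averaging)."""
    return {p: mean(r.scores[p] for r in ratings) for p in QUEST_PRINCIPLES}

# Two hypothetical clinician evaluators rating one LLM response.
ratings = [
    EvaluatorRating("clinician_1", {p: 4 for p in QUEST_PRINCIPLES}),
    EvaluatorRating("clinician_2", {p: 5 for p in QUEST_PRINCIPLES}),
]
summary = adjudicate(ratings)
```

In practice, the Adjudication phase could replace the plain mean with inter-rater agreement checks or tie-breaking by a senior reviewer; the point here is only that each response is scored per principle, not with a single overall grade.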

Full text: 1 Collection: 01-international Database: MEDLINE Language: En Journal: NPJ Digit Med Year: 2024 Document type: Article Country of affiliation: United States Country of publication: United Kingdom