A question-answering framework for automated abstract screening using large language models.
Akinseloyin, Opeoluwa; Jiang, Xiaorui; Palade, Vasile.
Affiliation
  • Akinseloyin O; Centre for Computational Science and Mathematical Modelling, Coventry University, Coventry CV1 2TT, United Kingdom.
  • Jiang X; Information School, The University of Sheffield, Sheffield S10 2AH, United Kingdom.
  • Palade V; Centre for Computational Science and Mathematical Modelling, Coventry University, Coventry CV1 2TT, United Kingdom.
J Am Med Inform Assoc ; 31(9): 1939-1952, 2024 Sep 01.
Article in En | MEDLINE | ID: mdl-39042516
ABSTRACT

OBJECTIVE:

This paper aims to address the challenges in abstract screening within systematic reviews (SR) by leveraging the zero-shot capabilities of large language models (LLMs).

METHODS:

We employed an LLM to prioritize candidate studies by aligning their abstracts with the selection criteria outlined in an SR protocol. Abstract screening was transformed into a novel question-answering (QA) framework, treating each selection criterion as a question to be addressed by the LLM. The framework involves breaking down the selection criteria into multiple questions, prompting the LLM to answer each question, scoring and re-ranking each answer, and combining the responses to make nuanced inclusion or exclusion decisions, as sketched below.
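The following is a minimal sketch of that screening loop, not the authors' exact implementation: ask_llm is a hypothetical stand-in for any chat-completion API, and the prompt wording and yes/no/unclear scoring scheme are illustrative assumptions.

```python
from typing import Callable, List

def screen_abstract(
    abstract: str,
    criteria: List[str],                  # selection criteria from the SR protocol
    ask_llm: Callable[[str], str],        # hypothetical LLM call: prompt -> answer text
) -> float:
    """Return an inclusion score in [0, 1] for one candidate abstract."""
    answers = []
    for criterion in criteria:
        # Each selection criterion becomes one question posed to the LLM
        # (prompt wording is an illustrative assumption).
        prompt = (
            f"Abstract:\n{abstract}\n\n"
            f"Question: Does this study satisfy the criterion '{criterion}'? "
            "Answer 'yes', 'no', or 'unclear'."
        )
        answers.append(ask_llm(prompt).strip().lower())
    # Combine the per-criterion answers into a single score; candidates are
    # then ranked by this score so likely-relevant abstracts surface first.
    score = {"yes": 1.0, "unclear": 0.5, "no": 0.0}
    return sum(score.get(a, 0.5) for a in answers) / len(criteria)
```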

RESULTS AND DISCUSSION:

Large-scale validation was performed on the benchmark of CLEF eHealth 2019 Task 2: Technology-Assisted Reviews in Empirical Medicine. Focusing on GPT-3.5 as a case study, the proposed QA framework consistently exhibited a clear advantage over traditional information retrieval approaches and bespoke BERT-family models fine-tuned for prioritizing candidate studies (ie, from BERT to PubMedBERT) across 31 datasets spanning 4 categories of SRs, underscoring its high potential for facilitating abstract screening. The experiments also showcased the viability of using the selection criteria as a query for reference prioritization, and of the framework with different LLMs.

CONCLUSION:

The investigation demonstrated the indispensable value of leveraging selection criteria to improve the performance of automated abstract screening. LLMs proved proficient at prioritizing candidate studies for abstract screening using the proposed QA framework. Significant performance improvements were obtained by re-ranking answers using the semantic alignment between abstracts and selection criteria, further highlighting the pertinence of utilizing selection criteria to enhance abstract screening. A sketch of such a re-ranking step follows.
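Below is a hedged sketch of one way to realize that re-ranking step, blending the QA score with embedding-based alignment between abstract and criteria. The sentence-transformers library, the all-MiniLM-L6-v2 model, and the blending weight are illustrative assumptions; the abstract does not specify the authors' exact components.

```python
from sentence_transformers import SentenceTransformer, util

def alignment_score(abstract: str, criteria: list[str],
                    model: SentenceTransformer) -> float:
    """Mean cosine similarity between an abstract and the selection criteria."""
    abs_emb = model.encode(abstract, convert_to_tensor=True)
    crit_emb = model.encode(criteria, convert_to_tensor=True)
    return util.cos_sim(abs_emb, crit_emb).mean().item()

def rerank(candidates: list[tuple[str, float]],
           criteria: list[str],
           weight: float = 0.5) -> list[tuple[str, float]]:
    """Blend each (abstract, qa_score) pair with semantic alignment and sort."""
    model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model
    rescored = [
        (abstract,
         (1 - weight) * qa_score
         + weight * alignment_score(abstract, criteria, model))
        for abstract, qa_score in candidates
    ]
    return sorted(rescored, key=lambda pair: pair[1], reverse=True)
```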

Full text: 1 Collection: 01-internacional Database: MEDLINE Main subject: Natural Language Processing Limits: Humans Language: En Journal: J Am Med Inform Assoc Journal subject: INFORMATICA MEDICA Year: 2024 Document type: Article Affiliation country:
