Assessing citation integrity in biomedical publications: corpus annotation and NLP models.
Sarol, Maria Janina; Ming, Shufan; Radhakrishna, Shruthan; Schneider, Jodi; Kilicoglu, Halil.
Affiliation
  • Sarol MJ; Informatics Programs, University of Illinois Urbana-Champaign, Champaign, IL 61820, United States.
  • Ming S; School of Information Sciences, University of Illinois Urbana-Champaign, Champaign, IL 61820, United States.
  • Radhakrishna S; Department of Computer Science, University of Illinois Urbana-Champaign, Champaign, IL 61801, United States.
  • Schneider J; School of Information Sciences, University of Illinois Urbana-Champaign, Champaign, IL 61820, United States.
  • Kilicoglu H; School of Information Sciences, University of Illinois Urbana-Champaign, Champaign, IL 61820, United States.
Bioinformatics; 40(7), 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38924508
ABSTRACT
MOTIVATION:

Citations play a fundamental role in scholarly communication and assessment. Citation accuracy and transparency are crucial for the integrity of scientific evidence. In this work, we focus on quotation errors: errors in citation content that can distort the scientific evidence and that are difficult for humans to detect. We construct a corpus and propose natural language processing (NLP) methods to identify such errors in biomedical publications.

RESULTS:

We manually annotated 100 highly cited biomedical publications (reference articles) and citations to them. The annotation involved labeling the citation context in the citing article, relevant evidence sentences in the reference article, and the accuracy of the citation. A total of 3063 citation instances were annotated (39.18% with accuracy errors). For NLP, we combined a sentence retriever with a fine-tuned claim verification model to label citations as ACCURATE, NOT_ACCURATE, or IRRELEVANT. We also explored few-shot in-context learning with generative large language models. The best-performing model, which uses citation sentences as the citation context, the BM25 model with a MonoT5 reranker to retrieve the top 20 sentences, and a fine-tuned MultiVerS model for accuracy label classification, yielded 0.59 micro-F1 and 0.52 macro-F1 scores. GPT-4 in-context learning performed better at identifying accurate citations but lagged on erroneous citations (0.65 micro-F1, 0.45 macro-F1). Citation quotation errors are often subtle, and it is currently challenging for NLP models to identify erroneous citations. With further improvements, the models could serve to improve citation quality and accuracy.

AVAILABILITY AND IMPLEMENTATION:

We make the corpus and the best-performing NLP model publicly available at https://github.com/ScienceNLP-Lab/Citation-Integrity/.
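The abstract describes a retrieve-then-verify pipeline: evidence sentences are retrieved from the reference article (BM25, then a MonoT5 reranker) and passed with the citation context to a fine-tuned verification model (MultiVerS) that assigns one of three accuracy labels. The following is a minimal Python sketch of that structure, not the authors' released code: the rank_bm25 library stands in for the retriever, and the reranking and classification functions are illustrative placeholders that assume a neural reranker and a fine-tuned verifier are plugged in.

    # Illustrative sketch of a retrieve-then-verify citation-checking pipeline.
    # rank_bm25 is used only as an example retriever; rerank() and
    # classify_citation() are hypothetical stand-ins for MonoT5-style and
    # MultiVerS-style models, respectively.
    from rank_bm25 import BM25Okapi

    LABELS = ["ACCURATE", "NOT_ACCURATE", "IRRELEVANT"]

    def retrieve_evidence(citation_context: str, reference_sentences: list[str], k: int = 20) -> list[str]:
        """Stage 1: BM25 retrieval of candidate evidence sentences from the reference article."""
        bm25 = BM25Okapi([s.lower().split() for s in reference_sentences])
        scores = bm25.get_scores(citation_context.lower().split())
        ranked = sorted(zip(scores, reference_sentences), key=lambda pair: pair[0], reverse=True)
        return [sentence for _, sentence in ranked[:k]]

    def rerank(citation_context: str, candidates: list[str]) -> list[str]:
        """Stage 2 (placeholder): a MonoT5-style cross-encoder would rescore the candidates here."""
        return candidates  # identity stand-in for the neural reranker

    def classify_citation(citation_context: str, evidence: list[str]) -> str:
        """Stage 3 (placeholder): a fine-tuned claim verification model maps
        (citation context, evidence sentences) to one of the three labels."""
        raise NotImplementedError("plug in a fine-tuned claim-verification model")

    def label_citation(citation_context: str, reference_sentences: list[str]) -> str:
        """Run the full pipeline for one citation instance."""
        evidence = rerank(citation_context, retrieve_evidence(citation_context, reference_sentences))
        return classify_citation(citation_context, evidence)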
Subject(s)

Full text: 1 Databases: MEDLINE Main subject: Natural Language Processing Limit: Humans Language: English Journal: Bioinformatics Journal subject: MEDICAL INFORMATICS Year: 2024 Document type: Article Country of affiliation: United States