Peer Review Analyze: A novel benchmark resource for computational analysis of peer reviews.
Ghosal, Tirthankar; Kumar, Sandeep; Bharti, Prabhat Kumar; Ekbal, Asif.
Affiliation
  • Ghosal T; Institute of Formal and Applied Linguistics, Faculty of Mathematics and Physics, Charles University, Prague, Czech Republic.
  • Kumar S; Indian Institute of Technology Patna, Bihta, Bihar, India.
  • Bharti PK; Indian Institute of Technology Patna, Bihta, Bihar, India.
  • Ekbal A; Indian Institute of Technology Patna, Bihta, Bihar, India.
PLoS One ; 17(1): e0259238, 2022.
Article in English | MEDLINE | ID: mdl-35085252
ABSTRACT
Peer review is at the heart of scholarly communication and the cornerstone of scientific publishing. However, academia often criticizes the peer review system as non-transparent, biased, and arbitrary, a flawed process at the heart of science, leading researchers to question its reliability and quality. These problems may also stem from the scarcity of studies on peer-review texts, owing to various proprietary and confidentiality clauses. Peer-review texts could serve as a rich source for Natural Language Processing (NLP) research on understanding the scholarly communication landscape, and thereby for building systems that mitigate these pertinent problems. In this work, we present a first-of-its-kind multi-layered dataset of 1199 open peer-review texts manually annotated at the sentence level (∼17k sentences) across four layers, viz. Paper Section Correspondence, Paper Aspect Category, Review Functionality, and Review Significance. Given a text written by the reviewer, we annotate which sections (e.g., Methodology, Experiments) and which aspects (e.g., Originality/Novelty, Empirical/Theoretical Soundness) of the paper the review text corresponds to, the role the review text plays (e.g., appreciation, criticism, summary), and the importance of the review statement (major, minor, general) within the review. We also annotate the reviewer's sentiment (positive, negative, neutral) for the first two layers to judge the reviewer's perspective on the different sections and aspects of the paper. We further introduce four novel tasks with this dataset, which could serve as indicators of the exhaustiveness of a peer review and as a step towards the automatic judgment of review quality. We also present baseline experiments and results for the different tasks to enable further investigation.
We believe our dataset will provide a benchmark experimental testbed for automated systems that leverage current state-of-the-art NLP techniques to address different issues with peer review quality, thereby ushering in increased transparency and trust in the holy grail of scientific research validation. Our dataset and associated code are available at https://www.iitp.ac.in/~ai-nlp-ml/resources.html#Peer-Review-Analyze.
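The multi-layered, sentence-level annotation scheme described in the abstract can be pictured as a simple record type. The sketch below is purely illustrative: the field names and label values are assumptions for exposition, not the dataset's actual schema.

```python
from dataclasses import dataclass

# Hypothetical sketch of one sentence-level record in a multi-layered
# peer-review annotation scheme like the one described above.
# Field names and label values are illustrative assumptions, not the
# dataset's actual schema.
@dataclass
class ReviewSentence:
    text: str
    paper_section: str       # layer 1: e.g. "Methodology", "Experiments"
    aspect: str              # layer 2: e.g. "Originality/Novelty"
    functionality: str       # layer 3: e.g. "appreciation", "criticism", "summary"
    significance: str        # layer 4: "major", "minor", or "general"
    section_sentiment: str   # reviewer sentiment toward the section (layer 1)
    aspect_sentiment: str    # reviewer sentiment toward the aspect (layer 2)

sent = ReviewSentence(
    text="The ablation study convincingly isolates the effect of each module.",
    paper_section="Experiments",
    aspect="Empirical/Theoretical Soundness",
    functionality="appreciation",
    significance="minor",
    section_sentiment="positive",
    aspect_sentiment="positive",
)
print(sent.significance)  # minor
```

Each of the four proposed tasks can then be framed as predicting one of these label fields from the sentence text alone.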
Subject(s)
Full text: 1 Database: MEDLINE Main subject: Benchmarking Language: English Journal: PLoS One Journal subject: SCIENCE / MEDICINE Year: 2022 Document type: Article