How do authors' perceptions of their papers compare with co-authors' perceptions and peer-review decisions?
Rastogi, Charvi; Stelmakh, Ivan; Beygelzimer, Alina; Dauphin, Yann N; Liang, Percy; Wortman Vaughan, Jennifer; Xue, Zhenyu; Daumé III, Hal; Pierson, Emma; Shah, Nihar B.
Affiliations
  • Rastogi C; Machine Learning Department, Carnegie Mellon University, Pittsburgh, Pennsylvania, United States of America.
  • Stelmakh I; New Economic School, Moscow, Russia.
  • Beygelzimer A; Yahoo! Research, New York, New York, United States of America.
  • Dauphin YN; Google DeepMind, Montreal, Canada.
  • Liang P; Department of Computer Science, Stanford University, Stanford, California, United States of America.
  • Wortman Vaughan J; Microsoft Research, New York, New York, United States of America.
  • Xue Z; Independent Researcher, Shanghai, China.
  • Daumé III H; Department of Computer Science, University of Maryland, College Park, Maryland, United States of America.
  • Pierson E; Jacobs Technion-Cornell Institute, Cornell Tech, New York, New York, United States of America.
  • Shah NB; Machine Learning Department, Carnegie Mellon University, Pittsburgh, Pennsylvania, United States of America.
PLoS One ; 19(4): e0300710, 2024.
Article in English | MEDLINE | ID: mdl-38598482
ABSTRACT
How do author perceptions match up to the outcomes of the peer-review process and the perceptions of others? In a top-tier computer science conference (NeurIPS 2021) with more than 23,000 submitting authors and 9,000 submitted papers, we surveyed the authors on three questions: (i) their predicted probability of acceptance for each of their papers, (ii) their perceived ranking of their own papers based on scientific contribution, and (iii) the change in their perception of their own papers after seeing the reviews. The salient results are: (1) Authors overestimated the acceptance probability of their papers roughly three-fold: the median prediction was 70% for an approximately 25% acceptance rate. (2) Female authors exhibited a marginally higher (statistically significant) miscalibration than male authors; authors invited to serve as meta-reviewers or reviewers were similarly calibrated to each other, but better calibrated than authors who were not invited to review. (3) Authors' relative ranking of the scientific contribution of two of their own submissions generally agreed with their predicted acceptance probabilities (93% agreement), but in a notable 7% of responses authors predicted a worse outcome for their better paper. (4) The author-provided rankings disagreed with the peer-review decisions about a third of the time; when co-authors ranked their jointly authored papers, they disagreed at a similar rate, about a third of the time. (5) At least 30% of respondents for both accepted and rejected papers said that their perception of their own paper improved after the review process. Stakeholders in peer review should take these findings into account when setting their expectations of peer review.
Subjects

Full text: 1 Database: MEDLINE Main subject: Peer Review / Peer Review, Research Limits: Female / Humans / Male Language: English Year of publication: 2024 Document type: Article