2.
Epidemiol Prev ; 48(2): 149-157, 2024.
Article in Italian | MEDLINE | ID: mdl-38770732

ABSTRACT

BACKGROUND: The peer-review process is a foundation of modern scientific production; however, despite its many benefits, it presents several critical issues. OBJECTIVES: To collect the opinions of a group of researchers from the epidemiological scientific community on peer-review processes. DESIGN: Cross-sectional study using a questionnaire. SETTING AND PARTICIPANTS: A 29-question survey was administered to 516 healthcare professionals through the SurveyMonkey platform. The questions covered the individual characteristics of the respondents, their satisfaction with certain characteristics of the review process, and their propensity to change some aspects of it. In addition, three open-ended questions allowed respondents to comment on the role that reviewers and the review process should play. Descriptive statistics were produced as absolute frequencies and percentages for the information collected through the questionnaire. A multiple logistic regression analysis was then conducted to assess the willingness to change certain aspects of peer review, adjusting for covariates such as age, sex, being the author of at least one scientific work, being a reviewer of at least one scientific work, and belonging to a specific discipline. Results are expressed as odds ratios (ORs) with 95% confidence intervals (95%CI). Text analysis and a word-cloud representation were also used for one open-ended question. MAIN OUTCOME MEASURES: Level of satisfaction with certain characteristics of the peer-review process. RESULTS: A total of 516 participants completed the questionnaire. Specifically, 87.2% (n=450) were authors of at least one scientific publication, 78.7% (n=406) were first authors at least once, and 71.5% (n=369) had acted as reviewers within the peer-review process. The multiple logistic regression models showed no significant differences in propensity to change across age and sex categories, except for a lower propensity of the under-35 group towards unmasking, defined as the presence of reviewers' and editorial board members' names on the published article (OR <35 years vs 45-54 years: 0.51; 95%CI 0.29-0.89), and a higher propensity for post-formatting proposals, defined as the option of formatting the article to journal guidelines only after acceptance, among those under 45 (OR <35 years vs 45-54 years: 1.73; 95%CI 0.90-3.31; OR 35-44 years vs 45-54 years: 2.02; 95%CI 1.10-3.72). Finally, approximately 50% of respondents considered it appropriate to receive credits for the reviewing work performed, while approximately 30% considered it appropriate to receive a discount on publication fees at the journal for which they had acted as reviewers. CONCLUSIONS: The professionals who completed the questionnaire consider the peer-review process essential but imperfect, giving a clear picture of the rigour that peer review adds to each scientific work and of the need to continue constructive dialogue on this topic within the scientific community.
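As an illustration of the analysis this study describes, here is a minimal sketch of a multiple logistic regression reporting odds ratios with 95% confidence intervals, assuming Python with pandas and statsmodels. The file name and all column names are hypothetical, not taken from the study.

```python
# Hypothetical sketch: adjusted logistic regression expressed as ORs with 95% CIs,
# in the spirit of the analysis described above. All names below are invented.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# One row per respondent; predictors mirror the covariates listed in the abstract.
df = pd.read_csv("survey_responses.csv")  # hypothetical file

model = smf.logit(
    "wants_change ~ C(age_group) + C(sex) + is_author + is_reviewer + C(discipline)",
    data=df,
).fit()

# Exponentiate the log-odds coefficients to obtain ORs and their 95% CIs.
ors = np.exp(model.params)
ci = np.exp(model.conf_int())
print(pd.DataFrame({"OR": ors, "CI 2.5%": ci[0], "CI 97.5%": ci[1]}))
```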


Subject(s)
Peer Review, Research; Cross-Sectional Studies; Humans; Surveys and Questionnaires; Female; Male; Adult; Middle Aged; Internet; Peer Review
3.
Croat Med J ; 65(2): 93-100, 2024 Apr 30.
Article in English | MEDLINE | ID: mdl-38706235

ABSTRACT

AIM: To evaluate the quality of ChatGPT-generated case reports and assess the ability of ChatGPT to peer review medical articles. METHODS: This study was conducted from February to April 2023. First, ChatGPT 3.0 was used to generate 15 case reports, which were then peer-reviewed by expert human reviewers. Second, ChatGPT 4.0 was employed to peer review 15 published short articles. RESULTS: ChatGPT was capable of generating case reports, but these exhibited inaccuracies, particularly in referencing. The case reports received mixed ratings from peer reviewers, with 33.3% of professionals recommending rejection; the reports' overall merit score was 4.9±1.8 out of 10. ChatGPT's reviewing capabilities were weaker than its text-generation abilities: as a peer reviewer, it did not recognize major inconsistencies in articles whose content had been significantly altered. CONCLUSION: While ChatGPT demonstrated proficiency in generating case reports, it showed limitations in consistency and accuracy, especially in referencing.
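For readers who want to run this kind of experiment programmatically rather than through the chat interface the study used, here is a hedged sketch of prompting a model to act as a peer reviewer via the openai Python package; the model name and prompt wording are assumptions, not the authors' protocol.

```python
# Hypothetical sketch of LLM-as-peer-reviewer; not the study's actual setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def review_manuscript(manuscript_text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",  # assumed stand-in for "ChatGPT 4.0"
        messages=[
            {"role": "system",
             "content": "You are a peer reviewer for a medical journal. "
                        "Assess the methods, internal consistency, and "
                        "referencing, and give a merit score out of 10."},
            {"role": "user", "content": manuscript_text},
        ],
    )
    return response.choices[0].message.content

print(review_manuscript("Title: ...\nAbstract: ...\nFull text: ..."))
```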


Subject(s)
Peer Review; Humans; Peer Review/standards; Writing/standards; Peer Review, Research/standards
4.
WMJ ; 123(2): 70-73, 2024 May.
Article in English | MEDLINE | ID: mdl-38718228
5.
Am J Health Syst Pharm ; 81(10): 403-408, 2024 May 07.
Article in English | MEDLINE | ID: mdl-38712845
6.
J Food Sci ; 89(5): 2525-2526, 2024 May.
Article in English | MEDLINE | ID: mdl-38761161
9.
PLoS One ; 19(4): e0300710, 2024.
Article in English | MEDLINE | ID: mdl-38598482

ABSTRACT

How do author perceptions match up to the outcomes of the peer-review process and to the perceptions of others? In a top-tier computer science conference (NeurIPS 2021) with more than 23,000 submitting authors and 9,000 submitted papers, we surveyed the authors on three questions: (i) their predicted probability of acceptance for each of their papers, (ii) their perceived ranking of their own papers by scientific contribution, and (iii) the change in their perception of their own papers after seeing the reviews. The salient results are: (1) Authors overestimated the acceptance probability of their papers roughly three-fold: the median prediction was 70% against an approximately 25% acceptance rate. (2) Female authors exhibited a marginally higher (statistically significant) miscalibration than male authors; predictions of authors invited to serve as meta-reviewers or reviewers were similarly calibrated to each other, but better calibrated than those of authors who were not invited to review. (3) Authors' relative ranking of the scientific contribution of two of their own submissions generally agreed with their predicted acceptance probabilities (93% agreement), but in a notable 7% of responses authors predicted a worse outcome for their better paper. (4) The author-provided rankings disagreed with the peer-review decisions about a third of the time; when co-authors ranked their jointly authored papers, they disagreed with each other at a similar rate, about a third of the time. (5) At least 30% of respondents, for both accepted and rejected papers, said that their perception of their own paper improved after the review process. Stakeholders in peer review should take these findings into account when setting their expectations of peer review.
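A minimal sketch of the headline calibration and agreement computations summarized above, assuming the survey responses were tabulated one row per paper; the file and column names are hypothetical.

```python
# Hypothetical sketch of the calibration check and the ranking/prediction
# agreement measure described above. All names below are invented.
import pandas as pd

papers = pd.read_csv("author_predictions.csv")  # hypothetical: one row per paper
median_prediction = papers["predicted_prob"].median()  # e.g., ~0.70
acceptance_rate = papers["accepted"].mean()            # e.g., ~0.25
print(f"Median prediction: {median_prediction:.0%}, "
      f"acceptance rate: {acceptance_rate:.0%}, "
      f"overestimate factor: {median_prediction / acceptance_rate:.1f}x")

# Agreement between an author's ranking of two submissions and their
# predictions: did the higher-ranked paper get the higher predicted probability?
pairs = papers.groupby("author_id").filter(lambda g: len(g) == 2)

def ranks_agree(g):
    best = g.sort_values("author_rank").iloc[0]  # rank 1 = better paper
    return best["predicted_prob"] == g["predicted_prob"].max()

agreement = pairs.groupby("author_id").apply(ranks_agree).mean()
print(f"Ranking/prediction agreement: {agreement:.0%}")
```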


Subject(s)
Peer Review, Research; Peer Review; Male; Female; Humans; Surveys and Questionnaires
13.
Int J Gynecol Cancer ; 34(5): 669-674, 2024 May 06.
Article in English | MEDLINE | ID: mdl-38627032

ABSTRACT

OBJECTIVE: To determine whether reviewer experience affects the ability to discriminate between human-written and ChatGPT-written abstracts. METHODS: Thirty reviewers (10 seniors, 10 juniors, and 10 residents) were asked to differentiate between 10 ChatGPT-written and 10 human-written (fabricated) abstracts. The 10 gynecologic oncology abstracts were fabricated by the authors; for each, a matching ChatGPT abstract was generated using the same title and the same fabricated results. A web-based questionnaire was used to gather demographic data and record the reviewers' evaluations of the 20 abstracts. Comparative statistics and multivariable regression were used to identify factors associated with a higher correct identification rate. RESULTS: The 30 reviewers each evaluated the 20 abstracts, giving a total of 600 abstract evaluations. Reviewers correctly identified 300/600 (50%) of the abstracts: 139/300 (46.3%) of the ChatGPT-generated abstracts and 161/300 (53.7%) of the human-written abstracts (p=0.07). Human-written abstracts had a higher correct identification rate (median (IQR) 56.7% (49.2-64.1%) vs 45.0% (43.2-48.3%), p=0.023). Senior reviewers had a higher correct identification rate (60%) than junior reviewers and residents (45% each; p=0.043 and p=0.002, respectively). In a linear regression model including the reviewers' experience level, familiarity with artificial intelligence (AI), and the country in which most of their medical training took place (English speaking vs non-English speaking), reviewer experience (β=10.2 (95% CI 1.8 to 18.7)) and familiarity with AI (β=7.78 (95% CI 0.6 to 15.0)) were independently associated with the correct identification rate (p=0.019 and p=0.035, respectively). In a correlation analysis, the number of publications by the reviewer was positively correlated with the correct identification rate (r(28)=0.61, p<0.001). CONCLUSION: Only 46.3% of abstracts written by ChatGPT were detected by reviewers. The correct identification rate increased with reviewer and publication experience.
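Here is a hedged sketch of the headline computations in this study: per-source correct identification rates and the publications-versus-accuracy correlation. The data layout, file name, and column names are assumptions, not the authors' materials.

```python
# Hypothetical sketch of the identification-rate and correlation analyses.
import pandas as pd
from scipy import stats

evals = pd.read_csv("abstract_evaluations.csv")  # hypothetical: 600 rows
evals["correct"] = evals["guessed_source"] == evals["true_source"]

# Overall and per-source (human vs ChatGPT) correct identification rates.
print(evals["correct"].mean())
print(evals.groupby("true_source")["correct"].mean())

# Pearson correlation between each reviewer's publication count and accuracy,
# the kind of analysis behind the reported r(28)=0.61.
per_reviewer = evals.groupby("reviewer_id").agg(
    accuracy=("correct", "mean"),
    n_publications=("n_publications", "first"),
)
r, p = stats.pearsonr(per_reviewer["n_publications"], per_reviewer["accuracy"])
print(f"r({len(per_reviewer) - 2}) = {r:.2f}, p = {p:.3f}")
```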


Subject(s)
Abstracting and Indexing; Humans; Abstracting and Indexing/standards; Female; Peer Review, Research; Writing/standards; Gynecology; Surveys and Questionnaires; Publishing/statistics & numerical data
16.
J Prim Care Community Health ; 15: 21501319241252235, 2024.
Article in English | MEDLINE | ID: mdl-38682542

ABSTRACT

Journal editors depend on peer reviewers to make decisions about submitted manuscripts. Reviewers help evaluate the methods, the results, the discussion of those results, and the overall organization and presentation of a manuscript; they can also help identify important mistakes and possible misconduct. Editors frequently have difficulty obtaining enough peer reviews in a timely manner, which increases the workload of editors and journal managers and can delay the publication of clinical and research studies. This commentary discusses the importance of peer review and makes suggestions that could increase the participation of academic faculty and researchers in this important activity.


Subject(s)
Editorial Policies; Peer Review, Research; Periodicals as Topic; Humans; Peer Review, Research/standards; Peer Review; Publishing/standards
17.
Trends Ecol Evol ; 39(4): 311-314, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38472078

ABSTRACT

Empirical studies on peer review bias are primarily conducted by people from privileged groups who have affiliations with the journals studied. Data access is a major barrier to conducting peer review research. Accordingly, we propose pathways to broaden access to peer review data for people from more diverse backgrounds.


Subject(s)
Periodicals as Topic; Humans; Peer Review; Peer Review, Research
18.
Radiol Imaging Cancer ; 6(2): e240054, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38488497