Results 1 - 20 of 890
1.
Croat Med J ; 65(2): 93-100, 2024 Apr 30.
Article in English | MEDLINE | ID: mdl-38706235

ABSTRACT

AIM: To evaluate the quality of ChatGPT-generated case reports and assess the ability of ChatGPT to peer review medical articles. METHODS: This study was conducted from February to April 2023. First, ChatGPT 3.0 was used to generate 15 case reports, which were then peer-reviewed by expert human reviewers. Second, ChatGPT 4.0 was employed to peer review 15 published short articles. RESULTS: ChatGPT was capable of generating case reports, but these reports exhibited inaccuracies, particularly in referencing. The case reports received mixed ratings from peer reviewers, with 33.3% of professionals recommending rejection. The reports' overall merit score was 4.9±1.8 out of 10. The review capabilities of ChatGPT were weaker than its text generation abilities. As a peer reviewer, the AI did not recognize major inconsistencies in articles that had undergone significant content changes. CONCLUSION: While ChatGPT demonstrated proficiency in generating case reports, there were limitations in consistency and accuracy, especially in referencing.
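The abstract does not describe how ChatGPT was accessed. Purely as an illustration, the sketch below shows how case-report generation could be scripted against OpenAI's chat-completions API; the model name, prompt wording, and clinical scenario are assumptions and do not reproduce the authors' actual workflow.

```python
# Hypothetical sketch: scripting case-report generation via the OpenAI
# chat-completions API. Model name, prompt, and scenario are assumptions,
# not the study's method.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "Write a structured medical case report (introduction, case presentation, "
    "discussion, conclusion) about an adult patient presenting with "
    "community-acquired pneumonia. Include clearly labeled illustrative references."
)

response = client.chat.completions.create(
    model="gpt-4",  # assumed model identifier
    messages=[{"role": "user", "content": PROMPT}],
    temperature=0.7,
)

case_report = response.choices[0].message.content
print(case_report[:500])  # preview the first 500 characters
```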


Subject(s)
Peer Review , Humans , Peer Review/standards , Writing/standards , Peer Review, Research/standards
3.
Z Evid Fortbild Qual Gesundhwes ; 186: 18-26, 2024 May.
Article in German | MEDLINE | ID: mdl-38580502

ABSTRACT

BACKGROUND: Quality measurement in the German statutory program for quality in health care follows a two-step process. For selected areas of health care, quality is measured via performance indicators (first step). Providers failing to achieve benchmarks in these indicators subsequently enter into a peer review process (second step) and are asked by the respective regional authority to provide a written statement regarding their indicator results. The statements are then evaluated by peers, with the goal of assessing the provider's quality of care. In the past, similar peer review-based approaches to the measurement of health care quality in other countries have tended to lack reliability. So far, the reliability of this component of the German statutory program for quality in health care has not been investigated. METHOD: Using logistic regression models, the influence of the respective regional authority on the peer review component of health care quality measurement in Germany was investigated using three exemplary indicators and data from 2016. RESULTS: Both the probability that providers are asked to provide a statement and the results produced by the peer review process depend significantly on the regional authority in charge. This dependence cannot be fully explained by differences in the indicator results or by differences in case volume. CONCLUSIONS: The present results are in accordance with earlier findings showing low reliability for peer review-based approaches to quality measurement. Thus, different results produced by the peer review component of the quality measurement process may in part be due to differences in the way the review process is conducted. This heterogeneity among the regional authorities limits the reliability of the process. To increase reliability, the peer review process should be standardized to a higher degree, with clear review criteria, and peers should undergo comprehensive training for the review process. Alternatively, the future peer review component could be adapted to focus on identifying improvement strategies rather than on reliable provider comparisons.
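As a rough illustration of the kind of logistic-regression analysis described above, the following sketch fits a model of whether a provider is asked for a statement as a function of the regional authority, indicator result, and case volume. The column names, CSV layout, and model specification are assumptions; the actual 2016 indicator data are not reproduced here.

```python
# Hypothetical sketch of the logistic-regression analysis described above.
# Variable names and input layout are assumptions.
import pandas as pd
import statsmodels.formula.api as smf

# Expected columns: statement_requested (0/1), regional_authority (categorical),
# indicator_result (numeric), case_volume (numeric)
df = pd.read_csv("quality_indicator_2016.csv")

# Does the regional authority in charge predict whether a provider is asked
# for a written statement, beyond indicator results and case volume?
model = smf.logit(
    "statement_requested ~ C(regional_authority) + indicator_result + case_volume",
    data=df,
).fit()

print(model.summary())
```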


Subject(s)
National Health Programs , Peer Review, Health Care , Quality Assurance, Health Care , Quality Indicators, Health Care , Germany , Humans , Quality Assurance, Health Care/standards , Reproducibility of Results , Quality Indicators, Health Care/standards , National Health Programs/standards , Peer Review, Health Care/standards , Benchmarking/standards , Peer Review/standards
7.
Australas Psychiatry ; 32(3): 247-251, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38327220

ABSTRACT

OBJECTIVE: This paper aims to provide an introductory resource for beginner peer reviewers in psychiatry and the broader biomedical science field. It will provide a concise overview of the peer review process, alongside some reviewing tips and tricks. CONCLUSION: The peer review process is a fundamental aspect of biomedical science publishing. The model of peer review offered varies between journals and usually relies on a pool of volunteers with differing levels of expertise and scope. The aim of peer review is to collaboratively leverage reviewers' collective knowledge with the objective of increasing the quality and merit of published works. The limitations, methodology and need for transparency in the peer review process are often poorly understood. Although imperfect, the peer review process provides some degree of scientific rigour by emphasising the need for an ethical, comprehensive and systematic approach to reviewing articles. Contributions from junior reviewers can add significant value to manuscripts.


Subject(s)
Biomedical Research , Peer Review, Research , Humans , Biomedical Research/standards , Peer Review, Research/standards , Psychiatry/standards , Peer Review/standards , Peer Review/methods , Periodicals as Topic/standards
9.
JAMA Netw Open ; 6(12): e2347607, 2023 Dec 01.
Article in English | MEDLINE | ID: mdl-38095896

ABSTRACT

Importance: High-quality peer reviews are often thought to be essential to ensuring the integrity of the scientific publication process, but measuring peer review quality is challenging. Although imperfect, review word count could potentially serve as a simple, objective metric of review quality. Objective: To determine the prevalence of very short peer reviews and how often they inform editorial decisions on research articles in 3 leading general medical journals. Design, Setting, and Participants: This cross-sectional study compiled a data set of peer reviews from published, full-length original research articles from 3 general medical journals (The BMJ, PLOS Medicine, and BMC Medicine) between 2003 and 2022. Eligible articles were those with peer review data; all peer reviews used to make the first editorial decision (ie, accept vs revise and resubmit) were included. Main Outcomes and Measures: Prevalence of very short reviews was the primary outcome, which was defined as a review of fewer than 200 words. In secondary analyses, thresholds of fewer than 100 words and fewer than 300 words were used. Results were disaggregated by journal and year. The proportion of articles for which the first editorial decision was made based on a set of peer reviews in which very short reviews constituted 100%, 50% or more, 33% or more, and 20% or more of the reviews was calculated. Results: In this sample of 11 466 reviews (including 6086 in BMC Medicine, 3816 in The BMJ, and 1564 in PLOS Medicine) corresponding to 4038 published articles, the median (IQR) word count per review was 425 (253-575) words, and the mean (SD) word count was 520.0 (401.0) words. The overall prevalence of very short (<200 words) peer reviews was 1958 of 11 466 reviews (17.1%). Across the 3 journals, 843 of 4038 initial editorial decisions (20.9%) were based on review sets containing 50% or more very short reviews. The prevalence of very short reviews and share of editorial decisions based on review sets containing 50% or more very short reviews was highest for BMC Medicine (693 of 2585 editorial decisions [26.8%]) and lowest for The BMJ (76 of 1040 editorial decisions [7.3%]). Conclusion and Relevance: In this study of 3 leading general medical journals, one-fifth of initial editorial decisions for published articles were likely based at least partially on reviews of such short length that they were unlikely to be of high quality. Future research could determine whether monitoring peer review length improves the quality of peer reviews and which interventions, such as incentives and norm-based interventions, may elicit more detailed reviews.
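The word-count analysis above can be reproduced in outline with a few lines of pandas. The sketch below assumes one row per review with an article identifier and the review text; the 200-word and 50%-per-review-set thresholds follow the abstract, everything else (file name, column names) is illustrative.

```python
# Illustrative sketch of the review-length analysis described above.
# Input layout is an assumption; thresholds follow the abstract.
import pandas as pd

reviews = pd.read_csv("peer_reviews.csv")  # columns: article_id, review_text

reviews["word_count"] = reviews["review_text"].str.split().str.len()
reviews["very_short"] = reviews["word_count"] < 200

print("Median words per review:", reviews["word_count"].median())
print("Share of very short reviews:", reviews["very_short"].mean())

# Per article: share of first-decision reviews that are very short
per_article = reviews.groupby("article_id")["very_short"].mean()
print("Decisions with >=50% very short reviews:", (per_article >= 0.5).mean())
```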


Subject(s)
Peer Review , Periodicals as Topic , Humans , Cross-Sectional Studies , Peer Review/standards , Periodicals as Topic/standards , Prevalence , Publications
10.
Am J Physiol Regul Integr Comp Physiol ; 325(4): R309-R326, 2023 10 01.
Article in English | MEDLINE | ID: mdl-37519254

ABSTRACT

In part 1 of this Perspective, I discussed general principles of scientific peer review in the biomedical sciences aimed at early-stage investigators (i.e., graduate students, postdoctoral fellows, and junior faculty). Here in part 2, I share my thoughts specifically on the topic of peer review of manuscripts. I begin by defining manuscript peer review and discussing the goals and importance of the concept. I then describe the organizational structure of the process, including the two distinct stages involved. Next, I emphasize several important considerations for manuscript reviewers, both general points and key considerations when evaluating specific types of papers, including original research manuscripts, reviews, methods articles, and opinion pieces. I then advance some practical suggestions for developing the written critique document, offer advice for making an overall recommendation to the editor (i.e., accept, revise, reject), and describe the unique issues involved when assessing a revised manuscript. Finally, I comment on how best to gain experience in the essential academic research skill of manuscript peer review. In part 3 of the series, I will discuss the topic of reviewing grant applications submitted to research funding agencies.


Subject(s)
Peer Review , Publishing , Humans , Publishing/standards , Peer Review/standards , Research Personnel
11.
Asian Pac J Cancer Prev ; 22(12): 3735-3740, 2022 01 02.
Article in English | MEDLINE | ID: mdl-34973682

ABSTRACT

The Asian Pacific Journal of Cancer Prevention (APJCP) focuses on gathering relevant, up-to-date, novel information related to the cancer sciences. The research methodologies and approaches adopted by researchers are prone to variation, which may be desirable in the context of novel scientific findings; however, the reproducibility of these studies needs to be unified and assured. Reproducibility concerns are especially pronounced for preclinical cancer studies, particularly those involving natural products. Natural products and medicinal plants show wide variation in phytochemistry and phytopharmacology, which ultimately affects the results of cancer studies. Specific guidelines for best practice in cancer research are therefore essential. The AIMRDA guidelines aim to provide a consensus-based tool to enhance the quality and ensure the reproducibility of studies reporting natural products in cancer prevention. A core working committee of experts developed an initial draft of the guidelines, focusing on the inclusion of specific items not covered in previously published tools. The initial draft was peer-reviewed and improved, with expert input, by a scientific committee comprising field research experts, journal editors, and academics from organizations worldwide. Feedback from ongoing online meetings, e-mail communications, and webinars resulted in a final draft in the form of a checklist tool covering best practices in natural products research for cancer prevention and treatment. Authors are required to read and follow the AIMRDA tool and to be aware of good practices in cancer research prior to any submission to APJCP. Although the tool was developed by experts in the field, it needs to be further updated and validated through implementation in practice.


Subject(s)
Antineoplastic Agents , Biological Products , Editorial Policies , Peer Review/standards , Research Design/standards , Consensus , Humans , Reproducibility of Results
15.
J Nurs Meas ; 29(2): 227-238, 2021 Aug 01.
Article in English | MEDLINE | ID: mdl-34326204

ABSTRACT

BACKGROUND AND PURPOSE: The Advanced Practice Nurse (APN) Council refined the APN peer review into an objective, data-driven process. The purpose of the study was to assess the interrater reliability of APN peer reviews using the APN Rubric, based on Hamric, Spross, and Hanson's Model of Advanced Practice Nursing. METHODS: A quantitative single-site study with a convenience sample of 80 APN portfolios. RESULTS: Analysis of six core competencies (direct clinical practice, leadership, consultation/collaboration, coaching/guiding, research, and ethical decision-making) within the APN Rubric demonstrated substantial and near-perfect agreement levels in the APN peer review process. CONCLUSIONS: The application of APN core competencies within the peer review process demonstrated high consistency, thereby increasing the significance and objectivity of peer review outcomes.
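Interrater agreement of the kind reported above is typically summarized with a chance-corrected statistic such as Cohen's kappa. The sketch below is illustrative only; the abstract does not state which agreement statistic was used, and the ratings are hypothetical.

```python
# Illustrative sketch: chance-corrected agreement between two peer reviewers
# scoring the same portfolios on one rubric competency. Cohen's kappa is an
# assumption; the ratings below are invented for demonstration.
from sklearn.metrics import cohen_kappa_score

# Hypothetical ratings (e.g., 0 = does not meet, 1 = meets, 2 = exceeds)
rater_a = [2, 1, 2, 0, 1, 2, 2, 1, 0, 2]
rater_b = [2, 1, 2, 1, 1, 2, 2, 1, 0, 2]

kappa = cohen_kappa_score(rater_a, rater_b)
print(f"Cohen's kappa: {kappa:.2f}")  # >0.80 is 'almost perfect' per Landis & Koch
```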


Subject(s)
Advanced Practice Nursing/statistics & numerical data , Advanced Practice Nursing/standards , Clinical Competence/statistics & numerical data , Clinical Competence/standards , Nurse Practitioners/statistics & numerical data , Nurse Practitioners/standards , Peer Review/standards , Practice Guidelines as Topic , Adult , Female , Humans , Male , Middle Aged , Reproducibility of Results
16.
J Clin Epidemiol ; 136: 157-167, 2021 08.
Article in English | MEDLINE | ID: mdl-33979663

ABSTRACT

OBJECTIVES: To evaluate the impact of guidance and training on the inter-rater reliability (IRR), inter-consensus reliability (ICR), and evaluator burden of the Risk of Bias (RoB) in Non-randomized Studies (NRS) of Interventions (ROBINS-I) tool and the RoB instrument for NRS of Exposures (ROB-NRSE). STUDY DESIGN AND SETTING: In a before-and-after study, seven reviewers appraised the RoB using ROBINS-I (n = 44) and ROB-NRSE (n = 44), before and after guidance and training. We used Gwet's AC1 statistic to calculate IRR and ICR. RESULTS: After guidance and training, the IRR and ICR of the overall bias domain of ROBINS-I and ROB-NRSE improved significantly, with many individual domains showing either a significant (IRR and ICR of ROB-NRSE; ICR of ROBINS-I) or a nonsignificant improvement (IRR of ROBINS-I). Evaluator burden decreased significantly after guidance and training for ROBINS-I, whereas for ROB-NRSE there was a slight nonsignificant increase. CONCLUSION: Overall, guidance and training were beneficial for both tools. We highly recommend providing guidance and training to reviewers prior to RoB assessments, and that future research investigate which aspects of guidance and training are most effective.
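Gwet's AC1, used in the study above, corrects observed agreement for chance agreement estimated from average category prevalences. A minimal two-rater sketch follows; the risk-of-bias judgments are hypothetical and the function is a simplified implementation, not the study's analysis code.

```python
# Illustrative sketch of Gwet's AC1 for two raters rating the same items.
# Example judgments are hypothetical.
from collections import Counter

def gwet_ac1(ratings_a, ratings_b):
    """Two-rater Gwet's AC1 for categorical ratings."""
    n = len(ratings_a)
    categories = sorted(set(ratings_a) | set(ratings_b))
    q = len(categories)

    # Observed agreement
    p_a = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n

    # Chance agreement based on average category prevalence across raters
    count_a, count_b = Counter(ratings_a), Counter(ratings_b)
    pi = {k: (count_a[k] + count_b[k]) / (2 * n) for k in categories}
    p_e = sum(pi[k] * (1 - pi[k]) for k in categories) / (q - 1)

    return (p_a - p_e) / (1 - p_e)

# Hypothetical overall risk-of-bias judgments from two reviewers
a = ["low", "moderate", "serious", "low", "moderate", "low", "serious", "low"]
b = ["low", "moderate", "serious", "moderate", "moderate", "low", "serious", "low"]
print(f"Gwet's AC1: {gwet_ac1(a, b):.2f}")
```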


Subject(s)
Biomedical Research/standards , Epidemiologic Research Design , Observer Variation , Peer Review/standards , Research Design/standards , Research Personnel/education , Adult , Biomedical Research/statistics & numerical data , Canada , Cross-Sectional Studies , Female , Humans , Male , Middle Aged , Psychometrics/methods , Reproducibility of Results , Research Design/statistics & numerical data , United Kingdom
20.
Endocrinology ; 162(3)2021 03 01.
Article in English | MEDLINE | ID: mdl-33516156

ABSTRACT

This Perspective presents comments intended for junior researchers by Carol A. Lange, Editor-in-Chief, Endocrinology, and Stephen R. Hammes, former Editor-in-Chief, Molecular Endocrinology, and former co-Editor-in-Chief, Endocrinology. PRINCIPAL POINTS: 1. Know when you are ready and identify your target audience. 2. Select an appropriate journal. 3. Craft your title and abstract to capture your key words and deliver your message. 4. Tell a clear and impactful story. 5. Review, polish, and perfect your manuscript.


Subject(s)
Peer Review, Research , Publishing , Writing , Biomedical Research/methods , Biomedical Research/standards , Editorial Policies , Humans , Journal Impact Factor , Peer Review/methods , Peer Review/standards , Peer Review, Research/standards , Publishing/standards , Vocabulary, Controlled , Writing/standards