Results 1 - 20 of 2,089
1.
J Dent Res ; : 220345241247028, 2024 Jul 12.
Article in English | MEDLINE | ID: mdl-38993043

ABSTRACT

Adequate and transparent reporting is necessary for critically appraising published research. Yet, ample evidence suggests that the design, conduct, analysis, interpretation, and reporting of oral health research could be greatly improved. Accordingly, the Task Force on Design and Analysis in Oral Health Research (statisticians and trialists from academia and industry) identified the minimum information needed to report and evaluate observational studies and clinical trials in oral health: the OHStat Guidelines. Drafts were circulated to the editors of 85 oral health journals and to Task Force members and sponsors, and were discussed at a December 2020 workshop attended by 49 researchers. The guidelines were subsequently revised by the Task Force's writing group. They draw heavily on the Consolidated Standards of Reporting Trials (CONSORT), the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement, and the CONSORT harms extension, and they incorporate the SAMPL guidelines for reporting statistics, the CLIP principles for documenting images, and the GRADE approach for rating the quality of evidence. The guidelines also recommend reporting estimates in clinically meaningful units with confidence intervals, rather than relying on P values. In addition, OHStat introduces 7 new guidelines that concern the text itself, such as checking the congruence between abstract and text, structuring the discussion, and listing conclusions to make them more specific. OHStat does not replace other reporting guidelines; it incorporates those most relevant to dental research into a single document. Manuscripts prepared with the OHStat guidelines will provide more information specific to oral health research.
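
A worked illustration of the confidence-interval recommendation above, with hypothetical numbers not drawn from the guideline: an estimated mean reduction in probing depth of 0.8 mm with a standard error of 0.2 mm would be reported with its 95% confidence interval in millimeters rather than only a P value.

```latex
% Hypothetical example: 95% CI for a mean reduction in probing depth.
% Point estimate 0.8 mm, standard error 0.2 mm (illustrative values only).
\[
  \hat{\delta} \pm z_{0.975}\,\mathrm{SE}(\hat{\delta})
  = 0.8 \pm 1.96 \times 0.2
  = (0.41,\ 1.19)\ \text{mm}
\]
```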

2.
JDR Clin Trans Res ; : 23800844241247029, 2024 Jul 12.
Article in English | MEDLINE | ID: mdl-38993046

ABSTRACT

Adequate and transparent reporting is necessary for critically appraising research. Yet, evidence suggests that the design, conduct, analysis, interpretation, and reporting of oral health research could be greatly improved. Accordingly, the Task Force on Design and Analysis in Oral Health Research (statisticians and trialists from academia and industry) empaneled a group of authors to develop methodological and statistical reporting guidelines identifying the minimum information needed to document and evaluate observational studies and clinical trials in oral health: the OHStat Guidelines. Drafts were circulated to the editors of 85 oral health journals and to Task Force members and sponsors, and were discussed at a December 2020 workshop attended by 49 researchers. The final version was approved by the Task Force in September 2021, submitted for journal review in 2022, and revised in 2023. The checklist consists of 48 guidelines: 5 for introductory information, 17 for methods, 13 for statistical analysis, 6 for results, and 7 for interpretation; 7 are specific to clinical trials. Each guideline identifies relevant information, explains its importance, and often describes best practices. The article was published simultaneously in JDR Clinical and Translational Research, the Journal of the American Dental Association, and the Journal of Oral and Maxillofacial Surgery. Completed checklists should accompany manuscripts submitted to these and other oral health journals to help authors, journal editors, and reviewers verify that a manuscript provides the information necessary to adequately document and evaluate the research.

3.
J Endod ; 2024 Jul 13.
Article in English | MEDLINE | ID: mdl-39007795

ABSTRACT

Adequate and transparent reporting is necessary for critically appraising published research. Yet, ample evidence suggests that the design, conduct, analysis, interpretation, and reporting of oral health research could be greatly improved. Accordingly, the Task Force on Design and Analysis in Oral Health Research (statisticians and trialists from academia and industry) identified the minimum information needed to report and evaluate observational studies and clinical trials in oral health: the OHStat Guidelines. Drafts were circulated to the editors of 85 oral health journals and to Task Force members and sponsors, and were discussed at a December 2020 workshop attended by 49 researchers. The guidelines were subsequently revised by the Task Force's writing group. They draw heavily on the Consolidated Standards of Reporting Trials (CONSORT), the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement, and the CONSORT harms extension, and they incorporate the SAMPL guidelines for reporting statistics, the CLIP principles for documenting images, and the GRADE approach for rating the quality of evidence. The guidelines also recommend reporting estimates in clinically meaningful units with confidence intervals, rather than relying on P values. In addition, OHStat introduces 7 new guidelines that concern the text itself, such as checking the congruence between abstract and text, structuring the discussion, and listing conclusions to make them more specific. OHStat does not replace other reporting guidelines; it incorporates those most relevant to dental research into a single document. Manuscripts prepared with the OHStat guidelines will provide more information specific to oral health research.

6.
Account Res ; : 1-19, 2024 Jul 07.
Article in English | MEDLINE | ID: mdl-38972046

ABSTRACT

The exponential growth of MDPI and Frontiers over the last decade has been powered by their extensive use of special issues. This "special issue-ization" of journal publishing has been particularly associated with new publishers and is seen as potentially "questionable." Through an extended case-study analysis of three journals owned by one of the "big five" commercial publishers, this paper explores the risks that the growing use of special issues presents to research integrity. All three case-study journals show sudden and marked changes in their publication patterns. An analysis of special issue editorials and retraction notices was used to determine the specifics of special issues and the reasons for retractions; the data were analysed with descriptive statistics. The findings suggest that these established commercial publishers are also promoting special issues and that article retractions are often connected to guest editor manipulation. This underscores the threat that "special issue-ization" presents to research integrity, highlights the risks posed by the guest editor model, and shows the importance of extending this analysis to long-established commercial publishers. The paper emphasizes the need for an in-depth examination of the underlying structures and political economy of science, and for a discussion of the rise of gaming and manipulation within higher education systems.

7.
Cas Lek Cesk ; 162(7-8): 294-297, 2024.
Article in English | MEDLINE | ID: mdl-38981715

ABSTRACT

The advent of large language models (LLMs) based on neural networks marks a significant shift in academic writing, particularly in the medical sciences. These models, including OpenAI's GPT-4, Google's Bard, and Anthropic's Claude, enable more efficient text processing through the transformer architecture and its attention mechanisms. LLMs can generate coherent text that is often indistinguishable from human-written content. In medicine, they can contribute to the automation of literature reviews, data extraction, and hypothesis formulation. However, ethical concerns arise regarding the quality and integrity of scientific publications and the risk of generating misleading content. This article provides an overview of how LLMs are changing medical writing, the ethical dilemmas they bring, and the possibilities for detecting AI-generated text. It concludes with a look at the potential future of LLMs in academic publishing and their impact on the medical community.
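
For reference, the attention mechanism the abstract alludes to is, in its standard scaled dot-product form (a general formulation, not specific to any of the models named above):

```latex
% Scaled dot-product attention over queries Q, keys K, and values V;
% d_k is the key dimension and rescales the logits before the softmax.
\[
  \mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{Q K^{\top}}{\sqrt{d_k}}\right) V
\]
```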


Subject(s)
Neural Networks, Computer; Humans; Natural Language Processing; Language; Publishing/ethics
14.
Article in English | MEDLINE | ID: mdl-38828653
16.
Nature ; 631(8019): 241-243, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38871875
17.
Article in English | MEDLINE | ID: mdl-38879443

ABSTRACT

OBJECTIVE: Investigate the use of advanced natural language processing models to streamline the time-consuming process of writing and revising scholarly manuscripts. MATERIALS AND METHODS: For this purpose, we integrate large language models into the Manubot publishing ecosystem to suggest revisions for scholarly texts. Our AI-based revision workflow employs a prompt generator that incorporates manuscript metadata into templates, generating section-specific instructions for the language model. The model then generates a revised version of each paragraph for human authors to review. We evaluated this methodology through 5 case studies of existing manuscripts, including the revision of this manuscript. RESULTS: Our results indicate that these models, despite some limitations, can grasp complex academic concepts and enhance text quality. All changes to the manuscript are tracked using a version control system, ensuring transparency in distinguishing between human- and machine-generated text. CONCLUSIONS: Given the significant time researchers invest in crafting prose, incorporating large language models into the scholarly writing process can significantly improve the knowledge work performed by academics. Our approach also enables scholars to concentrate on critical aspects of their work, such as the novelty of their ideas, while automating tedious tasks such as adhering to specific writing styles. Although the use of AI-assisted tools in scientific authoring is controversial, our approach, which focuses on revising human-written text and provides change-tracking transparency, can mitigate concerns regarding AI's role in scientific writing.
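
The prompt-generator step described above can be sketched as follows. This is a minimal illustration under assumptions, not the Manubot AI editor's actual code: the template wording, function names (build_prompt, revise_paragraph), and metadata fields are all hypothetical.

```python
# Minimal sketch of a section-aware prompt generator in the spirit of the
# workflow described above. Not the actual Manubot AI editor code; all
# names and template wording here are hypothetical.

SECTION_TEMPLATES = {
    "abstract": (
        "Revise this abstract of a manuscript titled '{title}' so it is "
        "concise and self-contained:\n\n{paragraph}"
    ),
    "methods": (
        "Revise this Methods paragraph from '{title}' for clarity and "
        "reproducibility, keeping all technical details intact:\n\n{paragraph}"
    ),
}

def build_prompt(section: str, paragraph: str, metadata: dict) -> str:
    """Fill a section-specific template with manuscript metadata."""
    template = SECTION_TEMPLATES.get(section, "Revise for clarity:\n\n{paragraph}")
    return template.format(paragraph=paragraph, **metadata)

def revise_paragraph(prompt: str) -> str:
    """Placeholder for the language-model call; any chat-completion
    client could be substituted here."""
    raise NotImplementedError("wire up an LLM client of your choice")

if __name__ == "__main__":
    metadata = {"title": "Example manuscript"}
    print(build_prompt("methods", "We measured X using Y.", metadata))
```

Each revised paragraph would then be committed through version control, keeping human- and machine-generated changes distinguishable, as the abstract describes.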

18.
Account Res ; : 1-12, 2024 Jun 25.
Article in English | MEDLINE | ID: mdl-38919031

ABSTRACT

The frequency of scientific retractions has grown substantially in recent years. However, there is still no standardized retraction notice format to which journals and their publishers adhere voluntarily, let alone compulsorily. We developed a rubric specifying seven criteria for judging whether retraction notices are easily and freely accessible, informative, and transparent. We mined the Retraction Watch database and evaluated a total of 768 retraction notices from two publishers (Springer and Wiley) across three years (2010, 2015, and 2020). Per our rubric, both publishers tended to score higher on measures of openness/availability, accessibility, and clarity about why a paper was retracted than on acknowledging institutional investigations, confirming whether there was consensus among authors, and specifying which parts of a given paper warranted retraction. Springer retraction notices appeared to improve over time with respect to the rubric's seven criteria. We observed some discrepancies among raters, indicating the difficulty of developing a robust, objective rubric for evaluating retraction notices.
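
The scoring step of such a rubric might look like the sketch below; the seven criterion names paraphrase the abstract, and the exact rubric wording and scoring scale used in the study are assumptions here.

```python
# Sketch of rubric-based scoring for retraction notices. Criterion names
# paraphrase the abstract; the study's exact rubric is not reproduced here.
from statistics import mean

CRITERIA = [
    "freely_available",        # openness/availability
    "easily_accessible",       # linked from the retracted article
    "reason_stated_clearly",   # why the paper was retracted
    "investigation_noted",     # acknowledges any institutional investigation
    "author_consensus_noted",  # whether all authors agreed to the retraction
    "scope_specified",         # which parts of the paper warranted retraction
    "retracting_party_named",  # who initiated the retraction (assumed)
]

def score(notice: dict) -> int:
    """Count how many of the seven criteria a notice satisfies (0-7)."""
    return sum(bool(notice.get(c, False)) for c in CRITERIA)

notices = [
    {"freely_available": True, "reason_stated_clearly": True},
    {"freely_available": True, "easily_accessible": True,
     "reason_stated_clearly": True, "scope_specified": True},
]
print([score(n) for n in notices])      # per-notice scores: [2, 4]
print(mean(score(n) for n in notices))  # a simple descriptive statistic: 3
```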

19.
J Clin Epidemiol ; 173: 111427, 2024 Jun 14.
Article in English | MEDLINE | ID: mdl-38880438

ABSTRACT

OBJECTIVES: Retraction is intended to be a mechanism for correcting the published body of knowledge when fraudulent, fatally flawed, or ethically unacceptable publications make it necessary. The success of this mechanism, however, requires that retracted publications be consistently identified as such and that retraction notices contain sufficient information to understand what is being retracted and why. Our study investigated how clearly and consistently retracted publications in public health are presented to researchers. STUDY DESIGN AND SETTING: This is a cross-sectional study of 441 retracted research publications in the field of public health. Records for each publication were retrieved from 11 resources, while retraction notices were retrieved from publisher websites and full-text aggregators. Identification of the retracted status of each publication was assessed using criteria from the Committee on Publication Ethics and the National Library of Medicine. The completeness of the associated retraction notices was assessed using criteria from the Committee on Publication Ethics and Retraction Watch. RESULTS: In total, 2,841 records for retracted publications were retrieved, of which fewer than half indicated that the article had been retracted. Fewer than 5% of publications were identified as retracted in every resource through which they were available. Within individual resources, whether and how retracted publications were identified varied. Retraction notices were frequently incomplete, and no notice met all the criteria. CONCLUSIONS: The observed inconsistencies and incomplete notices pose a threat to the integrity of scientific publishing and highlight the need to better align with existing best practices to ensure more effective and transparent dissemination of information about retractions.
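
The two headline measures reported above, the share of records that flag the retraction at all and the share of publications flagged in every resource carrying them, can be computed as in this sketch; the field names and sample data are invented for illustration and do not come from the study.

```python
# Illustrative computation of the two identification measures described
# above. The records below are made up; real data would come from the
# 11 resources named in the study.

records = [
    # (publication_id, resource, flagged_as_retracted)
    ("pub1", "resourceA", True),
    ("pub1", "resourceB", False),
    ("pub2", "resourceA", True),
    ("pub2", "resourceC", True),
]

# Share of individual records that identify the publication as retracted.
flagged = sum(1 for _, _, f in records if f)
print(f"records flagging retraction: {flagged / len(records):.0%}")

# Share of publications identified as retracted in every resource
# through which they are available.
by_pub = {}
for pub, _, f in records:
    by_pub.setdefault(pub, []).append(f)
consistent = sum(1 for flags in by_pub.values() if all(flags))
print(f"flagged in all resources: {consistent / len(by_pub):.0%}")
```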

20.
mBio ; : e0146724, 2024 Jun 18.
Article in English | MEDLINE | ID: mdl-38888330

ABSTRACT

During the initial months of the coronavirus disease 2019 pandemic, mBio experienced a large increase in the number of submissions, a phenomenon also observed in journals from other fields. Since most research laboratories were closed, this increase cannot reflect increased research activity. In this editorial, we propose that the rise in submissions reflected the release of a backlog of unpublished work following a reduction in work-related engagements, including scientific travel, which in turn provides an estimate of the productivity costs of such activities on research output.
