CONSORT-TM: Text classification models for assessing the completeness of randomized controlled trial publications.
Jiang, Lan; Lan, Mengfei; Menke, Joe D; Vorland, Colby J; Kilicoglu, Halil.
Affiliation
  • Jiang L; School of Information Sciences, University of Illinois Urbana-Champaign, Champaign, IL, USA.
  • Lan M; School of Information Sciences, University of Illinois Urbana-Champaign, Champaign, IL, USA.
  • Menke JD; School of Information Sciences, University of Illinois Urbana-Champaign, Champaign, IL, USA.
  • Vorland CJ; Indiana University, School of Public Health, Bloomington, IN, USA.
  • Kilicoglu H; School of Information Sciences, University of Illinois Urbana-Champaign, Champaign, IL, USA.
medRxiv; 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38633775
ABSTRACT

Objective:

To develop text classification models for determining whether the checklist items in the CONSORT reporting guidelines are reported in randomized controlled trial publications.

Materials and Methods:

Using a corpus annotated at the sentence level with 37 fine-grained CONSORT items, we trained several sentence classification models (PubMedBERT fine-tuning, BioGPT fine-tuning, and in-context learning with GPT-4) and compared their performance. To address the problem of the small training dataset, we used several data augmentation methods (EDA, UMLS-EDA, text generation and rephrasing with GPT-4) and assessed their impact on the fine-tuned PubMedBERT model. We also fine-tuned PubMedBERT models limited to checklist items associated with specific sections (e.g., Methods) to evaluate whether such models could improve performance compared to the single full model. We performed 5-fold cross-validation and report precision, recall, F1 score, and area under the curve (AUC).
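
As a rough illustration of the sentence classification setup described above, the sketch below loads a PubMedBERT-style encoder with a classification head over the 37 CONSORT items using the HuggingFace transformers API. The checkpoint name, the multi-label formulation, and the 0.5 decision threshold are assumptions for illustration; the abstract does not specify these details, and this is not the authors' released code.

```python
# Minimal sketch (assumptions noted above): PubMedBERT-style sentence
# classification over CONSORT checklist items.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext"  # assumed checkpoint
NUM_LABELS = 37  # fine-grained CONSORT checklist items

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME,
    num_labels=NUM_LABELS,
    problem_type="multi_label_classification",  # assumed; BCE loss over item labels
)

sentence = "Participants were randomly assigned using a computer-generated sequence."
inputs = tokenizer(sentence, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits        # shape: (1, NUM_LABELS)
probs = torch.sigmoid(logits)              # independent per-item probabilities
predicted_items = (probs > 0.5).nonzero()  # indices of predicted CONSORT items
```

In practice the classification head would be trained on the annotated corpus (e.g., with the transformers Trainer) before the predictions above are meaningful; the snippet only shows the model setup and inference path.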

Results:

The fine-tuned PubMedBERT model that takes as input the sentence and the surrounding sentence representations and uses section headers yielded the best overall performance (0.71 micro-F1, 0.64 macro-F1). Data augmentation had a limited positive effect, with UMLS-EDA yielding slightly better results than data augmentation using GPT-4. BioGPT fine-tuning and GPT-4 in-context learning exhibited suboptimal results. The Methods-specific model yielded higher performance for methodology items; other section-specific models did not have a significant impact.
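
The micro- and macro-averaged F1 scores reported above aggregate per-item performance in different ways: micro-F1 pools all item-level decisions, while macro-F1 averages the per-item F1 scores with equal weight, so rare checklist items count as much as frequent ones. A minimal sketch with toy multi-label predictions (not data from the paper) shows the distinction:

```python
# Toy example: micro vs. macro F1 over a sentences x CONSORT-items matrix.
import numpy as np
from sklearn.metrics import f1_score

# Rows = sentences, columns = checklist items (illustrative values only).
y_true = np.array([[1, 0, 0], [0, 1, 0], [1, 0, 1], [1, 0, 1]])
y_pred = np.array([[1, 0, 0], [0, 1, 0], [1, 0, 1], [0, 0, 0]])

micro_f1 = f1_score(y_true, y_pred, average="micro")  # pools all item decisions
macro_f1 = f1_score(y_true, y_pred, average="macro")  # unweighted mean over items
print(f"micro-F1 = {micro_f1:.2f}, macro-F1 = {macro_f1:.2f}")
```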

Conclusion:

Most CONSORT checklist items can be recognized reasonably well with the fine-tuned PubMedBERT model, but there is room for improvement. Improved models can underpin journal editorial workflows and CONSORT adherence checks and can help authors improve the reporting quality and completeness of their manuscripts.

Full text: 1 Databases: MEDLINE Language: English Journal: medRxiv Year: 2024 Document type: Article Country of affiliation: United States