On the effectiveness of compact biomedical transformers.
Rohanian, Omid; Nouriborji, Mohammadmahdi; Kouchaki, Samaneh; Clifton, David A.
Affiliation
  • Rohanian O; Department of Engineering Science, University of Oxford, Oxford, UK.
  • Nouriborji M; NLPie Research, Oxford, UK.
  • Kouchaki S; NLPie Research, Oxford, UK.
  • Clifton DA; Department of Electrical and Electronic Engineering, University of Surrey, Guildford, UK.
Bioinformatics; 39(3), 2023 Mar 01.
Article in En | MEDLINE | ID: mdl-36825820
ABSTRACT
MOTIVATION: Language models pre-trained on biomedical corpora, such as BioBERT, have recently shown promising results on downstream biomedical tasks. Many existing pre-trained models, on the other hand, are resource-intensive and computationally heavy owing to factors such as embedding size, hidden dimension and number of layers. The natural language processing community has developed numerous strategies to compress these models using techniques such as pruning, quantization and knowledge distillation, resulting in models that are considerably faster, smaller and consequently easier to use in practice. Following the same line of work, in this article we introduce six lightweight models, namely BioDistilBERT, BioTinyBERT, BioMobileBERT, DistilBioBERT, TinyBioBERT and CompactBioBERT, which are obtained either by knowledge distillation from a biomedical teacher or by continual learning on the PubMed dataset. We evaluate all of our models on three biomedical tasks and compare them with BioBERT-v1.1, with the aim of producing efficient lightweight models that perform on par with their larger counterparts.
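As context for the distillation-based models mentioned above, the following is a minimal, generic sketch of response-based knowledge distillation in PyTorch. It illustrates only the standard soft-target/hard-target blend; the paper's actual training objectives (e.g. how masked-language-modelling, logit and hidden-state losses are combined for each compact model) are defined in the linked repository, and the function and parameter names here are illustrative assumptions.

```python
# Generic knowledge-distillation loss sketch (not the paper's exact objective).
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """Blend a soft KL term against the teacher with the usual hard-label loss."""
    # Soft targets: the student matches the teacher's tempered distribution.
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)
    # Hard targets: standard cross-entropy on the true labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```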

RESULTS:

We trained six different models in total, with the largest having 65 million parameters and the smallest 15 million, far fewer than BioBERT's 110 million. Based on our experiments on three different biomedical tasks, we found that models distilled from a biomedical teacher and models additionally pre-trained on the PubMed dataset can retain up to 98.8% and 98.6% of the performance of BioBERT-v1.1, respectively. Overall, our best model below 30M parameters is BioMobileBERT, while our best models above 30M parameters are DistilBioBERT and CompactBioBERT, which retain up to 98.2% and 98.8% of the performance of BioBERT-v1.1, respectively.

AVAILABILITY AND IMPLEMENTATION:

Code is available at https://github.com/nlpie-research/Compact-Biomedical-Transformers. Trained models can be accessed at https://huggingface.co/nlpie.
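The released checkpoints can be loaded with the Hugging Face `transformers` library. The sketch below assumes the models are published under the `nlpie` organisation; the specific model identifier used here is an assumption and should be checked against https://huggingface.co/nlpie.

```python
# Minimal usage sketch; the model identifier is assumed, verify it on the nlpie hub page.
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_id = "nlpie/distil-biobert"  # assumed identifier; see https://huggingface.co/nlpie
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

text = "The patient was treated with [MASK] for hypertension."
inputs = tokenizer(text, return_tensors="pt")
outputs = model(**inputs)  # logits over the vocabulary for each token
```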
Subjects

Full text: 1 Database: MEDLINE Main subject: Natural Language Processing Study type: Prognostic_studies Language: En Publication year: 2023 Document type: Article
