Continually adapting pre-trained language model to universal annotation of single-cell RNA-seq data.
Wan, Hui; Yuan, Musu; Fu, Yiwei; Deng, Minghua.
Affiliation
  • Wan H; School of Mathematical Sciences, Peking University, Beijing, China, 100871.
  • Yuan M; Center for Quantitative Biology, Peking University, Beijing, China, 100871.
  • Fu Y; School of Mathematical Sciences, Peking University, Beijing, China, 100871.
  • Deng M; School of Mathematical Sciences, Peking University, Beijing, China, 100871.
Brief Bioinform; 25(2), 2024 Jan 22.
Article in En | MEDLINE | ID: mdl-38388681
ABSTRACT
MOTIVATION:

Cell-type annotation of single-cell RNA-sequencing (scRNA-seq) data is a cornerstone of biomedical research and clinical application. Current annotation tools usually assume that well-annotated data are acquired all at once, and they lack the ability to expand their knowledge from new data. Such tools are therefore at odds with the continuous emergence of scRNA-seq data, calling for a continual cell-type annotation model. In addition, owing to their powerful information-integration ability and model interpretability, transformer-based pre-trained language models have led to breakthroughs in single-cell biology research. Systematically combining continual learning with pre-trained language models for cell-type annotation is therefore a natural next step.

RESULTS:

We herein propose a universal cell-type annotation tool, called CANAL, that continually fine-tunes a pre-trained language model trained on a large amount of unlabeled scRNA-seq data as new well-labeled data emerge. CANAL alleviates catastrophic forgetting with respect to both model inputs and model outputs. For model inputs, we introduce an experience replay scheme that repeatedly revisits vital examples from previous stages during current training. This is achieved through a dynamic example bank with a fixed buffer size. The example bank is class-balanced and proficient at retaining cell-type-specific information, particularly facilitating the consolidation of patterns associated with rare cell types. For model outputs, we utilize representation knowledge distillation to regularize the divergence between previous and current models, thereby preserving the knowledge learned in past training stages. Moreover, our universal annotation framework accommodates new cell types during both the fine-tuning and testing stages: we can continuously expand the cell-type annotation library by absorbing new cell types from newly arrived, well-annotated training datasets, as well as automatically identify novel cells in unlabeled datasets. Comprehensive experiments with data streams under various biological scenarios demonstrate the versatility and high model interpretability of CANAL.
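To make the mechanisms above concrete, here is a minimal Python/PyTorch sketch of the three ideas the abstract describes: a class-balanced, fixed-budget example bank for experience replay, a representation-level distillation loss against the previous-stage model, and thresholded detection of candidate novel cells. All names (ClassBalancedBank, representation_distillation_loss, flag_novel), the eviction rule, the MSE divergence, and the softmax threshold are illustrative assumptions, not CANAL's actual API or criteria.

```python
import random
from collections import defaultdict

import torch
import torch.nn.functional as F


class ClassBalancedBank:
    """Fixed-budget example bank for experience replay. Keeps roughly equal
    numbers of cells per observed cell type, so rare types are not crowded
    out by abundant ones."""

    def __init__(self, buffer_size):
        self.buffer_size = buffer_size
        self.per_class = defaultdict(list)  # cell-type label -> expression vectors

    def add(self, x, y):
        """Store one cell (expression vector x, cell-type label y), then evict
        from the largest class until the total fits the budget."""
        self.per_class[y].append(x)
        while sum(len(v) for v in self.per_class.values()) > self.buffer_size:
            largest = max(self.per_class, key=lambda c: len(self.per_class[c]))
            self.per_class[largest].pop(
                random.randrange(len(self.per_class[largest]))
            )

    def sample(self, n):
        """Draw up to n stored (x, y) pairs to mix into current-stage batches."""
        pool = [(x, y) for y, xs in self.per_class.items() for x in xs]
        return random.sample(pool, min(n, len(pool)))


def representation_distillation_loss(student_repr, teacher_repr):
    """Penalize drift between the current model's cell representations and
    those of the frozen previous-stage model (MSE is an assumption; the
    paper's divergence measure may differ)."""
    return F.mse_loss(student_repr, teacher_repr.detach())


def flag_novel(logits, threshold=0.5):
    """Mark cells whose maximum softmax probability falls below a threshold
    as candidate novel cell types (a common heuristic, not necessarily
    CANAL's criterion)."""
    return logits.softmax(dim=-1).max(dim=-1).values < threshold
```

In use, each new fine-tuning stage would mix cells drawn via bank.sample() into its training batches and add the distillation term to the classification loss, so the model reviews old cell types while absorbing new ones.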

AVAILABILITY:

An implementation of CANAL is available at https://github.com/aster-ww/CANAL-torch.

CONTACT: dengmh@pku.edu.cn

SUPPLEMENTARY INFORMATION: Supplementary data are available at Briefings in Bioinformatics online.

Full text: 1 Collection: 01-international Database: MEDLINE Main subject: Software / Gene Expression Profiling Language: En Journal: Brief Bioinform Journal subject: BIOLOGY / MEDICAL INFORMATICS Year: 2024 Document type: Article
