Deciphering the language of antibodies using self-supervised learning.
Patterns (N Y); 3(7): 100513, 2022 Jul 08. Article in English | MEDLINE | ID: mdl-35845836
An individual's B cell receptor (BCR) repertoire encodes information about past immune responses and potential for future disease protection. Deciphering the information stored in BCR sequence datasets will transform our understanding of disease and enable discovery of novel diagnostics and antibody therapeutics. A key challenge of BCR sequence analysis is the prediction of BCR properties from their amino acid sequence alone. Here, we present an antibody-specific language model, Antibody-specific Bidirectional Encoder Representation from Transformers (AntiBERTa), which provides a contextualized representation of BCR sequences. Following pre-training, we show that AntiBERTa embeddings capture biologically relevant information, generalizable to a range of applications. As a case study, we fine-tune AntiBERTa to predict paratope positions from an antibody sequence, outperforming public tools across multiple metrics. To our knowledge, AntiBERTa is the deepest protein-family-specific language model, providing a rich representation of BCRs. AntiBERTa embeddings are primed for multiple downstream tasks and can improve our understanding of the language of antibodies.
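As a rough illustration of the fine-tuning setup the abstract describes, the sketch below frames paratope prediction as per-residue token classification on top of a BERT-style antibody language model, using the HuggingFace transformers API. The checkpoint name "antiberta-base", the space-separated residue tokenization, and the two-label scheme are assumptions for illustration only, not the authors' released code.

```python
# Minimal sketch: paratope prediction as token classification over an
# antibody sequence, assuming a BERT-style antibody LM is available as a
# HuggingFace checkpoint. "antiberta-base" is a hypothetical placeholder.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

MODEL_NAME = "antiberta-base"  # placeholder; substitute a real checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
# Two labels per residue: 0 = non-paratope, 1 = paratope. Fine-tuning would
# train these per-residue logits with cross-entropy against binding labels.
model = AutoModelForTokenClassification.from_pretrained(MODEL_NAME, num_labels=2)

# Antibody variable-region fragment; many protein LM tokenizers expect
# space-separated single-letter residues (an assumption here).
sequence = "E V Q L V E S G G G L V Q P G G S L R L S C A A S"
inputs = tokenizer(sequence, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits              # shape: (1, seq_len, 2)
paratope_prob = logits.softmax(dim=-1)[0, :, 1]  # P(residue is in paratope)
```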
Full text: 1
Database: MEDLINE
Study type: Prognostic study
Language: English
Journal: Patterns (N Y)
Publication year: 2022
Document type: Article