Results 1 - 2 of 2

1.
BMC Bioinformatics; 25(1): 301, 2024 Sep 13.
Article in English | MEDLINE | ID: mdl-39272021

ABSTRACT

Transformer-based large language models (LLMs) are well suited to biological sequence data because of analogies to natural language: a concept of "words" can be generated through tokenization, allowing complex relationships to be learned. When trained with masked token prediction, the models learn both token sequence identity and the larger sequence context. We developed methodology to interrogate model learning, which is relevant both for the interpretability of the model and for evaluating its potential for specific tasks. We used DNABERT, a DNA language model trained on the human genome with overlapping k-mers as tokens. To gain insight into the model's learning, we interrogated how the model performs predictions, extracted token embeddings, and defined a fine-tuning benchmarking task: predicting the next tokens of different sizes without overlaps. This task evaluates foundation models without interrogating specific genome biology, and it does not depend on the tokenization strategy, vocabulary size, dictionary, or number of training parameters. Moreover, no information leaks from token identity into the prediction task, which makes it particularly useful for evaluating how well sequence context is learned. We discovered that the model with overlapping k-mers struggles to learn larger sequence context; instead, the learned embeddings largely represent token sequence. Still, good performance is achieved on genome-biology-inspired fine-tuning tasks. Models with overlapping tokens may be used for tasks where larger sequence context is less relevant and the token sequence directly represents the desired learning features. This emphasizes the need to interrogate knowledge representation in biological LLMs.
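
As an aside on the tokenization scheme at issue, the following is a minimal Python sketch of DNABERT-style overlapping k-mer tokenization next to the non-overlapping tokens used in the benchmarking task; the function name and defaults are illustrative assumptions, not the authors' code.

```python
def kmer_tokenize(seq: str, k: int = 6, stride: int = 1) -> list[str]:
    """Split a DNA sequence into k-mer tokens.

    stride=1 yields DNABERT-style overlapping k-mers; stride=k yields
    non-overlapping tokens, as in the benchmarking task described above.
    (Illustrative sketch, not the authors' implementation.)
    """
    return [seq[i:i + k] for i in range(0, len(seq) - k + 1, stride)]

seq = "ACGTACGTACGT"
print(kmer_tokenize(seq, k=6, stride=1))
# ['ACGTAC', 'CGTACG', 'GTACGT', 'TACGTA', 'ACGTAC', 'CGTACG', 'GTACGT']
print(kmer_tokenize(seq, k=6, stride=6))
# ['ACGTAC', 'GTACGT']
```

Adjacent overlapping 6-mers share five of their six bases, so a masked token is almost fully determined by its neighbors; this is the token-identity leakage that the non-overlapping next-token task is designed to exclude.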


Subject(s)
DNA, Humans, DNA/chemistry, Human Genome, Sequence Analysis, DNA/methods, Natural Language Processing, Computational Biology/methods
2.
Article in English | MEDLINE | ID: mdl-34720444

ABSTRACT

In this paper we study stationary graphs for functionals of geometric nature defined on currents or varifolds. The point of view we adopt is that of differential inclusions, introduced in this context in the recent papers (De Lellis et al. in Geometric measure theory and differential inclusions, 2019. arXiv:1910.00335; Tione in Minimal graphs and differential inclusions. Commun Part Differ Equ 7:1-33, 2021). In particular, given a polyconvex integrand f, we define a set of matrices C_f that allows us to rewrite the stationarity condition for a graph with multiplicity as a differential inclusion. Then we prove that if f is assumed to be non-negative, C_f contains no T_N' configuration, thus recovering the main result of De Lellis et al. (Geometric measure theory and differential inclusions, 2019. arXiv:1910.00335) as a corollary. Finally, we show that if the hypothesis of non-negativity is dropped, one can not only find T_N' configurations in C_f, but also construct, via convex integration, a very degenerate stationary point with multiplicity.
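
For readers unfamiliar with the differential-inclusion viewpoint, the LaTeX sketch below records the two classical stationarity conditions for a graph and the schematic shape of a set like C_f. The block structure shown is our reconstruction from the cited works under simplifying assumptions (multiplicity one); the paper's actual definition should be consulted.

```latex
% For u : \Omega \subset \mathbb{R}^n \to \mathbb{R}^m and the energy
% E(u) = \int_\Omega f(Du)\,dx, stationarity means both variations vanish:
\operatorname{div}\bigl(Df(Du)\bigr) = 0
  \quad\text{(outer variation / Euler--Lagrange)},
\qquad
\operatorname{div}\bigl(Du^{T}\,Df(Du) - f(Du)\,\mathrm{Id}\bigr) = 0
  \quad\text{(inner variation)}.
% Both are pointwise relations between Du and two divergence-free matrix
% fields, so stationarity becomes the inclusion W(x) \in C_f a.e., with
% (schematically)
C_f = \bigl\{ (X, Y, Z) : Y = Df(X),\ Z = X^{T} Df(X) - f(X)\,\mathrm{Id} \bigr\}.
```

In this language, the abstract's dichotomy reads: for non-negative f the set C_f supports no T_N' configuration (a rigidity statement), while without the sign condition such configurations exist and drive the convex-integration construction of degenerate stationary points.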
