Generative language models on nucleotide sequences of human genes.
Ihtiyar, Musa Nuri; Özgür, Arzucan.
Affiliation
  • Ihtiyar MN; Department of Computer Engineering, Bogaziçi University, 34342, Istanbul, Turkey. musa.ihtiyar@bogazici.edu.tr.
  • Özgür A; Department of Computer Engineering, Bogaziçi University, 34342, Istanbul, Turkey. arzucan.ozgur@bogazici.edu.tr.
Sci Rep; 14(1): 22204, 2024 Sep 27.
Article in En | MEDLINE | ID: mdl-39333252
ABSTRACT
Language models, especially transformer-based ones, have achieved remarkable success in natural language processing: BERT for natural language understanding and GPT-3 for natural language generation are prominent examples. If DNA sequences are viewed as text written in a four-letter alphabet representing the nucleotides, they resemble natural language in structure. This similarity has led to discriminative language models such as DNABERT in DNA-related bioinformatics. To our knowledge, however, the generative side remains largely unexplored. We therefore focused on developing an autoregressive generative language model, in the spirit of GPT-3, for DNA sequences. Since working with whole DNA sequences is challenging without extensive computational resources, we conducted our study on a smaller scale and focused on the nucleotide sequences of human genes, i.e., unique parts of DNA with specific functions, rather than whole DNA. This decision does not substantially change the structure of the problem, as both DNA and genes can be treated as one-dimensional sequences over four nucleotides without losing much information and without oversimplification. We systematically studied this almost entirely unexplored problem and observed that recurrent neural networks (RNNs) perform best, while simple techniques such as N-grams are also promising. A further benefit was learning how to work with generative models on languages that, unlike natural languages, we do not understand; this highlighted the importance of evaluating on real-world tasks beyond classical metrics such as perplexity. In addition, we examined whether the data-hungry nature of these models is mitigated by choosing a language with a minimal vocabulary size (four, one symbol per nucleotide type), on the reasoning that such a language might make the problem easier. However, in this study, we found that this did not substantially reduce the amount of data required.
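To make the N-gram baseline mentioned in the abstract concrete, the following is a minimal sketch of a character-level N-gram model over the four-nucleotide alphabet, with add-one smoothing and sequence-level perplexity. The function names, smoothing choice, and padding scheme are illustrative assumptions, not the paper's implementation:

```python
# Minimal character-level N-gram model over nucleotide sequences.
# NOTE: add-one smoothing and "^" start-padding are illustrative
# assumptions; the paper's actual N-gram setup may differ.
from collections import defaultdict
import math

ALPHABET = "ACGT"

def train_ngram(seqs, n=3):
    """Count (context -> next nucleotide) continuations of order n."""
    counts = defaultdict(lambda: defaultdict(int))
    for s in seqs:
        padded = "^" * (n - 1) + s
        for i in range(len(s)):
            ctx = padded[i:i + n - 1]
            counts[ctx][padded[i + n - 1]] += 1
    return counts

def perplexity(seq, counts, n=3):
    """Add-one-smoothed perplexity of one sequence under the model."""
    padded = "^" * (n - 1) + seq
    log_prob = 0.0
    for i in range(len(seq)):
        ctx = padded[i:i + n - 1]
        total = sum(counts[ctx].values())
        p = (counts[ctx][padded[i + n - 1]] + 1) / (total + len(ALPHABET))
        log_prob += math.log(p)
    return math.exp(-log_prob / len(seq))
```

With a vocabulary of only four symbols, a uniform-random model has perplexity 4, so any learned structure in gene sequences shows up as perplexity below that ceiling; the abstract's caveat is that perplexity alone can be misleading, which is why downstream real-world tasks matter.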
Full text: 1 | Database: MEDLINE | Main subjects: Natural Language Processing / Computational Biology | Limits: Humans | Language: En | Publication year: 2024 | Document type: Article