Proformer: a hybrid macaron transformer model predicts expression values from promoter sequences.
Kwak, Il-Youp; Kim, Byeong-Chan; Lee, Juhyun; Kang, Taein; Garry, Daniel J; Zhang, Jianyi; Gong, Wuming.
Affiliation
  • Kwak IY; Department of Applied Statistics, Chung-Ang University, Seoul, Republic of Korea.
  • Kim BC; Department of Applied Statistics, Chung-Ang University, Seoul, Republic of Korea.
  • Lee J; Department of Applied Statistics, Chung-Ang University, Seoul, Republic of Korea.
  • Kang T; Department of Applied Statistics, Chung-Ang University, Seoul, Republic of Korea.
  • Garry DJ; Cardiovascular Division, Department of Medicine, Lillehei Heart Institute, University of Minnesota, 2231 6th St SE, Minneapolis, MN, 55455, USA. garry@umn.edu.
  • Zhang J; Stem Cell Institute, University of Minnesota, Minneapolis, MN, 55455, USA. garry@umn.edu.
  • Gong W; Paul and Sheila Wellstone Muscular Dystrophy Center, University of Minnesota, Minneapolis, MN, 55455, USA. garry@umn.edu.
BMC Bioinformatics; 25(1): 81, 2024 Feb 20.
Article in En | MEDLINE | ID: mdl-38378442
ABSTRACT
The breakthrough high-throughput measurement of the cis-regulatory activity of millions of randomly generated promoters provides an unprecedented opportunity to systematically decode the cis-regulatory logic that determines expression values. We developed an end-to-end transformer encoder architecture named Proformer to predict expression values from DNA sequences. Proformer uses a Macaron-like Transformer encoder architecture, in which two half-step feed-forward network (FFN) layers are placed at the beginning and the end of each encoder block, and a separable 1D convolution layer is inserted after the first FFN layer and in front of the multi-head attention layer. Sliding k-mers from the one-hot encoded sequences are mapped onto a continuous embedding and combined with a learned positional embedding and a strand embedding (forward strand vs. reverse-complemented strand) to form the sequence input. Moreover, Proformer introduces multiple expression heads with mask filling to prevent the transformer model from collapsing when trained on relatively small amounts of data. We empirically determined that this design performs significantly better than conventional designs, such as using a global pooling layer as the output layer for the regression task. These analyses support the notion that Proformer provides a novel method of learning and enhances our understanding of how cis-regulatory sequences determine expression values.
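To make the described block structure concrete, the following is a minimal PyTorch sketch of a Macaron-like encoder block with the ordering stated in the abstract: a half-step FFN, then a separable 1D convolution, then multi-head attention, then a closing half-step FFN. All layer names, dimensions, normalization placements, and residual connections here are illustrative assumptions and are not taken from the Proformer source code.

```python
# Sketch only: hyperparameters and residual/norm placement are assumptions,
# not the published Proformer implementation.
import torch
import torch.nn as nn


class HalfStepFFN(nn.Module):
    """Position-wise feed-forward layer whose residual contribution is scaled by 0.5."""
    def __init__(self, d_model, d_ff, dropout=0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.LayerNorm(d_model),
            nn.Linear(d_model, d_ff),
            nn.ReLU(),
            nn.Dropout(dropout),
            nn.Linear(d_ff, d_model),
            nn.Dropout(dropout),
        )

    def forward(self, x):
        return x + 0.5 * self.net(x)


class SeparableConv1d(nn.Module):
    """Depthwise + pointwise 1D convolution applied along the sequence dimension."""
    def __init__(self, d_model, kernel_size=3):
        super().__init__()
        self.depthwise = nn.Conv1d(d_model, d_model, kernel_size,
                                   padding=kernel_size // 2, groups=d_model)
        self.pointwise = nn.Conv1d(d_model, d_model, kernel_size=1)

    def forward(self, x):             # x: (batch, seq_len, d_model)
        y = x.transpose(1, 2)         # -> (batch, d_model, seq_len) for Conv1d
        y = self.pointwise(self.depthwise(y))
        return x + y.transpose(1, 2)


class MacaronEncoderBlock(nn.Module):
    """Half-step FFN -> separable conv -> multi-head attention -> half-step FFN."""
    def __init__(self, d_model=256, n_heads=8, d_ff=1024, dropout=0.1):
        super().__init__()
        self.ffn_in = HalfStepFFN(d_model, d_ff, dropout)
        self.conv = SeparableConv1d(d_model)
        self.norm_attn = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads,
                                          dropout=dropout, batch_first=True)
        self.ffn_out = HalfStepFFN(d_model, d_ff, dropout)

    def forward(self, x):
        x = self.ffn_in(x)
        x = self.conv(x)
        h = self.norm_attn(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]
        return self.ffn_out(x)


if __name__ == "__main__":
    block = MacaronEncoderBlock()
    tokens = torch.randn(2, 200, 256)   # (batch, k-mer positions, embedding dim)
    print(block(tokens).shape)           # torch.Size([2, 200, 256])
```

In this sketch the input tensor stands in for the summed k-mer, positional, and strand embeddings described in the abstract; how those embeddings are constructed and how the multiple expression heads with mask filling are attached is left out.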
Full text: 1 Database: MEDLINE Main subject: Electric Power Supplies / Learning Language: En Year: 2024 Type: Article