CasANGCL: pre-training and fine-tuning model based on cascaded attention network and graph contrastive learning for molecular property prediction.
Zheng, Zixi; Tan, Yanyan; Wang, Hong; Yu, Shengpeng; Liu, Tianyu; Liang, Cheng.
Affiliation
  • Zheng Z; School of Information Science and Engineering, Shandong Normal University, Jinan 250358, China.
  • Tan Y; School of Information Science and Engineering, Shandong Normal University, Jinan 250358, China.
  • Wang H; School of Information Science and Engineering, Shandong Normal University, Jinan 250358, China.
  • Yu S; School of Information Science and Engineering, Shandong Normal University, Jinan 250358, China.
  • Liu T; School of Information Science and Engineering, Shandong Normal University, Jinan 250358, China.
  • Liang C; School of Information Science and Engineering, Shandong Normal University, Jinan 250358, China.
Brief Bioinform; 24(1), 19 January 2023.
Article in English | MEDLINE | ID: mdl-36592051
ABSTRACT
MOTIVATION:

Molecular property prediction is a core requirement in AI-driven drug design and discovery, aiming to predict a molecule's property information (e.g. toxicity) from mined biomolecular knowledge. Although graph neural networks have proven powerful for predicting molecular properties, imbalanced labeled data and poor generalization to newly synthesized molecules remain key issues that hinder further improvement of molecular encoding performance.

RESULTS:

We propose CasANGCL, a novel self-supervised representation learning scheme based on a Cascaded Attention Network and Graph Contrastive Learning. We design a new graph network variant, the cascaded attention network, to encode local-global molecular representations. We build an integrated, end-to-end two-stage contrast-predictor framework to tackle the label imbalance of training molecular samples. Moreover, we train our network with an information-flow scheme that explicitly captures edge information in the node/graph representations and thereby obtains more fine-grained knowledge. Our model achieves an average ROC-AUC of 81.9% on 661 tasks from seven challenging benchmarks, showing better portability and generalization. Further visualization studies indicate our model's stronger representation capacity and provide interpretability.
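The abstract gives no implementation details, so the following is only a minimal illustrative sketch of the graph contrastive pre-training idea it describes: an NT-Xent-style agreement objective between two augmented views of each molecular graph, written in PyTorch. The names nt_xent_loss, encoder, and augment are assumptions for illustration, not the authors' code, and the cascaded attention encoder itself is not shown.

import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.1):
    # z1, z2: [batch, dim] graph-level embeddings of two augmented views
    # (e.g. node-drop / edge-perturb views) of the same batch of molecules.
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)              # [2B, dim]
    sim = z @ z.t() / temperature               # pairwise scaled cosine similarities
    sim.fill_diagonal_(float('-inf'))           # exclude self-similarity terms
    b = z1.size(0)
    # the positive pair for row i is row i + B (and vice versa)
    targets = torch.cat([torch.arange(b) + b, torch.arange(b)]).to(z.device)
    return F.cross_entropy(sim, targets)

# Pre-training (stage 1): maximize agreement between the two views of each graph.
# `encoder` and `augment` are placeholders; fine-tuning (stage 2) would reuse the
# pre-trained encoder with a small prediction head on the labeled property data.
# loss = nt_xent_loss(encoder(augment(graphs)), encoder(augment(graphs)))

In such two-stage schemes, the contrastive stage learns label-free representations from unlabeled molecules, which is one common way to mitigate the scarcity and imbalance of labeled property data before supervised fine-tuning.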

Full text: 1 Database: MEDLINE Main subject: Benchmarking / Learning Language: English Year of publication: 2023 Document type: Article Country of affiliation: China
