Self-Supervised Molecular Pretraining Strategy for Low-Resource Reaction Prediction Scenarios.
J Chem Inf Model; 62(19): 4579-4590, 2022 Oct 10.
Article in English | MEDLINE | ID: mdl-36129104
Faced with low-resource reaction training samples, we construct a chemical platform for addressing small-scale reaction prediction problems. Using a self-supervised pretraining strategy called MAsked Sequence to Sequence (MASS), the Transformer model can absorb the chemical information of about 1 billion molecules and then be fine-tuned on small-scale reaction prediction tasks. To further strengthen the predictive performance of our model, we combine MASS with a reaction transfer learning strategy. Here, we show that the average improved accuracies of the Transformer model reach 14.07, 24.26, 40.31, and 57.69% on the Baeyer-Villiger, Heck, C-C bond formation, and functional group interconversion reaction data sets, respectively, marking an important step toward low-resource reaction prediction.
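The record contains no code, but the MASS objective the abstract describes (mask a contiguous span of a SMILES sequence on the encoder side and train the decoder to reconstruct exactly that span) can be sketched briefly. The following is a minimal, illustrative PyTorch sketch, not the authors' implementation: the character-level vocabulary, the `mask_span` helper, the model sizes, and the aspirin SMILES are all assumptions made for this example, and positional encodings are omitted for brevity.

```python
# Minimal sketch of MASS-style masked sequence-to-sequence pretraining on
# SMILES strings. Illustrative only; a real system would use a proper SMILES
# tokenizer, positional encodings, batching, and a corpus of ~1 billion molecules.
import random
import torch
import torch.nn as nn

PAD, BOS, MASK = 0, 1, 2                      # special token ids (assumed)
CHARS = sorted(set("CNOScn=#()[]123456789+-"))  # toy character set
VOCAB = {ch: i + 3 for i, ch in enumerate(CHARS)}
VOCAB_SIZE = len(VOCAB) + 3

def encode(smiles):
    # Character-level encoding; unknown characters are simply skipped here.
    return [VOCAB[ch] for ch in smiles if ch in VOCAB]

def mask_span(tokens, frac=0.5):
    # MASS objective: replace one contiguous span with MASK tokens on the
    # encoder side; the decoder must reconstruct the hidden span.
    k = max(1, int(len(tokens) * frac))
    start = random.randrange(len(tokens) - k + 1)
    src = tokens[:start] + [MASK] * k + tokens[start + k:]
    return src, tokens[start:start + k]

class MassPretrainer(nn.Module):
    def __init__(self, d_model=64, nhead=4, nlayers=2):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, d_model)
        self.transformer = nn.Transformer(d_model, nhead, nlayers, nlayers,
                                          dim_feedforward=128, batch_first=True)
        self.out = nn.Linear(d_model, VOCAB_SIZE)

    def forward(self, src, tgt_in):
        # Causal mask so the decoder predicts the span left to right.
        causal = self.transformer.generate_square_subsequent_mask(tgt_in.size(1))
        h = self.transformer(self.embed(src), self.embed(tgt_in), tgt_mask=causal)
        return self.out(h)

# One illustrative pretraining step on a single molecule.
model = MassPretrainer()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

tokens = encode("CC(=O)Oc1ccccc1C(=O)O")     # aspirin, standing in for a corpus
src, span = mask_span(tokens)
src_b = torch.tensor([src])
tgt_in = torch.tensor([[BOS] + span[:-1]])   # decoder input, shifted right
tgt_out = torch.tensor([span])               # decoder target: the hidden span

opt.zero_grad()
logits = model(src_b, tgt_in)
loss = nn.functional.cross_entropy(logits.reshape(-1, VOCAB_SIZE),
                                   tgt_out.reshape(-1))
loss.backward()
opt.step()
print(f"pretraining step loss: {loss.item():.3f}")
```

Per the abstract, fine-tuning would then reuse the pretrained encoder-decoder weights on small reactant-to-product SMILES pairs, optionally combined with reaction transfer learning.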
Full text: 1
Collection: 01-internacional
Database: MEDLINE
Study type: Prognostic_studies / Risk_factors_studies
Language: En
Journal: J Chem Inf Model
Journal subject: MEDICAL INFORMATICS / CHEMISTRY
Year: 2022
Document type: Article
Country of publication: United States