Results 1 - 2 of 2
1.
J Chem Inf Model; 64(8): 3161-3172, 2024 Apr 22.
Article in English | MEDLINE | ID: mdl-38532612

ABSTRACT

Butyrylcholinesterase (BChE) is a target of interest in late-stage Alzheimer's disease (AD), where selective BChE inhibitors (BIs) may offer symptomatic treatment without the harsh side effects of acetylcholinesterase (AChE) inhibitors. In this study, we explore multiple machine learning strategies to identify BIs in silico, optimizing for precision over all other metrics. We compare state-of-the-art supervised contrastive learning (CL) with deep learning (DL) and random forest (RF) models, across single and sequential modeling configurations, to identify the best models for BChE selectivity. We used these models to virtually screen a vendor library of 5 million compounds for BIs and tested 20 of these compounds in vitro. Seven of the 20 compounds displayed selectivity for BChE over AChE, a hit rate of 35% for our model predictions, suggesting a highly efficient strategy for modeling selective inhibition.
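
As a rough, hypothetical illustration of the random forest arm of this workflow, the sketch below trains a scikit-learn classifier on Morgan fingerprints and keeps only high-confidence predictions from a placeholder vendor library. The fingerprint settings, probability cutoff, and toy SMILES are assumptions for illustration only; they are not the paper's actual pipeline, which also includes contrastive and deep learning models.

```python
# Minimal sketch (assumptions, not the paper's pipeline): a random
# forest selectivity classifier over Morgan fingerprints, used to
# rank a small placeholder screening library.
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestClassifier

def featurize(smiles_list, radius=2, n_bits=2048):
    """SMILES -> Morgan fingerprint matrix (assumed featurizer)."""
    fps = []
    for smi in smiles_list:
        mol = Chem.MolFromSmiles(smi)
        fps.append(np.array(
            AllChem.GetMorganFingerprintAsBitVect(mol, radius, nBits=n_bits)))
    return np.array(fps)

# Placeholder training set: 1 = selective BChE inhibitor, 0 = not.
train_smiles = ["CCO", "c1ccccc1O", "CC(=O)Nc1ccc(O)cc1",
                "CCN(CC)CC", "c1ccc2ccccc2c1", "CC(C)Cc1ccc(C(C)C(=O)O)cc1"]
train_labels = [1, 0, 1, 0, 1, 0]

model = RandomForestClassifier(n_estimators=500, class_weight="balanced",
                               random_state=0)
model.fit(featurize(train_smiles), train_labels)

# "Virtual screen": score a placeholder vendor library and keep only
# high-confidence hits, mirroring the precision-first objective.
vendor_smiles = ["COc1ccccc1", "CCOC(=O)c1ccccc1"]
probs = model.predict_proba(featurize(vendor_smiles))[:, 1]
hits = [s for s, p in zip(vendor_smiles, probs) if p >= 0.8]
print(hits)
```

In a real screen, the probability cutoff would be tuned on held-out data to maximize precision, matching the precision-over-all-other-metrics objective described above.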


Subjects
Butyrylcholinesterase; Cholinesterase Inhibitors; Deep Learning; Butyrylcholinesterase/metabolism; Butyrylcholinesterase/chemistry; Cholinesterase Inhibitors/pharmacology; Cholinesterase Inhibitors/chemistry; Humans; Models, Molecular; Acetylcholinesterase/metabolism; Acetylcholinesterase/chemistry; Alzheimer Disease/drug therapy
2.
Commun Chem; 7(1): 134, 2024 Jun 12.
Article in English | MEDLINE | ID: mdl-38866916

ABSTRACT

Recent advances in machine learning (ML) have led to newer model architectures, including transformers (large language models, LLMs) that show state-of-the-art results in text generation and image analysis, as well as few-shot learning (FSLC) models that offer predictive power on extremely small datasets. These new architectures may offer promise, yet the 'no free lunch' theorem suggests that no single algorithm can outperform all others across every possible task. Here, we explore the capabilities of classical (SVR), FSLC, and transformer (MolBART) models over a range of dataset tasks and show a 'goldilocks zone' for each model type, in which dataset size and feature distribution (i.e., dataset "diversity") determine the optimal algorithm strategy. When datasets are small (<50 molecules), FSLC models tend to outperform both classical ML and transformers. When datasets are small-to-medium sized (50-240 molecules) and diverse, transformers outperform both classical models and few-shot learning. Finally, when datasets are sufficiently large, classical models perform best, suggesting that the optimal model depends on the available dataset, its size, and its diversity. These findings may help answer the perennial question of which ML algorithm to use when faced with a new dataset.
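
The decision rule these findings imply can be sketched as a simple routing heuristic, shown below. The size boundaries (50 and 240 molecules) come from the abstract; the concrete choice of scikit-learn's SVR for the "classical" arm and the string placeholders for the few-shot and transformer arms are illustrative assumptions, since those implementations are not specified here.

```python
# Minimal sketch (assumption, not from the paper): route a new dataset
# to a model family using the size boundaries quoted in the abstract.
from sklearn.svm import SVR

SMALL, MEDIUM = 50, 240  # dataset-size boundaries from the abstract

def choose_model(n_molecules, diverse=True):
    """Return the model family for a dataset's size regime."""
    if n_molecules < SMALL:
        return "few-shot learner (FSLC)"      # tiny datasets
    if n_molecules <= MEDIUM and diverse:
        return "transformer (e.g. MolBART)"   # small-to-medium, diverse
    return SVR(kernel="rbf")                  # larger: classical ML wins

for n in (30, 120, 1000):
    print(n, "->", choose_model(n))
```

This is a description of the abstract's reported regimes rather than a drop-in rule; in practice the crossover points would shift with the dataset's diversity and the specific models compared.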
