Results: 1 - 2 of 2
1.
J Chem Inf Model; 64(9): 3650-3661, 2024 May 13.
Article in English | MEDLINE | ID: mdl-38630581

ABSTRACT

Protein engineering faces the challenge of finding optimal mutants within a massive pool of candidates. In this study, we introduce a deep-learning-based, data-efficient fitness prediction tool to steer protein engineering. Our method builds a lightweight graph neural network for protein structures that efficiently analyzes the microenvironment of amino acids in wild-type proteins and reconstructs the distribution of amino acid sequences most likely to pass natural selection. This distribution serves as general guidance for scoring proteins toward arbitrary properties at any order of mutation. Our proposed solution undergoes extensive wet-lab experimental validation spanning diverse physicochemical properties of various proteins, including fluorescence intensity, antigen-antibody affinity, thermostability, and DNA cleavage activity. More than 40% of ProtLGN-designed single-site mutants outperform their wild-type counterparts across all studied proteins and targeted properties. More importantly, our model can bypass negative epistatic effects when combining single mutation sites into deep mutants with up to seven mutation sites in a single round, yielding significantly improved physicochemical properties. This observation provides compelling evidence of the structure-based model's potential to guide deep mutations in protein engineering. Overall, our approach emerges as a versatile tool for protein engineering, benefiting both the computational and bioengineering communities.
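
To make the scoring idea concrete, here is a minimal, hypothetical sketch (not ProtLGN's actual API): assume a structure-based model outputs, for each residue position, a probability distribution over the 20 standard amino acids; a candidate mutant is then scored by the summed log-odds of mutant versus wild-type residues at the mutated positions.

```python
# Hypothetical illustration of the scoring scheme described in the abstract;
# all names here are assumptions, not ProtLGN's published interface.
import math

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
AA_INDEX = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

def score_mutant(probs, wild_type, mutations):
    """Score a candidate mutant against the wild type.

    probs:     per-position probability vectors of length 20
               (e.g., predicted by a graph neural network).
    wild_type: wild-type amino acid sequence as a string.
    mutations: list of (position, mutant_aa) pairs, 0-indexed.
    Higher scores mean the mutant is more compatible with the
    model's reconstructed sequence distribution."""
    score = 0.0
    for pos, mut_aa in mutations:
        p = probs[pos]
        score += math.log(p[AA_INDEX[mut_aa]]) - math.log(p[AA_INDEX[wild_type[pos]]])
    return score
```

Because the score is additive over sites, single-site scores can be combined to rank multi-site ("deep") mutants in a single round, which matches the abstract's claim about forming deep mutants from single mutation sites.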


Subjects
Neural Networks, Computer; Protein Engineering; Protein Engineering/methods; Mutation; Proteins/chemistry; Proteins/genetics; Proteins/metabolism; Models, Molecular; Protein Conformation; Deep Learning
2.
Neural Netw; 177: 106387, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38788292

ABSTRACT

In modern Reinforcement Learning (RL), optimizing the Bellman error is a critical element of many algorithms, notably deep Q-Learning and related methods. Traditional approaches predominantly employ the mean-squared Bellman error (MSELoss) as the standard loss function. However, the assumption that Bellman errors follow a Gaussian distribution may oversimplify the nuanced characteristics of RL applications. In this work, we revisit the distribution of the Bellman error during RL training and demonstrate that it tends to follow the Logistic distribution rather than the commonly assumed Normal distribution. We propose replacing MSELoss with a Logistic maximum likelihood function (LLoss) and rigorously test this hypothesis through extensive numerical experiments across diverse online and offline RL environments. Our findings consistently show that integrating the Logistic correction into the loss functions of various baseline RL methods yields superior performance compared with their MSE counterparts. Additionally, we employ Kolmogorov-Smirnov tests to substantiate that the Logistic distribution provides a more accurate fit to the Bellman errors. This study also makes a novel theoretical contribution by establishing a clear connection between the distribution of the Bellman error and the practice of proportional reward scaling, a common technique for performance enhancement in RL. Moreover, we explore the sample-accuracy trade-off involved in approximating the Logistic distribution, leveraging the bias-variance decomposition to avoid excessive computational cost. The theoretical and empirical insights presented here lay a foundation for future research in RL, particularly in the distribution-based optimization of the Bellman error.
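
As a hedged sketch of the proposed substitution, the snippet below replaces the mean-squared Bellman error with the negative log-likelihood of the temporal-difference (TD) errors under a zero-mean Logistic distribution. The fixed `scale` hyperparameter and the zero-mean assumption are simplifications for illustration; the paper's exact parameterization may differ.

```python
# Minimal PyTorch sketch of an LLoss-style objective, under the stated
# assumptions (zero-mean Logistic, fixed scale); not the paper's exact code.
import math
import torch
import torch.nn.functional as F

def logistic_bellman_loss(td_errors: torch.Tensor, scale: float = 1.0) -> torch.Tensor:
    """Negative log-likelihood of TD errors under Logistic(0, scale):
    -log f(x) = x/s + 2*log(1 + exp(-x/s)) + log(s),
    computed with softplus for numerical stability."""
    z = td_errors / scale
    nll = z + 2.0 * F.softplus(-z) + math.log(scale)  # softplus(-z) = log(1 + e^(-z))
    return nll.mean()

# Usage in a Q-learning update (names illustrative):
#   td_errors = q_pred - q_target.detach()
#   loss = logistic_bellman_loss(td_errors)  # instead of F.mse_loss(q_pred, q_target)
```

Note that minimizing this NLL differs from MSELoss in its tail behavior: the Logistic NLL grows roughly linearly in the TD error for large deviations, so heavy-tailed Bellman errors are penalized less aggressively than under a squared loss.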


Subjects
Reinforcement, Psychology; Logistic Models; Algorithms; Humans; Neural Networks, Computer; Machine Learning; Reward