ABSTRACT
Directed laboratory evolution applies iterative rounds of mutation and selection to explore the protein fitness landscape and provides rich information about the underlying relationships between protein sequence, structure, and function. Laboratory evolution data consist of protein sequences sampled from evolving populations over multiple generations, a data type that does not fit established supervised or unsupervised machine learning approaches. We develop a statistical learning framework that models the evolutionary process and can infer the protein fitness landscape from multiple snapshots along an evolutionary trajectory. We apply our modeling approach to dihydrofolate reductase (DHFR) laboratory evolution data, and the resulting landscape parameters capture important aspects of DHFR structure and function. We use the resulting model to characterize the structure of the fitness landscape and find numerous examples of epistasis, yet an overall global peak that is evolutionarily accessible from most starting sequences. Finally, we use the model to perform an in silico extrapolation of the DHFR laboratory evolution trajectory and computationally design proteins from future evolutionary rounds.
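The framework itself is not shown here, but the following sketch (toy parameters and hypothetical names, not the authors' code) illustrates the kind of data it takes as input: sequence snapshots collected over repeated rounds of mutation and selection on a simple pairwise-epistatic landscape.

```python
# Minimal sketch (hypothetical): simulate a directed-evolution trajectory on a toy
# pairwise-epistatic fitness landscape and record the per-round sequence snapshots
# that a landscape-inference framework of this kind would consume as training data.
import numpy as np

rng = np.random.default_rng(0)
L, A = 10, 4                       # toy sequence length and alphabet size
POP, ROUNDS, MU = 500, 5, 0.02     # population size, evolution rounds, mutation rate

# Toy landscape: additive site terms plus sparse pairwise (epistatic) couplings.
h = rng.normal(0, 1, size=(L, A))
J = rng.normal(0, 0.3, size=(L, L, A, A)) * (rng.random((L, L, 1, 1)) < 0.1)

def fitness(seq):
    f = h[np.arange(L), seq].sum()
    for i in range(L):
        for j in range(i + 1, L):
            f += J[i, j, seq[i], seq[j]]
    return f

pop = rng.integers(0, A, size=(POP, L))    # random starting population
snapshots = []
for rnd in range(ROUNDS):
    snapshots.append(pop.copy())           # sequences sampled at this round
    # Mutation: each position flips to a random letter with probability MU.
    mask = rng.random(pop.shape) < MU
    pop = np.where(mask, rng.integers(0, A, size=pop.shape), pop)
    # Selection: resample the population with probability proportional to exp(fitness).
    f = np.array([fitness(s) for s in pop])
    p = np.exp(f - f.max()); p /= p.sum()
    pop = pop[rng.choice(POP, size=POP, p=p)]
    print(f"round {rnd}: mean fitness {f.mean():.2f}")
```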
Subjects
Genetic Fitness, Proteins, Genetic Fitness/genetics, Proteins/genetics, Proteins/metabolism, Mutation/genetics, Tetrahydrofolate Dehydrogenase/genetics, Tetrahydrofolate Dehydrogenase/metabolism, Amino Acid Sequence, Molecular Evolution, Genetic Models, Genetic Epistasis
ABSTRACT
Machine learning (ML) has transformed protein engineering by constructing models of the underlying sequence-function landscape to accelerate the discovery of new biomolecules. ML-guided protein design requires models trained on local sequence-function information to accurately predict distant fitness peaks. In this work, we evaluate neural networks' capacity to extrapolate beyond their training data. We perform model-guided design using a panel of neural network architectures trained on protein G (GB1)-Immunoglobulin G (IgG) binding data and experimentally test thousands of GB1 designs to systematically evaluate the models' extrapolation. We find that each model architecture infers a markedly different landscape from the same data, which gives rise to unique design preferences. Simpler models excel in local extrapolation to design high-fitness proteins, while more sophisticated convolutional models can venture deep into sequence space to design proteins that fold but are no longer functional. We also find that a simple ensemble of convolutional neural networks enables robust design of high-performing variants in the local landscape. Our findings highlight how each architecture's inductive biases prime it to learn different aspects of the protein fitness landscape and how a simple ensembling approach makes protein engineering more robust.
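As a rough illustration of the ensembling step described above, the sketch below trains several stand-in regressors on bootstrap resamples and ranks candidate designs by their ensemble-mean prediction. The GB1 data, the convolutional architectures, and all names here are placeholders, not the study's actual models or data.

```python
# Minimal sketch (hypothetical data and models): ensemble several sequence-function
# regressors and rank design candidates by the ensemble-mean prediction. The study
# ensembles convolutional neural networks trained on GB1-IgG binding data; simple
# linear models on one-hot encodings stand in for them here.
import numpy as np

rng = np.random.default_rng(1)
L, A, N = 8, 20, 400                      # toy sequence length, alphabet, dataset size

def one_hot(seqs):
    X = np.zeros((len(seqs), L * A))
    for n, s in enumerate(seqs):
        X[n, np.arange(L) * A + s] = 1.0
    return X

train_seqs = rng.integers(0, A, size=(N, L))   # stand-in training sequences
train_y = rng.normal(size=N)                   # stand-in fitness labels

# Train an ensemble of ridge-style linear models on bootstrap resamples.
ensemble = []
for _ in range(5):
    idx = rng.integers(0, N, size=N)
    X, y = one_hot(train_seqs[idx]), train_y[idx]
    w = np.linalg.solve(X.T @ X + np.eye(L * A), X.T @ y)
    ensemble.append(w)

# Score candidate designs with every member and rank by the ensemble mean.
candidates = rng.integers(0, A, size=(1000, L))
preds = np.stack([one_hot(candidates) @ w for w in ensemble])   # (members, candidates)
mean, spread = preds.mean(axis=0), preds.std(axis=0)
top = np.argsort(-mean)[:10]
print("top candidate indices:", top)
print("ensemble disagreement on top picks:", np.round(spread[top], 2))
```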
Subjects
Immunoglobulin G, Neural Networks (Computer), Protein Engineering, Protein Engineering/methods, Immunoglobulin G/metabolism, Immunoglobulin G/chemistry, Machine Learning, Protein Binding, Bacterial Proteins/metabolism, Bacterial Proteins/genetics, Bacterial Proteins/chemistry, Molecular Models
ABSTRACT
Protein language models trained on evolutionary data have emerged as powerful tools for predictive problems involving protein sequence, structure, and function. However, these models overlook decades of research into biophysical factors governing protein function. We propose Mutational Effect Transfer Learning (METL), a protein language model framework that unites advanced machine learning and biophysical modeling. Using the METL framework, we pretrain transformer-based neural networks on biophysical simulation data to capture fundamental relationships between protein sequence, structure, and energetics. We finetune METL on experimental sequence-function data to harness these biophysical signals and apply them when predicting protein properties like thermostability, catalytic activity, and fluorescence. METL excels in challenging protein engineering tasks like generalizing from small training sets and position extrapolation, although existing methods that train on evolutionary signals remain powerful for many types of experimental assays. We demonstrate METL's ability to design functional green fluorescent protein variants when trained on only 64 examples, showcasing the potential of biophysics-based protein language models for protein engineering.
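The following sketch (entirely synthetic data, with linear maps standing in for the transformer, not the METL implementation) illustrates the two-stage idea described above: pretrain a shared representation on abundant simulated biophysical targets, then finetune a small head on a handful of experimental measurements.

```python
# Minimal sketch (hypothetical, not the METL code): a two-stage transfer-learning
# workflow -- "pretrain" a shared representation on plentiful simulated biophysical
# targets, then "finetune" a small head on scarce experimental labels.
import numpy as np

rng = np.random.default_rng(2)
L, A = 10, 20                             # toy sequence length and alphabet size

def one_hot(seqs):
    X = np.zeros((len(seqs), L * A))
    for n, s in enumerate(seqs):
        X[n, np.arange(L) * A + s] = 1.0
    return X

# Stage 1: pretraining on abundant simulated biophysical targets (multi-task).
sim_seqs = rng.integers(0, A, size=(5000, L))
sim_targets = rng.normal(size=(5000, 6))             # e.g., energy terms from simulation
Xs = one_hot(sim_seqs)
W_repr, *_ = np.linalg.lstsq(Xs, sim_targets, rcond=None)   # shared feature map

# Stage 2: finetuning a head on only 64 experimental sequence-function examples.
exp_seqs = rng.integers(0, A, size=(64, L))
exp_y = rng.normal(size=64)                          # stand-in experimental fitness
Z = one_hot(exp_seqs) @ W_repr                       # pretrained biophysical features
w_head, *_ = np.linalg.lstsq(Z, exp_y, rcond=None)

# Prediction for new variants reuses the pretrained representation.
new_seqs = rng.integers(0, A, size=(5, L))
print(one_hot(new_seqs) @ W_repr @ w_head)
```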
ABSTRACT
Facilitating axon regeneration in the injured central nervous system remains a challenging task. RAF-MAP2K signaling plays a key role in axon elongation during nervous system development. Here, we show that conditional expression of a constitutively kinase-activated BRAF in mature corticospinal neurons elicited the expression of a set of transcription factors previously implicated in the regeneration of zebrafish retinal ganglion cell axons and promoted regeneration and sprouting of corticospinal tract (CST) axons after spinal cord injury in mice. Newly sprouting axon collaterals formed synaptic connections with spinal interneurons, resulting in improved recovery of motor function. Noninvasive suprathreshold high-frequency repetitive transcranial magnetic stimulation (HF-rTMS) activated the BRAF canonical downstream effectors MAP2K1/2 and modulated the expression of a set of regeneration-related transcription factors in a pattern consistent with that induced by BRAF activation. HF-rTMS enabled CST axon regeneration and sprouting, which was abolished in MAP2K1/2 conditional null mice. These data collectively demonstrate a central role of MAP2K signaling in augmenting the growth capacity of mature corticospinal neurons and suggest that HF-rTMS might have potential for treating spinal cord injury by modulating MAP2K signaling.
Subjects
Axons, Spinal Cord Injuries, Animals, Mice, Axons/physiology, Genetic Engineering, Nerve Regeneration/physiology, Proto-Oncogene Proteins B-raf/metabolism, Pyramidal Tracts/metabolism, Recovery of Function/physiology, Spinal Cord Injuries/genetics, Spinal Cord Injuries/therapy, Spinal Cord Injuries/metabolism, Transcranial Magnetic Stimulation, Transcription Factors/metabolism, Zebrafish
ABSTRACT
Machine learning (ML) is revolutionizing our ability to understand and predict the complex relationships between protein sequence, structure, and function. Predictive sequence-function models are enabling protein engineers to efficiently search sequence space for useful proteins, with broad applications in biotechnology. In this review, we highlight recent advances in applying ML to protein engineering. We discuss supervised learning methods that infer the sequence-function mapping from experimental data and new sequence representation strategies for data-efficient modeling. We then describe the various ways in which ML can be incorporated into protein engineering workflows, including purely in silico searches, ML-assisted directed evolution, and generative models that learn the underlying distribution of functional proteins in sequence space. ML-driven protein engineering will become increasingly powerful with continued advances in high-throughput data generation, data science, and deep learning.
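To make the ML-assisted directed evolution idea concrete, the sketch below alternates between fitting a simple sequence-function model on measured variants and using it to choose the next batch to test. The "assay" is a toy additive oracle and every name is hypothetical, standing in for real laboratory measurements.

```python
# Minimal sketch (hypothetical): one flavor of ML-assisted directed evolution --
# fit a model on everything measured so far, propose mutants, pick the
# top-predicted candidates, "measure" them, and repeat.
import numpy as np

rng = np.random.default_rng(3)
L, A = 12, 20

true_w = rng.normal(size=(L, A))                      # hidden toy landscape (additive)
def assay(seqs):                                      # stand-in for the lab measurement
    return true_w[np.arange(L), seqs].sum(axis=1)

def one_hot(seqs):
    X = np.zeros((len(seqs), L * A))
    X[np.arange(len(seqs))[:, None], np.arange(L) * A + seqs] = 1.0
    return X

measured = rng.integers(0, A, size=(50, L))           # initial random library
labels = assay(measured)

for rnd in range(4):
    # Fit a simple ridge-style model to everything measured so far.
    X = one_hot(measured)
    w = np.linalg.solve(X.T @ X + np.eye(L * A), X.T @ labels)
    # Propose single mutants of the current best sequence and rank them by prediction.
    best = measured[np.argmax(labels)]
    proposals = np.tile(best, (200, 1))
    pos = rng.integers(0, L, size=200)
    proposals[np.arange(200), pos] = rng.integers(0, A, size=200)
    picks = proposals[np.argsort(-(one_hot(proposals) @ w))[:10]]
    # "Measure" the picks and grow the training set for the next round.
    measured = np.vstack([measured, picks])
    labels = np.concatenate([labels, assay(picks)])
    print(f"round {rnd}: best fitness so far = {labels.max():.2f}")
```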