Results 1 - 3 of 3

1.
Nat Chem Biol ; 16(4): 387-390, 2020 04.
Article in English | MEDLINE | ID: mdl-31873222

ABSTRACT

Here, we report a rapid CRISPR-Cas9-mediated gene knock-in strategy that uses Cas9 ribonucleoprotein and 5'-modified double-stranded DNA donors with 50-base-pair homology arms, achieving unprecedented knock-in rates of 65% and 40% for 0.7- and 2.5-kilobase inserts, respectively, in human embryonic kidney 293T cells. The identified 5'-end modification led to up to a fivefold increase in gene knock-in rates at various genomic loci in human cancer and stem cells.


Subjects

Gene Knock-In Techniques/methods, 5' Flanking Region/genetics, CRISPR-Cas Systems/genetics, Clustered Regularly Interspaced Short Palindromic Repeats/genetics, DNA/genetics, Genome/genetics, HEK293 Cells, Humans, RNA, Guide, Kinetoplastida/genetics, Sequence Homology, Nucleic Acid
2.
Nat Chem Biol ; 16(4): 479, 2020 04.
Article in English | MEDLINE | ID: mdl-31942048

ABSTRACT

An amendment to this paper has been published and can be accessed via a link at the top of the paper.

3.
Protein Eng Des Sel ; 36, 2023 Jan 21.
Article in English | MEDLINE | ID: mdl-37883472

ABSTRACT

Self-supervised pretraining on protein sequences has led to state-of-the-art performance on protein function and fitness prediction. However, sequence-only methods ignore the rich information contained in experimental and predicted protein structures. Meanwhile, inverse folding methods reconstruct a protein's amino-acid sequence given its structure, but do not take advantage of sequences that lack known structures. In this study, we train a masked inverse folding protein masked language model parameterized as a structured graph neural network. During pretraining, this model learns to reconstruct corrupted sequences conditioned on the backbone structure. We then show that using the outputs of a pretrained sequence-only protein masked language model as input to the inverse folding model further improves pretraining perplexity. We evaluate both of these models on downstream protein engineering tasks and analyze the effect of using information from experimental or predicted structures on performance.
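The corruption step of masked language model pretraining described above can be illustrated with a minimal sketch. This is not the paper's implementation: the mask token, mask rate, and function names are illustrative choices, and the structure-conditioned graph neural network that would consume the corrupted sequence is omitted.

```python
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
MASK = "#"  # illustrative mask token, not the paper's vocabulary

def corrupt_sequence(seq, mask_rate=0.15, rng=None):
    """BERT-style corruption of an amino-acid sequence.

    Each residue is independently replaced by a mask token with
    probability `mask_rate`. A model pretrained this way learns to
    reconstruct the original residues at the masked positions (in the
    paper, conditioned on the backbone structure, which is not
    modeled here).

    Returns the corrupted sequence and a dict mapping each masked
    position to the original residue (the reconstruction targets).
    """
    rng = rng or random.Random(0)
    corrupted = list(seq)
    targets = {}
    for i, aa in enumerate(seq):
        if rng.random() < mask_rate:
            targets[i] = aa
            corrupted[i] = MASK
    return "".join(corrupted), targets
```

During pretraining, the loss is computed only at the masked positions, so filling the targets back into the corrupted string always recovers the original sequence.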


Subjects

Protein Engineering, Protein Folding, Amino Acid Sequence