Results 1 - 4 of 4
1.
Cell; 141(2): 355-67, 2010 Apr 16.
Article in English | MEDLINE | ID: mdl-20403329

ABSTRACT

The genetic code is degenerate. Each amino acid is encoded by up to six synonymous codons; the choice between these codons influences gene expression. Here, we show that in coding sequences, once a particular codon has been used, subsequent occurrences of the same amino acid do not use codons randomly, but favor codons that use the same tRNA. The effect is pronounced in rapidly induced genes, involves both frequent and rare codons, and diminishes only slowly as a function of the distance between subsequent synonymous codons. Furthermore, we found that in S. cerevisiae, codon correlation accelerates translation relative to the translation of synonymous yet anticorrelated sequences. The data suggest that tRNA diffusion away from the ribosome is slower than translation, and that some tRNA channeling takes place at the ribosome. They also establish that the dynamics of translation leave a significant signature at the level of the genome.
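
As a concrete illustration of the effect described above (not the authors' analysis pipeline), the minimal Python sketch below scans a single in-frame coding sequence and reports how often a reappearing amino acid reuses the exact codon of its previous occurrence. Pairing each occurrence only with the immediately preceding one, and counting identical codons rather than codons read by the same tRNA, are simplifying assumptions.

from itertools import product

# Standard genetic code, enumerated with the first base varying slowest.
BASES = "TCAG"
AMINO_ACIDS = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODON_TO_AA = {"".join(c): aa for c, aa in zip(product(BASES, repeat=3), AMINO_ACIDS)}

def codon_reuse_fraction(cds):
    """Fraction of amino-acid reoccurrences that reuse the previous codon.

    cds: in-frame coding DNA string (length a multiple of 3). A value well
    above a shuffled-synonymous-codon baseline would indicate the kind of
    autocorrelation reported in the abstract.
    """
    codons = [cds[i:i + 3] for i in range(0, len(cds) - len(cds) % 3, 3)]
    last_codon = {}        # amino acid -> codon used at its previous occurrence
    reused = total = 0
    for codon in codons:
        aa = CODON_TO_AA.get(codon)
        if aa is None or aa == "*":
            continue
        if aa in last_codon:
            total += 1
            reused += (codon == last_codon[aa])
        last_codon[aa] = codon
    return reused / total if total else float("nan")

# Toy usage: the two leucines use the same codon, so the fraction is 1.0.
print(codon_reuse_fraction("ATGCTGAAACTGTAA"))

Averaging this fraction over many genes and comparing it against a baseline with shuffled synonymous codons would be the natural step toward the genome-level signature the abstract reports.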


Subjects
Codon/metabolism, Protein Biosynthesis, Transfer RNA/metabolism, Saccharomyces cerevisiae/genetics, Amino Acids/metabolism, Messenger RNA/metabolism, Ribosomes/metabolism, Saccharomyces cerevisiae/cytology, Saccharomyces cerevisiae/metabolism
2.
IEEE Trans Neural Netw; 15(4): 828-37, 2004 Jul.
Article in English | MEDLINE | ID: mdl-15461076

ABSTRACT

This paper derives a family of differential learning rules that optimize the Shannon entropy at the output of an adaptive system via kernel density estimation. In contrast to parametric formulations of entropy, this nonparametric approach assumes no particular functional form of the output density. We address problems associated with quantized data and finite sample size, and implement efficient maximum likelihood techniques for optimizing the regularizer. We also develop a normalized entropy estimate that is invariant with respect to affine transformations, facilitating optimization of the shape, rather than the scale, of the output density. Kernel density estimates are smooth and differentiable; this makes the derived entropy estimates amenable to manipulation by gradient descent. The resulting weight updates are surprisingly simple and efficient learning rules that operate on pairs of input samples. They can be tuned for data-limited or memory-limited situations, or modified to give a fully online implementation.
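
The central quantity here, a kernel (Parzen-window) density estimate plugged into the Shannon entropy, is compact enough to sketch. The Python sketch below is a generic reconstruction rather than the paper's formulation: it assumes one-dimensional outputs and a fixed Gaussian kernel width sigma (the paper instead tunes this regularizer by maximum likelihood), and it returns the entropy estimate together with its gradient with respect to the samples, which a chain rule through the adaptive system would turn into weight updates.

import numpy as np

def parzen_entropy_and_grad(y, sigma=0.25):
    """Resubstitution entropy estimate H = -(1/n) sum_i log p(y_i), where p is
    a Gaussian kernel density estimate built from the samples y, plus the
    gradient of H with respect to each sample."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    d = y[:, None] - y[None, :]                         # pairwise differences
    k = np.exp(-d**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
    p = k.mean(axis=1)                                  # density at each y_i
    h = -np.log(p).mean()                               # entropy estimate
    dk = -d / sigma**2 * k                              # K'(y_i - y_j)
    # dH/dy_k = -(1/n) * [ sum_j K'(y_k - y_j) / (n p_k)
    #                      - sum_i K'(y_i - y_k) / (n p_i) ]
    grad = -(dk.sum(axis=1) / (n * p) - (dk / (n * p[:, None])).sum(axis=0)) / n
    return h, grad

# Usage: for a standard normal the analytic entropy is 0.5*log(2*pi*e) ~ 1.42,
# and the estimate should land nearby. Because the estimate is smooth in the
# samples, the gradient can drive entropy minimization or maximization.
rng = np.random.default_rng(0)
h, grad = parzen_entropy_and_grad(rng.normal(size=200))
print(h, grad.shape)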


Subjects
Artificial Intelligence, Decision Support Techniques, Information Storage and Retrieval/methods, Information Theory, Statistical Models, Neural Networks (Computer), Automated Pattern Recognition, Algorithms, Computer Simulation, Entropy, Hand/anatomy & histology, Humans, Probability Learning, Sample Size, Computer-Assisted Signal Processing
3.
Pac Symp Biocomput; 4-15, 2007.
Article in English | MEDLINE | ID: mdl-17992741

ABSTRACT

It is widely believed that comparing discrepancies in the protein-protein interaction (PPI) networks of individuals will become an important tool in understanding and preventing diseases. PPI networks for individuals are not yet available, but gene expression data are becoming easier to obtain and allow us to represent individuals by a co-integrated gene expression/protein interaction network. Two major problems hamper the application of graph kernels, the state-of-the-art methods for whole-graph comparison, to PPI networks. First, these methods do not scale to graphs of the size of a PPI network. Second, missing edges in these interaction networks are biologically relevant for detecting discrepancies, yet these methods do not take this into account. In this article we present graph kernels for biological network comparison that are fast to compute and take missing interactions into account. We evaluate their practical performance on two datasets of co-integrated gene expression/PPI networks.
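
To make the two ingredients named above concrete, here is a deliberately naive Python sketch: a truncated random-walk kernel on the direct-product graph, plus a composite variant that also compares complement graphs so that shared missing edges contribute to similarity. The Kronecker product is quadratic in the product-graph size, so this is precisely the kind of computation the paper replaces with faster schemes; it is not the paper's algorithm, and the function names, step count, and decay factor lam are illustrative choices.

import numpy as np

def walk_kernel(a1, a2, steps=3, lam=0.1):
    """Counts common walks of length 1..steps in the direct-product graph of
    two 0/1 adjacency matrices, down-weighting length-k walks by lam**k."""
    ax = np.kron(a1, a2)                    # adjacency of the product graph
    value, power = 0.0, np.eye(ax.shape[0])
    for k in range(1, steps + 1):
        power = power @ ax
        value += (lam ** k) * power.sum()
    return value

def composite_kernel(a1, a2, **kw):
    """Also compares the complement graphs, so that interactions missing from
    both networks count toward similarity as well."""
    c1 = 1 - a1 - np.eye(len(a1))           # complements without self-loops
    c2 = 1 - a2 - np.eye(len(a2))
    return walk_kernel(a1, a2, **kw) + walk_kernel(c1, c2, **kw)

# Toy usage: a 3-node path versus a 3-node triangle.
path = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])
triangle = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])
print(walk_kernel(path, triangle), composite_kernel(path, triangle))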


Subjects
Protein Interaction Mapping/statistics & numerical data, Computational Biology, Genetic Databases, Disease Progression, Gene Expression Profiling/statistics & numerical data, Humans, Prognosis, Protein Array Analysis/statistics & numerical data
4.
Neural Comput; 14(7): 1723-38, 2002 Jul.
Article in English | MEDLINE | ID: mdl-12079553

ABSTRACT

We propose a generic method for iteratively approximating various second-order gradient steps (Newton, Gauss-Newton, Levenberg-Marquardt, and natural gradient) in linear time per iteration, using special curvature matrix-vector products that can be computed in O(n). Two recent acceleration techniques for on-line learning, matrix momentum and stochastic meta-descent (SMD), implement this approach. Since both were originally derived by very different routes, this common view offers fresh insight into their operation and leads to further improvements to SMD.
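
The cost argument behind this approach is that a curvature matrix-vector product C*v can be obtained for roughly the price of a gradient evaluation, i.e. in O(n), without ever forming C. The paper derives special O(n) procedures for these products; the Python sketch below substitutes a plain finite-difference approximation of a Hessian-vector product, purely to illustrate the interface and the cost, on a toy function whose exact Hessian is available for comparison.

import numpy as np

def rosenbrock_grad(w):
    """Gradient of the 2-D Rosenbrock function f(x, y) = (1-x)^2 + 100(y-x^2)^2."""
    x, y = w
    return np.array([-2 * (1 - x) - 400 * x * (y - x**2),
                     200 * (y - x**2)])

def hessian_vector_product(grad, w, v, eps=1e-5):
    """Approximate H(w) @ v with two gradient calls (central differences),
    matching the O(n) per-product cost the abstract exploits."""
    return (grad(w + eps * v) - grad(w - eps * v)) / (2 * eps)

w = np.array([1.2, 1.0])
v = np.array([1.0, -0.5])
print(hessian_vector_product(rosenbrock_grad, w, v))

# Exact Hessian of the Rosenbrock function at w, for comparison.
x, y = w
H = np.array([[2 - 400 * (y - 3 * x**2), -400 * x],
              [-400 * x,                  200.0]])
print(H @ v)

Fed into an iterative solver such as conjugate gradient, a handful of such products per step approximates a Newton or Gauss-Newton direction without ever paying the quadratic cost of building the curvature matrix.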


Subjects
Algorithms, Neural Networks (Computer), Acceleration, Stochastic Processes