1.
Cell; 141(2): 355-67, 2010 Apr 16.
Article in English | MEDLINE | ID: mdl-20403329

ABSTRACT

The genetic code is degenerate: each amino acid is encoded by up to six synonymous codons, and the choice between these codons influences gene expression. Here, we show that in coding sequences, once a particular codon has been used, subsequent occurrences of the same amino acid do not use codons randomly but favor codons that are decoded by the same tRNA. The effect is pronounced in rapidly induced genes, involves both frequent and rare codons, and diminishes only slowly as a function of the distance between subsequent synonymous codons. Furthermore, we found that in S. cerevisiae, codon correlation accelerates translation relative to that of synonymous yet anticorrelated sequences. The data suggest that tRNA diffusion away from the ribosome is slower than translation and that some tRNA channeling takes place at the ribosome. They also establish that the dynamics of translation leave a significant signature at the level of the genome.


Subject(s)
Codon/metabolism; Protein Biosynthesis; RNA, Transfer/metabolism; Saccharomyces cerevisiae/genetics; Amino Acids/metabolism; RNA, Messenger/metabolism; Ribosomes/metabolism; Saccharomyces cerevisiae/cytology; Saccharomyces cerevisiae/metabolism
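As an illustration of the correlation this abstract describes, a toy analysis might count how often a repeated amino acid reuses its previous codon (the simplest case of "same tRNA"). The code below is a hypothetical sketch with a deliberately tiny codon-table fragment, not the paper's analysis pipeline:

```python
# Toy sketch: within a coding sequence, what fraction of repeated amino
# acids reuse the codon last used for that amino acid? Random usage would
# give a lower rate than the correlated usage reported in the paper.

# Tiny illustrative codon-table fragment (Leu and Ala codons only).
SYN = {"CTT": "L", "CTC": "L", "CTA": "L", "CTG": "L", "TTA": "L", "TTG": "L",
       "GCT": "A", "GCC": "A", "GCA": "A", "GCG": "A"}

def codon_reuse_rate(cds):
    """Fraction of synonymous-codon pairs in which the same codon recurs."""
    codons = [cds[i:i + 3] for i in range(0, len(cds) - 2, 3)]
    last = {}            # amino acid -> codon last used for it
    same = total = 0
    for c in codons:
        aa = SYN.get(c)
        if aa is None:
            continue     # skip codons outside the toy table
        if aa in last:
            total += 1
            same += (c == last[aa])
        last[aa] = c
    return same / total if total else 0.0

# Codons: CTT GCT CTT GCC CTT -> Leu reused twice, Ala not reused.
print(codon_reuse_rate("CTTGCTCTTGCCCTT"))  # -> 2/3
```

A real analysis would of course use the full codon table and group codons by decoding tRNA rather than by identity.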
2.
Pac Symp Biocomput; 4-15, 2007.
Article in English | MEDLINE | ID: mdl-17992741

ABSTRACT

It is widely believed that comparing discrepancies in the protein-protein interaction (PPI) networks of individuals will become an important tool in understanding and preventing diseases. PPI networks for individuals are not yet available, but gene expression data are becoming easier to obtain, allowing us to represent an individual by a co-integrated gene expression/protein interaction network. Two major problems hamper the application of graph kernels, the state-of-the-art methods for whole-graph comparison, to PPI networks. First, these methods do not scale to graphs the size of a PPI network. Second, missing edges in these interaction networks are biologically relevant for detecting discrepancies, yet these methods do not take them into account. In this article, we present graph kernels for biological network comparison that are fast to compute and take missing interactions into account. We evaluate their practical performance on two datasets of co-integrated gene expression/PPI networks.


Subject(s)
Protein Interaction Mapping/statistics & numerical data; Computational Biology; Databases, Genetic; Disease Progression; Gene Expression Profiling/statistics & numerical data; Humans; Prognosis; Protein Array Analysis/statistics & numerical data
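One simple way to make a graph kernel sensitive to missing edges is to add a base kernel evaluated on the complement graphs, so that shared *absent* edges contribute alongside shared present ones. The sketch below is a hypothetical construction in this spirit, not the paper's exact kernels:

```python
import itertools

def edge_set(edges):
    """Normalize an edge list on labeled nodes to a set of unordered pairs."""
    return {frozenset(e) for e in edges}

def complement(n, edges):
    """Edges absent from a graph on nodes 0..n-1."""
    all_pairs = {frozenset(p) for p in itertools.combinations(range(n), 2)}
    return all_pairs - edge_set(edges)

def shared_edges(e1, e2):
    """Base kernel: number of edges present in both graphs."""
    return len(edge_set(e1) & edge_set(e2))

def composite_kernel(n, e1, e2):
    """Shared present edges plus shared absent edges."""
    return shared_edges(e1, e2) + len(complement(n, e1) & complement(n, e2))

g1 = [(0, 1), (1, 2)]   # path 0-1-2
g2 = [(0, 1), (0, 2)]   # path 1-0-2
print(composite_kernel(3, g1, g2))  # -> 1: edge {0,1} shared, no shared gaps
```

This assumes the two networks share a node labeling (as co-integrated networks for different individuals would); richer base kernels, e.g. over walks, follow the same composite pattern.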
3.
IEEE Trans Neural Netw; 15(4): 828-37, 2004 Jul.
Article in English | MEDLINE | ID: mdl-15461076

ABSTRACT

This paper derives a family of differential learning rules that optimize the Shannon entropy at the output of an adaptive system via kernel density estimation. In contrast to parametric formulations of entropy, this nonparametric approach assumes no particular functional form of the output density. We address problems associated with quantized data and finite sample size, and implement efficient maximum likelihood techniques for optimizing the regularizer. We also develop a normalized entropy estimate that is invariant with respect to affine transformations, facilitating optimization of the shape, rather than the scale, of the output density. Kernel density estimates are smooth and differentiable; this makes the derived entropy estimates amenable to manipulation by gradient descent. The resulting weight updates are surprisingly simple and efficient learning rules that operate on pairs of input samples. They can be tuned for data-limited or memory-limited situations, or modified to give a fully online implementation.


Subject(s)
Artificial Intelligence; Decision Support Techniques; Information Storage and Retrieval/methods; Information Theory; Models, Statistical; Neural Networks, Computer; Pattern Recognition, Automated; Algorithms; Computer Simulation; Entropy; Hand/anatomy & histology; Humans; Probability Learning; Sample Size; Signal Processing, Computer-Assisted
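The nonparametric estimate described above can be sketched as a plug-in entropy of a Gaussian kernel density estimate over the system's outputs. This is an assumed form for illustration; the paper's exact estimator, regularizer, and learning rules are not reproduced here:

```python
import math

def gaussian_kernel(u, h):
    """Gaussian kernel with bandwidth h."""
    return math.exp(-0.5 * (u / h) ** 2) / (h * math.sqrt(2 * math.pi))

def kde_entropy(samples, h=0.5):
    """Plug-in estimate: H = -(1/N) * sum_i log((1/N) * sum_j K(y_i - y_j)).

    Smooth and differentiable in the samples, so it can be driven by
    gradient descent on the weights of an adaptive system.
    """
    n = len(samples)
    total = 0.0
    for yi in samples:
        density = sum(gaussian_kernel(yi - yj, h) for yj in samples) / n
        total += math.log(density)
    return -total / n

# Wider output spread -> lower estimated density -> higher estimated entropy.
narrow = [0.0, 0.1, -0.1, 0.05]
wide = [0.0, 2.0, -2.0, 1.0]
print(kde_entropy(narrow) < kde_entropy(wide))  # -> True
```

Note the double sum over sample pairs, which is why the resulting weight updates operate on pairs of input samples.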
4.
Neural Comput; 14(7): 1723-38, 2002 Jul.
Article in English | MEDLINE | ID: mdl-12079553

ABSTRACT

We propose a generic method for iteratively approximating various second-order gradient steps (Newton, Gauss-Newton, Levenberg-Marquardt, and natural gradient) in linear time per iteration, using special curvature matrix-vector products that can be computed in O(n). Two recent acceleration techniques for online learning, matrix momentum and stochastic meta-descent (SMD), implement this approach. Since both were originally derived by very different routes, this unified view offers fresh insight into their operation and leads to further improvements to SMD.


Subject(s)
Algorithms; Neural Networks, Computer; Acceleration; Stochastic Processes
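The key idea, a curvature matrix-vector product that never builds the matrix, can be sketched with a finite-difference of gradients: Hv ≈ (g(w + εv) − g(w)) / ε, which costs two gradient evaluations, i.e. O(n). This is an illustrative approximation, not the exact product rules the paper derives:

```python
def grad(w):
    """Gradient of the toy objective f(w) = 0.5*(3*w0^2 + w1^2),
    whose Hessian is diag(3, 1)."""
    return [3.0 * w[0], 1.0 * w[1]]

def hessian_vector_product(grad_fn, w, v, eps=1e-6):
    """Approximate Hv via a directional finite difference of the gradient.

    Never forms the n x n Hessian: two O(n) gradient calls suffice.
    """
    g0 = grad_fn(w)
    g1 = grad_fn([wi + eps * vi for wi, vi in zip(w, v)])
    return [(a - b) / eps for a, b in zip(g1, g0)]

w = [1.0, 2.0]
v = [1.0, 1.0]
print(hessian_vector_product(grad, w, v))  # ~ [3.0, 1.0] = diag(3, 1) @ v
```

Exact variants (forward-mode differentiation through the gradient) give the same product without the ε truncation error, and Gauss-Newton or Fisher products follow the same matrix-free pattern.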