Results 1 - 20 of 86
1.
Cell ; 177(4): 999-1009.e10, 2019 05 02.
Article in English | MEDLINE | ID: mdl-31051108

ABSTRACT

What specific features should visual neurons encode, given the infinity of real-world images and the limited number of neurons available to represent them? We investigated neuronal selectivity in monkey inferotemporal cortex via the vast hypothesis space of a generative deep neural network, avoiding assumptions about features or semantic categories. A genetic algorithm searched this space for stimuli that maximized neuronal firing. This led to the evolution of rich synthetic images of objects with complex combinations of shapes, colors, and textures, sometimes resembling animals or familiar people, other times revealing novel patterns that did not map to any clear semantic category. These results expand our conception of the dictionary of features encoded in the cortex, and the approach can potentially reveal the internal representations of any system whose input can be captured by a generative model.
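The closed-loop evolution described above can be sketched in a few lines. This is a minimal, self-contained toy, not the paper's implementation: `generate` and `neuron_response` are illustrative stand-ins for the generative network and the recorded neuron, and all parameters are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

def generate(code):
    # stand-in for the generative network: latent code -> "image"
    return np.tanh(code)

def neuron_response(image, preferred):
    # stand-in for a recorded neuron: toy tuning via a dot product
    return float(image @ preferred)

def evolve(preferred, dim=64, pop=32, gens=50, sigma=0.5):
    """Genetic search over latent codes that maximizes the response."""
    codes = rng.normal(size=(pop, dim))
    for _ in range(gens):
        scores = np.array([neuron_response(generate(c), preferred) for c in codes])
        top = codes[np.argsort(scores)[-pop // 4:]]           # selection
        parents = top[rng.integers(len(top), size=(pop, 2))]  # recombination
        codes = parents.mean(axis=1) + rng.normal(scale=sigma, size=(pop, dim))
    scores = np.array([neuron_response(generate(c), preferred) for c in codes])
    return codes[np.argmax(scores)], scores.max()

preferred = rng.normal(size=64)
best_code, best_score = evolve(preferred)
# compare against the average response to random stimuli
baseline = np.mean([neuron_response(generate(rng.normal(size=64)), preferred)
                    for _ in range(100)])
```

The same loop works for any black-box scoring function, which is the point of the approach: the search makes no assumptions about features or semantic categories.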


Subject(s)
Nerve Net/physiology, Temporal Lobe/physiology, Visual Perception/physiology, Algorithms, Animals, Cerebral Cortex/physiology, Macaca mulatta/physiology, Male, Neurons/metabolism, Neurons/physiology
2.
Proc Natl Acad Sci U S A ; 119(16): e2118705119, 2022 04 19.
Article in English | MEDLINE | ID: mdl-35377737

ABSTRACT

The primate inferior temporal cortex contains neurons that respond more strongly to faces than to other objects. Termed "face neurons," these neurons are thought to be selective for faces as a semantic category. However, face neurons also partly respond to clocks, fruits, and single eyes, raising the question of whether face neurons are better described as selective for visual features related to faces but dissociable from them. We used a recently described algorithm, XDream, to evolve stimuli that strongly activated face neurons. XDream leverages a generative neural network that is not limited to realistic objects. Human participants assessed images evolved for face neurons and for nonface neurons and natural images depicting faces, cars, fruits, etc. Evolved images were consistently judged to be distinct from real faces. Images evolved for face neurons were rated as slightly more similar to faces than images evolved for nonface neurons. There was a correlation among natural images between face neuron activity and subjective "faceness" ratings, but this relationship did not hold for face neuron-evolved images, which triggered high activity but were rated low in faceness. Our results suggest that so-called face neurons are better described as tuned to visual features rather than semantic categories.


Subject(s)
Neurons, Visual Cortex, Algorithms, Face, Humans, Neurons/physiology, Semantics, Visual Cortex/cytology, Visual Cortex/physiology
3.
PLoS Comput Biol ; 18(11): e1010654, 2022 11.
Article in English | MEDLINE | ID: mdl-36413523

ABSTRACT

Primates constantly explore their surroundings via saccadic eye movements that bring different parts of an image into high resolution. In addition to exploring new regions in the visual field, primates also make frequent return fixations, revisiting previously foveated locations. We systematically studied a total of 44,328 return fixations out of 217,440 fixations. Return fixations were ubiquitous across different behavioral tasks, in monkeys and humans, both when subjects viewed static images and when subjects performed natural behaviors. Return fixation locations were consistent across subjects, tended to occur within short temporal offsets, and typically followed a 180-degree turn in saccadic direction. To understand the origin of return fixations, we propose a proof-of-principle, biologically inspired, and image-computable neural network model. The model combines five key modules: an image feature extractor, bottom-up saliency cues, task-relevant visual features, finite inhibition-of-return, and saccade size constraints. Even though there are no free parameters that are fine-tuned for each specific task, species, or condition, the model produces fixation sequences resembling the universal properties of return fixations. These results provide initial steps towards a mechanistic understanding of the trade-off between rapid foveal recognition and the need to scrutinize previous fixation locations.
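The interaction between a priority map and finite inhibition-of-return can be illustrated with a toy scanpath generator. This is a hedged sketch, not the paper's five-module network: the random `priority` map stands in for the feature-extractor and saliency modules, and a decaying `inhibition` term implements finite inhibition-of-return, which is what allows previously visited locations to become attractive again as return fixations.

```python
import numpy as np

rng = np.random.default_rng(1)

# toy stand-in for the combined bottom-up/top-down priority map
priority = rng.random((8, 8))

def scanpath(priority, n_fix=20, ior_strength=1.0, ior_decay=0.7):
    """Greedy fixation sequence with decaying (finite) inhibition-of-return."""
    inhibition = np.zeros_like(priority)
    fixations = []
    for _ in range(n_fix):
        loc = np.unravel_index(np.argmax(priority - inhibition), priority.shape)
        fixations.append(loc)
        inhibition *= ior_decay          # inhibition fades over time
        inhibition[loc] += ior_strength  # suppress the current location
    return fixations

fixes = scanpath(priority)
# count fixations that revisit an earlier location (return fixations)
returns = sum(f in fixes[:i] for i, f in enumerate(fixes))
```

With permanent inhibition (`ior_decay=1.0` and large `ior_strength`) the same loop never revisits a location; the finiteness of inhibition-of-return is the illustrative point.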


Subject(s)
Ocular Fixation, Saccades, Animals, Humans, Visual Fields, Primates, Cues
4.
PLoS Comput Biol ; 16(6): e1007973, 2020 06.
Article in English | MEDLINE | ID: mdl-32542056

ABSTRACT

A longstanding question in sensory neuroscience is what types of stimuli drive neurons to fire. The characterization of effective stimuli has traditionally been based on a combination of intuition, insights from previous studies, and luck. A new method termed XDream (EXtending DeepDream with real-time evolution for activation maximization) combined a generative neural network and a genetic algorithm in a closed loop to create strong stimuli for neurons in the macaque visual cortex. Here we extensively and systematically evaluate the performance of XDream. We use ConvNet units as in silico models of neurons, enabling experiments that would be prohibitive with biological neurons. We evaluated how the method compares to brute-force search, and how well the method generalizes to different neurons and processing stages. We also explored design and parameter choices. XDream can efficiently find preferred features for visual units without any prior knowledge about them. XDream extrapolates to different layers, architectures, and developmental regimes, performing better than brute-force search, and often better than exhaustive sampling of >1 million images. Furthermore, XDream is robust to choices of multiple image generators, optimization algorithms, and hyperparameters, suggesting that its performance is locally near-optimal. Lastly, we found no significant advantage to problem-specific parameter tuning. These results establish expectations and provide practical recommendations for using XDream to investigate neural coding in biological preparations. Overall, XDream is an efficient, general, and robust algorithm for uncovering neuronal tuning preferences using a vast and diverse stimulus space. XDream is implemented in Python, released under the MIT License, and works on Linux, Windows, and MacOS.


Subject(s)
Neurons/physiology, Photic Stimulation, Visual Cortex/physiology, Algorithms, Animals, Neural Networks (Computer), Visual Perception/physiology
5.
Proc Natl Acad Sci U S A ; 115(35): 8835-8840, 2018 08 28.
Article in English | MEDLINE | ID: mdl-30104363

ABSTRACT

Making inferences from partial information constitutes a critical aspect of cognition. During visual perception, pattern completion enables recognition of poorly visible or occluded objects. We combined psychophysics, physiology, and computational models to test the hypothesis that pattern completion is implemented by recurrent computations and present three pieces of evidence that are consistent with this hypothesis. First, subjects robustly recognized objects even when they were rendered <15% visible, but recognition was largely impaired when processing was interrupted by backward masking. Second, invasive physiological responses along the human ventral cortex exhibited visually selective responses to partially visible objects that were delayed compared with whole objects, suggesting the need for additional computations. These physiological delays were correlated with the effects of backward masking. Third, state-of-the-art feed-forward computational architectures were not robust to partial visibility. However, recognition performance was recovered when the model was augmented with attractor-based recurrent connectivity. The recurrent model was able to predict which images of heavily occluded objects were easier or harder for humans to recognize, could capture the effect of introducing a backward mask on recognition behavior, and was consistent with the physiological delays along the human ventral visual stream. These results provide a strong argument of plausibility for the role of recurrent computations in making visual inferences from partial information.
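As an illustration of how attractor-based recurrent connectivity supports pattern completion, a Hopfield-style toy network (an analogy, not the paper's augmented recognition model) recovers a stored binary pattern from a mostly occluded cue:

```python
import numpy as np

rng = np.random.default_rng(3)
n_units, n_patterns = 100, 3
patterns = rng.choice([-1, 1], size=(n_patterns, n_units))

# Hebbian outer-product weights, no self-connections
W = sum(np.outer(p, p) for p in patterns) / n_units
np.fill_diagonal(W, 0)

cue = patterns[0].astype(float)
cue[30:] = 1.0  # "occlude" 70% of the units: only 30% of the pattern is visible

state = cue
for _ in range(10):  # recurrent settling toward the nearest attractor
    state = np.sign(W @ state)
    state[state == 0] = 1

overlap = (state == patterns[0]).mean()  # fraction of correctly completed units
```

Feed-forward readout of the raw cue would misclassify most occluded units; the recurrent updates pull the state into the stored attractor, which is the qualitative behavior the abstract attributes to the recurrent model.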


Subject(s)
Computer Simulation, Neurological Models, Visual Pattern Recognition/physiology, Adolescent, Adult, Female, Humans, Male
6.
Cereb Cortex ; 29(11): 4551-4567, 2019 12 17.
Article in English | MEDLINE | ID: mdl-30590542

ABSTRACT

Rapid and flexible learning during behavioral choices is critical to our daily endeavors and constitutes a hallmark of dynamic reasoning. An important paradigm to examine flexible behavior involves learning new arbitrary associations mapping visual inputs to motor outputs. We conjectured that visuomotor rules are instantiated by translating visual signals into actions through dynamic interactions between visual, frontal and motor cortex. We evaluated the neural representation of such visuomotor rules by performing intracranial field potential recordings in epilepsy subjects during a rule-learning delayed match-to-behavior task. Learning new visuomotor mappings led to the emergence of specific responses associating visual signals with motor outputs in 3 anatomical clusters in frontal, anteroventral temporal and posterior parietal cortex. After learning, mapping selective signals during the delay period showed interactions with visual and motor signals. These observations provide initial steps towards elucidating the dynamic circuits underlying flexible behavior and how communication between subregions of frontal, temporal, and parietal cortex leads to rapid learning of task-relevant choices.


Subject(s)
Association Learning/physiology, Brain/physiology, Neurons/physiology, Psychomotor Performance/physiology, Adolescent, Adult, Child, Female, Frontal Lobe/physiology, Humans, Male, Middle Aged, Motor Activity, Neural Pathways/physiology, Parietal Lobe/physiology, Photic Stimulation, Temporal Lobe/physiology, Visual Perception/physiology, Young Adult
7.
Neuroimage ; 180(Pt A): 147-159, 2018 10 15.
Article in English | MEDLINE | ID: mdl-28823828

ABSTRACT

The majority of visual recognition studies have focused on the neural responses to repeated presentations of static stimuli with abrupt and well-defined onset and offset times. In contrast, natural vision involves unique renderings of visual inputs that are continuously changing without explicitly defined temporal transitions. Here we considered commercial movies as a coarse proxy to natural vision. We recorded intracranial field potential signals from 1,284 electrodes implanted in 15 patients with epilepsy while the subjects passively viewed commercial movies. We could rapidly detect large changes in the visual inputs within approximately 100 ms of their occurrence, using exclusively field potential signals from ventral visual cortical areas including the inferior temporal gyrus and inferior occipital gyrus. Furthermore, we could decode the content of those visual changes even in a single movie presentation, generalizing across the wide range of transformations present in a movie. These results present a methodological framework for studying cognition during dynamic and natural vision.


Subject(s)
Visual Cortex/physiology, Visual Perception/physiology, Adolescent, Adult, Brain Mapping/methods, Child, Preschool Child, Drug Resistant Epilepsy/therapy, Electric Stimulation Therapy, Implanted Electrodes, Visual Evoked Potentials/physiology, Female, Humans, Male, Motion Pictures, Photic Stimulation, Computer-Assisted Signal Processing, Young Adult
8.
Nucleic Acids Res ; 44(10): e97, 2016 06 02.
Article in English | MEDLINE | ID: mdl-26980280

ABSTRACT

The ability to integrate 'omics' (i.e. transcriptomics and proteomics) is becoming increasingly important to the understanding of regulatory mechanisms. There are currently no tools available to identify differentially expressed genes (DEGs) across different 'omics' data types or multi-dimensional data including time courses. We present fCI (f-divergence Cut-out Index), a model capable of simultaneously identifying DEGs from continuous and discrete transcriptomic, proteomic and integrated proteogenomic data. We show that fCI can be used across multiple diverse sets of data and can unambiguously find genes that show functional modulation, developmental changes or misregulation. Applying fCI to several proteogenomics datasets, we identified a number of important genes that showed distinctive regulation patterns. The package fCI is available at R Bioconductor and http://software.steenlab.org/fCI/.


Subject(s)
Computational Biology/methods, Gene Expression Profiling/methods, Gene Expression, Proteomics/methods, Algorithms, RNA Sequence Analysis, Tandem Mass Spectrometry
10.
Cereb Cortex ; 26(7): 3064-82, 2016 07.
Article in English | MEDLINE | ID: mdl-26092221

ABSTRACT

When searching for an object in a scene, how does the brain decide where to look next? Visual search theories suggest the existence of a global "priority map" that integrates bottom-up visual information with top-down, target-specific signals. We propose a mechanistic model of visual search that is consistent with recent neurophysiological evidence, can localize targets in cluttered images, and predicts single-trial behavior in a search task. This model posits that a high-level retinotopic area selective for shape features receives global, target-specific modulation and implements local normalization through divisive inhibition. The normalization step is critical to prevent highly salient bottom-up features from monopolizing attention. The resulting activity pattern constitutes a priority map that tracks the correlation between local input and target features. The maximum of this priority map is selected as the locus of attention. The visual input is then spatially enhanced around the selected location, allowing object-selective visual areas to determine whether the target is present at this location. This model can localize objects both in array images and when objects are pasted in natural scenes. The model can also predict single-trial human fixations, including those in error and target-absent trials, in a search task involving complex objects.
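The target-modulation and divisive-normalization steps described above can be sketched as follows. All names and parameters are illustrative stand-ins, not the paper's model: `features` plays the role of the shape-selective retinotopic responses, `target` the top-down template, and a local pooled average implements the divisive inhibition that keeps salient features from monopolizing the map.

```python
import numpy as np

def priority_map(features, target, sigma=1e-3):
    """features: (H, W, C) local responses; target: (C,) target template."""
    modulated = features @ target  # top-down, target-specific modulation
    H, W = modulated.shape
    pooled = np.zeros_like(modulated)
    for i in range(H):
        for j in range(W):
            # pool activity in a 3x3 neighborhood for divisive inhibition
            pooled[i, j] = modulated[max(i - 1, 0):i + 2,
                                     max(j - 1, 0):j + 2].mean()
    return modulated / (sigma + pooled)  # normalized priority

rng = np.random.default_rng(2)
features = rng.random((16, 16, 8))
target = rng.random(8)
pmap = priority_map(features, target)
focus = np.unravel_index(np.argmax(pmap), pmap.shape)  # locus of attention
```

Without the division by `pooled`, a single high-contrast region would dominate `pmap` regardless of its similarity to the target, which is the failure mode the normalization step is meant to prevent.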


Subject(s)
Attention, Computer Simulation, Ocular Fixation, Visual Perception, Adolescent, Adult, Attention/physiology, Brain/physiology, Female, Ocular Fixation/physiology, Humans, Male, Neurological Models, Psychophysics, Recognition (Psychology)/physiology, Visual Perception/physiology, Young Adult
11.
Nature ; 465(7295): 182-7, 2010 May 13.
Article in English | MEDLINE | ID: mdl-20393465

ABSTRACT

We used genome-wide sequencing methods to study stimulus-dependent enhancer function in mouse cortical neurons. We identified approximately 12,000 neuronal activity-regulated enhancers that are bound by the general transcriptional co-activator CBP in an activity-dependent manner. A function of CBP at enhancers may be to recruit RNA polymerase II (RNAPII), as we also observed activity-regulated RNAPII binding to thousands of enhancers. Notably, RNAPII at enhancers transcribes bi-directionally a novel class of enhancer RNAs (eRNAs) within enhancer domains defined by the presence of histone H3 monomethylated at lysine 4. The level of eRNA expression at neuronal enhancers positively correlates with the level of messenger RNA synthesis at nearby genes, suggesting that eRNA synthesis occurs specifically at enhancers that are actively engaged in promoting mRNA synthesis. These findings reveal that a widespread mechanism of enhancer activation involves RNAPII binding and eRNA synthesis.


Subject(s)
Genetic Enhancer Elements/genetics, Gene Expression Regulation/genetics, Neurons/metabolism, Genetic Transcription/genetics, Animals, Basic Helix-Loop-Helix Transcription Factors/genetics, CREB-Binding Protein/metabolism, Consensus Sequence/genetics, Cytoskeletal Proteins/genetics, Reporter Genes, fos Genes/genetics, Histones/metabolism, Methylation, Mice, Inbred C57BL Mice, Nerve Tissue Proteins/genetics, RNA Polymerase II/metabolism, Untranslated RNA/biosynthesis, Untranslated RNA/genetics
12.
J Neurosci ; 34(8): 3042-55, 2014 Feb 19.
Article in English | MEDLINE | ID: mdl-24553944

ABSTRACT

Sensory signals must be interpreted in the context of goals and tasks. To detect a target in an image, the brain compares input signals and goals to elicit the correct behavior. We examined how target detection modulates visual recognition signals by recording intracranial field potential responses from 776 electrodes in 10 epileptic human subjects. We observed reliable differences in the physiological responses to stimuli when a cued target was present versus absent. Goal-related modulation was particularly strong in the inferior temporal and fusiform gyri, two areas important for object recognition. Target modulation started after 250 ms post stimulus, considerably after the onset of visual recognition signals. While broadband signals exhibited increased or decreased power, gamma frequency power showed predominantly increases during target presence. These observations support models where task goals interact with sensory inputs via top-down signals that influence the highest echelons of visual processing after the onset of selective responses.


Subject(s)
Brain/physiology, Psychomotor Performance/physiology, Recognition (Psychology)/physiology, Visual Perception/physiology, Adolescent, Adult, Attention/physiology, Child, Statistical Data Interpretation, Implanted Electrodes, Electroencephalography, Epilepsy/psychology, Eye Movements/physiology, Female, Goals, Humans, Male, Middle Aged, Photic Stimulation, Visual Cortex/physiology, Young Adult
13.
J Neurophysiol ; 113(5): 1656-69, 2015 Mar 01.
Article in English | MEDLINE | ID: mdl-25429116

ABSTRACT

Visual recognition takes a small fraction of a second and relies on the cascade of signals along the ventral visual stream. Given the rapid path through multiple processing steps between photoreceptors and higher visual areas, information must progress from stage to stage very quickly. This rapid progression of information suggests that fine temporal details of the neural response may be important to the brain's encoding of visual signals. We investigated how changes in the relative timing of incoming visual stimulation affect the representation of object information by recording intracranial field potentials along the human ventral visual stream while subjects recognized objects whose parts were presented with varying asynchrony. Visual responses along the ventral stream were sensitive to timing differences as small as 17 ms between parts. In particular, there was a strong dependency on the temporal order of stimulus presentation, even at short asynchronies. From these observations we infer that the neural representation of complex information in visual cortex can be modulated by rapid dynamics on scales of tens of milliseconds.


Subject(s)
Visual Evoked Potentials, Visual Pattern Recognition, Reaction Time, Visual Cortex/physiology, Female, Humans, Male
14.
Nucleic Acids Res ; 40(16): 7858-69, 2012 Sep.
Article in English | MEDLINE | ID: mdl-22684627

ABSTRACT

More than 98% of a typical vertebrate genome does not code for proteins. Although non-coding regions are sprinkled with short (<200 bp) islands of evolutionarily conserved sequences, the function of most of these unannotated conserved islands remains unknown. One possibility is that unannotated conserved islands could encode non-coding RNAs (ncRNAs); alternatively, unannotated conserved islands could serve as promoter-distal regulatory factor binding sites (RFBSs) like enhancers. Here we assess these possibilities by comparing unannotated conserved islands in the human and mouse genomes to transcribed regions and to RFBSs, relying on a detailed case study of one human and one mouse cell type. We define transcribed regions by applying a novel transcript-calling algorithm to RNA-Seq data obtained from total cellular RNA, and we define RFBSs using ChIP-Seq and DNase-hypersensitivity assays. We find that unannotated conserved islands are four times more likely to coincide with RFBSs than with unannotated ncRNAs. Thousands of conserved RFBSs can be categorized as insulators based on the presence of CTCF or as enhancers based on the presence of p300/CBP and H3K4me1. While many unannotated conserved RFBSs are transcriptionally active to some extent, the transcripts produced tend to be unspliced, non-polyadenylated and expressed at levels 10- to 100-fold lower than annotated coding or ncRNAs. Extending these findings across multiple cell types and tissues, we propose that most conserved non-coding genomic DNA in vertebrate genomes corresponds to promoter-distal regulatory elements.
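The comparison of unannotated conserved islands against RFBSs and ncRNAs reduces, at its core, to counting interval overlaps between sets of genomic annotations. A minimal sketch with made-up coordinates (the functions and numbers are illustrative, not the paper's pipeline):

```python
def overlaps(a, b):
    """True if two half-open genomic intervals (start, end) overlap."""
    return a[0] < b[1] and b[0] < a[1]

def fraction_coinciding(islands, annotations):
    """Fraction of islands overlapping at least one annotation."""
    hits = sum(any(overlaps(i, a) for a in annotations) for i in islands)
    return hits / len(islands)

# toy coordinates on a single chromosome
islands = [(100, 250), (400, 480), (900, 1050), (2000, 2100)]
rfbs    = [(240, 300), (410, 430), (5000, 5100)]
ncrnas  = [(1000, 1200)]

frac_rfbs = fraction_coinciding(islands, rfbs)     # 2 of 4 islands
frac_ncrna = fraction_coinciding(islands, ncrnas)  # 1 of 4 islands
```

At genome scale one would use an interval tree or a tool like bedtools rather than this quadratic scan, but the quantity being computed is the same.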


Subject(s)
Conserved Sequence, Transcriptional Regulatory Elements, Animals, Base Sequence, Binding Sites, DNA/chemistry, Genome, HeLa Cells, Humans, Mice, Genetic Promoter Regions, Untranslated RNA/genetics, Genetic Transcription
15.
J Vis ; 14(5): 7, 2014 May 12.
Article in English | MEDLINE | ID: mdl-24819738

ABSTRACT

Humans can recognize objects and scenes in a small fraction of a second. The cascade of signals underlying rapid recognition might be disrupted by temporally jittering different parts of complex objects. Here we investigated the time course over which shape information can be integrated to allow for recognition of complex objects. We presented fragments of object images in an asynchronous fashion and behaviorally evaluated categorization performance. We observed that visual recognition was significantly disrupted by asynchronies of approximately 30 ms, suggesting that spatiotemporal integration begins to break down with even small deviations from simultaneity. However, moderate temporal asynchrony did not completely obliterate recognition; in fact, integration of visual shape information persisted even with an asynchrony of 100 ms. We describe the data with a concise model based on the dynamic reduction of uncertainty about what image was presented. These results emphasize the importance of timing in visual processing and provide strong constraints for the development of dynamical models of visual shape recognition.


Subject(s)
Form Perception/physiology, Visual Pattern Recognition/physiology, Adult, Female, Humans, Male, Psychophysics, Time Factors, Ocular Vision/physiology, Visual Pathways, Young Adult
16.
Nat Neurosci ; 27(6): 1157-1166, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38684892

ABSTRACT

In natural vision, primates actively move their eyes several times per second via saccades. It remains unclear whether, during this active looking, visual neurons exhibit classical retinotopic properties, anticipate gaze shifts or mirror the stable quality of perception, especially in complex natural scenes. Here, we let 13 monkeys freely view thousands of natural images across 4.6 million fixations, recorded 883 h of neuronal responses in six areas spanning primary visual to anterior inferior temporal cortex and analyzed spatial, temporal and featural selectivity in these responses. Face neurons tracked their receptive field contents, indicated by category-selective responses. Self-consistency analysis showed that general feature-selective responses also followed eye movements and remained gaze-dependent over seconds of viewing the same image. Computational models of feature-selective responses located retinotopic receptive fields during free viewing. We found limited evidence for feature-selective predictive remapping and no viewing-history integration. Thus, ventral visual neurons represent the world in a predominantly eye-centered reference frame during natural vision.


Subject(s)
Eye Movements, Macaca mulatta, Neurons, Visual Cortex, Animals, Visual Cortex/physiology, Eye Movements/physiology, Neurons/physiology, Male, Photic Stimulation/methods, Visual Perception/physiology, Ocular Fixation/physiology, Saccades/physiology, Ocular Vision/physiology, Female
17.
bioRxiv ; 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38854011

ABSTRACT

During natural vision, we rarely see objects in isolation but rather embedded in rich and complex contexts. Understanding how the brain recognizes objects in natural scenes by integrating contextual information remains a key challenge. To elucidate neural mechanisms compatible with human visual processing, we need an animal model that behaves similarly to humans, so that inferred neural mechanisms can provide hypotheses relevant to the human brain. Here we assessed whether rhesus macaques could model human context-driven object recognition by quantifying visual object identification abilities across variations in the amount, quality, and congruency of contextual cues. Behavioral metrics revealed strikingly similar context-dependent patterns between humans and monkeys. However, neural responses in the inferior temporal (IT) cortex of monkeys that were never explicitly trained to discriminate objects in context, as well as current artificial neural network models, could only partially explain this cross-species correspondence. The shared behavioral variance unexplained by context-naive neural data or computational models highlights fundamental knowledge gaps. Our findings demonstrate an intriguing alignment of human and monkey visual object processing that defies full explanation by either brain activity in a key visual region or state-of-the-art models.

18.
bioRxiv ; 2024 May 22.
Article in English | MEDLINE | ID: mdl-38826332

ABSTRACT

We show that neural networks can implement reward-seeking behavior using only local predictive updates and internal noise. These networks are capable of autonomous interaction with an environment and can switch between explore and exploit behavior, which we show is governed by attractor dynamics. Networks can adapt to changes in their architectures, environments, or motor interfaces without any external control signals. When networks have a choice between different tasks, they can form preferences that depend on patterns of noise and initialization, and we show that these preferences can be biased by network architectures or by changing learning rates. Our algorithm presents a flexible, biologically plausible way of interacting with environments without requiring an explicit environmental reward function, allowing for behavior that is both highly adaptable and autonomous. Code is available at https://github.com/ccli3896/PaN.

19.
ArXiv ; 2024 Jun 20.
Article in English | MEDLINE | ID: mdl-38947929

ABSTRACT

We use (multi)modal deep neural networks (DNNs) to probe for sites of multimodal integration in the human brain by predicting stereoencephalography (SEEG) recordings taken while human subjects watched movies. We operationalize sites of multimodal integration as regions where a multimodal vision-language model predicts recordings better than unimodal language, unimodal vision, or linearly-integrated language-vision models. Our target DNN models span different architectures (e.g., convolutional networks and transformers) and multimodal training techniques (e.g., cross-attention and contrastive learning). As a key enabling step, we first demonstrate that trained vision and language models systematically outperform their randomly initialized counterparts in their ability to predict SEEG signals. We then compare unimodal and multimodal models against one another. Because our target DNN models often have different architectures, numbers of parameters, and training sets (possibly obscuring those differences attributable to integration), we carry out a controlled comparison of two models (SLIP and SimCLR), which keep all of these attributes the same aside from input modality. Using this approach, we identify a sizable number of neural sites (on average 141 out of 1090 total sites or 12.94%) and brain regions where multimodal integration seems to occur. Additionally, we find that among the variants of multimodal training techniques we assess, CLIP-style training is the best suited for downstream prediction of the neural activity in these sites.

20.
PLoS Comput Biol ; 8(11): e1002747, 2012.
Article in English | MEDLINE | ID: mdl-23133354

ABSTRACT

Eukaryotic genes are typically split into exons that need to be spliced together to form the mature mRNA. The splicing process depends on the dynamics and interactions among transcription by the RNA polymerase II complex (RNAPII) and the spliceosomal complex consisting of multiple small nuclear ribonucleoproteins (snRNPs). Here we propose a biophysically plausible initial theory of splicing that aims to explain the effects of the stochastic dynamics of snRNPs on the splicing patterns of eukaryotic genes. We consider two different ways to model the dynamics of snRNPs: pure three-dimensional diffusion and a combination of three- and one-dimensional diffusion along the emerging pre-mRNA. Our theoretical analysis shows that there exists an optimum position of the splice sites on the growing pre-mRNA at which the time required for snRNPs to find the 5' donor site is minimized. The minimization of the overall search time is achieved mainly via the increase in non-specific interactions between the snRNPs and the growing pre-mRNA. The theory further predicts that there exists an optimum transcript length that maximizes the probabilities for exons to interact with the snRNPs. We evaluate these theoretical predictions by considering human and mouse exon microarray data as well as RNA-seq data from multiple different tissues. We observe that there is a broad optimum position of splice sites on the growing pre-mRNA and an optimum transcript length, which are roughly consistent with the theoretical predictions. The theoretical and experimental analyses suggest that there is a strong interaction between the dynamics of RNAPII and the stochastic nature of snRNP search for 5' donor splicing sites.


Subject(s)
Computational Biology/methods, Genetic Models, RNA Splice Sites, Genetic Transcription, Animals, Computer Simulation, Gene Expression Profiling, Humans, Introns, Mice, Oligonucleotide Array Sequence Analysis, RNA Precursors/genetics, RNA Splicing, Reproducibility of Results, Small Nuclear Ribonucleoproteins/genetics, Stochastic Processes