Results 1 - 20 of 87
1.
Cell ; 177(4): 999-1009.e10, 2019 05 02.
Article in English | MEDLINE | ID: mdl-31051108

ABSTRACT

What specific features should visual neurons encode, given the infinity of real-world images and the limited number of neurons available to represent them? We investigated neuronal selectivity in monkey inferotemporal cortex via the vast hypothesis space of a generative deep neural network, avoiding assumptions about features or semantic categories. A genetic algorithm searched this space for stimuli that maximized neuronal firing. This led to the evolution of rich synthetic images of objects with complex combinations of shapes, colors, and textures, sometimes resembling animals or familiar people, other times revealing novel patterns that did not map to any clear semantic category. These results expand our conception of the dictionary of features encoded in the cortex, and the approach can potentially reveal the internal representations of any system whose input can be captured by a generative model.


Subjects
Nerve Net/physiology, Temporal Lobe/physiology, Visual Perception/physiology, Algorithms, Animals, Cerebral Cortex/physiology, Macaca mulatta/physiology, Male, Neurons/metabolism, Neurons/physiology
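The closed-loop search this abstract describes — a genetic algorithm proposing latent codes for an image generator, scored by a neuron's firing — can be sketched in miniature. Everything below (the toy `generator`, the simulated `neuron_response`, population size, mutation scale) is an illustrative stand-in, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def generator(z):
    # Stand-in for a generative image network: latent code -> "image" vector.
    return np.tanh(z)

def neuron_response(img, preferred):
    # Stand-in for a recorded neuron, tuned to a preferred feature vector.
    return -float(np.sum((img - preferred) ** 2))

def evolve(preferred, dim=16, pop=32, gens=50, sigma=0.3):
    """Genetic algorithm over latent codes: score the population,
    keep the best half as parents, mutate them to form children."""
    codes = rng.standard_normal((pop, dim))
    history = []
    for _ in range(gens):
        scores = np.array([neuron_response(generator(z), preferred) for z in codes])
        history.append(scores.max())
        parents = codes[np.argsort(scores)[-pop // 2:]]      # elitist selection
        children = parents + sigma * rng.standard_normal(parents.shape)
        codes = np.concatenate([parents, children])
    final = [neuron_response(generator(z), preferred) for z in codes]
    return codes[int(np.argmax(final))], history

preferred_image = generator(rng.standard_normal(16))
best_code, history = evolve(preferred_image)
```

In the actual experiment the generator is a deep generative network and the score is the firing rate of a macaque inferotemporal neuron; the tuned toy neuron here simply lets the loop run end to end.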
2.
Proc Natl Acad Sci U S A ; 119(16): e2118705119, 2022 04 19.
Article in English | MEDLINE | ID: mdl-35377737

ABSTRACT

The primate inferior temporal cortex contains neurons that respond more strongly to faces than to other objects. Termed "face neurons," these neurons are thought to be selective for faces as a semantic category. However, face neurons also partly respond to clocks, fruits, and single eyes, raising the question of whether face neurons are better described as selective for visual features related to faces but dissociable from them. We used a recently described algorithm, XDream, to evolve stimuli that strongly activated face neurons. XDream leverages a generative neural network that is not limited to realistic objects. Human participants assessed images evolved for face neurons and for nonface neurons, as well as natural images depicting faces, cars, fruits, etc. Evolved images were consistently judged to be distinct from real faces. Images evolved for face neurons were rated as slightly more similar to faces than images evolved for nonface neurons. There was a correlation among natural images between face neuron activity and subjective "faceness" ratings, but this relationship did not hold for face neuron-evolved images, which triggered high activity but were rated low in faceness. Our results suggest that so-called face neurons are better described as tuned to visual features rather than semantic categories.


Subjects
Neurons, Visual Cortex, Algorithms, Face, Humans, Neurons/physiology, Semantics, Visual Cortex/cytology, Visual Cortex/physiology
3.
PLoS Comput Biol ; 18(11): e1010654, 2022 11.
Article in English | MEDLINE | ID: mdl-36413523

ABSTRACT

Primates constantly explore their surroundings via saccadic eye movements that bring different parts of an image into high resolution. In addition to exploring new regions in the visual field, primates also make frequent return fixations, revisiting previously foveated locations. We systematically studied a total of 44,328 return fixations out of 217,440 fixations. Return fixations were ubiquitous across different behavioral tasks, in monkeys and humans, both when subjects viewed static images and when subjects performed natural behaviors. Return fixation locations were consistent across subjects, tended to occur within short temporal offsets, and typically followed a 180-degree turn in saccadic direction. To understand the origin of return fixations, we propose a proof-of-principle, biologically inspired, image-computable neural network model. The model combines five key modules: an image feature extractor, bottom-up saliency cues, task-relevant visual features, finite inhibition-of-return, and saccade size constraints. Even though there are no free parameters that are fine-tuned for each specific task, species, or condition, the model produces fixation sequences resembling the universal properties of return fixations. These results provide initial steps towards a mechanistic understanding of the trade-off between rapid foveal recognition and the need to scrutinize previous fixation locations.


Subjects
Ocular Fixation, Saccades, Animals, Humans, Visual Fields, Primates, Cues (Psychology)
4.
PLoS Comput Biol ; 16(6): e1007973, 2020 06.
Article in English | MEDLINE | ID: mdl-32542056

ABSTRACT

A longstanding question in sensory neuroscience is what types of stimuli drive neurons to fire. The characterization of effective stimuli has traditionally been based on a combination of intuition, insights from previous studies, and luck. A new method termed XDream (EXtending DeepDream with real-time evolution for activation maximization) combined a generative neural network and a genetic algorithm in a closed loop to create strong stimuli for neurons in the macaque visual cortex. Here we extensively and systematically evaluate the performance of XDream. We use ConvNet units as in silico models of neurons, enabling experiments that would be prohibitive with biological neurons. We evaluated how the method compares to brute-force search, and how well the method generalizes to different neurons and processing stages. We also explored design and parameter choices. XDream can efficiently find preferred features for visual units without any prior knowledge about them. XDream extrapolates to different layers, architectures, and developmental regimes, performing better than brute-force search, and often better than exhaustive sampling of >1 million images. Furthermore, XDream is robust to choices of multiple image generators, optimization algorithms, and hyperparameters, suggesting that its performance is locally near-optimal. Lastly, we found no significant advantage to problem-specific parameter tuning. These results establish expectations and provide practical recommendations for using XDream to investigate neural coding in biological preparations. Overall, XDream is an efficient, general, and robust algorithm for uncovering neuronal tuning preferences using a vast and diverse stimulus space. XDream is implemented in Python, released under the MIT License, and works on Linux, Windows, and macOS.


Subjects
Neurons/physiology, Photic Stimulation, Visual Cortex/physiology, Algorithms, Animals, Neural Networks (Computer), Visual Perception/physiology
5.
Proc Natl Acad Sci U S A ; 115(35): 8835-8840, 2018 08 28.
Article in English | MEDLINE | ID: mdl-30104363

ABSTRACT

Making inferences from partial information constitutes a critical aspect of cognition. During visual perception, pattern completion enables recognition of poorly visible or occluded objects. We combined psychophysics, physiology, and computational models to test the hypothesis that pattern completion is implemented by recurrent computations and present three pieces of evidence that are consistent with this hypothesis. First, subjects robustly recognized objects even when they were rendered <15% visible, but recognition was largely impaired when processing was interrupted by backward masking. Second, invasive physiological responses along the human ventral cortex exhibited visually selective responses to partially visible objects that were delayed compared with whole objects, suggesting the need for additional computations. These physiological delays were correlated with the effects of backward masking. Third, state-of-the-art feed-forward computational architectures were not robust to partial visibility. However, recognition performance was recovered when the model was augmented with attractor-based recurrent connectivity. The recurrent model was able to predict which images of heavily occluded objects were easier or harder for humans to recognize, could capture the effect of introducing a backward mask on recognition behavior, and was consistent with the physiological delays along the human ventral visual stream. These results provide a strong argument of plausibility for the role of recurrent computations in making visual inferences from partial information.


Subjects
Computer Simulation, Neurological Models, Visual Pattern Recognition/physiology, Adolescent, Adult, Female, Humans, Male
6.
Cereb Cortex ; 29(11): 4551-4567, 2019 12 17.
Article in English | MEDLINE | ID: mdl-30590542

ABSTRACT

Rapid and flexible learning during behavioral choices is critical to our daily endeavors and constitutes a hallmark of dynamic reasoning. An important paradigm to examine flexible behavior involves learning new arbitrary associations mapping visual inputs to motor outputs. We conjectured that visuomotor rules are instantiated by translating visual signals into actions through dynamic interactions between visual, frontal and motor cortex. We evaluated the neural representation of such visuomotor rules by performing intracranial field potential recordings in epilepsy subjects during a rule-learning delayed match-to-behavior task. Learning new visuomotor mappings led to the emergence of specific responses associating visual signals with motor outputs in 3 anatomical clusters in frontal, anteroventral temporal and posterior parietal cortex. After learning, mapping selective signals during the delay period showed interactions with visual and motor signals. These observations provide initial steps towards elucidating the dynamic circuits underlying flexible behavior and how communication between subregions of frontal, temporal, and parietal cortex leads to rapid learning of task-relevant choices.


Subjects
Association Learning/physiology, Brain/physiology, Neurons/physiology, Psychomotor Performance/physiology, Adolescent, Adult, Child, Female, Frontal Lobe/physiology, Humans, Male, Middle Aged, Motor Activity, Neural Pathways/physiology, Parietal Lobe/physiology, Photic Stimulation, Temporal Lobe/physiology, Visual Perception/physiology, Young Adult
7.
Neuroimage ; 180(Pt A): 147-159, 2018 10 15.
Article in English | MEDLINE | ID: mdl-28823828

ABSTRACT

The majority of visual recognition studies have focused on the neural responses to repeated presentations of static stimuli with abrupt and well-defined onset and offset times. In contrast, natural vision involves unique renderings of visual inputs that are continuously changing without explicitly defined temporal transitions. Here we considered commercial movies as a coarse proxy to natural vision. We recorded intracranial field potential signals from 1,284 electrodes implanted in 15 patients with epilepsy while the subjects passively viewed commercial movies. We could rapidly detect large changes in the visual inputs within approximately 100 ms of their occurrence, using exclusively field potential signals from ventral visual cortical areas including the inferior temporal gyrus and inferior occipital gyrus. Furthermore, we could decode the content of those visual changes even in a single movie presentation, generalizing across the wide range of transformations present in a movie. These results present a methodological framework for studying cognition during dynamic and natural vision.


Subjects
Visual Cortex/physiology, Visual Perception/physiology, Adolescent, Adult, Brain Mapping/methods, Child, Preschool Child, Drug-Resistant Epilepsy/therapy, Electric Stimulation Therapy, Implanted Electrodes, Visual Evoked Potentials/physiology, Female, Humans, Male, Motion Pictures, Photic Stimulation, Computer-Assisted Signal Processing, Young Adult
8.
Nucleic Acids Res ; 44(10): e97, 2016 06 02.
Article in English | MEDLINE | ID: mdl-26980280

ABSTRACT

The ability to integrate 'omics' (i.e. transcriptomics and proteomics) is becoming increasingly important to the understanding of regulatory mechanisms. There are currently no tools available to identify differentially expressed genes (DEGs) across different 'omics' data types or multi-dimensional data including time courses. We present fCI (f-divergence Cut-out Index), a model capable of simultaneously identifying DEGs from continuous and discrete transcriptomic, proteomic and integrated proteogenomic data. We show that fCI can be used across multiple diverse sets of data and can unambiguously find genes that show functional modulation, developmental changes or misregulation. Applying fCI to several proteogenomics datasets, we identified a number of important genes that showed distinctive regulation patterns. The package fCI is available at R Bioconductor and http://software.steenlab.org/fCI/.


Subjects
Computational Biology/methods, Gene Expression Profiling/methods, Gene Expression, Proteomics/methods, Algorithms, RNA Sequence Analysis, Tandem Mass Spectrometry
10.
Cereb Cortex ; 26(7): 3064-82, 2016 07.
Article in English | MEDLINE | ID: mdl-26092221

ABSTRACT

When searching for an object in a scene, how does the brain decide where to look next? Visual search theories suggest the existence of a global "priority map" that integrates bottom-up visual information with top-down, target-specific signals. We propose a mechanistic model of visual search that is consistent with recent neurophysiological evidence, can localize targets in cluttered images, and predicts single-trial behavior in a search task. This model posits that a high-level retinotopic area selective for shape features receives global, target-specific modulation and implements local normalization through divisive inhibition. The normalization step is critical to prevent highly salient bottom-up features from monopolizing attention. The resulting activity pattern constitutes a priority map that tracks the correlation between local input and target features. The maximum of this priority map is selected as the locus of attention. The visual input is then spatially enhanced around the selected location, allowing object-selective visual areas to determine whether the target is present at this location. This model can localize objects both in array images and when objects are pasted in natural scenes. The model can also predict single-trial human fixations, including those in error and target-absent trials, in a search task involving complex objects.


Subjects
Attention, Computer Simulation, Ocular Fixation, Visual Perception, Adolescent, Adult, Attention/physiology, Brain/physiology, Female, Ocular Fixation/physiology, Humans, Male, Neurological Models, Psychophysics, Recognition (Psychology)/physiology, Visual Perception/physiology, Young Adult
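The key computation in the priority-map abstract above — target-specific drive divided by pooled local activity, so a highly salient distractor cannot monopolize attention — can be illustrated in a few lines. The feature map, target template, and all parameters below are invented for the sketch and are not the authors' model:

```python
import numpy as np

def priority_map(feature_map, target, eps=1e-6):
    """Toy priority map: top-down correlation of each location's features
    with a target template, divisively normalized by pooled feature energy
    so that raw bottom-up salience alone cannot win."""
    drive = feature_map @ target                   # (H, W) target correlation
    pooled = np.linalg.norm(feature_map, axis=-1)  # (H, W) local activity pool
    return drive / (pooled + eps)

rng = np.random.default_rng(1)
H, W, F = 8, 8, 4
feats = rng.random((H, W, F))
target = np.array([1.0, 0.0, 0.0, 0.0])
feats[5, 2] = target * 0.9                       # weak but target-like location
feats[1, 1] = np.array([5.0, 5.0, 5.0, 5.0])     # highly salient non-target

pm = priority_map(feats, target)
locus = np.unravel_index(np.argmax(pm), pm.shape)  # locus of attention
```

Without the division by `pooled`, the raw drive peaks at the salient non-target at (1, 1); with normalization, the map instead selects the target-like location at (5, 2).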
11.
Nature ; 465(7295): 182-7, 2010 May 13.
Article in English | MEDLINE | ID: mdl-20393465

ABSTRACT

We used genome-wide sequencing methods to study stimulus-dependent enhancer function in mouse cortical neurons. We identified approximately 12,000 neuronal activity-regulated enhancers that are bound by the general transcriptional co-activator CBP in an activity-dependent manner. A function of CBP at enhancers may be to recruit RNA polymerase II (RNAPII), as we also observed activity-regulated RNAPII binding to thousands of enhancers. Notably, RNAPII at enhancers transcribes bi-directionally a novel class of enhancer RNAs (eRNAs) within enhancer domains defined by the presence of histone H3 monomethylated at lysine 4. The level of eRNA expression at neuronal enhancers positively correlates with the level of messenger RNA synthesis at nearby genes, suggesting that eRNA synthesis occurs specifically at enhancers that are actively engaged in promoting mRNA synthesis. These findings reveal that a widespread mechanism of enhancer activation involves RNAPII binding and eRNA synthesis.


Subjects
Genetic Enhancer Elements/genetics, Gene Expression Regulation/genetics, Neurons/metabolism, Genetic Transcription/genetics, Animals, Basic Helix-Loop-Helix Transcription Factors/genetics, CREB-Binding Protein/metabolism, Consensus Sequence/genetics, Cytoskeletal Proteins/genetics, Reporter Genes, fos Genes/genetics, Histones/metabolism, Methylation, Mice, Inbred C57BL Mice, Nerve Tissue Proteins/genetics, RNA Polymerase II/metabolism, Untranslated RNA/biosynthesis, Untranslated RNA/genetics
12.
J Neurosci ; 34(8): 3042-55, 2014 Feb 19.
Article in English | MEDLINE | ID: mdl-24553944

ABSTRACT

Sensory signals must be interpreted in the context of goals and tasks. To detect a target in an image, the brain compares input signals and goals to elicit the correct behavior. We examined how target detection modulates visual recognition signals by recording intracranial field potential responses from 776 electrodes in 10 epileptic human subjects. We observed reliable differences in the physiological responses to stimuli when a cued target was present versus absent. Goal-related modulation was particularly strong in the inferior temporal and fusiform gyri, two areas important for object recognition. Target modulation started after 250 ms post stimulus, considerably after the onset of visual recognition signals. While broadband signals exhibited increased or decreased power, gamma frequency power showed predominantly increases during target presence. These observations support models where task goals interact with sensory inputs via top-down signals that influence the highest echelons of visual processing after the onset of selective responses.


Subjects
Brain/physiology, Psychomotor Performance/physiology, Recognition (Psychology)/physiology, Visual Perception/physiology, Adolescent, Adult, Attention/physiology, Child, Statistical Data Interpretation, Implanted Electrodes, Electroencephalography, Epilepsy/psychology, Eye Movements/physiology, Female, Goals, Humans, Male, Middle Aged, Photic Stimulation, Visual Cortex/physiology, Young Adult
13.
J Neurophysiol ; 113(5): 1656-69, 2015 Mar 01.
Article in English | MEDLINE | ID: mdl-25429116

ABSTRACT

Visual recognition takes a small fraction of a second and relies on the cascade of signals along the ventral visual stream. Given the rapid path through multiple processing steps between photoreceptors and higher visual areas, information must progress from stage to stage very quickly. This rapid progression of information suggests that fine temporal details of the neural response may be important to the brain's encoding of visual signals. We investigated how changes in the relative timing of incoming visual stimulation affect the representation of object information by recording intracranial field potentials along the human ventral visual stream while subjects recognized objects whose parts were presented with varying asynchrony. Visual responses along the ventral stream were sensitive to timing differences as small as 17 ms between parts. In particular, there was a strong dependency on the temporal order of stimulus presentation, even at short asynchronies. From these observations we infer that the neural representation of complex information in visual cortex can be modulated by rapid dynamics on scales of tens of milliseconds.


Subjects
Visual Evoked Potentials, Visual Pattern Recognition, Reaction Time, Visual Cortex/physiology, Female, Humans, Male
14.
Nucleic Acids Res ; 40(16): 7858-69, 2012 Sep.
Article in English | MEDLINE | ID: mdl-22684627

ABSTRACT

More than 98% of a typical vertebrate genome does not code for proteins. Although non-coding regions are sprinkled with short (<200 bp) islands of evolutionarily conserved sequences, the function of most of these unannotated conserved islands remains unknown. One possibility is that unannotated conserved islands could encode non-coding RNAs (ncRNAs); alternatively, unannotated conserved islands could serve as promoter-distal regulatory factor binding sites (RFBSs) like enhancers. Here we assess these possibilities by comparing unannotated conserved islands in the human and mouse genomes to transcribed regions and to RFBSs, relying on a detailed case study of one human and one mouse cell type. We define transcribed regions by applying a novel transcript-calling algorithm to RNA-Seq data obtained from total cellular RNA, and we define RFBSs using ChIP-Seq and DNase-hypersensitivity assays. We find that unannotated conserved islands are four times more likely to coincide with RFBSs than with unannotated ncRNAs. Thousands of conserved RFBSs can be categorized as insulators based on the presence of CTCF or as enhancers based on the presence of p300/CBP and H3K4me1. While many unannotated conserved RFBSs are transcriptionally active to some extent, the transcripts produced tend to be unspliced, non-polyadenylated and expressed at levels 10- to 100-fold lower than annotated coding or ncRNAs. Extending these findings across multiple cell types and tissues, we propose that most conserved non-coding genomic DNA in vertebrate genomes corresponds to promoter-distal regulatory elements.


Subjects
Conserved Sequence, Transcription Regulatory Elements, Animals, Base Sequence, Binding Sites, DNA/chemistry, Genome, HeLa Cells, Humans, Mice, Genetic Promoter Regions, Untranslated RNA/genetics, Genetic Transcription
15.
J Vis ; 14(5): 7, 2014 May 12.
Article in English | MEDLINE | ID: mdl-24819738

ABSTRACT

Humans can recognize objects and scenes in a small fraction of a second. The cascade of signals underlying rapid recognition might be disrupted by temporally jittering different parts of complex objects. Here we investigated the time course over which shape information can be integrated to allow for recognition of complex objects. We presented fragments of object images in an asynchronous fashion and behaviorally evaluated categorization performance. We observed that visual recognition was significantly disrupted by asynchronies of approximately 30 ms, suggesting that spatiotemporal integration begins to break down with even small deviations from simultaneity. However, moderate temporal asynchrony did not completely obliterate recognition; in fact, integration of visual shape information persisted even with an asynchrony of 100 ms. We describe the data with a concise model based on the dynamic reduction of uncertainty about what image was presented. These results emphasize the importance of timing in visual processing and provide strong constraints for the development of dynamical models of visual shape recognition.


Subjects
Form Perception/physiology, Visual Pattern Recognition/physiology, Adult, Female, Humans, Male, Psychophysics, Time Factors, Ocular Vision/physiology, Visual Pathways, Young Adult
16.
Nat Neurosci ; 27(6): 1157-1166, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38684892

ABSTRACT

In natural vision, primates actively move their eyes several times per second via saccades. It remains unclear whether, during this active looking, visual neurons exhibit classical retinotopic properties, anticipate gaze shifts or mirror the stable quality of perception, especially in complex natural scenes. Here, we let 13 monkeys freely view thousands of natural images across 4.6 million fixations, recorded 883 h of neuronal responses in six areas spanning primary visual to anterior inferior temporal cortex and analyzed spatial, temporal and featural selectivity in these responses. Face neurons tracked their receptive field contents, indicated by category-selective responses. Self-consistency analysis showed that general feature-selective responses also followed eye movements and remained gaze-dependent over seconds of viewing the same image. Computational models of feature-selective responses located retinotopic receptive fields during free viewing. We found limited evidence for feature-selective predictive remapping and no viewing-history integration. Thus, ventral visual neurons represent the world in a predominantly eye-centered reference frame during natural vision.


Subjects
Eye Movements, Macaca mulatta, Neurons, Visual Cortex, Animals, Visual Cortex/physiology, Eye Movements/physiology, Neurons/physiology, Male, Photic Stimulation/methods, Visual Perception/physiology, Ocular Fixation/physiology, Saccades/physiology, Ocular Vision/physiology, Female
17.
bioRxiv ; 2024 May 22.
Article in English | MEDLINE | ID: mdl-38826332

ABSTRACT

We show that neural networks can implement reward-seeking behavior using only local predictive updates and internal noise. These networks are capable of autonomous interaction with an environment and can switch between explore and exploit behavior, which we show is governed by attractor dynamics. Networks can adapt to changes in their architectures, environments, or motor interfaces without any external control signals. When networks have a choice between different tasks, they can form preferences that depend on patterns of noise and initialization, and we show that these preferences can be biased by network architectures or by changing learning rates. Our algorithm presents a flexible, biologically plausible way of interacting with environments without requiring an explicit environmental reward function, allowing for behavior that is both highly adaptable and autonomous. Code is available at https://github.com/ccli3896/PaN.

18.
bioRxiv ; 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38854011

ABSTRACT

During natural vision, we rarely see objects in isolation but rather embedded in rich and complex contexts. Understanding how the brain recognizes objects in natural scenes by integrating contextual information remains a key challenge. To elucidate neural mechanisms compatible with human visual processing, we need an animal model that behaves similarly to humans, so that inferred neural mechanisms can provide hypotheses relevant to the human brain. Here we assessed whether rhesus macaques could model human context-driven object recognition by quantifying visual object identification abilities across variations in the amount, quality, and congruency of contextual cues. Behavioral metrics revealed strikingly similar context-dependent patterns between humans and monkeys. However, neural responses in the inferior temporal (IT) cortex of monkeys that were never explicitly trained to discriminate objects in context, as well as current artificial neural network models, could only partially explain this cross-species correspondence. The shared behavioral variance unexplained by context-naive neural data or computational models highlights fundamental knowledge gaps. Our findings demonstrate an intriguing alignment of human and monkey visual object processing that defies full explanation by either brain activity in a key visual region or state-of-the-art models.

19.
ArXiv ; 2024 Jun 20.
Article in English | MEDLINE | ID: mdl-38947929

ABSTRACT

We use (multi)modal deep neural networks (DNNs) to probe for sites of multimodal integration in the human brain by predicting stereoencephalography (SEEG) recordings taken while human subjects watched movies. We operationalize sites of multimodal integration as regions where a multimodal vision-language model predicts recordings better than unimodal language, unimodal vision, or linearly-integrated language-vision models. Our target DNN models span different architectures (e.g., convolutional networks and transformers) and multimodal training techniques (e.g., cross-attention and contrastive learning). As a key enabling step, we first demonstrate that trained vision and language models systematically outperform their randomly initialized counterparts in their ability to predict SEEG signals. We then compare unimodal and multimodal models against one another. Because our target DNN models often have different architectures, numbers of parameters, and training sets (possibly obscuring those differences attributable to integration), we carry out a controlled comparison of two models (SLIP and SimCLR), which keep all of these attributes the same aside from input modality. Using this approach, we identify a sizable number of neural sites (on average 141 out of 1090 total sites, or 12.94%) and brain regions where multimodal integration seems to occur. Additionally, we find that among the variants of multimodal training techniques we assess, CLIP-style training is the best suited for downstream prediction of the neural activity in these sites.
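The operational criterion in this abstract — a site counts as multimodal when a multimodal model predicts its recordings better than unimodal or linearly combined models — reduces to comparing held-out encoding scores per site. The sketch below uses ridge regression and synthetic features standing in for DNN activations and SEEG responses; the feature construction, split, and regularizer are assumptions for illustration, not the paper's pipeline:

```python
import numpy as np

def encoding_score(X, y, alpha=1.0):
    """Held-out correlation of a ridge regression from model features X
    to one site's response y, using a simple first-half/second-half split."""
    n = len(y) // 2
    Xtr, Xte, ytr, yte = X[:n], X[n:], y[:n], y[n:]
    w = np.linalg.solve(Xtr.T @ Xtr + alpha * np.eye(X.shape[1]), Xtr.T @ ytr)
    return float(np.corrcoef(Xte @ w, yte)[0, 1])

rng = np.random.default_rng(2)
T, D = 400, 8
vision = rng.standard_normal((T, D))
language = rng.standard_normal((T, D))
multimodal = vision * language  # nonlinear interaction of the two streams
# Synthetic "multimodal" site: driven by the interaction term plus noise.
y = multimodal @ rng.standard_normal(D) + 0.5 * rng.standard_normal(T)

scores = {name: encoding_score(X, y) for name, X in [
    ("vision", vision), ("language", language),
    ("linear-combo", np.hstack([vision, language])), ("multimodal", multimodal)]}
is_multimodal_site = scores["multimodal"] > max(
    scores["vision"], scores["language"], scores["linear-combo"])
```

Because the synthetic response depends on the vision-language interaction, only the multimodal feature set predicts it well, so the site is flagged as multimodal — the same logic the paper applies per SEEG electrode.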

20.
Nat Hum Behav ; 2024 Oct 03.
Article in English | MEDLINE | ID: mdl-39363119

ABSTRACT

Associating different aspects of experience with discrete events is critical for human memory. A potential mechanism for linking memory components is phase precession, during which neurons fire progressively earlier in time relative to theta oscillations. However, no direct link between phase precession and memory has been established. Here we recorded single-neuron activity and local field potentials in the human medial temporal lobe while participants (n = 22) encoded and retrieved memories of movie clips. Bouts of theta and phase precession occurred following cognitive boundaries during movie watching and following stimulus onsets during memory retrieval. Phase precession was dynamic, with different neurons exhibiting precession in different task periods. Phase precession strength provided information about memory encoding and retrieval success that was complementary with firing rates. These data provide direct neural evidence for a functional role of phase precession in human episodic memory.
