1.
PLoS Comput Biol; 17(2): e1008558, 2021 Feb.
Article in English | MEDLINE | ID: mdl-33539366

ABSTRACT

Neural information flow (NIF) provides a novel approach for system identification in neuroscience. It models the neural computations in multiple brain regions and can be trained end-to-end via stochastic gradient descent from noninvasive data. NIF models represent neural information processing via a network of coupled tensors, each encoding the representation of the sensory input contained in a brain region. The elements of these tensors can be interpreted as cortical columns whose activity encodes the presence of a specific feature in a spatiotemporal location. Each tensor is coupled to the measured data specific to a brain region via low-rank observation models that can be decomposed into the spatial, temporal and feature receptive fields of a localized neuronal population. Both these observation models and the convolutional weights defining the information processing within regions are learned end-to-end by predicting the neural signal during sensory stimulation. We trained a NIF model on the activity of early visual areas using a large-scale fMRI dataset recorded in a single participant. We show that we can recover plausible visual representations and population receptive fields that are consistent with empirical findings.


Subjects
Brain Mapping/methods; Brain/physiology; Magnetic Resonance Imaging/methods; Models, Neurological; Visual Cortex/physiology; Visual Perception/physiology; Adult; Cognition; Humans; Learning; Male; Neurons/physiology; Photic Stimulation; Semantics; Stochastic Processes; Television; Vision, Ocular
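A minimal sketch of the low-rank observation model described in the abstract above, assuming PyTorch and illustrative tensor shapes: each voxel reads out a region's activity tensor through separable feature, spatial, and temporal receptive fields. The class, variable names, and dimensions are hypothetical and not taken from the paper.

```python
# Hypothetical sketch: per-voxel low-rank readout of a region's activity
# tensor (time x features x height x width) via separable receptive fields.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LowRankReadout(nn.Module):
    def __init__(self, n_features, n_voxels, height, width, hrf_len=16):
        super().__init__()
        # One feature, spatial, and temporal receptive field per voxel.
        self.feature_rf = nn.Parameter(0.01 * torch.randn(n_voxels, n_features))
        self.spatial_rf = nn.Parameter(0.01 * torch.randn(n_voxels, height * width))
        self.temporal_rf = nn.Parameter(0.01 * torch.randn(n_voxels, 1, hrf_len))

    def forward(self, region):
        # region: (time, features, height, width) activity of one model area.
        t, f, h, w = region.shape
        x = region.reshape(t, f, h * w)
        # Contract feature and spatial axes with the per-voxel receptive fields.
        drive = torch.einsum('tfs,vf,vs->vt', x, self.feature_rf, self.spatial_rf)
        # Causal per-voxel temporal convolution as a crude haemodynamic filter.
        pad = self.temporal_rf.shape[-1] - 1
        bold = F.conv1d(F.pad(drive.unsqueeze(0), (pad, 0)),
                        self.temporal_rf, groups=drive.shape[0])
        return bold.squeeze(0).transpose(0, 1)  # (time, voxels)


readout = LowRankReadout(n_features=32, n_voxels=500, height=16, width=16)
predicted_bold = readout(torch.randn(100, 32, 16, 16))  # 100 timepoints
```

In the full model such readouts would be trained jointly with the convolutional weights of the coupled regions by minimizing the error in predicting the measured signal during stimulation.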
2.
Neuroimage; 181: 775-785, 2018 Nov 01.
Article in English | MEDLINE | ID: mdl-30031932

ABSTRACT

We explore a method for reconstructing visual stimuli from brain activity. Using large databases of natural images, we trained a deep convolutional generative adversarial network capable of generating grayscale photos similar to the stimuli presented during two functional magnetic resonance imaging experiments. Using a linear model, we learned to predict the generative model's latent space from measured brain activity. The objective was to create an image similar to the presented stimulus image through the previously trained generator. Using this approach we were able to reconstruct structural and some semantic features of a proportion of the natural image sets. A behavioural test showed that subjects were capable of identifying a reconstruction of the original stimulus in 67.2% and 66.4% of the cases in a pairwise comparison for the two natural image datasets, respectively. Our approach does not require end-to-end training of a large generative model on limited neuroimaging data. Rapid advances in generative modeling promise further improvements in reconstruction performance.


Subjects
Brain/diagnostic imaging; Brain/physiology; Functional Neuroimaging/methods; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Models, Theoretical; Neural Networks, Computer; Pattern Recognition, Visual/physiology; Adult; Humans
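A minimal sketch of the reconstruction pipeline described in the abstract above, assuming scikit-learn and placeholder data: a ridge regression maps measured fMRI responses to the latent space of a pretrained generator, and the predicted latent codes would then be rendered by that generator. The array names, sizes, and the `generator` mentioned in the final comment are illustrative, not the authors' code.

```python
# Illustrative sketch: linear mapping from brain activity to a GAN latent space.
import numpy as np
from sklearn.linear_model import RidgeCV

rng = np.random.default_rng(0)
n_train, n_test, n_voxels, latent_dim = 900, 100, 4000, 128

# Placeholder arrays standing in for measured fMRI responses and for the
# latent codes of the training stimuli under a pretrained DCGAN.
voxels_train = rng.standard_normal((n_train, n_voxels))
latents_train = rng.standard_normal((n_train, latent_dim))
voxels_test = rng.standard_normal((n_test, n_voxels))

# Linear (ridge) map from voxel responses to the generator's latent space,
# with the regularization strength chosen by cross-validation.
ridge = RidgeCV(alphas=np.logspace(-2, 4, 13))
ridge.fit(voxels_train, latents_train)
z_pred = ridge.predict(voxels_test)  # (n_test, latent_dim) predicted codes

# In the real pipeline the frozen generator would turn these codes into
# grayscale reconstructions, e.g. reconstructions = generator(z_pred).
```

Because only the linear map is fit to neural data, the approach sidesteps end-to-end training of a large generative model on a small neuroimaging dataset, as the abstract notes.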
3.
Neuroimage; 180(Pt A): 253-266, 2018 Oct 15.
Article in English | MEDLINE | ID: mdl-28723578

ABSTRACT

Representations learned by deep convolutional neural networks (CNNs) for object recognition are a widely investigated model of the processing hierarchy in the human visual system. Using functional magnetic resonance imaging, CNN representations of visual stimuli have previously been shown to correspond to processing stages in the ventral and dorsal streams of the visual system. Whether this correspondence between models and brain signals also holds for activity acquired at high temporal resolution has been explored less exhaustively. Here, we addressed this question by combining CNN-based encoding models with magnetoencephalography (MEG). Human participants passively viewed 1,000 images of objects while MEG signals were acquired. We modelled their high temporal resolution source-reconstructed cortical activity with CNNs, and observed a feed-forward sweep across the visual hierarchy between 75 and 200 ms after stimulus onset. This spatiotemporal cascade was captured by the network layer representations, where the increasingly abstract stimulus representation in the hierarchical network model was reflected in different parts of the visual cortex, following the visual ventral stream. We further validated the accuracy of our encoding model by decoding stimulus identity in a left-out validation set of viewed objects, achieving state-of-the-art decoding accuracy.


Subjects
Neural Networks, Computer; Pattern Recognition, Visual/physiology; Visual Cortex/physiology; Adult; Female; Humans; Magnetoencephalography/methods; Male; Signal Processing, Computer-Assisted; Young Adult
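A minimal sketch of a per-timepoint encoding-and-decoding analysis in the spirit of the abstract above, assuming scikit-learn and placeholder data: ridge regressions predict source-space MEG responses from a CNN layer's image features, and held-out stimuli are identified by matching measured to predicted response patterns. Array names and sizes are illustrative, not the authors' pipeline.

```python
# Illustrative sketch: CNN-feature encoding models of MEG source activity,
# plus correlation-based identification of left-out stimuli.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_train, n_test, n_feat, n_sources, n_times = 900, 100, 512, 200, 60

feat_train = rng.standard_normal((n_train, n_feat))              # CNN layer features
feat_test = rng.standard_normal((n_test, n_feat))
meg_train = rng.standard_normal((n_train, n_sources, n_times))   # source responses
meg_test = rng.standard_normal((n_test, n_sources, n_times))

# One encoding model per timepoint, predicting all sources at once.
pred_test = np.empty_like(meg_test)
for t in range(n_times):
    model = Ridge(alpha=1.0).fit(feat_train, meg_train[:, :, t])
    pred_test[:, :, t] = model.predict(feat_test)

# Identify each held-out stimulus as the one whose predicted spatiotemporal
# pattern correlates best with the measured pattern.
measured = meg_test.reshape(n_test, -1)
predicted = pred_test.reshape(n_test, -1)
corr = np.corrcoef(measured, predicted)[:n_test, n_test:]
accuracy = np.mean(corr.argmax(axis=1) == np.arange(n_test))
print(f"identification accuracy: {accuracy:.2f}")
```

Repeating the fit separately for each network layer would reveal which layer best predicts which cortical sources at which latencies, which is how a feed-forward sweep across the visual hierarchy can be read off such models.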