Results 1 - 20 of 113
1.
Nat Rev Neurosci ; 24(7): 431-450, 2023 07.
Article in English | MEDLINE | ID: mdl-37253949

ABSTRACT

Artificial neural networks (ANNs) inspired by biology are beginning to be widely used to model behavioural and neural data, an approach we call 'neuroconnectionism'. ANNs have been not only lauded as the current best models of information processing in the brain but also criticized for failing to account for basic cognitive functions. In this Perspective article, we propose that arguing about the successes and failures of a restricted set of current ANNs is the wrong approach to assess the promise of neuroconnectionism for brain science. Instead, we take inspiration from the philosophy of science, and in particular from Lakatos, who showed that the core of a scientific research programme is often not directly falsifiable but should be assessed by its capacity to generate novel insights. Following this view, we present neuroconnectionism as a general research programme centred around ANNs as a computational language for expressing falsifiable theories about brain computation. We describe the core of the programme, the underlying computational framework and its tools for testing specific neuroscientific hypotheses and deriving novel understanding. Taking a longitudinal view, we review past and present neuroconnectionist projects and their responses to challenges and argue that the research programme is highly progressive, generating new and otherwise unreachable insights into the workings of the brain.


Subject(s)
Brain , Neural Networks, Computer , Humans , Brain/physiology
2.
Annu Rev Neurosci ; 42: 407-432, 2019 07 08.
Article in English | MEDLINE | ID: mdl-31283895

ABSTRACT

The brain's function is to enable adaptive behavior in the world. To this end, the brain processes information about the world. The concept of representation links the information processed by the brain back to the world and enables us to understand what the brain does at a functional level. The appeal of making the connection between brain activity and what it represents has been irresistible to neuroscience, despite the fact that representational interpretations pose several challenges: We must define which aspects of brain activity matter, how the code works, and how it supports computations that contribute to adaptive behavior. It has been suggested that we might drop representational language altogether and seek to understand the brain, more simply, as a dynamical system. In this review, we argue that the concept of representation provides a useful link between dynamics and computational function and ask which aspects of brain activity should be analyzed to achieve a representational understanding. We peel the onion of brain representations in search of the layers (the aspects of brain activity) that matter to computation. The article provides an introduction to the motivation and mathematics of representational models, a critical discussion of their assumptions and limitations, and a preview of future directions in this area.
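The representational machinery this review introduces can be made concrete with a minimal sketch: a representational dissimilarity matrix (RDM) summarises brain activity by the pairwise distances between condition-specific response patterns. The dimensions and random data below are illustrative assumptions, not values from the review.

```python
import numpy as np

def rdm(patterns):
    """Representational dissimilarity matrix: pairwise Euclidean
    distances between condition response patterns (conditions x units)."""
    diff = patterns[:, None, :] - patterns[None, :, :]
    return np.sqrt((diff ** 2).sum(-1))

rng = np.random.default_rng(0)
patterns = rng.standard_normal((4, 100))   # 4 stimuli, 100 neurons/voxels
D = rdm(patterns)
assert D.shape == (4, 4)
assert np.allclose(np.diag(D), 0)          # each condition is 0 from itself
assert np.allclose(D, D.T)                 # dissimilarity is symmetric
```

Comparing such RDMs between a model and a brain region is the basic move of representational similarity analysis used throughout the entries below.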


Subject(s)
Brain Mapping , Brain/pathology , Cognition/physiology , Models, Neurological , Humans , Magnetic Resonance Imaging/methods
3.
Nat Rev Neurosci ; 22(11): 703-718, 2021 11.
Article in English | MEDLINE | ID: mdl-34522043

ABSTRACT

A central goal of neuroscience is to understand the representations formed by brain activity patterns and their connection to behaviour. The classic approach is to investigate how individual neurons encode stimuli and how their tuning determines the fidelity of the neural representation. Tuning analyses often use the Fisher information to characterize the sensitivity of neural responses to small changes of the stimulus. In recent decades, measurements of large populations of neurons have motivated a complementary approach, which focuses on the information available to linear decoders. The decodable information is captured by the geometry of the representational patterns in the multivariate response space. Here we review neural tuning and representational geometry with the goal of clarifying the relationship between them. The tuning induces the geometry, but different sets of tuned neurons can induce the same geometry. The geometry determines the Fisher information, the mutual information and the behavioural performance of an ideal observer in a range of psychophysical tasks. We argue that future studies can benefit from considering both tuning and geometry to understand neural codes and reveal the connections between stimuli, brain activity and behaviour.
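The claim that "different sets of tuned neurons can induce the same geometry" can be demonstrated directly: any orthogonal remixing of a population changes every tuning curve but leaves all pairwise pattern distances, and hence the representational geometry, unchanged. A toy numpy sketch (population sizes are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
responses = rng.standard_normal((10, 50))           # 10 stimuli x 50 neurons
Q, _ = np.linalg.qr(rng.standard_normal((50, 50)))  # random orthogonal matrix
rotated = responses @ Q   # a population with completely different tuning curves

def pairwise(X):
    # Euclidean distances between all pairs of stimulus response patterns
    return np.linalg.norm(X[:, None] - X[None], axis=-1)

# Individual tuning differs, but the representational geometry is identical
assert not np.allclose(responses, rotated)
assert np.allclose(pairwise(responses), pairwise(rotated))
```

This is exactly why geometry-level quantities such as decodable information survive a change of basis that tuning-level descriptions do not.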


Subject(s)
Brain/physiology , Models, Neurological , Models, Theoretical , Neurons/physiology , Animals , Brain/cytology , Humans
4.
Proc Natl Acad Sci U S A ; 119(27): e2115047119, 2022 07 05.
Article in English | MEDLINE | ID: mdl-35767642

ABSTRACT

Human vision is attuned to the subtle differences between individual faces. Yet we lack a quantitative way of predicting how similar two face images look and whether they appear to show the same person. Principal component-based three-dimensional (3D) morphable models are widely used to generate stimuli in face perception research. These models capture the distribution of real human faces in terms of dimensions of physical shape and texture. How well does a "face space" based on these dimensions capture the similarity relationships humans perceive among faces? To answer this, we designed a behavioral task to collect dissimilarity and same/different identity judgments for 232 pairs of realistic faces. Stimuli sampled geometric relationships in a face space derived from principal components of 3D shape and texture (Basel face model [BFM]). We then compared a wide range of models in their ability to predict the data, including the BFM from which faces were generated, an active appearance model derived from face photographs, and image-computable models of visual perception. Euclidean distance in the BFM explained both dissimilarity and identity judgments surprisingly well. In a comparison against 16 diverse models, BFM distance was competitive with representational distances in state-of-the-art deep neural networks (DNNs), including novel DNNs trained on BFM synthetic identities or BFM latents. Models capturing the distribution of face shape and texture across individuals are not only useful tools for stimulus generation. They also capture important information about how faces are perceived, suggesting that human face representations are tuned to the statistical distribution of faces.
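The core analysis, predicting judged dissimilarity from Euclidean distance in a latent face space, can be sketched as follows. The latent dimensionality, the synthetic "judgments", and the hand-rolled Spearman correlation are illustrative assumptions, not the study's stimuli or data.

```python
import numpy as np

rng = np.random.default_rng(2)
latents = rng.standard_normal((8, 20))   # 8 faces in a 20-dim toy "face space"

# Pairwise Euclidean distances: the model's predicted dissimilarities
i, j = np.triu_indices(8, k=1)
pred = np.linalg.norm(latents[i] - latents[j], axis=1)

# Toy "behavioural" dissimilarities: a noisy monotone function of distance
judged = pred + 0.1 * rng.standard_normal(pred.shape)

def spearman(a, b):
    # rank correlation between two dissimilarity vectors
    ra, rb = a.argsort().argsort(), b.argsort().argsort()
    return np.corrcoef(ra, rb)[0, 1]

assert spearman(pred, judged) > 0.9
```

In the study itself, such distance-to-judgment correlations are what put the BFM on the same footing as deep-network representational distances.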


Subject(s)
Facial Recognition , Judgment , Visual Perception , Humans , Neural Networks, Computer
5.
J Neurosci ; 43(10): 1731-1741, 2023 03 08.
Article in English | MEDLINE | ID: mdl-36759190

ABSTRACT

Deep neural networks (DNNs) are promising models of the cortical computations supporting human object recognition. However, despite their ability to explain a significant portion of variance in neural data, the agreement between models and brain representational dynamics is far from perfect. We address this issue by asking which representational features are currently unaccounted for in neural time series data, estimated for multiple areas of the ventral stream via source-reconstructed magnetoencephalography data acquired in human participants (nine females, six males) during object viewing. We focus on the ability of visuo-semantic models, consisting of human-generated labels of object features and categories, to explain variance beyond the explanatory power of DNNs alone. We report a gradual reversal in the relative importance of DNN versus visuo-semantic features as ventral-stream object representations unfold over space and time. Although lower-level visual areas are better explained by DNN features starting early in time (at 66 ms after stimulus onset), higher-level cortical dynamics are best accounted for by visuo-semantic features starting later in time (at 146 ms after stimulus onset). Among the visuo-semantic features, object parts and basic categories drive the advantage over DNNs. These results show that a significant component of the variance unexplained by DNNs in higher-level cortical dynamics is structured and can be explained by readily nameable aspects of the objects. We conclude that current DNNs fail to fully capture dynamic representations in higher-level human visual cortex and suggest a path toward more accurate models of ventral-stream computations.

SIGNIFICANCE STATEMENT: When we view objects such as faces and cars in our visual environment, their neural representations dynamically unfold over time at a millisecond scale. These dynamics reflect the cortical computations that support fast and robust object recognition. DNNs have emerged as a promising framework for modeling these computations but cannot yet fully account for the neural dynamics. Using magnetoencephalography data acquired in human observers during object viewing, we show that readily nameable aspects of objects, such as 'eye', 'wheel', and 'face', can account for variance in the neural dynamics over and above DNNs. These findings suggest that DNNs and humans may in part rely on different object features for visual recognition and provide guidelines for model improvement.
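The variance-partitioning logic of this study — how much variance do visuo-semantic features explain beyond DNN features — can be sketched as a hierarchical regression on toy data. Predictor counts, weights, and noise levels below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
dnn = rng.standard_normal((n, 3))     # toy "DNN feature" predictors
sem = rng.standard_normal((n, 2))     # toy "visuo-semantic" predictors
# Simulated neural time course driven by both sets of features
neural = dnn @ [1.0, 0.5, -0.5] + sem @ [2.0, 1.0] + 0.1 * rng.standard_normal(n)

def r2(X, y):
    # proportion of variance explained by a least-squares fit of X to y
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

# Unique semantic variance = R^2(full model) - R^2(DNN-only model)
unique_sem = r2(np.hstack([dnn, sem]), neural) - r2(dnn, neural)
assert unique_sem > 0.1   # semantic features explain variance beyond the DNN
```

The same subtraction, applied at each time point and cortical area, yields the reversal over space and time the abstract describes.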


Subject(s)
Pattern Recognition, Visual , Semantics , Male , Female , Humans , Neural Networks, Computer , Visual Perception , Brain , Brain Mapping/methods , Magnetic Resonance Imaging/methods
6.
Proc Natl Acad Sci U S A ; 118(8)2021 02 23.
Article in English | MEDLINE | ID: mdl-33593900

ABSTRACT

Deep neural networks provide the current best models of visual information processing in the primate brain. Drawing on work from computer vision, the most commonly used networks are pretrained on data from the ImageNet Large Scale Visual Recognition Challenge. This dataset comprises images from 1,000 categories, selected to provide a challenging testbed for automated visual object recognition systems. Moving beyond this common practice, we here introduce ecoset, a collection of >1.5 million images from 565 basic-level categories selected to better capture the distribution of objects relevant to humans. Ecoset categories were chosen to be both frequent in linguistic usage and concrete, thereby mirroring important physical objects in the world. We test the effects of training on this ecologically more valid dataset using multiple instances of two neural network architectures: AlexNet and vNet, a novel architecture designed to mimic the progressive increase in receptive field sizes along the human ventral stream. We show that training on ecoset leads to significant improvements in predicting representations in human higher-level visual cortex and perceptual judgments, surpassing the previous state of the art. Significant and highly consistent benefits are demonstrated for both architectures on two separate functional magnetic resonance imaging (fMRI) datasets and behavioral data, jointly covering responses to 1,292 visual stimuli from a wide variety of object categories. These results suggest that computational visual neuroscience may take better advantage of the deep learning framework by using image sets that reflect the human perceptual and cognitive experience. Ecoset and trained network models are openly available to the research community.


Subject(s)
Deep Learning , Ecology , Models, Neurological , Neural Networks, Computer , Pattern Recognition, Visual , Visual Cortex/physiology , Visual Perception/physiology , Brain Mapping , Humans
7.
Proc Natl Acad Sci U S A ; 117(47): 29330-29337, 2020 11 24.
Article in English | MEDLINE | ID: mdl-33229549

ABSTRACT

Distinct scientific theories can make similar predictions. To adjudicate between theories, we must design experiments for which the theories make distinct predictions. Here we consider the problem of comparing deep neural networks as models of human visual recognition. To efficiently compare models' ability to predict human responses, we synthesize controversial stimuli: images for which different models produce distinct responses. We applied this approach to two visual recognition tasks, handwritten digits (MNIST) and objects in small natural images (CIFAR-10). For each task, we synthesized controversial stimuli to maximize the disagreement among models which employed different architectures and recognition algorithms. Human subjects viewed hundreds of these stimuli, as well as natural examples, and judged the probability of presence of each digit/object category in each image. We quantified how accurately each model predicted the human judgments. The best-performing models were a generative analysis-by-synthesis model (based on variational autoencoders) for MNIST and a hybrid discriminative-generative joint energy model for CIFAR-10. These deep neural networks (DNNs), which model the distribution of images, performed better than purely discriminative DNNs, which learn only to map images to labels. None of the candidate models fully explained the human responses. Controversial stimuli generalize the concept of adversarial examples, obviating the need to assume a ground-truth model. Unlike natural images, controversial stimuli are not constrained to the stimulus distribution models are trained on, thus providing severe out-of-distribution tests that reveal the models' inductive biases. Controversial stimuli therefore provide powerful probes of discrepancies between models and human perception.
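The synthesis procedure — ascending the disagreement between two models' responses to the same input — can be sketched with two toy logistic "models" standing in for the paper's DNNs. Weights, step size, and iteration count are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
w_a = rng.standard_normal(10)   # toy "model A": a logistic classifier
w_b = rng.standard_normal(10)   # toy "model B": a different classifier
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Gradient ascent on the disagreement d(x) = p_A(x) - p_B(x): synthesize a
# stimulus that model A calls "category present" and model B calls "absent"
x = rng.standard_normal(10)
for _ in range(300):
    p_a, p_b = sigmoid(w_a @ x), sigmoid(w_b @ x)
    x += 0.1 * (p_a * (1 - p_a) * w_a - p_b * (1 - p_b) * w_b)

# The resulting "controversial stimulus" splits the two models cleanly
assert sigmoid(w_a @ x) > 0.9 and sigmoid(w_b @ x) < 0.1
```

Human judgments of such stimuli then adjudicate between the models, since at least one of them must be wrong about each controversial image.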


Subject(s)
Cognition/physiology , Deep Learning , Models, Neurological , Pattern Recognition, Automated/methods , Pattern Recognition, Physiological/physiology , Adult , Female , Humans , Male , Normal Distribution
8.
Behav Brain Sci ; 46: e392, 2023 Dec 06.
Article in English | MEDLINE | ID: mdl-38054329

ABSTRACT

An ideal vision model accounts for behavior and neurophysiology in both naturalistic conditions and designed lab experiments. Unlike psychological theories, artificial neural networks (ANNs) actually perform visual tasks and generate testable predictions for arbitrary inputs. These advantages enable ANNs to engage the entire spectrum of the evidence. Failures of particular models drive progress in a vibrant ANN research program of human vision.


Subject(s)
Language , Neural Networks, Computer , Humans
9.
J Neurosci ; 41(9): 1952-1969, 2021 03 03.
Article in English | MEDLINE | ID: mdl-33452225

ABSTRACT

Faces of different people elicit distinct fMRI patterns in several face-selective regions of the human brain. Here we used representational similarity analysis to investigate what type of identity-distinguishing information is encoded in three face-selective regions: fusiform face area (FFA), occipital face area (OFA), and posterior superior temporal sulcus (pSTS). In a sample of 30 human participants (22 females, 8 males), we used fMRI to measure brain activity patterns elicited by naturalistic videos of famous face identities, and compared their representational distances in each region with models of the differences between identities. We built diverse candidate models, ranging from low-level image-computable properties (pixel-wise, GIST, and Gabor-Jet dissimilarities), through higher-level image-computable descriptions (OpenFace deep neural network, trained to cluster faces by identity), to complex human-rated properties (perceived similarity, social traits, and gender). We found marked differences in the information represented by the FFA and OFA. Dissimilarities between face identities in FFA were accounted for by differences in perceived similarity, social traits, gender, and by the OpenFace network. In contrast, representational distances in OFA were mainly driven by differences in low-level image-based properties (pixel-wise and Gabor-Jet dissimilarities). Our results suggest that, although FFA and OFA can both discriminate between identities, the FFA representation is further removed from the image, encoding higher-level perceptual and social face information.

SIGNIFICANCE STATEMENT: Recent studies using fMRI have shown that several face-responsive brain regions can distinguish between different face identities. It is, however, unclear whether these different face-responsive regions distinguish between identities in similar or different ways. We used representational similarity analysis to investigate the computations within three brain regions in response to naturalistically varying videos of face identities. Our results revealed that two regions, the fusiform face area and the occipital face area, encode distinct identity information about faces. Although identity can be decoded from both regions, identity representations in fusiform face area primarily contained information about social traits, gender, and high-level visual features, whereas occipital face area primarily represented lower-level image features.


Subject(s)
Brain/physiology , Facial Recognition/physiology , Models, Neurological , Brain Mapping/methods , Female , Humans , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Male
10.
J Neurosci ; 2021 Jun 04.
Article in English | MEDLINE | ID: mdl-34099508

ABSTRACT

Social behaviour is coordinated by a network of brain regions, including those involved in the perception of social stimuli and those involved in complex functions like inferring perceptual and mental states and controlling social interactions. The properties and functions of many of these regions in isolation are relatively well understood, but less is known about how these regions interact whilst processing dynamic social interactions. To investigate whether the functional connectivity between brain regions is modulated by social context, we collected functional MRI (fMRI) data from male monkeys (Macaca mulatta) viewing videos of social interactions labelled as "affiliative", "aggressive", or "ambiguous". We show activation related to the perception of social interactions along both banks of the superior temporal sulcus, parietal cortex, medial and lateral frontal cortex, and the caudate nucleus. Within this network, we show that fronto-temporal functional connectivity is significantly modulated by social context. Crucially, we link the observation of specific behaviours to changes in functional connectivity within our network. Viewing aggressive behaviour was associated with a limited increase in temporo-temporal and a weak increase in cingulate-temporal connectivity. By contrast, viewing interactions where the outcome was uncertain was associated with a pronounced increase in temporo-temporal and cingulate-temporal functional connectivity. We hypothesise that this widespread network synchronisation occurs because cingulate and temporal areas coordinate their activity when more difficult social inferences are being made.

SIGNIFICANCE STATEMENT: Processing social information from our environment requires the activation of several brain regions, which are concentrated within the frontal and temporal lobes. However, little is known about how these areas interact to facilitate the processing of different social interactions. Here we show that functional connectivity within and between the frontal and temporal lobes is modulated by social context. Specifically, we demonstrate that viewing social interactions where the outcome was unclear is associated with increased synchrony within and between the cingulate cortex and temporal cortices. These findings suggest that the coordination between the cingulate and temporal cortices is enhanced when more difficult social inferences are being made.

11.
Proc Natl Acad Sci U S A ; 116(43): 21854-21863, 2019 10 22.
Article in English | MEDLINE | ID: mdl-31591217

ABSTRACT

The human visual system is an intricate network of brain regions that enables us to recognize the world around us. Despite its abundant lateral and feedback connections, object processing is commonly viewed and studied as a feedforward process. Here, we measure and model the rapid representational dynamics across multiple stages of the human ventral stream using time-resolved brain imaging and deep learning. We observe substantial representational transformations during the first 300 ms of processing within and across ventral-stream regions. Categorical divisions emerge in sequence, cascading forward and in reverse across regions, and Granger causality analysis suggests bidirectional information flow between regions. Finally, recurrent deep neural network models clearly outperform parameter-matched feedforward models in terms of their ability to capture the multiregion cortical dynamics. Targeted virtual cooling experiments on the recurrent deep network models further substantiate the importance of their lateral and top-down connections. These results establish that recurrent models are required to understand information processing in the human ventral stream.


Subject(s)
Models, Neurological , Visual Perception/physiology , Adult , Deep Learning , Feedback, Sensory , Female , Humans , Magnetoencephalography , Nerve Net , Visual Pathways
12.
J Cogn Neurosci ; 33(10): 2044-2064, 2021 09 01.
Article in English | MEDLINE | ID: mdl-34272948

ABSTRACT

Deep neural networks (DNNs) trained on object recognition provide the best current models of high-level visual cortex. What remains unclear is how strongly experimental choices, such as network architecture, training, and fitting to brain data, contribute to the observed similarities. Here, we compare a diverse set of nine DNN architectures on their ability to explain the representational geometry of 62 object images in human inferior temporal cortex (hIT), as measured with fMRI. We compare untrained networks to their task-trained counterparts and assess the effect of cross-validated fitting to hIT, by taking a weighted combination of the principal components of features within each layer and, subsequently, a weighted combination of layers. For each combination of training and fitting, we test all models for their correlation with the hIT representational dissimilarity matrix, using independent images and subjects. Trained models outperform untrained models (accounting for 57% more of the explainable variance), suggesting that structured visual features are important for explaining hIT. Model fitting further improves the alignment of DNN and hIT representations (by 124%), suggesting that the relative prevalence of different features in hIT does not readily emerge from the Imagenet object-recognition task used to train the networks. The same models can also explain the disparate representations in primary visual cortex (V1), where stronger weights are given to earlier layers. In each region, all architectures achieved equivalently high performance once trained and fitted. The models' shared properties-deep feedforward hierarchies of spatially restricted nonlinear filters-seem more important than their differences, when modeling human visual representations.
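The fitting procedure — a weighted combination of model features evaluated by cross-validation on held-out data — can be sketched at the level of dissimilarity vectors. The toy "layers", weights, and noise below are assumptions, not the study's networks or fMRI data:

```python
import numpy as np

rng = np.random.default_rng(5)
n_pairs = 100
# Toy dissimilarity vectors (upper triangle of an RDM) for three "layers"
layers = rng.standard_normal((n_pairs, 3)) ** 2
# Target "brain" dissimilarities: a weighted mix of the layers plus noise
target = layers @ [0.2, 1.0, 0.1] + 0.05 * rng.standard_normal(n_pairs)

# Fit layer weights on one half of the stimulus pairs...
train, test = np.arange(0, 50), np.arange(50, 100)
w, *_ = np.linalg.lstsq(layers[train], target[train], rcond=None)

# ...and evaluate the weighted combination on the held-out pairs
pred = layers[test] @ w
r = np.corrcoef(pred, target[test])[0, 1]
assert r > 0.9
```

Evaluating on independent pairs (and, in the study, independent subjects and images) is what keeps the extra fitted flexibility from inflating the model's apparent performance.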


Subject(s)
Neural Networks, Computer , Visual Cortex , Humans , Magnetic Resonance Imaging , Temporal Lobe/diagnostic imaging , Visual Cortex/diagnostic imaging , Visual Perception
13.
Neuroimage ; 224: 117408, 2021 01 01.
Article in English | MEDLINE | ID: mdl-33049407

ABSTRACT

A class of semantic theories defines concepts in terms of statistical distributions of lexical items, basing meaning on vectors of word co-occurrence frequencies. A different approach emphasizes abstract hierarchical taxonomic relationships among concepts. However, the functional relevance of these different accounts and how they capture information-encoding of lexical meaning in the brain still remains elusive. We investigated to what extent distributional and taxonomic models explained word-elicited neural responses using cross-validated representational similarity analysis (RSA) of functional magnetic resonance imaging (fMRI) and model comparisons. Our findings show that the brain encodes both types of semantic information, but in distinct cortical regions. Posterior middle temporal regions reflected lexical-semantic similarity based on hierarchical taxonomies, in coherence with the action-relatedness of specific semantic word categories. In contrast, distributional semantics best predicted the representational patterns in left inferior frontal gyrus (LIFG, BA 47). Both representations coexisted in the angular gyrus supporting semantic binding and integration. These results reveal that neuronal networks with distinct cortical distributions across higher-order association cortex encode different representational properties of word meanings. Taxonomy may shape long-term lexical-semantic representations in memory consistently with the sensorimotor details of semantic categories, whilst distributional knowledge in the LIFG (BA 47) may enable semantic combinatorics in the context of language use. Our approach helps to elucidate the nature of semantic representations essential for understanding human language.
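A minimal sketch of the distributional approach described above: build word vectors from windowed co-occurrence counts and compare them with cosine similarity. The tiny corpus and the ±2-word window are toy assumptions, far smaller than any corpus such models are actually estimated from.

```python
import numpy as np

corpus = ("the cat chased the mouse . the dog chased the cat . "
          "the mouse ate the cheese . the dog ate the bone .").split()

vocab = sorted(set(corpus))
index = {w: i for i, w in enumerate(vocab)}
counts = np.zeros((len(vocab), len(vocab)))

# Count co-occurrences within a +/-2-word window around each token
for i, w in enumerate(corpus):
    for j in range(max(0, i - 2), min(len(corpus), i + 3)):
        if i != j:
            counts[index[w], index[corpus[j]]] += 1

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Words occurring in similar contexts get similar vectors
sim_cat_dog = cosine(counts[index["cat"]], counts[index["dog"]])
sim_cat_cheese = cosine(counts[index["cat"]], counts[index["cheese"]])
assert sim_cat_dog > sim_cat_cheese
```

In the study, RDMs built from such vector similarities are the "distributional" candidate compared against taxonomic models in each cortical region.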


Subject(s)
Association , Frontal Lobe/diagnostic imaging , Parietal Lobe/diagnostic imaging , Temporal Lobe/diagnostic imaging , Adult , Brain/diagnostic imaging , Brain/physiology , Brain Mapping , Classification , Comprehension , Concept Formation , Frontal Lobe/physiology , Functional Neuroimaging , Humans , Language , Magnetic Resonance Imaging , Parietal Lobe/physiology , Semantics , Temporal Lobe/physiology
14.
PLoS Comput Biol ; 16(10): e1008215, 2020 10.
Article in English | MEDLINE | ID: mdl-33006992

ABSTRACT

Deep feedforward neural network models of vision dominate in both computational neuroscience and engineering. The primate visual system, by contrast, contains abundant recurrent connections. Recurrent signal flow enables recycling of limited computational resources over time, and so might boost the performance of a physically finite brain or model. Here we show: (1) Recurrent convolutional neural network models outperform feedforward convolutional models matched in their number of parameters in large-scale visual recognition tasks on natural images. (2) Setting a confidence threshold, at which recurrent computations terminate and a decision is made, enables flexible trading of speed for accuracy. At a given confidence threshold, the model expends more time and energy on images that are harder to recognise, without requiring additional parameters for deeper computations. (3) The recurrent model's reaction time for an image predicts the human reaction time for the same image better than several parameter-matched and state-of-the-art feedforward models. (4) Across confidence thresholds, the recurrent model emulates the behaviour of feedforward control models in that it achieves the same accuracy at approximately the same computational cost (mean number of floating-point operations). However, the recurrent model can be run longer (higher confidence threshold) and then outperforms parameter-matched feedforward comparison models. These results suggest that recurrent connectivity, a hallmark of biological visual systems, may be essential for understanding the accuracy, flexibility, and dynamics of human visual recognition.
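Point (2) — terminating recurrent computation once a confidence threshold is reached, so that harder inputs receive more computation — can be sketched with a toy evidence-accumulation loop. Evidence strengths, noise level, and threshold are invented for illustration, not the paper's network:

```python
import numpy as np

def recurrent_readout(evidence, threshold=0.9, max_steps=100):
    """Accumulate noisy evidence until the softmax readout is confident.
    Returns (decision, number of recurrent steps used)."""
    rng = np.random.default_rng(6)
    logits = np.zeros(2)
    for step in range(1, max_steps + 1):
        logits += evidence + 0.1 * rng.standard_normal(2)
        p = np.exp(logits) / np.exp(logits).sum()
        if p.max() >= threshold:      # confident enough: stop computing
            return int(p.argmax()), step
    return int(p.argmax()), max_steps

# An easy (strong-evidence) input terminates in fewer steps than a hard one
_, easy_steps = recurrent_readout(np.array([1.0, 0.0]))
_, hard_steps = recurrent_readout(np.array([0.15, 0.0]))
assert easy_steps < hard_steps
```

Raising the threshold trades speed for accuracy across the board, which is the knob the paper uses to compare recurrent and feedforward models at matched computational cost.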


Subject(s)
Models, Neurological , Neural Networks, Computer , Reaction Time/physiology , Vision, Ocular/physiology , Visual Perception/physiology , Adult , Computational Biology , Female , Humans , Male , Young Adult
15.
Neuroimage ; 201: 116004, 2019 11 01.
Article in English | MEDLINE | ID: mdl-31299368

ABSTRACT

Face-selective and voice-selective brain regions have been shown to represent face-identity and voice-identity, respectively. Here we investigated whether there are modality-general person-identity representations in the brain that can be driven by either a face or a voice, and that invariantly represent naturalistically varying face videos and voice recordings of the same identity. Models of face and voice integration suggest that such representations could exist in multimodal brain regions, and in unimodal regions via direct coupling between face- and voice-selective regions. Therefore, in this study we used fMRI to measure brain activity patterns elicited by the faces and voices of familiar people in face-selective, voice-selective, and person-selective multimodal brain regions. We used representational similarity analysis to (1) compare representational geometries (i.e. representational dissimilarity matrices) of face- and voice-elicited identities, and to (2) investigate the degree to which pattern discriminants for pairs of identities generalise from one modality to the other. We did not find any evidence of similar representational geometries across modalities in any of our regions of interest. However, our results showed that pattern discriminants that were trained to discriminate pairs of identities from their faces could also discriminate the respective voices (and vice-versa) in the right posterior superior temporal sulcus (rpSTS). Our findings suggest that the rpSTS is a person-selective multimodal region that shows a modality-general person-identity representation and integrates face and voice identity information.
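The cross-modal generalisation test — training a pattern discriminant on one modality and testing it on the other — can be sketched with simulated patterns that share an identity axis across modalities. All pattern sizes, offsets, and noise levels are toy assumptions, and the simple difference-of-means discriminant stands in for the study's actual classifier:

```python
import numpy as np

rng = np.random.default_rng(7)
n_vox = 50
identity_axis = rng.standard_normal(n_vox)  # identity code shared across modalities

def patterns(sign, offset, n=20):
    """Toy response patterns for one identity in one modality."""
    return sign * identity_axis + offset + 0.5 * rng.standard_normal((n, n_vox))

face_off, voice_off = rng.standard_normal(n_vox), rng.standard_normal(n_vox)

# Train a linear discriminant (difference of class means) on face patterns
face_A, face_B = patterns(+1, face_off), patterns(-1, face_off)
w = face_A.mean(0) - face_B.mean(0)
b = w @ (face_A.mean(0) + face_B.mean(0)) / 2

# Does the face-trained discriminant also separate the voices?
voice_A, voice_B = patterns(+1, voice_off), patterns(-1, voice_off)
acc = np.mean(np.concatenate([voice_A @ w - b > 0, voice_B @ w - b < 0]))
assert acc > 0.9
```

Above-chance accuracy here is the signature of a modality-general identity code; in the study this generalisation held specifically in the rpSTS.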


Subject(s)
Auditory Perception/physiology , Facial Recognition/physiology , Recognition, Psychology/physiology , Temporal Lobe/physiology , Voice , Adult , Female , Humans , Magnetic Resonance Imaging , Male , Young Adult
16.
Neuroimage ; 194: 12-24, 2019 07 01.
Article in English | MEDLINE | ID: mdl-30894333

ABSTRACT

The degree to which we perceive real-world objects as similar or dissimilar structures our perception and guides categorization behavior. Here, we investigated the neural representations enabling perceived similarity using behavioral judgments, fMRI and MEG. As different object dimensions co-occur and partly correlate, to understand the relationship between perceived similarity and brain activity it is necessary to assess the unique role of multiple object dimensions. We thus behaviorally assessed perceived object similarity in relation to shape, function, color and background. We then used representational similarity analyses to relate these behavioral judgments to brain activity. We observed a link between each object dimension and representations in visual cortex. These representations emerged rapidly within 200 ms of stimulus onset. Assessing the unique role of each object dimension revealed partly overlapping and distributed representations: while color-related representations distinctly preceded shape-related representations both in the processing hierarchy of the ventral visual pathway and in time, several dimensions were linked to high-level ventral visual cortex. Further analysis singled out the shape dimension as neither fully accounted for by supra-category membership, nor a deep neural network trained on object categorization. Together our results comprehensively characterize the relationship between perceived similarity of key object dimensions and neural activity.


Subject(s)
Pattern Recognition, Visual/physiology , Visual Cortex/physiology , Adult , Brain Mapping/methods , Female , Humans , Male
17.
Neuroimage ; 183: 606-616, 2018 12.
Article in English | MEDLINE | ID: mdl-30170148

ABSTRACT

GLMdenoise is a denoising technique for task-based fMRI. In GLMdenoise, estimates of spatially correlated noise (which may be physiological, instrumental, motion-related, or neural in origin) are derived from the data and incorporated as nuisance regressors in a general linear model (GLM) analysis. We previously showed that GLMdenoise outperforms a variety of other denoising techniques in terms of cross-validation accuracy of GLM estimates (Kay et al., 2013a). However, the practical impact of denoising for experimental studies remains unclear. Here we examine whether and to what extent GLMdenoise improves sensitivity in the context of multivariate pattern analysis of fMRI data. On a large number of participants (31 participants across 4 experiments; 3 T, gradient-echo, spatial resolution 2-3.75 mm, temporal resolution 1.3-2 s, number of conditions 32-75), we perform representational similarity analysis (Kriegeskorte et al., 2008a) as well as pattern classification (Haxby et al., 2001). We find that GLMdenoise substantially improves replicability of representational dissimilarity matrices (RDMs) across independent splits of each participant's dataset (average RDM replicability increases from r = 0.46 to r = 0.61). Additionally, we find that GLMdenoise substantially improves pairwise classification accuracy (average classification accuracy increases from 79% correct to 84% correct). We show that GLMdenoise often improves and never degrades performance for individual participants and that GLMdenoise also improves across-participant consistency. We conclude that GLMdenoise is a useful tool that can be routinely used to maximize the amount of information extracted from fMRI activity patterns.
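The core idea of GLMdenoise — adding data-derived noise time courses as nuisance regressors to the GLM — can be sketched with simulated voxels. For simplicity, the noise time course here is assumed known rather than estimated from the data as GLMdenoise actually does; the design, amplitudes, and voxel count are toy assumptions:

```python
import numpy as np

rng = np.random.default_rng(8)
T = 200
design = rng.binomial(1, 0.2, (T, 2)).astype(float)   # toy task design matrix

def fit(X, y):
    # ordinary least-squares GLM estimates
    return np.linalg.lstsq(X, y, rcond=None)[0]

err_plain = err_denoised = 0.0
for _ in range(20):                       # 20 simulated voxels
    true_beta = rng.standard_normal(2)
    noise_ts = rng.standard_normal(T)     # structured noise time course
    y = design @ true_beta + 2.0 * noise_ts + 0.3 * rng.standard_normal(T)
    # Plain GLM vs GLM with the noise time course as a nuisance regressor
    b_plain = fit(design, y)
    b_den = fit(np.hstack([design, noise_ts[:, None]]), y)[:2]
    err_plain += np.abs(b_plain - true_beta).sum()
    err_denoised += np.abs(b_den - true_beta).sum()

assert err_denoised < err_plain   # nuisance regressors sharpen the task betas
```

Cleaner beta estimates feed directly into the downstream multivariate analyses, which is why the paper sees more replicable RDMs and higher classification accuracy.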


Subject(s)
Cerebral Cortex/physiology; Functional Neuroimaging/methods; Image Interpretation, Computer-Assisted/methods; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Adult; Cerebral Cortex/diagnostic imaging; Humans; Multivariate Analysis; Pattern Recognition, Automated; Psychomotor Performance/physiology; Visual Perception/physiology
18.
Hum Brain Mapp ; 39(10): 4018-4031, 2018 10.
Article in English | MEDLINE | ID: mdl-29885014

ABSTRACT

We evaluated the effectiveness of prospective motion correction (PMC) on a simple visual task when no deliberate subject motion was present. The PMC system utilizes an in-bore optical camera to track an external marker attached to the participant via a custom-molded mouthpiece. The study was conducted at two resolutions (1.5 mm vs 3 mm) and under three conditions (PMC On and Mouthpiece On vs PMC Off and Mouthpiece On vs PMC Off and Mouthpiece Off). Multiple data analysis methods were conducted, including univariate and multivariate approaches, and we demonstrated that the benefit of PMC is most apparent for multi-voxel pattern decoding at higher resolutions. Additional testing on two participants showed that our inexpensive, commercially available mouthpiece solution produced comparable results to a dentist-molded mouthpiece. Our results showed that PMC is increasingly important at higher resolutions for analyses that require accurate voxel registration across time.
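The multi-voxel pattern decoding for which prospective motion correction mattered most can be illustrated with a split-half correlation classifier of the kind long used in fMRI decoding: a condition pair counts as correctly classified when same-condition pattern correlations across independent data halves exceed the cross-condition correlations. Everything below (voxel count, noise level, the pure-NumPy classifier) is an illustrative assumption, not the study's analysis code:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical true patterns for two conditions over 100 voxels.
n_vox = 100
signal_a = rng.standard_normal(n_vox)
signal_b = rng.standard_normal(n_vox)

def noisy(p):
    # Measured pattern = true pattern + measurement noise; uncorrected head
    # motion effectively inflates this noise term at high resolution.
    return p + rng.standard_normal(n_vox)

a1, a2 = noisy(signal_a), noisy(signal_a)   # condition A, data halves 1 and 2
b1, b2 = noisy(signal_b), noisy(signal_b)   # condition B, data halves 1 and 2

def r(x, y):
    return np.corrcoef(x, y)[0, 1]

# Correct classification: same-condition correlations across halves beat
# the cross-condition correlations.
correct = bool(r(a1, a2) > r(a1, b2) and r(b1, b2) > r(b1, a2))
print("pair classified correctly:", correct)
```

Because motion misaligns voxels across time, it degrades precisely these across-half pattern correlations, which is why the benefit of PMC grows at higher spatial resolution.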


Subject(s)
Artifacts; Functional Neuroimaging/standards; Head Movements; Image Processing, Computer-Assisted/standards; Magnetic Resonance Imaging/standards; Pattern Recognition, Automated/standards; Pattern Recognition, Visual/physiology; Support Vector Machine; Visual Cortex/physiology; Adult; Functional Neuroimaging/methods; Humans; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Pattern Recognition, Automated/methods; Sensitivity and Specificity; Visual Cortex/diagnostic imaging
19.
PLoS Comput Biol ; 13(7): e1005604, 2017 Jul.
Article in English | MEDLINE | ID: mdl-28746335

ABSTRACT

The perceptual representation of individual faces is often explained with reference to a norm-based face space. In such spaces, individuals are encoded as vectors where identity is primarily conveyed by direction and distinctiveness by eccentricity. Here we measured human fMRI responses and psychophysical similarity judgments of individual face exemplars, which were generated as realistic 3D animations using a computer-graphics model. We developed and evaluated multiple neurobiologically plausible computational models, each of which predicts a representational distance matrix and a regional-mean activation profile for 24 face stimuli. In the fusiform face area, a face-space coding model with sigmoidal ramp tuning provided a better account of the data than one based on exemplar tuning. However, an image-processing model with weighted banks of Gabor filters performed similarly. Accounting for the data required the inclusion of a measurement-level population averaging mechanism that approximates how fMRI voxels locally average distinct neuronal tunings. Our study demonstrates the importance of comparing multiple models and of modeling the measurement process in computational neuroimaging.
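The sigmoidal ramp-tuning idea can be sketched directly from the norm-based face-space description: each model unit prefers a direction in face space, and its response is a sigmoid of the face vector's projection onto that direction, so responses grow with eccentricity along the preferred identity direction. The dimensionalities, gain, and unit count below are illustrative assumptions rather than the paper's fitted model:

```python
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(3)

# Hypothetical norm-based face space: 24 faces as vectors relative to the
# norm (origin), and 50 model units with random preferred directions.
n_dims, n_units, n_faces = 10, 50, 24
faces = rng.standard_normal((n_faces, n_dims))
directions = rng.standard_normal((n_units, n_dims))
directions /= np.linalg.norm(directions, axis=1, keepdims=True)

def sigmoid(x, gain=1.0, offset=0.0):
    return 1.0 / (1.0 + np.exp(-gain * (x - offset)))

# Sigmoidal ramp tuning: response = sigmoid of projection onto the
# unit's preferred direction.
responses = sigmoid(faces @ directions.T)        # faces x units

# The two data features each model in the study had to predict:
# a representational distance matrix and a regional-mean activation profile.
rdm = pdist(responses, metric="euclidean")
regional_mean = responses.mean(axis=1)
print(regional_mean.shape, rdm.shape)
```

The measurement-level averaging mechanism the abstract emphasizes would sit between `responses` and the predicted fMRI patterns, pooling distinct neuronal tunings within each voxel.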


Subject(s)
Brain/physiology; Facial Recognition/physiology; Magnetic Resonance Imaging/methods; Models, Neurological; Algorithms; Brain Mapping; Face/physiology; Humans; Principal Component Analysis
20.
PLoS Comput Biol ; 13(4): e1005508, 2017 04.
Article in English | MEDLINE | ID: mdl-28437426

ABSTRACT

Representational models specify how activity patterns in populations of neurons (or, more generally, in multivariate brain-activity measurements) relate to sensory stimuli, motor responses, or cognitive processes. In an experimental context, representational models can be defined as hypotheses about the distribution of activity profiles across experimental conditions. Currently, three different methods are being used to test such hypotheses: encoding analysis, pattern component modeling (PCM), and representational similarity analysis (RSA). Here we develop a common mathematical framework for understanding the relationship of these three methods, which share one core commonality: all three evaluate the second moment of the distribution of activity profiles, which determines the representational geometry, and thus how well any feature can be decoded from population activity. Using simulated data for three different experimental designs, we compare the power of the methods to adjudicate between competing representational models. PCM implements a likelihood-ratio test and therefore provides the most powerful test if its assumptions hold. However, the other two approaches, when conducted appropriately, can perform similarly. In encoding analysis, the linear model needs to be appropriately regularized, which effectively imposes a prior on the activity profiles. With such a prior, an encoding model specifies a well-defined distribution of activity profiles. In RSA, the unequal variances and statistical dependencies of the dissimilarity estimates need to be taken into account to reach near-optimal power in inference. The three methods render different aspects of the information explicit (e.g., single-response tuning in encoding analysis and population-response representational dissimilarity in RSA) and have specific advantages in terms of computational demands, ease of use, and extensibility. The three methods are properly construed as complementary components of a single data-analytical toolkit for understanding neural representations on the basis of multivariate brain-activity data.
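The shared quantity the abstract identifies, the second moment of the activity profiles, can be made concrete: for a condition-by-voxel pattern matrix U, the second moment matrix is G = U Uᵀ / P, and the squared Euclidean RDM distances used in RSA follow directly from G via d_ij = G_ii + G_jj - 2 G_ij. A small sketch verifying that identity numerically (the matrix sizes are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical pattern matrix U: 6 conditions x 200 voxels.
n_cond, n_vox = 6, 200
U = rng.standard_normal((n_cond, n_vox))

# Second moment matrix of the activity profiles, normalized by voxel count.
G = U @ U.T / n_vox

# Squared distances derived from G: d_ij = G_ii + G_jj - 2 * G_ij.
d_from_G = np.diag(G)[:, None] + np.diag(G)[None, :] - 2 * G

# The same squared distances computed directly from the patterns.
diff = U[:, None, :] - U[None, :, :]
d_direct = (diff ** 2).sum(axis=-1) / n_vox

print(np.allclose(d_from_G, d_direct))  # prints True
```

This is why encoding analysis, PCM, and RSA can be analyzed in one framework: each works with a different projection of the same G.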


Subject(s)
Brain/physiology; Models, Neurological; Neurons/physiology; Algorithms; Brain/cytology; Computational Biology; Linear Models; Magnetic Resonance Imaging