Results 1 - 20 of 40
1.
J Neurosci ; 43(10): 1731-1741, 2023 03 08.
Article in English | MEDLINE | ID: mdl-36759190

ABSTRACT

Deep neural networks (DNNs) are promising models of the cortical computations supporting human object recognition. However, despite their ability to explain a significant portion of variance in neural data, the agreement between models and brain representational dynamics is far from perfect. We address this issue by asking which representational features are currently unaccounted for in neural time series data, estimated for multiple areas of the ventral stream via source-reconstructed magnetoencephalography data acquired in human participants (nine females, six males) during object viewing. We focus on the ability of visuo-semantic models, consisting of human-generated labels of object features and categories, to explain variance beyond the explanatory power of DNNs alone. We report a gradual reversal in the relative importance of DNN versus visuo-semantic features as ventral-stream object representations unfold over space and time. Although lower-level visual areas are better explained by DNN features starting early in time (at 66 ms after stimulus onset), higher-level cortical dynamics are best accounted for by visuo-semantic features starting later in time (at 146 ms after stimulus onset). Among the visuo-semantic features, object parts and basic categories drive the advantage over DNNs. These results show that a significant component of the variance unexplained by DNNs in higher-level cortical dynamics is structured and can be explained by readily nameable aspects of the objects. We conclude that current DNNs fail to fully capture dynamic representations in higher-level human visual cortex and suggest a path toward more accurate models of ventral-stream computations. SIGNIFICANCE STATEMENT When we view objects such as faces and cars in our visual environment, their neural representations dynamically unfold over time at a millisecond scale. These dynamics reflect the cortical computations that support fast and robust object recognition.
DNNs have emerged as a promising framework for modeling these computations but cannot yet fully account for the neural dynamics. Using magnetoencephalography data acquired in human observers during object viewing, we show that readily nameable aspects of objects, such as 'eye', 'wheel', and 'face', can account for variance in the neural dynamics over and above DNNs. These findings suggest that DNNs and humans may in part rely on different object features for visual recognition and provide guidelines for model improvement.
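The variance-partitioning logic this abstract describes (how much variance visuo-semantic features explain beyond DNN features) can be sketched with simulated data. All dimensions, predictors, and noise levels below are illustrative assumptions, not the study's:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated stand-ins: DNN-feature predictors, visuo-semantic predictors,
# and a neural response y that depends on both.
n = 200
dnn = rng.normal(size=(n, 5))          # hypothetical DNN feature predictors
sem = rng.normal(size=(n, 3))          # hypothetical visuo-semantic predictors
y = dnn @ rng.normal(size=5) + sem @ rng.normal(size=3) + rng.normal(size=n)

def r2(X, y):
    """Ordinary least squares R^2 with an intercept column."""
    X = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_dnn = r2(dnn, y)
r2_full = r2(np.column_stack([dnn, sem]), y)
unique_sem = r2_full - r2_dnn   # variance explained beyond DNN features alone
```

A positive `unique_sem` is the signature the study looks for: structured variance in the neural data that the DNN predictors leave unexplained.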


Subject(s)
Pattern Recognition, Visual; Semantics; Male; Female; Humans; Neural Networks, Computer; Visual Perception; Brain; Brain Mapping/methods; Magnetic Resonance Imaging/methods
2.
J Neurosci ; 43(3): 484-500, 2023 01 18.
Article in English | MEDLINE | ID: mdl-36535769

ABSTRACT

Drawings offer a simple and efficient way to communicate meaning. While line drawings capture only coarsely how objects look in reality, we still perceive them as resembling real-world objects. Previous work has shown that this perceived similarity is mirrored by shared neural representations for drawings and natural images, which suggests that similar mechanisms underlie the recognition of both. However, other work has proposed that representations of drawings and natural images become similar only after substantial processing has taken place, suggesting distinct mechanisms. To arbitrate between those alternatives, we measured brain responses resolved in space and time using fMRI and MEG, respectively, while human participants (female and male) viewed images of objects depicted as photographs, line drawings, or sketch-like drawings. Using multivariate decoding, we demonstrate that object category information emerged similarly fast and across overlapping regions in occipital, ventral-temporal, and posterior parietal cortex for all types of depiction, yet with smaller effects at higher levels of visual abstraction. In addition, cross-decoding between depiction types revealed strong generalization of object category information from early processing stages on. Finally, by combining fMRI and MEG data using representational similarity analysis, we found that visual information traversed similar processing stages for all types of depiction, yet with an overall stronger representation for photographs. Together, our results demonstrate broad commonalities in the neural dynamics of object recognition across types of depiction, thus providing clear evidence for shared neural mechanisms underlying recognition of natural object images and abstract drawings. SIGNIFICANCE STATEMENT When we see a line drawing, we effortlessly recognize it as an object in the world despite its simple and abstract style.
Here we asked to what extent this correspondence in perception is reflected in the brain. To answer this question, we measured how neural processing of objects depicted as photographs and line drawings with varying levels of detail (from natural images to abstract line drawings) evolves over space and time. We find broad commonalities in the spatiotemporal dynamics and the neural representations underlying the perception of photographs and even abstract drawings. These results indicate a shared basic mechanism supporting recognition of drawings and natural images.
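The cross-decoding logic mentioned above (train a classifier on one depiction type, test it on another) can be sketched as follows. A simple nearest-centroid classifier stands in for the study's actual decoder; all patterns are simulated:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated response patterns for 2 object categories in two depiction
# types (photographs, drawings) that share the same category structure.
n_per, n_feat = 50, 20
centroids = rng.normal(size=(2, n_feat))            # shared category means

def simulate(noise):
    X = np.vstack([centroids[c] + noise * rng.normal(size=(n_per, n_feat))
                   for c in (0, 1)])
    y = np.repeat([0, 1], n_per)
    return X, y

X_photo, y_photo = simulate(noise=1.0)
X_draw, y_draw = simulate(noise=1.0)

# Fit category centroids on photographs ...
means = np.stack([X_photo[y_photo == c].mean(axis=0) for c in (0, 1)])
# ... and classify drawings (cross-decoding across depiction types)
dists = np.linalg.norm(X_draw[:, None, :] - means[None, :, :], axis=2)
cross_acc = (dists.argmin(axis=1) == y_draw).mean()
```

Above-chance `cross_acc` indicates that category information generalizes across depiction types, i.e., a shared representational format.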


Subject(s)
Pattern Recognition, Visual; Visual Perception; Humans; Male; Female; Pattern Recognition, Visual/physiology; Photic Stimulation/methods; Visual Perception/physiology; Magnetic Resonance Imaging/methods; Parietal Lobe/physiology; Brain Mapping/methods
3.
J Cogn Neurosci ; 35(11): 1879-1897, 2023 11 01.
Article in English | MEDLINE | ID: mdl-37590093

ABSTRACT

Humans effortlessly make quick and accurate perceptual decisions about the nature of their immediate visual environment, such as the category of the scene they face. Previous research has revealed a rich set of cortical representations potentially underlying this feat. However, it remains unknown which of these representations are suitably formatted for decision-making. Here, we approached this question empirically and computationally, using neuroimaging and computational modeling. For the empirical part, we collected EEG data and RTs from human participants during a scene categorization task (natural vs. man-made). We then related EEG data to behavior using a multivariate extension of signal detection theory. We observed a correlation between neural data and behavior specifically between ∼100 msec and ∼200 msec after stimulus onset, suggesting that the neural scene representations in this time period are suitably formatted for decision-making. For the computational part, we evaluated a recurrent convolutional neural network (RCNN) as a model of brain and behavior. Unifying our previous observations in an image-computable model, the RCNN predicted well the neural representations, the behavioral scene categorization data, and the relationship between them. Our results identify and computationally characterize the neural and behavioral correlates of scene categorization in humans.
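A loose, simulated stand-in for the brain-behavior linkage described above: a linear readout of neural patterns yields a per-trial "decision evidence" value, which is then correlated with reaction times (the study's multivariate signal-detection approach is richer than this sketch; the readout weights, slope, and noise are assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

# Per-trial decision evidence from an assumed linear readout of EEG
# patterns; stronger evidence should predict faster responses.
n, n_feat = 300, 10
w = rng.normal(size=n_feat)                          # assumed readout weights
X = rng.normal(size=(n, n_feat))                     # simulated trial patterns
evidence = np.abs(X @ w)                             # distance from boundary
rt = 600 - 20 * evidence + 30 * rng.normal(size=n)   # simulated RTs (ms)

# Pearson correlation between neural evidence and behavior
r = np.corrcoef(evidence, rt)[0, 1]
```

A reliably negative `r` at a given timepoint is the kind of result that marks neural representations as suitably formatted for decision-making.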


Subject(s)
Brain; Pattern Recognition, Visual; Humans; Photic Stimulation/methods; Brain/diagnostic imaging; Brain Mapping/methods
4.
Neuroimage ; 272: 120053, 2023 05 15.
Article in English | MEDLINE | ID: mdl-36966853

ABSTRACT

Spatial attention helps us to efficiently localize objects in cluttered environments. However, the processing stage at which spatial attention modulates object location representations remains unclear. Here we investigated this question by identifying processing stages in time and space in an EEG and an fMRI experiment, respectively. As both object location representations and attentional effects have been shown to depend on the background on which objects appear, we included object background as an experimental factor. During the experiments, human participants viewed images of objects appearing in different locations on blank or cluttered backgrounds while performing a task either at fixation or in the periphery to direct their covert spatial attention away from or towards the objects. We used multivariate classification to assess object location information. Consistent across the EEG and fMRI experiments, we show that spatial attention modulated location representations during late processing stages (>150 ms, in middle and high ventral visual stream areas) independent of background condition. Our results clarify the processing stage at which attention modulates object location representations in the ventral visual stream and show that attentional modulation is a cognitive process separate from recurrent processes related to the processing of objects on cluttered backgrounds.
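The multivariate classification of object location used here can be sketched with simulated patterns. A nearest-centroid classifier stands in for the study's decoder; feature counts and signal strength are assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

# Classify object location (four positions) from simulated multivariate
# response patterns, with independent train and test sets.
n_per, n_feat, n_loc = 40, 30, 4
loc_means = 1.5 * rng.normal(size=(n_loc, n_feat))   # location-specific patterns

def simulate():
    X = np.vstack([loc_means[c] + rng.normal(size=(n_per, n_feat))
                   for c in range(n_loc)])
    y = np.repeat(np.arange(n_loc), n_per)
    return X, y

X_train, y_train = simulate()
X_test, y_test = simulate()

# Nearest-centroid readout of object location
means = np.stack([X_train[y_train == c].mean(axis=0) for c in range(n_loc)])
dists = np.linalg.norm(X_test[:, None, :] - means[None, :, :], axis=2)
location_acc = (dists.argmin(axis=1) == y_test).mean()
```

In the study, the analogous accuracy would be compared between attended and unattended conditions at each processing stage; chance here is 0.25.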


Subject(s)
Visual Cortex; Humans; Attention; Magnetic Resonance Imaging; Visual Perception; Pattern Recognition, Visual
5.
PLoS Comput Biol ; 18(2): e1009837, 2022 02.
Article in English | MEDLINE | ID: mdl-35120139

ABSTRACT

Abstract conceptual representations are critical for human cognition. Despite their importance, key properties of these representations remain poorly understood. Here, we used computational models of distributional semantics to predict multivariate fMRI activity patterns during the activation and contextualization of abstract concepts. We devised a task in which participants had to embed abstract nouns into a story that they developed around a given background context. We found that representations in inferior parietal cortex were predicted by concept similarities emerging in models of distributional semantics. By constructing different model families, we reveal the models' learning trajectories and delineate how abstract and concrete training materials contribute to the formation of brain-like representations. These results inform theories about the format and emergence of abstract conceptual representations in the human brain.
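Predicting brain activity from distributional semantics typically works via representational similarity analysis (RSA): a model dissimilarity matrix from word vectors is correlated with a neural dissimilarity matrix. A minimal sketch with simulated embeddings and voxel patterns (all sizes and the noise level are assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated word embeddings and brain patterns that inherit their geometry.
n_concepts, dim, n_vox = 12, 50, 100
W = rng.normal(size=(n_concepts, dim))                     # assumed embeddings
brain = W @ rng.normal(size=(dim, n_vox)) \
        + 0.5 * rng.normal(size=(n_concepts, n_vox))       # noisy voxel patterns

def rdm(X):
    """Condition-by-condition dissimilarity (1 - Pearson correlation)."""
    return 1 - np.corrcoef(X)

def upper(M):
    i, j = np.triu_indices(len(M), k=1)
    return M[i, j]

def spearman(a, b):
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    return np.corrcoef(ra, rb)[0, 1]

# RSA: rank-correlate the model RDM with the neural RDM
rsa_r = spearman(upper(rdm(W)), upper(rdm(brain)))
```

A reliably positive `rsa_r` in a region (here, inferior parietal cortex) indicates that the model's similarity structure is mirrored in the neural patterns.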


Subject(s)
Brain/physiology; Concept Formation/physiology; Semantics; Humans; Magnetic Resonance Imaging
6.
Neuroimage ; 264: 119754, 2022 12 01.
Article in English | MEDLINE | ID: mdl-36400378

ABSTRACT

The human brain achieves visual object recognition through multiple stages of linear and nonlinear transformations operating at a millisecond scale. To predict and explain these rapid transformations, computational neuroscientists employ machine learning modeling techniques. However, state-of-the-art models require massive amounts of data to properly train, and to the present day there is a lack of vast brain datasets which extensively sample the temporal dynamics of visual object recognition. Here we collected a large and rich dataset of high temporal resolution EEG responses to images of objects on a natural background. This dataset includes 10 participants, each with 82,160 trials spanning 16,740 image conditions. Through computational modeling we established the quality of this dataset in five ways. First, we trained linearizing encoding models that successfully synthesized the EEG responses to arbitrary images. Second, we correctly identified the recorded EEG data image conditions in a zero-shot fashion, using EEG synthesized responses to hundreds of thousands of candidate image conditions. Third, we show that both the high number of conditions as well as the trial repetitions of the EEG dataset contribute to the trained models' prediction accuracy. Fourth, we built encoding models whose predictions well generalize to novel participants. Fifth, we demonstrate full end-to-end training of randomly initialized DNNs that output EEG responses for arbitrary input images. We release this dataset as a tool to foster research in visual neuroscience and computer vision.
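The first two validation steps above (a linearizing encoding model that synthesizes EEG responses, then zero-shot identification of image conditions by correlating measured against synthesized responses) can be sketched with simulated data. Ridge regression is an assumed choice of linearizing model; all sizes and noise levels are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulated image features and EEG responses generated by a linear mapping.
n_train, n_test, n_feat, n_chan = 500, 50, 40, 17
B_true = rng.normal(size=(n_feat, n_chan))
F_train = rng.normal(size=(n_train, n_feat))
F_test = rng.normal(size=(n_test, n_feat))
E_train = F_train @ B_true + rng.normal(size=(n_train, n_chan))
E_test = F_test @ B_true + rng.normal(size=(n_test, n_chan))

# Linearizing encoding model, ridge solution: B = (F'F + lam*I)^-1 F'E
lam = 1.0
B = np.linalg.solve(F_train.T @ F_train + lam * np.eye(n_feat),
                    F_train.T @ E_train)
E_pred = F_test @ B        # synthesized EEG responses for unseen conditions

# Zero-shot identification: match each measured test response to the
# candidate synthesized response it correlates with most strongly.
C = np.corrcoef(E_test, E_pred)[:n_test, n_test:]
ident_acc = (C.argmax(axis=1) == np.arange(n_test)).mean()
```

High `ident_acc` (chance is 1/50 here) mirrors the paper's demonstration that held-out image conditions can be identified from synthesized responses alone.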


Subject(s)
Brain Mapping; Visual Perception; Humans; Visual Perception/physiology; Machine Learning; Brain/physiology; Electroencephalography
7.
J Neurophysiol ; 127(6): 1622-1628, 2022 06 01.
Article in English | MEDLINE | ID: mdl-35583972

ABSTRACT

Humans can effortlessly categorize objects, both when they are conveyed through visual images and spoken words. To resolve the neural correlates of object categorization, studies have so far primarily focused on the visual modality. It is therefore still unclear how the brain extracts categorical information from auditory signals. In the current study, we used EEG (n = 48) and time-resolved multivariate pattern analysis to investigate 1) the time course with which object category information emerges in the auditory modality and 2) how the representational transition from individual object identification to category representation compares between the auditory modality and the visual modality. Our results show that 1) auditory object category representations can be reliably extracted from EEG signals and 2) a similar representational transition occurs in the visual and auditory modalities, where an initial representation at the individual-object level is followed by a subsequent representation of the objects' category membership. Altogether, our results suggest an analogous hierarchy of information processing across sensory channels. However, there was no convergence toward conceptual modality-independent representations, thus providing no evidence for a shared supramodal code. NEW & NOTEWORTHY Object categorization operates on inputs from different sensory modalities, such as vision and audition. This process was mainly studied in vision. Here, we explore auditory object categorization. We show that auditory object category representations can be reliably extracted from EEG signals and, similar to vision, auditory representations initially carry information about individual objects, which is followed by a subsequent representation of the objects' category membership.
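Time-resolved multivariate pattern analysis, as used here, decodes the stimulus label separately at every timepoint of the epoch, producing an accuracy-over-time curve. A minimal simulated sketch (trial counts, channel counts, onset latency, and the nearest-centroid decoder are all assumptions):

```python
import numpy as np

rng = np.random.default_rng(6)

# Simulated EEG epochs: category information appears only after `onset`.
n_trials, n_chan, n_times, onset = 120, 32, 60, 20
y = np.repeat([0, 1], n_trials // 2)
signal = rng.normal(size=n_chan)                 # category-specific pattern
X = rng.normal(size=(n_trials, n_chan, n_times))
X[y == 1, :, onset:] += signal[:, None]          # info present from `onset` on

def decode_timepoint(Xt, y):
    """Split-half cross-validated nearest-centroid decoding accuracy."""
    idx = np.arange(len(y))
    accs = []
    for test in (idx[::2], idx[1::2]):
        train = np.setdiff1d(idx, test)
        means = np.stack([Xt[train][y[train] == c].mean(axis=0) for c in (0, 1)])
        d = np.linalg.norm(Xt[test][:, None, :] - means[None, :, :], axis=2)
        accs.append((d.argmin(axis=1) == y[test]).mean())
    return float(np.mean(accs))

acc_t = np.array([decode_timepoint(X[:, :, t], y) for t in range(n_times)])
```

The latency at which `acc_t` rises above chance is the kind of quantity used to compare when category information emerges in the auditory versus visual modality.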


Subject(s)
Brain Mapping; Brain; Auditory Perception; Cognition; Humans; Pattern Recognition, Visual; Photic Stimulation/methods; Vision, Ocular
8.
Cereb Cortex ; 31(12): 5664-5675, 2021 10 22.
Article in English | MEDLINE | ID: mdl-34291294

ABSTRACT

Brain decoding can predict visual perception from non-invasive electrophysiological data by combining information across multiple channels. However, decoding methods typically conflate the composite and distributed neural processes underlying perception that are together present in the signal, making it unclear what specific aspects of the neural computations involved in perception are reflected in this type of macroscale data. Using MEG data recorded while participants viewed a large number of naturalistic images, we analytically decomposed the brain signal into its oscillatory and non-oscillatory components, and used this decomposition to show that there are at least three dissociable stimulus-specific aspects to the brain data: a slow, non-oscillatory component, reflecting the temporally stable aspect of the stimulus representation; a global phase shift of the oscillation, reflecting the overall speed of processing of specific stimuli; and differential patterns of phase across channels, likely reflecting stimulus-specific computations. Further, we show that common cognitive interpretations of decoding analysis, in particular about how representations generalize across time, can benefit from acknowledging the multicomponent nature of the signal in the study of perception.
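The core move above (separating a slow, non-oscillatory component from an oscillatory one and reading out oscillatory phase) can be illustrated on a toy signal. This is a much simplified stand-in for the paper's analytic decomposition; the sampling rate, frequency, and moving-average detrending are assumptions:

```python
import numpy as np

# Toy channel: slow drift plus a 10 Hz oscillation.
fs = 200                                   # sampling rate (Hz), assumed
t = np.arange(int(fs * 2.0)) / fs          # 2 s of signal
x = 0.8 * t + np.sin(2 * np.pi * 10 * t + 0.7)

# Slow, non-oscillatory component: moving average whose window spans one
# oscillation cycle, which nulls the 10 Hz rhythm while keeping the drift.
win = fs // 10
slow = np.convolve(x, np.ones(win) / win, mode="same")
osc = x - slow                             # oscillatory residual

def analytic(sig):
    """Analytic signal via FFT: zero negative frequencies, double positive."""
    n = len(sig)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.fft.ifft(np.fft.fft(sig) * h)

phase = np.angle(analytic(osc))            # instantaneous phase of the rhythm
```

In the study, stimulus-specific information is then assessed separately in the slow component, the global phase shift, and the across-channel phase patterns.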


Subject(s)
Brain; Visual Perception; Brain/physiology; Head; Humans; Photic Stimulation/methods; Visual Perception/physiology
9.
Proc Natl Acad Sci U S A ; 116(43): 21854-21863, 2019 10 22.
Article in English | MEDLINE | ID: mdl-31591217

ABSTRACT

The human visual system is an intricate network of brain regions that enables us to recognize the world around us. Despite its abundant lateral and feedback connections, object processing is commonly viewed and studied as a feedforward process. Here, we measure and model the rapid representational dynamics across multiple stages of the human ventral stream using time-resolved brain imaging and deep learning. We observe substantial representational transformations during the first 300 ms of processing within and across ventral-stream regions. Categorical divisions emerge in sequence, cascading forward and in reverse across regions, and Granger causality analysis suggests bidirectional information flow between regions. Finally, recurrent deep neural network models clearly outperform parameter-matched feedforward models in terms of their ability to capture the multiregion cortical dynamics. Targeted virtual cooling experiments on the recurrent deep network models further substantiate the importance of their lateral and top-down connections. These results establish that recurrent models are required to understand information processing in the human ventral stream.
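The Granger causality analysis mentioned above asks whether one region's past improves prediction of another region's present beyond that region's own past. A minimal lag-1 sketch on simulated time courses (the coefficients and single-lag model are assumptions; real analyses use more lags and statistics):

```python
import numpy as np

rng = np.random.default_rng(8)

# Two simulated region time courses where x drives y, but not vice versa.
n = 2000
x = np.zeros(n)
y = np.zeros(n)
ex, ey = rng.normal(size=n), rng.normal(size=n)
for t in range(1, n):
    x[t] = 0.5 * x[t - 1] + ex[t]
    y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 1] + ey[t]

def resid_var(target, predictors):
    """Residual variance of an OLS fit with intercept."""
    X = np.column_stack([np.ones(len(target))] + predictors)
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    return np.var(target - X @ beta)

# Granger index: log ratio of reduced-model to full-model residual variance
gc_x_to_y = np.log(resid_var(y[1:], [y[:-1]]) / resid_var(y[1:], [y[:-1], x[:-1]]))
gc_y_to_x = np.log(resid_var(x[1:], [x[:-1]]) / resid_var(x[1:], [x[:-1], y[:-1]]))
```

Comparing the two directed indices over time is how bidirectional (feedforward and feedback) information flow between ventral-stream regions is assessed.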


Subject(s)
Models, Neurological; Visual Perception/physiology; Adult; Deep Learning; Feedback, Sensory; Female; Humans; Magnetoencephalography; Nerve Net; Visual Pathways
10.
J Cogn Neurosci ; 34(1): 4-15, 2021 12 06.
Article in English | MEDLINE | ID: mdl-34705031

ABSTRACT

During natural vision, our brains are constantly exposed to complex, but regularly structured, environments. Real-world scenes are defined by typical part-whole relationships, where the meaning of the whole scene emerges from configurations of localized information present in individual parts of the scene. Such typical part-whole relationships suggest that information from individual scene parts is not processed independently, but that there are mutual influences between the parts and the whole during scene analysis. Here, we review recent research that used a straightforward, but effective approach to study such mutual influences: By dissecting scenes into multiple arbitrary pieces, these studies provide new insights into how the processing of whole scenes is shaped by their constituent parts and, conversely, how the processing of individual parts is determined by their role within the whole scene. We highlight three facets of this research: First, we discuss studies demonstrating that the spatial configuration of multiple scene parts has a profound impact on the neural processing of the whole scene. Second, we review work showing that cortical responses to individual scene parts are shaped by the context in which these parts typically appear within the environment. Third, we discuss studies demonstrating that missing scene parts are interpolated from the surrounding scene context. Bridging these findings, we argue that efficient scene processing relies on an active use of the scene's part-whole structure, where the visual brain matches scene inputs with internal models of what the world should look like.


Subject(s)
Brain; Pattern Recognition, Visual; Humans; Visual Perception
11.
Neuroimage ; 240: 118365, 2021 10 15.
Article in English | MEDLINE | ID: mdl-34233220

ABSTRACT

Looking for objects within complex natural environments is a task everybody performs multiple times each day. In this study, we explore how the brain uses the typical composition of real-world environments to efficiently solve this task. We recorded fMRI activity while participants performed two different categorization tasks on natural scenes. In the object task, they indicated whether the scene contained a person or a car, while in the scene task, they indicated whether the scene depicted an urban or a rural environment. Critically, each scene was presented in an "intact" way, preserving its coherent structure, or in a "jumbled" way, with information swapped across quadrants. In both tasks, participants' categorization was more accurate and faster for intact scenes. These behavioral benefits were accompanied by stronger responses to intact than to jumbled scenes across high-level visual cortex. To track the amount of object information in visual cortex, we correlated multi-voxel response patterns during the two categorization tasks with response patterns evoked by people and cars in isolation. We found that object information in object- and body-selective cortex was enhanced when the object was embedded in an intact, rather than a jumbled scene. However, this enhancement was only found in the object task: When participants instead categorized the scenes, object information did not differ between intact and jumbled scenes. Together, these results indicate that coherent scene structure facilitates the extraction of object information in a task-dependent way, suggesting that interactions between the object and scene processing pathways adaptively support behavioral goals.
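The object-information analysis above (correlating scene-evoked multi-voxel patterns with template patterns evoked by the objects in isolation) can be sketched with simulated patterns; voxel counts, signal strength, and the correlation-difference index are assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated template patterns evoked by a person and a car in isolation.
n_vox = 150
template_person = rng.normal(size=n_vox)
template_car = rng.normal(size=n_vox)

# Simulated response to a scene containing a person, plus background noise.
scene_with_person = 0.8 * template_person + rng.normal(size=n_vox)

# Pattern correlations with each isolated-object template
r_person = np.corrcoef(scene_with_person, template_person)[0, 1]
r_car = np.corrcoef(scene_with_person, template_car)[0, 1]
object_info = r_person - r_car   # higher when the embedded object is recoverable
```

Comparing this index between intact and jumbled scenes, and between tasks, is the logic behind the study's finding of task-dependent enhancement.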


Subject(s)
Magnetic Resonance Imaging/methods; Pattern Recognition, Visual/physiology; Photic Stimulation/methods; Psychomotor Performance/physiology; Visual Cortex/diagnostic imaging; Visual Cortex/physiology; Adult; Female; Humans; Male; Multivariate Analysis; Visual Perception/physiology; Young Adult
12.
Neuroimage ; 239: 118314, 2021 10 01.
Article in English | MEDLINE | ID: mdl-34175428

ABSTRACT

Contextual information triggers predictions about the content ("what") of environmental stimuli to update an internal generative model of the surrounding world. However, visual information dynamically changes across time, and temporal predictability ("when") may influence the impact of internal predictions on visual processing. In this magnetoencephalography (MEG) study, we investigated how the processing of feature-specific information ("what") is affected by temporal predictability ("when"). Participants (N = 16) were presented with four consecutive Gabor patches (entrainers) with constant spatial frequency but variable orientation and temporal onset. A fifth target Gabor was presented after a longer delay and with a higher or lower spatial frequency, which participants had to judge. We compared the neural responses to entrainers whose Gabor orientation could or could not be temporally predicted along the entrainer sequence, and with inter-entrainer timing that was constant (predictable) or variable (unpredictable). We observed suppression of evoked neural responses in the visual cortex for predictable stimuli. Interestingly, we found that temporal uncertainty increased expectation suppression. This suggests that in temporally uncertain scenarios the neurocognitive system invests fewer resources in integrating bottom-up information. Multivariate pattern analysis showed that predictable visual features could be decoded from neural responses. Temporal uncertainty did not affect decoding accuracy for early visual responses, with the feature specificity of early visual neural activity preserved across conditions. However, decoding accuracy was less sustained over time for temporally jittered than for isochronous predictable visual stimuli.
These findings converge to suggest that the cognitive system processes visual features of temporally predictable stimuli in higher detail, while processing temporally uncertain stimuli may rely more heavily on abstract internal expectations.


Subject(s)
Anticipation, Psychological/physiology; Magnetoencephalography; Photic Stimulation; Time; Uncertainty; Visual Cortex/physiology; Visual Perception/physiology; Adult; Evoked Potentials/physiology; Female; Humans; Male; Multivariate Analysis; Reaction Time; Young Adult
13.
Neuroimage ; 219: 117045, 2020 10 01.
Article in English | MEDLINE | ID: mdl-32540354

ABSTRACT

Real-world environments are extremely rich in visual information. At any given moment in time, only a fraction of this information is available to the eyes and the brain, rendering naturalistic vision a collection of incomplete snapshots. Previous research suggests that in order to successfully contextualize this fragmented information, the visual system sorts inputs according to spatial schemata, that is, knowledge about the typical composition of the visual world. Here, we used a large set of 840 different natural scene fragments to investigate whether this sorting mechanism can operate across the diverse visual environments encountered during real-world vision. We recorded brain activity using electroencephalography (EEG) while participants viewed incomplete scene fragments at fixation. Using representational similarity analysis on the EEG data, we tracked the fragments' cortical representations across time. We found that the fragments' typical vertical location within the environment (top or bottom) predicted their cortical representations, indexing a sorting of information according to spatial schemata. The fragments' cortical representations were most strongly organized by their vertical location at around 200 ms after image onset, suggesting rapid perceptual sorting of information according to spatial schemata. In control analyses, we show that this sorting is flexible with respect to visual features: it is neither explained by commonalities between visually similar indoor and outdoor scenes, nor by the feature organization emerging from a deep neural network trained on scene categorization. Demonstrating such a flexible sorting across a wide range of visually diverse scenes suggests a contextualization mechanism suitable for complex and variable real-world environments.


Subject(s)
Brain/physiology; Pattern Recognition, Visual/physiology; Visual Pathways/physiology; Visual Perception/physiology; Adult; Electroencephalography; Female; Humans; Male; Photic Stimulation; Young Adult
14.
J Neurophysiol ; 124(1): 145-151, 2020 07 01.
Article in English | MEDLINE | ID: mdl-32519577

ABSTRACT

In everyday life, our visual surroundings are not arranged randomly but structured in predictable ways. Although previous studies have shown that the visual system is sensitive to such structural regularities, it remains unclear whether the presence of an intact structure in a scene also facilitates the cortical analysis of the scene's categorical content. To address this question, we conducted an EEG experiment during which participants viewed natural scene images that were either "intact" (with their quadrants arranged in typical positions) or "jumbled" (with their quadrants arranged into atypical positions). We then used multivariate pattern analysis to decode the scenes' category from the EEG signals (e.g., whether the participant had seen a church or a supermarket). The category of intact scenes could be decoded rapidly within the first 100 ms of visual processing. Critically, within 200 ms of processing, category decoding was more pronounced for the intact scenes compared with the jumbled scenes, suggesting that the presence of real-world structure facilitates the extraction of scene category information. No such effect was found when the scenes were presented upside down, indicating that the facilitation of neural category information is indeed linked to a scene's adherence to typical real-world structure rather than to differences in visual features between intact and jumbled scenes. Our results demonstrate that early stages of categorical analysis in the visual system exhibit tuning to the structure of the world that may facilitate the rapid extraction of behaviorally relevant information from rich natural environments.NEW & NOTEWORTHY Natural scenes are structured, with different types of information appearing in predictable locations. Here, we use EEG decoding to show that the visual brain uses this structure to efficiently analyze scene content. During early visual processing, the category of a scene (e.g., a church vs. 
a supermarket) could be more accurately decoded from EEG signals when the scene adhered to its typical spatial structure compared with when it did not.


Subject(s)
Cerebral Cortex/physiology; Electroencephalography; Functional Neuroimaging; Pattern Recognition, Visual/physiology; Space Perception/physiology; Adult; Female; Humans; Male; Young Adult
15.
Hum Brain Mapp ; 41(5): 1286-1295, 2020 04 01.
Article in English | MEDLINE | ID: mdl-31758632

ABSTRACT

Natural scenes are inherently structured, with meaningful objects appearing in predictable locations. Human vision is tuned to this structure: When scene structure is purposefully jumbled, perception is strongly impaired. Here, we tested how such perceptual effects are reflected in neural sensitivity to scene structure. During separate fMRI and EEG experiments, participants passively viewed scenes whose spatial structure (i.e., the position of scene parts) and categorical structure (i.e., the content of scene parts) could be intact or jumbled. Using multivariate decoding, we show that spatial (but not categorical) scene structure profoundly impacts cortical processing: Scene-selective responses in occipital and parahippocampal cortices (fMRI) and after 255 ms (EEG) accurately differentiated between spatially intact and jumbled scenes. Importantly, this differentiation was more pronounced for upright than for inverted scenes, indicating genuine sensitivity to spatial structure rather than sensitivity to low-level attributes. Our findings suggest that visual scene analysis is tightly linked to the spatial structure of our natural environments. This link between cortical processing and scene structure may be crucial for rapidly parsing naturalistic visual inputs.


Subject(s)
Cerebral Cortex/growth & development; Cerebral Cortex/physiology; Visual Perception/physiology; Adult; Brain Mapping; Electroencephalography; Female; Humans; Magnetic Resonance Imaging; Male; Occipital Lobe; Parahippocampal Gyrus/diagnostic imaging; Parahippocampal Gyrus/physiology; Photic Stimulation; Space Perception; Young Adult
16.
Neuroimage ; 194: 12-24, 2019 07 01.
Article in English | MEDLINE | ID: mdl-30894333

ABSTRACT

The degree to which we perceive real-world objects as similar or dissimilar structures our perception and guides categorization behavior. Here, we investigated the neural representations enabling perceived similarity using behavioral judgments, fMRI and MEG. As different object dimensions co-occur and partly correlate, to understand the relationship between perceived similarity and brain activity it is necessary to assess the unique role of multiple object dimensions. We thus behaviorally assessed perceived object similarity in relation to shape, function, color and background. We then used representational similarity analyses to relate these behavioral judgments to brain activity. We observed a link between each object dimension and representations in visual cortex. These representations emerged rapidly within 200 ms of stimulus onset. Assessing the unique role of each object dimension revealed partly overlapping and distributed representations: while color-related representations distinctly preceded shape-related representations both in the processing hierarchy of the ventral visual pathway and in time, several dimensions were linked to high-level ventral visual cortex. Further analysis singled out the shape dimension as neither fully accounted for by supra-category membership nor by a deep neural network trained on object categorization. Together, our results comprehensively characterize the relationship between perceived similarity of key object dimensions and neural activity.


Subject(s)
Pattern Recognition, Visual/physiology; Visual Cortex/physiology; Adult; Brain Mapping/methods; Female; Humans; Male
17.
Neuroimage ; 179: 252-262, 2018 10 01.
Article in English | MEDLINE | ID: mdl-29886145

ABSTRACT

Multivariate decoding methods applied to neuroimaging data have become the standard in cognitive neuroscience for unravelling statistical dependencies between brain activation patterns and experimental conditions. The current challenge is to demonstrate that decodable information is in fact used by the brain itself to guide behaviour. Here we demonstrate a promising approach to do so in the context of neural activation during object perception and categorisation behaviour. We first localised decodable information about visual objects in the human brain using a multivariate decoding analysis and a spatially-unbiased searchlight approach. We then related brain activation patterns to behaviour by testing whether the classifier used for decoding can be used to predict behaviour. We show that while there is decodable information about visual category throughout the visual brain, only a subset of those representations predicted categorisation behaviour; these behaviourally relevant representations were strongest in anterior ventral temporal cortex. Our results have important implications for the interpretation of neuroimaging studies, highlight the importance of relating decoding results to behaviour, and suggest a suitable methodology towards this aim.


Subject(s)
Brain Mapping/methods ; Brain/physiology ; Image Processing, Computer-Assisted/methods ; Pattern Recognition, Visual/physiology ; Humans ; Magnetic Resonance Imaging ; Multivariate Analysis ; Photic Stimulation
18.
Neuroimage ; 176: 372-379, 2018 08 01.
Article in English | MEDLINE | ID: mdl-29733954

ABSTRACT

In everyday visual environments, objects are non-uniformly distributed across visual space. Many objects preferentially occupy particular retinotopic locations: for example, lamps more often fall into the upper visual field, whereas carpets more often fall into the lower visual field. Long-term experience with natural environments prompts the hypothesis that the visual system is tuned to such retinotopic object locations. A key prediction is that typically positioned objects should be coded more efficiently. To test this prediction, we recorded electroencephalography (EEG) while participants viewed briefly presented objects appearing in their typical locations (e.g., an airplane in the upper visual field) or in atypical locations (e.g., an airplane in the lower visual field). Multivariate pattern analysis applied to the EEG data revealed that object classification depended on positional regularities: objects were classified more accurately when positioned typically rather than atypically, from as early as 140 ms after stimulus onset, suggesting that relatively early stages of object processing are tuned to typical retinotopic locations. Our results confirm the prediction that long-term experience with objects occurring at specific locations leads to enhanced perceptual processing when these objects appear in their typical locations. This may indicate a neural mechanism for efficient natural scene processing, where a large number of typically positioned objects needs to be processed.
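The time-resolved multivariate pattern analysis described above can be sketched as training and cross-validating a classifier at every time point, producing a decoding-accuracy time course. All data below are simulated, and the channel counts, sampling, and onset are illustrative assumptions only.

```python
# Hedged sketch of time-resolved MVPA on (simulated) EEG data.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_trials, n_channels, n_times = 60, 32, 50     # e.g., 50 samples post-onset
labels = np.repeat([0, 1], n_trials // 2)      # typical vs. atypical position
eeg = rng.normal(size=(n_trials, n_channels, n_times))
eeg[labels == 1, :8, 14:] += 0.8               # signal emerging at sample 14

# Cross-validated decoding accuracy at each time point.
accuracy = np.array([
    cross_val_score(LinearDiscriminantAnalysis(),
                    eeg[:, :, t], labels, cv=5).mean()
    for t in range(n_times)
])
print(f"mean late-window accuracy: {accuracy[20:].mean():.2f}")
```

In the actual study, the onset latency of above-chance decoding (here, where the accuracy time course first exceeds chance) is the quantity compared between typical and atypical conditions.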


Subject(s)
Cerebral Cortex/physiology ; Electroencephalography/methods ; Pattern Recognition, Visual/physiology ; Space Perception/physiology ; Visual Fields/physiology ; Adult ; Female ; Humans ; Male ; Time Factors ; Young Adult
19.
J Neurophysiol ; 120(2): 848-853, 2018 08 01.
Article in English | MEDLINE | ID: mdl-29766762

ABSTRACT

Natural environments consist of multiple objects, many of which repeatedly occupy similar locations within a scene. For example, hats are seen on people's heads, while shoes are most often seen close to the ground. Such positional regularities bias the distribution of objects across the visual field: hats are more often encountered in the upper visual field, while shoes are more often encountered in the lower visual field. Here we tested the hypothesis that typical visual field locations of objects facilitate cortical processing. We recorded functional MRI while participants viewed images of objects that were associated with upper or lower visual field locations. Using multivariate classification, we show that object information can be more successfully decoded from response patterns in object-selective lateral occipital cortex (LO) when the objects are presented in their typical location (e.g., shoe in the lower visual field) than when they are presented in an atypical location (e.g., shoe in the upper visual field). In a functional connectivity analysis, we relate this benefit to increased coupling between LO and early visual cortex, suggesting that typical object positioning facilitates information propagation across the visual hierarchy. Together, these results suggest that object representations in occipital visual cortex are tuned to the structure of natural environments. This tuning may support object perception in spatially structured environments. NEW & NOTEWORTHY In the real world, objects appear in predictable spatial locations. Hats, commonly appearing on people's heads, often fall into the upper visual field. Shoes, mostly appearing on people's feet, often fall into the lower visual field. Here we used functional MRI to demonstrate that such regularities facilitate cortical processing: objects encountered in their typical locations are coded more efficiently, which may allow us to effortlessly recognize objects in natural environments.


Subject(s)
Occipital Lobe/physiology ; Pattern Recognition, Visual/physiology ; Space Perception/physiology ; Visual Fields ; Adult ; Brain Mapping ; Female ; Humans ; Magnetic Resonance Imaging ; Male ; Photic Stimulation ; Visual Cortex/physiology ; Young Adult
20.
J Neurosci ; 34(36): 12155-67, 2014 Sep 03.
Article in English | MEDLINE | ID: mdl-25186759

ABSTRACT

Humans recognize faces and objects with high speed and accuracy regardless of their orientation. Recent studies have proposed that orientation invariance in face recognition involves an intermediate representation where neural responses are similar for mirror-symmetric views. Here, we used fMRI, multivariate pattern analysis, and computational modeling to investigate the neural encoding of faces and vehicles at different rotational angles. Corroborating previous studies, we demonstrate a representation of face orientation in the fusiform face-selective area (FFA). We go beyond these studies by showing that this representation is category-selective and tolerant to retinal translation. Critically, by controlling for low-level confounds, we found the representation of orientation in FFA to be compatible with a linear angle code. Aspects of mirror-symmetric coding cannot be ruled out when FFA mean activity levels are considered as a dimension of coding. Finally, we used a parametric family of computational models, involving a biased sampling of view-tuned neuronal clusters, to compare different face angle encoding models. The best fitting model exhibited a predominance of neuronal clusters tuned to frontal views of faces. In sum, our findings suggest a category-selective and monotonic code of face orientation in the human FFA, in line with primate electrophysiology studies that observed mirror-symmetric tuning of neural responses at higher stages of the visual system, beyond the putative homolog of human FFA.
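The model comparison at the heart of the abstract above, a linear angle code versus a mirror-symmetric code, can be sketched by building a candidate dissimilarity matrix for each code and asking which better explains a neural dissimilarity matrix. The angles and the simulated neural data below are placeholders, not the study's stimuli or measurements.

```python
# Hedged sketch of comparing a linear angle code against a mirror-symmetric
# code of face orientation. Neural data are simulated from a linear code.
import numpy as np
from scipy.stats import spearmanr

angles = np.array([-90, -45, 0, 45, 90])        # head rotation in degrees

# Linear angle code: dissimilarity grows with angular difference.
linear_rdm = np.abs(angles[:, None] - angles[None, :])

# Mirror-symmetric code: views at +/- the same angle are treated alike.
mirror_rdm = np.abs(np.abs(angles)[:, None] - np.abs(angles)[None, :])

# Simulated "neural" RDM: a linear code plus measurement noise.
rng = np.random.default_rng(4)
neural_rdm = linear_rdm + rng.normal(scale=10, size=linear_rdm.shape)
neural_rdm = (neural_rdm + neural_rdm.T) / 2    # keep it symmetric

iu = np.triu_indices(len(angles), k=1)          # compare upper triangles only
r_linear, _ = spearmanr(neural_rdm[iu], linear_rdm[iu])
r_mirror, _ = spearmanr(neural_rdm[iu], mirror_rdm[iu])
print(f"linear: {r_linear:.2f}, mirror-symmetric: {r_mirror:.2f}")
```

With data generated from a linear code, the linear model wins this comparison; the study's contribution is showing that FFA responses pattern this way once low-level confounds are controlled.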


Subject(s)
Face/anatomy &amp; histology ; Models, Neurological ; Pattern Recognition, Visual ; Rotation ; Visual Cortex/physiology ; Adult ; Female ; Humans ; Male ; Motor Vehicles