Results 1 - 20 of 20,431
1.
Nature ; 593(7859): 411-417, 2021 05.
Article in English | MEDLINE | ID: mdl-33883745

ABSTRACT

The ability to categorize sensory stimuli is crucial for an animal's survival in a complex environment. Memorizing categories instead of individual exemplars enables greater behavioural flexibility and is computationally advantageous. Neurons that show category selectivity have been found in several areas of the mammalian neocortex1-4, but the prefrontal cortex seems to have a prominent role4,5 in this context. Specifically, in primates that are extensively trained on a categorization task, neurons in the prefrontal cortex rapidly and flexibly represent learned categories6,7. However, how these representations first emerge in naive animals remains unexplored, leaving it unclear whether flexible representations are gradually built up as part of semantic memory or assigned more or less instantly during task execution8,9. Here we investigate the formation of a neuronal category representation throughout the entire learning process by repeatedly imaging individual cells in the mouse medial prefrontal cortex. We show that mice readily learn rule-based categorization and generalize to novel stimuli. Over the course of learning, neurons in the prefrontal cortex display distinct dynamics in acquiring category selectivity and are differentially engaged during a later switch in rules. A subset of neurons selectively and uniquely respond to categories and reflect generalization behaviour. Thus, a category representation in the mouse prefrontal cortex is gradually acquired during learning rather than recruited ad hoc. This gradual process suggests that neurons in the medial prefrontal cortex are part of a specific semantic memory for visual categories.
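
Illustrative sketch (not from the paper): one common way to quantify how category selectivity of a single neuron emerges across learning sessions is an ROC-based index. The session structure and response values below are synthetic placeholders.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def category_selectivity(responses_a, responses_b):
    """auROC-based selectivity: 0.5 = no preference, 1.0 = perfect separation."""
    responses = np.concatenate([responses_a, responses_b])
    labels = np.concatenate([np.ones(len(responses_a)), np.zeros(len(responses_b))])
    auc = roc_auc_score(labels, responses)
    return max(auc, 1 - auc)  # fold so that a preference for either category counts

# Hypothetical trial responses of one neuron across learning sessions: selectivity
# emerges gradually as responses to category A and category B separate.
for session, separation in enumerate([0.0, 0.3, 0.8, 1.5]):
    cat_a = rng.normal(1.0 + separation, 1.0, size=40)  # 40 category-A trials
    cat_b = rng.normal(1.0, 1.0, size=40)               # 40 category-B trials
    print(f"session {session}: selectivity = {category_selectivity(cat_a, cat_b):.2f}")
```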


Subject(s)
Learning/physiology , Models, Neurological , Pattern Recognition, Visual/physiology , Prefrontal Cortex/physiology , Animals , Female , Memory/physiology , Mice , Mice, Inbred C57BL , Neurons/physiology , Photic Stimulation , Prefrontal Cortex/cytology , Time Factors
2.
Proc Natl Acad Sci U S A ; 121(28): e2321346121, 2024 Jul 09.
Article in English | MEDLINE | ID: mdl-38954551

ABSTRACT

How does the brain process the faces of familiar people? Neuropsychological studies have argued for an area of the temporal pole (TP) linking faces with person identities, but magnetic susceptibility artifacts in this region have hampered its study with fMRI. Using data acquisition and analysis methods optimized to overcome this artifact, we identify a familiar face response in TP, reliably observed in individual brains. This area responds strongly to visual images of familiar faces over unfamiliar faces, objects, and scenes. However, TP did not just respond to images of faces, but also to a variety of high-level social cognitive tasks, including semantic, episodic, and theory of mind tasks. The response profile of TP contrasted with a nearby region of the perirhinal cortex (PR) that responded specifically to faces, but not to social cognition tasks. TP was functionally connected with a distributed network in the association cortex associated with social cognition, while PR was functionally connected with face-preferring areas of the ventral visual cortex. This work identifies a missing link in the human face processing system that specifically processes familiar faces, and is well placed to integrate visual information about faces with higher-order conceptual information about other people. The results suggest that separate streams for person and face processing reach anterior temporal areas positioned at the top of the cortical hierarchy.
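
Illustrative sketch (not from the paper): the connectivity contrast described here can, in simplified form, be computed as seed-based correlation between ROI time series. The seed and target regions and the synthetic time courses below are placeholders, not the study's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)
n_timepoints = 200

# Hypothetical preprocessed (z-scored) BOLD time series for a seed ROI (e.g., TP)
# and two target ROIs; real data would come from fMRI after nuisance regression.
shared_social = rng.normal(size=n_timepoints)              # latent "social network" signal
seed_tp = shared_social + 0.8 * rng.normal(size=n_timepoints)
targets = {
    "mPFC (social cognition)": shared_social + rng.normal(size=n_timepoints),
    "FFA (ventral visual)": rng.normal(size=n_timepoints),
}

def functional_connectivity(x, y):
    """Pearson correlation between two time series."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    return float(np.mean(x * y))

for name, ts in targets.items():
    print(f"seed TP to {name}: r = {functional_connectivity(seed_tp, ts):.2f}")
```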


Subject(s)
Magnetic Resonance Imaging , Temporal Lobe , Humans , Magnetic Resonance Imaging/methods , Temporal Lobe/physiology , Temporal Lobe/diagnostic imaging , Male , Female , Adult , Facial Recognition/physiology , Brain Mapping/methods , Recognition, Psychology/physiology , Face/physiology , Young Adult , Pattern Recognition, Visual/physiology
3.
Proc Natl Acad Sci U S A ; 120(9): e2214996120, 2023 02 28.
Article in English | MEDLINE | ID: mdl-36802419

ABSTRACT

Neurons throughout the primate inferior temporal (IT) cortex respond selectively to visual images of faces and other complex objects. The response magnitude of neurons to a given image often depends on the size at which the image is presented, usually on a flat display at a fixed distance. While such size sensitivity might simply reflect the angular subtense of retinal image stimulation in degrees, one unexplored possibility is that it tracks the real-world geometry of physical objects, such as their size and distance to the observer in centimeters. This distinction bears fundamentally on the nature of object representation in IT and on the scope of visual operations supported by the ventral visual pathway. To address this question, we assessed the response dependency of neurons in the macaque anterior fundus (AF) face patch to the angular versus physical size of faces. We employed a macaque avatar to stereoscopically render three-dimensional (3D) photorealistic faces at multiple sizes and distances, including a subset of size/distance combinations designed to cast the same size retinal image projection. We found that most AF neurons were modulated principally by the 3D physical size of the face rather than its two-dimensional (2D) angular size on the retina. Further, most neurons responded strongest to extremely large and small faces, rather than to those of normal size. Together, these findings reveal a graded encoding of physical size among face patch neurons, providing evidence that category-selective regions of the primate ventral visual pathway participate in a geometric analysis of real-world objects.
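
The angular-versus-physical distinction rests on simple geometry: the angular size of an object of physical size s viewed at distance d is 2*arctan(s / 2d), so different size/distance pairs can cast identical retinal images. A small worked example (the numbers are illustrative, not the study's stimulus values):

```python
import math

def visual_angle_deg(physical_size_cm, distance_cm):
    """Angular subtense (degrees) of an object of a given physical size at a given distance."""
    return math.degrees(2 * math.atan(physical_size_cm / (2 * distance_cm)))

# A 10 cm face at 57 cm and a 20 cm face at 114 cm cast (nearly) the same retinal image,
# so a neuron tuned to angular size cannot tell them apart, while one tuned to
# physical size can.
for size_cm, dist_cm in [(10, 57), (20, 114), (40, 228)]:
    print(f"{size_cm:>3} cm face at {dist_cm:>3} cm -> {visual_angle_deg(size_cm, dist_cm):.2f} deg")
```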


Subject(s)
Macaca , Temporal Lobe , Animals , Temporal Lobe/physiology , Neurons/physiology , Pattern Recognition, Visual/physiology , Photic Stimulation/methods , Brain Mapping
4.
Proc Natl Acad Sci U S A ; 120(32): e2221122120, 2023 08 08.
Article in English | MEDLINE | ID: mdl-37523552

ABSTRACT

Segmentation, the computation of object boundaries, is one of the most important steps in intermediate visual processing. Previous studies have reported cells across visual cortex that are modulated by segmentation features, but the functional role of these cells remains unclear. First, it is unclear whether these cells encode segmentation consistently since most studies used only a limited variety of stimulus types. Second, it is unclear whether these cells are organized into specialized modules or instead randomly scattered across the visual cortex: the former would lend credence to a functional role for putative segmentation cells. Here, we used fMRI-guided electrophysiology to systematically characterize the consistency and spatial organization of segmentation-encoding cells across the visual cortex. Using fMRI, we identified a set of patches in V2, V3, V3A, V4, and V4A that were more active for stimuli containing figures compared to ground, regardless of whether figures were defined by texture, motion, luminance, or disparity. We targeted these patches for single-unit recordings and found that cells inside segmentation patches were tuned to both figure-ground and borders more consistently across types of stimuli than cells in the visual cortex outside the patches. Remarkably, we found clusters of cells inside segmentation patches that showed the same border-ownership preference across all stimulus types. Finally, using a population decoding approach, we found that segmentation could be decoded with higher accuracy from segmentation patches than from either color-selective or control regions. Overall, our results suggest that segmentation signals are preferentially encoded in spatially discrete patches.
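
Illustrative sketch of the kind of population decoding analysis described: cross-validated linear classification of figure versus ground from pseudo-population firing rates. The data here are synthetic and the classifier choice is an assumption, not the authors' exact pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_trials, n_neurons = 200, 60

# Synthetic pseudo-population: firing rates on "figure" vs "ground" trials, with a
# weak segmentation signal carried by a subset of neurons (as in a segmentation patch).
labels = rng.integers(0, 2, size=n_trials)             # 0 = ground, 1 = figure
rates = rng.normal(size=(n_trials, n_neurons))
rates[:, :15] += 0.6 * labels[:, None]                  # 15 informative neurons

decoder = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(decoder, rates, labels, cv=5).mean()
print(f"figure/ground decoding accuracy: {accuracy:.2f}")   # chance = 0.50
```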


Subject(s)
Macaca , Visual Cortex , Animals , Pattern Recognition, Visual/physiology , Photic Stimulation/methods , Visual Perception/physiology , Visual Cortex/diagnostic imaging , Visual Cortex/physiology
5.
J Neurosci ; 44(27)2024 Jul 03.
Article in English | MEDLINE | ID: mdl-38806251

ABSTRACT

The semantic knowledge stored in our brains can be accessed from different stimulus modalities. For example, a picture of a cat and the word "cat" both engage similar conceptual representations. While existing research has found evidence for modality-independent representations, their content remains unknown. Modality-independent representations could be semantic, or they might also contain perceptual features. We developed a novel approach combining word/picture cross-condition decoding with neural network classifiers that learned latent modality-independent representations from MEG data (25 human participants, 15 females, 10 males). We then compared these representations to models representing semantic, sensory, and orthographic features. Results show that modality-independent representations correlate both with semantic and visual representations. There was no evidence that these results were due to picture-specific visual features or orthographic features automatically activated by the stimuli presented in the experiment. These findings support the notion that modality-independent concepts contain both perceptual and semantic representations.
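
The core of cross-condition decoding is to train a classifier on one modality and test it on the other. A minimal sketch with synthetic "MEG" patterns; the latent-representation network in the paper is replaced here by a plain linear classifier for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n_trials, n_sensors, n_concepts = 50, 100, 2

# Synthetic sensor patterns: each concept has a modality-independent component shared
# by word and picture trials, plus a modality-specific offset and trial noise.
concept_templates = rng.normal(size=(n_concepts, n_sensors))

def make_trials(modality_offset):
    X, y = [], []
    for c in range(n_concepts):
        trials = concept_templates[c] + modality_offset + rng.normal(0, 1.0, (n_trials, n_sensors))
        X.append(trials)
        y.append(np.full(n_trials, c))
    return np.vstack(X), np.concatenate(y)

X_words, y_words = make_trials(rng.normal(size=n_sensors))   # word trials
X_pics, y_pics = make_trials(rng.normal(size=n_sensors))     # picture trials

clf = LogisticRegression(max_iter=2000).fit(X_words, y_words)  # train on words only
print(f"word -> picture decoding accuracy: {clf.score(X_pics, y_pics):.2f}")
```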


Subject(s)
Magnetoencephalography , Photic Stimulation , Semantics , Humans , Female , Male , Adult , Young Adult , Photic Stimulation/methods , Visual Perception/physiology , Concept Formation/physiology , Brain/physiology , Brain Mapping , Pattern Recognition, Visual/physiology
6.
J Neurosci ; 44(20)2024 May 15.
Article in English | MEDLINE | ID: mdl-38569924

ABSTRACT

The superior colliculus (SC) is a prominent and conserved visual center in all vertebrates. In mice, the most superficial lamina of the SC is enriched with neurons that are selective for the moving direction of visual stimuli. Here, we study how these direction selective neurons respond to complex motion patterns known as plaids, using two-photon calcium imaging in awake male and female mice. The plaid pattern consists of two superimposed sinusoidal gratings moving in different directions, giving an apparent pattern direction that lies between the directions of the two component gratings. Most direction selective neurons in the mouse SC respond robustly to the plaids and show a high selectivity for the moving direction of the plaid pattern but not of its components. Pattern motion selectivity is seen in both excitatory and inhibitory SC neurons and is especially prevalent in response to plaids with large cross angles between the two component gratings. However, retinal inputs to the SC are ambiguous in their selectivity to pattern versus component motion. Modeling suggests that pattern motion selectivity in the SC can arise from a nonlinear transformation of converging retinal inputs. In contrast, the prevalence of pattern motion selective neurons is not seen in the primary visual cortex (V1). These results demonstrate an interesting difference between the SC and V1 in motion processing and reveal the SC as an important site for encoding pattern motion.
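
Pattern versus component selectivity is commonly quantified by correlating a neuron's plaid tuning curve with two predictions: the "component" prediction (average of the grating responses at the two component directions) and the "pattern" prediction (the grating tuning curve itself), then comparing partial correlations. A sketch of that classic index on a synthetic tuning curve; this is the standard analysis, not necessarily the authors' exact implementation.

```python
import numpy as np

directions = np.arange(0, 360, 30)          # plaid pattern directions (deg)
cross_angle = 120                           # angle between the two component gratings

def von_mises_tuning(dirs_deg, pref_deg, kappa=2.0):
    """Smooth direction tuning curve peaked at pref_deg."""
    return np.exp(kappa * (np.cos(np.deg2rad(dirs_deg - pref_deg)) - 1))

grating_tuning = von_mises_tuning(directions, pref_deg=90)

# Predictions for plaid responses:
pattern_pred = grating_tuning                                           # follows plaid direction
component_pred = 0.5 * (von_mises_tuning(directions - cross_angle / 2, 90)
                        + von_mises_tuning(directions + cross_angle / 2, 90))

# Synthetic "observed" plaid tuning for a pattern-motion-selective cell.
observed = pattern_pred + 0.05 * np.random.default_rng(4).normal(size=len(directions))

def corr(a, b):
    return float(np.corrcoef(a, b)[0, 1])

r_p, r_c, r_pc = corr(observed, pattern_pred), corr(observed, component_pred), corr(pattern_pred, component_pred)
# Partial correlations remove the correlation between the two predictions themselves.
Rp = (r_p - r_c * r_pc) / np.sqrt((1 - r_c**2) * (1 - r_pc**2))
Rc = (r_c - r_p * r_pc) / np.sqrt((1 - r_p**2) * (1 - r_pc**2))
print(f"pattern partial r = {Rp:.2f}, component partial r = {Rc:.2f}")  # pattern cell: Rp >> Rc
```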


Subject(s)
Mice, Inbred C57BL , Motion Perception , Photic Stimulation , Retina , Superior Colliculi , Visual Pathways , Animals , Superior Colliculi/physiology , Motion Perception/physiology , Mice , Male , Female , Retina/physiology , Photic Stimulation/methods , Visual Pathways/physiology , Neurons/physiology , Pattern Recognition, Visual/physiology
7.
J Neurosci ; 44(24)2024 Jun 12.
Article in English | MEDLINE | ID: mdl-38670806

ABSTRACT

Visual crowding refers to the phenomenon where a target object that is easily identifiable in isolation becomes difficult to recognize when surrounded by other stimuli (distractors). Many psychophysical studies have investigated this phenomenon and proposed alternative models for the underlying mechanisms. One prominent hypothesis, albeit with mixed psychophysical support, posits that crowding arises from the loss of information due to pooled encoding of features from target and distractor stimuli in the early stages of cortical visual processing. However, neurophysiological studies have not rigorously tested this hypothesis. We studied the responses of single neurons in macaque (one male, one female) area V4, an intermediate stage of the object-processing pathway, to parametrically designed crowded displays and texture statistics-matched metameric counterparts. Our investigations reveal striking parallels between how crowding parameters-number, distance, and position of distractors-influence human psychophysical performance and V4 shape selectivity. Importantly, we also found that enhancing the salience of a target stimulus could alleviate crowding effects in highly cluttered scenes, and this could be temporally protracted reflecting a dynamical process. Thus, a pooled encoding of nearby stimuli cannot explain the observed responses, and we propose an alternative model where V4 neurons preferentially encode salient stimuli in crowded displays. Overall, we conclude that the magnitude of crowding effects is determined not just by the number of distractors and target-distractor separation but also by the relative salience of targets versus distractors based on their feature attributes-the similarity of distractors and the contrast between target and distractor stimuli.


Subject(s)
Macaca mulatta , Neurons , Photic Stimulation , Visual Cortex , Animals , Male , Female , Visual Cortex/physiology , Photic Stimulation/methods , Neurons/physiology , Humans , Pattern Recognition, Visual/physiology , Psychophysics
8.
J Neurosci ; 44(12)2024 Mar 20.
Article in English | MEDLINE | ID: mdl-38331583

ABSTRACT

Capacity limitations in visual tasks can be observed when the number of task-related objects increases. An influential idea is that such capacity limitations are determined by competition at the neural level: two objects that are encoded by shared neural populations interfere more in behavior (e.g., visual search) than two objects encoded by separate neural populations. However, the neural representational similarity of objects varies across brain regions and across time, raising the questions of where and when competition determines task performance. Furthermore, it is unclear whether the association between neural representational similarity and task performance is common or unique across tasks. Here, we used neural representational similarity derived from fMRI, MEG, and a deep neural network (DNN) to predict performance on two visual search tasks involving the same objects and requiring the same responses but differing in instructions: cued visual search and oddball visual search. Separate groups of human participants (both sexes) viewed the individual objects in neuroimaging experiments to establish the neural representational similarity between those objects. Results showed that performance on both search tasks could be predicted by neural representational similarity throughout the visual system (fMRI), from 80 ms after onset (MEG), and in all DNN layers. Stepwise regression analysis, however, revealed task-specific associations, with unique variability in oddball search performance predicted by early/posterior neural similarity and unique variability in cued search task performance predicted by late/anterior neural similarity. These results reveal that capacity limitations in superficially similar visual search tasks may reflect competition at different stages of visual processing.
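
The logic of relating neural representational similarity to search performance can be illustrated by correlating pairwise search times with pairwise neural pattern similarity. A sketch with synthetic object pairs; the fMRI/MEG/DNN predictors and the stepwise regression of the paper are reduced here to a single toy predictor.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(5)
n_pairs = 100

# Hypothetical data, one value per target/distractor object pair:
neural_similarity = rng.uniform(0, 1, n_pairs)          # e.g., correlation of fMRI patterns
# The prediction: more similar neural representations -> more competition -> slower search.
search_rt_ms = 600 + 250 * neural_similarity + rng.normal(0, 40, n_pairs)

rho, p = spearmanr(neural_similarity, search_rt_ms)
print(f"neural similarity vs search RT: rho = {rho:.2f}, p = {p:.1e}")
```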


Subject(s)
Brain , Magnetic Resonance Imaging , Male , Female , Humans , Magnetic Resonance Imaging/methods , Brain/diagnostic imaging , Brain/physiology , Visual Perception/physiology , Cues , Brain Mapping , Neural Networks, Computer , Pattern Recognition, Visual/physiology
9.
J Neurosci ; 44(5)2024 01 31.
Article in English | MEDLINE | ID: mdl-38124013

ABSTRACT

Understanding social interaction requires processing social agents and their relationships. The latest results show that much of this process is visually solved: visual areas can represent multiple people encoding emergent information about their interaction that is not explained by the response to the individuals alone. A neural signature of this process is an increased response in visual areas, to face-to-face (seemingly interacting) people, relative to people presented as unrelated (back-to-back). This effect highlighted a network of visual areas for representing relational information. How is this network organized? Using functional MRI, we measured the brain activity of healthy female and male humans (N = 42), in response to images of two faces or two (head-blurred) bodies, facing toward or away from each other. Taking the facing > non-facing effect as a signature of relation perception, we found that relations between faces and between bodies were coded in distinct areas, mirroring the categorical representation of faces and bodies in the visual cortex. Additional analyses suggest the existence of a third network encoding relations between (nonsocial) objects. Finally, a separate occipitotemporal network showed the generalization of relational information across body, face, and nonsocial object dyads (multivariate pattern classification analysis), revealing shared properties of relations across categories. In sum, beyond single entities, the visual cortex encodes the relations that bind multiple entities into relationships; it does so in a category-selective fashion, thus respecting a general organizing principle of representation in high-level vision. Visual areas encoding visual relational information can reveal the processing of emergent properties of social (and nonsocial) interaction, which trigger inferential processes.


Subject(s)
Visual Cortex , Humans , Male , Female , Photic Stimulation/methods , Visual Cortex/physiology , Magnetic Resonance Imaging/methods , Human Body , Pattern Recognition, Visual/physiology , Brain Mapping/methods , Visual Perception/physiology
10.
J Neurosci ; 44(21)2024 May 22.
Article in English | MEDLINE | ID: mdl-38569925

ABSTRACT

When we perceive a scene, our brain processes various types of visual information simultaneously, ranging from sensory features, such as line orientations and colors, to categorical features, such as objects and their arrangements. Whereas the role of sensory and categorical visual representations in predicting subsequent memory has been studied using isolated objects, their impact on memory for complex scenes remains largely unknown. To address this gap, we conducted an fMRI study in which female and male participants encoded pictures of familiar scenes (e.g., an airport picture) and later recalled them, while rating the vividness of their visual recall. Outside the scanner, participants had to distinguish each seen scene from three similar lures (e.g., three airport pictures). We modeled the sensory and categorical visual features of multiple scenes using both early and late layers of a deep convolutional neural network. Then, we applied representational similarity analysis to determine which brain regions represented stimuli in accordance with the sensory and categorical models. We found that categorical, but not sensory, representations predicted subsequent memory. In line with the previous result, only for the categorical model, the average recognition performance of each scene exhibited a positive correlation with the average visual dissimilarity between the item in question and its respective lures. These results strongly suggest that even in memory tests that ostensibly rely solely on visual cues (such as forced-choice visual recognition with similar distractors), memory decisions for scenes may be primarily influenced by categorical rather than sensory representations.
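
Representational similarity analysis compares pairwise dissimilarity structure across measurement spaces. A minimal sketch comparing a "brain" RDM with RDMs built from an early versus a late model layer, using synthetic feature matrices rather than actual DNN activations or fMRI data.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(6)
n_scenes = 30

# Synthetic feature matrices (scenes x features) standing in for early-layer (sensory),
# late-layer (categorical), and brain (ROI voxel pattern) representations.
late_features = rng.normal(size=(n_scenes, 80))
early_features = rng.normal(size=(n_scenes, 80))
brain_patterns = late_features @ rng.normal(size=(80, 120)) + rng.normal(size=(n_scenes, 120))

def rdm(features):
    """Vectorized representational dissimilarity matrix (1 - Pearson correlation)."""
    return pdist(features, metric="correlation")

for name, feats in [("early layer", early_features), ("late layer", late_features)]:
    rho, _ = spearmanr(rdm(feats), rdm(brain_patterns))
    print(f"brain RDM vs {name} RDM: Spearman rho = {rho:.2f}")
```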


Subject(s)
Magnetic Resonance Imaging , Pattern Recognition, Visual , Recognition, Psychology , Humans , Male , Female , Adult , Young Adult , Recognition, Psychology/physiology , Pattern Recognition, Visual/physiology , Photic Stimulation/methods , Visual Perception/physiology , Brain/physiology , Brain/diagnostic imaging , Mental Recall/physiology , Brain Mapping
11.
J Neurosci ; 44(24)2024 Jun 12.
Article in English | MEDLINE | ID: mdl-38641406

ABSTRACT

Faces and bodies are processed in separate but adjacent regions in the primate visual cortex. Yet, the functional significance of dividing the whole person into areas dedicated to its face and body components and their neighboring locations remains unknown. Here we hypothesized that this separation and proximity, together with a normalization mechanism, generate clutter-tolerant representations of the face, body, and whole person when presented in complex multi-category scenes. To test this hypothesis, we conducted an fMRI study in which we presented images of a person within a multi-category scene to human male and female participants and assessed the contribution of each component to the response to the scene. Our results revealed a clutter-tolerant representation of the whole person in areas selective for both faces and bodies, typically located at the border between the two category-selective regions. Regions exclusively selective for faces or bodies demonstrated clutter-tolerant representations of their preferred category, corroborating earlier findings. Thus, the adjacent locations of face- and body-selective areas enable a hardwired machinery for decluttering of the whole person, without the need for a dedicated population of person-selective neurons. This distinct yet proximal functional organization of category-selective brain regions enhances the representation of the socially significant whole person, along with its face and body components, within multi-category scenes.
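
The "normalization mechanism" invoked here is often formalized as the response to a multi-stimulus display being a weighted average of the responses to its isolated components. A toy illustration of that averaging rule; the parameters are made up and this is a generic normalization model, not the authors' fitted model.

```python
import numpy as np

def normalized_response(component_responses, weights):
    """Weighted-average normalization: R = sum(w_i * R_i) / sum(w_i)."""
    component_responses = np.asarray(component_responses, dtype=float)
    weights = np.asarray(weights, dtype=float)
    return float(np.sum(weights * component_responses) / np.sum(weights))

# Hypothetical isolated responses of a unit selective for both faces and bodies:
r_face, r_body, r_object = 30.0, 28.0, 5.0

# In a cluttered scene, the components the unit is selective for receive high weights,
# so the "whole person" response is largely preserved despite the added object.
r_scene = normalized_response([r_face, r_body, r_object], weights=[1.0, 1.0, 0.2])
r_person_alone = normalized_response([r_face, r_body], weights=[1.0, 1.0])
print(f"response to person in clutter: {r_scene:.1f} (vs {r_person_alone:.1f} to the person alone)")
```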


Subject(s)
Facial Recognition , Magnetic Resonance Imaging , Humans , Male , Female , Adult , Young Adult , Facial Recognition/physiology , Brain Mapping , Pattern Recognition, Visual/physiology , Photic Stimulation/methods , Visual Cortex/physiology , Visual Cortex/diagnostic imaging , Brain/physiology , Brain/diagnostic imaging
12.
PLoS Biol ; 20(1): e3001546, 2022 01.
Article in English | MEDLINE | ID: mdl-35100261

ABSTRACT

The subiculum is positioned at a critical juncture at the interface of the hippocampus with the rest of the brain. However, the exact roles of the subiculum in most hippocampal-dependent memory tasks remain largely unknown. One obstacle to making comparisons of neural firing patterns between the subiculum and hippocampus is the broad firing fields of the subicular cells. Here, we used spiking phases in relation to theta rhythm to parse the broad firing field of a subicular neuron into multiple subfields to find the unique functional contribution of the subiculum while male rats performed a hippocampal-dependent visual scene memory task. Some of the broad firing fields of the subicular neurons were successfully divided into multiple subfields similar to those in the CA1 by using the theta phase precession cycle. The new paradigm significantly improved the detection of task-relevant information in subicular cells without affecting the information content represented by CA1 cells. Notably, we found that multiple fields of a single subicular neuron, unlike those in the CA1, carried heterogeneous task-related information such as visual context and choice response. Our findings suggest that the subicular cells integrate multiple task-related factors by using theta rhythm to associate environmental context with action.
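
The core idea of parsing a broad firing field by theta phase is that within a single phase-precession cycle the spike phase advances monotonically, so a jump back to late phases marks the start of a new subfield. A heavily simplified sketch of that boundary detection on synthetic spikes; this is an illustration of the principle, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic spikes from a "broad" field made of two hidden subfields: within each,
# theta spike phase precesses from ~360 deg down to ~0 deg as the rat moves through it.
positions = np.sort(rng.uniform(0, 100, 200))                 # cm along the track
phase = np.where(positions < 50,
                 360 * (1 - positions / 50),                  # subfield 1: precession
                 360 * (1 - (positions - 50) / 50))           # subfield 2: precession resets
phase = phase + rng.normal(0, 15, positions.size)             # phase jitter

# Detect subfield boundaries as large upward jumps in phase along the track.
phase_jumps = np.diff(phase)
boundaries = positions[1:][phase_jumps > 180]
print(f"estimated subfield boundaries near: {np.round(boundaries, 1)} cm")
```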


Subject(s)
Action Potentials/physiology , CA1 Region, Hippocampal/physiology , Memory/physiology , Neurons/physiology , Theta Rhythm/physiology , Algorithms , Animals , CA1 Region, Hippocampal/anatomy & histology , Male , Maze Learning/physiology , Neurons/cytology , Pattern Recognition, Visual/physiology , Rats , Rats, Long-Evans
13.
PLoS Comput Biol ; 20(6): e1012159, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38870125

ABSTRACT

Humans are extremely robust in our ability to perceive and recognize objects: we see faces in tea stains and can recognize friends on dark streets. Yet, neurocomputational models of primate object recognition have focused on the initial feed-forward pass of processing through the ventral stream and less on the top-down feedback that likely underlies robust object perception and recognition. Aligned with the generative approach, we propose that the visual system actively facilitates recognition by reconstructing the object hypothesized to be in the image. Top-down attention then uses this reconstruction as a template to bias feedforward processing to align with the most plausible object hypothesis. Building on auto-encoder neural networks, our model makes detailed hypotheses about the appearance and location of the candidate objects in the image by reconstructing a complete object representation from potentially incomplete visual input due to noise and occlusion. The model then leverages the best object reconstruction, measured by reconstruction error, to direct the bottom-up process of selectively routing low-level features, a top-down biasing that captures a core function of attention. We evaluated our model using the MNIST-C (handwritten digits under corruptions) and ImageNet-C (real-world objects under corruptions) datasets. Not only did our model achieve superior performance on these challenging tasks designed to approximate real-world noise and occlusion viewing conditions, but it also better accounted for human behavioral reaction times and error patterns than a standard feedforward Convolutional Neural Network. Our model suggests that a complete understanding of object perception and recognition requires integrating top-down attention and feedback, which we propose takes the form of an object reconstruction.
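
The general scheme described (pick the candidate object whose reconstruction best explains the input, then use that reconstruction to bias feedforward features) can be sketched in a few lines. Here the class "reconstructions" are just stored templates and the bias is a simple multiplicative blend, which is a gross simplification of the paper's auto-encoder model.

```python
import numpy as np

rng = np.random.default_rng(8)
n_classes, n_features = 5, 64

# Hypothetical class templates standing in for decoder reconstructions of each object class.
templates = rng.normal(size=(n_classes, n_features))

# A noisy, partially occluded input generated from class 2.
true_class = 2
x = templates[true_class].copy()
x[:20] = 0.0                                   # "occlusion": missing features
x += rng.normal(0, 0.8, n_features)            # sensory noise

# 1) Hypothesis selection: reconstruction with the lowest error on the visible features.
visible = np.arange(20, n_features)
errors = ((templates[:, visible] - x[visible]) ** 2).mean(axis=1)
best = int(np.argmin(errors))

# 2) Top-down bias: blend the input toward the chosen reconstruction, then re-score.
biased_x = 0.5 * x + 0.5 * templates[best]
scores = templates @ biased_x                  # toy feedforward readout (dot products)
print(f"best reconstruction hypothesis: class {best}, final decision: class {int(np.argmax(scores))}")
```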


Subject(s)
Attention , Neural Networks, Computer , Pattern Recognition, Visual , Humans , Attention/physiology , Pattern Recognition, Visual/physiology , Computational Biology , Models, Neurological , Recognition, Psychology/physiology
14.
Cereb Cortex ; 34(5)2024 May 02.
Article in English | MEDLINE | ID: mdl-38795358

ABSTRACT

We report an investigation of the neural processes involved in the processing of faces and objects of brain-lesioned patient PS, a well-documented case of pure acquired prosopagnosia. We gathered a substantial dataset of high-density electrophysiological recordings from both PS and neurotypicals. Using representational similarity analysis, we produced time-resolved brain representations in a format that facilitates direct comparisons across time points, different individuals, and computational models. To understand how the lesions in PS's ventral stream affect the temporal evolution of her brain representations, we computed the temporal generalization of her brain representations. We uncovered that PS's early brain representations exhibit an unusual similarity to later representations, implying an excessive generalization of early visual patterns. To reveal the underlying computational deficits, we correlated PS' brain representations with those of deep neural networks (DNN). We found that the computations underlying PS' brain activity bore a closer resemblance to early layers of a visual DNN than those of controls. However, the brain representations in neurotypicals became more akin to those of the later layers of the model compared to PS. We confirmed PS's deficits in high-level brain representations by demonstrating that her brain representations exhibited less similarity with those of a DNN of semantics.
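
Temporal generalization asks whether a pattern learned at one time point still discriminates conditions at another. A bare-bones sketch that trains a classifier at each training time and tests it at every other time, using synthetic EEG/MEG-like epochs rather than the patient recordings analyzed in the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(9)
n_trials, n_channels, n_times = 120, 32, 20

# Synthetic epochs: a condition difference that appears from time index 8 onward and
# then persists, which produces off-diagonal ("generalizing") decoding.
labels = rng.integers(0, 2, n_trials)
data = rng.normal(size=(n_trials, n_channels, n_times))
pattern = rng.normal(size=n_channels)
data[:, :, 8:] += 0.8 * labels[:, None, None] * pattern[None, :, None]

half = n_trials // 2
train, test = slice(0, half), slice(half, None)

gen_matrix = np.zeros((n_times, n_times))
for t_train in range(n_times):
    clf = LogisticRegression(max_iter=1000).fit(data[train, :, t_train], labels[train])
    for t_test in range(n_times):
        gen_matrix[t_train, t_test] = clf.score(data[test, :, t_test], labels[test])

print("accuracy, train t=10 / test t=15:", round(gen_matrix[10, 15], 2))
print("accuracy, train t=10 / test t=2 :", round(gen_matrix[10, 2], 2))
```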


Subject(s)
Prosopagnosia , Humans , Prosopagnosia/physiopathology , Female , Adult , Brain/physiopathology , Neural Networks, Computer , Middle Aged , Pattern Recognition, Visual/physiology , Male , Models, Neurological
15.
Cereb Cortex ; 34(1)2024 01 14.
Article in English | MEDLINE | ID: mdl-38011118

ABSTRACT

Sensory stimulation triggers synchronized bioelectrical activity in the brain across various frequencies. This study delves into network-level activities, focusing on local field potentials as a neural signature of visual category representation. Specifically, we studied the role of different local field potential frequency oscillation bands in visual stimulus category representation by presenting images of faces and objects to three monkeys while recording local field potential from inferior temporal cortex. We found category-selective local field potential responses mainly for animate, but not inanimate, objects. Notably, face-selective local field potential responses were evident across all tested frequency bands, manifesting in both enhanced (above mean baseline activity) and suppressed (below mean baseline activity) local field potential powers. We observed four different local field potential response profiles based on frequency bands and face-selective excitatory and suppressive responses. Low-frequency local field potential bands (1-30 Hz) were more predominantly suppressed by face stimulation than the high-frequency (30-170 Hz) local field potential bands. Furthermore, the low-frequency local field potentials conveyed less face category information than the high-frequency local field potentials in both the enhanced and suppressed conditions. Moreover, we observed a negative correlation between face/object d-prime values in all the tested local field potential frequency bands and the anterior-posterior position of the recording sites. In addition, the power of low-frequency local field potential systematically declined across inferior temporal anterior-posterior positions, whereas high-frequency local field potential did not exhibit such a pattern. In general, for most of the above-mentioned findings, somewhat similar results were observed for the body category, but not for other stimulus categories. The observed findings suggest that a balance of face-selective excitation and inhibition across time and cortical space shapes face category selectivity in inferior temporal cortex.
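
The face/object d-prime values referred to here are typically computed from the trial-wise response distributions to the two categories. A minimal sketch on synthetic band power values, purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(10)

def d_prime(responses_pref, responses_nonpref):
    """d' = difference of means scaled by the pooled standard deviation."""
    pooled_sd = np.sqrt(0.5 * (responses_pref.var(ddof=1) + responses_nonpref.var(ddof=1)))
    return (responses_pref.mean() - responses_nonpref.mean()) / pooled_sd

# Hypothetical trial-wise band power at one IT site: faces suppress low-frequency power
# (negative d') but enhance high-frequency power (positive d') relative to objects.
low_face, low_object = rng.normal(8, 2, 100), rng.normal(10, 2, 100)
high_face, high_object = rng.normal(14, 2, 100), rng.normal(10, 2, 100)

print(f"low-frequency  face/object d' = {d_prime(low_face, low_object):+.2f}")
print(f"high-frequency face/object d' = {d_prime(high_face, high_object):+.2f}")
```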


Subject(s)
Brain , Temporal Lobe , Temporal Lobe/physiology , Torso , Photic Stimulation/methods , Pattern Recognition, Visual/physiology , Brain Mapping/methods
16.
Cereb Cortex ; 34(6)2024 Jun 04.
Article in English | MEDLINE | ID: mdl-38879816

ABSTRACT

Observers can selectively deploy attention to regions of space, moments in time, specific visual features, individual objects, and even specific high-level categories-for example, when keeping an eye out for dogs while jogging. Here, we exploited visual periodicity to examine how category-based attention differentially modulates selective neural processing of face and non-face categories. We combined electroencephalography with a novel frequency-tagging paradigm capable of capturing selective neural responses for multiple visual categories contained within the same rapid image stream (faces/birds in Exp 1; houses/birds in Exp 2). We found that the pattern of attentional enhancement and suppression for face-selective processing is unique compared to other object categories: Where attending to non-face objects strongly enhances their selective neural signals during a later stage of processing (300-500 ms), attentional enhancement of face-selective processing is both earlier and comparatively more modest. Moreover, only the selective neural response for faces appears to be actively suppressed by attending towards an alternate visual category. These results underscore the special status that faces hold within the human visual system, and highlight the utility of visual periodicity as a powerful tool for indexing selective neural processing of multiple visual categories contained within the same image sequence.
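
In frequency tagging, category-selective responses are read out as spectral amplitude at the category presentation frequency relative to neighboring bins. A minimal FFT-based sketch on a synthetic EEG channel; the stimulation frequency here is a placeholder, not the experiment's exact rate.

```python
import numpy as np

fs, duration = 250.0, 60.0                        # sampling rate (Hz), recording length (s)
t = np.arange(0, duration, 1 / fs)
rng = np.random.default_rng(11)

face_freq = 1.2                                   # hypothetical face-presentation rate (Hz)
signal = 0.5 * np.sin(2 * np.pi * face_freq * t) + rng.normal(0, 1.0, t.size)

spectrum = np.abs(np.fft.rfft(signal)) / t.size
freqs = np.fft.rfftfreq(t.size, d=1 / fs)

# Signal-to-noise ratio: amplitude at the tagged frequency over the mean of nearby bins.
target_bin = int(np.argmin(np.abs(freqs - face_freq)))
neighbors = np.r_[spectrum[target_bin - 12:target_bin - 2], spectrum[target_bin + 3:target_bin + 13]]
snr = spectrum[target_bin] / neighbors.mean()
print(f"SNR at {face_freq} Hz (face-selective response): {snr:.1f}")
```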


Subject(s)
Attention , Electroencephalography , Attention/physiology , Humans , Male , Female , Young Adult , Adult , Periodicity , Facial Recognition/physiology , Photic Stimulation/methods , Pattern Recognition, Visual/physiology , Brain/physiology , Visual Perception/physiology
17.
Cereb Cortex ; 34(6)2024 Jun 04.
Article in English | MEDLINE | ID: mdl-38864574

ABSTRACT

The amygdala is present in a diverse range of vertebrate species, such as lizards, rodents, and primates; however, its structure and connectivity differ across species. The increased connections to visual sensory areas in primate species suggest that understanding the visual selectivity of the amygdala in detail is critical to revealing the principles underlying its function in primate cognition. Therefore, we designed a high-resolution, contrast-agent enhanced, event-related fMRI experiment and scanned 3 adult rhesus macaques while they viewed 96 naturalistic stimuli. Half of these stimuli were social (defined by the presence of a conspecific); the other half were nonsocial. We also nested manipulations of emotional valence (positive, neutral, and negative) and visual category (faces, nonfaces, animate, and inanimate) within the stimulus set. The results reveal widespread effects of emotional valence, with the amygdala responding more on average to inanimate objects and animals than to faces, bodies, or social agents in this experimental context. These findings suggest that the amygdala makes a contribution to primate vision that goes beyond an auxiliary role in face or social perception. Furthermore, the results highlight the importance of stimulus selection and experimental design when probing the function of the amygdala and other visually responsive brain regions.


Subject(s)
Amygdala , Macaca mulatta , Magnetic Resonance Imaging , Photic Stimulation , Animals , Amygdala/physiology , Amygdala/diagnostic imaging , Male , Photic Stimulation/methods , Emotions/physiology , Brain Mapping , Visual Perception/physiology , Female , Pattern Recognition, Visual/physiology
18.
Cereb Cortex ; 34(4)2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38679483

ABSTRACT

Prior research has yet to fully elucidate the impact of varying relative saliency between target and distractor on attentional capture and suppression, along with their underlying neural mechanisms, especially when social (e.g. face) and perceptual (e.g. color) information interchangeably serve as singleton targets or distractors, competing for attention in a search array. Here, we employed an additional singleton paradigm to investigate the effects of relative saliency on attentional capture (as assessed by N2pc) and suppression (as assessed by PD) of color or face singleton distractors in a visual search task by recording event-related potentials. We found that face singleton distractors with higher relative saliency induced stronger attentional processing. Furthermore, enhancing the physical salience of colors using a bold color ring could enhance attentional processing toward color singleton distractors. Reducing the physical salience of facial stimuli by blurring weakened attentional processing toward face singleton distractors; however, blurring enhanced attentional processing toward color singleton distractors because of the change in relative saliency. In conclusion, the attentional processes of singleton distractors are affected by their relative saliency to singleton targets, with higher relative saliency of singleton distractors resulting in stronger attentional capture and suppression; faces, however, exhibit some specificity in attentional capture and suppression due to high social saliency.
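
N2pc and PD are both lateralized difference waves (contralateral minus ipsilateral to the item of interest) at posterior electrodes. A minimal sketch of computing such a difference wave from synthetic ERP epochs; the electrode pairing and time window follow common conventions and are not necessarily those used in this study.

```python
import numpy as np

rng = np.random.default_rng(12)
fs = 500                                       # sampling rate (Hz)
times = np.arange(-0.2, 0.6, 1 / fs)           # epoch from -200 to 600 ms
n_trials = 200

# Synthetic posterior-electrode epochs (e.g., PO7/PO8) with a small contralateral
# negativity around 200-300 ms relative to the lateralized singleton.
def make_epochs(has_contra_negativity):
    epochs = rng.normal(0, 2.0, (n_trials, times.size))    # microvolts
    if has_contra_negativity:
        window = (times > 0.2) & (times < 0.3)
        epochs[:, window] -= 1.0
    return epochs

contra = make_epochs(True)                     # electrode contralateral to the singleton
ipsi = make_epochs(False)                      # electrode ipsilateral to the singleton

diff_wave = contra.mean(axis=0) - ipsi.mean(axis=0)
n2pc_window = (times > 0.2) & (times < 0.3)
print(f"mean N2pc amplitude (200-300 ms): {diff_wave[n2pc_window].mean():.2f} uV")
```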


Subject(s)
Attention , Color Perception , Electroencephalography , Evoked Potentials , Humans , Attention/physiology , Female , Male , Young Adult , Evoked Potentials/physiology , Adult , Color Perception/physiology , Photic Stimulation/methods , Facial Recognition/physiology , Pattern Recognition, Visual/physiology , Brain/physiology
19.
Cereb Cortex ; 34(8)2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39191663

ABSTRACT

The visual word form area in the occipitotemporal sulcus (here OTS-words) is crucial for reading and shows a preference for text stimuli. We hypothesized that this text preference may be driven by lexical processing. Hence, we performed three fMRI experiments (n = 15), systematically varying participants' task and stimulus, and separately evaluated middle mOTS-words and posterior pOTS-words. Experiment 1 contrasted text with other visual stimuli to identify both OTS-words subregions. Experiment 2 utilized an fMRI adaptation paradigm, presenting compound words as texts or emojis. In experiment 3, participants performed a lexical or color judgment task on compound words in text or emoji format. In experiment 2, pOTS-words, but not mOTS-words, showed fMRI adaptation for compound words in both formats. In experiment 3, both subregions showed higher responses to compound words in emoji format. Moreover, mOTS-words showed higher responses during the lexical judgment task and a task-stimulus interaction. Multivariate analyses revealed that distributed responses in pOTS-words encode stimulus and distributed responses in mOTS-words encode stimulus and task. Together, our findings suggest that the function of the OTS-words subregions goes beyond the specific visual processing of text and that these regions are flexibly recruited whenever semantic meaning needs to be assigned to visual input.


Subject(s)
Judgment , Magnetic Resonance Imaging , Reading , Humans , Male , Female , Judgment/physiology , Young Adult , Adult , Photic Stimulation/methods , Brain Mapping , Pattern Recognition, Visual/physiology , Semantics , Temporal Lobe/physiology , Temporal Lobe/diagnostic imaging , Occipital Lobe/physiology , Occipital Lobe/diagnostic imaging
20.
Cereb Cortex ; 34(7)2024 Jul 03.
Article in English | MEDLINE | ID: mdl-39077920

ABSTRACT

Contextual features are integral to episodic memories; yet, we know little about context effects on pattern separation, a hippocampal function promoting orthogonalization of overlapping memory representations. Recent studies suggested that various extrahippocampal brain regions support pattern separation; however, the specific role of the parahippocampal cortex-a region involved in context representation-in pattern separation has not yet been studied. Here, we investigated the contribution of the parahippocampal cortex (specifically, the parahippocampal place area) to context reinstatement effects on mnemonic discrimination, using functional magnetic resonance imaging. During scanning, participants saw object images on unique context scenes, followed by a recognition task involving the repetitions of encoded objects or visually similar lures on either their original context or a lure context. Context reinstatement at retrieval improved item recognition but hindered mnemonic discrimination. Crucially, our region of interest analyses of the parahippocampal place area and an object-selective visual area, the lateral occipital cortex indicated that while during successful mnemonic decisions parahippocampal place area activity decreased for old contexts compared to lure contexts irrespective of object novelty, lateral occipital cortex activity differentiated between old and lure objects exclusively. These results imply that pattern separation of contextual and item-specific memory features may be differentially aided by scene and object-selective cortical areas.


Subject(s)
Magnetic Resonance Imaging , Occipital Lobe , Parahippocampal Gyrus , Pattern Recognition, Visual , Recognition, Psychology , Humans , Female , Male , Parahippocampal Gyrus/physiology , Parahippocampal Gyrus/diagnostic imaging , Young Adult , Adult , Occipital Lobe/physiology , Occipital Lobe/diagnostic imaging , Pattern Recognition, Visual/physiology , Recognition, Psychology/physiology , Brain Mapping/methods , Photic Stimulation/methods , Memory, Episodic