Results 1 - 20 of 111
1.
Cereb Cortex ; 34(6), 2024 Jun 04.
Article in English | MEDLINE | ID: mdl-38864574

ABSTRACT

The amygdala is present in a diverse range of vertebrate species, such as lizards, rodents, and primates; however, its structure and connectivity differ across species. The increased connections to visual sensory areas in primate species suggest that understanding the visual selectivity of the amygdala in detail is critical to revealing the principles underlying its function in primate cognition. Therefore, we designed a high-resolution, contrast-agent-enhanced, event-related fMRI experiment and scanned 3 adult rhesus macaques while they viewed 96 naturalistic stimuli. Half of these stimuli were social (defined by the presence of a conspecific); the other half were nonsocial. We also nested manipulations of emotional valence (positive, neutral, and negative) and visual category (faces, nonfaces, animate, and inanimate) within the stimulus set. The results reveal widespread effects of emotional valence, with the amygdala responding more on average to inanimate objects and animals than to faces, bodies, or social agents in this experimental context. These findings suggest that the amygdala makes a contribution to primate vision that goes beyond an auxiliary role in face or social perception. Furthermore, the results highlight the importance of stimulus selection and experimental design when probing the function of the amygdala and other visually responsive brain regions.


Subject(s)
Amygdala , Macaca mulatta , Magnetic Resonance Imaging , Photic Stimulation , Animals , Amygdala/physiology , Amygdala/diagnostic imaging , Male , Photic Stimulation/methods , Emotions/physiology , Brain Mapping , Visual Perception/physiology , Female , Pattern Recognition, Visual/physiology
2.
Nat Hum Behav ; 2024 Mar 18.
Article in English | MEDLINE | ID: mdl-38499772

ABSTRACT

A long-standing engineering ambition has been to design anthropomorphic bionic limbs: devices that look like and are controlled in the same way as the biological body (biomimetic). The untested assumption is that biomimetic motor control enhances device embodiment, learning, generalization and automaticity. To test this, we compared biomimetic and non-biomimetic control strategies for non-disabled participants when learning to control a wearable myoelectric bionic hand operated by an eight-channel electromyography pattern-recognition system. We compared motor learning across days and behavioural tasks for two training groups: biomimetic (mimicking the desired bionic hand gesture with the biological hand) and arbitrary control (mapping an unrelated biological hand gesture to the desired bionic gesture). For both trained groups, training improved bionic limb control, reduced cognitive reliance and increased embodiment over the bionic hand. Biomimetic users had more intuitive and faster control early in training. Arbitrary users matched biomimetic performance later in training. Furthermore, arbitrary users showed increased generalization to a new control strategy. Collectively, our findings suggest that biomimetic and arbitrary control strategies provide different benefits. The optimal strategy is probably not strictly biomimetic, but rather a flexible strategy within the biomimetic-to-arbitrary spectrum, depending on the user, available training opportunities and user requirements.

3.
bioRxiv ; 2023 Sep 11.
Article in English | MEDLINE | ID: mdl-37745325

ABSTRACT

Our visual world consists of an immense number of unique objects, and yet we are easily able to identify, distinguish, interact with, and reason about the things we see within several hundred milliseconds. This requires that we flexibly integrate and focus on different object properties to support specific behavioral goals. In the current study, we examined how these rich object representations unfold in the human brain by modelling time-resolved MEG signals evoked by viewing thousands of objects. Using millions of behavioral judgments to guide our understanding of the neural representation of the object space, we find distinct temporal profiles across the object dimensions. These profiles fall into two broad types, with either a distinct and early peak (~150 ms) or a slow rise to a late peak (~300 ms). Further, the early effects are stable across participants, in contrast to later effects, which show more variability across people. This highlights that early peaks may carry stimulus-specific information and later peaks subject-specific information. Given that the dimensions with early peaks seem to be primarily visual dimensions and those with later peaks more conceptual, our results suggest that conceptual processing is more variable across people. Together, these data provide a comprehensive account of how a variety of object properties unfold in the human brain and contribute to the rich nature of object vision.

4.
Sci Adv ; 9(17): eadd2981, 2023 Apr 28.
Article in English | MEDLINE | ID: mdl-37126552

ABSTRACT

What makes certain images more memorable than others? While much of memory research has focused on participant effects, recent studies using a stimulus-centric perspective have sparked debate on the determinants of memory, including the roles of semantic and visual features and whether the most prototypical or atypical items are best remembered. Prior studies have typically relied on constrained stimulus sets, limiting a generalized view of the features underlying what we remember. Here, we collected more than 1 million memory ratings for a naturalistic dataset of 26,107 object images designed to comprehensively sample concrete objects. We establish a model of object features that is predictive of image memorability and examine whether memorability can be accounted for by the typicality of the objects. We find that semantic features exert a stronger influence than perceptual features on what we remember and that the relationship between memorability and typicality is more complex than a simple positive or negative association alone.

5.
Sci Rep ; 13(1): 5383, 2023 04 03.
Article in English | MEDLINE | ID: mdl-37012369

ABSTRACT

Facial expressions are thought to be complex visual signals, critical for communication between social agents. Most prior work aimed at understanding how facial expressions are recognized has relied on stimulus databases featuring posed facial expressions, designed to represent putative emotional categories (such as 'happy' and 'angry'). Here we use an alternative selection strategy to develop the Wild Faces Database (WFD): a set of one thousand images capturing a diverse range of ambient facial behaviors from outside of the laboratory. We characterized the perceived emotional content in these images using a standard categorization task in which participants were asked to classify the apparent facial expression in each image. In addition, participants were asked to indicate the intensity and genuineness of each expression. While modal scores indicate that the WFD captures a range of different emotional expressions, in comparing the WFD to images taken from other, more conventional databases, we found that participants responded more variably and less specifically to the wild-type faces, perhaps indicating that natural expressions are more multiplexed than a categorical model would predict. We argue that this variability can be employed to explore latent dimensions in our mental representation of facial expressions. Further, images in the WFD were rated as less intense and more genuine than images taken from other databases, suggesting a greater degree of authenticity among WFD images. The strong positive correlation between intensity and genuineness scores demonstrates that even the high-arousal states captured in the WFD were perceived as authentic. Collectively, these findings highlight the potential utility of the WFD as a new resource for bridging the gap between the laboratory and real world in studies of expression recognition.


Subject(s)
Anger , Emotions , Humans , Happiness , Facial Expression , Arousal
6.
Neuroimage Clin ; 38: 103384, 2023.
Article in English | MEDLINE | ID: mdl-37023490

ABSTRACT

Choroideremia (CHM) is an X-linked recessive form of hereditary retinal degeneration, which preserves only small islands of central retinal tissue. Previously, we demonstrated the relationship between central vision and structure and population receptive fields (pRF) using functional magnetic resonance imaging (fMRI) in untreated CHM subjects. Here, we replicate and extend this work, providing a more in-depth analysis of the visual responses in a cohort of CHM subjects who participated in a retinal gene therapy clinical trial. fMRI was conducted in six CHM subjects and six age-matched healthy controls (HCs) while they viewed drifting contrast pattern stimuli monocularly. A single ∼3-minute fMRI run was collected for each eye. Participants also underwent ophthalmic evaluations of visual acuity and static automatic perimetry (SAP). Consistent with our previous report, a single ∼3-minute fMRI run accurately characterized ophthalmic evaluations of visual function in most CHM subjects. In-depth analyses of the cortical distribution of pRF responses revealed that the motion-selective regions V5/MT and MST appear resistant to progressive retinal degenerations in CHM subjects. This effect was restricted to V5/MT and MST and was not present in primary visual cortex (V1), motion-selective V3A, or regions within the ventral visual pathway. Motion-selective areas V5/MT and MST appear to be resistant to the continuous detrimental impact of CHM. Such resilience appears selective to these areas and may be mediated by independent retina-V5/MT anatomical connections that bypass V1. We did not observe any significant impact of gene therapy.


Subject(s)
Choroideremia , Motion Perception , Humans , Choroideremia/therapy , Magnetic Resonance Imaging , Motion Perception/physiology , Retina/diagnostic imaging , Visual Acuity
7.
bioRxiv ; 2023 Feb 08.
Article in English | MEDLINE | ID: mdl-36945476

ABSTRACT

A longstanding engineering ambition has been to design anthropomorphic bionic limbs: devices that look like and are controlled in the same way as the biological body (biomimetic). The untested assumption is that biomimetic motor control enhances device embodiment, learning, generalization, and automaticity. To test this, we compared biomimetic and non-biomimetic control strategies for able-bodied participants when learning to operate a wearable myoelectric bionic hand. We compared motor learning across days and behavioural tasks for two training groups: Biomimetic (mimicking the desired bionic hand gesture with the biological hand) and Arbitrary control (mapping an unrelated biological hand gesture to the desired bionic gesture). For both trained groups, training improved bionic limb control, reduced cognitive reliance, and increased embodiment over the bionic hand. Biomimetic users had more intuitive and faster control early in training. Arbitrary users matched biomimetic performance later in training. Further, arbitrary users showed increased generalization to a novel control strategy. Collectively, our findings suggest that biomimetic and arbitrary control strategies provide different benefits. The optimal strategy is likely not strictly biomimetic, but rather a flexible strategy within the biomimetic-to-arbitrary spectrum, depending on the user, available training opportunities and user requirements.

8.
Elife ; 12, 2023 Feb 27.
Article in English | MEDLINE | ID: mdl-36847339

ABSTRACT

Understanding object representations requires a broad, comprehensive sampling of the objects in our visual world with dense measurements of brain activity and behavior. Here, we present THINGS-data, a multimodal collection of large-scale neuroimaging and behavioral datasets in humans, comprising densely sampled functional MRI and magnetoencephalographic recordings, as well as 4.70 million similarity judgments in response to thousands of photographic images for up to 1,854 object concepts. THINGS-data is unique in its breadth of richly annotated objects, allowing for testing countless hypotheses at scale while assessing the reproducibility of previous findings. Beyond the unique insights promised by each individual dataset, the multimodality of THINGS-data allows combining datasets for a much broader view into object processing than previously possible. Our analyses demonstrate the high quality of the datasets and provide five examples of hypothesis-driven and data-driven applications. THINGS-data constitutes the core public release of the THINGS initiative (https://things-initiative.org) for bridging the gap between disciplines and advancing cognitive neuroscience.


Subject(s)
Brain , Pattern Recognition, Visual , Humans , Reproducibility of Results , Pattern Recognition, Visual/physiology , Brain/diagnostic imaging , Magnetoencephalography/methods , Magnetic Resonance Imaging/methods , Brain Mapping/methods
9.
Cognition ; 235: 105398, 2023 06.
Article in English | MEDLINE | ID: mdl-36791506

ABSTRACT

Face pareidolia is the experience of seeing illusory faces in inanimate objects. While children experience face pareidolia, it is unknown whether they perceive gender in illusory faces, as their face evaluation system is still developing in the first decade of life. In a sample of 412 children and adults from 4 to 80 years of age, we found that, like adults, children perceived many illusory faces in objects to have a gender and had a strong bias to see them as male rather than female, regardless of their own gender identification. These results provide evidence that the male bias for face pareidolia emerges early in life, even before the ability to discriminate gender from facial cues alone is fully developed. Further, the existence of a male bias in children suggests that any social context that elicits the cognitive bias to see faces as male has remained relatively consistent across generations.


Subject(s)
Face , Illusions , Adult , Humans , Male , Child , Female , Illusions/psychology
10.
Cortex ; 158: 71-82, 2023 01.
Article in English | MEDLINE | ID: mdl-36459788

ABSTRACT

The recall and visualization of people and places from memory is an everyday occurrence, yet the neural mechanisms underpinning this phenomenon are not well understood. In particular, the temporal characteristics of the internal representations generated by active recall are unclear. Here, we used magnetoencephalography (MEG) and multivariate pattern analysis to measure the evolving neural representation of familiar places and people across the whole brain when human participants engage in active recall. To isolate self-generated imagined representations, we used a retro-cue paradigm in which participants were first presented with two possible labels before being cued to recall either the first or second item. We collected personalized labels for specific locations and people familiar to each participant. Importantly, no visual stimuli were presented during the recall period, and the retro-cue paradigm allowed the dissociation of responses associated with the labels from those corresponding to the self-generated representations. First, we found that following the retro-cue it took on average ∼1000 ms for distinct neural representations of freely recalled people or places to develop. Second, we found distinct representations of personally familiar concepts throughout the 4 s recall period. Finally, we found that these representations were highly stable and generalizable across time. These results suggest that self-generated visualizations and recall of familiar places and people are subserved by a stable neural mechanism that operates relatively slowly when under conscious control.


Subject(s)
Cues , Mental Recall , Humans , Mental Recall/physiology , Brain/physiology , Brain Mapping , Magnetoencephalography
11.
bioRxiv ; 2023 Dec 14.
Article in English | MEDLINE | ID: mdl-38168448

ABSTRACT

Neuroscientists have long debated the adult brain's capacity to reorganize itself in response to injury. A driving model for studying plasticity has been limb amputation. For decades, it was believed that amputation triggers large-scale reorganization of cortical body resources. However, these studies have relied on cross-sectional observations post-amputation, without directly tracking neural changes. Here, we longitudinally followed adult patients with planned arm amputations and measured hand and face representations, before and after amputation. By interrogating the representational structure elicited from movements of the hand (pre-amputation) and phantom hand (post-amputation), we demonstrate that hand representation is unaltered. Further, we observed no evidence for lower face (lip) reorganization into the deprived hand region. Collectively, our findings provide direct and decisive evidence that amputation does not trigger large-scale cortical reorganization.

12.
Nat Commun ; 13(1): 6508, 2022 10 31.
Article in English | MEDLINE | ID: mdl-36316315

ABSTRACT

Our memories form a tapestry of events, people, and places, woven across the decades of our lives. However, research has often been limited in assessing the nature of episodic memory by using artificial stimuli and short time scales. The explosion of social media enables new ways to examine the neural representations of naturalistic episodic memories, for features like the memory's age, location, memory strength, and emotions. We recruited 23 users of a video diary app ("1 Second Everyday"), who had recorded 9266 daily memory videos spanning up to 7 years. During a 3 T fMRI scan, participants viewed 300 of their memory videos intermixed with 300 from another individual. We find that memory features are tightly interrelated, highlighting the need to test them in conjunction, and discover a multidimensional topography in medial parietal cortex, with subregions sensitive to a memory's age, strength, and the familiarity of the people and places involved.


Subject(s)
Memory, Episodic , Humans , Parietal Lobe/diagnostic imaging , Brain Mapping , Recognition, Psychology , Magnetic Resonance Imaging , Neuroimaging , Mental Recall
13.
J Neurosci ; 2022 Jul 21.
Article in English | MEDLINE | ID: mdl-35868861

ABSTRACT

According to a prominent view in neuroscience, visual stimuli are coded by discrete cortical networks that respond preferentially to specific categories, such as faces or objects. However, it remains unclear how these category-selective networks respond when viewing conditions are cluttered, i.e., when there is more than one stimulus in the visual field. Here, we asked three questions: (1) Does clutter reduce the response and selectivity for faces as a function of retinal location? (2) Is the preferential response to faces uniform across the visual field? And (3) does the ventral visual pathway encode information about the location of cluttered faces? We used fMRI to measure the response of the face-selective network in awake, fixating macaques (2 female, 5 male). Across a series of four experiments, we manipulated the presence and absence of clutter, as well as the location of the faces relative to the fovea. We found that clutter reduces the response to peripheral faces. When presented in isolation, without clutter, the selectivity for faces is fairly uniform across the visual field, but, when clutter is present, there is a marked decrease in the selectivity for peripheral faces. We also found no evidence of a contralateral visual field bias when faces were presented in clutter. Nonetheless, multivariate analyses revealed that the location of cluttered faces could be decoded from the multivoxel response of the face-selective network. Collectively, these findings demonstrate that clutter blunts the selectivity of the face-selective network to peripheral faces, although information about their retinal location is retained.

SIGNIFICANCE STATEMENT: Numerous studies that have measured brain activity in macaques have found visual regions that respond preferentially to faces. Although these regions are thought to be essential for social behavior, their responses have typically been measured while faces were presented in isolation, a situation atypical of the real world. How do these regions respond when faces are presented with other stimuli? We report that, when clutter is present, the preferential response to foveated faces is spared but the preferential response to peripheral faces is reduced. Our results indicate that the presence of clutter changes the response of the face-selective network.

14.
Article in English | MEDLINE | ID: mdl-35609964

ABSTRACT

Phantom limb pain (PLP) impacts the majority of individuals who undergo limb amputation. The PLP experience is highly heterogeneous in its quality, intensity, frequency and severity. This heterogeneity, combined with the low prevalence of amputation in the general population, has made it difficult to accumulate reliable data on PLP. Consequently, we lack consensus on PLP mechanisms, as well as effective treatment options. However, the wealth of new PLP research over the past decade provides a unique opportunity to re-evaluate some of the core assumptions underlying what we know about PLP and the rationale behind PLP treatments. The goal of this review is to help generate consensus in the field on how best to research PLP, from phenomenology to treatment. We highlight conceptual and methodological challenges in studying PLP, which have hindered progress on the topic and spawned disagreement in the field, and offer potential solutions to overcome these challenges. Our hope is that a constructive evaluation of the foundational knowledge underlying PLP research practices will enable more informed decisions when testing the efficacy of existing interventions and will guide the development of the next generation of PLP treatments.

15.
Cortex ; 153: 66-86, 2022 08.
Article in English | MEDLINE | ID: mdl-35597052

ABSTRACT

Objects disappearing briefly from sight due to occlusion is an inevitable occurrence in everyday life. Yet we generally have a strong experience that occluded objects continue to exist, despite the fact that they objectively disappear. This indicates that neural object representations must be maintained during dynamic occlusion. However, it is unclear what the nature of such representations is and, in particular, whether they are perception-like or more abstract, for example, reflecting only limited features such as position or movement direction. In this study, we address this question by examining how different object features such as object shape, luminance, and position are represented in the brain when a moving object is dynamically occluded. We apply multivariate decoding methods to magnetoencephalography (MEG) data to track how object representations unfold over time. Our methods allow us to contrast the representations of multiple object features during occlusion and enable us to compare the neural responses evoked by visible and occluded objects. The results show that object position information is represented to a limited extent during occlusion, while object identity features are not maintained through the period of occlusion. Together, this suggests that the nature of object representations during dynamic occlusion is different from visual representations during perception.


Subject(s)
Visual Cortex , Brain , Brain Mapping , Humans , Magnetic Resonance Imaging/methods , Magnetoencephalography , Pattern Recognition, Visual/physiology , Visual Cortex/physiology
16.
Proc Natl Acad Sci U S A ; 119(5), 2022 Feb 01.
Article in English | MEDLINE | ID: mdl-35074880

ABSTRACT

Despite our fluency in reading human faces, sometimes we mistakenly perceive illusory faces in objects, a phenomenon known as face pareidolia. Although illusory faces share some neural mechanisms with real faces, it is unknown to what degree pareidolia engages higher-level social perception beyond the detection of a face. In a series of large-scale behavioral experiments (total n = 3,815 adults), we found that illusory faces in inanimate objects are readily perceived to have a specific emotional expression, age, and gender. Most strikingly, we observed a strong bias to perceive illusory faces as male rather than female. This male bias could not be explained by preexisting semantic or visual gender associations with the objects, or by visual features in the images. Rather, this robust bias in the perception of gender for illusory faces reveals a cognitive bias arising from a broadly tuned face evaluation system in which minimally viable face percepts are more likely to be perceived as male.


Subject(s)
Face/physiology , Illusions/physiology , Adult , Facial Recognition/physiology , Female , Humans , Male , Photic Stimulation/methods
17.
Brain Struct Funct ; 227(4): 1405-1421, 2022 May.
Article in English | MEDLINE | ID: mdl-34727232

ABSTRACT

Human visual cortex is organised broadly according to two major principles: retinotopy (the spatial mapping of the retina in cortex) and category-selectivity (preferential responses to specific categories of stimuli). Historically, these principles were considered anatomically separate, with retinotopy restricted to the occipital cortex and category-selectivity emerging in the lateral-occipital and ventral-temporal cortex. However, recent studies show that category-selective regions exhibit systematic retinotopic biases, for example exhibiting stronger activation for stimuli presented in the contra- compared to the ipsilateral visual field. It is unclear, however, whether responses within category-selective regions are more strongly driven by retinotopic location or by category preference, and if there are systematic differences between category-selective regions in the relative strengths of these preferences. Here, we directly compare contralateral and category preferences by measuring fMRI responses to scene and face stimuli presented in the left or right visual field and computing two bias indices: a contralateral bias (response to the contralateral minus ipsilateral visual field) and a face/scene bias (preferred response to scenes compared to faces, or vice versa). We compare these biases within and between scene- and face-selective regions and across the lateral and ventral surfaces of the visual cortex more broadly. We find an interaction between surface and bias: lateral surface regions show a stronger contralateral than face/scene bias, whilst ventral surface regions show the opposite. These effects are robust across and within subjects, and appear to reflect large-scale, smoothly varying gradients. Together, these findings support distinct functional roles for the lateral and ventral visual cortex in terms of the relative importance of the spatial location of stimuli during visual information processing.


Subject(s)
Pattern Recognition, Visual , Visual Cortex , Bias , Brain Mapping , Humans , Magnetic Resonance Imaging , Occipital Lobe/physiology , Pattern Recognition, Visual/physiology , Photic Stimulation , Temporal Lobe/physiology , Visual Cortex/physiology
18.
Memory ; 30(3): 279-292, 2022 03.
Article in English | MEDLINE | ID: mdl-34913412

ABSTRACT

Drawings of scenes made from memory can be highly detailed and spatially accurate, with little information not found in the observed stimuli. While prior work has focused on studying memory for distinct scenes, less is known about the specific detail recalled when episodes are highly similar and competing. Here, participants (N = 30) were asked to study and recall eight complex scene images using a drawing task. Importantly, four of these images were exemplars of different scene categories, while the other four images were from the same scene category. The resulting 213 drawings were judged by 1764 online scorers for a comprehensive set of measures, including scene and object diagnosticity, spatial information, and fixation and pen movement behaviour. We observed that competition in memory resulted in diminished object detail, with drawings and objects that were less diagnostic of their original image. However, repeated exemplars of a category did not result in differences in spatial memory accuracy, and there were no differences in fixations during study or pen movements during recall. These results reveal that while drawings for distinct categories of scenes can be highly detailed and accurate, drawings for scenes from repeated categories, creating competition in memory, show reduced object detail.


Subject(s)
Mental Recall , Spatial Memory , Humans , Pattern Recognition, Visual
19.
Trends Cogn Sci ; 25(11): 978-991, 2021 11.
Article in English | MEDLINE | ID: mdl-34489180

ABSTRACT

Perceptual gaps can be caused by objects in the foreground temporarily occluding objects in the background or by eyeblinks, which briefly but frequently interrupt visual information. Resolving visual motion across perceptual gaps is particularly challenging, as object position changes during the gap. We examine how visual motion is maintained and updated through externally driven (occlusion) and internally driven (eyeblinks) perceptual gaps. Focusing on both phenomenology and potential mechanisms such as suppression, extrapolation, and integration, we present a framework for how perceptual gaps are resolved over space and time. We finish by highlighting critical questions and directions for future work.


Subject(s)
Motion Perception , Humans
20.
J Neurosci ; 2021 Jun 07.
Article in English | MEDLINE | ID: mdl-34099511

ABSTRACT

The map of category-selectivity in human ventral temporal cortex (VTC) provides organizational constraints to models of object recognition. One important principle is lateral-medial response biases to stimuli that are typically viewed in the center or periphery of the visual field. However, little is known about the relative temporal dynamics and location of regions that respond preferentially to stimulus classes that are centrally viewed, like the face- and word-processing networks. Here, word- and face-selective regions within VTC were mapped using intracranial recordings from 36 patients. Partially overlapping, but also anatomically dissociable, patches of face- and word-selectivity were found in VTC. In addition to canonical word-selective regions along the left posterior occipitotemporal sulcus, selectivity was also located medial and anterior to face-selective regions on the fusiform gyrus at the group level and within individual male and female subjects. These regions were replicated using 7 Tesla fMRI in healthy subjects. Left hemisphere word-selective regions preceded right hemisphere responses by 125 ms, potentially reflecting the left hemisphere bias for language, with no hemispheric difference in face-selective response latency. Word-selective regions along the posterior fusiform responded first, then selectivity spread medially and laterally, and then anteriorly. Face-selective responses were first seen in posterior fusiform regions bilaterally, then proceeded anteriorly from there. For both words and faces, the relative delay between regions was longer than would be predicted by purely feedforward models of visual processing. The distinct time-courses of responses across these regions, and between hemispheres, suggest that a complex and dynamic functional circuit supports face and word perception.

SIGNIFICANCE STATEMENT: Representations of visual objects in the human brain have been shown to be organized by several principles, including whether those objects tend to be viewed centrally or peripherally in the visual field. However, it remains unclear how regions that process objects that are viewed centrally, like words and faces, are organized relative to one another. Here, invasive and non-invasive neuroimaging suggests there is a mosaic of regions in ventral temporal cortex that respond selectively to either words or faces. These regions display differences in the strength and timing of their responses, both within and between brain hemispheres, suggesting they play different roles in perception. These results illuminate extended, bilateral, and dynamic brain pathways that support face perception and reading.
