ABSTRACT
Functional neuroimaging studies indicate that the human brain can represent concepts and their relational structure in memory using coding schemes typical of spatial navigation. However, whether we can read out the internal representational geometries of conceptual spaces solely from human behavior remains unclear. Here, we report that the relational structure between concepts in memory might be reflected in spontaneous eye movements during verbal fluency tasks: When we asked participants to randomly generate numbers, their eye movements correlated with distances along the left-to-right one-dimensional geometry of the number space (mental number line), while they scaled with distance along the ring-like two-dimensional geometry of the color space (color wheel) when they randomly generated color names. Moreover, when participants randomly produced animal names, eye movements correlated with low-dimensional similarity in word frequencies. These results suggest that the representational geometries used to internally organize conceptual spaces might be read out from gaze behavior.
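As an illustration of the analysis logic described above, here is a minimal sketch (with hypothetical data and variable names; the abstract does not specify the actual pipeline) of how gaze displacements between successively generated items could be correlated with conceptual distances measured on a 1D number line or a 2D color wheel.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical data: horizontal gaze position (deg) sampled once per generated item,
# and the sequence of items produced in a random-generation task.
gaze_x = rng.normal(0, 2, size=30)       # gaze at each utterance
numbers = rng.integers(1, 10, size=30)   # randomly generated numbers (1-9)
hues = rng.uniform(0, 360, size=30)      # color names mapped to hue angles (deg)

# Mental number line: 1D distance between successive numbers.
number_dist = np.abs(np.diff(numbers))

# Color wheel: circular (2D ring) distance between successive hues.
dh = np.abs(np.diff(hues))
hue_dist = np.minimum(dh, 360 - dh)

# Gaze displacement between successive utterances.
gaze_step = np.abs(np.diff(gaze_x))

# Rank correlation between conceptual distance and gaze displacement.
rho_num, p_num = spearmanr(number_dist, gaze_step)
rho_col, p_col = spearmanr(hue_dist, gaze_step)
print(f"number line: rho={rho_num:.2f}, p={p_num:.3f}")
print(f"color wheel: rho={rho_col:.2f}, p={p_col:.3f}")
```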
Subject(s)
Eye Movements, Spatial Navigation, Humans, Brain, Movement, Functional Neuroimaging
ABSTRACT
We can sense an object's shape by vision or touch. Previous studies suggested that the inferolateral occipitotemporal cortex (ILOTC) implements supramodal shape representations as it responds more to seeing or touching objects than shapeless textures. However, such activation in the anterior portion of the ventral visual pathway could be due to the conceptual representation of an object or visual imagery triggered by touching an object. We addressed these possibilities by directly comparing shape and conceptual representations of objects in early blind (who lack visual experience/imagery) and sighted participants. We found that bilateral ILOTC in both groups showed stronger activation during a shape verification task than during a conceptual verification task made on the names of the same manmade objects. Moreover, the distributed activity in the ILOTC encoded shape similarity but not conceptual association among objects. Besides the ILOTC, we also found shape representation in both groups' bilateral ventral premotor cortices and intraparietal sulcus (IPS), a frontoparietal circuit relating to object grasping and haptic processing. In contrast, the conceptual verification task activated both groups' left perisylvian brain network relating to language processing and, interestingly, the cuneus in early blind participants only. The ILOTC had stronger functional connectivity to the frontoparietal circuit than to the left perisylvian network, forming a modular structure specialized in shape representation. Our results conclusively support that the ILOTC selectively implements shape representation independently of visual experience, and this unique functionality likely comes from its privileged connection to the frontoparietal haptic circuit.
Subject(s)
Cerebral Cortex, Touch Perception, Humans, Occipital Lobe, Touch Perception/physiology, Touch/physiology, Parietal Lobe/physiology, Blindness, Magnetic Resonance Imaging/methods, Brain Mapping
ABSTRACT
hMT+/V5 is a region in the middle occipitotemporal cortex that responds preferentially to visual motion in sighted people. In cases of early visual deprivation, hMT+/V5 enhances its response to moving sounds. Whether hMT+/V5 contains information about motion directions and whether the functional enhancement observed in the blind is motion specific, or also involves sound source location, remains unresolved. Moreover, the impact of this cross-modal reorganization of hMT+/V5 on the regions typically supporting auditory motion processing, like the human planum temporale (hPT), remains equivocal. We used a combined functional and diffusion-weighted MRI approach and individual in-ear recordings to study the impact of early blindness on the brain networks supporting spatial hearing in male and female humans. Whole-brain univariate analysis revealed that the anterior portion of hMT+/V5 responded to moving sounds in sighted and blind people, while the posterior portion was selective to moving sounds only in blind participants. Multivariate decoding analysis revealed that the presence of motion direction and sound position information was higher in hMT+/V5 and lower in hPT in the blind group. While both groups showed axis-of-motion organization in hMT+/V5 and hPT, this organization was reduced in the hPT of blind people. Diffusion-weighted MRI revealed that the strength of hMT+/V5-hPT connectivity did not differ between groups, whereas the microstructure of the connections was altered by blindness. Our results suggest that the axis-of-motion organization of hMT+/V5 does not depend on visual experience, but that congenital blindness alters the response properties of occipitotemporal networks supporting spatial hearing in the sighted.
SIGNIFICANCE STATEMENT Spatial hearing helps living organisms navigate their environment. This is certainly even more true in people born blind. How does blindness affect the brain network supporting auditory motion and sound source location? Our results show that the presence of motion direction and sound position information was higher in hMT+/V5 and lower in human planum temporale in blind relative to sighted people; and that this functional reorganization is accompanied by microstructural (but not macrostructural) alterations in their connections. These findings suggest that blindness alters cross-modal responses between connected areas that share the same computational goals.
Subject(s)
Brain Mapping, Motion Perception, Auditory Perception/physiology, Blindness, Female, Humans, Magnetic Resonance Imaging/methods, Male, Motion Perception/physiology
ABSTRACT
How do humans compute approximate number? According to one influential theory, approximate number representations arise in the intraparietal sulcus and are amodal, meaning that they arise independently of any sensory modality. Alternatively, approximate number may be computed initially within sensory systems. Here we tested for sensitivity to approximate number in the visual system using steady-state visual evoked potentials. We recorded electroencephalography from humans while they viewed dotclouds presented at 30 Hz, which alternated in numerosity (ranging from 10 to 20 dots) at 15 Hz. At this rate, each dotcloud backward masked the previous dotcloud, disrupting top-down feedback to visual cortex and preventing conscious awareness of the dotclouds' numerosities. Spectral amplitude at 15 Hz measured over the occipital lobe (Oz) correlated positively with the numerical ratio of the stimuli, even when nonnumerical stimulus attributes were controlled, indicating that subjects' visual systems were differentiating dotclouds on the basis of their numerical ratios. Crucially, subjects were unable to discriminate the numerosities of the dotclouds consciously, indicating that the backward masking of the stimuli disrupted reentrant feedback to visual cortex. Approximate number appears to be computed within the visual system, independently of higher-order areas such as the intraparietal sulcus.
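A minimal sketch of the frequency-domain measure described above (synthetic data; the sampling rate and signal amplitudes are assumptions, not values from the study): extract the amplitude spectrum of an occipital channel and read out the component at the 15 Hz numerosity-alternation frequency.

```python
import numpy as np

fs = 512                      # sampling rate (Hz); assumption
t = np.arange(0, 20, 1 / fs)  # 20 s of recording

# Synthetic Oz signal: a 30 Hz stimulation response plus a 15 Hz component
# reflecting the numerosity alternation, embedded in noise.
rng = np.random.default_rng(1)
signal = (0.5 * np.sin(2 * np.pi * 30 * t)
          + 0.2 * np.sin(2 * np.pi * 15 * t)
          + rng.normal(0, 1, t.size))

# Amplitude spectrum.
freqs = np.fft.rfftfreq(t.size, 1 / fs)
amp = np.abs(np.fft.rfft(signal)) * 2 / t.size

# Amplitude at the 15 Hz tag; in the study this value would be computed per
# condition and related to the numerical ratio of the alternating dot clouds.
idx15 = np.argmin(np.abs(freqs - 15.0))
print(f"amplitude at 15 Hz: {amp[idx15]:.3f}")
```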
Subject(s)
Evoked Potentials, Visual/physiology, Mathematical Concepts, Visual Cortex/physiology, Adult, Consciousness/physiology, Electroencephalography, Female, Humans, Male, Photic Stimulation, Visual Perception/physiology
ABSTRACT
If conceptual retrieval is partially based on the simulation of sensorimotor experience, people with a different sensorimotor experience, such as congenitally blind people, should retrieve concepts in a different way. However, studies investigating the neural basis of several conceptual domains (e.g., actions, objects, places) have shown a very limited impact of early visual deprivation. We approached this problem by investigating brain regions that encode the perceptual similarity of action and color concepts evoked by spoken words in sighted and congenitally blind people. At first, and in line with previous findings, a contrast between action and color concepts (independently of their perceptual similarity) revealed similar activations in sighted and blind people for action concepts and partially different activations for color concepts, but outside visual areas. On the other hand, adaptation analyses based on subjective ratings of perceptual similarity showed compelling differences across groups. Perceptually similar colors and actions induced adaptation in the posterior occipital cortex of sighted people only, overlapping with regions known to represent low-level visual features of those perceptual domains. Early blind people instead showed a stronger adaptation for perceptually similar concepts in temporal regions, arguably indexing higher reliance on a lexical-semantic code to represent perceptual knowledge. Overall, our results show that visual deprivation does change the neural bases of conceptual retrieval, but mostly at specific levels of representation supporting perceptual similarity discrimination, reconciling apparently contrasting findings in the field.
Subject(s)
Adaptation, Physiological/physiology, Blindness/physiopathology, Brain Mapping, Color, Concept Formation/physiology, Mental Recall/physiology, Occipital Lobe/physiology, Speech Perception/physiology, Temporal Lobe/physiology, Adult, Blindness/congenital, Color Perception/physiology, Female, Humans, Magnetic Resonance Imaging, Male, Middle Aged
ABSTRACT
Cognitive maps in the hippocampal-entorhinal system are central for the representation of both spatial and non-spatial relationships. Although this system, especially in humans, heavily relies on vision, the role of visual experience in shaping the development of cognitive maps remains largely unknown. Here, we test sighted and early blind individuals in both imagined navigation in fMRI and real-world navigation. During imagined navigation, the Human Navigation Network, constituted by frontal, medial temporal, and parietal cortices, is reliably activated in both groups, showing resilience to visual deprivation. However, neural geometry analyses highlight crucial differences between groups. A 60° rotational symmetry, characteristic of a hexagonal grid-like coding, emerges in the entorhinal cortex of sighted but not blind people, who instead show a 90° (4-fold) symmetry, indicative of a square grid. Moreover, higher parietal cortex activity during navigation in blind people correlates with the magnitude of 4-fold symmetry. In sum, early blindness can alter the geometry of entorhinal cognitive maps, possibly as a consequence of higher reliance on parietal egocentric coding during navigation.
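The rotational-symmetry contrast reported above (6-fold versus 4-fold) is commonly tested with quadrature regressors on trajectory direction: fit cos(n·theta) and sin(n·theta) to the neural signal and take the amplitude of the fitted modulation for each candidate symmetry n. The sketch below illustrates that standard approach on hypothetical data; it is not necessarily the authors' exact pipeline.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical trial-wise data: movement direction in the navigated space and a
# neural response per trial (e.g., entorhinal fMRI amplitude).
theta = rng.uniform(0, 2 * np.pi, size=200)
signal = 0.8 * np.cos(6 * (theta - 0.3)) + rng.normal(0, 1, theta.size)  # 6-fold ground truth

def nfold_amplitude(theta, y, n):
    """Fit y ~ cos(n*theta) + sin(n*theta) and return the modulation amplitude."""
    X = np.column_stack([np.ones_like(theta), np.cos(n * theta), np.sin(n * theta)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.hypot(beta[1], beta[2])

for n in (4, 6, 8):
    print(f"{n}-fold amplitude: {nfold_amplitude(theta, signal, n):.2f}")
```

On such data the 6-fold amplitude dominates, mirroring the hexagonal grid-like coding reported in sighted participants; a square grid would instead show the largest 4-fold amplitude.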
Subject(s)
Blindness, Brain Mapping, Entorhinal Cortex, Magnetic Resonance Imaging, Humans, Blindness/physiopathology, Male, Adult, Female, Entorhinal Cortex/diagnostic imaging, Entorhinal Cortex/physiopathology, Entorhinal Cortex/physiology, Brain Mapping/methods, Parietal Lobe/diagnostic imaging, Parietal Lobe/physiopathology, Middle Aged, Spatial Navigation/physiology, Young Adult, Visually Impaired Persons, Cognition/physiology, Imagination/physiology
ABSTRACT
Reading is both a visual and a linguistic task, and as such it relies on both general-purpose, visual mechanisms and more abstract, meaning-oriented processes. Disentangling the roles of these resources is of paramount importance in reading research. The present study capitalizes on the coupling of fast periodic visual stimulation and MEG recordings to address this issue and investigate the role of different kinds of visual and linguistic units in the visual word identification system. We compared strings of pseudo-characters; strings of consonants (e.g., sfcl); readable, but unattested strings (e.g., amsi); frequent, but non-meaningful chunks (e.g., idge); suffixes (e.g., ment); and words (e.g., vibe); and looked for discrimination responses with a particular focus on the ventral, occipito-temporal regions. The results revealed sensitivity to alphabetic, readable, familiar, and lexical stimuli. Interestingly, there was no discrimination between suffixes and equally frequent, but meaningless endings, thus highlighting a lack of sensitivity to semantics. Taken together, the data suggest that the visual word identification system, at least in its early processing stages, is particularly tuned to form-based regularities, most likely reflecting its reliance on general-purpose, statistical learning mechanisms that are a core feature of the visual system as implemented in the ventral stream.
ABSTRACT
Grid-cell firing fields tile the environment with a 6-fold periodicity during both locomotion and visual exploration. Here, we tested, in humans, whether movements of covert attention elicit grid-like coding using frequency tagging. Participants observed visual trajectories presented sequentially at a fixed rate, allowing different spatial periodicities (e.g., 4-, 6-, and 8-fold) to have corresponding temporal periodicities (e.g., 1, 1.5, and 2 Hz), thus resulting in distinct spectral responses. We found a higher response for the (grid-like) 6-fold periodicity and localized this effect in medial-temporal sources. In a control experiment featuring the same temporal periodicity but lacking spatial structure, the 6-fold effect did not emerge, suggesting its dependency on spatial movements of attention. We report evidence that grid-like signals in the human medial-temporal lobe can be elicited by covert attentional movements and suggest that attentional coding may provide a suitable mechanism to support the activation of cognitive maps during conceptual navigation.
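The mapping between spatial and temporal periodicities follows from simple arithmetic: if trajectory directions advance by a fixed angular step at a fixed presentation rate, a k-fold periodicity over direction completes k cycles per full turn and therefore lands at a temporal frequency of k times the fraction of the circle swept per second. The worked example below assumes 15-degree steps at 6 trajectories per second (these parameters are assumptions; only the resulting 1, 1.5, and 2 Hz values come from the abstract).

```python
# Mapping spatial periodicity to temporal frequency in a frequency-tagging design.
# Assumed parameters: directions advance in 15-degree steps at 6 trajectories per
# second, so the direction sweeps 90 degrees of the circle per second.
step_deg = 15.0
rate_hz = 6.0
sweep_deg_per_s = step_deg * rate_hz             # 90 deg/s

for fold in (4, 6, 8):
    f = fold * sweep_deg_per_s / 360.0           # k-fold -> k cycles per full turn
    print(f"{fold}-fold spatial periodicity -> {f:.1f} Hz")
# Prints 1.0, 1.5, and 2.0 Hz, matching the periodicities reported in the abstract.
```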
Subject(s)
Attention, Temporal Lobe, Humans, Attention/physiology, Locomotion, Computer Systems, Electrodes, Entorhinal Cortex/physiology
ABSTRACT
The human hippocampal-entorhinal system is known to represent both spatial locations and abstract concepts in memory in the form of allocentric cognitive maps. Using fMRI, we show that the human parietal cortex encodes complementary egocentric representations in conceptual spaces during goal-directed mental search, akin to those observable during physical navigation, to determine where a goal is located relative to oneself (e.g., to our left or to our right). Concurrently, the strength of the grid-like signal, a neural signature of allocentric cognitive maps in entorhinal, prefrontal, and parietal cortices, is modulated as a function of goal proximity in conceptual space. These brain mechanisms might support flexible and parallel readout of where target conceptual information is stored in memory, capitalizing on complementary reference frames.
Subject(s)
Brain, Hippocampus, Humans, Parietal Lobe/diagnostic imaging, Brain Mapping, Head, Space Perception
ABSTRACT
Dual Coding Theories (DCT) suggest that meaning is represented in the brain by a double code: a language-derived code in the Anterior Temporal Lobe (ATL) and a sensory-derived code in perceptual and motor regions. Concrete concepts should activate both codes, while abstract ones rely solely on the linguistic code. To test these hypotheses, the present magnetoencephalography (MEG) experiment had participants judge whether visually presented words relate to the senses while we recorded brain responses to abstract and concrete semantic components obtained from 65 independently rated semantic features. Results evidenced early involvement of anterior-temporal and inferior-frontal brain areas in the encoding of both abstract and concrete semantic information. At later stages, occipital and occipito-temporal regions showed greater responses to concrete compared to abstract features. The present findings suggest that the concreteness of words is processed first with a transmodal/linguistic code, housed in frontotemporal brain systems, and only later with an imagistic/sensorimotor code in perceptual regions.
ABSTRACT
Abstract words are typically more difficult to identify than concrete words in lexical-decision, word-naming, and recall tasks. This behavioral advantage for concrete words, known as the concreteness effect, is often considered evidence for embodied semantics, which emphasizes the role of sensorimotor experience in the comprehension of word meaning. In this view, online sensorimotor simulations triggered by concrete words, but not by abstract words, facilitate access to word meaning and speed up word identification. To test whether perceptual simulation is the driving force underlying the concreteness effect, we compared data from early-blind and sighted individuals performing an auditory lexical-decision task. Subjects were presented with property words referring to abstract (e.g., "logic"), concrete multimodal (e.g., "spherical"), and concrete unimodal visual concepts (e.g., "blue"). According to the embodied account, the processing advantage for concrete unimodal visual words should disappear in the early blind because they cannot rely on visual experience and simulation during semantic processing (i.e., purely visual words should be abstract for early-blind people). On the contrary, we found that both sighted and blind individuals are faster when processing multimodal and unimodal visual words compared with abstract words. This result suggests that the concreteness effect does not depend on perceptual simulations but might be driven by modality-independent properties of word meaning.
Subject(s)
Comprehension, Semantics, Blindness, Humans, Reaction Time
ABSTRACT
Humans capitalize on statistical cues to discriminate fundamental units of information within complex streams of sensory input. We sought neural evidence for this phenomenon by combining fast periodic visual stimulation (FPVS) and EEG recordings. Skilled readers were exposed to sequences of linguistic items with decreasing familiarity, presented at a fast rate and periodically interleaved with oddballs. Crucially, each sequence comprised stimuli of the same category, and the only distinction between base and oddball items was the frequency of occurrence of individual tokens within a stream. Frequency-domain analyses revealed robust neural responses at the oddball presentation rate in all conditions, reflecting the discrimination between two locally-emerged groups of items purely informed by token frequency. Results provide evidence for a fundamental frequency-tuned mechanism that operates under high temporal constraints and could underpin category bootstrapping. Concurrently, they showcase the potential of FPVS for providing a direct neural measure of implicit statistical learning.
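A typical readout in this kind of fast periodic visual stimulation design is the signal-to-noise ratio at the oddball presentation rate: the spectral amplitude at that frequency divided by the mean amplitude of neighboring bins. The sketch below uses synthetic data; the sampling rate, recording duration, and 1.2 Hz oddball rate are assumptions for illustration, not the study's parameters.

```python
import numpy as np

fs, dur = 256, 60                 # sampling rate (Hz) and duration (s); assumptions
oddball_hz = 1.2                  # assumed oddball rate (e.g., every 5th item at a 6 Hz base rate)
t = np.arange(0, dur, 1 / fs)

rng = np.random.default_rng(3)
eeg = 0.3 * np.sin(2 * np.pi * oddball_hz * t) + rng.normal(0, 1, t.size)

freqs = np.fft.rfftfreq(t.size, 1 / fs)
amp = np.abs(np.fft.rfft(eeg)) * 2 / t.size

# SNR at the oddball frequency: amplitude there divided by the mean amplitude of
# surrounding bins (excluding the bins immediately adjacent to the target).
target = np.argmin(np.abs(freqs - oddball_hz))
neighbours = np.r_[target - 12:target - 2, target + 3:target + 13]
snr = amp[target] / amp[neighbours].mean()
print(f"SNR at {oddball_hz} Hz: {snr:.1f}")
```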
Subject(s)
Electroencephalography, Recognition, Psychology, Cues, Electroencephalography/methods, Humans, Photic Stimulation/methods, Recognition, Psychology/physiology
ABSTRACT
Synesthesia represents an atypical merging of percepts, in which a given sensory experience (e.g., words, letters, music) triggers sensations in a different perceptual domain (e.g., color). According to recent estimates, the vast majority of the reported cases of synesthesia involve a visual experience. Purely non-visual synesthesia is extremely rare, and to date there is no reported case of a congenitally blind synesthete. Moreover, it has been suggested that congenital blindness impairs the emergence of synesthesia-related phenomena such as multisensory integration and cross-modal correspondences between non-visual senses (e.g., sound-touch). Is visual experience necessary to develop synesthesia? Here we describe the case of a congenitally blind man (CB) reporting a complex synesthetic experience involving numbers, letters, months, and days of the week. Each item is associated with a precise position in mental space and with a precise tactile texture. In one experiment, we empirically verified the presence of number-texture and letter-texture synesthesia in CB, compared to non-synesthete controls, probing the consistency of item-texture associations across time and demonstrating that synesthesia can develop without vision. Our data fill an important void in the current knowledge on synesthesia and shed light on the mechanisms behind sensory crosstalk in the human mind.
Subject(s)
Music, Perceptual Disorders, Touch Perception, Blindness/complications, Color Perception, Humans, Male, Perceptual Disorders/etiology, Synesthesia, Touch
ABSTRACT
A central question in the cognitive sciences is what role embodiment plays in high-level cognitive functions, such as conceptual processing. Here, we propose that one reason why progress regarding this question has been slow is a lack of focus on what Platt (1964) called "strong inference". Strong inference is possible when results from an experimental paradigm are not merely consistent with a hypothesis but provide decisive evidence for one particular hypothesis compared to competing hypotheses. We discuss how causal paradigms, which test the functional relevance of sensory-motor processes for high-level cognitive functions, can move the field forward. In particular, we explore how congenital sensory-motor disorders, acquired sensory-motor deficits, and interference paradigms with healthy participants can be utilized as an opportunity to better understand the role of sensory experience in conceptual processing. Whereas all three approaches can bring about valuable insights, we highlight that the study of congenital and acquired sensorimotor disorders is particularly effective in the case of conceptual domains with a strong unimodal basis (e.g., colors), whereas interference paradigms with healthy participants have a broader application, avoid many of the practical and interpretational limitations of patient studies, and allow a systematic, step-wise progressive inference approach to causal mechanisms.
ABSTRACT
Our brain constructs reality through narrative and argumentative thought. Some hypotheses argue that these two modes of cognitive functioning are irreducible, reflecting distinct mental operations underlain by separate neural bases; others ascribe both to a unitary neural system dedicated to long-timescale information. We addressed this question by employing inter-subject measures to investigate the stimulus-induced neural responses while participants listened to narrative and argumentative texts during fMRI. We found that following both kinds of texts enhanced functional couplings within the frontoparietal control system. However, while a narrative specifically implicated the default mode system, an argument specifically induced synchronization between the intraparietal sulcus in the frontoparietal control system and multiple perisylvian areas in the language system. Our findings reconcile the two hypotheses by revealing commonalities and differences between the narrative and the argumentative brain networks, showing how diverse mental activities arise from the segregation and integration of existing brain systems.
Subject(s)
Brain/physiology, Cognition/physiology, Thinking/physiology, Adult, Aged, Brain Mapping/methods, Female, Humans, Language, Magnetic Resonance Imaging, Male, Middle Aged, Nerve Net/physiology, Young Adult
ABSTRACT
In human and non-human animals, conceptual knowledge is partially organized according to low-dimensional geometries that rely on brain structures and computations involved in spatial representations. Recently, two separate lines of research have investigated cognitive maps, which are associated with the hippocampal formation and resemble world-centered representations of the environment, and image spaces, which are associated with the parietal cortex and resemble self-centered spatial relationships. We review evidence supporting cognitive maps and image spaces, and we propose a hippocampal-parietal network that can account for the organization and retrieval of knowledge across multiple reference frames. We also suggest that cognitive maps and image spaces may be two manifestations of a more general propensity of the mind to create low-dimensional internal models.
Subject(s)
Brain Mapping, Parietal Lobe, Animals, Brain/diagnostic imaging, Cognition, Hippocampus, Parietal Lobe/diagnostic imaging, Space Perception
ABSTRACT
Is vision necessary for the development of the categorical organization of the Ventral Occipito-Temporal Cortex (VOTC)? We used fMRI to characterize VOTC responses to eight categories presented acoustically in sighted and early blind individuals, and visually in a separate sighted group. We observed that VOTC reliably encodes sound categories in sighted and blind people using a representational structure and connectivity partially similar to those found in vision. Sound categories were, however, more reliably encoded in the blind than in the sighted group, using a representational format closer to the one found in vision. Crucially, VOTC in blind people represents the categorical membership of sounds rather than their acoustic features. Our results suggest that sounds trigger categorical responses in the VOTC of congenitally blind and sighted people that partially match the topography and functional profile of the visual response, despite qualitative nuances in the categorical organization of VOTC between modalities and groups.
The world is full of rich and dynamic visual information. To avoid information overload, the human brain groups inputs into categories such as faces, houses, or tools. A part of the brain called the ventral occipito-temporal cortex (VOTC) helps categorize visual information. Specific parts of the VOTC prefer different types of visual input; for example, one part may tend to respond more to faces, whilst another may prefer houses. However, it is not clear how the VOTC characterizes information. One idea is that similarities between certain types of visual information may drive how information is organized in the VOTC. For example, looking at faces requires using central vision, while looking at houses requires using peripheral vision. Furthermore, all faces have a roundish shape while houses tend to have a more rectangular shape. Another possibility, however, is that the categorization of different inputs cannot be explained just by vision, and is also driven by higher-level aspects of each category. For instance, how humans use or interact with something may also influence how an input is categorized. If categories are established depending (at least partially) on these higher-level aspects, rather than purely through visual likeness, it is likely that the VOTC would respond similarly to both sounds and images representing these categories. Now, Mattioni et al. have tested how individuals with and without sight respond to eight different categories of information to find out whether or not categorization is driven purely by visual likeness. Each category was presented to participants using sounds while measuring their brain activity. In addition, a group of participants who could see were also presented with the categories visually. Mattioni et al. then compared what happened in the VOTC of the three groups (sighted people presented with sounds, blind people presented with sounds, and sighted people presented with images) in response to each category. The experiment revealed that the VOTC organizes both auditory and visual information in a similar way. However, there were more similarities between the way blind people categorized auditory information and how sighted people categorized visual information than between how sighted people categorized each type of input. Mattioni et al. also found that the region of the VOTC that responds to inanimate objects massively overlapped across the three groups, whereas the part of the VOTC that responds to living things was more variable. These findings suggest that the way that the VOTC organizes information is, at least partly, independent from vision. The experiments also provide some information about how the brain reorganizes in people who are born blind. Further studies may reveal how differences in the VOTC of people with and without sight affect regions typically associated with auditory categorization, and potentially explain how the brain reorganizes in people who become blind later in life.
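Comparisons of representational structure across groups and modalities of the kind described in this study are typically carried out with representational similarity analysis: build a dissimilarity matrix over the eight categories from the multivoxel VOTC patterns of each group, then correlate the matrices. The sketch below uses hypothetical pattern data and is only an illustration of that general approach, not the authors' exact pipeline.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(4)
n_categories, n_voxels = 8, 200

# Hypothetical category-averaged VOTC patterns for two groups/modalities.
patterns_blind_audio = rng.normal(size=(n_categories, n_voxels))
patterns_sighted_visual = patterns_blind_audio + rng.normal(scale=1.5, size=(n_categories, n_voxels))

# Representational dissimilarity matrices (condensed form: pairwise 1 - correlation).
rdm_blind = pdist(patterns_blind_audio, metric="correlation")
rdm_sighted = pdist(patterns_sighted_visual, metric="correlation")

# Second-order similarity: how alike are the two representational geometries?
rho, p = spearmanr(rdm_blind, rdm_sighted)
print(f"RDM correlation (blind-audio vs sighted-visual): rho={rho:.2f}, p={p:.3f}")
```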
Subject(s)
Auditory Perception, Blindness/physiopathology, Occipital Lobe/physiopathology, Temporal Lobe/physiopathology, Acoustic Stimulation, Case-Control Studies, Humans
ABSTRACT
Non-arbitrary sound-shape correspondences (SSC), such as the "bouba-kiki" effect, have been consistently observed across languages and, together with other sound-symbolic phenomena, challenge the classic linguistic dictum of the arbitrariness of the sign. Yet, it is unclear what makes a sound "round" or "spiky" to the human mind. Here we tested the hypothesis that visual experience is necessary for the emergence of SSC, supported by empirical evidence showing reduced SSC in visually impaired people. Results of two experiments comparing early blind and sighted individuals showed that SSC emerged strongly in both groups. Experiment 2, however, showed a partially different pattern of SSC in sighted and blind participants, which was mostly explained by a different effect of orthographic letter shape: The shape of written letters (spontaneously activated by spoken words) influenced SSC in the sighted, but not in the blind, who are exposed to an orthography (Braille) in which letters do not have spiky or round outlines. In sum, early blindness does not prevent the emergence of SSC, and differences between sighted and visually impaired people may be due to the indirect influence (or lack thereof) of orthographic letter shape.
Subject(s)
Pattern Recognition, Visual/physiology, Psycholinguistics, Reading, Speech Perception/physiology, Touch Perception/physiology, Vision Disorders/physiopathology, Adult, Female, Humans, Male, Middle Aged
ABSTRACT
How perceptual information is encoded into language and conceptual knowledge is a debated topic in cognitive (neuro)science. We present modality norms for 643 Italian adjectives, which referred to one of the five perceptual modalities or were abstract. Overall, words were rated as mostly connected to the visual modality and least connected to the olfactory and gustatory modalities. We found that words associated with visual and auditory experience were more unimodal than words associated with other sensory modalities. A principal components analysis highlighted a strong coupling between gustatory and olfactory information in word meaning, and a tendency for words referring to tactile experience to also include information from the visual dimension. Abstract words were found to encode only marginal perceptual information, mostly from visual and auditory experience. The modality norms were augmented with corpus-based (e.g., Zipf Frequency, Orthographic Levenshtein Distance 20) and ratings-based psycholinguistic variables (Age of Acquisition, Familiarity, Contextual Availability). Split-half correlations performed for each experimental variable and comparisons with similar databases confirmed that our norms are highly reliable. This database thus provides an important new tool for investigating the interplay between language, perception, and cognition.
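The split-half reliability mentioned above is usually computed by randomly splitting raters into two halves, correlating the per-word mean ratings across halves, and applying the Spearman-Brown correction. The sketch below uses simulated ratings and an assumed number of raters; the exact procedure and parameters used for the norms are not specified in the abstract.

```python
import numpy as np

rng = np.random.default_rng(5)
n_words, n_raters = 643, 40                  # 643 adjectives; number of raters is an assumption

# Simulated visual-modality ratings (0-5 scale): word-level true means plus rater noise.
true_means = rng.uniform(0, 5, size=n_words)
ratings = true_means + rng.normal(0, 1.0, size=(n_raters, n_words))

# Random split of raters into two halves and per-word mean rating in each half.
order = rng.permutation(n_raters)
half1 = ratings[order[: n_raters // 2]].mean(axis=0)
half2 = ratings[order[n_raters // 2:]].mean(axis=0)

# Pearson correlation between halves, then Spearman-Brown correction.
r = np.corrcoef(half1, half2)[0, 1]
reliability = 2 * r / (1 + r)
print(f"split-half r={r:.2f}, Spearman-Brown corrected={reliability:.2f}")
```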
ABSTRACT
Recent studies have proposed that the use of internal and external coordinate systems for perception and action may be more flexible in congenitally blind than in sighted individuals. To investigate this hypothesis further, we asked congenitally blind and sighted people to perform, with the hands uncrossed and crossed over the body midline, a tactile temporal order judgment task and an auditory Simon task. Crucially, both tasks were carried out under task instructions favoring the use of either an internal (left vs. right hand) or an external (left vs. right hemispace) frame of reference. In the internal condition of the temporal order judgment task, our results replicated previous findings (Röder, Rösler, & Spence, 2004) showing that hand crossing impaired only sighted participants' performance, suggesting that blind people did not activate a (conflicting) external frame of reference by default. However, under external instructions, a decrease in performance was observed in both groups, suggesting that even blind people activated an external coordinate system in this condition. In the Simon task, and in contrast with a previous study (Röder, Kusmierek, Spence, & Schicke, 2007), both groups responded more efficiently when the sound was presented from the same side as the response ("Simon effect"), independently of hand position. This was true under both the internal and external conditions, suggesting that blind and sighted participants alike activated an external coordinate system by default in this task. Together, these data demonstrated that both sighted and blind individuals were able to activate internal and external information for perception and action.