Results 1 - 20 of 89
1.
Proc Natl Acad Sci U S A ; 121(32): e2320251121, 2024 Aug 06.
Article in English | MEDLINE | ID: mdl-39078671

ABSTRACT

The primary visual cortex (V1) in blindness is engaged in a wide spectrum of tasks and sensory modalities, including audition, touch, language, and memory. This widespread involvement raises questions regarding the constancy of its role and whether it might exhibit flexibility in its function over time, connecting to diverse network functions specific to task demands. This would suggest that reorganized V1 assumes a role like multiple-demand system regions. Alternatively, varying patterns of plasticity in blind V1 may be attributed to individual factors, with different blind individuals recruiting V1 preferentially for different functions. In support of this, we recently showed that V1 functional connectivity (FC) varies greatly across blind individuals. But do these represent stable individual patterns of plasticity, or are they driven more by instantaneous changes, like a multiple-demand system now inhabiting V1? Here, we tested whether individual FC patterns from the V1 of blind individuals are stable over time. We show that over two years, FC from the V1 is unique and highly stable in a small sample of repeatedly sampled congenitally blind individuals. Further, using multivoxel pattern analysis, we demonstrate that the unique reorganization patterns of these individuals allow decoding of participant identity. Together with recent evidence for substantial individual differences in V1 connectivity, this indicates that there may be a consistent role for V1 in blindness, which may differ for each individual. Further, it suggests that the variability in visual reorganization in blindness across individuals could be used to seek stable neuromarkers for sight rehabilitation and assistive approaches.


Subject(s)
Blindness , Neuronal Plasticity , Humans , Blindness/physiopathology , Neuronal Plasticity/physiology , Male , Female , Adult , Middle Aged , Magnetic Resonance Imaging , Primary Visual Cortex/physiology , Longitudinal Studies , Visual Cortex/physiopathology , Visual Cortex/physiology , Visual Cortex/diagnostic imaging , Brain Mapping/methods
2.
Sci Rep ; 14(1): 14855, 2024 06 27.
Article in English | MEDLINE | ID: mdl-38937475

ABSTRACT

Exploring a novel approach to mental health technology, this study illuminates the intricate interplay between exteroception (the perception of the external world), and interoception (the perception of the internal world). Drawing on principles of sensory substitution, we investigated how interoceptive signals, particularly respiration, could be conveyed through exteroceptive modalities, namely vision and hearing. To this end, we developed a unique, immersive multisensory environment that translates respiratory signals in real-time into dynamic visual and auditory stimuli. The system was evaluated by employing a battery of various psychological assessments, with the findings indicating a significant increase in participants' interoceptive sensibility and an enhancement of the state of flow, signifying immersive and positive engagement with the experience. Furthermore, a correlation between these two variables emerged, revealing a bidirectional enhancement between the state of flow and interoceptive sensibility. Our research is the first to present a sensory substitution approach for substituting between interoceptive and exteroceptive senses, and specifically as a transformative method for mental health interventions, paving the way for future research.


Subject(s)
Interoception , Humans , Interoception/physiology , Female , Male , Adult , Young Adult , Acoustic Stimulation , Respiration , Photic Stimulation
3.
iScience ; 27(6): 109820, 2024 Jun 21.
Article in English | MEDLINE | ID: mdl-38799571

ABSTRACT

Each sense serves a different specific function in spatial perception, and they all form a joint multisensory spatial representation. For instance, hearing enables localization in the entire 3D external space, while touch traditionally only allows localization of objects on the body (i.e., within the peripersonal space alone). We use an in-house touch-motion algorithm (TMA) to evaluate individuals' capability to understand externalized 3D information through touch, a skill that was not acquired during an individual's development or in evolution. Four experiments demonstrate quick learning and high accuracy in localization of motion using vibrotactile inputs on fingertips and successful audio-tactile integration in background noise. Subjective responses in some participants imply spatial experiences through visualization and perception of tactile "moving" sources beyond reach. We discuss our findings with respect to developing new skills in an adult brain, including combining a newly acquired "sense" with an existing one and computation-based brain organization.

4.
bioRxiv ; 2023 Nov 06.
Article in English | MEDLINE | ID: mdl-37986779

ABSTRACT

The primary visual cortex (V1) in individuals born blind is engaged in a wide spectrum of tasks and sensory modalities, including audition, touch, language, and memory. This widespread involvement raises questions regarding the constancy of its role and whether it might exhibit flexibility in its function over time, connecting to diverse network functions in response to task-specific demands. This would suggest that reorganized V1 takes on a role similar to cognitive multiple-demand system regions. Alternatively, it is possible that the varying patterns of plasticity observed in the blind V1 can be attributed to individual factors, whereby different blind individuals recruit V1 for different functions, highlighting the immense idiosyncrasy of plasticity. In support of this second account, we have recently shown that V1 functional connectivity varies greatly across blind individuals. But do these represent stable individual patterns of plasticity or merely instantaneous changes, for a multiple-demand system now inhabiting V1? Here we tested if individual connectivity patterns from the visual cortex of blind individuals are stable over time. We show that over two years, fMRI functional connectivity from the primary visual cortex is unique and highly stable in a small sample of repeatedly sampled congenitally blind individuals. Further, using multivoxel pattern analysis, we demonstrate that the unique reorganization patterns of these individuals allow decoding of participant identity. Together with recent evidence for substantial individual differences in visual cortex connectivity, this indicates there may be a consistent role for the visual cortex in blindness, which may differ for each individual. Further, it suggests that the variability in visual reorganization in blindness across individuals could be used to seek stable neuromarkers for sight rehabilitation and assistive approaches.

5.
J Neurosci ; 43(46): 7868-7878, 2023 Nov 15.
Article in English | MEDLINE | ID: mdl-37783506

ABSTRACT

Motor actions, such as reaching or grasping, can be decoded from fMRI activity of early visual cortex (EVC) in sighted humans. This effect can depend on vision or visual imagery, or alternatively, could be driven by mechanisms independent of visual experience. Here, we show that the actions of reaching in different directions can be reliably decoded from fMRI activity of EVC in congenitally blind humans (both sexes). Thus, neither visual experience nor visual imagery is necessary for EVC to represent action-related information. We also demonstrate that, within EVC of blind humans, the accuracy of reach direction decoding is highest in areas typically representing foveal vision and gradually decreases in areas typically representing peripheral vision. We propose that this might indicate the existence of a predictive, hard-wired mechanism of aligning action and visual spaces. This mechanism might send action-related information primarily to the high-resolution foveal visual areas, which are critical for guiding and online correction of motor actions. Finally, we show that, beyond EVC, the decoding of reach direction in blind humans is most accurate in dorsal stream areas known to be critical for visuo-spatial and visuo-motor integration in the sighted. Thus, these areas can develop space and action representations even in the lifelong absence of vision. Overall, our findings in congenitally blind humans match previous research on the action system in the sighted, and suggest that the development of action representations in the human brain might be largely independent of visual experience.

SIGNIFICANCE STATEMENT

Early visual cortex (EVC) was traditionally thought to process only visual signals from the retina. Recent studies proved this account incomplete, and showed EVC involvement in many activities not directly related to incoming visual information, such as memory, sound, or action processing. Is EVC involved in these activities because of visual imagery?
Here, we show robust reach direction representation in EVC of humans born blind. This demonstrates that EVC can represent actions independently of vision and visual imagery. Beyond EVC, we found that reach direction representation in blind humans is strongest in dorsal brain areas, critical for action processing in the sighted. This suggests that the development of action representations in the human brain is largely independent of visual experience.


Subject(s)
Visual Cortex , Visual Perception , Male , Female , Humans , Brain , Visual Cortex/diagnostic imaging , Brain Mapping , Blindness , Magnetic Resonance Imaging
6.
Neuropsychologia ; 190: 108685, 2023 Nov 05.
Article in English | MEDLINE | ID: mdl-37741551

ABSTRACT

Accumulating evidence in the last decades has given rise to a new theory of brain organization, positing that cortical regions are recruited for specific tasks irrespective of the sensory modality via which information is channeled. For instance, the visual reading network has been shown to be recruited for reading via the tactile Braille code in congenitally blind adults. However, how rapidly non-typical sensory input modulates activity in typically visual regions remains to be explored. To this aim, we developed a novel reading orthography, termed OVAL, enabling congenitally blind adults to quickly acquire reading via the auditory modality. OVAL uses the EyeMusic, a visual-to-auditory sensory-substitution device (SSD), to transform visually presented letters, optimized for auditory transformation, into sound. Using fMRI, we show modulation in the right ventral visual stream following 2 h of same-day training. Crucially, following more extensive training (i.e., ∼12 h), we show that OVAL reading recruits the left ventral visual stream including the location of the Visual Word Form Area, a key grapheme-responsive region within the visual reading network. Our results show that while after 2 h of SSD training we can already observe the recruitment of the deprived ventral visual stream by auditory stimuli, computation-selective cross-modal recruitment requires longer training to establish.


Subject(s)
Brain , Learning , Adult , Humans , Touch , Brain Mapping , Sound , Magnetic Resonance Imaging , Blindness
7.
Restor Neurol Neurosci ; 41(3-4): 115-127, 2023.
Article in English | MEDLINE | ID: mdl-37742669

ABSTRACT

BACKGROUND: The default mode network (DMN) is a large-scale brain network tightly correlated with self and self-referential processing, activated by intrinsic tasks and deactivated by externally-directed tasks.
OBJECTIVE: In this study, we aim to investigate the novel approach of default mode activation during progressive muscle relaxation (PMR) and examine whether differential activation patterns result from the movement of different body parts.
METHODS: We employed neuroimaging to investigate DMN activity during simple body movements performed as part of progressive muscle relaxation. We focused on differentiating the neural response between facial movements and movements of other body parts.
RESULTS: Our results show that the movement of different body parts led to deactivation in several DMN nodes, namely the temporal poles, hippocampus, medial prefrontal cortex (mPFC), and posterior cingulate cortex. However, facial movement induced an inverted and selective positive BOLD pattern in some of these same areas. Moreover, areas in the temporal poles selective for face movement showed functional connectivity not only with the hippocampus and mPFC but also with the nucleus accumbens.
CONCLUSIONS: Our findings suggest that both conceptual and embodied self-related processes, including body movements during progressive muscle relaxation, may be mapped onto shared brain networks. This could enhance our understanding of how practices like PMR influence DMN activity and potentially offer insights to inform therapeutic strategies that rely on mindful body movements.


Subject(s)
Brain Mapping , Default Mode Network , Brain/physiology , Gyrus Cinguli , Hippocampus/diagnostic imaging , Magnetic Resonance Imaging , Nerve Net/diagnostic imaging , Nerve Net/physiology
8.
PLoS One ; 18(6): e0287802, 2023.
Article in English | MEDLINE | ID: mdl-37352216

ABSTRACT

[This corrects the article DOI: 10.1371/journal.pone.0250281.].

9.
Front Hum Neurosci ; 17: 1058617, 2023.
Article in English | MEDLINE | ID: mdl-36936618

ABSTRACT

Current advancements in both technology and science allow us to manipulate our sensory modalities in new and unexpected ways. In the present study, we explore the potential of expanding what we perceive through our natural senses by utilizing a visual-to-auditory sensory substitution device (SSD), the EyeMusic, an algorithm that converts images to sound. The EyeMusic was initially developed to allow blind individuals to create a spatial representation of information arriving from a video feed at a slow sampling rate. In this study, we aimed to use the EyeMusic to cover the naturally blind areas of sighted individuals' visual field. We use it in this initial proof-of-concept study to test the ability of sighted subjects to combine visual information with surrounding auditory sonification representing visual information. Participants in this study were tasked with recognizing and adequately placing the stimuli, using sound to represent the areas outside the standard human visual field. As such, the participants were asked to report shapes' identities as well as their spatial orientation (front/right/back/left), requiring combined visual (90° frontal) and auditory input (the remaining 270°) for the successful performance of the task (content in both vision and audition was presented in a sweeping clockwise motion around the participant). We found that participants performed well above chance level after a brief 1-h-long session of online training and one on-site training session of an average of 20 min. In some cases, they could even draw a 2D representation of the perceived shape. Participants could also generalize, recognizing new shapes they were not explicitly trained on. Our findings provide an initial proof of concept indicating that sensory augmentation devices and techniques can potentially be used in combination with natural sensory information in order to expand the natural fields of sensory perception.

10.
Curr Biol ; 33(7): 1211-1219.e5, 2023 04 10.
Article in English | MEDLINE | ID: mdl-36863342

ABSTRACT

V6 is a retinotopic area located in the dorsal visual stream that integrates eye movements with retinal and visuo-motor signals. Despite the known role of V6 in visual motion, it is unknown whether it is involved in navigation and how sensory experiences shape its functional properties. We explored the involvement of V6 in egocentric navigation in sighted and in congenitally blind (CB) participants navigating via an in-house distance-to-sound sensory substitution device (SSD), the EyeCane. We performed two fMRI experiments on two independent datasets. In the first experiment, CB and sighted participants navigated the same mazes. The sighted performed the mazes via vision, while the CB performed them via audition. The CB performed the mazes before and after a training session, using the EyeCane SSD. In the second experiment, a group of sighted participants performed a motor topography task. Our results show that right V6 (rhV6) is selectively involved in egocentric navigation independently of the sensory modality used. Indeed, after training, rhV6 of CB is selectively recruited for auditory navigation, similarly to rhV6 in the sighted. Moreover, we found activation for body movement in area V6, which can putatively contribute to its involvement in egocentric navigation. Taken together, our findings suggest that area rhV6 is a unique hub that transforms spatially relevant sensory information into an egocentric representation for navigation. While vision is clearly the dominant modality, rhV6 is in fact a supramodal area that can develop its selectivity for navigation in the absence of visual experience.


Subject(s)
Auditory Perception , Movement , Humans , Movement/physiology , Motion , Hearing , Eye Movements
11.
Front Neurosci ; 17: 973525, 2023.
Article in English | MEDLINE | ID: mdl-36968509

ABSTRACT

The Extrastriate Body Area (EBA) participates in the visual perception and motor actions of body parts. We recently showed that EBA's perceptual function develops independently of visual experience, responding to stimuli with body-part information in a supramodal fashion. However, it is still unclear if the EBA similarly maintains its action-related function. Here, we used fMRI to study motor-evoked responses and connectivity patterns in the congenitally blind brain. We found that, unlike the case of perception, EBA does not develop an action-related response without visual experience. In addition, we show that congenital blindness alters EBA's connectivity profile in a counterintuitive way: functional connectivity with sensorimotor cortices dramatically decreases, whereas connectivity with perception-related visual occipital cortices remains high. To the best of our knowledge, we show for the first time that action-related functions and connectivity in the visual cortex could be contingent on visuomotor experience. We further discuss the role of the EBA within the context of visuomotor control and predictive coding theory.

12.
Front Neurosci ; 16: 970878, 2022.
Article in English | MEDLINE | ID: mdl-36440286

ABSTRACT

Greater cortical gyrification (GY) is linked with enhanced cognitive abilities and is also negatively related to cortical thickness (CT). Individuals who are congenitally blind (CB) exhibit remarkable functional brain plasticity, which enables them to perform certain non-visual and cognitive tasks with supranormal abilities. For instance, extensive training using touch and audition enables CB people to develop impressive skills, and there is evidence linking these skills to cross-modal activations of primary visual areas. Congenital blindness is accompanied by a cascade of anatomical, morphometric, and functional-connectivity changes in non-visual structures, volumetric reductions in several components of the visual system, and increased CT. No study to date has explored GY changes in this population, and no study has explored how variations in CT are related to GY changes in CB. T1-weighted 3D structural magnetic resonance imaging scans were acquired to examine the effects of congenital visual deprivation on cortical structures in a healthy sample of 11 CB individuals (6 male) and 16 age-matched sighted controls (SC) (10 male). In this report, we show for the first time an increase in GY in several brain areas of CB individuals compared to SC, and a negative relationship between GY and CT in the CB brain in several different cortical areas. We discuss the implications of our findings and the contributions of developmental factors and synaptogenesis to the relationship between CT and GY in CB individuals compared to SC.

13.
Front Neurosci ; 16: 921321, 2022.
Article in English | MEDLINE | ID: mdl-36263367

ABSTRACT

Previous evidence suggests that visual experience is crucial for the emergence and tuning of the typical neural system for face recognition. To challenge this conclusion, we trained congenitally blind adults to recognize faces via a visual-to-auditory sensory-substitution device (SSD). Our results showed a preference for trained faces over other SSD-conveyed visual categories in the fusiform gyrus and in other known face-responsive regions of the deprived ventral visual stream. We also observed a parametric modulation in the same cortical regions, for face orientation (upright vs. inverted) and face novelty (trained vs. untrained). Our results strengthen the conclusion that there is a predisposition for sensory-independent and computation-specific processing in specific cortical regions that can be retained in life-long sensory deprivation, independently of previous perceptual experience. They also highlight that if the right training is provided, such cortical preference maintains its tuning to what were considered visual-specific face features.

14.
Neuropsychologia ; 173: 108305, 2022 08 13.
Article in English | MEDLINE | ID: mdl-35752268

ABSTRACT

The phenomenology of the blind has provided an age-old, unparalleled means of exploring the enigmatic link between the brain and mind. This paper delves into the unique phenomenological experience of a man who became blind in adulthood. He subsequently underwent both an Argus II retinal prosthesis implant and training, and extensive training on the EyeMusic visual to auditory sensory substitution device (SSD), thereby becoming the first reported case to date of dual proficiency with both devices. He offers a firsthand account into what he considers the great potential of combining sensory substitution devices with visual prostheses as part of a complete visual restoration protocol. While the Argus II retinal prosthesis alone provided him with immediate visual percepts by way of electrically stimulated phosphenes elicited by the device, the EyeMusic SSD requires extensive training from the onset. Yet following the extensive training program with the EyeMusic sensory substitution device, our subject reports that the sensory substitution device allowed him to experience a richer, more complex perceptual experience, that felt more "second nature" to him, while the Argus II prosthesis (which also requires training) did not allow him to achieve the same levels of automaticity and transparency. Following long-term use of the EyeMusic SSD, our subject reported that visual percepts representing mainly, but not limited to, colors portrayed by the EyeMusic SSD are elicited in association with auditory stimuli, indicating the acquisition of a high level of automaticity. Finally, the case study indicates an additive benefit to the combination of both devices on the user's subjective phenomenological visual experience.


Subject(s)
Visual Prosthesis , Adult , Blindness/surgery , Humans , Male , Phosphenes , Vision Disorders
15.
Sci Rep ; 12(1): 4330, 2022 03 14.
Article in English | MEDLINE | ID: mdl-35288597

ABSTRACT

Unlike sighted individuals, congenitally blind individuals have little to no experience with face shapes. Instead, they rely on non-shape cues, such as voices, to perform character identification. The extent to which face-shape perception can be learned in adulthood via a different sensory modality (i.e., not vision) remains poorly explored. We used a visual-to-auditory Sensory Substitution Device (SSD) that enables conversion of visual images to the auditory modality while preserving their visual characteristics. Expert SSD users were systematically taught to identify cartoon faces via audition. Following a tailored training program lasting ~ 12 h, congenitally blind participants successfully identified six trained faces with high accuracy. Furthermore, they effectively generalized their identification to the untrained, inverted orientation of the learned faces. Finally, after completing the extensive 12-h training program, participants learned six new faces within 2 additional hours of training, suggesting internalization of face-identification processes. Our results document for the first time that facial features can be processed through audition, even in the absence of visual experience across the lifespan. Overall, these findings have important implications for both non-visual object recognition and visual rehabilitation practices and prompt the study of the neural processes underlying auditory face perception in the absence of vision.


Subject(s)
Auditory Perception , Visual Perception , Adult , Blindness , Head , Humans , Learning
16.
Front Hum Neurosci ; 16: 1058093, 2022.
Article in English | MEDLINE | ID: mdl-36776219

ABSTRACT

Humans, like most animals, integrate sensory input in the brain from different sensory modalities. Yet humans are distinct in their ability to grasp symbolic input, which is interpreted into a cognitive mental representation of the world. This representation merges with external sensory input, providing modality integration of a different sort. This study evaluates the Topo-Speech algorithm in the blind and visually impaired. The system provides spatial information about the external world by applying sensory substitution alongside symbolic representations in a manner that corresponds with the unique way our brains acquire and process information. This is done by conveying spatial information, customarily acquired through vision, through the auditory channel, in a combination of sensory (auditory) features and symbolic language (named/spoken) features. The Topo-Speech sweeps the visual scene or image, representing each object's identity by naming it in a spoken word while simultaneously conveying its location: the x-axis of the scene is mapped to the time at which the name is announced, and the y-axis to the pitch of the voice. This proof of concept study primarily explores the practical applicability of this approach in 22 visually impaired and blind individuals. The findings showed that individuals from both populations could effectively interpret and use the algorithm after a single training session. The blind showed an accuracy of 74.45%, while the visually impaired had an average accuracy of 72.74%. These results are comparable to those of the sighted, as shown in previous research, with all participants above chance level. As such, we demonstrate practically how aspects of spatial information can be transmitted through non-visual channels.
To complement the findings, we weigh in on debates concerning models of spatial knowledge (the persistent, cumulative, or convergent models) and the capacity for spatial representation in the blind. We suggest the present study's findings support the convergence model and the scenario that posits the blind are capable of some aspects of spatial representation as depicted by the algorithm comparable to those of the sighted. Finally, we present possible future developments, implementations, and use cases for the system as an aid for the blind and visually impaired.
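The x-to-time and y-to-pitch mapping described in this abstract can be sketched in a few lines of Python. This is an illustrative reconstruction, not the authors' implementation: the function name, grid size, sweep duration, and pitch range are all assumptions chosen here for concreteness.

```python
# Illustrative sketch of the Topo-Speech mapping: an object's horizontal
# position sets WHEN its spoken name is announced during a left-to-right
# sweep, and its vertical position sets the PITCH of the voice.
# Grid size, sweep duration, and pitch range are assumed, not from the paper.

def topo_speech_cue(name, x, y, grid=3, sweep_s=3.0, f_lo=110.0, f_hi=440.0):
    """Return (name, onset in seconds, voice pitch in Hz) for an object at
    column x, row y of a grid x grid scene, with (0, 0) the top-left cell."""
    onset = (x + 0.5) / grid * sweep_s          # x-axis -> time within sweep
    frac = 1.0 - (y + 0.5) / grid               # higher rows -> higher pitch
    pitch = f_lo * (f_hi / f_lo) ** frac        # geometric pitch spacing
    return name, round(onset, 2), round(pitch, 1)

# A cup in the top-right cell is announced late in the sweep, at a high pitch;
# a key in the bottom-left cell is announced early, at a low pitch.
print(topo_speech_cue("cup", x=2, y=0))
print(topo_speech_cue("key", x=0, y=2))
```

A full implementation would then schedule text-to-speech playback of each name at its onset time, pitch-shifted to the computed frequency.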

17.
Front Neurosci ; 16: 962817, 2022.
Article in English | MEDLINE | ID: mdl-36711132

ABSTRACT

As neuroscience and rehabilitative techniques advance, age-old questions concerning the visual experience of those who gain sight after blindness, once thought to be philosophical alone, take center stage and become the target for scientific inquiries. In this study, we employ a battery of visual perception tasks to study the unique experience of a small group of children who have undergone vision-restoring cataract removal surgery as part of the Himalayan Cataract Project. We tested their abilities to perceive in three dimensions (3D) using a binocular rivalry task and the Brock string task, perceive visual illusions, use cross-modal mappings between touch and vision, and spatially group based on geometric cues. Some of the children in this study gained a sense of sight for the first time in their lives, having been born with bilateral congenital cataracts, while others suffered late-onset blindness in one eye alone. This study simultaneously supports yet raises further questions concerning Hubel and Wiesel's critical periods theory and provides additional insight into Molyneux's problem, the ability to correlate vision with touch quickly. We suggest that our findings present a relatively unexplored intermediate stage of 3D vision development. Importantly, we spotlight some essential geometrical perception visual abilities that strengthen the idea that spontaneous geometry intuitions arise independently from visual experience (and education), thus replicating and extending previous studies. We incorporate a new model, not previously explored, of testing children with congenital cataract removal surgeries who perform the task via vision. In contrast, previous work has explored these abilities in the congenitally blind via touch. 
Taken together, our findings provide insight into the development of what is commonly known as the visual system in the visually deprived and highlight the need to further empirically explore an amodal, task-based interpretation of specializations in the development and structure of the brain. Moreover, we propose a novel objective method, based on a simple binocular rivalry task and the Brock string task, for determining congenital (early) vs. late blindness where medical history and records are partial or lacking (e.g., as is often the case in cataract removal cases).

18.
Front Hum Neurosci ; 15: 713931, 2021.
Article in English | MEDLINE | ID: mdl-34803631

ABSTRACT

Manipulating sensory and motor cues can cause an illusionary perception of ownership of a fake body part. Presumably, the illusion can work as long as the false body part's position and appearance are anatomically plausible. Here, we introduce an illusion that challenges past assumptions on body ownership. We used virtual reality to switch and mirror participants' views of their hands. When a participant moves their physical hand, they see the incongruent virtual hand moving. The result is an anatomically implausible configuration of the fake hand. Despite the hand switch, participants reported significant body ownership sensations over the virtual hands. In the first, between-group experiment, we found that the strength of body ownership over the incongruent hands was similar to that of congruent hands. In the second, within-group experiment, however, anatomical incongruency significantly decreased body ownership. Still, participants reported significant body ownership sensations of the switched hands. Curiously, we found that perceived levels of agency mediate the effect of anatomical congruency on body ownership. These findings offer a fresh perspective on the relationship between anatomical plausibility and assumed body ownership. We propose that goal-directed and purposeful actions can override anatomical plausibility constraints and discuss this in the context of the immersive properties of virtual reality.

19.
Sci Rep ; 11(1): 11944, 2021 06 07.
Article in English | MEDLINE | ID: mdl-34099756

ABSTRACT

Can humans extend and augment their natural perceptions during adulthood? Here, we address this fascinating question by investigating the extent to which it is possible to successfully augment visual spatial perception to include the backward spatial field (a region where humans are naturally blind) via other sensory modalities (i.e., audition). We thus developed a sensory-substitution algorithm, the "Topo-Speech" which conveys identity of objects through language, and their exact locations via vocal-sound manipulations, namely two key features of visual spatial perception. Using two different groups of blindfolded sighted participants, we tested the efficacy of this algorithm to successfully convey location of objects in the forward or backward spatial fields following ~ 10 min of training. Results showed that blindfolded sighted adults successfully used the Topo-Speech to locate objects on a 3 × 3 grid either positioned in front of them (forward condition), or behind their back (backward condition). Crucially, performances in the two conditions were entirely comparable. This suggests that novel spatial sensory information conveyed via our existing sensory systems can be successfully encoded to extend/augment human perceptions. The implications of these results are discussed in relation to spatial perception, sensory augmentation and sensory rehabilitation.


Subject(s)
Algorithms , Auditory Perception/physiology , Space Perception/physiology , Touch Perception/physiology , Visual Perception/physiology , Adaptation, Physiological/physiology , Adult , Blindness/physiopathology , Female , Humans , Male , Psychomotor Performance/physiology , Visual Cortex/physiology , Young Adult