ABSTRACT
Spatial understanding is a multisensory construct, yet hearing is the only natural sense enabling simultaneous perception of the entire 3D space. To test whether such spatial understanding depends on auditory experience, we study congenitally hearing-impaired users of assistive devices. We apply an in-house technology which, inspired by the auditory system, performs intensity weighting to represent external spatial positions and motion on the fingertips. We find highly impaired auditory spatial capabilities for tracking moving sources, which, in line with the "critical periods" theory, emphasizes the role of nature in sensory development. Meanwhile, for tactile and audio-tactile spatial motion perception, the hearing-impaired show performance similar to that of typically hearing individuals. The immediate availability of a 360° representation of external space through touch, despite the lack of any such experience during the lifetime, points to the significant role of nurture in the development of spatial perception, and to its amodal character. The findings show promise for advancing multisensory solutions for rehabilitation.
ABSTRACT
This study explores spatial perception of depth by employing a novel proof-of-concept sensory substitution algorithm. The algorithm taps into existing cognitive scaffolds such as language and cross-modal correspondences by naming objects in the scene while representing their elevation and depth through manipulation of the auditory properties assigned to each axis. While the representation of verticality used a previously tested correspondence with pitch, the representation of depth employed an ecologically inspired manipulation based on the loss of gain and the filtering of higher-frequency sounds over distance. The study, involving 40 participants, seven of whom were blind (5) or visually impaired (2), investigates how intrinsic this ecologically inspired mapping of auditory cues for depth is by comparing it to an interchanged condition in which the mappings of the two axes are swapped. All participants successfully learned to use the algorithm following a very brief period of training, with the blind and visually impaired participants showing levels of success similar to those of their sighted counterparts. A significant difference was found at baseline between the two conditions, indicating the intuitiveness of the original ecologically inspired mapping. Despite this, participants achieved similar success rates following the training in both conditions. The findings indicate that both intrinsic and learned cues come into play in depth perception. Moreover, they suggest that through perceptual learning, novel sensory mappings can be trained in adulthood. Regarding the blind and visually impaired, the results also support the convergence view, which holds that with training, their spatial abilities can converge with those of the sighted. Finally, we discuss how the algorithm can open new avenues for accessibility technologies, virtual reality, and other practical applications.
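To make the axis mappings concrete, the following is a minimal Python sketch of the ecologically inspired depth encoding described above: elevation drives pitch, while depth attenuates gain and low-pass filters the sound. The pitch range, gain law, and filter constants are illustrative assumptions, not the algorithm's published parameters.

```python
# A minimal sketch of the ecologically inspired depth mapping described above.
# The pitch range, gain law, and filter constants are assumptions chosen for
# illustration; the published algorithm's parameters are not given in the abstract.
import numpy as np
from scipy.signal import butter, lfilter

SAMPLE_RATE = 44100

def sonify_point(elevation, depth, duration=0.5):
    """Render a tone for an object at a normalized elevation (0=bottom, 1=top)
    and a depth in meters. Elevation maps to pitch; depth attenuates gain and
    low-pass filters the sound, mimicking how air absorbs high frequencies."""
    t = np.linspace(0, duration, int(SAMPLE_RATE * duration), endpoint=False)
    pitch = 220.0 * 2 ** (elevation * 2)           # assumed range: 220-880 Hz
    tone = np.sign(np.sin(2 * np.pi * pitch * t))  # square wave: rich in harmonics
    gain = 1.0 / max(depth, 1.0) ** 2              # assumed inverse-square gain loss
    cutoff = max(8000.0 / max(depth, 1.0), 500.0)  # assumed distance-dependent low-pass
    b, a = butter(2, cutoff / (SAMPLE_RATE / 2), btype="low")
    return gain * lfilter(b, a, tone)

# Example: a near object high in the scene vs. a far object at the same height.
near = sonify_point(elevation=0.8, depth=1.0)
far = sonify_point(elevation=0.8, depth=8.0)  # quieter and duller than `near`
```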
Subjects
Algorithms, Blindness, Cues (Psychology), Depth Perception, Visually Impaired Persons, Humans, Male, Female, Adult, Depth Perception/physiology, Blindness/physiopathology, Middle Aged, Learning/physiology, Young Adult
ABSTRACT
The perception of signals from within the body, known as interoception, is increasingly recognized as a prerequisite for physical and mental health. This study is dedicated to the development of effective technological approaches for enhancing interoceptive abilities. We provide evidence of the effectiveness and practical feasibility of a novel real-time haptic heartbeat supplementation technology combining principles of biofeedback and sensory augmentation. In a randomized controlled study, we applied the developed naturalistic haptic feedback to a group of 30 adults, while another group of 30 adults received more traditional real-time visual heartbeat feedback. A single session of haptic, but not visual, heartbeat feedback resulted in increased interoceptive accuracy and confidence, as measured by the heart rate discrimination task, and in a shift of attention toward the body. Participants rated the developed technology as more helpful and pleasant than the visual feedback, indicating high user satisfaction. The study highlights the importance of matching the sensory characteristics of the feedback provided to the natural bodily prototype. Our work suggests that real-time haptic feedback might be a superior approach for strengthening the mind-body connection in interventions for physical and mental health.
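As an illustration of how beat-by-beat haptic supplementation could be driven in real time, here is a minimal Python sketch under stated assumptions: a normalized pulse signal is thresholded to detect beats, and each detected beat triggers one haptic tap. The sensor and actuator hooks are hypothetical stand-ins; the study's actual hardware and beat-detection pipeline are not described in the abstract.

```python
# A minimal sketch of real-time haptic heartbeat supplementation, assuming a
# pulse-sensor sample stream and a vibration actuator. `read_ppg_sample()` and
# `trigger_haptic_pulse()` are hypothetical stand-ins for device-specific I/O.
import time

THRESHOLD = 0.6      # assumed normalized PPG amplitude marking a beat
REFRACTORY_S = 0.3   # ignore re-crossings within 300 ms (max ~200 bpm)

def read_ppg_sample() -> float:
    """Hypothetical: return the latest normalized photoplethysmography sample."""
    raise NotImplementedError

def trigger_haptic_pulse(duration_s: float = 0.1) -> None:
    """Hypothetical: drive the wearable actuator for one heartbeat-like tap."""
    raise NotImplementedError

def run_feedback_loop():
    last_beat = 0.0
    prev = 0.0
    while True:
        sample = read_ppg_sample()
        now = time.monotonic()
        # A rising-edge threshold crossing outside the refractory window = one beat.
        if prev < THRESHOLD <= sample and now - last_beat > REFRACTORY_S:
            trigger_haptic_pulse()
            last_beat = now
        prev = sample
```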
Subjects
Psychological Biofeedback, Sensory Feedback, Heart Rate, Interoception, Touch Perception, Humans, Male, Female, Heart Rate/physiology, Adult, Young Adult, Sensory Feedback/physiology, Touch Perception/physiology, Visual Perception/physiology
ABSTRACT
People can use their sense of hearing to discern thermal properties, though they are for the most part unaware that they can do so. While people unequivocally claim that they cannot perceive the temperature of pouring water from the sound of it being poured, our research further strengthens the evidence that they can. This multimodal ability is implicitly acquired in humans, likely through perceptual learning over a lifetime of exposure to differences in the physical attributes of pouring water. In this study, we explore people's perception of this intriguing cross-modal correspondence and investigate the psychophysical foundations of this complex ecological mapping by employing machine learning. Our results show that not only can humans classify the sounds of pouring water by temperature in practice, but the physical characteristics underlying this phenomenon can also be classified by a pre-trained deep neural network.
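A minimal sketch of this kind of classifier follows, assuming a mel-spectrogram front end and an ImageNet-pretrained ResNet-18 repurposed for two classes. The abstract does not name the network actually used, and the new classification head would still need fine-tuning on labeled hot/cold pouring recordings.

```python
# A minimal sketch of classifying hot vs. cold pouring-water sounds with a
# pre-trained network. The mel-spectrogram front end and ResNet-18 backbone
# are assumptions for illustration, not the study's actual model.
import torch
import torchaudio
from torchvision.models import resnet18, ResNet18_Weights

mel = torchaudio.transforms.MelSpectrogram(sample_rate=16000, n_mels=64)

model = resnet18(weights=ResNet18_Weights.DEFAULT)
model.fc = torch.nn.Linear(model.fc.in_features, 2)  # two classes: cold, hot
model.eval()

def classify(waveform: torch.Tensor) -> int:
    """waveform: (1, n_samples) mono audio at 16 kHz. Returns the argmax class."""
    spec = mel(waveform).log1p()                  # (1, 64, time) log-mel spectrogram
    spec = spec.unsqueeze(0).repeat(1, 3, 1, 1)   # tile to 3 channels for ResNet
    with torch.no_grad():
        logits = model(spec)
    return int(logits.argmax(dim=1))              # 0 = cold, 1 = hot (assumed labels)
```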
ABSTRACT
The primary visual cortex (V1) in blindness is engaged in a wide spectrum of tasks and sensory modalities, including audition, touch, language, and memory. This widespread involvement raises questions regarding the constancy of its role and whether it might exhibit flexibility in its function over time, connecting to diverse network functions specific to task demands. This would suggest that reorganized V1 assumes a role akin to that of multiple-demand system regions. Alternatively, the varying patterns of plasticity in blind V1 may be attributed to individual factors, with different blind individuals recruiting V1 preferentially for different functions. In support of this, we recently showed that V1 functional connectivity (FC) varies greatly across blind individuals. But do these represent stable individual patterns of plasticity, or are they driven more by instantaneous changes, as would be expected of a multiple-demand system now inhabiting V1? Here, we tested whether individual FC patterns from the V1 of blind individuals are stable over time. We show that over two years, FC from V1 is unique and highly stable in a small sample of repeatedly sampled congenitally blind individuals. Further, using multivoxel pattern analysis, we demonstrate that the unique reorganization patterns of these individuals allow decoding of participant identity. Together with recent evidence for substantial individual differences in V1 connectivity, this indicates that there may be a consistent role for V1 in blindness, which may differ for each individual. Further, it suggests that the variability in visual reorganization across blind individuals could be used to seek stable neuromarkers for sight rehabilitation and assistive approaches.
Subjects
Blindness, Neuronal Plasticity, Humans, Blindness/physiopathology, Neuronal Plasticity/physiology, Male, Female, Adult, Middle Aged, Magnetic Resonance Imaging, Primary Visual Cortex/physiology, Longitudinal Studies, Visual Cortex/physiopathology, Visual Cortex/physiology, Visual Cortex/diagnostic imaging, Brain Mapping/methods
ABSTRACT
Exploring a novel approach to mental health technology, this study illuminates the intricate interplay between exteroception (the perception of the external world) and interoception (the perception of the internal world). Drawing on principles of sensory substitution, we investigated how interoceptive signals, particularly respiration, could be conveyed through exteroceptive modalities, namely vision and hearing. To this end, we developed a unique, immersive multisensory environment that translates respiratory signals in real time into dynamic visual and auditory stimuli. The system was evaluated with a battery of psychological assessments, and the findings indicate a significant increase in participants' interoceptive sensibility and an enhancement of the state of flow, signifying immersive and positive engagement with the experience. Furthermore, a correlation between these two variables emerged, revealing a bidirectional enhancement between the state of flow and interoceptive sensibility. Our research is the first to present a sensory substitution approach for mapping between interoceptive and exteroceptive senses, and specifically to propose it as a transformative method for mental health interventions, paving the way for future research.
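A minimal sketch of the kind of per-frame mapping such a system needs, assuming a normalized respiration amplitude drives one auditory and one visual parameter; the study's actual immersive stimulus design is far richer, and these linear ranges are illustrative assumptions only.

```python
# A minimal sketch of a real-time respiration-to-stimulus mapping. The ranges
# and linear form are assumptions; the study's environment is not specified here.
def map_breath_to_stimuli(breath: float):
    """breath: normalized respiration amplitude in [0, 1] (0 = exhaled, 1 = inhaled).
    Returns (audio_gain, visual_radius_px) for the current frame."""
    audio_gain = 0.2 + 0.8 * breath    # louder as the lungs fill
    visual_radius = 50 + 150 * breath  # an expanding/contracting circle
    return audio_gain, visual_radius

# Example: mid-inhalation.
gain, radius = map_breath_to_stimuli(0.5)  # -> (0.6, 125.0)
```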
Subjects
Interoception, Humans, Interoception/physiology, Female, Male, Adult, Young Adult, Acoustic Stimulation, Respiration, Photic Stimulation
ABSTRACT
Each sense serves a different specific function in spatial perception, and together they form a joint multisensory spatial representation. For instance, hearing enables localization in the entire 3D external space, while touch traditionally allows localization only of objects on the body (i.e., within the peripersonal space alone). We use an in-house touch-motion algorithm (TMA) to evaluate individuals' capability to understand externalized 3D information through touch, a skill not acquired during individual development or in evolution. Four experiments demonstrate quick learning and high accuracy in localizing motion from vibrotactile inputs on the fingertips, as well as successful audio-tactile integration in background noise. Subjective reports from some participants imply spatial experiences through visualization and the perception of tactile "moving" sources beyond reach. We discuss our findings with respect to developing new skills in the adult brain, including combining a newly acquired "sense" with an existing one, and computation-based brain organization.
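A minimal sketch of the intensity-weighting idea follows, assuming four fingertip actuators at fixed azimuths and a Gaussian panning kernel by analogy to interaural level differences; the TMA's actual actuator layout and weighting function are not specified in the abstract.

```python
# A minimal sketch of intensity weighting for touch-motion rendering: a source
# azimuth is represented by distributing vibration intensity across fingertip
# actuators. The layout, kernel, and sharpness are assumptions for illustration.
import numpy as np

FINGER_AZIMUTHS = np.array([-135.0, -45.0, 45.0, 135.0])  # assumed: 4 fingertips covering 360°

def intensity_weights(source_azimuth_deg: float, sharpness: float = 60.0) -> np.ndarray:
    """Return per-fingertip vibration intensities (0-1) for a source direction."""
    # Wrapped angular distance between the source and each actuator.
    diff = (source_azimuth_deg - FINGER_AZIMUTHS + 180.0) % 360.0 - 180.0
    weights = np.exp(-(diff / sharpness) ** 2)  # assumed Gaussian panning kernel
    return weights / weights.max()              # nearest actuator at full intensity

def render_motion(start_deg, end_deg, steps=20):
    """Sample a moving source along an arc into a sequence of intensity frames."""
    return [intensity_weights(a) for a in np.linspace(start_deg, end_deg, steps)]

frames = render_motion(-90, 90)  # e.g., a source sweeping from left to right
```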
ABSTRACT
The primary visual cortex (V1) in individuals born blind is engaged in a wide spectrum of tasks and sensory modalities, including audition, touch, language, and memory. This widespread involvement raises questions regarding the constancy of its role and whether it might exhibit flexibility in its function over time, connecting to diverse network functions in response to task-specific demands. This would suggest that reorganized V1 takes on a role similar to that of cognitive multiple-demand system regions. Alternatively, it is possible that the varying patterns of plasticity observed in the blind V1 can be attributed to individual factors, whereby different blind individuals recruit V1 for different functions, highlighting the immense idiosyncrasy of plasticity. In support of this second account, we have recently shown that V1 functional connectivity varies greatly across blind individuals. But do these represent stable individual patterns of plasticity or merely instantaneous changes, as would be expected of a multiple-demand system now inhabiting V1? Here we tested whether individual connectivity patterns from the visual cortex of blind individuals are stable over time. We show that over two years, fMRI functional connectivity from the primary visual cortex is unique and highly stable in a small sample of repeatedly sampled congenitally blind individuals. Further, using multivoxel pattern analysis, we demonstrate that the unique reorganization patterns of these individuals allow decoding of participant identity. Together with recent evidence for substantial individual differences in visual cortex connectivity, this indicates that there may be a consistent role for the visual cortex in blindness, which may differ for each individual. Further, it suggests that the variability in visual reorganization across blind individuals could be used to seek stable neuromarkers for sight rehabilitation and assistive approaches.
ABSTRACT
Motor actions, such as reaching or grasping, can be decoded from fMRI activity of the early visual cortex (EVC) in sighted humans. This effect may depend on vision or visual imagery, or alternatively, could be driven by mechanisms independent of visual experience. Here, we show that reaching movements in different directions can be reliably decoded from fMRI activity of the EVC in congenitally blind humans (both sexes). Thus, neither visual experience nor visual imagery is necessary for the EVC to represent action-related information. We also demonstrate that, within the EVC of blind humans, the accuracy of reach-direction decoding is highest in areas typically representing foveal vision and gradually decreases in areas typically representing peripheral vision. We propose that this might indicate the existence of a predictive, hard-wired mechanism for aligning action and visual spaces. This mechanism might send action-related information primarily to the high-resolution foveal visual areas, which are critical for guiding and online correction of motor actions. Finally, we show that, beyond the EVC, the decoding of reach direction in blind humans is most accurate in dorsal stream areas known to be critical for visuo-spatial and visuo-motor integration in the sighted. Thus, these areas can develop space and action representations even in the lifelong absence of vision. Overall, our findings in congenitally blind humans match previous research on the action system in the sighted and suggest that the development of action representations in the human brain might be largely independent of visual experience.
SIGNIFICANCE STATEMENT: Early visual cortex (EVC) was traditionally thought to process only visual signals from the retina. Recent studies proved this account incomplete and showed EVC involvement in many activities not directly related to incoming visual information, such as memory, sound, or action processing. Is the EVC involved in these activities because of visual imagery? Here, we show robust reach-direction representation in the EVC of humans born blind. This demonstrates that the EVC can represent actions independently of vision and visual imagery. Beyond the EVC, we found that reach-direction representation in blind humans is strongest in dorsal brain areas, which are critical for action processing in the sighted. This suggests that the development of action representations in the human brain is largely independent of visual experience.
Subjects
Visual Cortex, Visual Perception, Male, Female, Humans, Brain, Visual Cortex/diagnostic imaging, Brain Mapping, Blindness, Magnetic Resonance Imaging
ABSTRACT
BACKGROUND: The default mode network (DMN) is a large-scale brain network tightly correlated with the self and self-referential processing, activated by intrinsic tasks and deactivated by externally directed tasks.
OBJECTIVE: In this study, we aim to investigate the novel approach of default mode activation during progressive muscle relaxation (PMR) and to examine whether differential activation patterns result from the movement of different body parts.
METHODS: We employed neuroimaging to investigate DMN activity during simple body movements performed as part of progressive muscle relaxation, focusing on differentiating the neural responses to facial movements from those to movements of other body parts.
RESULTS: Our results show that the movement of different body parts led to deactivation in several DMN nodes, namely the temporal poles, hippocampus, medial prefrontal cortex (mPFC), and posterior cingulate cortex. However, facial movement induced an inverted and selective positive BOLD pattern precisely in some of these areas. Moreover, areas in the temporal poles selective for face movement showed functional connectivity not only with the hippocampus and mPFC but also with the nucleus accumbens.
CONCLUSIONS: Our findings suggest that both conceptual and embodied self-related processes, including body movements during progressive muscle relaxation, may be mapped onto shared brain networks. This could enhance our understanding of how practices like PMR influence DMN activity and potentially offer insights to inform therapeutic strategies that rely on mindful body movements.
Subjects
Brain Mapping, Default Mode Network, Brain/physiology, Cingulate Gyrus, Hippocampus/diagnostic imaging, Magnetic Resonance Imaging, Nerve Net/diagnostic imaging, Nerve Net/physiology
ABSTRACT
Accumulating evidence over the last decades has given rise to a new theory of brain organization, positing that cortical regions are recruited for specific tasks irrespective of the sensory modality via which information is channeled. For instance, the visual reading network has been shown to be recruited for reading via the tactile Braille code in congenitally blind adults. However, how rapidly non-typical sensory input modulates activity in typically visual regions remains to be explored. To this aim, we developed a novel reading orthography, termed OVAL, enabling congenitally blind adults to quickly acquire reading via the auditory modality. OVAL uses the EyeMusic, a visual-to-auditory sensory substitution device (SSD), to transform visually presented letters, optimized for auditory transformation, into sound. Using fMRI, we show modulation in the right ventral visual stream following 2 h of same-day training. Crucially, following more extensive training (~12 h), we show that OVAL reading recruits the left ventral visual stream, including the location of the Visual Word Form Area, a key grapheme-responsive region within the visual reading network. Our results show that while the recruitment of the deprived ventral visual stream by auditory stimuli can already be observed after 2 h of SSD training, computation-selective cross-modal recruitment requires longer training to establish.
Subjects
Brain, Learning, Adult, Humans, Touch, Brain Mapping, Sound, Magnetic Resonance Imaging, Blindness
ABSTRACT
[This corrects the article DOI: 10.1371/journal.pone.0250281.].
ABSTRACT
Current advancements in both technology and science allow us to manipulate our sensory modalities in new and unexpected ways. In the present study, we explore the potential of expanding what we perceive through our natural senses by utilizing a visual-to-auditory sensory substitution device (SSD), the EyeMusic, an algorithm that converts images to sound. The EyeMusic was initially developed to allow blind individuals to create a spatial representation of information arriving from a video feed at a slow sampling rate. Here, in an initial proof-of-concept study, we used it instead to cover the areas outside the visual field of sighted individuals, testing their ability to combine visual information with surrounding auditory sonification of visual information. Participants were tasked with recognizing and adequately placing stimuli, using sound to represent the areas outside the standard human visual field. They were asked to report the shapes' identities as well as their spatial orientation (front/right/back/left), requiring combined visual (90° frontal) and auditory input (the remaining 270°) for successful performance of the task; content in both vision and audition was presented in a sweeping clockwise motion around the participant. We found that participants performed well above chance level after a brief 1-h online training session and one on-site training session averaging 20 min, and in some cases could even draw a 2D representation of the image. Participants could also generalize, recognizing new shapes they were not explicitly trained on. Our findings provide an initial proof of concept that sensory augmentation devices and techniques can be used in combination with natural sensory information to expand the natural fields of sensory perception.
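A minimal sketch of the sweep-based sonification follows, assuming each object is reduced to a single tone whose onset time encodes azimuth within a clockwise sweep and whose pitch encodes elevation; the actual EyeMusic encoding (image columns over time, rows as musical notes, color as timbre) is considerably richer.

```python
# A minimal sketch of sweeping a 360° scene into a clockwise sonification, in
# the spirit of the augmentation above. The sweep duration, tone length, and
# pitch range are assumptions for illustration.
import numpy as np

SWEEP_SECONDS = 4.0  # assumed duration of one full clockwise sweep
SAMPLE_RATE = 22050

def sonify_scene(objects):
    """objects: list of (azimuth_deg in [0, 360), elevation in [0, 1]) tuples.
    Returns one sweep cycle of audio with a short tone per object."""
    out = np.zeros(int(SWEEP_SECONDS * SAMPLE_RATE))
    for azimuth, elevation in objects:
        onset = int((azimuth / 360.0) * SWEEP_SECONDS * SAMPLE_RATE)  # angle -> time
        pitch = 262.0 * 2 ** (elevation * 2)                          # assumed C4-C6 range
        t = np.arange(int(0.15 * SAMPLE_RATE)) / SAMPLE_RATE
        tone = 0.5 * np.sin(2 * np.pi * pitch * t)
        end = min(onset + tone.size, out.size)
        out[onset:end] += tone[: end - onset]
    return out

# Example: one shape behind the listener (180°) and one to the right (90°).
audio = sonify_scene([(180.0, 0.3), (90.0, 0.7)])
```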
ABSTRACT
The Extrastriate Body Area (EBA) participates in the visual perception of, and motor actions involving, body parts. We recently showed that the EBA's perceptual function develops independently of visual experience, responding to stimuli carrying body-part information in a supramodal fashion. However, it is still unclear whether the EBA similarly maintains its action-related function. Here, we used fMRI to study motor-evoked responses and connectivity patterns in the congenitally blind brain. We found that, unlike the case of perception, the EBA does not develop an action-related response without visual experience. In addition, we show that congenital blindness alters the EBA's connectivity profile in a counterintuitive way: functional connectivity with sensorimotor cortices dramatically decreases, whereas connectivity with perception-related visual occipital cortices remains high. To the best of our knowledge, we show for the first time that action-related functions and connectivity in the visual cortex may be contingent on visuomotor experience. We further discuss the role of the EBA within the context of visuomotor control and predictive coding theory.
ABSTRACT
V6 is a retinotopic area located in the dorsal visual stream that integrates eye movements with retinal and visuo-motor signals. Despite the known role of V6 in visual motion, it is unknown whether it is involved in navigation and how sensory experience shapes its functional properties. We explored the involvement of V6 in egocentric navigation in sighted participants and in congenitally blind (CB) participants navigating via an in-house distance-to-sound sensory substitution device (SSD), the EyeCane. We performed two fMRI experiments on two independent datasets. In the first experiment, CB and sighted participants navigated the same mazes: the sighted performed the mazes via vision, while the CB performed them via audition, before and after a training session with the EyeCane SSD. In the second experiment, a group of sighted participants performed a motor topography task. Our results show that right V6 (rhV6) is selectively involved in egocentric navigation independently of the sensory modality used. Indeed, after training, rhV6 of the CB is selectively recruited for auditory navigation, similarly to rhV6 in the sighted. Moreover, we found activation for body movement in area V6, which putatively contributes to its involvement in egocentric navigation. Taken together, our findings suggest that area rhV6 is a unique hub that transforms spatially relevant sensory information into an egocentric representation for navigation. While vision is clearly the dominant modality, rhV6 is in fact a supramodal area that can develop its selectivity for navigation in the absence of visual experience.
Subjects
Auditory Perception, Movement, Humans, Movement/physiology, Motion (Physics), Hearing, Eye Movements
ABSTRACT
Greater cortical gyrification (GY) is linked with enhanced cognitive abilities and is also negatively related to cortical thickness (CT). Individuals who are congenitally blind (CB) exhibit remarkable functional brain plasticity, which enables them to perform certain non-visual and cognitive tasks with supranormal abilities. For instance, extensive training using touch and audition enables CB people to develop impressive skills, and there is evidence linking these skills to cross-modal activation of primary visual areas. Congenital blindness is accompanied by a cascade of anatomical, morphometric, and functional-connectivity changes in non-visual structures and volumetric reductions in several components of the visual system, and CT is also increased in the CB. No study to date has explored GY changes in this population, or how variations in CT relate to GY changes in the CB. T1-weighted 3D structural magnetic resonance imaging scans were acquired to examine the effects of congenital visual deprivation on cortical structures in a healthy sample of 11 CB individuals (6 male) and 16 age-matched sighted controls (SC; 10 male). In this report, we show for the first time an increase in GY in several brain areas of CB individuals compared to SC, and a negative relationship between GY and CT in several different cortical areas of the CB brain. We discuss the implications of our findings and the contributions of developmental factors and synaptogenesis to the relationship between CT and GY in CB individuals compared to SC.
ABSTRACT
Previous evidence suggests that visual experience is crucial for the emergence and tuning of the typical neural system for face recognition. To challenge this conclusion, we trained congenitally blind adults to recognize faces via a visual-to-auditory sensory substitution device (SSD). Our results showed a preference for trained faces over other SSD-conveyed visual categories in the fusiform gyrus and in other known face-responsive regions of the deprived ventral visual stream. We also observed parametric modulation in the same cortical regions for face orientation (upright vs. inverted) and face novelty (trained vs. untrained). Our results strengthen the conclusion that there is a predisposition for sensory-independent and computation-specific processing in specific cortical regions, which can be retained through life-long sensory deprivation, independently of previous perceptual experience. They also highlight that, if the right training is provided, such cortical preference maintains its tuning to what were considered visual-specific face features.
ABSTRACT
The phenomenology of the blind has provided an age-old, unparalleled means of exploring the enigmatic link between brain and mind. This paper delves into the unique phenomenological experience of a man who became blind in adulthood. He subsequently underwent both an Argus II retinal prosthesis implantation with its accompanying training, and extensive training on the EyeMusic visual-to-auditory sensory substitution device (SSD), thereby becoming the first reported case to date of dual proficiency with both devices. He offers a firsthand account of what he considers the great potential of combining sensory substitution devices with visual prostheses as part of a complete visual restoration protocol. While the Argus II retinal prosthesis alone provided him with immediate visual percepts by way of electrically stimulated phosphenes elicited by the device, the EyeMusic SSD required extensive training from the onset. Yet following the extensive EyeMusic training program, our subject reports that the SSD allowed him a richer, more complex perceptual experience that felt more "second nature" to him, while the Argus II prosthesis (which also requires training) did not allow him to achieve the same levels of automaticity and transparency. Following long-term use of the EyeMusic SSD, our subject reported that visual percepts, mainly but not exclusively of the colors portrayed by the EyeMusic, are elicited in association with auditory stimuli, indicating the acquisition of a high level of automaticity. Finally, the case study indicates an additive benefit of combining both devices on the user's subjective phenomenological visual experience.
Subjects
Visual Prostheses, Adult, Blindness/surgery, Humans, Male, Phosphenes, Vision Disorders
ABSTRACT
Unlike sighted individuals, congenitally blind individuals have little to no experience with face shapes. Instead, they rely on non-shape cues, such as voices, to identify others. The extent to which face-shape perception can be learned in adulthood via a different sensory modality (i.e., not vision) remains poorly explored. We used a visual-to-auditory sensory substitution device (SSD) that converts visual images to the auditory modality while preserving their visual characteristics. Expert SSD users were systematically taught to identify cartoon faces via audition. Following a tailored training program lasting ~12 h, congenitally blind participants successfully identified six trained faces with high accuracy. Furthermore, they effectively generalized their identification to the untrained, inverted orientation of the learned faces. Finally, after completing the extensive 12-h training program, participants learned six new faces within 2 additional hours of training, suggesting internalization of the face-identification process. Our results document for the first time that facial features can be processed through audition, even in the absence of visual experience across the lifespan. Overall, these findings have important implications for both non-visual object recognition and visual rehabilitation practices, and prompt the study of the neural processes underlying auditory face perception in the absence of vision.
Subjects
Auditory Perception, Visual Perception, Adult, Blindness, Head, Humans, Learning
ABSTRACT
Humans, like most animals, integrate sensory input in the brain from different sensory modalities. Yet humans are distinct in their ability to grasp symbolic input, which is interpreted into a cognitive mental representation of the world. This representation merges with external sensory input, providing modality integration of a different sort. This study evaluates the Topo-Speech algorithm in the blind and visually impaired. The system provides spatial information about the external world by applying sensory substitution alongside symbolic representations, in a manner that corresponds with the unique way our brains acquire and process information. It conveys spatial information, customarily acquired through vision, through the auditory channel, combining sensory (auditory) features with symbolic language (named/spoken) features. Topo-Speech sweeps the visual scene or image, representing each object's identity by naming it in a spoken word and simultaneously conveying its location: the x-axis of the scene is mapped to the time at which the object is announced, and the y-axis to the pitch of the voice (a sketch of this mapping follows below). This proof-of-concept study primarily explores the practical applicability of this approach in 22 visually impaired and blind individuals. The findings showed that individuals from both populations could effectively interpret and use the algorithm after a single training session. The blind showed an accuracy of 74.45%, while the visually impaired had an average accuracy of 72.74%. These results are comparable to those of the sighted, as shown in previous research, with all participants performing above chance level. As such, we demonstrate practically how aspects of spatial information can be transmitted through non-visual channels. To complement the findings, we weigh in on debates concerning models of spatial knowledge (the persistent, cumulative, or convergent models) and the capacity for spatial representation in the blind. We suggest the present study's findings support the convergence model and the scenario that posits that the blind are capable of some aspects of spatial representation, as depicted by the algorithm, comparable to those of the sighted. Finally, we present possible future developments, implementations, and use cases for the system as an aid for the blind and visually impaired.
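A minimal sketch of the Topo-Speech mapping under stated assumptions: speech synthesis and pitch shifting are left out, and the sketch only computes the announcement schedule; the sweep duration and pitch-scaling range are illustrative, not the published parameters.

```python
# A minimal sketch of the Topo-Speech mapping described above: each detected
# object's name is scheduled at a time proportional to its x position, and the
# voice pitch is scaled by its y position. The constants are assumptions.
SWEEP_SECONDS = 2.0
PITCH_MIN, PITCH_MAX = 0.8, 1.4  # assumed voice pitch-scaling range

def topo_speech_schedule(objects, frame_width, frame_height):
    """objects: list of (name, x_px, y_px). Returns (onset_s, pitch_scale, name)
    tuples, ordered left to right as the sweep would announce them."""
    events = []
    for name, x, y in objects:
        onset = (x / frame_width) * SWEEP_SECONDS                         # x -> time
        pitch = PITCH_MAX - (y / frame_height) * (PITCH_MAX - PITCH_MIN)  # top -> high pitch
        events.append((round(onset, 2), round(pitch, 2), name))
    return sorted(events)

# Example: a cup at the upper left and a chair at the lower right of the frame.
schedule = topo_speech_schedule(
    [("cup", 120, 80), ("chair", 520, 400)], frame_width=640, frame_height=480)
# -> [(0.38, 1.3, 'cup'), (1.62, 0.9, 'chair')]
```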