Results 1 - 5 of 5
1.
Neuropsychologia; 190: 108685, 2023 Nov 05.
Article in English | MEDLINE | ID: mdl-37741551

ABSTRACT

Accumulating evidence over recent decades has given rise to a new theory of brain organization, positing that cortical regions are recruited for specific tasks irrespective of the sensory modality via which information is channeled. For instance, the visual reading network has been shown to be recruited for reading via the tactile Braille code in congenitally blind adults. However, how rapidly non-typical sensory input modulates activity in typically visual regions remains largely unexplored. To this aim, we developed a novel reading orthography, termed OVAL, enabling congenitally blind adults to quickly acquire reading via the auditory modality. OVAL uses the EyeMusic, a visual-to-auditory sensory-substitution device (SSD), to transform visually presented letters, whose shapes are optimized for auditory transformation, into sound. Using fMRI, we show modulation in the right ventral visual stream following 2 h of same-day training. Crucially, following more extensive training (~12 h), we show that OVAL reading recruits the left ventral visual stream including the location of the Visual Word Form Area, a key grapheme-responsive region within the visual reading network. Our results show that while after 2 h of SSD training we can already observe the recruitment of the deprived ventral visual stream by auditory stimuli, computation-selective cross-modal recruitment requires longer training to establish.


Subject(s)
Brain, Learning, Adult, Humans, Touch, Brain Mapping, Sound, Magnetic Resonance Imaging, Blindness
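These studies build on visual-to-auditory sensory substitution, in which a visual image is systematically converted into sound. As an illustration only, the short Python sketch below shows the general principle commonly used by such devices: the image is scanned column by column over time, vertical position is mapped to pitch, and pixel brightness is mapped to loudness. The actual EyeMusic algorithm is more elaborate (it uses a pentatonic scale and musical-instrument timbres to convey color), and every parameter here (frequency range, scan duration, sample rate) is an assumption chosen for clarity, not a published specification.

import numpy as np

def image_to_soundscape(image, duration_s=2.0, sample_rate=22050,
                        f_min=220.0, f_max=1760.0):
    """Convert a 2D grayscale image (values in [0, 1]) into a mono waveform.

    Columns are played left to right over time, rows map to pitch (top =
    high), and brightness scales the loudness of each row's tone.
    """
    n_rows, n_cols = image.shape
    samples_per_col = int(duration_s * sample_rate / n_cols)
    t = np.arange(samples_per_col) / sample_rate
    freqs = np.geomspace(f_max, f_min, n_rows)   # top row -> highest pitch
    segments = []
    for col in range(n_cols):                    # left-to-right scan = time
        column = image[:, col]
        tones = np.sin(2 * np.pi * freqs[:, None] * t)   # one sine per row
        segments.append((column[:, None] * tones).sum(axis=0))
    sound = np.concatenate(segments)
    peak = np.abs(sound).max()
    return sound / peak if peak > 0 else sound

# Example: a bright diagonal stroke becomes a descending pitch sweep.
audio = image_to_soundscape(np.eye(32))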
2.
Front Neurosci; 16: 921321, 2022.
Article in English | MEDLINE | ID: mdl-36263367

ABSTRACT

Previous evidence suggests that visual experience is crucial for the emergence and tuning of the typical neural system for face recognition. To challenge this conclusion, we trained congenitally blind adults to recognize faces via visual-to-auditory sensory substitution (SSD). Our results showed a preference for trained faces over other SSD-conveyed visual categories in the fusiform gyrus and in other known face-responsive regions of the deprived ventral visual stream. We also observed a parametric modulation in the same cortical regions for face orientation (upright vs. inverted) and face novelty (trained vs. untrained). Our results strengthen the conclusion that there is a predisposition for sensory-independent and computation-specific processing in specific cortical regions that can be retained despite life-long sensory deprivation, independently of previous perceptual experience. They also highlight that, given appropriate training, this cortical preference maintains its tuning to what were considered vision-specific face features.

3.
Sci Rep; 12(1): 4330, 2022 Mar 14.
Article in English | MEDLINE | ID: mdl-35288597

ABSTRACT

Unlike sighted individuals, congenitally blind individuals have little to no experience with face shapes. Instead, they rely on non-shape cues, such as voices, to perform character identification. The extent to which face-shape perception can be learned in adulthood via a different sensory modality (i.e., not vision) remains poorly explored. We used a visual-to-auditory Sensory Substitution Device (SSD) that enables conversion of visual images to the auditory modality while preserving their visual characteristics. Expert SSD users were systematically taught to identify cartoon faces via audition. Following a tailored training program lasting ~12 h, congenitally blind participants successfully identified six trained faces with high accuracy. Furthermore, they effectively generalized their identification to the untrained, inverted orientation of the learned faces. Finally, after completing the extensive 12-h training program, participants learned six new faces within 2 additional hours of training, suggesting internalization of face-identification processes. Our results document for the first time that facial features can be processed through audition, even in the absence of visual experience across the lifespan. Overall, these findings have important implications for both non-visual object recognition and visual rehabilitation practices and prompt the study of the neural processes underlying auditory face perception in the absence of vision.


Subject(s)
Auditory Perception, Visual Perception, Adult, Blindness, Head, Humans, Learning
4.
PLoS One; 15(11): e0242619, 2020.
Article in English | MEDLINE | ID: mdl-33237931

ABSTRACT

Reading is a unique human cognitive skill, and its acquisition has been shown to extensively affect both brain organization and neuroanatomy. In contrast to literacy rates among sighted Western readers, literacy rates via tactile reading systems such as Braille are declining, posing an alarming threat to literacy among non-visual readers. This decline has many causes, including the length of training needed to master Braille (which must also include extensive tactile sensitivity exercises), the lack of proper Braille instruction, and the high cost of Braille devices. The far-reaching consequences of low literacy rates raise the need to develop alternative, cheap, and easy-to-master non-visual reading systems. To this aim, we developed OVAL, a new auditory orthography based on a visual-to-auditory sensory-substitution algorithm. Here we present its efficacy for successful word reading and investigate the extent to which redundant character-defining features (i.e., specific colors added to letters and conveyed into audition via different musical instruments) facilitate or impede auditory reading outcomes. We tested two groups of blindfolded sighted participants who were exposed to either a monochromatic or a color version of OVAL. First, we showed that even before training, all participants were able to discriminate between 11 OVAL characters significantly above chance level. Following 6 hours of dedicated OVAL training, participants were able to identify all the learned characters, differentiate them from untrained letters, and read short words/pseudo-words of up to 5 characters. The Color group outperformed the Monochromatic group in all tasks, suggesting that redundant character features are beneficial for auditory reading. Overall, these results suggest that OVAL is a promising auditory-reading tool for blind individuals and people with reading deficits, as well as for the investigation of reading-specific processing dissociated from the visual modality.


Subject(s)
Algorithms, Auditory Perception, Brain Mapping, Color Perception, Reading, Sensory Aids, Adolescent, Adult, Blindness/physiopathology, Female, Humans, Male
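The color version of OVAL described above adds a redundant cue by rendering letters of different colors with different musical instruments, while pitch continues to carry the spatial information. The sketch below (same assumptions as the earlier one) only illustrates that redundancy idea: the monochromatic condition uses a pure tone, and hypothetical color conditions add distinct harmonic profiles standing in for instruments. The color names and harmonic weights are illustrative assumptions, not the published OVAL mapping.

import numpy as np

SAMPLE_RATE = 22050

# Hypothetical timbres: relative amplitudes of the first few harmonics.
TIMBRES = {
    "white": [1.0],                  # pure tone (monochromatic baseline)
    "red":   [1.0, 0.6, 0.3],        # brighter, brass-like spectrum
    "blue":  [1.0, 0.1, 0.4, 0.2],   # hollower, clarinet-like spectrum
}

def tone(freq_hz, color="white", duration_s=0.25):
    """Render one pitch with the harmonic profile assigned to `color`."""
    t = np.arange(int(duration_s * SAMPLE_RATE)) / SAMPLE_RATE
    wave = sum(amp * np.sin(2 * np.pi * freq_hz * (k + 1) * t)
               for k, amp in enumerate(TIMBRES[color]))
    return wave / np.abs(wave).max()

# The same pitch carries the same spatial information in both conditions;
# color only adds a redundant timbre cue on top of it.
mono_tone = tone(440.0, "white")
color_tone = tone(440.0, "red")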
5.
Sci Rep; 2: 949, 2012.
Article in English | MEDLINE | ID: mdl-23230514

ABSTRACT

Visual-to-auditory sensory-substitution devices allow users to perceive a visual image using sound. Using a motor-learning task, we found that new sensory-motor information was generalized across sensory modalities. We imposed a rotation while participants reached toward visual targets and found that movements were biased not only when target locations were seen but also when they were heard via a sensory-substitution device. When the rotation was removed, aftereffects occurred whether the location of targets was seen or heard. Our findings demonstrate that sensory-motor learning was not sensory-modality-specific. We conclude that novel sensory-motor information can be transferred between sensory modalities.
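This last study uses a classic visuomotor-rotation paradigm: feedback about the reach endpoint is rotated about the movement origin, participants gradually adapt, and the aftereffect is the residual deviation in the opposite direction once the rotation is removed. The minimal sketch below illustrates that logic only; the 30-degree angle and the coordinate layout are assumptions for illustration, not parameters reported in the abstract.

import numpy as np

def rotate(point, origin, angle_deg):
    """Rotate a 2D endpoint about the movement origin by angle_deg."""
    theta = np.radians(angle_deg)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    return np.asarray(origin) + rot @ (np.asarray(point) - np.asarray(origin))

origin = np.array([0.0, 0.0])
target = np.array([0.0, 10.0])   # straight-ahead target (seen or heard)

# During adaptation, a hypothetical 30-degree rotation perturbs the feedback.
perturbed_feedback = rotate(target, origin, 30.0)

# A fully adapted reach aims 30 degrees the other way so the rotated feedback
# lands on the target; with the rotation removed, that same reach misses the
# target by roughly 30 degrees -- the aftereffect.
adapted_reach = rotate(target, origin, -30.0)
aftereffect_deg = np.degrees(np.arctan2(adapted_reach[0], adapted_reach[1]))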
