Results 1 - 9 of 9
1.
2.
Elife ; 11, 2022 09 07.
Article in English | MEDLINE | ID: mdl-36070354

ABSTRACT

The ventral occipito-temporal cortex (VOTC) reliably encodes auditory categories in people born blind, using a representational structure partially similar to the one found in vision (Mattioni et al., 2020). Here, using a combination of uni- and multivoxel analyses applied to fMRI data, we extend our previous findings by comprehensively investigating how early and late acquired blindness impact the cortical regions coding for the deprived and the remaining senses. First, we show an enhanced univariate response to sounds in part of the occipital cortex of both blind groups, concomitant with reduced auditory responses in temporal regions. We then reveal that the representation of the sound categories in the occipital and temporal regions is more similar in blind than in sighted subjects. What could drive this enhanced similarity? The multivoxel encoding of the 'human voice' category that we observed in the temporal cortex of all sighted and blind groups is enhanced in occipital regions in the blind groups, suggesting that the representation of vocal information is more similar between the occipital and temporal regions in blind compared to sighted individuals. We additionally show that blindness does not affect the encoding of the acoustic properties of our sounds (e.g. pitch, harmonicity) in occipital or temporal regions, but instead selectively alters the categorical coding of the voice category itself. These results suggest a functionally congruent interplay between the reorganization of occipital and temporal regions following visual deprivation, across the lifespan.
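
The between-region similarity comparison described above can be illustrated with a minimal representational-similarity sketch. Everything below is invented for illustration (the voxel patterns, category names, and region labels are stand-ins, not the study's data): each region is summarized by the pairwise dissimilarity of its category response patterns, and the two regions are compared by correlating those dissimilarity profiles.

```python
# Minimal representational similarity sketch (illustrative data, not from the study):
# each region is summarized by the pairwise dissimilarity (1 - r) of its category
# response patterns, and regions are compared by correlating those profiles.
from itertools import combinations
from statistics import mean

def pearson(x, y):
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return cov / var

def rdm(patterns):
    """Upper-triangle representational dissimilarity (1 - r) across categories."""
    cats = sorted(patterns)
    return [1 - pearson(patterns[a], patterns[b]) for a, b in combinations(cats, 2)]

# Hypothetical voxel patterns for three sound categories in two regions.
occipital = {"voice": [1.0, 0.2, 0.1], "tool": [0.1, 1.0, 0.3], "animal": [0.2, 0.4, 1.0]}
temporal  = {"voice": [0.9, 0.3, 0.2], "tool": [0.2, 0.9, 0.4], "animal": [0.1, 0.5, 0.9]}

# Higher values mean the two regions carry more similar category geometry.
similarity = pearson(rdm(occipital), rdm(temporal))
print(round(similarity, 3))
```

In this toy form, a higher occipital-temporal correlation in one group than another would correspond to the enhanced representational similarity the abstract reports in blind subjects.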


Subject(s)
Blindness, Temporal Lobe, Acoustic Stimulation, Humans, Occipital Lobe/diagnostic imaging, Occipital Lobe/physiology, Sound, Temporal Lobe/diagnostic imaging, Temporal Lobe/physiology
3.
J Neurosci ; 42(23): 4652-4668, 2022 06 08.
Article in English | MEDLINE | ID: mdl-35501150

ABSTRACT

hMT+/V5 is a region in the middle occipitotemporal cortex that responds preferentially to visual motion in sighted people. In cases of early visual deprivation, hMT+/V5 enhances its response to moving sounds. Whether hMT+/V5 contains information about motion directions, and whether the functional enhancement observed in the blind is motion specific or also involves sound source location, remains unsolved. Moreover, the impact of this cross-modal reorganization of hMT+/V5 on the regions typically supporting auditory motion processing, like the human planum temporale (hPT), remains equivocal. We used a combined functional and diffusion-weighted MRI approach and individual in-ear recordings to study the impact of early blindness on the brain networks supporting spatial hearing in male and female humans. Whole-brain univariate analysis revealed that the anterior portion of hMT+/V5 responded to moving sounds in sighted and blind people, while the posterior portion was selective to moving sounds only in blind participants. Multivariate decoding analysis revealed that the presence of motion direction and sound position information was higher in hMT+/V5 and lower in hPT in the blind group. While both groups showed axis-of-motion organization in hMT+/V5 and hPT, this organization was reduced in the hPT of blind people. Diffusion-weighted MRI revealed that the strength of hMT+/V5-hPT connectivity did not differ between groups, whereas the microstructure of the connections was altered by blindness. Our results suggest that the axis-of-motion organization of hMT+/V5 does not depend on visual experience, but that congenital blindness alters the response properties of occipitotemporal networks supporting spatial hearing in the sighted.

SIGNIFICANCE STATEMENT

Spatial hearing helps living organisms navigate their environment, arguably even more so for people born blind. How does blindness affect the brain network supporting auditory motion and sound source location?
Our results show that the presence of motion direction and sound position information was higher in hMT+/V5 and lower in human planum temporale in blind relative to sighted people; and that this functional reorganization is accompanied by microstructural (but not macrostructural) alterations in their connections. These findings suggest that blindness alters cross-modal responses between connected areas that share the same computational goals.
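
The general logic of the multivariate decoding analyses referenced above can be sketched with a toy classifier. The patterns, direction labels, and the nearest-centroid rule below are illustrative stand-ins under simplifying assumptions, not the study's actual pipeline: a decoder is trained on all but one sample and tested on the held-out one, and above-chance accuracy indicates that the region carries direction information.

```python
# Toy nearest-centroid decoder with leave-one-sample-out cross-validation,
# illustrating the general logic of multivariate decoding (synthetic data).

def centroid(rows):
    # Mean pattern across training samples of one class.
    return [sum(col) / len(rows) for col in zip(*rows)]

def nearest(pattern, centroids):
    # Assign the label whose centroid is closest in squared Euclidean distance.
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(pattern, centroids[label]))

def loo_accuracy(data):
    """data: list of (label, pattern). Leave one sample out, train on the rest."""
    hits = 0
    for i, (label, pattern) in enumerate(data):
        train = data[:i] + data[i + 1:]
        cents = {lab: centroid([p for l, p in train if l == lab])
                 for lab in {l for l, _ in train}}
        hits += nearest(pattern, cents) == label
    return hits / len(data)

# Hypothetical voxel patterns for three auditory motion directions.
samples = [
    ("left",  [1.0, 0.1, 0.2]), ("left",  [0.9, 0.2, 0.1]),
    ("right", [0.1, 1.0, 0.2]), ("right", [0.2, 0.9, 0.1]),
    ("up",    [0.1, 0.2, 1.0]), ("up",    [0.2, 0.1, 0.9]),
]
print(loo_accuracy(samples))  # → 1.0 (toy patterns are well separated)
```

In the study's framing, comparing such accuracies between regions and groups is what supports statements like "direction information was higher in hMT+/V5 and lower in hPT in the blind group".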


Subject(s)
Brain Mapping, Motion Perception, Auditory Perception/physiology, Blindness, Female, Humans, Magnetic Resonance Imaging/methods, Male, Motion Perception/physiology
4.
J Neurosci ; 41(11): 2393-2405, 2021 03 17.
Article in English | MEDLINE | ID: mdl-33514674

ABSTRACT

In humans, the occipital middle-temporal region (hMT+/V5) specializes in the processing of visual motion, while the planum temporale (hPT) specializes in auditory motion processing. It has been hypothesized that these regions might communicate directly to achieve fast and optimal exchange of multisensory motion information. Here we investigated, for the first time in humans (male and female), the presence of direct white matter connections between visual and auditory motion-selective regions, using a combined fMRI and diffusion MRI approach. We found evidence supporting the potential existence of direct white matter connections between individually and functionally defined hMT+/V5 and hPT. We show that projections between hMT+/V5 and hPT do not overlap with large white matter bundles, such as the inferior longitudinal fasciculus and the inferior fronto-occipital fasciculus. Moreover, we did not find evidence suggesting the presence of projections between the fusiform face area and hPT, supporting the functional specificity of hMT+/V5-hPT connections. Finally, the potential presence of hMT+/V5-hPT connections was corroborated in a large sample of participants (n = 114) from the Human Connectome Project. Together, this study provides a first indication of potential direct occipitotemporal projections between hMT+/V5 and hPT, which may support the exchange of motion information between functionally specialized auditory and visual regions.

SIGNIFICANCE STATEMENT

Perceiving and integrating moving signals across the senses is arguably one of the most important perceptual skills for the survival of living organisms. To create a unified representation of movement, the brain must integrate motion information from the separate senses. Our study provides support for the potential existence of direct connections between motion-selective regions in the occipital/visual (hMT+/V5) and temporal/auditory (hPT) cortices in humans.
This connection could represent the structural scaffolding for the rapid and optimal exchange and integration of multisensory motion information. These findings suggest the existence of computationally specific pathways that allow information flow between areas that share a similar computational goal.
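
The core counting step behind functionally guided tractography of this kind can be sketched in a few lines. The spherical ROIs, coordinates, and streamlines below are entirely invented for illustration; real analyses operate on dense tractograms and individually defined functional ROIs, not on toy spheres.

```python
# Toy sketch of the functionally guided tractography logic: count streamlines
# whose two endpoints fall inside two (hypothetical) ROIs, here stand-ins for
# hMT+/V5 and hPT. All coordinates are invented for illustration.

def inside(point, center, radius):
    # Euclidean test against a spherical ROI.
    return sum((p - c) ** 2 for p, c in zip(point, center)) ** 0.5 <= radius

# Spherical stand-ins for individually defined ROIs: (center, radius in mm).
hmt = ((44.0, -70.0, 2.0), 8.0)
hpt = ((55.0, -30.0, 12.0), 8.0)

# Each streamline is reduced to its two endpoints.
streamlines = [
    ((45.0, -69.0, 3.0), (54.0, -31.0, 11.0)),  # connects the two ROIs
    ((45.0, -69.0, 3.0), (10.0, 40.0, 30.0)),   # leaves the network
]

connecting = sum(
    (inside(a, *hmt) and inside(b, *hpt)) or (inside(b, *hmt) and inside(a, *hpt))
    for a, b in streamlines
)
print(connecting)  # → 1
```

The count of ROI-to-ROI streamlines (and the microstructure measured along them) is the kind of quantity that supports claims about the presence and properties of hMT+/V5-hPT connections.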


Subject(s)
Auditory Perception/physiology, Motion Perception/physiology, Nerve Net/physiology, Visual Perception/physiology, Adult, Animals, Brain Mapping, Connectome, Diffusion Tensor Imaging, Facial Recognition/physiology, Female, Humans, Image Processing, Computer-Assisted, Magnetic Resonance Imaging, Male, Nerve Net/diagnostic imaging, Occipital Lobe/diagnostic imaging, Occipital Lobe/physiology, Temporal Lobe/diagnostic imaging, Temporal Lobe/physiology, Visual Cortex/diagnostic imaging, Visual Cortex/physiology, White Matter/diagnostic imaging, White Matter/physiology, Young Adult
5.
Psychol Sci ; 31(9): 1129-1139, 2020 09.
Article in English | MEDLINE | ID: mdl-32846109

ABSTRACT

Vision is thought to support the development of spatial abilities in the other senses. If this is true, how does spatial hearing develop in people lacking visual experience? We comprehensively addressed this question by investigating auditory-localization abilities in 17 congenitally blind and 17 sighted individuals using a psychophysical minimum-audible-angle task that lacked sensorimotor confounds. Participants were asked to compare the relative position of two sound sources located in central and peripheral, horizontal and vertical, or frontal and rear spaces. We observed unequivocal enhancement of spatial-hearing abilities in congenitally blind people, irrespective of the field of space that was assessed. Our results conclusively demonstrate that visual experience is not a prerequisite for developing optimal spatial-hearing abilities and that, in striking contrast, the lack of vision leads to a general enhancement of auditory-spatial skills.
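
The minimum-audible-angle logic used above can be sketched as follows. The response counts and the 75% criterion below are made-up illustrations (psychophysical studies typically fit a full psychometric function rather than taking the first angle that passes a threshold): the estimate is the smallest angular separation at which the listener discriminates two source positions at or above criterion.

```python
# Sketch of a minimum-audible-angle estimate: the smallest angular separation
# at which discrimination performance reaches a criterion (here 75% correct).
# The trial counts below are invented for illustration.

def proportion_correct(trials):
    return {angle: correct / total for angle, (correct, total) in trials.items()}

def minimum_audible_angle(trials, criterion=0.75):
    rates = proportion_correct(trials)
    passing = [angle for angle, p in sorted(rates.items()) if p >= criterion]
    return passing[0] if passing else None  # None: never reached criterion

# angle (degrees) -> (correct responses, total trials), hypothetical data
trials = {1: (11, 20), 2: (13, 20), 4: (16, 20), 8: (19, 20), 16: (20, 20)}
print(minimum_audible_angle(trials))  # → 4
```

In this framing, the enhanced spatial hearing reported for congenitally blind participants corresponds to smaller minimum audible angles across the tested fields of space.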


Subject(s)
Sound Localization, Visually Impaired Persons, Blindness, Hearing, Humans, Space Perception, Vision, Ocular
6.
Curr Biol ; 30(12): 2289-2299.e8, 2020 06 22.
Article in English | MEDLINE | ID: mdl-32442465

ABSTRACT

The human occipito-temporal region hMT+/V5 is well known for processing visual motion direction. Here, we demonstrate that hMT+/V5 also represents the direction of auditory motion, in a format partially aligned with the one used to code visual motion. We show that auditory and visual motion directions can be reliably decoded in individually localized hMT+/V5 and that motion directions in one modality can be predicted from the activity patterns elicited by the other modality. Despite this shared motion-direction information across the senses, however, vision and audition produce overall opposite voxel-wise responses in hMT+/V5. Our results reveal a multifaceted representation of multisensory motion signals in hMT+/V5 and have broader implications for how we think about the division of sensory labor between brain regions dedicated to a specific perceptual function.


Subject(s)
Auditory Perception/physiology, Motion Perception/physiology, Temporal Lobe/physiology, Visual Perception/physiology, Adult, Female, Humans, Male, Young Adult
7.
Elife ; 9, 2020 02 28.
Article in English | MEDLINE | ID: mdl-32108572

ABSTRACT

Is vision necessary for the development of the categorical organization of the ventral occipito-temporal cortex (VOTC)? We used fMRI to characterize VOTC responses to eight categories presented acoustically in sighted and early blind individuals, and visually in a separate sighted group. We observed that VOTC reliably encodes sound categories in sighted and blind people, using a representational structure and connectivity partially similar to the one found in vision. Sound categories were, however, more reliably encoded in the blind than in the sighted group, using a representational format closer to the one found in vision. Crucially, VOTC in blind people represents the categorical membership of sounds rather than their acoustic features. Our results suggest that sounds trigger categorical responses in the VOTC of congenitally blind and sighted people that partially match the topography and functional profile of the visual response, despite qualitative nuances in the categorical organization of VOTC between modalities and groups.


The world is full of rich and dynamic visual information. To avoid information overload, the human brain groups inputs into categories such as faces, houses, or tools. A part of the brain called the ventral occipito-temporal cortex (VOTC) helps categorize visual information. Specific parts of the VOTC prefer different types of visual input; for example, one part may tend to respond more to faces, whilst another may prefer houses. However, it is not clear how the VOTC organizes this information. One idea is that similarities between certain types of visual information may drive how information is organized in the VOTC. For example, looking at faces requires using central vision, while looking at houses requires using peripheral vision. Furthermore, all faces have a roundish shape, while houses tend to have a more rectangular shape. Another possibility, however, is that the categorization of different inputs cannot be explained by vision alone and is also driven by higher-level aspects of each category. For instance, how humans use or interact with something may also influence how an input is categorized. If categories are established depending (at least partially) on these higher-level aspects, rather than purely on visual likeness, the VOTC would likely respond similarly to both sounds and images representing these categories. Now, Mattioni et al. have tested how individuals with and without sight respond to eight different categories of information, to find out whether or not categorization is driven purely by visual likeness. Each category was presented to participants using sounds while their brain activity was measured. In addition, a group of participants who could see were also presented with the categories visually. Mattioni et al. then compared what happened in the VOTC of the three groups (sighted people presented with sounds, blind people presented with sounds, and sighted people presented with images) in response to each category.
The experiment revealed that the VOTC organizes both auditory and visual information in a similar way. However, there were more similarities between the way blind people categorized auditory information and the way sighted people categorized visual information than between the ways sighted people categorized the two types of input. Mattioni et al. also found that the region of the VOTC that responds to inanimate objects overlapped extensively across the three groups, whereas the part of the VOTC that responds to living things was more variable. These findings suggest that the way the VOTC organizes information is, at least partly, independent of vision. The experiments also provide some information about how the brain reorganizes in people who are born blind. Further studies may reveal how differences in the VOTC of people with and without sight affect regions typically associated with auditory categorization, and potentially explain how the brain reorganizes in people who become blind later in life.
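
The overlap comparison in the digest can be sketched with a set-based similarity measure. The voxel indices and group names below are invented for illustration; one simple choice for quantifying overlap between category-selective regions is the Jaccard index, used here as an illustrative stand-in for the study's actual overlap measure.

```python
# Sketch of an overlap comparison: the voxels selective for a category in each
# group are treated as sets and compared with the Jaccard index (made-up data).

def jaccard(a, b):
    """Overlap of two voxel sets: |intersection| / |union|, in [0, 1]."""
    return len(a & b) / len(a | b)

# Hypothetical sets of "inanimate-selective" voxel indices per group.
sighted_visual   = {1, 2, 3, 4, 5}
sighted_auditory = {2, 3, 4, 5, 6}
blind_auditory   = {1, 3, 4, 5, 6}

print(round(jaccard(sighted_visual, blind_auditory), 2))    # → 0.67
print(round(jaccard(sighted_auditory, blind_auditory), 2))  # → 0.67
```

High Jaccard values across all three group pairs would correspond to the "massively overlapping" inanimate-selective region the digest describes, while lower and more variable values would match the animate-selective one.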


Subject(s)
Auditory Perception, Blindness/physiopathology, Occipital Lobe/physiopathology, Temporal Lobe/physiopathology, Acoustic Stimulation, Case-Control Studies, Humans
8.
J Neurosci ; 39(12): 2208-2220, 2019 03 20.
Article in English | MEDLINE | ID: mdl-30651333

ABSTRACT

The ability to compute the location and direction of sounds is a crucial perceptual skill for efficiently interacting with dynamic environments. How the human brain implements spatial hearing is, however, poorly understood. In our study, we used fMRI to characterize the brain activity of male and female humans listening to sounds moving left, right, up, and down, as well as static sounds. Whole-brain univariate results contrasting moving and static sounds varying in their location revealed a robust functional preference for auditory motion in the bilateral human planum temporale (hPT). Using independently localized hPT, we show that this region contains information about auditory motion directions and, to a lesser extent, sound source locations. Moreover, hPT showed an axis-of-motion organization reminiscent of the functional organization of the middle-temporal cortex (hMT+/V5) for vision. Importantly, whereas motion direction and location rely on partially shared pattern geometries in hPT, as demonstrated by successful cross-condition decoding, the responses elicited by static and moving sounds were significantly distinct. Altogether, our results demonstrate that hPT codes for auditory motion and location, but that the underlying neural computation linked to motion processing is more reliable and partially distinct from the one supporting sound source location.

SIGNIFICANCE STATEMENT

Compared with what we know about visual motion, little is known about how the brain implements spatial hearing. Our study reveals that motion directions and sound source locations can be reliably decoded in the human planum temporale (hPT) and that they rely on partially shared pattern geometries. Our study therefore sheds important new light on how computing the location or direction of sounds is implemented in the human auditory cortex, by showing that those two computations rely on partially shared neural codes.
Furthermore, our results show that the neural representation of moving sounds in hPT follows a "preferred axis of motion" organization, reminiscent of the coding mechanisms typically observed in the occipital middle-temporal cortex (hMT+/V5) region for computing visual motion.
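
The cross-condition decoding mentioned above (training on one condition, testing on the other) can be illustrated with a toy sketch. The hPT patterns, the two-condition setup, and the cosine-similarity rule are invented stand-ins under simplifying assumptions, not the study's pipeline.

```python
# Toy cross-condition decoding sketch: templates learned from moving-sound
# patterns are used to classify static-sound positions, probing whether the
# two conditions share pattern geometry. All numbers are invented.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cosine(a, b):
    return dot(a, b) / (dot(a, a) ** 0.5 * dot(b, b) ** 0.5)

# Hypothetical hPT patterns: train on motion direction, test on source location.
moving = {"left": [1.0, 0.1, 0.4], "right": [0.1, 1.0, 0.3]}
static_trials = [("left", [0.9, 0.2, 0.5]), ("right", [0.2, 0.8, 0.4])]

correct = sum(
    max(moving, key=lambda d: cosine(moving[d], pattern)) == side
    for side, pattern in static_trials
)
print(correct / len(static_trials))  # → 1.0
```

Above-chance transfer of this kind is what demonstrates a partially shared pattern geometry between motion and location, even though the overall responses to static and moving sounds differ.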


Subject(s)
Auditory Cortex/physiology, Sound Localization/physiology, Acoustic Stimulation, Adult, Brain Mapping, Female, Humans, Magnetic Resonance Imaging, Male, Models, Neurological, Young Adult
9.
Cognition ; 157: 77-99, 2016 12.
Article in English | MEDLINE | ID: mdl-27597646

ABSTRACT

How do people use memories to make inferences about real-world objects? We tested three strategies based on predicted patterns of response times and blood-oxygen-level-dependent (BOLD) responses: one strategy that relies solely on recognition memory, a second that retrieves additional knowledge, and a third, lexicographic (i.e., sequential) strategy, that considers knowledge conditionally on the evidence obtained from recognition memory. We implemented the strategies as computational models within the Adaptive Control of Thought-Rational (ACT-R) cognitive architecture, which allowed us to derive behavioral and neural predictions that we then compared to the results of a functional magnetic resonance imaging (fMRI) study in which participants inferred which of two cities is larger. Overall, versions of the lexicographic strategy, according to which knowledge about many but not all alternatives is searched, provided the best account of the joint patterns of response times and BOLD responses. These results provide insights into the interplay between recognition and additional knowledge in memory, hinting at an adaptive use of these two sources of information in decision making. The results highlight the usefulness of implementing models of decision making within a cognitive architecture to derive predictions on the behavioral and neural level.
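
Two of the three strategies contrasted above can be sketched in plain code, applied to the "which of two cities is larger?" task. The memory contents and decision rules below are a heavily simplified illustration of the idea, not the study's ACT-R models; the city names and the "large" tag are hypothetical examples.

```python
# Illustrative sketch of two of the inference strategies compared in the study,
# applied to the "which city is larger?" task. Memory contents are made up.

recognized = {"Berlin", "Munich", "Paris"}          # cities the agent recognizes
knowledge = {"Berlin": "large", "Munich": "large"}  # retrievable size facts

def recognition_only(a, b):
    """Strategy 1: decide purely from recognition memory."""
    if (a in recognized) != (b in recognized):
        return a if a in recognized else b
    return None  # recognition does not discriminate: guess

def lexicographic(a, b):
    """Lexicographic strategy: consult knowledge only when recognition
    does not decide (i.e., cues are considered sequentially)."""
    cue = recognition_only(a, b)
    if cue is not None:
        return cue
    known_large = [c for c in (a, b) if knowledge.get(c) == "large"]
    return known_large[0] if len(known_large) == 1 else None  # else guess

print(recognition_only("Berlin", "Oberhausen"))  # → Berlin (recognition decides)
print(lexicographic("Paris", "Munich"))          # → Munich (knowledge decides)
```

The study's point is that such strategies predict different response-time and BOLD profiles (e.g., the lexicographic strategy retrieves knowledge on only a subset of trials), which is what allowed the model comparison against the fMRI data.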


Subject(s)
Brain/physiology, Cognition/physiology, Decision Making/physiology, Heuristics/physiology, Models, Neurological, Models, Psychological, Recognition, Psychology/physiology, Adult, Brain Mapping, Female, Humans, Judgment/physiology, Magnetic Resonance Imaging, Male, Mental Recall/physiology, Reaction Time, Young Adult