Results 1 - 17 of 17
1.
Curr Biol; 34(1): 46-55.e4, 2024 01 08.
Article in English | MEDLINE | ID: mdl-38096819

ABSTRACT

Voices are the most relevant social sounds for humans and therefore have crucial adaptive value in development. Neuroimaging studies in adults have demonstrated the existence of regions in the superior temporal sulcus that respond preferentially to voices. Yet, whether voices represent a functionally specific category in the young infant's mind is largely unknown. We developed a highly sensitive paradigm relying on fast periodic auditory stimulation (FPAS) combined with scalp electroencephalography (EEG) to demonstrate that the infant brain implements a reliable preferential response to voices early in life. Twenty-three 4-month-old infants listened to sequences containing non-vocal sounds from different categories presented at 3.33 Hz, with highly heterogeneous vocal sounds appearing every third stimulus (1.11 Hz). We were able to isolate a voice-selective response over temporal regions, and individual voice-selective responses were found in most infants within only a few minutes of stimulation. This selective response was significantly reduced for the same frequency-scrambled sounds, indicating that voice selectivity is not simply driven by the envelope and the spectral content of the sounds. Such a robust selective response to voices as early as 4 months of age suggests that the infant brain is endowed with the ability to rapidly develop a functional selectivity to this socially relevant category of sounds.
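The frequency-tagging logic behind FPAS lends itself to a compact illustration: responses common to all sounds concentrate at the 3.33 Hz base rate, while any voice-selective response concentrates at 1.11 Hz (3.33/3) and its harmonics. Below is a minimal Python sketch of how such tagged frequencies might be read out from a single EEG channel; the synthetic data, sampling rate, and the neighbour-bin SNR statistic are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

fs = 512.0                      # sampling rate in Hz (illustrative)
base_rate = 3.33                # one sound every ~300 ms
oddball_rate = base_rate / 3    # a voice every 3rd stimulus -> ~1.11 Hz

# One channel's data; noise is fabricated here just so the sketch runs.
rng = np.random.default_rng(0)
eeg = rng.standard_normal(int(60 * fs))   # 60 s of "recording"

# Amplitude spectrum of the whole epoch; resolution = 1 / duration.
amp = np.abs(np.fft.rfft(eeg)) / len(eeg)
freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)

def snr_at(target, n_neighbors=10, skip=1):
    """Amplitude at the target bin divided by the mean amplitude of
    surrounding bins (excluding immediate neighbours), a common
    frequency-tagging statistic."""
    i = np.argmin(np.abs(freqs - target))
    neigh = np.r_[amp[i - skip - n_neighbors:i - skip],
                  amp[i + skip + 1:i + skip + 1 + n_neighbors]]
    return amp[i] / neigh.mean()

print("SNR at base rate:   ", snr_at(base_rate))
print("SNR at oddball rate:", snr_at(oddball_rate))
```

With real data, an SNR well above 1 at the oddball frequency (and its harmonics) over temporal channels would index the voice-selective response the study reports.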


Subject(s)
Auditory Perception, Voice, Adult, Infant, Humans, Auditory Perception/physiology, Brain/physiology, Temporal Lobe/physiology, Acoustic Stimulation, Brain Mapping
2.
Neuroimage; 230: 117816, 2021 04 15.
Article in English | MEDLINE | ID: mdl-33524580

ABSTRACT

In early deaf individuals, the auditory-deprived temporal brain regions become engaged in visual processing. In our study, we further tested the hypothesis that intrinsic functional specialization guides the expression of cross-modal responses in the deprived auditory cortex. We used functional MRI to characterize the brain response to horizontal, radial and stochastic visual motion in early deaf and hearing individuals matched for the use of oral or sign language. Visual motion elicited an enhanced response in the 'deaf' mid-lateral planum temporale, a region selective for auditory motion as demonstrated by a separate auditory motion localizer in hearing people. Moreover, multivariate pattern analysis revealed that this reorganized temporal region showed enhanced decoding of motion categories in the deaf group, while the visual motion-selective region hMT+/V5 showed reduced decoding when compared to hearing people. Dynamic causal modelling revealed that the 'deaf' motion-selective temporal region shows a specific increase in its functional interactions with hMT+/V5 and is now part of a large-scale visual motion-selective network. In addition, we observed preferential responses to radial, compared to horizontal, visual motion in the 'deaf' right superior temporal cortex, a region that also shows preferential responses to approaching/receding sounds in the hearing brain. Overall, our results suggest that the early experience of auditory deprivation interacts with intrinsic constraints and triggers a large-scale reallocation of computational load between auditory and visual brain regions that typically support the multisensory processing of motion information.


Subject(s)
Acoustic Stimulation/methods, Auditory Cortex/physiology, Deafness/physiopathology, Motion Perception/physiology, Photic Stimulation/methods, Sound Localization/physiology, Adult, Auditory Cortex/diagnostic imaging, Deafness/diagnostic imaging, Early Diagnosis, Female, Humans, Magnetic Resonance Imaging/methods, Male
3.
Cortex; 126: 253-264, 2020 05.
Article in English | MEDLINE | ID: mdl-32092494

ABSTRACT

Unequivocally demonstrating the presence of multisensory signals at the earliest stages of cortical processing remains challenging in humans. In our study, we relied on the unique spatio-temporal resolution provided by intracranial stereotactic electroencephalographic (SEEG) recordings in patients with drug-resistant epilepsy to characterize the signal extracted from early visual (calcarine and pericalcarine) and auditory (Heschl's gyrus and planum temporale) regions during a simple audio-visual oddball task. We provide evidence that both cross-modal responses (visual responses in auditory cortex, or the reverse) and multisensory integration (MSI; alteration of the unimodal responses during bimodal stimulation) can be observed in intracranial event-related potentials (iERPs) and in power modulations of oscillatory activity at different temporal scales within the first 150 msec after stimulus onset. The temporal profiles of the iERPs are compatible with the hypothesis that MSI occurs by means of direct pathways linking early visual and auditory regions. Our data indicate, moreover, that MSI mainly relies on modulations of the low-frequency bands (foremost the theta band in the auditory cortex and the alpha band in the visual cortex), suggesting the involvement of feedback pathways between the two sensory regions. Remarkably, we also observed high-gamma power modulations by sounds in the early visual cortex, suggesting the presence of neuronal populations involved in auditory processing in the calcarine and pericalcarine region in humans.
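Band-limited power analyses of the kind described here (theta, alpha, and high-gamma modulations) are commonly implemented as bandpass filtering followed by a squared Hilbert envelope. Below is a minimal Python sketch under that assumption; the sampling rate, filter order, and band edges are illustrative choices, not the study's exact parameters.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000.0  # SEEG sampling rate (illustrative)
rng = np.random.default_rng(4)
seeg = rng.standard_normal(int(5 * fs))  # stands in for one contact's trace

def band_power(signal, lo, hi):
    """Instantaneous power in a band: zero-phase bandpass filter,
    then the squared Hilbert envelope."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return np.abs(hilbert(filtfilt(b, a, signal))) ** 2

theta = band_power(seeg, 4, 8)          # low-frequency feedback signature
alpha = band_power(seeg, 8, 12)
high_gamma = band_power(seeg, 70, 150)  # common proxy for local firing
print(theta.mean(), alpha.mean(), high_gamma.mean())
```

Trial-wise power in each band, time-locked to stimulus onset, is what the unimodal-versus-bimodal comparisons in the study would then be run on.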


Subject(s)
Auditory Cortex, Acoustic Stimulation, Auditory Perception, Brain Mapping, Electroencephalography, Humans, Photic Stimulation, Visual Perception
4.
Sci Rep; 9(1): 11965, 2019 08 19.
Article in English | MEDLINE | ID: mdl-31427634

ABSTRACT

Individuals with autism are reported to integrate information from visual and auditory channels in an idiosyncratic way. Multisensory integration (MSI) of simple, non-social stimuli (i.e., flashes and beeps) was evaluated in adolescents and adults with (n = 20) and without autism (n = 19) using a reaction time (RT) paradigm with auditory, visual, and audiovisual stimuli. For each participant, the race model analysis compares the RTs in the audiovisual condition to a bound computed from the unimodal RTs that reflects the effect of redundancy. If the actual audiovisual RTs are significantly faster than this bound, the race model is violated, indicating evidence of MSI. Our results show that the race model violation occurred only for the typically developing (TD) group: while the TD group shows evidence of MSI, the autism group does not. These results suggest that multisensory integration of simple information, void of social content or complexity, is altered in autism. Individuals with autism may not benefit from the advantage conferred by multisensory stimulation to the same extent as TD individuals. Altered MSI for simple, non-social information may have cascading effects on more complex perceptual processes related to language and behaviour in autism.
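The race-model test described in this paragraph can be made concrete. Miller's inequality bounds the audiovisual cumulative RT distribution by min(1, F_A(t) + F_V(t)); audiovisual RTs reliably faster than this bound violate the race model and indicate integration. The sketch below uses fabricated RTs, and the function names and quantile grid are illustrative, not the study's exact procedure.

```python
import numpy as np

def ecdf(rts, t):
    """Empirical cumulative distribution of reaction times at time(s) t."""
    rts = np.sort(np.asarray(rts))
    return np.searchsorted(rts, t, side="right") / rts.size

def race_model_check(rt_a, rt_v, rt_av, quantiles=np.arange(0.05, 1.0, 0.05)):
    """Compare the audiovisual CDF against Miller's race-model bound
    min(1, F_A(t) + F_V(t)) at a grid of audiovisual RT quantiles.
    Positive values indicate a violation (evidence of integration)."""
    t = np.quantile(rt_av, quantiles)
    bound = np.minimum(1.0, ecdf(rt_a, t) + ecdf(rt_v, t))
    return ecdf(rt_av, t) - bound   # > 0 where the race model is violated

# Fabricated data: redundant-target RTs faster than either unimodal set.
rng = np.random.default_rng(1)
rt_a = rng.normal(420, 60, 200)   # auditory-only RTs in ms
rt_v = rng.normal(400, 60, 200)   # visual-only RTs
rt_av = rng.normal(340, 50, 200)  # audiovisual RTs
print("max violation:", race_model_check(rt_a, rt_v, rt_av).max())
```

In the study's terms, a significant positive violation at the fast quantiles in the TD group, and its absence in the autism group, is the key result.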


Subject(s)
Autism Spectrum Disorder/physiopathology, Sensation, Acoustic Stimulation, Adolescent, Adult, Autism Spectrum Disorder/diagnosis, Female, Humans, Male, Models, Theoretical, Perception, Photic Stimulation, Young Adult
5.
J Neurosci; 39(12): 2208-2220, 2019 03 20.
Article in English | MEDLINE | ID: mdl-30651333

ABSTRACT

The ability to compute the location and direction of sounds is a crucial perceptual skill for interacting efficiently with dynamic environments. How the human brain implements spatial hearing, however, is poorly understood. In our study, we used fMRI to characterize the brain activity of male and female humans listening to sounds moving left, right, up, and down, as well as static sounds. Whole-brain univariate results contrasting moving and static sounds varying in their location revealed a robust functional preference for auditory motion in bilateral human planum temporale (hPT). Using independently localized hPT, we show that this region contains information about auditory motion directions and, to a lesser extent, sound source locations. Moreover, hPT showed an axis-of-motion organization reminiscent of the functional organization of the middle-temporal cortex (hMT+/V5) for vision. Importantly, whereas motion direction and location rely on partially shared pattern geometries in hPT, as demonstrated by successful cross-condition decoding, the responses elicited by static and moving sounds were significantly distinct. Altogether, our results demonstrate that the hPT codes for auditory motion and location but that the underlying neural computation linked to motion processing is more reliable and partially distinct from the one supporting sound source location.

SIGNIFICANCE STATEMENT: Compared with what we know about visual motion, little is known about how the brain implements spatial hearing. Our study reveals that motion directions and sound source locations can be reliably decoded in the human planum temporale (hPT) and that they rely on partially shared pattern geometries. Our study therefore sheds important new light on how the location or direction of sounds is computed in the human auditory cortex, by showing that those two computations rely on partially shared neural codes. Furthermore, our results show that the neural representation of moving sounds in hPT follows a "preferred axis of motion" organization, reminiscent of the coding mechanisms typically observed in the occipital middle-temporal cortex (hMT+/V5) for computing visual motion.
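The cross-condition decoding logic used here to establish shared pattern geometries can be sketched compactly: train a classifier on patterns labeled by motion direction and test it on patterns labeled by sound-source location. The voxel patterns below are fabricated and the linear SVM is just one plausible classifier; this illustrates the analysis logic under stated assumptions, not the authors' pipeline. Chance level is 0.5.

```python
import numpy as np
from sklearn.svm import LinearSVC

# Hypothetical beta patterns from hPT voxels: rows = trials, cols = voxels.
rng = np.random.default_rng(2)
n_vox = 120
X_motion = rng.standard_normal((40, n_vox))  # moving-sound trials
y_motion = np.repeat([0, 1], 20)             # direction: left(0) / right(1)
X_static = rng.standard_normal((40, n_vox))  # static-sound trials
y_static = np.repeat([0, 1], 20)             # location: left(0) / right(1)

# Cross-condition decoding: a classifier trained to separate motion
# directions is tested on sound-source locations; above-chance transfer
# implies partially shared pattern geometries across conditions.
clf = LinearSVC().fit(X_motion, y_motion)
print("transfer accuracy:", clf.score(X_static, y_static))
```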


Subject(s)
Auditory Cortex/physiology, Sound Localization/physiology, Acoustic Stimulation, Adult, Brain Mapping, Female, Humans, Magnetic Resonance Imaging, Male, Models, Neurological, Young Adult
6.
Cereb Cortex; 29(9): 3590-3605, 2019 08 14.
Article in English | MEDLINE | ID: mdl-30272134

ABSTRACT

The brain has separate specialized computational units for processing faces and voices, located in occipital and temporal cortices respectively. However, humans seamlessly integrate signals from the faces and voices of others for optimal social interaction. How are emotional expressions, when delivered by different sensory modalities (faces and voices), integrated in the brain? In this study, we characterized the brain's response to faces, voices, and combined face-voice information (congruent, incongruent), which varied in expression (neutral, fearful). Using a whole-brain approach, we found that only the right posterior superior temporal sulcus (rpSTS) responded more to bimodal stimuli than to face or voice alone, but only when the stimuli contained emotional expression. Face- and voice-selective regions of interest, extracted from independent functional localizers, similarly revealed multisensory integration in the face-selective rpSTS only; further, this was the only face-selective region that also responded significantly to voices. Dynamic causal modeling revealed that the rpSTS receives unidirectional information from the face-selective fusiform face area and the voice-selective temporal voice area, with emotional expression affecting the connection strength. Our study therefore supports a hierarchical model of face and voice integration, with convergence in the rpSTS, in which integration depends on the (emotional) salience of the stimuli.


Subject(s)
Brain/physiology, Emotions/physiology, Facial Recognition/physiology, Speech Perception/physiology, Acoustic Stimulation, Adult, Brain Mapping, Female, Humans, Magnetic Resonance Imaging, Male, Neural Pathways/physiology, Photic Stimulation, Young Adult
7.
Curr Biol; 28(9): 1453-1459.e3, 2018 05 07.
Article in English | MEDLINE | ID: mdl-29681475

ABSTRACT

Successful lip-reading requires a mapping from visual to phonological information [1]. Recently, visual and motor cortices have been implicated in tracking lip movements (e.g., [2]). It remains unclear, however, whether visuo-phonological mapping already occurs at the level of the visual cortex, that is, whether this structure tracks the acoustic signal in a functionally relevant manner. To elucidate this, we investigated how the cortex tracks (i.e., entrains to) absent acoustic speech signals carried by silent lip movements. Crucially, we contrasted entrainment to unheard forward (intelligible) and backward (unintelligible) acoustic speech. We observed that the visual cortex exhibited stronger entrainment to the unheard forward acoustic speech envelope than to the unheard backward acoustic speech envelope. Supporting the notion of a visuo-phonological mapping process, this forward-backward difference in occipital entrainment was not present for the actually observed lip movements. Importantly, the respective occipital region received more top-down input, especially from left premotor, primary motor, and somatosensory regions and, to a lesser extent, from posterior temporal cortex. Strikingly, across participants, the extent of top-down modulation of the visual cortex from these regions partially correlated with the strength of entrainment to the absent acoustic forward speech envelope, but not to the present forward lip movements. Our findings demonstrate that a distributed cortical network, including key dorsal-stream auditory regions [3-5], influences how the visual cortex shows sensitivity to the intelligibility of speech while tracking silent lip movements.
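Entrainment to a speech envelope is often quantified as spectral coherence between the envelope and the cortical signal. Below is a minimal Python sketch assuming a Hilbert-based envelope and Welch-style coherence; the sampling rate, window length, and 1-7 Hz band are illustrative choices rather than the study's exact parameters.

```python
import numpy as np
from scipy.signal import hilbert, coherence

fs = 200.0   # common rate after resampling both signals (illustrative)
rng = np.random.default_rng(3)
audio = rng.standard_normal(int(30 * fs))  # stands in for the speech waveform
meg = rng.standard_normal(int(30 * fs))    # stands in for one MEG source signal

# Broadband amplitude envelope of the (here, absent) acoustic speech signal.
envelope = np.abs(hilbert(audio))

# Entrainment as spectral coherence between envelope and cortical signal;
# forward vs backward speech can then be compared in the low-frequency
# range where the syllabic rhythm lives.
f, coh = coherence(envelope, meg, fs=fs, nperseg=int(2 * fs))
band = (f >= 1) & (f <= 7)
print("mean 1-7 Hz coherence:", coh[band].mean())
```

The forward-backward contrast in the study corresponds to comparing this coherence for the two unheard speech envelopes over occipital sources.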


Subject(s)
Speech Perception/physiology, Speech/physiology, Visual Cortex/physiology, Acoustic Stimulation, Adult, Auditory Cortex/physiology, Brain Mapping, Female, Humans, Lip, Lipreading, Magnetoencephalography/methods, Male, Motor Cortex/physiology, Movement, Phonetics, Speech Intelligibility/physiology
8.
J Cogn Neurosci; 30(1): 86-106, 2018 Jan.
Article in English | MEDLINE | ID: mdl-28891782

ABSTRACT

Sounds activate occipital regions in early blind individuals. However, how different sound categories map onto specific regions of the occipital cortex remains a matter of debate. We used fMRI to characterize the brain responses of early blind and sighted individuals to familiar object sounds, human voices, and their respective low-level control sounds. In addition, sighted participants were tested while viewing pictures of faces, objects, and phase-scrambled control pictures. In both the early blind and the sighted, a double dissociation was evident in bilateral auditory cortices between responses to voices and object sounds: voices elicited categorical responses in bilateral superior temporal sulci, whereas object sounds elicited categorical responses along the lateral fissure bilaterally, including the primary auditory cortex and planum temporale. Outside the auditory regions, object sounds also elicited categorical responses in the left lateral and the ventral occipitotemporal regions in both groups. These regions also showed a response preference for images of objects in the sighted group, suggesting a functional specialization that is independent of sensory input and visual experience. Between-group comparisons revealed that, only in the blind group, categorical responses to object sounds extended more posteriorly into the occipital cortex. Functional connectivity analyses revealed a selective increase in the functional coupling between these reorganized regions and regions of the ventral occipitotemporal cortex in the blind group. In contrast, vocal sounds did not elicit preferential responses in the occipital cortex in either group. Nevertheless, enhanced voice-selective connectivity between the left temporal voice area and the right fusiform gyrus was found in the blind group. Altogether, these findings suggest that, in the absence of developmental vision, separate auditory categories are not equipotent in driving selective auditory recruitment of occipitotemporal regions, and they highlight the presence of domain-selective constraints on the expression of cross-modal plasticity.
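At its simplest, a seed-based functional connectivity analysis of the kind described here reduces to correlating region-averaged time courses and comparing Fisher z-transformed values across groups. A toy Python sketch with fabricated time courses follows; the region names in the comments are assumptions for illustration only.

```python
import numpy as np

# Hypothetical region-averaged BOLD time courses (one value per volume)
# for a reorganized occipital seed and a ventral occipito-temporal target.
rng = np.random.default_rng(5)
seed = rng.standard_normal(200)
target = 0.5 * seed + rng.standard_normal(200)  # built-in shared signal

# Functional coupling as the Pearson correlation of the two time courses;
# group differences are then tested on Fisher z-transformed values.
r = np.corrcoef(seed, target)[0, 1]
z = np.arctanh(r)  # Fisher z-transform
print(f"r = {r:.2f}, z = {z:.2f}")
```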


Subject(s)
Auditory Perception/physiology, Blindness/physiopathology, Brain/physiology, Brain/physiopathology, Acoustic Stimulation, Adult, Blindness/diagnostic imaging, Brain/diagnostic imaging, Brain Mapping, Female, Humans, Magnetic Resonance Imaging, Male, Middle Aged, Neuropsychological Tests, Photic Stimulation, Visual Perception/physiology, Young Adult
9.
Proc Natl Acad Sci U S A; 114(31): E6437-E6446, 2017 08 01.
Article in English | MEDLINE | ID: mdl-28652333

ABSTRACT

Brain systems supporting face and voice processing both contribute to the extraction of important information for social interaction (e.g., person identity). How does the brain reorganize when one of these channels is absent? Here, we explore this question by combining behavioral and multimodal neuroimaging measures (magnetoencephalography and functional imaging) in a group of early deaf humans. We show an enhanced selective neural response to faces and to individual face coding in a specific region of the auditory cortex that is typically specialized for voice perception in hearing individuals. In this region, selectivity to face signals emerges early in the visual processing hierarchy, shortly after typical face-selective responses in the ventral visual pathway. Functional and effective connectivity analyses suggest a reorganization of long-range connections from early visual areas to the face-selective temporal area in individuals with early and profound deafness. Altogether, these observations demonstrate that regions that typically specialize for voice processing in the hearing brain preferentially reorganize for face processing in born-deaf people. Our results support the idea that cross-modal plasticity in the case of early sensory deprivation relates to the original functional specialization of the reorganized brain regions.


Subject(s)
Auditory Cortex/physiology, Deafness/physiopathology, Facial Recognition/physiology, Neuronal Plasticity/physiology, Visual Pathways/physiology, Acoustic Stimulation, Adult, Brain Mapping, Female, Humans, Magnetic Resonance Imaging, Male, Neuroimaging/methods, Photic Stimulation, Sensory Deprivation/physiology, Visual Perception/physiology
10.
Neuroimage; 134: 630-644, 2016 07 01.
Article in English | MEDLINE | ID: mdl-27107468

ABSTRACT

How early blindness reorganizes the brain circuitry that supports auditory motion processing remains controversial. We used fMRI to characterize brain responses to in-depth, laterally moving, and static sounds in early blind and sighted individuals. Whole-brain univariate analyses revealed that the right posterior middle temporal gyrus and superior occipital gyrus selectively responded to both in-depth and laterally moving sounds only in the blind. These regions overlapped with regions selective for visual motion (hMT+/V5 and V3A) that were independently localized in the sighted. In the early blind, the right planum temporale showed enhanced functional connectivity with right occipito-temporal regions during auditory motion processing and a concomitant reduced functional connectivity with parietal and frontal regions. Whole-brain searchlight multivariate analyses demonstrated higher auditory motion decoding in the right posterior middle temporal gyrus in the blind compared to the sighted, while decoding accuracy was enhanced in the auditory cortex bilaterally in the sighted compared to the blind. Analyses targeting the individually defined visual area hMT+/V5, however, indicated that auditory motion information could be reliably decoded within this area even in the sighted group. Taken together, the present findings demonstrate that early visual deprivation triggers a large-scale imbalance between auditory and "visual" brain regions that typically support the processing of motion information.


Subject(s)
Auditory Cortex/physiopathology, Auditory Perception/physiology, Blindness/physiopathology, Motion Perception/physiology, Visual Cortex/physiopathology, Acoustic Stimulation, Adult, Brain Mapping, Female, Humans, Magnetic Resonance Imaging, Male, Middle Aged, Sensory Deprivation, Young Adult
11.
J Cogn Neurosci; 25(12): 2072-85, 2013 Dec.
Article in English | MEDLINE | ID: mdl-23859643

ABSTRACT

Light regulates multiple non-image-forming (or nonvisual) circadian, neuroendocrine, and neurobehavioral functions via outputs from intrinsically photosensitive retinal ganglion cells (ipRGCs). Exposure to light directly enhances alertness and performance, making light an important regulator of wakefulness and cognition. The roles of rods, cones, and ipRGCs in the impact of light on cognitive brain functions remain unclear, however. A small percentage of blind individuals retain non-image-forming photoreception and thus offer a unique opportunity to investigate the impact of light in the absence of conscious vision, presumably through ipRGCs. Here, we show that three such patients were able to report the presence of light at better-than-chance levels despite their complete lack of sight. Furthermore, 2 sec of blue light modified EEG activity when administered simultaneously with auditory stimulation. fMRI further showed that, during an auditory working memory task, less than a minute of blue light triggered the recruitment of supplemental prefrontal and thalamic brain regions involved in alertness and cognition regulation, as well as key areas of the default mode network. These results, which should be considered a proof of concept, show that non-image-forming photoreception triggers some awareness of light and can have a more rapid impact on human cognition than previously understood, provided that brain processing is actively engaged. Furthermore, light stimulates higher cognitive brain activity independently of vision and engages supplemental brain areas to perform an ongoing cognitive process. To our knowledge, our results constitute the first indication that ipRGC signaling may rapidly affect fundamental cerebral organization, such that it could potentially participate in the regulation of numerous aspects of human brain function.


Subject(s)
Blindness/metabolism, Blindness/therapy, Brain/metabolism, Cognition/physiology, Photic Stimulation/methods, Phototherapy/methods, Aged, Evoked Potentials, Auditory/physiology, Evoked Potentials, Visual/physiology, Female, Humans, Male, Middle Aged, Psychomotor Performance/physiology
12.
Brain; 136(Pt 9): 2769-83, 2013 Sep.
Article in English | MEDLINE | ID: mdl-23831614

ABSTRACT

Contrasting the impact of congenital versus late-onset acquired blindness provides a unique model to probe how experience at different developmental periods shapes the functional organization of the occipital cortex. We used functional magnetic resonance imaging to characterize brain activations of congenitally blind, late-onset blind and two groups of sighted control individuals while they processed either the pitch or the spatial attributes of sounds. Whereas both blind groups recruited occipital regions for sound processing, activity in bilateral cuneus was only apparent in the congenitally blind, highlighting the existence of region-specific critical periods for crossmodal plasticity. Most importantly, the preferential activation of the right dorsal stream (middle occipital gyrus and cuneus) for the spatial processing of sounds was only observed in the congenitally blind. This demonstrates that vision has to be lost during an early sensitive period in order to transfer its functional specialization for space processing toward a non-visual modality. We then used a combination of dynamic causal modelling with Bayesian model selection to demonstrate that auditory-driven activity in primary visual cortex is better explained by direct connections with primary auditory cortex in the congenitally blind whereas it relies more on feedback inputs from parietal regions in the late-onset blind group. Taken together, these results demonstrate the crucial role of the developmental period of visual deprivation in (re)shaping the functional architecture and the connectivity of the occipital cortex. Such findings are clinically important now that a growing number of medical interventions may restore vision after a period of visual deprivation.


Subject(s)
Blindness/pathology, Brain Mapping, Neural Pathways/physiology, Occipital Lobe/physiopathology, Acoustic Stimulation, Adult, Analysis of Variance, Bayes Theorem, Causality, Female, Humans, Image Processing, Computer-Assisted, Magnetic Resonance Imaging, Male, Middle Aged, Neural Pathways/blood supply, Occipital Lobe/blood supply, Oxygen, Photic Stimulation, Reaction Time/physiology, Young Adult
13.
Neuropsychologia; 51(5): 1002-10, 2013 Apr.
Article in English | MEDLINE | ID: mdl-23462241

ABSTRACT

The abilities to recognize and integrate emotions from another person's facial and vocal expressions are fundamental cognitive skills involved in the effective regulation of social interactions. Deficits in such abilities have been suggested as a possible source of certain atypical social behaviors manifested by persons with autism spectrum disorders (ASD). In the present study, we assessed the recognition and integration of emotional expressions in ASD using a validated set of ecological stimuli comprising dynamic visual and auditory (non-verbal) vocal clips. Autistic participants and typically developing (TD) controls were asked to discriminate between clips depicting expressions of disgust and fear presented either visually, auditorily or audio-visually. The group of autistic participants was less efficient at discriminating emotional expressions across all conditions (unimodal and bimodal). Moreover, they required a higher signal-to-noise ratio to discriminate visual or auditory presentations of disgust versus fear expressions. These results suggest an altered sensitivity to emotion expressions in this population that is not modality-specific. In addition, the group of autistic participants benefited from exposure to bimodal information to a lesser extent than did the TD group, indicative of a decreased multisensory gain in this population. These results are the first to compellingly demonstrate joint alterations in both the perception and the integration of multisensory emotion expressions in ASD.


Subject(s)
Affective Symptoms/etiology, Autistic Disorder/complications, Expressed Emotion/physiology, Recognition, Psychology, Sensation/physiology, Acoustic Stimulation, Adolescent, Adult, Affective Symptoms/diagnosis, Discrimination, Psychological, Female, Humans, Male, Pattern Recognition, Visual/physiology, Photic Stimulation, Young Adult
14.
J Cogn Neurosci; 20(8): 1454-63, 2008 Aug.
Article in English | MEDLINE | ID: mdl-18303980

ABSTRACT

It has been suggested that both the posterior parietal cortex (PPC) and the extrastriate occipital cortex (OC) participate in the spatial processing of sounds. However, the precise time-course of their contribution remains unknown, which is of particular interest, considering that it could give new insights into the mechanisms underlying auditory space perception. To address this issue, we have used event-related transcranial magnetic stimulation (TMS) to induce virtual lesions of either the right PPC or right OC at different delays in subjects performing a sound lateralization task. Our results confirmed that these two areas participate in the spatial processing of sounds. More precisely, we found that TMS applied over the right OC 50 msec after the stimulus onset significantly impaired the localization of sounds presented either to the right or to the left side. Moreover, right PPC virtual lesions induced 100 and 150 msec after sound presentation led to a rightward bias for stimuli delivered on the center and on the left side, reproducing transiently the deficits commonly observed in hemineglect patients. The finding that the right OC is involved in sound processing before the right PPC suggests that the OC exerts a feedforward influence on the PPC during auditory spatial processing.


Subject(s)
Occipital Lobe/physiology, Parietal Lobe/physiology, Sound Localization/physiology, Acoustic Stimulation/methods, Adult, Brain Mapping, Electric Stimulation, Female, Humans, Male, Time Factors, Transcranial Magnetic Stimulation/methods
15.
Brain Res; 1075(1): 175-82, 2006 Feb 23.
Article in English | MEDLINE | ID: mdl-16460716

ABSTRACT

Spatial attention paradigms using auditory or tactile stimulation were used to explore neural and behavioral reorganization in early blind subjects. Although it is commonly assumed that blind subjects outperform sighted subjects in such tasks, the empirical data to confirm this remain controversial. Moreover, previous studies have often confounded factors of sensory acuity with those of attention. In the present work, we compared the performance of individually matched early blind and sighted subjects during auditory and tactile tasks. These consisted of sensory acuity tests, a simple reaction time task, and selective and divided spatial attention tasks. Based on sensory measurements, we ensured that the reliability and salience of auditory and tactile information were identical between the two populations, so as to estimate attentional performance independently of sensory influence. Results showed no difference between groups in either sensory sensitivity or the simple reaction time task in both modalities. However, blind subjects displayed shorter reaction times than sighted subjects in both the tactile and auditory selective spatial attention tasks, as well as in the bimodal divided spatial attention tasks. The present study thus demonstrates an enhanced attentional performance in early blind subjects that is independent of sensory influence. These supra-normal abilities could be related to quantitative and qualitative changes in the way early visually deprived subjects process non-visual spatial information.


Subject(s)
Attention/physiology, Blindness/physiopathology, Space Perception, Acoustic Stimulation, Adult, Aged, Blindness/congenital, Female, Functional Laterality, Humans, Male, Middle Aged, Touch, Visual Perception
16.
Brain Res Cogn Brain Res; 25(3): 650-8, 2005 Dec.
Article in English | MEDLINE | ID: mdl-16298112

ABSTRACT

Previous neuroimaging studies devoted to auditory motion processing have shown the involvement of a cerebral network encompassing the temporoparietal and premotor areas. Most of these studies were based on a comparison between moving stimuli and static stimuli placed at a single location. However, moving stimuli vary in spatial location, and therefore motion detection can include both spatial localisation and motion processing. In this study, we used fMRI to compare neural processing of moving sounds and static sounds in various spatial locations in blindfolded sighted subjects. The task consisted of simultaneously determining both the nature of a sound stimulus (pure tone or complex sound) and the presence or absence of its movement. When movement was present, subjects had to identify its direction. This comparison of how moving and static stimuli are processed showed the involvement of the parietal lobules, the dorsal and ventral premotor cortex and the planum temporale during auditory motion processing. It also showed the specific recruitment of V5, the visual motion area. These results suggest that the previously proposed network of auditory motion processing is distinct from the network of auditory localisation. In addition, they suggest that the occipital cortex can process non-visual stimuli and that V5 is not restricted to visual processing.


Subject(s)
Auditory Perception/physiology, Motion Perception/physiology, Visual Cortex/physiology, Acoustic Stimulation, Adult, Behavior/physiology, Eye Movements/physiology, Female, Humans, Image Processing, Computer-Assisted, Magnetic Resonance Imaging, Male, Nerve Net/physiology
17.
Neuroimage; 26(2): 573-80, 2005 Jun.
Article in English | MEDLINE | ID: mdl-15907314

ABSTRACT

Previous neuroimaging studies identified multimodal brain areas in the visual cortex that are specialized for processing specific information, such as visual-haptic object recognition. Here, we test whether visual brain areas are involved in depth perception when auditory substitution of vision is used. Nine sighted volunteers were trained blindfolded to use a prosthesis substituting vision with audition both to recognize two-dimensional figures and to estimate distance of an object in a real three-dimensional environment. Using positron emission tomography, regional cerebral blood flow was assessed while the prosthesis was used to explore virtual 3D images; subjects focused either on 2D features (target search) or on depth (target distance comparison). Activation foci were found in visual association areas during both the target search task, which recruited the occipito-parietal cortex, and the depth perception task, which recruited occipito-parietal and occipito-temporal areas. This indicates that some brain areas of the visual cortex are relatively multimodal and may be recruited for depth processing via a sense other than vision.


Subject(s)
Auditory Perception/physiology, Depth Perception/physiology, Visual Cortex/physiology, Acoustic Stimulation, Adult, Cerebrovascular Circulation, Data Interpretation, Statistical, Form Perception/physiology, Humans, Image Interpretation, Computer-Assisted, Male, Nerve Net/physiology, Neuronal Plasticity/physiology, Positron-Emission Tomography, Prostheses and Implants, Psychomotor Performance/physiology, Recruitment, Neurophysiological/physiology, Size Perception/physiology, Visual Cortex/diagnostic imaging