Results 1 - 20 of 28
1.
J Neurosci ; 44(7)2024 Feb 14.
Article in English | MEDLINE | ID: mdl-38129133

ABSTRACT

Neuroimaging studies suggest cross-sensory visual influences in human auditory cortices (ACs). Whether these influences reflect active visual processing in human ACs, which drives neuronal firing and concurrent broadband high-frequency activity (BHFA; >70 Hz), or whether they merely modulate sound processing is still debatable. Here, we presented auditory, visual, and audiovisual stimuli to 16 participants (7 women, 9 men) with stereo-EEG depth electrodes implanted near ACs for presurgical monitoring. Anatomically normalized group analyses were facilitated by inverse modeling of intracranial source currents. Analyses of intracranial event-related potentials (iERPs) suggested cross-sensory responses to visual stimuli in ACs, which lagged the earliest auditory responses by several tens of milliseconds. Visual stimuli also modulated the phase of intrinsic low-frequency oscillations and triggered 15-30 Hz event-related desynchronization in ACs. However, BHFA, a putative correlate of neuronal firing, was not significantly increased in ACs after visual stimuli, not even when they coincided with auditory stimuli. Intracranial recordings demonstrate cross-sensory modulations, but no indication of active visual processing in human ACs.
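
As a rough illustration of how a BHFA-type measure is commonly derived from an intracranial channel (a minimal sketch, not the authors' pipeline), the Python code below band-passes a signal above 70 Hz, takes the Hilbert amplitude envelope, and expresses it as percent change from a pre-stimulus baseline. The sampling rate, band edges, and data are assumptions for illustration.

```python
# Minimal sketch (not the authors' pipeline): quantifying broadband
# high-frequency activity (BHFA, >70 Hz) on one channel via band-pass
# filtering and the Hilbert envelope. Sampling rate, band edges, and
# data are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000.0                            # assumed sampling rate (Hz)
t = np.arange(0, 2.0, 1 / fs)          # one 2-s trial, stimulus at t = 1.0 s
rng = np.random.default_rng(0)
x = rng.standard_normal(t.size)        # stand-in for one sEEG channel

# Zero-phase band-pass at 70-150 Hz
b, a = butter(4, [70 / (fs / 2), 150 / (fs / 2)], btype="band")
env = np.abs(hilbert(filtfilt(b, a, x)))   # instantaneous amplitude

# Percent change from a pre-stimulus baseline window
base = env[(t >= 0.2) & (t < 0.8)].mean()
bhfa = 100 * (env - base) / base
print(f"mean post-stimulus BHFA change: {bhfa[t >= 1.0].mean():+.1f} %")
```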


Subject(s)
Auditory Cortex; Male; Humans; Female; Auditory Cortex/physiology; Acoustic Stimulation/methods; Evoked Potentials/physiology; Electroencephalography/methods; Visual Perception/physiology; Auditory Perception/physiology; Photic Stimulation
2.
Neuroimage ; 230: 117746, 2021 04 15.
Article in English | MEDLINE | ID: mdl-33454414

ABSTRACT

Intracranial stereoelectroencephalography (sEEG) provides unsurpassed sensitivity and specificity for human neurophysiology. However, functional mapping of the brain has been limited because the implantations have sparse coverage and differ greatly across individuals. Here, we developed a distributed, anatomically realistic sEEG source-modeling approach for within- and between-subject analyses. In addition to intracranial event-related potentials (iERP), we estimated the sources of high broadband gamma activity (HBBG), a putative correlate of local neural firing. Our novel approach accounted for a significant portion of the variance of the sEEG measurements in leave-one-out cross-validation. After logarithmic transformations, the sensitivity and signal-to-noise ratio were linearly inversely related to the minimal distance between the brain location and electrode contacts (slope ≈ -3.6). The signal-to-noise ratio and sensitivity in the thalamus and brain stem were comparable to those at locations in the vicinity of the electrode contacts. The HBBG source estimates were remarkably consistent with analyses of intracranial-contact data. In conclusion, distributed sEEG source modeling provides a powerful neuroimaging tool, which facilitates anatomically normalized functional mapping of the human brain using both iERP and HBBG data.
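
The reported relation can be written as log(SNR) ≈ c - 3.6 * log(d), where d is the minimal distance between a brain location and the nearest electrode contact. Below is a minimal sketch of fitting such a slope; the data are synthetic, and only the log-log regression step mirrors the described analysis.

```python
# Sketch of the reported log-log relation between signal-to-noise ratio
# and minimal contact distance, log(SNR) ~= c - 3.6 * log(d).
# The data below are synthetic; only the fit mirrors the analysis.
import numpy as np

rng = np.random.default_rng(1)
d = rng.uniform(2, 50, 500)                       # mm, minimal distance
log_snr = 5.0 - 3.6 * np.log(d) + rng.normal(0, 0.4, d.size)

slope, intercept = np.polyfit(np.log(d), log_snr, 1)
print(f"fitted slope: {slope:.2f} (paper reports ~ -3.6)")
```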


Subject(s)
Drug Resistant Epilepsy/diagnostic imaging; Drug Resistant Epilepsy/physiopathology; Electrodes, Implanted/standards; Electroencephalography/methods; Electroencephalography/standards; Stereotaxic Techniques/standards; Acoustic Stimulation/methods; Acoustic Stimulation/standards; Adult; Female; Humans; Male; Middle Aged; Random Allocation
3.
Brain Behav ; 9(5): e01288, 2019 05.
Article in English | MEDLINE | ID: mdl-30977309

ABSTRACT

INTRODUCTION: When listening to a narrative, the verbal expressions translate into meanings and a flow of mental imagery. However, the same narrative can be heard quite differently depending on differences in listeners' previous experiences and knowledge. We capitalized on such differences to disclose brain regions that support the transformation of a narrative into individualized propositional meanings and associated mental imagery, by analyzing brain activity associated with behaviorally assessed individual meanings elicited by a narrative. METHODS: Sixteen right-handed female subjects were instructed to list words that best described what had come to their minds while listening to an eight-minute narrative during functional magnetic resonance imaging (fMRI). The fMRI data were analyzed by calculating voxel-wise intersubject correlation (ISC) values. We used latent semantic analysis (LSA) enhanced with WordNet knowledge to measure the semantic similarity of the produced words between subjects. Finally, we predicted the ISC from the semantic similarity using representational similarity analysis. RESULTS: We found that semantic similarity of these word listings between subjects predicted similarities in brain hemodynamic activity. Subject pairs whose individual semantics were similar also exhibited similar brain activity in the bilateral supramarginal and angular gyri of the inferior parietal lobe, and in the occipital pole. CONCLUSIONS: Our results demonstrate, using a novel method to measure interindividual differences in semantics, brain mechanisms that give rise to semantics and associated imagery during narrative listening. While listening to a captivating narrative, the inferior parietal lobe and early visual cortical areas thus seem to support the elicitation of individual meanings and the flow of mental imagery.
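
A minimal sketch of the representational-similarity logic described above: pairwise semantic similarity between subjects' word lists is compared against pairwise ISC of a brain region. Cosine similarity over stand-in word vectors replaces the LSA + WordNet measure, and all data are random placeholders.

```python
# Sketch of the representational-similarity logic: does pairwise
# semantic similarity of subjects' word lists predict pairwise
# inter-subject correlation (ISC) of a brain region? Cosine similarity
# over stand-in vectors replaces the LSA + WordNet measure.
import numpy as np
from itertools import combinations
from scipy.stats import spearmanr

n_subj = 16
rng = np.random.default_rng(2)
sem = rng.random((n_subj, 100))          # stand-in semantic vectors
ts = rng.standard_normal((n_subj, 240))  # stand-in regional fMRI time series

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

pairs = list(combinations(range(n_subj), 2))
sem_sim = [cosine(sem[i], sem[j]) for i, j in pairs]
isc = [np.corrcoef(ts[i], ts[j])[0, 1] for i, j in pairs]

rho, p = spearmanr(sem_sim, isc)
print(f"RSA: Spearman rho = {rho:.2f}, p = {p:.3f}")
```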


Subject(s)
Auditory Perception/physiology; Imagination/physiology; Parietal Lobe; Visual Cortex; Adult; Brain Mapping/methods; Female; Humans; Individuality; Magnetic Resonance Imaging/methods; Narration; Parietal Lobe/diagnostic imaging; Parietal Lobe/physiology; Semantics; Visual Cortex/diagnostic imaging; Visual Cortex/physiology
4.
Soc Cogn Affect Neurosci ; 13(5): 471-482, 2018 05 01.
Article in English | MEDLINE | ID: mdl-29618125

ABSTRACT

The functional organization of human emotion systems as well as their neuroanatomical basis and segregation in the brain remains unresolved. Here, we used pattern classification and hierarchical clustering to characterize the organization of a wide array of emotion categories in the human brain. We induced 14 emotions (6 'basic', e.g. fear and anger; and 8 'non-basic', e.g. shame and gratitude) and a neutral state using guided mental imagery while participants' brain activity was measured with functional magnetic resonance imaging (fMRI). Twelve out of 14 emotions could be reliably classified from the haemodynamic signals. All emotions engaged a multitude of brain areas, primarily in midline cortices including anterior and posterior cingulate gyri and precuneus, in subcortical regions, and in motor regions including cerebellum and premotor cortex. Similarity of subjective emotional experiences was associated with similarity of the corresponding neural activation patterns. We conclude that different basic and non-basic emotions have distinguishable neural bases characterized by specific, distributed activation patterns in widespread cortical and subcortical circuits. Regionally differentiated engagement of these circuits defines the unique neural activity pattern and the corresponding subjective feeling associated with each emotion.
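
A minimal sketch of the hierarchical-clustering step under stated assumptions: emotion-wise activation patterns (random stand-ins here) are clustered by correlation distance. Only shame and gratitude are named among the non-basic emotions in the abstract; the other non-basic labels in the code are placeholders.

```python
# Sketch of the clustering step: hierarchical clustering of
# emotion-wise activation patterns by correlation distance.
# Patterns are random stand-ins; beyond the 6 basic emotions plus
# shame and gratitude, the labels are placeholders, not the study's.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

emotions = ["fear", "anger", "disgust", "happiness", "sadness", "surprise",
            "shame", "gratitude", "pride", "love", "contempt", "guilt",
            "envy", "despair", "neutral"]
rng = np.random.default_rng(3)
patterns = rng.standard_normal((len(emotions), 500))  # emotion x voxel

dist = pdist(patterns, metric="correlation")
tree = linkage(dist, method="average")
labels = fcluster(tree, t=4, criterion="maxclust")
for emo, lab in sorted(zip(emotions, labels), key=lambda x: x[1]):
    print(f"cluster {lab}: {emo}")
```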


Subject(s)
Affect/classification; Affect/physiology; Brain/physiology; Emotions/classification; Emotions/physiology; Adult; Brain/diagnostic imaging; Brain Mapping; Cerebellum/diagnostic imaging; Cerebellum/physiology; Cerebral Cortex/diagnostic imaging; Cerebral Cortex/physiology; Cerebrovascular Circulation/physiology; Cluster Analysis; Female; Hemodynamics/physiology; Humans; Magnetic Resonance Imaging; Young Adult
5.
Brain Behav ; 7(9): e00789, 2017 09.
Article in English | MEDLINE | ID: mdl-28948083

ABSTRACT

INTRODUCTION: We examined which brain areas are involved in the comprehension of acoustically distorted speech using an experimental paradigm in which the same distorted sentence can be perceived at different levels of intelligibility. This change in intelligibility occurs via a single intervening presentation of the intact version of the sentence, and the effect lasts at least on the order of minutes. Since the acoustic structure of the distorted stimulus is kept fixed and only intelligibility is varied, this allows one to study brain activity related specifically to speech comprehension. METHODS: In a functional magnetic resonance imaging (fMRI) experiment, a stimulus set contained a block of six distorted sentences. This was followed by the intact counterparts of the sentences, after which the sentences were presented in distorted form again. A total of 18 such sets were presented to 20 human subjects. RESULTS: The blood oxygenation level dependent (BOLD) responses elicited by the distorted sentences that came after the disambiguating, intact sentences were contrasted with the responses to the sentences presented before disambiguation. This revealed increased activity in the bilateral frontal pole, the dorsal anterior cingulate/paracingulate cortex, and the right frontal operculum. Decreased BOLD responses were observed in the posterior insula, Heschl's gyrus, and the posterior superior temporal sulcus. CONCLUSIONS: The brain areas that showed BOLD enhancement for increased sentence comprehension have been associated with executive functions and with the mapping of incoming sensory information onto representations stored in episodic memory. Thus, the comprehension of acoustically distorted speech may be associated with the engagement of memory-related subsystems. Further, activity in the primary auditory cortex was modulated by prior experience, possibly in a predictive-coding framework. Our results suggest that memory biases the perception of ambiguous sensory information toward interpretations that have the highest probability of being correct based on previous experience.


Subject(s)
Auditory Cortex/physiology; Brain/physiology; Comprehension/physiology; Speech Perception/physiology; Speech/physiology; Acoustic Stimulation/methods; Adult; Auditory Cortex/diagnostic imaging; Brain/diagnostic imaging; Brain Mapping/methods; Female; Humans; Magnetic Resonance Imaging; Male; Young Adult
6.
J Neurosci ; 37(25): 6125-6131, 2017 06 21.
Article in English | MEDLINE | ID: mdl-28536272

ABSTRACT

The size of human social networks significantly exceeds the network that can be maintained by social grooming or touching in other primates. It has been proposed that endogenous opioid release after social laughter provides a neurochemical pathway supporting long-term relationships in humans (Dunbar, 2012), yet this hypothesis currently lacks direct neurophysiological support. We used PET and the µ-opioid-receptor (MOR)-specific ligand [11C]carfentanil to quantify laughter-induced endogenous opioid release in 12 healthy males. Before the social laughter scan, the subjects watched laughter-inducing comedy clips with their close friends for 30 min. Before the baseline scan, subjects spent 30 min alone in the testing room. Social laughter increased pleasurable sensations and triggered endogenous opioid release in the thalamus, caudate nucleus, and anterior insula. In addition, baseline MOR availability in the cingulate and orbitofrontal cortices was associated with the rate of social laughter. In a behavioral control experiment, pain threshold, a proxy of endogenous opioidergic activation, was elevated significantly more in both male and female volunteers after watching laughter-inducing comedy versus non-laughter-inducing drama in groups. Modulation of opioidergic activity by social laughter may be an important neurochemical pathway that supports the formation, reinforcement, and maintenance of human social bonds.
SIGNIFICANCE STATEMENT: Social contacts are vital to humans. The size of human social networks significantly exceeds the network that can be maintained by social grooming in other primates. Here, we used PET to show that endogenous opioid release after social laughter may provide a neurochemical mechanism supporting long-term relationships in humans. Participants were scanned twice: after a 30 min social laughter session and after spending 30 min alone in the testing room (baseline). Endogenous opioid release was stronger after laughter versus the baseline scan. Opioid receptor density in the frontal cortex predicted social laughter rates. Modulation of opioidergic activity by social laughter may be an important neurochemical mechanism reinforcing and maintaining social bonds between humans.
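
For orientation, endogenous opioid release in [11C]carfentanil PET is conventionally inferred from a reduction in binding potential (BP_ND) between the baseline and activation scans. The sketch below computes that percent reduction; the regional values are invented and do not come from the study.

```python
# Sketch of how endogenous opioid release is commonly inferred in
# [11C]carfentanil PET: a decrease in mu-opioid receptor binding
# potential (BP_ND) from baseline to the laughter condition.
# The values below are invented for illustration only.
regions = {"thalamus": (2.9, 2.6), "caudate": (2.4, 2.2),
           "anterior insula": (1.8, 1.6)}   # (baseline, laughter) BP_ND

for region, (bp_base, bp_laugh) in regions.items():
    release = 100 * (bp_base - bp_laugh) / bp_base   # % BP_ND reduction
    print(f"{region}: {release:.1f} % reduction in BP_ND")
```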


Subject(s)
Brain Chemistry/physiology; Endorphins/metabolism; Laughter/physiology; Social Environment; Adult; Brain Mapping; Female; Humans; Male; Object Attachment; Pleasure; Positron-Emission Tomography; Receptors, Opioid, mu/drug effects; Receptors, Opioid, mu/metabolism; Young Adult
7.
Neuroimage ; 129: 214-223, 2016 Apr 01.
Article in English | MEDLINE | ID: mdl-26774614

ABSTRACT

Efficient speech perception requires the mapping of highly variable acoustic signals to distinct phonetic categories. How the brain overcomes this many-to-one mapping problem has remained unresolved. To infer the cortical location, latency, and dependency on attention of categorical speech sound representations in the human brain, we measured stimulus-specific adaptation of neuromagnetic responses to sounds from a phonetic continuum. The participants attended to the sounds while performing a non-phonetic listening task and, in a separate recording condition, ignored the sounds while watching a silent film. Neural adaptation indicative of phoneme category selectivity was found only during the attentive condition in the pars opercularis (POp) of the left inferior frontal gyrus, where the degree of selectivity correlated with the ability of the participants to categorize the phonetic stimuli. Importantly, these category-specific representations were activated at an early latency of 115-140 ms, which is compatible with the speed of perceptual phonetic categorization. Further, concurrent functional connectivity was observed between POp and posterior auditory cortical areas. These novel findings suggest that when humans attend to speech, the left POp mediates phonetic categorization through integration of auditory and motor information via the dorsal auditory stream.


Subject(s)
Prefrontal Cortex/physiology; Speech Perception/physiology; Acoustic Stimulation; Adult; Female; Humans; Magnetoencephalography; Male; Signal Processing, Computer-Assisted; Young Adult
8.
Neuroimage ; 129: 428-438, 2016 Apr 01.
Article in English | MEDLINE | ID: mdl-26826515

ABSTRACT

In the perceptual domain, it has been shown that the human brain is strongly shaped through experience, leading to expertise in highly skilled professionals. What has remained unclear is whether specialization also shapes the brain networks underlying mental imagery. In our fMRI study, we aimed to uncover modality-specific mental-imagery specialization in film experts. Using multi-voxel pattern analysis, we decoded from the brain activity of professional cinematographers and sound designers whether they were imagining sounds or images of particular film clips. In each expert group, distinct multi-voxel patterns, specific to the modality of their expertise, were found during classification of imagery modality. These patterns were mainly localized in the occipito-temporal and parietal cortex for cinematographers and in the auditory cortex for sound designers. We also found generalized patterns across perception and imagery that were distinct for the two expert groups: they involved the frontal cortex for the cinematographers and the temporal cortex for the sound designers. Notably, the mental representations of film clips and sounds in cinematographers contained information that went beyond modality specificity: we were able to decode the implicit presence of film genre from brain activity during mental imagery in cinematographers. The results extend the existing neuroimaging literature on expertise into the domain of mental imagery and show that experience in visual versus auditory imagery can alter the representation of information in modality-specific association cortices.


Subject(s)
Brain Mapping; Brain/physiology; Imagination/physiology; Adult; Female; Humans; Magnetic Resonance Imaging; Male
9.
Neuroimage ; 124(Pt A): 858-868, 2016 Jan 01.
Article in English | MEDLINE | ID: mdl-26419388

ABSTRACT

Spatial and non-spatial information about sound events is presumably processed in parallel auditory-cortex (AC) "what" and "where" streams, which are modulated by inputs from the respective visual-cortex subsystems. How these parallel processes are integrated into perceptual objects that remain stable across time and the source agent's movements is unknown. We recorded magneto- and electroencephalography (MEG/EEG) data while subjects viewed animated video clips featuring two audiovisual objects, a black cat and a gray cat. Adaptor-probe events were either linked to the same object (the black cat meowed twice in a row in the same location) or included a visually conveyed identity change (the black and then the gray cat meowed with identical voices in the same location). In addition to effects in visual (including fusiform and middle temporal/MT areas) and frontoparietal association areas, the visually conveyed object-identity change was associated with a release from adaptation of early (50-150 ms) activity in posterior ACs, spreading to left anterior ACs at 250-450 ms in our combined MEG/EEG source estimates. Repetition of events belonging to the same object resulted in increased theta-band (4-8 Hz) synchronization within the "what" and "where" pathways (e.g., between anterior AC and fusiform areas). In contrast, the visually conveyed identity changes resulted in distributed synchronization at higher frequencies (alpha and beta bands, 8-32 Hz) across different auditory, visual, and association areas. The results suggest that sound events become initially linked to perceptual objects in posterior AC, followed by modulations of representations in anterior AC. Hierarchical "what" and "where" pathways seem to operate in parallel after repeating audiovisual associations, whereas the resetting of such associations engages a distributed network across auditory, visual, and multisensory areas.
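
A minimal sketch of one common way to index inter-areal band-limited synchronization, the phase-locking value (PLV), computed here in the theta band between two stand-in source time courses; this is an assumption about implementation, not the authors' exact connectivity metric.

```python
# Sketch of one way to index inter-areal theta-band (4-8 Hz)
# synchronization: the phase-locking value (PLV) across trials
# between two source time courses. Data are random stand-ins.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs, n_trials, n_samp = 500.0, 100, 500
rng = np.random.default_rng(4)
roi_a = rng.standard_normal((n_trials, n_samp))   # e.g., anterior AC
roi_b = rng.standard_normal((n_trials, n_samp))   # e.g., fusiform area

b, a = butter(4, [4 / (fs / 2), 8 / (fs / 2)], btype="band")
ph_a = np.angle(hilbert(filtfilt(b, a, roi_a, axis=1), axis=1))
ph_b = np.angle(hilbert(filtfilt(b, a, roi_b, axis=1), axis=1))

# PLV: magnitude of the mean phase-difference vector across trials
plv = np.abs(np.mean(np.exp(1j * (ph_a - ph_b)), axis=0))
print(f"peak theta PLV over time: {plv.max():.2f}")
```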


Subject(s)
Auditory Cortex/physiology; Auditory Pathways/physiology; Auditory Perception/physiology; Visual Perception/physiology; Acoustic Stimulation; Adult; Animals; Cats; Cortical Synchronization; Electroencephalography; Evoked Potentials, Auditory/physiology; Female; Humans; Magnetoencephalography; Male; Middle Aged; Photic Stimulation; Visual Cortex/physiology; Vocalization, Animal; Young Adult
10.
Cereb Cortex ; 26(6): 2563-2573, 2016 06.
Article in English | MEDLINE | ID: mdl-25924952

ABSTRACT

Categorical models of emotions posit neurally and physiologically distinct human basic emotions. We tested this assumption by using multivariate pattern analysis (MVPA) to classify brain activity patterns of 6 basic emotions (disgust, fear, happiness, sadness, anger, and surprise) in 3 experiments. Emotions were induced with short movies or mental imagery during functional magnetic resonance imaging. MVPA accurately classified emotions induced by both methods, and the classification generalized from one induction condition to another and across individuals. Brain regions contributing most to the classification accuracy included medial and inferior lateral prefrontal cortices, frontal pole, precentral and postcentral gyri, precuneus, and posterior cingulate cortex. Thus, specific neural signatures across these regions hold representations of different emotional states in multimodal fashion, independently of how the emotions are induced. Similarity of subjective experiences between emotions was associated with similarity of neural patterns for the same emotions, suggesting a direct link between activity in these brain regions and the subjective emotional experience.
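
A minimal sketch of the cross-condition generalization test described above: a linear classifier is trained on emotion patterns from one induction method and tested on the other. Data are random stand-ins, so accuracy should hover near the 1/6 chance level.

```python
# Sketch of cross-condition generalization in MVPA: train a linear
# classifier on emotion patterns from one induction method (movies)
# and test it on the other (imagery). Data are random stand-ins, so
# accuracy should hover near chance (1/6).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n_per_class, n_vox, n_emo = 30, 200, 6
X_movie = rng.standard_normal((n_per_class * n_emo, n_vox))
X_imagery = rng.standard_normal((n_per_class * n_emo, n_vox))
y = np.repeat(np.arange(n_emo), n_per_class)

clf = LogisticRegression(max_iter=1000).fit(X_movie, y)
acc = clf.score(X_imagery, y)
print(f"cross-condition accuracy: {acc:.2f} (chance = {1 / n_emo:.2f})")
```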


Subject(s)
Brain Mapping/methods; Brain/physiology; Emotions/physiology; Magnetic Resonance Imaging/methods; Pattern Recognition, Automated/methods; Adult; Female; Humans; Imagination/physiology; Male; Motion Perception/physiology; Multivariate Analysis; Neuropsychological Tests; Photic Stimulation; Young Adult
11.
Neuroimage ; 125: 131-143, 2016 Jan 15.
Article in English | MEDLINE | ID: mdl-26477651

ABSTRACT

Recent studies have shown that acoustically distorted sentences can be perceived as either unintelligible or intelligible depending on whether one has previously been exposed to the undistorted, intelligible versions of the sentences. This allows studying processes specifically related to speech intelligibility since any change between the responses to the distorted stimuli before and after the presentation of their undistorted counterparts cannot be attributed to acoustic variability but, rather, to the successful mapping of sensory information onto memory representations. To estimate how the complexity of the message is reflected in speech comprehension, we applied this rapid change in perception to behavioral and magnetoencephalography (MEG) experiments using vowels, words and sentences. In the experiments, stimuli were initially presented to the subject in a distorted form, after which undistorted versions of the stimuli were presented. Finally, the original distorted stimuli were presented once more. The resulting increase in intelligibility observed for the second presentation of the distorted stimuli depended on the complexity of the stimulus: vowels remained unintelligible (behaviorally measured intelligibility 27%) whereas the intelligibility of the words increased from 19% to 45% and that of the sentences from 31% to 65%. This increase in the intelligibility of the degraded stimuli was reflected as an enhancement of activity in the auditory cortex and surrounding areas at early latencies of 130-160 ms. In the same regions, increasing stimulus complexity attenuated mean currents at latencies of 130-160 ms whereas at latencies of 200-270 ms the mean currents increased. These modulations in cortical activity may reflect feedback from top-down mechanisms enhancing the extraction of information from speech. The behavioral results suggest that memory-driven expectancies can have a significant effect on speech comprehension, especially in acoustically adverse conditions where the bottom-up information is decreased.


Subject(s)
Brain/physiology; Comprehension/physiology; Speech Perception/physiology; Acoustic Stimulation; Adult; Female; Humans; Magnetoencephalography; Male; Signal Processing, Computer-Assisted; Speech Intelligibility/physiology; Young Adult
12.
Hear Res ; 307: 86-97, 2014 Jan.
Article in English | MEDLINE | ID: mdl-23886698

ABSTRACT

Localization of sound sources is a considerable computational challenge for the human brain. Whereas the visual system can process basic spatial information in parallel, the auditory system lacks a straightforward correspondence between external spatial locations and sensory receptive fields. Consequently, the question of how the different acoustic features supporting spatial hearing are represented in the central nervous system remains open. Functional neuroimaging studies in humans have provided evidence for a posterior auditory "where" pathway that encompasses non-primary auditory cortex areas, including the planum temporale (PT) and posterior superior temporal gyrus (STG), which are strongly activated by horizontal sound-direction changes, distance changes, and movement. However, these areas are also activated by a wide variety of other stimulus features, posing a challenge for the interpretation that the underlying areas are purely spatial. This review discusses behavioral and neuroimaging studies on sound localization and some of the competing models of the representation of auditory space in humans. This article is part of a Special Issue entitled Human Auditory Neuroimaging.


Subject(s)
Auditory Cortex/physiology; Brain Mapping; Psychoacoustics; Sound Localization; Acoustic Stimulation; Auditory Cortex/anatomy & histology; Auditory Pathways/physiology; Brain Mapping/methods; Cues; Humans; Models, Neurological; Models, Psychological
13.
Nat Commun ; 4: 2585, 2013.
Article in English | MEDLINE | ID: mdl-24121634

ABSTRACT

Neurophysiological animal models suggest that anterior auditory cortex (AC) areas process sound identity information, whereas posterior ACs specialize in sound location processing. In humans, inconsistent neuroimaging results and insufficient causal evidence have challenged the existence of such parallel AC organization. Here we transiently inhibit bilateral anterior or posterior AC areas using MRI-guided paired-pulse transcranial magnetic stimulation (TMS) while subjects listen to Reference/Probe sound pairs and perform either sound location or identity discrimination tasks. The targeting of TMS pulses, delivered 55-145 ms after Probes, is confirmed with individual-level cortical electric-field estimates. Our data show that TMS to posterior AC regions delays reaction times (RT) significantly more during sound location than identity discrimination, whereas TMS to anterior AC regions delays RTs significantly more during sound identity than location discrimination. This double dissociation provides direct causal support for parallel processing of sound identity features in anterior AC and sound location in posterior AC.


Subject(s)
Auditory Perception/physiology; Pattern Recognition, Physiological/physiology; Sound Localization/physiology; Space Perception/physiology; Acoustic Stimulation/methods; Adult; Brain Mapping; Female; Humans; Magnetic Resonance Imaging; Male; Middle Aged; Psychomotor Performance/physiology; Reaction Time; Sound; Transcranial Magnetic Stimulation
14.
PLoS One ; 7(10): e46872, 2012.
Article in English | MEDLINE | ID: mdl-23071654

ABSTRACT

Selectively attending to task-relevant sounds whilst ignoring background noise is one of the most amazing feats performed by the human brain. Here, we studied the underlying neural mechanisms by recording magnetoencephalographic (MEG) responses of 14 healthy human subjects while they performed a near-threshold auditory discrimination task vs. a visual control task of similar difficulty. The auditory stimuli consisted of notch-filtered continuous noise masker sounds, and of 1020-Hz target tones occasionally (p = 0.1) replacing 1000-Hz standard tones of 300-ms duration that were embedded at the center of the notches, the widths of which were parametrically varied. As a control for masker effects, tone-evoked responses were additionally recorded without masker sound. Selective attention to tones significantly increased the amplitude of the onset M100 response at ~100 ms to the standard tones in the presence of the masker sounds, especially with notches narrower than the critical band. Further, attention modulated the sustained response most clearly in the 300-400 ms range from sound onset, with narrower notches than in the case of the M100, thus selectively reducing the masker-induced suppression of the tone-evoked response. Our results show evidence of a multiple-stage filtering mechanism for sensory input in the human auditory cortex: 1) an early (~100 ms) stage bilaterally in posterior parts of the secondary auditory areas, and 2) adaptive filtering of attended sounds from the task-irrelevant background masker at a longer latency (~300 ms) in more medial auditory cortical regions, predominantly in the left hemisphere, enhancing the processing of near-threshold sounds.
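
A minimal sketch of a notch-filtered noise masker of the kind described: broadband noise with a variable-width spectral notch centered on the 1000-Hz standard. The filter order and notch widths are illustrative assumptions, not the study's stimulus parameters.

```python
# Sketch of a notch-filtered noise masker: broadband noise with a
# variable-width band-stop notch centered on the 1000-Hz standard tone.
# Filter order and notch widths are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 44100.0
rng = np.random.default_rng(6)
noise = rng.standard_normal(int(fs * 1.0))   # 1 s of broadband noise

def notched_noise(noise, center=1000.0, width_hz=200.0, fs=44100.0):
    """Band-stop the noise in [center - width/2, center + width/2]."""
    lo = (center - width_hz / 2) / (fs / 2)
    hi = (center + width_hz / 2) / (fs / 2)
    b, a = butter(4, [lo, hi], btype="bandstop")
    return filtfilt(b, a, noise)

for width in (100.0, 200.0, 400.0):          # parametrically varied notch
    masker = notched_noise(noise, width_hz=width, fs=fs)
    print(f"notch {width:.0f} Hz: masker level (~RMS) = {masker.std():.2f}")
```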


Subject(s)
Attention/physiology; Auditory Cortex/physiology; Psychomotor Performance/physiology; Sound; Acoustic Stimulation; Adult; Analysis of Variance; Auditory Perception/physiology; Auditory Threshold; Brain Mapping; Discrimination, Psychological/physiology; Evoked Potentials, Auditory/physiology; Female; Humans; Magnetoencephalography/methods; Male; Noise; Photic Stimulation; Time Factors; Young Adult
15.
PLoS One ; 7(6): e38511, 2012.
Article in English | MEDLINE | ID: mdl-22693642

ABSTRACT

Given that both the auditory and visual systems have anatomically separate object-identification ("what") and spatial ("where") pathways, it is of interest whether attention-driven cross-sensory modulations occur separately within these feature domains. Here, we investigated how auditory "what" vs. "where" attention tasks modulate activity in visual pathways using cortically constrained source estimates of magnetoencephalographic (MEG) oscillatory activity. In the absence of visual stimuli or tasks, subjects were presented with a sequence of auditory-stimulus pairs and instructed to selectively attend to phonetic ("what") vs. spatial ("where") aspects of these sounds, or to listen passively. To investigate sustained modulatory effects, oscillatory power was estimated from the time periods between sound-pair presentations. In comparison to attention to sound locations, phonetic auditory attention was associated with stronger alpha (7-13 Hz) power in several visual areas (primary visual cortex; lingual, fusiform, and inferior temporal gyri; lateral occipital cortex), as well as in higher-order visual/multisensory areas including lateral/medial parietal and retrosplenial cortices. Region-of-interest (ROI) analyses of dynamic changes, from which the sustained effects had been removed, suggested further power increases during Attend Phoneme vs. Attend Location, centered in the alpha range 400-600 ms after the onset of the second sound of each stimulus pair. These results suggest distinct modulations of visual-system oscillatory activity during auditory attention to sound-object identity ("what") vs. sound location ("where"). The alpha modulations could be interpreted as reflecting enhanced crossmodal inhibition of feature-specific visual pathways and adjacent audiovisual association areas during "what" vs. "where" auditory attention.
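
A minimal sketch of estimating sustained alpha-band (7-13 Hz) power from inter-stimulus segments with Welch's method; the sampling rate, segment lengths, and data are stand-ins rather than the study's source-space pipeline.

```python
# Sketch of estimating sustained alpha-band (7-13 Hz) power from the
# periods between sound pairs, using Welch's method on one source
# time course. Data and sampling rate are stand-ins.
import numpy as np
from scipy.signal import welch

fs = 600.0
rng = np.random.default_rng(7)
segments = rng.standard_normal((50, int(2 * fs)))   # 50 inter-pair gaps, 2 s

f, psd = welch(segments, fs=fs, nperseg=512, axis=1)
alpha = psd[:, (f >= 7) & (f <= 13)].mean(axis=1)   # mean alpha power/segment
print(f"median alpha power across segments: {np.median(alpha):.3f}")
```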


Subject(s)
Auditory Perception/physiology; Visual Perception/physiology; Acoustic Stimulation; Adult; Female; Humans; Male; Photic Stimulation; Space Perception/physiology; Young Adult
16.
Neuroimage ; 60(4): 1937-46, 2012 May 01.
Article in English | MEDLINE | ID: mdl-22361165

ABSTRACT

Sensory-motor interactions between auditory and articulatory representations in the dorsal auditory processing stream are suggested to contribute to speech perception, especially when bottom-up information alone is insufficient for purely auditory perceptual mechanisms to succeed. Here, we hypothesized that the dorsal stream responds more vigorously to auditory syllables when one is engaged in a phonetic identification/repetition task subsequent to perception compared to passive listening, and that this effect is further augmented when the syllables are embedded in noise. To this end, we recorded magnetoencephalography while twenty subjects listened to speech syllables, with and without noise masking, in four conditions: passive perception, overt repetition, covert repetition, and overt imitation. Compared to passive listening, left-hemispheric N100m equivalent current dipole responses were amplified and shifted posteriorly when perception was followed by a covert repetition task. Cortically constrained minimum-norm estimates showed amplified responses in the left supramarginal and angular gyri in the covert repetition condition at ~100 ms from stimulus onset. Longer-latency responses at ~200 ms were amplified in the covert repetition condition in the left angular gyrus and in all three active conditions in the left premotor cortex, with further enhancements when the syllables were embedded in noise. Phonetic categorization accuracy and the magnitude of voice-pitch change between the overt repetition and imitation conditions correlated with left premotor cortex responses at ~100 and ~200 ms, respectively. Together, these results suggest that dorsal-stream involvement in speech perception depends on perceptual task demands and that phonetic categorization performance is influenced by the left premotor cortex.


Subject(s)
Brain Mapping; Cerebral Cortex/physiology; Speech Perception/physiology; Acoustic Stimulation; Adult; Female; Functional Laterality/physiology; Humans; Magnetoencephalography; Male; Middle Aged; Phonetics; Young Adult
17.
Neuroreport ; 21(12): 822-6, 2010 Aug 23.
Article in English | MEDLINE | ID: mdl-20588202

ABSTRACT

The superior temporal sulcus has been suggested to play a significant role in the integration of auditory and visual sensory information. Here, we presented vowels and short video clips of the corresponding articulatory gestures to healthy adult humans with two auditory-visual stimulus intervals during sparse sampling 3-T functional magnetic resonance imaging to detect which brain areas are sensitive to synchrony of speech sounds and associated articulatory gestures. The upper bank of the left middle superior temporal sulcus showed stronger activation during naturally asynchronous stimulation than during simultaneous stimulus presentation. It is possible that this reflects sensitivity of the left middle superior temporal sulcus to temporal synchrony of audio-visual speech stimuli.


Subject(s)
Auditory Perception/physiology; Dominance, Cerebral/physiology; Speech Perception/physiology; Temporal Lobe/anatomy & histology; Temporal Lobe/physiology; Visual Perception/physiology; Acoustic Stimulation; Adult; Brain Mapping/methods; Female; Functional Laterality/physiology; Humans; Magnetic Resonance Imaging/methods; Male; Neuropsychological Tests; Photic Stimulation; Young Adult
18.
Eur J Neurosci ; 31(10): 1772-82, 2010 May.
Article in English | MEDLINE | ID: mdl-20584181

ABSTRACT

Here we report early cross-sensory activations and audiovisual interactions at the visual and auditory cortices using magnetoencephalography (MEG) to obtain accurate timing information. Data from an identical fMRI experiment were employed to support MEG source localization results. Simple auditory and visual stimuli (300-ms noise bursts and checkerboards) were presented to seven healthy humans. MEG source analysis suggested generators in the auditory and visual sensory cortices for both within-modality and cross-sensory activations. fMRI cross-sensory activations were strong in the visual but almost absent in the auditory cortex; this discrepancy with MEG possibly reflects the influence of acoustical scanner noise in fMRI. In the primary auditory cortices (Heschl's gyrus) the onset of activity to auditory stimuli was observed at 23 ms in both hemispheres, and to visual stimuli at 82 ms in the left and at 75 ms in the right hemisphere. In the primary visual cortex (Calcarine fissure) the activations to visual stimuli started at 43 ms and to auditory stimuli at 53 ms. Cross-sensory activations thus started later than sensory-specific activations, by 55 ms in the auditory cortex and by 10 ms in the visual cortex, suggesting that the origins of the cross-sensory activations may be in the primary sensory cortices of the opposite modality, with conduction delays (from one sensory cortex to another) of 30-35 ms. Audiovisual interactions started at 85 ms in the left auditory, 80 ms in the right auditory and 74 ms in the visual cortex, i.e., 3-21 ms after inputs from the two modalities converged.
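
A minimal sketch of one simple way such onset latencies can be estimated (an assumption; the authors' criterion is not described in the abstract): the first post-stimulus sample where the evoked response exceeds a baseline-derived threshold.

```python
# Sketch of a simple onset-latency estimate of the kind behind the
# reported millisecond comparisons: the first post-stimulus sample
# where the evoked response exceeds mean + 3 SD of the pre-stimulus
# baseline. The waveform below is synthetic.
import numpy as np

fs = 1000.0
t = np.arange(-0.1, 0.3, 1 / fs)          # 100 ms baseline, 300 ms post
rng = np.random.default_rng(8)
resp = rng.normal(0, 1, t.size)
resp[t > 0.023] += 8 * np.exp(-(t[t > 0.023] - 0.1) ** 2 / 0.002)

base = resp[t < 0]
thresh = base.mean() + 3 * base.std()
post = (t >= 0) & (resp > thresh)
onset_ms = 1000 * t[post][0] if post.any() else np.nan
print(f"estimated onset latency: {onset_ms:.0f} ms")
```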


Subject(s)
Auditory Cortex/physiology; Somatosensory Cortex/physiology; Visual Cortex/physiology; Acoustic Stimulation; Adult; Evoked Potentials/physiology; Female; Functional Laterality/physiology; Humans; Magnetic Resonance Imaging; Magnetoencephalography; Male; Photic Stimulation; Reaction Time; Young Adult
19.
J Neurosci ; 30(4): 1314-21, 2010 Jan 27.
Article in English | MEDLINE | ID: mdl-20107058

ABSTRACT

Watching the lips of a speaker enhances speech perception. At the same time, the 100 ms response to speech sounds is suppressed in the observer's auditory cortex. Here, we used whole-scalp 306-channel magnetoencephalography (MEG) to study whether lipreading modulates human auditory processing already at the level of the most elementary sound features, i.e., pure tones. We further examined the temporal dynamics of the suppression to determine whether the effect is driven by top-down influences. Nineteen subjects were presented with 50 ms tones spanning six octaves (125-8000 Hz) (1) during "lipreading," i.e., when they watched video clips of silent articulations of the Finnish vowels /a/, /i/, /o/, and /y/, and reacted to vowels presented twice in a row; (2) during a visual control task; (3) during a still-face passive control condition; and (4) in a separate experiment with a subset of nine subjects, during covert production of the same vowels. Auditory-cortex 100 ms responses (N100m) were equally suppressed in the lipreading and covert-speech-production tasks compared with the visual control and baseline tasks; the effects involved all frequencies and were most prominent in the left hemisphere. Responses to tones presented at different times with respect to the onset of the visual articulation showed significantly increased N100m suppression immediately after the articulatory gesture. These findings suggest that the lipreading-related suppression in the auditory cortex is caused by top-down influences, possibly by an efference copy from the speech-production system, generated during both own speech and lipreading.


Subject(s)
Auditory Cortex/physiology; Auditory Perception/physiology; Lipreading; Perceptual Masking/physiology; Speech Perception/physiology; Visual Perception/physiology; Acoustic Stimulation; Adult; Auditory Pathways/physiology; Brain Mapping; Evoked Potentials, Auditory/physiology; Female; Functional Laterality/physiology; Humans; Magnetoencephalography; Male; Nerve Net/physiology; Neural Inhibition/physiology; Neuropsychological Tests; Photic Stimulation; Pitch Discrimination/physiology; Reaction Time/physiology; Speech Acoustics; Young Adult
20.
Hum Brain Mapp ; 31(4): 526-38, 2010 Apr.
Article in English | MEDLINE | ID: mdl-19780040

ABSTRACT

Incongruent auditory and visual stimuli can elicit audiovisual illusions such as the McGurk effect, where visual /ka/ and auditory /pa/ fuse into another percept such as /ta/. In the present study, human brain activity was measured with adaptation functional magnetic resonance imaging to investigate which brain areas support such audiovisual illusions. Subjects viewed trains of four movies beginning with three congruent /pa/ stimuli to induce adaptation. The fourth stimulus could be (i) another congruent /pa/, (ii) a congruent /ka/, (iii) an incongruent stimulus that evokes the McGurk effect in susceptible individuals (lips /ka/, voice /pa/), or (iv) the converse combination that does not cause the McGurk effect (lips /pa/, voice /ka/). This paradigm was predicted to show increased release from adaptation (i.e., stronger brain activation) when the fourth movie and the related percept were increasingly different from the three previous movies. A stimulus change in either the auditory or the visual stimulus from /pa/ to /ka/ (iii, iv) produced within-modality and cross-modal responses in primary auditory and visual areas. A greater release from adaptation was observed for incongruent non-McGurk (iv) compared to incongruent McGurk (iii) trials. A network including the primary auditory and visual cortices, non-primary auditory cortex, and several multisensory areas (superior temporal sulcus, intraparietal sulcus, insula, and precentral cortex) showed a correlation between perceiving the McGurk effect and the fMRI signal, suggesting that these areas support the audiovisual illusion.


Subject(s)
Adaptation, Psychological/physiology; Auditory Perception/physiology; Cerebral Cortex/physiology; Illusions/physiology; Speech Perception/physiology; Visual Perception/physiology; Acoustic Stimulation; Adult; Brain Mapping; Female; Humans; Magnetic Resonance Imaging; Male; Middle Aged; Neuropsychological Tests; Phonetics; Photic Stimulation; Speech; Video Recording; Young Adult