Results 1 - 20 of 29
1.
Neuroimage; 219: 116936, 2020 10 01.
Article in English | MEDLINE | ID: mdl-32474080

ABSTRACT

Natural speech builds on contextual relations that can prompt predictions of upcoming utterances. To study the neural underpinnings of such predictive processing we asked 10 healthy adults to listen to a 1-h-long audiobook while their magnetoencephalographic (MEG) brain activity was recorded. We correlated the MEG signals with the acoustic speech envelope, as well as with estimates of Bayesian word probability with and without the contextual word sequence (N-gram and Unigram, respectively), with a focus on time-lags. The MEG signals of auditory and sensorimotor cortices were strongly coupled to the speech envelope at the rates of syllables (4-8 Hz) and of prosody and intonation (0.5-2 Hz). The probability structure of word sequences, independently of the acoustical features, affected the ≤ 2-Hz signals extensively in auditory and rolandic regions, in precuneus, occipital cortices, and lateral and medial frontal regions. Fine-grained temporal progression patterns occurred across brain regions 100-1000 ms after word onsets. Although the acoustic effects were observed in both hemispheres, the contextual influences were statistically significantly lateralized to the left hemisphere. These results serve as a brain signature of the predictability of word sequences in listened continuous speech, confirming and extending previous results to demonstrate that deeply-learned knowledge and recent contextual information are employed dynamically and in a left-hemisphere-dominant manner in predicting the forthcoming words in natural speech.
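
The N-gram versus Unigram contrast above can be illustrated with a toy estimator. The sketch below is not the authors' pipeline; the corpus, the trigram order, and the unsmoothed maximum-likelihood estimates are placeholder assumptions, chosen only to show how a word's contextual probability differs from its context-free probability.

```python
from collections import Counter

def ngram_probs(tokens, n=3):
    """Toy maximum-likelihood unigram vs. n-gram word probabilities.

    Returns, for each word, (unigram_prob, ngram_prob), where the n-gram
    probability conditions on the preceding n-1 words. A real model would
    use a large corpus and smoothing; this sketch uses the text itself.
    """
    unigrams = Counter(tokens)
    total = sum(unigrams.values())
    context_counts = Counter(tuple(tokens[i - n + 1:i]) for i in range(n - 1, len(tokens)))
    joint_counts = Counter(tuple(tokens[i - n + 1:i + 1]) for i in range(n - 1, len(tokens)))

    probs = []
    for i, word in enumerate(tokens):
        p_uni = unigrams[word] / total
        if i >= n - 1:
            ctx = tuple(tokens[i - n + 1:i])
            p_ngram = joint_counts[ctx + (word,)] / context_counts[ctx]
        else:
            p_ngram = p_uni  # not enough context at the start of the text
        probs.append((p_uni, p_ngram))
    return probs

# Hypothetical usage on a tokenised transcript
text = "the cat sat on the mat and the cat slept".split()
for word, (p_uni, p_tri) in zip(text, ngram_probs(text, n=3)):
    print(f"{word:>6s}  unigram={p_uni:.2f}  trigram={p_tri:.2f}")
```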


Subjects
Brain/physiology, Speech Perception/physiology, Acoustic Stimulation, Adult, Attention/physiology, Auditory Cortex/physiology, Brain Mapping, Female, Humans, Magnetoencephalography, Male, Middle Aged, Speech/physiology, Young Adult
2.
J Neurosci; 37(43): 10421-10437, 2017 10 25.
Article in English | MEDLINE | ID: mdl-28951449

ABSTRACT

To gain fundamental knowledge on how the brain controls motor actions, we studied in detail the interplay between MEG signals from the primary sensorimotor (SM1) cortex and the contraction force of 17 healthy adult humans (7 females, 10 males). SM1 activity was coherent at ∼20 Hz with surface electromyogram (as already extensively reported) but also with contraction force. In both cases, the effective coupling was dominant in the efferent direction. Across subjects, the level of ∼20 Hz coherence between cortex and periphery positively correlated with the "burstiness" of ∼20 Hz SM1 (Pearson r ≈ 0.65) and peripheral fluctuations (r ≈ 0.9). Thus, ∼20 Hz coherence between cortex and periphery is tightly linked to the presence of ∼20 Hz bursts in SM1 and peripheral activity. However, the very high correlation with peripheral fluctuations suggests that the periphery is the limiting factor. At frequencies <3 Hz, both SM1 signals and the ∼20 Hz SM1 envelope were coherent with both force and its absolute change rate. The effective coupling dominated in the efferent direction between (1) force and the ∼20 Hz SM1 envelope and (2) the absolute change rate of the force and SM1 signals. Together, our data favor the view that ∼20 Hz coherence between cortex and periphery during isometric contraction builds on the presence of ∼20 Hz SM1 oscillations and need not rely on feedback from the periphery. They also suggest that effective cortical proprioceptive processing operates at <3 Hz frequencies, even during steady isometric contractions. SIGNIFICANCE STATEMENT: Accurate motor actions are made possible by continuous communication between the cortex and spinal motoneurons, but the neurophysiological basis of this communication is poorly understood. Using MEG recordings in humans maintaining steady isometric muscle contractions, we found evidence that the cortex sends population-level motor commands that tend to structure according to the ∼20 Hz sensorimotor rhythm, and that it dynamically adapts these commands based on the <3 Hz fluctuations of proprioceptive feedback. To our knowledge, this is the first report to give a comprehensive account of how the human brain dynamically handles the flow of proprioceptive information and converts it into an appropriate motor command to keep the contraction force steady.
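
The ∼20 Hz corticomuscular coherence described above is, in essence, a magnitude-squared coherence spectrum between a cortical and a peripheral signal. Below is a minimal sketch with scipy on synthetic signals that merely stand in for an SM1 MEG channel and an EMG/force trace; the signal shapes, noise levels, and the 15-30 Hz band are illustrative assumptions, not the study's parameters.

```python
import numpy as np
from scipy.signal import coherence

fs = 1000                      # sampling rate (Hz)
t = np.arange(0, 120, 1 / fs)  # 2 minutes of simulated "isometric contraction"
rng = np.random.default_rng(0)

drive = np.sin(2 * np.pi * 20 * t)            # shared ~20 Hz component
meg = drive + 2.0 * rng.standard_normal(t.size)       # stand-in for an SM1 channel
emg = 0.5 * drive + 2.0 * rng.standard_normal(t.size)  # stand-in for rectified EMG or force

f, coh = coherence(meg, emg, fs=fs, nperseg=2048)
beta = (f >= 15) & (f <= 30)
print(f"peak coherence {coh[beta].max():.2f} at {f[beta][np.argmax(coh[beta])]:.1f} Hz")
```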


Subjects
Sensory Feedback/physiology, Hand Strength/physiology, Isometric Contraction/physiology, Magnetoencephalography/methods, Skeletal Muscle/physiology, Sensorimotor Cortex/physiology, Adult, Female, Humans, Male, Middle Aged, Neurofeedback/methods, Photic Stimulation/methods, Young Adult
3.
J Neurosci; 37(25): 6125-6131, 2017 06 21.
Article in English | MEDLINE | ID: mdl-28536272

ABSTRACT

The size of human social networks significantly exceeds the network that can be maintained by social grooming or touching in other primates. It has been proposed that endogenous opioid release after social laughter would provide a neurochemical pathway supporting long-term relationships in humans (Dunbar, 2012), yet this hypothesis currently lacks direct neurophysiological support. We used PET and the µ-opioid-receptor (MOR)-specific ligand [11C]carfentanil to quantify laughter-induced endogenous opioid release in 12 healthy males. Before the social laughter scan, the subjects watched laughter-inducing comedy clips with their close friends for 30 min. Before the baseline scan, subjects spent 30 min alone in the testing room. Social laughter increased pleasurable sensations and triggered endogenous opioid release in the thalamus, caudate nucleus, and anterior insula. In addition, baseline MOR availability in the cingulate and orbitofrontal cortices was associated with the rate of social laughter. In a behavioral control experiment, pain threshold, a proxy of endogenous opioidergic activation, was elevated significantly more in both male and female volunteers after watching laughter-inducing comedy versus non-laughter-inducing drama in groups. Modulation of the opioidergic activity by social laughter may be an important neurochemical pathway that supports the formation, reinforcement, and maintenance of human social bonds. SIGNIFICANCE STATEMENT: Social contacts are vital to humans. The size of human social networks significantly exceeds the network that can be maintained by social grooming in other primates. Here, we used PET to show that endogenous opioid release after social laughter may provide a neurochemical mechanism supporting long-term relationships in humans. Participants were scanned twice: after a 30 min social laughter session and after spending 30 min alone in the testing room (baseline). Endogenous opioid release was stronger after laughter versus the baseline scan. Opioid receptor density in the frontal cortex predicted social laughter rates. Modulation of the opioidergic activity by social laughter may be an important neurochemical mechanism reinforcing and maintaining social bonds between humans.


Subjects
Brain Chemistry/physiology, Endorphins/metabolism, Laughter/physiology, Social Environment, Adult, Brain Mapping, Female, Humans, Male, Object Attachment, Pleasure, Positron-Emission Tomography, mu Opioid Receptors/drug effects, mu Opioid Receptors/metabolism, Young Adult
4.
J Neurosci; 36(5): 1596-606, 2016 Feb 03.
Article in English | MEDLINE | ID: mdl-26843641

ABSTRACT

Using a continuous listening task, we evaluated the coupling between the listener's cortical activity and the temporal envelopes of different sounds in a multitalker auditory scene using magnetoencephalography and corticovocal coherence analysis. Neuromagnetic signals were recorded from 20 right-handed healthy adult humans who listened to five different recorded stories (attended speech streams), one without any multitalker background (No noise) and four mixed with a "cocktail party" multitalker background noise at four signal-to-noise ratios (5, 0, -5, and -10 dB) to produce speech-in-noise mixtures, here referred to as Global scene. Coherence analysis revealed that the modulations of the attended speech stream, presented without multitalker background, were coupled at ∼0.5 Hz to the activity of both superior temporal gyri, whereas the modulations at 4-8 Hz were coupled to the activity of the right supratemporal auditory cortex. In cocktail party conditions, with the multitalker background noise, the coupling at both frequencies was stronger for the attended speech stream than for the unattended Multitalker background. The coupling strengths decreased as the Multitalker background increased. During the cocktail party conditions, the ∼0.5 Hz coupling became left-hemisphere dominant, compared with bilateral coupling without the multitalker background, whereas the 4-8 Hz coupling remained right-hemisphere lateralized in both conditions. The brain activity was not coupled to the multitalker background or to its individual talkers. The results highlight the key role of the listener's left superior temporal gyrus in extracting the slow ∼0.5 Hz modulations, likely reflecting the attended speech stream within a multitalker auditory scene. SIGNIFICANCE STATEMENT: When people listen to one person in a "cocktail party," their auditory cortex mainly follows the attended speech stream rather than the entire auditory scene. However, how the brain extracts the attended speech stream from the whole auditory scene and how increasing background noise corrupts this process is still debated. In this magnetoencephalography study, subjects had to attend a speech stream with or without multitalker background noise. Results argue for frequency-dependent cortical tracking mechanisms for the attended speech stream. The left superior temporal gyrus tracked the ∼0.5 Hz modulations of the attended speech stream only when the speech was embedded in multitalker background, whereas the right supratemporal auditory cortex tracked 4-8 Hz modulations during both noiseless and cocktail-party conditions.
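
Coherence analyses of this kind use the low-frequency temporal envelope of the speech waveform as the peripheral signal. The sketch below shows one common way to extract such an envelope; the Hilbert-transform rectification and the 10 Hz low-pass cutoff are illustrative choices, not the parameters used in the study, and the test signal is synthetic.

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def speech_envelope(audio, fs, cutoff_hz=10.0):
    """Wideband temporal envelope of a waveform: Hilbert rectification + low-pass.

    The 10 Hz cutoff is an illustrative choice that keeps the prosodic (~0.5 Hz)
    and syllabic (4-8 Hz) modulations discussed above.
    """
    env = np.abs(hilbert(audio))
    b, a = butter(4, cutoff_hz / (fs / 2), btype="low")
    return filtfilt(b, a, env)

# Hypothetical usage with a synthetic "speech-like" test signal
fs = 16000
t = np.arange(0, 5, 1 / fs)
carrier = np.random.default_rng(1).standard_normal(t.size)
audio = (1 + 0.8 * np.sin(2 * np.pi * 5 * t)) * carrier   # 5 Hz syllable-rate modulation
env = speech_envelope(audio, fs)
print(env.shape, float(env.mean()))
```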


Subjects
Acoustic Stimulation/methods, Attention/physiology, Auditory Cortex/physiology, Speech Perception/physiology, Temporal Lobe/physiology, Adult, Female, Humans, Magnetic Resonance Imaging/methods, Magnetoencephalography/methods, Male, Young Adult
5.
Cereb Cortex; 26(6): 2563-2573, 2016 06.
Article in English | MEDLINE | ID: mdl-25924952

ABSTRACT

Categorical models of emotions posit neurally and physiologically distinct human basic emotions. We tested this assumption by using multivariate pattern analysis (MVPA) to classify brain activity patterns of 6 basic emotions (disgust, fear, happiness, sadness, anger, and surprise) in 3 experiments. Emotions were induced with short movies or mental imagery during functional magnetic resonance imaging. MVPA accurately classified emotions induced by both methods, and the classification generalized from one induction condition to another and across individuals. Brain regions contributing most to the classification accuracy included medial and inferior lateral prefrontal cortices, frontal pole, precentral and postcentral gyri, precuneus, and posterior cingulate cortex. Thus, specific neural signatures across these regions hold representations of different emotional states in multimodal fashion, independently of how the emotions are induced. Similarity of subjective experiences between emotions was associated with similarity of neural patterns for the same emotions, suggesting a direct link between activity in these brain regions and the subjective emotional experience.
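
The cross-condition generalization logic of this MVPA can be sketched on synthetic data: train a linear classifier on patterns from one induction condition and test it on the other. Everything below (voxel counts, noise levels, the LinearSVC choice) is an invented stand-in, not the published analysis.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_per_class, n_voxels, n_classes = 30, 200, 6
prototypes = rng.standard_normal((n_classes, n_voxels))   # one "emotion pattern" per class

def simulate(noise):
    """Synthetic trials: each class prototype plus Gaussian noise."""
    X = np.vstack([p + noise * rng.standard_normal((n_per_class, n_voxels))
                   for p in prototypes])
    y = np.repeat(np.arange(n_classes), n_per_class)
    return X, y

X_movie, y_movie = simulate(noise=3.0)      # "movie" induction condition
X_imagery, y_imagery = simulate(noise=3.0)  # "imagery" induction condition

clf = LinearSVC(dual=False, max_iter=10000)
within = cross_val_score(clf, X_movie, y_movie, cv=5).mean()
across = clf.fit(X_movie, y_movie).score(X_imagery, y_imagery)
print(f"within-condition accuracy {within:.2f}, cross-condition {across:.2f} (chance 0.17)")
```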


Subjects
Brain Mapping/methods, Brain/physiology, Emotions/physiology, Magnetic Resonance Imaging/methods, Automated Pattern Recognition/methods, Adult, Female, Humans, Imagination/physiology, Male, Motion Perception/physiology, Multivariate Analysis, Neuropsychological Tests, Photic Stimulation, Young Adult
6.
Hum Brain Mapp; 36(12): 5168-82, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26415889

ABSTRACT

To maintain steady motor output, distracting sensory stimuli need to be blocked. To study the effects of brief auditory and visual distractors on the human primary motor (M1) cortex, we monitored magnetoencephalographic (MEG) cortical rhythms, electromyogram (EMG) of finger flexors, and corticomuscular coherence (CMC) during right-hand pinch (force 5-7% of maximum) while 1-kHz tones and checkerboard patterns were presented for 100 ms once every 3.5-5 s. Twenty-one subjects (out of twenty-two) showed statistically significant ∼20-Hz CMC. Both distractors elicited a covert startle-like response evident in changes of force and EMG (∼50% of the background variation) but without any visible movement, followed by ∼1-s enhancement of CMC (auditory on average by 75%, P < 0.001; visual by 33%, P < 0.05) and of the rolandic ∼20-Hz rhythm (auditory by 14%, P < 0.05; visual by 11%, P < 0.01). Directional coupling of coherence from muscle to the M1 cortex (EMG→MEG) increased for ∼0.5 s at the onset of the CMC enhancement, but only after the auditory distractor (by 105%; P < 0.05), likely reflecting startle-related proprioceptive afference. The 20-Hz enhancements occurred in the left M1 cortex and, for the auditory stimuli, were preceded by an early suppression (by 7%, P < 0.05). Task-unrelated distractors modulated corticospinal coupling at ∼20 Hz. We propose that the distractors triggered covert startle-like responses, resulting in proprioceptive afference to the cortex, and that they also transiently disengaged the subject's attention from the fine-motor task. As a result, the corticospinal output was readjusted to keep the contraction force stable.


Subjects
Acoustic Stimulation, Brain Mapping, Motor Evoked Potentials/physiology, Hand Strength/physiology, Motor Cortex/physiology, Photic Stimulation, Adult, Electromyography, Female, Fingers/innervation, Humans, Isometric Contraction, Magnetic Resonance Imaging, Magnetoencephalography, Male, Skeletal Muscle/physiology, Young Adult
7.
Eur J Neurosci; 42(8): 2508-14, 2015 Oct.
Article in English | MEDLINE | ID: mdl-26132210

ABSTRACT

An observer's brain is known to respond to another person's small nonverbal signals, such as gaze shifts and eye blinks. Here we aimed to find out how an observer's brain reacts to a speaker's eye blinks in the presence of other audiovisual information. Magnetoencephalographic brain responses along with eye gaze were recorded from 13 adults who watched a video of a person telling a story. The video was presented first without sound (visual), then with sound (audiovisual), and finally the audio story was presented with a still-frame picture on the screen (audio control). The viewers mainly gazed at the eye region of the speaker. Their saccades were suppressed at about 180 ms after the start of the speaker's blinks, with a subsequent increase of saccade occurrence to the base level, or higher, at around 340 ms. The suppression occurred in the visual and audiovisual conditions but not during the control audio presentation. Prominent brain responses to blinks peaked in the viewer's occipital cortex at about 250 ms, with no differences in mean peak amplitudes or latencies between the visual and audiovisual conditions. During the audiovisual, but not visual-only, presentation, the responses were the stronger the more empathetic the subject was according to the Empathic Concern score of the Interpersonal Reactivity Index questionnaire (Spearman's rank correlation, 0.73). The other person's eye blinks, nonverbal signs that often go unnoticed, thus elicited clear brain responses even in the presence of attention-attracting audiovisual information from the narrative, with stronger responses in people with higher empathy scores.


Subjects
Blinking, Brain/physiology, Empathy/physiology, Motion Perception/physiology, Social Perception, Acoustic Stimulation/methods, Adult, Auditory Perception/physiology, Eye Movement Measurements, Eye Movements/physiology, Facial Recognition/physiology, Female, Humans, Magnetoencephalography, Male, Narration, Neuropsychological Tests, Photic Stimulation/methods, Video Recording, Young Adult
8.
Ear Hear; 35(4): 461-7, 2014.
Article in English | MEDLINE | ID: mdl-24603544

ABSTRACT

OBJECTIVES: Auditory steady-state responses that can be elicited by various periodic sounds inform about subcortical and early cortical auditory processing. Steady-state responses to amplitude-modulated pure tones have been used to scrutinize binaural interaction by frequency-tagging the two ears' inputs at different frequencies. Unlike pure tones, speech and music are physically very complex, as they include many frequency components, pauses, and large temporal variations. To examine the utility of magnetoencephalographic (MEG) steady-state fields (SSFs) in the study of early cortical processing of complex natural sounds, the authors tested the extent to which amplitude-modulated speech and music can elicit reliable SSFs. DESIGN: MEG responses were recorded to 90-s-long binaural tones, speech, and music, amplitude-modulated at 41.1 Hz at four different depths (25, 50, 75, and 100%). The subjects were 11 healthy, normal-hearing adults. MEG signals were averaged in phase with the modulation frequency, and the sources of the resulting SSFs were modeled by current dipoles. After the MEG recording, intelligibility of the speech, musical quality of the music stimuli, naturalness of music and speech stimuli, and the perceived deterioration caused by the modulation were evaluated on visual analog scales. RESULTS: The perceived quality of the stimuli decreased as a function of increasing modulation depth, more strongly for music than speech; yet, all subjects considered the speech intelligible even at the 100% modulation. SSFs were the strongest to tones and the weakest to speech stimuli; the amplitudes increased with increasing modulation depth for all stimuli. SSFs to tones were reliably detectable at all modulation depths (in all subjects in the right hemisphere, in 9 subjects in the left hemisphere) and to music stimuli at 50 to 100% depths, whereas speech usually elicited clear SSFs only at 100% depth. The hemispheric balance of SSFs was toward the right hemisphere for tones and speech, whereas SSFs to music showed no lateralization. In addition, the right lateralization of SSFs to the speech stimuli decreased with decreasing modulation depth. CONCLUSIONS: The results showed that SSFs can be reliably measured to amplitude-modulated natural sounds, with slightly different hemispheric lateralization for different carrier sounds. With speech stimuli, modulation at 100% depth is required, whereas for music the 75% or even 50% modulation depths provide a reasonable compromise between the signal-to-noise ratio of SSFs and sound quality or perceptual requirements. SSF recordings thus seem feasible for assessing the early cortical processing of natural sounds.
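
The stimulus construction and the steady-state measure can be sketched as follows: sinusoidal amplitude modulation at 41.1 Hz with a given depth, and the spectral amplitude read off at the tag frequency from a rectified response. The carrier, window, and "response" below are synthetic placeholders, not the stimuli or MEG analysis of the study.

```python
import numpy as np

def amplitude_modulate(signal, fs, fm=41.1, depth=1.0):
    """Sinusoidal amplitude modulation at frequency fm with a given depth (0-1)."""
    t = np.arange(signal.size) / fs
    return signal * (1 + depth * np.sin(2 * np.pi * fm * t)) / (1 + depth)

def tag_amplitude(response, fs, fm=41.1):
    """Spectral amplitude of a response at the modulation (tag) frequency."""
    spectrum = np.fft.rfft(response * np.hanning(response.size))
    freqs = np.fft.rfftfreq(response.size, 1 / fs)
    return np.abs(spectrum[np.argmin(np.abs(freqs - fm))])

# Hypothetical check: deeper modulation -> larger 41.1 Hz component
fs = 1000
carrier = np.random.default_rng(2).standard_normal(90 * fs)   # 90 s "sound"
for depth in (0.25, 0.5, 0.75, 1.0):
    stim = amplitude_modulate(carrier, fs, depth=depth)
    envelope = np.abs(stim)          # crude stand-in for a rectified neural response
    print(f"depth {depth:.2f}: tag amplitude {tag_amplitude(envelope, fs):.1f}")
```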


Subjects
Cerebral Cortex/physiology, Magnetoencephalography, Music, Speech Perception/physiology, Acoustic Stimulation, Adult, Auditory Perception/physiology, Female, Functional Laterality/physiology, Humans, Male, Signal-to-Noise Ratio, Young Adult
9.
PLoS One; 8(11): e80284, 2013.
Article in English | MEDLINE | ID: mdl-24278270

ABSTRACT

To study how auditory cortical processing is affected by the anticipation and hearing of long emotional sounds, we recorded auditory evoked magnetic fields with a whole-scalp MEG device from 15 healthy adults who were listening to emotional or neutral sounds. Pleasant, unpleasant, or neutral sounds, each lasting for 6 s, were played in a random order, preceded by 100-ms cue tones (0.5, 1, or 2 kHz) 2 s before the onset of the sound. The cue tones, indicating the valence of the upcoming emotional sounds, evoked typical transient N100m responses in the auditory cortex. During the rest of the anticipation period (until the beginning of the emotional sound), auditory cortices of both hemispheres generated slow shifts of the same polarity as N100m. During anticipation, the relative strengths of the auditory-cortex signals depended on the upcoming sound: towards the end of the anticipation period the activity became stronger when the subject was anticipating emotional rather than neutral sounds. During the actual emotional and neutral sounds, sustained fields were predominant in the left hemisphere for all sounds. The DC MEG signals measured during both anticipation and hearing of emotional sounds thus imply that, once the cue has indicated the valence of the upcoming sound, auditory-cortex activity is modulated by the sound category already during the anticipation period.


Subjects
Acoustic Stimulation, Auditory Cortex/physiology, Emotions, Magnetoencephalography, Female, Humans, Male
10.
PLoS One; 8(5): e64489, 2013.
Article in English | MEDLINE | ID: mdl-23734202

ABSTRACT

Earlier studies have shown considerable intersubject synchronization of brain activity when subjects watch the same movie or listen to the same story. Here we investigated the across-subjects similarity of brain responses to speech and non-speech sounds in a continuous audio drama designed for blind people. Thirteen healthy adults listened for ∼19 min to the audio drama while their brain activity was measured with 3 T functional magnetic resonance imaging (fMRI). An intersubject-correlation (ISC) map, computed across the whole experiment to assess the stimulus-driven extrinsic brain network, indicated statistically significant ISC in temporal, frontal and parietal cortices, cingulate cortex, and amygdala. Group-level independent component (IC) analysis was used to parcel out the brain signals into functionally coupled networks, and the dependence of the ICs on external stimuli was tested by comparing them with the ISC map. This procedure revealed four extrinsic ICs, two of which, covering non-overlapping areas of the auditory cortex, were modulated by both speech and non-speech sounds. The two other extrinsic ICs, one left-hemisphere-lateralized and the other right-hemisphere-lateralized, were speech-related and comprised the superior and middle temporal gyri, temporal poles, and the left angular and inferior orbital gyri. In areas of low ISC, four ICs that were defined as intrinsic fluctuated similarly to the time courses of either the speech-sound-related or all-sounds-related extrinsic ICs. These ICs included the superior temporal gyrus, the anterior insula, and the frontal, parietal and midline occipital cortices. Taken together, substantial intersubject synchronization of cortical activity was observed in subjects listening to an audio drama, with results suggesting that speech is processed in two separate networks, one dedicated to the processing of speech sounds and the other to both speech and non-speech sounds.
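
An intersubject-correlation map is, per voxel, the mean pairwise correlation of subjects' time courses. Below is a minimal numpy sketch on synthetic data; the subject count, voxel split, and noise level are invented for illustration and do not reproduce the study's analysis.

```python
import numpy as np

def intersubject_correlation(data):
    """Voxelwise intersubject correlation (ISC).

    data: array of shape (n_subjects, n_timepoints, n_voxels).
    Returns the mean pairwise Pearson correlation across subjects per voxel.
    """
    n_subj, _, n_vox = data.shape
    z = (data - data.mean(axis=1, keepdims=True)) / data.std(axis=1, keepdims=True)
    isc = np.zeros(n_vox)
    n_pairs = 0
    for i in range(n_subj):
        for j in range(i + 1, n_subj):
            isc += (z[i] * z[j]).mean(axis=0)   # per-voxel Pearson r for this subject pair
            n_pairs += 1
    return isc / n_pairs

# Hypothetical example: a shared stimulus-driven signal in half of the voxels
rng = np.random.default_rng(3)
shared = rng.standard_normal((500, 1))                       # common time course
subjects = np.stack([np.hstack([shared + 0.8 * rng.standard_normal((500, 50)),
                                rng.standard_normal((500, 50))]) for _ in range(13)])
isc = intersubject_correlation(subjects)
print(f"driven voxels r≈{isc[:50].mean():.2f}, noise voxels r≈{isc[50:].mean():.2f}")
```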


Subjects
Auditory Cortex/physiology, Auditory Perception/physiology, Sound, Speech, Acoustic Stimulation, Adult, Analysis of Variance, Brain/physiology, Brain Mapping, Female, Functional Laterality/physiology, Humans, Linear Models, Magnetic Resonance Imaging, Male, Phonetics, Speech Perception/physiology, Young Adult
11.
J Acoust Soc Am; 132(3): 1747-53, 2012 Sep.
Article in English | MEDLINE | ID: mdl-22978901

ABSTRACT

The auditory octave illusion arises when dichotically presented tones, one octave apart, alternate rapidly between the ears. Most subjects perceive an illusory sequence of monaural tones: a high tone in the right ear (RE) alternates with a low tone, incorrectly localized to the left ear (LE). Behavioral studies suggest that the perceived pitch follows the RE input, and the perceived location the higher-frequency sound. To explore the link between the perceived pitches and brain-level interactions of dichotic tones, magnetoencephalographic responses were recorded to 4 binaural combinations of 2-min-long continuous 400- and 800-Hz tones and to 4 monaural tones. Responses to LE and RE inputs were distinguished by frequency-tagging the ear-specific stimuli at different modulation frequencies. During dichotic presentation, ipsilateral LE tones elicited weaker and ipsilateral RE tones stronger responses than when both ears received the same tone. During the most paradoxical stimulus (high tone to the LE and low tone to the RE, perceived as a low tone in the LE during the illusion), the contralateral responses to LE tones were also diminished. The results demonstrate modified binaural interaction of dichotic tones one octave apart, suggesting that this interaction contributes to pitch perception during the octave illusion.


Subjects
Auditory Pathways/physiology, Cerebrum/physiology, Illusions, Music, Pitch Perception, Acoustic Stimulation, Adult, Auditory Threshold, Brain Mapping/methods, Brain Waves, Speech Discrimination Tests, Female, Functional Laterality, Humans, Magnetoencephalography, Male, Middle Aged, Psychoacoustics, Time Factors, Young Adult
12.
J Neurosci; 32(3): 966-71, 2012 Jan 18.
Article in English | MEDLINE | ID: mdl-22262894

ABSTRACT

In rodents, the Robo1 gene regulates midline crossing of major nerve tracts, a fundamental property of the mammalian CNS. However, the neurodevelopmental function of the human ROBO1 gene remains unknown, apart from a suggested role in dyslexia. We therefore studied axonal crossing with a functional approach, based on magnetoencephalography, in 10 dyslexic individuals who all share the same rare, weakly expressing haplotype of the ROBO1 gene. Auditory-cortex responses were recorded separately to left- and right-ear sounds that were amplitude modulated at different frequencies. We found impaired interaural interaction that depended on ROBO1 expression in a dose-dependent manner. Our results indicate that normal crossing of the auditory pathways requires an adequate ROBO1 expression level.


Subjects
Auditory Cortex/physiopathology, Auditory Pathways/physiopathology, Dyslexia, Ear/physiopathology, Auditory Evoked Potentials/genetics, Nerve Tissue Proteins/genetics, Immunologic Receptors/genetics, Acoustic Stimulation/methods, Adult, Analysis of Variance, DNA Mutational Analysis, Dyslexia/genetics, Dyslexia/pathology, Dyslexia/physiopathology, Electroencephalography, Family Health, Female, Functional Laterality/genetics, Gene Expression Regulation/genetics, Humans, Magnetic Resonance Imaging, Magnetoencephalography, Male, Middle Aged, Reaction Time/genetics, Young Adult, Roundabout Proteins
13.
Hum Brain Mapp; 33(7): 1648-62, 2012 Jul.
Article in English | MEDLINE | ID: mdl-21915941

ABSTRACT

Independent component analysis (ICA) of electroencephalographic (EEG) and magnetoencephalographic (MEG) data is usually performed over the temporal dimension: each channel is one row of the data matrix, and a linear transformation maximizing the independence of component time courses is sought. In functional magnetic resonance imaging (fMRI), by contrast, most studies use spatial ICA: each time point constitutes a row of the data matrix, and independence of the spatial patterns is maximized. Here, we show the utility of spatial ICA in characterizing oscillatory neuromagnetic signals. We project the sensor data into cortical space using a standard minimum-norm estimate and apply a sparsifying transform to focus on oscillatory signals. The resulting method, spatial Fourier-ICA, provides a concise summary of the spatiotemporal and spectral content of spontaneous neuromagnetic oscillations in cortical source space over time scales of minutes. Spatial Fourier-ICA applied to resting-state and naturalistic stimulation MEG data from nine healthy subjects revealed consistent components covering the early visual, somatosensory and motor cortices with spectral peaks at ∼10 and ∼20 Hz. The proposed method seems valuable for inferring functional connectivity, stimulus-related modulation of rhythmic activity, and their commonalities across subjects from nonaveraged MEG data.
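
A schematic of the spatial Fourier-ICA idea is sketched below on invented sensor-level data; it omits the minimum-norm projection to cortical source space and is not the published implementation. Short-time Fourier amplitudes are arranged so that ICA seeks independence over the combined spatial-spectral dimension, yielding spatial-spectral patterns and their window-by-window time courses.

```python
import numpy as np
from scipy.signal import stft
from sklearn.decomposition import FastICA

rng = np.random.default_rng(4)
fs, n_chan, n_samp = 200, 30, 200 * 300               # 5 min of synthetic "MEG"
data = rng.standard_normal((n_chan, n_samp))
# embed a ~10 Hz rhythm in a subset of channels so one component has structure
data[:10] += np.sin(2 * np.pi * 10 * np.arange(n_samp) / fs) * rng.uniform(0.5, 1.5, (10, 1))

f, t, Z = stft(data, fs=fs, nperseg=400)               # Z: (n_chan, n_freq, n_windows)
amp = np.abs(Z)                                        # sparsifying step: keep amplitudes
X = amp.reshape(n_chan * len(f), len(t))               # rows: (channel, frequency) bins

ica = FastICA(n_components=5, random_state=0)
patterns = ica.fit_transform(X)                        # spatial-spectral patterns (independent)
timecourses = ica.mixing_                              # (n_windows, 5) activation over time
patterns = patterns.T.reshape(5, n_chan, len(f))       # pattern of each component
print(timecourses.shape, patterns.shape)
```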


Subjects
Fourier Analysis, Magnetoencephalography/methods, Motor Cortex/physiology, Principal Component Analysis/methods, Somatosensory Cortex/physiology, Visual Cortex/physiology, Acoustic Stimulation/methods, Adult, Brain Waves/physiology, Female, Humans, Male, Photic Stimulation/methods, Time Factors, Young Adult
14.
J Neurosci; 30(4): 1314-21, 2010 Jan 27.
Article in English | MEDLINE | ID: mdl-20107058

ABSTRACT

Watching the lips of a speaker enhances speech perception. At the same time, the 100 ms response to speech sounds is suppressed in the observer's auditory cortex. Here, we used whole-scalp 306-channel magnetoencephalography (MEG) to study whether lipreading modulates human auditory processing already at the level of the most elementary sound features, i.e., pure tones. We further envisioned the temporal dynamics of the suppression to tell whether the effect is driven by top-down influences. Nineteen subjects were presented with 50 ms tones spanning six octaves (125-8000 Hz) (1) during "lipreading," i.e., when they watched video clips of silent articulations of Finnish vowels /a/, /i/, /o/, and /y/, and reacted to vowels presented twice in a row; (2) during a visual control task; (3) during a still-face passive control condition; and (4) in a separate experiment with a subset of nine subjects, during covert production of the same vowels. Auditory-cortex 100 ms responses (N100m) were equally suppressed in the lipreading and covert-speech-production tasks compared with the visual control and baseline tasks; the effects involved all frequencies and were most prominent in the left hemisphere. Responses to tones presented at different times with respect to the onset of the visual articulation showed significantly increased N100m suppression immediately after the articulatory gesture. These findings suggest that the lipreading-related suppression in the auditory cortex is caused by top-down influences, possibly by an efference copy from the speech-production system, generated during both own speech and lipreading.


Subjects
Auditory Cortex/physiology, Auditory Perception/physiology, Lipreading, Perceptual Masking/physiology, Speech Perception/physiology, Visual Perception/physiology, Acoustic Stimulation, Adult, Auditory Pathways/physiology, Brain Mapping, Auditory Evoked Potentials/physiology, Female, Functional Laterality/physiology, Humans, Magnetoencephalography, Male, Nerve Net/physiology, Neural Inhibition/physiology, Neuropsychological Tests, Photic Stimulation, Pitch Discrimination/physiology, Reaction Time/physiology, Speech Acoustics, Young Adult
15.
Neuroimage; 48(1): 176-85, 2009 Oct 15.
Article in English | MEDLINE | ID: mdl-19344775

ABSTRACT

Natural stimuli are increasingly used in functional magnetic resonance imaging (fMRI) studies to imitate real-life situations. Consequently, challenges are created for novel analysis methods, including new machine-learning tools. With natural stimuli it is no longer feasible to assume that single features of the experimental design alone account for the brain activity. Instead, relevant combinations of rich enough stimulus features could explain the more complex activation patterns. We propose a novel two-step approach, where independent component analysis is first used to identify spatially independent brain processes, which we refer to as functional patterns. As the second step, temporal dependencies between stimuli and functional patterns are detected using canonical correlation analysis. Our proposed method looks for combinations of stimulus features and the corresponding combinations of functional patterns. This two-step approach was used to analyze measurements from an fMRI study during multi-modal stimulation. The detected complex activation patterns were explained as resulting from interactions of multiple brain processes. Our approach seems promising for analysis of data from studies with natural stimuli.
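
The two-step approach can be caricatured on synthetic data: spatial ICA yields component time courses ("functional patterns"), and canonical correlation analysis then pairs combinations of stimulus features with combinations of those patterns. The dimensions, noise, and the FastICA/CCA choices below are assumptions for illustration only, not the published pipeline.

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(5)
n_time, n_voxels, n_features = 400, 500, 6

stimuli = rng.standard_normal((n_time, n_features))    # e.g. auditory/visual/tactile features
latent = stimuli[:, :2] @ rng.standard_normal((2, 3))  # brain processes driven by a feature mix
fmri = latent @ rng.standard_normal((3, n_voxels)) + 5 * rng.standard_normal((n_time, n_voxels))

# Step 1: spatial ICA (voxels as samples) -> spatial maps + component time courses
ica = FastICA(n_components=10, random_state=0)
spatial_maps = ica.fit_transform(fmri.T)                # (n_voxels, n_components)
time_courses = ica.mixing_                              # (n_time, n_components)

# Step 2: CCA between stimulus features and the functional-pattern time courses
cca = CCA(n_components=2)
stim_c, brain_c = cca.fit_transform(stimuli, time_courses)
r = [np.corrcoef(stim_c[:, k], brain_c[:, k])[0, 1] for k in range(2)]
print("canonical correlations:", np.round(r, 2))
```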


Subjects
Auditory Perception/physiology, Brain/physiology, Touch Perception/physiology, Visual Perception/physiology, Acoustic Stimulation, Humans, Magnetic Resonance Imaging, Photic Stimulation, Physical Stimulation, Time Factors
16.
Hum Brain Mapp; 30(9): 2890-7, 2009 Sep.
Article in English | MEDLINE | ID: mdl-19184995

ABSTRACT

Suggestion, a powerful factor in everyday social interaction, is most effective during hypnosis. Subjective evaluations and brain-imaging findings converge to propose that hypnotic suggestion strongly modulates sensory processing. To reveal the brain regions that mediate such a modulation, we analyzed data from a functional-magnetic-resonance-imaging study on hypnotic-suggestion-induced pain in 14 suggestible subjects. Activation strengths in the right dorsolateral prefrontal cortex (DLPFC) during initiation of the suggestion for pain correlated positively with the subjective intensity of the subsequent suggestion-induced pain, as well as with the strength of the maximum pain-related activation in the secondary somatosensory (SII) cortex. Furthermore, activation of the insula and the anterior cingulate cortex predicted the pain-related SII activation. The right DLPFC, as an area important for executive functions, likely contributes to functional modulation in the modality-specific target areas of given suggestions.


Subjects
Hypnosis, Imagination/physiology, Pain/physiopathology, Pain/psychology, Prefrontal Cortex/physiology, Suggestion, Adult, Brain Mapping, Cerebral Cortex/anatomy & histology, Cerebral Cortex/physiology, Female, Functional Laterality/physiology, Gyrus Cinguli/anatomy & histology, Gyrus Cinguli/physiology, Humans, Illusions/physiology, Illusions/psychology, Magnetic Resonance Imaging, Male, Nerve Net/anatomy & histology, Nerve Net/physiology, Neuropsychological Tests, Pain Measurement, Placebo Effect, Prefrontal Cortex/anatomy & histology, Somatosensory Cortex/anatomy & histology, Somatosensory Cortex/physiology, Young Adult
17.
Hum Brain Mapp; 30(5): 1524-34, 2009 May.
Article in English | MEDLINE | ID: mdl-18661502

ABSTRACT

Magnetic interference signals often hamper analysis of magnetoencephalographic (MEG) measurements. Artifact sources in the proximity of the sensors cause strong and spatially complex signals that are particularly challenging for the existing interference-suppression methods. Here we demonstrate the performance of the temporally extended signal space separation method (tSSS) in removing strong interference caused by external and nearby sources on auditory evoked magnetic fields, whose sources are well established. The MEG signals were contaminated by normal environmental interference, by artificially produced additional external interference, and by nearby artifacts produced by a piece of magnetized wire in the subject's lip. After tSSS processing, even the single-trial auditory responses had a good-enough signal-to-noise ratio for detailed waveform and source analysis. Waveforms and source locations of the tSSS-reconstructed data were in good agreement with the responses from the control condition without extra interference. Our results demonstrate that tSSS is a robust and efficient method for removing a wide range of different types of interference signals in neuromagnetic multichannel measurements.
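
For readers who want to try tSSS-style cleaning, MNE-Python ships an independent SSS/tSSS implementation (maxwell_filter); the sketch below uses MNE's tutorial sample dataset rather than the recordings of this study, and the window length and correlation limit are just common default-like choices, not the paper's settings.

```python
import mne

# MNE's tutorial dataset (downloaded on first use); data_path is a pathlib.Path in recent MNE
data_path = mne.datasets.sample.data_path()
raw = mne.io.read_raw_fif(data_path / "MEG" / "sample" / "sample_audvis_raw.fif", preload=True)

# SSS with the temporal extension (tSSS): st_duration enables the temporal projection
raw_tsss = mne.preprocessing.maxwell_filter(
    raw,
    st_duration=10.0,      # length of the temporal window (s)
    st_correlation=0.98,   # subspace correlation limit for rejecting interference
)

# Compare averaged evoked responses before and after cleaning
events = mne.find_events(raw)
evoked_before = mne.Epochs(raw, events, tmin=-0.1, tmax=0.4, preload=True).average()
evoked_after = mne.Epochs(raw_tsss, events, tmin=-0.1, tmax=0.4, preload=True).average()
print(evoked_before.nave, evoked_after.nave)
```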


Subjects
Artifacts, Brain Mapping, Brain/physiology, Auditory Evoked Potentials/physiology, Computer-Assisted Signal Processing, Acoustic Stimulation/methods, Computer Simulation, Electroencephalography, Electromagnetic Fields/adverse effects, Humans, Magnetoencephalography, Neurological Models, Reaction Time/physiology, Time Factors
18.
Proc Natl Acad Sci U S A; 105(51): 20500-4, 2008 Dec 23.
Article in English | MEDLINE | ID: mdl-19074267

ABSTRACT

When a visual scene allows multiple interpretations, the percepts may spontaneously alternate despite the stable retinal image and the invariant sensory input transmitted to the brain. To study the brain basis of such multi-stable percepts, we superimposed rapidly changing dynamic noise as regional tags to the Rubin vase-face figure and followed the corresponding tag-related cortical signals with magnetoencephalography. The activity already in the earliest visual cortical areas, the primary visual cortex included, varied with the perceptual states reported by the observers. These percept-related modulations most likely reflect top-down influences that accentuate the neural representation of the perceived object in the early visual cortex and maintain the segregation of objects from the background.


Subjects
Visual Cortex/physiology, Visual Perception/physiology, Acoustic Stimulation, Adult, Brain Mapping, Female, Humans, Magnetoencephalography, Male, Young Adult
19.
Soc Neurosci; 3(3-4): 401-9, 2008.
Article in English | MEDLINE | ID: mdl-18633834

ABSTRACT

Human footsteps carry a vast amount of social information, which is often unconsciously noted. Using functional magnetic resonance imaging, we analyzed brain networks activated by footstep sounds of one or two persons walking. Listening to two persons walking together activated brain areas previously associated with affective states and social interaction, such as the subcallosal gyrus bilaterally, the right temporal pole, and the right amygdala. These areas seem to be involved in the analysis of persons' identity and complex social stimuli on the basis of auditory cues. Single footsteps activated only the biological motion area in the posterior STS region. Thus, hearing two persons walking together involved a more widespread brain network than did hearing footsteps from a single person.


Subjects
Auditory Perception/physiology, Brain Mapping, Brain/physiology, Hearing/physiology, Walking, Acoustic Stimulation/methods, Adult, Brain/anatomy & histology, Brain/blood supply, Female, Functional Laterality, Humans, Computer-Assisted Image Processing/methods, Magnetic Resonance Imaging/methods, Male, Oxygen/blood, Rest/physiology, Surveys and Questionnaires, Young Adult
20.
Proc Natl Acad Sci U S A; 104(21): 9058-62, 2007 May 22.
Article in English | MEDLINE | ID: mdl-17470782

ABSTRACT

We quantified rhythmic brain activity, recorded with whole-scalp magnetoencephalography (MEG), of 13 healthy subjects who were performing, seeing, or hearing the tapping of a drum membrane with the right index finger. In the actor's primary motor (M1) cortex, the level of the approximately 20-Hz brain rhythms started to decrease, as a sign of M1 activation, approximately 2 s before the action and then increased, with a clear rebound approximately 0.6 s after the tapping, as a sign of M1 stabilization. A very similar time course occurred in the M1 cortex of the observer: the activation, although less vigorous than in the actor, started approximately 0.8 s before the action and was followed by a rebound. When the subject just heard the tapping sound, no preaction activation was visible, but a rebound followed the sound. The approximately 10-Hz somatosensory rhythm, which also started to decrease before own and viewed actions, returned to the baseline level approximately 0.6 s later after own actions than observed actions. This delay likely reflects proprioceptive input to the cortex, available only during own actions, and therefore could be related to the brain signature of the sense of agency. The strikingly similar motor cortex reactivity during the first and third person actions expands previous data on brain mechanisms of intersubjective understanding. Besides motor cortex activation before own and observed (predicted) actions, the M1 cortex of both the viewer and the listener stabilized in a very similar manner after brisk motor actions.
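
The ∼20 Hz level time courses reported above are typically obtained by band-pass filtering and taking the amplitude envelope. Below is a minimal sketch on a synthetic M1-like signal with a built-in suppression around a "tap"; the band edges, event timing, and noise are illustrative assumptions, not the study's analysis parameters.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def beta_envelope(signal, fs, band=(15, 25)):
    """Time course of ~20 Hz rhythm amplitude: band-pass filter, then Hilbert envelope."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    return np.abs(hilbert(filtfilt(b, a, signal)))

# Synthetic M1 signal whose 20 Hz rhythm is suppressed around a tap at t = 0
fs = 500
t = np.arange(-3, 3, 1 / fs)
rng = np.random.default_rng(6)
suppression = 1 - 0.7 * np.exp(-(t / 0.5) ** 2)            # dip around the movement
meg = suppression * np.sin(2 * np.pi * 20 * t) + 0.5 * rng.standard_normal(t.size)

env = beta_envelope(meg, fs)
pre = env[(t > -2.5) & (t < -1.5)].mean()
dip = env[np.abs(t) < 0.2].mean()
print(f"baseline level {pre:.2f} -> around the tap {dip:.2f}")
```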


Subjects
Acoustic Stimulation, Motor Cortex/physiology, Photic Stimulation, Psychomotor Performance/physiology, Adult, Female, Humans, Magnetoencephalography, Male, Observation