Results 1 - 20 of 23
1.
Psychophysiology ; 60(11): e14362, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37350379

ABSTRACT

The most prominent acoustic features in speech are intensity modulations, represented by the amplitude envelope of speech. Synchronization of neural activity with these modulations supports speech comprehension. As the acoustic modulation of speech is related to the production of syllables, investigations of neural speech tracking commonly do not distinguish between lower-level acoustic (envelope modulation) and higher-level linguistic (syllable rate) information. Here we manipulated speech intelligibility using noise-vocoded speech and investigated the spectral dynamics of neural speech processing, across two studies at cortical and subcortical levels of the auditory hierarchy, using magnetoencephalography. Overall, cortical regions mostly track the syllable rate, whereas subcortical regions track the acoustic envelope. Furthermore, with less intelligible speech, tracking of the modulation rate becomes more dominant. Our study highlights the importance of distinguishing between envelope modulation and syllable rate and provides novel possibilities to better understand differences between auditory processing and speech/language processing disorders.
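The tracking described above is commonly quantified as synchronization between neural activity and the amplitude envelope. A minimal, hypothetical sketch of one standard measure, speech-brain coherence, on toy signals (the 4 Hz modulation rate, sampling rate, and noise level are invented for illustration, not taken from the study):

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(0)
fs = 200.0                    # sampling rate in Hz (assumed for this toy)
t = np.arange(0, 60, 1 / fs)  # 60 s of simulated data

# Toy speech envelope: modulations at a 4 Hz "syllable-like" rate
envelope = 1 + 0.5 * np.sin(2 * np.pi * 4 * t)

# Simulated neural channel: tracks the envelope, plus broadband noise
neural = 0.8 * envelope + rng.standard_normal(t.size)

# Magnitude-squared coherence in 2 s windows
f, coh = coherence(envelope, neural, fs=fs, nperseg=int(2 * fs))
peak_freq = f[np.argmax(coh)]
print(peak_freq)  # coherence should peak at the 4 Hz modulation rate
```

The same machinery applies whether the reference signal is the acoustic envelope or a syllable-rate regressor, which is exactly the distinction the abstract draws.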


Subjects
Speech Perception, Speech, Humans, Magnetoencephalography, Noise, Cognition, Acoustic Stimulation, Speech Intelligibility
2.
Proc Biol Sci ; 290(1994): 20222410, 2023 03 08.
Article in English | MEDLINE | ID: mdl-36855868

ABSTRACT

When speech is too fast, the tracking of the acoustic signal along the auditory pathway deteriorates, leading to suboptimal speech segmentation and decoding of speech information. Thus, speech comprehension is limited by the temporal constraints of the auditory system. Here we ask whether individual differences in auditory-motor coupling strength in part shape these temporal constraints. In two behavioural experiments, we characterize individual differences in the comprehension of naturalistic speech as a function of the individual synchronization between the auditory and motor systems and the preferred frequencies of the systems. As expected, speech comprehension declined at higher speech rates. Importantly, however, both higher auditory-motor synchronization and higher spontaneous speech motor production rates were predictive of better speech-comprehension performance. Furthermore, performance increased with higher working memory capacity (digit span) and higher linguistic, model-based sentence predictability, particularly so at higher speech rates and for individuals with high auditory-motor synchronization. The data provide evidence for a model of speech comprehension in which individual flexibility of not only the motor system but also auditory-motor synchronization may play a modulatory role.


Subjects
Comprehension, Speech, Humans, Acoustics, Extremities, Linguistics
3.
Neuroimage ; 268: 119894, 2023 03.
Article in English | MEDLINE | ID: mdl-36693596

ABSTRACT

Listening to speech with poor signal quality is challenging. Neural speech tracking of degraded speech has been used to advance the understanding of how brain processes and speech intelligibility are interrelated. However, the temporal dynamics of neural speech tracking and their relation to speech intelligibility are not clear. In the present MEG study, we used temporal response functions (TRFs) to describe the time course of speech tracking on a gradient from intelligible to unintelligible degraded speech. In addition, we used inter-related facets of neural speech tracking (e.g., speech envelope reconstruction, speech-brain coherence, and components of broadband coherence spectra) to corroborate our TRF findings. Our TRF analysis yielded marked temporally differential effects of vocoding: ∼50-110 ms (M50TRF), ∼175-230 ms (M200TRF), and ∼315-380 ms (M350TRF). Reduction of intelligibility went along with large increases of early peak responses M50TRF, but strongly reduced responses in M200TRF. In the late responses M350TRF, the maximum response occurred for degraded speech that was still comprehensible and then declined with reduced intelligibility. Furthermore, we related the TRF components to our other neural "tracking" measures and found that M50TRF and M200TRF play a differential role in the shifting center frequency of the broadband coherence spectra. Overall, our study highlights the importance of time-resolved computation of neural speech tracking and decomposition of coherence spectra and provides a better understanding of degraded speech processing.
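A TRF is, at its core, a regularized regression from time-lagged copies of the stimulus onto the neural response. The sketch below illustrates that logic on simulated data; the Gaussian "true" TRF, the lag range, and the ridge parameter are invented for the toy, not taken from the study's pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 100                       # sampling rate in Hz (assumed)
n = 20 * fs                    # 20 s of simulated data
stim = rng.standard_normal(n)  # toy stimulus envelope (white for simplicity)

# Ground-truth TRF: a Gaussian bump peaking at 100 ms
lags = np.arange(int(0.4 * fs))                 # 0-400 ms of lags
true_trf = np.exp(-0.5 * ((lags - 10) / 3.0) ** 2)

# Lagged design matrix and a noisy simulated response
X = np.column_stack([np.roll(stim, int(L)) for L in lags])
X[: lags.max()] = 0                             # discard wrap-around samples
y = X @ true_trf + 0.5 * rng.standard_normal(n)

# Ridge estimate of the TRF: solve (X'X + lam*I) w = X'y
lam = 1.0
trf_hat = np.linalg.solve(X.T @ X + lam * np.eye(lags.size), X.T @ y)

peak_ms = 1000 * lags[np.argmax(trf_hat)] / fs
print(peak_ms)  # should recover a peak near 100 ms
```

Peaks of the estimated TRF at different latencies correspond to components like the M50TRF and M200TRF discussed in the abstract.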


Subjects
Speech Intelligibility, Speech Perception, Humans, Speech Intelligibility/physiology, Speech Perception/physiology, Brain/physiology, Auditory Perception, Cognition, Acoustic Stimulation
4.
eNeuro ; 9(3), 2022.
Article in English | MEDLINE | ID: mdl-35728955

ABSTRACT

Speech is an intrinsically multisensory signal, and seeing the speaker's lips forms a cornerstone of communication in acoustically impoverished environments. Still, it remains unclear how the brain exploits visual speech for comprehension. Previous work debated whether lip signals are mainly processed along the auditory pathways or whether the visual system directly implements speech-related processes. To probe this, we systematically characterized dynamic representations of multiple acoustic and visual speech-derived features in source localized MEG recordings that were obtained while participants listened to speech or viewed silent speech. Using a mutual-information framework we provide a comprehensive assessment of how well temporal and occipital cortices reflect the physically presented signals and unique aspects of acoustic features that were physically absent but may be critical for comprehension. Our results demonstrate that both cortices feature a functionally specific form of multisensory restoration: during lip reading, they reflect unheard acoustic features, independent of co-existing representations of the visible lip movements. This restoration emphasizes the unheard pitch signature in occipital cortex and the speech envelope in temporal cortex and is predictive of lip-reading performance. These findings suggest that when seeing the speaker's lips, the brain engages both visual and auditory pathways to support comprehension by exploiting multisensory correspondences between lip movements and spectro-temporal acoustic cues.
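The mutual-information framework mentioned above can be illustrated with a simple plug-in (histogram) estimator. The signals, the correlation strength, and the bin count below are toy assumptions, not the study's features or estimator:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000
lip = rng.standard_normal(n)                   # toy lip-aperture time series
envelope = lip + 0.5 * rng.standard_normal(n)  # envelope correlated with lips
control = rng.standard_normal(n)               # unrelated control signal

def binned_mi(x, y, bins=8):
    """Plug-in mutual information (in bits) from a 2-D histogram."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

mi_speech = binned_mi(lip, envelope)
mi_control = binned_mi(lip, control)
print(mi_speech > mi_control)  # shared information exceeds the control
```

In the study's setting, the interesting quantity is the information occipital and temporal signals carry about acoustic features that were physically absent; the estimator shape is the same.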


Subjects
Lipreading, Speech Perception, Acoustic Stimulation, Acoustics, Humans, Speech
5.
Elife ; 11, 2022 02 08.
Article in English | MEDLINE | ID: mdl-35133276

ABSTRACT

Fluctuations in arousal, controlled by subcortical neuromodulatory systems, continuously shape cortical state, with profound consequences for information processing. Yet, how arousal signals influence cortical population activity in detail has so far only been characterized for a few selected brain regions. Traditional accounts conceptualize arousal as a homogeneous modulator of neural population activity across the cerebral cortex. Recent insights, however, point to a higher specificity of arousal effects on different components of neural activity and across cortical regions. Here, we provide a comprehensive account of the relationships between fluctuations in arousal and neuronal population activity across the human brain. Exploiting the established link between pupil size and central arousal systems, we performed concurrent magnetoencephalographic (MEG) and pupillographic recordings in a large number of participants, pooled across three laboratories. We found a cascade of effects relative to the peak timing of spontaneous pupil dilations: Decreases in low-frequency (2-8 Hz) activity in temporal and lateral frontal cortex, followed by increased high-frequency (>64 Hz) activity in mid-frontal regions, followed by monotonic and inverted U relationships with intermediate frequency-range activity (8-32 Hz) in occipito-parietal regions. Pupil-linked arousal also coincided with widespread changes in the structure of the aperiodic component of cortical population activity, indicative of changes in the excitation-inhibition balance in underlying microcircuits. Our results provide a novel basis for studying the arousal modulation of cognitive computations in cortical circuits.


Subjects
Arousal/physiology, Brain/physiology, Cerebral Cortex/diagnostic imaging, Cerebral Cortex/physiology, Magnetoencephalography/methods, Neurons/physiology, Pupil/physiology, Adult, Brain/diagnostic imaging, Cognition, Female, Humans, Male
6.
Cereb Cortex ; 32(21): 4818-4833, 2022 10 20.
Article in English | MEDLINE | ID: mdl-35062025

ABSTRACT

The integration of visual and auditory cues is crucial for successful processing of speech, especially under adverse conditions. Recent reports have shown that when participants watch muted videos of speakers, the phonological information about the acoustic speech envelope, which is associated with but independent from the speakers' lip movements, is tracked by the visual cortex. However, the speech signal also carries richer acoustic details, for example, about the fundamental frequency and the resonant frequencies, whose visuophonological transformation could aid speech processing. Here, we investigated the neural basis of the visuo-phonological transformation processes of these more fine-grained acoustic details and assessed how they change as a function of age. We recorded whole-head magnetoencephalographic (MEG) data while the participants watched silent normal (i.e., natural) and reversed videos of a speaker and paid attention to their lip movements. We found that the visual cortex is able to track the unheard natural modulations of resonant frequencies (or formants) and the pitch (or fundamental frequency) linked to lip movements. Importantly, only the processing of natural unheard formants decreases significantly with age in the visual and also in the cingulate cortex. This is not the case for the processing of the unheard speech envelope, the fundamental frequency, or the purely visual information carried by lip movements. These results show that unheard spectral fine details (along with the unheard acoustic envelope) are transformed from a mere visual to a phonological representation. Aging affects especially the ability to derive spectral dynamics at formant frequencies. As listening in noisy environments should capitalize on the ability to track spectral fine details, our results provide a novel focus on compensatory processes in such challenging situations.


Subjects
Speech Perception, Humans, Acoustic Stimulation, Lip, Speech, Movement
7.
Eur J Neurosci ; 55(11-12): 3288-3302, 2022 06.
Article in English | MEDLINE | ID: mdl-32687616

ABSTRACT

Making sense of a poor auditory signal can pose a challenge. Previous attempts to quantify speech intelligibility in neural terms have usually focused on one of two measures, namely low-frequency speech-brain synchronization or alpha power modulations. However, reports have been mixed concerning the modulation of these measures, an issue aggravated by the fact that they have normally been studied separately. We present two MEG studies analyzing both measures. In study 1, participants listened to unimodal auditory speech with three different levels of degradation (original, 7-channel and 3-channel vocoding). Intelligibility declined with declining clarity, but speech was still intelligible to some extent even for the lowest clarity level (3-channel vocoding). Low-frequency (1-7 Hz) speech tracking suggested a U-shaped relationship with strongest effects for the medium-degraded speech (7-channel) in bilateral auditory and left frontal regions. To follow up on this finding, we implemented three additional vocoding levels (5-channel, 2-channel and 1-channel) in a second MEG study. Using this wider range of degradation, the speech-brain synchronization showed a similar pattern as in study 1, but further showed that when speech becomes unintelligible, synchronization declines again. The relationship differed for alpha power, which continued to decrease across vocoding levels reaching a floor effect for 5-channel vocoding. Predicting subjective intelligibility based on models either combining both measures or each measure alone showed superiority of the combined model. Our findings underline that speech tracking and alpha power are modified differently by the degree of degradation of continuous speech but together contribute to the subjective speech understanding.


Subjects
Speech Perception, Brain, Brain Mapping, Humans, Speech Intelligibility
8.
Cereb Cortex ; 31(5): 2505-2522, 2021 03 31.
Article in English | MEDLINE | ID: mdl-33338212

ABSTRACT

Congenital blindness has been shown to result in behavioral adaptation and neuronal reorganization, but the underlying neuronal mechanisms are largely unknown. Brain rhythms are characteristic for anatomically defined brain regions and provide a putative mechanistic link to cognitive processes. In a novel approach, using magnetoencephalography resting state data of congenitally blind and sighted humans, deprivation-related changes in spectral profiles were mapped to the cortex using clustering and classification procedures. Altered spectral profiles in visual areas suggest changes in visual alpha-gamma band inhibitory-excitatory circuits. Remarkably, spectral profiles were also altered in auditory and right frontal areas showing increased power in theta-to-beta frequency bands in blind compared with sighted individuals, possibly related to adaptive auditory and higher cognitive processing. Moreover, occipital alpha correlated with microstructural white matter properties extending bilaterally across posterior parts of the brain. We provide evidence that visual deprivation selectively modulates spectral profiles, possibly reflecting structural and functional adaptation.


Subjects
Auditory Pathways/physiopathology, Blindness/physiopathology, Frontal Lobe/physiopathology, Visual Pathways/physiopathology, Adult, Auditory Pathways/diagnostic imaging, Auditory Pathways/physiology, Blindness/diagnostic imaging, Diffusion Tensor Imaging, Female, Frontal Lobe/diagnostic imaging, Frontal Lobe/physiology, Humans, Magnetic Resonance Imaging, Magnetoencephalography, Male, Middle Aged, Neuronal Plasticity/physiology, Occipital Lobe/diagnostic imaging, Occipital Lobe/physiology, Occipital Lobe/physiopathology, Visual Pathways/diagnostic imaging, Visual Pathways/physiology, White Matter/diagnostic imaging, White Matter/physiology, White Matter/physiopathology, Young Adult
9.
Elife ; 9, 2020 08 21.
Article in English | MEDLINE | ID: mdl-32820722

ABSTRACT

The human cortex is characterized by local morphological features such as cortical thickness, myelin content, and gene expression that change along the posterior-anterior axis. We investigated if some of these structural gradients are associated with a similar gradient in a prominent feature of brain activity - namely the frequency of oscillations. In resting-state MEG recordings from healthy participants (N = 187) using mixed effect models, we found that the dominant peak frequency in a brain area decreases significantly along the posterior-anterior axis following the global hierarchy from early sensory to higher order areas. This spatial gradient of peak frequency was significantly anticorrelated with that of cortical thickness, representing a proxy of the cortical hierarchical level. This result indicates that the dominant frequency changes systematically and globally along the spatial and hierarchical gradients and establishes a new structure-function relationship pertaining to brain oscillations as a core organization that may underlie hierarchical specialization in the brain.
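The core measurement behind this gradient, the dominant peak frequency of a region's power spectrum, can be sketched in a few lines. The 10 Hz "posterior" and 7 Hz "anterior" sources below are invented toy signals chosen only to mimic the reported posterior-to-anterior decrease:

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(3)
fs = 250.0
t = np.arange(0, 120, 1 / fs)  # 2 min of simulated resting-state data

def simulate(osc_freq):
    # narrowband oscillation embedded in broadband noise
    return np.sin(2 * np.pi * osc_freq * t) + 0.5 * rng.standard_normal(t.size)

def peak_frequency(x, fmin=3.0, fmax=30.0):
    f, pxx = welch(x, fs=fs, nperseg=int(4 * fs))
    band = (f >= fmin) & (f <= fmax)
    return f[band][np.argmax(pxx[band])]

posterior = peak_frequency(simulate(10.0))  # "early sensory" toy source
anterior = peak_frequency(simulate(7.0))    # "higher order" toy source
print(posterior, anterior)  # peak frequency decreases posterior to anterior
```

The study's contribution is relating such per-area peak frequencies to cortical thickness with mixed-effects models across 187 participants; the sketch covers only the spectral-peak step.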


Subjects
Brain Waves/physiology, Cerebral Cortex/physiology, Adult, Female, Humans, Male, Young Adult
10.
Elife ; 9, 2020 08 24.
Article in English | MEDLINE | ID: mdl-32831168

ABSTRACT

Visual speech carried by lip movements is an integral part of communication. Yet, it remains unclear to what extent visual and acoustic speech comprehension are mediated by the same brain regions. Using multivariate classification of full-brain MEG data, we first probed where the brain represents acoustically and visually conveyed word identities. We then tested where these sensory-driven representations are predictive of participants' trial-wise comprehension. The comprehension-relevant representations of auditory and visual speech converged only in anterior angular and inferior frontal regions and were spatially dissociated from those representations that best reflected the sensory-driven word identity. These results provide a neural explanation for the behavioural dissociation of acoustic and visual speech comprehension and suggest that cerebral representations encoding word identities may be more modality-specific than often assumed.


Subjects
Brain/anatomy & histology, Brain/physiology, Phonetics, Speech, Acoustic Stimulation/methods, Adolescent, Adult, Brain Mapping, Female, Humans, Photic Stimulation, Reading, Speech Perception, Young Adult
11.
Neurosci Biobehav Rev ; 107: 136-142, 2019 12.
Article in English | MEDLINE | ID: mdl-31518638

ABSTRACT

In the motor cortex, beta oscillations (∼12-30 Hz) are generally considered a principal rhythm contributing to movement planning and execution. Beta oscillations cohabit and dynamically interact with slow delta oscillations (0.5-4 Hz), but the role of delta oscillations and the subordinate relationship between these rhythms in the perception-action loop remains unclear. Here, we review evidence that motor delta oscillations shape the dynamics of motor behaviors and sensorimotor processes, in particular during auditory perception. We describe the functional coupling between delta and beta oscillations in the motor cortex during spontaneous and planned motor acts. In an active sensing framework, perception is strongly shaped by motor activity, in particular in the delta band, which imposes temporal constraints on the sampling of sensory information. By encoding temporal contextual information, delta oscillations modulate auditory processing and impact behavioral outcomes. Finally, we consider the contribution of motor delta oscillations in the perceptual analysis of speech signals, providing a contextual temporal frame to optimize the parsing and processing of slow linguistic information.


Subjects
Auditory Perception/physiology, Delta Rhythm/physiology, Motor Cortex/physiology, Speech Perception/physiology, Acoustic Stimulation, Humans, Speech
12.
J Neurosci ; 39(16): 3119-3129, 2019 04 17.
Article in English | MEDLINE | ID: mdl-30770401

ABSTRACT

Two largely independent research lines use rhythmic sensory stimulation to study visual processing. Despite the use of strikingly similar experimental paradigms, they differ crucially in their notion of the stimulus-driven periodic brain responses: one regards them mostly as synchronized (entrained) intrinsic brain rhythms; the other assumes they are predominantly evoked responses [classically termed steady-state responses (SSRs)] that add to the ongoing brain activity. This conceptual difference can produce contradictory predictions about, and interpretations of, experimental outcomes. The effect of spatial attention on brain rhythms in the alpha band (8-13 Hz) is one such instance: alpha-range SSRs have typically been found to increase in power when participants focus their spatial attention on laterally presented stimuli, in line with a gain control of the visual evoked response. In nearly identical experiments, retinotopic decreases in entrained alpha-band power have been reported, in line with the inhibitory function of intrinsic alpha. Here we reconcile these contradictory findings by showing that they result from a small but far-reaching difference between two common approaches to EEG spectral decomposition. In a new analysis of previously published human EEG data, recorded during bilateral rhythmic visual stimulation, we find the typical SSR gain effect when emphasizing stimulus-locked neural activity and the typical retinotopic alpha suppression when focusing on ongoing rhythms. These opposite but parallel effects suggest that spatial attention may bias the neural processing of dynamic visual stimulation via two complementary neural mechanisms.

SIGNIFICANCE STATEMENT: Attending to a visual stimulus strengthens its representation in visual cortex and leads to a retinotopic suppression of spontaneous alpha rhythms. To further investigate this process, researchers often attempt to phase lock, or entrain, alpha through rhythmic visual stimulation under the assumption that this entrained alpha retains the characteristics of spontaneous alpha. Instead, we show that the part of the brain response that is phase locked to the visual stimulation increased with attention (as do steady-state evoked potentials), while the typical suppression was only present in non-stimulus-locked alpha activity. The opposite signs of these effects suggest that attentional modulation of dynamic visual stimulation relies on two parallel cortical mechanisms: retinotopic alpha suppression and increased temporal tracking.
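The analytic distinction at issue, stimulus-locked versus non-stimulus-locked activity, comes down to whether trials are averaged before or after computing spectral power. A toy sketch with simulated trials (all amplitudes, frequencies, and trial counts are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(5)
fs = 250.0
t = np.arange(0, 2, 1 / fs)  # 2 s trials
n_trials = 100

# Each trial: a 10 Hz response phase locked to the stimulus, plus 10 Hz
# ongoing alpha with a random phase per trial, plus broadband noise.
trials = np.array([
    0.5 * np.sin(2 * np.pi * 10 * t)                                # locked
    + 1.0 * np.sin(2 * np.pi * 10 * t + rng.uniform(0, 2 * np.pi))  # ongoing
    + 0.3 * rng.standard_normal(t.size)
    for _ in range(n_trials)
])

def power_10hz(x):
    spec = np.fft.rfft(x * np.hanning(x.size))
    f = np.fft.rfftfreq(x.size, 1 / fs)
    return np.abs(spec[np.argmin(np.abs(f - 10))]) ** 2

evoked = power_10hz(trials.mean(axis=0))            # phase-locked part only
total = np.mean([power_10hz(tr) for tr in trials])  # locked + non-locked
print(evoked < total)  # averaging first removes the non-phase-locked alpha
```

Emphasizing `evoked` (average first) isolates the SSR-like gain effect, whereas `total` retains the ongoing alpha whose suppression the intrinsic-rhythm view predicts, which is the small but far-reaching difference the abstract describes.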


Subjects
Alpha Rhythm/physiology, Attention/physiology, Brain/physiology, Cortical Synchronization/physiology, Evoked Potentials, Visual/physiology, Adult, Electroencephalography, Female, Humans, Male, Photic Stimulation, Space Perception/physiology, Visual Perception/physiology, Young Adult
13.
Behav Res Methods ; 51(3): 1258-1270, 2019 06.
Article in English | MEDLINE | ID: mdl-30206797

ABSTRACT

The Glasgow Norms are a set of normative ratings for 5,553 English words on nine psycholinguistic dimensions: arousal, valence, dominance, concreteness, imageability, familiarity, age of acquisition, semantic size, and gender association. The Glasgow Norms are unique in several respects. First, the corpus itself is relatively large, while simultaneously providing norms across a substantial number of lexical dimensions. Second, for any given subset of words, the same participants provided ratings across all nine dimensions (33 participants/word, on average). Third, two novel dimensions-semantic size and gender association-are included. Finally, the corpus contains a set of 379 ambiguous words that are presented either alone (e.g., toast) or with information that selects an alternative sense (e.g., toast (bread), toast (speech)). The relationships between the dimensions of the Glasgow Norms were initially investigated by assessing their correlations. In addition, a principal component analysis revealed four main factors, accounting for 82% of the variance (Visualization, Emotion, Salience, and Exposure). The validity of the Glasgow Norms was established via comparisons of our ratings to 18 different sets of current psycholinguistic norms. The dimension of size was tested with megastudy data, confirming findings from past studies that have explicitly examined this variable. Alternative senses of ambiguous words (i.e., disambiguated forms), when discordant on a given dimension, seemingly led to appropriately distinct ratings. Informal comparisons between the ratings of ambiguous words and of their alternative senses showed different patterns that likely depended on several factors (the number of senses, their relative strengths, and the rating scales themselves). Overall, the Glasgow Norms provide a valuable resource-in particular, for researchers investigating the role of word recognition in language comprehension.
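The principal component step reported above can be sketched with toy ratings; the latent "Emotion" and "Visualization" factors, the four observed scales, and the loadings below are invented stand-ins for the real nine dimensions:

```python
import numpy as np

rng = np.random.default_rng(4)
n_words = 1000

# Toy ratings: two latent factors generating four observed, correlated
# dimensions, loosely mimicking how norm scales cluster.
emotion = rng.standard_normal(n_words)
visual = rng.standard_normal(n_words)
ratings = np.column_stack([
    emotion + 0.3 * rng.standard_normal(n_words),  # valence-like scale
    emotion + 0.3 * rng.standard_normal(n_words),  # arousal-like scale
    visual + 0.3 * rng.standard_normal(n_words),   # imageability-like scale
    visual + 0.3 * rng.standard_normal(n_words),   # concreteness-like scale
])

# PCA via eigendecomposition of the covariance matrix
centered = ratings - ratings.mean(axis=0)
cov = np.cov(centered, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]
explained = eigvals / eigvals.sum()
print(round(explained[:2].sum(), 2))  # the two factors capture most variance
```

In the actual norms, four components (Visualization, Emotion, Salience, Exposure) account for 82% of the variance across nine dimensions; the sketch only shows the mechanics on a two-factor toy.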


Subjects
Semantics, Adolescent, Adult, Arousal, Emotions, Female, Humans, Male, Psycholinguistics, Recognition, Psychology, Speech, Young Adult
14.
PLoS Biol ; 16(3): e2004473, 2018 03.
Article in English | MEDLINE | ID: mdl-29529019

ABSTRACT

During online speech processing, our brain tracks the acoustic fluctuations in speech at different timescales. Previous research has focused on generic timescales (for example, delta or theta bands) that are assumed to map onto linguistic features such as prosody or syllables. However, given the high intersubject variability in speaking patterns, such a generic association between the timescales of brain activity and speech properties can be ambiguous. Here, we analyse speech tracking in source-localised magnetoencephalographic data by directly focusing on timescales extracted from statistical regularities in our speech material. This revealed widespread significant tracking at the timescales of phrases (0.6-1.3 Hz), words (1.8-3 Hz), syllables (2.8-4.8 Hz), and phonemes (8-12.4 Hz). Importantly, when examining its perceptual relevance, we found stronger tracking for correctly comprehended trials in the left premotor (PM) cortex at the phrasal scale as well as in left middle temporal cortex at the word scale. Control analyses using generic bands confirmed that these effects were specific to the speech regularities in our stimuli. Furthermore, we found that the phase at the phrasal timescale coupled to power at beta frequency (13-30 Hz) in motor areas. This cross-frequency coupling presumably reflects top-down temporal prediction in ongoing speech perception. Together, our results reveal specific functional and perceptually relevant roles of distinct tracking and cross-frequency processes along the auditory-motor pathway.


Subjects
Auditory Cortex/physiology, Motor Cortex/physiology, Speech Perception, Speech, Acoustic Stimulation, Adolescent, Adult, Brain Mapping, Female, Humans, Magnetoencephalography, Male
16.
J Exp Psychol Learn Mem Cogn ; 44(7): 1064-1074, 2018 Jul.
Article in English | MEDLINE | ID: mdl-29431458

ABSTRACT

Emotion (positive and negative) words are typically recognized faster than neutral words. Recent research suggests that emotional valence, while often treated as a unitary semantic property, may be differentially represented in concrete and abstract words. Studies that have explicitly examined the interaction of emotion and concreteness, however, have demonstrated inconsistent patterns of results. Moreover, these findings may be limited as certain key lexical variables (e.g., familiarity, age of acquisition) were not taken into account. We investigated the emotion-concreteness interaction in a large-scale, highly controlled lexical decision experiment. A 3 (Emotion: negative, neutral, positive) × 2 (Concreteness: abstract, concrete) design was used, with 45 items per condition and 127 participants. We found a significant interaction between emotion and concreteness. Although positive and negative valenced words were recognized faster than neutral words, this emotion advantage was significantly larger in concrete than in abstract words. We explored potential contributions of participant alexithymia level and item imageability to this interactive pattern. We found that only word imageability significantly modulated the emotion-concreteness interaction. While both concrete and abstract emotion words are advantageously processed relative to comparable neutral words, the mechanisms of this facilitation are paradoxically more dependent on imageability in abstract words.


Subjects
Emotions, Imagination, Psycholinguistics, Adolescent, Adult, Affective Symptoms/psychology, Decision Making, Female, Humans, Male, Young Adult
17.
Neuroimage ; 147: 32-42, 2017 02 15.
Article in English | MEDLINE | ID: mdl-27903440

ABSTRACT

The timing of slow auditory cortical activity aligns to the rhythmic fluctuations in speech. This entrainment is considered to be a marker of the prosodic and syllabic encoding of speech, and has been shown to correlate with intelligibility. Yet, whether and how auditory cortical entrainment is influenced by the activity in other speech-relevant areas remains unknown. Using source-localized MEG data, we quantified the dependency of auditory entrainment on the state of oscillatory activity in fronto-parietal regions. We found that delta band entrainment interacted with the oscillatory activity in three distinct networks. First, entrainment in the left anterior superior temporal gyrus (STG) was modulated by beta power in orbitofrontal areas, possibly reflecting predictive top-down modulations of auditory encoding. Second, entrainment in the left Heschl's Gyrus and anterior STG was dependent on alpha power in central areas, in line with the importance of motor structures for phonological analysis. And third, entrainment in the right posterior STG modulated theta power in parietal areas, consistent with the engagement of semantic memory. These results illustrate the topographical network interactions of auditory delta entrainment and reveal distinct cross-frequency mechanisms by which entrainment can interact with different cognitive processes underlying speech perception.


Subjects
Auditory Cortex/physiology, Delta Rhythm/physiology, Frontal Lobe/physiology, Magnetoencephalography, Parietal Lobe/physiology, Acoustic Stimulation, Adult, Alpha Rhythm/physiology, Beta Rhythm/physiology, Female, Humans, Male, Nerve Net/physiology, Speech Perception/physiology, Temporal Lobe/physiology, Theta Rhythm/physiology, Young Adult
18.
PLoS Biol ; 14(6): e1002498, 2016 06.
Article in English | MEDLINE | ID: mdl-27355236

ABSTRACT

The human brain can be parcellated into diverse anatomical areas. We investigated whether rhythmic brain activity in these areas is characteristic and can be used for automatic classification. To this end, resting-state MEG data of 22 healthy adults was analysed. Power spectra of 1-s long data segments for atlas-defined brain areas were clustered into spectral profiles ("fingerprints"), using k-means and Gaussian mixture (GM) modelling. We demonstrate that individual areas can be identified from these spectral profiles with high accuracy. Our results suggest that each brain area engages in different spectral modes that are characteristic for individual areas. Clustering of brain areas according to similarity of spectral profiles reveals well-known brain networks. Furthermore, we demonstrate task-specific modulations of auditory spectral profiles during auditory processing. These findings have important implications for the classification of regional spectral activity and allow for novel approaches in neuroimaging and neurostimulation in health and disease.
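The fingerprinting idea, clustering short-segment power spectra so that areas become identifiable from their spectral profiles alone, can be sketched with toy spectra. The two simulated "areas" (an alpha-peaked and a beta-peaked profile) and all parameters below are assumptions for illustration:

```python
import numpy as np
from scipy.cluster.vq import kmeans2

rng = np.random.default_rng(6)
freqs = np.linspace(1, 45, 45)

def toy_spectrum(peak, n_segments=50):
    # 1/f background plus an area-specific spectral peak, with noise
    base = 1.0 / freqs
    bump = np.exp(-0.5 * ((freqs - peak) / 2.0) ** 2)
    s = base + bump + 0.05 * rng.standard_normal((n_segments, freqs.size))
    return s / s.sum(axis=1, keepdims=True)  # normalize each segment

area_a = toy_spectrum(10.0)  # "occipital-like": alpha peak
area_b = toy_spectrum(20.0)  # "sensorimotor-like": beta peak
X = np.vstack([area_a, area_b])

centroids, labels = kmeans2(X, k=2, seed=0, minit='++')

# Segments from the same simulated area should share a cluster label
labels_a, labels_b = labels[:50], labels[50:]
purity = (max((labels_a == 0).mean(), (labels_a == 1).mean())
          + max((labels_b == 0).mean(), (labels_b == 1).mean())) / 2
print(purity)  # near-perfect separation of the two spectral profiles
```

The study additionally uses Gaussian mixture models and atlas-defined areas; the k-means step above is only the simplest version of the clustering logic.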


Subjects
Brain Mapping/methods, Brain/physiology, Electroencephalography/methods, Magnetoencephalography/methods, Nerve Net/physiology, Acoustic Stimulation, Adult, Auditory Cortex/anatomy & histology, Auditory Cortex/physiology, Brain/anatomy & histology, Evoked Potentials, Auditory/physiology, Female, Humans, Male, Models, Anatomic, Models, Neurologic, Nerve Net/anatomy & histology, Young Adult
19.
Front Psychol ; 6: 327, 2015.
Article in English | MEDLINE | ID: mdl-25859233

ABSTRACT

The development of action and perception, and their relation in infancy is a central research area in socio-cognitive sciences. In this Perspective Article, we focus on the developmental variability and continuity of action and perception. At group level, these skills have been shown to consistently improve with age. We would like to raise awareness for the issue that, at individual level, development might be subject to more variable changes. We present data from a longitudinal study on the perception and production of contralateral reaching skills of infants aged 7, 8, 9, and 12 months. Our findings suggest that individual development does not increase linearly for action or for perception, but instead changes dynamically. These non-continuous changes substantially affect the relation between action and perception at each measuring point and the respective direction of causality. This suggests that research on the development of action and perception and their interrelations needs to take into account individual variability and continuity more progressively.

20.
Front Psychol ; 6: 108, 2015.
Article in English | MEDLINE | ID: mdl-25713548

ABSTRACT

The anticipation of a speaker's next turn is a key element of successful conversation. This can be achieved using a multitude of cues. In natural conversation, the most important cue for adults to anticipate the end of a turn (and therefore the beginning of the next turn) is the semantic and syntactic content. In addition, prosodic cues, such as intonation, or visual signals that occur before a speaker starts speaking (e.g., opening the mouth) help to identify the beginning and the end of a speaker's turn. Early in life, prosodic cues seem to be more important than in adulthood. For example, it was previously shown that 3-year-old children anticipated more turns in observed conversations when intonation was available compared with when not, and this beneficial effect was present neither in younger children nor in adults (Keitel et al., 2013). In the present study, we investigated this effect in greater detail. Videos of conversations between puppets with either normal or flattened intonation were presented to children (1-year-olds and 3-year-olds) and adults. The use of puppets allowed the control of visual signals: the verbal signals (speech) started exactly at the same time as the visual signals (mouth opening). With respect to the children, our findings replicate the results of the previous study: 3-year-olds anticipated more turns with normal intonation than with flattened intonation, whereas 1-year-olds did not show this effect. In contrast to our previous findings, the adults showed the same intonation effect as the 3-year-olds. This suggests that adults' cue use varies depending on the characteristics of a conversation. Our results further support the notion that the cues used to anticipate conversational turns differ in development.
