Results 1 - 20 of 31
1.
Dev Sci ; 27(1): e13428, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37381667

ABSTRACT

The prevalent "core phonological deficit" model of dyslexia proposes that the reading and spelling difficulties characterizing affected children stem from prior developmental difficulties in processing speech sound structure, for example, perceiving and identifying syllable stress patterns, syllables, rhymes and phonemes. Yet spoken word production appears normal, suggesting an unexpected disconnect between speech input and speech output processes. Here we investigated the output side of this disconnect from a speech rhythm perspective by measuring the speech amplitude envelope (AE) of multisyllabic spoken phrases. The speech AE contains crucial information regarding stress patterns, speech rate, tonal contrasts and intonation. We created a novel computerized speech copying task in which participants copied aloud familiar spoken targets like "Aladdin." Seventy-five children with and without dyslexia were tested, some of whom were also receiving an oral intervention designed to enhance multisyllabic processing. Similarity of each child's production to the target AE was computed using correlation and mutual information metrics. Similarity of pitch contour, another acoustic cue to speech rhythm, was used for control analyses. Children with dyslexia were significantly worse at producing the multisyllabic targets as indexed by both similarity metrics computed on the AE, but they did not differ from control children in producing pitch contours. Accordingly, the spoken production of multisyllabic phrases by children with dyslexia is atypical with respect to the AE. Children with dyslexia may not appear to listeners to have speech production difficulties because their pitch contours are intact.

RESEARCH HIGHLIGHTS: Speech production of syllable stress patterns is atypical in children with dyslexia. Children with dyslexia are significantly worse at producing the amplitude envelope of multisyllabic targets compared to both age-matched and reading-level-matched control children. No group differences were found for pitch contour production between children with dyslexia and age-matched control children. It may be difficult to detect speech output problems in dyslexia because pitch contours are relatively accurate.
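
As a rough illustration of the two similarity metrics named above (correlation and mutual information over amplitude envelopes), here is a minimal Python sketch. The Hilbert-magnitude envelope, the common resampled length, and the binning used for the mutual-information estimate are illustrative choices rather than the authors' exact pipeline, and the signals are synthetic stand-ins for a target phrase and a child's copy.

```python
import numpy as np
from scipy.signal import hilbert, resample
from sklearn.metrics import mutual_info_score

def amplitude_envelope(x, n_out=500):
    """Hilbert-magnitude envelope, downsampled to a common length."""
    return resample(np.abs(hilbert(x)), n_out)

def envelope_similarity(target, production, n_bins=16):
    """Correlation and mutual information between two amplitude envelopes."""
    a, b = amplitude_envelope(target), amplitude_envelope(production)
    r = np.corrcoef(a, b)[0, 1]
    # Discretize both envelopes to estimate mutual information.
    a_bins = np.digitize(a, np.histogram_bin_edges(a, n_bins))
    b_bins = np.digitize(b, np.histogram_bin_edges(b, n_bins))
    return r, mutual_info_score(a_bins, b_bins)

# Synthetic stand-ins for a spoken target and a child's copy of it.
rng = np.random.default_rng(0)
target = rng.standard_normal(16000) * np.sin(np.linspace(0, 3 * np.pi, 16000)) ** 2
copy_ = target + 0.5 * rng.standard_normal(16000)
print(envelope_similarity(target, copy_))
```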


Subjects
Dyslexia; Speech Perception; Child; Humans; Speech; Reading; Phonetics
2.
J Cogn Neurosci ; 35(11): 1741-1759, 2023 11 01.
Article in English | MEDLINE | ID: mdl-37677057

ABSTRACT

In face-to-face conversations, listeners gather visual speech information from a speaker's talking face that enhances their perception of the incoming auditory speech signal. This auditory-visual (AV) speech benefit is evident even in quiet environments but is stronger in situations that require greater listening effort, such as when the speech signal itself deviates from listeners' expectations. One example is infant-directed speech (IDS) presented to adults. IDS has exaggerated acoustic properties that are easily discriminable from adult-directed speech (ADS). Although IDS is a speech register that adults typically use with infants, no previous neurophysiological study has directly examined whether adult listeners process IDS differently from ADS. To address this, the current study simultaneously recorded EEG and eye-tracking data from adult participants as they were presented with auditory-only (AO), visual-only (VO), and AV recordings of IDS and ADS. Eye-tracking data were recorded because looking at the speaker's eyes and mouth modulates the extent of the AV speech benefit experienced. Analyses of cortical tracking accuracy revealed that cortical tracking of the speech envelope was significant in the AO and AV modalities for both IDS and ADS. However, the AV speech benefit [i.e., AV > (A + V)] was only present for IDS trials. Gaze behavior analyses indicated differences in looking behavior during IDS and ADS trials. Surprisingly, looking behavior toward the speaker's eyes and mouth was not correlated with cortical tracking accuracy. Additional exploratory analyses indicated that attention to the whole display was negatively correlated with cortical tracking accuracy of AO and VO trials in IDS. Our results underscore the nuances involved in the relationship between neurophysiological AV speech benefit and looking behavior.
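
The additive criterion for the AV speech benefit, AV > (A + V), reduces to a paired comparison once per-participant tracking accuracies are in hand. A minimal sketch, assuming one correlation value per participant and condition; the arrays are hypothetical:

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical per-participant cortical tracking accuracies (Pearson r).
rng = np.random.default_rng(1)
r_av = rng.uniform(0.05, 0.20, 24)  # audiovisual
r_a = rng.uniform(0.03, 0.12, 24)   # auditory-only
r_v = rng.uniform(0.00, 0.05, 24)   # visual-only

# AV speech benefit: AV tracking exceeding the sum of unisensory tracking.
benefit = r_av - (r_a + r_v)
stat, p = wilcoxon(benefit)         # paired, non-parametric test against zero
print(f"median benefit = {np.median(benefit):.3f}, p = {p:.3f}")
```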


Subjects
Speech Perception; Speech; Humans; Adult; Infant; Speech/physiology; Speech Perception/physiology; Acoustic Stimulation/methods; Communication
3.
J Neurosci ; 41(35): 7449-7460, 2021 09 01.
Article in English | MEDLINE | ID: mdl-34341154

ABSTRACT

During music listening, humans routinely acquire the regularities of the acoustic sequences and use them to anticipate and interpret the ongoing melody. Specifically, in line with this predictive framework, it is thought that brain responses during such listening reflect a comparison between the bottom-up sensory responses and top-down prediction signals generated by an internal model that embodies the music exposure and expectations of the listener. To attain a clear view of these predictive responses, previous work has eliminated the sensory inputs by inserting artificial silences (or sound omissions) that leave behind only the corresponding predictions of the thwarted expectations. Here, we demonstrate an alternative approach in which we decode the predictive electroencephalography (EEG) responses to the silent intervals that are naturally interspersed within the music. We did this as participants (experiment 1, 20 participants, 10 female; experiment 2, 21 participants, 6 female) listened to or imagined Bach piano melodies. Prediction signals were quantified and assessed via a computational model of the melodic structure of the music and were shown to exhibit the same response characteristics when measured during listening or imagining. These include an inverted polarity for both silence and imagined responses relative to listening, as well as response magnitude modulations that precisely reflect the expectations of notes and silences in both listening and imagery conditions. These findings therefore provide a unifying view that links results from many previous paradigms, including omission reactions and the expectation modulation of sensory responses, all in the context of naturalistic music listening.

SIGNIFICANCE STATEMENT: Music perception depends on our ability to learn and detect melodic structures. It has been suggested that our brain does so by actively predicting upcoming music notes, a process that induces instantaneous neural responses as the music confronts these expectations. Here, we studied this prediction process using EEG recorded while participants listened to and imagined Bach melodies. Specifically, we examined neural signals during the ubiquitous musical pauses (or silent intervals) in a music stream and analyzed them in contrast to the imagery responses. We find that imagined predictive responses are routinely co-opted during ongoing music listening. These conclusions are revealed by a new paradigm using listening and imagery of naturalistic melodies.
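
A computational model of melodic structure can be illustrated, in a highly simplified form, by a first-order Markov model that assigns each note a surprisal value; the study's actual model was more elaborate, so treat this as a toy stand-in with hypothetical MIDI pitches:

```python
import numpy as np
from collections import Counter, defaultdict

def markov_surprisal(sequence):
    """Surprisal (-log2 p) of each note given the previous one, add-one smoothed."""
    alphabet = sorted(set(sequence))
    counts = defaultdict(Counter)
    for prev, nxt in zip(sequence, sequence[1:]):
        counts[prev][nxt] += 1
    surprisal = []
    for prev, nxt in zip(sequence, sequence[1:]):
        total = sum(counts[prev].values()) + len(alphabet)
        p = (counts[prev][nxt] + 1) / total
        surprisal.append(-np.log2(p))
    return np.array(surprisal)

# Hypothetical MIDI pitches standing in for a melody.
melody = [60, 62, 64, 65, 64, 62, 60, 62, 64, 64, 65, 64, 62, 60]
print(markov_surprisal(melody).round(2))
```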


Subjects
Auditory Perception/physiology; Brain Mapping; Cerebral Cortex/physiology; Imagination/physiology; Motivation/physiology; Music/psychology; Acoustic Stimulation; Adult; Electroencephalography; Evoked Potentials/physiology; Evoked Potentials, Auditory/physiology; Female; Humans; Learning/physiology; Male; Markov Chains; Occupations; Young Adult
4.
J Neurosci ; 41(35): 7435-7448, 2021 09 01.
Article in English | MEDLINE | ID: mdl-34341155

ABSTRACT

Musical imagery is the voluntary internal hearing of music in the mind without the need for physical action or external stimulation. Numerous studies have already revealed brain areas activated during imagery. However, it remains unclear to what extent imagined music responses preserve the detailed temporal dynamics of the acoustic stimulus envelope and, crucially, whether melodic expectations play any role in modulating responses to imagined music, as they prominently do during listening. These modulations are important as they reflect aspects of the human musical experience, such as its acquisition, engagement, and enjoyment. This study explored the nature of these modulations in imagined music based on EEG recordings from 21 professional musicians (6 females and 15 males). Regression analyses showed that imagined neural signals can be predicted accurately, similarly to the listening task, and were sufficiently robust to allow for accurate identification of the imagined musical piece from the EEG. Our results indicate that imagery and listening tasks elicited an overlapping but distinctive topography of neural responses to sound acoustics, which is in line with previous fMRI literature. Melodic expectation, however, evoked very similar frontal spatial activation in both conditions, suggesting that listening and imagery are supported by the same underlying mechanisms. Finally, neural responses induced by imagery exhibited a specific transformation relative to the listening condition, consisting primarily of a relative delay and a polarity inversion of the response. This transformation demonstrates the top-down predictive nature of the expectation mechanisms arising during both listening and imagery.

SIGNIFICANCE STATEMENT: It is well known that the human brain is activated during musical imagery: the act of voluntarily hearing music in our mind without external stimulation. It is unclear, however, what the temporal dynamics of this activation are, as well as which musical features are precisely encoded in the neural signals. This study uses an experimental paradigm with high temporal precision to record and analyze the cortical activity during musical imagery, revealing that neural signals encode music acoustics and melodic expectations during both listening and imagery. Crucially, we also find that a simple mapping based on a time shift and a polarity inversion robustly describes the relationship between listening and imagery signals.
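
The reported listening-to-imagery transformation, a relative delay plus a polarity inversion, can be estimated by scanning lags and signs for the best correlation between the two response time courses. A sketch with synthetic signals; the lag range and noise level are arbitrary:

```python
import numpy as np

def best_shift_and_sign(listening, imagery, max_lag=50):
    """Find the lag and polarity that best map listening onto imagery responses."""
    best = (0, 1, -np.inf)
    for lag in range(-max_lag, max_lag + 1):
        r = np.corrcoef(np.roll(listening, lag), imagery)[0, 1]
        for sign in (1, -1):
            if sign * r > best[2]:
                best = (lag, sign, sign * r)
    return best  # (delay in samples, polarity, correlation)

# Synthetic demo: imagery = inverted listening response delayed by 20 samples.
rng = np.random.default_rng(2)
listening = rng.standard_normal(1000)
imagery = -np.roll(listening, 20) + 0.3 * rng.standard_normal(1000)
print(best_shift_and_sign(listening, imagery))  # expect roughly (20, -1, ~0.95)
```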


Subjects
Auditory Cortex/physiology; Brain Mapping; Frontal Lobe/physiology; Imagination/physiology; Motivation/physiology; Music/psychology; Acoustic Stimulation; Adult; Electroencephalography; Electromyography; Evoked Potentials/physiology; Evoked Potentials, Auditory/physiology; Female; Humans; Male; Markov Chains; Occupations; Symbolism; Young Adult
5.
Neuroimage ; 256: 119217, 2022 08 01.
Article in English | MEDLINE | ID: mdl-35436614

ABSTRACT

An auditory-visual speech benefit, the benefit that visual speech cues bring to auditory speech perception, emerges early in infancy and is experienced to an increasing degree with age. While there is both behavioural and neurophysiological evidence for children and adults, only behavioural evidence exists for infants, as no neurophysiological study has comprehensively examined the auditory-visual speech benefit in this age group. It is also surprising that most studies on the auditory-visual speech benefit do not concurrently report looking behaviour, especially since the benefit rests on the assumption that listeners attend to a speaker's talking face and since there are meaningful individual differences in looking behaviour. To address these gaps, we simultaneously recorded electroencephalographic (EEG) and eye-tracking data from 5-month-olds, 4-year-olds and adults as they were presented with a speaker in auditory-only (AO), visual-only (VO), and auditory-visual (AV) modes. Cortical tracking analyses that involved forward encoding models of the speech envelope revealed an auditory-visual speech benefit [i.e., AV > (A + V)] in 5-month-olds and adults but not in 4-year-olds. Examination of cortical tracking accuracy in relation to looking behaviour showed that infants' relative attention to the speaker's mouth (vs. eyes) was positively correlated with cortical tracking accuracy of VO speech, whereas adults' attention to the display overall was negatively correlated with cortical tracking accuracy of VO speech. This study provides the first neurophysiological evidence of an auditory-visual speech benefit in infants, and our results suggest ways in which current models of speech processing can be fine-tuned.
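
A forward encoding model of the speech envelope, of the general kind used in the cortical tracking analyses described here, can be sketched as time-lagged ridge regression. The lag range, regularization value, and sampling rate are illustrative, and the circular shifts are a shortcut (real pipelines zero-pad the edges):

```python
import numpy as np

def lag_matrix(stim, lags):
    """Design matrix with one column per time lag of the stimulus envelope."""
    return np.column_stack([np.roll(stim, lag) for lag in lags])

def fit_trf(stim, eeg, lags, lam=1e2):
    """Ridge-regularized forward model (TRF): predict EEG from the envelope."""
    X = lag_matrix(stim, lags)
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ eeg)

# Synthetic single-channel demo at a hypothetical 64 Hz sampling rate.
rng = np.random.default_rng(3)
envelope = np.abs(rng.standard_normal(6400))
lags = range(32)                       # 0-500 ms at 64 Hz
true_w = np.exp(-np.arange(32) / 8.0)  # a decaying "neural response"
eeg = lag_matrix(envelope, lags) @ true_w + rng.standard_normal(6400)
w = fit_trf(envelope, eeg, lags)
pred = lag_matrix(envelope, lags) @ w
print(f"tracking accuracy r = {np.corrcoef(pred, eeg)[0, 1]:.2f}")
```

Note that computing r on the training data, as above, inflates the score; held-out evaluation (sketched under entry 20 below) is what studies actually report.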


Subjects
Speech Perception; Speech; Adult; Auditory Perception/physiology; Child; Child, Preschool; Humans; Infant; Mouth; Speech Perception/physiology; Visual Perception/physiology
6.
Neuroimage ; 247: 118698, 2022 02 15.
Article in English | MEDLINE | ID: mdl-34798233

ABSTRACT

The amplitude envelope of speech carries crucial low-frequency acoustic information that assists linguistic decoding at multiple time scales. Neurophysiological signals are known to track the amplitude envelope of adult-directed speech (ADS), particularly in the theta band. Acoustic analysis of infant-directed speech (IDS) has revealed significantly greater modulation energy than in ADS in an amplitude-modulation (AM) band centred on ∼2 Hz. Accordingly, cortical tracking of IDS by delta-band neural signals may be key to language acquisition. Speech also contains acoustic information within its higher-frequency bands (beta, gamma). Adult EEG and MEG studies reveal an oscillatory hierarchy, whereby low-frequency (delta, theta) neural phase dynamics temporally organize the amplitude of high-frequency signals (phase-amplitude coupling, PAC). Whilst consensus is growing around the role of PAC in the matured adult brain, its role in the development of speech processing is unexplored. Here, we examined the presence and maturation of low-frequency (<12 Hz) cortical speech tracking in infants by recording EEG longitudinally from 60 participants at 4, 7 and 11 months of age as they listened to nursery rhymes. After establishing stimulus-related neural signals in delta and theta, cortical tracking at each age was assessed in the delta, theta and alpha (control) bands using a multivariate temporal response function (mTRF) method. Delta-beta, delta-gamma, theta-beta and theta-gamma PAC were also assessed. Significant delta and theta but not alpha tracking was found. Significant PAC was present at all ages, with both delta- and theta-driven coupling observed.
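
PAC between a low-frequency phase and a higher-frequency amplitude is commonly quantified with a mean-vector-length modulation index; whether that matches the exact estimator used here is an assumption. A sketch with illustrative band edges and a synthetic signal:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def bandpass(x, lo, hi, fs, order=4):
    sos = butter(order, [lo, hi], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

def pac_mvl(x, fs, phase_band=(1, 4), amp_band=(30, 45)):
    """Mean-vector-length PAC: low-frequency phase modulating high-frequency amplitude."""
    phase = np.angle(hilbert(bandpass(x, *phase_band, fs)))
    amp = np.abs(hilbert(bandpass(x, *amp_band, fs)))
    return np.abs(np.mean(amp * np.exp(1j * phase)))

# Synthetic EEG with gamma bursts locked to delta phase.
fs = 250
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(4)
delta = np.sin(2 * np.pi * 2 * t)
gamma = (1 + delta) * np.sin(2 * np.pi * 38 * t)  # amplitude follows delta phase
eeg = delta + 0.5 * gamma + 0.5 * rng.standard_normal(t.size)
print(f"delta-gamma MVL = {pac_mvl(eeg, fs):.3f}")
```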


Subjects
Delta Rhythm/physiology; Speech Perception/physiology; Theta Rhythm/physiology; Acoustic Stimulation; Auditory Cortex/physiology; Brain/physiology; Electroencephalography; Humans; Infant; Longitudinal Studies; United Kingdom
7.
Neuroimage ; 196: 237-247, 2019 08 01.
Article in English | MEDLINE | ID: mdl-30991126

ABSTRACT

Humans comprehend speech despite challenges such as mispronunciations and noisy environments. Our auditory system is robust to these challenges thanks to the integration of the sensory input with prior knowledge and expectations built on language-specific regularities. One such regularity concerns the permissible phoneme sequences, which determine the likelihood that a word belongs to a given language (phonotactic probability; "blick" is more likely to be an English word than "bnick"). Previous research demonstrated that violations of these rules modulate brain-evoked responses. However, several fundamental questions remain unresolved, especially regarding the neural encoding and integration strategy of phonotactics in naturalistic conditions, when there are no (or few) violations. Here, we used linear modelling to assess the influence of phonotactic probabilities on the brain responses to narrative speech measured with non-invasive EEG. We found that the relationship between continuous speech and EEG responses is best described when the stimulus descriptor includes phonotactic probabilities. This indicates that low-frequency cortical signals (<9 Hz) reflect the integration of phonotactic information during natural speech perception, providing us with a measure of phonotactic processing at the individual-subject level. Furthermore, phonotactics-related signals showed the strongest speech-EEG interactions at latencies of 100-500 ms, supporting a pre-lexical role of phonotactic information.
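
Phonotactic probability can be illustrated with a toy phoneme-bigram model under which "blick" scores higher than "bnick"; the log-probabilities below are invented for the example, whereas the study's regressors came from a corpus-trained model:

```python
# Hypothetical phoneme-bigram log-probabilities; a real model would be
# estimated from a pronunciation corpus.
log_p = {("b", "l"): -2.0, ("l", "ih"): -2.5, ("ih", "k"): -2.2,
         ("b", "n"): -9.0, ("n", "ih"): -3.0}

def phonotactic_logprob(phonemes, floor=-10.0):
    """Sum of bigram log-probabilities; unseen bigrams get a floor value."""
    return sum(log_p.get(bg, floor) for bg in zip(phonemes, phonemes[1:]))

print("blick:", phonotactic_logprob(["b", "l", "ih", "k"]))  # more English-like
print("bnick:", phonotactic_logprob(["b", "n", "ih", "k"]))  # less English-like
```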


Subjects
Cerebral Cortex/physiology; Phonetics; Speech Perception/physiology; Acoustic Stimulation; Adult; Evoked Potentials, Auditory; Female; Humans; Male; Young Adult
8.
Neuroimage ; 186: 728-740, 2019 02 01.
Article in English | MEDLINE | ID: mdl-30496819

ABSTRACT

Brain data recorded with electroencephalography (EEG), magnetoencephalography (MEG) and related techniques often have poor signal-to-noise ratios due to the presence of multiple competing sources and artifacts. A common remedy is to average responses over repeats of the same stimulus, but this is not applicable to temporally extended stimuli that are presented only once (speech, music, movies, natural sound). An alternative is to average responses over multiple subjects who were presented with identical stimuli, but differences in the geometry of brain sources and sensors reduce the effectiveness of this solution. Multiway canonical correlation analysis (MCCA) addresses this problem by allowing data from multiple subjects to be fused in such a way as to extract components common to all. This paper reviews the method, offers application examples that illustrate its effectiveness, and outlines the caveats and risks entailed by the method.
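
One common MCCA recipe, which may differ in detail from the paper's exact formulation, is per-subject whitening by PCA followed by a second PCA on the concatenated data, whose leading components capture what is shared across subjects. A sketch with synthetic subjects and illustrative dimensionalities:

```python
import numpy as np

def mcca(datasets, n_keep=10):
    """Multiway CCA sketch: per-subject whitening + PCA on the concatenation.

    datasets: list of (time x channels) arrays, one per subject,
              time-aligned to the same stimulus.
    """
    whitened = []
    for X in datasets:
        X = X - X.mean(0)
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        whitened.append(U[:, :n_keep])  # unit-variance subject subspace
    Y = np.hstack(whitened)             # time x (subjects * n_keep)
    U, s, Vt = np.linalg.svd(Y - Y.mean(0), full_matrices=False)
    return U, s                         # shared components and their strengths

# Three synthetic "subjects" sharing one common source.
rng = np.random.default_rng(5)
common = rng.standard_normal((1000, 1))
subjects = [common @ rng.standard_normal((1, 32)) + rng.standard_normal((1000, 32))
            for _ in range(3)]
U, s = mcca(subjects)
print("component strengths:", (s[:4] ** 2).round(1))  # first should dominate
```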


Subjects
Brain/physiology; Data Interpretation, Statistical; Electroencephalography/methods; Magnetoencephalography/methods; Models, Theoretical; Adult; Humans
9.
Neuroimage ; 166: 247-258, 2018 02 01.
Article in English | MEDLINE | ID: mdl-29102808

ABSTRACT

Speech perception may be underpinned by a hierarchical cortical system, which attempts to match "external" incoming sensory inputs with "internal" top-down predictions. Prior knowledge modulates internal predictions of an upcoming stimulus and exerts its effects in temporal and inferior frontal cortex. Here, we used source-space magnetoencephalography (MEG) to study the spatiotemporal dynamics underpinning the integration of prior knowledge in the speech processing network. Prior knowledge was manipulated to i) increase the perceived intelligibility of speech sentences, and ii) dissociate the perceptual effects of changes in speech intelligibility from acoustical differences in speech stimuli. Cortical entrainment to the speech temporal envelope, which accounts for neural activity specifically related to sensory information, was affected by prior knowledge: This effect emerged early (∼50 ms) in left inferior frontal gyrus (IFG) and then (∼100 ms) in Heschl's gyrus (HG), and was sustained until latencies of ∼250 ms. Directed transfer function (DTF) measures were used for estimating direct Granger causal relations between locations of interest. In line with the cortical entrainment result, this analysis indicated that prior knowledge enhanced top-down connections from left IFG to all the left temporal areas of interest - namely HG, superior temporal sulcus (STS), and middle temporal gyrus (MTG). In addition, intelligible speech increased top-down information flow between left STS and left HG, and increased bottom-up flow in higher-order temporal cortex, specifically between STS and MTG. These results are compatible with theories that explain this mechanism as a result of both ascending and descending cortical interactions, such as predictive coding. Altogether, this study provides a detailed view of how, where and when prior knowledge influences continuous speech perception.
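
Directed influences of the kind DTF estimates can be illustrated, in a simpler bivariate form, with the Granger-causality test from statsmodels; this is a stand-in for exposition, not the multivariate DTF analysis used in the study:

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

# Synthetic sources standing in for IFG and HG time courses:
# the "IFG" signal drives the "HG" signal at a 2-sample delay.
rng = np.random.default_rng(6)
ifg = rng.standard_normal(2000)
hg = 0.8 * np.roll(ifg, 2) + rng.standard_normal(2000)

# Does IFG Granger-cause HG? (the second column is tested as a
# predictor of the first)
res = grangercausalitytests(np.column_stack([hg, ifg]), maxlag=5, verbose=False)
f_stat, p_value, _, _ = res[2][0]["ssr_ftest"]
print(f"lag 2: F = {f_stat:.1f}, p = {p_value:.4g}")
```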


Subjects
Comprehension/physiology; Magnetoencephalography/methods; Speech Perception/physiology; Temporal Lobe/physiology; Adult; Auditory Cortex/physiology; Female; Humans; Male; Middle Aged; Time Factors; Young Adult
10.
Neuroimage ; 175: 70-79, 2018 07 15.
Article in English | MEDLINE | ID: mdl-29609008

ABSTRACT

Developmental dyslexia is a multifaceted disorder of learning primarily manifested by difficulties in reading, spelling, and phonological processing. Neural studies suggest that phonological difficulties may reflect impairments in fundamental cortical oscillatory mechanisms. Here we examine cortical mechanisms in children (6-12 years of age) with or without dyslexia (utilising both age- and reading-level-matched controls) using electroencephalography (EEG). EEG data were recorded as participants listened to an audio-story. Novel electrophysiological measures of phonemic processing were derived by quantifying how well the EEG responses tracked phonetic features of speech. Our results provide, for the first time, evidence of impaired low-frequency cortical tracking of phonetic features during natural speech perception in dyslexia. Atypical phonological tracking was focused on the right hemisphere and correlated with traditional psychometric measures of phonological skills used in diagnostic dyslexia assessments. Accordingly, the novel indices developed here may provide objective metrics to investigate language development and language impairment across languages.


Subjects
Dyslexia/physiopathology; Electroencephalography/methods; Functional Laterality/physiology; Image Processing, Computer-Assisted/methods; Psycholinguistics; Speech Perception/physiology; Child; Female; Humans; Male
11.
Neuroimage ; 172: 206-216, 2018 05 15.
Article in English | MEDLINE | ID: mdl-29378317

ABSTRACT

The relation between a stimulus and the evoked brain response can shed light on perceptual processes within the brain. Signals derived from this relation can also be harnessed to control external devices for Brain Computer Interface (BCI) applications. While the classic event-related potential (ERP) is appropriate for isolated stimuli, more sophisticated "decoding" strategies are needed to address continuous stimuli such as speech, music or environmental sounds. Here we describe an approach based on Canonical Correlation Analysis (CCA) that finds the optimal transform to apply to both the stimulus and the response to reveal correlations between the two. Compared to prior methods based on forward or backward models for stimulus-response mapping, CCA finds significantly higher correlation scores, thus providing increased sensitivity to relatively small effects, and supports classifier schemes that yield higher classification scores. CCA strips the brain response of variance unrelated to the stimulus, and the stimulus representation of variance that does not affect the response, and thus improves observations of the relation between stimulus and response.
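
scikit-learn's CCA gives a minimal working analogue of the stimulus-response CCA described here, finding paired transforms that maximize correlation; the full pipeline in the paper additionally involves time lags and dimensionality reduction, which this sketch omits (all data are synthetic):

```python
import numpy as np
from sklearn.cross_decomposition import CCA

# Hypothetical time-aligned data: stimulus features and EEG channels
# mixing the same two latent components.
rng = np.random.default_rng(7)
latent = rng.standard_normal((2000, 2))
stim = latent @ rng.standard_normal((2, 10)) + 0.5 * rng.standard_normal((2000, 10))
eeg = latent @ rng.standard_normal((2, 32)) + 0.5 * rng.standard_normal((2000, 32))

cca = CCA(n_components=2)
stim_c, eeg_c = cca.fit_transform(stim, eeg)
for k in range(2):
    r = np.corrcoef(stim_c[:, k], eeg_c[:, k])[0, 1]
    print(f"canonical correlation {k + 1}: r = {r:.2f}")
```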


Subjects
Brain Mapping/methods; Brain/physiology; Signal Processing, Computer-Assisted; Acoustic Stimulation; Electroencephalography/methods; Evoked Potentials, Auditory/physiology; Humans; Magnetoencephalography/methods
12.
J Neurosci ; 36(38): 9888-95, 2016 09 21.
Article in English | MEDLINE | ID: mdl-27656026

ABSTRACT

Speech comprehension is improved by viewing a speaker's face, especially in adverse hearing conditions, a principle known as inverse effectiveness. However, the neural mechanisms that help to optimize how we integrate auditory and visual speech in such suboptimal conversational environments are not yet fully understood. Using human EEG recordings, we examined how visual speech enhances the cortical representation of auditory speech at a signal-to-noise ratio that maximized the perceptual benefit conferred by multisensory processing relative to unisensory processing. We found that the influence of visual input on the neural tracking of the audio speech signal was significantly greater in noisy than in quiet listening conditions, consistent with the principle of inverse effectiveness. Although envelope tracking during audio-only speech was greatly reduced by background noise at an early processing stage, it was markedly restored by the addition of visual speech input. In background noise, multisensory integration occurred at much lower frequencies and was shown to predict the multisensory gain in behavioral performance at a time lag of ∼250 ms. Critically, we demonstrated that inverse effectiveness, in the context of natural audiovisual (AV) speech processing, relies on crossmodal integration over long temporal windows. Our findings suggest that disparate integration mechanisms contribute to the efficient processing of AV speech in background noise.

SIGNIFICANCE STATEMENT: The behavioral benefit of seeing a speaker's face during conversation is especially pronounced in challenging listening environments. However, the neural mechanisms underlying this phenomenon, known as inverse effectiveness, have not yet been established. Here, we examine this in the human brain using natural speech-in-noise stimuli that were designed specifically to maximize the behavioral benefit of audiovisual (AV) speech. We find that this benefit arises from our ability to integrate multimodal information over longer periods of time. Our data also suggest that the addition of visual speech restores early tracking of the acoustic speech signal during excessive background noise. These findings support and extend current mechanistic perspectives on AV speech perception.


Subjects
Evoked Potentials/physiology; Models, Neurological; Speech Perception/physiology; Visual Perception/physiology; Acoustic Stimulation; Adult; Analysis of Variance; Electroencephalography; Female; Humans; Male; Photic Stimulation; Sound Spectrography; Time Factors; Young Adult
13.
Adv Exp Med Biol ; 894: 337-345, 2016.
Article in English | MEDLINE | ID: mdl-27080674

ABSTRACT

The human ability to understand speech across an enormous range of listening conditions is underpinned by a hierarchical auditory processing system whose successive stages process increasingly complex attributes of the acoustic input. In order to produce a categorical perception of words and phonemes, it has been suggested that, while earlier areas of the auditory system undoubtedly respond to acoustic differences in speech tokens, later areas must exhibit consistent neural responses to those tokens. Neural indices of such hierarchical processing in the context of continuous speech have been identified using low-frequency scalp-recorded electroencephalography (EEG) data. The relationship between continuous speech and its associated neural responses has been shown to be best described when that speech is represented using both its low-level spectrotemporal information and the categorical labelling of its phonetic features (Di Liberto et al., Curr Biol 25(19):2457-2465, 2015). While the phonetic features have been shown to carry extra information not captured by the speech spectrotemporal representation, the causes of this EEG activity remain unclear. This study aims to demonstrate a framework for examining speech-specific processing and for disentangling high-level neural activity related to intelligibility from low-level activity in response to spectrotemporal fluctuations of speech. Preliminary results suggest that a neural measure of processing at the phonetic level can be isolated.


Subjects
Electroencephalography; Phonetics; Speech Perception/physiology; Adult; Female; Humans; Male; Speech Intelligibility
14.
Front Hum Neurosci ; 18: 1403677, 2024.
Article in English | MEDLINE | ID: mdl-38911229

ABSTRACT

Slow cortical oscillations play a crucial role in processing the speech amplitude envelope, which is perceived atypically by children with developmental dyslexia. Here we use electroencephalography (EEG) recorded during natural speech listening to identify neural processing patterns involving slow oscillations that may characterize children with dyslexia. In a story listening paradigm, we find that atypical power dynamics and phase-amplitude coupling between delta and theta oscillations distinguish children with dyslexia from both control groups (typically developing controls and controls with other language disorders). We further isolate EEG common spatial patterns (CSP) during speech listening across delta and theta oscillations that identify dyslexic children. A linear classifier using four delta-band CSP variables predicted dyslexia status (AUC = 0.77). Crucially, these spatial patterns also identified children with dyslexia when applied to EEG measured during a rhythmic syllable processing task. This transfer effect (i.e., the ability to use neural features derived from a story listening task as input features to a classifier based on a rhythmic syllable task) is consistent with a core developmental deficit in neural processing of speech rhythm. The findings are suggestive of distinct atypical neurocognitive speech encoding mechanisms underlying dyslexia, which could be targeted by novel interventions.
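
The delta-band CSP plus linear classifier pipeline can be approximated with MNE's CSP transformer and scikit-learn, scored by AUC; the epoch shapes, labels, and injected class effect below are synthetic stand-ins, not the study's data:

```python
import numpy as np
from mne.decoding import CSP
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Synthetic delta-band EEG epochs: (n_children, n_channels, n_times),
# with a weak class-dependent spatial pattern added for the demo.
rng = np.random.default_rng(8)
X = rng.standard_normal((80, 32, 500))
y = np.repeat([0, 1], 40)   # 0 = control, 1 = dyslexia
X[y == 1, 5] *= 1.5         # channel 5 carries extra power in one group

# Four CSP components feeding a linear classifier, scored by AUC.
clf = make_pipeline(CSP(n_components=4), LinearDiscriminantAnalysis())
auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(f"mean AUC = {auc.mean():.2f}")
```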

15.
ArXiv ; 2024 Feb 13.
Article in English | MEDLINE | ID: mdl-37744463

ABSTRACT

Neurophysiology research has demonstrated that it is possible and valuable to investigate sensory processing in scenarios involving continuous sensory streams, such as speech and music. Over the past decade, novel analytic frameworks combined with growing participation in data sharing have led to a surge of publicly available datasets involving continuous sensory experiments. However, open science efforts in this domain of research remain scattered, lacking a cohesive set of guidelines. This paper presents an end-to-end open science framework for the storage, analysis, sharing, and re-analysis of neural data recorded during continuous sensory experiments. The framework has been designed to interface easily with existing toolboxes, such as EelBrain, NapLib, MNE, and the mTRF-Toolbox. We present guidelines from both the user view (how to rapidly re-analyse existing data) and the experimenter view (how to store, analyse, and share), making the process as straightforward and accessible as possible for all users. Additionally, we introduce a web-based data browser that enables effortless replication of published results and data re-analysis.

16.
Nat Commun ; 14(1): 7789, 2023 Dec 01.
Article in English | MEDLINE | ID: mdl-38040720

ABSTRACT

Even prior to producing their first words, infants are developing a sophisticated speech processing system, with robust word recognition present by 4-6 months of age. These emergent linguistic skills, observed with behavioural investigations, are likely to rely on increasingly sophisticated neural underpinnings. The infant brain is known to robustly track the speech envelope; however, previous cortical tracking studies were unable to demonstrate the presence of phonetic feature encoding. Here we utilise temporal response functions computed from electrophysiological responses to nursery rhymes to investigate the cortical encoding of phonetic features in a longitudinal cohort of infants at 4, 7 and 11 months of age, as well as in adults. The analyses reveal an increasingly detailed and acoustically invariant phonetic encoding emerging over the first year of life, providing neurophysiological evidence that the pre-verbal human cortex learns phonetic categories. By contrast, we found no credible evidence for age-related increases in cortical tracking of the acoustic spectrogram.


Subjects
Auditory Cortex; Speech Perception; Adult; Infant; Humans; Phonetics; Auditory Cortex/physiology; Speech Perception/physiology; Speech/physiology; Acoustics; Acoustic Stimulation
17.
Front Neurosci ; 16: 842447, 2022.
Article in English | MEDLINE | ID: mdl-35495026

ABSTRACT

Here we replicate a neural tracking paradigm, previously published with infants (aged 4 to 11 months), with adult participants, in order to explore potential developmental similarities and differences in entrainment. Adults listened and watched passively as nursery rhymes were sung or chanted in infant-directed speech. Whole-head EEG (128 channels) was recorded, and cortical tracking of the sung speech in the delta (0.5-4 Hz), theta (4-8 Hz) and alpha (8-12 Hz) frequency bands was computed using linear decoders (multivariate temporal response function models, mTRFs). Phase-amplitude coupling (PAC) was also computed to assess whether delta and theta phases temporally organize higher-frequency amplitudes in adults in the same pattern as found in the infant brain. Like the infants studied previously, the adults showed significant cortical tracking of the sung speech in both the delta and theta bands. However, the frequencies associated with peaks in stimulus-induced spectral power differed between the two populations. PAC also differed: coupling was stronger when driven by theta than by delta phases in adults, whereas delta- and theta-driven coupling were equally strong in infants. Adults also showed a stimulus-induced increase in low alpha power that was absent in infants, which may suggest adult recruitment of other cognitive processes, possibly related to comprehension or attention. The comparative data suggest that while infant and adult brains utilize essentially the same cortical mechanisms to track linguistic input, the operation of and interplay between these mechanisms may change with age and language experience.

18.
Front Neurosci ; 15: 673401, 2021.
Article in English | MEDLINE | ID: mdl-34421512

ABSTRACT

Music perception requires the human brain to process a variety of acoustic and music-related properties. Recent research has used encoding models to tease apart and study the various cortical contributors to music perception. To do so, such approaches study temporal response functions that summarise the neural activity over several minutes of data. Here we tested the possibility of assessing the neural processing of individual musical units (bars) with electroencephalography (EEG). We devised a decoding methodology based on a maximum correlation metric across EEG segments (maxCorr) and used it to decode melodies from EEG in an experiment where professional musicians listened to and imagined four Bach melodies multiple times. We demonstrate that accurate decoding of melodies in single subjects and at the level of individual musical units is possible, both from EEG signals recorded during listening and imagination. Furthermore, greater decoding accuracies are measured for the maxCorr method than for an envelope reconstruction approach based on backward temporal response functions (bTRFenv). These results indicate that neural signals encode information beyond note timing; in particular, low-frequency cortical signals below 1 Hz are shown to encode pitch-related information. Along with the theoretical implications of these results, we discuss the potential applications of this decoding methodology in the context of novel brain-computer interface solutions.
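
The core of the maxCorr idea, assigning a segment to the melody template it correlates with most strongly, fits in a few lines; the templates here are hypothetical stand-ins for EEG-derived melody time courses:

```python
import numpy as np

def maxcorr_decode(segment, templates):
    """Return the index of the template most correlated with the segment."""
    rs = [np.corrcoef(segment, t)[0, 1] for t in templates]
    return int(np.argmax(rs)), rs

# Four hypothetical melody templates (e.g., averaged EEG-derived time courses).
rng = np.random.default_rng(9)
templates = [rng.standard_normal(512) for _ in range(4)]
segment = templates[2] + 0.8 * rng.standard_normal(512)  # noisy bar from melody 3
idx, rs = maxcorr_decode(segment, templates)
print("decoded melody:", idx + 1, "correlations:", np.round(rs, 2))
```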

19.
Sci Rep ; 11(1): 4963, 2021 03 02.
Article in English | MEDLINE | ID: mdl-33654202

ABSTRACT

Healthy ageing leads to changes in the brain that affect sensory and cognitive processing. It is not fully clear how these changes affect the processing of everyday spoken language. Prediction is thought to play an important role in language comprehension, whereby information about upcoming words is pre-activated across multiple representational levels. However, evidence from electrophysiology suggests differences in how older and younger adults use context-based predictions, particularly at the level of semantic representation. We investigated these differences during natural speech comprehension by presenting older and younger subjects with continuous, narrative speech while recording their electroencephalogram. We used time-lagged linear regression to test how distinct computational measures of (1) semantic dissimilarity and (2) lexical surprisal are processed in the brains of both groups. Our results reveal dissociable neural correlates of these two measures that suggest differences in how younger and older adults successfully comprehend speech. Specifically, while younger and older subjects both employ context-based lexical predictions, older subjects are significantly less likely to pre-activate the semantic features relating to upcoming words. Furthermore, across our group of older adults, the weaker the neural signature of this semantic pre-activation mechanism, the lower a subject's semantic verbal fluency score. We interpret these findings as evidence that prediction plays a generally reduced role at the semantic level in older listeners during speech comprehension, and that this change may be part of an overall strategy for successfully comprehending speech with reduced cognitive resources.
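
Semantic dissimilarity of the kind regressed here is often computed as one minus the similarity between a word's embedding and the average embedding of the preceding context; whether this matches the authors' exact definition is an assumption, and the embeddings below are random stand-ins for a real vector space:

```python
import numpy as np

def semantic_dissimilarity(word_vecs):
    """1 - cosine similarity between each word and the mean of preceding words."""
    out = [0.0]  # the first word has no context
    for i in range(1, len(word_vecs)):
        w, ctx = word_vecs[i], np.mean(word_vecs[:i], axis=0)
        cos = w @ ctx / (np.linalg.norm(w) * np.linalg.norm(ctx))
        out.append(1.0 - cos)
    return np.array(out)

# Random stand-ins for word embeddings of a short narrative.
rng = np.random.default_rng(10)
vecs = rng.standard_normal((8, 300))  # 8 words, 300-dimensional vectors
print(semantic_dissimilarity(vecs).round(2))
```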


Subjects
Aging/physiology; Brain/physiology; Comprehension/physiology; Electroencephalography; Speech Perception/physiology; Adult; Aged; Female; Humans; Male; Middle Aged
20.
Front Neurosci ; 15: 705621, 2021.
Article in English | MEDLINE | ID: mdl-34880719

ABSTRACT

Cognitive neuroscience, in particular research on speech and language, has seen an increase in the use of linear modeling techniques for studying the processing of natural, environmental stimuli. The availability of such computational tools has prompted similar investigations in many clinical domains, facilitating the study of cognitive and sensory deficits under more naturalistic conditions. However, studying clinical (and often highly heterogeneous) cohorts introduces an added layer of complexity to such modeling procedures, potentially leading to instability of such techniques and, as a result, inconsistent findings. Here, we outline some key methodological considerations for applied research, referring to a hypothetical clinical experiment involving speech processing and worked examples of simulated electrophysiological (EEG) data. In particular, we focus on experimental design, data preprocessing, stimulus feature extraction, model design, model training and evaluation, and interpretation of model weights. Throughout the paper, we demonstrate the implementation of each step in MATLAB using the mTRF-Toolbox and discuss how to address issues that could arise in applied research. In doing so, we hope to provide better intuition on these more technical points and provide a resource for applied and clinical researchers investigating sensory and cognitive processing using ecologically rich stimuli.
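
As a Python counterpart to one of the steps walked through in the paper, model training and evaluation, here is a sketch of choosing the ridge parameter of a lagged TRF by cross-validated prediction correlation. The paper's own worked examples use the MATLAB mTRF-Toolbox, so this is an illustrative analogue rather than its API:

```python
import numpy as np

def lagged(stim, n_lags):
    """Lagged design matrix (circular shifts for brevity)."""
    return np.column_stack([np.roll(stim, k) for k in range(n_lags)])

def cv_ridge_trf(stim, eeg, n_lags=32, lambdas=(1e0, 1e2, 1e4), n_folds=5):
    """Pick the ridge parameter by k-fold CV on prediction correlation."""
    X = lagged(stim, n_lags)
    folds = np.array_split(np.arange(len(stim)), n_folds)
    scores = {}
    for lam in lambdas:
        rs = []
        for test in folds:
            train = np.setdiff1d(np.arange(len(stim)), test)
            w = np.linalg.solve(X[train].T @ X[train] + lam * np.eye(n_lags),
                                X[train].T @ eeg[train])
            rs.append(np.corrcoef(X[test] @ w, eeg[test])[0, 1])
        scores[lam] = np.mean(rs)
    return max(scores, key=scores.get), scores

# Synthetic demo data.
rng = np.random.default_rng(11)
stim = np.abs(rng.standard_normal(4000))
eeg = lagged(stim, 32) @ np.hanning(32) + rng.standard_normal(4000)
best, scores = cv_ridge_trf(stim, eeg)
print("best lambda:", best, {k: round(v, 2) for k, v in scores.items()})
```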
