Results 1 - 20 of 87
1.
J Neurosci; 42(41): 7782-7798, 2022 Oct 12.
Article in English | MEDLINE | ID: mdl-36041853

ABSTRACT

In recent years research on natural speech processing has benefited from recognizing that low-frequency cortical activity tracks the amplitude envelope of natural speech. However, it remains unclear to what extent this tracking reflects speech-specific processing beyond the analysis of the stimulus acoustics. In the present study, we aimed to disentangle contributions to cortical envelope tracking that reflect general acoustic processing from those that are functionally related to processing speech. To do so, we recorded EEG from subjects as they listened to auditory chimeras, stimuli composed of the temporal fine structure of one speech stimulus modulated by the amplitude envelope (ENV) of another speech stimulus. By varying the number of frequency bands used in making the chimeras, we obtained some control over which speech stimulus was recognized by the listener. No matter which stimulus was recognized, envelope tracking was always strongest for the ENV stimulus, indicating a dominant contribution from acoustic processing. However, there was also a positive relationship between intelligibility and the tracking of the perceived speech, indicating a contribution from speech-specific processing. These findings were supported by a follow-up analysis that assessed envelope tracking as a function of the (estimated) output of the cochlea rather than the original stimuli used in creating the chimeras. Finally, we sought to isolate the speech-specific contribution to envelope tracking using forward encoding models and found that indices of phonetic feature processing tracked reliably with intelligibility. Together these results show that cortical speech tracking is dominated by acoustic processing but also reflects speech-specific processing.

SIGNIFICANCE STATEMENT: Activity in auditory cortex is known to dynamically track the energy fluctuations, or amplitude envelope, of speech. Measures of this tracking are now widely used in research on hearing and language and have had a substantial influence on theories of how auditory cortex parses and processes speech. But how much of this speech tracking is actually driven by speech-specific processing rather than general acoustic processing is unclear, limiting its interpretability and its usefulness. Here, by merging two speech stimuli together to form so-called auditory chimeras, we show that EEG tracking of the speech envelope is dominated by acoustic processing but also reflects linguistic analysis. This has important implications for theories of cortical speech tracking and for using measures of that tracking in applied research.
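
None of these records includes analysis code, but the envelope-tracking approach described above is simple to illustrate. The sketch below is a minimal stand-in, not the authors' pipeline: it extracts a broadband amplitude envelope with a Hilbert transform and fits a lagged ridge-regression forward model (a temporal response function) predicting one EEG channel. The sampling rate, lag window, regularization, and the random placeholder signals are all assumptions.

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt
from sklearn.linear_model import Ridge

fs = 128  # Hz, assumed common sampling rate for stimulus envelope and EEG
rng = np.random.default_rng(0)

# Placeholder signals: a "speech" waveform and one EEG channel (replace with real data)
speech = rng.standard_normal(60 * fs)
eeg = rng.standard_normal(60 * fs)

# Broadband amplitude envelope, low-pass filtered below 8 Hz
env = np.abs(hilbert(speech))
b, a = butter(3, 8 / (fs / 2), btype="low")
env = filtfilt(b, a, env)

# Build a lagged design matrix (0-400 ms) and fit a forward TRF with ridge regression
lags = np.arange(0, int(0.4 * fs))
X = np.column_stack([np.roll(env, lag) for lag in lags])
X[: lags[-1], :] = 0  # zero out samples wrapped around by np.roll

half = len(eeg) // 2
model = Ridge(alpha=1.0).fit(X[:half], eeg[:half])

# "Envelope tracking" quantified as the correlation between predicted and held-out EEG
pred = model.predict(X[half:])
r = np.corrcoef(pred, eeg[half:])[0, 1]
print(f"prediction correlation on held-out data: {r:.3f}")
```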


Subject(s)
Auditory Cortex, Speech Perception, Humans, Speech, Acoustic Stimulation/methods, Phonetics
2.
J Neurosci; 42(4): 682-691, 2022 Jan 26.
Article in English | MEDLINE | ID: mdl-34893546

ABSTRACT

Humans have the remarkable ability to selectively focus on a single talker in the midst of other competing talkers. The neural mechanisms that underlie this phenomenon remain incompletely understood. In particular, there has been longstanding debate over whether attention operates at an early or late stage in the speech processing hierarchy. One way to better understand this is to examine how attention might differentially affect neurophysiological indices of hierarchical acoustic and linguistic speech representations. In this study, we do this by using encoding models to identify neural correlates of speech processing at various levels of representation. Specifically, we recorded EEG from fourteen human subjects (nine female and five male) during a "cocktail party" attention experiment. Model comparisons based on these data revealed phonetic feature processing for attended, but not unattended speech. Furthermore, we show that attention specifically enhances isolated indices of phonetic feature processing, but that such attention effects are not apparent for isolated measures of acoustic processing. These results provide new insights into the effects of attention on different prelexical representations of speech, insights that complement recent anatomic accounts of the hierarchical encoding of attended speech. Furthermore, our findings support the notion that, for attended speech, phonetic features are processed as a distinct stage, separate from the processing of the speech acoustics.

SIGNIFICANCE STATEMENT: Humans are very good at paying attention to one speaker in an environment with multiple speakers. However, the details of how attended and unattended speech are processed differently by the brain are not completely clear. Here, we explore how attention affects the processing of the acoustic sounds of speech as well as the mapping of those sounds onto categorical phonetic features. We find evidence of categorical phonetic feature processing for attended, but not unattended speech. Furthermore, we find evidence that categorical phonetic feature processing is enhanced by attention, but acoustic processing is not. These findings add an important new layer in our understanding of how the human brain solves the cocktail party problem.
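
The model comparison described here can be sketched as nested forward encoding models scored by cross-validated prediction accuracy. The example below is an illustrative approximation rather than the authors' analysis: the feature matrices, regularization, and simulated EEG channel are placeholders, and the gain of the combined model over the acoustic-only model stands in for the "isolated index of phonetic feature processing."

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

rng = np.random.default_rng(1)
n = 5000
acoustic = rng.standard_normal((n, 8))          # e.g., spectrogram bands (placeholder)
phonetic = (rng.random((n, 19)) > 0.95) * 1.0   # e.g., binary phonetic features (placeholder)
eeg = acoustic @ rng.standard_normal(8) + 0.3 * (phonetic @ rng.standard_normal(19)) \
      + rng.standard_normal(n)

def cv_correlation(X, y, n_splits=5):
    """Mean correlation between ridge predictions and held-out EEG across folds."""
    rs = []
    for tr, te in KFold(n_splits).split(X):
        pred = Ridge(alpha=1.0).fit(X[tr], y[tr]).predict(X[te])
        rs.append(np.corrcoef(pred, y[te])[0, 1])
    return np.mean(rs)

r_acoustic = cv_correlation(acoustic, eeg)
r_combined = cv_correlation(np.hstack([acoustic, phonetic]), eeg)
# If the combined model predicts unseen EEG better, the gain is attributed to phonetic features
print(f"acoustic only: {r_acoustic:.3f}  acoustic + phonetic: {r_combined:.3f}")
```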


Asunto(s)
Estimulación Acústica/métodos , Atención/fisiología , Fonética , Percepción del Habla/fisiología , Habla/fisiología , Adulto , Electroencefalografía/métodos , Femenino , Humanos , Masculino , Estimulación Luminosa/métodos , Adulto Joven
3.
Neuroimage; 282: 120391, 2023 Nov 15.
Article in English | MEDLINE | ID: mdl-37757989

ABSTRACT

There is considerable debate over how visual speech is processed in the absence of sound and whether neural activity supporting lipreading occurs in visual brain areas. Much of the ambiguity stems from a lack of behavioral grounding and neurophysiological analyses that cannot disentangle high-level linguistic and phonetic/energetic contributions from visual speech. To address this, we recorded EEG from human observers as they watched silent videos, half of which were novel and half of which were previously rehearsed with the accompanying audio. We modeled how the EEG responses to novel and rehearsed silent speech reflected the processing of low-level visual features (motion, lip movements) and a higher-level categorical representation of linguistic units, known as visemes. The ability of these visemes to account for the EEG - beyond the motion and lip movements - was significantly enhanced for rehearsed videos in a way that correlated with participants' trial-by-trial ability to lipread that speech. Source localization of viseme processing showed clear contributions from visual cortex, with no strong evidence for the involvement of auditory areas. We interpret this as support for the idea that the visual system produces its own specialized representation of speech that is (1) well-described by categorical linguistic features, (2) dissociable from lip movements, and (3) predictive of lipreading ability. We also suggest a reinterpretation of previous findings of auditory cortical activation during silent speech that is consistent with hierarchical accounts of visual and audiovisual speech perception.


Subject(s)
Auditory Cortex, Speech Perception, Humans, Lipreading, Speech Perception/physiology, Brain/physiology, Auditory Cortex/physiology, Phonetics, Visual Perception/physiology
4.
Neuroimage; 274: 120143, 2023 Jul 1.
Article in English | MEDLINE | ID: mdl-37121375

ABSTRACT

In noisy environments, our ability to understand speech benefits greatly from seeing the speaker's face. This is attributed to the brain's ability to integrate audio and visual information, a process known as multisensory integration. In addition, selective attention plays an enormous role in what we understand, the so-called cocktail-party phenomenon. But how attention and multisensory integration interact remains incompletely understood, particularly in the case of natural, continuous speech. Here, we addressed this issue by analyzing EEG data recorded from participants who undertook a multisensory cocktail-party task using natural speech. To assess multisensory integration, we modeled the EEG responses to the speech in two ways. The first assumed that audiovisual speech processing is simply a linear combination of audio speech processing and visual speech processing (i.e., an A + V model), while the second allowed for the possibility of audiovisual interactions (i.e., an AV model). Applying these models to the data revealed that EEG responses to attended audiovisual speech were better explained by an AV model, providing evidence for multisensory integration. In contrast, unattended audiovisual speech responses were best captured using an A + V model, suggesting that multisensory integration is suppressed for unattended speech. Follow-up analyses revealed some limited evidence for early multisensory integration of unattended AV speech, with no integration occurring at later levels of processing. We take these findings as evidence that the integration of natural audio and visual speech occurs at multiple levels of processing in the brain, each of which can be differentially affected by attention.
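
The A + V versus AV logic can be illustrated with a toy comparison: predict audiovisual EEG either by summing unimodal models or by fitting one model directly to the audiovisual responses. This is a simplified stand-in under assumed placeholder features and simulated responses, not the paper's modelling pipeline.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(2)
n = 4000
audio_feat = rng.standard_normal((n, 4))   # placeholder acoustic features
visual_feat = rng.standard_normal((n, 4))  # placeholder visual features (e.g., lip aperture)

# Placeholder EEG for each condition; a real dataset would supply these
eeg_a = audio_feat @ rng.standard_normal(4) + rng.standard_normal(n)
eeg_v = visual_feat @ rng.standard_normal(4) + rng.standard_normal(n)
eeg_av = eeg_a + eeg_v + 0.5 * (audio_feat[:, 0] * visual_feat[:, 0])  # built-in interaction

half = n // 2
feats = np.hstack([audio_feat, visual_feat])

# "A + V" model: unimodal models fit separately, predictions summed
pred_sum = (Ridge(alpha=1.0).fit(audio_feat[:half], eeg_a[:half]).predict(audio_feat[half:])
            + Ridge(alpha=1.0).fit(visual_feat[:half], eeg_v[:half]).predict(visual_feat[half:]))

# "AV" model: one model fit directly to the audiovisual responses
pred_av = Ridge(alpha=1.0).fit(feats[:half], eeg_av[:half]).predict(feats[half:])

r_sum = np.corrcoef(pred_sum, eeg_av[half:])[0, 1]
r_av = np.corrcoef(pred_av, eeg_av[half:])[0, 1]
# A better AV fit than A + V fit is taken as evidence for audiovisual interaction
print(f"A + V: {r_sum:.3f}   AV: {r_av:.3f}")
```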


Asunto(s)
Percepción del Habla , Humanos , Percepción del Habla/fisiología , Habla , Atención/fisiología , Percepción Visual/fisiología , Encéfalo/fisiología , Estimulación Acústica , Percepción Auditiva
5.
J Neurosci; 41(23): 4991-5003, 2021 Jun 9.
Article in English | MEDLINE | ID: mdl-33824190

ABSTRACT

Seeing a speaker's face benefits speech comprehension, especially in challenging listening conditions. This perceptual benefit is thought to stem from the neural integration of visual and auditory speech at multiple stages of processing, whereby movement of a speaker's face provides temporal cues to auditory cortex, and articulatory information from the speaker's mouth can aid in recognizing specific linguistic units (e.g., phonemes, syllables). However, it remains unclear how the integration of these cues varies as a function of listening conditions. Here, we sought to provide insight into these questions by examining EEG responses in humans (males and females) to natural audiovisual (AV), audio, and visual speech in quiet and in noise. We represented our speech stimuli in terms of their spectrograms and their phonetic features and then quantified the strength of the encoding of those features in the EEG using canonical correlation analysis (CCA). The encoding of both spectrotemporal and phonetic features was shown to be more robust in AV speech responses than what would have been expected from the summation of the audio and visual speech responses, suggesting that multisensory integration occurs at both spectrotemporal and phonetic stages of speech processing. We also found evidence to suggest that the integration effects may change with listening conditions; however, this was an exploratory analysis and future work will be required to examine this effect using a within-subject design. These findings demonstrate that integration of audio and visual speech occurs at multiple stages along the speech processing hierarchy.

SIGNIFICANCE STATEMENT: During conversation, visual cues impact our perception of speech. Integration of auditory and visual speech is thought to occur at multiple stages of speech processing and vary flexibly depending on the listening conditions. Here, we examine audiovisual (AV) integration at two stages of speech processing using the speech spectrogram and a phonetic representation, and test how AV integration adapts to degraded listening conditions. We find significant integration at both of these stages regardless of listening conditions. These findings reveal neural indices of multisensory interactions at different stages of processing and provide support for the multistage integration framework.
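
Canonical correlation analysis, named explicitly in this abstract, is available off the shelf in scikit-learn. The snippet below shows the general idea with simulated data; the feature dimensions, number of components, and train/test split are assumptions for illustration, not the study's configuration.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(3)
n = 3000
stim = rng.standard_normal((n, 16))   # placeholder stimulus features (spectrogram or phonetic)
# Placeholder EEG: a few channels linearly driven by the stimulus, plus noise channels
eeg = np.hstack([stim[:, :3] @ rng.standard_normal((3, 2)), rng.standard_normal((n, 30))])

half = n // 2
cca = CCA(n_components=2).fit(stim[:half], eeg[:half])

# Canonical correlations on held-out data index how strongly the EEG encodes the features
u, v = cca.transform(stim[half:], eeg[half:])
r = [np.corrcoef(u[:, k], v[:, k])[0, 1] for k in range(u.shape[1])]
print("held-out canonical correlations:", np.round(r, 3))
```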


Asunto(s)
Encéfalo/fisiología , Comprensión/fisiología , Señales (Psicología) , Percepción del Habla/fisiología , Percepción Visual/fisiología , Estimulación Acústica , Mapeo Encefálico , Electroencefalografía , Femenino , Humanos , Masculino , Fonética , Estimulación Luminosa
6.
J Neurosci; 41(18): 4100-4119, 2021 May 5.
Article in English | MEDLINE | ID: mdl-33753548

ABSTRACT

Understanding how and where in the brain sentence-level meaning is constructed from words presents a major scientific challenge. Recent advances have begun to explain brain activation elicited by sentences using vector models of word meaning derived from patterns of word co-occurrence in text corpora. These studies have helped map out semantic representation across a distributed brain network spanning temporal, parietal, and frontal cortex. However, it remains unclear whether activation patterns within regions reflect unified representations of sentence-level meaning, as opposed to superpositions of context-independent component words. This is because models have typically represented sentences as "bags-of-words" that neglect sentence-level structure. To address this issue, we interrogated fMRI activation elicited as 240 sentences were read by 14 participants (9 female, 5 male), using sentences encoded by a recurrent deep artificial neural network trained on a sentence inference task (InferSent). Recurrent connections and nonlinear filters enable InferSent to transform sequences of word vectors into unified "propositional" sentence representations suitable for evaluating intersentence entailment relations. Using voxelwise encoding modeling, we demonstrate that InferSent predicts elements of fMRI activation that cannot be predicted by bag-of-words models and sentence models using grammatical rules to assemble word vectors. This effect occurs throughout a distributed network, which suggests that propositional sentence-level meaning is represented within and across multiple cortical regions rather than at any single site. In follow-up analyses, we place results in the context of other deep network approaches (ELMo and BERT) and estimate the degree of unpredicted neural signal using an "experiential" semantic model and cross-participant encoding.

SIGNIFICANCE STATEMENT: A modern-day scientific challenge is to understand how the human brain transforms word sequences into representations of sentence meaning. A recent approach, emerging from advances in functional neuroimaging, big data, and machine learning, is to computationally model meaning, and use models to predict brain activity. Such models have helped map a cortical semantic information-processing network. However, how unified sentence-level information, as opposed to word-level units, is represented throughout this network remains unclear. This is because models have typically represented sentences as unordered "bags-of-words." Using a deep artificial neural network that recurrently and nonlinearly combines word representations into unified propositional sentence representations, we provide evidence that sentence-level information is encoded throughout a cortical network, rather than in a single region.
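
Voxelwise encoding modeling generally amounts to a regularized regression from stimulus-feature vectors to each voxel's response, scored on held-out items. The sketch below illustrates that structure with random stand-ins for the InferSent embeddings and the per-sentence fMRI responses; dimensions, the ridge penalty, and the cross-validation scheme are assumptions.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

rng = np.random.default_rng(4)
n_sentences, emb_dim, n_voxels = 240, 300, 500
embeddings = rng.standard_normal((n_sentences, emb_dim))   # stand-in for sentence embeddings
betas = rng.standard_normal((n_sentences, n_voxels))        # stand-in for per-sentence fMRI responses

# Voxelwise encoding: ridge maps sentence embeddings to each voxel; the score is the
# correlation between predicted and observed responses for held-out sentences.
scores = np.zeros(n_voxels)
for tr, te in KFold(5, shuffle=True, random_state=0).split(embeddings):
    model = Ridge(alpha=10.0).fit(embeddings[tr], betas[tr])
    pred = model.predict(embeddings[te])
    for v in range(n_voxels):
        scores[v] += np.corrcoef(pred[:, v], betas[te, v])[0, 1] / 5

print("mean voxel prediction correlation:", scores.mean().round(3))
```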


Asunto(s)
Corteza Cerebral/diagnóstico por imagen , Corteza Cerebral/fisiología , Comprensión/fisiología , Lenguaje , Redes Neurales de la Computación , Semántica , Adulto , Simulación por Computador , Femenino , Humanos , Imagen por Resonancia Magnética , Masculino , Persona de Mediana Edad , Lectura , Adulto Joven
7.
Eur J Neurosci; 56(8): 5201-5214, 2022 Oct.
Article in English | MEDLINE | ID: mdl-35993240

ABSTRACT

Speech comprehension relies on the ability to understand words within a coherent context. Recent studies have attempted to obtain electrophysiological indices of this process by modelling how brain activity is affected by a word's semantic dissimilarity to preceding words. Although the resulting indices appear robust and are strongly modulated by attention, it remains possible that, rather than capturing the contextual understanding of words, they may actually reflect word-to-word changes in semantic content without the need for a narrative-level understanding on the part of the listener. To test this, we recorded electroencephalography from subjects who listened to speech presented in either its original, narrative form, or after scrambling the word order by varying amounts. This manipulation affected the ability of subjects to comprehend the speech narrative but not the ability to recognise individual words. Neural indices of semantic understanding and low-level acoustic processing were derived for each scrambling condition using the temporal response function. Signatures of semantic processing were observed when speech was unscrambled or minimally scrambled and subjects understood the speech. The same markers were absent for higher scrambling levels as speech comprehension dropped. In contrast, word recognition remained high and neural measures related to envelope tracking did not vary significantly across scrambling conditions. This supports the previous claim that electrophysiological indices based on the semantic dissimilarity of words to their context reflect a listener's understanding of those words relative to that context. It also highlights the relative insensitivity of neural measures of low-level speech processing to speech comprehension.
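
The semantic-dissimilarity regressor described here is typically computed as one minus the cosine similarity between a word's vector and the average vector of its preceding context. The sketch below uses random placeholder word vectors and an assumed context length; in practice the vectors would come from a trained word-embedding model.

```python
import numpy as np

rng = np.random.default_rng(5)
vocab = {w: rng.standard_normal(300) for w in
         "the quick brown fox jumps over the lazy dog".split()}

def semantic_dissimilarity(words, context_len=10):
    """1 - cosine similarity between each word vector and the mean of its preceding context."""
    values = []
    for i, w in enumerate(words):
        ctx = [vocab[c] for c in words[max(0, i - context_len):i]]
        if not ctx:
            values.append(0.0)
            continue
        v, c = vocab[w], np.mean(ctx, axis=0)
        cos = v @ c / (np.linalg.norm(v) * np.linalg.norm(c))
        values.append(1.0 - cos)
    return np.array(values)

# These per-word values, placed at word-onset times, would form the regressor for a TRF analysis
print(semantic_dissimilarity("the quick brown fox jumps over the lazy dog".split()).round(2))
```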


Asunto(s)
Semántica , Percepción del Habla , Percepción Auditiva/fisiología , Comprensión/fisiología , Electroencefalografía , Humanos , Habla/fisiología , Percepción del Habla/fisiología
8.
PLoS Comput Biol; 17(9): e1009358, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34534211

ABSTRACT

The human brain tracks amplitude fluctuations of both speech and music, which reflects acoustic processing in addition to the encoding of higher-order features and one's cognitive state. Comparing neural tracking of speech and music envelopes can elucidate stimulus-general mechanisms, but direct comparisons are confounded by differences in their envelope spectra. Here, we use a novel method of frequency-constrained reconstruction of stimulus envelopes using EEG recorded during passive listening. We expected to see music reconstruction match speech in a narrow range of frequencies, but instead we found that speech was reconstructed better than music for all frequencies we examined. Additionally, models trained on all stimulus types performed as well or better than the stimulus-specific models at higher modulation frequencies, suggesting a common neural mechanism for tracking speech and music. However, speech envelope tracking at low frequencies, below 1 Hz, was associated with increased weighting over parietal channels, which was not present for the other stimuli. Our results highlight the importance of low-frequency speech tracking and suggest an origin from speech-specific processing in the brain.
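
Frequency-constrained envelope reconstruction can be approximated by band-pass filtering both the EEG and the stimulus envelope and training a backward (decoding) model in each band. The example below is a rough sketch under assumed filter settings, regularization, and simulated data, not the study's method.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.linear_model import Ridge

fs = 64
rng = np.random.default_rng(6)
env = rng.standard_normal(120 * fs)              # placeholder stimulus envelope
eeg = np.outer(env, rng.standard_normal(32))     # placeholder 32-channel EEG driven by the envelope
eeg += rng.standard_normal(eeg.shape) * 3

def bandpass(x, lo, hi):
    b, a = butter(2, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x, axis=0)

# Frequency-constrained reconstruction: filter both EEG and envelope into a narrow band,
# then train a backward (decoding) model from multichannel EEG to the envelope.
lo, hi = 1.0, 2.0                                # one example modulation band (assumption)
eeg_f, env_f = bandpass(eeg, lo, hi), bandpass(env, lo, hi)

half = len(env) // 2
dec = Ridge(alpha=100.0).fit(eeg_f[:half], env_f[:half])
recon = dec.predict(eeg_f[half:])
print("reconstruction accuracy (r):", round(np.corrcoef(recon, env_f[half:])[0, 1], 3))
```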


Asunto(s)
Percepción Auditiva/fisiología , Encéfalo/fisiología , Música , Percepción del Habla/fisiología , Habla/fisiología , Estimulación Acústica/métodos , Adolescente , Adulto , Biología Computacional , Simulación por Computador , Electroencefalografía/estadística & datos numéricos , Femenino , Humanos , Modelos Lineales , Masculino , Modelos Neurológicos , Análisis de Componente Principal , Acústica del Lenguaje , Adulto Joven
9.
J Neurosci; 39(38): 7564-7575, 2019 Sep 18.
Article in English | MEDLINE | ID: mdl-31371424

ABSTRACT

Speech perception involves the integration of sensory input with expectations based on the context of that speech. Much debate surrounds the issue of whether prior knowledge feeds back to affect early auditory encoding in the lower levels of the speech processing hierarchy, or whether perception can best be explained as a purely feedforward process. Although there has been compelling evidence on both sides of this debate, experiments involving naturalistic speech stimuli to address these questions have been lacking. Here, we use a recently introduced method for quantifying the semantic context of speech and relate it to a commonly used method for indexing low-level auditory encoding of speech. The relationship between these measures is taken to be an indication of how semantic context leading up to a word influences how its low-level acoustic and phonetic features are processed. We record EEG from human participants (both male and female) listening to continuous natural speech and find that the early cortical tracking of a word's speech envelope is enhanced by its semantic similarity to its sentential context. Using a forward modeling approach, we find that prediction accuracy of the EEG signal also shows the same effect. Furthermore, this effect shows distinct temporal patterns of correlation depending on the type of speech input representation (acoustic or phonological) used for the model, implicating a top-down propagation of information through the processing hierarchy. These results suggest a mechanism that links top-down prior information with the early cortical entrainment of words in natural, continuous speech.

SIGNIFICANCE STATEMENT: During natural speech comprehension, we use semantic context when processing information about new incoming words. However, precisely how the neural processing of bottom-up sensory information is affected by top-down context-based predictions remains controversial. We address this discussion using a novel approach that indexes a word's similarity to context and how well a word's acoustic and phonetic features are processed by the brain at the time of its utterance. We relate these two measures and show that lower-level auditory tracking of speech improves for words that are more related to their preceding context. These results suggest a mechanism that links top-down prior information with bottom-up sensory processing in the context of natural, narrative speech listening.


Asunto(s)
Encéfalo/fisiología , Comprensión/fisiología , Modelos Neurológicos , Semántica , Percepción del Habla/fisiología , Adulto , Electroencefalografía , Femenino , Humanos , Masculino , Adulto Joven
10.
J Neurosci; 39(45): 8969-8987, 2019 Nov 6.
Article in English | MEDLINE | ID: mdl-31570538

ABSTRACT

The brain is thought to combine linguistic knowledge of words and nonlinguistic knowledge of their referents to encode sentence meaning. However, functional neuroimaging studies aiming at decoding language meaning from neural activity have mostly relied on distributional models of word semantics, which are based on patterns of word co-occurrence in text corpora. Here, we present initial evidence that modeling nonlinguistic "experiential" knowledge contributes to decoding neural representations of sentence meaning. We model attributes of people's sensory, motor, social, emotional, and cognitive experiences with words using behavioral ratings. We demonstrate that fMRI activation elicited in sentence reading is more accurately decoded when this experiential attribute model is integrated with a text-based model than when either model is applied in isolation (participants were 5 males and 9 females). Our decoding approach exploits a representation-similarity-based framework, which benefits from being parameter free, while performing at accuracy levels comparable with those from parameter fitting approaches, such as ridge regression. We find that the text-based model contributes particularly to the decoding of sentences containing linguistically oriented "abstract" words and reveal tentative evidence that the experiential model improves decoding of more concrete sentences. Finally, we introduce a cross-participant decoding method to estimate an upper bound on model-based decoding accuracy. We demonstrate that a substantial fraction of neural signal remains unexplained, and leverage this gap to pinpoint characteristics of weakly decoded sentences and hence identify model weaknesses to guide future model development.

SIGNIFICANCE STATEMENT: Language gives humans the unique ability to communicate about historical events, theoretical concepts, and fiction. Although words are learned through language and defined by their relations to other words in dictionaries, our understanding of word meaning presumably draws heavily on our nonlinguistic sensory, motor, interoceptive, and emotional experiences with words and their referents. Behavioral experiments lend support to the intuition that word meaning integrates aspects of linguistic and nonlinguistic "experiential" knowledge. However, behavioral measures do not provide a window on how meaning is represented in the brain and tend to necessitate artificial experimental paradigms. We present a model-based approach that reveals early evidence that experiential and linguistically acquired knowledge can be detected in brain activity elicited in reading natural sentences.
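
A parameter-free, similarity-based decoding scheme of the general kind mentioned here can be caricatured as pairwise matching of similarity profiles: for each left-out pair of sentences, check whether the correct pairing of model and neural similarity profiles scores higher than the swapped pairing. This is only a rough stand-in for the paper's framework; the data, similarity measure, and pair-matching criterion are all assumptions.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(7)
n_sent, n_feat, n_vox = 40, 65, 200
model = rng.standard_normal((n_sent, n_feat))   # attribute-model sentence vectors (placeholder)
neural = model @ rng.standard_normal((n_feat, n_vox)) + 5 * rng.standard_normal((n_sent, n_vox))

def profile(X, idx, refs):
    """Similarity (correlation) of item idx to each reference item within one representation."""
    return np.array([np.corrcoef(X[idx], X[r])[0, 1] for r in refs])

# Parameter-free pairwise decoding: for each left-out sentence pair, check whether the
# correct pairing of model and neural similarity profiles beats the swapped pairing.
correct, pairs = 0, list(combinations(range(n_sent), 2))
for i, j in pairs:
    refs = [k for k in range(n_sent) if k not in (i, j)]
    mi, mj = profile(model, i, refs), profile(model, j, refs)
    ni, nj = profile(neural, i, refs), profile(neural, j, refs)
    same = np.corrcoef(mi, ni)[0, 1] + np.corrcoef(mj, nj)[0, 1]
    swap = np.corrcoef(mi, nj)[0, 1] + np.corrcoef(mj, ni)[0, 1]
    correct += same > swap

print("pairwise decoding accuracy:", round(correct / len(pairs), 3))
```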


Asunto(s)
Comprensión , Modelos Neurológicos , Lectura , Adulto , Encéfalo/fisiología , Femenino , Humanos , Conocimiento , Aprendizaje , Masculino , Semántica
11.
Neuroimage; 205: 116283, 2020 Jan 15.
Article in English | MEDLINE | ID: mdl-31629828

ABSTRACT

Recently, we showed that in a simple acoustic scene with one sound source, auditory cortex tracks the time-varying location of a continuously moving sound. Specifically, we found that both the delta phase and alpha power of the electroencephalogram (EEG) can be used to reconstruct the sound source azimuth. However, in natural settings, we are often presented with a mixture of multiple competing sounds and so we must focus our attention on the relevant source in order to segregate it from the competing sources, e.g., the 'cocktail party effect'. While many studies have examined this phenomenon in the context of sound envelope tracking by the cortex, it is unclear how we process and utilize spatial information in complex acoustic scenes with multiple sound sources. To test this, we created an experiment where subjects listened to two concurrent sound stimuli that were moving within the horizontal plane over headphones while we recorded their EEG. Participants were tasked with paying attention to one of the two presented stimuli. The data were analyzed by deriving linear mappings, temporal response functions (TRF), between EEG data and attended as well as unattended sound source trajectories. Next, we used these TRFs to reconstruct both trajectories from previously unseen EEG data. In a first experiment, we used noise stimuli and a task that involved spatially localizing embedded targets. Then, in a second experiment, we employed speech stimuli and a non-spatial speech comprehension task. Results showed that the trajectory of an attended sound source can be reliably reconstructed from both delta phase and alpha power of EEG even in the presence of distracting stimuli. Moreover, the reconstruction was robust to task and stimulus type. The cortical representation of the unattended source position was below detection level for the noise stimuli, but we observed weak tracking of the unattended source location for the speech stimuli by the delta phase of EEG. In addition, we demonstrated that the trajectory reconstruction method can in principle be used to decode selective attention on a single-trial basis; however, its performance was inferior to envelope-based decoders. These results suggest a possible dissociation of delta phase and alpha power of EEG in the context of sound trajectory tracking. Moreover, the demonstrated ability to localize and determine the attended speaker in complex acoustic environments is particularly relevant for cognitively controlled hearing devices.


Asunto(s)
Ritmo alfa/fisiología , Atención/fisiología , Percepción Auditiva/fisiología , Corteza Cerebral/fisiología , Ritmo Delta/fisiología , Electroencefalografía , Percepción Espacial/fisiología , Adulto , Femenino , Humanos , Masculino , Localización de Sonidos/fisiología , Percepción del Habla/fisiología , Adulto Joven
12.
Neuroimage; 210: 116558, 2020 Apr 15.
Article in English | MEDLINE | ID: mdl-31962174

ABSTRACT

Humans can easily distinguish many sounds in the environment, but speech and music are uniquely important. Previous studies, mostly using fMRI, have identified separate regions of the brain that respond selectively for speech and music. Yet there is little evidence that brain responses are larger and more temporally precise for human-specific sounds like speech and music compared to other types of sounds, as has been found for responses to species-specific sounds in other animals. We recorded EEG as healthy, adult subjects listened to various types of two-second-long natural sounds. By classifying each sound based on the EEG response, we found that speech, music, and impact sounds were classified better than other natural sounds. But unlike impact sounds, the classification accuracy for speech and music dropped for synthesized sounds that have identical frequency and modulation statistics based on a subcortical model, indicating a selectivity for higher-order features in these sounds. Lastly, the patterns in average power and phase consistency of the two-second EEG responses to each sound replicated the patterns of speech and music selectivity observed with classification accuracy. Together with the classification results, this suggests that the brain produces temporally individualized responses to speech and music sounds that are stronger than the responses to other natural sounds. In addition to highlighting the importance of speech and music for the human brain, the techniques used here could be a cost-effective, temporally precise, and efficient way to study the human brain's selectivity for speech and music in other populations.
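
Classifying which sound was heard from the EEG response is, at its core, a standard cross-validated classification problem. The sketch below shows that scaffold with simulated single-trial responses and a linear discriminant classifier; the classifier choice, feature dimensionality, and class structure are assumptions, not the study's exact classification scheme.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(8)
n_sounds, n_trials, n_features = 3, 40, 120   # e.g., speech / music / impact (placeholder)

# Placeholder data: each sound class gets a distinct mean EEG response template plus noise
templates = rng.standard_normal((n_sounds, n_features))
X = np.vstack([templates[c] + rng.standard_normal((n_trials, n_features)) * 2
               for c in range(n_sounds)])
y = np.repeat(np.arange(n_sounds), n_trials)

# Classify which sound was heard from the EEG response; above-chance accuracy implies
# temporally individualized (stimulus-specific) responses
acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5).mean()
print(f"cross-validated classification accuracy: {acc:.2f} (chance = {1 / n_sounds:.2f})")
```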


Asunto(s)
Percepción Auditiva/fisiología , Corteza Cerebral/fisiología , Electroencefalografía/métodos , Neuroimagen Funcional/métodos , Música , Adulto , Femenino , Humanos , Masculino , Percepción del Habla/fisiología , Adulto Joven
13.
Cereb Cortex; 29(6): 2396-2411, 2019 Jun 1.
Article in English | MEDLINE | ID: mdl-29771323

ABSTRACT

Deciphering how sentence meaning is represented in the brain remains a major challenge to science. Semantically related neural activity has recently been shown to arise concurrently in distributed brain regions as successive words in a sentence are read. However, what semantic content is represented by different regions, what is common across them, and how this relates to words in different grammatical positions of sentences remains poorly understood. To address these questions, we apply a semantic model of word meaning to interpret brain activation patterns elicited in sentence reading. The model is based on human ratings of 65 sensory/motor/emotional and cognitive features of experience with words (and their referents). Through a process of mapping functional Magnetic Resonance Imaging activation back into model space we test: which brain regions semantically encode content words in different grammatical positions (e.g., subject/verb/object); and what semantic features are encoded by different regions. In left temporal, inferior parietal, and inferior/superior frontal regions we detect the semantic encoding of words in all grammatical positions tested and reveal multiple common components of semantic representation. This suggests that sentence comprehension involves a common core representation of multiple words' meaning being encoded in a network of regions distributed across the brain.


Asunto(s)
Encéfalo/fisiología , Comprensión/fisiología , Modelos Neurológicos , Semántica , Percepción del Habla/fisiología , Mapeo Encefálico/métodos , Humanos , Lenguaje , Imagen por Resonancia Magnética/métodos
14.
Cereb Cortex; 29(1): 27-41, 2019 Jan 1.
Article in English | MEDLINE | ID: mdl-29136131

ABSTRACT

Amyotrophic lateral sclerosis (ALS) is a terminal progressive adult-onset neurodegeneration of the motor system. Although originally considered a pure motor degeneration, there is increasing evidence of disease heterogeneity with varying degrees of extra-motor involvement. How the combined motor and nonmotor degeneration occurs in the context of broader disruption in neural communication across brain networks has not been well characterized. Here, we have performed high-density cross-sectional and longitudinal resting-state electroencephalography (EEG) recordings on 100 ALS patients and 34 matched controls, and have identified characteristic patterns of altered EEG connectivity that have persisted in longitudinal analyses. These include strongly increased EEG coherence between parietal-frontal scalp regions (in the γ-band) and between bilateral regions over motor areas (in the θ-band). Correlation with structural MRI from the same patients shows that disease-specific structural degeneration in motor areas and corticospinal tracts parallels a decrease in neural activity over scalp motor areas, while the EEG over scalp regions associated with less extensively involved extra-motor regions on MRI exhibits significantly increased neural communication. Our findings demonstrate that EEG-based connectivity mapping can provide novel insights into progressive network decline in ALS. These data pave the way for development of validated cost-effective spectral EEG-based biomarkers that parallel changes in structural imaging.
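
Band-limited EEG coherence between two scalp channels is a standard spectral measure and is available directly in SciPy. The snippet below shows the computation on simulated signals; the sampling rate, window length, and γ-band limits are assumptions used only for illustration.

```python
import numpy as np
from scipy.signal import coherence

fs = 256
rng = np.random.default_rng(9)
t = np.arange(60 * fs) / fs

# Placeholder "parietal" and "frontal" channels sharing a 40 Hz (gamma-band) component
shared = np.sin(2 * np.pi * 40 * t)
parietal = shared + rng.standard_normal(len(t))
frontal = 0.7 * shared + rng.standard_normal(len(t))

# Magnitude-squared coherence, then averaged within an assumed gamma band of 30-47 Hz
f, cxy = coherence(parietal, frontal, fs=fs, nperseg=fs * 2)
gamma = (f >= 30) & (f <= 47)
print(f"mean gamma-band coherence: {cxy[gamma].mean():.2f}")
```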


Asunto(s)
Esclerosis Amiotrófica Lateral/diagnóstico por imagen , Corteza Cerebral/diagnóstico por imagen , Electroencefalografía/tendencias , Imagen por Resonancia Magnética/tendencias , Red Nerviosa/diagnóstico por imagen , Tractos Piramidales/diagnóstico por imagen , Adulto , Anciano , Anciano de 80 o más Años , Esclerosis Amiotrófica Lateral/fisiopatología , Corteza Cerebral/fisiopatología , Estudios de Cohortes , Electroencefalografía/métodos , Femenino , Humanos , Estudios Longitudinales , Imagen por Resonancia Magnética/métodos , Masculino , Persona de Mediana Edad , Red Nerviosa/fisiopatología , Tractos Piramidales/fisiopatología
15.
Eur J Neurosci; 50(11): 3831-3842, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31287601

ABSTRACT

Speech is central to communication among humans. Meaning is largely conveyed by the selection of linguistic units such as words, phrases and sentences. However, prosody, that is the variation of acoustic cues that tie linguistic segments together, adds another layer of meaning. There are various features underlying prosody, one of the most important being pitch and how it is modulated. Recent fMRI and ECoG studies have suggested that there are cortical regions for pitch which respond primarily to resolved harmonics and that high-gamma cortical activity encodes intonation as represented by relative pitch. Importantly, this latter result was shown to be independent of the cortical tracking of the acoustic energy of speech, a commonly used measure. Here, we investigate whether we can isolate low-frequency EEG indices of pitch processing of continuous narrative speech from those reflecting the tracking of other acoustic and phonetic features. Harmonic resolvability was found to contain unique predictive power in delta and theta phase, but it was highly correlated with the envelope and tracked even when stimuli were pitch-impoverished. As such, we are circumspect about whether its contribution is truly pitch-specific. Crucially however, we found a unique contribution of relative pitch to EEG delta-phase prediction, and this tracking was absent when subjects listened to pitch-impoverished stimuli. This finding suggests the possibility of a separate processing stream for prosody that might operate in parallel to acoustic-linguistic processing. Furthermore, it provides a novel neural index that could be useful for testing prosodic encoding in populations with speech processing deficits and for improving cognitively controlled hearing aids.


Asunto(s)
Corteza Auditiva/fisiología , Ritmo Delta/fisiología , Fonética , Percepción de la Altura Tonal/fisiología , Percepción del Habla/fisiología , Estimulación Acústica/métodos , Electroencefalografía/métodos , Femenino , Humanos , Magnetoencefalografía/métodos , Masculino
16.
Eur J Neurosci; 50(8): 3282-3295, 2019 Oct.
Article in English | MEDLINE | ID: mdl-31013361

ABSTRACT

Recent work using electroencephalography has applied stimulus reconstruction techniques to identify the attended speaker in a cocktail party environment. The success of these approaches has been primarily based on the ability to detect cortical tracking of the acoustic envelope at the scalp level. However, most studies have ignored the effects of visual input, which is almost always present in naturalistic scenarios. In this study, we investigated the effects of visual input on envelope-based cocktail party decoding in two multisensory cocktail party situations: (a) Congruent AV: facing the attended speaker while ignoring another speaker represented by the audio-only stream, and (b) Incongruent AV (eavesdropping): attending the audio-only speaker while looking at the unattended speaker. We trained and tested decoders for each condition separately and found that we can successfully decode attention to congruent audiovisual speech and can also decode attention when listeners were eavesdropping, i.e., looking at the face of the unattended talker. In addition to this, we found alpha power to be a reliable measure of attention to the visual speech. Using parieto-occipital alpha power, we found that we can distinguish whether subjects are attending or ignoring the speaker's face. Considering the practical applications of these methods, we demonstrate that with only six near-ear electrodes we can successfully determine the attended speech. This work extends the current framework for decoding attention to speech to more naturalistic scenarios, and in doing so provides additional neural measures which may be incorporated to improve decoding accuracy.
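
Envelope-based attention decoding usually boils down to reconstructing a speech envelope from the EEG and labelling as "attended" the talker whose envelope correlates more strongly with the reconstruction. The sketch below illustrates that logic under assumed placeholder data, an arbitrary ridge penalty, and six simulated channels (echoing the near-ear electrode idea); it is not the study's trained decoder.

```python
import numpy as np
from sklearn.linear_model import Ridge

fs = 64
rng = np.random.default_rng(10)
env_attended = rng.standard_normal(120 * fs)
env_unattended = rng.standard_normal(120 * fs)
# Six simulated channels driven by the attended envelope plus noise
eeg = np.outer(env_attended, rng.standard_normal(6)) + rng.standard_normal((120 * fs, 6)) * 2

half = len(eeg) // 2
decoder = Ridge(alpha=10.0).fit(eeg[:half], env_attended[:half])
recon = decoder.predict(eeg[half:])

# The talker whose envelope correlates more strongly with the reconstruction is labelled "attended"
r_att = np.corrcoef(recon, env_attended[half:])[0, 1]
r_un = np.corrcoef(recon, env_unattended[half:])[0, 1]
print("decoded attended talker:", "A" if r_att > r_un else "B",
      round(r_att, 3), round(r_un, 3))
```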


Asunto(s)
Ritmo alfa , Atención/fisiología , Encéfalo/fisiología , Percepción del Habla/fisiología , Percepción Visual/fisiología , Adolescente , Adulto , Femenino , Humanos , Masculino , Adulto Joven
17.
Hum Brain Mapp; 40(16): 4827-4842, 2019 Nov 1.
Article in English | MEDLINE | ID: mdl-31348605

ABSTRACT

Amyotrophic lateral sclerosis (ALS) is a progressive neurodegenerative disease primarily affecting motor function, with additional evidence of extensive nonmotor involvement. Despite increasing recognition of the disease as a multisystem network disorder characterised by impaired connectivity, the precise neuroelectric characteristics of impaired cortical communication remain to be fully elucidated. Here, we characterise changes in functional connectivity using beamformer source analysis on resting-state electroencephalography recordings from 74 ALS patients and 47 age-matched healthy controls. Spatiospectral characteristics of network changes in the ALS patient group were quantified by spectral power, amplitude envelope correlation (co-modulation) and imaginary coherence (synchrony). We show patterns of decreased spectral power in the occipital and temporal (δ- to β-band), lateral/orbitofrontal (δ- to θ-band) and sensorimotor (β-band) regions of the brain in patients with ALS. Furthermore, we show increased co-modulation of neural oscillations in the central and posterior (δ-, θ- and low-γ band) and frontal (δ- and low-γ band) regions, as well as decreased synchrony in the temporal and frontal (δ- to β-band) and sensorimotor (β-band) regions. Factorisation of these complex connectivity patterns reveals a distinct disruption of both motor and nonmotor networks. The observed changes in connectivity correlated with structural MRI changes, functional motor scores and cognitive scores. Characteristic patterned changes of cortical function in ALS signify widespread disease-associated network disruption, pointing to extensive dysfunction of both motor and cognitive networks. These statistically robust findings, that correlate with clinical scores, provide a strong rationale for further development as biomarkers of network disruption for future clinical trials.
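
The two connectivity measures named here, amplitude envelope correlation and imaginary coherence, can both be computed with standard signal-processing tools. The sketch below uses two simulated source time courses and assumed frequency-band limits; it illustrates the definitions only, not the full beamformer-based pipeline.

```python
import numpy as np
from scipy.signal import hilbert, csd, welch

fs = 256
rng = np.random.default_rng(11)
x = rng.standard_normal(60 * fs)
y = 0.5 * x + rng.standard_normal(60 * fs)   # two placeholder source time courses

# Amplitude envelope correlation (co-modulation): correlate Hilbert envelopes
env_x, env_y = np.abs(hilbert(x)), np.abs(hilbert(y))
aec = np.corrcoef(env_x, env_y)[0, 1]

# Imaginary coherence (synchrony): discards zero-lag coupling such as volume conduction
f, sxy = csd(x, y, fs=fs, nperseg=fs * 2)
_, sxx = welch(x, fs=fs, nperseg=fs * 2)
_, syy = welch(y, fs=fs, nperseg=fs * 2)
icoh = np.abs(np.imag(sxy)) / np.sqrt(sxx * syy)

band = (f >= 13) & (f <= 30)   # beta band as an example (assumed limits)
print(f"AEC = {aec:.2f}, mean beta-band imaginary coherence = {icoh[band].mean():.2f}")
```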


Asunto(s)
Esclerosis Amiotrófica Lateral/fisiopatología , Red Nerviosa/fisiopatología , Adulto , Anciano , Esclerosis Amiotrófica Lateral/diagnóstico por imagen , Esclerosis Amiotrófica Lateral/psicología , Ritmo beta , Mapeo Encefálico , Corteza Cerebral/diagnóstico por imagen , Corteza Cerebral/fisiopatología , Cognición , Ritmo Delta , Electroencefalografía , Femenino , Humanos , Imagen por Resonancia Magnética , Masculino , Persona de Mediana Edad , Red Nerviosa/diagnóstico por imagen , Pruebas Neuropsicológicas , Desempeño Psicomotor , Ritmo Teta
18.
Neuroimage; 181: 683-691, 2018 Nov 1.
Article in English | MEDLINE | ID: mdl-30053517

ABSTRACT

It is of increasing practical interest to be able to decode the spatial characteristics of an auditory scene from electrophysiological signals. However, the cortical representation of auditory space is not well characterized, and it is unclear how cortical activity reflects the time-varying location of a moving sound. Recently, we demonstrated that cortical response measures to discrete noise bursts can be decoded to determine their origin in space. Here we build on these findings to investigate the cortical representation of a continuously moving auditory stimulus using scalp recorded electroencephalography (EEG). In a first experiment, subjects listened to pink noise over headphones which was spectro-temporally modified to be perceived as randomly moving on a semi-circular trajectory in the horizontal plane. While subjects listened to the stimuli, we recorded their EEG using a 128-channel acquisition system. The data were analysed by 1) building a linear regression model (decoder) mapping the relationship between the stimulus location and a training set of EEG data, and 2) using the decoder to reconstruct an estimate of the time-varying sound source azimuth from the EEG data. The results showed that we can decode sound trajectory with a reconstruction accuracy significantly above chance level. Specifically, we found that the phase of delta (<2 Hz) and power of alpha (8-12 Hz) EEG track the dynamics of a moving auditory object. In a follow-up experiment, we replaced the noise with pulse train stimuli containing only interaural level and time differences (ILDs and ITDs respectively). This allowed us to investigate whether our trajectory decoding is sensitive to both acoustic cues. We found that the sound trajectory can be decoded for both ILD and ITD stimuli. Moreover, their neural signatures were similar and even allowed successful cross-cue classification. This supports the notion of integrated processing of ILD and ITD at the cortical level. These results are particularly relevant for application in devices such as cognitively controlled hearing aids and for the evaluation of virtual acoustic environments.
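
The decoding scheme described here, reconstructing a moving sound's azimuth from delta-band phase and alpha-band power, can be caricatured with a linear (ridge) decoder on band-limited EEG features. The example below uses simulated data and assumed band edges, model order, and regularization; it illustrates the feature construction and reconstruction step only.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert
from sklearn.linear_model import Ridge

fs = 128
rng = np.random.default_rng(12)
n = 120 * fs
# Slow, random placeholder trajectory standing in for the time-varying azimuth
azimuth = filtfilt(*butter(2, 0.2 / (fs / 2)), rng.standard_normal(n))
eeg = np.outer(azimuth, rng.standard_normal(16)) + rng.standard_normal((n, 16)) * 2

def bandpass(x, lo, hi):
    b, a = butter(2, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x, axis=0)

# Delta-band phase (expressed as cos/sin) and alpha-band power as decoding features
delta = hilbert(bandpass(eeg, 0.5, 2.0), axis=0)
alpha = hilbert(bandpass(eeg, 8.0, 12.0), axis=0)
features = np.hstack([np.cos(np.angle(delta)), np.sin(np.angle(delta)), np.abs(alpha)])

half = n // 2
dec = Ridge(alpha=100.0).fit(features[:half], azimuth[:half])
rec = dec.predict(features[half:])
print("trajectory reconstruction accuracy (r):",
      round(np.corrcoef(rec, azimuth[half:])[0, 1], 3))
```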


Asunto(s)
Ritmo alfa/fisiología , Corteza Cerebral/fisiología , Ritmo Delta/fisiología , Electroencefalografía/métodos , Neuroimagen Funcional/métodos , Procesamiento de Señales Asistido por Computador , Localización de Sonidos/fisiología , Adolescente , Adulto , Femenino , Humanos , Masculino , Adulto Joven
19.
Neuroimage; 166: 247-258, 2018 Feb 1.
Article in English | MEDLINE | ID: mdl-29102808

ABSTRACT

Speech perception may be underpinned by a hierarchical cortical system, which attempts to match "external" incoming sensory inputs with "internal" top-down predictions. Prior knowledge modulates internal predictions of an upcoming stimulus and exerts its effects in temporal and inferior frontal cortex. Here, we used source-space magnetoencephalography (MEG) to study the spatiotemporal dynamics underpinning the integration of prior knowledge in the speech processing network. Prior knowledge was manipulated to i) increase the perceived intelligibility of speech sentences, and ii) dissociate the perceptual effects of changes in speech intelligibility from acoustical differences in speech stimuli. Cortical entrainment to the speech temporal envelope, which accounts for neural activity specifically related to sensory information, was affected by prior knowledge: This effect emerged early (∼50 ms) in left inferior frontal gyrus (IFG) and then (∼100 ms) in Heschl's gyrus (HG), and was sustained until latencies of ∼250 ms. Directed transfer function (DTF) measures were used for estimating direct Granger causal relations between locations of interest. In line with the cortical entrainment result, this analysis indicated that prior knowledge enhanced top-down connections from left IFG to all the left temporal areas of interest - namely HG, superior temporal sulcus (STS), and middle temporal gyrus (MTG). In addition, intelligible speech increased top-down information flow between left STS and left HG, and increased bottom-up flow in higher-order temporal cortex, specifically between STS and MTG. These results are compatible with theories that explain this mechanism as a result of both ascending and descending cortical interactions, such as predictive coding. Altogether, this study provides a detailed view of how, where and when prior knowledge influences continuous speech perception.
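
The directed transfer function used here requires a fitted multivariate autoregressive model; as a much-simplified stand-in for the underlying Granger-causality logic, the sketch below asks whether adding the past of one signal reduces the error in predicting another. The signals, model order, and variance-ratio summary are illustrative assumptions, not the paper's DTF analysis.

```python
import numpy as np

rng = np.random.default_rng(13)
n, p = 5000, 5                         # samples and autoregressive model order (assumed)
x = rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):                  # y is partly driven by past x, i.e. a directed influence x -> y
    y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 1] + rng.standard_normal()

def lagged(sig, order):
    """Matrix of past values: column k holds sig delayed by k + 1 samples."""
    return np.column_stack([sig[order - k - 1: -(k + 1)] for k in range(order)])

def residual_var(target, *predictors):
    X = np.column_stack(predictors)
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    return np.var(target - X @ beta)

# Granger-style directed influence: does adding the past of x reduce the error in predicting y?
y_now = y[p:]
full = residual_var(y_now, lagged(y, p), lagged(x, p))
restricted = residual_var(y_now, lagged(y, p))
print("directed influence x -> y (log variance ratio):", round(np.log(restricted / full), 3))
```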


Asunto(s)
Comprensión/fisiología , Magnetoencefalografía/métodos , Percepción del Habla/fisiología , Lóbulo Temporal/fisiología , Adulto , Corteza Auditiva/fisiología , Femenino , Humanos , Masculino , Persona de Mediana Edad , Factores de Tiempo , Adulto Joven
20.
Neuroimage; 175: 70-79, 2018 Jul 15.
Article in English | MEDLINE | ID: mdl-29609008

ABSTRACT

Developmental dyslexia is a multifaceted disorder of learning primarily manifested by difficulties in reading, spelling, and phonological processing. Neural studies suggest that phonological difficulties may reflect impairments in fundamental cortical oscillatory mechanisms. Here we examine cortical mechanisms in children (6-12 years of age) with or without dyslexia (utilising both age- and reading-level-matched controls) using electroencephalography (EEG). EEG data were recorded as participants listened to an audio-story. Novel electrophysiological measures of phonemic processing were derived by quantifying how well the EEG responses tracked phonetic features of speech. Our results provide, for the first time, evidence for impaired low-frequency cortical tracking to phonetic features during natural speech perception in dyslexia. Atypical phonological tracking was focused on the right hemisphere, and correlated with traditional psychometric measures of phonological skills used in diagnostic dyslexia assessments. Accordingly, the novel indices developed here may provide objective metrics to investigate language development and language impairment across languages.


Asunto(s)
Dislexia/fisiopatología , Electroencefalografía/métodos , Lateralidad Funcional/fisiología , Procesamiento de Imagen Asistido por Computador/métodos , Psicolingüística , Percepción del Habla/fisiología , Niño , Femenino , Humanos , Masculino