Results 1 - 20 of 10,185
1.
PLoS Comput Biol ; 17(2): e1008155, 2021 02.
Article in English | MEDLINE | ID: mdl-33617548

ABSTRACT

Significant scientific and translational questions remain in auditory neuroscience surrounding the neural correlates of perception. Relating perceptual and neural data collected from humans can be useful; however, human-based neural data are typically limited to evoked far-field responses, which lack anatomical and physiological specificity. Laboratory-controlled preclinical animal models offer the advantage of comparing single-unit and evoked responses from the same animals. This ability provides opportunities to develop invaluable insight into proper interpretations of evoked responses, which benefits both basic-science studies of neural mechanisms and translational applications, e.g., diagnostic development. However, these comparisons have been limited by a disconnect between the types of spectrotemporal analyses used with single-unit spike trains and evoked responses, a disconnect that arises because the two response types are fundamentally different (point-process versus continuous-valued signals) even though the responses themselves are related. Here, we describe a unifying framework to study temporal coding of complex sounds that allows spike-train and evoked-response data to be analyzed and compared using the same advanced signal-processing techniques. The framework uses a set of peristimulus-time histograms computed from single-unit spike trains in response to polarity-alternating stimuli to allow advanced spectral analyses of both slow (envelope) and rapid (temporal fine structure) response components. Demonstrated benefits include: (1) novel spectrally specific temporal-coding measures that are less confounded by distortions due to hair-cell transduction, synaptic rectification, and neural stochasticity than previous metrics, e.g., the correlogram peak-height; (2) spectrally specific analyses of spike-train modulation coding (magnitude and phase), which can be directly compared with modern perceptually based models of speech intelligibility (e.g., those that depend on modulation filter banks); and (3) superior spectral resolution in analyzing the neural representation of nonstationary sounds, such as speech and music. This unifying framework significantly expands the potential of preclinical animal models to advance our understanding of the physiological correlates of perceptual deficits in real-world listening following sensorineural hearing loss.
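The envelope/fine-structure separation at the core of such polarity-alternating PSTH analyses is commonly implemented by summing and differencing the responses to the two stimulus polarities; a minimal sketch of that standard sum/difference step (an illustration only, not the authors' code; function and variable names are assumptions):

```python
import numpy as np

def env_tfs_spectra(psth_pos, psth_neg, fs):
    """Split PSTHs to opposite-polarity stimuli into envelope (ENV) and
    temporal-fine-structure (TFS) components and return their magnitude
    spectra. psth_pos, psth_neg: spike rate vs. time (spikes/s) for the
    two stimulus polarities; fs: PSTH sampling rate in Hz."""
    env = 0.5 * (psth_pos + psth_neg)  # polarity-invariant (envelope) component
    tfs = 0.5 * (psth_pos - psth_neg)  # polarity-sensitive (fine-structure) component
    freqs = np.fft.rfftfreq(len(env), d=1.0 / fs)
    env_spec = np.abs(np.fft.rfft(env - env.mean()))  # remove DC (mean firing rate)
    tfs_spec = np.abs(np.fft.rfft(tfs))
    return freqs, env_spec, tfs_spec
```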


Subjects
Auditory Perception/physiology, Auditory Evoked Potentials/physiology, Neurological Models, Acoustic Stimulation, Animals, Chinchilla/physiology, Cochlear Nerve/physiology, Computational Biology, Animal Disease Models, Sensorineural Hearing Loss/physiopathology, Sensorineural Hearing Loss/psychology, Humans, Animal Models, Nonlinear Dynamics, Psychoacoustics, Sound, Spatio-Temporal Analysis, Speech Intelligibility/physiology, Speech Perception/physiology, Translational Medical Research
2.
Nat Commun ; 12(1): 861, 2021 02 08.
Article in English | MEDLINE | ID: mdl-33558510

ABSTRACT

The success of human cooperation crucially depends on mechanisms enabling individuals to detect unreliability in their conspecifics. Yet, how such epistemic vigilance is achieved from naturalistic sensory inputs remains unclear. Here we show that listeners' perceptions of the certainty and honesty of other speakers from their speech are based on a common prosodic signature. Using a data-driven method, we separately decode the prosodic features driving listeners' perceptions of a speaker's certainty and honesty across pitch, duration and loudness. We find that these two kinds of judgments rely on a common prosodic signature that is perceived independently from individuals' conceptual knowledge and native language. Finally, we show that listeners extract this prosodic signature automatically, and that this impacts the way they memorize spoken words. These findings shed light on a unique auditory adaptation that enables human listeners to quickly detect and react to unreliability during linguistic interactions.


Subjects
Speech Perception/physiology, Speech/physiology, Choice Behavior, Decision Making, Female, Humans, Judgment, Knowledge, Language, Linguistics, Male, Short-Term Memory, Task Performance and Analysis, Young Adult
3.
PLoS One ; 16(2): e0246842, 2021.
Article in English | MEDLINE | ID: mdl-33626073

ABSTRACT

Face masks are an important tool for preventing the spread of COVID-19. However, it is unclear how different types of masks affect speech recognition in different levels of background noise. To address this, we investigated the effects of four masks (a surgical mask, N95 respirator, and two cloth masks) on recognition of spoken sentences in multi-talker babble. In low levels of background noise, masks had little to no effect, with no more than a 5.5% decrease in mean accuracy compared to a no-mask condition. In high levels of noise, mean accuracy was 2.8-18.2% lower than the no-mask condition, but the surgical mask continued to show no significant difference. The results demonstrate that different types of masks generally yield similar accuracy in low levels of background noise, but differences between masks become more apparent in high levels of noise.


Subjects
Auditory Perception/physiology, Masks, Speech Perception/physiology, Adult, COVID-19/prevention & control, COVID-19/psychology, COVID-19/transmission, Female, Humans, Language, Male, Masks/adverse effects, N95 Respirators/adverse effects, Noise, SARS-CoV-2/isolation & purification, Speech/physiology
4.
J Laryngol Otol ; 135(2): 125-129, 2021 Feb.
Article in English | MEDLINE | ID: mdl-33568241

ABSTRACT

OBJECTIVE: This study details the intra-operative complications of cochlear implantation and compares post-implantation auditory scales between profoundly deaf young children with radiologically normal inner ears (group A) and children with Mondini dysplasia (group B). METHODS: A retrospective survey was carried out of 338 patients with severe to profound sensorineural hearing loss who underwent cochlear implant surgery from February 2015 to May 2017. Patients were divided into two groups of 27 patients each. Both groups were followed up to three years post-implantation. RESULTS: Cerebrospinal fluid ooze developed in 12 patients, and 2 patients had a cerebrospinal fluid 'gusher', one of whom had to be explored within 24 hours. After one year of implant use, both groups had similar speech perception scores. CONCLUSION: The cerebrospinal fluid gusher in Mondini dysplasia should be anticipated and adequately managed intra-operatively. This study highlights the tailoring of a post-implantation rehabilitation programme according to individual needs.


Subjects
Cerebrospinal Fluid Leak/epidemiology, Cochlear Implantation/methods, Inner Ear/abnormalities, Sensorineural Hearing Loss/surgery, Intraoperative Complications/epidemiology, Adult, Case-Control Studies, Child, Cochlear Implants/adverse effects, Inner Ear/diagnostic imaging, Inner Ear/pathology, Inner Ear/surgery, Sensorineural Hearing Loss/rehabilitation, Humans, India/epidemiology, Intraoperative Complications/pathology, Male, Retrospective Studies, Speech Perception/physiology, Computed Tomography Scanners
5.
Laryngoscope ; 131(6): E2038-E2043, 2021 06.
Article in English | MEDLINE | ID: mdl-33590898

ABSTRACT

OBJECTIVES: The objectives were to characterize the effects of wearing face coverings on: 1) acoustic speech cues, and 2) speech recognition of patients with hearing loss who listen with a cochlear implant. METHODS: A prospective cohort study was performed in a tertiary referral center between July and September 2020. A female talker recorded sentences in three conditions: no face covering, N95 mask, and N95 mask plus a face shield. Spectral differences between speech produced in each condition were analyzed. Speech recognition in each condition was assessed for twenty-three adult patients with at least 6 months of cochlear implant use. RESULTS: Spectral analysis demonstrated preferential attenuation of high-frequency speech information in the N95 mask plus face shield condition compared with the other conditions. Speech recognition did not differ significantly between the uncovered (median 90% [IQR 89%-94%]) and N95 mask conditions (91% [IQR 86%-94%]; P = .253); however, speech recognition was significantly worse in the N95 mask plus face shield condition (64% [IQR 48%-75%]) compared with the uncovered (P < .001) or N95 mask (P < .001) conditions. CONCLUSIONS: The type and combination of protective face coverings used have differential effects on attenuation of speech information, influencing speech recognition of patients with hearing loss. In the face of the COVID-19 pandemic, there is a need to protect patients and clinicians from spread of disease while maximizing patient speech recognition. The disruptive effect of wearing a face shield in conjunction with a mask may prompt clinicians to consider alternative eye protection, such as goggles, in appropriate clinical situations. LEVEL OF EVIDENCE: 3 Laryngoscope, 131:E2038-E2043, 2021.
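Spectral comparisons of this kind can be approximated by contrasting long-term average spectra of the same material recorded with and without a covering; a minimal sketch (not the study's analysis pipeline; names and parameters are assumptions):

```python
import numpy as np
from scipy.signal import welch

def band_attenuation_db(uncovered, covered, fs, nperseg=4096):
    """Estimate per-frequency attenuation (dB) of a face covering by
    comparing Welch long-term average spectra of the same sentences
    recorded with and without the covering. Positive values indicate
    energy lost under the covering."""
    f, p_unc = welch(uncovered, fs=fs, nperseg=nperseg)
    _, p_cov = welch(covered, fs=fs, nperseg=nperseg)
    return f, 10 * np.log10(p_unc / p_cov)
```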


Subjects
Cochlear Implants, N95 Respirators, Perceptual Masking, Speech Perception, Adult, Cohort Studies, Cues, Female, Hearing Loss/physiopathology, Humans, Male, Perceptual Masking/physiology, Prospective Studies, Sound Spectrography, Speech Acoustics, Speech Discrimination Tests, Speech Perception/physiology
6.
Neural Netw ; 136: 17-27, 2021 Apr.
Article in English | MEDLINE | ID: mdl-33422929

ABSTRACT

The goal of monaural speech enhancement is to separate clean speech from noisy speech. Recently, many studies have employed generative adversarial networks (GANs) for monaural speech enhancement. In these approaches, the output of the generator is a speech waveform or a spectrum, such as a magnitude spectrum, a mel-spectrum or a complex-valued spectrum. The spectra generated by current time-frequency-domain speech enhancement methods usually lack details, such as consonants and harmonics with low energy. In this paper, we propose a new type of adversarial training framework for spectrum generation, named µ-law spectrum generative adversarial networks (µ-law SGAN). We introduce a trainable µ-law spectrum compression layer (USCL) into the proposed discriminator to compress the dynamic range of the spectrum, so that the compressed spectrum can display more detailed information. In addition, we use the spectrum transformed by USCL to regularize the generator's training, so that the generator pays more attention to the details of the spectrum. Experimental results on the open dataset Voice Bank + DEMAND show that µ-law SGAN is an effective generative adversarial architecture for speech enhancement. Moreover, visual spectrogram analysis suggests that µ-law SGAN pays more attention to the enhancement of low-energy harmonics and consonants.
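The µ-law companding behind such a layer follows the standard formula y = log(1 + µ|x|) / log(1 + µ); a minimal sketch of a trainable version in PyTorch, based only on the abstract's description (class and parameter names are assumptions, not the authors' implementation):

```python
import torch
import torch.nn as nn

class MuLawSpectrumCompression(nn.Module):
    """Trainable mu-law compression of a non-negative magnitude spectrum:
    y = log(1 + mu * x) / log(1 + mu), with mu learned during training."""
    def __init__(self, mu_init=255.0):
        super().__init__()
        # learn mu in log-space so it stays positive under gradient updates
        self.log_mu = nn.Parameter(torch.log(torch.tensor(mu_init)))

    def forward(self, spec):
        mu = torch.exp(self.log_mu)
        return torch.log1p(mu * spec) / torch.log1p(mu)
```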


Subjects
Deep Learning, Neural Networks (Computer), Speech Perception/physiology, Speech Recognition Software, Data Compression/methods, Humans, Speech/physiology
7.
J Acoust Soc Am ; 149(1): 142, 2021 01.
Article in English | MEDLINE | ID: mdl-33514131

ABSTRACT

The effect of face covering masks on listeners' recall of spoken sentences was investigated. Thirty-two German native listeners watched video recordings of a native speaker producing German sentences with and without a face mask, and then completed a cued-recall task. Listeners recalled significantly fewer words when the sentences had been spoken with a face mask. This might suggest that face masks increase processing demands, which in turn leaves fewer resources for encoding speech in memory. The result is also informative for policy-makers during the COVID-19 pandemic, regarding the impact of face masks on oral communication.


Subjects
COVID-19/prevention & control, Masks/trends, Mental Recall/physiology, Speech Perception/physiology, Acoustic Stimulation/methods, Adult, Female, Humans, Male, Masks/adverse effects, Photic Stimulation/methods, Young Adult
8.
Neuroimage ; 228: 117710, 2021 03.
Article in English | MEDLINE | ID: mdl-33385557

ABSTRACT

Understanding others' speech while simultaneously producing one's own utterances implies neural competition and requires specific mechanisms for its resolution, given that previous studies have proposed opposing signal dynamics for the two processes in the auditory cortex (AC). Here, we used neuroimaging in humans to investigate this neural competition by lateralized stimulation with other speech samples and ipsilateral or contralateral lateralized feedback of actively produced self speech utterances in the form of various speech vowels. In experiment 1, we show, first, that classifying others' speech during active self speech led to activity in the planum temporale (PTe) when both self and other speech samples were presented together to only the left or right ear. The contralateral PTe also responded indifferently to single self and other speech samples. Second, specific activity in the left anterior superior temporal cortex (STC) was found during dichotic stimulation (i.e., self and other speech presented to separate ears). Unlike in previous studies, this left anterior STC activity supported self speech rather than other speech processing. Furthermore, the right mid and anterior STC were more involved in other speech processing. These results point to specific mechanisms for self and other speech processing in the left and right STC, beyond more general speech processing in PTe. Third, other speech recognition in the context of listening to recorded self speech in experiment 2 led to largely symmetric activity in STC and, additionally, in inferior frontal subregions. The latter were previously reported to be generally relevant for other speech perception and classification, but we found frontal activity only when other speech classification was challenged by recorded, not by active, self speech samples. Altogether, unlike the brain networks established for uncompetitive other speech perception, active self speech during other speech perception seemingly leads to a neural reordering, functional reassignment, and unusual lateralization of AC and frontal brain activations.


Subjects
Attention/physiology, Brain/physiology, Speech Perception/physiology, Speech/physiology, Adult, Female, Humans, Magnetic Resonance Imaging/methods, Male, Neuroimaging/methods
9.
Neuroimage ; 228: 117699, 2021 03.
Article in English | MEDLINE | ID: mdl-33387631

ABSTRACT

Understanding speech in noise (SiN) is a complex task that recruits multiple cortical subsystems. Individuals vary in their ability to understand SiN in ways that cannot be explained by simple hearing profiles, which suggests that central factors may underlie this variance in SiN ability. Here, we elucidated several cortical functions involved in a SiN task and their contributions to individual variance, using both within- and across-subject approaches. Through our within-subject analysis of source-localized electroencephalography, we investigated how the acoustic signal-to-noise ratio (SNR) alters cortical evoked responses to a target word across the speech recognition areas, finding stronger responses in the left supramarginal gyrus (SMG; BA40, the dorsal lexicon area) with quieter noise. Through an individual-differences approach, we found that listeners show different neural sensitivity to the background noise and target speech, reflected in the amplitude ratio of earlier auditory-cortical responses to speech and noise, termed the internal SNR. Listeners with better internal SNR showed better SiN performance. Further, we found that post-speech-time SMG activity explains additional variance in SiN performance that is not accounted for by internal SNR. This result demonstrates that at least two cortical processes contribute to SiN performance independently: pre-target-time processing that attenuates the neural representation of background noise, and post-target-time processing that extracts information from speech sounds.
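The "internal SNR" is described as an amplitude ratio of early cortical responses to the target speech versus the background noise; one simple way such a ratio could be computed (illustrative only; the paper's exact amplitude measure may differ):

```python
import numpy as np

def internal_snr_db(resp_to_speech, resp_to_noise):
    """Ratio of evoked-response amplitudes to target speech vs. background
    noise, expressed in dB. Here 'amplitude' is taken as the RMS of each
    trial-averaged evoked waveform (an assumption for illustration)."""
    rms = lambda x: np.sqrt(np.mean(np.square(x)))
    return 20 * np.log10(rms(resp_to_speech) / rms(resp_to_noise))
```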


Subjects
Attention/physiology, Perceptual Masking/physiology, Speech Perception/physiology, Adult, Auditory Cortex, Auditory Threshold/physiology, Electroencephalography, Auditory Evoked Potentials/physiology, Female, Humans, Male, Noise, Computer-Assisted Signal Processing, Signal-to-Noise Ratio, Young Adult
10.
Neurosci Lett ; 746: 135664, 2021 02 16.
Article in English | MEDLINE | ID: mdl-33497718

ABSTRACT

Scalp-recorded frequency-following responses (FFRs) reflect a mixture of phase-locked activity across the auditory pathway. FFRs have been widely used as a neural barometer of complex listening skills, especially speech-in-noise (SIN) perception. Applying individually optimized source reconstruction to speech-FFRs recorded via EEG (FFR_EEG), we assessed the relative contributions of subcortical [auditory nerve (AN), brainstem/midbrain (BS)] and cortical [bilateral primary auditory cortex, PAC] source generators, with the aim of identifying which source(s) drive the brain-behavior relation between FFRs and SIN listening skills. We found that FFR strength declined precipitously from AN to PAC, consistent with diminishing phase-locking along the ascending auditory neuroaxis. FFRs to the speech fundamental (F0) were robust to noise across sources, but were largest in subcortical sources (BS > AN > PAC). PAC FFRs were only weakly observed above the noise floor and only at the low pitch of speech (F0 ≈ 100 Hz). Brain-behavior regressions revealed that (i) AN and BS FFRs were sufficient to describe listeners' QuickSIN scores and (ii) contrary to neuromagnetic (MEG) FFRs, neither left nor right PAC FFR_EEG related to SIN performance. Our findings suggest that subcortical sources dominate not only the electrical FFR but also the link between speech-FFRs and SIN processing in normal-hearing adults, as observed in previous EEG studies.
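FFR strength at the fundamental relative to the noise floor is conventionally quantified from the spectrum of the trial-averaged response; a minimal sketch consistent with the abstract (window choice, exclusion/flanking bandwidths, and names are assumptions):

```python
import numpy as np

def ffr_f0_snr(ffr, fs, f0=100.0, noise_bw=20.0):
    """Spectral amplitude of a trial-averaged FFR at the fundamental (f0),
    relative to the mean amplitude of flanking noise-floor bins within
    noise_bw Hz of f0 (excluding a 2 Hz band around f0 itself)."""
    spec = np.abs(np.fft.rfft(ffr * np.hanning(len(ffr))))
    freqs = np.fft.rfftfreq(len(ffr), d=1.0 / fs)
    signal = spec[np.argmin(np.abs(freqs - f0))]
    flank = (np.abs(freqs - f0) > 2) & (np.abs(freqs - f0) <= noise_bw)
    return signal / spec[flank].mean()
```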


Subjects
Acoustic Stimulation/methods, Auditory Cortex/physiology, Auditory Pathways/physiology, Hearing/physiology, Noise/adverse effects, Speech Perception/physiology, Adult, Electroencephalography/methods, Auditory Brain Stem Evoked Potentials/physiology, Female, Humans, Male, Young Adult
11.
JAMA Otolaryngol Head Neck Surg ; 147(3): 280-286, 2021 03 01.
Article in English | MEDLINE | ID: mdl-33410869

ABSTRACT

Importance: Cochlear implantation is highly effective at improving hearing outcomes, but results have been reported largely as groupwise analyses. That is, limited data are available for individual patients comparing preoperative aided speech recognition with postimplantation speech recognition. Objective: To assess changes in preoperative aided vs postoperative speech recognition scores for individual patients receiving cochlear implants, taking into account the measurement error of each speech recognition test. Design, Setting, and Participants: This cross-sectional study used a prospectively maintained database of patients who received cochlear implants between January 1, 2012, and December 31, 2017, at a tertiary, university-based referral center. Adults with bilateral sensorineural hearing loss undergoing cochlear implantation with 6- or 12-month postoperative measures using 1 or more speech recognition tests were studied. Exposures: Cochlear implantation. Main Outcomes and Measures: Postoperative word recognition (consonant-nucleus-consonant word test), sentence recognition (AzBio sentences in quiet), and sentence recognition in noise (AzBio sentences at +10-dB signal-to-noise ratio) scores, with the change in each speech recognition score from its aided preoperative value compared against that test's measurement error. Results: Analysis of data from a total of 470 implants from 323 patients, including 253 male patients (53.8%), was performed; the mean (SD) age was 61.2 (18.3) years. Most patients had statistically significant improvement beyond measurement error in all speech recognition tests postoperatively, including 262 (84.8%) for word recognition, 226 (87.6%) for sentence recognition, and 33 (78.6%) for sentence recognition in noise. A small number of patients had equivalent preoperative and postoperative scores, including 45 (14.5%) for word recognition, 28 (10.9%) for sentence recognition, and 9 (21.4%) for sentence recognition in noise. Four patients (1.6%) had significantly poorer scores in sentence recognition after implantation. The associations between age at implantation and change in speech recognition scores were -0.12 (95% CI, -0.23 to -0.01) for word recognition, -0.22 (95% CI, -0.34 to -0.10) for sentence recognition, and -0.10 (95% CI, -0.39 to 0.21) for sentence recognition in noise. Patients with no significant improvement were similarly distributed across all preoperative aided speech scores for word recognition (range, 0%-58%) and sentence recognition (range, 0%-56%) testing. Conclusions and Relevance: In this cross-sectional study, with respect to preoperative aided speech recognition, postoperative cochlear implant outcomes for individual patients were largely encouraging. However, improvements in scores for individual patients remained highly variable, which may not be adequately represented in groupwise analyses and reporting of mean scores. Presenting individual patient data from a large sample of individuals with cochlear implants provides a better understanding of individual differences in speech recognition outcomes and contributes to more complete interpretations of successful outcomes after cochlear implantation.


Subjects
Cochlear Implants, Sensorineural Hearing Loss/surgery, Hearing/physiology, Speech Perception/physiology, Cross-Sectional Studies, Female, Bilateral Hearing Loss/physiopathology, Bilateral Hearing Loss/surgery, Sensorineural Hearing Loss/physiopathology, Hearing Tests, Humans, Male, Middle Aged, Postoperative Period, Retrospective Studies, Treatment Outcome
12.
PLoS Biol ; 19(1): e3001038, 2021 01.
Article in English | MEDLINE | ID: mdl-33497384

ABSTRACT

Planning to speak is a challenge for the brain, and the challenge varies between and within languages. Yet, little is known about how neural processes react to these variable challenges beyond the planning of individual words. Here, we examine how fundamental differences in syntax shape the time course of sentence planning. Most languages treat alike (i.e., align with each other) the 2 uses of a word like "gardener" in "the gardener crouched" and in "the gardener planted trees." A minority keeps these formally distinct by adding special marking in 1 case, and some languages display both aligned and nonaligned expressions. Exploiting such a contrast in Hindi, we used electroencephalography (EEG) and eye tracking to suggest that this difference is associated with distinct patterns of neural processing and gaze behavior during early planning stages, preceding phonological word form preparation. Planning sentences with aligned expressions induces larger synchronization in the theta frequency band, suggesting higher working memory engagement, and more visual attention to agents than planning nonaligned sentences, suggesting delayed commitment to the relational details of the event. Furthermore, plain, unmarked expressions are associated with larger desynchronization in the alpha band than expressions with special markers, suggesting more engagement in information processing to keep overlapping structures distinct during planning. Our findings contrast with the observation that the form of aligned expressions is simpler, and they suggest that the global preference for alignment is driven not by its neurophysiological effect on sentence planning but by other sources, possibly by aspects of production flexibility and fluency or by sentence comprehension. This challenges current theories on how production and comprehension may affect the evolution and distribution of syntactic variants in the world's languages.


Subjects
Comprehension/physiology, Language, Nerve Net/physiology, Speech Perception/physiology, Acoustic Stimulation, Adolescent, Adult, Attention/physiology, Brain/physiology, Electroencephalography, Evoked Potentials/physiology, Female, Humans, India, Linguistics, Male, Short-Term Memory/physiology, Reaction Time, Semantics, Young Adult
13.
Neuroimage ; 227: 117586, 2021 02 15.
Article in English | MEDLINE | ID: mdl-33346131

ABSTRACT

Acquiring a new language requires individuals to simultaneously and gradually learn linguistic attributes on multiple levels. Here, we investigated how this learning process changes the neural encoding of natural speech by assessing the encoding of the linguistic feature hierarchy in second-language listeners. Electroencephalography (EEG) signals were recorded from native Mandarin speakers with varied English proficiency and from native English speakers while they listened to audio-stories in English. We measured the temporal response functions (TRFs) for acoustic, phonemic, phonotactic, and semantic features in individual participants and found a main effect of proficiency on linguistic encoding. This effect of second-language proficiency was particularly prominent on the neural encoding of phonemes, showing stronger encoding of "new" phonemic contrasts (i.e., English contrasts that do not exist in Mandarin) with increasing proficiency. Overall, we found that the nonnative listeners with higher proficiency levels had a linguistic feature representation more similar to that of native listeners, which enabled the accurate decoding of language proficiency. This result advances our understanding of the cortical processing of linguistic information in second-language learners and provides an objective measure of language proficiency.
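Temporal response functions of this kind are conventionally estimated by time-lagged ridge regression of EEG on stimulus features; a minimal single-feature, single-channel sketch (the study's actual pipeline, with multiple feature levels and cross-validated regularization, is richer than this):

```python
import numpy as np

def fit_trf(stimulus, eeg, fs, tmax=0.4, lam=1e3):
    """Estimate a temporal response function (TRF) by time-lagged ridge
    regression: eeg[t] ~ sum_k w[k] * stimulus[t - k]. stimulus: 1-D
    feature time series (e.g., acoustic envelope); eeg: 1-D channel,
    both sampled at fs Hz. Returns lags (s) and TRF weights."""
    lags = np.arange(0, int(tmax * fs) + 1)
    X = np.column_stack([np.roll(stimulus, k) for k in lags])
    for i, k in enumerate(lags):
        X[:k, i] = 0.0  # zero out samples wrapped around by np.roll
    w = np.linalg.solve(X.T @ X + lam * np.eye(len(lags)), X.T @ eeg)
    return lags / fs, w
```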


Subjects
Brain/physiology, Comprehension/physiology, Multilingualism, Speech Perception/physiology, Adolescent, Adult, Electroencephalography, Female, Humans, Language, Male, Middle Aged, Phonetics, Young Adult
14.
Neuroimage ; 228: 117670, 2021 03.
Article in English | MEDLINE | ID: mdl-33359352

ABSTRACT

Selective attention is essential for the processing of multi-speaker auditory scenes because they require the perceptual segregation of the relevant speech ("target") from irrelevant speech ("distractors"). For simple sounds, it has been suggested that the processing of multiple distractor sounds depends on bottom-up factors affecting task performance. However, it remains unclear whether such dependency applies to naturalistic multi-speaker auditory scenes. In this study, we tested the hypothesis that increased perceptual demand (the processing requirement posed by the scene to separate the target speech) reduces the cortical processing of distractor speech thus decreasing their perceptual segregation. Human participants were presented with auditory scenes including three speakers and asked to selectively attend to one speaker while their EEG was acquired. The perceptual demand of this selective listening task was varied by introducing an auditory cue (interaural time differences, ITDs) for segregating the target from the distractor speakers, while acoustic differences between the distractors were matched in ITD and loudness. We obtained a quantitative measure of the cortical segregation of distractor speakers by assessing the difference in how accurately speech-envelope following EEG responses could be predicted by models of averaged distractor speech versus models of individual distractor speech. In agreement with our hypothesis, results show that interaural segregation cues led to improved behavioral word-recognition performance and stronger cortical segregation of the distractor speakers. The neural effect was strongest in the δ-band and at early delays (0 - 200 ms). Our results indicate that during low perceptual demand, the human cortex represents individual distractor speech signals as more segregated. This suggests that, in addition to purely acoustical properties, the cortical processing of distractor speakers depends on factors like perceptual demand.
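The cortical segregation measure described here reduces to comparing how well the EEG is predicted by a model of the averaged distractor speech versus models of the individual distractor streams; a minimal sketch of that comparison step, assuming envelope predictions from TRF-style models like the one sketched under item 13 (names are illustrative):

```python
import numpy as np

def segregation_index(eeg, pred_individual, pred_averaged):
    """Difference in prediction accuracy (Pearson r) between a model using
    individual distractor envelopes and one using their average; larger
    values indicate more segregated cortical representations."""
    r = lambda a, b: np.corrcoef(a, b)[0, 1]
    return r(eeg, pred_individual) - r(eeg, pred_averaged)
```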


Subjects
Attention/physiology, Cerebral Cortex/physiology, Speech Perception/physiology, Acoustic Stimulation/methods, Adult, Electroencephalography/methods, Female, Humans, Male, Noise, Computer-Assisted Signal Processing, Young Adult
15.
Neuroimage ; 227: 117675, 2021 02 15.
Article in English | MEDLINE | ID: mdl-33359849

ABSTRACT

Speech perception can be challenging, especially for older adults. Despite the importance of speech perception in social interactions, the mechanisms underlying these difficulties remain unclear and treatment options are scarce. While several studies have suggested that decline within cortical auditory regions may be a hallmark of these difficulties, a growing number of studies have reported decline in regions beyond the auditory processing network, including regions involved in speech processing and executive control, suggesting a potentially diffuse underlying neural disruption, though no consensus exists regarding underlying dysfunctions. To address this issue, we conducted two experiments in which we investigated age differences in speech perception when background noise and talker variability are manipulated, two factors known to be detrimental to speech perception. In Experiment 1, we examined the relationship between speech perception, hearing and auditory attention in 88 healthy participants aged 19 to 87 years. In Experiment 2, we examined cortical thickness and BOLD signal using magnetic resonance imaging (MRI) and related these measures to speech perception performance using a simple mediation approach in 32 participants from Experiment 1. Our results show that, even after accounting for hearing thresholds and two measures of auditory attention, speech perception significantly declined with age. Age-related decline in speech perception in noise was associated with thinner cortex in auditory and speech processing regions (including the superior temporal cortex, ventral premotor cortex and inferior frontal gyrus) as well as in regions involved in executive control (including the dorsal anterior insula, the anterior cingulate cortex and medial frontal cortex). Further, our results show that speech perception performance was associated with reduced brain response in the right superior temporal cortex in older compared to younger adults, and to an increase in response to noise in older adults in the left anterior temporal cortex. Talker variability was not associated with different activation patterns in older compared to younger adults. Together, these results support the notion of a diffuse rather than a focal dysfunction underlying speech perception in noise difficulties in older adults.


Subjects
Aging/physiology, Attention/physiology, Brain/physiology, Speech Perception/physiology, Adult, Aged, Aged 80 and over, Female, Humans, Magnetic Resonance Imaging, Male, Middle Aged, Noise, Young Adult
16.
Neurosci Lett ; 740: 135430, 2021 01 01.
Article in English | MEDLINE | ID: mdl-33075423

ABSTRACT

Cognitive decline is evident in the elderly, and it affects speech perception and foreign language learning. Listen-and-repeat training with a challenging speech sound contrast was previously found to be effective in young monolingual adults, and even in advanced L2 university students, at the attentive and pre-attentive levels. This study investigates foreign language speech perception in the elderly using the same protocol as with the young adults. Training effects were measured with attentive behavioural measures (N = 9) and with electroencephalography measuring the pre-attentive mismatch negativity (MMN) response (N = 10). Training was effective in identification but not in discrimination, and there were no changes in the MMN. The most attention-demanding perceptual functions, which benefit from experience-based linguistic knowledge, were facilitated through training, whereas pre-attentive processing was unaffected. The elderly would probably benefit from different training types than younger adults.


Subjects
Aging/psychology, Phonetics, Speech Perception/physiology, Aged, Attention, Cognitive Dysfunction/psychology, Discrimination (Psychology), Electroencephalography, Female, Humans, Language, Learning, Male, Middle Aged, Multilingualism, Psychomotor Performance/physiology, Reaction Time, Young Adult
17.
Medicine (Baltimore) ; 99(51): e23658, 2020 Dec 18.
Article in English | MEDLINE | ID: mdl-33371101

ABSTRACT

OBJECTIVES: This study aimed to clarify the neural correlates and underlying mechanisms of the subject's own name (SON) and a unique name derived from the SON (SDN). METHODS: The name most familiar to the subject (SFN) was added as a self-related reference. We used 4 auditory stimuli (a 1000 Hz pure tone, SON, SDN, and SFN) to evaluate the corresponding activated brain areas in 19 healthy subjects using functional magnetic resonance imaging. RESULTS: Our results demonstrated that the pure tone activated the fewest brain regions. Although SFN was a very strong self-related stimulus, it failed to activate many midline structures and was less self-related than SDN. The brain regions activated by SON and SDN were very similar. Moreover, the additional activation of the fusiform gyrus and parahippocampal gyrus by SDN may reveal its processing pathway. CONCLUSIONS: SDN, which we created, is a new self-related stimulus similar to SON. SON and SDN might provide a useful reference for consciousness assessment.


Subjects
Brain/physiology, Names, Speech Perception/physiology, Adult, Brain/diagnostic imaging, Female, Healthy Volunteers, Humans, Magnetic Resonance Imaging, Male, Middle Aged, Unconsciousness, Young Adult
18.
PLoS One ; 15(12): e0243436, 2020.
Article in English | MEDLINE | ID: mdl-33332419

ABSTRACT

High frequency words play a key role in language acquisition, with recent work suggesting they may serve both speech segmentation and lexical categorisation. However, it is not yet known whether infants can detect novel high frequency words in continuous speech, nor whether they can use them to help learning for segmentation and categorisation at the same time. For instance, when hearing "you eat the biscuit", can children use the high-frequency words "you" and "the" to segment out "eat" and "biscuit", and determine their respective lexical categories? We tested this in two experiments. In Experiment 1, we familiarised 12-month-old infants with continuous artificial speech comprising repetitions of target words, which were preceded by high-frequency marker words that distinguished the targets into two distributional categories. In Experiment 2, we repeated the task using the same language but with additional phonological cues to word and category structure. In both studies, we measured learning with head-turn preference tests of segmentation and categorisation, and compared performance against a control group that heard the artificial speech without the marker words (i.e., just the targets). There was no evidence that high frequency words helped either speech segmentation or grammatical categorisation. However, segmentation was seen to improve when the distributional information was supplemented with phonological cues (Experiment 2). In both experiments, exploratory analysis indicated that infants' looking behaviour was related to their linguistic maturity (indexed by infants' vocabulary scores) with infants with high versus low vocabulary scores displaying novelty and familiarity preferences, respectively. We propose that high-frequency words must reach a critical threshold of familiarity before they can be of significant benefit to learning.


Subjects
Infant Behavior/physiology, Learning/physiology, Speech Perception/physiology, Speech/physiology, Female, Humans, Infant, Language, Male, Phonetics, Vocabulary
19.
Elife ; 9, 2020 12 21.
Article in English | MEDLINE | ID: mdl-33345775

ABSTRACT

Speech contains rich acoustic and linguistic information. Using highly controlled speech materials, previous studies have demonstrated that cortical activity is synchronous to the rhythms of perceived linguistic units, for example, words and phrases, on top of basic acoustic features, for example, the speech envelope. When listening to natural speech, it remains unclear, however, how cortical activity jointly encodes acoustic and linguistic information. Here we investigate the neural encoding of words using electroencephalography and observe neural activity synchronous to multi-syllabic words when participants naturally listen to narratives. An amplitude modulation (AM) cue for word rhythm enhances the word-level response, but the effect is only observed during passive listening. Furthermore, words and the AM cue are encoded by spatially separable neural responses that are differentially modulated by attention. These results suggest that bottom-up acoustic cues and top-down linguistic knowledge separately contribute to cortical encoding of linguistic units in spoken narratives.


Subjects
Cerebral Cortex/physiology, Comprehension/physiology, Speech Perception/physiology, Acoustic Stimulation, Adult, Electroencephalography, Female, Humans, Language, Male, Young Adult