Results 1 - 20 of 11,343
1.
Trends Hear ; 28: 23312165241253653, 2024.
Article in English | MEDLINE | ID: mdl-38715401

ABSTRACT

This study aimed to preliminarily investigate the associations between performance on the integrated Digit-in-Noise Test (iDIN) and performance on measures of general cognition and working memory (WM). The study recruited 81 older adult hearing aid users between 60 and 95 years of age with bilateral moderate to severe hearing loss. The Chinese version of the Montreal Cognitive Assessment Basic (MoCA-BC) was used to screen participants for mild cognitive impairment. Speech reception thresholds (SRTs) were measured using 2- to 5-digit sequences of the Mandarin iDIN. The differences in SRT between five-digit and two-digit sequences (SRT5-2), and between five-digit and three-digit sequences (SRT5-3), were used as indicators of memory performance. The results were compared to those from the Digit Span Test and the Corsi Block-Tapping Test, which evaluate WM and attention capacity. SRT5-2 and SRT5-3 demonstrated significant correlations with all three cognitive function tests (rs ranging from -.705 to -.528). Furthermore, SRT5-2 and SRT5-3 were significantly higher in participants who failed the MoCA-BC screening than in those who passed. The findings show associations between performance on the iDIN and performance on memory tests. However, further validation and exploration are needed to fully establish the iDIN's effectiveness.
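As a rough illustration of the analysis described above, the sketch below computes digit-sequence SRT difference scores and a Spearman rank correlation in plain Python. All data values and the five-listener setup are hypothetical; this is not the study's code.

```python
# Illustrative sketch (not the study's code): compute SRT difference
# scores (e.g. SRT5-2 = SRT for 5-digit minus SRT for 2-digit sequences)
# and their rank correlation with cognitive scores.

def ranks(xs):
    """Average ranks (1-based), with ties sharing their mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    out = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        mean_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            out[order[k]] = mean_rank
        i = j + 1
    return out

def spearman(x, y):
    """Spearman correlation = Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

# Hypothetical SRTs in dB SNR for five listeners (higher = worse).
srt2 = [-10.0, -9.5, -8.0, -7.2, -6.0]   # 2-digit sequences
srt5 = [-4.0, -2.5, -1.0, 1.2, 3.0]      # 5-digit sequences
moca = [28, 26, 24, 22, 19]              # hypothetical MoCA-BC scores

srt5_2 = [a - b for a, b in zip(srt5, srt2)]  # memory-load difference score
print("SRT5-2:", srt5_2, "rho:", round(spearman(srt5_2, moca), 3))
```

A negative rho here mirrors the direction reported in the abstract: larger SRT difference scores (worse memory-loaded performance) go with lower cognitive scores.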


Subjects
Cognition; Cognitive Dysfunction; Hearing Aids; Memory, Short-Term; Humans; Aged; Female; Male; Middle Aged; Aged, 80 and over; Memory, Short-Term/physiology; Cognitive Dysfunction/diagnosis; Noise/adverse effects; Speech Perception/physiology; Speech Reception Threshold Test; Age Factors; Persons With Hearing Impairments/psychology; Persons With Hearing Impairments/rehabilitation; Hearing Loss/rehabilitation; Hearing Loss/diagnosis; Hearing Loss/psychology; Mental Status and Dementia Tests; Memory; Acoustic Stimulation; Predictive Value of Tests; Correction of Hearing Impairment/instrumentation; Auditory Threshold
2.
Otol Neurotol ; 45(5): e381-e384, 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38728553

ABSTRACT

OBJECTIVE: To examine patient preference after stapedotomy versus cochlear implantation in a unique case of a patient with symmetrical profound mixed hearing loss and similar postoperative speech perception improvement. PATIENTS: An adult patient with bilateral symmetrical far advanced otosclerosis and profound mixed hearing loss. INTERVENTION: Stapedotomy in the left ear, cochlear implantation in the right ear. MAIN OUTCOME MEASURE: Performance on behavioral audiometry and subjective report of hearing and intervention preference. RESULTS: The patient successfully underwent left stapedotomy and subsequent cochlear implantation on the right side, per patient preference. Preoperative audiometric characteristics were similar between ears (pure-tone average [PTA] [R: 114; L: 113 dB]; word recognition score [WRS]: 22%). Postprocedural audiometry demonstrated significant improvement after stapedotomy (PTA: 59 dB, WRS: 75%) and after cochlear implantation (PTA: 20 dB, WRS: 60%). The patient subjectively reported a preference for the cochlear implant ear despite substantial gains from stapedotomy. We discuss potentially overlooked benefits of cochlear implants in far advanced otosclerosis. CONCLUSION: Compared with stapedotomy and hearing aids, cochlear implantation generally permits greater access to sound among patients with far advanced otosclerosis. Though the cochlear implant literature focuses mainly on speech perception outcomes, an underappreciated benefit of cochlear implantation is the high likelihood of achieving "normal" sound levels across the audiogram.


Subjects
Cochlear Implantation; Otosclerosis; Speech Perception; Stapes Surgery; Humans; Otosclerosis/surgery; Stapes Surgery/methods; Cochlear Implantation/methods; Speech Perception/physiology; Treatment Outcome; Male; Middle Aged; Hearing Loss, Mixed Conductive-Sensorineural/surgery; Audiometry, Pure-Tone; Patient Preference; Female; Adult
3.
Curr Biol ; 34(9): R348-R351, 2024 May 06.
Article in English | MEDLINE | ID: mdl-38714162

ABSTRACT

A recent study has used scalp-recorded electroencephalography to obtain evidence of semantic processing of human speech and objects by domesticated dogs. The results suggest that dogs do comprehend the meaning of familiar spoken words, in that a word can evoke the mental representation of the object to which it refers.


Subjects
Cognition; Semantics; Animals; Dogs/psychology; Cognition/physiology; Humans; Electroencephalography; Speech/physiology; Speech Perception/physiology; Comprehension/physiology
4.
Multisens Res ; 37(2): 125-141, 2024 Apr 03.
Article in English | MEDLINE | ID: mdl-38714314

ABSTRACT

Trust is critical to human social interaction, and research has identified many cues that contribute to judgments of this social trait. Two of these cues are the pitch of the voice and the facial width-to-height ratio (fWHR). Additionally, research has indicated that the content of a spoken sentence itself affects trustworthiness, a finding that has not yet been brought into multisensory research. The current research aims to investigate previously developed theories on trust in relation to vocal pitch, fWHR, and sentence content in a multimodal setting. Twenty-six female participants were asked to judge the trustworthiness of a voice speaking a neutral or romantic sentence while seeing a face. The average pitch of the voice and the fWHR were varied systematically. Results indicate that the content of the spoken message was an important predictor of trustworthiness, extending into multimodality. Further, the mean pitch of the voice and the fWHR of the face appeared to be useful indicators in a multimodal setting, and these effects interacted with one another across modalities. The data demonstrate that trust in the voice is shaped by task-irrelevant visual stimuli. Future research is encouraged to clarify whether these findings remain consistent across genders, age groups, and languages.


Subjects
Face; Trust; Voice; Humans; Female; Voice/physiology; Young Adult; Adult; Face/physiology; Speech Perception/physiology; Pitch Perception/physiology; Facial Recognition/physiology; Cues; Adolescent
5.
Cogn Res Princ Implic ; 9(1): 29, 2024 05 12.
Article in English | MEDLINE | ID: mdl-38735013

ABSTRACT

Auditory stimuli that are relevant to a listener have the potential to capture focal attention even when unattended, the listener's own name being a particularly effective stimulus. We report two experiments to test the attention-capturing potential of the listener's own name in normal speech and time-compressed speech. In Experiment 1, 39 participants were tested with a visual word categorization task with uncompressed spoken names as background auditory distractors. Participants' word categorization performance was slower when hearing their own name rather than other names, and in a final test, they were faster at detecting their own name than other names. Experiment 2 used the same task paradigm, but the auditory distractors were time-compressed names. Three compression levels were tested with 25 participants in each condition. Participants' word categorization performance was again slower when hearing their own name than when hearing other names; the slowing was strongest with slight compression and weakest with intense compression. Personally relevant time-compressed speech has the potential to capture attention, but the degree of capture depends on the level of compression. Attention capture by time-compressed speech has practical significance and provides partial evidence for the duplex-mechanism account of auditory distraction.


Subjects
Attention; Names; Speech Perception; Humans; Attention/physiology; Female; Male; Speech Perception/physiology; Adult; Young Adult; Speech/physiology; Reaction Time/physiology; Acoustic Stimulation
6.
Trends Hear ; 28: 23312165241246596, 2024.
Article in English | MEDLINE | ID: mdl-38738341

ABSTRACT

The auditory brainstem response (ABR) is a valuable clinical tool for objective hearing assessment, which is conventionally detected by averaging neural responses to thousands of short stimuli. Progressing beyond these unnatural stimuli, brainstem responses to continuous speech presented via earphones have recently been detected using linear temporal response functions (TRFs). Here, we extend earlier studies by measuring subcortical responses to continuous speech presented in the sound-field, and assess the amount of data needed to estimate brainstem TRFs. Electroencephalography (EEG) was recorded from 24 normal hearing participants while they listened to clicks and stories presented via earphones and loudspeakers. Subcortical TRFs were computed after accounting for non-linear processing in the auditory periphery by either stimulus rectification or an auditory nerve model. Our results demonstrated that subcortical responses to continuous speech could be reliably measured in the sound-field. TRFs estimated using auditory nerve models outperformed simple rectification, and 16 minutes of data was sufficient for the TRFs of all participants to show clear wave V peaks for both earphone and sound-field stimuli. Subcortical TRFs to continuous speech were highly consistent in both earphone and sound-field conditions, and with click ABRs. However, sound-field TRFs required slightly more data (16 minutes) to achieve clear wave V peaks compared to earphone TRFs (12 minutes), possibly due to effects of room acoustics. By investigating subcortical responses to sound-field speech stimuli, this study lays the groundwork for bringing objective hearing assessment closer to real-life conditions, which may lead to improved hearing evaluations and smart hearing technologies.
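The core idea behind TRF estimation, regularized regression on a matrix of time-lagged copies of the stimulus, can be sketched as follows. This is a deliberately minimal reduction with simulated data; the study's actual pipeline (auditory-nerve front end, EEG preprocessing, cross-validation) is far more involved.

```python
# Minimal TRF sketch: ridge regression on a lagged stimulus matrix,
# recovering a known simulated response function from noisy "EEG".
import numpy as np

def estimate_trf(stimulus, eeg, n_lags, lam=1e-2):
    """Return weights w such that eeg[t] ~= sum_k w[k] * stimulus[t-k]."""
    T = len(stimulus)
    X = np.zeros((T, n_lags))
    for k in range(n_lags):
        X[k:, k] = stimulus[:T - k]   # column k = stimulus delayed by k samples
    # Ridge solution: (X'X + lam*I)^-1 X'y
    return np.linalg.solve(X.T @ X + lam * np.eye(n_lags), X.T @ eeg)

rng = np.random.default_rng(0)
true_trf = np.array([0.0, 1.0, 0.5, -0.3, 0.0])   # simulated ground truth
stim = rng.standard_normal(5000)
eeg = np.convolve(stim, true_trf)[:5000] + 0.1 * rng.standard_normal(5000)

w = estimate_trf(stim, eeg, n_lags=5)
print(np.round(w, 2))   # should approximate true_trf
```

With enough data relative to the noise level, the ridge estimate converges on the simulated response function, which is the same logic the study applies at a much larger scale.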


Subjects
Acoustic Stimulation; Electroencephalography; Evoked Potentials, Auditory, Brain Stem; Speech Perception; Humans; Evoked Potentials, Auditory, Brain Stem/physiology; Male; Female; Speech Perception/physiology; Acoustic Stimulation/methods; Adult; Young Adult; Auditory Threshold/physiology; Time Factors; Cochlear Nerve/physiology; Healthy Volunteers
7.
Trends Hear ; 28: 23312165241239541, 2024.
Article in English | MEDLINE | ID: mdl-38738337

ABSTRACT

Cochlear synaptopathy, a form of cochlear deafferentation, has been demonstrated in a number of animal species, including non-human primates. Both age and noise exposure contribute to synaptopathy in animal models, indicating that it may be a common type of auditory dysfunction in humans. Temporal bone and auditory physiological data suggest that age and occupational/military noise exposure also lead to synaptopathy in humans. The predicted perceptual consequences of synaptopathy include tinnitus, hyperacusis, and difficulty with speech-in-noise perception. However, confirming the perceptual impacts of this form of cochlear deafferentation presents a particular challenge because synaptopathy can only be confirmed through post-mortem temporal bone analysis and auditory perception is difficult to evaluate in animals. Animal data suggest that deafferentation leads to increased central gain, signs of tinnitus and abnormal loudness perception, and deficits in temporal processing and signal-in-noise detection. If equivalent changes occur in humans following deafferentation, this would be expected to increase the likelihood of developing tinnitus, hyperacusis, and difficulty with speech-in-noise perception. Physiological data from humans is consistent with the hypothesis that deafferentation is associated with increased central gain and a greater likelihood of tinnitus perception, while human data on the relationship between deafferentation and hyperacusis is extremely limited. Many human studies have investigated the relationship between physiological correlates of deafferentation and difficulty with speech-in-noise perception, with mixed findings. A non-linear relationship between deafferentation and speech perception may have contributed to the mixed results. When differences in sample characteristics and study measurements are considered, the findings may be more consistent.


Subjects
Cochlea; Speech Perception; Tinnitus; Humans; Cochlea/physiopathology; Tinnitus/physiopathology; Tinnitus/diagnosis; Animals; Speech Perception/physiology; Hyperacusis/physiopathology; Noise/adverse effects; Auditory Perception/physiology; Synapses/physiology; Hearing Loss, Noise-Induced/physiopathology; Hearing Loss, Noise-Induced/diagnosis; Loudness Perception
8.
JASA Express Lett ; 4(5)2024 May 01.
Article in English | MEDLINE | ID: mdl-38717468

ABSTRACT

This study evaluated whether adaptive training with time-compressed speech produces an age-dependent improvement in speech recognition in 14 adult cochlear-implant users. The protocol consisted of a pretest, 5 h of training, and a posttest using time-compressed speech and an adaptive procedure. There were significant improvements in time-compressed speech recognition at the posttest session following training (>5% in the average time-compressed speech recognition threshold) but no effects of age. These results are promising for the use of adaptive training in aural rehabilitation strategies for cochlear-implant users across the adult lifespan and possibly using speech signals, such as time-compressed speech, to train temporal processing.


Subjects
Cochlear Implants; Speech Perception; Humans; Speech Perception/physiology; Aged; Male; Middle Aged; Female; Adult; Aged, 80 and over; Cochlear Implantation/methods; Time Factors
9.
Nat Commun ; 15(1): 3692, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38693186

ABSTRACT

Over the last decades, cognitive neuroscience has identified a distributed set of brain regions that are critical for attention. Strong anatomical overlap with brain regions critical for oculomotor processes suggests a joint network for attention and eye movements. However, the role of this shared network in complex, naturalistic environments remains understudied. Here, we investigated eye movements in relation to (un)attended sentences of natural speech. Combining simultaneously recorded eye tracking and magnetoencephalographic data with temporal response functions, we show that gaze tracks attended speech, a phenomenon we termed ocular speech tracking. Ocular speech tracking even differentiates a target from a distractor in a multi-speaker context and is further related to intelligibility. Moreover, we provide evidence for its contribution to neural differences in speech processing, emphasizing the necessity to consider oculomotor activity in future research and in the interpretation of neural differences in auditory cognition.


Subjects
Attention; Eye Movements; Magnetoencephalography; Speech Perception; Speech; Humans; Attention/physiology; Eye Movements/physiology; Male; Female; Adult; Young Adult; Speech Perception/physiology; Speech/physiology; Acoustic Stimulation; Brain/physiology; Eye-Tracking Technology
10.
Cereb Cortex ; 34(5)2024 May 02.
Article in English | MEDLINE | ID: mdl-38715408

ABSTRACT

Speech comprehension in noise depends on complex interactions between peripheral sensory and central cognitive systems. Despite having normal peripheral hearing, older adults show difficulties in speech comprehension. It remains unclear whether the brain's neural responses during listening could serve as indicators of aging. The current study examined whether individual brain activation during speech perception in different listening environments could predict age. We applied functional near-infrared spectroscopy to 93 normal-hearing human adults (20 to 70 years old) during a sentence listening task, which contained a quiet condition and four noisy conditions with different signal-to-noise ratios (SNR = 10, 5, 0, -5 dB). A data-driven approach, region-based brain-age predictive modeling, was adopted. We observed a significant behavioral decrease with age under the four noisy conditions, but not under the quiet condition. Brain activation in the SNR = 10 dB listening condition successfully predicted individual age. Moreover, we found that the bilateral visual sensory cortex, left dorsal speech pathway, left cerebellum, right temporal-parietal junction area, right homolog of Wernicke's area, and right middle temporal gyrus contributed most to prediction performance. These results demonstrate that activation of regions involved in sensory-motor mapping of sound, especially in noisy conditions, could be a more sensitive measure for age prediction than external behavioral measures.
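A minimal sketch of the brain-age idea, assuming nothing about the study's actual features or model: regress chronological age on simulated per-region activation features with ridge regression, and evaluate on held-out subjects.

```python
# Toy brain-age predictive model (illustrative only; features, sample
# sizes, and the linear model are assumptions, not the study's pipeline).
import numpy as np

def ridge_fit(X, y, lam=1.0):
    """Fit ridge regression with an intercept column appended to X."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    A = Xb.T @ Xb + lam * np.eye(Xb.shape[1])
    return np.linalg.solve(A, Xb.T @ y)

def ridge_predict(w, X):
    return np.hstack([X, np.ones((len(X), 1))]) @ w

rng = np.random.default_rng(1)
n, p = 93, 10                       # 93 subjects, 10 hypothetical regions
X = rng.standard_normal((n, p))     # simulated per-region activations
age = 45 + X @ rng.uniform(-5, 5, p) + 3 * rng.standard_normal(n)

w = ridge_fit(X[:70], age[:70])     # train on 70 subjects
pred = ridge_predict(w, X[70:])     # predict the remaining 23
mae = np.mean(np.abs(pred - age[70:]))
print(f"held-out MAE: {mae:.1f} years")
```

The held-out mean absolute error is the usual headline metric for brain-age models; regions whose coefficients carry the most weight play the role of the contributing areas listed in the abstract.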


Subjects
Aging; Brain; Comprehension; Noise; Spectroscopy, Near-Infrared; Speech Perception; Humans; Adult; Speech Perception/physiology; Male; Female; Spectroscopy, Near-Infrared/methods; Middle Aged; Young Adult; Aged; Comprehension/physiology; Brain/physiology; Brain/diagnostic imaging; Aging/physiology; Brain Mapping/methods; Acoustic Stimulation/methods
11.
J Acoust Soc Am ; 155(5): 2934-2947, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38717201

ABSTRACT

Spatial separation and fundamental frequency (F0) separation are effective cues for improving the intelligibility of target speech in multi-talker scenarios. Previous studies predominantly focused on spatial configurations within the frontal hemifield, overlooking the ipsilateral side and the entire median plane, where localization confusion often occurs. This study investigated the impact of spatial and F0 separation on intelligibility under the above-mentioned underexplored spatial configurations. The speech reception thresholds were measured through three experiments for scenarios involving two to four talkers, either in the ipsilateral horizontal plane or in the entire median plane, utilizing monotonized speech with varying F0s as stimuli. The results revealed that spatial separation in symmetrical positions (front-back symmetry in the ipsilateral horizontal plane or front-back, up-down symmetry in the median plane) contributes positively to intelligibility. Both target direction and relative target-masker separation influence the masking release attributed to spatial separation. As the number of talkers exceeds two, the masking release from spatial separation diminishes. Nevertheless, F0 separation remains as a remarkably effective cue and could even facilitate spatial separation in improving intelligibility. Further analysis indicated that current intelligibility models encounter difficulties in accurately predicting intelligibility in scenarios explored in this study.


Subjects
Cues; Perceptual Masking; Sound Localization; Speech Intelligibility; Speech Perception; Humans; Female; Male; Young Adult; Adult; Speech Perception/physiology; Acoustic Stimulation; Auditory Threshold; Speech Acoustics; Speech Reception Threshold Test; Noise
12.
J Acoust Soc Am ; 155(5): 2990-3004, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38717206

ABSTRACT

Speakers can place their prosodic prominence on any locations within a sentence, generating focus prosody for listeners to perceive new information. This study aimed to investigate age-related changes in the bottom-up processing of focus perception in Jianghuai Mandarin by clarifying the perceptual cues and the auditory processing abilities involved in the identification of focus locations. Young, middle-aged, and older speakers of Jianghuai Mandarin completed a focus identification task and an auditory perception task. The results showed that increasing age led to a decrease in listeners' accuracy rate in identifying focus locations, with all participants performing the worst when dynamic pitch cues were inaccessible. Auditory processing abilities did not predict focus perception performance in young and middle-aged listeners but accounted significantly for the variance in older adults' performance. These findings suggest that age-related deteriorations in focus perception can be largely attributed to declined auditory processing of perceptual cues. Poor ability to extract frequency modulation cues may be the most important underlying psychoacoustic factor for older adults' difficulties in perceiving focus prosody in Jianghuai Mandarin. The results contribute to our understanding of the bottom-up mechanisms involved in linguistic prosody processing in aging adults, particularly in tonal languages.


Subjects
Aging; Cues; Speech Perception; Humans; Middle Aged; Aged; Male; Female; Aging/psychology; Aging/physiology; Young Adult; Adult; Speech Perception/physiology; Age Factors; Speech Acoustics; Acoustic Stimulation; Pitch Perception; Language; Voice Quality; Psychoacoustics; Audiometry, Speech
13.
JASA Express Lett ; 4(5)2024 May 01.
Article in English | MEDLINE | ID: mdl-38717469

ABSTRACT

The perceptual boundary between short and long categories depends on speech rate. We investigated the influence of speech rate on perceptual boundaries for short and long vowel and consonant contrasts by Spanish-English bilingual listeners and English monolinguals. Listeners tended to adapt their perceptual boundaries to speech rates, but the strategy differed between groups, especially for consonants. Understanding the factors that influence auditory processing in this population is essential for developing appropriate assessments of auditory comprehension. These findings have implications for the clinical care of older populations whose ability to rely on spectral and/or temporal information in the auditory signal may decline.


Subjects
Multilingualism; Speech Perception; Humans; Speech Perception/physiology; Female; Male; Adult; Phonetics; Young Adult
14.
Cereb Cortex ; 34(13): 84-93, 2024 May 02.
Article in English | MEDLINE | ID: mdl-38696598

ABSTRACT

Multimodal integration is crucial for human interaction, in particular for social communication, which relies on integrating information from various sensory modalities. Recently a third visual pathway specialized in social perception was proposed, which includes the right superior temporal sulcus (STS) playing a key role in processing socially relevant cues and high-level social perception. Importantly, it has also recently been proposed that the left STS contributes to audiovisual integration of speech processing. In this article, we propose that brain areas along the right STS that support multimodal integration for social perception and cognition can be considered homologs to those in the left, language-dominant hemisphere, sustaining multimodal integration of speech and semantic concepts fundamental for social communication. Emphasizing the significance of the left STS in multimodal integration and associated processes such as multimodal attention to socially relevant stimuli, we underscore its potential relevance in comprehending neurodevelopmental conditions characterized by challenges in social communication such as autism spectrum disorder (ASD). Further research into this left lateral processing stream holds the promise of enhancing our understanding of social communication in both typical development and ASD, which may lead to more effective interventions that could improve the quality of life for individuals with atypical neurodevelopment.


Subjects
Social Cognition; Speech Perception; Temporal Lobe; Humans; Temporal Lobe/physiology; Temporal Lobe/physiopathology; Speech Perception/physiology; Social Perception; Autistic Disorder/physiopathology; Autistic Disorder/psychology; Functional Laterality/physiology
15.
Int J Pediatr Otorhinolaryngol ; 180: 111968, 2024 May.
Article in English | MEDLINE | ID: mdl-38714045

ABSTRACT

AIM & OBJECTIVES: The study aimed to compare P1 latency and P1-N1 amplitude with receptive and expressive language ages in children using a cochlear implant (CI) in one ear and a hearing aid (HA) in the non-implanted ear. METHODS: The study included 30 children, 18 males and 12 females, aged between 48 and 96 months. The age at which the children received the CI ranged from 42 to 69 months. A within-subject research design was utilized, and participants were selected through purposive sampling. Auditory late latency responses (ALLR) were assessed using the Intelligent Hearing Systems platform to measure P1 latency and P1-N1 amplitude. The Assessment Checklist for Speech-Language Skills (ACSLS) was employed to evaluate receptive and expressive language age. Both assessments were conducted after cochlear implantation. RESULTS: A total of 30 children participated in the study, with a mean implant age of 20.03 months (SD: 8.14 months). The mean P1 latency and P1-N1 amplitude were 129.50 ms (SD: 15.05 ms) and 6.93 µV (SD: 2.24 µV), respectively. Correlation analysis revealed no significant association between ALLR measures and receptive or expressive language ages. However, there was a significant negative correlation between P1 latency and implant age (Spearman's rho = -0.371, p = 0.043). CONCLUSIONS: The study suggests that P1 latency, which is indicative of auditory maturation, may not be a reliable marker for predicting language outcomes. It can be concluded that language development is likely influenced by factors beyond auditory maturation alone.


Subjects
Cochlear Implants; Language Development; Humans; Male; Female; Child, Preschool; Child; Cochlear Implantation/methods; Reaction Time/physiology; Deafness/surgery; Deafness/rehabilitation; Evoked Potentials, Auditory/physiology; Age Factors; Speech Perception/physiology
16.
Otol Neurotol ; 45(5): e393-e399, 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38573598

ABSTRACT

HYPOTHESIS: Preimplantation word scores cannot reliably predict postimplantation outcomes. BACKGROUND: To date, there is no model based on preoperative data that can reliably predict the postoperative outcomes of cochlear implantation in the postlingually deafened adult patient. METHODS: In a group of 228 patients who received a cochlear implant between 2002 and 2021, we tested the predictive power of nine variables (age, etiology, sex, laterality of implantation, preimplantation thresholds and word scores, as well as the design, insertion approach, and angular insertion depth of the electrode array) on postimplantation outcomes. Results of multivariable linear regression analyses were then interpreted in light of data obtained from histopathological analyses of human temporal bones. RESULTS: Age and etiology were the only significant predictors of postimplantation outcomes. In agreement with many investigations, preimplantation word scores failed to significantly predict postimplantation outcomes. Analysis of temporal bone histopathology suggests that neuronal survival must fall below 40% before word scores in quiet begin to drop. Scores fall steeply with further neurodegeneration, such that only 20% survival can support acoustically driven word scores of 50%. Because almost all cochlear implant recipients have at least 20% of their spiral ganglion neurons (SGNs) surviving, most cochlear implant users should, on average, improve to a word recognition score of at least 50%, as we observed, even if their preimplantation score was near zero as a result of widespread hair cell damage and the fact that ~50% of their SGNs have likely lost their peripheral axons. These "disconnected" SGNs would not contribute to acoustic hearing but likely remain electrically excitable.
CONCLUSION: The relationship between preimplantation word scores and data describing the survival of SGNs in humans can explain why preimplantation word scores obtained in unaided conditions fail to predict postimplantation outcomes.
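The survival-to-score relationship described in the RESULTS can be captured by a toy piecewise-linear model (an assumption for illustration; the paper's histopathological data need not be linear): scores stay near ceiling above ~40% SGN survival, then fall steeply, reaching ~50% at 20% survival.

```python
# Toy reconstruction of the described survival-to-score relationship.
# The piecewise-linear form and the ceiling value are assumptions.

def word_score(sgn_survival_pct, ceiling=100.0):
    """Acoustically driven word score (%) as a function of SGN survival (%)."""
    if sgn_survival_pct >= 40.0:
        return ceiling                       # near-ceiling above ~40% survival
    # Steep linear drop: ceiling at 40% survival down to 50 points at 20%.
    slope = (ceiling - 50.0) / (40.0 - 20.0)
    return max(0.0, ceiling - slope * (40.0 - sgn_survival_pct))

for survival in (60, 40, 30, 20, 10):
    print(f"{survival}% SGN survival -> word score {word_score(survival):.0f}%")
```

This shape makes the paper's argument concrete: preimplantation word scores are insensitive across the wide range of survival above 40%, which is one reason they fail as predictors.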


Subjects
Cochlear Implantation; Cochlear Implants; Deafness; Speech Perception; Humans; Cochlear Implantation/methods; Male; Female; Middle Aged; Adult; Aged; Speech Perception/physiology; Deafness/surgery; Treatment Outcome; Temporal Bone/surgery; Aged, 80 and over; Young Adult; Adolescent
17.
eNeuro ; 11(5)2024 May.
Article in English | MEDLINE | ID: mdl-38658138

ABSTRACT

A growing number of patients worldwide are diagnosed with dementia, which emphasizes the urgent need for early detection markers. In this study, we built on the auditory hypersensitivity theory of a previous study, which postulated that responses to auditory input are enhanced in cognitive decline at both subcortical and cortical levels, and examined auditory encoding of natural continuous speech at both neural levels for its indicative potential for cognitive decline. We recruited study participants aged 60 years and older, who were divided into two groups based on the Montreal Cognitive Assessment: one group with low scores (n = 19, participants with signs of cognitive decline) and a control group (n = 25). Participants completed an audiometric assessment, and we then recorded their electroencephalography while they listened to an audiobook and click sounds. We derived temporal response functions and evoked potentials from the data and examined response amplitudes for their potential to predict cognitive decline, controlling for hearing ability and age. Contrary to our expectations, no evidence of auditory hypersensitivity was observed in participants with signs of cognitive decline; response amplitudes were comparable in both cognitive groups. Moreover, the combination of response amplitudes showed no predictive value for cognitive decline. These results challenge the proposed hypothesis and emphasize the need for further research to identify reliable auditory markers for the early detection of cognitive decline.


Subjects
Cognitive Dysfunction; Electroencephalography; Evoked Potentials, Auditory; Humans; Female; Male; Aged; Cognitive Dysfunction/physiopathology; Cognitive Dysfunction/diagnosis; Middle Aged; Evoked Potentials, Auditory/physiology; Speech Perception/physiology; Aged, 80 and over; Cerebral Cortex/physiology; Cerebral Cortex/physiopathology; Acoustic Stimulation; Speech/physiology
18.
Trends Hear ; 28: 23312165241240572, 2024.
Article in English | MEDLINE | ID: mdl-38676325

ABSTRACT

Realistic outcome measures that reflect everyday hearing challenges are needed to assess hearing aid and cochlear implant (CI) fitting. Literature suggests that listening effort measures may be more sensitive to differences between hearing-device settings than established speech intelligibility measures when speech intelligibility is near maximum. Which method provides the most effective measurement of listening effort for this purpose is currently unclear. This study aimed to investigate the feasibility of two tests for measuring changes in listening effort in CI users due to signal-to-noise ratio (SNR) differences, as would arise from different hearing-device settings. By comparing the effect size of SNR differences on listening effort measures with test-retest differences, the study evaluated the suitability of these tests for clinical use. Nineteen CI users underwent two listening effort tests at two SNRs (+4 and +8 dB relative to individuals' 50% speech perception threshold). We employed two dual-task paradigms, a sentence-final word identification and recall test (SWIRT) and a sentence verification test (SVT), to assess listening effort at these two SNRs. Our results show a significant difference in listening effort between the SNRs for both test methods, although the effect size was comparable to the test-retest difference, and the sensitivity was not superior to speech intelligibility measures. Thus, the implementations of the SVT and SWIRT used in this study are not suitable for clinical use to measure listening effort differences of this magnitude in individual CI users. However, they can be used in research involving CI users to analyze group data.


Subjects
Cochlear Implantation , Cochlear Implants , Feasibility Studies , Persons With Hearing Impairments , Speech Intelligibility , Speech Perception , Humans , Male , Female , Speech Perception/physiology , Middle Aged , Aged , Speech Intelligibility/physiology , Cochlear Implantation/instrumentation , Persons With Hearing Impairments/rehabilitation , Persons With Hearing Impairments/psychology , Reproducibility of Results , Acoustic Stimulation , Signal-To-Noise Ratio , Adult , Aged, 80 and over , Auditory Threshold/physiology , Predictive Value of Tests , Correction of Hearing Impairment/instrumentation , Noise/adverse effects
19.
PLoS Comput Biol ; 20(4): e1011985, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38626220

ABSTRACT

Animal psychophysics can generate rich behavioral datasets, often comprising many thousands of trials for an individual subject. Gradient-boosted models are a promising machine learning approach for analyzing such data, partly due to the tools that allow users to gain insight into how the model makes predictions. We trained ferrets to report a target word's presence, timing, and lateralization within a stream of consecutively presented non-target words. To assess the animals' ability to generalize across pitch, we manipulated the fundamental frequency (F0) of the speech stimuli across trials, and to assess the contribution of pitch to streaming, we roved the F0 from word token to word token. We then applied gradient-boosted regression and decision trees to the trial outcome and reaction time data to understand the behavioral factors behind the ferrets' decision-making. We visualized model contributions using SHAP feature importance and partial dependence plots. While ferrets could accurately perform the task across all pitch-shifted conditions, our models revealed subtle effects of shifting F0 on performance, with within-trial pitch shifting elevating false alarms and extending reaction times. Our models identified a subset of non-target words to which the animals commonly false alarmed. Follow-up analysis demonstrated that the spectrotemporal similarity of target and non-target words, rather than similarity in duration or amplitude waveform, was the strongest predictor of the likelihood of a false alarm. Finally, we compared the results with those obtained with traditional mixed-effects models, finding equivalent or better performance for the gradient-boosted models.
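The core technique named above, gradient-boosted regression over decision trees, can be sketched in miniature: an ensemble of depth-1 trees (stumps), each fit to the squared-error residuals of the ensemble so far. This toy pure-Python version is only an illustration of the boosting mechanism, not the study's pipeline (which used full gradient-boosted models with SHAP on real behavioral data); all names and data here are invented:

```python
def fit_stump(x, residuals):
    """Find the threshold split on x that minimizes squared error of the residuals."""
    best = None
    for t in sorted(set(x)):
        left = [r for xi, r in zip(x, residuals) if xi <= t]
        right = [r for xi, r in zip(x, residuals) if xi > t]
        if not left or not right:
            continue  # degenerate split, skip
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        sse = sum((r - lm) ** 2 for r in left) + sum((r - rm) ** 2 for r in right)
        if best is None or sse < best[0]:
            best = (sse, t, lm, rm)
    _, t, lm, rm = best
    return lambda xi, t=t, lm=lm, rm=rm: lm if xi <= t else rm

def gradient_boost(x, y, n_rounds=50, lr=0.1):
    """Boosted stumps for squared loss: each round fits the current residuals."""
    base = sum(y) / len(y)
    pred = [base] * len(y)
    stumps = []
    for _ in range(n_rounds):
        residuals = [yi - pi for yi, pi in zip(y, pred)]  # negative gradient of squared loss
        stump = fit_stump(x, residuals)
        stumps.append(stump)
        pred = [pi + lr * stump(xi) for pi, xi in zip(pred, x)]
    return lambda xi: base + lr * sum(s(xi) for s in stumps)

# Invented 1-D example with a step-like target
x = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
y = [0.0, 0.1, 0.2, 1.0, 1.1, 1.2]
model = gradient_boost(x, y)
```

The interpretability tools the abstract mentions (SHAP values, partial dependence) then attribute each prediction of such an ensemble to its input features.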


Subjects
Decision Trees , Ferrets , Animals , Computational Biology , Acoustic Stimulation , Auditory Perception/physiology , Behavior, Animal/physiology , Reaction Time/physiology , Male , Machine Learning , Female , Decision Making/physiology , Speech Perception/physiology
20.
PLoS One ; 19(4): e0301514, 2024.
Article in English | MEDLINE | ID: mdl-38564597

ABSTRACT

Evoked potential studies have shown that speech planning modulates auditory cortical responses. The phenomenon's functional relevance is unknown. We tested whether, during this time window of cortical auditory modulation, there is an effect on speakers' perceptual sensitivity for vowel formant discrimination. Participants made same/different judgments for pairs of stimuli consisting of a pre-recorded, self-produced vowel and a formant-shifted version of the same production. Stimuli were presented prior to a "go" signal for speaking, prior to passive listening, and during silent reading. The formant discrimination stimulus /uh/ was tested with a congruent productions list (words with /uh/) and an incongruent productions list (words without /uh/). Logistic curves were fitted to participants' responses, and the just-noticeable difference (JND) served as a measure of discrimination sensitivity. We found a statistically significant effect of condition (worst discrimination before speaking) but no effect of congruency. Post-hoc pairwise comparisons revealed that the JND was significantly greater before speaking than during silent reading. Thus, formant discrimination sensitivity was reduced during speech planning regardless of the congruence between the discrimination stimulus and the predicted acoustic consequences of the planned speech movements. This finding may inform ongoing efforts to determine the functional relevance of the previously reported modulation of auditory processing during speech planning.
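The JND extraction described above, fitting a logistic curve to same/different responses and reading off the threshold, can be illustrated with a short sketch. This is not the authors' fitting code; the parameterization (α = curve midpoint, β = slope) and the numeric values are hypothetical:

```python
import math

def logistic(x, alpha, beta):
    """Fitted proportion of 'different' responses for a formant shift of x."""
    return 1.0 / (1.0 + math.exp(-(x - alpha) / beta))

def jnd(alpha, beta, criterion=0.5):
    """Invert the logistic: the shift at which the criterion proportion is reached.

    With criterion = 0.5 this is simply the curve midpoint alpha; a larger JND
    means a bigger formant shift is needed, i.e., poorer discrimination.
    """
    return alpha + beta * math.log(criterion / (1.0 - criterion))
```

Under this scheme, the study's main result corresponds to a larger fitted JND in the pre-speaking condition than during silent reading.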


Subjects
Auditory Cortex , Speech Perception , Humans , Speech/physiology , Speech Perception/physiology , Acoustics , Movement , Phonetics , Speech Acoustics