Results 1 - 20 of 2,135
1.
Annu Rev Neurosci ; 42: 47-65, 2019 07 08.
Article in English | MEDLINE | ID: mdl-30699049

ABSTRACT

The modern cochlear implant (CI) is the most successful neural prosthesis developed to date. CIs provide hearing to the profoundly hearing impaired and allow the acquisition of spoken language in children born deaf. Results from studies enabled by the CI have provided new insights into (a) minimal representations at the periphery for speech reception, (b) brain mechanisms for decoding speech presented in quiet and in acoustically adverse conditions, (c) the developmental neuroscience of language and hearing, and (d) the mechanisms and time courses of intramodal and cross-modal plasticity. Additionally, the results have underscored the interconnectedness of brain functions and the importance of top-down processes in perception and learning. The findings are described in this review with emphasis on the developing brain and the acquisition of hearing and spoken language.


Subjects
Auditory Perception/physiology , Cochlear Implants , Critical Period, Psychological , Language Development , Animals , Auditory Perceptual Disorders/etiology , Brain/growth & development , Cochlear Implantation , Comprehension , Cues , Deafness/congenital , Deafness/physiopathology , Deafness/psychology , Deafness/surgery , Equipment Design , Humans , Language Development Disorders/etiology , Language Development Disorders/prevention & control , Learning/physiology , Neuronal Plasticity , Photic Stimulation
2.
Cereb Cortex ; 34(5)2024 May 02.
Article in English | MEDLINE | ID: mdl-38715408

ABSTRACT

Speech comprehension in noise depends on complex interactions between peripheral sensory and central cognitive systems. Despite having normal peripheral hearing, older adults show difficulties in speech comprehension. It remains unclear whether the brain's neural responses could indicate aging. The current study examined whether individual brain activation during speech perception in different listening environments could predict age. We applied functional near-infrared spectroscopy to 93 normal-hearing human adults (20 to 70 years old) during a sentence listening task, which contained a quiet condition and four noisy conditions with different signal-to-noise ratios (SNR = 10, 5, 0, -5 dB). A data-driven approach, region-based brain-age predictive modeling, was adopted. We observed a significant behavioral decrease with age under the four noisy conditions, but not under the quiet condition. Brain activations in the SNR = 10 dB listening condition could successfully predict an individual's age. Moreover, we found that the bilateral visual sensory cortex, left dorsal speech pathway, left cerebellum, right temporal-parietal junction area, right homolog of Wernicke's area, and right middle temporal gyrus contributed most to prediction performance. These results demonstrate that activations of regions involved in sensory-motor mapping of sound, especially in noisy conditions, could be more sensitive measures for age prediction than external behavioral measures.


Subjects
Aging , Brain , Comprehension , Noise , Spectroscopy, Near-Infrared , Speech Perception , Humans , Adult , Speech Perception/physiology , Male , Female , Spectroscopy, Near-Infrared/methods , Middle Aged , Young Adult , Aged , Comprehension/physiology , Brain/physiology , Brain/diagnostic imaging , Aging/physiology , Brain Mapping/methods , Acoustic Stimulation/methods
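The noisy conditions in the abstract above are defined by the speech-to-noise power ratio in dB. As a rough illustration of how a noise masker is scaled to hit a target SNR (a generic sketch with hypothetical helper names, not the study's stimulus-generation code):

```python
import math

def snr_db(speech, noise):
    """SNR in dB: 10 * log10(P_speech / P_noise), with P the mean signal power."""
    p_speech = sum(s * s for s in speech) / len(speech)
    p_noise = sum(n * n for n in noise) / len(noise)
    return 10 * math.log10(p_speech / p_noise)

def scale_noise_for_snr(speech, noise, target_snr_db):
    """Return a gain-adjusted copy of `noise` so that mixing it with
    `speech` yields the target SNR."""
    p_speech = sum(s * s for s in speech) / len(speech)
    p_noise = sum(n * n for n in noise) / len(noise)
    # Noise power required for the target SNR, converted to an amplitude gain.
    target_p_noise = p_speech / (10 ** (target_snr_db / 10))
    gain = math.sqrt(target_p_noise / p_noise)
    return [n * gain for n in noise]
```

With this scaling, an SNR of -5 dB means the masker carries more power than the sentence, which is why behavioral performance drops as SNR decreases.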
3.
Cereb Cortex ; 34(4)2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38687241

ABSTRACT

Speech comprehension entails the neural mapping of the acoustic speech signal onto learned linguistic units. This acousto-linguistic transformation is bi-directional, whereby higher-level linguistic processes (e.g. semantics) modulate the acoustic analysis of individual linguistic units. Here, we investigated the cortical topography and linguistic modulation of the most fundamental linguistic unit, the phoneme. We presented natural speech and "phoneme quilts" (pseudo-randomly shuffled phonemes) in either a familiar (English) or unfamiliar (Korean) language to native English speakers while recording functional magnetic resonance imaging. This allowed us to dissociate the contribution of acoustic vs. linguistic processes toward phoneme analysis. We show (i) that the acoustic analysis of phonemes is modulated by linguistic analysis and (ii) that for this modulation, both acoustic and phonetic information need to be incorporated. These results suggest that the linguistic modulation of cortical sensitivity to phoneme classes minimizes prediction error during natural speech perception, thereby aiding speech comprehension in challenging listening situations.


Subjects
Brain Mapping , Magnetic Resonance Imaging , Phonetics , Speech Perception , Humans , Speech Perception/physiology , Female , Magnetic Resonance Imaging/methods , Male , Adult , Young Adult , Linguistics , Acoustic Stimulation/methods , Comprehension/physiology , Brain/physiology , Brain/diagnostic imaging
4.
Cereb Cortex ; 34(3)2024 03 01.
Article in English | MEDLINE | ID: mdl-38494418

ABSTRACT

Listeners can use prior knowledge to predict the content of noisy speech signals, enhancing perception. However, this process can also elicit misperceptions. For the first time, we employed a prime-probe paradigm and transcranial magnetic stimulation to investigate causal roles for the left and right posterior superior temporal gyri (pSTG) in the perception and misperception of degraded speech. Listeners were presented with spectrotemporally degraded probe sentences preceded by a clear prime. To produce misperceptions, we created partially mismatched pseudo-sentence probes via homophonic nonword transformations (e.g. The little girl was excited to lose her first tooth-Tha fittle girmn wam expited du roos har derst cooth). Compared to a control site (vertex), inhibitory stimulation of the left pSTG selectively disrupted priming of real but not pseudo-sentences. Conversely, inhibitory stimulation of the right pSTG enhanced priming of misperceptions with pseudo-sentences, but did not influence perception of real sentences. These results indicate qualitatively different causal roles for the left and right pSTG in perceiving degraded speech, supporting bilateral models that propose engagement of the right pSTG in sublexical processing.


Subjects
Language , Speech , Humans , Female , Speech/physiology , Temporal Lobe , Transcranial Magnetic Stimulation , Noise
5.
Cereb Cortex ; 34(1)2024 01 14.
Article in English | MEDLINE | ID: mdl-38163443

ABSTRACT

The onset of hearing loss can lead to altered brain structure and function. However, hearing restoration may also result in distinct cortical reorganization. A differential pattern of functional remodeling was observed between post- and prelingual cochlear implant users, but it remains unclear how these speech processing networks are reorganized after cochlear implantation. To explore the impact of language acquisition and hearing restoration on speech perception in cochlear implant users, we conducted assessments of brain activation, functional connectivity, and graph theory-based analysis using functional near-infrared spectroscopy. We examined the effects of speech-in-noise stimuli on three groups: postlingual cochlear implant users (n = 12), prelingual cochlear implant users (n = 10), and age-matched normal-hearing controls (HC) (n = 22). Auditory-related areas in cochlear implant users showed lower activation compared with the HC group. Wernicke's area and Broca's area demonstrated different network attributes in the speech processing networks of post- and prelingual cochlear implant users. In addition, cochlear implant users maintained a highly efficient speech processing network for processing speech information. Taken together, our results characterize the speech processing networks, in varying noise environments, of post- and prelingual cochlear implant users and provide new insights for theories of how implantation modes impact remodeling of speech processing functional networks.


Subjects
Cochlear Implantation , Cochlear Implants , Deafness , Speech Perception , Humans , Speech , Deafness/surgery , Hearing , Speech Perception/physiology
6.
Cereb Cortex ; 34(2)2024 01 31.
Article in English | MEDLINE | ID: mdl-38282455

ABSTRACT

Individual variability in functional connectivity underlies individual differences in cognition and behaviors, yet its association with functional specialization in the auditory cortex remains elusive. Using resting-state functional magnetic resonance imaging data from the Human Connectome Project, this study was designed to investigate the spatial distribution of auditory cortex individual variability in its whole-brain functional network architecture. An inherent hierarchical axis of the variability was discerned, which radiates from the medial to lateral orientation, with the left auditory cortex demonstrating more pronounced variations than the right. This variability exhibited a significant correlation with the variations in structural and functional metrics in the auditory cortex. Four auditory cortex subregions, which were identified from a clustering analysis based on this variability, exhibited unique connectional fingerprints and cognitive maps, with certain subregions showing specificity to speech perception functional activation. Moreover, the lateralization of the connectional fingerprint exhibited a U-shaped trajectory across the subregions. These findings emphasize the role of individual variability in functional connectivity in understanding cortical functional organization, as well as in revealing its association with functional specialization from the activation, connectome, and cognition perspectives.


Subjects
Auditory Cortex , Connectome , Humans , Auditory Cortex/diagnostic imaging , Magnetic Resonance Imaging/methods , Connectome/methods , Brain , Cognition
7.
J Neurosci ; 43(26): 4856-4866, 2023 06 28.
Article in English | MEDLINE | ID: mdl-37127361

ABSTRACT

Listening in noisy environments requires effort, the active engagement of attention and other cognitive abilities, as well as increased arousal. The ability to separately quantify the contribution of these components is key to understanding the dynamics of effort and how it may change across listening situations and in certain populations. We concurrently measured two types of ocular data in young participants (both sexes): pupil dilation (PD; thought to index arousal aspects of effort) and microsaccades (MS; hypothesized to reflect automatic visual exploratory sampling), while they performed a speech-in-noise task under high- (HL) and low- (LL) listening load conditions. Sentences were manipulated so that the behaviorally relevant information (keywords) appeared at the end (Experiment 1) or beginning (Experiment 2) of the sentence, resulting in different temporal demands on focused attention. In line with previous reports, PD effects were associated with increased dilation under load. We observed a sustained difference between HL and LL conditions, consistent with increased phasic and tonic arousal. Importantly, we show that MS rate was also modulated by listening load. This was manifested as a reduced MS rate in HL relative to LL. Critically, in contrast to the sustained difference seen for PD, MS effects were localized in time, specifically during periods when demands on auditory attention were greatest. These results demonstrate that auditory selective attention interfaces with the mechanisms controlling MS generation, establishing MS as an informative measure, complementary to PD, with which to quantify the temporal dynamics of auditory attentional processing under effortful listening conditions.

SIGNIFICANCE STATEMENT: Listening effort, reflecting the "cognitive bandwidth" deployed to effectively process sound in adverse environments, contributes critically to listening success. Understanding listening effort and the processes involved in its allocation is a major challenge in auditory neuroscience. Here, we demonstrate that microsaccade rate can be used to index a specific subcomponent of listening effort, the allocation of instantaneous auditory attention, that is distinct from the modulation of arousal indexed by pupil dilation (currently the dominant measure of listening effort). These results reveal the push-pull process through which auditory attention interfaces with the (visual) attention network that controls microsaccades, establishing microsaccades as a powerful tool for measuring auditory attention and its deficits.


Subjects
Pupil , Speech Perception , Male , Female , Humans , Auditory Perception , Noise , Arousal
8.
Neuroimage ; 289: 120544, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38365164

ABSTRACT

Natural poetic speech (i.e., proverbs, nursery rhymes, and commercial ads) with strong prosodic regularities is easily memorized by children, and its harmonious acoustic patterns are thought to facilitate integrated sentence processing. Do children have specific neural pathways for perceiving such poetic utterances, and does their speech development benefit from them? We recorded the task-induced hemodynamic changes of 94 children aged 2 to 12 years using functional near-infrared spectroscopy (fNIRS) while they listened to poetic and non-poetic natural sentences. Seventy-three adults were recruited as controls to investigate the developmental specificity of the child group. The results indicated that perceiving poetic sentences is a highly integrated process characterized by a lower brain workload in both groups. However, an early-activated large-scale network was induced only in the child group, coordinated by hubs with diverse connectivity. Additionally, poetic speech evoked activation in phonological encoding regions in the child group but not in adult controls, an effect that decreased with children's age. The neural responses to poetic speech were positively linked to children's speech communication performance, especially its fluency and semantic aspects. These results reveal children's neural sensitivity to integrated speech perception, which facilitates early speech development by strengthening more sophisticated language networks and the perception-production circuit.


Subjects
Speech Perception , Speech , Child , Adult , Humans , Speech/physiology , Speech Perception/physiology , Language , Brain/physiology , Semantics , Language Development
9.
Neuroimage ; 297: 120696, 2024 Jun 21.
Article in English | MEDLINE | ID: mdl-38909761

ABSTRACT

How is information processed in the cerebral cortex? In most cases, recorded brain activity is averaged over many (stimulus) repetitions, which erases the fine structure of the neural signal. However, the brain is obviously a single-trial processor. Thus, we here demonstrate that an unsupervised machine learning approach can be used to extract meaningful information from electrophysiological recordings on a single-trial basis. We use an auto-encoder network to reduce the dimensions of single local field potential (LFP) events to create interpretable clusters of different neural activity patterns. Strikingly, certain LFP shapes correspond to latency differences between different recording channels. Hence, LFP shapes can be used to determine the direction of information flow in the cerebral cortex. Furthermore, after clustering, we decoded the cluster centroids to reverse-engineer the underlying prototypical LFP event shapes. To evaluate our approach, we applied it to both extracellular neural recordings in rodents and intracranial EEG recordings in humans. Finally, we find that single-channel LFP event shapes during spontaneous activity sample from the realm of possible stimulus-evoked event shapes, a finding which so far has only been demonstrated for multi-channel population coding.

10.
Eur J Neurosci ; 59(3): 394-414, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38151889

ABSTRACT

Human speech is a particularly relevant acoustic stimulus for our species, due to its role in information transmission during communication. Speech is inherently a dynamic signal, and a recent line of research has focused on neural activity following the temporal structure of speech. We review findings that characterise neural dynamics in the processing of continuous acoustics and that allow us to compare these dynamics with temporal aspects of human speech. We highlight properties and constraints that both neural and speech dynamics have, suggesting that auditory neural systems are optimised to process human speech. We then discuss the speech-specificity of neural dynamics and their potential mechanistic origins and summarise open questions in the field.


Subjects
Speech Perception , Speech , Humans , Acoustic Stimulation , Acoustics
11.
Eur J Neurosci ; 59(8): 1918-1932, 2024 Apr.
Article in English | MEDLINE | ID: mdl-37990611

ABSTRACT

The unconscious integration of vocal and facial cues during speech perception facilitates face-to-face communication. Recent studies have provided substantial behavioural evidence concerning impairments in audiovisual (AV) speech perception in schizophrenia. However, the specific neurophysiological mechanism underlying these deficits remains unknown. Here, we investigated activities and connectivities centered on the auditory cortex during AV speech perception in schizophrenia. Using magnetoencephalography, we recorded and analysed event-related fields in response to auditory (A: voice), visual (V: face) and AV (voice-face) stimuli in 23 schizophrenia patients (13 males) and 22 healthy controls (13 males). The functional connectivity associated with the subadditive response to AV stimulus (i.e., [AV] < [A] + [V]) was also compared between the two groups. Within the healthy control group, [AV] activity was smaller than the sum of [A] and [V] at latencies of approximately 100 ms in the posterior ramus of the lateral sulcus in only the left hemisphere, demonstrating a subadditive N1m effect. Conversely, the schizophrenia group did not show such a subadditive response. Furthermore, weaker functional connectivity from the posterior ramus of the lateral sulcus of the left hemisphere to the fusiform gyrus of the right hemisphere was observed in schizophrenia. Notably, this weakened connectivity was associated with the severity of negative symptoms. These results demonstrate abnormalities in connectivity between speech- and face-related cortical areas in schizophrenia. This aberrant subadditive response and connectivity deficits for integrating speech and facial information may be the neural basis of social communication dysfunctions in schizophrenia.


Subjects
Auditory Cortex , Schizophrenia , Speech Perception , Male , Humans , Speech Perception/physiology , Magnetoencephalography , Speech/physiology , Visual Perception/physiology , Auditory Perception/physiology , Acoustic Stimulation/methods
12.
Anim Cogn ; 27(1): 34, 2024 Apr 16.
Article in English | MEDLINE | ID: mdl-38625429

ABSTRACT

Humans have an impressive ability to comprehend signal-degraded speech; however, the extent to which comprehension of degraded speech relies on human-specific features of speech perception vs. more general cognitive processes is unknown. Since dogs live alongside humans and regularly hear speech, they can be used as a model to differentiate between these possibilities. One often-studied type of degraded speech is noise-vocoded speech (sometimes thought of as cochlear-implant-simulation speech). Noise-vocoded speech is made by dividing the speech signal into frequency bands (channels), identifying the amplitude envelope of each individual band, and then using these envelopes to modulate bands of noise centered over the same frequency regions; the result is a signal with preserved temporal cues but vastly reduced frequency information. Here, we tested dogs' recognition of familiar words produced in 16-channel vocoded speech. In the first study, dogs heard their names and unfamiliar dogs' names (foils) in vocoded speech as well as natural speech. In the second study, dogs heard 16-channel vocoded speech only. Dogs listened longer to their vocoded name than to vocoded foils in both experiments, showing that they can comprehend a 16-channel vocoded version of their name without prior exposure to vocoded speech, and without immediate exposure to the natural-speech version of their name. Dogs' name recognition in the second study was mediated by the number of phonemes in the dogs' names, suggesting that phonological context plays a role in degraded speech comprehension.


Subjects
Speech Perception , Speech , Humans , Animals , Dogs , Cues , Hearing , Linguistics
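The noise-vocoding recipe spelled out in the abstract above (band-split, envelope extraction, envelope-modulated noise carriers) can be sketched roughly as follows. This is a simplified FFT-based illustration, not the stimulus-generation code used in the study:

```python
import numpy as np

def noise_vocode(signal, fs, n_channels=16, seed=0):
    """Simplified noise vocoder: split the signal into log-spaced frequency
    bands, extract each band's amplitude envelope, and use the envelopes to
    modulate noise restricted to the same bands."""
    sig = np.asarray(signal, dtype=float)
    n = len(sig)
    rng = np.random.default_rng(seed)
    spectrum = np.fft.rfft(sig)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    edges = np.geomspace(100.0, fs / 2.0, n_channels + 1)  # band edges in Hz
    win = max(1, int(fs * 0.01))  # ~10 ms smoothing window for the envelope
    out = np.zeros(n)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (freqs >= lo) & (freqs < hi)
        band = np.fft.irfft(spectrum * mask, n)
        # Crude amplitude envelope: rectify, then smooth with a moving average.
        env = np.convolve(np.abs(band), np.ones(win) / win, mode="same")
        # Band-limited noise carrier, modulated by the band's envelope.
        carrier = np.fft.irfft(np.fft.rfft(rng.standard_normal(n)) * mask, n)
        out += env * carrier
    return out
```

With 16 channels the temporal envelopes survive but within-band spectral detail is replaced by noise, which is exactly the degradation the dogs had to cope with.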
13.
Dev Sci ; 27(1): e13412, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37219071

ABSTRACT

Literacy acquisition is a complex process with genetic and environmental factors influencing cognitive and neural processes associated with reading. Previous research identified factors that predict word reading fluency (WRF), including phonological awareness (PA), rapid automatized naming (RAN), and speech-in-noise perception (SPIN). Recent theoretical accounts suggest dynamic interactions between these factors and reading, but direct investigations of such dynamics are lacking. Here, we investigated the dynamic effect of phonological processing and speech perception on WRF. More specifically, we evaluated the dynamic influence of PA, RAN, and SPIN measured in kindergarten (the year prior to formal reading instruction), first grade (the first year of formal reading instruction) and second grade on WRF in second and third grade. We also assessed the effect of an indirect proxy of family risk for reading difficulties using a parental questionnaire (Adult Reading History Questionnaire, ARHQ). We applied path modeling in a longitudinal sample of 162 Dutch-speaking children of whom the majority was selected to have an increased family and/or cognitive risk for dyslexia. We showed that parental ARHQ had a significant effect on WRF, RAN and SPIN, but unexpectedly not on PA. We also found effects of RAN and PA directly on WRF that were limited to first and second grade respectively, in contrast to previous research reporting pre-reading PA effects and prolonged RAN effects throughout reading acquisition. Our study provides important new insights into early prediction of later word reading abilities and into the optimal time window to target a specific reading-related subskill during intervention.


Subjects
Dyslexia , Reading , Child , Humans , Phonetics , Language , Cognition
14.
Dev Sci ; 27(1): e13420, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37350014

ABSTRACT

Auditory selective attention forms an important foundation of children's learning by enabling the prioritisation and encoding of relevant stimuli. It may also influence reading development, which relies on metalinguistic skills including the awareness of the sound structure of spoken language. Reports of attentional impairments and speech perception difficulties in noisy environments in dyslexic readers are also suggestive of the putative contribution of auditory attention to reading development. To date, it is unclear whether non-speech selective attention and its underlying neural mechanisms are impaired in children with dyslexia and to which extent these deficits relate to individual reading and speech perception abilities in suboptimal listening conditions. In this EEG study, we assessed non-speech sustained auditory selective attention in 106 7-to-12-year-old children with and without dyslexia. Children attended to one of two tone streams, detecting occasional sequence repeats in the attended stream, and performed a speech-in-speech perception task. Results show that when children directed their attention to one stream, inter-trial-phase-coherence at the attended rate increased in fronto-central sites; this, in turn, was associated with better target detection. Behavioural and neural indices of attention did not systematically differ as a function of dyslexia diagnosis. However, behavioural indices of attention did explain individual differences in reading fluency and speech-in-speech perception abilities: both these skills were impaired in dyslexic readers. Taken together, our results show that children with dyslexia do not show group-level auditory attention deficits but these deficits may represent a risk for developing reading impairments and problems with speech perception in complex acoustic environments. 
RESEARCH HIGHLIGHTS: Non-speech sustained auditory selective attention modulates EEG phase coherence in children with and without dyslexia. Children with dyslexia show difficulties in speech-in-speech perception. Attention relates to dyslexic readers' speech-in-speech perception and reading skills. Dyslexia diagnosis is not linked to behavioural/EEG indices of auditory attention.


Subjects
Dyslexia , Speech Perception , Child , Humans , Reading , Sound , Speech , Speech Disorders , Phonetics
15.
Dev Sci ; : e13551, 2024 Jul 22.
Article in English | MEDLINE | ID: mdl-39036879

ABSTRACT

Test-retest reliability, i.e., establishing that measurements remain consistent across multiple testing sessions, is critical to measuring, understanding, and predicting individual differences in infant language development. However, previous attempts to establish measurement reliability in infant speech perception tasks are limited, and the reliability of frequently used infant measures is largely unknown. The current study investigated the test-retest reliability of infants' preference for infant-directed speech over adult-directed speech in a large sample (N = 158) in the context of the ManyBabies1 collaborative research project. Labs were asked to bring participating infants in for a second appointment, retesting infants on their preference for infant-directed speech. This approach allowed us to estimate test-retest reliability across three different methods used to investigate preferential listening in infancy: the head-turn preference procedure, central fixation, and eye-tracking. Overall, we found no consistent evidence of test-retest reliability in measures of infants' speech preference (overall r = 0.09, 95% CI [-0.06, 0.25]). While increasing the number of trials that infants needed to contribute for inclusion in the analysis revealed a numeric growth in test-retest reliability, it also considerably reduced the study's effective sample size. Therefore, future research on infant development should take into account that not all experimental measures may be appropriate for assessing individual differences between infants. RESEARCH HIGHLIGHTS: We assessed test-retest reliability of infants' preference for infant-directed over adult-directed speech in a large pre-registered sample (N = 158). There was no consistent evidence of test-retest reliability in measures of infants' speech preference. Applying stricter criteria for the inclusion of participants may lead to higher test-retest reliability, but at the cost of substantial decreases in sample size. Developmental research relying on stable individual differences should consider the underlying reliability of its measures.
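The headline reliability estimate in this record (overall r = 0.09) is a correlation between infants' session-one and session-two preference scores. As a minimal sketch of that computation (a plain Pearson correlation, not the project's actual analysis pipeline):

```python
def pearson_r(session1, session2):
    """Pearson correlation between two sessions' scores; values near 1
    indicate high test-retest reliability, values near 0 indicate none."""
    n = len(session1)
    m1 = sum(session1) / n
    m2 = sum(session2) / n
    cov = sum((a - m1) * (b - m2) for a, b in zip(session1, session2))
    s1 = sum((a - m1) ** 2 for a in session1) ** 0.5
    s2 = sum((b - m2) ** 2 for b in session2) ** 0.5
    return cov / (s1 * s2)
```

A measure can show a robust group-level effect (infants prefer IDS on average) while still correlating near zero across sessions, which is the distinction the study turns on.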

16.
Dev Sci ; 27(1): e13431, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37403418

ABSTRACT

As reading is inherently a multisensory, audiovisual (AV) process in which visual symbols (i.e., letters) are connected to speech sounds, the question has been raised whether individuals with reading difficulties, like children with developmental dyslexia (DD), have broader impairments in multisensory processing. This question has been posed before, yet it remains unanswered due to (a) the complexity and contentious etiology of DD along with (b) a lack of consensus on developmentally appropriate AV processing tasks. We created an ecologically valid task for measuring multisensory AV processing by leveraging the natural phenomenon that speech perception improves when listeners are provided visual information from mouth movements (particularly when the auditory signal is degraded). We designed this AV processing task with low cognitive and linguistic demands such that children with and without DD would have equal unimodal (auditory and visual) performance. We then collected data in a group of 135 children (ages 6.5-15) with an AV speech perception task to answer the following questions: (1) How do AV speech perception benefits manifest in children with and without DD? (2) Do children all use the same perceptual weights to create AV speech perception benefits? And (3) what is the role of phonological processing in AV speech perception? We show that children with and without DD have equal AV speech perception benefits on this task, but that children with DD rely less on auditory processing in more difficult listening situations to create these benefits and weigh the two incoming information streams differently. Lastly, any reported differences in speech perception in children with DD might be better explained by differences in phonological processing than by differences in reading skills. RESEARCH HIGHLIGHTS: Children with and without developmental dyslexia have equal audiovisual speech perception benefits, regardless of their phonological awareness or reading skills. Children with developmental dyslexia rely less on auditory performance to create audiovisual speech perception benefits. Individual differences in speech perception in children might be better explained by differences in phonological processing than by differences in reading skills.


Subjects
Dyslexia , Speech Perception , Child , Humans , Adolescent , Dyslexia/psychology , Reading , Phonetics , Awareness
17.
Audiol Neurootol ; 29(1): 60-66, 2024.
Article in English | MEDLINE | ID: mdl-37586357

ABSTRACT

INTRODUCTION: The effect of chronic kidney disease (CKD) on hearing is well documented in the literature. Several studies have investigated the effect of hemodialysis on the peripheral auditory system among individuals with CKD. However, studies investigating the effect of hemodialysis on speech perception and auditory processing abilities are limited. The present study investigated the effect of hemodialysis on a few auditory processing abilities and on speech perception in noise among adults with CKD. METHODS: A total of 25 adults with CKD undergoing regular hemodialysis participated in the study. Spectral ripple discrimination threshold (SRDT), gap detection threshold (GDT), amplitude-modulation detection threshold (AMDT), and speech recognition threshold in noise (SRTn) were measured before and after hemodialysis. Paired-samples t-tests were carried out to investigate the effect of hemodialysis on these thresholds. RESULTS: Results showed a significant improvement in SRDT, GDT, AMDT, and SRTn after hemodialysis among individuals with CKD. DISCUSSION: Hemodialysis had a positive effect on speech perception in noise and auditory processing abilities among individuals with CKD.


Subjects
Renal Insufficiency, Chronic , Speech Perception , Time Perception , Adult , Humans , Auditory Threshold , Auditory Perception , Renal Dialysis , Renal Insufficiency, Chronic/therapy
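The paired-samples t-test used in the record above reduces to a one-sample test on the pre-minus-post difference scores. A minimal sketch with made-up threshold values (not the study's data):

```python
import math

def paired_t(pre, post):
    """Paired-samples t statistic on difference scores (pre - post).

    Returns (t, degrees of freedom); t is compared against the t
    distribution with n - 1 degrees of freedom.
    """
    diffs = [a - b for a, b in zip(pre, post)]
    n = len(diffs)
    mean_d = sum(diffs) / n
    # Sample variance of the differences (n - 1 in the denominator).
    var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)
    se = math.sqrt(var_d / n)
    return mean_d / se, n - 1
```

Because each participant serves as their own control, within-subject variability cancels out of the differences, which is why the paired design suits a pre/post-dialysis comparison.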
18.
Audiol Neurootol ; : 1-7, 2024 May 20.
Article in English | MEDLINE | ID: mdl-38768568

ABSTRACT

INTRODUCTION: This study aimed to verify the influence of the mode and speed of speech stimulus presentation on auditory recognition in cochlear implant (CI) users with poorer performance. METHODS: This cross-sectional observational study applied auditory speech perception tests to fifteen adults, using three different ways of presenting the stimulus in the absence of competing noise: monitored live voice (MLV); recorded speech at typical speed (RSTS); and recorded speech at slow speed (RSSS). Scores were assessed using the Percent Sentence Recognition Index (PSRI). The data were inferentially analysed using the Friedman and Wilcoxon tests with a 95% confidence interval and a 5% significance level (p < 0.05). RESULTS: The mean age was 41.1 years, the mean duration of CI use was 11.4 years, and the mean hearing threshold was 29.7 ± 5.9 dB HL. Test performance, as determined by the PSRI, was MLV = 42.4 ± 17.9%; RSTS = 20.3 ± 14.3%; RSSS = 40.6 ± 20.7%. A significant difference was identified for RSTS compared to MLV and RSSS. CONCLUSION: The mode and speed of stimulus presentation affect auditory speech recognition in CI users: comprehension was better when the tests were applied in the MLV and RSSS modalities.

19.
Audiol Neurootol ; : 1-8, 2024 May 02.
Article in English | MEDLINE | ID: mdl-38697033

ABSTRACT

INTRODUCTION: The aim of this study was to examine how bimodal stimulation affects quality of life (QOL) during the postoperative period following cochlear implantation (CI). These data could potentially provide evidence to encourage more bimodal candidates to continue hearing aid (HA) use after CI. METHODS: In this prospective study, patients completed preoperative and 1-, 3-, and 6-month post-activation QOL surveys on listening effort, speech perception, sound quality/localization, and hearing handicap. Fifteen HA users who were candidates for contralateral CI completed the study (mean age 65.6 years). RESULTS: Patients used both devices at a median rate of 97%, 97%, and 98% of the time at 1, 3, and 6 months, respectively. On average, patients' hearing handicap scores decreased by 16% at 1 month, 36% at 3 months, and 30% at 6 months. Patients' listening effort scores decreased by a mean of 10.8% at 1 month, 12.6% at 3 months, and 18.7% at 6 months. Localization significantly improved by 24.3% at 1 month and remained steady. There was no significant improvement in sound quality scores. CONCLUSION: Bimodal listeners should expect QOL to improve, and listening effort and localization are generally optimized using CI and HA compared to CI alone. Some scores improved at earlier time points than others, suggesting bimodal auditory skills may develop at different rates.

20.
Audiol Neurootol ; : 1-19, 2024 Feb 22.
Article in English | MEDLINE | ID: mdl-38387454

ABSTRACT

INTRODUCTION: For the treatment of single-sided deafness (SSD), common treatment choices include a contralateral routing of signals (CROS) hearing aid, a bone conduction device (BCD), and a cochlear implant (CI). The primary aim of this study was to compare speech understanding in noise and binaural benefits in adults with postlingual SSD between the preoperative unaided baseline, preoperative CROS and BCD trial devices, and CI, following recommendations from a consensus protocol. In addition, we investigated the effect of masker type on speech understanding. METHODS: This was a prospective study with twelve participants. Binaural effects of head shadow, squelch, summation, and spatial release from masking were assessed by measuring speech reception thresholds (SRTs) in five different spatial target-masker configurations using two different maskers: two-talker babble (TTB) and speech-shaped noise (SSN). Preoperatively, participants were assessed unaided and with CROS and BCD trial devices. After cochlear implantation, participants were assessed at 1, 3, and 6 months post-activation. RESULTS: For TTB, significant improvements in SRT with a CI relative to the preoperative unaided baseline were found in all spatial configurations. With CI at 6 months, median benefits were 7.8 dB in S(SSD)N(AH) and 5.1 dB in S(0)N(AH) (head shadow), 3.4 dB in S(0)N(0) (summation), and 4.6 dB in S(0)N(SSD) and 5.1 dB in S(AH)N(SSD) (squelch). CROS yielded a significant head-shadow benefit of 2.4 dB in S(SSD)N(AH) and a significant deterioration in squelch of 2.5 dB in S(0)N(SSD) and S(AH)N(SSD), but no summation effect. With BCD, there was a significant summation benefit of 1.5 dB, but no head-shadow or squelch effect. For SSN, significant improvements in SRT with CI compared to the preoperative unaided baseline were found in three spatial configurations. Median benefits with CI at 6 months were 8.5 dB in S(SSD)N(AH) and 4.6 dB in S(0)N(AH) (head shadow) and 1.4 dB in S(0)N(0) (summation), with no squelch benefit. CROS showed a significant head-shadow benefit of 1.7 dB in S(SSD)N(AH), but no summation effect, and a significant deterioration in squelch of 2.9 dB in S(0)N(SSD) and 3.2 dB in S(AH)N(SSD). With BCD, no binaural effect was obtained. Longitudinally, we found significant head-shadow benefits with a CI in S(SSD)N(AH) with both maskers at all postoperative intervals, and in S(0)N(AH) at 3 and 6 months post-activation. CONCLUSION: With a CI, a clear benefit for masked speech perception was observed for all binaural effects. Benefits with CROS and BCD were more limited. CROS usage was detrimental to the squelch effect.
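The dB benefits reported above are differences between two SRTs measured in the same spatial configuration: benefit = unaided SRT minus aided SRT, so a positive value means the aided condition tolerated a worse signal-to-noise ratio. A toy calculation, with the two SRT values invented so that the result matches the 7.8 dB median head-shadow benefit reported above (purely for illustration):

```python
# Hypothetical SRTs (dB SNR) in the configuration with speech toward the
# deaf side and noise toward the hearing side. Values are invented.
srt_unaided = -1.2   # preoperative, unaided
srt_ci_6mo = -9.0    # with CI at 6 months post-activation

# Lower SRT = better; the benefit is the reduction in required SNR.
head_shadow_benefit = srt_unaided - srt_ci_6mo
print(f"Head-shadow benefit: {head_shadow_benefit:.1f} dB")
```

The same subtraction, applied in each target-masker configuration, yields the summation and squelch benefits as well.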
