Results 1 - 20 of 41
1.
Neuroimage ; : 120796, 2024 Aug 15.
Article in English | MEDLINE | ID: mdl-39153523

ABSTRACT

PURPOSE: In this study, the objectification of the subjective perception of loudness was investigated using electroencephalography (EEG). In particular, the emergence of objective markers around the acoustic discomfort threshold was examined. METHODS: A cohort of 27 adults with normal hearing, aged between 18 and 30, participated in the study. The participants were presented with 500-ms noise stimuli via in-ear headphones. The acoustic signals were presented at sound levels of 55, 65, 75, 85, and 95 dB. After each stimulus, the subjects provided their subjective assessment of the perceived loudness using a colored scale on a touchscreen. EEG signals were recorded, and event-related potentials (ERPs) locked to sound onset were subsequently analyzed. RESULTS: Our findings reveal a linear dependency between the N100 component and both the sound level and the subjective loudness categorization of the sound. Additionally, the data demonstrated a nonlinear relationship between the P300 potential and both the sound level and the subjective loudness rating. The P300 potential was elicited exclusively when the stimuli had been subjectively rated as "very loud" or "too loud". CONCLUSION: The findings of the present study suggest that the subjective uncomfortable loudness level can be identified from objective neural parameters.
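Onset-locked ERPs such as the N100 described above are typically obtained by epoching the EEG around stimulus onset, baseline-correcting each epoch, and averaging across trials. A minimal sketch on synthetic data (all values illustrative; this is not the study's actual pipeline):

```python
import numpy as np

# Synthetic EEG epochs, 1 kHz sampling, stimulus onset at t = 0.
rng = np.random.default_rng(0)
fs = 1000                                  # sampling rate (Hz)
t = np.arange(-0.1, 0.5, 1 / fs)           # 100 ms baseline + 500 ms post-onset

def make_trial():
    # An N100-like negativity around 100 ms, buried in noise.
    erp = -2.0 * np.exp(-((t - 0.1) ** 2) / (2 * 0.02 ** 2))
    return erp + rng.normal(0.0, 1.0, t.size)

trials = np.stack([make_trial() for _ in range(200)])

# Baseline-correct each epoch, then average time-locked to sound onset.
baseline = trials[:, t < 0].mean(axis=1, keepdims=True)
erp_avg = (trials - baseline).mean(axis=0)

# Averaging recovers the negative deflection near 100 ms.
peak_latency = float(t[np.argmin(erp_avg)])
```

Averaging cancels trial-to-trial noise roughly as 1/sqrt(n trials), which is why single-trial deflections invisible in raw EEG emerge in the ERP.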

2.
Audiol Neurootol ; 28(4): 262-271, 2023.
Article in English | MEDLINE | ID: mdl-36791686

ABSTRACT

INTRODUCTION: Contralateral routing of signals (CROS) overcomes the head shadow effect by redirecting speech signals from the contralateral ear to the better-hearing cochlear implant (CI) ear. Here we tested the performance of an adaptive monaural beamformer (MB) and a fixed binaural beamformer (BB) using the CROS system of Advanced Bionics. METHODS: In a group of 17 unilateral CI users, we evaluated the benefits of MB and BB for speech recognition by measuring speech reception threshold (SRT) with and without beamforming. MB and BB were additionally evaluated with signal-to-noise ratio (SNR) measurements using a KEMAR manikin. We also assessed the effect of residual hearing in the CROS ear on the benefits of MB and BB. Speech was delivered in front of the listener in a background of homogeneous 8-talker babble noise. RESULTS: With CI-CROS in omnidirectional settings with the T-mic active on the CI as a reference, BB significantly improved SRT by 1.4 dB, whereas MB yielded no significant improvements. The difference in effects on SRT between the two beamformers was, however, not significant. SNR effects were substantially larger, at 2.1 dB for MB and 5.8 dB for BB. CI-CROS with default omnidirectional settings also improved SRT and SNR by 1 dB over CI alone. Residual hearing did not significantly affect beamformer performance. DISCUSSION: We recommend the use of BB over MB for CI-CROS users. Residual hearing in the CROS ear is not a limiting factor for fitting a CROS device, although a bimodal option should be considered.


Subjects
Cochlear Implantation, Cochlear Implants, Hearing Aids, Speech Perception, Hearing, Noise
3.
Audiol Neurootol ; 27(1): 75-82, 2022.
Article in English | MEDLINE | ID: mdl-33849023

ABSTRACT

INTRODUCTION: Contralateral routing of signals (CROS) can be used to eliminate the head shadow effect. In unilateral cochlear implant (CI) users, CROS can be achieved with placement of a microphone on the contralateral ear, with the signal streamed to the CI ear. CROS was originally developed for unilateral CI users without any residual hearing in the nonimplanted ear. However, the criteria for implantation are becoming progressively looser, and the nonimplanted ear can have substantial residual hearing. In this study, we assessed how residual hearing in the contralateral ear influences CROS effectiveness in unilateral CI users. METHODS: In a group of unilateral CI users (N = 17) with varying amounts of residual hearing, we deployed free-field speech tests to determine the effects of CROS on the speech reception threshold (SRT) in amplitude-modulated noise. We compared 2 spatial configurations: (1) speech presented to the CROS ear and noise to the CI ear (S_CROS N_CI) and (2) the reverse (S_CI N_CROS). RESULTS: Compared with the use of CI only, CROS improved the SRT by 6.4 dB on average in the S_CROS N_CI configuration. In the S_CI N_CROS configuration, however, CROS deteriorated the SRT by 8.4 dB. The benefit and disadvantage of CROS both decreased significantly with the amount of residual hearing. CONCLUSION: CROS users need careful instructions about the potential disadvantage when listening in conditions where the CROS ear mainly receives noise, especially if they have residual hearing in the contralateral ear. The CROS device should be turned off when it is on the noise side (S_CI N_CROS). CI users with residual hearing in the CROS ear also should understand that contralateral amplification (i.e., a bimodal hearing solution) will yield better results than a CROS device. Unilateral CI users with no functional contralateral hearing should be considered the primary target population for a CROS device.


Subjects
Cochlear Implantation, Cochlear Implants, Hearing Aids, Sound Localization, Speech Perception, Disease Progression, Hearing, Humans
4.
Sensors (Basel) ; 22(8)2022 Apr 10.
Article in English | MEDLINE | ID: mdl-35458885

ABSTRACT

Cough is a very common symptom and the most frequent reason for seeking medical advice. Optimized care inevitably requires suitable recording of this symptom and automatic processing. This study provides an updated, exhaustive quantitative review of the field of cough sound acquisition, automatic detection of coughs in longer audio sequences, and automatic classification of their nature or of the underlying disease. Related studies were analyzed, and metrics were extracted and processed to create a quantitative characterization of the state of the art and its trends. A list of objective criteria was established to select a subset of the most complete detection studies with a view to deployment in clinical practice. One hundred and forty-four studies were short-listed, and a picture of the state-of-the-art technology is drawn. The trends show an increasing number of classification studies, growing dataset sizes (in part from crowdsourcing), a rapid increase in COVID-19 studies, the prevalence of smartphones and wearable sensors for acquisition, and a rapid expansion of deep learning. Finally, a subset of 12 detection studies is identified as the most complete. An unequaled quantitative overview is presented. The field shows remarkable dynamism, boosted by research on COVID-19 diagnosis, and is well adapted to mobile health.
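Many detection systems of the kind surveyed start from a simple idea: locate high-energy events in the audio stream before attempting classification. A toy sketch of frame-energy event detection on synthetic audio (frame length and threshold are illustrative, not taken from any reviewed study):

```python
import numpy as np

def detect_events(x, fs, frame_ms=20, thresh_db=-20.0):
    """Toy event detector: flag frames whose short-time energy exceeds a
    threshold relative to the loudest frame. Illustrative only, not any
    specific published cough-detection system."""
    n = int(fs * frame_ms / 1000)
    nframes = len(x) // n
    frames = x[:nframes * n].reshape(nframes, n)
    energy = (frames ** 2).mean(axis=1)
    energy_db = 10 * np.log10(energy / energy.max() + 1e-12)
    return energy_db > thresh_db             # boolean mask of "active" frames

# Synthetic recording: near-silence with one 100 ms burst in the middle.
fs = 16000
x = 0.001 * np.random.default_rng(1).normal(size=fs)         # 1 s of noise floor
x[8000:9600] += np.sin(2 * np.pi * 300 * np.arange(1600) / fs)  # the "cough"

mask = detect_events(x, fs)                  # frames 25-29 flagged
```

Real systems replace the energy feature with spectrogram or learned features, but the segment-then-classify structure is the same.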


Subjects
COVID-19, Crowdsourcing, COVID-19/diagnosis, COVID-19 Testing, Cough/diagnosis, Humans, Sound
5.
Neuroimage ; 244: 118575, 2021 12 01.
Article in English | MEDLINE | ID: mdl-34517127

ABSTRACT

Recent functional MRI (fMRI) studies have highlighted differences in responses to natural sounds along the rostral-caudal axis of the human superior temporal gyrus. However, due to the indirect nature of the fMRI signal, it has been challenging to relate these fMRI observations to actual neuronal response properties. To bridge this gap, we present a forward model of the fMRI responses to natural sounds combining a neuronal model of the auditory cortex with physiological modeling of the hemodynamic BOLD response. Neuronal responses are modeled with a dynamic recurrent firing rate model, reflecting the tonotopic, hierarchical processing in the auditory cortex along with the spectro-temporal tradeoff in the rostral-caudal axis of its belt areas. To link modeled neuronal response properties with human fMRI data in the auditory belt regions, we generated a space of neuronal models, which differed parametrically in spectral and temporal specificity of neuronal responses. Then, we obtained predictions of fMRI responses through a biophysical model of the hemodynamic BOLD response (P-DCM). Using Bayesian model comparison, our results showed that the hemodynamic BOLD responses of the caudal belt regions in the human auditory cortex were best explained by modeling faster temporal dynamics and broader spectral tuning of neuronal populations, while rostral belt regions were best explained through fine spectral tuning combined with slower temporal dynamics. These results support the hypotheses of complementary neural information processing along the rostral-caudal axis of the human superior temporal gyrus.
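The neuronal half of such a forward model can be reduced, for intuition, to a single recurrent firing-rate unit. The sketch below is a deliberately simplified stand-in (not the paper's tonotopic network or its P-DCM hemodynamic stage) that integrates tau * dr/dt = -r + f(w*r + s) with a rectifying nonlinearity f:

```python
import numpy as np

def simulate_rate_unit(stim, tau=0.01, w=0.5, dt=0.001):
    """One recurrent firing-rate unit, Euler-integrated:
    tau * dr/dt = -r + max(w*r + stim, 0)."""
    r = 0.0
    rates = []
    for s in stim:
        drive = max(w * r + s, 0.0)          # rectified recurrent + external input
        r += dt / tau * (-r + drive)         # forward Euler step
        rates.append(r)
    return np.array(rates)

# Step stimulus: the rate relaxes toward the fixed point r* = s / (1 - w).
stim = np.concatenate([np.zeros(100), np.ones(400)])
rates = simulate_rate_unit(stim)
steady = rates[-1]                            # ~= 1 / (1 - 0.5) = 2
```

The recurrent weight w sets both the gain (1/(1-w)) and the effective time constant (tau/(1-w)), which is the kind of parameter the model space in the study varies to trade temporal speed against response selectivity.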


Subjects
Auditory Cortex/physiology, Hemodynamics/physiology, Neurons/physiology, Bayes Theorem, Feedback, Physiological, Feedback, Psychological, Humans, Magnetic Resonance Imaging, Models, Neurological, Sensation, Sound, Temporal Lobe/physiology
6.
Audiol Neurootol ; 26(3): 188-194, 2021.
Article in English | MEDLINE | ID: mdl-33461201

ABSTRACT

PURPOSE: Cochlear implant (CI) sound-processing strategies are important to the overall success of a CI recipient. This study aimed to determine the effects of 2 Advanced Bionics (AB) CI-processing strategies, Optima-S and Optima-P, on speech recognition outcomes in adult CI users. METHODS: A retrospective chart review was completed at a tertiary academic medical center. Seventeen post-lingually deafened adult CI users (median age = 58.6 years; age range: 23.5-78.9 years) with long-term use of a paired sound-processing strategy (Optima-P) were reprogrammed with a sequential strategy (Optima-S). Demographic data and speech recognition scores with pre- and post-intervention analyses were collected and compared with respect to the 95% confidence interval for common CI word and sentence recognition tests. RESULTS: Using the Optima-S sound-processing strategy, all patients (100%) performed equivalent to or better on word and sentence testing than with Optima-P. More specifically, 17.6, 41.2, and 58.8% of the patients performed above the 95% confidence interval for speech recognition conditions of monosyllabic words, sentences in quiet, and sentences in noise, respectively. All patients (100%) selected Optima-S as their preferred strategy for future CI use. CONCLUSION: Speech recognition performance with the Optima-S processing strategy was stable or improved compared to results with Optima-P in all tested conditions, and all patients subjectively preferred Optima-S. Given these results, CI clinicians should consider programming AB CI users with the Optima-S sound-processing strategy to optimize overall speech recognition performance.


Subjects
Acoustic Stimulation/methods, Cochlear Implantation, Cochlear Implants, Speech Perception/physiology, Adult, Aged, Female, Humans, Male, Middle Aged, Retrospective Studies, Speech, Young Adult
7.
Cereb Cortex ; 30(7): 3895-3909, 2020 06 01.
Article in English | MEDLINE | ID: mdl-32090251

ABSTRACT

Cortical inhibition is essential for brain activity and behavior. Yet, the mechanisms that modulate cortical inhibition and their impact on sensory processing remain less understood. Synaptically released zinc, a neuromodulator released by cortical glutamatergic synaptic vesicles, has emerged as a powerful modulator of sensory processing and behavior. Despite the puzzling finding that the vesicular zinc transporter (ZnT3) mRNA is expressed in cortical inhibitory interneurons, the actions of synaptic zinc in cortical inhibitory neurotransmission remain unknown. Using in vitro electrophysiology and optogenetics in mouse brain slices containing the layer 2/3 (L2/3) of auditory cortex, we discovered that synaptic zinc increases the quantal size of inhibitory GABAergic neurotransmission mediated by somatostatin (SOM)- but not parvalbumin (PV)-expressing neurons. Using two-photon imaging in awake mice, we showed that synaptic zinc is required for the effects of SOM- but not PV-mediated inhibition on frequency tuning of principal neurons. Thus, cell-specific zinc modulation of cortical inhibition regulates frequency tuning.


Subjects
Auditory Cortex/metabolism, Neural Inhibition/physiology, Neurons/metabolism, Synapses/metabolism, Zinc/metabolism, Animals, Auditory Cortex/physiology, Cation Transport Proteins/genetics, In Vitro Techniques, Inhibitory Postsynaptic Potentials, Interneurons/metabolism, Mice, Mice, Knockout, Optical Imaging, Optogenetics, Parvalbumins/metabolism, Patch-Clamp Techniques, RNA, Messenger/metabolism, Somatostatin/metabolism, Synaptic Transmission, Trace Elements/pharmacology, Zinc/pharmacology, gamma-Aminobutyric Acid/metabolism
8.
Int J Mol Sci ; 22(9)2021 Apr 26.
Article in English | MEDLINE | ID: mdl-33925933

ABSTRACT

The LIM homeodomain transcription factor ISL1 is essential for different aspects of neuronal development and maintenance. In order to study the role of ISL1 in the auditory system, we generated a transgenic mouse (Tg) expressing Isl1 under the Pax2 promoter control. We previously reported a progressive age-related decline in hearing and abnormalities in the inner ear, medial olivocochlear system, and auditory midbrain of these Tg mice. In this study, we investigated how Isl1 overexpression affects sound processing by the neurons of the inferior colliculus (IC). We recorded extracellular neuronal activity and analyzed the responses of IC neurons to broadband noise, clicks, pure tones, two-tone stimulation and frequency-modulated sounds. We found that Tg animals showed stronger inhibition, as revealed by two-tone stimulation; they exhibited a wider dynamic range, lower spontaneous firing rate, longer first spike latency and, in the processing of frequency-modulated sounds, a prevalence of high-frequency inhibition. Functional changes were accompanied by a decreased number of calretinin- and parvalbumin-positive neurons, and an increased expression of vesicular GABA/glycine transporter and calbindin in the IC of Tg mice, compared to wild-type animals. The results further characterize abnormal sound processing in the IC of Tg mice and demonstrate that the major changes occur on the side of inhibition.
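Response measures such as first spike latency and spontaneous firing rate can be computed directly from recorded spike times. A minimal illustrative sketch (toy spike train, not the study's data):

```python
import numpy as np

def first_spike_latency(spike_times, stim_onset):
    """Latency (s) from stimulus onset to the first following spike."""
    post = spike_times[spike_times >= stim_onset]
    return post[0] - stim_onset if post.size else np.nan

def spontaneous_rate(spike_times, window):
    """Mean firing rate (spikes/s) in a stimulus-free window (t0, t1)."""
    t0, t1 = window
    n = np.sum((spike_times >= t0) & (spike_times < t1))
    return n / (t1 - t0)

# Toy spike train (s): sparse spontaneous spikes, then an onset response at t = 1.
spikes = np.array([0.12, 0.45, 0.81, 1.013, 1.020, 1.031, 1.050])
lat = first_spike_latency(spikes, stim_onset=1.0)     # 13 ms
rate = spontaneous_rate(spikes, window=(0.0, 1.0))    # 3 spikes / 1 s
```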


Subjects
Auditory Perception/genetics, Inferior Colliculi/physiology, LIM-Homeodomain Proteins/genetics, Transcription Factors/genetics, Animals, Auditory Perception/physiology, Auditory Threshold/physiology, Brain/physiology, Evoked Potentials, Auditory, Brain Stem/physiology, Female, Gene Expression/genetics, Hearing, Humans, Inferior Colliculi/metabolism, LIM-Homeodomain Proteins/metabolism, Male, Mice, Mice, Transgenic, Neurons/physiology, PAX2 Transcription Factor/genetics, Promoter Regions, Genetic/genetics, Transcription Factors/metabolism
9.
Exp Brain Res ; 236(3): 733-743, 2018 03.
Article in English | MEDLINE | ID: mdl-29306985

ABSTRACT

Autism spectrum disorder (ASD) is diverse, manifesting in a wide array of phenotypes. However, a consistent theme is reduced communicative and social abilities. Auditory processing deficits have been shown in individuals with ASD-these deficits may play a role in the communication difficulties ASD individuals experience. Specifically, children with ASD have delayed neural timing and poorer tracking of a changing pitch relative to their typically developing peers. Given that accurate processing of sound requires highly coordinated and consistent neural activity, we hypothesized that these auditory processing deficits stem from a failure to respond to sound in a consistent manner. Therefore, we predicted that individuals with ASD have reduced neural stability in response to sound. We recorded the frequency-following response (FFR), an evoked response that mirrors the acoustic features of its stimulus, of high-functioning children with ASD age 7-13 years. Evident across multiple speech stimuli, children with ASD have less stable FFRs to speech sounds relative to their typically developing peers. This reduced auditory stability could contribute to the language and communication profiles observed in individuals with ASD.
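Trial-to-trial neural stability of the kind examined here is often quantified by correlating sub-averages of the response across trial splits. A simplified sketch on synthetic trials (the study's exact metric may differ):

```python
import numpy as np

def response_stability(trials):
    """Stability as the correlation between the averages of odd- and
    even-numbered trials: a common consistency proxy, illustrative only."""
    odd = trials[0::2].mean(axis=0)
    even = trials[1::2].mean(axis=0)
    return float(np.corrcoef(odd, even)[0, 1])

rng = np.random.default_rng(2)
t = np.linspace(0.0, 0.2, 2000)
ffr_like = np.sin(2 * np.pi * 100 * t)       # 100 Hz FFR-like component

# "Stable" responder: low trial-to-trial noise; "unstable": high noise.
stable = np.stack([ffr_like + rng.normal(0, 0.5, t.size) for _ in range(100)])
unstable = np.stack([ffr_like + rng.normal(0, 5.0, t.size) for _ in range(100)])

s_hi = response_stability(stable)            # near 1
s_lo = response_stability(unstable)          # markedly lower
```

A reduced split-half correlation means the same stimulus evokes a different waveform from trial to trial, which is the operational sense of "reduced neural stability" above.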


Subjects
Auditory Perception/physiology, Autism Spectrum Disorder/physiopathology, Adolescent, Child, Female, Humans, Male, Speech Perception/physiology
10.
Sensors (Basel) ; 18(3)2018 Mar 07.
Article in English | MEDLINE | ID: mdl-29518927

ABSTRACT

Pipeline inspection is a topic of particular interest to companies, especially defect sizing, which allows them to avoid subsequent costly repairs to their equipment. One solution is to use ultrasonic waves sensed through Electro-Magnetic Acoustic Transducer (EMAT) actuators. The main advantage of this technology is that no direct contact with the surface of the material under investigation is needed, although the material must be conductive. Of specific interest is meander-line-coil-based Lamb wave generation, since the directivity of the waves allows a study based on the circumferential wrap-around received signal. However, the variety of defect sizes changes the behavior of the signal as it passes through the pipeline. It is therefore necessary to apply advanced techniques based on Smart Sound Processing (SSP). These methods involve extracting useful information from the signals sensed with EMAT at different frequencies to obtain nonlinear estimates of the depth of the defect, and selecting the features that best estimate the profile of the pipeline. The proposed technique has been tested using both simulated and real signals in steel pipelines, obtaining good results in terms of Root Mean Square Error (RMSE).
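The RMSE metric named above is straightforward to compute. A short sketch with hypothetical depth values (not data from the study):

```python
import numpy as np

def rmse(estimated, actual):
    """Root Mean Square Error between estimated and true defect depths."""
    e = np.asarray(estimated, dtype=float)
    a = np.asarray(actual, dtype=float)
    return float(np.sqrt(np.mean((e - a) ** 2)))

# Hypothetical depth estimates (mm) against ground-truth defect depths.
true_depth = [1.0, 2.0, 3.0, 4.0]
est_depth = [1.1, 1.8, 3.2, 3.9]
err = rmse(est_depth, true_depth)            # ~0.158 mm
```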

11.
Cogn Affect Behav Neurosci ; 16(5): 940-61, 2016 10.
Article in English | MEDLINE | ID: mdl-27473463

ABSTRACT

Both the imagery literature and grounded models of language comprehension emphasize the tight coupling of high-level cognitive processes, such as forming a mental image of something or language understanding, and low-level sensorimotor processes in the brain. In an electrophysiological study, imagery and language processes were directly compared and the sensory associations of processing linguistically implied sounds or imagined sounds were investigated. Participants read sentences describing auditory events (e.g., "The dog barks"), heard a physical (environmental) sound, or had to imagine such a sound. We examined the influence of the 3 sound conditions (linguistic, physical, imagery) on subsequent physical sound processing. Event-related potential (ERP) difference waveforms indicated that in all 3 conditions, prime compatibility influenced physical sound processing. The earliest compatibility effect was observed in the physical condition, starting in the 80-110 ms time interval with a negative maximum over occipital electrode sites. In contrast, the linguistic and the imagery condition elicited compatibility effects starting in the 180-220 ms time window with a maximum over central electrode sites. In line with the ERPs, the analysis of the oscillatory activity showed that compatibility influenced early theta and alpha band power changes in the physical, but not in the linguistic and imagery, condition. These dissociations were further confirmed by dipole localization results showing a clear separation between the source of the compatibility effect in the physical sound condition (superior temporal area) and the source of the compatibility effect triggered by the linguistically implied sounds or the imagined sounds (inferior temporal area). Implications for grounded models of language understanding are discussed.


Subjects
Auditory Perception/physiology, Brain/physiology, Linguistics, Acoustic Stimulation, Alpha Rhythm, Electrooculography, Evoked Potentials, Female, Humans, Imagination/physiology, Male, Neuropsychological Tests, Theta Rhythm, Time Factors, Young Adult
12.
Early Child Educ J ; 44(1): 11-19, 2016 Feb 01.
Article in English | MEDLINE | ID: mdl-26839494

ABSTRACT

This intervention study investigated the growth of letter sound reading and growth of consonant-vowel-consonant (CVC) word decoding abilities for a representative sample of 41 US children in preschool settings. Specifically, the study evaluated the effectiveness of a 3-step letter-sound teaching intervention in teaching pre-school children to decode, or read, single letters. The study compared a control group, which received the preschool's standard letter-sound instruction, to an intervention group which received a 3-step letter-sound instruction intervention. The children's growth in letter-sound reading and CVC word decoding abilities were assessed at baseline and 2, 4, 6 and 8 weeks. When compared to the control group, the growth of letter-sound reading ability was slightly higher for the intervention group. The rate of increase in letter-sound reading was significantly faster for the intervention group. In both groups, too few children learned to decode any CVC words to allow for analysis. Results of this study support the use of the intervention strategy in preschools for teaching children print-to-sound processing.

13.
J Neurophysiol ; 113(1): 307-27, 2015 Jan 01.
Article in English | MEDLINE | ID: mdl-25298387

ABSTRACT

We recorded from middle lateral belt (ML) and primary (A1) auditory cortical neurons while animals discriminated amplitude-modulated (AM) sounds and also while they sat passively. Engagement in AM discrimination improved ML and A1 neurons' ability to discriminate AM with both firing rate and phase-locking; however, task engagement affected neural AM discrimination differently in the two fields. The results suggest that these two areas utilize different AM coding schemes: a "single mode" in A1 that relies on increased activity for AM relative to unmodulated sounds and a "dual-polar mode" in ML that uses both increases and decreases in neural activity to encode modulation. In the dual-polar ML code, nonsynchronized responses might play a special role. The results are consistent with findings in the primary and secondary somatosensory cortices during discrimination of vibrotactile modulation frequency, implicating a common scheme in the hierarchical processing of temporal information among different modalities. The time course of activity differences between behaving and passive conditions was also distinct in A1 and ML and may have implications for auditory attention. At modulation depths ≥ 16% (approximately behavioral threshold), A1 neurons' improvement in distinguishing AM from unmodulated noise is relatively constant or improves slightly with increasing modulation depth. In ML, improvement during engagement is most pronounced near threshold and disappears at highly suprathreshold depths. This ML effect is evident later in the stimulus, and mainly in nonsynchronized responses. This suggests that attention-related increases in activity are stronger or longer-lasting for more difficult stimuli in ML.
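Phase-locking of spikes to amplitude modulation is commonly summarized by vector strength, where 1 indicates perfect locking to the modulation cycle and 0 indicates none. A minimal sketch on toy spike trains (illustrative only; the study's firing-rate and synchrony analyses are more elaborate):

```python
import numpy as np

def vector_strength(spike_times, mod_freq):
    """Vector strength: magnitude of the mean resultant of spike phases
    relative to the modulation cycle (1 = perfect locking, 0 = none)."""
    phases = 2 * np.pi * mod_freq * np.asarray(spike_times)
    return float(np.hypot(np.cos(phases).mean(), np.sin(phases).mean()))

f_am = 10.0                                   # modulation frequency (Hz)

# Perfectly locked: one spike at the same phase of every modulation cycle.
locked = np.arange(20) / f_am + 0.02
# Unlocked: uniformly random spike times over the same 2 s.
random_spikes = np.random.default_rng(3).uniform(0, 2, 200)

vs_locked = vector_strength(locked, f_am)     # ~1.0
vs_random = vector_strength(random_spikes, f_am)  # near 0
```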


Subjects
Auditory Cortex/physiology, Discrimination, Psychological/physiology, Loudness Perception/physiology, Neurons/physiology, Acoustic Stimulation, Action Potentials, Animals, Female, Macaca mulatta, Male, Microelectrodes, Neuropsychological Tests, ROC Curve, Signal Processing, Computer-Assisted, Time Factors
14.
Clin Neurophysiol ; 162: 248-261, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38492973

ABSTRACT

OBJECTIVE: We investigated how infant mismatch responses (MMRs), which have the potential for providing information on auditory discrimination abilities, could predict subsequent development of pre-reading skills and the risk for familial dyslexia. METHODS: We recorded MMRs to vowel, duration, and frequency deviants in pseudo-words at birth and 28 months in a sample over-represented by infants with dyslexia risk. We examined MMRs' associations with pre-reading skills at 28 months and 4-5 years and compared the results in subgroups with vs. without dyslexia risk. RESULTS: Larger positive MMR (P-MMR) at birth was found to be associated with better serial naming. In addition, increased mismatch negativity (MMN) and late discriminative negativity (LDN), and decreased P-MMR at 28 months overall, were shown to be related to better pre-reading skills. The associations were influenced by dyslexia risk, which was also linked to poor pre-reading skills. CONCLUSIONS: Infant MMRs, providing information about the maturity of the auditory system, are associated with the development of pre-reading skills. Speech-processing deficits may contribute to deficits in language acquisition observed in dyslexia. SIGNIFICANCE: Infant MMRs could work as predictive markers of atypical linguistic development during early childhood. Results may help in planning preventive and rehabilitation interventions in children at risk of learning impairments.


Subjects
Dyslexia, Language Development, Humans, Dyslexia/physiopathology, Dyslexia/diagnosis, Male, Female, Child, Preschool, Infant, Speech Perception/physiology, Evoked Potentials, Auditory/physiology, Electroencephalography/methods, Acoustic Stimulation/methods, Phonetics
15.
Hear Res ; 428: 108677, 2023 02.
Article in English | MEDLINE | ID: mdl-36580732

ABSTRACT

Perception of speech requires sensitivity to features, such as amplitude and frequency modulations, that are often temporally regular. Previous work suggests age-related changes in neural responses to temporally regular features, but little work has focused on age differences for different types of modulations. We recorded magnetoencephalography in younger (21-33 years) and older adults (53-73 years) to investigate age differences in neural responses to slow (2-6 Hz sinusoidal and non-sinusoidal) modulations in amplitude, frequency, or combined amplitude and frequency. Audiometric pure-tone average thresholds were elevated in older compared to younger adults, indicating subclinical hearing impairment in the recruited older-adult sample. Neural responses to sound onset (independent of temporal modulations) were increased in magnitude in older compared to younger adults, suggesting hyperresponsivity and a loss of inhibition in the aged auditory system. Analyses of neural activity to modulations revealed greater neural synchronization with amplitude, frequency, and combined amplitude-frequency modulations for older compared to younger adults. This potentiated response generalized across different degrees of temporal regularity (sinusoidal and non-sinusoidal), although neural synchronization was generally lower for non-sinusoidal modulation. Despite greater synchronization, sustained neural activity was reduced in older compared to younger adults for sounds modulated both sinusoidally and non-sinusoidally in frequency. Our results suggest age differences in the sensitivity of the auditory system to features present in speech and other natural sounds.


Subjects
Auditory Perception, Hearing Loss, Humans, Aged, Auditory Perception/physiology, Sound, Magnetoencephalography, Acoustic Stimulation/methods
16.
Biol Psychol ; 177: 108512, 2023 02.
Article in English | MEDLINE | ID: mdl-36724810

ABSTRACT

Past work has shown that when a peripheral sound captures our attention, it activates the contralateral visual cortex as revealed by an event-related potential component labelled the auditory-evoked contralateral occipital positivity (ACOP). This cross-modal activation of the visual cortex has been observed even when the sounds were not relevant to the ongoing task (visual or auditory), suggesting that peripheral sounds automatically activate the visual cortex. However, it is unclear whether top-down factors such as visual working memory (VWM) load and endogenous attention, which modulate the impact of task-irrelevant information, may modulate this spatially-specific component. Here, we asked participants to perform a lateralized VWM task (change detection), whose performance is supported by both endogenous spatial attention and VWM storage. A peripheral sound that was unrelated to the ongoing task was delivered during the retention interval. The amplitude of sound-elicited ACOP was analyzed as a function of the spatial correspondence with the cued hemifield, and of the memory array set-size. The typical ACOP modulation was observed over parieto-occipital sites in the 280-500 ms time window after sound onset. Its amplitude was not affected by VWM load but was modulated when the location of the sound did not correspond to the hemifield (right or left) that was cued for the change detection task. Our results suggest that sound-elicited activation of visual cortices, as reflected in the ACOP modulation, is unaffected by visual working memory load. However, endogenous spatial attention affects the ACOP, challenging the hypothesis that it reflects an automatic process.


Subjects
Evoked Potentials, Auditory, Visual Cortex, Adult, Female, Humans, Male, Young Adult, Attention/physiology, Evoked Potentials, Auditory/physiology, Memory, Short-Term/physiology, Visual Cortex/physiology
17.
J Autism Dev Disord ; 53(8): 3257-3271, 2023 Aug.
Article in English | MEDLINE | ID: mdl-35672616

ABSTRACT

Efficient neural encoding of sound plays a critical role in speech and language, and when impaired, may have reverberating effects on communication skills. This study investigated disruptions to neural processing of temporal and spectral properties of speech in individuals with ASD and their parents and found evidence of inefficient temporal encoding of speech sounds in both groups. The ASD group further demonstrated less robust neural representation of spectral properties of speech sounds. Associations between neural processing of speech sounds and language-related abilities were evident in both groups. Parent-child associations were also detected in neural pitch processing. Together, results suggest that atypical neural processing of speech sounds is a heritable ingredient contributing to the ASD language phenotype.


Subjects
Autism Spectrum Disorder, Speech Perception, Humans, Phonetics, Speech, Language
18.
bioRxiv ; 2023 Dec 23.
Article in English | MEDLINE | ID: mdl-38187767

ABSTRACT

Objective: Cochlear implants (CIs) are auditory prostheses for individuals with severe to profound hearing loss, offering substantial but incomplete restoration of hearing function by stimulating the auditory nerve with electrodes. However, progress in CI performance and innovation has been constrained by the inability to rapidly test multiple sound processing strategies. Current research interfaces provided by major CI manufacturers have limitations in supporting a wide range of auditory experiments because of limited portability, programming difficulties, and the lack of support for direct comparison between sound processing algorithms. To address these limitations, we present the CompHEAR research platform, designed specifically for the Cochlear Implant Hackathon, enabling researchers to conduct diverse auditory experiments at large scale. Study Design: Quasi-experimental. Setting: Virtual. Methods: CompHEAR is an open-source, user-friendly platform that offers flexibility and ease of customization, allowing researchers to set up a broad range of auditory experiments. CompHEAR employs a vocoder to simulate novel sound coding strategies for CIs. It distributes listening tasks evenly among participants and delivers real-time metrics for evaluation. The software architecture underpins the platform's flexibility in experimental design and its wide range of applications in sound processing research. Results: Performance testing of the CompHEAR platform ensured that it could support at least 10,000 concurrent users. The platform was successfully deployed during the COVID-19 pandemic and enabled global collaboration for the CI Hackathon (www.cihackathon.com). Conclusion: The CompHEAR platform is a useful research tool that permits comparing diverse signal processing strategies across a variety of auditory tasks with crowdsourced judging. Its versatility, scalability, and ease of use can enable further research aimed at advancing cochlear implant performance and improving patient outcomes.
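The vocoder simulation the abstract mentions can be illustrated with a minimal sketch: a noise vocoder splits the input into log-spaced frequency bands, extracts each band's envelope, and uses the envelope to modulate band-limited noise, approximating the information a CI listener receives. The channel count, band edges, and envelope cutoff below are illustrative assumptions, not CompHEAR's actual parameters.

```python
import numpy as np

def noise_vocode(signal, fs, n_channels=8, f_lo=100.0, f_hi=7000.0, env_cut=160.0):
    """Crude FFT-based noise vocoder (illustrative, not CompHEAR's implementation).

    Splits `signal` into `n_channels` log-spaced bands between f_lo and f_hi,
    smooths each band's rectified envelope, and uses it to amplitude-modulate
    band-limited noise in the same band.
    """
    n = len(signal)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    spec = np.fft.rfft(signal)
    noise_spec = np.fft.rfft(np.random.default_rng(0).standard_normal(n))
    edges = np.logspace(np.log10(f_lo), np.log10(f_hi), n_channels + 1)
    # Simple moving-average kernel acts as the envelope low-pass filter
    win = max(1, int(fs / env_cut))
    kernel = np.ones(win) / win
    out = np.zeros(n)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (freqs >= lo) & (freqs < hi)
        band = np.fft.irfft(spec * mask, n)          # band-limited signal
        env = np.convolve(np.abs(band), kernel, mode="same")  # smoothed envelope
        carrier = np.fft.irfft(noise_spec * mask, n)  # band-limited noise
        out += env * carrier
    return out
```

Swapping the band layout or envelope cutoff here is the kind of sound-coding variation a platform like CompHEAR is built to compare across listeners.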

19.
Neurosci Biobehav Rev ; 132: 61-75, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34822879

ABSTRACT

The auditory system provides us with extremely rich and precise information about the outside world. Once a sound reaches our ears, the acoustic information it carries travels from the cochlea all the way to the auditory cortex, where its complexity and nuances are integrated. In the auditory cortex, functional circuits are formed by subpopulations of intermingled excitatory and inhibitory cells. In this review, we discuss recent evidence of the specific contributions of inhibitory neurons in sound processing and integration. We first examine intrinsic properties of three main classes of inhibitory interneurons in the auditory cortex. Then, we describe how inhibition shapes the responsiveness of the auditory cortex to sound. Finally, we discuss how inhibitory interneurons contribute to the sensation and perception of sounds. Altogether, this review points out the crucial role of cortical inhibitory interneurons in integrating information about the context, history, or meaning of a sound. It also highlights open questions to be addressed for increasing our understanding of the staggering complexity leading to the subtlest auditory perception.


Subjects
Auditory Cortex , Acoustic Stimulation , Auditory Cortex/physiology , Auditory Perception/physiology , Interneurons
20.
Data Brief ; 42: 108091, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35392615

ABSTRACT

A speech emotion recognition system determines a speaker's emotional state by analyzing his or her speech audio signal. It is an essential yet challenging task in human-computer interaction systems and is one of the most demanding areas of research using artificial intelligence and deep machine learning architectures. Despite being the world's seventh most widely spoken language, Bangla is still classified as one of the low-resource languages for speech emotion recognition tasks because of inadequate availability of data. There is an apparent lack of speech emotion recognition datasets for this type of research in the Bangla language. This article presents a Bangla language-based emotional speech-audio recognition dataset, BanglaSER, to address this problem. It consists of speech-audio data from 34 participating speakers from diverse age groups between 19 and 47 years, with a balanced 17 male and 17 female nonprofessional participating actors. The dataset contains 1467 Bangla speech-audio recordings of five rudimentary human emotional states, namely angry, happy, neutral, sad, and surprise. Three trials were conducted for each emotional state. Hence, the total number of recordings is 3 statements × 3 repetitions × 4 emotional states (angry, happy, sad, and surprise) × 34 participating speakers = 1224 recordings, plus 3 statements × 3 repetitions × 1 emotional state (neutral) × 27 participating speakers = 243 recordings, for a total of 1467 recordings. The BanglaSER dataset was created by recording speech audio on smartphones and laptops, with a balanced number of recordings in each category and evenly distributed male and female actors, and would serve as an essential training dataset for Bangla speech emotion recognition models in terms of generalization. 
BanglaSER is compatible with various deep learning architectures such as convolutional neural networks, long short-term memory networks, gated recurrent units, Transformers, etc. The dataset is available at https://data.mendeley.com/datasets/t9h6p943xy/5 and can be used for research purposes.
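The recording counts reported in the abstract can be verified with a few lines of arithmetic (a simple sanity check, not part of the dataset's tooling):

```python
statements, repetitions = 3, 3

# Four emotions (angry, happy, sad, surprise) recorded by all 34 speakers
four_emotions = statements * repetitions * 4 * 34

# The neutral state was recorded by only 27 of the 34 speakers
neutral = statements * repetitions * 1 * 27

total = four_emotions + neutral
print(four_emotions, neutral, total)  # 1224 243 1467
```

The asymmetry (27 rather than 34 speakers for neutral) is why the total is 1467 rather than the 1530 a fully balanced design would yield.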
