Results 1 - 20 of 26
1.
J Assoc Res Otolaryngol ; 24(6): 607-617, 2023 Dec.
Article in English | MEDLINE | ID: mdl-38062284

ABSTRACT

OBJECTIVES: Cochlear implant (CI) users exhibit large variability in understanding speech in noise. Past work in CI users found that spectral and temporal resolution correlate with speech-in-noise ability, but a large portion of variance remains unexplained. Recent work on normal-hearing listeners showed that the ability to group temporally and spectrally coherent tones in a complex auditory scene predicts speech-in-noise ability independently of the audiogram, highlighting a central mechanism for auditory scene analysis that contributes to speech-in-noise understanding. The current study examined whether this auditory grouping ability also contributes to speech-in-noise understanding in CI users. DESIGN: Forty-seven post-lingually deafened CI users were tested with psychophysical measures of spectral and temporal resolution, a stochastic figure-ground task that depends on detecting a figure by grouping multiple fixed-frequency elements against a random background, and a sentence-in-noise measure. Multiple linear regression was used to predict sentence-in-noise performance from the other tasks. RESULTS: No collinearity was found among the predictor variables. All three predictors (spectral and temporal resolution plus the figure-ground task) made significant contributions to the multiple linear regression model, indicating that auditory grouping ability in a complex auditory scene explains a further proportion of variance in CI users' speech-in-noise performance that is not explained by spectral and temporal resolution. CONCLUSION: Measures of cross-frequency grouping reflect an auditory cognitive mechanism that determines speech-in-noise understanding independently of cochlear function. Such measures are easily implemented clinically as predictors of CI success and suggest potential strategies for rehabilitation based on training with non-speech stimuli.
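The analysis described in the DESIGN section can be sketched with ordinary least squares on synthetic data; the predictor set, effect sizes, and noise level below are illustrative assumptions, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 47  # cohort size matching the study; the data here are synthetic

# Hypothetical z-scored predictor scores: spectral resolution,
# temporal resolution, and figure-ground (grouping) performance.
X = rng.standard_normal((n, 3))
# Synthetic outcome: each predictor contributes independent variance.
y = 0.5 * X[:, 0] + 0.3 * X[:, 1] + 0.4 * X[:, 2] + 0.5 * rng.standard_normal(n)

# Ordinary least squares with an intercept column.
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Proportion of variance explained (R^2) by the joint model.
resid = y - A @ coef
r2 = 1.0 - resid.var() / y.var()
print(coef[1:], r2)
```

With real data, the unique contribution of the figure-ground predictor beyond the resolution measures would be read off its coefficient and the change in R^2 when it is added.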


Subjects
Cochlear Implantation, Cochlear Implants, Speech Perception, Speech, Noise
2.
PLoS One ; 18(9): e0288432, 2023.
Article in English | MEDLINE | ID: mdl-37768896

ABSTRACT

Image search systems can be endangered by adversarial attacks and data perturbations. An image retrieval system can be compromised either by distorting the query or by hacking the ranking system. However, the existing literature primarily discusses attack methods, whereas research on countermeasures to defend against such adversarial attacks is rare. As a defense mechanism against such intrusions, quality assessment can complement existing image retrieval systems. "GuaRD" is proposed as an end-to-end framework that uses a quality metric as a weighted regularization term. Proper utilization and balance of the two features can lead to reliable and robust ranking: the original image is assigned a higher rank while the distorted image is assigned a relatively lower rank. Meanwhile, the primary goal of an image retrieval system is to prioritize retrieving relevant images, so the use of the leveraged features should not compromise the accuracy of the system. To evaluate the generality of the framework, we conducted three experiments on two image quality assessment (IQA) benchmarks (Waterloo and PieAPP). For the first two tests, GuaRD outperformed the existing model: the mean reciprocal rank (mRR) of the original image predictions increased by 61%, and that of the predictions for the distorted input query decreased by 18%. The third experiment analyzed the mean average precision (mAP) score of the system to verify the accuracy of the retrieval system. The results indicated little deviation in performance between the tested methods, and the score was not affected or decreased only slightly (by 0.9%) after GuaRD was applied. Therefore, GuaRD is a novel and robust framework that can act as a defense mechanism against data distortions.
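The weighted-regularization idea can be illustrated with a minimal re-ranking sketch. The linear combination, the `lam` value, and the candidate scores below are assumptions for illustration; the abstract does not give GuaRD's exact formulation.

```python
import numpy as np

def rerank(relevance, quality, lam=0.3):
    """Re-rank retrieval candidates by relevance regularized with a
    quality term, so a distorted duplicate drops below the pristine one.
    `lam` balances the two features (hypothetical value)."""
    score = (1 - lam) * np.asarray(relevance) + lam * np.asarray(quality)
    return np.argsort(score)[::-1]  # candidate indices, best first

# Three candidates: a pristine match, a distorted copy of the same
# match (equal relevance, lower quality), and a weaker match.
relevance = [0.9, 0.9, 0.4]
quality = [0.95, 0.30, 0.80]  # e.g., from a no-reference IQA metric
order = rerank(relevance, quality)
print(order)
```

By itself the relevance score cannot order the first two candidates; the quality term breaks the tie in favor of the undistorted image while leaving the weaker match last, which mirrors the mRR behavior reported above.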


Subjects
Benchmarking, Motivation, Research Design
3.
PLoS One ; 18(6): e0287584, 2023.
Article in English | MEDLINE | ID: mdl-37352220

ABSTRACT

Auditory evoked potentials (AEPs) have been used to evaluate the degree of hearing and speech cognition. Because an AEP generates a very small voltage relative to ambient noise, repetitive presentation of a stimulus, such as a tone, word, or short sentence, must be employed to generate ensemble averages over trials. However, the stimulation of repetitive short words and sentences may present an unnatural situation to a subject. Phoneme-related potentials (PRPs), which are evoked responses to typical phonemic stimuli, can be extracted from electroencephalography (EEG) data recorded in response to a continuous storybook. In this study, we investigated the effects of spectrally degraded speech stimuli on PRPs. EEG data in response to spectrally degraded and natural storybooks were recorded from normal-hearing listeners, and the PRP components for 10 vowels and 12 consonants were extracted. The PRP responses to a vocoded (spectrally degraded) storybook showed statistically significantly lower peak amplitudes and prolonged latencies compared with those to a natural storybook. These findings suggest that PRPs can be considered a potential tool to evaluate hearing and speech cognition, like other AEPs. Moreover, PRPs can provide details of phonological processing and phonemic awareness useful for understanding poor speech intelligibility. Further investigation with hearing-impaired listeners is required prior to clinical application.
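PRP extraction as described — averaging EEG epochs time-locked to phoneme onsets in a continuous recording — can be sketched as follows. The sampling rate, epoch window, and synthetic data are illustrative assumptions.

```python
import numpy as np

fs = 250  # Hz, assumed EEG sampling rate

def phoneme_related_potential(eeg, onsets_s, pre=0.1, post=0.4):
    """Average EEG epochs time-locked to phoneme onsets (a PRP sketch).
    eeg: 1-D channel signal; onsets_s: phoneme onset times in seconds."""
    pre_n, post_n = int(pre * fs), int(post * fs)
    epochs = []
    for t in onsets_s:
        i = int(round(t * fs))
        if i - pre_n >= 0 and i + post_n <= len(eeg):
            seg = eeg[i - pre_n : i + post_n]
            epochs.append(seg - seg[:pre_n].mean())  # baseline-correct
    return np.mean(epochs, axis=0)

# Synthetic demo: a fixed deflection after each 'phoneme onset',
# buried in ongoing noise; averaging recovers it.
rng = np.random.default_rng(1)
eeg = rng.standard_normal(fs * 60) * 3.0
onsets = np.arange(1.0, 59.0, 0.5)
template = np.hanning(int(0.1 * fs))
for t in onsets:
    i = int(t * fs)
    eeg[i : i + len(template)] += template * 3.0
prp = phoneme_related_potential(eeg, onsets)
print(prp.shape)
```

The same machinery applied separately per vowel and consonant class gives the per-phoneme PRP components; the vocoded-versus-natural comparison then reduces to comparing peak amplitudes and latencies of these averages.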


Subjects
Hearing Loss, Speech Perception, Humans, Hearing/physiology, Noise, Auditory Evoked Potentials, Speech Intelligibility, Speech Perception/physiology, Acoustic Stimulation
4.
Ear Hear ; 44(5): 1107-1120, 2023.
Article in English | MEDLINE | ID: mdl-37144890

ABSTRACT

OBJECTIVES: Understanding speech-in-noise (SiN) is a complex task that recruits multiple cortical subsystems. Individuals vary in their ability to understand SiN. This cannot be explained by simple peripheral hearing profiles, but recent work by our group (Kim et al. 2021, Neuroimage) highlighted central neural factors underlying the variance in SiN ability in normal-hearing (NH) subjects. The present study examined neural predictors of SiN ability in a large cohort of cochlear-implant (CI) users. DESIGN: We recorded electroencephalography in 114 postlingually deafened CI users while they completed the California consonant test, a word-in-noise task. In many subjects, data were also collected on two other commonly used clinical measures of speech perception: a word-in-quiet task (consonant-nucleus-consonant words) and a sentence-in-noise task (AzBio sentences). Neural activity was assessed at a vertex electrode (Cz), which could help maximize eventual generalizability to clinical situations. The N1-P2 complex of event-related potentials (ERPs) at this location was included in multiple linear regression analyses, along with several other demographic and hearing factors, as predictors of SiN performance. RESULTS: In general, there was good agreement among the scores on the three speech perception tasks. ERP amplitudes did not predict AzBio performance, which was instead predicted by duration of device use, low-frequency hearing thresholds, and age. However, ERP amplitudes were strong predictors of performance on both word recognition tasks: the California consonant test (conducted simultaneously with the electroencephalography recording) and the consonant-nucleus-consonant test (conducted offline). These correlations held even after accounting for known predictors of performance, including residual low-frequency hearing thresholds. In CI users, better performance was predicted by an increased cortical response to the target word, in contrast to previous reports in normal-hearing subjects, in whom speech perception ability was accounted for by the ability to suppress noise. CONCLUSIONS: These data indicate a neurophysiological correlate of SiN performance, revealing a richer profile of an individual's hearing performance than psychoacoustic measures alone. These results also highlight important differences between sentence and word recognition measures of performance and suggest that individual differences in these measures may be underwritten by different mechanisms. Finally, the contrast with prior reports of NH listeners on the same task suggests that CI users' performance may be explained by a different weighting of neural processes than in NH listeners.


Subjects
Cochlear Implantation, Cochlear Implants, Speech Perception, Humans, Speech, Individuality, Noise, Speech Perception/physiology
5.
Hear Res ; 427: 108649, 2023 01.
Article in English | MEDLINE | ID: mdl-36462377

ABSTRACT

Cochlear implants (CIs) have evolved to combine residual acoustic hearing with electric hearing. CI users with residual acoustic hearing are expected to experience better speech-in-noise perception than CI-only listeners because preserved acoustic cues aid in unmasking speech from background noise. This study sought the neural substrates of better speech unmasking in CI users with preserved acoustic hearing compared to those with a lower degree of acoustic hearing. Cortical evoked responses to speech in multi-talker babble noise were compared between 29 Hybrid (i.e., electric-acoustic stimulation, or EAS) and 29 electric-only CI users. The amplitude ratio of the evoked responses to speech and to noise, or internal SNR, was significantly larger in the CI users with EAS. This result indicates that CI users with better residual acoustic hearing exhibit enhanced unmasking of speech from background noise.
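The "internal SNR" contrast can be sketched as a simple amplitude ratio in dB. The peak-to-peak amplitude metric and the synthetic responses below are assumptions; the abstract does not specify the exact amplitude measure used.

```python
import numpy as np

def internal_snr_db(resp_speech, resp_noise):
    """Amplitude ratio, in dB, of the cortical evoked response to the
    speech onset vs. the noise onset -- a sketch of the 'internal SNR'
    contrast (peak-to-peak amplitude is an assumed metric)."""
    amp = lambda r: np.ptp(r)  # peak-to-peak
    return 20.0 * np.log10(amp(resp_speech) / amp(resp_noise))

# Synthetic evoked responses: speech evokes twice the deflection of noise.
t = np.linspace(0, 0.5, 251)
wave = np.sin(2 * np.pi * 4 * t) * np.exp(-t / 0.15)
snr = internal_snr_db(2.0 * wave, wave)
print(round(snr, 2))  # 20*log10(2) ≈ 6.02
```

A larger value means the cortex represents the speech onset more strongly relative to the babble onset, which is the group difference reported for the EAS users.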


Subjects
Cochlear Implantation, Cochlear Implants, Speech Perception, Speech, Speech Perception/physiology, Hearing, Acoustic Stimulation, Electric Stimulation
6.
Front Neurosci ; 16: 906616, 2022.
Article in English | MEDLINE | ID: mdl-36061597

ABSTRACT

Auditory prostheses provide an opportunity for rehabilitation of hearing-impaired patients. Speech intelligibility can be used to estimate the extent to which an auditory prosthesis improves the user's speech comprehension. Although behavior-based speech intelligibility is the gold standard, precise evaluation is limited by its subjectiveness. Here, we used a convolutional neural network to predict speech intelligibility from electroencephalography (EEG). Sixty-four-channel EEGs were recorded from 87 adult participants with normal hearing. Sentences spectrally degraded by 2-, 3-, 4-, 5-, and 8-channel vocoders were used to create relatively low speech intelligibility conditions, and a Korean sentence recognition test was used. The speech intelligibility scores were divided into 41 discrete levels ranging from 0 to 100% in steps of 2.5%; three scores (30.0, 37.5, and 40.0%) were not collected. Two speech features, the speech temporal envelope (ENV) and phoneme (PH) onset, were used to extract continuous-speech EEG segments for speech intelligibility prediction. The deep learning model was trained on a dataset of event-related potentials (ERPs), correlation coefficients between the ERPs and ENVs, between the ERPs and PH onsets, or between the ERPs and the product of PH and ENV (PHENV). The speech intelligibility prediction accuracies were 97.33% (ERP), 99.42% (ENV), 99.55% (PH), and 99.91% (PHENV). The models were interpreted using the occlusion sensitivity approach: while the informative electrodes of the ENV model were located in the occipital area, those of the phoneme models (PH and PHENV) were located in language-processing areas. Of the models tested, the PHENV model obtained the best speech intelligibility prediction accuracy. This model may enable clinical prediction of speech intelligibility with a more comfortable speech intelligibility test.
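The correlation-coefficient features described above (response vs. ENV, PH, and PHENV) can be sketched with toy signals; the feature construction below is an illustrative assumption, not the study's pipeline.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1000  # samples of a toy feature time series

# Toy stand-ins for the speech features named in the abstract:
# ENV   -- a slow temporal envelope,
# PH    -- a sparse phoneme-onset impulse train,
# PHENV -- their product (onsets weighted by the envelope value).
env = 1 + np.sin(2 * np.pi * np.arange(n) / 50)
ph = (rng.random(n) < 0.05).astype(float)
phenv = ph * env

# A synthetic envelope-following neural response.
erp = 0.6 * env + rng.standard_normal(n)

def feature_corr(resp, feat):
    """Pearson correlation between the continuous-speech response and a
    speech feature -- the kind of coefficient fed to the classifier."""
    return float(np.corrcoef(resp, feat)[0, 1])

r_env = feature_corr(erp, env)
r_phenv = feature_corr(erp, phenv)
print(r_env, r_phenv)
```

In the study, such per-condition coefficients (rather than these toy values) form the inputs from which the network predicts the intelligibility level.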

7.
J Integr Neurosci ; 21(1): 29, 2022 Jan 28.
Article in English | MEDLINE | ID: mdl-35164465

ABSTRACT

Background: Verbal communication comprises the retrieval of semantic and syntactic information elicited by various kinds of words (i.e., parts of speech) in a sentence. Content words, such as nouns and verbs, convey essential information about the overall meaning (semantics) of a sentence, whereas function words, such as prepositions and pronouns, carry less meaning and support the syntax of the sentence. Methods: This study aimed to identify neural correlates of the differential information retrieval processes for several parts of speech (content vs. function words, nouns vs. verbs, and objects vs. subjects) via electroencephalography performed during English spoken-sentence comprehension in thirteen participants with normal hearing. Recently, phoneme-related information has emerged as a useful acoustic feature for investigating human speech processing; therefore, we examined the importance of various parts of speech in sentence processing using information about the onset times of phonemes. Results: Differences in the strength of cortical responses in language-related brain regions provide neurological evidence that, in spoken sentences, content words are dominant over function words, nouns over verbs, and objects over subjects. Conclusions: The findings of this study may provide insights into the different contributions of certain types of words relative to others in the overall process of sentence understanding.


Subjects
Brain Mapping, Cerebral Cortex/physiology, Comprehension/physiology, Electroencephalography, Psycholinguistics, Speech Perception/physiology, Adult, Female, Humans, Male, Young Adult
8.
PLoS One ; 15(8): e0236784, 2020.
Article in English | MEDLINE | ID: mdl-32745116

ABSTRACT

Spectral ripple discrimination (SRD) has been widely used to evaluate spectral resolution in cochlear implant (CI) recipients based on its strong correlation with speech perception performance. However, despite its usefulness for predicting speech perception outcomes, SRD performance exhibits large across-subject variability even among subjects implanted with the same CIs and sound processors. Potential factors underlying this variability include current spread, nerve survival, and CI mapping. Previous studies have found that spectral resolution decreases with increasing distance of the stimulation electrode from the auditory nerve fibers (ANFs), attributable to increasing current spread. However, it remains unclear whether the spread of excitation is the only cause of this observation, or whether other factors such as temporal interaction also contribute. In this study, we used a computational model to investigate channel interaction upon non-simultaneous stimulation with respect to the electrode-ANF distance, and evaluated SRD performance for five electrode-ANF distances. SRD performance was determined from the similarity between two neurograms in response to standard and inverted stimuli and was used to evaluate the spectral resolution of the computational model. The spread of excitation was observed to increase with increasing electrode-ANF distance, consistent with previous findings. Additionally, preceding pulses delivered from neighboring channels induced a channel interaction that either inhibited or facilitated the neural responses to subsequent pulses, depending on the electrode-ANF distance. SRD performance was also found to decrease with increasing electrode-ANF distance. These findings suggest that variation of the neural responses (inhibition or facilitation) with the electrode-ANF distance in CI users may cause spectral smearing, and hence poor spectral resolution. A computational model such as the one used in this study is a useful tool for understanding the neural factors related to CI outcomes in a way that cannot be accomplished by behavioral studies alone.


Subjects
Acoustic Stimulation/methods, Cochlear Implants, Auditory Threshold/physiology, Cochlear Implantation/methods, Cochlear Nerve/physiology, Computer Simulation, Humans, Speech Perception/physiology
9.
Psychopathology ; 52(4): 265-270, 2019.
Article in English | MEDLINE | ID: mdl-31614360

ABSTRACT

BACKGROUND: Suicide is known to be closely related to depression, which is accompanied by cognitive decline. OBJECTIVE: This study examined whether memory performance and cortical networking differ between high-suicide-risk and control groups depending on task difficulty. METHODS: The participants were 28 high school students comprising 14 suicide-risk and 14 control subjects. Real-time electroencephalography signals were collected during a working memory task, and inter- and intrahemispheric coherences were analyzed. RESULTS: Higher cortical networking during memory encoding was found in suicide-risk adolescents compared to the control group, and an increase in task difficulty heightened interhemispheric coherence. CONCLUSIONS: The higher cortical networking in suicide-risk adolescents seems to reflect activation of compensatory mechanisms in an attempt to minimize behavioral decline.
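Interhemispheric coherence of the kind analyzed here can be computed as Welch-averaged magnitude-squared coherence between two electrode signals. The sampling rate, the shared 10-Hz component, and the toy signals below are assumptions.

```python
import numpy as np
from scipy.signal import coherence

fs = 250  # Hz, assumed EEG sampling rate
rng = np.random.default_rng(4)
t = np.arange(fs * 30) / fs

# Two 'electrode' signals sharing a 10-Hz component plus independent
# noise -- a toy stand-in for left/right homologous recording sites.
shared = np.sin(2 * np.pi * 10 * t)
left = shared + rng.standard_normal(t.size)
right = shared + rng.standard_normal(t.size)

# Magnitude-squared coherence, 1-Hz resolution (nperseg = fs).
f, cxy = coherence(left, right, fs=fs, nperseg=fs)
alpha = cxy[np.argmin(np.abs(f - 10))]  # coherence at the shared 10 Hz
print(alpha)
```

Coherence is near 1 only at frequencies the two sites genuinely share; group or condition contrasts (as in this study) then compare these per-band values between electrode pairs.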


Subjects
Cognition Disorders/psychology, Electroencephalography/methods, Short-Term Memory/physiology, Suicide/psychology, Adolescent, Female, Humans, Male, Risk Factors
10.
Hear Res ; 373: 113-120, 2019 03 01.
Article in English | MEDLINE | ID: mdl-30665078

ABSTRACT

Interest in electrocochleography (ECoG) has recently resurged as a potential tool to assess peripheral auditory function in cochlear implant (CI) users. ECoG recordings can be evoked using acoustic stimulation and recorded from an extra- or intra-cochlear electrode in CI users; the recordings reflect contributions from cochlear hair cells and the auditory nerve. We recently demonstrated the feasibility of using Custom Sound EP (clinically available software) to record ECoG responses in Nucleus Hybrid CI users with preserved acoustic hearing in the implanted ear (Abbas et al., 2017). While successful, the recording procedures were time-intensive, limiting clinical applications. The current report describes how we improved data collection efficiency by writing custom software in the Python programming language. The software interfaced with Nucleus Implant Communicator (NIC) routines to record responses from an intracochlear electrode. ECoG responses were recorded in eight CI users with preserved acoustic hearing using both Custom Sound EP and the Python-based software. Responses were similar across the two recording systems, but recording time decreased significantly with the Python-based software. Seven additional CI users underwent repeated testing using the Python-based software and showed high test-retest reliability. The improved efficiency and high reliability increase the likelihood of translating intracochlear ECoG to clinical practice.


Subjects
Evoked Response Audiometry, Auditory Pathways/physiopathology, Auditory Perception, Cochlea/physiopathology, Cochlear Implantation/instrumentation, Cochlear Implants, Persons With Hearing Impairments/rehabilitation, Computer-Assisted Signal Processing, Acoustic Stimulation, Aged, Aged 80 and over, Hearing, Humans, Middle Aged, Persons With Hearing Impairments/psychology, Predictive Value of Tests, Software
11.
J Neurosci Methods ; 311: 253-258, 2019 01 01.
Article in English | MEDLINE | ID: mdl-30389490

ABSTRACT

Classification of spoken word-evoked potentials is useful for both neuroscientific and clinical applications including brain-computer interfaces (BCIs). By evaluating whether adopting a biology-based structure improves a classifier's accuracy, we can investigate the importance of such structure in human brain circuitry, and advance BCI performance. In this study, we propose a semantic-hierarchical structure for classifying spoken word-evoked cortical responses. The proposed structure decodes the semantic grouping of the words first (e.g., a body part vs. a number) and then decodes which exact word was heard. The proposed classifier structure exhibited a consistent ∼10% improvement of classification accuracy when compared with a non-hierarchical structure. Our result provides a tool for investigating the neural representation of semantic hierarchy and the acoustic properties of spoken words in human brains. Our results suggest an improved algorithm for BCIs operated by decoding heard, and possibly imagined, words.
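The semantic-hierarchical structure — decode the semantic group first, then the word within that group — can be sketched with a two-stage nearest-centroid decoder. The feature space, class layout, and classifier choice below are illustrative assumptions, not the study's actual models.

```python
import numpy as np

class HierarchicalDecoder:
    """Two-stage sketch of a semantic-hierarchical classifier: stage 1
    decodes the semantic group (e.g. body part vs. number), stage 2
    decodes the word within the decoded group. Nearest-centroid stands
    in for the paper's actual classifiers."""
    def fit(self, X, groups, words):
        self.g_cent = {g: X[groups == g].mean(0) for g in np.unique(groups)}
        self.w_cent = {g: {w: X[(groups == g) & (words == w)].mean(0)
                           for w in np.unique(words[groups == g])}
                       for g in np.unique(groups)}
        return self
    def predict(self, x):
        g = min(self.g_cent, key=lambda g: np.linalg.norm(x - self.g_cent[g]))
        w = min(self.w_cent[g], key=lambda w: np.linalg.norm(x - self.w_cent[g][w]))
        return g, w

# Toy 'cortical response' features: two groups, two words each.
rng = np.random.default_rng(5)
means = {(0, 0): [0, 0], (0, 1): [0, 3], (1, 2): [6, 0], (1, 3): [6, 3]}
X, G, W = [], [], []
for (g, w), m in means.items():
    X.append(rng.standard_normal((50, 2)) * 0.5 + m)
    G += [g] * 50
    W += [w] * 50
X, G, W = np.vstack(X), np.array(G), np.array(W)

dec = HierarchicalDecoder().fit(X, G, W)
print(dec.predict(np.array([6.1, 2.9])))
```

The gain reported in the abstract comes from the first stage discarding most confusable word candidates before the finer within-group decision is made.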


Subjects
Brain/physiology, Neurological Models, Automated Pattern Recognition/methods, Semantics, Computer-Assisted Signal Processing, Speech Perception/physiology, Adult, Algorithms, Electrocorticography, Evoked Potentials, Humans, Male, Speech, Young Adult
12.
Sci Rep ; 8(1): 3645, 2018 02 26.
Article in English | MEDLINE | ID: mdl-29483598

ABSTRACT

The goal of this study was to develop an objective, neurophysiologic method of identifying the presence of cochlear dead regions (CDRs) by combining acoustic change complex (ACC) responses with the threshold-equalizing noise (TEN) test. The first experiment sought to confirm whether the ACC could be evoked with TEN stimuli and to optimize the test conditions; the second examined whether the TEN-ACC test is capable of detecting CDR(s). ACC responses were successfully recorded from all study participants. Behaviorally and electrophysiologically obtained masked thresholds (TEN threshold and TEN-ACC threshold) were similar in NH listeners, falling below 10 and 12 dB SNR, respectively. HI listeners were divided into non-CDR and CDR groups based on the behavioral TEN test. For the non-CDR group, TEN-ACC thresholds were below 12 dB SNR, similar to NH listeners. For the CDR group, however, TEN-ACC thresholds were significantly higher (≥12 dB SNR) than those in the NH and non-CDR groups, indicating that CDR(s) can be objectively detected using the ACC. These results demonstrate that it is possible to detect the presence of CDRs using an electrophysiologic method.


Subjects
Cochlea/physiology, Electrophysiology/methods, Acoustic Stimulation, Adult, Auditory Perception/physiology, Auditory Threshold, Cochlear Implantation, Female, Humans, Male, Noise, Young Adult
13.
Percept Mot Skills ; 124(6): 1194-1210, 2017 Dec.
Article in English | MEDLINE | ID: mdl-28942702

ABSTRACT

This study investigated the interactive effects of stress and task difficulty on working memory and cortico-cortical communication during memory encoding. Thirty-eight adolescent participants (mean age 15.7 ± 1.5 years) completed easy and hard working memory tasks under low- and high-stress conditions. We analyzed the accuracy and reaction time (RT) of working memory performance and inter- and intrahemispheric electroencephalogram (EEG) coherences during memory encoding. Working memory accuracy was higher, and RT shorter, in the easy task than in the hard task. RT was shorter under the high-stress (TENS) than the low-stress (no-TENS) condition, while there was no difference in memory accuracy between the two stress conditions. For EEG coherence, we found higher interhemispheric coherence in all bands, but only at frontal electrode sites, in the easy task than in the hard task. Intrahemispheric coherence, on the other hand, was higher in the left hemisphere in the easy task and higher in the right hemisphere (with one exception) in the hard task. Inter- and intrahemispheric coherences were higher in the low-stress than in the high-stress condition. Significant interactions between task difficulty and stress condition were observed in coherences of the beta frequency band: the difference in coherence between the low- and high-stress conditions was greater in the hard task than in the easy task, with lower coherence under the high-stress condition. Stress thus appeared to reduce cortical network communication between memory-relevant cortical areas as task difficulty increased.


Subjects
Cerebral Cortex/physiology, Short-Term Memory/physiology, Nerve Net/physiology, Psychological Stress/psychology, Adolescent, Electroencephalography, Female, Humans, Male, Neuropsychological Tests, Reaction Time/physiology
14.
Sensors (Basel) ; 17(3)2017 Feb 28.
Article in English | MEDLINE | ID: mdl-28264522

ABSTRACT

The frequency of generalized tonic-clonic seizures (GTCSs) can be underestimated, and GTCSs can also increase mortality rates. Monitoring devices that detect GTCS events in daily life are very helpful for early intervention and precise estimation of seizure events. Several studies have introduced methods for GTCS detection using an accelerometer (ACM), electromyography, or electroencephalography, but these methods need improvement with respect to accuracy and user convenience. This study proposes the use of a wrist-worn ACM and spectral analysis of the ACM data to detect GTCSs in daily life. A spectral weight function tuned to GTCSs was used to compute a GTCS-correlated score that can effectively discriminate between GTCSs and normal movement. Compared to a previous temporal method based on the standard deviation, the spectral analysis method yielded better sensitivity and fewer false-positive alerts. The spectral analysis method can thus be implemented in an ACM-based GTCS monitoring device to provide early alerts to caregivers and prevent risks associated with GTCSs.
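The spectrally weighted score can be sketched as a band-weighted magnitude spectrum of the wrist ACM signal. The sampling rate, the 2-6 Hz weight band (roughly the clonic-jerk rate), and the toy signals are assumptions; the abstract does not give the actual weight function.

```python
import numpy as np

fs = 50  # Hz, assumed ACM sampling rate

def spectral_score(acm, weights, fs=fs):
    """GTCS-correlated score: the windowed ACM magnitude spectrum
    weighted by a seizure-tuned spectral weight function."""
    spec = np.abs(np.fft.rfft(acm * np.hanning(len(acm))))
    return float(spec @ weights(np.fft.rfftfreq(len(acm), 1 / fs)))

# Hypothetical weight function: emphasize the 2-6 Hz band only.
w = lambda f: ((f >= 2) & (f <= 6)).astype(float)

# Toy 4-s windows: rhythmic 3-Hz 'jerks' vs. slow 1-Hz 'walking'.
t = np.arange(fs * 4) / fs
rng = np.random.default_rng(6)
jerks = np.sin(2 * np.pi * 3 * t) * 2 + rng.standard_normal(t.size) * 0.2
walking = np.sin(2 * np.pi * 1 * t) * 2 + rng.standard_normal(t.size) * 0.2

s_seizure, s_walk = spectral_score(jerks, w), spectral_score(walking, w)
print(s_seizure > s_walk)
```

Note that the two toy windows have nearly identical overall amplitude, so a purely temporal standard-deviation detector would score them similarly; the band-weighted spectrum separates them, which is the advantage the abstract reports.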


Subjects
Tonic-Clonic Epilepsy, Acceleration, Electroencephalography, Electromyography, Humans
15.
PLoS One ; 11(2): e0148466, 2016.
Article in English | MEDLINE | ID: mdl-26848755

ABSTRACT

Evidence of visual-auditory cross-modal plasticity in deaf individuals has been widely reported. Superior visual abilities of deaf individuals have been shown to result in enhanced reactivity to visual events and/or enhanced peripheral spatial attention. The goal of this study was to investigate the association between visual-auditory cross-modal plasticity and speech perception in post-lingually deafened, adult cochlear implant (CI) users. Post-lingually deafened adults with CIs (N = 14) and a group of normal hearing, adult controls (N = 12) participated in this study. The CI participants were divided into a good performer group (good CI, N = 7) and a poor performer group (poor CI, N = 7) based on word recognition scores. Visual evoked potentials (VEP) were recorded from the temporal and occipital cortex to assess reactivity. Visual field (VF) testing was used to assess spatial attention and Goldmann perimetry measures were analyzed to identify differences across groups in the VF. The association of the amplitude of the P1 VEP response over the right temporal or occipital cortex among three groups (control, good CI, poor CI) was analyzed. In addition, the association between VF by different stimuli and word perception score was evaluated. The P1 VEP amplitude recorded from the right temporal cortex was larger in the group of poorly performing CI users than the group of good performers. The P1 amplitude recorded from electrodes near the occipital cortex was smaller for the poor performing group. P1 VEP amplitude in right temporal lobe was negatively correlated with speech perception outcomes for the CI participants (r = -0.736, P = 0.003). However, P1 VEP amplitude measures recorded from near the occipital cortex had a positive correlation with speech perception outcome in the CI participants (r = 0.775, P = 0.001). In VF analysis, CI users showed narrowed central VF (VF to low intensity stimuli). 
However, their far peripheral VF (VF to high-intensity stimuli) did not differ from that of the controls. In addition, the extent of the central VF was positively correlated with speech perception outcome (r = 0.669, P = 0.009). Persistent visual activation in the right temporal cortex, even after cochlear implantation, has a negative effect on outcomes in post-lingually deaf adults. We interpret these results to suggest that insufficient intra-modal (visual) compensation by the occipital cortex may negatively affect outcomes. Based on our results, it appears that a narrowed central VF could help identify CI users with poor outcomes with their devices.


Subjects
Cochlear Implants, Deafness/physiopathology, Neurological Models, Speech Perception/physiology, Visual Perception/physiology, Adult, Attention/physiology, Case-Control Studies, Visual Evoked Potentials/physiology, Female, Humans, Male, Middle Aged, Neuronal Plasticity, Spatial Behavior/physiology, Visual Field Tests, Young Adult
16.
PLoS One ; 11(2): e0149128, 2016.
Article in English | MEDLINE | ID: mdl-26866811

ABSTRACT

Previous studies have shown that concurrent vowel identification improves with increasing temporal onset asynchrony of the vowels, even if the vowels have the same fundamental frequency. The current study investigated the possible underlying neural processing involved in concurrent vowel perception. The individual vowel stimuli from a previously published study were used as inputs for a phenomenological auditory-nerve (AN) model. Spectrotemporal representations of simulated neural excitation patterns were constructed (i.e., neurograms) and then matched quantitatively with the neurograms of the single vowels using the Neurogram Similarity Index Measure (NSIM). A novel computational decision model was used to predict concurrent vowel identification. To facilitate optimum matches between the model predictions and the behavioral human data, internal noise was added at either neurogram generation or neurogram matching using the NSIM procedure. The best fit to the behavioral data was achieved with a signal-to-noise ratio (SNR) of 8 dB for internal noise added at the neurogram but with a much smaller amount of internal noise (SNR of 60 dB) for internal noise added at the level of the NSIM computations. The results suggest that accurate modeling of concurrent vowel data from listeners with normal hearing may partly depend on internal noise and where internal noise is hypothesized to occur during the concurrent vowel identification process.
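The neurogram-matching step can be illustrated with a simplified, global NSIM-style index; the published NSIM operates over local windows with SSIM-like weighting, so the version below is a reduced sketch, not the measure used in the study.

```python
import numpy as np

def nsim(a, b, eps=1e-9):
    """Simplified Neurogram Similarity Index sketch: a luminance x
    structure comparison of two neurograms (time-frequency firing-rate
    maps), computed globally rather than over local windows."""
    mu_a, mu_b = a.mean(), b.mean()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    luminance = (2 * mu_a * mu_b + eps) / (mu_a**2 + mu_b**2 + eps)
    structure = (cov + eps) / (a.std() * b.std() + eps)
    return luminance * structure

# Toy firing-rate neurogram; 'internal noise' degrades the match,
# mirroring the abstract's noise-injection manipulation.
rng = np.random.default_rng(7)
neurogram = np.abs(rng.standard_normal((32, 100)))
identical = nsim(neurogram, neurogram)
degraded = nsim(neurogram, neurogram + np.abs(rng.standard_normal((32, 100))))
print(identical, degraded)
```

A perfect match scores 1; injected internal noise lowers the score, which is exactly the degree of freedom the decision model tunes (noise at neurogram generation vs. at the NSIM comparison) to fit the behavioral data.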


Subjects
Auditory Perception, Auditory Threshold/physiology, Noise, Speech Discrimination Tests, Speech Perception, Adult, Algorithms, Cochlear Nerve/physiopathology, Computer Simulation, Female, Hearing, Humans, Male, Neurons/physiology, Phonetics, Signal-to-Noise Ratio, Young Adult
17.
PLoS One ; 10(10): e0140920, 2015.
Article in English | MEDLINE | ID: mdl-26485715

ABSTRACT

Spectrotemporal modulation (STM) detection performance was examined in cochlear implant (CI) users. The test involved discriminating between an unmodulated steady noise and a modulated stimulus, where the modulated stimulus presents frequency modulation patterns that change in frequency over time. To examine STM detection performance under different modulation conditions, two temporal modulation rates (5 and 10 Hz) and three spectral modulation densities (0.5, 1.0, and 2.0 cycles/octave) were employed, producing a total of six STM stimulus conditions. To explore how electric hearing constrains STM sensitivity in CI users differently from acoustic hearing, normal-hearing (NH) and hearing-impaired (HI) listeners were also tested on the same tasks. STM detection performance was best in NH subjects, followed by HI subjects. On average, CI subjects showed the poorest performance, but some CI subjects reached levels of STM detection performance comparable to acoustic hearing. Significant correlations were found between STM detection performance and speech identification performance in quiet and in noise. To understand the relative contributions of spectral and temporal modulation cues to speech perception abilities in CI users, spectral and temporal modulation detection were measured separately and related to STM detection and speech perception performance. The results suggest that slow spectral modulation, rather than slow temporal modulation, may be important for determining the speech perception capabilities of CI users. Lastly, test-retest reliability for STM detection was good, with no learning effect. The present study demonstrates that STM detection may be a useful tool to evaluate the ability of CI sound processing strategies to deliver clinically pertinent acoustic modulation information.
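An STM stimulus of the kind described — a ripple drifting across log-frequency at a given rate (Hz) and density (cycles/octave) — can be generated from log-spaced tone carriers. The carrier count, frequency range, duration, and modulation depth below are assumed values, not the study's parameters.

```python
import numpy as np

fs = 16000          # audio sampling rate, Hz
dur = 0.5           # seconds
f0, f1 = 350.0, 5600.0  # 4 octaves of carriers (assumed range)
n_carriers = 128

def stm_stimulus(rate_hz, density_cyc_per_oct, depth=0.9):
    """Spectrotemporal ripple: log-spaced tone carriers whose amplitudes
    follow a sinusoid drifting in (time, log-frequency) -- a standard
    construction for STM stimuli."""
    t = np.arange(int(fs * dur)) / fs
    x = np.linspace(0, np.log2(f1 / f0), n_carriers)  # octaves above f0
    freqs = f0 * 2 ** x
    rng = np.random.default_rng(8)
    sig = np.zeros_like(t)
    for f, xo in zip(freqs, x):
        env = 1 + depth * np.sin(2 * np.pi * (rate_hz * t + density_cyc_per_oct * xo))
        sig += env * np.sin(2 * np.pi * f * t + rng.uniform(0, 2 * np.pi))
    return sig / np.max(np.abs(sig))  # normalize to unit peak

# One of the six conditions above: 5 Hz rate, 1.0 cycle/octave density.
ripple = stm_stimulus(rate_hz=5, density_cyc_per_oct=1.0)
print(ripple.shape)
```

Setting `depth=0` yields the unmodulated reference noise, so the detection task reduces to discriminating `depth > 0` from `depth = 0` at each rate/density pair.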


Subjects
Cochlear Implants, Deafness/physiopathology, Persons With Hearing Impairments, Speech Perception/physiology, Acoustic Stimulation, Adult, Aged, Deafness/rehabilitation, Female, Humans, Male, Middle Aged, Young Adult
18.
Comput Math Methods Med ; 2015: 934382, 2015.
Article in English | MEDLINE | ID: mdl-25755675

ABSTRACT

A cochlear implant (CI) is an auditory prosthesis that enables hearing by delivering electrical stimuli through an electrode array. It has previously been established that electrode position can influence CI performance and should therefore be considered in order to achieve better CI results. This paper describes how electrode position influences the auditory nerve fiber (ANF) response to a single pulse and to low-rate (250 pulses/s) and high-rate (5,000 pulses/s) pulse trains, using a computational model. The field potential in the cochlea was calculated with a three-dimensional finite-element model, and the ANF response was simulated with a biophysical ANF model. The effects were evaluated in terms of dynamic range, stochasticity, and spike excitation pattern. The relative spread, threshold, jitter, and initiated node were analyzed for the single-pulse response; the dynamic range, threshold, initiated node, and interspike interval were analyzed for the pulse-train responses. Electrode position was found to significantly affect the spatiotemporal pattern of the ANF response, and this effect depended strongly on the stimulus rate. We believe these modeling results can provide guidance regarding perimodiolar versus lateral insertion of CIs in clinical settings and help in understanding CI performance.
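The single-pulse statistics mentioned above (threshold, relative spread, dynamic range) are commonly summarized with a Gaussian firing-probability function; the sketch below shows that generic relationship, not the paper's specific biophysical model, and the numeric values used are illustrative assumptions.

```python
import math
from statistics import NormalDist

def firing_probability(current, threshold, relative_spread):
    """Probability that a stochastic ANF fires to a single pulse,
    modeled as a Gaussian CDF centered on `threshold` with standard
    deviation relative_spread * threshold (the usual definition of
    relative spread)."""
    sd = relative_spread * threshold
    z = (current - threshold) / (sd * math.sqrt(2.0))
    return 0.5 * (1.0 + math.erf(z))

def dynamic_range_db(threshold, relative_spread, lo=0.1, hi=0.9):
    """Dynamic range (in dB) between the currents giving `lo` and `hi`
    firing probability; a larger relative spread widens this range."""
    nd = NormalDist(mu=threshold, sigma=relative_spread * threshold)
    i_lo, i_hi = nd.inv_cdf(lo), nd.inv_cdf(hi)
    return 20.0 * math.log10(i_hi / i_lo)
```

This makes the link between the measured quantities explicit: relative spread sets the slope of the input-output function, and the 10-90% dynamic range follows directly from it.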


Subjects
Cochlear Nerve/pathology, Electrodes, Imaging, Three-Dimensional/methods, Action Potentials/physiology, Algorithms, Cochlea/physiology, Cochlear Implants, Computer Simulation, Electric Stimulation, Finite Element Analysis, Humans, Membrane Potentials, Models, Statistical, Nerve Fibers, Stochastic Processes
19.
J Acoust Soc Am ; 136(5): 2714-25, 2014 Nov.
Article in English | MEDLINE | ID: mdl-25373971

ABSTRACT

The hypothesis of this study was that broader patterns of physiological channel interaction in a local region of the cochlea are associated with poorer spectral resolution in that same region. Electrically evoked compound action potentials (ECAPs) were measured for three to six probe electrodes per subject to examine channel interactions in different regions across the electrode array. To evaluate spectral resolution at a confined location within the cochlea, spectral-ripple discrimination (SRD) was measured using narrowband ripple stimuli with a bandwidth spanning five electrodes: two electrodes apical and two basal to the ECAP probe electrode. The relationships among physiological channel interactions, spectral resolution in the local cochlear region, and vowel identification were evaluated. Results showed that (1) there was within- and across-subject variability in the widths of the ECAP channel interaction functions and in narrowband SRD performance; (2) significant correlations were found between the widths of the ECAP functions and narrowband SRD thresholds, and between the mean bandwidths of ECAP functions averaged across multiple probe electrodes and broadband SRD performance across subjects; and (3) global spectral resolution, reflecting the entire electrode array rather than a local region, predicted vowel identification.


Subjects
Cochlea/physiopathology, Cochlear Implants, Evoked Potentials, Auditory/physiology, Phonetics, Speech Perception/physiology, Action Potentials, Aged, Aged, 80 and over, Discrimination, Psychological, Electrodes, Implanted, Equipment Design, Hearing Loss/physiopathology, Hearing Loss/therapy, Humans, Middle Aged, Pattern Recognition, Physiological, Psychoacoustics, Sound
20.
Sensors (Basel) ; 14(6): 10346-60, 2014 Jun 12.
Article in English | MEDLINE | ID: mdl-24926692

ABSTRACT

Detecting hearing loss as early as possible is important and widely recommended: if hearing loss is found early, proper treatment can help improve hearing and reduce its negative consequences. In this study, we developed smartphone-based hearing screening methods that can test hearing ubiquitously. Environmental noise, however, generally reduces ear sensitivity, causing a hearing threshold shift (HTS). To relax this constraint on where hearing screening can take place, we developed a correction algorithm that reduces the HTS effect. A built-in microphone and headphone were calibrated to provide standard units of measure. The HTSs in the presence of either white or babble noise were systematically measured to determine the mean HTS as a function of noise level. When the hearing screening application runs, the smartphone automatically measures the environmental noise and applies the corresponding HTS value to correct the hearing threshold. A comparison with pure-tone audiometry shows that this screening method can closely estimate the hearing threshold even in the presence of noise. We expect that the proposed ubiquitous hearing test could serve as a simple hearing screening tool and alert users who suffer from hearing loss.
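The correction step described above amounts to a lookup: map the measured ambient-noise level to the empirically determined mean HTS and subtract it from the threshold measured in noise. The sketch below shows that logic only; the table values are illustrative placeholders, not the paper's measured data.

```python
import bisect

# Hypothetical mean hearing-threshold shift (dB) as a function of
# measured ambient-noise level (dB SPL). The paper derives such a
# mapping empirically for white and babble noise; these numbers are
# made up purely to illustrate the correction step.
NOISE_LEVELS_DB = [30.0, 40.0, 50.0, 60.0, 70.0]
MEAN_HTS_DB = [0.0, 2.0, 6.0, 12.0, 20.0]

def mean_hts(noise_db):
    """Linearly interpolate the mean HTS for a measured noise level,
    clamping at the ends of the table."""
    if noise_db <= NOISE_LEVELS_DB[0]:
        return MEAN_HTS_DB[0]
    if noise_db >= NOISE_LEVELS_DB[-1]:
        return MEAN_HTS_DB[-1]
    i = bisect.bisect_right(NOISE_LEVELS_DB, noise_db)
    x0, x1 = NOISE_LEVELS_DB[i - 1], NOISE_LEVELS_DB[i]
    y0, y1 = MEAN_HTS_DB[i - 1], MEAN_HTS_DB[i]
    return y0 + (y1 - y0) * (noise_db - x0) / (x1 - x0)

def corrected_threshold(measured_threshold_db, noise_db):
    """Remove the expected noise-induced shift from a threshold that
    was measured in a noisy environment."""
    return measured_threshold_db - mean_hts(noise_db)
```

In the app's flow, the phone would first measure `noise_db` with its calibrated microphone and then apply `corrected_threshold` to each pure-tone threshold it obtains.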


Subjects
Cell Phone, Hearing Tests/methods, Medical Informatics Applications, Adult, Aged, Calibration, Female, Humans, Male, Middle Aged, Noise, Young Adult