Results 1 - 20 of 57
1.
Ear Hear; 45(2): 411-424, 2024.
Article in English | MEDLINE | ID: mdl-37811966

ABSTRACT

OBJECTIVES: Children with cochlear implants (CIs) vary widely in their ability to identify emotions in speech. The causes of this variability are unknown, but this knowledge will be crucial if we are to design improvements in technological or rehabilitative interventions that are effective for individual patients. The objective of this study was to investigate how well factors such as age at implantation, duration of device experience (hearing age), nonverbal cognition, vocabulary, and socioeconomic status predict prosody-based emotion identification in children with CIs, and how the key predictors in this population compare to children with normal hearing who are listening to either normal emotional speech or to degraded speech. DESIGN: We measured vocal emotion identification in 47 school-age CI recipients aged 7 to 19 years in a single-interval, 5-alternative forced-choice task. None of the participants had usable residual hearing based on parent/caregiver report. Stimuli consisted of a set of semantically emotion-neutral sentences that were recorded by 4 talkers in child-directed and adult-directed prosody corresponding to five emotions: neutral, angry, happy, sad, and scared. Twenty-one children with normal hearing were also tested in the same tasks; they listened to both original speech and to versions that had been noise-vocoded to simulate CI information processing. RESULTS: Group comparison confirmed the expected deficit in CI participants' emotion identification relative to participants with normal hearing. Within the CI group, increasing hearing age (correlated with developmental age) and nonverbal cognition outcomes predicted emotion recognition scores. Stimulus-related factors such as talker and emotional category also influenced performance and were involved in interactions with hearing age and cognition. Age at implantation was not predictive of emotion identification. Unlike the CI participants, neither cognitive status nor vocabulary predicted outcomes in participants with normal hearing, whether listening to original speech or CI-simulated speech. Age-related improvements in outcomes were similar in the two groups. Participants with normal hearing listening to original speech showed the greatest differences in their scores for different talkers and emotions. Participants with normal hearing listening to CI-simulated speech showed significant deficits compared with their performance with original speech materials, and their scores also showed the least effect of talker- and emotion-based variability. CI participants showed more variation in their scores with different talkers and emotions than participants with normal hearing listening to CI-simulated speech, but less so than participants with normal hearing listening to original speech. CONCLUSIONS: Taken together, these results confirm previous findings that pediatric CI recipients have deficits in emotion identification based on prosodic cues, but they improve with age and experience at a rate that is similar to peers with normal hearing. Unlike participants with normal hearing, nonverbal cognition played a significant role in CI listeners' emotion identification. Specifically, nonverbal cognition predicted the extent to which individual CI users could benefit from some talkers being more expressive of emotions than others, and this effect was greater in CI users who had less experience with their device (or were younger) than CI users who had more experience with their device (or were older). 
Thus, in young prelingually deaf children with CIs performing an emotional prosody identification task, cognitive resources may be harnessed to a greater degree than in older prelingually deaf children with CIs or in children with normal hearing.
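The CI simulation referenced here, noise-band vocoding, replaces the fine structure in each analysis band with noise while keeping the band's temporal envelope. A minimal sketch follows; the channel count, band edges, filter orders, and envelope cutoff are illustrative assumptions, not this study's actual parameters.

```python
# Minimal noise-band vocoder sketch (assumes: float mono input,
# fs > 14 kHz, 8 log-spaced channels over 200-7000 Hz, 4th-order
# Butterworth analysis filters, 160-Hz envelope smoothing).
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(x, fs, n_channels=8, f_lo=200.0, f_hi=7000.0):
    """Replace each band's fine structure with band-limited noise,
    preserving only the temporal envelope."""
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)  # log-spaced band edges
    rng = np.random.default_rng(0)
    carrier = rng.standard_normal(len(x))             # broadband noise carrier
    out = np.zeros_like(x)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, x)
        env = np.abs(hilbert(band))                   # Hilbert envelope
        sos_env = butter(2, 160.0, btype="lowpass", fs=fs, output="sos")
        env = sosfiltfilt(sos_env, env)               # smooth the envelope
        out += env * sosfiltfilt(sos, carrier)        # same-band noise carrier
    return out / (np.max(np.abs(out)) + 1e-12)        # normalize
```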


Subjects
Cochlear Implantation, Cochlear Implants, Speech Perception, Adult, Humans, Child, Aged, Hearing, Emotions
2.
Ear Hear; 44(2): 371-384, 2023.
Article in English | MEDLINE | ID: mdl-36342278

ABSTRACT

OBJECTIVE: This study assessed the relationships between the salience of amplitude modulation (AM) cues encoded at the auditory nerve (AN), perceptual sensitivity to changes in AM rate (i.e., AM rate discrimination threshold, AMRDT), and speech perception scores in postlingually deafened adult cochlear implant (CI) users. DESIGN: Study participants were 18 postlingually deafened adults with Cochlear Nucleus devices, including five bilaterally implanted patients. For each of 23 implanted ears, neural encoding of AM cues at 20 Hz at the AN was evaluated at seven electrode locations across the electrode array using electrophysiological measures of the electrically evoked compound action potential (eCAP). The salience of AM neural encoding was quantified by the Modulated Response Amplitude Ratio (MRAR). Psychophysical measures of AMRDT for 20 Hz modulation were evaluated in 16 ears using a three-alternative, forced-choice procedure, targeting 79.4% correct on the psychometric function. AMRDT was measured at up to five electrode locations for each test ear, including the electrode pair that showed the largest difference in the MRAR. Consonant-Nucleus-Consonant (CNC) word scores presented in quiet and in speech-shaped noise at a signal to noise ratio (SNR) of +10 dB were measured in all 23 implanted ears. Simulation tests were used to assess the variations in correlation results when using the MRAR and AMRDT measured at only one electrode location in each participant to correlate with CNC word scores. Linear Mixed Models (LMMs) were used to evaluate the relationship between MRARs/AMRDTs measured at individual electrode locations and CNC word scores. Spearman Rank correlation tests were used to evaluate the strength of association between CNC word scores measured in quiet and in noise with (1) the variances in MRARs and AMRDTs, and (2) the averaged MRAR or AMRDT across multiple electrodes tested for each participant. RESULTS: There was no association between the MRAR and AMRDT. Using the MRAR and AMRDT measured at only one, randomly selected electrode location to assess their associations with CNC word scores could lead to opposite conclusions. Both the results of LMMs and Spearman Rank correlation tests showed that CNC word scores measured in quiet or at 10 dB SNR were not significantly correlated with the MRAR or AMRDT. In addition, the results of Spearman Rank correlation tests showed that the variances in MRARs and AMRDTs were not significantly correlated with CNC word scores measured in quiet or in noise. CONCLUSIONS: The difference in AN sensitivity to AM cues is not the primary factor accounting for the variation in AMRDTs measured at different stimulation sites within individual CI users. The AN sensitivity to AM per se may not be a crucial factor for CNC word perception in quiet or at 10 dB SNR in postlingually deafened adult CI users. Using electrophysiological or psychophysical results measured at only one electrode location to correlate with speech perception scores in CI users can lead to inaccurate, if not wrong, conclusions.
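A 79.4% correct target on the psychometric function is the convergence point of a 3-down/1-up adaptive rule (Levitt, 1971), so a staircase of that form is sketched below as an assumption; the step size, starting level, and stopping rule are also illustrative. In the study, the tracked variable would be the difference between the reference and target AM rates.

```python
# Hedged sketch of a 3-down/1-up adaptive staircase converging on 79.4%.
def run_staircase(respond, start_level=10.0, step=2.0, n_reversals=12):
    """`respond(level)` returns True for a correct trial.
    Threshold = mean of the last 8 reversal levels (assumed rule)."""
    level, correct_run, direction = start_level, 0, 0
    reversals = []
    while len(reversals) < n_reversals:
        if respond(level):
            correct_run += 1
            if correct_run == 3:            # 3 correct in a row -> harder
                correct_run = 0
                if direction == +1:         # direction flip = reversal
                    reversals.append(level)
                direction = -1
                level -= step
        else:                               # any error -> easier
            correct_run = 0
            if direction == -1:
                reversals.append(level)
            direction = +1
            level += step
    return sum(reversals[-8:]) / 8.0
```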


Subjects
Cochlear Implantation, Cochlear Implants, Speech Perception, Adult, Humans, Speech Perception/physiology, Noise, Cochlear Nerve
3.
Ear Hear; 43(2): 323-334, 2022.
Article in English | MEDLINE | ID: mdl-34406157

ABSTRACT

OBJECTIVES: Identification of emotional prosody in speech declines with age in normally hearing (NH) adults. Cochlear implant (CI) users have deficits in the perception of prosody, but the effects of age on vocal emotion recognition by adult postlingually deaf CI users are not known. The objective of the present study was to examine age-related changes in CI users' and NH listeners' emotion recognition. DESIGN: Participants included 18 CI users (29.6 to 74.5 years) and 43 NH adults (25.8 to 74.8 years). Participants listened to emotion-neutral sentences spoken by a male and a female talker in five emotions (happy, sad, scared, angry, neutral). NH adults heard them in four conditions: unprocessed (full spectrum) speech and 16-channel, 8-channel, and 4-channel noise-band vocoded speech. The adult CI users listened only to unprocessed (full spectrum) speech. Sensitivity (d') to emotions and Reaction Times were obtained using a single-interval, five-alternative, forced-choice paradigm. RESULTS: For NH participants, results indicated age-related declines in Accuracy and d', and age-related increases in Reaction Time in all conditions. Results indicated an overall deficit, as well as age-related declines in overall d', for CI users, but their Reaction Times were elevated compared with NH listeners and did not show age-related changes. Analysis of Accuracy scores (hit rates) was generally consistent with the d' data. CONCLUSIONS: Both CI users and NH listeners showed age-related deficits in emotion identification. The CI users' overall deficit in emotion perception, and their slower response times, suggest impaired social communication, which may in turn impact overall well-being, particularly for older CI users, as lower vocal emotion recognition scores have been associated with poorer subjective quality of life in CI patients.
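In a single-interval identification paradigm like this one, per-emotion sensitivity is commonly computed as d' = z(hit rate) - z(false-alarm rate), where false alarms are responses of that emotion to trials of the other emotions. A sketch, with an assumed 1/(2N) correction for perfect rates (the paper's exact correction is not given here):

```python
# d' for one emotion category in a single-interval identification task.
from scipy.stats import norm

def d_prime(hits, n_signal, false_alarms, n_noise):
    def clamp(p, n):                      # avoid z(0) / z(1) = +/- infinity
        return min(max(p, 1.0 / (2 * n)), 1.0 - 1.0 / (2 * n))
    h = clamp(hits / n_signal, n_signal)
    fa = clamp(false_alarms / n_noise, n_noise)
    return norm.ppf(h) - norm.ppf(fa)

# e.g. "happy": 18 hits in 20 happy trials; 6 "happy" responses
# across 80 trials of the other four emotions (made-up counts)
print(d_prime(18, 20, 6, 80))             # about 2.7
```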


Subjects
Cochlear Implantation, Cochlear Implants, Speech Perception, Adult, Cochlear Implants/psychology, Female, Humans, Male, Quality of Life, Voice Recognition
4.
Ear Hear; 42(6): 1727-1740, 2021.
Article in English | MEDLINE | ID: mdl-34294630

ABSTRACT

OBJECTIVES: Normally-hearing (NH) listeners rely more on prosodic cues than on lexical-semantic cues for emotion perception in speech. In everyday spoken communication, the ability to decipher conflicting information between prosodic and lexical-semantic cues to emotion can be important: for example, in identifying sarcasm or irony. Speech degradation in cochlear implants (CIs) can be sufficiently overcome to identify lexical-semantic cues, but the distortion of voice pitch cues makes it particularly challenging to hear prosody with CIs. The purpose of this study was to examine changes in relative reliance on prosodic and lexical-semantic cues in NH adults listening to spectrally degraded speech and in adult CI users. We hypothesized that, compared with NH counterparts, CI users would show increased reliance on lexical-semantic cues and reduced reliance on prosodic cues for emotion perception. We predicted that NH listeners would show a similar pattern when listening to CI-simulated versions of emotional speech. DESIGN: Sixteen NH adults and 8 postlingually deafened adult CI users participated in the study. Sentences were created to convey five lexical-semantic emotions (angry, happy, neutral, sad, and scared), with five sentences expressing each category of emotion. Each of these 25 sentences was then recorded with the 5 (angry, happy, neutral, sad, and scared) prosodic emotions by 2 adult female talkers. The resulting stimulus set included 125 recordings (25 Sentences × 5 Prosodic Emotions) per talker, of which 25 were congruent (consistent lexical-semantic and prosodic cues to emotion) and the remaining 100 were incongruent (conflicting lexical-semantic and prosodic cues to emotion). The recordings were processed to create three levels of spectral degradation: full-spectrum speech and CI-simulated (noise-vocoded) speech with 8 and 16 channels of spectral information. Twenty-five recordings (one sentence per lexical-semantic emotion recorded in all five prosodies) were used for a practice run in the full-spectrum condition. The remaining 100 recordings were used as test stimuli. For each talker and condition of spectral degradation, listeners indicated the emotion associated with each recording in a single-interval, five-alternative forced-choice task. The responses were scored as proportion correct, where "correct" responses corresponded to the lexical-semantic emotion. CI users heard only the full-spectrum condition. RESULTS: The results showed a significant interaction between hearing status (NH, CI) and congruency in identifying the lexical-semantic emotion associated with the stimuli. This interaction was as predicted: CI users showed increased reliance on lexical-semantic cues in the incongruent conditions, while NH listeners showed increased reliance on the prosodic cues in the incongruent conditions. As predicted, NH listeners showed increased reliance on lexical-semantic cues to emotion when the stimuli were spectrally degraded. CONCLUSIONS: The present study confirmed previous findings of prosodic dominance for emotion perception by NH listeners in the full-spectrum condition. Further, novel findings with CI patients and with NH listeners in the CI-simulated conditions showed reduced reliance on prosodic cues and increased reliance on lexical-semantic cues to emotion. These results have implications for CI listeners' ability to perceive conflicts between prosodic and lexical-semantic cues, with repercussions for their identification of sarcasm and humor.
The ability to understand sarcasm or humor affects a person's capacity to develop relationships, follow conversations, grasp a speaker's vocal emotion and intended message, appreciate jokes, and communicate effectively in everyday life.
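The incongruent trials expose cue reliance directly: each response can be classified as matching the prosodic emotion, the lexical-semantic emotion, or neither. The sketch below is illustrative; the field names and data layout are assumptions, not the study's scoring code.

```python
# Hedged sketch: proportion of responses following each cue on
# incongruent trials (assumed trial dicts with 'prosodic',
# 'lexical', and 'response' emotion labels).
def cue_reliance(trials):
    prosodic = lexical = other = 0
    for t in trials:
        if t["prosodic"] == t["lexical"]:
            continue                      # congruent trials are uninformative here
        if t["response"] == t["prosodic"]:
            prosodic += 1
        elif t["response"] == t["lexical"]:
            lexical += 1
        else:
            other += 1
    n = prosodic + lexical + other
    if n == 0:
        raise ValueError("no incongruent trials")
    return {"prosodic": prosodic / n, "lexical": lexical / n, "other": other / n}
```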


Subjects
Cochlear Implantation, Cochlear Implants, Speech Perception, Adult, Cues, Emotions, Female, Humans, Semantics, Speech
5.
Ear Hear; 41(5): 1372-1382, 2020.
Article in English | MEDLINE | ID: mdl-32149924

ABSTRACT

OBJECTIVES: Cochlear implants (CIs) are remarkable in allowing individuals with severe to profound hearing loss to perceive speech. Despite these gains in speech understanding, however, CI users often struggle to perceive elements such as vocal emotion and prosody, as CIs are unable to transmit the spectro-temporal detail needed to decode affective cues. This issue becomes particularly important for children with CIs, but little is known about their emotional development. In a previous study, pediatric CI users showed deficits in voice emotion recognition with child-directed stimuli featuring exaggerated prosody. However, the large intersubject variability and differential developmental trajectory known in this population led us to question the extent to which exaggerated prosody would facilitate performance in this task. Thus, the authors revisited the question with both adult-directed and child-directed stimuli. DESIGN: Vocal emotion recognition was measured using both child-directed (CDS) and adult-directed (ADS) speech conditions. Pediatric CI users aged 7 to 19 years, with no cognitive or visual impairments, who communicated orally with English as their primary language, participated in the experiment (n = 27). Stimuli comprised 12 sentences selected from the HINT database. The sentences were spoken by male and female talkers in a CDS or ADS manner, in each of the five target emotions (happy, sad, neutral, scared, and angry). The chosen sentences were semantically emotion-neutral. Percent correct emotion recognition scores were analyzed for each participant in each condition (CDS vs. ADS). Children also completed cognitive tests of nonverbal IQ and receptive vocabulary, while parents completed questionnaires on CI and hearing history. It was predicted that the reduced prosodic variation in the ADS condition would result in lower vocal emotion recognition scores compared with the CDS condition. Moreover, it was hypothesized that cognitive factors, perceptual sensitivity to complex pitch changes, and elements of each child's hearing history might serve as predictors of performance on vocal emotion recognition. RESULTS: Consistent with our hypothesis, pediatric CI users scored higher on CDS than on ADS speech stimuli, suggesting that speaking with exaggerated prosody, akin to "motherese," may be a viable way to convey emotional content. Significant talker effects were also observed: higher scores were found for the female talker in both conditions. Multiple regression analysis showed that nonverbal IQ was a significant predictor of CDS emotion recognition scores, while years of CI use was a significant predictor of ADS scores. Confusion matrix analyses revealed a dependence of results on specific emotions; for the CDS condition's female talker, participants had high sensitivity (d' scores) to the happy sentences and low sensitivity to the neutral sentences, while for the ADS condition, low sensitivity was found for the scared sentences. CONCLUSIONS: In general, participants showed higher vocal emotion recognition in the CDS condition, which had more variability in pitch and intensity, and thus more exaggerated prosody, than the ADS condition. The results suggest that pediatric CI users struggle with vocal emotion perception in general, particularly with adult-directed speech.
The authors believe these results have broad implications for understanding how CI users perceive emotions both from an auditory communication standpoint and a socio-developmental perspective.


Subjects
Cochlear Implantation, Cochlear Implants, Speech Perception, Adolescent, Adult, Child, Emotions, Female, Humans, Male, Speech, Young Adult
6.
J Acoust Soc Am; 147(4): 2432, 2020 Apr.
Article in English | MEDLINE | ID: mdl-32359241

ABSTRACT

The ability to recognize speech that is degraded spectrally is a critical skill for successfully using a cochlear implant (CI). Previous research has shown that toddlers with normal hearing can successfully recognize noise-vocoded words as long as the signal contains at least eight spectral channels [Newman and Chatterjee (2013). J. Acoust. Soc. Am. 133(1), 483-494; Newman, Chatterjee, Morini, and Remez (2015). J. Acoust. Soc. Am. 138(3), EL311-EL317], although they have difficulty with signals that contain only four channels of information. Young children with CIs not only need to match a degraded speech signal to a stored representation (word recognition), but they also need to create new representations (word learning), a task that is likely to be more cognitively demanding. Normal-hearing toddlers aged 34 months were tested on their ability to initially learn (fast-map) new words from noise-vocoded stimuli. While children were successful at fast-mapping new words from 16-channel noise-vocoded stimuli, they failed to do so from 8-channel noise-vocoded speech. The level of degradation imposed by 8-channel vocoding appears sufficient to disrupt fast-mapping in young children. Recent results indicate that only CI patients with high spectral resolution can benefit from more than eight active electrodes. This suggests that for many children with CIs, reduced spectral resolution may limit their acquisition of novel words.


Subjects
Cochlear Implants, Speech Perception, Acoustic Stimulation, Child, Preschool, Humans, Noise/adverse effects, Speech
7.
Ear Hear; 40(3): 477-492, 2019.
Article in English | MEDLINE | ID: mdl-30074504

ABSTRACT

OBJECTIVES: Emotional communication is important in children's social development. Previous studies have shown deficits in voice emotion recognition by children with moderate-to-severe hearing loss or with cochlear implants. Little, however, is known about emotion recognition in children with mild-to-moderate hearing loss. The objective of this study was to compare voice emotion recognition by children with mild-to-moderate hearing loss relative to their peers with normal hearing, under conditions in which the emotional prosody was either more or less exaggerated (child-directed or adult-directed speech, respectively). We hypothesized that the performance of children with mild-to-moderate hearing loss would be comparable to their normally hearing peers when tested with child-directed materials but would show significant deficits in emotion recognition when tested with adult-directed materials, which have reduced prosodic cues. DESIGN: Nineteen school-aged children (8 to 14 years of age) with mild-to-moderate hearing loss and 20 children with normal hearing aged 6 to 17 years participated in the study. A group of 11 young, normally hearing adults was also tested. Stimuli comprised sentences spoken in one of five emotions (angry, happy, sad, neutral, and scared), either in a child-directed or in an adult-directed manner. The task was a single-interval, five-alternative forced-choice paradigm, in which the participants heard each sentence in turn and indicated which of the five emotions was associated with that sentence. Reaction time was also recorded as a measure of cognitive load. RESULTS: Acoustic analyses confirmed the exaggerated prosodic cues in the child-directed materials relative to the adult-directed materials. Results showed significant effects of age, specific emotion (happy, sad, etc.), and test materials (better performance with child-directed materials) in both groups of children, as well as susceptibility to talker variability. Contrary to our hypothesis, no significant differences were observed between the 2 groups of children in either emotion recognition (percent correct or d' values) or in reaction time, with either child- or adult-directed materials. Among children with hearing loss, degree of hearing loss (mild or moderate) did not predict performance. In children with hearing loss, interactions between vocabulary, materials, and age were observed, such that older children with stronger vocabulary showed better performance with child-directed speech. Such interactions were not observed in children with normal hearing. The pattern of results was broadly consistent across the different measures of accuracy, d', and reaction time. CONCLUSIONS: Children with mild-to-moderate hearing loss do not have significant deficits in overall voice emotion recognition compared with their normally hearing peers, but mechanisms involved may be different between the 2 groups. The results suggest a stronger role for linguistic ability in emotion recognition by children with normal hearing than by children with hearing loss.
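An acoustic analysis like the one referenced, confirming exaggerated prosody in child-directed speech, reduces to comparing F0 statistics between recordings. A hedged sketch using librosa's pYIN tracker follows; the file names are hypothetical and any F0 estimator would serve.

```python
# Compare F0 mean and variability between child-directed (CDS) and
# adult-directed (ADS) versions of a sentence (illustrative F0 range).
import librosa
import numpy as np

def f0_stats(path, fmin=75.0, fmax=500.0):
    y, sr = librosa.load(path, sr=None)
    f0, voiced, _ = librosa.pyin(y, fmin=fmin, fmax=fmax, sr=sr)
    f0 = f0[voiced]                       # keep voiced frames only
    return float(np.mean(f0)), float(np.std(f0))

mean_cds, sd_cds = f0_stats("happy_cds.wav")   # hypothetical file names
mean_ads, sd_ads = f0_stats("happy_ads.wav")
print(f"CDS F0 SD {sd_cds:.1f} Hz vs ADS {sd_ads:.1f} Hz")
```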


Assuntos
Emoções , Perda Auditiva/fisiopatologia , Reconhecimento Psicológico , Percepção Social , Voz , Adolescente , Estudos de Casos e Controles , Criança , Desenvolvimento Infantil , Sinais (Psicologia) , Feminino , Humanos , Masculino , Índice de Gravidade de Doença
8.
Ear Hear; 40(5): 1069-1083, 2019.
Article in English | MEDLINE | ID: mdl-30614835

ABSTRACT

OBJECTIVES: Emotional communication is a cornerstone of social cognition and informs human interaction. Previous studies have shown deficits in facial and vocal emotion recognition in older adults, particularly for negative emotions. However, few studies have examined the combined effects of aging and hearing loss on vocal emotion recognition by adults. The objective of this study was to compare vocal emotion recognition in adults with hearing loss relative to age-matched peers with normal hearing. We hypothesized that age would play a role in emotion recognition and that listeners with hearing loss would show deficits across the age range. DESIGN: Thirty-two adults (22 to 74 years of age) with mild to severe symmetrical sensorineural hearing loss, amplified with bilateral hearing aids, and 30 adults (21 to 75 years of age) with normal hearing participated in the study. Stimuli consisted of sentences spoken by 2 talkers (1 male, 1 female) in 5 emotions (angry, happy, neutral, sad, and scared) in an adult-directed manner. The task involved a single-interval, five-alternative forced-choice paradigm, in which the participants listened to individual sentences and indicated which of the five emotions was targeted in each sentence. Reaction time was recorded as an indirect measure of cognitive load. RESULTS: Results showed significant effects of age. Older listeners had reduced accuracy, increased reaction times, and reduced d' values. Normal-hearing listeners showed an Age by Talker interaction, in which older listeners had more difficulty identifying male vocal emotion. Listeners with hearing loss showed reduced accuracy, increased reaction times, and lower d' values compared with age-matched normal-hearing listeners. Within the group with hearing loss, age and talker effects were significant, and low-frequency pure-tone averages showed a marginally significant effect. Contrary to other studies, once hearing thresholds were taken into account, no effects of listener sex were observed, nor were there effects of individual emotions on accuracy. However, reaction times and d' values showed significant differences between individual emotions. CONCLUSIONS: The results of this study confirm existing findings in the literature showing that older adults show significant deficits in voice emotion recognition compared with their normally hearing peers, and that among listeners with normal hearing, age-related changes in hearing do not predict this age-related deficit. The present results also add to the literature by showing that hearing impairment contributes additionally to deficits in vocal emotion recognition, separate from deficits related to age. These effects of age and hearing loss appear to be quite robust, being evident in reduced accuracy scores and d' measures, as well as in reaction time measures.


Subjects
Emotions, Facial Recognition, Hearing Loss, Sensorineural/physiopathology, Social Perception, Speech Perception, Adult, Age Factors, Aged, Case-Control Studies, Female, Hearing Aids, Hearing Loss, Sensorineural/rehabilitation, Humans, Male, Middle Aged, Severity of Illness Index, Young Adult
9.
Ear Hear; 39(5): 874-880, 2018.
Article in English | MEDLINE | ID: mdl-29337761

ABSTRACT

OBJECTIVES: It is known that school-aged children with cochlear implants show deficits in voice emotion recognition relative to normal-hearing peers. Little, however, is known about normal-hearing children's processing of emotional cues in cochlear implant-simulated, spectrally degraded speech. The objective of this study was to investigate school-aged, normal-hearing children's recognition of voice emotion, and the degree to which their performance could be predicted by their age, vocabulary, and cognitive factors such as nonverbal intelligence and executive function. DESIGN: Normal-hearing children (6-19 years old) and young adults were tested on a voice emotion recognition task under three different conditions of spectral degradation using cochlear implant simulations (full-spectrum, 16-channel, and 8-channel noise-vocoded speech). Measures of vocabulary, nonverbal intelligence, and executive function were obtained as well. RESULTS: Adults outperformed children on all tasks, and a strong developmental effect was observed. The children's age, the degree of spectral resolution, and nonverbal intelligence were predictors of performance, but vocabulary and executive functions were not, and no interactions were observed between age and spectral resolution. CONCLUSIONS: These results indicate that cognitive function and age play important roles in children's ability to process emotional prosody in spectrally degraded speech. The lack of an interaction between the degree of spectral resolution and children's age further suggests that younger and older children may benefit similarly from improvements in spectral resolution. The findings imply that younger and older children with cochlear implants may benefit similarly from technical advances that improve spectral resolution.


Subjects
Emotions, Intelligence, Speech Perception, Adolescent, Age Factors, Child, Cognition, Female, Humans, Linear Models, Male, Reference Values, Vocabulary, Young Adult
10.
J Acoust Soc Am; 143(2): 1117, 2018 Feb.
Article in English | MEDLINE | ID: mdl-29495705

ABSTRACT

Little is known about cochlear implant (CI) users' ability to process amplitude modulation (AM) under conditions of forward masking (forward-modulation detection/discrimination interference, or F-MDI). In this study, F-MDI was investigated in adult CI listeners using direct electrical stimulation via a research interface. The target was sinusoidally amplitude modulated at 50 Hz and presented to a fixed electrode in the middle of the array. The forward masker was either amplitude modulated at the same rate (AM) or unmodulated and presented at the peak amplitude of its AM counterpart (steady-state peak, SSP). Results showed that the AM masker produced higher modulation thresholds in the target than the SSP masker. The difference (F-MDI) was estimated to be 4.6 dB on average and did not change with masker-target delays up to 100 ms or with masker-target spatial electrode distances up to eight electrodes. Results with a coherent remote cue presented with the masker showed that confusion effects did not play a role in the observed F-MDI. Traditional recovery from forward masking using the same maskers and a 20-ms probe, measured in four of the subjects, confirmed the expected result: higher thresholds with the SSP masker than the AM masker. Collectively, the results indicate that significant F-MDI occurs in CI users.
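The relation between the two maskers is one line of arithmetic: the SSP masker sits at the peak of its AM counterpart, i.e., base × (1 + m). A sketch of the two envelopes applied to a pulse-train amplitude sequence, where the pulse rate, modulation depth, and duration are illustrative assumptions:

```python
# AM masker envelope vs. steady-state-peak (SSP) counterpart.
import numpy as np

fs_pulse, dur, fm, m, base = 2000, 0.3, 50.0, 1.0, 1.0   # assumed values
t = np.arange(int(fs_pulse * dur)) / fs_pulse            # one point per pulse
am_amps = base * (1.0 + m * np.sin(2 * np.pi * fm * t))  # 50-Hz AM amplitudes
ssp_amps = np.full_like(t, base * (1.0 + m))             # constant at the AM peak
```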


Subjects
Auditory Perception, Cochlear Implantation/instrumentation, Cochlear Implants, Noise/adverse effects, Perceptual Masking, Persons With Hearing Impairments/rehabilitation, Acoustic Stimulation, Adult, Aged, Auditory Threshold, Cues, Electric Stimulation, Female, Humans, Male, Middle Aged, Persons With Hearing Impairments/psychology
11.
J Acoust Soc Am; 141(5): 3190, 2017 May.
Article in English | MEDLINE | ID: mdl-28682084

ABSTRACT

Psychophysical recovery from forward masking was measured in adult cochlear implant users of Cochlear™ and Advanced Bionics™ devices, in monopolar and in focused (bipolar and tripolar) stimulation modes, at four electrode sites across the arrays, and at two levels (loudness balanced across modes and electrodes). Results indicated a steeper psychophysical recovery from forward masking in monopolar than in bipolar and tripolar modes, modified by differential effects of electrode and level. The interactions between factors varied somewhat across devices. It is speculated that psychophysical recovery from forward masking may be driven by different populations of neurons in the different modes, with a broader stimulation pattern resulting in a greater likelihood of response by healthier and/or faster-recovering neurons within the stimulated population. If a more rapid recovery from prior stimulation reflects responses of neurons not necessarily close to the activating site, the spectral pattern of the incoming acoustic signal may be distorted. These results have implications for speech processor implementations using different degrees of focusing of the electric field. The primary differences in the shape of the recovery function were observed in the earlier portion (between 2 and 45 ms) of recovery, which is significant in terms of the speech envelope.
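Recovery functions of this kind are often summarized by fitting an exponential decay of masked threshold shift against masker-probe delay; differences in the early portion then surface in the fitted time constant. The functional form and data below are illustrative only, not values from the study.

```python
# Fit an assumed exponential recovery-from-forward-masking function.
import numpy as np
from scipy.optimize import curve_fit

def recovery(delay_ms, shift0, tau_ms):
    return shift0 * np.exp(-delay_ms / tau_ms)   # threshold shift in dB

delays = np.array([2.0, 5.0, 10.0, 20.0, 45.0, 100.0])   # ms (made-up)
shifts = np.array([9.1, 7.6, 5.8, 3.4, 1.2, 0.3])        # dB (made-up)
(s0, tau), _ = curve_fit(recovery, delays, shifts, p0=[10.0, 20.0])
print(f"initial shift {s0:.1f} dB, time constant {tau:.1f} ms")
```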


Subjects
Cochlear Implantation/instrumentation, Cochlear Implants, Perceptual Masking, Persons With Hearing Impairments/rehabilitation, Speech Perception, Acoustic Stimulation, Adolescent, Adult, Aged, Auditory Threshold, Child, Electric Stimulation, Female, Humans, Loudness Perception, Male, Middle Aged, Noise/adverse effects, Persons With Hearing Impairments/psychology, Prosthesis Design, Psychoacoustics, Time Factors, Young Adult
12.
J Acoust Soc Am; 141(1): 50, 2017 Jan.
Article in English | MEDLINE | ID: mdl-28147600

ABSTRACT

Sequential stream segregation by normal hearing (NH) and cochlear implant (CI) listeners was investigated using an irregular rhythm detection (IRD) task. Pure tones and narrowband noises of different bandwidths were presented monaurally to older and younger NH listeners via headphones. For CI users, stimuli were delivered as pure tones via soundfield and via direct electrical stimulation. Results confirmed that tonal pitch is not essential for stream segregation by NH listeners and that aging does not reduce NH listeners' stream segregation. CI listeners' stream segregation was significantly poorer than NH listeners' with pure tone stimuli. With direct stimulation, however, CI listeners showed significantly stronger stream segregation, with a mean normalized pattern similar to NH listeners, implying that the CI speech processors possibly degraded acoustic cues. CI listeners' performance on an electrode discrimination task indicated that cues that are salient enough to make two electrodes highly discriminable may not be sufficiently salient for stream segregation, and that gap detection/discrimination, which must depend on perceptual electrode differences, did not play a role in the IRD task. Although the IRD task does not encompass all aspects of full stream segregation, these results suggest that some CI listeners may demonstrate aspects of stream segregation.
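The IRD logic can be made concrete with a toy stimulus: a regular alternating two-tone sequence versus one whose B tones are temporally jittered. A listener who segregates A and B into separate streams loses the cross-stream timing cue and finds the jitter harder to detect. All tone parameters below are illustrative assumptions, not the study's stimuli.

```python
# Generate a regular ABAB tone sequence and a jittered counterpart.
import numpy as np

def tone(f, dur, fs=44100):
    t = np.arange(int(fs * dur)) / fs
    return np.sin(2 * np.pi * f * t) * np.hanning(len(t))  # ramped tone burst

def abab_sequence(fa, fb, ioi=0.15, n=8, jitter=0.0, fs=44100, seed=0):
    rng = np.random.default_rng(seed)
    out = np.zeros(int(fs * ioi * n) + fs // 10)
    for i in range(n):
        onset = i * ioi
        if i % 2 == 1 and jitter > 0:            # jitter the B tones only
            onset += rng.uniform(-jitter, jitter)
        s = int(onset * fs)
        burst = tone(fa if i % 2 == 0 else fb, 0.06, fs)
        out[s:s + len(burst)] += burst
    return out

regular = abab_sequence(400, 600)                 # isochronous standard
irregular = abab_sequence(400, 600, jitter=0.03)  # 30-ms jitter target
```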


Subjects
Cochlear Implantation/instrumentation, Cochlear Implants, Cues, Persons With Hearing Impairments/rehabilitation, Speech Perception, Acoustic Stimulation, Adolescent, Adult, Age Factors, Aged, Audiometry, Pure-Tone, Case-Control Studies, Electric Stimulation, Female, Humans, Male, Middle Aged, Persons With Hearing Impairments/psychology, Signal Detection, Psychological, Young Adult
13.
J Acoust Soc Am; 142(4): 1739, 2017 Oct.
Article in English | MEDLINE | ID: mdl-29092612

ABSTRACT

Musicians can sometimes achieve better speech recognition in noisy backgrounds than non-musicians, a phenomenon referred to as the "musician advantage effect." In addition, musicians are known to possess a finer sense of pitch than non-musicians. The present study examined the hypothesis that the latter fact could explain the former. Four experiments measured speech reception threshold for a target voice against speech or non-speech maskers. Although differences in fundamental frequency (ΔF0s) were shown to be beneficial even when presented to opposite ears (experiment 1), the authors' attempt to maximize their use by directing the listener's attention to the target F0 led to unexpected impairments (experiment 2) and the authors' attempt to hinder their use by generating uncertainty about the competing F0s led to practically negligible effects (experiments 3 and 4). The benefits drawn from ΔF0s showed surprisingly little malleability for a cue that can be used in the complete absence of energetic masking. In half of the experiments, musicians obtained better thresholds than non-musicians, particularly in speech-on-speech conditions, but they did not reliably obtain larger ΔF0 benefits. Thus, the data do not support the hypothesis that the musician advantage effect is based on greater ability to exploit ΔF0s.


Subjects
Cues, Music, Occupations, Pitch Perception, Speech Acoustics, Speech Perception, Voice Quality, Acoustic Stimulation, Adolescent, Adult, Attention, Female, Humans, Male, Noise/adverse effects, Perceptual Masking, Pitch Discrimination, Recognition, Psychology, Speech Reception Threshold Test, Young Adult
14.
J Acoust Soc Am; 140(5): 3718, 2016 Nov.
Article in English | MEDLINE | ID: mdl-27908046

ABSTRACT

Band importance functions estimate the relative contribution of individual acoustic frequency bands to speech intelligibility. Previous studies of band importance in listeners with cochlear implants have used experimental maps and direct stimulation. Here, band importance was estimated for clinical maps with acoustic stimulation. Listeners with cochlear implants had band importance functions that relied more heavily on lower frequencies and showed less cross-listener consistency than in listeners with normal hearing. The intersubject variability observed here indicates that averaging band importance functions across listeners with cochlear implants, as has been done in previous studies, may not be meaningful. Additionally, band importance functions of listeners with normal hearing for vocoded speech that either did or did not simulate spread of excitation were not different from one another, suggesting that additional factors beyond spread of excitation are necessary to account for changes in band importance in listeners with cochlear implants.
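One common correlational scheme for estimating band importance presents speech with random subsets of bands and relates each band's presence to trial-by-trial intelligibility. The sketch below shows that scheme; it is an assumption about methodology, not necessarily the estimation procedure used in this paper.

```python
# Hedged sketch: band importance weights from trial-level data.
import numpy as np

def band_importance(band_on, correct):
    """band_on: trials x bands boolean matrix (which bands were present);
    correct: per-trial score (0/1 or proportion). Returns one normalized
    weight per band via point-biserial correlation; assumes each band
    varies across trials (a constant column would yield NaN)."""
    band_on = np.asarray(band_on, float)
    correct = np.asarray(correct, float)
    w = np.array([np.corrcoef(band_on[:, j], correct)[0, 1]
                  for j in range(band_on.shape[1])])
    w = np.clip(w, 0, None)               # negative correlations -> zero
    return w / w.sum()                    # normalized band importance function
```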


Subjects
Cochlear Implants, Acoustic Stimulation, Adult, Cochlear Implantation, Humans, Male, Middle Aged, Speech Intelligibility, Speech Perception, Young Adult
15.
J Acoust Soc Am; 138(3): 1687-95, 2015 Sep.
Article in English | MEDLINE | ID: mdl-26428806

ABSTRACT

Voice-pitch cues provide detailed information about a talker that help a listener to understand speech in complex environments. Temporal-envelope based voice-pitch coding is important for listeners with hearing impairment, especially listeners with cochlear implants, as spectral resolution is not sufficient to provide a spectrally based voice-pitch cue. The effect of aging on the ability to glean voice-pitch information using temporal envelope cues is not completely understood. The current study measured fundamental frequency (f0) discrimination limens in normal-hearing younger and older adults while listening to noise-band vocoded harmonic complexes with varying numbers of spectral channels. Age-related disparities in performance were apparent across all conditions, independent of spectral degradation and/or fundamental frequency. The findings have important implications for older listeners with normal hearing and hearing loss, who may be inherently limited in their ability to perceive f0 cues due to senescent decline in auditory function.


Subjects
Hearing/physiology, Pitch Discrimination/physiology, Acoustic Stimulation, Adolescent, Aged, Audiometry, Pure-Tone, Female, Humans, Male, Noise, Perceptual Masking/physiology, Speech Perception/physiology, Young Adult
16.
J Acoust Soc Am; 138(3): EL311-7, 2015 Sep.
Article in English | MEDLINE | ID: mdl-26428832

ABSTRACT

Recent findings suggest that development changes the ability to comprehend degraded speech. Preschool children showed greater difficulties perceiving noise-vocoded speech (a signal that integrates amplitude over broad frequency bands) than sine-wave speech (which maintains the spectral peaks without the spectrum envelope). In contrast, 27-month-old children in the present study could recognize speech with either type of degradation and performed slightly better with eight-channel vocoded speech than with sine-wave speech. This suggests that children's identification performance depends critically on the degree of degradation and that their success in recognizing unfamiliar speech encodings is encouraging overall.


Subjects
Acoustic Stimulation/methods, Child Development, Comprehension, Speech Intelligibility, Speech Perception, Acoustics, Age Factors, Audiometry, Speech, Child, Preschool, Female, Humans, Male, Recognition, Psychology, Sound Spectrography
17.
Neuroimage; 88: 41-6, 2014 Mar.
Article in English | MEDLINE | ID: mdl-24188816

ABSTRACT

Speech recognition is robust to background noise. One underlying neural mechanism is that the auditory system segregates speech from the listening background and encodes it reliably. Such robust internal representation has been demonstrated in auditory cortex by neural activity entrained to the temporal envelope of speech. A paradox, however, then arises, as the spectro-temporal fine structure rather than the temporal envelope is known to be the major cue to segregate target speech from background noise. Does the reliable cortical entrainment in fact reflect a robust internal "synthesis" of the attended speech stream rather than direct tracking of the acoustic envelope? Here, we test this hypothesis by degrading the spectro-temporal fine structure while preserving the temporal envelope using vocoders. Magnetoencephalography (MEG) recordings reveal that cortical entrainment to vocoded speech is severely degraded by background noise, in contrast to the robust entrainment to natural speech. Furthermore, cortical entrainment in the delta band (1-4 Hz) predicts the speech recognition score at the level of individual listeners. These results demonstrate that reliable cortical entrainment to speech relies on the spectro-temporal fine structure, and suggest that cortical entrainment to the speech envelope is not merely a representation of the speech envelope but a coherent representation of multiscale spectro-temporal features that are synchronized to the syllabic and phrasal rhythms of speech.
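The core of a delta-band entrainment measure can be sketched as band-pass filtering both the stimulus envelope and the neural response to 1-4 Hz and correlating them. Real MEG analyses use more elaborate, cross-validated decoders; the sampling rate and filter order here are assumptions.

```python
# Correlate delta-band (1-4 Hz) speech envelope and neural signal.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def delta_entrainment(envelope, neural, fs=200.0):
    """`envelope` and `neural` are same-length arrays sampled at `fs`."""
    sos = butter(4, [1.0, 4.0], btype="bandpass", fs=fs, output="sos")
    env_d = sosfiltfilt(sos, envelope)
    meg_d = sosfiltfilt(sos, neural)
    return float(np.corrcoef(env_d, meg_d)[0, 1])
```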


Subjects
Brain Waves/physiology, Cerebral Cortex/physiology, Electroencephalography Phase Synchronization/physiology, Magnetoencephalography/methods, Speech Perception/physiology, Adult, Female, Humans, Male, Young Adult
18.
J Acoust Soc Am; 136(2): 829-40, 2014 Aug.
Article in English | MEDLINE | ID: mdl-25096116

ABSTRACT

The objective of this study was to investigate charge integration at threshold by cochlear implant listeners using pulse train stimuli in different stimulation modes (monopolar, bipolar, tripolar). The results partially confirmed and extended the findings of previous studies conducted in animal models showing that charge integration depends on the stimulation mode. The primary overall finding was that threshold vs pulse phase duration functions had steeper slopes in monopolar mode and shallower slopes in more spatially restricted modes. While the result was clear-cut in eight users of the Cochlear Corporation™ device, the findings with the six users of the Advanced Bionics™ device who participated were less consistent. It is likely that different stimulation modes excite different neuronal populations and/or sites of excitation on the same neuron (e.g., peripheral process vs central axon). These differences may influence not only charge integration but possibly also temporal dynamics at suprathreshold levels and with more speech-relevant stimuli. Given the present interest in focused stimulation modes, these results have implications for cochlear implant speech processor design and for the protocols used to map acoustic amplitude to electric stimulation parameters.


Subjects
Auditory Perception, Cochlear Implantation/instrumentation, Cochlear Implants, Electric Stimulation/methods, Persons With Hearing Impairments/rehabilitation, Acoustic Stimulation, Adolescent, Adult, Aged, Auditory Pathways/physiopathology, Auditory Threshold, Female, Humans, Male, Middle Aged, Persons With Hearing Impairments/psychology, Prosthesis Design, Psychoacoustics, Time Factors, Young Adult
19.
J Acoust Soc Am; 136(5): 2726-36, 2014 Nov.
Article in English | MEDLINE | ID: mdl-25373972

ABSTRACT

When phase relationships between partials of a complex masker produce highly modulated temporal envelopes on the basilar membrane, listeners may detect speech information from temporal dips in the within-channel masker envelopes. This source of masking release (MR) is, however, located in regions of unresolved masker partials, and it is unclear how much of the speech information in these regions is really needed for intelligibility. Also, other sources of MR, such as glimpsing in between resolved masker partials, may provide sufficient information from regions that disregard phase relationships. This study simplified the problem of speech recognition to a masked detection task. Target bands of speech-shaped noise were restricted to frequency regions containing either only resolved or only unresolved masker partials, as a function of masker phase relationships (sine or random), masker fundamental frequency (F0) (50, 100, or 200 Hz), and masker spectral profile (flat-spectrum or speech-shaped). Although masker phase effects could be observed in unresolved regions at F0s of 50 and 100 Hz, it was only at the 50-Hz F0 that detection thresholds were ever lower in unresolved than in resolved regions, suggesting little role of envelope modulations for harmonic complexes with F0s in the human voice range and at moderate levels.
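The masker manipulation at issue, sine versus random phase, changes the within-channel envelope without changing the long-term spectrum. A sketch of both maskers, where the F0, duration, and bandwidth are illustrative assumptions:

```python
# Harmonic complex with sine-phase (peaky envelope) or random-phase
# (flatter envelope) partials; long-term spectra are identical.
import numpy as np

def harmonic_complex(f0, fs=44100, dur=0.5, fmax=8000.0,
                     random_phase=False, seed=0):
    rng = np.random.default_rng(seed)
    t = np.arange(int(fs * dur)) / fs
    x = np.zeros_like(t)
    for k in range(1, int(fmax // f0) + 1):
        phi = rng.uniform(0, 2 * np.pi) if random_phase else 0.0
        x += np.sin(2 * np.pi * k * f0 * t + phi)
    return x / np.max(np.abs(x))

sine = harmonic_complex(100.0)                     # highly modulated
rand = harmonic_complex(100.0, random_phase=True)  # flatter envelope
```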


Subjects
Noise, Perceptual Masking/physiology, Phonetics, Speech Intelligibility, Adult, Differential Threshold, Humans, Models, Theoretical, Pattern Recognition, Physiological, Periodicity, Sound, Speech Acoustics, Young Adult
20.
J Acoust Soc Am; 135(5): 2873-84, 2014 May.
Article in English | MEDLINE | ID: mdl-24815268

ABSTRACT

Speech recognition in a complex masker usually benefits from masker harmonicity, but there are several factors at work. The present study focused on two of them, glimpsing spectrally in between masker partials and periodicity within individual frequency channels. Using both a theoretical and an experimental approach, it is demonstrated that when inharmonic complexes are generated by jittering partials from their harmonic positions, there are better opportunities for spectral glimpsing in inharmonic than in harmonic maskers, and this difference is enhanced as fundamental frequency (F0) increases. As a result, measurements of masking level difference between the two maskers can be reduced, particularly at higher F0s. Using inharmonic maskers that offer similar glimpsing opportunity to harmonic maskers, it was found that the masking level difference between the two maskers varied little with F0, was influenced by periodicity of the first four partials, and could occur in low-, mid-, or high-frequency regions. Overall, the present results suggested that both spectral glimpsing and periodicity contribute to speech recognition under masking by harmonic complexes, and these effects seem independent from one another.
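The jittering manipulation fits in a few lines: displace each partial from its harmonic position by a random fraction of F0 and examine the resulting spectral gaps, which set the opportunity for glimpsing. The jitter range below is an illustrative assumption.

```python
# Jitter masker partials away from harmonic positions and measure
# the widest spectral gap they leave for glimpsing.
import numpy as np

def jittered_partials(f0, fmax=8000.0, jitter=0.5, seed=0):
    """Partial frequencies displaced by up to +/- jitter*F0."""
    rng = np.random.default_rng(seed)
    harmonics = f0 * np.arange(1, int(fmax // f0) + 1)
    return np.sort(harmonics +
                   rng.uniform(-jitter * f0, jitter * f0, harmonics.size))

freqs = jittered_partials(100.0)
print(np.max(np.diff(freqs)))   # widest gap in Hz; a harmonic masker's
                                # spacing is uniformly F0
```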


Subjects
Perceptual Masking/physiology, Speech Perception/physiology, Adult, Auditory Threshold, Humans, Middle Aged, Periodicity, Phonetics, Sound Spectrography, Speech Intelligibility, Young Adult