ABSTRACT
OBJECTIVES: Children with cochlear implants (CIs) vary widely in their ability to identify emotions in speech. The causes of this variability are unknown, but this knowledge will be crucial if we are to design improvements in technological or rehabilitative interventions that are effective for individual patients. The objective of this study was to investigate how well factors such as age at implantation, duration of device experience (hearing age), nonverbal cognition, vocabulary, and socioeconomic status predict prosody-based emotion identification in children with CIs, and how the key predictors in this population compare to those in children with normal hearing who are listening to either normal emotional speech or degraded speech. DESIGN: We measured vocal emotion identification in 47 school-age CI recipients aged 7 to 19 years in a single-interval, 5-alternative forced-choice task. None of the participants had usable residual hearing based on parent/caregiver report. Stimuli consisted of a set of semantically emotion-neutral sentences that were recorded by 4 talkers in child-directed and adult-directed prosody corresponding to five emotions: neutral, angry, happy, sad, and scared. Twenty-one children with normal hearing were also tested in the same tasks; they listened to both original speech and versions that had been noise-vocoded to simulate CI information processing. RESULTS: Group comparison confirmed the expected deficit in CI participants' emotion identification relative to participants with normal hearing. Within the CI group, increasing hearing age (correlated with developmental age) and nonverbal cognition outcomes predicted emotion recognition scores. Stimulus-related factors such as talker and emotional category also influenced performance and were involved in interactions with hearing age and cognition. Age at implantation was not predictive of emotion identification.
Unlike for the CI participants, neither cognitive status nor vocabulary predicted outcomes in participants with normal hearing, whether listening to original speech or CI-simulated speech. Age-related improvements in outcomes were similar in the two groups. Participants with normal hearing listening to original speech showed the greatest differences in their scores for different talkers and emotions. Participants with normal hearing listening to CI-simulated speech showed significant deficits compared with their performance with original speech materials, and their scores also showed the least effect of talker- and emotion-based variability. CI participants showed more variation in their scores with different talkers and emotions than participants with normal hearing listening to CI-simulated speech, but less so than participants with normal hearing listening to original speech. CONCLUSIONS: Taken together, these results confirm previous findings that pediatric CI recipients have deficits in emotion identification based on prosodic cues, but they improve with age and experience at a rate that is similar to peers with normal hearing. Unlike in participants with normal hearing, nonverbal cognition played a significant role in CI listeners' emotion identification. Specifically, nonverbal cognition predicted the extent to which individual CI users could benefit from some talkers being more expressive of emotions than others, and this effect was greater in CI users who had less experience with their device (or were younger) than in CI users who had more experience with their device (or were older). Thus, in young prelingually deaf children with CIs performing an emotional prosody identification task, cognitive resources may be harnessed to a greater degree than in older prelingually deaf children with CIs or in children with normal hearing.
Subjects
Cochlear Implantation, Cochlear Implants, Speech Perception, Adult, Humans, Child, Aged, Hearing, Emotions
ABSTRACT
OBJECTIVES: Cochlear implants (CIs) are remarkable in allowing individuals with severe to profound hearing loss to perceive speech. Despite these gains in speech understanding, however, CI users often struggle to perceive elements such as vocal emotion and prosody, as CIs are unable to transmit the spectro-temporal detail needed to decode affective cues. This issue becomes particularly important for children with CIs, but little is known about their emotional development. In a previous study, pediatric CI users showed deficits in voice emotion recognition with child-directed stimuli featuring exaggerated prosody. However, the large intersubject variability and differential developmental trajectories known in this population prompted us to question the extent to which exaggerated prosody would facilitate performance in this task. Thus, the authors revisited the question with both adult-directed and child-directed stimuli. DESIGN: Vocal emotion recognition was measured using both child-directed (CDS) and adult-directed (ADS) speech conditions. Pediatric CI users, aged 7 to 19 years, with no cognitive or visual impairments, who communicated orally with English as the primary language, participated in the experiment (n = 27). Stimuli comprised 12 sentences selected from the HINT database. The sentences were spoken by male and female talkers in a CDS or ADS manner, in each of the five target emotions (happy, sad, neutral, scared, and angry). The chosen sentences were semantically emotion-neutral. Percent correct emotion recognition scores were analyzed for each participant in each condition (CDS vs. ADS). Children also completed cognitive tests of nonverbal IQ and receptive vocabulary, while parents completed questionnaires about CI and hearing history. It was predicted that the reduced prosodic variations found in the ADS condition would result in lower vocal emotion recognition scores compared with the CDS condition.
Moreover, it was hypothesized that cognitive factors, perceptual sensitivity to complex pitch changes, and elements of each child's hearing history may serve as predictors of performance on vocal emotion recognition. RESULTS: Consistent with our hypothesis, pediatric CI users scored higher on CDS than on ADS speech stimuli, suggesting that speaking with exaggerated prosody, akin to "motherese," may be a viable way to convey emotional content. Significant talker effects were also observed, in that higher scores were found for the female talker in both conditions. Multiple regression analysis showed that nonverbal IQ was a significant predictor of CDS emotion recognition scores, while years of CI use was a significant predictor of ADS scores. Confusion matrix analyses revealed a dependence of results on specific emotions: for the CDS condition's female talker, participants had high sensitivity (d' scores) to happy and low sensitivity to neutral sentences, while for the ADS condition, low sensitivity was found for scared sentences. CONCLUSIONS: In general, participants showed higher vocal emotion recognition in the CDS condition, which also had more variability in pitch and intensity, and thus more exaggerated prosody, than the ADS condition. Results suggest that pediatric CI users struggle with vocal emotion perception in general, and particularly with adult-directed speech. The authors believe these results have broad implications for understanding how CI users perceive emotions, both from an auditory communication standpoint and from a socio-developmental perspective.
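The d' (sensitivity) scores in the confusion-matrix analyses above are conventionally computed as the difference between z-transformed hit and false-alarm rates. A minimal Python sketch of that computation, not the authors' code; the log-linear 0.5 correction and the example trial counts are illustrative assumptions:

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate).

    A log-linear correction (add 0.5 to each cell) guards against
    infinite z-scores when a rate is exactly 0 or 1.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    return z(hit_rate) - z(fa_rate)

# Hypothetical counts for one emotion in a 5-alternative task:
# hits = target-emotion trials labeled with that emotion;
# false alarms = other-emotion trials labeled with it.
score = d_prime(hits=18, misses=2, false_alarms=8, correct_rejections=72)
```

For each emotion and condition, collapsing the other four response categories this way yields one d' per cell of the analysis.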
Subjects
Cochlear Implantation, Cochlear Implants, Speech Perception, Adolescent, Adult, Child, Emotions, Female, Humans, Male, Speech, Young Adult
ABSTRACT
OBJECTIVES: Emotional communication is a cornerstone of social cognition and informs human interaction. Previous studies have shown deficits in facial and vocal emotion recognition in older adults, particularly for negative emotions. However, few studies have examined the combined effects of aging and hearing loss on vocal emotion recognition by adults. The objective of this study was to compare vocal emotion recognition in adults with hearing loss relative to age-matched peers with normal hearing. We hypothesized that age would play a role in emotion recognition and that listeners with hearing loss would show deficits across the age range. DESIGN: Thirty-two adults (22 to 74 years of age) with mild to severe, symmetrical sensorineural hearing loss, amplified with bilateral hearing aids, and 30 adults (21 to 75 years of age) with normal hearing participated in the study. Stimuli consisted of sentences spoken by 2 talkers, 1 male and 1 female, in 5 emotions (angry, happy, neutral, sad, and scared) in an adult-directed manner. The task involved a single-interval, five-alternative forced-choice paradigm, in which the participants listened to individual sentences and indicated which of the five emotions was targeted in each sentence. Reaction time was recorded as an indirect measure of cognitive load. RESULTS: Results showed significant effects of age. Older listeners had reduced accuracy, increased reaction times, and reduced d' values. Normal-hearing listeners showed an Age by Talker interaction, in which older listeners had more difficulty identifying male vocal emotion. Listeners with hearing loss showed reduced accuracy, increased reaction times, and lower d' values compared with age-matched normal-hearing listeners. Within the group with hearing loss, age and talker effects were significant, and low-frequency pure-tone averages showed a marginally significant effect.
Contrary to other studies, once hearing thresholds were taken into account, no effects of listener sex were observed, nor were there effects of individual emotions on accuracy. However, reaction times and d' values showed significant differences between individual emotions. CONCLUSIONS: The results of this study confirm existing findings in the literature showing that older adults show significant deficits in voice emotion recognition compared with their normally hearing peers, and that among listeners with normal hearing, age-related changes in hearing do not predict this age-related deficit. The present results also add to the literature by showing that hearing impairment contributes additionally to deficits in vocal emotion recognition, separate from deficits related to age. These effects of age and hearing loss appear to be quite robust, being evident in reduced accuracy scores and d' measures, as well as in reaction time measures.
Subjects
Emotions, Facial Recognition, Hearing Loss, Sensorineural/physiopathology, Social Perception, Speech Perception, Adult, Age Factors, Aged, Case-Control Studies, Female, Hearing Aids, Hearing Loss, Sensorineural/rehabilitation, Humans, Male, Middle Aged, Severity of Illness Index, Young Adult
ABSTRACT
OBJECTIVES: It is known that school-aged children with cochlear implants show deficits in voice emotion recognition relative to normal-hearing peers. Little, however, is known about normal-hearing children's processing of emotional cues in cochlear implant-simulated, spectrally degraded speech. The objective of this study was to investigate school-aged, normal-hearing children's recognition of voice emotion, and the degree to which their performance could be predicted by their age, vocabulary, and cognitive factors such as nonverbal intelligence and executive function. DESIGN: Normal-hearing children (6-19 years old) and young adults were tested on a voice emotion recognition task under three conditions of spectral degradation (full-spectrum speech and 16- and 8-channel noise-vocoded cochlear implant simulations). Measures of vocabulary, nonverbal intelligence, and executive function were obtained as well. RESULTS: Adults outperformed children on all tasks, and a strong developmental effect was observed. The children's age, the degree of spectral resolution, and nonverbal intelligence were predictors of performance, but vocabulary and executive functions were not, and no interactions were observed between age and spectral resolution. CONCLUSIONS: These results indicate that cognitive function and age play important roles in children's ability to process emotional prosody in spectrally degraded speech. The lack of an interaction between the degree of spectral resolution and children's age further suggests that younger and older children may benefit similarly from improvements in spectral resolution. The findings imply that younger and older children with cochlear implants may benefit similarly from technical advances that improve spectral resolution.
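The noise-vocoding used to create cochlear implant simulations like those above can be sketched as follows. This is an illustrative implementation under common assumptions (log-spaced analysis bands between 100 and 8000 Hz, rectify-and-smooth envelope extraction, noise carriers), not the processing chain used in the study:

```python
import numpy as np

def noise_vocode(signal, fs, n_channels=8, f_lo=100.0, f_hi=8000.0,
                 env_cutoff=300.0, seed=0):
    """Crude n-channel noise vocoder: split the input into log-spaced
    frequency bands, extract each band's amplitude envelope, and use it
    to modulate band-limited noise. Spectro-temporal fine structure is
    discarded, as in a cochlear implant simulation."""
    n = len(signal)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    spec = np.fft.rfft(signal)
    noise_spec = np.fft.rfft(np.random.default_rng(seed).standard_normal(n))
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)
    out = np.zeros(n)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = (freqs >= lo) & (freqs < hi)
        band_sig = np.fft.irfft(np.where(band, spec, 0.0), n)
        # Envelope: half-wave rectify, then low-pass below env_cutoff.
        env_spec = np.fft.rfft(np.maximum(band_sig, 0.0))
        env = np.fft.irfft(np.where(freqs <= env_cutoff, env_spec, 0.0), n)
        env = np.clip(env, 0.0, None)
        carrier = np.fft.irfft(np.where(band, noise_spec, 0.0), n)
        out += env * carrier
    return out

# Demo: vocode one second of a synthetic three-harmonic complex.
fs = 16000
t = np.arange(fs) / fs
tone = sum(np.sin(2 * np.pi * f * t) for f in (220, 440, 660))
vocoded = noise_vocode(tone, fs, n_channels=8)
```

Reducing `n_channels` from 16 to 8 coarsens the spectral resolution in the same direction as the degradation conditions described above.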
Subjects
Emotions, Intelligence, Speech Perception, Adolescent, Age Factors, Child, Cognition, Female, Humans, Linear Models, Male, Reference Values, Vocabulary, Young Adult
ABSTRACT
Little is known about cochlear implant (CI) users' ability to process amplitude modulation (AM) under conditions of forward masking (forward-modulation detection/discrimination interference, or F-MDI). In this study, F-MDI was investigated in adult CI listeners using direct electrical stimulation via research interface. The target was sinusoidally amplitude modulated at 50 Hz, and presented to a fixed electrode in the middle of the array. The forward masker was either amplitude modulated at the same rate (AM) or unmodulated and presented at the peak amplitude of its AM counterpart (steady-state peak, SSP). Results showed that the AM masker produced higher modulation thresholds in the target than the SSP masker. The difference (F-MDI) was estimated to be 4.6 dB on average, and did not change with masker-target delays up to 100 ms or with masker-target spatial electrode distances up to eight electrodes. Results with a coherent remote cue presented with the masker showed that confusion effects did not play a role in the observed F-MDI. Traditional recovery from forward masking using the same maskers and a 20-ms probe, measured in four of the subjects, confirmed the expected result: higher thresholds with the SSP masker than the AM masker. Collectively, the results indicate that significant F-MDI occurs in CI users.
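For illustration, the relationship between the AM masker and its steady-state-peak (SSP) control can be sketched as follows: the SSP masker sits at the maximum of the 50-Hz sinusoidal envelope. The function names and the 100% modulation depth are assumptions for the sketch, not parameters taken from the study:

```python
import math

def sam_envelope(t, rate_hz=50.0, depth=1.0, peak=1.0):
    """Amplitude envelope of a sinusoidally amplitude-modulated (SAM)
    stimulus, peak-normalized so its maximum equals `peak`."""
    raw = 1.0 + depth * math.sin(2.0 * math.pi * rate_hz * t)
    return peak * raw / (1.0 + depth)

def ssp_envelope(peak=1.0):
    """Steady-state-peak control: constant at the SAM envelope's peak."""
    return peak

# 100 ms of the 50-Hz SAM envelope sampled at 1 kHz; its maximum
# matches the constant SSP level.
samples = [sam_envelope(i / 1000.0) for i in range(100)]
```

The comparison in the study is then between modulation thresholds for a target preceded by the fluctuating envelope versus the constant one at the same peak level.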
Subjects
Auditory Perception, Cochlear Implantation/instrumentation, Cochlear Implants, Noise/adverse effects, Perceptual Masking, Persons With Hearing Impairments/rehabilitation, Acoustic Stimulation, Adult, Aged, Auditory Threshold, Cues, Electric Stimulation, Female, Humans, Male, Middle Aged, Persons With Hearing Impairments/psychology
ABSTRACT
Psychophysical recovery from forward masking was measured in adult cochlear implant users of Cochlear™ and Advanced Bionics™ devices, in monopolar and in focused (bipolar and tripolar) stimulation modes, at four electrode sites across the arrays, and at two levels (loudness balanced across modes and electrodes). Results indicated a steeper psychophysical recovery from forward masking in monopolar than in bipolar and tripolar modes, modified by differential effects of electrode and level. The interactions between factors varied somewhat across devices. It is speculated that psychophysical recovery from forward masking may be driven by different populations of neurons in the different modes, with a broader stimulation pattern resulting in a greater likelihood of response by healthier and/or faster-recovering neurons within the stimulated population. If a more rapid recovery from prior stimulation reflects responses of neurons not necessarily close to the activating site, the spectral pattern of the incoming acoustic signal may be distorted. These results have implications for speech processor implementations using different degrees of focusing of the electric field. The primary differences in the shape of the recovery function were observed in the earlier portion (between 2 and 45 ms) of recovery, which is significant in terms of the speech envelope.
Subjects
Cochlear Implantation/instrumentation, Cochlear Implants, Perceptual Masking, Persons With Hearing Impairments/rehabilitation, Speech Perception, Acoustic Stimulation, Adolescent, Adult, Aged, Auditory Threshold, Child, Electric Stimulation, Female, Humans, Loudness Perception, Male, Middle Aged, Noise/adverse effects, Persons With Hearing Impairments/psychology, Prosthesis Design, Psychoacoustics, Time Factors, Young Adult
ABSTRACT
The objective of this study was to investigate charge integration at threshold by cochlear implant listeners using pulse train stimuli in different stimulation modes (monopolar, bipolar, tripolar). The results partially confirmed and extended the findings of previous studies conducted in animal models showing that charge integration depends on the stimulation mode. The primary overall finding was that threshold vs. pulse phase duration functions had steeper slopes in monopolar mode and shallower slopes in more spatially restricted modes. While the result was clear-cut in the eight users of the Cochlear Corporation™ device, the findings in the six users of the Advanced Bionics™ device who participated were less consistent. It is likely that different stimulation modes excite different neuronal populations and/or sites of excitation on the same neuron (e.g., peripheral process vs. central axon). These differences may influence not only charge integration but possibly also temporal dynamics at suprathreshold levels and with more speech-relevant stimuli. Given the present interest in focused stimulation modes, these results have implications for cochlear implant speech processor design and for the protocols used to map acoustic amplitude to electric stimulation parameters.
Subjects
Auditory Perception, Cochlear Implantation/instrumentation, Cochlear Implants, Electric Stimulation/methods, Persons With Hearing Impairments/rehabilitation, Acoustic Stimulation, Adolescent, Adult, Aged, Auditory Pathways/physiopathology, Auditory Threshold, Female, Humans, Male, Middle Aged, Persons With Hearing Impairments/psychology, Prosthesis Design, Psychoacoustics, Time Factors, Young Adult
ABSTRACT
Links between perception and production of emotional prosody by children with cochlear implants (CIs) have not been extensively explored. In this study, production and perception of emotional prosody were measured in 20 prelingually deaf school-age children with CIs. All were implanted by the age of 3, and most by 18 months. Emotion identification was well-predicted by prosody productions in terms of voice pitch modulation and duration. This finding supports the idea that in prelingually deaf children with CIs, production of emotional prosody is associated with access to auditory cues that support the perception of emotional prosody.
Subjects
Cochlear Implantation, Cochlear Implants, Child, Humans, Infant, Cochlea, Emotions, Perception
ABSTRACT
Purpose: Cochlear implants (CIs) transmit a degraded version of the acoustic input to the listener. This impacts the perception of harmonic pitch, resulting in deficits in the perception of voice features critical to speech prosody. Such deficits may relate to changes in how children with CIs (CCIs) learn to produce vocal emotions. The purpose of this study was to investigate happy and sad emotional speech productions by school-age CCIs, compared to productions by children with normal hearing (NH), postlingually deaf adults with CIs, and adults with NH. Method: All individuals recorded the same emotion-neutral sentences in a happy manner and a sad manner. These recordings were then used as stimuli in an emotion recognition task performed by child and adult listeners with NH. Their performance was taken as a measure of how well the 4 groups of talkers communicated the 2 emotions. Results: Results showed high variability in the identifiability of emotions produced by CCIs, relative to other groups. Some CCIs produced highly identifiable emotions, while others showed deficits. The postlingually deaf adults with CIs produced highly identifiable emotions and relatively small intersubject variability. Age at implantation was found to be a significant predictor of performance by CCIs. In addition, the NH listeners' age predicted how well they could identify the emotions produced by CCIs. Thus, older NH child listeners were better able to identify the CCIs' intended emotions than younger NH child listeners. In contrast to the deficits in their emotion productions, CCIs produced highly intelligible words in the sentences carrying the emotions. Conclusions: These results confirm previous findings showing deficits in CCIs' productions of prosodic cues and indicate that early auditory experience plays an important role in vocal emotion productions by individuals with CIs.
Subjects
Cochlear Implants/psychology, Deafness/psychology, Emotions/physiology, Persons With Hearing Impairments/psychology, Speech Intelligibility, Voice/physiology, Adolescent, Adult, Child, Cochlear Implantation, Deafness/surgery, Female, Hearing, Humans, Male, Middle Aged, Speech Perception, Young Adult
ABSTRACT
Purpose: Cochlear implants (CIs) provide reasonable levels of speech recognition in quiet, but voice pitch perception is severely impaired in CI users. The central question addressed here relates to how access to acoustic input pre-implantation influences vocal emotion production by individuals with CIs. The objective of this study was to compare acoustic characteristics of vocal emotions produced by prelingually deaf school-aged children with cochlear implants (CCIs) who were implanted at the age of 2 and had no usable hearing before implantation with those produced by children with normal hearing (CNH), adults with normal hearing (ANH), and postlingually deaf adults with cochlear implants (ACI) who developed with good access to acoustic information prior to losing their hearing and receiving a CI. Method: A set of 20 sentences without lexically based emotional information was recorded by 13 CCI, 9 CNH, 9 ANH, and 10 ACI, each with a happy emotion and a sad emotion, without training or guidance. The sentences were analyzed for primary acoustic characteristics of the productions. Results: Significant effects of Emotion were observed in all acoustic features analyzed (mean voice pitch, standard deviation of voice pitch, intensity, duration, and spectral centroid). ACI and ANH did not differ in any of the analyses. Of the four groups, CCI produced the smallest acoustic contrasts between the emotions in mean voice pitch and in its standard deviation. Effects of developmental age (highly correlated with the duration of device experience) and age at implantation (moderately correlated with duration of device experience) were observed, and interactions with the children's sex were also observed. Conclusion: Although prelingually deaf CCI and postlingually deaf ACI are listening to similarly degraded speech and show similar deficits in vocal emotion perception, these groups are distinct in their productions of contrastive vocal emotions.
The results underscore the importance of access to acoustic hearing in early childhood for the production of speech prosody and also suggest the need for a greater role of speech therapy in this area.
ABSTRACT
In tonal languages, voice pitch inflections change the meaning of words, such that the brain processes pitch not merely as an acoustic characterization of sound but as semantic information. In normally hearing (NH) adults, this linguistic pressure on pitch appears to sharpen its neural encoding and can lead to perceptual benefits, depending on task relevance, potentially generalizing outside of the speech domain. In children, however, linguistic systems are still malleable, meaning that their encoding of voice pitch information might not receive as much neural specialization but might generalize more easily to ecologically irrelevant pitch contours. This would seem particularly true for early-deafened children wearing a cochlear implant (CI), who must exhibit great adaptability to unfamiliar sounds as their sense of pitch is severely degraded. Here, we provide the first demonstration of a tonal language benefit in dynamic pitch sensitivity among NH children (using both sweep discrimination and labelling tasks), which extends partially to children with CIs (i.e., in the labelling task only). Strong age effects suggest that sensitivity to pitch contours reaches adult-like levels early in tonal language speakers (possibly before 6 years of age) but continues to develop in non-tonal language speakers well into the teenage years. Overall, we conclude that language-dependent neuroplasticity can enhance behavioral sensitivity to dynamic pitch, even in extreme cases of auditory degradation, but it is most easily observable early in life.
Subjects
Cochlear Implants, Hearing, Language, Pitch Discrimination, Pitch Perception, Adolescent, Behavior, Child, Humans, Neuronal Plasticity, Young Adult
ABSTRACT
Sensitivity to static changes in pitch has been shown to be poorer in school-aged children wearing cochlear implants (CIs) than in children with normal hearing (NH), but it is unclear whether this is also the case for dynamic changes in pitch. Yet, dynamically changing pitch has considerable ecological relevance in terms of natural speech, particularly aspects such as intonation, emotion, or lexical tone information. Twenty-one children with NH and 23 children wearing a CI participated in this study, along with 18 NH adults and 6 CI adults for comparison. Listeners with CIs used their clinically assigned settings with envelope-based coding strategies. Percent correct was measured in one- or three-interval, two-alternative forced-choice tasks, for the direction or discrimination of harmonic complexes based on a linearly rising or falling fundamental frequency. Sweep rates were adjusted per subject, on a logarithmic scale, so as to cover the full extent of the psychometric function. Data for up- and down-sweeps were fitted separately, using a maximum-likelihood technique. Fits were similar for up- and down-sweeps in the discrimination task, but diverged in the direction task because psychometric functions for down-sweeps were very shallow. Hits and false alarms were then converted into d' and beta values, from which a threshold was extracted at a d' of 0.77. Thresholds were very consistent between the two tasks and considerably higher (worse) for CI listeners than for their NH peers. Thresholds were also higher for children than for adults. Factors such as age at implantation, age at profound hearing loss, and duration of CI experience did not play any major role in this sensitivity. Thresholds for dynamic pitch sensitivity (in either task) also correlated with thresholds for static pitch sensitivity and with performance in tasks related to speech prosody.
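The threshold-extraction step described above, converting hits and false alarms to d' and reading off the stimulus value at d' = 0.77, can be sketched as follows. The psychometric data, the clamping floor, and the simple log-axis interpolation are illustrative assumptions, not the maximum-likelihood fit used in the study:

```python
import math
from statistics import NormalDist

def d_prime(hit_rate, fa_rate, floor=0.01):
    """d' = z(hit rate) - z(false-alarm rate); rates are clamped away
    from 0 and 1 so the z-transform stays finite."""
    z = NormalDist().inv_cdf
    clamp = lambda p: min(max(p, floor), 1.0 - floor)
    return z(clamp(hit_rate)) - z(clamp(fa_rate))

def threshold_at(rates, dprimes, criterion=0.77):
    """Return the sweep rate at which d' first crosses `criterion`,
    interpolating linearly on a log-rate axis (rates are log-spaced)."""
    points = list(zip(rates, dprimes))
    for (r0, d0), (r1, d1) in zip(points, points[1:]):
        if d0 <= criterion <= d1:
            frac = (criterion - d0) / (d1 - d0)
            return math.exp(math.log(r0) + frac * (math.log(r1) - math.log(r0)))
    return None  # criterion never crossed within the tested range

# Hypothetical hit/false-alarm rates at five log-spaced sweep rates (Hz/s).
rates = [25, 50, 100, 200, 400]
dps = [d_prime(h, f) for h, f in [(0.55, 0.45), (0.60, 0.40), (0.70, 0.35),
                                  (0.85, 0.20), (0.95, 0.10)]]
threshold = threshold_at(rates, dps)
```

A lower threshold here corresponds to better dynamic pitch sensitivity, since the criterion is reached at a slower sweep rate.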
ABSTRACT
Despite their remarkable success in bringing spoken language to hearing-impaired listeners, the signal transmitted through cochlear implants (CIs) remains impoverished in spectro-temporal fine structure. As a consequence, pitch-dominant information such as voice emotion is diminished. For young children, the ability to correctly identify the mood/intent of the speaker (which may not always be visible in their facial expression) is an important aspect of social and linguistic development. Previous work in the field has shown that children with cochlear implants (cCI) have significant deficits in voice emotion recognition relative to their normally hearing peers (cNH). Here, we report on voice emotion recognition by a cohort of 36 school-aged cCI. Additionally, we provide, for the first time, a comparison of their performance to that of cNH and NH adults (aNH) listening to CI simulations of the same stimuli. We also provide comparisons to the performance of adult listeners with CIs (aCI), most of whom learned language primarily through normal acoustic hearing. Results indicate that, despite strong variability, on average, cCI perform similarly to their adult counterparts; that both groups' mean performance is similar to aNHs' performance with 8-channel noise-vocoded speech; and that cNH achieve excellent scores in voice emotion recognition with full-spectrum speech but, on average, show significantly poorer scores than aNH with 8-channel noise-vocoded speech. A strong developmental effect was observed in the cNH with noise-vocoded speech in this task. These results point to the considerable benefit obtained by cochlear-implanted children from their devices, but also underscore the need for further research and development in this important and neglected area. This article is part of a Special Issue entitled