ABSTRACT
OBJECTIVES: Children with cochlear implants (CIs) vary widely in their ability to identify emotions in speech. The causes of this variability are unknown, but this knowledge will be crucial if we are to design improvements in technological or rehabilitative interventions that are effective for individual patients. The objective of this study was to investigate how well factors such as age at implantation, duration of device experience (hearing age), nonverbal cognition, vocabulary, and socioeconomic status predict prosody-based emotion identification in children with CIs, and how the key predictors in this population compare with those in children with normal hearing who are listening either to normal emotional speech or to degraded speech. DESIGN: We measured vocal emotion identification in 47 school-age CI recipients aged 7 to 19 years in a single-interval, 5-alternative forced-choice task. None of the participants had usable residual hearing based on parent/caregiver report. Stimuli consisted of a set of semantically emotion-neutral sentences recorded by 4 talkers in child-directed and adult-directed prosody corresponding to 5 emotions: neutral, angry, happy, sad, and scared. Twenty-one children with normal hearing were also tested in the same tasks; they listened both to original speech and to versions that had been noise-vocoded to simulate CI information processing. RESULTS: Group comparison confirmed the expected deficit in CI participants' emotion identification relative to participants with normal hearing. Within the CI group, increasing hearing age (correlated with developmental age) and nonverbal cognition outcomes predicted emotion recognition scores. Stimulus-related factors such as talker and emotional category also influenced performance and were involved in interactions with hearing age and cognition. Age at implantation was not predictive of emotion identification. Unlike in the CI participants, neither cognitive status nor vocabulary predicted outcomes in participants with normal hearing, whether they listened to original speech or to CI-simulated speech. Age-related improvements in outcomes were similar in the two groups. Participants with normal hearing listening to original speech showed the greatest differences in their scores for different talkers and emotions. Participants with normal hearing listening to CI-simulated speech showed significant deficits compared with their performance with original speech materials, and their scores also showed the least effect of talker- and emotion-based variability. CI participants showed more variation in their scores with different talkers and emotions than participants with normal hearing listening to CI-simulated speech, but less than participants with normal hearing listening to original speech. CONCLUSIONS: Taken together, these results confirm previous findings that pediatric CI recipients have deficits in emotion identification based on prosodic cues, but they improve with age and experience at a rate similar to that of peers with normal hearing. Unlike in participants with normal hearing, nonverbal cognition played a significant role in CI listeners' emotion identification. Specifically, nonverbal cognition predicted the extent to which individual CI users could benefit from some talkers being more expressive of emotions than others, and this effect was greater in CI users who had less experience with their device (or were younger) than in CI users who had more experience with their device (or were older). Thus, in young prelingually deaf children with CIs performing an emotional prosody identification task, cognitive resources may be harnessed to a greater degree than in older prelingually deaf children with CIs or in children with normal hearing.
Subjects
Cochlear Implantation, Cochlear Implants, Speech Perception, Adult, Humans, Child, Aged, Hearing, Emotions
ABSTRACT
Remote communicative contexts are part of everyday social, familial, and academic interactions for the modern child. We investigated the ability of second-graders to engage in remote discourse, and we determined whether language ability, theory of mind, and shy temperament predicted their success. Fifty 7-to-9-year-old monolingual English speakers with a wide range of language abilities participated in standardized testing and an expository discourse task in which they taught two adults to solve the Tower of London, one in an audiovisual condition to simulate video chat and a second in an audio-only condition to simulate phone communication. The discourse was scored with a rubric of 15 items deemed relevant to the explanation. Children included 27% to 87% of the items, with more items communicated via gesture than spoken word in both conditions. Gesture scores and spoken scores were highly correlated. Children specified more rubric items overall, and more rubric items in the spoken modality, in the audio condition than in the audiovisual condition. Performance in both conditions was positively associated with scores on independent measures of language ability. There was no relationship between performance and theory of mind, shy temperament, ability to solve the Tower of London, age, or sex. We conclude that 7-to-9-year-olds adjust the modality and content of their message to suit their remote partner's needs, but their success in remote discourse contexts varies substantially from individual to individual. Children with below-average language skills are at risk for functional impairments in remote communication.