Results 1 - 20 of 68
1.
Anim Cogn ; 27(1): 34, 2024 Apr 16.
Article in English | MEDLINE | ID: mdl-38625429

ABSTRACT

Humans have an impressive ability to comprehend signal-degraded speech; however, the extent to which comprehension of degraded speech relies on human-specific features of speech perception vs. more general cognitive processes is unknown. Since dogs live alongside humans and regularly hear speech, they can be used as a model to differentiate between these possibilities. One often-studied type of degraded speech is noise-vocoded speech (sometimes thought of as cochlear-implant-simulation speech). Noise-vocoded speech is made by dividing the speech signal into frequency bands (channels), identifying the amplitude envelope of each individual band, and then using these envelopes to modulate bands of noise centered over the same frequency regions - the result is a signal with preserved temporal cues, but vastly reduced frequency information. Here, we tested dogs' recognition of familiar words produced in 16-channel vocoded speech. In the first study, dogs heard their names and unfamiliar dogs' names (foils) in vocoded speech as well as natural speech. In the second study, dogs heard 16-channel vocoded speech only. Dogs listened longer to their vocoded name than vocoded foils in both experiments, showing that they can comprehend a 16-channel vocoded version of their name without prior exposure to vocoded speech, and without immediate exposure to the natural-speech version of their name. Dogs' name recognition in the second study was mediated by the number of phonemes in the dogs' name, suggesting that phonological context plays a role in degraded speech comprehension.
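
For readers unfamiliar with the technique, the vocoding recipe described in this abstract is straightforward to sketch in code. The following Python fragment is a minimal illustration only, not the authors' stimulus pipeline: the filter design, channel spacing, and frequency range are assumptions, and published stimuli typically also low-pass filter each envelope before modulation.

    import numpy as np
    from scipy.signal import butter, sosfiltfilt, hilbert

    def noise_vocode(speech, fs, n_channels=16, f_lo=100.0, f_hi=8000.0):
        """Replace speech fine structure with noise, keeping per-band envelopes."""
        # Assumes fs > 2 * f_hi so the top band edge is below Nyquist.
        noise = np.random.default_rng(0).standard_normal(len(speech))
        edges = np.geomspace(f_lo, f_hi, n_channels + 1)   # log-spaced band edges
        out = np.zeros(len(speech))
        for lo, hi in zip(edges[:-1], edges[1:]):
            sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
            band = sosfiltfilt(sos, speech)        # speech in this channel
            envelope = np.abs(hilbert(band))       # amplitude envelope of the band
            carrier = sosfiltfilt(sos, noise)      # noise restricted to the same band
            out += envelope * carrier              # envelope-modulated noise band
        return out / (np.abs(out).max() + 1e-12)   # normalize to avoid clipping

Summing the modulated bands yields a signal whose temporal envelope tracks the original speech in each frequency region while the fine spectral detail is replaced by noise, exactly the preserved-temporal/reduced-spectral trade-off the abstract describes.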


Subjects
Speech Perception , Speech , Humans , Animals , Dogs , Cues (Psychology) , Hearing , Linguistics
2.
J Child Lang ; : 1-22, 2024 Feb 16.
Article in English | MEDLINE | ID: mdl-38362892

ABSTRACT

Children who receive cochlear implants develop spoken language on a protracted timescale. The home environment facilitates speech-language development, yet it is largely unknown how that environment differs between children with cochlear implants and children with typical hearing. We matched eighteen preschoolers with implants (31-65 months) to two groups of children with typical hearing: by chronological age and by hearing age. Each child completed a long-form, naturalistic audio recording of their home environment (appx. 16 hours/child; >730 hours of observation) to measure adult speech input, child vocal productivity, and caregiver-child interaction. Results showed that children with cochlear implants and children with typical hearing were exposed to and engaged in similar amounts of spoken language with caregivers. However, the home environment did not reflect developmental stages as closely for children with implants, nor did it predict their speech outcomes as strongly. Home-based speech-language interventions should focus on the unique input-outcome relationships for this group of children with hearing loss.

3.
Child Dev ; 94(4): e197-e214, 2023.
Article in English | MEDLINE | ID: mdl-37036081

ABSTRACT

To learn language, children must map variable input to categories such as phones and words. How do children process variation and distinguish variable pronunciations ("shoup" for soup) from new words? The unique sensory experience of children with cochlear implants, who learn speech through their device's degraded signal, lends new insight into this question. In a mispronunciation sensitivity eyetracking task, children with implants (N = 33) and with typical hearing (N = 24; 36-66 months; 36F, 19M; all non-Hispanic white) who had larger vocabularies processed known words faster. But children with implants were less sensitive to mispronunciations than typical hearing controls. Thus, children of all hearing experiences use lexical knowledge to process familiar words but require detailed speech representations to process variable speech in real time.


Subjects
Cochlear Implantation , Cochlear Implants , Speech Perception , Child , Humans , Speech , Language
4.
J Acoust Soc Am ; 153(3): 1486, 2023 03.
Article in English | MEDLINE | ID: mdl-37002071

ABSTRACT

Because speaking rates are highly variable, listeners must use cues like phoneme or sentence duration to normalize speech across different contexts. Scaling speech perception in this way allows listeners to distinguish between temporal contrasts, like voiced and voiceless stops, even at different speech speeds. It has long been assumed that this speaking rate normalization can occur over small units such as phonemes. However, phonemes lack clear boundaries in running speech, so it is not clear that listeners can rely on them for normalization. To evaluate this, we isolate two potential processing levels for speaking rate normalization-syllabic and sub-syllabic-by manipulating phoneme duration in order to cue speaking rate, while also holding syllable duration constant. In doing so, we show that changing the duration of phonemes both with unique spectro-temporal signatures (/kɑ/) and more overlapping spectro-temporal signatures (/wɪ/) results in a speaking rate normalization effect. These results suggest that when acoustic boundaries within syllables are less clear, listeners can normalize for rate differences on the basis of sub-syllabic units.
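
As a concrete illustration of the stimulus logic, the sketch below lengthens one phoneme and compresses the remainder of the syllable so that total syllable duration is unchanged. It is a hypothetical reconstruction, not the authors' procedure: the phoneme boundary, stretch factor, and use of librosa's phase-vocoder time_stretch are all assumptions.

    import numpy as np
    import librosa

    def reproportion_syllable(y, sr, boundary_s, stretch=1.25):
        """Lengthen phoneme 1 by `stretch`; compress phoneme 2 to keep total length."""
        b = int(boundary_s * sr)                   # hand-marked phoneme boundary
        p1, p2 = y[:b], y[b:]
        p1_new = librosa.effects.time_stretch(y=p1, rate=1.0 / stretch)
        target = len(y) - len(p1_new)              # samples left for phoneme 2
        p2_new = librosa.effects.time_stretch(y=p2, rate=len(p2) / max(target, 1))
        return np.concatenate([p1_new, p2_new[:target]])  # trim STFT rounding error

Because the syllable's overall duration stays constant, any shift in listeners' category boundaries must be driven by the sub-syllabic durations, which is the contrast the study exploits.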


Subjects
Phonetics , Speech Perception , Speech Acoustics , Speech , Language
5.
J Exp Child Psychol ; 227: 105581, 2023 03.
Article in English | MEDLINE | ID: mdl-36423439

ABSTRACT

Although there is ample evidence documenting the development of spoken word recognition from infancy to adolescence, it is still unclear how development of word-level processing interacts with higher-level sentence processing, such as the use of lexical-semantic cues, to facilitate word recognition. We investigated how the ability to use an informative verb (e.g., draws) to predict an upcoming word (picture) and suppress competition from similar-sounding words (pickle) develops throughout the school-age years. Eye movements of children from two age groups (5-6 years and 9-10 years) were recorded while the children heard a sentence with an informative or neutral verb (The brother draws/gets the small picture) in which the final word matched one of a set of four pictures, one of which was a cohort competitor (pickle). Both groups demonstrated use of the informative verb to more quickly access the target word and suppress cohort competition. Although the age groups showed similar ability to use semantic context to facilitate processing, the older children demonstrated faster lexical access and more robust cohort suppression in both informative and uninformative contexts. This suggests that development of word-level processing facilitates access of top-down linguistic cues that support more efficient spoken language processing. Whereas developmental differences in the use of semantic context to facilitate lexical access were not explained by vocabulary knowledge, differences in the ability to suppress cohort competition were explained by vocabulary. This suggests a potential role for vocabulary knowledge in the resolution of lexical competition and perhaps the influence of lexical competition dynamics on vocabulary development.


Subjects
Speech Perception , Male , Child , Adolescent , Humans , Child, Preschool , Language , Semantics , Vocabulary , Linguistics
6.
J Exp Child Psychol ; 226: 105567, 2023 02.
Article in English | MEDLINE | ID: mdl-36244079

ABSTRACT

This research examined whether the auditory short-term memory (STM) capacity for speech sounds differs from that for nonlinguistic sounds in 11-month-old infants. Infants were presented with streams composed of repeating sequences of either 2 or 4 syllables, akin to prior work by Ross-Sheehy and Newman (2015) using nonlinguistic musical instruments. These syllable sequences either stayed the same for every repetition (constant) or changed by one syllable on each repetition (varying). Using the head-turn preference procedure, we measured infant listening time to each type of stream (constant vs varying, and 2 vs 4 syllables). Longer listening to the varying stream was taken as evidence for STM because detecting the change required remembering all syllables in the sequence. We found that infants listened longer to the varying streams for 2-syllable sequences but not for 4-syllable sequences. This capacity limitation is comparable to that found previously for nonlinguistic instrument tones, suggesting that young infants have similar STM limitations for speech and nonspeech stimuli.


Subjects
Memory, Short-Term , Speech Perception , Infant , Humans , Phonetics , Auditory Perception , Speech
7.
Anim Cogn ; 26(2): 451-463, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36064831

ABSTRACT

Studies have shown that both cotton-top tamarins and rats can discriminate between two languages based on rhythmic cues. This is similar to the capabilities of young infants, who also rely on rhythmic cues to differentiate between languages. However, the animals in these studies did not have long-term language exposure, so the studies could not assess the role of language experience. In this study, we used companion dogs, who have prolonged exposure to human language in their home environment. The dogs came from homes where either English or Spanish was primarily spoken and were presented with speech in both languages in a Headturn Preference Procedure to examine their language discrimination abilities as well as their language preferences. Dogs successfully discriminated between the two languages. In addition, dogs showed a novelty effect in their language preference: Spanish-hearing dogs listened longer to English, and English-hearing dogs listened longer to Spanish. It remains unclear which particular cues dogs use to discriminate between the two languages; future studies should explore dogs' use of phonological and rhythmic cues for language discrimination.


Subjects
Language , Speech , Animals , Humans , Dogs , Rats , Cues (Psychology) , Cognition , Linguistics
8.
J Acoust Soc Am ; 151(5): 2898, 2022 05.
Article in English | MEDLINE | ID: mdl-35649892

ABSTRACT

Cochlear-implant (CI) users have previously demonstrated perceptual restoration, or successful repair of noise-interrupted speech, using the interrupted sentences paradigm [Bhargava, Gaudrain, and Baskent (2014). "Top-down restoration of speech in cochlear-implant users," Hear. Res. 309, 113-123]. The perceptual restoration effect was defined experimentally as higher speech understanding scores with noise-burst interrupted sentences than with silent-gap interrupted sentences. For the perceptual restoration illusion to occur, the interrupting noise bursts typically must be more intense than the adjacent speech signal so that they are perceived as a plausible masker. Thus, signal-processing factors like noise reduction algorithms and automatic gain control could have a negative impact on speech repair in this population. Surprisingly, no evidence that participants with cochlear implants experienced the perceptual restoration illusion was observed across the two planned experiments. A separate experiment, which aimed to closely replicate previous work on perceptual restoration in CI users, also found no consistent evidence of perceptual restoration, in contrast to the original study's reported findings. Typical speech repair of interrupted sentences was not observed in the present work's sample of CI users, and signal-processing factors did not appear to affect speech repair.
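
The interrupted-sentences manipulation contrasted here can be sketched as follows; the interruption rate, duty cycle, and noise level are illustrative assumptions rather than the study's parameters.

    import numpy as np

    def interrupt(speech, fs, rate_hz=1.5, filler="noise", snr_db=-5.0):
        """Replace alternating segments of speech with silence or loud noise bursts."""
        seg = int(fs / (2 * rate_hz))                # samples per on/off segment
        out = speech.astype(float).copy()
        speech_rms = np.sqrt(np.mean(out ** 2))
        rng = np.random.default_rng(1)
        for start in range(seg, len(out), 2 * seg):  # every other segment
            stop = min(start + seg, len(out))
            if filler == "silence":
                out[start:stop] = 0.0                # silent-gap condition
            else:                                    # noise-burst condition
                burst_rms = speech_rms * 10 ** (-snr_db / 20)  # louder than speech
                out[start:stop] = rng.standard_normal(stop - start) * burst_rms
        return out

With snr_db negative, the bursts exceed the speech level, satisfying the plausible-masker requirement noted above; scoring keyword accuracy in the noise-burst versus silent-gap versions then quantifies restoration.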


Subjects
Cochlear Implantation , Cochlear Implants , Illusions , Speech Perception , Acoustic Stimulation , Humans , Speech Intelligibility
9.
Front Neurol ; 13: 809939, 2022.
Article in English | MEDLINE | ID: mdl-35237230

ABSTRACT

Concussions are common among flat-track roller derby players, a unique and under-studied sport, yet little research has assessed their prevalence or what players can do to manage injury risk. The purpose of this study is to provide an epidemiological investigation of concussion incidence and experience in a large international sample of roller derby players. Six hundred sixty-five roller derby players from 25 countries responded to a comprehensive online survey about injury and sport participation. Participants also responded to a battery of psychometric assessment tools targeting risk factors for poor injury recovery (negative bias, social support, mental toughness) and players' thoughts and feelings in response to injury. Per 1,000 athletes, 790.98 concussions were reported. Current players reported an average of 2.2 concussions, while former players reported 3.1; however, the groups were comparable once these figures were corrected for differences in years of play (approximately one concussion every 2 years). Other frequent injuries included fractures in extremities and upper limbs, torn knee ligaments, and sprained ankles. We found no evidence that players' position, full-contact scrimmages, or flooring affected the number of concussions. However, neurological history and uncorrected vision were more influential predictors of an individual's number of concussions during roller derby than years of participation or age, though all four contributed significantly. These findings should help athletes make informed decisions about participation in roller derby, though more work is needed to understand the nature of the risk.
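
As a consistency check, assuming the per-1,000 figure is simply the total reported concussions scaled by the number of respondents, the raw count can be recovered:

    n_players = 665
    rate_per_1000 = 790.98
    total_concussions = rate_per_1000 * n_players / 1000
    print(round(total_concussions))   # 526 concussions across the sample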

10.
J Acoust Soc Am ; 150(3): 2256, 2021 09.
Article in English | MEDLINE | ID: mdl-34598599

ABSTRACT

Previous work has found that preschoolers with greater phonological awareness and larger lexicons, who speak more throughout the day, exhibit less intra-syllabic coarticulation in controlled speech production tasks. These findings suggest that both linguistic experience and speech-motor control are important predictors of spoken phonetic development. Still, it remains unclear how preschoolers' speech practice when they talk drives the development of coarticulation, because children who talk more are likely to have both increased fine motor control and increased auditory feedback experience. Here, the potential effect of auditory feedback is studied by examining a population that naturally differs in auditory experience: children with cochlear implants (CIs). The results show that (1) developmentally appropriate coarticulation improves with increased hearing age but not chronological age; (2) children with CIs pattern coarticulatorily closer to their younger, hearing age-matched peers than to their chronological age-matched peers; and (3) the effects of speech practice on coarticulation, measured using naturalistic, at-home recordings of the children's speech production, appear in children with CIs only after several years of hearing experience. Together, these results indicate a strong role for auditory feedback experience in coarticulation and suggest that parent-child communicative exchanges could stimulate children's own vocal output, which drives speech development.


Subjects
Cochlear Implantation , Cochlear Implants , Deafness , Speech Perception , Deafness/surgery , Feedback , Hearing , Humans , Phonetics
11.
Front Psychol ; 12: 712647, 2021.
Article in English | MEDLINE | ID: mdl-34630222

ABSTRACT

Speech-language input from adult caregivers is a strong predictor of children's developmental outcomes. But the properties of this child-directed speech are not static over the first months or years of a child's life. This study assesses a large cohort of children and caregivers (n = 84) at 7, 10, 18, and 24 months to document (1) how a battery of phonetic, phonological, and lexical characteristics of child-directed speech changes in the first 2 years of life and (2) how input at these different stages predicts toddlers' phonological processing and vocabulary size at 2 years. Results show that most measures of child-directed speech do change as children age, and certain characteristics, like hyperarticulation, actually peak at 24 months. For language outcomes, children's phonological processing benefited from exposure to longer (in phonemes) words, more diverse word types, and enhanced coarticulation in their input. It is proposed that longer words in the input may stimulate children's phonological working memory development, while heightened coarticulation simultaneously introduces important sublexical cues and exposes them to challenging, naturalistic speech, leading to overall stronger phonological processing outcomes.

12.
J Acoust Soc Am ; 150(4): 2936, 2021 10.
Article in English | MEDLINE | ID: mdl-34717484

ABSTRACT

Cochlear-implant (CI) listeners experience signal degradation, which leads to poorer speech perception than normal-hearing (NH) listeners. In the present study, difficulty with word segmentation, the process of perceptually parsing the speech stream into separate words, is considered as a possible contributor to this decrease in performance. CI listeners were compared to a group of NH listeners (presented with unprocessed speech and eight-channel noise-vocoded speech) in their ability to segment phrases with word segmentation ambiguities (e.g., "an iceman" vs "a nice man"). The results showed that CI listeners and NH listeners were worse at segmenting words when hearing processed speech than NH listeners were when presented with unprocessed speech. When viewed at a broad level, all of the groups used cues to word segmentation in similar ways. Detailed analyses, however, indicated that the two processed speech groups weighted top-down knowledge cues to word boundaries more and weighted acoustic cues to word boundaries less relative to NH listeners presented with unprocessed speech.


Subjects
Cochlear Implantation , Cochlear Implants , Speech Perception , Acoustic Stimulation , Cues (Psychology) , Hearing , Humans , Male , Speech
13.
J Speech Lang Hear Res ; 64(5): 1636-1649, 2021 05 11.
Article in English | MEDLINE | ID: mdl-33887149

ABSTRACT

Purpose: Children with cochlear implants (CIs) are more likely to struggle with spoken language than their age-matched peers with normal hearing (NH), and new language processing literature suggests that these challenges may be linked to delays in spoken word recognition. The purpose of this study was to investigate whether children with CIs use language knowledge via semantic prediction to facilitate recognition of upcoming words and help compensate for uncertainties in the acoustic signal. Method: Five- to 10-year-old children with CIs heard sentences with an informative verb (draws) or a neutral verb (gets) preceding a target word (picture). The target referent was presented on a screen, along with a phonologically similar competitor (pickle). Children's eye gaze was recorded to quantify efficiency of access of the target word and suppression of phonological competition. Performance was compared to both an age-matched group and a vocabulary-matched group of children with NH. Results: Children with CIs, like their peers with NH, demonstrated use of informative verbs to look more quickly to the target word and less to the phonological competitor. However, children with CIs demonstrated less efficient use of semantic cues relative to their peers with NH, even when matched for vocabulary ability. Conclusions: Children with CIs use semantic prediction to facilitate spoken word recognition but do so to a lesser extent than children with NH. Children with CIs experience challenges in predictive spoken language processing above and beyond limitations from delayed vocabulary development. Children with CIs with better vocabulary ability demonstrate more efficient use of lexical-semantic cues. Clinical interventions focused on building knowledge of words and their associations may support efficiency of spoken language processing for children with CIs. Supplemental Material: https://doi.org/10.23641/asha.14417627


Subjects
Cochlear Implantation , Cochlear Implants , Deafness , Speech Perception , Child , Child, Preschool , Deafness/surgery , Humans , Semantics , Vocabulary
14.
J Acoust Soc Am ; 149(3): 1488, 2021 03.
Article in English | MEDLINE | ID: mdl-33765790

ABSTRACT

Cochlear-implant (CI) users experience less success understanding speech in noisy, real-world listening environments than normal-hearing (NH) listeners do. Perceptual restoration is one method NH listeners use to repair noise-interrupted speech. Whereas previous work has reported that CI users can use perceptual restoration in certain cases, they failed to do so under listening conditions in which NH listeners can successfully restore. Providing increased opportunities to use top-down linguistic knowledge is one possible way to increase perceptual restoration use in CI users. This work tested perceptual restoration abilities in 18 CI users and varied whether a semantic cue (presented visually) was available prior to the target sentence (presented auditorily). Results showed that whereas access to a semantic cue generally improved performance with interrupted speech, CI users failed to perceptually restore speech regardless of semantic cue availability. The lack of restoration in this population directly contradicts previous work in this field and raises the question of whether restoration is possible in CI users at all. One reason for CI users' difficulty understanding speech in noise could be that they are unable to use tools like restoration to process noise-interrupted speech effectively.


Subjects
Cochlear Implantation , Cochlear Implants , Speech Perception , Cues (Psychology) , Semantics , Speech Intelligibility
15.
J Neurodev Disord ; 13(1): 4, 2021 01 05.
Article in English | MEDLINE | ID: mdl-33402099

ABSTRACT

BACKGROUND: Adults and adolescents with autism spectrum disorders show greater difficulties comprehending speech in the presence of noise. Moreover, while neurotypical adults use visual cues on the mouth to help them understand speech in background noise, differences in attention to human faces in autism may affect use of these visual cues. No work has yet examined these skills in toddlers with ASD, despite the fact that they are frequently faced with noisy, multitalker environments. METHODS: Children aged 2-5 years, both with and without autism spectrum disorder (ASD), saw pairs of images in a preferential looking study and were instructed to look at one of the two objects. Sentences were presented in the presence of quiet or another background talker (noise). On half of the trials, the face of the target person speaking was presented, while half had no face present. Growth-curve modeling was used to examine the time course of children's looking to the appropriate vs. opposite image. RESULTS: Noise impaired performance for both children with ASD and their age- and language-matched peers. When there was no face present on the screen, the effect of noise was generally similar across groups with and without ASD. But when the face was present, the noise had a more detrimental effect on children with ASD than their language-matched peers, suggesting neurotypical children were better able to use visual cues on the speaker's face to aid performance. Moreover, those children with ASD who attended more to the speaker's face showed better listening performance in the presence of noise. CONCLUSIONS: Young children both with and without ASD show poorer performance comprehending speech in the presence of another talker than in quiet. However, results suggest that neurotypical children may be better able to make use of face cues to partially counteract the effects of noise. Children with ASD varied in their use of face cues, but those children who spent more time attending to the face of the target speaker appeared less disadvantaged by the presence of background noise, indicating a potential path for future interventions.
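
Growth-curve modeling of looking data, as named in the METHODS above, is typically a mixed-effects regression of fixation proportion on polynomial time terms. The sketch below uses simulated data and statsmodels; the model form, variable names, and group coding are assumptions, not the authors' specification.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(2)
    rows = []
    for subj in range(40):
        group = subj % 2                        # 0 = ASD, 1 = typical (hypothetical)
        for t in np.linspace(0.0, 1.0, 10):     # normalized time in analysis window
            p = 0.5 + (0.15 + 0.10 * group) * t + rng.normal(0, 0.05)
            rows.append(dict(subject=subj, group=group, time=t,
                             prop=float(np.clip(p, 0, 1))))
    df = pd.DataFrame(rows)
    df["time2"] = df["time"] ** 2               # quadratic growth term

    # Random intercept per child; fixed effects test whether the looking curve
    # rises more steeply for one group than the other.
    model = smf.mixedlm("prop ~ (time + time2) * group", df, groups=df["subject"])
    print(model.fit().summary())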


Subjects
Autism Spectrum Disorder , Autistic Disorder , Auditory Perception , Child, Preschool , Female , Humans , Lip , Male , Speech
16.
Int J Biling ; 25(5): 1446-1459, 2021 Oct.
Article in English | MEDLINE | ID: mdl-36160086

ABSTRACT

Aims and objectives: The purpose of this study was to examine whether differences in language exposure (i.e., being raised in a bilingual versus a monolingual environment) influence young children's ability to comprehend words when speech is heard in the presence of background noise. Methodology: Forty-four children (22 monolinguals and 22 bilinguals) between the ages of 29 and 31 months completed a preferential looking task in which they saw picture-pairs of familiar objects (e.g., balloon and apple) on a screen and simultaneously heard sentences instructing them to locate one of the objects (e.g., Look at the apple!). Speech was heard in quiet and in the presence of competing white noise. Data and analyses: Children's eye-movements were coded off-line to identify the proportion of time they fixated on the correct object on the screen, and performance across groups was compared using a 2 × 3 mixed analysis of variance. Findings: Bilingual toddlers performed worse than monolinguals during the task. This group difference in performance was particularly clear when the listening condition contained background noise. Originality: There are clear differences in how infants and adults process speech in noise. To date, developmental work on this topic has mainly been carried out with monolingual infants. This study is one of the first to examine how background noise might influence word identification in young bilingual children who are just starting to acquire their languages. Significance: High noise levels are often reported in daycares and classrooms where bilingual children are present. Therefore, this work has important implications for learning and education practices with young bilinguals.
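
The 2 × 3 mixed ANOVA reported above can be reproduced on simulated data with pingouin; group sizes follow the abstract, but the three within-subject listening conditions used here are hypothetical, since the abstract names only quiet and noise.

    import numpy as np
    import pandas as pd
    import pingouin as pg

    rng = np.random.default_rng(3)
    rows = []
    for subj in range(44):                       # 22 monolingual + 22 bilingual
        grp = "monolingual" if subj < 22 else "bilingual"
        for cond in ["quiet", "low_noise", "high_noise"]:   # hypothetical 3 levels
            base = 0.75 if grp == "monolingual" else 0.70
            drop = {"quiet": 0.0, "low_noise": 0.05, "high_noise": 0.10}[cond]
            rows.append(dict(subject=subj, group=grp, condition=cond,
                             prop=base - drop + rng.normal(0, 0.05)))
    df = pd.DataFrame(rows)

    # Between factor: language group; within factor: listening condition.
    print(pg.mixed_anova(data=df, dv="prop", within="condition",
                         subject="subject", between="group"))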

17.
Anim Cogn ; 24(3): 419-431, 2021 May.
Article in English | MEDLINE | ID: mdl-33052544

ABSTRACT

Consonants and vowels play different roles in speech perception: listeners rely more heavily on consonant information than on vowel information when distinguishing between words. This reliance on consonants for word identification is the consonant bias (Nespor et al., Ling 2:203-230, 2003). Several factors modulate infants' development of the consonant bias, including fine-grained temporal processing ability and native language exposure [for review, see Nazzi et al. (Curr Direct Psychol Sci 25:291-296, 2016)]. A rat model demonstrated that mature fine-grained temporal processing alone cannot account for consonant bias emergence; linguistic exposure is also necessary (Bouchon and Toro, An Cog 22:839-850, 2019). This study tested domestic dogs, who have similarly fine-grained temporal processing but more language exposure than rats, to assess whether a minimal lexicon and a small degree of regular linguistic exposure allow for consonant bias development. Dogs demonstrated a vowel bias rather than a consonant bias, preferring their own name over a vowel-mispronounced version of their name, but not over a consonant-mispronounced version. This is the pattern seen in young infants (Bouchon et al., Dev Sci 18:587-598, 2015) and rats (Bouchon and Toro, An Cog 22:839-850, 2019). In a follow-up study, dogs treated a consonant-mispronounced version of their name similarly to their actual name, further suggesting that dogs do not treat consonant differences as meaningful for word identity. These results support the findings of Bouchon and Toro (An Cog 22:839-850, 2019), suggesting that there may be a default preference for vowel information over consonant information when identifying word forms, and that the consonant bias may be a human-exclusive tool for language learning.


Subjects
Phonetics , Speech Perception , Animals , Dogs , Follow-Up Studies , Humans , Language , Language Development , Rats
18.
Q J Exp Psychol (Hove) ; 74(2): 312-325, 2021 Feb.
Article in English | MEDLINE | ID: mdl-32988312

ABSTRACT

Viewers' perception of actions is coloured by the context in which those actions are found. An action that seems uncomfortably sudden in one context might seem expeditious in another. In this study, we examined the influence of one type of context: the rate at which an action is performed. Based on parallel findings in other modalities, we anticipated that viewers would adapt to the rate at which actions were displayed. Viewers watched a series of actions performed on a touchscreen that could end in actions that were ambiguous in their number (e.g., two separate "tap" actions versus a single "double tap" action) or identity (e.g., a "swipe" action versus a slower "drag"). In Experiment 1, the rate of the actions themselves was manipulated; participants used the rate of the actions to distinguish between two similar, related actions. In Experiment 2, the rate of the actions that preceded the ambiguous one was sped up or slowed down. In line with our hypotheses, viewers perceived the identity of those final actions with reference to the rate of the preceding actions. This was true even in Experiment 3, when the action immediately before the ambiguous one was left unmodified. Ambiguous actions embedded in a fast context were seen as relatively long, while ambiguous actions embedded in a slow context were seen as relatively short. This shows that viewers adapt to the rate of actions when perceiving visual events.


Subjects
Perception , Time Perception , Visual Perception , Humans
19.
J Acoust Soc Am ; 147(4): 2432, 2020 04.
Article in English | MEDLINE | ID: mdl-32359241

ABSTRACT

The ability to recognize spectrally degraded speech is a critical skill for successfully using a cochlear implant (CI). Previous research has shown that toddlers with normal hearing can successfully recognize noise-vocoded words as long as the signal contains at least eight spectral channels [Newman and Chatterjee (2013). J. Acoust. Soc. Am. 133(1), 483-494; Newman, Chatterjee, Morini, and Remez (2015). J. Acoust. Soc. Am. 138(3), EL311-EL317], although they have difficulty with signals that contain only four channels of information. Young children with CIs not only need to match a degraded speech signal to a stored representation (word recognition), but they also need to create new representations (word learning), a task that is likely to be more cognitively demanding. Normal-hearing toddlers aged 34 months were tested on their ability to initially learn (fast-map) new words from noise-vocoded stimuli. While children successfully fast-mapped new words from 16-channel noise-vocoded stimuli, they failed to do so from 8-channel noise-vocoded speech. The level of degradation imposed by 8-channel vocoding appears sufficient to disrupt fast-mapping in young children. Recent results indicate that only CI patients with high spectral resolution can benefit from more than eight active electrodes. This suggests that for many children with CIs, reduced spectral resolution may limit their acquisition of novel words.


Subjects
Cochlear Implants , Speech Perception , Acoustic Stimulation , Child, Preschool , Humans , Noise/adverse effects , Speech
20.
J Child Lang ; 47(6): 1263-1275, 2020 11.
Article in English | MEDLINE | ID: mdl-32157973

ABSTRACT

Aims: Although infant-directed speech (IDS) is typically described as slower than adult-directed speech (ADS), the potential impact of slower speech on language development has not been examined. We explored whether IDS speech rates in 42 mother-infant dyads at four time periods predicted children's language outcomes at two years. Method: We correlated IDS speech rate with child language outcomes at two years and contrasted outcomes in dyads displaying high and low rate profiles. Outcomes: Slower IDS rate at 7 months significantly correlated with vocabulary knowledge at two years. Slowed IDS may benefit child language learning even before children first speak.


Subjects
Language Development , Speech , Adult , Child Language , Female , Humans , Infant , Language , Learning , Male , Mothers , Speech Perception , Vocabulary