1.
Anim Cogn ; 27(1): 34, 2024 Apr 16.
Article in English | MEDLINE | ID: mdl-38625429

ABSTRACT

Humans have an impressive ability to comprehend signal-degraded speech; however, the extent to which comprehension of degraded speech relies on human-specific features of speech perception vs. more general cognitive processes is unknown. Since dogs live alongside humans and regularly hear speech, they can be used as a model to differentiate between these possibilities. One often-studied type of degraded speech is noise-vocoded speech (sometimes thought of as cochlear-implant-simulation speech). Noise-vocoded speech is made by dividing the speech signal into frequency bands (channels), identifying the amplitude envelope of each individual band, and then using these envelopes to modulate bands of noise centered over the same frequency regions - the result is a signal with preserved temporal cues, but vastly reduced frequency information. Here, we tested dogs' recognition of familiar words produced in 16-channel vocoded speech. In the first study, dogs heard their names and unfamiliar dogs' names (foils) in vocoded speech as well as natural speech. In the second study, dogs heard 16-channel vocoded speech only. Dogs listened longer to their vocoded name than vocoded foils in both experiments, showing that they can comprehend a 16-channel vocoded version of their name without prior exposure to vocoded speech, and without immediate exposure to the natural-speech version of their name. Dogs' name recognition in the second study was mediated by the number of phonemes in the dogs' name, suggesting that phonological context plays a role in degraded speech comprehension.
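To make the vocoding steps above concrete, here is a minimal Python sketch of the three operations described (band-splitting, envelope extraction, envelope-modulated noise). It is an illustration, not the study's stimulus-generation code: the channel count, filter settings, and frequency range are assumed values.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(speech, fs, n_channels=16, lo=100.0, hi=7000.0):
    """Noise-vocode a 1-D waveform; fs must exceed 2 * hi."""
    edges = np.geomspace(lo, hi, n_channels + 1)  # log-spaced band edges
    out = np.zeros_like(speech)
    for low, high in zip(edges[:-1], edges[1:]):
        sos = butter(4, [low, high], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, speech)            # 1. isolate the channel
        env = np.abs(hilbert(band))                # 2. amplitude envelope
        carrier = sosfiltfilt(sos, np.random.randn(len(speech)))
        out += env * carrier                       # 3. modulate same-band noise
    # Match the overall level of the input.
    return out * np.sqrt(np.mean(speech**2) / np.mean(out**2))
```

The temporal envelopes survive this processing while fine spectral detail is replaced by noise, which is why the channel count controls how much frequency information remains.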


Subject(s)
Speech Perception, Speech, Humans, Animals, Dogs, Cues (Psychology), Hearing, Linguistics
2.
J Child Lang ; : 1-22, 2024 Feb 16.
Article in English | MEDLINE | ID: mdl-38362892

ABSTRACT

Children who receive cochlear implants develop spoken language on a protracted timescale. The home environment facilitates speech-language development, yet relatively little is known about how that environment differs between children with cochlear implants and children with typical hearing. We matched eighteen preschoolers with implants (31-65 months) to two groups of children with typical hearing: by chronological age and by hearing age. Each child completed a long-form, naturalistic audio recording of their home environment (approx. 16 hours/child; >730 hours of observation) to measure adult speech input, child vocal productivity, and caregiver-child interaction. Results showed that children with cochlear implants and children with typical hearing were exposed to and engaged in similar amounts of spoken language with caregivers. However, the home environment did not reflect developmental stages as closely for children with implants, nor did it predict their speech outcomes as strongly. Home-based speech-language interventions should focus on the unique input-outcome relationships for this group of children with hearing loss.

3.
Anim Cogn ; 26(2): 451-463, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36064831

ABSTRACT

Studies have shown that both cotton-top tamarins and rats can discriminate between two languages based on rhythmic cues. This is similar to the capabilities of young infants, who also rely on rhythmic cues to differentiate between languages. However, the animals in these studies did not have long-term language exposure, so the studies did not specifically assess the role of language experience. In this study, we used companion dogs, who have prolonged exposure to human language in their home environment. The dogs came from homes where either English or Spanish was primarily spoken. They were then presented with speech in English and in Spanish in a Headturn Preference Procedure to examine their language discrimination abilities as well as their language preferences. Dogs successfully discriminated between the two languages. In addition, dogs showed a novelty effect in their language preference, such that Spanish-hearing dogs listened longer to English, and English-hearing dogs listened longer to Spanish. It remains unclear which particular cues dogs use to discriminate between the two languages; future studies should explore dogs' use of phonological and rhythmic cues for language discrimination.


Subject(s)
Language, Speech, Animals, Humans, Dogs, Rats, Cues (Psychology), Cognition, Linguistics
4.
J Exp Child Psychol ; 226: 105567, 2023 02.
Article in English | MEDLINE | ID: mdl-36244079

ABSTRACT

This research examined whether the auditory short-term memory (STM) capacity for speech sounds differs from that for nonlinguistic sounds in 11-month-old infants. Infants were presented with streams composed of repeating sequences of either 2 or 4 syllables, akin to prior work by Ross-Sheehy and Newman (2015) using nonlinguistic musical instruments. These syllable sequences either stayed the same for every repetition (constant) or changed by one syllable each time it repeated (varying). Using the head-turn preference procedure, we measured infant listening time to each type of stream (constant vs varying and 2 vs 4 syllables). Longer listening to the varying stream was taken as evidence for STM because this required remembering all syllables in the sequence. We found that infants listened longer to the varying streams for 2-syllable sequences but not for 4-syllable sequences. This capacity limitation is comparable to that found previously for nonlinguistic instrument tones, suggesting that young infants have similar STM limitations for speech and nonspeech stimuli.
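The constant/varying contrast can be illustrated with a short sketch; the syllable inventory and repetition count below are invented, and the actual stimuli were recorded audio streams.

```python
import random

SYLLABLES = ["ba", "di", "gu", "ko", "ne", "po", "ta", "zu"]  # hypothetical

def make_stream(set_size, varying, n_repetitions=10):
    """Return a list of syllable sequences, one per repetition."""
    seq = random.sample(SYLLABLES, set_size)
    stream = []
    for _ in range(n_repetitions):
        stream.append(list(seq))
        if varying:
            # Change exactly one syllable on each repetition.
            i = random.randrange(set_size)
            seq[i] = random.choice([s for s in SYLLABLES if s not in seq])
    return stream

print(make_stream(2, varying=False))  # constant: same pair every time
print(make_stream(4, varying=True))   # varying: one syllable swapped per repeat
```

Longer listening to the varying stream indicates that the infant noticed the change, which requires holding the whole sequence in memory.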


Subject(s)
Short-Term Memory, Speech Perception, Infant, Humans, Phonetics, Auditory Perception, Speech
5.
J Exp Child Psychol ; 227: 105581, 2023 03.
Article in English | MEDLINE | ID: mdl-36423439

ABSTRACT

Although there is ample evidence documenting the development of spoken word recognition from infancy to adolescence, it is still unclear how development of word-level processing interacts with higher-level sentence processing, such as the use of lexical-semantic cues, to facilitate word recognition. We investigated how the ability to use an informative verb (e.g., draws) to predict an upcoming word (picture) and suppress competition from similar-sounding words (pickle) develops throughout the school-age years. Eye movements of children from two age groups (5-6 years and 9-10 years) were recorded while the children heard a sentence with an informative or neutral verb (The brother draws/gets the small picture) in which the final word matched one of a set of four pictures, one of which was a cohort competitor (pickle). Both groups demonstrated use of the informative verb to more quickly access the target word and suppress cohort competition. Although the age groups showed similar ability to use semantic context to facilitate processing, the older children demonstrated faster lexical access and more robust cohort suppression in both informative and uninformative contexts. This suggests that development of word-level processing facilitates access of top-down linguistic cues that support more efficient spoken language processing. Whereas developmental differences in the use of semantic context to facilitate lexical access were not explained by vocabulary knowledge, differences in the ability to suppress cohort competition were explained by vocabulary. This suggests a potential role for vocabulary knowledge in the resolution of lexical competition and perhaps the influence of lexical competition dynamics on vocabulary development.


Subject(s)
Speech Perception, Male, Child, Adolescent, Humans, Preschool Child, Language, Semantics, Vocabulary, Linguistics
6.
J Acoust Soc Am ; 153(3): 1486, 2023 03.
Article in English | MEDLINE | ID: mdl-37002071

ABSTRACT

Because speaking rates are highly variable, listeners must use cues like phoneme or sentence duration to normalize speech across different contexts. Scaling speech perception in this way allows listeners to distinguish between temporal contrasts, like voiced and voiceless stops, even at different speech speeds. It has long been assumed that this speaking rate normalization can occur over small units such as phonemes. However, phonemes lack clear boundaries in running speech, so it is not clear that listeners can rely on them for normalization. To evaluate this, we isolate two potential processing levels for speaking rate normalization-syllabic and sub-syllabic-by manipulating phoneme duration in order to cue speaking rate, while also holding syllable duration constant. In doing so, we show that changing the duration of phonemes both with unique spectro-temporal signatures (/kɑ/) and more overlapping spectro-temporal signatures (/wɪ/) results in a speaking rate normalization effect. These results suggest that when acoustic boundaries within syllables are less clear, listeners can normalize for rate differences on the basis of sub-syllabic units.
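A minimal sketch of this kind of duration manipulation, assuming a waveform with a known phoneme boundary: one phoneme is stretched and the other compressed so that total syllable duration stays constant. The boundary index and stretch factor are hypothetical, and real stimuli would use a pitch-preserving method (e.g., PSOLA) rather than plain resampling.

```python
import numpy as np

def resample_segment(x, new_len):
    """Linearly interpolate a 1-D segment onto `new_len` samples."""
    return np.interp(np.linspace(0, 1, new_len), np.linspace(0, 1, len(x)), x)

def rescale_phonemes(syllable, boundary, factor):
    """Stretch the first phoneme by `factor`; shrink the second so the
    syllable's total duration (the syllable-level rate cue) is unchanged."""
    p1, p2 = syllable[:boundary], syllable[boundary:]
    new_len1 = int(round(len(p1) * factor))
    new_len2 = len(syllable) - new_len1  # total sample count preserved
    return np.concatenate([resample_segment(p1, new_len1),
                           resample_segment(p2, new_len2)])
```

Because syllable duration never changes, any normalization effect the listener shows must come from the sub-syllabic (phoneme-level) duration cue.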


Asunto(s)
Fonética , Percepción del Habla , Acústica del Lenguaje , Habla , Lenguaje
7.
J Acoust Soc Am ; 151(5): 2898, 2022 05.
Article in English | MEDLINE | ID: mdl-35649892

ABSTRACT

Cochlear-implant (CI) users have previously demonstrated perceptual restoration, or successful repair of noise-interrupted speech, using the interrupted sentences paradigm [Bhargava, Gaudrain, and Baskent (2014), "Top-down restoration of speech in cochlear-implant users," Hear. Res. 309, 113-123]. The perceptual restoration effect was defined experimentally as higher speech understanding scores for noise-burst-interrupted sentences than for silent-gap-interrupted sentences. For the perceptual restoration illusion to occur, the masking or interrupting noise bursts often must be more intense than the adjacent speech signal in order to be perceived as a plausible masker. Thus, signal-processing factors like noise-reduction algorithms and automatic gain control could have a negative impact on speech repair in this population. Surprisingly, no evidence that participants with cochlear implants experienced the perceptual restoration illusion was observed across the two planned experiments. A separate experiment, which aimed to closely replicate previous work on perceptual restoration in CI users, also found no consistent evidence of perceptual restoration, in contrast with the original study's findings. Typical speech repair of interrupted sentences was not observed in the present sample of CI users, and signal-processing factors did not appear to affect speech repair.
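For illustration, the two interruption conditions can be sketched as below; the interruption rate, duty cycle, and noise level are assumed values, not the study's parameters.

```python
import numpy as np

def interrupt(speech, fs, rate_hz=1.5, duty=0.5, filler="noise", snr_db=-5.0):
    """Periodically replace segments of `speech` with silence or noise."""
    out = speech.copy()
    period = int(fs / rate_hz)
    gap = int(period * duty)
    speech_rms = np.sqrt(np.mean(speech**2))
    for start in range(0, len(out), period):
        stop = min(start + gap, len(out))
        if filler == "silence":
            out[start:stop] = 0.0
        else:
            # Negative SNR: bursts louder than the speech, the condition
            # under which restoration is typically observed.
            noise_rms = speech_rms * 10 ** (-snr_db / 20.0)
            out[start:stop] = np.random.randn(stop - start) * noise_rms
    return out
```

Comparing intelligibility scores for filler="noise" against filler="silence" gives the perceptual restoration effect described above.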


Subject(s)
Cochlear Implantation, Cochlear Implants, Illusions, Speech Perception, Acoustic Stimulation, Humans, Speech Intelligibility
8.
Anim Cogn ; 24(3): 419-431, 2021 May.
Article in English | MEDLINE | ID: mdl-33052544

ABSTRACT

Consonants and vowels play different roles in speech perception: listeners rely more heavily on consonant information than on vowel information when distinguishing between words. This reliance on consonants for word identification is the consonant bias (Nespor et al., Ling 2:203-230, 2003). Several factors modulate infants' development of the consonant bias, including fine-grained temporal processing ability and native language exposure (for review, see Nazzi et al., Curr Direct Psychol Sci 25:291-296, 2016). A rat model demonstrated that mature fine-grained temporal processing alone cannot account for consonant bias emergence; linguistic exposure is also necessary (Bouchon and Toro, An Cog 22:839-850, 2019). This study tested domestic dogs, who have similarly fine-grained temporal processing but more language exposure than rats, to assess whether a minimal lexicon and a small degree of regular linguistic exposure allow for consonant bias development. Dogs demonstrated a vowel bias rather than a consonant bias, preferring their own name over a vowel-mispronounced version of their name, but not over a consonant-mispronounced version. This is the pattern seen in young infants (Bouchon et al., Dev Sci 18:587-598, 2015) and rats (Bouchon and Toro, An Cog 22:839-850, 2019). In a follow-up study, dogs treated a consonant-mispronounced version of their name similarly to their actual name, further suggesting that dogs do not treat consonant differences as meaningful for word identity. These results support the findings from Bouchon and Toro (An Cog 22:839-850, 2019), suggesting that there may be a default preference for vowel information over consonant information when identifying word forms, and that the consonant bias may be a human-exclusive tool for language learning.


Subject(s)
Phonetics, Speech Perception, Animals, Dogs, Follow-Up Studies, Humans, Language, Language Development, Rats
9.
J Acoust Soc Am ; 150(4): 2936, 2021 10.
Article in English | MEDLINE | ID: mdl-34717484

ABSTRACT

Cochlear-implant (CI) listeners experience signal degradation, which leads to poorer speech perception than normal-hearing (NH) listeners. In the present study, difficulty with word segmentation, the process of perceptually parsing the speech stream into separate words, is considered as a possible contributor to this decrease in performance. CI listeners were compared to a group of NH listeners (presented with unprocessed speech and eight-channel noise-vocoded speech) in their ability to segment phrases with word segmentation ambiguities (e.g., "an iceman" vs "a nice man"). The results showed that CI listeners and NH listeners were worse at segmenting words when hearing processed speech than NH listeners were when presented with unprocessed speech. When viewed at a broad level, all of the groups used cues to word segmentation in similar ways. Detailed analyses, however, indicated that the two processed speech groups weighted top-down knowledge cues to word boundaries more and weighted acoustic cues to word boundaries less relative to NH listeners presented with unprocessed speech.


Subject(s)
Cochlear Implantation, Cochlear Implants, Speech Perception, Acoustic Stimulation, Cues (Psychology), Hearing, Humans, Male, Speech
10.
J Acoust Soc Am ; 149(3): 1488, 2021 03.
Article in English | MEDLINE | ID: mdl-33765790

ABSTRACT

Cochlear-implant (CI) users experience less success in understanding speech in noisy, real-world listening environments than normal-hearing (NH) listeners. Perceptual restoration is one method NH listeners use to repair noise-interrupted speech. Whereas previous work has reported that CI users can use perceptual restoration in certain cases, they failed to do so under listening conditions in which NH listeners can successfully restore. Providing increased opportunities to use top-down linguistic knowledge is one possible method to increase perceptual restoration use in CI users. This work tested perceptual restoration abilities in 18 CI users and varied whether a semantic cue (presented visually) was available prior to the target sentence (presented auditorily). Results showed that whereas access to a semantic cue generally improved performance with interrupted speech, CI users failed to perceptually restore speech regardless of the semantic cue availability. The lack of restoration in this population directly contradicts previous work in this field and raises questions of whether restoration is possible in CI users. One reason for speech-in-noise understanding difficulty in CI users could be that they are unable to use tools like restoration to process noise-interrupted speech effectively.


Subject(s)
Cochlear Implantation, Cochlear Implants, Speech Perception, Cues (Psychology), Semantics, Speech Intelligibility
11.
J Acoust Soc Am ; 150(3): 2256, 2021 09.
Article in English | MEDLINE | ID: mdl-34598599

ABSTRACT

Previous work has found that preschoolers with greater phonological awareness and larger lexicons, who speak more throughout the day, exhibit less intra-syllabic coarticulation in controlled speech production tasks. These findings suggest that both linguistic experience and speech-motor control are important predictors of spoken phonetic development. Still, it remains unclear how preschoolers' speech practice drives the development of coarticulation, because children who talk more are likely to have both increased fine motor control and increased auditory feedback experience. Here, the potential effect of auditory feedback is studied by examining a population that naturally differs in auditory experience: children with cochlear implants (CIs). The results show that (1) developmentally appropriate coarticulation improves with increased hearing age but not chronological age; (2) in their coarticulation, children with CIs pattern more closely with their younger, hearing-age-matched peers than with their chronological-age-matched peers; and (3) the effects of speech practice on coarticulation, measured using naturalistic, at-home recordings of the children's speech production, appear in children with CIs only after several years of hearing experience. Together, these results indicate a strong role of auditory feedback experience in coarticulation and suggest that parent-child communicative exchanges could stimulate children's own vocal output, which drives speech development.
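One standard coarticulation index, shown here purely for illustration, is the locus-equation slope: F2 at vowel onset regressed on F2 at vowel midpoint across vowel contexts, with slopes nearer 1 indicating stronger consonant-vowel coarticulation. The formant values below are invented, and this is not necessarily the measure used in the study above.

```python
import numpy as np

# Hypothetical F2 values (Hz) for one consonant across five vowel contexts.
f2_midpoint = np.array([2200, 1800, 1400, 1100,  900])
f2_onset    = np.array([1900, 1750, 1550, 1450, 1350])

slope, intercept = np.polyfit(f2_midpoint, f2_onset, deg=1)
print(f"locus-equation slope = {slope:.2f}")  # nearer 1 = more coarticulation
```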


Subject(s)
Cochlear Implantation, Cochlear Implants, Deafness, Speech Perception, Deafness/surgery, Feedback, Hearing, Humans, Phonetics
12.
Brain Inj ; 34(4): 567-574, 2020 03 20.
Article in English | MEDLINE | ID: mdl-32050797

ABSTRACT

Primary Objective: To inform the development of a screening tool for language in children with concussion. The authors predicted that children with a recent concussion would perform cognitive-linguistic tasks more poorly, but that some tasks might be more sensitive to concussion than others. Methods & Procedures: 22 elementary-school-aged children within 30 days of a concussion and age-matched peers with no history of concussion were assessed on a battery of novel language and cognitive-linguistic tasks. They also completed an auditory attention task and Raven's Colored Progressive Matrices. Main Outcomes & Results: Children with a recent concussion scored significantly more poorly on novel tasks targeting category identification, grammaticality judgments, and recognition of target words presented in a short story than their age-matched peers with no such injury history. All observed effects were of moderate size. Inclusion of these three tasks significantly improved prediction of concussion status over symptom score when controlling for participant age. Conclusions: These findings support continued investigation of targeted linguistic tasks in children following concussion, particularly in the domains of semantic and syntactic access and verbal working memory. Future work developing brief language assessments specifically targeting children in this age range may provide a valuable addition to the existing tools for identifying the effects of concussion.


Subject(s)
Brain Concussion, Language, Attention, Brain Concussion/complications, Brain Concussion/diagnosis, Child, Humans, Short-Term Memory
13.
J Acoust Soc Am ; 147(4): 2432, 2020 04.
Article in English | MEDLINE | ID: mdl-32359241

ABSTRACT

The ability to recognize speech that is degraded spectrally is a critical skill for successfully using a cochlear implant (CI). Previous research has shown that toddlers with normal hearing can successfully recognize noise-vocoded words as long as the signal contains at least eight spectral channels [Newman and Chatterjee (2013), J. Acoust. Soc. Am. 133(1), 483-494; Newman, Chatterjee, Morini, and Remez (2015), J. Acoust. Soc. Am. 138(3), EL311-EL317], although they have difficulty with signals that only contain four channels of information. Young children with CIs not only need to match a degraded speech signal to a stored representation (word recognition), but they also need to create new representations (word learning), a task that is likely to be more cognitively demanding. Normal-hearing toddlers aged 34 months were tested on their ability to initially learn (fast-map) new words in noise-vocoded stimuli. While children were successful at fast-mapping new words from 16-channel noise-vocoded stimuli, they failed to do so from 8-channel noise-vocoded speech. The level of degradation imposed by 8-channel vocoding appears sufficient to disrupt fast-mapping in young children. Recent results indicate that only CI patients with high spectral resolution can benefit from more than eight active electrodes. This suggests that for many children with CIs, reduced spectral resolution may limit their acquisition of novel words.


Subject(s)
Cochlear Implants, Speech Perception, Acoustic Stimulation, Preschool Child, Humans, Noise/adverse effects, Speech
14.
Folia Phoniatr Logop ; 72(6): 442-453, 2020.
Article in English | MEDLINE | ID: mdl-31639816

ABSTRACT

PURPOSE: Several studies have explored relationships between children's early phonological development and later language performance. This literature has included a more recent focus on the potential for early phonological profiles to predict later language outcomes. METHODS: The present study longitudinally examined the nature of phonetic inventories and syllable structure patterns of 48 typically developing children at 7, 11, and 18 months, and related them to expressive language outcomes at 2 years of age. RESULTS: Findings provide evidence that as early as 11 months, phonetic inventory and mean syllable structure level are related to 24-month expressive language outcomes, including mean length of utterance and vocabulary diversity in spontaneous language samples, and parent-reported vocabulary scores. Consonant inventories in particular differed at 11 and 18 months for 2-year-olds with lower versus higher language skills. CONCLUSION: Limited inventories and syllable repertoires may add to risk profiles for later language delays.


Subject(s)
Language Development Disorders, Language Development, Phonetics, Aptitude, Humans, Infant, Language, Vocabulary
15.
Anim Cogn ; 22(3): 423-432, 2019 May.
Article in English | MEDLINE | ID: mdl-30848384

ABSTRACT

Like humans, canine companions often find themselves in noisy environments and are expected to respond to human speech despite potential distractors. Such environments pose particular problems for young children, who have limited linguistic knowledge. Here, we examined whether dogs show similar difficulties. We found that dogs prefer their name to a stress-matched foil in quiet conditions, despite hearing it spoken by a novel talker. They continued to prefer their name in the presence of multitalker human speech babble at signal-to-noise levels as low as 0 dB, when their name was the same intensity as the background babble. This surpasses the performance of 1-year-old infants, who fail to prefer their name to a foil at 0 dB (Newman, Dev Psychol 41(2):352-362, 2005). Overall, we found better name recognition in dogs trained to do tasks for humans, such as service dogs, search-and-rescue dogs, and explosives-detection dogs. These dogs were of several different breeds, and their tasks differed widely from one another, suggesting that their superior performance may be due to generally more training and better attention. In summary, these results demonstrate that dogs can recognize their name even at relatively difficult levels of multitalker babble, and that dogs who work with humans are especially adept at name recognition in comparison with companion dogs. Future studies will explore the effect of different types of background noise on word recognition in dogs.
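For reference, a 0 dB signal-to-noise level means the name and the babble have equal RMS intensity. A minimal sketch of the mixing step (waveform loading and the babble source are omitted; the array names are hypothetical):

```python
import numpy as np

def mix_at_snr(target, babble, snr_db):
    """Scale `babble` so `target` sits at `snr_db` relative to it, then mix."""
    babble = babble[:len(target)]                 # trim to target length
    t_rms = np.sqrt(np.mean(target**2))
    b_rms = np.sqrt(np.mean(babble**2))
    scaled = babble * (t_rms / b_rms) / (10 ** (snr_db / 20.0))
    return target + scaled

# At snr_db = 0 the scale factor is simply t_rms / b_rms: equal intensity.
```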


Asunto(s)
Atención , Ruido , Percepción del Habla , Animales , Perros/psicología , Lingüística
16.
J Child Lang ; 46(6): 1238-1248, 2019 11.
Article in English | MEDLINE | ID: mdl-31405393

ABSTRACT

Hearing words in sentences facilitates word recognition in monolingual children. Many children grow up receiving input in multiple languages - including exposure to sentences that 'mix' the languages. We explored Spanish-English bilingual toddlers' (n = 24) ability to identify familiar words in three conditions: (i) single word (ball!); (ii) same-language sentence (Where's the ball?); or (iii) mixed-language sentence (Dónde está la ball?). Children successfully identified words across conditions; however, the advantage linked to hearing words in sentences was present only in the same-language condition. This work thus suggests that language mixing plays an important role in bilingual children's ability to recognize spoken words.


Subject(s)
Language Development, Multilingualism, Recognition (Psychology), Preschool Child, Female, Humans, Infant, Language, Male, Speech Perception
17.
Brain Inj ; 32(4): 506-514, 2018.
Article in English | MEDLINE | ID: mdl-29388844

ABSTRACT

PRIMARY OBJECTIVE: The purpose of this investigation was to examine children's accuracy and speed when asked to rapidly name images following a concussion. The authors predicted that children with a recent concussion would not differ in accuracy from peers but would be slower. RESEARCH DESIGN: Children with and without a recent concussion were compared on their accuracy and speed of naming objects, and speed was correlated with time since injury. METHODS AND PROCEDURES: Fifty-eight participants, aged 10-22 years (32 within one month of a concussion and 26 age-matched participants with no history of concussion), rapidly viewed and verbally named 107 illustrations of common objects, and sensitive measures of response time were recorded. MAIN OUTCOMES AND RESULTS: Groups did not differ in accuracy rate, but children with a recent injury responded significantly more slowly. A trajectory of recovery was calculated, providing qualified evidence for a longer timeline of recovery than the typical two-week period. CONCLUSIONS: These findings affirm the presence of this naming latency effect in children, explore the duration of this effect over the course of recovery, and add nuance to inconsistently reported chronic naming deficits following concussion, informing recommendations for return to full academic and recreational participation.


Subject(s)
Brain Concussion/physiopathology, Imagination/physiology, Language Disorders/etiology, Mental Recall/physiology, Names, Adolescent, Brain Concussion/diagnosis, Case-Control Studies, Child, Female, Humans, Language Disorders/diagnosis, Male, Neuropsychological Tests, Photic Stimulation, Reaction Time/physiology, Young Adult
18.
J Acoust Soc Am ; 143(1): 84, 2018 01.
Article in English | MEDLINE | ID: mdl-29390768

ABSTRACT

Adult cochlear-implant (CI) users show small or non-existent perceptual restoration effects when listening to interrupted speech. Perceptual restoration is believed to be a top-down mechanism that enhances speech perception in adverse listening conditions, and appears to be particularly utilized by older normal-hearing participants. Whether older normal-hearing participants can derive any restoration benefits from degraded speech (as would be presented through a CI speech processor) is the focus of this study. Two groups of normal-hearing participants (younger: age ≤30 yrs; older: age ≥60 yrs) were tested for perceptual restoration effects in the context of interrupted sentences. Speech signal degradations were controlled by manipulating parameters of a noise vocoder and were used to analyze effects of spectral resolution and noise burst spectral content on perceptual restoration. Older normal-hearing participants generally showed larger and more consistent perceptual restoration benefits for vocoded speech than did younger normal-hearing participants, even in the lowest spectral resolution conditions. Reduced restoration in CI users thus may be caused by factors like noise reduction strategies or small dynamic ranges rather than an interaction of aging effects and low spectral resolution.


Subject(s)
Aging/psychology, Noise/adverse effects, Perceptual Masking, Speech Acoustics, Speech Intelligibility, Speech Perception, Acoustic Stimulation, Adult, Age Factors, Aged, Speech Audiometry, Auditory Threshold, Cues (Psychology), Female, Humans, Male, Middle Aged, Young Adult
19.
J Acoust Soc Am ; 141(2): EL164, 2017 02.
Article in English | MEDLINE | ID: mdl-28253666

ABSTRACT

When faced with multiple people speaking simultaneously, adult listeners use the sex of the talkers as a cue for separating competing streams of speech. As a result, adult listeners show better performance when a target and a background voice differ from one another in sex. Recent research suggests that infants under 1 year do not show this advantage. So when do infants begin to use talker-gender cues for stream segregation? These studies find that 16-month-olds do not show an advantage when the masker and target differ in sex. However, by 30 months, toddlers show the more adult-like pattern of performance.


Asunto(s)
Conducta del Lactante , Reconocimiento en Psicología , Percepción del Habla , Calidad de la Voz , Estimulación Acústica , Factores de Edad , Audiometría del Habla , Preescolar , Señales (Psicología) , Femenino , Humanos , Lactante , Masculino , Ruido/efectos adversos , Enmascaramiento Perceptual , Factores Sexuales
20.
J Child Lang ; 44(5): 1140-1162, 2017 Sep.
Article in English | MEDLINE | ID: mdl-27978860

ABSTRACT

There have been many studies examining the differences between infant-directed speech (IDS) and adult-directed speech (ADS). However, investigations asking whether mothers clarify vowel articulation in IDS have reached equivocal findings. Moreover, it is unclear whether maternal speech clarification has any effect on a child's developing language skills. This study examined vowel clarification in mothers' IDS at 0;10-11, 1;6, and 2;0, as compared to their vowel production in ADS. Relationships between vowel space, vowel duration, and vowel variability and child language outcomes at two years were also explored. Results show that vowel space and vowel duration tended to be greater in IDS than in ADS, and that one measure of vowel clarity, a mother's vowel space at 1;6, was significantly related to receptive as well as expressive child language outcomes at two years of age.
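One common way to operationalize vowel space, sketched below with invented formant values, is the area covered by a speaker's vowels in F1-F2 space; the study's own clarity measures may differ in detail.

```python
import numpy as np
from scipy.spatial import ConvexHull

# Hypothetical mean (F1, F2) values in Hz for the corner vowels /i a u/.
formants = np.array([
    [310, 2790],   # /i/
    [850, 1610],   # /a/
    [370,  950],   # /u/
])
hull = ConvexHull(formants)
print(f"vowel space area: {hull.volume:.0f} Hz^2")  # for 2-D data, .volume is the area
```

A larger area implies more peripheral, and typically clearer, vowel articulations.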


Subject(s)
Child Language, Mother-Child Relations, Phonetics, Speech Intelligibility, Speech Perception, Verbal Learning, Vocabulary, Preschool Child, Communication, Female, Humans, Infant, Longitudinal Studies, Male, Social Environment, Sound Spectrography, Speech Acoustics