Results 1 - 20 of 44
1.
Dev Sci ; 27(1): e13419, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37291692

ABSTRACT

Infants experience language in rich multisensory environments. For example, they may first be exposed to the word applesauce while touching, tasting, smelling, and seeing applesauce. In three experiments using different methods, we asked whether the number of distinct senses linked with the semantic features of objects would impact word recognition and learning. Specifically, in Experiment 1 we asked whether words linked with more multisensory experiences were learned earlier than words linked with fewer multisensory experiences. In Experiment 2, we asked whether 2-year-olds' known words linked with more multisensory experiences were better recognized than those linked with fewer. Finally, in Experiment 3, we taught 2-year-olds labels for novel objects that were linked with either just visual or visual and tactile experiences and asked whether this impacted their ability to learn the new label-to-object mappings. Results converge to support an account in which richer multisensory experiences better support word learning. We discuss two pathways through which rich multisensory experiences might support word learning.


Subject(s)
Language Development; Speech Perception; Infant; Humans; Child, Preschool; Touch; Verbal Learning; Language
2.
Am J Intellect Dev Disabil ; 128(6): 425-448, 2023 11 01.
Article in English | MEDLINE | ID: mdl-37875276

ABSTRACT

Automated methods for processing of daylong audio recordings are efficient and may be an effective way of assessing developmental stage for typically developing children; however, their utility for children with developmental disabilities may be limited by constraints of algorithms and the scope of variables produced. Here, we present a novel utterance-level processing (ULP) system that 1) extracts utterances from daylong recordings, 2) verifies automated speaker tags using human annotation, and 3) provides vocal maturity metrics unavailable through automated systems. Study 1 examines the reliability and validity of this system in low-risk controls (LRC); Study 2 extends the ULP to children with Angelman syndrome (AS). Results showed that ULP annotations demonstrated high coder agreement across groups. Further, ULP metrics aligned with language assessments for LRC but not AS, perhaps reflecting limitations of language assessments in AS. We argue that ULP increases accuracy, efficiency, and accessibility of detailed vocal analysis for syndromic populations.


Subject(s)
Angelman Syndrome; Speech; Humans; Child; Reproducibility of Results
3.
Infancy ; 28(3): 597-618, 2023 05.
Article in English | MEDLINE | ID: mdl-36757022

ABSTRACT

Caregivers' touches that occur alongside words and utterances could aid in the detection of word/utterance boundaries and the mapping of word forms to word meanings. We examined changes in caregivers' use of touches with their speech directed to infants using a multimodal cross-sectional corpus of 35 Korean mother-child dyads across three age groups of infants (8, 14, and 27 months). We tested the hypothesis that caregivers' frequency and use of touches with speech change with infants' development. Results revealed that the frequency of word/utterance-touch alignment as well as word + touch co-occurrence is highest in speech addressed to the youngest group of infants. Thus, this study provides support for the hypothesis that caregivers' use of touch during dyadic interactions is sensitive to infants' age in a way similar to caregivers' use of speech alone and could provide cues useful to infants' language learning at critical points in early development.


Subject(s)
Mothers; Touch; Female; Humans; Infant; Cross-Sectional Studies; Language; Republic of Korea
4.
J Speech Lang Hear Res ; 66(1): 84-97, 2023 01 12.
Article in English | MEDLINE | ID: mdl-36603544

ABSTRACT

PURPOSE: Recent work suggests that speech perception is influenced by the somatosensory system and that oral sensorimotor disruption has specific effects on the perception of speech both in infants who have not yet begun to talk and in older children and adults with ample speech production experience; however, we do not know how such disruptions affect children with speech sound disorder (SSD). Response to disruption of would-be articulators during speech perception could reveal how sensorimotor linkages work for both typical and atypical speech and language development. Such linkages are crucial to advancing our understanding of how both typically developing and atypically developing children produce and perceive speech. METHOD: Using a looking-while-listening task, we explored the impact of a sensorimotor restrictor on the recognition of words whose onsets involve late-developing sounds (s, ʃ) for both children with typical development (TD) and their peers with SSD. RESULTS: Children with SSD showed a decrement in performance when they held a restrictor in their mouths during the task, but this was not the case for children with TD. This effect on performance was observed only for the specific speech sounds blocked by the would-be articulators. CONCLUSION: We argue that these findings provide evidence for altered perceptual motor pathways in children with SSD. SUPPLEMENTAL MATERIAL: https://doi.org/10.23641/asha.21809442.


Subject(s)
Apraxias; Language Development Disorders; Speech Perception; Speech Sound Disorder; Stuttering; Infant; Humans; Child; Phonetics; Language Development; Auditory Perception; Speech
5.
Brain Sci ; 11(5), 2021 May 13.
Article in English | MEDLINE | ID: mdl-34068187

ABSTRACT

The alerting network, a subcomponent of attention, enables humans to respond to novel information. Children with ASD have shown equivalent alerting in response to visual and/or auditory stimuli compared to typically developing (TD) children. However, it is unclear whether children with ASD and TD show equivalent alerting to tactile stimuli. We examined (1) whether tactile cues affect accuracy and reaction times in children with ASD and TD, (2) whether the duration between touch-cues and auditory targets impacts performance, and (3) whether behavioral responses in the tactile cueing task are associated with ASD symptomatology. Six- to 12-year-olds with ASD and TD participated in a tactile-cueing task and were instructed to respond with a button press to a target sound /a/. Tactile cues were presented at 200, 400, and 800 ms (25% each) prior to the auditory target. The remaining trials (25%) were presented without tactile cues. Findings suggested that both groups showed equivalent alerting responses to tactile cues. Additionally, all children were faster to respond to auditory targets at longer cue-target intervals. Finally, there was an association between rate of facilitation and restricted and repetitive behavior (RRB) scores in all children, suggesting that patterns of responding to transient phasic cues may be related to ASD symptomatology.

6.
J Speech Lang Hear Res ; 64(7): 2401-2416, 2021 07 16.
Article in English | MEDLINE | ID: mdl-34098723

ABSTRACT

Purpose Recording young children's vocalizations through wearables is a promising method to assess language development. However, accurately and rapidly annotating these files remains challenging. Online crowdsourcing with the collaboration of citizen scientists could be a feasible solution. In this article, we assess the extent to which citizen scientists' annotations align with those gathered in the lab for recordings collected from young children. Method Segments identified by Language ENvironment Analysis as produced by the key child were extracted from one daylong recording for each of 20 participants: 10 low-risk control children and 10 children diagnosed with Angelman syndrome, a neurogenetic syndrome characterized by severe language impairments. Speech samples were annotated by trained annotators in the laboratory as well as by citizen scientists on Zooniverse. All annotators assigned one of five labels to each sample: Canonical, Noncanonical, Crying, Laughing, and Junk. This allowed the derivation of two child-level vocalization metrics: the Linguistic Proportion and the Canonical Proportion. Results At the segment level, Zooniverse classifications had moderate precision and recall. More importantly, the Linguistic Proportion and the Canonical Proportion derived from Zooniverse annotations were highly correlated with those derived from laboratory annotations. Conclusions Annotations obtained through a citizen science platform can help us overcome challenges posed by the process of annotating daylong speech recordings. Particularly when used in composites or derived metrics, such annotations can be used to investigate early markers of language delays.
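For readers who want a concrete sense of the two derived metrics named above, the short sketch below shows one plausible way to compute them from per-sample labels. The formulas are an assumption based on common usage in the infant-vocalization literature (e.g., canonical babbling ratios); the abstract itself does not spell them out, and all function and variable names here are hypothetical.

from collections import Counter

def vocalization_metrics(labels):
    # labels: per-sample annotations for one child, each one of
    # "Canonical", "Noncanonical", "Crying", "Laughing", or "Junk".
    counts = Counter(labels)
    speech_like = counts["Canonical"] + counts["Noncanonical"]
    # Assumed denominator: all usable (non-Junk) vocal samples.
    vocal = speech_like + counts["Crying"] + counts["Laughing"]
    return {
        # Assumed: share of vocal samples that are speech-like.
        "linguistic_proportion": speech_like / vocal if vocal else None,
        # Assumed: share of speech-like samples containing canonical syllables.
        "canonical_proportion": counts["Canonical"] / speech_like if speech_like else None,
    }

# Example: five annotated samples from one child's recording.
print(vocalization_metrics(["Canonical", "Noncanonical", "Crying", "Junk", "Canonical"]))
# {'linguistic_proportion': 0.75, 'canonical_proportion': 0.666...}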


Subject(s)
Citizen Science; Language Development Disorders; Big Data; Child, Preschool; Humans; Language Development; Language Development Disorders/diagnosis; Speech
7.
Dev Sci ; 24(5): e13090, 2021 09.
Article in English | MEDLINE | ID: mdl-33497512

ABSTRACT

This study evaluates whether early vocalizations develop in similar ways in children across diverse cultural contexts. We analyze data from daylong audio recordings of 49 children (1-36 months) from five different language/cultural backgrounds. Citizen scientists annotated these recordings to determine if child vocalizations contained canonical transitions or not (e.g., "ba" vs. "ee"). Results revealed that the proportion of clips reported to contain canonical transitions increased with age. Furthermore, this proportion exceeded 0.15 by around 7 months, replicating and extending previous findings on canonical vocalization development but using data from the natural environments of a culturally and linguistically diverse sample. This work explores how crowdsourcing can be used to annotate corpora, helping establish developmental milestones relevant to multiple languages and cultures. Lower inter-annotator reliability on the crowdsourcing platform, relative to more traditional in-lab expert annotators, means that a larger number of unique annotators and/or annotations are required, and that crowdsourcing may not be a suitable method for more fine-grained annotation decisions. Audio clips used for this project are compiled into a large-scale infant vocalization corpus that is available for other researchers to use in future work.


Subject(s)
Language Development; Language; Child; Humans; Infant; Reproducibility of Results
8.
Infant Behav Dev ; 62: 101524, 2021 02.
Article in English | MEDLINE | ID: mdl-33373908

ABSTRACT

Research has identified bivariate correlations between speech perception and cognitive measures gathered during infancy as well as correlations between these individual measures and later language outcomes. However, these correlations have not all been explored together in prospective longitudinal studies. The goal of the current research was to compare how early speech perception and cognitive skills predict later language outcomes using a within-participant design. To achieve this goal, we tested 97 5- to 7-month-olds on two speech perception tasks (stress pattern preference, native vowel discrimination) and two cognitive tasks (visual recognition memory, A-not-B) and later assessed their vocabulary outcomes at 18 and 24 months. Frequentist statistical analyses showed that only native vowel discrimination significantly predicted vocabulary. However, Bayesian analyses suggested that evidence was ambiguous between null and alternative hypotheses for all infant predictors. These results highlight the importance of recognizing and addressing challenges related to infant data collection, interpretation, and replication, which remain a roadblock to understanding the contribution of domain-specific and domain-general skills to language acquisition. Future methodological development and research along similar lines are encouraged to assess individual differences in infant speech perception and cognitive skills and their predictive value for language development.


Subject(s)
Speech Perception; Vocabulary; Bayes Theorem; Cognition; Humans; Infant; Language Development; Prospective Studies; Speech
9.
Front Hum Neurosci ; 15: 729270, 2021.
Article in English | MEDLINE | ID: mdl-35002650

ABSTRACT

Behavioral differences in responding to tactile and auditory stimuli are widely reported in individuals with autism spectrum disorder (ASD). However, the neural mechanisms underlying distinct tactile and auditory reactivity patterns in ASD remain unclear, with theories implicating differences in both perceptual and attentional processes. The current study sought to investigate (1) the neural indices of early perceptual and later attentional factors underlying tactile and auditory processing in children with and without ASD, and (2) the relationship between neural indices of tactile and auditory processing and ASD symptomatology. Participants included 14 6- to 12-year-olds with ASD and 14 age- and nonverbal-IQ-matched typically developing (TD) children. Children participated in an event-related potential (ERP) oddball paradigm during which they watched a silent video while being presented with tactile and auditory stimuli (i.e., 80% standard speech sound /a/; 10% oddball speech sound /i/; 10% novel vibrotactile stimuli on the fingertip with standard speech sound /a/). Children's early and later ERP responses to tactile (P1 and N2) and auditory stimuli (P1, P3a, and P3b) were examined. Non-parametric analyses showed that children with ASD displayed differences in early perceptual processing of auditory (i.e., lower amplitudes at central region of interest), but not tactile, stimuli. Analysis of later attentional components did not show differences in response to tactile and auditory stimuli in the ASD and TD groups. Together, these results suggest that differences in auditory responsivity patterns could be related to perceptual factors in children with ASD. However, despite differences in caregiver-reported sensory measures, children with ASD did not differ in their neural reactivity to infrequent touch-speech stimuli compared to TD children. Nevertheless, correlational analyses confirmed that inter-individual differences in neural responsivity to tactile and auditory stimuli were related to social skills in all children. Finally, we discuss how the paradigm and stimulus type used in the current study may have impacted our results. These findings have implications for everyday life, where individual differences in responding to tactile and auditory stimuli may impact social functioning.

10.
Brain Sci ; 10(12), 2020 Dec 06.
Article in English | MEDLINE | ID: mdl-33291300

ABSTRACT

Infants form object categories in the first months of life. By 3 months and throughout the first year, successful categorization varies as a function of the acoustic information presented in conjunction with category members. Here we ask whether tactile information, delivered in conjunction with category members, also promotes categorization. Six- to 9-month-olds participated in an object categorization task in either a touch-cue or no-cue condition. For infants in the touch-cue condition, familiarization images were accompanied by precisely timed light touches from their caregivers; infants in the no-cue condition saw the same images but received no touches. Only infants in the touch-cue condition formed categories. This provides the first evidence that touch may play a role in supporting infants' object categorization.

11.
J Child Lang ; 47(4): 893-907, 2020 07.
Article in English | MEDLINE | ID: mdl-31852556

ABSTRACT

We examined full-term and preterm infants' perception of frequent and infrequent phonotactic pairings involving sibilants and liquids. Infants were tested on their preference for syllables with onsets involving /s/ or /ʃ/ followed by /l/ or /r/ using the Headturn Preference Procedure. Full-term infants preferred the frequent to the infrequent phonotactic pairings at 9 months, but not at either younger or older ages. Evidence was inconclusive regarding a possible difference between full-term and preterm samples; however, the small preterm sample limited our power to detect differences. Preference for the frequent pairing was not related to later vocabulary development.


Subject(s)
Infant, Newborn/psychology; Infant, Premature/psychology; Phonetics; Speech Acoustics; Speech Perception; Female; Humans; Male; Vocabulary
12.
J Autism Dev Disord ; 50(3): 1064-1072, 2020 Mar.
Article in English | MEDLINE | ID: mdl-31754946

ABSTRACT

Multimodal communication may facilitate attention in infants. This study examined the presentation of caregiver touch-only and touch + speech input to 12-month-olds at high (HRA) and low risk for ASD. Findings indicated that, although both groups received a greater number of touch + speech bouts compared to touch-only bouts, the duration of overall touch that overlapped with speech was significantly greater in the HRA group. Additionally, HRA infants were less responsive to touch-only bouts compared to touch + speech bouts, suggesting that their mothers may use more touch + speech communication to elicit infant responses. Nonetheless, the exact role of touch in multimodal communication directed towards infants at high risk for ASD warrants further exploration.


Subject(s)
Autism Spectrum Disorder/prevention & control; Infant Behavior; Infant Care/methods; Speech; Touch; Autism Spectrum Disorder/epidemiology; Autism Spectrum Disorder/therapy; Female; Humans; Infant; Male; Mother-Child Relations
13.
Autism Res ; 12(11): 1663-1679, 2019 11.
Article in English | MEDLINE | ID: mdl-31407873

ABSTRACT

Fragile X syndrome (FXS) is a neurogenetic syndrome characterized by cognitive impairments and high rates of autism spectrum disorder (ASD). FXS is often highlighted as a model for exploring pathways of symptom expression in ASD due to the high prevalence of ASD symptoms in this population and the known single-gene cause of FXS. Early vocalization features, including volubility, complexity, duration, and pitch, have shown promise in detecting ASD in idiopathic ASD populations but have yet to be extensively studied in a population with a known genetic cause for ASD such as FXS. Investigating early trajectories of these features in FXS may inform our limited knowledge of potential mechanisms that predict later social communication outcomes. The present study addresses this need by presenting preliminary findings which (a) characterize early vocalization features in FXS relative to low-risk controls (LRC) and (b) test the specificity of associations between these features and language and ASD outcomes. We coded vocalization features during a standardized child-examiner interaction for 39 nine-month-olds (22 FXS, 17 LRC) whose clinical outcomes were assessed at 24 months. Our results provide preliminary evidence that within FXS, associations between vocalization features and 24-month language outcomes may diverge from those observed in LRC, and that vocalization features may be associated with later ASD symptoms. These findings provide a starting point for more research exploring these features as potential early markers of ASD in FXS, which in turn may lead to improved early identification methods, treatment approaches, and overall well-being of individuals with ASD. Autism Res 2019. © 2019 International Society for Autism Research, Wiley Periodicals, Inc. LAY SUMMARY: Although vocal features of 9-month-olds with FXS did not differ from those of low-risk controls, several features were associated with later language and ASD outcomes at 24 months in FXS. These preliminary results suggest acoustic data may be related to clinical outcomes in FXS and potentially other high-risk populations. Further characterizing these associations may facilitate understanding of biological mechanisms and risk factors associated with social communication development and ASD.


Subject(s)
Acoustics; Autism Spectrum Disorder/complications; Autism Spectrum Disorder/psychology; Child Language; Fragile X Syndrome/complications; Fragile X Syndrome/psychology; Child, Preschool; Female; Humans; Infant; Male; Risk Factors
14.
J Speech Lang Hear Res ; 62(7): 2372-2385, 2019 07 15.
Article in English | MEDLINE | ID: mdl-31251677

ABSTRACT

Purpose Caregivers may show greater use of nonauditory signals in interactions with children who are deaf or hard of hearing (DHH). This study explored the frequency of maternal touch and the temporal alignment of touch with speech in the input to children who are DHH and age-matched peers with normal hearing. Method We gathered audio and video recordings of mother-child free-play interactions. Maternal speech units were annotated from audio recordings, and touch events were annotated from video recordings. Analyses explored the frequency and duration of touch events and the temporal alignment of touch with speech. Results Greater variance was observed in the frequency of touch and its total duration in the input to children who are DHH. Furthermore, touches produced by mothers of children who are DHH were significantly more likely to be aligned with speech than touches produced by mothers of children with normal hearing. Conclusion Caregivers' modifications in the input to children who are DHH are observed in the combination of speech with touch. The implications for such patterns and how they may impact children's attention and access to the speech signal are discussed.


Subject(s)
Deafness/physiopathology; Hearing Loss/physiopathology; Language Development; Speech/physiology; Touch/physiology; Child, Preschool; Female; Humans; Infant; Male; Persons With Hearing Impairments/psychology; Time Factors
15.
J Autism Dev Disord ; 49(7): 2946-2955, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31016672

ABSTRACT

Atypical response to tactile input is associated with greater socio-communicative impairments in individuals with autism spectrum disorder (ASD). The current study examined overt orienting to caregiver-initiated touch in 12-month-olds at high risk for ASD (HRA) with (HRA+) and without (HRA-) a later diagnosis of ASD compared to low-risk comparison infants. Findings indicate that infants who go on to receive a diagnosis of ASD may more frequently fail to shift their attention in response to caregiver touch and, when they do shift, they may be more likely to orient away from the touch. Additionally, failure to respond to touch predicts Autism Diagnostic Observation Schedule (ADOS) severity scores at outcome, suggesting that atypical response to touch may be an early indicator of autism severity.


Subject(s)
Autism Spectrum Disorder/diagnosis; Touch; Attention; Caregivers; Female; Humans; Infant; Longitudinal Studies; Male; Prospective Studies
16.
Dev Cogn Neurosci ; 35: 66-74, 2019 02.
Article in English | MEDLINE | ID: mdl-29051028

ABSTRACT

Infants' experiences are defined by the presence of concurrent streams of perceptual information in social environments. Touch from caregivers is an especially pervasive feature of early development. Using three lab experiments and a corpus of naturalistic caregiver-infant interactions, we examined the relevance of touch in supporting infants' learning of structure in an altogether different modality: audition. In each experiment, infants listened to sequences of sine-wave tones following the same abstract pattern (e.g., ABA or ABB) while receiving time-locked touch sequences from an experimenter that provided either informative or uninformative cues to the pattern (e.g., knee-elbow-knee or knee-elbow-elbow). Results showed that intersensorily redundant touch supported infants' learning of tone patterns, but learning varied depending on the typicality of touch sequences in infants' lives. These findings suggest that infants track touch sequences from moment to moment and in aggregate from their caregivers, and use the intersensory redundancy provided by touch to discover patterns in their environment.


Subject(s)
Auditory Perception/physiology; Learning/physiology; Touch/physiology; Female; Humans; Infant; Male
17.
Dev Sci ; 22(1): e12724, 2019 01.
Article in English | MEDLINE | ID: mdl-30369005

ABSTRACT

A range of demographic variables influences how much speech young children hear. However, because studies have used vastly different sampling methods, quantitative comparison of interlocking demographic effects has been nearly impossible, across or within studies. We harnessed a unique collection of existing naturalistic, day-long recordings from 61 homes across four North American cities to examine language input as a function of age, gender, and maternal education. We analyzed adult speech heard by 3- to 20-month-olds who wore audio recorders for an entire day. We annotated speaker gender and speech register (child-directed or adult-directed) for 10,861 utterances from female and male adults in these recordings. Examining age, gender, and maternal education collectively in this ecologically valid dataset, we find several key results. First, the speaker gender imbalance in the input is striking: children heard 2-3× more speech from females than males. Second, children in higher-maternal education homes heard more child-directed speech than those in lower-maternal education homes. Finally, our analyses revealed a previously unreported effect: the proportion of child-directed speech in the input increases with age, due to a decrease in adult-directed speech with age. This large-scale analysis is an important step forward in collectively examining demographic variables that influence early development, made possible by pooled, comparable, day-long recordings of children's language environments. The audio recordings, annotations, and annotation software are readily available for reuse and reanalysis by other researchers.


Subject(s)
Language Development; Speech Perception; Adult; Child, Preschool; Demography; Educational Status; Female; Humans; Infant; Male; Sex Factors; Tape Recording; United States
18.
J Speech Lang Hear Res ; 61(6): 1369-1380, 2018 06 19.
Article in English | MEDLINE | ID: mdl-29801160

ABSTRACT

Purpose: One promising early marker for autism and other communicative and language disorders is early infant speech production. Here we used daylong recordings of high- and low-risk infant-mother dyads to examine whether acoustic-prosodic alignment as well as two automated measures of infant vocalization are related to developmental risk status indexed via familial risk and developmental progress at 36 months of age. Method: Automated analyses of the acoustics of daylong real-world interactions were used to examine whether pitch characteristics of one vocalization by the mother or the child predicted those of the vocalization response by the other speaker and whether other features of infants' speech in daylong recordings were associated with developmental risk status or outcomes. Results: Low-risk and high-risk dyads did not differ in the level of acoustic-prosodic alignment, which was not significant overall. Further analyses revealed that acoustic-prosodic alignment did not predict infants' later developmental progress, which was, however, associated with two automated measures of infant vocalizations (daily vocalizations and conversational turns). Conclusions: Although further research is needed, these findings suggest that automated measures of vocalizations drawn from daylong recordings are a possible early identification tool for later developmental progress/concerns. Supplemental Material: https://osf.io/cdn3v/.


Subject(s)
Child Development; Child Language; Mother-Child Relations; Speech Acoustics; Autism Spectrum Disorder/diagnosis; Child, Preschool; Female; Humans; Imitative Behavior; Infant; Male; Mothers; Risk Factors; Sound Spectrography
19.
J Acoust Soc Am ; 143(2): 858, 2018 02.
Article in English | MEDLINE | ID: mdl-29495738

ABSTRACT

This project explored whether disruption of articulation during listening impacts subsequent speech production in 4-yr-olds with and without speech sound disorder (SSD). During novel word learning, typically developing children showed effects of articulatory disruption as revealed by larger differences between two acoustic cues to a sound contrast, but children with SSD were unaffected by articulatory disruption. Findings suggest that, when typically developing 4-yr-olds experience an articulatory disruption during a listening task, the children's subsequent production is affected. Children with SSD show less influence of articulatory experience during perception, which could be the result of impaired or attenuated ties between perception and articulation.


Subject(s)
Child Behavior; Child Language; Speech Acoustics; Speech Perception; Speech Sound Disorder/psychology; Voice Quality; Age Factors; Case-Control Studies; Child; Child, Preschool; Cues; Female; Humans; Male; Speech Production Measurement; Speech Sound Disorder/diagnosis
20.
J Acoust Soc Am ; 141(4): 2569, 2017 04.
Article in English | MEDLINE | ID: mdl-28464621

ABSTRACT

Throughout their development, infants are exposed to varying speaking rates. Thus, it is important to determine whether they are able to adapt to speech at varying rates and recognize target words from continuous speech despite speaking rate differences. To address this question, a series of four experiments was conducted to test whether infants can recognize words in continuous speech when rate is variable. In addition, the underlying mechanisms that infants may use to cope with variations induced by different speaking rates were also examined. Specifically, using the Headturn Preference procedure [Jusczyk and Aslin (1995). Cognitive Psychol. 29, 1-23], infants were familiarized with normal-rate passages containing two trisyllabic target words (e.g., elephants and dinosaurs), and tested with familiar (elephants and dinosaurs) and unfamiliar (crocodiles and platypus) words embedded in normal-rate (experiment 1), fast-rate (experiments 2 and 3), or slow-rate passages (experiment 4). The results indicate that 14-month-olds, but not 11-month-olds, recognized target words in passages with a fast speaking rate. In addition, findings suggest that infants used context to normalize speech across different speaking rates.


Subject(s)
Infant Behavior; Speech Acoustics; Speech Perception; Voice Quality; Acoustic Stimulation; Adaptation, Physiological; Age Factors; Audiometry, Speech; Female; Humans; Infant; Male; Recognition, Psychology; Time Factors