3.
Dev Sci ; 26(5): e13359, 2023 09.
Article in English | MEDLINE | ID: mdl-36527322

ABSTRACT

The mechanisms by which infant-directed (ID) speech and song support language development in infancy are poorly understood, with most prior investigations focused on the auditory components of these signals. However, the visual components of ID communication are also of fundamental importance for language learning: over the first year of life, infants' visual attention to caregivers' faces during ID speech shifts from the eyes to the mouth, which provides synchronous visual cues that support speech and language development. Caregivers' facial displays during ID song are highly effective for sustaining infants' attention. Here we investigate whether ID song specifically enhances infants' attention to caregivers' mouths. A total of 299 typically developing infants watched clips of female actors engaging them with ID song and speech longitudinally at six time points from 3 to 12 months of age while eye-tracking data were collected. Infants' mouth-looking increased significantly over the first year of life, with a significantly greater increase during ID song than during ID speech. This difference emerged early (within the first 6 months of age) and was sustained over the first year. Follow-up analyses indicated that specific properties inherent to ID song (e.g., slower tempo, reduced rhythmic variability) contribute in part to infants' increased mouth-looking, with effects increasing with age. The exaggerated and expressive facial features that naturally accompany ID song may make it a particularly effective context for modulating infants' visual attention and supporting speech and language development, both in typically developing infants and in those with or at risk for communication challenges. A video abstract of this article can be viewed at https://youtu.be/SZ8xQW8h93A.

RESEARCH HIGHLIGHTS:
Infants' visual attention to adults' mouths during infant-directed speech has been found to support speech and language development.
Infant-directed (ID) song promotes mouth-looking by infants to a greater extent than does ID speech across the first year of life.
Features characteristic of ID song, such as slower tempo, increased rhythmicity, increased audiovisual synchrony, and increased positive affect, all increase infants' attention to the mouth.
The effects of song on infants' attention to the mouth are more prominent during the second half of the first year of life.
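The study's core dependent measure, the proportion of looking directed at the caregiver's mouth, can be computed from raw gaze samples and a rectangular area of interest (AOI) around the mouth. Below is a minimal sketch of that computation; the array layout, AOI rectangle, and function name are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np

def mouth_looking_proportion(gaze_xy, mouth_box, valid):
    """Proportion of valid gaze samples falling inside a mouth AOI.

    gaze_xy   : (n_samples, 2) gaze coordinates in screen pixels
    mouth_box : (x_min, y_min, x_max, y_max) rectangle around the mouth
    valid     : boolean mask marking samples with successful tracking
    """
    x_min, y_min, x_max, y_max = mouth_box
    in_mouth = (
        (gaze_xy[:, 0] >= x_min) & (gaze_xy[:, 0] <= x_max)
        & (gaze_xy[:, 1] >= y_min) & (gaze_xy[:, 1] <= y_max)
    )
    # Normalize by valid samples so tracking loss does not bias the score.
    return in_mouth[valid].mean() if valid.any() else np.nan
```

Computing such a proportion per infant and per time point (3-12 months), separately for song and speech clips, yields the kind of longitudinal mouth-looking trajectories the abstract describes.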


Subject(s)
Speech Perception; Humans; Infant; Female; Adult; Speech; Mouth; Language Development; Face
4.
Autism Res ; 15(11): 2099-2111, 2022 11.
Article in English | MEDLINE | ID: mdl-36056678

ABSTRACT

Timing is critical to successful social interactions. The temporal structure of dyadic vocal interactions emerges from the rhythm, timing, and frequency of each individual's vocalizations and reflects how the dyad dynamically organizes and adapts during an interaction. This study investigated the temporal structure of vocal interactions longitudinally in parent-child dyads of typically developing (TD) infants (n = 49; 9-18 months; 48% male) and toddlers with ASD (n = 23; 27.2 ± 5.0 months; 91.3% male) to identify how developing language and social skills shape the temporal dynamics of the interaction. Acoustic hierarchical temporal structure (HTS), a measure of the nested clustering of acoustic events across multiple timescales, was measured in free-play interactions using the Allan Factor. HTS reflects a signal's temporal complexity and variability, with greater HTS indicating reduced flexibility of the dyadic system. Child expressive language significantly predicted HTS (β = -0.2) longitudinally across TD infants, with greater dyadic HTS associated with lower child language skills. ASD dyads exhibited greater HTS (i.e., more rigid temporal structure) than nonverbal-matched (d = 0.41) and expressive-language-matched TD dyads (d = 0.28). Increased HTS in ASD dyads occurred at timescales greater than 1 s, suggesting greater structuring of pragmatic aspects of interaction. Results provide a new window into how language development and social reciprocity constrain and shape parent-child interaction dynamics, and showcase a novel automated approach to characterizing vocal interactions across multiple timescales during early childhood.
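The Allan Factor analysis named in the abstract has a standard definition for point processes: divide the recording into adjacent windows of duration T, count events per window, and compute AF(T) = E[(N_{i+1} - N_i)^2] / (2 E[N_i]) across a range of timescales. A minimal sketch follows; the chosen timescales and the placeholder event train are illustrative assumptions.

```python
import numpy as np

def allan_factor(event_times, timescales):
    """Allan Factor of a point process at several counting timescales.

    event_times : 1-D array of event onset times (seconds)
    timescales  : iterable of window durations T (seconds)

    AF(T) = E[(N_{i+1} - N_i)^2] / (2 * E[N_i]),
    where N_i is the event count in the i-th window of length T.
    """
    event_times = np.asarray(event_times)
    duration = event_times.max()
    af = []
    for T in timescales:
        n_windows = int(duration // T)
        if n_windows < 2:
            af.append(np.nan)
            continue
        edges = np.arange(n_windows + 1) * T
        counts, _ = np.histogram(event_times, bins=edges)
        diffs = np.diff(counts)
        af.append(np.mean(diffs**2) / (2 * np.mean(counts)))
    return np.array(af)

# Usage: the growth of AF(T) with T indexes hierarchical temporal
# structure (HTS); a near-flat AF indicates Poisson-like timing.
T = np.logspace(-1, 1.5, 20)                       # ~0.1 s to ~30 s
onsets = np.sort(np.random.uniform(0, 600, 800))   # placeholder event train
print(allan_factor(onsets, T))
```

In this framing, restricting attention to timescales above 1 s, as the abstract does, isolates structure at the level of turns and pragmatic exchange rather than individual syllables.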


Subject(s)
Autism Spectrum Disorder; Autistic Disorder; Infant; Child; Child, Preschool; Male; Humans; Female; Child Language; Autistic Disorder/complications; Autism Spectrum Disorder/diagnosis; Autism Spectrum Disorder/complications; Parent-Child Relations; Social Skills
5.
Autism Res ; 15(2): 305-316, 2022 02.
Article in English | MEDLINE | ID: mdl-34837352

ABSTRACT

Most existing studies of overt social behavior in individuals with autism spectrum disorder (ASD) have relied on informants' evaluations through questionnaires and behavioral coding techniques. As a novel alternative, this study quantified the complex movements produced during social interactions in order to test for differences in ASD movement dynamics and for their convergence, or lack thereof, during social interactions. Twenty children with ASD and twenty-three children with typical development (TD) were videotaped while engaged in a face-to-face conversation with an interviewer. An image-differencing technique was used to extract the movement time series. Spectral analyses were conducted to quantify the average power of movement and the fractal scaling of movement. The degree of complexity matching was calculated to capture the level of behavioral coordination between the interviewer and the children. Results demonstrated that average power was significantly higher (p < 0.01) and fractal scaling was steeper (p < 0.05) in children with ASD, suggesting excessive and less complex movement compared with their TD peers. Complexity matching occurred between children and interviewers, but there was no reliable difference in the strength of matching between the ASD and TD children. Descriptive trends in the interviewer's behavior suggest that her movements adapted to match ASD and TD movements equally well. These findings may inform the search for novel behavioral markers of ASD and the development of automatic ASD screening techniques for use during everyday social interactions.

LAY SUMMARY: Using an objective behavioral quantification technique, our study demonstrated that children with autism moved more during face-to-face conversation, and that they moved in a less complex way. Autism diagnosis currently relies heavily on clinicians' experience. These findings suggest the potential for automatic autism screening during everyday social interactions.
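Two of the abstract's measures are easy to sketch: a frame-differencing movement time series and the spectral measures derived from it (average power and fractal scaling, estimated as the slope of the power spectrum on log-log axes). The following is a minimal sketch under assumed inputs; the grayscale frame layout, Welch parameters, and fit over all positive frequencies are illustrative choices, not the authors' exact pipeline.

```python
import numpy as np
from scipy.signal import welch

def motion_series(frames):
    """Mean absolute pixel change between consecutive grayscale frames.

    frames : (n_frames, height, width) array of video frames
    """
    frames = frames.astype(float)
    return np.abs(np.diff(frames, axis=0)).mean(axis=(1, 2))

def spectral_measures(signal, fs):
    """Average power and log-log spectral slope of a movement series.

    A steeper (more negative) slope indicates more regular, less
    complex fluctuation; average power indexes overall movement.
    """
    freqs, psd = welch(signal, fs=fs, nperseg=min(1024, len(signal)))
    keep = freqs > 0  # drop the DC bin before the log-log fit
    slope, _ = np.polyfit(np.log10(freqs[keep]), np.log10(psd[keep]), 1)
    return psd.mean(), slope
```

Under this scheme, the reported ASD group differences correspond to a higher returned average power and a more negative slope.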


Subject(s)
Autism Spectrum Disorder; Autistic Disorder; Child; Child Development; Communication; Female; Humans; Social Behavior
6.
Cogn Sci ; 43(3): e12718, 2019 03.
Article in English | MEDLINE | ID: mdl-30900289

ABSTRACT

Communication is a multimodal phenomenon, yet the cognitive mechanisms supporting it remain understudied. We explored a natural dataset of academic lectures to determine how communication modalities are used and coordinated during the presentation of complex information. Using automated and semi-automated techniques, we extracted and analyzed, from videos of 30 speakers, measures capturing the dynamics of their body movement, their slide change rate, and various aspects of their speech (speech rate, articulation rate, fundamental frequency, and intensity). There were consistent but statistically subtle patterns in the use of speech rate, articulation rate, intensity, and body motion across a presentation. Principal component analysis also revealed patterns of system-like covariation among modalities. These findings, although tentative, suggest that the cognitive system integrates body, slides, and speech in a coordinated manner during natural language use. Further research is needed to clarify the specific coordination patterns that occur between the different modalities.
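The covariation analysis in the abstract can be illustrated with a standard principal component analysis over per-speaker (or per-window) measures. The sketch below uses random placeholder data; the column set mirrors the measures the abstract lists, but all names and shapes are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Rows: speakers (or time windows); columns: per-modality measures,
# e.g., body motion, slide change rate, speech rate, articulation
# rate, fundamental frequency, intensity (placeholder values here).
measures = np.random.rand(30, 6)

# Standardize so no single measure dominates, then look for
# system-like covariation across modalities in the leading components.
z = StandardScaler().fit_transform(measures)
pca = PCA()
scores = pca.fit_transform(z)
print(pca.explained_variance_ratio_)  # variance captured by each component
print(pca.components_[0])             # loadings: which modalities covary
```

Loadings of comparable sign and magnitude across several modalities on a leading component are the signature of the coordinated, system-like covariation the abstract reports.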


Subject(s)
Communication; Speech; Cognition; Humans
7.
Front Psychol ; 9: 1278, 2018.
Article in English | MEDLINE | ID: mdl-30250437

ABSTRACT

Through theoretical discussion, literature review, and a computational model, this paper challenges the notion that perspective-taking involves a fixed architecture in which particular processes have priority. For example, some research suggests that egocentric perspectives arise more quickly, with other perspectives (such as those of task partners) emerging only secondarily. We challenge this theoretical dichotomy between fast egocentric and slow other-centric processes, proposing instead a general view of perspective-taking as an emergent phenomenon governed by the interplay among cognitive mechanisms that accumulate information at different timescales. We first describe the pervasive relevance of perspective-taking to cognitive science. A dynamical systems model is then introduced that explicitly formulates the proposed timescale interaction. The model illustrates that, rather than having a rigid time course, perspective-taking can be fast or slow depending on factors such as task context. Implications are discussed, along with ideas for future empirical research.
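The abstract does not specify the model's equations, so the following is only a toy illustration of the general claim: two leaky accumulators integrating evidence at different timescales, where context (the drift inputs) rather than a fixed architecture determines whether the fast egocentric or the slow other-centric process finishes first. All parameter values and names are assumptions.

```python
import numpy as np

def perspective_race(ego_drift, other_drift, tau_ego=0.05, tau_other=0.5,
                     threshold=1.0, dt=0.01, max_t=5.0, noise=0.02, seed=0):
    """Toy race between two leaky accumulators at different timescales.

    tau_ego < tau_other means the egocentric accumulator integrates
    faster, but a strong other-centric drift (e.g., a salient partner
    cue) can still win, so neither perspective has fixed priority.
    Returns the winning perspective and its response time.
    """
    rng = np.random.default_rng(seed)
    ego, other, t = 0.0, 0.0, 0.0
    while t < max_t:
        ego += dt / tau_ego * (ego_drift - ego) + noise * rng.standard_normal()
        other += dt / tau_other * (other_drift - other) + noise * rng.standard_normal()
        if ego >= threshold:
            return "egocentric", t
        if other >= threshold:
            return "other-centric", t
        t += dt
    return "no decision", t

# Context (the drift inputs) decides which process finishes first.
print(perspective_race(ego_drift=1.2, other_drift=1.1))  # fast egocentric wins
print(perspective_race(ego_drift=0.6, other_drift=1.5))  # other-centric wins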
