Results 1 - 20 of 52
1.
J Neurophysiol ; 130(2): 291-302, 2023 Aug 1.
Article in English | MEDLINE | ID: mdl-37377190

ABSTRACT

Traditionally, pitch variation in a sound stream has been integral to music identity. We attempt to expand music's definition by demonstrating that the neural code for musicality is independent of pitch encoding. That is, pitchless sound streams can still induce music-like perception and a neurophysiological hierarchy similar to that of pitched melodies. Previous work reported that neural processing of sounds with no-pitch, fixed-pitch, and irregular-pitch (melodic) patterns exhibits a right-lateralized hierarchical shift, with pitchless sounds favorably processed in Heschl's gyrus (HG), ascending laterally to nonprimary auditory areas for fixed-pitch patterns and even more laterally for melodic patterns. The objective of this EEG study was to assess whether sound encoding maintains a similar hierarchical profile when musical perception is driven by timbre irregularities in the absence of pitch changes. Individuals listened to repetitions of three musical and three nonmusical sound streams. The nonmusical streams consisted of seven 200-ms segments of white, pink, or brown noise, separated by silent gaps. Musical streams were created similarly, but with all three noise types combined in a unique order within each stream to induce timbre variations and music-like perception. Subjects classified the sound streams as musical or nonmusical. Musical processing exhibited right-dominant α power enhancement, followed by a lateralized increase in θ phase-locking and spectral power. The θ phase-locking was stronger in musicians than in nonmusicians. The lateralization of activity suggests higher-level auditory processing. Our findings validate the existence of a hierarchical shift, traditionally observed with pitched-melodic perception, underscoring that musicality can be achieved with timbre irregularities alone.

NEW & NOTEWORTHY: Streams of pitchless noise segments varying in timbre were classified as music-like, and the EEG they induced exhibited a right-lateralized processing hierarchy similar to that of pitched melodies. This study provides evidence that the neural code of musicality is independent of pitch encoding. The results have implications for understanding music processing in individuals with degraded pitch perception, such as cochlear-implant listeners, as well as the role of nonpitched sounds in the induction of music-like perceptual states.


Subjects
Cochlear Implants, Music, Humans, Pitch Perception/physiology, Auditory Perception/physiology, Sound, Acoustic Stimulation
2.
Dev Sci ; : e13555, 2024 Jul 29.
Article in English | MEDLINE | ID: mdl-39075676
3.
Dev Psychobiol ; 61(3): 430-443, 2019 Apr.
Article in English | MEDLINE | ID: mdl-30588618

ABSTRACT

Much of what is known about the course of auditory learning following cochlear implantation is based on behavioral indicators that users are able to perceive sound. Both prelingually deafened children and postlingually deafened adults who receive cochlear implants display highly variable speech and language processing outcomes, although the basis for this is poorly understood. To date, measuring neural activity within the auditory cortex of implant recipients of all ages has been challenging, primarily because the use of traditional neuroimaging techniques is limited by the implant itself. Functional near-infrared spectroscopy (fNIRS) is an imaging technology that works with implant users of all ages because it is non-invasive, compatible with implant devices, and not subject to electrical artifacts. Thus, fNIRS can provide insight into processing factors that contribute to variations in spoken language outcomes in implant users, both children and adults. There are important considerations to be made when using fNIRS, particularly with children, to maximize the signal-to-noise ratio and to best identify and interpret cortical responses. This review considers these issues, recent data, and future directions for using fNIRS as a tool to understand spoken language processing in children and adults who hear through a cochlear implant.


Subjects
Auditory Cortex/physiopathology, Cochlear Implants, Deafness/physiopathology, Near-Infrared Spectroscopy/methods, Speech Perception/physiology, Adult, Auditory Cortex/diagnostic imaging, Child, Deafness/diagnostic imaging, Humans, Near-Infrared Spectroscopy/standards
5.
Dev Sci ; 21(3): e12575, 2018 May.
Article in English | MEDLINE | ID: mdl-28557278

ABSTRACT

Developmental psychology plays a central role in shaping evidence-based best practices for prelingually deaf children. The Auditory Scaffolding Hypothesis (Conway et al., 2009) asserts that a lack of auditory stimulation in deaf children leads to impoverished implicit sequence learning abilities, measured via an artificial grammar learning (AGL) task. However, prior research is confounded by a lack of both auditory and language input. The current study examines implicit learning in deaf children who were (Deaf native signers) or were not (oral cochlear implant users) exposed to language from birth, and in hearing children, using both AGL and Serial Reaction Time (SRT) tasks. None of the three groups of children, deaf or hearing, shows evidence of implicit learning on the AGL task, but all three groups show robust implicit learning on the SRT task. These findings argue against the Auditory Scaffolding Hypothesis and suggest that implicit sequence learning may be resilient to both auditory and language deprivation, within the tested limits. A video abstract of this article can be viewed at: https://youtu.be/EeqfQqlVHLI [Correction added on 07 August 2017, after first online publication: The video abstract link was added.].


Subjects
Deafness/physiopathology, Language Development, Learning/physiology, Child, Cochlear Implants, Female, Humans, Language, Language Tests, Linguistics, Male
6.
J Deaf Stud Deaf Educ ; 22(1): 9-21, 2017 Jan.
Article in English | MEDLINE | ID: mdl-27624307

ABSTRACT

Deaf children are often described as having difficulty with executive function (EF), frequently manifesting in behavioral problems. Some researchers view these problems as a consequence of auditory deprivation; however, the behavioral problems observed in previous studies may be due not to deafness but to some other factor, such as lack of early language exposure. Here, we distinguish these accounts by using the BRIEF EF parent-report questionnaire to test for behavioral problems in a group of Deaf children from Deaf families, who have a history of auditory but not language deprivation. For these children, the auditory deprivation hypothesis predicts behavioral impairments; the language deprivation hypothesis predicts no group differences in behavioral control. Results indicated that scores among the Deaf native signers (n = 42) were age-appropriate and similar to scores among the typically developing hearing sample (n = 45). These findings are most consistent with the language deprivation hypothesis and provide a foundation for continued research on outcomes of children with early exposure to sign language.


Subjects
Deafness/physiopathology, Executive Function/physiology, Sensory Deprivation/physiology, Sign Language, Adolescent, Child, Child Behavior Disorders/etiology, Child, Preschool, Female, Hearing/physiology, Humans, Male, Risk Factors
7.
Ear Hear ; 37(3): e160-72, 2016.
Article in English | MEDLINE | ID: mdl-26709749

ABSTRACT

OBJECTIVES: Cochlear implants are a standard therapy for deafness, yet the ability of implanted patients to understand speech varies widely. To better understand this variability in outcomes, the authors used functional near-infrared spectroscopy to image activity within regions of the auditory cortex and compared the results to behavioral measures of speech perception. DESIGN: The authors studied 32 deaf adults hearing through cochlear implants and 35 normal-hearing controls. The authors used functional near-infrared spectroscopy to measure responses within the lateral temporal lobe and the superior temporal gyrus to speech stimuli of varying intelligibility. The speech stimuli included normal speech, channelized speech (vocoded into 20 frequency bands), and scrambled speech (the 20 frequency bands shuffled in random order). The authors also used environmental sounds as a control stimulus. Behavioral measures consisted of the speech reception threshold, consonant-nucleus-consonant words, and AzBio sentence tests measured in quiet. RESULTS: Both control and implanted participants with good speech perception exhibited greater cortical activations to natural speech than to unintelligible speech. In contrast, implanted participants with poor speech perception had large, indistinguishable cortical activations to all stimuli. The ratio of cortical activation to normal speech versus scrambled speech correlated directly with the consonant-nucleus-consonant word and AzBio sentence scores. This pattern of cortical activation was not correlated with auditory threshold, age, side of implantation, or time after implantation. Turning off the implant reduced the cortical activations in all implanted participants. CONCLUSIONS: Together, these data indicate that the responses the authors measured within the lateral temporal lobe and the superior temporal gyrus correlate with behavioral measures of speech perception, demonstrating a neural basis for the variability in speech understanding outcomes after cochlear implantation.


Subjects
Auditory Cortex/diagnostic imaging, Cochlear Implantation, Comprehension, Deafness/rehabilitation, Speech Perception, Adult, Aged, Aged, 80 and over, Case-Control Studies, Cochlear Implants, Female, Functional Neuroimaging, Humans, Male, Middle Aged, Near-Infrared Spectroscopy, Temporal Lobe/diagnostic imaging, Young Adult
8.
J Exp Child Psychol ; 129: 157-64, 2015 Jan.
Article in English | MEDLINE | ID: mdl-25258018

ABSTRACT

The correspondence between auditory speech and lip-read information can be detected based on a combination of temporal and phonetic cross-modal cues. Here, we determined the point in developmental time at which children start to effectively use phonetic information to match a speech sound with one of two articulating faces. We presented 4- to 11-year-olds (N=77) with three-syllabic sine-wave speech replicas of two pseudo-words that were perceived as non-speech and asked them to match the sounds with the corresponding lip-read video. At first, children had no phonetic knowledge about the sounds, and matching was thus based on the temporal cues that are fully retained in sine-wave speech. Next, we trained all children to perceive the phonetic identity of the sine-wave speech and repeated the audiovisual (AV) matching task. Only at around 6.5 years of age did the benefit of having phonetic knowledge about the stimuli become apparent, thereby indicating that AV matching based on phonetic cues presumably develops more slowly than AV matching based on temporal cues.


Subjects
Child Development, Lipreading, Phonetics, Speech Perception, Speech, Child, Child, Preschool, Cues (Psychology), Humans
9.
Brain Sci ; 13(3), 2023 Mar 19.
Article in English | MEDLINE | ID: mdl-36979322

ABSTRACT

Recent studies have questioned past conclusions regarding the mechanisms of the McGurk illusion, especially how McGurk susceptibility might inform our understanding of audiovisual (AV) integration. We previously proposed that the McGurk illusion is likely attributable to a default mechanism, whereby the visual system, the auditory system, or both default to specific phonemes: those implicated in the McGurk illusion. We hypothesized that the default mechanism occurs because visual stimuli with an indiscernible place of articulation (like those traditionally used in the McGurk illusion) lead to an ambiguous perceptual environment and thus a failure in AV integration. In the current study, we tested the default hypothesis as it pertains to the auditory system. Participants performed two tasks. One task was a typical McGurk illusion task, in which individuals listened to auditory /ba/ paired with visual /ga/ and judged what they heard. The second task was an auditory-only task, in which individuals transcribed trisyllabic words with a phoneme replaced by silence. We found that individuals' transcriptions of missing phonemes often defaulted to /d/, /t/, or /th/, the same phonemes often experienced during the McGurk illusion. Importantly, individuals' default rate was positively correlated with their McGurk rate. We conclude that the McGurk illusion arises when people fail to integrate visual percepts with auditory percepts, due to visual ambiguity, thus leading the auditory system to default to phonemes often implicated in the McGurk illusion.

10.
Sci Rep ; 13(1): 7154, 2023 May 2.
Article in English | MEDLINE | ID: mdl-37130838

ABSTRACT

Procedures used to elicit behavioral and neurophysiological data to address a particular cognitive question can impact the nature of the data collected. We used functional near-infrared spectroscopy (fNIRS) to assess performance of a modified finger-tapping task in which participants performed synchronized or syncopated tapping relative to a metronomic tone. Both versions of the tapping task included a pacing phase (tapping with the tone) followed by a continuation phase (tapping without the tone). Both behavioral and brain-based findings revealed two distinct timing mechanisms underlying the two forms of tapping. Here we investigate the impact of an additional, and extremely subtle, manipulation of the study's experimental design. We measured responses in 23 healthy adults as they performed the two versions of the finger-tapping task either blocked by tapping type or alternating between the two types over the course of the experiment. As in our previous study, behavioral tapping indices and cortical hemodynamics were monitored, allowing us to compare results across the two study designs. Consistent with previous findings, results reflected distinct, context-dependent parameters of the tapping. Moreover, our results demonstrated a significant impact of study design on rhythmic entrainment in the presence/absence of auditory stimuli. Tapping accuracy and hemodynamic responsivity collectively indicate that the block design is preferable for studying action-based timing behavior.


Assuntos
Dedos , Hemodinâmica , Adulto , Humanos , Dedos/fisiologia , Desempenho Psicomotor/fisiologia
11.
Otolaryngol Head Neck Surg ; 169(5): 1290-1298, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37078337

ABSTRACT

OBJECTIVE: Untreated sleep-disordered breathing (SDB) is associated with problem behaviors in children. The neurological basis for this relationship is unknown. We used functional near-infrared spectroscopy (fNIRS) to assess the relationship between cerebral hemodynamics of the frontal lobe and problem behaviors in children with SDB. STUDY DESIGN: Cross-sectional. SETTING: Urban tertiary care academic children's hospital and affiliated sleep center. METHODS: We enrolled children with SDB aged 5 to 16 years referred for polysomnography. We measured fNIRS-derived cerebral hemodynamics within the frontal lobe during polysomnography. We assessed parent-reported problem behaviors using the Behavior Rating Inventory of Executive Function, Second Edition (BRIEF-2). We compared the relationships between (i) the instability of cerebral perfusion in the frontal lobe measured by fNIRS, (ii) SDB severity measured by the apnea-hypopnea index (AHI), and (iii) BRIEF-2 clinical scales using Pearson correlation (r). A p < .05 was considered significant. RESULTS: A total of 54 children were included. The average age was 7.8 (95% confidence interval, 7.0-8.7) years; 26 (48%) were boys and 25 (46%) were Black. The mean AHI was 9.9 (5.7-14.1). There was a statistically significant inverse relationship between the coefficient of variation of perfusion in the frontal lobe and BRIEF-2 clinical scales (range of r = 0.24-0.49, range of p = .076 to <.001). The correlations between AHI and BRIEF-2 scales were not statistically significant. CONCLUSION: These results provide preliminary evidence for fNIRS as a child-friendly biomarker for the assessment of adverse outcomes of SDB.
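The two statistics named in the METHODS, the coefficient of variation of perfusion and the Pearson correlation, are straightforward to compute. The sketch below is illustrative only: the per-child values are invented, not the study's data, and are arranged so that greater perfusion instability pairs with lower scores, producing the kind of inverse relationship the abstract describes.

```python
from statistics import mean, stdev

def coefficient_of_variation(samples):
    """CV = SD / mean; higher values indicate less stable perfusion."""
    return stdev(samples) / mean(samples)

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length lists."""
    mx, my = mean(x), mean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

# Hypothetical per-child values (NOT the study's data): each child's CV of
# frontal-lobe perfusion paired with a parent-reported behavior score.
cv_perfusion = [0.08, 0.12, 0.05, 0.15, 0.10]
behavior_score = [62, 55, 70, 48, 58]
r = pearson_r(cv_perfusion, behavior_score)  # negative: an inverse relationship
```

With real data, the same two functions would be applied per clinical scale, followed by a significance test on each r.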


Assuntos
Comportamento Problema , Síndromes da Apneia do Sono , Masculino , Humanos , Criança , Pré-Escolar , Adolescente , Feminino , Estudos Transversais , Síndromes da Apneia do Sono/complicações , Hemodinâmica
12.
Neurophotonics ; 9(3): 035003, 2022 Jul.
Article in English | MEDLINE | ID: mdl-35990173

ABSTRACT

Significance: Resting-state functional connectivity (RSFC) analyses of functional near-infrared spectroscopy (fNIRS) data reveal cortical connections and networks across the brain. Motion artifacts and systemic physiology in evoked fNIRS signals present unique analytical challenges, and methods that control for systemic physiological noise have been explored. Whether these same methods require modification when applied to resting-state fNIRS (RS-fNIRS) data remains unclear. Aim: We systematically examined the sensitivity and specificity of several RSFC analysis pipelines to identify the best methods for correcting global systemic physiological signals in RS-fNIRS data. Approach: Using numerically simulated RS-fNIRS data, we compared the rates of true and false positives for several connectivity analysis pipelines. Their performance was scored using receiver operating characteristic analysis. Pipelines included partial correlation and multivariate Granger causality, with and without short-separation measurements, and a modified multivariate causality model that included a non-traditional zeroth-lag cross term. We also examined the effects of pre-whitening and robust statistical estimators on performance. Results: Consistent with previous work on bivariate correlation models, our results demonstrate that robust statistics and pre-whitening are effective methods to correct for motion artifacts and autocorrelation in the fNIRS time series. Moreover, we found that pre-filtering using principal components extracted from short-separation fNIRS channels as part of a partial correlation model was most effective in reducing spurious correlations due to shared systemic physiology when the two signals of interest fluctuated synchronously. However, when there was a temporal lag between the signals, a multivariate Granger causality test incorporating the short-separation channels was better. 
Since it is unknown if such a lag exists in experimental data, we propose a modified version of Granger causality that includes the non-traditional zeroth-lag term as a compromise solution. Conclusions: A combination of pre-whitening, robust statistical methods, and partial correlation in the processing pipeline to reduce autocorrelation, motion artifacts, and global physiology are suggested for obtaining statistically valid connectivity metrics with RS-fNIRS. Further studies should validate the effectiveness of these methods using human data.
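As a toy illustration of why shared systemic physiology must be controlled for, the sketch below simulates two long-separation channels that both contain a systemic signal z but have no direct coupling: their raw correlation is high, while a partial correlation that regresses z out of both collapses toward zero. This is a minimal sketch of the partial-correlation idea only, with invented signals; it omits the pre-whitening, robust estimation, and Granger models discussed in the abstract.

```python
import random
from statistics import mean

def pearson_r(x, y):
    """Pearson correlation between two equal-length lists."""
    mx, my = mean(x), mean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def regress_out(y, z):
    """Residuals of y after removing its best linear fit on z."""
    mz, my = mean(z), mean(y)
    beta = sum((a - mz) * (b - my) for a, b in zip(z, y)) / sum(
        (a - mz) ** 2 for a in z)
    return [b - my - beta * (a - mz) for a, b in zip(z, y)]

def partial_corr(x, y, z):
    """Correlation between x and y after regressing z out of both."""
    return pearson_r(regress_out(x, z), regress_out(y, z))

rng = random.Random(0)
n = 500
z = [rng.gauss(0, 1) for _ in range(n)]       # shared systemic physiology
x = [s + rng.gauss(0, 0.5) for s in z]        # channel 1 = systemic + noise
y = [s + rng.gauss(0, 0.5) for s in z]        # channel 2 = systemic + noise

raw = pearson_r(x, y)            # spuriously high despite no direct coupling
cleaned = partial_corr(x, y, z)  # near zero once z is controlled for
```

In practice the "z" regressors would come from short-separation channels (or their principal components), as the pipeline comparison above suggests.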

13.
iScience ; 25(7): 104671, 2022 Jul 15.
Article in English | MEDLINE | ID: mdl-35845168

ABSTRACT

The influence of audition on visual perception has mainly been assessed using non-speech stimuli. Here, we introduce the Audiovisual Time-Flow Illusion in spoken language, underscoring the role of audition in multisensory processing. When brief pauses were inserted into, or brief portions were removed from, an acoustic speech stream, individuals perceived the corresponding visual speech as "pausing" or "skipping", respectively, even though the visual stimulus was intact. When the manipulation was reversed (brief pauses inserted into, or brief portions removed from, the visual speech stream), individuals failed to perceive the illusion in the corresponding intact auditory stream. Our findings demonstrate that in the context of spoken language, people continually realign the pace of their visual perception based on that of the auditory input. In short, the auditory modality sets the pace of the visual modality during audiovisual speech processing.

14.
Pediatrics ; 149(6), 2022 Jun 1.
Article in English | MEDLINE | ID: mdl-35607935

ABSTRACT

BACKGROUND AND OBJECTIVES: Infants with profound hearing loss are typically considered for cochlear implantation. Many insurance providers deny implantation to children with developmental impairments because they have limited potential to acquire verbal communication. We took advantage of differing insurance coverage restrictions to compare outcomes after cochlear implantation or continued hearing aid use. METHODS: Young children with deafness were identified prospectively from 2 different states, Texas and California, and followed longitudinally for an average of 2 years. Children in cohort 1 (n = 138) had normal cognition and adaptive behavior and underwent cochlear implantation. Children in cohorts 2 (n = 37) and 3 (n = 29) had low cognition and low adaptive behavior. Those in cohort 2 underwent cochlear implantation, whereas those in cohort 3 were treated with hearing aids. RESULTS: Cohorts did not substantially differ in demographic characteristics. Using cohort 2 as the reference, children in cohort 1 showed more rapid gains in cognitive, adaptive function, language, and auditory skills (estimated coefficients, 0.166 to 0.403; P ≤ .001), whereas children in cohort 3 showed slower gains (-0.119 to -0.243; P ≤ .04). Children in cohort 3 also had greater increases in stress within the parent-child system (1.328; P = .02), whereas cohorts 1 and 2 were not different. CONCLUSIONS: Cochlear implantation benefits children with deafness and developmental delays. This finding has health policy implications not only for private insurers but also for large, statewide, publicly administered programs. Cognitive and adaptive skills should not be used as a "litmus test" for pediatric cochlear implantation.


Assuntos
Implante Coclear , Implantes Cocleares , Surdez , Auxiliares de Audição , Percepção da Fala , Criança , Pré-Escolar , Surdez/psicologia , Deficiências do Desenvolvimento/cirurgia , Humanos , Lactente , Desenvolvimento da Linguagem
15.
Neurophotonics ; 9(Suppl 2): S24001, 2022 Aug.
Article in English | MEDLINE | ID: mdl-36052058

ABSTRACT

This report is the second part of a comprehensive two-part series aimed at reviewing an extensive and diverse toolkit of novel methods to explore brain health and function. While the first report focused on neurophotonic tools mostly applicable to animal studies, here we highlight optical spectroscopy and imaging methods relevant to noninvasive human brain studies. We outline current state-of-the-art technologies and software advances, explore the most recent impact of these technologies on neuroscience and clinical applications, identify the areas where innovation is needed, and provide an outlook on future directions.

16.
Hum Brain Mapp ; 32(9): 1363-70, 2011 Sep.
Article in English | MEDLINE | ID: mdl-20690126

ABSTRACT

Phonological neighborhood density refers to the number of words that can be generated by replacing a phoneme in a target word with another phoneme in the same position. Although the precise nature of the phonological neighborhood density effect is not firmly established, many behavioral psycholinguistic studies have shown that visual recognition of individual words is influenced by the number and type of neighbors the words have. This study explored neurobehavioral correlates of phonological neighborhood density in skilled readers of English using near-infrared spectroscopy. On the basis of a lexical decision task, our findings showed that words with many phonological neighbors (e.g., FRUIT) were recognized more slowly than words with few phonological neighbors (e.g., PROOF), and that words with many neighbors elicited significantly greater changes in blood oxygenation in the left than in the right hemisphere of the brain, specifically in areas BA 22/39/40. In previous studies these brain areas have been implicated in fine-grained phonological processing in readers of English. The present findings provide the first demonstration that areas BA 22/39/40 are also sensitive to phonological density effects.
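The substitution-only neighbor count defined in the first sentence can be made concrete with a toy lexicon. The phoneme symbols and mini-lexicon below are invented for illustration; a real study would count neighbors over phonological transcriptions of a full corpus, not over spelling.

```python
def substitution_neighbors(word, lexicon, phonemes):
    """Lexicon entries formed by swapping exactly one phoneme of `word`.

    Words are tuples of phoneme symbols, since a single phoneme can
    span several letters in spelling.
    """
    neighbors = set()
    for i in range(len(word)):
        for p in phonemes:
            if p == word[i]:
                continue
            candidate = word[:i] + (p,) + word[i + 1:]
            if candidate in lexicon:
                neighbors.add(candidate)
    return neighbors

# Toy lexicon in a made-up ARPAbet-like notation.
lexicon = {
    ("k", "ae", "t"),  # cat
    ("b", "ae", "t"),  # bat
    ("h", "ae", "t"),  # hat
    ("k", "ah", "t"),  # cut
    ("k", "ae", "p"),  # cap
    ("d", "ao", "g"),  # dog
}
phonemes = {p for w in lexicon for p in w}

# "cat" has four one-phoneme-substitution neighbors here: bat, hat, cut, cap.
density = len(substitution_neighbors(("k", "ae", "t"), lexicon, phonemes))
```

A word like the abstract's FRUIT would score high on this measure in a real lexicon, while PROOF would score low.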


Subjects
Brain Mapping, Phonetics, Recognition (Psychology), Near-Infrared Spectroscopy, Brain, Functional Laterality, Hemodynamics/physiology, Humans, Neuropsychological Tests, Oxyhemoglobins/metabolism, Psycholinguistics, Reaction Time/physiology, Students, Universities, Vocabulary
17.
Brain Sci ; 11(1), 2021 Jan 9.
Article in English | MEDLINE | ID: mdl-33435472

ABSTRACT

A debate over the past decade has focused on the so-called bilingual advantage: the idea that bilingual and multilingual individuals have enhanced domain-general executive functions, relative to monolinguals, due to competition-induced monitoring of both processing and representation from the task-irrelevant language(s). In this commentary, we consider a recent study by Pot, Keijzer, and de Bot (2018), which focused on the relationship between individual differences in language usage and performance on an executive function task among multilingual older adults. We discuss their approach and findings in light of a more general movement towards embracing complexity in this domain of research, including individuals' sociocultural context and position in the lifespan. The field increasingly considers interactions between bilingualism/multilingualism and cognition, employing measures of language use well beyond the early dichotomous perspectives on language background. Moreover, new measures of bilingualism and analytical approaches are helping researchers interrogate the complexities of specific processing issues. Indeed, our review of the bilingualism/multilingualism literature confirms the increased appreciation researchers have for the range of factors, beyond whether someone speaks one, two, or more languages, that impact specific cognitive processes. Here, we highlight some of the most salient of these and incorporate suggestions for a way forward that likewise encompasses neural perspectives on the topic.

18.
Infant Behav Dev ; 63: 101566, 2021 May.
Article in English | MEDLINE | ID: mdl-33894632

ABSTRACT

Parent-child interactions support the development of a wide range of socio-cognitive abilities in young children. As infants become increasingly mobile, the nature of these interactions changes from person-oriented to object-oriented, with the latter relying on children's emerging ability to engage in joint attention. Joint attention is acknowledged to be a foundational ability in early child development, broadly speaking, yet its operationalization has varied substantially over the course of several decades of developmental research devoted to its characterization. Here, we outline two broad research perspectives, social and associative accounts, on what constitutes joint attention. Differences center on the criteria for what qualifies as joint attention and on the hypothesized developmental mechanisms that underlie the ability. After providing a theoretical overview, we introduce a joint attention coding scheme that we have developed iteratively based on careful reading of the literature and our own data-coding experiences. This coding scheme provides objective guidelines for characterizing multimodal parent-child interactions. The need for such guidelines is acute given the widespread use of this and other developmental measures to assess atypically developing populations. We conclude with a call for open discussion about the need for researchers to include a clear description of what qualifies as joint attention in publications pertaining to joint attention, as well as details about their coding. We provide instructions for using our coding scheme in the service of starting such a discussion.


Subjects
Attention, Child Development, Child, Preschool, Humans, Infant, Parent-Child Relations
19.
Cogn Psychol ; 60(4): 241-66, 2010 Jun.
Article in English | MEDLINE | ID: mdl-20159653

ABSTRACT

In a series of studies, we examined how mothers naturally stress words across multiple mentions in speech to their infants and how this marking influences infants' recognition of words in fluent speech. We first collected samples of mothers' infant-directed speech using a technique that induced multiple repetitions of target words. Acoustic analyses revealed that mothers systematically alternated between emphatic and nonemphatic stress when talking to their infants. Using the headturn preference procedure, we then tested 7.5-month-old infants on their ability to detect familiarized bisyllabic words in fluent speech. Stress of target words (emphatic and nonemphatic) was systematically varied across familiarization and recognition phases of four experiments. Results indicated that, although infants generally prefer listening to words produced with emphatic stress, recognition was enhanced when the degree of emphatic stress at familiarization matched the degree of emphatic stress at recognition.


Subjects
Language Development, Maternal Behavior, Recognition (Psychology), Semantics, Speech Acoustics, Verbal Behavior, Verbal Learning, Association Learning, Female, Humans, Infant, Male, Sound Spectrography, Speech Perception
20.
IEEE Trans Cogn Dev Syst ; 12(2): 243-249, 2020 Jun.
Article in English | MEDLINE | ID: mdl-33748419

ABSTRACT

Here we characterize establishment of joint attention in hearing parent-deaf child dyads and hearing parent-hearing child dyads. Deaf children were candidates for cochlear implantation who had not yet been implanted and who had no exposure to formal manual communication (e.g., American Sign Language). Because many parents whose deaf children go through early cochlear implant surgery do not themselves know a visual language, these dyads do not share a formal communication system based in a common sensory modality prior to the child's implantation. Joint attention episodes were identified during free play between hearing parents and their hearing children (N = 4) and hearing parents and their deaf children (N = 4). Attentional episode types included successful parent-initiated joint attention, unsuccessful parent-initiated joint attention, passive attention, successful child-initiated joint attention, and unsuccessful child-initiated joint attention. Group differences emerged in both successful and unsuccessful parent-initiated attempts at joint attention, parent passive attention, and successful child-initiated attempts at joint attention based on proportion of time spent in each. These findings highlight joint attention as an indicator of early communicative efficacy in parent-child interaction for different child populations. We discuss the active role parents and children play in communication, regardless of their hearing status.
