ABSTRACT
Music's deeply interpersonal nature suggests that music-derived neuroplasticity relates to interpersonal temporal dynamics, or synchrony. Interpersonal neural synchrony (INS) has been found to correlate with increased behavioral synchrony during social interactions and may represent mechanisms that support them. As social interactions often do not have clearly delineated boundaries, and many start and stop intermittently, we hypothesize that a neural signature of INS may be detectable following an interaction. The present study aimed to investigate this hypothesis using a pre-post paradigm, measuring interbrain phase coherence before and after a cooperative dyadic musical interaction. Ten dyads underwent synchronous electroencephalographic (EEG) recording during silent, non-interactive periods before and after a musical interaction in the form of a cooperative tapping game. Significant increases in delta-band INS were found in the post-interaction condition and were positively correlated with the duration of the preceding interaction. These findings suggest a mechanism by which social interaction may be efficiently continued after interruption and hold the potential for measuring neuroplastic adaptation in longitudinal studies. These findings also support the idea that INS during social interaction represents active mechanisms for maintaining synchrony rather than mere parallel processing of stimuli and motor activity.
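As an illustration of the kind of interbrain coherence measure described above, the sketch below computes a delta-band phase-locking value between one EEG channel from each member of a dyad. It is a minimal example under stated assumptions (a 1-4 Hz band, Butterworth filtering, Hilbert phase), not the authors' analysis pipeline, and the toy signals are placeholders.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def delta_phase_locking(eeg_a, eeg_b, fs, band=(1.0, 4.0)):
    """Phase-locking value (PLV) between two EEG channels in the delta band.

    eeg_a, eeg_b : 1-D arrays of equal length (one channel per participant)
    fs           : sampling rate in Hz
    """
    # Band-pass both signals to the assumed delta range
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    xa = filtfilt(b, a, eeg_a)
    xb = filtfilt(b, a, eeg_b)

    # Instantaneous phase from the analytic signal
    phase_a = np.angle(hilbert(xa))
    phase_b = np.angle(hilbert(xb))

    # PLV: magnitude of the mean phase-difference vector (0 = no coupling, 1 = perfect)
    return np.abs(np.mean(np.exp(1j * (phase_a - phase_b))))

# Toy usage: two noisy signals sharing a 2 Hz component
fs = 250
t = np.arange(0, 60, 1 / fs)
shared = np.sin(2 * np.pi * 2 * t)
plv = delta_phase_locking(shared + np.random.randn(t.size),
                          shared + np.random.randn(t.size), fs)
print(f"delta-band PLV: {plv:.2f}")
```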
ABSTRACT
The perception of harmonic complexes provides important information for musical and vocal communication. Numerous studies have shown that musical training and expertise are associated with better processing of harmonic complexes; however, it is unclear whether the perceptual improvement associated with musical training is universal to different pitch models. The current study addresses this issue by measuring discrimination thresholds of musicians (n = 20) and non-musicians (n = 18) to diotic (same sound to both ears) and dichotic (different sounds to each ear) sounds of four stimulus types: (1) pure sinusoidal tones, PT; (2) four-harmonic complex tones, CT; (3) iterated rippled noise, IRN; and (4) interaurally correlated broadband noise, called the "Huggins" or "dichotic" pitch, DP. Frequency difference limens (DLF) for each stimulus type were obtained via a three-alternative forced-choice adaptive task requiring selection of the interval with the highest pitch, yielding the smallest perceptible fundamental frequency (F0) distance (in Hz) between two sounds. Music skill was measured by an online test of musical pitch, melody and timing maintained by the International Laboratory for Brain, Music and Sound Research. Musicianship, length of music experience and self-evaluation of musical skill were assessed by questionnaire. Results showed musicians had smaller DLFs in all four conditions with the largest group difference in the dichotic condition. DLF thresholds were related to both subjective and objective musical ability. In addition, subjective self-report of musical ability was shown to be a significant variable in group classification. Taken together, the results suggest that music-related plasticity benefits multiple mechanisms of pitch encoding and that self-evaluation of musicality can be reliably associated with objective measures of perception.
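To illustrate how a frequency difference limen can be estimated with an adaptive forced-choice task like the one described, the sketch below runs a 2-down/1-up staircase against a simulated listener. The tracking rule, step sizes, and listener model are assumptions for the demonstration, not the study's actual procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulated_listener(delta_f_hz, true_dlf_hz=2.0):
    """Toy psychometric model: the probability of picking the higher-pitched
    interval grows with the frequency difference (assumption for the demo)."""
    p_correct = 1 / 3 + (2 / 3) * (1 - np.exp(-delta_f_hz / true_dlf_hz))
    return rng.random() < p_correct

def run_staircase(start_delta=20.0, n_reversals=12):
    """2-down/1-up adaptive track on the F0 difference (in Hz).
    Converges near the 70.7%-correct point; the DLF is estimated as the
    mean of the later reversals."""
    delta, step = start_delta, 2.0             # multiplicative step (factor 2)
    correct_streak, direction = 0, 0
    reversals = []
    while len(reversals) < n_reversals:
        if simulated_listener(delta):
            correct_streak += 1
            if correct_streak == 2:            # two correct in a row -> harder
                correct_streak = 0
                if direction == +1:
                    reversals.append(delta)
                direction = -1
                delta /= step
        else:
            correct_streak = 0                 # one wrong -> easier
            if direction == -1:
                reversals.append(delta)
            direction = +1
            delta *= step
        if len(reversals) == 4:
            step = 2 ** 0.5                    # finer steps after 4 reversals
    return np.mean(reversals[4:])

print(f"estimated DLF: {run_staircase():.2f} Hz")
```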
ABSTRACT
Previous evidence has shown that early auditory processing impacts later linguistic development, and targeted training implemented at early ages can enhance auditory processing skills, with better expected language development outcomes. This study focuses on typically developing infants and aims to test the feasibility and preliminary efficacy of music training based on active synchronization with complex musical rhythms on the linguistic outcomes and electrophysiological functioning underlying auditory processing. Fifteen infants participated in the training (RTr+) and were compared with a group of infants not attending any structured activities during the same time frame (RTr-, N = 14). At pre- and post-training, expressive and receptive language skills were assessed using standardized tests, and auditory processing skills were characterized through an electrophysiological non-speech multi-feature paradigm. Results reveal that RTr+ infants showed significantly broader improvement in both expressive and receptive pre-language skills. Moreover, at post-training, they presented an electrophysiological pattern characterized by shorter latency of two peaks (N2* and P2), reflecting a neural change detection process: these shifts in latency go beyond those seen due to maturation alone. These results provide preliminary evidence of the efficacy of our training in improving early linguistic competences and in modifying the neural underpinnings of auditory processing in infants.
ABSTRACT
Successful mapping of meaningful labels to sound input requires accurate representation of that sound's acoustic variances in time and spectrum. For some individuals, such as children or those with hearing loss, having an objective measure of the integrity of this representation could be useful. Classification is a promising machine learning approach that can be used to objectively predict a stimulus label from the brain response. This approach has been previously used with auditory evoked potentials (AEP) such as the frequency following response (FFR), but a number of key issues remain unresolved before classification can be translated into clinical practice. Specifically, past efforts at FFR classification have used data from a given subject for both training and testing the classifier. It is also unclear which components of the FFR elicit optimal classification accuracy. To address these issues, we recorded FFRs from 13 adults with normal hearing in response to speech and music stimuli. We compared labeling accuracy of two cross-validation classification approaches using FFR data: (1) a more traditional method combining subject data in both the training and testing set, and (2) a "leave-one-out" approach, in which subject data is classified based on a model built exclusively from the data of other individuals. We also examined classification accuracy on decomposed and time-segmented FFRs. Our results indicate that leave-one-subject-out cross-validation achieves accuracy comparable to that obtained with the more conventional cross-validation scheme, while allowing a subject's results to be analyzed with respect to normative data pooled from a separate population. In addition, we demonstrate that classification accuracy is highest when the entire FFR is used to train the classifier. Taken together, these efforts contribute key steps toward translation of classification-based machine learning approaches into clinical practice.
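The leave-one-subject-out scheme contrasted above can be sketched with scikit-learn's LeaveOneGroupOut splitter, which guarantees that every test fold comes from a subject whose data never appeared in training. The feature matrix, labels, and classifier choice below are placeholders, not the authors' implementation.

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder data: one feature vector per FFR recording (e.g., spectral
# amplitudes), a stimulus label, and the subject ID it came from.
rng = np.random.default_rng(1)
n_subjects, n_recordings, n_features = 13, 8, 40
X = rng.normal(size=(n_subjects * n_recordings, n_features))
y = rng.integers(0, 2, size=n_subjects * n_recordings)      # stimulus label
groups = np.repeat(np.arange(n_subjects), n_recordings)     # subject ID

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))

# Leave-one-subject-out: each fold tests on a held-out subject, mimicking
# comparison of an individual against a normative pool of other listeners.
scores = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut())
print(f"per-subject accuracy: mean {scores.mean():.2f}")
```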
Subject(s)
Music, Speech Perception, Acoustic Stimulation, Electroencephalography, Auditory Evoked Potentials, Hearing Loss, Humans, Speech
ABSTRACT
OBJECTIVE: To evaluate the feasibility of auditory monitoring of neurophysiological status using the frequency-following response (FFR) in neonates with progressive moderate hyperbilirubinemia, measured by transcutaneous bilirubin (TcB) levels. STUDY DESIGN: Auditory brainstem response (ABR) and FFR measures were compared and correlated with TcB levels across three groups. Group I was a healthy cohort (n = 13). Group II (n = 28) consisted of neonates with progressive, moderate hyperbilirubinemia, and Group III consisted of the same neonates after physician-ordered phototherapy. RESULT: FFR amplitudes in Group I controls (TcB = 83.1 ± 32.5 µmol/L; 4.9 ± 1.9 mg/dL) were greater than in Group II (TcB = 209.3 ± 48.0 µmol/L; 12.1 ± 2.8 mg/dL). After TcB was lowered by phototherapy, FFR amplitudes in Group III were similar to controls. Lower TcB levels correlated with larger FFR amplitudes (r = -0.291, p = 0.015), but not with ABR wave amplitudes or latencies. CONCLUSION: The FFR is a promising measure of dynamic neurophysiological status in neonates and may be useful in tracking neurotoxicity in infants with hyperbilirubinemia.
Subject(s)
Acoustic Stimulation, Brain Stem/physiology, Brainstem Auditory Evoked Potentials, Neonatal Hyperbilirubinemia/physiopathology, Neonatal Screening/methods, Bilirubin/blood, Cohort Studies, Electroencephalography, Humans, Neonatal Hyperbilirubinemia/blood, Neonatal Hyperbilirubinemia/therapy, Newborn Infant, Phototherapy, Speech
ABSTRACT
The ability to rapidly discriminate successive auditory stimuli within tens-of-milliseconds is crucial for speech and language development, particularly in the first year of life. This skill, called Rapid Auditory Processing (RAP), is altered in infants at familial risk for language and learning impairment (LLI) and is a robust predictor of later language outcomes. In the present study, we investigate the neural substrates of RAP, i.e., the underlying neural oscillatory patterns, in a group of Italian 6-month-old infants at risk for LLI (FH+, n = 24), compared to control infants with no known family history of LLI (FH-, n = 32). Brain responses to rapid changes in fundamental frequency and duration were recorded via high-density electroencephalogram during a non-speech double oddball paradigm. Sources of event-related potential generators were localized to right and left auditory regions in both FH+ and FH- groups. Time-frequency analyses showed variations in both theta (θ) and gamma (γ) ranges across groups. Our results showed that overall RAP stimuli elicited a more left-lateralized pattern of oscillations in FH- infants, whereas FH+ infants demonstrated a more right-lateralized pattern, in both the theta and gamma frequency bands. Interestingly, FH+ infants showed reduced early left gamma power (starting at 50 ms after stimulus onset) during deviant discrimination. Perturbed oscillatory dynamics may well constitute a candidate neural mechanism to explain group differences in RAP. Additional group differences in source location suggest that anatomical variations may underlie differences in oscillatory activity. Regarding the predictive value of early oscillatory measures, we found that the amplitude of the source response and the magnitude of oscillatory power and phase synchrony were predictive of expressive vocabulary at 20 months of age. These results further our understanding of the interplay among neural mechanisms that support typical and atypical rapid auditory processing in infancy.
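A minimal sketch of the kind of single-trial time-frequency analysis referred to above is given below: complex Morlet wavelet convolution followed by averaging power in theta and gamma bands. The frequencies, wavelet width, and toy data are assumptions, not the study's parameters.

```python
import numpy as np
from scipy.signal import fftconvolve

def morlet_power(trials, fs, freqs, n_cycles=5):
    """Trial-averaged time-frequency power via complex Morlet wavelets.

    trials : array (n_trials, n_times) of EEG at one channel or source
    freqs  : center frequencies in Hz
    Returns power, shape (n_freqs, n_times).
    """
    n_trials, n_times = trials.shape
    power = np.zeros((len(freqs), n_times))
    for fi, f in enumerate(freqs):
        sigma_t = n_cycles / (2 * np.pi * f)
        t = np.arange(-3 * sigma_t, 3 * sigma_t, 1 / fs)   # wavelet support
        wavelet = np.exp(2j * np.pi * f * t) * np.exp(-t**2 / (2 * sigma_t**2))
        wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))   # unit energy
        for trial in trials:
            analytic = fftconvolve(trial, wavelet, mode="same")  # len(trial)
            power[fi] += np.abs(analytic) ** 2
    return power / n_trials

# Toy usage: 30 noisy trials containing a 5 Hz (theta) component
fs = 250
t = np.arange(0, 1, 1 / fs)
trials = np.random.randn(30, t.size) + np.sin(2 * np.pi * 5 * t)
freqs = np.arange(3.0, 41.0)
pw = morlet_power(trials, fs, freqs)
theta = pw[(freqs >= 4) & (freqs <= 6)].mean()
gamma = pw[(freqs >= 30) & (freqs <= 40)].mean()
print(f"mean theta power {theta:.2f} vs mean gamma power {gamma:.2f}")
```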
Subject(s)
Auditory Cortex/physiopathology, Auditory Perception/physiology, EEG Phase Synchronization/physiology, Auditory Evoked Potentials/physiology, Functional Laterality/physiology, Gamma Rhythm/physiology, Language Development Disorders/physiopathology, Language Development, Learning Disabilities/physiopathology, Theta Rhythm/physiology, Electroencephalography, Genetic Predisposition to Disease, Humans, Infant, Language Development Disorders/genetics, Learning Disabilities/genetics, Vocabulary
ABSTRACT
Background: The speech-evoked frequency following response (FFR) has been shown to be useful in assessing complex auditory processing abilities across different age groups. While many aspects of the FFR have been studied extensively, the effect of timing, as measured by inter-stimulus interval (ISI), especially in the older adult population, has yet to be thoroughly investigated. Objective: The purpose of this study was to examine the effects of different ISIs on the speech-evoked FFR in older and younger adults who speak a tonal language, and to investigate whether the older adults' FFRs were more susceptible to the change in ISI. Materials and Methods: Twenty-two normal-hearing participants were recruited in our study, including 11 young adult participants and 11 elderly participants. An Intelligent Hearing Systems Smart EP evoked potential system was used to record the FFR in four ISI conditions (40, 80, 120 and 160 ms). A recorded natural speech token with a falling tone /yi/ was used as the stimulus. Two indices, stimulus-to-response correlation coefficient and pitch strength, were used to quantify the FFR responses. Two-way analysis of variance (ANOVA) was used to analyze the differences between age groups and across ISI conditions. Results: There was no significant difference in stimulus-to-response correlation coefficient or pitch strength among the different ISI conditions in either age group. Older adults appeared to have weaker FFRs for all ISI conditions when compared to their younger adult counterparts. Conclusion: Shorter ISIs did not result in worse FFRs from older adults or younger adults. For speech-evoked FFR using a recorded natural speech token that is 250 ms in length, an ISI as short as 40 ms appeared to be sufficient and effective to record the FFR in elderly adults.
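The two FFR indices named above can be sketched as follows: a stimulus-to-response correlation taken as the maximum normalized cross-correlation over a small range of neural lags, and a pitch strength taken as the peak of the normalized autocorrelation within a candidate F0 range. The lag window, F0 limits, and toy signals are assumptions, not the study's exact settings.

```python
import numpy as np

def stimulus_response_correlation(stim, resp, fs, max_lag_ms=15):
    """Maximum normalized cross-correlation between stimulus and FFR,
    searched over plausible neural lags (assumed 0-15 ms)."""
    max_lag = int(fs * max_lag_ms / 1000)
    best = 0.0
    for lag in range(max_lag + 1):
        r = resp[lag:lag + len(stim) - max_lag]
        s = stim[:len(r)]
        best = max(best, abs(np.corrcoef(s, r)[0, 1]))
    return best

def pitch_strength(resp, fs, f0_range=(80, 400)):
    """Peak of the normalized autocorrelation within the candidate F0 range."""
    resp = resp - resp.mean()
    ac = np.correlate(resp, resp, mode="full")[len(resp) - 1:]
    ac /= ac[0]
    lo, hi = int(fs / f0_range[1]), int(fs / f0_range[0])
    return ac[lo:hi + 1].max()

# Toy usage: a delayed, noisy 120 Hz "response" to a 120 Hz tone
fs = 8000
t = np.arange(0, 0.25, 1 / fs)
stim = np.sin(2 * np.pi * 120 * t)
resp = 0.5 * np.sin(2 * np.pi * 120 * (t - 0.008)) + 0.3 * np.random.randn(t.size)
print(f"stimulus-to-response r = {stimulus_response_correlation(stim, resp, fs):.2f}")
print(f"pitch strength = {pitch_strength(resp, fs):.2f}")
```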
ABSTRACT
OBJECTIVE: Background noise makes hearing speech difficult for people of all ages. This difficulty can be exacerbated by co-occurring developmental deficits that often emerge in childhood. Sentence-type speech-in-noise (SIN) tests are available clinically but cannot be administered to very young individuals. Our objective was to examine the use of an electrophysiological test of SIN, suitable for infants, to track developmental trajectories. METHODS: Speech-evoked brainstem potentials were recorded from 30 typically-developing infants in quiet and +10 dB SNR background noise. Infants were divided into two age groups (7-12 and 18-24 months) and examined across development. Spectral power of the frequency following response (FFR) was computed using a fast Fourier Transform. Cross-correlations between quiet and noise responses were computed to measure encoding resistance to noise. RESULTS: Older infants had more robust FFR encoding in noise and had higher quiet-noise correlations than their younger counterparts. No group differences were observed in the quiet condition. CONCLUSIONS: By two years of age, infants show less vulnerability to the disruptive effects of background noise, compared to infants under 12 months. SIGNIFICANCE: Speech-in-noise electrophysiology can be easily recorded across infancy and provides unique insights into developmental differences that tests conducted in quiet may miss.
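A minimal sketch of the two measures described above, FFT-based spectral power of the FFR and the quiet-to-noise response correlation used as an index of resistance to noise, is shown below with placeholder waveforms; the windowing and F0 band are assumptions.

```python
import numpy as np

def ffr_spectrum(response, fs):
    """One-sided amplitude spectrum of an averaged FFR waveform."""
    windowed = response * np.hanning(len(response))
    spec = np.abs(np.fft.rfft(windowed)) / len(response)
    freqs = np.fft.rfftfreq(len(response), 1 / fs)
    return freqs, spec

def quiet_noise_correlation(resp_quiet, resp_noise):
    """Pearson correlation between quiet and noise FFRs; higher values
    indicate encoding that is more resistant to background noise."""
    return np.corrcoef(resp_quiet, resp_noise)[0, 1]

# Toy usage: FFR-like 100 Hz responses, with the noise-condition response degraded
fs = 8000
t = np.arange(0, 0.2, 1 / fs)
quiet = np.sin(2 * np.pi * 100 * t) + 0.2 * np.random.randn(t.size)
noise = 0.6 * np.sin(2 * np.pi * 100 * t) + 0.8 * np.random.randn(t.size)

freqs, spec = ffr_spectrum(quiet, fs)
f0_band = (freqs >= 90) & (freqs <= 110)
print(f"spectral amplitude near F0: {spec[f0_band].max():.3f}")
print(f"quiet-noise correlation: {quiet_noise_correlation(quiet, noise):.2f}")
```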
Subject(s)
Brain Stem/physiology, Child Development, Brainstem Auditory Evoked Potentials, Noise, Speech Perception, Brain Stem/growth & development, Female, Humans, Infant, Male
ABSTRACT
Language acquisition in infants is driven by on-going neural plasticity that is acutely sensitive to environmental acoustic cues. Recent studies showed that attention-based experience with non-linguistic, temporally-modulated auditory stimuli sharpens cortical responses. A previous ERP study from this laboratory showed that interactive auditory experience via behavior-based feedback (AEx), over a 6-week period from 4- to 7-months-of-age, confers a processing advantage, compared to passive auditory exposure (PEx) or maturation alone (Naïve Control, NC). Here, we provide a follow-up investigation of the underlying neural oscillatory patterns in these three groups. In AEx infants, Standard stimuli with invariant frequency (STD) elicited greater Theta-band (4-6 Hz) activity in Right Auditory Cortex (RAC), as compared to NC infants, and Deviant stimuli with rapid frequency change (DEV) elicited larger responses in Left Auditory Cortex (LAC). PEx and NC counterparts showed less-mature bilateral patterns. AEx infants also displayed stronger Gamma (33-37 Hz) activity in the LAC during DEV discrimination, compared to NCs, while NC and PEx groups demonstrated bilateral activity in this band, if at all. This suggests that interactive acoustic experience with non-linguistic stimuli can promote a distinct, robust and precise cortical pattern during rapid auditory processing, perhaps reflecting mechanisms that support fine-tuning of early acoustic mapping.
Subject(s)
Brain/physiology, Auditory Evoked Potentials/physiology, Neuronal Plasticity/physiology, Auditory Cortex/physiology, Brain Mapping/methods, Electroencephalography/methods, Female, Humans, Infant, Male
ABSTRACT
The frequency-following response (FFR) is a measure of the brain's periodic sound encoding. It is of increasing importance for studying the human auditory nervous system due to numerous associations with auditory cognition and dysfunction. Although the FFR is widely interpreted as originating from brainstem nuclei, a recent study using MEG suggested that there is also a right-lateralized contribution from the auditory cortex at the fundamental frequency (Coffey et al., 2016b). Our objectives in the present work were to validate and better localize this result using a completely different neuroimaging modality and to document the relationships between the FFR, the onset response, and cortical activity. Using a combination of EEG, fMRI, and diffusion-weighted imaging, we show that activity in the right auditory cortex is related to individual differences in FFR-fundamental frequency (f0) strength, a finding that was replicated with two independent stimulus sets, with and without acoustic energy at the fundamental frequency. We demonstrate a dissociation between this FFR-f0-sensitive response in the right auditory cortex and an area in the left auditory cortex that is sensitive to individual differences in the timing of the initial response to sound onset. Relationships to timing and their lateralization are supported by parallels in the microstructure of the underlying white matter, implicating a mechanism involving neural conduction efficiency. These data confirm that the FFR has a cortical contribution and suggest ways in which auditory neuroscience may be advanced by connecting early sound representation to measures of higher-level sound processing and cognitive function. SIGNIFICANCE STATEMENT: The frequency-following response (FFR) is an EEG signal that is used to explore how the auditory system encodes temporal regularities in sound and is related to differences in auditory function between individuals. It is known that brainstem nuclei contribute to the FFR, but recent findings of an additional cortical source are more controversial. Here, we use fMRI to validate and extend the prediction from MEG data of a right auditory cortex contribution to the FFR. We also show that FFR-related cortical activity is dissociable from activity related to the latency of the response to sound onset, which is found in left auditory cortex. The findings provide a clearer picture of cortical processes for analysis of sound features.
Subject(s)
Acoustic Stimulation/methods, Auditory Cortex/physiology, Auditory Perception/physiology, Electroencephalography/methods, Magnetic Resonance Imaging/methods, Music, Adult, Brainstem Auditory Evoked Potentials/physiology, Female, Humans, Male, Random Allocation, Young Adult
ABSTRACT
Background: Perceptual and electrophysiological studies have found reduced speech discrimination in quiet and noisy environments, delayed neural timing, decreased neural synchrony, and decreased temporal processing ability in older adults, even those with normal hearing. However, recent studies have also demonstrated that language experience and auditory training enhance the temporal dynamics of sound encoding in the auditory brainstem response (ABR). The purpose of this study was to explore pitch processing ability at the brainstem level in an aging population with a tonal language background. Method: Mandarin-speaking younger (n = 12) and older (n = 12) adults were recruited for this study. All participants had normal audiometric test results and normal suprathreshold click-evoked ABRs. To record frequency following responses (FFRs) elicited by Mandarin lexical tones, two Mandarin Chinese syllables with different fundamental frequency pitch contours (Flat Tone and Falling Tone) were presented at 70 dB SPL. Fundamental frequency (f0) contours were extracted from both the stimulus and the individual brainstem responses and compared. Two indices were used to examine different aspects of pitch processing ability at the brainstem level: Pitch Strength and Pitch Correlation. Results: Lexical-tone-elicited FFRs were overall weaker in the older adult group compared to their younger adult counterparts. As measured by Pitch Strength and Pitch Correlation, statistically significant group differences were only found when the tone with a falling f0 (Falling Tone) was used as the stimulus. Conclusion: Results of this study demonstrate that in a tonal-language-speaking population, pitch processing ability at the brainstem level of older adults is not as strong and robust as that of their younger counterparts. Findings of this study are consistent with previous reports on brainstem responses of older adults whose native language is English. On the other hand, lexical-tone-elicited FFRs have been shown to correlate with the length of language exposure; the Mandarin-speaking older adults' long-term exposure may therefore have partially counteracted the negative impact of aging, helping to maintain, or at least slow, the degradation of their temporal processing capacity at the brainstem level.
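One way to compute a Pitch Correlation index like the one described above is to extract f0 contours from the stimulus and the FFR with a short-time autocorrelation pitch tracker and then correlate the two contours, as sketched below. The frame length, step size, and f0 search range are assumptions, not the study's settings.

```python
import numpy as np

def f0_contour(x, fs, frame_ms=40, step_ms=10, f0_range=(80, 250)):
    """Short-time autocorrelation pitch track: one f0 estimate per frame."""
    frame, step = int(fs * frame_ms / 1000), int(fs * step_ms / 1000)
    lo, hi = int(fs / f0_range[1]), int(fs / f0_range[0])   # lag search range
    contour = []
    for start in range(0, len(x) - frame, step):
        seg = x[start:start + frame] - x[start:start + frame].mean()
        ac = np.correlate(seg, seg, mode="full")[frame - 1:]
        lag = lo + np.argmax(ac[lo:hi + 1])
        contour.append(fs / lag)
    return np.array(contour)

# Toy usage: a falling-tone stimulus (180 -> 120 Hz) and a noisier "response"
fs = 8000
t = np.arange(0, 0.5, 1 / fs)
f_inst = np.linspace(180, 120, t.size)                 # falling f0
phase = 2 * np.pi * np.cumsum(f_inst) / fs
stim = np.sin(phase)
resp = 0.5 * np.sin(phase) + 0.5 * np.random.randn(t.size)

pitch_corr = np.corrcoef(f0_contour(stim, fs), f0_contour(resp, fs))[0, 1]
print(f"Pitch Correlation (stimulus vs response f0 contour): {pitch_corr:.2f}")
```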
ABSTRACT
The brain's fascinating ability to adapt its internal neural dynamics to the temporal structure of the sensory environment is becoming increasingly clear. It is thought to be metabolically beneficial to align ongoing oscillatory activity to the relevant inputs in a predictable stream, so that they will enter at optimal processing phases of the spontaneously occurring rhythmic excitability fluctuations. However, some contexts have a more predictable temporal structure than others. Here, we tested the hypothesis that the processing of rhythmic sounds is more efficient than the processing of irregularly timed sounds. To do this, we simultaneously measured functional magnetic resonance imaging (fMRI) and electroencephalograms (EEG) while participants detected oddball target sounds in alternating blocks of rhythmic (i.e., with equal inter-stimulus intervals) or random (i.e., with randomly varied inter-stimulus intervals) tone sequences. Behaviorally, participants detected target sounds faster and more accurately when they were embedded in rhythmic streams. The fMRI response in the auditory cortex was stronger during random compared to rhythmic tone sequence processing. Simultaneously recorded N1 responses showed larger peak amplitudes and longer latencies for tones in the random (vs. the rhythmic) streams. These results reveal complementary evidence for more efficient neural and perceptual processing during temporally predictable sensory contexts.
ABSTRACT
The functional significance of the α rhythm is widely debated. It has been proposed that α reflects sensory inhibition and/or a temporal sampling or "parsing" mechanism. There is also continuing disagreement over the more fundamental questions of which cortical layers generate α rhythms and whether the generation of α is equivalent across sensory systems. To address these latter questions, we analyzed laminar profiles of local field potentials (LFPs) and concomitant multiunit activity (MUA) from macaque V1, S1, and A1 during both spontaneous activity and sensory stimulation. Current source density (CSD) analysis of laminar LFP profiles revealed α current generators in the supragranular, granular, and infragranular layers. MUA phase-locked to local current source/sink configurations confirmed that α rhythms index local neuronal excitability fluctuations. CSD-defined α generators were strongest in the supragranular layers, whereas LFP α power was greatest in the infragranular layers, consistent with some of the previous reports. The discrepancy between LFP and CSD findings appears to be attributable to contamination of the infragranular LFP signal by activity that is volume-conducted from the stronger supragranular α generators. The presence of α generators across cortical depth in V1, S1, and A1 suggests the involvement of α in feedforward as well as feedback processes and is consistent with the view that α rhythms, perhaps in addition to a role in sensory inhibition, may parse sensory input streams in a way that facilitates communication across cortical areas. SIGNIFICANCE STATEMENT: The α rhythm is thought to reflect sensory inhibition and/or a temporal parsing mechanism. Here, we address two outstanding issues: (1) whether α is a general mechanism across sensory systems and (2) which cortical layers generate α oscillations. Using intracranial recordings from macaque V1, S1, and A1, we show α band activity with a similar spectral and laminar profile in each of these sensory areas. Furthermore, α generators were present in each of the cortical layers, with a strong source in superficial layers. We argue that previous findings, locating α generators exclusively in the deeper layers, were biased because of use of less locally specific local field potential measurements. The laminar distribution of α band activity appears more complex than generally assumed.
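The CSD analysis referred to above is conventionally estimated as the sign-inverted, conductivity-scaled second spatial derivative of the laminar LFP profile; a minimal finite-difference sketch is shown below, with the electrode spacing and conductivity as placeholder values rather than the recording parameters used in the study.

```python
import numpy as np

def csd_second_derivative(lfp, spacing_mm=0.1, sigma=0.3):
    """Current source density from a laminar LFP profile.

    lfp        : array (n_channels, n_times), channels ordered by depth
    spacing_mm : inter-contact spacing of the laminar probe (assumed)
    sigma      : tissue conductivity in S/m (placeholder value)

    CSD ~ -sigma * d2(phi)/dz2, approximated with a three-point finite
    difference; the two outermost channels are lost.
    """
    d2 = lfp[:-2] - 2 * lfp[1:-1] + lfp[2:]
    return -sigma * d2 / (spacing_mm ** 2)

# Toy usage: a laminar profile with a negative deflection at mid depths
depths, times = 16, 500
lfp = np.zeros((depths, times))
lfp[7:9] = -np.hanning(times)             # "sink-like" deflection
csd = csd_second_derivative(lfp)
print("CSD profile shape:", csd.shape)     # (depths - 2, times)
```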
Subject(s)
Brain Mapping, Evoked Potentials/physiology, Neocortex/anatomy & histology, Neocortex/physiology, Nerve Net/physiology, Periodicity, Analysis of Variance, Animals, Female, Macaca, Male, Physical Stimulation, Spectrum Analysis
ABSTRACT
Rapid auditory processing and acoustic change detection abilities play a critical role in allowing human infants to efficiently process the fine spectral and temporal changes that are characteristic of human language. These abilities lay the foundation for effective language acquisition, allowing infants to home in on the sounds of their native language. Invasive procedures in animals and scalp-recorded potentials from human adults suggest that simultaneous, rhythmic activity (oscillations) between and within brain regions is fundamental to sensory development, determining the resolution with which incoming stimuli are parsed. At this time, little is known about oscillatory dynamics in human infant development. However, animal neurophysiology and adult EEG data provide the basis for a strong hypothesis that rapid auditory processing in infants is mediated by oscillatory synchrony in discrete frequency bands. In order to investigate this, 128-channel, high-density EEG responses of 4-month-old infants to frequency change in tone pairs, presented in two rate conditions (Rapid: 70 msec ISI and Control: 300 msec ISI), were examined. To determine the frequency band and magnitude of activity, auditory evoked response averages were first co-registered with age-appropriate brain templates. Next, the principal components of the response were identified and localized using a two-dipole model of brain activity. Single-trial analysis of oscillatory power showed a robust index of frequency change processing in bursts of Theta-band (3-8 Hz) activity in both right and left auditory cortices, with left activation more prominent in the Rapid condition. These methods have produced data that are not only some of the first reported evoked oscillation analyses in infants, but are also, importantly, the product of a well-established method of recording and analyzing clean, meticulously collected infant EEG and ERPs. In this article, we describe our method for infant EEG net application, recording, dynamic brain response analysis, and representative results.
Subject(s)
Brain/physiology, Auditory Evoked Potentials/physiology, Auditory Cortex/physiology, Brain Mapping/methods, Electroencephalography/methods, Humans, Infant
ABSTRACT
Studies over several decades have identified many of the neuronal substrates of music perception by pursuing pitch and rhythm perception separately. Here, we address the question of how these mechanisms interact, starting with the observation that the peripheral pathways of the so-called "Core" and "Matrix" thalamocortical system provide the anatomical bases for tone and rhythm channels. We then examine the hypothesis that these specialized inputs integrate acoustic content within rhythm context in auditory cortex using classical types of "driving" and "modulatory" mechanisms. This hypothesis provides a framework for deriving testable predictions about the early stages of music processing. Furthermore, because thalamocortical circuits are shared by speech and music processing, such a model provides concrete implications for how music experience contributes to the development of robust speech encoding mechanisms.
Subject(s)
Cerebral Cortex/physiology, Music, Thalamus/physiology, Acoustic Stimulation, Animals, Auditory Cortex/physiology, Basal Ganglia/metabolism, Brain/physiology, Cats, Humans, Macaca, Neurological Models, Neurons, Oscillometry, Pitch Perception, Rats, Somatosensory Cortex/physiology, Speech, Time Factors
ABSTRACT
Young infants discriminate phonetically relevant speech contrasts in a universal manner, that is, similarly across languages. This ability fades by 12 months of age as the brain builds language-specific phonemic maps and increasingly responds preferentially to the infant's native language. However, the neural mechanisms that underlie the development of infant preference for native over non-native phonemes remain unclear. Since gamma-band power is known to signal infants' preference for native language rhythm, we hypothesized that it might also indicate preference for native phonemes. Using high-density electroencephalogram/event-related potential (EEG/ERP) recordings and source-localization techniques to identify and locate the ERP generators, we examined changes in brain oscillations while 6-month-old human infants from monolingual English settings listened to English and Spanish syllable contrasts. Neural dynamics were investigated via single-trial analysis of the temporal-spectral composition of brain responses at source level. Increases in 4-6 Hz (theta) power and in phase synchronization at 2-4 Hz (delta/theta) were found to characterize infants' evoked responses to discrimination of native/non-native syllable contrasts mostly in the left auditory source. However, selective enhancement of induced gamma oscillations in the area of anterior cingulate cortex was seen only during native contrast discrimination. These results suggest that gamma oscillations support syllable discrimination in the earliest stages of language acquisition, particularly during the period in which infants begin to develop preferential processing for linguistically relevant phonemic features in their environment. Our results also suggest that by 6 months of age, infants already treat native phonemic contrasts differently from non-native, implying that perceptual specialization and establishment of enduring phonemic memory representations have been initiated.
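A phase-synchronization measure of the kind reported above can be sketched as inter-trial phase coherence (ITPC): single-trial phase is extracted with a complex Morlet wavelet at a low frequency, and the consistency of that phase across trials is summarized on a 0-1 scale. The frequency, wavelet width, and toy trials below are assumptions, not the authors' pipeline.

```python
import numpy as np
from scipy.signal import fftconvolve

def itpc(trials, fs, freq, n_cycles=3):
    """Inter-trial phase coherence at one frequency.

    trials : array (n_trials, n_times)
    Returns an array (n_times,) ranging from 0 (random phase across trials)
    to 1 (identical phase on every trial).
    """
    sigma_t = n_cycles / (2 * np.pi * freq)
    t = np.arange(-3 * sigma_t, 3 * sigma_t, 1 / fs)        # wavelet support
    wavelet = np.exp(2j * np.pi * freq * t) * np.exp(-t**2 / (2 * sigma_t**2))
    phases = np.array([
        np.angle(fftconvolve(trial, wavelet, mode="same")) for trial in trials
    ])
    return np.abs(np.mean(np.exp(1j * phases), axis=0))

# Toy usage: 40 trials with a phase-consistent 3 Hz component plus noise
fs = 250
t = np.arange(0, 1.5, 1 / fs)
trials = 0.5 * np.sin(2 * np.pi * 3 * t) + np.random.randn(40, t.size)
print(f"peak 3 Hz ITPC: {itpc(trials, fs, freq=3).max():.2f}")
```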
Subject(s)
Electroencephalography, Language Development, Language, Speech Perception/physiology, Analysis of Variance, Brain/physiology, Brain Mapping, Statistical Data Interpretation, EEG Phase Synchronization, England, Auditory Evoked Potentials/physiology, Female, Humans, Computer-Assisted Image Processing, Infant, Newborn Infant, Magnetic Resonance Imaging, Male, Phonetics, Theta Rhythm/physiology
ABSTRACT
Rapid auditory processing and auditory change detection abilities are crucial aspects of speech and language development, particularly in the first year of life. Animal models and adult studies suggest that oscillatory synchrony, and in particular low-frequency oscillations play key roles in this process. We hypothesize that infant perception of rapid pitch and timing changes is mediated, at least in part, by oscillatory mechanisms. Using event-related potentials (ERPs), source localization and time-frequency analysis of event-related oscillations (EROs), we examined the neural substrates of rapid auditory processing in 4-month-olds. During a standard oddball paradigm, infants listened to tone pairs with invariant standard (STD, 800-800 Hz) and variant deviant (DEV, 800-1200 Hz) pitch. STD and DEV tone pairs were first presented in a block with a short inter-stimulus interval (ISI) (Rapid Rate: 70 ms ISI), followed by a block of stimuli with a longer ISI (Control Rate: 300 ms ISI). Results showed greater ERP peak amplitude in response to the DEV tone in both conditions and later and larger peaks during Rapid Rate presentation, compared to the Control condition. Sources of neural activity, localized to right and left auditory regions, showed larger and faster activation in the right hemisphere for both rate conditions. Time-frequency analysis of the source activity revealed clusters of theta band enhancement to the DEV tone in right auditory cortex for both conditions. Left auditory activity was enhanced only during Rapid Rate presentation. These data suggest that local low-frequency oscillatory synchrony underlies rapid processing and can robustly index auditory perception in young infants. Furthermore, left hemisphere recruitment during rapid frequency change discrimination suggests a difference in the spectral and temporal resolution of right and left hemispheres at a very young age.
Subject(s)
Brain Mapping, Brain/physiology, Electroencephalography, Evoked Potentials/physiology, Acoustic Stimulation, Analysis of Variance, Female, Functional Laterality/physiology, Humans, Infant, Magnetic Resonance Imaging, Male, Oscillometry, Spectrum Analysis, Time Factors
ABSTRACT
Although we have convincing evidence that attention to auditory stimuli modulates neuronal responses at or before the level of primary auditory cortex (A1), the underlying physiological mechanisms are unknown. We found that attending to rhythmic auditory streams resulted in the entrainment of ongoing oscillatory activity reflecting rhythmic excitability fluctuations in A1. Strikingly, although the rhythm of the entrained oscillations in A1 neuronal ensembles reflected the temporal structure of the attended stream, the phase depended on the attended frequency content. Counter-phase entrainment across differently tuned A1 regions resulted in both the amplification and sharpening of responses at attended time points, in essence acting as a spectrotemporal filter mechanism. Our data suggest that selective attention generates a dynamically evolving model of attended auditory stimulus streams in the form of modulatory subthreshold oscillations across tonotopically organized neuronal ensembles in A1 that enhances the representation of attended stimuli.
Subject(s)
Attention/physiology, Auditory Cortex/physiology, Auditory Perception/physiology, Neurons/physiology, Periodicity, Acoustic Stimulation/methods, Animals, Electroencephalography/methods, Auditory Evoked Potentials/physiology, Female, Macaca, Male
ABSTRACT
Most auditory events in nature are accompanied by non-auditory signals, such as a view of the speaker's face during face-to-face communication or the vibration of a string during a musical performance. While it is known that accompanying visual and somatosensory signals can benefit auditory perception, often by making the sound seem louder, the specific neural bases for sensory amplification are still debated. In this review, we want to deal with what we regard as confusion on two topics that are crucial to our understanding of multisensory integration mechanisms in auditory cortex: (1) Anatomical Underpinnings (e.g., what circuits underlie multisensory convergence), and (2) Temporal Dynamics (e.g., what time windows of integration are physiologically feasible). The combined evidence on multisensory structure and function in auditory cortex advances the emerging view of the relationship between perception and low level multisensory integration. In fact, it seems that the question is no longer whether low level, putatively unisensory cortex is accessible to multisensory influences, but how.
Subject(s)
Auditory Cortex/anatomy & histology, Auditory Cortex/physiology, Neurons/physiology, Acoustic Stimulation, Auditory Pathways/physiology, Auditory Perception/physiology, Brain Mapping, Humans, Neurological Models, Neurons/metabolism, Oscillometry/methods, Perception, Somatosensory Cortex/physiology, Time Factors, Visual Pathways, Visual Perception/physiology
ABSTRACT
OBJECTIVE: To examine the impact of hearing loss (HL) on audiovisual (AV) processing in the aging population. We hypothesized that age-related HL would have a pervasive effect on sensory processing, extending beyond the auditory domain. Specifically, we predicted that decreased auditory input to the neural system, in the form of HL over time, would have deleterious effects on multisensory mechanisms. DESIGN: This study compared AV processing between older adults with normal hearing (N = 12) and older adults with mild to moderate sensorineural HL (N = 12). To do this, we recorded cortical evoked potentials that were elicited by watching and listening to recordings of a speaker saying the syllable "bi." Stimuli were presented in three conditions: when hearing the syllable "bi" (auditory), when viewing a person say "bi" (visual), and when seeing and hearing the syllables simultaneously (AV). Presentation level of the auditory stimulus was set to +30 dB SL for each listener to equalize auditory input across groups. RESULTS: In the AV condition, the normal-hearing group showed a clear and consistent decrease in P1 and N1 latencies as well as a reduction in P1 amplitude compared with the sum of the unimodal components (auditory + visual). These integration effects were absent or less consistent in HL participants. CONCLUSIONS: Despite controlling for auditory sensation level, visual influence on auditory processing was significantly less pronounced in HL individuals compared with controls, indicating diminished AV integration in this population. These results demonstrate that HL has a deleterious effect on how older adults combine what they see and hear. Although auditory amplification vastly improves the communication abilities in most hearing impaired individuals, the associated atrophy of multisensory mechanisms may contribute to a patient's difficulty in everyday settings. Our findings and related studies emphasize the potential value of multimodal tasks and stimuli in the assessment and rehabilitation of hearing impairments.
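The additive-model comparison described above (the AV response versus the sum of the unimodal auditory and visual responses) can be sketched as below, with P1 and N1 peaks measured in assumed latency windows on placeholder waveforms; the windows, amplitudes, and latencies are illustrative only, not the recorded data.

```python
import numpy as np

def peak_in_window(erp, times, t_min, t_max, polarity=+1):
    """Latency (ms) and amplitude of the largest positive (P) or negative (N)
    peak inside a search window."""
    mask = (times >= t_min) & (times <= t_max)
    seg = polarity * erp[mask]
    idx = np.argmax(seg)
    return times[mask][idx], polarity * seg[idx]

# Placeholder ERPs (microvolts) on a -100..500 ms epoch, 1 ms resolution
times = np.arange(-100, 500)

def gauss(mu, sd):
    return np.exp(-0.5 * ((times - mu) / sd) ** 2)

erp_a  = 2.0 * gauss(60, 15) - 3.0 * gauss(110, 20)   # auditory-alone
erp_v  = 0.8 * gauss(130, 30)                         # visual-alone
erp_av = 1.6 * gauss(50, 15) - 2.6 * gauss(100, 20)   # audiovisual

# Additive model: compare AV against the sum of the unimodal responses
erp_sum = erp_a + erp_v
for name, erp in [("AV", erp_av), ("A+V", erp_sum)]:
    p1_lat, p1_amp = peak_in_window(erp, times, 40, 90, polarity=+1)
    n1_lat, n1_amp = peak_in_window(erp, times, 80, 160, polarity=-1)
    print(f"{name}: P1 {p1_lat} ms / {p1_amp:.2f} uV, N1 {n1_lat} ms / {n1_amp:.2f} uV")
```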