Results 1 - 20 of 1,040
1.
Proc Natl Acad Sci U S A ; 120(2): e2212120120, 2023 01 10.
Article in English | MEDLINE | ID: mdl-36598952

ABSTRACT

The process by which sensory evidence contributes to perceptual choices requires an understanding of its transformation into decision variables. Here, we address this issue by evaluating the neural representation of acoustic information in the auditory cortex-recipient parietal cortex, while gerbils either performed a two-alternative forced-choice auditory discrimination task or while they passively listened to identical acoustic stimuli. During task engagement, stimulus identity decoding performance from simultaneously recorded parietal neurons significantly correlated with psychometric sensitivity. In contrast, decoding performance during passive listening was significantly reduced. Principal component and geometric analyses revealed the emergence of low-dimensional encoding of linearly separable manifolds with respect to stimulus identity and decision, but only during task engagement. These findings confirm that the parietal cortex mediates a transition of acoustic representations into decision-related variables. Finally, using a clustering analysis, we identified three functionally distinct subpopulations of neurons that each encoded task-relevant information during separate temporal segments of a trial. Taken together, our findings demonstrate how parietal cortex neurons integrate and transform encoded auditory information to guide sound-driven perceptual decisions.
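The population-decoding step described in this abstract can be sketched with synthetic data and a cross-validated linear classifier (all parameters are hypothetical; this is not the study's code or dataset):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic spike counts: 200 trials x 50 neurons, two stimulus identities.
# A small subset of neurons is weakly tuned to stimulus identity.
n_trials, n_neurons = 200, 50
labels = rng.integers(0, 2, n_trials)           # stimulus identity per trial
rates = rng.poisson(5.0, (n_trials, n_neurons)).astype(float)
rates[:, :10] += 2.0 * labels[:, None]          # tuned subpopulation

# Cross-validated decoding of stimulus identity from population activity.
decoder = LogisticRegression(max_iter=1000)
acc = cross_val_score(decoder, rates, labels, cv=5).mean()
print(f"decoding accuracy: {acc:.2f}")          # well above the 0.5 chance level
```

Per-session accuracies computed this way are what would then be correlated with psychometric sensitivity.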


Subjects
Auditory Cortex, Parietal Lobe, Animals, Parietal Lobe/physiology, Auditory Perception/physiology, Auditory Cortex/physiology, Acoustic Stimulation, Acoustics, Gerbillinae
2.
J Neurosci ; 44(15)2024 Apr 10.
Article in English | MEDLINE | ID: mdl-38388426

ABSTRACT

Real-world listening settings often consist of multiple concurrent sound streams. To limit perceptual interference during selective listening, the auditory system segregates and filters the relevant sensory input. Previous work provided evidence that the auditory cortex is critically involved in this process and selectively gates attended input toward subsequent processing stages. We studied at which level of auditory cortex processing this filtering of attended information occurs using functional magnetic resonance imaging (fMRI) and a naturalistic selective listening task. Forty-five human listeners (of either sex) attended to one of two continuous speech streams, presented either concurrently or in isolation. Functional data were analyzed using an inter-subject analysis to assess stimulus-specific components of ongoing auditory cortex activity. Our results suggest that stimulus-related activity in the primary auditory cortex and the adjacent planum temporale are hardly affected by attention, whereas brain responses at higher stages of the auditory cortex processing hierarchy become progressively more selective for the attended input. Consistent with these findings, a complementary analysis of stimulus-driven functional connectivity further demonstrated that information on the to-be-ignored speech stream is shared between the primary auditory cortex and the planum temporale but largely fails to reach higher processing stages. Our findings suggest that the neural processing of ignored speech cannot be effectively suppressed at the level of early cortical processing of acoustic features but is gradually attenuated once the competing speech streams are fully segregated.
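The inter-subject analysis rests on correlating each listener's stimulus-driven activity with that of the rest of the group; a minimal leave-one-out sketch on synthetic time courses (all values illustrative, not the study's data) might look like:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic time courses for 10 subjects hearing the same stimulus:
# a shared stimulus-driven component plus subject-specific noise.
n_subjects, n_timepoints = 10, 300
shared = rng.standard_normal(n_timepoints)
data = 0.6 * shared + 0.8 * rng.standard_normal((n_subjects, n_timepoints))

# Leave-one-out inter-subject correlation: correlate each subject with
# the average of all remaining subjects.
isc = np.empty(n_subjects)
for s in range(n_subjects):
    others = np.delete(data, s, axis=0).mean(axis=0)
    isc[s] = np.corrcoef(data[s], others)[0, 1]

print(f"mean ISC: {isc.mean():.2f}")  # positive when activity is stimulus-driven
```

Because only the stimulus is shared across subjects, high ISC isolates stimulus-specific components of ongoing activity.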


Subjects
Auditory Cortex, Speech Perception, Humans, Auditory Cortex/diagnostic imaging, Auditory Cortex/physiology, Speech Perception/physiology, Temporal Lobe, Magnetic Resonance Imaging, Attention/physiology, Auditory Perception/physiology, Acoustic Stimulation
3.
Cereb Cortex ; 34(7)2024 Jul 03.
Article in English | MEDLINE | ID: mdl-39051660

ABSTRACT

What is the function of auditory hemispheric asymmetry? We propose that the identification of sound sources relies on the asymmetric processing of two complementary and perceptually relevant acoustic invariants: actions and objects. In a large dataset of environmental sounds, we observed that temporal and spectral modulations display only weak covariation. We then synthesized auditory stimuli by simulating various actions (frictions) occurring on different objects (solid surfaces). Behaviorally, discrimination of actions relies on temporal modulations, while discrimination of objects relies on spectral modulations. Functional magnetic resonance imaging data showed that actions and objects are decoded in the left and right hemispheres, respectively, in bilateral superior temporal and left inferior frontal regions. This asymmetry reflects a generic differential processing-through differential neural sensitivity to temporal and spectral modulations present in environmental sounds-that supports the efficient categorization of actions and objects. These results support an ecologically valid framework of the functional role of auditory brain asymmetry.


Subjects
Acoustic Stimulation, Auditory Perception, Functional Laterality, Magnetic Resonance Imaging, Humans, Male, Female, Magnetic Resonance Imaging/methods, Functional Laterality/physiology, Adult, Acoustic Stimulation/methods, Auditory Perception/physiology, Young Adult, Brain Mapping/methods, Auditory Cortex/physiology, Auditory Cortex/diagnostic imaging
4.
Cereb Cortex ; 34(4)2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38629796

ABSTRACT

Neuroimaging studies have shown that the neural representation of imagery closely resembles that of the corresponding perceptual modality; however, the undeniably different subjective experiences of perception and imagery point to clear differences in their neural mechanisms, which cannot be explained by the simple theory that imagery is a form of weak perception. Given the importance of functional integration across brain regions, we performed correlation analyses of neural activity in brain regions jointly activated by auditory imagery and perception and obtained brain functional connectivity (FC) networks with a consistent structure. However, connectivity between areas of the superior temporal gyrus and the right precentral cortex was significantly higher during auditory perception than during imagery. In addition, modality decoding based on FC patterns showed that the FC networks of auditory imagery and perception are reliably distinguishable. Voxel-level FC analysis further identified the regions in which voxels showed significant connectivity differences between the two modalities. This study characterizes both the commonalities and the differences between auditory imagery and perception in terms of inter-regional information interaction, providing a new perspective for investigating the neural mechanisms of different modal information representations.


Subjects
Auditory Cortex, Brain Mapping, Brain Mapping/methods, Imagination, Brain/diagnostic imaging, Auditory Perception, Cerebral Cortex, Magnetic Resonance Imaging/methods, Auditory Cortex/diagnostic imaging
5.
J Neurosci ; 43(20): 3687-3695, 2023 05 17.
Article in English | MEDLINE | ID: mdl-37028932

ABSTRACT

Modulations in both amplitude and frequency are prevalent in natural sounds and are critical in defining their properties. Humans are exquisitely sensitive to frequency modulation (FM) at the slow modulation rates and low carrier frequencies that are common in speech and music. This enhanced sensitivity to slow-rate and low-frequency FM has been widely believed to reflect precise, stimulus-driven phase locking to temporal fine structure in the auditory nerve. At faster modulation rates and/or higher carrier frequencies, FM is instead thought to be coded by coarser frequency-to-place mapping, where FM is converted to amplitude modulation (AM) via cochlear filtering. Here, we show that patterns of human FM perception that have classically been explained by limits in peripheral temporal coding are instead better accounted for by constraints in the central processing of fundamental frequency (F0) or pitch. We measured FM detection in male and female humans using harmonic complex tones with an F0 within the range of musical pitch but with resolved harmonic components that were all above the putative limits of temporal phase locking (>8 kHz). Listeners were more sensitive to slow than fast FM rates, even though all components were beyond the limits of phase locking. In contrast, AM sensitivity remained better at faster than slower rates, regardless of carrier frequency. These findings demonstrate that classic trends in human FM sensitivity, previously attributed to auditory nerve phase locking, may instead reflect the constraints of a unitary code that operates at a more central level of processing.

SIGNIFICANCE STATEMENT Natural sounds involve dynamic frequency and amplitude fluctuations. Humans are particularly sensitive to frequency modulation (FM) at slow rates and low carrier frequencies, which are prevalent in speech and music. This sensitivity has been ascribed to encoding of stimulus temporal fine structure (TFS) via phase-locked auditory nerve activity. To test this long-standing theory, we measured FM sensitivity using complex tones with a low F0 but only high-frequency harmonics beyond the limits of phase locking. Dissociating the F0 from TFS showed that FM sensitivity is limited not by peripheral encoding of TFS but rather by central processing of F0, or pitch. The results suggest a unitary code for FM detection limited by more central constraints.
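The stimulus construction described here, harmonic components all above 8 kHz carrying a common frequency modulation of a low F0, can be sketched as follows (F0, FM rate, FM depth, and harmonic numbers are illustrative values, not those of the study):

```python
import numpy as np

fs = 48000                      # sample rate (Hz)
dur = 0.5                       # duration (s)
t = np.arange(int(fs * dur)) / fs

f0, fm_rate, fm_depth = 1000.0, 2.0, 50.0   # F0, FM rate, FM depth (Hz); illustrative
harmonics = range(9, 13)        # components at 9-12 kHz, above the ~8-kHz phase-locking limit

# Common FM: the instantaneous F0 is modulated sinusoidally, and every
# harmonic follows n times the integrated F0 phase.
inst_f0 = f0 + fm_depth * np.sin(2 * np.pi * fm_rate * t)
phase = 2 * np.pi * np.cumsum(inst_f0) / fs
stimulus = sum(np.sin(n * phase) for n in harmonics)
stimulus = stimulus / np.max(np.abs(stimulus))   # normalize peak amplitude
```

All spectral energy lies above 8 kHz, yet the complex carries a common F0 contour whose modulation a listener can track.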


Subjects
Cochlear Nerve, Music, Male, Humans, Female, Cochlear Nerve/physiology, Cochlea/physiology, Sound, Speech, Acoustic Stimulation
6.
Hum Brain Mapp ; 45(2): e26572, 2024 Feb 01.
Article in English | MEDLINE | ID: mdl-38339905

ABSTRACT

Tau rhythms are sound-responsive alpha band (~8-13 Hz) oscillations generated largely within auditory areas of the superior temporal gyri. Studies of tau have mostly employed magnetoencephalography or intracranial recording because of tau's elusiveness in the electroencephalogram. Here, we demonstrate that independent component analysis (ICA) decomposition can be an effective way to identify tau sources and study tau source activities in EEG recordings. Subjects (N = 18) were passively exposed to complex acoustic stimuli while the EEG was recorded from 68 electrodes across the scalp. Each subject's data were split into 60 parallel processing pipelines combining five high-pass filter settings (passbands of 0.1, 0.5, 1, 2, and 4 Hz), three low-pass filter settings (25, 50, and 100 Hz), and four ICA algorithms (fastICA, infomax, adaptive mixture ICA [AMICA], and multi-model AMICA [mAMICA]). Tau-related independent component (IC) processes were identified in these data as being localized near the superior temporal gyri with a spectral peak in the 8-13 Hz alpha band. These "tau ICs" showed alpha suppression during sound presentations that was not seen for other commonly observed IC clusters with spectral peaks in the alpha range (e.g., those associated with somatomotor mu, and parietal or occipital alpha). The choice of analysis parameters affected the likelihood of obtaining tau ICs from an ICA decomposition: lower high-pass cutoff frequencies yielded significantly fewer subjects with a tau IC than more aggressive high-pass filtering, the fastICA algorithm performed poorest in this regard, and mAMICA performed best. The best combination of filters and ICA model identified at least one tau IC in ~94% of the sample. Altogether, the data reveal close similarities between tau EEG IC dynamics and tau dynamics observed in MEG and intracranial data. Use of relatively aggressive high-pass filters and mAMICA decomposition should allow researchers to identify and characterize tau rhythms in a majority of their subjects. We believe adopting the ICA decomposition approach to EEG analysis can increase the rate and range of discoveries related to auditory responsive tau rhythms.
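The pipeline idea, high-pass filtering followed by ICA decomposition, can be illustrated on toy data with one of the compared algorithms (fastICA, via scikit-learn; the AMICA variants used in the study are separate tools not shown here, and all signal parameters below are invented):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt
from sklearn.decomposition import FastICA

rng = np.random.default_rng(2)
fs = 250.0
t = np.arange(0, 20, 1 / fs)

# Toy "EEG": two independent sources (a bursty 10-Hz alpha-like rhythm and
# heavy-tailed noise) mixed into four channels, plus slow drift on all channels.
alpha = np.sin(2 * np.pi * 10 * t) * (np.sin(2 * np.pi * 0.2 * t) > 0)
noise = rng.laplace(size=t.size)
mixing = rng.standard_normal((4, 2))
eeg = mixing @ np.vstack([alpha, noise]) + 0.5 * np.sin(2 * np.pi * 0.05 * t)

# High-pass filter at 1 Hz (one of the passband settings compared in the
# study), then ICA decomposition of the filtered channels.
sos = butter(4, 1.0, btype="highpass", fs=fs, output="sos")
eeg_hp = sosfiltfilt(sos, eeg, axis=1)
ica = FastICA(n_components=2, random_state=0)
components = ica.fit_transform(eeg_hp.T).T      # (n_components, n_times)
```

In a real analysis, candidate tau ICs would then be screened by their 8-13 Hz spectral peak, temporal localization, and alpha suppression during sounds.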


Subjects
Auditory Cortex, Brain Waves, Humans, Algorithms, Auditory Cortex/physiology, Magnetoencephalography
7.
Psychol Sci ; : 9567976241237737, 2024 Jun 18.
Article in English | MEDLINE | ID: mdl-38889285

ABSTRACT

Despite the intuitive feeling that our visual experience is coherent and comprehensive, the world is full of ambiguous and indeterminate information. Here we explore how the visual system might take advantage of ambient sounds to resolve this ambiguity. Young adults (ns = 20-30) were tasked with identifying an object slowly fading in through visual noise while a task-irrelevant sound played. We found that participants demanded more visual information when the auditory object was incongruent with the visual object compared to when it was not. Auditory scenes, which are only probabilistically related to specific objects, produced similar facilitation even for unheard objects (e.g., a bench). Notably, these effects traverse categorical and specific auditory and visual-processing domains as participants performed across-category and within-category visual tasks, underscoring cross-modal integration across multiple levels of perceptual processing. To summarize, our study reveals the importance of audiovisual interactions to support meaningful perceptual experiences in naturalistic settings.

8.
Psychol Sci ; 35(1): 34-54, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38019607

ABSTRACT

Much of what we know and love about music hinges on our ability to make successful predictions, which appears to be an intrinsically rewarding process. Yet the exact process by which learned predictions become pleasurable is unclear. Here we created novel melodies in an alternative scale different from any established musical culture to show how musical preference is generated de novo. Across nine studies (n = 1,185), adult participants learned to like more frequently presented items that adhered to this rapidly learned structure, suggesting that exposure and prediction errors both affected self-report liking ratings. Learning trajectories varied by music-reward sensitivity but were similar for U.S. and Chinese participants. Furthermore, functional MRI activity in auditory areas reflected prediction errors, whereas functional connectivity between auditory and medial prefrontal regions reflected both exposure and prediction errors. Collectively, results support predictive coding as a cognitive mechanism by which new musical sounds become rewarding.
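A prediction-error account of this kind can be illustrated with a toy bigram model of melodic expectation (the five-tone scale, exposure melody, and smoothing below are hypothetical, not the study's stimuli or model):

```python
import numpy as np
from collections import Counter

# Toy exposure melody over a 5-tone scale, encoded as scale-degree indices.
exposure = [0, 1, 2, 1, 0, 1, 2, 3, 2, 1, 0, 1, 2, 1, 2, 3, 4, 3, 2, 1]

# Learn transition statistics from exposure.
bigrams = Counter(zip(exposure, exposure[1:]))
context = Counter(exposure[:-1])

def surprisal(prev, nxt, alpha=1.0, n_tones=5):
    """-log2 P(next | prev) under add-alpha smoothing; a simple stand-in
    for a prediction-error signal."""
    p = (bigrams[(prev, nxt)] + alpha) / (context[prev] + alpha * n_tones)
    return -np.log2(p)

# A transition heard often during exposure is less surprising than a novel one.
print(surprisal(0, 1), surprisal(0, 4))
```

Under predictive-coding accounts, repeated exposure drives such surprisal values down for structure-conforming continuations, which is the quantity the fMRI prediction-error analyses track.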


Subjects
Music, Adult, Humans, Music/psychology, Auditory Perception, Learning, Emotions, Reward, Brain Mapping
9.
Article in English | MEDLINE | ID: mdl-38733407

ABSTRACT

Auditory streaming underlies a receiver's ability to organize complex mixtures of auditory input into distinct perceptual "streams" that represent different sound sources in the environment. During auditory streaming, sounds produced by the same source are integrated through time into a single, coherent auditory stream that is perceptually segregated from other concurrent sounds. Based on human psychoacoustic studies, one hypothesis regarding auditory streaming is that any sufficiently salient perceptual difference may lead to stream segregation. Here, we used the eastern grey treefrog, Hyla versicolor, to test this hypothesis in the context of vocal communication in a non-human animal. In this system, females choose their mate based on perceiving species-specific features of a male's pulsatile advertisement calls in social environments (choruses) characterized by mixtures of overlapping vocalizations. We employed an experimental paradigm from human psychoacoustics to design interleaved pulsatile sequences (ABAB…) that mimicked key features of the species' advertisement call, and in which alternating pulses differed in pulse rise time, which is a robust species recognition cue in eastern grey treefrogs. Using phonotaxis assays, we found no evidence that perceptually salient differences in pulse rise time promoted the segregation of interleaved pulse sequences into distinct auditory streams. These results do not support the hypothesis that any perceptually salient acoustic difference can be exploited as a cue for stream segregation in all species. We discuss these findings in the context of cues used for species recognition and auditory streaming.

10.
Exp Brain Res ; 2024 Jul 16.
Article in English | MEDLINE | ID: mdl-39012473

ABSTRACT

Music is based on various regularities, ranging from the repetition of physical sounds to theoretically organized harmony and counterpoint. How are multidimensional regularities processed when we listen to music? The present study focuses on the redundant signals effect (RSE) as a novel approach to untangling the relationship between these regularities in music. The RSE refers to the occurrence of a shorter reaction time (RT) when two or three signals are presented simultaneously than when only one of these signals is presented, and provides evidence that these signals are processed concurrently. In two experiments, chords that deviated from tonal (harmonic) and acoustic (intensity and timbre) regularities were presented occasionally in the final position of short chord sequences. The participants were asked to detect all deviant chords while withholding their responses to non-deviant chords (i.e., the Go/NoGo task). RSEs were observed in all double- and triple-deviant combinations, reflecting processing of multidimensional regularities. Further analyses suggested evidence of coactivation by separate perceptual modules in the combination of tonal and acoustic deviants, but not in the combination of two acoustic deviants. These results imply that tonal and acoustic regularities are different enough to be processed as two discrete pieces of information. Examining the underlying process of RSE may elucidate the relationship between multidimensional regularity processing in music.
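The RSE logic, and the race-model (coactivation) test commonly applied to it, can be sketched on synthetic reaction times (the distributions below are illustrative, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated reaction times (s) for single- and double-deviant trials.
rt_a = rng.normal(0.45, 0.06, 400)      # deviant in dimension A only
rt_b = rng.normal(0.47, 0.06, 400)      # deviant in dimension B only
rt_ab = rng.normal(0.38, 0.06, 400)     # redundant (double) deviant

def ecdf(sample, t):
    """Empirical CDF of the RT sample evaluated at each time point."""
    return np.mean(sample[:, None] <= t, axis=0)

t = np.linspace(0.2, 0.7, 101)

# Miller's race-model inequality: coactivation is implicated at time points
# where F_AB(t) exceeds F_A(t) + F_B(t), which no race of separate
# single-signal processes can produce.
violation = ecdf(rt_ab, t) - np.minimum(ecdf(rt_a, t) + ecdf(rt_b, t), 1.0)
print(f"max race-model violation: {violation.max():.3f}")
```

A faster redundant-condition RT distribution alone establishes an RSE; positive violations of the inequality are the stronger evidence for coactivation discussed in the abstract.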

11.
Brain ; 146(10): 4065-4076, 2023 10 03.
Article in English | MEDLINE | ID: mdl-37184986

ABSTRACT

Successful communication in daily life depends on accurate decoding of speech signals that are acoustically degraded by challenging listening conditions. This process presents the brain with a demanding computational task that is vulnerable to neurodegenerative pathologies. However, despite recent intense interest in the link between hearing impairment and dementia, comprehension of acoustically degraded speech in these diseases has been little studied. Here we addressed this issue in a cohort of 19 patients with typical Alzheimer's disease and 30 patients representing the three canonical syndromes of primary progressive aphasia (non-fluent/agrammatic variant primary progressive aphasia; semantic variant primary progressive aphasia; logopenic variant primary progressive aphasia), compared to 25 healthy age-matched controls. As a paradigm for the acoustically degraded speech signals of daily life, we used noise-vocoding: synthetic division of the speech signal into frequency channels constituted from amplitude-modulated white noise, such that fewer channels convey less spectrotemporal detail thereby reducing intelligibility. We investigated the impact of noise-vocoding on recognition of spoken three-digit numbers and used psychometric modelling to ascertain the threshold number of noise-vocoding channels required for 50% intelligibility by each participant. Associations of noise-vocoded speech intelligibility threshold with general demographic, clinical and neuropsychological characteristics and regional grey matter volume (defined by voxel-based morphometry of patients' brain images) were also assessed. Mean noise-vocoded speech intelligibility threshold was significantly higher in all patient groups than healthy controls, and significantly higher in Alzheimer's disease and logopenic variant primary progressive aphasia than semantic variant primary progressive aphasia (all P < 0.05). 
In a receiver operating characteristic analysis, vocoded intelligibility threshold discriminated Alzheimer's disease, non-fluent variant and logopenic variant primary progressive aphasia patients very well from healthy controls. Further, this central hearing measure correlated with overall disease severity but not with peripheral hearing or clear speech perception. Neuroanatomically, after correcting for multiple voxel-wise comparisons in predefined regions of interest, impaired noise-vocoded speech comprehension across syndromes was significantly associated (P < 0.05) with atrophy of left planum temporale, angular gyrus and anterior cingulate gyrus: a cortical network that has previously been widely implicated in processing degraded speech signals. Our findings suggest that the comprehension of acoustically altered speech captures an auditory brain process relevant to daily hearing and communication in major dementia syndromes, with novel diagnostic and therapeutic implications.
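The psychometric-threshold step, finding the number of vocoder channels yielding 50% intelligibility, can be sketched with a logistic fit on synthetic scores (the channel counts and proportions below are invented for illustration):

```python
import numpy as np
from scipy.optimize import curve_fit

# Proportion of spoken three-digit numbers recognized as a function of the
# number of noise-vocoding channels (synthetic data for one participant).
channels = np.array([1, 2, 3, 4, 6, 8, 12, 16], dtype=float)
p_correct = np.array([0.02, 0.05, 0.20, 0.45, 0.80, 0.92, 0.98, 1.00])

def logistic(x, x0, k):
    """Sigmoid rising from 0 to 1; x0 is the 50% point, k the slope."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

(x0, k), _ = curve_fit(logistic, channels, p_correct, p0=[4.0, 1.0])
print(f"50% intelligibility threshold: {x0:.1f} channels")
```

The fitted `x0` is the per-participant threshold that the study then compared across diagnostic groups and related to regional grey matter volume.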


Subjects
Alzheimer Disease, Aphasia, Primary Progressive, Aphasia, Humans, Alzheimer Disease/complications, Alzheimer Disease/diagnostic imaging, Alzheimer Disease/metabolism, Comprehension, Speech, Brain/pathology, Aphasia/pathology, Aphasia, Primary Progressive/complications, Neuropsychological Tests
12.
Audiol Neurootol ; : 1-7, 2024 May 20.
Article in English | MEDLINE | ID: mdl-38768568

ABSTRACT

INTRODUCTION: This study aimed to verify the influence of speech stimulus presentation mode and speed on auditory recognition in cochlear implant (CI) users with poorer performance. METHODS: This cross-sectional observational study applied auditory speech perception tests to fifteen adults, using three different ways of presenting the stimulus in the absence of competing noise: monitored live voice (MLV); recorded speech at typical speed (RSTS); recorded speech at slow speed (RSSS). Scores were assessed using the Percent Sentence Recognition Index (PSRI). The data were analysed inferentially using the Friedman and Wilcoxon tests with a 95% confidence interval and 5% significance level (p < 0.05). RESULTS: The mean age was 41.1 years, the mean duration of CI use was 11.4 years, and the mean hearing threshold was 29.7 ± 5.9 dB HL. Test performance, as determined by the PSRI, was MLV = 42.4 ± 17.9%; RSTS = 20.3 ± 14.3%; RSSS = 40.6 ± 20.7%. RSTS differed significantly from both MLV and RSSS. CONCLUSION: The mode and speed of stimulus presentation affect auditory speech recognition in CI users: comprehension was better when the tests were applied in the MLV and RSSS modalities.
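The statistical comparison described (a Friedman omnibus test followed by pairwise Wilcoxon signed-rank tests) can be sketched on synthetic PSRI scores (values loosely match the reported means but are not the study's data):

```python
import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon

rng = np.random.default_rng(4)

# Synthetic PSRI scores (%) for 15 listeners under the three presentation
# modes, roughly matching the reported group means.
mlv = np.clip(rng.normal(42, 18, 15), 0, 100)
rsts = np.clip(rng.normal(20, 14, 15), 0, 100)
rsss = np.clip(rng.normal(41, 20, 15), 0, 100)

# Omnibus repeated-measures test across the three conditions...
stat, p_friedman = friedmanchisquare(mlv, rsts, rsss)
# ...followed by pairwise signed-rank comparisons against RSTS.
_, p_mlv_rsts = wilcoxon(mlv, rsts)
_, p_rsss_rsts = wilcoxon(rsss, rsts)
print(p_friedman, p_mlv_rsts, p_rsss_rsts)
```

Nonparametric tests are the natural choice here given the small sample (n = 15) and bounded percentage scores.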

13.
Perception ; 53(4): 219-239, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38304994

ABSTRACT

This study investigates the crossmodal associations between naturally occurring sound textures and tactile textures. Previous research has demonstrated associations between low-level sensory features of sound and touch, as well as higher-level, cognitively mediated associations involving language, emotions, and metaphors. However, stimuli like textures, which are found in both modalities, have received less attention. In this study, we conducted two experiments: a free-association task and a two-alternative forced-choice task using everyday tactile textures and sound textures selected from natural sound categories. The results revealed consistent crossmodal associations reported by participants between the textures of the two modalities. Participants tended to associate sound textures (e.g., wood shavings and sandpaper) with tactile surfaces that were rated as harder, rougher, and intermediate on the sticky-slippery scale. While some participants based the auditory-tactile association on sensory features, others made the associations based on semantic relationships, co-occurrence in nature, and emotional mediation. Interestingly, the statistical features of the sound textures (mean, variance, kurtosis, power, autocorrelation, and correlation) did not show significant correlations with the crossmodal associations, indicating a higher-level association. This study provides insights into auditory-tactile associations by highlighting the role of sensory and emotional (or cognitive) factors in prompting these associations.


Subjects
Touch Perception, Touch, Humans, Sound, Semantics, Attention
14.
Adv Exp Med Biol ; 1455: 227-256, 2024.
Article in English | MEDLINE | ID: mdl-38918355

ABSTRACT

The aim of this chapter is to give an overview of how the perception of rhythmic temporal regularity such as a regular beat in music can be studied in human adults, human newborns, and nonhuman primates using event-related brain potentials (ERPs). First, we discuss different aspects of temporal structure in general, and musical rhythm in particular, and we discuss the possible mechanisms underlying the perception of regularity (e.g., a beat) in rhythm. Additionally, we highlight the importance of dissociating beat perception from the perception of other types of structure in rhythm, such as predictable sequences of temporal intervals, ordinal structure, and rhythmic grouping. In the second section of the chapter, we start with a discussion of auditory ERPs elicited by infrequent and frequent sounds: ERP responses to regularity violations, such as mismatch negativity (MMN), N2b, and P3, as well as early sensory responses to sounds, such as P1 and N1, have been shown to be instrumental in probing beat perception. Subsequently, we discuss how beat perception can be probed by comparing ERP responses to sounds in regular and irregular sequences, and by comparing ERP responses to sounds in different metrical positions in a rhythm, such as on and off the beat or on strong and weak beats. Finally, we will discuss previous research that has used the aforementioned ERPs and paradigms to study beat perception in human adults, human newborns, and nonhuman primates. In doing so, we consider the possible pitfalls and prospects of the technique, as well as future perspectives.


Subjects
Auditory Perception, Music, Primates, Humans, Animals, Auditory Perception/physiology, Infant, Newborn, Adult, Primates/physiology, Evoked Potentials, Auditory/physiology, Acoustic Stimulation/methods, Evoked Potentials/physiology, Electroencephalography
15.
Proc Natl Acad Sci U S A ; 118(48)2021 11 30.
Article in English | MEDLINE | ID: mdl-34819369

ABSTRACT

To guide behavior, perceptual systems must operate on intrinsically ambiguous sensory input. Observers are usually able to acknowledge the uncertainty of their perception, but in some cases, they critically fail to do so. Here, we show that a physiological correlate of ambiguity can be found in pupil dilation even when the observer is not aware of such ambiguity. We used a well-known auditory ambiguous stimulus, known as the tritone paradox, which can induce the perception of an upward or downward pitch shift within the same individual. In two experiments, behavioral responses showed that listeners could not explicitly access the ambiguity in this stimulus, even though their responses varied from trial to trial. However, pupil dilation was larger for the more ambiguous cases. The ambiguity of the stimulus for each listener was indexed by the entropy of behavioral responses, and this entropy was also a significant predictor of pupil size. In particular, entropy explained additional variation in pupil size independent of the explicit judgment of confidence in the specific situation that we investigated, in which the two measures were decoupled. Our data thus suggest that stimulus ambiguity is implicitly represented in the brain even without explicit awareness of this ambiguity.
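The ambiguity index used here, the entropy of each listener's behavioral responses, is straightforward to compute; a minimal sketch with illustrative response proportions (not the study's data):

```python
import numpy as np

def response_entropy(p_up):
    """Shannon entropy (bits) of a binary up/down judgment with P(up)=p_up."""
    p = np.clip(np.asarray(p_up, dtype=float), 1e-12, 1 - 1e-12)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

# Per-stimulus ambiguity: entropy of a listener's up/down responses across
# repeated presentations of the same tritone pair (proportions illustrative).
p_up = np.array([0.95, 0.80, 0.50, 0.30, 0.10])
entropy = response_entropy(p_up)
print(entropy)  # maximal (1 bit) for the 50/50 case, near 0 for consistent responses
```

This entropy is the stimulus-wise predictor that was regressed against pupil size, independently of explicit confidence ratings.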


Subjects
Auditory Perception/physiology, Awareness/physiology, Pupil/physiology, Acoustic Stimulation/methods, Adult, Female, Humans, Judgment, Male, Uncertainty, Visual Perception/physiology
16.
Proc Natl Acad Sci U S A ; 118(29)2021 07 20.
Article in English | MEDLINE | ID: mdl-34266949

ABSTRACT

The perception of sensory events can be enhanced or suppressed by the surrounding spatial and temporal context in ways that facilitate the detection of novel objects and contribute to the perceptual constancy of those objects under variable conditions. In the auditory system, the phenomenon known as auditory enhancement reflects a general principle of contrast enhancement, in which a target sound embedded within a background sound becomes perceptually more salient if the background is presented first by itself. This effect is highly robust, producing an effective enhancement of the target of up to 25 dB (more than two orders of magnitude in intensity), depending on the task. Despite the importance of the effect, neural correlates of auditory contrast enhancement have yet to be identified in humans. Here, we used the auditory steady-state response to probe the neural representation of a target sound under conditions of enhancement. The probe was simultaneously modulated in amplitude with two modulation frequencies to distinguish cortical from subcortical responses. We found robust correlates for neural enhancement in the auditory cortical, but not subcortical, responses. Our findings provide empirical support for a previously unverified theory of auditory enhancement based on neural adaptation of inhibition and point to approaches for improving sensory prostheses for hearing loss, such as hearing aids and cochlear implants.


Subjects
Auditory Cortex/physiology, Auditory Perception, Acoustic Stimulation, Adolescent, Adult, Auditory Threshold, Behavior, Electroencephalography, Female, Hearing, Humans, Male, Sound, Young Adult
17.
Proc Natl Acad Sci U S A ; 118(29)2021 07 20.
Article in English | MEDLINE | ID: mdl-34272278

ABSTRACT

Rhythm perception is fundamental to speech and music. Humans readily recognize a rhythmic pattern, such as that of a familiar song, independently of the tempo at which it occurs. This shows that our perception of auditory rhythms is flexible, relying on global relational patterns more than on the absolute durations of specific time intervals. Given that auditory rhythm perception in humans engages a complex auditory-motor cortical network even in the absence of movement and that the evolution of vocal learning is accompanied by strengthening of forebrain auditory-motor pathways, we hypothesize that vocal learning species share our perceptual facility for relational rhythm processing. We test this by asking whether the best-studied animal model for vocal learning, the zebra finch, can recognize a fundamental rhythmic pattern-equal timing between event onsets (isochrony)-based on temporal relations between intervals rather than on absolute durations. Prior work suggests that vocal nonlearners (pigeons and rats) are quite limited in this regard and are biased to attend to absolute durations when listening to rhythmic sequences. In contrast, using naturalistic sounds at multiple stimulus rates, we show that male zebra finches robustly recognize isochrony independent of absolute time intervals, even at rates distant from those used in training. Our findings highlight the importance of comparative studies of rhythmic processing and suggest that vocal learning species are promising animal models for key aspects of human rhythm perception. Such models are needed to understand the neural mechanisms behind the positive effect of rhythm on certain speech and movement disorders.
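A tempo-invariant (relational) measure of isochrony can be sketched as the coefficient of variation of inter-onset intervals, which is unchanged by rescaling the tempo; the sequences below are illustrative, not the study's stimuli:

```python
import numpy as np

def isochrony_index(onsets):
    """Coefficient of variation of inter-onset intervals: ~0 for isochronous
    sequences regardless of tempo, larger for irregular timing."""
    ioi = np.diff(np.asarray(onsets, dtype=float))
    return ioi.std() / ioi.mean()

fast = np.arange(0, 2, 0.125)          # isochronous, 8 events/s
slow = np.arange(0, 8, 0.5)            # same pattern, 4x slower
irregular = np.cumsum([0.2, 0.5, 0.1, 0.6, 0.3, 0.45, 0.15])

print(isochrony_index(fast), isochrony_index(slow), isochrony_index(irregular))
```

Because the index depends only on interval ratios, it captures the relational cue the finches would need to generalize across stimulus rates, whereas an absolute-duration strategy would treat the fast and slow sequences as different.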


Subjects
Auditory Perception , Finches/physiology , Animals , Auditory Cortex/physiology , Female , Learning , Male , Pattern Recognition, Physiological , Sound , Voice
18.
Psychiatry Clin Neurosci ; 78(5): 282-290, 2024 May.
Article in English | MEDLINE | ID: mdl-38321640

ABSTRACT

AIM: The current study aimed to infer neurophysiological mechanisms of auditory processing in children with Rett syndrome (RTT), a rare neurodevelopmental disorder caused by MECP2 mutations. We examined two brain responses elicited by 40-Hz click trains: the auditory steady-state response (ASSR), which reflects fine temporal analysis of auditory input, and the sustained wave (SW), which is associated with integral processing of the auditory signal. METHODS: We recorded electroencephalograms in 43 patients with RTT (aged 2.92-17.1 years) and 43 typically developing children of the same age during 40-Hz click train auditory stimulation, which lasted for 500 ms and was presented with interstimulus intervals of 500 to 800 ms. A mixed-model ANCOVA with age as a covariate was used to compare the amplitudes of the ASSR and SW between groups, taking into account the temporal dynamics and topography of the responses. RESULTS: The amplitude of the SW was atypically small in children with RTT starting from early childhood, with the difference from typically developing children decreasing with age. The ASSR showed a different pattern of developmental change: the between-group difference was negligible in early childhood but increased with age, as the ASSR increased in the typically developing group but not in those with RTT. Moreover, the ASSR was associated with expressive speech development in patients, such that children who could use words had a more pronounced ASSR. CONCLUSION: The ASSR and SW show promise as noninvasive electrophysiological biomarkers of auditory processing that have clinical relevance and can shed light on the link between genetic impairment and the RTT phenotype.
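The 40-Hz ASSR described in this abstract is, at its core, a spectral measurement: the amplitude of the EEG response at the click-train rate. A minimal sketch of that measurement, using only numpy and a synthetic single-channel epoch (the function name, sampling rate, and simulated data are illustrative assumptions, not the authors' pipeline):

```python
import numpy as np

def assr_amplitude(epoch, fs=1000.0, f_target=40.0):
    """Amplitude at the stimulation frequency for one averaged EEG epoch."""
    n = len(epoch)
    spectrum = np.fft.rfft(epoch * np.hanning(n))
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    idx = np.argmin(np.abs(freqs - f_target))
    # Factor 2 for the one-sided spectrum, factor 2 to undo the Hann window's
    # coherent gain of 0.5, so a pure sine of amplitude A returns roughly A.
    return 4.0 * np.abs(spectrum[idx]) / n

# Hypothetical averaged response: a 40 Hz component buried in noise.
rng = np.random.default_rng(0)
t = np.arange(0, 0.5, 1 / 1000.0)       # 500 ms stimulation window
epoch = 2.0 * np.sin(2 * np.pi * 40 * t) + rng.normal(0, 1, t.size)
amp = assr_amplitude(epoch)             # larger for stronger 40 Hz entrainment
```

A group comparison like the one reported would then contrast such per-child amplitudes (here with age as a covariate, which this sketch omits).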


Subjects
Auditory Perception , Electroencephalography , Evoked Potentials, Auditory , Rett Syndrome , Humans , Rett Syndrome/physiopathology , Female , Child , Evoked Potentials, Auditory/physiology , Adolescent , Child, Preschool , Auditory Perception/physiology , Acoustic Stimulation
19.
J Neurosci ; 42(5): 894-908, 2022 02 02.
Article in English | MEDLINE | ID: mdl-34893547

ABSTRACT

Auditory stimuli are often rhythmic in nature. Brain activity synchronizes with auditory rhythms via neural entrainment, and entrainment seems to be beneficial for auditory perception. However, it is not clear to what extent neural entrainment in the auditory system is reliable over time, which is a necessary prerequisite for targeted intervention. The current study aimed to establish the reliability of neural entrainment over time and to predict individual differences in auditory perception from associated neural activity. Across two different sessions, human listeners (21 females, 17 males) detected silent gaps presented at different phase locations of a 2 Hz frequency-modulated (FM) noise while EEG activity was recorded. As expected, neural activity was entrained by the 2 Hz FM noise. Moreover, gap detection was sinusoidally modulated by the phase of the 2 Hz FM into which the gap fell. Critically, both the strength of neural entrainment as well as the modulation of performance by the stimulus rhythm were highly reliable over sessions. Moreover, gap detection was predictable from pregap neural 2 Hz phase and alpha amplitude. Our results demonstrate that neural entrainment in the auditory system and the resulting behavioral modulation are reliable over time, and both entrained delta and nonentrained alpha oscillatory activity contribute to near-threshold stimulus perception. The latter suggests that improving auditory perception might require simultaneously targeting entrained brain rhythms as well as the alpha rhythm. SIGNIFICANCE STATEMENT: Neural activity synchronizes to the rhythms in sounds via neural entrainment, which seems to be important for successful auditory perception. A natural hypothesis is that improving neural entrainment, for example, via brain stimulation, should benefit perception. However, the extent to which neural entrainment is reliable over time, a necessary prerequisite for targeted intervention, has not been established. Using electroencephalogram recordings, we demonstrate that both neural entrainment to FM sounds and stimulus-induced behavioral modulation are reliable over time. Moreover, moment-by-moment fluctuations in perception are best predicted by entrained delta phase and nonentrained alpha amplitude. This work suggests that improving auditory perception might require simultaneously targeting entrained brain rhythms as well as the alpha rhythm.
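The "sinusoidal modulation of gap detection by stimulus phase" reported here is commonly quantified by fitting a cosine to per-trial hit rates as a function of the FM phase at which the gap occurred. A sketch of that fit (least-squares cosine regression; the function name and the simulated trial data are hypothetical, not the authors' code):

```python
import numpy as np

def phase_modulation(phases, hits):
    """Cosine fit of detection performance against stimulus phase.

    Returns (mean_rate, depth, preferred_phase): depth is the amplitude of
    the fitted sinusoid, preferred_phase the phase of best performance (rad).
    """
    X = np.column_stack([np.ones_like(phases), np.cos(phases), np.sin(phases)])
    b0, b1, b2 = np.linalg.lstsq(X, hits, rcond=None)[0]
    return b0, np.hypot(b1, b2), np.arctan2(b2, b1)

# Hypothetical per-trial data: gap phase within the 2 Hz FM cycle, hit = 0/1.
rng = np.random.default_rng(1)
phases = rng.uniform(0, 2 * np.pi, 2000)
p_hit = 0.6 + 0.2 * np.cos(phases - 1.0)    # performance modulated by phase
hits = (rng.random(2000) < p_hit).astype(float)
mean_rate, depth, pref = phase_modulation(phases, hits)
```

Test-retest reliability across sessions, as reported in the abstract, would then correlate the per-listener `depth` (and `pref`) values between session one and session two.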


Subjects
Alpha Rhythm , Auditory Cortex/physiology , Delta Rhythm , Periodicity , Adult , Auditory Pathways/physiology , Auditory Perception , Evoked Potentials, Auditory , Female , Humans , Male
20.
Neuroimage ; 271: 120026, 2023 05 01.
Article in English | MEDLINE | ID: mdl-36921678

ABSTRACT

Learning new words in an unfamiliar language is a complex endeavor that requires the orchestration of multiple perceptual and cognitive functions. Although the neural mechanisms governing word learning are becoming better understood, little is known about the predictive value of resting-state (RS) metrics for foreign word discrimination and word learning attainment. In addition, it is still unknown which of the multistep processes involved in word learning have the potential to rapidly reconfigure RS networks. To address these research questions, we recorded electroencephalography (EEG) in forty participants and examined scalp-based power spectra, source-based spectral density maps and functional connectivity metrics before (RS1), in between (RS2) and after (RS3) a series of tasks which are known to facilitate the acquisition of new words in a foreign language, namely word discrimination, word-referent mapping and semantic generalization. Power spectra at the scalp level consistently revealed a reconfiguration of RS networks as a function of foreign word discrimination (RS1 vs. RS2) and word learning (RS1 vs. RS3) tasks in the delta, lower and upper alpha, and upper beta frequency ranges. Otherwise, functional reconfigurations at the source level were restricted to the theta (spectral density maps) and to the lower and upper alpha frequency bands (spectral density maps and functional connectivity). Notably, scalp RS changes related to the word discrimination tasks (difference between RS2 and RS1) correlated with word discrimination abilities (upper alpha band) and semantic generalization performance (theta and upper alpha bands), whereas functional changes related to the word learning tasks (difference between RS3 and RS1) correlated with word discrimination scores (lower alpha band). Taken together, these results highlight that foreign speech sound discrimination and word learning have the potential to rapidly reconfigure RS networks at multiple functional scales.
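The band-resolved resting-state power spectra this study compares across RS1-RS3 reduce to mean spectral power within canonical EEG frequency bands. A minimal numpy-only sketch of that reduction (the band boundaries below are conventional assumptions, not necessarily the ones used in the study, and the simulated signal is hypothetical):

```python
import numpy as np

# Assumed band edges in Hz; studies differ, especially for alpha sub-bands.
BANDS = {"delta": (1, 4), "theta": (4, 8),
         "lower alpha": (8, 10), "upper alpha": (10, 13),
         "upper beta": (20, 30)}

def band_power(signal, fs, bands=BANDS):
    """Mean periodogram power per EEG band for one resting-state channel."""
    n = len(signal)
    psd = np.abs(np.fft.rfft(signal - signal.mean())) ** 2 / (fs * n)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return {name: psd[(freqs >= lo) & (freqs < hi)].mean()
            for name, (lo, hi) in bands.items()}

# Hypothetical 60 s resting-state segment with a strong upper-alpha rhythm.
rng = np.random.default_rng(2)
fs, n = 250.0, int(250.0 * 60.0)
rs = rng.normal(size=n) + 3.0 * np.sin(2 * np.pi * 10.5 * np.arange(n) / fs)
powers = band_power(rs, fs)   # upper alpha dominates for this signal
```

Comparing such per-band values between RS1 and RS2 (or RS3) across participants would yield the task-related reconfiguration effects described above.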


Subjects
Phonetics , Speech Perception , Humans , Brain , Auditory Perception , Learning