Results 1 - 20 of 1,056
1.
Proc Natl Acad Sci U S A; 120(2): e2212120120, 2023 Jan 10.
Article in English | MEDLINE | ID: mdl-36598952

ABSTRACT

Understanding how sensory evidence contributes to perceptual choices requires characterizing its transformation into decision variables. Here, we address this issue by evaluating the neural representation of acoustic information in the auditory cortex-recipient parietal cortex while gerbils either performed a two-alternative forced-choice auditory discrimination task or passively listened to identical acoustic stimuli. During task engagement, stimulus identity decoding performance from simultaneously recorded parietal neurons significantly correlated with psychometric sensitivity. In contrast, decoding performance during passive listening was significantly reduced. Principal component and geometric analyses revealed the emergence of low-dimensional encoding of linearly separable manifolds with respect to stimulus identity and decision, but only during task engagement. These findings confirm that the parietal cortex mediates a transition of acoustic representations into decision-related variables. Finally, using a clustering analysis, we identified three functionally distinct subpopulations of neurons that each encoded task-relevant information during separate temporal segments of a trial. Taken together, our findings demonstrate how parietal cortex neurons integrate and transform encoded auditory information to guide sound-driven perceptual decisions.
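
A minimal sketch of this kind of population decoding: a cross-validated linear classifier on simulated spike counts. The neuron counts, rates, and classifier choice are illustrative assumptions, not the authors' pipeline.

```python
# A cross-validated linear decoder on simulated spike counts. Neuron count,
# rates, and the classifier are illustrative, not the authors' pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_neurons = 200, 40
labels = rng.integers(0, 2, n_trials)             # two stimulus identities (2AFC)
tuning = rng.normal(0, 1, n_neurons)              # per-neuron stimulus preference
rates = 5 + tuning * (2 * labels[:, None] - 1)    # label-dependent mean rates
counts = rng.poisson(np.clip(rates, 0.1, None))   # Poisson spike counts

clf = LogisticRegression(max_iter=1000)
acc = cross_val_score(clf, counts, labels, cv=5).mean()   # chance = 0.5
print(f"decoding accuracy: {acc:.2f}")
```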


Subject(s)
Auditory Cortex, Parietal Lobe, Animals, Parietal Lobe/physiology, Auditory Perception/physiology, Auditory Cortex/physiology, Acoustic Stimulation, Acoustics, Gerbillinae
2.
J Neurosci; 44(15), 2024 Apr 10.
Article in English | MEDLINE | ID: mdl-38388426

ABSTRACT

Real-world listening settings often consist of multiple concurrent sound streams. To limit perceptual interference during selective listening, the auditory system segregates and filters the relevant sensory input. Previous work provided evidence that the auditory cortex is critically involved in this process and selectively gates attended input toward subsequent processing stages. We studied at which level of auditory cortex processing this filtering of attended information occurs using functional magnetic resonance imaging (fMRI) and a naturalistic selective listening task. Forty-five human listeners (of either sex) attended to one of two continuous speech streams, presented either concurrently or in isolation. Functional data were analyzed using an inter-subject analysis to assess stimulus-specific components of ongoing auditory cortex activity. Our results suggest that stimulus-related activity in the primary auditory cortex and the adjacent planum temporale are hardly affected by attention, whereas brain responses at higher stages of the auditory cortex processing hierarchy become progressively more selective for the attended input. Consistent with these findings, a complementary analysis of stimulus-driven functional connectivity further demonstrated that information on the to-be-ignored speech stream is shared between the primary auditory cortex and the planum temporale but largely fails to reach higher processing stages. Our findings suggest that the neural processing of ignored speech cannot be effectively suppressed at the level of early cortical processing of acoustic features but is gradually attenuated once the competing speech streams are fully segregated.
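
The inter-subject analysis can be sketched as a leave-one-out inter-subject correlation (ISC) over ROI time courses; the data shapes and the specific ISC variant below are assumptions, not the study's code.

```python
# Leave-one-out inter-subject correlation (ISC) on one ROI's time courses:
# each subject is correlated with the mean of all others. Shapes and the ISC
# variant are assumptions; fMRI preprocessing is omitted.
import numpy as np

def isc_leave_one_out(ts):
    """ts: (n_subjects, n_timepoints) array for a single ROI."""
    n = ts.shape[0]
    out = np.empty(n)
    for i in range(n):
        others = ts[np.arange(n) != i].mean(axis=0)
        out[i] = np.corrcoef(ts[i], others)[0, 1]
    return out

rng = np.random.default_rng(1)
shared = rng.normal(size=300)                        # stimulus-driven component
ts = shared + rng.normal(scale=2.0, size=(45, 300))  # 45 subjects plus noise
print("mean ISC:", isc_leave_one_out(ts).mean().round(3))
```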


Subject(s)
Auditory Cortex, Speech Perception, Humans, Auditory Cortex/diagnostic imaging, Auditory Cortex/physiology, Speech Perception/physiology, Temporal Lobe, Magnetic Resonance Imaging, Attention/physiology, Auditory Perception/physiology, Acoustic Stimulation
3.
Cereb Cortex; 34(4), 2024 Apr 1.
Article in English | MEDLINE | ID: mdl-38629796

ABSTRACT

Neuroimaging studies have shown that the neural representation of imagery is closely related to that of the corresponding perceptual modality; however, the clearly different subjective experiences of perception and imagery indicate underlying differences in neural mechanisms that cannot be explained by the simple theory that imagery is a weak form of perception. Given the importance of functional integration across brain regions, we performed correlation analyses of neural activity in brain regions jointly activated by auditory imagery and perception, yielding brain functional connectivity (FC) networks with a consistent structure across the two modalities. However, connectivity between areas of the superior temporal gyrus and the right precentral cortex was significantly higher during auditory perception than during imagery. Moreover, modality decoding based on FC patterns showed that the FC networks of auditory imagery and perception are reliably distinguishable. Voxel-level FC analysis further identified the regions containing voxels with significant connectivity differences between the two modalities. This study characterizes both the commonalities and differences between auditory imagery and perception in terms of inter-regional information exchange, providing a new perspective on the neural mechanisms of modality-specific information representation.
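
The FC comparison could look roughly like this: a correlation matrix per condition, compared edge by edge. ROI count and time series are invented for illustration.

```python
# ROI-by-ROI functional connectivity (Pearson correlation) per condition,
# compared edge by edge. ROI count and time series are invented.
import numpy as np

rng = np.random.default_rng(2)
n_rois, n_tp = 10, 240
perception = rng.normal(size=(n_rois, n_tp))
imagery = rng.normal(size=(n_rois, n_tp))

fc_perc = np.corrcoef(perception)     # (n_rois, n_rois) FC matrix
fc_imag = np.corrcoef(imagery)
diff = fc_perc - fc_imag              # positive: stronger coupling in perception

i, j = np.unravel_index(np.abs(diff).argmax(), diff.shape)
print(f"largest FC difference at edge ({i}, {j}): {diff[i, j]:.3f}")
```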


Subject(s)
Auditory Cortex, Brain Mapping, Brain Mapping/methods, Imagination, Brain/diagnostic imaging, Auditory Perception, Cerebral Cortex, Magnetic Resonance Imaging/methods, Auditory Cortex/diagnostic imaging
4.
Cereb Cortex; 34(7), 2024 Jul 3.
Article in English | MEDLINE | ID: mdl-39051660

ABSTRACT

What is the function of auditory hemispheric asymmetry? We propose that the identification of sound sources relies on the asymmetric processing of two complementary and perceptually relevant acoustic invariants: actions and objects. In a large dataset of environmental sounds, we observed that temporal and spectral modulations display only weak covariation. We then synthesized auditory stimuli by simulating various actions (frictions) occurring on different objects (solid surfaces). Behaviorally, discrimination of actions relies on temporal modulations, while discrimination of objects relies on spectral modulations. Functional magnetic resonance imaging data showed that actions and objects are decoded in the left and right hemispheres, respectively, in bilateral superior temporal and left inferior frontal regions. This asymmetry reflects a generic differential processing, through differential neural sensitivity to the temporal and spectral modulations present in environmental sounds, that supports the efficient categorization of actions and objects. These results support an ecologically valid framework of the functional role of auditory brain asymmetry.
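
A hedged sketch of how temporal and spectral modulations can be estimated jointly, via the 2D FFT of a log spectrogram. Published modulation analyses typically use log-frequency filterbanks, so treat all parameters below as placeholders.

```python
# Joint temporal/spectral modulation estimate via the 2D FFT of a log
# spectrogram. A 4 Hz amplitude-modulated noise should show a temporal-
# modulation peak near 4 Hz. All parameters are placeholders.
import numpy as np
from scipy.signal import spectrogram

fs = 16000
t = np.arange(0, 2.0, 1 / fs)
snd = np.random.randn(t.size) * (1 + 0.8 * np.sin(2 * np.pi * 4 * t))

f, tt, sxx = spectrogram(snd, fs=fs, nperseg=256, noverlap=192)
logspec = np.log(sxx + 1e-10)
mps = np.abs(np.fft.fftshift(np.fft.fft2(logspec - logspec.mean())))

# Columns of `mps` index temporal modulation (Hz), rows spectral modulation.
tmod = np.fft.fftshift(np.fft.fftfreq(tt.size, d=tt[1] - tt[0]))
profile = mps.sum(axis=0)                      # collapse spectral-mod axis
peak = tmod[tmod > 0.5][profile[tmod > 0.5].argmax()]
print(f"dominant temporal modulation: {peak:.1f} Hz")
```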


Subject(s)
Acoustic Stimulation, Auditory Perception, Functional Laterality, Magnetic Resonance Imaging, Humans, Male, Female, Magnetic Resonance Imaging/methods, Functional Laterality/physiology, Adult, Acoustic Stimulation/methods, Auditory Perception/physiology, Young Adult, Brain Mapping/methods, Auditory Cortex/physiology, Auditory Cortex/diagnostic imaging
5.
J Neurosci; 43(20): 3687-3695, 2023 May 17.
Article in English | MEDLINE | ID: mdl-37028932

ABSTRACT

Modulations in both amplitude and frequency are prevalent in natural sounds and are critical in defining their properties. Humans are exquisitely sensitive to frequency modulation (FM) at the slow modulation rates and low carrier frequencies that are common in speech and music. This enhanced sensitivity to slow-rate and low-frequency FM has been widely believed to reflect precise, stimulus-driven phase locking to temporal fine structure in the auditory nerve. At faster modulation rates and/or higher carrier frequencies, FM is instead thought to be coded by coarser frequency-to-place mapping, where FM is converted to amplitude modulation (AM) via cochlear filtering. Here, we show that patterns of human FM perception that have classically been explained by limits in peripheral temporal coding are instead better accounted for by constraints in the central processing of fundamental frequency (F0) or pitch. We measured FM detection in male and female humans using harmonic complex tones with an F0 within the range of musical pitch but with resolved harmonic components that were all above the putative limits of temporal phase locking (>8 kHz). Listeners were more sensitive to slow than fast FM rates, even though all components were beyond the limits of phase locking. In contrast, AM sensitivity remained better at faster than slower rates, regardless of carrier frequency. These findings demonstrate that classic trends in human FM sensitivity, previously attributed to auditory nerve phase locking, may instead reflect the constraints of a unitary code that operates at a more central level of processing.

SIGNIFICANCE STATEMENT: Natural sounds involve dynamic frequency and amplitude fluctuations. Humans are particularly sensitive to frequency modulation (FM) at slow rates and low carrier frequencies, which are prevalent in speech and music. This sensitivity has been ascribed to encoding of stimulus temporal fine structure (TFS) via phase-locked auditory nerve activity. To test this long-standing theory, we measured FM sensitivity using complex tones with a low F0 but only high-frequency harmonics beyond the limits of phase locking. Dissociating the F0 from TFS showed that FM sensitivity is limited not by peripheral encoding of TFS but rather by central processing of F0, or pitch. The results suggest a unitary code for FM detection limited by more central constraints.
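
A sketch of the stimulus logic: a harmonic complex whose F0 sits in the musical pitch range but whose components all stay above 8 kHz, with slow sinusoidal FM of the F0. All values are illustrative, not the study's calibrated stimuli.

```python
# Harmonic complex with F0 = 400 Hz (musical pitch range) whose components
# all stay above 8 kHz even at the FM trough, with 2 Hz sinusoidal FM of the
# F0. All values are illustrative, not the study's calibrated stimuli.
import numpy as np

fs = 48000
dur, f0, fm_rate, fm_depth = 1.0, 400.0, 2.0, 0.06    # 6% F0 excursion
t = np.arange(0, dur, 1 / fs)

f0_t = f0 * (1 + fm_depth * np.sin(2 * np.pi * fm_rate * t))  # instantaneous F0
phase = 2 * np.pi * np.cumsum(f0_t) / fs                      # integrated phase

# Harmonics 22-29 span ~8.8-11.6 kHz nominally and stay above 8 kHz
# throughout the modulation cycle (22 * 400 * 0.94 = ~8.3 kHz).
sig = sum(np.sin(k * phase) for k in range(22, 30))
sig /= np.max(np.abs(sig))
```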


Subject(s)
Cochlear Nerve, Music, Male, Humans, Female, Cochlear Nerve/physiology, Cochlea/physiology, Sound, Speech, Acoustic Stimulation
6.
Hum Brain Mapp; 45(2): e26572, 2024 Feb 1.
Article in English | MEDLINE | ID: mdl-38339905

ABSTRACT

Tau rhythms are sound-responsive alpha-band (~8-13 Hz) oscillations generated largely within auditory areas of the superior temporal gyri. Studies of tau have mostly employed magnetoencephalography or intracranial recording because of tau's elusiveness in the electroencephalogram. Here, we demonstrate that independent component analysis (ICA) decomposition can be an effective way to identify tau sources and study tau source activities in EEG recordings. Subjects (N = 18) were passively exposed to complex acoustic stimuli while the EEG was recorded from 68 electrodes across the scalp. Subjects' data were split into 60 parallel processing pipelines entailing five levels of high-pass filtering (passbands of 0.1, 0.5, 1, 2, and 4 Hz), three levels of low-pass filtering (25, 50, and 100 Hz), and four different ICA algorithms (fastICA, infomax, adaptive mixture ICA [AMICA], and multi-model AMICA [mAMICA]). Tau-related independent component (IC) processes were identified in these data as being localized near the superior temporal gyri with a spectral peak in the 8-13 Hz alpha band. These "tau ICs" showed alpha suppression during sound presentations that was not seen for other commonly observed IC clusters with spectral peaks in the alpha range (e.g., those associated with somatomotor mu, and parietal or occipital alpha). The choice of analysis parameters affected the likelihood of obtaining tau ICs from an ICA decomposition. Lower cutoff frequencies for high-pass filtering resulted in significantly fewer subjects showing a tau IC than more aggressive high-pass filtering. The fastICA algorithm performed the poorest in this regard, while mAMICA performed best. The best combination of filters and ICA model choice identified at least one tau IC in the data of ~94% of the sample. Altogether, the data reveal close similarities between tau EEG IC dynamics and tau dynamics observed in MEG and intracranial data. Use of relatively aggressive high-pass filters and mAMICA decomposition should allow researchers to identify and characterize tau rhythms in a majority of their subjects. We believe adopting the ICA decomposition approach to EEG analysis can increase the rate and range of discoveries related to auditory responsive tau rhythms.
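
The filter-then-ICA pattern can be sketched with MNE-Python. Extended infomax is built into MNE, while AMICA/mAMICA require an external toolbox and are not shown; apart from the channel count, everything below is invented.

```python
# Sketch of the filter-then-ICA pattern with MNE-Python on synthetic data.
# Extended infomax is built into MNE; AMICA/mAMICA are external toolboxes.
# Channel count matches the paper; everything else is invented.
import numpy as np
import mne
from mne.preprocessing import ICA

sfreq, n_ch = 250.0, 68
data = np.random.default_rng(3).normal(scale=1e-5, size=(n_ch, int(sfreq * 60)))
info = mne.create_info([f"EEG{i:03d}" for i in range(n_ch)], sfreq, "eeg")
raw = mne.io.RawArray(data, info)

# The paper reports that aggressive high-pass filtering (e.g., 2 Hz) improves
# the odds of recovering a tau IC.
raw.filter(l_freq=2.0, h_freq=50.0)
ica = ICA(n_components=20, method="infomax",
          fit_params=dict(extended=True), random_state=0)
ica.fit(raw)
# Candidate tau ICs would then be screened for an 8-13 Hz spectral peak and
# localization near the superior temporal gyri.
```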


Subject(s)
Auditory Cortex, Brain Waves, Humans, Algorithms, Auditory Cortex/physiology, Magnetoencephalography
7.
Psychol Sci; 35(7): 814-824, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38889285

ABSTRACT

Despite the intuitive feeling that our visual experience is coherent and comprehensive, the world is full of ambiguous and indeterminate information. Here we explore how the visual system might take advantage of ambient sounds to resolve this ambiguity. Young adults (ns = 20-30) were tasked with identifying an object slowly fading in through visual noise while a task-irrelevant sound played. We found that participants demanded more visual information when the auditory object was incongruent with the visual object compared to when it was not. Auditory scenes, which are only probabilistically related to specific objects, produced similar facilitation even for unheard objects (e.g., a bench). Notably, these effects traverse categorical and specific auditory and visual-processing domains as participants performed across-category and within-category visual tasks, underscoring cross-modal integration across multiple levels of perceptual processing. To summarize, our study reveals the importance of audiovisual interactions to support meaningful perceptual experiences in naturalistic settings.


Subject(s)
Auditory Perception, Visual Perception, Humans, Auditory Perception/physiology, Young Adult, Adult, Male, Female, Visual Perception/physiology, Noise, Acoustic Stimulation
8.
Psychol Sci; 35(1): 34-54, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38019607

ABSTRACT

Much of what we know and love about music hinges on our ability to make successful predictions, which appears to be an intrinsically rewarding process. Yet the exact process by which learned predictions become pleasurable is unclear. Here we created novel melodies in an alternative scale different from any established musical culture to show how musical preference is generated de novo. Across nine studies (n = 1,185), adult participants learned to like more frequently presented items that adhered to this rapidly learned structure, suggesting that exposure and prediction errors both affected self-report liking ratings. Learning trajectories varied by music-reward sensitivity but were similar for U.S. and Chinese participants. Furthermore, functional MRI activity in auditory areas reflected prediction errors, whereas functional connectivity between auditory and medial prefrontal regions reflected both exposure and prediction errors. Collectively, results support predictive coding as a cognitive mechanism by which new musical sounds become rewarding.
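
One simple way to make "prediction error" concrete is surprisal under note-to-note transition probabilities learned from exposure. The scale, melodies, and model below are invented stand-ins, not the authors' computational model.

```python
# Surprisal under note-to-note transition probabilities learned from exposure
# melodies: one toy way to quantify a melodic prediction error. The scale,
# melodies, and model are invented stand-ins.
import numpy as np

n_tones = 5                                           # tones of a novel scale
rng = np.random.default_rng(4)
exposure = rng.integers(0, n_tones, size=(200, 8))    # 200 "heard" melodies

counts = np.ones((n_tones, n_tones))                  # add-one smoothing
for mel in exposure:
    for a, b in zip(mel[:-1], mel[1:]):
        counts[a, b] += 1
trans = counts / counts.sum(axis=1, keepdims=True)

def surprisal(melody):
    """Mean -log2 P(next | current); higher = larger prediction error."""
    ps = [trans[a, b] for a, b in zip(melody[:-1], melody[1:])]
    return -np.log2(ps).mean()

print("surprisal of a new melody:", round(float(surprisal(rng.integers(0, n_tones, 8))), 2))
```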


Subject(s)
Music, Adult, Humans, Music/psychology, Auditory Perception, Learning, Emotions, Reward, Brain Mapping
9.
Article in English | MEDLINE | ID: mdl-38733407

ABSTRACT

Auditory streaming underlies a receiver's ability to organize complex mixtures of auditory input into distinct perceptual "streams" that represent different sound sources in the environment. During auditory streaming, sounds produced by the same source are integrated through time into a single, coherent auditory stream that is perceptually segregated from other concurrent sounds. Based on human psychoacoustic studies, one hypothesis regarding auditory streaming is that any sufficiently salient perceptual difference may lead to stream segregation. Here, we used the eastern grey treefrog, Hyla versicolor, to test this hypothesis in the context of vocal communication in a non-human animal. In this system, females choose their mate based on perceiving species-specific features of a male's pulsatile advertisement calls in social environments (choruses) characterized by mixtures of overlapping vocalizations. We employed an experimental paradigm from human psychoacoustics to design interleaved pulsatile sequences (ABAB…) that mimicked key features of the species' advertisement call, and in which alternating pulses differed in pulse rise time, which is a robust species recognition cue in eastern grey treefrogs. Using phonotaxis assays, we found no evidence that perceptually salient differences in pulse rise time promoted the segregation of interleaved pulse sequences into distinct auditory streams. These results do not support the hypothesis that any perceptually salient acoustic difference can be exploited as a cue for stream segregation in all species. We discuss these findings in the context of cues used for species recognition and auditory streaming.
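
The interleaved stimulus design can be sketched as an ABAB... pulse train in which alternating pulses differ only in rise time. Pulse duration, rate, and carrier frequency below are placeholders, not the study's calibrated values.

```python
# An ABAB... pulse sequence in which alternating pulses differ only in rise
# time. Duration, rate, and carrier are placeholders, not the study's values.
import numpy as np

fs = 44100
pulse_dur, period, carrier = 0.010, 0.025, 2500.0   # 10 ms pulses, 40/s

def pulse(rise_time):
    t = np.arange(0, pulse_dur, 1 / fs)
    env = np.ones_like(t)
    n_rise = int(rise_time * fs)
    env[:n_rise] = np.linspace(0, 1, n_rise)        # linear onset ramp
    n_fall = int(0.002 * fs)
    env[-n_fall:] *= np.linspace(1, 0, n_fall)      # short offset ramp
    return env * np.sin(2 * np.pi * carrier * t)

seq = np.zeros(int(fs * 1.0))
for i in range(36):
    rise = 0.001 if i % 2 == 0 else 0.008           # A: fast rise, B: slow rise
    p = pulse(rise)
    start = int(i * period * fs)
    seq[start:start + p.size] += p
```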

10.
J Exp Biol; 2024 Sep 12.
Article in English | MEDLINE | ID: mdl-39263850

ABSTRACT

Early-life experiences with signals used in communication are instrumental in shaping an animal's social interactions. In songbirds, which use vocalizations for guiding social interactions and mate choice, recent studies show that sensory effects on development occur earlier than previously expected, even in embryos and nestlings. Here, we explore the neural dynamics underlying experience-dependent song categorization in young birds prior to the traditionally studied sensitive period of vocal learning that begins around 3 weeks post-hatch. We raised zebra finches either with their biological parents, cross-fostered by Bengalese finches beginning at embryonic day 9, or by only the non-singing mother from 2 days post-hatch. Then, 1-5 days after fledging, we conducted behavioral experiments and extracellular recordings in the auditory forebrain to test responses to zebra finch and Bengalese finch songs. Auditory forebrain neurons in cross-fostered and isolate birds showed increases in firing rate and decreases in responsiveness and selectivity. In cross-fostered birds, decreases in responsiveness and selectivity relative to white noise were specific to conspecific song stimuli, which paralleled behavioral attentiveness to conspecific songs in those same birds. This study shows that auditory and social experience can already impact song 'type' processing in the brains of nestlings, and that brain changes at this age can portend the effects of natal experience in adults.

11.
Exp Brain Res; 242(9): 2207-2217, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39012473

ABSTRACT

Music is based on various regularities, ranging from the repetition of physical sounds to theoretically organized harmony and counterpoint. How are multidimensional regularities processed when we listen to music? The present study focuses on the redundant signals effect (RSE) as a novel approach to untangling the relationship between these regularities in music. The RSE refers to the occurrence of a shorter reaction time (RT) when two or three signals are presented simultaneously than when only one of these signals is presented, and provides evidence that these signals are processed concurrently. In two experiments, chords that deviated from tonal (harmonic) and acoustic (intensity and timbre) regularities were presented occasionally in the final position of short chord sequences. The participants were asked to detect all deviant chords while withholding their responses to non-deviant chords (i.e., the Go/NoGo task). RSEs were observed in all double- and triple-deviant combinations, reflecting processing of multidimensional regularities. Further analyses suggested evidence of coactivation by separate perceptual modules in the combination of tonal and acoustic deviants, but not in the combination of two acoustic deviants. These results imply that tonal and acoustic regularities are different enough to be processed as two discrete pieces of information. Examining the underlying process of RSE may elucidate the relationship between multidimensional regularity processing in music.
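
The coactivation question raised here is classically tested with Miller's race-model inequality, F_redundant(t) <= F_A(t) + F_B(t); below is a sketch on simulated RTs, not the study's analysis code.

```python
# Miller's race-model inequality on simulated RTs: if the redundant-signal
# ECDF exceeds the sum of the single-signal ECDFs anywhere, a race between
# separate detectors cannot explain the speed-up, suggesting coactivation.
import numpy as np

def ecdf(rts, grid):
    return np.searchsorted(np.sort(rts), grid, side="right") / rts.size

rng = np.random.default_rng(5)
rt_a = rng.normal(420, 50, 300)      # single deviant A (ms)
rt_b = rng.normal(430, 50, 300)      # single deviant B
rt_ab = rng.normal(370, 45, 300)     # redundant (double) deviant

grid = np.linspace(250, 600, 200)
bound = np.minimum(ecdf(rt_a, grid) + ecdf(rt_b, grid), 1.0)
violation = ecdf(rt_ab, grid) - bound
print("max race-model violation:", violation.max().round(3))  # > 0 hints at coactivation
```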


Subject(s)
Acoustic Stimulation, Auditory Perception, Music, Reaction Time, Humans, Female, Male, Reaction Time/physiology, Young Adult, Adult, Acoustic Stimulation/methods, Auditory Perception/physiology
12.
Brain; 146(10): 4065-4076, 2023 Oct 3.
Article in English | MEDLINE | ID: mdl-37184986

ABSTRACT

Successful communication in daily life depends on accurate decoding of speech signals that are acoustically degraded by challenging listening conditions. This process presents the brain with a demanding computational task that is vulnerable to neurodegenerative pathologies. However, despite recent intense interest in the link between hearing impairment and dementia, comprehension of acoustically degraded speech in these diseases has been little studied. Here we addressed this issue in a cohort of 19 patients with typical Alzheimer's disease and 30 patients representing the three canonical syndromes of primary progressive aphasia (non-fluent/agrammatic variant primary progressive aphasia; semantic variant primary progressive aphasia; logopenic variant primary progressive aphasia), compared to 25 healthy age-matched controls. As a paradigm for the acoustically degraded speech signals of daily life, we used noise-vocoding: synthetic division of the speech signal into frequency channels constituted from amplitude-modulated white noise, such that fewer channels convey less spectrotemporal detail, thereby reducing intelligibility. We investigated the impact of noise-vocoding on recognition of spoken three-digit numbers and used psychometric modelling to ascertain the threshold number of noise-vocoding channels required for 50% intelligibility by each participant. Associations of noise-vocoded speech intelligibility threshold with general demographic, clinical and neuropsychological characteristics and regional grey matter volume (defined by voxel-based morphometry of patients' brain images) were also assessed. Mean noise-vocoded speech intelligibility threshold was significantly higher in all patient groups than healthy controls, and significantly higher in Alzheimer's disease and logopenic variant primary progressive aphasia than semantic variant primary progressive aphasia (all P < 0.05). In a receiver operating characteristic analysis, vocoded intelligibility threshold discriminated Alzheimer's disease, non-fluent variant and logopenic variant primary progressive aphasia patients very well from healthy controls. Further, this central hearing measure correlated with overall disease severity but not with peripheral hearing or clear speech perception. Neuroanatomically, after correcting for multiple voxel-wise comparisons in predefined regions of interest, impaired noise-vocoded speech comprehension across syndromes was significantly associated (P < 0.05) with atrophy of left planum temporale, angular gyrus and anterior cingulate gyrus: a cortical network that has previously been widely implicated in processing degraded speech signals. Our findings suggest that the comprehension of acoustically altered speech captures an auditory brain process relevant to daily hearing and communication in major dementia syndromes, with novel diagnostic and therapeutic implications.
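
A minimal noise-vocoder sketch along the lines described, with illustrative filter choices: band-pass the speech into channels, extract each band's envelope, and use it to modulate band-limited noise.

```python
# A minimal noise vocoder, assuming log-spaced channels between 100 Hz and
# 8 kHz: band-pass the speech, take each band's Hilbert envelope, modulate
# same-band noise, and sum. Filter order and edges are illustrative choices,
# not the study's parameters.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(speech, fs, n_channels, f_lo=100.0, f_hi=8000.0):
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)   # log-spaced band edges
    noise = np.random.default_rng(0).standard_normal(speech.size)
    out = np.zeros_like(speech)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, speech)
        env = np.abs(hilbert(band))                    # amplitude envelope
        out += env * sosfiltfilt(sos, noise)           # envelope-modulated noise
    return out / np.max(np.abs(out))

fs = 22050
t = np.arange(0, 1.0, 1 / fs)
demo = np.sin(2 * np.pi * 150 * t) * (1 + np.sin(2 * np.pi * 3 * t))  # speech stand-in
vocoded = noise_vocode(demo, fs, n_channels=4)   # fewer channels -> less intelligible
```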


Subject(s)
Alzheimer Disease, Primary Progressive Aphasia, Aphasia, Humans, Alzheimer Disease/complications, Alzheimer Disease/diagnostic imaging, Alzheimer Disease/metabolism, Comprehension, Speech, Brain/pathology, Aphasia/pathology, Primary Progressive Aphasia/complications, Neuropsychological Tests
13.
Audiol Neurootol; 29(5): 408-417, 2024.
Article in English | MEDLINE | ID: mdl-38527427

ABSTRACT

INTRODUCTION: Auditory performance in noise of cochlear implant recipients can be assessed with the adaptive Matrix test (MT); however, when the speech-to-noise ratio (SNR) exceeds 15 dB, the background noise hardly has any negative impact on speech recognition. Here, we aim to evaluate the predictive power of aided pure-tone audiometry and speech recognition in quiet and establish cut-off values for both tests that indicate whether auditory performance in noise can be assessed using the Matrix sentence test in a diffuse noise environment. METHODS: We assessed the power of pure-tone audiometry and speech recognition in quiet to predict the response to the MT. Ninety-eight cochlear implant recipients were assessed using different sound processors from Advanced Bionics (n = 56) and Cochlear (n = 42). Auditory tests were performed at least 1 year after cochlear implantation or after upgrading the sound processor to ensure the best benefit of the implant. Auditory assessment of the implanted ear in free-field conditions included: pure-tone average (PTA), speech discrimination score (SDS) in quiet at 65 dB, and speech recognition threshold (SRT) in noise, that is, the SNR at which the patient can correctly recognize 50% of the words using the MT in a diffuse sound field. RESULTS: The SRT in noise was determined in 60 patients (61%) and undetermined in 38 (39%) using the MT. When cut-off values of PTA <36 dB and SDS >41% were used separately, they were able to predict a positive response to the MT in 83% of recipients; using both cut-off values together, the predictive value reached 92%. DISCUSSION: As pure-tone audiometry is standardized universally whereas speech recognition in quiet can vary depending on the language used, we propose that the MT should be performed in recipients with PTA <36 dB; in recipients with PTA >36 dB, a list of Matrix sentences at a fixed SNR should be presented to determine the percentage of words understood. This approach should enable clinicians to obtain information about auditory performance in noise whenever possible.
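
How such a combined cut-off rule's predictive value could be checked, sketched on invented data (the paper's reported figure is 92% in its own cohort):

```python
# Evaluating the combined cut-off rule (PTA < 36 dB and SDS > 41%) as a
# predictor of a measurable Matrix-test SRT. All data are invented; the paper
# reports a 92% predictive value for the combined rule in its own cohort.
import numpy as np

rng = np.random.default_rng(6)
pta = rng.normal(35, 8, 98)                    # aided pure-tone averages (dB)
sds = rng.normal(50, 20, 98).clip(0, 100)      # speech discrimination scores (%)
mt_ok = (pta + rng.normal(0, 6, 98)) < 38      # invented "SRT measurable" outcome

rule = (pta < 36) & (sds > 41)
ppv = (mt_ok & rule).sum() / rule.sum()        # positive predictive value
print(f"rule selects {rule.sum()} recipients; PPV = {ppv:.2f}")
```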


Subject(s)
Pure-Tone Audiometry, Cochlear Implants, Noise, Speech Perception, Humans, Speech Perception/physiology, Middle Aged, Female, Male, Aged, Adult, Cochlear Implantation, Aged 80 and over, Young Adult, Predictive Value of Tests, Adolescent, Auditory Threshold/physiology
14.
Audiol Neurootol; 1-7, 2024 May 20.
Article in English | MEDLINE | ID: mdl-38768568

ABSTRACT

INTRODUCTION: This study aimed to verify the influence of speech stimulus presentation mode and speed on auditory recognition in cochlear implant (CI) users with poorer performance. METHODS: This cross-sectional observational study applied auditory speech perception tests to fifteen adults, using three different ways of presenting the stimulus in the absence of competing noise: monitored live voice (MLV); recorded speech at typical speed (RSTS); and recorded speech at slow speed (RSSS). Scores were assessed using the Percent Sentence Recognition Index (PSRI). The data were analysed inferentially using the Friedman and Wilcoxon tests with a 95% confidence interval and a 5% significance level (p < 0.05). RESULTS: The mean age was 41.1 years, the mean duration of CI use was 11.4 years, and the mean hearing threshold was 29.7 ± 5.9 dB HL. Test performance, as determined by the PSRI, was MLV = 42.4 ± 17.9%; RSTS = 20.3 ± 14.3%; RSSS = 40.6 ± 20.7%. A significant difference was identified for RSTS compared to MLV and RSSS. CONCLUSION: The mode and speed of stimulus presentation influence auditory speech recognition in CI users: comprehension was better when the tests were applied in the MLV and RSSS modalities.
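
The statistics named in METHODS, sketched with SciPy on invented PSRI scores:

```python
# The named statistics (Friedman test, then pairwise Wilcoxon follow-ups at
# p < 0.05) sketched with SciPy on invented PSRI scores for 15 participants.
import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon

rng = np.random.default_rng(7)
mlv = rng.normal(42, 18, 15).clip(0, 100)    # percent sentence recognition (%)
rsts = rng.normal(20, 14, 15).clip(0, 100)
rsss = rng.normal(41, 21, 15).clip(0, 100)

stat, p = friedmanchisquare(mlv, rsts, rsss)
print(f"Friedman: chi2 = {stat:.2f}, p = {p:.4f}")
if p < 0.05:                                  # pairwise follow-ups
    print("RSTS vs MLV :", wilcoxon(rsts, mlv))
    print("RSTS vs RSSS:", wilcoxon(rsts, rsss))
```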

15.
Perception; 53(4): 219-239, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38304994

ABSTRACT

This study investigates the crossmodal associations between naturally occurring sound textures and tactile textures. Previous research has demonstrated associations between low-level sensory features of sound and touch, as well as higher-level, cognitively mediated associations involving language, emotions, and metaphors. However, stimuli like textures, which are found in both modalities, have received less attention. In this study, we conducted two experiments: a free association task and a two-alternative forced-choice task using everyday tactile textures and sound textures selected from natural sound categories. The results revealed consistent crossmodal associations between the textures of the two modalities. Participants tended to associate more of the sound textures (e.g., wood shavings and sandpaper) with tactile surfaces that were rated as harder, rougher, and intermediate on the sticky-slippery scale. While some participants based the auditory-tactile association on sensory features, others made the associations based on semantic relationships, co-occurrence in nature, and emotional mediation. Interestingly, the statistical features of the sound textures (mean, variance, kurtosis, power, autocorrelation, and correlation) did not correlate significantly with the crossmodal associations, indicating a higher-level association. This study provides insight into auditory-tactile associations by highlighting the role of sensory and emotional (or cognitive) factors in prompting them.
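
The listed texture statistics, computed here over the amplitude envelope as one plausible reading (the paper does not spell out its exact computation):

```python
# The listed statistics computed over a sound's amplitude envelope; the paper
# does not spell out its computation, so treat this as one plausible reading.
import numpy as np
from scipy.signal import hilbert
from scipy.stats import kurtosis

def texture_stats(snd, fs, lag_s=0.010):
    env = np.abs(hilbert(snd))                       # amplitude envelope
    lag = int(lag_s * fs)
    ac = np.corrcoef(env[:-lag], env[lag:])[0, 1]    # envelope autocorrelation
    return dict(mean=env.mean(), var=env.var(), kurt=kurtosis(env),
                power=np.mean(snd ** 2), autocorr=ac)

fs = 22050
snd = np.random.default_rng(8).standard_normal(fs)   # 1 s noise stand-in
print(texture_stats(snd, fs))
```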


Subject(s)
Touch Perception, Touch, Humans, Sound, Semantics, Attention
16.
Adv Exp Med Biol; 1455: 227-256, 2024.
Article in English | MEDLINE | ID: mdl-38918355

ABSTRACT

The aim of this chapter is to give an overview of how the perception of rhythmic temporal regularity such as a regular beat in music can be studied in human adults, human newborns, and nonhuman primates using event-related brain potentials (ERPs). First, we discuss different aspects of temporal structure in general, and musical rhythm in particular, and we discuss the possible mechanisms underlying the perception of regularity (e.g., a beat) in rhythm. Additionally, we highlight the importance of dissociating beat perception from the perception of other types of structure in rhythm, such as predictable sequences of temporal intervals, ordinal structure, and rhythmic grouping. In the second section of the chapter, we start with a discussion of auditory ERPs elicited by infrequent and frequent sounds: ERP responses to regularity violations, such as mismatch negativity (MMN), N2b, and P3, as well as early sensory responses to sounds, such as P1 and N1, have been shown to be instrumental in probing beat perception. Subsequently, we discuss how beat perception can be probed by comparing ERP responses to sounds in regular and irregular sequences, and by comparing ERP responses to sounds in different metrical positions in a rhythm, such as on and off the beat or on strong and weak beats. Finally, we will discuss previous research that has used the aforementioned ERPs and paradigms to study beat perception in human adults, human newborns, and nonhuman primates. In doing so, we consider the possible pitfalls and prospects of the technique, as well as future perspectives.


Subject(s)
Auditory Perception, Music, Primates, Humans, Animals, Auditory Perception/physiology, Newborn Infant, Adult, Primates/physiology, Auditory Evoked Potentials/physiology, Acoustic Stimulation/methods, Evoked Potentials/physiology, Electroencephalography
17.
Proc Natl Acad Sci U S A; 118(48), 2021 Nov 30.
Article in English | MEDLINE | ID: mdl-34819369

ABSTRACT

To guide behavior, perceptual systems must operate on intrinsically ambiguous sensory input. Observers are usually able to acknowledge the uncertainty of their perception, but in some cases, they critically fail to do so. Here, we show that a physiological correlate of ambiguity can be found in pupil dilation even when the observer is not aware of such ambiguity. We used a well-known auditory ambiguous stimulus, known as the tritone paradox, which can induce the perception of an upward or downward pitch shift within the same individual. In two experiments, behavioral responses showed that listeners could not explicitly access the ambiguity in this stimulus, even though their responses varied from trial to trial. However, pupil dilation was larger for the more ambiguous cases. The ambiguity of the stimulus for each listener was indexed by the entropy of behavioral responses, and this entropy was also a significant predictor of pupil size. In particular, entropy explained additional variation in pupil size independent of the explicit judgment of confidence in the specific situation that we investigated, in which the two measures were decoupled. Our data thus suggest that stimulus ambiguity is implicitly represented in the brain even without explicit awareness of this ambiguity.
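
A sketch of the entropy analysis: index each stimulus's ambiguity by the entropy of binary up/down responses, then regress pupil size on entropy. All data below are simulated.

```python
# Index each stimulus's ambiguity by the entropy of binary up/down responses,
# then regress pupil size on entropy. All data are simulated.
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(9)
p_up = rng.uniform(0.05, 0.95, 12)                 # per-stimulus P("up")
entropy = -(p_up * np.log2(p_up) + (1 - p_up) * np.log2(1 - p_up))

pupil = 3.0 + 0.4 * entropy + rng.normal(0, 0.1, 12)  # ambiguity dilates pupil

fit = linregress(entropy, pupil)
print(f"slope = {fit.slope:.2f}, p = {fit.pvalue:.3f}")
```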


Subject(s)
Auditory Perception/physiology, Awareness/physiology, Pupil/physiology, Acoustic Stimulation/methods, Adult, Female, Humans, Judgment, Male, Uncertainty, Visual Perception/physiology
18.
Proc Natl Acad Sci U S A; 118(29), 2021 Jul 20.
Article in English | MEDLINE | ID: mdl-34272278

ABSTRACT

Rhythm perception is fundamental to speech and music. Humans readily recognize a rhythmic pattern, such as that of a familiar song, independently of the tempo at which it occurs. This shows that our perception of auditory rhythms is flexible, relying on global relational patterns more than on the absolute durations of specific time intervals. Given that auditory rhythm perception in humans engages a complex auditory-motor cortical network even in the absence of movement and that the evolution of vocal learning is accompanied by strengthening of forebrain auditory-motor pathways, we hypothesize that vocal learning species share our perceptual facility for relational rhythm processing. We test this by asking whether the best-studied animal model for vocal learning, the zebra finch, can recognize a fundamental rhythmic pattern-equal timing between event onsets (isochrony)-based on temporal relations between intervals rather than on absolute durations. Prior work suggests that vocal nonlearners (pigeons and rats) are quite limited in this regard and are biased to attend to absolute durations when listening to rhythmic sequences. In contrast, using naturalistic sounds at multiple stimulus rates, we show that male zebra finches robustly recognize isochrony independent of absolute time intervals, even at rates distant from those used in training. Our findings highlight the importance of comparative studies of rhythmic processing and suggest that vocal learning species are promising animal models for key aspects of human rhythm perception. Such models are needed to understand the neural mechanisms behind the positive effect of rhythm on certain speech and movement disorders.
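
The stimulus logic can be sketched as isochronous versus jittered onset sequences at several rates: relative intervals are rate-invariant for the isochronous pattern, which is what a relational listener can exploit. Values below are illustrative.

```python
# Isochronous versus jittered onset sequences at several rates. Relative
# intervals are rate-invariant for the isochronous pattern. Values are
# illustrative, not the study's stimuli.
import numpy as np

def onsets(base_ioi, n_events, isochronous=True, jitter=0.3, seed=0):
    rng = np.random.default_rng(seed)
    iois = np.full(n_events - 1, base_ioi)
    if not isochronous:                       # perturb intervals around the mean
        iois = iois * rng.uniform(1 - jitter, 1 + jitter, iois.size)
    return np.concatenate([[0.0], np.cumsum(iois)])

for ioi in (0.12, 0.18, 0.27):                # three stimulus rates (s)
    iso = np.diff(onsets(ioi, 8)) / ioi
    jit = np.diff(onsets(ioi, 8, isochronous=False)) / ioi
    print(f"IOI {ioi:.2f} s  iso: {np.round(iso, 2)}  jittered: {np.round(jit, 2)}")
```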


Subject(s)
Auditory Perception, Finches/physiology, Animals, Auditory Cortex/physiology, Female, Learning, Male, Physiological Pattern Recognition, Sound, Voice
19.
Proc Natl Acad Sci U S A; 118(29), 2021 Jul 20.
Article in English | MEDLINE | ID: mdl-34266949

ABSTRACT

The perception of sensory events can be enhanced or suppressed by the surrounding spatial and temporal context in ways that facilitate the detection of novel objects and contribute to the perceptual constancy of those objects under variable conditions. In the auditory system, the phenomenon known as auditory enhancement reflects a general principle of contrast enhancement, in which a target sound embedded within a background sound becomes perceptually more salient if the background is presented first by itself. This effect is highly robust, producing an effective enhancement of the target of up to 25 dB (more than two orders of magnitude in intensity), depending on the task. Despite the importance of the effect, neural correlates of auditory contrast enhancement have yet to be identified in humans. Here, we used the auditory steady-state response to probe the neural representation of a target sound under conditions of enhancement. The probe was simultaneously modulated in amplitude with two modulation frequencies to distinguish cortical from subcortical responses. We found robust correlates for neural enhancement in the auditory cortical, but not subcortical, responses. Our findings provide empirical support for a previously unverified theory of auditory enhancement based on neural adaptation of inhibition and point to approaches for improving sensory prostheses for hearing loss, such as hearing aids and cochlear implants.
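
A sketch of the dual-modulation idea: one probe amplitude-modulated at two rates simultaneously, with the steady-state amplitude read out at each rate by FFT. The carrier and rates here are placeholders; the study used two modulation frequencies to distinguish cortical from subcortical responses.

```python
# One probe amplitude-modulated at two rates at once, with steady-state
# amplitude read out at each rate by FFT. Carrier and rates are placeholders.
import numpy as np

fs, dur = 8000, 2.0
t = np.arange(0, dur, 1 / fs)
f_c, f1, f2 = 1000.0, 41.0, 87.0          # carrier + two modulation rates (Hz)
probe = ((1 + 0.5 * np.sin(2 * np.pi * f1 * t))
         * (1 + 0.5 * np.sin(2 * np.pi * f2 * t))
         * np.sin(2 * np.pi * f_c * t))

# Envelope-following "response" stand-in: rectified probe plus noise.
resp = np.abs(probe) + np.random.default_rng(10).normal(0, 0.1, t.size)
spec = np.abs(np.fft.rfft(resp)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)
for f in (f1, f2):
    amp = spec[np.argmin(np.abs(freqs - f))]
    print(f"response amplitude at {f:.0f} Hz: {amp:.3f}")
```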


Subject(s)
Auditory Cortex/physiology, Auditory Perception, Acoustic Stimulation, Adolescent, Adult, Auditory Threshold, Behavior, Electroencephalography, Female, Hearing, Humans, Male, Sound, Young Adult
20.
Psychiatry Clin Neurosci; 78(5): 282-290, 2024 May.
Article in English | MEDLINE | ID: mdl-38321640

ABSTRACT

AIM: The current study aimed to infer neurophysiological mechanisms of auditory processing in children with Rett syndrome (RTT), a rare neurodevelopmental disorder caused by MECP2 mutations. We examined two brain responses elicited by 40-Hz click trains: the auditory steady-state response (ASSR), which reflects fine temporal analysis of auditory input, and the sustained wave (SW), which is associated with integral processing of the auditory signal. METHODS: We recorded the electroencephalogram in 43 patients with RTT (aged 2.92-17.1 years) and 43 typically developing children of the same age during 40-Hz click train auditory stimulation, which lasted for 500 ms and was presented with interstimulus intervals of 500 to 800 ms. A mixed-model ANCOVA with age as a covariate was used to compare the amplitude of the ASSR and SW between groups, taking into account the temporal dynamics and topography of the responses. RESULTS: The amplitude of the SW was atypically small in children with RTT starting from early childhood, with the difference from typically developing children decreasing with age. The ASSR showed a different pattern of developmental changes: the between-group difference was negligible in early childhood but increased with age, as the ASSR increased in the typically developing group but not in those with RTT. Moreover, the ASSR was associated with expressive speech development in patients, such that children who could use words had a more pronounced ASSR. CONCLUSION: The ASSR and SW show promise as noninvasive electrophysiological biomarkers of auditory processing that have clinical relevance and can shed light on the link between genetic impairment and the RTT phenotype.


Subject(s)
Auditory Perception, Electroencephalography, Auditory Evoked Potentials, Rett Syndrome, Humans, Rett Syndrome/physiopathology, Female, Child, Auditory Evoked Potentials/physiology, Adolescent, Preschool Child, Auditory Perception/physiology, Acoustic Stimulation