1 - 20 of 16,647
1.
eNeuro ; 11(5)2024 May.
Article En | MEDLINE | ID: mdl-38702194

Elicited upon violation of regularity in stimulus presentation, mismatch negativity (MMN) reflects the brain's ability to perform automatic comparisons between consecutive stimuli and provides an electrophysiological index of sensory error detection, whereas P300 is associated with cognitive processes such as updating of working memory. To date, there has been extensive research on the roles of MMN and P300 individually, because of their potential to be used as clinical markers of consciousness and attention, respectively. Here, we explore, with an unsupervised and rigorous source estimation approach, the underlying cortical generators of MMN and P300 in the context of prediction error propagation along the hierarchies of brain information processing in healthy human participants. Existing methods of characterizing the two ERPs involve only approximate estimations of their amplitudes and latencies based on specific sensors of interest. Our objective is twofold: first, we introduce a novel data-driven unsupervised approach to compute latencies and amplitudes of ERP components accurately on an individual-subject basis and reconfirm earlier findings. Second, we demonstrate that in multisensory environments, MMN generators reflect a significant overlap of "modality-specific" and "modality-independent" information processing, while P300 generators mark a shift toward completely "modality-independent" processing. Advancing the earlier understanding that multisensory contexts speed up early sensory processing, our EEG experiments reveal that this temporal facilitation extends even to the later components of prediction error processing. Such knowledge can be of value to clinical research for characterizing key stages of lifespan aging, schizophrenia, and depression.


Electroencephalography , Event-Related Potentials, P300 , Humans , Male , Female , Adult , Electroencephalography/methods , Young Adult , Event-Related Potentials, P300/physiology , Auditory Perception/physiology , Cerebral Cortex/physiology , Acoustic Stimulation/methods , Evoked Potentials/physiology
2.
Brain Behav ; 14(5): e3517, 2024 May.
Article En | MEDLINE | ID: mdl-38702896

INTRODUCTION: Attention and working memory are key cognitive functions that allow us to select and maintain information in mind for a short time, and they are essential for daily life and, in particular, for learning and academic performance. Musical training has been shown to improve working memory performance, but it is still unclear if and how the neural mechanisms of working memory, and particularly attention, are implicated in this process. In this work, we aimed to identify the oscillatory signature of bimodal attention and working memory that contributes to improved working memory in musically trained children. MATERIALS AND METHODS: We recruited children with and without musical training and asked them to complete a bimodal (auditory/visual) attention and working memory task while their brain activity was measured using electroencephalography. Behavioral, time-frequency, and source reconstruction analyses were performed. RESULTS: Overall, musically trained children performed better on the task than children without musical training. When comparing the two groups, we found modulations in the alpha band before and at the beginning of stimulus onset in frontal and parietal regions, which correlated with correct responses to the attended modality. Moreover, during the final phase of stimulus presentation, we found theta- and alpha-band modulations in left frontal and right parietal regions that correlated with correct responses independent of attention condition. CONCLUSIONS: These results suggest that musically trained children have improved neuronal mechanisms for both attention allocation and memory encoding. Our results may inform the development of interventions for people with attention and working memory difficulties.


Alpha Rhythm , Attention , Memory, Short-Term , Music , Theta Rhythm , Humans , Memory, Short-Term/physiology , Attention/physiology , Male , Female , Child , Theta Rhythm/physiology , Alpha Rhythm/physiology , Auditory Perception/physiology , Electroencephalography , Visual Perception/physiology , Brain/physiology
3.
J Acoust Soc Am ; 155(5): 3101-3117, 2024 May 01.
Article En | MEDLINE | ID: mdl-38722101

Cochlear implant (CI) users often report being unsatisfied with music listening through their hearing device. Vibrotactile stimulation could help alleviate those challenges. Previous research has shown that normal-hearing listeners give musical stimuli higher preference ratings when concurrent vibrotactile stimulation is congruent with the corresponding auditory signal in intensity and timing than when it is incongruent. However, it is not known whether this also holds for CI users. Therefore, in this experiment, we presented 18 CI users and 24 normal-hearing listeners with five melodies and five different audio-to-tactile maps. Each map varied the congruence between the audio and tactile signals in intensity, fundamental frequency, and timing. Participants were asked to rate the maps from zero to 100 based on preference. Almost all normal-hearing listeners, as well as a subset of the CI users, preferred tactile stimulation that was congruent with the audio in intensity and timing. However, many CI users showed no preference difference between timing-aligned and timing-unaligned stimuli. The results provide evidence that vibrotactile enhancement of music enjoyment could be a solution for some CI users; however, more research is needed to understand which CI users can benefit from it most.


Acoustic Stimulation , Auditory Perception , Cochlear Implants , Music , Humans , Female , Male , Adult , Middle Aged , Aged , Auditory Perception/physiology , Young Adult , Patient Preference , Cochlear Implantation/instrumentation , Touch Perception/physiology , Vibration , Touch
4.
JASA Express Lett ; 4(5)2024 May 01.
Article En | MEDLINE | ID: mdl-38727569

Bimodal stimulation, a cochlear implant (CI) in one ear and a hearing aid (HA) in the other, provides highly asymmetrical inputs. To understand how this asymmetry affects perception and memory, forward and backward digit spans were measured in nine bimodal listeners. Spans were unchanged from monotic to diotic presentation, but there was an average two-digit decrease for dichotic presentation, with some extreme cases of spans dropping to zero. The interaurally asymmetrical decreases were not predicted by the device or the better-functioning ear. Bimodal listeners can therefore demonstrate a strong ear dominance that diminishes dichotic memory recall even when monaural perception is intact.


Cochlear Implants , Humans , Middle Aged , Aged , Male , Female , Dichotic Listening Tests , Adult , Auditory Perception/physiology , Hearing Aids
5.
Multisens Res ; 37(2): 89-124, 2024 Feb 13.
Article En | MEDLINE | ID: mdl-38714311

Prior studies of routine action video game play have demonstrated improvements in a variety of cognitive processes, including attentional tasks. However, there is little evidence that the cognitive benefits of playing action video games generalize from simplified unisensory stimuli to multisensory scenes, a fundamental characteristic of natural, everyday environments. The present study addressed whether video game experience affects crossmodal congruency effects when searching through such multisensory scenes. We compared the performance of action video game players (AVGPs) and non-video game players (NVGPs) on a visual search task for objects embedded in video clips of realistic scenes. We conducted two identical online experiments with gender-balanced samples, for a total of N = 130. Overall, the data replicated previous findings of search benefits when visual targets were accompanied by semantically congruent auditory events, compared to neutral or incongruent ones. However, AVGPs did not consistently outperform NVGPs in the overall search task, nor did they use multisensory cues more efficiently. Exploratory analyses with self-reported gender as a variable revealed a potential difference in response strategy between experienced male and female AVGPs when dealing with crossmodal cues. These findings suggest that generalizing the advantage of AVG experience to realistic, crossmodal situations should be done with caution and with gender-related differences in mind.


Attention , Video Games , Visual Perception , Humans , Male , Female , Visual Perception/physiology , Young Adult , Adult , Attention/physiology , Auditory Perception/physiology , Photic Stimulation , Adolescent , Reaction Time/physiology , Cues , Acoustic Stimulation
6.
Multisens Res ; 37(2): 143-162, 2024 Apr 30.
Article En | MEDLINE | ID: mdl-38714315

A vital heuristic for judging whether audio-visual signals arise from the same event is the temporal coincidence of the respective signals. Previous research has highlighted a process whereby the perception of simultaneity rapidly recalibrates to account for differences in the physical temporal offsets of stimuli. The current paper investigated whether rapid recalibration also occurs in response to differences in central arrival latencies, driven by visual-intensity-dependent processing times. In a behavioural experiment, observers completed temporal-order judgement (TOJ), simultaneity judgement (SJ), and simple reaction-time (RT) tasks, responding to audio-visual trials that were preceded by other audio-visual trials with either a bright or dim visual stimulus. The point of subjective simultaneity shifted with the visual intensity of the preceding stimulus in the TOJ task but not the SJ task, while the RT data revealed no effect of preceding intensity. Our data therefore provide some evidence that the perception of simultaneity rapidly recalibrates based on stimulus intensity.


Acoustic Stimulation , Auditory Perception , Photic Stimulation , Reaction Time , Visual Perception , Humans , Visual Perception/physiology , Auditory Perception/physiology , Male , Female , Reaction Time/physiology , Adult , Young Adult , Judgment/physiology
7.
JASA Express Lett ; 4(5)2024 May 01.
Article En | MEDLINE | ID: mdl-38717467

A long-standing quest in audition concerns understanding relations between behavioral measures and neural representations of changes in sound intensity. Here, we examined relations between aspects of intensity perception and central neural responses within the inferior colliculus of unanesthetized rabbits (by averaging the population's spike count/level functions). We found parallels between the population's neural output and: (1) how loudness grows with intensity; (2) how loudness grows with duration; (3) how discrimination of intensity improves with increasing sound level; (4) findings that intensity discrimination does not depend on duration; and (5) findings that duration discrimination is a constant fraction of base duration.


Inferior Colliculi , Loudness Perception , Animals , Rabbits , Loudness Perception/physiology , Inferior Colliculi/physiology , Acoustic Stimulation/methods , Discrimination, Psychological/physiology , Auditory Perception/physiology , Neurons/physiology
8.
J Neurodev Disord ; 16(1): 24, 2024 May 08.
Article En | MEDLINE | ID: mdl-38720271

BACKGROUND: Autism spectrum disorder (ASD) is currently diagnosed in approximately 1 in 44 children in the United States, based on a wide array of symptoms, including sensory dysfunction and abnormal language development. Boys are diagnosed ~3.8 times more frequently than girls. Auditory temporal processing is crucial for speech recognition and language development, and its abnormal development may account for ASD language impairments. Sex differences in the development of temporal processing may underlie the differences in language outcomes between male and female children with ASD. Understanding the mechanisms of potential sex differences in temporal processing requires a preclinical model, yet no studies have addressed sex differences in temporal processing across development in any animal model of ASD. METHODS: To fill this major gap, we compared the development of auditory temporal processing in male and female wildtype (WT) and Fmr1 knock-out (KO) mice, a model of Fragile X Syndrome (FXS), a leading genetic cause of ASD-associated behaviors. Using epidural screw electrodes, we recorded auditory event-related potentials (ERPs) and auditory temporal processing with a gap-in-noise auditory steady-state response (ASSR) paradigm at young (postnatal day (p)21 and p30) and adult (p60) ages from both auditory and frontal cortices of awake, freely moving mice. RESULTS: ERP amplitudes were enhanced across development in both sexes of Fmr1 KO mice compared to WT counterparts, with greater enhancement in adult female than adult male KO mice. Gap-ASSR deficits were seen in the frontal, but not auditory, cortex early in development (p21) in female KO mice. Unlike male KO mice, female KO mice showed WT-like temporal processing at p30, and there were no temporal processing deficits in adult mice of either sex.
CONCLUSIONS: These results show a sex difference in the developmental trajectories of temporal processing and hypersensitive responses in Fmr1 KO mice. Male KO mice show slower maturation of temporal processing than females, while female KO mice show stronger hypersensitive responses than males later in development. Differences in the maturation rates of temporal processing and hypersensitive responses during critical periods of development may lead to sex differences in language function, arousal, and anxiety in FXS.


Disease Models, Animal , Evoked Potentials, Auditory , Fragile X Mental Retardation Protein , Fragile X Syndrome , Mice, Knockout , Sex Characteristics , Animals , Fragile X Syndrome/physiopathology , Female , Male , Mice , Evoked Potentials, Auditory/physiology , Fragile X Mental Retardation Protein/genetics , Auditory Perception/physiology , Autism Spectrum Disorder/physiopathology , Auditory Cortex/physiopathology , Mice, Inbred C57BL
9.
PLoS One ; 19(5): e0303347, 2024.
Article En | MEDLINE | ID: mdl-38805449

Musical compositions are distinguished by their unique rhythmic patterns, determined by subtle differences in how regular beats are subdivided. Precise perception of these subdivisions is essential for discerning nuances in rhythmic patterns. While musical rhythm typically comprises sound elements with a variety of timbres or spectral cues, the impact of such spectral variations on the perception of rhythmic patterns remains unclear. Here, we show that consistency in spectral cues affects perceptual accuracy in discriminating subdivided rhythmic patterns. We conducted online experiments using rhythmic sound sequences consisting of band-passed noise bursts to measure discrimination accuracy. Participants were asked to discriminate between a swing-like rhythm sequence, characterized by a 2:1 interval ratio, and its more or less exaggerated version. This task was also performed under two additional rhythm conditions: inversed-swing rhythm (1:2 ratio) and regular subdivision (1:1 ratio). The center frequency of the band noises was either held constant or alternated between two values. Our results revealed a significant decrease in discrimination accuracy when the center frequency was alternated, irrespective of the rhythm ratio condition. This suggests that rhythm perception is shaped by temporal structure and affected by spectral properties.
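The 2:1 "swing" subdivision this abstract describes can be made concrete with a short sketch (illustrative only; the function and parameter names are not from the paper) that converts a beat length and an interval ratio into noise-burst onset times:

```python
def swing_onsets(n_beats, beat_s=0.6, ratio=(2, 1)):
    """Onset times (s) for a sequence of beats, each subdivided into two
    intervals with the given ratio: (2, 1) gives the swing-like pattern,
    (1, 2) its inversion, and (1, 1) a regular subdivision."""
    a, b = ratio
    first_interval = beat_s * a / (a + b)
    onsets = []
    t = 0.0
    for _ in range(n_beats):
        onsets.extend([t, t + first_interval])  # two bursts per beat
        t += beat_s
    return onsets

# Swing (2:1): the first interval occupies 2/3 of each 0.6-s beat
print(swing_onsets(2, 0.6, (2, 1)))
```

Exaggerating or reducing the ratio (e.g. 2.2:1 vs. 2:1) yields the stimulus pairs a discrimination task would compare.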


Acoustic Stimulation , Auditory Perception , Music , Humans , Male , Female , Adult , Auditory Perception/physiology , Young Adult , Periodicity , Sound , Discrimination, Psychological/physiology
10.
PLoS Biol ; 22(5): e3002631, 2024 May.
Article En | MEDLINE | ID: mdl-38805517

Music and speech are complex and distinct auditory signals that are both foundational to the human experience. The mechanisms underpinning each domain are widely investigated. However, what perceptual mechanism transforms a sound into music or speech, and what basic acoustic information is required to distinguish between them, remain open questions. Here, we hypothesized that a sound's amplitude modulation (AM), an essential temporal acoustic feature driving the auditory system across processing levels, is critical for distinguishing music from speech. Specifically, in contrast to paradigms using naturalistic acoustic signals (which can be challenging to interpret), we used a noise-probing approach to untangle the auditory mechanism: if AM rate and regularity are critical for perceptually distinguishing music and speech, judgments of artificially noise-synthesized, ambiguous audio signals should align with their AM parameters. Across 4 experiments (N = 335), signals with a higher peak AM frequency tended to be judged as speech and those with a lower peak AM frequency as music. Interestingly, this principle was used consistently by all listeners for speech judgments, but only by musically sophisticated listeners for music. In addition, signals with more regular AM were judged as music over speech, and this feature was more critical for music judgments, regardless of musical sophistication. The data suggest that the auditory system can rely on an acoustic property as low-level as AM to distinguish music from speech, a simple principle that invites both neurophysiological and evolutionary experiments and speculation.
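The peak-AM-frequency criterion at the heart of this abstract can be illustrated with a toy estimator (a sketch, not the authors' analysis pipeline; the function name, candidate rates, and synthetic envelope are illustrative). It picks the modulation rate whose discrete Fourier component dominates a signal's amplitude envelope:

```python
import cmath
import math

def peak_am_frequency(envelope, fs, candidates):
    """Return the candidate modulation rate (Hz) with the largest
    DFT magnitude in the amplitude envelope, after removing DC."""
    n = len(envelope)
    mean = sum(envelope) / n
    centered = [e - mean for e in envelope]

    def magnitude(f):
        acc = sum(x * cmath.exp(-2j * math.pi * f * k / fs)
                  for k, x in enumerate(centered))
        return abs(acc)

    return max(candidates, key=magnitude)

# Synthetic envelope modulated at 4 Hz (a speech-like AM rate)
fs = 100
env = [0.5 + 0.5 * math.sin(2 * math.pi * 4 * k / fs) for k in range(2 * fs)]
print(peak_am_frequency(env, fs, [1, 2, 4, 8, 16]))  # → 4
```

Under the paper's hypothesis, envelopes peaking at higher AM rates would bias judgments toward speech, and lower rates toward music.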


Acoustic Stimulation , Auditory Perception , Music , Speech Perception , Humans , Male , Female , Adult , Auditory Perception/physiology , Acoustic Stimulation/methods , Speech Perception/physiology , Young Adult , Speech/physiology , Adolescent
11.
Sci Rep ; 14(1): 12203, 2024 May 28.
Article En | MEDLINE | ID: mdl-38806554

Developmental Coordination Disorder (DCD) is a common neurodevelopmental disorder featuring deficits in motor coordination and motor timing among children. Deficits in rhythmic tracking, including perceptually tracking and synchronizing action with auditory rhythms, have been studied in a wide range of motor disorders, providing a foundation for rehabilitation programs incorporating auditory rhythms. We tested whether DCD also features these auditory-motor deficits among 7- to 10-year-old children. In a speech recognition task with no overt motor component, modulating the speech rhythm interfered more with the performance of children at risk for DCD than with that of typically developing (TD) children. A set of auditory-motor tapping tasks further showed that, although children at risk for DCD performed worse than TD children in general, the presence of an auditory rhythmic cue (an isochronous metronome or music) facilitated the temporal consistency of their tapping. Finally, accuracy in recognizing rhythmically modulated speech and tapping consistency both correlated with performance on a standardized motor assessment. Together, the results show that auditory rhythmic regularity benefits auditory perception and auditory-motor coordination in children at risk for DCD, providing a foundation for future clinical studies to develop evidence-based interventions involving auditory-motor rhythmic coordination for children with DCD.


Auditory Perception , Motor Skills Disorders , Humans , Child , Motor Skills Disorders/physiopathology , Female , Male , Auditory Perception/physiology , Psychomotor Performance/physiology , Acoustic Stimulation , Speech Perception/physiology
12.
Codas ; 36(4): e20230047, 2024.
Article Pt, En | MEDLINE | ID: mdl-38808777

PURPOSE: To compare the acoustic measurements Cepstral Peak Prominence Smoothed (CPPS) and Acoustic Voice Quality Index (AVQI) in children with normal and altered voices, to relate them to auditory-perceptual judgment (APJ), and to establish cut-off points. METHODS: Vocal recordings of sustained vowel and number counting tasks from 185 children were selected from a database and submitted to acoustic analysis, with extraction of CPPS and AVQI measurements, and to APJ. APJ was performed individually for each task, with samples classified as normal or altered, and for the two tasks together, defining whether the child would pass or fail a vocal screening. RESULTS: Children with altered APJ who failed the screening had lower CPPS values and higher AVQI values than those with normal APJ who passed the screening. APJ of the sustained vowel task was related to CPPS and AVQI, whereas APJ of the number counting task was related only to AVQI and the number-counting CPPS. The cut-off points that differentiate children with and without vocal deviation are 14.07 for the vowel CPPS, 7.62 for the number-counting CPPS, and 2.01 for the AVQI. CONCLUSION: Children with altered voices have higher AVQI values and lower CPPS values than children with voices within the normal range. The acoustic measurements were related to the auditory-perceptual judgment of vocal quality in the sustained vowel task, whereas the number counting task was related only to the AVQI and number-counting CPPS. The three measures were similar in identifying non-deviated and dysphonic voices.
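As a rough illustration, the reported cut-offs can be wired into a screening rule. This is a sketch only: the names are hypothetical, and combining the three flags with "any measure deviated" is an assumption, not the authors' procedure (lower CPPS and higher AVQI indicate deviation, per the abstract):

```python
# Cut-off points reported in the abstract
CPPS_VOWEL_CUT = 14.07    # below this → deviated
CPPS_NUMBERS_CUT = 7.62   # below this → deviated
AVQI_CUT = 2.01           # above this → deviated

def flag_vocal_deviation(cpps_vowel, cpps_numbers, avqi):
    """Flag each acoustic measure against its cut-off; here a child
    'fails screening' if any single measure is deviated (assumed rule)."""
    flags = {
        "cpps_vowel": cpps_vowel < CPPS_VOWEL_CUT,
        "cpps_numbers": cpps_numbers < CPPS_NUMBERS_CUT,
        "avqi": avqi > AVQI_CUT,
    }
    flags["fail_screening"] = any(flags.values())
    return flags

print(flag_vocal_deviation(13.0, 8.0, 1.5)["fail_screening"])  # → True
```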




Speech Acoustics , Voice Quality , Humans , Voice Quality/physiology , Child , Female , Male , Auditory Perception/physiology , Voice Disorders/diagnosis , Voice Disorders/physiopathology , Adolescent , Case-Control Studies , Speech Production Measurement , Judgment
13.
Nat Commun ; 15(1): 4313, 2024 May 21.
Article En | MEDLINE | ID: mdl-38773109

Our brain is constantly extracting, predicting, and recognising key spatiotemporal features of the physical world in order to survive. While neural processing of visuospatial patterns has been extensively studied, the hierarchical brain mechanisms underlying conscious recognition of auditory sequences and the associated prediction errors remain elusive. Using magnetoencephalography (MEG), we describe the brain functioning of 83 participants during recognition of previously memorised musical sequences and systematic variations. The results show feedforward connections originating from auditory cortices, and extending to the hippocampus, anterior cingulate gyrus, and medial cingulate gyrus. Simultaneously, we observe backward connections operating in the opposite direction. Throughout the sequences, the hippocampus and cingulate gyrus maintain the same hierarchical level, except for the final tone, where the cingulate gyrus assumes the top position within the hierarchy. The evoked responses of memorised sequences and variations engage the same hierarchical brain network but systematically differ in terms of temporal dynamics, strength, and polarity. Furthermore, induced-response analysis shows that alpha and beta power is stronger for the variations, while gamma power is enhanced for the memorised sequences. This study expands on the predictive coding theory by providing quantitative evidence of hierarchical brain mechanisms during conscious memory and predictive processing of auditory sequences.


Auditory Cortex , Auditory Perception , Magnetoencephalography , Humans , Male , Female , Adult , Auditory Perception/physiology , Young Adult , Auditory Cortex/physiology , Brain/physiology , Acoustic Stimulation , Brain Mapping , Music , Gyrus Cinguli/physiology , Memory/physiology , Hippocampus/physiology , Recognition, Psychology/physiology
14.
Nat Commun ; 15(1): 4071, 2024 May 22.
Article En | MEDLINE | ID: mdl-38778078

Adaptive behavior requires integrating prior knowledge of action outcomes with sensory evidence when making decisions, while maintaining that prior knowledge for future actions. Because outcome-based and sensory-based decisions are usually tested separately, it is unclear how these processes are integrated in the brain. In a tone frequency discrimination task with two sound durations and asymmetric reward blocks, we found that neurons in the medial prefrontal cortex of male mice represented the additive combination of prior reward expectations and choices. Sensory inputs were selectively decoded from the auditory cortex irrespective of reward priors, and choices from the secondary motor cortex, suggesting that localized computations of task variables are performed within single trials. In contrast, all recorded regions represented the prior values that must be maintained across trials. We propose localized and global computations of task variables on different time scales in the cerebral cortex.


Auditory Cortex , Choice Behavior , Reward , Animals , Male , Choice Behavior/physiology , Mice , Auditory Cortex/physiology , Neurons/physiology , Prefrontal Cortex/physiology , Acoustic Stimulation , Mice, Inbred C57BL , Cerebral Cortex/physiology , Motor Cortex/physiology , Auditory Perception/physiology
15.
PLoS One ; 19(5): e0303565, 2024.
Article En | MEDLINE | ID: mdl-38781127

In this study, we attempted to improve brain-computer interface (BCI) systems by means of auditory stream segregation, in which alternately presented tones are perceived as sequences of different tones (streams). A 3-class BCI using three tone sequences, perceived as three different tone streams, was investigated and evaluated. Each musical tone was generated by a software synthesizer, and stimuli were presented to each user's right ear. Eleven subjects took part in the experiment. Subjects were asked to attend to one of the three streams and to count the number of target stimuli in the attended stream. Meanwhile, 64-channel electroencephalogram (EEG) and two-channel electrooculogram (EOG) signals were recorded at a sampling frequency of 1000 Hz. The measured EEG data were classified based on Riemannian geometry to detect the object of each subject's selective attention. P300 activity was elicited by target stimuli in the segregated tone streams; in five of the eleven subjects, it was elicited only by target stimuli in the attended stream. In a 10-fold cross-validation test, classification accuracy exceeded 80% for five subjects and 75% for nine subjects. For subjects whose accuracy was below 75%, either P300 activity was also elicited by nonattended streams or its amplitude was small. We conclude that BCI systems based on auditory stream segregation can be extended to three classes, which can be detected via a single ear without the aid of any visual modality.


Acoustic Stimulation , Attention , Brain-Computer Interfaces , Electroencephalography , Humans , Male , Female , Electroencephalography/methods , Adult , Attention/physiology , Acoustic Stimulation/methods , Auditory Perception/physiology , Young Adult , Event-Related Potentials, P300/physiology , Electrooculography/methods
16.
Curr Biol ; 34(9): R346-R348, 2024 05 06.
Article En | MEDLINE | ID: mdl-38714161

Animals including humans often react to sounds by involuntarily moving their face and body. A new study shows that facial movements provide a simple and reliable readout of a mouse's hearing ability that is more sensitive than traditional measurements.


Face , Animals , Mice , Face/physiology , Auditory Perception/physiology , Hearing/physiology , Sound , Movement/physiology , Humans
17.
Curr Biol ; 34(10): 2162-2174.e5, 2024 05 20.
Article En | MEDLINE | ID: mdl-38718798

Humans make use of small differences in the timing of sounds at the two ears-interaural time differences (ITDs)-to locate their sources. Despite extensive investigation, the neural representation of ITDs in the human brain remains contentious, particularly the range of ITDs explicitly represented by dedicated neural detectors. Here, using magneto- and electro-encephalography (MEG and EEG), we demonstrate evidence of a sparse neural representation of ITDs in the human cortex. The magnitude of cortical activity to sounds presented via insert earphones oscillated as a function of increasing ITD-within and beyond auditory cortical regions-and listeners rated the perceptual quality of these sounds according to the same oscillating pattern. This pattern was accurately described by a population of model neurons with preferred ITDs constrained to the narrow, sound-frequency-dependent range evident in other mammalian species. When scaled for head size, the distribution of ITD detectors in the human cortex is remarkably like that recorded in vivo from the cortex of rhesus monkeys, another large primate that uses ITDs for source localization. The data resolve a long-standing issue concerning the neural representation of ITDs in humans and suggest a representation that scales optimally with head size and sound frequency.
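For context on the head-size scaling discussed above, the ITD available at a given source azimuth can be sketched with the classic Woodworth spherical-head approximation (not a method from this paper; the head radius and speed-of-sound values are illustrative defaults):

```python
import math

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Woodworth spherical-head approximation of the interaural time
    difference (s) for a far-field source; 0 deg = straight ahead."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / c) * (theta + math.sin(theta))

# Maximum ITD for a source directly to the side of an average adult head
print(round(woodworth_itd(90) * 1e6), "microseconds")  # roughly 656 µs
```

A larger head radius stretches the same formula to larger maximal ITDs, which is the sense in which an ITD representation can "scale for head size" across species.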


Auditory Cortex , Cues , Sound Localization , Auditory Cortex/physiology , Humans , Male , Sound Localization/physiology , Animals , Female , Adult , Electroencephalography , Macaca mulatta/physiology , Magnetoencephalography , Acoustic Stimulation , Young Adult , Auditory Perception/physiology
18.
Curr Biol ; 34(10): 2200-2211.e6, 2024 05 20.
Article En | MEDLINE | ID: mdl-38733991

The activity of neurons in sensory areas sometimes covaries with upcoming choices in decision-making tasks. However, the prevalence, causal origin, and functional role of choice-related activity remain controversial. Understanding the circuit-logic of decision signals in sensory areas will require understanding their laminar specificity, but simultaneous recordings of neural activity across the cortical layers in forced-choice discrimination tasks have not yet been performed. Here, we describe neural activity from such recordings in the auditory cortex of mice during a frequency discrimination task with delayed report, which, as we show, requires the auditory cortex. Stimulus-related information was widely distributed across layers but disappeared very quickly after stimulus offset. Choice selectivity emerged toward the end of the delay period-suggesting a top-down origin-but only in the deep layers. Early stimulus-selective and late choice-selective deep neural ensembles were correlated, suggesting that the choice-selective signal fed back to the auditory cortex is not just action specific but develops as a consequence of the sensory-motor contingency imposed by the task.


Auditory Cortex , Choice Behavior , Animals , Auditory Cortex/physiology , Mice , Choice Behavior/physiology , Acoustic Stimulation , Mice, Inbred C57BL , Auditory Perception/physiology , Male , Neurons/physiology
19.
Cereb Cortex ; 34(5)2024 May 02.
Article En | MEDLINE | ID: mdl-38700440

While the auditory and visual systems each provide distinct information to our brain, they also work together to process and prioritize input under ever-changing conditions. Previous studies highlighted the trade-off between auditory change detection and visual selective attention; however, the relationship between them is still unclear. Here, we recorded electroencephalography signals from 106 healthy adults in three experiments. Our findings revealed a positive correlation at the population level between the amplitudes of event-related potential indices associated with auditory change detection (mismatch negativity) and visual selective attention (posterior contralateral N2) when elicited in separate tasks. This correlation persisted even when participants performed a visual task while disregarding simultaneous auditory stimuli. Interestingly, as visual attention demand increased, participants whose posterior contralateral N2 amplitude increased the most exhibited the largest reduction in mismatch negativity, suggesting a within-subject trade-off between the two processes. Taken together, our results suggest an intimate relationship and potential shared mechanism between auditory change detection and visual selective attention. We liken this to a total capacity limit that varies between individuals: it could drive correlated individual differences in auditory change detection and visual selective attention, as well as within-subject competition between the two, with task-based increases in visual attention demand reducing auditory change-detection sensitivity.


Attention , Auditory Perception , Electroencephalography , Visual Perception , Humans , Attention/physiology , Male , Female , Young Adult , Adult , Auditory Perception/physiology , Visual Perception/physiology , Acoustic Stimulation/methods , Photic Stimulation/methods , Evoked Potentials/physiology , Brain/physiology , Adolescent
20.
PLoS One ; 19(5): e0303309, 2024.
Article En | MEDLINE | ID: mdl-38748741

Catchiness and groove are common phenomena when listening to popular music. Catchiness may be a potential factor in experiencing groove, but quantitative evidence for such a relationship is missing. To examine whether and how catchiness influences a key component of groove, the pleasurable urge to move to music (PLUMM), we conducted a listening experiment with 450 participants and 240 short popular-music clips of drum patterns, bass lines, or keys/guitar parts. We found four main results: (1) catchiness as measured in a recognition task was only weakly associated with participants' perceived catchiness of music; we showed that perceived catchiness is multi-dimensional, subjective, and strongly associated with pleasure. (2) We found a sizeable positive relationship between PLUMM and perceived catchiness. (3) However, the relationship is complex, as further analysis showed that pleasure suppresses the effect of perceived catchiness on the urge to move. (4) We compared common factors that promote perceived catchiness and PLUMM and found that listener-related variables contributed similarly, while the effects of musical content diverged. Overall, our data suggest that music perceived as catchy is likely to foster groove experiences.


Auditory Perception , Music , Pleasure , Humans , Music/psychology , Female , Male , Adult , Auditory Perception/physiology , Young Adult , Pleasure/physiology , Adolescent , Acoustic Stimulation
...