Results 1 - 20 of 32,811
1.
J Acoust Soc Am ; 155(5): 3101-3117, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38722101

ABSTRACT

Cochlear implant (CI) users often report being dissatisfied with music listening through their hearing device. Vibrotactile stimulation could help alleviate those challenges. Previous research has shown that normal-hearing listeners gave musical stimuli higher preference ratings when concurrent vibrotactile stimulation was congruent with the corresponding auditory signal in intensity and timing than when it was incongruent. However, it is not known whether this is also the case for CI users. Therefore, in this experiment, we presented 18 CI users and 24 normal-hearing listeners with five melodies and five different audio-to-tactile maps. Each map varied the congruence between the audio and tactile signals in intensity, fundamental frequency, and timing. Participants were asked to rate the maps from zero to 100 based on preference. Almost all normal-hearing listeners, as well as a subset of the CI users, preferred tactile stimulation that was congruent with the audio in intensity and timing. However, many CI users showed no preference between timing-aligned and timing-unaligned stimuli. The results provide evidence that vibrotactile enhancement of music enjoyment could be a solution for some CI users; however, more research is needed to understand which CI users can benefit from it most.


Subject(s)
Acoustic Stimulation , Auditory Perception , Cochlear Implants , Music , Humans , Female , Male , Adult , Middle Aged , Aged , Auditory Perception/physiology , Young Adult , Patient Preference , Cochlear Implantation/instrumentation , Touch Perception/physiology , Vibration , Touch
2.
J Neurodev Disord ; 16(1): 24, 2024 May 08.
Article in English | MEDLINE | ID: mdl-38720271

ABSTRACT

BACKGROUND: Autism spectrum disorder (ASD) is currently diagnosed in approximately 1 in 44 children in the United States, based on a wide array of symptoms, including sensory dysfunction and abnormal language development. Boys are diagnosed ~3.8 times more frequently than girls. Auditory temporal processing is crucial for speech recognition and language development. Abnormal development of temporal processing may account for ASD language impairments. Sex differences in the development of temporal processing may underlie the differences in language outcomes in male and female children with ASD. Understanding the mechanisms of potential sex differences in temporal processing requires a preclinical model. However, no studies have addressed sex differences in temporal processing across development in any animal model of ASD. METHODS: To fill this major gap, we compared the development of auditory temporal processing in male and female wildtype (WT) and Fmr1 knock-out (KO) mice, a model of Fragile X Syndrome (FXS), a leading genetic cause of ASD-associated behaviors. Using epidural screw electrodes, we recorded auditory event-related potentials (ERPs) and auditory temporal processing with a gap-in-noise auditory steady state response (ASSR) paradigm at young (postnatal day (p)21 and p30) and adult (p60) ages from both auditory and frontal cortices of awake, freely moving mice. RESULTS: ERP amplitudes were enhanced in both sexes of Fmr1 KO mice across development compared to WT counterparts, with greater enhancement in adult female than adult male KO mice. Gap-ASSR deficits were seen in the frontal, but not auditory, cortex in early development (p21) in female KO mice. Unlike male KO mice, female KO mice show WT-like temporal processing at p30. There were no temporal processing deficits in adult mice of either sex.
CONCLUSIONS: These results show a sex difference in the developmental trajectories of temporal processing and hypersensitive responses in Fmr1 KO mice. Male KO mice show slower maturation of temporal processing than females. Female KO mice show stronger hypersensitive responses than males later in development. The differences in maturation rates of temporal processing and hypersensitive responses during various critical periods of development may lead to sex differences in language function, arousal and anxiety in FXS.


Subject(s)
Disease Models, Animal , Evoked Potentials, Auditory , Fragile X Mental Retardation Protein , Fragile X Syndrome , Mice, Knockout , Sex Characteristics , Animals , Fragile X Syndrome/physiopathology , Female , Male , Mice , Evoked Potentials, Auditory/physiology , Fragile X Mental Retardation Protein/genetics , Auditory Perception/physiology , Autism Spectrum Disorder/physiopathology , Auditory Cortex/physiopathology , Mice, Inbred C57BL
3.
Headache ; 64(5): 482-493, 2024 May.
Article in English | MEDLINE | ID: mdl-38693749

ABSTRACT

OBJECTIVE: In this cross-sectional observational study, we aimed to investigate sensory profiles and multisensory integration processes in women with migraine using virtual dynamic interaction systems. BACKGROUND: Compared to studies on unimodal sensory processing, fewer studies show that multisensory integration differs in patients with migraine. Multisensory integration of visual, auditory, verbal, and haptic modalities has not been evaluated in migraine. METHODS: A 12-min virtual dynamic interaction game consisting of four parts was played by the participants. During the game, the participants were exposed to either visual stimuli only or multisensory stimuli in which auditory, verbal, and haptic stimuli were added to the visual stimuli. A total of 78 women participants (28 with migraine without aura and 50 healthy controls) were enrolled in this prospective exploratory study. Patients with migraine and healthy participants who met the inclusion criteria were randomized separately into visual and multisensory groups: migraine multisensory (14 adults), migraine visual (14 adults), healthy multisensory (25 adults), and healthy visual (25 adults). The Sensory Profile Questionnaire was utilized to assess the participants' sensory profiles. The game scores and survey results were analyzed. RESULTS: With visual stimuli only, the gaming performance scores of patients with migraine without aura were similar to those of the healthy controls, at a median (interquartile range [IQR]) of 81.8 (79.5-85.8) and 80.9 (77.1-84.2), respectively (p = 0.149). The error rate with visual stimuli in patients with migraine without aura was comparable to that of healthy controls, at a median (IQR) of 0.11 (0.08-0.13) and 0.12 (0.10-0.14), respectively (p = 0.166). With multisensory stimulation, the average gaming score was lower in patients with migraine without aura than in healthy individuals (median [IQR] 78.6 [74.0-82.4] vs. 82.2 [78.8-86.3], p = 0.028).
In women with migraine, exposure to a new sensory modality added to the visual stimuli in the fourth, seventh, and tenth rounds (median [IQR] 78.1 [74.1-82.0], 79.7 [77.2-82.5], and 76.5 [70.2-82.1]) yielded lower game scores than visual stimuli only (median [IQR] 82.3 [77.9-87.8], 84.2 [79.7-85.6], and 80.8 [79.0-85.7]; p = 0.044, p = 0.049, p = 0.016). According to the Sensory Profile Questionnaire results, the sensory sensitivity and sensory avoidance scores of patients with migraine (median [IQR] score 45.5 [41.0-54.7] and 47.0 [41.5-51.7]) were significantly higher than those of healthy participants (median [IQR] score 39.0 [34.0-44.2] and 40.0 [34.0-48.0]; p < 0.001, p = 0.001). CONCLUSION: The virtual dynamic game approach showed for the first time that the gaming performance of patients with migraine without aura was negatively affected by the addition of auditory, verbal, and haptic stimuli to visual stimuli. Multisensory integration of sensory modalities, including haptic stimuli, is disturbed even in the interictal period in women with migraine. Virtual games can be employed to assess the impact of sensory problems in the course of the disease. Also, sensory training could be a potential therapy target to improve multisensory processing in migraine.


Subject(s)
Migraine Disorders , Humans , Female , Adult , Cross-Sectional Studies , Migraine Disorders/physiopathology , Prospective Studies , Video Games , Visual Perception/physiology , Young Adult , Virtual Reality , Photic Stimulation/methods , Auditory Perception/physiology
4.
Brain Cogn ; 177: 106161, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38696928

ABSTRACT

Narrative comprehension relies on basic sensory processing abilities, such as visual and auditory processing, with recent evidence that executive functions (EF), which are also engaged during reading, are utilized as well. EF has previously been described as a "supporter" of engaging the auditory and visual modalities in different cognitive tasks, with evidence that this process is less efficient among those with reading difficulties in the absence of a visual stimulus (i.e., while listening to stories). The current study aims to fill the gap concerning how strongly these neural circuits are relied upon, in relation to reading skills, when visual aids (pictures) accompany story listening. Functional MRI data were collected from 44 Hebrew-speaking children aged 8-12 years while listening to stories with vs. without visual stimuli (i.e., pictures). Functional connectivity of networks supporting reading was defined in each condition and compared between the conditions against behavioral reading measures. Lower reading skills were related to greater functional connectivity between EF networks (default mode and memory networks), and between the auditory and memory networks, for stories with vs. without visual stimulation. A greater difference in functional connectivity between the conditions was related to lower reading scores. We conclude that lower reading skills in children may be related to a need for greater scaffolding, i.e., visual stimulation such as pictures depicting the narrative while listening to stories, which may guide future intervention approaches.


Subject(s)
Executive Function , Magnetic Resonance Imaging , Reading , Visual Perception , Humans , Child , Male , Female , Executive Function/physiology , Visual Perception/physiology , Auditory Perception/physiology , Comprehension/physiology , Photic Stimulation/methods , Nerve Net/physiology , Nerve Net/diagnostic imaging , Brain/physiology
5.
Trends Hear ; 28: 23312165241239541, 2024.
Article in English | MEDLINE | ID: mdl-38738337

ABSTRACT

Cochlear synaptopathy, a form of cochlear deafferentation, has been demonstrated in a number of animal species, including non-human primates. Both age and noise exposure contribute to synaptopathy in animal models, indicating that it may be a common type of auditory dysfunction in humans. Temporal bone and auditory physiological data suggest that age and occupational/military noise exposure also lead to synaptopathy in humans. The predicted perceptual consequences of synaptopathy include tinnitus, hyperacusis, and difficulty with speech-in-noise perception. However, confirming the perceptual impacts of this form of cochlear deafferentation presents a particular challenge because synaptopathy can only be confirmed through post-mortem temporal bone analysis and auditory perception is difficult to evaluate in animals. Animal data suggest that deafferentation leads to increased central gain, signs of tinnitus and abnormal loudness perception, and deficits in temporal processing and signal-in-noise detection. If equivalent changes occur in humans following deafferentation, this would be expected to increase the likelihood of developing tinnitus, hyperacusis, and difficulty with speech-in-noise perception. Physiological data from humans is consistent with the hypothesis that deafferentation is associated with increased central gain and a greater likelihood of tinnitus perception, while human data on the relationship between deafferentation and hyperacusis is extremely limited. Many human studies have investigated the relationship between physiological correlates of deafferentation and difficulty with speech-in-noise perception, with mixed findings. A non-linear relationship between deafferentation and speech perception may have contributed to the mixed results. When differences in sample characteristics and study measurements are considered, the findings may be more consistent.


Subject(s)
Cochlea , Speech Perception , Tinnitus , Humans , Cochlea/physiopathology , Tinnitus/physiopathology , Tinnitus/diagnosis , Animals , Speech Perception/physiology , Hyperacusis/physiopathology , Noise/adverse effects , Auditory Perception/physiology , Synapses/physiology , Hearing Loss, Noise-Induced/physiopathology , Hearing Loss, Noise-Induced/diagnosis , Loudness Perception
6.
JASA Express Lett ; 4(5)2024 May 01.
Article in English | MEDLINE | ID: mdl-38727569

ABSTRACT

Bimodal stimulation, a cochlear implant (CI) in one ear and a hearing aid (HA) in the other, provides highly asymmetrical inputs. To understand how asymmetry affects perception and memory, forward and backward digit spans were measured in nine bimodal listeners. Spans were unchanged from monotic to diotic presentation; there was an average two-digit decrease for dichotic presentation with some extreme cases of decreases to zero spans. Interaurally asymmetrical decreases were not predicted based on the device or better-functioning ear. Therefore, bimodal listeners can demonstrate a strong ear dominance, diminishing memory recall dichotically even when perception was intact monaurally.


Subject(s)
Cochlear Implants , Humans , Middle Aged , Aged , Male , Female , Dichotic Listening Tests , Adult , Auditory Perception/physiology , Hearing Aids
7.
Codas ; 36(2): e20230048, 2024.
Article in Portuguese, English | MEDLINE | ID: mdl-38695432

ABSTRACT

PURPOSE: To correlate the results of the behavioral assessment of central auditory processing with those of a self-perception questionnaire administered after acoustically controlled auditory training. METHODS: The study assessed 10 individuals with a mean age of 44.5 years who had suffered mild traumatic brain injury. They underwent behavioral assessment of central auditory processing and answered the Formal Auditory Training self-perception questionnaire after the therapeutic intervention. Its questions address auditory perception, understanding orders, requests to repeat statements, occurrence of misunderstandings, attention span, auditory performance in noisy environments, telephone communication, and self-esteem. Patients were asked to indicate the frequency with which the listed behaviors occurred. RESULTS: Figure-ground, sequential memory for sounds, and temporal processing correlated with improvement in following instructions, fewer requests to repeat statements, increased attention span, and improved communication and understanding on the phone and when watching TV. CONCLUSION: Auditory closure, figure-ground, and temporal processing had improved in the assessment after the acoustically controlled auditory training, and there were fewer auditory behavior complaints.




Subject(s)
Auditory Perception , Self Concept , Humans , Adult , Male , Female , Auditory Perception/physiology , Surveys and Questionnaires , Middle Aged , Brain Concussion/psychology , Brain Concussion/rehabilitation , Acoustic Stimulation/methods , Young Adult
8.
Nat Commun ; 15(1): 3941, 2024 May 10.
Article in English | MEDLINE | ID: mdl-38729937

ABSTRACT

A relevant question concerning inter-areal communication in the cortex is whether these interactions are synergistic. Synergy refers to the complementary effect of multiple brain signals conveying more information than the sum of each isolated signal. Redundancy, on the other hand, refers to the common information shared between brain signals. Here, we dissociated cortical interactions encoding complementary information (synergy) from those sharing common information (redundancy) during prediction error (PE) processing. We analyzed auditory and frontal electrocorticography (ECoG) signals in five awake common marmosets performing two distinct auditory oddball tasks and investigated to what extent event-related potentials (ERPs) and broadband (BB) dynamics encoded synergistic and redundant information about PE processing. The information conveyed by ERPs and BB signals was synergistic even at lower stages of the hierarchy in the auditory cortex and between auditory and frontal regions. Using a brain-constrained neural network, we simulated the synergy and redundancy observed in the experimental results and demonstrated that the emergence of synergy between auditory and frontal regions requires the presence of strong, long-distance feedback and feedforward connections. These results indicate that distributed representations of PE signals across the cortical hierarchy can be highly synergistic.


Subject(s)
Acoustic Stimulation , Auditory Cortex , Callithrix , Electrocorticography , Animals , Auditory Cortex/physiology , Callithrix/physiology , Male , Female , Evoked Potentials/physiology , Frontal Lobe/physiology , Evoked Potentials, Auditory/physiology , Auditory Perception/physiology , Brain Mapping/methods
9.
Commun Biol ; 7(1): 598, 2024 May 18.
Article in English | MEDLINE | ID: mdl-38762691

ABSTRACT

Many songbirds learn to produce songs through vocal practice in early life and continue to sing daily throughout their lifetime. While it is well known that adult songbirds sing as part of their mating rituals, the functions of singing behavior outside of reproductive contexts remain unclear. Here, we investigated this issue in adult male zebra finches by suppressing their daily singing for two weeks and examining the effects on song performance. We found that singing suppression decreased the pitch, amplitude, and duration of songs, and that those song features substantially recovered through subsequent free singing. These reversible song changes were not dependent on auditory feedback or the age of the birds, contrasting with the adult song plasticity that has been reported previously. These results demonstrate that adult song structure is not stable without daily singing, and suggest that adult songbirds maintain song performance by preventing song changes through the physical act of daily singing throughout their life. Such daily singing likely functions as vocal training to maintain the song production system in optimal condition for song performance in reproductive contexts, similar to how human singers and athletes practice daily to maintain their performance.


Subject(s)
Feedback, Sensory , Finches , Vocalization, Animal , Animals , Vocalization, Animal/physiology , Male , Finches/physiology , Feedback, Sensory/physiology , Age Factors , Aging/physiology , Auditory Perception/physiology
10.
Philos Trans R Soc Lond B Biol Sci ; 379(1905): 20230186, 2024 Jul 08.
Article in English | MEDLINE | ID: mdl-38768210

ABSTRACT

Communication takes place within a network of multiple signallers and receivers. Social network analysis provides tools to quantify how an individual's social positioning affects group dynamics and the subsequent biological consequences. However, network analysis is rarely applied to animal communication, likely due to the logistical difficulties of monitoring natural communication networks. We generated a simulated communication network to investigate how variation in individual communication behaviours generates network effects, and how this communication network's structure feeds back to affect future signalling interactions. We simulated competitive acoustic signalling interactions among chorusing individuals and varied several parameters related to communication and chorus size to examine their effects on calling output and social connections. Larger choruses had higher noise levels, and this reduced network density and altered the relationships between individual traits and communication network position. Hearing sensitivity interacted with chorus size to affect both individuals' positions in the network and the acoustic output of the chorus. Physical proximity to competitors influenced signalling, but a distinctive communication network structure emerged when signal active space was limited. Our model raises novel predictions about communication networks that could be tested experimentally and identifies aspects of information processing in complex environments that remain to be investigated. This article is part of the theme issue 'The power of sound: unravelling how acoustic communication shapes group dynamics'.
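The reported link between limited signal active space and a distinctive network structure can be illustrated with a toy model. This is a hypothetical sketch, not the published simulation: the one-dimensional arena, random caller positions, and the distance-based overlap rule are all assumptions made for illustration.

```python
import itertools
import random

def chorus_network_density(n_individuals, active_space, arena=1.0, seed=0):
    """Toy chorus network: place callers at random positions along a line
    and connect any pair whose distance is within the signal's active
    space; return the density of the resulting undirected network."""
    rng = random.Random(seed)
    positions = [rng.uniform(0.0, arena) for _ in range(n_individuals)]
    edges = [
        (i, j)
        for i, j in itertools.combinations(range(n_individuals), 2)
        if abs(positions[i] - positions[j]) <= active_space
    ]
    n_possible = n_individuals * (n_individuals - 1) // 2
    return len(edges) / n_possible
```

In this toy setting, shrinking the active space relative to the arena removes long-range edges and lowers density, the same direction of effect the simulation reports for larger, noisier choruses.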


Subject(s)
Auditory Perception , Animals , Auditory Perception/physiology , Vocalization, Animal/physiology , Animal Communication , Models, Biological , Birds/physiology , Acoustics , Social Behavior
11.
eNeuro ; 11(5)2024 May.
Article in English | MEDLINE | ID: mdl-38702187

ABSTRACT

Mismatch negativity (MMN) is commonly recognized as a neural signal of prediction error evoked by deviants from the expected patterns of sensory input. Studies show that MMN diminishes when sequence patterns become more predictable over a longer timescale. This implies that MMN is composed of multiple subcomponents, each responding to different levels of temporal regularities. To probe the hypothesized subcomponents in MMN, we record human electroencephalography during an auditory local-global oddball paradigm where the tone-to-tone transition probability (local regularity) and the overall sequence probability (global regularity) are manipulated to control temporal predictabilities at two hierarchical levels. We find that the size of MMN is correlated with both probabilities and the spatiotemporal structure of MMN can be decomposed into two distinct subcomponents. Both subcomponents appear as negative waveforms, with one peaking early in the central-frontal area and the other late in a more frontal area. With a quantitative predictive coding model, we map the early and late subcomponents to the prediction errors that are tied to local and global regularities, respectively. Our study highlights the hierarchical complexity of MMN and offers an experimental and analytical platform for developing a multitiered neural marker applicable in clinical settings.
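The two levels of regularity manipulated above can be sketched as a trial generator. This is a hypothetical illustration of the paradigm's logic only; the tone labels, five-tone trial length, trial count, and deviant probability are assumptions, not the study's parameters.

```python
import random

def make_local_global_block(n_trials=100, p_rare=0.2, seed=0):
    """One block of a local-global oddball paradigm (sketch).
    Each trial is five tones.  'AAAAB' is locally deviant (its last tone
    breaks the tone-to-tone regularity) but, being frequent in this block,
    it is the global standard; the rare 'AAAAA' trial obeys the local rule
    yet violates the global (sequence-level) regularity."""
    rng = random.Random(seed)
    trials = []
    for _ in range(n_trials):
        if rng.random() < p_rare:
            trials.append(("AAAAA", "global_deviant"))
        else:
            trials.append(("AAAAB", "global_standard"))
    return trials
```

Crossing such blocks with ones where 'AAAAA' is frequent dissociates responses tied to local transition probability from those tied to the overall sequence probability.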


Subject(s)
Acoustic Stimulation , Electroencephalography , Evoked Potentials, Auditory , Humans , Male , Female , Electroencephalography/methods , Young Adult , Adult , Evoked Potentials, Auditory/physiology , Acoustic Stimulation/methods , Auditory Perception/physiology , Brain/physiology , Brain Mapping , Adolescent
12.
Curr Biol ; 34(10): 2162-2174.e5, 2024 May 20.
Article in English | MEDLINE | ID: mdl-38718798

ABSTRACT

Humans make use of small differences in the timing of sounds at the two ears-interaural time differences (ITDs)-to locate their sources. Despite extensive investigation, however, the neural representation of ITDs in the human brain is contentious, particularly the range of ITDs explicitly represented by dedicated neural detectors. Here, using magneto- and electro-encephalography (MEG and EEG), we demonstrate evidence of a sparse neural representation of ITDs in the human cortex. The magnitude of cortical activity to sounds presented via insert earphones oscillated as a function of increasing ITD-within and beyond auditory cortical regions-and listeners rated the perceptual quality of these sounds according to the same oscillating pattern. This pattern was accurately described by a population of model neurons with preferred ITDs constrained to the narrow, sound-frequency-dependent range evident in other mammalian species. When scaled for head size, the distribution of ITD detectors in the human cortex is remarkably like that recorded in vivo from the cortex of rhesus monkeys, another large primate that uses ITDs for source localization. The data solve a long-standing issue concerning the neural representation of ITDs in humans and suggest a representation that scales for head size and sound frequency in an optimal manner.


Subject(s)
Auditory Cortex , Cues , Sound Localization , Auditory Cortex/physiology , Humans , Male , Sound Localization/physiology , Animals , Female , Adult , Electroencephalography , Macaca mulatta/physiology , Magnetoencephalography , Acoustic Stimulation , Young Adult , Auditory Perception/physiology
13.
J Neural Eng ; 21(3)2024 May 22.
Article in English | MEDLINE | ID: mdl-38729132

ABSTRACT

Objective. This study develops a deep learning (DL) method for fast auditory attention decoding (AAD) using electroencephalography (EEG) from listeners with hearing impairment (HI). It addresses three classification tasks: differentiating noise from speech-in-noise, classifying the direction of attended speech (left vs. right), and identifying the activation status of hearing aid noise reduction algorithms (OFF vs. ON). These tasks contribute to our understanding of how hearing technology influences auditory processing in the hearing-impaired population. Approach. Deep convolutional neural network (DCNN) models were designed for each task. Two training strategies were employed to clarify the impact of data splitting on AAD tasks: inter-trial, where the testing set used classification windows from trials that the training set had not seen, and intra-trial, where the testing set used unseen classification windows from trials whose other segments were seen during training. The models were evaluated on EEG data from 31 participants with HI, listening to competing talkers amidst background noise. Main results. Using 1 s classification windows, the DCNN models achieved accuracy (ACC) of 69.8%, 73.3%, and 82.9% and area-under-curve (AUC) of 77.2%, 80.6%, and 92.1% for the three tasks, respectively, with the inter-trial strategy. With the intra-trial strategy, they achieved ACC of 87.9%, 80.1%, and 97.5%, along with AUC of 94.6%, 89.1%, and 99.8%. Our DCNN models show good performance on short 1 s EEG samples, making them suitable for real-world applications. Conclusion: Our DCNN models successfully addressed three tasks with short 1 s EEG windows from participants with HI, showcasing their potential.
While the inter-trial strategy demonstrated promise for assessing AAD, the intra-trial approach yielded inflated results, underscoring the important role of proper data splitting in EEG-based AAD tasks. Significance. Our findings showcase the promising potential of EEG-based tools for assessing auditory attention in clinical contexts and for advancing hearing technology, while also promoting further exploration of alternative DL architectures and their potential constraints.
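The difference between the two splitting strategies can be made concrete with an index-level sketch. This is a hypothetical illustration: the window counts and the hold-out-the-last-window rule are made up for clarity and are not the study's protocol.

```python
def split_windows(n_trials, windows_per_trial, held_out_trials):
    """Contrast inter-trial and intra-trial splits of (trial, window) pairs.
    Inter-trial: entire trials are held out, so no test window shares a
    trial with the training data.  Intra-trial: the last window of every
    trial is held out, so each test window comes from a trial whose other
    windows were seen during training."""
    windows = [(t, w) for t in range(n_trials) for w in range(windows_per_trial)]
    inter_train = [x for x in windows if x[0] not in held_out_trials]
    inter_test = [x for x in windows if x[0] in held_out_trials]
    intra_train = [x for x in windows if x[1] < windows_per_trial - 1]
    intra_test = [x for x in windows if x[1] == windows_per_trial - 1]
    return inter_train, inter_test, intra_train, intra_test
```

Because intra-trial test windows share slowly varying trial context (attentional state, electrode drift) with their training windows, accuracy estimated this way can be inflated, which is exactly the leakage the abstract warns about.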


Subject(s)
Attention , Auditory Perception , Deep Learning , Electroencephalography , Hearing Loss , Humans , Attention/physiology , Female , Electroencephalography/methods , Male , Middle Aged , Hearing Loss/physiopathology , Hearing Loss/rehabilitation , Hearing Loss/diagnosis , Aged , Auditory Perception/physiology , Noise , Adult , Hearing Aids , Speech Perception/physiology , Neural Networks, Computer
14.
Curr Biol ; 34(10): 2200-2211.e6, 2024 May 20.
Article in English | MEDLINE | ID: mdl-38733991

ABSTRACT

The activity of neurons in sensory areas sometimes covaries with upcoming choices in decision-making tasks. However, the prevalence, causal origin, and functional role of choice-related activity remain controversial. Understanding the circuit-logic of decision signals in sensory areas will require understanding their laminar specificity, but simultaneous recordings of neural activity across the cortical layers in forced-choice discrimination tasks have not yet been performed. Here, we describe neural activity from such recordings in the auditory cortex of mice during a frequency discrimination task with delayed report, which, as we show, requires the auditory cortex. Stimulus-related information was widely distributed across layers but disappeared very quickly after stimulus offset. Choice selectivity emerged toward the end of the delay period-suggesting a top-down origin-but only in the deep layers. Early stimulus-selective and late choice-selective deep neural ensembles were correlated, suggesting that the choice-selective signal fed back to the auditory cortex is not just action specific but develops as a consequence of the sensory-motor contingency imposed by the task.


Subject(s)
Auditory Cortex , Choice Behavior , Animals , Auditory Cortex/physiology , Mice , Choice Behavior/physiology , Acoustic Stimulation , Mice, Inbred C57BL , Auditory Perception/physiology , Male , Neurons/physiology
15.
Sci Rep ; 14(1): 11036, 2024 05 14.
Article in English | MEDLINE | ID: mdl-38744906

ABSTRACT

The perception of a continuous phantom in a sensory domain in the absence of an external stimulus is explained as a maladaptive compensation of aberrant predictive coding, a proposed unified theory of brain functioning. If this were true, these changes would occur not only in the domain of the phantom percept but in other sensory domains as well. We confirm this hypothesis by using tinnitus (continuous phantom sound) as a model and probe the predictive coding mechanism using the established local-global oddball paradigm in both the auditory and visual domains. We observe that tinnitus patients are sensitive to changes in predictive coding not only in the auditory but also in the visual domain. We report changes in well-established components of event-related EEG such as the mismatch negativity. Furthermore, deviations in stimulus characteristics were correlated with the subjective tinnitus distress. These results provide an empirical confirmation that aberrant perceptions are a symptom of a higher-order systemic disorder transcending the domain of the percept.


Subject(s)
Auditory Perception , Electroencephalography , Tinnitus , Humans , Tinnitus/physiopathology , Tinnitus/psychology , Male , Female , Auditory Perception/physiology , Adult , Middle Aged , Acoustic Stimulation , Visual Perception/physiology
16.
PLoS One ; 19(5): e0303309, 2024.
Article in English | MEDLINE | ID: mdl-38748741

ABSTRACT

Catchiness and groove are common phenomena when listening to popular music. Catchiness may be a potential factor in experiencing groove, but quantitative evidence for such a relationship is missing. To examine whether and how catchiness influences a key component of groove, the pleasurable urge to move to music (PLUMM), we conducted a listening experiment with 450 participants and 240 short popular music clips of drum patterns, bass lines, or keys/guitar parts. We found four main results: (1) catchiness as measured in a recognition task was only weakly associated with participants' perceived catchiness of the music; we showed that perceived catchiness is multi-dimensional, subjective, and strongly associated with pleasure. (2) We found a sizeable positive relationship between PLUMM and perceived catchiness. (3) However, the relationship is complex, as further analysis showed that pleasure suppresses the effect of perceived catchiness on the urge to move. (4) We compared common factors that promote perceived catchiness and PLUMM and found that listener-related variables contributed similarly, while the effects of musical content diverged. Overall, our data suggest that music perceived as catchy is likely to foster groove experiences.


Subject(s)
Auditory Perception , Music , Pleasure , Humans , Music/psychology , Female , Male , Adult , Auditory Perception/physiology , Young Adult , Pleasure/physiology , Adolescent , Acoustic Stimulation
17.
Sci Rep ; 14(1): 11164, 2024 05 15.
Article in English | MEDLINE | ID: mdl-38750185

ABSTRACT

Electrophysiological studies have investigated predictive processing in music by examining event-related potentials (ERPs) elicited by the violation of musical expectations. While several studies have reported that the predictability of stimuli can modulate the amplitude of ERPs, it is unclear how specific the representation of the expected note is. The present study addressed this issue by recording omitted stimulus potentials (OSPs) to avoid contamination of bottom-up sensory processing with top-down predictive processing. Decoding of the omitted content was attempted using a support vector machine, a type of machine learning. ERP responses to the omission of four target notes (E, F, A, and C) at the same position in familiar and unfamiliar melodies were recorded from 25 participants. The results showed that the omission N1 was larger in the familiar melody condition than in the unfamiliar melody condition. The decoding accuracy for the four omitted notes was significantly higher in the familiar melody condition than in the unfamiliar melody condition. These results suggest that OSPs contain discriminable predictive information and that higher predictability yields a more specific representation of the expected note.
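The decoding step described above, classifying which of four omitted notes an EEG epoch belongs to with a support vector machine, can be sketched as follows. This is not the authors' pipeline: the epochs here are synthetic, and the preprocessing, classifier settings, and epoch dimensions are assumptions chosen only to show the typical cross-validated ERP-decoding workflow.

```python
# Hypothetical sketch: cross-validated linear-SVM decoding of omitted-note
# identity (E, F, A, C) from flattened channels-by-time EEG epochs.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_epochs_per_note, n_channels, n_times = 40, 32, 100  # assumed sizes
notes = ["E", "F", "A", "C"]

# Synthetic epochs: each note class gets a small class-specific offset
# so the classifier has a signal to find.
X = np.concatenate([
    rng.normal(loc=0.1 * i, scale=1.0,
               size=(n_epochs_per_note, n_channels * n_times))
    for i in range(len(notes))
])
y = np.repeat(notes, n_epochs_per_note)

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, X, y, cv=5)  # 5-fold decoding accuracy
print(f"mean decoding accuracy: {scores.mean():.2f} (chance = 0.25)")
```

With four balanced classes, chance accuracy is 0.25, which is the baseline against which above-chance decoding of the omitted note would be judged.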


Subject(s)
Acoustic Stimulation , Electroencephalography , Music , Humans , Female , Male , Young Adult , Adult , Auditory Perception/physiology , Support Vector Machine , Evoked Potentials, Auditory/physiology , Evoked Potentials/physiology
18.
Anim Cogn ; 27(1): 38, 2024 May 16.
Article in English | MEDLINE | ID: mdl-38750339

ABSTRACT

This study investigates the musical perception skills of dogs through playback experiments. Dogs were trained to distinguish between two different target locations based on a sequence of four ascending or descending notes. A total of 16 dogs of different breeds, ages, and sexes, all with at least basic training, were recruited for the study. Dogs received training from their respective owners in a suitable environment within their familiar home settings. The training sequence consisted of the notes Do-Mi-Sol#-Do (C7-E7-G7#-C8; frequencies: 2093, 2639, 3322, and 4186 Hz), digitally generated as pure sinusoidal tones. The training protocol comprised 3 sequential training levels, with each level consisting of 4 sessions with a minimum of 10 trials per session. In the test phase, the sequence was transposed to evaluate whether dogs used relative pitch when identifying the sequences. A correct response by the dog was recorded as 1, while an incorrect response, occurring when the dog chose the opposite zone of the bowl, was marked as 0. Statistical analyses were performed using a binomial test. Among the 16 dogs, only two consistently performed above the chance level, demonstrating the ability to recognize relative pitch even with transposed sequences. This study suggests that dogs may have the ability to attend to relative pitch, a critical aspect of human musicality.
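The binomial test mentioned above is straightforward to reproduce: with two response locations, chance is p = 0.5, and a dog's tally of correct choices can be compared against that level exactly. The trial counts below are hypothetical, chosen only to show the shape of the calculation.

```python
# Minimal sketch of an above-chance binomial test for a two-location task.
# The trial counts are invented for illustration.
from scipy.stats import binomtest

n_trials = 40    # hypothetical number of test trials for one dog
n_correct = 31   # hypothetical number of correct choices

result = binomtest(n_correct, n_trials, p=0.5, alternative="greater")
print(f"{n_correct}/{n_trials} correct, one-sided p = {result.pvalue:.4f}")
# A small p-value indicates performance above the 50% chance level.
```

A dog would be classed as performing above chance when this one-sided p-value falls below the chosen significance threshold.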


Subject(s)
Music , Dogs , Animals , Male , Female , Auditory Perception , Pitch Perception , Acoustic Stimulation
19.
Cereb Cortex ; 34(5)2024 May 02.
Article in English | MEDLINE | ID: mdl-38700440

ABSTRACT

While the auditory and visual systems each provide distinct information to our brain, they also work together to process and prioritize input to address ever-changing conditions. Previous studies highlighted the trade-off between auditory change detection and visual selective attention; however, the relationship between them is still unclear. Here, we recorded electroencephalography signals from 106 healthy adults in three experiments. Our findings revealed a positive correlation at the population level between the amplitudes of event-related potential indices associated with auditory change detection (mismatch negativity) and visual selective attention (posterior contralateral N2) when elicited in separate tasks. This correlation persisted even when participants performed a visual task while disregarding simultaneous auditory stimuli. Interestingly, as visual attention demand increased, participants whose posterior contralateral N2 amplitude increased the most exhibited the largest reduction in mismatch negativity, suggesting a within-subject trade-off between the two processes. Taken together, our results suggest an intimate relationship and a potentially shared mechanism between auditory change detection and visual selective attention. We liken this to a total capacity limit that varies between individuals, which could drive correlated individual differences in auditory change detection and visual selective attention, as well as within-subject competition between the two, with task-based modulation of visual attention causing a within-participant decrease in auditory change detection sensitivity.


Subject(s)
Attention , Auditory Perception , Electroencephalography , Visual Perception , Humans , Attention/physiology , Male , Female , Young Adult , Adult , Auditory Perception/physiology , Visual Perception/physiology , Acoustic Stimulation/methods , Photic Stimulation/methods , Evoked Potentials/physiology , Brain/physiology , Adolescent
20.
PLoS One ; 19(5): e0299393, 2024.
Article in English | MEDLINE | ID: mdl-38691540

ABSTRACT

A wealth of research has investigated the associations between bilingualism and cognition, especially with regard to executive function. Some developmental studies reveal different cognitive profiles between monolinguals and bilinguals in visual or audio-visual attention tasks, which might stem from differences in attention allocation. Yet, whether such a distinction exists in the auditory domain alone is unknown. In this study, we compared differences in auditory attention, measured by standardized tests, between monolingual and bilingual children. A comprehensive literature search was conducted in three electronic databases: OVID Medline, OVID PsycInfo, and EBSCO CINAHL. Twenty studies using standardized tests to assess auditory attention in monolingual and bilingual participants aged less than 18 years were identified. We assessed the quality of these studies using a scoring tool for evaluating primary research. For statistical analysis, we pooled the effect sizes in a random-effects meta-analytic model, where between-study heterogeneity was quantified using the I2 statistic. No substantial publication bias was observed based on the funnel plot. Further, meta-regression modelling suggests that the test measure (accuracy vs. response times) significantly affected the studies' effect sizes, whereas other factors (e.g., participant age, stimulus type) did not. Specifically, studies reporting accuracy observed marginally greater accuracy in bilinguals (g = 0.10), whereas those reporting response times indicated faster latency in monolinguals (g = -0.34). There was little difference between monolingual and bilingual children's performance on standardized auditory attention tests. We also found that studies tend to include a wide variety of bilingual children but report limited language background information about the participants. This, unfortunately, limits the potential theoretical contributions of the reviewed studies. Recommendations to improve the quality of future research are discussed.
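The pooling step described above, a random-effects model with heterogeneity quantified by I², can be sketched with the standard DerSimonian-Laird estimator. The per-study effect sizes and variances below are invented for illustration (only loosely inspired by the reported g = 0.10 and g = -0.34); they are not the review's data.

```python
# Hypothetical sketch of DerSimonian-Laird random-effects pooling with I².
# Effect sizes (Hedges' g) and sampling variances are made up.
import numpy as np

g = np.array([0.10, -0.34, 0.05, 0.20, -0.15])   # hypothetical study effects
v = np.array([0.02, 0.03, 0.04, 0.02, 0.05])     # their sampling variances

w = 1.0 / v                                       # fixed-effect weights
g_fixed = np.sum(w * g) / np.sum(w)
Q = np.sum(w * (g - g_fixed) ** 2)                # Cochran's Q
df = len(g) - 1
C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - df) / C)                     # between-study variance
I2 = max(0.0, (Q - df) / Q) * 100                 # heterogeneity in percent

w_star = 1.0 / (v + tau2)                         # random-effects weights
g_pooled = np.sum(w_star * g) / np.sum(w_star)
se = np.sqrt(1.0 / np.sum(w_star))
print(f"pooled g = {g_pooled:.3f} +/- {1.96 * se:.3f}, I2 = {I2:.1f}%")
```

The random-effects weights shrink toward equality as the between-study variance tau² grows, which is why a high I² widens the confidence interval around the pooled effect.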


Subject(s)
Attention , Multilingualism , Humans , Attention/physiology , Child , Auditory Perception/physiology , Adolescent , Cognition/physiology