Results 1 - 20 of 33,036
1.
Cereb Cortex ; 34(8)2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39087881

ABSTRACT

Perception integrates both sensory inputs and internal models of the environment. In the auditory domain, predictions play a critical role because of the temporal nature of sounds. However, the precise contribution of cortical and subcortical structures to these processes and their interaction remain unclear. It is also unclear whether these brain interactions are specific to abstract rules or if they also underlie the predictive coding of local features. We used high-field 7T functional magnetic resonance imaging to investigate interactions between cortical and subcortical areas during auditory predictive processing. Volunteers listened to tone sequences in an oddball paradigm where the predictability of the deviant was manipulated. Perturbations in periodicity were also introduced to test the specificity of the response. Results indicate that both cortical and subcortical auditory structures encode high-order predictive dynamics, with the effect of predictability being strongest in the auditory cortex. These predictive dynamics were best explained by modeling a top-down information flow, in contrast to unpredicted responses. No error signals were observed in response to deviations of periodicity, suggesting that these responses are specific to abstract rule violations. Our results support the idea that the high-order predictive dynamics observed in subcortical areas propagate from the auditory cortex.


Subject(s)
Acoustic Stimulation , Auditory Cortex , Auditory Perception , Magnetic Resonance Imaging , Humans , Magnetic Resonance Imaging/methods , Male , Female , Adult , Auditory Perception/physiology , Young Adult , Acoustic Stimulation/methods , Auditory Cortex/physiology , Auditory Cortex/diagnostic imaging , Brain Mapping/methods
2.
Cephalalgia ; 44(7): 3331024241258722, 2024 Jul.
Article in English | MEDLINE | ID: mdl-39093997

ABSTRACT

BACKGROUND: Altered sensory processing in migraine has been demonstrated by several studies in unimodal, and especially visual, tasks. While there is some limited evidence hinting at potential alterations in multisensory processing among migraine sufferers, this aspect remains relatively unexplored. This study investigated the interictal cognitive performance of migraine patients without aura compared to matched controls, focusing on associative learning, recall, and transfer abilities through the Sound-Face Test, an audiovisual test based on the principles of the Rutgers Acquired Equivalence Test. MATERIALS AND METHODS: The performance of 42 volunteering migraine patients was compared to the data of 42 matched controls, selected from a database of healthy volunteers who had taken the test earlier. The study aimed to compare the groups' performance in learning, recall, and the ability to transfer learned associations. RESULTS: Migraine patients demonstrated significantly superior associative learning as compared to controls, requiring fewer trials, and making fewer errors during the acquisition phase. However, no significant differences were observed in retrieval error ratios, generalization error ratios, or reaction times between migraine patients and controls in later stages of the test. CONCLUSION: The results of our study support those of previous investigations, which concluded that multisensory processing exhibits a unique pattern in migraine. The specific finding that associative audiovisual pair learning is more effective in adult migraine patients than in matched controls is unexpected. If the phenomenon is not an artifact, it may be assumed to be a combined result of the hypersensitivity present in migraine and the sensory threshold-lowering effect of multisensory integration.


Subject(s)
Association Learning , Migraine without Aura , Humans , Adult , Female , Male , Association Learning/physiology , Migraine without Aura/physiopathology , Young Adult , Visual Perception/physiology , Auditory Perception/physiology , Middle Aged , Photic Stimulation/methods , Acoustic Stimulation/methods
3.
Trends Hear ; 28: 23312165241265199, 2024.
Article in English | MEDLINE | ID: mdl-39095047

ABSTRACT

Participation in complex listening situations such as group conversations in noisy environments sets high demands on the auditory system and on cognitive processing. Reports of hearing-impaired people indicate that strenuous listening situations occurring throughout the day lead to feelings of fatigue at the end of the day. The aim of the present study was to develop a suitable test sequence to evoke and measure listening effort (LE) and listening-related fatigue (LRF), and to evaluate the influence of hearing aid use on both dimensions in mild to moderately hearing-impaired participants. The chosen approach aims to reconstruct a representative acoustic day (Time Compressed Acoustic Day [TCAD]) by means of an eight-part hearing-test sequence with a total duration of approximately 2½ h. For this purpose, the hearing test sequence combined four different listening tasks with five different acoustic scenarios and was presented to the 20 test subjects using virtual acoustics in an open field measurement in aided and unaided conditions. Besides subjective ratings of LE and LRF, behavioral measures (response accuracy, reaction times), and an attention test (d2-R) were performed prior to and after the TCAD. Furthermore, stress hormones were evaluated by taking salivary samples. Subjective ratings of LRF increased throughout the test sequence. This effect was observed to be higher when testing unaided. In three of the eight listening tests, the aided condition led to significantly faster reaction times and higher response accuracies than the unaided condition. In the d2-R test, an interaction in processing speed between time (pre- vs. post-TCAD) and provision (unaided vs. aided) was found, suggesting an influence of hearing aid provision on LRF. A comparison of the averaged subjective ratings at the beginning and end of the TCAD shows a significant increase in LRF for both conditions. At the end of the TCAD, subjective fatigue was significantly lower when wearing hearing aids. The analysis of stress hormones did not reveal significant effects.


Subject(s)
Acoustic Stimulation , Hearing Aids , Noise , Humans , Male , Female , Middle Aged , Aged , Noise/adverse effects , Correction of Hearing Impairment/instrumentation , Correction of Hearing Impairment/methods , Attention , Persons With Hearing Impairments/psychology , Persons With Hearing Impairments/rehabilitation , Adult , Auditory Fatigue , Time Factors , Reaction Time , Virtual Reality , Auditory Perception/physiology , Fatigue , Hearing Loss/psychology , Hearing Loss/rehabilitation , Hearing Loss/physiopathology , Hearing Loss/diagnosis , Speech Perception/physiology , Saliva/metabolism , Saliva/chemistry , Hearing , Auditory Threshold
4.
Cereb Cortex ; 34(8)2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39110413

ABSTRACT

Music is a non-verbal human language, built on logical, hierarchical structures, that offers excellent opportunities to explore how the brain processes complex spatiotemporal auditory sequences. Using the high temporal resolution of magnetoencephalography, we investigated the unfolding brain dynamics of 70 participants during the recognition of previously memorized musical sequences compared to novel sequences matched in terms of entropy and information content. Measures of both whole-brain activity and functional connectivity revealed a widespread brain network underlying the recognition of the memorized auditory sequences, which comprised primary auditory cortex, superior temporal gyrus, insula, frontal operculum, cingulate gyrus, orbitofrontal cortex, basal ganglia, thalamus, and hippocampus. Furthermore, while the auditory cortex responded mainly to the first tones of the sequences, the activity of higher-order brain areas such as the cingulate gyrus, frontal operculum, hippocampus, and orbitofrontal cortex largely increased over time during the recognition of the memorized versus novel musical sequences. In conclusion, using a wide range of analytical techniques spanning from decoding to functional connectivity and building on previous works, our study provided new insights into the spatiotemporal whole-brain mechanisms for conscious recognition of auditory sequences.


Subject(s)
Auditory Perception , Brain , Magnetoencephalography , Music , Humans , Male , Female , Adult , Magnetoencephalography/methods , Auditory Perception/physiology , Young Adult , Brain/physiology , Recognition, Psychology/physiology , Brain Mapping/methods , Nerve Net/physiology , Nerve Net/diagnostic imaging , Acoustic Stimulation/methods
5.
PLoS One ; 19(8): e0306271, 2024.
Article in English | MEDLINE | ID: mdl-39110701

ABSTRACT

Music is omnipresent in daily life and may interact with critical cognitive processes including memory. Despite music's presence during diverse daily activities including studying, commuting, or working, existing literature has yielded mixed results as to whether music improves or impairs memory for information experienced in parallel. To elucidate how music memory and its predictive structure modulate the encoding of novel information, we developed a cross-modal sequence learning task during which participants acquired sequences of abstract shapes accompanied with paired music. Our goal was to investigate whether familiar and structurally regular music could provide a "temporal schema" (rooted in the organized and hierarchical structure of music) to enhance the acquisition of parallel temporally-ordered visual information. Results revealed a complex interplay between music familiarity and music structural regularity in learning paired visual sequences. Notably, compared to a control condition, listening to well-learned, regularly-structured music (music with high predictability) significantly facilitated visual sequence encoding, yielding quicker learning and retrieval speed. Conversely, learned but irregular music (where music memory violated musical syntax) significantly impaired sequence encoding. While those findings supported our mechanistic framework, intriguingly, unlearned irregular music-characterized by the lowest predictability-also demonstrated memory enhancement. In conclusion, this study demonstrates that concurrent music can modulate visual sequence learning, and the effect varies depending on the interaction between both music familiarity and regularity, offering insights into potential applications for enhancing human memory.


Subject(s)
Music , Recognition, Psychology , Humans , Music/psychology , Female , Male , Recognition, Psychology/physiology , Young Adult , Adult , Learning/physiology , Auditory Perception/physiology , Visual Perception/physiology , Memory/physiology , Acoustic Stimulation , Photic Stimulation
6.
Cereb Cortex ; 34(8)2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39110411

ABSTRACT

Speech perception requires the binding of spatiotemporally disjoint auditory-visual cues. The corresponding brain network-level information processing can be characterized by two complementary mechanisms: functional segregation which refers to the localization of processing in either isolated or distributed modules across the brain, and integration which pertains to cooperation among relevant functional modules. Here, we demonstrate using functional magnetic resonance imaging recordings that subjective perceptual experience of multisensory speech stimuli, real and illusory, is represented in differential states of segregation-integration. We controlled the inter-subject variability of illusory/cross-modal perception parametrically, by introducing temporal lags in the incongruent auditory-visual articulations of speech sounds within the McGurk paradigm. The states of segregation-integration balance were captured using two alternative computational approaches. First, the module responsible for cross-modal binding of sensory signals defined as the perceptual binding network (PBN) was identified using standardized parametric statistical approaches and their temporal correlations with all other brain areas were computed. With increasing illusory perception, the majority of the nodes of PBN showed decreased cooperation with the rest of the brain, reflecting states of high segregation but reduced global integration. Second, using graph theoretic measures, the altered patterns of segregation-integration were cross-validated.


Subject(s)
Brain , Magnetic Resonance Imaging , Speech Perception , Visual Perception , Humans , Brain/physiology , Brain/diagnostic imaging , Male , Female , Adult , Young Adult , Speech Perception/physiology , Visual Perception/physiology , Brain Mapping , Acoustic Stimulation , Nerve Net/physiology , Nerve Net/diagnostic imaging , Photic Stimulation/methods , Illusions/physiology , Neural Pathways/physiology , Auditory Perception/physiology
7.
Trends Hear ; 28: 23312165241263485, 2024.
Article in English | MEDLINE | ID: mdl-39099537

ABSTRACT

Older adults with normal hearing or with age-related hearing loss face challenges when listening to speech in noisy environments. To better serve individuals with communication difficulties, precision diagnostics are needed to characterize individuals' auditory perceptual and cognitive abilities beyond pure tone thresholds. These abilities can be heterogeneous across individuals within the same population. The goal of the present study is to consider the suprathreshold variability and develop characteristic profiles for older adults with normal hearing (ONH) and with hearing loss (OHL). Auditory perceptual and cognitive abilities were tested in ONH (n = 20) and OHL (n = 20) groups using an abbreviated test battery delivered via portable automated rapid testing. Using cluster analyses, three main profiles were revealed for each group, showing differences in auditory perceptual and cognitive abilities despite similar audiometric thresholds. Analysis of variance showed that ONH profiles differed in spatial release from masking, speech-in-babble testing, cognition, tone-in-noise, and binaural temporal processing abilities. The OHL profiles differed in spatial release from masking, speech-in-babble testing, cognition, and tolerance to background noise. Correlation analyses showed significant relationships between auditory and cognitive abilities in both groups. This study showed that auditory perceptual and cognitive deficits can be present to varying degrees in the presence of audiometrically normal hearing and among listeners with similar degrees of hearing loss. The results of this study highlight the need to take individual differences into consideration and to develop targeted intervention options beyond pure tone thresholds and speech testing.
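The profiling approach above rests on clustering multi-measure test scores. A minimal k-means sketch illustrates the idea; this is a generic illustration only, since the abstract does not specify the clustering algorithm or the exact score variables used in the study (the two-measure toy data and farthest-first seeding below are assumptions for the example):

```python
def dist2(p, q):
    """Squared Euclidean distance between two score vectors."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

def kmeans(points, k, iters=100):
    """Minimal k-means: group listeners' standardized test scores
    (e.g., speech-in-babble, spatial release from masking, cognition)
    into k profiles. Farthest-first seeding keeps the sketch deterministic."""
    centers = [points[0]]
    while len(centers) < k:
        centers.append(max(points, key=lambda p: min(dist2(p, c) for c in centers)))
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: dist2(p, centers[i]))
            clusters[nearest].append(p)
        new_centers = [
            tuple(sum(dim) / len(cl) for dim in zip(*cl)) if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
        if new_centers == centers:
            break
        centers = new_centers
    return centers, clusters

# toy data: two clearly separated listener profiles in a 2-measure space
scores = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (5.0, 5.0), (5.1, 5.0), (5.0, 5.1)]
profiles, groups = kmeans(scores, k=2)
```

In practice the number of profiles (three per group in the study) would be chosen with a criterion such as silhouette scores, and the inputs standardized before clustering.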


Subject(s)
Audiometry, Pure-Tone , Auditory Threshold , Cognition , Noise , Perceptual Masking , Speech Perception , Humans , Male , Cognition/physiology , Female , Aged , Auditory Threshold/physiology , Speech Perception/physiology , Middle Aged , Noise/adverse effects , Acoustic Stimulation , Auditory Perception/physiology , Aged, 80 and over , Hearing/physiology , Age Factors , Case-Control Studies , Presbycusis/diagnosis , Presbycusis/physiopathology , Predictive Value of Tests , Audiology/methods , Individuality , Persons With Hearing Impairments/psychology , Cluster Analysis , Audiometry, Speech/methods
8.
Sci Rep ; 14(1): 18059, 2024 08 05.
Article in English | MEDLINE | ID: mdl-39103461

ABSTRACT

The aim of the present study was to identify cognitive alterations, as indicated by event-related potentials (ERPs), after one month of daily exposure to theta binaural beats (BBs) for 10 minutes. The recruited healthy subjects (n = 60) were equally divided into experimental and control groups. For a month, the experimental group was required to practice BBs listening daily, while the control group did not. ERPs were assessed at three separate visits over a span of one month, with a two-week interval between each visit. At each visit, ERPs were measured before and after listening. Listening to BBs significantly increased the auditory and visual P300 amplitudes consistently at each visit. BBs enhanced the auditory N200 amplitude consistently across all visits, but the visual N200 amplitude increased only at the second and third visits. Compared to the healthy controls, daily exposure to BBs for two weeks resulted in increased auditory P300 amplitude. Additionally, four weeks of BBs exposure not only increased auditory P300 amplitude but also reduced P300 latency. These preliminary findings suggest that listening to BBs at 6 Hz for 10 minutes daily may enhance certain aspects of cognitive function. However, further research is needed to confirm these effects and to understand the underlying mechanisms. Identifying the optimal duration and practice of listening to 6 Hz BBs could potentially contribute to cognitive enhancement strategies in healthy individuals.


Subject(s)
Acoustic Stimulation , Humans , Male , Female , Adult , Young Adult , Evoked Potentials, Auditory/physiology , Electroencephalography , Evoked Potentials/physiology , Auditory Perception/physiology , Event-Related Potentials, P300/physiology , Cognition/physiology
9.
J Acoust Soc Am ; 156(2): 879-890, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39120867

ABSTRACT

This paper introduces a ranking and selection approach to psychoacoustic and psychophysical experimentation, with the aim of identifying top-ranking samples in listening experiments with minimal pairwise comparisons. We draw inspiration from sports tournament designs and propose to adopt modified knockout (KO) tournaments. Two variants of modified KO tournaments are described, which adapt the tree selection sorting algorithm and the replacement selection algorithm known from computer science. To validate the proposed method, a listening experiment is conducted, where binaural renderings of seven chamber music halls are compared regarding loudness and reverberance. The rankings obtained by the modified KO tournament method are compared to those obtained from a traditional round-robin (RR) design, where all possible pairs are compared. Moreover, the paper presents simulations to illustrate the method's robustness when choosing different parameters and assuming different underlying data distributions. The study's findings demonstrate that modified KO tournaments are more efficient than full RR designs in terms of the number of comparisons required for identifying the top-ranking samples. Thus, they provide a promising alternative for this task. We offer an open-source implementation so that researchers can easily integrate KO designs into their studies.
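The efficiency argument can be sketched with a plain single-elimination knockout, which finds the top sample in n − 1 pairwise comparisons versus n(n − 1)/2 for a full round-robin. Note this is a simplified illustration: the paper's modified designs adapt tree selection sorting and replacement selection, and the hall names and ratings below are hypothetical:

```python
import random

def knockout_top1(samples, prefer):
    """Single-elimination tournament: returns (winner, number of comparisons).
    Each comparison eliminates exactly one sample, so the winner is found
    in len(samples) - 1 comparisons."""
    pool = list(samples)
    random.shuffle(pool)                      # random bracket seeding
    n_comparisons = 0
    while len(pool) > 1:
        winners = []
        for a, b in zip(pool[::2], pool[1::2]):
            winners.append(a if prefer(a, b) else b)
            n_comparisons += 1
        if len(pool) % 2:                     # odd one out gets a bye
            winners.append(pool[-1])
        pool = winners
    return pool[0], n_comparisons

# toy example: seven halls with hypothetical loudness ratings;
# "prefer" picks the louder of each pair
ratings = {"A": 0.3, "B": 0.9, "C": 0.5, "D": 0.7, "E": 0.2, "F": 0.6, "G": 0.4}
best, n_cmp = knockout_top1(ratings, lambda a, b: ratings[a] > ratings[b])
```

With seven samples this needs 6 comparisons rather than the 21 a full round-robin requires; with noisy human judgments, repeated or modified brackets (as in the paper) trade some of that saving for robustness.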


Subject(s)
Algorithms , Psychoacoustics , Humans , Music , Computer Simulation , Auditory Perception , Acoustics , Loudness Perception , Acoustic Stimulation/methods
10.
Brain Behav ; 14(8): e3637, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39099332

ABSTRACT

BACKGROUND: Information about the development of cognitive skills and the effect of sensory integration in children using auditory brainstem implants (ABIs) is still limited. OBJECTIVE: This study primarily aims to investigate the relationship between sensory processing skills and attention and memory abilities in children with ABIs, and secondarily aims to examine the effects of implant duration on sensory processing and cognitive skills in these children. METHODS: The study included 25 children between the ages of 6 and 10 years (14 girls and 11 boys) with inner ear and/or auditory nerve anomalies using auditory brainstem implants. The Visual-Aural Digit Span Test B, the Marking Test, and the Dunn Sensory Profile Questionnaire were administered to all children. RESULTS: Children's sensory processing skills showed a statistically significant, positive, and moderate correlation with their cognitive skills. Better attention and memory performance was observed with longer durations of implant use (p < .05). CONCLUSION: The study demonstrated the positive impact of sensory processing on the development of memory and attention skills in children with ABIs. These findings may help in evaluating the effectiveness of attention, memory, and sensory integration skills and in developing more effective educational strategies for these children.


Subject(s)
Attention , Auditory Brain Stem Implants , Cognition , Humans , Female , Child , Male , Cognition/physiology , Attention/physiology , Memory/physiology , Auditory Perception/physiology
12.
Commun Biol ; 7(1): 965, 2024 Aug 09.
Article in English | MEDLINE | ID: mdl-39122960

ABSTRACT

Predictive coding theory suggests the brain anticipates sensory information using prior knowledge. While this theory has been extensively researched within individual sensory modalities, evidence for predictive processing across sensory modalities is limited. Here, we examine how crossmodal knowledge is represented and learned in the brain, by identifying the hierarchical networks underlying crossmodal predictions when information of one sensory modality leads to a prediction in another modality. We record electroencephalogram (EEG) during a crossmodal audiovisual local-global oddball paradigm, in which the predictability of transitions between tones and images is manipulated at both the stimulus and sequence levels. To dissect the complex predictive signals in our EEG data, we employ a model-fitting approach to untangle neural interactions across modalities and hierarchies. The model-fitting result demonstrates that audiovisual integration occurs at both the levels of individual stimulus interactions and multi-stimulus sequences. Furthermore, we identify the spatio-spectro-temporal signatures of prediction-error signals across hierarchies and modalities, and reveal that auditory and visual prediction errors are rapidly redirected to the central-parietal electrodes during learning through alpha-band interactions. Our study suggests a crossmodal predictive coding mechanism where unimodal predictions are processed by distributed brain networks to form crossmodal knowledge.


Subject(s)
Auditory Perception , Brain , Electroencephalography , Visual Perception , Humans , Brain/physiology , Auditory Perception/physiology , Visual Perception/physiology , Male , Female , Adult , Young Adult , Acoustic Stimulation , Photic Stimulation
13.
J Vis Exp ; (209)2024 Jul 26.
Article in English | MEDLINE | ID: mdl-39141538

ABSTRACT

Vocal communication plays a crucial role in the social interactions of primates, particularly in survival and social organization. Humans have developed a unique and advanced vocal communication strategy in the form of language. To study the evolution of human language, it is necessary to investigate the neural mechanisms underlying vocal processing in humans, as well as to understand how brain mechanisms have evolved by comparing them with those in nonhuman primates. Herein, we developed a method to noninvasively measure the electroencephalography (EEG) of awake nonhuman primates. This recording method allows for long-term studies without harming the animals, and, importantly, allows us to directly compare nonhuman primate EEG data with human data, providing insights into the evolution of human language. In the current study, we used the scalp EEG recording method to investigate brain activity in response to species-specific vocalizations in marmosets. This study provides novel insights by using scalp EEG to capture widespread neural representations in marmosets during vocal perception, filling gaps in existing knowledge.


Subject(s)
Callithrix , Electroencephalography , Vocalization, Animal , Animals , Electroencephalography/methods , Vocalization, Animal/physiology , Callithrix/physiology , Auditory Perception/physiology , Male , Wakefulness/physiology , Female
14.
J Acoust Soc Am ; 156(2): 1111-1122, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39145812

ABSTRACT

Previous psychological studies have shown that musical consonance is not only determined by the frequency ratios between tones, but also by the frequency spectra of those tones. However, these prior studies used artificial tones, specifically tones built from a small number of pure tones, which do not match the acoustic complexity of real musical instruments. The present experiment therefore investigates tones recorded from a real musical instrument, the Westerkerk Carillon, in a "dense rating" experiment where participants (N = 113) rated musical intervals drawn from the continuous range 0-15 semitones. Results show that the traditional consonances of the major third and the minor sixth become dissonances in the carillon and that small intervals (in particular 0.5-2.5 semitones) also become particularly dissonant. Computational modelling shows that these effects are primarily caused by interference between partials (e.g., beating), but that preference for harmonicity is also necessary to produce an accurate overall account of participants' preferences. The results support musicians' writings about the carillon and contribute to ongoing debates about the psychological mechanisms underpinning consonance perception, in particular disputing the recent claim that interference is largely irrelevant to consonance perception.
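The "interference between partials" mechanism can be illustrated with a Plomp-Levelt-style roughness sum in the spirit of Sethares' dissonance model. This is an assumption-laden sketch: the constants are the commonly quoted textbook fit, and the harmonic spectra are hypothetical, not the inharmonic carillon spectra or the fitted model used in the study:

```python
import math

def harmonic_partials(f0, n=6, rolloff=0.88):
    """Hypothetical harmonic spectrum: n partials with geometric amplitude decay.
    (Real carillon bells have inharmonic partials, which is the point of the study.)"""
    return [(f0 * k, rolloff ** (k - 1)) for k in range(1, n + 1)]

def interference(partials_a, partials_b):
    """Sum a Plomp-Levelt-style roughness contribution over all pairs of
    partials from two tones; each partial is a (frequency_hz, amplitude) pair.
    Constants follow the widely cited Sethares parameterization."""
    total = 0.0
    for f1, a1 in partials_a:
        for f2, a2 in partials_b:
            f_lo, f_hi = min(f1, f2), max(f1, f2)
            s = 0.24 / (0.0207 * f_lo + 18.96)   # critical-bandwidth scaling
            x = s * (f_hi - f_lo)
            total += a1 * a2 * (math.exp(-3.5 * x) - math.exp(-5.75 * x))
    return total

# for harmonic tones, a semitone is far rougher than a perfect fifth,
# because the fifth's partials largely coincide while the semitone's beat
semitone = interference(harmonic_partials(440.0), harmonic_partials(466.16))
fifth = interference(harmonic_partials(440.0), harmonic_partials(660.0))
```

Swapping in measured bell partials instead of `harmonic_partials` is what shifts the dissonance minima away from intervals like the major third, in line with the carillon results.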


Subject(s)
Acoustic Stimulation , Music , Humans , Male , Female , Adult , Young Adult , Computer Simulation , Sound Spectrography , Adolescent , Pitch Perception , Time Factors , Acoustics , Middle Aged , Auditory Perception
15.
J Acoust Soc Am ; 156(2): 989-1003, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39136635

ABSTRACT

In order to improve the prediction accuracy of the sound quality of vehicle interior noise, a novel sound quality prediction model was proposed based on physiologically predicted metrics, i.e., loudness, sharpness, and roughness. First, a human-ear sound transmission model was constructed by combining the outer and middle ear finite element model with the cochlear transmission line model. This model converted external input noise into cochlear basilar membrane response. Second, the physiological perception models of loudness, sharpness, and roughness were constructed by transforming the basilar membrane response into sound perception related to neuronal firing. Finally, taking the loudness, sharpness, and roughness calculated by the physiological model and the subjective evaluation values of vehicle interior noise as parameters, a sound quality prediction model was constructed using the TabNet model. The results demonstrate that the loudness, sharpness, and roughness computed by the human-ear physiological model exhibit a stronger correlation with the subjective evaluation of sound quality annoyance compared to traditional psychoacoustic parameters. Furthermore, the average error percentage of sound quality prediction based on the physiological model is only 3.81%, which is lower than that based on traditional psychoacoustic parameters.


Subject(s)
Loudness Perception , Noise, Transportation , Psychoacoustics , Humans , Loudness Perception/physiology , Acoustic Stimulation/methods , Finite Element Analysis , Models, Biological , Automobiles , Basilar Membrane/physiology , Cochlea/physiology , Auditory Perception/physiology , Noise , Ear, Middle/physiology , Computer Simulation
16.
Cereb Cortex ; 34(8)2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39128940

ABSTRACT

The orbitofrontal cortex and amygdala collaborate in outcome-guided decision-making through reciprocal projections. While serotonin transporter knockout (SERT-/-) rodents show changes in outcome-guided decision-making, and in orbitofrontal cortex and amygdala neuronal activity, it remains unclear whether SERT genotype modulates orbitofrontal cortex-amygdala synchronization. We trained SERT-/- and SERT+/+ male rats to execute a task requiring to discriminate between two auditory stimuli, one predictive of a reward (CS+) and the other not (CS-), by responding through nose pokes in opposite-side ports. Overall, task acquisition was not influenced by genotype. Next, we simultaneously recorded local field potentials in the orbitofrontal cortex and amygdala of both hemispheres while the rats performed the task. Behaviorally, SERT-/- rats showed a nonsignificant trend for more accurate responses to the CS-. Electrophysiologically, orbitofrontal cortex-amygdala synchronization in the beta and gamma frequency bands during response selection was significantly reduced and associated with decreased hubness and clustering coefficient in both regions in SERT-/- rats compared to SERT+/+ rats. Conversely, theta synchronization at the time of behavioral response in the port associated with reward was similar in both genotypes. Together, our findings reveal the modulation by SERT genotype of the orbitofrontal cortex-amygdala functional connectivity during an auditory discrimination task.


Subject(s)
Amygdala , Discrimination, Psychological , Gamma Rhythm , Prefrontal Cortex , Serotonin Plasma Membrane Transport Proteins , Animals , Male , Prefrontal Cortex/physiology , Serotonin Plasma Membrane Transport Proteins/genetics , Serotonin Plasma Membrane Transport Proteins/deficiency , Amygdala/physiology , Gamma Rhythm/physiology , Rats , Discrimination, Psychological/physiology , Beta Rhythm/physiology , Neural Pathways/physiology , Reward , Auditory Perception/physiology , Acoustic Stimulation , Rats, Transgenic
17.
Sci Adv ; 10(33): eadp9816, 2024 Aug 16.
Article in English | MEDLINE | ID: mdl-39141740

ABSTRACT

Perceptual learning leads to improvement in behavioral performance, yet how the brain supports challenging perceptual demands is unknown. We used two-photon imaging in the mouse primary auditory cortex during behavior in a Go-NoGo task designed to test perceptual difficulty. Using general linear model analysis, we found a subset of neurons that increased their responses during high perceptual demands. Single neurons increased their responses to both Go and NoGo sounds when mice were engaged in the more difficult perceptual discrimination. This increased responsiveness contributes to enhanced cortical network discriminability for the learned sounds. Under passive listening conditions, the same neurons responded more weakly to the more similar sound pairs of the difficult task, and the training protocol by itself induced specific suppression to the learned sounds. Our findings identify how neuronal activity in auditory cortex is modulated during high perceptual demands, which is a fundamental feature associated with perceptual improvement.


Subject(s)
Auditory Cortex , Auditory Perception , Neurons , Animals , Auditory Cortex/physiology , Mice , Neurons/physiology , Auditory Perception/physiology , Acoustic Stimulation , Male , Learning/physiology
18.
Trends Hear ; 28: 23312165241273342, 2024.
Article in English | MEDLINE | ID: mdl-39150412

ABSTRACT

During the last decade, there has been a move towards consumer-centric hearing healthcare. This is a direct result of technological advancements (e.g., merger of consumer grade hearing aids with consumer grade earphones creating a wide range of hearing devices) as well as policy changes (e.g., the U.S. Food and Drug Administration creating a new over-the-counter [OTC] hearing aid category). In addition to various direct-to-consumer (DTC) hearing devices available on the market, there are also several validated tools for the self-assessment of auditory function and the detection of ear disease, as well as tools for education about hearing loss, hearing devices, and communication strategies. Further, all can be made easily available to a wide range of people. This perspective provides a framework and identifies tools to improve and maintain optimal auditory wellness across the adult life course. A broadly available and accessible set of tools that can be made available on a digital platform to aid adults in the assessment and as needed, the improvement, of auditory wellness is discussed.


Subject(s)
Hearing Aids , Hearing Loss , Humans , Hearing Loss/diagnosis , Hearing Loss/rehabilitation , Hearing Loss/physiopathology , Hearing Loss/therapy , Hearing , Persons With Hearing Impairments/rehabilitation , Persons With Hearing Impairments/psychology , Correction of Hearing Impairment/instrumentation , Auditory Perception , Health Knowledge, Attitudes, Practice , Patient Education as Topic
19.
Sci Rep ; 14(1): 19048, 2024 08 16.
Article in English | MEDLINE | ID: mdl-39152203

ABSTRACT

Aesthetic preference is intricately linked to learning and creativity. Previous studies have largely examined the perception of novelty in terms of pleasantness and the generation of novelty via creativity separately. The current study examines the connection between the perception and generation of novelty in music; specifically, we investigated how pleasantness judgements and brain responses to musical notes of varying probability (estimated by a computational model of auditory expectation) are linked to learning and creativity. To facilitate learning de novo, 40 non-musicians were trained on an unfamiliar artificial music grammar. After learning, participants evaluated the pleasantness of the final notes of melodies, which varied in probability, while their EEG was recorded. They also composed their own musical pieces using the learned grammar, which were subsequently assessed by experts. As expected, there was an inverted U-shaped relationship between liking and probability: participants were more likely to rate notes with intermediate probabilities as pleasant. Further, intermediate-probability notes elicited larger N100 and P200 responses at posterior and frontal sites, respectively, associated with prediction error processing. Crucially, individuals who produced less creative compositions preferred higher-probability notes, whereas individuals who composed more creative pieces preferred notes with intermediate probability. Finally, evoked brain responses to note probability were relatively independent of learning and creativity, suggesting that these higher-level processes are not mediated by brain responses related to performance monitoring. Overall, our findings shed light on the relationship between the perception and generation of novelty, offering new insights into aesthetic preference and its neural correlates.


Subject(s)
Auditory Perception , Creativity , Electroencephalography , Learning , Music , Humans , Music/psychology , Male , Female , Learning/physiology , Adult , Young Adult , Auditory Perception/physiology , Brain/physiology , Acoustic Stimulation
20.
Hear Res ; 451: 109093, 2024 Sep 15.
Article in English | MEDLINE | ID: mdl-39094370

ABSTRACT

The discovery and development of electrocochleography (ECochG) in animal models has been fundamental for its implementation in clinical audiology and neurotology. In our laboratory, the use of round-window ECochG recordings in chinchillas has allowed a better understanding of auditory efferent functioning. In previous works, we gave evidence of the corticofugal modulation of auditory-nerve and cochlear responses during visual attention and working memory. However, whether these cognitive top-down mechanisms acting on the most peripheral structures of the auditory pathway are also active during audiovisual crossmodal stimulation is unknown. Here, we introduce a new technique, wireless ECochG, to record compound action potentials of the auditory nerve (CAP), cochlear microphonics (CM), and round-window noise (RWN) in awake chinchillas during a paradigm of crossmodal (visual and auditory) stimulation. We compared ECochG data obtained from four awake chinchillas recorded with a wireless ECochG system with wired ECochG recordings from six anesthetized animals. Although ECochG experiments with the wireless system had a lower signal-to-noise ratio than wired recordings, their quality was sufficient to compare ECochG potentials in awake crossmodal conditions. We found non-significant differences in CAP and CM amplitudes in response to audiovisual stimulation compared to auditory stimulation alone (clicks and tones). On the other hand, spontaneous auditory-nerve activity (RWN) was modulated by visual crossmodal stimulation, suggesting that visual crossmodal stimulation can modulate spontaneous but not evoked auditory-nerve activity. However, given the limited sample of 10 animals (4 wireless and 6 wired), these results should be interpreted cautiously, and future experiments are required to substantiate these conclusions. In addition, we introduce the use of wireless ECochG in animal models as a useful tool for translational research.


Subject(s)
Acoustic Stimulation , Audiometry, Evoked Response , Auditory Pathways , Chinchilla , Cochlear Nerve , Photic Stimulation , Wakefulness , Wireless Technology , Animals , Cochlear Nerve/physiology , Wakefulness/physiology , Wireless Technology/instrumentation , Auditory Pathways/physiology , Audiometry, Evoked Response/methods , Models, Animal , Auditory Perception/physiology , Cochlea/physiology , Visual Perception , Time Factors