Results 1 - 20 of 16,746
1.
PLoS One ; 19(7): e0299784, 2024.
Article in English | MEDLINE | ID: mdl-38950011

ABSTRACT

Observers can discriminate between correct and incorrect perceptual decisions with feelings of confidence. The centro-parietal positivity build-up rate (CPP slope) has been suggested as a likely neural signature of accumulated evidence, which may guide both perceptual performance and confidence. However, CPP slope also covaries with reaction time, which in previous studies has also covaried with confidence, and performance and confidence typically covary; thus, CPP slope may index signatures of perceptual performance rather than confidence per se. Moreover, perceptual metacognition, including its neural correlates, has largely been studied in vision, with few exceptions. Thus, we lack understanding of domain-general neural signatures of perceptual metacognition outside vision. Here we designed a novel auditory pitch identification task and collected behavior with simultaneous 32-channel EEG in healthy adults. Participants saw two tone labels that varied in tonal distance on each trial (e.g., C vs D, C vs F), then heard a single auditory tone; they identified which label was correct and rated confidence. We found that pitch identification confidence varied with tonal distance, but performance, metacognitive sensitivity (trial-by-trial covariation of confidence with accuracy), and reaction time did not. Interestingly, however, while CPP slope covaried with performance and reaction time, it did not significantly covary with confidence. We interpret these results to mean that CPP slope is likely a signature of first-order perceptual processing and not confidence-specific signals or computations in auditory tasks. Our novel pitch identification task offers a valuable method to examine the neural correlates of auditory and domain-general perceptual confidence.
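
The CPP slope described here is typically estimated as the slope of a straight line fit to the response-locked ERP at centro-parietal electrodes over a window preceding the response. Below is a minimal sketch of one such estimate, assuming a hypothetical `erp` array (trials x samples) that is already response-locked and baseline-corrected; the fitting window and sampling rate are illustrative, not taken from the paper.

```python
import numpy as np

def cpp_slope(erp, sfreq, t_start=-0.25, t_end=-0.05, t_zero_idx=None):
    """Estimate CPP build-up rate as the linear slope of the response-locked
    ERP in a pre-response window (one slope per trial).

    erp        : (n_trials, n_samples) response-locked ERP at a centro-parietal channel
    sfreq      : sampling rate in Hz
    t_zero_idx : sample index of the response (time 0); defaults to the last sample
    """
    n_trials, n_samples = erp.shape
    if t_zero_idx is None:
        t_zero_idx = n_samples - 1
    times = (np.arange(n_samples) - t_zero_idx) / sfreq    # seconds relative to response
    mask = (times >= t_start) & (times <= t_end)           # pre-response fitting window
    slopes = np.empty(n_trials)
    for i in range(n_trials):
        # first-order polynomial fit: the slope is the build-up rate for this trial
        slopes[i] = np.polyfit(times[mask], erp[i, mask], 1)[0]
    return slopes

# Illustrative use with simulated data (500 Hz, 1 s epochs ending at the response)
rng = np.random.default_rng(0)
fake_erp = np.cumsum(rng.normal(0.02, 0.1, size=(100, 500)), axis=1)
print(cpp_slope(fake_erp, sfreq=500).mean())
```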


Subject(s)
Electroencephalography , Pitch Perception , Reaction Time , Humans , Male , Female , Adult , Reaction Time/physiology , Young Adult , Pitch Perception/physiology , Acoustic Stimulation , Metacognition/physiology , Auditory Perception/physiology
2.
Optom Vis Sci ; 101(6): 393-398, 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38990237

ABSTRACT

SIGNIFICANCE: It is important to know whether early-onset vision loss and late-onset vision loss are associated with differences in the estimation of distances of sound sources within the environment. People with vision loss rely heavily on auditory cues for path planning, safe navigation, avoiding collisions, and activities of daily living. PURPOSE: Loss of vision can lead to substantial changes in auditory abilities. It is unclear whether differences in sound distance estimation exist in people with early-onset partial vision loss, late-onset partial vision loss, and normal vision. We investigated distance estimates for a range of sound sources and auditory environments in groups of participants with early- or late-onset partial visual loss and sighted controls. METHODS: Fifty-two participants heard static sounds with virtual distances ranging from 1.2 to 13.8 m within a simulated room. The room simulated either anechoic (no echoes) or reverberant environments. Stimuli were speech, music, or noise. Single sounds were presented, and participants reported the estimated distance of the sound source. Each participant took part in 480 trials. RESULTS: Analysis of variance showed significant main effects of visual status (p<0.05), environment (reverberant vs. anechoic, p<0.05), and stimulus (p<0.05). Significant differences (p<0.05) were shown in the estimation of distances of sound sources between early-onset visually impaired participants and sighted controls for closer distances for all conditions except the anechoic speech condition, and at middle distances for all conditions except the reverberant speech and music conditions. Late-onset visually impaired participants and sighted controls showed similar performance (p>0.05). CONCLUSIONS: The findings suggest that early-onset partial vision loss results in significant changes in judged auditory distance in different environments, especially for close and middle distances. Late-onset partial visual loss has less of an impact on the ability to estimate the distance of sound sources. The findings are consistent with a theoretical framework, the perceptual restructuring hypothesis, which was recently proposed to account for the effects of vision loss on audition.


Subject(s)
Sound Localization , Humans , Male , Female , Middle Aged , Aged , Adult , Sound Localization/physiology , Judgment , Auditory Perception/physiology , Distance Perception/physiology , Acoustic Stimulation/methods , Young Adult , Visual Acuity/physiology , Age of Onset , Aged, 80 and over , Cues
3.
Proc Natl Acad Sci U S A ; 121(30): e2320378121, 2024 Jul 23.
Article in English | MEDLINE | ID: mdl-39008675

ABSTRACT

The neuroscientific examination of music processing in audio-visual contexts offers a valuable framework to assess how auditory information influences the emotional encoding of visual information. Using fMRI during naturalistic film viewing, we investigated the neural mechanisms underlying the effect of music on valence inferences during mental state attribution. Thirty-eight participants watched the same short film accompanied by systematically controlled consonant or dissonant music. Subjects were instructed to think about the main character's intentions. The results revealed that increasing levels of dissonance led to more negatively valenced inferences, demonstrating the profound emotional impact of musical dissonance. Crucially, at the neuroscientific level and despite music being the sole manipulation, dissonance evoked responses in the primary visual cortex (V1). Functional/effective connectivity analysis showed a stronger coupling between the auditory ventral stream (AVS) and V1 in response to tonal dissonance and demonstrated the modulation of early visual processing via top-down feedback inputs from the AVS to V1. These V1 signal changes indicate the influence of high-level contextual representations associated with tonal dissonance on early visual cortices, serving to facilitate the emotional interpretation of visual information. Our results highlight the significance of employing systematically controlled music, which can isolate emotional valence from the arousal dimension, to elucidate the brain's sound-to-meaning interface and its distributive crossmodal effects on early visual encoding during naturalistic film viewing.
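
Psychophysiological interaction (PPI) analyses of this kind are built around an interaction regressor formed from the seed time course and the task variable. The sketch below illustrates the general construction with ordinary least squares on hypothetical arrays (a seed time series standing in for the auditory ventral stream, a per-volume dissonance regressor, and a V1 time series); it is not the authors' pipeline, which would normally include HRF convolution and nuisance regressors.

```python
import numpy as np

def ppi_beta(v1_ts, seed_ts, task_reg):
    """Fit y = b0 + b1*seed + b2*task + b3*(seed*task); return the PPI beta b3."""
    seed = (seed_ts - seed_ts.mean()) / seed_ts.std()
    task = task_reg - task_reg.mean()
    ppi = seed * task                               # the psychophysiological interaction term
    X = np.column_stack([np.ones_like(seed), seed, task, ppi])
    beta, *_ = np.linalg.lstsq(X, v1_ts, rcond=None)
    return beta[3]

# Hypothetical example: V1 couples more strongly with the seed as dissonance rises
rng = np.random.default_rng(1)
n_vols = 300
seed = rng.normal(size=n_vols)
dissonance = np.repeat([0.0, 1.0, 2.0], n_vols // 3)     # hypothetical dissonance levels
v1 = 0.4 * seed * (dissonance - dissonance.mean()) + rng.normal(size=n_vols)
print(ppi_beta(v1, seed, dissonance))
```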


Subject(s)
Auditory Perception , Emotions , Magnetic Resonance Imaging , Music , Visual Perception , Humans , Music/psychology , Female , Male , Adult , Visual Perception/physiology , Auditory Perception/physiology , Emotions/physiology , Young Adult , Brain Mapping , Acoustic Stimulation , Visual Cortex/physiology , Visual Cortex/diagnostic imaging , Primary Visual Cortex/physiology , Photic Stimulation/methods
4.
J Acoust Soc Am ; 156(1): 511-523, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-39013168

ABSTRACT

Echolocating bats rely on precise auditory temporal processing to detect echoes generated by calls that may be emitted at rates reaching 150-200 Hz. High call rates can introduce forward masking perceptual effects that interfere with echo detection; however, bats may have evolved specializations to prevent repetition suppression of auditory responses and facilitate detection of sounds separated by brief intervals. Recovery of the auditory brainstem response (ABR) was assessed in two species that differ in the temporal characteristics of their echolocation behaviors: Eptesicus fuscus, which uses high call rates to capture prey, and Carollia perspicillata, which uses lower call rates to avoid obstacles and forage for fruit. We observed significant species differences in the effects of forward masking on ABR wave 1, in which E. fuscus maintained comparable ABR wave 1 amplitudes when stimulated at intervals of <3 ms, whereas post-stimulus recovery in C. perspicillata required 12 ms. When the intensity of the second stimulus was reduced by 20-30 dB relative to the first, however, C. perspicillata showed greater recovery of wave 1 amplitudes. The results demonstrate that species differences in temporal resolution are established at early levels of the auditory pathway and that these differences reflect auditory processing requirements of species-specific echolocation behaviors.
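
Forward-masking recovery of this kind is commonly summarized as the amplitude of the response to the second stimulus, expressed relative to the first, as a function of inter-stimulus interval. A minimal sketch under that assumption, with hypothetical wave 1 amplitudes, illustrative intervals, and an illustrative recovery criterion:

```python
import numpy as np

def recovery_interval(intervals_ms, amp_second, amp_first, criterion=0.9):
    """Return the shortest interval at which the wave 1 amplitude to the second
    stimulus has recovered to `criterion` of the response to the first."""
    ratio = np.asarray(amp_second) / np.asarray(amp_first)
    recovered = np.where(ratio >= criterion)[0]
    return intervals_ms[recovered[0]] if recovered.size else None

intervals = np.array([1, 2, 3, 6, 12, 24])                     # ms, illustrative
eptesicus = np.array([0.92, 0.95, 0.97, 0.98, 0.99, 1.00])     # fast recovery (hypothetical)
carollia  = np.array([0.40, 0.55, 0.70, 0.82, 0.93, 0.99])     # slower recovery (hypothetical)
first = np.ones(6)
print(recovery_interval(intervals, eptesicus, first))          # -> 1
print(recovery_interval(intervals, carollia, first))           # -> 12
```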


Subject(s)
Acoustic Stimulation , Chiroptera , Echolocation , Evoked Potentials, Auditory, Brain Stem , Perceptual Masking , Species Specificity , Animals , Chiroptera/physiology , Acoustic Stimulation/methods , Evoked Potentials, Auditory, Brain Stem/physiology , Time Factors , Male , Female , Auditory Threshold , Auditory Perception/physiology
5.
Sci Rep ; 14(1): 16412, 2024 Jul 16.
Article in English | MEDLINE | ID: mdl-39013995

ABSTRACT

A series of eleven public concerts (staging chamber music by Ludwig van Beethoven, Brett Dean, and Johannes Brahms) was organized with the goal of analyzing physiological synchronies within the audiences and associations of synchrony with psychological variables. We hypothesized that the music would induce synchronized physiology, which would be linked to participants' aesthetic experiences, affect, and personality traits. Physiological measures (cardiac, electrodermal, respiration) of 695 participants were recorded during presentations. Before and after concerts, questionnaires provided self-report scales and standardized measures of participants' affectivity, personality traits, aesthetic experiences and listening modes. Synchrony was computed by a cross-correlational algorithm to obtain, for each participant and physiological variable (heart rate, heart-rate variability, respiration rate, respiration, skin-conductance response), how much that individual contributed to overall audience synchrony. In hierarchical models, such synchrony contribution was used as the dependent variable and the various self-report scales as predictor variables. We found that physiology throughout audiences was significantly synchronized, as expected, with the exception of breathing behavior. There were links between synchrony and affectivity. Personality moderated the synchrony levels: Openness was positively associated, Extraversion and Neuroticism negatively. Several factors of experiences and listening modes predicted synchrony. Emotional listening was associated with reduced synchrony, whereas both structural and sound-focused listening were associated with increased synchrony. We concluded with an updated, nuanced understanding of synchrony on the timescale of whole concerts, inviting elaboration by synchrony studies on shorter timescales of music passages.
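
One common way to quantify how much a single audience member contributes to overall synchrony is to cross-correlate that participant's signal with the average signal of all other participants and take the peak correlation within a small lag window. The sketch below shows that idea on a hypothetical (participants x samples) array; it is a simplified stand-in for the cross-correlational algorithm described in the abstract, not the authors' exact implementation.

```python
import numpy as np

def synchrony_contribution(signals, max_lag=10):
    """Peak lagged correlation between each participant and the mean of the others.

    signals : (n_participants, n_samples) array of one physiological measure
    max_lag : maximum lag (in samples) considered in either direction
    """
    n, _ = signals.shape
    z = (signals - signals.mean(axis=1, keepdims=True)) / signals.std(axis=1, keepdims=True)
    contrib = np.empty(n)
    for i in range(n):
        others = np.delete(z, i, axis=0).mean(axis=0)
        corrs = []
        for lag in range(-max_lag, max_lag + 1):
            a = z[i, max(0, lag):len(others) + min(0, lag)]
            b = others[max(0, -lag):len(others) - max(0, lag)]
            corrs.append(np.corrcoef(a, b)[0, 1])
        contrib[i] = max(corrs)
    return contrib

# Hypothetical audience of 30 sharing a slow common component
rng = np.random.default_rng(2)
common = np.sin(np.linspace(0, 20, 600))
data = common + rng.normal(0, 0.5, size=(30, 600))
print(synchrony_contribution(data).mean())
```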


Subject(s)
Music , Personality , Humans , Music/psychology , Male , Female , Adult , Personality/physiology , Heart Rate/physiology , Auditory Perception/physiology , Young Adult , Middle Aged , Galvanic Skin Response/physiology , Attitude , Adolescent , Surveys and Questionnaires , Emotions/physiology , Respiratory Rate/physiology
6.
Sci Rep ; 14(1): 16462, 2024 Jul 16.
Article in English | MEDLINE | ID: mdl-39014043

ABSTRACT

The current study tested the hypothesis that the association between musical ability and vocal emotion recognition skills is mediated by accuracy in prosody perception. Furthermore, it was investigated whether this association is primarily related to musical expertise, operationalized by long-term engagement in musical activities, or musical aptitude, operationalized by a test of musical perceptual ability. To this end, we conducted three studies: In Study 1 (N = 85) and Study 2 (N = 93), we developed and validated a new instrument for the assessment of prosodic discrimination ability. In Study 3 (N = 136), we examined whether the association between musical ability and vocal emotion recognition was mediated by prosodic discrimination ability. We found evidence for a full mediation, though only in relation to musical aptitude and not in relation to musical expertise. Taken together, these findings suggest that individuals with high musical aptitude have superior prosody perception skills, which in turn contribute to their vocal emotion recognition skills. Importantly, our results suggest that these benefits are not unique to musicians, but extend to non-musicians with high musical aptitude.
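
The full mediation reported here follows the standard indirect-effect logic: musical aptitude predicts prosodic discrimination (path a), prosodic discrimination predicts emotion recognition controlling for aptitude (path b), and the indirect effect a*b is tested by bootstrap. A minimal sketch of that logic on hypothetical vectors (not the authors' software or data):

```python
import numpy as np

def ols(X, y):
    """OLS coefficients for y = X @ beta (X should include an intercept column)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

def indirect_effect(aptitude, prosody, emotion):
    ones = np.ones_like(aptitude)
    a = ols(np.column_stack([ones, aptitude]), prosody)[1]              # path a
    b = ols(np.column_stack([ones, aptitude, prosody]), emotion)[2]     # path b
    return a * b

def bootstrap_ci(aptitude, prosody, emotion, n_boot=2000, seed=0):
    rng = np.random.default_rng(seed)
    n = len(aptitude)
    boots = [indirect_effect(*(arr[idx] for arr in (aptitude, prosody, emotion)))
             for idx in (rng.integers(0, n, n) for _ in range(n_boot))]
    return np.percentile(boots, [2.5, 97.5])

# Hypothetical data in which prosody fully carries the aptitude -> emotion link
rng = np.random.default_rng(3)
apt = rng.normal(size=136)
pros = 0.6 * apt + rng.normal(scale=0.8, size=136)
emo = 0.5 * pros + rng.normal(scale=0.8, size=136)
print(indirect_effect(apt, pros, emo), bootstrap_ci(apt, pros, emo))
```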


Subject(s)
Aptitude , Emotions , Music , Humans , Music/psychology , Male , Female , Emotions/physiology , Aptitude/physiology , Adult , Young Adult , Speech Perception/physiology , Auditory Perception/physiology , Adolescent , Recognition, Psychology/physiology , Voice/physiology
7.
Cereb Cortex ; 34(7)2024 Jul 03.
Article in English | MEDLINE | ID: mdl-39016432

ABSTRACT

Sound is an important navigational cue for mammals. During spatial navigation, hippocampal place cells encode spatial representations of the environment based on visual information, but to what extent audiospatial information can enable reliable place cell mapping is largely unknown. We assessed this by recording from CA1 place cells in the dark, under circumstances where reliable visual, tactile, or olfactory information was unavailable. Male rats were exposed to auditory cues of different frequencies that were delivered from local or distal spatial locations. We observed that distal, but not local cue presentation, enables and supports stable place fields, regardless of the sound frequency used. Our data suggest that a context dependency exists regarding the relevance of auditory information for place field mapping: whereas locally available auditory cues do not serve as a salient spatial basis for the anchoring of place fields, auditory cue localization supports spatial representations by place cells when available in the form of distal information. Furthermore, our results demonstrate that CA1 neurons can effectively use auditory stimuli to generate place fields, and that hippocampal pyramidal neurons are not solely dependent on visual cues for the generation of place field representations based on allocentric reference frames.


Subject(s)
Acoustic Stimulation , Cues , Place Cells , Rats, Long-Evans , Space Perception , Animals , Male , Place Cells/physiology , Space Perception/physiology , CA1 Region, Hippocampal/physiology , CA1 Region, Hippocampal/cytology , Rats , Auditory Perception/physiology , Action Potentials/physiology , Spatial Navigation/physiology
8.
PLoS One ; 19(7): e0304027, 2024.
Article in English | MEDLINE | ID: mdl-39018315

ABSTRACT

Rhythms are the most natural cue for temporal anticipation because many sounds in our living environment have rhythmic structures. Humans have cortical mechanisms that can predict the arrival of the next sound based on rhythm and periodicity. Herein, we showed that temporal anticipation, based on the regularity of sound sequences, modulates peripheral auditory responses via efferent innervation. The medial olivocochlear reflex (MOCR), a sound-activated efferent feedback mechanism that controls outer hair cell motility, was inferred noninvasively by measuring the suppression of otoacoustic emissions (OAE). First, OAE suppression was compared between conditions in which sound sequences preceding the MOCR elicitor were presented at regular (predictable condition) or irregular (unpredictable condition) intervals. We found that OAE suppression in the predictable condition was stronger than that in the unpredictable condition. This implies that the MOCR is strengthened by the regularity of preceding sound sequences. In addition, to examine how many regularly presented preceding sounds are required to enhance the MOCR, we compared OAE suppression within stimulus sequences with 0-3 preceding tones. The OAE suppression was strengthened only when there were at least three regular preceding tones. This suggests that the MOCR was not automatically enhanced by a single stimulus presented immediately before the MOCR elicitor, but rather that it was enhanced by the regularity of the preceding sound sequences.
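
MOCR strength in designs like this is typically expressed as OAE suppression: the OAE level without the elicitor minus the level with the elicitor, compared across conditions with a paired test. A minimal sketch, assuming hypothetical per-participant OAE amplitudes in dB:

```python
import numpy as np
from scipy import stats

def oae_suppression(baseline_db, with_elicitor_db):
    """Suppression in dB: positive values mean the elicitor reduced the OAE."""
    return np.asarray(baseline_db) - np.asarray(with_elicitor_db)

rng = np.random.default_rng(4)
n = 24                                                      # hypothetical sample size
baseline = rng.normal(10.0, 1.0, size=(2, n))               # rows: predictable, unpredictable
elicited = baseline - np.array([[1.5], [0.8]]) + rng.normal(0, 0.3, size=(2, n))

supp_pred = oae_suppression(baseline[0], elicited[0])
supp_unpred = oae_suppression(baseline[1], elicited[1])
t, p = stats.ttest_rel(supp_pred, supp_unpred)              # paired comparison across conditions
print(f"mean suppression (predictable) = {supp_pred.mean():.2f} dB, "
      f"(unpredictable) = {supp_unpred.mean():.2f} dB, t = {t:.2f}, p = {p:.3g}")
```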


Subject(s)
Acoustic Stimulation , Cochlea , Humans , Male , Adult , Female , Young Adult , Cochlea/physiology , Olivary Nucleus/physiology , Reflex/physiology , Sound , Auditory Perception/physiology , Otoacoustic Emissions, Spontaneous/physiology , Reflex, Acoustic/physiology
9.
Hum Brain Mapp ; 45(10): e26724, 2024 Jul 15.
Article in English | MEDLINE | ID: mdl-39001584

ABSTRACT

Music is ubiquitous, both in its instrumental and vocal forms. While speech perception at birth has been at the core of an extensive corpus of research, the origins of the ability to discriminate instrumental or vocal melodies are still not well investigated. In previous studies comparing vocal and musical perception, the vocal stimuli were mainly related to speaking, including language, and not to the non-language singing voice. In the present study, to better compare a melodic instrumental line with the voice, we used singing as a comparison stimulus, to reduce the dissimilarities between the two stimuli as much as possible, separating language perception from vocal musical perception. Forty-five newborns were scanned: 10 full-term infants and 35 preterm infants at term-equivalent age (mean gestational age at test = 40.17 weeks, SD = 0.44), using functional magnetic resonance imaging while listening to five melodies played by a musical instrument (flute) or sung by a female voice. To examine the dynamic task-based effective connectivity, we employed a psychophysiological interaction of co-activation patterns (PPI-CAPs) analysis, using the auditory cortices as seed region, to investigate moment-to-moment changes in task-driven modulation of cortical activity during an fMRI task. Our findings reveal condition-specific, dynamically occurring patterns of co-activation (PPI-CAPs). During the vocal condition, the auditory cortex co-activates with the sensorimotor and salience networks, while during the instrumental condition, it co-activates with the visual cortex and the superior frontal cortex. Our results show that the vocal stimulus elicits sensorimotor aspects of auditory perception and is processed as a more salient stimulus, while the instrumental condition activated higher-order cognitive and visuo-spatial networks. Common neural signatures for both auditory stimuli were found in the precuneus and posterior cingulate gyrus. Finally, this study adds knowledge on the dynamic brain connectivity underlying the newborns' capability of early and specialized auditory processing, highlighting the relevance of dynamic approaches to studying brain function in newborn populations.
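
PPI-CAPs combine the logic of psychophysiological interactions with co-activation pattern analysis: frames are weighted or selected according to the product of seed activity and the task regressor, and the retained frame-wise spatial patterns are clustered. The sketch below shows that general idea on hypothetical arrays, using plain k-means; it is a simplified stand-in, not the authors' implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def ppi_caps(frames, seed_ts, task_reg, n_caps=4, frac=0.15):
    """Cluster the frames with the strongest seed-by-task interaction.

    frames   : (n_volumes, n_voxels) fMRI data
    seed_ts  : (n_volumes,) seed (auditory cortex) time course
    task_reg : (n_volumes,) task regressor (e.g. vocal vs instrumental)
    """
    seed = (seed_ts - seed_ts.mean()) / seed_ts.std()
    task = task_reg - task_reg.mean()
    interaction = seed * task
    n_keep = int(len(interaction) * frac)
    keep = np.argsort(np.abs(interaction))[-n_keep:]        # frames with strongest interaction
    km = KMeans(n_clusters=n_caps, n_init=10, random_state=0).fit(frames[keep])
    return km.cluster_centers_, keep

rng = np.random.default_rng(5)
frames = rng.normal(size=(400, 500))                        # hypothetical: 400 volumes, 500 voxels
seed = rng.normal(size=400)
task = np.tile([1.0, -1.0], 200)                            # alternating conditions (illustrative)
caps, kept = ppi_caps(frames, seed, task)
print(caps.shape, kept.size)
```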


Subject(s)
Auditory Perception , Magnetic Resonance Imaging , Music , Humans , Female , Male , Auditory Perception/physiology , Infant, Newborn , Singing/physiology , Infant, Premature/physiology , Brain Mapping , Acoustic Stimulation , Brain/physiology , Brain/diagnostic imaging , Voice/physiology
10.
Philos Trans R Soc Lond B Biol Sci ; 379(1908): 20230257, 2024 Aug 26.
Article in English | MEDLINE | ID: mdl-39005025

ABSTRACT

Misophonia is commonly characterized by intense emotional reactions to common everyday sounds. The condition has an impact both on the mental health of its sufferers and on society more broadly. As yet, formal models of misophonia are in their infancy. Based on developing behavioural and neuroscientific research, we are gaining a growing understanding of the phenomenology and empirical findings in misophonia, such as the importance of context, the types of coping strategies used, and the activation of particular brain regions. In this article, we argue for a model of misophonia that includes not only the sound but also the context within which sound is perceived and the emotional reaction triggered. We review the current behavioural and neuroimaging literature, which lends support to this idea. Based on the current evidence, we propose that misophonia should be understood within the broader context of social perception and cognition, and not restricted to the narrow domain of a disorder of auditory processing. We discuss the evidence in support of this hypothesis, as well as the implications for potential treatment approaches. This article is part of the theme issue 'Sensing and feeling: an integrative approach to sensory processing and emotional experience'.


Subject(s)
Emotions , Social Cognition , Humans , Emotions/physiology , Auditory Perception/physiology , Cognition , Social Perception
11.
Commun Biol ; 7(1): 856, 2024 Jul 12.
Article in English | MEDLINE | ID: mdl-38997514

ABSTRACT

The neuroscience of consciousness aims to identify neural markers that distinguish brain dynamics in healthy individuals from those in unconscious conditions. Recent research has revealed that specific brain connectivity patterns correlate with conscious states and diminish with loss of consciousness. However, the contribution of these patterns to shaping conscious processing remains unclear. Our study investigates the functional significance of these neural dynamics by examining their impact on participants' ability to process external information during wakefulness. Using fMRI recordings during an auditory detection task and rest, we show that ongoing dynamics are underpinned by brain patterns consistent with those identified in previous research. Detection of auditory stimuli at threshold is specifically improved when the connectivity pattern at stimulus presentation corresponds to patterns characteristic of conscious states. Conversely, the occurrence of these conscious state-associated patterns increases after detection, indicating a mutual influence between ongoing brain dynamics and conscious perception. Our findings suggest that certain brain configurations are more favorable to the conscious processing of external stimuli. Targeting these favorable patterns in patients with consciousness disorders may help identify windows of greater receptivity to the external world, guiding personalized treatments.
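
The key behavioral claim, that detection improves when the connectivity pattern at stimulus onset resembles a conscious-state pattern, can be examined by labeling each trial's pre-stimulus pattern by its closest template and comparing hit rates between groups of trials. A minimal sketch under that assumption, with hypothetical templates and trial patterns (not the authors' analysis pipeline):

```python
import numpy as np
from scipy import stats

def label_trials(trial_patterns, templates):
    """Assign each trial to the template with the highest spatial correlation."""
    corr = np.array([[np.corrcoef(t, tmpl)[0, 1] for tmpl in templates]
                     for t in trial_patterns])
    return corr.argmax(axis=1)

rng = np.random.default_rng(6)
templates = rng.normal(size=(4, 200))          # 4 connectivity patterns; pattern 0 = "conscious-like"
labels_true = rng.integers(0, 4, size=300)
trials = templates[labels_true] + rng.normal(0, 1.0, size=(300, 200))
detected = rng.random(300) < np.where(labels_true == 0, 0.65, 0.45)   # hypothetical hit rates

labels = label_trials(trials, templates)
table = [[np.sum(detected & (labels == 0)), np.sum(~detected & (labels == 0))],
         [np.sum(detected & (labels != 0)), np.sum(~detected & (labels != 0))]]
chi2, p, *_ = stats.chi2_contingency(table)
print(f"hit rate pattern 0: {detected[labels == 0].mean():.2f}, "
      f"others: {detected[labels != 0].mean():.2f}, chi2 = {chi2:.2f}, p = {p:.3g}")
```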


Subject(s)
Acoustic Stimulation , Auditory Perception , Brain , Consciousness , Magnetic Resonance Imaging , Humans , Consciousness/physiology , Auditory Perception/physiology , Male , Female , Adult , Young Adult , Brain/physiology , Brain/diagnostic imaging , Brain Mapping/methods
12.
Front Neural Circuits ; 18: 1431119, 2024.
Article in English | MEDLINE | ID: mdl-39011279

ABSTRACT

Memory-guided motor shaping is necessary for sensorimotor learning. Vocal learning, such as speech development in human babies and song learning in bird juveniles, begins with the formation of an auditory template by hearing adult voices followed by vocally matching to the memorized template using auditory feedback. In zebra finches, the widely used songbird model system, only males develop individually unique stereotyped songs. The production of normal songs relies on auditory experience of tutor's songs (commonly their father's songs) during a critical period in development that consists of orchestrated auditory and sensorimotor phases. "Auditory templates" of tutor songs are thought to form in the brain to guide later vocal learning, while formation of "motor templates" of own song has been suggested to be necessary for the maintenance of stereotyped adult songs. Where these templates are formed in the brain and how they interact with other brain areas to guide song learning, presumably with template-matching error correction, remains to be clarified. Here, we review and discuss studies on auditory and motor templates in the avian brain. We suggest that distinct auditory and motor template systems exist that switch their functions during development.


Subject(s)
Auditory Perception , Learning , Vocalization, Animal , Animals , Vocalization, Animal/physiology , Learning/physiology , Auditory Perception/physiology , Memory/physiology , Finches/physiology , Brain/physiology , Male
14.
Hum Brain Mapp ; 45(11): e26793, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39037186

ABSTRACT

The auditory system can selectively attend to a target source in complex environments, a phenomenon known as the "cocktail party" effect. However, the spatiotemporal dynamics of electrophysiological activity associated with auditory selective spatial attention (ASSA) remain largely unexplored. In this study, single-source and multiple-source paradigms were designed to simulate different auditory environments, and microstate analysis was introduced to reveal the electrophysiological correlates of ASSA. Furthermore, cortical source analysis was employed to reveal the neural activity regions of these microstates. The results showed that five microstates could explain the spatiotemporal dynamics of ASSA, ranging from MS1 to MS5. Notably, MS2 and MS3 showed significantly lower partial properties in multiple-source situations than in single-source situations, whereas MS4 had shorter durations and MS5 longer durations in multiple-source situations than in single-source situations. MS1 showed no significant differences between the two situations. Cortical source analysis showed that the activation regions of these microstates initially transferred from the right temporal cortex to the temporal-parietal cortex, and subsequently to the dorsofrontal cortex. Moreover, the neural activity of the single-source situations was greater than that of the multiple-source situations in MS2 and MS3, correlating with the N1 and P2 components, with the greatest differences observed in the superior temporal gyrus and inferior parietal lobule. These findings suggest that these specific microstates and their associated activation regions may serve as promising substrates for decoding ASSA in complex environments.
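
Microstate analyses of this sort typically cluster the EEG topographies at peaks of the global field power (GFP) and then backfit the resulting maps to every sample to obtain properties such as duration and coverage. The sketch below illustrates those steps on a hypothetical (channels x samples) array, with ordinary k-means standing in for the polarity-invariant modified k-means usually used; it is a simplified illustration, not the authors' pipeline.

```python
import numpy as np
from scipy.signal import find_peaks
from sklearn.cluster import KMeans

def microstates(eeg, n_states=5, sfreq=250):
    """Return microstate maps, per-sample labels, and mean duration (ms) per state.

    eeg : (n_channels, n_samples) average-referenced EEG
    """
    gfp = eeg.std(axis=0)                                   # global field power
    peaks, _ = find_peaks(gfp)                              # cluster only GFP peaks
    topos = eeg[:, peaks].T
    topos = topos / np.linalg.norm(topos, axis=1, keepdims=True)
    km = KMeans(n_clusters=n_states, n_init=10, random_state=0).fit(topos)
    maps = km.cluster_centers_

    # Backfit: label every sample by its best-matching map (polarity ignored)
    all_topos = eeg.T / np.linalg.norm(eeg.T, axis=1, keepdims=True)
    labels = np.abs(all_topos @ maps.T).argmax(axis=1)

    # Mean duration per state: length of contiguous runs of the same label
    durations = {s: [] for s in range(n_states)}
    run_start = 0
    for i in range(1, len(labels) + 1):
        if i == len(labels) or labels[i] != labels[run_start]:
            durations[labels[run_start]].append((i - run_start) / sfreq * 1000)
            run_start = i
    mean_dur = {s: float(np.mean(d)) if d else 0.0 for s, d in durations.items()}
    return maps, labels, mean_dur

rng = np.random.default_rng(7)
fake_eeg = rng.normal(size=(32, 5000))                      # hypothetical 32-channel, 20 s at 250 Hz
_, _, dur = microstates(fake_eeg)
print(dur)
```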


Subject(s)
Attention , Auditory Perception , Electroencephalography , Evoked Potentials, Auditory , Space Perception , Humans , Male , Attention/physiology , Female , Young Adult , Space Perception/physiology , Evoked Potentials, Auditory/physiology , Adult , Auditory Perception/physiology , Acoustic Stimulation , Brain Mapping
15.
Sci Rep ; 14(1): 14575, 2024 06 25.
Article in English | MEDLINE | ID: mdl-38914752

ABSTRACT

People often interact with groups (i.e., ensembles) during social interactions. Given that group-level information is important in navigating social environments, we expect perceptual sensitivity to aspects of groups that are relevant for personal threat as well as social belonging. Most ensemble perception research has focused on visual ensembles, with little research looking at auditory or vocal ensembles. Across four studies, we present evidence that (i) perceivers accurately extract the sex composition of a group from voices alone, (ii) judgments of threat increase concomitantly with the number of men, and (iii) listeners' sense of belonging depends on the number of same-sex others in the group. This work advances our understanding of social cognition, interpersonal communication, and ensemble coding to include auditory information, and reveals people's ability to extract relevant social information from brief exposures to vocalizing groups.


Subject(s)
Voice , Humans , Male , Female , Adult , Sex Ratio , Social Perception , Young Adult , Auditory Perception/physiology , Interpersonal Relations , Social Interaction
16.
Cogn Sci ; 48(6): e13469, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38923050

ABSTRACT

Words that describe sensory perception give insight into how language mediates human experience, and the acquisition of these words is one way to examine how we learn to categorize and communicate sensation. We examine the differential predictions of the typological prevalence hypothesis and embodiment hypothesis regarding the acquisition of perception verbs. Studies 1 and 2 examine the acquisition trajectories of perception verbs across 12 languages using parent questionnaire responses, while Study 3 examines their relative frequencies in English corpus data. We find the vision verbs see and look are acquired first, consistent with the typological prevalence hypothesis. However, for children at 12-23 months, touch verbs, not audition verbs, take precedence in terms of their age of acquisition, frequency in child-produced speech, and frequency in child-directed speech, consistent with the embodiment hypothesis. Later, at 24-35 months, frequency rates are observably different and audition begins to align with what has previously been reported in adult English data. It seems the initial orientation to verbalizing touch over audition in child-caregiver interaction is especially related to the control of physically and socially appropriate behaviors. Taken together, the results indicate children's acquisition of perception verbs arises from the complex interplay of embodiment, language-specific input, and child-directed socialization routines.


Subject(s)
Language Development , Language , Humans , Infant , Female , Male , Child, Preschool , Visual Perception/physiology , Speech , Touch , Auditory Perception/physiology
17.
Elife ; 122024 Jun 21.
Article in English | MEDLINE | ID: mdl-38904659

ABSTRACT

Dynamic attending theory proposes that the ability to track temporal cues in the auditory environment is governed by entrainment, the synchronization between internal oscillations and regularities in external auditory signals. Here, we focused on two key properties of internal oscillators: their preferred rate, the default rate in the absence of any input; and their flexibility, how they adapt to changes in rhythmic context. We developed methods to estimate oscillator properties (Experiment 1) and compared the estimates across tasks and individuals (Experiment 2). Preferred rates, estimated as the stimulus rates with peak performance, showed a harmonic relationship across measurements and were correlated with individuals' spontaneous motor tempo. Estimates from motor tasks were slower than those from the perceptual task, and the degree of slowing was consistent for each individual. Task performance decreased with trial-to-trial changes in stimulus rate, and responses on individual trials were biased toward the preceding trial's stimulus properties. Flexibility, quantified as an individual's ability to adapt to faster-than-previous rates, decreased with age. These findings show domain-specific rate preferences for the assumed oscillatory system underlying rhythm perception and production, and that this system loses its ability to flexibly adapt to changes in the external rhythmic context during aging.
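
The preferred rate described here is estimated as the stimulus rate at which performance peaks. One simple way to obtain such an estimate, assuming hypothetical per-rate accuracy scores, is to fit a smooth curve (here a Gaussian) over rate and take its peak; this is an illustrative sketch, not the authors' estimation procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(rate, peak_rate, width, amp, base):
    return base + amp * np.exp(-0.5 * ((rate - peak_rate) / width) ** 2)

def preferred_rate(rates, accuracy):
    """Fit a Gaussian tuning curve over stimulus rate and return its peak location."""
    p0 = [rates[np.argmax(accuracy)], (rates.max() - rates.min()) / 4,
          accuracy.max() - accuracy.min(), accuracy.min()]
    params, _ = curve_fit(gaussian, rates, accuracy, p0=p0, maxfev=10000)
    return params[0]

# Hypothetical data: performance across stimulus rates (Hz), peaking near 1.6 Hz
rates = np.array([0.8, 1.0, 1.25, 1.6, 2.0, 2.5, 3.2])
acc = np.array([0.62, 0.70, 0.78, 0.84, 0.79, 0.71, 0.63])
print(f"estimated preferred rate: {preferred_rate(rates, acc):.2f} Hz")
```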


Subject(s)
Attention , Auditory Perception , Humans , Adult , Attention/physiology , Female , Male , Young Adult , Aged , Auditory Perception/physiology , Middle Aged , Aging/physiology , Acoustic Stimulation , Adolescent
18.
Sensors (Basel) ; 24(11)2024 May 27.
Article in English | MEDLINE | ID: mdl-38894232

ABSTRACT

Sound localization is a crucial aspect of human auditory perception. VR (virtual reality) technologies provide immersive audio platforms that allow human listeners to experience natural sounds based on their ability to localize sound. However, the sound simulations generated by these platforms, which are based on a general head-related transfer function (HRTF), often lack accuracy in terms of individual sound perception and localization due to significant individual differences in this function. In this study, we investigated the disparities between the locations of sound sources as perceived by users and the locations generated by the platform, and asked whether users can be trained to adapt to the platform-generated sound sources. We used the Microsoft HoloLens 2 virtual platform and collected data from 12 subjects over six separate training sessions arranged across 2 weeks. We employed three modes of training to assess their effects on sound localization, in particular the impact of multimodal error guidance (visual and sound guidance in combination with kinesthetic/postural guidance) on the effectiveness of the training. We analyzed the collected data in terms of the training effect between pre- and post-sessions, as well as the retention effect between two separate sessions, using subject-wise paired statistics. Our findings indicate that the training effect between pre- and post-sessions was statistically significant, in particular when kinesthetic/postural guidance was combined with visual and sound guidance. Conversely, visual error guidance alone was largely ineffective. In contrast, we found no statistically significant retention effect between separate sessions for any of the three error-guidance modes over the 2-week training period. These findings can contribute to the improvement of VR technologies by ensuring they are designed to optimize human sound localization abilities.


Subject(s)
Sound Localization , Humans , Sound Localization/physiology , Female , Male , Adult , Virtual Reality , Young Adult , Auditory Perception/physiology , Sound
19.
Neuroimage ; 296: 120686, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-38871037

ABSTRACT

Centromedian nucleus (CM) is one of several intralaminar nuclei of the thalamus and is thought to be involved in consciousness, arousal, and attention. CM has been suggested to play a key role in the control of attention, by regulating the flow of information to different brain regions such as the ascending reticular system, basal ganglia, and cortex. While the neurophysiology of attention in visual and auditory systems has been studied in animal models, combined single unit and LFP recordings in humans have not, to our knowledge, been reported. Here, we recorded neuronal activity in the CM nucleus in 11 patients prior to insertion of deep brain stimulation electrodes for the treatment of epilepsy while subjects performed an auditory attention task. Patients were requested to attend and count the infrequent (p = 0.2) odd or "deviant" tones, ignore the frequent standard tones and report the total number of deviant tones at trial completion. Spikes were discriminated, and LFPs were band-pass filtered (5-45 Hz). Average peristimulus time histograms and spectra were constructed by aligning on tone onsets and statistically compared. The firing rate of CM neurons showed selective, multi-phasic responses to deviant tones in 81% of the tested neurons. Local field potential analysis showed selective beta and low gamma (13-45 Hz) modulations in response to deviant tones, also in a multi-phasic pattern. The current study demonstrates that CM neurons are under top-down control and participate in the selective processing during auditory attention and working memory. These results, taken together, implicate the CM in selective auditory attention and working memory and support a role of beta and low gamma oscillatory activity in cognitive processes. It also has potential implications for DBS therapy for epilepsy and non-motor symptoms of Parkinson's disease (PD), such as apathy and other disorders of attention.
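
The peristimulus time histograms described here are constructed by aligning spike times on tone onsets, binning, and averaging across trials, typically comparing deviant with standard tones. A minimal sketch of that construction on hypothetical spike and onset times (not the authors' recordings):

```python
import numpy as np

def psth(spike_times, event_times, window=(-0.2, 0.6), bin_width=0.01):
    """Average firing rate (spikes/s) around events.

    spike_times : 1-D array of spike times (s)
    event_times : 1-D array of stimulus onsets (s)
    """
    edges = np.arange(window[0], window[1] + bin_width, bin_width)
    counts = np.zeros(len(edges) - 1)
    for t0 in event_times:
        rel = spike_times[(spike_times >= t0 + window[0]) & (spike_times < t0 + window[1])] - t0
        counts += np.histogram(rel, bins=edges)[0]
    return edges[:-1], counts / (len(event_times) * bin_width)

# Hypothetical data: a unit that fires extra spikes after deviant tones
rng = np.random.default_rng(8)
deviants = np.sort(rng.uniform(0, 600, 40))
standards = np.sort(rng.uniform(0, 600, 160))
background = np.sort(rng.uniform(0, 600, 3000))
evoked = np.concatenate([deviants + rng.uniform(0.05, 0.25, 40) for _ in range(3)])
spikes = np.sort(np.concatenate([background, evoked]))

t, rate_dev = psth(spikes, deviants)
_, rate_std = psth(spikes, standards)
print(f"peak rate deviant: {rate_dev.max():.1f} Hz, standard: {rate_std.max():.1f} Hz")
```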


Subject(s)
Attention , Auditory Perception , Intralaminar Thalamic Nuclei , Memory, Short-Term , Neurons , Humans , Attention/physiology , Male , Female , Memory, Short-Term/physiology , Adult , Auditory Perception/physiology , Intralaminar Thalamic Nuclei/physiology , Middle Aged , Neurons/physiology , Young Adult , Acoustic Stimulation , Deep Brain Stimulation/methods
20.
Cereb Cortex ; 34(6)2024 Jun 04.
Article in English | MEDLINE | ID: mdl-38897817

ABSTRACT

Recent work suggests that the adult human brain is very adaptable when it comes to sensory processing. In this context, it has also been suggested that structural "blueprints" may fundamentally constrain neuroplastic change, e.g. in response to sensory deprivation. Here, we trained 12 blind participants and 14 sighted participants in echolocation over a 10-week period, and used MRI in a pre-post design to measure functional and structural brain changes. We found that blind participants and sighted participants together showed a training-induced increase in activation in left and right V1 in response to echoes, a finding difficult to reconcile with the view that sensory cortex is strictly organized by modality. Further, blind participants and sighted participants showed a training-induced increase in activation in right A1 in response to sounds per se (i.e. not echo-specific), and this was accompanied by an increase in gray matter density in right A1 in blind participants and in adjacent acoustic areas in sighted participants. The similarity in functional results between sighted participants and blind participants is consistent with the idea that reorganization may be governed by similar principles in the two groups, yet our structural analyses also showed differences between the groups suggesting that a more nuanced view may be required.


Subject(s)
Auditory Cortex , Blindness , Magnetic Resonance Imaging , Visual Cortex , Humans , Blindness/physiopathology , Blindness/diagnostic imaging , Male , Adult , Female , Auditory Cortex/diagnostic imaging , Auditory Cortex/physiology , Auditory Cortex/physiopathology , Visual Cortex/diagnostic imaging , Visual Cortex/physiology , Young Adult , Neuronal Plasticity/physiology , Acoustic Stimulation , Brain Mapping , Middle Aged , Auditory Perception/physiology , Echolocation/physiology