1 - 20 of 48,064
1.
J Acoust Soc Am ; 155(6): 3615-3626, 2024 Jun 01.
Article En | MEDLINE | ID: mdl-38833283

The current work investigated the effects of mass-loading the eardrum on wideband absorbance in humans. A non-invasive approach to mass-loading the eardrum was used in which water was placed on the eardrum via ear canal access. The mass-loaded absorbance was compared to absorbance measured for two alternative middle ear states: normal and stiffened. To stiffen the ear, subjects pressurized the middle ear through either exsufflation or insufflation concurrent with Eustachian tube opening. Mass-loading the eardrum was hypothesized to reduce high-frequency absorbance, whereas pressurizing the middle ear was hypothesized to reduce low- to mid-frequency absorbance. Linear discriminant analysis classification was performed to evaluate the utility of absorbance in differentiating between conditions. Water on the eardrum reduced absorbance over the 0.7- to 6-kHz frequency range and increased absorbance at frequencies below approximately 0.5 kHz; these changes approximated the pattern of changes reported in both hearing thresholds and stapes motion upon mass-loading the eardrum. Pressurizing the middle ear reduced the absorbance over the 0.125- to 4-kHz frequency range. Several classification models based on the absorbance in two or three frequency bands had accuracy exceeding 88%.
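As an illustration of the classification step described above, the following is a minimal sketch (with synthetic data, not the study's measurements) of a linear discriminant analysis classifier separating a normal from a mass-loaded middle-ear state using absorbance averaged in two hypothetical frequency bands.

```python
# Hypothetical sketch: classifying middle-ear condition (normal vs. mass-loaded)
# from absorbance averaged in two frequency bands using linear discriminant
# analysis. Data here are synthetic placeholders, not values from the study.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 40  # ears per condition (arbitrary)

# Feature 1: mean absorbance 0.7-6 kHz; Feature 2: mean absorbance below 0.5 kHz.
normal = np.column_stack([rng.normal(0.7, 0.05, n), rng.normal(0.3, 0.05, n)])
loaded = np.column_stack([rng.normal(0.5, 0.05, n), rng.normal(0.4, 0.05, n)])

X = np.vstack([normal, loaded])
y = np.array([0] * n + [1] * n)  # 0 = normal, 1 = mass-loaded

clf = LinearDiscriminantAnalysis()
acc = cross_val_score(clf, X, y, cv=5).mean()
print(f"cross-validated accuracy: {acc:.2f}")
```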


Ear, Middle , Pressure , Tympanic Membrane , Humans , Male , Female , Tympanic Membrane/physiology , Tympanic Membrane/anatomy & histology , Ear, Middle/physiology , Ear, Middle/anatomy & histology , Adult , Young Adult , Elasticity , Acoustic Stimulation , Eustachian Tube/physiology , Eustachian Tube/anatomy & histology , Stapes/physiology , Water , Discriminant Analysis
2.
J Acoust Soc Am ; 155(6): 3639-3653, 2024 Jun 01.
Article En | MEDLINE | ID: mdl-38836771

The estimation of auditory evoked potentials requires deconvolution when the duration of the responses to be recovered exceeds the inter-stimulus interval. Based on least squares deconvolution, in this article we extend the procedure to the case of a multi-response convolutional model, that is, a model in which different categories of stimulus are expected to evoke different responses. The computational cost of the multi-response deconvolution significantly increases with the number of responses to be deconvolved, which restricts its applicability in practical situations. In order to alleviate this restriction, we propose to perform the multi-response deconvolution in a reduced representation space associated with a latency-dependent filtering of auditory responses, which provides a significant dimensionality reduction. We demonstrate the practical viability of the multi-response deconvolution with auditory responses evoked by clicks presented at different levels and categorized according to their stimulation level. The multi-response deconvolution applied in a reduced representation space provides the least squares estimation of the responses with a reasonable computational load. MATLAB/Octave code implementing the proposed procedure is included as supplementary material.
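The following is a minimal sketch of multi-response least-squares deconvolution under simplifying assumptions (two stimulus categories, synthetic responses, no reduced representation space): the recording is modelled as the sum of category-specific responses convolved with their onset trains, and all responses are recovered jointly by least squares. It is not the authors' implementation.

```python
# Minimal sketch of multi-response least-squares deconvolution. Sampling rate,
# response length, onset counts and the synthetic waveforms are assumptions.
import numpy as np

fs = 1000                    # sampling rate (Hz), assumed
n_samples = 20000            # length of the recording
resp_len = 300               # samples per deconvolved response (300 ms)

rng = np.random.default_rng(1)
k_t = np.arange(resp_len)
true = [np.sin(2 * np.pi * 10 * k_t / fs) * np.exp(-k_t / 80),
        0.5 * np.sin(2 * np.pi * 7 * k_t / fs) * np.exp(-k_t / 100)]

# Random stimulus onsets for two categories (inter-stimulus interval shorter
# than the response, so the raw average would be distorted without deconvolution).
onsets = [np.sort(rng.choice(n_samples - resp_len, 80, replace=False)) for _ in range(2)]

# Build the convolution (design) matrix: one block of shifted-identity columns
# per category, then solve the stacked system in the least-squares sense.
X = np.zeros((n_samples, 2 * resp_len))
y = rng.normal(0, 0.5, n_samples)            # background noise
for k in range(2):
    for t in onsets[k]:
        X[t:t + resp_len, k * resp_len:(k + 1) * resp_len] += np.eye(resp_len)
        y[t:t + resp_len] += true[k]

est, *_ = np.linalg.lstsq(X, y, rcond=None)
resp_a, resp_b = est[:resp_len], est[resp_len:]   # recovered response per category
```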


Acoustic Stimulation , Evoked Potentials, Auditory , Evoked Potentials, Auditory/physiology , Humans , Acoustic Stimulation/methods , Male , Adult , Electroencephalography/methods , Female , Least-Squares Analysis , Young Adult , Signal Processing, Computer-Assisted , Reaction Time , Auditory Perception/physiology
3.
Sci Rep ; 14(1): 13039, 2024 06 06.
Article En | MEDLINE | ID: mdl-38844793

Sleep onset insomnia is a pervasive problem that contributes significantly to the poor health outcomes associated with insufficient sleep. Auditory stimuli phase-locked to slow-wave sleep oscillations have been shown to augment deep sleep, but it is unknown whether a similar approach can be used to accelerate sleep onset. The present randomized controlled crossover trial enrolled adults with objectively verified sleep onset latencies (SOLs) greater than 30 min to test the effect of auditory stimuli delivered at specific phases of participants' alpha oscillations prior to sleep onset. During the intervention week, participants wore an electroencephalogram (EEG)-enabled headband that delivered acoustic pulses timed to arrive anti-phase with alpha for 30 min (Stimulation). During the Sham week, the headband silently recorded EEG. The primary outcome was SOL determined by blinded scoring of EEG records. For the 21 subjects included in the analyses, stimulation had a significant effect on SOL according to a linear mixed effects model (p = 0.0019), and weekly average SOL decreased by 10.5 ± 15.9 min (29.3 ± 44.4%). These data suggest that phase-locked acoustic stimulation can be a viable alternative to pharmaceuticals to accelerate sleep onset in individuals with prolonged sleep onset latencies. Trial Registration: This trial was first registered on clinicaltrials.gov on 24/02/2023 under the name Sounds Locked to ElectroEncephalogram Phase For the Acceleration of Sleep Onset Time (SLEEPFAST), and assigned registry number NCT05743114.
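For readers unfamiliar with phase-targeted stimulation, the sketch below illustrates offline, on synthetic EEG, how an alpha-band phase can be estimated and anti-phase samples identified. The study's real-time headband necessarily forecasts phase rather than computing it after the fact, and the filter settings here are assumptions.

```python
# Illustrative (offline) sketch of phase-targeted stimulation: band-pass the EEG
# around alpha, take the Hilbert phase, and mark samples whose phase is opposite
# to the oscillation peak (phase ~ pi). This shows only the phase computation,
# not a real-time forecasting algorithm. All parameters are assumed.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 256                                     # sampling rate (Hz), assumed
t = np.arange(0, 10, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.default_rng(2).normal(size=t.size)

b, a = butter(4, [8, 12], btype="bandpass", fs=fs)
alpha = filtfilt(b, a, eeg)                  # alpha-band component
phase = np.angle(hilbert(alpha))             # instantaneous phase, -pi..pi

# Candidate trigger samples: phase within a small window around +/-pi (anti-phase).
anti_phase_idx = np.where(np.abs(np.abs(phase) - np.pi) < 0.1)[0]
trigger_times = anti_phase_idx / fs          # seconds at which a pulse would arrive
```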


Acoustic Stimulation , Electroencephalography , Sleep Initiation and Maintenance Disorders , Humans , Male , Female , Adult , Sleep Initiation and Maintenance Disorders/therapy , Sleep Initiation and Maintenance Disorders/physiopathology , Acoustic Stimulation/methods , Middle Aged , Cross-Over Studies , Treatment Outcome , Alpha Rhythm/physiology
4.
Trends Hear ; 28: 23312165241259704, 2024.
Article En | MEDLINE | ID: mdl-38835268

The use of in-situ audiometry for hearing aid fitting is appealing due to its reduced resource and equipment requirements compared to standard approaches employing conventional audiometry alongside real-ear measures. However, its validity has been a subject of debate, as previous studies noted differences between hearing thresholds measured using conventional and in-situ audiometry. The differences were particularly notable for open-fit hearing aids, attributed to low-frequency leakage caused by the vent. Here, in-situ audiometry was investigated for six receiver-in-canal hearing aids from different manufacturers through three experiments. In Experiment I, the hearing aid gain was measured to investigate whether corrections were applied to the prescribed target gain. In Experiment II, the in-situ stimuli were recorded to investigate whether corrections were directly incorporated into the delivered in-situ stimulus. Finally, in Experiment III, hearing thresholds were measured using in-situ and conventional audiometry in real patients wearing open-fit hearing aids. Results indicated that (1) the hearing aid gain remained unaffected whether measured with in-situ or conventional audiometry for all open-fit measurements, (2) the in-situ stimuli were adjusted by up to 30 dB at frequencies below 1000 Hz for all open-fit hearing aids except one, whose manufacturer also recommends the use of closed domes for all in-situ measurements, and (3) the mean interparticipant threshold difference fell within 5 dB for frequencies between 250 and 6000 Hz. The results clearly indicate that in-situ thresholds measured with modern hearing aids align (within 5 dB) with conventionally measured thresholds, demonstrating the potential of in-situ audiometry for remote hearing care.


Auditory Threshold , Hearing Aids , Humans , Acoustic Stimulation , Prosthesis Fitting/methods , Reproducibility of Results , Audiometry/methods , Audiometry, Pure-Tone , Hearing Loss/diagnosis , Hearing Loss/rehabilitation , Hearing Loss/physiopathology , Hearing , Predictive Value of Tests , Persons With Hearing Impairments/rehabilitation , Persons With Hearing Impairments/psychology , Equipment Design , Male , Female
5.
Codas ; 36(4): e20230111, 2024.
Article En | MEDLINE | ID: mdl-38836828

PURPOSE: To analyze the effects of auditory stimulation on heart rate variability (HRV) indices in healthy individuals with normal hearing and with hearing loss, regardless of type and/or grade, by means of a systematic review. RESEARCH STRATEGIES: This is a systematic review with a meta-analysis that addresses the following question: in healthy individuals with normal hearing and/or with hearing loss, what are the effects of auditory stimulation on HRV indices in comparison to silence? We consulted the Cochrane Library, Embase, LILACS, PubMed, Web of Science, and Scopus databases and the gray literature (Google Scholar, OpenGrey, and ProQuest). SELECTION CRITERIA: There were no restrictions as to period or language of publication. DATA ANALYSIS: We identified 451 records, an additional 261 in the gray literature, and five studies in a search through the references, resulting in a total of 717 records, with 171 duplicate records. After screening the titles and abstracts of 546 studies, we excluded 490 and considered 56 studies in full to assess their eligibility. RESULTS: Nine of these studies were included in the systematic review, eight of which were suitable for the meta-analysis. CONCLUSION: It is suggested that auditory stimulation may influence the RMSSD, pNN50, SDNN, RRTri and SD2 indices of HRV in healthy adults with normal hearing.
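For reference, the time-domain HRV indices named in the conclusion can be computed from an RR-interval series as in the sketch below (synthetic RR intervals; RRTri and SD2, which require histogram and Poincaré analyses, are omitted).

```python
# Sketch of time-domain HRV indices (SDNN, RMSSD, pNN50) from RR intervals in
# milliseconds. The RR series is synthetic, purely for illustration.
import numpy as np

rr = np.random.default_rng(3).normal(800, 50, 300)   # synthetic RR intervals (ms)

sdnn = np.std(rr, ddof=1)                            # SD of all RR intervals
diff_rr = np.diff(rr)
rmssd = np.sqrt(np.mean(diff_rr ** 2))               # root mean square of successive differences
pnn50 = 100 * np.mean(np.abs(diff_rr) > 50)          # % of successive differences > 50 ms

print(f"SDNN={sdnn:.1f} ms, RMSSD={rmssd:.1f} ms, pNN50={pnn50:.1f}%")
```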


Acoustic Stimulation , Hearing Loss , Heart Rate , Humans , Heart Rate/physiology , Acoustic Stimulation/methods , Hearing Loss/physiopathology , Hearing/physiology
6.
J Acoust Soc Am ; 155(6): 3589-3599, 2024 Jun 01.
Article En | MEDLINE | ID: mdl-38829154

Frequency importance functions (FIFs) for simulated bimodal hearing were derived using sentence perception scores measured in quiet and noise. Acoustic hearing was simulated using low-pass filtering. Electric hearing was simulated using a six-channel vocoder with three input frequency ranges, resulting in overlap, meet, and gap maps relative to the acoustic cutoff frequency. Spectral holes in the speech spectra were created within electric stimulation by setting the amplitudes of channels to zero. FIFs were significantly different between frequency maps. In quiet, the three FIFs were similar, with weights gradually increasing for channels 5 and 6 compared to the first three channels. However, the most and least weighted channels varied slightly depending on the maps. In noise, the patterns of the three FIFs were similar to those in quiet, with weights increasing more steeply for channels 5 and 6 compared to the first four channels. Thus, channels 5 and 6 contributed to speech perception the most, while channels 1 and 2 contributed the least, regardless of the frequency map. Results suggest that the contribution of cochlear implant frequency bands to bimodal speech perception depends on the degree of frequency overlap between acoustic and electric stimulation and on whether noise is absent or present.
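The following is a generic sketch of a noise vocoder of the type used to simulate electric hearing, with assumed band edges rather than the overlap/meet/gap maps of the study; a spectral hole is introduced by zeroing one channel's amplitude.

```python
# Rough sketch of a six-channel noise vocoder: band-pass the input into channels,
# extract each envelope, and use it to modulate band-limited noise. Band edges,
# filter orders and the stand-in "speech" signal are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def vocode(speech, fs, edges, dropped=()):
    out = np.zeros_like(speech)
    noise = np.random.default_rng(4).normal(size=speech.size)
    for ch, (lo, hi) in enumerate(zip(edges[:-1], edges[1:])):
        if ch in dropped:                                  # spectral hole: channel set to zero
            continue
        b, a = butter(4, [lo, hi], btype="bandpass", fs=fs)
        env = np.abs(hilbert(filtfilt(b, a, speech)))      # channel envelope
        out += env * filtfilt(b, a, noise)                 # envelope-modulated noise carrier
    return out

fs = 16000
speech = np.random.default_rng(5).normal(size=fs)          # stand-in for a speech signal
edges = np.geomspace(200, 7000, 7)                         # 6 logarithmically spaced bands
simulated_electric = vocode(speech, fs, edges, dropped={2})
```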


Acoustic Stimulation , Cochlear Implants , Electric Stimulation , Noise , Speech Perception , Humans , Noise/adverse effects , Cochlear Implantation/instrumentation , Persons With Hearing Impairments/psychology , Persons With Hearing Impairments/rehabilitation , Perceptual Masking , Adult
7.
J Neurodev Disord ; 16(1): 28, 2024 Jun 03.
Article En | MEDLINE | ID: mdl-38831410

BACKGROUND: In the search for objective tools to quantify neural function in Rett Syndrome (RTT), which are crucial in the evaluation of therapeutic efficacy in clinical trials, recordings of sensory-perceptual functioning using event-related potential (ERP) approaches have emerged as potentially powerful tools. Considerable work points to highly anomalous auditory evoked potentials (AEPs) in RTT. However, an assumption of the typical signal-averaging method used to derive these measures is "stationarity" of the underlying responses - i.e. neural responses to each input are highly stereotyped. An alternate possibility is that responses to repeated stimuli are highly variable in RTT. If so, this will significantly impact the validity of assumptions about underlying neural dysfunction, and likely lead to overestimation of underlying neuropathology. To assess this possibility, analyses at the single-trial level assessing signal-to-noise ratios (SNR), inter-trial variability (ITV) and inter-trial phase coherence (ITPC) are necessary. METHODS: AEPs were recorded to simple 100 Hz tones from 18 RTT and 27 age-matched typically developing (TD) controls (ages: 6-22 years). We applied standard AEP averaging, as well as measures of neuronal reliability at the single-trial level (i.e. SNR, ITV, ITPC). To separate signal-carrying components from non-neural noise sources, we also applied a denoising source separation (DSS) algorithm and then repeated the reliability measures. RESULTS: Substantially increased ITV, lower SNRs, and reduced ITPC were observed in auditory responses of RTT participants, supporting a "neural unreliability" account. Application of the DSS technique made it clear that non-neural noise sources contribute to overestimation of the extent of processing deficits in RTT. Post-DSS, ITV measures were substantially reduced, so much so that pre-DSS ITV differences between RTT and TD populations were no longer detected. In the case of SNR and ITPC, DSS substantially improved these estimates in the RTT population, but robust differences between RTT and TD were still fully evident. CONCLUSIONS: To accurately represent the degree of neural dysfunction in RTT using the ERP technique, a consideration of response reliability at the single-trial level is highly advised. Non-neural sources of noise lead to overestimation of the degree of pathological processing in RTT, and denoising source separation techniques during signal processing substantially ameliorate this issue.
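As a reference for the single-trial measures mentioned above, the sketch below computes ITPC and ITV (plus a crude SNR proxy) on synthetic trials; band limits and data are illustrative assumptions, not the paper's pipeline.

```python
# Sketch of single-trial reliability measures: inter-trial phase coherence (ITPC)
# in one band and inter-trial variability (ITV) as the across-trial SD of the
# single-trial waveforms. Trials are synthetic (evoked 4 Hz component + noise).
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs, n_trials, n_samples = 512, 100, 512
rng = np.random.default_rng(6)
t = np.arange(n_samples) / fs
trials = np.sin(2 * np.pi * 4 * t) + rng.normal(0, 1.5, (n_trials, n_samples))

b, a = butter(4, [2, 8], btype="bandpass", fs=fs)
phase = np.angle(hilbert(filtfilt(b, a, trials, axis=1), axis=1))

itpc = np.abs(np.mean(np.exp(1j * phase), axis=0))             # 1 = perfectly phase-locked
itv = np.std(trials, axis=0)                                   # higher = less reliable response
snr = np.abs(trials.mean(axis=0)) / (itv / np.sqrt(n_trials))  # crude per-sample SNR proxy
```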


Electroencephalography , Evoked Potentials, Auditory , Rett Syndrome , Humans , Rett Syndrome/physiopathology , Rett Syndrome/complications , Adolescent , Female , Evoked Potentials, Auditory/physiology , Child , Young Adult , Auditory Perception/physiology , Reproducibility of Results , Acoustic Stimulation , Male , Signal-To-Noise Ratio , Adult
8.
Trends Hear ; 28: 23312165241260029, 2024.
Article En | MEDLINE | ID: mdl-38831646

The extent to which active noise cancelation (ANC), when combined with hearing assistance, can improve speech intelligibility in noise is not well understood. One possible source of benefit is ANC's ability to reduce the sound level of the direct (i.e., vent-transmitted) path. This reduction lowers the "floor" imposed by the direct path, thereby allowing any increases to the signal-to-noise ratio (SNR) created in the amplified path to be "realized" at the eardrum. Here we used a modeling approach to estimate this benefit. We compared pairs of simulated hearing aids that differ only in terms of their ability to provide ANC and computed intelligibility metrics on their outputs. The difference in metric scores between simulated devices is termed the "ANC Benefit." These simulations show that ANC Benefit increases as (1) the environmental sound level increases, (2) the ability of the hearing aid to improve SNR increases, (3) the strength of the ANC increases, and (4) the hearing loss severity decreases. The predicted size of the ANC Benefit can be substantial. For a moderate hearing loss, the model predicts improvement in intelligibility metrics of >30% when environments are moderately loud (>70 dB SPL) and devices are moderately capable of increasing SNR (by >4 dB). It appears that ANC can be a critical ingredient in hearing devices that attempt to improve SNR in loud environments. ANC will become more and more important as advanced SNR-improving algorithms (e.g., artificial intelligence speech enhancement) are included in hearing devices.
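A back-of-the-envelope sketch of the direct-path "floor" argument is given below: the eardrum signal is treated as an incoherent power sum of the direct and amplified paths, and attenuating the direct path lets more of the amplified path's SNR improvement be realized. The specific levels and the power-summation assumption are illustrative, not the authors' model.

```python
# Illustration (not the authors' model) of why attenuating the vent-transmitted
# "direct" path lets an SNR improvement in the amplified path be realised at the
# eardrum. Levels are arbitrary assumptions; paths add incoherently in power.
import numpy as np

def db_to_pow(x):
    return 10 ** (x / 10)

def eardrum_snr(env_snr_db, aid_snr_gain_db, aid_gain_db, anc_atten_db):
    """Combine direct and amplified paths by power addition and return SNR (dB)."""
    # Direct path: environment attenuated by ANC, SNR unchanged.
    sig_direct = db_to_pow(0 - anc_atten_db)
    noise_direct = db_to_pow(-env_snr_db - anc_atten_db)
    # Amplified path: overall gain plus an SNR improvement from processing.
    sig_amp = db_to_pow(aid_gain_db)
    noise_amp = db_to_pow(aid_gain_db - env_snr_db - aid_snr_gain_db)
    return 10 * np.log10((sig_direct + sig_amp) / (noise_direct + noise_amp))

snr_no_anc = eardrum_snr(env_snr_db=0, aid_snr_gain_db=4, aid_gain_db=0, anc_atten_db=0)
snr_anc = eardrum_snr(env_snr_db=0, aid_snr_gain_db=4, aid_gain_db=0, anc_atten_db=10)
print(f"effective SNR without ANC: {snr_no_anc:.1f} dB, with ANC: {snr_anc:.1f} dB")
```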


Hearing Aids , Noise , Perceptual Masking , Signal-To-Noise Ratio , Speech Intelligibility , Speech Perception , Humans , Noise/adverse effects , Computer Simulation , Acoustic Stimulation , Correction of Hearing Impairment/instrumentation , Persons With Hearing Impairments/rehabilitation , Persons With Hearing Impairments/psychology , Hearing Loss/diagnosis , Hearing Loss/rehabilitation , Hearing Loss/physiopathology , Equipment Design , Signal Processing, Computer-Assisted
9.
J Acoust Soc Am ; 155(6): 3715-3729, 2024 Jun 01.
Article En | MEDLINE | ID: mdl-38847595

Emerging technologies of virtual reality (VR) and augmented reality (AR) are enhancing soundscape research, potentially producing new insights by enabling controlled conditions while preserving the context of a virtual gestalt within the soundscape concept. This study explored the ecological validity of virtual environments for subjective evaluations in soundscape research, focusing on the authenticity of virtual audio-visual environments for reproducibility. Different technologies for creating and reproducing virtual environments were compared, including field recording, simulated VR, AR, and audio-only presentation, in two audio-visual reproduction settings, a head-mounted display with head-tracked headphones and a VR lab with head-locked headphones. Via a series of soundwalk- and lab-based experiments, the results indicate that field recording technologies provided the most authentic audio-visual environments, followed by AR, simulated VR, and audio-only approaches. The authenticity level influenced subjective evaluations of virtual environments, e.g., arousal/eventfulness and pleasantness. The field recording and AR-based technologies closely matched the on-site soundwalk ratings in arousal, while the other approaches scored lower. All the approaches had significantly lower pleasantness ratings compared to on-site evaluations. The choice of audio-visual reproduction technology did not significantly impact the evaluations. Overall, the results suggest virtual environments with high authenticity can be useful for future soundscape research and design.


Auditory Perception , Virtual Reality , Humans , Female , Male , Adult , Young Adult , Augmented Reality , Acoustic Stimulation , Sound , Reproducibility of Results
10.
PLoS One ; 19(5): e0299698, 2024.
Article En | MEDLINE | ID: mdl-38722993

Misophonia, a heightened aversion to certain sounds, turns common cognitive and social exercises (e.g., paying attention during a lecture near a pen-clicking classmate, coexisting at the dinner table with a food-chomping relative) into challenging endeavors. How does exposure to triggering sounds impact cognitive and social judgments? We investigated this question in a sample of 65 participants (26 with misophonia, 39 controls) from the general population. In Phase 1, participants saw faces paired with auditory stimuli while completing a gender judgment task, then reported sound discomfort and identification. In Phase 2, participants saw these same faces along with novel ones and reported face likeability and memory. For both oral and non-oral triggers, misophonic participants gave higher discomfort ratings than controls did, especially when identification was correct, and responded more slowly in the gender judgment task. Misophonic participants rated likeability lower than controls did for faces they remembered as paired with high-discomfort sounds, and face memory was worse overall for faces originally paired with high-discomfort sounds. Altogether, these results suggest that misophonic individuals show impairments in social and cognitive judgments if they must endure discomforting sounds. This experiment helps us better understand the day-to-day impact of misophonia and encourages the use of individualized triggers in future studies.


Cognition , Judgment , Humans , Male , Female , Cognition/physiology , Adult , Young Adult , Acoustic Stimulation , Memory/physiology
11.
Article En | MEDLINE | ID: mdl-38691431

In the hippocampus, synaptic plasticity and rhythmic oscillations reflect the cytological basis and the intermediate level of cognition, respectively. Transcranial ultrasound stimulation (TUS) has demonstrated the ability to elicit changes in neural responses. However, the modulatory effect of TUS on synaptic plasticity and rhythmic oscillations has been insufficient in existing studies, which may be attributed to the fact that TUS acts mainly through mechanical forces. To enhance the modulatory effect on synaptic plasticity and rhythmic oscillations, transcranial magneto-acoustic stimulation (TMAS), which induces a coupled electric field together with the ultrasound field of TUS, was applied. The modulatory effects of TMAS and TUS with a pulse repetition frequency of 100 Hz were compared. TMAS/TUS was performed on C57 mice for 7 days at two ultrasound intensities (3 W/cm2 and 5 W/cm2). Behavioral tests, long-term potentiation (LTP) measurements, and in vivo local field potential recordings were performed to evaluate the modulatory effects of TUS/TMAS on cognition, synaptic plasticity, and rhythmic oscillations. Protein expression analyses based on western blotting were used to investigate the underlying mechanisms of these beneficial effects. At 5 W/cm2, TMAS-induced LTP was 113.4% of that in the sham group and 110.5% of that in the TUS group. Moreover, the relative power of high-gamma oscillations (50-100 Hz) in the TMAS group (1.060 ± 0.155%) was markedly higher than that in the TUS group (0.560 ± 0.114%) and the sham group (0.570 ± 0.088%). TMAS significantly enhanced the synchronization of theta and gamma oscillations as well as theta-gamma cross-frequency coupling, whereas TUS did not show comparable enhancements. TMAS thus provides an enhanced effect for modulating synaptic plasticity and rhythmic oscillations in the hippocampus.
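As an illustration of theta-gamma cross-frequency coupling, the sketch below computes a mean-vector-length coupling measure on a synthetic local field potential; this is one generic method, not necessarily the analysis pipeline used in the paper.

```python
# Sketch of a phase-amplitude coupling measure (mean vector length) for
# theta-gamma coupling. The "LFP" is synthetic, with gamma amplitude explicitly
# modulated by theta phase so that coupling is present by construction.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(7)
theta = np.sin(2 * np.pi * 7 * t)
lfp = theta + (1 + theta) * 0.3 * np.sin(2 * np.pi * 70 * t) + 0.2 * rng.normal(size=t.size)

def band(x, lo, hi):
    b, a = butter(4, [lo, hi], btype="bandpass", fs=fs)
    return filtfilt(b, a, x)

theta_phase = np.angle(hilbert(band(lfp, 4, 12)))
gamma_amp = np.abs(hilbert(band(lfp, 50, 100)))

mvl = np.abs(np.mean(gamma_amp * np.exp(1j * theta_phase)))  # coupling strength
print(f"mean vector length: {mvl:.3f}")
```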


Acoustic Stimulation , Hippocampus , Mice, Inbred C57BL , Transcranial Magnetic Stimulation , Animals , Mice , Transcranial Magnetic Stimulation/methods , Male , Hippocampus/physiology , Neuronal Plasticity/physiology , Cognition/physiology , Long-Term Potentiation/physiology , Ultrasonic Waves , Theta Rhythm/physiology
12.
Sci Rep ; 14(1): 10518, 2024 05 08.
Article En | MEDLINE | ID: mdl-38714827

Previous work assessing the effect of additive noise on the postural control system has found a positive effect of additive white noise on postural dynamics. This study covers two separate experiments, run sequentially, designed to better understand how the structure of the additive noise signal affects postural dynamics, while also furthering our knowledge of how the intensity of the auditory noise stimulation may elicit this phenomenon. Across the two experiments, we introduced three auditory noise stimuli of varying structure (white, pink, and brown noise). Experiment 1 presented the stimuli at 35 dB, while Experiment 2 presented them at 75 dB. Our findings demonstrate a decrease in the variability of the postural control system regardless of the structure of the noise signal presented, but only for high-intensity auditory stimulation.
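For context, white, pink, and brown noise differ in how power falls off with frequency (0, 3, and 6 dB per octave, respectively); a minimal sketch of generating the three colours by spectral shaping is shown below (sampling rate and normalisation are arbitrary choices).

```python
# Sketch: generate white, pink and brown noise by shaping a white spectrum as
# 1/f**(alpha/2) in amplitude (alpha = 0, 1, 2), one common construction.
import numpy as np

def colored_noise(alpha, n, fs=44100, seed=0):
    rng = np.random.default_rng(seed)
    white = rng.normal(size=n)
    spec = np.fft.rfft(white)
    freqs = np.fft.rfftfreq(n, 1 / fs)
    scale = np.ones_like(freqs)
    scale[1:] = freqs[1:] ** (-alpha / 2)        # leave the DC bin untouched
    shaped = np.fft.irfft(spec * scale, n)
    return shaped / np.max(np.abs(shaped))       # normalise before setting playback level

white = colored_noise(0, 44100)   # flat spectrum
pink = colored_noise(1, 44100)    # power falls 3 dB/octave
brown = colored_noise(2, 44100)   # power falls 6 dB/octave
```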


Acoustic Stimulation , Noise , Humans , Female , Male , Adult , Young Adult , Postural Balance/physiology , Color , Posture/physiology , Standing Position
13.
Cereb Cortex ; 34(5)2024 May 02.
Article En | MEDLINE | ID: mdl-38700440

While the auditory and visual systems each provide distinct information to our brain, they also work together to process and prioritize input to address ever-changing conditions. Previous studies highlighted the trade-off between auditory change detection and visual selective attention; however, the relationship between them is still unclear. Here, we recorded electroencephalography signals from 106 healthy adults in three experiments. Our findings revealed a positive correlation at the population level between the amplitudes of event-related potential indices associated with auditory change detection (mismatch negativity) and visual selective attention (posterior contralateral N2) when elicited in separate tasks. This correlation persisted even when participants performed a visual task while disregarding simultaneous auditory stimuli. Interestingly, as visual attention demand increased, participants whose posterior contralateral N2 amplitude increased the most exhibited the largest reduction in mismatch negativity, suggesting a within-subject trade-off between the two processes. Taken together, our results suggest an intimate relationship and potential shared mechanism between auditory change detection and visual selective attention. We liken this to a total capacity limit that varies between individuals, which could drive correlated individual differences in auditory change detection and visual selective attention, and also within-subject competition between the two, with task-based modulation of visual attention causing a within-participant decrease in auditory change detection sensitivity.


Attention , Auditory Perception , Electroencephalography , Visual Perception , Humans , Attention/physiology , Male , Female , Young Adult , Adult , Auditory Perception/physiology , Visual Perception/physiology , Acoustic Stimulation/methods , Photic Stimulation/methods , Evoked Potentials/physiology , Brain/physiology , Adolescent
14.
Nat Commun ; 15(1): 3692, 2024 May 01.
Article En | MEDLINE | ID: mdl-38693186

Over the last decades, cognitive neuroscience has identified a distributed set of brain regions that are critical for attention. Strong anatomical overlap with brain regions critical for oculomotor processes suggests a joint network for attention and eye movements. However, the role of this shared network in complex, naturalistic environments remains understudied. Here, we investigated eye movements in relation to (un)attended sentences of natural speech. Combining simultaneously recorded eye tracking and magnetoencephalographic data with temporal response functions, we show that gaze tracks attended speech, a phenomenon we termed ocular speech tracking. Ocular speech tracking even differentiates a target from a distractor in a multi-speaker context and is further related to intelligibility. Moreover, we provide evidence for its contribution to neural differences in speech processing, emphasizing the necessity to consider oculomotor activity in future research and in the interpretation of neural differences in auditory cognition.
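The temporal response functions mentioned above are commonly estimated by ridge regression of the recorded signal on time-lagged copies of the speech envelope; a generic forward-model sketch under that assumption (synthetic envelope and response, arbitrary lag window and regularisation) follows. It is not the authors' code.

```python
# Sketch of a temporal response function (TRF) estimate: regress a recorded
# signal (e.g., a gaze or MEG channel) on time-lagged copies of the speech
# envelope with ridge regularisation. All data and parameters are assumptions.
import numpy as np

def estimate_trf(envelope, response, fs, tmin=-0.1, tmax=0.4, lam=1.0):
    lags = np.arange(int(tmin * fs), int(tmax * fs) + 1)
    n = envelope.size
    X = np.zeros((n, lags.size))
    for j, lag in enumerate(lags):                 # design matrix of lagged envelope
        if lag >= 0:
            X[lag:, j] = envelope[:n - lag]
        else:
            X[:lag, j] = envelope[-lag:]
    w = np.linalg.solve(X.T @ X + lam * np.eye(lags.size), X.T @ response)
    return lags / fs, w                            # TRF weight per lag (seconds)

fs = 100
rng = np.random.default_rng(8)
env = rng.normal(size=30 * fs)                     # stand-in speech envelope
resp = np.convolve(env, np.hanning(20), mode="full")[:env.size] + rng.normal(size=env.size)
lag_s, trf = estimate_trf(env, resp, fs)
```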


Attention , Eye Movements , Magnetoencephalography , Speech Perception , Speech , Humans , Attention/physiology , Eye Movements/physiology , Male , Female , Adult , Young Adult , Speech Perception/physiology , Speech/physiology , Acoustic Stimulation , Brain/physiology , Eye-Tracking Technology
15.
eNeuro ; 11(5)2024 May.
Article En | MEDLINE | ID: mdl-38702194

Elicited upon violation of regularity in stimulus presentation, mismatch negativity (MMN) reflects the brain's ability to perform automatic comparisons between consecutive stimuli and provides an electrophysiological index of sensory error detection whereas P300 is associated with cognitive processes such as updating of the working memory. To date, there has been extensive research on the roles of MMN and P300 individually, because of their potential to be used as clinical markers of consciousness and attention, respectively. Here, we intend to explore with an unsupervised and rigorous source estimation approach, the underlying cortical generators of MMN and P300, in the context of prediction error propagation along the hierarchies of brain information processing in healthy human participants. The existing methods of characterizing the two ERPs involve only approximate estimations of their amplitudes and latencies based on specific sensors of interest. Our objective is twofold: first, we introduce a novel data-driven unsupervised approach to compute latencies and amplitude of ERP components accurately on an individual-subject basis and reconfirm earlier findings. Second, we demonstrate that in multisensory environments, MMN generators seem to reflect a significant overlap of "modality-specific" and "modality-independent" information processing while P300 generators mark a shift toward completely "modality-independent" processing. Advancing earlier understanding that multisensory contexts speed up early sensory processing, our study reveals that temporal facilitation extends to even the later components of prediction error processing, using EEG experiments. Such knowledge can be of value to clinical research for characterizing the key developmental stages of lifespan aging, schizophrenia, and depression.


Electroencephalography , Event-Related Potentials, P300 , Humans , Male , Female , Adult , Electroencephalography/methods , Young Adult , Event-Related Potentials, P300/physiology , Auditory Perception/physiology , Cerebral Cortex/physiology , Acoustic Stimulation/methods , Evoked Potentials/physiology
16.
Sci Rep ; 14(1): 10422, 2024 05 07.
Article En | MEDLINE | ID: mdl-38710727

Anticipating positive outcomes is a core cognitive function in the process of reward prediction. However, no neurophysiological method objectively assesses reward prediction in basic medical research. In the present study, we established a physiological paradigm using cortical direct current (DC) potential responses in rats to assess reward prediction. This paradigm consisted of five daily 1-h sessions with two tones, wherein the rewarded tone was followed by electrical stimulation of the medial forebrain bundle (MFB) scheduled 1000 ms later, whereas the unrewarded tone was not. On day 1, both tones induced a negative DC shift immediately after auditory responses, persisting up to MFB stimulation. This negative shift progressively increased and peaked on day 4. Starting from day 3, the negative shift from 600 to 1000 ms was significantly larger following the rewarded tone than that following the unrewarded tone. This negative DC shift was particularly prominent in the frontal cortex, suggesting its crucial role in discriminative reward prediction. During the extinction sessions, the shift diminished significantly on extinction day 1. These findings suggest that cortical DC potential is related to reward prediction and could be a valuable tool for evaluating animal models of depression, providing a testing system for anhedonia.


Extinction, Psychological , Reward , Animals , Rats , Male , Extinction, Psychological/physiology , Electric Stimulation , Acoustic Stimulation , Medial Forebrain Bundle/physiology , Rats, Sprague-Dawley
17.
Multisens Res ; 37(2): 89-124, 2024 Feb 13.
Article En | MEDLINE | ID: mdl-38714311

Prior studies investigating the effects of routine action video game play have demonstrated improvements in a variety of cognitive processes, including improvements in attentional tasks. However, there is little evidence indicating that the cognitive benefits of playing action video games generalize from simplified unisensory stimuli to multisensory scenes - a fundamental characteristic of natural, everyday life environments. The present study addressed whether video game experience has an impact on crossmodal congruency effects when searching through such multisensory scenes. We compared the performance of action video game players (AVGPs) and non-video game players (NVGPs) on a visual search task for objects embedded in video clips of realistic scenes. We conducted two identical online experiments with gender-balanced samples, for a total of N = 130. Overall, the data replicated previous findings reporting search benefits when visual targets were accompanied by semantically congruent auditory events, compared to neutral or incongruent ones. However, according to the results, AVGPs did not consistently outperform NVGPs in the overall search task, nor did they use multisensory cues more efficiently than NVGPs. Exploratory analyses with self-reported gender as a variable revealed a potential difference in response strategy between experienced male and female AVGPs when dealing with crossmodal cues. These findings suggest that the generalization of the advantage of AVG experience to realistic, crossmodal situations should be made with caution and with consideration of gender-related issues.


Attention , Video Games , Visual Perception , Humans , Male , Female , Visual Perception/physiology , Young Adult , Adult , Attention/physiology , Auditory Perception/physiology , Photic Stimulation , Adolescent , Reaction Time/physiology , Cues , Acoustic Stimulation
18.
Multisens Res ; 37(2): 143-162, 2024 Apr 30.
Article En | MEDLINE | ID: mdl-38714315

A vital heuristic used when judging whether audio-visual signals arise from the same event is the temporal coincidence of the respective signals. Previous research has highlighted a process whereby the perception of simultaneity rapidly recalibrates to account for differences in the physical temporal offsets of stimuli. The current paper investigated whether rapid recalibration also occurs in response to differences in central arrival latencies, driven by visual-intensity-dependent processing times. In a behavioural experiment, observers completed temporal-order judgement (TOJ), simultaneity judgement (SJ), and simple reaction-time (RT) tasks, responding to audio-visual trials that were preceded by other audio-visual trials with either a bright or dim visual stimulus. The point of subjective simultaneity shifted with the visual intensity of the preceding stimulus in the TOJ task but not in the SJ task, while the RT data revealed no effect of preceding intensity. Our data therefore provide some evidence that the perception of simultaneity rapidly recalibrates based on stimulus intensity.
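For reference, the point of subjective simultaneity is typically extracted by fitting a cumulative Gaussian to temporal-order judgements across audio-visual asynchronies; a minimal sketch with made-up data and an assumed sign convention is shown below.

```python
# Sketch of extracting a point of subjective simultaneity (PSS) from TOJ data:
# fit a cumulative Gaussian to the proportion of "visual first" responses across
# stimulus onset asynchronies. Data and sign convention are made up.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

soa_ms = np.array([-200, -100, -50, 0, 50, 100, 200])   # positive = visual leads (assumed)
p_visual_first = np.array([0.05, 0.15, 0.30, 0.55, 0.75, 0.90, 0.98])

def cgauss(x, pss, sigma):
    return norm.cdf(x, loc=pss, scale=sigma)

(pss, sigma), _ = curve_fit(cgauss, soa_ms, p_visual_first, p0=(0, 50))
print(f"PSS = {pss:.1f} ms, slope parameter sigma = {sigma:.1f} ms")
```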


Acoustic Stimulation , Auditory Perception , Photic Stimulation , Reaction Time , Visual Perception , Humans , Visual Perception/physiology , Auditory Perception/physiology , Male , Female , Reaction Time/physiology , Adult , Young Adult , Judgment/physiology
19.
Cereb Cortex ; 34(5)2024 May 02.
Article En | MEDLINE | ID: mdl-38715408

Speech comprehension in noise depends on complex interactions between peripheral sensory and central cognitive systems. Despite having normal peripheral hearing, older adults show difficulties in speech comprehension. It remains unclear whether the brain's neural responses could indicate aging. The current study examined whether individual brain activation during speech perception in different listening environments could predict age. We applied functional near-infrared spectroscopy to 93 normal-hearing human adults (20 to 70 years old) during a sentence listening task, which contained a quiet condition and four noisy conditions with different signal-to-noise ratios (SNR = 10, 5, 0, -5 dB). A data-driven approach, region-based brain-age predictive modeling, was adopted. We observed a significant behavioral decline with age under the four noisy conditions, but not under the quiet condition. Brain activation in the SNR = 10 dB listening condition successfully predicted individual age. Moreover, we found that the bilateral visual sensory cortex, left dorsal speech pathway, left cerebellum, right temporal-parietal junction area, right homolog of Wernicke's area, and right middle temporal gyrus contributed most to prediction performance. These results demonstrate that activations of regions involved in the sensory-motor mapping of sound, especially in noisy conditions, could be more sensitive measures for age prediction than external behavioral measures.
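Region-based brain-age prediction is, at its core, a regression problem; the sketch below shows one generic way to set it up with ridge regression and cross-validation, using random placeholder features rather than fNIRS data.

```python
# Sketch of brain-age prediction as regression: predict chronological age from
# per-region activation features with ridge regression and cross-validation.
# Features are random placeholders with a weak injected age signal.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict
from scipy.stats import pearsonr

rng = np.random.default_rng(9)
n_subjects, n_regions = 93, 40                      # sizes loosely mirroring the abstract
age = rng.uniform(20, 70, n_subjects)
features = rng.normal(size=(n_subjects, n_regions)) + 0.02 * age[:, None]

pred = cross_val_predict(Ridge(alpha=1.0), features, age, cv=10)
r, _ = pearsonr(age, pred)
mae = np.mean(np.abs(age - pred))
print(f"prediction accuracy: r = {r:.2f}, MAE = {mae:.1f} years")
```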


Aging , Brain , Comprehension , Noise , Spectroscopy, Near-Infrared , Speech Perception , Humans , Adult , Speech Perception/physiology , Male , Female , Spectroscopy, Near-Infrared/methods , Middle Aged , Young Adult , Aged , Comprehension/physiology , Brain/physiology , Brain/diagnostic imaging , Aging/physiology , Brain Mapping/methods , Acoustic Stimulation/methods
20.
J Acoust Soc Am ; 155(5): 2934-2947, 2024 May 01.
Article En | MEDLINE | ID: mdl-38717201

Spatial separation and fundamental frequency (F0) separation are effective cues for improving the intelligibility of target speech in multi-talker scenarios. Previous studies predominantly focused on spatial configurations within the frontal hemifield, overlooking the ipsilateral side and the entire median plane, where localization confusion often occurs. This study investigated the impact of spatial and F0 separation on intelligibility under these underexplored spatial configurations. Speech reception thresholds were measured in three experiments for scenarios involving two to four talkers, either in the ipsilateral horizontal plane or in the entire median plane, using monotonized speech with varying F0s as stimuli. The results revealed that spatial separation in symmetrical positions (front-back symmetry in the ipsilateral horizontal plane or front-back, up-down symmetry in the median plane) contributes positively to intelligibility. Both target direction and relative target-masker separation influence the masking release attributed to spatial separation. As the number of talkers exceeds two, the masking release from spatial separation diminishes. Nevertheless, F0 separation remains a remarkably effective cue and could even facilitate spatial separation in improving intelligibility. Further analysis indicated that current intelligibility models encounter difficulties in accurately predicting intelligibility in the scenarios explored in this study.


Cues , Perceptual Masking , Sound Localization , Speech Intelligibility , Speech Perception , Humans , Female , Male , Young Adult , Adult , Speech Perception/physiology , Acoustic Stimulation , Auditory Threshold , Speech Acoustics , Speech Reception Threshold Test , Noise
...