Results 1 - 20 of 10,539
1.
Brain Behav ; 14(10): e70083, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39378282

ABSTRACT

OBJECTIVES: The objective of this study was to investigate whether residual inhibition (RI), which provides information on the relationship between tinnitus and increased spontaneous activity in the auditory system, is a predictor of the success of sound enrichment treatment. DESIGN: Tinnitus patients were divided into two groups based on whether RI was achieved (RI+) or not (RI-). All participants underwent sound enrichment treatment. Psychosomatic measures (for tinnitus severity, discomfort, attention deficit, and sleep difficulties), Tinnitus Handicap Inventory (THI), minimum masking level (MML), and tinnitus loudness level (TLL) results were compared before treatment and at 1, 3, and 6 months after treatment. STUDY SAMPLE: Sixty-seven chronic tinnitus patients participated, with 38 patients in the RI+ group and 29 in the RI- group. RESULTS: There was a statistically significant difference between the groups in psychosomatic measures and in THI, MML, and TLL scores at 6 months after treatment (p < .05). There was a statistically significant decrease in psychosomatic measures and in THI, MML, and TLL scores over the treatment period in the RI+ group, but not in the RI- group. CONCLUSION: RI may predict the prognosis of tinnitus treatments used in clinics to reduce the spontaneous firing rate of neurons in the central auditory system, and RI positivity may be a predictor of treatment success in sound enrichment.


MeSH terms
Tinnitus , Tinnitus/therapy , Tinnitus/physiopathology , Humans , Male , Female , Middle Aged , Adult , Treatment Outcome , Acoustic Stimulation/methods , Neural Inhibition/physiology , Aged , Sound , Perceptual Masking/physiology
2.
Physiol Rep ; 12(19): e70079, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39380173

ABSTRACT

Physiological oscillations, such as those involved in brain activity, heartbeat, and respiration, display inherent rhythmicity across various timescales. However, adaptive behavior arises from the interaction between these intrinsic rhythms and external environmental cues. In this study, we used multimodal neurophysiological recordings, simultaneously capturing signals from the central and autonomic nervous systems (CNS and ANS), to explore the dynamics of brain and body rhythms in response to rhythmic auditory stimulation across three conditions: baseline (no auditory stimulation), passive auditory processing, and active auditory processing (discrimination task). Our findings demonstrate that active engagement with auditory stimulation synchronizes both CNS and ANS rhythms with the external rhythm, unlike passive and baseline conditions, as evidenced by power spectral density (PSD) and coherence analyses. Importantly, phase angle analysis revealed a consistent alignment across participants between their physiological oscillatory phases at stimulus or response onsets. This alignment was associated with reaction times, suggesting that certain phases of physiological oscillations are spontaneously prioritized across individuals due to their adaptive role in sensorimotor behavior. These results highlight the intricate interplay between CNS and ANS rhythms in optimizing sensorimotor responses to environmental demands, suggesting a potential mechanism of embodied predictive processing.
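The power spectral density and coherence analyses mentioned above can be sketched numerically. Below is a minimal, generic illustration (not the authors' pipeline) of Welch-averaged magnitude-squared coherence between a rhythmic stimulus and a noisy physiological signal; the sampling rate, stimulus frequency, and noise level are all hypothetical.

```python
import numpy as np

def welch_coherence(x, y, fs, nperseg=256):
    """Magnitude-squared coherence from Welch-averaged auto- and
    cross-spectra over 50%-overlapping Hann-windowed segments."""
    step = nperseg // 2
    win = np.hanning(nperseg)
    sxx = syy = sxy = 0
    for start in range(0, len(x) - nperseg + 1, step):
        xs = np.fft.rfft(win * x[start:start + nperseg])
        ys = np.fft.rfft(win * y[start:start + nperseg])
        sxx = sxx + np.abs(xs) ** 2          # stimulus auto-spectrum
        syy = syy + np.abs(ys) ** 2          # physiological auto-spectrum
        sxy = sxy + xs * np.conj(ys)         # cross-spectrum
    coh = np.abs(sxy) ** 2 / (sxx * syy)     # bounded in [0, 1]
    return np.fft.rfftfreq(nperseg, 1 / fs), coh

# Synthetic example: a 2 Hz stimulus rhythm leaks into a noisy
# "physiological" signal, so coherence should peak near 2 Hz.
fs = 128
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(0)
stim = np.sin(2 * np.pi * 2 * t)
physio = 0.8 * np.sin(2 * np.pi * 2 * t + 0.5) + rng.normal(0, 1, t.size)
freqs, coh = welch_coherence(stim, physio, fs)
peak_freq = freqs[np.argmax(coh)]
```

`scipy.signal.coherence` provides an equivalent, more featureful implementation of the same Welch estimate.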


MeSH terms
Acoustic Stimulation , Humans , Male , Female , Adult , Acoustic Stimulation/methods , Young Adult , Auditory Perception/physiology , Autonomic Nervous System/physiology , Electroencephalography/methods , Reaction Time/physiology , Brain/physiology , Periodicity
3.
Brain Topogr ; 38(1): 7, 2024 Oct 13.
Article in English | MEDLINE | ID: mdl-39397132

ABSTRACT

Electroencephalogram (EEG)-based neurofeedback training has gained traction as a practical method for enhancing executive functions, particularly attention, in healthy individuals. Neurofeedback protocols based on EEG channel locations, frequency bands, or EEG features have been tested; however, improvements in attention have not been compared across feedback stimulus types. We hypothesized that multisensory feedback may induce a strong effect even with few training sessions. This study therefore compares audio-visual and visual feedback stimuli for attention enhancement using neurophysiological, behavioural, and neuropsychological measures. A total of 21 subjects were recruited and underwent neurofeedback training sessions on six alternate days to upregulate EEG beta power at the frontocentral electrode FC5. Dwell time, fractional occupancy, and transition probability were also estimated from the EEG beta power. The audiovisual group (G1), compared with the visual group (G2), demonstrated a significant increase in global EEG beta activity alongside improved dwell time (t = 2.76, p = 0.003), fractional occupancy (t = 1.73, p = 0.042), and transition probability (t = 2.46, p = 0.008) over the course of the six neurofeedback training sessions. Similarly, G1 showed higher scores (t = 2.13, p = 0.032) and faster reaction times (t = 2.22, p = 0.028) in the Stroop task, along with increased scores on the Mindfulness Attention Awareness Scale (MAAS-15) questionnaire (t = 2.306, p = 0.012). Audiovisual neurofeedback may thus enhance training effectiveness, potentially achieving the same outcomes in fewer sessions than visual-only feedback, although sufficient training days remain essential for consolidating the effect. This supports the feasibility of completing neurofeedback training, a significant challenge in practice.
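Dwell time, fractional occupancy, and transition probability, the state metrics the abstract derives from beta power, can be sketched for a binarized state sequence. This is a generic illustration under assumed definitions (mean run length of the high-beta state, fraction of samples spent in it, and the probability of entering it from the other state), not the authors' exact computation.

```python
import numpy as np

def state_metrics(states, target=1):
    """Dwell time, fractional occupancy, and entry-transition
    probability for a binary state sequence (e.g. beta power
    above/below a threshold)."""
    states = np.asarray(states)
    occ = np.mean(states == target)                 # fractional occupancy
    # split the sequence into constant runs to get dwell times
    change = np.flatnonzero(np.diff(states)) + 1
    runs = np.split(states, change)
    dwells = [len(r) for r in runs if r[0] == target]
    dwell = float(np.mean(dwells)) if dwells else 0.0
    # probability of entering the target state from the other state
    prev, nxt = states[:-1], states[1:]
    from_other = prev != target
    p_enter = np.mean(nxt[from_other] == target) if from_other.any() else 0.0
    return dwell, occ, p_enter

# toy sequence: 0 = low beta, 1 = high beta
seq = [0, 0, 1, 1, 1, 0, 1, 1, 0, 0]
dwell, occ, p_enter = state_metrics(seq)
```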


MeSH terms
Acoustic Stimulation , Attention , Electroencephalography , Neurofeedback , Humans , Neurofeedback/methods , Male , Attention/physiology , Female , Pilot Projects , Adult , Young Adult , Electroencephalography/methods , Acoustic Stimulation/methods , Photic Stimulation/methods , Neuropsychological Tests , Visual Perception/physiology , Auditory Perception/physiology , Brain/physiology
4.
J Vis ; 24(11): 7, 2024 Oct 03.
Article in English | MEDLINE | ID: mdl-39382867

ABSTRACT

Auditory landmarks can contribute to spatial updating during navigation with vision. Whereas large inter-individual differences have been identified in how navigators combine auditory and visual landmarks, it is still unclear under what circumstances audition is used. Further, whether or not individuals optimally combine auditory cues with visual cues to decrease the amount of perceptual uncertainty, or variability, has not been well-documented. Here, we test audiovisual integration during spatial updating in a virtual navigation task. In Experiment 1, 24 individuals with normal sensory acuity completed a triangular homing task with either visual landmarks, auditory landmarks, or both. In addition, participants experienced a fourth condition with a covert spatial conflict where auditory landmarks were rotated relative to visual landmarks. Participants generally relied more on visual landmarks than auditory landmarks and were no more accurate with multisensory cues than with vision alone. In Experiment 2, a new group of 24 individuals completed the same task, but with simulated low vision in the form of a blur filter to increase visual uncertainty. Again, participants relied more on visual landmarks than auditory ones and no multisensory benefit occurred. Participants navigating with blur did not rely more on their hearing compared with the group that navigated with normal vision. These results support previous research showing that one sensory modality at a time may be sufficient for spatial updating, even under impaired viewing conditions. Future research could investigate task- and participant-specific factors that lead to different strategies of multisensory cue combination with auditory and visual cues.
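The "optimal combination" the abstract tests against is usually the maximum-likelihood (reliability-weighted) cue-integration model, which can be written in a few lines; the means and variances below are hypothetical values chosen only for illustration.

```python
def mle_combination(mu_v, var_v, mu_a, var_a):
    """Maximum-likelihood (reliability-weighted) cue combination.
    Each cue is weighted by its reliability (inverse variance);
    the combined variance is never larger than either single cue's."""
    w_v = (1 / var_v) / (1 / var_v + 1 / var_a)   # visual weight
    w_a = 1 - w_v                                  # auditory weight
    mu_av = w_v * mu_v + w_a * mu_a                # combined estimate
    var_av = (var_v * var_a) / (var_v + var_a)     # combined variance
    return mu_av, var_av

# Example: vision twice as reliable as audition
mu_av, var_av = mle_combination(mu_v=10.0, var_v=1.0, mu_a=14.0, var_a=2.0)
```

Comparing observed response variability in the bimodal condition against `var_av` is the standard test of optimal integration; the abstract reports no such multisensory variance reduction.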


MeSH terms
Auditory Perception , Cues , Spatial Navigation , Humans , Male , Spatial Navigation/physiology , Female , Adult , Young Adult , Auditory Perception/physiology , Visual Perception/physiology , Space Perception/physiology , Photic Stimulation/methods , Vision, Low/physiopathology , Virtual Reality , Acoustic Stimulation/methods , Visual Acuity/physiology
5.
Sci Rep ; 14(1): 23779, 2024 10 10.
Article in English | MEDLINE | ID: mdl-39389982

ABSTRACT

Intentionally walking to the beat of an auditory stimulus seems effortless for most humans. However, studies have revealed significant individual differences in the spontaneous tendency to synchronize: some individuals tend to adapt their walking pace to the beat, while others show little or no adjustment. To address this gap, we introduce the Ramp protocol, which measures spontaneous adaptation to a change in an auditory rhythmic stimulus during a gait task. First, participants walk at their preferred cadence without stimulation. After several steps, a metronome is presented, timed to match the participant's heel-strike. The metronome tempo then progressively departs from the participant's cadence by either accelerating or decelerating. Implementing the Ramp protocol required real-time heel-strike detection and auditory stimuli aligned with participants' preferred cadence. To achieve this, we developed the TeensyStep device, which we validated against a gold-standard step-detection system. We also demonstrated the sensitivity of the Ramp protocol to individual differences in the spontaneous response to a tempo-changing rhythmic stimulus by introducing a new measure: the Response Score. This new method for quantifying the spontaneous response to rhythmic stimuli holds promise for highlighting and distinguishing different profiles of adaptation in a gait task.
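The accelerating or decelerating metronome can be sketched as a list of click onset times whose inter-onset interval (IOI) changes by a fixed percentage per click. The baseline IOI, click counts, and ramp rate below are hypothetical, not the protocol's actual settings.

```python
def ramp_onsets(base_ioi, n_match, n_ramp, ramp_percent):
    """Metronome onset times (s) for a Ramp-like protocol:
    n_match clicks at the walker's baseline inter-onset interval,
    then n_ramp clicks whose IOI changes by ramp_percent per click
    (negative = accelerating, positive = decelerating)."""
    onsets, t, ioi = [], 0.0, base_ioi
    for _ in range(n_match):        # phase matched to baseline cadence
        onsets.append(t)
        t += ioi
    for _ in range(n_ramp):         # progressive tempo departure
        ioi *= 1 + ramp_percent / 100
        onsets.append(t)
        t += ioi
    return onsets

# 0.5 s baseline IOI (120 steps/min), 4 matched clicks, then 3 clicks
# accelerating by 10% per click (hypothetical parameter values)
onsets = ramp_onsets(0.5, n_match=4, n_ramp=3, ramp_percent=-10)
```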


MeSH terms
Acoustic Stimulation , Gait , Individuality , Walking , Humans , Male , Female , Walking/physiology , Acoustic Stimulation/methods , Adult , Gait/physiology , Young Adult , Adaptation, Physiological , Adolescent
6.
Cereb Cortex ; 34(10)2024 Oct 03.
Article in English | MEDLINE | ID: mdl-39390710

ABSTRACT

Humans perceive a pulse, or beat, underlying musical rhythm. Beat strength correlates with activity in the basal ganglia and supplementary motor area, suggesting these regions support beat perception. However, the basal ganglia and supplementary motor area are part of a general rhythm and timing network (regardless of the beat) and may also represent basic rhythmic features (e.g. tempo, number of onsets). To characterize the encoding of beat-related and other basic rhythmic features, we used representational similarity analysis. During functional magnetic resonance imaging, participants heard 12 rhythms: 4 strong-beat, 4 weak-beat, and 4 nonbeat. Multi-voxel activity patterns for each rhythm were tested to determine which brain areas were beat-sensitive: those in which activity patterns showed greater dissimilarities between rhythms of different beat strength than between rhythms of similar beat strength. Indeed, putamen and supplementary motor area activity patterns were significantly dissimilar for strong-beat and nonbeat conditions. Next, we tested whether basic rhythmic features or models of beat strength (counterevidence scores) predicted activity patterns. We found again that activity pattern dissimilarity in supplementary motor area and putamen correlated with beat strength models, not basic features. Beat strength models also correlated with activity pattern dissimilarities in the inferior frontal gyrus and inferior parietal lobe, though these regions encoded beat and rhythm simultaneously and were not driven by beat alone.
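Representational similarity analysis as used above rests on a representational dissimilarity matrix (RDM), commonly defined as 1 minus the Pearson correlation between condition-wise activity patterns. A toy sketch with synthetic "voxel" patterns (not the study's data): two correlated strong-beat patterns should be less dissimilar to each other than either is to an independent nonbeat pattern.

```python
import numpy as np

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between multi-voxel activity patterns (rows = conditions)."""
    return 1 - np.corrcoef(patterns)

# synthetic patterns over 50 "voxels": two similar strong-beat
# rhythms sharing a common response, plus an unrelated nonbeat rhythm
rng = np.random.default_rng(1)
base = rng.normal(size=50)
strong1 = base + rng.normal(0, 0.3, 50)
strong2 = base + rng.normal(0, 0.3, 50)
nonbeat = rng.normal(size=50)
d = rdm(np.vstack([strong1, strong2, nonbeat]))
```

Comparing such RDMs (from fMRI patterns) with model RDMs built from beat-strength scores or basic features is the correlation step the abstract describes.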


MeSH terms
Auditory Perception , Brain Mapping , Magnetic Resonance Imaging , Motor Cortex , Music , Humans , Male , Female , Adult , Young Adult , Motor Cortex/physiology , Motor Cortex/diagnostic imaging , Auditory Perception/physiology , Periodicity , Acoustic Stimulation/methods , Brain/physiology , Brain/diagnostic imaging
7.
Trends Hear ; 28: 23312165241282872, 2024.
Article in English | MEDLINE | ID: mdl-39397786

ABSTRACT

Decoding speech envelopes from electroencephalogram (EEG) signals holds potential as a research tool for objectively assessing auditory processing, which could contribute to future developments in hearing loss diagnosis. However, current methods struggle to achieve both high accuracy and interpretability. To address these issues, we propose a deep learning model, the auditory decoding transformer (ADT) network, for reconstructing speech envelopes from EEG signals. The ADT network uses spatio-temporal convolution for feature extraction, followed by a transformer decoder to decode the speech envelopes. Through anticausal masking, the ADT considers only the current and future EEG features, matching the natural temporal relationship between speech and the EEG response it evokes. Performance evaluation shows that the ADT network achieves average reconstruction scores of 0.168 and 0.167 on the SparrKULee and DTU datasets, respectively, rivaling those of other nonlinear models. Furthermore, by visualizing the weights of the spatio-temporal convolution layer as time-domain filters and brain topographies, combined with an ablation study of the temporal convolution kernels, we analyze the behavioral patterns of the ADT network in decoding speech envelopes. The results indicate that low-frequency (0.5-8 Hz) and high-frequency (14-32 Hz) EEG signals are more critical for envelope reconstruction and that the active brain regions are primarily distributed bilaterally in the auditory cortex, consistent with previous research. Visualization of attention scores further corroborated previous research. In summary, the ADT network balances high performance and interpretability, making it a promising tool for studying neural speech envelope tracking.
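The anticausal masking idea, letting each output frame attend only to current and future EEG frames (since the neural response follows the speech it encodes), can be illustrated as a boolean attention mask. This sketch shows only the masking pattern; the ADT's actual architecture and implementation details are not reproduced here.

```python
import numpy as np

def anticausal_mask(n):
    """Attention mask in which position i may attend only to positions
    j >= i (the current and future frames) - a causal mask flipped in
    time. True marks an allowed attention link."""
    return np.triu(np.ones((n, n), dtype=bool))

m = anticausal_mask(4)
# row 0 attends to everything; the last row attends only to itself
```

In a transformer implementation, disallowed positions (False entries) would have their attention logits set to -inf before the softmax.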


MeSH terms
Deep Learning , Electroencephalography , Signal Processing, Computer-Assisted , Speech Perception , Humans , Electroencephalography/methods , Speech Perception/physiology , Nonlinear Dynamics , Acoustic Stimulation/methods , Speech Acoustics , Neural Networks, Computer , Auditory Cortex/physiology
8.
J Acoust Soc Am ; 156(4): 2759-2766, 2024 Oct 01.
Article in English | MEDLINE | ID: mdl-39436360

ABSTRACT

Tactile stimulation has been shown to increase auditory loudness judgments in listeners. This bias could be utilized to enhance perception for people with deficiencies in auditory intensity perception, such as cochlear implant users. However, several aspects of this enhancement remain uncertain. For instance, does the tactile stimulation need to be applied to the hand or body, or can it be applied to the wrist? Furthermore, can the tactile stimulation both amplify and attenuate the perceived auditory loudness? To address these questions, two loudness-matching experiments were conducted. Participants matched a comparison auditory stimulus with an auditory reference, either with or without spectro-temporally identical tactile stimulation. In the first experiment, fixed-level tactile stimulation was administered to the wrist during the comparison stimulus to assess whether perceived auditory loudness increased. The second experiment replicated the same conditions but introduced tactile stimulation to both the reference and comparison, aiming to investigate the potential decrease in perceived auditory loudness when the two tactile accompaniments were incongruent between the reference and comparison. The results provide evidence supporting the existence of the tactile loudness bias in each experiment and are a step towards wrist-based haptic devices that modulate the auditory dynamic range for a user.


MeSH terms
Loudness Perception , Touch , Wrist , Humans , Female , Male , Wrist/physiology , Adult , Young Adult , Acoustic Stimulation/methods , Judgment , Physical Stimulation , Auditory Perception/physiology
9.
Brain Topogr ; 38(1): 2, 2024 Oct 04.
Article in English | MEDLINE | ID: mdl-39367155

ABSTRACT

Frequent listening to unfamiliar music excerpts forms functional connectivity in the brain as the music becomes familiar and memorable. However, where these connections spectrally arise in the cerebral cortex during music familiarization has yet to be determined. This study investigates electrophysiological changes in phase-based functional connectivity recorded with electroencephalography (EEG) from twenty participants during three passive listens to initially unknown classical music excerpts. Functional connectivity is evaluated by measuring phase synchronization between all pairwise combinations of EEG electrodes in different frequency bands using the weighted phase lag index (WPLI) method, compared across all repetitions via repeated measures ANOVA and between every two repetitions. The results indicate increased phase synchronization between the right frontal and right parietal areas in the theta and alpha bands during gradual short-term familiarization. In addition, increased phase synchronization is found between the right temporal and right parietal areas in the theta band during gradual music familiarization. Overall, this study explores the effects of short-term music familiarization on neural responses by revealing that repetition forms phasic coupling in the theta and alpha bands in the right hemisphere during passive listening.
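The weighted phase lag index used above can be sketched from segment-wise cross-spectra: it normalizes the averaged imaginary part of the cross-spectrum by the average of its magnitude, which discounts zero-lag (volume-conduction) coupling. The sampling rate, frequency, lag, and noise level in this simulation are hypothetical, not the study's parameters.

```python
import numpy as np

def wpli(x, y, fs, nperseg=256):
    """Weighted phase lag index from Welch-style cross-spectra:
    WPLI = |E[Im(Sxy)]| / E[|Im(Sxy)|], per frequency bin."""
    step = nperseg // 2
    win = np.hanning(nperseg)
    imags = []
    for start in range(0, len(x) - nperseg + 1, step):
        xs = np.fft.rfft(win * x[start:start + nperseg])
        ys = np.fft.rfft(win * y[start:start + nperseg])
        imags.append(np.imag(xs * np.conj(ys)))  # imaginary cross-spectrum
    imags = np.array(imags)
    num = np.abs(imags.mean(axis=0))
    den = np.abs(imags).mean(axis=0) + 1e-12     # avoid divide-by-zero
    return np.fft.rfftfreq(nperseg, 1 / fs), num / den

# two "electrodes" with a consistent 90-degree lag at 6 Hz (theta band)
fs = 128
t = np.arange(0, 40, 1 / fs)
rng = np.random.default_rng(2)
a = np.sin(2 * np.pi * 6 * t) + rng.normal(0, 1, t.size)
b = np.sin(2 * np.pi * 6 * t - np.pi / 2) + rng.normal(0, 1, t.size)
freqs, w = wpli(a, b, fs)
theta_bin = np.argmin(np.abs(freqs - 6))
```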


MeSH terms
Alpha Rhythm , Auditory Perception , Electroencephalography , Frontal Lobe , Music , Parietal Lobe , Theta Rhythm , Humans , Male , Female , Alpha Rhythm/physiology , Young Adult , Parietal Lobe/physiology , Theta Rhythm/physiology , Adult , Auditory Perception/physiology , Frontal Lobe/physiology , Electroencephalography/methods , Temporal Lobe/physiology , Recognition, Psychology/physiology , Acoustic Stimulation/methods
10.
J Acoust Soc Am ; 156(4): 2220-2236, 2024 Oct 01.
Article in English | MEDLINE | ID: mdl-39377529

ABSTRACT

Measuring and analyzing both nonlinear-distortion and linear-reflection otoacoustic emissions (OAEs) combined creates what we have termed a "joint-OAE profile." Here, we test whether these two classes of emissions have different sensitivities to hearing loss and whether our joint-OAE profile can detect mild-moderate hearing loss better than conventional OAE protocols have. 2f1-f2 distortion-product OAEs and stimulus-frequency OAEs were evoked with rapidly sweeping tones in 300 normal and impaired ears. Metrics included OAE amplitude for fixed-level stimuli as well as slope and compression features derived from OAE input/output functions. Results show that mild-moderate hearing loss impacts distortion and reflection emissions differently. Clinical decision theory was applied using OAE metrics to classify all ears as either normal-hearing or hearing-impaired. Our best OAE classifiers achieved 90% or better hit rates (with false positive rates of 5%-10%) for mild hearing loss, across a nearly five-octave range. In summary, results suggest that distortion and reflection emissions have distinct sensitivities to hearing loss, which supports the use of a joint-OAE approach for diagnosis. Results also indicate that analyzing both reflection and distortion OAEs together to detect mild hearing loss produces outstanding accuracy across the frequency range, exceeding that achieved by conventional OAE protocols.
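The clinical-decision-theory step, classifying ears from an OAE metric and reporting hit and false-positive rates, reduces to thresholding a metric against a criterion. The amplitudes and criterion below are invented for illustration; sweeping the criterion over its range traces the ROC curve behind the reported 90% hit rate at 5%-10% false positives.

```python
import numpy as np

def hit_fa(metric, is_impaired, criterion):
    """Hit rate and false-positive rate when ears are flagged as
    impaired because an OAE metric (e.g. emission amplitude in dB)
    falls below a criterion value."""
    metric = np.asarray(metric, dtype=float)
    is_impaired = np.asarray(is_impaired, dtype=bool)
    flagged = metric < criterion
    hit = flagged[is_impaired].mean()     # impaired ears correctly flagged
    fa = flagged[~is_impaired].mean()     # normal ears wrongly flagged
    return hit, fa

# hypothetical amplitudes: impaired ears emit weaker OAEs
amps = [12, 10, 9, 11, -2, 1, 3, -1]
impaired = [0, 0, 0, 0, 1, 1, 1, 1]
hit, fa = hit_fa(amps, impaired, criterion=5)
```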


MeSH terms
Acoustic Stimulation , Otoacoustic Emissions, Spontaneous , Humans , Otoacoustic Emissions, Spontaneous/physiology , Female , Acoustic Stimulation/methods , Male , Adult , Young Adult , Middle Aged , Severity of Illness Index , Hearing Loss, Sensorineural/physiopathology , Hearing Loss, Sensorineural/diagnosis , Case-Control Studies , Predictive Value of Tests , Aged , Audiometry, Pure-Tone , Auditory Threshold/physiology , Nonlinear Dynamics
11.
Sci Rep ; 14(1): 24026, 2024 10 14.
Article in English | MEDLINE | ID: mdl-39402073

ABSTRACT

Surgical personnel face various stressors in the workplace, including environmental sounds. Mobile electroencephalography (EEG) offers a promising approach for objectively measuring how individuals perceive sounds. Because surgical performance does not necessarily decrease with higher levels of distraction, EEG could help guide noise reduction strategies that are independent of performance measures. In this study, we utilized mobile EEG to explore how a realistic soundscape is perceived during simulated laparoscopic surgery. To examine the varying demands placed on personnel in different situations, we manipulated the cognitive demand during the surgical task using a memory task. To assess responses to the soundscape, we calculated event-related potentials for distinct sound events and temporal response functions for the ongoing soundscape. Although participants reported varying degrees of demand under the different conditions, no significant effects were observed on surgical task performance or EEG parameters. However, changes in surgical task performance and EEG parameters over time were noted, while subjective reports remained consistent over time. These findings highlight the importance of using multiple measures to fully understand the complex relationship between sound processing and cognitive demand. Furthermore, in the context of combined EEG and audio recordings in real-life scenarios, a sparse representation of the soundscape has the advantage that it can be recorded in a data-protected way compared to more detailed representations. However, it is unclear whether information is lost with sparse representations. Our results indicate that sparse and detailed representations are equally effective in eliciting neural responses. Overall, this study marks a significant step towards objectively investigating sound processing in applied settings.
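A temporal response function (TRF) of the kind computed for the ongoing soundscape can be estimated by ridge regression from time-lagged copies of the sound envelope onto an EEG channel (a forward model). This is a generic sketch with synthetic data, not the study's pipeline; the lag count, ridge parameter, and simulated 3-sample neural delay are assumptions.

```python
import numpy as np

def fit_trf(stim, eeg, n_lags, lam=1.0):
    """Forward-model TRF: ridge regression from lagged copies of the
    sound envelope onto one EEG channel."""
    X = np.column_stack([np.roll(stim, k) for k in range(n_lags)])
    X[:n_lags] = 0  # zero the samples wrapped around by np.roll
    # closed-form ridge solution: (X'X + lam*I)^-1 X'y
    w = np.linalg.solve(X.T @ X + lam * np.eye(n_lags), X.T @ eeg)
    return w

# synthetic check: the "EEG" is the envelope delayed by 3 samples
# plus noise, so the recovered TRF should peak at lag 3
rng = np.random.default_rng(3)
env = rng.normal(size=2000)
eeg = np.roll(env, 3) + 0.1 * rng.normal(size=2000)
w = fit_trf(env, eeg, n_lags=8)
peak_lag = int(np.argmax(np.abs(w)))
```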


MeSH terms
Electroencephalography , Humans , Electroencephalography/methods , Male , Female , Adult , Laparoscopy/methods , Young Adult , Auditory Perception/physiology , Task Performance and Analysis , Acoustic Stimulation/methods , Sound , Evoked Potentials/physiology , Occupational Stress/physiopathology
12.
Brain Topogr ; 38(1): 8, 2024 Oct 14.
Article in English | MEDLINE | ID: mdl-39400782

ABSTRACT

Pre-surgical localization of language function in the brain is critical for patients with medically intractable epilepsy. MEG has emerged as a valuable clinical tool for localizing language areas in clinical populations; however, its widespread application is limited by the low availability of such systems. Recent advances in optically pumped magnetometer (OPM) systems address some of the limitations of traditional MEG and have been shown to achieve a similar signal-to-noise ratio. However, because these systems are novel, they have only been tested for limited sensory and motor applications. In this work, we aim to validate a novel on-head OPM MEG procedure for lateralizing language processes. OPM recordings, using a soft cap with flexible sensor placement, were collected from 19 healthy, right-handed controls during an auditory word recognition task. The resulting evoked fields were assessed for hemispheric laterality of the response. Principal component analysis (PCA) of the grand-average language response indicated that the first two principal components were lateralized to the left hemisphere. The PCA also revealed that all participants had evoked topographies that closely resembled the average left-lateralized response. Left-lateralized responses are consistent with what is expected for a group of healthy right-handed individuals. These findings demonstrate that language-related evoked fields can be elucidated from on-head OPM MEG recordings in a group of healthy adult participants. In the future, on-head OPM MEG and the associated lateralization methods should be validated in patient populations, as they may have utility in the pre-surgical mapping of language functions in patients with epilepsy.


MeSH terms
Functional Laterality , Language , Magnetoencephalography , Humans , Magnetoencephalography/methods , Magnetoencephalography/standards , Female , Male , Functional Laterality/physiology , Adult , Young Adult , Brain Mapping/methods , Brain/physiology , Brain/physiopathology , Principal Component Analysis , Middle Aged , Acoustic Stimulation/methods
13.
Adv Exp Med Biol ; 1463: 153-158, 2024.
Article in English | MEDLINE | ID: mdl-39400816

ABSTRACT

The purpose of this study was to examine the effects on prefrontal cortex (PFC) activity of listening to pleasant sounds (PS) while walking, gum chewing (GCh), or performing the dual task of walking and gum chewing at the same time (walking + GCh). A total of 11 healthy adult male volunteers participated in the study (mean age: 29.54 ± 3.37 years). The block design of the trial consisted of a 30-s rest, a 60-s task (target task or control task), and a 30-s rest. There were three target task conditions, walking, GCh, and the dual task, all performed while listening to PS. The control condition was rest (no exercise) while listening to PS. Outcome measures were PFC activity, recorded with two-channel near-infrared spectroscopy, and participants' self-evaluation of the pleasantness of the experience on a visual analogue scale (VAS). Compared to the control condition, there was significantly greater PFC activation during the GCh and walking + GCh tasks. GCh and walking + GCh also received significantly higher pleasantness ratings on the VAS than the control condition. In conclusion, listening to PS during GCh or walking + GCh increases PFC activity in the lower central region and induces positive emotional change.


MeSH terms
Chewing Gum , Mastication , Prefrontal Cortex , Humans , Prefrontal Cortex/physiology , Male , Adult , Mastication/physiology , Spectroscopy, Near-Infrared/methods , Walking/physiology , Acoustic Stimulation/methods
14.
BMC Neurosci ; 25(1): 54, 2024 Oct 24.
Article in English | MEDLINE | ID: mdl-39448936

ABSTRACT

BACKGROUND: Several cognitive functions are related to sex. However, the relationship between auditory attention and sex remains unclear. The present study aimed to explore sex differences in auditory saliency judgments, with a particular focus on bottom-up auditory attention. METHODS: Forty-five typical adults (mean age: 21.5 ± 0.64 years) with no known hearing deficits, intelligence abnormalities, or attention deficits were enrolled in this study. They were tasked with annotating attention-capturing sounds in five audio clips played in a soundproof room. Each stimulus contained ten salient sounds randomly placed within a 1-min natural soundscape. We conducted a generalized linear mixed model (GLMM) analysis with the number of responses to salient sounds as the dependent variable; sex as the between-subjects factor; duration, maximum loudness, and maximum spectrum of each sound as within-subjects factors; and sound event and participant as random effects. RESULTS: No significant differences were found between the male and female groups in age, hearing threshold, intellectual function, or attention function (all p > 0.05). Analysis confirmed 77 distinct sound events, with individual response rates of 4.0-100%. In the GLMM analysis, the main effect of sex was not statistically significant (p = 0.458). Duration and spectrum had significant effects on response rate (p = 0.006 and p < 0.001, respectively). The effect of loudness was not statistically significant (p = 0.13). CONCLUSIONS: The results suggest that male and female listeners do not differ significantly in their auditory saliency judgments based on the acoustic characteristics studied. This finding challenges the notion of inherent sex differences in bottom-up auditory attention and highlights the need for further research to explore other potential factors or conditions under which such differences might emerge.


MeSH terms
Acoustic Stimulation , Attention , Auditory Perception , Sex Characteristics , Humans , Male , Female , Young Adult , Attention/physiology , Auditory Perception/physiology , Acoustic Stimulation/methods , Adult
15.
Soc Cogn Affect Neurosci ; 19(1)2024 Oct 30.
Article in English | MEDLINE | ID: mdl-39417280

ABSTRACT

Humans live in collective groups and are highly sensitive to perceived emotions of a group, including the pain of a group. However, previous research on empathy for pain has mainly focused on the suffering of a single individual ("individual empathy for pain"), with limited understanding of empathy for the pain of a group ("group empathy for pain"). Thus, the present study aimed to investigate the cognitive neural mechanisms of group empathy for pain in the auditory modality. The study produced group painful voices to simulate the painful voices made by a group, and recruited 34 participants to explore differences between their responses to group painful voices and individual painful voices using event-related potential (ERP) techniques. The results revealed that group painful voices were rated with higher pain intensity and more negative affective valence, and elicited larger P2 amplitudes, than individual painful voices. Furthermore, participants' trait affective empathy scores were positively correlated with their P2 amplitudes for group painful voices. The results suggest that group empathy for pain may facilitate affective empathic processing in the auditory modality.


MeSH terms
Empathy , Pain , Humans , Empathy/physiology , Female , Male , Pain/psychology , Pain/physiopathology , Young Adult , Adult , Electroencephalography/methods , Evoked Potentials/physiology , Acoustic Stimulation/methods , Auditory Perception/physiology , Group Processes
16.
Cereb Cortex ; 34(10)2024 Oct 03.
Article in English | MEDLINE | ID: mdl-39475113

ABSTRACT

Research on action-based timing has shed light on the temporal dynamics of sensorimotor coordination. This study investigates the neural mechanisms underlying action-based timing, particularly during finger-tapping tasks involving synchronized and syncopated patterns. Twelve healthy participants completed a continuation task, alternating between tapping in time with an auditory metronome (pacing) and continuing without it (continuation). Electroencephalography data were collected to explore how neural activity changes across these coordination modes and phases. We applied deep learning methods to classify single-trial electroencephalography data and predict behavioral timing conditions. Results showed significant classification accuracy for distinguishing between pacing and continuation phases, particularly during the presence of auditory cues, emphasizing the role of auditory input in motor timing. However, when auditory components were removed from the electroencephalography data, the differentiation between phases became inconclusive. Mean accuracy asynchrony, a measure of timing error, emerged as a superior predictor of performance variability compared to inter-response interval. These findings highlight the importance of auditory cues in modulating motor timing behaviors and present the challenges of isolating motor activation in the absence of auditory stimuli. Our study offers new insights into the neural dynamics of motor timing and demonstrates the utility of deep learning in analyzing single-trial electroencephalography data.
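The two behavioral timing measures compared above can be computed directly from tap and metronome onset times. Here "mean accuracy asynchrony" is taken to be the mean signed tap-to-metronome asynchrony (an assumption about the abstract's term), and the onset times are invented for illustration.

```python
import numpy as np

def timing_measures(taps, metronome):
    """Two behavioral timing measures for a paced tapping task:
    mean signed asynchrony between each tap and its metronome onset,
    and the mean inter-response interval (IRI) between successive taps."""
    taps = np.asarray(taps, dtype=float)
    metronome = np.asarray(metronome, dtype=float)
    mean_async = np.mean(taps - metronome)   # negative = anticipation
    mean_iri = np.mean(np.diff(taps))        # produced tapping period
    return mean_async, mean_iri

# taps slightly anticipating a 0.5 s metronome, as is typical
# in synchronization tapping
metro = [0.0, 0.5, 1.0, 1.5, 2.0]
taps = [-0.02, 0.47, 0.99, 1.46, 1.98]
mean_async, mean_iri = timing_measures(taps, metro)
```

Note that the IRI can sit exactly at the metronome period even when asynchronies are consistently negative, which is one reason the two measures can dissociate as predictors.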


MeSH terms
Cues , Deep Learning , Electroencephalography , Psychomotor Performance , Humans , Male , Electroencephalography/methods , Female , Adult , Young Adult , Psychomotor Performance/physiology , Acoustic Stimulation/methods , Auditory Perception/physiology , Brain/physiology , Fingers/physiology
17.
J Neurosci ; 44(44)2024 Oct 30.
Article in English | MEDLINE | ID: mdl-39477536

ABSTRACT

The capabilities of the human ear are remarkable. We can normally detect acoustic stimuli down to a threshold sound-pressure level of 0 dB (decibels) at the entrance to the external ear, which elicits eardrum vibrations in the picometer range. From this threshold up to the onset of pain, 120 dB, our ears can encompass sounds that differ in power by a trillionfold. The comprehension of speech and enjoyment of music result from our ability to distinguish between tones that differ in frequency by only 0.2%. All these capabilities vanish upon damage to the ear's receptors, the mechanoreceptive sensory hair cells. Each cochlea, the auditory organ of the inner ear, contains some 16,000 such cells that are frequency-tuned between ∼20 Hz (cycles per second) and 20,000 Hz. Remarkably enough, hair cells do not simply capture sound energy: they can also exhibit an active process whereby sound signals are amplified, tuned, and scaled. This article describes the active process in detail and offers evidence that its striking features emerge from the operation of hair cells on the brink of an oscillatory instability, one example of the critical phenomena that are widespread in physics.
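The "trillionfold" span follows directly from the decibel scale: power ratios go as 10^(dB/10) and pressure ratios as 10^(dB/20). A quick check (the 1 kHz reference tone below is an illustrative assumption, not from the abstract):

```python
# Dynamic range of hearing: power scales as 10**(dB/10),
# sound pressure as 10**(dB/20).
power_ratio = 10 ** (120 / 10)     # threshold (0 dB) to pain (120 dB)
pressure_ratio = 10 ** (120 / 20)

print(power_ratio)     # 1e12: the "trillionfold" power span
print(pressure_ratio)  # 1e6 in pressure

# Frequency discrimination of 0.2%, evaluated at an assumed 1 kHz tone:
jnd_hz = 0.002 * 1000.0
print(jnd_hz)          # 2.0 Hz just-noticeable difference
```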


Subjects
Hair Cells, Auditory ; Humans ; Hair Cells, Auditory/physiology ; Animals ; Mechanotransduction, Cellular/physiology ; Acoustic Stimulation/methods
18.
Sci Rep ; 14(1): 25553, 2024 10 26.
Article in English | MEDLINE | ID: mdl-39462004

ABSTRACT

In this randomized, controlled, and double-blind experiment with a relatively large sample (n = 262), a novel technique of audiovisual stimulation (AVS) was demonstrated to substantially improve self-reported mood states by reducing several negative affects, including anxiety and depression, and enhancing performance on mood-sensitive cognitive tasks. Most of the AVS effects were highly similar whether binaural beats were present or not and regardless of the duration of experience. Remarkably, the mood benefits from AVS closely aligned with those achieved through breath-focused meditation with additional evidence that a brief AVS exposure of approximately five minutes may be sufficient or even optimal for improving mood to a comparable or greater degree than meditation sessions of equal or longer durations (11-22 min). These exciting findings position AVS as a promising avenue for mood and cognition enhancement and a potentially more accessible "plug-and-play" alternative to meditation, which is especially relevant considering the high attrition rates commonly observed in meditation practices.
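The binaural-beat component of AVS can be sketched in a few lines: a slightly different pure tone in each ear, whose frequency difference sets the perceived beat rate. The carrier and beat frequencies below are illustrative assumptions; the study does not report its stimulus parameters in this abstract.

```python
import numpy as np

fs = 44100        # CD-quality sampling rate
dur = 10.0        # seconds (a ~5-min session would simply use a longer dur)
t = np.arange(int(fs * dur)) / fs

# Binaural beat: the ears receive 200 Hz and 210 Hz tones, and the
# brain perceives their 10 Hz difference (alpha-range; an assumed value).
f_left, f_right = 200.0, 210.0
left = np.sin(2 * np.pi * f_left * t)
right = np.sin(2 * np.pi * f_right * t)
stereo = np.stack([left, right], axis=1).astype(np.float32)

print(stereo.shape)
```

Note the study's key observation: most mood effects appeared with or without the binaural-beat component, so the beat itself may not be the active ingredient.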


Subjects
Affect ; Cognition ; Meditation ; Humans ; Meditation/methods ; Meditation/psychology ; Affect/physiology ; Cognition/physiology ; Female ; Male ; Adult ; Double-Blind Method ; Photic Stimulation/methods ; Acoustic Stimulation/methods ; Middle Aged ; Young Adult ; Depression/therapy ; Depression/psychology ; Anxiety/therapy ; Anxiety/psychology ; Respiration
19.
J Vis Exp ; (212)2024 Oct 11.
Article in English | MEDLINE | ID: mdl-39465961

ABSTRACT

Electric-acoustic stimulation (EAS) is a promising treatment to improve hearing ability in patients with high-frequency hearing loss (HL). In EAS surgeries, shorter electrodes have been preferred to avoid the presence of an electrode covering the residual hearing region. However, our earlier studies showed that EAS with longer electrodes (28 mm) could preserve acoustic hearing. Additionally, we reported that the hearing preservation (HP) scores were independent of the length of the inserted electrodes, consistent with the systematic review. As most EAS patients gradually lose residual hearing over time due to the natural course of HL, in these cases, providing broader cochlear coverage using longer electrodes was beneficial toward better place-pitch matching. In addition to preparing for the deterioration in hearing in the future, EAS with longer electrodes could offer various types of map strategies. Herein, we show the pre-, intra-, and post-procedures for EAS surgery. Appropriate preoperative evaluation, less invasive surgery, flexible lateral-wall electrodes, and steroid administration resulted in good HP following EAS with longer electrodes.


Subjects
Hearing Loss, High-Frequency ; Humans ; Hearing Loss, High-Frequency/surgery ; Acoustic Stimulation/methods ; Acoustic Stimulation/instrumentation ; Cochlear Implantation/methods ; Cochlear Implantation/instrumentation ; Cochlea/surgery ; Cochlear Implants ; Electric Stimulation/instrumentation ; Electric Stimulation/methods ; Electrodes, Implanted
20.
Commun Biol ; 7(1): 1376, 2024 Oct 23.
Article in English | MEDLINE | ID: mdl-39443657

ABSTRACT

Background music is widely used to sustain attention, but little is known about what musical properties aid attention. This may be due to inter-individual variability in neural responses to music. Here we find that music with amplitude modulations added at specific rates can sustain attention differentially for those with varying levels of attentional difficulty. We first tested the hypothesis that music with strong amplitude modulation would improve sustained attention, and found it did so when it occurred early in the experiment. Rapid modulations in music elicited greater activity in attentional networks in fMRI, as well as greater stimulus-brain coupling in EEG. Finally, to test the idea that specific modulation properties would differentially affect listeners based on their level of attentional difficulty, we parametrically manipulated the depth and rate of amplitude modulations inserted in otherwise-identical music, and found that beta-range modulations helped more than other modulation ranges for participants with more ADHD symptoms. Results suggest the possibility of an oscillation-based neural mechanism for targeted music to support improved cognitive performance.
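Inserting amplitude modulation into otherwise-identical audio, as the study's parametric manipulation describes, amounts to multiplying the waveform by a slow sinusoidal envelope. The sketch below uses a 440 Hz tone as a stand-in for music, with a 16 Hz (beta-range) rate and 0.5 depth; these values are illustrative assumptions, not the study's exact parameters.

```python
import numpy as np

fs = 22050
t = np.arange(fs * 2) / fs              # 2 s of audio
carrier = np.sin(2 * np.pi * 440 * t)   # placeholder for a music track

# Beta-range amplitude modulation: a raised sinusoidal envelope.
# rate sets the modulation frequency, depth its strength (0..1).
rate, depth = 16.0, 0.5
envelope = 1.0 + depth * np.sin(2 * np.pi * rate * t)
modulated = carrier * envelope / (1.0 + depth)   # renormalize peak to <= 1

print(float(np.max(np.abs(modulated))))
```

Sweeping `rate` across delta/theta/alpha/beta bands and `depth` from 0 upward reproduces the kind of parametric grid the study evaluated against ADHD symptom scores.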


Subjects
Attention ; Auditory Perception ; Magnetic Resonance Imaging ; Music ; Humans ; Music/psychology ; Attention/physiology ; Male ; Female ; Auditory Perception/physiology ; Young Adult ; Adult ; Electroencephalography ; Acoustic Stimulation/methods ; Brain/physiology ; Brain/physiopathology