Results 1 - 20 of 16,712
1.
PLoS One ; 19(7): e0299784, 2024.
Article in English | MEDLINE | ID: mdl-38950011

ABSTRACT

Observers can discriminate between correct and incorrect perceptual decisions via feelings of confidence. The centro-parietal positivity build-up rate (CPP slope) has been suggested as a likely neural signature of accumulated evidence, which may guide both perceptual performance and confidence. However, CPP slope also covaries with reaction time, which in previous studies covaries with confidence, and performance and confidence typically covary; thus, CPP slope may index signatures of perceptual performance rather than confidence per se. Moreover, perceptual metacognition (including its neural correlates) has largely been studied in vision, with few exceptions. Thus, we lack an understanding of domain-general neural signatures of perceptual metacognition outside vision. Here we designed a novel auditory pitch identification task and collected behavior with simultaneous 32-channel EEG in healthy adults. Participants saw two tone labels which varied in tonal distance on each trial (e.g., C vs D, C vs F), then heard a single auditory tone; they identified which label was correct and rated confidence. We found that pitch identification confidence varied with tonal distance, but performance, metacognitive sensitivity (trial-by-trial covariation of confidence with accuracy), and reaction time did not. Interestingly, however, while CPP slope covaried with performance and reaction time, it did not significantly covary with confidence. We interpret these results to mean that CPP slope is likely a signature of first-order perceptual processing rather than of confidence-specific signals or computations in auditory tasks. Our novel pitch identification task offers a valuable method for examining the neural correlates of auditory and domain-general perceptual confidence.
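For readers unfamiliar with the measure, metacognitive sensitivity of the kind described here (trial-by-trial covariation of confidence with accuracy) is often operationalized as a type-2 AUROC. The sketch below is a generic illustration of that statistic in Python, not the paper's exact analysis; all names are illustrative.

```python
import numpy as np

def type2_auroc(confidence, accuracy):
    """Type-2 AUROC: the probability that a randomly chosen correct trial
    received a higher confidence rating than a randomly chosen incorrect
    trial (ties count one half). 0.5 = no sensitivity, 1.0 = perfect."""
    confidence = np.asarray(confidence, dtype=float)
    accuracy = np.asarray(accuracy, dtype=bool)
    conf_correct = confidence[accuracy]
    conf_error = confidence[~accuracy]
    # Compare every correct trial's confidence against every error trial's
    greater = (conf_correct[:, None] > conf_error[None, :]).mean()
    ties = (conf_correct[:, None] == conf_error[None, :]).mean()
    return greater + 0.5 * ties

acc = np.array([1, 1, 1, 0, 0, 0], dtype=bool)
good = type2_auroc([4, 3, 4, 1, 2, 1], acc)  # confidence tracks accuracy
flat = type2_auroc([2, 2, 2, 2, 2, 2], acc)  # confidence uninformative
```

An observer whose confidence perfectly separates correct from incorrect trials scores 1.0; uninformative confidence scores 0.5.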


Subjects
Electroencephalography, Pitch Perception, Reaction Time, Humans, Male, Female, Adult, Reaction Time/physiology, Young Adult, Pitch Perception/physiology, Acoustic Stimulation, Metacognition/physiology, Auditory Perception/physiology
2.
Hum Brain Mapp ; 45(10): e26724, 2024 Jul 15.
Article in English | MEDLINE | ID: mdl-39001584

ABSTRACT

Music is ubiquitous, both in its instrumental and vocal forms. While speech perception at birth has been at the core of an extensive corpus of research, the origins of the ability to discriminate instrumental or vocal melodies are still not well investigated. In previous studies comparing vocal and musical perception, the vocal stimuli were mainly related to speaking, including language, and not to the non-language singing voice. In the present study, to better compare a melodic instrumental line with the voice, we used singing as the comparison stimulus, reducing the dissimilarities between the two stimuli as much as possible and separating language perception from vocal musical perception. Forty-five newborns were scanned, 10 full-term infants and 35 preterm infants at term-equivalent age (mean gestational age at test = 40.17 weeks, SD = 0.44), using functional magnetic resonance imaging while listening to five melodies played by a musical instrument (flute) or sung by a female voice. To examine dynamic task-based effective connectivity, we employed a psychophysiological interaction of co-activation patterns (PPI-CAPs) analysis, using the auditory cortices as seed region, to investigate moment-to-moment changes in task-driven modulation of cortical activity during the fMRI task. Our findings reveal condition-specific, dynamically occurring patterns of co-activation (PPI-CAPs). During the vocal condition, the auditory cortex co-activates with the sensorimotor and salience networks, while during the instrumental condition, it co-activates with the visual cortex and the superior frontal cortex. These results show that the vocal stimulus elicits sensorimotor aspects of auditory perception and is processed as the more salient stimulus, whereas the instrumental condition activates higher-order cognitive and visuo-spatial networks. Common neural signatures for both auditory stimuli were found in the precuneus and posterior cingulate gyrus. Finally, this study adds knowledge on the dynamic brain connectivity underlying newborns' capability for early and specialized auditory processing, highlighting the relevance of dynamic approaches to studying brain function in newborn populations.


Subjects
Auditory Perception, Magnetic Resonance Imaging, Music, Humans, Female, Male, Auditory Perception/physiology, Infant, Newborn, Singing/physiology, Infant, Premature/physiology, Brain Mapping, Acoustic Stimulation, Brain/physiology, Brain/diagnostic imaging, Voice/physiology
3.
Philos Trans R Soc Lond B Biol Sci ; 379(1908): 20230257, 2024 Aug 26.
Article in English | MEDLINE | ID: mdl-39005025

ABSTRACT

Misophonia is commonly characterized by intense emotional reactions to common everyday sounds. The condition affects both the mental health of its sufferers and society more broadly. As yet, formal models of the basis of misophonia are in their infancy. Drawing on developing behavioural and neuroscientific research, we are gaining a growing understanding of the phenomenology and empirical findings in misophonia, such as the importance of context, the types of coping strategies used, and the activation of particular brain regions. In this article, we argue for a model of misophonia that includes not only the sound but also the context within which the sound is perceived and the emotional reaction triggered. We review the current behavioural and neuroimaging literature, which lends support to this idea. Based on the current evidence, we propose that misophonia should be understood within the broader context of social perception and cognition, and not restricted to the narrow domain of a disorder of auditory processing. We discuss the evidence in support of this hypothesis, as well as the implications for potential treatment approaches. This article is part of the theme issue 'Sensing and feeling: an integrative approach to sensory processing and emotional experience'.


Subjects
Emotions, Social Cognition, Humans, Emotions/physiology, Auditory Perception/physiology, Cognition, Social Perception
5.
Proc Natl Acad Sci U S A ; 121(30): e2320378121, 2024 Jul 23.
Article in English | MEDLINE | ID: mdl-39008675

ABSTRACT

The neuroscientific examination of music processing in audio-visual contexts offers a valuable framework to assess how auditory information influences the emotional encoding of visual information. Using fMRI during naturalistic film viewing, we investigated the neural mechanisms underlying the effect of music on valence inferences during mental state attribution. Thirty-eight participants watched the same short film accompanied by systematically controlled consonant or dissonant music. Subjects were instructed to think about the main character's intentions. The results revealed that increasing levels of dissonance led to more negatively valenced inferences, demonstrating the profound emotional impact of musical dissonance. Crucially, at the neuroscientific level and despite music being the sole manipulation, dissonance evoked responses in the primary visual cortex (V1). Functional/effective connectivity analysis showed a stronger coupling between the auditory ventral stream (AVS) and V1 in response to tonal dissonance and demonstrated the modulation of early visual processing via top-down feedback inputs from the AVS to V1. These V1 signal changes indicate the influence of high-level contextual representations associated with tonal dissonance on early visual cortices, serving to facilitate the emotional interpretation of visual information. Our results highlight the significance of employing systematically controlled music, which can isolate emotional valence from the arousal dimension, to elucidate the brain's sound-to-meaning interface and its distributed crossmodal effects on early visual encoding during naturalistic film viewing.


Subjects
Auditory Perception, Emotions, Magnetic Resonance Imaging, Music, Visual Perception, Humans, Music/psychology, Female, Male, Adult, Visual Perception/physiology, Auditory Perception/physiology, Emotions/physiology, Young Adult, Brain Mapping, Acoustic Stimulation, Visual Cortex/physiology, Visual Cortex/diagnostic imaging, Primary Visual Cortex/physiology, Photic Stimulation/methods
6.
Optom Vis Sci ; 101(6): 393-398, 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38990237

ABSTRACT

SIGNIFICANCE: It is important to know whether early-onset and late-onset vision loss are associated with differences in the estimation of the distances of sound sources in the environment. People with vision loss rely heavily on auditory cues for path planning, safe navigation, avoiding collisions, and activities of daily living. PURPOSE: Loss of vision can lead to substantial changes in auditory abilities. It is unclear whether differences in sound distance estimation exist among people with early-onset partial vision loss, late-onset partial vision loss, and normal vision. We investigated distance estimates for a range of sound sources and auditory environments in groups of participants with early- or late-onset partial vision loss and sighted controls. METHODS: Fifty-two participants heard static sounds with virtual distances ranging from 1.2 to 13.8 m within a simulated room. The room simulated either anechoic (no echoes) or reverberant environments. Stimuli were speech, music, or noise. Single sounds were presented, and participants reported the estimated distance of the sound source. Each participant took part in 480 trials. RESULTS: Analysis of variance showed significant main effects of visual status (p<0.05), environment (reverberant vs. anechoic, p<0.05), and stimulus (p<0.05). Significant differences (p<0.05) were found in the estimation of distances of sound sources between early-onset visually impaired participants and sighted controls at closer distances for all conditions except the anechoic speech condition, and at middle distances for all conditions except the reverberant speech and music conditions. Late-onset visually impaired participants and sighted controls showed similar performance (p>0.05). CONCLUSIONS: The findings suggest that early-onset partial vision loss results in significant changes in judged auditory distance in different environments, especially at close and middle distances. Late-onset partial vision loss has less of an impact on the ability to estimate the distance of sound sources. The findings are consistent with a theoretical framework, the perceptual restructuring hypothesis, recently proposed to account for the effects of vision loss on audition.


Subjects
Sound Localization, Humans, Male, Female, Middle Aged, Aged, Adult, Sound Localization/physiology, Judgment, Auditory Perception/physiology, Distance Perception/physiology, Acoustic Stimulation/methods, Young Adult, Visual Acuity/physiology, Age of Onset, Aged, 80 and over, Cues
7.
Front Neural Circuits ; 18: 1431119, 2024.
Article in English | MEDLINE | ID: mdl-39011279

ABSTRACT

Memory-guided motor shaping is necessary for sensorimotor learning. Vocal learning, such as speech development in human infants and song learning in juvenile birds, begins with the formation of an auditory template through hearing adult voices, followed by vocal matching to the memorized template using auditory feedback. In zebra finches, the widely used songbird model system, only males develop individually unique stereotyped songs. The production of normal songs relies on auditory experience of a tutor's songs (commonly the father's songs) during a critical developmental period that consists of orchestrated auditory and sensorimotor phases. "Auditory templates" of tutor songs are thought to form in the brain to guide later vocal learning, while the formation of "motor templates" of the bird's own song has been suggested to be necessary for the maintenance of stereotyped adult songs. Where these templates are formed in the brain, and how they interact with other brain areas to guide song learning, presumably through template-matching error correction, remains to be clarified. Here, we review and discuss studies on auditory and motor templates in the avian brain. We suggest that distinct auditory and motor template systems exist and that they switch functions during development.


Subjects
Auditory Perception, Learning, Vocalization, Animal, Animals, Vocalization, Animal/physiology, Learning/physiology, Auditory Perception/physiology, Memory/physiology, Finches/physiology, Brain/physiology, Male
8.
Commun Biol ; 7(1): 856, 2024 Jul 12.
Article in English | MEDLINE | ID: mdl-38997514

ABSTRACT

The neuroscience of consciousness aims to identify neural markers that distinguish brain dynamics in healthy individuals from those in unconscious conditions. Recent research has revealed that specific brain connectivity patterns correlate with conscious states and diminish with loss of consciousness. However, the contribution of these patterns to shaping conscious processing remains unclear. Our study investigates the functional significance of these neural dynamics by examining their impact on participants' ability to process external information during wakefulness. Using fMRI recordings during an auditory detection task and rest, we show that ongoing dynamics are underpinned by brain patterns consistent with those identified in previous research. Detection of auditory stimuli at threshold is specifically improved when the connectivity pattern at stimulus presentation corresponds to patterns characteristic of conscious states. Conversely, the occurrence of these conscious state-associated patterns increases after detection, indicating a mutual influence between ongoing brain dynamics and conscious perception. Our findings suggest that certain brain configurations are more favorable to the conscious processing of external stimuli. Targeting these favorable patterns in patients with consciousness disorders may help identify windows of greater receptivity to the external world, guiding personalized treatments.


Subjects
Acoustic Stimulation, Auditory Perception, Brain, Consciousness, Magnetic Resonance Imaging, Humans, Consciousness/physiology, Auditory Perception/physiology, Male, Female, Adult, Young Adult, Brain/physiology, Brain/diagnostic imaging, Brain Mapping/methods
9.
Sci Rep ; 14(1): 16412, 2024 Jul 16.
Article in English | MEDLINE | ID: mdl-39013995

ABSTRACT

A series of eleven public concerts (featuring chamber music by Ludwig van Beethoven, Brett Dean, and Johannes Brahms) was organized with the goal of analyzing physiological synchrony within the audiences and its associations with psychological variables. We hypothesized that the music would induce synchronized physiology, which would be linked to participants' aesthetic experiences, affect, and personality traits. Physiological measures (cardiac, electrodermal, respiratory) of 695 participants were recorded during the performances. Before and after the concerts, questionnaires provided self-report scales and standardized measures of participants' affectivity, personality traits, aesthetic experiences, and listening modes. Synchrony was computed by a cross-correlational algorithm to obtain, for each participant and physiological variable (heart rate, heart-rate variability, respiration rate, respiration, skin-conductance response), how much each individual participant contributed to overall audience synchrony. In hierarchical models, this synchrony contribution was used as the dependent variable and the various self-report scales as predictor variables. We found that physiology throughout audiences was significantly synchronized, as expected, with the exception of breathing behavior. There were links between synchrony and affectivity. Personality moderated synchrony levels: Openness was positively associated, Extraversion and Neuroticism negatively. Several factors of experience and listening modes predicted synchrony. Emotional listening was associated with reduced synchrony, whereas both structural and sound-focused listening were associated with increased synchrony. We conclude with an updated, nuanced understanding of synchrony on the timescale of whole concerts, inviting elaboration by synchrony studies on the shorter timescales of musical passages.
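A per-participant synchrony contribution of the cross-correlational kind described here can be sketched as the maximum lagged correlation between one participant's signal and the average signal of all the others. This is a toy illustration under that assumption, not the authors' implementation; all names are illustrative.

```python
import numpy as np

def synchrony_contribution(signals, max_lag=5):
    """Per-participant synchrony: the maximum lagged correlation between
    each participant's z-scored signal and the mean signal of the others.

    signals: array of shape (n_participants, n_samples)
    """
    z = (signals - signals.mean(axis=1, keepdims=True)) / signals.std(axis=1, keepdims=True)
    n_part, n_samp = z.shape
    contributions = np.empty(n_part)
    for i in range(n_part):
        others = np.delete(z, i, axis=0).mean(axis=0)  # leave-one-out average
        best = -np.inf
        for lag in range(-max_lag, max_lag + 1):
            if lag >= 0:
                a, b = z[i, lag:], others[:n_samp - lag]
            else:
                a, b = z[i, :lag], others[-lag:]
            best = max(best, np.corrcoef(a, b)[0, 1])
        contributions[i] = best
    return contributions

# Toy data: five participants sharing a slow oscillation plus noise,
# and one participant with noise only.
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 500)
shared = np.sin(2 * np.pi * 0.5 * t)
signals = np.vstack([shared + 0.3 * rng.standard_normal(500) for _ in range(5)]
                    + [rng.standard_normal(500)])
contrib = synchrony_contribution(signals)
```

The five synchronized participants receive high contribution scores; the noise-only participant scores near zero.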


Subjects
Music, Personality, Humans, Music/psychology, Male, Female, Adult, Personality/physiology, Heart Rate/physiology, Auditory Perception/physiology, Young Adult, Middle Aged, Galvanic Skin Response/physiology, Attitude, Adolescent, Surveys and Questionnaires, Emotions/physiology, Respiratory Rate/physiology
10.
Sci Rep ; 14(1): 16462, 2024 Jul 16.
Article in English | MEDLINE | ID: mdl-39014043

ABSTRACT

The current study tested the hypothesis that the association between musical ability and vocal emotion recognition skills is mediated by accuracy in prosody perception. Furthermore, it was investigated whether this association is primarily related to musical expertise, operationalized by long-term engagement in musical activities, or musical aptitude, operationalized by a test of musical perceptual ability. To this end, we conducted three studies: In Study 1 (N = 85) and Study 2 (N = 93), we developed and validated a new instrument for the assessment of prosodic discrimination ability. In Study 3 (N = 136), we examined whether the association between musical ability and vocal emotion recognition was mediated by prosodic discrimination ability. We found evidence for a full mediation, though only in relation to musical aptitude and not in relation to musical expertise. Taken together, these findings suggest that individuals with high musical aptitude have superior prosody perception skills, which in turn contribute to their vocal emotion recognition skills. Importantly, our results suggest that these benefits are not unique to musicians, but extend to non-musicians with high musical aptitude.
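The mediation logic described here (musical aptitude → prosody discrimination → vocal emotion recognition) can be sketched with ordinary least squares. The simulated data below mirrors a full mediation only as an illustration; variable names and effect sizes are hypothetical, and the study's actual analysis will differ.

```python
import numpy as np

def mediation_paths(x, m, y):
    """Simple mediation via OLS: a (x -> m), b (m -> y controlling x),
    c (total effect of x on y), c_prime (direct effect), and the
    indirect effect a * b. For OLS, c = c_prime + a * b exactly."""
    def ols(predictors, outcome):
        X = np.column_stack([np.ones(len(outcome))] + predictors)
        return np.linalg.lstsq(X, outcome, rcond=None)[0]
    a = ols([x], m)[1]
    c = ols([x], y)[1]
    _, c_prime, b = ols([x, m], y)
    return {"a": a, "b": b, "c": c, "c_prime": c_prime, "indirect": a * b}

# Simulated full mediation: aptitude affects emotion recognition only
# through prosody discrimination (no direct path).
rng = np.random.default_rng(1)
aptitude = rng.standard_normal(500)
prosody = 0.8 * aptitude + 0.3 * rng.standard_normal(500)
emotion = 0.7 * prosody + 0.3 * rng.standard_normal(500)
paths = mediation_paths(aptitude, prosody, emotion)
```

In a full mediation of this kind, the indirect effect dominates and the direct effect c' is near zero; in practice the indirect effect would be tested with bootstrapped confidence intervals.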


Subjects
Aptitude, Emotions, Music, Humans, Music/psychology, Male, Female, Emotions/physiology, Aptitude/physiology, Adult, Young Adult, Speech Perception/physiology, Auditory Perception/physiology, Adolescent, Recognition, Psychology/physiology, Voice/physiology
11.
Cereb Cortex ; 34(7)2024 Jul 03.
Article in English | MEDLINE | ID: mdl-39016432

ABSTRACT

Sound is an important navigational cue for mammals. During spatial navigation, hippocampal place cells encode spatial representations of the environment based on visual information, but to what extent audiospatial information can enable reliable place cell mapping is largely unknown. We assessed this by recording from CA1 place cells in the dark, under circumstances where reliable visual, tactile, or olfactory information was unavailable. Male rats were exposed to auditory cues of different frequencies that were delivered from local or distal spatial locations. We observed that distal, but not local, cue presentation enables and supports stable place fields, regardless of the sound frequency used. Our data suggest that a context dependency exists regarding the relevance of auditory information for place field mapping: whereas locally available auditory cues do not serve as a salient spatial basis for the anchoring of place fields, auditory cue localization supports spatial representations by place cells when available in the form of distal information. Furthermore, our results demonstrate that CA1 neurons can effectively use auditory stimuli to generate place fields, and that hippocampal pyramidal neurons are not solely dependent on visual cues for the generation of place field representations based on allocentric reference frames.


Subjects
Acoustic Stimulation, Cues, Place Cells, Rats, Long-Evans, Space Perception, Animals, Male, Place Cells/physiology, Space Perception/physiology, CA1 Region, Hippocampal/physiology, CA1 Region, Hippocampal/cytology, Rats, Auditory Perception/physiology, Action Potentials/physiology, Spatial Navigation/physiology
12.
PLoS One ; 19(7): e0304027, 2024.
Article in English | MEDLINE | ID: mdl-39018315

ABSTRACT

Rhythms are the most natural cue for temporal anticipation because many sounds in our living environment have rhythmic structures. Humans have cortical mechanisms that can predict the arrival of the next sound based on rhythm and periodicity. Herein, we showed that temporal anticipation, based on the regularity of sound sequences, modulates peripheral auditory responses via efferent innervation. The medial olivocochlear reflex (MOCR), a sound-activated efferent feedback mechanism that controls outer hair cell motility, was inferred noninvasively by measuring the suppression of otoacoustic emissions (OAE). First, OAE suppression was compared between conditions in which sound sequences preceding the MOCR elicitor were presented at regular (predictable condition) or irregular (unpredictable condition) intervals. We found that OAE suppression in the predictable condition was stronger than that in the unpredictable condition. This implies that the MOCR is strengthened by the regularity of preceding sound sequences. In addition, to examine how many regularly presented preceding sounds are required to enhance the MOCR, we compared OAE suppression within stimulus sequences with 0-3 preceding tones. The OAE suppression was strengthened only when there were at least three regular preceding tones. This suggests that the MOCR was not automatically enhanced by a single stimulus presented immediately before the MOCR elicitor, but rather that it was enhanced by the regularity of the preceding sound sequences.


Subjects
Acoustic Stimulation, Cochlea, Humans, Male, Adult, Female, Young Adult, Cochlea/physiology, Olivary Nucleus/physiology, Reflex/physiology, Sound, Auditory Perception/physiology, Otoacoustic Emissions, Spontaneous/physiology, Reflex, Acoustic/physiology
13.
J Acoust Soc Am ; 156(1): 511-523, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-39013168

ABSTRACT

Echolocating bats rely on precise auditory temporal processing to detect echoes generated by calls that may be emitted at rates reaching 150-200 Hz. High call rates can introduce forward masking perceptual effects that interfere with echo detection; however, bats may have evolved specializations to prevent repetition suppression of auditory responses and facilitate detection of sounds separated by brief intervals. Recovery of the auditory brainstem response (ABR) was assessed in two species that differ in the temporal characteristics of their echolocation behaviors: Eptesicus fuscus, which uses high call rates to capture prey, and Carollia perspicillata, which uses lower call rates to avoid obstacles and forage for fruit. We observed significant species differences in the effects of forward masking on ABR wave 1, in which E. fuscus maintained comparable ABR wave 1 amplitudes when stimulated at intervals of <3 ms, whereas post-stimulus recovery in C. perspicillata required 12 ms. When the intensity of the second stimulus was reduced by 20-30 dB relative to the first, however, C. perspicillata showed greater recovery of wave 1 amplitudes. The results demonstrate that species differences in temporal resolution are established at early levels of the auditory pathway and that these differences reflect auditory processing requirements of species-specific echolocation behaviors.


Subjects
Acoustic Stimulation, Chiroptera, Echolocation, Evoked Potentials, Auditory, Brain Stem, Perceptual Masking, Species Specificity, Animals, Chiroptera/physiology, Acoustic Stimulation/methods, Evoked Potentials, Auditory, Brain Stem/physiology, Time Factors, Male, Female, Auditory Threshold, Auditory Perception/physiology
14.
PLoS Biol ; 22(6): e3002665, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38935589

ABSTRACT

Loss of synapses between spiral ganglion neurons and inner hair cells (IHC synaptopathy) leads to an auditory neuropathy called hidden hearing loss (HHL) characterized by normal auditory thresholds but reduced amplitude of sound-evoked auditory potentials. It has been proposed that synaptopathy and HHL result in poor performance in challenging hearing tasks despite a normal audiogram. However, this has only been tested in animals after exposure to noise or ototoxic drugs, which can cause deficits beyond synaptopathy. Furthermore, the impact of supernumerary synapses on auditory processing has not been evaluated. Here, we studied mice in which IHC synapse counts were increased or decreased by altering neurotrophin 3 (Ntf3) expression in IHC supporting cells. As we previously showed, postnatal Ntf3 knockdown or overexpression reduces or increases, respectively, IHC synapse density and suprathreshold amplitude of sound-evoked auditory potentials without changing cochlear thresholds. We now show that IHC synapse density does not influence the magnitude of the acoustic startle reflex or its prepulse inhibition. In contrast, gap-prepulse inhibition, a behavioral test for auditory temporal processing, is reduced or enhanced according to Ntf3 expression levels. These results indicate that IHC synaptopathy causes temporal processing deficits predicted in HHL. Furthermore, the improvement in temporal acuity achieved by increasing Ntf3 expression and synapse density suggests a therapeutic strategy for improving hearing in noise for individuals with synaptopathy of various etiologies.


Subjects
Hair Cells, Auditory, Inner, Neurotrophin 3, Synapses, Animals, Hair Cells, Auditory, Inner/metabolism, Hair Cells, Auditory, Inner/pathology, Synapses/metabolism, Synapses/physiology, Neurotrophin 3/metabolism, Neurotrophin 3/genetics, Mice, Auditory Threshold, Evoked Potentials, Auditory/physiology, Reflex, Startle/physiology, Auditory Perception/physiology, Spiral Ganglion/metabolism, Female, Male, Hidden Hearing Loss
15.
Sci Rep ; 14(1): 14895, 2024 06 28.
Article in English | MEDLINE | ID: mdl-38942761

ABSTRACT

Older adults (OAs) are typically slower and/or less accurate in forming perceptual choices relative to younger adults. Despite perceptual deficits, OAs gain from integrating information across senses, yielding multisensory benefits. However, the cognitive processes underlying these seemingly discrepant ageing effects remain unclear. To address this knowledge gap, 212 participants (18-90 years old) performed an online object categorisation paradigm, whereby age-related differences in Reaction Times (RTs) and choice accuracy between audiovisual (AV), visual (V), and auditory (A) conditions could be assessed. Whereas OAs were slower and less accurate across sensory conditions, they exhibited greater RT decreases between AV and V conditions, showing a larger multisensory benefit towards decisional speed. Hierarchical Drift Diffusion Modelling (HDDM) was fitted to participants' behaviour to probe age-related impacts on the latent multisensory decision formation processes. For OAs, HDDM demonstrated slower evidence accumulation rates across sensory conditions coupled with increased response caution for AV trials of higher difficulty. Notably, for trials of lower difficulty we found multisensory benefits in evidence accumulation that increased with age, but not for trials of higher difficulty, in which increased response caution was instead evident. Together, our findings reconcile age-related impacts on multisensory decision-making, indicating greater multisensory evidence accumulation benefits with age underlying enhanced decisional speed.
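The drift-diffusion intuition behind these results, with drift rate as evidence-accumulation speed and boundary separation as response caution, can be illustrated with a toy forward simulation. This is a didactic sketch of the latent process such models describe, not the hierarchical HDDM fitting procedure used in the study; parameter values are arbitrary.

```python
import numpy as np

def simulate_ddm(drift, boundary, n_trials=500, dt=0.002, noise=1.0, seed=0):
    """Toy drift-diffusion simulation: evidence starts at 0 and accumulates
    with the given drift rate plus Gaussian noise until it crosses
    +boundary (correct response) or -boundary (error).
    Returns (accuracy, mean decision time in seconds)."""
    rng = np.random.default_rng(seed)
    sd = noise * np.sqrt(dt)
    rts = np.empty(n_trials)
    correct = np.empty(n_trials, dtype=bool)
    for i in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < boundary:
            x += drift * dt + sd * rng.standard_normal()
            t += dt
        rts[i] = t
        correct[i] = x > 0
    return correct.mean(), rts.mean()

# Higher drift (faster evidence accumulation) -> faster and more accurate;
# wider boundary (more response caution) -> slower but more accurate.
acc_fast, rt_fast = simulate_ddm(drift=2.0, boundary=1.0)
acc_slow, rt_slow = simulate_ddm(drift=1.0, boundary=1.0)
acc_cautious, rt_cautious = simulate_ddm(drift=1.0, boundary=1.5, seed=1)
```

The simulation reproduces the qualitative trade-offs discussed above: a multisensory benefit in drift rate speeds responses without costing accuracy, while increased caution trades speed for accuracy.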


Subjects
Aging, Auditory Perception, Decision Making, Reaction Time, Visual Perception, Humans, Aged, Adult, Middle Aged, Female, Male, Aged, 80 and over, Decision Making/physiology, Adolescent, Reaction Time/physiology, Young Adult, Auditory Perception/physiology, Aging/physiology, Aging/psychology, Visual Perception/physiology, Photic Stimulation, Acoustic Stimulation
16.
Behav Brain Funct ; 20(1): 17, 2024 Jun 28.
Article in English | MEDLINE | ID: mdl-38943215

ABSTRACT

BACKGROUND: Left-handedness is a condition that reverses the typical left cerebral dominance of motor control to an atypical right dominance. The impact of this distinct control, and of its associated neuroanatomical peculiarities, on other cognitive functions such as music processing or playing a musical instrument remains unexplored. Previous studies in right-handed populations have linked musicianship to larger volumes in the (right) auditory cortex and the (right) arcuate fasciculus. RESULTS: In our study, we reveal that left-handed musicians (n = 55), in comparison to left-handed non-musicians (n = 75), exhibit a larger gray matter volume in both the left and right Heschl's gyrus, critical for auditory processing. They also present a higher number of streamlines across the anterior segment of the right arcuate fasciculus (AF). Importantly, atypical hemispheric lateralization of speech (notably prevalent among left-handers) was associated with a rightward asymmetry of the AF, in contrast to the leftward asymmetry exhibited by those with typical lateralization. CONCLUSIONS: These findings suggest that left-handed musicians share similar neuroanatomical characteristics with their right-handed counterparts. However, atypical lateralization of speech might potentiate the right audiomotor pathway, which has been associated with musicianship and better musical skills. This may help explain why musicians are more prevalent among left-handers and shed light on their cognitive advantages.


Subjects
Functional Laterality, Music, Humans, Male, Functional Laterality/physiology, Female, Adult, Young Adult, Auditory Cortex/anatomy & histology, Auditory Cortex/physiology, Magnetic Resonance Imaging, Gray Matter/anatomy & histology, Gray Matter/diagnostic imaging, Auditory Perception/physiology, Brain/anatomy & histology, Brain/physiology
17.
Neuroreport ; 35(11): 721-728, 2024 Aug 07.
Article in English | MEDLINE | ID: mdl-38874941

ABSTRACT

Attention is a cognitive process that involves focusing mental resources on specific stimuli and plays a fundamental role in perception, learning, memory, and decision-making. Neurofeedback (NF) is a useful technique for improving attention: it provides real-time feedback on brain activity in the form of visual or auditory cues, allowing users to learn to self-regulate their cognitive processes. This study compares the effectiveness of different cues in NF training for attention enhancement through a multimodal approach. We conducted neurological (quantitative electroencephalography), neuropsychological (Mindfulness Attention Awareness Scale-15), and behavioral (Stroop test) assessments before and after NF training on 36 healthy participants, divided into audiovisual (G1) and visual (G2) groups. Twelve NF training sessions were conducted on alternate days, each consisting of five subsessions, with pre- and post-NF baseline electroencephalographic evaluations using power spectral density. The pre-NF baseline was used to set the threshold for each NF session based on beta-band power. Two-way analysis of variance revealed a significant long-term effect of group (G1/G2) and state (before/after NF) on the behavioral and neuropsychological assessments, with G1 showing significantly higher Mindfulness Attention Awareness Scale-15 scores, higher Stroop scores, and lower Stroop reaction times for interaction effects. Moreover, unpaired t-tests comparing voxel-wise standardized low-resolution brain electromagnetic tomography images revealed higher activity in Brodmann area 40 for G1 as a result of NF training. Neurological assessments showed that G1 had greater improvement in immediate, short-, and long-term attention. The findings of this study offer a guide for the development of NF training protocols aimed at enhancing attention effectively.


Subjects
Attention, Electroencephalography, Neurofeedback, Humans, Neurofeedback/methods, Attention/physiology, Male, Female, Adult, Young Adult, Electroencephalography/methods, Brain/physiology, Brain/diagnostic imaging, Photic Stimulation/methods, Auditory Perception/physiology
18.
J Neurophysiol ; 132(1): 130-133, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38863428

ABSTRACT

Recent functional magnetic resonance imaging (fMRI) experiments revealed similar neural representations across different types of two-dimensional (2-D) visual stimuli; however, real three-dimensional (3-D) objects affording action differentially affect neural activation and behavioral results relative to 2-D objects. Recruitment of multiple sensory regions during unisensory (visual, haptic, and auditory) object shape tasks suggests that shape representation may be modality invariant. This mini-review explores the overlapping neural regions involved in object shape representation across 2-D, 3-D, visual, and haptic experiments.


Subjects
Magnetic Resonance Imaging, Humans, Visual Perception/physiology, Animals, Touch Perception/physiology, Auditory Perception/physiology, Brain/physiology, Brain/diagnostic imaging, Form Perception/physiology
19.
Proc Natl Acad Sci U S A ; 121(26): e2318361121, 2024 Jun 25.
Article in English | MEDLINE | ID: mdl-38889147

ABSTRACT

When listeners hear a voice, they rapidly form a complex first impression of who the person behind that voice might be. We characterize how these multivariate first impressions from voices emerge over time across different levels of abstraction using electroencephalography and representational similarity analysis. We find that for eight perceived physical (gender, age, and health), trait (attractiveness, dominance, and trustworthiness), and social characteristics (educatedness and professionalism), representations emerge early (~80 ms after stimulus onset), with voice acoustics contributing to those representations between ~100 ms and 400 ms. While impressions of person characteristics are highly correlated, we find evidence for highly abstracted, independent representations of individual person characteristics. These abstracted representations emerge gradually over time. That is, representations of physical characteristics (age, gender) arise early (from ~120 ms), while representations of some trait and social characteristics emerge later (~360 ms onward). The findings align with recent theoretical models and shed light on the computations underpinning person perception from voices.
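The core operation in representational similarity analysis, as used in studies like this, is to build a representational dissimilarity matrix (RDM) for each data source and then correlate RDMs rather than raw patterns. A minimal sketch with synthetic data (the dimensions and the model RDM are illustrative assumptions, not the study's actual features):

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(patterns):
    """Representational dissimilarity matrix in condensed form:
    pairwise correlation distance between condition patterns."""
    return pdist(patterns, metric="correlation")

# Synthetic example: 8 conditions (e.g. person characteristics)
# x 32 features (e.g. EEG channels at one time point).
rng = np.random.default_rng(1)
neural_patterns = rng.standard_normal((8, 32))
model_patterns = rng.standard_normal((8, 32))

# RSA compares representational geometries, not raw activations:
# rank-correlate the two condensed RDMs.
rho, _ = spearmanr(rdm(neural_patterns), rdm(model_patterns))
```

Repeating this correlation at each EEG time point yields the kind of time course reported in the abstract, showing when each characteristic's representation emerges.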


Subjects
Auditory Perception, Brain, Electroencephalography, Voice, Humans, Male, Female, Voice/physiology, Adult, Brain/physiology, Auditory Perception/physiology, Young Adult, Social Perception
20.
J Neural Eng ; 21(4)2024 Jul 16.
Article in English | MEDLINE | ID: mdl-38936398

ABSTRACT

Objective. Measures of functional connectivity (FC) can elucidate which cortical regions work together to complete a variety of behavioral tasks. This study's primary objective was to expand a previously published model of measuring FC to include multiple subjects and several regions of interest. While FC has been more extensively investigated in vision and other sensorimotor tasks, it is not as well understood in audition. The secondary objective of this study was to investigate how auditory regions are functionally connected to other cortical regions when attention is directed to different distinct auditory stimuli. Approach. This study implements a linear dynamic system (LDS) to measure the structured time-lagged dependence across several cortical regions in order to estimate their FC during a dual-stream auditory attention task. Results. The model's output shows consistent functionally connected regions across different listening conditions, indicative of an auditory attention network that engages regardless of endogenous switching of attention or different auditory cues being attended. Significance. The LDS used in this study implements a multivariate autoregression to infer FC across cortical regions during an auditory attention task. This study shows how a first-order autoregressive function can reliably measure functional connectivity from M/EEG data. Additionally, the study shows how auditory regions engage with the supramodal attention network outlined in the visual attention literature.
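A first-order multivariate autoregression of the kind this abstract describes models each region's current activity as a linear function of all regions' activity one sample earlier; the off-diagonal coefficients index time-lagged dependence between regions. A minimal least-squares sketch on synthetic two-region data (the coupling values and noise level are illustrative, not from the study):

```python
import numpy as np

def fit_var1(X):
    """Least-squares fit of X[t] = A @ X[t-1] + noise.
    X: array of shape (T, n_regions).
    A[i, j] indexes the lagged influence of region j on region i."""
    past, present = X[:-1], X[1:]
    B, *_ = np.linalg.lstsq(past, present, rcond=None)
    return B.T

# Simulate a VAR(1) process with a known directed coupling
# from region 1 to region 0 (coefficient 0.5).
rng = np.random.default_rng(2)
A_true = np.array([[0.5, 0.5],
                   [0.0, 0.5]])
T = 2000
X = np.zeros((T, 2))
for t in range(1, T):
    X[t] = A_true @ X[t - 1] + 0.1 * rng.standard_normal(2)

A_hat = fit_var1(X)  # should recover A_true up to noise
```

The asymmetry of the recovered matrix (a nonzero `A_hat[0, 1]` but near-zero `A_hat[1, 0]`) is what distinguishes a lagged, directed dependence from mere instantaneous correlation.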


Subjects
Attention, Electroencephalography, Humans, Electroencephalography/methods, Male, Female, Attention/physiology, Adult, Acoustic Stimulation/methods, Young Adult, Linear Models, Auditory Perception/physiology, Auditory Cortex/physiology, Magnetoencephalography/methods, Nerve Net/physiology