Results 1 - 20 of 33,152
1.
Hum Brain Mapp ; 45(14): e70035, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39360580

ABSTRACT

The processing of auditory stimuli which are structured in time is thought to involve the arcuate fasciculus, the white matter tract which connects the temporal cortex and the inferior frontal gyrus. Research has indicated effects of both musical and language experience on the structural characteristics of the arcuate fasciculus. Here, we investigated in a sample of n = 84 young adults whether continuous conceptualizations of musical and multilingual experience related to structural characteristics of the arcuate fasciculus, measured using diffusion tensor imaging. Probabilistic tractography was used to identify the dorsal and ventral parts of the white matter tract. Linear regressions indicated that different aspects of musical sophistication related to the arcuate fasciculus' volume (emotional engagement with music), volumetric asymmetry (musical training and music perceptual abilities), and fractional anisotropy (music perceptual abilities). Our conceptualization of multilingual experience, accounting for participants' proficiency in reading, writing, understanding, and speaking different languages, was not related to the structural characteristics of the arcuate fasciculus. We discuss our results in the context of other research on hemispheric specializations and a dual-stream model of auditory processing.


Subjects
Auditory Perception, Diffusion Tensor Imaging, Multilingualism, Music, White Matter, Humans, Male, Female, Young Adult, Adult, White Matter/diagnostic imaging, White Matter/physiology, White Matter/anatomy & histology, Auditory Perception/physiology, Temporal Lobe/diagnostic imaging, Temporal Lobe/physiology, Temporal Lobe/anatomy & histology, Neural Pathways/diagnostic imaging, Neural Pathways/physiology, Neural Pathways/anatomy & histology, Adolescent
2.
PLoS Biol ; 22(10): e3002789, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39352912

ABSTRACT

Within species, vocal and auditory systems presumably coevolved to converge on a critical temporal acoustic structure that can be best produced and perceived. While dogs cannot produce articulated sounds, they respond to speech, raising the question as to whether this heterospecific receptive ability could be shaped by exposure to speech or remains bounded by their own sensorimotor capacity. Using acoustic analyses of dog vocalisations, we show that their main production rhythm is slower than the dominant (syllabic) speech rate, and that human dog-directed speech falls halfway in between. Comparative exploration of neural (electroencephalography) and behavioural responses to speech reveals that comprehension in dogs relies on a slower speech rhythm tracking (delta) than humans' (theta), even though dogs are equally sensitive to speech content and prosody. Thus, the dog audio-motor tuning differs from humans', and we hypothesise that humans may adjust their speech rate to this shared temporal channel as a means to improve communication efficacy.


Subjects
Speech, Animal Vocalization, Animals, Dogs, Humans, Animal Vocalization/physiology, Speech/physiology, Male, Female, Electroencephalography, Auditory Perception/physiology, Adult, Human-Animal Interaction, Acoustic Stimulation, Speech Perception/physiology
3.
Sci Rep ; 14(1): 22764, 2024 10 01.
Article in English | MEDLINE | ID: mdl-39354014

ABSTRACT

Listening to conversing talkers in quiet environments and remembering the content is a common activity. However, research on the cognitive demands involved is limited. This study investigates the relevance of individuals' cognitive functions for listeners' memory of two-talker conversations and their listening effort in quiet listening settings. A dual-task paradigm was employed to explore memory of conversational content and listening effort while analyzing the role of participants' (n = 29) working memory capacity (measured through the operation span task), attention (Frankfurt attention inventory 2), and information-processing speed (trail making test). In the primary task, participants listened to a conversation between a male and female talker and answered content-related questions. The two talkers' audio signals were presented through headphones, either spatially separated (±60°) or co-located (0°). Participants concurrently performed a vibrotactile pattern recognition task as a secondary task to measure listening effort. Results indicated that attention and processing speed were related to memory of conversational content and that all three cognitive functions were related to listening effort. Memory performance and listening effort were similar for spatially separated and co-located talkers when considering the psychometric measures. This research offers valuable insights into cognitive processes during two-talker conversations in quiet settings.


Subjects
Attention, Cognition, Speech Perception, Humans, Male, Female, Cognition/physiology, Adult, Attention/physiology, Young Adult, Speech Perception/physiology, Auditory Perception/physiology, Short-Term Memory/physiology, Memory/physiology
4.
Cereb Cortex ; 34(9)2024 Sep 03.
Article in English | MEDLINE | ID: mdl-39270675

ABSTRACT

The human auditory system includes discrete cortical patches and selective regions for processing voice information, including emotional prosody. Although behavioral evidence indicates individuals with autism spectrum disorder (ASD) have difficulties in recognizing emotional prosody, it remains understudied whether and how localized voice patches (VPs) and other voice-sensitive regions are functionally altered in processing prosody. This fMRI study investigated neural responses to prosodic voices in 25 adult males with ASD and 33 controls using voices of anger, sadness, and happiness with varying degrees of emotion. We used a functional region-of-interest analysis with an independent voice localizer to identify multiple VPs from combined ASD and control data. We observed a general response reduction to prosodic voices in specific VPs of left posterior temporal VP (TVP) and right middle TVP. Reduced cortical responses in right middle TVP were consistently correlated with the severity of autistic symptoms for all examined emotional prosodies. Moreover, representation similarity analysis revealed the reduced effect of emotional intensity in multivoxel activation patterns in left anterior superior temporal cortex only for sad prosody. These results indicate reduced response magnitudes to voice prosodies in specific TVPs and altered emotion intensity-dependent multivoxel activation patterns in adults with ASD, potentially underlying their socio-communicative difficulties.


Subjects
Autism Spectrum Disorder, Emotions, Magnetic Resonance Imaging, Temporal Lobe, Voice, Humans, Male, Autism Spectrum Disorder/physiopathology, Autism Spectrum Disorder/diagnostic imaging, Autism Spectrum Disorder/psychology, Temporal Lobe/physiopathology, Temporal Lobe/diagnostic imaging, Adult, Emotions/physiology, Young Adult, Speech Perception/physiology, Brain Mapping/methods, Acoustic Stimulation, Auditory Perception/physiology
5.
Curr Biol ; 34(17): R831-R833, 2024 Sep 09.
Article in English | MEDLINE | ID: mdl-39255769

ABSTRACT

'Jump scares' are particularly robust when visuals are paired with coherent sound. A new study demonstrates that connectivity between the superior colliculus and parabigeminal nucleus generates multimodal enhancement of visually triggered defensiveness, revealing a novel multisensory threat augmentation mechanism.


Subjects
Superior Colliculi, Animals, Superior Colliculi/physiology, Mesencephalon/physiology, Visual Perception/physiology, Auditory Perception/physiology, Humans
6.
Sci Rep ; 14(1): 20994, 2024 09 09.
Article in English | MEDLINE | ID: mdl-39251659

ABSTRACT

Sound recognition is effortless for humans but poses a significant challenge for artificial hearing systems. Deep neural networks (DNNs), especially convolutional neural networks (CNNs), have recently surpassed traditional machine learning in sound classification. However, current DNNs map sounds to labels using binary categorical variables, neglecting the semantic relations between labels. Cognitive neuroscience research suggests that human listeners exploit such semantic information besides acoustic cues. Hence, our hypothesis is that incorporating semantic information improves DNNs' sound recognition performance, emulating human behaviour. In our approach, sound recognition is framed as a regression problem, with CNNs trained to map spectrograms to continuous semantic representations from NLP models (Word2Vec, BERT, and CLAP text encoder). Two DNN types were trained: semDNN with continuous embeddings and catDNN with categorical labels, both with a dataset extracted from a collection of 388,211 sounds enriched with semantic descriptions. Evaluations across four external datasets confirmed the superiority of semantic labeling from semDNN compared to catDNN, preserving higher-level relations. Importantly, an analysis of human similarity ratings for natural sounds showed that semDNN approximated human listener behaviour better than catDNN, other DNNs, and NLP models. Our work contributes to understanding the role of semantics in sound recognition, bridging the gap between artificial systems and human auditory perception.
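The regression-to-embeddings idea described in this abstract can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the authors' code: the toy labels, the 3-D "embedding space", and the `decode` helper are hypothetical stand-ins for real Word2Vec/BERT/CLAP embeddings and a trained CNN.

```python
import numpy as np

# Toy "embedding space": 4 sound labels as unit vectors in 3-D.
# In the study these would be 300+-dimensional NLP embeddings.
label_names = ["dog", "siren", "rain", "guitar"]
label_emb = np.array([
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
    [0.7, 0.7, 0.0],
])
label_emb /= np.linalg.norm(label_emb, axis=1, keepdims=True)

def decode(pred: np.ndarray) -> str:
    """Map a CNN's predicted embedding to the nearest label by cosine
    similarity -- the decoding step that replaces a softmax over labels."""
    pred = pred / np.linalg.norm(pred)
    sims = label_emb @ pred
    return label_names[int(np.argmax(sims))]

# A noisy prediction near "dog" still decodes correctly, because the
# regression target space preserves semantic neighborhoods.
print(decode(np.array([0.9, 0.1, 0.05])))  # -> dog
```

The point of the regression framing is visible here: an imperfect prediction degrades gracefully toward semantically related labels, whereas one-hot classification treats all confusions as equally wrong.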


Subjects
Auditory Perception, Natural Language Processing, Neural Networks (Computer), Semantics, Humans, Auditory Perception/physiology, Deep Learning, Sound
7.
Sci Rep ; 14(1): 20923, 2024 09 09.
Article in English | MEDLINE | ID: mdl-39251764

ABSTRACT

Does congruence between auditory and visual modalities affect aesthetic experience? While cross-modal correspondences between vision and hearing are well-documented, previous studies show conflicting results regarding whether audiovisual correspondence affects subjective aesthetic experience. Here, in collaboration with the Kentler International Drawing Space (NYC, USA), we depart from previous research by using music specifically composed to pair with visual art in the professionally-curated Music as Image and Metaphor exhibition. Our pre-registered online experiment consisted of 4 conditions: Audio, Visual, Audio-Visual-Intended (artist-intended pairing of art/music), and Audio-Visual-Random (random shuffling). Participants (N = 201) were presented with 16 pieces and could click to proceed to the next piece whenever they liked. We used time spent as an implicit index of aesthetic interest. Additionally, after each piece, participants were asked about their subjective experience (e.g., feeling moved). We found that participants spent significantly more time with Audio pieces, followed by Audio-Visual, followed by Visual pieces; however, they felt most moved in the Audio-Visual (bi-modal) conditions. Ratings of audiovisual correspondence were significantly higher for the Audio-Visual-Intended than for the Audio-Visual-Random condition; interestingly, though, there were no significant differences between the intended and random conditions on any other subjective rating scale, or for time spent. Collectively, these results call into question the relationship between cross-modal correspondence and aesthetic appreciation. Additionally, the results complicate the use of time spent as an implicit measure of aesthetic experience.


Subjects
Auditory Perception, Esthetics, Music, Visual Perception, Humans, Music/psychology, Female, Esthetics/psychology, Male, Adult, Visual Perception/physiology, Auditory Perception/physiology, Young Adult, Art, Photic Stimulation, Acoustic Stimulation, Adolescent
8.
Sci Rep ; 14(1): 20492, 2024 09 03.
Article in English | MEDLINE | ID: mdl-39242623

ABSTRACT

A social individual needs to manage the complex information in its environment efficiently, relative to its own goals, in order to obtain relevant information. This paper presents a neural architecture aiming to reproduce in robots the attention mechanisms (alerting/orienting/selecting) that humans deploy efficiently during audiovisual tasks. We evaluated the system based on its ability to identify relevant sources of information on the faces of subjects emitting vowels. We propose a developmental model of audio-visual attention (MAVA) combining Hebbian learning and a competition between saliency maps based on visual movement and audio energy. MAVA effectively combines bottom-up and top-down information to orient the system toward pertinent areas. The system has several advantages, including online and autonomous learning abilities, low computation time and robustness to environmental noise. MAVA outperforms other artificial models for detecting speech sources under various noise conditions.
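The core mechanism in this abstract, a competition between a visual-motion saliency map and an audio-energy saliency map, can be sketched as a weighted winner-take-all. This is an illustrative assumption of how such a fusion might work, not MAVA's learned (Hebbian) weighting; the maps, grid size, and weights are all hypothetical.

```python
import numpy as np

def fuse_and_orient(visual_sal, audio_sal, w_v=0.5, w_a=0.5):
    """Normalize each saliency map to [0, 1], take a weighted sum, and
    return the (row, col) of the winning location (winner-take-all)."""
    def norm(m):
        m = m - m.min()
        return m / m.max() if m.max() > 0 else m
    fused = w_v * norm(visual_sal) + w_a * norm(audio_sal)
    return tuple(int(i) for i in np.unravel_index(np.argmax(fused), fused.shape))

# Visual motion peaks at (1, 1); audio energy peaks at (1, 2). With
# equal weights, the location supported by strong evidence in BOTH
# modalities wins the competition.
v = np.array([[0.0, 0.1, 0.0],
              [0.1, 0.9, 0.6],
              [0.0, 0.2, 0.1]])
a = np.array([[0.0, 0.0, 0.2],
              [0.1, 0.5, 1.0],
              [0.0, 0.1, 0.3]])
print(fuse_and_orient(v, a))  # -> (1, 2)
```

Fusing before the argmax, rather than picking a winner per modality, is what lets a moderately salient but multimodally supported region beat a unimodal peak, which is the behaviour a speech-source detector wants in noise.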


Subjects
Attention, Robotics, Humans, Robotics/methods, Attention/physiology, Infant, Learning/physiology, Visual Perception/physiology, Language Development, Auditory Perception/physiology, Language
9.
Commun Biol ; 7(1): 1125, 2024 Sep 12.
Article in English | MEDLINE | ID: mdl-39266696

ABSTRACT

During continuous tasks, humans show spontaneous fluctuations in performance, putatively caused by varying attentional resources allocated to process external information. If neural resources are used to process other, presumably "internal" information, sensory input can be missed and explain an apparent dichotomy of "internal" versus "external" attention. In the current study, we extract presumed neural signatures of these attentional modes in human electroencephalography (EEG): neural entrainment and α-oscillations (~10-Hz), linked to the processing and suppression of sensory information, respectively. We test whether they exhibit structured fluctuations over time, while listeners attend to an ecologically relevant stimulus, like speech, and complete a task that requires full and continuous attention. Results show an antagonistic relation between neural entrainment to speech and spontaneous α-oscillations in two distinct brain networks-one specialized in the processing of external information, the other reminiscent of the dorsal attention network. These opposing neural modes undergo slow, periodic fluctuations around ~0.07 Hz and are related to the detection of auditory targets. Our study might have tapped into a general attentional mechanism that is conserved across species and has important implications for situations in which sustained attention to sensory information is critical.
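A slow periodic fluctuation like the ~0.07 Hz rhythm reported above can be recovered from a long recording with a simple amplitude spectrum. The sketch below uses a synthetic signal (an assumed 1 Hz sampling rate, a 10-minute recording, and additive noise), not the study's EEG data or analysis pipeline.

```python
import numpy as np

fs = 1.0                       # assumed sampling rate: one sample per second
t = np.arange(0, 600, 1 / fs)  # a 10-minute synthetic "attention" time series
rng = np.random.default_rng(1)
signal = np.sin(2 * np.pi * 0.07 * t) + 0.5 * rng.standard_normal(t.size)

# Amplitude spectrum; a 0.07 Hz oscillation needs long recordings,
# since the frequency resolution here is only 1/600 s ~ 0.0017 Hz.
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
amp = np.abs(np.fft.rfft(signal))

peak_freq = freqs[np.argmax(amp[1:]) + 1]  # skip the DC bin
print(round(float(peak_freq), 3))  # -> 0.07
```

The comment about recording length is the practical point: detecting fluctuations this slow requires minutes of continuous data, which is why sustained-attention tasks are used.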


Subjects
Attention, Auditory Perception, Electroencephalography, Humans, Attention/physiology, Female, Male, Adult, Auditory Perception/physiology, Young Adult, Acoustic Stimulation, Brain/physiology
10.
J Acoust Soc Am ; 156(3): 1877-1886, 2024 Sep 01.
Article in English | MEDLINE | ID: mdl-39297650

ABSTRACT

In listening tests of noise annoyance, subjects act as "measuring instruments". Because subjects' psychological scales vary, the annoyance reported by different subjects for the same noise sample, or by the same subject for the same noise sample in different experimental groups, differs. To unify subjects' psychological scales and accurately determine perceived annoyance, the optimal annoyance data calibration method must be identified. Based on the master scale transformation, three calibration methods were explored: individual annoyance data calibration, sound sample annoyance data calibration, and a combination of both. The effectiveness of the three methods for unifying subjects' psychological scales was assessed. Results showed that individual annoyance data calibration was the most effective of the three. After calibration, the difference between annoyance induced by the same sound sample in any two experimental sound sample groups declined significantly, and the determination coefficient R2 of the curve fitted between psychoacoustic annoyance and perceived annoyance increased significantly. By applying together the listening test and annoyance data calibration methods suggested in this study, subjects' psychological scales can be unified as far as possible.
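The idea of individual calibration onto a common scale can be illustrated with a simple linear rescaling: z-score each subject's ratings, then map them to a shared mean and spread. This is a standard approach and an assumption on our part; it is not necessarily the exact master-scale transformation used in the study.

```python
import statistics

def calibrate(ratings, master_mean, master_sd):
    """Map one subject's ratings onto a common (master) scale:
    z-score within subject, then rescale to the master mean/SD.
    This removes differences in how individuals use the rating range."""
    mu = statistics.mean(ratings)
    sd = statistics.pstdev(ratings)
    return [master_mean + master_sd * (r - mu) / sd for r in ratings]

# Two subjects rate the same 3 noise samples; one uses a lower part of
# the scale, but their orderings agree. After calibration they coincide.
s1 = [2.0, 4.0, 6.0]   # conservative rater
s2 = [4.0, 6.0, 8.0]   # same ordering, shifted personal scale
c1 = calibrate(s1, master_mean=5.0, master_sd=1.0)
c2 = calibrate(s2, master_mean=5.0, master_sd=1.0)
print(c1 == c2)  # -> True
```

Note the limitation of this toy transform: it only removes linear (offset and range) differences between subjects, which is why the study compares it against sample-based and combined calibration schemes.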


Subjects
Acoustic Stimulation, Noise, Psychoacoustics, Humans, Female, Male, Calibration, Noise/adverse effects, Adult, Young Adult, Auditory Perception
11.
Proc Natl Acad Sci U S A ; 121(40): e2405615121, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39312661

ABSTRACT

Stimulus-specific adaptation is a hallmark of sensory processing in which a repeated stimulus results in diminished successive neuronal responses, but a deviant stimulus will still elicit robust responses from the same neurons. Recent work has established that synaptically released zinc is an endogenous mechanism that shapes neuronal responses to sounds in the auditory cortex. Here, to understand the contributions of synaptic zinc to deviance detection of specific neurons, we performed wide-field and 2-photon calcium imaging of multiple classes of cortical neurons. We find that intratelencephalic (IT) neurons in both layers 2/3 and 5 as well as corticocollicular neurons in layer 5 all demonstrate deviance detection; however, we find a specific enhancement of deviance detection in corticocollicular neurons that arises from ZnT3-dependent synaptic zinc in layer 2/3 IT neurons. Genetic deletion of ZnT3 from layer 2/3 IT neurons removes the enhancing effects of synaptic zinc on corticocollicular neuron deviance detection and results in poorer acuity of detecting deviant sounds by behaving mice.


Subjects
Auditory Cortex, Neurons, Synapses, Zinc, Animals, Zinc/metabolism, Auditory Cortex/metabolism, Auditory Cortex/physiology, Mice, Synapses/metabolism, Synapses/physiology, Neurons/metabolism, Neurons/physiology, Cation Transport Proteins/metabolism, Cation Transport Proteins/genetics, Acoustic Stimulation, Knockout Mice, Auditory Perception/physiology, C57BL Inbred Mice, Male
12.
J Acoust Soc Am ; 156(3): 1929-1941, 2024 Sep 01.
Article in English | MEDLINE | ID: mdl-39315887

ABSTRACT

Electric drones serve diverse functions, including delivery and surveillance. Nonetheless, they face significant challenges due to their annoying noise emissions. To address this issue, a sound database was created from experiments conducted on a hover test bench and from real flights operated indoors. These experiments covered a wide range of parameter variations and operational conditions. A global digital user study involving 578 participants was conducted to assess drone noise annoyance. Furthermore, correlations between annoyance levels, psychoacoustic metrics, sociocultural factors, and technical/operational parameters were analyzed. The effects of implementing acoustic optimization modifications on the drone's performance were quantified with a conceptual design tool. The findings indicate that reducing loudness, sharpness, tonality, and roughness or fluctuation strength reduced annoyance. The relative importance of the psychoacoustic metrics depended on the specific drone model. Sociocultural factors did not affect annoyance. Technical and operational parameters did, especially blade tip speed: a 20% reduction in tip speed proved promising in the design tool, maintaining acceptable drone performance while reducing annoyance. A multi-disciplinary optimization is recommended to maintain operational efficiency. Last, psychoacoustic metrics were validated as an effective measure to evaluate a design solution.


Subjects
Aircraft, Transportation Noise, Psychoacoustics, Humans, Male, Transportation Noise/adverse effects, Female, Adult, Young Adult, Acoustics, Middle Aged, Equipment Design, Auditory Perception, Loudness Perception
13.
Curr Biol ; 34(18): R866-R868, 2024 Sep 23.
Article in English | MEDLINE | ID: mdl-39317159

ABSTRACT

Mosquitoes are notorious for swarming. A new study shows that multi-sensory integration, in particular the way that male mosquitoes' behavioural responses to visual stimuli are modulated by female flight tones, plays a key part in this swarming behaviour.


Subjects
Culicidae, Animals, Female, Male, Culicidae/physiology, Visual Perception/physiology, Animal Flight/physiology, Auditory Perception/physiology
14.
Cereb Cortex ; 34(9)2024 Sep 03.
Article in English | MEDLINE | ID: mdl-39300609

ABSTRACT

Audiovisual (AV) interaction has been shown in many studies of auditory cortex. However, the underlying processes and circuits are unclear because few studies have used methods that delineate the timing and laminar distribution of net excitatory and inhibitory processes within areas, much less across cortical levels. This study examined laminar profiles of neuronal activity in auditory core (AC) and parabelt (PB) cortices recorded from macaques during active discrimination of conspecific faces and vocalizations. We found modulation of multi-unit activity (MUA) in response to isolated visual stimulation, characterized by a brief deep MUA spike, putatively in white matter, followed by mid-layer MUA suppression in core auditory cortex; the later suppressive event had clear current source density concomitants, while the earlier MUA spike did not. We observed a similar facilitation-suppression sequence in the PB, with later onset latency. In combined AV stimulation, there was moderate reduction of responses to sound during the visual-evoked MUA suppression interval in both AC and PB. These data suggest a common sequence of afferent spikes, followed by synaptic inhibition; however, differences in timing and laminar location may reflect distinct visual projections to AC and PB.


Subjects
Auditory Cortex, Photic Stimulation, Animals, Auditory Cortex/physiology, Male, Photic Stimulation/methods, Acoustic Stimulation/methods, Auditory Perception/physiology, Visual Perception/physiology, Macaca mulatta, Action Potentials/physiology, Neurons/physiology, Female, Animal Vocalization/physiology
15.
Elife ; 132024 Sep 20.
Article in English | MEDLINE | ID: mdl-39302291

ABSTRACT

Emotional responsiveness in neonates, particularly their ability to discern vocal emotions, plays an evolutionarily adaptive role in human communication and adaptive behaviors. The developmental trajectory of emotional sensitivity in neonates is crucial for understanding the foundations of early social-emotional functioning. However, the precise onset of this sensitivity and its relationship with gestational age (GA) remain subjects of investigation. In a study involving 120 healthy neonates categorized into six groups based on their GA (ranging from 35 to 40 weeks), we explored their emotional responses to vocal stimuli. These stimuli encompassed disyllables with happy and neutral prosodies, alongside acoustically matched nonvocal control sounds. The assessments occurred during natural sleep states using the odd-ball paradigm and event-related potentials. The results reveal a distinct developmental change at 37 weeks GA, marking the point at which neonates exhibit heightened perceptual acuity for emotional vocal expressions. This newfound ability is substantiated by the presence of the mismatch response, akin to an initial form of adult mismatch negativity, elicited in response to positive emotional vocal prosody. Notably, this perceptual shift's specificity becomes evident when no such discrimination is observed in acoustically matched control sounds. Neonates born before 37 weeks GA do not display this level of discrimination ability. This developmental change has important implications for our understanding of early social-emotional development, highlighting the role of gestational age in shaping early perceptual abilities. Moreover, while these findings introduce the potential for a valuable screening tool for conditions like autism, characterized by atypical social-emotional functions, it is important to note that the current data are not yet robust enough to fully support this application. This study makes a substantial contribution to the broader field of developmental neuroscience and holds promise for future research on early intervention in neurodevelopmental disorders.


Subjects
Emotions, Gestational Age, Humans, Newborn Infant, Emotions/physiology, Female, Male, Evoked Potentials/physiology, Acoustic Stimulation, Voice/physiology, Auditory Perception/physiology
16.
Curr Biol ; 34(17): 4062-4070.e7, 2024 Sep 09.
Article in English | MEDLINE | ID: mdl-39255755

ABSTRACT

Some species have evolved the ability to use the sense of hearing to modify existing vocalizations, or even create new ones, which enlarges their repertoires and results in complex communication systems [1]. This ability corresponds to various forms of vocal production learning that are all possessed by humans and independently displayed by distantly related vertebrates [1-7]. Among mammals, a few species, including the Egyptian fruit bat [8-10], are thought to possess such vocal production learning abilities [7]. Yet the necessity of an intact auditory system for the development of the Egyptian fruit bat's typical vocal repertoire has not been tested. Furthermore, a systematic causal examination of learned and innate aspects of the entire repertoire has never been performed in any vocal learner. Here we addressed these gaps by eliminating pups' sense of hearing at birth and assessing its effects on vocal production in adulthood. The deafening treatment enabled us to both causally test these bats' vocal learning ability and discern learned from innate aspects of their vocalizations. Leveraging wireless individual audio recordings from freely interacting adults, we show that a subset of the Egyptian fruit bat vocal repertoire necessitates auditory feedback. Intriguingly, these affected vocalizations belong to different acoustic groups in the vocal repertoire of males and females. These findings open the possibilities for targeted studies of the mammalian neural circuits that enable sexually dimorphic forms of vocal learning.


Subjects
Chiroptera, Learning, Animal Vocalization, Animals, Chiroptera/physiology, Animal Vocalization/physiology, Learning/physiology, Female, Male, Sensory Feedback/physiology, Auditory Perception/physiology, Hearing/physiology
17.
Elife ; 122024 Sep 13.
Article in English | MEDLINE | ID: mdl-39268817

ABSTRACT

Perceptual systems heavily rely on prior knowledge and predictions to make sense of the environment. Predictions can originate from multiple sources of information, including contextual short-term priors, based on isolated temporal situations, and context-independent long-term priors, arising from extended exposure to statistical regularities. While the effects of short-term predictions on auditory perception have been well-documented, how long-term predictions shape early auditory processing is poorly understood. To address this, we recorded magnetoencephalography data from native speakers of two languages with different word orders (Spanish: functor-initial vs Basque: functor-final) listening to simple sequences of binary sounds alternating in duration with occasional omissions. We hypothesized that, together with contextual transition probabilities, the auditory system uses the characteristic prosodic cues (duration) associated with the native language's word order as an internal model to generate long-term predictions about incoming non-linguistic sounds. Consistent with our hypothesis, we found that the amplitude of the mismatch negativity elicited by sound omissions varied orthogonally depending on the speaker's linguistic background and was most pronounced in the left auditory cortex. Importantly, listening to binary sounds alternating in pitch instead of duration did not yield group differences, confirming that the above results were driven by the hypothesized long-term 'duration' prior. These findings show that experience with a given language can shape a fundamental aspect of human perception - the neural processing of rhythmic sounds - and provide direct evidence for a long-term predictive coding system in the auditory cortex that uses auditory schemes learned over a lifetime to process incoming sound sequences.


Subjects
Auditory Cortex, Auditory Perception, Language, Magnetoencephalography, Humans, Female, Male, Adult, Auditory Perception/physiology, Young Adult, Auditory Cortex/physiology, Acoustic Stimulation, Sound, Speech Perception/physiology
18.
eNeuro ; 11(9)2024 Sep.
Article in English | MEDLINE | ID: mdl-39231633

ABSTRACT

Previous physiological and psychophysical studies have explored whether feedback to the cochlea from the efferent system influences forward masking. The present work proposes that the limited growth-of-masking (GOM) observed in auditory nerve (AN) fibers may have been misunderstood; namely, that this limitation may be due to the influence of anesthesia on the efferent system. Building on the premise that the unanesthetized AN may exhibit GOM similar to more central nuclei, the present computational modeling study demonstrates that feedback from the medial olivocochlear (MOC) efferents may contribute to GOM observed physiologically in onset-type neurons in both the cochlear nucleus and inferior colliculus (IC). Additionally, the computational model of MOC efferents used here generates a decrease in masking with longer masker-signal delays similar to that observed in IC physiology and in psychophysical studies. An advantage of this explanation over alternative physiological explanations (e.g., that forward masking requires inhibition from the superior paraolivary nucleus) is that this theory can explain forward masking observed in the brainstem, early in the ascending pathway. For explaining psychoacoustic results, one strength of this model is that it can account for the lack of elevation in thresholds observed when masker level is randomly varied from interval-to-interval, a result that is difficult to explain using the conventional temporal window model of psychophysical forward masking. Future directions for evaluating the efferent mechanism as a contributing mechanism for psychoacoustic results are discussed.


Subjects
Cochlea, Perceptual Masking, Humans, Cochlea/physiology, Perceptual Masking/physiology, Neurological Models, Auditory Pathways/physiology, Efferent Pathways/physiology, Computer Simulation, Inferior Colliculi/physiology, Acoustic Stimulation, Cochlear Nerve/physiology, Auditory Perception/physiology, Cochlear Nucleus/physiology
19.
Sci Rep ; 14(1): 21216, 2024 09 11.
Article in English | MEDLINE | ID: mdl-39261536

ABSTRACT

Object-based attention operates both in perception and visual working memory. While the efficient perception of auditory stimuli also requires the formation of auditory objects, little is known about their role in auditory working memory (AWM). To investigate whether attention to one object feature in AWM leads to the involuntary maintenance of another, task-irrelevant feature, we conducted four experiments. Stimuli were abstract sounds that differed on the dimensions frequency and location, only one of which was task-relevant in each experiment. The first two experiments required a match-nonmatch decision about a probe sound whose irrelevant feature value could either be identical to or differ from the memorized stimulus. Matches on the relevant dimension were detected more accurately when the irrelevant feature matched as well, whereas for nonmatches on the relevant dimension, performance was better for irrelevant feature nonmatches. Signal-detection analysis showed that changes of irrelevant frequency reduced the sensitivity for sound location. Two further experiments used continuous report tasks. When location was the target feature, changes of irrelevant sound frequency had an impact on both recall error and adjustment time. Irrelevant location changes affected adjustment time only. In summary, object-based attention led to a concurrent maintenance of task-irrelevant sound features in AWM.


Subjects
Acoustic Stimulation, Attention, Auditory Perception, Short-Term Memory, Humans, Short-Term Memory/physiology, Female, Male, Auditory Perception/physiology, Adult, Attention/physiology, Young Adult, Reaction Time/physiology
20.
Sci Rep ; 14(1): 21313, 2024 09 12.
Article in English | MEDLINE | ID: mdl-39266561

ABSTRACT

Extensive research with musicians has shown that instrumental musical training can have a profound impact on how acoustic features are processed in the brain. However, less is known about the influence of singing training on neural activity during voice perception, particularly in response to salient acoustic features, such as the vocal vibrato in operatic singing. To address this gap, the present study employed functional magnetic resonance imaging (fMRI) to measure brain responses in trained opera singers and musically untrained controls listening to recordings of opera singers performing in two distinct styles: a full operatic voice with vibrato, and a straight voice without vibrato. Results indicated that for opera singers, perception of operatic voice led to differential fMRI activations in bilateral auditory cortical regions and the default mode network. In contrast, musically untrained controls exhibited differences only in bilateral auditory cortex. These results suggest that operatic singing training triggers experience-dependent neural changes in the brain that activate self-referential networks, possibly through embodiment of acoustic features associated with one's own singing style.


Subjects
Magnetic Resonance Imaging, Singing, Humans, Singing/physiology, Male, Female, Adult, Young Adult, Auditory Perception/physiology, Music, Default Mode Network/physiology, Auditory Cortex/physiology, Auditory Cortex/diagnostic imaging, Voice/physiology, Brain Mapping, Brain/physiology, Brain/diagnostic imaging