Results 1 - 20 of 33
1.
Stress ; 27(1): 2402519, 2024 Jan.
Article in English | MEDLINE | ID: mdl-39285764

ABSTRACT

The main aim of this review was to compare natural sounds with a quiet environment and determine which is more beneficial for alleviating stress. The results showed a statistically significant difference between exposure to natural sounds and a quiet environment in the effect on heart rate (p = 0.006), blood pressure (p = 0.001), and respiratory rate (p = 0.032). However, no significant difference was found between exposure to natural sounds and a quiet environment in the effect on mean arterial pressure (MAP; p = 0.407), perceived stress, or oxygen saturation (SpO2; p = 0.251). Although the evidence was slightly inconsistent, overall, natural sounds were found to be more beneficial for stress reduction than quiet environments.


Subjects
Blood Pressure , Heart Rate , Respiratory Rate , Sound , Psychological Stress , Humans , Blood Pressure/physiology , Heart Rate/physiology , Respiratory Rate/physiology , Psychological Stress/physiopathology , Psychological Stress/prevention & control
2.
Proc Natl Acad Sci U S A ; 117(49): 31482-31493, 2020 12 08.
Article in English | MEDLINE | ID: mdl-33219122

ABSTRACT

The perception of sound textures, a class of natural sounds defined by statistical sound structure such as fire, wind, and rain, has been proposed to arise through the integration of time-averaged summary statistics. Where and how the auditory system might encode these summary statistics to create internal representations of these stationary sounds, however, is unknown. Here, using natural textures and synthetic variants with reduced statistics, we show that summary statistics modulate the correlations between frequency-organized neuron ensembles in the awake rabbit inferior colliculus (IC). These neural ensemble correlation statistics capture high-order sound structure and allow for accurate neural decoding in a single-trial recognition task with evidence accumulation times approaching 1 s. In contrast, the average activity across the neural ensemble (neural spectrum) provides a fast (tens of milliseconds) and salient signal that contributes primarily to texture discrimination. Intriguingly, perceptual studies in human listeners reveal analogous trends: the sound spectrum is integrated quickly and serves as a salient discrimination cue, while high-order sound statistics are integrated slowly and contribute substantially more toward recognition. The findings suggest that statistical sound cues such as the sound spectrum and correlation structure are represented by distinct response statistics in auditory midbrain ensembles, and that these neural response statistics may have dissociable roles and time scales for the recognition and discrimination of natural sounds.
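For illustration only (not the authors' analysis code): a minimal Python sketch of one kind of summary statistic discussed here, the pairwise correlations between frequency channels of a spectrogram-like representation of a texture. The STFT front end and all parameter values are assumptions made for the example.

```python
# Illustrative sketch (not the authors' code): pairwise correlations between the
# frequency channels of a spectrogram-like representation, one simple kind of
# "summary statistic" that characterizes sound textures.
import numpy as np
from scipy import signal

def channel_correlations(waveform, fs, n_fft=1024, hop=256):
    """Correlation matrix between per-channel amplitude envelopes."""
    f, t, spec = signal.stft(waveform, fs=fs, nperseg=n_fft, noverlap=n_fft - hop)
    envelopes = np.abs(spec)              # (n_freq, n_frames) amplitude envelopes
    return np.corrcoef(envelopes)         # (n_freq, n_freq) channel correlations

# Toy usage: comodulated noise shows stronger cross-channel correlations than white noise.
fs = 16000
rng = np.random.default_rng(0)
noise = rng.standard_normal(2 * fs)
comod = noise * (1.0 + 0.8 * np.sin(2 * np.pi * 4 * np.arange(2 * fs) / fs))
print(channel_correlations(noise, fs).mean(), channel_correlations(comod, fs).mean())
```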


Subjects
Auditory Perception/physiology , Psychological Discrimination , Statistical Models , Neurons/physiology , Psychological Recognition , Sound , Adult , Animals , Female , Humans , Male , Mesencephalon/physiology , Rabbits , Task Performance and Analysis , Time Factors , Young Adult
3.
Proc Natl Acad Sci U S A ; 117(45): 28442-28451, 2020 11 10.
Article in English | MEDLINE | ID: mdl-33097665

ABSTRACT

Sounds are processed by the ear and central auditory pathway. These processing steps are biologically complex, and many aspects of the transformation from sound waveforms to cortical response remain unclear. To understand this transformation, we combined models of the auditory periphery with various encoding models to predict auditory cortical responses to natural sounds. The cochlear models ranged from detailed biophysical simulations of the cochlea and auditory nerve to simple spectrogram-like approximations of the information processing in these structures. For three different stimulus sets, we tested the capacity of these models to predict the time course of single-unit neural responses recorded in ferret primary auditory cortex. We found that simple models based on a log-spaced spectrogram with approximately logarithmic compression perform similarly to the best-performing biophysically detailed models of the auditory periphery, and do so more consistently across diverse natural and synthetic sounds. Furthermore, we demonstrated that including approximations of the three categories of auditory nerve fibers in these simple models can substantially improve prediction, particularly when combined with a network encoding model. Our findings imply that the properties of the auditory periphery and central pathway may together result in a simpler than expected functional transformation from ear to cortex. Thus, much of the detailed biological complexity seen in the auditory periphery does not appear to be important for understanding the cortical representation of sound.
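A minimal sketch, assuming an STFT front end with triangular filters on a log-spaced frequency axis, of the kind of log-spaced, log-compressed spectrogram the study compares against detailed cochlear models; the parameter values are illustrative, not those used in the paper.

```python
# Illustrative sketch (not the paper's implementation): a log-spaced spectrogram
# with roughly logarithmic amplitude compression as a simple peripheral model.
import numpy as np
from scipy import signal

def log_spectrogram(waveform, fs, n_fft=512, hop=160, n_bands=48,
                    fmin=200.0, fmax=8000.0, eps=1e-6):
    f, t, spec = signal.stft(waveform, fs=fs, nperseg=n_fft, noverlap=n_fft - hop)
    power = np.abs(spec) ** 2
    # Triangular filters centered on a log-spaced frequency axis
    edges = np.geomspace(fmin, fmax, n_bands + 2)
    fb = np.zeros((n_bands, len(f)))
    for i in range(n_bands):
        lo, mid, hi = edges[i], edges[i + 1], edges[i + 2]
        fb[i] = np.clip(np.minimum((f - lo) / (mid - lo), (hi - f) / (hi - mid)), 0, None)
    banded = fb @ power
    return np.log(banded + eps)        # approximately logarithmic compression

fs = 16000
tone = np.sin(2 * np.pi * 1000 * np.arange(fs) / fs)
print(log_spectrogram(tone, fs).shape)   # (n_bands, n_frames)
```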


Subjects
Auditory Cortex/physiology , Auditory Pathways/physiology , Sound , Acoustic Stimulation , Animals , Auditory Perception/physiology , Cochlea , Cochlear Nerve/physiology , Ferrets , Humans , Neurological Models , Neurons/physiology , Speech
4.
J Neurosci ; 41(50): 10261-10277, 2021 12 15.
Article in English | MEDLINE | ID: mdl-34750226

ABSTRACT

Sound discrimination is essential in many species for communicating and foraging. Bats, for example, use sounds for echolocation and communication. In the bat auditory cortex there are neurons that process both sound categories, but how these neurons respond to acoustic transitions, that is, echolocation streams followed by a communication sound, remains unknown. Here, we show that the acoustic context, a leading sound sequence followed by a target sound, changes neuronal discriminability of echolocation versus communication calls in the cortex of awake bats of both sexes. Nonselective neurons that fire equally well to both echolocation and communication calls in the absence of context become category selective when leading context is present. Conversely, neurons that prefer communication sounds in the absence of context turn into nonselective ones when context is added. The presence of context leads to an overall response suppression, but the strength of this suppression is stimulus specific. Suppression is strongest when context and target sounds belong to the same category, e.g., echolocation followed by echolocation. A neuron model of stimulus-specific adaptation replicated our results in silico. The model predicts selectivity to communication and echolocation sounds in the inputs arriving at the auditory cortex, as well as two forms of adaptation: presynaptic frequency-specific adaptation acting on cortical inputs and stimulus-unspecific postsynaptic adaptation. In addition, the model predicted that context effects can last up to 1.5 s after context offset and that synaptic inputs tuned to low-frequency sounds (communication signals) have the shortest decay constant of presynaptic adaptation. SIGNIFICANCE STATEMENT: We studied cortical responses to isolated calls and call mixtures in awake bats and show that (1) two neuronal populations coexist in the bat cortex, including neurons that discriminate social from echolocation sounds well and neurons that are equally driven by these two ethologically different sound types; (2) acoustic context (i.e., other natural sounds preceding the target sound) affects natural sound selectivity in a manner that could not be predicted based on responses to isolated sounds; and (3) a computational model similar to those used for explaining stimulus-specific adaptation in rodents can account for the responses observed in the bat cortex to natural sounds. This model depends on segregated feedforward inputs, synaptic depression, and postsynaptic neuronal adaptation.
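A hedged sketch of a generic stimulus-specific adaptation model in the spirit described here: two feedforward input channels (echolocation-tuned and communication-tuned) with channel-specific synaptic depression plus an unspecific postsynaptic adaptation term. The structure, parameters, and time constants are illustrative assumptions, not the authors' fitted model.

```python
# Generic stimulus-specific adaptation sketch (illustrative, not the fitted model):
# two feedforward channels with channel-specific synaptic depression and an
# unspecific postsynaptic adaptation term.
import numpy as np

def run_ssa(stim, dt=0.001, tau_dep=(0.3, 1.5), tau_post=0.5, u=0.4, w=(1.0, 1.0)):
    """stim: array of shape (T, 2), nonnegative drive to the two input channels."""
    tau_dep, w = np.asarray(tau_dep), np.asarray(w)
    res = np.ones(2)            # synaptic resources per channel (depression state)
    a = 0.0                     # unspecific postsynaptic adaptation
    out = np.zeros(len(stim))
    for t, s in enumerate(stim):
        drive = float(np.sum(w * res * s))          # depressed feedforward drive
        rate = max(drive - a, 0.0)                  # output after postsynaptic adaptation
        out[t] = rate
        res += dt * (1.0 - res) / tau_dep - u * res * s * dt   # channel-specific depression
        a += dt * (-a / tau_post + 0.1 * rate)                 # adaptation tracks own output
        res = np.clip(res, 0.0, 1.0)
    return out

# Toy usage: a long "context" on channel 0 suppresses a later probe on channel 0
# more than a probe on channel 1 (stimulus-specific suppression).
stim = np.zeros((3000, 2))
stim[200:1200, 0] = 1.0       # context: channel 0
stim[1400:1500, 0] = 1.0      # probe, same channel
stim[1700:1800, 1] = 1.0      # probe, other channel
resp = run_ssa(stim)
print(resp[1400:1500].mean(), resp[1700:1800].mean())
```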


Subjects
Auditory Cortex/physiology , Auditory Perception/physiology , Chiroptera/physiology , Echolocation/physiology , Neurons/physiology , Physiological Adaptation/physiology , Animals , Female , Male , Neurological Models
5.
J Neurosci ; 40(27): 5228-5246, 2020 07 01.
Article in English | MEDLINE | ID: mdl-32444386

ABSTRACT

Humans and animals maintain accurate sound discrimination in the presence of loud sources of background noise. It is commonly assumed that this ability relies on the robustness of auditory cortex responses. However, only a few attempts have been made to characterize neural discrimination of communication sounds masked by noise at each stage of the auditory system and to quantify the noise effects on neuronal discrimination in terms of alterations in amplitude modulations. Here, we measured neural discrimination between communication sounds masked by vocalization-shaped stationary noise from multiunit responses recorded in the cochlear nucleus, inferior colliculus, auditory thalamus, and primary and secondary auditory cortex at several signal-to-noise ratios (SNRs) in anesthetized male or female guinea pigs. Masking noise decreased sound discrimination by neuronal populations in each auditory structure, but collicular and thalamic populations showed better performance than cortical populations at each SNR. In contrast, in each auditory structure, discrimination by neuronal populations was only slightly decreased when tone-vocoded vocalizations were tested. These results shed new light on the specific contributions of subcortical structures to robust sound encoding and suggest that the distortion of slow amplitude modulation cues conveyed by communication sounds is one of the factors constraining neuronal discrimination at subcortical and cortical levels. SIGNIFICANCE STATEMENT: Dissecting how auditory neurons discriminate communication sounds in noise is a major goal in auditory neuroscience. Robust sound coding in noise is often viewed as a specific property of cortical networks, although this remains to be demonstrated. Here, we tested the discrimination performance of neuronal populations at five levels of the auditory system in response to conspecific vocalizations masked by noise. In each acoustic condition, subcortical neurons discriminated target vocalizations better than cortical ones, and in each structure the reduction in discrimination performance was related to the reduction in slow amplitude modulation cues.


Subjects
Animal Communication , Auditory Perception/physiology , Psychological Discrimination/physiology , Noise , Animal Vocalization/physiology , Acoustic Stimulation , Algorithms , Animals , Auditory Cortex/cytology , Auditory Cortex/physiology , Female , Guinea Pigs , Male , Perceptual Masking , Signal-to-Noise Ratio , Superior Colliculi/cytology , Superior Colliculi/physiology , Thalamus/cytology , Thalamus/physiology
6.
Biol Cybern ; 115(4): 331-341, 2021 08.
Article in English | MEDLINE | ID: mdl-34109476

ABSTRACT

Octopus cells in the posteroventral cochlear nucleus exhibit characteristic onset responses to broadband transients but have been little investigated in response to more complex sound stimuli. In this paper, we propose a phenomenological but biophysically motivated modeling approach that makes it possible to simulate responses of large populations of octopus cells to arbitrary sound pressure waves. The model depends on only a few parameters and reproduces basic physiological characteristics such as onset firing and phase locking to amplitude modulations. Simulated responses to speech stimuli suggest that octopus cells are particularly sensitive to high-frequency transients in natural sounds and that their sustained firing to phonemes provides a population code for sound level.
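An illustrative, highly simplified onset-detector sketch (not the paper's model): a unit fires whenever the rate of rise of the broadband envelope exceeds a threshold, giving octopus-cell-like onset responses to transients. The threshold and envelope extraction are assumptions for the example.

```python
# Simplified onset-response sketch (illustrative only, not the proposed model).
import numpy as np
from scipy import signal

def onset_response(waveform, fs, thresh=50.0):
    envelope = np.abs(signal.hilbert(waveform))   # broadband amplitude envelope
    slope = np.gradient(envelope) * fs            # rate of rise, per second
    return (slope > thresh).astype(float)         # "spikes" wherever the rise is steep

# Toy usage: a sparse click train evokes responses only at the transients.
fs = 16000
clicks = np.zeros(fs)
clicks[::4000] = 1.0
print(int(onset_response(clicks, fs).sum()))
```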


Subjects
Cochlear Nucleus , Octopodiformes , Acoustic Stimulation , Animals , Neurons
7.
Neuroimage ; 210: 116558, 2020 04 15.
Article in English | MEDLINE | ID: mdl-31962174

ABSTRACT

Humans can easily distinguish many sounds in the environment, but speech and music are uniquely important. Previous studies, mostly using fMRI, have identified separate regions of the brain that respond selectively for speech and music. Yet there is little evidence that brain responses are larger and more temporally precise for human-specific sounds like speech and music compared to other types of sounds, as has been found for responses to species-specific sounds in other animals. We recorded EEG as healthy, adult subjects listened to various types of two-second-long natural sounds. By classifying each sound based on the EEG response, we found that speech, music, and impact sounds were classified better than other natural sounds. But unlike impact sounds, the classification accuracy for speech and music dropped for synthesized sounds that have identical frequency and modulation statistics based on a subcortical model, indicating a selectivity for higher-order features in these sounds. Lastly, the patterns in average power and phase consistency of the two-second EEG responses to each sound replicated the patterns of speech and music selectivity observed with classification accuracy. Together with the classification results, this suggests that the brain produces temporally individualized responses to speech and music sounds that are stronger than the responses to other natural sounds. In addition to highlighting the importance of speech and music for the human brain, the techniques used here could be a cost-effective, temporally precise, and efficient way to study the human brain's selectivity for speech and music in other populations.
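A minimal sketch, on synthetic data, of the classification idea described here: identify which sound was heard from single-trial responses with a cross-validated linear classifier. The feature dimensions and classifier choice are assumptions, not the study's pipeline.

```python
# Synthetic-data sketch of sound identification from single-trial responses
# with a cross-validated linear classifier (illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_sounds, n_trials, n_features = 10, 20, 64            # e.g. channels x time bins, flattened
templates = rng.standard_normal((n_sounds, n_features))
X = np.repeat(templates, n_trials, axis=0) + rng.standard_normal((n_sounds * n_trials, n_features))
y = np.repeat(np.arange(n_sounds), n_trials)

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)               # chance level is 1/n_sounds
print(scores.mean())
```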


Subjects
Auditory Perception/physiology , Cerebral Cortex/physiology , Electroencephalography/methods , Functional Neuroimaging/methods , Music , Adult , Female , Humans , Male , Speech Perception/physiology , Young Adult
8.
Proc Natl Acad Sci U S A ; 114(18): 4799-4804, 2017 May 02.
Article in English | MEDLINE | ID: mdl-28420788

ABSTRACT

Ethological views of brain functioning suggest that sound representations and computations in the auditory neural system are finely optimized to process and discriminate behaviorally relevant acoustic features and sounds (e.g., spectrotemporal modulations in the songs of zebra finches). Here, we show that modeling neural sound representations in terms of frequency-specific spectrotemporal modulations enables accurate and specific reconstruction of real-life sounds from high-resolution functional magnetic resonance imaging (fMRI) response patterns in the human auditory cortex. Region-based analyses indicated that response patterns in separate portions of the auditory cortex are informative of distinctive sets of spectrotemporal modulations. Most relevantly, results revealed that in early auditory regions, and progressively more in surrounding regions, temporal modulations in a range relevant for speech analysis (∼2-4 Hz) were reconstructed more faithfully than other temporal modulations. In early auditory regions, this effect was frequency dependent and only present for lower frequencies (<∼2 kHz), whereas for higher frequencies, reconstruction accuracy was higher for faster temporal modulations. Further analyses suggested that auditory cortical processing optimized for the fine-grained discrimination of speech and vocal sounds underlies this enhanced reconstruction accuracy. In sum, the present study introduces an approach to embed models of neural sound representations in the analysis of fMRI response patterns. Furthermore, it reveals that, in the human brain, even general-purpose and fundamental neural processing mechanisms are shaped by the physical features of real-world stimuli that are most relevant for behavior (i.e., speech, voice).
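For illustration, a minimal sketch of model-based reconstruction on synthetic data: learn a linear mapping from response patterns back to spectrotemporal-modulation features with ridge regression and score held-out reconstructions by correlation. The dimensions and ridge penalty are assumptions, not the study's fMRI analysis.

```python
# Synthetic-data sketch of model-based sound-feature reconstruction via ridge regression.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_sounds, n_voxels, n_mod_features = 120, 300, 64
features = rng.standard_normal((n_sounds, n_mod_features))     # modulation representation per sound
weights = rng.standard_normal((n_mod_features, n_voxels)) * 0.2
responses = features @ weights + rng.standard_normal((n_sounds, n_voxels))  # simulated patterns

R_train, R_test, F_train, F_test = train_test_split(responses, features, test_size=0.2, random_state=0)
decoder = Ridge(alpha=10.0).fit(R_train, F_train)
reconstructed = decoder.predict(R_test)
# Reconstruction accuracy: feature-vector correlation per held-out sound
corrs = [np.corrcoef(r, f)[0, 1] for r, f in zip(reconstructed, F_test)]
print(np.mean(corrs))
```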


Subjects
Auditory Cortex/diagnostic imaging , Auditory Cortex/physiology , Magnetic Resonance Imaging , Pitch Perception/physiology , Speech Perception/physiology , Adult , Female , Humans , Male
9.
Cereb Cortex ; 28(1): 295-306, 2018 01 01.
Article in English | MEDLINE | ID: mdl-29069292

ABSTRACT

In everyday sound environments, we recognize sound sources and events by attending to relevant aspects of an acoustic input. Evidence about the cortical mechanisms involved in extracting relevant category information from natural sounds is, however, limited to speech. Here, we used functional MRI to measure cortical response patterns while human listeners categorized real-world sounds created by objects of different solid materials (glass, metal, wood) manipulated by different sound-producing actions (striking, rattling, dropping). In different sessions, subjects had to identify either material or action categories in the same sound stimuli. The sound-producing action and the material of the sound source could be decoded from multivoxel activity patterns in auditory cortex, including Heschl's gyrus and planum temporale. Importantly, decoding success depended on task relevance and category discriminability. Action categories were more accurately decoded in auditory cortex when subjects identified action information. Conversely, the material of the same sound sources was decoded with higher accuracy in the inferior frontal cortex during material identification. Representational similarity analyses indicated that both early and higher-order auditory cortex selectively enhanced spectrotemporal features relevant to the target category. Together, the results indicate a cortical selection mechanism that favors task-relevant information in the processing of nonvocal sound categories.
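A minimal sketch of a representational similarity analysis on synthetic data: compare the dissimilarity structure of multivoxel patterns with a model dissimilarity matrix defined by category membership. The labels and sizes are hypothetical, not the study's data.

```python
# Synthetic-data sketch of representational similarity analysis (RSA).
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_stimuli, n_voxels = 9, 200                  # e.g. 3 materials x 3 actions (hypothetical)
category = np.repeat(np.arange(3), 3)         # hypothetical material label per stimulus
patterns = rng.standard_normal((n_stimuli, n_voxels)) + category[:, None] * 0.5

neural_rdm = pdist(patterns, metric="correlation")          # neural dissimilarities
model_rdm = pdist(category[:, None], metric="hamming")      # 0 = same category, 1 = different
rho, p = spearmanr(neural_rdm, model_rdm)
print(rho, p)
```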


Subjects
Auditory Perception/physiology , Cerebral Cortex/physiology , Acoustic Stimulation/methods , Adult , Attention/physiology , Brain Mapping , Cerebral Cortex/diagnostic imaging , Cerebrovascular Circulation/physiology , Female , Humans , Magnetic Resonance Imaging , Male , Neuropsychological Tests , Oxygen/blood , Young Adult
10.
Cereb Cortex ; 27(3): 2385-2402, 2017 03 01.
Article in English | MEDLINE | ID: mdl-27095823

ABSTRACT

Natural sounds exhibit statistical variation in their spectrotemporal structure. This variation is central to identification of unique environmental sounds and to vocal communication. Using limited resources, the auditory system must create a faithful representation of sounds across the full range of variation in temporal statistics. Imaging studies in humans demonstrated that the auditory cortex is sensitive to temporal correlations. However, the mechanisms by which the auditory cortex represents the spectrotemporal structure of sounds and how neuronal activity adjusts to vastly different statistics remain poorly understood. In this study, we recorded responses of neurons in the primary auditory cortex of awake rats to sounds with systematically varied temporal correlation, to determine whether and how this feature alters sound encoding. Neuronal responses adapted to changing stimulus temporal correlation. This adaptation was mediated by a change in the firing rate gain of neuronal responses rather than their spectrotemporal properties. This gain adaptation allowed neurons to maintain similar firing rates across stimuli with different statistics, preserving their ability to efficiently encode temporal modulation. This dynamic gain control mechanism may underlie comprehension of vocalizations and other natural sounds under different contexts, subject to distortions in temporal correlation structure via stretching or compression.
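A minimal sketch of the gain-adaptation idea: the response gain is divisively scaled by a running estimate of recent stimulus drive, so mean firing rates stay similar across stimuli with different statistics. The specific functional form and time constant are assumptions, not the fitted model.

```python
# Illustrative divisive gain-adaptation sketch (not the study's fitted model).
import numpy as np

def gain_adapted_rate(drive, dt=0.001, tau=1.0, sigma0=0.1):
    est = sigma0                                   # running estimate of drive strength
    rates = np.zeros_like(drive)
    for t, x in enumerate(drive):
        rates[t] = x / (est + sigma0)              # divisive gain control
        est += dt / tau * (abs(x) - est)           # slowly track stimulus statistics
    return rates

rng = np.random.default_rng(0)
weak = np.abs(rng.standard_normal(5000)) * 0.5
strong = np.abs(rng.standard_normal(5000)) * 5.0
# Mean output rates end up similar despite the 10x difference in input strength.
print(gain_adapted_rate(weak).mean(), gain_adapted_rate(strong).mean())
```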


Subjects
Physiological Adaptation/physiology , Auditory Cortex/physiology , Auditory Perception/physiology , Neurons/physiology , Acoustic Stimulation/methods , Action Potentials , Animals , Implanted Electrodes , Linear Models , Male , Nonlinear Dynamics , Long-Evans Rats , Computer-Assisted Signal Processing , Time Factors
11.
Cereb Cortex ; 26(11): 4242-4252, 2016 10 17.
Article in English | MEDLINE | ID: mdl-27600839

ABSTRACT

In the auditory system, early neural stations such as the brain stem are characterized by strict tonotopy, which is used to deconstruct sounds into their basic frequencies. Higher along the auditory hierarchy, however, as early as primary auditory cortex (A1), tonotopy starts breaking down within local circuits. Here, we studied the response properties of both excitatory and inhibitory neurons in the auditory cortex of anesthetized mice. We used in vivo two-photon targeted cell-attached recordings from identified parvalbumin-positive neurons (PVNs) and their excitatory pyramidal neighbors (PyrNs). We show that PyrNs are locally heterogeneous, as characterized by diverse best frequencies, pairwise signal correlations, and response timing. In marked contrast, neighboring PVNs exhibited homogeneous response properties in pairwise signal correlations and temporal responses. The distinct physiological microarchitecture of the different cell types is maintained qualitatively in response to natural sounds. Excitatory heterogeneity and inhibitory homogeneity within the same circuit suggest different roles for each population in coding natural stimuli.


Subjects
Auditory Cortex/cytology , Brain Mapping , Nerve Net/physiology , Neural Inhibition/physiology , Pyramidal Cells/physiology , Acoustic Stimulation , Animals , Electric Stimulation , Membrane Potentials/physiology , Mice , Inbred C57BL Mice , Transgenic Mice , Parvalbumins/genetics , Parvalbumins/metabolism , Patch-Clamp Techniques , Animal Vocalization/physiology
12.
Cognition ; 253: 105874, 2024 Dec.
Article in English | MEDLINE | ID: mdl-39216190

ABSTRACT

Perception has long been envisioned to use an internal model of the world to explain the causes of sensory signals. However, such accounts have historically not been testable, typically requiring intractable search through the space of possible explanations. Using auditory scenes as a case study, we leveraged contemporary computational tools to infer explanations of sounds in a candidate internal generative model of the auditory world (ecologically inspired audio synthesizers). Model inferences accounted for many classic illusions. Unlike traditional accounts of auditory illusions, the model is applicable to any sound, and exhibited human-like perceptual organization for real-world sound mixtures. The combination of stimulus-computability and interpretable model structure enabled 'rich falsification', revealing additional assumptions about sound generation needed to account for perception. The results show how generative models can account for the perception of both classic illusions and everyday sensory signals, and illustrate the opportunities and challenges involved in incorporating them into theories of perception.


Subjects
Auditory Perception , Humans , Auditory Perception/physiology , Illusions/physiology , Psychological Models , Acoustic Stimulation
13.
Physiol Behav ; 287: 114651, 2024 Aug 08.
Article in English | MEDLINE | ID: mdl-39117032

ABSTRACT

Sound is one of the important environmental factors that influence individuals' decision-making. However, it remains unclear whether and how natural sounds nudge green product purchases. This study proposes an extension of the Stimulus-Organism-Response (S-O-R) framework, suggesting that natural sounds increase early attentional congruency associated with green products, thereby promoting individuals' green product purchases. To test our theory, we conducted an experiment employing a hierarchical drift-diffusion model (HDDM) and an event-related potentials (ERP) method. Results showed that natural sounds not only increased the purchase rate for green products but also enhanced the drift rate in favor of purchasing green products. Additionally, consumers exhibited a reduced frontal early P2 wave (150-230 ms) in response to green products under natural sounds, indicating that natural sounds increased the early attentional congruency associated with green products. More importantly, neural correlates of early attentional congruency mediated the nudge effect of natural sounds on purchase rate and drift rate for green products. This study contributes to the neural understanding of how natural sounds influence green product purchases and provides actionable implications for market managers when designing green product sales environments.
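For illustration, a minimal simulation of a drift-diffusion process (a plain simulation, not the HDDM package or the study's fitted model): evidence accumulates toward an upper ("purchase") or lower ("no purchase") boundary, and a higher drift rate yields faster, more frequent purchase choices.

```python
# Toy drift-diffusion simulation (illustrative only, not the study's fitted HDDM).
import numpy as np

def simulate_ddm(drift, boundary=1.0, noise=1.0, dt=0.001, max_t=5.0, n_trials=2000, seed=0):
    rng = np.random.default_rng(seed)
    choices, rts = [], []
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < boundary and t < max_t:
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        choices.append(x >= boundary)    # True = hit the upper ("purchase") boundary
        rts.append(t)
    return np.mean(choices), np.mean(rts)

print(simulate_ddm(drift=0.3))   # lower drift rate: slower, fewer "purchases"
print(simulate_ddm(drift=1.0))   # higher drift rate: faster, more "purchases"
```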

14.
Front Public Health ; 11: 1031501, 2023.
Article in English | MEDLINE | ID: mdl-36935713

ABSTRACT

The use of existing resources, such as natural sounds, to promote the mental health of citizens is an area of research that is receiving increasing attention. This research contributes to existing knowledge by combining a field psychological walk method with an experimental acoustic control method to compare the acoustic information masking effects of water and birdsong sounds on traffic noise, based on the psychological health responses of 30 participants. The influence of traffic noise and contextual sounds on the psychological health of participants identified the potential of natural sounds for the acoustic information masking of traffic noise. Furthermore, it was found that 65.0 dBA water sounds did not mask 60.0 dBA traffic noise, whereas 45.0 dBA birdsong sounds did mask it, although this effect was not significant. Additionally, contextual factors (the presence or absence of crowd activity sounds) did not significantly influence psychological health through birdsong. This study contributes to public health cost savings. It may also guide the development of new ideas and methods for configuring open urban spaces according to public health needs.


Subjects
Transportation Noise , Humans , Mental Health , Acoustics , Water
15.
Article in English | MEDLINE | ID: mdl-35328840

ABSTRACT

This paper explores strategies that visually impaired people use to obtain information in unfamiliar environments. It also aims to determine how natural sounds that often exist in the environment, or auditory cues installed in various facilities as a source of guidance, are prioritized and selected in different countries. The aim was to evaluate how users who are visually impaired make use of natural sounds and auditory cues during mobility. The data were collected by interviewing 60 individuals with visual impairments who offered their insights on the ways they use auditory cues. The data revealed a clear contrast, between those who use trains and those who use other transportation systems, in the methods used to obtain information at unfamiliar locations and in the desire for the installation of auditory cues in different locations. The participants demonstrated a consensus on the need for devices that provide on-demand, minimal auditory feedback. The paper discusses the suggestions offered by the interviewees and details their hopes for adjusted auditory cues. The study argues that auditory cues have high potential for improving the quality of life of people who are visually impaired by increasing their mobility range and level of independence. Additionally, this study emphasizes the importance of a standardized design for auditory cues, a change desired by the interviewees. Standardization is expected to boost the efficiency of auditory cues in providing accurate information and assistance to individuals with visual impairment regardless of their geographical location. Regarding implications for practitioners, the study presents the need to design systems that provide minimal audio feedback to reduce the masking of natural sounds. The design of new auditory cues should build on the imagination skills that people with visual impairments already possess. For example, the pitch of a sound could change to indicate the direction of escalators and elevators and to distinguish the locations of male and female toilets.


Subjects
Cues (Psychology) , Quality of Life , Acoustic Stimulation/methods , Auditory Perception , Female , Humans , Male , Sound , Vision Disorders
16.
Front Hum Neurosci ; 16: 949655, 2022.
Article in English | MEDLINE | ID: mdl-35967006

ABSTRACT

Recently, researchers have expanded the investigation into attentional biases toward positive stimuli; however, few studies have examined attentional biases toward positive auditory information. In three experiments, the present study employed an emotional spatial cueing task using emotional sounds as cues and auditory stimuli (Experiment 1) or visual stimuli (Experiment 2 and Experiment 3) as targets to explore whether auditory or visual spatial attention could be modulated by positive auditory cues. Experiment 3 also examined the temporal dynamics of cross-modal auditory bias toward positive natural sounds using event-related potentials (ERPs). The behavioral results of the three experiments consistently demonstrated that response times to targets were faster after positive auditory cues than they were after neutral auditory cues in the valid condition, indicating that healthy participants showed a selective auditory attentional bias (Experiment 1) and cross-modal attentional bias (Experiment 2 and Experiment 3) toward positive natural sounds. The results of Experiment 3 showed that N1 amplitudes were more negative after positive sounds than they were after neutral sounds, which further provided electrophysiological evidence that positive auditory information enhances attention at early stages in healthy adults. The results of the experiments performed in the present study suggest that humans exhibit an attentional bias toward positive natural sounds.

17.
Front Psychol ; 13: 964209, 2022.
Article in English | MEDLINE | ID: mdl-36312201

ABSTRACT

Taxonomies and ontologies for the characterization of everyday sounds have been developed in several research fields, including auditory cognition, soundscape research, artificial hearing, sound design, and medicine. Here, we surveyed 36 such knowledge organization systems, which we identified through a systematic literature search. To evaluate the semantic domains covered by these systems within a homogeneous framework, we introduced a comprehensive set of verbal sound descriptors (sound source properties; attributes of sensation; sound signal descriptors; onomatopoeias; music genres), which we used to manually label the surveyed descriptor classes. We reveal that most taxonomies and ontologies were developed to characterize higher-level semantic relations between sound sources, in terms of the sound-generating objects and actions involved (what/how) or in terms of the environmental context (where). This indicates the current lack of a comprehensive ontology of everyday sounds that simultaneously covers all of these semantic aspects. Such an ontology could have a wide range of applications and purposes, ranging from extending our scientific knowledge of auditory processes in the real world to developing artificial hearing systems.

18.
Curr Biol ; 32(7): 1470-1484.e12, 2022 04 11.
Article in English | MEDLINE | ID: mdl-35196507

ABSTRACT

How is music represented in the brain? While neuroimaging has revealed some spatial segregation between responses to music versus other sounds, little is known about the neural code for music itself. To address this question, we developed a method to infer canonical response components of human auditory cortex using intracranial responses to natural sounds, and further used the superior coverage of fMRI to map their spatial distribution. The inferred components replicated many prior findings, including distinct neural selectivity for speech and music, but also revealed a novel component that responded nearly exclusively to music with singing. Song selectivity was not explainable by standard acoustic features, was located near speech- and music-selective responses, and was also evident in individual electrodes. These results suggest that representations of music are fractionated into subpopulations selective for different types of music, one of which is specialized for the analysis of song.
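An illustrative stand-in (not the authors' method) for the idea of inferring response components: decompose a synthetic electrodes-by-sounds response matrix into a few nonnegative components, each with a response profile over sounds and a weight over electrodes, here using scikit-learn's NMF. All sizes and the choice of NMF are assumptions made for the example.

```python
# Synthetic-data sketch of component inference via nonnegative matrix factorization,
# used here only as a stand-in for the paper's component-decomposition approach.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
n_electrodes, n_sounds, n_components = 80, 165, 6
true_profiles = rng.random((n_components, n_sounds))
true_weights = rng.random((n_electrodes, n_components))
responses = true_weights @ true_profiles + 0.05 * rng.random((n_electrodes, n_sounds))

model = NMF(n_components=n_components, init="nndsvda", max_iter=500, random_state=0)
weights = model.fit_transform(responses)        # electrode weights per component
profiles = model.components_                    # component response profile over sounds
print(weights.shape, profiles.shape)
```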


Subjects
Auditory Cortex , Music , Speech Perception , Acoustic Stimulation/methods , Auditory Cortex/physiology , Auditory Perception/physiology , Brain Mapping/methods , Humans , Speech/physiology , Speech Perception/physiology
19.
Article in English | MEDLINE | ID: mdl-36232035

ABSTRACT

BACKGROUND: Natural sounds are reportedly restorative, but most research has used one-off experiments conducted in artificial conditions. Research based on field experiments is still in its infancy. This study aimed to generate hypotheses on the restorative effects of listening to natural sounds on surgeons, representing professionals working in stressful conditions. METHODS: Each of four surgeons (two experts and two residents) participated six times in an experiment where they took a 10-min break listening to natural sounds (four times) or without natural sounds (twice) after a surgical operation. We measured their skin conductance level, an indicator of sympathetic arousal, continuously during the break (measurement occasions N = 2520) and assessed their mood using two questionnaires before and after the break (N = 69 and N = 42). We also interviewed them after the break. RESULTS: Based on statistical Linear Mixed-Effects modeling, we developed two hypotheses for further, more detailed studies: (H1) Listening to natural sounds after an operation improves surgeons' mood. (H2) Inexperienced surgeons' tension persists so long that the effect of natural sounds on their sympathetic arousal is negligible. CONCLUSIONS: This risk-free, easy-to-use means of stress alleviation through natural sounds could benefit highly-stressed people working indoors.
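A minimal sketch, on synthetic data with hypothetical column names, of the kind of linear mixed-effects model used here: mood change modeled by break condition with a random intercept per surgeon, via statsmodels.

```python
# Synthetic-data sketch of a linear mixed-effects model with a random intercept
# per surgeon (column names are hypothetical, not the study's variables).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
surgeons = np.repeat(["s1", "s2", "s3", "s4"], 6)            # 4 surgeons x 6 breaks
condition = np.tile([1, 1, 1, 1, 0, 0], 4)                   # 1 = natural sounds, 0 = no sounds
mood = 0.5 * condition + rng.standard_normal(len(condition)) * 0.3
df = pd.DataFrame({"surgeon": surgeons, "natural_sound": condition, "mood_change": mood})

model = smf.mixedlm("mood_change ~ natural_sound", df, groups=df["surgeon"])
print(model.fit().summary())
```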


Subjects
Surgeons , Auditory Perception , Humans , Rest , Sound , Surveys and Questionnaires
20.
Hear Res ; 400: 108124, 2021 02.
Article in English | MEDLINE | ID: mdl-33321385

ABSTRACT

Hyperacusis is defined as an increased sensitivity to sounds, i.e., sounds presented at moderate levels can produce discomfort or even pain. Existing diagnostic methods, like the Hyperacusis Questionnaire (HQ) and Loudness Discomfort Levels (LDLs), have been challenged because of their variability and the lack of agreement on appropriate cut-off values. We propose a novel approach using psychoacoustic ratings of natural sounds as an assessment tool for hyperacusis. Subjects (n = 81) were presented with natural and artificial (tone pips, noises) sounds (n = 69) in a controlled environment at four sound levels (60, 70, 80 and 90 dB SPL). The task was to rate them on a pleasant-to-unpleasant visual analog scale. The inherent challenge of this study was to create a new diagnostic tool when no gold standard of hyperacusis diagnosis exists. We labeled subjects as hyperacusic (n = 26) when they were diagnosed as such by at least two of three methods (HQ, LDLs, and self-report). There was a significant difference between controls (n = 23) and hyperacusics in the median global rating of pleasant sounds. Median global ratings of unpleasant sounds and artificial sounds did not differ significantly. We then selected the subset of sounds that best discriminated the controls from the hyperacusics, the Core Discriminant Sounds (CDS), and used them to develop a new metric: the CDS score. A normalized global score and a score for each sound level can be computed with respect to a control population without hyperacusis. A receiver operating characteristic analysis showed that the accuracy of our method in distinguishing subjects with and without complaints of hyperacusis (86%, 95% confidence interval (CI): 76-93%) is comparable to that of existing methods such as the LDL (77%, CI: 67-86%) and the HQ (80%, CI: 69-88%). We believe that the CDS score is more relevant to subjects' complaints than LDLs and that it could be applied in a clinical environment in a fast and effective way, while minimizing discomfort and biases.
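A minimal sketch, on synthetic scores, of the receiver operating characteristic evaluation described here: quantify how well a rating-based score separates hyperacusic subjects from controls with the area under the ROC curve and a simple Youden-index cut-off. The score distributions are invented for illustration, not the study's CDS data.

```python
# Synthetic-data sketch of ROC/AUC evaluation of a diagnostic score (illustrative only).
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
controls = rng.normal(0.0, 1.0, 23)            # hypothetical score distribution, controls
hyperacusics = rng.normal(1.2, 1.0, 26)        # shifted distribution, hyperacusic subjects
scores = np.concatenate([controls, hyperacusics])
labels = np.concatenate([np.zeros(23), np.ones(26)])

auc = roc_auc_score(labels, scores)
fpr, tpr, thresholds = roc_curve(labels, scores)
best = np.argmax(tpr - fpr)                    # Youden index as a simple cut-off choice
print(auc, thresholds[best])
```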


Subjects
Hyperacusis , Sound , Humans , Hyperacusis/diagnosis , Psychoacoustics , Self-Report , Surveys and Questionnaires