Results 1 - 11 of 11
1.
J Neurosci ; 43(21): 3876-3894, 2023 05 24.
Article in English | MEDLINE | ID: mdl-37185101

ABSTRACT

Natural sounds contain rich patterns of amplitude modulation (AM), one of the essential sound dimensions for auditory perception. The sensitivity of human hearing to AM, as measured by psychophysics, takes diverse forms depending on the experimental conditions. Here, we address within a single framework the questions of why such patterns of AM sensitivity have emerged in the human auditory system and how they are realized by our neural mechanisms. Assuming that optimization for natural sound recognition has taken place during human evolution and development, we examined its effect on the formation of AM sensitivity by optimizing a computational model, specifically a multilayer neural network, for natural sound (namely, everyday sounds and speech) recognition and simulating psychophysical experiments in which the AM sensitivity of the model was assessed. Relatively higher layers in the model optimized to sounds with natural AM statistics exhibited AM sensitivity similar to that of humans, although the model was not designed to reproduce human-like AM sensitivity. Moreover, simulated neurophysiological experiments on the model revealed a correspondence between the model layers and auditory brain regions. The layers in which human-like psychophysical AM sensitivity emerged showed substantial neurophysiological similarity with the auditory midbrain and higher regions. These results suggest that human behavioral AM sensitivity emerged as a result of optimization for natural sound recognition in the course of our evolution and/or development, and that it is based on a stimulus representation encoded in neural firing rates in the auditory midbrain and higher regions.

SIGNIFICANCE STATEMENT: This study provides a computational paradigm to bridge the gap between the behavioral properties of human sensory systems as measured in psychophysics and neural representations as measured in nonhuman neurophysiology. This was accomplished by combining knowledge and techniques from psychophysics, neurophysiology, and machine learning. As a specific target modality, we focused on auditory sensitivity to sound AM. We built an artificial neural network model that performs natural sound recognition and simulated psychophysical and neurophysiological experiments on it. Quantitative comparison of a machine learning model with human and nonhuman data made it possible to integrate knowledge of behavioral AM sensitivity and neural AM tuning from the perspective of optimization for natural sound recognition.
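The simulated psychophysical experiments rest on the standard sinusoidally amplitude-modulated stimulus. A minimal sketch of generating such a stimulus (parameters are illustrative assumptions, not the paper's actual settings):

```python
import numpy as np

def am_noise(duration_s=1.0, fs=16000, mod_freq_hz=4.0, mod_depth=0.5, seed=0):
    """Sinusoidally amplitude-modulated Gaussian noise.

    s(t) = (1 + m * sin(2*pi*fm*t)) * n(t) is the standard stimulus for
    measuring AM sensitivity as a function of modulation frequency fm
    and modulation depth m.
    """
    rng = np.random.default_rng(seed)
    t = np.arange(int(duration_s * fs)) / fs
    carrier = rng.standard_normal(t.size)
    envelope = 1.0 + mod_depth * np.sin(2 * np.pi * mod_freq_hz * t)
    return envelope * carrier
```

In a detection-threshold simulation, the modulation depth would be lowered until the model can no longer distinguish this stimulus from unmodulated noise (`mod_depth=0`).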


Subjects
Auditory Cortex, Sound, Humans, Auditory Perception/physiology, Brain/physiology, Hearing, Mesencephalon/physiology, Acoustic Stimulation, Auditory Cortex/physiology
2.
J Vis ; 22(2): 17, 2022 02 01.
Article in English | MEDLINE | ID: mdl-35195670

ABSTRACT

Complex visual processing involved in perceiving object materials can be better elucidated by taking a variety of research approaches. Sharing stimulus and response data is an effective strategy for making the results of different studies directly comparable and can help researchers from different backgrounds enter the field. Here, we constructed a database containing several sets of material images annotated with visual discrimination performance. We created the material images using physically based computer graphics techniques and conducted psychophysical experiments with them in both laboratory and crowdsourcing settings. The observer's task was to discriminate materials along one of six dimensions (gloss contrast, gloss distinctness of image, translucent vs. opaque, metal vs. plastic, metal vs. glass, and glossy vs. painted). Illumination consistency and object geometry were also varied. We used a nonverbal procedure (an oddity task) applicable to diverse use cases, such as cross-cultural, cross-species, clinical, or developmental studies. Results showed that material discrimination depended on the illuminations and geometries, and that the ability to discriminate the spatial consistency of specular highlights in glossiness perception showed larger individual differences than the other tasks. In addition, analysis of visual features showed that the parameters of higher-order color texture statistics can partially, but not completely, explain task performance. The results obtained through crowdsourcing were highly correlated with those obtained in the laboratory, suggesting that our database can be used even when experimental conditions are not strictly controlled in the laboratory. Several projects using our dataset are underway.


Subjects
Form Perception, Contrast Sensitivity, Form Perception/physiology, Humans, Photic Stimulation, Surface Properties, Visual Perception/physiology
3.
Q J Exp Psychol (Hove) ; 74(6): 1140-1152, 2021 Jun.
Article in English | MEDLINE | ID: mdl-33176602

ABSTRACT

Frisson is characterised by tingling and tickling sensations accompanied by positive or negative feelings. However, it remains unknown what factors affect the intensity of frisson. We conducted experiments examining stimulus characteristics and individuals' mood states and personality traits. Participants filled out self-report questionnaires, including the Profile of Mood States, the Beck Depression Inventory, and the Big Five Inventory. They continuously rated the subjective intensity of frisson throughout a 17-min experiment while listening to binaural brushing and tapping sounds through headphones. In interviews after the experiments, participants reported that the tingling and tickling sensations mainly originated on their ears, neck, shoulders, and back. Cross-correlation results showed that the intensity of frisson was closely linked to the acoustic features of the auditory stimuli, including their amplitude, spectral centroid, and spectral bandwidth. This suggests that proximal sounds with a dark and compact timbre trigger frisson. The peak correlation between frisson and each acoustic feature was observed 2 s after the acoustic feature changed, suggesting that bottom-up auditory inputs modulate skin-related modalities. We also found that participants with anxiety were more sensitive to frisson. Our results provide important clues to understanding the mechanisms of auditory-somatosensory interactions.
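The reported 2 s lag comes from cross-correlating the continuous frisson rating with each acoustic feature and locating the peak. A hedged numpy sketch of that analysis step (synthetic signals and a hypothetical helper name, not the authors' code):

```python
import numpy as np

def peak_lag(feature, rating, fs=1.0, max_lag_s=10.0):
    """Lag (in seconds) at which `rating` best correlates with `feature`.

    A positive lag means the rating follows the feature, i.e. the
    acoustic change precedes the bodily response.
    """
    feature = (feature - feature.mean()) / feature.std()
    rating = (rating - rating.mean()) / rating.std()
    max_lag = int(max_lag_s * fs)
    lags = np.arange(-max_lag, max_lag + 1)
    corr = [np.mean(feature[:len(feature) - k] * rating[k:]) if k >= 0
            else np.mean(feature[-k:] * rating[:len(rating) + k])
            for k in lags]
    return lags[int(np.argmax(corr))] / fs
```

Applied to a rating that simply echoes a feature with a 2-sample delay, this returns a lag of 2 samples, matching the intuition that the response trails the stimulus.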


Subjects
Auditory Perception, Sound, Acoustic Stimulation, Anxiety, Emotions, Humans, Sensation
4.
Front Psychol ; 11: 316, 2020.
Article in English | MEDLINE | ID: mdl-32194479

ABSTRACT

Auditory frisson is the experience of feeling cold or shivering related to sound in the absence of a physical cold stimulus. Multiple examples of frisson-inducing sounds have been reported, but the mechanism of auditory frisson remains elusive. Typical frisson-inducing sounds may contain a looming effect, in which a sound appears to approach the listener's peripersonal space. Previous studies on sound in peripersonal space have provided objective measurements of sound-induced effects, but few have investigated the subjective experience of frisson-inducing sounds. Here we explored whether it is possible to produce subjective feelings of frisson by moving a noise stimulus (white noise, rolling-beads noise, or frictional noise produced by rubbing a plastic bag) around a listener's head. Our results demonstrated that sound-induced frisson is experienced more strongly when auditory stimuli are rotated around the head (binaural moving sounds) than when they are not (monaural static sounds), regardless of the noise source. Pearson's correlation analysis showed that several acoustic features of the auditory stimuli, such as the variance of the interaural level difference (ILD), loudness, and sharpness, were correlated with the magnitude of subjective frisson. We also observed that subjective feelings of frisson were stronger for a moving musical sound than for a static one.
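ILD variance, one of the correlated features above, can be estimated directly from a binaural recording. A sketch under assumed framing parameters (frame length and sampling rate are not from the paper):

```python
import numpy as np

def ild_variance(left, right, fs=44100, frame_ms=50, eps=1e-12):
    """Variance over time of the interaural level difference in dB.

    ILD per frame = 10*log10(power_left / power_right); a source moving
    around the head yields a large ILD variance, while a static,
    diotic source yields a small one.
    """
    frame = int(fs * frame_ms / 1000)
    n = (len(left) // frame) * frame
    l2 = left[:n].reshape(-1, frame) ** 2
    r2 = right[:n].reshape(-1, frame) ** 2
    ild = 10 * np.log10((l2.mean(axis=1) + eps) / (r2.mean(axis=1) + eps))
    return float(np.var(ild))
```

Identical left and right channels give zero ILD variance; counter-modulated channels, as produced by a rotating source, give a clearly positive value.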

5.
J Neurosci ; 39(28): 5517-5533, 2019 07 10.
Article in English | MEDLINE | ID: mdl-31092586

ABSTRACT

The auditory system converts the physical properties of a sound waveform to neural activities and processes them for recognition. During the process, the tuning to amplitude modulation (AM) is successively transformed by a cascade of brain regions. To test the functional significance of the AM tuning, we conducted single-unit recording in a deep neural network (DNN) trained for natural sound recognition. We calculated the AM representation in the DNN and quantitatively compared it with those reported in previous neurophysiological studies. We found that an auditory-system-like AM tuning emerges in the optimized DNN. Better-recognizing models showed greater similarity to the auditory system. We isolated the factors forming the AM representation in the different brain regions. Because the model was not designed to reproduce any anatomical or physiological properties of the auditory system other than the cascading architecture, the observed similarity suggests that the AM tuning in the auditory system might also be an emergent property for natural sound recognition during evolution and development.SIGNIFICANCE STATEMENT This study suggests that neural tuning to amplitude modulation may be a consequence of the auditory system evolving for natural sound recognition. We modeled the function of the entire auditory system; that is, recognizing sounds from raw waveforms with as few anatomical or physiological assumptions as possible. We analyzed the model using single-unit recording, which enabled a fair comparison with neurophysiological data with as few methodological biases as possible. Interestingly, our results imply that frequency decomposition in the inner ear might not be necessary for processing amplitude modulation. This implication could not have been obtained if we had used a model that assumes frequency decomposition.
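The AM tuning of a single unit, whether biological or in a DNN, is conventionally summarized by a rate modulation transfer function (rMTF): mean response as a function of modulation frequency. A toy sketch of measuring one (the `toy_unit` here is a hypothetical stand-in, not the paper's trained network):

```python
import numpy as np

def rmtf(unit, mod_freqs, fs=16000, dur=1.0, seed=0):
    """Rate MTF: mean unit response to AM noise at each modulation frequency."""
    rng = np.random.default_rng(seed)
    t = np.arange(int(fs * dur)) / fs
    rates = []
    for fm in mod_freqs:
        carrier = rng.standard_normal(t.size)
        stim = (1 + 0.9 * np.sin(2 * np.pi * fm * t)) * carrier
        rates.append(np.mean(unit(stim)))
    return np.array(rates)

def toy_unit(stim, fs=16000, tau=0.02):
    """Hypothetical unit: rectified difference between the envelope and a
    leaky-smoothed envelope, i.e. a crude envelope high-pass stage."""
    env = np.abs(stim)
    alpha = 1.0 / (tau * fs)
    smooth = np.empty_like(env)
    acc = 0.0
    for i, e in enumerate(env):
        acc += alpha * (e - acc)   # first-order leaky integrator
        smooth[i] = acc
    return np.maximum(env - smooth, 0.0)
```

In the paper's setting, `unit` would instead return the activation of one DNN node, and the resulting rMTF shapes (low-pass, band-pass, high-pass) are what get compared against neurophysiological data.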


Subjects
Auditory Perception, Neurological Models, Computer Neural Networks, Brain/physiology, Humans, Sound
6.
Sci Rep ; 7(1): 16455, 2017 11 28.
Article in English | MEDLINE | ID: mdl-29184117

ABSTRACT

Our hearing is usually robust against reverberation. This study asked how such robustness to everyday sound is realized, and what kinds of acoustic cues contribute to it. We focused on the perception of materials based on impact sounds, which is a common daily experience and for which the responsible acoustic features have already been identified in the absence of reverberation. In our experiment, we instructed participants to identify materials from impact sounds with and without reverberation. Imposing reverberation did not alter the responses to perceived materials averaged across participants. However, an analysis of each participant revealed a significant effect of reverberation, with response patterns varying among participants. The effect depended on the context of stimulus presentation; namely, it was smaller when the reverberation was constant than when it varied from presentation to presentation. The context modified the relative contribution of the spectral features of the sounds to material identification, while no consistent change across participants was observed for the temporal features. Although the detailed results varied greatly among participants, they suggest that a mechanism exists in the auditory system that compensates for reverberation based on adaptation to the spectral features of reverberant sound.

7.
PLoS One ; 11(7): e0159188, 2016.
Article in English | MEDLINE | ID: mdl-27442240

ABSTRACT

Research on sequential vocalization often requires the analysis of vocalizations in long continuous recordings. In studies such as developmental ones, or studies across generations in which days or months of vocalizations must be analyzed, methods for automatic recognition are strongly desired. Although methods for automatic speech recognition have been intensively studied for application purposes, blindly applying them to biological questions may not be an optimal solution. This is because, unlike human speech recognition, the analysis of sequential vocalizations often requires accurate extraction of timing information. In the present study we propose automated systems suitable for recognizing birdsong, one of the most intensively investigated sequential vocalizations, focusing on three properties of birdsong. First, a song is a sequence of vocal elements, called notes, which can be grouped into categories. Second, the temporal structure of birdsong is precisely controlled, meaning that temporal information is important in song analysis. Finally, notes are produced according to certain probabilistic rules, which may facilitate accurate song recognition. We divided the procedure of song recognition into three sub-steps: local classification, boundary detection, and global sequencing, each corresponding to one of the three properties of birdsong. We compared the performance of several different arrangements of these three steps. As a result, we demonstrated that a hybrid model combining a deep convolutional neural network and a hidden Markov model was effective. We propose suitable arrangements of methods according to whether accurate boundary detection is needed. We also designed a new measure to jointly evaluate the accuracy of note classification and boundary detection. Our methods should be applicable, with small modification and tuning, to the songs of other species that share the three properties of sequential vocalization.
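The "global sequencing" step of such a CNN-HMM hybrid amounts to Viterbi decoding of per-frame note-class scores under a note-transition model. A minimal sketch with made-up probabilities (not the trained models from the paper):

```python
import numpy as np

def viterbi(log_emit, log_trans, log_init):
    """Most likely state sequence given per-frame log scores.

    log_emit: (T, S) per-frame log scores for S note classes (e.g. from a CNN),
    log_trans: (S, S) log transition probabilities between note classes,
    log_init: (S,) log initial probabilities.
    """
    T, S = log_emit.shape
    delta = log_init + log_emit[0]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_trans          # (prev, cur) pairs
        back[t] = np.argmax(scores, axis=0)          # best predecessor per state
        delta = scores[back[t], np.arange(S)] + log_emit[t]
    path = np.empty(T, dtype=int)
    path[-1] = int(np.argmax(delta))
    for t in range(T - 1, 0, -1):                    # backtrack
        path[t - 1] = back[t, path[t]]
    return path
```

The transition model encodes the probabilistic note-sequencing rules (the third birdsong property), letting sequence statistics correct frames that the local classifier gets wrong.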


Assuntos
Tentilhões/fisiologia , Reconhecimento Fisiológico de Modelo , Vocalização Animal/fisiologia , Animais , Limiar Auditivo/fisiologia , Automação , Cadeias de Markov , Redes Neurais de Computação , Reprodutibilidade dos Testes , Fatores de Tempo
8.
Article in English | MEDLINE | ID: mdl-26512015

ABSTRACT

Birdsong provides a unique model for studying the control mechanisms of complex sequential behaviors. The present study aimed to demonstrate that multiple factors affect temporal control in song production. We analyzed the songs of Bengalese finches over various time ranges to address factors that affected the duration of acoustic elements (notes) and silent intervals (gaps). The gaps showed more jitter across song renditions than did the notes. Gaps had longer durations at branching points of the song sequence than in stereotyped transitions, and the duration of a gap was correlated with the duration of the note that preceded it. Looking at the variation among song renditions, we found notable factors in three time ranges: within-day drift, within-bout changes, and local jitter. Note durations shortened over the course of the day from morning to evening. Within each song bout, note durations lengthened as singing progressed, while gap durations lengthened only during the late part of the bout. Further analysis after removing these drift factors confirmed that jitter remained in local song sequences. These results suggest that distinct sources of temporal variability exist at multiple levels on the basis of this note-gap relationship, and that song timing comprises a mixture of these sources.
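The note-gap relationship above boils down to correlating each gap with its preceding note after removing slow drift. A sketch with synthetic durations (the linear-drift removal is an illustrative simplification, not the paper's exact procedure):

```python
import numpy as np

def detrended_note_gap_corr(notes, gaps, times):
    """Pearson correlation between note durations and following-gap durations
    after regressing out a linear within-day drift against time of day."""
    times = np.asarray(times, float)

    def detrend(y):
        slope, intercept = np.polyfit(times, y, 1)
        return y - (slope * times + intercept)

    n = detrend(np.asarray(notes, float))
    g = detrend(np.asarray(gaps, float))
    return float(np.corrcoef(n, g)[0, 1])
```

Without the detrending step, a shared slow drift (e.g. both notes and gaps shortening toward evening) would inflate the apparent local note-gap correlation.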


Assuntos
Tentilhões/fisiologia , Canto , Vocalização Animal/fisiologia , Estimulação Acústica , Acústica , Animais , Masculino , Probabilidade , Análise de Regressão , Espectrografia do Som , Fatores de Tempo
9.
PLoS One ; 9(6): e99040, 2014.
Article in English | MEDLINE | ID: mdl-24932482

ABSTRACT

A dendritic spine is a very small structure (∼0.1 µm³) of a neuron that processes input timing information. Why are spines so small? Here, we provide a functional reason: the size of spines is optimal for information coding. Spines code input timing information by the probability of Ca²⁺ increases, which makes robust and sensitive information coding possible. We created a stochastic simulation model of input timing-dependent Ca²⁺ increases in a cerebellar Purkinje cell's spine. Spines used probability coding of Ca²⁺ increases rather than amplitude coding for input timing detection via stochastic facilitation, utilizing the small number of molecules in a spine volume, where information per volume appeared optimal. Probability coding of Ca²⁺ increases in a spine volume was more robust against input fluctuation and more sensitive to input numbers than amplitude coding of Ca²⁺ increases in a cell volume. Thus, stochasticity is a strategy by which neurons robustly and sensitively code information.
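The contrast between probability coding and amplitude coding can be illustrated with a binomial toy model (illustrative numbers only, far simpler than the paper's stochastic simulation): with few molecules, each trial's response is nearly all-or-none and the information sits in the response probability across trials.

```python
import numpy as np

def response_stats(p_open, n_channels, n_trials=10000, threshold=0.5, seed=0):
    """Simulate trial-by-trial Ca2+ responses from stochastic channels.

    Returns (event_probability, amplitude_cv): with a small n_channels the
    per-trial amplitude is noisy (high coefficient of variation), but the
    probability of crossing threshold still tracks the input strength,
    i.e. probability coding.
    """
    rng = np.random.default_rng(seed)
    opened = rng.binomial(n_channels, p_open, size=n_trials)
    amp = opened / n_channels                      # per-trial response amplitude
    prob = float(np.mean(amp >= threshold))        # probability code readout
    cv = float(np.std(amp) / np.mean(amp))         # amplitude code reliability
    return prob, cv
```

Because the per-trial amplitude CV scales roughly as 1/sqrt(n_channels), a spine-sized pool gives noisy amplitudes but a well-behaved event probability, whereas a cell-sized pool gives reliable amplitudes.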


Subjects
Calcium/metabolism, Dendritic Spines/physiology, Purkinje Cells/physiology, Animals, Electrical Synapses/physiology, Male, Mice, Neurological Models, Neurons, Stochastic Processes
10.
Neuroreport ; 25(8): 562-8, 2014 May 28.
Article in English | MEDLINE | ID: mdl-24642952

ABSTRACT

Birdsong is an excellent research model for sound sequences with complex structures. Neural and behavioral experiments have shown that auditory feedback is necessary for songbirds, especially Bengalese finches, to maintain song quality, and that the nucleus HVC (used as a proper name) and the anterior forebrain pathway (AFP) play key roles in this maintenance process. Neurons in the HVC and AFP exhibit higher spike rates to the bird's own song (BOS) than to other sound stimuli, such as a temporally reversed song. To systematically evaluate which aspects of the BOS are captured by the different types of neural activity, both the average spike rate and the trial-to-trial spike timing variability of BOS-selective neurons in the HVC and Area X (used as a proper name), a gateway from the HVC to the AFP, were investigated following the presentation of auditory stimuli consisting of the BOS with systematic temporal inversion. Within-subjects analysis of the average spike rate and spike timing revealed that neural activity in the HVC and Area X is more sensitive to the local sound modulation of songs than to the global amplitude modulation. In addition, neurons in the HVC exhibited greater consistency of spike timing than neurons in Area X.


Subjects
Auditory Pathways/physiology, Auditory Perception/physiology, Telencephalon/physiology, Animal Vocalization/physiology, Acoustic Stimulation, Action Potentials/physiology, Animals, Finches, Male, Neurons/physiology, Sound, Telencephalon/cytology, Time Factors
11.
Neural Netw ; 43: 114-24, 2013 Jul.
Article in English | MEDLINE | ID: mdl-23500505

ABSTRACT

Cerebellar long-term depression (LTD) and cortical spike-timing-dependent synaptic plasticity (STDP) are two well-known and well-characterized types of synaptic plasticity. Induction of both types of synaptic plasticity depends on the spike timing, pairing frequency, and pairing numbers of two different sources of spiking. This implies that the induction of synaptic plasticity may share common frameworks in terms of signal processing regardless of the different signaling pathways involved in the two types of synaptic plasticity. Here we propose that both types share common frameworks of signal processing for spike-timing, pairing-frequency, and pairing-numbers detection. We developed system models of both types of synaptic plasticity and analyzed signal processing in the induction of synaptic plasticity. We found that both systems have upstream subsystems for spike-timing detection and downstream subsystems for pairing-frequency and pairing-numbers detection. The upstream systems used multiplication of signals from the feedback filters and nonlinear functions for spike-timing detection. The downstream subsystems used temporal filters with longer time constants for pairing-frequency detection and nonlinear switch-like functions for pairing-numbers detection, indicating that the downstream subsystems serve as a leaky integrate-and-fire system. Thus, our findings suggest that a common conceptual framework for the induction of synaptic plasticity exists despite the differences in molecular species and pathways.
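The downstream "leaky integrate-and-fire"-like subsystem described above can be sketched as a leaky temporal filter over pairing events followed by a sigmoidal switch. Time constants, thresholds, and gains below are illustrative assumptions, not fitted model parameters:

```python
import numpy as np

def plasticity_switch(pairing_times, tau=10.0, threshold=3.0, gain=4.0):
    """Leaky integration of pairing events, then a sigmoidal switch.

    A slow leak (large tau) makes the peak of the integrated signal
    sensitive to pairing frequency; the steep sigmoid converts that peak
    into a near all-or-none decision on pairing numbers.
    """
    t = np.arange(0.0, max(pairing_times) + 5 * tau, 0.1)
    x = np.zeros_like(t)
    for pt in pairing_times:
        x += np.exp(-(t - pt) / tau) * (t >= pt)   # leaky trace per pairing
    peak = x.max()
    return 1.0 / (1.0 + np.exp(-gain * (peak - threshold)))  # switch output
```

Ten pairings at short intervals accumulate before leaking away and flip the switch toward 1, while the same ten pairings spread far apart leave it near 0, reproducing the joint frequency/number dependence the abstract describes.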


Subjects
Action Potentials/physiology, Cerebellum/physiology, Learning/physiology, Neuronal Plasticity/physiology, Synapses/physiology, Cell Communication/physiology