Results 1 - 8 of 8
1.
J Speech Lang Hear Res ; 63(10): 3539-3559, 2020 Oct 16.
Article in English | MEDLINE | ID: mdl-32936717

ABSTRACT

Purpose: From an anthropological perspective of hominin communication, the human auditory system likely evolved to enable special sensitivity to sounds produced by the vocal tracts of human conspecifics, whether attended or passively heard. While numerous electrophysiological studies have used stereotypical human-produced verbal (speech voice and singing voice) and nonverbal vocalizations to identify human voice-sensitive responses, controversy remains as to when (and where) processing of acoustic signal attributes characteristic of "human voiceness" per se initiates in the brain. Method: To explore this, we used animal vocalizations and human-mimicked versions of those calls ("mimic voice") to examine late auditory evoked potential responses in humans. Results: Here, we revealed an N1b component (96-120 ms poststimulus) during a nonattending listening condition showing significantly greater magnitude in response to mimics, beginning as early as primary auditory cortices, preceding the time window reported in previous studies that revealed species-specific vocalization processing initiating in the range of 147-219 ms. During a sound discrimination task, a P600 (500-700 ms poststimulus) component showed specificity for accurate discrimination of human mimic voice. Distinct acoustic signal attributes and features of the stimuli were used in a classifier model, which could distinguish most human from animal voices comparably to behavioral data, though none of these single features could adequately distinguish human voiceness. Conclusions: These results provide novel ideas for algorithms used in neuromimetic hearing aids, as well as direct electrophysiological support for a neurocognitive model of natural sound processing that informs both neurodevelopmental and anthropological models regarding the establishment of auditory communication systems in humans. Supplemental Material: https://doi.org/10.23641/asha.12903839.
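The classifier-model finding above, that a combination of simple acoustic features can separate human mimic voice from animal vocalizations while no single feature suffices, can be illustrated with a minimal sketch. The particular features (zero-crossing rate, RMS energy), the nearest-centroid rule, and the synthetic "mimic"/"animal" signals here are illustrative assumptions, not the study's actual feature set or model:

```python
import math
import random

def zero_crossing_rate(x):
    """Fraction of adjacent sample pairs whose sign differs."""
    return sum((a >= 0) != (b >= 0) for a, b in zip(x, x[1:])) / (len(x) - 1)

def rms(x):
    """Root-mean-square energy of the signal."""
    return math.sqrt(sum(v * v for v in x) / len(x))

def features(x):
    return (zero_crossing_rate(x), rms(x))

def nearest_centroid(train, test):
    """Assign the label whose mean feature vector is closest to `test`."""
    labels = {label for _, label in train}
    def centroid(label):
        feats = [f for f, l in train if l == label]
        return tuple(sum(dim) / len(dim) for dim in zip(*feats))
    return min(labels, key=lambda l: math.dist(test, centroid(l)))

def harmonic_tone(f0, n=4000, rate=8000):
    """Crude stand-in for a voiced 'mimic' call: first three harmonics of f0."""
    return [sum(math.sin(2 * math.pi * f0 * k * t / rate) for k in (1, 2, 3)) / 3
            for t in range(n)]

def noise_burst(seed, n=4000):
    """Crude stand-in for a broadband animal call."""
    rng = random.Random(seed)
    return [rng.uniform(-1, 1) for _ in range(n)]

# Train on a few examples of each category, classify an unseen tone.
train = ([(features(harmonic_tone(f0)), "mimic") for f0 in (120, 150, 180)]
         + [(features(noise_burst(s)), "animal") for s in (1, 2, 3)])
print(nearest_centroid(train, features(harmonic_tone(200))))  # prints "mimic"
```

The harmonic tone's low zero-crossing rate and lower RMS place it near the "mimic" centroid in feature space; a real model would of course use many more features over real recordings.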


Subjects
Auditory Cortex, Voice, Acoustic Stimulation, Animals, Auditory Perception, Evoked Potentials, Auditory, Humans
2.
J Vis Exp ; (103)2015 Sep 03.
Article in English | MEDLINE | ID: mdl-26384034

ABSTRACT

The study of neuromuscular control of movement in humans is accomplished with numerous technologies. Non-invasive methods for investigating neuromuscular function include transcranial magnetic stimulation, electromyography, and three-dimensional motion capture. The advent of readily available and cost-effective virtual reality solutions has expanded researchers' capabilities for recreating "real-world" environments and movements in a laboratory setting. Naturalistic movement analysis will not only garner a greater understanding of motor control in healthy individuals, but also permit the design of experiments and rehabilitation strategies that target specific motor impairments (e.g., stroke). The combined use of these tools will lead to an increasingly deep understanding of the neural mechanisms of motor control. A key requirement when combining these data acquisition systems is fine temporal correspondence between the various data streams. This protocol describes a multifunctional system's overall connectivity, intersystem signaling, and the temporal synchronization of recorded data. Synchronization of the component systems is primarily accomplished through the use of a customizable circuit, readily made with off-the-shelf components and minimal electronics-assembly skills.
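The temporal-correspondence requirement above can be sketched in software terms: if every acquisition system records the same sync pulse from the shared circuit, a common time base is recovered by detecting the pulse in each stream and offsetting one stream's timestamps. The sampling rates, pulse shapes, and threshold below are illustrative assumptions, not values from the protocol:

```python
def pulse_onset(trigger, threshold=0.5):
    """Index of the first sample at or above threshold (sync pulse onset)."""
    for i, v in enumerate(trigger):
        if v >= threshold:
            return i
    raise ValueError("no sync pulse found in trigger channel")

def alignment_offset(trig_a, rate_a, trig_b, rate_b):
    """Seconds to subtract from stream B's timestamps so both pulses coincide."""
    return pulse_onset(trig_b) / rate_b - pulse_onset(trig_a) / rate_a

# Illustration: EMG trigger channel at 2000 Hz, motion capture at 100 Hz.
emg_trig = [0.0] * 500 + [5.0] * 20   # pulse begins at t = 0.25 s in EMG time
mocap_trig = [0.0] * 40 + [5.0] * 2   # pulse begins at t = 0.40 s in mocap time
offset = alignment_offset(emg_trig, 2000, mocap_trig, 100)
print(offset)  # ~0.15 s: the mocap clock started about 0.15 s before the EMG clock
```

Subtracting this offset from every motion-capture timestamp puts both streams on the EMG clock; the same pulse detected on the TMS and video channels would be handled identically.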


Subjects
Electromyography/methods, Movement/physiology, Neuromuscular Junction/physiology, Transcranial Magnetic Stimulation/methods, Biomechanical Phenomena, Computer Simulation, Electromyography/instrumentation, Humans, Imaging, Three-Dimensional/instrumentation, Imaging, Three-Dimensional/methods, Transcranial Magnetic Stimulation/instrumentation, Video Recording/instrumentation, Video Recording/methods
3.
Hear Res ; 305: 74-85, 2013 Nov.
Article in English | MEDLINE | ID: mdl-23994296

ABSTRACT

Humans and several non-human primates possess cortical regions that are most sensitive to vocalizations produced by their own kind (conspecifics). However, the use of speech and other broadly defined categories of behaviorally relevant natural sounds has led to many discrepancies regarding where voice-sensitivity occurs, and more generally regarding the identification of cortical networks, "proto-networks" or protolanguage networks, and pathways that may be sensitive or selective for certain aspects of vocalization processing. In this prospective review we examine different approaches for exploring vocal communication processing, including pathways that may be, or become, specialized for conspecific utterances. In particular, we address the use of naturally produced non-stereotypical vocalizations (mimicry of other animal calls) as another category of vocalization for use with human and non-human primate auditory systems. We focus this review on two main themes: progress and future ideas for studying vocalization processing in great apes (chimpanzees), and in the very early stages of human development, including infants and fetuses. Advancing our understanding of the fundamental principles that govern the evolution and early development of cortical pathways for processing non-verbal communication utterances is expected to lead to better diagnoses and early intervention strategies for children with communication disorders, improved rehabilitation of communication disorders resulting from brain injury, and new strategies for intelligent hearing aid and implant design that can better enhance speech signals in noisy environments. This article is part of a Special Issue entitled "Communication Sounds and the Brain: New Directions and Perspectives".


Subjects
Auditory Cortex/physiology, Auditory Perception, Pan troglodytes/physiology, Pattern Recognition, Physiological, Speech, Vocalization, Animal, Voice, Acoustic Stimulation, Age Factors, Aging, Animals, Humans, Noise/adverse effects, Pan troglodytes/psychology, Perceptual Masking, Species Specificity, Speech Perception
4.
J Neurosci ; 32(23): 8084-93, 2012 Jun 06.
Article in English | MEDLINE | ID: mdl-22674283

ABSTRACT

Numerous species possess cortical regions that are most sensitive to vocalizations produced by their own kind (conspecifics). In humans, the superior temporal sulci (STSs) putatively represent homologous voice-sensitive areas of cortex. However, superior temporal sulcus (STS) regions have recently been reported to represent auditory experience or "expertise" in general, rather than showing exclusive sensitivity to human vocalizations per se. Using functional magnetic resonance imaging and a unique non-stereotypical category of complex human non-verbal vocalizations (human-mimicked versions of animal vocalizations), we found a cortical hierarchy in humans optimized for processing meaningful conspecific utterances. This left-lateralized hierarchy originated near primary auditory cortices and progressed into traditional speech-sensitive areas. Our results suggest that the cortical regions supporting vocalization perception are initially organized by sensitivity to the human vocal tract, in stages before the STS. Additionally, these findings have implications for the developmental time course of conspecific vocalization processing in humans, as well as for its evolutionary origins.


Subjects
Cerebral Cortex/physiology, Communication, Vocalization, Animal, Adult, Animals, Auditory Perception/physiology, Entropy, Female, Functional Laterality/physiology, Humans, Image Processing, Computer-Assisted, Magnetic Resonance Imaging, Male, Mental Processes, Oxygen/blood, Psychophysics, Speech, Young Adult
5.
Front Syst Neurosci ; 6: 27, 2012.
Article in English | MEDLINE | ID: mdl-22582038

ABSTRACT

Whether viewed or heard, an object in action can be segmented as a distinct salient event based on a number of different sensory cues. In the visual system, several low-level attributes of an image are processed along parallel hierarchies, involving intermediate stages wherein gross-level object form and/or motion features are extracted prior to stages that show greater specificity for different object categories (e.g., people, buildings, or tools). In the auditory system, though relying on a rather different set of low-level signal attributes, meaningful real-world acoustic events and "auditory objects" can also be readily distinguished from background scenes. However, the nature of the acoustic signal attributes or gross-level perceptual features that may be explicitly processed along intermediate cortical processing stages remains poorly understood. Examining mechanical and environmental action sounds, representing two distinct non-biological categories of action sources, we had participants assess the degree to which each sound was perceived as object-like versus scene-like. We re-analyzed data from two of our earlier functional magnetic resonance imaging (fMRI) task paradigms (Engel et al., 2009) and found that scene-like action sounds preferentially led to activation along several midline cortical structures, but with strong dependence on listening task demands. In contrast, bilateral foci along the superior temporal gyri (STG) showed parametrically increasing activation to action sounds rated as more "object-like," independent of sound category or task demands. Moreover, these STG regions also showed parametric sensitivity to spectral structure variations (SSVs) of the action sounds, a quantitative measure of change in entropy of the acoustic signals over time, and the right STG additionally showed parametric sensitivity to measures of mean entropy and harmonic content of the environmental sounds. Analogous to the visual system, intermediate stages of the auditory system appear to process or extract a number of quantifiable low-order signal attributes that are characteristic of action events perceived as being object-like, representing stages that may begin to dissociate different perceptual dimensions and categories of everyday, real-world action sounds.
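The SSV measure described above, change in entropy of the acoustic signal over time, can be sketched as spectral entropy computed over short frames, with SSV taken as the mean frame-to-frame entropy change. The frame length and the naive DFT below are illustrative simplifications, not necessarily the study's exact computation:

```python
import math

def power_spectrum(frame):
    """Naive DFT power spectrum; fine for short frames."""
    n = len(frame)
    spectrum = []
    for k in range(n // 2):
        re = sum(x * math.cos(2 * math.pi * k * t / n) for t, x in enumerate(frame))
        im = sum(x * math.sin(2 * math.pi * k * t / n) for t, x in enumerate(frame))
        spectrum.append(re * re + im * im)
    return spectrum

def spectral_entropy(frame):
    """Shannon entropy (nats) of the normalized power spectrum."""
    p = power_spectrum(frame)
    total = sum(p) or 1.0
    return -sum((v / total) * math.log(v / total) for v in p if v > 0)

def ssv(signal, frame_len=64):
    """Spectral structure variation: mean |change| in per-frame spectral entropy."""
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, frame_len)]
    ents = [spectral_entropy(f) for f in frames]
    return sum(abs(a - b) for a, b in zip(ents, ents[1:])) / max(len(ents) - 1, 1)
```

A steady tone yields near-zero SSV (its spectrum, hence entropy, is the same in every frame), whereas a sound that alternates between tonal and spectrally dense frames yields a large SSV, matching the intuition that object-like events change spectral structure over time.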

6.
Hum Brain Mapp ; 32(12): 2241-55, 2011 Dec.
Article in English | MEDLINE | ID: mdl-21305666

ABSTRACT

Both sighted and blind individuals can readily interpret meaning behind everyday real-world sounds. In sighted listeners, we previously reported that regions along the bilateral posterior superior temporal sulci (pSTS) and middle temporal gyri (pMTG) are preferentially activated when presented with recognizable action sounds. These regions have generally been hypothesized to represent primary loci for complex motion processing, including visual biological motion processing and audio-visual integration. However, it remained unclear whether, or to what degree, life-long visual experience might impact functions related to hearing perception or memory of sound-source actions. Using functional magnetic resonance imaging (fMRI), we compared brain regions activated in congenitally blind versus sighted listeners in response to hearing a wide range of recognizable human-produced action sounds (excluding vocalizations) versus unrecognized, backward-played versions of those sounds. Here, we show that recognized human action sounds commonly evoked activity in both groups along most of the left pSTS/pMTG complex, though with relatively greater activity in the right pSTS/pMTG by the blind group. These results indicate that portions of the postero-lateral temporal cortices contain domain-specific hubs for biological and/or complex motion processing independent of sensory-modality experience. Contrasting the two groups, the sighted listeners preferentially activated bilateral parietal plus medial and lateral frontal networks, whereas the blind listeners preferentially activated left anterior insula plus bilateral anterior calcarine and medial occipital regions, including what would otherwise have been visual-related cortex. These global-level network differences suggest that blind and sighted listeners may preferentially use different memory retrieval strategies when hearing and attempting to recognize action sounds.


Subjects
Auditory Perception/physiology, Blindness/physiopathology, Brain Mapping, Cerebral Cortex/physiology, Recognition, Psychological/physiology, Adult, Evoked Potentials, Auditory/physiology, Female, Humans, Image Interpretation, Computer-Assisted, Magnetic Resonance Imaging, Male, Memory, Episodic, Middle Aged, Sound
7.
J Cogn Neurosci ; 23(8): 2079-101, 2011 Aug.
Article in English | MEDLINE | ID: mdl-20812786

ABSTRACT

In contrast to visual object processing, relatively little is known about how the human brain processes everyday real-world sounds, transforming highly complex acoustic signals into representations of meaningful events or auditory objects. We recently reported a fourfold cortical dissociation for representing action (nonvocalization) sounds correctly categorized as having been produced by human, animal, mechanical, or environmental sources. However, it was unclear how consistent those network representations were across individuals, given potential differences between each participant's degree of familiarity with the studied sounds. Moreover, it was unclear what, if any, auditory perceptual attributes might further distinguish the four conceptual sound-source categories, potentially revealing what might drive the cortical network organization for representing acoustic knowledge. Here, we used functional magnetic resonance imaging to test participants before and after extensive listening experience with action sounds, and tested for cortices that might be sensitive to each of three different high-level perceptual attributes relating to how a listener associates or interacts with the sound source. These included the sound's perceived concreteness, effectuality (ability to be affected by the listener), and spatial scale. Despite some variation of networks for environmental sounds, our results verified the stability of a fourfold dissociation of category-specific networks for real-world action sounds both before and after familiarity training. Additionally, we identified cortical regions parametrically modulated by each of the three high-level perceptual sound attributes. We propose that these attributes contribute to the network-level encoding of category-specific acoustic knowledge representations.


Subjects
Auditory Perception/physiology, Brain Mapping, Cerebral Cortex/physiology, Recognition, Psychological, Sound, Acoustic Stimulation, Adult, Cerebral Cortex/blood supply, Environment, Female, Functional Laterality, Humans, Image Processing, Computer-Assisted, Magnetic Resonance Imaging/methods, Male, Nerve Net/blood supply, Nerve Net/physiology, Neural Pathways/blood supply, Neural Pathways/physiology, Oxygen/blood, Semantics, Young Adult
8.
J Neurosci ; 29(7): 2283-96, 2009 Feb 18.
Article in English | MEDLINE | ID: mdl-19228981

ABSTRACT

The ability to detect and rapidly process harmonic sounds, which in nature are typical of animal vocalizations and speech, can be critical for communication among conspecifics and for survival. Single-unit studies have reported neurons in auditory cortex sensitive to specific combinations of frequencies (e.g., harmonics), theorized to rapidly abstract or filter for specific structures of incoming sounds, where large ensembles of such neurons may constitute spectral templates. We studied the contribution of harmonic structure to activation of putative spectral templates in human auditory cortex by using a wide variety of animal vocalizations, as well as artificially constructed iterated rippled noises (IRNs). Both the IRNs and vocalization sounds were quantitatively characterized by calculating a global harmonics-to-noise ratio (HNR). Using functional MRI, we identified HNR-sensitive regions when presenting artificial IRNs and/or recordings of natural animal vocalizations. This activation included regions situated between functionally defined primary auditory cortices and regions preferential for processing human nonverbal vocalizations or speech sounds. These results demonstrate that the HNR of a sound reflects an important second-order acoustic signal attribute that parametrically activates distinct pathways of human auditory cortex. Thus, these results provide novel support for the presence of spectral templates, which may subserve a major role in the hierarchical processing of vocalizations as a distinct category of behaviorally relevant sound.
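The global harmonics-to-noise ratio used above can be sketched with a standard autocorrelation-based estimate (in the spirit of Boersma's classic formulation). This simplified version, including the candidate lag range and the test signals, is an illustrative assumption rather than the study's exact computation:

```python
import math
import random

def hnr_db(signal, min_lag=20, max_lag=400):
    """Autocorrelation-based harmonics-to-noise ratio estimate, in dB.

    r is the maximum normalized autocorrelation over candidate pitch lags;
    HNR = 10*log10(r / (1 - r)): high for strongly periodic (harmonic)
    signals, low or negative for noise-dominated signals.
    """
    n = len(signal)
    mean = sum(signal) / n
    x = [v - mean for v in signal]
    r0 = sum(v * v for v in x) or 1.0
    best = 0.0
    for lag in range(min_lag, min(max_lag, n - 1)):
        r = sum(x[t] * x[t + lag] for t in range(n - lag)) / r0
        best = max(best, r)
    best = min(best, 1 - 1e-9)   # guard against log blow-up for perfect periodicity
    if best <= 0:
        return -math.inf         # no periodicity detected at any candidate lag
    return 10 * math.log10(best / (1 - best))

# Illustration: a sustained periodic tone versus broadband noise.
tone = [math.sin(2 * math.pi * t / 100) for t in range(2000)]
rng = random.Random(0)
noise = [rng.uniform(-1, 1) for _ in range(2000)]
print(hnr_db(tone), hnr_db(noise))
```

For these inputs the tone scores well above 0 dB while the noise scores negative, so a parametric sweep of HNR (as with the IRN stimuli) maps onto a single scalar per sound.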


Subjects
Auditory Cortex/anatomy & histology, Auditory Cortex/physiology, Evoked Potentials, Auditory/physiology, Speech Acoustics, Speech Perception/physiology, Vocalization, Animal/physiology, Acoustic Stimulation, Adolescent, Adult, Animals, Auditory Pathways/anatomy & histology, Auditory Pathways/physiology, Brain Mapping, Female, Humans, Magnetic Resonance Imaging, Male, Neuropsychological Tests, Pitch Perception, Signal Processing, Computer-Assisted, Sound Spectrography, Species Specificity, Young Adult