Results 1 - 6 of 6
1.
J Neurosci ; 32(23): 8084-93, 2012 Jun 06.
Article in English | MEDLINE | ID: mdl-22674283

ABSTRACT

Numerous species possess cortical regions that are most sensitive to vocalizations produced by their own kind (conspecifics). In humans, the superior temporal sulci (STSs) putatively represent homologous voice-sensitive areas of cortex. However, superior temporal sulcus (STS) regions have recently been reported to represent auditory experience or "expertise" in general rather than showing exclusive sensitivity to human vocalizations per se. Using functional magnetic resonance imaging and a unique non-stereotypical category of complex human non-verbal vocalizations (human-mimicked versions of animal vocalizations), we found a cortical hierarchy in humans optimized for processing meaningful conspecific utterances. This left-lateralized hierarchy originated near primary auditory cortices and progressed into traditional speech-sensitive areas. Our results suggest that the cortical regions supporting vocalization perception are initially organized by sensitivity to the human vocal tract in stages before the STS. Additionally, these findings have implications for the developmental time course of conspecific vocalization processing in humans as well as its evolutionary origins.


Subjects
Cerebral Cortex/physiology, Communication, Animal Vocalization, Adult, Animals, Auditory Perception/physiology, Entropy, Female, Functional Laterality/physiology, Humans, Computer-Assisted Image Processing, Magnetic Resonance Imaging, Male, Mental Processes, Oxygen/blood, Psychophysics, Speech, Young Adult
2.
Lang Cogn Neurosci ; 36(6): 773-790, 2021.
Article in English | MEDLINE | ID: mdl-34568509

ABSTRACT

Higher cognitive functions such as linguistic comprehension must ultimately relate to perceptual systems in the brain, though how and why these links form remains unclear. The distinct brain networks that mediate perception of real-world natural sounds have recently been proposed to respect a taxonomic model of acoustic-semantic categories. Using functional magnetic resonance imaging (fMRI) with Chinese/English bilingual listeners, the present study explored whether reception of short spoken phrases, in both Chinese (Mandarin) and English, describing corresponding sound-producing events would engage overlapping brain regions at a semantic category level. The results revealed a double dissociation of cortical regions that were preferential for representing knowledge of human versus environmental action events, whether conveyed through natural sounds or through the corresponding spoken phrases in either language. These findings of cortical hubs exhibiting linguistic-perceptual knowledge links at a semantic category level should help to advance neurocomputational models of the neurodevelopment of language systems.

3.
J Speech Lang Hear Res ; 63(10): 3539-3559, 2020 10 16.
Article in English | MEDLINE | ID: mdl-32936717

ABSTRACT

Purpose: From an anthropological perspective of hominin communication, the human auditory system likely evolved to enable special sensitivity to sounds produced by the vocal tracts of human conspecifics, whether attended or passively heard. While numerous electrophysiological studies have used stereotypical human-produced verbal (speech and singing) and nonverbal vocalizations to identify human voice-sensitive responses, controversy remains as to when (and where) processing of the acoustic signal attributes characteristic of "human voiceness" per se initiates in the brain.
Method: To explore this, we used animal vocalizations and human-mimicked versions of those calls ("mimic voice") to examine late auditory evoked potential responses in humans.
Results: We revealed an N1b component (96-120 ms poststimulus) during a nonattending listening condition that showed significantly greater magnitude in response to mimics, beginning as early as primary auditory cortices and preceding the time window reported in previous studies, which found species-specific vocalization processing initiating in the range of 147-219 ms. During a sound discrimination task, a P600 component (500-700 ms poststimulus) showed specificity for accurate discrimination of the human mimic voice. Distinct acoustic signal attributes and features of the stimuli were used in a classifier model, which could distinguish most human from animal voices comparably to the behavioral data, though no single feature could adequately distinguish human voiceness.
Conclusions: These results provide novel ideas for algorithms used in neuromimetic hearing aids, as well as direct electrophysiological support for a neurocognitive model of natural sound processing that informs both neurodevelopmental and anthropological models regarding the establishment of auditory communication systems in humans.
Supplemental Material: https://doi.org/10.23641/asha.12903839
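The classifier model above combined multiple acoustic signal attributes to separate human mimic voices from animal vocalizations; the abstract does not specify the features or algorithm, so the following is a toy sketch of the general approach, assuming illustrative features (RMS energy and spectral centroid) and a simple nearest-centroid rule:

```python
import numpy as np

def features(signal):
    """Toy acoustic feature vector: RMS energy and normalized spectral
    centroid. (Illustrative only; the study used a richer feature set.)"""
    power = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal))  # normalized frequency, 0..0.5
    centroid = (freqs * power).sum() / power.sum()
    rms = np.sqrt(np.mean(signal ** 2))
    return np.array([rms, centroid])

class NearestCentroid:
    """Minimal nearest-centroid classifier over feature vectors."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.array(
            [X[y == c].mean(axis=0) for c in self.classes_])
        return self
    def predict(self, X):
        # Euclidean distance from each sample to each class centroid.
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)
        return self.classes_[d.argmin(axis=1)]

# Synthetic stand-ins: low-frequency tones (class 0) vs. high-frequency
# tones (class 1), which differ cleanly in spectral centroid.
t = np.arange(2048) / 16000
X = np.array([features(np.sin(2 * np.pi * f * t))
              for f in [200, 220, 240, 2000, 2200, 2400]])
y = np.array([0, 0, 0, 1, 1, 1])
clf = NearestCentroid().fit(X, y)
pred = clf.predict(X)
```

A real analysis would train and evaluate on held-out recordings, with features such as entropy and harmonics-to-noise ratio, and compare the model's accuracy against listeners' behavioral discrimination.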


Subjects
Auditory Cortex, Voice, Acoustic Stimulation, Animals, Auditory Perception, Auditory Evoked Potentials, Humans
4.
Brain Lang ; 183: 64-78, 2018 08.
Article in English | MEDLINE | ID: mdl-29966815

ABSTRACT

Oral mimicry is thought to represent an essential process for the neurodevelopment of spoken language systems in infants and for the evolution of language in hominins, and it may also aid recovery in stroke patients. Using functional magnetic resonance imaging (fMRI), we previously reported a divergence of auditory cortical pathways mediating perception of specific categories of natural sounds. However, it remained unclear if or how this fundamental sensory organization by the brain might relate to motor output, such as sound mimicry. Here, using fMRI, we revealed a dissociation of activated brain regions preferential for hearing with the intent to imitate and the oral mimicry of animal action sounds versus animal vocalizations as distinct acoustic-semantic categories. This functional dissociation may reflect components of a rudimentary cortical architecture that links systems for processing acoustic-semantic universals of natural sound with motor-related systems mediating oral mimicry at a category level. The observation of different brain regions involved in different aspects of oral mimicry may inform targeted therapies for rehabilitation of functional abilities after stroke.


Subjects
Auditory Pathways/diagnostic imaging, Auditory Perception/physiology, Hearing/physiology, Imitative Behavior/physiology, Acoustic Stimulation/methods, Adult, Auditory Pathways/physiology, Brain Mapping, Cerebral Cortex/physiology, Female, Humans, Magnetic Resonance Imaging/methods, Male, Semantics, Sound, Young Adult
5.
Front Neurosci ; 10: 579, 2016.
Article in English | MEDLINE | ID: mdl-28111538

ABSTRACT

A major gap in our understanding of natural sound processing is knowledge of where or how in a cortical hierarchy differential processing leads to categorical perception at a semantic level. Here, using functional magnetic resonance imaging (fMRI), we sought to determine if and where cortical pathways in humans might diverge for processing action sounds vs. vocalizations as distinct acoustic-semantic categories of real-world sound when matched for duration and intensity. This was tested by using less semantically complex natural sounds produced by non-conspecific animals rather than by humans. Our results revealed a striking double dissociation of activated networks bilaterally. This included a previously well-described pathway preferential for processing vocalization signals, directed laterally from functionally defined primary auditory cortices to the anterior superior temporal gyri, and a less well-described pathway preferential for processing animal action sounds, directed medially to the posterior insulae. We additionally found that some of these regions and associated cortical networks showed parametric sensitivity to high-order quantifiable acoustic signal attributes and/or to perceptual features of the natural stimuli, such as the degree of perceived recognition or intentional understanding. Overall, these results supported a neurobiological theoretical framework for how the mammalian brain may be fundamentally organized to process acoustically and acoustic-semantically distinct categories of ethologically valid, real-world sounds.
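The parametric sensitivity analyses mentioned above relate a per-stimulus acoustic attribute to a per-stimulus response amplitude. A minimal least-squares sketch of that idea follows; the actual fMRI parametric-modulation pipeline is considerably more involved, and the variable names and synthetic data here are assumptions for illustration:

```python
import numpy as np

def parametric_fit(attribute, response):
    """Least-squares slope and intercept relating a per-stimulus acoustic
    attribute (e.g., mean entropy) to a per-stimulus response amplitude.
    A region is 'parametrically sensitive' if the slope is reliably
    nonzero; R^2 serves as a simple effect-size summary."""
    slope, intercept = np.polyfit(attribute, response, 1)
    residuals = response - (slope * attribute + intercept)
    r2 = 1 - residuals.var() / response.var()
    return slope, r2

# Synthetic example: responses that increase linearly with the
# attribute, plus Gaussian noise.
rng = np.random.default_rng(0)
attr = np.linspace(0, 1, 40)
resp = 2.0 * attr + rng.normal(0, 0.1, 40)
slope, r2 = parametric_fit(attr, resp)
```

In a real study the "response" would be a per-condition beta estimate from a general linear model, and significance would be assessed across participants rather than from a single fit.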

6.
Front Syst Neurosci ; 6: 27, 2012.
Article in English | MEDLINE | ID: mdl-22582038

ABSTRACT

Whether viewed or heard, an object in action can be segmented as a distinct salient event based on a number of different sensory cues. In the visual system, several low-level attributes of an image are processed along parallel hierarchies, involving intermediate stages wherein gross-level object form and/or motion features are extracted prior to stages that show greater specificity for different object categories (e.g., people, buildings, or tools). In the auditory system, though it relies on a rather different set of low-level signal attributes, meaningful real-world acoustic events and "auditory objects" can also be readily distinguished from background scenes. However, the nature of the acoustic signal attributes or gross-level perceptual features that may be explicitly processed along intermediate cortical processing stages remains poorly understood. Examining mechanical and environmental action sounds, representing two distinct non-biological categories of action sources, we had participants assess the degree to which each sound was perceived as object-like versus scene-like. We re-analyzed data from two of our earlier functional magnetic resonance imaging (fMRI) task paradigms (Engel et al., 2009) and found that scene-like action sounds preferentially led to activation along several midline cortical structures, but with strong dependence on listening task demands. In contrast, bilateral foci along the superior temporal gyri (STG) showed parametrically increasing activation to action sounds rated as more "object-like," independent of sound category or task demands. Moreover, these STG regions also showed parametric sensitivity to spectral structure variations (SSVs) of the action sounds (a quantitative measure of change in entropy of the acoustic signals over time), and the right STG additionally showed parametric sensitivity to measures of mean entropy and harmonic content of the environmental sounds.
Analogous to the visual system, intermediate stages of the auditory system appear to process or extract a number of quantifiable low-order signal attributes that are characteristic of action events perceived as object-like, representing stages that may begin to dissociate different perceptual dimensions and categories of everyday, real-world action sounds.
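The spectral structure variation (SSV) measure is described above as change in entropy of the acoustic signal over time. One plausible operationalization, with the windowing parameters and entropy definition as assumptions, is framewise spectral entropy and its variation across frames:

```python
import numpy as np

def spectral_entropy(frame):
    """Shannon entropy (bits) of the normalized power spectrum of one frame."""
    power = np.abs(np.fft.rfft(frame)) ** 2
    p = power / power.sum()
    p = p[p > 0]  # drop zero bins to avoid log(0)
    return -np.sum(p * np.log2(p))

def ssv(signal, frame_len=1024, hop=512):
    """Spectral structure variation: standard deviation (and mean) of
    framewise spectral entropy over time. Frame/hop sizes are assumed."""
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, hop)]
    entropies = np.array([spectral_entropy(f) for f in frames])
    return entropies.std(), entropies.mean()

# A pure tone concentrates power in one spectral bin (low entropy),
# while white noise spreads power across bins (high entropy).
fs = 16000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 440 * t)
noise = np.random.default_rng(0).standard_normal(fs)
_, tone_H = ssv(tone)
_, noise_H = ssv(noise)
```

A harmonic, structured sound thus yields lower mean spectral entropy than noise, and a sound whose spectral structure changes over time yields a higher SSV than a steady one.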
