Results 1 - 12 of 12
1.
J Speech Lang Hear Res; 63(10): 3539-3559, 2020 Oct 16.
Article in English | MEDLINE | ID: mdl-32936717

ABSTRACT

Purpose: From an anthropological perspective of hominin communication, the human auditory system likely evolved to enable special sensitivity to sounds produced by the vocal tracts of human conspecifics, whether attended or passively heard. While numerous electrophysiological studies have used stereotypical human-produced verbal (speech voice and singing voice) and nonverbal vocalizations to identify human voice-sensitive responses, controversy remains as to when (and where) processing of acoustic signal attributes characteristic of "human voiceness" per se initiates in the brain.
Method: To explore this, we used animal vocalizations and human-mimicked versions of those calls ("mimic voice") to examine late auditory evoked potential responses in humans.
Results: We revealed an N1b component (96-120 ms poststimulus) during a nonattending listening condition showing significantly greater magnitude in response to mimics, beginning as early as primary auditory cortices, preceding the time window reported in previous studies that revealed species-specific vocalization processing initiating in the range of 147-219 ms. During a sound discrimination task, a P600 (500-700 ms poststimulus) component showed specificity for accurate discrimination of human mimic voice. Distinct acoustic signal attributes and features of the stimuli were used in a classifier model, which could distinguish most human from animal voices comparably to behavioral data, though none of these single features could adequately distinguish human voiceness.
Conclusions: These results provide novel ideas for algorithms used in neuromimetic hearing aids, as well as direct electrophysiological support for a neurocognitive model of natural sound processing that informs both neurodevelopmental and anthropological models regarding the establishment of auditory communication systems in humans.
Supplemental Material: https://doi.org/10.23641/asha.12903839
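The abstract above does not specify the classifier model in detail. As a rough illustration of how a handful of acoustic signal attributes might separate human from animal voice, below is a minimal nearest-centroid sketch in Python. The feature choices (harmonics-to-noise ratio, mean F0, spectral centroid) and all numeric values are illustrative assumptions, not the study's data or method.

```python
import numpy as np

# Hypothetical acoustic feature vectors (rows: stimuli; columns:
# harmonics-to-noise ratio in dB, mean F0 in Hz, spectral centroid in Hz).
# Values are illustrative only -- not the study's actual measurements.
human_feats = np.array([
    [18.2, 210.0, 1400.0],
    [16.9, 180.0, 1550.0],
    [19.5, 230.0, 1320.0],
])
animal_feats = np.array([
    [ 9.1, 480.0, 2900.0],
    [ 7.4, 520.0, 3100.0],
    [10.3, 450.0, 2750.0],
])

def fit_nearest_centroid(class_a, class_b):
    """Return z-scoring parameters and the per-class centroids in z-space."""
    data = np.vstack([class_a, class_b])
    mu, sd = data.mean(axis=0), data.std(axis=0)
    za = (class_a - mu) / sd
    zb = (class_b - mu) / sd
    return (mu, sd), za.mean(axis=0), zb.mean(axis=0)

def predict(x, norm, cen_a, cen_b):
    """Label 0 (class A) or 1 (class B) by nearest z-scored centroid."""
    mu, sd = norm
    z = (x - mu) / sd
    return int(np.linalg.norm(z - cen_b) < np.linalg.norm(z - cen_a))

norm, cen_h, cen_an = fit_nearest_centroid(human_feats, animal_feats)
# A held-out "mimic voice" stimulus with human-like harmonic structure:
probe = np.array([17.0, 200.0, 1450.0])
label = predict(probe, norm, cen_h, cen_an)
print("human-like" if label == 0 else "animal-like")
```

Consistent with the abstract's conclusion, no single column here would separate the classes as reliably as the combined feature vector does.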


Subjects
Auditory Cortex, Voice, Acoustic Stimulation, Animals, Auditory Perception, Auditory Evoked Potentials, Humans
2.
Autism Res; 13(4): 539-549, 2020 Apr.
Article in English | MEDLINE | ID: mdl-31944557

ABSTRACT

Many individuals with autism spectrum disorder (ASD) have been shown to perceive everyday sensory information differently from peers without autism. Research examining these sensory differences has primarily utilized nonnatural stimuli or static photos of natural stimuli, with few studies having utilized dynamic, real-world nonverbal stimuli. Therefore, in this study, we used functional magnetic resonance imaging to characterize brain activation of individuals with high-functioning autism when viewing and listening to a video of a real-world scene (a person bouncing a ball) and anticipating the bounce. We investigated both multisensory and unisensory processing and hypothesized that individuals with ASD would show differential activation in (a) primary auditory and visual sensory cortical and association areas, and in (b) cortical and subcortical regions where auditory and visual information is integrated (e.g., temporal-parietal junction, pulvinar, superior colliculus). Contrary to our hypotheses, the whole-brain analysis revealed similar activation between the groups in these brain regions. However, compared to controls, the ASD group showed significant hypoactivation in the left intraparietal sulcus and left putamen/globus pallidus. We theorize that this hypoactivation reflected underconnectivity for mediating spatiotemporal processing of the visual biological motion stimuli with the task demands of anticipating the timing of the bounce event. The paradigm thus may have tapped into a specific left-lateralized aberrant corticobasal circuit or loop involved in initiating or inhibiting motor responses. This was consistent with a dual "when versus where" psychophysical model of corticobasal function, which may reflect core differences in sensory processing of real-world, nonverbal natural stimuli in ASD. Autism Res 2020, 13: 539-549. © 2020 International Society for Autism Research, Wiley Periodicals, Inc.
LAY SUMMARY: To understand how individuals with autism perceive the real world, we used magnetic resonance imaging to examine brain activation in individuals with autism while they watched a video of someone bouncing a basketball. Those with autism had activation similar to controls in auditory and visual sensory brain regions, but less activation in an area that processes information about body movements and in a region involved in modulating movements. These areas are important for understanding the actions of others and for developing social skills.


Subjects
Auditory Perception/physiology, Autism Spectrum Disorder/physiopathology, Brain/physiopathology, Visual Perception/physiology, Acoustic Stimulation/methods, Adolescent, Adult, Brain Mapping/methods, Child, Female, Humans, Magnetic Resonance Imaging/methods, Male, Mental Processes/physiology, Photic Stimulation/methods, Young Adult
3.
Brain Lang; 183: 64-78, 2018 Aug.
Article in English | MEDLINE | ID: mdl-29966815

ABSTRACT

Oral mimicry is thought to represent an essential process for the neurodevelopment of spoken language systems in infants and for the evolution of language in hominins, and one that could possibly aid recovery in stroke patients. Using functional magnetic resonance imaging (fMRI), we previously reported a divergence of auditory cortical pathways mediating perception of specific categories of natural sounds. However, it remained unclear if or how this fundamental sensory organization by the brain might relate to motor output, such as sound mimicry. Here, using fMRI, we revealed a dissociation of activated brain regions preferential for hearing with the intent to imitate, and for the oral mimicry of, animal action sounds versus animal vocalizations as distinct acoustic-semantic categories. This functional dissociation may reflect components of a rudimentary cortical architecture that links systems for processing acoustic-semantic universals of natural sound with motor-related systems mediating oral mimicry at a category level. The observation of different brain regions involved in different aspects of oral mimicry may inform targeted therapies for rehabilitation of functional abilities after stroke.


Subjects
Auditory Pathways/diagnostic imaging, Auditory Perception/physiology, Hearing/physiology, Imitative Behavior/physiology, Acoustic Stimulation/methods, Adult, Auditory Pathways/physiology, Brain Mapping, Cerebral Cortex/physiology, Female, Humans, Magnetic Resonance Imaging/methods, Male, Semantics, Sound, Young Adult
4.
Neuropsychologia; 105: 223-242, 2017 Oct.
Article in English | MEDLINE | ID: mdl-28467888

ABSTRACT

Interaction with the world is a multisensory experience, but most of what is known about the neural correlates of perception comes from studying vision. Auditory input enters cortex with its own set of unique qualities and supports oral communication, speech, music, and the understanding of the emotional and intentional states of others, all of which are central to the human experience. To better understand how the auditory system develops, how it recovers after injury, and how its functions may have transitioned over the course of hominin evolution, advances are needed in models of how the human brain is organized to process real-world natural sounds and "auditory objects". This review presents a simple fundamental neurobiological model of hearing perception at a category level that incorporates principles of bottom-up signal processing together with top-down constraints of grounded cognition theories of knowledge representation. Though mostly derived from the human neuroimaging literature, this theoretical framework highlights rudimentary principles of real-world sound processing that may apply to most if not all mammalian species with hearing and acoustic communication abilities. The model encompasses three basic categories of sound-source: (1) action sounds (non-vocalizations) produced by 'living things', with human (conspecific) and non-human animal sources representing two subcategories; (2) action sounds produced by 'non-living things', including environmental sources and human-made machinery; and (3) vocalizations ('living things'), with human versus non-human animals as two subcategories therein. The model is presented in the context of cognitive architectures relating to multisensory, sensory-motor, and spoken language organizations. The model's predictive value is further discussed in the context of anthropological theories of oral communication evolution and the neurodevelopment of spoken language proto-networks in infants/toddlers.
These phylogenetic and ontogenetic frameworks both entail cortical network maturations that are proposed to be organized, at least in part, around a number of universal acoustic-semantic signal attributes of natural sounds, which are addressed herein.
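The review's three-way category scheme can be written down compactly. The sketch below encodes it as a plain nested mapping in Python; the identifier names are ours, chosen for illustration, not terminology from the paper.

```python
# The model's three basic sound-source categories, each with its two
# subcategories, as enumerated in the abstract above (names are ours):
SOUND_TAXONOMY = {
    "action_sounds_living": ["human_conspecific", "nonhuman_animal"],
    "action_sounds_nonliving": ["environmental", "human_made_machinery"],
    "vocalizations": ["human", "nonhuman_animal"],
}

def is_living_source(category: str) -> bool:
    """The living vs. non-living division emphasized by the model:
    vocalizations and living action sounds come from living sources."""
    return category in ("action_sounds_living", "vocalizations")

print(sorted(SOUND_TAXONOMY))
```

Writing the taxonomy out this way makes the model's key asymmetry visible: the living/non-living split applies to action sounds, while vocalizations are by definition a living-source category.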


Subjects
Auditory Perception/physiology, Language, Biological Models, Physiological Pattern Recognition/physiology, Acoustic Stimulation, Communication, Humans, Neurobiology
5.
Dev Cogn Neurosci; 12: 134-44, 2015 Apr.
Article in English | MEDLINE | ID: mdl-25732377

ABSTRACT

Recent evidence suggests that human adults perceive human action sounds as a distinct category from human vocalizations, environmental, and mechanical sounds, activating different neural networks (Engel et al., 2009; Lewis et al., 2011). Yet, little is known about the development of such specialization. Using event-related potentials (ERPs), this study investigated neural correlates of 7-month-olds' processing of human action (HA) sounds in comparison to human vocalizations (HV), environmental (ENV), and mechanical (MEC) sounds. Relative to the other categories, HA sounds led to increased positive amplitudes between 470 and 570 ms post-stimulus onset at left anterior temporal locations, while HV led to increased negative amplitudes at the more posterior temporal locations in both hemispheres. Collectively, human-produced sounds (HA+HV) led to significantly different response profiles compared to non-living sound sources (ENV+MEC) at parietal and frontal locations in both hemispheres. Overall, by 7 months of age human action sounds are being differentially processed in the brain, consistent with a dichotomy for processing living versus non-living things. This provides novel evidence regarding the typical categorical processing of socially relevant sounds.
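The ERP measure described above amounts to averaging baseline-corrected amplitude over a post-stimulus time window (here 470-570 ms). As a minimal sketch of that computation, the Python fragment below builds a synthetic single-channel epoch; the sampling rate, baseline interval, and waveform shape are illustrative assumptions, not the study's recording parameters.

```python
import numpy as np

# Assumed recording parameters (illustrative, not the study's):
fs = 500                      # sampling rate (Hz)
pre = 0.1                     # pre-stimulus baseline duration (s)
t = np.arange(-pre, 0.8, 1.0 / fs)               # epoch time axis (s)
# Synthetic positivity peaking near 520 ms post-stimulus:
epoch = 2.0 * np.exp(-((t - 0.52) / 0.05) ** 2)  # microvolts (fake data)

def mean_window_amplitude(epoch, t, t0, t1, base0=-0.1, base1=0.0):
    """Subtract the mean of the baseline interval, then average the
    amplitude over the [t0, t1] post-stimulus window (seconds)."""
    baseline = epoch[(t >= base0) & (t < base1)].mean()
    window = epoch[(t >= t0) & (t <= t1)]
    return (window - baseline).mean()

# Mean amplitude in the 470-570 ms window reported above:
amp = mean_window_amplitude(epoch, t, 0.470, 0.570)
print(round(amp, 3))
```

In a real ERP pipeline this average would be taken per condition and per electrode cluster over many artifact-rejected trials before group statistics; the window measure itself is as simple as shown.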


Subjects
Acoustic Stimulation, Auditory Perception/physiology, Brain/physiology, Auditory Evoked Potentials, Brain Mapping, Electroencephalography, Female, Frontal Lobe/physiology, Humans, Infant, Male, Parietal Lobe/physiology, Temporal Lobe/physiology
6.
Hear Res; 305: 74-85, 2013 Nov.
Article in English | MEDLINE | ID: mdl-23994296

ABSTRACT

Humans and several non-human primates possess cortical regions that are most sensitive to vocalizations produced by their own kind (conspecifics). However, the use of speech and other broadly defined categories of behaviorally relevant natural sounds has led to many discrepancies regarding where voice-sensitivity occurs, and more generally the identification of cortical networks, "proto-networks" or protolanguage networks, and pathways that may be sensitive or selective for certain aspects of vocalization processing. In this prospective review we examine different approaches for exploring vocal communication processing, including pathways that may be, or become, specialized for conspecific utterances. In particular, we address the use of naturally produced non-stereotypical vocalizations (mimicry of other animal calls) as another category of vocalization for use with human and non-human primate auditory systems. We focus this review on two main themes: progress and future ideas for studying vocalization processing in great apes (chimpanzees), and in very early stages of human development, including infants and fetuses. Advancing our understanding of the fundamental principles that govern the evolution and early development of cortical pathways for processing non-verbal communication utterances is expected to lead to better diagnoses and early intervention strategies in children with communication disorders, improve rehabilitation of communication disorders resulting from brain injury, and develop new strategies for intelligent hearing aid and implant design that can better enhance speech signals in noisy environments. This article is part of a Special Issue entitled "Communication Sounds and the Brain: New Directions and Perspectives".


Subjects
Auditory Cortex/physiology, Auditory Perception, Pan troglodytes/physiology, Physiological Pattern Recognition, Speech, Animal Vocalization, Voice, Acoustic Stimulation, Age Factors, Aging, Animals, Humans, Noise/adverse effects, Pan troglodytes/psychology, Perceptual Masking, Species Specificity, Speech Perception
7.
J Cogn Neurosci; 23(8): 2079-101, 2011 Aug.
Article in English | MEDLINE | ID: mdl-20812786

ABSTRACT

In contrast to visual object processing, relatively little is known about how the human brain processes everyday real-world sounds, transforming highly complex acoustic signals into representations of meaningful events or auditory objects. We recently reported a fourfold cortical dissociation for representing action (nonvocalization) sounds correctly categorized as having been produced by human, animal, mechanical, or environmental sources. However, it was unclear how consistent those network representations were across individuals, given potential differences between each participant's degree of familiarity with the studied sounds. Moreover, it was unclear what, if any, auditory perceptual attributes might further distinguish the four conceptual sound-source categories, potentially revealing what might drive the cortical network organization for representing acoustic knowledge. Here, we used functional magnetic resonance imaging to test participants before and after extensive listening experience with action sounds, and tested for cortices that might be sensitive to each of three different high-level perceptual attributes relating to how a listener associates or interacts with the sound source. These included the sound's perceived concreteness, effectuality (ability to be affected by the listener), and spatial scale. Despite some variation of networks for environmental sounds, our results verified the stability of a fourfold dissociation of category-specific networks for real-world action sounds both before and after familiarity training. Additionally, we identified cortical regions parametrically modulated by each of the three high-level perceptual sound attributes. We propose that these attributes contribute to the network-level encoding of category-specific acoustic knowledge representations.


Subjects
Auditory Perception/physiology, Brain Mapping, Cerebral Cortex/physiology, Recognition (Psychology), Sound, Acoustic Stimulation, Adult, Cerebral Cortex/blood supply, Environment, Female, Functional Laterality, Humans, Computer-Assisted Image Processing, Magnetic Resonance Imaging/methods, Male, Nerve Net/blood supply, Nerve Net/physiology, Neural Pathways/blood supply, Neural Pathways/physiology, Oxygen/blood, Semantics, Young Adult
8.
Neuroimage; 47(4): 1778-91, 2009 Oct 1.
Article in English | MEDLINE | ID: mdl-19465134

ABSTRACT

With regard to hearing perception, it remains unclear as to whether, or the extent to which, different conceptual categories of real-world sounds and related categorical knowledge are differentially represented in the brain. Semantic knowledge representations are reported to include the major divisions of living versus non-living things, plus more specific categories including animals, tools, biological motion, faces, and places-categories typically defined by their characteristic visual features. Here, we used functional magnetic resonance imaging (fMRI) to identify brain regions showing preferential activity to four categories of action sounds, which included non-vocal human and animal actions (living), plus mechanical and environmental sound-producing actions (non-living). The results showed a striking antero-posterior division in cortical representations for sounds produced by living versus non-living sources. Additionally, there were several significant differences by category, depending on whether the task was category-specific (e.g. human or not) versus non-specific (detect end-of-sound). In general, (1) human-produced sounds yielded robust activation in the bilateral posterior superior temporal sulci independent of task. Task demands modulated activation of left lateralized fronto-parietal regions, bilateral insular cortices, and sub-cortical regions previously implicated in observation-execution matching, consistent with "embodied" and mirror-neuron network representations subserving recognition. (2) Animal action sounds preferentially activated the bilateral posterior insulae. (3) Mechanical sounds activated the anterior superior temporal gyri and parahippocampal cortices. (4) Environmental sounds preferentially activated dorsal occipital and medial parietal cortices. Overall, this multi-level dissociation of networks for preferentially representing distinct sound-source categories provides novel support for grounded cognition models that may underlie organizational principles for hearing perception.


Subjects
Acoustic Stimulation/methods, Auditory Cortex/physiology, Auditory Evoked Potentials/physiology, Magnetic Resonance Imaging/methods, Sound, Speech Perception/physiology, Adult, Animals, Female, Humans, Male, Young Adult
9.
J Neurosci; 29(7): 2283-96, 2009 Feb 18.
Article in English | MEDLINE | ID: mdl-19228981

ABSTRACT

The ability to detect and rapidly process harmonic sounds, which in nature are typical of animal vocalizations and speech, can be critical for communication among conspecifics and for survival. Single-unit studies have reported neurons in auditory cortex sensitive to specific combinations of frequencies (e.g., harmonics), theorized to rapidly abstract or filter for specific structures of incoming sounds, where large ensembles of such neurons may constitute spectral templates. We studied the contribution of harmonic structure to activation of putative spectral templates in human auditory cortex by using a wide variety of animal vocalizations, as well as artificially constructed iterated rippled noises (IRNs). Both the IRNs and vocalization sounds were quantitatively characterized by calculating a global harmonics-to-noise ratio (HNR). Using functional MRI, we identified HNR-sensitive regions when presenting either artificial IRNs and/or recordings of natural animal vocalizations. This activation included regions situated between functionally defined primary auditory cortices and regions preferential for processing human nonverbal vocalizations or speech sounds. These results demonstrate that the HNR of sound reflects an important second-order acoustic signal attribute that parametrically activates distinct pathways of human auditory cortex. Thus, these results provide novel support for the presence of spectral templates, which may subserve a major role in the hierarchical processing of vocalizations as a distinct category of behaviorally relevant sound.
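To make the two stimulus constructs above concrete: an iterated rippled noise (IRN) is generated by repeatedly delaying a noise waveform and adding it back to itself, and a global harmonics-to-noise ratio can be estimated from the peak of a signal's normalized autocorrelation. The plain-NumPy sketch below shows both; the parameter values and the simple autocorrelation-based HNR estimator are our assumptions, not the paper's exact stimulus-generation or HNR method.

```python
import numpy as np

rng = np.random.default_rng(0)
FS = 16000                       # sampling rate in Hz (assumed)

def make_irn(duration_s, delay_s, n_iter, fs=FS):
    """Iterated rippled noise: repeatedly delay-and-add Gaussian noise.
    More iterations yield stronger regularity at a pitch of 1/delay_s Hz."""
    n, d = int(duration_s * fs), int(delay_s * fs)
    x = rng.standard_normal(n)
    for _ in range(n_iter):
        delayed = np.concatenate([np.zeros(d), x[:-d]])
        x = x + delayed          # the delay-and-add step
    return x / np.abs(x).max()

def global_hnr_db(x, fs=FS, min_lag_s=0.002):
    """Crude global harmonics-to-noise ratio (dB) from the peak of the
    normalized autocorrelation at lags >= min_lag_s. A toy estimator,
    not Praat's method or the paper's exact computation."""
    x = x - x.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    ac = ac / ac[0]              # normalize so zero-lag value is 1
    r = float(ac[int(min_lag_s * fs):].max())
    r = min(r, 0.999)            # guard the log's denominator
    return 10.0 * np.log10(r / (1.0 - r))

plain_noise = rng.standard_normal(int(0.25 * FS))
irn = make_irn(0.25, delay_s=0.01, n_iter=8)   # regularity near 100 Hz
print(global_hnr_db(plain_noise), global_hnr_db(irn))
```

The study characterized both the IRNs and the natural vocalizations by a single global HNR value per stimulus; an estimator along these lines illustrates how such a scalar second-order attribute can be assigned to each sound and then used to probe HNR-sensitive cortex.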


Subjects
Auditory Cortex/anatomy & histology, Auditory Cortex/physiology, Auditory Evoked Potentials/physiology, Speech Acoustics, Speech Perception/physiology, Animal Vocalization/physiology, Acoustic Stimulation, Adolescent, Adult, Animals, Auditory Pathways/anatomy & histology, Auditory Pathways/physiology, Brain Mapping, Female, Humans, Magnetic Resonance Imaging, Male, Neuropsychological Tests, Pitch Perception, Computer-Assisted Signal Processing, Sound Spectrography, Species Specificity, Young Adult
10.
J Cogn Neurosci; 18(8): 1314-30, 2006 Aug.
Article in English | MEDLINE | ID: mdl-16859417

ABSTRACT

Our ability to manipulate and understand the use of a wide range of tools is a feature that sets humans apart from other animals. In right-handers, we previously reported that hearing hand-manipulated tool sounds preferentially activates a left hemisphere network of motor-related brain regions hypothesized to be related to handedness. Using functional magnetic resonance imaging, we compared cortical activation in strongly right-handed versus left-handed listeners categorizing tool sounds relative to animal vocalizations. Here we show that tool sounds preferentially evoke activity predominantly in the hemisphere "opposite" the dominant hand, in specific high-level motor-related and multisensory cortical regions, as determined by a separate task involving pantomiming tool-use gestures. This organization presumably reflects the idea that we typically learn the "meaning" of tool sounds in the context of using them with our dominant hand, such that the networks underlying motor imagery or action schemas may be recruited to facilitate recognition.


Subjects
Cerebral Cortex/physiology, Functional Laterality/physiology, Hearing/physiology, Psychomotor Performance/physiology, Sound, Acoustic Stimulation/methods, Adult, Brain Mapping, Cerebral Cortex/blood supply, Female, Humans, Computer-Assisted Image Processing, Linear Models, Magnetic Resonance Imaging/methods, Male, Middle Aged, Recognition (Psychology)/physiology, Sound Localization
11.
J Neurosci; 25(21): 5148-58, 2005 May 25.
Article in English | MEDLINE | ID: mdl-15917455

ABSTRACT

Human listeners can effortlessly categorize a wide range of environmental sounds. Whereas categorizing visual object classes (e.g., faces, tools, houses, etc.) preferentially activates different regions of visually sensitive cortex, it is not known whether the auditory system exhibits a similar organization for different types or categories of complex sounds outside of human speech. Using functional magnetic resonance imaging, we show that hearing and correctly or incorrectly categorizing animal vocalizations (as opposed to hand-manipulated tool sounds) preferentially activated middle portions of the left and right superior temporal gyri (mSTG). On average, the vocalization sounds had much greater harmonic and phase-coupling content (acoustically similar to human speech sounds), which may represent some of the signal attributes that preferentially activate the mSTG regions. In contrast, correctly categorized tool sounds (and even animal sounds that were miscategorized as being tool-related sounds) preferentially activated a widespread, predominantly left hemisphere cortical "mirror network." This network directly overlapped substantial portions of motor-related cortices that were independently activated when participants pantomimed tool manipulations with their right (dominant) hand. These data suggest that the recognition processing for some sounds involves a causal reasoning mechanism (a high-level auditory "how" pathway), automatically evoked when attending to hand-manipulated tool sounds, that effectively associates the dynamic motor actions likely to have produced the sound(s).


Subjects
Auditory Cortex/physiology, Auditory Pathways/physiology, Brain Mapping, Recognition (Psychology)/physiology, Sound Localization/physiology, Acoustic Stimulation/methods, Adult, Animals, Auditory Cortex/anatomy & histology, Auditory Cortex/blood supply, Auditory Pathways/anatomy & histology, Auditory Pathways/blood supply, Female, Humans, Computer-Assisted Image Processing/methods, Magnetic Resonance Imaging/methods, Male, Middle Aged, Oxygen/blood, Spectrum Analysis
12.
Cereb Cortex; 14(9): 1008-21, 2004 Sep.
Article in English | MEDLINE | ID: mdl-15166097

ABSTRACT

To identify the brain regions preferentially involved in environmental sound recognition (comprising portions of a putative auditory 'what' pathway), we collected functional imaging data while listeners attended to a wide range of sounds, including those produced by tools, animals, liquids and dropped objects. These recognizable sounds, in contrast to unrecognizable, temporally reversed control sounds, evoked activity in a distributed network of brain regions previously associated with semantic processing, located predominantly in the left hemisphere but also including strong bilateral activity in posterior portions of the middle temporal gyri (pMTG). Comparisons with earlier studies suggest that these bilateral pMTG foci partially overlap cortex implicated in high-level visual processing of complex biological motion and recognition of tools and other artifacts. We propose that the pMTG foci process multimodal (or supramodal) information about objects and object-associated motion, and that this may represent 'action' knowledge that can be recruited for purposes of recognizing familiar environmental sound-sources. These data also provide a functional and anatomical explanation for the symptoms of pure auditory agnosia for environmental sounds reported in human lesion studies.


Subjects
Acoustic Stimulation/methods, Brain Mapping/methods, Brain/physiology, Environment, Recognition (Psychology)/physiology, Adult, Female, Humans, Male, Middle Aged