Results 1 - 13 of 13
1.
Lang Cogn Neurosci ; 36(6): 773-790, 2021.
Article in English | MEDLINE | ID: mdl-34568509

ABSTRACT

Higher cognitive functions such as linguistic comprehension must ultimately relate to perceptual systems in the brain, though how and why such links form remains unclear. The different brain networks that mediate perception of real-world natural sounds have recently been proposed to respect a taxonomic model of acoustic-semantic categories. Using functional magnetic resonance imaging (fMRI) with Chinese/English bilingual listeners, the present study explored whether reception of short spoken phrases, in both Chinese (Mandarin) and English, describing corresponding sound-producing events would engage overlapping brain regions at a semantic category level. The results revealed a double dissociation of cortical regions that were preferential for representing knowledge of human versus environmental action events, whether conveyed through natural sounds or through the corresponding spoken phrases in either language. These findings of cortical hubs exhibiting linguistic-perceptual knowledge links at a semantic category level should help to advance neurocomputational models of the neurodevelopment of language systems.

2.
Cereb Cortex Commun ; 2(1): tgab002, 2021.
Article in English | MEDLINE | ID: mdl-33718874

ABSTRACT

Our ability to perceive meaningful action events involving objects, people, and other animate agents is characterized in part by an interplay of visual and auditory sensory processing and their cross-modal interactions. However, this multisensory ability can be altered or dysfunctional in some hearing and sighted individuals, and in some clinical populations. The present meta-analysis sought to test current hypotheses regarding neurobiological architectures that may mediate audio-visual multisensory processing. Reported coordinates from 82 neuroimaging studies (137 experiments) that revealed some form of audio-visual interaction in discrete brain regions were compiled, converted to a common coordinate space, and then organized along specific categorical dimensions to generate activation likelihood estimate (ALE) brain maps and various contrasts of those derived maps. The results revealed brain regions (cortical "hubs") preferentially involved in multisensory processing along different stimulus category dimensions, including 1) living versus nonliving audio-visual events, 2) audio-visual events involving vocalizations versus actions by living sources, 3) emotionally valent events, and 4) dynamic-visual versus static-visual audio-visual stimuli. These meta-analysis results are discussed in the context of neurocomputational theories of semantic knowledge representations and perception, and the brain volumes of interest are available for download to facilitate data interpretation for future neuroimaging studies.
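For readers unfamiliar with the activation likelihood estimate (ALE) method mentioned above, the sketch below illustrates the core idea in simplified form: each reported focus is modeled as a Gaussian probability blob, and per-voxel values are combined as a probabilistic union. The grid size, voxel size, and kernel width are illustrative assumptions, not the parameters used in this meta-analysis.

```python
# Simplified ALE-style map: Gaussian-blurred foci combined as a probabilistic union.
import numpy as np

def ale_map(foci_vox, grid_shape=(64, 64, 64), voxel_mm=2.0, fwhm_mm=10.0):
    """ALE-style map from a list of (i, j, k) voxel coordinates of reported foci."""
    sigma = fwhm_mm / (voxel_mm * 2.3548)            # convert FWHM (mm) to SD (voxels)
    ii, jj, kk = np.indices(grid_shape)
    ale = np.zeros(grid_shape)
    for i0, j0, k0 in foci_vox:
        d2 = (ii - i0) ** 2 + (jj - j0) ** 2 + (kk - k0) ** 2
        ma = np.exp(-d2 / (2.0 * sigma ** 2))        # modeled activation, peak value 1
        ale = 1.0 - (1.0 - ale) * (1.0 - ma)         # probabilistic union across foci
    return ale

# Two nearby foci reinforce each other: the combined peak exceeds either focus alone.
m = ale_map([(32, 32, 32), (34, 32, 32)])
print(round(float(m[32, 32, 32]), 3), round(float(m.max()), 3))
```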

3.
J Speech Lang Hear Res ; 63(10): 3539-3559, 2020 10 16.
Article in English | MEDLINE | ID: mdl-32936717

ABSTRACT

Purpose: From an anthropological perspective on hominin communication, the human auditory system likely evolved to enable special sensitivity to sounds produced by the vocal tracts of human conspecifics, whether attended or passively heard. While numerous electrophysiological studies have used stereotypical human-produced verbal (speech voice and singing voice) and nonverbal vocalizations to identify human voice-sensitive responses, controversy remains as to when (and where) processing of acoustic signal attributes characteristic of "human voiceness" per se initiates in the brain. Method: To explore this, we used animal vocalizations and human-mimicked versions of those calls ("mimic voice") to examine late auditory evoked potential responses in humans. Results: We revealed an N1b component (96-120 ms poststimulus) during a nonattending listening condition showing significantly greater magnitude in response to mimics, beginning as early as primary auditory cortices and preceding the time window reported in previous studies, which found species-specific vocalization processing initiating in the range of 147-219 ms. During a sound discrimination task, a P600 component (500-700 ms poststimulus) showed specificity for accurate discrimination of human mimic voice. Distinct acoustic signal attributes and features of the stimuli were used in a classifier model, which could distinguish most human from animal voices comparably to behavioral data, though no single feature could adequately distinguish human voiceness. Conclusions: These results provide novel ideas for algorithms used in neuromimetic hearing aids, as well as direct electrophysiological support for a neurocognitive model of natural sound processing that informs both neurodevelopmental and anthropological models regarding the establishment of auditory communication systems in humans. Supplemental Material: https://doi.org/10.23641/asha.12903839.
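As a rough illustration of the feature-based classification described in the Results above, the sketch below extracts a few simple acoustic attributes per stimulus and fits a linear classifier. The specific features (spectral centroid, zero-crossing rate, RMS level) and the logistic-regression model are assumptions chosen for demonstration, not the attributes or classifier used in the study.

```python
# Hypothetical feature-based voice classifier: simple acoustic attributes per clip,
# fed to a linear model. Feature set and model are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

def acoustic_features(signal, sr_hz):
    """Return a small feature vector for one mono sound clip (1-D numpy array)."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr_hz)
    centroid = float(np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12))  # spectral centroid (Hz)
    zcr = float(np.mean(np.abs(np.diff(np.signbit(signal).astype(int)))))    # zero-crossing rate
    rms = float(np.sqrt(np.mean(signal ** 2)))                                # overall level
    return [centroid, zcr, rms]

def fit_voice_classifier(signals, sr_hz, labels):
    """signals: list of 1-D arrays; labels: 1 = human mimic voice, 0 = animal call."""
    X = np.array([acoustic_features(s, sr_hz) for s in signals])
    return LogisticRegression(max_iter=1000).fit(X, labels)
```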


Subjects
Auditory Cortex, Voice, Acoustic Stimulation, Animals, Auditory Perception, Auditory Evoked Potentials, Humans
4.
Brain Lang ; 183: 64-78, 2018 08.
Article in English | MEDLINE | ID: mdl-29966815

ABSTRACT

Oral mimicry is thought to represent an essential process for the neurodevelopment of spoken language systems in infants, the evolution of language in hominins, and a process that could possibly aid recovery in stroke patients. Using functional magnetic resonance imaging (fMRI), we previously reported a divergence of auditory cortical pathways mediating perception of specific categories of natural sounds. However, it remained unclear if or how this fundamental sensory organization by the brain might relate to motor output, such as sound mimicry. Here, using fMRI, we revealed a dissociation of activated brain regions preferential for hearing with the intent to imitate and the oral mimicry of animal action sounds versus animal vocalizations as distinct acoustic-semantic categories. This functional dissociation may reflect components of a rudimentary cortical architecture that links systems for processing acoustic-semantic universals of natural sound with motor-related systems mediating oral mimicry at a category level. The observation of different brain regions involved in different aspects of oral mimicry may inform targeted therapies for rehabilitation of functional abilities after stroke.


Subjects
Auditory Pathways/diagnostic imaging, Auditory Perception/physiology, Hearing/physiology, Imitative Behavior/physiology, Acoustic Stimulation/methods, Adult, Auditory Pathways/physiology, Brain Mapping, Cerebral Cortex/physiology, Female, Humans, Magnetic Resonance Imaging/methods, Male, Semantics, Sound, Young Adult
5.
Perspect Psychol Sci ; 13(1): 66-69, 2018 01.
Article in English | MEDLINE | ID: mdl-29016240

ABSTRACT

In response to our article, Davidson and Dahl offer commentary and advice regarding additional topics crucial to a comprehensive prescriptive agenda for future research on mindfulness and meditation. Their commentary raises further challenges and provides an important complement to our article. More consideration of these issues is especially welcome because limited space precluded us from addressing all relevant topics. While we agree with many of Davidson and Dahl's suggestions, the present reply (a) highlights reasons why the concerns we expressed are still especially germane to mindfulness and meditation research (even though those concerns may not be entirely unique) and (b) gives more context to other issues posed by them. We discuss special characteristics of individuals who participate in mindfulness and meditation research and focus on the vulnerability of this field inherent in its relative youthfulness compared to other more mature scientific disciplines. Moreover, our reply highlights the serious consequences of adverse experiences suffered by a significant subset of individuals during mindfulness and other contemplative practices. We also scrutinize common contemporary applications of mindfulness and meditation to illness, and some caveats are introduced regarding mobile technologies for guidance of contemplative practices.


Subjects
Meditation, Mindfulness, Humans, Research
6.
Perspect Psychol Sci ; 13(1): 36-61, 2018 01.
Article in English | MEDLINE | ID: mdl-29016274

ABSTRACT

During the past two decades, mindfulness meditation has gone from being a fringe topic of scientific investigation to being an occasional replacement for psychotherapy, a tool of corporate well-being, a widely implemented educational practice, and a "key to building more resilient soldiers." Yet the mindfulness movement and the empirical evidence supporting it have not gone without criticism. Misinformation and poor methodology associated with past studies of mindfulness may lead public consumers to be harmed, misled, and disappointed. Addressing such concerns, the present article discusses the difficulties of defining mindfulness, delineates the proper scope of research into mindfulness practices, and explicates crucial methodological issues for interpreting results from investigations of mindfulness. To do so, the authors draw on their diverse areas of expertise to review the present state of mindfulness research, comprehensively summarizing what we do and do not know, while providing a prescriptive agenda for contemplative science, with a particular focus on assessment, mindfulness training, possible adverse effects, and intersection with brain imaging. Our goals are to inform interested scientists, the news media, and the public, to minimize harm, curb poor research practices, and staunch the flow of misinformation about the benefits, costs, and future prospects of mindfulness meditation.


Subjects
Meditation, Mindfulness, Brain/diagnostic imaging, Brain/physiology, Humans, Research Design, Semantics
7.
Neuropsychologia ; 105: 223-242, 2017 Oct.
Article in English | MEDLINE | ID: mdl-28467888

ABSTRACT

Interaction with the world is a multisensory experience, but most of what is known about the neural correlates of perception comes from studying vision. Auditory input enters the cortex with its own set of unique qualities and supports oral communication, speech, music, and the understanding of the emotional and intentional states of others, all of which are central to the human experience. To better understand how the auditory system develops, recovers after injury, and may have transitioned in its functions over the course of hominin evolution, advances are needed in models of how the human brain is organized to process real-world natural sounds and "auditory objects". This review presents a simple, fundamental neurobiological model of hearing perception at a category level that incorporates principles of bottom-up signal processing together with top-down constraints of grounded cognition theories of knowledge representation. Though mostly derived from the human neuroimaging literature, this theoretical framework highlights rudimentary principles of real-world sound processing that may apply to most if not all mammalian species with hearing and acoustic communication abilities. The model encompasses three basic categories of sound source: (1) action sounds (non-vocalizations) produced by 'living things', with human (conspecific) and non-human animal sources representing two subcategories; (2) action sounds produced by 'non-living things', including environmental sources and human-made machinery; and (3) vocalizations ('living things'), with human versus non-human animals as two subcategories therein. The model is presented in the context of cognitive architectures relating to multisensory, sensory-motor, and spoken language organizations. The model's predictive value is further discussed in the context of anthropological theories of oral communication evolution and the neurodevelopment of spoken language proto-networks in infants/toddlers. These phylogenetic and ontogenetic frameworks both entail cortical network maturations that are proposed to be organized, at least in part, around a number of universal acoustic-semantic signal attributes of natural sounds, which are addressed herein.
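The three-category taxonomy described above can be restated as a simple data structure, which may be convenient as a labeling scheme when organizing stimulus sets; the key names below are paraphrases of the categories in the text, not identifiers from the publication.

```python
# The acoustic-semantic taxonomy from the model above, restated as a mapping
# from top-level sound-source category to its subcategories (labels paraphrased).
SOUND_SOURCE_TAXONOMY = {
    "action_sounds_living": ["human_conspecific", "nonhuman_animal"],
    "action_sounds_nonliving": ["environmental", "human_made_machinery"],
    "vocalizations": ["human", "nonhuman_animal"],
}
```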


Subjects
Auditory Perception/physiology, Language, Biological Models, Physiological Pattern Recognition/physiology, Acoustic Stimulation, Communication, Humans, Neurobiology
8.
Phys Med Biol ; 61(10): 3681-97, 2016 05 21.
Article in English | MEDLINE | ID: mdl-27081753

ABSTRACT

The desire to understand normal and disordered human brain function in upright, moving persons in natural environments motivates the development of the ambulatory micro-dose brain PET imager (AMPET). An ideal system would be lightweight but with high sensitivity and spatial resolution, although these requirements are often in conflict with each other. One potential approach to meet the design goals is a compact brain-only imaging device with a head-sized aperture. However, a compact geometry increases parallax error in peripheral lines of response, which increases bias and variance in region of interest (ROI) quantification. Therefore, we performed simulation studies to search for the optimal system configuration and to evaluate the potential improvement in quantification performance over existing scanners. We used the Cramér-Rao variance bound to compare ROI quantification performance across different scanner geometries. The results show that while a smaller ring diameter can increase photon detection sensitivity and hence reduce the variance at the center of the field of view, it can also result in higher variance in peripheral regions when the detector crystal length is 15 mm or more. This variance can be substantially reduced by adding depth-of-interaction (DOI) measurement capability to the detector modules. Our simulation study also shows that the relative performance depends on the size of the ROI, and a large ROI favors a compact geometry even without DOI information. Based on these results, we propose a compact 'helmet' design using detectors with DOI capability. Monte Carlo simulations show the helmet design can achieve four-fold higher sensitivity and resolve smaller features than existing cylindrical brain PET scanners. The simulations further suggest that improving TOF timing resolution from 400 ps to 200 ps yields a noticeable improvement in image quality, indicating that better timing resolution is desirable for brain imaging.
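A back-of-envelope calculation helps convey the geometry argument above: for a point source at the center of a cylindrical scanner, the fraction of back-to-back photon pairs whose line of response falls on the detector band grows as the ring diameter shrinks or the axial length grows. The dimensions below are illustrative, not the AMPET design parameters.

```python
# Geometric coincidence fraction for a point source at the center of a cylindrical
# detector band; ignores detector efficiency, attenuation, and off-center sources.
import math

def center_geometric_fraction(ring_diameter_mm, axial_length_mm):
    """Solid-angle fraction of back-to-back photon pairs (from the center) hitting the band."""
    r = ring_diameter_mm / 2.0
    half_l = axial_length_mm / 2.0
    return half_l / math.hypot(r, half_l)

# Compare a compact head-sized ring with a whole-body-sized ring, same axial coverage.
for diameter in (250.0, 800.0):
    print(diameter, round(center_geometric_fraction(diameter, 200.0), 3))
```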


Subjects
Brain/diagnostic imaging, Positron-Emission Tomography/instrumentation, Equipment Design, Humans, Imaging Phantoms, Photons, Positron-Emission Tomography/methods, Radiation Doses, Sensitivity and Specificity
9.
Front Hum Neurosci ; 5: 68, 2011.
Article in English | MEDLINE | ID: mdl-21852969

ABSTRACT

Facial movements have the potential to be powerful social signals. Previous studies have shown that eye gaze changes and simple mouth movements can elicit robust neural responses, which can be altered as a function of potential social significance. Eye blinks are frequent events and are usually not deliberately communicative, yet blink rate is known to influence social perception. Here, we studied event-related potentials (ERPs) elicited by observing non-task-relevant blinks, eye closure, and eye gaze changes in a centrally presented natural face stimulus. Our first hypothesis (H1), that blinks would produce robust ERPs (N170 and later ERP components), was validated, suggesting that the brain may register and process all types of eye movement for potential social relevance. We also predicted an amplitude gradient for ERPs as a function of gaze change, relative to eye closure and then blinks (H2). H2 was only partly validated: large temporo-occipital N170s were observed for all eye change conditions and did not significantly differ between blinks and the other conditions. However, blinks elicited late ERPs that, although robust, were significantly smaller than those for the gaze conditions. Our data indicate that small and task-irrelevant facial movements such as blinks are measurably registered by the observer's brain. This finding is suggestive of the potential social significance of blinks, which, in turn, has implications for the study of social cognition and the use of real-life social scenarios.
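For context on how component amplitudes such as the N170 effects above are typically quantified, the sketch below baseline-corrects and averages stimulus-locked epochs, then takes the mean voltage in a component time window. The sampling rate, epoch timing, and window values are illustrative assumptions, not the parameters of this study.

```python
# Mean ERP amplitude in a component window, from baseline-corrected, trial-averaged epochs.
import numpy as np

def erp_mean_amplitude(epochs, sr_hz, window_ms, epoch_start_ms=-100.0):
    """epochs: (n_trials, n_samples) array, stimulus onset at time 0; returns mean voltage in window_ms."""
    t_ms = epoch_start_ms + 1000.0 * np.arange(epochs.shape[1]) / sr_hz
    baseline = (t_ms >= epoch_start_ms) & (t_ms < 0.0)                # pre-stimulus interval
    corrected = epochs - epochs[:, baseline].mean(axis=1, keepdims=True)
    erp = corrected.mean(axis=0)                                       # average across trials
    in_window = (t_ms >= window_ms[0]) & (t_ms <= window_ms[1])
    return float(erp[in_window].mean())

# e.g. an N170-style measurement over an occipito-temporal channel (window is illustrative):
# n170 = erp_mean_amplitude(blink_epochs, sr_hz=500, window_ms=(150, 200))
```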

10.
Hum Brain Mapp ; 32(12): 2241-55, 2011 Dec.
Article in English | MEDLINE | ID: mdl-21305666

ABSTRACT

Both sighted and blind individuals can readily interpret meaning behind everyday real-world sounds. In sighted listeners, we previously reported that regions along the bilateral posterior superior temporal sulci (pSTS) and middle temporal gyri (pMTG) are preferentially activated when presented with recognizable action sounds. These regions have generally been hypothesized to represent primary loci for complex motion processing, including visual biological motion processing and audio-visual integration. However, it remained unclear whether, or to what degree, life-long visual experience might impact functions related to hearing perception or memory of sound-source actions. Using functional magnetic resonance imaging (fMRI), we compared brain regions activated in congenitally blind versus sighted listeners in response to hearing a wide range of recognizable human-produced action sounds (excluding vocalizations) versus unrecognized, backward-played versions of those sounds. Here, we show that recognized human action sounds commonly evoked activity in both groups along most of the left pSTS/pMTG complex, though with relatively greater activity in the right pSTS/pMTG by the blind group. These results indicate that portions of the postero-lateral temporal cortices contain domain-specific hubs for biological and/or complex motion processing independent of sensory-modality experience. Contrasting the two groups, the sighted listeners preferentially activated bilateral parietal plus medial and lateral frontal networks, whereas the blind listeners preferentially activated left anterior insula plus bilateral anterior calcarine and medial occipital regions, including what would otherwise have been visual-related cortex. These global-level network differences suggest that blind and sighted listeners may preferentially use different memory retrieval strategies when hearing and attempting to recognize action sounds.


Subjects
Auditory Perception/physiology, Blindness/physiopathology, Brain Mapping, Cerebral Cortex/physiology, Recognition (Psychology)/physiology, Adult, Auditory Evoked Potentials/physiology, Female, Humans, Computer-Assisted Image Interpretation, Magnetic Resonance Imaging, Male, Episodic Memory, Middle Aged, Sound
11.
J Neurosci ; 29(7): 2283-96, 2009 Feb 18.
Article in English | MEDLINE | ID: mdl-19228981

ABSTRACT

The ability to detect and rapidly process harmonic sounds, which in nature are typical of animal vocalizations and speech, can be critical for communication among conspecifics and for survival. Single-unit studies have reported neurons in auditory cortex sensitive to specific combinations of frequencies (e.g., harmonics), theorized to rapidly abstract or filter for specific structures of incoming sounds, where large ensembles of such neurons may constitute spectral templates. We studied the contribution of harmonic structure to activation of putative spectral templates in human auditory cortex by using a wide variety of animal vocalizations, as well as artificially constructed iterated rippled noises (IRNs). Both the IRNs and the vocalization sounds were quantitatively characterized by calculating a global harmonics-to-noise ratio (HNR). Using functional MRI, we identified HNR-sensitive regions when presenting either artificial IRNs or recordings of natural animal vocalizations. This activation included regions situated between functionally defined primary auditory cortices and regions preferential for processing human nonverbal vocalizations or speech sounds. These results demonstrate that the HNR of a sound reflects an important second-order acoustic signal attribute that parametrically activates distinct pathways of human auditory cortex. Thus, these results provide novel support for the presence of spectral templates, which may subserve a major role in the hierarchical processing of vocalizations as a distinct category of behaviorally relevant sound.
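One common way to compute a global harmonics-to-noise ratio like the measure described above is from the peak of the normalized autocorrelation within a plausible pitch-lag range, converted to decibels; the sketch below follows that approach under assumed lag bounds and should not be taken as the study's exact procedure.

```python
# Autocorrelation-based HNR sketch: ratio of periodic to aperiodic energy, in dB.
import numpy as np

def global_hnr_db(signal, sr_hz, f0_range_hz=(60.0, 1500.0)):
    """Harmonics-to-noise ratio (dB) for a mono sound clip, via the normalized autocorrelation peak."""
    x = signal - np.mean(signal)
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]   # one-sided autocorrelation
    ac = ac / ac[0]                                      # normalize so lag 0 equals 1
    lo = int(sr_hz / f0_range_hz[1])                     # shortest plausible pitch period (samples)
    hi = int(sr_hz / f0_range_hz[0])                     # longest plausible pitch period (samples)
    r_max = float(np.max(ac[lo:hi]))                     # periodic fraction of signal energy
    r_max = min(max(r_max, 1e-6), 1.0 - 1e-6)            # keep the logarithm finite
    return 10.0 * np.log10(r_max / (1.0 - r_max))

# A sustained tone is strongly harmonic; white noise is not.
sr = 8000
t = np.arange(sr) / sr
print(round(global_hnr_db(np.sin(2 * np.pi * 220.0 * t), sr), 1),
      round(global_hnr_db(np.random.randn(sr), sr), 1))
```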


Subjects
Auditory Cortex/anatomy & histology, Auditory Cortex/physiology, Auditory Evoked Potentials/physiology, Speech Acoustics, Speech Perception/physiology, Animal Vocalization/physiology, Acoustic Stimulation, Adolescent, Adult, Animals, Auditory Pathways/anatomy & histology, Auditory Pathways/physiology, Brain Mapping, Female, Humans, Magnetic Resonance Imaging, Male, Neuropsychological Tests, Pitch Perception, Computer-Assisted Signal Processing, Sound Spectrography, Species Specificity, Young Adult
12.
J Cogn Neurosci ; 21(7): 1447-60, 2009 Jul.
Article in English | MEDLINE | ID: mdl-18752412

ABSTRACT

Previously, we and others have shown that attention can enhance visual processing in a spatially specific manner that is retinotopically mapped in the occipital cortex. However, it is difficult to appreciate the functional significance of the spatial pattern of cortical activation just by examining the brain maps. In this study, we visualize the neural representation of the "spotlight" of attention using a back-projection of attention-related brain activation onto a diagram of the visual field. In the two main experiments, we examine the topography of attentional activation in the occipital and parietal cortices. In retinotopic areas, attentional enhancement is strongest at the locations of the attended target, but also spreads to nearby locations and even weakly to restricted locations in the opposite visual field. The dispersion of attentional effects around an attended site increases with the eccentricity of the target in a manner that roughly corresponds to a constant area of spread within the cortex. When averaged across multiple observers, these patterns appear consistent with a gradient model of spatial attention. However, individual observers exhibit complex variations that are unique but reproducible. Overall, these results suggest that the topography of visual attention for each individual is composed of a common theme plus a personal variation that may reflect their own unique "attentional style."
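A schematic version of the back-projection described above: each retinotopically mapped voxel carries a preferred visual-field position (polar angle and eccentricity) estimated in a separate mapping session, and its attention-related response is accumulated at that position in a visual-field grid. The grid extent and averaging scheme below are illustrative assumptions.

```python
# Back-project per-voxel attention responses onto a 2-D visual-field grid,
# using each voxel's retinotopic preferred position (polar angle, eccentricity).
import numpy as np

def backproject_to_visual_field(polar_deg, ecc_deg, activation, max_ecc_deg=12.0, grid=101):
    """polar_deg, ecc_deg, activation: 1-D arrays with one entry per retinotopically mapped voxel."""
    field = np.zeros((grid, grid))
    counts = np.zeros((grid, grid))
    theta = np.deg2rad(np.asarray(polar_deg, dtype=float))
    ecc = np.asarray(ecc_deg, dtype=float)
    x = ecc * np.cos(theta)                                   # horizontal field position (deg)
    y = ecc * np.sin(theta)                                   # vertical field position (deg)
    ix = np.clip(((x + max_ecc_deg) / (2 * max_ecc_deg) * (grid - 1)).astype(int), 0, grid - 1)
    iy = np.clip(((y + max_ecc_deg) / (2 * max_ecc_deg) * (grid - 1)).astype(int), 0, grid - 1)
    np.add.at(field, (iy, ix), np.asarray(activation, dtype=float))   # sum responses per location
    np.add.at(counts, (iy, ix), 1.0)
    return np.divide(field, counts, out=np.zeros_like(field), where=counts > 0)
```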


Subjects
Attention/physiology, Brain Mapping, Cerebral Cortex/physiology, Space Perception/physiology, Visual Fields/physiology, Cerebral Cortex/blood supply, Eye Movements/physiology, Female, Functional Laterality, Humans, Computer-Assisted Image Processing/methods, Magnetic Resonance Imaging/methods, Male, Oxygen/blood, Photic Stimulation/methods, Visual Pathways/blood supply, Visual Pathways/physiology
13.
J Cogn Neurosci ; 18(8): 1314-30, 2006 Aug.
Article in English | MEDLINE | ID: mdl-16859417

ABSTRACT

Our ability to manipulate and understand the use of a wide range of tools is a feature that sets humans apart from other animals. In right-handers, we previously reported that hearing hand-manipulated tool sounds preferentially activates a left hemisphere network of motor-related brain regions hypothesized to be related to handedness. Using functional magnetic resonance imaging, we compared cortical activation in strongly right-handed versus left-handed listeners categorizing tool sounds relative to animal vocalizations. Here we show that tool sounds preferentially evoke activity predominantly in the hemisphere "opposite" the dominant hand, in specific high-level motor-related and multisensory cortical regions, as determined by a separate task involving pantomiming tool-use gestures. This organization presumably reflects the idea that we typically learn the "meaning" of tool sounds in the context of using them with our dominant hand, such that the networks underlying motor imagery or action schemas may be recruited to facilitate recognition.


Subjects
Cerebral Cortex/physiology, Functional Laterality/physiology, Hearing/physiology, Psychomotor Performance/physiology, Sound, Acoustic Stimulation/methods, Adult, Brain Mapping, Cerebral Cortex/blood supply, Female, Humans, Computer-Assisted Image Processing, Linear Models, Magnetic Resonance Imaging/methods, Male, Middle Aged, Recognition (Psychology)/physiology, Sound Localization