Results 1 - 20 of 21

1.
Commun Med (Lond); 4(1): 117, 2024 Jun 13.
Article in English | MEDLINE | ID: mdl-38872007

ABSTRACT

BACKGROUND: Mobile upright PET devices have the potential to enable previously impossible neuroimaging studies. Currently available options are imagers with deep brain coverage that severely limit head/body movements or imagers with upright/motion-enabling properties that are limited to covering only the brain surface. METHODS: In this study, we test the feasibility of an upright, motion-compatible brain imager, our Ambulatory Motion-enabling Positron Emission Tomography (AMPET) helmet prototype, for use as a neuroscience tool by replicating a variant of a published PET/fMRI study of the neural correlates of human walking. We validate our AMPET prototype by conducting a walking movement paradigm to determine motion tolerance and to assess for appropriate task-related activity in motor-related brain regions. Human participants (n = 11) performed a walking-in-place task with simultaneous AMPET imaging, receiving a bolus delivery of F18-fluorodeoxyglucose. RESULTS: Here we validate three pre-determined criteria, including brain-alignment motion artifact of less than 2 mm and functional neuroimaging outcomes consistent with the existing walking-movement literature. CONCLUSIONS: The study extends the potential and utility of mobile, upright, and motion-tolerant neuroimaging devices in real-world, ecologically valid paradigms. Our approach accounts for the real-world logistics of an actual human participant study and can be used to inform experimental physicists, engineers, and imaging instrumentation developers undertaking similar future studies. The technical advances described herein help set new priorities for facilitating future neuroimaging devices and research of the human brain in health and disease.
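As a rough illustration of the 2 mm brain-alignment criterion, the following is a minimal sketch that summarizes frame-to-frame head motion from rigid-body registration parameters (framewise-displacement style). The 50 mm head-radius convention, the motion-parameter file name, and the threshold check are illustrative assumptions, not the authors' actual pipeline.

    import numpy as np

    def framewise_displacement(params, head_radius_mm=50.0):
        # params: one row per frame, columns = [tx, ty, tz] in mm and
        # [rx, ry, rz] in radians from rigid-body registration.
        d = np.abs(np.diff(params, axis=0))
        d[:, 3:] *= head_radius_mm            # convert rotations to arc length in mm
        return d.sum(axis=1)

    motion = np.loadtxt("motion_params.txt")  # hypothetical 6-column motion file
    fd = framewise_displacement(motion)
    print("max frame-to-frame displacement: %.2f mm (all < 2 mm: %s)"
          % (fd.max(), bool((fd < 2.0).all())))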


Brain imaging plays an important role in understanding how the human brain functions in both health and disease. However, traditional brain scanners often require people to remain still, limiting the study of the brain in motion, and excluding people who cannot remain still. To overcome this, our team developed an imager that moves with a person's head, which uses a suspended ring of lightweight detectors that fit to the head. Using our imager, we were able to obtain clear brain images of people walking in place that showed the expected brain activity patterns during walking. Further development of our imager could enable it to be used to better understand real-world brain function and behavior, enabling enhanced knowledge and treatment of neurological conditions.

2.
Lang Cogn Neurosci; 36(6): 773-790, 2021.
Article in English | MEDLINE | ID: mdl-34568509

ABSTRACT

Higher cognitive functions such as linguistic comprehension must ultimately relate to perceptual systems in the brain, though how and why these relationships form remains unclear. The different brain networks that mediate perception of real-world natural sounds have recently been proposed to respect a taxonomic model of acoustic-semantic categories. Using functional magnetic resonance imaging (fMRI) with Chinese/English bilingual listeners, the present study explored whether reception of short spoken phrases, in both Chinese (Mandarin) and English, describing corresponding sound-producing events would engage overlapping brain regions at a semantic category level. The results revealed a double dissociation of cortical regions that were preferential for representing knowledge of human versus environmental action events, whether conveyed through natural sounds or through the corresponding spoken phrases in either language. These findings of cortical hubs exhibiting linguistic-perceptual knowledge links at a semantic category level should help to advance neurocomputational models of the neurodevelopment of language systems.

3.
Cereb Cortex Commun; 2(1): tgab002, 2021.
Article in English | MEDLINE | ID: mdl-33718874

ABSTRACT

Our ability to perceive meaningful action events involving objects, people, and other animate agents is characterized in part by an interplay of visual and auditory sensory processing and their cross-modal interactions. However, this multisensory ability can be altered or dysfunctional in some hearing and sighted individuals, and in some clinical populations. The present meta-analysis sought to test current hypotheses regarding neurobiological architectures that may mediate audio-visual multisensory processing. Reported coordinates from 82 neuroimaging studies (137 experiments) that revealed some form of audio-visual interaction in discrete brain regions were compiled, converted to a common coordinate space, and then organized along specific categorical dimensions to generate activation likelihood estimate (ALE) brain maps and various contrasts of those derived maps. The results revealed brain regions (cortical "hubs") preferentially involved in multisensory processing along different stimulus category dimensions, including 1) living versus nonliving audio-visual events, 2) audio-visual events involving vocalizations versus actions by living sources, 3) emotionally valent events, and 4) dynamic-visual versus static-visual audio-visual stimuli. These meta-analysis results are discussed in the context of neurocomputational theories of semantic knowledge representations and perception, and the brain volumes of interest are available for download to facilitate data interpretation for future neuroimaging studies.
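For readers unfamiliar with the ALE procedure, the following is a minimal sketch of its core computation under simplified assumptions: each experiment's reported foci are smoothed into a modeled-activation map, and the maps are combined as a union of probabilities. The grid size, smoothing width, and coordinates are placeholders, not the meta-analysis data, and the full method additionally weights kernels by sample size and tests the resulting maps against a null distribution.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def modeled_activation(foci, shape, sigma_vox):
        # Per-experiment modeled-activation (MA) map: a Gaussian blob at each
        # reported focus (voxel indices), combined by taking the voxelwise maximum.
        ma = np.zeros(shape)
        for i, j, k in foci:
            blob = np.zeros(shape)
            blob[i, j, k] = 1.0
            blob = gaussian_filter(blob, sigma_vox)
            ma = np.maximum(ma, blob / blob.max())   # scale so the focus itself is 1
        return ma

    def ale_map(experiments, shape, sigma_vox=2.0):
        # ALE = 1 - prod_e(1 - MA_e): probability that at least one experiment
        # "activates" a voxel, treating experiments as independent.
        ale = np.zeros(shape)
        for foci in experiments:
            ale = 1.0 - (1.0 - ale) * (1.0 - modeled_activation(foci, shape, sigma_vox))
        return ale

    # Toy usage: two hypothetical experiments on a small voxel grid
    experiments = [[(10, 12, 8)], [(11, 12, 9), (30, 20, 15)]]
    print(ale_map(experiments, shape=(40, 48, 32)).max())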

4.
J Speech Lang Hear Res; 63(10): 3539-3559, 2020 Oct 16.
Article in English | MEDLINE | ID: mdl-32936717

ABSTRACT

Purpose From an anthropological perspective of hominin communication, the human auditory system likely evolved to enable special sensitivity to sounds produced by the vocal tracts of human conspecifics, whether attended or passively heard. While numerous electrophysiological studies have used stereotypical human-produced verbal (speech voice and singing voice) and nonverbal vocalizations to identify human voice-sensitive responses, controversy remains as to when (and where) processing of acoustic signal attributes characteristic of "human voiceness" per se initiates in the brain. Method To explore this, we used animal vocalizations and human-mimicked versions of those calls ("mimic voice") to examine late auditory evoked potential responses in humans. Results Here, we revealed an N1b component (96-120 ms poststimulus) during a nonattending listening condition that showed significantly greater magnitude in response to mimics, beginning as early as primary auditory cortices, preceding the time window reported in previous studies that revealed species-specific vocalization processing initiating in the range of 147-219 ms. During a sound discrimination task, a P600 (500-700 ms poststimulus) component showed specificity for accurate discrimination of human mimic voice. Distinct acoustic signal attributes and features of the stimuli were used in a classifier model, which could distinguish most human from animal voices comparably to behavioral data, though none of these single features could adequately distinguish human voiceness. Conclusions These results provide novel ideas for algorithms used in neuromimetic hearing aids, as well as direct electrophysiological support for a neurocognitive model of natural sound processing that informs both neurodevelopmental and anthropological models regarding the establishment of auditory communication systems in humans. Supplemental Material https://doi.org/10.23641/asha.12903839.
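A minimal sketch of the kind of classifier comparison described (the full acoustic feature set versus each single feature), assuming a generic logistic-regression model and synthetic placeholder data; the actual features, stimuli, and model used in the study may differ.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    # Placeholder data: one row per stimulus, columns = acoustic signal attributes
    # (e.g. HNR, pitch, spectral centroid); labels 1 = human mimic voice,
    # 0 = animal vocalization.  Real features would be extracted from the stimuli.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(60, 5))
    y = rng.integers(0, 2, size=60)

    model = LogisticRegression(max_iter=1000)
    all_feats = cross_val_score(model, X, y, cv=5).mean()
    single = [cross_val_score(model, X[:, [j]], y, cv=5).mean() for j in range(X.shape[1])]
    print("accuracy with all features: %.2f; best single feature: %.2f"
          % (all_feats, max(single)))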


Subject(s)
Auditory Cortex, Voice, Acoustic Stimulation, Animals, Auditory Perception, Auditory Evoked Potentials, Humans
6.
Brain Sci; 10(1), 2019 Dec 30.
Article in English | MEDLINE | ID: mdl-31905875

ABSTRACT

Extended breastfeeding through infancy confers benefits on neurocognitive performance and intelligence tests, though few studies have examined the biological basis of these effects. To investigate correlations with breastfeeding, we examined the major white matter tracts in 4-8 year-old children using diffusion tensor imaging and volumetric measurements of the corpus callosum. We found a significant correlation between the duration of infant breastfeeding and fractional anisotropy scores in left-lateralized white matter tracts, including the left superior longitudinal fasciculus and left angular bundle, indicative of greater intrahemispheric connectivity. However, in contrast to expectations from earlier studies, no correlations were observed with corpus callosum size or with such measures of global interhemispheric white matter connectivity development. These findings suggest a complex but significant positive association between breastfeeding duration and white matter connectivity, including in pathways known to be functionally relevant for reading and language development.
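For reference, fractional anisotropy is conventionally computed from the three eigenvalues of the diffusion tensor (this is the standard definition, not anything specific to this study):

    FA = \sqrt{3/2} \,
         \frac{\sqrt{(\lambda_1 - \bar{\lambda})^2 + (\lambda_2 - \bar{\lambda})^2 + (\lambda_3 - \bar{\lambda})^2}}
              {\sqrt{\lambda_1^2 + \lambda_2^2 + \lambda_3^2}},
    \qquad \bar{\lambda} = (\lambda_1 + \lambda_2 + \lambda_3)/3

Values near 0 indicate isotropic diffusion and values near 1 indicate strongly directional diffusion, as expected in coherent white matter tracts.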

7.
Brain Lang; 183: 64-78, 2018 Aug.
Article in English | MEDLINE | ID: mdl-29966815

ABSTRACT

Oral mimicry is thought to represent an essential process for the neurodevelopment of spoken language systems in infants, the evolution of language in hominins, and a process that could possibly aid recovery in stroke patients. Using functional magnetic resonance imaging (fMRI), we previously reported a divergence of auditory cortical pathways mediating perception of specific categories of natural sounds. However, it remained unclear if or how this fundamental sensory organization by the brain might relate to motor output, such as sound mimicry. Here, using fMRI, we revealed a dissociation of activated brain regions preferential for hearing with the intent to imitate and the oral mimicry of animal action sounds versus animal vocalizations as distinct acoustic-semantic categories. This functional dissociation may reflect components of a rudimentary cortical architecture that links systems for processing acoustic-semantic universals of natural sound with motor-related systems mediating oral mimicry at a category level. The observation of different brain regions involved in different aspects of oral mimicry may inform targeted therapies for rehabilitation of functional abilities after stroke.


Subject(s)
Auditory Pathways/diagnostic imaging, Auditory Perception/physiology, Hearing/physiology, Imitative Behavior/physiology, Acoustic Stimulation/methods, Adult, Auditory Pathways/physiology, Brain Mapping, Cerebral Cortex/physiology, Female, Humans, Magnetic Resonance Imaging/methods, Male, Semantics, Sound, Young Adult
8.
Perspect Psychol Sci; 13(1): 66-69, 2018 Jan.
Article in English | MEDLINE | ID: mdl-29016240

ABSTRACT

In response to our article, Davidson and Dahl offer commentary and advice regarding additional topics crucial to a comprehensive prescriptive agenda for future research on mindfulness and meditation. Their commentary raises further challenges and provides an important complement to our article. More consideration of these issues is especially welcome because limited space precluded us from addressing all relevant topics. While we agree with many of Davidson and Dahl's suggestions, the present reply (a) highlights reasons why the concerns we expressed are still especially germane to mindfulness and meditation research (even though those concerns may not be entirely unique) and (b) gives more context to other issues they raise. We discuss special characteristics of individuals who participate in mindfulness and meditation research and focus on the vulnerability of this field, which is inherent in its relative youth compared with other, more mature scientific disciplines. Moreover, our reply highlights the serious consequences of adverse experiences suffered by a significant subset of individuals during mindfulness and other contemplative practices. We also scrutinize common contemporary applications of mindfulness and meditation to illness, and introduce some caveats regarding mobile technologies for guidance of contemplative practices.


Subject(s)
Meditation, Mindfulness, Humans, Research
9.
Perspect Psychol Sci; 13(1): 36-61, 2018 Jan.
Article in English | MEDLINE | ID: mdl-29016274

ABSTRACT

During the past two decades, mindfulness meditation has gone from being a fringe topic of scientific investigation to being an occasional replacement for psychotherapy, a tool of corporate well-being, a widely implemented educational practice, and a "key to building more resilient soldiers." Yet the mindfulness movement and the empirical evidence supporting it have not gone without criticism. Misinformation and poor methodology associated with past studies of mindfulness may lead public consumers to be harmed, misled, and disappointed. Addressing such concerns, the present article discusses the difficulties of defining mindfulness, delineates the proper scope of research into mindfulness practices, and explicates crucial methodological issues for interpreting results from investigations of mindfulness. To do so, the authors draw on their diverse areas of expertise to review the present state of mindfulness research, comprehensively summarizing what we do and do not know, while providing a prescriptive agenda for contemplative science, with a particular focus on assessment, mindfulness training, possible adverse effects, and intersection with brain imaging. Our goals are to inform interested scientists, the news media, and the public, to minimize harm, curb poor research practices, and staunch the flow of misinformation about the benefits, costs, and future prospects of mindfulness meditation.


Subject(s)
Meditation, Mindfulness, Brain/diagnostic imaging, Brain/physiology, Humans, Research Design, Semantics
10.
Neuropsychologia; 105: 223-242, 2017 Oct.
Article in English | MEDLINE | ID: mdl-28467888

ABSTRACT

Interaction with the world is a multisensory experience, but most of what is known about the neural correlates of perception comes from studying vision. Auditory inputs enter the cortex with their own set of unique qualities and support oral communication, speech, music, and the understanding of the emotional and intentional states of others, all of which are central to the human experience. To better understand how the auditory system develops, recovers after injury, and how it may have transitioned in its functions over the course of hominin evolution, advances are needed in models of how the human brain is organized to process real-world natural sounds and "auditory objects". This review presents a simple fundamental neurobiological model of hearing perception at a category level that incorporates principles of bottom-up signal processing together with top-down constraints of grounded cognition theories of knowledge representation. Though mostly derived from the human neuroimaging literature, this theoretical framework highlights rudimentary principles of real-world sound processing that may apply to most if not all mammalian species with hearing and acoustic communication abilities. The model encompasses three basic categories of sound-source: (1) action sounds (non-vocalizations) produced by 'living things', with human (conspecific) and non-human animal sources representing two subcategories; (2) action sounds produced by 'non-living things', including environmental sources and human-made machinery; and (3) vocalizations ('living things'), with human versus non-human animals as two subcategories therein. The model is presented in the context of cognitive architectures relating to multisensory, sensory-motor, and spoken language organizations. The model's predictive value is further discussed in the context of anthropological theories of oral communication evolution and the neurodevelopment of spoken language proto-networks in infants/toddlers. These phylogenetic and ontogenetic frameworks both entail cortical network maturations that are proposed to be at least in part organized around a number of universal acoustic-semantic signal attributes of natural sounds, which are addressed herein.
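The proposed category scheme can be summarized as a small data structure; the labels below simply paraphrase the categories listed in the abstract and add no further claims.

    # Acoustic-semantic taxonomy of natural sound sources (paraphrased labels)
    SOUND_TAXONOMY = {
        "action sounds, living (non-vocal)": ["human (conspecific)", "non-human animal"],
        "action sounds, non-living": ["environmental", "human-made machinery"],
        "vocalizations (living)": ["human", "non-human animal"],
    }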


Subject(s)
Auditory Perception/physiology, Language, Biological Models, Physiological Pattern Recognition/physiology, Acoustic Stimulation, Communication, Humans, Neurobiology
11.
Sensors (Basel); 17(5), 2017 May 19.
Article in English | MEDLINE | ID: mdl-28534848

ABSTRACT

Several applications exist for a whole-brain positron emission tomography (PET) imager designed as a portable unit that can be worn on a patient's head. Enabled by improvements in detector technology, a lightweight, high-performance device would allow PET brain imaging in different environments and during behavioral tasks. Such a wearable system that allows subjects to move their heads and walk, the Ambulatory Microdose PET (AM-PET), is currently under development. This imager will be helpful for testing subjects performing selected activities such as gestures, virtual reality activities, and walking. The need for this type of lightweight mobile device has led to the construction of a proof-of-concept portable head-worn unit that uses twelve silicon photomultiplier (SiPM) PET module sensors built into a small ring that fits around the head. This paper focuses on the engineering design of the mechanical support aspects of the AM-PET project, for both the current device and the coming next-generation devices. The goal of this work is to optimize the design of the scanner and its mechanics to improve comfort for the subject by reducing the effect of weight, and to enable diversification of its applications across different research activities.


Subject(s)
Positron-Emission Tomography, Brain, Equipment Design, Humans, Magnetic Resonance Imaging, Imaging Phantoms, Silicon
12.
Brain Behav; 6(9): e00530, 2016 Sep.
Article in English | MEDLINE | ID: mdl-27688946

ABSTRACT

BACKGROUND: Positron Emission Tomography (PET) is traditionally used to image patients in restrictive positions, with few devices allowing for upright, brain-dedicated imaging. Our team has explored the concept of wearable PET imagers which could provide functional brain imaging of freely moving subjects. To test feasibility and determine future considerations for development, we built a rudimentary proof-of-concept prototype (Helmet_PET) and conducted tests in phantoms and four human volunteers. METHODS: Twelve silicon photomultiplier-based detectors were assembled in a ring with exterior weight support and an interior mechanism that could be adjustably fitted to the head. We conducted brain phantom tests as well as scanned four patients scheduled for diagnostic (18)F-FDG PET/CT imaging. For human subjects the imager was angled such that the field of view included the basal ganglia and visual cortex to test for a typical resting-state pattern. Imaging in two subjects was performed ~4 hr after PET/CT imaging to simulate a lower injected (18)F-FDG dose by taking advantage of the natural radioactive decay of the tracer ((18)F half-life of 110 min), with an estimated imaging dosage of 25% of the standard. RESULTS: We found that imaging with a simple lightweight ring of detectors was feasible using a fraction of the standard radioligand dose. Activity levels in the human participants were quantitatively similar to standard PET in a set of anatomical ROIs. A typical resting-state brain activation pattern was demonstrated even in a 1 min scan with active head rotation. CONCLUSION: To our knowledge, this is the first demonstration of imaging a human subject with a novel wearable PET imager that moves with robust head movements. We discuss potential research and clinical applications that will drive the design of a fully functional device. Designs will need to consider trade-offs between a low-weight device with high mobility and a heavier device with greater sensitivity and larger field of view.
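The reduced-dose estimate follows from first-order radioactive decay: with an (18)F half-life of 110 min and imaging roughly 240 min after injection,

    A(t)/A_0 = 2^{-t/T_{1/2}} = 2^{-240/110} \approx 0.22

so roughly a quarter of the injected activity remains, consistent with the stated imaging dosage of about 25% of the standard.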


Subject(s)
Brain/diagnostic imaging, Fluorodeoxyglucose F18/pharmacokinetics, Functional Neuroimaging/instrumentation, Ambulatory Monitoring/instrumentation, Positron-Emission Tomography/instrumentation, Radiopharmaceuticals/pharmacokinetics, Adult, Equipment Design, Feasibility Studies, Functional Neuroimaging/methods, Humans, Ambulatory Monitoring/methods, Positron-Emission Tomography/methods
13.
Phys Med Biol; 61(10): 3681-97, 2016 May 21.
Article in English | MEDLINE | ID: mdl-27081753

ABSTRACT

The desire to understand normal and disordered human brain function of upright, moving persons in natural environments motivates the development of the ambulatory micro-dose brain PET imager (AMPET). An ideal system would be lightweight yet have high sensitivity and spatial resolution, although these requirements are often in conflict with each other. One potential approach to meet the design goals is a compact brain-only imaging device with a head-sized aperture. However, a compact geometry increases parallax error in peripheral lines of response, which increases bias and variance in region of interest (ROI) quantification. Therefore, we performed simulation studies to search for the optimal system configuration and to evaluate the potential improvement in quantification performance over existing scanners. We used the Cramér-Rao variance bound to compare the performance for ROI quantification using different scanner geometries. The results show that while a smaller ring diameter can increase photon detection sensitivity and hence reduce the variance at the center of the field of view, it can also result in higher variance in peripheral regions when the length of the detector crystal is 15 mm or more. This variance can be substantially reduced by adding depth-of-interaction (DOI) measurement capability to the detector modules. Our simulation study also shows that the relative performance depends on the size of the ROI, and a large ROI favors a compact geometry even without DOI information. Based on these results, we propose a compact 'helmet' design using detectors with DOI capability. Monte Carlo simulations show the helmet design can achieve four-fold higher sensitivity and resolve smaller features than existing cylindrical brain PET scanners. The simulations also suggest that improving TOF timing resolution from 400 ps to 200 ps results in a noticeable improvement in image quality, indicating that better timing resolution is desirable for brain imaging.
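A minimal sketch of how a Cramér-Rao lower bound on ROI quantification can be computed for a Poisson emission model; the system matrix, emission image, and ROI weights below are toy placeholders rather than the simulated scanner geometries evaluated in the study.

    import numpy as np

    def roi_crlb(A, x, background, w):
        # Cramér-Rao lower bound on the variance of the ROI estimate w @ x,
        # for Poisson data with mean y = A @ x + background (unbiased estimator).
        y = A @ x + background
        fisher = A.T @ (A / y[:, None])        # F_jk = sum_i A_ij * A_ik / y_i
        return float(w @ np.linalg.solve(fisher, w))

    # Toy example: random "system matrix" (detector bins x voxels), uniform image,
    # ROI defined as the mean over the first 10 voxels.
    rng = np.random.default_rng(0)
    A = rng.uniform(0.0, 1.0, size=(500, 50))
    x = np.full(50, 10.0)
    w = np.zeros(50)
    w[:10] = 1.0 / 10.0
    print("variance bound on ROI mean:", roi_crlb(A, x, background=1.0, w=w))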


Subject(s)
Brain/diagnostic imaging, Positron-Emission Tomography/instrumentation, Equipment Design, Humans, Imaging Phantoms, Photons, Positron-Emission Tomography/methods, Radiation Dosage, Sensitivity and Specificity
14.
Front Hum Neurosci; 7: 282, 2013.
Article in English | MEDLINE | ID: mdl-23785327

ABSTRACT

How do our brains respond when we are being watched by a group of people? Despite the large volume of literature devoted to face processing, this question has received very little attention. Here we measured the face-sensitive N170 and other ERPs elicited by viewing displays of one, two, and three faces in two experiments. In Experiment 1, overall image brightness and contrast were adjusted to be constant, whereas in Experiment 2 the local contrast and brightness of individual faces were not manipulated. A robust positive-negative-positive (P100-N170-P250) ERP complex and an additional late positive ERP, the P400, were elicited to all stimulus types. As the number of faces in the display increased, N170 amplitude increased for both stimulus sets, and latency increased in Experiment 2. P100 latency and P250 amplitude were affected by changes in overall brightness and contrast, but not by the number of faces in the display per se. In Experiment 1, when overall brightness and contrast were adjusted to be constant, later ERP (P250 and P400) latencies showed differences as a function of hemisphere. Hence, our data indicate that the N170 increases in magnitude when multiple faces are seen, apparently impervious to basic low-level stimulus features including stimulus size. Outstanding questions remain regarding category-sensitive neural activity elicited by viewing multiple items of stimulus categories other than faces.

15.
Front Hum Neurosci; 5: 68, 2011.
Article in English | MEDLINE | ID: mdl-21852969

ABSTRACT

Facial movements have the potential to be powerful social signals. Previous studies have shown that eye gaze changes and simple mouth movements can elicit robust neural responses, which can be altered as a function of potential social significance. Eye blinks are frequent events and are usually not deliberately communicative, yet blink rate is known to influence social perception. Here, we studied event-related potentials (ERPs) elicited to observing non-task relevant blinks, eye closure, and eye gaze changes in a centrally presented natural face stimulus. Our first hypothesis (H1) that blinks would produce robust ERPs (N170 and later ERP components) was validated, suggesting that the brain may register and process all types of eye movement for potential social relevance. We also predicted an amplitude gradient for ERPs as a function of gaze change, relative to eye closure and then blinks (H2). H2 was only partly validated: large temporo-occipital N170s to all eye change conditions were observed and did not significantly differ between blinks and other conditions. However, blinks elicited late ERPs that, although robust, were significantly smaller relative to gaze conditions. Our data indicate that small and task-irrelevant facial movements such as blinks are measurably registered by the observer's brain. This finding is suggestive of the potential social significance of blinks which, in turn, has implications for the study of social cognition and use of real-life social scenarios.

16.
Hum Brain Mapp; 32(12): 2241-55, 2011 Dec.
Article in English | MEDLINE | ID: mdl-21305666

ABSTRACT

Both sighted and blind individuals can readily interpret meaning behind everyday real-world sounds. In sighted listeners, we previously reported that regions along the bilateral posterior superior temporal sulci (pSTS) and middle temporal gyri (pMTG) are preferentially activated when presented with recognizable action sounds. These regions have generally been hypothesized to represent primary loci for complex motion processing, including visual biological motion processing and audio-visual integration. However, it remained unclear whether, or to what degree, life-long visual experience might impact functions related to hearing perception or memory of sound-source actions. Using functional magnetic resonance imaging (fMRI), we compared brain regions activated in congenitally blind versus sighted listeners in response to hearing a wide range of recognizable human-produced action sounds (excluding vocalizations) versus unrecognized, backward-played versions of those sounds. Here, we show that recognized human action sounds commonly evoked activity in both groups along most of the left pSTS/pMTG complex, though with relatively greater activity in the right pSTS/pMTG by the blind group. These results indicate that portions of the postero-lateral temporal cortices contain domain-specific hubs for biological and/or complex motion processing independent of sensory-modality experience. Contrasting the two groups, the sighted listeners preferentially activated bilateral parietal plus medial and lateral frontal networks, whereas the blind listeners preferentially activated left anterior insula plus bilateral anterior calcarine and medial occipital regions, including what would otherwise have been visual-related cortex. These global-level network differences suggest that blind and sighted listeners may preferentially use different memory retrieval strategies when hearing and attempting to recognize action sounds.


Subject(s)
Auditory Perception/physiology, Blindness/physiopathology, Brain Mapping, Cerebral Cortex/physiology, Adult, Auditory Evoked Potentials/physiology, Female, Humans, Computer-Assisted Image Interpretation, Magnetic Resonance Imaging, Male, Episodic Memory, Middle Aged, Sound
17.
Brain Topogr; 21(3-4): 193-206, 2009 May.
Article in English | MEDLINE | ID: mdl-19384602

ABSTRACT

In an everyday social interaction we automatically integrate another's facial movements and vocalizations, be they linguistic or otherwise. This requires audiovisual integration of a continual barrage of sensory input, a phenomenon previously well studied with human audiovisual speech but not with non-verbal vocalizations. Using both fMRI and ERPs, we assessed neural activity to viewing and listening to an animated female face producing non-verbal human vocalizations (i.e. coughing, sneezing) under audio-only (AUD), visual-only (VIS), and audiovisual (AV) stimulus conditions, alternating with rest (R). Underadditive effects occurred in regions dominant for sensory processing, which showed AV activation greater than the dominant modality alone. Right posterior temporal and parietal regions showed an AV maximum, in which AV activation was greater than either modality alone but not greater than the sum of the unisensory conditions. Other frontal and parietal regions showed common activation, in which AV activation was the same as one or both unisensory conditions. ERP data showed an early superadditive effect (AV > AUD + VIS, no rest), mid-range underadditive effects for the auditory N140 and face-sensitive N170, and late AV-maximum and common-activation effects. Based on convergence between the fMRI and ERP data, we propose a mechanism in which a multisensory stimulus may be signaled as early as 60 ms and then facilitated in sensory-specific regions by increased processing speed (at N170) and efficiency (decreased amplitude in auditory and face-sensitive cortical activation and ERPs). Finally, higher-order processes are also altered, but in a more complex fashion.
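The additivity criteria described above can be expressed as a simple decision rule; the response values, tolerance, and the simplified handling of the common-activation case are illustrative assumptions, not the study's statistical procedure.

    def classify_av_effect(av, aud, vis, tol=0.0):
        # Label a region's response: superadditive (AV > AUD + VIS), AV maximum
        # (AV exceeds each unisensory response but not their sum), common
        # activation (AV roughly equals the larger unisensory response);
        # anything else is left as "other (e.g. underadditive)".
        if av > aud + vis + tol:
            return "superadditive"
        if av > max(aud, vis) + tol:
            return "AV maximum"
        if abs(av - max(aud, vis)) <= tol:
            return "common activation"
        return "other (e.g. underadditive)"

    # Illustrative response magnitudes (arbitrary units) for one region
    print(classify_av_effect(av=1.4, aud=0.9, vis=0.8, tol=0.05))   # "AV maximum"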


Subject(s)
Auditory Perception/physiology, Cerebral Cortex/physiology, Evoked Potentials/physiology, Facial Expression, Visual Pattern Recognition/physiology, Verbal Behavior/physiology, Acoustic Stimulation, Adult, Afferent Pathways/anatomy and histology, Afferent Pathways/physiology, Brain Mapping, Cerebral Cortex/anatomy and histology, Electroencephalography, Female, Frontal Lobe/anatomy and histology, Frontal Lobe/physiology, Humans, Magnetic Resonance Imaging, Male, Parietal Lobe/anatomy and histology, Parietal Lobe/physiology, Photic Stimulation, Reaction Time/physiology, Temporal Lobe/anatomy and histology, Temporal Lobe/physiology, Time Factors, Young Adult
18.
J Neurosci; 29(7): 2283-96, 2009 Feb 18.
Article in English | MEDLINE | ID: mdl-19228981

ABSTRACT

The ability to detect and rapidly process harmonic sounds, which in nature are typical of animal vocalizations and speech, can be critical for communication among conspecifics and for survival. Single-unit studies have reported neurons in auditory cortex sensitive to specific combinations of frequencies (e.g., harmonics), theorized to rapidly abstract or filter for specific structures of incoming sounds, where large ensembles of such neurons may constitute spectral templates. We studied the contribution of harmonic structure to activation of putative spectral templates in human auditory cortex by using a wide variety of animal vocalizations, as well as artificially constructed iterated rippled noises (IRNs). Both the IRNs and vocalization sounds were quantitatively characterized by calculating a global harmonics-to-noise ratio (HNR). Using functional MRI, we identified HNR-sensitive regions when presenting either artificial IRNs and/or recordings of natural animal vocalizations. This activation included regions situated between functionally defined primary auditory cortices and regions preferential for processing human nonverbal vocalizations or speech sounds. These results demonstrate that the HNR of sound reflects an important second-order acoustic signal attribute that parametrically activates distinct pathways of human auditory cortex. Thus, these results provide novel support for the presence of spectral templates, which may subserve a major role in the hierarchical processing of vocalizations as a distinct category of behaviorally relevant sound.
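One common way to compute a global harmonics-to-noise ratio is from the peak of the normalized autocorrelation within a plausible fundamental-period range (a Praat-style estimate); the sketch below follows that convention and is not necessarily the authors' exact implementation.

    import numpy as np

    def global_hnr_db(x, sr, fmin=75.0, fmax=1000.0):
        # HNR (dB) from the peak of the normalized autocorrelation: the peak
        # value r estimates the periodic (harmonic) fraction of signal power.
        x = x - x.mean()
        ac = np.correlate(x, x, mode="full")[len(x) - 1:]
        ac = ac / ac[0]                        # normalize so lag 0 == 1
        lo, hi = int(sr / fmax), int(sr / fmin)
        r = ac[lo:hi].max()
        return 10.0 * np.log10(r / (1.0 - r))

    # Toy usage: a noisy 200 Hz harmonic tone sampled at 16 kHz
    sr = 16000
    t = np.arange(sr) / sr
    sig = np.sin(2 * np.pi * 200 * t) + 0.3 * np.random.randn(sr)
    print("global HNR ~ %.1f dB" % global_hnr_db(sig, sr))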


Subject(s)
Auditory Cortex/anatomy and histology, Auditory Cortex/physiology, Auditory Evoked Potentials/physiology, Voice Acoustics, Speech Perception/physiology, Animal Vocalization/physiology, Acoustic Stimulation, Adolescent, Adult, Animals, Auditory Pathways/anatomy and histology, Auditory Pathways/physiology, Brain Mapping, Female, Humans, Magnetic Resonance Imaging, Male, Neuropsychological Tests, Pitch Perception, Computer-Assisted Signal Processing, Sound Spectrography, Species Specificity, Young Adult
19.
J Cogn Neurosci; 21(7): 1447-60, 2009 Jul.
Article in English | MEDLINE | ID: mdl-18752412

ABSTRACT

Previously, we and others have shown that attention can enhance visual processing in a spatially specific manner that is retinotopically mapped in the occipital cortex. However, it is difficult to appreciate the functional significance of the spatial pattern of cortical activation just by examining the brain maps. In this study, we visualize the neural representation of the "spotlight" of attention using a back-projection of attention-related brain activation onto a diagram of the visual field. In the two main experiments, we examine the topography of attentional activation in the occipital and parietal cortices. In retinotopic areas, attentional enhancement is strongest at the locations of the attended target, but also spreads to nearby locations and even weakly to restricted locations in the opposite visual field. The dispersion of attentional effects around an attended site increases with the eccentricity of the target in a manner that roughly corresponds to a constant area of spread within the cortex. When averaged across multiple observers, these patterns appear consistent with a gradient model of spatial attention. However, individual observers exhibit complex variations that are unique but reproducible. Overall, these results suggest that the topography of visual attention for each individual is composed of a common theme plus a personal variation that may reflect their own unique "attentional style."


Subject(s)
Attention/physiology, Brain Mapping, Cerebral Cortex/physiology, Space Perception/physiology, Visual Fields/physiology, Cerebral Cortex/blood supply, Eye Movements/physiology, Female, Functional Laterality, Humans, Computer-Assisted Image Processing/methods, Magnetic Resonance Imaging/methods, Male, Oxygen/blood, Photic Stimulation/methods, Visual Pathways/blood supply, Visual Pathways/physiology
20.
PLoS One; 3(3): e1897, 2008 Mar 26.
Article in English | MEDLINE | ID: mdl-18365029

ABSTRACT

Recent brain imaging studies using functional magnetic resonance imaging (fMRI) have implicated the insula and anterior cingulate cortices in the empathic response to another's pain. However, virtually nothing is known about the impact of the voluntary generation of compassion on this network. To investigate these questions we assessed brain activity using fMRI while novice and expert meditation practitioners generated a loving-kindness-compassion meditation state. To probe affective reactivity, we presented emotional and neutral sounds during the meditation and comparison periods. Our main hypothesis was that the concern for others cultivated during this form of meditation enhances affective processing, in particular in response to sounds of distress, and that this response to emotional sounds is modulated by the degree of meditation training. The presentation of the emotional sounds was associated with increased pupil diameter and activation of limbic regions (insula and cingulate cortices) during meditation (versus rest). During meditation, activation in the insula in response to negative sounds, relative to positive or neutral sounds, was greater in expert than in novice meditators. The strength of activation in the insula was also associated with self-reported intensity of the meditation for both groups. These results support the role of the limbic circuitry in emotion sharing. The comparison between meditation and rest states in experts and novices also showed increased activation in the amygdala, right temporo-parietal junction (TPJ), and right posterior superior temporal sulcus (pSTS) in response to all sounds, suggesting greater detection of the emotional sounds and enhanced mentation in response to emotional human vocalizations in experts than in novices during meditation. Together these data indicate that the mental expertise to cultivate positive emotion alters the activation of circuitries previously linked to empathy and theory of mind in response to emotional stimuli.


Subject(s)
Central Nervous System/physiology, Emotions, Meditation, Adult, Female, Humans, Male, Middle Aged