Results 1 - 14 of 14
1.
Commun Med (Lond); 4(1): 117, 2024 Jun 13.
Article in English | MEDLINE | ID: mdl-38872007

ABSTRACT

BACKGROUND: Mobile upright PET devices have the potential to enable previously impossible neuroimaging studies. Currently available options either provide deep-brain coverage while severely limiting head and body movement, or enable upright, moving paradigms while covering only the brain surface. METHODS: In this study, we test the feasibility of an upright, motion-compatible brain imager, our Ambulatory Motion-enabling Positron Emission Tomography (AMPET) helmet prototype, for use as a neuroscience tool by replicating a variant of a published PET/fMRI study of the neural correlates of human walking. We validate our AMPET prototype by conducting a walking movement paradigm to determine motion tolerance and to assess task-related activity in motor-related brain regions. Human participants (n = 11) performed a walking-in-place task during simultaneous AMPET imaging, receiving a bolus delivery of F18-fluorodeoxyglucose. RESULTS: Here we validate three pre-determined criteria, including a brain alignment motion artifact of less than 2 mm and functional neuroimaging outcomes consistent with the existing walking movement literature. CONCLUSIONS: The study extends the potential utility of mobile, upright, and motion-tolerant neuroimaging devices in real-world, ecologically valid paradigms. Our approach accounts for the real-world logistics of an actual human participant study and can inform experimental physicists, engineers, and imaging instrumentation developers undertaking similar future studies. The technical advances described herein help set new priorities for future neuroimaging devices and for research of the human brain in health and disease.


Brain imaging plays an important role in understanding how the human brain functions in both health and disease. However, traditional brain scanners often require people to remain still, limiting the study of the brain in motion, and excluding people who cannot remain still. To overcome this, our team developed an imager that moves with a person's head, which uses a suspended ring of lightweight detectors that fit to the head. Using our imager, we were able to obtain clear brain images of people walking in place that showed the expected brain activity patterns during walking. Further development of our imager could enable it to be used to better understand real-world brain function and behavior, enabling enhanced knowledge and treatment of neurological conditions.
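The motion-tolerance criterion above (brain alignment artifact under 2 mm) is the kind of check that can be run on rigid-body motion estimates from frame-to-frame image registration. Below is a minimal sketch of such a check, assuming translations have already been estimated by an external registration tool; the function name, threshold constant, and demo values are our own illustrative assumptions, not the study's pipeline.

```python
import numpy as np

MAX_DISPLACEMENT_MM = 2.0  # pre-determined criterion from the abstract

def max_frame_displacement(translations_mm: np.ndarray) -> float:
    """Largest Euclidean head displacement across frames.

    translations_mm: (n_frames, 3) rigid-body translations (x, y, z)
    relative to the reference frame, from frame-to-frame registration.
    """
    return float(np.linalg.norm(translations_mm, axis=1).max())

# Hypothetical per-frame translations (mm) from a registration tool
translations = np.array([[0.0, 0.0, 0.0],
                         [0.4, -0.2, 0.1],
                         [1.1, 0.3, -0.5]])
print(max_frame_displacement(translations) < MAX_DISPLACEMENT_MM)  # True
```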

2.
Lang Cogn Neurosci; 36(6): 773-790, 2021.
Article in English | MEDLINE | ID: mdl-34568509

ABSTRACT

Higher cognitive functions such as linguistic comprehension must ultimately relate to perceptual systems in the brain, though how and why these links form remains unclear. The distinct brain networks that mediate perception when hearing real-world natural sounds have recently been proposed to respect a taxonomic model of acoustic-semantic categories. Using functional magnetic resonance imaging (fMRI) with Chinese/English bilingual listeners, the present study explored whether reception of short spoken phrases, in both Chinese (Mandarin) and English, describing corresponding sound-producing events would engage overlapping brain regions at a semantic category level. The results revealed a double-dissociation of cortical regions that were preferential for representing knowledge of human versus environmental action events, whether conveyed through natural sounds or the corresponding spoken phrases in either language. These findings of cortical hubs exhibiting linguistic-perceptual knowledge links at a semantic category level should help to advance neurocomputational models of the neurodevelopment of language systems.

3.
Cereb Cortex Commun; 2(1): tgab002, 2021.
Article in English | MEDLINE | ID: mdl-33718874

ABSTRACT

Our ability to perceive meaningful action events involving objects, people, and other animate agents is characterized in part by an interplay of visual and auditory sensory processing and their cross-modal interactions. However, this multisensory ability can be altered or dysfunctional in some hearing and sighted individuals, and in some clinical populations. The present meta-analysis sought to test current hypotheses regarding neurobiological architectures that may mediate audio-visual multisensory processing. Reported coordinates from 82 neuroimaging studies (137 experiments) that revealed some form of audio-visual interaction in discrete brain regions were compiled, converted to a common coordinate space, and then organized along specific categorical dimensions to generate activation likelihood estimate (ALE) brain maps and various contrasts of those derived maps. The results revealed brain regions (cortical "hubs") preferentially involved in multisensory processing along different stimulus category dimensions, including 1) living versus nonliving audio-visual events, 2) audio-visual events involving vocalizations versus actions by living sources, 3) emotionally valent events, and 4) dynamic-visual versus static-visual audio-visual stimuli. These meta-analysis results are discussed in the context of neurocomputational theories of semantic knowledge representations and perception, and the brain volumes of interest are available for download to facilitate data interpretation for future neuroimaging studies.
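For orientation, the ALE procedure referenced above models each reported focus as a three-dimensional Gaussian probability blob, combines the blobs of a single experiment into a modeled activation (MA) map, and then unites the per-experiment maps voxel-wise under independence: ALE = 1 − Π(1 − MA_i). The sketch below shows that core combination step under simplifying assumptions (a fixed kernel width; established tools such as GingerALE scale the kernel with sample size), so it is illustrative rather than a reimplementation of this meta-analysis.

```python
import numpy as np

def modeled_activation(shape, foci_vox, fwhm_vox=5.0):
    """Modeled activation (MA) map for one experiment:
    voxel-wise maximum over Gaussian blobs centered at each focus."""
    sigma = fwhm_vox / 2.355  # FWHM -> standard deviation
    grid = np.stack(np.meshgrid(*[np.arange(s) for s in shape],
                                indexing="ij"), axis=-1)
    ma = np.zeros(shape)
    for focus in foci_vox:
        d2 = ((grid - focus) ** 2).sum(axis=-1)
        ma = np.maximum(ma, np.exp(-d2 / (2 * sigma ** 2)))
    return ma

def ale_map(experiments, shape):
    """Voxel-wise union of per-experiment MA maps: 1 - prod(1 - MA_i)."""
    not_active = np.ones(shape)
    for foci in experiments:
        not_active *= 1.0 - modeled_activation(shape, np.asarray(foci))
    return 1.0 - not_active

# Hypothetical toy volume with two experiments' foci (voxel coordinates)
ale = ale_map([[(10, 12, 8)], [(11, 12, 9), (30, 20, 15)]], (40, 40, 30))
print(ale.max())
```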

4.
J Speech Lang Hear Res; 63(10): 3539-3559, 2020 Oct 16.
Article in English | MEDLINE | ID: mdl-32936717

ABSTRACT

Purpose: From an anthropological perspective on hominin communication, the human auditory system likely evolved to enable special sensitivity to sounds produced by the vocal tracts of human conspecifics, whether attended or passively heard. While numerous electrophysiological studies have used stereotypical human-produced verbal (speech and singing) and nonverbal vocalizations to identify human voice-sensitive responses, controversy remains as to when (and where) processing of the acoustic signal attributes characteristic of "human voiceness" per se initiates in the brain. Method: To explore this, we used animal vocalizations and human-mimicked versions of those calls ("mimic voice") to examine late auditory evoked potential responses in humans. Results: We revealed an N1b component (96-120 ms poststimulus) during a nonattending listening condition with significantly greater magnitude in response to mimics, beginning as early as primary auditory cortices and preceding the time window reported in previous studies, which found species-specific vocalization processing initiating in the range of 147-219 ms. During a sound discrimination task, a P600 component (500-700 ms poststimulus) showed specificity for accurate discrimination of human mimic voice. Distinct acoustic signal attributes of the stimuli were used in a classifier model, which could distinguish most human from animal voices comparably to the behavioral data, though no single feature could adequately distinguish human voiceness. Conclusions: These results provide novel ideas for algorithms used in neuromimetic hearing aids, as well as direct electrophysiological support for a neurocognitive model of natural sound processing that informs both neurodevelopmental and anthropological models regarding the establishment of auditory communication systems in humans. Supplemental Material: https://doi.org/10.23641/asha.12903839.
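The classifier analysis mentioned above pairs per-stimulus acoustic features with a supervised model and compares its accuracy to behavior. A minimal sketch of that style of analysis follows; the feature placeholders and the choice of logistic regression are our illustrative assumptions, not the study's exact model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical per-stimulus features, e.g. [HNR_dB, mean_f0_Hz, spectral_centroid_Hz]
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 3))   # stand-in for real acoustic measurements
y = np.repeat([0, 1], 30)      # 0 = animal call, 1 = human mimic voice

clf = make_pipeline(StandardScaler(), LogisticRegression())
scores = cross_val_score(clf, X, y, cv=5)  # chance level is 0.5 here
print(f"cross-validated accuracy: {scores.mean():.2f}")
```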


Subject(s)
Auditory Cortex , Voice , Acoustic Stimulation , Animals , Auditory Perception , Evoked Potentials, Auditory , Humans
5.
Brain Lang; 183: 64-78, 2018 Aug.
Article in English | MEDLINE | ID: mdl-29966815

ABSTRACT

Oral mimicry is thought to represent an essential process for the neurodevelopment of spoken language systems in infants and for the evolution of language in hominins, and it is a process that could possibly aid recovery in stroke patients. Using functional magnetic resonance imaging (fMRI), we previously reported a divergence of auditory cortical pathways mediating perception of specific categories of natural sounds. However, it remained unclear if or how this fundamental sensory organization by the brain might relate to motor output, such as sound mimicry. Here, using fMRI, we revealed a dissociation of activated brain regions preferential for hearing with the intent to imitate and for the oral mimicry of animal action sounds versus animal vocalizations as distinct acoustic-semantic categories. This functional dissociation may reflect components of a rudimentary cortical architecture that links systems for processing acoustic-semantic universals of natural sound with motor-related systems mediating oral mimicry at a category level. The observation of different brain regions involved in different aspects of oral mimicry may inform targeted therapies for rehabilitation of functional abilities after stroke.


Subject(s)
Auditory Pathways/diagnostic imaging , Auditory Perception/physiology , Hearing/physiology , Imitative Behavior/physiology , Acoustic Stimulation/methods , Adult , Auditory Pathways/physiology , Brain Mapping , Cerebral Cortex/physiology , Female , Humans , Magnetic Resonance Imaging/methods , Male , Semantics , Sound , Young Adult
6.
Perspect Psychol Sci; 13(1): 66-69, 2018 Jan.
Article in English | MEDLINE | ID: mdl-29016240

ABSTRACT

In response to our article, Davidson and Dahl offer commentary and advice regarding additional topics crucial to a comprehensive prescriptive agenda for future research on mindfulness and meditation. Their commentary raises further challenges and provides an important complement to our article. More consideration of these issues is especially welcome because limited space precluded us from addressing all relevant topics. While we agree with many of Davidson and Dahl's suggestions, the present reply (a) highlights reasons why the concerns we expressed are still especially germane to mindfulness and meditation research (even though those concerns may not be entirely unique) and (b) gives more context to other issues posed by them. We discuss special characteristics of individuals who participate in mindfulness and meditation research and focus on the vulnerability of this field inherent in its relative youthfulness compared to other more mature scientific disciplines. Moreover, our reply highlights the serious consequences of adverse experiences suffered by a significant subset of individuals during mindfulness and other contemplative practices. We also scrutinize common contemporary applications of mindfulness and meditation to illness, and some caveats are introduced regarding mobile technologies for guidance of contemplative practices.


Subject(s)
Meditation , Mindfulness , Humans , Research
7.
Perspect Psychol Sci; 13(1): 36-61, 2018 Jan.
Article in English | MEDLINE | ID: mdl-29016274

ABSTRACT

During the past two decades, mindfulness meditation has gone from being a fringe topic of scientific investigation to being an occasional replacement for psychotherapy, a tool of corporate well-being, a widely implemented educational practice, and a "key to building more resilient soldiers." Yet the mindfulness movement and the empirical evidence supporting it have not gone without criticism. Misinformation and poor methodology associated with past studies of mindfulness may lead public consumers to be harmed, misled, and disappointed. Addressing such concerns, the present article discusses the difficulties of defining mindfulness, delineates the proper scope of research into mindfulness practices, and explicates crucial methodological issues for interpreting results from investigations of mindfulness. To do so, the authors draw on their diverse areas of expertise to review the present state of mindfulness research, comprehensively summarizing what we do and do not know, while providing a prescriptive agenda for contemplative science, with a particular focus on assessment, mindfulness training, possible adverse effects, and intersection with brain imaging. Our goals are to inform interested scientists, the news media, and the public; to minimize harm; to curb poor research practices; and to stanch the flow of misinformation about the benefits, costs, and future prospects of mindfulness meditation.


Subject(s)
Meditation , Mindfulness , Brain/diagnostic imaging , Brain/physiology , Humans , Research Design , Semantics
8.
Neuropsychologia; 105: 223-242, 2017 Oct.
Article in English | MEDLINE | ID: mdl-28467888

ABSTRACT

Interaction with the world is a multisensory experience, but most of what is known about the neural correlates of perception comes from studying vision. Auditory input enters the cortex with its own set of unique qualities and supports oral communication, speech, music, and the understanding of the emotional and intentional states of others, all of which are central to the human experience. To better understand how the auditory system develops, how it recovers after injury, and how its functions may have transitioned over the course of hominin evolution, advances are needed in models of how the human brain is organized to process real-world natural sounds and "auditory objects". This review presents a simple fundamental neurobiological model of hearing perception at a category level that incorporates principles of bottom-up signal processing together with top-down constraints of grounded cognition theories of knowledge representation. Though mostly derived from the human neuroimaging literature, this theoretical framework highlights rudimentary principles of real-world sound processing that may apply to most if not all mammalian species with hearing and acoustic communication abilities. The model encompasses three basic categories of sound source: (1) action sounds (non-vocalizations) produced by 'living things', with human (conspecific) and non-human animal sources representing two subcategories; (2) action sounds produced by 'non-living things', including environmental sources and human-made machinery; and (3) vocalizations ('living things'), with human versus non-human animals as two subcategories therein. The model is presented in the context of cognitive architectures relating to multisensory, sensory-motor, and spoken language organizations. The model's predictive value is further discussed in the context of anthropological theories of oral communication evolution and the neurodevelopment of spoken language proto-networks in infants/toddlers. These phylogenetic and ontogenetic frameworks both entail cortical network maturations that are proposed to be organized, at least in part, around a number of universal acoustic-semantic signal attributes of natural sounds, which are addressed herein.
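The three-category structure described above can be written out as a small nested mapping, which makes the category and subcategory relationships explicit at a glance. This is purely an illustrative encoding of the prose model, not an artifact from the paper.

```python
# Illustrative encoding of the model's acoustic-semantic taxonomy
SOUND_TAXONOMY = {
    "action sounds (non-vocalizations), living": ["human (conspecific)",
                                                  "non-human animal"],
    "action sounds, non-living": ["environmental",
                                  "human-made machinery"],
    "vocalizations (living)": ["human",
                               "non-human animal"],
}

for category, subcategories in SOUND_TAXONOMY.items():
    print(category, "->", ", ".join(subcategories))
```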


Subject(s)
Auditory Perception/physiology , Language , Models, Biological , Pattern Recognition, Physiological/physiology , Acoustic Stimulation , Communication , Humans , Neurobiology
9.
Phys Med Biol; 61(10): 3681-97, 2016 May 21.
Article in English | MEDLINE | ID: mdl-27081753

ABSTRACT

The desire to understand normal and disordered human brain function of upright, moving persons in natural environments motivates the development of the ambulatory micro-dose brain PET imager (AMPET). An ideal system would be lightweight yet offer high sensitivity and spatial resolution, although these requirements are often in conflict with each other. One potential approach to meeting the design goals is a compact brain-only imaging device with a head-sized aperture. However, a compact geometry increases parallax error in peripheral lines of response, which increases bias and variance in region of interest (ROI) quantification. We therefore performed simulation studies to search for the optimal system configuration and to evaluate the potential improvement in quantification performance over existing scanners. We used the Cramér-Rao variance bound to compare the performance for ROI quantification using different scanner geometries. The results show that while a smaller ring diameter can increase photon detection sensitivity and hence reduce the variance at the center of the field of view, it can also result in higher variance in peripheral regions when the detector crystal is 15 mm or longer. This variance can be substantially reduced by adding depth-of-interaction (DOI) measurement capability to the detector modules. Our simulation study also shows that the relative performance depends on the size of the ROI, and a large ROI favors a compact geometry even without DOI information. Based on these results, we propose a compact 'helmet' design using detectors with DOI capability. Monte Carlo simulations show the helmet design can achieve four-fold higher sensitivity and resolve smaller features than existing cylindrical brain PET scanners. The simulations further suggest that improving time-of-flight (TOF) timing resolution from 400 ps to 200 ps yields a noticeable improvement in image quality, indicating that better timing resolution is desirable for brain imaging.
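For readers unfamiliar with the comparison used above: for an unbiased estimator of ROI activities $\theta$ from Poisson-distributed PET data with mean $\bar{y}(\theta) = A\theta + r$ (system matrix $A$, randoms-plus-scatter background $r$), the variance of each estimate is bounded below by the inverse Fisher information. A standard statement of the bound, consistent with the PET estimation literature though not quoted from this paper, is:

```latex
\operatorname{Var}\!\left(\hat{\theta}_j\right) \;\geq\; \left[ F(\theta)^{-1} \right]_{jj},
\qquad
F(\theta) \;=\; A^{\mathsf{T}} \operatorname{diag}\!\left( \frac{1}{\bar{y}(\theta)} \right) A,
\qquad
\bar{y}(\theta) \;=\; A\theta + r .
```

Scanner geometries can then be ranked by how this lower bound behaves for central versus peripheral ROIs, which is how the parallax, DOI, and ring-diameter trade-offs above are quantified.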


Subject(s)
Brain/diagnostic imaging , Positron-Emission Tomography/instrumentation , Equipment Design , Humans , Phantoms, Imaging , Photons , Positron-Emission Tomography/methods , Radiation Dosage , Sensitivity and Specificity
10.
Front Hum Neurosci; 5: 68, 2011.
Article in English | MEDLINE | ID: mdl-21852969

ABSTRACT

Facial movements have the potential to be powerful social signals. Previous studies have shown that eye-gaze changes and simple mouth movements can elicit robust neural responses, which can be modulated as a function of potential social significance. Eye blinks are frequent events and are usually not deliberately communicative, yet blink rate is known to influence social perception. Here, we studied event-related potentials (ERPs) elicited by observing non-task-relevant blinks, eye closure, and eye-gaze changes in a centrally presented natural face stimulus. Our first hypothesis (H1), that blinks would produce robust ERPs (N170 and later ERP components), was validated, suggesting that the brain may register and process all types of eye movement for potential social relevance. We also predicted an amplitude gradient for ERPs as a function of gaze change, relative to eye closure and then blinks (H2). H2 was only partly validated: large temporo-occipital N170s were observed in all eye-change conditions and did not significantly differ between blinks and the other conditions. However, blinks elicited late ERPs that, although robust, were significantly smaller than those in the gaze conditions. Our data indicate that small and task-irrelevant facial movements such as blinks are measurably registered by the observer's brain. This finding suggests the potential social significance of blinks, which, in turn, has implications for the study of social cognition and the use of real-life social scenarios.
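ERP components like the N170 reported above are obtained by epoching continuous EEG around event markers, baseline-correcting each epoch, and averaging across trials. Below is a minimal sketch of that generic procedure in plain NumPy; the array shapes, sampling rate, and epoch window are our assumptions, as the study's actual pipeline is not specified here.

```python
import numpy as np

FS = 1000  # sampling rate (Hz), assumed

def erp(eeg, event_samples, pre_s=0.2, post_s=0.6):
    """Average event-locked epochs from continuous EEG.

    eeg: (n_channels, n_samples) continuous recording
    event_samples: sample indices of stimulus onsets (e.g., blink onsets)
    Returns: (n_channels, epoch_len) baseline-corrected trial average.
    """
    pre, post = int(pre_s * FS), int(post_s * FS)
    epochs = np.stack([eeg[:, s - pre:s + post] for s in event_samples
                       if s - pre >= 0 and s + post <= eeg.shape[1]])
    baseline = epochs[:, :, :pre].mean(axis=2, keepdims=True)
    return (epochs - baseline).mean(axis=0)

# Hypothetical demo: 32-channel noise with one event per second
rng = np.random.default_rng(1)
data = rng.normal(size=(32, 10 * FS))
average = erp(data, event_samples=np.arange(FS, 9 * FS, FS))
print(average.shape)  # (32, 800)
```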

11.
Hum Brain Mapp; 32(12): 2241-55, 2011 Dec.
Article in English | MEDLINE | ID: mdl-21305666

ABSTRACT

Both sighted and blind individuals can readily interpret meaning behind everyday real-world sounds. In sighted listeners, we previously reported that regions along the bilateral posterior superior temporal sulci (pSTS) and middle temporal gyri (pMTG) are preferentially activated when presented with recognizable action sounds. These regions have generally been hypothesized to represent primary loci for complex motion processing, including visual biological motion processing and audio-visual integration. However, it remained unclear whether, or to what degree, life-long visual experience might impact functions related to hearing perception or memory of sound-source actions. Using functional magnetic resonance imaging (fMRI), we compared brain regions activated in congenitally blind versus sighted listeners in response to hearing a wide range of recognizable human-produced action sounds (excluding vocalizations) versus unrecognized, backward-played versions of those sounds. Here, we show that recognized human action sounds commonly evoked activity in both groups along most of the left pSTS/pMTG complex, though with relatively greater activity in the right pSTS/pMTG by the blind group. These results indicate that portions of the postero-lateral temporal cortices contain domain-specific hubs for biological and/or complex motion processing independent of sensory-modality experience. Contrasting the two groups, the sighted listeners preferentially activated bilateral parietal plus medial and lateral frontal networks, whereas the blind listeners preferentially activated left anterior insula plus bilateral anterior calcarine and medial occipital regions, including what would otherwise have been visual-related cortex. These global-level network differences suggest that blind and sighted listeners may preferentially use different memory retrieval strategies when hearing and attempting to recognize action sounds.


Subject(s)
Auditory Perception/physiology , Blindness/physiopathology , Brain Mapping , Cerebral Cortex/physiology , Recognition, Psychology/physiology , Adult , Evoked Potentials, Auditory/physiology , Female , Humans , Image Interpretation, Computer-Assisted , Magnetic Resonance Imaging , Male , Memory, Episodic , Middle Aged , Sound
12.
J Neurosci; 29(7): 2283-96, 2009 Feb 18.
Article in English | MEDLINE | ID: mdl-19228981

ABSTRACT

The ability to detect and rapidly process harmonic sounds, which in nature are typical of animal vocalizations and speech, can be critical for communication among conspecifics and for survival. Single-unit studies have reported neurons in auditory cortex sensitive to specific combinations of frequencies (e.g., harmonics), theorized to rapidly abstract or filter for specific structures of incoming sounds, where large ensembles of such neurons may constitute spectral templates. We studied the contribution of harmonic structure to the activation of putative spectral templates in human auditory cortex by using a wide variety of animal vocalizations, as well as artificially constructed iterated rippled noises (IRNs). Both the IRNs and the vocalization sounds were quantitatively characterized by calculating a global harmonics-to-noise ratio (HNR). Using functional MRI, we identified HNR-sensitive regions when presenting artificial IRNs and/or recordings of natural animal vocalizations. This activation included regions situated between functionally defined primary auditory cortices and regions preferential for processing human nonverbal vocalizations or speech sounds. These results demonstrate that the HNR of a sound reflects an important second-order acoustic signal attribute that parametrically activates distinct pathways of human auditory cortex. Thus, these results provide novel support for the presence of spectral templates, which may subserve a major role in the hierarchical processing of vocalizations as a distinct category of behaviorally relevant sound.
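A common way to compute a harmonics-to-noise ratio of the kind used above is from the normalized autocorrelation peak r_max at the fundamental period, following Boersma's method: HNR(dB) = 10 log10(r_max / (1 - r_max)). The sketch below is a simplified single-frame version of that idea; the study's exact global HNR computation may differ in detail.

```python
import numpy as np

def hnr_db(signal, fs, f0_min=50.0, f0_max=500.0):
    """Harmonics-to-noise ratio via normalized autocorrelation
    (simplified single-frame variant of Boersma's method)."""
    x = signal - signal.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    ac = ac / ac[0]                       # normalize so ac[0] == 1
    lo, hi = int(fs / f0_max), int(fs / f0_min)
    r_max = ac[lo:hi].max()               # peak over plausible pitch periods
    r_max = np.clip(r_max, 1e-6, 1 - 1e-6)
    return 10.0 * np.log10(r_max / (1.0 - r_max))

# Hypothetical demo: a harmonic tone scores far higher than white noise
fs = 16000
t = np.arange(fs // 2) / fs  # 0.5 s of signal
tone = np.sin(2 * np.pi * 200 * t) + 0.5 * np.sin(2 * np.pi * 400 * t)
noise = np.random.default_rng(2).normal(size=t.size)
print(hnr_db(tone, fs), hnr_db(noise, fs))
```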


Subject(s)
Auditory Cortex/anatomy & histology , Auditory Cortex/physiology , Evoked Potentials, Auditory/physiology , Speech Acoustics , Speech Perception/physiology , Vocalization, Animal/physiology , Acoustic Stimulation , Adolescent , Adult , Animals , Auditory Pathways/anatomy & histology , Auditory Pathways/physiology , Brain Mapping , Female , Humans , Magnetic Resonance Imaging , Male , Neuropsychological Tests , Pitch Perception , Signal Processing, Computer-Assisted , Sound Spectrography , Species Specificity , Young Adult
13.
J Cogn Neurosci; 21(7): 1447-60, 2009 Jul.
Article in English | MEDLINE | ID: mdl-18752412

ABSTRACT

Previously, we and others have shown that attention can enhance visual processing in a spatially specific manner that is retinotopically mapped in the occipital cortex. However, it is difficult to appreciate the functional significance of the spatial pattern of cortical activation just by examining the brain maps. In this study, we visualize the neural representation of the "spotlight" of attention using a back-projection of attention-related brain activation onto a diagram of the visual field. In the two main experiments, we examine the topography of attentional activation in the occipital and parietal cortices. In retinotopic areas, attentional enhancement is strongest at the locations of the attended target, but also spreads to nearby locations and even weakly to restricted locations in the opposite visual field. The dispersion of attentional effects around an attended site increases with the eccentricity of the target in a manner that roughly corresponds to a constant area of spread within the cortex. When averaged across multiple observers, these patterns appear consistent with a gradient model of spatial attention. However, individual observers exhibit complex variations that are unique but reproducible. Overall, these results suggest that the topography of visual attention for each individual is composed of a common theme plus a personal variation that may reflect their own unique "attentional style."
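The back-projection described above can be approximated by using each voxel's retinotopically mapped preferred eccentricity and polar angle to scatter its attention-related response into visual-field coordinates. The sketch below illustrates that idea under our own simplifying assumptions (precomputed per-voxel retinotopy, simple binning); it is not the authors' code.

```python
import numpy as np

def backproject(activation, ecc_deg, angle_rad, n_bins=64, max_ecc=12.0):
    """Accumulate per-voxel activation into a visual-field image.

    activation: (n_voxels,) attention-related response per voxel
    ecc_deg, angle_rad: each voxel's preferred eccentricity / polar angle
    from a separate retinotopic mapping session.
    """
    x = ecc_deg * np.cos(angle_rad)
    y = ecc_deg * np.sin(angle_rad)
    img, _, _ = np.histogram2d(y, x, bins=n_bins,
                               range=[[-max_ecc, max_ecc]] * 2,
                               weights=activation)
    counts, _, _ = np.histogram2d(y, x, bins=n_bins,
                                  range=[[-max_ecc, max_ecc]] * 2)
    return img / np.maximum(counts, 1)    # mean activation per field bin

# Hypothetical demo: random retinotopy with a hotspot near 6 deg, right field
rng = np.random.default_rng(3)
ecc, ang = rng.uniform(0, 12, 5000), rng.uniform(-np.pi, np.pi, 5000)
act = np.exp(-((ecc - 6) ** 2 + ang ** 2))  # strongest near (6 deg, 0 rad)
print(backproject(act, ecc, ang).shape)     # (64, 64)
```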


Subject(s)
Attention/physiology , Brain Mapping , Cerebral Cortex/physiology , Space Perception/physiology , Visual Fields/physiology , Cerebral Cortex/blood supply , Eye Movements/physiology , Female , Functional Laterality , Humans , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Male , Oxygen/blood , Photic Stimulation/methods , Visual Pathways/blood supply , Visual Pathways/physiology
14.
J Cogn Neurosci; 18(8): 1314-30, 2006 Aug.
Article in English | MEDLINE | ID: mdl-16859417

ABSTRACT

Our ability to manipulate and understand the use of a wide range of tools is a feature that sets humans apart from other animals. In right-handers, we previously reported that hearing hand-manipulated tool sounds preferentially activates a left hemisphere network of motor-related brain regions hypothesized to be related to handedness. Using functional magnetic resonance imaging, we compared cortical activation in strongly right-handed versus left-handed listeners categorizing tool sounds relative to animal vocalizations. Here we show that tool sounds preferentially evoke activity predominantly in the hemisphere "opposite" the dominant hand, in specific high-level motor-related and multisensory cortical regions, as determined by a separate task involving pantomiming tool-use gestures. This organization presumably reflects the idea that we typically learn the "meaning" of tool sounds in the context of using them with our dominant hand, such that the networks underlying motor imagery or action schemas may be recruited to facilitate recognition.


Subject(s)
Cerebral Cortex/physiology , Functional Laterality/physiology , Hearing/physiology , Psychomotor Performance/physiology , Sound , Acoustic Stimulation/methods , Adult , Brain Mapping , Cerebral Cortex/blood supply , Female , Humans , Image Processing, Computer-Assisted , Linear Models , Magnetic Resonance Imaging/methods , Male , Middle Aged , Recognition, Psychology/physiology , Sound Localization