1.
Nat Neurosci; 24(8): 1176-1186, 2021 Aug.
Article in English | MEDLINE | ID: mdl-34099922

ABSTRACT

The Adolescent Brain Cognitive Development (ABCD) Study® is a 10-year longitudinal study of children recruited at ages 9 and 10. A battery of neuroimaging tasks is administered biennially to track neurodevelopment and identify individual differences in brain function. This study reports activation patterns from functional MRI (fMRI) tasks completed at baseline, which were designed to measure cognitive impulse control with a stop signal task (SST; N = 5,547), reward anticipation and receipt with a monetary incentive delay (MID) task (N = 6,657), and working memory and emotion reactivity with an emotional N-back (EN-back) task (N = 6,009). Further, we report the spatial reproducibility of activation patterns by assessing between-group vertex/voxelwise correlations of blood oxygen level-dependent (BOLD) activation. Analyses reveal robust brain activations that are consistent with the published literature, vary across fMRI tasks/contrasts, and correlate modestly with individual behavioral performance on the tasks. These results establish the preadolescent brain function baseline, guide interpretation of cross-sectional analyses, and will enable the investigation of longitudinal changes during adolescent development.
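As an illustrative sketch only (not the ABCD pipeline), the between-group spatial reproducibility described above can be approximated by correlating group-average activation maps from two random half-samples; the array names and split scheme here are assumptions:

```python
# Hedged sketch: spatial reproducibility as a split-half, vertex-wise
# correlation of group-average activation maps.
# Assumes `betas` is a (n_subjects, n_vertices) array of contrast
# estimates; names are illustrative, not from the ABCD release.
import numpy as np

def split_half_map_correlation(betas, rng=None):
    """Correlate group-average activation maps from two random half-samples."""
    rng = np.random.default_rng(rng)
    idx = rng.permutation(betas.shape[0])
    half = betas.shape[0] // 2
    map_a = betas[idx[:half]].mean(axis=0)   # group-average map, half 1
    map_b = betas[idx[half:]].mean(axis=0)   # group-average map, half 2
    return np.corrcoef(map_a, map_b)[0, 1]   # vertex-wise Pearson r
```

In practice the split could repeat over many random partitions to give a distribution of reproducibility values rather than a single r.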


Subject(s)
Brain/physiology, Adolescent, Adolescent Development/physiology, Child, Female, Humans, Magnetic Resonance Imaging, Male, Reference Values
2.
Prog Neurobiol; 195: 101824, 2020 Dec.
Article in English | MEDLINE | ID: mdl-32446882

ABSTRACT

Relatively little is known about how the human brain identifies movement of objects while the observer is also moving in the environment. Ecologically, this is one of the most fundamental motion processing problems, critical for survival. To study this problem, we used a task that involved nine textured spheres moving in depth: eight simulated the observer's forward motion while the ninth, the target, moved independently with a different speed towards or away from the observer. Capitalizing on the high temporal resolution of magnetoencephalography (MEG), we trained a Support Vector Classifier (SVC) using the sensor-level data to identify correct and incorrect responses. Using the same MEG data, we addressed the dynamics of cortical processes involved in the detection of the independently moving object and investigated whether we could obtain confirmatory evidence for the brain activity patterns used by the classifier. Our findings indicate that response correctness could be reliably predicted by the SVC, with the highest accuracy during the blank period after motion and preceding the response. The spatial distribution of the areas critical for correct prediction was similar but not exclusive to areas underlying the evoked activity. Importantly, the SVC identified frontal areas, otherwise not detected with evoked activity, that appear important for successful performance of the task. Dynamic connectivity further supported the involvement of frontal and occipital-temporal areas during the task periods. This is the first study to dynamically map cortical areas using a fully data-driven approach in order to investigate the neural mechanisms involved in the detection of moving objects during the observer's self-motion.
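A minimal sketch of the decoding approach named above (linear SVC on sensor-level features with cross-validation); the feature layout, simulated data, and effect size are assumptions, not the authors' pipeline:

```python
# Illustrative sketch: predicting response correctness from sensor-level
# MEG features with a linear Support Vector Classifier.
# `X` stands in for (n_trials, n_sensors * n_times) features; here it is
# simulated with a weak injected class difference.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_features = 200, 50
y = rng.integers(0, 2, n_trials)      # 1 = correct, 0 = incorrect response
X = rng.standard_normal((n_trials, n_features))
X[y == 1, :5] += 1.0                  # weak class-dependent signal

# Standardize features, then fit a linear SVC; score with 5-fold CV.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, X, y, cv=5)
mean_accuracy = scores.mean()
```

In a real analysis the cross-validation would typically be stratified across trials and the classifier trained separately per time window to trace decoding accuracy over the trial.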


Subject(s)
Cerebral Cortex/physiology, Connectome, Motion Perception/physiology, Optic Flow/physiology, Space Perception/physiology, Support Vector Machine, Adult, Connectome/methods, Female, Humans, Magnetoencephalography, Male, Young Adult
3.
Multisens Res; 32(1): 45-65, 2019.
Article in English | MEDLINE | ID: mdl-30613468

ABSTRACT

The everyday environment brings to our sensory systems competing inputs from different modalities. The ability to filter these multisensory inputs in order to identify and efficiently utilize useful spatial cues is necessary to detect and process the relevant information. In the present study, we investigate how feature-based attention affects the detection of motion across sensory modalities. We sought to determine how subjects use intramodal, cross-modal auditory, and combined audiovisual motion cues to attend to specific visual motion signals. The results showed that in most cases, both the visual and the auditory cues enhance feature-based orienting to a transparent visual motion pattern presented among distractor motion patterns. Whereas previous studies have shown cross-modal effects of spatial attention, our results demonstrate a spread of cross-modal feature-based attention cues that had been matched for the detection threshold of the visual target. These effects were very robust in comparisons of valid vs. invalid cues, as well as in comparisons between cued and uncued valid trials. The effect of intramodal visual, cross-modal auditory, and bimodal cues also increased as a function of motion-cue salience. Our results suggest that orienting to visual motion patterns among distractors can be facilitated not only by intramodal priors, but also by feature-based cross-modal information from the auditory system.

4.
Spinal Cord; 54(1): 16-23, 2016 Jan.
Article in English | MEDLINE | ID: mdl-26123212

ABSTRACT

STUDY DESIGN: Prospective experimental. OBJECTIVES: To compare sensory function as revealed by the light touch and pin prick tests of the International Standards for Neurological Classification of Spinal Cord Injury (ISNCSCI) and the electrical perceptual threshold (EPT) exam in individuals with chronic incomplete cervical spinal cord injury (SCI). SETTING: Pittsburgh, United States. METHODS: EPT was tested using cutaneous electrical stimulation (0.5 ms pulse width, 3 Hz) in 32 healthy controls and in 17 participants with SCI over key points on dermatomes C2 to T4 on each side of the body. Light touch and pin prick ISNCSCI scores were tested at the same key dermatomes in SCI participants. RESULTS: In controls, EPT values were higher in older males (1.26±0.2 mA, mean±s.d.) compared with younger males (1.0±0.2 mA) and older females (0.9±0.2 mA), regardless of the dermatome and side tested. In fifteen of the seventeen SCI participants, the level of sensory impairment detected by the EPT was below the level detected by the ISNCSCI (mean=4.5±2.4 segments, range 1-9). The frequency distribution of EPTs was similar to that of older male controls in dermatomes above, but not below, the ISNCSCI sensory level. The difference between the EPT and ISNCSCI sensory levels was negatively correlated with time post injury. CONCLUSIONS: The results show that, in the chronic stage of cervical SCI, the EPT reveals spared sensory function approximately five spinal segments below the level identified by the ISNCSCI sensory exam. The EPT is therefore a sensitive tool for assessing recovery of sensory function after chronic SCI.
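The level comparison above reduces to counting rostro-caudal segments between two dermatome labels. A hypothetical sketch, with the C2-T4 ordering assumed for illustration (not the ISNCSCI worksheet itself):

```python
# Hypothetical sketch: dermatomes ordered rostro-caudally; the gap between
# the ISNCSCI-detected level and the EPT-detected level is counted in
# spinal segments. The label list is an assumption for illustration.
DERMATOMES = ["C2", "C3", "C4", "C5", "C6", "C7", "C8", "T1", "T2", "T3", "T4"]

def level_gap(isncsci_level, ept_level):
    """Segments by which the EPT level lies below (caudal to) the ISNCSCI level."""
    return DERMATOMES.index(ept_level) - DERMATOMES.index(isncsci_level)

# e.g. ISNCSCI level C4 with EPT-detected sparing down to T1 -> 5 segments
gap = level_gap("C4", "T1")  # -> 5
```

A positive gap corresponds to EPT-detected sparing below the ISNCSCI level, matching the ~5-segment difference reported in the abstract.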


Subject(s)
Sensory Thresholds/physiology, Spinal Cord Injuries/physiopathology, Adult, Aged, Aged, 80 and over, Case-Control Studies, Cervical Cord/pathology, Chronic Disease, Electric Stimulation, Female, Humans, Male, Middle Aged, Neurologic Examination, Psychophysics, Severity of Illness Index, Skin/innervation, Statistics as Topic, Touch/physiology
5.
Exp Brain Res; 221(2): 177-89, 2012 Aug.
Article in English | MEDLINE | ID: mdl-22811215

ABSTRACT

The task of parceling perceived visual motion into self- and object motion components is critical to safe and accurate visually guided navigation. In this paper, we used functional magnetic resonance imaging to identify the cortical areas functionally active in this task and the pattern of connectivity among them, that is, the regions and networks that allow subjects to detect object motion separately from induced self-motion. Subjects were presented with nine textured objects during simulated forward self-motion and were asked to identify the target object, which had an additional, independent motion component toward or away from the observer. Cortical activation was distributed among occipital, intra-parietal and fronto-parietal areas. We performed a network analysis of connectivity data derived from partial correlation and multivariate Granger causality analyses among functionally active areas. This revealed four coarsely separated network clusters: bilateral V1 and V2; visually responsive occipito-temporal areas, including bilateral LO, V3A, KO (V3B) and hMT; bilateral VIP, DIPSM and right precuneus; and a cluster of higher, primarily left hemispheric regions, including the central sulcus, post-, pre- and sub-central sulci, pre-central gyrus, and FEF. We suggest that the visually responsive networks are involved in forming the representation of the visual stimulus, while the higher, left hemisphere cluster mediates the interpretation of the stimulus for action. Our main focus was on the relationships of activations during our task among the visually responsive areas. To determine the properties of the mechanism corresponding to the visual processing networks, we compared subjects' psychophysical performance to a model of object motion detection based solely on relative motion among objects and found that it was inconsistent with observer performance. Our results support the use of scene context (e.g., eccentricity, depth) in the detection of object motion. We suggest that the cortical activation and visually responsive networks provide a potential substrate for this computation.
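One common way to obtain the partial correlations mentioned above is from the inverse covariance (precision) matrix of the regional time series; this is a rough, generic sketch under that assumption, not the authors' specific estimator:

```python
# Rough sketch: partial correlations among regions from the precision
# matrix, a standard estimate of conditional (direct) coupling.
# Assumes `ts` is a (n_timepoints, n_regions) array of region time series.
import numpy as np

def partial_correlations(ts):
    """Partial correlation matrix from the inverse sample covariance."""
    prec = np.linalg.inv(np.cov(ts, rowvar=False))      # precision matrix
    d = np.sqrt(np.outer(np.diag(prec), np.diag(prec)))  # normalizer
    pcorr = -prec / d                                    # off-diagonal entries
    np.fill_diagonal(pcorr, 1.0)
    return pcorr
```

The Granger causality side of the analysis would additionally fit lagged autoregressive models to capture directed influence, which this symmetric measure does not.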


Subject(s)
Brain Mapping, Cerebral Cortex/physiology, Feedback, Physiological/physiology, Motion Perception/physiology, Nerve Net/physiology, Signal Detection, Psychological/physiology, Adult, Cerebral Cortex/blood supply, Female, Functional Laterality, Humans, Image Processing, Computer-Assisted, Magnetic Resonance Imaging, Male, Movement/physiology, Nerve Net/blood supply, Oxygen/blood, Photic Stimulation, Psychomotor Performance, Psychophysics, Young Adult
6.
Proc Biol Sci; 278(1719): 2840-7, 2011 Sep 22.
Article in English | MEDLINE | ID: mdl-21307050

ABSTRACT

In humans, as in most animal species, perception of object motion is critical to successful interaction with the surrounding environment. Yet, as the observer also moves, the retinal projections of the various motion components add to each other, and extracting accurate object motion becomes computationally challenging. Recent psychophysical studies have demonstrated that observers use a flow-parsing mechanism to estimate and subtract self-motion from the optic flow field. We investigated whether concurrent acoustic cues for motion can facilitate visual flow parsing, thereby enhancing the detection of moving objects during simulated self-motion. Participants identified an object (the target) that moved either forward or backward within a visual scene containing nine identical textured objects simulating forward observer translation. We found that spatially co-localized, directionally congruent, moving auditory stimuli enhanced object motion detection. Interestingly, subjects who performed poorly on the visual-only task benefited more from the addition of moving auditory stimuli. When auditory stimuli were not co-localized to the visual target, improvements in detection rates were weak. Taken together, these results suggest that parsing object motion from self-motion-induced optic flow can operate on multisensory object representations.


Subject(s)
Acoustic Stimulation, Motion Perception/physiology, Movement/physiology, Adult, Humans, Male, Photic Stimulation, Retina/physiology, Visual Perception, Young Adult