ABSTRACT
Neuropsychological studies have demonstrated that meningioma patients frequently exhibit cognitive deficits before surgery and show only limited improvement after surgery. Combining neuropsychological and functional imaging measurements can shed more light on the impact of surgery on cognitive brain function. We aimed to evaluate whether surgery affects cognitive brain activity in such a way that it may mask possible changes in cognitive functioning measured by neuropsychological tests. Twenty-three meningioma patients underwent an fMRI measurement using a verbal working memory task as well as three neuropsychological tests focused on working memory, just before and 3 months after surgery. A region-of-interest-based fMRI analysis was used to examine cognitive brain activity at these timepoints within the central executive network and the default mode network. Neuropsychological assessment showed impaired cognitive functioning both before and 3 months after surgery. Neuropsychological test scores, in-scanner task performance, and brain activity within the central executive and default mode networks did not differ significantly between the two timepoints. Our results indicate that surgery does not significantly affect cognitive brain activity in meningioma patients in the first few months after surgery. The lack of cognitive improvement after surgery is therefore unlikely to result from compensatory processes in the brain. Cognitive deficits that are already present before surgery appear to persist after surgery and a considerable recovery period. Our study provides preliminary evidence that comprehensive cognitive evaluation can be of added value, so that cognitive functioning may become a more prominent factor in clinical decision making.
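As an illustration of the region-of-interest comparison described above, the sketch below shows how pre- and post-surgery contrast values could be extracted from CEN and DMN masks and compared with a paired t-test. This is a minimal sketch in Python using nilearn and SciPy; the file names, mask images, and the choice of a paired t-test are assumptions for illustration, not the study's actual pipeline.

    # Hypothetical ROI-based pre/post comparison of working-memory contrast maps.
    import numpy as np
    from nilearn.maskers import NiftiMasker
    from scipy.stats import ttest_rel

    subjects = [f"sub-{i:02d}" for i in range(1, 24)]              # 23 patients, hypothetical IDs
    rois = {"CEN": "cen_mask.nii.gz", "DMN": "dmn_mask.nii.gz"}    # assumed ROI mask files

    for name, mask in rois.items():
        masker = NiftiMasker(mask_img=mask).fit()
        # Mean contrast value (working memory > baseline) per subject within the ROI
        pre = [masker.transform(f"{s}_pre_zmap.nii.gz").mean() for s in subjects]
        post = [masker.transform(f"{s}_post_zmap.nii.gz").mean() for s in subjects]
        t, p = ttest_rel(post, pre)   # paired: 3 months post-surgery vs. pre-surgery
        print(f"{name}: pre={np.mean(pre):.3f}, post={np.mean(post):.3f}, t={t:.2f}, p={p:.3f}")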
Subject(s)
Magnetic Resonance Imaging , Meningeal Neoplasms , Meningioma , Neuropsychological Tests , Humans , Meningioma/surgery , Meningioma/physiopathology , Female , Male , Middle Aged , Meningeal Neoplasms/surgery , Meningeal Neoplasms/physiopathology , Aged , Adult , Cognition/physiology , Memory, Short-Term/physiology , Brain/physiopathology , Brain/diagnostic imaging
ABSTRACT
BACKGROUND: The main goal of this functional MRI (fMRI) study was to examine whether cognitive deficits in glioma patients prior to treatment are associated with abnormal brain activity in either the central executive network (CEN) or the default mode network (DMN).
METHODS: Forty-six glioma patients and 23 group-matched healthy controls (HCs) participated in this fMRI experiment, performing an N-back task. Additionally, cognitive profiles of the patients were evaluated outside the scanner. A region-of-interest-based analysis was used to compare brain activity in the CEN and DMN between groups. Post hoc analyses were performed to evaluate differences between low-grade glioma (LGG) and high-grade glioma (HGG) patients.
RESULTS: In-scanner performance was lower in glioma patients than in HCs. Neuropsychological testing indicated cognitive impairment in both LGG and HGG patients. fMRI results revealed normal CEN activation in glioma patients, whereas patients showed reduced DMN deactivation compared to HCs. Brain activity levels did not differ between LGG and HGG patients.
CONCLUSIONS: Our study suggests that cognitive deficits in glioma patients prior to treatment are associated with reduced responsiveness of the DMN, but not with abnormal CEN activation. These results suggest that cognitive deficits in glioma patients reflect a reduced capacity to achieve a brain state necessary for normal cognitive performance, rather than abnormal functioning of executive brain regions. Focusing solely on increases in brain activity may well be insufficient for understanding the brain mechanisms underlying cognitive impairment in patients, as our results indicate the importance of assessing deactivation as well.
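The between-group comparison of DMN deactivation could, for example, be run as below once a mean contrast value per subject has been extracted from a DMN mask. This is a hedged sketch using SciPy; the input files and the use of Welch's t-test are assumptions for illustration, not the reported analysis.

    # Hypothetical group comparison of DMN responsiveness (e.g., 2-back > 0-back contrast).
    import numpy as np
    from scipy.stats import ttest_ind

    # Assumed per-subject mean contrast values, extracted from a DMN mask beforehand.
    dmn_patients = np.loadtxt("dmn_glioma.txt")     # 46 values, hypothetical file
    dmn_controls = np.loadtxt("dmn_controls.txt")   # 23 values, hypothetical file

    t, p = ttest_ind(dmn_patients, dmn_controls, equal_var=False)  # Welch's t-test
    print(f"DMN deactivation, patients vs. controls: t={t:.2f}, p={p:.3f}")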
ABSTRACT
Previous research shows that people can use a cue to mentally prepare for a cognitive challenge. The response to a cue has been defined as phasic alertness, which is reflected in faster responses and increased activity in frontal, parietal, thalamic, and visual brain regions. We examined if and how phasic alertness can be tuned to the expected difficulty of an upcoming challenge; if people in general are able to tune their level of alertness, an inability to do so may be linked to disease. Twenty-two healthy volunteers performed a cued visual perception task with two levels of task difficulty, and performance and brain activity were compared between these two levels. Performance was lower for difficult stimuli than for easy stimuli. For both cue types, participants showed activation in a network associated with central executive function and deactivation in regions of the default mode network (DMN) and visual cortex. Deactivation was significantly stronger for cues signaling difficult stimuli than for cues signaling easy stimuli. This effect was most prominent in the medial prefrontal gyrus and in visual and temporal cortices. Activation did not differ between the cues. Our study shows that phasic alertness is represented by activated as well as deactivated brain regions; however, only deactivated brain regions tuned their level of activity to the expected task difficulty. These results suggest that people, in general, are able to tune their level of alertness to an upcoming task. Cognition may be facilitated by a brain state coupled to expectations about an upcoming cognitive challenge. Unique identifier = 842003004.
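The cue-related contrast between difficult and easy cues could be estimated with a standard first-level GLM, as sketched below with nilearn. The event timings, TR, HRF model, and file names are hypothetical; the sketch only illustrates the kind of contrast (difficult cue minus easy cue) whose negative values would capture the stronger deactivation reported above.

    # Hypothetical first-level contrast of difficult-cue vs. easy-cue activity.
    import pandas as pd
    from nilearn.glm.first_level import FirstLevelModel

    events = pd.DataFrame({
        "onset":      [10, 40, 70, 100],           # seconds; hypothetical timings
        "duration":   [2, 2, 2, 2],
        "trial_type": ["cue_easy", "cue_hard", "cue_easy", "cue_hard"],
    })

    model = FirstLevelModel(t_r=2.0, hrf_model="spm")            # assumed TR and HRF
    model = model.fit("sub-01_task-cued_bold.nii.gz", events=events)

    # Positive values: more activity after difficult cues; negative: stronger deactivation.
    zmap = model.compute_contrast("cue_hard - cue_easy", output_type="z_score")
    zmap.to_filename("cue_hard_gt_easy_zmap.nii.gz")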
ABSTRACT
A required response forces the brain to react overtly to a stimulus. This may be a factor that influences cognitive activity during a task, as it could, for instance, facilitate alertness, especially in tasks that are relatively easy. In the current article, we therefore tested the hypothesis that response frequency affects cognitive brain activity in an alertness task. In this 3T functional MRI study, healthy volunteers performed a continuous performance task with three conditions of increasing target (and thus response) frequency. Only scans during the presentation of non-targets were analyzed, to exclude activity related to the differences in response selection and motor responses between conditions. To evaluate changes in cognitive brain activity, a network analysis was performed based on two main networks comprising regions with task-induced activation (TIA) and task-induced deactivation (TID), and we tested for differences in brain activity as an effect of target frequency. Performance results indicated no effect of target frequency on accuracy or reaction time. During non-targets, we found significant signal changes in the TID network for all three conditions, whereas the TIA network showed no significant signal changes in any condition. Target frequency did not have a significant effect on the level of signal change at the network level or at the level of individual regions. Our study showed predominantly deactivation during non-responses in all three task conditions. Furthermore, our results indicate that response frequency does not influence brain activity during an alertness task. These results provide additional information relevant to understanding the neurophysiological implementation of cognitive control or alertness.
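The network-level analysis described above could be approximated as follows: per-condition one-sample tests of the ROI-averaged signal change against zero, plus a repeated-measures ANOVA for the effect of target frequency. The CSV layout, column names, and use of statsmodels' AnovaRM are assumptions for illustration, not the study's exact procedure.

    # Hypothetical network-level test: does target frequency change TID signal?
    import pandas as pd
    from scipy.stats import ttest_1samp
    from statsmodels.stats.anova import AnovaRM

    df = pd.read_csv("tid_signal_change.csv")   # assumed columns: subject, condition, signal

    # Per condition: is the (de)activation different from zero?
    for cond, grp in df.groupby("condition"):
        t, p = ttest_1samp(grp["signal"], 0.0)
        print(f"{cond}: mean={grp['signal'].mean():.3f}, t={t:.2f}, p={p:.3f}")

    # Repeated-measures ANOVA: effect of target frequency on network signal change.
    res = AnovaRM(df, depvar="signal", subject="subject", within=["condition"]).fit()
    print(res)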
Subject(s)
Attention/physiology , Brain/physiology , Cognition/physiology , Adult , Female , Humans , Magnetic Resonance Imaging , Male
ABSTRACT
Comfortable walking speed (CWS) is indicative of clinically relevant factors in the elderly, such as fall risk and mortality. Standard CWS tests involve walking on a straight, unobstructed surface, whereas real-world surfaces are uneven and cluttered, so walkers rely on visually guided adaptations to avoid trips or slips. Hence, the predictive value of CWS may be expected to increase when assessed for walking in more realistic (visually guided) conditions. We examined CWS in young (n=18) and older (n=18) adults during both overground and treadmill walking. Overground CWS was assessed using the 10-meter walk test with and without visual stepping targets. For treadmill walking, four conditions were examined: (i) uncued walking and (ii-iv) cued walking with visual stepping targets in which the inter-stepping-target distance varied by 0%, 20%, or 40%. Pre-experimental measures were taken so that the average inter-stepping-target distance could be adjusted for each belt speed based on each participant's self-selected gait characteristics. Results showed that CWS was significantly slower when stepping targets were present in both overground (p<.001) and treadmill walking (p<.001). Thus, attuning steps to visual targets significantly affected CWS, even when the patterning of these targets matched the participant's own gait pattern (viz. the 0% treadmill-walking condition). Results from the treadmill-walking task showed that the amount of variation in inter-stepping-target distance did not differentially affect CWS. Our results suggest that it may be worthwhile in clinical assessments to determine walking speed not only under standard conditions but also in situations that require visually guided stepping.
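For concreteness, the sketch below shows how a comfortable walking speed could be derived from a timed walk and how inter-stepping-target distances might be jittered around a participant's own step length. The function names, the uniform jitter, and the default values are hypothetical and only illustrate the manipulation, not the study's exact implementation.

    # Hypothetical helpers: comfortable walking speed and stepping-target spacing.
    import numpy as np

    def comfortable_walking_speed(distance_m=10.0, time_s=8.0):
        """Speed (m/s) from a timed walk over a fixed distance, e.g. the 10-meter walk test."""
        return distance_m / time_s

    def stepping_target_distances(mean_step_length_m, variation=0.2, n_targets=50, seed=0):
        """Inter-target distances whose mean matches the participant's own step length,
        jittered uniformly within +/- `variation` (0.0, 0.2, or 0.4 of the step length).
        Uniform jitter is an assumption; the study's randomization scheme is not given here."""
        rng = np.random.default_rng(seed)
        jitter = rng.uniform(-variation, variation, n_targets)
        return mean_step_length_m * (1.0 + jitter)

    print(comfortable_walking_speed(10.0, 7.5))                 # ~1.33 m/s
    print(stepping_target_distances(0.70, variation=0.4)[:5])   # first five spacings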