ABSTRACT
STUDY DESIGN: Prospective experimental. OBJECTIVES: To compare sensory function as revealed by the light touch and pin prick tests of the International Standards for Neurological Classification of Spinal Cord Injury (ISNCSCI) and by the electrical perceptual threshold (EPT) exam in individuals with chronic incomplete cervical spinal cord injury (SCI). SETTING: Pittsburgh, United States. METHODS: EPT was tested using cutaneous electrical stimulation (0.5 ms pulse width, 3 Hz) in 32 healthy controls and in 17 participants with SCI over key points on dermatomes C2 to T4 on each side of the body. Light touch and pin prick ISNCSCI scores were obtained at the same key dermatomes in SCI participants. RESULTS: In controls, EPT values were higher in older males (1.26±0.2 mA, mean±s.d.) than in younger males (1.0±0.2 mA) and older females (0.9±0.2 mA), regardless of the dermatome and side tested. In fifteen of the seventeen SCI participants, the level of sensory impairment detected by the EPT was below the level detected by the ISNCSCI (mean difference=4.5±2.4 segments, range 1-9). The frequency distribution of EPTs was similar to that of older male controls in dermatomes above, but not below, the ISNCSCI sensory level. The difference between the EPT and ISNCSCI sensory levels was negatively correlated with time post injury. CONCLUSIONS: The results show that, in the chronic stage of cervical SCI, the EPT reveals spared sensory function approximately five spinal segments below the level detected by the ISNCSCI sensory exam. The EPT is therefore a sensitive tool for assessing recovery of sensory function after chronic SCI.
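The abstract specifies only the stimulus parameters (0.5 ms pulses at 3 Hz), not the psychophysical procedure used to locate the threshold. A minimal sketch of an ascending method-of-limits EPT estimate is shown below; the `deliver_pulse_train` and `subject_reports_sensation` helpers, the starting current and the step size are hypothetical placeholders, not the study's actual protocol.

```python
# Hedged sketch: ascending-limits estimate of the electrical perceptual
# threshold (EPT). The stimulus parameters (0.5 ms pulse width, 3 Hz) come
# from the abstract; the step size, starting current and the
# deliver_pulse_train / subject_reports_sensation callables are hypothetical
# stand-ins for the actual stimulator and response interface.

def estimate_ept(deliver_pulse_train, subject_reports_sensation,
                 start_ma=0.1, step_ma=0.05, max_ma=10.0):
    """Increase current until the subject first reports a sensation."""
    current = start_ma
    while current <= max_ma:
        deliver_pulse_train(current_ma=current, pulse_width_ms=0.5, rate_hz=3)
        if subject_reports_sensation():
            return current          # first perceived intensity = EPT (mA)
        current += step_ma
    return None                     # no sensation up to max_ma -> absent
```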
Subjects
Sensory Thresholds/physiology, Spinal Cord Injuries/physiopathology, Adult, Aged, Aged, 80 and over, Case-Control Studies, Cervical Cord/pathology, Chronic Disease, Electric Stimulation, Female, Humans, Male, Middle Aged, Neurologic Examination, Psychophysics, Severity of Illness Index, Skin/innervation, Statistics as Topic, Touch/physiology
ABSTRACT
The task of parceling perceived visual motion into self-motion and object-motion components is critical to safe and accurate visually guided navigation. In this paper, we used functional magnetic resonance imaging to determine the cortical areas functionally active in this task and the pattern of connectivity among them, in order to identify the cortical regions of interest and networks that allow subjects to detect object motion separately from induced self-motion. Subjects were presented with nine textured objects during simulated forward self-motion and were asked to identify the target object, which had an additional, independent motion component toward or away from the observer. Cortical activation was distributed among occipital, intra-parietal and fronto-parietal areas. We performed a network analysis of connectivity data derived from partial correlation and multivariate Granger causality analyses among the functionally active areas. This revealed four coarsely separated network clusters: bilateral V1 and V2; visually responsive occipito-temporal areas, including bilateral LO, V3A, KO (V3B) and hMT; bilateral VIP, DIPSM and right precuneus; and a cluster of higher, primarily left-hemispheric regions, including the central sulcus, post-, pre- and sub-central sulci, pre-central gyrus, and FEF. We suggest that the visually responsive networks are involved in forming the representation of the visual stimulus, while the higher, left-hemisphere cluster mediates the interpretation of the stimulus for action. Our main focus was on the relationships among activations in the visually responsive areas during our task. To determine the properties of the mechanism corresponding to the visual processing networks, we compared subjects' psychophysical performance to a model of object motion detection based solely on relative motion among objects and found that it was inconsistent with observer performance. Our results support the use of scene context (e.g., eccentricity, depth) in the detection of object motion. We suggest that the cortical activation and visually responsive networks provide a potential substrate for this computation.
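One ingredient of the network analysis described above, partial correlation among ROI time series, can be obtained from the inverse covariance (precision) matrix. The sketch below assumes a hypothetical `roi_timeseries` array (time x ROI) of fMRI signals and omits any regularization or thresholding the study may have applied.

```python
# Hedged sketch: ROI-by-ROI partial correlations from the precision matrix.
# roi_timeseries is a hypothetical (time x ROI) array; the study's actual
# preprocessing and edge-selection steps are not specified in the abstract.

import numpy as np

def partial_correlation(roi_timeseries):
    """Return the ROI x ROI partial-correlation matrix."""
    cov = np.cov(roi_timeseries, rowvar=False)   # ROI x ROI covariance
    prec = np.linalg.pinv(cov)                   # precision (inverse covariance)
    d = np.sqrt(np.diag(prec))
    pcorr = -prec / np.outer(d, d)               # normalize off-diagonal terms
    np.fill_diagonal(pcorr, 1.0)
    return pcorr
```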
Subjects
Brain Mapping, Cerebral Cortex/physiology, Feedback, Physiological/physiology, Motion Perception/physiology, Nerve Net/physiology, Signal Detection, Psychological/physiology, Adult, Cerebral Cortex/blood supply, Female, Functional Laterality, Humans, Image Processing, Computer-Assisted, Magnetic Resonance Imaging, Male, Movement/physiology, Nerve Net/blood supply, Oxygen/blood, Photic Stimulation, Psychomotor Performance, Psychophysics, Young Adult
ABSTRACT
In humans, as in most animal species, perception of object motion is critical to successful interaction with the surrounding environment. Yet, because the observer also moves, the retinal projections of the various motion components add to each other, and extracting accurate object motion becomes computationally challenging. Recent psychophysical studies have demonstrated that observers use a flow-parsing mechanism to estimate and subtract self-motion from the optic flow field. We investigated whether concurrent acoustic cues for motion can facilitate visual flow parsing, thereby enhancing the detection of moving objects during simulated self-motion. Participants identified an object (the target) that moved either forward or backward within a visual scene containing nine identical textured objects simulating forward observer translation. We found that spatially co-localized, directionally congruent, moving auditory stimuli enhanced object motion detection. Interestingly, subjects who performed poorly on the visual-only task benefited more from the addition of moving auditory stimuli. When auditory stimuli were not co-localized to the visual target, improvements in detection rates were weak. Taken together, these results suggest that parsing object motion from self-motion-induced optic flow can operate on multisensory object representations.
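The observation that weaker visual-only performers gained more from congruent sound could be summarized, for example, by correlating baseline accuracy with the audiovisual benefit. The sketch below assumes hypothetical per-subject accuracy arrays; the abstract does not state which statistics the study actually used.

```python
# Hedged sketch: relate visual-only detection accuracy to the benefit of a
# co-localized auditory cue. Inputs are hypothetical per-subject accuracies.

import numpy as np
from scipy.stats import pearsonr

def audiovisual_benefit(acc_visual_only, acc_audiovisual):
    """Correlate baseline accuracy with the audiovisual improvement."""
    acc_visual_only = np.asarray(acc_visual_only, dtype=float)
    benefit = np.asarray(acc_audiovisual, dtype=float) - acc_visual_only
    r, p = pearsonr(acc_visual_only, benefit)
    return benefit, r, p            # negative r: weaker observers gain more
```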
Subjects
Acoustic Stimulation, Motion Perception/physiology, Movement/physiology, Adult, Humans, Male, Photic Stimulation, Retina/physiology, Visual Perception, Young Adult
ABSTRACT
The Adolescent Brain Cognitive Development (ABCD) Study® is a 10-year longitudinal study of children recruited at ages 9 and 10. A battery of neuroimaging tasks is administered biennially to track neurodevelopment and identify individual differences in brain function. This study reports activation patterns from functional MRI (fMRI) tasks completed at baseline, which were designed to measure cognitive impulse control with a stop signal task (SST; N = 5,547), reward anticipation and receipt with a monetary incentive delay (MID) task (N = 6,657), and working memory and emotion reactivity with an emotional N-back (EN-back) task (N = 6,009). Further, we report the spatial reproducibility of activation patterns by assessing between-group vertex/voxelwise correlations of blood oxygen level-dependent (BOLD) activation. Analyses reveal robust brain activations that are consistent with the published literature, vary across fMRI tasks/contrasts, and correlate modestly with individual behavioral performance on the tasks. These results establish a baseline of preadolescent brain function, guide the interpretation of cross-sectional analyses, and will enable the investigation of longitudinal changes during adolescent development.
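A between-group vertex/voxelwise reproducibility check of this kind can be sketched as the spatial correlation of group-average activation maps from two non-overlapping subsamples. The sketch below assumes a hypothetical (subjects x vertices) array of contrast values and ignores the specifics of the ABCD processing pipeline.

```python
# Hedged sketch: spatial reproducibility as the vertexwise correlation of
# group-average activation maps from two random half-samples. contrast_maps
# is a hypothetical (subjects x vertices) array for one task contrast.

import numpy as np

def split_half_map_correlation(contrast_maps, rng=None):
    """Correlate mean activation maps from two non-overlapping half-samples."""
    rng = np.random.default_rng(rng)
    n = contrast_maps.shape[0]
    order = rng.permutation(n)
    half_a = contrast_maps[order[: n // 2]].mean(axis=0)
    half_b = contrast_maps[order[n // 2:]].mean(axis=0)
    return np.corrcoef(half_a, half_b)[0, 1]     # vertexwise Pearson r
```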
Subjects
Brain/physiology, Adolescent, Adolescent Development/physiology, Child, Female, Humans, Magnetic Resonance Imaging, Male, Reference Values
ABSTRACT
Relatively little is known about how the human brain identifies the movement of objects while the observer is also moving through the environment. Ecologically, this is one of the most fundamental motion processing problems, critical for survival. To study this problem, we used a task involving nine textured spheres moving in depth, eight simulating the observer's forward motion while the ninth, the target, moved independently at a different speed towards or away from the observer. Capitalizing on the high temporal resolution of magnetoencephalography (MEG), we trained a Support Vector Classifier (SVC) on the sensor-level data to identify correct and incorrect responses. Using the same MEG data, we addressed the dynamics of the cortical processes involved in the detection of the independently moving object and investigated whether we could obtain confirmatory evidence for the brain activity patterns used by the classifier. Our findings indicate that response correctness could be reliably predicted by the SVC, with the highest accuracy during the blank period after motion and preceding the response. The spatial distribution of the areas critical for the correct prediction was similar, but not limited, to the areas underlying the evoked activity. Importantly, the SVC identified frontal areas, not otherwise detected with evoked activity, that appear to be important for successful performance of the task. Dynamic connectivity further supported the involvement of frontal and occipito-temporal areas during the task periods. This is the first study to dynamically map cortical areas using a fully data-driven approach in order to investigate the neural mechanisms involved in the detection of moving objects during the observer's self-motion.
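The sensor-level decoding described above can be illustrated with a time-resolved classification sketch. It assumes a hypothetical (trials x sensors x timepoints) MEG array and a binary correctness label per trial; the study's actual feature selection, cross-validation scheme and kernel are not specified in the abstract.

```python
# Hedged sketch: time-resolved decoding of response correctness from
# sensor-level MEG data with a linear Support Vector Classifier.
# meg_data: hypothetical (trials x sensors x timepoints) array.
# correct:  hypothetical binary label (1 = correct response) per trial.

import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

def decode_correctness_over_time(meg_data, correct, cv=5):
    """Return cross-validated decoding accuracy at each time point."""
    n_trials, n_sensors, n_times = meg_data.shape
    clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
    scores = np.empty(n_times)
    for t in range(n_times):
        X = meg_data[:, :, t]                    # trials x sensors at time t
        scores[t] = cross_val_score(clf, X, correct, cv=cv).mean()
    return scores
```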
Subjects
Cerebral Cortex/physiology, Connectome, Motion Perception/physiology, Optic Flow/physiology, Space Perception/physiology, Support Vector Machine, Adult, Connectome/methods, Female, Humans, Magnetoencephalography, Male, Young Adult
ABSTRACT
The everyday environment presents our sensory systems with competing inputs from different modalities. The ability to filter these multisensory inputs, in order to identify and efficiently utilize useful spatial cues, is necessary to detect and process the relevant information. In the present study, we investigate how feature-based attention affects the detection of motion across sensory modalities. We sought to determine how subjects use intramodal, cross-modal auditory, and combined audiovisual motion cues to attend to specific visual motion signals. The results showed that, in most cases, both the visual and the auditory cues enhance feature-based orienting to a transparent visual motion pattern presented among distractor motion patterns. Whereas previous studies have shown cross-modal effects of spatial attention, our results demonstrate a spread of cross-modal feature-based attention cues, which were matched for the detection threshold of the visual target. These effects were robust in comparisons of valid vs. invalid cues, as well as in comparisons between cued and uncued valid trials. The effect of intramodal visual, cross-modal auditory, and bimodal cues also increased as a function of motion-cue salience. Our results suggest that orienting to visual motion patterns among distractors can be facilitated not only by intramodal priors, but also by feature-based cross-modal information from the auditory system.
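The valid-vs-invalid and cued-vs-uncued comparisons described above could be summarized, for instance, as paired accuracy differences. The sketch below uses hypothetical per-subject accuracy arrays; the abstract does not specify the statistical tests actually used.

```python
# Hedged sketch: summarize feature-based cueing effects as paired accuracy
# differences. Inputs are hypothetical per-subject accuracies per condition.

import numpy as np
from scipy.stats import ttest_rel

def cueing_effects(acc_valid, acc_invalid, acc_uncued):
    """Paired comparisons of valid vs. invalid and valid vs. uncued trials."""
    validity = np.asarray(acc_valid) - np.asarray(acc_invalid)
    cueing = np.asarray(acc_valid) - np.asarray(acc_uncued)
    return {
        "validity_effect": (validity.mean(), ttest_rel(acc_valid, acc_invalid)),
        "cueing_effect": (cueing.mean(), ttest_rel(acc_valid, acc_uncued)),
    }
```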