ABSTRACT
For the brain to compute object motion in the world during self-motion, it must discount the global patterns of image motion (optic flow) caused by self-motion. Optic flow parsing is a proposed visual mechanism for computing object motion in the world, and studies in both humans and monkeys have demonstrated perceptual biases consistent with the operation of a flow-parsing mechanism. However, the neural basis of flow parsing remains unknown. We demonstrate, at both the individual unit and population levels, that neural activity in macaque middle temporal (MT) area is biased by peripheral optic flow in a manner that can at least partially account for perceptual biases induced by flow parsing. These effects cannot be explained by conventional surround suppression mechanisms or choice-related activity and have substantial neural latency. Together, our findings establish the first neural basis for the computation of scene-relative object motion based on flow parsing.
Subjects
Motion Perception, Optic Flow, Visual Cortex, Animals, Visual Cortex/physiology, Motion Perception/physiology, Optic Flow/physiology, Macaca mulatta/physiology, Photic Stimulation, Macaca/physiology
ABSTRACT
Efficient sensory detection requires the capacity to ignore task-irrelevant information, for example when optic flow patterns created by egomotion need to be disentangled from object perception. To investigate how this is achieved in the visual system, predictive coding with sensorimotor mismatch detection is an attractive starting point. Indeed, experimental evidence for sensorimotor mismatch signals in early visual areas exists, but it is not understood how they are integrated into cortical networks that perform input segmentation and categorization. Our model advances a biologically plausible solution by extending predictive coding models with the ability to distinguish self-generated from externally caused optic flow. We first show that a simple three-neuron circuit produces experience-dependent sensorimotor mismatch responses, in agreement with calcium imaging data from mice. This microcircuit is then integrated into a neural network with two generative streams. The motor-to-visual stream consists of parallel microcircuits between motor and visual areas and learns to spatially predict optic flow resulting from self-motion. The second stream bidirectionally connects a motion-selective higher visual area (mHVA) to V1, assigning a crucial role to the abundant feedback connections to V1: the maintenance of a generative model of externally caused optic flow. In the model, area mHVA learns to segment moving objects from the background and facilitates object categorization. Based on shared neurocomputational principles across species, the model also maps onto primate visual cortex. Our work extends Hebbian predictive coding to sensorimotor settings in which the agent actively moves and learns to predict the consequences of its own movements.
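As a purely illustrative sketch (not the authors' network), the core mismatch computation described above can be written as a rectified difference between the actual visual flow and a motor-derived prediction; the function name, gain, and numbers below are hypothetical placeholders.

```python
import numpy as np

def mismatch_response(visual_flow, motor_prediction, gain=1.0):
    """Illustrative rectified prediction-error signal.

    Hypothetical stand-in for a sensorimotor mismatch unit: the response is
    the positive part of the difference between the actual visual flow and
    the flow predicted from the motor (efference-copy) signal.
    """
    return np.maximum(0.0, gain * (visual_flow - motor_prediction))

# Closed-loop case: self-generated flow is well predicted -> little mismatch.
flow_self = np.array([1.0, 1.0, 1.0])
pred_self = np.array([0.9, 1.0, 1.1])
print(mismatch_response(flow_self, pred_self))

# External object motion adds unpredicted flow -> a mismatch response appears.
flow_with_object = flow_self + np.array([0.0, 0.8, 0.0])
print(mismatch_response(flow_with_object, pred_self))
```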
ABSTRACT
The visual system relies on both motion and form signals to perceive the direction of self-motion, yet how these two cues are coordinated in this process remains elusive. In the current study, we used heading perception as a model to examine the interaction between form and motion signals. We recorded the responses of neurons in the ventral intraparietal area (VIP), an area with strong heading selectivity, to motion-only, form-only, and combined stimuli simulating self-motion. Intriguingly, VIP neurons responded to form-only cues defined by Glass patterns, although they exhibited no tuning selectivity. In the combined condition, introducing a small offset between form and motion cues significantly enhanced neuronal sensitivity to motion cues; with a larger offset, the enhancement was comparatively smaller. Moreover, the influence of form cues on neuronal responses to motion cues was more pronounced in the later stage (1-2 s) of stimulation, with a relatively smaller effect in the early stage (0-1 s), suggesting a dynamic interaction between motion and form cues over time for heading perception. In summary, our study shows that, in area VIP, form information contributes to constructing accurate self-motion perception, adding valuable insight into how the brain integrates motion and form cues to perceive one's own movement.
ABSTRACT
Accuracy-optimized convolutional neural networks (CNNs) have emerged as highly effective models for predicting neural responses in brain areas along the primate ventral stream, but it is largely unknown whether they effectively model neurons in the complementary primate dorsal stream. We explored how well CNNs model the optic flow tuning properties of neurons in dorsal area MSTd, and we compared our results with the Non-Negative Matrix Factorization (NNMF) model, which successfully captures many tuning properties of MSTd neurons. To better understand which computational properties of the NNMF model give rise to MSTd-like optic flow tuning, we created additional CNN variants that implement key NNMF constraints: non-negative weights and sparse coding of optic flow. While the CNNs and the NNMF model both accurately estimate the observer's self-motion from purely translational or rotational optic flow, NNMF and the CNNs with non-negative weights yield substantially less accurate estimates than the other CNNs when tested on more complex optic flow that combines observer translation and rotation. Despite its poor accuracy, NNMF gives rise to tuning properties that align more closely with those observed in primate MSTd than any of the accuracy-optimized CNNs. This work offers a step toward a deeper understanding of the computational properties and constraints that describe the optic flow tuning of primate area MSTd.
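To make the contrasted constraints concrete, here is a minimal sketch of a non-negative, sparsity-regularized factorization of synthetic optic flow features using scikit-learn's NMF; the data matrix, dimensions, and regularization values are placeholder assumptions, not the study's actual model or parameters (the alpha_W/alpha_H arguments require scikit-learn ≥ 1.0).

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)

# Hypothetical data matrix: each row is one optic flow field, flattened into a
# non-negative feature vector (e.g., motion energy in direction/speed channels).
n_samples, n_features = 200, 400
X = np.abs(rng.normal(size=(n_samples, n_features)))

# Non-negative factorization with an L1 penalty to encourage sparse codes,
# mirroring the two NNMF constraints named above (non-negativity, sparsity).
model = NMF(n_components=16, init="nndsvda", l1_ratio=1.0,
            alpha_W=0.1, alpha_H=0.1, max_iter=500, random_state=0)
W = model.fit_transform(X)   # per-stimulus activations (candidate "MSTd-like" units)
H = model.components_        # non-negative basis flow patterns
print(W.shape, H.shape)
```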
ABSTRACT
We showed the same observers both dynamic and static 2D patterns that can each evoke distinctive perceptions of motion or optic flow, as if moving in a tunnel or into a dark hole. Pupil diameters were monitored throughout with an infrared eye tracker. We found a converging set of results indicating stronger pupil dilations to expansive growth of shapes or optic flows evoking forward motion into a dark tunnel. Multiple regression analyses showed that pupil responses to the illusory expanding black holes of static patterns were predicted by an individual's pupil response to optic flows showing spiraling motion or "free fall" into a black hole. Likewise, individuals' pupil responses to spiraling motion into dark tunnels predicted their sense of illusory expansion with the static, illusory expanding, dark holes. This correspondence across individuals between pupil responses to dynamic and static illusory expanding holes suggests that these percepts reflect a common perceptual mechanism that derives motion from 2D scenes, and that observers' pupil adjustments reflect the direction and strength of the motion they perceive and the expected outcome of an increase in darkness.
Subjects
Motion Perception, Optic Flow, Optical Illusions, Pupil, Humans, Pupil/physiology, Motion Perception/physiology, Adult, Optical Illusions/physiology, Young Adult, Optic Flow/physiology, Male, Female, Illusions/physiology
ABSTRACT
Self-motion perception is a vital skill for all species. It is an inherently multisensory process that combines inertial (body-based) and relative (with respect to the environment) motion cues. Although extensively studied in human and non-human primates, there is currently no paradigm to test self-motion perception in rodents using both inertial and relative self-motion cues. We developed a novel rodent motion simulator using two synchronized robotic arms to generate inertial, relative, or combined (inertial and relative) cues of self-motion. Eight rats were trained to perform a heading discrimination task similar to the popular primate paradigm. Strikingly, the rats relied heavily on airflow for relative self-motion perception, with little contribution from the (limited) optic flow cues provided; performance in the dark was almost as good. Relative self-motion (airflow) was perceived with greater reliability than inertial self-motion. Disrupting airflow, using a fan or windshield, impaired relative, but not inertial, self-motion perception. However, whiskers were not needed for this function. Lastly, the rats integrated relative and inertial self-motion cues in a reliability-based (Bayesian-like) manner. These results implicate airflow as an important cue for self-motion perception in rats and provide a new domain in which to investigate the neural bases of self-motion perception and multisensory processing in awake behaving rodents.
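The "reliability-based (Bayesian-like)" integration referred to above is commonly modeled as inverse-variance (maximum-likelihood) cue weighting; the sketch below shows that standard rule with made-up heading values, not the study's fitted parameters.

```python
import numpy as np

def combine_cues(mu_a, sigma_a, mu_b, sigma_b):
    """Standard reliability-weighted (maximum-likelihood) cue combination.

    Each cue's weight is its reliability (inverse variance) normalized by the
    summed reliabilities; the combined variance is smaller than either cue's.
    """
    r_a, r_b = 1.0 / sigma_a**2, 1.0 / sigma_b**2
    w_a = r_a / (r_a + r_b)
    mu_c = w_a * mu_a + (1.0 - w_a) * mu_b
    sigma_c = np.sqrt(1.0 / (r_a + r_b))
    return mu_c, sigma_c

# Illustrative numbers only: a more reliable airflow (relative) cue dominates a
# noisier inertial cue, consistent with the weighting described in the abstract.
print(combine_cues(mu_a=10.0, sigma_a=2.0,    # airflow heading estimate (deg)
                   mu_b=16.0, sigma_b=6.0))   # inertial heading estimate (deg)
```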
Subjects
Cues (Psychology), Motion Perception, Animals, Motion Perception/physiology, Rats, Male, Long-Evans Rats, Optic Flow/physiology
ABSTRACT
The influence of travel time on perceived traveled distance has often been studied, but the results are inconsistent regarding the relationship between the two magnitudes. We argue that this is due to differences in the lengths of the investigated travel distances and hypothesize that the influence of travel time differs for short compared with long traveled distances. We tested this hypothesis in a virtual environment presented on a desktop as well as through a head-mounted display (HMD). Our results show that, for longer distances, more travel time leads to longer perceived distance, whereas we find no influence of travel time on shorter distances. Presentation through an HMD vs. desktop influenced distance judgments only in the short-distance condition. These results are in line with the idea that the influence of travel time varies with the length of the traveled distance, and they provide insight into how distance perception in path integration studies is affected by travel time, thereby resolving inconsistencies reported in previous studies.
Subjects
Distance Perception, Humans, Distance Perception/physiology, Female, Male, Young Adult, Adult, Time Factors, Space Perception/physiology, Virtual Reality, Judgment/physiology
ABSTRACT
Significance: Information about the spatial organization of fibers within a nerve is crucial to our understanding of nerve anatomy and its response to neuromodulation therapies. A serial block-face microscopy method [three-dimensional microscopy with ultraviolet surface excitation (3D-MUSE)] has been developed to image nerves over extended depths ex vivo. To routinely visualize and track nerve fibers in these datasets, a dedicated and customizable software tool is required. Aim: Our objective was to develop custom software that includes image processing and visualization methods to perform microscopic tractography along the length of a peripheral nerve sample. Approach: We modified common computer vision algorithms (optic flow and structure tensor) to track groups of peripheral nerve fibers along the length of the nerve. Interactive streamline visualization and manual editing tools are provided. Optionally, deep learning segmentation of fascicles (fiber bundles) can be applied to prevent the tracts from inadvertently crossing into the epineurium. As an example, we performed tractography on vagus and tibial nerve datasets and assessed accuracy by comparing the resulting nerve tracts with segmentations of fascicles as they split and merge with each other in the nerve sample stack. Results: We found that a normalized Dice overlap (Dice_norm) metric had a mean value above 0.75 across several millimeters along the nerve. We also found that the tractograms were robust to changes in certain image properties (e.g., downsampling in-plane and out-of-plane), which resulted in only a 2% to 9% change to the mean Dice_norm values. In a vagus nerve sample, tractography allowed us to readily identify that subsets of fibers from four distinct fascicles merge into a single fascicle as we move ~5 mm along the nerve's length. Conclusions: Overall, we demonstrated the feasibility of performing automated microscopic tractography on 3D-MUSE datasets of peripheral nerves. The software should be applicable to other imaging approaches. The code is available at https://github.com/ckolluru/NerveTracker.
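For reference, a minimal version of the standard Dice overlap used to compare tracked streamlines against fascicle segmentations might look like the following; the masks are toy arrays, and the paper's normalized variant (Dice_norm) may be defined somewhat differently.

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Standard Dice overlap between two binary masks: 2*|A and B| / (|A| + |B|)."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Toy example: voxels visited by tracked streamlines vs. a fascicle segmentation.
tract_mask = np.zeros((64, 64), dtype=bool)
tract_mask[20:40, 20:40] = True
fascicle_mask = np.zeros((64, 64), dtype=bool)
fascicle_mask[25:45, 22:42] = True
print(round(dice_coefficient(tract_mask, fascicle_mask), 3))
```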
Subjects
Nerve Fibers, Software, Three-Dimensional Imaging/methods, Algorithms, Animals, Computer-Assisted Image Processing/methods, Tibial Nerve/diagnostic imaging, Vagus Nerve/diagnostic imaging, Ultraviolet Microscopy/methods, Microscopy/methods
ABSTRACT
Optic flow provides useful information in service of spatial navigation. However, whether brain networks supporting these two functions overlap is still unclear. Here we used Activation Likelihood Estimation (ALE) to assess the correspondence between brain correlates of optic flow processing and spatial navigation and their specific neural activations. Since computational and connectivity evidence suggests that visual input from optic flow provides information mainly during egocentric navigation, we further tested the correspondence between brain correlates of optic flow processing and that of both egocentric and allocentric navigation. Optic flow processing shared activation with egocentric (but not allocentric) navigation in the anterior precuneus, suggesting its role in providing information about self-motion, as derived from the analysis of optic flow, in service of egocentric navigation. We further documented that optic flow perception and navigation are partially segregated into two functional and anatomical networks, i.e., the dorsal and the ventromedial networks. Present results point to a dynamic interplay between the dorsal and ventral visual pathways aimed at coordinating visually guided navigation in the environment.
Subjects
Brain Mapping, Brain, Optic Flow, Spatial Navigation, Humans, Optic Flow/physiology, Brain/physiology, Brain/diagnostic imaging, Spatial Navigation/physiology, Brain Mapping/methods, Neuroimaging/methods, Visual Pathways/physiology, Visual Pathways/diagnostic imaging, Visual Perception/physiology
ABSTRACT
A map showing how neurons that process motion are wired together in the visual system of fruit flies provides new insights into how animals navigate and remain stable when flying.
Subjects
Drosophila, Neurons, Animals, Motion (Physics)
ABSTRACT
Stable visual perception while we are moving depends on complex interactions between multiple brain regions. We report a patient with damage to the right occipital and temporal lobes who presented with a visual disturbance of inward movement of roadside buildings towards the centre of his visual field, which occurred only when he moved forward on his motorbike. We describe this phenomenon as "self-motion induced environmental kinetopsia". Additionally, he was identified to have another illusion, in which objects displayed on a screen appeared to pop out of the background. Here, we describe the clinical phenomena and the behavioural tasks specifically designed to document and measure this altered visual experience. Using lesion mapping and lesion network mapping, we demonstrate disrupted functional connectivity in areas that process flow parsing, such as V3A and V6, which may underpin self-motion induced environmental kinetopsia. Moreover, we suggest that altered connectivity to regions that process environmental frames of reference, such as the retrosplenial cortex (RSC), might explain the pop-out illusion. Our case adds novel and convergent lesion-based evidence for the role of these brain regions in visual processing.
Subjects
Illusions, Motion Perception, Male, Humans, Brain Mapping/methods, Magnetic Resonance Imaging/methods, Photic Stimulation
ABSTRACT
The placenta is a transient organ critical for fetal development, and disruptions of normal placental function can impact health throughout an individual's entire life. Although recognized by the NIH Human Placenta Project as an important organ, the placenta remains understudied, partly because of a lack of non-invasive tools for longitudinal evaluation of key aspects of placental function. Non-invasive imaging that can longitudinally probe murine placental health in vivo is critical to understanding placental development throughout pregnancy. We developed advanced image processing schemes to establish functional biomarkers for non-invasive longitudinal evaluation of placental development. Specifically, we developed a dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) pipeline, combined with advanced image processing methods, to model uterine contraction and placental perfusion dynamics. Our imaging pipeline uses subcutaneous administration of gadolinium for steepest-slope-based perfusion evaluation, which enables non-invasive longitudinal monitoring. Additionally, we advance the placental perfusion chamber paradigm with a novel physiologically based threshold model for chamber localization and demonstrate spatially varying placental chambers using multiple functional metrics that assess mouse placental development and continuing remodeling throughout gestation. Lastly, using optic flow to quantify placental motion arising from uterine contractions, in conjunction with time-frequency analysis, we demonstrate that the placenta exhibits asymmetric contractile motion.
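As a hedged illustration of the steepest-slope idea mentioned above, perfusion is commonly approximated as the maximum slope of the tissue enhancement curve divided by the peak of the input enhancement; the curves, time base, and function names below are synthetic placeholders, not the study's pipeline.

```python
import numpy as np

def steepest_slope_perfusion(tissue_curve, input_curve, dt):
    """Maximum-slope perfusion estimate (illustrative).

    Perfusion ~ max d/dt[tissue enhancement] / max[input enhancement], the
    classic steepest-slope approximation; units depend on how the curves
    are normalized (e.g., per gram of tissue).
    """
    slope = np.gradient(tissue_curve, dt)
    return slope.max() / input_curve.max()

# Synthetic enhancement curves for illustration only.
t = np.arange(0, 120, 2.0)                             # seconds
input_curve = np.exp(-(t - 30) ** 2 / (2 * 8 ** 2))    # pseudo input function
tissue_curve = 0.4 / (1 + np.exp(-(t - 45) / 6))       # sigmoidal tissue uptake
print(steepest_slope_perfusion(tissue_curve, input_curve, dt=2.0))
```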
ABSTRACT
Human balance control relies on various sensory modalities, and conflicting sensory input may result in postural instability. Virtual reality (VR) technology makes it possible to train balance under conflicting sensory information by decoupling vision from the somatosensory and vestibular systems, creating additional demands on sensory reweighting for balance control. However, there is no metric for designing visual input manipulations that induce persistent sensory conflicts to perturb balance, which limits the possibilities for generating sustained sensory reweighting processes and designing well-defined training approaches. This study investigated the effects that different onset characteristics, amplitudes, and velocities of visual input manipulations have on balance control and their ability to create persistent balance responses. Twenty-four young adults were recruited for the study. The VR was provided using a state-of-the-art head-mounted display, and balance was challenged in two experiments by rotations of the visual scene in the frontal plane with scaled constellations of trajectories, amplitudes, and velocities. Mean center-of-pressure speed was recorded and was greater when the visual input manipulation had an abrupt onset compared with a smooth onset. Furthermore, the balance response was greatest and most persistent when stimulus velocity was low and stimulus amplitude was large. These findings show a clear dissociation in the state of the postural system for abrupt and smooth visual manipulation onsets, with no indication of short-term adaptation to abrupt manipulations with slow stimulus velocity. This augments our understanding of how conflicting visual information affects balance responses and could help optimize the design of training and rehabilitation interventions.
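For readers unfamiliar with the outcome measure, mean center-of-pressure (COP) speed is typically computed as the COP path length divided by trial duration; the sketch below assumes force-plate COP coordinates in millimetres and a hypothetical sampling rate and trajectory.

```python
import numpy as np

def mean_cop_speed(cop_xy, fs):
    """Mean center-of-pressure speed: total COP path length / trial duration.

    cop_xy: (n_samples, 2) array of COP coordinates in mm; fs: sampling rate (Hz).
    Returns speed in mm/s.
    """
    steps = np.linalg.norm(np.diff(cop_xy, axis=0), axis=1)  # per-sample path increments
    duration = (len(cop_xy) - 1) / fs
    return steps.sum() / duration

# Toy trial: 10 s of simulated sway sampled at 100 Hz (values are placeholders).
rng = np.random.default_rng(1)
cop = np.cumsum(rng.normal(scale=0.2, size=(1000, 2)), axis=0)
print(round(mean_cop_speed(cop, fs=100), 2), "mm/s")
```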
Subjects
Movement Disorders, Postural Balance, Young Adult, Humans, Postural Balance/physiology
ABSTRACT
The ability to detect and assess world-relative object-motion is a critical computation performed by the visual system. This computation, however, is greatly complicated by the observer's movements, which generate a global pattern of motion on the observer's retina. How the visual system implements this computation is poorly understood. Since we are potentially able to detect a moving object if its motion differs in velocity (or direction) from the expected optic flow generated by our own motion, here we manipulated the relative motion velocity between the observer and the object within a stationary scene as a strategy to test how the brain accomplishes object-motion detection. Specifically, we tested the neural sensitivity of brain regions that are known to respond to egomotion-compatible visual motion (i.e., egomotion areas: cingulate sulcus visual area, posterior cingulate sulcus area, posterior insular cortex [PIC], V6+, V3A, IPSmot/VIP, and MT+) to a combination of different velocities of visually induced translational self- and object-motion within a virtual scene while participants were instructed to detect object-motion. To this aim, we combined individual surface-based brain mapping, task-evoked activity by functional magnetic resonance imaging, and parametric and representational similarity analyses. We found that all the egomotion regions (except area PIC) responded to all the possible combinations of self- and object-motion and were modulated by the self-motion velocity. Interestingly, we found that, among all the egomotion areas, only MT+, V6+, and V3A were further modulated by object-motion velocities, hence reflecting their possible role in discriminating between distinct velocities of self- and object-motion. We suggest that these egomotion regions may be involved in the complex computation required for detecting scene-relative object-motion during self-motion.
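A representational similarity analysis of the kind mentioned above typically correlates a neural representational dissimilarity matrix (RDM) with a model RDM built from the stimulus parameters (here, self- and object-motion velocities); the sketch below uses placeholder response patterns and velocity labels, not the study's data or condition structure.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Placeholder data: voxel response patterns for 9 hypothetical conditions
# (3 self-motion velocities x 3 object-motion velocities).
n_conditions, n_voxels = 9, 200
patterns = rng.normal(size=(n_conditions, n_voxels))

# Neural RDM: pairwise correlation distance between condition patterns.
neural_rdm = pdist(patterns, metric="correlation")

# Model RDM: dissimilarity defined by differences in object-motion velocity.
object_velocity = np.repeat([0.0, 0.5, 1.0], 3).reshape(-1, 1)
model_rdm = pdist(object_velocity, metric="euclidean")

# Spearman correlation between the two RDMs (upper-triangle vectors).
rho, p = spearmanr(neural_rdm, model_rdm)
print(rho, p)
```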
Subjects
Motion Perception, Neocortex, Humans, Motion Perception/physiology, Brain Mapping, Motion (Physics), Cingulate Gyrus, Photic Stimulation/methods
ABSTRACT
Optic flow provides information about movement direction and speed during locomotion. Changing the relationship between optic flow and walking speed via training has been shown to influence subsequent distance and hill-steepness estimates. Previous research has shown that experience with slow optic flow at a given walking speed was associated with increased effort and distance overestimation compared with experiencing fast optic flow at the same walking speed. Here, we investigated whether exposure to different optic flow speeds relative to gait influences perceptions of leaping and jumping ability. Participants estimated their maximum leaping and jumping ability after exposure to either fast or moderate optic flow at the same walking speed. Those calibrated to fast optic flow estimated farther leaping and jumping abilities than those calibrated to moderate optic flow. These findings suggest that recalibration between optic flow and walking speed may specify an action boundary when calibrated or scaled to actions such as leaping; possibly, the manipulation of optic flow speed changed the anticipated effort associated with walking a prescribed distance, which in turn influenced perceived action capabilities for jumping and leaping.
Subjects
Optic Flow, Humans, Optic Flow/physiology, Adult, Young Adult, Male, Female, Walking Speed/physiology, Walking/physiology, Psychomotor Performance/physiology, Locomotion/physiology
ABSTRACT
Organisms live in a dynamic environment in which sensory information from multiple sources is ever changing. A conceptually complex task for organisms is to accumulate evidence across sensory modalities and over time, a process known as multisensory decision-making. This is a relatively new framing, in the sense that previous research has largely been conducted in parallel disciplines: much effort has gone either into sensory integration across modalities using activity summed over a period of time, or into decision-making with a single sensory modality that evolves over time. Recently, a few studies with neurophysiological measurements have emerged that examine how information from different sensory modalities is processed, accumulated, and integrated over time in decision-related areas such as the parietal and frontal lobes in mammals. In this review, we summarize and comment on these studies, which combine the two long-standing parallel fields of multisensory integration and decision-making, and we show how the new findings provide a more complete understanding of the neural mechanisms mediating multisensory information processing.
Subjects
Cognition, Frontal Lobe, Animals, Mammals
ABSTRACT
Dynamic occlusion, such as the accretion and deletion of texture near a boundary, is a major factor in determining the relative depth of surfaces. However, the shape of the contour bounding the dynamic texture can significantly influence what kind of 3D shape, and what relative depth, are conveyed by the optic flow. This can lead to percepts that are inconsistent with traditional accounts of shape and depth from motion, where accreting/deleting texture can indicate the figural region, and/or 3D rotation can be perceived despite the constant speed of the optic flow. This suggests that the speed profile of the dynamic texture and the shape of its bounding contours combine to determine relative depth in a way that is not explained by existing models. Here, we investigated how traditional structure-from-motion principles and contour geometry interact to determine the relative-depth interpretation of dynamic textures. We manipulated the consistency of the dynamic texture with rotational or translational motion by varying the speed profile of the texture. In Experiment 1, we used a multi-region figure-ground display in which dots moved horizontally in opposite directions in adjacent regions. In Experiment 2, we used stimuli comprising two regions separated by a common border, with dot textures moving horizontally in opposite directions. Both contour geometry (convexity) and the speed profile of the dynamic dot texture influenced relative-depth judgments, but contour geometry was the stronger factor. The results underscore the importance of contour geometry, which most current models disregard, in determining depth from motion.
Subjects
Form Perception, Motion Perception, Optic Flow, Humans, Rotation, Depth Perception
ABSTRACT
CLINICAL RELEVANCE: Vision-related problems can be part of the longstanding sequelae of COVID-19 and can hamper the return to work and daily activities. Knowledge about symptoms and visual and oculomotor dysfunctions is, however, scarce, particularly for non-hospitalised patients. Clinically applicable tools are needed to support assessment and determination of intervention needs. BACKGROUND: The purpose of this study was to evaluate vision-related symptoms, assess visual and oculomotor function, and test the clinical assessment of saccadic eye movements and sensitivity to visual motion in non-hospitalised post-COVID-19 outpatients. The patients (n = 38) in this observational cohort study were recruited from a post-COVID-19 clinic and had been referred for neurocognitive assessment. METHODS: Patients who reported vision-related symptoms, reading problems, and intolerance to movement in the environment were examined. A structured symptom assessment and a comprehensive vision examination were undertaken, and saccadic eye movements and visual motion sensitivity were assessed. RESULTS: High symptom scores (26-60%) and a high prevalence of visual function impairments were observed. An increased symptom score when reading was associated with less efficient saccadic eye movement behaviour (p < 0.001) and binocular dysfunction (p = 0.029). Patients with severe symptoms in visually busy places scored significantly higher on the Visual Motion Sensitivity Clinical Test Protocol (p = 0.029). CONCLUSION: Vision-related symptoms and impairments were prevalent in the study group. The Developmental Eye Movement Test and the Visual Motion Sensitivity Clinical Test Protocol showed promise for clinical assessment of saccadic performance and sensitivity to movement in the environment. Further study will be required to explore the utility of these tools.
Subjects
COVID-19, Post-Acute COVID-19 Syndrome, Humans, COVID-19/complications, Eye Movements, Saccades, Vision Tests
ABSTRACT
In selecting appropriate behaviors, animals should weigh sensory evidence both for and against specific beliefs about the world. For instance, animals measure optic flow to estimate and control their own rotation. However, existing models of flow detection can be spuriously triggered by visual motion created by objects moving in the world. Here, we show that stationary patterns on the retina, which constitute evidence against observer rotation, suppress inappropriate stabilizing rotational behavior in the fruit fly Drosophila. In silico experiments show that artificial neural networks (ANNs) that are optimized to distinguish observer movement from external object motion similarly detect stationarity and incorporate negative evidence. Employing neural measurements and genetic manipulations, we identified components of the circuitry for stationary pattern detection, which runs parallel to the fly's local motion and optic-flow detectors. Our results show how the fly brain incorporates negative evidence to improve heading stability, exemplifying how a compact brain exploits geometrical constraints of the visual world.
Subjects
Motion Perception, Optic Flow, Animals, Movement, Rotation, Drosophila, Photic Stimulation/methods
ABSTRACT
It is well recognized that the vestibular system is involved in numerous important cognitive functions, including self-motion perception, spatial orientation, locomotion, and vector-based navigation, in addition to basic reflexes such as oculomotor and postural control. Consistent with this, vestibular signals are found broadly in the brain, including several regions of the cerebral cortex, potentially allowing tight coordination with other sensory systems to improve the accuracy and precision of perception or action during self-motion. Recent neurophysiological studies in animal models at single-cell resolution indicate that vestibular signals exhibit complex spatiotemporal dynamics, making it challenging to identify their exact functions and how they are integrated with signals from other modalities. For example, vestibular and optic flow signals can be congruent or incongruent with respect to spatial tuning functions, reference frames, and temporal dynamics. Comprehensive studies, including behavioral tasks, neural recordings across sensory and sensorimotor association areas, and causal manipulations, have provided some insights into the neural mechanisms underlying multisensory self-motion perception.