Results 1 - 17 of 17
1.
Front Bioeng Biotechnol ; 12: 1285107, 2024.
Article in English | MEDLINE | ID: mdl-38638317

ABSTRACT

Immersive technology, such as extended reality, holds promise as a tool for educating ophthalmologists about the effects of low vision and for enhancing visual rehabilitation protocols. However, immersive simulators have not been evaluated for their ability to induce changes in the oculomotor system, which is crucial for understanding the visual experiences of visually impaired individuals. This study aimed to assess the REALTER (Wearable Egocentric Altered Reality Simulator) system's capacity to induce specific alterations in healthy individuals' oculomotor systems under simulated low-vision conditions. We examined task performance, eye movements, and head movements in healthy participants across various simulated scenarios. Our findings suggest that REALTER can effectively elicit behaviors in healthy individuals resembling those observed in individuals with low vision. Participants with simulated binocular maculopathy demonstrated unstable fixations and a high frequency of wide saccades. Individuals with simulated homonymous hemianopsia showed a tendency to maintain a fixed head position while executing wide saccades to survey their surroundings. Simulation of tubular vision resulted in a significant reduction in saccade amplitudes. REALTER holds promise as both a training tool for ophthalmologists and a research instrument for studying low vision conditions. The simulator has the potential to enhance ophthalmologists' comprehension of the limitations imposed by visual disabilities, thereby facilitating the development of new rehabilitation protocols.

2.
Sci Rep ; 13(1): 7445, 2023 05 08.
Article in English | MEDLINE | ID: mdl-37156822

ABSTRACT

Unlike a photographer, who puts great effort into keeping the lens still, the eyes move incessantly even during fixation. This benefits signal decorrelation, which underlies an efficient encoding of visual information. Yet camera motion alone is not sufficient; it must be coupled with a sensor specifically selective to temporal changes. Indeed, motion induced on standard imagers results only in blurring effects. Neuromorphic sensors represent a valuable solution. Here we characterize the response of an event-based camera equipped with fixational eye movements (FEMs) on both synthetic and natural images. Our analyses show that the system performs an early stage of redundancy suppression, as a precursor of subsequent whitening processes on the amplitude spectrum. This does not come at the price of corrupting the structural information contained in the local spatial phase across oriented axes. The isotropy of FEMs ensures proper representation of image features without introducing biases towards specific contrast orientations.
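
The whitening effect of change-sensitive sensing can be illustrated with a toy computation (a sketch under stated assumptions, not the paper's event-camera pipeline): a synthetic signal with a ~1/f amplitude spectrum is passed through a temporal difference, whose transfer function grows roughly linearly with frequency and so cancels the 1/f falloff at low frequencies.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "natural-like" signal: white noise shaped to a ~1/f amplitude spectrum
n = 4096
spectrum = rng.standard_normal(n // 2 + 1) + 1j * rng.standard_normal(n // 2 + 1)
freqs = np.fft.rfftfreq(n)
spectrum[1:] /= freqs[1:]          # impose the 1/f amplitude falloff
spectrum[0] = 0.0
signal = np.fft.irfft(spectrum, n)

# A change-sensitive sensor approximates a temporal derivative (circular
# difference here, so the spectral relation is exact)
diff = np.roll(signal, -1) - signal

def spectral_slope(x, fmax=0.1):
    """Least-squares slope of log amplitude vs. log frequency (f < fmax)."""
    f = np.fft.rfftfreq(x.size)
    a = np.abs(np.fft.rfft(x))
    m = (f > 0) & (f < fmax)
    return np.polyfit(np.log(f[m]), np.log(a[m] + 1e-12), 1)[0]

print(spectral_slope(signal), spectral_slope(diff))
```

The first slope should come out near -1 (the imposed falloff) and the second near 0, i.e., an approximately whitened amplitude spectrum after differencing.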


Subject(s)
Fixation, Ocular , Eye Movements , Motion , Vision, Ocular
3.
Front Robot AI ; 9: 994284, 2022.
Article in English | MEDLINE | ID: mdl-36329691

ABSTRACT

When exploring the surrounding environment with the eyes, humans and other primates need to interpret three-dimensional (3D) shapes in a fast and invariant way, exploiting highly variant, gaze-dependent visual information. Because their eyes are front-facing, binocular disparity is a prominent cue for depth perception. Specifically, it serves as the computational substrate for two fundamental mechanisms of binocular active vision: stereopsis and binocular coordination. To this aim, disparity information, which is expressed in a retinotopic reference frame, is combined along the visual cortical pathways with gaze information and transformed into a head-centric reference frame. Despite the importance of this mechanism, the underlying neural substrates remain largely unknown. In this work, we investigate the capability of the human visual system to interpret the 3D scene by exploiting disparity and gaze information. In a psychophysical experiment, human subjects were asked to judge the depth orientation of a planar surface either while fixating a target point or while freely exploring the surface. Moreover, we used the same stimuli to train a recurrent neural network to exploit the responses of a modelled population of cortical (V1) cells to interpret the 3D scene layout. The results from both the human observers and the model network show that integrating disparity information across gaze directions is crucial for a reliable and invariant interpretation of the 3D geometry of the scene.

4.
J Vis ; 21(10): 13, 2021 09 01.
Article in English | MEDLINE | ID: mdl-34529006

ABSTRACT

Evidence of perceptual changes that accompany motor activity has been limited primarily to audition and somatosensation. Here we asked whether motor learning results in changes to visual motion perception. We designed a reaching task in which participants were trained to make movements along several directions, while visual feedback was provided by an intrinsically ambiguous moving stimulus directly tied to hand motion. We find that training improves coherent motion perception and that changes in movement are correlated with perceptual changes. No perceptual changes are observed in passive training, even when observers were provided with an explicit strategy to facilitate single motion perception. A Bayesian model suggests that movement training promotes the fine-tuning of the internal representation of stimulus geometry. These results emphasize the role of sensorimotor interaction in determining the persistent properties in space and time that define a percept.


Subject(s)
Motion Perception , Bayes Theorem , Hand , Humans , Motion , Visual Perception
5.
Behav Res Methods ; 53(1): 167-187, 2021 02.
Article in English | MEDLINE | ID: mdl-32643061

ABSTRACT

Saccades are rapid ballistic eye movements that humans make to direct the fovea to an object of interest. Their kinematics are well defined, showing regular relationships between amplitude, duration, and velocity: the saccadic 'main sequence'. Deviations of eye movements from the main sequence can be used as markers of specific neurological disorders. Despite its significance, there is no general methodological consensus on reliable and repeatable measurement of the main sequence. In this work, we propose a novel approach for measuring standard indicators of oculomotor performance. The obtained measurements are characterized by high repeatability, allowing for fine assessments of inter- and intra-subject variability and inter-ocular differences. The designed experimental procedure is natural and non-fatiguing, making it well suited for fragile or non-collaborative subjects such as neurological patients and infants. The method has been released as a software toolbox for public use. This framework lays the foundation for a normative dataset of healthy oculomotor performance for the assessment of oculomotor dysfunctions.
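
The amplitude-peak-velocity branch of the main sequence is commonly summarized with a saturating exponential. A minimal sketch of fitting it on synthetic data (parameter values are illustrative, and this is not the released toolbox's actual procedure):

```python
import numpy as np
from scipy.optimize import curve_fit

def main_sequence(amplitude, v_max, a0):
    """Saturating main-sequence model: peak velocity vs. saccade amplitude."""
    return v_max * (1.0 - np.exp(-amplitude / a0))

# Synthetic saccades; V_max = 500 deg/s and A0 = 8 deg are illustrative,
# not normative values
rng = np.random.default_rng(0)
amps = rng.uniform(1.0, 30.0, 200)                       # amplitudes (deg)
peaks = main_sequence(amps, 500.0, 8.0) + rng.normal(0.0, 10.0, amps.size)

(fit_vmax, fit_a0), _ = curve_fit(main_sequence, amps, peaks, p0=(400.0, 5.0))
print(f"fitted V_max = {fit_vmax:.0f} deg/s, A0 = {fit_a0:.1f} deg")
```

Departures of fitted parameters from normative ranges are the kind of marker the abstract refers to.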


Subject(s)
Eye Movements , Saccades , Biomechanical Phenomena , Humans , Vision, Ocular
6.
Sci Rep ; 10(1): 15634, 2020 09 24.
Article in English | MEDLINE | ID: mdl-32973252

ABSTRACT

Strabismus is a prevalent impairment of binocular alignment that is associated with a spectrum of perceptual deficits and social disadvantages. Current treatments for strabismus involve ocular alignment through surgical or optical methods and may include vision therapy exercises. In the present study, we explore the potential of real-time dichoptic visual feedback that may be used to quantify and manipulate interocular alignment. A gaze-contingent ring was presented independently to each eye of 11 normally-sighted observers as they fixated a target dot presented only to their dominant eye. Their task was to center the rings within 2° of the target for at least 1 s, with feedback provided by the sizes of the rings. By offsetting the ring in the non-dominant eye temporally or nasally, this task required convergence or divergence, respectively, of the non-dominant eye. Eight of 11 observers attained 5° asymmetric convergence and 3 of 11 attained 3° asymmetric divergence. The results suggest that real-time gaze-contingent feedback may be used to quantify and transiently simulate strabismus and holds promise as a method to augment existing therapies for oculomotor alignment disorders.


Subject(s)
Eye Movements/physiology , Feedback , Oculomotor Muscles/physiology , Sensory Thresholds , Strabismus/physiopathology , Vision, Binocular/physiology , Visual Acuity/physiology , Contrast Sensitivity , Female , Humans , Male , Photic Stimulation , Task Performance and Analysis , Visual Perception
7.
Sci Data ; 4: 170034, 2017 03 28.
Article in English | MEDLINE | ID: mdl-28350382

ABSTRACT

Binocular stereopsis is the ability of a visual system, whether of a living being or a machine, to interpret the different visual information derived from two eyes/cameras for depth perception. From this perspective, ground-truth information about three-dimensional visual space, which is rarely available, is an ideal tool both for evaluating human performance and for benchmarking machine vision algorithms. In the present work, we implemented a rendering methodology in which the camera pose mimics a realistic eye pose for a fixating observer, thus including convergent eye geometry and cyclotorsion. The virtual environment we developed relies on highly accurate 3D virtual models, and its full controllability allows us to obtain the stereoscopic pairs together with ground-truth depth and camera pose information. We thus created a stereoscopic dataset: GENUA PESTO (GENoa hUman Active fixation database: PEripersonal space STereoscopic images and grOund truth disparity). The dataset aims to provide a unified framework useful for a number of problems relevant to human and computer vision, from scene exploration and eye movement studies to 3D scene reconstruction.
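
For a fixating observer, the camera/eye poses are tied to the fixation point by simple geometry. A sketch of the symmetric-fixation vergence angle (the dataset's actual eye-pose model, including cyclotorsion and asymmetric gaze, is richer):

```python
import numpy as np

def vergence_angle_deg(baseline_m, fixation_dist_m):
    """Vergence angle (deg) for symmetric fixation at a given distance.
    Assumes the fixation point lies on the midline between the two eyes."""
    return np.degrees(2.0 * np.arctan2(baseline_m / 2.0, fixation_dist_m))

# Typical human interocular baseline ~6.5 cm; peripersonal fixation at 40 cm
print(f"{vergence_angle_deg(0.065, 0.40):.2f} deg")
```

Vergence shrinks rapidly with fixation distance, which is why convergent geometry matters mainly in peripersonal space.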


Subject(s)
Algorithms , Depth Perception , Humans , Vision Disparity
8.
Sci Rep ; 7: 44800, 2017 03 20.
Article in English | MEDLINE | ID: mdl-28317909

ABSTRACT

Depth perception in near viewing relies strongly on the interpretation of binocular retinal disparity to obtain stereopsis. Statistical regularities of retinal disparities have been claimed to strongly influence the neural mechanisms that underlie binocular vision, both to facilitate perceptual decisions and to reduce computational load. In this paper, we designed a novel and unconventional approach to assess the role of the fixation strategy in conditioning the statistics of retinal disparity. We integrated accurate, realistic three-dimensional models of natural scenes with binocular eye movement recordings to obtain accurate ground-truth statistics of the retinal disparity experienced by a subject in near viewing. Our results show that the organization of the human binocular visual system is finely adapted to the disparity statistics characterizing actual fixations, thus revealing a novel role of the active fixation strategy in binocular visual function. This suggests an ecological explanation for the intrinsic preference of stereopsis for a close central object surrounded by a far background, as an early binocular aspect of the figure-ground segregation process.


Subject(s)
Depth Perception , Vision Disparity , Environment , Eye Movements , Humans , Vision, Binocular
9.
ScientificWorldJournal ; 2014: 179391, 2014.
Article in English | MEDLINE | ID: mdl-24672295

ABSTRACT

Motivated by the importance of relative disparity between objects for accurate hand-eye coordination, this paper presents a biologically inspired approach based on cortical neural architecture. Motor information is coded in egocentric coordinates obtained from an allocentric representation of space (in terms of disparity), which is in turn generated from the egocentric representation of visual information (image coordinates). In this way, the different aspects of visuomotor coordination are integrated: an active vision system composed of two vergent cameras; a module for 2D binocular disparity estimation based on local estimation of phase differences through a bank of Gabor filters; and a robotic actuator performing the corresponding tasks (visually guided reaching). The approach's performance is evaluated through experiments on both simulated and real data.
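
The phase-difference principle behind such a disparity module can be sketched in 1-D (illustrative filter parameters; the paper's module uses a 2-D bank of oriented Gabor filters):

```python
import numpy as np

def local_phase(signal, x, freq, sigma):
    """Local phase from a complex Gabor filter centred at position x."""
    n = np.arange(signal.size)
    gabor = np.exp(-0.5 * ((n - x) / sigma) ** 2) * np.exp(1j * freq * (n - x))
    return np.angle(np.sum(signal * gabor))

# 1-D left/right "images": the right image is the left shifted by 3 samples
n = np.arange(256)
wavelength = 32.0
left = np.sin(2 * np.pi * n / wavelength)
right = np.sin(2 * np.pi * (n - 3) / wavelength)

freq = 2 * np.pi / wavelength          # filter tuned to the stimulus frequency
dphi = local_phase(right, 128, freq, 8.0) - local_phase(left, 128, freq, 8.0)
dphi = np.angle(np.exp(1j * dphi))     # wrap the difference to (-pi, pi]
shift = dphi / freq
print(f"recovered shift: {shift:.2f} samples")
```

Because phase wraps, a single filter can only measure shifts up to half its wavelength; multi-scale banks extend the range.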


Subject(s)
Fixation, Ocular , Vision, Binocular , Robotics
10.
Network ; 23(4): 272-91, 2012.
Article in English | MEDLINE | ID: mdl-23116085

ABSTRACT

The intrinsic parallelism of visual neural architectures based on distributed hierarchical layers is well suited to implementation on the multi-core architectures of modern graphics cards. We propose design strategies that optimally exploit such parallelism in order to map the hierarchy of layers and the canonical neural computations efficiently onto the GPU. Specifically, the advantages of a cortical map-like representation of the data are exploited. Moreover, a GPU implementation of a novel neural architecture for the computation of binocular disparity from stereo image pairs, based on populations of binocular energy neurons, is presented. The implemented neural model achieves good performance in terms of reliability of the disparity estimates and near real-time execution speed, thus demonstrating the effectiveness of the devised design strategies. The proposed approach is valid in general, since the neural building blocks we implemented are a common basis for the modeling of visual neural functionalities.
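
The binocular energy neuron named above can be sketched in 1-D (a phase-shift variant with illustrative parameters; the GPU mapping itself is not reproduced here):

```python
import numpy as np

def gabor(signal, x, freq, sigma, phase=0.0):
    """Complex Gabor response of a monocular simple-cell subunit."""
    n = np.arange(signal.size)
    g = np.exp(-0.5 * ((n - x) / sigma) ** 2) * np.exp(1j * (freq * (n - x) + phase))
    return np.sum(signal * g)

def binocular_energy(left, right, x, freq, sigma, dphase):
    """|L + R|^2, with an interocular phase offset encoding preferred disparity."""
    return np.abs(gabor(left, x, freq, sigma) +
                  gabor(right, x, freq, sigma, phase=dphase)) ** 2

# Stimulus with a 3-sample disparity
n = np.arange(256)
freq = 2 * np.pi / 32.0
left = np.sin(freq * n)
right = np.sin(freq * (n - 3))

# Population of energy units with different interocular phase offsets;
# the most active unit indicates the stimulus disparity (-dphase / freq)
dphases = np.linspace(-np.pi, np.pi, 65)
responses = [binocular_energy(left, right, 128, freq, 16.0, dp) for dp in dphases]
preferred = -dphases[np.argmax(responses)] / freq
print(f"decoded disparity: {preferred:.2f} samples")
```

Each unit's response is independent of the others, which is what makes population readouts of this kind embarrassingly parallel on a GPU.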


Subject(s)
Computer Graphics/instrumentation , Computer Simulation , Models, Neurological , Nerve Net/physiology , Signal Processing, Computer-Assisted/instrumentation , Vision, Binocular/physiology , Visual Cortex/physiology , Action Potentials/physiology , Algorithms , Animals , Computer Systems , Equipment Design , Humans , Programming Languages , Software
11.
Sensors (Basel) ; 12(2): 1771-99, 2012.
Article in English | MEDLINE | ID: mdl-22438737

ABSTRACT

This paper presents an architecture for computing vector disparity for active vision systems as used in robotics applications. Controlling the vergence angle of a binocular system allows us to efficiently explore dynamic environments, but requires a generalization of the disparity computation with respect to a static camera setup, where the disparity is strictly 1-D after image rectification. The interaction between vision and motor control allows us to develop an active sensor that achieves high accuracy of the disparity computation around the fixation point and fast reaction time for the vergence control. In this contribution, we address the development of a real-time architecture for vector disparity computation using an FPGA device. We implement the disparity unit and the control module for vergence, version, and tilt to determine the fixation point. In addition, two different on-chip alternatives for the vector disparity engines are discussed, based on the luminance (gradient-based) and phase information of the binocular images. The multiscale versions of these engines are able to estimate the vector disparity at up to 32 fps on VGA-resolution images with very good accuracy, as shown using benchmark sequences with known ground truth. The performance of the presented approaches in terms of frame rate, resource utilization, and accuracy is discussed. On the basis of these results, our study indicates that the gradient-based approach offers the best trade-off for integration with the active vision system.
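
The gradient-based (luminance) principle that the study favours can be sketched in 1-D, Lucas-Kanade style, on a single scale with an illustrative stimulus (the FPGA multiscale engine is not reproduced):

```python
import numpy as np

def gradient_disparity(left, right):
    """Single-scale gradient-based horizontal disparity over the whole window:
    solve  I_x * d = -(I_R - I_L)  in the least-squares sense."""
    ix = np.gradient(left)                 # spatial derivative of the left image
    it = right - left                      # interocular difference
    return -np.sum(ix * it) / (np.sum(ix * ix) + 1e-12)

n = np.arange(256)
left = np.sin(2 * np.pi * n / 64.0)
right = np.sin(2 * np.pi * (n - 0.5) / 64.0)   # sub-pixel shift of 0.5 samples
print(gradient_disparity(left, right))
```

The linearization only holds for shifts small relative to the image structure, which is why a multiscale (coarse-to-fine) scheme is needed for large disparities.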


Subject(s)
Image Enhancement/instrumentation , Image Interpretation, Computer-Assisted/instrumentation , Pattern Recognition, Automated/methods , Robotics/instrumentation , Transducers , Video Recording/instrumentation , Equipment Design , Equipment Failure Analysis , Feedback , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods
12.
J Neurosci ; 32(1): 159-69, 2012 Jan 04.
Article in English | MEDLINE | ID: mdl-22219279

ABSTRACT

Eye position signals are pivotal in the visuomotor transformations performed by the posterior parietal cortex (PPC), but to date there are few studies addressing the influence of vergence angle on single PPC neurons. In the present study, we investigated the influence of vergence and version signals on single neurons of the medial PPC area V6A. Single-unit activity was recorded from V6A in two Macaca fascicularis monkeys fixating real targets in darkness. The fixation targets were placed at eye level, at different vergence and version angles within the peripersonal space. Few neurons were modulated by version or vergence only, while the majority of cells were affected by both signals. We advance here the hypothesis that gaze-modulated V6A cells are able to encode gazed positions in three-dimensional space. In single cells, version and vergence influenced the discharge with variable time courses. In several cases, the two gaze variables influenced neural discharges during only a part of the fixation time but, more often, their influence persisted through large parts of it. Cells discharging for the first 400-500 ms of fixation could signal the arrival of gaze (and/or of the spotlight of attention) at a new position in the peripersonal space. Cells showing more sustained activity during the fixation period could better signal the location in space of the gazed objects. Both signals are critical for the control of upcoming or ongoing arm movements, such as those needed to reach and grasp objects located in the peripersonal space.


Subject(s)
Eye Movements/physiology , Fixation, Ocular/physiology , Orientation/physiology , Parietal Lobe/physiology , Psychomotor Performance/physiology , Space Perception/physiology , Animals , Macaca fascicularis , Male , Photic Stimulation/methods
13.
PLoS One ; 6(8): e23335, 2011.
Article in English | MEDLINE | ID: mdl-21858075

ABSTRACT

Interacting in the peripersonal space requires coordinated arm and eye movements to visual targets in depth. In primates, the medial posterior parietal cortex (PPC) represents a crucial node in the process of visual-to-motor signal transformations. The medial PPC area V6A is a key region engaged in the control of these processes because it jointly processes visual information, eye position and arm movement related signals. However, to date, there is no evidence in the medial PPC of spatial encoding in three dimensions. Here, using single neuron recordings in behaving macaques, we studied the neural signals related to binocular eye position in a task that required the monkeys to perform saccades and fixate targets at different locations in peripersonal and extrapersonal space. A significant proportion of neurons were modulated by both gaze direction and depth, i.e., by the location of the foveated target in 3D space. The population activity of these neurons displayed a strong preference for peripersonal space in a time interval around the saccade that preceded fixation and during fixation as well. This preference for targets within reaching distance during both target capturing and fixation suggests that binocular eye position signals are implemented functionally in V6A to support its role in reaching and grasping.


Subject(s)
Neurons/physiology , Parietal Lobe/physiology , Psychomotor Performance/physiology , Saccades/physiology , Action Potentials/physiology , Analysis of Variance , Animals , Brain Mapping/methods , Macaca fascicularis , Male , Models, Anatomic , Models, Neurological , Parietal Lobe/anatomy & histology , Space Perception/physiology , Visual Cortex/anatomy & histology , Visual Cortex/cytology , Visual Cortex/physiology , Visual Fields/physiology , Visual Pathways/anatomy & histology , Visual Pathways/cytology , Visual Pathways/physiology , Visual Perception/physiology
14.
Int J Neural Syst ; 20(4): 267-78, 2010 Aug.
Article in English | MEDLINE | ID: mdl-20726038

ABSTRACT

We present two neural models for vergence angle control of a robotic head, a simplified and a more complex one. Both models work in a closed-loop manner and do not rely on explicitly computed disparity, but extract the desired vergence angle from the post-processed response of a population of disparity tuned complex cells, the actual gaze direction and the actual vergence angle. The first model assumes that the gaze direction of the robotic head is orthogonal to its baseline and the stimulus is a frontoparallel plane orthogonal to the gaze direction. The second model goes beyond these assumptions, and operates reliably in the general case where all restrictions on the orientation of the gaze, as well as the stimulus position, type and orientation, are dropped.
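
The closed-loop idea of extracting a vergence correction from a population of disparity-tuned units, rather than from an explicitly computed disparity map, can be sketched as follows (tuning curves, gain, and ranges are illustrative, not the paper's models):

```python
import numpy as np

def population_vergence_update(stim_disparity, gain=0.5):
    """Population-vector readout from disparity-tuned units driving a
    closed-loop vergence correction (all parameters illustrative)."""
    preferred = np.linspace(-2.0, 2.0, 41)       # preferred disparities (deg)
    resp = np.exp(-0.5 * ((stim_disparity - preferred) / 0.5) ** 2)
    decoded = np.sum(resp * preferred) / np.sum(resp)
    return gain * decoded                         # vergence correction (deg)

# Closing the loop drives the residual disparity at the fovea toward zero
disparity = 1.2
for _ in range(10):
    disparity -= population_vergence_update(disparity)
print(f"residual disparity: {disparity:.3f} deg")
```

The loop converges without the disparity ever being read out as an explicit map, which is the spirit of the models described above.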


Subject(s)
Convergence, Ocular/physiology , Eye Movements/physiology , Models, Neurological , Vision Disparity , Humans , Robotics , Vision, Binocular/physiology
15.
Biosystems ; 87(2-3): 314-21, 2007 Feb.
Article in English | MEDLINE | ID: mdl-17045391

ABSTRACT

A simple and fast technique for depth estimation based on phase measurement has been adopted for the implementation of a real-time stereo system with sub-pixel resolution on an FPGA device. The technique avoids the attendant problem of phase wrapping. The designed system takes full advantage of the inherent processing parallelism and segmentation capabilities of FPGA devices to achieve a computation speed of 65 megapixels/s, which can be arranged with a customized frame-grabber module to process 211 frames/s at a size of 640x480 pixels. The processing speed achieved is higher than conventional camera frame rates, thus allowing the system to extract multiple estimations and be used as a platform to evaluate integration schemes of a population of neurons without increasing hardware resource demands.
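
Two quantitative aspects of such a system can be sketched with illustrative numbers: the unambiguous range of a phase-based disparity measurement (limited by wrapping to half the filter wavelength) and the triangulated depth for a rectified stereo pair:

```python
import numpy as np

# Phase wraps at +/- pi, so a filter tuned to wavelength lambda can only
# measure disparities within +/- lambda / 2 without ambiguity
wavelength_px = 16.0
omega = 2 * np.pi / wavelength_px
max_unambiguous_disparity = np.pi / omega      # = 8.0 px (half the wavelength)

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Pinhole triangulation for a rectified stereo pair (illustrative
    parameters; the paper's FPGA pipeline is not reproduced)."""
    return focal_px * baseline_m / disparity_px

print(max_unambiguous_disparity)
print(depth_from_disparity(8.0, 500.0, 0.10))  # depth in metres
```

Sub-pixel disparity resolution matters because depth precision degrades hyperbolically as disparity shrinks at long range.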


Subject(s)
Computer Simulation , Depth Perception/physiology , Models, Biological , Biomedical Engineering , Computers , Software , Systems Biology
16.
Biosystems ; 79(1-3): 101-8, 2005.
Article in English | MEDLINE | ID: mdl-15649594

ABSTRACT

A neural field model of the reaction-diffusion type for the emergence of oscillatory phenomena in visual cortices is proposed. To investigate the joint spatio-temporal oscillatory dynamics in a continuous distribution of excitatory and inhibitory neurons, the coupling among oscillators is modelled as a diffusion process, combined with non-linear point interactions. The model exhibits cooperative activation properties in both time and space, by reacting to volleys of activations at multiple cortical sites with ordered spatio-temporal oscillatory states, similar to those found in the physiological experiments on slow-wave field potentials. The possible use of the resulting spatial distributions of coherent states, as a flexible medium to establish feature association, is discussed.


Subject(s)
Models, Neurological , Neurons/physiology
17.
Vision Res ; 43(13): 1473-93, 2003 Jun.
Article in English | MEDLINE | ID: mdl-12767315

ABSTRACT

We propose a two-layer neuromorphic architecture by which motion field patterns, generated during locomotion, are processed by template detectors specialized for gaze-directed self-motion (expansion and rotation). The templates provide a gaze-centered computation for analyzing the motion field in terms of how it is related to the fixation point (i.e., the fovea). The analysis is performed by relating the vectorial components of the act of motion to variations (i.e., asymmetries) of the local structure of the motion field. Notwithstanding their limited extension in space, such centric-minded templates extract, as a whole, global information from the input flow field, being sensitive to different local instances of the same global property of the vector field with respect to the fixation point; a quantitative analysis, in terms of vectorial operators, demonstrates this property in the form of tuning curves for heading direction. Model performance, evaluated in several situations characterized by the absence or presence of pursuit eye movements, validates the approach. We observe that the gaze-centered model provides an explicit, testable hypothesis that can guide further explorations of visual motion processing in extrastriate cortical areas.
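
The template idea can be sketched with a pure expansion field (no rotation or pursuit components, candidate headings on a coarse grid; all parameters are illustrative, not the paper's detectors):

```python
import numpy as np

def expansion_flow(xs, ys, foe):
    """Radial expansion flow with focus of expansion (FOE) at `foe`."""
    return xs - foe[0], ys - foe[1]

def template_match(fx, fy, tx, ty):
    """Normalized correlation between an observed flow field and a template."""
    num = np.sum(fx * tx + fy * ty)
    den = np.sqrt(np.sum(fx**2 + fy**2) * np.sum(tx**2 + ty**2)) + 1e-12
    return num / den

ys, xs = np.mgrid[-20:21, -20:21]                  # retinotopic sampling grid
obs_fx, obs_fy = expansion_flow(xs, ys, (5, -5))   # true heading at (5, -5)

# Bank of heading templates; the best-matching one recovers the FOE
candidates = [(x, y) for x in range(-10, 11, 5) for y in range(-10, 11, 5)]
best = max(candidates,
           key=lambda c: template_match(obs_fx, obs_fy, *expansion_flow(xs, ys, c)))
print(best)
```

Each template responds to a global property of the field (where the flow radiates from), even though every comparison is a sum of purely local products, mirroring the centric-minded templates described above.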


Subject(s)
Kinesthesis , Models, Psychological , Motion Perception/physiology , Computational Biology , Fixation, Ocular , Fovea Centralis , Humans