1.
Front Psychol ; 13: 881269, 2022.
Article in English | MEDLINE | ID: mdl-36160516

ABSTRACT

When considering external assistive systems for people with motor impairments, gaze has been shown to be a powerful tool, as it anticipates motor actions and is promising for inferring an individual's intentions even before the action begins. Until now, the vast majority of studies investigating coordinated eye and hand movements in grasping tasks have focused on the manipulation of single objects without placing them in a meaningful scene. Very little is known about how scene context affects the way we manipulate objects in an interactive task. In the present study, we investigated how scene context affects human object manipulation in a pick-and-place task in a realistic scenario implemented in VR. During the experiment, participants were instructed to find the target object in a room, pick it up, and transport it to a predefined final location. The impact of the scene context on the different stages of the task was then examined using head and hand movements as well as eye tracking. As the main result, scene context had a significant effect on the search and transport phases, but not on the reach phase of the task. The present work provides insights for the development of potential intention-predicting support systems, revealing the dynamics of pick-and-place behavior once the task is realized in a realistic, context-rich scenario.
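The phase-wise analysis described above implies splitting each trial into search, reach, and transport segments. The following is a minimal sketch of how such a segmentation could be done from hand-tracking data; it is not the authors' pipeline, and the field names, grasp/release indices, and velocity threshold are illustrative assumptions.

```python
# Minimal sketch (not the authors' pipeline): splitting a pick-and-place trial
# into search / reach / transport phases from hand-tracking data.
# Field names and the velocity threshold are illustrative assumptions.
import numpy as np

def segment_phases(t, hand_pos, grasp_idx, release_idx, vel_thresh=0.05):
    """Return (search, reach, transport) index ranges for one trial.

    t           : (N,) timestamps in seconds
    hand_pos    : (N, 3) hand position in meters
    grasp_idx   : sample index at which the object is grasped
    release_idx : sample index at which the object is placed
    vel_thresh  : hand speed (m/s) above which the reach is assumed to start
    """
    # Hand speed from finite differences.
    speed = np.linalg.norm(np.diff(hand_pos, axis=0), axis=1) / np.diff(t)

    # Reach onset: last moment before the grasp at which the hand was ~stationary.
    pre_grasp = speed[:grasp_idx]
    below = np.where(pre_grasp < vel_thresh)[0]
    reach_onset = below[-1] + 1 if below.size else 0

    search = (0, reach_onset)             # looking for the target
    reach = (reach_onset, grasp_idx)      # moving the hand to the object
    transport = (grasp_idx, release_idx)  # carrying it to the final location
    return search, reach, transport
```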

2.
Biomed Opt Express ; 13(11): 5849-5859, 2022 Nov 01.
Article in English | MEDLINE | ID: mdl-36733729

ABSTRACT

Presbyopia is an age-related loss of the eye's ability to accommodate that affects individuals in their late 40s or early 50s, reducing their ability to focus on near objects at will. In this study, we assessed electronically tunable lenses for their aberration properties as well as for their use as correction lenses. The tunable lenses were evaluated for presbyopia correction in healthy subjects under cycloplegia by measuring visual acuity and contrast sensitivity. Furthermore, we developed and demonstrated the feasibility of a feedback mechanism for operating the tunable lenses using a portable solid-state LIDAR camera with a processing time of 40 ± 5 ms.
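The closed-loop idea can be illustrated with the standard near-vision relation: the accommodation demand in diopters is the reciprocal of the object distance in meters, so a LIDAR distance reading maps directly to a target lens power. The sketch below assumes a simplified presbyopic eye and a hypothetical lens interface; it is not the authors' control code.

```python
# Hedged sketch of a distance-driven focus rule for a tunable lens.
# Uses the standard near-vision relation (accommodation demand in diopters
# = 1 / object distance in meters); the parameters below are assumptions.

def lens_power_for_distance(distance_m: float,
                            residual_accommodation_d: float = 0.0,
                            max_power_d: float = 4.0) -> float:
    """Optical power (diopters) the tunable lens should add so a presbyopic
    eye focused at infinity can see an object at `distance_m`."""
    demand_d = 1.0 / max(distance_m, 0.25)           # clamp to a 25 cm near point
    power_d = demand_d - residual_accommodation_d    # subtract what the eye still provides
    return min(max(power_d, 0.0), max_power_d)       # stay within the lens range

# Example: an object at 40 cm requires roughly +2.5 D of added power.
print(lens_power_for_distance(0.40))  # 2.5
```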

3.
Vision (Basel) ; 5(2)2021 Apr 15.
Article in English | MEDLINE | ID: mdl-33920907

ABSTRACT

With rapidly developing technology, visual cues have become a powerful tool for deliberately guiding attention and influencing human performance. Using cues to manipulate attention introduces a trade-off between increased performance at cued locations and decreased performance at uncued locations. For visual cues designed to purposely direct the user's attention to be effective, it is important to know how manipulating cue properties affects attention. In this verification study, we addressed how varying cue complexity impacts the allocation of spatial endogenous covert attention in space and time. To vary cue complexity gradually, the discriminability of the cue was systematically modulated using a shape-based design. Performance was compared between attended and unattended locations in an orientation-discrimination task. We evaluated the additional temporal costs of processing a more complex cue by comparing performance at two different inter-stimulus intervals. In preliminary data, attention scaled with cue discriminability, even at supra-threshold levels of discriminability. Furthermore, individual cue-processing times partly affected performance for the most complex, but not for simpler, cues. We conclude, first, that cue complexity expressed as discriminability modulates endogenous covert attention at supra-threshold discriminability levels, with increasing benefits and decreasing costs; and second, that it is important to consider the temporal processing costs of complex visual cues.
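The kind of summary this design implies is a per-condition comparison of accuracy at attended versus unattended locations, split by inter-stimulus interval. Below is a minimal sketch of such a summary; the column names and data layout are assumptions, not the study's actual data format.

```python
# Minimal sketch: per-condition accuracy at attended vs. unattended locations
# and the cueing effect (attended minus unattended), split by ISI.
# Column names are illustrative assumptions.
import pandas as pd

def cueing_effects(trials: pd.DataFrame) -> pd.DataFrame:
    """`trials` columns: participant, cue_discriminability, isi_ms,
    attended (bool), correct (bool)."""
    acc = (trials
           .groupby(["participant", "cue_discriminability", "isi_ms", "attended"])
           ["correct"].mean()
           .unstack("attended"))                   # columns: False, True
    acc["cueing_effect"] = acc[True] - acc[False]  # attended minus unattended
    return acc.reset_index()
```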

4.
Brain Sci ; 11(3)2021 Feb 25.
Article in English | MEDLINE | ID: mdl-33669081

ABSTRACT

Visual search becomes challenging when the time to find the target is limited. Here we focus on how performance in visual search can be improved via a subtle, saliency-aware modulation of the scene. Specifically, we investigate whether blurring salient regions of the scene can improve participants' ability to find the target faster when the target is located in non-salient areas. Real-world omnidirectional images were displayed in virtual reality with a search target overlaid on the visual scene at a pseudorandom location. Participants performed a visual search task in three conditions defined by blur strength, with the goal of finding the target as fast as possible. The mean search time, and the proportion of trials in which participants failed to find the target, were compared across conditions. Furthermore, the number and duration of fixations were evaluated. A significant effect of blur on behavioral and fixation metrics was found using linear mixed models. This study shows that it is possible to improve performance through a subtle, saliency-aware scene modulation in a challenging, realistic visual search scenario. The current work provides insight into potential visual augmentation designs that aim to improve users' performance in everyday visual search tasks.
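The analysis described above calls for a linear mixed model with blur condition as a fixed effect and participant as a grouping factor. The following is a hedged sketch of such a model; the column names and the exact model specification are assumptions, not the authors' published analysis code.

```python
# Hedged sketch of a linear mixed model like the one described: search time as
# a function of blur condition, with a random intercept per participant.
# Column names and model specification are assumptions.
import pandas as pd
import statsmodels.formula.api as smf

def fit_search_time_model(trials: pd.DataFrame):
    """`trials` columns: participant, blur_condition (categorical, 3 levels),
    search_time_s (float, successful trials only)."""
    model = smf.mixedlm("search_time_s ~ C(blur_condition)",
                        data=trials,
                        groups=trials["participant"])
    return model.fit()

# result = fit_search_time_model(trials)
# print(result.summary())  # fixed-effect estimates for each blur condition
```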
