Results 1 - 4 of 4
1.
Article in English | MEDLINE | ID: mdl-39255110

ABSTRACT

Eye tracking filters have been shown to improve accuracy of gaze estimation and input for stationary settings. However, their effectiveness during physical movement remains underexplored. In this work, we compare common online filters in the context of physical locomotion in extended reality and propose alterations to improve them for on-the-go settings. We conducted a computational experiment where we simulate performance of the online filters using data on participants attending visual targets located in world-, path-, and two head-based reference frames while standing, walking, and jogging. Our results provide insights into the filters' effectiveness and factors that affect it, such as the amount of noise caused by locomotion and differences in compensatory eye movements, and demonstrate that filters with saccade detection prove most useful for on-the-go settings. We discuss the implications of our findings and conclude with guidance on gaze data filtering for interaction in extended reality.
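This abstract identifies saccade detection as the key ingredient that makes online filters useful on the go. As a minimal sketch of that idea (not one of the filters evaluated in the paper; the smoothing factor and saccade threshold below are hypothetical values), an exponential smoother can reset its state whenever gaze displacement exceeds a saccade threshold, avoiding the lag a plain smoother would introduce after rapid eye movements:

```python
import math

def make_saccade_aware_filter(alpha=0.1, saccade_threshold=2.0):
    """Exponential smoothing that resets on saccades.

    alpha: smoothing factor in (0, 1]; lower values smooth more but lag more.
    saccade_threshold: gaze displacement (in degrees) above which a sample
    is treated as a saccade and the filter state snaps to it.
    """
    state = {"x": None, "y": None}

    def filt(x, y):
        if state["x"] is None:
            # First sample: initialize the filter state.
            state["x"], state["y"] = x, y
        elif math.hypot(x - state["x"], y - state["y"]) > saccade_threshold:
            # Saccade detected: jump to the new fixation instead of smoothing.
            state["x"], state["y"] = x, y
        else:
            # Fixation: smooth out tracker noise.
            state["x"] += alpha * (x - state["x"])
            state["y"] += alpha * (y - state["y"])
        return state["x"], state["y"]

    return filt
```

Practical filters (e.g. the One Euro filter) adapt smoothing continuously to gaze velocity rather than hard-resetting; the hard reset here only illustrates why saccade detection reduces lag during large, fast eye movements.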

2.
J Eye Mov Res ; 17(3), 2024.
Article in English | MEDLINE | ID: mdl-38826772

ABSTRACT

Prior research has shown that sighting eye dominance is a dynamic behavior and dependent on horizontal viewing angle. Virtual reality (VR) offers high flexibility and control for studying eye movement and human behavior, yet eye dominance has not been given significant attention within this domain. In this work, we replicate Khan and Crawford's (2001) original study in VR to confirm their findings within this specific context. Additionally, this study extends its scope to examine alignment with objects presented at greater depth in the visual field. Our results align with the original findings and remain consistent when targets are presented at greater distances in the virtual scene. Using greater target distances presents opportunities to investigate alignment with objects at varying depths, providing greater flexibility for the design of methods that infer eye dominance from interaction in VR.

3.
IEEE Trans Vis Comput Graph ; 29(11): 4740-4750, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37782604

ABSTRACT

This paper presents a head-mounted virtual reality study that compared gaze, head, and controller pointing for selection of dynamically revealed targets. Existing studies on head-mounted 3D interaction have focused on pointing and selection tasks where all targets are visible to the user. Our study compared the effects of screen width (field of view), target amplitude and width, and prior knowledge of target location on modality performance. Results show that gaze and controller pointing are significantly faster than head pointing and that increased screen width only positively impacts performance up to a certain point. We further investigated the applicability of existing pointing models. Our analysis confirmed the suitability of previously proposed two-component models for all modalities while uncovering differences for gaze at known and unknown target positions. Our findings provide new empirical evidence for understanding input with gaze, head, and controller and are significant for applications that extend around the user.
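The "two-component models" mentioned in this abstract weight target amplitude and width with separate coefficients, unlike Fitts' law, which collapses both into a single index of difficulty. As an illustrative sketch (a generic Welford-style two-part formulation, which is an assumption here, not necessarily the exact model the paper fits), such a model can be estimated by least squares:

```python
import numpy as np

def fit_two_component_model(amplitudes, widths, times):
    """Least-squares fit of a two-part pointing model of the form
    T = a + b1*log2(A) - b2*log2(W), where amplitude A and width W
    get separate coefficients (Fitts' law is the special case b1 == b2).
    Returns (a, b1, b2)."""
    X = np.column_stack([
        np.ones(len(times)),        # intercept a
        np.log2(amplitudes),        # amplitude term, coefficient b1
        -np.log2(widths),           # width term, coefficient b2
    ])
    coeffs, *_ = np.linalg.lstsq(X, np.asarray(times), rcond=None)
    return coeffs
```

Fitting b1 and b2 separately is what lets such models expose effects like the gaze differences at known versus unknown target positions reported above, which a single-index model would conflate.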


Subject(s)
Smart Glasses, Virtual Reality, Computer Graphics, Time Factors
4.
IEEE Trans Vis Comput Graph ; 28(11): 3585-3595, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36048981

ABSTRACT

Gaze-based interaction is a fast and ergonomic type of hands-free interaction that is often used with augmented and virtual reality when pointing at targets. Such interaction, however, can be cumbersome whenever user, tracking, or environmental factors cause eye tracking errors. Recent research has suggested that fallback modalities could be leveraged to ensure stable interaction irrespective of the current level of eye tracking error. This work thus presents Weighted Pointer interaction, a collection of error-aware pointing techniques that determine whether pointing should be performed by gaze, a fallback modality, or a combination of the two, depending on the level of eye tracking error that is present. These techniques enable users to point at targets accurately whether eye tracking is accurate or inaccurate. A virtual reality target selection study demonstrated that Weighted Pointer techniques outperformed, and were preferred over, techniques that required manual modality switching.
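The abstract describes blending gaze with a fallback modality according to the current eye-tracking error. As one plausible sketch of such a rule (the linear weighting and the error bounds below are illustrative assumptions, not the techniques evaluated in the paper), the pointer can move from pure gaze toward a pure fallback ray as the estimated error grows:

```python
def weighted_pointer(gaze_xy, fallback_xy, error_deg,
                     min_error=0.5, max_error=3.0):
    """Blend a gaze point and a fallback pointer (e.g. a head ray) by the
    current eye-tracking error estimate in degrees: pure gaze at or below
    min_error, pure fallback at or above max_error, linear blend between.
    The bounds are hypothetical tuning values."""
    if error_deg <= min_error:
        w = 0.0                      # trust gaze fully
    elif error_deg >= max_error:
        w = 1.0                      # fall back entirely
    else:
        w = (error_deg - min_error) / (max_error - min_error)
    x = (1.0 - w) * gaze_xy[0] + w * fallback_xy[0]
    y = (1.0 - w) * gaze_xy[1] + w * fallback_xy[1]
    return (x, y)
```

Because the weight varies continuously with the error estimate, the user never has to switch modalities manually, which is the comparison the study above makes.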


Subject(s)
Computer Graphics, Eye Movements