Deep-SAGA: a deep-learning-based system for automatic gaze annotation from eye-tracking data.
Deane, Oliver; Toth, Eszter; Yeo, Sang-Hoon.
  • Deane O; School of Sport, Exercise and Rehabilitation Sciences, The University of Birmingham, Edgbaston, Birmingham, B15 2TT, UK.
  • Toth E; School of Psychology, The University of Birmingham, Birmingham, UK.
  • Yeo SH; School of Sport, Exercise and Rehabilitation Sciences, The University of Birmingham, Edgbaston, Birmingham, B15 2TT, UK. s.yeo@bham.ac.uk.
Behav Res Methods; 55(3): 1372-1391, 2023 Apr.
Article in English | MEDLINE | ID: mdl-35650384
With continued advancements in portable eye-tracker technology liberating experimenters from the constraints of artificial laboratory designs, researchers can now collect gaze data during real-world, natural navigation. However, the field lacks a robust method for doing so: past approaches relied upon time-consuming manual annotation of eye-tracking data, while previous attempts at automation lack the versatility needed for in-the-wild navigation trials consisting of complex and dynamic scenes. Here, we propose a system capable of informing researchers of where, and on what, a user's gaze is focused at any given time. The system first runs footage recorded by a head-mounted camera through a deep-learning-based object detection algorithm, the Mask Region-based Convolutional Neural Network (Mask R-CNN). The algorithm's output is then combined with frame-by-frame gaze coordinates, measured by an eye-tracking device synchronized with the head-mounted camera, to detect and annotate, without any manual intervention, what the user looked at in each frame of the provided footage. The effectiveness of the methodology was validated by comparing the system's output against that of manual coders: high agreement between the two established the system as the preferable data collection technique, since it processed data at a substantially faster rate than its human counterparts. The system's practicality was further demonstrated in a case study exploring the mediatory effects of gaze behaviors on an environment-driven attentional bias.
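The per-frame matching step the abstract describes lends itself to a short illustration. The Python sketch below is a minimal example, not the authors' implementation: it uses torchvision's pretrained Mask R-CNN to detect instances in a head-camera frame, then tests the synchronized gaze coordinate against each returned instance mask to label what the user was looking at. The function name `annotate_gaze`, the 0.5 score threshold, and the assumption that gaze coordinates are already mapped into the frame's pixel space are all illustrative.

```python
# A minimal sketch, assuming per-frame gaze samples synchronized with the
# head-mounted camera and expressed in the frame's pixel coordinates.
import torch
from torchvision.models.detection import (
    maskrcnn_resnet50_fpn,
    MaskRCNN_ResNet50_FPN_Weights,
)
from torchvision.transforms.functional import to_tensor

weights = MaskRCNN_ResNet50_FPN_Weights.DEFAULT
CATEGORIES = weights.meta["categories"]  # COCO class names
model = maskrcnn_resnet50_fpn(weights=weights).eval()

def annotate_gaze(frame_rgb, gaze_xy, score_thresh=0.5):
    """Return the class name of the detected object whose mask contains
    the gaze point, or None if the gaze falls on unlabelled background."""
    x, y = int(round(gaze_xy[0])), int(round(gaze_xy[1]))
    with torch.no_grad():
        detections = model([to_tensor(frame_rgb)])[0]
    # Detections are sorted by score, so the first match is the
    # highest-confidence instance containing the gaze point.
    for mask, label, score in zip(
        detections["masks"], detections["labels"], detections["scores"]
    ):
        if score < score_thresh:
            continue  # skip low-confidence detections
        # Masks are soft probabilities in [0, 1]; binarize at 0.5.
        if mask[0, y, x] > 0.5:
            return CATEGORIES[int(label)]
    return None

# Usage: pair each frame of footage with its gaze sample, e.g.
# label = annotate_gaze(frame, (gx, gy))
```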

Full text: 1 Database: MEDLINE Main subject: Eye Movements / Deep Learning Study type: Guideline Limits: Humans Language: En Year: 2023 Document type: Article
