Results 1 - 12 of 12
2.
Accid Anal Prev ; 184: 107010, 2023 May.
Article in English | MEDLINE | ID: mdl-36806077

ABSTRACT

While the negative effects of alcohol on driving performance are undisputed, it is unclear how driver attention, eye movements and visual information sampling are affected by alcohol consumption. A simulator study with 35 participants was conducted to investigate whether and how a driver's level of attention is related to self-paced non-driving related task (NDRT) engagement and tactical aspects of undesirable driver behaviour under increasing levels of breath alcohol concentration (BrAC) up to 1.0 ‰. Increasing BrAC levels led to more frequent speeding, short time headways and weaving, and higher NDRT engagement. Instantaneous distraction events became more frequent, with more and longer glances at the NDRT and a general decline in visual attention to the forward roadway. With alcohol, the compensatory behaviour typically seen when drivers engage in NDRTs did not appear. These findings support the theory that alcohol reduces the ability to shift attention between multiple tasks. In conclusion, the independent reduction in safety margins, combined with impaired attention and an increased willingness to engage in NDRTs, is likely the reason behind the increased crash risk when driving under the influence of alcohol.


Subject(s)
Accidents, Traffic; Automobile Driving; Humans; Accidents, Traffic/prevention & control; Time Factors; Eye Movements; Alcohol Drinking/adverse effects
3.
Behav Res Methods ; 55(4): 1653-1714, 2023 Jun.
Article in English | MEDLINE | ID: mdl-35715615

ABSTRACT

Detecting eye movements in raw eye-tracking data is a well-established research area in its own right, as well as a common pre-processing step before any subsequent analysis. As in any field, however, progress and successful collaboration can only be achieved given a shared understanding of the pursued goal. This is often formalised by defining metrics that express the quality of an approach to solving the posed problem. Both the big-picture intuition behind the evaluation strategies and seemingly small implementation details influence the resulting measures, making even studies with outwardly similar procedures essentially incomparable and impeding a common understanding. In this review, we systematically describe and analyse the evaluation methods and measures employed in the eye-movement event detection field to date. While recently developed evaluation strategies tend to quantify the detector's mistakes at the level of whole eye-movement events rather than individual gaze samples, they typically do not separate establishing correspondences between true and predicted events from quantifying the discovered errors. In our analysis we separate these two steps where possible, enabling almost arbitrary combinations of them in an evaluation pipeline. We also present the first large-scale empirical analysis of event matching strategies in the literature, examining these various combinations both in practice and in theory. We examine the particular benefits and downsides of the evaluation methods, providing recommendations towards more intuitive and informative assessment. The evaluation strategies on which this work focuses are implemented in a single publicly available library: https://github.com/r-zemblys/EM-event-detection-evaluation
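The distinction the abstract draws, between matching true and predicted events and quantifying the discovered errors, is easiest to see in the simpler sample-level case, where no matching step is needed at all. As a generic illustration (not code from the linked library), Cohen's kappa over per-sample event labels can be computed like this:

```python
import numpy as np

def sample_level_kappa(labels_a, labels_b):
    """Cohen's kappa computed over per-sample event labels.

    Compares two labelings (e.g. ground truth vs. detector output)
    sample by sample, correcting raw agreement for chance."""
    a = np.asarray(labels_a)
    b = np.asarray(labels_b)
    po = np.mean(a == b)  # observed agreement
    # chance agreement: product of per-category marginal frequencies
    cats = np.union1d(a, b)
    pe = sum(np.mean(a == c) * np.mean(b == c) for c in cats)
    if pe == 1.0:  # both labelings constant and identical
        return 1.0
    return (po - pe) / (1.0 - pe)
```

Event-level evaluation adds the extra matching step on top: before any error can be counted, each true event must first be paired with a predicted one, and the choice of pairing rule changes the resulting scores.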


Subject(s)
Eye Movements; Eye-Tracking Technology; Humans
4.
Behav Res Methods ; 55(1): 364-416, 2023 01.
Article in English | MEDLINE | ID: mdl-35384605

ABSTRACT

In this paper, we present a review of how the various aspects of any study using an eye tracker (such as the instrument, methodology, environment, participant, etc.) affect the quality of the recorded eye-tracking data and the obtained eye-movement and gaze measures. We take this review to represent the empirical foundation for reporting guidelines of any study involving an eye tracker. We compare this empirical foundation to five existing reporting guidelines and to a database of 207 published eye-tracking studies. We find that reporting guidelines vary substantially and do not match with actual reporting practices. We end by deriving a minimal, flexible reporting guideline based on empirical research (Section "An empirically based minimal reporting guideline").


Subject(s)
Eye Movements; Eye-Tracking Technology; Humans; Empirical Research
5.
Behav Res Methods ; 54(2): 845-863, 2022 04.
Article in English | MEDLINE | ID: mdl-34357538

ABSTRACT

We empirically investigate the role of small, almost imperceptible balance and breathing movements of the head on the level and colour of noise in data from five commercial video-based P-CR eye trackers. By comparing noise from recordings with completely static artificial eyes to noise from recordings where the artificial eyes are worn by humans, we show that very small head movements increase the level and colouring of the noise in data recorded from all five eye trackers in this study. This increase in noise level is seen not only in the gaze signal, but also in the P and CR signals of the eye trackers that provide these camera-image features. The P and CR signals of the SMI eye trackers correlate strongly during small head movements, but less so or not at all when the head is completely still, indicating that head movements are registered by the P and CR images in the eye camera. By recording with artificial eyes, we can also show that the pupil-size artefact plays no major role in increasing and colouring noise. Our findings add to and replicate the observation by Niehorster et al. (2021) that lowpass filters in video-based P-CR eye trackers colour the data. Irrespective of its source, filters or head movements, coloured noise can be confused with oculomotor drift. We also find that using the default head restriction in the EyeLink 1000+, the EyeLink II and the HiSpeed240 results in noisier data than less restrictive setups. Researchers investigating data quality in eye trackers should consider not using the Gen 2 artificial eye from SR Research / EyeLink: data recorded with this artificial eye are much noisier than data recorded with other artificial eyes, on average 2.2-14.5 times worse across the five eye trackers.


Subject(s)
Eye Movements; Head Movements; Color; Data Accuracy; Eye, Artificial; Humans
6.
Accid Anal Prev ; 153: 106058, 2021 Apr.
Article in English | MEDLINE | ID: mdl-33640613

ABSTRACT

The objective of this study was to compare the development of sleepiness during manual driving versus level 2 partially automated driving, when driving on a motorway in Sweden. The hypothesis was that partially automated driving will lead to higher levels of fatigue due to underload. Eighty-nine drivers were included in the study using a 2 × 2 design with the conditions manual versus partially automated driving and daytime (full sleep) versus night-time (sleep deprived). The results showed that night-time driving led to markedly increased levels of sleepiness in terms of subjective sleepiness ratings, blink durations, PERCLOS, pupil diameter and heart rate. Partially automated driving led to slightly higher subjective sleepiness ratings, longer blink durations, decreased pupil diameter, slower heart rate, and higher EEG alpha and theta activity. However, elevated levels of sleepiness mainly arose from the night-time drives when the sleep pressure was high. During daytime, when the drivers were alert, partially automated driving had little or no detrimental effects on driver fatigue. Whether the negative effects of increased sleepiness during partially automated driving can be compensated by the positive effects of lateral and longitudinal driving support needs to be investigated in further studies.
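PERCLOS, one of the sleepiness indicators listed above, is conventionally defined as the proportion of time the eyelids are at least 80 % closed. A minimal sketch of that standard definition follows; the 0-1 openness scale and the exact threshold are illustrative assumptions, not the study's implementation:

```python
def perclos(eyelid_openness, closed_threshold=0.2):
    """PERCLOS: fraction of samples in which the eye is at least
    80% closed, i.e. openness at or below `closed_threshold`.

    `eyelid_openness` is assumed normalised to the range
    0 (fully closed) .. 1 (fully open)."""
    closed = [o <= closed_threshold for o in eyelid_openness]
    return sum(closed) / len(closed)
```

For example, `perclos([1.0, 0.9, 0.1, 0.0])` returns 0.5, since the eye is nearly closed in two of the four samples.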


Subject(s)
Sleepiness; Accidents, Traffic/prevention & control; Automobile Driving; Humans; Sweden; Wakefulness
7.
Behav Res Methods ; 53(1): 311-324, 2021 02.
Article in English | MEDLINE | ID: mdl-32705655

ABSTRACT

Eye trackers are sometimes used to study miniature eye movements, such as drift, that occur while observers fixate a static location on a screen. Specifically, such eye-tracking data can be analysed by examining the temporal spectral composition of the recorded gaze position signal, allowing its color to be assessed. However, not only rotations of the eyeball but also filters in the eye tracker may affect the signal's spectral color. Here, we therefore ask whether colored, as opposed to white, signal dynamics in eye-tracking recordings reflect fixational eye movements, or whether they are instead largely due to filters. We recorded gaze position data with five eye trackers from four pairs of human eyes performing fixation sequences, and also from artificial eyes. We examined the spectral color of the gaze position signals produced by the eye trackers, both with their filters switched on and for unfiltered data. We found that while filtered data recorded from both human and artificial eyes were colored for all eye trackers, for most eye trackers the signal was white when examining both unfiltered human and unfiltered artificial-eye data. These results suggest that color in the eye-movement recordings was due to filters for all eye trackers except the most precise one, where it may partly reflect fixational eye movements. Researchers studying fixational eye movements should therefore examine the properties of the filters in their eye tracker to ensure they are studying eyeball rotation and not filter properties.
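Whether a recorded signal is white or colored can be judged from the slope of its power spectrum in log-log coordinates: roughly 0 for white noise, increasingly negative for colored (pink, brown) noise. A generic sketch of such a check, not the authors' analysis code:

```python
import numpy as np

def spectral_slope(signal, fs=1000.0):
    """Estimate the slope of the power spectrum in log-log space.

    A slope near 0 indicates white noise; increasingly negative
    slopes indicate colored noise (about -2 for Brownian noise)."""
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()                       # remove DC offset
    psd = np.abs(np.fft.rfft(x)) ** 2      # periodogram
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    # skip the zero-frequency bin before taking logarithms
    logf = np.log10(freqs[1:])
    logp = np.log10(psd[1:] + 1e-30)
    slope, _ = np.polyfit(logf, logp, 1)   # linear fit in log-log space
    return slope
```

Applied to white Gaussian noise this yields a slope near zero, while its cumulative sum (a random walk, i.e. Brownian noise) yields a clearly negative slope.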


Subject(s)
Eye Movements; Eye-Tracking Technology; Color; Eye, Artificial; Fixation, Ocular; Humans; Research Personnel
9.
Behav Res Methods ; 52(6): 2515-2534, 2020 12.
Article in English | MEDLINE | ID: mdl-32472501

ABSTRACT

The magnitude of variation in the gaze position signals recorded by an eye tracker, also known as its precision, is an important aspect of an eye tracker's data quality. However, data quality of eye-tracking signals is still poorly understood. In this paper, we therefore investigate the following: (1) How do the various available measures characterizing eye-tracking data during fixation relate to each other? (2) How are they influenced by signal type? (3) What type of noise should be used to augment eye-tracking data when evaluating eye-movement analysis methods? To support our analysis, this paper presents new measures to characterize signal type and signal magnitude based on RMS-S2S and STD, two established measures of precision. Simulations are performed to investigate how each of these measures depends on the number of gaze position samples over which they are calculated, and to reveal how RMS-S2S and STD relate to each other and to measures characterizing the temporal spectrum composition of the recorded gaze position signal. Further empirical investigations were performed using gaze position data recorded with five eye trackers from human and artificial eyes. We found that although the examined eye trackers produce gaze position signals with different characteristics, the relations between precision measures derived from simulations are borne out by the data. We furthermore conclude that data with a range of signal type values should be used to assess the robustness of eye-movement analysis methods. We present a method for generating artificial eye-tracker noise of any signal type and magnitude.
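RMS-S2S and STD, the two established precision measures the paper builds on, can be sketched as follows for a one-dimensional gaze signal (a generic illustration, not the authors' implementation):

```python
import numpy as np

def rms_s2s(gaze):
    """Root mean square of sample-to-sample displacements:
    sensitive to how fast the signal wiggles between samples."""
    d = np.diff(np.asarray(gaze, dtype=float))
    return np.sqrt(np.mean(d ** 2))

def std_precision(gaze):
    """Standard deviation of gaze positions around their mean:
    reflects only the overall dispersion of the samples."""
    g = np.asarray(gaze, dtype=float)
    return np.sqrt(np.mean((g - g.mean()) ** 2))
```

Because RMS-S2S depends on sample-to-sample changes while STD does not, the two respond differently to the temporal spectral composition of the signal; the relation between them is one way to characterize signal type, as the paper investigates.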


Subject(s)
Eye Movements; Eye-Tracking Technology; Data Accuracy; Data Collection; Eye, Artificial; Fixation, Ocular; Humans
10.
Behav Res Methods ; 51(1): 451-452, 2019 02.
Article in English | MEDLINE | ID: mdl-30251005

ABSTRACT

It has come to our attention that the section "Post-processing: Labeling final events" on page 167 of "Using Machine Learning to Detect Events in Eye-Tracking Data" (Zemblys, Niehorster, Komogortsev, & Holmqvist, 2018) contains an erroneous description of the process by which post-processing was performed.

11.
Behav Res Methods ; 51(2): 840-864, 2019 04.
Article in English | MEDLINE | ID: mdl-30334148

ABSTRACT

Existing event detection algorithms for eye-movement data almost exclusively rely on thresholding one or more hand-crafted signal features, each computed from the stream of raw gaze data, and this thresholding is largely left to the end user. Here we present gazeNet, a new framework for creating event detectors that require neither hand-crafted signal features nor signal thresholding. It employs an end-to-end deep learning approach that takes raw eye-tracking data as input and classifies it into fixations, saccades and post-saccadic oscillations. Our method thereby challenges the established tacit assumption that hand-crafted features are necessary in the design of event detection algorithms. The downside of the deep learning approach is that a large amount of training data is required. We therefore first develop a method to augment hand-coded data, so that we can strongly enlarge the training set while minimizing the time spent on manual coding. Using this extended hand-coded data, we train a neural network that produces eye-movement event classifications from raw eye-movement data without requiring any predefined feature extraction or post-processing steps. The resulting classification performance is at the level of expert human coders. Moreover, an evaluation of gazeNet on two other datasets showed that it generalizes to data from different eye trackers and consistently outperforms several other event detection algorithms that we tested.


Subject(s)
Behavioral Research/methods; Eye Movements; Neural Networks, Computer; Algorithms; Humans; Saccades; Task Performance and Analysis
12.
Behav Res Methods ; 50(1): 160-181, 2018 02.
Article in English | MEDLINE | ID: mdl-28233250

ABSTRACT

Event detection is a challenging stage in eye-movement data analysis. A major drawback of current event detection methods is that their parameters have to be adjusted to the quality of the eye-movement data. Here we show that a fully automated classification of raw gaze samples as belonging to fixations, saccades, or other oculomotor events can be achieved using a machine-learning approach. Any events that have already been detected manually or algorithmically can be used to train a classifier to produce a similar classification of other data, without the need for the user to set parameters. In this study, we explore the application of the random forest machine-learning technique to the detection of fixations, saccades, and post-saccadic oscillations (PSOs). To demonstrate the practical utility of the proposed method for applications that employ eye-movement classification algorithms, we provide an example in which the method is used in an eye-movement-driven biometric application. We conclude that machine-learning techniques lead to superior detection compared to current state-of-the-art event detection algorithms and can reach the performance of manual coding.
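As a toy illustration of the general approach (not the authors' feature set, training data, or implementation), a random forest can be trained on existing labels using a single hand-crafted velocity feature to separate fixation-like from saccade-like samples:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def velocity_feature(gaze, fs=1000.0):
    """Per-sample absolute velocity (deg/s), one typical input feature."""
    v = np.abs(np.gradient(np.asarray(gaze, dtype=float))) * fs
    return v.reshape(-1, 1)  # one feature column per sample

# Synthetic training data: slow drift (fixation, label 0) at ~1 deg/s
# versus a fast ramp (saccade, label 1) at ~300 deg/s. In a real
# application the labels would come from hand-coded or algorithmically
# detected events.
fix = np.cumsum(np.full(200, 0.001))
sac = np.cumsum(np.full(200, 0.3))
X = np.vstack([velocity_feature(fix), velocity_feature(sac)])
y = np.array([0] * 200 + [1] * 200)

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Classify new slow-drift samples: the forest labels them as fixation.
pred = clf.predict(velocity_feature(np.cumsum(np.full(50, 0.001))))
```

The point of the learning-based approach is that the velocity threshold separating the classes is induced from the labeled examples rather than set by the user; richer feature sets and more event classes follow the same pattern.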


Subject(s)
Eye Movements/physiology; Machine Learning; Algorithms; Behavioral Research; Biometry/instrumentation; Biometry/methods; Humans; Task Performance and Analysis