Results 1 - 20 of 43
2.
Behav Res Methods ; 55(4): 1513-1536, 2023 Jun.
Article in English | MEDLINE | ID: mdl-35680764

ABSTRACT

Pupil-corneal reflection (P-CR) eye tracking has gained a prominent role in studying dog visual cognition, despite methodological challenges that often lead to lower-quality data than when recording from humans. In the current study, we investigated whether and how dog morphology might interfere with the tracking of P-CR systems, and to what extent such interference, possibly in combination with dog-unique eye-movement characteristics, may undermine data quality and affect eye-movement classification when processed through algorithms. To this end, we conducted an eye-tracking experiment with dogs and humans, investigated incidences of tracking interference, compared blinking behavior, and examined how the differing quality of dog and human data affected the detection and classification of eye-movement events. Our results show that the morphology of the dog face and eye can interfere with the systems' tracking methods, and that dogs blink less often but with longer blinks. Importantly, the lower quality of the dog data led to larger differences in how two different event-detection algorithms classified fixations, indicating that key dependent variables are more susceptible to the choice of algorithm in dog than in human data. Further, two measures of the Nyström & Holmqvist (Behavior Research Methods, 42(4), 188-204, 2010) algorithm showed that dog fixations are less stable and that dog data contain more trials with extreme levels of noise. Our findings call for analyses better adjusted to the characteristics of dog eye-tracking data, and our recommendations should help future dog eye-tracking studies acquire quality data that enable robust comparisons of visual cognition between dogs and humans.


Subjects
Data Accuracy , Eye-Tracking Technology , Humans , Dogs , Animals , Eye Movements , Blinking , Cognition
3.
Behav Res Methods ; 55(1): 364-416, 2023 01.
Article in English | MEDLINE | ID: mdl-35384605

ABSTRACT

In this paper, we present a review of how the various aspects of any study using an eye tracker (such as the instrument, methodology, environment, participant, etc.) affect the quality of the recorded eye-tracking data and the obtained eye-movement and gaze measures. We take this review to represent the empirical foundation for reporting guidelines of any study involving an eye tracker. We compare this empirical foundation to five existing reporting guidelines and to a database of 207 published eye-tracking studies. We find that reporting guidelines vary substantially and do not match actual reporting practices. We end by deriving a minimal, flexible reporting guideline based on empirical research (Section "An empirically based minimal reporting guideline").


Subjects
Eye Movements , Eye-Tracking Technology , Humans , Empirical Research
4.
Behav Res Methods ; 54(2): 845-863, 2022 04.
Article in English | MEDLINE | ID: mdl-34357538

ABSTRACT

We empirically investigate the role of small, almost imperceptible balance and breathing movements of the head on the level and colour of noise in data from five commercial video-based P-CR eye trackers. By comparing noise from recordings with completely static artificial eyes to noise from recordings where the artificial eyes are worn by humans, we show that very small head movements increase the level and colouring of the noise in data recorded from all five eye trackers in this study. This increase in noise level is seen not only in the gaze signal, but also in the P and CR signals of the eye trackers that provide these camera image features. The P and CR signals of the SMI eye trackers correlate strongly during small head movements, but less so, or not at all, when the head is completely still, indicating that head movements are registered by the P and CR images in the eye camera. By recording with artificial eyes, we can also show that the pupil-size artefact has no major role in increasing and colouring noise. Our findings add to and replicate the observation by Niehorster et al. (2021) that lowpass filters in video-based P-CR eye trackers colour the data. Irrespective of its source, filters or head movements, coloured noise can be confused with oculomotor drift. We also find that using the default head restriction in the EyeLink 1000+, the EyeLink II and the HiSpeed240 results in noisier data compared to less head restriction. Researchers investigating data quality in eye trackers should consider not using the Gen 2 artificial eye from SR Research / EyeLink: data recorded with this artificial eye are much noisier than data recorded with other artificial eyes, on average 2.2-14.5 times worse for the five eye trackers.


Subjects
Eye Movements , Head Movements , Color , Data Accuracy , Eye, Artificial , Humans
5.
Behav Res Methods ; 53(5): 2049-2068, 2021 10.
Article in English | MEDLINE | ID: mdl-33754324

ABSTRACT

We present an algorithmic method for aligning recall fixations with encoding fixations, to be used in looking-at-nothing paradigms that either record recall eye movements during silence or want to speed up data analysis with recordings of recall data during speech. The algorithm utilizes a novel consensus-based elastic matching algorithm to estimate which encoding fixations correspond to later recall fixations. This is not a scanpath comparison method, as fixation sequence order is ignored and only position configurations are used. The algorithm has three internal parameters and is reasonably stable over a wide range of parameter values. We then evaluate the performance of our algorithm by investigating whether the recalled objects identified by the algorithm correspond with independent assessments of which objects in the image are marked as subjectively important. Our results show that the mapped recall fixations align well with important regions of the images. This result is exemplified in four groups of use cases: investigating the roles of low-level visual features, faces, signs and text, and people of different sizes in recall of encoded scenes. The plots from these examples corroborate the finding that the algorithm aligns recall fixations with the most likely important regions in the images. The examples also illustrate how the algorithm can differentiate between image objects that have been fixated during silent recall vs. those that have not been visually attended, even though they were fixated during encoding.


Subjects
Fixation, Ocular , Mental Recall , Algorithms , Consensus , Eye Movements , Humans
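The position-only mapping idea in the abstract above can be illustrated with a much simpler stand-in: a plain nearest-neighbour assignment of recall fixations to encoding fixations, ignoring fixation order. This is a hedged sketch of the general idea, not the consensus-based elastic matching algorithm the paper presents; all function and variable names are illustrative.

```python
import math

def map_recall_to_encoding(recall_fixations, encoding_fixations):
    """Map each recall fixation to its nearest encoding fixation.

    Position-only: fixation sequence order is ignored, matching the
    paper's premise. This is a plain nearest-neighbour sketch, NOT the
    consensus-based elastic matching algorithm itself. Fixations are
    (x, y) tuples in the same coordinate frame (e.g. pixels).
    """
    mapping = []
    for rx, ry in recall_fixations:
        # Pick the encoding fixation closest in Euclidean distance.
        best = min(range(len(encoding_fixations)),
                   key=lambda i: math.hypot(encoding_fixations[i][0] - rx,
                                            encoding_fixations[i][1] - ry))
        mapping.append(best)
    return mapping

encoding = [(100, 100), (400, 120), (250, 300)]
recall = [(110, 95), (240, 310)]
print(map_recall_to_encoding(recall, encoding))  # [0, 2]
```

Each recall fixation is assigned the index of the encoding fixation it most plausibly revisits; the actual algorithm refines this with a consensus step over its three internal parameters.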
6.
Behav Res Methods ; 53(1): 311-324, 2021 02.
Article in English | MEDLINE | ID: mdl-32705655

ABSTRACT

Eye trackers are sometimes used to study the miniature eye movements, such as drift, that occur while observers fixate a static location on a screen. Specifically, such eye-tracking data can be analyzed by examining the temporal spectrum composition of the recorded gaze position signal, allowing its color to be assessed. However, not only rotations of the eyeball but also filters in the eye tracker may affect the signal's spectral color. Here, we therefore ask whether colored, as opposed to white, signal dynamics in eye-tracking recordings reflect fixational eye movements, or whether they are instead largely due to filters. We recorded gaze position data with five eye trackers from four pairs of human eyes performing fixation sequences, and also from artificial eyes. We examined the spectral color of the gaze position signals produced by the eye trackers, both with their filters switched on and for unfiltered data. We found that while filtered data recorded from both human and artificial eyes were colored for all eye trackers, for most eye trackers the signal was white when examining both unfiltered human and unfiltered artificial eye data. These results suggest that the color in the eye-movement recordings was due to filters for all eye trackers except the most precise one, where it may partly reflect fixational eye movements. As such, researchers studying fixational eye movements should carefully examine the properties of the filters in their eye tracker to ensure they are studying eyeball rotation and not filter properties.


Subjects
Eye Movements , Eye-Tracking Technology , Color , Eye, Artificial , Fixation, Ocular , Humans , Research Personnel
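The notion of spectral "color" in the abstract above can be made concrete: white noise has a flat power spectrum (log-log slope near 0), while colored noise, such as the brown-ish dynamics of drift, has a negative slope. The sketch below estimates that slope with a naive DFT and a least-squares fit; it illustrates the concept only and is not the authors' analysis pipeline.

```python
import cmath
import itertools
import math
import random

def psd_slope(signal):
    """Estimate the log-log slope of a signal's periodogram.

    A slope near 0 indicates white noise; increasingly negative
    slopes indicate 'colored' (e.g. pink or brown) noise.
    Naive O(n^2) DFT, fine for short demonstration signals.
    """
    n = len(signal)
    mean = sum(signal) / n
    centered = [s - mean for s in signal]
    log_f, log_p = [], []
    for k in range(1, n // 2):  # skip the DC and Nyquist bins
        coef = sum(centered[t] * cmath.exp(-2j * math.pi * k * t / n)
                   for t in range(n))
        log_f.append(math.log(k))
        log_p.append(math.log(abs(coef) ** 2 + 1e-300))
    # Ordinary least-squares slope of log power vs. log frequency.
    mf = sum(log_f) / len(log_f)
    mp = sum(log_p) / len(log_p)
    num = sum((f - mf) * (p - mp) for f, p in zip(log_f, log_p))
    den = sum((f - mf) ** 2 for f in log_f)
    return num / den

random.seed(1)
white = [random.gauss(0, 1) for _ in range(512)]  # slope near 0
brown = list(itertools.accumulate(white))         # integrated noise, slope near -2
print(psd_slope(white), psd_slope(brown))
```

The integrated (brown) signal shows a clearly negative slope while the white signal stays near 0, which is the distinction the paper draws between filter-induced coloring and genuinely white sensor noise.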
8.
Sci Rep ; 10(1): 13035, 2020 08 03.
Article in English | MEDLINE | ID: mdl-32747683

ABSTRACT

When retrieving an image from memory, humans usually move their eyes spontaneously as if the image were in front of them. Such eye movements correlate strongly with the spatial layout of the recalled image content and function as memory cues that facilitate retrieval. However, it has so far been unclear how closely such imagery eye movements correlate with the eye movements made while looking at the original image. In this work, we first quantify the similarity of eye movements between recalling an image and encoding the same image, and then investigate whether comparing such pairs of eye movements can be used for computational image retrieval. Our results show that computational image retrieval based on eye movements during spontaneous imagery is feasible. Furthermore, we show that such a retrieval approach can be generalized to unseen images.


Subjects
Fixation, Ocular/physiology , Imagery, Psychotherapy , Mental Recall , Neural Networks, Computer , Adult , Area Under Curve , Eye Movements/physiology , Female , Humans , Male , Photic Stimulation , ROC Curve
9.
Behav Res Methods ; 52(6): 2515-2534, 2020 12.
Article in English | MEDLINE | ID: mdl-32472501

ABSTRACT

The magnitude of variation in the gaze position signals recorded by an eye tracker, also known as its precision, is an important aspect of an eye tracker's data quality. However, data quality of eye-tracking signals is still poorly understood. In this paper, we therefore investigate the following: (1) How do the various available measures characterizing eye-tracking data during fixation relate to each other? (2) How are they influenced by signal type? (3) What type of noise should be used to augment eye-tracking data when evaluating eye-movement analysis methods? To support our analysis, this paper presents new measures to characterize signal type and signal magnitude based on RMS-S2S and STD, two established measures of precision. Simulations are performed to investigate how each of these measures depends on the number of gaze position samples over which they are calculated, and to reveal how RMS-S2S and STD relate to each other and to measures characterizing the temporal spectrum composition of the recorded gaze position signal. Further empirical investigations were performed using gaze position data recorded with five eye trackers from human and artificial eyes. We found that although the examined eye trackers produce gaze position signals with different characteristics, the relations between precision measures derived from simulations are borne out by the data. We furthermore conclude that data with a range of signal type values should be used to assess the robustness of eye-movement analysis methods. We present a method for generating artificial eye-tracker noise of any signal type and magnitude.


Subjects
Eye Movements , Eye-Tracking Technology , Data Accuracy , Data Collection , Eye, Artificial , Fixation, Ocular , Humans
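The two established precision measures discussed above, RMS-S2S and STD, can be sketched in a few lines. These are common textbook definitions (STD is computed here as the root of the summed per-axis variances; exact formulations vary slightly across the literature), not code from the paper.

```python
import math

def rms_s2s(xs, ys):
    """Root mean square of sample-to-sample distances (precision)."""
    d2 = [(x2 - x1) ** 2 + (y2 - y1) ** 2
          for (x1, y1), (x2, y2) in zip(zip(xs, ys), zip(xs[1:], ys[1:]))]
    return math.sqrt(sum(d2) / len(d2))

def std_precision(xs, ys):
    """STD precision: dispersion of samples around their mean position.

    Computed here as sqrt(var_x + var_y); definitions differ slightly
    between authors, so treat this as one common variant.
    """
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    var_x = sum((x - mx) ** 2 for x in xs) / len(xs)
    var_y = sum((y - my) ** 2 for y in ys) / len(ys)
    return math.sqrt(var_x + var_y)

# A slowly drifting sample: small step-to-step jumps (low RMS-S2S)
# but a larger overall spread (higher STD) -- the signal-type
# distinction the paper builds its measures on.
xs = [0.0, 0.1, 0.2, 0.3, 0.4]
ys = [0.0, 0.0, 0.0, 0.0, 0.0]
print(rms_s2s(xs, ys))        # approximately 0.1
print(std_precision(xs, ys))  # approximately 0.141
```

The ratio of these two measures is one way to characterize whether a fixation signal looks more like drift (RMS-S2S well below STD) or like white noise (the two roughly comparable).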
10.
Behav Res Methods ; 52(5): 2098-2121, 2020 10.
Article in English | MEDLINE | ID: mdl-32206998

ABSTRACT

For evaluating whether an eye-tracker is suitable for measuring microsaccades, Poletti & Rucci (2016) propose that a measure called 'resolution' could be better than the more established root mean square of the sample-to-sample distances (RMS-S2S). Many open questions exist around the resolution measure, however. Resolution needs to be calculated using data from an artificial eye that can be turned in very small steps. Furthermore, resolution has an unclear and uninvestigated relationship to the RMS-S2S and STD (standard deviation) measures of precision (Holmqvist & Andersson, 2017, pp. 159-190), and there is another metric by the same name (Clarke, Ditterich, Drüen, Schönfeld, & Steineke, 2002), which instead quantifies the errors of amplitude measurements. In this paper, we present a mechanism, the Stepperbox, for rotating artificial eyes in arbitrary angles from 1' (arcmin) upward. We then use the Stepperbox to find the minimum reliably detectable rotations in 11 video-based eye-trackers (VOGs) and the Dual Purkinje Imaging (DPI) tracker. We find that resolution correlates significantly with RMS-S2S and, to a lesser extent, with STD. In addition, we find that although most eye-trackers can detect some small rotations of an artificial eye, rotations with amplitudes up to 2° are frequently measured erroneously by video-based eye-trackers. We show evidence that the corneal reflection (CR) feature of these eye-trackers is a major cause of erroneous measurements of small rotations of artificial eyes. Our data strengthen the existing body of evidence that video-based eye-trackers produce errors that may require us to reconsider some results from research on reading, microsaccades, and vergence, where the amplitude of small eye movements has been measured with past or current video-based eye-trackers. In contrast, the DPI reports correct rotation amplitudes down to 1'.


Subjects
Eye Movements , Eye, Artificial , Eye-Tracking Technology , Video Recording , Data Collection , Humans
11.
J Eye Mov Res ; 12(8)2020 Feb 05.
Article in English | MEDLINE | ID: mdl-33828775

ABSTRACT

The eye movements of a species reflect the visual behavioral strategy it has adopted during its evolution. What are the eye movements of domestic dogs (Canis lupus familiaris) like? Dog eye movements per se have not been investigated, despite the increasing number of visuo-cognitive studies in dogs using eye-tracking systems. To fill this gap, we recorded dog eye movements using a video-based eye-tracking system and compared the dog data to those of humans. We found that dog saccades follow the systematic relationships between saccade metrics previously shown in humans and other animal species. Yet the details of these relationships, and the values of each saccade and fixation metric, differed between dogs and humans. Overall, dog saccades were slower and fixations were longer than those of humans. We hope our findings contribute to existing comparative analyses of eye movements across animal species, and also to the improvement of algorithms used for classifying dog eye-movement data.

12.
Atten Percept Psychophys ; 81(3): 666-683, 2019 Apr.
Article in English | MEDLINE | ID: mdl-30593653

ABSTRACT

Although in real life people frequently perform visual search together, in lab experiments this social dimension is typically left out. Here, we investigate individual, collaborative and competitive visual search with visualization of search partners' gaze. Participants were instructed to search a grid of Gabor patches while being eye tracked. For collaboration and competition, searchers were shown in real time at which element the paired searcher was looking. To promote collaboration or competition, points were rewarded or deducted for correct or incorrect answers. Early in collaboration trials, searchers rarely fixated the same elements. Reaction times of couples were roughly halved compared with individual search, although error rates did not increase. This indicates searchers formed an efficient collaboration strategy. Overlap, the proportion of dwells that landed on elements that the other searcher had already looked at, was lower than expected from simulated overlap of two searchers who are blind to the behavior of their partner. The proportion of overlapping dwells correlated positively with ratings of the quality of collaboration. During competition, overlap increased earlier in time, indicating that competitors divided space less efficiently. Analysis of the entropy of the dwell locations and scan paths revealed that in the competition condition, a less fixed looking pattern was exhibited than in the collaborative and individual search conditions. We conclude that participants can efficiently search together when provided only with information about their partner's gaze position by dividing up the search space. Competing search exhibited more random gaze patterns, potentially reflecting increased interaction between searchers in this condition.


Subjects
Competitive Behavior/physiology , Eye Movements/physiology , Social Behavior , Adult , Female , Humans , Male , Reaction Time , Reward , Young Adult
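The entropy analysis of dwell locations mentioned above can be illustrated with a standard Shannon-entropy computation over the distribution of dwells across display elements. This is a generic sketch of the measure, not the study's actual analysis code; element labels and data are invented.

```python
import math
from collections import Counter

def dwell_entropy(dwell_elements):
    """Shannon entropy (in bits) of the distribution of dwells over
    display elements.

    Higher entropy means a less fixed, more spread-out looking
    pattern; lower entropy means dwells concentrate on few elements.
    """
    counts = Counter(dwell_elements)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total)
                for c in counts.values())

# A searcher spreading dwells evenly vs. one revisiting a few elements:
spread = ["A", "B", "C", "D", "E", "F", "G", "H"]
focused = ["A", "A", "A", "A", "B", "B", "A", "A"]
print(dwell_entropy(spread))   # 3.0 bits (uniform over 8 elements)
print(dwell_entropy(focused))  # about 0.81 bits
```

On this measure, the competition condition's "less fixed looking pattern" would show up as higher dwell-location entropy than the collaborative and individual conditions.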
13.
Behav Res Methods ; 51(2): 840-864, 2019 04.
Article in English | MEDLINE | ID: mdl-30334148

ABSTRACT

Existing event detection algorithms for eye-movement data almost exclusively rely on thresholding one or more hand-crafted signal features, each computed from the stream of raw gaze data. Moreover, this thresholding is largely left to the end user. Here we present and develop gazeNet, a new framework for creating event detectors that do not require hand-crafted signal features or signal thresholding. It employs an end-to-end deep learning approach, which takes raw eye-tracking data as input and classifies it into fixations, saccades, and post-saccadic oscillations. Our method thereby challenges an established tacit assumption that hand-crafted features are necessary in the design of event detection algorithms. The downside of the deep learning approach is that a large amount of training data is required. We therefore first develop a method to augment hand-coded data, so that we can greatly enlarge the data set used for training, minimizing the time spent on manual coding. Using this extended hand-coded data, we train a neural network that produces eye-movement event classification from raw eye-movement data without requiring any predefined feature extraction or post-processing steps. The resulting classification performance is at the level of expert human coders. Moreover, an evaluation of gazeNet on two other datasets showed that gazeNet generalized to data from different eye trackers and consistently outperformed several other event detection algorithms that we tested.


Subjects
Behavioral Research/methods , Eye Movements , Neural Networks, Computer , Algorithms , Humans , Saccades , Task Performance and Analysis
14.
Behav Res Methods ; 51(1): 451-452, 2019 02.
Article in English | MEDLINE | ID: mdl-30251005

ABSTRACT

It has come to our attention that the section "Post-processing: Labeling final events" on page 167 of "Using Machine Learning to Detect Events in Eye-Tracking Data" (Zemblys, Niehorster, Komogortsev, & Holmqvist, 2018) contains an erroneous description of the process by which post-processing was performed.

15.
J Eye Mov Res ; 12(4)2019 Sep 09.
Article in English | MEDLINE | ID: mdl-33828744

ABSTRACT

The point of interest in three-dimensional space in eye tracking is often computed by intersecting the lines of sight with scene geometry, or by finding the point closest to the two lines of sight. We begin with a theoretical analysis based on synthetic simulations. We show that the mean point of vergence is generally biased for centrally symmetric errors and that the bias depends on the horizontal vs. vertical noise distribution of the tracked eye positions. Our analysis continues with an evaluation on real experimental data. The estimated mean vergence points contain different errors across individuals, but they generally show the same bias towards the observer, and this bias tends to grow with viewing distance. We also provide a recipe to minimize the bias, which applies to general computations of gaze estimation under projection. These findings have implications for choosing the calibration method in eye-tracking experiments and for interpreting the observed eye-movement data; they also suggest that the mathematical models of calibration should be considered part of the experiment.
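The "point closest to the two lines of sight" computation referred to above is a standard piece of 3D geometry: find the shortest segment between the two gaze rays and take its midpoint. The sketch below shows only that construction; it does not reproduce the paper's bias analysis, and all names and the example geometry are illustrative.

```python
def vergence_point(p1, d1, p2, d2):
    """Midpoint of the shortest segment between two lines of sight.

    Each line is an eye position p and a gaze direction d in 3D.
    Standard closest-point-between-two-lines construction; assumes
    the lines are not parallel (denom != 0).
    """
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def sub(a, b): return [x - y for x, y in zip(a, b)]
    def add(a, b): return [x + y for x, y in zip(a, b)]
    def scale(a, k): return [x * k for x in a]

    w = sub(p1, p2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w), dot(d2, w)
    denom = a * c - b * b            # zero when the lines are parallel
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    q1 = add(p1, scale(d1, t))       # closest point on line 1
    q2 = add(p2, scale(d2, s))       # closest point on line 2
    return scale(add(q1, q2), 0.5)   # midpoint = estimated vergence point

# Two eyes 6 cm apart, both aimed at a target 40 cm straight ahead:
vp = vergence_point([-3.0, 0.0, 0.0], [3.0, 0.0, 40.0],
                    [3.0, 0.0, 0.0], [-3.0, 0.0, 40.0])
print(vp)  # [0.0, 0.0, 40.0]
```

With noise-free directions the two rays intersect and the midpoint is exact; the paper's point is that with noisy, asymmetrically distributed eye positions, the mean of such estimates is systematically biased towards the observer.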

16.
Vision Res ; 149: 9-23, 2018 08.
Article in English | MEDLINE | ID: mdl-29857021

ABSTRACT

More and more researchers are considering the omnibus eye-movement sequence, the scanpath, in their studies of visual and cognitive processing (e.g. Hayes, Petrov, & Sederberg, 2011; Madsen, Larson, Loschky, & Rebello, 2012; Ni et al., 2011; von der Malsburg & Vasishth, 2011). However, it remains unclear how recent methods for comparing scanpaths perform in experiments producing variable scanpaths, and whether these methods supplement more traditional analyses of individual oculomotor statistics. We address this problem for MultiMatch (Jarodzka et al., 2010; Dewhurst et al., 2012), evaluating its performance with a visual search-like task in which participants must fixate a series of target numbers in a prescribed order. This task should produce predictable sequences of fixations and thus provide a testing ground for scanpath measures. Task difficulty was manipulated by making the targets more or less visible through changes in font and the presence of distractors or visual noise. These changes in task demands led to slower search and more fixations. Importantly, they also resulted in a reduction in between-subjects scanpath similarity, demonstrating that participants' gaze patterns became more heterogeneous in terms of saccade length and angle, and fixation position. This implies a divergent strategy or random component to eye-movement behaviour, which increases as the task becomes more difficult. Interestingly, the duration of fixations along aligned vectors showed the opposite pattern, becoming more similar between observers in two of the three difficulty manipulations. This provides important information for vision scientists who may wish to use scanpath metrics to quantify variations in gaze across a spectrum of perceptual and cognitive tasks.


Subjects
Attention/physiology , Eye Movements/physiology , Adult , Analysis of Variance , Female , Fixation, Ocular/physiology , Humans , Male , Perceptual Masking/physiology , Photic Stimulation/methods , Young Adult
17.
Cognition ; 175: 53-68, 2018 06.
Article in English | MEDLINE | ID: mdl-29471198

ABSTRACT

When recalling something you have previously read, to what degree will such episodic remembering activate a situation model of described events versus a memory representation of the text itself? The present study was designed to address this question by recording eye movements of participants who recalled previously read texts while looking at a blank screen. An accumulating body of research has demonstrated that spontaneous eye movements occur during episodic memory retrieval and that fixation locations from such gaze patterns to a large degree overlap with the visuospatial layout of the recalled information. Here we used this phenomenon to investigate to what degree participants' gaze patterns corresponded with the visuospatial configuration of the text itself versus a visuospatial configuration described in it. The texts to be recalled were scene descriptions, where the spatial configuration of the scene content was manipulated to be either congruent or incongruent with the spatial configuration of the text itself. Results show that participants' gaze patterns were more likely to correspond with a visuospatial representation of the described scene than with a visuospatial representation of the text itself, but also that the contribution of those representations of space is sensitive to the text content. This is the first demonstration that eye movements can be used to determine at which representational level texts are remembered, and the findings provide novel insight into the underlying dynamics at play.


Subjects
Eye Movements/physiology , Memory, Episodic , Mental Recall/physiology , Female , Humans , Male , Reading
18.
J Eye Mov Res ; 13(4)2018 Sep 14.
Article in English | MEDLINE | ID: mdl-33828804

ABSTRACT

Reading students' faces and their body language, checking their worksheets, and keeping eye contact are key traits of teacher competence. The new technology of mobile eye-tracking provides researchers with possibilities to explore teaching from the viewpoint of teacher gaze, but also introduces many new methodological questions. The primary aim of this study was to investigate teachers' distribution of attention over space: the number and durations of several types of their gazes, and how their gaze depends on the factors of students' gender, achievement, and position in the classroom. Results show that teacher gaze was distributed unevenly across both space and time. Teachers looked at the most-watched students 3-8 times more often than at the least-watched ones. Students sitting in the first row and the middle section received significantly more gaze than those sitting outside this zone. All three teachers made more single gaze visits (looking at the students but making no eye contact) than mutual gazes or student material gazes. The three teachers' gaze distribution also varied substantially from lesson to lesson. Our results are important for understanding teacher behavior in real classrooms, but also point to the relevance of appropriate method design in future classroom studies with eye-tracking.

19.
Behav Res Methods ; 50(1): 213-227, 2018 02.
Article in English | MEDLINE | ID: mdl-28205131

ABSTRACT

The marketing materials of remote eye-trackers suggest that data quality is invariant to the position and orientation of the participant as long as the eyes of the participant are within the eye-tracker's headbox, the area where tracking is possible. As such, remote eye-trackers are marketed as allowing the reliable recording of gaze from participant groups that cannot be restrained, such as infants, schoolchildren and patients with muscular or brain disorders. Practical experience and previous research, however, tell us that eye-tracking data quality, e.g. the accuracy of the recorded gaze position and the amount of data loss, deteriorates (compared to well-trained participants in chinrests) when the participant is unrestrained and assumes a non-optimal pose in front of the eye-tracker. How then can researchers working with unrestrained participants choose an eye-tracker? Here we investigated the performance of five popular remote eye-trackers from EyeTribe, SMI, SR Research, and Tobii in a series of tasks where participants took on non-optimal poses. We report that the tested systems varied in the amount of data loss and systematic offsets observed during our tasks. The EyeLink and EyeTribe in particular had large problems. Furthermore, the Tobii eye-trackers reported data for two eyes when only one eye was visible to the eye-tracker. This study provides practical insight into how popular remote eye-trackers perform when recording from unrestrained participants. It furthermore provides a testing method for evaluating whether a tracker is suitable for studying a certain target population, one that manufacturers can also use during the development of new eye-trackers.


Subjects
Eye Movement Measurements , Patient Positioning/methods , Data Accuracy , Eye Movements , Female , Humans , Male , Orientation
20.
Behav Res Methods ; 50(1): 160-181, 2018 02.
Article in English | MEDLINE | ID: mdl-28233250

ABSTRACT

Event detection is a challenging stage in eye movement data analysis. A major drawback of current event detection methods is that parameters have to be adjusted based on eye movement data quality. Here we show that a fully automated classification of raw gaze samples as belonging to fixations, saccades, or other oculomotor events can be achieved using a machine-learning approach. Any already manually or algorithmically detected events can be used to train a classifier to produce similar classification of other data without the need for a user to set parameters. In this study, we explore the application of the random forest machine-learning technique for the detection of fixations, saccades, and post-saccadic oscillations (PSOs). To show the practical utility of the proposed method for applications that employ eye movement classification algorithms, we provide an example where the method is employed in an eye-movement-driven biometric application. We conclude that machine-learning techniques lead to superior detection compared to current state-of-the-art event detection algorithms and can reach the performance of manual coding.


Subjects
Eye Movements/physiology , Machine Learning , Algorithms , Behavioral Research , Biometry/instrumentation , Biometry/methods , Humans , Task Performance and Analysis
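The hand-tuned threshold baseline that machine-learning detectors such as this one (and gazeNet, above) aim to replace can be illustrated with a minimal velocity-threshold (I-VT) classifier. This is a generic textbook sketch of threshold-based event detection, not code from either paper; the data and threshold are invented.

```python
def ivt_classify(xs, ys, timestamps, velocity_threshold):
    """Minimal I-VT event detection: label each inter-sample interval
    as 'saccade' if its velocity exceeds the threshold, else 'fixation'.

    This is the kind of hand-set-threshold baseline that the
    machine-learning detectors discussed above are designed to
    replace. Units are up to the caller (e.g. degrees and seconds
    give a threshold in deg/s).
    """
    labels = []
    for i in range(1, len(xs)):
        dt = timestamps[i] - timestamps[i - 1]
        dist = ((xs[i] - xs[i - 1]) ** 2 + (ys[i] - ys[i - 1]) ** 2) ** 0.5
        velocity = dist / dt
        labels.append("saccade" if velocity > velocity_threshold
                      else "fixation")
    return labels

# 500 Hz samples: a stable fixation, a fast two-sample jump, then
# stability again. Threshold of 30 deg/s is a commonly cited value.
t = [i * 0.002 for i in range(6)]
x = [0.0, 0.01, 0.02, 2.0, 4.0, 4.01]
y = [0.0] * 6
print(ivt_classify(x, y, t, velocity_threshold=30.0))
# ['fixation', 'fixation', 'saccade', 'saccade', 'fixation']
```

The weakness the abstracts above point to is visible even here: the right threshold depends on the noise level of the recording, which is exactly the per-dataset tuning that trained classifiers avoid.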