Results 1 - 20 of 57
1.
Article in English | MEDLINE | ID: mdl-38555550

ABSTRACT

Self-monitoring is essential for effectively regulating learning, but is difficult in visual diagnostic tasks such as radiograph interpretation. Eye-tracking technology can visualize viewing behavior in gaze displays, thereby providing information about visual search and decision-making. We hypothesized that individually adaptive gaze-display feedback improves posttest performance and self-monitoring of medical students who learn to detect nodules in radiographs. In 78 medical students, we investigated the effects of (1) search displays, showing which part of the image the participant had searched, and (2) decision displays, showing which parts of the image received prolonged attention. After a pretest and instruction, participants practiced identifying nodules in 16 cases under search-display, decision-display, or no-feedback conditions (n = 26 per condition). A 10-case posttest, without feedback, was administered to assess learning outcomes. After each case, participants provided self-monitoring and confidence judgments. Afterward, participants reported on self-efficacy, perceived competence, feedback use, and perceived usefulness of the feedback. Bayesian analyses showed no benefits of gaze displays for posttest performance, monitoring accuracy (the absolute difference between participants' estimated and actual test performance), completeness of viewing behavior, self-efficacy, or perceived competence. Participants receiving search displays reported greater feedback utilization than participants receiving decision displays, and found the feedback more useful when the displayed gaze data were precise and accurate. As completeness of search was not related to posttest performance, search displays might not have been sufficiently informative to improve self-monitoring. Information from decision displays was rarely used to inform self-monitoring. Further research should address whether and when gaze displays can support learning.
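The monitoring-accuracy measure the abstract describes (the absolute difference between a participant's estimated and actual test performance) can be sketched as follows. The function name and scoring scale are hypothetical; the study's full analysis was Bayesian and per-condition.

```python
def monitoring_accuracy(estimated_correct, actual_correct):
    """Absolute difference between a participant's estimated and actual
    test performance; 0 means perfectly calibrated self-monitoring."""
    return abs(estimated_correct - actual_correct)

# A participant estimates 7 of 10 posttest cases correct but scored 5:
print(monitoring_accuracy(7, 5))  # -> 2
```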

2.
Behav Res Methods ; 56(4): 3280-3299, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38424292

ABSTRACT

Blinks, the closing and opening of the eyelids, are used in a wide array of fields where human function and behavior are studied. In data from video-based eye trackers, blink rate and duration are often estimated from the pupil-size signal. However, blinks and their parameters can be estimated only indirectly from this signal, since it does not explicitly contain information about the eyelid position. We ask whether blinks detected from an eye openness signal that estimates the distance between the eyelids (EO blinks) are comparable to blinks detected with a traditional algorithm using the pupil-size signal (PS blinks), and how robust blink detection is when data quality is low. In terms of rate, there was an almost perfect overlap between EO and PS blinks (F1 score: 0.98) when the head was in the center of the eye tracker's tracking range, where data quality was high, and a high overlap (F1 score: 0.94) when the head was at the edge of the tracking range, where data quality was worse. When there was a difference in blink rate between EO and PS blinks, it was mainly due to data loss in the pupil-size signal. Blink durations were about 60 ms longer for EO blinks than for PS blinks. Moreover, the dynamics of EO blinks were similar to results from previous literature. We conclude that the eye openness signal, together with our proposed blink detection algorithm, provides an advantageous method to detect and describe blinks in greater detail.
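The principle of detecting blinks from an eye openness signal can be illustrated with a minimal threshold-based sketch: episodes where openness drops below a fraction of its typical open-eye level are counted as blinks. This is a simplified illustration with hypothetical parameter names, not the paper's actual algorithm.

```python
def detect_eo_blinks(openness, fs, threshold_frac=0.5, min_dur_ms=30):
    """Detect blinks as episodes where the eye-openness signal (e.g., eyelid
    distance in mm) falls below a fraction of its median level.
    Returns (onset_sample, offset_sample, duration_ms) tuples.
    Simplified illustration, not the published algorithm."""
    baseline = sorted(openness)[len(openness) // 2]  # median openness
    threshold = threshold_frac * baseline
    blinks, start = [], None
    for i, o in enumerate(list(openness) + [baseline]):  # sentinel closes last episode
        if o < threshold and start is None:
            start = i
        elif o >= threshold and start is not None:
            dur_ms = (i - start) / fs * 1000
            if dur_ms >= min_dur_ms:
                blinks.append((start, i, dur_ms))
            start = None
    return blinks

# 600 Hz signal: eyes open (~10 mm apart) with one 100-ms closure:
signal = [10.0] * 100 + [1.0] * 60 + [10.0] * 100
print(detect_eo_blinks(signal, fs=600))  # -> [(100, 160, 100.0)]
```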


Subjects
Algorithms, Blinking, Eye-Tracking Technology, Humans, Blinking/physiology, Pupil/physiology, Eyelids/physiology, Male, Adult, Female, Eye Movements/physiology
3.
Behav Res Methods ; 56(4): 3226-3241, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38114880

ABSTRACT

We present a deep learning method for accurately localizing the center of a single corneal reflection (CR) in an eye image. Unlike previous approaches, we use a convolutional neural network (CNN) that was trained solely on synthetic data. Using only synthetic data has the benefit of completely sidestepping the time-consuming process of manual annotation that is required for supervised training on real eye images. To systematically evaluate the accuracy of our method, we first tested it on images with synthetic CRs placed on different backgrounds and embedded in varying levels of noise. Second, we tested the method on two datasets consisting of high-quality videos captured from real eyes. Our method outperformed state-of-the-art algorithmic methods on real eye images, reducing spatial-precision error by 3-41.5% across datasets, and performed on par with the state of the art on synthetic images in terms of spatial accuracy. We conclude that our method localizes CR centers precisely and offers a solution to the data availability problem, one of the important common roadblocks in the development of deep learning models for gaze estimation. Due to its superior CR center localization and ease of application, our method has the potential to improve the accuracy and precision of CR-based eye trackers.
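The key idea of training solely on synthetic data is that the ground-truth CR center is known by construction, so no manual annotation is needed. A minimal sketch of generating one such labeled image (the paper's rendering pipeline is more sophisticated; all names and parameters here are illustrative assumptions):

```python
import math
import random

def synthetic_cr_image(size, cx, cy, sigma=1.5, noise=0.05, seed=0):
    """Grayscale image (list of rows) containing a Gaussian-profile corneal
    reflection at the known sub-pixel position (cx, cy), plus additive
    Gaussian pixel noise. The label (cx, cy) comes for free."""
    rng = random.Random(seed)
    return [[math.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma ** 2))
             + rng.gauss(0, noise)
             for x in range(size)] for y in range(size)]

# One training sample whose regression target is simply (15.3, 16.7):
img = synthetic_cr_image(32, cx=15.3, cy=16.7)
```

A training set is then just many such images with randomized positions, backgrounds, and noise levels.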


Subjects
Cornea, Deep Learning, Image Processing (Computer-Assisted), Neural Networks (Computer), Humans, Image Processing (Computer-Assisted)/methods, Cornea/diagnostic imaging, Cornea/physiology, Algorithms
4.
Behav Res Methods ; 2024 Jan 10.
Article in English | MEDLINE | ID: mdl-38200239

ABSTRACT

We built a novel setup to record large gaze shifts (up to 140°). The setup consists of a wearable eye tracker and a high-speed camera with fiducial-marker technology to track the head. We tested our setup by replicating findings from the classic eye-head gaze shift literature. We conclude that our new, inexpensive setup is good enough to investigate the dynamics of large eye-head gaze shifts. This novel setup could be used for future research on large eye-head gaze shifts, but also for research on gaze during, e.g., human interaction. We further discuss reference frames and terminology in head-free eye tracking. Despite the transition from head-fixed eye tracking to head-free gaze tracking, researchers still use head-fixed eye-movement terminology when discussing world-fixed gaze phenomena. We propose more specific terminology for world-fixed phenomena, including gaze fixation, gaze pursuit, and gaze saccade.
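The reference-frame point above can be made concrete: to a first approximation (1-D, horizontal angles), world-fixed gaze is the sum of the eye-in-head and head-in-world angles, so eye and head can both move while world-fixed gaze stays put. A simplified sketch under that assumption:

```python
def world_fixed_gaze(eye_in_head_deg, head_in_world_deg):
    """World-fixed gaze direction as the sample-wise sum of eye-in-head and
    head-in-world angles (1-D small-angle simplification). During a 'gaze
    fixation' the sum stays constant even though eye and head both move,
    as in vestibulo-ocular counter-rotation."""
    return [e + h for e, h in zip(eye_in_head_deg, head_in_world_deg)]

# Head turns 0 -> 40 deg while the eye counter-rotates 0 -> -40 deg:
# head-fixed terminology sees eye movement; world-fixed gaze is a fixation.
print(world_fixed_gaze([0, -10, -25, -40], [0, 10, 25, 40]))  # -> [0, 0, 0, 0]
```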

5.
Behav Res Methods ; 56(3): 1476-1484, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37326770

ABSTRACT

According to the proposal for a minimum reporting guideline for an eye tracking study by Holmqvist et al. (2022), the accuracy (in degrees) of eye tracking data should be reported. Currently, there is no easy way to determine accuracy for wearable eye tracking recordings. To enable determining the accuracy quickly and easily, we have produced a simple validation procedure using a printable poster and accompanying Python software. We tested the poster and procedure with 61 participants using one wearable eye tracker. In addition, the software was tested with six different wearable eye trackers. We found that the validation procedure can be administered within a minute per participant and provides measures of accuracy and precision. Calculating the eye-tracking data quality measures can be done offline on a simple computer and requires no advanced computer skills.
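The two data-quality measures the procedure reports are conventionally computed as the mean angular offset from the validation target (accuracy) and the RMS of sample-to-sample distances (precision). A hypothetical helper under those standard definitions; this is not the accompanying toolbox's actual code:

```python
import math

def accuracy_precision(gaze_deg, target_deg):
    """Accuracy: mean Euclidean offset (deg) between gaze samples and a
    validation target. Precision: RMS of sample-to-sample distances (deg).
    gaze_deg is a list of (x, y) gaze positions in degrees."""
    offsets = [math.dist(g, target_deg) for g in gaze_deg]
    accuracy = sum(offsets) / len(offsets)
    s2s = [math.dist(gaze_deg[i], gaze_deg[i - 1]) for i in range(1, len(gaze_deg))]
    rms_precision = math.sqrt(sum(d * d for d in s2s) / len(s2s))
    return accuracy, rms_precision

# Gaze hovering about 0.5 deg to the right of a target at (0, 0):
samples = [(0.5, 0.0), (0.5, 0.1), (0.5, -0.1), (0.5, 0.0)]
acc, prec = accuracy_precision(samples, (0.0, 0.0))
```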


Subjects
Data Accuracy, Eye-Tracking Technology, Humans, Software
6.
Behav Res Methods ; 56(3): 1900-1915, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37101100

ABSTRACT

Computer-vision-based gaze estimation refers to techniques that estimate gaze direction directly from video recordings of the eyes or face without the need for an eye tracker. Although many such methods exist, their validation is often found in the technical literature (e.g., computer science conference papers). We aimed to (1) identify which computer-vision-based gaze estimation methods are usable by the average researcher in fields such as psychology or education, and (2) evaluate these methods. We searched for methods that do not require calibration and have clear documentation. Two toolkits, OpenFace and OpenGaze, were found to fulfill these criteria. First, we present an experiment where adult participants fixated on nine stimulus points on a computer screen. We filmed their face with a camera and processed the recorded videos with OpenFace and OpenGaze. We conclude that OpenGaze is accurate and precise enough to be used in screen-based experiments with stimuli separated by at least 11 degrees of gaze angle. OpenFace was not sufficiently accurate for such situations but can potentially be used in sparser environments. We then examined whether OpenFace could be used with horizontally separated stimuli in a sparse environment with infant participants. We compared dwell measures based on OpenFace estimates to the same measures based on manual coding. We conclude that OpenFace gaze estimates may potentially be used with measures such as relative total dwell time to sparse, horizontally separated areas of interest, but should not be used to draw conclusions about measures such as dwell duration.
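The relative total dwell-time measure the abstract deems potentially usable with OpenFace estimates can be sketched as below, for horizontally separated areas of interest (AOIs). The function name, AOI format, and coordinate convention are assumptions for illustration:

```python
def relative_dwell(gaze_x, aois):
    """Fraction of valid gaze samples falling in each horizontally
    separated AOI. gaze_x holds per-sample horizontal gaze estimates
    (None = no estimate); aois maps AOI name to an (x_min, x_max) range."""
    valid = [x for x in gaze_x if x is not None]
    return {name: sum(lo <= x <= hi for x in valid) / len(valid)
            for name, (lo, hi) in aois.items()}

# Six samples, one missing; AOIs on the left and right of the scene:
gaze = [-0.8, -0.7, None, 0.6, 0.7, 0.9]
print(relative_dwell(gaze, {'left': (-1.0, -0.3), 'right': (0.3, 1.0)}))
```

Note that a relative measure like this tolerates uniform estimation error better than absolute measures such as individual dwell durations, which is consistent with the abstract's caution.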


Subjects
Eye Movements, Vision (Ocular), Adult, Humans, Eye, Calibration, Video Recording
7.
Behav Res Methods ; 55(4): 1513-1536, 2023 Jun.
Article in English | MEDLINE | ID: mdl-35680764

ABSTRACT

Pupil-corneal reflection (P-CR) eye tracking has gained a prominent role in studying dog visual cognition, despite methodological challenges that often lead to lower-quality data than when recording from humans. In the current study, we investigated whether and how the morphology of dogs might interfere with tracking by P-CR systems, and to what extent such interference, possibly in combination with dog-unique eye-movement characteristics, may undermine data quality and affect eye-movement classification when processed through algorithms. To this end, we conducted an eye-tracking experiment with dogs and humans, investigated incidences of tracking interference, compared how the two species blinked, and examined how the differential quality of dog and human data affected the detection and classification of eye-movement events. Our results show that the morphology of dogs' faces and eyes can interfere with the systems' tracking methods, and that dogs blink less often but their blinks are longer. Importantly, the lower quality of the dog data led to larger differences in how two different event detection algorithms classified fixations, indicating that the results of key dependent variables are more susceptible to the choice of algorithm in dog than in human data. Further, two measures of the Nyström & Holmqvist (Behavior Research Methods, 42(4), 188-204, 2010) algorithm showed that dog fixations are less stable and dog data have more trials with extreme levels of noise. Our findings call for analyses better adjusted to the characteristics of dog eye-tracking data, and our recommendations will help future dog eye-tracking studies acquire quality data enabling robust comparisons of visual cognition between dogs and humans.


Subjects
Data Accuracy, Eye-Tracking Technology, Humans, Dogs, Animals, Eye Movements, Blinking, Cognition
8.
Behav Res Methods ; 55(2): 657-669, 2023 02.
Article in English | MEDLINE | ID: mdl-35419703

ABSTRACT

Estimating gaze direction with a digital video-based pupil and corneal reflection (P-CR) eye tracker is challenging, partly because a video camera is limited in spatial and temporal resolution, and because the captured eye images contain noise. Through computer simulation, we evaluated the localization accuracy of pupil and CR centers in the eye image for small eye rotations (≪ 1 deg). Results highlight how inaccuracies in center localization are related to (1) how many pixels the pupil and CR span in the eye-camera image, (2) the method used to compute the centers of the pupil and CRs, and (3) the level of image noise. Our results provide a possible explanation for why the amplitude of small saccades may not be accurately estimated by many currently used video-based eye trackers. We conclude that eye movements with arbitrarily small amplitudes can be accurately estimated using the P-CR eye-tracking principle, given that the level of image noise is low and the pupil and CR span enough pixels in the eye camera, or if localization of the CR is based on the intensity values in the eye image instead of a binary representation.
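The contrast between binary and intensity-based center localization can be shown with a toy centroid computation: weighting by grayscale intensity recovers a sub-pixel center, while binarizing first snaps the estimate to whole pixels. This is an illustrative sketch, not the simulation code from the study:

```python
def centroid(image, threshold=None):
    """Center-of-mass (x, y) of a grayscale image given as rows of floats.
    With a threshold, pixels are binarized first (the classic approach);
    without one, intensity values weight the centroid directly."""
    total = x_sum = y_sum = 0.0
    for y, row in enumerate(image):
        for x, v in enumerate(row):
            w = (1.0 if v >= threshold else 0.0) if threshold is not None else v
            total += w
            x_sum += w * x
            y_sum += w * y
    return x_sum / total, y_sum / total

# A tiny CR straddling two pixels along one row:
img = [[0.0, 0.2, 0.8, 0.0]]
print(centroid(img))                  # intensity-weighted: x = 1.8 (sub-pixel)
print(centroid(img, threshold=0.5))   # binary: x = 2.0 (whole pixel)
```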


Subjects
Eye Movements, Saccades, Humans, Computer Simulation, Pupil
9.
Behav Res Methods ; 55(8): 4128-4142, 2023 12.
Article in English | MEDLINE | ID: mdl-36326998

ABSTRACT

How well can modern wearable eye trackers cope with head and body movement? To investigate this question, we asked four participants to stand still, walk, skip, and jump while fixating a static physical target in space. We did this for six different eye trackers. All the eye trackers were capable of recording gaze during the most dynamic episodes (skipping and jumping). The accuracy became worse as movement got wilder. During skipping and jumping, the biggest error was 5.8°. However, most errors were smaller than 3°. We discuss the implications of decreased accuracy in the context of different research scenarios.


Subjects
Eye Movements, Wearable Electronic Devices, Humans, Movement, Eye Movement Measurements, Head Movements
10.
Behav Res Methods ; 2023 Jul 28.
Article in English | MEDLINE | ID: mdl-37507649

ABSTRACT

A guideline is proposed that comprises the minimum items to be reported in research studies involving an eye tracker and human or non-human primate participant(s). This guideline was developed over a 3-year period using a consensus-based process via an open invitation to the international eye tracking community. This guideline will be reviewed at maximum intervals of 4 years.

11.
Behav Res Methods ; 55(1): 364-416, 2023 01.
Article in English | MEDLINE | ID: mdl-35384605

ABSTRACT

In this paper, we present a review of how the various aspects of any study using an eye tracker (such as the instrument, methodology, environment, participant, etc.) affect the quality of the recorded eye-tracking data and the obtained eye-movement and gaze measures. We take this review to represent the empirical foundation for reporting guidelines of any study involving an eye tracker. We compare this empirical foundation to five existing reporting guidelines and to a database of 207 published eye-tracking studies. We find that reporting guidelines vary substantially and do not match actual reporting practices. We end by deriving a minimal, flexible reporting guideline based on empirical research (Section "An empirically based minimal reporting guideline").


Subjects
Eye Movements, Eye-Tracking Technology, Humans, Empirical Research
12.
Infancy ; 27(1): 25-45, 2022 01.
Article in English | MEDLINE | ID: mdl-34687142

ABSTRACT

The Tobii Pro TX300 is a popular eye tracker in developmental eye-tracking research, yet it is no longer manufactured. If a TX300 breaks down, it may have to be replaced. The data quality of the replacement eye tracker may differ from that of the TX300, which may affect the experimental outcome measures. This is problematic for longitudinal and multi-site studies, and for researchers replacing eye trackers between studies. We therefore ask how the TX300 and its successor, the Tobii Pro Spectrum, compare in terms of eye-tracking data quality. Data quality, operationalized through precision, accuracy, and data loss, was compared between the eye trackers for three age groups (around 5 months, 10 months, and 3 years). Precision was better for all gaze position signals obtained with the Spectrum than with the TX300. Accuracy of the Spectrum was higher for the 5-month-old and 10-month-old children; for the 3-year-old children, accuracy was similar across both eye trackers. Gaze position signals from the Spectrum exhibited lower proportions of data loss, and the duration of the data-loss periods tended to be shorter. In conclusion, the Spectrum produces gaze position signals with higher data quality, especially for the younger infants. Implications for data analysis are discussed.


Subjects
Data Accuracy, Eye-Tracking Technology, Child, Child (Preschool), Data Collection, Eye Movements, Humans, Infant
13.
Behav Res Methods ; 54(6): 2765-2776, 2022 Dec.
Article in English | MEDLINE | ID: mdl-35023066

ABSTRACT

Eye trackers are applied in many research fields (e.g., cognitive science, medicine, marketing research). To give meaning to the eye-tracking data, researchers have a broad choice of classification methods to extract various behaviors (e.g., saccade, blink, fixation) from the gaze signal. There is extensive literature about the different classification algorithms. Surprisingly, not much is known about the effect of fixation and saccade selection rules that are usually (implicitly) applied. We want to answer the following question: What is the impact of the selection-rule parameters (minimal saccade amplitude and minimal fixation duration) on the distribution of fixation durations? To answer this question, we used eye-tracking data with high and low quality and seven different classification algorithms. We conclude that selection rules play an important role in merging and selecting fixation candidates. For eye-tracking data with good-to-moderate precision (RMSD < 0.5°), the classification algorithm of choice does not matter too much as long as it is sensitive enough and is followed by a rule that selects saccades with amplitudes larger than 1.0° and a rule that selects fixations with duration longer than 60 ms. Because of the importance of selection, researchers should always report whether they performed selection and the values of their parameters.
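The selection rules described above (drop saccades below a minimal amplitude, merging the fixations around them, then drop fixations below a minimal duration) can be sketched as a post-classification pass. The event-tuple format and function name are assumptions for illustration, not from the paper:

```python
def apply_selection_rules(events, min_sacc_amp=1.0, min_fix_dur=60):
    """Apply the (often implicit) selection rules to a classified event
    stream of ('fix', duration_ms) and ('sacc', amplitude_deg) tuples:
    1) drop saccades smaller than min_sacc_amp, merging the fixations
       on either side; 2) drop fixations shorter than min_fix_dur."""
    selected = []
    for kind, value in events:
        if kind == 'sacc' and value < min_sacc_amp:
            continue  # small saccade removed; neighbouring fixations will merge
        if kind == 'fix' and selected and selected[-1][0] == 'fix':
            selected[-1] = ('fix', selected[-1][1] + value)  # merge fixations
        else:
            selected.append((kind, value))
    return [(k, v) for k, v in selected if not (k == 'fix' and v < min_fix_dur)]

# Two short fixations split by a 0.3-deg saccade merge into one 90-ms
# fixation; the trailing 30-ms fixation is discarded.
events = [('fix', 40), ('sacc', 0.3), ('fix', 50), ('sacc', 4.0), ('fix', 30)]
print(apply_selection_rules(events))  # -> [('fix', 90), ('sacc', 4.0)]
```

Note how the final distribution of fixation durations depends directly on both parameters, which is the paper's point about reporting them.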

14.
Behav Res Methods ; 53(1): 311-324, 2021 02.
Article in English | MEDLINE | ID: mdl-32705655

ABSTRACT

Eye trackers are sometimes used to study the miniature eye movements, such as drift, that occur while observers fixate a static location on a screen. Specifically, analysis of such eye-tracking data can be performed by examining the temporal spectral composition of the recorded gaze position signal, which allows assessing its color. However, not only rotations of the eyeball but also filters in the eye tracker may affect the signal's spectral color. Here, we therefore ask whether colored, as opposed to white, signal dynamics in eye-tracking recordings reflect fixational eye movements, or whether they are instead largely due to filters. We recorded gaze position data with five eye trackers from four pairs of human eyes performing fixation sequences, and also from artificial eyes. We examined the spectral color of the gaze position signals produced by the eye trackers, both with their filters switched on, and for unfiltered data. We found that while filtered data recorded from both human and artificial eyes were colored for all eye trackers, for most eye trackers the signal was white when examining both unfiltered human and unfiltered artificial eye data. These results suggest that color in the eye-movement recordings was due to filters for all eye trackers except the most precise eye tracker, where it may partly reflect fixational eye movements. As such, researchers studying fixational eye movements should be careful to examine the properties of the filters in their eye tracker to ensure they are studying eyeball rotation and not filter properties.
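A signal's spectral "color" can be summarized by the slope of its log-log power spectrum: a slope near zero means white (flat) dynamics, while clearly negative slopes indicate colored (e.g., pink or brown) dynamics. A rough sketch under that convention, using a naive DFT on short synthetic signals; this is a hypothetical helper, not the paper's analysis pipeline:

```python
import cmath
import math
import random

def spectral_slope(signal, fs):
    """Slope of log10(power) vs. log10(frequency) in a periodogram.
    ~0 suggests a white spectrum; clearly negative suggests colored
    dynamics. Naive O(n^2) DFT, fine only for short illustrative data."""
    n = len(signal)
    mean = sum(signal) / n
    xs, ys = [], []
    for k in range(1, n // 2):  # skip the DC and Nyquist bins
        coef = sum((signal[t] - mean) * cmath.exp(-2j * math.pi * k * t / n)
                   for t in range(n))
        power = abs(coef) ** 2 / n
        if power > 0:
            xs.append(math.log10(k * fs / n))
            ys.append(math.log10(power))
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# White noise vs. a random walk (brown, drift-like dynamics):
random.seed(1)
white = [random.gauss(0, 1) for _ in range(256)]
steps = [random.gauss(0, 1) for _ in range(256)]
brown = [sum(steps[:i + 1]) for i in range(256)]
white_slope = spectral_slope(white, fs=256)   # near 0
brown_slope = spectral_slope(brown, fs=256)   # clearly negative
```

A low-pass filter applied to a white signal likewise tilts the spectrum, which is why filtered recordings can look colored without any fixational eye movement.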


Subjects
Eye Movements, Eye-Tracking Technology, Color, Eye (Artificial), Fixation (Ocular), Humans, Researchers
15.
Behav Res Methods ; 53(1): 335-353, 2021 02.
Article in English | MEDLINE | ID: mdl-32705656

ABSTRACT

Due to its reported high sampling frequency and precision, the Tobii Pro Spectrum is of potential interest to researchers who want to study small eye movements during fixation. We test how suitable the Tobii Pro Spectrum is for research on microsaccades by computing data-quality measures and common properties of microsaccades and comparing these to the currently most used system in this field: the EyeLink 1000 Plus. Results show that the EyeLink data provide higher RMS precision and microsaccade rates compared with data acquired with the Tobii Pro Spectrum. However, both systems provide microsaccades with similar directions and shapes, as well as rates consistent with previous literature. Data acquired at 1200 Hz with the Tobii Pro Spectrum provide results that are more similar to the EyeLink, compared to data acquired at 600 Hz. We conclude that the Tobii Pro Spectrum is a useful tool for researchers investigating microsaccades.


Subjects
Fixation (Ocular), Saccades, Eye Movements, Humans, Visual Perception
16.
Behav Res Methods ; 53(5): 1986-2006, 2021 10.
Article in English | MEDLINE | ID: mdl-33709298

ABSTRACT

The pupil size artefact (PSA) is the gaze deviation reported by an eye tracker during pupil size changes when the eye does not rotate. In the present study, we ask three questions: (1) how stable is the PSA over time, (2) does the PSA depend on properties of the eye-tracker setup, and (3) does the PSA depend on the participant's viewing direction? We found that the PSA is very stable over time for periods as long as 1 year, but may differ between participants. When comparing the magnitude of the PSA between eye trackers, we found it to be related to the direction of the eye-tracker-camera axis, suggesting that the angle between the participant's viewing direction and the camera axis affects the PSA. We then investigated the PSA as a function of the participant's viewing direction. The PSA was non-zero for a viewing direction of 0° and depended on the viewing direction. These findings corroborate the suggestion by Choe et al. (Vision Research 118(6755):48-59, 2016) that the PSA can be described by an idiosyncratic and a viewing-direction-dependent component. Based on a simulation, we cannot claim that the viewing-direction-dependent component of the PSA is caused by the optics of the cornea.


Subjects
Artifacts, Pupil, Eye Movement Measurements, Eye Movements, Humans
17.
J Neuroeng Rehabil ; 17(1): 56, 2020 04 25.
Article in English | MEDLINE | ID: mdl-32334622

ABSTRACT

BACKGROUND: Tremor is a cardinal symptom of Parkinson's disease (PD) that may cause severe disability. As such, objective methods to determine the exact characteristics of the tremor may improve the evaluation of therapy. This methodology study aims to validate the utility of two objective technical methods of recording Parkinsonian tremor and to evaluate their ability to determine the effects of deep brain stimulation (DBS) of the subthalamic nucleus and of vision. METHODS: We studied 10 patients with idiopathic PD who were responsive to L-Dopa and had more than 1 year of bilateral subthalamic nucleus stimulation. The patients did not have to display visible tremor to be included in the study. Tremor was recorded with two objective methods: a force platform and a three-dimensional (3D) motion capture system that tracked movements in four key proximal sections of the body (knee, hip, shoulder, and head). Patients were assessed after an overnight withdrawal of anti-PD medications, with DBS ON and OFF and with eyes open and closed, during unperturbed and perturbed stance with randomized calf vibration, using a randomized test order. RESULTS: Tremor was detected with the Unified Parkinson's Disease Rating Scale (UPDRS) in 6 of 10 patients, but only distally (hands and feet) and with DBS OFF. With the force platform and the 3D motion capture system, tremor was detected in 6 of 10 and 7 of 10 patients, respectively, mostly with DBS OFF but also with DBS ON in some patients. The 3D motion capture system revealed that more than one body section was usually affected by tremor and that tremor amplitude was non-uniform, but its frequency almost identical, across sites. DBS reduced tremor amplitude non-uniformly across the body. Visual input mostly reduced tremor amplitude with DBS ON.
CONCLUSIONS: Technical recording methods offer objective and sensitive detection of tremor and provide detailed characteristics such as peak amplitude, frequency, and distribution pattern, and thus information that can guide the optimization of treatments. Both methods detected the effects of DBS and visual input, but the 3D motion capture system was more versatile in that it could detail the presence and properties of tremor at individual body sections.


Subjects
Three-Dimensional Imaging/methods, Parkinson Disease/complications, Parkinson Disease/therapy, Tremor/diagnosis, Aged, Deep Brain Stimulation/methods, Female, Humans, Male, Middle Aged, Subthalamic Nucleus/physiology, Tremor/etiology
18.
J Vis ; 20(10): 15, 2020 10 01.
Article in English | MEDLINE | ID: mdl-33052410

ABSTRACT

Perceiving object motion during self-movement is an essential ability of humans. Previous studies have reported that the visual system can use both visual information (such as optic flow) and non-visual information (such as vestibular, somatosensory, and proprioceptive information) to identify and globally subtract the retinal motion component due to self-movement to recover scene-relative object motion. In this study, we used a motion-nulling method to directly measure and quantify the contribution of visual and non-visual information to the perception of scene-relative object motion during walking. We found that about 50% of the retinal motion component of the probe due to translational self-movement was removed with non-visual information alone and about 80% with visual information alone. With combined visual and non-visual information, the self-movement component was removed almost completely. Although non-visual information played an important role in the removal of self-movement-induced retinal motion, it was associated with decreased precision of probe motion estimates. We conclude that neither non-visual nor visual information alone is sufficient for the accurate perception of scene-relative object motion during walking, which instead requires the integration of both sources of information.


Subjects
Form Perception/physiology, Mental Processes/physiology, Motion Perception/physiology, Retina/physiology, Walking/physiology, Adult, Female, Humans, Male, Optic Flow, Visual Perception, Young Adult
19.
Behav Res Methods ; 52(1): 295-304, 2020 02.
Article in English | MEDLINE | ID: mdl-30937844

ABSTRACT

We present SMITE, a toolbox for the measurement of eye movements using eye trackers manufactured by SMI GmbH. The toolbox provides a wrapper around the iViewX SDK provided by SMI, allowing simple integration of SMI eye trackers into Psychophysics Toolbox and PsychoPy programs. The toolbox provides a graphical interface for participant setup and calibration that is implemented natively in Psychophysics Toolbox and PsychoPy drawing commands, as well as providing several convenience features for, inter alia, creating gaze-contingent experiments and working with two-computer setups. Given that SMI GmbH and its support department have closed down, it is expected that this toolbox will provide owners of SMI eye trackers with an important new way to continue to create experiments with their systems. The eye trackers supported by this toolbox are the SMI HiSpeed 1250, SMI RED systems, SMI RED-m, SMI RED250mobile, and SMI REDn.


Subjects
Eye Movements, Psychophysics/methods, Humans, Software
20.
Behav Res Methods ; 52(5): 1970-1979, 2020 10.
Article in English | MEDLINE | ID: mdl-32128697

ABSTRACT

We present Titta, an open-source toolbox for controlling eye trackers manufactured by Tobii AB from MATLAB and Python. The toolbox provides a wrapper around the Tobii Pro SDK, providing a convenient graphical participant setup, calibration and validation interface implemented using the PsychToolbox and PsychoPy toolboxes. The toolbox furthermore enables MATLAB and Python experiments to communicate with Tobii Pro Lab through the TalkToProLab tool. This enables experiments to be created and run using the freedom of MATLAB and Python, while the recording can be visualized and analyzed in Tobii Pro Lab. All screen-mounted Tobii eye trackers that are supported by the Tobii Pro SDK are also supported by Titta. At the time of writing, these are the Spectrum, Nano, TX300, T60XL, X3-120, X2-60, X2-30, X60, X120, T60 and T120 from Tobii Pro, and the 4C from Tobii Tech.


Subjects
Eye Movements, Software, Color, Head, Humans, Pupil