Results 1 - 6 of 6
1.
Sheng Wu Yi Xue Gong Cheng Xue Za Zhi ; 39(4): 833-840, 2022 Aug 25.
Article in Chinese | MEDLINE | ID: mdl-36008348

ABSTRACT

Eye-computer interaction based on the electro-oculogram (EOG) offers users a convenient way to control devices and has considerable social significance. In practice, however, the interaction is often disturbed by involuntary eye movements, which leads to misjudged commands, degrades the user experience, and in severe cases can even create safety hazards. Starting from the basic concepts and principles of eye-computer interaction, this paper therefore reviews the current mainstream methods for classifying voluntary versus involuntary eye movements and analyzes the characteristics of each approach. Performance is analyzed in the context of specific application scenarios, and the remaining open problems are summarized, with the aim of providing a research reference for workers in related fields.


Subject(s)
Eye Movements , Movement , Computers , Electrooculography/methods
2.
Behav Res Methods ; 52(4): 1671-1680, 2020 Aug.
Article in English | MEDLINE | ID: mdl-32291731

ABSTRACT

Zemblys et al. (Behavior Research Methods, 51(2), 840-864, 2019) reported on a method for the classification of eye movements ("gazeNet"). I have found three errors and two problems with that paper, which are explained herein. Error 1: The gazeNet classification method was built on the assumption that a hand-scored dataset from Lund University was collected entirely at 500 Hz, but in fact six of the 34 recording files were collected at 200 Hz. Of the six datasets that were used as the training set for the gazeNet algorithm, two were actually collected at 200 Hz. Problem 1: Even among the 500 Hz data, the inter-timestamp intervals vary widely. Problem 2: The saccade trajectories in the Lund University dataset contain many unusual discontinuities, which makes it a very poor choice for constructing an automatic classification method. Error 2: The gazeNet algorithm was trained on the Lund dataset and then compared, on that same dataset, with other methods that were not trained on it. This is an inherently unfair comparison, yet nowhere in the gazeNet paper is this unfairness mentioned. Error 3: This error arises from the novel event-related agreement analysis employed by the gazeNet authors. Although the authors intended to classify unmatched events as either false positives or false negatives, many are actually classified as true negatives. True negatives are not errors, and any unmatched event misclassified as a true negative drives kappa higher, whereas unmatched events should drive kappa lower.
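
As a rough illustration of the kappa issue described in Error 3, the following Python sketch uses made-up event counts (not figures from either paper) to show how counting unmatched events as true negatives inflates event-level Cohen's kappa, whereas counting them as errors lowers it.

# Toy illustration, not the gazeNet analysis: how treating unmatched events
# as "true negatives" instead of errors inflates event-level Cohen's kappa.
# All counts below are invented for demonstration purposes only.

def cohen_kappa(tp, fp, fn, tn):
    """Cohen's kappa for a 2x2 event-agreement table."""
    n = tp + fp + fn + tn
    p_observed = (tp + tn) / n
    p_chance = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    return (p_observed - p_chance) / (1 - p_chance)

matched = 70          # events marked by both the algorithm and the human rater
agreed_non_events = 20  # stretches both agree contain no event
unmatched = 10        # events marked by only one of the two

# Counting every unmatched event as an error (false positive or false negative)
kappa_as_errors = cohen_kappa(tp=matched, fp=unmatched // 2,
                              fn=unmatched // 2, tn=agreed_non_events)

# Mistakenly counting the unmatched events as true negatives: the table now
# looks like perfect agreement, even though 10 events were never matched.
kappa_as_tn = cohen_kappa(tp=matched, fp=0, fn=0,
                          tn=agreed_non_events + unmatched)

print(f"kappa, unmatched counted as errors:         {kappa_as_errors:.3f}")
print(f"kappa, unmatched counted as true negatives: {kappa_as_tn:.3f}")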


Subject(s)
Algorithms , Communication , Neural Networks, Computer , Eye Movements , Humans , Saccades
3.
Behav Res Methods ; 51(2): 556-572, 2019 Apr.
Article in English | MEDLINE | ID: mdl-30411227

ABSTRACT

Deep learning approaches have achieved breakthrough performance in various domains. However, the segmentation of raw eye-movement data into discrete events is still done predominantly either by hand or by algorithms that use hand-picked parameters and thresholds. We propose, and make publicly available, a small 1D-CNN combined with a bidirectional long short-term memory network that classifies gaze samples as fixations, saccades, smooth pursuit, or noise, simultaneously assigning labels to all samples in windows of up to 1 s. In addition to unprocessed gaze coordinates, our approach uses different combinations of gaze speed, direction, and acceleration, all computed at several temporal scales, as input features. Its performance was evaluated on a large-scale hand-labeled ground-truth data set (GazeCom) and against 12 reference algorithms. Furthermore, we introduce a novel pipeline and metric for event detection in eye-tracking recordings, which enforce stricter criteria on the algorithmically produced events before they are considered potentially correct detections. The results show that our deep approach outperforms all others, including the state-of-the-art multi-observer smooth pursuit detector. We additionally test our best model on an independent set of recordings, where it remains highly competitive with the methods from the literature.
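
For orientation, the PyTorch sketch below shows the general shape of architecture the abstract describes: a small 1D-CNN followed by a bidirectional LSTM that assigns one of four labels to every sample in a window. The layer sizes, feature count, and sampling rate are illustrative assumptions, not the published model.

import torch
import torch.nn as nn

class GazeEventClassifier(nn.Module):
    """1D-CNN + bidirectional LSTM producing per-sample event labels."""

    def __init__(self, n_features=3, n_classes=4, conv_channels=32, lstm_hidden=64):
        super().__init__()
        # n_features could be, e.g., speed, direction, and acceleration of gaze.
        self.conv = nn.Sequential(
            nn.Conv1d(n_features, conv_channels, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(conv_channels, conv_channels, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        self.blstm = nn.LSTM(conv_channels, lstm_hidden,
                             batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * lstm_hidden, n_classes)  # per-sample logits

    def forward(self, x):
        # x: (batch, time, features); Conv1d expects (batch, channels, time).
        h = self.conv(x.transpose(1, 2)).transpose(1, 2)
        h, _ = self.blstm(h)          # (batch, time, 2 * lstm_hidden)
        return self.head(h)           # (batch, time, n_classes)

# Example: 8 one-second windows at an assumed 250 Hz with 3 features each.
model = GazeEventClassifier()
logits = model(torch.randn(8, 250, 3))
print(logits.shape)  # torch.Size([8, 250, 4])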


Subject(s)
Eye Movements , Pattern Recognition, Automated/methods , Algorithms , Humans , Pursuit, Smooth , Saccades
4.
J Vis ; 14(2), 2014 Feb 25.
Article in English | MEDLINE | ID: mdl-24569984

ABSTRACT

Microsaccades, small involuntary eye movements that occur once or twice per second during attempted visual fixation, are relevant to perception, cognition, and oculomotor control and present distinctive characteristics in visual and oculomotor pathologies. Thus, the development of robust and accurate microsaccade-detection techniques is important for basic and clinical neuroscience research. Due to the diminutive size of microsaccades, however, automatic and reliable detection can be difficult. Current challenges in microsaccade detection include reliance on set, arbitrary thresholds and lack of objective validation. Here we describe a novel microsaccade-detecting method, based on unsupervised clustering techniques, that does not require an arbitrary threshold and provides a detection reliability index. We validated the new clustering method using real and simulated eye-movement data. The clustering method reduced detection errors by 62% for binocular data and 78% for monocular data, when compared to standard contemporary microsaccade-detection techniques. Further, the clustering method's reliability index was correlated with the microsaccade-detection error rate, suggesting that the reliability index may be used to determine the comparative precision of eye-tracking devices.
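
To make the threshold-free idea concrete, the Python toy below clusters per-sample velocities with k-means instead of applying a fixed velocity cutoff. It is only a sketch of the general principle on synthetic data, not the clustering method published in the paper.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Synthetic 1-s fixation trace at 500 Hz: slow drift plus two tiny saccades.
fs = 500
gaze = np.cumsum(rng.normal(0, 0.002, size=fs))      # drift, in degrees
gaze[200:205] += np.linspace(0, 0.3, 5)              # ~0.3 deg microsaccade
gaze[400:405] -= np.linspace(0, 0.25, 5)

velocity = np.abs(np.gradient(gaze) * fs)            # deg/s, per sample

# Two clusters on log-velocity: one for drift, one for saccadic samples.
features = np.log(velocity + 1e-6).reshape(-1, 1)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)

# The cluster with the higher mean velocity holds the candidate microsaccades.
saccadic_cluster = np.argmax([velocity[labels == k].mean() for k in (0, 1)])
candidates = np.flatnonzero(labels == saccadic_cluster)
print(f"{candidates.size} samples flagged as candidate microsaccade samples")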


Subject(s)
Fixation, Ocular/physiology , Saccades/physiology , Visual Perception/physiology , Cluster Analysis , Humans , Photic Stimulation/methods , Reproducibility of Results
5.
J Eye Mov Res ; 13(4), 2020 Jul 27.
Article in English | MEDLINE | ID: mdl-33828806

ABSTRACT

In this short article we present our manual annotation of the eye movement events in a subset of the large-scale eye-tracking data set Hollywood2. Our labels include fixations, saccades, and smooth pursuits, as well as a noise event type (the latter representing blinks, loss of tracking, or physically implausible signals). To achieve more consistent annotations, the gaze samples were first labelled by a novice rater based on rudimentary algorithmic suggestions and subsequently corrected by an expert rater. Overall, we annotated the eye movement events in the recordings corresponding to 50 randomly selected test-set clips and 6 training-set clips from Hollywood2, which were viewed by 16 observers and amount to a total of approximately 130 minutes of gaze data. In these labels, 62.4% of the samples were attributed to fixations, 9.1% to saccades, and, notably, 24.2% to pursuit, with the remainder marked as noise. After evaluating 15 published eye movement classification algorithms on our newly annotated data set, we found that the most recent algorithms perform very well on average and even reach human-level labelling quality for fixations and saccades, but that all of them leave considerably more room for improvement when it comes to smooth pursuit classification. The data set is made available at https://gin.g-node.org/ioannis.agtzidis/hollywood2_em.
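
The sample-level percentages quoted above are simple per-class proportions over the annotated gaze samples. The Python sketch below shows that computation on a dummy label stream; the integer label codes and the example array are assumptions, since the files at the URL above have their own format.

import numpy as np

EVENT_NAMES = {0: "noise", 1: "fixation", 2: "saccade", 3: "smooth pursuit"}

def label_proportions(labels):
    """Fraction of gaze samples assigned to each event type."""
    labels = np.asarray(labels)
    return {name: float(np.mean(labels == code)) for code, name in EVENT_NAMES.items()}

# Dummy per-sample labels standing in for one recording's annotation.
example = np.array([1, 1, 1, 2, 1, 1, 3, 3, 3, 0])
for name, share in label_proportions(example).items():
    print(f"{name:>14}: {share:.1%}")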

6.
Front Psychol ; 11: 542752, 2020.
Article in English | MEDLINE | ID: mdl-33013592

ABSTRACT

Surgical skill-level assessment is key to collecting the required feedback and adapting educational programs accordingly. Currently, these assessments in minimally invasive surgery programs are primarily based on subjective methods, and there is no consensus on skill-level classifications. One of the most detailed of these classifications categorizes skill levels as beginner, novice, intermediate, sub-expert, and expert. To properly integrate skill assessment into minimally invasive surgical education programs and provide skill-based training alternatives, it is necessary to classify skill levels in as much detail as possible and to identify the differences between all skill levels objectively. Yet, despite very encouraging results in the literature, most studies have focused on the differences between novice and expert skill levels, leaving out the crucial skill levels between them. In addition, very few studies have considered the eye-movement behavior of surgical residents. To this end, the present study attempted to distinguish novice- and intermediate-level surgical residents based on their eye movements. Eye-movement data were recorded from 23 volunteer surgical residents while they performed four computer-based simulated surgical tasks under different hand conditions. The data were analyzed using logistic regression to estimate the skill level of both groups. The best result, a 91.3% recognition rate for distinguishing novice from intermediate surgical residents, was obtained on one of the four scenarios under the dominant-hand condition. These results show that eye movements can potentially be used to identify surgeons with intermediate and novice skills. However, they also indicate that the order in which the scenarios are presented, the design of the scenarios and tasks, and their appropriateness for the participants' skill levels are all critical factors for improving the estimation rate, and hence require thorough assessment in future research.
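
As a hedged sketch of the kind of analysis described, the Python snippet below fits a logistic regression that predicts novice versus intermediate skill from summary eye-movement features and reports cross-validated accuracy. The feature names, group sizes per class, and data are placeholders, not the study's data, so the printed accuracy will hover near chance.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)

# One row per resident: e.g. mean fixation duration, saccade rate, and
# scan-path length (all values here are random stand-ins for illustration).
X = rng.normal(size=(23, 3))
y = np.array([0] * 12 + [1] * 11)   # 0 = novice, 1 = intermediate (assumed split)

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(model, X, y, cv=5)   # estimated recognition rate
print(f"cross-validated accuracy: {scores.mean():.1%} (+/- {scores.std():.1%})")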
