Results 1 - 20 of 79
1.
Sensors (Basel) ; 24(16)2024 Aug 20.
Article in English | MEDLINE | ID: mdl-39205057

ABSTRACT

Virtual speeches are a popular form of remote multi-user communication, but they lack eye contact. This paper proposes a gaze-tracking-based evaluation of online audience attention. Our approach uses only webcams to capture the audience's head posture, gaze time, and other features, providing a low-cost attention-monitoring method with reference value across multiple domains. We also propose a set of indices for evaluating the audience's degree of attention, compensating for the speaker's inability to gauge audience concentration through eye contact during online speeches. We selected 96 students for a 20 min group simulation session and used Spearman's correlation coefficient to analyze the correlation between our evaluation indices and concentration. The results showed that each evaluation index correlates significantly with the degree of attention (p = 0.01): all students in the focused group met the thresholds set by each of our evaluation indices, while the students in the non-focused group failed to reach them. During the simulation, eye-movement data and EEG signals were measured synchronously for the second group of students. The students' EEG results were consistent with the system's evaluation, confirming its accuracy.
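
As a rough illustration of the reported correlation analysis, the following sketch computes Spearman's rho between webcam-derived attention indices and a concentration score; the index names and the synthetic data are illustrative assumptions, not values from the paper.

# Sketch: correlating webcam-derived attention indices with rated
# concentration via Spearman's rho. Index names and data are invented.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_students = 96
concentration = rng.uniform(0, 1, n_students)        # rated concentration
indices = {
    "on_screen_gaze_ratio": concentration * 0.8 + rng.normal(0, 0.1, n_students),
    "head_pose_stability":  concentration * 0.6 + rng.normal(0, 0.2, n_students),
    "mean_fixation_time":   concentration * 0.5 + rng.normal(0, 0.3, n_students),
}

for name, values in indices.items():
    rho, p = spearmanr(values, concentration)
    print(f"{name}: rho={rho:.2f}, p={p:.3g}")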


Subject(s)
Attention; Eye Movements; Speech; Humans; Attention/physiology; Speech/physiology; Eye Movements/physiology; Electroencephalography/methods; Male; Eye-Tracking Technology; Female; User-Computer Interface
2.
Sensors (Basel) ; 23(13)2023 Jul 06.
Article in English | MEDLINE | ID: mdl-37448047

ABSTRACT

Fatigue detection is extremely important in the development of preventive systems such as driver or operator monitoring for accident prevention. For this task, the presence of fatigue should be determined from physiological and objective behavioral indicators. To develop an effective fatigue-detection model, it is important to record a dataset covering people both in a fatigued and in a normal state. We collected data using an eye tracker, a video camera, a stage camera, and a heart rate monitor to record several kinds of signals for analysis. For our proposed dataset, 10 participants took part in the experiment and recorded data 3 times a day for 8 days. They performed different types of activity (a choice reaction time task, reading, a Landolt ring correction test, and playing Tetris) imitating everyday tasks. Our dataset is useful for studying fatigue and finding indicators of its manifestation. We also analyzed publicly available datasets, each containing eye-movement and other data, to find the best one for this task; we evaluated each for its suitability for fatigue studies, but none fully fits the fatigue-detection task. We evaluated the recorded dataset by calculating the correspondence between eye-tracking data and choice reaction time (CRT), which indicates the presence of fatigue.


Subject(s)
Eye Movements; Head Movements; Humans; Heart Rate; Videotape Recording; Reaction Time; Head Movements/physiology
3.
Behav Res Methods ; 55(3): 1372-1391, 2023 04.
Article in English | MEDLINE | ID: mdl-35650384

ABSTRACT

With continued advancements in portable eye-tracker technology liberating experimenters from the restraints of artificial laboratory designs, research can now collect gaze data from real-world, natural navigation. However, the field lacks a robust method for achieving this: past approaches relied upon time-consuming manual annotation of eye-tracking data, while previous attempts at automation lack the versatility needed for in-the-wild navigation trials consisting of complex and dynamic scenes. Here, we propose a system capable of informing researchers of where and what a user's gaze is focused upon at any one time. The system first runs footage recorded on a head-mounted camera through a deep-learning-based object detection algorithm, the Masked Region-based Convolutional Neural Network (Mask R-CNN). The algorithm's output is combined with frame-by-frame gaze coordinates measured by an eye-tracking device synchronized with the head-mounted camera to detect and annotate, without any manual intervention, what a user looked at in each frame of the provided footage. The effectiveness of the presented methodology was validated by comparing the system output with that of manual coders. High levels of agreement between the two supported the system as the preferable data collection technique, as it processed data at a significantly faster rate than its human counterpart. The system's practicality was further demonstrated via a case study exploring the mediatory effects of gaze behaviors on an environment-driven attentional bias.
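
A minimal sketch of the annotation step described above, assuming Mask R-CNN detections (binary masks, class labels, scores) are already available per frame; the detection format and the score threshold are assumptions.

# Sketch: given per-frame detections and the synchronized gaze
# coordinate, label what was looked at in that frame.
import numpy as np

def annotate_frame(gaze_xy, masks, labels, scores, min_score=0.5):
    """Return the label of the highest-scoring detection whose mask
    contains the gaze point, or 'background' if none does."""
    x, y = int(round(gaze_xy[0])), int(round(gaze_xy[1]))
    best = ("background", 0.0)
    for mask, label, score in zip(masks, labels, scores):
        if score < min_score:
            continue
        h, w = mask.shape
        if 0 <= y < h and 0 <= x < w and mask[y, x] and score > best[1]:
            best = (label, score)
    return best[0]

# Toy frame: one 'person' mask covering the image centre.
mask = np.zeros((480, 640), dtype=bool)
mask[100:400, 200:500] = True
print(annotate_frame((320, 240), [mask], ["person"], [0.97]))  # -> person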


Subject(s)
Deep Learning; Eye Movements; Humans; Eye-Tracking Technology; Neural Networks, Computer; Algorithms
4.
Sensors (Basel) ; 22(9)2022 Apr 20.
Article in English | MEDLINE | ID: mdl-35590821

ABSTRACT

Gaze tracking is a fundamental research topic in the era of the Internet of Things. This study attempts to improve the performance of an active-infrared-source gaze-tracking system. Owing to unavoidable noise interference, the estimated points of regard (PORs) tend to fluctuate within a certain range. To reduce the fluctuation range and obtain more stable results, we introduced a Kalman filter (KF) to filter the gaze parameters. Because the effect of filtering depends on the motion state of the gaze, we designed a measurement noise model that varies with gaze speed. In addition, we used a correlation-filter-based tracking method, instead of a detection method, to quickly locate the pupil. Experiments indicated that the variance of the estimation error decreased by 73.83%, the size of the extracted pupil image decreased by 93.75%, and the extraction speed increased by a factor of 1.84. We also discuss the advantages and disadvantages of the proposed method as a reference for related research. Notably, the proposed algorithm can be adopted in any eye-camera-based gaze tracker.
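
The following sketch shows one plausible reading of the speed-dependent measurement noise: a constant-velocity Kalman filter whose measurement noise R shrinks as gaze speed grows, so fixations are smoothed heavily while saccades are tracked with little lag. All constants are illustrative, not the paper's.

import numpy as np

def filter_por(points, dt=1/60, q=30.0, r_fix=25.0, speed_gain=0.05):
    """Constant-velocity Kalman filter over 2D points of regard (px)."""
    points = np.asarray(points, dtype=float)
    x = np.array([points[0, 0], points[0, 1], 0.0, 0.0])  # x, y, vx, vy
    P = np.eye(4) * 10.0
    F = np.eye(4); F[0, 2] = F[1, 3] = dt                 # motion model
    H = np.array([[1., 0., 0., 0.], [0., 1., 0., 0.]])
    Q = np.eye(4) * q * dt
    out = [points[0]]
    for z in points[1:]:
        speed = np.linalg.norm(z - out[-1]) / dt          # px/s gaze speed
        R = np.eye(2) * r_fix / (1.0 + speed_gain * speed)  # adaptive noise
        x = F @ x
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(4) - K @ H) @ P
        out.append(x[:2].copy())
    return np.array(out)

# Noisy fixation at (500, 300): filtered output fluctuates far less.
rng = np.random.default_rng(1)
raw = np.array([500., 300.]) + rng.normal(0, 5, (120, 2))
print(filter_por(raw)[-5:].round(1))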


Subject(s)
Eye-Tracking Technology; Internet of Things; Algorithms; Fixation, Ocular; Pupil
5.
Sensors (Basel) ; 22(6)2022 Mar 17.
Article in English | MEDLINE | ID: mdl-35336497

ABSTRACT

Human eye gaze plays a vital role in monitoring people's attention, and various efforts have been made to improve in-vehicle driver gaze-tracking systems. Most such systems build a gaze-estimation model offline, by training on pre-annotated data. They therefore tend to generalize poorly during online gaze prediction because of the estimation bias between the training domain and the deployment domain, which shifts the predicted gaze points away from their correct locations. To solve this problem, we propose a novel driver eye-gaze tracking method with non-linear gaze point refinement for a two-camera monitoring system, which eliminates the estimation bias and implicitly fine-tunes the gaze points. Supported by a two-stage gaze point clustering algorithm, the non-linear refinement gradually extracts representative gaze points of the forward and mirror gaze zones and establishes a non-linear gaze point re-mapping relationship. In addition, an Unscented Kalman filter is used to track the driver's continuous status features. Experimental results show that the non-linear gaze point refinement method outperforms several previous gaze calibration and gaze mapping methods and improves gaze estimation accuracy even in cross-subject evaluation. The system can be used to predict the driver's attention.
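
A loose sketch of the refinement idea, assuming three known gaze-zone anchors: predicted gaze points are clustered to recover each zone's representative point, and a non-linear (here quadratic) re-mapping is fitted from the biased predictions to the anchors. The anchors, data, and the quadratic map are stand-ins for the paper's two-stage procedure.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

# Road and mirror gaze-zone anchors (deg), purely illustrative.
zone_anchors = np.array([[0.0, 0.0], [-40.0, 5.0], [35.0, 8.0]])

rng = np.random.default_rng(2)
# Biased, noisy gaze predictions scattered around each zone.
preds = np.vstack([a * 1.15 + 3.0 + rng.normal(0, 2, (200, 2)) for a in zone_anchors])

centers = KMeans(n_clusters=3, n_init=10, random_state=0).fit(preds).cluster_centers_
# Match each cluster's representative point to its nearest zone anchor.
order = [np.argmin(np.linalg.norm(zone_anchors - c, axis=1)) for c in centers]

remap = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
remap.fit(centers, zone_anchors[order])
print(remap.predict(centers).round(1))   # refined points near the true anchors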


Subject(s)
Eye-Tracking Technology; Fixation, Ocular; Algorithms; Attention; Calibration; Humans
6.
Sensors (Basel) ; 22(5)2022 Mar 05.
Article in English | MEDLINE | ID: mdl-35271188

ABSTRACT

For haptic interaction, a user in a virtual environment needs to interact with proxies attached to a robot, and the device must reach the location defined in the virtual environment at the right time. However, owing to device limitations, delays are unavoidable. One way to improve the device response is to infer the human's intended motion and move the robot toward the desired goal as early as possible. This paper presents an experimental study to improve prediction time and reduce the time the robot takes to reach the desired position. We developed motion strategies based on hand motion and eye-gaze direction to determine the point of user interaction in a virtual environment. To assess the performance of the strategies, we conducted a subject-based experiment using an exergame for reach-and-grab tasks designed for upper-limb rehabilitation training. The experimental results revealed that eye-gaze-based prediction significantly improved the detection time by 37% and the robot's time to reach the target by 27%. Further analysis provided more insight into the effect of the eye-gaze window and the hand threshold on the device response for the experimental task.


Subject(s)
Robotics; Hand/physiology; Haptic Technology; Humans; Motivation; Robotics/methods; Upper Extremity
7.
Sensors (Basel) ; 22(23)2022 Dec 02.
Article in English | MEDLINE | ID: mdl-36502099

ABSTRACT

Eye-gaze direction-tracking technology is used in fields such as medicine, education, engineering, and gaming, where stability, accuracy, and precision are demanded alongside ever faster response. In this study, a method is proposed to improve the speed of the human pupil orbit model (HPOM) estimation method while reducing system load and preserving precision. The method builds on the observation, reported in various eye-gaze direction detection studies and HPOM estimation methods, that the minor axis of the elliptically deformed pupil always points toward the rotational center. Simulation results confirmed that speed improved by at least a factor of 74, requiring less than 7 ms compared with HPOM estimation. The accuracy of the eye's ocular rotational center point showed a maximum error of approximately 0.2 pixels on the x-axis and approximately 8 pixels on the y-axis. The precision of the proposed method was 0.0 pixels when the number of estimation samples (ES) was 7 or fewer, consistent with the results of the HPOM estimation studies. However, the proposed method was judged to behave conservatively with respect to the allowable angle error (AAE), considering that the experiment was conducted under worst-case conditions and given the cost of estimating the final model. Therefore, the proposed method can estimate the HPOM with high accuracy and precision by adjusting the AAE to the system performance and usage environment.
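
The geometric core of this observation can be sketched as follows: each ellipse-fitted pupil contributes a line through its centre along the minor axis, and the ocular rotation centre is recovered as the least-squares intersection of those lines. The 2D set-up and noise levels are illustrative, not the paper's.

import numpy as np

def intersect_lines(points, directions):
    """Least-squares intersection of 2D lines (point + direction)."""
    A = np.zeros((2, 2)); b = np.zeros(2)
    for p, d in zip(points, directions):
        d = d / np.linalg.norm(d)
        M = np.eye(2) - np.outer(d, d)   # projector onto the line's normal
        A += M
        b += M @ p
    return np.linalg.solve(A, b)

true_center = np.array([320.0, 260.0])
rng = np.random.default_rng(3)
angles = rng.uniform(0, 2 * np.pi, 7)                # 7 estimation samples (ES)
pupils = true_center + 90.0 * np.c_[np.cos(angles), np.sin(angles)]
minor_axes = true_center - pupils                    # minor axis points at centre
minor_axes += rng.normal(0, 0.5, minor_axes.shape)   # small ellipse-fit noise

print(intersect_lines(pupils, minor_axes).round(2))  # ~ [320, 260]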


Subject(s)
Fixation, Ocular; Pupil; Humans; Pupil/physiology; Head; Computer Simulation
8.
Sensors (Basel) ; 22(2)2022 Jan 11.
Article in English | MEDLINE | ID: mdl-35062502

ABSTRACT

Experimental observations show a correlation between time and consecutive gaze positions in visual behavior. Previous studies on gaze point estimation usually train models on images without taking the sequential relationship between them into account. In this paper, temporal features are considered in addition to spatial features to improve accuracy, using videos instead of images as the input data. To capture spatial and temporal features at the same time, a convolutional neural network (CNN) and a long short-term memory (LSTM) network are combined into one training model: the CNN extracts the spatial features, and the LSTM correlates the temporal features. This paper presents a CNN Concatenating LSTM network (CCLN) that concatenates spatial and temporal features to improve gaze estimation on time-series video input. The proposed model is further optimized by exploring the number of LSTM layers and the influence of batch normalization (BN) and the global average pooling (GAP) layer on the CCLN. Because larger amounts of training data generally lead to better models, we also propose a method for constructing video datasets for gaze point estimation, to provide data for training and prediction. The effectiveness of different commonly used general models and the impact of transfer learning are studied as well. Exhaustive evaluation shows that the proposed method achieves better prediction accuracy than existing CNN-based methods: the best model reaches 93.1%, and the general MobileNet model reaches 92.6%.
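
A minimal PyTorch sketch of the CNN-then-LSTM idea, with BN and GAP layers included since the abstract discusses them; the layer sizes and depths are illustrative, not the CCLN architecture itself.

import torch
import torch.nn as nn

class CnnLstmGaze(nn.Module):
    def __init__(self, feat_dim=128, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.BatchNorm2d(16), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                   # global average pooling (GAP)
            nn.Flatten(), nn.Linear(32, feat_dim),
        )
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)               # (x, y) gaze point

    def forward(self, clips):                          # (B, T, 3, H, W)
        B, T = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1)).view(B, T, -1)  # per-frame features
        out, _ = self.lstm(feats)                      # correlate over time
        return self.head(out[:, -1])                   # predict from last step

print(CnnLstmGaze()(torch.randn(2, 8, 3, 64, 64)).shape)  # torch.Size([2, 2])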


Subject(s)
Eye-Tracking Technology; Neural Networks, Computer; Memory, Long-Term; Time Factors
9.
J Phys Ther Sci ; 34(1): 36-39, 2022 Jan.
Article in English | MEDLINE | ID: mdl-35035077

ABSTRACT

[Purpose] Visual assessment of the quality of movement is a common and important component of physiotherapy. The purpose of this study is to quantify therapists' level of proficiency and to derive a new proficiency index by measuring the coordinates of the gaze-tracking trajectories of therapists with different years of experience. [Participants and Methods] Eighteen volunteer physiotherapists (1st year (n=4), 7th year (n=1), 9th year (n=4), 10th year (n=3), 11th year (n=4), 13th year (n=1), and 21st year (n=1)) were recruited for this study. [Results] Discriminant analysis based on the deviation between the X-axis and Y-axis ranges of each therapist's gaze-tracking trajectory during movement analysis classified therapists with 10 or fewer years of experience with 72.2% accuracy. Cluster analysis yielded two clusters; the thirteen therapists in Cluster 2 were in their 9th year or beyond. Thus, eye-tracking trajectories can distinguish therapists by the 10th year of experience. [Conclusion] Fully fledged therapists with 10 years of experience showed an expanded range of eye tracking, and the trajectory in the Y-axis direction tended to extend from the 9th year of experience onward. In this respect, quantitative judgments of eye-tracking results can serve as indicators of proficiency, and eye movements are an important tool for objectively measuring skills gained from experience.

10.
Sensors (Basel) ; 21(4)2021 Feb 17.
Article in English | MEDLINE | ID: mdl-33671222

ABSTRACT

Presentation attack artefacts can be used to subvert the operation of biometric systems by being presented to their sensors. In this work, we propose the use of visual stimuli with randomised trajectories to stimulate eye movements for the detection of such spoofing attacks. The presentation of a moving visual challenge ensures that some pupillary motion is stimulated and then captured with a camera. Various challenge trajectories are explored on different planar geometries representing prospective devices on which the challenge could be presented to users. To evaluate the system, photo, 2D mask, and 3D mask attack artefacts were used, and pupillary movement data were captured from 80 volunteers performing genuine and spoofing attempts. The results support the potential of the proposed features for the detection of biometric presentation attacks.


Subject(s)
Algorithms; Biometry; Eye Movements; Humans; Prospective Studies
11.
Sensors (Basel) ; 21(5)2021 Mar 04.
Article in English | MEDLINE | ID: mdl-33806263

ABSTRACT

In this paper, we propose a method for detecting the salient objects that gaze concentrates on in a single image, without requiring a gaze-tracking device. A network was constructed using Neg-Region Attention (NRA), which predicts objects that attract a concentrated line of sight using deep learning techniques. Existing deep-learning-based methods have an autoencoder structure, which causes feature loss during the encoding process of compressing and extracting features from the image and during the decoding process of expanding and restoring them. As a result, features are lost in the object area of the detection results, or another area is detected as the object. The proposed NRA reduces feature loss and emphasizes object areas within the encoder. After separating positive and negative regions using the exponential linear unit (ELU) activation function, attention is applied separately to each region. This attention method, applied without an additional backbone network, emphasizes the object area and suppresses the background area. In the experimental results, the proposed method showed better detection results than the conventional methods.
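
Since the exact NRA block is not specified here, the sketch below is only one interpretation of the described mechanism: activations are split into positive and negative regions with ELU, each region receives its own attention map, and the two are recombined.

import torch
import torch.nn as nn
import torch.nn.functional as F

class NegRegionAttention(nn.Module):
    """Interpretive sketch of region-separated attention, not the paper's block."""
    def __init__(self, ch):
        super().__init__()
        self.att_pos = nn.Sequential(nn.Conv2d(ch, ch, 1), nn.Sigmoid())
        self.att_neg = nn.Sequential(nn.Conv2d(ch, ch, 1), nn.Sigmoid())

    def forward(self, x):
        a = F.elu(x)
        pos = a.clamp(min=0)                # positive region
        neg = a.clamp(max=0)                # negative region (ELU bounds it at -1)
        return pos * self.att_pos(pos) + neg * self.att_neg(neg)

print(NegRegionAttention(32)(torch.randn(1, 32, 56, 56)).shape)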


Subject(s)
Eye-Tracking Technology; Neural Networks, Computer
12.
Behav Res Methods ; 53(2): 487-506, 2021 04.
Article in English | MEDLINE | ID: mdl-32748237

ABSTRACT

Eye and head movements are used to scan the environment when driving. In particular, when approaching an intersection, large gaze scans to the left and right, comprising head and multiple eye movements, are made. We detail an algorithm called the gaze scan algorithm that automatically quantifies the magnitude, duration, and composition of such large lateral gaze scans. The algorithm works by first detecting lateral saccades, then merging these lateral saccades into gaze scans, with the start and end points of each gaze scan marked in time and eccentricity. We evaluated the algorithm by comparing gaze scans generated by the algorithm to manually marked "consensus ground truth" gaze scans taken from gaze data collected in a high-fidelity driving simulator. We found that the gaze scan algorithm successfully marked 96% of gaze scans and produced magnitudes and durations close to ground truth. Furthermore, the differences between the algorithm and ground truth were similar to the differences found between expert coders. Therefore, the algorithm may be used in lieu of manual marking of gaze data, significantly accelerating the time-consuming marking of gaze movement data in driving simulator studies. The algorithm also complements existing eye tracking and mobility research by quantifying the number, direction, magnitude, and timing of gaze scans and can be used to better understand how individuals scan their environment.
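
A compact sketch of the merging step, assuming saccades are already detected and represented by start/end times and signed eccentricities; the gap threshold is illustrative.

# Sketch: merge consecutive same-direction lateral saccades separated by
# brief fixations into one gaze scan with its own start/end time and
# eccentricity.
def merge_saccades(saccades, max_gap=0.4):
    """saccades: dicts with t0, t1 (s) and e0, e1 (deg, signed
    eccentricity; left negative). Returns merged gaze scans."""
    scans = []
    for s in sorted(saccades, key=lambda s: s["t0"]):
        direction = 1 if s["e1"] > s["e0"] else -1
        if (scans
                and s["t0"] - scans[-1]["t1"] <= max_gap
                and scans[-1]["dir"] == direction):
            scans[-1]["t1"] = s["t1"]            # extend the current scan
            scans[-1]["e1"] = s["e1"]
        else:
            scans.append({**s, "dir": direction})
    return scans

sacc = [dict(t0=0.0, t1=0.1, e0=0, e1=-25),
        dict(t0=0.3, t1=0.4, e0=-25, e1=-60),    # merged with the first
        dict(t0=1.2, t1=1.3, e0=-60, e1=-10)]    # new scan (direction flip)
for scan in merge_saccades(sacc):
    print(scan)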


Subject(s)
Automobile Driving; Eye Movements; Fixation, Ocular; Head; Head Movements; Humans; Saccades
13.
Sensors (Basel) ; 20(13)2020 Jul 03.
Article in English | MEDLINE | ID: mdl-32635375

ABSTRACT

The automatic detection of eye positions, their temporal consistency, and their mapping into a real-world line of sight (to find where a person is looking) is known in the scientific literature as gaze tracking. It has become a very active topic in computer vision during the last decades, with a continuously growing number of application fields. A long journey has been made from the first pioneering works, and the search for more accurate solutions has been further boosted in the last decade, when deep neural networks revolutionized the whole machine learning area, gaze tracking included. In this arena, survey and review articles that collect the most relevant works, lay out the pros and cons of existing techniques, and introduce a precise taxonomy are increasingly useful, as they allow researchers and technicians to choose the best way toward their application or scientific goals. The literature contains holistic and specifically technological surveys (even if not up to date), but, unfortunately, no overview discussing how the great advancements in computer vision have impacted gaze tracking. This work attempts to fill that gap and introduces a wider point of view leading to a new taxonomy (extending the consolidated ones) that treats gaze tracking as a broader task of estimating the gaze target from different perspectives: from the eye of the beholder (first-person view), from an external camera framing the beholder, from a third-person view looking at the scene in which the beholder is placed, and from an external view independent of the beholder.


Subject(s)
Eye Movements; Eye-Tracking Technology/instrumentation; Eye; Fixation, Ocular; Computers; Humans; Neural Networks, Computer
14.
Biomed Eng Online ; 18(1): 51, 2019 May 03.
Article in English | MEDLINE | ID: mdl-31053071

ABSTRACT

BACKGROUND: Avoidance of looking others in the eye is a characteristic symptom of Autism Spectrum Disorders (ASD), and it has been hypothesised that quantitative monitoring of gaze patterns could be useful to objectively evaluate treatments. However, tools to measure gaze behaviour on a regular basis at a manageable cost are missing. In this paper, we investigated whether a smartphone-based tool could address this problem. Specifically, we assessed the accuracy with which the phone-based, state-of-the-art eye-tracking algorithm iTracker can distinguish between gaze towards the eyes and the mouth of a face displayed on the smartphone screen. This might allow mobile, longitudinal monitoring of gaze aversion behaviour in ASD patients in the future. RESULTS: We simulated a smartphone application in which subjects were shown an image on the screen and their gaze was analysed using iTracker. We evaluated the accuracy of our set-up across three tasks in a cohort of 17 healthy volunteers. In the first two tasks, subjects were shown different-sized images of a face and asked to alternate their gaze focus between the eyes and the mouth. In the last task, participants were asked to trace out a circle on the screen with their eyes. We confirm that iTracker can recapitulate the true gaze patterns and capture the relative position of gaze correctly, even on a different phone system from the one it was trained on. Subject-specific bias can be corrected using an error model informed by the calibration data. We compare two calibration methods and observe that a linear model performs better than a previously proposed support-vector-regression-based method. CONCLUSIONS: Under controlled conditions it is possible to reliably distinguish between gaze towards the eyes and the mouth with a smartphone-based set-up. However, future research will be required to improve the robustness of the system to the roll angle of the phone and the distance between the user and the screen to allow deployment in a home setting. We conclude that a smartphone-based gaze-monitoring tool provides promising opportunities for more quantitative monitoring of ASD.
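
A minimal sketch of the favoured calibration approach, assuming a handful of calibration targets: a per-subject linear model is fitted from raw iTracker estimates to true target positions and then applied to new estimates. The distortion model and data are synthetic.

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
true_cal = rng.uniform(-3, 3, (9, 2))                 # 9 calibration targets (cm)
bias = np.array([[1.1, 0.05], [-0.08, 0.95]])         # subject-specific distortion
raw_cal = true_cal @ bias.T + np.array([0.4, -0.6]) + rng.normal(0, 0.1, (9, 2))

model = LinearRegression().fit(raw_cal, true_cal)     # raw -> corrected gaze

raw_test = np.array([[0.5, -1.0]]) @ bias.T + np.array([0.4, -0.6])
print(model.predict(raw_test).round(2))               # ~ [0.5, -1.0]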


Subject(s)
Autism Spectrum Disorder/physiopathology; Eye Movements; Smartphone; Adult; Female; Humans; Male; Young Adult
15.
Sensors (Basel) ; 19(1)2019 Jan 07.
Article in English | MEDLINE | ID: mdl-30621110

ABSTRACT

Camera-based driver gaze tracking in the vehicle environment is being actively studied, both for vehicle interfaces and for analyzing forward attention to judge driver inattention. In existing single-camera methods, the eye information needed for gaze tracking frequently cannot be observed well in the camera image because the driver turns their head while driving. To solve this problem, existing studies have used multiple cameras to obtain images for tracking the driver's gaze. However, this approach incurs excessive computation and processing time, as it detects the eyes and extracts features in every image obtained from all cameras, which makes it difficult to deploy in an actual vehicle. To overcome these limitations, this study proposes a method that applies a shallow convolutional neural network (CNN) to the driver's face images acquired from two cameras to adaptively select the camera image more suitable for detecting eye position; a faster R-CNN is applied to the selected driver images, and after the driver's eyes are detected, the eye positions in the other camera's image are mapped through a geometric transformation matrix. Experiments were conducted using the self-built Dongguk Dual Camera-based Driver Database (DDCD-DB1), comprising images of 26 participants acquired inside a vehicle, and the open Columbia Gaze Data Set (CAVE-DB). The results confirmed that the performance of the proposed method is superior to those of the existing methods.
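
The final mapping step might look like the sketch below, where a homography estimated from corresponding points seen by both cameras stands in for the geometric transformation matrix; all coordinates are made up.

import cv2
import numpy as np

# Corresponding landmark coordinates observed by camera A and camera B.
pts_a = np.float32([[210, 180], [430, 175], [320, 300], [250, 390], [395, 385]])
pts_b = np.float32([[160, 210], [380, 190], [275, 320], [205, 405], [350, 395]])

H, _ = cv2.findHomography(pts_a, pts_b, method=0)      # least-squares fit

eyes_a = np.float32([[[260, 200]], [[380, 195]]])      # detected eye centres in A
eyes_b = cv2.perspectiveTransform(eyes_a, H)           # mapped into camera B
print(eyes_b.reshape(-1, 2).round(1))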


Subject(s)
Attention/physiology; Automobile Driving; Eye Movements/physiology; Neural Networks, Computer; Humans
16.
Sensors (Basel) ; 19(24)2019 Dec 14.
Article in English | MEDLINE | ID: mdl-31847432

ABSTRACT

Tracking drivers' eyes and gazes is a topic of great interest in research on advanced driving assistance systems (ADAS). It is a matter of particular concern in the road-safety research community, as visual distraction is considered among the major causes of road accidents. In this paper, techniques for eye and gaze tracking are first comprehensively reviewed by major category, and the advantages and limitations of each category are explained with respect to its requirements and practical uses. The paper then discusses the applications of eye- and gaze-tracking systems in ADAS, explaining how a driver's eye and gaze data are acquired, the algorithms used to process these data, and how the data can be used in ADAS to reduce the losses associated with road accidents caused by the driver's visual distraction. A discussion of the required features of current and future eye and gaze trackers is also presented.


Subject(s)
Automobile Driving; Accidents, Traffic/prevention & control; Algorithms; Eye Movements/physiology; Humans
17.
Sensors (Basel) ; 18(9)2018 Aug 31.
Article in English | MEDLINE | ID: mdl-30200380

ABSTRACT

We introduce a two-stream model that uses reflexive eye movements for smart mobile device authentication. Our model is based on two pre-trained neural networks, iTracker and PredNet, targeting two independent tasks: (i) gaze tracking and (ii) future frame prediction. We design a procedure that randomly generates a visual stimulus on the screen of the mobile device while the frontal camera simultaneously captures the user's head motions as they watch it. iTracker then calculates the gaze-coordinate error, which is treated as a static feature. To compensate for the imprecise gaze coordinates caused by the low resolution of the frontal camera, we further use PredNet to extract dynamic features between consecutive frames. To resist traditional attacks (shoulder surfing and impersonation) during mobile device authentication, we combine the static and dynamic features to train a 2-class support vector machine (SVM) classifier. The experimental results show that the classifier authenticates the user's identity with an accuracy of 98.6%.
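
A sketch of the classification stage under stated assumptions: random stand-ins replace the iTracker and PredNet outputs, and a 2-class SVM is trained on the concatenated static and dynamic features.

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n = 400
static = rng.normal(0, 1, (n, 2))                    # gaze-coordinate error stand-in
dynamic = rng.normal(0, 1, (n, 16))                  # inter-frame motion stand-in
y = (np.linalg.norm(static, axis=1) + dynamic[:, 0] < 1.5).astype(int)  # toy labels

X = np.hstack([static, dynamic])                     # combined feature vector
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)
clf = SVC(kernel="rbf").fit(Xtr, ytr)
print(f"accuracy: {clf.score(Xte, yte):.3f}")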


Subject(s)
Biometric Identification/methods; Cell Phone; Eye Movements/physiology; Head Movements/physiology; Adult; Female; Fixation, Ocular/physiology; Humans; Male; Support Vector Machine; Young Adult
18.
Sensors (Basel) ; 18(5)2018 May 19.
Article in English | MEDLINE | ID: mdl-29783738

ABSTRACT

Eye-tracking technology has become increasingly important for psychological analysis, medical diagnosis, driver assistance systems, and many other applications, and various gaze-tracking models have been established by previous researchers. However, there is currently no near-eye display system with both accurate gaze-tracking performance and a convenient user experience. In this paper, we constructed a complete prototype of the mobile gaze-tracking system 'Etracker' with a near-eye viewing device for human gaze tracking, and we propose a combined gaze-tracking algorithm. In this algorithm, a convolutional neural network is used to remove blinking images and predict a coarse gaze position, and a geometric model is then defined for accurate gaze tracking. Moreover, we propose using the mean value of gazes in the calibration algorithm to resolve pupil-center changes caused by nystagmus, so that an individual user only needs to calibrate the system the first time; this makes our system more convenient. Experiments on gaze data from 26 participants show that the eye-center detection accuracy is 98% and that Etracker provides an average gaze accuracy of 0.53° at a rate of 30-60 Hz.
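
The mean-of-gazes calibration idea can be sketched as follows, with illustrative numbers: while the user fixates each target, nystagmus makes the measured gaze wander, and the per-target mean of the samples is taken as the calibration point.

import numpy as np

rng = np.random.default_rng(6)
targets = np.array([[100., 100.], [860., 100.], [480., 300.],
                    [100., 500.], [860., 500.]])       # on-screen targets (px)

calib_points = []
for t in targets:
    samples = t + rng.normal(0, 6.0, (90, 2))          # ~1.5 s of wandering gaze
    calib_points.append(samples.mean(axis=0))          # mean cancels the wander
calib_points = np.array(calib_points)

print(np.abs(calib_points - targets).max().round(2))   # residual error in px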

19.
Sensors (Basel) ; 18(2)2018 Feb 03.
Article in English | MEDLINE | ID: mdl-29401681

ABSTRACT

A paradigm shift is required to prevent the increasing number of automobile accident deaths that are mostly due to drivers' inattentive behavior. Knowledge of the gaze region provides valuable information regarding a driver's point of attention, so accurate and inexpensive gaze classification systems in cars can improve safe driving. However, monitoring real-time driving behavior and conditions presents challenges: dizziness due to long drives, extreme lighting variations, reflections on glasses, and occlusions. Past studies on gaze detection in cars have been based chiefly on head movements, but the error of gaze detection increases when drivers gaze at objects by moving their eyes without moving their heads. A pupil center corneal reflection (PCCR)-based method has been considered to solve this problem; however, accurately detecting the pupil center and the corneal reflection center is harder in a car environment owing to environmental light changes, reflections on the surface of glasses, and motion and optical blurring of the captured eye image. In addition, existing PCCR-based methods require initial user calibration, which is difficult to perform in a car. To address these issues, we propose a deep-learning-based gaze detection method using a near-infrared (NIR) camera sensor that considers the driver's head and eye movement and does not require any initial user calibration. The proposed system is evaluated on our self-constructed database as well as on the open Columbia gaze dataset (CAVE-DB), and it demonstrated greater accuracy than previous gaze classification methods.
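
For background, a sketch of the PCCR principle the paper builds on: the vector from the corneal glint to the pupil centre varies with eye rotation but is largely invariant to small head translations, so it serves as the gaze feature. The affine map to screen coordinates shown here is the kind of user calibration that the proposed deep method avoids; all numbers are illustrative.

import numpy as np

def pccr_vector(pupil_center, glint_center):
    """Pupil-centre minus corneal-reflection centre, the PCCR gaze feature."""
    return np.asarray(pupil_center, float) - np.asarray(glint_center, float)

# Illustrative calibration: known PCCR vectors and their screen points.
vecs = np.array([[-6, -4], [6, -4], [-6, 4], [6, 4], [0, 0]], float)
scr = np.array([[0, 0], [1920, 0], [0, 1080], [1920, 1080], [960, 540]], float)
A, *_ = np.linalg.lstsq(np.c_[vecs, np.ones(5)], scr, rcond=None)  # affine fit

v = pccr_vector((312, 248), (309, 246))               # pupil vs glint (px)
print((np.r_[v, 1.0] @ A).round(1))                   # estimated screen point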


Subject(s)
Machine Learning; Automobile Driving; Automobiles; Eye Movements; Fixation, Ocular; Head Movements; Humans
20.
Scand J Psychol ; 59(4): 360-367, 2018 Aug.
Article in English | MEDLINE | ID: mdl-29697860

ABSTRACT

Recent research has revealed enhanced autonomic and subjective responses to eye contact only when perceiving another live person. However, these enhanced responses are abolished if the viewer believes that the other person is not able to look back at them. We aimed to investigate whether this "genuine" eye contact effect can be reproduced with pre-recorded videos of stimulus persons. Autonomic responses, gaze behavior, and subjective self-assessments were measured while participants viewed pre-recorded video persons with direct or averted gaze, imagined that the video person was real, and mentalized that the person either could or could not see them. Pre-recorded videos did not evoke the physiological or subjective eye contact effects previously observed with live persons, not even when the participants mentalized being seen by the person. Gaze-tracking results showed, however, increased attention allocation to faces with direct gaze compared to averted gaze. The results suggest that eliciting physiological arousal in response to genuine eye contact requires the spontaneous experience of seeing, and of being seen by, another individual.


Subject(s)
Arousal/physiology; Autonomic Nervous System/physiology; Facial Recognition/physiology; Fixation, Ocular/physiology; Galvanic Skin Response/physiology; Social Perception; Theory of Mind/physiology; Adult; Female; Humans; Male; Young Adult