Results 1-20 of 131
1.
Cogn Affect Behav Neurosci ; 24(4): 720-739, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38773022

ABSTRACT

"Pavlovian" or "motivational" biases describe the phenomenon that the valence of prospective outcomes modulates action invigoration: Reward prospect invigorates action, whereas punishment prospect suppresses it. The adaptive role of these biases in decision-making is still unclear. One idea is that they constitute a fast-and-frugal decision strategy in situations characterized by high arousal, e.g., in the presence of a predator, which demand a quick response. In this pre-registered study (N = 35), we tested whether such a situation, induced via subliminally presented angry versus neutral faces, leads to increased reliance on Pavlovian biases. We measured trial-by-trial arousal by tracking pupil diameter while participants performed an orthogonalized Motivational Go/NoGo Task. Pavlovian biases were present in responses, reaction times, and even gaze, with lower gaze dispersion under aversive cues reflecting "freezing of gaze." The subliminally presented faces did not affect responses, reaction times, or pupil diameter, suggesting that the arousal manipulation was ineffective. However, pupil dilations reflected facets of bias suppression, specifically the physical (but not cognitive) effort needed to overcome aversive inhibition: Particularly strong and sustained dilations occurred when participants managed to perform Go responses to aversive cues. Conversely, no such dilations occurred when they managed to inhibit responses to Win cues. These results suggest that pupil diameter does not reflect response conflict per se nor the inhibition of prepotent responses, but specifically effortful action invigoration as needed to overcome aversive inhibition. We discuss our results in the context of the "value of work" theory of striatal dopamine.


Subjects
Classical Conditioning , Motivation , Pupil , Reaction Time , Humans , Pupil/physiology , Male , Female , Young Adult , Adult , Classical Conditioning/physiology , Reaction Time/physiology , Motivation/physiology , Arousal/physiology , Inhibition, Psychological , Facial Expression , Decision Making/physiology , Reward , Cues
2.
Dev Sci ; 27(2): e13452, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37800410

ABSTRACT

Adults shift their attention to the right or to the left along a spatial continuum when solving additions and subtractions, respectively. Studies suggest that these shifts not only support the exact computation of the results but also anticipatively narrow down the range of plausible answers when processing the operands. However, little is known about when and how these attentional shifts arise in childhood during the acquisition of arithmetic. Here, an eye-tracker with high spatio-temporal resolution was used to measure spontaneous eye movements, used as a proxy for attentional shifts, while children in 2nd Grade (8 y-o; N = 50) and 4th Grade (10 y-o; N = 48) solved simple additions (e.g., 4+3) and subtractions (e.g., 3-2). Gaze patterns revealed horizontal and vertical attentional shifts in both groups. Critically, horizontal eye movements were observed in 4th Graders as soon as the first operand and the operator were presented, and thus before the beginning of the exact computation. In 2nd Graders, attentional shifts were only observed after the presentation of the second operand, just before the response was made. This demonstrates that spatial attention is recruited when children solve arithmetic problems, even in the early stages of learning mathematics. The time course of these attentional shifts suggests that with practice in arithmetic, children start to use spatial attention to anticipatively guide the search for the answer and facilitate the implementation of solving procedures.
RESEARCH HIGHLIGHTS:
Additions and subtractions are associated with right and left attentional shifts in adults, but it is unknown when these mechanisms arise in childhood.
Children of 8-10 years old solved single-digit additions and subtractions while looking at a blank screen.
Eye movements showed that 8-year-old children already show spatial biases, possibly to represent the response once both operands are known.
10-year-old children shift attention before knowing the second operand, to anticipatively guide the search for plausible answers.


Subjects
Eye Movements , Problem Solving , Adult , Child , Humans , Problem Solving/physiology , Learning , Movement , Mathematics , Reaction Time/physiology
3.
Behav Res Methods ; 56(4): 3300-3314, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38448726

ABSTRACT

Eye movements offer valuable insights for clinical interventions, diagnostics, and understanding visual perception. The process usually involves recording a participant's eye movements and analyzing them in terms of various gaze events. Manual identification of these events is extremely time-consuming. Although the field has seen the development of automatic event detection and classification methods, these methods have primarily focused on distinguishing events when participants remain stationary. With increasing interest in studying gaze behavior in freely moving participants, such as during daily activities like walking, new methods are required to automatically classify events in data collected under unrestricted conditions. Existing methods often rely on additional information from depth cameras or inertial measurement units (IMUs), which are not typically integrated into mobile eye trackers. To address this challenge, we present a framework for classifying gaze events based solely on eye-movement signals and scene video footage. Our approach, the Automatic Classification of gaze Events in Dynamic and Natural Viewing (ACE-DNV), analyzes eye movements in terms of velocity and direction and leverages visual odometry to capture head and body motion. Additionally, ACE-DNV assesses changes in image content surrounding the point of gaze. We evaluated the performance of ACE-DNV using a publicly available dataset and showcased its ability to discriminate between gaze fixation, gaze pursuit, gaze following, and gaze shifting (saccade) events. ACE-DNV exhibited comparable performance to previous methods, while eliminating the necessity for additional devices such as IMUs and depth cameras. In summary, ACE-DNV simplifies the automatic classification of gaze events in natural and dynamic environments. The source code is accessible at https://github.com/arnejad/ACE-DNV.


Subjects
Eye Movements , Eye-Tracking Technology , Fixation, Ocular , Humans , Eye Movements/physiology , Fixation, Ocular/physiology , Visual Perception/physiology , Video Recording/methods , Male , Adult , Female
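As a rough illustration of the kind of velocity-based event logic this abstract describes, the sketch below labels a single gaze sample from eye-in-head and head velocities. The thresholds, the event-to-velocity mapping, and the decision order are illustrative assumptions, not ACE-DNV's published parameters.

```python
# Illustrative thresholds in deg/s; ACE-DNV's actual parameters may differ.
SACCADE_VEL = 100.0   # fast eye-in-head motion -> gaze shift (saccade)
STABLE_VEL = 10.0     # below this, the signal is treated as (near) still

def classify_sample(eye_vel, head_vel):
    """Label one gaze sample from eye-in-head and head velocities (deg/s).

    - gaze shift: the eye moves fast regardless of the head
    - fixation: both eye and head are approximately still
    - gaze following: the eye is still in the head while the head moves,
      so gaze tracks a world point via head motion
    - gaze pursuit: the eye moves slowly to track a moving target
    """
    if eye_vel > SACCADE_VEL:
        return "gaze shift"
    if eye_vel < STABLE_VEL:
        return "fixation" if head_vel < STABLE_VEL else "gaze following"
    return "gaze pursuit"
```

In the actual method, the head/body velocity would come from visual odometry on the scene video rather than from an IMU.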
4.
Conscious Cogn ; 113: 103551, 2023 08.
Article in English | MEDLINE | ID: mdl-37429212

ABSTRACT

This study investigates bistable perception as a function of the presentation side of the ambiguous figures and of participants' sex, to evaluate left-right hemispheric (LH-RH) asymmetries related to consciousness. In two experiments using the divided visual field paradigm, two Rubin's vase-faces figures were projected simultaneously and continuously for 180 s to the left (LVF) and right (RVF; Experiment 1) or to the upper (UVF) and lower (DVF; Experiment 2) visual hemifields of 48 healthy subjects monitored with an eye-tracker. Experiment 1 enables stimulus segregation from the LVF to the RH and from the RVF to the LH, whereas Experiment 2 does not. Results from Experiment 1 show that males perceived the face profiles for more time in the LVF than in the RVF, with an opposite trend for the vase, whereas females show a similar pattern of perception in the two hemifields. A related result confirmed the previously reported possibility of having simultaneously two different percepts (qualia) in the two hemifields elicited by the two identical ambiguous stimuli, which was here observed to occur more frequently in males. Similar effects were not observed in Experiment 2. These findings suggest that the percepts display the processing abilities of the hemisphere currently processing the stimulus eliciting them (e.g., RH-faces), and that bistable perception, a genuine manifestation of consciousness, reflects in females and males the well-known hemispheric asymmetry differences they show in ordinary perception.


Subjects
Visual Fields , Visual Perception , Male , Female , Humans , Functional Laterality
5.
Ophthalmic Physiol Opt ; 43(6): 1540-1549, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37470168

ABSTRACT

PURPOSE: This study presents a novel video-based eye-tracking system for analysing the dynamics of the binocular near-reflex response. The system enables the simultaneous measurement of convergence, divergence and pupillary size during accommodation and disaccommodation to aid the comprehensive understanding of the three-component near-reflex. METHODS: A high-speed (90 Hz) video-based eye tracker was used to capture changes in eye gaze and pupil radius in 15 participants in response to altering stimulus conditions. An offline analysis involved separating the gaze vector components and pupil radius, which were fitted to a hyperbolic tangent function to characterise the dynamics of the near-reflex process. RESULTS: Significant differences in the temporal parameters of the pupil radius were observed between the near-to-far and far-to-near vision changes, with faster miosis compared with mydriasis. Additionally, differences in response times were found between gaze angle components, with longer convergence times compared to changes in the vertical direction (saccades). The steady-state values of the gaze components and pupil radius were in line with theoretical expectations and previous reports. CONCLUSIONS: The proposed system provides a non-invasive, portable and cost-effective method for evaluating near-reflex dynamics under natural viewing conditions using a video-based eye tracker. The sampling rate ensures the accurate assessment of vergence eye movements and pupillary dynamics. By simultaneously measuring eye convergence, divergence and pupil size, the system offers a more comprehensive assessment of the near-reflex response. This makes it a valuable tool for clinical diagnosis, research studies and investigating the effects of near work on the visual system.
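The hyperbolic-tangent characterization of a near-reflex component mentioned in the METHODS could be sketched as below. The grid-search fitter, the synthetic 0.6 mm miosis, and the time constants are illustrative assumptions, not the study's actual values or fitting procedure.

```python
import numpy as np

def tanh_step(t, t0, tau):
    # Transition basis rising from 0 to 1 around t0 with time constant tau
    return 0.5 * (1 + np.tanh((t - t0) / tau))

def fit_near_reflex(t, y, t0_grid, tau_grid):
    """Fit y(t) = baseline + amplitude * tanh_step(t; t0, tau).

    For each candidate (t0, tau), baseline and amplitude follow from
    linear least squares; the pair with the smallest residual wins.
    """
    best = None
    for t0 in t0_grid:
        for tau in tau_grid:
            A = np.column_stack([np.ones_like(t), tanh_step(t, t0, tau)])
            coef, *_ = np.linalg.lstsq(A, y, rcond=None)
            sse = float(np.sum((A @ coef - y) ** 2))
            if best is None or sse < best[0]:
                best = (sse, t0, tau, coef[0], coef[1])
    _, t0, tau, baseline, amplitude = best
    return t0, tau, baseline, amplitude

# Synthetic pupil-radius trace (mm) sampled at the tracker's 90 Hz:
# a 0.6 mm miosis starting around t = 1.5 s with tau = 0.25 s.
fs = 90.0
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(0)
y = 2.0 - 0.6 * tanh_step(t, 1.5, 0.25) + rng.normal(0, 0.01, t.size)

t0, tau, baseline, amplitude = fit_near_reflex(
    t, y, np.arange(0.5, 3.0, 0.02), np.arange(0.05, 1.0, 0.02))
```

The temporal parameters (t0, tau) recovered this way are the kind of quantities one would compare between near-to-far and far-to-near transitions.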

6.
Sensors (Basel) ; 23(4)2023 Feb 18.
Article in English | MEDLINE | ID: mdl-36850892

ABSTRACT

Understanding users' visual attention on websites is paramount to enhance the browsing experience, such as providing emergent information or dynamically adapting Web interfaces. Existing approaches to accomplish these challenges are generally based on the computation of salience maps of static Web interfaces, while websites increasingly become more dynamic and interactive. This paper proposes a method and provides a proof-of-concept to predict user's visual attention on specific regions of a website with dynamic components. This method predicts the regions of a user's visual attention without requiring a constant recording of the current layout of the website, but rather by knowing the structure it presented in a past period. To address this challenge, the concept of visit intention is introduced in this paper, defined as the probability that a user, while browsing, will fixate their gaze on a specific region of the website in the next period. Our approach uses the gaze patterns of a population that browsed a specific website, captured via an eye-tracker device, to aid personalized prediction models built with individual visual kinetics features. We show experimentally that it is possible to conduct such a prediction through multilabel classification models using a small number of users, obtaining an average area under curve of 84.3%, and an average accuracy of 79%. Furthermore, the user's visual kinetics features are consistently selected in every set of a cross-validation evaluation.
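A minimal sketch of the visit-intention idea (the probability that a user's gaze lands in each region of the page in the next period). This uses a simple frequency-based estimate with Laplace smoothing as a stand-in; the paper's actual approach builds multilabel classifiers with individual visual-kinetics features.

```python
import numpy as np

def visit_intention(gaze_history, n_regions, window=50):
    """Estimate per-region visit intention from a user's recent fixations.

    gaze_history: sequence of region indices, most recent last.
    Returns a probability per region (Laplace-smoothed so unseen regions
    keep a small nonzero probability).
    """
    recent = np.asarray(gaze_history[-window:])
    counts = np.bincount(recent, minlength=n_regions).astype(float)
    return (counts + 1) / (counts.sum() + n_regions)

def predict_regions(gaze_history, n_regions, threshold=0.2):
    """Multilabel prediction: all regions whose intention exceeds threshold."""
    probs = visit_intention(gaze_history, n_regions)
    return [r for r in range(n_regions) if probs[r] >= threshold]
```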

7.
Sensors (Basel) ; 23(23)2023 Dec 01.
Article in English | MEDLINE | ID: mdl-38067928

ABSTRACT

The aim of this study was to assess the characteristics of visual search behavior in elderly drivers in reverse parking. Fourteen healthy elderly and fourteen expert drivers performed a perpendicular parking task. The parking process was divided into three consecutive phases (Forward, Reverse, and Straighten the wheel) and the visual search behavior was monitored using an eye tracker (Tobii Pro Glasses 2). In addition, driving-related tests and quality of life were evaluated in elderly drivers. Elderly drivers spent less time gazing at the vertex of the parking space, both in direct vision and reflected in the driver-side mirror, during the Forward and the Reverse phases. In contrast, they had increased gaze time in the passenger-side mirror in the Straighten the wheel phase. Multiple regression analysis revealed that quality of life could be predicted by the total gaze time in the Straighten the wheel phase (β = -0.45), driving attitude (β = 0.62), and driving performance (β = 0.58); the adjusted R2 value was 0.87. These observations could improve our understanding of the characteristics of visual search behavior in parking performance and how this behavior is related to quality of life in elderly drivers.


Subjects
Automobile Driving , Quality of Life , Humans , Aged , Vision, Ocular , Regression Analysis , Accidents, Traffic
8.
Behav Res Methods ; 2023 Aug 07.
Article in English | MEDLINE | ID: mdl-37550466

ABSTRACT

Over the past few decades, there have been significant developments in eye-tracking technology, particularly in the domain of mobile, head-mounted devices. Nevertheless, questions remain regarding the accuracy of these eye-trackers during static and dynamic tasks. In light of this, we evaluated the performance of two widely used devices: Tobii Pro Glasses 2 and Tobii Pro Glasses 3. A total of 36 participants engaged in tasks under three dynamicity conditions. In the "seated with a chinrest" trial, only the eyes could be moved; in the "seated without a chinrest" trial, both the head and the eyes were free to move; and during the walking trial, participants walked along a straight path. During the seated trials, participants' gaze was directed towards dots on a wall by means of audio instructions, whereas in the walking trial, participants maintained their gaze on a bullseye while walking towards it. Eye-tracker accuracy was determined using computer vision techniques to identify the target within the scene camera image. The findings showed that Tobii 3 outperformed Tobii 2 in terms of accuracy during the walking trials. Moreover, the results suggest that employing a chinrest in the case of head-mounted eye-trackers is counterproductive, as it necessitates larger eye eccentricities for target fixation, thereby compromising accuracy compared to not using a chinrest, which allows for head movement. Lastly, it was found that participants who reported higher workload demonstrated poorer eye-tracking accuracy. The current findings may be useful in the design of experiments that involve head-mounted eye-trackers.
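Accuracy in evaluations like this one is typically expressed as the angle between the reported gaze point and the detected target, both projected into the scene-camera image. A sketch under a pinhole-camera assumption; the intrinsics are made up, and this is not the authors' actual pipeline.

```python
import numpy as np

def pixel_to_ray(px, py, fx, fy, cx, cy):
    """Back-project a scene-camera pixel to a unit direction vector
    (pinhole model; fx, fy are focal lengths, cx, cy the principal point)."""
    v = np.array([(px - cx) / fx, (py - cy) / fy, 1.0])
    return v / np.linalg.norm(v)

def angular_error_deg(gaze_px, target_px, fx, fy, cx, cy):
    """One accuracy sample: the angle (deg) between the gaze point and
    the detected target, expressed as rays in the scene-camera frame."""
    g = pixel_to_ray(*gaze_px, fx, fy, cx, cy)
    t = pixel_to_ray(*target_px, fx, fy, cx, cy)
    cosang = np.clip(np.dot(g, t), -1.0, 1.0)
    return np.degrees(np.arccos(cosang))
```

Averaging such samples per condition would give the per-trial accuracy figures compared across devices and dynamicity conditions.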

9.
Behav Res Methods ; 55(3): 1372-1391, 2023 04.
Article in English | MEDLINE | ID: mdl-35650384

ABSTRACT

With continued advancements in portable eye-tracker technology liberating experimenters from the restraints of artificial laboratory designs, research can now collect gaze data from real-world, natural navigation. However, the field lacks a robust method for achieving this, as past approaches relied upon the time-consuming manual annotation of eye-tracking data, while previous attempts at automation lack the necessary versatility for in-the-wild navigation trials consisting of complex and dynamic scenes. Here, we propose a system capable of informing researchers of where and what a user's gaze is focused upon at any one time. The system achieves this by first running footage recorded on a head-mounted camera through a deep-learning-based object detection algorithm called Masked Region-based Convolutional Neural Network (Mask R-CNN). The algorithm's output is combined with frame-by-frame gaze coordinates measured by an eye-tracking device synchronized with the head-mounted camera to detect and annotate, without any manual intervention, what a user looked at for each frame of the provided footage. The effectiveness of the presented methodology was validated through a comparison between the system output and that of manual coders. High levels of agreement between the two supported the system as a preferable data-collection technique, as it processed data at a significantly faster rate than its human counterpart. Support for the system's practicality was then further demonstrated via a case study exploring the mediatory effects of gaze behaviors on an environment-driven attentional bias.


Subjects
Deep Learning , Eye Movements , Humans , Eye-Tracking Technology , Neural Networks, Computer , Algorithms
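A minimal sketch of the annotation step described above: combine per-frame detector output with synchronized gaze coordinates and label each frame with the object the gaze falls on. Axis-aligned boxes stand in for Mask R-CNN's masks, and the highest-score tie-breaking rule is an assumption.

```python
def annotate_gaze(frames_detections, gaze_points):
    """Label each frame's gaze point with the detected object containing it.

    frames_detections: per frame, a list of (label, x0, y0, x1, y1, score)
    boxes, as a detector such as Mask R-CNN would produce.
    gaze_points: per frame, the synchronized (x, y) gaze coordinate.
    """
    labels = []
    for dets, (gx, gy) in zip(frames_detections, gaze_points):
        hits = [(score, label) for label, x0, y0, x1, y1, score in dets
                if x0 <= gx <= x1 and y0 <= gy <= y1]
        # If the gaze falls inside several boxes, keep the highest-scoring one
        labels.append(max(hits)[1] if hits else "background")
    return labels
```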
10.
Behav Res Methods ; 2023 Jul 24.
Article in English | MEDLINE | ID: mdl-37488465

ABSTRACT

We present a method to automatically calculate time to fixate (TTF) from eye-tracker data in subjects with neurological impairment using a driving simulator. TTF is the time interval a person needs to notice a stimulus from its first occurrence. Specifically, we measured the time from the moment a child started to cross the street until the driver directed their gaze at the child. Of 108 neurological patients recruited for the study, the analysis of TTF was performed in 56 patients to assess fit-, unfit-, and conditionally-fit-to-drive patients. The results showed that the proposed method, based on the YOLO (you only look once) object detector, is efficient for computing TTFs from eye-tracker data. We obtained discriminative results for fit-to-drive patients by application of Tukey's honest significant difference post hoc test (p < 0.01), while no difference was observed between the conditionally-fit and unfit-to-drive groups (p = 0.542). Moreover, we show that time-to-collision (TTC), initial gaze distance (IGD) from pedestrians, and speed at the hazard onset did not influence the result, while the only significant interaction on TTF is among fitness, IGD, and TTC. Obtained TTFs are also compared with perception response times (PRT) calculated independently from the eye-tracker data and YOLO. Although we reached statistically significant results that support possible application of the method for assessment of fitness to drive, we provide detailed directions for future driving-simulation-based evaluation and propose a processing workflow to secure reliable TTF calculation and its possible application in, for example, psychology and neuroscience.
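The core TTF computation could be sketched as follows, assuming the hazard (the child detected by YOLO) is reduced to a bounding box in the same coordinate frame as the gaze samples; the actual workflow involves per-frame detection and gaze mapping.

```python
def time_to_fixate(gaze_samples, hazard_onset, hazard_box):
    """Time to fixate: delay between hazard onset and the first gaze
    sample landing inside the hazard's bounding box.

    gaze_samples: iterable of (t, x, y), time-ordered.
    hazard_box: (x0, y0, x1, y1).
    Returns None if the hazard is never fixated after onset.
    """
    x0, y0, x1, y1 = hazard_box
    for t, x, y in gaze_samples:
        if t >= hazard_onset and x0 <= x <= x1 and y0 <= y <= y1:
            return t - hazard_onset
    return None
```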

11.
Sensors (Basel) ; 22(2)2022 Jan 13.
Article in English | MEDLINE | ID: mdl-35062555

ABSTRACT

We present the design, fabrication, and test of a multipurpose integrated circuit (Application Specific Integrated Circuit) in AMS 0.35 µm Complementary Metal Oxide Semiconductor technology. This circuit is embedded in a scleral contact lens, combined with photodiodes that enable gaze direction detection when the lens is illuminated and wirelessly powered by an eyewear frame. The gaze direction is determined by means of a centroid computation from the measured photocurrents. The ASIC is simultaneously used to detect specific eye blinking sequences, to validate target designations, for instance. Experimental measurements and validation are performed on a scleral contact lens prototype integrating four infrared photodiodes, mounted on a mock-up eyeball, and combined with an artificial eyelid. The eye-tracker has an accuracy of 0.2°, i.e., 2.5 times better than current mobile video-based eye-trackers, and is robust with respect to process variations, operating time, and supply voltage. Variations of the computed gaze direction transmitted to the eyewear when the eyelid moves are detected and can be interpreted as commands based on blink duration or on alternating blinks of the two eyes.


Subjects
Blinking , Contact Lenses , Eyelids , Monitoring, Physiologic
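The centroid computation from the four photocurrents could be sketched like this; the diode layout and the reading of the centroid as a gaze offset are illustrative assumptions about how the measurement maps to direction.

```python
def gaze_from_photocurrents(currents, positions):
    """Centroid computation from photodiode currents.

    currents: measured photocurrents, one per diode.
    positions: (x, y) locations of the diodes on the lens.
    The illumination-weighted centroid shifts toward the most strongly
    lit diodes, approximating the gaze-direction offset.
    """
    total = sum(currents)
    x = sum(i * px for i, (px, py) in zip(currents, positions)) / total
    y = sum(i * py for i, (px, py) in zip(currents, positions)) / total
    return x, y
```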
12.
Sensors (Basel) ; 23(1)2022 Dec 21.
Article in English | MEDLINE | ID: mdl-36616641

ABSTRACT

Driving simulators are increasingly being incorporated by driving schools into the training process for a variety of vehicles. The motion platform is a major component integrated into simulators to enhance the sense of presence and fidelity of the driving simulator. However, less effort has been devoted to assessing the effect of motion-cue feedback on trainee performance in simulators. To address this gap, we thoroughly study the impact of motion cues on braking at a target point as an elementary behavior that reflects the overall driver's performance. In this paper, we use an eye-tracking device to evaluate driver behavior in addition to evaluating data from a driving simulator and considering participants' feedback. Furthermore, we compare the effect of different motion levels ("No motion", "Mild motion", and "Full motion") in two road scenarios: with and without the pre-braking warning signs, with the speed feedback given by the speedometer. The results showed that a full level of motion cues had a positive effect on braking smoothness and gaze fixation on the track. In particular, the presence of full motion cues helped the participants to gradually decelerate from 5 to 0 m/s in the last 240 m before the stop line in both scenarios, without and with warning signs, compared to the hardest braking from 25 to 0 m/s produced under the no-motion-cues conditions. Moreover, the results showed that a combination of the mild motion conditions and warning signs led to an underestimation of the actual speed and a greater fixation of the gaze on the speedometer. Questionnaire data revealed that 95% of the participants did not suffer from motion sickness symptoms, yet participants' preferences did not indicate that they were aware of the impact of simulator conditions on their driving behavior.


Subjects
Automobile Driving , Motion Sickness , Humans , Cues , Computer Simulation , Motion , Surveys and Questionnaires , Accidents, Traffic
13.
Child Psychiatry Hum Dev ; 53(4): 623-634, 2022 08.
Article in English | MEDLINE | ID: mdl-33738689

ABSTRACT

Callous-unemotional traits have been associated with difficulties in identifying and responding to others' emotions. To inform this line of research, the current study investigated the eye gaze behavior of children (n = 59; mean age = 6.35) with varying levels of callous-unemotional (CU) traits with the use of eye-tracker methodology, as well as their ability to accurately identify emotional expressions. Participating children were selected from a large screening sample (N = 1283). Main findings supported a reduced fixation rate to the eye region and an increased fixation in the mouth area of emotional faces among children high on callous-unemotional traits (HCU), irrespective of the emotion expressed (i.e., fearful, sad, angry, and happy) and the age of individuals portrayed in images (adult versus child faces). Further, findings suggested that HCU children were less likely to accurately identify facial emotional expressions, which might be due to the identified attentional neglect of the eye region of emotional faces. Current findings support the importance of early prevention and intervention programs that can enhance the emotional development and social adjustment of HCU children.


Subjects
Conduct Disorder , Adult , Child , Conduct Disorder/diagnosis , Conduct Disorder/psychology , Emotions , Facial Expression , Fear/psychology , Fixation, Ocular , Humans
14.
Behav Res Methods ; 2022 Aug 10.
Article in English | MEDLINE | ID: mdl-35948762

ABSTRACT

Eye tracking accuracy is affected in individuals with vision and oculomotor deficits, impeding our ability to answer important scientific and clinical questions about these disorders. It is difficult to disambiguate decreases in eye movement accuracy from changes in the accuracy of the eye tracking itself. We propose the EyeRobot, a low-cost robotic oculomotor simulator capable of emulating healthy and compromised eye movements to provide ground-truth assessment of eye tracker performance, and of how different aspects of oculomotor deficits might affect tracking accuracy and performance. The device can operate with eccentric optical axes or large deviations between the eyes, as well as simulate oculomotor pathologies, such as large fixational instabilities. We find that our design can provide accurate eye movements for both central and eccentric viewing conditions, which can be tracked by using a head-mounted eye tracker, Pupil Core. As proof of concept, we examine the effects of eccentric fixation on calibration accuracy and find that Pupil Core's existing eye tracking algorithm is robust to large fixation offsets. In addition, we demonstrate that the EyeRobot can simulate realistic eye movements like saccades and smooth pursuit that can be tracked using video-based eye tracking. These tests suggest that the EyeRobot, an easy-to-build and flexible tool, can aid with eye tracking validation and future algorithm development in healthy and compromised vision.

15.
Behav Res Methods ; 54(2): 845-863, 2022 04.
Article in English | MEDLINE | ID: mdl-34357538

ABSTRACT

We empirically investigate the role of small, almost imperceptible balance and breathing movements of the head on the level and colour of noise in data from five commercial video-based P-CR eye trackers. By comparing noise from recordings with completely static artificial eyes to noise from recordings where the artificial eyes are worn by humans, we show that very small head movements increase levels and colouring of the noise in data recorded from all five eye trackers in this study. This increase of noise levels is seen not only in the gaze signal, but also in the P and CR signals of the eye trackers that provide these camera image features. The P and CR signals of the SMI eye trackers correlate strongly during small head movements, but less so or not at all when the head is completely still, indicating that head movements are registered by the P and CR images in the eye camera. By recording with artificial eyes, we can also show that the pupil size artefact has no major role in increasing and colouring noise. Our findings add to and replicate the observation by Niehorster et al. (2021) that lowpass filters in video-based P-CR eye trackers colour the data. Irrespective of source, filters or head movements, coloured noise can be confused for oculomotor drift. We also find that usage of the default head restriction in the EyeLink 1000+, the EyeLink II and the HiSpeed240 results in noisier data compared to less head restriction. Researchers investigating data quality in eye trackers should consider not using the Gen 2 artificial eye from SR Research / EyeLink. Data recorded with this artificial eye are much noisier than data recorded with other artificial eyes, on average 2.2-14.5 times worse for the five eye trackers.


Subjects
Eye Movements , Head Movements , Color , Data Accuracy , Eye, Artificial , Humans
16.
Sensors (Basel) ; 21(12)2021 Jun 10.
Article in English | MEDLINE | ID: mdl-34200616

ABSTRACT

Cybersickness is one of the major roadblocks in the widespread adoption of mixed reality devices. Prolonged exposure to these devices, especially virtual reality devices, can cause users to feel discomfort and nausea, spoiling the immersive experience. Incorporating spatial blur in stereoscopic 3D stimuli has been shown to reduce cybersickness. In this paper, we develop a technique to incorporate spatial blur in VR systems inspired by the human physiological system. The technique makes use of concepts from foveated imaging and depth-of-field. The developed technique can be applied to any eye-tracker-equipped VR system as a post-processing step to provide an artifact-free scene. We verify the usefulness of the proposed system by conducting a user study on cybersickness evaluation. We used a custom-built rollercoaster VR environment developed in Unity and an HTC Vive Pro Eye headset to interact with the user. A Simulator Sickness Questionnaire was used to measure the induced sickness while gaze and heart rate data were recorded for quantitative analysis. The experimental analysis highlighted the aptness of our foveated depth-of-field effect in reducing cybersickness in virtual environments by reducing the sickness scores by approximately 66%.


Subjects
Motion Sickness , Virtual Reality , Emotions , Humans , Surveys and Questionnaires , User-Computer Interface
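One way a foveated depth-of-field post-process might derive a per-pixel blur strength is sketched below: blur grows with eccentricity from the gaze point (foveated imaging) and with depth difference from the fixated depth (depth-of-field). All gains, the foveal radius, and the additive combination are illustrative assumptions, not the paper's actual shader.

```python
import numpy as np

def blur_sigma_map(h, w, gaze_xy, depth, fixation_depth,
                   fovea_px=60, ecc_gain=0.02, dof_gain=1.5):
    """Per-pixel blur strength for a gaze-contingent blur pass.

    gaze_xy: (x, y) gaze position in pixels.
    depth: (h, w) per-pixel scene depth; fixation_depth: depth at the gaze.
    Pixels inside the foveal radius and near the fixated depth stay sharp.
    """
    ys, xs = np.mgrid[0:h, 0:w]
    ecc = np.hypot(xs - gaze_xy[0], ys - gaze_xy[1])
    foveal = np.maximum(ecc - fovea_px, 0) * ecc_gain   # sharp fovea
    dof = np.abs(depth - fixation_depth) * dof_gain     # sharp focal plane
    return foveal + dof
```

In a real renderer this map would drive a spatially varying blur kernel each frame, using the eye tracker's live gaze estimate.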
17.
Sensors (Basel) ; 20(10)2020 May 12.
Article in English | MEDLINE | ID: mdl-32408630

ABSTRACT

This paper presents the use of eye tracking data in Magnetic, Angular Rate, and Gravity (MARG)-sensor-based head orientation estimation. The approach presented here can be deployed in any motion measurement that includes MARG and eye tracking sensors (e.g., rehabilitation robotics or medical diagnostics). The challenge in these mostly indoor applications is the presence of magnetic field disturbances at the location of the MARG-sensor. In this work, eye tracking data (visual fixations) are used to enable zero orientation change updates in the MARG-sensor data fusion chain. The approach is based on a MARG-sensor data fusion filter, an online visual fixation detection algorithm, as well as a dynamic angular rate threshold estimation for low latency and adaptive head motion noise parameterization. In this work we use an adaptation of Madgwick's gradient descent filter for MARG-sensor data fusion, but the approach could be used with any other data fusion process. The presented approach does not rely on additional stationary or local environmental references and is therefore self-contained. The proposed system is benchmarked against a Qualisys motion capture system, a gold standard in human motion analysis, showing improved heading accuracy for the MARG-sensor data fusion by up to a factor of 0.5 while magnetic disturbance is present.


Subjects
Eye-Tracking Technology , Gravitation , Orientation, Spatial , Algorithms , Humans
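The fixation-gated zero-orientation-change idea could be sketched, for a single heading axis, as follows. The fixed threshold and the plain Euler integration are illustrative stand-ins for the paper's adaptive threshold estimation and Madgwick-style fusion filter.

```python
def gated_heading_update(heading, gyro_z, dt, is_fixating, rate_thresh=0.05):
    """Zero-orientation-change update gated by visual fixations.

    While the eye tracker reports a fixation AND the measured angular rate
    (rad/s) is below the threshold, the head is assumed stationary and
    integration is frozen, so magnetically disturbed corrections cannot
    drift the heading. Otherwise the gyro is integrated normally.
    """
    if is_fixating and abs(gyro_z) < rate_thresh:
        return heading                # zero-change update
    return heading + gyro_z * dt      # normal gyro integration
```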
18.
Sensors (Basel) ; 20(24)2020 Dec 14.
Article in English | MEDLINE | ID: mdl-33327500

ABSTRACT

Assistive robots support people with limited mobility in their everyday life activities and work. However, most of the assistive systems and technologies for supporting eating and drinking require residual mobility in the arms or hands. For people without residual mobility, different hands-free controls have been developed. For hands-free control, the combination of different modalities can lead to great advantages and improved control. The novelty of this work is a new concept to control a robot using a combination of head and eye motions. The control unit is a mobile, compact and low-cost multimodal sensor system. A Magnetic, Angular Rate, and Gravity (MARG) sensor is used to detect head motion and an eye tracker enables the system to capture the user's gaze. To analyze the performance of the two modalities, an experimental evaluation with ten able-bodied subjects and one subject with tetraplegia was performed. To assess discrete control (event-based control), a button activation task was performed. To assess two-dimensional continuous cursor control, a Fitts's Law task was performed. The usability study was related to a use-case scenario with a collaborative robot assisting with a drinking action. The results of the able-bodied subjects show no significant difference between eye motions and head motions for the activation time of the buttons and the throughput, while, using the eye tracker in the Fitts's Law task, the error rate was significantly higher. The subject with tetraplegia showed slightly better performance for button activation when using the eye tracker. In the use-case, all subjects were able to use the control unit successfully to support the drinking action. Due to the limited head motion of the subject with tetraplegia, button activation with the eye tracker was slightly faster than with the MARG-sensor. A further study with more subjects with tetraplegia is planned to verify these results.


Subjects
Eye Movements, Robotics, Hand, Head, Humans, Motion (Physics), Quadriplegia
19.
Sensors (Basel) ; 20(3)2020 Feb 07.
Article in English | MEDLINE | ID: mdl-32046131

ABSTRACT

Steady-state visual evoked potentials (SSVEPs) have been extensively utilized to develop brain-computer interfaces (BCIs) due to their robustness, large number of commands, high classification accuracies, and high information transfer rates (ITRs). However, the use of several simultaneously flickering stimuli often causes high levels of user discomfort, tiredness, annoyance, and fatigue. Here we propose a stimuli-responsive hybrid speller using electroencephalography (EEG) and video-based eye tracking to increase user comfort when large numbers of stimuli flicker simultaneously. A canonical correlation analysis (CCA)-based framework was able to identify the target frequency from a 1 s flickering signal. Our proposed BCI speller uses only six frequencies to classify forty-eight targets and thus achieves a greatly increased ITR, whereas basic SSVEP BCI spellers use as many frequencies as targets. Using this speller, we obtained an average classification accuracy of 90.35 ± 3.597% with an average ITR of 184.06 ± 12.761 bits per minute in a cued-spelling task and an ITR of 190.73 ± 17.849 bits per minute in a free-spelling task. Consequently, our proposed speller is superior to the other spellers in the number of targets classified, classification accuracy, and ITR, while producing less fatigue, annoyance, tiredness, and discomfort. Together, our proposed hybrid eye-tracking and SSVEP BCI-based system will ultimately enable a truly high-speed communication channel.
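ITR figures like those above are conventionally computed with the Wolpaw formula; a minimal sketch, assuming that standard formulation (the abstract does not state which ITR definition was used):

```python
import math

def wolpaw_bits(n_targets, accuracy):
    """Bits transferred per selection (standard Wolpaw formulation)."""
    n, p = n_targets, accuracy
    if p >= 1.0:
        return math.log2(n)  # the error term vanishes at perfect accuracy
    return (math.log2(n)
            + p * math.log2(p)
            + (1 - p) * math.log2((1 - p) / (n - 1)))

def itr_bits_per_min(n_targets, accuracy, selection_time_s):
    """Information transfer rate in bits per minute."""
    return wolpaw_bits(n_targets, accuracy) * 60.0 / selection_time_s
```

With 48 targets and 90.35% accuracy this gives roughly 4.59 bits per selection; at about 1.5 s per selection (an assumed value covering the 1 s flicker plus gaze shifting), that lands near the reported ~184 bits per minute.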


Subjects
Brain-Computer Interfaces, Evoked Potentials, Visual/physiology, Eye Movements/physiology, Language, Adult, Data Analysis, Electroencephalography, Female, Humans, Male, Middle Aged, Online Systems, Young Adult
20.
Behav Res Methods ; 52(5): 2098-2121, 2020 10.
Article in English | MEDLINE | ID: mdl-32206998

ABSTRACT

For evaluating whether an eye-tracker is suitable for measuring microsaccades, Poletti and Rucci (2016) propose that a measure called 'resolution' could be better than the more established root mean square of the sample-to-sample distances (RMS-S2S). Many open questions remain around the resolution measure, however. Resolution must be calculated using data from an artificial eye that can be turned in very small steps. Furthermore, resolution has an unclear and uninvestigated relationship to the RMS-S2S and STD (standard deviation) measures of precision (Holmqvist & Andersson, 2017, pp. 159-190), and there is another metric by the same name (Clarke, Ditterich, Drüen, Schönfeld, & Steineke, 2002), which instead quantifies the errors of amplitude measurements. In this paper, we present a mechanism, the Stepperbox, for rotating artificial eyes in arbitrary angles from 1' (arcmin) upward. We then use the Stepperbox to find the minimum reliably detectable rotations in 11 video-based eye-trackers (VOGs) and the Dual Purkinje Imaging (DPI) tracker. We find that resolution correlates significantly with RMS-S2S and, to a lesser extent, with STD. In addition, we find that although most eye-trackers can detect some small rotations of an artificial eye, rotations of amplitudes up to 2° are frequently measured erroneously by video-based eye-trackers. We show evidence that the corneal reflection (CR) feature of these eye-trackers is a major cause of erroneous measurements of small rotations of artificial eyes. Our data strengthen the existing body of evidence that video-based eye-trackers produce errors that may require us to reconsider some results from research on reading, microsaccades, and vergence, where the amplitudes of small eye movements have been measured with past or current video-based eye-trackers. In contrast, the DPI reports correct rotation amplitudes down to 1'.
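The RMS-S2S and STD precision measures being compared can be sketched as follows, assuming the standard definitions (RMS of consecutive-sample distances, and dispersion of samples around their centroid); the study's exact implementation is not shown here:

```python
import math

def rms_s2s(x, y):
    """Root mean square of sample-to-sample distances for a fixation
    segment, in the same units as the gaze coordinates (e.g. degrees)."""
    d2 = [(x[i + 1] - x[i]) ** 2 + (y[i + 1] - y[i]) ** 2
          for i in range(len(x) - 1)]
    return math.sqrt(sum(d2) / len(d2))

def std_precision(x, y):
    """STD precision: RMS distance of samples from their centroid."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    return math.sqrt(sum((xi - mx) ** 2 + (yi - my) ** 2
                         for xi, yi in zip(x, y)) / len(x))
```

The two measures diverge on slowly drifting signals: a smooth drift inflates STD while leaving RMS-S2S small, which is one reason their relationship to the 'resolution' measure needs empirical investigation.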


Subjects
Eye Movements, Eye, Artificial, Eye-Tracking Technology, Video Recording, Data Collection, Humans