1.
NPJ Sci Learn ; 9(1): 41, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38951543

ABSTRACT

Intelligence and personality are both key drivers of learning. This study extends prior research on intelligence and personality by adopting a behavioral-process-related eye-tracking approach. We tested 182 adults on fluid intelligence and the Big Five personality traits. Eye-tracking information (gaze patterns) was recorded while participants completed the intelligence test. Machine learning models showed that personality explained 3.18% of the variance in intelligence test scores, with Openness and, surprisingly, Agreeableness most meaningfully contributing to the prediction. Facet-level measures of personality explained a larger amount of variance (7.67%) in intelligence test scores than the trait-level measures, with the largest coefficients obtained for Ideas and Values (Openness) and Compliance and Trust (Agreeableness). Gaze patterns explained a substantial amount of variance in intelligence test performance (35.91%). Gaze patterns were unrelated to the Big Five personality traits, but some of the facets (especially Self-Consciousness from Neuroticism and Assertiveness from Extraversion) were related to gaze. Gaze patterns reflected the test-solving strategies described in the literature (constructive matching, response elimination) to some extent. A combined feature vector consisting of gaze-based predictions and personality traits explained 37.50% of the variance in intelligence test performance, with significant unique contributions from both personality and gaze patterns. A model that included personality facets and gaze explained 38.02% of the variance in intelligence test performance. Although behavioral data thus clearly outperformed "traditional" psychological measures (Big Five personality) in predicting intelligence test performance, our results also underscore the independent contributions of personality and gaze patterns in predicting intelligence test performance.
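To make the combined-feature idea above concrete, here is a minimal sketch of how gaze-based predictions and Big Five trait scores could be joined into one feature vector to predict intelligence test scores. The data arrays and the choice of ridge regression are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict, cross_val_score

rng = np.random.default_rng(0)
n = 182                                # sample size reported in the abstract
gaze = rng.normal(size=(n, 50))        # placeholder gaze-pattern features
big5 = rng.normal(size=(n, 5))         # placeholder Big Five trait scores
iq = rng.normal(size=n)                # placeholder intelligence test scores

# Out-of-fold gaze-based predictions avoid leaking the target into the
# combined model.
gaze_pred = cross_val_predict(Ridge(), gaze, iq, cv=5)

# Combined feature vector: gaze-based prediction plus the five trait scores.
combined = np.column_stack([gaze_pred, big5])
r2 = cross_val_score(Ridge(), combined, iq, cv=5, scoring="r2").mean()
print(f"cross-validated R^2: {r2:.3f}")  # near zero here, as the data are random
```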

2.
NPJ Digit Med ; 7(1): 199, 2024 Jul 27.
Article in English | MEDLINE | ID: mdl-39068241

ABSTRACT

Given the current state of medical artificial intelligence (AI) and perceptions towards it, collaborative systems are becoming the preferred choice for clinical workflows. This work addresses expert interaction with medical AI support systems to gain insight into how these systems can be better designed with the user in mind. As eye tracking metrics have been shown to be robust indicators of usability, we employ them to evaluate the usability of, and user interaction with, medical AI support systems. We use expert gaze to assess experts' interaction with AI software for caries detection in bitewing x-ray images. We compared standard viewing of bitewing images without AI support versus viewing where AI support could be freely toggled on and off. We found that experts turned the AI on for roughly 25% of the total inspection task, and generally turned it on halfway through the course of the inspection. Gaze behavior showed that when supported by AI, more attention was dedicated to user interface elements related to the AI support, with more frequent transitions from the image itself to these elements. Considering that expert visual strategy is already optimized for fast and effective image inspection, such interruptions in attention can lead to increased time needed for the overall assessment. Gaze analysis provided valuable insights into an AI's usability for medical image inspection. Further analyses of these tools, and of how to delineate metrical measures of usability, should be developed.
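As a rough illustration of the transition analysis mentioned above, the following sketch counts directed gaze transitions between areas of interest (AOIs), e.g. from the x-ray image to AI-related interface elements. The fixation sequence and AOI labels are invented for the example.

```python
from collections import Counter

# Hypothetical sequence of fixated AOIs, in temporal order.
fixation_aois = ["image", "image", "ai_panel", "image", "ai_toggle",
                 "ai_panel", "image", "image", "ai_panel"]

# Count directed transitions between distinct AOIs.
transitions = Counter(
    (a, b) for a, b in zip(fixation_aois, fixation_aois[1:]) if a != b
)
print(transitions[("image", "ai_panel")])  # image -> AI support element
```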

3.
Sci Rep ; 14(1): 12329, 2024 05 29.
Article in English | MEDLINE | ID: mdl-38811593

ABSTRACT

Mental rotation is the ability to rotate mental representations of objects in space. Shepard and Metzler's shape-matching tasks, frequently used to test mental rotation, involve presenting pictorial representations of 3D objects. This stimulus material has raised questions regarding the ecological validity of the test for mental rotation with actual visual 3D objects. To systematically investigate differences in mental rotation with pictorial and visual stimuli, we compared data from N = 54 university students in a virtual reality experiment. Comparing both conditions within subjects, we found higher accuracy and faster reaction times for 3D visual figures. We expected eye tracking to reveal differences in participants' stimulus processing and mental rotation strategies induced by the visual differences. We statistically compared fixations (locations), saccades (directions), pupil changes, and head movements. In addition, we analyzed Shapley values of a Gradient Boosting Decision Tree algorithm that correctly classified the two conditions using eye and head movements. The results indicated that with visual 3D figures, the encoding of spatial information was less demanding, and participants may have used egocentric transformations and perspective changes. Moreover, participants showed eye movements associated with more holistic processing for visual 3D figures and more piecemeal processing for pictorial 2D figures.
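The classification-plus-explanation step described above could look roughly like the following: a gradient-boosted tree model separates the two stimulus conditions from eye- and head-movement features, and SHAP values indicate which features carry the separation. The feature matrix and labels are random placeholders, not the study's data.

```python
import numpy as np
import shap                                  # pip install shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))                # e.g. fixation, saccade, pupil,
y = rng.integers(0, 2, size=200)             # head-movement features; 0=2D, 1=3D

model = GradientBoostingClassifier().fit(X, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
print(np.abs(shap_values).mean(axis=0))      # mean |SHAP| per feature
```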


Subjects
Eye Movements, Humans, Female, Male, Eye Movements/physiology, Young Adult, Adult, Rotation, Reaction Time/physiology, Photic Stimulation/methods, Space Perception/physiology, Virtual Reality, Visual Perception/physiology, Head Movements/physiology, Saccades/physiology
4.
MethodsX ; 12: 102662, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38577409

ABSTRACT

This article provides a step-by-step guideline for measuring and analyzing visual attention in 3D virtual reality (VR) environments based on eye-tracking data. We propose a solution to the challenges of obtaining relevant eye-tracking information in a dynamic 3D virtual environment and calculating interpretable indicators of learning and social behavior. With a method called "gaze-ray casting," we simulated 3D-gaze movements to obtain information about the gazed objects. This information was used to create graphical models of visual attention, establishing attention networks. These networks represented participants' gaze transitions between different entities in the VR environment over time. Measures of centrality, distribution, and interconnectedness of the networks were calculated to describe the network structure. The measures, derived from graph theory, allowed for statistical inference testing and the interpretation of participants' visual attention in 3D VR environments. Our method provides useful insights when analyzing students' learning in a VR classroom, as reported in a corresponding evaluation article with N = 274 participants.
• Guidelines on implementing gaze-ray casting in VR using the Unreal Engine and the HTC VIVE Pro Eye.
• Creating gaze-based attention networks and analyzing their network structure.
• Implementation tutorials and the Open Source software code are provided via OSF: https://osf.io/pxjrc/?view_only=1b6da45eb93e4f9eb7a138697b941198.
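As a sketch of the attention-network construction described above (the full implementation is in the linked OSF repository), gaze transitions between scene entities can be accumulated into a weighted directed graph whose structure is then summarized with graph-theoretic measures. The entity names here are invented.

```python
import networkx as nx

# Hypothetical sequence of gazed entities obtained from gaze-ray casting.
gazed = ["teacher", "screen", "peer_1", "screen", "teacher",
         "peer_2", "screen", "teacher"]

# Accumulate gaze transitions into a weighted, directed attention network.
G = nx.DiGraph()
for a, b in zip(gazed, gazed[1:]):
    if a != b:
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1
        else:
            G.add_edge(a, b, weight=1)

print(nx.degree_centrality(G))  # centrality of each entity
print(nx.density(G))            # interconnectedness of the network
```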

5.
IEEE Trans Pattern Anal Mach Intell ; 46(4): 2104-2122, 2024 Apr.
Article in English | MEDLINE | ID: mdl-37956008

ABSTRACT

Explainable AI (XAI) is widely viewed as a sine qua non for ever-expanding AI research. A better understanding of the needs of XAI users, as well as human-centered evaluations of explainable models, are both a necessity and a challenge. In this paper, we explore how human-computer interaction (HCI) and AI researchers conduct user studies in XAI applications based on a systematic literature review. After identifying and thoroughly analyzing 97 core papers with human-based XAI evaluations over the past five years, we categorize them along the measured characteristics of explanatory methods, namely trust, understanding, usability, and human-AI collaboration performance. Our research shows that XAI is spreading more rapidly in certain application domains, such as recommender systems, than in others, but that user evaluations are still rather sparse and incorporate hardly any insights from cognitive or social sciences. Based on a comprehensive discussion of best practices, i.e., common models, design choices, and measures in user studies, we propose practical guidelines on designing and conducting user studies for XAI researchers and practitioners. Lastly, this survey also highlights several open research directions, particularly linking psychological science and human-centered XAI.


Subjects
Algorithms, Humans
7.
J Dent ; 140: 104793, 2024 01.
Article in English | MEDLINE | ID: mdl-38016620

ABSTRACT

OBJECTIVES: We aimed to understand how artificial intelligence (AI) influences dentists by comparing their gaze behavior when using versus not using an AI software to detect primary proximal carious lesions on bitewing radiographs. METHODS: 22 dentists assessed a median of 18 bitewing images, resulting in 170 datasets from dentists without AI and 179 datasets from dentists with AI, after excluding data with poor gaze recording quality. We compared time to first fixation, fixation count, average fixation duration, and fixation frequency between both trial groups. Analyses were performed for the entire image and stratified by (1) presence of carious lesions and/or restorations and (2) lesion depth (E1/2: outer/inner enamel; D1-3: outer-inner third of dentin). We also compared the transitional pattern of the dentists' gaze between the trial groups. RESULTS: Median time to first fixation was shorter in all groups of teeth for dentists with AI versus without AI, although p>0.05. Dentists with AI had more fixations (median=68, IQR=31, 116) on teeth with restorations compared to dentists without AI (median=47, IQR=19, 100), p = 0.01. In turn, average fixation duration was longer on teeth with caries for the dentists with AI than those without AI, although p>0.05. The visual search strategy employed by dentists with AI was less systematic, with a lower proportion of lateral tooth-wise transitions compared to dentists without AI. CONCLUSIONS: Dentists with AI exhibited more efficient viewing behavior compared to dentists without AI, e.g., less time taken to notice caries and/or restorations, more fixations on teeth with restorations, and shorter fixation durations on teeth without carious lesions and/or restorations. CLINICAL SIGNIFICANCE: Analysis of dentists' gaze patterns while using AI-generated annotations of carious lesions demonstrates how AI influences their data extraction methods for dental images. Such insights can be exploited to improve, and even customize, AI-based diagnostic tools, thus reducing the dentists' extraneous attentional processing and allowing for more thorough examination of other image areas.
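A minimal sketch of the kind of group comparison reported above, using a non-parametric test on fixation counts for dentists with versus without AI support; the data are fabricated placeholders that only loosely echo the reported medians and dataset sizes.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
fix_with_ai = rng.poisson(68, size=179)     # placeholder fixation counts
fix_without_ai = rng.poisson(47, size=170)  # (dataset sizes from the abstract)

stat, p = mannwhitneyu(fix_with_ai, fix_without_ai)
print(f"median with AI: {np.median(fix_with_ai):.0f}, "
      f"median without AI: {np.median(fix_without_ai):.0f}, p = {p:.3g}")
```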


Subjects
Artificial Intelligence, Dental Caries, Humans, Dental Caries Susceptibility, Dental Restoration, Permanent, Practice Patterns, Dentists', Dental Caries/diagnostic imaging, Dental Caries/pathology, Dentists
8.
Behav Res Methods ; 56(4): 3226-3241, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38114880

ABSTRACT

We present a deep learning method for accurately localizing the center of a single corneal reflection (CR) in an eye image. Unlike previous approaches, we use a convolutional neural network (CNN) that was trained solely on synthetic data. Using only synthetic data has the benefit of completely sidestepping the time-consuming process of manual annotation that is required for supervised training on real eye images. To systematically evaluate the accuracy of our method, we first tested it on images with synthetic CRs placed on different backgrounds and embedded in varying levels of noise. Second, we tested the method on two datasets consisting of high-quality videos captured from real eyes. Our method outperformed state-of-the-art algorithmic methods on real eye images, with a 3-41.5% reduction in spatial precision across datasets, and performed on par with the state of the art on synthetic images in terms of spatial accuracy. We conclude that our method provides precise CR center localization and offers a solution to the data availability problem, which is one of the important common roadblocks in the development of deep learning models for gaze estimation. Due to its superior CR center localization and ease of application, our method has the potential to improve the accuracy and precision of CR-based eye trackers.
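An illustrative sketch of the general approach described above: a small CNN regresses the (x, y) center of a synthetic corneal reflection rendered as a bright Gaussian blob on a noisy background. The architecture and rendering are assumptions for illustration, not the paper's model.

```python
import torch
import torch.nn as nn

def synth_cr_image(size=64, sigma=2.0):
    """Render one synthetic CR: a Gaussian spot at a random center, plus noise."""
    cx, cy = torch.rand(2) * (size - 1)
    ys, xs = torch.meshgrid(torch.arange(size), torch.arange(size), indexing="ij")
    img = torch.exp(-(((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2)))
    img = img + 0.1 * torch.randn(size, size)
    return img.unsqueeze(0), torch.stack([cx, cy])

model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(32 * 16 * 16, 2),    # regress (cx, cy)
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(100):                          # toy training loop
    batch = [synth_cr_image() for _ in range(16)]
    x = torch.stack([b[0] for b in batch])       # (16, 1, 64, 64) images
    y = torch.stack([b[1] for b in batch])       # (16, 2) ground-truth centers
    loss = nn.functional.mse_loss(model(x), y)
    opt.zero_grad(); loss.backward(); opt.step()
```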


Subjects
Cornea, Deep Learning, Image Processing, Computer-Assisted, Neural Networks, Computer, Humans, Image Processing, Computer-Assisted/methods, Cornea/diagnostic imaging, Cornea/physiology, Algorithms
9.
Sci Rep ; 13(1): 14672, 2023 09 06.
Article in English | MEDLINE | ID: mdl-37673939

ABSTRACT

Higher-achieving peers have repeatedly been found to negatively impact students' evaluations of their own academic abilities (i.e., the Big-Fish-Little-Pond Effect). Building on social comparison theory, this pattern is assumed to result from students comparing themselves to their classmates; however, based on existing research designs, it remains unclear how exactly students make use of social comparison information in the classroom. To determine the extent to which students (N = 353 sixth graders) actively attend and respond to social comparison information in the form of peers' achievement-related behaviour, we used eye-tracking data from an immersive virtual reality (IVR) classroom. IVR classrooms offer unprecedented opportunities for psychological classroom research, as they allow researchers to integrate authentic classroom scenarios with maximum experimental control. In the present study, we experimentally varied virtual classmates' achievement-related behaviour (i.e., their hand-raising in response to the teacher's questions) during instruction, and students' eye and gaze data showed that they actively processed this social comparison information. Students who attended more to social comparison information (as indicated by more frequent and longer gaze durations at peer learners) had less favourable self-evaluations. We discuss implications for the future use of IVR environments to study behaviours in the classroom and beyond.


Subjects
Social Comparison, Virtual Reality, Animals, Humans, Social Behavior, Interpersonal Relations, Students
10.
J Dent ; 135: 104585, 2023 08.
Article in English | MEDLINE | ID: mdl-37301462

ABSTRACT

OBJECTIVES: Understanding dentists' gaze patterns on radiographs may help unravel sources of their limited accuracy and develop strategies to mitigate them. We conducted an eye tracking experiment to characterize dentists' scanpaths and thus their gaze patterns when assessing bitewing radiographs to detect primary proximal carious lesions. METHODS: 22 dentists assessed a median of nine bitewing images each, resulting in 170 datasets after excluding data with poor quality of gaze recording. Fixation was defined as an area of attentional focus related to visual stimuli. We calculated time to first fixation, fixation count, average fixation duration, and fixation frequency. Analyses were performed for the entire image and stratified by (1) presence of carious lesions and/or restorations and (2) lesion depth (E1/2: outer/inner enamel; D1-3: outer-inner third of dentin). We also examined the transitional nature of the dentists' gaze. RESULTS: Dentists had more fixations on teeth with lesions and/or restorations (median=138 [interquartile range=87, 204]) than on teeth without them (32 [15, 66]), p<0.001. Notably, teeth with lesions had longer fixation durations (407 milliseconds [242, 591]) than those with restorations (289 milliseconds [216, 337]), p<0.001. Time to first fixation was longer for teeth with E1 lesions (17,128 milliseconds [8813, 21,540]) than for lesions of other depths (p = 0.049). The number of fixations was highest on teeth with D2 lesions (43 [20, 51]) and lowest on teeth with E1 lesions (5 [1, 37]), p<0.001. Generally, a systematic tooth-by-tooth gaze pattern was observed. CONCLUSIONS: As hypothesized, while visually inspecting bitewing radiographic images, dentists employed a heightened focus on certain image features/areas relevant to the assigned task. They also generally examined the entire image in a systematic tooth-by-tooth pattern.
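For reference, fixation metrics like those above presuppose a fixation-detection step. The following is a bare-bones dispersion-based (I-DT) detector with illustrative thresholds and synthetic gaze samples, not the algorithm or data used in the study.

```python
import numpy as np

def idt_fixations(x, y, t, max_dispersion=25.0, min_duration=100.0):
    """Return (start_ms, end_ms) pairs of detected fixations."""
    fixations, i, n = [], 0, len(t)
    while i < n:
        j = i
        # Grow the window while the summed x/y dispersion stays small.
        while j + 1 < n and (np.ptp(x[i:j + 2]) + np.ptp(y[i:j + 2])) <= max_dispersion:
            j += 1
        if t[j] - t[i] >= min_duration:
            fixations.append((t[i], t[j]))
        i = j + 1
    return fixations

rng = np.random.default_rng(0)
t = np.arange(0, 1000, 4.0)                       # 250 Hz samples, time in ms
x = np.repeat([100.0, 300.0], len(t) // 2) + rng.normal(size=len(t))
y = np.repeat([120.0, 180.0], len(t) // 2) + rng.normal(size=len(t))

fx = idt_fixations(x, y, t)
durations = [end - start for start, end in fx]
print(len(fx), np.mean(durations))                # fixation count, avg duration (ms)
```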


Subjects
Dental Caries, Dentin, Humans, Dentin/pathology, Radiography, Bitewing, Dental Caries/pathology, Dental Enamel/pathology, Dentists, Practice Patterns, Dentists'
11.
Behav Res Methods ; 55(1): 364-416, 2023 01.
Article in English | MEDLINE | ID: mdl-35384605

ABSTRACT

In this paper, we present a review of how the various aspects of any study using an eye tracker (such as the instrument, methodology, environment, participant, etc.) affect the quality of the recorded eye-tracking data and the obtained eye-movement and gaze measures. We take this review to represent the empirical foundation for reporting guidelines of any study involving an eye tracker. We compare this empirical foundation to five existing reporting guidelines and to a database of 207 published eye-tracking studies. We find that reporting guidelines vary substantially and do not match with actual reporting practices. We end by deriving a minimal, flexible reporting guideline based on empirical research (Section "An empirically based minimal reporting guideline").


Subjects
Eye Movements, Eye-Tracking Technology, Humans, Empirical Research
12.
J Eye Mov Res ; 16(4)2023.
Article in English | MEDLINE | ID: mdl-38544928

ABSTRACT

During calibration, an eye tracker fits a mapping function from features to a target gaze point. While there is research on which mapping function to use, little is known about how to best estimate the function's parameters. We investigate how different fitting methods impact accuracy under different noise factors, such as mobile eye-tracker imprecision or detection errors in feature extraction during calibration. For this purpose, a simulation of binocular gaze was developed for a) different calibration patterns and b) different noise characteristics. We found that the commonly used polynomial regression via least-squares-error fit often fails to find good mapping functions when compared to ridge regression. Especially as data become noisier, outlier-tolerant fitting methods grow in importance. We demonstrate a 20% reduction in mean MSE by simply using a ridge fit instead of a polynomial least-squares fit in a mobile eye-tracking experiment.
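A toy version of the comparison described above: fit a degree-2 polynomial mapping from noisy calibration features to targets with ordinary least squares and with ridge regression, then evaluate on the noise-free features. The grid, noise level, and true mapping are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.metrics import mean_squared_error
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
features = rng.uniform(-1, 1, size=(9, 2))       # features at a 3x3 calibration grid
targets = 2.0 * features + 0.3 * features ** 2   # hypothetical true mapping
noisy = features + rng.normal(scale=0.05, size=features.shape)  # detection errors

poly = PolynomialFeatures(degree=2)
X_train = poly.fit_transform(noisy)              # fit on noisy calibration data
X_test = poly.transform(features)                # evaluate on clean features

for model in (LinearRegression(), Ridge(alpha=1.0)):
    model.fit(X_train, targets)
    err = mean_squared_error(targets, model.predict(X_test))
    print(f"{type(model).__name__}: MSE = {err:.5f}")
```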

13.
Annu Int Conf IEEE Eng Med Biol Soc ; 2022: 2976-2982, 2022 07.
Article in English | MEDLINE | ID: mdl-36085677

ABSTRACT

In modern psychotherapy, digital health technology offers advanced and personalized therapy options, increasing availability as well as ecological validity. These aspects have proven to be highly relevant for children and adolescents with obsessive-compulsive disorder (OCD). Exposure and Response Prevention therapy, the state-of-the-art treatment for OCD, builds on the reconstruction of everyday life exposure to anxious situations. However, while compulsive behavior predominantly occurs in home environments, exposure situations during therapy are limited to clinical settings. Telemedical treatment makes it possible to shift from this limited exposure reconstruction to exposure situations in real life. In the SSTeP KiZ study (smart sensor technology in telepsychotherapy for children and adolescents with OCD), we combine video therapy with wearable sensors delivering physiological and behavioral measures to objectively determine the stress level of patients. The setup allows information to be gained from exposure to stress in a realistic environment both during and outside of therapy sessions. In a first pilot study, we explored the sensitivity of individual sensor modalities to different levels of stress and anxiety. For this, we captured the obsessive-compulsive behavior of five adolescents with an ECG chest belt, inertial sensors capturing hand movements, and an eye tracker. Despite their prototypical nature, our results deliver strong evidence that the examined sensor modalities yield biomarkers allowing for personalized detection and quantification of stress and anxiety. This opens up future possibilities to evaluate the severity of individual compulsive behavior based on multivariate state classification in real-life situations. Clinical Relevance: Our results demonstrate the potential for efficient personalized psychotherapy by monitoring physiological and behavioral changes with multiple sensor modalities in ecologically valid real-life scenarios.


Subjects
Obsessive-Compulsive Disorder, Telemedicine, Adolescent, Anxiety Disorders, Cell Cycle Proteins, Child, Humans, Obsessive-Compulsive Disorder/diagnosis, Obsessive-Compulsive Disorder/therapy, Pilot Projects, Psychotherapy
14.
PLoS One ; 17(3): e0264316, 2022.
Article in English | MEDLINE | ID: mdl-35349582

ABSTRACT

Understanding the main factors contributing to individual differences in fluid intelligence is one of the main challenges of psychology. A vast body of research has evolved from the theoretical framework put forward by Cattell, who developed the Culture-Fair IQ Test (CFT 20-R) to assess fluid intelligence. In this work, we extend and complement the current state of research by analysing the differential and combined relationship between eye-movement patterns and socio-demographic information and the ability of a participant to correctly solve a CFT item. Our work shows that a participant's eye movements while solving a CFT item contain discriminative information and can be used to predict whether the participant will succeed in solving the test item. Moreover, the information related to eye movements complements the information provided by socio-demographic data when it comes to success prediction. In combination, both types of information yield a significantly higher predictive performance than each information type individually. To better understand the contributions of features related to eye movements and socio-demographic information to predicting a participant's success in solving a CFT item, we employ state-of-the-art explainability techniques and show that, along with socio-demographic variables, eye-movement data, especially the number of saccades and the mean pupil diameter, significantly increase the discriminating power. The eye-movement features are likely indicative of processing efficiency and invested mental effort. Beyond the specific contribution to research on how eye movements can serve as a means to uncover mechanisms underlying cognitive processes, the findings presented in this work pave the way for further in-depth investigations of factors predicting individual differences in fluid intelligence.


Subjects
Eye Movements, Saccades, Demography, Humans, Intelligence, Intelligence Tests
15.
PLoS One ; 16(8): e0255979, 2021.
Article in English | MEDLINE | ID: mdl-34403454

ABSTRACT

New-generation head-mounted displays, such as VR and AR glasses, are coming onto the market with eye tracking already integrated and are expected to enable novel ways of human-computer interaction in numerous applications. However, since eye movement properties contain biometric information, privacy concerns have to be handled properly. Privacy-preservation techniques such as differential privacy mechanisms have recently been applied to eye movement data obtained from such displays. Standard differential privacy mechanisms, however, are vulnerable due to temporal correlations between the eye movement observations. In this work, we propose a novel transform-coding-based differential privacy mechanism to further adapt it to the statistics of eye movement feature data, and we compare various low-complexity methods. We extend the Fourier perturbation algorithm, which is a differential privacy mechanism, and correct a scaling mistake in its proof. Furthermore, we illustrate significant reductions in sample correlations in addition to query sensitivities, which provide the best utility-privacy trade-off in the eye tracking literature. Our results provide a high level of privacy without any essential loss in classification accuracy while hiding personal identifiers.
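As a heavily simplified, purely illustrative sketch of perturbing eye-movement features in a transform domain (not the paper's corrected Fourier perturbation algorithm, and with no calibrated privacy guarantee), one can truncate and noise the Fourier coefficients of a temporal feature series rather than noising each sample independently:

```python
import numpy as np

rng = np.random.default_rng(0)
# Placeholder temporal eye-movement feature series (e.g., a saccade statistic).
signal = np.sin(np.linspace(0, 8 * np.pi, 256)) + 0.1 * rng.normal(size=256)

k, scale = 16, 0.5                  # retained coefficients, Laplace noise scale
coeffs = np.fft.rfft(signal)
coeffs[k:] = 0                      # transform coding: drop high frequencies
coeffs[:k] += rng.laplace(scale=scale, size=k) \
    + 1j * rng.laplace(scale=scale, size=k)     # perturb retained coefficients
private = np.fft.irfft(coeffs, n=len(signal))

print(np.corrcoef(signal, private)[0, 1])       # utility check: retained structure
```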


Subjects
Algorithms, Eye Movements/physiology, Eye-Tracking Technology/statistics & numerical data, Privacy, Smart Glasses/statistics & numerical data, Female, Humans, Male
16.
Front Sports Act Living ; 3: 692526, 2021.
Article in English | MEDLINE | ID: mdl-34381997

ABSTRACT

The focus of expertise research moves constantly forward and includes cognitive factors, such as visual information perception and processing. In highly dynamic tasks, such as decision making in sports, these factors become more important as a foundation for diagnostic systems and adaptive learning environments. Although most recent research focuses on behavioral features, the underlying cognitive mechanisms remain poorly understood, mainly due to a lack of adequate methods for the analysis of complex eye tracking data that go beyond aggregated fixations and saccades. There are no consistent statements about specific perceptual features that explain expertise. However, these mechanisms are an important part of expertise, especially in decision making in sports games, as highly trained perceptual-cognitive abilities can provide athletes with an advantage. We developed a deep learning approach that independently finds latent perceptual features in fixation image patches. It then derives expertise based solely on these fixation patches, which encompass the gaze behavior of athletes in an elaborately implemented virtual reality setup. We present a CNN-BiLSTM based model for expertise assessment in goalkeeper-specific decision tasks on initiating passes in build-up situations. The empirical validation demonstrated that our model can find valuable latent features that detect the expertise level of 33 athletes (novice, advanced, and expert) with 73.11% accuracy. This model is a first step toward generalizable expertise recognition based on eye movements.
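A rough sketch of a CNN-BiLSTM over a sequence of fixation image patches, as named above: a CNN embeds each patch, a bidirectional LSTM aggregates the sequence, and a linear head predicts the expertise class. All layer sizes are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class CnnBiLstm(nn.Module):
    def __init__(self, n_classes=3):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),                                  # -> (batch, 32)
        )
        self.lstm = nn.LSTM(32, 64, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * 64, n_classes)           # novice/advanced/expert

    def forward(self, patches):                            # (B, T, 3, H, W)
        B, T = patches.shape[:2]
        emb = self.cnn(patches.flatten(0, 1)).view(B, T, -1)
        out, _ = self.lstm(emb)
        return self.head(out[:, -1])                       # last time step

logits = CnnBiLstm()(torch.randn(4, 20, 3, 32, 32))        # 4 sequences of 20 patches
```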

17.
Sci Data ; 8(1): 154, 2021 06 16.
Article in English | MEDLINE | ID: mdl-34135342

ABSTRACT

We present the TüEyeQ data set, to the best of our knowledge the most comprehensive data set generated on a culture fair intelligence test (CFT 20-R), i.e., an IQ test consisting of 56 single tasks, taken by 315 individuals aged between 18 and 30 years. In addition to socio-demographic and educational information, the data set also includes the eye movements of the individuals while taking the IQ test. Along with distributional information, we also highlight the potential for predictive analysis on the TüEyeQ data set and report the most important covariates for predicting the performance of a participant on a given task, along with their influence on the prediction.


Subjects
Eye Movements, Intelligence Tests, Adolescent, Adult, Demography, Educational Status, Female, Germany, Humans, Leisure Activities, Male, Psychological Distance, Young Adult
18.
J Eye Mov Res ; 12(3)2021 Jun 03.
Article in English | MEDLINE | ID: mdl-34122742

ABSTRACT

The control of technological systems by human operators has been the object of study for many decades. The increasing complexity in the digital age has made the optimization of the interaction between system and human operator particularly necessary. In the present thematic issue, ten exemplary articles are presented, ranging from observational field studies to experimental work in highly complex navigation simulators. For the human operator, processes of attention play a crucial role; in the contributions to this thematic issue, these are captured with eye-tracking devices. For many decades, eye tracking during car driving has been investigated extensively (e.g. 6; 5). In the present special issue, Cvahte Ojstersek & Topolsek (4) provide a literature review and scientometric analysis of 139 eye-tracking studies investigating driver distraction. For future studies, the authors recommend a wider variety of distractor stimuli, a larger number of tested participants, and increasing interdisciplinarity of researchers. Whereas most studies investigate bottom-up processes of covert attention, Tuhkanen, Pekkanen, Lehtonen & Lappi (10) include the experimental control of top-down processes of overt attention in an active visuomotor steering task. The results indicate a bottom-up process of biasing the optic flow of the stimulus input in interaction with the top-down saccade planning induced by the steering task. An expanding area of technological development involves autonomous driving, where actions of the human operator directly interact with the programmed reactions of the vehicle. Autonomous driving requires, however, a broader exploration of the entire visual input and less gaze directed towards the road centre. Schnebelen, Charron & Mars (9) conducted experimental research in this area and concluded that gaze dynamics played the most important role in distinguishing between manual and automated driving. By combining advanced gaze-tracking systems with the latest vehicle environment sensors, Bickerdt, Wendland, Geisler, Sonnenberg & Kasneci (2021) conducted a study with 50 participants in a driving simulator and propose a novel way to determine perceptual limits that are applicable to realistic driving scenarios. Eye-Computer Interaction (ECI) is an interactive method of directly controlling a technological device by means of ocular parameters. In this context, Niu, Gao, Xue, Zhang & Yang (8) conducted two experiments to explore the optimal target size and gaze-triggering dwell time in ECI. Their results have exemplary application value for future interface design. Aircraft training and pilot selection are commonly performed on simulators. This makes it possible to study human capabilities and their limitations in interaction with the simulated technological system. Based on their methodological developments and experimental results, Vlacic, Knezevic, Mandal, Rodenkov & Vitsas (11) propose a network approach with three target measures describing the individual saccade strategy of the participants in their study. In their analysis of the cognitive load of pilots, Babu, Jeevitha Shree, Prabhakar, Saluja, Pashilkar & Biswas (3) investigated the ocular parameters of 14 pilots in a simulator and during test flights in an aircraft during air-to-ground attack training. Their results showed that ocular parameters differ significantly across flying conditions and correlate significantly with altitude gradients during air-to-ground dive training tasks.
In maritime training, the use of simulations is mandatory under international regulations. Mao, Hildre & Zhang (7) performed a study of crane lifting and compared novice and expert operators. Similarities and dissimilarities in eye behavior between novices and experts are outlined and discussed. The study by Atik & Arslan (2) involves capturing and analyzing eye movement data of ship officers with sea experience in simulation exercises for assessing competency. Significant differences were found between the electronic navigation competencies of expert and novice ship officers. The authors demonstrate that eye-tracking technology is a valuable tool for the assessment of electronic navigation competency. The focus of the study by Atik (1) is the assessment and training of situational awareness of ship officers in naval Bridge Resource Management. The study shows that eye tracking provides the assessor with important novel data in simulator-based maritime training, such as focus of attention, which is a decisive factor for the effectiveness of Bridge Resource Management training. The research presented in the articles of this special thematic issue covers many different areas of application and involves specialists from different fields, but it converges on repeated demonstrations of the usefulness of measuring attentional processes by eye movements and of using gaze parameters to control complex technological devices. Together, the articles share the common goal of improving the potential and safety of technology in the digital age by fitting it to human capabilities and limitations.

19.
J Eye Mov Res ; 12(3)2021 Apr 26.
Article in English | MEDLINE | ID: mdl-34122743

ABSTRACT

Combining advanced gaze-tracking systems with the latest vehicle environment sensors opens up new fields of application for driver assistance. Gaze tracking enables researchers to determine the location of a fixation and, under consideration of the visual saliency of the scene, to predict visual perception of objects. The perceptual limits for stimulus identification found in the literature have mostly been determined under laboratory conditions using isolated stimuli, with a fixed gaze point, on a single screen with limited coverage of the field of view. These limits are usually reported as hard limits. Such commonly used limits are therefore not applicable to settings with a wide field of view, natural viewing behavior, and multiple simultaneous stimuli. As the handling of sudden, potentially critical driving maneuvers relies heavily on peripheral vision, the peripheral limits for feature perception need to be included in the determined perceptual limits. To analyze human visual perception of different, simultaneously occurring object changes (shape, color, movement), we conducted a study with 50 participants in a driving simulator, and we propose a novel way to determine perceptual limits that is more applicable to driving scenarios.
