Results 1 - 20 of 39
1.
Behav Res Methods ; 56(4): 3226-3241, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38114880

ABSTRACT

We present a deep learning method for accurately localizing the center of a single corneal reflection (CR) in an eye image. Unlike previous approaches, we use a convolutional neural network (CNN) trained solely on synthetic data. Using only synthetic data has the benefit of completely sidestepping the time-consuming manual annotation that is required for supervised training on real eye images. To systematically evaluate the accuracy of our method, we first tested it on images with synthetic CRs placed on different backgrounds and embedded in varying levels of noise. Second, we tested the method on two datasets consisting of high-quality videos captured from real eyes. Our method outperformed state-of-the-art algorithmic methods on real eye images, reducing spatial imprecision by 3-41.5% across datasets, and performed on par with the state of the art on synthetic images in terms of spatial accuracy. We conclude that our method provides precise CR center localization and addresses the data availability problem, one of the major roadblocks in the development of deep learning models for gaze estimation. Due to its superior CR center localization and ease of application, our method has the potential to improve the accuracy and precision of CR-based eye trackers.
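The synthetic-data idea above can be sketched in a few lines: render a Gaussian spot at a known sub-pixel centre, add noise, and the ground-truth label comes for free. This is an illustrative toy (the spot model, image size, and noise level are assumptions, not the paper's actual rendering pipeline):

```python
import math, random

def synthetic_cr_image(size=64, cx=30.5, cy=25.0, sigma=2.0, noise_sd=0.05):
    """Render one synthetic corneal reflection: a Gaussian spot of width
    sigma centred at the known sub-pixel position (cx, cy), plus additive
    Gaussian noise. The ground-truth centre comes for free, so no manual
    annotation is needed."""
    img = []
    for y in range(size):
        row = []
        for x in range(size):
            val = math.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma ** 2))
            row.append(val + random.gauss(0.0, noise_sd))
        img.append(row)
    return img  # size x size nested list; the training label is (cx, cy)

random.seed(0)
img = synthetic_cr_image()
# sanity check: the brightest pixel should sit near the true centre
best = max((v, x, y) for y, row in enumerate(img) for x, v in enumerate(row))
```

A CNN regressor trained on many such images with randomized centres, backgrounds, and noise levels never needs a hand-annotated frame.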


Subjects
Cornea , Deep Learning , Image Processing, Computer-Assisted , Neural Networks, Computer , Humans , Image Processing, Computer-Assisted/methods , Cornea/diagnostic imaging , Cornea/physiology , Algorithms
2.
Behav Res Methods ; 55(1): 364-416, 2023 01.
Article in English | MEDLINE | ID: mdl-35384605

ABSTRACT

In this paper, we present a review of how the various aspects of any study using an eye tracker (such as the instrument, methodology, environment, participant, etc.) affect the quality of the recorded eye-tracking data and the obtained eye-movement and gaze measures. We take this review to represent the empirical foundation for reporting guidelines of any study involving an eye tracker. We compare this empirical foundation to five existing reporting guidelines and to a database of 207 published eye-tracking studies. We find that reporting guidelines vary substantially and do not match actual reporting practices. We end by deriving a minimal, flexible reporting guideline based on empirical research (Section "An empirically based minimal reporting guideline").


Subjects
Eye Movements , Eye-Tracking Technology , Humans , Empirical Research
3.
Adv Health Sci Educ Theory Pract ; 26(1): 159-181, 2021 03.
Article in English | MEDLINE | ID: mdl-32488458

ABSTRACT

In dental medicine, interpreting radiographs (i.e., orthopantomograms, OPTs) is an error-prone process, even for experts. Effective intervention methods are therefore needed to support students in improving their image reading skills for OPTs. To this end, we developed a compare-and-contrast intervention, which aimed at supporting students in achieving full coverage when visually inspecting OPTs and, consequently, obtaining a better diagnostic performance. The comparison entailed a static eye movement visualization (heat map) on an OPT showing full gaze coverage from a peer-model (other student) and another heat map showing a student's own gaze behavior. The intervention group (N = 38) compared five such heat map combinations, whereas the control group (N = 23) diagnosed five OPTs. Prior to the experimental variation (pre-test) and after it (post-test), students in both conditions searched for anomalies in OPTs while their gaze was recorded. Results showed that students in the intervention group covered more areas of the OPTs and looked at anomalies less often and for shorter durations after the intervention. Furthermore, they fixated on low-prevalence anomalies earlier and high-prevalence anomalies later during the inspection. However, the students in the intervention group did not show any meaningful improvement in detection rate and made more false positive errors compared to the control group. Thus, the intervention guided visual attention but did not improve diagnostic performance substantially. Exploratory analyses indicated that further interventions should teach knowledge about anomalies rather than focusing on full coverage of radiographs.


Subjects
Education, Dental/methods , Eye Movements/physiology , Radiology/education , Students, Dental , Adult , Clinical Competence , Female , Humans , Male , Radiography, Panoramic
4.
Behav Res Methods ; 52(3): 1387-1401, 2020 06.
Article in English | MEDLINE | ID: mdl-32212086

ABSTRACT

The increasing employment of eye-tracking technology in different application areas and in vision research has led to an increased need to measure fast eye-movement events. Whereas the cost of commercial high-speed eye trackers (above 300 Hz) is usually in the tens of thousands of EUR, to date, only a small number of studies have proposed low-cost solutions. Existing low-cost solutions, however, focus solely on lower frame rates (up to 120 Hz) that might suffice for basic eye tracking, leaving a gap when it comes to the investigation of high-speed saccadic eye movements. In this paper, we present and evaluate a system designed to track such high-speed eye movements, achieving operating frequencies well beyond 500 Hz. This includes methods to effectively and robustly detect and track glints and pupils in the context of high-speed remote eye tracking, which, paired with a geometric eye model, achieved an average gaze estimation error below 1 degree and an average precision of 0.38 degrees. Moreover, the average undetection rate was only 0.33%. At a total investment of less than 600 EUR, the proposed system represents a competitive and suitable alternative to commercial systems at a tiny fraction of the cost, with the additional advantage that it can be freely tuned by investigators to fit their requirements independent of eye-tracker vendors.
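A precision figure like the 0.38 degrees above is commonly computed as the root mean square of sample-to-sample angular distances during a steady fixation; a minimal sketch under that common definition (gaze coordinates assumed already converted to degrees of visual angle):

```python
import math

def rms_s2s_precision(gaze):
    """RMS sample-to-sample precision: root mean square of the angular
    distances between successive gaze samples, with gaze points given
    in degrees of visual angle. Lower is better."""
    if len(gaze) < 2:
        raise ValueError("need at least two samples")
    sq = [
        (x2 - x1) ** 2 + (y2 - y1) ** 2
        for (x1, y1), (x2, y2) in zip(gaze, gaze[1:])
    ]
    return math.sqrt(sum(sq) / len(sq))

# hypothetical steady fixation with ~0.1 deg jitter
samples = [(10.0, 5.0), (10.1, 5.0), (10.0, 5.1), (10.1, 5.1)]
prec = rms_s2s_precision(samples)
```

The same routine applied per fixation and averaged over a recording gives a single precision number for a tracker.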


Subjects
Eye Movements , Pupil , Algorithms , Eye Movement Measurements
5.
Behav Res Methods ; 52(3): 1140-1160, 2020 06.
Article in English | MEDLINE | ID: mdl-31898290

ABSTRACT

Mobile head-worn eye trackers allow researchers to record eye-movement data as participants freely move around and interact with their surroundings. However, participant behavior may cause the eye tracker to slip on the participant's head, potentially strongly affecting data quality. To investigate how this eye-tracker slippage affects data quality, we designed experiments in which participants mimic behaviors that can cause a mobile eye tracker to move. Specifically, we investigated data quality when participants speak, make facial expressions, and move the eye tracker. Four head-worn eye-tracking setups were used: (i) Tobii Pro Glasses 2 in 50 Hz mode, (ii) SMI Eye Tracking Glasses 2.0 60 Hz, (iii) Pupil-Labs' Pupil in 3D mode, and (iv) Pupil-Labs' Pupil with the Grip gaze estimation algorithm as implemented in the EyeRecToo software. Our results show that whereas gaze estimates of the Tobii and Grip remained stable when the eye tracker moved, the other systems exhibited significant errors (0.8-3.1° increase in gaze deviation over baseline) even for the small amounts of glasses movement that occurred during the speech and facial expressions tasks. We conclude that some of the tested eye-tracking setups may not be suitable for investigating gaze behavior when high accuracy is required, such as during face-to-face interaction scenarios. We recommend that users of mobile head-worn eye trackers perform similar tests with their setups to become aware of their characteristics. This will enable researchers to design experiments that are robust to the limitations of their particular eye-tracking setup.
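A "gaze deviation over baseline" measure of the kind reported above can be illustrated as the mean angular offset between recorded gaze directions and a known fixation target, compared before and after the glasses move. A sketch under that assumption (the vectors and values are made up for illustration):

```python
import math

def angular_accuracy(gaze_dirs, target_dir):
    """Mean angular offset in degrees between unit gaze vectors and the
    known target direction; comparing this offset before and after
    moving the glasses gives the deviation increase over baseline."""
    def norm(v):
        n = math.sqrt(sum(c * c for c in v))
        return tuple(c / n for c in v)
    t = norm(target_dir)
    offs = []
    for g in gaze_dirs:
        g = norm(g)
        dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(g, t))))
        offs.append(math.degrees(math.acos(dot)))
    return sum(offs) / len(offs)

# hypothetical single-sample recordings before and after slippage
baseline = angular_accuracy([(0.01, 0.0, 1.0)], (0.0, 0.0, 1.0))
slipped = angular_accuracy([(0.05, 0.0, 1.0)], (0.0, 0.0, 1.0))
increase = slipped - baseline  # deviation increase over baseline, degrees
```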


Subjects
Data Accuracy , Eye Movements , Adult , Eye Movement Measurements , Female , Head Movements , Humans , Male , Pupil
7.
Graefes Arch Clin Exp Ophthalmol ; 256(12): 2429-2435, 2018 Dec.
Article in English | MEDLINE | ID: mdl-30251198

ABSTRACT

PURPOSE: On-road testing is considered the standard for assessment of driving performance; however, it lacks standardization. In contrast, driving simulators provide controlled experimental settings in a virtual reality environment. This study compares both testing conditions in patients with binocular visual field defects due to bilateral glaucomatous optic neuropathy or due to retro-chiasmal visual pathway lesions. METHODS: Ten glaucoma patients (PG), ten patients with homonymous visual field defects (PH), and 20 age- and gender-matched ophthalmologically normal control subjects (CG and CH, respectively) participated in a 40-min on-road driving task using a dual brake vehicle. A subset of this sample (8 PG, 8 PH, 8 CG, and 7 CH) underwent a subsequent driving simulator test of similar duration. For both settings, pass/fail rates were assessed by a masked driving instructor. RESULTS: For on-road driving, hemianopia patients (PH) and glaucoma patients (PG) showed worse performance than their controls (CH and CG groups), with failure rates of 40% (PH), 30% (CH), 60% (PG), and 0% (CG). Similar results were obtained for the driving simulator test: failure rates of 50% (PH), 29% (CH), 38% (PG), and 0% (CG). Twenty-four out of 31 participants (77%) showed concordant results with regard to pass/fail under both test conditions (p > 0.05; McNemar test). CONCLUSIONS: Driving simulator testing leads to results comparable to on-road driving, in terms of pass/fail rates in subjects with binocular (glaucomatous or retro-chiasmal lesion-induced) visual field defects. Driving simulator testing seems to be a well-standardized method, appropriate for assessment of driving performance in individuals with binocular visual field loss.
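The concordance analysis above rests on the McNemar test applied to the seven discordant participants. An exact two-sided version is straightforward to implement from scratch; the 4/3 split below is hypothetical, chosen only to illustrate the computation:

```python
from math import comb

def mcnemar_exact_p(b, c):
    """Exact two-sided McNemar test on the discordant pairs: b subjects
    pass on-road but fail the simulator, c the reverse. Under H0 each
    discordant pair is a fair coin flip, so p is a two-sided binomial
    tail probability on b + c trials."""
    n, k = b + c, min(b, c)
    p_one_tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * p_one_tail)

# 7 of 31 participants were discordant; assume a hypothetical 4/3 split
p = mcnemar_exact_p(4, 3)  # well above 0.05, i.e. no systematic disagreement
```

With any near-even split of seven discordant pairs, the test cannot reject the null hypothesis, consistent with the reported p > 0.05.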


Subjects
Automobile Driving , Computer Simulation , Hemianopsia/rehabilitation , Vision, Ocular , Visual Fields/physiology , Adult , Aged , Female , Hemianopsia/diagnosis , Hemianopsia/physiopathology , Humans , Male , Middle Aged , ROC Curve , Visual Field Tests
8.
Acta Neurochir (Wien) ; 159(6): 959-966, 2017 06.
Article in English | MEDLINE | ID: mdl-28424915

ABSTRACT

BACKGROUND: Previous studies have consistently demonstrated gaze behaviour differences related to expertise during various surgical procedures. In micro-neurosurgery, however, there is a lack of evidence of empirically demonstrated individual differences associated with visual attention. It is unknown exactly how neurosurgeons see a stereoscopic magnified view in the context of micro-neurosurgery and what this implies for medical training. METHOD: We report on an investigation of the eye movement patterns in micro-neurosurgery using a state-of-the-art eye tracker. We studied the eye movements of nine neurosurgeons while performing cutting and suturing tasks under a surgical microscope. Eye-movement characteristics, such as fixation (focus level) and saccade (visual search pattern), were analysed. RESULTS: The results show a strong relationship between the level of microsurgical skill and the gaze pattern: greater expertise is associated with greater eye control, stability, and focus. For example, in the cutting task, well-trained surgeons showed fixation durations on the operating field twice as long as the novices (expert, 848 ms; novice, 402 ms). CONCLUSIONS: Maintaining steady visual attention on the target (fixation), as well as being able to quickly make eye jumps from one target to another (saccades), are two important elements for the success of neurosurgery. The captured gaze patterns can be used to improve medical education, as part of an assessment system or in a gaze-training application.
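Fixation measures such as the durations reported above are usually derived from raw gaze samples by an event-detection step; one common choice is dispersion-based detection (I-DT), sketched here as an illustration (the thresholds are arbitrary, and the abstract does not specify which detector was used):

```python
def idt_fixations(samples, max_disp=1.0, min_len=5):
    """Dispersion-threshold (I-DT) fixation detection: grow a window
    while its dispersion (x-range + y-range) stays under max_disp;
    windows of at least min_len samples count as fixations.
    Returns (start_index, end_index_exclusive) pairs."""
    fixations, i = [], 0
    while i < len(samples):
        j = i + 1
        while j <= len(samples):
            xs = [x for x, _ in samples[i:j]]
            ys = [y for _, y in samples[i:j]]
            if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_disp:
                break
            j += 1
        j -= 1  # last window that satisfied the threshold
        if j - i >= min_len:
            fixations.append((i, j))
            i = j
        else:
            i += 1
    return fixations

# toy trace: two steady fixations separated by one saccade sample
trace = [(0, 0)] * 6 + [(10, 10)] + [(5, 5)] * 6
fixes = idt_fixations(trace)
```

Multiplying each detected window's sample count by the sampling interval yields the fixation durations compared between experts and novices.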


Subjects
Microsurgery/standards , Neurosurgeons/standards , Neurosurgery/standards , Saccades , Adult , Attention , Female , Humans , Male , Microsurgery/education , Microsurgery/methods , Neurosurgeons/education , Neurosurgery/education , Neurosurgery/methods
9.
Behav Res Methods ; 49(3): 1048-1064, 2017 06.
Article in English | MEDLINE | ID: mdl-27443354

ABSTRACT

Our eye movements are driven by a continuous trade-off between the need for detailed examination of objects of interest and the necessity to keep an overview of our surroundings. Consequently, behavioral patterns that are characteristic for our actions and their planning are typically manifested in the way we move our eyes to interact with our environment. Identifying such patterns from individual eye movement measurements is, however, highly challenging. In this work, we tackle the challenge of quantifying the influence of experimental factors on eye movement sequences. We introduce an algorithm for extracting sequence-sensitive features from eye movements and for the classification of eye movements based on the frequencies of small subsequences. Our approach is evaluated against the state of the art on a novel and very rich collection of eye-movement data derived from four experimental settings, from static viewing tasks to highly dynamic outdoor settings. Our results show that the proposed method is able to classify eye movement sequences over a variety of experimental designs. The choice of parameters is discussed in detail with special focus on highlighting different aspects of general scanpath shape. Algorithms and evaluation data are available at: http://www.ti.uni-tuebingen.de/scanpathcomparison.html.
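The subsequence-frequency idea can be sketched as follows: encode each saccade as a coarse direction symbol, then use relative n-gram frequencies as a feature vector for a standard classifier. The encoding below is a simplified illustration, not the exact alphabet used by the published algorithm:

```python
from collections import Counter

def direction_string(fixations):
    """Encode a scanpath as coarse saccade directions: R/L/U/D for the
    dominant axis of each transition between consecutive fixations."""
    out = []
    for (x1, y1), (x2, y2) in zip(fixations, fixations[1:]):
        dx, dy = x2 - x1, y2 - y1
        out.append(("R" if dx > 0 else "L") if abs(dx) >= abs(dy)
                   else ("D" if dy > 0 else "U"))
    return "".join(out)

def ngram_frequencies(s, n=2):
    """Relative frequencies of length-n subsequences: the
    sequence-sensitive feature vector for scanpath classification."""
    grams = [s[i:i + n] for i in range(len(s) - n + 1)]
    total = len(grams)
    return {g: c / total for g, c in Counter(grams).items()}

# hypothetical scanpath: fixation centres in screen coordinates
scan = [(0, 0), (5, 0), (10, 0), (10, 5), (5, 5)]
feats = ngram_frequencies(direction_string(scan))
```

Because the features are frequencies rather than absolute positions, scanpaths of different lengths and on different stimuli become directly comparable.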


Subjects
Algorithms , Eye Movement Measurements/classification , Eye Movements/physiology , Female , Humans , Male , Photic Stimulation
11.
Optom Vis Sci ; 92(11): 1037-46, 2015 Nov.
Article in English | MEDLINE | ID: mdl-26501733

ABSTRACT

PURPOSE: The aim of this pilot study was to assess the driving performance and the visual search behavior, that is, eye and head movements, of patients with glaucoma in comparison to healthy-sighted subjects during a simulated driving test. METHODS: Driving performance and gaze behavior of six glaucoma patients and eight healthy-sighted age- and sex-matched control subjects were compared in an advanced driving simulator. All subjects underwent a 40-minute driving test including nine hazardous situations on city and rural roads. Fitness to drive was assessed by a masked driving instructor according to the requirements of the official German driving test. Several driving performance measures were investigated: lane position, time to line crossing, and speed. Additionally, eye and head movements were tracked and analyzed. RESULTS: Three out of six glaucoma patients passed the driving test, and their driving performance was indistinguishable from that of the control group. Patients who passed the test showed an increased visual exploration in comparison to patients who failed; that is, they showed an increased number of head and gaze movements toward eccentric regions. Furthermore, patients who failed the test showed a rightward bias in average lane position, probably in an attempt to maximize the safety margin to oncoming traffic. CONCLUSIONS: Our study suggests that a considerable subgroup of subjects with binocular glaucomatous visual field loss shows a safe driving behavior in a virtual reality environment, because they adapt their viewing behavior by increasing their visual scanning. Hence, binocular visual field loss does not necessarily influence driving safety. We therefore recommend more individualized driving assessments that take into account the patient's ability to compensate.
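Time to line crossing, one of the driving measures above, can be approximated to first order as the lateral distance to the lane edge divided by the lateral drift speed; real implementations also account for heading angle and road curvature, which this sketch ignores:

```python
def time_to_line_crossing(lateral_pos, lane_half_width, lateral_speed):
    """First-order time-to-line-crossing: time until a vehicle drifting
    at lateral_speed (m/s, positive = rightwards) from lateral_pos
    (m, 0 = lane centre) reaches the lane edge at +/- lane_half_width.
    Returns float('inf') when the vehicle is not drifting."""
    if lateral_speed == 0:
        return float("inf")
    edge = lane_half_width if lateral_speed > 0 else -lane_half_width
    return max((edge - lateral_pos) / lateral_speed, 0.0)

# hypothetical sample: drifting right at 0.25 m/s, 0.5 m right of centre
tlc = time_to_line_crossing(0.5, 1.75, 0.25)
```

A rightward bias in average lane position, as observed in patients who failed, directly shortens the TLC toward the right lane edge.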


Subjects
Automobile Driving , Fixation, Ocular/physiology , Glaucoma/physiopathology , Task Performance and Analysis , Vision Disorders/physiopathology , Vision, Binocular/physiology , Visual Fields/physiology , Aged , Automobile Driver Examination , Computer Simulation , Eye Movements/physiology , Female , Head Movements/physiology , Humans , Male , Middle Aged , Pilot Projects , Safety , Visual Perception/physiology
12.
MethodsX ; 12: 102662, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38577409

ABSTRACT

This article provides a step-by-step guideline for measuring and analyzing visual attention in 3D virtual reality (VR) environments based on eye-tracking data. We propose a solution to the challenges of obtaining relevant eye-tracking information in a dynamic 3D virtual environment and calculating interpretable indicators of learning and social behavior. With a method called "gaze-ray casting," we simulated 3D-gaze movements to obtain information about the gazed objects. This information was used to create graphical models of visual attention, establishing attention networks. These networks represented participants' gaze transitions between different entities in the VR environment over time. Measures of centrality, distribution, and interconnectedness of the networks were calculated to describe the network structure. The measures, derived from graph theory, allowed for statistical inference testing and the interpretation of participants' visual attention in 3D VR environments. Our method provides useful insights when analyzing students' learning in a VR classroom, as reported in a corresponding evaluation article with N = 274 participants. •Guidelines on implementing gaze-ray casting in VR using the Unreal Engine and the HTC VIVE Pro Eye.•Creating gaze-based attention networks and analyzing their network structure.•Implementation tutorials and the Open Source software code are provided via OSF: https://osf.io/pxjrc/?view_only=1b6da45eb93e4f9eb7a138697b941198.
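The transition-network construction described above can be sketched without any graph library: count transitions between consecutively gazed entities, then compute a standard centrality measure over the resulting directed graph. The labels and gaze sequence below are hypothetical:

```python
from collections import Counter, defaultdict

def attention_network(gaze_labels):
    """Build a directed attention network from a sequence of gazed
    entities: edge (a, b) counts transitions from a to b. Consecutive
    samples on the same entity are treated as one dwell, not an edge."""
    edges = Counter()
    for a, b in zip(gaze_labels, gaze_labels[1:]):
        if a != b:
            edges[(a, b)] += 1
    return edges

def degree_centrality(edges):
    """Degree centrality per node: number of distinct neighbours
    divided by (n - 1), the standard graph-theoretic normalization."""
    neighbours = defaultdict(set)
    for a, b in edges:
        neighbours[a].add(b)
        neighbours[b].add(a)
    n = len(neighbours)
    return {v: len(nb) / (n - 1) for v, nb in neighbours.items()}

# hypothetical sequence of entities hit by gaze-ray casting in the VR classroom
seq = ["teacher", "board", "teacher", "peer", "teacher", "board"]
net = attention_network(seq)
cent = degree_centrality(net)
```

Edge weights support the distribution and interconnectedness measures, while centrality identifies the entities that anchor a participant's visual attention.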

13.
Sci Rep ; 14(1): 12329, 2024 05 29.
Article in English | MEDLINE | ID: mdl-38811593

ABSTRACT

Mental rotation is the ability to rotate mental representations of objects in space. Shepard and Metzler's shape-matching tasks, frequently used to test mental rotation, involve presenting pictorial representations of 3D objects. This stimulus material has raised questions regarding the ecological validity of the test for mental rotation with actual visual 3D objects. To systematically investigate differences in mental rotation with pictorial and visual stimuli, we compared data of N = 54 university students from a virtual reality experiment. Comparing both conditions within subjects, we found higher accuracy and faster reaction times for 3D visual figures. We expected eye tracking to reveal differences in participants' stimulus processing and mental rotation strategies induced by the visual differences. We statistically compared fixations (locations), saccades (directions), pupil changes, and head movements. Additionally, we analyzed Shapley values of a Gradient Boosting Decision Tree algorithm that correctly classified the two conditions using eye and head movements. The results indicated that with visual 3D figures, the encoding of spatial information was less demanding, and participants may have used egocentric transformations and perspective changes. Moreover, participants showed eye movements associated with more holistic processing for visual 3D figures and more piecemeal processing for pictorial 2D figures.


Subjects
Eye Movements , Humans , Female , Male , Eye Movements/physiology , Young Adult , Adult , Rotation , Reaction Time/physiology , Photic Stimulation/methods , Space Perception/physiology , Virtual Reality , Visual Perception/physiology , Head Movements/physiology , Saccades/physiology
14.
NPJ Sci Learn ; 9(1): 41, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38951543

ABSTRACT

Intelligence and personality are both key drivers of learning. This study extends prior research on intelligence and personality by adopting a behavioral-process-related eye-tracking approach. We tested 182 adults on fluid intelligence and the Big Five personality traits. Eye-tracking information (gaze patterns) was recorded while participants completed the intelligence test. Machine learning models showed that personality explained 3.18% of the variance in intelligence test scores, with Openness and, surprisingly, Agreeableness most meaningfully contributing to the prediction. Facet-level measures of personality explained a larger amount of variance (7.67%) in intelligence test scores than the trait-level measures, with the largest coefficients obtained for Ideas and Values (Openness) and Compliance and Trust (Agreeableness). Gaze patterns explained a substantial amount of variance in intelligence test performance (35.91%). Gaze patterns were unrelated to the Big Five personality traits, but some of the facets (especially Self-Consciousness from Neuroticism and Assertiveness from Extraversion) were related to gaze. Gaze patterns reflected the test-solving strategies described in the literature (constructive matching, response elimination) to some extent. A combined feature vector consisting of gaze-based predictions and personality traits explained 37.50% of the variance in intelligence test performance, with significant unique contributions from both personality and gaze patterns. A model that included personality facets and gaze explained 38.02% of the variance in intelligence test performance. Although behavioral data thus clearly outperformed "traditional" psychological measures (Big Five personality) in predicting intelligence test performance, our results also underscore the independent contributions of personality and gaze patterns in predicting intelligence test performance.

15.
IEEE Trans Pattern Anal Mach Intell ; 46(4): 2104-2122, 2024 Apr.
Article in English | MEDLINE | ID: mdl-37956008

ABSTRACT

Explainable AI (XAI) is widely viewed as a sine qua non for ever-expanding AI research. A better understanding of the needs of XAI users, as well as human-centered evaluations of explainable models, are both a necessity and a challenge. In this paper, we explore how human-computer interaction (HCI) and AI researchers conduct user studies in XAI applications based on a systematic literature review. After identifying and thoroughly analyzing 97 core papers with human-based XAI evaluations over the past five years, we categorize them along the measured characteristics of explanatory methods, namely trust, understanding, usability, and human-AI collaboration performance. Our research shows that XAI is spreading more rapidly in certain application domains, such as recommender systems, than in others, but that user evaluations are still rather sparse and incorporate hardly any insights from cognitive or social sciences. Based on a comprehensive discussion of best practices, i.e., common models, design choices, and measures in user studies, we propose practical guidelines on designing and conducting user studies for XAI researchers and practitioners. Lastly, this survey also highlights several open research directions, particularly linking psychological science and human-centered XAI.


Subjects
Algorithms , Humans
16.
J Dent ; 140: 104793, 2024 01.
Article in English | MEDLINE | ID: mdl-38016620

ABSTRACT

OBJECTIVES: We aimed to understand how artificial intelligence (AI) influences dentists by comparing their gaze behavior when using versus not using an AI software to detect primary proximal carious lesions on bitewing radiographs. METHODS: 22 dentists assessed a median of 18 bitewing images resulting in 170 datasets from dentists without AI and 179 datasets from dentists with AI, after excluding data with poor gaze recording quality. We compared time to first fixation, fixation count, average fixation duration, and fixation frequency between both trial groups. Analyses were performed for the entire image and stratified by (1) presence of carious lesions and/or restorations and (2) lesion depth (E1/2: outer/inner enamel; D1-3: outer-inner third of dentin). We also compared the transitional pattern of the dentists' gaze between the trial groups. RESULTS: Median time to first fixation was shorter in all groups of teeth for dentists with AI versus without AI, although p>0.05. Dentists with AI had more fixations (median=68, IQR=31, 116) on teeth with restorations compared to dentists without AI (median=47, IQR=19, 100), p = 0.01. In turn, average fixation duration was longer on teeth with caries for the dentists with AI than those without AI, although p>0.05. The visual search strategy employed by dentists with AI was less systematic, with a lower proportion of lateral tooth-wise transitions compared to dentists without AI. CONCLUSIONS: Dentists with AI exhibited more efficient viewing behavior compared to dentists without AI, e.g., less time taken to notice caries and/or restorations, more fixations on teeth with restorations, and shorter fixation durations on teeth without carious lesions and/or restorations. CLINICAL SIGNIFICANCE: Analysis of dentists' gaze patterns while using AI-generated annotations of carious lesions demonstrates how AI influences their data extraction methods for dental images. Such insights can be exploited to improve, and even customize, AI-based diagnostic tools, thus reducing the dentists' extraneous attentional processing and allowing for more thorough examination of other image areas.
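The compared measures (time to first fixation, fixation count, average fixation duration) can be computed per tooth from an AOI-labelled fixation list; the record format below is an assumption for illustration:

```python
def aoi_metrics(fixations, aoi):
    """Fixation-based AOI measures from a list of
    (aoi_label, onset_ms, duration_ms) fixations: time to first
    fixation, fixation count, and average fixation duration for aoi."""
    hits = [(onset, dur) for label, onset, dur in fixations if label == aoi]
    if not hits:
        return {"ttff_ms": None, "count": 0, "avg_dur_ms": None}
    return {
        "ttff_ms": hits[0][0],
        "count": len(hits),
        "avg_dur_ms": sum(d for _, d in hits) / len(hits),
    }

# hypothetical recording: (tooth AOI, onset in ms, duration in ms)
rec = [("tooth_24", 120, 300), ("tooth_25", 480, 250),
       ("tooth_24", 800, 410), ("tooth_26", 1290, 180)]
m = aoi_metrics(rec, "tooth_24")
```

Stratifying the fixation list by lesion presence or depth before calling such a routine reproduces the per-group comparisons reported above.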


Subjects
Artificial Intelligence , Dental Caries , Humans , Dental Caries Susceptibility , Dental Restoration, Permanent , Practice Patterns, Dentists' , Dental Caries/diagnostic imaging , Dental Caries/pathology , Dentists
17.
J Eye Mov Res ; 16(4)2023.
Article in English | MEDLINE | ID: mdl-38544928

ABSTRACT

During calibration, an eye-tracker fits a mapping function from features to a target gaze point. While there is research on which mapping function to use, little is known about how to best estimate the function's parameters. We investigate how different fitting methods impact accuracy under different noise factors, such as mobile eye-tracker imprecision or detection errors in feature extraction during calibration. For this purpose, a simulation of binocular gaze was developed for a) different calibration patterns and b) different noise characteristics. We found that the commonly used polynomial regression via least-squares-error fit often fails to find good mapping functions when compared to ridge regression. Especially as data becomes noisier, outlier-tolerant fitting methods are of importance. We demonstrate a 20% reduction in mean MSE by simply using ridge regression instead of a least-squares fit in a mobile eye-tracking experiment.
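The ridge-versus-least-squares comparison can be sketched with a second-order polynomial mapping fitted via the normal equations; the calibration grid, target function, and regularization strength below are illustrative assumptions (NumPy handles the linear algebra):

```python
import numpy as np

def poly_features(px, py):
    """Second-order polynomial features of a pupil/CR feature point,
    a common basis for polynomial gaze-mapping functions."""
    return np.array([1.0, px, py, px * py, px ** 2, py ** 2])

def fit_mapping(feats, targets, ridge_lambda=0.0):
    """Fit mapping coefficients by (regularized) least squares:
    lambda = 0 gives the common polynomial least-squares fit;
    lambda > 0 gives ridge regression, which tolerates noisy
    calibration samples better."""
    X = np.array([poly_features(px, py) for px, py in feats])
    A = X.T @ X + ridge_lambda * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ np.array(targets))

# hypothetical 3x3 calibration grid; horizontal gaze angle ~ linear in pupil x
grid = [(x, y) for x in (0.1, 0.5, 0.9) for y in (0.1, 0.5, 0.9)]
targets = [10 * x for x, _ in grid]  # degrees
w_ls = fit_mapping(grid, targets)
w_ridge = fit_mapping(grid, targets, ridge_lambda=1e-3)
pred = float(poly_features(0.5, 0.4) @ w_ridge)
```

On clean data both fits behave almost identically; the difference appears once calibration samples contain outliers, where the unregularized normal equations chase the noise.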

18.
Sci Rep ; 13(1): 14672, 2023 09 06.
Article in English | MEDLINE | ID: mdl-37673939

ABSTRACT

Higher-achieving peers have repeatedly been found to negatively impact students' evaluations of their own academic abilities (i.e., the Big-Fish-Little-Pond Effect). Building on social comparison theory, this pattern is assumed to result from students comparing themselves to their classmates; however, based on existing research designs, it remains unclear how exactly students make use of social comparison information in the classroom. To determine the extent to which students (N = 353 sixth graders) actively attend and respond to social comparison information in the form of peers' achievement-related behaviour, we used eye-tracking data from an immersive virtual reality (IVR) classroom. IVR classrooms offer unprecedented opportunities for psychological classroom research, as they allow researchers to combine authentic classroom scenarios with maximum experimental control. In the present study, we experimentally varied virtual classmates' achievement-related behaviour (i.e., their hand-raising in response to the teacher's questions) during instruction, and students' eye and gaze data showed that they actively processed this social comparison information. Students who attended more to social comparison information (as indicated by more frequent and longer gaze durations at peer learners) had less favourable self-evaluations. We discuss implications for the future use of IVR environments to study behaviours in the classroom and beyond.


Subjects
Social Comparison , Virtual Reality , Animals , Humans , Social Behavior , Interpersonal Relations , Students
19.
J Dent ; 135: 104585, 2023 08.
Article in English | MEDLINE | ID: mdl-37301462

ABSTRACT

OBJECTIVES: Understanding dentists' gaze patterns on radiographs may help unravel sources of their limited accuracy and develop strategies to mitigate them. We conducted an eye tracking experiment to characterize dentists' scanpaths and thus their gaze patterns when assessing bitewing radiographs to detect primary proximal carious lesions. METHODS: 22 dentists assessed a median of nine bitewing images each, resulting in 170 datasets after excluding data with poor quality of gaze recording. Fixation was defined as an area of attentional focus related to visual stimuli. We calculated time to first fixation, fixation count, average fixation duration, and fixation frequency. Analyses were performed for the entire image and stratified by (1) presence of carious lesions and/or restorations and (2) lesion depth (E1/2: outer/inner enamel; D1-3: outer-inner third of dentin). We also examined the transitional nature of the dentists' gaze. RESULTS: Dentists had more fixations on teeth with lesions and/or restorations (median=138 [interquartile range=87, 204]) than teeth without them (32 [15, 66]), p<0.001. Notably, teeth with lesions had longer fixation durations (407 milliseconds [242, 591]) than those with restorations (289 milliseconds [216, 337]), p<0.001. Time to first fixation was longer for teeth with E1 lesions (17,128 milliseconds [8813, 21,540]) than lesions of other depths (p = 0.049). The highest number of fixations were on teeth with D2 lesions (43 [20, 51]) and lowest on teeth with E1 lesions (5 [1, 37]), p<0.001. Generally, a systematic tooth-by-tooth gaze pattern was observed. CONCLUSIONS: As hypothesized, while visually inspecting bitewing radiographic images, dentists employed a heightened focus on certain image features/areas relevant to the assigned task. Also, they generally examined the entire image in a systematic tooth-by-tooth pattern.


Subjects
Dental Caries , Dentin , Humans , Dentin/pathology , Radiography, Bitewing , Dental Caries/pathology , Dental Enamel/pathology , Dentists , Practice Patterns, Dentists'
20.
PLoS One ; 17(3): e0264316, 2022.
Article in English | MEDLINE | ID: mdl-35349582

ABSTRACT

Understanding the main factors contributing to individual differences in fluid intelligence is one of the main challenges of psychology. A vast body of research has evolved from the theoretical framework put forward by Cattell, who developed the Culture-Fair IQ Test (CFT 20-R) to assess fluid intelligence. In this work, we extend and complement the current state of research by analysing the differential and combined relationship between eye-movement patterns and socio-demographic information and the ability of a participant to correctly solve a CFT item. Our work shows that a participant's eye movements while solving a CFT item contain discriminative information and can be used to predict whether the participant will succeed in solving the test item. Moreover, the information related to eye movements complements the information provided by socio-demographic data when it comes to success prediction. In combination, both types of information yield a significantly higher predictive performance than each information type individually. To better understand the contributions of features related to eye movements and socio-demographic information to predict a participant's success in solving a CFT item, we employ state-of-the-art explainability techniques and show that, along with socio-demographic variables, eye-movement data, especially the number of saccades and the mean pupil diameter, significantly increase the discriminating power. The eye-movement features are likely indicative of processing efficiency and invested mental effort. Beyond the specific contribution to research on how eye movements can serve as a means to uncover mechanisms underlying cognitive processes, the findings presented in this work pave the way for further in-depth investigations of factors predicting individual differences in fluid intelligence.


Subjects
Eye Movements , Saccades , Demography , Humans , Intelligence , Intelligence Tests