Results 1 - 20 of 5,903
1.
Sensors (Basel) ; 24(10)2024 May 08.
Article in English | MEDLINE | ID: mdl-38793839

ABSTRACT

Understanding human actions often requires in-depth detection and interpretation of bio-signals. Early eye disengagement from the target (EEDT) represents a significant eye behavior that involves the proactive disengagement of gaze from the target to gather information on the anticipated pathway, thereby enabling rapid reactions to the environment. It remains unknown how task difficulty and task repetition affect EEDT. We aim to provide direct evidence of how these factors influence EEDT. We developed a visual tracking task in which participants viewed arrow movement videos while their eye movements were tracked. Task complexity was increased by increasing the number of movement steps. Every movement pattern was performed twice to assess the effect of repetition on eye movement. Participants were required to recall the movement patterns for evaluation of recall accuracy and to complete a cognitive load assessment. EEDT was quantified by the fixation duration and frequency within areas ahead of the arrow. As task difficulty increased, recall accuracy decreased, cognitive load increased, and EEDT decreased significantly. EEDT was higher in the second trial, but the difference was significant only in tasks of lower complexity. EEDT was positively correlated with recall accuracy and negatively correlated with cognitive load. EEDT was thus reduced by task complexity and increased by task repetition. EEDT may be a promising sensory measure for assessing task performance and cognitive load and can be used for the future development of eye-tracking-based sensors.
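As a concrete illustration of how an EEDT-style metric could be computed from fixation data, here is a minimal Python sketch that counts fixations and sums fixation durations inside an area of interest (AOI) ahead of the moving arrow. The data layout, AOI bounds, and coordinate units are assumptions for illustration, not the study's actual implementation.

```python
import numpy as np

# Hypothetical fixation records: (x, y, duration_ms) in screen pixels.
fixations = np.array([
    [420.0, 300.0, 180.0],
    [510.0, 305.0, 220.0],
    [630.0, 310.0, 150.0],
])

# Hypothetical AOI ahead of the arrow's current position,
# given as (x_min, y_min, x_max, y_max) in the same pixel coordinates.
aoi_ahead = (500.0, 250.0, 700.0, 350.0)

def eedt_metrics(fixations, aoi):
    """Return fixation count and total fixation duration inside the AOI ahead of the target."""
    x, y, dur = fixations[:, 0], fixations[:, 1], fixations[:, 2]
    inside = (x >= aoi[0]) & (x <= aoi[2]) & (y >= aoi[1]) & (y <= aoi[3])
    return int(inside.sum()), float(dur[inside].sum())

count, total_ms = eedt_metrics(fixations, aoi_ahead)
print(f"EEDT fixations: {count}, total duration: {total_ms:.0f} ms")
```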


Subject(s)
Eye Movements , Eye-Tracking Technology , Humans , Male , Eye Movements/physiology , Female , Adult , Young Adult , Task Performance and Analysis , Cognition/physiology , Fixation, Ocular/physiology
2.
Sci Rep ; 14(1): 11661, 2024 05 22.
Article in English | MEDLINE | ID: mdl-38778122

ABSTRACT

Gaze estimation has long been recognised as having potential as the basis for human-computer interaction (HCI) systems, but usability and robustness of performance remain challenging. This work focuses on systems in which there is a live video stream showing enough of the subject's face to track eye movements and some means to infer gaze location from detected eye features. Currently, systems generally require some form of calibration or set-up procedure at the start of each user session. Here we explore some simple strategies for enabling gaze-based HCI to operate immediately and robustly without any explicit set-up tasks. We explore different choices of coordinate origin for combining extracted features from multiple subjects and the replacement of subject-specific calibration by system initiation based on prior models. Results show that referencing all extracted features to local coordinate origins determined by subject start position enables robust immediate operation. Combining this approach with an adaptive gaze estimation model using an interactive user interface enables continuous operation with 75th-percentile gaze errors of 0.7° and maximum gaze errors of 1.7° during prospective testing. These constitute state-of-the-art results and have the potential to enable a new generation of reliable gaze-based HCI systems.
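To make the "local coordinate origin determined by subject start position" idea concrete, the following hedged sketch re-references per-frame gaze features to a baseline computed from the first frames of a session. The feature layout and baseline window are assumptions; the paper's actual feature extraction and estimation model are not reproduced here.

```python
import numpy as np

def to_local_origin(features, n_baseline=30):
    """Re-reference extracted gaze features to a local origin defined by the
    subject's start position (mean of the first n_baseline frames)."""
    features = np.asarray(features, dtype=float)   # shape (n_frames, n_features)
    origin = features[:n_baseline].mean(axis=0)    # per-feature baseline
    return features - origin, origin

# Hypothetical per-frame features, e.g. (pupil_x, pupil_y) in camera pixels.
raw = np.random.default_rng(0).normal(loc=[320.0, 240.0], scale=5.0, size=(100, 2))
local, origin = to_local_origin(raw)
print("estimated start-position origin:", origin.round(1))
```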


Subject(s)
Eye Movements , Fixation, Ocular , User-Computer Interface , Humans , Fixation, Ocular/physiology , Eye Movements/physiology , Male , Eye-Tracking Technology , Female , Adult
3.
Opt Lett ; 49(9): 2489-2492, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38691751

ABSTRACT

Point scanning retinal imaging modalities, including confocal scanning light ophthalmoscopy (cSLO) and optical coherence tomography, suffer from fixational motion artifacts. Fixation targets, though effective at reducing eye motion, are infeasible in some applications (e.g., handheld devices) due to their bulk and complexity. Here, we report on a cSLO device that scans the retina in a spiral pattern under pseudo-visible illumination, thus collecting image data while simultaneously projecting, into the subject's vision, the image of a bullseye, which acts as a virtual fixation target. An imaging study of 14 young adult volunteers was conducted to compare the fixational performance of this technique to that of raster scanning, with and without a discrete inline fixation target. Image registration was used to quantify subject eye motion; a strip-wise registration method was used for raster scans, and a novel, to the best of our knowledge, ring-based method was used for spiral scans. Results indicate a statistically significant reduction in eye motion by the use of spiral scanning as compared to raster scanning without a fixation target.
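For intuition about spiral scanning, here is a small sketch that generates an Archimedean spiral scan path of the kind a cSLO could trace while the outward-growing rings double as a bullseye-like fixation pattern. The number of turns, sampling density, and field size are illustrative assumptions rather than the parameters of the reported device.

```python
import numpy as np

def spiral_scan(n_turns=20, samples_per_turn=500, max_radius_deg=5.0):
    """Generate an Archimedean spiral scan path (x, y) in degrees of visual field."""
    theta = np.linspace(0.0, 2.0 * np.pi * n_turns, n_turns * samples_per_turn)
    radius = max_radius_deg * theta / theta[-1]   # radius grows linearly with angle
    return radius * np.cos(theta), radius * np.sin(theta)

x, y = spiral_scan()
print(f"{x.size} scan samples, max eccentricity {np.hypot(x, y).max():.2f} deg")
```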


Subject(s)
Fixation, Ocular , Ophthalmoscopy , Retina , Humans , Retina/diagnostic imaging , Fixation, Ocular/physiology , Ophthalmoscopy/methods , Adult , Young Adult , Eye Movements
4.
J Vis ; 24(5): 3, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38709511

ABSTRACT

In everyday life we frequently make simple visual judgments about object properties, for example, how big or wide is a certain object? Our goal is to test whether there are also task-specific oculomotor routines that support perceptual judgments, similar to the well-established exploratory routines for haptic perception. In a first study, observers saw different scenes with two objects presented in a photorealistic virtual reality environment. Observers were asked to judge which of two objects was taller or wider while gaze was tracked. All tasks were performed with the same set of virtual objects in the same scenes, so that we could compare spatial characteristics of exploratory gaze behavior to quantify oculomotor routines for each task. Width judgments showed fixations around the center of the objects with larger horizontal spread. In contrast, for height judgments, gaze was shifted toward the top of the objects with larger vertical spread. These results suggest specific strategies in gaze behavior that presumably are used for perceptual judgments. To test the causal link between oculomotor behavior and perception, in a second study, observers either gazed freely at the object or were placed in a gaze-contingent setup forcing them to fixate specific positions on the object. Discrimination performance was similar between the free-gaze and gaze-contingent conditions for both width and height judgments. These results suggest that although gaze is adapted for different tasks, performance seems to be based on a perceptual strategy, independent of potential cues that can be provided by the oculomotor system.
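As a rough illustration of the spatial-spread measures mentioned above, this sketch computes horizontal and vertical fixation spread (and the mean fixation offset) from object-centred fixation coordinates; the data and units are hypothetical.

```python
import numpy as np

def fixation_spread(fix_xy):
    """Horizontal and vertical spread (standard deviation) of fixations on an object,
    plus their mean position relative to the object centre."""
    fix_xy = np.asarray(fix_xy, dtype=float)
    return {
        "horizontal_spread": fix_xy[:, 0].std(ddof=1),
        "vertical_spread":   fix_xy[:, 1].std(ddof=1),
        "mean_offset":       fix_xy.mean(axis=0),
    }

# Hypothetical object-centred fixation coordinates (degrees) from a width-judgment trial.
width_trial = [(-1.2, 0.1), (0.3, -0.2), (1.0, 0.0), (-0.4, 0.2)]
print(fixation_spread(width_trial))
```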


Subject(s)
Eye Movements , Fixation, Ocular , Judgment , Humans , Judgment/physiology , Male , Female , Adult , Eye Movements/physiology , Young Adult , Fixation, Ocular/physiology , Photic Stimulation/methods , Virtual Reality , Visual Perception/physiology
5.
Rev Paul Pediatr ; 42: e2023017, 2024.
Article in English | MEDLINE | ID: mdl-38716993

ABSTRACT

OBJECTIVE: To evaluate the eye-gaze patterns of preterm (PT), autism spectrum disorder (ASD) and neurotypical (Ty) children. METHODS: A cross-sectional study was performed with eight preterm (born weighing ≤2000 g), nine ASD and five Ty male children between six and nine years old. Eye gaze was evaluated by successively presenting, on a computer screen, a board showing a couple in social interaction and a video of four children playing with blocks, and measuring the time the children looked at each stimulus. RESULTS: All groups focused on the central social figure with no significant differences, but the ASD group showed significant differences in fixation time on the objects (p=0.021), while the preterm children spent more time fixating on the central social interaction, relative to the whole scene, than the typical children. CONCLUSIONS: Although this study found noteworthy differences in the eye-gaze patterns among the three groups, additional research with a more extensive participant pool is necessary to validate these preliminary results.


Subject(s)
Autism Spectrum Disorder , Fixation, Ocular , Infant, Premature , Humans , Autism Spectrum Disorder/psychology , Male , Cross-Sectional Studies , Child , Female , Fixation, Ocular/physiology , Social Interaction
6.
PLoS One ; 19(5): e0293436, 2024.
Article in English | MEDLINE | ID: mdl-38723019

ABSTRACT

BACKGROUND: The free throw is an important means of scoring in basketball games. As the level of competition and the intensity of confrontation increase, the number of free throws in a game gradually rises, so free throw scores have an important impact on the result of the game. The purpose of this study is to explore the relationship between visual attention characteristics and the hit rate of basketball players during free throw psychological procedure training, so as to provide a scientific basis for basketball teaching and training. METHODS: Forty players with similar free throw abilities were randomly assigned to the experimental group (10 males, 10 females) and the control group (10 males, 10 females). The experimental group received free throw psychological procedure training, while the control group received routine training. Eye movement indices (number of fixations, fixation duration, and pupil dilation) and the free throw hit rate were measured and analyzed before and after the experiment. Group differences were examined using t-tests, while paired sample t-tests were conducted to compare pre- and post-test results within each group. The training time and number of training sessions were the same for the two groups. RESULTS: There were significant differences in fixation duration, number of fixations, pupil diameter and free throw hit rate between pre-test and post-test in the experimental group (P < 0.05). At post-test, there were significant differences in number of fixations, fixation duration, pupil diameter and free throw hit rate between the two groups (P < 0.05). There was a significant positive correlation between the number of fixations on the top area and the free throw hit rate (P < 0.01), and a significant positive correlation between fixation duration on the front area and the hit rate (P < 0.01). CONCLUSIONS: Psychological procedure training can improve the visual information search strategy and information processing ability during free throws and significantly improve the free throw hit rate. Fixation duration on the front area and the number of fixations on the top area were both positively correlated with the free throw hit rate.
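The statistical comparisons described (between-group t-tests and within-group paired t-tests) might look like the following SciPy sketch; the hit-rate values are simulated placeholders, not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical free throw hit rates (%) for 20 players per group, pre- and post-training.
exp_pre,  exp_post  = rng.normal(62, 6, 20), rng.normal(70, 6, 20)
ctrl_pre, ctrl_post = rng.normal(61, 6, 20), rng.normal(64, 6, 20)

# Within-group change: paired-samples t-test (pre vs. post).
t_within, p_within = stats.ttest_rel(exp_pre, exp_post)

# Between-group difference at post-test: independent-samples t-test.
t_between, p_between = stats.ttest_ind(exp_post, ctrl_post)

print(f"experimental pre vs post: t = {t_within:.2f}, p = {p_within:.3f}")
print(f"post-test exp vs control: t = {t_between:.2f}, p = {p_between:.3f}")
```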


Subject(s)
Basketball , Fixation, Ocular , Humans , Male , Female , Basketball/psychology , Young Adult , Fixation, Ocular/physiology , Athletic Performance/physiology , Athletic Performance/psychology , Attention/physiology , Eye Movements/physiology , Adult
7.
Invest Ophthalmol Vis Sci ; 65(5): 39, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38787546

ABSTRACT

Purpose: Post-saccadic oscillations (PSOs) reflect movements of gaze that result from motion of the pupil and lens relative to the eyeball rather than eyeball rotations. Here, we analyzed the characteristics of PSOs in subjects with age-related macular degeneration (AMD), retinitis pigmentosa (RP), and normal vision (NV). Our aim was to assess the differences in PSOs between people with vision loss and healthy controls because PSOs affect retinal image stability after each saccade. Methods: Participants completed a horizontal saccade task and their gaze was measured using a pupil-based eye tracker. Oscillations occurring in the 80 to 200 ms post-saccadic period were described with a damped oscillation model. We compared the amplitude, decay time constant, and frequency of the PSOs for the three different groups. We also examined the correlation between these PSO parameters and the amplitude, peak velocity, and final deceleration of the preceding saccades. Results: Subjects with vision loss (AMD, n = 6, and RP, n = 5) had larger oscillation amplitudes, longer decay constants, and lower frequencies than subjects with NV (n = 7). The oscillation amplitudes increased with increases in saccade deceleration in all three groups. The other PSO parameters, however, did not show consistent correlations with either saccade amplitude or peak velocity. Conclusions: Post-saccadic fixation stability in AMD and RP is reduced due to abnormal PSOs. The differences with respect to NV are not due to differences in saccade kinematics, suggesting that anatomic and neuronal variations affect the suspension of the iris and the lens in the patients' eyes.
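A damped-oscillation fit of the kind described could be set up as below; the exact functional form, the parameter start values, and the simulated trace are assumptions used only to show how amplitude, decay time constant, and frequency would be estimated.

```python
import numpy as np
from scipy.optimize import curve_fit

def damped_oscillation(t, amplitude, tau, freq, phase, offset):
    """Damped sinusoid used to describe post-saccadic oscillations (PSOs)."""
    return amplitude * np.exp(-t / tau) * np.sin(2 * np.pi * freq * t + phase) + offset

# Hypothetical post-saccadic gaze trace: a 120 ms window (cf. the 80-200 ms
# post-saccadic period in the abstract), sampled at 1 kHz.
t = np.arange(0.0, 0.120, 0.001)                      # seconds
rng = np.random.default_rng(2)
trace = damped_oscillation(t, 0.4, 0.025, 20.0, 0.0, 0.0) + rng.normal(0, 0.02, t.size)

p0 = [0.3, 0.02, 15.0, 0.0, 0.0]                      # rough initial guess
params, _ = curve_fit(damped_oscillation, t, trace, p0=p0)
amplitude, tau, freq = params[0], params[1], params[2]
print(f"amplitude = {amplitude:.2f}, decay tau = {tau * 1000:.1f} ms, frequency = {freq:.1f} Hz")
```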


Subject(s)
Fixation, Ocular , Macular Degeneration , Pupil , Retinitis Pigmentosa , Saccades , Humans , Saccades/physiology , Retinitis Pigmentosa/physiopathology , Female , Male , Fixation, Ocular/physiology , Middle Aged , Macular Degeneration/physiopathology , Aged , Pupil/physiology , Lens, Crystalline/physiopathology , Adult , Visual Acuity/physiology
8.
PLoS One ; 19(5): e0303755, 2024.
Article in English | MEDLINE | ID: mdl-38758747

ABSTRACT

Recent eye tracking studies have linked gaze reinstatement-when eye movements from encoding are reinstated during retrieval-with memory performance. In this study, we investigated whether gaze reinstatement is influenced by the affective salience of information stored in memory, using an adaptation of the emotion-induced memory trade-off paradigm. Participants learned word-scene pairs, where scenes were composed of negative or neutral objects located on the left or right side of neutral backgrounds. This allowed us to measure gaze reinstatement during scene memory tests based on whether people looked at the side of the screen where the object had been located. Across two experiments, we behaviorally replicated the emotion-induced memory trade-off effect, in that negative object memory was better than neutral object memory at the expense of background memory. Furthermore, we found evidence that gaze reinstatement was related to recognition memory for the object and background scene components. This effect was generally comparable for negative and neutral memories, although the effects of valence varied somewhat between the two experiments. Together, these findings suggest that gaze reinstatement occurs independently of the processes contributing to the emotion-induced memory trade-off effect.
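One simple way to operationalise gaze reinstatement as described, i.e. looking at the screen side where the object had been located at encoding, is sketched below; the screen width, coordinates, and scoring rule are illustrative assumptions.

```python
import numpy as np

def reinstatement_score(retrieval_fix_x, encoded_side, screen_width=1920):
    """Fraction of retrieval-phase fixations that land on the screen half where
    the critical object appeared at encoding ('left' or 'right')."""
    x = np.asarray(retrieval_fix_x, dtype=float)
    on_left = x < screen_width / 2.0
    hits = on_left if encoded_side == "left" else ~on_left
    return hits.mean()

# Hypothetical fixation x-coordinates during a scene memory test.
score = reinstatement_score([400, 520, 1500, 610, 350], encoded_side="left")
print(f"gaze reinstatement: {score:.2f}")
```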


Subject(s)
Emotions , Eye Movements , Eye-Tracking Technology , Memory , Humans , Emotions/physiology , Female , Male , Young Adult , Adult , Memory/physiology , Eye Movements/physiology , Fixation, Ocular/physiology , Adolescent , Recognition, Psychology/physiology , Photic Stimulation
9.
Sci Robot ; 9(90): eadj8124, 2024 May 29.
Article in English | MEDLINE | ID: mdl-38809998

ABSTRACT

Neuromorphic vision sensors or event cameras have made the visual perception of extremely low reaction time possible, opening new avenues for high-dynamic robotics applications. These event cameras' output is dependent on both motion and texture. However, the event camera fails to capture object edges that are parallel to the camera motion. This is a problem intrinsic to the sensor and therefore challenging to solve algorithmically. Human vision deals with perceptual fading using the active mechanism of small involuntary eye movements, the most prominent ones called microsaccades. By moving the eyes constantly and slightly during fixation, microsaccades can substantially maintain texture stability and persistence. Inspired by microsaccades, we designed an event-based perception system capable of simultaneously maintaining low reaction time and stable texture. In this design, a rotating wedge prism was mounted in front of the aperture of an event camera to redirect light and trigger events. The geometrical optics of the rotating wedge prism allows for algorithmic compensation of the additional rotational motion, resulting in a stable texture appearance and high informational output independent of external motion. The hardware device and software solution are integrated into a system, which we call artificial microsaccade-enhanced event camera (AMI-EV). Benchmark comparisons validated the superior data quality of AMI-EV recordings in scenarios where both standard cameras and event cameras fail to deliver. Various real-world experiments demonstrated the potential of the system to facilitate robotics perception both for low-level and high-level vision tasks.


Subject(s)
Algorithms , Equipment Design , Robotics , Saccades , Visual Perception , Robotics/instrumentation , Humans , Saccades/physiology , Visual Perception/physiology , Motion , Software , Reaction Time/physiology , Biomimetics/instrumentation , Fixation, Ocular/physiology , Eye Movements/physiology , Vision, Ocular/physiology
10.
J Vis ; 24(5): 17, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38819805

ABSTRACT

What is the link between eye movements and sensory learning? Although some theories have argued for an automatic interaction between what we know and where we look that continuously modulates human information gathering behavior during both implicit and explicit learning, there exists limited experimental evidence supporting such an ongoing interplay. To address this issue, we used a visual statistical learning paradigm combined with a gaze-contingent stimulus presentation and manipulated the explicitness of the task to explore how learning and eye movements interact. During both implicit exploration and explicit visual learning of unknown composite visual scenes, spatial eye movement patterns systematically and gradually changed in accordance with the underlying statistical structure of the scenes. Moreover, the degree of change was directly correlated with the amount and type of knowledge the observers acquired. This suggests that eye movements are potential indicators of active learning, a process where long-term knowledge, current visual stimuli and an inherent tendency to reduce uncertainty about the visual environment jointly determine where we look.


Subject(s)
Eye Movements , Learning , Photic Stimulation , Humans , Eye Movements/physiology , Learning/physiology , Male , Young Adult , Female , Adult , Photic Stimulation/methods , Visual Perception/physiology , Fixation, Ocular/physiology
11.
Sensors (Basel) ; 24(9)2024 Apr 23.
Article in English | MEDLINE | ID: mdl-38732772

ABSTRACT

In mobile eye-tracking research, the automatic annotation of fixation points is an important yet difficult task, especially in varied and dynamic environments such as outdoor urban landscapes. This complexity is increased by the constant movement and dynamic nature of both the observer and their environment in urban spaces. This paper presents a novel approach that integrates the capabilities of two foundation models, YOLOv8 and Mask2Former, as a pipeline to automatically annotate fixation points without requiring additional training or fine-tuning. Our pipeline leverages YOLO's extensive training on the MS COCO dataset for object detection and Mask2Former's training on the Cityscapes dataset for semantic segmentation. This integration not only streamlines the annotation process but also improves accuracy and consistency, ensuring reliable annotations, even in complex scenes with multiple objects side by side or at different depths. Validation through two experiments showcases its efficiency, achieving 89.05% accuracy in a controlled data collection and 81.50% accuracy in a real-world outdoor wayfinding scenario. With an average runtime per frame of 1.61 ± 0.35 s, our approach stands as a robust solution for automatic fixation annotation.
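A hedged sketch of the detection half of such a pipeline is shown below, assuming the ultralytics YOLOv8 interface with COCO-pretrained weights; the Mask2Former segmentation fallback is only indicated by a comment, and the file name, confidence threshold, and fixation coordinates are placeholders.

```python
from ultralytics import YOLO  # assumes the ultralytics package is installed
import cv2

model = YOLO("yolov8n.pt")  # COCO-pretrained detector, as in the abstract

def annotate_fixation(frame_bgr, fix_x, fix_y, conf=0.25):
    """Return the COCO label of the detected object (if any) containing the fixation point."""
    result = model(frame_bgr, conf=conf, verbose=False)[0]
    for box, cls in zip(result.boxes.xyxy.tolist(), result.boxes.cls.tolist()):
        x1, y1, x2, y2 = box
        if x1 <= fix_x <= x2 and y1 <= fix_y <= y2:
            return model.names[int(cls)]
    # A full pipeline would fall back to semantic segmentation (e.g., Mask2Former) here.
    return "background"

frame = cv2.imread("scene_frame.jpg")          # hypothetical scene-camera frame
print(annotate_fixation(frame, fix_x=640, fix_y=360))
```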


Subject(s)
Eye-Tracking Technology , Fixation, Ocular , Humans , Fixation, Ocular/physiology , Video Recording/methods , Algorithms , Eye Movements/physiology
12.
PLoS One ; 19(5): e0302459, 2024.
Article in English | MEDLINE | ID: mdl-38809939

ABSTRACT

Saccadic eye movements enable us to search for a target of interest in a crowded scene or, in the case of goal-directed saccades, simply to bring the image of a peripheral target to the very centre of the fovea. This mechanism extends the superior image-processing performance of the fovea over a large visual field. We know that visual information is processed quickly at the end of each saccade, but estimates of the times involved remain controversial. This study aims to investigate the processing of visual information during post-saccadic oscillations of the eyeball. A new psychophysical test measures the combined eye movement response latencies, including fixation duration and visual processing times. When the test is used in conjunction with an eye tracker, each component that makes up the 'integrated saccade latency' time, from the onset of the peripheral stimulus to the correct interpretation of the information carried by the stimulus, can be measured and the discrete components delineated. The results show that the time required to process and encode the stimulus attribute of interest at the end of a saccade is longer than the time needed to carry out the same task in the absence of an eye movement. We propose two principal hypotheses, each of which can account for this finding: 1. The known inhibition of afferent retinal signals during fast eye movements extends beyond the end point of the saccade. 2. The extended visual processing times measured when saccades are involved are caused by a transient loss of spatial resolution due to eyeball instability during post-saccadic oscillations. The latter can best be described as retinal image smear, with greater loss of spatial resolution expected for stimuli of low luminance contrast.


Subject(s)
Fixation, Ocular , Reaction Time , Saccades , Visual Perception , Humans , Saccades/physiology , Adult , Male , Female , Reaction Time/physiology , Visual Perception/physiology , Fixation, Ocular/physiology , Young Adult , Photic Stimulation , Visual Fields/physiology , Time Factors
13.
Sci Rep ; 14(1): 12056, 2024 05 31.
Article in English | MEDLINE | ID: mdl-38821979

ABSTRACT

During the pandemic, digital communication became paramount. Due to the discrepancy between the placement of the camera and the screen in typical smartphones, tablets and laptops, mutual eye contact cannot be made in standard video communication. Although the positive effect of eye contact in traditional communication has been well-documented, its role in virtual contexts remains less explored. In this study, we conducted experiments to gauge the impact of gaze direction during a simulated online job interview. Twelve university students were recruited as interviewees. The interview consisted of two recording sessions where they delivered the same prepared speech: in the first session, they faced the camera, and in the second, they directed their gaze towards the screen. Based on the recorded videos, we created three stimuli: one where the interviewee's gaze was directed at the camera (CAM), one where the interviewee's gaze was skewed downward (SKW), and a voice-only stimulus without camera recordings (VO). Thirty-eight full-time workers participated in the study and evaluated the stimuli. The results revealed that the SKW condition garnered significantly less favorable evaluations than the CAM condition and the VO condition. Moreover, a secondary analysis indicated a potential gender bias in evaluations: the female evaluators evaluated the interviewees of SKW condition more harshly than the male evaluators did, and the difference in some evaluation criteria between the CAM and SKW conditions was larger for the female interviewees than for the male interviewees. Our findings emphasize the significance of gaze direction and potential gender biases in online interactions.


Subject(s)
Fixation, Ocular , Humans , Female , Male , Adult , Fixation, Ocular/physiology , Young Adult , Video Recording , Eye Movements/physiology , Interviews as Topic , COVID-19/prevention & control , COVID-19/epidemiology
14.
J Vis ; 24(4): 20, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38656530

ABSTRACT

We obtain large amounts of external information through our eyes, a process often considered analogous to picture mapping onto a camera lens. However, our eyes are never as still as a camera lens, with saccades occurring between fixations and microsaccades occurring within a fixation. Although saccades are agreed to be functional for information sampling in visual perception, it remains unknown if microsaccades have a similar function when eye movement is restricted. Here, we demonstrated that saccades and microsaccades share common spatiotemporal structures in viewing visual objects. Twenty-seven adults viewed faces and houses in free-viewing and fixation-controlled conditions. Both saccades and microsaccades showed distinctive spatiotemporal patterns between face and house viewing that could be discriminated by pattern classifications. The classifications based on saccades and microsaccades could also be mutually generalized. Importantly, individuals who showed more distinctive saccadic patterns between faces and houses also showed more distinctive microsaccadic patterns. Moreover, saccades and microsaccades showed a higher structure similarity for face viewing than house viewing and a common orienting preference for the eye region over the mouth region. These findings suggested a common oculomotor program that is used to optimize information sampling during visual object perception.
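The pattern classification mentioned above could, in principle, be run on coarse fixation-density maps as in this sketch; the simulated trials, bin size, and classifier choice are assumptions, not the study's analysis.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)

def density_map(fix_xy, bins=8):
    """Bin normalised fixation locations into a coarse spatial histogram (flattened)."""
    hist, _, _ = np.histogram2d(fix_xy[:, 0], fix_xy[:, 1], bins=bins, range=[[0, 1], [0, 1]])
    return (hist / hist.sum()).ravel()

# Hypothetical trials: face viewing biased toward the upper (eye) region, houses more uniform.
face_trials  = [density_map(rng.beta([2, 5], [2, 2], size=(40, 2))) for _ in range(60)]
house_trials = [density_map(rng.uniform(size=(40, 2)))              for _ in range(60)]

X = np.vstack(face_trials + house_trials)
y = np.array([1] * 60 + [0] * 60)                  # 1 = face viewing, 0 = house viewing

scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print(f"cross-validated decoding accuracy: {scores.mean():.2f}")
```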


Subject(s)
Fixation, Ocular , Saccades , Visual Perception , Humans , Saccades/physiology , Male , Female , Adult , Fixation, Ocular/physiology , Young Adult , Visual Perception/physiology , Photic Stimulation/methods , Pattern Recognition, Visual/physiology
15.
Sci Rep ; 14(1): 9433, 2024 04 24.
Article in English | MEDLINE | ID: mdl-38658592

ABSTRACT

Selective retrieval of context-relevant memories is critical for animal survival. A behavioral index that captures its dynamic nature in real time is necessary to investigate this retrieval process. Here, we found a bias in eye gaze towards the locations previously associated with individual objects during retrieval. Participants learned two locations associated with each visual object and recalled one of them indicated by a contextual cue in the following days. Before the contextual cue presentation, participants often gazed at both locations associated with the given object on the background screen (look-at-both), and the frequency of look-at-both gaze pattern increased as learning progressed. Following the cue presentation, their gaze shifted toward the context-appropriate location. Interestingly, participants showed a higher accuracy of memory retrieval in trials where they gazed at both object-associated locations, implying functional advantage of the look-at-both gaze patterns. Our findings indicate that naturalistic eye movements reflect the dynamic process of memory retrieval and selection, highlighting the potential of eye gaze as an indicator for studying these cognitive processes.


Subject(s)
Eye Movements , Fixation, Ocular , Mental Recall , Humans , Male , Female , Mental Recall/physiology , Young Adult , Fixation, Ocular/physiology , Adult , Eye Movements/physiology , Cues , Memory/physiology , Learning/physiology
16.
Ophthalmic Physiol Opt ; 44(4): 774-786, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38578134

ABSTRACT

PURPOSE: To investigate gaze and behavioural metrics at different viewing distances with multifocal contact lenses (MFCLs), single vision contact lenses (SVCLs) and progressive addition lenses (PALs). METHODS: Fifteen presbyopic contact lens wearers participated over five separate study visits. At each visit, participants were randomly assigned to wear one of five refractive corrections: habitual PAL spectacles, delefilcon A (Alcon Inc.) MFCLs and three separate pairs of delefilcon A single vision lenses worn as distance, intermediate and near corrections. Participants wore a Pupil Core headset to record eye and head movements while performing three visual tasks: reading, visual search and scene observation. Data were investigated using linear regression and post-hoc testing. Parameters of interest included gaze (fixation duration, head movement) and behavioural (reading speed, reading accuracy, visual search time) metrics. RESULTS: Reading speed in SVCLs was significantly faster than in MFCLs and PAL spectacles (F = 16.3, p < 0.0001). Refractive correction worn did not influence visual search times (F = 0.16, p = 0.85). Fixation duration was significantly affected by the type of visual task (F = 60.2, p < 0.001), and an interaction effect was observed between viewing distance and refractive correction (F = 4.3, p = 0.002). There was significantly more horizontal and vertical head movement (F = 3.2, p = 0.01 and F = 3.3, p = 0.01, respectively) during visual search tasks when wearing PAL spectacles compared to SVCLs or MFCLs. CONCLUSION: This work showed that the type of refractive correction affects behavioural metrics such as reading speed and gaze behaviour by affecting horizontal and vertical head movements. The findings of this study suggest that under certain conditions, wearers of MFCLs make fewer head movements compared to PAL spectacles. Gaze behaviour metrics offer a new approach to compare and understand contact lens and spectacle performance, with potential applications including peripheral optical designs for myopia management.


Subject(s)
Contact Lenses , Eyeglasses , Fixation, Ocular , Presbyopia , Reading , Refraction, Ocular , Visual Acuity , Adult , Female , Humans , Male , Middle Aged , Eye Movements/physiology , Fixation, Ocular/physiology , Head Movements/physiology , Presbyopia/physiopathology , Presbyopia/therapy , Refraction, Ocular/physiology , Visual Acuity/physiology , Cross-Over Studies , Prospective Studies
17.
Atten Percept Psychophys ; 86(4): 1318-1329, 2024 May.
Article in English | MEDLINE | ID: mdl-38594445

ABSTRACT

Competing theories attempt to explain what guides eye movements when exploring natural scenes: bottom-up image salience and top-down semantic salience. In one study, we apply language-based analyses to quantify the well-known observation that task influences gaze in natural scenes. Subjects viewed ten scenes as if they were performing one of two tasks. We found that the semantic similarity between the task and the labels of objects in the scenes captured the task-dependence of gaze (t(39) = 13.083; p < 0.001). In another study, we examined whether image salience or semantic salience better predicts gaze during a search task, and if viewing strategies are affected by searching for targets of high or low semantic relevance to the scene. Subjects searched 100 scenes for a high- or low-relevance object. We found that image salience becomes a worse predictor of gaze across successive fixations, while semantic salience remains a consistent predictor (χ²(1, N = 40) = 75.148, p < .001). Furthermore, we found that semantic salience decreased as object relevance decreased (t(39) = 2.304; p = .027). These results suggest that semantic salience is a useful predictor of gaze during task-related scene viewing, and that even in target-absent trials, gaze is modulated by the relevance of a search target to the scene in which it might be located.
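The semantic-similarity computation underlying "semantic salience" can be illustrated with cosine similarity between embeddings of the task description and of object labels; the toy embeddings and labels below are entirely hypothetical and stand in for whatever distributional-semantics model such a language-based analysis would use.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical pre-computed word embeddings (placeholders for real model vectors).
embeddings = {
    "cooking": np.array([0.9, 0.1, 0.2]),
    "pan":     np.array([0.8, 0.2, 0.1]),
    "sofa":    np.array([0.1, 0.9, 0.3]),
}

task = "cooking"
object_labels = ["pan", "sofa"]

# Semantic salience of each labelled object with respect to the task.
salience = {lbl: cosine(embeddings[task], embeddings[lbl]) for lbl in object_labels}
print(salience)   # objects more similar to the task are predicted to attract more gaze
```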


Subject(s)
Attention , Fixation, Ocular , Semantics , Humans , Fixation, Ocular/physiology , Attention/physiology , Male , Female , Young Adult , Adult , Pattern Recognition, Visual/physiology , Eye Movements/physiology
18.
Sensors (Basel) ; 24(8)2024 Apr 16.
Article in English | MEDLINE | ID: mdl-38676162

ABSTRACT

Pupil size is a significant biosignal for human behavior monitoring and can reveal much underlying information. This study explored the effects of task load, task familiarity, and gaze position on pupil response during learning a visual tracking task. We hypothesized that pupil size would increase with task load, up to a certain level before decreasing, decrease with task familiarity, and increase more when focusing on areas preceding the target than other areas. Fifteen participants were recruited for an arrow tracking learning task with incremental task load. Pupil size data were collected using a Tobii Pro Nano eye tracker. A 2 × 3 × 5 three-way factorial repeated measures ANOVA was conducted using R (version 4.2.1) to evaluate the main and interactive effects of key variables on adjusted pupil size. The association between individuals' cognitive load, assessed by NASA-TLX, and pupil size was further analyzed using a linear mixed-effect model. We found that task repetition resulted in a reduction in pupil size; however, this effect was found to diminish as the task load increased. The main effect of task load approached statistical significance, but different trends were observed in trial 1 and trial 2. No significant difference in pupil size was detected among the three gaze positions. The relationship between pupil size and cognitive load overall followed an inverted U curve. Our study showed how pupil size changes as a function of task load, task familiarity, and gaze scanning. This finding provides sensory evidence that could improve educational outcomes.
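The linear mixed-effects analysis described (pupil size as a function of cognitive load, with participants as a grouping factor) could be expressed in Python with statsmodels as below; the simulated data, the quadratic term capturing an inverted-U shape, and the random-intercept structure are assumptions for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)

# Hypothetical long-format data: repeated trials per participant with NASA-TLX
# cognitive-load scores and adjusted pupil size (an inverted-U relationship is simulated).
n_subj, n_trials = 15, 30
tlx = rng.uniform(10, 90, size=n_subj * n_trials)
subject = np.repeat(np.arange(n_subj), n_trials)
subj_offset = rng.normal(0, 0.2, n_subj)[subject]   # per-participant baseline differences
pupil = 3.0 + 0.02 * tlx - 0.0002 * tlx**2 + subj_offset + rng.normal(0, 0.1, tlx.size)

df = pd.DataFrame({"pupil": pupil, "tlx": tlx, "subject": subject})

# Random intercept per participant; the quadratic term tests the inverted-U shape.
model = smf.mixedlm("pupil ~ tlx + I(tlx**2)", data=df, groups=df["subject"])
print(model.fit().summary())
```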


Subject(s)
Eye-Tracking Technology , Pupil , Humans , Pupil/physiology , Male , Female , Adult , Young Adult , Fixation, Ocular/physiology , Eye Movements/physiology
19.
Dev Neurorehabil ; 27(1-2): 27-33, 2024.
Article in English | MEDLINE | ID: mdl-38676395

ABSTRACT

This paper explores whether a structured history-taking tool yields useful descriptions of children's looking skills. Parents of 32 children referred to a specialist communication clinic reported their child's looking skills using the Functional Vision for Communication Questionnaire (FVC-Q), providing descriptions of single object fixation, fixation shifts between objects and fixation shifts from object to person. Descriptions were compared with clinical assessment. 24/32 children were reported to have some limitation in fixation. Limitation was subsequently seen in 30/32 children. Parental report and assessment agreed fully in 23/32 (72%). The largest area of discrepancy was object-person fixation shifts, with five children not observed to show this behavior despite its being reported. Findings indicate a structured questionnaire yields description of fixations, which correspond well with clinical assessment. Descriptions supported discussion between parents and clinicians. It is proposed that the FVC-Q is a valuable tool in supporting clinicians in eliciting information about fixation skills.


Subject(s)
Communication , Parents , Humans , Female , Child , Surveys and Questionnaires , Male , Child, Preschool , Fixation, Ocular/physiology , Adolescent , Medical History Taking
20.
Neuropsychologia ; 199: 108883, 2024 Jul 04.
Article in English | MEDLINE | ID: mdl-38599567

ABSTRACT

Left smooth pursuit eye movement training in response to large-field visual motion (optokinetic stimulation) has become a promising rehabilitation method in left spatial inattention or neglect. The mechanisms underlying the therapeutic effect, however, remain unknown. During optokinetic stimulation, there is an error in visual localisation ahead of the line of sight. This could indicate a change in the brain's estimate of one's own direction of gaze. We hypothesized that optokinetic stimulation changes the brain's estimate of gaze. Because this estimate is critical for coding the locus of attention in the visual space relative to the body and across sensory modalities, its change might underlie the change in spatial attention. Here, we report that in healthy participants optokinetic stimulation causes not only a directional bias in the proprioceptive signal from the extraocular muscles, but also a corresponding shift of the locus of attention. Both changes outlasted the period of stimulation. This result forms a step in investigating a causal link between the adaptation in the sensorimotor gaze signals and the recovery in spatial neglect.


Subject(s)
Attention , Fixation, Ocular , Perceptual Disorders , Humans , Attention/physiology , Male , Perceptual Disorders/rehabilitation , Perceptual Disorders/physiopathology , Perceptual Disorders/etiology , Female , Adult , Fixation, Ocular/physiology , Photic Stimulation , Space Perception/physiology , Young Adult , Motion Perception/physiology , Proprioception/physiology , Pursuit, Smooth/physiology