Results 1 - 20 of 5,929
1.
J Vis ; 24(7): 1, 2024 Jul 02.
Article in English | MEDLINE | ID: mdl-38953861

ABSTRACT

Applications for eye-tracking, particularly in the clinic, are limited by a reliance on dedicated hardware. Here we compare eye-tracking implemented on an Apple iPad Pro 11" (third generation), using the device's infrared head-tracking and front-facing camera, with a Tobii 4c infrared eye-tracker. We estimated gaze location using both systems while 28 observers performed a variety of tasks. For estimating fixation, gaze position estimates from the iPad were less accurate and precise than those from the Tobii (mean absolute error of 3.2° ± 2.0° compared with 0.75° ± 0.43°), but fixation stability estimates were correlated across devices (r = 0.44, p < 0.05). For tasks eliciting saccades >1.5°, estimated saccade counts were moderately correlated across devices (r = 0.4-0.73, all p < 0.05). For tasks eliciting saccades >8°, we observed moderate correlations in estimated saccade speed and amplitude (r = 0.4-0.53, all p < 0.05). We did, however, note considerable variation in the vertical component of estimated smooth pursuit speed from the iPad and a catastrophic failure of tracking on the iPad in 5% to 20% of observers (depending on the test). Our findings sound a note of caution to researchers seeking to use iPads for eye-tracking and emphasize the need to examine eye-tracking data carefully to remove artifacts and outliers.
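The headline comparison in this abstract is a mean absolute gaze error in degrees of visual angle. As a rough illustration of how such a figure can be derived from calibrated gaze samples and a known fixation target, here is a minimal sketch; the array shapes, noise level, and NumPy-based implementation are illustrative assumptions, not the study's actual pipeline.

```python
import numpy as np

def mean_absolute_gaze_error(gaze_xy_deg, target_xy_deg):
    """Mean absolute Euclidean error (deg) between gaze samples and a fixation target.

    gaze_xy_deg: (N, 2) array of gaze estimates in degrees of visual angle.
    target_xy_deg: (2,) target location in the same coordinate frame.
    """
    errors = np.linalg.norm(gaze_xy_deg - np.asarray(target_xy_deg), axis=1)
    return errors.mean()

# Illustrative data (not from the study): noisy samples around a target at (0, 0).
rng = np.random.default_rng(0)
gaze = rng.normal(loc=0.0, scale=1.5, size=(500, 2))   # hypothetical tablet-like noise
print(f"MAE: {mean_absolute_gaze_error(gaze, (0.0, 0.0)):.2f} deg")
```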


Subject(s)
Eye-Tracking Technology , Fixation, Ocular , Saccades , Humans , Fixation, Ocular/physiology , Saccades/physiology , Male , Adult , Female , Young Adult , Pursuit, Smooth/physiology , Computers, Handheld , Eye Movements/physiology
2.
J Neural Eng ; 21(4), 2024 Jul 12.
Article in English | MEDLINE | ID: mdl-38959876

ABSTRACT

Objective. Patients with severe paralysis or locked-in syndrome can regain communication using a brain-computer interface (BCI). Visual event-related potential (ERP)-based BCI paradigms exploit visuospatial attention (VSA) to targets laid out on a screen. However, performance drops if the user does not direct their eye gaze at the intended target, harming the utility of this class of BCIs for patients with eye motor deficits. We aim to create an ERP decoder that is less dependent on eye gaze. Approach. ERP component latency jitter plays a role in covert VSA decoding. We introduce a novel decoder that compensates for these latency effects, termed Woody Classifier-based Latency Estimation (WCBLE). We carried out a BCI experiment recording ERP data under overt and covert VSA, and introduce a novel special case of covert VSA, termed split VSA, that simulates the experience of patients with severely impaired eye motor control. We evaluate WCBLE on this dataset and on the BNCI2014-009 dataset, within and across VSA conditions, to study the dependency on eye gaze and its variation during the experiment. Main results. WCBLE outperforms state-of-the-art methods in gaze-independent decoding in the VSA conditions of interest, without reducing overt VSA performance. Across-condition evaluation shows that WCBLE is more robust to varying VSA conditions throughout a BCI operation session. Significance. Together, these results point toward a pathway to gaze independence through suitable ERP decoding. Our proposed gaze-independent solution enhances decoding performance in cases where overt VSA is not possible.
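WCBLE builds on Woody-style latency estimation, in which single-trial ERP latencies are recovered by iterative cross-correlation against an evolving template. The sketch below shows only that classical Woody procedure, not the authors' classifier-based variant; the lag window and iteration count are arbitrary assumptions.

```python
import numpy as np

def woody_latency_estimates(trials, max_lag=50, n_iter=5):
    """Woody-style single-trial latency estimation by iterative template matching.

    trials: (n_trials, n_samples) single-trial ERP epochs.
    max_lag: maximum shift (in samples) considered in either direction (assumed value).
    Returns per-trial latencies (in samples) relative to the average template.
    """
    n_trials, n_samples = trials.shape
    lags = np.zeros(n_trials, dtype=int)
    template = trials.mean(axis=0)
    for _ in range(n_iter):
        for i, trial in enumerate(trials):
            # Cross-correlate each trial with the template over the allowed lag window.
            xc = np.correlate(trial - trial.mean(), template - template.mean(), mode="full")
            center = n_samples - 1                       # index of zero lag
            window = xc[center - max_lag: center + max_lag + 1]
            lags[i] = np.argmax(window) - max_lag        # positive = trial delayed
        # Re-estimate the template from latency-corrected trials.
        template = np.mean([np.roll(t, -lag) for t, lag in zip(trials, lags)], axis=0)
    return lags
```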


Subject(s)
Attention , Brain-Computer Interfaces , Electroencephalography , Fixation, Ocular , Humans , Male , Female , Adult , Fixation, Ocular/physiology , Attention/physiology , Electroencephalography/methods , Young Adult , Photic Stimulation/methods , Reaction Time/physiology , Evoked Potentials, Visual/physiology
3.
BMC Ophthalmol ; 24(1): 278, 2024 Jul 09.
Article in English | MEDLINE | ID: mdl-38982388

ABSTRACT

OBJECTIVE: To investigate the characteristics of eye movement in children with anisometropic amblyopia and to compare those characteristics with eye movement in a control group. METHODS: The study included 31 children in the anisometropic amblyopia group (31 amblyopic eyes in group A, 31 contralateral eyes in group B) and 24 children in the control group (48 eyes in group C). Group A was subdivided into groups Aa (severe amblyopia) and Ab (mild-moderate amblyopia). The overall age range was 6-12 years (mean, 7.83 ± 1.79 years). All children underwent ophthalmic examinations; eye movement parameters, including saccade latency and amplitude, were evaluated using an EyeLink 1000 eye tracker. Data Viewer and MATLAB software were used for data analysis. RESULTS: Mean and maximum saccade latencies, as well as mean and maximum saccade amplitudes, were significantly greater in group A than in groups B and C, both before and after treatment (P < 0.05). Mean and maximum saccade latencies differed significantly among groups Aa, Ab, and C (P < 0.05). Pupil trajectories in the two detection modes suggested that binocular fixation was better than monocular fixation. CONCLUSIONS: Eye movement parameters significantly differed between contralateral normal eyes and control eyes. Clinical evaluation of children with anisometropic amblyopia should not focus only on static visual acuity but should also include assessment of eye movement.
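Saccade latency and amplitude of the kind reported here are typically derived from the eye tracker's sample stream. A minimal velocity-threshold sketch is shown below; it is not the EyeLink/Data Viewer parser used in the study, and the 30 deg/s criterion is an assumed value.

```python
import numpy as np

def saccade_latency_and_amplitude(t, x_deg, target_onset, vel_thresh=30.0):
    """Estimate latency (s) and amplitude (deg) of the first saccade after target onset.

    t: (N,) sample times in seconds; x_deg: (N,) horizontal gaze position in degrees.
    vel_thresh: velocity criterion in deg/s marking saccade onset (assumed value).
    """
    vel = np.gradient(x_deg, t)                       # instantaneous velocity (deg/s)
    after = t >= target_onset
    idx = np.flatnonzero(after & (np.abs(vel) > vel_thresh))
    if idx.size == 0:
        return None, None
    onset = idx[0]
    # Saccade ends when velocity falls back below the threshold.
    below = np.flatnonzero(np.abs(vel[onset:]) < vel_thresh)
    offset = onset + (below[0] if below.size else len(t) - 1 - onset)
    latency = t[onset] - target_onset
    amplitude = abs(x_deg[offset] - x_deg[onset])
    return latency, amplitude
```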


Subject(s)
Amblyopia , Vision, Binocular , Visual Acuity , Humans , Amblyopia/physiopathology , Child , Male , Female , Visual Acuity/physiology , Vision, Binocular/physiology , Saccades/physiology , Eye Movements/physiology , Anisometropia/physiopathology , Anisometropia/complications , Fixation, Ocular/physiology
4.
PLoS One ; 19(7): e0292200, 2024.
Article in English | MEDLINE | ID: mdl-38968181

ABSTRACT

Postural instability is a common symptom of vestibular dysfunction that impacts a person's day-to-day activities. Vestibular rehabilitation is effective in decreasing dizziness and visual symptoms and in improving postural control through several mechanisms, including sensory reweighting of the vestibular, visual, and somatosensory systems. As part of these sensory reweighting mechanisms, vestibular activation exercises with headshaking influence the vestibulo-ocular reflex (VOR). However, combining challenging vestibular and postural tasks to facilitate more effective rehabilitation outcomes is under-utilized, and how and why such combined training works remains unclear. The aim of the study was to assess sensory reweighting of postural control processing and the VOR after concurrent vestibular activation and weight shift training (WST) in healthy young adults. Forty-two participants (18-35 years) were randomly assigned to four groups: no training/control (CTL), a novel visual feedback WST coupled with a concurrent, rhythmic, active horizontal or vertical headshake activity (HHS and VHS), or the same WST with no headshake (NHS). Training was performed for five days. All groups performed baseline and post-training assessments using the video head impulse test, the sensory organization test, force platform rotations, and electro-oculography. Significantly decreased horizontal eye movement variability in the HHS group compared with the other groups suggests improved gaze stabilization (p = .024). Significantly decreased horizontal VOR gain (p = .040) and somatosensory downweighting (p = .050) were found in the combined headshake groups (HHS and VHS) compared with the other two groups (NHS and CTL). The headshake groups also showed a significantly faster automatic postural response (p = .003) and improved flexibility (p = .010). The concurrent training influences oculomotor function and suggests improved gaze stabilization through vestibular recalibration due to adaptation and possibly habituation. The novel protocol could be modified into progressive functional activities that incorporate gaze stabilization exercises. The findings may have implications for the future development of vestibular rehabilitation protocols.
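The horizontal VOR gain reported from the video head impulse test is, in essence, a ratio of compensatory eye velocity to head velocity over the impulse. A minimal area-ratio sketch follows; the sign convention and the use of trapezoidal integration are assumptions, not the device manufacturer's algorithm.

```python
import numpy as np

def vor_gain(head_vel, eye_vel, dt):
    """Area-ratio VOR gain for a single head impulse.

    head_vel, eye_vel: (N,) angular velocities in deg/s over the impulse window.
    eye_vel is assumed already sign-inverted so that perfect compensation gives gain = 1.
    Gain is the ratio of integrated eye velocity to integrated head velocity.
    """
    head_area = np.trapz(head_vel, dx=dt)
    eye_area = np.trapz(eye_vel, dx=dt)
    return eye_area / head_area
```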


Subject(s)
Postural Balance , Reflex, Vestibulo-Ocular , Vestibule, Labyrinth , Humans , Female , Male , Adult , Postural Balance/physiology , Reflex, Vestibulo-Ocular/physiology , Vestibule, Labyrinth/physiology , Young Adult , Adolescent , Fixation, Ocular/physiology
5.
Elife ; 12, 2024 Jul 05.
Article in English | MEDLINE | ID: mdl-38968325

ABSTRACT

Humans can read and comprehend text rapidly, implying that readers might process multiple words per fixation. However, the extent to which parafoveal words are previewed and integrated into the evolving sentence context remains disputed. We investigated parafoveal processing during natural reading by recording brain activity and eye movements using MEG and an eye tracker while participants silently read one-line sentences. The sentences contained an unpredictable target word that was either congruent or incongruent with the sentence context. To measure parafoveal processing, we flickered the target words at 60 Hz and measured the resulting brain responses (i.e., Rapid Invisible Frequency Tagging, RIFT) during fixations on the pre-target words. Our results revealed a significantly weaker tagging response for target words that were incongruent with the previous context compared with congruent ones, even within 100 ms of fixating the word immediately preceding the target. This reduction in the RIFT response was also found to be predictive of individual reading speed. We conclude that semantic information is not only extracted from the parafovea but can also be integrated with the previous context before the word is fixated. This early and extensive parafoveal processing supports the rapid word processing required for natural reading. Our study suggests that theoretical frameworks of natural reading should incorporate the concept of deep parafoveal processing.
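The RIFT measure rests on quantifying the neural response at the 60 Hz flicker frequency during pre-target fixations. The toy sketch below estimates spectral power at the tagging frequency for a single epoch; the study itself used coherence-based measures over MEG sensors, so this is only an assumed simplification.

```python
import numpy as np

def tagging_power(signal, fs, tag_freq=60.0):
    """Spectral power at the tagging frequency for one fixation epoch.

    signal: (n_samples,) sensor time course during the pre-target fixation.
    fs: sampling rate in Hz; tag_freq: flicker frequency (60 Hz in the abstract).
    """
    signal = signal - signal.mean()
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    return spectrum[np.argmin(np.abs(freqs - tag_freq))]
```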


Subject(s)
Eye Movements , Reading , Semantics , Humans , Female , Male , Adult , Young Adult , Eye Movements/physiology , Fovea Centralis/physiology , Fixation, Ocular/physiology , Magnetoencephalography , Brain/physiology , Comprehension/physiology
6.
Invest Ophthalmol Vis Sci ; 65(8): 13, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38975944

ABSTRACT

Purpose: This study aims to link subtle changes in fixational eye movements (FEM) in controls and in patients with foveal drusen, using adaptive optics retinal imaging, in order to find anatomo-functional markers for presymptomatic age-related macular degeneration (AMD). Methods: We recruited 7 young controls, 4 older controls, and 16 patients with presymptomatic AMD with foveal drusen from the Silversight Cohort. A high-speed, research-grade adaptive optics flood illumination ophthalmoscope (AO-FIO) was used for monocular retinal tracking of fixational eye movements. The system allows for sub-arcminute resolution and high-speed, distortion-free imaging of the foveal area. Foveal drusen position and size were documented using gaze-dependent imaging on a clinical-grade AO-FIO. Results: FEM were measured with high precision (RMS-S2S = 0.0015 degrees on human eyes), and small foveal drusen (median diameter = 60 µm) were detected with high-contrast imaging. Microsaccade amplitude, drift diffusion coefficient, and ISOline area (ISOA) were significantly larger for patients with foveal drusen compared with controls. Among the drusen participants, microsaccade amplitude was correlated with drusen eccentricity from the center of the fovea. Conclusions: A novel high-speed, high-precision retinal tracking technique allowed for the characterization of FEM at the microscopic level. Foveal drusen altered fixation stability, resulting in compensatory FEM changes. In particular, drusen at the foveolar level seemed to have a stronger impact on microsaccade amplitudes and ISOA. The unexpected anatomo-functional link between small foveal drusen and fixation stability opens a new perspective on detecting oculomotor signatures of eye diseases at the presymptomatic stage.
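One of the fixational eye movement metrics compared here is the drift diffusion coefficient. A common way to estimate it is from the slope of the mean square displacement (MSD) of the drift trace; the sketch below assumes 2-D Brownian-like drift (MSD ≈ 4DΔt) and an arbitrary maximum lag, and is not the authors' analysis code.

```python
import numpy as np

def drift_diffusion_coefficient(xy_deg, dt, max_lag=20):
    """Diffusion coefficient of ocular drift from the mean square displacement (MSD).

    xy_deg: (N, 2) drift-only eye trace in degrees (microsaccades removed beforehand).
    dt: sample interval in seconds. Assumes MSD(lag) ~ 4*D*lag for 2-D Brownian-like drift.
    """
    lags = np.arange(1, max_lag + 1)
    msd = np.array([np.mean(np.sum((xy_deg[lag:] - xy_deg[:-lag]) ** 2, axis=1))
                    for lag in lags])
    slope = np.polyfit(lags * dt, msd, 1)[0]   # linear fit of MSD vs. time lag
    return slope / 4.0
```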


Subject(s)
Fixation, Ocular , Fovea Centralis , Macular Degeneration , Retinal Drusen , Humans , Female , Retinal Drusen/physiopathology , Retinal Drusen/diagnosis , Male , Fixation, Ocular/physiology , Fovea Centralis/diagnostic imaging , Fovea Centralis/physiopathology , Fovea Centralis/pathology , Aged , Middle Aged , Macular Degeneration/physiopathology , Macular Degeneration/diagnosis , Adult , Tomography, Optical Coherence/methods , Ophthalmoscopy/methods , Visual Acuity/physiology , Saccades/physiology , Prodromal Symptoms
7.
Sci Rep ; 14(1): 16193, 2024 Jul 13.
Article in English | MEDLINE | ID: mdl-39003314

ABSTRACT

Facial expression recognition (FER) is crucial for understanding the emotional state of others during human social interactions. It has been assumed that humans share universal visual sampling strategies to achieve this task. However, recent studies in face identification have revealed striking idiosyncratic fixation patterns, questioning the universality of face processing. More importantly, very little is known about whether such idiosyncrasies extend to the biologically relevant recognition of static and dynamic facial expressions of emotion (FEEs). To clarify this issue, we tracked observers' eye movements while they categorized static and ecologically valid dynamic faces displaying the six basic FEEs, all normalized for presentation time (1 s), contrast, and global luminance across exposure time. We then used robust data-driven analyses combining statistical fixation maps with hidden Markov models to explore eye movements across FEEs and stimulus modalities. Our data revealed three spatially and temporally distinct, equally occurring face-scanning strategies during FER. Crucially, these visual sampling strategies were mostly comparably effective in FER and highly consistent across FEEs and modalities. Our findings show that spatiotemporal idiosyncratic gaze strategies also occur for the biologically relevant recognition of FEEs, further questioning the universality of FER and, more generally, face processing.
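The analysis combines fixation maps with hidden Markov models. As a generic illustration of fitting a Gaussian HMM to fixation coordinates, here is a sketch using the hmmlearn package; the synthetic fixation data, the three-state choice, and the hyperparameters are assumptions and do not reproduce the authors' data-driven pipeline.

```python
import numpy as np
from hmmlearn import hmm   # assumes the hmmlearn package is installed

# Hypothetical fixation coordinates (x, y in pixels) for two viewing sequences.
rng = np.random.default_rng(1)
seq1 = rng.normal([320, 200], 25, size=(40, 2))   # e.g., eye-region fixations
seq2 = rng.normal([320, 330], 25, size=(35, 2))   # e.g., mouth-region fixations
X = np.vstack([seq1, seq2])
lengths = [len(seq1), len(seq2)]                  # one entry per trial/observer

# A 3-state Gaussian HMM: each hidden state is a preferred face region,
# learned from the data rather than predefined.
model = hmm.GaussianHMM(n_components=3, covariance_type="full", n_iter=100, random_state=0)
model.fit(X, lengths)
states = model.predict(X, lengths)
print(model.means_)    # centers of the data-driven regions of interest
```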


Subject(s)
Emotions , Facial Expression , Facial Recognition , Fixation, Ocular , Humans , Facial Recognition/physiology , Female , Male , Adult , Fixation, Ocular/physiology , Emotions/physiology , Young Adult , Eye Movements/physiology , Photic Stimulation/methods
8.
J Vis ; 24(7): 7, 2024 Jul 02.
Article in English | MEDLINE | ID: mdl-38984898

ABSTRACT

Decisions about where to move occur throughout the day and are essential to life. Different movements may present different challenges and affect the likelihood of achieving a goal. Certain choices may have unintended consequences, some of which may cause harm and bias the decision. Movement decisions rely on a person gathering necessary visual information via shifts in gaze. Here we sought to understand what influences this information-seeking gaze behavior. Participants chose between walking across one of two paths that consisted of terrain images found in either hiking or urban environments. We manipulated the number and type of terrain of each path, which altered the amount of available visual information. We recorded gaze behavior during the approach to the paths and had participants rate the confidence in their ability to walk across each terrain type (i.e., self-efficacy) as though it was real. Participants did not direct gaze more to the path with greater visual information, regardless of how we quantified information. Rather, we show that a person's perception of their motor abilities predicts how they visually explore the environment with their eyes as well as their choice of action. The greater the self-efficacy in walking across one path, the more they directed gaze to it and the more likely they chose to walk across it.


Subject(s)
Choice Behavior , Fixation, Ocular , Self Efficacy , Walking , Humans , Male , Walking/physiology , Walking/psychology , Female , Fixation, Ocular/physiology , Young Adult , Adult , Choice Behavior/physiology , Eye Movements/physiology , Visual Perception/physiology
9.
J Vis ; 24(7): 6, 2024 Jul 02.
Article in English | MEDLINE | ID: mdl-38984899

ABSTRACT

It is reasonable to assume that where people look in the world is largely determined by what they are doing. The reasoning is that the activity determines where it is useful to look at each moment in time. Assuming that it is vital to accurately judge the positions of the steps when navigating a staircase, it is surprising that people differ a lot in the extent to which they look at the steps. Apparently, some people consider the accuracy of peripheral vision, predictability of the step size, and feeling the edges of the steps with their feet to be good enough. If so, occluding part of the view of the staircase and making it more important to place one's feet gently might make it more beneficial to look directly at the steps before stepping onto them, so that people will more consistently look at many steps. We tested this idea by asking people to walk on staircases, either with or without a tray with two cups of water on it. When carrying the tray, people walked more slowly, but they shifted their gaze across steps in much the same way as they did when walking without the tray. They did not look at more steps. There was a clear positive correlation between the fraction of steps that people looked at when walking with and without the tray. Thus, the variability in the extent to which people look at the steps persists when one makes walking on the staircase more challenging.


Subject(s)
Fixation, Ocular , Walking , Humans , Walking/physiology , Fixation, Ocular/physiology , Male , Adult , Female , Young Adult , Eye Movements/physiology , Visual Perception/physiology
10.
J Vis ; 24(6): 4, 2024 Jun 03.
Article in English | MEDLINE | ID: mdl-38842836

ABSTRACT

The interception (or avoidance) of moving objects is a common component of various daily living tasks; however, it remains unclear whether precise alignment of foveal vision with a target is important for motor performance. Furthermore, there has also been little examination of individual differences in visual tracking strategy and the use of anticipatory gaze adjustments. We examined the importance of in-flight tracking and predictive visual behaviors using a virtual reality environment that required participants (n = 41) to intercept tennis balls projected from one of two possible locations. Here, we explored whether different tracking strategies spontaneously arose during the task, and which were most effective. Although indices of closer in-flight tracking (pursuit gain, tracking coherence, tracking lag, and saccades) were predictive of better interception performance, these relationships were rather weak. Anticipatory gaze shifts toward the correct release location of the ball provided no benefit for subsequent interception. Nonetheless, two interceptive strategies were evident: 1) early anticipation of the ball's onset location followed by attempts to closely track the ball in flight (i.e., predictive strategy); or 2) positioning gaze between possible onset locations and then using peripheral vision to locate the moving ball (i.e., a visual pivot strategy). Despite showing much poorer in-flight foveal tracking of the ball, participants adopting a visual pivot strategy performed slightly better in the task. Overall, these results indicate that precise alignment of the fovea with the target may not be critical for interception tasks, but that observers can adopt quite varied visual guidance approaches.
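Pursuit gain, tracking lag, and related in-flight tracking indices can be derived from aligned gaze and target traces. The sketch below gives one plausible formulation (gain as a median velocity ratio, lag from the cross-correlation peak); the velocity cutoff and lag window are assumed values, not the study's definitions.

```python
import numpy as np

def pursuit_gain_and_lag(eye_pos, target_pos, fs, max_lag_s=0.3):
    """Pursuit gain and tracking lag for one ball-flight segment.

    eye_pos, target_pos: (N,) gaze and target positions (deg) along one axis.
    Gain: median ratio of eye speed to target speed while the target moves.
    Lag: cross-correlation peak between position traces, limited to max_lag_s.
    The 5 deg/s cutoff and the lag window are illustrative assumptions.
    """
    eye_vel = np.gradient(eye_pos) * fs
    tgt_vel = np.gradient(target_pos) * fs
    moving = np.abs(tgt_vel) > 5.0                       # ignore near-stationary samples
    gain = np.median(np.abs(eye_vel[moving]) / np.abs(tgt_vel[moving]))

    e = eye_pos - eye_pos.mean()
    t = target_pos - target_pos.mean()
    xc = np.correlate(e, t, mode="full")
    center = len(t) - 1                                  # index of zero lag
    max_lag = int(max_lag_s * fs)
    window = xc[center - max_lag: center + max_lag + 1]
    lag_s = (np.argmax(window) - max_lag) / fs           # positive = eye lags target
    return gain, lag_s
```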


Subject(s)
Individuality , Motion Perception , Humans , Male , Female , Young Adult , Motion Perception/physiology , Adult , Psychomotor Performance/physiology , Fixation, Ocular/physiology , Virtual Reality , Saccades/physiology , Fovea Centralis/physiology , Eye Movements/physiology
11.
J Vis ; 24(6): 8, 2024 Jun 03.
Article in English | MEDLINE | ID: mdl-38856982

ABSTRACT

When interacting with the environment, humans typically shift their gaze to where information is to be found that is useful for the upcoming action. With increasing age, people become slower both in processing sensory information and in performing their movements. One way to compensate for this slowing down could be to rely more on predictive strategies. To examine whether we could find evidence for this, we asked younger (19-29 years) and older (55-72 years) healthy adults to perform a reaching task wherein they hit a visual target that appeared at one of two possible locations. In separate blocks of trials, the target could appear always at the same location (predictable), mainly at one of the locations (biased), or at either location randomly (unpredictable). As one might expect, saccades toward predictable targets had shorter latencies than those toward less predictable targets, irrespective of age. Older adults took longer to initiate saccades toward the target location than younger adults, even when the likely target location could be deduced. Thus we found no evidence of them relying more on predictive gaze. Moreover, both younger and older participants performed more saccades when the target location was less predictable, but again no age-related differences were found. Thus we found no tendency for older adults to rely more on prediction.


Subject(s)
Aging , Fixation, Ocular , Saccades , Humans , Aged , Middle Aged , Adult , Male , Female , Saccades/physiology , Aging/physiology , Young Adult , Fixation, Ocular/physiology , Reaction Time/physiology , Photic Stimulation/methods , Psychomotor Performance/physiology , Eye Movements/physiology , Age Factors
12.
Exp Brain Res ; 242(7): 1797-1806, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38839617

ABSTRACT

People with multiple sclerosis (PwMS) who report dizziness often have gaze instability due to vestibulo-ocular reflex (VOR) deficiencies and compensatory saccade (CS) abnormalities. Herein, we aimed to describe and compare the gaze stabilization mechanisms for yaw and pitch head movements in PwMS. Thirty-seven PwMS participated (27 female; mean ± SD age = 53.4 ± 12.4 years; median [IQR] Expanded Disability Status Scale score = 3.5 [1.0]). We analyzed video head impulse test results for VOR gain, CS frequency, CS latency, gaze position error (GPE) at impulse end, and GPE at 400 ms after impulse start. Discrepancies were found for median [IQR] VOR gain in yaw (0.92 [0.14]) versus pitch-up (0.71 [0.44], p < 0.001) and pitch-down (0.81 [0.44], p = 0.014); CS latency in yaw (258.13 [76.8] ms) versus pitch-up (208.78 [65.97] ms, p = 0.001) and pitch-down (132.17 [97.56] ms, p = 0.006); GPE at impulse end in yaw (1.15 [1.85] deg) versus pitch-up (2.71 [3.9] deg, p < 0.001); and GPE at 400 ms in yaw (-0.25 [0.98] deg) versus pitch-up (1.53 [1.07] deg, p < 0.001) and pitch-down (1.12 [1.82] deg, p = 0.001). Compared with yaw (0.91 [0.75]), CS frequency was similar for pitch-up (1.03 [0.93], p = 0.999) but lower for pitch-down (0.65 [0.64], p = 0.023). GPE at 400 ms was similar for yaw and pitch-down (1.88 [2.76] deg, p = 0.400). We postulate that MS may have preferentially damaged the vertical VOR and saccade pathways in this cohort.
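Gaze position error (GPE) at a given time point of a head impulse can be obtained by integrating head and compensatory eye velocity and taking the difference. The sketch below is one plausible way to compute it and assumes the eye velocity trace has already been sign-inverted; the authors' video head impulse test software may compute it differently.

```python
import numpy as np

def gaze_position_error(head_vel, eye_vel, fs, t_end_idx):
    """Gaze position error (deg) accumulated up to a given sample of a head impulse.

    head_vel, eye_vel: (N,) angular velocities in deg/s; eye_vel is assumed
    sign-inverted so that a perfectly compensatory eye yields zero error.
    t_end_idx: sample index at which to read out the error (e.g., impulse end,
    or the sample corresponding to 400 ms after impulse start).
    """
    dt = 1.0 / fs
    head_pos = np.cumsum(head_vel) * dt      # integrate head velocity to position
    gaze_shift = np.cumsum(eye_vel) * dt     # integrate compensatory eye velocity
    return head_pos[t_end_idx] - gaze_shift[t_end_idx]
```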


Subject(s)
Multiple Sclerosis , Reflex, Vestibulo-Ocular , Humans , Female , Male , Middle Aged , Multiple Sclerosis/physiopathology , Multiple Sclerosis/complications , Adult , Reflex, Vestibulo-Ocular/physiology , Aged , Fixation, Ocular/physiology , Head Movements/physiology , Saccades/physiology , Head Impulse Test/methods
13.
J Vis ; 24(6): 11, 2024 Jun 03.
Article in English | MEDLINE | ID: mdl-38869372

ABSTRACT

Microsaccades, tiny fixational eye movements, improve discriminability in high-acuity tasks in the foveola. To investigate whether they help compensate for low discriminability at the perifovea, we examined microsaccade characteristics relative to the adult visual performance field, which is characterized by two perceptual asymmetries: horizontal-vertical anisotropy (better discrimination along the horizontal than vertical meridian) and vertical meridian asymmetry (better discrimination along the lower than upper vertical meridian). We investigated whether and to what extent microsaccade directionality varies when stimuli are at isoeccentric locations along the cardinals under conditions of heterogeneous discriminability (Experiment 1) and homogeneous discriminability, equated by adjusting stimulus contrast (Experiment 2). Participants performed a two-alternative forced-choice orientation discrimination task. In both experiments, performance was better on trials without microsaccades between ready signal onset and stimulus offset than on trials with microsaccades. Across the trial sequence, the microsaccade rate and directional pattern were similar across locations. Our results indicate that microsaccades were similar regardless of stimulus discriminability and target location, except during the response period, once the stimuli were no longer present and the target location was no longer uncertain, when microsaccades were biased toward the target location. Thus, this study reveals that microsaccades do not flexibly adapt as a function of varying discriminability in a basic visual task around the visual field.
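Microsaccades in studies like this are usually detected with a velocity-threshold algorithm in the spirit of Engbert and Kliegl (2003). The sketch below follows that general recipe; the threshold multiplier and minimum duration are assumed parameter choices, not necessarily those used by the authors.

```python
import numpy as np

def detect_microsaccades(xy_deg, fs, lam=6.0, min_dur=3):
    """Velocity-threshold microsaccade detection (Engbert & Kliegl style).

    xy_deg: (N, 2) eye position in degrees; fs: sampling rate in Hz.
    lam: threshold multiplier on a median-based velocity spread estimate.
    min_dur: minimum number of consecutive supra-threshold samples.
    Returns a list of (onset_idx, offset_idx) pairs.
    """
    vel = np.gradient(xy_deg, axis=0) * fs
    # Median-based estimate of the velocity standard deviation per axis.
    sigma = np.sqrt(np.median(vel ** 2, axis=0) - np.median(vel, axis=0) ** 2)
    radius = lam * sigma
    # Samples whose velocity lies outside the elliptic threshold.
    outside = np.sum((vel / radius) ** 2, axis=1) > 1.0

    events, start = [], None
    for i, flag in enumerate(outside):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if i - start >= min_dur:
                events.append((start, i - 1))
            start = None
    if start is not None and len(outside) - start >= min_dur:
        events.append((start, len(outside) - 1))
    return events
```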


Subject(s)
Photic Stimulation , Saccades , Visual Fields , Humans , Saccades/physiology , Visual Fields/physiology , Male , Adult , Female , Young Adult , Photic Stimulation/methods , Fixation, Ocular/physiology , Orientation/physiology , Discrimination, Psychological/physiology , Fovea Centralis/physiology
14.
Sci Rep ; 14(1): 14288, 2024 Jun 21.
Article in English | MEDLINE | ID: mdl-38906960

ABSTRACT

Interpersonal coordination is a key determinant of successful social interaction but can be disrupted when people experience symptoms related to social anxiety or autism. Effective coordination rests on individuals directing their attention towards interaction partners. Yet little is known about the impact of the attentional behaviours of the partner themselves. As the gaze of others has heightened salience for those experiencing social anxiety or autism, addressing this gap can provide insight into how symptoms of these disorders impact coordination. Using a novel virtual reality task, we investigated whether partner gaze (i.e., direct vs. averted) influenced the emergence of interpersonal coordination. Results revealed that: (i) spontaneous coordination was diminished in the averted (cf. direct) gaze condition; (ii) spontaneous coordination was positively related to symptoms of social anxiety, but only when partner gaze was averted. This latter finding contrasts with the extant literature and points to the importance of social context in shaping the relationship between symptoms of psychopathology and interpersonal coordination.


Subject(s)
Fixation, Ocular , Interpersonal Relations , Humans , Male , Female , Adult , Young Adult , Fixation, Ocular/physiology , Attention/physiology , Social Interaction , Anxiety/psychology , Anxiety/physiopathology , Adolescent , Autistic Disorder/psychology , Autistic Disorder/physiopathology
15.
Int J Comput Assist Radiol Surg ; 19(7): 1459-1467, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38888820

ABSTRACT

PURPOSE: To facilitate the integration of point of gaze (POG) as an input modality for robot-assisted surgery, we introduce a robust head movement compensation gaze tracking system for the da Vinci Surgical System. Previous surgical eye gaze trackers require multiple recalibrations and suffer from accuracy loss when users move from the calibrated position. We investigate whether eye corner detection can reduce gaze estimation error in a robotic surgery context. METHODS: A polynomial regressor is first used to estimate POG after an 8-point calibration; then, using another regressor, the POG error from head movement is estimated from the shift in 2D eye corner location. Eye corners are computed by first detecting regions of interest using the You Only Look Once (YOLO) object detector trained on 1600 annotated eye images (open dataset included). Contours are then extracted from the bounding boxes, and a derivative-based curvature detector refines the eye corner. RESULTS: In a user study (n = 24), our corner-contingent head compensation algorithm reduced error by 1.20° of visual angle (p = 0.037) for the left eye and 1.26° (p = 0.079) for the right eye compared with the previous gold-standard POG error correction method. In addition, the eye corner pipeline showed a root-mean-square error of 3.57 (SD = 1.92) pixels in detecting eye corners over 201 annotated frames. CONCLUSION: We introduce an effective method of using eye corners to correct eye gaze estimation, enabling the practical acquisition of POG in robotic surgery.
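The first stage described here, estimating point of gaze from a polynomial regressor after an 8-point calibration, can be illustrated with a simple least-squares fit of second-order features. The sketch below covers only that generic calibration step; the paper's exact feature definition and its corner-based head-movement correction regressor are not reproduced.

```python
import numpy as np

def poly_features(ex, ey):
    """Second-order polynomial features of a 2-D eye feature (e.g., a pupil-glint vector)."""
    return np.column_stack([np.ones_like(ex), ex, ey, ex * ey, ex ** 2, ey ** 2])

def fit_pog_regressor(eye_feats, screen_pts):
    """Least-squares fit mapping eye features to screen points from a calibration grid.

    eye_feats: (K, 2) eye-image features at the K calibration targets (e.g., 8 points).
    screen_pts: (K, 2) corresponding known screen coordinates.
    Returns a (6, 2) coefficient matrix; POG = features @ coeffs.
    """
    A = poly_features(eye_feats[:, 0], eye_feats[:, 1])
    coeffs, *_ = np.linalg.lstsq(A, screen_pts, rcond=None)
    return coeffs

def estimate_pog(eye_feat, coeffs):
    """Point of gaze for a single eye feature sample."""
    return poly_features(np.atleast_1d(eye_feat[0]), np.atleast_1d(eye_feat[1])) @ coeffs
```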


Subject(s)
Algorithms , Eye-Tracking Technology , Head Movements , Robotic Surgical Procedures , Humans , Robotic Surgical Procedures/methods , Robotic Surgical Procedures/instrumentation , Head Movements/physiology , Eye Movements/physiology , Male , Female , Fixation, Ocular/physiology , Adult , Calibration
16.
J Neurophysiol ; 132(1): 147-161, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38836297

ABSTRACT

People usually reach for objects to place them in some position and orientation, but the placement component of this sequence is often ignored. For example, reaches are influenced by gaze position, visual feedback, and memory delays, but their influence on object placement is unclear. Here, we tested these factors in a task where participants placed and oriented a trapezoidal block against two-dimensional (2-D) visual templates displayed on a frontally located computer screen. In experiment 1, participants matched the block to three possible orientations: 0° (horizontal), +45° and -45°, with gaze fixated 10° to the left/right. The hand and template either remained illuminated (closed-loop), or visual feedback was removed (open-loop). Here, hand location consistently overshot the template relative to gaze, especially in the open-loop task; likewise, orientation was influenced by gaze position (depending on template orientation and visual feedback). In experiment 2, a memory delay was added, and participants sometimes performed saccades (toward, away from, or across the template). In this task, the influence of gaze on orientation vanished, but location errors were influenced by both template orientation and final gaze position. Contrary to our expectations, the previous saccade metrics also impacted placement overshoot. Overall, hand orientation was influenced by template orientation in a nonlinear fashion. These results demonstrate interactions between gaze and orientation signals in the planning and execution of hand placement and suggest different neural mechanisms for closed-loop, open-loop, and memory delay placement.NEW & NOTEWORTHY Eye-hand coordination studies usually focus on object acquisition, but placement is equally important. We investigated how gaze position influences object placement toward a 2-D template with different levels of visual feedback. Like reach, placement overestimated goal location relative to gaze and was influenced by previous saccade metrics. Gaze also modulated hand orientation, depending on template orientation and level of visual feedback. Gaze influence was feedback-dependent, with location errors having no significant effect after a memory delay.


Subject(s)
Feedback, Sensory , Fixation, Ocular , Hand , Memory , Psychomotor Performance , Humans , Male , Female , Hand/physiology , Adult , Psychomotor Performance/physiology , Biomechanical Phenomena/physiology , Feedback, Sensory/physiology , Memory/physiology , Fixation, Ocular/physiology , Young Adult , Visual Perception/physiology , Saccades/physiology
17.
J Vis ; 24(6): 16, 2024 Jun 03.
Article in English | MEDLINE | ID: mdl-38913016

ABSTRACT

Humans saccade to faces in their periphery faster than to other types of objects. Previous research has highlighted the potential importance of the upper face region in this phenomenon, but it remains unclear whether this is driven by the eye region. Similarly, it remains unclear whether such rapid saccades are exclusive to faces or generalize to other semantically salient stimuli. Furthermore, it is unknown whether individuals differ in their face-specific saccadic reaction times and, if so, whether such differences could be linked to differences in face fixations during free viewing. To explore these open questions, we invited 77 participants to perform a saccadic choice task in which we contrasted faces as well as other salient objects, particularly isolated face features and text, with cars. Additionally, participants freely viewed 700 images of complex natural scenes in a separate session, which allowed us to determine the individual proportion of first fixations falling on faces. For the saccadic choice task, we found advantages for all categories of interest over cars. However, this effect was most pronounced for images of full faces. Full faces also elicited faster saccades compared with eyes, showing that isolated eye regions are not sufficient to elicit face-like responses. Additionally, we found consistent individual differences in saccadic reaction times toward faces that weakly correlated with face salience during free viewing. Our results suggest a link between semantic salience and rapid detection, but underscore the unique status of faces. Further research is needed to resolve the mechanisms underlying rapid face saccades.


Subject(s)
Facial Recognition , Individuality , Photic Stimulation , Reaction Time , Saccades , Humans , Saccades/physiology , Male , Female , Reaction Time/physiology , Adult , Young Adult , Facial Recognition/physiology , Photic Stimulation/methods , Fixation, Ocular/physiology , Adolescent
18.
Article in English | MEDLINE | ID: mdl-38848230

ABSTRACT

Children with Autism Spectrum Disorder (ASD) show severe attention deficits, hindering their capacity to acquire new skills. The automatic assessment of their attention response would provide therapists with an important biomarker to better quantify their behaviour and monitor their progress during therapy. This work aims to develop a quantitative model to evaluate the attention response of children with ASD during robot-assisted therapeutic sessions. Previous attempts to quantify the attention response of autistic subjects during human-robot interaction tasks were limited to settings with restrained child movements. Instead, we developed an accurate quantitative system to assess the attention of children with ASD in unconstrained scenarios. Our approach combines gaze extraction (Gaze360 model) with the definition of angular Areas-of-Interest to characterise periods of attention towards elements of interest in the therapy environment during the session. The methodology was tested with 12 children with ASD, achieving a mean test accuracy of 79.5%. Finally, the proposed attention index was consistent with the therapists' evaluation of patients, allowing a meaningful interpretation of the automatic evaluation. This encourages the future clinical use of the proposed system.
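The attention model combines an estimated gaze direction with angular Areas-of-Interest. A minimal sketch of assigning a gaze vector to the nearest AOI within an angular threshold is given below; the 15° half-width, the AOI names, and the example vectors are illustrative assumptions, not the system's calibrated values.

```python
import numpy as np

def attention_target(gaze_dir, aoi_dirs, max_angle_deg=15.0):
    """Assign a gaze direction to an angular Area-of-Interest, if any.

    gaze_dir: (3,) gaze vector (e.g., from a Gaze360-style estimator).
    aoi_dirs: dict mapping AOI names (robot, therapist, ...) to (3,) vectors
              pointing from the child's head to each element of interest.
    max_angle_deg: angular half-width of each AOI (an assumed value).
    Returns the name of the closest AOI within the threshold, or None.
    """
    gaze_dir = gaze_dir / np.linalg.norm(gaze_dir)
    best_name, best_angle = None, max_angle_deg
    for name, d in aoi_dirs.items():
        d = d / np.linalg.norm(d)
        angle = np.degrees(np.arccos(np.clip(np.dot(gaze_dir, d), -1.0, 1.0)))
        if angle <= best_angle:
            best_name, best_angle = name, angle
    return best_name

# Hypothetical example: gaze roughly toward the robot.
aois = {"robot": np.array([0.1, 0.0, 1.0]), "therapist": np.array([-0.7, 0.0, 0.7])}
print(attention_target(np.array([0.05, 0.02, 1.0]), aois))
```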


Subject(s)
Attention , Autism Spectrum Disorder , Robotics , Humans , Child , Male , Female , Algorithms , Fixation, Ocular/physiology , Reproducibility of Results , Autistic Disorder , Eye-Tracking Technology
19.
Soc Cogn Affect Neurosci ; 19(1), 2024 Jul 13.
Article in English | MEDLINE | ID: mdl-38918898

ABSTRACT

Gaze direction and pupil dilation play a critical role in communication and social interaction due to their ability to redirect and capture our attention and their relevance for emotional information. The present study aimed to explore whether the pupil size and gaze direction of a speaker affect language comprehension. Participants listened to sentences that could be correct or contain a syntactic anomaly, while the static face of a speaker was manipulated in terms of gaze direction (direct, averted) and pupil size (mydriasis, miosis). Left anterior negativity (LAN) and P600 linguistic event-related potential components were observed in response to syntactic anomalies across all conditions. The speaker's gaze did not impact syntactic comprehension. However, the amplitude of the LAN component was larger in the mydriasis (dilated pupil) condition than in the miosis (constricted pupil) condition. Larger pupils are generally associated with care, trust, interest, and attention, which might facilitate syntactic processing at early automatic stages. The result also supports the permeable and context-dependent nature of syntax. Previous studies likewise support an automatic (fast and efficient) nature of syntax, which, combined with its permeability to relevant sources of communicative information such as pupil size and emotion, is highly adaptive for language comprehension and social interaction.


Subject(s)
Comprehension , Electroencephalography , Evoked Potentials , Pupil , Speech Perception , Humans , Pupil/physiology , Female , Male , Comprehension/physiology , Young Adult , Evoked Potentials/physiology , Adult , Speech Perception/physiology , Electroencephalography/methods , Fixation, Ocular/physiology , Attention/physiology , Miosis , Mydriasis , Adolescent
20.
Opt Lett ; 49(9): 2489-2492, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38691751

ABSTRACT

Point scanning retinal imaging modalities, including confocal scanning light ophthalmoscopy (cSLO) and optical coherence tomography, suffer from fixational motion artifacts. Fixation targets, though effective at reducing eye motion, are infeasible in some applications (e.g., handheld devices) due to their bulk and complexity. Here, we report on a cSLO device that scans the retina in a spiral pattern under pseudo-visible illumination, thus collecting image data while simultaneously projecting, into the subject's vision, the image of a bullseye, which acts as a virtual fixation target. An imaging study of 14 young adult volunteers was conducted to compare the fixational performance of this technique to that of raster scanning, with and without a discrete inline fixation target. Image registration was used to quantify subject eye motion; a strip-wise registration method was used for raster scans, and a novel, to the best of our knowledge, ring-based method was used for spiral scans. Results indicate a statistically significant reduction in eye motion by the use of spiral scanning as compared to raster scanning without a fixation target.


Subject(s)
Fixation, Ocular , Ophthalmoscopy , Retina , Humans , Retina/diagnostic imaging , Fixation, Ocular/physiology , Ophthalmoscopy/methods , Adult , Young Adult , Eye Movements