Results 1 - 20 of 37
1.
Exp Brain Res ; 242(6): 1339-1348, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38563980

ABSTRACT

Using the "Don't look" (DL) paradigm, wherein participants are asked not to look at a specific facial feature (i.e., the eyes, nose, or mouth), we previously documented that Easterners struggled to completely avoid fixating on the eyes and nose. The underlying attraction mechanisms may differ, because fixations on the eyes were triggered only reflexively, whereas fixations on the nose were consistently elicited. In this study, we focused predominantly on the nose, where the center-of-gravity (CoG) effect, a person's tendency to look near an object's CoG, could be a confound. Full-frontal and mid-profile faces were used because the latter's CoG does not coincide with the nose location. Although we hypothesized that these two effects are independent, the results indicated that, in addition to replicating previous findings, the CoG effect explains the nose-attracting effect. This study not only reveals this explanation but also raises questions regarding the CoG effect in Eastern participants.


Subjects
Facial Recognition , Humans , Female , Male , Facial Recognition/physiology , Young Adult , Adult , Fixation, Ocular/physiology , Eye , Photic Stimulation/methods , Face
2.
Cereb Cortex ; 32(22): 5083-5107, 2022 11 09.
Article in English | MEDLINE | ID: mdl-35176752

ABSTRACT

Neuronal spiking was sampled from the frontal eye field (FEF) and from the rostral part of area 6 that reaches to the superior limb of the arcuate sulcus, dorsal to the arcuate spur when present (F2vr) in macaque monkeys performing memory-guided saccades and visually guided saccades for visual search. Neuronal spiking modulation in F2vr resembled that in FEF in many but not all respects. A new consensus clustering algorithm of neuronal modulation patterns revealed that F2vr and FEF contain a greater variety of modulation patterns than previously reported. The areas differ in the proportions of visuomotor neuron types, the proportions of neurons discriminating a target from distractors during visual search, and the consistency of modulation patterns across tasks. However, between F2vr and FEF we found no difference in the magnitude of delay period activity, the timing of the peak discharge rate relative to saccades, or the time of search target selection. The observed similarities and differences between the 2 cortical regions contribute to other work establishing the organization of eye fields in the frontal lobe and may help explain why FEF in monkeys is identified within granular prefrontal area 8 but in humans is identified within agranular premotor area 6.


Subjects
Motor Cortex , Saccades , Animals , Humans , Haplorhini , Macaca , Visual Fields , Frontal Lobe/physiology
3.
Sensors (Basel) ; 23(12)2023 Jun 14.
Article in English | MEDLINE | ID: mdl-37420738

ABSTRACT

This study addresses the challenges faced by individuals with upper limb impairments in operating power wheelchair joysticks by utilizing the extended Function-Behavior-Structure (FBS) model to identify design requirements for an alternative wheelchair control system. A gaze-controlled wheelchair system is proposed based on design requirements derived from the extended FBS model and prioritized using the MoSCoW method. This system relies on the user's natural gaze and comprises three levels: perception, decision making, and execution. The perception layer senses and acquires information from the environment, including user eye movements and driving context. The decision-making layer processes this information to determine the user's intended direction, while the execution layer controls the wheelchair's movement accordingly. The system's effectiveness was validated through indoor field testing, with an average driving drift of less than 20 cm across participants. Additionally, the user experience scale revealed overall positive user experiences and perceptions of the system's usability, ease of use, and satisfaction.
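The drift criterion reported above can be sketched as a simple per-trial computation; the function name and the sample deviations below are illustrative, not from the study.

```python
# Hypothetical sketch: per-trial lateral deviations (cm) of the wheelchair
# from the planned path, reduced to a mean absolute drift.

def mean_drift(deviations_cm):
    """Average absolute lateral drift across trial samples, in cm."""
    return sum(abs(d) for d in deviations_cm) / len(deviations_cm)

trial = [5.2, -8.1, 12.4, -3.7, 9.9, -15.0, 7.3]  # invented sample data
drift = mean_drift(trial)
print(f"mean drift: {drift:.1f} cm, within 20 cm: {drift < 20}")
```

With these invented numbers the mean absolute drift is well inside the 20 cm bound the study reports.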


Subjects
Wheelchairs , Humans , Eye Movements , Upper Extremity , Sensation , Equipment Design
4.
Proc Biol Sci ; 288(1943): 20202374, 2021 01 27.
Article in English | MEDLINE | ID: mdl-33499788

ABSTRACT

In the true flies (Diptera), the hind wings have evolved into specialized mechanosensory organs known as halteres, which are sensitive to gyroscopic and other inertial forces. Together with the fly's visual system, the halteres direct head and wing movements through a suite of equilibrium reflexes that are crucial to the fly's ability to maintain stable flight. As in other animals (including humans), this presents challenges to the nervous system, as equilibrium reflexes driven by the inertial sensory system must be integrated with those driven by the visual system to control an overlapping pool of motor outputs shared between the two systems. Here, we introduce an experimental paradigm for reproducibly altering haltere stroke kinematics and use it to quantify multisensory integration of wing and gaze equilibrium reflexes. We show that multisensory wing-steering responses reflect a linear superposition of haltere-driven and visually driven responses, but that multisensory gaze responses are not well predicted by this framework. These population-based models also extend to the responses of individual flies.
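The linear-superposition test described above can be sketched as follows. All traces and numbers here are invented for illustration; the study's actual analysis pipeline is not specified in the abstract.

```python
# Sketch: predict the multisensory wing-steering response as the sum of the
# two unisensory responses, then score the prediction against a measured trace.

def superposition_prediction(haltere_only, visual_only):
    """Linear superposition: combined response = haltere-only + visual-only."""
    return [h + v for h, v in zip(haltere_only, visual_only)]

def rmse(predicted, measured):
    n = len(predicted)
    return (sum((p - m) ** 2 for p, m in zip(predicted, measured)) / n) ** 0.5

haltere = [0.0, 0.4, 0.9, 1.1, 0.8]   # invented: haltere drive only
visual = [0.0, 0.2, 0.5, 0.7, 0.6]    # invented: visual drive only
measured = [0.0, 0.6, 1.5, 1.7, 1.4]  # invented: combined-stimulus response

pred = superposition_prediction(haltere, visual)
print("prediction error (RMSE):", round(rmse(pred, measured), 3))
```

A low residual error, as in this toy example, is what the abstract reports for wing-steering responses; gaze responses, by contrast, were not well fit by this model.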


Subjects
Drosophila , Flight, Animal , Animals , Biomechanical Phenomena , Drosophila melanogaster , Humans , Reflex , Wings, Animal
5.
BMC Neurol ; 21(1): 63, 2021 Feb 10.
Article in English | MEDLINE | ID: mdl-33568101

ABSTRACT

BACKGROUND: Limited research exists to guide clinical decisions about trialling, selecting, implementing and evaluating eye-gaze control technology. This paper reports on the outcomes of a Delphi study that was conducted to build international stakeholder consensus to inform decision making about trialling and implementing eye-gaze control technology with people with cerebral palsy. METHODS: A three-round online Delphi survey was conducted. In Round 1, 126 stakeholders responded to questions identified through an international stakeholder Advisory Panel and systematic reviews. In Round 2, 63 respondents rated the importance of 200 statements generated in Round 1. In Round 3, 41 respondents rated the importance of the 105 highest ranked statements retained from Round 2. RESULTS: Stakeholders achieved consensus on 94 of the original 200 statements. These statements related to person factors, support networks, the environment, and technical aspects to consider during assessment, trial, implementation and follow-up. Findings reinforced the importance of an individualised approach and that information gathered from the user, their support network and professionals is central when measuring outcomes. Information required to support an application for funding was obtained. CONCLUSION: This Delphi study has identified issues which are unique to eye-gaze control technology and will enhance its implementation with people with cerebral palsy.


Subjects
Cerebral Palsy , Clinical Decision-Making , Fixation, Ocular , Technology/instrumentation , User-Computer Interface , Adolescent , Adult , Child , Consensus , Delphi Technique , Female , Humans , Male , Surveys and Questionnaires
6.
Cereb Cortex ; 30(9): 4995-5013, 2020 07 30.
Article in English | MEDLINE | ID: mdl-32390052

ABSTRACT

The visual system is thought to separate egocentric and allocentric representations, but behavioral experiments show that these codes are optimally integrated to influence goal-directed movements. To test if frontal cortex participates in this integration, we recorded primate frontal eye field activity during a cue-conflict memory delay saccade task. To dissociate egocentric and allocentric coordinates, we surreptitiously shifted a visual landmark during the delay period, causing saccades to deviate in the same direction by 37% of the shift. To assess the cellular mechanisms, we fit neural response fields against an egocentric (eye-centered target-to-gaze) continuum, and an allocentric shift (eye-to-landmark-centered) continuum. Initial visual responses best fit target position. Motor responses (after the landmark shift) predicted future gaze position, but embedded within the motor code was a 29% shift toward allocentric coordinates. This shift appeared transiently in memory-related visuomotor activity, and then reappeared in motor activity before saccades. Notably, fits along the egocentric and allocentric shift continua were initially independent, but became correlated across neurons just before the motor burst. Overall, these results implicate frontal cortex in the integration of egocentric and allocentric visual information for goal-directed action, and demonstrate the cell-specific, temporal progression of signal multiplexing for this process in the gaze system.
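One way to read the 37% figure above is as an allocentric weight: the mean saccade-endpoint displacement toward the shifted landmark, expressed as a fraction of the landmark shift. The sketch below illustrates that computation with invented numbers; the function name and data are assumptions, not the study's code.

```python
# Illustrative allocentric-weight computation: endpoint shifts toward a
# displaced landmark, as a fraction of the landmark displacement.
# A weight of 0 would be purely egocentric; 1.0 would be fully landmark-centered.

def allocentric_weight(endpoint_shifts_deg, landmark_shift_deg):
    mean_shift = sum(endpoint_shifts_deg) / len(endpoint_shifts_deg)
    return mean_shift / landmark_shift_deg

# invented: saccade endpoint displacements (deg) for an 8 deg landmark shift
shifts = [2.5, 3.4, 2.9, 3.2, 2.8]
print(f"allocentric weight: {allocentric_weight(shifts, 8.0):.2f}")
```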


Subjects
Fixation, Ocular/physiology , Frontal Lobe/physiology , Visual Perception/physiology , Animals , Female , Macaca mulatta , Photic Stimulation
7.
J Neuroeng Rehabil ; 18(1): 173, 2021 12 18.
Article in English | MEDLINE | ID: mdl-34922590

ABSTRACT

BACKGROUND: Building a control architecture that balances assistive manipulation systems with the benefits of direct human control is a crucial challenge of human-robot collaboration. It promises to help people with disabilities more efficiently control wheelchairs and wheelchair-mounted robot arms to accomplish activities of daily living. METHODS: In this study, our research objective was to design an eye-tracking assistive robot control system capable of providing targeted engagement and motivating individuals with a disability to use the developed method for self-assistance in activities of daily living. A graphical user interface was designed and integrated with the developed control architecture to achieve this goal. RESULTS: We evaluated the system by conducting a user study. Ten healthy participants performed five trials of three manipulation tasks using the graphical user interface and the developed control framework. The 100% success rate on task performance demonstrates the effectiveness of our system for individuals with motor impairments to control a wheelchair and wheelchair-mounted assistive robotic manipulators. CONCLUSIONS: We demonstrated the usability of this eye-gaze system for controlling a robotic arm mounted on a wheelchair in activities of daily living for people with disabilities. We found high levels of acceptance, with higher ratings in the evaluation of the system with healthy participants.


Subjects
Disabled Persons , Robotics , Self-Help Devices , Wheelchairs , Activities of Daily Living , Humans , User-Computer Interface
8.
Sensors (Basel) ; 21(5)2021 Mar 05.
Article in English | MEDLINE | ID: mdl-33807599

ABSTRACT

This paper presents a lightweight, infrastructureless head-worn interface for robust and real-time robot control in Cartesian space using head- and eye-gaze. The interface comes at a total weight of just 162 g. It combines a state-of-the-art visual simultaneous localization and mapping algorithm (ORB-SLAM 2) for RGB-D cameras with a Magnetic Angular rate Gravity (MARG)-sensor filter. The data fusion process is designed to dynamically switch between magnetic, inertial and visual heading sources to enable robust orientation estimation under various disturbances, e.g., magnetic disturbances or degraded visual sensor data. The interface furthermore delivers accurate eye- and head-gaze vectors to enable precise robot end effector (EFF) positioning and employs a head motion mapping technique to effectively control the robot's end effector orientation. An experimental proof of concept demonstrates that the proposed interface and its data fusion process generate reliable and robust pose estimation. The three-dimensional head- and eye-gaze position estimation pipeline delivers a mean Euclidean error of 19.0±15.7 mm for head-gaze and 27.4±21.8 mm for eye-gaze at a distance of 0.3-1.1 m from the user. This indicates that the proposed interface offers a precise control mechanism for hands-free and full six degree of freedom (DoF) robot teleoperation in Cartesian space by head- or eye-gaze and head motion.
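The accuracy figures quoted above (mean Euclidean error ± spread, in mm) can be computed as sketched below. The 3D points are invented for illustration; the study's actual evaluation data and code are not given in the abstract.

```python
# Sketch: mean and spread of Euclidean errors (mm) between estimated and
# reference 3D gaze points (all coordinates here are invented).
import math

def mean_euclidean_error(estimates, references):
    errors = [math.dist(e, r) for e, r in zip(estimates, references)]
    mean = sum(errors) / len(errors)
    sd = (sum((x - mean) ** 2 for x in errors) / len(errors)) ** 0.5
    return mean, sd

est = [(10.0, 5.0, 300.0), (12.0, -3.0, 420.0), (-8.0, 6.0, 510.0)]
ref = [(0.0, 0.0, 305.0), (5.0, 0.0, 410.0), (0.0, 0.0, 500.0)]
mean, sd = mean_euclidean_error(est, ref)
print(f"mean error: {mean:.1f} ± {sd:.1f} mm")
```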


Subjects
Eye Movements , Robotics , Fixation, Ocular , Motion , Orientation
9.
Proc Natl Acad Sci U S A ; 112(15): E1956-65, 2015 Apr 14.
Article in English | MEDLINE | ID: mdl-25825743

ABSTRACT

The optic tectum (called superior colliculus in mammals) is critical for eye-head gaze shifts as we navigate in the terrain and need to adapt our movements to the visual scene. The neuronal mechanisms underlying the tectal contribution to stimulus selection and gaze reorientation remain, however, unclear at the microcircuit level. To analyze this complex, yet phylogenetically conserved, sensorimotor system, we developed a novel in vitro preparation in the lamprey that maintains the eye and midbrain intact and allows for whole-cell recordings from prelabeled tectal gaze-controlling cells in the deep layer, while visual stimuli are delivered. We found that receptive field activation of these cells provides monosynaptic retinal excitation followed by local GABAergic inhibition (feedforward). The entire remaining retina, on the other hand, elicits only inhibition (surround inhibition). If two stimuli are delivered simultaneously, one inside and one outside the receptive field, the former excitatory response is suppressed. When local inhibition is pharmacologically blocked, the suppression induced by competing stimuli is canceled. We suggest that this rivalry between visual areas across the tectal map is triggered through long-range inhibitory tectal connections. Selection commands conveyed via gaze-controlling neurons in the optic tectum are, thus, formed through synaptic integration of local retinotopic excitation and global tectal inhibition. We anticipate that this mechanism not only exists in lamprey but is also conserved throughout vertebrate evolution.


Subjects
Interneurons/physiology , Lampreys/physiology , Superior Colliculi/physiology , Visual Pathways/physiology , Algorithms , Animals , GABAergic Neurons/cytology , GABAergic Neurons/metabolism , GABAergic Neurons/physiology , Immunohistochemistry , Interneurons/cytology , Interneurons/metabolism , Lampreys/anatomy & histology , Lampreys/metabolism , Microscopy, Confocal , Microscopy, Fluorescence , Models, Neurological , Neural Inhibition/physiology , Patch-Clamp Techniques , Retinaldehyde/physiology , Superior Colliculi/cytology , Superior Colliculi/metabolism , Synapses/physiology , Synaptic Transmission/physiology , Visual Pathways/cytology , Visual Pathways/metabolism , gamma-Aminobutyric Acid/metabolism
10.
J Exp Biol ; 220(Pt 12): 2218-2227, 2017 Jun 15.
Article in English | MEDLINE | ID: mdl-28385799

ABSTRACT

Animals typically combine inertial and visual information to stabilize their gaze against confounding self-generated visual motion, and to maintain a level gaze when the body is perturbed by external forces. In vertebrates, an inner ear vestibular system provides information about body rotations and accelerations, but gaze stabilization is less understood in insects, which lack a vestibular organ. In flies, the halteres, reduced hindwings imbued with hundreds of mechanosensory cells, sense inertial forces and provide input to neck motoneurons that control gaze. These neck motoneurons also receive input from the visual system. Head movement responses to visual motion and physical rotations of the body have been measured independently, but how inertial information might influence gaze responses to visual motion has not been fully explored. We measured the head movement responses to visual motion in intact and haltere-ablated tethered flies to explore the role of the halteres in modulating visually guided head movements in the absence of rotation. We note that visually guided head movements occur only during flight. Although halteres are not necessary for head movements, the amplitude of the response is smaller in haltereless flies at higher speeds of visual motion. This modulation occurred in the absence of rotational body movements, demonstrating that the inertial forces associated with straight tethered flight are important for gaze-control behavior. The cross-modal influence of halteres on the fly's responses to fast visual motion indicates that the haltere's role in gaze stabilization extends beyond its canonical function as a sensor of angular rotations of the thorax.


Subjects
Drosophila melanogaster/physiology , Flight, Animal , Head Movements , Visual Perception , Animals , Biomechanical Phenomena , Female , Mechanoreceptors/physiology , Wings, Animal/physiology
11.
J Exp Biol ; 219(Pt 8): 1110-21, 2016 04 15.
Article in English | MEDLINE | ID: mdl-27103674

ABSTRACT

During swimming in the amphibian Xenopus laevis, efference copies of rhythmic locomotor commands produced by the spinal central pattern generator (CPG) can drive extraocular motor output appropriate for producing image-stabilizing eye movements to offset the disruptive effects of self-motion. During metamorphosis, X. laevis remodels its locomotor strategy from larval tail-based undulatory movements to bilaterally synchronous hindlimb kicking in the adult. This change in propulsive mode results in head/body motion with entirely different dynamics, necessitating a concomitant switch in compensatory ocular movements from conjugate left-right rotations to non-conjugate convergence during the linear forward acceleration produced during each kick cycle. Here, using semi-intact or isolated brainstem/spinal cord preparations at intermediate metamorphic stages, we monitored bilateral eye motion along with extraocular, spinal axial and limb motor nerve activity during episodes of spontaneous fictive swimming. Our results show a progressive transition in spinal efference copy control of extraocular motor output that remains adapted to offsetting visual disturbances during the combinatorial expression of bimodal propulsion when functional larval and adult locomotor systems co-exist within the same animal. In stages at metamorphic climax, spino-extraocular motor coupling, which previously derived from axial locomotor circuitry alone, can originate from both axial and de novo hindlimb CPGs, although the latter's influence becomes progressively more dominant and eventually exclusive as metamorphosis terminates with tail resorption. Thus, adaptive interactions between locomotor and extraocular motor circuitry allow CPG-driven efference copy signaling to continuously match the changing spatio-temporal requirements for visual image stabilization throughout the transitional period when one propulsive mechanism emerges and replaces another.


Subjects
Adaptation, Physiological , Eye Movements/physiology , Locomotion/physiology , Metamorphosis, Biological/physiology , Motor Activity/physiology , Spinal Cord/physiology , Xenopus laevis/physiology , Animals , Models, Biological , Swimming/physiology
12.
Cereb Cortex ; 25(10): 3932-52, 2015 Oct.
Article in English | MEDLINE | ID: mdl-25491118

ABSTRACT

A fundamental question in sensorimotor control concerns the transformation of spatial signals from the retina into eye and head motor commands required for accurate gaze shifts. Here, we investigated these transformations by identifying the spatial codes embedded in visually evoked and movement-related responses in the frontal eye fields (FEFs) during head-unrestrained gaze shifts. Monkeys made delayed gaze shifts to the remembered location of briefly presented visual stimuli, with delay serving to dissociate visual and movement responses. A statistical analysis of nonparametric model fits to response field data from 57 neurons (38 with visual and 49 with movement activities) eliminated most effector-specific, head-fixed, and space-fixed models, but confirmed the dominance of eye-centered codes observed in head-restrained studies. More importantly, the visual response encoded target location, whereas the movement response mainly encoded the final position of the imminent gaze shift (including gaze errors). This spatiotemporal distinction between target and gaze coding was present not only at the population level, but even at the single-cell level. We propose that an imperfect visual-motor transformation occurs during the brief memory interval between perception and action, and further transformations from the FEF's eye-centered gaze motor code to effector-specific codes in motor frames occur downstream in the subcortical areas.
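The model-comparison logic described above can be sketched as follows: each candidate spatial model predicts firing rates from its own coordinate frame, and the winning model is the one with the lowest residual error. The model names, predictions, and observed rates below are invented for illustration and simplify the study's nonparametric fitting procedure.

```python
# Hedged sketch: choose among candidate spatial models of a neuron's response
# field by residual sum of squares (all data and model names are invented).

def residual_ss(predicted, observed):
    return sum((p - o) ** 2 for p, o in zip(predicted, observed))

observed = [12.0, 30.0, 55.0, 31.0, 11.0]  # invented firing rates (spikes/s)
model_predictions = {
    "target-in-eye-coords": [13.0, 29.0, 54.0, 32.0, 10.0],
    "gaze-in-eye-coords": [10.0, 33.0, 50.0, 35.0, 15.0],
    "head-fixed": [20.0, 25.0, 40.0, 28.0, 20.0],
}
best = min(model_predictions,
           key=lambda m: residual_ss(model_predictions[m], observed))
print("best-fitting model:", best)
```

In the study, this kind of comparison favored eye-centered target coding for visual responses and final gaze position for movement responses.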


Subjects
Frontal Lobe/physiology , Head/physiology , Neurons/physiology , Psychomotor Performance/physiology , Saccades , Visual Perception/physiology , Action Potentials , Animals , Female , Macaca mulatta
13.
Eur J Neurosci ; 42(11): 2934-51, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26448341

ABSTRACT

We previously reported that visuomotor activity in the superior colliculus (SC), a key midbrain structure for the generation of rapid eye movements, preferentially encodes target position relative to the eye (Te) during low-latency head-unrestrained gaze shifts (DeSouza et al., 2011). Here, we trained two monkeys to perform head-unrestrained gaze shifts after a variable post-stimulus delay (400-700 ms), to test whether temporally separated SC visual and motor responses show different spatial codes. Target positions, final gaze positions and various frames of reference (eye, head, and space) were dissociated through natural (untrained) trial-to-trial variations in behaviour. 3D eye and head orientations were recorded, and 2D response field data were fitted against multiple models by use of a statistical method reported previously (Keith et al., 2009). Of 60 neurons, 17 showed a visual response, 12 showed a motor response, and 31 showed both visual and motor responses. The combined visual response field population (n = 48) showed a significant preference for Te, which was also preferred in each visual subpopulation. In contrast, the motor response field population (n = 43) showed a preference for final (relative to initial) gaze position models, and the Te model was statistically eliminated in the motor-only population. There was also a significant shift of coding from the visual to motor response within visuomotor neurons. These data confirm that SC response fields are gaze-centred, and show a target-to-gaze transformation between visual and motor responses. Thus, visuomotor transformations can occur between, and even within, neurons within a single frame of reference and brain structure.


Subjects
Eye Movements/physiology , Neurons/physiology , Space Perception/physiology , Superior Colliculi/physiology , Visual Perception/physiology , Animals , Eye Movement Measurements , Female , Head Movements/physiology , Macaca mulatta , Models, Neurological , Photic Stimulation
14.
J Exp Biol ; 218(Pt 23): 3777-87, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26486370

ABSTRACT

The ability of hoverflies to control their head orientation with respect to their body contributes importantly to their agility and their autonomous navigation abilities. Many tasks performed by this insect during flight, especially while hovering, involve a head stabilization reflex. This reflex, which is mediated by multisensory channels, prevents the visual processing from being disturbed by motion blur and maintains a consistent perception of the visual environment. The so-called dorsal light response (DLR) is another head control reflex, which makes insects sensitive to the brightest part of the visual field. In this study, we experimentally validate and quantify the control loop driving the head roll with respect to the horizon in hoverflies. The new approach developed here consisted of using an upside-down horizon in a body roll paradigm. In this unusual configuration, tethered flying hoverflies surprisingly no longer use purely vision-based control for head stabilization. These results shed new light on the role of neck proprioceptor organs in head and body stabilization with respect to the horizon. Based on the responses obtained with male and female hoverflies, an improved model was then developed in which the output signals delivered by the neck proprioceptor organs are combined with the visual error in the estimated position of the body roll. An internal estimation of the body roll angle with respect to the horizon might explain the extremely accurate flight performances achieved by some hovering insects.


Subjects
Diptera/physiology , Head Movements , Proprioception , Animals , Female , Flight, Animal/physiology , Light , Male , Orientation , Reflex , Vision, Ocular/physiology
15.
J Exp Biol ; 217(Pt 4): 570-9, 2014 Feb 15.
Article in English | MEDLINE | ID: mdl-24198264

ABSTRACT

Visual identification of small moving targets is a challenge for all moving animals. Their own motion generates displacement of the visual surroundings, inducing wide-field optic flow across the retina. Wide-field optic flow is used to sense perturbations in the flight course. Both ego-motion and corrective optomotor responses confound any attempt to track a salient target moving independently of the visual surroundings. What are the strategies that flying animals use to discriminate small-field figure motion from superimposed wide-field background motion? We examined how fruit flies adjust their gaze in response to a compound visual stimulus comprising a small moving figure against an independently moving wide-field ground, which they do by re-orienting their head or their flight trajectory. We found that fixing the head in place impairs object fixation in the presence of ground motion, and that head movements are necessary for stabilizing wing steering responses to wide-field ground motion when a figure is present. When a figure is moving relative to a moving ground, wing steering responses follow components of both the figure and ground trajectories, but head movements follow only the ground motion. To our knowledge, this is the first demonstration that wing responses can be uncoupled from head responses and that the two follow distinct trajectories in the case of simultaneous figure and ground motion. These results suggest that whereas figure tracking by wing kinematics is independent of head movements, head movements are important for stabilizing ground motion during active figure tracking.


Subjects
Behavior, Animal , Drosophila melanogaster/physiology , Flight, Animal , Animals , Biomechanical Phenomena , Photic Stimulation , Space Perception , Wings, Animal/physiology
16.
eNeuro ; 11(2)2024 Feb.
Article in English | MEDLINE | ID: mdl-38242692

ABSTRACT

The olivocerebellar system, which is critical for sensorimotor performance and learning, functions through modules with feedback loops. The main feedback to the inferior olive comes from the cerebellar nuclei (CN), which are predominantly GABAergic and contralateral. However, for the subnucleus d of the caudomedial accessory olive (cdMAO), a crucial region for oculomotor and upper body movements, the source of GABAergic input has yet to be identified. Here, we demonstrate the existence of a disynaptic inhibitory projection from the medial CN (MCN) to the cdMAO via the superior colliculus (SC) by exploiting retrograde, anterograde, and transsynaptic viral tracing at the light microscopic level as well as anterograde classical and viral tracing combined with immunocytochemistry at the electron microscopic level. Retrograde tracing in Gad2-Cre mice reveals that the cdMAO receives GABAergic input from the contralateral SC. Anterograde transsynaptic tracing uncovered that the SC neurons receiving input from the contralateral MCN provide predominantly inhibitory projections to the contralateral cdMAO, ipsilateral to the MCN. Following ultrastructural analysis of the monosynaptic projection, about half of the SC terminals within the contralateral cdMAO are GABAergic. The disynaptic GABAergic projection from the MCN to the ipsilateral cdMAO mirrors that of the monosynaptic excitatory projection from the MCN to the contralateral cdMAO. Thus, while completing the map of inhibitory inputs to the olivary subnuclei, we established that the MCN inhibits the cdMAO via the contralateral SC, highlighting a potential push-pull mechanism in directional gaze control that appears unique in terms of laterality and polarity among olivocerebellar modules.


Subjects
Cerebellum , Inferior Olivary Complex , Mice , Animals , Olivary Nucleus/physiology , Olivary Nucleus/ultrastructure , Synaptic Transmission , Cerebellar Nuclei/physiology
17.
Accid Anal Prev ; 180: 106905, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36508949

ABSTRACT

The removal of drivers' active engagement in driving tasks can lead to erratic gaze patterns in SAE Level 2 (L2) and Level 3 (L3) automation, which has been linked to their subsequently degraded take-over performance. To further address how changes in gaze patterns evolve during the take-over phase, and whether they are influenced by the take-over urgency and the location of the human-machine interface, this driving simulator study used a head-up display (HUD) to relay information about the automation status and conducted take-over driving experiments in which the ego car was about to exit the highway, with variations in the automation level (L2, L3) and time budget (2 s, 6 s). In L2 automation, drivers were required to monitor the environment, while in L3 they were engaged with a visual non-driving related task. Manual driving was also included in the experiments as the baseline. Results showed that, compared to manual driving, drivers in L2 automation focused more on the HUD and Far Road (roadway beyond 2 s time headway ahead), and less on the Near Road (roadway within 2 s time headway ahead); in L3, drivers' attention was predominantly allocated to the non-driving related task. After receiving take-over requests (TORs), there was a gradual diversion of attention from the Far Road to the Near Road in L2 take-overs. This trend changed nearly in proportion to the elapsed time within the time budget and was exaggerated with the shorter 2 s time budget. In L3, drivers' gaze distribution in the early stage of take-overs was similar for both time budget conditions (2 s vs. 6 s): drivers prioritized their early glances to the Near Road, with a gradual increase in attention towards the Far Road.
The HUD used in the present study showed the potential to maintain drivers' attention around the road center during automation and to encourage drivers to glance at the road earlier after TORs by reducing glances to the instrument cluster, which might be significant for take-over safety. These findings were discussed based on an extended conceptual gaze control model, which advances our understanding of gaze patterns around control transitions and their underlying gaze control causes. The implications can contribute to the design of autonomous vehicles, facilitating the transition of control by guiding drivers' attention appropriately according to the driver's attentional state and the take-over urgency.


Subjects
Accidents, Traffic , Automobile Driving , Humans , Automation , Reaction Time , Autonomous Vehicles
18.
Front Hum Neurosci ; 17: 1255465, 2023.
Article in English | MEDLINE | ID: mdl-38094145

ABSTRACT

Online methods allow testing of larger, more diverse populations, with much less effort than in-lab testing. However, many psychophysical measurements, including visual crowding, require accurate eye fixation, which is classically achieved by testing only experienced observers who have learned to fixate reliably, or by using a gaze tracker to restrict testing to moments when fixation is accurate. Alas, both approaches are impractical online as online observers tend to be inexperienced, and online gaze tracking, using the built-in webcam, has a low precision (±4 deg). EasyEyes open-source software reliably measures peripheral thresholds online with accurate fixation achieved in a novel way, without gaze tracking. It tells observers to use the cursor to track a moving crosshair. At a random time during successful tracking, a brief target is presented in the periphery. The observer responds by identifying the target. To evaluate EasyEyes fixation accuracy and thresholds, we tested 12 naive observers in three ways in a counterbalanced order: first, in the laboratory, using gaze-contingent stimulus presentation; second, in the laboratory, using EasyEyes while independently monitoring gaze using EyeLink 1000; third, online at home, using EasyEyes. We find that crowding thresholds are consistent and individual differences are conserved. The small root mean square (RMS) fixation error (0.6 deg) during target presentation eliminates the need for gaze tracking. Thus, this method enables fixation-dependent measurements online, for easy testing of larger and more diverse populations.
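An RMS fixation error like the 0.6 deg quoted above can be computed from gaze samples collected during target presentation, as sketched below. The samples are invented; EasyEyes' actual internal computation is not specified in the abstract.

```python
# Sketch: root-mean-square radial distance (deg) of gaze samples from the
# tracked crosshair during target presentation (invented sample offsets).
import math

def rms_error(gaze_offsets_deg):
    """RMS radial distance of (x, y) gaze offsets from the crosshair."""
    return math.sqrt(sum(x * x + y * y for x, y in gaze_offsets_deg)
                     / len(gaze_offsets_deg))

samples = [(0.2, -0.1), (0.5, 0.3), (-0.4, 0.4), (0.1, -0.6), (0.3, 0.2)]
print(f"RMS fixation error: {rms_error(samples):.2f} deg")
```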

19.
Front Robot AI ; 10: 1127626, 2023.
Article in English | MEDLINE | ID: mdl-37427087

ABSTRACT

Gaze cues serve an important role in facilitating human conversations and are generally considered to be one of the most important non-verbal cues. Gaze cues are used to manage turn-taking, coordinate joint attention, regulate intimacy, and signal cognitive effort. In particular, it is well established that gaze aversion is used in conversations to avoid prolonged periods of mutual gaze. Given the numerous functions of gaze cues, there has been extensive work on modelling these cues in social robots. Researchers have also tried to identify the impact of robot gaze on human participants. However, the influence of robot gaze behavior on human gaze behavior has been less explored. We conducted a within-subjects user study (N = 33) to verify if a robot's gaze aversion influenced human gaze aversion behavior. Our results show that participants tend to avert their gaze more when the robot keeps staring at them as compared to when the robot exhibits well-timed gaze aversions. We interpret our findings in terms of intimacy regulation: humans try to compensate for the robot's lack of gaze aversion.

20.
bioRxiv ; 2023 Jul 18.
Article in English | MEDLINE | ID: mdl-37503301

ABSTRACT

Online methods allow testing of larger, more diverse populations, with much less effort than in-lab testing. However, many psychophysical measurements, including visual crowding, require accurate eye fixation, which is classically achieved by testing only experienced observers who have learned to fixate reliably, or by using a gaze tracker to restrict testing to moments when fixation is accurate. Alas, both approaches are impractical online since online observers tend to be inexperienced, and online gaze tracking, using the built-in webcam, has a low precision (±4 deg, Papoutsaki et al., 2016). The EasyEyes open-source software reliably measures peripheral thresholds online with accurate fixation achieved in a novel way, without gaze tracking. EasyEyes tells observers to use the cursor to track a moving crosshair. At a random time during successful tracking, a brief target is presented in the periphery. The observer responds by identifying the target. To evaluate EasyEyes fixation accuracy and thresholds, we tested 12 naive observers in three ways in a counterbalanced order: first, in the lab, using gaze-contingent stimulus presentation (Kurzawski et al., 2023; Pelli et al., 2016); second, in the lab, using EasyEyes while independently monitoring gaze; third, online at home, using EasyEyes. We find that crowding thresholds are consistent (no significant differences in mean and variance of thresholds across ways) and individual differences are conserved. The small root mean square (RMS) fixation error (0.6 deg) during target presentation eliminates the need for gaze tracking. Thus, EasyEyes enables fixation-dependent measurements online, for easy testing of larger and more diverse populations.
