Results 1 - 20 of 37
1.
Exp Brain Res ; 242(6): 1339-1348, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38563980

ABSTRACT

Using the "Don't look" (DL) paradigm, in which participants are asked not to look at a specific facial feature (i.e., the eyes, nose, or mouth), we previously documented that Eastern observers struggled to completely avoid fixating on the eyes and nose. The mechanisms underlying these two attractions may differ: fixations on the eyes were triggered only reflexively, whereas fixations on the nose were elicited consistently. In this study, we focused mainly on the nose, where the center-of-gravity (CoG) effect, a person's tendency to look near an object's CoG, could be a confound. Full-frontal and mid-profile faces were used because the mid-profile face's CoG does not coincide with the nose location. Although we hypothesized that the two effects are independent, the results, in addition to replicating previous findings, indicated that the CoG effect explains the nose-attracting effect. This study not only reveals this explanation but also raises a question about how the CoG effect operates in Eastern participants.


Subjects
Facial Recognition, Humans, Female, Male, Facial Recognition/physiology, Young Adult, Adult, Ocular Fixation/physiology, Eye, Photic Stimulation/methods, Face
2.
eNeuro ; 11(2)2024 Feb.
Article in English | MEDLINE | ID: mdl-38242692

ABSTRACT

The olivocerebellar system, which is critical for sensorimotor performance and learning, functions through modules with feedback loops. The main feedback to the inferior olive comes from the cerebellar nuclei (CN), which are predominantly GABAergic and contralateral. However, for the subnucleus d of the caudomedial accessory olive (cdMAO), a crucial region for oculomotor and upper body movements, the source of GABAergic input has yet to be identified. Here, we demonstrate the existence of a disynaptic inhibitory projection from the medial CN (MCN) to the cdMAO via the superior colliculus (SC) by exploiting retrograde, anterograde, and transsynaptic viral tracing at the light microscopic level as well as anterograde classical and viral tracing combined with immunocytochemistry at the electron microscopic level. Retrograde tracing in Gad2-Cre mice reveals that the cdMAO receives GABAergic input from the contralateral SC. Anterograde transsynaptic tracing uncovered that the SC neurons receiving input from the contralateral MCN provide predominantly inhibitory projections to contralateral cdMAO, ipsilateral to the MCN. Following ultrastructural analysis of the monosynaptic projection about half of the SC terminals within the contralateral cdMAO are GABAergic. The disynaptic GABAergic projection from the MCN to the ipsilateral cdMAO mirrors that of the monosynaptic excitatory projection from the MCN to the contralateral cdMAO. Thus, while completing the map of inhibitory inputs to the olivary subnuclei, we established that the MCN inhibits the cdMAO via the contralateral SC, highlighting a potential push-pull mechanism in directional gaze control that appears unique in terms of laterality and polarity among olivocerebellar modules.


Subjects
Cerebellum, Inferior Olivary Complex, Mice, Animals, Olivary Nucleus/physiology, Olivary Nucleus/ultrastructure, Synaptic Transmission, Cerebellar Nuclei/physiology
3.
Front Hum Neurosci ; 17: 1255465, 2023.
Article in English | MEDLINE | ID: mdl-38094145

ABSTRACT

Online methods allow testing of larger, more diverse populations, with much less effort than in-lab testing. However, many psychophysical measurements, including visual crowding, require accurate eye fixation, which is classically achieved by testing only experienced observers who have learned to fixate reliably, or by using a gaze tracker to restrict testing to moments when fixation is accurate. Alas, both approaches are impractical online as online observers tend to be inexperienced, and online gaze tracking, using the built-in webcam, has a low precision (±4 deg). EasyEyes open-source software reliably measures peripheral thresholds online with accurate fixation achieved in a novel way, without gaze tracking. It tells observers to use the cursor to track a moving crosshair. At a random time during successful tracking, a brief target is presented in the periphery. The observer responds by identifying the target. To evaluate EasyEyes fixation accuracy and thresholds, we tested 12 naive observers in three ways in a counterbalanced order: first, in the laboratory, using gaze-contingent stimulus presentation; second, in the laboratory, using EasyEyes while independently monitoring gaze using EyeLink 1000; third, online at home, using EasyEyes. We find that crowding thresholds are consistent and individual differences are conserved. The small root mean square (RMS) fixation error (0.6 deg) during target presentation eliminates the need for gaze tracking. Thus, this method enables fixation-dependent measurements online, for easy testing of larger and more diverse populations.
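The fixation-accuracy figure reported above is a root mean square distance between gaze and crosshair positions during target presentation. A minimal sketch of that metric (illustrative only, not EasyEyes code; the function name and sample data are hypothetical):

```python
import math

def rms_fixation_error(gaze_points, crosshair_points):
    """Root mean square distance (deg) between gaze and crosshair positions.

    Each argument is a list of (x, y) positions in degrees of visual angle,
    sampled during target presentation.
    """
    squared = [
        (gx - cx) ** 2 + (gy - cy) ** 2
        for (gx, gy), (cx, cy) in zip(gaze_points, crosshair_points)
    ]
    return math.sqrt(sum(squared) / len(squared))

# Gaze hovering 0.6 deg from a centered crosshair gives an RMS error of 0.6 deg.
gaze = [(0.6, 0.0), (0.0, 0.6), (-0.6, 0.0)]
cross = [(0.0, 0.0)] * 3
print(rms_fixation_error(gaze, cross))  # 0.6
```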

4.
Front Robot AI ; 10: 1127626, 2023.
Article in English | MEDLINE | ID: mdl-37427087

ABSTRACT

Gaze cues serve an important role in facilitating human conversations and are generally considered to be one of the most important non-verbal cues. Gaze cues are used to manage turn-taking, coordinate joint attention, regulate intimacy, and signal cognitive effort. In particular, it is well established that gaze aversion is used in conversations to avoid prolonged periods of mutual gaze. Given the numerous functions of gaze cues, there has been extensive work on modelling these cues in social robots. Researchers have also tried to identify the impact of robot gaze on human participants. However, the influence of robot gaze behavior on human gaze behavior has been less explored. We conducted a within-subjects user study (N = 33) to verify if a robot's gaze aversion influenced human gaze aversion behavior. Our results show that participants tend to avert their gaze more when the robot keeps staring at them as compared to when the robot exhibits well-timed gaze aversions. We interpret our findings in terms of intimacy regulation: humans try to compensate for the robot's lack of gaze aversion.

5.
Sensors (Basel) ; 23(12)2023 Jun 14.
Article in English | MEDLINE | ID: mdl-37420738

ABSTRACT

This study addresses the challenges faced by individuals with upper limb impairments in operating power wheelchair joysticks by utilizing the extended Function-Behavior-Structure (FBS) model to identify design requirements for an alternative wheelchair control system. A gaze-controlled wheelchair system is proposed based on design requirements derived from the extended FBS model and prioritized using the MoSCoW method. This system relies on the user's natural gaze and comprises three layers: perception, decision making, and execution. The perception layer senses and acquires information from the environment, including the user's eye movements and the driving context. The decision-making layer processes this information to determine the user's intended direction, while the execution layer controls the wheelchair's movement accordingly. The system's effectiveness was validated through indoor field testing, with an average driving drift of less than 20 cm across participants. Additionally, the user experience scale revealed overall positive user experiences and perceptions of the system's usability, ease of use, and satisfaction.


Subjects
Wheelchairs, Humans, Eye Movements, Upper Extremity, Sensation, Equipment Design
6.
bioRxiv ; 2023 Jul 18.
Article in English | MEDLINE | ID: mdl-37503301

ABSTRACT

Online methods allow testing of larger, more diverse populations, with much less effort than in-lab testing. However, many psychophysical measurements, including visual crowding, require accurate eye fixation, which is classically achieved by testing only experienced observers who have learned to fixate reliably, or by using a gaze tracker to restrict testing to moments when fixation is accurate. Alas, both approaches are impractical online since online observers tend to be inexperienced, and online gaze tracking, using the built-in webcam, has a low precision (±4 deg, Papoutsaki et al., 2016). The EasyEyes open-source software reliably measures peripheral thresholds online with accurate fixation achieved in a novel way, without gaze tracking. EasyEyes tells observers to use the cursor to track a moving crosshair. At a random time during successful tracking, a brief target is presented in the periphery. The observer responds by identifying the target. To evaluate EasyEyes fixation accuracy and thresholds, we tested 12 naive observers in three ways in a counterbalanced order: first, in the lab, using gaze-contingent stimulus presentation (Kurzawski et al., 2023; Pelli et al., 2016); second, in the lab, using EasyEyes while independently monitoring gaze; third, online at home, using EasyEyes. We find that crowding thresholds are consistent (no significant differences in mean and variance of thresholds across ways) and individual differences are conserved. The small root mean square (RMS) fixation error (0.6 deg) during target presentation eliminates the need for gaze tracking. Thus, EasyEyes enables fixation-dependent measurements online, for easy testing of larger and more diverse populations.

7.
Accid Anal Prev ; 180: 106905, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36508949

ABSTRACT

The removal of drivers' active engagement in driving tasks can lead to erratic gaze patterns in SAE Level 2 (L2) and Level 3 (L3) automation, which has been linked to subsequent degraded take-over performance. To further address how gaze patterns evolve during the take-over phase, and whether they are influenced by take-over urgency and the location of the human-machine interface, this driving simulator study used a head-up display (HUD) to relay information about the automation status and conducted take-over driving experiments in which the ego car was about to exit the highway, with variations in automation level (L2, L3) and time budget (2 s, 6 s). In L2 automation, drivers were required to monitor the environment, while in L3 they were engaged in a visual non-driving related task. Manual driving was included in the experiments as the baseline. Results showed that, compared with manual driving, drivers in L2 automation focused more on the HUD and the Far Road (roadway beyond a 2 s time headway ahead) and less on the Near Road (roadway within a 2 s time headway ahead), while in L3, drivers' attention was predominantly allocated to the non-driving related task. After receiving take-over requests (TORs), attention shifted gradually from the Far Road to the Near Road in L2 take-overs. This shift progressed nearly in proportion to the elapsed time within the time budget and was exaggerated under the shorter 2 s time budget. In L3, drivers' gaze distribution was similar in the early stage of take-overs for both time budget conditions (2 s vs. 6 s): drivers prioritized early glances to the Near Road, with a gradual increase in attention towards the Far Road.
The HUD used in the present study showed the potential to maintain drivers' attention around the road center during automation and to encourage drivers to glance at the road earlier after TORs by reducing glances to the instrument cluster, which may be significant for take-over safety. These findings were discussed in terms of an extended conceptual gaze control model, which advances our understanding of gaze patterns around control transitions and their underlying gaze control mechanisms. The findings can inform the design of autonomous vehicles that facilitate the transition of control by guiding drivers' attention appropriately according to their attentional state and the take-over urgency.


Subjects
Traffic Accidents, Automobile Driving, Humans, Automation, Reaction Time, Autonomous Vehicles
8.
Hum Mov Sci ; 86: 103015, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36242826

ABSTRACT

The purpose of the present study was to investigate the effects of attentional focus and cognitive load on motor performance, quiet-eye duration, and pupil dilation. Eighteen participants completed a dart-throwing task under four conditions: internal or external focus, each with high or low cognitive load. Cognitive load was imposed by a secondary tone-detection task. During each trial, participants' pupil size and eye movements were recorded along with the accuracy of the dart throw. Results revealed that decreased cognitive load increased accuracy, while high load increased pupil size (p's < 0.05). An external focus resulted in the greatest accuracy, while an external focus with high cognitive load resulted in the longest quiet-eye durations (p's < 0.05). Based on these findings, an increase in pupil size is related to greater cognitive load but does not explain the improvement in task performance. Likewise, an external focus of attention improved performance but was not strongly related to quiet-eye duration. Results are discussed further in the article.


Subjects
Attention, Pupil, Humans, Attention/physiology, Pupil/physiology, Task Performance and Analysis, Eye Movements, Cognition/physiology
9.
Front Psychol ; 13: 798766, 2022.
Article in English | MEDLINE | ID: mdl-35282196

ABSTRACT

Police officers often encounter potentially dangerous situations in which they strongly rely on their ability to identify threats quickly and react accordingly. Previous studies have shown that practical experience and targeted training significantly improve threat detection time and decision-making performance in law enforcement situations. We applied 90-min traditional firearms training as a control condition (35 participants) and a specifically developed intervention training (25 participants) to police cadets. The intervention training contained theoretical and practical training on tactical gaze control, situational awareness, and visual attention, while the control training focused on precision and speed. In a pre- and posttest, we measured decision-making performance as well as (tactical) response preparation and execution to evaluate the training. Concerning cognitive performance training (i.e., decision-making), the number of correct decisions increased from pre- to posttest. In shoot scenarios, correct decisions improved significantly more in the intervention group than in the control group. In don't-shoot scenarios, there were no considerable differences. Concerning the training of response preparation and execution in shoot scenarios, the intervention group's response time (time until participants first shot at an armed attacker), but not hit time, decreased significantly from pre- to posttest. The control group was significantly faster than the intervention group, with their response and hit time remaining constant across pre- and posttest. Concerning the training of tactical action control, the intervention group performed significantly better than the control group. Moreover, the intervention group improved the tactical handling of muzzle position significantly. 
The results indicate that a single 90-min session of targeted gaze control and visual attention training improves decision-making performance, response time, and tactical handling of muzzle position in shoot scenarios. However, these faster response times do not necessarily translate to faster hit times, presumably due to the motor complexity of hitting an armed attacker with live ammunition. We conclude that theory-based training on tactical gaze control and visual attention has a higher impact on police officers' decision-making performance than traditional firearms training. Therefore, we recommend that law enforcement agencies include perception-based shoot/don't-shoot exercises in training and in regular tests for officers' annual firearm requalification.

10.
Cereb Cortex ; 32(22): 5083-5107, 2022 11 09.
Article in English | MEDLINE | ID: mdl-35176752

ABSTRACT

Neuronal spiking was sampled from the frontal eye field (FEF) and from the rostral part of area 6 that reaches to the superior limb of the arcuate sulcus, dorsal to the arcuate spur when present (F2vr) in macaque monkeys performing memory-guided saccades and visually guided saccades for visual search. Neuronal spiking modulation in F2vr resembled that in FEF in many but not all respects. A new consensus clustering algorithm of neuronal modulation patterns revealed that F2vr and FEF contain a greater variety of modulation patterns than previously reported. The areas differ in the proportions of visuomotor neuron types, the proportions of neurons discriminating a target from distractors during visual search, and the consistency of modulation patterns across tasks. However, between F2vr and FEF we found no difference in the magnitude of delay period activity, the timing of the peak discharge rate relative to saccades, or the time of search target selection. The observed similarities and differences between the 2 cortical regions contribute to other work establishing the organization of eye fields in the frontal lobe and may help explain why FEF in monkeys is identified within granular prefrontal area 8 but in humans is identified within agranular premotor area 6.


Subjects
Motor Cortex, Saccades, Animals, Humans, Haplorhini, Macaca, Visual Fields, Frontal Lobe/physiology
11.
J Neuroeng Rehabil ; 18(1): 173, 2021 12 18.
Article in English | MEDLINE | ID: mdl-34922590

ABSTRACT

BACKGROUND: Building a control architecture that balances assistive manipulation with the benefits of direct human control is a crucial challenge in human-robot collaboration. It promises to help people with disabilities control wheelchairs and wheelchair-mounted robot arms more efficiently to accomplish activities of daily living. METHODS: In this study, our objective was to design an eye-tracking assistive robot control system capable of providing targeted engagement and motivating individuals with a disability to use the developed method for self-assistance in activities of daily living. A graphical user interface was designed and integrated with the developed control architecture to achieve this goal. RESULTS: We evaluated the system in a user study. Ten healthy participants performed five trials of three manipulation tasks using the graphical user interface and the developed control framework. The 100% task success rate demonstrates the potential of our system to enable individuals with motor impairments to control a wheelchair and wheelchair-mounted assistive robotic manipulators. CONCLUSIONS: We demonstrated the usability of this eye-gaze system for controlling a robotic arm mounted on a wheelchair in activities of daily living for people with disabilities. We found high levels of acceptance, with high ratings in the evaluation of the system with healthy participants.


Subjects
Disabled Persons, Robotics, Self-Help Devices, Wheelchairs, Activities of Daily Living, Humans, User-Computer Interface
12.
Front Neurol ; 12: 682761, 2021.
Article in English | MEDLINE | ID: mdl-34149606

ABSTRACT

Gaze control is required for applying visual stimuli to a particular area of the visual field. We developed a visual field test with gaze check tasks to investigate hemianopia. In this test, participants must report the presence or absence of visual stimuli when a small object at the fixation point vibrates. Trials in the absence of visual stimuli were used as gaze check tasks, since the vibration could be observed only when the gaze was directed at the fixation point. We evaluated the efficacy of our test in four control participants and one patient with homonymous hemianopia who was unaware of the defects in the left visual field. This patient presented hemianopia in the test with gaze check tasks, but not when the gaze check tasks were omitted. The patient showed spontaneous gaze movements from the fixation point to the upper left direction, as well as scanning of the left visual field during the test without gaze check tasks. Thus, we concluded that the visual defects in this patient were compensated in daily life by spontaneous eye movements coordinated with visual information processing. The present results show the usefulness of the visual field test with gaze check tasks.

13.
Psychon Bull Rev ; 28(6): 1944-1960, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34159530

ABSTRACT

Gaze control manifests from a dynamic integration of visual and auditory information, with sound providing important cues for how a viewer should behave. Some past research suggests that music, even if entirely irrelevant to the current task demands, may also sway the timing and frequency of fixations. The current work sought to further assess this idea as well as investigate whether task-irrelevant music could also impact how gaze is spatially allocated. In preparation for a later memory test, participants studied pictures of urban scenes in silence or while simultaneously listening to one of two types of music. Eye tracking was recorded, and nine gaze behaviors were measured to characterize the temporal and spatial aspects of gaze control. Findings showed that while these gaze behaviors changed over the course of viewing, music had no impact. Participants in the music conditions, however, did show better memory performance than those who studied in silence. These findings are discussed within theories of multimodal gaze control.


Subjects
Music, Attention, Auditory Perception, Cues, Eye Movements, Ocular Fixation, Humans
14.
Sensors (Basel) ; 21(5)2021 Mar 05.
Article in English | MEDLINE | ID: mdl-33807599

ABSTRACT

This paper presents a lightweight, infrastructureless head-worn interface for robust, real-time robot control in Cartesian space using head- and eye-gaze. The interface weighs just 162 g in total. It combines a state-of-the-art visual simultaneous localization and mapping algorithm (ORB-SLAM 2) for RGB-D cameras with a Magnetic, Angular Rate, and Gravity (MARG) sensor filter. The data fusion process is designed to switch dynamically between magnetic, inertial, and visual heading sources to enable robust orientation estimation under various disturbances, e.g., magnetic disturbances or degraded visual sensor data. The interface furthermore delivers accurate eye- and head-gaze vectors to enable precise robot end-effector (EFF) positioning, and it employs a head motion mapping technique to control the robot's end-effector orientation effectively. An experimental proof of concept demonstrates that the proposed interface and its data fusion process generate reliable and robust pose estimates. The three-dimensional head- and eye-gaze position estimation pipeline delivers a mean Euclidean error of 19.0 ± 15.7 mm for head-gaze and 27.4 ± 21.8 mm for eye-gaze at a distance of 0.3-1.1 m from the user. This indicates that the proposed interface offers a precise control mechanism for hands-free, full six degree-of-freedom (DoF) robot teleoperation in Cartesian space by head- or eye-gaze and head motion.
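The accuracy figures above are mean Euclidean errors between estimated and reference 3-D gaze positions. A minimal sketch of that metric (illustrative only, not the authors' code; the function name and sample points are hypothetical):

```python
import math

def mean_euclidean_error(estimates, references):
    """Mean 3-D Euclidean distance (mm) between estimated and reference points."""
    errors = [
        math.dist(est, ref)  # straight-line distance in mm
        for est, ref in zip(estimates, references)
    ]
    return sum(errors) / len(errors)

# Two estimates that are 10 mm and 28 mm off the reference average to 19.0 mm.
est = [(10.0, 0.0, 0.0), (0.0, 28.0, 0.0)]
ref = [(0.0, 0.0, 0.0), (0.0, 0.0, 0.0)]
print(mean_euclidean_error(est, ref))  # 19.0
```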


Subjects
Eye Movements, Robotics, Ocular Fixation, Motion, Orientation
15.
Vision Res ; 182: 1-8, 2021 05.
Article in English | MEDLINE | ID: mdl-33550023

ABSTRACT

While passive social information (e.g. pictures of people) routinely draws one's eyes, our willingness to look at live others is more nuanced. People tend not to stare at strangers and will modify their gaze behaviour to avoid sending undesirable social signals; yet they often continue to monitor others covertly "out of the corner of their eyes." What this means for looks that are being made near to live others is unknown. Will the eyes be drawn towards the other person, or pushed away? We evaluate changes in two elements of gaze control: image-independent principles guiding how people look (e.g. biases to make eye movements along the cardinal directions) and image-dependent principles guiding what people look at (e.g. a preference for meaningful content within a scene). Participants were asked to freely view semantically unstructured (fractals) and semantically structured (rotated landscape) images, half of which were located in the space near to a live other. We found that eye movements were horizontally displaced away from a visible other starting at 1032 ms after stimulus onset when fractals but not landscapes were viewed. We suggest that the avoidance of looking towards live others extends to the near space around them, at least in the absence of semantically meaningful gaze targets.


Subjects
Eye Movements, Eye, Ocular Fixation, Humans
16.
BMC Neurol ; 21(1): 63, 2021 Feb 10.
Article in English | MEDLINE | ID: mdl-33568101

ABSTRACT

BACKGROUND: Limited research exists to guide clinical decisions about trialling, selecting, implementing and evaluating eye-gaze control technology. This paper reports the outcomes of a Delphi study conducted to build international stakeholder consensus to inform decision making about trialling and implementing eye-gaze control technology with people with cerebral palsy. METHODS: A three-round online Delphi survey was conducted. In Round 1, 126 stakeholders responded to questions identified through an international stakeholder Advisory Panel and systematic reviews. In Round 2, 63 respondents rated the importance of 200 statements generated in Round 1. In Round 3, 41 respondents rated the importance of the 105 highest-ranked statements retained from Round 2. RESULTS: Stakeholders reached consensus on 94 of the original 200 statements. These statements related to person factors, support networks, the environment, and technical aspects to consider during assessment, trial, implementation and follow-up. The findings reinforced the importance of an individualised approach and of treating information gathered from the user, their support network and professionals as central when measuring outcomes. Information required to support an application for funding was also obtained. CONCLUSION: This Delphi study has identified issues unique to eye-gaze control technology and will enhance its implementation with people with cerebral palsy.


Subjects
Cerebral Palsy, Clinical Decision-Making, Ocular Fixation, Technology/instrumentation, User-Computer Interface, Adolescent, Adult, Child, Consensus, Delphi Technique, Female, Humans, Male, Surveys and Questionnaires
17.
Proc Biol Sci ; 288(1943): 20202374, 2021 01 27.
Article in English | MEDLINE | ID: mdl-33499788

ABSTRACT

In the true flies (Diptera), the hind wings have evolved into specialized mechanosensory organs known as halteres, which are sensitive to gyroscopic and other inertial forces. Together with the fly's visual system, the halteres direct head and wing movements through a suite of equilibrium reflexes that are crucial to the fly's ability to maintain stable flight. As in other animals (including humans), this presents challenges to the nervous system as equilibrium reflexes driven by the inertial sensory system must be integrated with those driven by the visual system in order to control an overlapping pool of motor outputs shared between the two of them. Here, we introduce an experimental paradigm for reproducibly altering haltere stroke kinematics and use it to quantify multisensory integration of wing and gaze equilibrium reflexes. We show that multisensory wing-steering responses reflect a linear superposition of haltere-driven and visually driven responses, but that multisensory gaze responses are not well predicted by this framework. These models, based on populations, extend also to the responses of individual flies.


Subjects
Drosophila, Animal Flight, Animals, Biomechanical Phenomena, Drosophila melanogaster, Humans, Reflex, Animal Wings
18.
Front Neurorobot ; 14: 34, 2020.
Article in English | MEDLINE | ID: mdl-32625075

ABSTRACT

When a robot interacts with a person, gaze control is very important for face-to-face communication. When a robot interacts with several people, however, determining whom to look at and whom to pay attention to among the others becomes a key neurorobotics problem. Several factors can influence this decision: who is speaking, whom he/she is speaking to, where people are looking, whether a user wants to attract attention, etc. This article presents a novel method for deciding whom to pay attention to when a robot interacts with several people. The proposed method is based on a competitive network that receives different stimuli (looking, speaking, pose, hoarding the conversation, habituation, etc.) that compete with each other to decide whom to pay attention to. The dynamic nature of this neural network allows a smooth transition of the focus of attention upon a significant change in stimuli. A conversation is created between different participants, replicating human behavior in the robot. The method deals with the problem of several interlocutors appearing in and disappearing from the robot's visual field. A robotic head was designed and built, and a virtual agent projected on the robot's face display was integrated with the gaze control. Experiments were carried out with the robotic head integrated into a ROS architecture. The article presents an analysis of the method, describes how the system was integrated with the robotic head, and reports the experiments and results obtained.
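The stimulus-competition idea described above can be sketched as a simple winner-take-all scoring rule: each person accumulates evidence from weighted stimuli, and a habituation penalty on the currently attended person lets attention shift. The weights, stimulus labels, and function below are hypothetical illustrations, not the authors' implementation:

```python
# Winner-take-all competition over per-person stimulus scores (illustrative only).
WEIGHTS = {"speaking": 3.0, "looking_at_robot": 2.0, "new_arrival": 2.5, "pose": 1.0}

def focus_of_attention(people, habituation):
    """Return the person with the highest weighted stimulus score.

    people: dict mapping name -> set of active stimulus labels.
    habituation: dict mapping name -> penalty accumulated while attended,
    which gradually pushes attention towards other interlocutors.
    """
    def score(name):
        return sum(WEIGHTS[s] for s in people[name]) - habituation.get(name, 0.0)
    return max(people, key=score)

people = {"ana": {"speaking", "pose"}, "ben": {"looking_at_robot"}}
print(focus_of_attention(people, {"ana": 0.5}))  # ana: 3+1-0.5=3.5 beats ben: 2.0
```

As habituation on the attended speaker grows, another interlocutor's score eventually wins, producing the smooth attention shifts the abstract describes.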

19.
Cereb Cortex ; 30(9): 4995-5013, 2020 07 30.
Article in English | MEDLINE | ID: mdl-32390052

ABSTRACT

The visual system is thought to separate egocentric and allocentric representations, but behavioral experiments show that these codes are optimally integrated to influence goal-directed movements. To test if frontal cortex participates in this integration, we recorded primate frontal eye field activity during a cue-conflict memory delay saccade task. To dissociate egocentric and allocentric coordinates, we surreptitiously shifted a visual landmark during the delay period, causing saccades to deviate by 37% in the same direction. To assess the cellular mechanisms, we fit neural response fields against an egocentric (eye-centered target-to-gaze) continuum, and an allocentric shift (eye-to-landmark-centered) continuum. Initial visual responses best-fit target position. Motor responses (after the landmark shift) predicted future gaze position but embedded within the motor code was a 29% shift toward allocentric coordinates. This shift appeared transiently in memory-related visuomotor activity, and then reappeared in motor activity before saccades. Notably, fits along the egocentric and allocentric shift continua were initially independent, but became correlated across neurons just before the motor burst. Overall, these results implicate frontal cortex in the integration of egocentric and allocentric visual information for goal-directed action, and demonstrate the cell-specific, temporal progression of signal multiplexing for this process in the gaze system.


Subjects
Ocular Fixation/physiology, Frontal Lobe/physiology, Visual Perception/physiology, Animals, Female, Macaca mulatta, Photic Stimulation
20.
Article in English | MEDLINE | ID: mdl-32138358

ABSTRACT

Eye-gaze technology allows individuals with severe physical disabilities and complex communication needs to control a computer or other devices with their gaze, thereby enabling them to communicate and participate in society. To date, most research on eye-gaze controlled devices for persons with disabilities has focused on a single diagnosis in either adults or children and has included only a few participants. The current study used a total population survey to identify the prevalence and perceived usability of eye-gaze technology among adults and children in Sweden. Participants were 171 users of eye-gaze technology with severe physical and communication impairments, ranging in age from 4 to 81 years. Cerebral palsy was the most common diagnosis. Sixty-three percent used the technology daily, 33% weekly, and 4% less frequently. Adults, compared with children, reported using their computers more frequently (65% vs. 38%; p < 0.01) and for the activities they needed to perform (59% vs. 31%; p < 0.01), and they were more satisfied with services, indicating that service providers should prioritise and develop more effective services for children and their parents.


Subjects
Communication Aids for Disabled, Computers, Self-Help Devices, Adolescent, Adult, Child, Eye Movements, Female, Ocular Fixation, Humans, Male, Middle Aged, Surveys and Questionnaires, Sweden, Young Adult