Results 1 - 4 of 4
1.
Behav Res Methods ; 2022 Sep 09.
Article in English | MEDLINE | ID: mdl-36085543

ABSTRACT

Assessing gaze behavior during real-world tasks is difficult: dynamic bodies moving through dynamic worlds complicate gaze analysis, and current approaches involve laborious manual coding of pupil positions. In settings where motion capture and mobile eye tracking are used concurrently in naturalistic tasks, it is critical that data collection be simple, efficient, and systematic. One solution is to combine eye tracking with motion capture to generate 3D gaze vectors; when combined with tracked or known object locations, 3D gaze vector generation can be automated. Here we use combined eye and motion capture and explore how linear regression models can generate accurate 3D gaze vectors. We compared the spatial accuracy of models derived from four short calibration routines across three tasks: the calibration routines themselves, a validation task requiring short fixations on task-relevant locations, and a naturalistic object interaction task that bridges the gap between laboratory and "in the wild" studies. Further, we generated and compared models using spherical and Cartesian coordinate systems and monocular (left or right) or binocular data. All calibration routines performed similarly, with the best performance (i.e., sub-centimeter errors) coming from the naturalistic task trials when the participant was looking at an object in front of them. We found that spherical coordinate systems generate the most accurate gaze vectors, with no difference in accuracy between monocular and binocular data. Overall, we recommend 1-min calibration routines using binocular pupil data combined with a spherical world coordinate system to produce the highest-quality gaze vectors.
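The core calibration idea described above can be sketched as a least-squares fit from binocular pupil positions (expressed as spherical angles) to a gaze direction, then converted to a unit 3D gaze vector. This is a minimal illustrative sketch with synthetic data, not the study's actual pipeline; all variable names and parameter values are assumptions.

```python
import numpy as np

# Synthetic calibration data (illustrative only): azimuth/elevation of
# the left and right pupils, in radians.
rng = np.random.default_rng(0)
n = 200
pupils = rng.uniform(-0.5, 0.5, size=(n, 4))  # [az_L, el_L, az_R, el_R]

# Assume a "true" linear relation plus small noise, for demonstration.
true_W = rng.normal(size=(4, 2))
gaze_angles = pupils @ true_W + rng.normal(scale=0.01, size=(n, 2))

# Least-squares fit with an intercept column appended.
X = np.hstack([pupils, np.ones((n, 1))])
W, *_ = np.linalg.lstsq(X, gaze_angles, rcond=None)

# Convert predicted spherical angles (azimuth, elevation) to unit
# 3D gaze vectors in a world coordinate frame.
az, el = (X @ W).T
gaze_vec = np.stack([np.cos(el) * np.sin(az),
                     np.sin(el),
                     np.cos(el) * np.cos(az)], axis=1)

print(np.allclose(np.linalg.norm(gaze_vec, axis=1), 1.0))  # → True
```

With known object locations, such vectors could then be intersected with object positions to automate fixation labeling, which is the automation step the abstract refers to.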

2.
Cortex ; 138: 253-265, 2021 05.
Article in English | MEDLINE | ID: mdl-33752137

ABSTRACT

Everyday tasks such as catching a ball appear effortless, but in fact require complex interactions and tight temporal coordination between the brain's visual and motor systems. What makes such interceptive actions particularly impressive is the capacity of the brain to account for temporal delays in the central nervous system-a limitation that can be mitigated by making predictions about the environment as well as one's own actions. Here, we wanted to assess how well human participants can plan an upcoming movement based on a dynamic, predictable stimulus that is not the target of action. A central stationary or rotating stimulus determined the probability that each of two potential targets would be the eventual target of a rapid reach-to-touch movement. We examined the extent to which reach movement trajectories convey internal predictions about the future state of dynamic probabilistic information conveyed by the rotating stimulus. We show that movement trajectories reflect the target probabilities determined at movement onset, suggesting that humans rapidly and accurately integrate visuospatial predictions and estimates of their own reaction times to effectively guide action.


Subjects
Movement , Psychomotor Performance , Attention , Humans , Reaction Time
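The prediction described in the abstract above — extrapolating a rotating stimulus forward by one's own estimated reaction and movement time, then reading off the more probable target — can be sketched as follows. This is a hypothetical illustration; the functions, parameters, and the left/right decision rule are assumptions, not the study's model.

```python
import math

def predicted_angle(angle_now, omega, reaction_time, movement_time):
    """Extrapolate the stimulus angle (radians) to the estimated contact time."""
    return (angle_now + omega * (reaction_time + movement_time)) % (2 * math.pi)

def more_probable_target(angle):
    """Map a predicted stimulus angle to a left/right target choice
    (assumed rule: rightward half-plane -> right target)."""
    return "right" if math.cos(angle) >= 0 else "left"

# Stimulus at 90 degrees, rotating at pi rad/s; assumed ~0.25 s reaction
# time plus ~0.30 s movement time until contact.
ang = predicted_angle(math.pi / 2, math.pi, 0.25, 0.30)
print(more_probable_target(ang))  # → left
```

The point of the sketch is the integration step: the choice depends not on the stimulus state at movement onset alone, but on its extrapolated state at contact, which is what the trajectory analysis in the study probes.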
3.
Front Psychol ; 9: 2764, 2018.
Article in English | MEDLINE | ID: mdl-30687198

ABSTRACT

Previc (1990) postulated that most peri-personal space interactions occur in the lower visual field (LVF), leading to a performance advantage over the upper visual field (UVF). It is not clear whether extensive practice can affect the difference between interactions in the LVF and UVF. We tested male and female varsity basketball athletes and non-athletes on a DynaVision D2 visuomotor reaction task. We recruited basketball players because their training involves spending a significant amount of time processing UVF information. We found an LVF advantage in all participants, but this advantage was significantly reduced in the athletes. The results suggest that training can be a powerful modulator of visuomotor function.

4.
PLoS One ; 12(8): e0182635, 2017.
Article in English | MEDLINE | ID: mdl-28792518

ABSTRACT

Many salient visual events tend to coincide with auditory events, such as seeing and hearing a car pass by. Information from the visual and auditory senses can be combined to create a stable percept of a stimulus, and access to coincident visual and auditory information can help with spatial tasks such as localization. However, not all visual information has an analogous auditory percept — consider viewing a computer monitor. Here, we describe a system capable of detecting salient visual events and augmenting them into localizable auditory events. The system uses a neuromorphic camera (DAVIS 240B) to detect logarithmic changes of brightness intensity in the scene, which can be interpreted as salient visual events. Participants were blindfolded and asked to use the device to detect new objects in the scene and to determine the direction of motion of a moving visual object. Results suggest the system is robust enough to allow for simple detection of new salient stimuli as well as accurate encoding of the direction of visual motion. As neuromorphic devices become faster and smaller, systems of this kind should become increasingly feasible.


Subjects
Auditory Perception , Motion Perception , Space Perception , User-Computer Interface , Acoustic Stimulation , Algorithms , Analysis of Variance , Attention , Discrimination (Psychology) , Humans , Psychophysics , Reaction Time , Software , Time Factors
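The detection principle the abstract describes — a neuromorphic camera signaling logarithmic changes in brightness — can be sketched frame-to-frame as a thresholded log-intensity difference. This is a simplified, synthetic illustration: a real DAVIS 240B emits asynchronous per-pixel events in hardware, and the threshold value here is an assumption.

```python
import numpy as np

def detect_events(prev_frame, frame, threshold=0.15):
    """Return per-pixel event polarity (+1 ON, -1 OFF, 0 none)
    from the change in log brightness between two frames."""
    eps = 1e-6  # avoid log(0) on dark pixels
    d = np.log(frame + eps) - np.log(prev_frame + eps)
    events = np.zeros_like(d, dtype=int)
    events[d > threshold] = 1    # brightness increase -> ON event
    events[d < -threshold] = -1  # brightness decrease -> OFF event
    return events

# A bright square appears in an otherwise static scene.
prev_frame = np.full((8, 8), 0.2)
frame = prev_frame.copy()
frame[2:5, 2:5] = 0.8

events = detect_events(prev_frame, frame)
print(events.sum())  # → 9 (one ON event per pixel of the new square)
```

Because static regions produce no events, the output is sparse and localized to change, which is what makes it a natural trigger for the spatialized auditory cues the study explores.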