Results 1 - 3 of 3
1.
Atten Percept Psychophys; 85(7): 2257-2276, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37258896

ABSTRACT

Microsaccades belong to the category of fixational micromovements and may be crucial for image stability on the retina. Eye movement paradigms typically require fixational control, but this does not eliminate all oculomotor activity. The antisaccade task requires a planned eye movement in the direction opposite to a stimulus onset, allowing planning to be separated from execution. We build on previous studies of microsaccades in the antisaccade task using a combination of fixed and mixed pro- and antisaccade blocks. We hypothesized that microsaccade rates would be reduced prior to the execution of antisaccades as compared with regular saccades (prosaccades), owing to the need to preemptively suppress reflexive saccades during antisaccade generation. In two experiments, we measured microsaccades in four conditions across three trial blocks: one block each of fixed prosaccade and antisaccade trials, and a mixed block in which both saccade types were randomized. In Experiment 1, with monocular eye tracking, there was an interaction between the effects of saccade type and block type on microsaccade rates, suggesting lower rates on antisaccade trials, but only within mixed blocks. In Experiment 2, eye tracking was binocular, revealing suppressed microsaccade rates on antisaccade trials. A cluster permutation analysis of the microsaccade rate over the course of a trial did not reveal any particular critical time window for this difference in microsaccade rates. Our findings suggest that microsaccade rates reflect the degree of suppression of the oculomotor system during the antisaccade task.


Subjects
Eye Movements, Saccades, Humans, Reaction Time, Photic Stimulation/methods, Eye-Tracking Technology
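
The abstract does not say how microsaccades were detected or how rates were computed; a common choice in this literature is the velocity-threshold algorithm of Engbert and Kliegl, in which a smoothed two-dimensional gaze velocity is compared against a median-based elliptic threshold. The sketch below follows that approach as an assumption; the sampling rate, threshold multiplier, and minimum duration are illustrative values, not parameters taken from the paper.

```python
import numpy as np

def detect_microsaccades(x, y, fs=1000.0, lam=6.0, min_samples=6):
    """Velocity-threshold microsaccade detection (Engbert & Kliegl style sketch).

    x, y : 1D arrays of gaze position (deg) sampled at fs Hz.
    lam  : threshold in median-based standard deviations of velocity.
    Returns a list of (onset_index, offset_index) pairs.
    """
    # 5-sample moving-window velocity estimate in deg/s.
    vx = fs * (x[4:] - x[:-4] + 2 * (x[3:-1] - x[1:-3])) / 6.0
    vy = fs * (y[4:] - y[:-4] + 2 * (y[3:-1] - y[1:-3])) / 6.0

    # Robust (median-based) estimate of the velocity standard deviation.
    sx = np.sqrt(np.median(vx**2) - np.median(vx)**2)
    sy = np.sqrt(np.median(vy**2) - np.median(vy)**2)

    # A sample is supra-threshold if it falls outside the elliptic criterion.
    above = (vx / (lam * sx))**2 + (vy / (lam * sy))**2 > 1.0

    # Group consecutive supra-threshold samples into candidate events.
    events, start = [], None
    for i, flag in enumerate(above):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if i - start >= min_samples:
                events.append((start, i - 1))
            start = None
    if start is not None and len(above) - start >= min_samples:
        events.append((start, len(above) - 1))
    return events

def microsaccade_rate(events, n_samples, fs=1000.0):
    # Rate in events per second over the analyzed fixation interval.
    return len(events) / (n_samples / fs)
```
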
2.
Vision (Basel); 3(4), 2019 Oct 25.
Article in English | MEDLINE | ID: mdl-31735857

ABSTRACT

The seminal model by Laurent Itti and Christof Koch demonstrated that the entire flow of visual processing, from input image to resulting fixations, can be computed. Despite many replications and follow-ups, few have matched the impact of the original model. So what made this model so groundbreaking? We have selected five key contributions that distinguish the original salience model by Itti and Koch: its contribution to our theoretical, neural, and computational understanding of visual processing, as well as its spatial and temporal predictions for fixation distributions. Over the last 20 years, advances in the field have produced a variety of techniques and approaches to salience modelling, many of which attempt to improve on or extend the original Itti and Koch model. One of the most recent trends has been to adopt the computational power of deep learning neural networks; however, this has also shifted the primary focus of such models to spatial classification. We present a review of recent approaches to modelling salience, ranging from direct variations of the Itti and Koch salience model to sophisticated deep-learning architectures, and discuss these models from the point of view of their contribution to computational cognitive neuroscience.
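
As a rough illustration of the center-surround principle at the heart of the Itti and Koch model, the toy sketch below blurs an intensity image at fine and coarse scales and accumulates rectified across-scale differences into a single map. It is a heavy simplification for illustration only (one feature channel, no color or orientation maps, no normalization operator, no winner-take-all network), not the published implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def intensity_saliency(image, center_sigmas=(1, 2), surround_sigmas=(4, 8)):
    """Toy center-surround saliency map from a grayscale image.

    image : 2D float array (H, W), values in [0, 1].
    Returns a saliency map of the same shape, normalized to [0, 1].
    """
    saliency = np.zeros_like(image, dtype=float)
    # Center-surround: fine-scale (center) minus coarse-scale (surround)
    # blur, rectified and accumulated over several scale pairs.
    for sc in center_sigmas:
        center = gaussian_filter(image, sigma=sc)
        for ss in surround_sigmas:
            surround = gaussian_filter(image, sigma=ss)
            saliency += np.abs(center - surround)
    # Normalize for display or for sampling candidate fixations.
    saliency -= saliency.min()
    if saliency.max() > 0:
        saliency /= saliency.max()
    return saliency

# A candidate "fixation" could then be taken at the most salient location:
# y, x = np.unravel_index(np.argmax(saliency_map), saliency_map.shape)
```
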

3.
Brain Sci; 10(1), 2019 Dec 27.
Article in English | MEDLINE | ID: mdl-31892197

ABSTRACT

Itti and Koch's Saliency Model has been used extensively to simulate fixation selection in a variety of tasks, from visual search to simple reaction times. Although the Saliency Model has been tested for the spatial accuracy of its fixation predictions, it has not been well tested for their temporal accuracy. Visual tasks, such as search, invariably produce a positively skewed distribution of saccadic reaction times over large numbers of samples, yet we show that the leaky integrate-and-fire (LIF) neuronal model included in the classic implementation tends to produce a distribution shifted toward shorter fixations (in comparison with human data). Further, while parameter optimization using a genetic algorithm and the Nelder-Mead method does improve the fit of the resulting distribution, it is still unable to match the temporal distributions of human responses in a visual task. Analysis of times for individual images reveals that the LIF algorithm produces initial fixation durations that are fixed rather than sampled from a distribution (as in the human case). Only by aggregating responses over many input images does a distribution emerge, and even then its form depends on the input images used to create it rather than on internal model variability.
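
The abstract's central observation, that the LIF stage yields a fixed initial fixation duration for a given image rather than a sample from a distribution, can be illustrated with a generic leaky integrate-and-fire neuron driven by a constant input: with deterministic dynamics, the time to the first threshold crossing is identical on every run. The parameter names and values below are illustrative and are not taken from the model's implementation.

```python
import numpy as np

def lif_first_spike_time(I, tau=20e-3, R=1.0, v_rest=0.0, v_thresh=1.0,
                         dt=1e-4, t_max=1.0):
    """Time of the first spike of a leaky integrate-and-fire neuron.

    Euler integration of  dV/dt = (-(V - v_rest) + R * I) / tau.
    Returns the first-spike time in seconds, or None if no spike occurs.
    """
    v = v_rest
    for step in range(int(t_max / dt)):
        v += dt * (-(v - v_rest) + R * I) / tau
        if v >= v_thresh:
            return (step + 1) * dt
    return None

# With a constant, noise-free input the latency is deterministic: every call
# returns the same value, so a distribution of simulated "fixation durations"
# only emerges when the input (i.e., the image) varies across runs.
latencies = [lif_first_spike_time(I=1.5) for _ in range(5)]
print(latencies)  # identical values; no within-image variability
```
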
