Results 1 - 20 of 43
1.
Behav Res Methods ; 55(4): 1513-1536, 2023 Jun.
Article in English | MEDLINE | ID: mdl-35680764

ABSTRACT

Pupil-corneal reflection (P-CR) eye tracking has gained a prominent role in studying dog visual cognition, despite methodological challenges that often lead to lower-quality data than when recording from humans. In the current study, we investigated if and how the morphology of dogs might interfere with tracking by P-CR systems, and to what extent such interference, possibly in combination with dog-unique eye-movement characteristics, may undermine data quality and affect eye-movement classification when processed through algorithms. To this end, we conducted an eye-tracking experiment with dogs and humans, investigated the incidence of tracking interference, compared how the two species blinked, and examined how the differing quality of dog and human data affected the detection and classification of eye-movement events. Our results show that the morphology of dogs' faces and eyes can interfere with the systems' tracking methods, and that dogs blink less often but their blinks are longer. Importantly, the lower quality of the dog data led to larger differences in how two different event detection algorithms classified fixations, indicating that the results for key dependent variables are more susceptible to the choice of algorithm in dog than in human data. Further, two measures of the Nyström & Holmqvist (Behavior Research Methods, 42(4), 188-204, 2010) algorithm showed that dog fixations are less stable and that dog data contain more trials with extreme levels of noise. Our findings call for analyses better adjusted to the characteristics of dog eye-tracking data, and our recommendations help future dog eye-tracking studies acquire high-quality data that enable robust comparisons of visual cognition between dogs and humans.
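
The blink comparison described above can be approximated from raw data alone, if one is willing to treat blinks as runs of missing samples. A minimal sketch under that assumption (function name, array names and the 50 ms threshold are illustrative, not taken from the paper):

```python
import numpy as np

def blink_stats(x, y, fs, min_dur=0.05):
    """Estimate blink count and mean blink duration from runs of lost samples.

    x, y    : gaze coordinates with NaN during data loss (1-D float arrays)
    fs      : sampling rate in Hz
    min_dur : minimum run length in seconds counted as a blink (illustrative)
    """
    lost = np.isnan(x) | np.isnan(y)
    edges = np.diff(lost.astype(int))
    starts = np.flatnonzero(edges == 1) + 1     # first sample of each lost run
    ends = np.flatnonzero(edges == -1) + 1      # one past the last lost sample
    if lost[0]:
        starts = np.r_[0, starts]
    if lost[-1]:
        ends = np.r_[ends, lost.size]
    durs = (ends - starts) / fs
    durs = durs[durs >= min_dur]
    return durs.size, (durs.mean() if durs.size else np.nan)

# usage: n_blinks, mean_dur = blink_stats(x, y, fs=500)
```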


Subjects
Data Accuracy, Eye-Tracking Technology, Humans, Dogs, Animals, Eye Movements, Blinking, Cognition
2.
Behav Res Methods ; 55(1): 364-416, 2023 01.
Article in English | MEDLINE | ID: mdl-35384605

ABSTRACT

In this paper, we present a review of how the various aspects of any study using an eye tracker (such as the instrument, methodology, environment, participant, etc.) affect the quality of the recorded eye-tracking data and the obtained eye-movement and gaze measures. We take this review to represent the empirical foundation for reporting guidelines for any study involving an eye tracker. We compare this empirical foundation to five existing reporting guidelines and to a database of 207 published eye-tracking studies. We find that reporting guidelines vary substantially and do not match actual reporting practices. We end by deriving a minimal, flexible reporting guideline based on empirical research (Section "An empirically based minimal reporting guideline").


Subjects
Eye Movements, Eye-Tracking Technology, Humans, Empirical Research
3.
Behav Res Methods ; 54(2): 845-863, 2022 04.
Article in English | MEDLINE | ID: mdl-34357538

ABSTRACT

We empirically investigate the effect of small, almost imperceptible balance and breathing movements of the head on the level and colour of noise in data from five commercial video-based P-CR eye trackers. By comparing noise from recordings with completely static artificial eyes to noise from recordings where the artificial eyes are worn by humans, we show that very small head movements increase the level and colouring of the noise in data recorded from all five eye trackers in this study. This increase in noise level is seen not only in the gaze signal, but also in the P and CR signals of the eye trackers that provide these camera-image features. The P and CR signals of the SMI eye trackers correlate strongly during small head movements, but less so or not at all when the head is completely still, indicating that head movements are registered by the P and CR images in the eye camera. By recording with artificial eyes, we can also show that the pupil-size artefact plays no major role in increasing and colouring noise. Our findings add to and replicate the observation by Niehorster et al. (2021) that lowpass filters in video-based P-CR eye trackers colour the data. Irrespective of its source, filters or head movements, coloured noise can be confused with oculomotor drift. We also find that use of the default head restriction in the EyeLink 1000+, the EyeLink II and the HiSpeed240 results in noisier data than less head restriction. Researchers investigating data quality in eye trackers should consider not using the Gen 2 artificial eye from SR Research / EyeLink: data recorded with this artificial eye are much noisier than data recorded with other artificial eyes, on average 2.2-14.5 times worse for the five eye trackers.
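
RMS-S2S, the sample-to-sample precision measure referred to throughout these studies, is straightforward to compute from a stretch of gaze samples recorded during fixation. A minimal sketch (function and array names are illustrative):

```python
import numpy as np

def rms_s2s(x, y):
    """Root mean square of sample-to-sample distances, a standard precision
    measure: larger values indicate a noisier gaze signal."""
    d = np.hypot(np.diff(x), np.diff(y))   # inter-sample distances
    return np.sqrt(np.mean(d ** 2))

# e.g., compare a recording from a static artificial eye with one where the
# artificial eye is worn by a human:
# print(rms_s2s(x_static, y_static), rms_s2s(x_worn, y_worn))
```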


Subjects
Eye Movements, Head Movements, Color, Data Accuracy, Eye, Artificial, Humans
4.
Behav Res Methods ; 53(5): 2049-2068, 2021 10.
Article in English | MEDLINE | ID: mdl-33754324

ABSTRACT

We present an algorithmic method for aligning recall fixations with encoding fixations, for use in looking-at-nothing paradigms that either record recall eye movements in silence or aim to speed up the analysis of recall data recorded during speech. The algorithm utilizes a novel consensus-based elastic matching procedure to estimate which encoding fixations correspond to later recall fixations. This is not a scanpath comparison method, as fixation sequence order is ignored and only position configurations are used. The algorithm has three internal parameters and is reasonably stable over a wide range of parameter values. We then evaluate the performance of our algorithm by investigating whether the recalled objects identified by the algorithm correspond to independent assessments of which objects in the image are subjectively important. Our results show that the mapped recall fixations align well with important regions of the images. This result is exemplified in four groups of use cases: investigating the roles of low-level visual features, faces, signs and text, and people of different sizes in the recall of encoded scenes. The plots from these examples corroborate the finding that the algorithm aligns recall fixations with the most likely important regions in the images. The examples also illustrate how the algorithm can differentiate between image objects that were fixated during silent recall and those that were not visually attended, even though they were fixated during encoding.
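
The consensus-based elastic matching itself is not reproduced here, but the core idea of mapping recall fixations to encoding fixations by position alone, ignoring order, can be sketched with a simple nearest-neighbour assignment (a simplification for illustration only, not the published algorithm; fixations are assumed to be N×2 coordinate arrays):

```python
import numpy as np

def map_recall_to_encoding(recall_fix, encoding_fix):
    """For each recall fixation, return the index of the nearest encoding
    fixation by Euclidean distance. Order is ignored; only positions are used.
    Note: this greedy mapping is a stand-in, not consensus-based elastic matching."""
    d = np.linalg.norm(recall_fix[:, None, :] - encoding_fix[None, :, :], axis=2)
    return d.argmin(axis=1)   # one encoding-fixation index per recall fixation
```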


Subjects
Fixation, Ocular, Mental Recall, Algorithms, Consensus, Eye Movements, Humans
5.
Behav Res Methods ; 53(1): 311-324, 2021 02.
Article in English | MEDLINE | ID: mdl-32705655

ABSTRACT

Eye trackers are sometimes used to study miniature eye movements, such as drift, that occur while observers fixate a static location on a screen. Specifically, such eye-tracking data can be analyzed by examining the temporal spectrum composition of the recorded gaze position signal, which allows its color to be assessed. However, not only rotations of the eyeball but also filters in the eye tracker may affect the signal's spectral color. Here, we therefore ask whether colored, as opposed to white, signal dynamics in eye-tracking recordings reflect fixational eye movements, or whether they are instead largely due to filters. We recorded gaze position data with five eye trackers from four pairs of human eyes performing fixation sequences, and also from artificial eyes. We examined the spectral color of the gaze position signals produced by the eye trackers, both with their filters switched on and for unfiltered data. We found that while filtered data recorded from both human and artificial eyes were colored for all eye trackers, for most eye trackers the signal was white when examining both unfiltered human and unfiltered artificial eye data. These results suggest that the color in the eye-movement recordings was due to filters for all eye trackers except the most precise one, where it may partly reflect fixational eye movements. As such, researchers studying fixational eye movements should examine the properties of the filters in their eye tracker to ensure they are studying eyeball rotation and not filter properties.
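
A common way to assess whether such a signal is white or coloured is to fit a slope to its power spectral density on log-log axes: a slope near 0 indicates white noise, while increasingly negative slopes indicate coloured (e.g., pink or brown) dynamics. A hedged sketch using SciPy's Welch estimator (parameter choices are illustrative and not taken from the paper):

```python
import numpy as np
from scipy.signal import welch

def psd_slope(signal, fs, nperseg=1024):
    """Estimate the log-log slope of the power spectral density of a 1-D
    position signal. Roughly: ~0 -> white, ~-1 -> pink, ~-2 -> brown/red."""
    f, pxx = welch(signal, fs=fs, nperseg=nperseg)
    keep = f > 0                                  # drop the DC bin before taking logs
    slope, _intercept = np.polyfit(np.log10(f[keep]), np.log10(pxx[keep]), 1)
    return slope
```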


Subjects
Eye Movements, Eye-Tracking Technology, Color, Eye, Artificial, Fixation, Ocular, Humans, Research Personnel
6.
Behav Res Methods ; 52(5): 2098-2121, 2020 10.
Article in English | MEDLINE | ID: mdl-32206998

ABSTRACT

For evaluating whether an eye-tracker is suitable for measuring microsaccades, Poletti & Rucci (2016) propose that a measure called 'resolution' could be better than the more established root-mean-square of the sample-to-sample distances (RMS-S2S). Many open questions remain around the resolution measure, however. Resolution needs to be calculated using data from an artificial eye that can be turned in very small steps. Furthermore, resolution has an unclear and uninvestigated relationship to the RMS-S2S and STD (standard deviation) measures of precision (Holmqvist & Andersson, 2017, pp. 159-190), and there is another metric with the same name (Clarke, Ditterich, Drüen, Schönfeld, & Steineke, 2002), which instead quantifies the errors of amplitude measurements. In this paper, we present a mechanism, the Stepperbox, for rotating artificial eyes in arbitrary angles from 1' (arcmin) upward. We then use the Stepperbox to find the minimum reliably detectable rotations in 11 video-based eye-trackers (VOGs) and the Dual Purkinje Imaging (DPI) tracker. We find that resolution correlates significantly with RMS-S2S and, to a lesser extent, with STD. In addition, we find that although most eye-trackers can detect some small rotations of an artificial eye, rotations with amplitudes of up to 2° are frequently measured erroneously by video-based eye-trackers. We show evidence that the corneal reflection (CR) feature of these eye-trackers is a major cause of erroneous measurements of small rotations of artificial eyes. Our data strengthen the existing body of evidence that video-based eye-trackers produce errors that may require us to reconsider some results from research on reading, microsaccades, and vergence, where the amplitudes of small eye movements have been measured with past or current video-based eye-trackers. In contrast, the DPI reports correct rotation amplitudes down to 1'.
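
Whether a commanded rotation of an artificial eye is reliably detected can be checked by comparing the recorded position before and after the step against the recording's own noise. A minimal sketch, assuming the step time is known; the names, window length and detection criterion are illustrative and are not the paper's resolution definition:

```python
import numpy as np

def measured_step(x, step_idx, win=200):
    """Measured amplitude of a commanded step: difference of median positions
    in windows before and after the step (robust to single-sample noise)."""
    before = np.median(x[step_idx - win:step_idx])
    after = np.median(x[step_idx:step_idx + win])
    return after - before

def step_detected(x, step_idx, win=200, k=3.0):
    """Call the step detected if its measured amplitude exceeds k times the
    pre-step sample-to-sample RMS noise (an illustrative criterion)."""
    noise = np.sqrt(np.mean(np.diff(x[step_idx - win:step_idx]) ** 2))
    return abs(measured_step(x, step_idx, win)) > k * noise
```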


Subjects
Eye Movements, Eye, Artificial, Eye-Tracking Technology, Video Recording, Data Collection, Humans
7.
Behav Res Methods ; 52(6): 2515-2534, 2020 12.
Article in English | MEDLINE | ID: mdl-32472501

ABSTRACT

The magnitude of variation in the gaze position signals recorded by an eye tracker, also known as its precision, is an important aspect of an eye tracker's data quality. However, data quality of eye-tracking signals is still poorly understood. In this paper, we therefore investigate the following: (1) How do the various available measures characterizing eye-tracking data during fixation relate to each other? (2) How are they influenced by signal type? (3) What type of noise should be used to augment eye-tracking data when evaluating eye-movement analysis methods? To support our analysis, this paper presents new measures to characterize signal type and signal magnitude based on RMS-S2S and STD, two established measures of precision. Simulations are performed to investigate how each of these measures depends on the number of gaze position samples over which they are calculated, and to reveal how RMS-S2S and STD relate to each other and to measures characterizing the temporal spectrum composition of the recorded gaze position signal. Further empirical investigations were performed using gaze position data recorded with five eye trackers from human and artificial eyes. We found that although the examined eye trackers produce gaze position signals with different characteristics, the relations between precision measures derived from simulations are borne out by the data. We furthermore conclude that data with a range of signal type values should be used to assess the robustness of eye-movement analysis methods. We present a method for generating artificial eye-tracker noise of any signal type and magnitude.
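
The paper's own noise generator is not reproduced here, but the general recipe for producing noise with a chosen spectral slope and RMS magnitude, shaping white noise in the frequency domain and rescaling, can be sketched as follows (function name and parameters are illustrative):

```python
import numpy as np

def colored_noise(n, slope=-1.0, rms=0.05, seed=None):
    """Generate n samples of noise whose power spectrum falls off roughly as
    f**slope (0 = white, -1 = pink, -2 = brown), scaled to a target standard
    deviation `rms` (e.g., in degrees). Illustrative recipe only."""
    rng = np.random.default_rng(seed)
    white = rng.standard_normal(n)
    spec = np.fft.rfft(white)
    f = np.fft.rfftfreq(n)
    f[0] = f[1]                          # avoid division by zero at DC
    spec *= f ** (slope / 2.0)           # power ~ f**slope => amplitude ~ f**(slope/2)
    noise = np.fft.irfft(spec, n)
    return noise * (rms / noise.std())

# usage: pink-ish noise to superimpose on a synthetic fixation signal
# extra = colored_noise(10_000, slope=-1.0, rms=0.03, seed=1)
```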


Subjects
Eye Movements, Eye-Tracking Technology, Data Accuracy, Data Collection, Eye, Artificial, Fixation, Ocular, Humans
8.
Behav Res Methods ; 51(2): 840-864, 2019 04.
Article in English | MEDLINE | ID: mdl-30334148

ABSTRACT

Existing event detection algorithms for eye-movement data almost exclusively rely on thresholding one or more hand-crafted signal features, each computed from the stream of raw gaze data, and this thresholding is largely left to the end user. Here we present gazeNet, a new framework for creating event detectors that require neither hand-crafted signal features nor signal thresholding. It employs an end-to-end deep learning approach, which takes raw eye-tracking data as input and classifies it into fixations, saccades and post-saccadic oscillations. Our method thereby challenges the established tacit assumption that hand-crafted features are necessary in the design of event detection algorithms. The downside of the deep learning approach is that a large amount of training data is required. We therefore first develop a method to augment hand-coded data, so that we can greatly enlarge the data set used for training while minimizing the time spent on manual coding. Using this extended hand-coded data, we train a neural network that produces eye-movement event classifications from raw eye-movement data without requiring any predefined feature extraction or post-processing steps. The resulting classification performance is at the level of expert human coders. Moreover, an evaluation of gazeNet on two other datasets showed that gazeNet generalized to data from different eye trackers and consistently outperformed several other event detection algorithms that we tested.
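
gazeNet itself is not reproduced here, but the general shape of an end-to-end sample-level classifier, raw gaze signal in, per-sample fixation/saccade/PSO scores out, can be sketched with a small 1-D convolutional network in PyTorch (architecture, layer sizes and input encoding are illustrative assumptions, not gazeNet's design):

```python
import torch
import torch.nn as nn

class SampleEventNet(nn.Module):
    """Toy end-to-end per-sample classifier: raw gaze differences in,
    per-sample class scores (fixation / saccade / PSO) out.
    An illustrative stand-in, not gazeNet."""
    def __init__(self, in_channels=2, n_classes=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=11, padding=5),
            nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=11, padding=5),
            nn.ReLU(),
            nn.Conv1d(32, n_classes, kernel_size=1),   # per-sample logits
        )

    def forward(self, x):            # x: (batch, 2, n_samples) of x/y sample-to-sample diffs
        return self.net(x)           # (batch, n_classes, n_samples)

# training would minimize nn.CrossEntropyLoss over per-sample labels,
# e.g. loss = nn.CrossEntropyLoss()(model(batch_x), batch_labels)
```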


Subjects
Behavioral Research/methods, Eye Movements, Neural Networks, Computer, Algorithms, Humans, Saccades, Task Performance and Analysis
9.
Behav Res Methods ; 51(1): 451-452, 2019 02.
Article in English | MEDLINE | ID: mdl-30251005

ABSTRACT

It has come to our attention that the section "Post-processing: Labeling final events" on page 167 of "Using Machine Learning to Detect Events in Eye-Tracking Data" (Zemblys, Niehorster, Komogortsev, & Holmqvist, 2018) contains an erroneous description of the process by which post-processing was performed.

11.
Behav Res Methods ; 50(4): 1645-1656, 2018 08.
Article in English | MEDLINE | ID: mdl-29218588

ABSTRACT

Eyetracking research in psychology has grown exponentially over the past decades, as equipment has become cheaper and easier to use. The surge in eyetracking research has not, however, been equaled by a growth in methodological awareness, and practices that are best avoided have become commonplace. We describe nine threats to the validity of eyetracking research and provide, whenever possible, advice on how to avoid or mitigate these challenges. These threats concern both internal and external validity and relate to the design of eyetracking studies, to data preprocessing, to data analysis, and to the interpretation of eyetracking data.


Subjects
Behavioral Research/methods, Eye Movement Measurements/standards, Eye Movements, Eye Movement Measurements/instrumentation, Humans, Reproducibility of Results, Research Design, Software
12.
Behav Res Methods ; 50(1): 160-181, 2018 02.
Article in English | MEDLINE | ID: mdl-28233250

ABSTRACT

Event detection is a challenging stage in eye-movement data analysis. A major drawback of current event detection methods is that parameters have to be adjusted to the quality of the eye-movement data. Here we show that a fully automated classification of raw gaze samples as belonging to fixations, saccades, or other oculomotor events can be achieved using a machine-learning approach. Any events that have already been detected manually or algorithmically can be used to train a classifier to produce a similar classification of other data, without the need for a user to set parameters. In this study, we explore the application of the random forest machine-learning technique to the detection of fixations, saccades, and post-saccadic oscillations (PSOs). To show the practical utility of the proposed method for applications that employ eye-movement classification algorithms, we provide an example in which the method is used in an eye-movement-driven biometric application. We conclude that machine-learning techniques lead to superior detection compared to current state-of-the-art event detection algorithms and can reach the performance of manual coding.
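
Though not the authors' implementation, the general approach, training a random forest on per-sample features derived from already-labeled data, can be sketched with scikit-learn (the feature set here, speed and acceleration only, is an illustrative placeholder for the richer published feature set):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def sample_features(x, y, fs):
    """Per-sample features: gaze speed and acceleration (units/s, units/s^2).
    A deliberately minimal feature set for illustration."""
    vx, vy = np.gradient(x) * fs, np.gradient(y) * fs
    speed = np.hypot(vx, vy)
    accel = np.gradient(speed) * fs
    return np.column_stack([speed, accel])

# labels: per-sample hand-coded classes, e.g. 0=fixation, 1=saccade, 2=PSO
# clf = RandomForestClassifier(n_estimators=200).fit(sample_features(x, y, fs), labels)
# predicted = clf.predict(sample_features(x_new, y_new, fs))
```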


Subjects
Eye Movements/physiology, Machine Learning, Algorithms, Behavioral Research, Biometry/instrumentation, Biometry/methods, Humans, Task Performance and Analysis
13.
Behav Res Methods ; 50(1): 213-227, 2018 02.
Article in English | MEDLINE | ID: mdl-28205131

ABSTRACT

The marketing materials of remote eye-trackers suggest that data quality is invariant to the position and orientation of the participant as long as the eyes of the participant are within the eye-tracker's headbox, the area in which tracking is possible. As such, remote eye-trackers are marketed as allowing reliable recording of gaze from participant groups that cannot be restrained, such as infants, schoolchildren and patients with muscular or brain disorders. Practical experience and previous research, however, tell us that eye-tracking data quality, e.g., the accuracy of the recorded gaze position and the amount of data loss, deteriorates (compared to well-trained participants in chinrests) when the participant is unrestrained and assumes a non-optimal pose in front of the eye-tracker. How, then, can researchers working with unrestrained participants choose an eye-tracker? Here we investigated the performance of five popular remote eye-trackers from EyeTribe, SMI, SR Research, and Tobii in a series of tasks in which participants took on non-optimal poses. We report that the tested systems varied in the amount of data loss and the systematic offsets observed during our tasks. The EyeLink and EyeTribe in particular had large problems. Furthermore, the Tobii eye-trackers reported data for two eyes when only one eye was visible to the eye-tracker. This study provides practical insight into how popular remote eye-trackers perform when recording from unrestrained participants. It furthermore provides a testing method that researchers can use to evaluate whether a tracker is suitable for studying a certain target population, and that manufacturers can use during the development of new eye-trackers.
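
Two of the quantities compared across trackers here, the proportion of data loss and the systematic offset from a known fixation target, can be computed directly from a validation recording. A minimal sketch (function names and the assumption that lost samples are reported as NaN are illustrative):

```python
import numpy as np

def data_loss(x, y):
    """Proportion of samples for which the tracker reported no gaze position."""
    return np.mean(np.isnan(x) | np.isnan(y))

def systematic_offset(x, y, target_x, target_y):
    """Euclidean distance between the mean recorded gaze position and the
    known fixation target, i.e., the accuracy offset in the data's units."""
    return np.hypot(np.nanmean(x) - target_x, np.nanmean(y) - target_y)
```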


Subjects
Eye Movement Measurements, Patient Positioning/methods, Data Accuracy, Eye Movements, Female, Humans, Male, Orientation
14.
Behav Res Methods ; 49(1): 382-393, 2017 02.
Article in English | MEDLINE | ID: mdl-26936462

ABSTRACT

In experiments investigating dynamic tasks, it is often useful to examine eye movement scan patterns. We can present trials repeatedly and compute within-subjects/conditions similarity in order to distinguish between signal and noise in gaze data. To avoid obvious repetitions of trials, filler trials must be added to the experimental protocol, resulting in long experiments. Alternatively, trials can be modified to reduce the chance that the participant notices the repetition, while avoiding significant changes in the scan patterns. In tasks in which the stimuli can be geometrically transformed without any loss of meaning, flipping the stimuli around either of the axes is a candidate modification. In this study, we examined whether flipping stimulus object trajectories around the x- and y-axes resulted in comparable scan patterns in a multiple object tracking task. We developed two new strategies for the statistical comparison of similarity between two groups of scan patterns, and then tested those strategies on artificial data. Our results suggest that although the scan patterns in flipped trials differ significantly from those in the original trials, this difference is small (as little as a 13% increase in overall distance). Researchers could therefore use geometric transformations to test more complex hypotheses regarding scan pattern coherence while retaining the same experiment duration.
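
Flipping a trajectory around either axis is a one-line transform once the display extent is known. A minimal sketch (coordinate conventions and names are illustrative):

```python
import numpy as np

def flip(points, width, height, axis="x"):
    """Mirror a set of (x, y) points within a width x height display.
    axis='x' flips around the horizontal axis (top-bottom),
    axis='y' flips around the vertical axis (left-right)."""
    pts = np.asarray(points, dtype=float).copy()
    if axis == "x":
        pts[:, 1] = height - pts[:, 1]
    else:
        pts[:, 0] = width - pts[:, 0]
    return pts

# usage: flipped_trajectory = flip(trajectory, width=1920, height=1080, axis="y")
```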


Subjects
Eye Movement Measurements/psychology, Eye Movements/physiology, Repetition Priming, Adult, Female, Humans, Male, Photic Stimulation/methods
15.
Behav Res Methods ; 49(2): 616-637, 2017 04.
Article in English | MEDLINE | ID: mdl-27193160

ABSTRACT

Almost all eye-movement researchers use algorithms to parse raw data and detect distinct types of eye movement events, such as fixations, saccades, and pursuit, and then base their results on these. Surprisingly, these algorithms are rarely evaluated. We evaluated the classifications of ten eye-movement event detection algorithms, on data from an SMI HiSpeed 1250 system, and compared them to the manual ratings of two human experts. The evaluation focused on fixations, saccades, and post-saccadic oscillations, and used both event duration parameters and sample-by-sample comparisons to rank the algorithms. The resulting event durations varied substantially as a function of which algorithm was used. This evaluation differed from previous evaluations by considering a relatively large set of algorithms, multiple event types, and data from both static and dynamic stimuli. The main conclusion is that current detectors of only fixations and saccades work reasonably well for static stimuli, but barely better than chance for dynamic stimuli. Differing results across evaluation methods make it difficult to select one winner for fixation detection. For saccade detection, however, the algorithm by Larsson, Nyström and Stridh (IEEE Transactions on Biomedical Engineering, 60(9), 2484-2493, 2013) outperforms all the other algorithms on data from both static and dynamic stimuli. The data also show how improperly selected algorithms applied to dynamic data misestimate fixation and saccade properties.
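
Sample-by-sample comparisons of this kind reduce to agreement statistics between two label sequences, the algorithm's output and a human coder's. One widely used, chance-corrected statistic (not necessarily the exact metric used in the paper) is Cohen's kappa, sketched here with scikit-learn; the label coding is illustrative:

```python
from sklearn.metrics import cohen_kappa_score

def sample_agreement(human_labels, algo_labels):
    """Chance-corrected per-sample agreement between a human coder's event
    labels and an algorithm's (e.g., 0=fixation, 1=saccade, 2=PSO).
    1.0 = perfect agreement; 0.0 = no better than chance."""
    return cohen_kappa_score(human_labels, algo_labels)
```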


Subjects
Algorithms, Data Collection/methods, Eye Movements/physiology, Humans
16.
Behav Res Methods ; 49(3): 947-959, 2017 06.
Article in English | MEDLINE | ID: mdl-27383751

ABSTRACT

The precision of an eye-tracker is critical to the correct identification of eye movements and their properties. To measure a system's precision, artificial eyes (AEs) are often used to exclude the influence of eye movements on the measurements. A possible issue, however, is that it is virtually impossible to construct AEs complex enough to fully represent the human eye. To examine the consequences of this limitation, we tested the AEs currently used by three manufacturers of eye-trackers and compared them to a more complex model, using 12 commercial eye-trackers. Because precision can be measured in various ways, we compared different metrics in the spatial domain and analyzed the power spectral densities in the frequency domain. To assess how precision measurements compare between artificial and human eyes, we also measured precision from human recordings on the same eye-trackers. Our results show that the modified eye model presented here can cope with all the eye-trackers tested and is a promising candidate for further development of a set of AEs with varying pupil size and pupil-iris contrast. The spectral analysis of both the AE and the human data revealed that the human eye data contain frequency components that likely reflect the physiological characteristics of human eye movements. We also report the effects of sample selection methods on precision calculations. This study is part of the EMRA/COGAIN Eye Data Quality Standardization Project.


Subjects
Eye Movements/physiology, Eye, Artificial/standards, Humans
18.
Eur Radiol ; 23(4): 997-1005, 2013 Apr.
Article in English | MEDLINE | ID: mdl-23085862

ABSTRACT

OBJECTIVES: To evaluate the efficiency of different methods of reading breast tomosynthesis (BT) image volumes. METHODS: All viewing procedures consisted of free scroll volume browsing, and three were combined with initial cine loops at three different frame rates (9, 14 and 25 fps). The presentation modes consisted of vertically and horizontally oriented BT image volumes. Fifty-five normal BT image volumes in the mediolateral oblique view were collected, and simulated lesions were inserted into them, creating four unique image sets, one for each viewing procedure. Four observers interpreted the cases in a free-response task. Time efficiency, visual attention and search were investigated using eye tracking. RESULTS: Horizontally oriented BT image volumes were read faster than vertically oriented ones, both with free scroll browsing only and when combined with the fast cine loop. Cine loops at slow frame rates were ruled out as inefficient. CONCLUSIONS: In general, horizontally oriented BT image volumes were read more efficiently. All viewing procedures except those with slow frame rates were promising, assuming equivalent detection performance.


Subjects
Breast Neoplasms/diagnostic imaging, Eye Movements, Imaging, Three-Dimensional/statistics & numerical data, Mammography/statistics & numerical data, Professional Competence/statistics & numerical data, Tomography, X-Ray Computed/statistics & numerical data, Workload/statistics & numerical data, Adult, Aged, Breast Neoplasms/epidemiology, Female, Humans, Middle Aged, Sweden/epidemiology
19.
Behav Res Methods ; 45(1): 272-88, 2013 Mar.
Article in English | MEDLINE | ID: mdl-22956394

ABSTRACT

Recording eye movement data with high quality is often a prerequisite for producing valid and replicable results and for drawing well-founded conclusions about the oculomotor system. Today, many aspects of data quality are often informally discussed among researchers but are very seldom measured, quantified, and reported. Here we systematically investigated how the calibration method, aspects of participants' eye physiology, the influences of recording time and gaze direction, and the experience of operators affect the quality of data recorded with a common tower-mounted, video-based eyetracker. We quantified accuracy, precision, and the amount of valid data, and found an increase in data quality when the participant indicated that he or she was looking at a calibration target, compared with leaving this decision to the operator or the eyetracker software. Moreover, our results provide statistical evidence of how factors such as glasses, contact lenses, eye color, eyelashes, and mascara influence data quality. This method and these results give eye movement researchers an understanding of what is required to record high-quality data, and provide manufacturers with the knowledge to build better eyetrackers.


Subjects
Eye Movement Measurements/instrumentation, Eye Movements/physiology, Models, Biological, Models, Statistical, Adult, Audiovisual Aids, Calibration, Contact Lenses, Eyeglasses, Fixation, Ocular/physiology, Humans, Patient Participation/methods, Research Design, Saccades/physiology, Young Adult
20.
Behav Res Methods ; 44(4): 1079-100, 2012 Dec.
Article in English | MEDLINE | ID: mdl-22648695

ABSTRACT

Eye movement sequences, or scanpaths, vary depending on stimulus characteristics and task (Foulsham & Underwood, Journal of Vision, 8(2), 6:1-17, 2008; Land, Mennie, & Rusted, Perception, 28, 1311-1328, 1999). Common methods for comparing scanpaths, however, are limited in their ability to capture both the spatial and the temporal properties of which a scanpath consists. Here, we validated a new method for scanpath comparison based on geometric vectors, which compares scanpaths over multiple dimensions while retaining positional and sequential information (Jarodzka, Holmqvist, & Nyström, Symposium on Eye-Tracking Research and Applications, pp. 211-218, 2010). "MultiMatch" was tested in two experiments and pitted against ScanMatch (Cristino, Mathôt, Theeuwes, & Gilchrist, Behavior Research Methods, 42, 692-700, 2010), the most comprehensive adaptation of the popular Levenshtein method. In Experiment 1, we used synthetic data, demonstrating the greater sensitivity of MultiMatch to variations in spatial position. In Experiment 2, real eye movement recordings were taken from participants viewing sequences of dots, designed to elicit scanpath pairs with commonalities known to be problematic for algorithms (e.g., when one scanpath is shifted in locus or when fixations fall on either side of an AOI boundary). The results illustrate the advantages of a multidimensional approach, revealing how two scanpaths differ. For instance, if one scanpath is the reverse copy of another, the difference lies in the direction but not the positions of the fixations; if a scanpath is scaled down, the difference lies in the length of the saccadic vectors but not in the overall shape. As well as having enormous potential for any task in which consistency in eye movements is important (e.g., learning), MultiMatch is particularly relevant for "eye movements to nothing" in mental imagery and embodiment-of-cognition research, where satisfactory scanpath comparison algorithms are lacking.
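
MultiMatch itself is not reproduced here, but its core representation, a scanpath as a sequence of saccade vectors that can then be compared on several dimensions (shape, length, direction, position), can be sketched as follows. The alignment step for scanpaths of unequal length is omitted, and the names are illustrative:

```python
import numpy as np

def scanpath_vectors(fixations):
    """Represent a scanpath (N x 2 array of fixation positions) as its
    saccade vectors: differences between consecutive fixations."""
    return np.diff(np.asarray(fixations, dtype=float), axis=0)

def dimension_differences(scan_a, scan_b):
    """Per-saccade differences in length and direction between two scanpaths
    with equal numbers of fixations (simplified: MultiMatch first aligns the
    scanpaths and also compares shape and position)."""
    va, vb = scanpath_vectors(scan_a), scanpath_vectors(scan_b)
    len_diff = np.abs(np.hypot(*va.T) - np.hypot(*vb.T))
    ang = np.arctan2(va[:, 1], va[:, 0]) - np.arctan2(vb[:, 1], vb[:, 0])
    dir_diff = np.abs((ang + np.pi) % (2 * np.pi) - np.pi)   # wrap to [0, pi]
    return len_diff, dir_diff
```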


Subjects
Algorithms, Attention/physiology, Eye Movements/physiology, Models, Psychological, Adult, Cognition, Female, Humans, Imagination/physiology, Male, Memory, Episodic, Posture, Saccades/physiology