Understanding the encoding of object locations in small-scale spaces during free exploration using eye tracking.
Neuropsychologia; 184: 108565, 2023 06 06.
Article in English | MEDLINE | ID: mdl-37080425
Navigation is instrumental to daily life and is often used to encode and locate objects, such as keys in one's house. Yet, little is known about how navigation works in more ecologically valid situations such as finding objects within a room. Specifically, it is not clear how vision vs. body movements contribute differentially to spatial memory in such small-scale spaces. In the current study, participants encoded object locations by viewing them while standing (stationary condition) or by viewing them and then being guided by the experimenter while blindfolded (walking condition). They then retrieved the objects from the same or a different viewpoint, creating a 2 × 2 within-subject design. We simultaneously recorded participants' eye movements throughout the experiment using mobile eye tracking. The results showed no statistically significant differences among our four conditions (stationary, same viewpoint as encoding; stationary, different viewpoint; walking, same viewpoint; walking, different viewpoint), suggesting that in a small real-world space, vision may be sufficient to remember object locations. Eye tracking analyses revealed that object locations were better remembered next to landmarks and that participants encoded items on one wall together, suggesting the use of local wall coordinates rather than global room coordinates. A multivariate regression analysis revealed that the only significant predictor of object placement accuracy was average looking time. These results suggest that vision may be sufficient for encoding object locations in a small-scale environment and that such memories may be formed largely locally rather than globally.
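To make the design concrete, the sketch below shows one way a 2 × 2 within-subject comparison (encoding: stationary vs. walking; viewpoint: same vs. different) and a regression of placement error on average looking time could be set up. This is a minimal, hypothetical illustration on synthetic data, not the authors' actual analysis pipeline; the column names (placement_error, look_time) and the use of statsmodels are assumptions.

```python
# Hypothetical sketch of the 2x2 within-subject analysis and the looking-time
# regression described in the abstract. Synthetic data; not the study's dataset.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_subjects = 30

# One mean placement error (e.g., in cm) per subject per condition cell.
rows = []
for subj in range(n_subjects):
    look_time = rng.uniform(1.0, 4.0)  # hypothetical average looking time (s)
    for encoding in ("stationary", "walking"):
        for viewpoint in ("same", "different"):
            # Error depends on looking time only; no condition effect built in.
            error = 20 - 3 * look_time + rng.normal(0, 4)
            rows.append(dict(subject=subj, encoding=encoding,
                             viewpoint=viewpoint, look_time=look_time,
                             placement_error=error))
df = pd.DataFrame(rows)

# 2x2 repeated-measures ANOVA on placement error (encoding x viewpoint).
print(AnovaRM(df, depvar="placement_error", subject="subject",
              within=["encoding", "viewpoint"]).fit())

# Regression: does average looking time predict placement accuracy
# over and above the condition factors?
print(smf.ols("placement_error ~ look_time + C(encoding) + C(viewpoint)",
              data=df).fit().summary())
```

Under these assumptions, a pattern like the one reported (no condition effects, but a reliable looking-time coefficient) would show up as non-significant ANOVA terms alongside a significant look_time predictor in the regression output.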
Full text: 1
Database: MEDLINE
Main subject: Mental Recall / Eye-Tracking Technology
Language: English
Publication year: 2023
Document type: Article