Results 1 - 20 of 46

1.
J Vis ; 23(10): 9, 2023 09 01.
Article in English | MEDLINE | ID: mdl-37707802

ABSTRACT

Viewpoint effects on object recognition interact with object-scene consistency effects. While recognition of objects seen from "noncanonical" viewpoints (e.g., a cup from below) is typically impeded compared to processing of objects seen from canonical viewpoints (e.g., the string side of a guitar), this effect is reduced by meaningful scene context information. In the present study, we investigated whether these findings, established using photographic images, generalize to strongly noncanonical orientations of three-dimensional (3D) models of objects. Using 3D models allowed us to probe a broad range of viewpoints and empirically establish viewpoints with very strong noncanonical and canonical orientations. In Experiment 1, we presented 3D models of objects from six different viewpoints (0°, 60°, 120°, 180°, 240°, 300°) in color (1a) and in grayscale (1b) in a sequential matching task. Viewpoint had a significant effect on accuracy and response times. Based on the viewpoint effect in Experiments 1a and 1b, we empirically determined the most canonical and most noncanonical viewpoints from our set to use in Experiment 2. In Experiment 2, participants again performed a sequential matching task, but now the objects were paired with scene backgrounds that were either consistent (e.g., a cup in a kitchen) or inconsistent (e.g., a guitar in a bathroom) with the object. Viewpoint interacted significantly with scene consistency: object recognition was less affected by viewpoint when consistent scene information was provided than when inconsistent information was provided. Our results show that scene context supports object recognition even for extremely noncanonical orientations of depth-rotated 3D objects. This supports the important role object-scene processing plays in object constancy, especially under conditions of high uncertainty.


Subjects
Awareness, Recognition (Psychology), Humans, Reaction Time, Uncertainty, Visual Perception
2.
Behav Res Methods ; 53(6): 2528-2543, 2021 12.
Article in English | MEDLINE | ID: mdl-33954914

ABSTRACT

Mixed-effects models are a powerful tool for modeling fixed and random effects simultaneously, but they do not offer a feasible analytic solution for estimating the probability that a test correctly rejects the null hypothesis. Being able to estimate this probability, however, is critical for sample size planning, as power is closely linked to the reliability and replicability of empirical findings. A flexible and very intuitive alternative to analytic power solutions is simulation-based power analysis. Although various tools for conducting simulation-based power analyses for mixed-effects models are available, there is a lack of guidance on how to use them appropriately. In this tutorial, we discuss how to estimate power for mixed-effects models in different use cases: first, how to use models that were fit on available (e.g., published) data to determine sample size; second, how to determine the number of stimuli required for sufficient power; and finally, how to conduct sample size planning without available data. Our examples cover both linear and generalized linear models, and we provide code and resources for performing simulation-based power analyses on openly accessible data sets. The present work therefore helps researchers navigate sound research design when using mixed-effects models by summarizing resources, collating available knowledge, providing solutions and tools, and applying them to real-world problems of sample size planning when sophisticated analysis procedures such as mixed-effects models are used for inference.
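
The tutorial's own code and recommended packages are not reproduced here. Purely as an illustration of the simulate-fit-test loop that underlies any simulation-based power analysis for mixed-effects models, the following Python sketch (using numpy, pandas, and statsmodels, with invented effect sizes and variance components) estimates power for a fixed effect in a random-intercept model.

```python
# Hypothetical sketch of a simulation-based power analysis for a linear
# mixed-effects model; effect sizes and variances are invented for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def simulate_power(n_subjects=30, n_trials=40, effect=20.0,
                   sd_subject=50.0, sd_residual=100.0,
                   n_sims=200, alpha=0.05, seed=1):
    rng = np.random.default_rng(seed)
    significant = 0
    for _ in range(n_sims):
        subject = np.repeat(np.arange(n_subjects), n_trials)
        condition = np.tile(np.repeat([0, 1], n_trials // 2), n_subjects)
        subj_intercept = rng.normal(0, sd_subject, n_subjects)[subject]
        rt = (500 + effect * condition + subj_intercept
              + rng.normal(0, sd_residual, n_subjects * n_trials))
        df = pd.DataFrame({"rt": rt, "condition": condition, "subject": subject})
        fit = smf.mixedlm("rt ~ condition", df, groups=df["subject"]).fit()
        if fit.pvalues["condition"] < alpha:
            significant += 1
    return significant / n_sims  # proportion of significant simulations = power

print(simulate_power())
```

In practice, such a loop would be rerun for different numbers of participants or stimuli (and with more simulations) until the estimated power reaches the desired level, e.g., 0.80.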


Subjects
Reproducibility of Results, Computer Simulation, Humans, Linear Models, Sample Size
3.
J Vis ; 18(13): 11, 2018 12 03.
Article in English | MEDLINE | ID: mdl-30561493

ABSTRACT

The arrangement of the contents of real-world scenes follows certain spatial rules that allow for extremely efficient visual exploration. What remains underexplored is the role different types of objects hold in a scene. In the current work, we seek to unveil an important building block of scenes: anchor objects. Anchors hold specific spatial predictions regarding the likely position of other objects in an environment. In a series of three eye-tracking experiments, we tested what role anchor objects play during visual search. In all of the experiments, participants searched through scenes for an object that was cued at the beginning of each trial. Critically, in half of the scenes a target-relevant anchor was swapped for an irrelevant, albeit semantically consistent, object. We found that relevant anchor objects can guide visual search, leading to faster reaction times, less scene coverage, and less time between fixating the anchor and the target. The choice of anchor objects was confirmed through an independent large image database, which allowed us to identify key attributes of anchors. Anchor objects seem to play a unique role in the spatial layout of scenes and need to be considered for understanding the efficiency of visual search in realistic stimuli.
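
The abstract reports three search measures: reaction time, scene coverage, and the time between fixating the anchor and the target. As a rough sketch only (the data format, grid size, and labels below are assumptions, not the authors' pipeline), these measures could be computed from a per-trial fixation log roughly as follows.

```python
# Hypothetical fixation-log analysis; field names and parameters are assumptions.
import numpy as np

def trial_measures(fixations, response_time, scene_size=(1024, 768),
                   grid=(32, 24), anchor_label="anchor", target_label="target"):
    """fixations: list of dicts with keys x, y, onset (ms), label."""
    # Scene coverage: fraction of grid cells containing at least one fixation.
    visited = set()
    for f in fixations:
        cx = int(f["x"] / scene_size[0] * grid[0])
        cy = int(f["y"] / scene_size[1] * grid[1])
        visited.add((min(cx, grid[0] - 1), min(cy, grid[1] - 1)))
    coverage = len(visited) / (grid[0] * grid[1])

    # Time between first fixating the anchor and first fixating the target.
    first = {}
    for f in fixations:
        first.setdefault(f["label"], f["onset"])
    anchor_to_target = (first.get(target_label, np.nan)
                        - first.get(anchor_label, np.nan))

    return {"rt": response_time,
            "coverage": coverage,
            "anchor_to_target": anchor_to_target}
```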


Subjects
Eye Movements/physiology, Ocular Fixation/physiology, Visual Pattern Recognition/physiology, Reaction Time/physiology, Visual Perception/physiology, Adult, Attention, Cues (Psychology), Female, Humans, Male, Photic Stimulation, Semantics, Young Adult
4.
J Vis ; 17(12): 2, 2017 10 01.
Article in English | MEDLINE | ID: mdl-28973112

ABSTRACT

People know surprisingly little about their own visual behavior, which can be problematic when learning or executing complex visual tasks such as search of medical images. We investigated whether providing observers with online information about their eye position during search would help them recall their own fixations immediately afterwards. Seventeen observers searched for various objects in "Where's Waldo" images for 3 s. On two-thirds of trials, observers made target present/absent responses. On the other third (critical trials), they were asked to click on twelve locations in the scene where they thought they had just fixated. On half of the trials, a gaze-contingent window showed observers their current eye position as a 7.5° diameter "spotlight." The spotlight "illuminated" everything fixated, while the rest of the display remained visible but dimmer. Performance was quantified as the overlap between circles centered on the actual fixations and circles centered on the reported fixations. Replicating prior work, this overlap was quite low (26%), far from ceiling (66%) and quite close to chance performance (21%). Performance was only slightly better in the spotlight condition (28%, p = 0.03). Giving observers information about their fixation locations by dimming the periphery improved memory for those fixations modestly, at best.
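
The abstract quantifies performance as the overlap between circles centered on the actual and the reported fixations but does not give the exact formula. The sketch below is one plausible implementation using an intersection-over-union of rasterized circles; the radius, display size, and the IoU choice are assumptions.

```python
# Illustrative (not the authors') overlap measure between circles centered on
# actual fixations and circles centered on reported fixations.
import numpy as np

def fixation_overlap(actual, reported, radius=50, size=(800, 600)):
    """actual, reported: arrays of (x, y) pixel coordinates."""
    yy, xx = np.mgrid[0:size[1], 0:size[0]]

    def mask(points):
        m = np.zeros(size[::-1], dtype=bool)
        for x, y in points:
            m |= (xx - x) ** 2 + (yy - y) ** 2 <= radius ** 2
        return m

    a, r = mask(actual), mask(reported)
    return (a & r).sum() / (a | r).sum()   # intersection over union

rng = np.random.default_rng(0)
actual = rng.uniform((0, 0), (800, 600), size=(12, 2))    # stand-in fixations
reported = rng.uniform((0, 0), (800, 600), size=(12, 2))  # stand-in reports
print(fixation_overlap(actual, reported))
```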


Subjects
Eye Movements/physiology, Ocular Fixation/physiology, Learning/physiology, Memory/physiology, Mental Recall/physiology, Visual Pattern Recognition/physiology, Adult, Female, Humans, Male, Photic Stimulation/methods
5.
J Vis ; 14(8): 10, 2014 Jul 11.
Article in English | MEDLINE | ID: mdl-25015385

ABSTRACT

Memorizing critical objects and their locations is an essential part of everyday life. In the present study, incidental encoding of objects in naturalistic scenes during search was compared to explicit memorization of those scenes. To investigate whether prior knowledge of scene structure influences these two types of encoding differently, we used meaningless arrays of objects as well as objects in real-world, semantically meaningful images. Surprisingly, when participants were asked to recall scenes, their memory performance was markedly better for searched objects than for objects they had explicitly tried to memorize, even though participants in the search condition were not explicitly asked to memorize objects. This finding held true even when objects were observed for an equal amount of time in both conditions. Critically, the recall benefit for searched over memorized objects in scenes was eliminated when objects were presented on uniform, non-scene backgrounds rather than in a full scene context. Thus, scene semantics not only help us search for objects in naturalistic scenes, but also appear to produce a representation that supports our memory for those objects beyond intentional memorization.


Subjects
Long-Term Memory/physiology, Mental Recall/physiology, Visual Pattern Recognition/physiology, Visual Perception/physiology, Adult, Color Vision/physiology, Eye Movements/physiology, Female, Humans, Male, Semantics, Visual Acuity/physiology
6.
Commun Psychol ; 2(1): 68, 2024 Jul 26.
Article in English | MEDLINE | ID: mdl-39242968

ABSTRACT

Our visual surroundings are highly complex. Despite this, we understand and navigate them effortlessly. This requires transforming incoming sensory information into representations that not only span low- to high-level visual features (e.g., edges, object parts, objects), but likely also reflect co-occurrence statistics of objects in real-world scenes. Here, so-called anchor objects are defined as being highly predictive of the location and identity of frequently co-occurring (usually smaller) objects, as derived from object clustering statistics in real-world scenes, while so-called diagnostic objects are predictive of the larger semantic context (i.e., scene category). Across two studies (N1 = 50, N2 = 44), we investigate which of these properties underlie scene understanding across two dimensions - realism and categorisation - using scenes generated by Generative Adversarial Networks (GANs), which naturally vary along these dimensions. We show that anchor objects and mainly high-level features extracted from a range of pre-trained deep neural networks (DNNs) drove realism both at first glance and after initial processing. Categorisation performance was mainly determined by diagnostic objects, regardless of realism, both at first glance and after initial processing. Our results are testament to the visual system's ability to pick up on reliable, category-specific sources of information that are robust to disturbances across the visual feature hierarchy.
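
Which networks and layers supplied the high-level DNN features is not specified in the abstract. As an illustration only, the sketch below extracts penultimate-layer features from a pre-trained ResNet-50 via torchvision; such feature vectors could then be related to realism judgements, e.g., with a regression model.

```python
# Hypothetical feature-extraction sketch; the paper's actual networks, layers,
# and analyses are not specified in the abstract.
import torch
from PIL import Image
from torchvision.models import resnet50, ResNet50_Weights

weights = ResNet50_Weights.DEFAULT
model = resnet50(weights=weights)
model.fc = torch.nn.Identity()        # drop the classifier to expose 2048-d features
model.eval()
preprocess = weights.transforms()

def scene_features(path):
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return model(img).squeeze(0)  # high-level feature vector for one scene

# features = torch.stack([scene_features(p) for p in scene_paths])
# ...then, e.g., regress observed realism ratings on these features.
```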

7.
Sci Rep ; 14(1): 15549, 2024 07 05.
Article in English | MEDLINE | ID: mdl-38969745

ABSTRACT

Interacting with objects in our environment requires determining their locations, often with respect to surrounding objects (i.e., allocentrically). According to the scene grammar framework, these usually small, local objects are movable within a scene and represent the lowest level of a scene's hierarchy. How do higher hierarchical levels of scene grammar influence allocentric coding for memory-guided actions? Here, we focused on the effect of large, immovable objects (anchors) on the encoding of local object positions. In a virtual reality study, participants (n = 30) viewed one of four possible scenes (two kitchens or two bathrooms), with two anchors connected by a shelf onto which three local objects (congruent with one anchor) were placed (Encoding). The scene was re-presented (Test) with (1) the local objects missing and (2) one of the anchors either shifted (Shift) or not (No Shift). Participants then saw a floating local object (target), which they grabbed and placed back on the shelf in its remembered position (Response). Eye-tracking data revealed that both local objects and anchors were fixated, with a preference for local objects. Additionally, anchors guided allocentric coding of local objects, despite being task-irrelevant. Overall, anchors implicitly influence spatial coding of local object locations for memory-guided actions within naturalistic (virtual) environments.
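
One simple way to quantify whether remembered object positions follow a shifted anchor (i.e., are coded allocentrically with respect to it) is to express the placement error along the shift axis as a proportion of the anchor displacement. The sketch below is an assumption about such an analysis, not the authors' actual code, and uses invented values.

```python
# Illustrative index of anchor-relative (allocentric) coding: if remembered
# positions follow the shifted anchor, placement errors along the shift axis
# should approach the anchor displacement. All values are hypothetical.
import numpy as np

def anchor_following_index(encoded_x, placed_x, anchor_shift_x):
    """Per-trial arrays of horizontal positions (metres) and anchor shifts."""
    error = np.asarray(placed_x) - np.asarray(encoded_x)
    shift = np.asarray(anchor_shift_x)
    moved = shift != 0
    # 0 = placements ignore the anchor shift, 1 = placements fully follow it
    return np.mean(error[moved] / shift[moved])

print(anchor_following_index([0.2, 0.5, 0.8], [0.32, 0.58, 0.80], [0.15, 0.15, 0.0]))
```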


Subjects
Semantics, Virtual Reality, Humans, Female, Male, Adult, Young Adult, Spatial Perception/physiology, Memory/physiology
8.
Commun Psychol ; 2(1): 49, 2024.
Article in English | MEDLINE | ID: mdl-38812582

ABSTRACT

Visual distraction is a ubiquitous aspect of everyday life. Studying the consequences of distraction during temporally extended tasks, however, is not tractable with traditional methods. Here we developed a virtual reality approach that segments complex behaviour into cognitive subcomponents, including encoding, visual search, working memory usage, and decision-making. Participants copied a model display by selecting objects from a resource pool and placing them into a workspace. By manipulating the distractibility of objects in the resource pool, we discovered interfering effects of distraction across the different cognitive subcomponents. We successfully traced the consequences of distraction all the way from overall task performance to the decision-making processes that gate memory usage. Distraction slowed down behaviour and increased costly body movements. Critically, distraction increased encoding demands, slowed visual search, and decreased reliance on working memory. Our findings illustrate that the effects of visual distraction during natural behaviour can be rather focal but nevertheless have cascading consequences.

9.
Psychol Sci ; 24(9): 1816-23, 2013 Sep.
Article in English | MEDLINE | ID: mdl-23842954

ABSTRACT

In sentence processing, semantic and syntactic violations elicit differential brain responses observable in event-related potentials: An N400 signals semantic violations, whereas a P600 marks inconsistent syntactic structure. Does the brain register similar distinctions in scene perception? To address this question, we presented participants with semantic inconsistencies, in which an object was incongruent with a scene's meaning, and syntactic inconsistencies, in which an object violated structural rules. We found a clear dissociation between semantic and syntactic processing: Semantic inconsistencies produced negative deflections in the N300-N400 time window, whereas mild syntactic inconsistencies elicited a late positivity resembling the P600 found for syntactic inconsistencies in sentence processing. Extreme syntactic violations, such as a hovering beer bottle defying gravity, were associated with earlier perceptual processing difficulties reflected in the N300 response, but failed to produce a P600 effect. We therefore conclude that different neural populations are active during semantic and syntactic processing of scenes, and that syntactically impossible object placements are processed in a categorically different manner than are syntactically resolvable object misplacements.


Subjects
Brain/physiology, Electroencephalography/methods, Evoked Potentials/physiology, Photic Stimulation/methods, Semantics, Vocabulary, Adult, Comprehension/physiology, Humans, Male, Young Adult
10.
Psychol Sci ; 24(9): 1848-53, 2013 Sep.
Article in English | MEDLINE | ID: mdl-23863753

ABSTRACT

Researchers have shown that people often miss the occurrence of an unexpected yet salient event if they are engaged in a different task, a phenomenon known as inattentional blindness. However, demonstrations of inattentional blindness have typically involved naive observers engaged in an unfamiliar task. What about expert searchers who have spent years honing their ability to detect small abnormalities in specific types of images? We asked 24 radiologists to perform a familiar lung-nodule detection task. A gorilla, 48 times the size of the average nodule, was inserted in the last case that was presented. Eighty-three percent of the radiologists did not see the gorilla. Eye tracking revealed that the majority of those who missed the gorilla looked directly at its location. Thus, even expert searchers, operating in their domain of expertise, are vulnerable to inattentional blindness.


Subjects
Attention/physiology, Photic Stimulation/methods, X-Ray Computed Tomography, Visual Perception/physiology, Adult, Aged, Humans, Lung/diagnostic imaging, Lung Neoplasms/diagnostic imaging, Middle Aged, Young Adult
11.
Radiographics ; 33(1): 263-74, 2013.
Article in English | MEDLINE | ID: mdl-23104971

ABSTRACT

Diagnostic accuracy for radiologists is above that expected by chance when they are exposed to a chest radiograph for only one-fifth of a second, a period too brief for more than a single voluntary eye movement. How do radiologists glean information from a first glance at an image? It is thought that this expert impression of the gestalt of an image is related to the everyday, immediate visual understanding of the gist of a scene. Several high-speed mechanisms guide our search of complex images. Guidance by basic features (such as color) requires no learning, whereas guidance by complex scene properties is learned. It is probable that both hardwired guidance by basic features and learned guidance by scene structure become part of radiologists' expertise. Search in scenes may be best explained by a two-pathway model: Object recognition is performed via a selective pathway in which candidate targets must be individually selected for recognition. A second, nonselective pathway extracts information from global or statistical information without selecting specific objects. An appreciation of the role of nonselective processing may be particularly useful for understanding what separates novice from expert radiologists and could help establish new methods of physician training based on medical image perception.


Subjects
Diagnostic Errors/prevention & control, Diagnostic Imaging, Medical Informatics Applications, Visual Perception, Clinical Competence, Eye Movements, Humans, Visual Pattern Recognition
12.
Sci Rep ; 13(1): 5912, 2023 04 11.
Article in English | MEDLINE | ID: mdl-37041222

ABSTRACT

It usually only takes a single glance to categorize our environment into different scene categories (e.g. a kitchen or a highway). Object information has been suggested to play a crucial role in this process, and some proposals even claim that the recognition of a single object can be sufficient to categorize the scene around it. Here, we tested this claim in four behavioural experiments by having participants categorize real-world scene photographs that were reduced to a single, cut-out object. We show that single objects can indeed be sufficient for correct scene categorization and that scene category information can be extracted within 50 ms of object presentation. Furthermore, we identified object frequency and specificity for the target scene category as the most important object properties for human scene categorization. Interestingly, despite the statistical definition of specificity and frequency, human ratings of these properties were better predictors of scene categorization behaviour than more objective statistics derived from databases of labelled real-world images. Taken together, our findings support a central role of object information during human scene categorization, showing that single objects can be indicative of a scene category if they are assumed to frequently and exclusively occur in a certain environment.
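
The abstract identifies object frequency (how often an object occurs in scenes of a category) and specificity (how exclusively it occurs in that category) as the key properties. The toy sketch below shows one way such statistics could be derived from a labelled scene database; the counts and exact definitions are assumptions made for illustration.

```python
# Illustrative computation of object "frequency" and "specificity" from a
# labelled scene database; the database and definitions are toy assumptions.
from collections import defaultdict

# Toy database: scene category -> list of object-label sets (one per image).
database = {
    "kitchen":  [{"stove", "pot"}, {"stove", "sink"}, {"pot", "kettle"}],
    "bathroom": [{"sink", "towel"}, {"towel", "bathtub"}],
}

def frequency(obj, category):
    """Proportion of images of the category that contain the object."""
    images = database[category]
    return sum(obj in img for img in images) / len(images)

def specificity(obj, category):
    """Proportion of all images containing the object that belong to the category."""
    containing = defaultdict(int)
    for cat, images in database.items():
        containing[cat] = sum(obj in img for img in images)
    total = sum(containing.values())
    return containing[category] / total if total else 0.0

print(frequency("stove", "kitchen"), specificity("stove", "kitchen"))  # 0.67 1.0
print(frequency("sink", "kitchen"), specificity("sink", "kitchen"))    # 0.33 0.5
```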


Subjects
Visual Pattern Recognition, Recognition (Psychology), Humans
13.
Policy Insights Behav Brain Sci ; 10(2): 317-323, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37900910

ABSTRACT

Extended reality (XR, including augmented and virtual reality) creates a powerful intersection between information technology and cognitive, clinical, and education sciences. XR technology has long captured the public imagination, and its development is the focus of major technology companies. This article demonstrates the potential of XR to (1) deliver behavioral insights, (2) transform clinical treatments, and (3) improve learning and education. However, without appropriate policy, funding, and infrastructural investment, many research institutions will struggle to keep pace with the advances and opportunities of XR. To realize the full potential of XR for basic and translational research, funding should incentivize (1) appropriate training, (2) open software solutions, and (3) collaborations between complementary academic and industry partners. Bolstering the XR research infrastructure with the right investments and incentives is vital for delivering on the potential for transformative discoveries, innovations, and applications.

14.
J Vis ; 12(13)2012 Dec 03.
Article in English | MEDLINE | ID: mdl-23211270

ABSTRACT

What controls gaze allocation during dynamic face perception? We monitored participants' eye movements while they watched videos featuring close-ups of pedestrians engaged in interviews. Contrary to previous findings using static displays, we observed no general preference to fixate the eyes. Instead, gaze was dynamically directed to the eyes, nose, or mouth in response to the currently depicted event. Fixations to the eyes increased when a depicted face made eye contact with the camera, while fixations to the mouth increased when the face was speaking. When a face moved quickly, fixations concentrated on the nose, suggesting that it served as a spatial anchor. To better understand the influence of auditory speech during dynamic face perception, we presented participants with a second version of the same video in which the audio speech track had been removed, leaving just the background music. Removing the speech signal modulated gaze allocation by decreasing fixations to faces in general and to the mouth in particular. Since the task was simply to rate the likeability of the videos, the decrease in attention to the mouth region implies a reduction in the functional benefit of mouth fixations when speech comprehension is not required. Together, these results argue against a general prioritization of the eyes and support a more functional, information-seeking use of gaze allocation during dynamic face viewing.


Subjects
Attention/physiology, Face, Motion Perception/physiology, Movement, Visual Pattern Recognition/physiology, Adolescent, Adult, Female, Ocular Fixation, Humans, Male, Young Adult
15.
J Eye Mov Res ; 15(3)2022.
Article in English | MEDLINE | ID: mdl-37215533

ABSTRACT

Image inversion is a powerful tool for investigating cognitive mechanisms of visual perception. However, studies have mainly used inversion in paradigms presented on two-dimensional computer screens. It remains open whether the disruptive effects of inversion also hold in more naturalistic scenarios. In our study, we used scene inversion in virtual reality in combination with eye tracking to investigate the mechanisms of repeated visual search through three-dimensional immersive indoor scenes. Scene inversion affected all gaze and head measures except fixation durations and saccade amplitudes. Surprisingly, our behavioral results did not entirely follow our hypotheses: while search efficiency dropped significantly in inverted scenes, participants did not rely more on memory, as measured by search time slopes. This indicates that, despite the disruption, participants did not try to compensate for the increased difficulty by using more memory. Our study highlights the importance of investigating classical experimental paradigms in more naturalistic scenarios to advance research on everyday human behavior.
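
Search time slopes, the memory measure referenced here, describe how strongly search times decrease across repeated searches through the same scene; a flatter slope is commonly read as less reliance on memory. A minimal sketch with invented values:

```python
# Illustrative search-time slope: linear fit of search time against repetition
# number; the example search times are invented.
import numpy as np

def search_slope(search_times):
    reps = np.arange(1, len(search_times) + 1)
    slope, _intercept = np.polyfit(reps, search_times, 1)
    return slope  # change in search time per additional search through the scene

print(search_slope([4.2, 3.1, 2.8, 2.6, 2.5]))  # negative slope = speed-up
```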

16.
Sci Rep ; 11(1): 21988, 2021 11 09.
Article in English | MEDLINE | ID: mdl-34753999

ABSTRACT

While scene context is known to facilitate object recognition, little is known about which contextual "ingredients" are at the heart of this phenomenon. Here, we address the question of whether the materials that frequently occur in scenes (e.g., tiles in a bathroom) associated with specific objects (e.g., a perfume) are relevant for the processing of that object. To this end, we presented photographs of consistent and inconsistent objects (e.g., perfume vs. pinecone) superimposed on scenes (e.g., a bathroom) and on close-ups of materials (e.g., tiles). In Experiment 1, consistent objects on scenes were named more accurately than inconsistent ones, while there was only a marginal consistency effect for objects on materials. Also, we did not find any consistency effect for scrambled materials that served as a color control condition. In Experiment 2, we recorded event-related potentials and found N300/N400 responses (markers of semantic violations) for objects on inconsistent relative to consistent scenes. Critically, objects on materials triggered N300/N400 responses of similar magnitudes. Our findings show that contextual materials indeed affect object processing, even in the absence of spatial scene structure and object content, suggesting that material is one of the contextual "ingredients" driving scene context effects.

17.
Brain Sci ; 11(1)2021 Jan 04.
Article in English | MEDLINE | ID: mdl-33406655

ABSTRACT

Repeated search studies are a hallmark of the investigation of the interplay between memory and attention. Because results are usually averaged, the substantial decrease in response times that occurs between the first and second search through the same search environment is rarely discussed. This search initiation effect is often the most dramatic decrease in search times in a series of sequential searches. The nature of this initial lack of search efficiency has thus far remained unexplored. We tested the hypothesis that the activation of spatial priors leads to this search efficiency profile. Before searching repeatedly through scenes in VR, participants either (1) previewed the scene, (2) saw an interrupted preview, or (3) started searching immediately. The search initiation effect was present in the latter condition but in neither of the preview conditions. Eye movement metrics revealed that the locus of this effect lies in search guidance rather than in search initiation or decision time, and that it goes beyond effects of object learning or incidental memory. Our study suggests that upon visual processing of an environment, a process of activating spatial priors to enable orientation is initiated, which takes a toll on search time at first but, once activated, can be used to guide subsequent searches.

18.
Sci Rep ; 11(1): 14079, 2021 07 07.
Article in English | MEDLINE | ID: mdl-34234183

ABSTRACT

Human observers can quickly and accurately categorize scenes. This remarkable ability is related to the usage of information at different spatial frequencies (SFs) following a coarse-to-fine pattern: Low SFs, conveying coarse layout information, are thought to be used earlier than high SFs, representing more fine-grained information. Alternatives to this pattern have rarely been considered. Here, we probed all possible SF usage strategies randomly with high resolution in both the SF and time dimensions at two categorization levels. We show that correct basic-level categorizations of indoor scenes are linked to the sampling of relatively high SFs, whereas correct outdoor scene categorizations are predicted by an early use of high SFs and a later use of low SFs (fine-to-coarse pattern of SF usage). Superordinate-level categorizations (indoor vs. outdoor scenes) rely on lower SFs early on, followed by a shift to higher SFs and a subsequent shift back to lower SFs in late stages. In summary, our results show no consistent pattern of SF usage across tasks and only partially replicate the diagnostic SFs found in previous studies. We therefore propose that SF sampling strategies of observers differ with varying stimulus and task characteristics, thus favouring the notion of flexible SF usage.
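
The filtering implementation is not described in the abstract. As an illustration of how a scene image can be restricted to a band of spatial frequencies, the sketch below applies an FFT-based band-pass filter with a radial frequency mask; the cutoff values (in cycles per image) are placeholders.

```python
# Illustrative FFT band-pass filter for restricting a grayscale scene image to
# a given spatial-frequency band; cutoff values are placeholders.
import numpy as np

def bandpass(image, low_cpi, high_cpi):
    """image: 2-D array; low_cpi/high_cpi: cutoffs in cycles per image."""
    h, w = image.shape
    fy = np.fft.fftfreq(h) * h          # vertical frequencies, cycles per image
    fx = np.fft.fftfreq(w) * w          # horizontal frequencies, cycles per image
    radius = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)
    mask = (radius >= low_cpi) & (radius <= high_cpi)
    spectrum = np.fft.fft2(image)
    return np.real(np.fft.ifft2(spectrum * mask))

img = np.random.rand(256, 256)          # stand-in for a grayscale scene photograph
low_sf = bandpass(img, 0, 8)            # coarse layout information
high_sf = bandpass(img, 24, 128)        # fine-grained information
```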

19.
J Vis ; 10(3): 14.1-13, 2010 Mar 29.
Article in English | MEDLINE | ID: mdl-20377291

ABSTRACT

A brief glimpse of a scene is sufficient to comprehend its gist. Does information available from a brief glimpse also support further scene exploration? In five experiments, we investigated the role of initial scene processing in eye movement guidance for visual search in scenes. We used the flash-preview moving-window paradigm to separate the duration of the initial scene glimpse from subsequent search. By varying scene preview durations, we found that a 75-ms preview was sufficient to produce search benefits compared to a no-preview control. Search efficiency was further increased by inserting additional scene-target integration time before search initiation: reducing preview durations to as little as 50 ms led to search benefits only when combined with prolonged integration time. We therefore propose that both the initial scene presentation duration and the scene-target integration time are crucial for establishing contextual guidance in complex, naturalistic scenes. The present findings show that fast scene processing is not limited to activating gist. Instead, scene representations generated from a brief scene glimpse can also provide sufficient information to guide gaze during object search, as long as enough time is available to integrate the initial scene representation.


Subjects
Ocular Fixation/physiology, Form Perception/physiology, Visual Pattern Recognition/physiology, Reaction Time/physiology, Saccades/physiology, Adolescent, Adult, Female, Fovea Centralis/physiology, Humans, Male, Photic Stimulation/methods, Time Factors, Young Adult
20.
Cognition ; 196: 104147, 2020 03.
Article in English | MEDLINE | ID: mdl-32004760

ABSTRACT

We use representations and expectations formed during life-long learning to support attentional allocation and perception. In comparison to traditional laboratory investigations, real-world memory formation is usually achieved without explicit instruction and on the fly, as a by-product of natural interactions with our environment. Understanding this process and the quality of naturally formed representations is critical to understanding how memory is used to guide attention and perception. Utilizing immersive, navigable, and realistic virtual environments, we investigated incidentally generated memory representations by comparing them to memories for items that were explicitly memorized. Participants either searched for objects embedded in realistic indoor environments or explicitly memorized them for follow-up identity and location memory tests. We show for the first time that memory for the identity of naturalistic objects and their location in 3D space is higher after incidental encoding compared to explicit memorization, even though the subsequent memory tests came as a surprise to participants. Relating gaze behavior to memory performance revealed that encoding time was more predictive of subsequent memory when participants explicitly memorized an item than when they encoded it incidentally. Our results suggest that the active nature of guiding attentional allocation during proactive behavior allows for behaviorally optimal formation and utilization of representations. This highlights the importance of investigating cognition under ecologically valid conditions and shows that understanding the most natural processes for encoding and maintaining information is critical for understanding adaptive behavior.


Subjects
Goals, Motivation, Attention, Learning, Memory