Results 1 - 20 of 54
1.
ArXiv ; 2024 Apr 24.
Article in English | MEDLINE | ID: mdl-38351926

ABSTRACT

Despite advancements in artificial intelligence, object recognition models still lag behind in emulating visual information processing in human brains. Recent studies have highlighted the potential of using neural data to mimic brain processing; however, these often rely on invasive neural recordings from non-human subjects, leaving a critical gap in understanding human visual perception. Addressing this gap, we present, for the first time, 'Re(presentational)Al(ignment)net', a vision model aligned with human brain activity based on non-invasive EEG, demonstrating a significantly higher similarity to human brain representations. Our innovative image-to-brain multi-layer encoding framework advances human neural alignment by optimizing multiple model layers and enabling the model to efficiently learn and mimic the human brain's visual representational patterns across object categories and different modalities. Our findings suggest that ReAlnet represents a breakthrough in bridging the gap between artificial and human vision, paving the way for more brain-like artificial intelligence systems.

2.
Dev Sci ; : e13482, 2024 Feb 08.
Article in English | MEDLINE | ID: mdl-38332650

ABSTRACT

In adults, spatial location plays a special role in visual object processing. People are more likely to judge two sequentially presented objects as being identical when they appear in the same location compared to in different locations (a phenomenon referred to as the Spatial Congruency Bias [SCB]). However, no comparable Identity Congruency Bias (ICB) is found, suggesting an asymmetric location-identity relationship in object binding. What gives rise to this asymmetric congruency bias? This paper considered two possible hypotheses. Hypothesis 1 suggests that the asymmetric congruency bias results from an inherently special role of location in the visual system. In contrast, Hypothesis 2 suggests that the asymmetric congruency bias is a product of development, reflecting people's experience with the world. To distinguish the two hypotheses, we tested both adults' and 5-year-old children's SCB and ICB with Identity Judgment Experiments and Spatial Judgment Experiments, respectively. The study found that adults exhibited only an SCB, but no ICB. However, young children exhibited both an SCB and an ICB, suggesting a symmetric congruency bias and reciprocal influences between location and identity in early development. The results indicate that the asymmetric location-identity relationship develops as object identity's influence on location gets pruned away, while location's influence on identity is preserved, possibly due to people's gained experiences with regularities of the world. RESEARCH HIGHLIGHTS: Adults exhibit a Spatial Congruency Bias - an asymmetric location-identity relationship with location biasing their judgment of object identities, but not vice versa. The asymmetric congruency bias may result from an inherently special role of location in the visual system (Hypothesis 1) or from accumulated experiences with the world (Hypothesis 2). To distinguish the two hypotheses, the study investigated the Spatial Congruency Bias and Identity Congruency Bias in both adults and 5-year-old children. Unlike adults, who exhibited only a Spatial Congruency Bias, 5-year-old children exhibited both a Spatial Congruency Bias and an Identity Congruency Bias.

3.
J Exp Psychol Gen ; 153(4): 873-888, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38300544

ABSTRACT

Our visual systems rapidly perceive and integrate information about object identities and locations. There is long-standing debate about if and how we achieve world-centered (spatiotopic) object representations across eye movements, with many studies reporting persistent retinotopic (eye-centered) effects even for higher level object-location binding. But these studies are generally conducted in fairly static experimental contexts. Might spatiotopic object-location binding only emerge in more dynamic saccade contexts? In the present study, we investigated this using the spatial congruency bias paradigm in healthy adults. In the static (single-saccade) context, we found purely retinotopic binding, as before. However, robust spatiotopic binding emerged in the dynamic saccade context (multiple frequent saccades and saccades during stimulus presentation). We further isolated specific factors that modulate retinotopic and spatiotopic binding. Our results provide strong evidence that dynamic saccade context can trigger more stable object-location binding in ecologically relevant spatiotopic coordinates, perhaps via a more flexible brain state that accommodates improved visual stability in the dynamic world. (PsycInfo Database Record (c) 2024 APA, all rights reserved).


Subject(s)
Retina, Saccades, Adult, Humans, Eye Movements, Brain, Photic Stimulation
4.
Psychon Bull Rev ; 31(1): 223-233, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37528277

ABSTRACT

We are often bombarded with salient stimuli that capture our attention and distract us from our current goals. Decades of research have shown the robust detrimental impacts of salient distractors on search performance and, of late, in leading to altered feature perception. These feature errors can be quite extreme, and thus, undesirable. In search tasks, salient distractors can be suppressed if they appear more frequently in one location, and this learned spatial suppression can lead to reductions in the cost of distraction as measured by reaction time slowing. Can learned spatial suppression also protect against visual feature errors? To investigate this question, participants were cued to report one of four briefly presented colored squares on a color wheel. On two-thirds of trials, a salient distractor appeared around one of the nontarget squares, appearing more frequently in one location over the course of the experiment. Participants' responses were fit to a model estimating performance parameters and compared across conditions. Our results showed that general performance (guessing and precision) improved when the salient distractor appeared in a likely location relative to elsewhere. Critically, feature swap errors (probability of misreporting the color at the salient distractor's location) were also significantly reduced when the distractor appeared in a likely location, suggesting that learned spatial suppression of a salient distractor helps protect the processing of target features. This study provides evidence that, in addition to helping us avoid salient distractors, suppression likely plays a larger role in helping to prevent distracting information from being encoded.
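The analysis described above - fitting continuous color-wheel reports to a mixture of on-target responses, swaps to a nontarget's color, and random guesses - can be sketched with a simple three-component mixture. This is a minimal illustration, not the authors' code: the function names, the von Mises/uniform mixture, and the coarse grid-search fit are assumptions of this sketch.

```python
import numpy as np

def wrap(a):
    """Wrap angular errors to (-pi, pi]."""
    return (a + np.pi) % (2 * np.pi) - np.pi

def vonmises_pdf(x, kappa):
    """Von Mises density centered at 0 (np.i0 is the modified Bessel I0)."""
    return np.exp(kappa * np.cos(x)) / (2 * np.pi * np.i0(kappa))

def fit_mixture(report, target, distractor,
                kappas=np.linspace(2, 20, 10), p_grid=np.linspace(0, 1, 21)):
    """Coarse grid search over (p_target, p_swap, kappa); p_guess = remainder.

    Mixture: p_t * vM(report - target) + p_s * vM(report - distractor)
             + p_g * uniform over the color wheel.
    """
    err_t, err_d = wrap(report - target), wrap(report - distractor)
    best_ll, best = -np.inf, None
    for k in kappas:
        dens_t, dens_d = vonmises_pdf(err_t, k), vonmises_pdf(err_d, k)
        for p_t in p_grid:
            for p_s in p_grid:
                if p_t + p_s > 1:
                    continue
                p_g = 1 - p_t - p_s
                ll = np.sum(np.log(p_t * dens_t + p_s * dens_d
                                   + p_g / (2 * np.pi) + 1e-12))
                if ll > best_ll:
                    best_ll, best = ll, (p_t, p_s, p_g, k)
    return best  # (p_target, p_swap, p_guess, kappa)
```

Comparing the fitted swap probability when the salient distractor appears at its likely location versus elsewhere would then index how strongly learned spatial suppression protects target features.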


Subject(s)
Attention, Learning, Humans, Reaction Time/physiology, Attention/physiology, Cues, Probability
5.
bioRxiv ; 2024 Mar 21.
Article in English | MEDLINE | ID: mdl-37662197

ABSTRACT

Remarkably, human brains have the ability to accurately perceive and process the real-world size of objects, despite vast differences in distance and perspective. While previous studies have delved into this phenomenon, distinguishing this ability from other visual perceptions, like depth, has been challenging. Using the THINGS EEG2 dataset with high time-resolution human brain recordings and more ecologically valid naturalistic stimuli, our study uses an innovative approach to disentangle neural representations of object real-world size from retinal size and perceived real-world depth in a way that was not previously possible. Leveraging this state-of-the-art dataset, our EEG representational similarity results reveal a pure representation of object real-world size in human brains. We report a representational timeline of visual object processing: object real-world depth appeared first, then retinal size, and finally, real-world size. Additionally, we input both these naturalistic images and object-only images without natural background into artificial neural networks. Consistent with the human EEG findings, we also successfully disentangled representation of object real-world size from retinal size and real-world depth in all three types of artificial neural networks (visual-only ResNet, visual-language CLIP, and language-only Word2Vec). Moreover, our multi-modal representational comparison framework across human EEG and artificial neural networks reveals real-world size as a stable and higher-level dimension in object space incorporating both visual and semantic information. Our research provides a detailed and clear characterization of the object processing process, which offers further advances and insights into our understanding of object space and the construction of more brain-like visual models.
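The time-resolved representational similarity analysis summarized above can be sketched as follows: build a neural representational dissimilarity matrix (RDM) from the condition-by-channel EEG patterns at each time point, then rank-correlate its upper triangle with a model RDM such as pairwise differences in real-world size. This is a minimal numpy sketch under assumed data shapes; the names are hypothetical, and fully disentangling real-world size from retinal size and depth would require comparing several model RDMs jointly (e.g., via partial correlations).

```python
import numpy as np

def neural_rdm(patterns):
    """Correlation-distance RDM (patterns: n_conditions x n_channels)."""
    return 1 - np.corrcoef(patterns)

def upper_tri(m):
    """Vectorize the upper triangle (excluding the diagonal)."""
    return m[np.triu_indices_from(m, k=1)]

def spearman(a, b):
    """Spearman correlation via rank transform (assumes no heavy ties)."""
    rank = lambda x: np.argsort(np.argsort(x)).astype(float)
    return np.corrcoef(rank(a), rank(b))[0, 1]

def rsa_timecourse(eeg, model_rdm):
    """eeg: n_conditions x n_channels x n_times.
    Returns Spearman r between neural and model RDMs at each time point."""
    return np.array([spearman(upper_tri(neural_rdm(eeg[:, :, t])),
                              upper_tri(model_rdm))
                     for t in range(eeg.shape[2])])
```

Running this with separate model RDMs for real-world size, retinal size, and depth would yield the kind of representational timeline the abstract describes.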

6.
J Exp Psychol Hum Percept Perform ; 49(5): 672-686, 2023 May.
Article in English | MEDLINE | ID: mdl-37261773

ABSTRACT

Previous studies have posited that spatial location plays a special role in object recognition. Notably, the "spatial congruency bias (SCB)" is a tendency to report objects as the same identity if they are presented at the same location, compared to different locations. Here we found that even when statistical regularities were manipulated in the opposite direction (objects in the same location were three times more likely to be different identities), subjects still exhibited a robust SCB (more likely to report them as the same identity). We replicated this finding across two preregistered experiments. Only in a third experiment where we explicitly informed subjects of the manipulation did the SCB disappear, though the lack of a significantly reversed bias suggests the ingrained congruency bias was not completely overcome. The inclusion of catch trials where the second object was completely masked further bolsters previous evidence that the congruency bias is perceptual, not simply a guessing strategy. These results reinforce the dominant role of spatial information during object recognition and present the SCB as a strong perceptual phenomenon that is incredibly hard to overcome even in the face of opposing regularities and explicit instruction. (PsycInfo Database Record (c) 2023 APA, all rights reserved).


Subject(s)
Judgment, Visual Perception, Humans
7.
J Neurophysiol ; 130(1): 139-154, 2023 07 01.
Article in English | MEDLINE | ID: mdl-37283457

ABSTRACT

Attention allows us to select relevant and ignore irrelevant information from our complex environments. What happens when attention shifts from one item to another? To answer this question, it is critical to have tools that accurately recover neural representations of both feature and location information with high temporal resolution. In the present study, we used human electroencephalography (EEG) and machine learning to explore how neural representations of object features and locations update across dynamic shifts of attention. We demonstrate that EEG can be used to create simultaneous time courses of neural representations of attended features (time point-by-time point inverted encoding model reconstructions) and attended location (time point-by-time point decoding) during both stable periods and across dynamic shifts of attention. Each trial presented two oriented gratings that flickered at the same frequency but had different orientations; participants were cued to attend one of them and on half of trials received a shift cue midtrial. We trained models on a stable period from Hold attention trials and then reconstructed/decoded the attended orientation/location at each time point on Shift attention trials. Our results showed that both feature reconstruction and location decoding dynamically track the shift of attention and that there may be time points during the shifting of attention when 1) feature and location representations become uncoupled and 2) both the previously attended and currently attended orientations are represented with roughly equal strength. The results offer insight into our understanding of attentional shifts, and the noninvasive techniques developed in the present study lend themselves well to a wide variety of future applications.

NEW & NOTEWORTHY: We used human EEG and machine learning to reconstruct neural response profiles during dynamic shifts of attention. Specifically, we demonstrated that we could simultaneously read out both location and feature information from an attended item in a multistimulus display. Moreover, we examined how that readout evolves over time during the dynamic process of attentional shifts. These results provide insight into our understanding of attention, and this technique carries substantial potential for versatile extensions and applications.
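The stable-period training and time point-by-time point readout described above can be illustrated with a deliberately simple stand-in decoder (a nearest-centroid classifier rather than the authors' actual inverted encoding model/decoder; all names and data shapes here are assumptions):

```python
import numpy as np

def train_centroids(X, y):
    """X: n_trials x n_channels (averaged over a stable training window);
    y: integer class labels (e.g., attended location)."""
    classes = np.unique(y)
    return classes, np.stack([X[y == c].mean(axis=0) for c in classes])

def decode(classes, centroids, X):
    """Assign each trial to the nearest class centroid (Euclidean)."""
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return classes[np.argmin(d, axis=1)]

def decoding_timecourse(classes, centroids, eeg, y):
    """eeg: n_trials x n_channels x n_times. Accuracy at each time point."""
    return np.array([(decode(classes, centroids, eeg[:, :, t]) == y).mean()
                     for t in range(eeg.shape[2])])
```

Training on a stable window from "hold" trials and applying `decoding_timecourse` to "shift" trials would show the decoded location switching partway through the trial.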


Subject(s)
Attention, Electroencephalography, Humans, Electroencephalography/methods, Attention/physiology, Spatial Orientation, Cues
8.
ArXiv ; 2023 Apr 21.
Article in English | MEDLINE | ID: mdl-37131879

ABSTRACT

Most models in cognitive and computational neuroscience trained on one subject do not generalize to other subjects due to individual differences. An ideal individual-to-individual neural converter is expected to generate real neural signals of one subject from those of another, which can overcome the problem of individual differences for cognitive and computational models. In this study, we propose a novel individual-to-individual EEG converter, called EEG2EEG, inspired by generative models in computer vision. We applied the THINGS EEG2 dataset to train and test 72 independent EEG2EEG models corresponding to 72 pairs across 9 subjects. Our results demonstrate that EEG2EEG is able to effectively learn the mapping of neural representations in EEG signals from one subject to another and achieve high conversion performance. Additionally, the generated EEG signals contain clearer representations of visual information than those obtained from real data. This method establishes a novel and state-of-the-art framework for neural conversion of EEG signals, which can realize a flexible and high-performance mapping from individual to individual and provide insight for both neural engineering and cognitive neuroscience.
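The core idea of an individual-to-individual converter - learning a mapping from one subject's EEG to another's - can be illustrated with a plain ridge-regression mapping. The paper's EEG2EEG models are generative networks inspired by computer vision; this numpy sketch and its names are only a stand-in for the concept.

```python
import numpy as np

def fit_linear_converter(eeg_a, eeg_b, ridge=1e-3):
    """Learn a linear map from subject A's trials to subject B's.
    eeg_a, eeg_b: n_trials x n_features (channels x times, flattened).
    Ridge-regularized least squares with an intercept column."""
    X = np.hstack([eeg_a, np.ones((eeg_a.shape[0], 1))])
    gram = X.T @ X + ridge * np.eye(X.shape[1])
    return np.linalg.solve(gram, X.T @ eeg_b)

def convert(W, eeg_a):
    """Predict subject B's signals from subject A's trials."""
    X = np.hstack([eeg_a, np.ones((eeg_a.shape[0], 1))])
    return X @ W
```

In practice one would fit on paired trials from subjects A and B viewing the same stimuli, then `convert` A's held-out trials into predicted B signals.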

9.
bioRxiv ; 2023 Jun 29.
Article in English | MEDLINE | ID: mdl-37162863

ABSTRACT

Our visual systems rapidly perceive and integrate information about object identities and locations. There is long-standing debate about how we achieve world-centered (spatiotopic) object representations across eye movements, with many studies reporting persistent retinotopic (eye-centered) effects even for higher-level object-location binding. But these studies are generally conducted in fairly static experimental contexts. Might spatiotopic object-location binding only emerge in more dynamic saccade contexts? In the present study, we investigated this using the Spatial Congruency Bias paradigm in healthy adults. In the static (single saccade) context, we found purely retinotopic binding, as before. However, robust spatiotopic binding emerged in the dynamic (multiple frequent saccades) context. We further isolated specific factors that modulate retinotopic and spatiotopic binding. Our results provide strong evidence that dynamic saccade context can trigger more stable object-location binding in ecologically-relevant spatiotopic coordinates, perhaps via a more flexible brain state which accommodates improved visual stability in the dynamic world.

10.
J Exp Psychol Hum Percept Perform ; 49(6): 802-820, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37141038

ABSTRACT

Spatial attention affects not only where we look, but also what we perceive and remember in attended and unattended locations. Previous work has shown that manipulating attention via top-down cues or bottom-up capture leads to characteristic patterns of feature errors. Here we investigated whether experience-driven attentional guidance-and probabilistic attentional guidance more generally-leads to similar feature errors. We conducted a series of pre-registered experiments employing a learned spatial probability or probabilistic pre-cue; all experiments involved reporting the color of one of four simultaneously presented stimuli using a continuous response modality. When the probabilistic cues guided attention to an invalid (nontarget) location, participants were less likely to report the target color, as expected. But strikingly, their errors tended to be clustered around a nontarget color opposite the color of the invalidly-cued nontarget. This "feature avoidance" was found for both experience-driven and top-down probabilistic cues, and appears to be the product of a strategic-but possibly subconscious-behavior, occurring when information about the features and/or feature-location bindings outside the focus of attention is limited. The findings emphasize the importance of considering how different types of attentional guidance can exert different effects on feature perception and memory reports. (PsycInfo Database Record (c) 2023 APA, all rights reserved).


Subject(s)
Cues, Mental Recall, Humans, Reaction Time/physiology, Photic Stimulation, Attention, Visual Perception/physiology
11.
J Exp Psychol Hum Percept Perform ; 49(7): 1031-1041, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37199949

ABSTRACT

Learning to ignore distractors is critical for navigating the visual world. Research has suggested that a location frequently containing a salient distractor can be suppressed. How does such suppression work? Previous studies provided evidence for proactive suppression, but methodological limitations preclude firm conclusions. We sought to overcome these limitations with a new search-probe paradigm. On search trials, participants searched for a shape oddball target while a salient color singleton distractor frequently appeared in a high-probability location. On randomly interleaved probe trials, participants discriminated the orientation of a tilted bar presented briefly at one of the search locations, allowing us to index the spatial distribution of attention at the moment the search would have begun. Results on search trials replicated previous findings: reduced attentional capture when a salient distractor appeared in the high-probability location. However, critically, probe discrimination was no different at the high-probability and low-probability locations. We increased the incentive to ignore the high-probability location in Experiment 2 and found, strikingly, that probe discrimination accuracy was greater at the high-probability location. These results suggest that the high-probability location was initially selected before being suppressed, consistent with a reactive mechanism. Overall, the accuracy probe procedure demonstrates that learned spatial suppression is not always proactive, even when response time metrics seem consistent with such an inference. (PsycInfo Database Record (c) 2023 APA, all rights reserved).


Subject(s)
Attention, Learning, Humans, Learning/physiology, Reaction Time/physiology, Attention/physiology
12.
Wiley Interdiscip Rev Cogn Sci ; 14(1): e1633, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36317275

ABSTRACT

This opinion piece is part of a collection on the topic: "What is attention?" Despite the word's place in the common vernacular, a satisfying definition for "attention" remains elusive. Part of the challenge is that there exist many different types of attention, which may or may not share common mechanisms. Here we review this literature and offer an intuitive definition that draws from aspects of prior theories and models of attention but is broad enough to recognize the various types of attention and modalities it acts upon: attention as a multi-level system of weights and balances. While the specific mechanism(s) governing the weighting/balancing may vary across levels, the fundamental role of attention is to dynamically weigh and balance all signals-both externally-generated and internally-generated-such that the highest weighted signals are selected and enhanced. Top-down, bottom-up, and experience-driven factors dynamically impact this balancing, and competition occurs both within and across multiple levels of processing. This idea of a multi-level system of weights and balances is intended to incorporate both external and internal attention and capture their myriad of constantly interacting processes. We review key findings and open questions related to external attention guidance, internal attention and working memory, and broader attentional control (e.g., ongoing competition between external stimuli and internal thoughts) within the framework of this analogy. We also speculate about the implications of failures of attention in terms of weights and balances, ranging from momentary one-off errors to clinical disorders, as well as attentional development and degradation across the lifespan. This article is categorized under: Psychology > Attention; Neuroscience > Cognition.


Subject(s)
Cognition, Memory, Short-Term, Humans
13.
J Cogn Neurosci ; 34(8): 1521-1533, 2022 07 01.
Article in English | MEDLINE | ID: mdl-35579979

ABSTRACT

Our behavioral goals shape how we process information via attentional filters that prioritize goal-relevant information, dictating both where we attend and what we attend to. When something unexpected or salient appears in the environment, it captures our spatial attention. Extensive research has focused on the spatiotemporal aspects of attentional capture, but what happens to concurrent nonspatial filters during visual distraction? Here, we demonstrate a novel, broader consequence of distraction: widespread disruption to filters that regulate category-specific object processing. We recorded fMRI while participants viewed arrays of face/house hybrid images. On distractor-absent trials, we found robust evidence for the standard signature of category-tuned attentional filtering: greater BOLD activation in fusiform face area during attend-faces blocks and in parahippocampal place area during attend-houses blocks. However, on trials where a salient distractor (white rectangle) flashed abruptly around a nontarget location, not only was spatial attention captured, but the concurrent category-tuned attentional filter was disrupted, revealing a boost in activation for the to-be-ignored category. This disruption was robust, resulting in errant processing-and early on, prioritization-of goal-inconsistent information. These findings provide a direct test of the filter disruption theory: that in addition to disrupting spatial attention, distraction also disrupts nonspatial attentional filters tuned to goal-relevant information. Moreover, these results reveal that, under certain circumstances, the filter disruption may be so profound as to induce a full reversal of the attentional control settings, which carries novel implications for both theory and real-world perception.


Subject(s)
Attention, Visual Cortex, Attention/physiology, Humans, Magnetic Resonance Imaging, Reaction Time, Visual Cortex/diagnostic imaging, Visual Cortex/physiology, Visual Perception/physiology
14.
Atten Percept Psychophys ; 83(7): 2822-2842, 2021 Oct.
Article in English | MEDLINE | ID: mdl-34435320

ABSTRACT

Attention is dynamic, constantly shifting between different locations - sometimes imperfectly. How do goal-driven expectations impact dynamic spatial attention? A previous study (Dowd & Golomb, Psychological Science, 30(3), 343-361, 2019) explored object-feature binding when covert attention needed to be either maintained at a single location or shifted from one location to another. In addition to revealing feature-binding errors during dynamic shifts of attention, this study unexpectedly found that participants sometimes made correlated errors on trials when they did not have to shift attention, mistakenly reporting the features and location of an object at a different location. The authors posited that these errors represent "spatial lapses" of attention, which are perhaps driven by the implicit sampling of other locations in anticipation of having to shift attention. To investigate whether these spatial lapses are indeed anticipatory, we conducted a series of four experiments. We first replicated the original finding of spatial lapses, and then showed that these spatial lapses were not observed in contexts where participants are not expecting to have to shift attention. We then tested contexts where the direction of attentional shifts was spatially predictable, and found that participants lapse preferentially to more likely shift locations. Finally, we found that spatial lapses do not seem to be driven by explicit knowledge of likely shift locations. Combined, these results suggest that spatial lapses of attention are induced by the implicit anticipation of making an attentional shift, providing further insight into the interplay between implicit expectations, dynamic spatial attention, and visual perception.


Subject(s)
Motivation, Space Perception, Attention, Humans, Visual Perception
15.
Annu Rev Vis Sci ; 7: 257-277, 2021 09 15.
Article in English | MEDLINE | ID: mdl-34242055

ABSTRACT

Our visual system is fundamentally retinotopic. When viewing a stable scene, each eye movement shifts object features and locations on the retina. Thus, sensory representations must be updated, or remapped, across saccades to align presaccadic and postsaccadic inputs. The earliest remapping studies focused on anticipatory, presaccadic shifts of neuronal spatial receptive fields. Over time, it has become clear that there are multiple forms of remapping and that different forms of remapping may be mediated by different neural mechanisms. This review attempts to organize the various forms of remapping into a functional taxonomy based on experimental data and ongoing debates about forward versus convergent remapping, presaccadic versus postsaccadic remapping, and spatial versus attentional remapping. We integrate findings from primate neurophysiological, human neuroimaging and behavioral, and computational modeling studies. We conclude by discussing persistent open questions related to remapping, with specific attention to binding of spatial and featural information during remapping and speculations about remapping's functional significance.


Subject(s)
Saccades, Visual Fields, Animals, Eye Movements, Photic Stimulation/methods, Retina/physiology
16.
J Exp Psychol Gen ; 150(12): 2506-2524, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34014755

ABSTRACT

How are humans capable of maintaining detailed representations of visual items in memory? When required to make fine discriminations, we sometimes implicitly differentiate memory representations away from each other to reduce interitem confusion. However, this separation of representations can inadvertently lead memories to be recalled as biased away from other memory items, a phenomenon termed repulsion bias. Using a nonretinotopically specific working memory paradigm, we found stronger repulsion bias with longer working memory delays, but only when items were actively maintained. These results suggest that (a) repulsion bias can reflect a mnemonic phenomenon, distinct from perceptually driven observations of repulsion bias; and (b) mnemonic repulsion bias is ongoing during maintenance and dependent on attention to internally maintained memory items. These results support theories of working memory where items are represented interdependently and further reveal contexts where stronger attention to working memory items during maintenance increases repulsion bias between them. (PsycInfo Database Record (c) 2022 APA, all rights reserved).


Subject(s)
Memory, Short-Term, Mental Recall, Humans, Visual Perception
17.
Psychon Bull Rev ; 28(5): 1592-1600, 2021 Oct.
Article in English | MEDLINE | ID: mdl-34027621

ABSTRACT

Given the complexity of our visual environments, a number of mechanisms help us prioritize goal-consistent visual information. When searching for a friend in a crowd, for instance, visual working memory (VWM) maintains a representation of your target (i.e., your friend's shirt) so that attention can be subsequently guided toward target-matching features. In turn, attentional filters gate access to VWM to ensure that only the most relevant information is encoded and used to guide behavior. Distracting (i.e., unexpected/salient) information, however, can also capture your attention, disrupting search. In the current study we ask: does distraction also disrupt control over the VWM filter? Although the effect of distraction on search behavior is heavily studied, we know little about its consequences for VWM. Participants performed two consecutive visual search tasks on each trial. Stimulus color was irrelevant for both search tasks, but on trials where a salient distractor appeared on Search 1, we found evidence that the color associated with this distractor was incidentally encoded into VWM, resulting in memory-driven capture on Search 2. In two different experiments we observed slower responses on Search 2 when a non-target item matched the color of the salient distractor from Search 1; this effect was specific to the color associated with salient distraction and not induced by other non-target colors from the Search 1 display. We propose a novel Filter Disruption Theory: distraction disrupts the attentional filter that controls access to VWM, resulting in the encoding of irrelevant inputs at the time of capture.


Subject(s)
Memory, Short-Term, Visual Perception, Data Collection, Humans
18.
eNeuro ; 8(2)2021.
Article in English | MEDLINE | ID: mdl-33558269

ABSTRACT

We can focus visuospatial attention by covertly attending to relevant locations, moving our eyes, or both simultaneously. How does shifting versus holding covert attention during fixation compare with maintaining covert attention across saccades? We acquired human fMRI data during a combined saccade and covert attention task. On Eyes-fixed trials, participants either held attention at the same initial location ("hold attention") or shifted attention to another location midway through the trial ("shift attention"). On Eyes-move trials, participants made a saccade midway through the trial, while maintaining attention in one of two reference frames: the "retinotopic attention" condition involved holding attention at a fixation-relative location but shifting to a different screen-centered location, whereas the "spatiotopic attention" condition involved holding attention on the same screen-centered location but shifting relative to fixation. We localized the brain network sensitive to attention shifts (shift > hold attention), and used multivoxel pattern time course (MVPTC) analyses to investigate the patterns of brain activity for spatiotopic and retinotopic attention across saccades. In the attention shift network, we found transient information about both whether covert shifts were made and whether saccades were executed. Moreover, in this network, both retinotopic and spatiotopic conditions were represented more similarly to shifting than to holding covert attention. An exploratory searchlight analysis revealed additional regions where spatiotopic was relatively more similar to shifting and retinotopic more to holding. Thus, maintaining retinotopic and spatiotopic attention across saccades may involve different types of updating that vary in similarity to covert attention "hold" and "shift" signals across different regions.


Subject(s)
Retina, Saccades, Fixation, Ocular, Histological Techniques, Humans, Magnetic Resonance Imaging, Photic Stimulation
19.
Atten Percept Psychophys ; 83(4): 1652-1672, 2021 May.
Article in English | MEDLINE | ID: mdl-33462770

ABSTRACT

Humans use regularities in the environment to facilitate learning, often without awareness or intent. How might such regularities distort long-term memory? Here, participants studied and reported the colors of objects in a long-term memory paradigm, uninformed that certain colors were sampled more frequently overall. When participants misreported an object's color, these errors were often centered around the average studied color (i.e., "Rich" color), demonstrating swap errors in long-term memory due to imposed statistical regularities. We observed such swap errors regardless of memory load, explicit knowledge, or the distance in color space between the correct color of the tested object and the Rich color. An explicit guessing strategy where participants intentionally made swap errors when uncertain could not fully account for our results. We discuss other potential sources of observed swap errors such as false memory and implicit biased guessing. Although less robust than swap errors, evidence was also observed for subtle shift errors towards or away from the Rich color dependent on the color distance between the correct color and the Rich color. Together, these findings of swap and shift errors provide converging evidence for memory distortion mechanisms induced by a reference point, bridging a gap in the literature between how attention to regularities similarly influences visual working memory and visual long-term memory.


Subject(s)
Mental Recall, Visual Perception, Color Perception, Humans, Memory, Long-Term, Memory, Short-Term
20.
Article in English | MEDLINE | ID: mdl-33090835

ABSTRACT

The "spatial congruency bias" is a behavioral phenomenon where 2 objects presented sequentially are more likely to be judged as being the same object if they are presented in the same location (Golomb, Kupitz, & Thiemann, 2014), suggesting that irrelevant spatial location information may be bound to object representations. Here, we examine whether the spatial congruency bias extends to higher-level object judgments of facial identity and expression. On each trial, 2 real-world faces were sequentially presented in variable screen locations, and subjects were asked to make same-different judgments on the facial expression (Experiments 1-2) or facial identity (Experiment 3) of the stimuli. We observed a robust spatial congruency bias for judgments of facial identity, yet a more fragile one for judgments of facial expression. Subjects were more likely to judge 2 faces as displaying the same expression if they were presented in the same location (compared to in different locations), but only when the faces shared the same identity. On the other hand, a spatial congruency bias was found when subjects made judgments on facial identity, even across faces displaying different facial expressions. These findings suggest a possible difference between the binding of facial identity and facial expression to spatial location. (PsycInfo Database Record (c) 2020 APA, all rights reserved).
