Results 1 - 20 of 54
1.
Dev Sci ; 27(4): e13482, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38332650

ABSTRACT

In adults, spatial location plays a special role in visual object processing. People are more likely to judge two sequentially presented objects as identical when they appear in the same location than in different locations (a phenomenon referred to as the Spatial Congruency Bias [SCB]). However, no comparable Identity Congruency Bias (ICB) is found, suggesting an asymmetric location-identity relationship in object binding. What gives rise to this asymmetric congruency bias? This paper considered two possible hypotheses. Hypothesis 1 suggests that the asymmetric congruency bias results from an inherently special role of location in the visual system. In contrast, Hypothesis 2 suggests that it is a product of development, reflecting people's accumulated experience with the world. To distinguish the two hypotheses, we tested both adults' and 5-year-old children's SCB and ICB with Identity Judgment Experiments and Spatial Judgment Experiments, respectively. The study found that adults exhibited only an SCB, with no ICB. However, young children exhibited both an SCB and an ICB, suggesting a symmetric congruency bias and reciprocal influences between location and identity in early development. The results indicate that the asymmetric location-identity relationship develops as object identity's influence on location is pruned away while location's influence on identity is preserved, possibly due to accumulated experience with regularities of the world. RESEARCH HIGHLIGHTS: Adults exhibit the Spatial Congruency Bias, an asymmetric location-identity relationship in which location biases their judgments of object identity, but not vice versa. The asymmetric congruency bias may result from an inherently special role of location in the visual system (Hypothesis 1) or from accumulated experience with the world (Hypothesis 2). To distinguish these hypotheses, the study investigated the Spatial Congruency Bias and Identity Congruency Bias in both adults and 5-year-old children. Unlike adults, who exhibited only the Spatial Congruency Bias, 5-year-old children exhibited both the Spatial Congruency Bias and the Identity Congruency Bias.


Subjects
Cognition , Space Perception , Visual Perception , Humans , Child, Preschool , Female , Space Perception/physiology , Cognition/physiology , Male , Visual Perception/physiology , Adult , Child Development/physiology , Judgment/physiology , Young Adult , Photic Stimulation , Bias
2.
J Neurophysiol ; 130(1): 139-154, 2023 07 01.
Article in English | MEDLINE | ID: mdl-37283457

ABSTRACT

Attention allows us to select relevant and ignore irrelevant information from our complex environments. What happens when attention shifts from one item to another? To answer this question, it is critical to have tools that accurately recover neural representations of both feature and location information with high temporal resolution. In the present study, we used human electroencephalography (EEG) and machine learning to explore how neural representations of object features and locations update across dynamic shifts of attention. We demonstrate that EEG can be used to create simultaneous time courses of neural representations of attended features (time point-by-time point inverted encoding model reconstructions) and attended location (time point-by-time point decoding) during both stable periods and across dynamic shifts of attention. Each trial presented two oriented gratings that flickered at the same frequency but had different orientations; participants were cued to attend one of them and on half of trials received a shift cue midtrial. We trained models on a stable period from Hold attention trials and then reconstructed/decoded the attended orientation/location at each time point on Shift attention trials. Our results showed that both feature reconstruction and location decoding dynamically track the shift of attention and that there may be time points during the shifting of attention when 1) feature and location representations become uncoupled and 2) both the previously attended and currently attended orientations are represented with roughly equal strength. The results offer insight into our understanding of attentional shifts, and the noninvasive techniques developed in the present study lend themselves well to a wide variety of future applications. NEW & NOTEWORTHY: We used human EEG and machine learning to reconstruct neural response profiles during dynamic shifts of attention. Specifically, we demonstrated that we could simultaneously read out both location and feature information from an attended item in a multistimulus display. Moreover, we examined how that readout evolves over time during the dynamic process of attentional shifts. These results provide insight into our understanding of attention, and this technique carries substantial potential for versatile extensions and applications.
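The inverted encoding model (IEM) reconstruction described above is, at its core, a two-step linear estimation: fit electrode weights from predicted tuning-channel responses, then invert those weights on held-out trials. A minimal sketch with synthetic data follows; the channel count, basis shape, and dimensions are illustrative assumptions, not the study's actual parameters.

```python
import numpy as np

def make_basis(n_chans=8, n_ori=180):
    """Raised-cosine tuning channels tiled over orientation space (period 180 deg)."""
    centers = np.linspace(0, n_ori, n_chans, endpoint=False)
    ori = np.arange(n_ori)
    # circular distance between each orientation and each channel center
    d = np.abs((ori[None, :] - centers[:, None] + n_ori / 2) % n_ori - n_ori / 2)
    return np.cos(np.pi * d / n_ori) ** 7          # shape (n_chans, n_ori)

def iem_train(train_data, train_ori, basis):
    """Estimate electrode weights W from training data (n_electrodes, n_trials)."""
    C = basis[:, train_ori]                        # predicted channel responses
    return train_data @ C.T @ np.linalg.inv(C @ C.T)

def iem_reconstruct(W, test_data):
    """Invert the weights to recover channel response profiles on held-out trials."""
    return np.linalg.inv(W.T @ W) @ W.T @ test_data
```

Reconstructed channel profiles should peak near the attended orientation; tracking that peak time point by time point is what yields the attentional time courses described above.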


Subjects
Attention , Electroencephalography , Humans , Electroencephalography/methods , Attention/physiology , Spatial Orientation , Cues
3.
J Cogn Neurosci ; 34(8): 1521-1533, 2022 07 01.
Article in English | MEDLINE | ID: mdl-35579979

ABSTRACT

Our behavioral goals shape how we process information via attentional filters that prioritize goal-relevant information, dictating both where we attend and what we attend to. When something unexpected or salient appears in the environment, it captures our spatial attention. Extensive research has focused on the spatiotemporal aspects of attentional capture, but what happens to concurrent nonspatial filters during visual distraction? Here, we demonstrate a novel, broader consequence of distraction: widespread disruption to filters that regulate category-specific object processing. We recorded fMRI while participants viewed arrays of face/house hybrid images. On distractor-absent trials, we found robust evidence for the standard signature of category-tuned attentional filtering: greater BOLD activation in fusiform face area during attend-faces blocks and in parahippocampal place area during attend-houses blocks. However, on trials where a salient distractor (white rectangle) flashed abruptly around a nontarget location, not only was spatial attention captured, but the concurrent category-tuned attentional filter was disrupted, revealing a boost in activation for the to-be-ignored category. This disruption was robust, resulting in errant processing-and early on, prioritization-of goal-inconsistent information. These findings provide a direct test of the filter disruption theory: that in addition to disrupting spatial attention, distraction also disrupts nonspatial attentional filters tuned to goal-relevant information. Moreover, these results reveal that, under certain circumstances, the filter disruption may be so profound as to induce a full reversal of the attentional control settings, which carries novel implications for both theory and real-world perception.


Subjects
Attention , Visual Cortex , Attention/physiology , Humans , Magnetic Resonance Imaging , Reaction Time , Visual Cortex/diagnostic imaging , Visual Cortex/physiology , Visual Perception/physiology
4.
Neuroimage ; 196: 289-301, 2019 08 01.
Article in English | MEDLINE | ID: mdl-30978498

ABSTRACT

Multiple regions in the human brain are dedicated to accomplish the feat of object recognition; yet our brains must also compute the 2D and 3D locations of the objects we encounter in order to make sense of our visual environments. A number of studies have explored how various object category-selective regions are sensitive to and have preferences for specific 2D spatial locations in addition to processing their preferred-stimulus categories, but there is no survey of how these regions respond to depth information. In a blocked functional MRI experiment, subjects viewed a series of category-specific (i.e., faces, objects, scenes) and unspecific (e.g., random moving dots) stimuli with red/green anaglyph glasses. Critically, these stimuli were presented at different depth planes such that they appeared in front of, behind, or at the same (i.e., middle) depth plane as the fixation point (Experiment 1) or simultaneously in front of and behind fixation (i.e., mixed depth; Experiment 2). Comparisons of mean response magnitudes between back, middle, and front depth planes reveal that face and object regions OFA and LOC exhibit a preference for front depths, and motion area MT+ exhibits a strong linear preference for front, followed by middle, followed by back depth planes. In contrast, scene-selective regions PPA and OPA prefer front and/or back depth planes (relative to middle). Moreover, the occipital place area demonstrates a strong preference for "mixed" depth above and beyond back alone, raising potential implications about its particular role in scene perception. Crucially, the observed depth preferences in nearly all areas were evoked irrespective of the semantic stimulus category being viewed. These results reveal that the object category-selective regions may play a role in processing or incorporating depth information that is orthogonal to their primary processing of object category information.


Subjects
Brain/physiology , Depth Perception/physiology , Pattern Recognition, Visual/physiology , Visual Cortex/physiology , Adolescent , Adult , Brain Mapping , Female , Humans , Magnetic Resonance Imaging , Male , Photic Stimulation , Young Adult
5.
Psychol Sci ; 30(3): 343-361, 2019 03.
Article in English | MEDLINE | ID: mdl-30694718

ABSTRACT

Visual object perception requires integration of multiple features; spatial attention is thought to be critical to this binding. But attention is rarely static-how does dynamic attention impact object integrity? Here, we manipulated covert spatial attention and had participants (total N = 48) reproduce multiple properties (color, orientation, location) of a target item. Object-feature binding was assessed by applying probabilistic models to the joint distribution of feature errors: Feature reports for the same object could be correlated (and thus bound together) or independent. We found that splitting attention across multiple locations degrades object integrity, whereas rapid shifts of spatial attention maintain bound objects. Moreover, we document a novel attentional phenomenon, wherein participants exhibit unintentional fluctuations-lapses of spatial attention-yet nevertheless preserve object integrity at the wrong location. These findings emphasize the importance of a single focus of spatial attention for object-feature binding, even when that focus is dynamically moving across the visual field.
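The probabilistic models applied to feature errors in this literature typically mix a von Mises "correct report" component with a uniform "guess" component. A minimal grid-search maximum-likelihood fit is sketched below on simulated data; the function names, grids, and parameter ranges are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def vonmises_pdf(x, kappa):
    # von Mises density centered on zero error; np.i0 is the I0 Bessel normalizer
    return np.exp(kappa * np.cos(x)) / (2 * np.pi * np.i0(kappa))

def fit_guess_mixture(errors):
    """Grid-search ML fit of (guess_rate, kappa) to circular errors in radians."""
    best_g, best_k, best_ll = 0.0, 1.0, -np.inf
    for kappa in np.linspace(0.5, 30, 60):
        p_vm = vonmises_pdf(errors, kappa)
        for g in np.linspace(0.0, 0.9, 46):
            # mixture likelihood: uniform guesses plus von Mises target reports
            ll = np.sum(np.log(g / (2 * np.pi) + (1 - g) * p_vm))
            if ll > best_ll:
                best_g, best_k, best_ll = g, kappa, ll
    return best_g, best_k
```

Correlations between feature errors across dimensions (did color and orientation fail together?) are what index binding; swap errors can be modeled by adding a third component centered on a nontarget feature value.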


Subjects
Attention/physiology , Space Perception/physiology , Visual Perception/physiology , Adolescent , Adult , Algorithms , Color Perception/physiology , Female , Humans , Male , Models, Statistical , Orientation , Visual Fields , Young Adult
6.
J Neurosci ; 36(16): 4434-42, 2016 Apr 20.
Article in English | MEDLINE | ID: mdl-27098688

ABSTRACT

By combining different facial muscle actions, called action units, humans can produce an extraordinarily large number of facial expressions. Computational models and studies in cognitive science and social psychology have long hypothesized that the brain needs to visually interpret these action units to understand other people's actions and intentions. Surprisingly, no studies have identified the neural basis of the visual recognition of these action units. Here, using functional magnetic resonance imaging and an innovative machine learning analysis approach, we identify a consistent and differential coding of action units in the brain. Crucially, in a brain region thought to be responsible for the processing of changeable aspects of the face, multivoxel pattern analysis could decode the presence of specific action units in an image. This coding was found to be consistent across people, facilitating the estimation of the perceived action units on participants not used to train the multivoxel decoder. Furthermore, this coding of action units was identified when participants attended to the emotion category of the facial expression, suggesting an interaction between the visual analysis of action units and emotion categorization as predicted by the computational models mentioned above. These results provide the first evidence for a representation of action units in the brain and suggest a mechanism for the analysis of large numbers of facial actions and a loss of this capacity in psychopathologies. SIGNIFICANCE STATEMENT: Computational models and studies in cognitive and social psychology propound that visual recognition of facial expressions requires an intermediate step to identify visible facial changes caused by the movement of specific facial muscles. Because facial expressions are indeed created by moving one's facial muscles, it is logical to assume that our visual system solves this inverse problem. Here, using an innovative machine learning method and neuroimaging data, we identify for the first time a brain region responsible for the recognition of actions associated with specific facial muscles. Furthermore, this representation is preserved across subjects. Our machine learning analysis does not require mapping the data to a standard brain and may serve as an alternative to hyperalignment.


Subjects
Brain/metabolism , Facial Expression , Facial Recognition/physiology , Photic Stimulation/methods , Adult , Brain Mapping/methods , Female , Humans , Magnetic Resonance Imaging/methods , Male
7.
Neuroimage ; 147: 507-516, 2017 02 15.
Article in English | MEDLINE | ID: mdl-28039760

ABSTRACT

Visual information is initially represented as 2D images on the retina, but our brains are able to transform this input to perceive our rich 3D environment. While many studies have explored 2D spatial representations or depth perception in isolation, it remains unknown if or how these processes interact in human visual cortex. Here we used functional MRI and multi-voxel pattern analysis to investigate the relationship between 2D location and position-in-depth information. We stimulated different 3D locations in a blocked design: each location was defined by horizontal, vertical, and depth position. Participants remained fixated at the center of the screen while passively viewing the peripheral stimuli with red/green anaglyph glasses. Our results revealed a widespread, systematic transition throughout visual cortex. As expected, 2D location information (horizontal and vertical) could be strongly decoded in early visual areas, with reduced decoding higher along the visual hierarchy, consistent with known changes in receptive field sizes. Critically, we found that the decoding of position-in-depth information tracked inversely with the 2D location pattern, with the magnitude of depth decoding gradually increasing from intermediate to higher visual and category regions. Representations of 2D location information became increasingly location-tolerant in later areas, where depth information was also tolerant to changes in 2D location. We propose that spatial representations gradually transition from 2D-dominant to balanced 3D (2D and depth) along the visual hierarchy.


Subjects
Brain Mapping/methods , Depth Perception/physiology , Pattern Recognition, Visual/physiology , Space Perception/physiology , Visual Cortex/physiology , Adolescent , Adult , Female , Humans , Magnetic Resonance Imaging , Male , Visual Cortex/diagnostic imaging , Young Adult
8.
Proc Natl Acad Sci U S A ; 109(5): 1796-801, 2012 Jan 31.
Article in English | MEDLINE | ID: mdl-22307648

ABSTRACT

Successful visually guided behavior requires information about spatiotopic (i.e., world-centered) locations, but how accurately is this information actually derived from initial retinotopic (i.e., eye-centered) visual input? We conducted a spatial working memory task in which subjects remembered a cued location in spatiotopic or retinotopic coordinates while making guided eye movements during the memory delay. Surprisingly, after a saccade, subjects were significantly more accurate and precise at reporting retinotopic locations than spatiotopic locations. This difference grew with each eye movement, such that spatiotopic memory continued to deteriorate, whereas retinotopic memory did not accumulate error. The loss in spatiotopic fidelity is therefore not a generic consequence of eye movements, but a direct result of converting visual information from native retinotopic coordinates. Thus, despite our conscious experience of an effortlessly stable spatiotopic world and our lifetime of practice with spatiotopic tasks, memory is actually more reliable in raw retinotopic coordinates than in ecologically relevant spatiotopic coordinates.


Subjects
Memory , Retina/physiology , Adolescent , Adult , Female , Humans , Male , Reproducibility of Results , Saccades , Young Adult
9.
Psychol Sci ; 25(5): 1067-78, 2014 May 01.
Article in English | MEDLINE | ID: mdl-24647672

ABSTRACT

When people move their eyes, the eye-centered (retinotopic) locations of objects must be updated to maintain world-centered (spatiotopic) stability. Here, we demonstrated that the attentional-updating process temporarily distorts the fundamental ability to bind object locations with their features. Subjects were simultaneously presented with four colors after a saccade-one in a precued spatiotopic target location-and were instructed to report the target's color using a color wheel. Subjects' reports were systematically shifted in color space toward the color of the distractor in the retinotopic location of the cue. Probabilistic modeling exposed both crude swapping errors and subtler feature mixing (as if the retinotopic color had blended into the spatiotopic percept). Additional experiments conducted without saccades revealed that the two types of errors stemmed from different attentional mechanisms (attention shifting vs. splitting). Feature mixing not only reflects a new perceptual phenomenon, but also provides novel insight into how attention is remapped across saccades.


Subjects
Attention/physiology , Eye Movements/physiology , Saccades/physiology , Visual Perception/physiology , Adult , Color , Female , Humans , Male , Photic Stimulation/methods , Retina/physiology , Young Adult
10.
bioRxiv ; 2024 Mar 21.
Article in English | MEDLINE | ID: mdl-37662197

ABSTRACT

Remarkably, human brains have the ability to accurately perceive and process the real-world size of objects, despite vast differences in distance and perspective. While previous studies have delved into this phenomenon, distinguishing this ability from other visual perceptions, like depth, has been challenging. Using the THINGS EEG2 dataset with high time-resolution human brain recordings and more ecologically valid naturalistic stimuli, our study uses an innovative approach to disentangle neural representations of object real-world size from retinal size and perceived real-world depth in a way that was not previously possible. Leveraging this state-of-the-art dataset, our EEG representational similarity results reveal a pure representation of object real-world size in human brains. We report a representational timeline of visual object processing: object real-world depth appeared first, then retinal size, and finally, real-world size. Additionally, we input both these naturalistic images and object-only images without natural background into artificial neural networks. Consistent with the human EEG findings, we also successfully disentangled the representation of object real-world size from retinal size and real-world depth in all three types of artificial neural networks (visual-only ResNet, visual-language CLIP, and language-only Word2Vec). Moreover, our multi-modal representational comparison framework across human EEG and artificial neural networks reveals real-world size as a stable and higher-level dimension in object space incorporating both visual and semantic information. Our research provides a detailed and clear characterization of visual object processing, offering further insight into our understanding of object space and the construction of more brain-like visual models.
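Representational similarity analysis of this kind reduces to building a representational dissimilarity matrix (RDM) per time point or model and correlating their off-diagonal entries. A minimal, illustrative version is sketched below; the shapes and distance choices are assumptions, not the dataset's actual dimensions.

```python
import numpy as np
from scipy.stats import spearmanr

def rdm(patterns):
    """Correlation-distance RDM from (n_conditions, n_features) response patterns."""
    return 1.0 - np.corrcoef(patterns)

def compare_rdms(rdm_a, rdm_b):
    """Spearman correlation of the upper triangles (the standard RSA comparison)."""
    iu = np.triu_indices_from(rdm_a, k=1)
    rho, _ = spearmanr(rdm_a[iu], rdm_b[iu])
    return rho
```

In this framework, a "real-world size" model RDM (pairwise differences in object size) would be compared against EEG RDMs at each time point, with retinal-size and depth RDMs handled separately to isolate the pure size representation.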

11.
J Exp Psychol Gen ; 153(4): 873-888, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38300544

ABSTRACT

Our visual systems rapidly perceive and integrate information about object identities and locations. There is long-standing debate about if and how we achieve world-centered (spatiotopic) object representations across eye movements, with many studies reporting persistent retinotopic (eye-centered) effects even for higher level object-location binding. But these studies are generally conducted in fairly static experimental contexts. Might spatiotopic object-location binding only emerge in more dynamic saccade contexts? In the present study, we investigated this using the spatial congruency bias paradigm in healthy adults. In the static (single-saccade) context, we found purely retinotopic binding, as before. However, robust spatiotopic binding emerged in the dynamic saccade context (multiple frequent saccades and saccades during stimulus presentation). We further isolated specific factors that modulate retinotopic and spatiotopic binding. Our results provide strong evidence that dynamic saccade context can trigger more stable object-location binding in ecologically relevant spatiotopic coordinates, perhaps via a more flexible brain state that accommodates improved visual stability in the dynamic world. (PsycInfo Database Record (c) 2024 APA, all rights reserved).


Subjects
Retina , Saccades , Adult , Humans , Eye Movements , Brain , Photic Stimulation
12.
ArXiv ; 2024 Apr 24.
Article in English | MEDLINE | ID: mdl-38351926

ABSTRACT

Despite advancements in artificial intelligence, object recognition models still lag behind in emulating visual information processing in human brains. Recent studies have highlighted the potential of using neural data to mimic brain processing; however, these often rely on invasive neural recordings from non-human subjects, leaving a critical gap in understanding human visual perception. Addressing this gap, we present, for the first time, 'Re(presentational)Al(ignment)net', a vision model aligned with human brain activity based on non-invasive EEG, demonstrating a significantly higher similarity to human brain representations. Our innovative image-to-brain multi-layer encoding framework advances human neural alignment by optimizing multiple model layers, enabling the model to efficiently learn and mimic the human brain's visual representational patterns across object categories and different modalities. Our findings suggest that ReAlnet represents a breakthrough in bridging the gap between artificial and human vision, paving the way for more brain-like artificial intelligence systems.

13.
Psychon Bull Rev ; 31(1): 223-233, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37528277

ABSTRACT

We are often bombarded with salient stimuli that capture our attention and distract us from our current goals. Decades of research have shown the robust detrimental impacts of salient distractors on search performance and, of late, in leading to altered feature perception. These feature errors can be quite extreme, and thus, undesirable. In search tasks, salient distractors can be suppressed if they appear more frequently in one location, and this learned spatial suppression can lead to reductions in the cost of distraction as measured by reaction time slowing. Can learned spatial suppression also protect against visual feature errors? To investigate this question, participants were cued to report one of four briefly presented colored squares on a color wheel. On two-thirds of trials, a salient distractor appeared around one of the nontarget squares, appearing more frequently in one location over the course of the experiment. Participants' responses were fit to a model estimating performance parameters and compared across conditions. Our results showed that general performance (guessing and precision) improved when the salient distractor appeared in a likely location relative to elsewhere. Critically, feature swap errors (probability of misreporting the color at the salient distractor's location) were also significantly reduced when the distractor appeared in a likely location, suggesting that learned spatial suppression of a salient distractor helps protect the processing of target features. This study provides evidence that, in addition to helping us avoid salient distractors, suppression likely plays a larger role in helping to prevent distracting information from being encoded.


Subjects
Attention , Learning , Humans , Reaction Time/physiology , Attention/physiology , Cues , Probability
14.
Neuroimage ; 66: 553-62, 2013 Feb 01.
Article in English | MEDLINE | ID: mdl-23108276

ABSTRACT

Attention during encoding improves later memory, but how this happens is poorly understood. To investigate the role of attention in memory formation, we combined a variant of a spatial attention cuing task with a subsequent memory fMRI design. Scene stimuli were presented in the periphery to either the left or right of fixation, preceded by a central face cue whose gaze oriented attention to the probable location of the scene. We contrasted activity for scenes appearing in cued versus uncued locations to identify: (1) regions where cuing facilitated processing, and (2) regions involved in reorienting. We then tested how activity in these facilitation and reorienting regions of interest predicted subsequent long-term memory for individual scenes. In facilitation regions such as parahippocampal cortex, greater activity during encoding predicted memory success. In reorienting regions such as right temporoparietal junction, greater activity during encoding predicted memory failure. We interpret these results as evidence that memory formation benefits from attentional facilitation of perceptual processing combined with suppression of the ventral attention network to prevent reorienting to distractors.


Subjects
Attention/physiology , Brain Mapping , Brain/physiology , Memory/physiology , Adolescent , Adult , Cues , Female , Humans , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Male , Young Adult
15.
Cereb Cortex ; 22(12): 2794-810, 2012 Dec.
Article in English | MEDLINE | ID: mdl-22190434

ABSTRACT

The crux of vision is to identify objects and determine their locations in the environment. Although initial visual representations are necessarily retinotopic (eye centered), interaction with the real world requires spatiotopic (absolute) location information. We asked whether higher level human visual cortex-important for stable object recognition and action-contains information about retinotopic and/or spatiotopic object position. Using functional magnetic resonance imaging multivariate pattern analysis techniques, we found information about both object category and object location in each of the ventral, dorsal, and early visual regions tested, replicating previous reports. By manipulating fixation position and stimulus position, we then tested whether these location representations were retinotopic or spatiotopic. Crucially, all location information was purely retinotopic. This pattern persisted when location information was irrelevant to the task, and even when spatiotopic (not retinotopic) stimulus position was explicitly emphasized. We also conducted a "searchlight" analysis across our entire scanned volume to explore additional cortex but again found predominantly retinotopic representations. The lack of explicit spatiotopic representations suggests that spatiotopic object position may instead be computed indirectly and continually reconstructed with each eye movement. Thus, despite our subjective impression that visual information is spatiotopic, even in higher level visual cortex, object location continues to be represented in retinotopic coordinates.
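The retinotopic-versus-spatiotopic test in a design like this is a cross-classification: train a position decoder in one fixation condition, then test whether it generalizes according to eye-centered or screen-centered labels. A toy nearest-centroid version, purely illustrative (the labels, dimensions, and classifier are assumptions, not the study's actual pipeline):

```python
import numpy as np

def train_centroids(patterns, labels):
    """Nearest-centroid 'decoder': mean voxel pattern per position label."""
    classes = np.unique(labels)
    return classes, np.array([patterns[labels == c].mean(axis=0) for c in classes])

def decode(patterns, classes, centroids):
    # assign each test pattern to the label of the nearest centroid
    d = np.linalg.norm(patterns[:, None, :] - centroids[None, :, :], axis=2)
    return classes[np.argmin(d, axis=1)]
```

If accuracy is high when held-out patterns are labeled by retinal position but at chance (or below) when labeled by screen position, the underlying representation is retinotopic, which is the pattern the abstract reports across visual cortex.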


Subjects
Brain Mapping , Nerve Net/physiology , Retina/physiology , Space Perception/physiology , Vision, Ocular/physiology , Visual Cortex/physiology , Adult , Female , Humans , Male , Pattern Recognition, Visual
16.
bioRxiv ; 2023 Jun 29.
Article in English | MEDLINE | ID: mdl-37162863

ABSTRACT

Our visual systems rapidly perceive and integrate information about object identities and locations. There is long-standing debate about how we achieve world-centered (spatiotopic) object representations across eye movements, with many studies reporting persistent retinotopic (eye-centered) effects even for higher-level object-location binding. But these studies are generally conducted in fairly static experimental contexts. Might spatiotopic object-location binding only emerge in more dynamic saccade contexts? In the present study, we investigated this using the Spatial Congruency Bias paradigm in healthy adults. In the static (single saccade) context, we found purely retinotopic binding, as before. However, robust spatiotopic binding emerged in the dynamic (multiple frequent saccades) context. We further isolated specific factors that modulate retinotopic and spatiotopic binding. Our results provide strong evidence that dynamic saccade context can trigger more stable object-location binding in ecologically-relevant spatiotopic coordinates, perhaps via a more flexible brain state which accommodates improved visual stability in the dynamic world.

17.
ArXiv ; 2023 Apr 21.
Article in English | MEDLINE | ID: mdl-37131879

ABSTRACT

Most models in cognitive and computational neuroscience trained on one subject do not generalize to other subjects due to individual differences. An ideal individual-to-individual neural converter is expected to generate real neural signals of one subject from those of another, which can overcome the problem of individual differences for cognitive and computational models. In this study, we propose a novel individual-to-individual EEG converter, called EEG2EEG, inspired by generative models in computer vision. We used the THINGS EEG2 dataset to train and test 72 independent EEG2EEG models corresponding to 72 pairs across 9 subjects. Our results demonstrate that EEG2EEG is able to effectively learn the mapping of neural representations in EEG signals from one subject to another and achieve high conversion performance. Additionally, the generated EEG signals contain clearer representations of visual information than can be obtained from real data. This method establishes a novel and state-of-the-art framework for neural conversion of EEG signals, which can realize a flexible and high-performance mapping from individual to individual and provide insight for both neural engineering and cognitive neuroscience.
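An individual-to-individual converter, at its simplest, is a mapping fit to paired trials from two subjects viewing the same stimuli. The paper uses generative models; as a hedged baseline sketch only (not the EEG2EEG architecture), a closed-form ridge-regularized linear map:

```python
import numpy as np

def fit_linear_converter(X_a, X_b, lam=1.0):
    """Ridge map from subject A's responses (n_trials, n_feat) to subject B's."""
    n_feat = X_a.shape[1]
    # closed-form ridge solution: W = (Xa'Xa + lam*I)^-1 Xa'Xb
    return np.linalg.solve(X_a.T @ X_a + lam * np.eye(n_feat), X_a.T @ X_b)

def convert(X_a, W):
    """Generate subject-B-like responses from subject A's held-out trials."""
    return X_a @ W
```

A generative (e.g., encoder-decoder) converter like EEG2EEG replaces this linear map with a learned nonlinear one, but the evaluation logic is the same: convert held-out trials from A and measure how well they match B's real responses.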

18.
Wiley Interdiscip Rev Cogn Sci ; 14(1): e1633, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36317275

ABSTRACT

This opinion piece is part of a collection on the topic: "What is attention?" Despite the word's place in the common vernacular, a satisfying definition for "attention" remains elusive. Part of the challenge is there exist many different types of attention, which may or may not share common mechanisms. Here we review this literature and offer an intuitive definition that draws from aspects of prior theories and models of attention but is broad enough to recognize the various types of attention and modalities it acts upon: attention as a multi-level system of weights and balances. While the specific mechanism(s) governing the weighting/balancing may vary across levels, the fundamental role of attention is to dynamically weigh and balance all signals-both externally-generated and internally-generated-such that the highest weighted signals are selected and enhanced. Top-down, bottom-up, and experience-driven factors dynamically impact this balancing, and competition occurs both within and across multiple levels of processing. This idea of a multi-level system of weights and balances is intended to incorporate both external and internal attention and capture their myriad of constantly interacting processes. We review key findings and open questions related to external attention guidance, internal attention and working memory, and broader attentional control (e.g., ongoing competition between external stimuli and internal thoughts) within the framework of this analogy. We also speculate about the implications of failures of attention in terms of weights and balances, ranging from momentary one-off errors to clinical disorders, as well as attentional development and degradation across the lifespan. This article is categorized under: Psychology > Attention Neuroscience > Cognition.


Subjects
Cognition, Short-Term Memory, Humans
19.
J Exp Psychol Hum Percept Perform ; 49(6): 802-820, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37141038

ABSTRACT

Spatial attention affects not only where we look, but also what we perceive and remember in attended and unattended locations. Previous work has shown that manipulating attention via top-down cues or bottom-up capture leads to characteristic patterns of feature errors. Here we investigated whether experience-driven attentional guidance, and probabilistic attentional guidance more generally, leads to similar feature errors. We conducted a series of pre-registered experiments employing a learned spatial probability or a probabilistic pre-cue; all experiments involved reporting the color of one of four simultaneously presented stimuli using a continuous response modality. When the probabilistic cues guided attention to an invalid (nontarget) location, participants were less likely to report the target color, as expected. But strikingly, their errors tended to cluster around a nontarget color opposite the color of the invalidly cued nontarget. This "feature avoidance" was found for both experience-driven and top-down probabilistic cues, and appears to be the product of a strategic, but possibly subconscious, behavior that occurs when information about the features and/or feature-location bindings outside the focus of attention is limited. The findings emphasize the importance of considering how different types of attentional guidance can exert different effects on feature perception and memory reports. (PsycInfo Database Record (c) 2023 APA, all rights reserved.)


Subjects
Cues (Psychology), Mental Recall, Humans, Reaction Time/physiology, Photic Stimulation, Attention, Visual Perception/physiology
20.
J Exp Psychol Hum Percept Perform ; 49(5): 672-686, 2023 May.
Article in English | MEDLINE | ID: mdl-37261773

ABSTRACT

Previous studies have posited that spatial location plays a special role in object recognition. Notably, the "spatial congruency bias" (SCB) is a tendency to report two objects as the same identity when they are presented at the same location, compared to different locations. Here we found that even when statistical regularities were manipulated in the opposite direction (objects in the same location were three times more likely to be different identities), subjects still exhibited a robust SCB (remaining more likely to report same-location objects as the same identity). We replicated this finding across two preregistered experiments. Only in a third experiment, in which we explicitly informed subjects of the manipulation, did the SCB disappear, though the absence of a significantly reversed bias suggests the ingrained congruency bias was not completely overcome. The inclusion of catch trials in which the second object was completely masked further bolsters previous evidence that the congruency bias is perceptual rather than simply a guessing strategy. These results reinforce the dominant role of spatial information in object recognition and establish the SCB as a strong perceptual phenomenon that is remarkably hard to overcome even in the face of opposing regularities and explicit instruction. (PsycInfo Database Record (c) 2023 APA, all rights reserved.)


Subjects
Judgment, Visual Perception, Humans