Cross-modal contextual memory guides selective attention in visual-search tasks.
Chen, Siyi; Shi, Zhuanghua; Zinchenko, Artyom; Müller, Hermann J; Geyer, Thomas.
Affiliations
  • Chen S; General and Experimental Psychology, Department of Psychology, Ludwig-Maximilians-Universität München, Munich, Germany.
  • Shi Z; General and Experimental Psychology, Department of Psychology, Ludwig-Maximilians-Universität München, Munich, Germany.
  • Zinchenko A; Munich Center for Neurosciences-Brain & Mind, Ludwig-Maximilians-Universität München, Munich, Germany.
  • Müller HJ; General and Experimental Psychology, Department of Psychology, Ludwig-Maximilians-Universität München, Munich, Germany.
  • Geyer T; General and Experimental Psychology, Department of Psychology, Ludwig-Maximilians-Universität München, Munich, Germany.
Psychophysiology; 59(7): e14025, 2022 Jul.
Article in English | MEDLINE | ID: mdl-35141899
ABSTRACT
Visual search is speeded when a target item is positioned consistently within an invariant (repeatedly encountered) configuration of distractor items ("contextual cueing"). Contextual cueing is also observed in cross-modal search, when the location of the (visual) target is predicted by distractors from another (tactile) sensory modality. Previous studies examining lateralized waveforms of the event-related potential (ERP) with millisecond precision have shown that learned visual contexts improve a whole cascade of search-processing stages. Drawing on ERPs, the present study tested alternative accounts of contextual cueing in tasks in which distractor-target contextual associations are established across, as compared to within, sensory modalities. To this end, we devised a novel cross-modal search task: search for a visual feature singleton, with repeated (and nonrepeated) distractor configurations presented either within the same (visual) or a different (tactile) modality. We found reaction times (RTs) to be faster for repeated versus nonrepeated configurations, with comparable facilitation effects between visual (unimodal) and tactile (crossmodal) context cues. Further, for repeated configurations, there were enhanced amplitudes (and reduced latencies) of ERP components indexing attentional allocation (PCN) and postselective analysis of the target (CDA), respectively; both components correlated positively with the RT facilitation. These effects were again comparable between uni- and crossmodal cueing conditions. In contrast, motor-related processes indexed by the response-locked lateralized readiness potential (LRP) contributed little to the RT effects. These results indicate that both uni- and crossmodal context cues benefit the same visual processing stages, related to the selection and subsequent analysis of the search target.
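To make the two dependent measures in the abstract concrete, the sketch below illustrates (a) the contextual-cueing effect as the mean RT difference between nonrepeated and repeated configurations, and (b) how lateralized components such as the PCN and CDA are typically quantified, as contralateral-minus-ipsilateral difference waves relative to the target hemifield. This is a minimal illustration on simulated data; all numbers, trial counts, and the PO7/PO8 electrode pair are hypothetical stand-ins, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# (a) Hypothetical per-trial RTs (ms) for the two configuration types.
rt_repeated = rng.normal(620, 80, 200)     # repeated ("old") contexts
rt_nonrepeated = rng.normal(680, 80, 200)  # nonrepeated ("new") contexts

# Contextual-cueing effect: RT benefit for repeated configurations.
cueing_effect = rt_nonrepeated.mean() - rt_repeated.mean()
print(f"RT facilitation: {cueing_effect:.1f} ms")

# (b) Lateralized ERP (e.g., PCN/CDA): difference between electrodes
# contralateral vs. ipsilateral to the target side (e.g., PO7/PO8).
# Simulated single-trial data, shape (trials, timepoints).
contra = rng.normal(0.0, 1.0, (200, 500))
ipsi = rng.normal(0.0, 1.0, (200, 500))
difference_wave = (contra - ipsi).mean(axis=0)  # grand-average contra-ipsi
print(f"Difference wave has {difference_wave.size} timepoints")
```

Component amplitude and latency would then be read off this difference wave within a predefined time window, which is the quantity the abstract reports as enhanced (amplitude) and reduced (latency) for repeated configurations.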

Full text: 1 Database: MEDLINE Main subject: Attention / Visual Perception Language: English Publication year: 2022 Document type: Article
