Results 1 - 20 of 101
1.
PLoS Biol ; 20(9): e3001711, 2022 09.
Article in English | MEDLINE | ID: mdl-36067148

ABSTRACT

Sensory responses and behavior are strongly shaped by stimulus history. For example, perceptual reports are sometimes biased toward previously viewed stimuli (serial dependence). While behavioral studies have pointed to both perceptual and postperceptual origins of this phenomenon, neural data that could elucidate where these biases emerge are limited. We recorded functional magnetic resonance imaging (fMRI) responses while human participants (male and female) performed a delayed orientation discrimination task. Although behavioral reports were attracted to the previous stimulus, response patterns in visual cortex were repelled. We reconciled these opposing neural and behavioral biases using a model where both sensory encoding and readout are shaped by stimulus history. First, neural adaptation reduces redundancy at encoding and leads to the repulsive biases that we observed in visual cortex. Second, our modeling work suggests that serial dependence is induced by readout mechanisms that account for adaptation in visual cortex. According to this account, the visual system can simultaneously improve efficiency via adaptation while still optimizing behavior based on the temporal structure of natural stimuli.


Subject(s)
Visual Cortex , Visual Perception , Adaptation, Physiological , Bias , Decision Making/physiology , Female , Humans , Male , Visual Cortex/physiology , Visual Perception/physiology
2.
J Neurosci ; 43(18): 3312-3330, 2023 05 03.
Article in English | MEDLINE | ID: mdl-36963848

ABSTRACT

Perceptual difficulty is sometimes used to manipulate selective attention. However, these two factors are logically distinct. Selective attention is defined by priority given to specific stimuli based on their behavioral relevance, whereas perceptual difficulty is often determined by the perceptual demands required to discriminate relevant stimuli. That said, both perceptual difficulty and selective attention are thought to modulate the gain of neural responses in early sensory areas. Previous studies found that selectively attending to a stimulus or increasing perceptual difficulty enhanced the gain of neurons in visual cortex. However, some other studies suggest that perceptual difficulty can have either a null or even reversed effect on gain modulations in visual cortex. According to the Yerkes-Dodson law, it is possible that this discrepancy arises because of an interaction between perceptual difficulty and attentional gain modulations yielding a nonlinear inverted-U function. Here, we used EEG to measure neural modulations in the visual cortex of male and female human participants performing an attention-cueing task where we systematically manipulated perceptual difficulty across blocks of trials. The behavioral and neural data implicate a nonlinear inverted-U relationship between selective attention and perceptual difficulty: a focused-attention cue led to larger response gain in both neural and behavioral data at intermediate difficulty levels compared with when the task was more or less difficult. Moreover, difficulty-related changes in attentional gain positively correlated with those predicted by quantitative modeling of the behavioral data. These findings suggest that perceptual difficulty mediates attention-related changes in perceptual performance via selective neural modulations in human visual cortex. SIGNIFICANCE STATEMENT Both perceptual difficulty and selective attention are thought to influence perceptual performance by modulating response gain in early sensory areas. That said, less is known about how selective attention interacts with perceptual difficulty. Here, we measured neural gain modulations in the visual cortex of human participants performing an attention-cueing task where perceptual difficulty was systematically manipulated. Consistent with the Yerkes-Dodson law, our behavioral and neural data implicate a nonlinear inverted-U relationship between selective attention and perceptual difficulty. These results suggest that perceptual difficulty mediates attention-related changes in perceptual performance via selective neural modulations in visual cortex, extending our understanding of how attention operates under different levels of perceptual demand.


Subject(s)
Visual Cortex , Visual Perception , Humans , Male , Female , Visual Perception/physiology , Attention/physiology , Cues , Neurons , Visual Cortex/physiology , Photic Stimulation
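As a toy illustration of the response-gain account summarized in the abstract above, the sketch below applies an attention gain factor with an assumed inverted-U profile over difficulty to a Naka-Rushton contrast-response function. It is a minimal sketch on made-up numbers, not the quantitative model fit in the paper; the `attentional_gain` profile and all parameter values are illustrative assumptions.

```python
import numpy as np

def naka_rushton(c, rmax, c50, n=2.0, baseline=0.0):
    """Contrast-response function: R(c) = rmax * c^n / (c^n + c50^n) + baseline."""
    return rmax * c**n / (c**n + c50**n) + baseline

def attentional_gain(difficulty, peak=0.5, width=0.2, max_gain=1.5):
    """Hypothetical inverted-U (Yerkes-Dodson-like) gain profile over difficulty in [0, 1]."""
    return 1.0 + (max_gain - 1.0) * np.exp(-(difficulty - peak) ** 2 / (2 * width ** 2))

contrast = np.linspace(0.01, 1.0, 50)
for difficulty in (0.1, 0.5, 0.9):          # easy, intermediate, hard blocks
    gain = attentional_gain(difficulty)
    attended   = naka_rushton(contrast, rmax=gain * 1.0, c50=0.3)
    unattended = naka_rushton(contrast, rmax=1.0,        c50=0.3)
    # The response-gain benefit of the attention cue peaks at intermediate difficulty
    print(f"difficulty={difficulty:.1f}  gain={gain:.2f}  "
          f"peak attended-minus-unattended response={np.max(attended - unattended):.3f}")
```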
3.
J Neurosci ; 43(39): 6628-6652, 2023 Sep 27.
Article in English | MEDLINE | ID: mdl-37620156

ABSTRACT

A prominent theoretical framework spanning philosophy, psychology, and neuroscience holds that selective attention penetrates early stages of perceptual processing to alter the subjective visual experience of behaviorally relevant stimuli. For example, searching for a red apple at the grocery store might make the relevant color appear brighter and more saturated compared with seeing the exact same red apple while searching for a yellow banana. In contrast, recent proposals argue that data supporting attention-related changes in appearance reflect decision- and motor-level response biases without concurrent changes in perceptual experience. Here, we tested these accounts by evaluating attentional modulations of EEG responses recorded from male and female human subjects while they compared the perceived contrast of attended and unattended visual stimuli rendered at different levels of physical contrast. We found that attention enhanced the amplitude of the P1 component, an early evoked potential measured over visual cortex. A linking model based on signal detection theory suggests that response gain modulations of the P1 component track attention-induced changes in perceived contrast as measured with behavior. In contrast, attentional cues induced changes in the baseline amplitude of posterior alpha band oscillations (~9-12 Hz), an effect that best accounts for cue-induced response biases, particularly when no stimuli are presented or when competing stimuli are similar and decisional uncertainty is high. The observation of dissociable neural markers that are linked to changes in subjective appearance and response bias supports a more unified theoretical account and demonstrates an approach to isolate subjective aspects of selective information processing. SIGNIFICANCE STATEMENT Does attention alter visual appearance, or does it simply induce response bias? In the present study, we examined these competing accounts using EEG and linking models based on signal detection theory. We found that response gain modulations of the visually evoked P1 component best accounted for attention-induced changes in visual appearance. In contrast, cue-induced baseline shifts in alpha band activity better explained response biases. Together, these results suggest that attention concurrently impacts visual appearance and response bias, and that these processes can be experimentally isolated.


Subject(s)
Evoked Potentials , Visual Cortex , Humans , Male , Female , Uncertainty , Cognition , Cues , Visual Cortex/physiology , Visual Perception/physiology , Photic Stimulation/methods , Electroencephalography
4.
Cereb Cortex ; 32(5): 1077-1092, 2022 02 19.
Article in English | MEDLINE | ID: mdl-34428283

ABSTRACT

Current theories propose that the short-term retention of information in working memory (WM) and the recall of information from long-term memory (LTM) are supported by overlapping neural mechanisms in occipital and parietal cortex. However, the extent of the shared representations between WM and LTM is unclear. We designed a spatial memory task that allowed us to directly compare the representations of remembered spatial information in WM and LTM with carefully matched behavioral response precision between tasks. Using multivariate pattern analyses on functional magnetic resonance imaging data, we show that visual memories were represented in a sensory-like code in both memory tasks across retinotopic regions in occipital and parietal cortex. Regions in lateral parietal cortex also encoded remembered locations in both tasks, but in a format that differed from sensory-evoked activity. These results suggest a striking correspondence in the format of representations maintained in WM and retrieved from LTM across occipital and parietal cortex. On the other hand, we also show that activity patterns in nearly all parietal regions, but not occipital regions, contained information that could discriminate between WM and LTM trials. Our data provide new evidence for theories of memory systems and the representation of mnemonic content.


Subject(s)
Memory, Long-Term , Memory, Short-Term , Brain Mapping/methods , Magnetic Resonance Imaging , Memory, Short-Term/physiology , Occipital Lobe , Parietal Lobe/diagnostic imaging , Parietal Lobe/physiology
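One common way to test whether working memory and long-term memory share a representational format, in the spirit of the multivariate pattern analyses described in the abstract above, is cross-task decoding: train a classifier on activity patterns from one task and test it on the other. The sketch below runs that scheme on simulated voxel data with scikit-learn; the data simulation, array sizes, and classifier choice are assumptions for illustration, not the authors' pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_trials, n_voxels = 120, 200

# Two remembered locations share a common (noisy) voxel pattern across the two tasks
location_patterns = rng.normal(size=(2, n_voxels))

def simulate_task(noise_sd):
    labels = rng.integers(0, 2, n_trials)            # remembered location on each trial
    data = location_patterns[labels] + rng.normal(scale=noise_sd, size=(n_trials, n_voxels))
    return data, labels

wm_data, wm_labels = simulate_task(noise_sd=2.0)     # "working memory" trials
ltm_data, ltm_labels = simulate_task(noise_sd=2.0)   # "long-term memory" trials

# Train on WM patterns, test on LTM patterns: above-chance generalization is the
# signature of a shared code; the reverse direction can be checked the same way
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(wm_data, wm_labels)
print("WM -> LTM generalization accuracy:", clf.score(ltm_data, ltm_labels))
```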
5.
J Neurosci ; 41(38): 8007-8022, 2021 09 22.
Article in English | MEDLINE | ID: mdl-34330776

ABSTRACT

To find important objects, we must focus on our goals, ignore distractions, and take our changing environment into account. This is formalized in models of visual search whereby goal-driven, stimulus-driven, and history-driven factors are integrated into a priority map that guides attention. Stimulus history robustly influences where attention is allocated even when the physical stimulus is the same: when a salient distractor is repeated over time, it captures attention less effectively. A key open question is how we come to ignore salient distractors when they are repeated. Goal-driven accounts propose that we use an active, expectation-driven mechanism to attenuate the distractor signal (e.g., predictive coding), whereas stimulus-driven accounts propose that the distractor signal is attenuated because of passive changes to neural activity and inter-item competition (e.g., adaptation). To test these competing accounts, we measured item-specific fMRI responses in human visual cortex during a visual search task where trial history was manipulated (colors unpredictably switched or were repeated). Consistent with a stimulus-driven account of history-based distractor suppression, we found that repeated singleton distractors were suppressed starting in V1, and distractor suppression did not increase in later visual areas. In contrast, we observed signatures of goal-driven target enhancement that were absent in V1, increased across visual areas, and were not modulated by stimulus history. Our data suggest that stimulus history does not alter goal-driven expectations, but rather modulates canonically stimulus-driven sensory responses to contribute to a temporally integrated representation of priority. SIGNIFICANCE STATEMENT Visual search refers to our ability to find what we are looking for in a cluttered visual world (e.g., finding your keys). To perform visual search, we must integrate information about our goals (e.g., "find the red keychain"), the environment (e.g., salient items capture your attention), and changes to the environment (i.e., stimulus history). Although stimulus history impacts behavior, the neural mechanisms that mediate history-driven effects remain debated. Here, we leveraged fMRI and multivariate analysis techniques to measure history-driven changes to the neural representation of items during visual search. We found that stimulus history influenced the representation of a salient "pop-out" distractor starting in V1, suggesting that stimulus history operates via modulations of early sensory processing rather than goal-driven expectations.


Subject(s)
Attention/physiology , Brain/physiology , Visual Cortex/physiology , Visual Perception/physiology , Adult , Brain/diagnostic imaging , Female , Humans , Magnetic Resonance Imaging , Male , Neuropsychological Tests , Pattern Recognition, Visual/physiology , Reaction Time/physiology , Visual Cortex/diagnostic imaging , Young Adult
6.
J Cogn Neurosci ; 35(1): 24-26, 2022 12 01.
Article in English | MEDLINE | ID: mdl-36322835

ABSTRACT

In this short perspective, we reflect upon our tendency to use oversimplified and idiosyncratic tasks in a quest to discover general mechanisms of working memory. We discuss how the work of Mark Stokes and collaborators has looked beyond localized, temporally persistent neural activity and shifted focus toward the importance of distributed, dynamic neural codes for working memory. A critical lesson from this work is that using simplified tasks does not automatically simplify the neural computations supporting behavior (even if we wish it were so). Moreover, Stokes' insights about multidimensional dynamics highlight the flexibility of the neural codes underlying cognition and have pushed the field to look beyond static measures of working memory.


Subject(s)
Cognition , Memory, Short-Term , Humans
7.
J Neurophysiol ; 127(2): 504-518, 2022 02 01.
Article in English | MEDLINE | ID: mdl-35020526

ABSTRACT

Top-down spatial attention enhances cortical representations of behaviorally relevant visual information and increases the precision of perceptual reports. However, little is known about the relative precision of top-down attentional modulations in different visual areas, especially compared with the highly precise stimulus-driven responses that are observed in early visual cortex. For example, the precision of attentional modulations in early visual areas may be limited by the relatively coarse spatial selectivity and the anatomical connectivity of the areas in prefrontal cortex that generate and relay the top-down signals. Here, we used functional MRI (fMRI) and human participants to assess the precision of bottom-up spatial representations evoked by high-contrast stimuli across the visual hierarchy. Then, we examined the relative precision of top-down attentional modulations in the absence of spatially specific bottom-up drive. Whereas V1 showed the largest relative difference between the precision of top-down attentional modulations and the precision of bottom-up modulations, midlevel areas such as V4 showed relatively smaller differences between the precision of top-down and bottom-up modulations. Overall, this interaction between visual areas (e.g., V1 vs. V4) and the relative precision of top-down and bottom-up modulations suggests that the precision of top-down attentional modulations is limited by the representational fidelity of areas that generate and relay top-down feedback signals. NEW & NOTEWORTHY When the relative precision of purely top-down and bottom-up signals was compared across visual areas, early visual areas like V1 showed higher bottom-up precision compared with top-down precision. In contrast, midlevel areas showed similar levels of top-down and bottom-up precision. This result suggests that the precision of top-down attentional modulations may be limited by the relatively coarse spatial selectivity and the anatomical connectivity of the areas generating and relaying the signals.


Subject(s)
Attention/physiology , Functional Neuroimaging , Psychomotor Performance/physiology , Space Perception/physiology , Visual Cortex/physiology , Visual Perception/physiology , Adult , Female , Humans , Magnetic Resonance Imaging , Male , Young Adult
8.
PLoS Biol ; 17(8): e3000186, 2019 08.
Article in English | MEDLINE | ID: mdl-31398186

ABSTRACT

When a behaviorally relevant stimulus has been previously associated with reward, behavioral responses are faster and more accurate compared to equally relevant but less valuable stimuli. Conversely, task-irrelevant stimuli that were previously associated with a high reward can capture attention and distract processing away from relevant stimuli (e.g., seeing a chocolate bar in the pantry when you are looking for a nice, healthy apple). Although increasing the value of task-relevant stimuli systematically up-regulates neural responses in early visual cortex to facilitate information processing, it is not clear whether the value of task-irrelevant distractors influences behavior via competition in early visual cortex or via competition at later stages of decision-making and response selection. Here, we measured functional magnetic resonance imaging (fMRI) responses in human visual cortex while subjects performed a value-based learning task, and we applied a multivariate inverted encoding model (IEM) to assess the fidelity of distractor representations in early visual cortex. We found that the fidelity of neural representations related to task-irrelevant distractors increased when the distractors were previously associated with a high reward. This finding suggests that value-driven attentional capture begins with sensory modulations of distractor representations in early areas of visual cortex.


Subject(s)
Attention/physiology , Reaction Time/physiology , Visual Cortex/physiology , Adult , Brain Mapping , Female , Humans , Learning/physiology , Magnetic Resonance Imaging , Male , Photic Stimulation , Reward , Visual Perception/physiology
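The inverted encoding model (IEM) named in the abstract above follows a standard two-step recipe: estimate a linear mapping from hypothetical feature channels to voxels on training data, then invert that mapping on held-out data to reconstruct channel responses. The sketch below implements that recipe on simulated orientation data; the number of channels, the basis-function shape, and the noise level are illustrative assumptions rather than the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(1)
n_chan, n_voxels = 8, 120
centers = np.linspace(0, 180, n_chan, endpoint=False)   # channel centers in degrees

def channel_basis(orientations):
    """Unimodal channel tuning curves defined on the 180-degree orientation space."""
    d = np.deg2rad(np.asarray(orientations)[:, None] - centers[None, :]) * 2  # double angle wraps at 180
    return (0.5 + 0.5 * np.cos(d)) ** 5

# Simulate voxel responses as a random linear mixture of channel responses plus noise
true_weights = rng.normal(size=(n_chan, n_voxels))
train_oris = rng.uniform(0, 180, 160)
test_oris = rng.uniform(0, 180, 40)
train_bold = channel_basis(train_oris) @ true_weights + rng.normal(scale=0.5, size=(160, n_voxels))
test_bold = channel_basis(test_oris) @ true_weights + rng.normal(scale=0.5, size=(40, n_voxels))

# Step 1: estimate channel-to-voxel weights by least squares on the training data
W_hat = np.linalg.lstsq(channel_basis(train_oris), train_bold, rcond=None)[0]  # channels x voxels

# Step 2: invert the estimated weights to reconstruct channel responses on held-out data
C_hat = test_bold @ np.linalg.pinv(W_hat)                                      # trials x channels

# Fidelity check: the peak of the reconstruction should track the presented orientation
peak = centers[np.argmax(C_hat, axis=1)]
err = np.abs((peak - test_oris + 90) % 180 - 90)
print(f"Median absolute reconstruction error: {np.median(err):.1f} deg")
```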
9.
Proc Natl Acad Sci U S A ; 116(39): 19705-19710, 2019 09 24.
Article in English | MEDLINE | ID: mdl-31492814

ABSTRACT

Prior knowledge about the probabilistic structure of visual environments is necessary to resolve ambiguous information about objects in the world. Expectations based on stimulus regularities exert a powerful influence on human perception and decision making by improving the efficiency of information processing. Another type of prior knowledge, termed top-down attention, can also improve perceptual performance by facilitating the selective processing of relevant over irrelevant information. While much is known about attention, the mechanisms that support expectations about statistical regularities are not well understood. The hippocampus has been implicated as a key structure involved in or perhaps necessary for the learning of statistical regularities, consistent with its role in various kinds of learning and memory. Here, we tested this hypothesis using a motion discrimination task in which we manipulated the most likely direction of motion, the degree of attention afforded to the relevant stimulus, and the amount of available sensory evidence. We tested memory-impaired patients with bilateral damage to the hippocampus and compared their performance with controls. Despite a modest slowing in response initiation across all task conditions, patients performed similarly to controls. Like controls, patients exhibited a tendency to respond faster and more accurately when the motion direction was more probable, the stimulus was better attended, and more sensory evidence was available. Together, these findings demonstrate a robust, hippocampus-independent capacity for learning statistical regularities in the sensory environment in order to improve information processing.


Subject(s)
Attention/physiology , Hippocampus/physiopathology , Learning/physiology , Adult , Aged , Aged, 80 and over , Brain/physiology , Brain Mapping , Cognition/physiology , Decision Making/physiology , Female , Humans , Male , Memory , Middle Aged , Pattern Recognition, Visual/physiology , Photic Stimulation/methods , Visual Perception/physiology
10.
J Neurosci ; 40(4): 917-931, 2020 01 22.
Article in English | MEDLINE | ID: mdl-31862856

ABSTRACT

Categorization allows organisms to generalize existing knowledge to novel stimuli and to discriminate between physically similar yet conceptually different stimuli. Humans, nonhuman primates, and rodents can readily learn arbitrary categories defined by low-level visual features, and learning distorts perceptual sensitivity for category-defining features such that differences between physically similar yet categorically distinct exemplars are enhanced, whereas differences between equally similar but categorically identical stimuli are reduced. We report a possible basis for these distortions in human occipitoparietal cortex. In three experiments, we used an inverted encoding model to recover population-level representations of stimuli from multivoxel and multielectrode patterns of human brain activity while participants (both sexes) classified continuous stimulus sets into discrete groups. In each experiment, reconstructed representations of to-be-categorized stimuli were systematically biased toward the center of the appropriate category. These biases were largest for exemplars near a category boundary, predicted participants' overt category judgments, emerged shortly after stimulus onset, and could not be explained by mechanisms of response selection or motor preparation. Collectively, our findings suggest that category learning can influence the earliest stages of cortical visual processing. SIGNIFICANCE STATEMENT Category learning enhances perceptual sensitivity for physically similar yet categorically different stimuli. We report a possible mechanism for these changes in human occipitoparietal cortex. In three experiments, we used an inverted encoding model to recover population-level representations of stimuli from multivariate patterns in occipitoparietal cortex while participants categorized sets of continuous stimuli into discrete groups. The recovered representations were systematically biased by category membership, with larger biases for exemplars adjacent to a category boundary. These results suggest that mechanisms of categorization shape information processing at the earliest stages of the visual system.


Subject(s)
Cognition/physiology , Judgment/physiology , Occipital Lobe/physiology , Parietal Lobe/physiology , Pattern Recognition, Visual/physiology , Adult , Brain Mapping , Electroencephalography , Female , Functional Neuroimaging , Humans , Magnetic Resonance Imaging , Male , Occipital Lobe/diagnostic imaging , Parietal Lobe/diagnostic imaging , Photic Stimulation
11.
J Cogn Neurosci ; 33(4): 695-724, 2021 04.
Article in English | MEDLINE | ID: mdl-33416444

ABSTRACT

Feature-based attention is the ability to selectively attend to a particular feature (e.g., attend to red but not green items while looking for the ketchup bottle in your refrigerator), and steady-state visually evoked potentials (SSVEPs) measured from the human EEG signal have been used to track the neural deployment of feature-based attention. Although many published studies suggest that we can use trial-by-trial cues to enhance relevant feature information (i.e., greater SSVEP response to the cued color), there is ongoing debate about whether participants may likewise use trial-by-trial cues to voluntarily ignore a particular feature. Here, we report the results of a preregistered study in which participants were cued either to attend to or to ignore a color. Counter to prior work, we found no attention-related modulation of the SSVEP response in either cue condition. However, positive control analyses revealed that participants paid some degree of attention to the cued color (i.e., we observed a greater P300 component to targets in the attended vs. the unattended color). In light of these unexpected null results, we conducted a focused review of methodological considerations for studies of feature-based attention using SSVEPs. In the review, we quantify potentially important stimulus parameters that have been used in the past (e.g., stimulation frequency, trial counts) and we discuss the potential importance of these and other task factors (e.g., feature-based priming) for SSVEP studies.


Subject(s)
Evoked Potentials, Visual , Negative Results , Electroencephalography , Event-Related Potentials, P300 , Humans , Photic Stimulation
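For readers unfamiliar with the frequency-tagging approach discussed above, the attended and ignored colors are typically flickered at different rates and the EEG amplitude at each tagged frequency is taken as a measure of how strongly that feature is processed. The sketch below shows the basic FFT-based amplitude measurement on a simulated signal; the sampling rate, epoch length, and flicker frequencies are assumed values, not those of the study.

```python
import numpy as np

fs = 500                              # sampling rate (Hz), assumed
duration = 4.0                        # epoch length (s), assumed
t = np.arange(0, duration, 1 / fs)
f_attended, f_ignored = 24.0, 30.0    # flicker frequencies of the two colors (assumed)

# Simulated occipital EEG: two frequency-tagged responses plus broadband noise
eeg = (1.2 * np.sin(2 * np.pi * f_attended * t)
       + 0.8 * np.sin(2 * np.pi * f_ignored * t)
       + np.random.default_rng(2).normal(scale=2.0, size=t.size))

# Single-sided amplitude spectrum; a 4 s epoch gives 0.25 Hz frequency resolution
spectrum = np.abs(np.fft.rfft(eeg)) * 2 / t.size
freqs = np.fft.rfftfreq(t.size, d=1 / fs)

def amp_at(f):
    """Amplitude at the FFT bin closest to frequency f."""
    return spectrum[np.argmin(np.abs(freqs - f))]

# The attention effect is usually summarized as the amplitude (or SNR) at each tagged frequency
print(f"SSVEP amplitude at {f_attended} Hz (cued color):   {amp_at(f_attended):.2f}")
print(f"SSVEP amplitude at {f_ignored} Hz (ignored color): {amp_at(f_ignored):.2f}")
```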
12.
J Vis ; 21(8): 10, 2021 08 02.
Article in English | MEDLINE | ID: mdl-34351397

ABSTRACT

Visual acuity is better for vertical and horizontal compared to other orientations. This cross-species phenomenon is often explained by "efficient coding," whereby more neurons show sharper tuning for the orientations most common in natural vision. However, it is unclear if experience alone can account for such biases. Here, we measured orientation representations in a convolutional neural network, VGG-16, trained on modified versions of ImageNet (rotated by 0°, 22.5°, or 45° counterclockwise of upright). Discriminability for each model was highest near the orientations that were most common in the network's training set. Furthermore, there was an overrepresentation of narrowly tuned units selective for the most common orientations. These effects emerged in middle layers and increased with depth in the network, though this layer-wise pattern may depend on properties of the evaluation stimuli used. Biases emerged early in training, consistent with the possibility that nonuniform representations may play a functional role in the network's task performance. Together, our results suggest that biased orientation representations can emerge through experience with a nonuniform distribution of orientations, supporting the efficient coding hypothesis.


Subject(s)
Visual Cortex , Humans , Neural Networks, Computer , Neurons , Orientation , Vision, Ocular
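A rough way to probe orientation tuning in a network like the one studied above is to present sinusoidal gratings at many orientations and record unit activations at some layer. The sketch below does this for an ImageNet-pretrained VGG-16 from torchvision (the paper instead trained networks on rotated versions of ImageNet); the grating parameters, the choice of layer index, and the cardinal-preference summary are illustrative assumptions.

```python
import numpy as np
import torch
from torchvision.models import vgg16

IMAGENET_MEAN = torch.tensor([0.485, 0.456, 0.406]).view(3, 1, 1)
IMAGENET_STD = torch.tensor([0.229, 0.224, 0.225]).view(3, 1, 1)

def grating(orientation_deg, size=224, cycles=8):
    """Full-contrast sinusoidal grating; 0 deg yields vertical bars with this parameterization."""
    theta = np.deg2rad(orientation_deg)
    xx, yy = np.meshgrid(np.linspace(-1, 1, size), np.linspace(-1, 1, size))
    ramp = xx * np.cos(theta) + yy * np.sin(theta)
    img = 0.5 + 0.5 * np.sin(2 * np.pi * cycles * ramp)          # luminance in [0, 1]
    x = torch.tensor(np.stack([img] * 3), dtype=torch.float32)   # replicate to 3 channels
    return (x - IMAGENET_MEAN) / IMAGENET_STD

model = vgg16(weights="IMAGENET1K_V1").features.eval()   # convolutional part only

orientations = np.arange(0, 180, 5)
with torch.no_grad():
    x = torch.stack([grating(o) for o in orientations])
    for i, layer in enumerate(model):
        x = layer(x)
        if i == 15:                                      # output of a mid-network ReLU (assumed choice)
            acts = x.mean(dim=(2, 3)).numpy()            # spatially averaged: orientations x channels
            break

# Preferred orientation of each channel, and how many prefer near-cardinal orientations
preferred = orientations[np.argmax(acts, axis=0)]
near_cardinal = np.minimum(preferred % 90, 90 - preferred % 90) <= 5
print(f"{near_cardinal.sum()} of {preferred.size} channels prefer orientations within 5 deg of cardinal")
```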
13.
J Neurosci ; 39(31): 6162-6179, 2019 07 31.
Article in English | MEDLINE | ID: mdl-31127004

ABSTRACT

Functional magnetic resonance imaging (fMRI) and electroencephalography (EEG) are two noninvasive methods commonly used to study neural mechanisms supporting visual attention in humans. Studies using these tools, which have complementary spatial and temporal resolutions, implicitly assume they index similar underlying neural modulations related to external stimulus and internal attentional manipulations. Accordingly, they are often used interchangeably for constraining understanding about the impact of bottom-up and top-down factors on neural modulations. To test this core assumption, we simultaneously manipulated bottom-up sensory inputs by varying stimulus contrast and top-down cognitive modulations by changing the focus of spatial attention. Each of the male and female subjects participated in both fMRI and EEG sessions performing the same experimental paradigm. We found categorically different patterns of attentional modulation of fMRI activity in early visual cortex and early stimulus-evoked potentials measured via EEG (e.g., the P1 component and steady-state visually evoked potentials): fMRI activation scaled additively with attention, whereas evoked EEG components scaled multiplicatively with attention. However, across longer time scales, a contralateral negative-going potential and oscillatory EEG signals in the alpha band revealed additive attentional modulation patterns like those observed with fMRI. These results challenge prior assumptions that fMRI and early stimulus-evoked potentials measured with EEG can be interchangeably used to index the same neural mechanisms of attentional modulations at different spatiotemporal scales. Instead, fMRI measures of attentional modulations are more closely linked with later EEG components and alpha-band oscillations. Considered together, hemodynamic and electrophysiological signals can jointly constrain understanding of the neural mechanisms supporting cognition. SIGNIFICANCE STATEMENT fMRI and EEG have been used as tools to measure the location and timing of attentional modulations in visual cortex and are often used interchangeably for constraining computational models under the assumption that they index similar underlying neural processes. However, by varying attentional and stimulus parameters, we found differential patterns of attentional modulations of fMRI activity in early visual cortex and commonly used stimulus-evoked potentials measured via EEG. Instead, across longer time scales, a contralateral negative-going potential and EEG oscillations in the alpha band exhibited attentional modulations similar to those observed with fMRI. Together, these results suggest that different physiological processes assayed by these complementary techniques must be jointly considered when making inferences about the neural underpinnings of cognitive operations.


Subject(s)
Attention/physiology , Brain Mapping/methods , Electroencephalography/methods , Magnetic Resonance Imaging/methods , Visual Cortex/physiology , Adult , Female , Humans , Male
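The additive-versus-multiplicative contrast drawn above can be made concrete by fitting a contrast-response function in attended and unattended conditions and asking which parameter changes: a baseline shift is additive, whereas a change in the maximum response is multiplicative (response gain). The sketch below does this on simulated data with a Naka-Rushton function; the functional form, parameter values, and use of scipy's curve_fit are assumptions for illustration, not the analysis reported in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def crf(c, rmax, c50, baseline, n=2.0):
    """Naka-Rushton contrast-response function."""
    return rmax * c**n / (c**n + c50**n) + baseline

contrast = np.array([0.02, 0.05, 0.1, 0.2, 0.4, 0.8])
rng = np.random.default_rng(3)

# Simulated "fMRI-like" data in which attention adds a constant offset (an additive shift)
unattended = crf(contrast, 1.0, 0.2, 0.1) + rng.normal(scale=0.02, size=contrast.size)
attended = crf(contrast, 1.0, 0.2, 0.1) + 0.3 + rng.normal(scale=0.02, size=contrast.size)

# Fit each condition and compare parameters: an additive effect shows up in the baseline term,
# a multiplicative (response-gain) effect shows up in rmax
p_un, _ = curve_fit(crf, contrast, unattended, p0=[1.0, 0.2, 0.1])
p_at, _ = curve_fit(crf, contrast, attended, p0=[1.0, 0.2, 0.1])
print("baseline change:", round(p_at[2] - p_un[2], 3), "  rmax change:", round(p_at[0] - p_un[0], 3))
```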
14.
PLoS Biol ; 15(6): e2001724, 2017 Jun.
Article in English | MEDLINE | ID: mdl-28654635

ABSTRACT

Selective attention supports the prioritized processing of relevant sensory information to facilitate goal-directed behavior. Studies in human subjects demonstrate that attentional gain of cortical responses can sufficiently account for attention-related improvements in behavior. On the other hand, studies using highly trained nonhuman primates suggest that reductions in neural noise can better explain attentional facilitation of behavior. Given the importance of selective information processing in nearly all domains of cognition, we sought to reconcile these competing accounts by testing the hypothesis that extensive behavioral training alters the neural mechanisms that support selective attention. We tested this hypothesis using electroencephalography (EEG) to measure stimulus-evoked visual responses from human subjects while they performed a selective spatial attention task over the course of ~1 month. Early in training, spatial attention led to an increase in the gain of stimulus-evoked visual responses. Gain was apparent within ~100 ms of stimulus onset, and a quantitative model based on signal detection theory (SDT) successfully linked the magnitude of this gain modulation to attention-related improvements in behavior. However, after extensive training, this early attentional gain was eliminated even though there were still substantial attention-related improvements in behavior. Accordingly, the SDT-based model required noise reduction to account for the link between the stimulus-evoked visual responses and attentional modulations of behavior. These findings suggest that training can lead to fundamental changes in the way attention alters the early cortical responses that support selective information processing. Moreover, these data facilitate the translation of results across different species and across experimental procedures that employ different behavioral training regimes.


Subject(s)
Attention/physiology , Cognition/physiology , Visual Cortex/physiology , Visual Perception/physiology , Algorithms , Electroencephalography , Evoked Potentials, Visual/physiology , Female , Humans , Male , Models, Neurological , Photic Stimulation , Reaction Time/physiology , Young Adult
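The two accounts contrasted above make the same behavioral prediction through different terms of the signal detection ratio d' = Δμ / σ, which is why the evoked-response measurements are needed to tell them apart. A minimal numeric illustration (the specific numbers are arbitrary, not values from the study):

```python
def d_prime(delta_mu, sigma):
    """Signal detection sensitivity: separation of the response distributions over their spread."""
    return delta_mu / sigma

baseline = d_prime(delta_mu=1.0, sigma=1.0)

# Gain account (early in training): attention multiplies the evoked response difference
gain_account = d_prime(delta_mu=1.5 * 1.0, sigma=1.0)

# Noise-reduction account (after extensive training): attention shrinks trial-to-trial noise
noise_account = d_prime(delta_mu=1.0, sigma=1.0 / 1.5)

# Both mechanisms yield the same behavioral improvement (d' = 1.5 vs 1.0),
# so behavior alone cannot distinguish them
print(baseline, gain_account, noise_account)
```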
15.
Cereb Cortex ; 29(12): 5037-5048, 2019 12 17.
Article in English | MEDLINE | ID: mdl-30877786

ABSTRACT

When we view familiar stimuli (e.g., common words), processing is highly automatized such that it can interfere with the processing of incompatible sensory information. At least two mechanisms may help mitigate this interference. Early selection accounts posit that attentional processes filter out distracting sensory information to avoid conflict. Alternatively, late selection accounts hold that all sensory inputs receive full semantic analysis and that frontal executive mechanisms are recruited to resolve conflict. To test how these mechanisms operate to overcome conflict induced by highly automatized processing, we developed a novel version of the color-word Stroop task, where targets and distractors were simultaneously flickered at different frequencies. We measured the quality of early sensory processing by assessing the amplitude of steady-state visually evoked potentials (SSVEPs) elicited by targets and distractors. We also indexed frontal executive processes by assessing changes in frontal theta oscillations induced by color-word incongruency. We found that target- and distractor-related SSVEPs were not modulated by changes in the level of conflict, whereas frontal theta activity increased on high compared to low conflict trials. These results suggest that frontal executive processes play a more dominant role in mitigating cognitive interference driven by the automatic tendency to process highly familiar stimuli.


Subject(s)
Conflict, Psychological , Executive Function/physiology , Frontal Lobe/physiology , Pattern Recognition, Visual/physiology , Adolescent , Adult , Attention/physiology , Evoked Potentials, Visual , Female , Humans , Male , Semantics , Stroop Test , Theta Rhythm , Young Adult
16.
J Neurosci ; 38(40): 8635-8649, 2018 10 03.
Article in English | MEDLINE | ID: mdl-30143576

ABSTRACT

Decision-making becomes slower when more choices are available. Existing models attribute this slowing to poor sensory processing, to attenuated rates of sensory evidence accumulation, or to increases in the amount of evidence required before committing to a decision (a higher decision threshold). However, studies have not isolated the effects of having more choices on sensory and decision-related processes from changes in task difficulty and divided attention. Here, we controlled task difficulty while independently manipulating the distribution of attention and the number of choices available to male and female human observers. We used EEG to measure steady-state visually evoked potentials (SSVEPs) and a frontal late positive deflection (LPD), EEG markers of sensory and postsensory decision-related processes, respectively. We found that dividing attention decreased SSVEP and LPD amplitudes, consistent with dampened sensory responses and slower rates of evidence accumulation, respectively. In contrast, having more choices did not alter SSVEP amplitude and led to a larger LPD. These results suggest that having more options largely spares early sensory processing and slows down decision-making via a selective increase in decision thresholds. SIGNIFICANCE STATEMENT When more choices are available, decision-making becomes slower. We tested whether this phenomenon is due to poor sensory processing, to reduced rates of evidence accumulation, or to increases in the amount of evidence required before committing to a decision (a higher decision threshold). We measured choice modulations of sensory and decision-related neural responses using EEG. We also minimized potential confounds from changes in the distribution of attention and task difficulty, which often covary with having more choices. Dividing attention reduced the activity levels of both sensory and decision-related responses. However, having more choices did not change sensory processing and led to larger decision-related responses. These results suggest that having more choices spares sensory processing and selectively increases decision thresholds.


Subject(s)
Attention/physiology , Brain/physiology , Decision Making/physiology , Reaction Time , Adult , Electroencephalography , Evoked Potentials, Visual , Female , Humans , Male , Models, Neurological , Models, Psychological , Photic Stimulation , Young Adult
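The distinction drawn above between slower evidence accumulation and a higher decision threshold can be illustrated with a bare-bones sequential-sampling simulation: lowering the drift rate and raising the bound both lengthen mean decision time, but they correspond to different parameters. This is a deliberately minimal single-boundary sketch with arbitrary parameter values, not a fit to the behavioral or EEG data.

```python
import numpy as np

def mean_decision_time(drift, threshold, noise=1.0, dt=0.005, n_trials=500, seed=4):
    """Mean time for a noisy accumulator to reach a single decision bound."""
    rng = np.random.default_rng(seed)
    times = np.empty(n_trials)
    for i in range(n_trials):
        evidence, t = 0.0, 0.0
        while evidence < threshold:
            evidence += drift * dt + noise * np.sqrt(dt) * rng.normal()
            evidence = max(evidence, 0.0)   # reflect at zero to keep this sketch simple
            t += dt
        times[i] = t
    return times.mean()

print("baseline:         ", round(mean_decision_time(drift=1.0, threshold=1.0), 3))
print("lower drift rate: ", round(mean_decision_time(drift=0.5, threshold=1.0), 3))  # slower accumulation
print("higher threshold: ", round(mean_decision_time(drift=1.0, threshold=1.5), 3))  # more caution, more choices
```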
17.
J Neurosci ; 38(24): 5632-5648, 2018 06 13.
Article in English | MEDLINE | ID: mdl-29773755

ABSTRACT

Two factors play important roles in shaping perception: the allocation of selective attention to behaviorally relevant sensory features, and prior expectations about regularities in the environment. Signal detection theory proposes distinct roles of attention and expectation in decision-making such that attention modulates early sensory processing, whereas expectation influences the selection and execution of motor responses. Challenging this classic framework, recent studies suggest that expectations about sensory regularities enhance the encoding and accumulation of sensory evidence during decision-making. However, it is possible that these findings reflect well-documented attentional modulations in visual cortex. Here, we tested this framework in a group of male and female human participants by examining how expectations about stimulus features (orientation and color) and expectations about motor responses impacted electroencephalography (EEG) markers of early sensory processing and the accumulation of sensory evidence during decision-making (the early visual negative potential and the centro-parietal positive potential, respectively). We first demonstrate that these markers are sensitive to changes in the amount of sensory evidence in the display. Then we show, counter to recent findings, that neither marker is modulated by either feature or motor expectations, despite a robust effect of expectations on behavior. Instead, violating expectations about likely sensory features and motor responses impacts posterior alpha and frontal theta oscillations, signals thought to index overall processing time and cognitive conflict. These findings are inconsistent with recent theoretical accounts and suggest instead that expectations primarily influence decisions by modulating post-perceptual stages of information processing. SIGNIFICANCE STATEMENT Expectations about likely features or motor responses play an important role in shaping behavior. Classic theoretical frameworks posit that expectations modulate decision-making by biasing late stages of decision-making including the selection and execution of motor responses. In contrast, recent accounts suggest that expectations also modulate decisions by improving the quality of early sensory processing. However, these effects could instead reflect the influence of selective attention. Here we examine the effect of expectations about sensory features and motor responses on a set of electroencephalography (EEG) markers that index early sensory processing and later post-perceptual processing. Counter to recent empirical results, expectations have little effect on early sensory processing but instead modulate EEG markers of time-on-task and cognitive conflict.


Subject(s)
Attention/physiology , Brain/physiology , Decision Making/physiology , Discrimination, Psychological/physiology , Motivation/physiology , Female , Humans , Male , Photic Stimulation , Visual Perception/physiology , Young Adult
18.
J Neurophysiol ; 121(4): 1410-1427, 2019 04 01.
Article in English | MEDLINE | ID: mdl-30759040

ABSTRACT

Searching for items that are useful given current goals, or "target" recognition, requires observers to flexibly attend to certain object properties at the expense of others. This could involve focusing on the identity of an object while ignoring identity-preserving transformations such as changes in viewpoint or focusing on its current viewpoint while ignoring its identity. To effectively filter out variation due to the irrelevant dimension, performing either type of task is likely to require high-level, abstract search templates. Past work has found target recognition signals in areas of ventral visual cortex and in subregions of parietal and frontal cortex. However, target status in these tasks is typically associated with the identity of an object, rather than identity-orthogonal properties such as object viewpoint. In this study, we used a task that required subjects to identify novel object stimuli as targets according to either identity or viewpoint, each of which was not predictable from low-level properties such as shape. We performed functional MRI in human subjects of both sexes and measured the strength of target-match signals in areas of visual, parietal, and frontal cortex. Our multivariate analyses suggest that the multiple-demand (MD) network, including subregions of parietal and frontal cortex, encodes information about an object's status as a target in the relevant dimension only, across changes in the irrelevant dimension. Furthermore, there was more target-related information in MD regions on correct compared with incorrect trials, suggesting a strong link between MD target signals and behavior. NEW & NOTEWORTHY Real-world target detection tasks, such as searching for a car in a crowded parking lot, require both flexibility and abstraction. We investigated the neural basis of these abilities using a task that required invariant representations of either object identity or viewpoint. Multivariate decoding analyses of our whole brain functional MRI data reveal that invariant target representations are most pronounced in frontal and parietal regions, and the strength of these representations is associated with behavioral performance.


Subject(s)
Frontal Lobe/physiology , Parietal Lobe/physiology , Pattern Recognition, Visual , Visual Cortex/physiology , Adult , Female , Humans , Male , Spatial Behavior , Spatial Memory
19.
J Neurophysiol ; 122(2): 539-551, 2019 08 01.
Article in English | MEDLINE | ID: mdl-31188708

ABSTRACT

A hallmark of episodic memory is the phenomenon of mentally reexperiencing the details of past events, and a well-established concept is that the neuronal activity that mediates encoding is reinstated at retrieval. Evidence for reinstatement has come from multiple modalities, including functional magnetic resonance imaging and electroencephalography (EEG). These EEG studies have shed light on the time course of reinstatement but have been limited to distinguishing between a few categories. The goal of this work was to use recently developed experimental and technical approaches, namely continuous report tasks and inverted encoding models, to determine which frequencies of oscillatory brain activity support the retrieval of precise spatial memories. In experiment 1, we establish that an inverted encoding model applied to multivariate alpha topography tracks the retrieval of precise spatial memories. In experiment 2, we demonstrate that the frequencies and patterns of multivariate activity at study are similar to the frequencies and patterns observed during retrieval. These findings highlight the broad potential for using encoding models to characterize long-term memory retrieval. NEW & NOTEWORTHY Previous EEG work has shown that category-level information observed during encoding is recapitulated during memory retrieval, but studies with this time-resolved method have not demonstrated the reinstatement of feature-specific patterns of neural activity during retrieval. Here we show that EEG alpha-band activity tracks the retrieval of spatial representations from long-term memory. Moreover, we find considerable overlap between the frequencies and patterns of activity that track spatial memories during initial study and at retrieval.


Subject(s)
Alpha Rhythm/physiology , Cerebral Cortex/physiology , Memory, Episodic , Memory, Long-Term/physiology , Mental Recall/physiology , Spatial Memory/physiology , Adolescent , Adult , Female , Humans , Male , Young Adult
20.
J Vis ; 19(14): 8, 2019 12 02.
Article in English | MEDLINE | ID: mdl-31826253

ABSTRACT

Although attention is known to improve the efficacy of sensory processing, the impact of attention on subjective visual appearance is still a matter of debate. While recent studies suggest that attention can alter the appearance of stimulus contrast, others argue that these changes reflect response bias induced by attention cues. Here, we provide evidence that attention has effects on both appearance and response bias. In a comparative judgment task in which subjects reported whether the attended or unattended visual stimulus had a higher perceived contrast, attention induced substantial baseline-offset response bias as well as small but significant changes in subjective contrast appearance when subjects viewed near-threshold stimuli. However, when subjects viewed suprathreshold stimuli, baseline-offset response bias decreased and attention primarily changed contrast appearance. To address the possibility that these changes in appearance might be influenced by uncertainty due to the attended and unattended stimuli having similar physical contrasts, subjects performed an equality judgment task in which they reported if the contrast of the two stimuli was the same or different. We found that, although there were still attention-induced changes in contrast appearance at lower contrasts, the robust changes in contrast appearance at higher contrasts observed in the comparative judgment task were diminished in the equality judgment task. Together, these results suggest that attention can impact both response bias and appearance, and these two types of attention effects are differentially mediated by stimulus visibility and uncertainty. Collectively, these findings help constrain arguments about the cognitive penetrability of perception.


Subject(s)
Attention/physiology , Contrast Sensitivity/physiology , Adolescent , Adult , Bias , Cues , Female , Humans , Judgment/physiology , Male , Photic Stimulation/methods , Sensation , Uncertainty , Visual Perception/physiology , Young Adult
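In comparative-judgment designs like the one above, appearance changes and response bias are often separated by fitting a psychometric function to the proportion of "attended stimulus looks higher contrast" reports: a shift in the point of subjective equality (PSE) indexes a change in apparent contrast, while an overall tendency to choose the attended side regardless of contrast looks like bias. The sketch below fits a cumulative-Gaussian psychometric function to simulated data; the simulation and parameter values are illustrative assumptions, not the study's analysis.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(delta_c, pse, slope):
    """P(report 'attended stimulus higher contrast') as a function of the contrast difference."""
    return norm.cdf((delta_c - pse) / slope)

# Simulated comparative-judgment data: x is attended-minus-unattended physical contrast,
# y is the proportion of 'attended looks higher' reports, generated with a small negative PSE
# (i.e., the attended stimulus needs less physical contrast to look equal)
rng = np.random.default_rng(5)
delta_c = np.linspace(-0.3, 0.3, 9)
true_p = psychometric(delta_c, pse=-0.05, slope=0.1)
n_per_level = 40
reports = rng.binomial(n_per_level, true_p) / n_per_level

(pse_hat, slope_hat), _ = curve_fit(psychometric, delta_c, reports, p0=[0.0, 0.1])
print(f"Estimated PSE: {pse_hat:.3f}  (negative = attention boosts apparent contrast)")
```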