ABSTRACT
The natural environment is dynamic and moving objects constantly become occluded, engaging the brain in a challenging completion process to estimate where and when an object might reappear. Although motion extrapolation is critical in daily life (imagine crossing the street while an approaching car is occluded by a larger, stationary vehicle), its neural underpinnings are still not well understood. While the engagement of low-level visual cortex during dynamic occlusion has been postulated, most previous group-level fMRI studies failed to find evidence for an involvement of low-level visual areas during occlusion. In this fMRI study, we therefore used individually defined retinotopic maps and multivariate pattern analysis to characterize the neural basis of visible and occluded changes in motion direction in humans. To this end, participants learned velocity-direction change pairings (slow motion paired with an upward change and fast motion with a downward change, or vice versa) during a training phase without occlusion and then judged the change in stimulus direction, based on its velocity, during a subsequent test phase with occlusion. We find that occluded motion direction can be predicted from the activity patterns evoked during visible motion within low-level visual areas, supporting the notion of a mental representation of the motion trajectory in these regions during occlusion.
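The cross-classification logic described above (train a classifier on activity patterns from visible motion, test it on patterns from occluded motion) can be sketched as follows; the voxel data, labels and use of scikit-learn are illustrative assumptions, not the authors' actual pipeline.

```python
# Hypothetical cross-classification sketch: decode occluded motion direction
# from a classifier trained on visible-motion activity patterns.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Placeholder data: trials x voxels (e.g., voxels from a retinotopically
# defined low-level visual ROI such as V1).
X_visible = rng.standard_normal((80, 200))   # training: visible motion
y_visible = rng.integers(0, 2, 80)           # 0 = upwards, 1 = downwards
X_occluded = rng.standard_normal((40, 200))  # test: occluded motion
y_occluded = rng.integers(0, 2, 40)

clf = make_pipeline(StandardScaler(), LinearSVC())
clf.fit(X_visible, y_visible)                # learn direction from visible trials
acc = clf.score(X_occluded, y_occluded)      # generalisation to occlusion
print(f"cross-classification accuracy: {acc:.2f}")
```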
Subject(s)
Motion Perception , Visual Cortex , Humans , Motion Perception/physiology , Primary Visual Cortex , Brain Mapping , Visual Cortex/diagnostic imaging , Visual Cortex/physiology , Brain , Magnetic Resonance Imaging , Motion , Photic Stimulation
ABSTRACT
Taste processing is an essential ability in all animals, signaling potential harm or benefit of ingestive behavior. However, current evidence for cortical taste representations remains contradictory. To address this issue, high-resolution functional MRI (fMRI) and multivariate pattern analysis were used to characterize taste-related informational content in human insular cortex, which contains primary gustatory cortex. Human participants judged pleasantness and intensity of low- and high-concentration tastes (salty, sweet, sour, and bitter) in two fMRI experiments on two different days to test for task- and concentration-invariant taste representations. We observed patterns of fMRI activity within insular cortex that were narrowly tuned to specific tastants, consistently across tasks and in all participants. Fewer patterns responded to more than one taste category. Importantly, changes in taste concentration altered the spatial layout of putative taste-specific patterns, with distinct, almost nonoverlapping patterns for each taste category at different concentration levels. Together, our results point to macroscopic representations in human insular cortex as a complex function of taste category and concentration, rather than representations based solely on taste identity.
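One way to quantify the reported near-nonoverlap of taste-specific patterns across concentration levels is an overlap metric such as the Dice coefficient between binarised voxel maps; the sketch below is a generic illustration with simulated maps, not the analysis used in the study.

```python
# Hypothetical overlap analysis: Dice coefficient between binarised voxel maps
# of the same taste at low vs. high concentration.
import numpy as np

def dice(map_a: np.ndarray, map_b: np.ndarray) -> float:
    """Dice overlap of two boolean voxel maps."""
    a, b = map_a.astype(bool), map_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 0.0

rng = np.random.default_rng(1)
n_voxels = 500
# Placeholder "taste-preferring" maps (True = voxel assigned to this taste).
sweet_low = rng.random(n_voxels) > 0.9
sweet_high = rng.random(n_voxels) > 0.9

print(f"Dice(sweet low vs. high concentration): {dice(sweet_low, sweet_high):.2f}")
```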
Subject(s)
Cerebral Cortex/metabolism , Taste Perception/physiology , Taste/physiology , Adult , Brain Mapping/methods , Female , Healthy Volunteers , Humans , Magnetic Resonance Imaging/methods , Male , Multivariate Analysis , Young Adult
ABSTRACT
Expectations about the temporal occurrence of events (when) are often tied to expectations about certain event-related properties (what and where) happening at these time points. For instance, when slowly waking up in the morning we expect our alarm clock to go off; the longer we do not hear it, the more likely it is that we have already missed it. Yet most current evidence for complex time-based event-related expectations (TBEEs) comes from the visual modality. Here we tested whether implicit TBEEs can act cross-modally. To this end, visual and auditory stimulus streams were presented which contained early and late targets embedded among distractors (to maximise temporal target uncertainty). Foreperiod-modality contingencies were manipulated run-wise: visual targets occurred early in 80% of trials and auditory targets late in 80% of trials, or vice versa. Participants showed increased sensitivity for expected auditory-early/visual-late targets, a benefit that increased over time, while the opposite pattern was observed for visual-early/auditory-late targets. A benefit in reaction times was found only for auditory-early trials. Together, this pattern of results suggests that implicit context-dependent TBEEs for auditory targets after short foreperiods (be they correct or not) dominated and determined which modality became more expected at the late position, irrespective of the veridical statistical regularity. Hence, TBEEs in cross-modal and uncertain environments are context-dependent, shaped by the dominant modality in temporal tasks (i.e., audition), and only boost performance cross-modally when expectations about the event after the short foreperiod match the run-wise context (i.e., auditory early/visual late).
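The sensitivity contrasts reported here are conventionally computed as d' from hit and false-alarm rates; a minimal sketch with made-up trial counts (and a log-linear correction against extreme rates) is shown below.

```python
# Hypothetical d' computation for expected vs. unexpected targets,
# using a log-linear correction against hit/false-alarm rates of 0 or 1.
from scipy.stats import norm

def dprime(hits, misses, fas, crs):
    hr = (hits + 0.5) / (hits + misses + 1.0)   # corrected hit rate
    far = (fas + 0.5) / (fas + crs + 1.0)       # corrected false-alarm rate
    return norm.ppf(hr) - norm.ppf(far)

# Made-up trial counts for illustration only.
print("expected target:   d' =", round(dprime(hits=78, misses=22, fas=12, crs=88), 2))
print("unexpected target: d' =", round(dprime(hits=65, misses=35, fas=18, crs=82), 2))
```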
Subject(s)
Auditory Perception , Motivation , Acoustic Stimulation , Humans , Photic Stimulation/methods , Reaction Time , Visual Perception
ABSTRACT
Task-irrelevant visual stimuli can enhance auditory perception. However, while there is some neurophysiological evidence for mechanisms that could underlie this phenomenon, the neural basis of visually induced effects on auditory perception remains unknown. Combining fMRI and EEG with psychophysical measurements in two independent studies, we identified the neural underpinnings and temporal dynamics of visually induced auditory enhancement. Lower- and higher-intensity sounds were paired with a non-informative visual stimulus while participants performed an auditory detection task. Behaviourally, visual co-stimulation enhanced auditory sensitivity. Using fMRI, enhanced BOLD signals were observed in primary auditory cortex for low-intensity audiovisual stimuli, and these scaled with subject-specific enhancements in perceptual sensitivity. Concordantly, a modulation of event-related potentials was already observed over frontal electrodes at an early latency (30-80 ms), which again scaled with subject-specific behavioural benefits. Later modulations, starting around 280 ms (i.e., in the time range of the P3), did not fit this pattern of brain-behaviour correspondence. Hence, the latency of the corresponding fMRI-EEG brain-behaviour modulation points to an early interplay of visual and auditory signals in low-level auditory cortex, potentially mediated by crosstalk at the level of the thalamus. However, fMRI signals in primary auditory cortex, auditory thalamus and the P50 for higher-intensity auditory stimuli were also elevated by visual co-stimulation (in the absence of any behavioural effect), suggesting a general, intensity-independent integration mechanism. We propose that this automatic interaction occurs at the level of the thalamus and might signify a first step of audiovisual interplay necessary for visually induced perceptual enhancement of auditory perception.
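The brain-behaviour correspondence described above amounts to a between-subject correlation between the early (30-80 ms) ERP modulation and each participant's perceptual benefit; a minimal sketch with simulated data, assuming a 500 Hz sampling rate and a precomputed audiovisual-minus-auditory difference wave, is given below.

```python
# Hypothetical brain-behaviour correlation: mean ERP difference (AV - A) in an
# early window (30-80 ms) vs. each subject's gain in perceptual sensitivity.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
n_subjects = 20
sfreq = 500                                       # assumed sampling rate (Hz)
times = np.arange(0.0, 0.4, 1.0 / sfreq)          # 0-400 ms post-stimulus

# Simulated difference wave (frontal electrodes) and behavioural d' benefit.
erp_diff = rng.standard_normal((n_subjects, times.size))
behav_benefit = rng.standard_normal(n_subjects)

window = (times >= 0.03) & (times <= 0.08)        # 30-80 ms
early_amp = erp_diff[:, window].mean(axis=1)

r, p = pearsonr(early_amp, behav_benefit)
print(f"r = {r:.2f}, p = {p:.3f}")
```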
Subject(s)
Auditory Cortex , Visual Perception , Acoustic Stimulation , Auditory Perception , Evoked Potentials , Humans , Photic Stimulation
ABSTRACT
Only small amounts of visual information, as determined by the capacity of working memory, can be held in an active and accessible state. It is therefore important to select and maintain relevant information while ignoring irrelevant information. However, the underlying neural mechanism of these processes has yet to be identified. One potential candidate is alpha oscillations (8-14 Hz), which have been shown to inhibit stimulus processing in perceptual tasks. During memory maintenance, alpha power increases with set size, suggesting that alpha oscillations are involved either in memory maintenance itself or in the inhibition of task-irrelevant information to protect relevant information from interference. The need for such protection should increase with the amount of distracting information, but most previous studies did not present any distractors. We therefore directly tested whether alpha oscillations are involved in the inhibition of distractors during memory maintenance. Participants memorized the orientation of one or two target lines embedded among irrelevant distractors. Distractors were either strong or weak and were present during the retention interval, after which participants reported the orientation of probed targets. Computational modeling showed that performance decreased with increasing set size and stronger distraction. Alpha power in the retention interval generally increased with set size, replicating previous studies. However, stronger distractors reduced alpha power. This finding stands in clear contradistinction to previous suggestions, as a decrease in alpha power indicates higher neuronal excitability. Thus, our data do not support the suggested role of alpha oscillations in the inhibition of distraction in working memory.
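Retention-interval alpha power (8-14 Hz) is commonly estimated with a spectral decomposition; the sketch below uses Welch's method on simulated single-channel epochs purely to illustrate the band-power computation, not the study's actual analysis.

```python
# Hypothetical alpha-band (8-14 Hz) power estimate for retention-interval
# epochs, using Welch's method on simulated single-channel data.
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(3)
sfreq = 250                                    # assumed sampling rate (Hz)
epochs = rng.standard_normal((40, sfreq * 2))  # 40 epochs x 2 s retention interval

freqs, psd = welch(epochs, fs=sfreq, nperseg=sfreq)  # psd: epochs x frequencies
alpha = (freqs >= 8) & (freqs <= 14)
alpha_power = psd[:, alpha].mean(axis=1)       # mean alpha power per epoch

print(f"mean alpha power across epochs: {alpha_power.mean():.3f}")
```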
Subject(s)
Alpha Rhythm/physiology , Attention/physiology , Memory, Short-Term/physiology , Visual Perception/physiology , Adult , Electroencephalography/methods , Female , Humans , Inhibition, Psychological , Male , Photic Stimulation/methods , Reaction Time/physiology , Young Adult
ABSTRACT
Change blindness, the failure to detect changes in visual scenes, has often been interpreted as a result of impoverished visual information encoding or as a failure to compare the prechange and postchange scene. In the present electroencephalography study, we investigated whether semantic features of prechange and postchange information are processed unconsciously, even when observers are unaware that a change has occurred. We presented scenes composed of natural objects in which one object changed from one presentation to the next. Object changes were either semantically related (e.g., a rail car changed to a rail) or unrelated (e.g., a rail car changed to a sausage). Observers were first asked to detect whether any change had occurred and then to judge the semantic relation of the two objects involved in the change. We found a semantic mismatch ERP effect, that is, a more negative-going ERP for semantically unrelated compared with related changes, originating from a cortical network including the left middle temporal gyrus and occipital cortex and resembling the N400 effect, albeit at longer latencies. Importantly, this semantic mismatch effect persisted even when observers were unaware of the change and of the semantic relationship between the prechange and postchange object. This finding implies that change blindness does not preclude the encoding of the prechange and postchange objects' identities, and possibly not even the comparison of their semantic content. Thus, change blindness cannot be interpreted as resulting from impoverished or volatile visual representations or from a failure to process the prechange and postchange object. Instead, change detection appears to be limited at a later, postperceptual stage.
Subject(s)
Blindness/physiopathology , Pattern Recognition, Visual/physiology , Semantics , Signal Detection, Psychological/physiology , Unconsciousness , Adult , Brain Mapping , Electroencephalography , Evoked Potentials , Female , Humans , Male , Photic Stimulation , Psychophysics , Statistics, Nonparametric , Young Adult
ABSTRACT
The change blindness paradigm, in which participants often fail to notice substantial changes in a scene, is a popular tool for studying scene perception, visual memory, and the link between awareness and attention. Some of the most striking and popular examples of change blindness have been demonstrated with digital photographs of natural scenes; in most studies, however, much simpler displays, such as abstract stimuli or "free-floating" objects, are used. Although simple displays have undeniable advantages, natural scenes remain a very useful and attractive stimulus for change blindness research. To assist researchers interested in using natural-scene stimuli in change blindness experiments, we provide here a step-by-step tutorial on how to produce changes in natural-scene images with a freely available image-processing tool (GIMP). We explain how changes in a scene can be made, in just a few simple steps, by deleting objects, relocating them within the scene, or changing an object's color. We also explain how the physical properties of such changes can be analyzed using GIMP and MATLAB (a high-level scientific programming tool). Finally, we present an experiment confirming that scenes manipulated according to our guidelines are effective in inducing change blindness, and demonstrating how change blindness relates to the physical properties of the change and to inter-individual differences in performance. We expect that this tutorial will be useful for researchers interested in studying the mechanisms of change blindness, attention, or visual memory using natural scenes.
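The analysis of the physical properties of a change (done in the tutorial with GIMP and MATLAB) essentially compares the pre- and post-change images pixel by pixel; a rough Python analogue on synthetic images, with an arbitrary change threshold, is sketched below.

```python
# Rough Python analogue of quantifying the physical properties of a change
# between pre- and post-change scene images (the tutorial itself uses
# GIMP and MATLAB); two synthetic RGB images stand in for real photographs.
import numpy as np

rng = np.random.default_rng(6)
pre = rng.integers(0, 256, (480, 640, 3)).astype(float)
post = pre.copy()
post[200:260, 300:380] = rng.integers(0, 256, (60, 80, 3))  # simulated object change

diff = np.abs(pre - post).mean(axis=2)        # per-pixel change magnitude
changed = diff > 10                           # arbitrary threshold (0-255 scale)

area_pct = 100 * changed.mean()               # proportion of changed pixels
lum_shift = (post.mean(axis=2)[changed] - pre.mean(axis=2)[changed]).mean()

print(f"changed area: {area_pct:.1f}% of the image")
print(f"mean luminance shift within the change region: {lum_shift:.2f}")
```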
Subject(s)
Attention/physiology , Memory , Visual Perception/physiology , Adult , Awareness , Computer Simulation , Female , Humans , Individuality , Male , Photography , Software , Young Adult
ABSTRACT
Temporal structures in the environment can shape temporal expectations (TE), and previous studies demonstrated that TEs interact with multisensory interplay (MSI) when multisensory stimuli are presented synchronously. Here, we tested whether other types of MSI, evoked by asynchronous yet temporally flanking irrelevant stimuli, result in similar performance patterns. To this end, we presented sequences of 12 stimuli (10 Hz) consisting of auditory (A), visual (V) or alternating auditory-visual stimuli (e.g. A-V-A-V-...) with either auditory or visual targets (Exp. 1). Participants discriminated target frequencies (auditory pitch or visual spatial frequency) embedded in these sequences. To test effects of TE, the proportion of early and late temporal target positions was manipulated run-wise. Performance for unisensory targets was affected by temporally flanking distractors, with auditory temporal flankers selectively improving visual target perception (Exp. 1). However, no effect of temporal expectation was observed. Control experiments (Exp. 2-3) tested whether this lack of a TE effect was due to the higher presentation frequency in Exp. 1 relative to previous experiments. Importantly, even at higher stimulation frequencies, redundant multisensory targets (Exp. 2-3) reliably modulated TEs. Together, our results indicate that visual target detection was enhanced by MSI. However, this cross-modal enhancement, in contrast to the redundant target effect, was still insufficient to generate TEs. We posit that unisensory target representations were either unstable or insufficient for the generation of TEs while less demanding MSI still occurred, highlighting the need for robust stimulus representations when generating temporal expectations.
Subject(s)
Auditory Perception , Motivation , Acoustic Stimulation , Humans , Photic Stimulation , Visual Perception
ABSTRACT
While temporal expectations (TE) generally improve reactions to temporally predictable events, it remains unknown how the learning of temporal regularities (one time point being more likely than another) and explicit knowledge about temporal regularities contribute to performance improvements, and whether any contributions generalise across modalities. Here, participants discriminated the frequency of diverging auditory, visual or audio-visual targets embedded in auditory, visual or audio-visual distractor sequences. Temporal regularities were manipulated run-wise (early vs. late target within the sequence). Behavioural performance (accuracy, RT), together with measures from a computational learning model, suggests that learning of temporal regularities occurred but did not generalise across modalities, and that the dynamics of learning (the size of the TE effect across runs) and explicit knowledge have little to no effect on the strength of TE. Remarkably, explicit knowledge affects performance, if at all, in a context-dependent manner: only under complex task regimes (here, unknown target modality) might it partially help to resolve response conflict, while it lowers performance in less complex environments.
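The computational learning model is not specified in this abstract; as a generic stand-in, a simple delta-rule learner that updates the believed probability of an early target trial by trial is sketched below (learning rate, coding and run statistics are assumptions).

```python
# Generic delta-rule sketch: trial-by-trial update of the believed probability
# that the target occurs early in the sequence (illustrative only, not the
# paper's actual model).
import numpy as np

rng = np.random.default_rng(4)
alpha = 0.1                                   # assumed learning rate
p_early = 0.5                                 # prior belief: early and late equally likely

trials = rng.random(60) < 0.8                 # one run with 80% early targets
belief = []
for early in trials:
    p_early += alpha * (float(early) - p_early)   # delta-rule update
    belief.append(p_early)

print(f"belief after run: P(early) ~ {belief[-1]:.2f}")
```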
Subject(s)
Learning , Motivation , Attention , Auditory Perception , Humans , Visual Perception
ABSTRACT
Learning the statistical regularities of environmental events is a powerful tool for enhancing performance. However, it remains unclear whether this often implicit type of behavioral facilitation can be proactively modulated by explicit knowledge about temporal regularities. Only recently, Menceloglu and colleagues (Attention, Perception & Psychophysics, 79(1), 169-179, 2017) tested for differences between implicit and explicit statistical learning of temporal regularities by using a within-paradigm manipulation of metacognitive temporal knowledge. The authors reported that temporal expectations were enhanced if participants had explicit knowledge about the temporal regularities. Here, we attempted to replicate and extend their results and to provide a mechanistic framework for any effects by means of computational modelling. Participants performed a letter-discrimination task, with target letters embedded in congruent or incongruent flankers. Temporal predictability was manipulated block-wise, with targets occurring more often after either a short or a long delay period. During the delay a sound was presented in half of the trials. Explicit knowledge about temporal regularities was manipulated via instruction: participants received no information (implicit), information about the most likely cue-target delay (explicit), or 100% valid cues on each trial (highly explicit). We replicated previous effects of target-flanker congruence and sound presence. However, Bayesian statistics provided no evidence for an effect of explicit knowledge on temporal expectations. Concordantly, computational modelling suggested that explicit knowledge may only influence non-perceptual processing such as response criteria. Together, our results indicate that explicit metacognitive knowledge does not necessarily alter sensory representations or temporal expectations but rather affects response strategies.
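The Bayesian null evidence mentioned above can be approximated in several ways; one common illustration is the BIC approximation to the Bayes factor (BF01 = exp((BIC_alt - BIC_null)/2), Wagenmakers, 2007), here comparing regression models with and without an explicit-knowledge term on simulated data. This is a hedged sketch, not necessarily the analysis used in the study.

```python
# Illustrative BIC approximation to a Bayes factor in favour of the null
# (no effect of explicit knowledge on the temporal-expectation effect).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 90
knowledge = np.repeat([0, 1, 2], n // 3)      # implicit / explicit / highly explicit
te_effect = rng.normal(30, 15, n)             # simulated RT benefit (ms), no true effect

X_null = np.ones((n, 1))                      # intercept-only model
X_alt = sm.add_constant(knowledge.astype(float))

bic_null = sm.OLS(te_effect, X_null).fit().bic
bic_alt = sm.OLS(te_effect, X_alt).fit().bic

bf01 = np.exp((bic_alt - bic_null) / 2)       # BIC approximation (Wagenmakers, 2007)
print(f"BF01 ~ {bf01:.2f} (values > 3 favour the null)")
```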
Subject(s)
Attention , Motivation , Bayes Theorem , Cues , Humans , Learning
ABSTRACT
Temporal regularities can guide our attention to focus on a particular moment in time and to be especially vigilant just then. Previous research provided evidence for the influence of temporal expectation on perceptual processing in unisensory auditory, visual, and tactile contexts. However, in real life we are often exposed to a complex and continuous stream of multisensory events. Here we tested, in a series of experiments, whether temporal expectations can enhance perception in multisensory contexts and whether this enhancement differs from enhancements in unisensory contexts. Our discrimination paradigm contained near-threshold targets (subject-specific 75% discrimination accuracy) embedded in a sequence of distractors. The likelihood of target occurrence (early or late) was manipulated block-wise. Furthermore, we tested whether spatial and modality-specific target uncertainty (i.e., predictable vs. unpredictable target position or modality) would affect temporal expectation (TE), measured with perceptual sensitivity (d') and response times (RT). In all our experiments, hidden temporal regularities improved performance for expected multisensory targets. Moreover, multisensory performance was unaffected by spatial and modality-specific uncertainty, whereas unisensory TE effects on d' (but not RT) were modulated by spatial and modality-specific uncertainty. Additionally, the size of the temporal expectation effect, i.e., the increase in perceptual sensitivity and decrease in RT, scaled linearly with the likelihood of expected targets. Finally, temporal expectation effects were unaffected by varying target position within the stream. Together, our results strongly suggest that participants quickly adapt to novel temporal contexts, that they benefit from multisensory (relative to unisensory) stimulation, and that multisensory benefits are maximal when stimulus-driven uncertainty is highest. We propose that the enhanced informational content of multisensory stimulation enables the robust extraction of temporal regularities, which in turn boost (uni-)sensory representations.
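The reported linear scaling of the temporal-expectation benefit with target likelihood can be visualised with a simple linear fit; the likelihoods and effect sizes below are placeholders, not data from the study.

```python
# Placeholder check of linear scaling: temporal-expectation benefit in d'
# as a function of the likelihood of the expected (e.g., early) target.
import numpy as np

likelihood = np.array([0.6, 0.7, 0.8, 0.9])          # block-wise target likelihood
dprime_benefit = np.array([0.10, 0.18, 0.27, 0.34])  # made-up TE effect sizes

slope, intercept = np.polyfit(likelihood, dprime_benefit, deg=1)
print(f"d' benefit ~ {intercept:.2f} + {slope:.2f} * likelihood")
```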
Subject(s)
Anticipation, Psychological/physiology , Auditory Perception/physiology , Pattern Recognition, Visual/physiology , Psychomotor Performance/physiology , Time Perception/physiology , Uncertainty , Adult , Humans , Young Adult
ABSTRACT
Every moment, organisms are confronted with complex streams of information which they use to generate a reliable mental model of the world. There is converging evidence for several optimization mechanisms instrumental in integrating (or segregating) incoming information; among them are multisensory interplay (MSI) and temporal expectation (TE). Both mechanisms can account for enhanced perceptual sensitivity and are well studied in isolation; how the two interact is currently less well known. Here, we tested, in a series of four psychophysical experiments, for TE effects in uni- and multisensory contexts with different levels of modality-related and spatial uncertainty. We found that TE enhanced perceptual sensitivity for the multisensory relative to the best unisensory condition (i.e., multisensory facilitation according to the max-criterion). In the latter, TE effects even vanished when stimulus-related spatial uncertainty was increased. Accordingly, computational modelling indicated that TE, modality-related uncertainty and spatial uncertainty predict multisensory facilitation. Finally, the analysis of stimulus history revealed that a matching expectation on trial n-1 selectively improves multisensory performance irrespective of stimulus-related uncertainty. Together, our results indicate that the benefits of multisensory stimulation are enhanced by TE especially in noisy environments, which allows for more robust information extraction to boost performance on both short and sustained time ranges.
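Multisensory facilitation "according to the max-criterion" means that multisensory performance must exceed the best unisensory condition; a small sketch with invented per-subject sensitivity values illustrates the comparison.

```python
# Max-criterion check: multisensory d' must exceed the best unisensory d'
# to count as multisensory facilitation (values are invented).
import numpy as np

d_auditory = np.array([1.1, 1.3, 0.9, 1.2])      # per-subject unisensory d'
d_visual = np.array([1.0, 1.4, 1.1, 1.0])
d_audiovisual = np.array([1.4, 1.6, 1.2, 1.5])   # per-subject multisensory d'

best_unisensory = np.maximum(d_auditory, d_visual)
facilitation = d_audiovisual - best_unisensory   # > 0 indicates facilitation

print("max-criterion facilitation per subject:", np.round(facilitation, 2))
```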
Subject(s)
Auditory Perception/physiology , Motivation/physiology , Visual Perception/physiology , Acoustic Stimulation , Adult , Computer Simulation , Female , Humans , Models, Theoretical , Noise , Psychophysiology , Uncertainty , Young Adult
ABSTRACT
Studies on change detection and change blindness have investigated the nature of visual representations by testing the conditions under which observers are able to detect when an object in a complex scene changes from one moment to the next. Several authors have proposed that change detection can occur without identification of the changing object, but the perceptual processes underlying this phenomenon are currently unknown. We hypothesized that change detection without localization or identification occurs when the change happens outside the focus of attention. Such changes would usually go entirely unnoticed, unless the change brings about a modification of one of the feature maps representing the scene. Thus, the appearance or disappearance of a unique feature might be registered even in the absence of focused attention and without feature binding, allowing for change detection but not localization or identification. We tested this hypothesis in three experiments in which changes involved either colors that were already present elsewhere in the display or entirely unique colors. Observers detected whether any change had occurred and then localized or identified the change. Change detection without localization occurred almost exclusively when changes involved a unique color. Moreover, change detection without localization for unique feature changes was independent of the number of objects in the display and independent of change identification. These findings suggest that pre-attentive registration of a change on a feature map can give rise to a conscious experience, even when feature binding has failed, that something has changed without knowing what or where.