ABSTRACT
In the primate visual system, form (shape, location) and color information are processed in separate but interacting pathways. Recent access to high-resolution neuroimaging has made it possible to explore the structure of these pathways at the mesoscopic level in human visual cortex. We used 7T fMRI to observe selective activation of primary visual cortex (V1) by chromatic versus achromatic stimuli in five participants across two scanning sessions. Achromatic checkerboards with low spatial frequency and high temporal frequency targeted the color-insensitive magnocellular pathway. Chromatic checkerboards with higher spatial frequency and low temporal frequency targeted the color-selective parvocellular pathway. This work yielded three main findings. First, responses driven by chromatic stimuli had a laminar profile biased towards the superficial layers of V1, compared with responses driven by achromatic stimuli. Second, we found a stronger preference for chromatic stimuli in parafoveal V1 than in peripheral V1. Finally, we found alternating, stimulus-selective bands extending from the V1 border into V2 and V3. Similar alternating patterns have previously been found in both non-human primate (NHP) and human extrastriate cortex. Together, our findings confirm the utility of fMRI for revealing details of mesoscopic neural architecture in human cortex.
Subjects
Color Perception/physiology; Visual Cortex/physiology; Adult; Brain Mapping/methods; Female; Humans; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Male; Middle Aged; Pattern Recognition, Visual/physiology; Photic Stimulation/methods; Young Adult

ABSTRACT
At any moment in time, streams of information reach the brain through the different senses. Given this wealth of noisy information, it is essential that we select information of relevance - a function fulfilled by attention - and infer its causal structure so as to exploit redundancies across the senses. Yet the role of selective attention during causal inference in cross-modal perception is unknown. We tested experimentally whether distributing attention across vision and touch enhances cross-modal spatial integration (visual-tactile ventriloquism effect, Expt. 1) and recalibration (visual-tactile ventriloquism aftereffect, Expt. 2) compared with modality-specific attention, and then used causal-inference modeling to isolate the mechanisms behind the attentional modulation. In both experiments, we found stronger effects of vision on touch under distributed than under modality-specific attention. Model comparison confirmed that participants used Bayes-optimal causal inference to localize visual and tactile stimuli presented as part of a visual-tactile stimulus pair, whereas simultaneously collected unity judgments - indicating whether the visual-tactile pair was perceived as spatially aligned - relied on a sub-optimal heuristic. The best-fitting model revealed that attention modulated both sensory and cognitive components of causal inference. First, distributed attention led to an increase in sensory noise compared with selective attention toward one modality. Second, attending to both modalities strengthened the stimulus-independent expectation that the two signals belong together, i.e. the prior probability of a common source for vision and touch. Yet only the increase in the expectation of vision and touch sharing a common source could explain the observed enhancement of visual-tactile integration and recalibration effects under distributed attention.
In contrast, the change in sensory noise explained only a fraction of the observed enhancements, as its consequences vary with the overall level of noise and stimulus congruency. Increased sensory noise leads to enhanced integration effects for visual-tactile pairs with a large spatial discrepancy, but reduced integration effects for stimuli with a small or no cross-modal discrepancy. In sum, our study indicates a weak a priori association between visual and tactile spatial signals that can be strengthened by distributing attention across both modalities.
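The Bayes-optimal causal-inference account described above can be sketched in a few lines: the observer weighs a "common source" against an "independent sources" interpretation of a visual-tactile pair and averages the corresponding location estimates by the causal posterior. The sketch below is a minimal illustration of that computation, not the study's fitted model; the zero-mean spatial prior, the model-averaging decision rule, and all parameter values are assumptions chosen for illustration.

```python
import numpy as np

def causal_inference_estimate(x_v, x_t, sigma_v=1.0, sigma_t=4.0,
                              sigma_p=20.0, p_common=0.5):
    """Bayes-optimal causal inference for one visual-tactile pair.

    x_v, x_t: noisy visual and tactile position measurements (deg).
    Returns (tactile location estimate, posterior prob. of common source).
    All parameter values here are illustrative assumptions.
    """
    var_v, var_t, var_p = sigma_v**2, sigma_t**2, sigma_p**2
    # Likelihood of both measurements under a single common source (C=1)
    denom1 = var_v * var_t + var_v * var_p + var_t * var_p
    L1 = np.exp(-0.5 * ((x_v - x_t)**2 * var_p + x_v**2 * var_t
                        + x_t**2 * var_v) / denom1) / (2 * np.pi * np.sqrt(denom1))
    # Likelihood under two independent sources (C=2)
    denom2 = (var_v + var_p) * (var_t + var_p)
    L2 = np.exp(-0.5 * (x_v**2 / (var_v + var_p)
                        + x_t**2 / (var_t + var_p))) / (2 * np.pi * np.sqrt(denom2))
    # Posterior probability that vision and touch share a source
    post_c1 = L1 * p_common / (L1 * p_common + L2 * (1 - p_common))
    # Reliability-weighted estimates under each causal structure
    s_fused = (x_v / var_v + x_t / var_t) / (1 / var_v + 1 / var_t + 1 / var_p)
    s_t_seg = (x_t / var_t) / (1 / var_t + 1 / var_p)
    # Model averaging: blend the two estimates by the causal posterior
    s_t_hat = post_c1 * s_fused + (1 - post_c1) * s_t_seg
    return s_t_hat, post_c1
```

With these illustrative parameters, a small visual-tactile discrepancy yields a high common-source probability and a tactile estimate pulled toward the visual location, while a large discrepancy reduces both; raising `p_common` (the prior strengthened by distributed attention) increases the pull, which is the pattern behind the ventriloquism effect reported above.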
Subjects
Motivation; Touch; Attention; Bayes Theorem; Humans; Photic Stimulation; Visual Perception

ABSTRACT
There is growing interest in understanding how specific neural events that occur during sleep, including characteristic spindle oscillations between 10 and 16 Hz, are related to learning and memory. Neural events can be recorded during sleep using the well-known method of scalp electroencephalography (EEG). While publicly available sleep EEG datasets exist, most consist of only a few channels collected from specific patient groups being evaluated overnight for sleep disorders in clinical settings. The dataset described in this Data in Brief includes 22 participants who each completed EEG recordings on two separate days. The dataset includes manual annotation of sleep stages and 2528 manually annotated spindles. Signals from 64 channels were recorded continuously at 1 kHz with a high-density active electrode system while participants napped for 30 or 60 min inside a sound-attenuated testing booth after performing a high- or low-load visual working memory task, with load randomized across recording days. The high-density EEG recordings offer several advantages over single- or few-channel datasets, most notably the opportunity to explore spatial differences in the distribution of neural events: whether spindles occur locally on only a few channels or co-occur globally across many channels; whether spindle frequency, duration, and amplitude vary as a function of brain hemisphere and the anterior-posterior axis; and whether the probability of spindle occurrence varies as a function of the phase of ongoing slow oscillations. The dataset, along with Python source code for file input and signal processing, is freely available at the Open Science Framework at https://osf.io/chav7/.
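A multi-channel dataset like this makes the local-versus-global question directly computable: spindles annotated on different channels that overlap in time can be merged into one multi-channel event, and the size of each event's channel set indicates whether it was local or global. The sketch below is a minimal illustration; the tuple-based annotation format and the gap tolerance are assumptions for illustration, not the dataset's actual file layout.

```python
def group_cooccurring_spindles(spindles, gap=0.1):
    """Merge per-channel spindle annotations into multi-channel events.

    spindles: iterable of (channel, start_s, end_s) tuples - an assumed
    example format, not the dataset's actual files.
    gap: maximum silence (s) allowed between annotations merged together.
    Returns a list of {'channels', 'start', 'end'} event dicts.
    """
    events = sorted(spindles, key=lambda s: s[1])  # order by onset time
    groups = []
    for ch, start, end in events:
        # Merge with the previous event if this one starts before (or
        # shortly after) it ends; otherwise open a new event.
        if groups and start <= groups[-1]['end'] + gap:
            groups[-1]['channels'].add(ch)
            groups[-1]['end'] = max(groups[-1]['end'], end)
        else:
            groups.append({'channels': {ch}, 'start': start, 'end': end})
    return groups
```

Counting `len(group['channels'])` per event then separates spindles that occur on only a few channels from those that co-occur globally across many.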
ABSTRACT
Researchers classify critical neural events during sleep called spindles, which are related to memory consolidation, using scalp electroencephalography (EEG). Manual classification is time consuming and susceptible to low inter-rater agreement, which could be improved with an automated approach. This study presents an optimized filter-based and thresholding (FBT) model to establish a baseline against which to evaluate machine learning models that use naïve features, such as raw signals, peak frequency, and dominant power. The FBT model allows us to define sleep spindles formally in signal-processing terms, but may miss examples that most human scorers would agree are spindles. Machine learning methods should in theory approach the performance of human raters, but they require a large quantity of scored data, proper feature representation, intensive feature engineering, and model selection. We evaluated both the FBT model and machine learning models with naïve features, and show that the machine learning models derived from the FBT model improve classification performance. An automated approach designed for the current data was also applied to the DREAMS dataset [1]. With one expert's annotations as the gold standard, our pipeline yields sensitivity close to that of a second expert, with the added advantage that it can classify spindles from multiple channels when more channels are available. More importantly, our pipeline could be adapted as a guide to speed manual annotation of sleep spindles across multiple channels (6-10 s to process a 40-min EEG recording), making spindle detection faster and more objective.
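The filter-based and thresholding idea can be illustrated compactly: band-pass the EEG into the spindle band, take an amplitude envelope, and keep supra-threshold runs of spindle-like duration. The sketch below uses common default criteria; the band edges, threshold, and duration limits are assumptions for illustration, not the paper's optimized FBT parameters.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def detect_spindles_fbt(eeg, fs=1000, band=(10.0, 16.0),
                        thresh_sd=2.0, min_dur=0.5, max_dur=2.0):
    """Illustrative filter-based and thresholding (FBT) spindle detector.

    eeg: single-channel signal sampled at fs Hz. The band, threshold,
    and duration criteria are common defaults, not the study's tuned
    values. Returns a list of (start_s, end_s) event times.
    """
    # 1) Band-pass into the 10-16 Hz spindle band (zero-phase filtering)
    sos = butter(4, band, btype='bandpass', fs=fs, output='sos')
    filtered = sosfiltfilt(sos, eeg)
    # 2) Instantaneous amplitude envelope via the Hilbert transform
    envelope = np.abs(hilbert(filtered))
    # 3) Threshold the envelope at mean + k standard deviations
    above = envelope > envelope.mean() + thresh_sd * envelope.std()
    # 4) Keep supra-threshold runs with spindle-like durations
    padded = np.concatenate(([False], above, [False]))
    edges = np.flatnonzero(np.diff(padded.astype(int)))
    events = []
    for start, stop in zip(edges[::2], edges[1::2]):
        dur = (stop - start) / fs
        if min_dur <= dur <= max_dur:
            events.append((start / fs, stop / fs))
    return events
```

The threshold and duration criteria are exactly the knobs an optimized FBT model tunes, and running such a detector independently per channel is one way to obtain the multi-channel classification the pipeline exploits.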