Results 1 - 20 of 35
1.
Cereb Cortex ; 34(6)2024 Jun 04.
Article in English | MEDLINE | ID: mdl-38864574

ABSTRACT

The amygdala is present in a diverse range of vertebrate species, such as lizards, rodents, and primates; however, its structure and connectivity differ across species. The increased connections to visual sensory areas in primate species suggest that understanding the visual selectivity of the amygdala in detail is critical to revealing the principles underlying its function in primate cognition. Therefore, we designed a high-resolution, contrast-agent-enhanced, event-related fMRI experiment and scanned 3 adult rhesus macaques while they viewed 96 naturalistic stimuli. Half of these stimuli were social (defined by the presence of a conspecific); the other half were nonsocial. We also nested manipulations of emotional valence (positive, neutral, and negative) and visual category (faces, nonfaces, animate, and inanimate) within the stimulus set. The results reveal widespread effects of emotional valence, with the amygdala responding more on average to inanimate objects and animals than to faces, bodies, or social agents in this experimental context. These findings suggest that the amygdala makes a contribution to primate vision that goes beyond an auxiliary role in face or social perception. Furthermore, the results highlight the importance of stimulus selection and experimental design when probing the function of the amygdala and other visually responsive brain regions.


Subject(s)
Amygdala , Macaca mulatta , Magnetic Resonance Imaging , Photic Stimulation , Animals , Amygdala/physiology , Amygdala/diagnostic imaging , Male , Photic Stimulation/methods , Emotions/physiology , Brain Mapping , Visual Perception/physiology , Female , Pattern Recognition, Visual/physiology
2.
Cognition ; 235: 105398, 2023 06.
Article in English | MEDLINE | ID: mdl-36791506

ABSTRACT

Face pareidolia is the experience of seeing illusory faces in inanimate objects. While children experience face pareidolia, it is unknown whether they perceive gender in illusory faces, as their face evaluation system is still developing in the first decade of life. In a sample of 412 children and adults from 4 to 80 years of age, we found that, like adults, children perceived many illusory faces in objects to have a gender and had a strong bias to see them as male rather than female, regardless of their own gender identification. These results provide evidence that the male bias for face pareidolia emerges early in life, even before the ability to discriminate gender from facial cues alone is fully developed. Further, the existence of a male bias in children suggests that any social context that elicits the cognitive bias to see faces as male has remained relatively consistent across generations.


Subject(s)
Face , Illusions , Adult , Humans , Male , Child , Female , Illusions/psychology
3.
Cortex ; 158: 71-82, 2023 01.
Article in English | MEDLINE | ID: mdl-36459788

ABSTRACT

The recall and visualization of people and places from memory is an everyday occurrence, yet the neural mechanisms underpinning this phenomenon are not well understood. In particular, the temporal characteristics of the internal representations generated by active recall are unclear. Here, we used magnetoencephalography (MEG) and multivariate pattern analysis to measure the evolving neural representation of familiar places and people across the whole brain when human participants engage in active recall. To isolate self-generated imagined representations, we used a retro-cue paradigm in which participants were first presented with two possible labels before being cued to recall either the first or second item. We collected personalized labels for specific locations and people familiar to each participant. Importantly, no visual stimuli were presented during the recall period, and the retro-cue paradigm allowed the dissociation of responses associated with the labels from those corresponding to the self-generated representations. First, we found that following the retro-cue it took on average ∼1000 ms for distinct neural representations of freely recalled people or places to develop. Second, we found distinct representations of personally familiar concepts throughout the 4 s recall period. Finally, we found that these representations were highly stable and generalizable across time. These results suggest that self-generated visualizations and recall of familiar places and people are subserved by a stable neural mechanism that operates relatively slowly when under conscious control.


Subject(s)
Cues , Mental Recall , Humans , Mental Recall/physiology , Brain/physiology , Brain Mapping , Magnetoencephalography
4.
J Neurosci ; 2022 Jul 21.
Article in English | MEDLINE | ID: mdl-35868861

ABSTRACT

According to a prominent view in neuroscience, visual stimuli are coded by discrete cortical networks that respond preferentially to specific categories, such as faces or objects. However, it remains unclear how these category-selective networks respond when viewing conditions are cluttered, i.e., when there is more than one stimulus in the visual field. Here, we asked three questions: (1) Does clutter reduce the response and selectivity for faces as a function of retinal location? (2) Is the preferential response to faces uniform across the visual field? And (3) Does the ventral visual pathway encode information about the location of cluttered faces? We used fMRI to measure the response of the face-selective network in awake, fixating macaques (2 female, 5 male). Across a series of four experiments, we manipulated the presence and absence of clutter, as well as the location of the faces relative to the fovea. We found that clutter reduces the response to peripheral faces. When presented in isolation, without clutter, the selectivity for faces is fairly uniform across the visual field, but, when clutter is present, there is a marked decrease in the selectivity for peripheral faces. We also found no evidence of a contralateral visual field bias when faces were presented in clutter. Nonetheless, multivariate analyses revealed that the location of cluttered faces could be decoded from the multivoxel response of the face-selective network. Collectively, these findings demonstrate that clutter blunts the selectivity of the face-selective network to peripheral faces, although information about their retinal location is retained.

SIGNIFICANCE STATEMENT

Numerous studies that have measured brain activity in macaques have found visual regions that respond preferentially to faces. Although these regions are thought to be essential for social behavior, their responses have typically been measured while faces were presented in isolation, a situation atypical of the real world. How do these regions respond when faces are presented with other stimuli? We report that, when clutter is present, the preferential response to foveated faces is spared but the preferential response to peripheral faces is reduced. Our results indicate that the presence of clutter changes the response of the face-selective network.

5.
Soc Cogn Affect Neurosci ; 17(11): 965-976, 2022 11 02.
Article in English | MEDLINE | ID: mdl-35445247

ABSTRACT

Face detection is a foundational social skill for primates. This vital function is thought to be supported by specialized neural mechanisms; however, although several face-selective regions have been identified in both humans and nonhuman primates, there is no consensus about which region(s) are involved in face detection. Here, we used naturally occurring errors of face detection (i.e. objects with illusory facial features referred to as examples of 'face pareidolia') to identify regions of the macaque brain implicated in face detection. Using whole-brain functional magnetic resonance imaging to test awake rhesus macaques, we discovered that a subset of face-selective patches in the inferior temporal cortex, on the lower lateral edge of the superior temporal sulcus, and the amygdala respond more to objects with illusory facial features than matched non-face objects. Multivariate analyses of the data revealed differences in the representation of illusory faces across the functionally defined regions of interest. These differences suggest that the cortical and subcortical face-selective regions contribute uniquely to the detection of facial features. We conclude that face detection is supported by a multiplexed system in the primate brain.


Subject(s)
Brain Mapping , Illusions , Animals , Humans , Pattern Recognition, Visual , Macaca mulatta , Magnetic Resonance Imaging/methods , Temporal Lobe
6.
Proc Natl Acad Sci U S A ; 119(5)2022 02 01.
Article in English | MEDLINE | ID: mdl-35074880

ABSTRACT

Despite our fluency in reading human faces, sometimes we mistakenly perceive illusory faces in objects, a phenomenon known as face pareidolia. Although illusory faces share some neural mechanisms with real faces, it is unknown to what degree pareidolia engages higher-level social perception beyond the detection of a face. In a series of large-scale behavioral experiments (ntotal = 3,815 adults), we found that illusory faces in inanimate objects are readily perceived to have a specific emotional expression, age, and gender. Most strikingly, we observed a strong bias to perceive illusory faces as male rather than female. This male bias could not be explained by preexisting semantic or visual gender associations with the objects, or by visual features in the images. Rather, this robust bias in the perception of gender for illusory faces reveals a cognitive bias arising from a broadly tuned face evaluation system in which minimally viable face percepts are more likely to be perceived as male.


Subject(s)
Face/physiology , Illusions/physiology , Adult , Facial Recognition/physiology , Female , Humans , Male , Photic Stimulation/methods
7.
Proc Biol Sci ; 288(1954): 20210966, 2021 07 14.
Article in English | MEDLINE | ID: mdl-34229489

ABSTRACT

Facial expressions are vital for social communication, yet the underlying mechanisms are still being discovered. Illusory faces perceived in objects (face pareidolia) are errors of face detection that share some neural mechanisms with human face processing. However, it is unknown whether expression in illusory faces engages the same mechanisms as human faces. Here, using a serial dependence paradigm, we investigated whether illusory and human faces share a common expression mechanism. First, we found that images of face pareidolia are reliably rated for expression, within and between observers, despite varying greatly in visual features. Second, they exhibit positive serial dependence for perceived facial expression, meaning an illusory face (happy or angry) is perceived as more similar in expression to the preceding one, just as seen for human faces. This suggests illusory and human faces engage similar mechanisms of temporal continuity. Third, we found robust cross-domain serial dependence of perceived expression between illusory and human faces when they were interleaved, with serial effects larger when illusory faces preceded human faces than the reverse. Together, the results support a shared mechanism for facial expression between human faces and illusory faces and suggest that expression processing is not tightly bound to human facial features.


Subject(s)
Facial Recognition , Illusions , Facial Expression , Happiness , Humans
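The serial dependence analysis described in this abstract can be sketched with synthetic data. The rating scale, trial count, and attraction strength below are illustrative assumptions, not values from the study; the key step is regressing each trial's response error on the difference between the previous and current stimulus, where a positive slope indicates an attractive (positive) serial dependence:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic expression ratings on a -1 (angry) .. +1 (happy) scale.
# We simulate a small attractive pull toward the previous stimulus
# (positive serial dependence) plus response noise. The pull strength
# of 0.15 is a hypothetical value for illustration only.
n_trials = 2000
stimulus = rng.uniform(-1, 1, n_trials)
pull = 0.15
response = stimulus.copy()
response[1:] += pull * (stimulus[:-1] - stimulus[1:])
response += rng.normal(0, 0.05, n_trials)

# Serial dependence analysis: regress the response error on the
# difference between the previous and current stimulus. A positive
# slope means responses are biased toward the preceding stimulus.
error = response - stimulus
delta_prev = stimulus[:-1] - stimulus[1:]
slope = np.polyfit(delta_prev, error[1:], 1)[0]
print(f"serial dependence slope: {slope:.3f}")  # positive -> attractive bias
```

The cross-domain result in the abstract corresponds to computing this slope when the previous trial is an illusory face and the current trial is a human face (or vice versa), rather than within a single stimulus domain.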
8.
Atten Percept Psychophys ; 83(5): 1942-1953, 2021 Jul.
Article in English | MEDLINE | ID: mdl-33768481

ABSTRACT

Face detection is a priority of both the human and primate visual systems. However, occasionally we misperceive faces in inanimate objects, a phenomenon known as "face pareidolia". A key feature of these 'false positives' is that face perception occurs in the absence of the visual features typical of real faces. Human faces are known to be located faster than objects in visual search. Here we used a visual search paradigm to test whether illusory faces share this advantage. Search times were faster for illusory faces than for matched objects amongst both matched (Experiment 1) and diverse (Experiment 2) distractors; however, search times for real human faces were faster and more efficient than for objects with or without an illusory face. Importantly, this result indicates that illusory faces are processed quickly enough by the human brain to confer a visual search advantage, suggesting the engagement of a broadly-tuned mechanism that facilitates rapid face detection in cluttered environments.


Subject(s)
Facial Recognition , Illusions , Face , Humans , Pattern Recognition, Visual , Visual Perception
9.
Neuropsychologia ; 151: 107687, 2021 01 22.
Article in English | MEDLINE | ID: mdl-33212137

ABSTRACT

Behavioural categorisation reaction times (RTs) provide a useful way to link behaviour to brain representations measured with neuroimaging. In this framework, objects are assumed to be represented in a multidimensional activation space, with the distances between object representations indicating their degree of neural similarity. Faster RTs have been reported to correlate with greater distances from a classification decision boundary for animacy. Objects inherently belong to more than one category, yet it is not known whether the RT-distance relationship, and its evolution over the time-course of the neural response, is similar across different categories. Here we used magnetoencephalography (MEG) to address this question. Our stimuli included typically animate and inanimate objects, as well as more ambiguous examples (i.e., robots and toys). We conducted four semantic categorisation tasks on the same stimulus set assessing animacy, living, moving, and human-similarity concepts, and linked the categorisation RTs to MEG time-series decoding data. Our results show a sustained RT-distance relationship throughout the time course of object processing not only for animacy but also for categorisation according to human-similarity. Interestingly, this sustained RT-distance relationship was not observed for the living and moving category organisations, despite comparable classification accuracy of the MEG data across all four category organisations. Our findings show that behavioural RTs predict representational distance for an organisational principle other than animacy; however, further research is needed to determine why this relationship is observed only for some category organisations and not others.


Subject(s)
Magnetoencephalography , Pattern Recognition, Visual , Brain/diagnostic imaging , Brain Mapping , Humans , Neuroimaging , Reaction Time
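The core RT-distance computation behind this line of work can be sketched as follows. This is a minimal illustration with synthetic "neural patterns", not the study's pipeline: a linear classifier is trained on two categories, each exemplar's distance to the decision boundary is read out, and that distance is correlated with categorisation RTs (the reported finding is that exemplars farther from the boundary are categorised faster):

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)

# Synthetic "neural patterns": two categories separated along one axis,
# with exemplars at varying distances from the category boundary.
n, n_features = 200, 50
labels = rng.integers(0, 2, n)
patterns = rng.normal(0, 1, (n, n_features))
separation = rng.uniform(0.5, 3.0, n)          # per-exemplar typicality
patterns[:, 0] += np.where(labels == 1, 1, -1) * separation

# Simulated RTs (ms): farther from the boundary -> faster categorisation.
rts = 600 - 50 * separation + rng.normal(0, 20, n)

# Distance to the classifier's decision boundary for each exemplar.
clf = LinearSVC(dual=False).fit(patterns, labels)
distance = np.abs(clf.decision_function(patterns))

rho, p = spearmanr(distance, rts)
print(f"distance-RT correlation: rho = {rho:.2f}")  # expect a negative rho
```

In the MEG version of this analysis, the classifier is trained separately at each timepoint, yielding a time course of the RT-distance correlation rather than a single value.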
10.
Prog Neurobiol ; 195: 101880, 2020 12.
Article in English | MEDLINE | ID: mdl-32918972

ABSTRACT

In the 1970s, Charlie Gross was among the first to identify neurons in the macaque inferior temporal (IT) cortex that respond selectively to faces. This seminal finding has been followed by numerous studies quantifying the visual features that trigger a response from face cells, in order to answer the question: what do face cells want? However, the connection between face-selective activity in IT cortex and visual perception remains only partially understood. Here we present fMRI results in the macaque showing that some face patches respond to illusory facial features in objects. We argue that to fully understand the functional role of face cells, we need to develop approaches that test the extent to which their response explains what we see.


Subject(s)
Brain Mapping , Facial Recognition/physiology , Illusions/physiology , Neurons/physiology , Prefrontal Cortex/physiology , Temporal Lobe/physiology , Animals , Behavior, Animal/physiology , Macaca mulatta , Magnetic Resonance Imaging , Prefrontal Cortex/diagnostic imaging , Temporal Lobe/diagnostic imaging
11.
Nat Commun ; 11(1): 4518, 2020 09 09.
Article in English | MEDLINE | ID: mdl-32908146

ABSTRACT

The human brain is specialized for face processing, yet we sometimes perceive illusory faces in objects. It is unknown whether these natural errors of face detection originate from a rapid process based on visual features or from a slower, cognitive re-interpretation. Here we use a multifaceted approach to understand both the spatial distribution and temporal dynamics of illusory face representation in the brain by combining functional magnetic resonance imaging and magnetoencephalography neuroimaging data with model-based analysis. We find that the representation of illusory faces is confined to occipital-temporal face-selective visual cortex. The temporal dynamics reveal a striking evolution in how illusory faces are represented relative to human faces and matched objects. Illusory faces are initially represented more similarly to real faces than matched objects are, but within ~250 ms, the representation transforms, and they become equivalent to ordinary objects. This is consistent with the initial recruitment of a broadly-tuned face detection mechanism which privileges sensitivity over selectivity.


Subject(s)
Facial Recognition/physiology , Illusions/physiology , Models, Neurological , Temporal Lobe/physiology , Visual Cortex/physiology , Adolescent , Adult , Brain Mapping , Computer Simulation , Female , Humans , Magnetic Resonance Imaging , Magnetoencephalography , Male , Neuroimaging , Photic Stimulation , Reaction Time , Temporal Lobe/diagnostic imaging , Visual Cortex/diagnostic imaging , Young Adult
12.
F1000Res ; 9: 2020.
Article in English | MEDLINE | ID: mdl-32566136

ABSTRACT

Object recognition is the ability to identify an object or category based on the combination of visual features observed. It is a remarkable feat of the human brain, given that the patterns of light received by the eye associated with the properties of a given object vary widely with simple changes in viewing angle, ambient lighting, and distance. Furthermore, different exemplars of a specific object category can vary widely in visual appearance, such that successful categorization requires generalization across disparate visual features. In this review, we discuss recent advances in understanding the neural representations underlying object recognition in the human brain. We highlight three current trends in the approach towards this goal within the field of cognitive neuroscience. Firstly, we consider the influence of deep neural networks both as potential models of object vision and in how their representations relate to those in the human brain. Secondly, we review the contribution that time-series neuroimaging methods have made towards understanding the temporal dynamics of object representations beyond their spatial organization within different brain regions. Finally, we argue that an increasing emphasis on the context (both visual and task) within which object recognition occurs has led to a broader conceptualization of what constitutes an object representation for the brain. We conclude by identifying some current challenges facing the experimental pursuit of understanding object recognition and outline some emerging directions that are likely to yield new insight into this complex cognitive process.


Subject(s)
Brain , Cognitive Neuroscience , Humans , Neural Networks, Computer , Time Factors , Visual Perception
13.
Atten Percept Psychophys ; 81(8): 2685-2699, 2019 Nov.
Article in English | MEDLINE | ID: mdl-31218599

ABSTRACT

The human visual system is capable of processing an enormous amount of information in a short time. Although rapid target detection has been explored extensively, less is known about target localization. Here we used natural scenes and explored the relationship between being able to detect a target (present vs. absent) and being able to localize it. Across four presentation durations (~ 33-199 ms), participants viewed scenes taken from two superordinate categories (natural and manmade), each containing exemplars from four basic scene categories. In a two-interval forced choice task, observers were asked to detect a Gabor target inserted in one of the two scenes. This was followed by one of two different localization tasks. Participants were asked either to discriminate whether the target was on the left or the right side of the display or to click on the exact location where they had seen the target. Targets could be detected and localized at our shortest exposure duration (~ 33 ms), with a predictable improvement in performance with increasing exposure duration. We saw some evidence at this shortest duration of detection without localization, but further analyses demonstrated that these trials typically reflected coarse or imprecise localization information, rather than its complete absence. Experiment 2 replicated our main findings while exploring the effect of the level of "openness" in the scene. Our results are consistent with the notion that when we are able to extract what objects are present in a scene, we also have information about where each object is, which provides crucial guidance for our goal-directed actions.


Subject(s)
Attention/physiology , Pattern Recognition, Visual/physiology , Visual Perception/physiology , Adult , Female , Humans , Male , Middle Aged , Photic Stimulation/methods , Recognition, Psychology/physiology , Time Factors , Young Adult
14.
Proc Natl Acad Sci U S A ; 115(31): 8043-8048, 2018 07 31.
Article in English | MEDLINE | ID: mdl-30012600

ABSTRACT

In free-viewing experiments, primates orient preferentially toward faces and face-like stimuli. To investigate the neural basis of this behavior, we measured the spontaneous viewing preferences of monkeys with selective bilateral amygdala lesions. The results revealed that when faces and nonface objects were presented simultaneously, monkeys with amygdala lesions had no viewing preference for either conspecific faces or illusory facial features in everyday objects. Instead of directing eye movements toward socially relevant features in natural images, we found that, after amygdala loss, monkeys are biased toward features with increased low-level salience. We conclude that the amygdala has a role in our earliest specialized response to faces, a behavior thought to be a precursor for efficient social communication and essential for the development of face-selective cortex.


Subject(s)
Amygdala/physiology , Pattern Recognition, Visual , Visual Perception , Animals , Eye Movements , Face , Female , Macaca mulatta , Male
15.
Cogn Res Princ Implic ; 3(1): 10, 2018.
Article in English | MEDLINE | ID: mdl-29707615

ABSTRACT

Humans can extract considerable information from scenes, even when these are presented extremely quickly. The ability of an experienced radiologist to rapidly detect an abnormality on a mammogram may build upon this general capacity. Although radiologists have been shown to be able to detect an abnormality 'above chance' at short durations, the extent to which abnormalities can be localised at brief presentations is less clear. Extending previous work, we presented radiologists with unilateral mammograms, 50% containing a mass, for 250 or 1000 ms. As the female breast varies with respect to the level of normal fibroglandular tissue, the images were categorised into high and low density (50% of each), resulting in difficult and easy searches, respectively. Participants were asked to decide whether there was an abnormality (detection) and then to locate the mass on a blank outline of the mammogram (localisation). We found both detection and localisation information for all conditions. Although there may be a dissociation between detection and localisation on a small proportion of trials, we find a number of factors that lead to the underestimation of localisation including stimulus variability, response imprecision and participant guesses. We emphasise the importance of taking these factors into account when interpreting results. The effect of density on detection and localisation highlights the importance of considering breast density in medical screening.

16.
J Cogn Neurosci ; 29(12): 1995-2010, 2017 Dec.
Article in English | MEDLINE | ID: mdl-28820673

ABSTRACT

Animacy is a robust organizing principle among object category representations in the human brain. Using multivariate pattern analysis methods, it has been shown that distance to the decision boundary of a classifier trained to discriminate neural activation patterns for animate and inanimate objects correlates with observer RTs for the same animacy categorization task [Ritchie, J. B., Tovar, D. A., & Carlson, T. A. Emerging object representations in the visual system predict reaction times for categorization. PLoS Computational Biology, 11, e1004316, 2015; Carlson, T. A., Ritchie, J. B., Kriegeskorte, N., Durvasula, S., & Ma, J. Reaction time for object categorization is predicted by representational distance. Journal of Cognitive Neuroscience, 26, 132-142, 2014]. Using MEG decoding, we tested if the same relationship holds when a stimulus manipulation (degradation) increases task difficulty, which we predicted would systematically decrease the distance of activation patterns from the decision boundary and increase RTs. In addition, we tested whether distance to the classifier boundary correlates with drift rates in the linear ballistic accumulator [Brown, S. D., & Heathcote, A. The simplest complete model of choice response time: Linear ballistic accumulation. Cognitive Psychology, 57, 153-178, 2008]. We found that distance to the classifier boundary correlated with RT, accuracy, and drift rates in an animacy categorization task. Split by animacy, the correlations between brain and behavior were sustained longer over the time course for animate than for inanimate stimuli. Interestingly, when examining the distance to the classifier boundary during the peak correlation between brain and behavior, we found that only degraded versions of animate, but not inanimate, objects had systematically shifted toward the classifier decision boundary as predicted. Our results support an asymmetry in the representation of animate and inanimate object categories in the human brain.


Subject(s)
Brain/physiology , Pattern Recognition, Visual/physiology , Space Perception/physiology , Adult , Analysis of Variance , Choice Behavior/physiology , Female , Humans , Judgment/physiology , Magnetoencephalography , Male , Neuropsychological Tests , Photic Stimulation , Reaction Time , Signal Processing, Computer-Assisted
17.
Curr Biol ; 27(16): 2505-2509.e2, 2017 Aug 21.
Article in English | MEDLINE | ID: mdl-28803877

ABSTRACT

Face perception in humans and nonhuman primates is rapid and accurate [1-4]. In the human brain, a network of visual-processing regions is specialized for faces [5-7]. Although face processing is a priority of the primate visual system, face detection is not infallible. Face pareidolia is the compelling illusion of perceiving facial features on inanimate objects, such as the illusory face on the surface of the moon. Although face pareidolia is commonly experienced by humans, its presence in other species is unknown. Here we provide evidence for face pareidolia in a species known to possess a complex face-processing system [8-10]: the rhesus monkey (Macaca mulatta). In a visual preference task [11, 12], monkeys looked longer at photographs of objects that elicited face pareidolia in human observers than at photographs of similar objects that did not elicit illusory faces. Examination of eye movements revealed that monkeys fixated the illusory internal facial features in a pattern consistent with how they view photographs of faces [13]. Although the specialized response to faces observed in humans [1, 3, 5-7, 14] is often argued to be continuous across primates [4, 15], it was previously unclear whether face pareidolia arose from a uniquely human capacity. For example, pareidolia could be a product of the human aptitude for perceptual abstraction or result from frequent exposure to cartoons and illustrations that anthropomorphize inanimate objects. Instead, our results indicate that the perception of illusory facial features on inanimate objects is driven by a broadly tuned face-detection mechanism that we share with other species.


Subject(s)
Eye Movements , Facial Recognition , Illusions/physiology , Macaca mulatta/physiology , Pattern Recognition, Visual , Animals , Female , Humans , Male , Young Adult
18.
Neuropsychologia ; 105: 165-176, 2017 Oct.
Article in English | MEDLINE | ID: mdl-28215698

ABSTRACT

Visual object recognition is a complex, dynamic process. Multivariate pattern analysis methods, such as decoding, have begun to reveal how the brain processes complex visual information. Recently, temporal decoding methods for EEG and MEG have offered the potential to evaluate the temporal dynamics of object recognition. Here we review the contribution of M/EEG time-series decoding methods to understanding visual object recognition in the human brain. Consistent with the current understanding of the visual processing hierarchy, low-level visual features dominate decodable object representations early in the time-course, with more abstract representations related to object category emerging later. A key finding is that the time-course of object processing is highly dynamic and rapidly evolving, with limited temporal generalisation of decodable information. Several studies have examined the emergence of object category structure, and we consider to what degree category decoding can be explained by sensitivity to low-level visual features. Finally, we evaluate recent work attempting to link human behaviour to the neural time-course of object processing.


Subject(s)
Brain/physiology , Concept Formation/physiology , Decision Making/physiology , Pattern Recognition, Visual/physiology , Brain Mapping , Humans , Photic Stimulation , Time Factors
19.
J Neurosci ; 37(5): 1187-1196, 2017 02 01.
Article in English | MEDLINE | ID: mdl-28003346

ABSTRACT

Multivariate pattern analysis is a powerful technique; however, a significant theoretical limitation in neuroscience is the ambiguity in interpreting the source of decodable information used by classifiers. This is exemplified by the continued controversy over the source of orientation decoding from fMRI responses in human V1. Recently, Carlson (2014) identified a potential source of decodable information by modeling voxel responses based on the Hubel and Wiesel (1972) ice-cube model of visual cortex. The model revealed that activity associated with the edges of gratings covaries with orientation and could potentially be used to discriminate orientation. Here we empirically evaluate whether "edge-related activity" underlies orientation decoding from patterns of BOLD response in human V1. First, we systematically mapped classifier performance as a function of stimulus location using population receptive field modeling to isolate each voxel's overlap with a large annular grating stimulus. Orientation was decodable across the stimulus; however, peak decoding performance occurred for voxels with receptive fields closer to the fovea and overlapping with the inner edge. Critically, we did not observe the expected second peak in decoding performance at the outer stimulus edge as predicted by the edge account. Second, we evaluated whether voxels that contribute most to classifier performance have receptive fields that cluster in cortical regions corresponding to the retinotopic location of the stimulus edge. Instead, we find the distribution of highly weighted voxels to be approximately random, with a modest bias toward more foveal voxels. Our results demonstrate that edge-related activity is likely not necessary for orientation decoding.

SIGNIFICANCE STATEMENT

A significant theoretical limitation of multivariate pattern analysis in neuroscience is the ambiguity in interpreting the source of decodable information used by classifiers. For example, orientation can be decoded from BOLD activation patterns in human V1, even though orientation columns are at a finer spatial scale than 3T fMRI. Consequently, the source of decodable information remains controversial. Here we test the proposal that information related to the stimulus edges underlies orientation decoding. We map voxel population receptive fields in V1 and evaluate orientation decoding performance as a function of stimulus location in retinotopic cortex. We find orientation is decodable from voxels whose receptive fields do not overlap with the stimulus edges, suggesting edge-related activity does not substantially drive orientation decoding.


Subject(s)
Orientation, Spatial/physiology , Pattern Recognition, Visual/physiology , Visual Cortex/physiology , Brain Mapping , Female , Humans , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Male , Models, Neurological , Photic Stimulation
20.
J Cogn Neurosci ; 29(4): 677-697, 2017 Apr.
Article in English | MEDLINE | ID: mdl-27779910

ABSTRACT

Multivariate pattern analysis (MVPA) or brain decoding methods have become standard practice in analyzing fMRI data. Although decoding methods have been extensively applied in brain-computer interfaces, these methods have only recently been applied to time series neuroimaging data such as MEG and EEG to address experimental questions in cognitive neuroscience. In a tutorial style review, we describe a broad set of options to inform future time series decoding studies from a cognitive neuroscience perspective. Using example MEG data, we illustrate the effects that different options in the decoding analysis pipeline can have on experimental results where the aim is to "decode" different perceptual stimuli or cognitive states over time from dynamic brain activation patterns. We show that decisions made at both preprocessing (e.g., dimensionality reduction, subsampling, trial averaging) and decoding (e.g., classifier selection, cross-validation design) stages of the analysis can significantly affect the results. In addition to standard decoding, we describe extensions to MVPA for time-varying neuroimaging data including representational similarity analysis, temporal generalization, and the interpretation of classifier weight maps. Finally, we outline important caveats in the design and interpretation of time series decoding experiments.


Subject(s)
Brain/physiology , Evoked Potentials/physiology , Functional Neuroimaging/methods , Magnetoencephalography/methods , Multivariate Analysis , Signal Processing, Computer-Assisted , Humans
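The time-series decoding approach surveyed in this tutorial review can be sketched with synthetic MEG-like data. The data dimensions, signal strength, and onset timepoint below are illustrative assumptions; the essential pattern is fitting and cross-validating a classifier independently at each timepoint to obtain a decoding-accuracy time course:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)

# Synthetic MEG-like data: trials x sensors x timepoints, with a
# class difference that emerges after a simulated "stimulus onset"
# at timepoint 30.
n_trials, n_sensors, n_times = 120, 32, 60
y = np.repeat([0, 1], n_trials // 2)
X = rng.normal(0, 1, (n_trials, n_sensors, n_times))
X[y == 1, :8, 30:] += 0.7        # signal in a subset of sensors

# Time-resolved decoding: cross-validate a classifier separately at
# every timepoint, yielding an accuracy time course.
accuracy = np.array([
    cross_val_score(LinearDiscriminantAnalysis(), X[:, :, t], y, cv=5).mean()
    for t in range(n_times)
])
print(f"pre-onset  mean accuracy: {accuracy[:30].mean():.2f}")   # ~chance (0.5)
print(f"post-onset mean accuracy: {accuracy[30:].mean():.2f}")   # above chance
```

The review's other options slot into this skeleton: trial averaging and dimensionality reduction modify `X` before the loop, classifier and cross-validation choices replace the estimator and `cv` argument, and temporal generalization extends the loop to train at one timepoint and test at every other.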