Results 1 - 20 of 85
1.
J Neurosci ; 44(24)2024 Jun 12.
Article in English | MEDLINE | ID: mdl-38641406

ABSTRACT

Faces and bodies are processed in separate but adjacent regions in the primate visual cortex. Yet, the functional significance of dividing the whole person into areas dedicated to its face and body components, and of their neighboring locations, remains unknown. Here we hypothesized that this separation and proximity, together with a normalization mechanism, generate clutter-tolerant representations of the face, body, and whole person when presented in complex multi-category scenes. To test this hypothesis, we conducted an fMRI study in which we presented images of a person within a multi-category scene to human male and female participants and assessed the contribution of each component to the response to the scene. Our results revealed a clutter-tolerant representation of the whole person in areas selective for both faces and bodies, typically located at the border between the two category-selective regions. Regions exclusively selective for faces or bodies demonstrated clutter-tolerant representations of their preferred category, corroborating earlier findings. Thus, the adjacent locations of face- and body-selective areas enable hardwired machinery for decluttering the whole person, without the need for a dedicated population of person-selective neurons. This distinct yet proximal functional organization of category-selective brain regions enhances the representation of the socially significant whole person, along with its face and body components, within multi-category scenes.


Subject(s)
Facial Recognition , Magnetic Resonance Imaging , Humans , Male , Female , Adult , Young Adult , Facial Recognition/physiology , Brain Mapping , Pattern Recognition, Visual/physiology , Photic Stimulation/methods , Visual Cortex/physiology , Visual Cortex/diagnostic imaging , Brain/physiology , Brain/diagnostic imaging
2.
Nat Hum Behav ; 8(4): 702-717, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38332339

ABSTRACT

Mental representations of familiar categories are composed of visual and semantic information. Disentangling the contributions of visual and semantic information in humans is challenging because they are intermixed in mental representations. Deep neural networks that are trained either on images, on text, or by pairing images and text now enable us to disentangle human mental representations into their visual, visual-semantic and semantic components. Here we used these deep neural networks to uncover the content of human mental representations of familiar faces and objects when they are viewed or recalled from memory. The results show a larger visual than semantic contribution when images are viewed and a reversed pattern when they are recalled. We further reveal a previously unknown unique contribution of an integrated visual-semantic representation in both perception and memory. We propose a new framework in which visual and semantic information contribute independently and interactively to mental representations in perception and memory.
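As a rough illustration of the kind of analysis described above, the sketch below fits human pairwise similarity judgments as a weighted combination of similarities derived from a visual (image-trained), semantic (text-trained), and visual-semantic (image-text) model. All arrays are synthetic placeholders, not data or code from the study.

```python
import numpy as np
from numpy.linalg import lstsq

rng = np.random.default_rng(0)
n_items = 30                               # e.g., 30 familiar faces or objects
n_pairs = n_items * (n_items - 1) // 2

# Synthetic pairwise similarities (upper-triangle vectors) from three model families.
visual_sim   = rng.normal(size=n_pairs)    # image-trained network (visual)
semantic_sim = rng.normal(size=n_pairs)    # text-trained network (semantic)
vis_sem_sim  = rng.normal(size=n_pairs)    # image-text network (visual-semantic)

# Synthetic human similarity judgments for the same pairs (perception or memory).
human_sim = (0.5 * visual_sim + 0.2 * semantic_sim + 0.3 * vis_sem_sim
             + rng.normal(scale=0.5, size=n_pairs))

# Fit human similarity as a weighted combination of the three model predictors;
# the fitted weights index each component's contribution.
X = np.column_stack([visual_sim, semantic_sim, vis_sem_sim, np.ones(n_pairs)])
betas, *_ = lstsq(X, human_sim, rcond=None)
print(dict(zip(["visual", "semantic", "visual-semantic", "intercept"], betas.round(2))))
```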


Subject(s)
Mental Recall , Neural Networks, Computer , Semantics , Visual Perception , Humans , Female , Male , Mental Recall/physiology , Visual Perception/physiology , Adult , Young Adult , Recognition, Psychology/physiology , Facial Recognition/physiology , Memory/physiology
3.
Behav Brain Sci ; 46: e414, 2023 Dec 06.
Article in English | MEDLINE | ID: mdl-38054326

ABSTRACT

Deep neural networks (DNNs) are powerful computational models, which generate complex, high-level representations that were missing in previous models of human cognition. By studying these high-level representations, psychologists can now gain new insights into the nature and origin of human high-level vision, which was not possible with traditional handcrafted models. Abandoning DNNs would be a huge oversight for psychological sciences.


Subject(s)
Cognition , Neural Networks, Computer , Humans
4.
Proc Biol Sci ; 290(1998): 20230093, 2023 05 10.
Article in English | MEDLINE | ID: mdl-37161322

ABSTRACT

The question of whether task performance is best achieved by domain-specific or domain-general processing mechanisms is fundamental for both artificial and biological systems. This question has generated a fierce debate in the study of expert object recognition. Because humans are experts in face recognition, face-like neural and cognitive effects for objects of expertise were considered support for domain-general mechanisms. However, effects of domain, experience, and level of categorization are confounded in human studies, which may lead to erroneous inferences. To overcome these limitations, we trained deep learning algorithms on different domains (objects, faces, birds) and levels of categorization (basic, sub-ordinate, individual), matched for amount of experience. Like humans, the models generated a larger inversion effect for faces than for objects. Importantly, a face-like inversion effect was found for individual-based categorization of non-faces (birds), but only in a network specialized for that domain. Thus, contrary to prevalent assumptions, face-like effects for objects of expertise do not support domain-general mechanisms but may originate from domain-specific mechanisms. More generally, we show how deep learning algorithms can be used to dissociate factors that are inherently confounded in the natural environment of biological organisms to test hypotheses about their isolated contributions to cognition and behaviour.
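A minimal sketch of how an inversion effect can be quantified for a trained network, assuming per-item accuracies for upright and inverted images are already available; the accuracy values below are synthetic and the two network names are hypothetical, so the sketch only shows the form of the comparison.

```python
import numpy as np

def inversion_effect(acc_upright: np.ndarray, acc_inverted: np.ndarray) -> float:
    """Mean drop in recognition accuracy when images are rotated 180 degrees."""
    return float(np.mean(acc_upright - acc_inverted))

rng = np.random.default_rng(1)
# Synthetic per-identity verification accuracies for two hypothetical networks.
face_net   = {"upright": rng.uniform(0.85, 0.95, 50), "inverted": rng.uniform(0.55, 0.70, 50)}
object_net = {"upright": rng.uniform(0.80, 0.90, 50), "inverted": rng.uniform(0.75, 0.88, 50)}

print("face-trained net inversion effect:  ",
      round(inversion_effect(face_net["upright"], face_net["inverted"]), 3))
print("object-trained net inversion effect:",
      round(inversion_effect(object_net["upright"], object_net["inverted"]), 3))
```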


Subject(s)
Deep Learning , Humans , Algorithms , Chromosome Inversion , Cognition , Environment
5.
Br J Psychol ; 114 Suppl 1: 213-229, 2023 May.
Article in English | MEDLINE | ID: mdl-36018320

ABSTRACT

Faces are visual stimuli that convey rich social information. Previous experiments found better recognition for faces that were evaluated based on their social traits than on their perceptual features during encoding. Here, we ask whether this social-encoding benefit in face recognition is also found for categories of faces that we have no previous social experience with, such as other-race faces. To answer this question, we first explored whether social and perceptual evaluations for other-race faces are consistent and valid. We then asked whether social evaluations during encoding improve recognition for other-race faces. Results show that social and perceptual evaluations of own- and other-race faces were valid. We also found high agreement in social and perceptual evaluations across individuals from different races. This indicates that evaluations of other-race faces are not random but meaningful. Furthermore, we found that social evaluations facilitated face recognition regardless of race, demonstrating a social-encoding benefit for both own- and other-race faces. Our findings highlight the role of social information in face recognition and show how it can be used to improve recognition of categories of faces that are hard to recognize due to lack of experience with them.


Subject(s)
Facial Recognition , Humans , Face , Recognition, Psychology , Pattern Recognition, Visual
6.
Vision Res ; 201: 108128, 2022 12.
Article in English | MEDLINE | ID: mdl-36272208

ABSTRACT

Face recognition is a challenging classification task that humans perform effortlessly for familiar faces. Recent studies have emphasized the importance of exposure to high variability appearances of the same identity to perform this task. However, these studies did not explicitly measure the perceptual similarity between the learned images and the images presented at test, which may account for the advantage of learning from high variability. Particularly, randomly selected test images are more likely to be perceptually similar to learned high variability images, and dissimilar to learned low variability images. Here we dissociated effects of learning from variability and study-test perceptual similarity, by collecting human similarity ratings for the study and test images. Using these measures, we independently manipulated the variability between the learning images and their perceptual similarity to the test images. Different groups of participants learned face identities from a low or high variability set of images. The learning phase was followed by a face matching test (Experiment 1) or a face recognition task (Experiment 2) that presented novel images of the learned identities that were perceptually dissimilar or similar to the learned images. Results of both experiments show that perceptual similarity between study and test, rather than image variability at learning per se, predicts face recognition. We conclude that learning from high variability improves face recognition for perceptually similar but not for perceptually dissimilar images. These findings may not be specific to faces and should be similarly evaluated for other domains.


Subject(s)
Facial Recognition , Humans , Recognition, Psychology
7.
Cogn Sci ; 45(9): e13031, 2021 09.
Article in English | MEDLINE | ID: mdl-34490907

ABSTRACT

Face recognition is a computationally challenging classification task. Deep convolutional neural networks (DCNNs) are brain-inspired algorithms that have recently reached human-level performance in face and object recognition. However, it is not clear to what extent DCNNs generate a human-like representation of face identity. We have recently revealed a subset of facial features that are used by humans for face recognition. This now enables us to ask whether DCNNs rely on the same facial information and whether this human-like representation depends on a system that is optimized for face identification. In the current study, we examined DCNNs' representations of faces that differ in features that are critical or non-critical for human face recognition. Our findings show that DCNNs optimized for face identification are tuned to the same facial features used by humans for face recognition. Sensitivity to these features was highly correlated with the performance of the DCNN on a benchmark face recognition task. Moreover, sensitivity to these features and a view-invariant face representation emerged at higher layers of a DCNN optimized for face recognition but not for object recognition. This finding parallels the division into face and object systems in high-level visual cortex. Taken together, these findings validate human perceptual models of face recognition, enable us to use DCNNs to test predictions about human face and object recognition, and contribute to the interpretability of DCNNs.
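The sketch below illustrates one way to probe a network's sensitivity to critical versus non-critical feature changes by comparing embedding distances between an original face and its manipulated versions. The `embed` function here is a stand-in stub that returns random vectors rather than a real face-trained DCNN's penultimate-layer output, so the printed numbers are not meaningful; only the structure of the comparison is.

```python
import numpy as np

def cosine_distance(a: np.ndarray, b: np.ndarray) -> float:
    return 1.0 - float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def embed(image_id: str) -> np.ndarray:
    """Stub for a DCNN embedding (e.g., the penultimate layer of a face-trained network)."""
    seed = abs(hash(image_id)) % (2**32)
    return np.random.default_rng(seed).normal(size=512)

faces = [f"face_{i}" for i in range(20)]
crit_dists    = [cosine_distance(embed(f), embed(f + "_critical_changed"))    for f in faces]
noncrit_dists = [cosine_distance(embed(f), embed(f + "_noncritical_changed")) for f in faces]

# With real embeddings, a network tuned to the human-critical features should show
# larger distances for critical-feature changes than for non-critical ones.
print("mean distance, critical changes:    ", round(float(np.mean(crit_dists)), 3))
print("mean distance, non-critical changes:", round(float(np.mean(noncrit_dists)), 3))
```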


Subject(s)
Facial Recognition , Visual Cortex , Algorithms , Humans , Neural Networks, Computer , Visual Perception
8.
Article in English | MEDLINE | ID: mdl-34402904

ABSTRACT

Face recognition benefits from associating social information with faces during learning. This has been demonstrated by better recognition for faces that underwent social than perceptual evaluations. Two hypotheses were proposed to account for this effect. According to the feature-elaboration hypothesis, social evaluations encourage elaborated processing of perceptual information from faces (Winograd, 1981). According to a social-representation hypothesis, social evaluations convert faces from a perceptual representation to a socially meaningful representation of a person. To decide between these two hypotheses, we ran a functional MRI study in which we functionally localized the posterior face-selective brain areas and social processing brain areas. Participants watched video clips of young adults and were asked to study them for a recognition test, while making either perceptual or social evaluations about them. During the fMRI scan, participants performed an old/new recognition test. Behavioural findings replicated better recognition for faces that underwent social than perceptual evaluations. fMRI results showed a higher response during the recognition phase for faces that were learned socially than perceptually in the social-brain network but not in the posterior face-selective network. These results support the social-representation hypothesis and highlight the important role that social processing mechanisms, rather than purely perceptual processes, play in face recognition.

9.
Neuropsychologia ; 160: 107963, 2021 09 17.
Article in English | MEDLINE | ID: mdl-34284039

ABSTRACT

Face recognition depends on the ability of the face processing system to extract facial features that define the identity of a face. In a recent study we discovered that altering a subset of facial features changed the identity of the face, indicating that they are critical for face identification. Changing another set of features did not change the identity of a face, indicating that they are not critical for face identification. In the current study, we assessed whether developmental prosopagnosics (DPs) and super recognizers (SRs) also rely more heavily on these critical features than on non-critical features for face identification. To that end, we presented DPs and SRs with faces in which either the critical or the non-critical features were manipulated. In Study 1, we presented SRs with a famous face recognition task. We found that overall SRs recognized famous faces that differ in either critical or non-critical features better than controls. Similar to controls, changes in critical features had a larger effect on SRs' face recognition than changes in non-critical features. In Study 2, we presented an identity matching task to DPs and SRs. Similar to controls, DPs and SRs perceived faces that differed in critical features as more different than faces that differed in non-critical features. Taken together, our results indicate that SRs and DPs use the same critical features for face identification as normal individuals. These findings emphasize the fundamental role of this subset of features for face identification.


Subject(s)
Facial Recognition , Prosopagnosia , Humans , Pattern Recognition, Visual , Recognition, Psychology
10.
Atten Percept Psychophys ; 83(1): 199-214, 2021 Jan.
Article in English | MEDLINE | ID: mdl-33083987

ABSTRACT

Most studies on person perception have primarily investigated static images of faces. However, real-life person perception also involves the body and often the gait of the whole person. Whereas some studies indicated that the face dominates the representation of the whole person, others have emphasized the additional contribution of the body and gait. Here, we compared models of whole-person perception by asking whether a model that includes the body for static whole-person stimuli and also the gait for dynamic whole-person stimuli accounts better for the representation of the whole person than a model that takes into account the face alone. Participants rated the distinctiveness of static or dynamic displays of different people based on either the whole person, face, body, or gait. By fitting a linear regression model to the representation of the whole person based on the face, body, and gait, we revealed that the face and body contribute uniquely and independently to the representation of the static whole person, and that gait further contributes to the representation of the dynamic person. A complementary analysis examined whether these components are also valid dimensions of a whole-person representational space. This analysis further confirmed that the body in addition to the face as well as the gait are valid dimensions of the static and dynamic whole-person representations, respectively. These data clearly show that whole-person perception goes beyond the face and is significantly influenced by the body and gait.


Subject(s)
Motion Perception , Gait , Humans , Recognition, Psychology
11.
Cognition ; 208: 104424, 2021 03.
Article in English | MEDLINE | ID: mdl-32819709

ABSTRACT

Intact recognition of familiar faces is critical for appropriate social interactions. Thus, the human face processing system should be optimized for familiar face recognition. Blauch et al. (2020) used face recognition deep convolutional neural networks (DCNNs), which are trained to maximize recognition of the trained (familiar) identities, to model human unfamiliar and familiar face recognition. In line with this model, we discuss behavioral, neuroimaging and computational findings that indicate that human face recognition develops from the generation of identity-specific concepts of familiar faces that are learned in a supervised manner, to the generation of view-invariant identity-general perceptual representations. Face-trained DCNNs seem to share some fundamental similarities with this framework.


Subject(s)
Facial Recognition , Humans , Learning , Neural Networks, Computer , Recognition, Psychology
12.
Cognition ; 205: 104445, 2020 12.
Article in English | MEDLINE | ID: mdl-32920344

ABSTRACT

Studies on person recognition have primarily examined recognition of static faces, presented on a computer screen at a close distance. Nevertheless, in naturalistic situations we typically see the whole dynamic person, often approaching from a distance. In such cases, facial information may be less clear, and the motion pattern of an individual, their dynamic identity signature (DIS), may be used for person recognition. Studies that examined the role of motion in person recognition presented videos of people in motion. However, such stimuli do not allow for the dissociation of gait from face and body form, as different identities differ both in their gait and in their static appearance. To examine the contribution of gait to person recognition, independently of static appearance, we used a virtual environment and presented, across participants, the same face and body form with different gaits. The virtual environment also enabled us to assess the distance at which a person is recognized as a continuous variable. Using this setting, we assessed the accuracy and the distance at which identities are recognized based on their gait, as a function of gait distinctiveness. We find that the accuracy and distance at which people were recognized increased with gait distinctiveness. Importantly, these effects were found when recognizing identities in motion but not from static displays, indicating that the DIS, rather than attention, enabled more accurate person recognition. Overall, these findings highlight that gait contributes to person recognition beyond the face and body and stress an important role for gait in real-life person recognition.


Subject(s)
Motion Perception , Virtual Reality , Gait , Humans , Pattern Recognition, Visual , Recognition, Psychology , Videotape Recording
13.
J Neurosci ; 40(39): 7545-7558, 2020 09 23.
Article in English | MEDLINE | ID: mdl-32859715

ABSTRACT

A hallmark of high-level visual cortex is its functional organization of neighboring areas that are selective for single categories, such as faces, bodies, and objects. However, visual scenes are typically composed of multiple categories. How does a category-selective cortex represent such complex stimuli? Previous studies have shown that the representation of multiple stimuli can be explained by a normalization mechanism. Here we propose that a normalization mechanism that operates in a cortical region composed of neighboring category-selective areas would generate a representation of multi-category stimuli that varies continuously across a category-selective cortex as a function of the magnitude of category selectivity for its components. By using fMRI, we can examine this correspondence between category selectivity and the representation of multi-category stimuli along a large, continuous region of cortex. To test these predictions, we used a linear model to fit the fMRI response of human participants (both sexes) to a multi-category stimulus (e.g., a whole person) based on the response to its component stimuli presented in isolation (e.g., a face or a body). Consistent with our predictions, the response of cortical areas in high-level visual cortex to multi-category stimuli varies in a continuous manner along a weighted mean line, as a function of the magnitude of its category selectivity. This was the case for both related (face + body) and unrelated (face + wardrobe) multi-category pairs. We conclude that the functional organization of neighboring category-selective areas may enable a dynamic and flexible representation of complex visual scenes that can be modulated by higher-level cognitive systems according to task demands.

SIGNIFICANCE STATEMENT It is well established that the high-level visual cortex is composed of category-selective areas that reside in nearby locations. Here we predicted that this functional organization together with a normalization mechanism would generate a representation for multi-category stimuli that varies as a function of the category selectivity for its components. Consistent with this prediction, in an fMRI study we found that the representation of multi-category stimuli varies along high-level visual cortex, in a continuous manner, along a weighted mean line, in accordance with the category selectivity for a given area. These findings suggest that the functional organization of high-level visual cortex enables a flexible representation of complex scenes that can be modulated by high-level cognitive systems according to task demands.
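A toy sketch of the weighted-mean idea, assuming voxelwise responses to a face alone and a body alone are available: the whole-person response is modeled as a linear combination of the two, and the fitted weights shift with each voxel subset's category selectivity. All responses below are simulated, not the study's data, and the analysis is a simplification of the paper's linear model.

```python
import numpy as np
from numpy.linalg import lstsq

rng = np.random.default_rng(4)
n_vox = 300

# Synthetic voxelwise responses to a face alone and a body alone.
r_face = rng.uniform(0.2, 2.0, n_vox)
r_body = rng.uniform(0.2, 2.0, n_vox)

# Simulated response to the whole person: a weighted mean whose face weight
# increases with the voxel's face selectivity (the predicted pattern).
selectivity = (r_face - r_body) / (r_face + r_body)
w_face = np.clip(0.5 + 0.5 * selectivity, 0, 1)
r_person = w_face * r_face + (1 - w_face) * r_body + rng.normal(scale=0.05, size=n_vox)

def fit_weights(mask):
    """Fit r_person ~ w1*r_face + w2*r_body over a subset of voxels."""
    X = np.column_stack([r_face[mask], r_body[mask]])
    w, *_ = lstsq(X, r_person[mask], rcond=None)
    return w.round(2)

face_sel = selectivity > 0.2     # "face-selective" voxels
body_sel = selectivity < -0.2    # "body-selective" voxels
print("face-selective voxels (w_face, w_body):", fit_weights(face_sel))
print("body-selective voxels (w_face, w_body):", fit_weights(body_sel))
```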


Subject(s)
Cognition , Pattern Recognition, Visual , Visual Cortex/physiology , Adult , Female , Humans , Magnetic Resonance Imaging , Male
14.
Vision Res ; 176: 91-99, 2020 11.
Article in English | MEDLINE | ID: mdl-32827880

ABSTRACT

While most studies on person recognition examine the face alone, recent studies have shown evidence for the contribution of the body and gait to person recognition beyond the face. Nevertheless, little is known about whether person recognition can be performed based on the body alone. In this study, we examined two sources of information that may enhance body-based person recognition: body motion and whole-person context. Body motion has been shown to contribute to person recognition, especially when facial information is unclear. Additionally, generating whole-person context, by attaching faceless heads to bodies, has been shown to activate face processing mechanisms and may therefore enhance body-based person recognition. To assess body-based person recognition, participants performed a sequential matching task in which they studied a video of a person walking followed by a headless image of the same or a different identity. The role of body motion was examined by comparing recognition from dynamic vs. static headless bodies. The role of whole-person context was examined by comparing bodies with and without faceless heads. Our findings show that person recognition from the body alone was better in dynamic vs. static displays, indicating that body motion contributed to body-based person recognition. In addition, whole-person context contributed to body-based person recognition when recognition was performed in static displays. Overall, these findings show that recognizing people based on their body alone is challenging but can be performed under certain circumstances that enhance the processing of the body when seeing the whole dynamic person.


Subject(s)
Facial Recognition , Motion Perception , Humans , Pattern Recognition, Visual , Recognition, Psychology , Walking
15.
Perception ; 48(5): 437-446, 2019 May.
Article in English | MEDLINE | ID: mdl-30939991

ABSTRACT

Faces convey very rich information that is critical for intact social interaction. To extract this information efficiently, faces should be easily detected from a complex visual scene. Here, we asked which features are critical for face detection. To answer this question, we presented non-face objects that generate a strong percept of a face (i.e., pareidolia). One group of participants rated the faceness of this set of inanimate images. A second group rated the presence of a set of 12 local and global facial features. Regression analysis revealed that only the eyes or mouth significantly contributed to faceness scores. We further showed that removing the eyes or mouth, but not the teeth or ears, significantly reduced faceness scores. These findings show that face detection depends on specific facial features, the eyes and the mouth. This minimal information leads to over-generalization that generates false face percepts but ensures that real faces are not missed.
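A simplified sketch of the regression described above, assuming per-image faceness scores and feature-presence ratings are available; the data are simulated so that only the eyes and mouth carry weight, mirroring the reported result rather than reproducing it.

```python
import numpy as np
from numpy.linalg import lstsq

rng = np.random.default_rng(5)
n_images, n_features = 80, 12
feature_names = ["eyes", "mouth", "nose", "ears", "teeth", "hair",
                 "eyebrows", "chin", "cheeks", "forehead", "face_outline", "symmetry"]

# Synthetic presence ratings (0-7) for each facial feature in each pareidolia image.
features = rng.uniform(0, 7, size=(n_images, n_features))
# Simulated faceness scores driven mainly by eyes and mouth.
faceness = 0.6 * features[:, 0] + 0.5 * features[:, 1] + rng.normal(scale=1.0, size=n_images)

# Regress faceness on all feature ratings; the betas index each feature's contribution.
X = np.column_stack([features, np.ones(n_images)])
betas, *_ = lstsq(X, faceness, rcond=None)
for name, b in zip(feature_names, betas[:-1]):
    print(f"{name:13s} beta = {b:+.2f}")
```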


Subject(s)
Face , Pattern Recognition, Visual/physiology , Social Perception , Adult , Eye , Facial Recognition/physiology , Female , Humans , Male , Mouth , Young Adult
16.
Vision Res ; 157: 105-111, 2019 04.
Article in English | MEDLINE | ID: mdl-29360472

ABSTRACT

Many studies have shown better recognition for faces we have greater experience with, relative to unfamiliar faces. However, it is still not clear if and how the representation of faces changes during the process of familiarization. In a previous study, we discovered a subset of facial features, for which we have high perceptual sensitivity (PS), that were critical for determining the identity of unfamiliar faces. This was done by assigning values to 20 different facial features based on perceptual ratings, converting faces into feature vectors, and measuring the correlations between face similarity ratings and distances between feature vectors. In the current study, we examined the contribution of high- and low-PS features to face identity after familiarization. To familiarize participants with unfamiliar faces, we used an individuation training protocol that was found to be effective in previous studies, in which different names are assigned to different faces and participants are asked to learn the face-name association. Our findings show that even after repeated exposure to the same image of each identity, which allows close examination of all facial features, only high-PS features contributed to face identity, while low-PS features did not. This subset of high-PS features includes both internal and external features, as well as part and configuration features. We therefore conclude that identification of familiarized and unfamiliar faces may rely on the same subset of critical features. These findings further support a new categorization of facial features according to their perceptual sensitivity.
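A schematic version of the feature-vector analysis, assuming each face is described by ratings on high- and low-PS features and that pairwise identity-similarity ratings are available; the data are simulated so that similarity tracks high-PS distances only, as an illustration of the analysis logic rather than of the study's result.

```python
import numpy as np
from scipy.stats import pearsonr
from scipy.spatial.distance import pdist

rng = np.random.default_rng(6)
n_faces, n_high, n_low = 30, 10, 10   # 20 rated features split into high- and low-PS sets

# Synthetic perceptual ratings of each face on each feature.
high_ps = rng.normal(size=(n_faces, n_high))
low_ps  = rng.normal(size=(n_faces, n_low))

# Pairwise distances between feature vectors (one value per face pair).
high_dist = pdist(high_ps)
low_dist  = pdist(low_ps)

# Simulated pairwise similarity ratings that track high-PS differences only.
similarity = -high_dist + rng.normal(scale=0.5, size=high_dist.size)

print("similarity vs high-PS distance:", round(pearsonr(similarity, high_dist)[0], 2))
print("similarity vs low-PS distance: ", round(pearsonr(similarity, low_dist)[0], 2))
```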


Subject(s)
Facial Recognition/physiology , Recognition, Psychology/physiology , Adult , Female , Humans , Male , Photic Stimulation/methods
17.
Cognition ; 183: 131-138, 2019 02.
Article in English | MEDLINE | ID: mdl-30448534

ABSTRACT

Faces convey rich perceptual and social information. The contributions of perceptual and social information to face recognition have typically been examined in separate experiments. Here, we take a comprehensive approach by studying the contributions of both perceptual experience and social-conceptual information to face learning within the same experimental design. The effect of perceptual experience was examined by systematically varying the similarity between the learned and test face views. Social information was manipulated by asking participants to make social, perceptual, or no evaluations of faces during learning. Results show better recognition for the learned views, which declines as a function of the dissimilarity between the learned and unlearned views. Additionally, processing faces as social concepts produced a general gain in performance of a similar magnitude for both the learned and unlearned views. We conclude that both social-conceptual and perceptual information contribute to face recognition, but through complementary, independent mechanisms. These findings highlight the importance of considering both cognition and perception to obtain a comprehensive understanding of face recognition.


Subject(s)
Concept Formation/physiology , Facial Recognition/physiology , Recognition, Psychology/physiology , Social Perception , Adult , Female , Humans , Male , Middle Aged , Young Adult
18.
Cognition ; 182: 73-83, 2019 01.
Article in English | MEDLINE | ID: mdl-30218914

ABSTRACT

Face recognition is a computationally challenging task that humans perform effortlessly. Nonetheless, this remarkable ability is better for familiar faces than for unfamiliar faces. To account for humans' superior ability to recognize familiar faces, current theories suggest that different features are used for the representation of familiar and unfamiliar faces. In the current study, we applied a reverse engineering approach to reveal which facial features are critical for familiar face recognition. In contrast to current views, we discovered that the same subset of features used for matching unfamiliar faces is also used for both matching and recognition of familiar faces. We further show that these features are also used by a deep neural network face recognition algorithm. We therefore propose a new framework that assumes a similar perceptual representation for all faces and integrates cognition and perception to account for humans' superior recognition of familiar faces.


Subject(s)
Deep Learning , Facial Recognition/physiology , Pattern Recognition, Automated , Recognition, Psychology/physiology , Adult , Aged , Female , Humans , Male , Middle Aged , Young Adult
19.
J Exp Psychol Learn Mem Cogn ; 45(10): 1733-1747, 2019 Oct.
Article in English | MEDLINE | ID: mdl-30570324

ABSTRACT

Our ability to recognize familiar faces is remarkable. During the process of becoming familiar with new people we acquire both perceptual and conceptual information about them. Which of these two types of information contributes to our ability to recognize a person in future encounters? Previously, we showed that associating faces with person-related conceptual information (e.g., name, occupation) during learning improves face recognition. Here, we provide further evidence and assess several possible accounts of the conceptual encoding benefit in face recognition. In a series of experiments, participants were asked to make perceptual (e.g., how round/symmetric is the face?) or conceptual (e.g., how trustworthy/intelligent does the face look?) evaluations about faces. We found better face recognition following conceptual than perceptual encoding. We further showed that this effect cannot be attributed to more global than part-based feature processing, more variable ratings, or more elaborative encoding during conceptual than perceptual evaluations. Finally, we showed that the conceptual over perceptual encoding advantage reflects a conceptual encoding benefit rather than a perceptual encoding cost. Overall, these findings show that conceptual evaluations do not improve recognition by modifying the perceptual representation of a face (e.g., elaboration, global processing). Instead, we propose that face recognition benefits from representing faces as socially meaningful concepts rather than as percepts during learning. These results highlight the importance of linking cognition and perception to understand recognition.


Subject(s)
Concept Formation/physiology , Facial Recognition/physiology , Learning/physiology , Recognition, Psychology/physiology , Social Perception , Adult , Female , Humans , Male , Young Adult
20.
Sci Rep ; 8(1): 7036, 2018 05 04.
Article in English | MEDLINE | ID: mdl-29728577

ABSTRACT

Faces convey rich information including identity, gender and expression. Current neural models of face processing suggest a dissociation between the processing of invariant facial aspects, such as identity and gender, which engage the fusiform face area (FFA), and the processing of changeable aspects, such as expression and eye gaze, which engage the posterior superior temporal sulcus face area (pSTS-FA). Recent studies report a second dissociation within this network, such that the pSTS-FA, but not the FFA, shows a much stronger response to dynamic than to static faces. The aim of the current study was to test a unified model that accounts for these two functional characteristics of the neural face network. In an fMRI experiment, we presented static and dynamic faces while subjects judged an invariant (gender) or a changeable facial aspect (expression). We found that the pSTS-FA was more engaged in processing dynamic than static faces and changeable than invariant aspects, whereas the occipital face area (OFA) and the FFA showed similar responses across all four conditions. These findings support an integrated neural model of face processing in which the ventral areas extract form information from both invariant and changeable facial aspects, whereas the dorsal face areas are sensitive to dynamic and changeable facial aspects.


Subject(s)
Face/anatomy & histology , Facial Expression , Facial Recognition , Neural Networks, Computer , Adolescent , Adult , Data Interpretation, Statistical , Female , Humans , Magnetic Resonance Imaging/methods , Male , Young Adult