ABSTRACT
The human medial temporal lobe (MTL) plays a crucial role in recognizing visual objects, a key cognitive function that relies on the formation of semantic representations. Nonetheless, it remains unknown how visual information of general objects is translated into semantic representations in the MTL. Furthermore, the debate about whether the human MTL is involved in perception has endured for a long time. To address these questions, we investigated three distinct models of neural object coding (semantic coding, axis-based feature coding, and region-based feature coding) in each subregion of the human MTL, using high-resolution fMRI in two male and six female participants. Our findings revealed the presence of semantic coding throughout the MTL, with a higher prevalence observed in the parahippocampal cortex (PHC) and perirhinal cortex (PRC), while axis coding and region coding were primarily observed in the earlier regions of the MTL. Moreover, we demonstrated that voxels exhibiting axis coding supported the transition to region coding and contained information relevant to semantic coding. Together, by providing a detailed characterization of neural object coding schemes and offering a comprehensive summary of visual coding information for each MTL subregion, our results not only emphasize a clear role of the MTL in perceptual processing but also shed light on the translation of perception-driven representations of visual features into memory-driven representations of semantics along the MTL processing pathway.
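To make the distinction between the coding schemes concrete, the sketch below contrasts axis-based feature coding (a voxel's response varies linearly along feature dimensions) with region-based feature coding (a voxel responds preferentially to stimuli falling within a region of feature space) on simulated data. The feature embedding, region definition, and fitting choices are illustrative assumptions, not the study's actual analysis pipeline.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Hypothetical data: 200 object stimuli embedded in a 10-D feature space
# (e.g., features from a DNN layer), and one voxel's response per stimulus.
features = rng.standard_normal((200, 10))
voxel_response = features @ rng.standard_normal(10) + 0.5 * rng.standard_normal(200)

# Axis coding: the voxel's response varies linearly along the feature axes.
axis_r2 = LinearRegression().fit(features, voxel_response).score(features, voxel_response)

# Region coding: the voxel responds more to stimuli within a region of feature
# space, here defined as proximity to an (illustrative) preferred stimulus.
dist = np.linalg.norm(features - features[voxel_response.argmax()], axis=1)
inside = dist < np.median(dist)
region_effect = voxel_response[inside].mean() - voxel_response[~inside].mean()

print(f"axis-coding R^2 = {axis_r2:.2f}, region-coding effect = {region_effect:.2f}")
```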
Subject(s)
Perirhinal Cortex, Temporal Lobe, Humans, Male, Female, Cognition, Magnetic Resonance Imaging/methods, Hippocampus, Brain Mapping/methods
ABSTRACT
Individuals with autism spectrum disorder (ASD) experience pervasive difficulties in processing social information from faces. However, the behavioral and neural mechanisms underlying social trait judgments of faces in ASD remain largely unclear. Here, we comprehensively addressed this question by employing functional neuroimaging and parametrically generated faces that vary in facial trustworthiness and dominance. Behaviorally, participants with ASD exhibited reduced specificity but increased inter-rater variability in social trait judgments. Neurally, participants with ASD showed hypo-activation across broad face-processing areas. Multivariate analysis based on trial-by-trial face responses could discriminate participant groups in the majority of the face-processing areas. Encoding social traits in ASD engaged vastly different face-processing areas compared to controls, and encoding different social traits engaged different brain areas. Interestingly, the idiosyncratic brain areas encoding social traits in ASD were still flexible and context-dependent, similar to neurotypicals. Additionally, participants with ASD also showed an altered encoding of facial saliency features in the eyes and mouth. Together, our results provide a comprehensive understanding of the neural mechanisms underlying social trait judgments in ASD.
Subject(s)
Autism Spectrum Disorder, Brain, Facial Recognition, Magnetic Resonance Imaging, Social Perception, Humans, Autism Spectrum Disorder/physiopathology, Autism Spectrum Disorder/diagnostic imaging, Autism Spectrum Disorder/psychology, Male, Female, Adult, Young Adult, Facial Recognition/physiology, Brain/physiopathology, Brain/diagnostic imaging, Judgment/physiology, Brain Mapping, Adolescent
ABSTRACT
OBJECTIVES: DEPDC5 is a common causative gene in familial focal epilepsy with or without malformations of cortical development. Its pathogenic variants also confer a significantly higher risk for sudden unexpected death in epilepsy (SUDEP), providing opportunities to investigate the pathophysiology intersecting neurodevelopment, epilepsy, and cardiorespiratory function. There is an urgent need to gain a mechanistic understanding of DEPDC5-related epilepsy and SUDEP, identify biomarkers for patients at high risk, and develop preventive interventions. METHODS: Depdc5 was specifically deleted in excitatory or inhibitory neurons in the mouse brain to determine neuronal subtypes that drive epileptogenesis and SUDEP. Electroencephalogram (EEG), cardiac, and respiratory recordings were performed to determine cardiorespiratory phenotypes associated with SUDEP. Baseline respiratory function and the response to hypoxia challenge were also studied in these mice. RESULTS: Depdc5 deletion in excitatory neurons in cortical layer 5 and dentate gyrus caused frequent generalized tonic-clonic seizures and SUDEP in young adult mice, but Depdc5 deletion in cortical interneurons did not. EEG suppression immediately following ictal offset was observed in fatal and non-fatal seizures, but low amplitude rhythmic theta frequency activity was lost only in fatal seizures. In addition, these mice developed baseline respiratory dysfunction prior to SUDEP, during which ictal apnea occurred long before terminal cardiac asystole. INTERPRETATION: Depdc5 deletion in excitatory neurons is sufficient to cause DEPDC5-related epilepsy and SUDEP. Ictal apnea and respiratory dysregulation play critical roles in SUDEP. Our study also provides a novel mouse model to investigate the underlying mechanisms of DEPDC5-related epilepsy and SUDEP. ANN NEUROL 2023;94:812-824.
Subject(s)
Partial Epilepsies, Epilepsy, Sudden Unexpected Death in Epilepsy, Animals, Mice, Apnea/complications, Sudden Death/etiology, Sudden Death/prevention & control, Partial Epilepsies/complications, GTPase-Activating Proteins/genetics, Seizures/complications
ABSTRACT
Investigations of hippocampal functions have revealed a dizzying array of findings, from lesion-based behavioral deficits, to a diverse range of characterized neural activations, to computational models of putative functionality. Across these findings, there remains an ongoing debate about the core function of the hippocampus and the generality of its representation. Researchers have debated whether the hippocampus's primary role relates to the representation of space, the neural basis of (episodic) memory, or some more general computation that generalizes across various cognitive domains. Within these different perspectives, there is much debate about the nature of feature encodings. Here, we suggest that in order to evaluate hippocampal responses-investigating, for example, whether neuronal representations are narrowly targeted to particular tasks or if they subserve domain-general purposes-a promising research strategy may be the use of multi-task experiments, or more generally switching between multiple task contexts while recording from the same neurons in a given session. We argue that this strategy-when combined with explicitly defined theoretical motivations that guide experiment design-could be a fruitful approach to better understand how hippocampal representations support different behaviors. In doing so, we briefly review key open questions in the field, as exemplified by articles in this special issue, as well as previous work using multi-task experiments, and extrapolate to consider how this strategy could be further applied to probe fundamental questions about hippocampal function.
Subject(s)
Hippocampus, Episodic Memory, Hippocampus/physiology, Neurons/physiology, Space Perception/physiology
ABSTRACT
Investigations into how individual neurons encode behavioral variables of interest have revealed specific representations in single neurons, such as place and object cells, as well as a wide range of cells with conjunctive encodings or mixed selectivity. However, as most experiments examine neural activity within individual tasks, it is currently unclear if and how neural representations change across different task contexts. Within this discussion, the medial temporal lobe is particularly salient, as it is known to be important for multiple behaviors including spatial navigation and memory; however, the relationship between these functions is currently unclear. Here, to investigate how representations in single neurons vary across different task contexts in the medial temporal lobe, we collected and analyzed single-neuron activity from human participants as they completed a paired-task session consisting of a passive-viewing visual working memory task and a spatial navigation and memory task. Five patients contributed 22 paired-task sessions, which were spike sorted together to allow the same putative single neurons to be compared between the different tasks. Within each task, we replicated concept-related activations in the working memory task, as well as target-location and serial-position responsive cells in the navigation task. When comparing neuronal activity between tasks, we first established that a significant number of neurons maintained the same kind of representation, responding to stimulus presentations across tasks. Further, we found cells that changed the nature of their representation across tasks, including a significant number of cells that were stimulus responsive in the working memory task and responded to serial position in the spatial task. Overall, our results support a flexible encoding of multiple, distinct aspects of different tasks by single neurons in the human medial temporal lobe, whereby some individual neurons change the nature of their feature coding between task contexts.
Subject(s)
Spatial Navigation, Temporal Lobe, Humans, Temporal Lobe/physiology, Short-Term Memory, Neurons/physiology, Spatial Navigation/physiology
ABSTRACT
Processing social information from faces is difficult for individuals with autism spectrum disorder (ASD). However, it remains unclear whether individuals with ASD make high-level social trait judgments from faces in the same way as neurotypical individuals. Here, we comprehensively addressed this question using naturalistic face images and representatively sampled traits. Despite similar underlying dimensional structures across traits, online adult participants with self-reported ASD showed different judgments and reduced specificity within each trait compared with neurotypical individuals. Deep neural networks revealed that these group differences were driven by specific types of faces and differential utilization of features within a face. Our results were replicated in well-characterized in-lab participants and partially generalized to more controlled face images (a preregistered study). By investigating social trait judgments in a broader population, including individuals with neurodevelopmental variations, we found important theoretical implications for the fundamental dimensions, variations, and potential behavioral consequences of social cognition.
Subject(s)
Autism Spectrum Disorder, Facial Recognition, Adult, Humans, Judgment, Sociological Factors
ABSTRACT
People instantaneously evaluate faces with significant agreement on evaluations of social traits. However, the neural basis for such rapid spontaneous face evaluation remains largely unknown. Here, we recorded from 490 neurons in the human amygdala and hippocampus and found that the neuronal activity was associated with the geometry of a social trait space. We further investigated the temporal evolution and modulation of the social trait representation, and we employed encoding and decoding models to reveal the critical social traits for the trait space. We also recorded from another 938 neurons and replicated our findings using different social traits. Together, our results suggest that there exists a neuronal population code for a comprehensive social trait space in the human amygdala and hippocampus that underlies spontaneous first impressions. Changes in such neuronal social trait space may have implications for the abnormal processing of social information observed in some neurological and psychiatric disorders.
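A minimal sketch of the encoding/decoding analyses described above, under assumed data: each simulated neuron's firing rate is regressed onto a set of social trait dimensions (encoding), and a trait dimension is predicted back from the population response (decoding). The trait dimensions, firing rates, and regression choices are placeholders rather than the recorded data or the exact models used.

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_faces, n_traits, n_neurons = 500, 8, 50      # placeholder sizes

traits = rng.standard_normal((n_faces, n_traits))   # e.g., trustworthiness, dominance, ...
weights = rng.standard_normal((n_traits, n_neurons))
rates = traits @ weights + rng.standard_normal((n_faces, n_neurons))  # simulated firing rates

# Encoding: predict each neuron's firing rate from the social trait coordinates.
encoding_r2 = [cross_val_score(RidgeCV(), traits, rates[:, i], cv=5).mean()
               for i in range(n_neurons)]

# Decoding: predict one trait dimension back from the neuronal population response.
decoding_r2 = cross_val_score(RidgeCV(), rates, traits[:, 0], cv=5).mean()

print(f"mean encoding R^2 = {np.mean(encoding_r2):.2f}, decoding R^2 = {decoding_r2:.2f}")
```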
Subject(s)
Amygdala, Hippocampus, Humans, Amygdala/physiology, Hippocampus/physiology, Neurons/physiology, Sociological Factors
ABSTRACT
Objective: To explore semi-supervised learning (SSL) algorithms for long-tail endoscopic image classification with limited annotations. Method: We explored semi-supervised long-tail endoscopic image classification in HyperKvasir, the largest public gastrointestinal dataset, with 23 diverse classes. The semi-supervised learning algorithm FixMatch, based on consistency regularization and pseudo-labeling, was applied. After splitting the training dataset and the test dataset at a ratio of 4:1, we sampled 20%, 50%, and 100% of the training data as labeled data to test classification with limited annotations. Results: Classification performance was evaluated by micro-average and macro-average metrics, with the Matthews correlation coefficient (MCC) as the overall evaluation. The SSL algorithm improved classification performance, with MCC increasing from 0.8761 to 0.8850, from 0.8983 to 0.8994, and from 0.9075 to 0.9095 with 20%, 50%, and 100% of the training data labeled, respectively. With 20% of the training data labeled, SSL improved both the micro-average and macro-average classification performance, while for the 50% and 100% ratios, SSL improved the micro-average performance but hurt the macro-average performance. By analyzing the confusion matrix and labeling bias in each class, we found that the pseudo-labeling-based SSL algorithm exacerbated the classifier's preference for the head classes, resulting in improved performance on the head classes and degraded performance on the tail classes. Conclusion: SSL can improve classification performance for long-tail endoscopic image classification, especially when labeled data are extremely limited, which may benefit the building of assisted diagnosis systems for low-volume hospitals. However, the pseudo-labeling strategy may amplify the effect of class imbalance, which hurts classification performance for the tail classes.
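The FixMatch objective referred to above combines a supervised loss on labeled images with a pseudo-labeling consistency loss on unlabeled images: confident predictions on a weakly augmented view become pseudo-labels for the strongly augmented view. The sketch below illustrates that loss with toy tensors; the backbone, augmentations, confidence threshold, and loss weight are assumptions rather than the exact configuration used in the study.

```python
import torch
import torch.nn.functional as F

def fixmatch_loss(model, x_lab, y_lab, x_weak, x_strong, threshold=0.95, lambda_u=1.0):
    """FixMatch-style loss: supervised CE plus thresholded pseudo-label consistency."""
    # Supervised term on labeled images.
    sup_loss = F.cross_entropy(model(x_lab), y_lab)

    # Pseudo-labels from the weakly augmented unlabeled images (no gradient).
    with torch.no_grad():
        probs = torch.softmax(model(x_weak), dim=1)
        conf, pseudo = probs.max(dim=1)
        mask = (conf >= threshold).float()      # keep only confident pseudo-labels

    # Consistency term: strongly augmented views must match the pseudo-labels.
    unsup_loss = (F.cross_entropy(model(x_strong), pseudo, reduction="none") * mask).mean()
    return sup_loss + lambda_u * unsup_loss

# Toy usage with a linear "model" and random tensors standing in for images.
model = torch.nn.Linear(32, 23)                 # 23 classes, as in HyperKvasir
x_lab, y_lab = torch.randn(8, 32), torch.randint(0, 23, (8,))
x_weak, x_strong = torch.randn(16, 32), torch.randn(16, 32)
fixmatch_loss(model, x_lab, y_lab, x_weak, x_strong).backward()
```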
Subject(s)
Algorithms, Supervised Machine Learning
ABSTRACT
Face identity is represented at a high level of the visual hierarchy. Whether the human brain can process facial identity information in the absence of visual awareness remains unclear. In this study, we investigated potential face identity representation through face-identity adaptation with the adapting faces interocularly suppressed by Continuous Flash Suppression (CFS) noise, a modified binocular rivalry paradigm. The strength of interocular suppression was manipulated by varying the contrast of the CFS noise. While observers reported that the face images were subjectively unperceived and the face identity was objectively unrecognizable, a significant face identity aftereffect was observed under low- but not high-contrast CFS noise. In addition, the identity of face images under shallow interocular suppression could be decoded from multi-voxel patterns in the right fusiform face area (FFA) obtained with high-resolution 7T fMRI. Thus, the combined evidence from visual adaptation and 7T fMRI suggests that face identity can be represented in the human brain without explicit perceptual recognition. The processing of interocularly suppressed faces could occur at different levels depending on how "deeply" the information is suppressed.
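The multi-voxel pattern decoding mentioned above can be sketched as a cross-validated linear classifier applied to trial-wise voxel patterns from an FFA region of interest. The voxel counts, trial structure, and classifier choice below are illustrative assumptions on simulated data, not the study's analysis.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(2)

n_trials, n_voxels = 120, 300                  # hypothetical FFA ROI
identity = rng.integers(0, 2, n_trials)        # two face identities
patterns = rng.standard_normal((n_trials, n_voxels)) + identity[:, None] * 0.3

# Cross-validated linear decoding of face identity from voxel patterns.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
accuracy = cross_val_score(clf, patterns, identity, cv=5).mean()
print(f"decoding accuracy: {accuracy:.2f} (chance = 0.50)")
```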
Subject(s)
Brain/diagnostic imaging, Facial Recognition/physiology, Magnetic Resonance Imaging/methods, Physiological Adaptation, Adolescent, Adult, Cognition, Consciousness, Female, Humans, Male, Recognition (Psychology), Young Adult
ABSTRACT
Faces are among the most important visual stimuli that humans perceive in everyday life. While extensive literature has examined emotional processing and social evaluations of faces, most studies have examined either topic using unimodal approaches. In this review, we promote the use of multimodal cognitive neuroscience approaches to study these processes, using two lines of research as examples: ambiguity in facial expressions of emotion and social trait judgment of faces. In the first set of studies, we identified an event-related potential that signals emotion ambiguity using electroencephalography and we found convergent neural responses to emotion ambiguity using functional neuroimaging and single-neuron recordings. In the second set of studies, we discuss how different neuroimaging and personality-dimensional approaches together provide new insights into social trait judgments of faces. In both sets of studies, we provide an in-depth comparison between neurotypicals and people with autism spectrum disorder. We offer a computational account for the behavioral and neural markers of the different facial processing between the two groups. Finally, we suggest new practices for studying the emotional processing and social evaluations of faces. All data discussed in the case studies of this review are publicly available.
Subject(s)
Autism Spectrum Disorder, Facial Recognition, Humans, Judgment, Emotions/physiology, Electroencephalography, Facial Expression
ABSTRACT
Visual attention and object recognition are two critical cognitive functions that significantly influence our perception of the world. While these neural processes converge on the temporal cortex, the exact nature of their interactions remains largely unclear. Here, we systematically investigated the interplay between visual attention and object feature coding by training macaques to perform a free-gaze visual search task using natural face and object stimuli. With a large number of units recorded from multiple brain areas, we discovered that units exhibiting visual feature coding displayed a distinct attentional response profile and functional connectivity compared to units not exhibiting feature coding. Attention directed towards search targets enhanced the pattern separation of stimuli across brain areas, and this enhancement was more pronounced for units encoding visual features. Our findings suggest two stages of neural processing, with the early stage primarily focused on processing visual features and the late stage dedicated to processing attention. Importantly, feature coding in the early stage could predict the attentional effect in the late stage. Together, our results suggest an intricate interplay between visual feature and attention coding in the primate brain, which can be attributed to the differential functional connectivity and neural networks engaged in these processes.
ABSTRACT
Neurotypical (NT) individuals and individuals with autism spectrum disorder (ASD) make different judgments of social traits from others' faces; they also exhibit different social emotional responses in social interactions. A common hypothesis is that the differences in face perception in ASD compared with NT are related to distinct social behaviors. To test this hypothesis, we combined a face trait judgment task with a novel interpersonal transgression task that induces and measures social emotions and behaviors. ASD and neurotypical participants viewed a large set of naturalistic facial stimuli while judging them on a comprehensive set of social traits (e.g., warm, charismatic, critical). They also completed an interpersonal transgression task in which their responsibility for causing an unpleasant outcome to a social partner was manipulated. The purpose of the latter task was to measure participants' emotional (e.g., guilt) and behavioral (e.g., compensation) responses to interpersonal transgression. We found that, compared with neurotypical participants, ASD participants' self-reported guilt and compensation tendency were less sensitive to our responsibility manipulation. Importantly, ASD participants and neurotypical participants showed distinct associations between self-reported guilt and judgments of criticalness from others' faces. These findings reveal a novel link between the perception of social traits and social emotional responses in ASD.
Subject(s)
Autism Spectrum Disorder, Autistic Disorder, Humans, Judgment, Emotions, Guilt
ABSTRACT
Face learning has important critical periods during development. However, the computational mechanisms of critical periods remain unknown. Here, we conducted a series of in silico experiments and showed that, similar to humans, deep artificial neural networks exhibited critical periods during which a stimulus deficit could impair the development of face learning. Face learning could only be restored when information was provided within the critical period; outside of the critical period, the model could no longer incorporate new information. We further provided a full computational account in terms of the learning rate and demonstrated an alternative approach, based on knowledge distillation and attention transfer, to partially recover the model outside of the critical period. We finally showed that model performance and recovery were associated with identity-selective units and with the correspondence to the primate visual system. Our present study not only reveals computational mechanisms underlying face learning but also points to strategies to restore impaired face learning.
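Knowledge distillation, the recovery strategy mentioned above, trains a "student" network to match the softened output distribution of a "teacher" network in addition to the ground-truth labels. A minimal sketch of such a loss follows; the temperature, weighting, and toy tensors are assumptions rather than the study's actual models.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, targets, T=4.0, alpha=0.7):
    """Weighted sum of soft-target KL (distillation) and hard-target cross-entropy."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)                                  # standard temperature scaling
    hard = F.cross_entropy(student_logits, targets)
    return alpha * soft + (1 - alpha) * hard

# Toy example: 16 samples, 10 face identities.
student_logits = torch.randn(16, 10, requires_grad=True)
teacher_logits = torch.randn(16, 10)
targets = torch.randint(0, 10, (16,))
distillation_loss(student_logits, teacher_logits, targets).backward()
```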
ABSTRACT
Recognizing familiar faces and learning new faces play an important role in social cognition. However, the underlying neural computational mechanisms remain unclear. Here, we record from single neurons in the human amygdala and hippocampus and find a greater neuronal representational distance between pairs of familiar faces than unfamiliar faces, suggesting that neural representations for familiar faces are more distinct. Representational distance increases with exposures to the same identity, suggesting that neural face representations are sharpened with learning and familiarization. Furthermore, representational distance is positively correlated with visual dissimilarity between faces, and exposure to visually similar faces increases representational distance, thus sharpening neural representations. Finally, we construct a computational model that demonstrates an increase in the representational distance of artificial units with training. Together, our results suggest that the neuronal population geometry, quantified by the representational distance, encodes face familiarity, similarity, and learning, forming the basis of face recognition and memory.
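Representational distance, as used above, can be pictured as the distance between neuronal population response vectors for pairs of face identities. The sketch below computes mean pairwise distances for simulated "familiar" and "unfamiliar" identity sets; the firing-rate vectors are placeholders for the recorded data.

```python
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(3)
n_neurons = 100

# Simulated population vectors (mean firing rates) for 10 familiar and 10
# unfamiliar face identities; familiar responses are made more spread out.
familiar = rng.standard_normal((10, n_neurons)) * 1.5
unfamiliar = rng.standard_normal((10, n_neurons))

# Representational distance: mean pairwise Euclidean distance between identities.
print(f"familiar:   {pdist(familiar).mean():.2f}")
print(f"unfamiliar: {pdist(unfamiliar).mean():.2f}")
```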
Subject(s)
Facial Recognition, Recognition (Psychology), Humans, Recognition (Psychology)/physiology, Learning, Amygdala, Facial Recognition/physiology, Hippocampus, Visual Pattern Recognition/physiology
ABSTRACT
BACKGROUND: The potential prognostic value of extranodal soft tissue metastasis (ESTM) has been confirmed by a growing number of studies of gastric cancer (GC). However, the gold standard for ESTM is pathologic examination after surgery, and there are as yet no preoperative methods for assessing ESTM. PURPOSE: This multicenter study aimed to develop a deep learning-based radiomics model to preoperatively identify ESTM and evaluate its prognostic value. METHODS: A total of 959 GC patients were enrolled from two centers and split into a training cohort (N = 551) and a test cohort (N = 236) for ESTM evaluation. Additionally, an external survival cohort (N = 172) was included for prognostic analysis. Four models were established based on clinical characteristics and multiphase computed tomography (CT) images for preoperative identification of ESTM, including a deep learning model, a hand-crafted radiomics model, a clinical model, and a combined model. The C-index, decision curves, and calibration curves were used to assess model performance. Survival analysis was conducted to explore the ability to stratify overall survival (OS). RESULTS: The combined model showed good discrimination of ESTM [C-indices (95% confidence interval, CI): 0.770 (0.729-0.812) and 0.761 (0.718-0.805) in the training and test cohorts, respectively], outperforming the deep learning, radiomics, and clinical models. Stratified analysis showed that this model was not affected by tumor size, the presence of lymphovascular invasion, or Lauren classification (p < 0.05). Moreover, the model score showed strong consistency with OS [C-indices (95% CI): 0.723 (0.658-0.789, p < 0.0001) in the internal survival cohort and 0.715 (0.650-0.779, p < 0.0001) in the external survival cohort]. More interestingly, univariate analysis showed that the model score was significantly associated with occult distant metastasis (p < 0.05) that was missed by preoperative diagnosis. CONCLUSIONS: The model combining CT images and clinical characteristics had impressive predictive ability for both ESTM and prognosis, and has the potential to serve as an effective complement to the preoperative TNM staging system.
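The C-index used above measures, across all comparable patient pairs, how often a model's score orders survival correctly. A bare-bones version of Harrell's C-index on made-up data might look like the following, assuming higher risk scores indicate shorter expected survival.

```python
import numpy as np

def harrell_c_index(time, event, risk):
    """Fraction of comparable pairs whose risk scores order survival correctly."""
    concordant = comparable = 0.0
    n = len(time)
    for i in range(n):
        for j in range(n):
            # A pair is comparable if patient i had the event before patient j's time.
            if event[i] and time[i] < time[j]:
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1
                elif risk[i] == risk[j]:
                    concordant += 0.5
    return concordant / comparable

# Made-up survival times (months), event indicators (1 = death observed), and scores.
time = np.array([5, 12, 20, 30, 42, 60])
event = np.array([1, 1, 0, 1, 0, 1])
risk = np.array([0.9, 0.7, 0.5, 0.6, 0.2, 0.1])
print(f"C-index = {harrell_c_index(time, event, risk):.3f}")
```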
Subject(s)
Deep Learning, Stomach Neoplasms, Humans, Stomach Neoplasms/diagnostic imaging, Stomach Neoplasms/pathology, Radiomics, Neoplasm Staging, X-Ray Computed Tomography/methods, Retrospective Studies
ABSTRACT
Nasopharyngeal carcinoma is a common head and neck malignancy with distinct clinical management compared to other types of cancer. Precision risk stratification and tailored therapeutic interventions are crucial to improving the survival outcomes. Artificial intelligence, including radiomics and deep learning, has exhibited considerable efficacy in various clinical tasks for nasopharyngeal carcinoma. These techniques leverage medical images and other clinical data to optimize clinical workflow and ultimately benefit patients. In this review, we provide an overview of the technical aspects and basic workflow of radiomics and deep learning in medical image analysis. We then conduct a detailed review of their applications to seven typical tasks in the clinical diagnosis and treatment of nasopharyngeal carcinoma, covering various aspects of image synthesis, lesion segmentation, diagnosis, and prognosis. The innovation and application effects of cutting-edge research are summarized. Recognizing the heterogeneity of the research field and the existing gap between research and clinical translation, potential avenues for improvement are discussed. We propose that these issues can be gradually addressed by establishing standardized large datasets, exploring the biological characteristics of features, and technological upgrades.
Subject(s)
Deep Learning, Nasopharyngeal Neoplasms, Humans, Nasopharyngeal Carcinoma/diagnostic imaging, Nasopharyngeal Carcinoma/drug therapy, Artificial Intelligence, Radiomics, Nasopharyngeal Neoplasms/diagnostic imaging, Nasopharyngeal Neoplasms/drug therapy
ABSTRACT
Face perception is a fundamental aspect of human social interaction, yet most research on this topic has focused on single modalities and specific aspects of face perception. Here, we present a comprehensive multimodal dataset for examining facial emotion perception and judgment. This dataset includes EEG data from 97 unique neurotypical participants across 8 experiments, fMRI data from 19 neurotypical participants, single-neuron data from 16 neurosurgical patients (22 sessions), eye tracking data from 24 neurotypical participants, behavioral and eye tracking data from 18 participants with ASD and 15 matched controls, and behavioral data from 3 rare patients with focal bilateral amygdala lesions. Notably, participants from all modalities performed the same task. Overall, this multimodal dataset provides a comprehensive exploration of facial emotion perception, emphasizing the importance of integrating multiple modalities to gain a holistic understanding of this complex cognitive process. This dataset serves as a key missing link between human neuroimaging and neurophysiology literature, and facilitates the study of neuropsychiatric populations.
Subject(s)
Facial Recognition, Humans, Amygdala/diagnostic imaging, Emotions/physiology, Facial Recognition/physiology, Judgment, Magnetic Resonance Imaging
ABSTRACT
Prognostic prediction has long been a hotspot in disease analysis and management, and the development of image-based prognostic prediction models has significant clinical implications for current personalized treatment strategies. The main challenge in prognostic prediction is to model a regression problem based on censored observations, and semi-supervised learning has the potential to play an important role in improving the utilization efficiency of censored data. However, few effective semi-supervised paradigms have been applied to date. In this paper, we propose a semi-supervised co-training deep neural network incorporating a support vector regression layer for survival time estimation (Co-DeepSVS), which improves the efficiency of utilizing censored data for prognostic prediction. First, we introduce a support vector regression layer in deep neural networks to handle censored data and directly predict survival time, and, more importantly, to calculate the labeling confidence of each case. Then, we apply a semi-supervised multi-view co-training framework to achieve accurate prognostic prediction, where labeling confidence estimation with prior knowledge of pseudo time is conducted for each view. Experimental results demonstrate that the proposed Co-DeepSVS has promising prognostic ability and surpasses most widely used methods on a multi-phase CT dataset. In addition, the introduction of the SVR layer makes the model more robust in the presence of follow-up bias.
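One way to picture a support vector regression layer for censored data, as described above, is an epsilon-insensitive loss that penalizes deviations in both directions for uncensored cases but only penalizes under-prediction for censored cases, whose true survival time is at least the observed follow-up. The sketch below is an illustrative loss under those assumptions, not the Co-DeepSVS implementation.

```python
import torch

def censored_svr_loss(pred_time, obs_time, event, epsilon=0.5):
    """Epsilon-insensitive regression loss adapted to right-censored survival data.

    event == 1: death observed, penalize deviation in both directions.
    event == 0: censored, true time >= obs_time, so penalize only under-prediction.
    """
    resid = obs_time - pred_time
    two_sided = torch.clamp(resid.abs() - epsilon, min=0.0)   # uncensored cases
    one_sided = torch.clamp(resid - epsilon, min=0.0)         # censored cases
    return torch.where(event.bool(), two_sided, one_sided).mean()

# Toy batch: predicted survival times, observed follow-up times, event indicators.
pred_time = torch.tensor([10.0, 24.0, 36.0], requires_grad=True)
obs_time = torch.tensor([12.0, 30.0, 20.0])
event = torch.tensor([1, 0, 1])
censored_svr_loss(pred_time, obs_time, event).backward()
```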
Subject(s)
Knowledge, Neural Networks (Computer), Prognosis, Supervised Machine Learning
ABSTRACT
The human amygdala and hippocampus are critically involved in various processes in face perception. However, it remains unclear how task demands or evaluative contexts modulate the processes underlying face perception. In this study, we employed two task instructions while participants viewed the same faces and recorded single-neuron activity from the human amygdala and hippocampus. We comprehensively analyzed task modulation of three key aspects of face processing and found that neurons in the amygdala and hippocampus (1) encoded high-level social traits such as perceived facial trustworthiness and dominance, and this response was modulated by task instructions; (2) encoded low-level facial features and demonstrated region-based feature coding, which was not modulated by task instructions; and (3) encoded fixations on salient face parts such as the eyes and mouth, which was not modulated by task instructions. Together, our results provide a comprehensive survey of task modulation of the neural processes underlying face perception at the single-neuron level in the human amygdala and hippocampus.