Results 1 - 20 of 1,404
1.
Sci Rep ; 14(1): 12629, 2024 06 01.
Article in English | MEDLINE | ID: mdl-38824168

ABSTRACT

Moral judgements about people based on their actions are a key component guiding social decision making. It is currently unknown how positive or negative moral judgments associated with a person's face are processed and stored in the brain over the long term. Here, we investigate the long-term memory of moral values associated with human faces using simultaneous EEG-fMRI data acquisition. Results show that only a few exposures to morally charged stories about people are enough to form long-term memories, a day later, for a relatively large number of new faces. Event-related potentials (ERPs) showed a significant differentiation of remembered good vs. bad faces over centrofrontal electrode sites (value ERP). EEG-informed fMRI analysis revealed a subcortical cluster centered on the left caudate tail (CDt) as a correlate of the face value ERP. Importantly, neither this analysis nor a conventional whole-brain analysis revealed any significant coding of face values in cortical areas, in particular the fusiform face area (FFA). Conversely, an fMRI-informed EEG source localization using accurate subject-specific EEG head models also revealed activation in the left caudate tail. Nevertheless, the detected caudate tail region was found to be functionally connected to the FFA, suggesting that the FFA is the source of face-specific information to CDt. A further psychophysiological interaction analysis revealed task-dependent coupling between CDt and the dorsomedial prefrontal cortex (dmPFC), a region previously identified as retaining emotional working memories. These results identify CDt as a main site for encoding the long-term value memories of faces in humans, suggesting that the moral value of faces engages the same subcortical basal ganglia circuitry involved in processing reward value memory for objects in primates.


Subject(s)
Electroencephalography, Evoked Potentials, Magnetic Resonance Imaging, Morals, Humans, Magnetic Resonance Imaging/methods, Female, Male, Adult, Evoked Potentials/physiology, Young Adult, Caudate Nucleus/physiology, Caudate Nucleus/diagnostic imaging, Brain Mapping/methods, Face/physiology, Memory/physiology, Judgment/physiology
2.
Sci Rep ; 14(1): 10040, 2024 05 02.
Article in English | MEDLINE | ID: mdl-38693189

ABSTRACT

Investigating visual illusions helps us understand how we process visual information. For example, face pareidolia, the misperception of illusory faces in objects, can be used to understand how we process real faces. However, it remains unclear whether this illusion emerges from errors in face detection or from slower, cognitive processes. Here, our logic is straightforward: if examples of face pareidolia activate the mechanisms that rapidly detect faces in visual environments, then participants will look at objects more quickly when those objects also contain illusory faces. To test this hypothesis, we sampled continuous eye movements during a fast saccadic choice task in which participants were required to select either faces or food items. During this task, pairs of stimuli were positioned close to the initial fixation point or further away, in the periphery. As expected, participants were faster to look at face targets than food targets. Importantly, we also discovered an advantage for food items with illusory faces, but this advantage was limited to the peripheral condition. These findings are among the first to demonstrate that the face pareidolia illusion persists in the periphery and is thus likely to be a consequence of erroneous face detection.


Subject(s)
Illusions, Humans, Female, Male, Adult, Illusions/physiology, Young Adult, Visual Perception/physiology, Photic Stimulation, Face/physiology, Facial Recognition/physiology, Eye Movements/physiology, Pattern Recognition, Visual/physiology
3.
Sensors (Basel) ; 24(9)2024 Apr 25.
Article in English | MEDLINE | ID: mdl-38732846

ABSTRACT

Brain-computer interfaces (BCIs) allow information to be transmitted directly from the human brain to a computer, enhancing the ability of human brain activity to interact with the environment. In particular, BCI-based control systems are highly desirable because they can control equipment used by people with disabilities, such as wheelchairs and prosthetic legs. BCIs make use of electroencephalograms (EEGs) to decode the human brain's status. This paper presents an EEG-based facial gesture recognition method based on a self-organizing map (SOM). The proposed method uses the α, β, and θ power bands of the EEG signals as the features of a gesture, and a SOM-Hebb classifier to classify the feature vectors. We used the proposed method to develop an online facial gesture recognition system. The facial gestures were defined by combining facial movements that are easy to detect in EEG signals. The recognition accuracy of the system was examined through experiments and ranged from 76.90% to 97.57%, depending on the number of gestures recognized. The lowest accuracy (76.90%) occurred when recognizing seven gestures, which is still quite accurate compared with other EEG-based recognition systems. The online recognition system was implemented in MATLAB, and it took 5.7 s to complete the recognition flow.
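
The pipeline described in this abstract, band-power features fed to a SOM, can be sketched as follows. This is a minimal illustration and not the authors' implementation: the sampling rate, grid size, and training schedule are assumptions, and the plain SOM below stands in for the paper's SOM-Hebb classifier.

```python
import numpy as np
from scipy.signal import welch

FS = 250  # EEG sampling rate in Hz (assumed)
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_power_features(epoch, fs=FS):
    """theta/alpha/beta power per channel for a (channels, samples) epoch."""
    f, pxx = welch(epoch, fs=fs, nperseg=fs)
    feats = []
    for lo, hi in BANDS.values():
        sel = (f >= lo) & (f < hi)
        feats.append(np.trapz(pxx[:, sel], f[sel], axis=1))  # integrate PSD
    return np.concatenate(feats)  # length = 3 * n_channels

def train_som(X, grid=(6, 6), iters=2000, lr0=0.5, sigma0=2.0, seed=0):
    """Plain SOM trained on feature vectors X of shape (n, d)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=grid + (X.shape[1],))
    gy, gx = np.meshgrid(range(grid[0]), range(grid[1]), indexing="ij")
    for t in range(iters):
        x = X[rng.integers(len(X))]
        bmu = np.unravel_index(np.linalg.norm(W - x, axis=2).argmin(), grid)
        decay = np.exp(-t / iters)  # shrink learning rate and neighborhood
        h = np.exp(-((gy - bmu[0]) ** 2 + (gx - bmu[1]) ** 2)
                   / (2 * (sigma0 * decay) ** 2))
        W += lr0 * decay * h[..., None] * (x - W)  # pull neighborhood toward x
    return W
```

After training, each map unit can be labeled by the majority class of the training vectors it wins, and a new gesture is assigned the label of its best-matching unit.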


Subject(s)
Brain-Computer Interfaces, Electroencephalography, Gestures, Humans, Electroencephalography/methods, Face/physiology, Algorithms, Pattern Recognition, Automated/methods, Signal Processing, Computer-Assisted, Brain/physiology, Male
4.
Sensors (Basel) ; 24(9)2024 Apr 25.
Article in English | MEDLINE | ID: mdl-38732856

ABSTRACT

Biometric authentication plays a vital role in many everyday applications, with increasing demands for reliability and security. However, the use of real biometric data for research raises privacy concerns and data scarcity issues. A promising approach has emerged: using synthetic biometric data to address unbalanced representation and bias, as well as the limited availability of diverse datasets for the development and evaluation of biometric systems. Methods for the parameterized generation of highly realistic synthetic data are emerging, while the quality metrics needed to prove that synthetic data are comparable to real data remain open research tasks. We explore the generation of 3D synthetic face data using game engines' capabilities for generating varied, realistic virtual characters as an alternative for creating synthetic face data while maintaining reproducibility and ground truth, in contrast to other creation methods. While synthetic data offer several benefits, including improved resilience against data privacy concerns, we also address the limitations and challenges associated with their usage. Our work shows consistent behavior when comparing semi-synthetic data, as a digital representation of a real identity, with the corresponding real datasets. Despite slightly asymmetrical performance in comparison with a larger database of real samples, we show promising performance in face data authentication, which lays the foundation for further investigations with digital avatars and the creation and analysis of fully synthetic data. Future directions for improving synthetic biometric data generation and its impact on advancing biometrics research are discussed.


Subject(s)
Face, Video Games, Humans, Face/anatomy & histology, Face/physiology, Biometry/methods, Biometric Identification/methods, Imaging, Three-Dimensional/methods, Male, Female, Algorithms, Reproducibility of Results
5.
PLoS One ; 19(5): e0304150, 2024.
Article in English | MEDLINE | ID: mdl-38805447

ABSTRACT

When comprehending speech, listeners can use information encoded in visual cues from a face to enhance auditory speech comprehension. For example, prior work has shown that mouth movements reflect articulatory features of speech segments and durational information, while pitch and speech amplitude are primarily cued by eyebrow and head movements. Little is known about how the visual perception of segmental and prosodic speech information is influenced by linguistic experience. Using eye tracking, we studied how perceivers' visual scanning of different regions of a talking face predicts accuracy in a task targeting segmental versus prosodic information, and asked how this is influenced by language familiarity. Twenty-four native English perceivers heard two audio sentences in either English or Mandarin (an unfamiliar, non-native language), which sometimes differed in segmental or prosodic information (or both). Perceivers then saw a silent video of a talking face and judged whether the video matched the first or second audio sentence (or whether both sentences were the same). First, increased looking at the mouth predicted correct responses only in non-native language trials. Second, the start of a successful search for speech information in the mouth area was significantly delayed in non-native versus native trials, but only when the auditory sentences differed in prosodic information alone, not when they differed in segmental information. Third, in correct trials, saccade amplitude in native language trials was significantly greater than in non-native trials, indicating more intensely focused fixations in the latter. Taken together, these results suggest that mouth-looking was generally more evident when processing a non-native versus native language in all analyses but, fascinatingly, when measuring perceivers' latency to fixate the mouth, this language effect was largest in trials where only prosodic information was useful for the task.


Subject(s)
Language, Phonetics, Speech Perception, Humans, Female, Male, Adult, Speech Perception/physiology, Young Adult, Face/physiology, Visual Perception/physiology, Eye Movements/physiology, Speech/physiology, Eye-Tracking Technology
6.
Curr Biol ; 34(9): R346-R348, 2024 05 06.
Article in English | MEDLINE | ID: mdl-38714161

ABSTRACT

Animals, including humans, often react to sounds by involuntarily moving their face and body. A new study shows that facial movements provide a simple and reliable readout of a mouse's hearing ability that is more sensitive than traditional measurements.


Subject(s)
Face, Animals, Mice, Face/physiology, Auditory Perception/physiology, Hearing/physiology, Sound, Movement/physiology, Humans
7.
PLoS One ; 19(5): e0303400, 2024.
Article in English | MEDLINE | ID: mdl-38739635

ABSTRACT

Visual abilities tend to vary predictably across the visual field: for simple low-level stimuli, visibility is better along the horizontal vs. vertical meridian and in the lower vs. upper visual field. In contrast, face perception abilities have been reported to show either distinct or entirely idiosyncratic patterns of variation in peripheral vision, suggesting a dissociation between the spatial properties of low- and higher-level vision. To assess this link more clearly, we extended methods used in low-level vision to develop an acuity test for face perception, measuring the smallest size at which facial gender can be reliably judged in peripheral vision. In three experiments, we show the characteristic inversion effect, with better acuity for upright faces than inverted ones, demonstrating the engagement of high-level face-selective processes in peripheral vision. We also observe a clear advantage for gender acuity on the horizontal vs. vertical meridian and a smaller but consistent lower- vs. upper-field advantage. These visual field variations match those of low-level vision, indicating that higher-level face processing abilities either inherit or actively maintain the characteristic patterns of spatial selectivity found in early vision. The commonality of these spatial variations throughout the visual hierarchy means that the location of faces in our visual field systematically influences our perception of them.


Subject(s)
Facial Recognition, Visual Fields, Humans, Visual Fields/physiology, Female, Male, Adult, Facial Recognition/physiology, Young Adult, Photic Stimulation, Visual Perception/physiology, Visual Acuity/physiology, Face/physiology
8.
Multisens Res ; 37(2): 125-141, 2024 Apr 03.
Article in English | MEDLINE | ID: mdl-38714314

ABSTRACT

Trust is critical to human social interaction, and research has identified many cues that help in the assimilation of this social trait. Two of these cues are the pitch of the voice and the width-to-height ratio of the face (fWHR). Additionally, research has indicated that the content of a spoken sentence itself affects trustworthiness, a finding that has not yet been brought into multisensory research. The current research investigates previously developed theories on trust in relation to vocal pitch, fWHR, and sentence content in a multimodal setting. Twenty-six female participants were asked to judge the trustworthiness of a voice speaking a neutral or romantic sentence while seeing a face. The average pitch of the voice and the fWHR were varied systematically. Results indicate that the content of the spoken message was an important predictor of trustworthiness, extending into multimodality. Further, the mean pitch of the voice and the fWHR of the face appeared to be useful indicators in a multimodal setting, and these effects interacted with one another across modalities. The data demonstrate that trust in the voice is shaped by task-irrelevant visual stimuli. Future research is encouraged to clarify whether these findings remain consistent across genders, age groups, and languages.


Subject(s)
Face, Trust, Voice, Humans, Female, Voice/physiology, Young Adult, Adult, Face/physiology, Speech Perception/physiology, Pitch Perception/physiology, Facial Recognition/physiology, Cues, Adolescent
9.
Sci Rep ; 14(1): 9402, 2024 04 24.
Article in English | MEDLINE | ID: mdl-38658575

ABSTRACT

Perceptual decisions are derived from the combination of priors and sensory input. While priors are broadly understood to reflect experience/expertise developed over one's lifetime, the role of perceptual expertise at the individual level has seldom been directly explored. Here, we manipulated probabilistic information associated with a high- and a low-expertise category (faces and cars, respectively) while assessing individual levels of expertise with each category. Sixty-seven participants learned the probabilistic association between a color cue and each target category (face/car) in a behavioural categorization task. Neural activity (EEG) was then recorded in a similar paradigm in the same participants, featuring the previously learned contingencies without the explicit task. Behaviourally, perception of the higher-expertise category (faces) was modulated by expectation. Specifically, we observed facilitatory and interference effects when targets were correctly or incorrectly expected, which were also associated with independently measured individual levels of face expertise. Multivariate pattern analysis of the EEG signal revealed clear effects of expectation from 100 ms post-stimulus, with significant decoding of the neural response to expected vs. unexpected stimuli when viewing identical images. The latency of peak decoding when participants saw faces was directly associated with individual-level facilitation effects in the behavioural task. The current results not only provide time-sensitive evidence of expectation effects on early perception but also highlight the role of higher-level expertise in forming priors.
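
Time-resolved multivariate decoding of the kind reported here (significant decoding from 100 ms post-stimulus) is commonly implemented by training a classifier independently at each timepoint. A minimal sketch with scikit-learn, where the data shapes and classifier choice are assumptions rather than the authors' pipeline:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def decode_over_time(X, y, cv=5):
    """X: (trials, channels, timepoints) EEG epochs; y: condition labels.
    Returns cross-validated decoding accuracy at each timepoint."""
    scores = []
    for t in range(X.shape[2]):
        clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
        scores.append(cross_val_score(clf, X[:, :, t], y, cv=cv).mean())
    return np.array(scores)  # compare against chance (0.5 for two classes)
```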


Subject(s)
Electroencephalography, Facial Recognition, Humans, Male, Female, Adult, Facial Recognition/physiology, Young Adult, Photic Stimulation, Reaction Time/physiology, Visual Perception/physiology, Face/physiology
10.
Sci Rep ; 14(1): 9794, 2024 04 29.
Article in English | MEDLINE | ID: mdl-38684721

ABSTRACT

Face perception is a major topic in vision research. Most previous research has concentrated on (holistic) spatial representations of faces, often with static faces as stimuli. However, faces are highly dynamic stimuli containing important temporal information. How sensitive humans are to temporal information in dynamic faces is not well understood. Studies investigating temporal information in dynamic faces usually focus on the processing of emotional expressions, yet faces also contain relevant temporal information without any strong emotional expression. To investigate cues that modulate human sensitivity to temporal order, we utilized muted dynamic neutral-face videos in two experiments. We varied the orientation of the faces (upright and inverted) and the presence/absence of eye blinks as partial dynamic cues. Participants viewed short, muted, monochrome videos of models vocalizing a widely known text (the national anthem), played either forward (in the correct temporal order) or backward. Participants were asked to determine the direction of the temporal order for each video and, at the end of the experiment, whether they had understood the speech. We found that face orientation and the presence/absence of an eye blink affected sensitivity, criterion (bias), and reaction time. Overall, sensitivity was higher for upright than for inverted faces, and higher when an eye blink was present than when it was absent. Reaction times were mostly faster in the conditions with higher sensitivity. A bias to report inverted faces as 'backward', observed in Experiment I, where upright and inverted faces were presented randomly interleaved within each block, was absent when upright and inverted faces were presented in separate blocks in Experiment II. Language comprehension results revealed higher sensitivity when the speech was understood than when it was not, in both experiments. Taken together, our results showed higher sensitivity for upright than for inverted faces, suggesting that the perception of dynamic, task-relevant information is superior for the canonical orientation of faces. Furthermore, partial information coming from eye blinks, in addition to mouth movements, seems to play a significant role in dynamic face perception, both when faces are presented upright and when inverted. We suggest that studying the perception of facial dynamics beyond emotional expressions will help us better understand the mechanisms underlying the temporal integration of facial information from different (partial and holistic) sources, and our results show how human observers employ different strategies, depending on the available information, when judging the temporal order of faces.
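
The sensitivity and criterion (bias) measures referred to here are standard signal-detection quantities. A minimal sketch of how they are typically computed; the log-linear correction and the example counts are illustrative assumptions, not values from the study:

```python
from scipy.stats import norm

def sdt_measures(hits, misses, fas, crs):
    """d' (sensitivity) and c (criterion) from response counts, with a
    log-linear correction so rates of exactly 0 or 1 stay finite."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (fas + 0.5) / (fas + crs + 1.0)
    zh, zf = norm.ppf(hit_rate), norm.ppf(fa_rate)
    return zh - zf, -0.5 * (zh + zf)

# e.g., 'forward' responses to forward videos count as hits,
# and 'forward' responses to backward videos as false alarms
d_prime, criterion = sdt_measures(hits=42, misses=8, fas=15, crs=35)
```

A negative criterion indicates a liberal bias toward one response alternative, which is how a bias to report inverted faces as 'backward' would show up.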


Subject(s)
Facial Recognition, Humans, Female, Male, Facial Recognition/physiology, Adult, Young Adult, Reaction Time/physiology, Facial Expression, Blinking/physiology, Photic Stimulation/methods, Emotions/physiology, Face/physiology, Cues
11.
Sensors (Basel) ; 24(8)2024 Apr 11.
Article in English | MEDLINE | ID: mdl-38676067

ABSTRACT

Facial expression is an important way to reflect human emotions, and it represents a dynamic deformation process. Analyzing facial movements is an effective means of understanding expressions. However, there is currently a lack of methods capable of analyzing the dynamic details of full-field deformation in expressions. In this paper, to enable effective dynamic analysis of expressions, a classic optical measurement method called stereo digital image correlation (stereo-DIC or 3D-DIC) is employed to analyze the deformation fields of facial expressions. The forming processes of the six basic facial expressions of experimental subjects are analyzed through the displacement and strain fields calculated by 3D-DIC. The displacement fields of each expression exhibit strong consistency with the action units (AUs) defined by the classical Facial Action Coding System (FACS). Moreover, it is shown that the gradient of the displacement, i.e., the strain field, offers particular advantages in characterizing facial expressions due to its localized nature, effectively sensing the nuanced dynamics of facial movements. By processing extensive data, this study identifies two featured regions in the six basic expressions: one where deformation begins and one where deformation is most severe. Based on these two regions, the temporal evolution of the six basic expressions is discussed. The presented investigations demonstrate the superior performance of 3D-DIC in the quantitative analysis of facial expressions, and the proposed analytical strategy may have value in objectively characterizing human expressions based on quantitative measurement.
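
The "gradient of the displacement" used here as a strain measure can be illustrated with the in-plane small-strain tensor ε = ½(∇u + ∇uᵀ) computed from the two displacement fields. DIC packages compute this internally, so the sketch below stands in for, rather than reproduces, the authors' processing:

```python
import numpy as np

def inplane_strain(ux, uy, spacing=1.0):
    """Small-strain components from displacement fields ux, uy sampled on a
    regular grid (axis 0 = y, axis 1 = x); spacing in physical units/pixel."""
    dux_dy, dux_dx = np.gradient(ux, spacing)
    duy_dy, duy_dx = np.gradient(uy, spacing)
    exx = dux_dx                   # normal strain along x
    eyy = duy_dy                   # normal strain along y
    exy = 0.5 * (dux_dy + duy_dx)  # shear strain
    return exx, eyy, exy
```

Because strain is a spatial derivative, it highlights localized deformation (e.g., around the mouth corners) that a smooth global displacement field can mask, which is the advantage the abstract describes.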


Subject(s)
Facial Expression, Imaging, Three-Dimensional, Humans, Imaging, Three-Dimensional/methods, Face/physiology, Emotions/physiology, Algorithms, Image Processing, Computer-Assisted/methods
12.
Sensors (Basel) ; 24(8)2024 Apr 19.
Article in English | MEDLINE | ID: mdl-38676235

ABSTRACT

Most human emotion recognition methods largely depend on classifying stereotypical facial expressions that represent emotions. However, such facial expressions do not necessarily correspond to actual emotional states and may instead reflect communicative intentions. In other cases, emotions are hidden, cannot be expressed, or may have lower arousal manifested by less pronounced facial expressions, as may occur during passive video viewing. This study improves an emotion classification approach developed in a previous study, which classifies emotions remotely from short facial video data without relying on stereotypical facial expressions or contact-based methods. In this approach, we aim to remotely sense transdermal cardiovascular spatiotemporal facial patterns associated with different emotional states and analyze these data via machine learning. We propose several improvements, including better remote heart rate estimation via preliminary skin segmentation, an improved heartbeat peak-and-trough detection process, and better emotion classification accuracy achieved by employing an appropriate deep learning classifier using only RGB camera input. We used the dataset obtained in the previous study, which contains facial videos of 110 participants who passively viewed 150 short videos eliciting five emotion types: amusement, disgust, fear, sexual arousal, and no emotion, while three cameras with different wavelength sensitivities (visible spectrum, near-infrared, and longwave infrared) recorded them simultaneously. From the short facial videos, we extracted unique high-resolution, spatiotemporal, physiologically affected features and examined them as input features with different deep learning approaches. An EfficientNet-B0 model was able to classify participants' emotional states with an overall average accuracy of 47.36% using a single input spatiotemporal feature map obtained from a regular RGB camera.
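
The "remote heart rate estimation via preliminary skin segmentation" step can be sketched in its simplest form: average a color channel over skin pixels, band-pass to the plausible pulse range, and read off the spectral peak. The channel choice, filter order, and band edges below are common defaults for this kind of pipeline, not the authors' exact settings:

```python
import numpy as np
from scipy.signal import butter, filtfilt, periodogram

def remote_heart_rate(frames, fps, skin_mask):
    """frames: iterable of (H, W, 3) RGB images; skin_mask: (H, W) boolean
    array from a prior skin-segmentation step. Returns estimated HR in bpm."""
    trace = np.array([f[..., 1][skin_mask].mean() for f in frames])  # green channel
    trace -= trace.mean()
    nyq = fps / 2.0
    b, a = butter(3, [0.7 / nyq, 4.0 / nyq], btype="band")  # ~42-240 bpm
    pulse = filtfilt(b, a, trace)
    freqs, power = periodogram(pulse, fs=fps)
    return 60.0 * freqs[np.argmax(power)]
```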


Subject(s)
Deep Learning, Emotions, Facial Expression, Heart Rate, Humans, Emotions/physiology, Heart Rate/physiology, Video Recording/methods, Image Processing, Computer-Assisted/methods, Face/physiology, Female, Male
13.
IEEE J Biomed Health Inform ; 28(5): 2955-2966, 2024 May.
Article in English | MEDLINE | ID: mdl-38345952

ABSTRACT

Video-based photoplethysmography (VPPG) offers the capability to measure heart rate (HR) from facial videos. However, the reliability of the HR values extracted through this method remains uncertain, especially when videos are affected by various disturbances. Confronting this challenge, we introduce an innovative framework for VPPG-based HR measurement that focuses on capturing diverse sources of uncertainty in the predicted HR values. In this framework, a neural network named HRUNet is structured for HR extraction from input facial videos. Departing from the conventional training approach of learning specific weight (and bias) values, we leverage Bayesian posterior estimation to derive weight distributions within HRUNet. Sampling from these distributions encodes the uncertainty stemming from HRUNet's limited performance. On this basis, we redefine HRUNet's output as a distribution of potential HR values, as opposed to the traditional emphasis on the single most probable HR value; the underlying goal is to capture the uncertainty arising from inherent noise in the input video. HRUNet was evaluated on 1,098 videos from seven datasets, spanning three scenarios: undisturbed, motion-disturbed, and light-disturbed. The test outcomes demonstrate that uncertainty in the HR measurements increases significantly in the disturbed scenarios compared with the undisturbed scenario. Moreover, HRUNet outperforms state-of-the-art methods in HR accuracy when HR values with uncertainty above 0.4 are excluded. This underscores that uncertainty is an informative indicator of potentially erroneous HR measurements. With its reliability affirmed, the VPPG technique holds promise for applications in safety-critical domains.
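
The core idea, outputting a distribution of HR values by sampling network weights, can be approximated generically with Monte Carlo dropout. This is a stand-in: HRUNet's actual Bayesian posterior estimation is not reproduced here, and the model object and filtering threshold are placeholders:

```python
import torch

def hr_with_uncertainty(model: torch.nn.Module, clip: torch.Tensor, n=50):
    """Sample stochastic forward passes (dropout left active) to obtain a
    predictive mean and spread; a large spread flags unreliable estimates."""
    model.train()  # keeps dropout stochastic; no gradients are taken below
    with torch.no_grad():
        samples = torch.stack([model(clip) for _ in range(n)])
    return samples.mean(dim=0), samples.std(dim=0)

# downstream filtering, mirroring the paper's use of uncertainty:
# keep an HR estimate only if its spread falls below a chosen threshold
```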


Subject(s)
Face, Heart Rate, Photoplethysmography, Signal Processing, Computer-Assisted, Video Recording, Humans, Heart Rate/physiology, Photoplethysmography/methods, Face/physiology, Video Recording/methods, Uncertainty, Neural Networks, Computer, Adult, Bayes Theorem, Male, Female, Young Adult, Image Processing, Computer-Assisted/methods, Algorithms, Reproducibility of Results
14.
Am J Hum Biol ; 36(6): e24040, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38174630

ABSTRACT

OBJECTIVES: The capacity to assess male physical strength from facial cues may be adaptive, given the health- and fitness-related associations of muscular strength. Our study complements recent research on strength-related face perceptions of male Maasai by applying the protocol to male European faces and assessors. METHODS: Five distinct facial morphs calibrated for handgrip strength (HGS) were manufactured with geometric morphometrics by regressing the Procrustes shape coordinates on HGS in a sample of 26 European men (18-32 years). Young adult men and women (n = 445) rated these morphs on physical strength, attractiveness, and aggressiveness. RESULTS: Facial morphs calibrated to lower HGS were rated as less strong, less attractive, and more aggressive than those calibrated to higher HGS. Medium levels of HGS were associated with the highest attractiveness ratings. CONCLUSIONS: The rating patterns of physical strength, attractiveness, and aggressiveness for European male facial morphs are similar to previous ratings of Maasai male faces. The current findings therefore corroborate the suggestion of a common mechanism for social attributions based on facial cues to physical strength, modulated by local ecology and societal context.


Subject(s)
Cues, Face, Hand Strength, Humans, Male, Adult, Adolescent, Female, Young Adult, Face/anatomy & histology, Face/physiology, Hand Strength/physiology, Europe, Aggression, Beauty
15.
Cogn Emot ; 38(1): 59-70, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37712676

ABSTRACT

Stimulating CT afferents by forearm caresses produces a subjective experience of pleasantness in the receiver and modulates subjective evaluations of viewed affective images. Receiving touch from another person includes the social element of that person's presence, which has been found to influence affective image evaluations even without touch. The current study investigated whether these modulations translate to facial muscle responses associated with positive and negative affect across touch-involving and mere-presence conditions. Female participants (N = 40, M(age) = 22.4, SD = 5.3) watched affective images (neutral, positive, negative) while facial electromyography was recorded (sites: zygomaticus, corrugator). ANOVA results showed that providing touch to another person or to oneself modulated zygomaticus-site responses when viewing positive images: providing CT-afferent-stimulating touch (i.e., forearm caresses) to another person or oneself dampened the positive affective facial muscle response to positive images. Providing touch to another person generally increased corrugator muscle activity related to negative affect. Receiving touch did not modulate affective facial muscle responses during the viewing of affective images but may affect later cognitive processes. Together, the previously reported social and touch modulations of subjective evaluations of affective images do not translate to facial muscle responses during affective image viewing, which were modulated differently.


Subject(s)
Touch Perception, Touch, Humans, Female, Young Adult, Adult, Touch/physiology, Facial Muscles/physiology, Touch Perception/physiology, Emotions/physiology, Face/physiology, Electromyography
16.
Anxiety Stress Coping ; 37(1): 114-126, 2024 01.
Article in English | MEDLINE | ID: mdl-37029987

ABSTRACT

Previous research on physiological indices of social anxiety has offered unclear results. In this study, participants with low and high social anxiety performed five social interaction tasks while being recorded with a thermal camera. Each task was associated with a dimension assessed by the Social Anxiety Questionnaire for Adults (1 = Interactions with strangers, 2 = Speaking in public/talking with people in authority, 3 = Criticism and embarrassment, 4 = Assertive expression of annoyance, disgust, or displeasure, 5 = Interactions with the opposite sex). Mixed-effects models revealed that the temperature of the tip of the nose decreased significantly in participants with low (vs. high) social anxiety (p < 0.001), while no significant differences were found in the other facial regions of interest: forehead (p = 0.999) and cheeks (p = 0.999). Furthermore, task 1 was the most effective at discriminating the relationship between nose-tip thermal change and social anxiety, with a trend toward higher nose temperature in participants with high social anxiety and lower nose temperature in the low social anxiety group. We emphasize the importance of corroborating thermography with specific tasks as an ecological method, and of nose-tip thermal change as a psychophysiological index associated with social anxiety.


Subject(s)
Face, Thermography, Adult, Humans, Thermography/methods, Face/physiology, Fear, Anxiety/diagnosis, Surveys and Questionnaires
17.
PLoS One ; 18(11): e0286512, 2023.
Article in English | MEDLINE | ID: mdl-37992062

ABSTRACT

Face perception in humans and nonhuman primates is accomplished by a patchwork of specialized cortical regions. How these regions develop has remained controversial. In sighted individuals, facial information is primarily conveyed via the visual modality. Early blind individuals, on the other hand, can recognize shapes using auditory and tactile cues. Here we demonstrate that such individuals can learn to distinguish faces from houses and other shapes by using a sensory substitution device (SSD) presenting schematic faces as sound-encoded stimuli in the auditory modality. Using functional MRI, we then asked whether a face-selective brain region like the fusiform face area (FFA) shows selectivity for faces in the same subjects, and indeed, we found evidence for preferential activation of the left FFA by sound-encoded faces. These results imply that FFA development does not depend on experience with visual faces per se but may instead depend on exposure to the geometry of facial configurations.


Subject(s)
Facial Recognition, Pattern Recognition, Visual, Animals, Humans, Pattern Recognition, Visual/physiology, Brain Mapping/methods, Face/physiology, Brain/physiology, Magnetic Resonance Imaging, Photic Stimulation/methods
18.
eNeuro ; 10(2)2023 02.
Article in English | MEDLINE | ID: mdl-36759187

ABSTRACT

Facial expressions are an increasingly used tool to assess emotional experience and affective state during experimental procedures in animal models. Previous studies have successfully related specific facial features to different positive- and negative-valence situations, most notably in relation to pain. However, characterizing and interpreting such expressions remains a major challenge. We identified seven easily visualizable facial parameters on mouse profiles, accounting for changes in eye, ear, mouth, snout, and face orientation. We monitored their relative positions on the face over time and throughout sequences of positive and aversive gustatory and somatosensory stimuli in freely moving mice. The facial parameters successfully captured response profiles to each stimulus and reflected spontaneous movements in response to stimulus valence, as well as contextual elements such as habituation. Notably, eye opening was increased by palatable tastants and innocuous touch, while this parameter was reduced by tasting a bitter solution and by painful stimuli. Mouse ear posture appears to convey a large part of the emotional information. Facial expressions accurately depicted welfare and affective state in a time-sensitive manner, successfully tracking time-dependent stimulation. This study is the first to delineate rodent facial expression features in multiple positive-valence situations, including in relation to affective touch. We suggest that using this facial expression assay may provide mechanistic insights into emotional expression and improve the translational value of experimental studies in rodents on pain and other states.


Subject(s)
Emotions, Facial Expression, Mice, Animals, Emotions/physiology, Affect/physiology, Face/physiology, Pain
19.
Sensors (Basel) ; 23(2)2023 Jan 10.
Article in English | MEDLINE | ID: mdl-36679603

ABSTRACT

Previous research has demonstrated the potential to reconstruct human facial skin spectra from the responses of RGB cameras, enabling high-fidelity color reproduction of human facial skin in various industrial applications. Nonetheless, the level of precision can still be improved. Inspired by the asymmetry of human facial skin color in the CIELab* color space, we propose a practical framework, HPCAPR, for facial skin reflectance reconstruction based on calibrated datasets, which reconstructs the facial spectra in subsets derived from clustering techniques in several spectrometric and colorimetric spaces, i.e., the spectral reflectance space, the principal component (PC) space, CIELab*, and its three 2D subordinate color spaces, La*, Lb*, and ab*. The spectral reconstruction algorithm is optimized by combining state-of-the-art algorithms and thoroughly scanning the parameters. The results show that the hybrid of PCA and RGB polynomial regression with 3 PCs plus a 1st-order polynomial extension gives the best results. Performance can be improved substantially by operating the spectral reconstruction framework within the subset classified in the La* color subspace. Compared with not using the clustering technique, the median and maximum errors for the best cluster are reduced by 25.2% and 57.1%, respectively; for the worst cluster, the maximum error is reduced by 42.2%.
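
The winning combination reported here (3 PCs plus a 1st-order polynomial extension of the RGB responses) corresponds to a simple linear-algebra pipeline. A minimal sketch under the assumption of a calibrated training set of paired RGB responses and measured reflectances, omitting the La* clustering step the paper layers on top:

```python
import numpy as np

def fit_spectral_reconstructor(rgb, spectra, n_pcs=3):
    """rgb: (n, 3) camera responses; spectra: (n, n_wavelengths) measured
    reflectances. Learns a mean + PCA basis and a 1st-order polynomial map."""
    mean = spectra.mean(axis=0)
    _, _, vt = np.linalg.svd(spectra - mean, full_matrices=False)
    basis = vt[:n_pcs]                            # (n_pcs, n_wavelengths)
    scores = (spectra - mean) @ basis.T           # PC coordinates per sample
    X = np.hstack([np.ones((len(rgb), 1)), rgb])  # [1, R, G, B] extension
    M, *_ = np.linalg.lstsq(X, scores, rcond=None)
    return mean, basis, M

def reconstruct_spectra(rgb, mean, basis, M):
    X = np.hstack([np.ones((len(rgb), 1)), rgb])
    return mean + (X @ M) @ basis                 # (n, n_wavelengths)
```

Higher-order polynomial extensions simply append terms such as R*G or R**2 to X; clustering would fit one such map per subset of the training data.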


Subject(s)
Algorithms, Skin, Humans, Color, Colorimetry/methods, Face/physiology
20.
Emotion ; 23(1): 163-181, 2023 Feb.
Article in English | MEDLINE | ID: mdl-34843306

ABSTRACT

Facial width-to-height ratio (fWHR), presumed to be shaped by testosterone during puberty, has been linked with aggressive, dominant, and power-seeking behavioral traits in adult males, although the causal mediation is still disputed. To investigate the role of mere observer attribution bias in this association, we instructed participants to draw, feature-assemble, or photo-edit faces of fictitious males with an aggressive-dominant character (compared with peace-loving-submissive) or powerful social status (compared with powerless). Across three studies involving 1,100 modeled faces in total, we observed little evidence for attribution bias with regard to facial width. Only in the photo-edited faces did the character condition seem to affect fWHR; this difference, however, relied on displayed state emotions, not on static facial features. Anger, in particular, was expressed by lowered or V-shaped eyebrows, whereby facial height was reduced so that fWHR increased, relative to the comparison condition, where the opposite happened. Using Bayesian analyses and equivalence testing, we confirmed that, in the absence of state emotionality, there was no effect of character condition on facial width. Our results add to a number of recent studies stressing the role of emotion overgeneralization in the association of fWHR with personality traits, an attributional bias that may give rise to a self-fulfilling prophecy. Methodologically, we infer that static images may be of limited use for investigations of fWHR because they cannot sufficiently differentiate between transient muscular activation and identity-related bone structure. (PsycInfo Database Record (c) 2023 APA, all rights reserved).


Subject(s)
Emotions, Face, Male, Adult, Humans, Bayes Theorem, Face/physiology, Emotions/physiology, Anger, Aggression/psychology, Facial Expression