1 - 20 of 43
1.
Article En | MEDLINE | ID: mdl-33090835

The "spatial congruency bias" is a behavioral phenomenon where 2 objects presented sequentially are more likely to be judged as being the same object if they are presented in the same location (Golomb, Kupitz, & Thiemann, 2014), suggesting that irrelevant spatial location information may be bound to object representations. Here, we examine whether the spatial congruency bias extends to higher-level object judgments of facial identity and expression. On each trial, 2 real-world faces were sequentially presented in variable screen locations, and subjects were asked to make same-different judgments on the facial expression (Experiments 1-2) or facial identity (Experiment 3) of the stimuli. We observed a robust spatial congruency bias for judgments of facial identity, yet a more fragile one for judgments of facial expression. Subjects were more likely to judge 2 faces as displaying the same expression if they were presented in the same location (compared to in different locations), but only when the faces shared the same identity. On the other hand, a spatial congruency bias was found when subjects made judgments on facial identity, even across faces displaying different facial expressions. These findings suggest a possible difference between the binding of facial identity and facial expression to spatial location. (PsycInfo Database Record (c) 2020 APA, all rights reserved).

2.
Dev Psychol ; 55(9): 1965-1981, 2019 Sep.
Article En | MEDLINE | ID: mdl-31464498

Computer vision algorithms have made tremendous advances in recent years. We now have algorithms that can detect and recognize objects, faces, and even facial actions in still images and video sequences. This is wonderful news for researchers who need to code facial articulations in large data sets of images and videos, because this task is time consuming and can only be completed by expert coders, making it very expensive. The availability of computer algorithms that can automatically code facial actions in extremely large data sets also opens the door to studies in psychology and neuroscience that were not previously possible, for example, studying the development of the production of facial expressions from infancy to adulthood within and across cultures. Unfortunately, there is a lack of methodological understanding of how these algorithms should and should not be used, and of how to select the most appropriate algorithm for each study. This article aims to address this gap in the literature. Specifically, we present several methodologies for use in hypothesis-based and exploratory studies, explain how to select the computer algorithms that best fit the requirements of our experimental design, and detail how to evaluate whether the automatic annotations provided by existing algorithms are trustworthy. (PsycINFO Database Record (c) 2019 APA, all rights reserved).
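
As a concrete illustration of the last point (checking whether automatic annotations are trustworthy), the following minimal sketch compares an algorithm's binary AU codes against expert human coding on a validation set and reports per-AU agreement. The AU list, the stand-in data arrays, and the 0.80 acceptance threshold are illustrative assumptions, not a protocol taken from the article.

```python
# Hypothetical sketch: compare an automatic AU coder's per-frame annotations
# against expert human coding on a validation set. Data here are random stand-ins.
import numpy as np
from sklearn.metrics import f1_score, cohen_kappa_score

AUS = [1, 2, 4, 6, 12, 15, 25]                       # AUs chosen for evaluation (assumed)
human = np.random.randint(0, 2, (500, len(AUS)))     # expert binary codes, frames x AUs
machine = np.random.randint(0, 2, (500, len(AUS)))   # algorithm's binary codes

for j, au in enumerate(AUS):
    f1 = f1_score(human[:, j], machine[:, j], zero_division=0)
    kappa = cohen_kappa_score(human[:, j], machine[:, j])
    flag = "ok" if f1 >= 0.80 else "re-check"        # acceptance threshold is arbitrary
    print(f"AU{au:02d}: F1={f1:.2f}  kappa={kappa:.2f}  ({flag})")
```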


Algorithms , Emotions/physiology , Facial Expression , Machine Learning/standards , Research Design/standards , Child , Female , Humans , Male
3.
Psychol Sci Public Interest ; 20(1): 1-68, 2019 Jul.
Article En | MEDLINE | ID: mdl-31313636

It is commonly assumed that a person's emotional state can be readily inferred from his or her facial movements, typically called emotional expressions or facial expressions. This assumption influences legal judgments, policy decisions, national security protocols, and educational practices; guides the diagnosis and treatment of psychiatric illness, as well as the development of commercial applications; and pervades everyday social interactions as well as research in other scientific fields such as artificial intelligence, neuroscience, and computer vision. In this article, we survey examples of this widespread assumption, which we refer to as the common view, and we then examine the scientific evidence that tests this view, focusing on the six most popular emotion categories used by consumers of emotion research: anger, disgust, fear, happiness, sadness, and surprise. The available scientific evidence suggests that people do sometimes smile when happy, frown when sad, scowl when angry, and so on, as proposed by the common view, more than what would be expected by chance. Yet how people communicate anger, disgust, fear, happiness, sadness, and surprise varies substantially across cultures, situations, and even across people within a single situation. Furthermore, similar configurations of facial movements variably express instances of more than one emotion category. In fact, a given configuration of facial movements, such as a scowl, often communicates something other than an emotional state. Scientists agree that facial movements convey a range of information and are important for social communication, emotional or otherwise. But our review suggests an urgent need for research that examines how people actually move their faces to express emotions and other social information in the variety of contexts that make up everyday life, as well as careful study of the mechanisms by which people perceive instances of emotion in one another. We make specific research recommendations that will yield a more valid picture of how people move their faces to express emotions and how they infer emotional meaning from facial movements in situations of everyday life. This research is crucial to provide consumers of emotion research with the translational information they require.


Emotions , Facial Expression , Facial Recognition , Movement , Female , Humans , Interpersonal Relations , Judgment , Male , Psychomotor Performance
4.
Proc Natl Acad Sci U S A ; 116(15): 7169-7171, 2019 04 09.
Article En | MEDLINE | ID: mdl-30898883
5.
IEEE Trans Pattern Anal Mach Intell ; 41(12): 2835-2845, 2019 12.
Article En | MEDLINE | ID: mdl-30188814

Color is a fundamental image feature of facial expressions. For example, when we furrow our eyebrows in anger, blood rushes in and turns some areas of the face red; when we go white in fear, blood drains from the face. Surprisingly, these image properties have not been exploited to recognize the facial action units (AUs) associated with these expressions. Herein, we present the first system to recognize AUs and their intensities using these functional color changes. These color features are shown to be robust to changes in identity, gender, race, ethnicity, and skin color. Specifically, we identify the chromaticity changes defining the transition of an AU from inactive to active and use an innovative Gabor transform-based algorithm to gain invariance to the timing of these changes. Because these image changes are given by functions rather than vectors, we use functional classifiers to identify the most discriminant color features of an AU and its intensities. We demonstrate that, using these discriminant color features, one can achieve results superior to those of the state of the art. Finally, we define an algorithm that allows us to use the learned functional color representation in still images. This is done by learning the mapping between images and the identified functional color features in videos. Our algorithm works in real time, i.e., 30 frames per second per CPU thread.
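
To make the notion of functional color features more concrete, here is a minimal sketch that tracks the mean chromaticity of a few face regions over a clip and records each region's color trajectory relative to the first frame. The region boxes, the chromaticity normalization, and the synthetic video are assumptions for illustration; the paper's Gabor-based temporal invariance and functional classifiers are not reproduced.

```python
# Hypothetical sketch of color-based AU features: per-region chromaticity curves.
import numpy as np

def chromaticity(frame_rgb):
    """Divide out overall intensity so the features track color, not brightness."""
    rgb = frame_rgb.astype(np.float64) + 1e-6
    return rgb / rgb.sum(axis=-1, keepdims=True)     # (H, W, 3), rows sum to 1

def region_color_curves(video, regions):
    """video: (T, H, W, 3) uint8; regions: dict name -> (y0, y1, x0, x1)."""
    curves = {}
    for name, (y0, y1, x0, x1) in regions.items():
        chrom = np.stack([chromaticity(f)[y0:y1, x0:x1].mean(axis=(0, 1))
                          for f in video])           # (T, 3) color trajectory
        curves[name] = chrom - chrom[0]              # change relative to first frame
    return curves

video = np.random.randint(0, 256, (90, 128, 128, 3), dtype=np.uint8)  # fake 3 s clip
regions = {"forehead": (10, 30, 34, 94), "left_cheek": (70, 95, 20, 55)}
for name, curve in region_color_curves(video, regions).items():
    print(name, curve.shape)                         # each region yields a (T, 3) curve
```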


Face , Image Processing, Computer-Assisted/methods , Machine Learning , Algorithms , Color , Emotions/classification , Emotions/physiology , Face/anatomy & histology , Face/diagnostic imaging , Face/physiology , Humans , Skin Pigmentation/physiology , Video Recording
6.
Comput Vis ECCV ; 11214: 835-851, 2018 Sep.
Article En | MEDLINE | ID: mdl-30465044

Recent advances in Generative Adversarial Networks (GANs) have shown impressive results for the task of facial expression synthesis. The most successful architecture is StarGAN [4], which conditions the GAN's generation process on images of a specific domain, namely a set of images of persons sharing the same expression. While effective, this approach can only generate a discrete number of expressions, determined by the content of the dataset. To address this limitation, in this paper we introduce a novel GAN conditioning scheme based on Action Unit (AU) annotations, which describe, in a continuous manifold, the anatomical facial movements that define a human expression. Our approach allows controlling the magnitude of activation of each AU and combining several of them. Additionally, we propose a fully unsupervised strategy to train the model that only requires images annotated with their activated AUs, and exploit attention mechanisms that make our network robust to changing backgrounds and lighting conditions. An extensive evaluation shows that our approach goes beyond competing conditional generators both in its capability to synthesize a much wider range of expressions, ruled by anatomically feasible muscle movements, and in its capacity to deal with images in the wild.
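
A minimal sketch of the conditioning scheme's core idea, under my own simplifications: a continuous AU activation vector is tiled spatially and concatenated to the input image as extra channels before being passed through a small convolutional generator. The layer sizes and the plain encoder are assumptions; this is not the architecture or training procedure proposed in the paper.

```python
# Hypothetical sketch: condition an image generator on a continuous AU vector
# by tiling it as extra input channels.
import torch
import torch.nn as nn

class AUConditionedGenerator(nn.Module):
    def __init__(self, n_aus=17):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + n_aus, 64, 7, padding=3), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, 7, padding=3), nn.Tanh(),
        )

    def forward(self, img, au):
        # img: (B, 3, H, W) in [-1, 1]; au: (B, n_aus) continuous activations in [0, 1]
        b, _, h, w = img.shape
        au_maps = au.view(b, -1, 1, 1).expand(b, au.shape[1], h, w)  # tile AU vector
        return self.net(torch.cat([img, au_maps], dim=1))

gen = AUConditionedGenerator()
fake = gen(torch.randn(2, 3, 128, 128), torch.rand(2, 17))
print(fake.shape)                                    # torch.Size([2, 3, 128, 128])
```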

7.
IEEE Trans Pattern Anal Mach Intell ; 40(12): 3059-3066, 2018 12.
Article En | MEDLINE | ID: mdl-29990100

Three-dimensional shape reconstruction of 2D landmark points on a single image is a hallmark of human vision, but is a task that has proven difficult for computer vision algorithms. We define a feed-forward deep neural network algorithm that can reconstruct 3D shapes from 2D landmark points almost perfectly (i.e., with extremely small reconstruction errors), even when these 2D landmarks are from a single image. Our experimental results show an improvement of up to two-fold over state-of-the-art computer vision algorithms; the 3D shape reconstruction error (measured as the Procrustes distance between the reconstructed shape and the ground truth) of human faces is , cars is .0022, human bodies is .022, and highly deformable flags is .0004. Our algorithm was also a top performer at the 2016 3D Face Alignment in the Wild Challenge competition (held in conjunction with the European Conference on Computer Vision, ECCV), which required the reconstruction of 3D face shape from a single image. The derived algorithm can be trained in a couple of hours, and testing runs at more than 1,000 frames/s on an i7 desktop. We also present an innovative data augmentation approach that allows us to train the system efficiently with a small number of samples. The system is also robust to noise (e.g., imprecise landmark points) and missing data (e.g., occluded or undetected landmark points).
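
The basic regression setup can be sketched as follows: a small feed-forward network maps flattened 2D landmark coordinates to 3D coordinates and is trained with a mean-squared-error loss. The layer widths, the loss, and the synthetic data are assumptions; the paper's actual architecture, data augmentation, and training schedule are not shown.

```python
# Hypothetical sketch: regress 3D landmark coordinates from 2D landmarks.
import torch
import torch.nn as nn

N_LANDMARKS = 68
model = nn.Sequential(
    nn.Linear(2 * N_LANDMARKS, 512), nn.ReLU(),
    nn.Linear(512, 512), nn.ReLU(),
    nn.Linear(512, 3 * N_LANDMARKS),                 # predict (x, y, z) per landmark
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

x2d = torch.randn(256, 2 * N_LANDMARKS)              # stand-in training batch
y3d = torch.randn(256, 3 * N_LANDMARKS)
for step in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x2d), y3d)
    loss.backward()
    opt.step()
print(f"final training MSE: {loss.item():.4f}")
```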


Algorithms , Imaging, Three-Dimensional/methods , Neural Networks, Computer , Databases, Factual , Face/anatomy & histology , Humans , Video Recording
8.
Proc Natl Acad Sci U S A ; 115(14): 3581-3586, 2018 04 03.
Article En | MEDLINE | ID: mdl-29555780

Facial expressions of emotion in humans are believed to be produced by contracting one's facial muscles, generally called action units. However, the surface of the face is also innervated with a large network of blood vessels. Blood flow variations in these vessels yield visible color changes on the face. Here, we study the hypothesis that these visible facial colors allow observers to successfully transmit and visually interpret emotion even in the absence of facial muscle activation. To study this hypothesis, we address the following two questions. Are observable facial colors consistent within and differential between emotion categories and positive vs. negative valence? And does the human visual system use these facial colors to decode emotion from faces? These questions suggest the existence of an important, unexplored mechanism of the production of facial expressions of emotion by a sender and their visual interpretation by an observer. The results of our studies provide evidence in favor of our hypothesis. We show that people successfully decode emotion using these color features, even in the absence of any facial muscle activation. We also demonstrate that this color signal is independent from that provided by facial muscle movements. These results support a revised model of the production and perception of facial expressions of emotion where facial color is an effective mechanism to visually transmit and decode emotion.


Color , Emotions/physiology , Face/physiology , Facial Expression , Facial Muscles/physiology , Pattern Recognition, Visual , Adult , Female , Humans , Male , Young Adult
9.
Article En | MEDLINE | ID: mdl-31244515

We present a scalable weakly supervised clustering approach to learn facial action units (AUs) from large, freely available web images. Unlike most existing methods (e.g., CNNs) that rely on fully annotated data, our method exploits web images with inaccurate annotations. Specifically, we derive a weakly supervised spectral algorithm that learns an embedding space to couple image appearance and semantics. The algorithm has an efficient gradient update and scales up to large quantities of images with a stochastic extension. With the learned embedding space, we adopt rank-order clustering to identify groups of visually and semantically similar images, and re-annotate these groups for training AU classifiers. Evaluation on the 1-million-image EmotioNet dataset demonstrates the effectiveness of our approach: (1) our learned annotations reach on average 91.3% agreement with human annotations on 7 common AUs, (2) classifiers trained with re-annotated images perform comparably to, and sometimes even better than, their supervised CNN-based counterparts, and (3) our method offers intuitive outlier/noise pruning instead of forcing one annotation onto every image. Code is available.
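
The re-annotation step can be illustrated with a deliberately simplified sketch: cluster image embeddings and assign every image in a cluster the cluster's majority (noisy) label. Plain k-means and random stand-in embeddings replace the paper's learned embedding and rank-order clustering.

```python
# Hypothetical sketch of cluster-then-relabel for noisy web annotations.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(1000, 64))             # stand-in image embeddings
noisy_labels = rng.integers(0, 2, size=1000)         # inaccurate web annotations (one AU)

clusters = KMeans(n_clusters=20, n_init=10, random_state=0).fit_predict(embeddings)
relabels = np.empty_like(noisy_labels)
for c in np.unique(clusters):
    members = clusters == c
    majority = int(noisy_labels[members].mean() >= 0.5)  # majority vote within cluster
    relabels[members] = majority                          # re-annotate the whole group
print("fraction of labels changed:", (relabels != noisy_labels).mean())
```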

10.
Curr Opin Psychol ; 17: 27-33, 2017 10.
Article En | MEDLINE | ID: mdl-28950969

Facial expressions of emotion are produced by contracting and relaxing the muscles in our face. I hypothesize that the human visual system solves the inverse problem of production; that is, to interpret emotion, the visual system attempts to identify the underlying muscle activations. I show converging computational, behavioral and imaging evidence in favor of this hypothesis. I detail the computations performed by the human visual system to achieve the decoding of these facial actions and identify a brain region where these computations likely take place. The resulting computational model explains how humans readily classify emotions into categories as well as continuous variables. This model also predicts the existence of a large number of previously unknown facial expressions, including compound emotions, affect attributes and mental states that are regularly used by people. I provide evidence in favor of this prediction.


Brain/physiology , Emotions , Facial Recognition/physiology , Brain/diagnostic imaging , Computer Simulation , Emotions/physiology , Facial Expression , Humans , Models, Neurological , Models, Psychological
11.
Curr Dir Psychol Sci ; 26(3): 263-269, 2017 Jun.
Article En | MEDLINE | ID: mdl-29307959

Faces are one of the most important means of communication in humans. For example, a short glance at a person's face provides information on identity and emotional state. What are the computations the brain uses to solve these problems so accurately and seemingly effortlessly? This article summarizes current research on computational modeling, a technique used to answer this question. Specifically, my research studies the hypothesis that this algorithm is tasked to solve the inverse problem of production. For example, to recognize identity, our brain needs to identify shape and shading image features that are invariant to facial expression, pose and illumination. Similarly, to recognize emotion, the brain needs to identify shape and shading features that are invariant to identity, pose and illumination. If one defines the physics equations that render an image under different identities, expressions, poses and illuminations, then gaining invariance to these factors is readily resolved by computing the inverse of this rendering function. I describe our current understanding of the algorithms used by our brains to resolve this inverse problem. I also discuss how these results are driving research in computer vision to design computer systems that are as accurate, robust and efficient as humans.

12.
J Neurosci ; 36(16): 4434-42, 2016 Apr 20.
Article En | MEDLINE | ID: mdl-27098688

By combining different facial muscle actions, called action units, humans can produce an extraordinarily large number of facial expressions. Computational models and studies in cognitive science and social psychology have long hypothesized that the brain needs to visually interpret these action units to understand other people's actions and intentions. Surprisingly, no studies have identified the neural basis of the visual recognition of these action units. Here, using functional magnetic resonance imaging and an innovative machine learning analysis approach, we identify a consistent and differential coding of action units in the brain. Crucially, in a brain region thought to be responsible for the processing of changeable aspects of the face, multivoxel pattern analysis could decode the presence of specific action units in an image. This coding was found to be consistent across people, facilitating the estimation of the perceived action units on participants not used to train the multivoxel decoder. Furthermore, this coding of action units was identified when participants attended to the emotion category of the facial expression, suggesting an interaction between the visual analysis of action units and emotion categorization as predicted by the computational models mentioned above. These results provide the first evidence for a representation of action units in the brain and suggest a mechanism for the analysis of large numbers of facial actions and a loss of this capacity in psychopathologies. SIGNIFICANCE STATEMENT: Computational models and studies in cognitive and social psychology propound that visual recognition of facial expressions requires an intermediate step to identify visible facial changes caused by the movement of specific facial muscles. Because facial expressions are indeed created by moving one's facial muscles, it is logical to assume that our visual system solves this inverse problem. Here, using an innovative machine learning method and neuroimaging data, we identify for the first time a brain region responsible for the recognition of actions associated with specific facial muscles. Furthermore, this representation is preserved across subjects. Our machine learning analysis does not require mapping the data to a standard brain and may serve as an alternative to hyperalignment.
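
The cross-participant decoding analysis can be sketched, under assumptions, as leave-one-subject-out classification of voxel patterns: a linear classifier is trained on all but one participant and tested on the held-out participant. The data shapes and classifier choice below are illustrative, not the paper's pipeline.

```python
# Hypothetical sketch of cross-participant multivoxel decoding of an AU.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels, n_subjects = 600, 200, 10
X = rng.normal(size=(n_trials, n_voxels))            # stand-in voxel patterns (one ROI)
y = rng.integers(0, 2, size=n_trials)                # AU present vs. absent in the stimulus
groups = rng.integers(0, n_subjects, size=n_trials)  # which participant each trial came from

scores = cross_val_score(LinearSVC(dual=False, max_iter=5000), X, y,
                         groups=groups, cv=LeaveOneGroupOut())
print("mean leave-one-subject-out accuracy:", scores.mean().round(3))
```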


Brain/metabolism , Facial Expression , Facial Recognition/physiology , Photic Stimulation/methods , Adult , Brain Mapping/methods , Female , Humans , Magnetic Resonance Imaging/methods , Male
13.
Cognition ; 150: 77-84, 2016 May.
Article En | MEDLINE | ID: mdl-26872248

Facial expressions of emotion are thought to have evolved from the development of facial muscles used in sensory regulation and later adapted to express moral judgment. Negative moral judgment includes the expressions of anger, disgust and contempt. Here, we study the hypothesis that these facial expressions of negative moral judgment have further evolved into a facial expression of negation regularly used as a grammatical marker in human language. Specifically, we show that people from different cultures expressing negation use the same facial muscles as those employed to express negative moral judgment. We then show that this nonverbal signal is used as a co-articulator in speech and that, in American Sign Language, it has been grammaticalized as a non-manual marker. Furthermore, this facial expression of negation exhibits the theta oscillation (3-8 Hz) universally seen in syllable and mouthing production in speech and signing. These results provide evidence for the hypothesis that some components of human language have evolved from facial expressions of emotion, and suggest an evolutionary route for the emergence of grammatical markers.


Emotions/physiology , Facial Expression , Judgment , Photic Stimulation/methods , Adolescent , Adult , Female , Humans , Male , Young Adult
14.
IEEE Trans Pattern Anal Mach Intell ; 38(8): 1640-50, 2016 08.
Article En | MEDLINE | ID: mdl-26415154

Automatic behavior analysis from video is a major topic in many areas of research, including computer vision, multimedia, robotics, biology, cognitive science, social psychology, psychiatry, and linguistics. Two major problems are of interest when analyzing behavior. First, we wish to automatically categorize observed behaviors into a discrete set of classes (i.e., classification). For example, to determine word production from video sequences in sign language. Second, we wish to understand the relevance of each behavioral feature in achieving this classification (i.e., decoding). For instance, to know which behavior variables are used to discriminate between the words apple and onion in American Sign Language (ASL). The present paper proposes to model behavior using a labeled graph, where the nodes define behavioral features and the edges are labels specifying their order (e.g., before, overlaps, start). In this approach, classification reduces to a simple labeled graph matching. Unfortunately, the complexity of labeled graph matching grows exponentially with the number of categories we wish to represent. Here, we derive a graph kernel to quickly and accurately compute this graph similarity. This approach is very general and can be plugged into any kernel-based classifier. Specifically, we derive a Labeled Graph Support Vector Machine (LGSVM) and a Labeled Graph Logistic Regressor (LGLR) that can be readily employed to discriminate between many actions (e.g., sign language concepts). The derived approach can be readily used for decoding too, yielding invaluable information for the understanding of a problem (e.g., to know how to teach a sign language). The derived algorithms allow us to achieve higher accuracy results than those of state-of-the-art algorithms in a fraction of the time. We show experimental results on a variety of problems and datasets, including multimodal data.
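
A deliberately simplified stand-in for the labeled-graph idea: each behavior is represented as a set of labeled edges (feature, temporal relation, feature), two behaviors are compared by counting shared labeled edges, and that similarity is used as a precomputed SVM kernel. The toy edge labels are invented and the kernel is far simpler than the one derived in the paper.

```python
# Hypothetical sketch: labeled-edge set similarity as a precomputed SVM kernel.
import numpy as np
from sklearn.svm import SVC

behaviors = [
    {("brow_raise", "before", "head_tilt"), ("blink", "overlaps", "head_tilt")},
    {("brow_raise", "before", "head_tilt"), ("mouth_open", "starts", "blink")},
    {("hand_up", "before", "hand_down"), ("blink", "overlaps", "hand_down")},
    {("hand_up", "before", "hand_down"), ("mouth_open", "starts", "blink")},
]
labels = [0, 0, 1, 1]                                # two toy behavior classes

def gram(a_sets, b_sets):
    # Count shared labeled edges; equivalent to an inner product of indicator vectors.
    return np.array([[len(a & b) for b in b_sets] for a in a_sets], dtype=float)

clf = SVC(kernel="precomputed").fit(gram(behaviors, behaviors), labels)
test = [{("hand_up", "before", "hand_down")}]
print(clf.predict(gram(test, behaviors)))            # predicted class for the new behavior
```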


Algorithms , Pattern Recognition, Automated , Support Vector Machine , Humans , Video Recording
15.
IEEE Trans Neural Netw Learn Syst ; 27(10): 2072-83, 2016 10.
Article En | MEDLINE | ID: mdl-26529784

Human preferences are usually measured using ordinal variables. A system whose goal is to estimate the preferences of humans and their underlying decision mechanisms must learn the ordering of any given sample set. We consider the solution of this ordinal regression problem using a support vector machine algorithm. Specifically, the goal is to learn a set of classifiers with common direction vectors and different biases correctly separating the ordered classes. Current algorithms either require solving a quadratic optimization problem, which is computationally expensive, or are based on maximizing the minimum margin (i.e., a fixed-margin strategy) between a set of hyperplanes, which biases the solution toward the closest margin. Another drawback of these strategies is that they are limited to ordering the classes using a single ranking variable (e.g., perceived length). In this paper, we define a multiple ordinal regression algorithm based on maximizing the sum of the margins between every pair of consecutive classes with respect to one or more rankings (e.g., perceived length and weight). We provide derivations of an efficient, easy-to-implement iterative solution using a sequential minimal optimization procedure. We demonstrate the accuracy of our solutions on several data sets. In addition, we provide a key application of our algorithms in estimating human subjects' ordinal classification of attribute associations to object categories. We show that these ordinal associations perform better than the binary associations typically employed in the literature.
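
For intuition only, here is a much simpler ordinal baseline (the Frank and Hall "is the label greater than k" decomposition) that shows what learning an ordering looks like in code. It is not the paper's sum-of-margins SVM and does not share a single direction vector across thresholds; the synthetic data are invented.

```python
# Hypothetical baseline: ordinal prediction via K-1 "label > k" binary classifiers.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))
# Five ordered classes driven mainly by the first feature.
y = np.clip((X[:, 0] + 0.3 * rng.normal(size=300)).round().astype(int) + 2, 0, 4)

K = 5
models = [LogisticRegression().fit(X, (y > k).astype(int)) for k in range(K - 1)]
# Predicted rank = how many "greater than k" thresholds the sample clears.
pred = sum(m.predict_proba(X)[:, 1] > 0.5 for m in models)
print("mean absolute rank error:", np.mean(np.abs(pred - y)).round(3))
```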


Neural Networks, Computer , Support Vector Machine , Algorithms , Decision Making , Humans , Learning
16.
Dialogues Clin Neurosci ; 17(4): 443-55, 2015 Dec.
Article En | MEDLINE | ID: mdl-26869845

Emotions are sometimes revealed through facial expressions. When these natural facial articulations involve the contraction of the same muscle groups in people of distinct cultural upbringings, this is taken as evidence of a biological origin of these emotions. While past research had identified facial expressions associated with a single internally felt category (eg, the facial expression of happiness when we feel joyful), we have recently studied facial expressions observed when people experience compound emotions (eg, the facial expression of happy surprise when we feel joyful in a surprised way, as, for example, at a surprise birthday party). Our research has identified 17 compound expressions consistently produced across cultures, suggesting that the number of facial expressions of emotion of biological origin is much larger than previously believed. The present paper provides an overview of these findings and shows evidence supporting the view that spontaneous expressions are produced using the same facial articulations previously identified in laboratory experiments. We also discuss the implications of our results in the study of psychopathologies, and consider several open research questions.


Behavior/physiology , Emotions/physiology , Facial Expression , Recognition, Psychology/physiology , Animals , Biomedical Research , Humans
17.
IEEE Trans Neural Netw Learn Syst ; 25(10): 1879-93, 2014 Oct.
Article En | MEDLINE | ID: mdl-25291740

Regression plays a major role in many scientific and engineering problems. The goal of regression is to learn the unknown underlying function from a set of sample vectors with known outcomes. In recent years, kernel methods in regression have facilitated the estimation of nonlinear functions. However, two major (interconnected) problems remain open. The first problem is the bias-versus-variance tradeoff. If the model used to estimate the underlying function is too flexible (i.e., high model complexity), the variance will be very large. If the model is fixed (i.e., low complexity), the bias will be large. The second problem is how to select the appropriate parameters of the kernel function. To address these two problems, this paper derives a new smoothing kernel criterion, which uses the roughness of the estimated function as a measure of model complexity. Then, we use multiobjective optimization to derive a criterion for selecting the parameters of that kernel. The goal of this criterion is to find a tradeoff between the bias and the variance of the learned function; that is, to increase the model fit while keeping the model complexity in check. We provide extensive experimental evaluations using a variety of problems in machine learning, pattern recognition, and computer vision. The results demonstrate that the proposed approach yields smaller estimation errors than state-of-the-art methods.
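
One way to picture the proposed trade-off, under assumptions: score each candidate kernel bandwidth by a weighted sum of training fit and a roughness penalty on the fitted curve, then keep the bandwidth with the best score. The scoring rule and weight below are illustrative, not the paper's multiobjective criterion.

```python
# Hypothetical sketch: pick a kernel bandwidth by balancing fit against roughness.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(-3, 3, 120))[:, None]
y = np.sin(2 * x[:, 0]) + 0.3 * rng.normal(size=120)
grid = np.linspace(-3, 3, 400)[:, None]

def score(gamma, weight=0.05):
    model = KernelRidge(kernel="rbf", gamma=gamma, alpha=1e-3).fit(x, y)
    fit_err = np.mean((model.predict(x) - y) ** 2)               # bias proxy
    roughness = np.mean(np.diff(model.predict(grid), n=2) ** 2)  # complexity proxy
    return fit_err + weight * roughness

gammas = [0.01, 0.1, 1.0, 10.0, 100.0]
print("selected RBF gamma:", min(gammas, key=score))
```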


Algorithms , Artificial Intelligence , Models, Theoretical , Pattern Recognition, Automated/methods , Regression Analysis , Computer Simulation , Humans , Pattern Recognition, Visual
18.
IEEE Trans Neural Netw Learn Syst ; 25(8): 1588-94, 2014 Aug.
Article En | MEDLINE | ID: mdl-25050954

In this brief, we show that minimizing nearest neighbor classification error (MNNE) is a favorable criterion for supervised linear dimension reduction (SLDR). We prove that MNNE is better than maximizing mutual information in the sense of being a proxy of the Bayes optimal criterion. Based on kernel density estimation, we derive a nonparametric algorithm for MNNE. Experiments on benchmark data sets show the superiority of MNNE over existing nonparametric SLDR methods.
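
The criterion itself is easy to illustrate: score candidate linear projections by their cross-validated nearest-neighbor error and keep the best one. In this sketch, random orthonormal projections stand in for the paper's nonparametric optimization, which is not reproduced.

```python
# Hypothetical sketch: select a linear dimension reduction by nearest-neighbor error.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
rng = np.random.default_rng(0)
d_out = 2

def nn_error(W):
    Z = X @ W                                        # project to d_out dimensions
    acc = cross_val_score(KNeighborsClassifier(n_neighbors=1), Z, y, cv=5).mean()
    return 1.0 - acc

candidates = [np.linalg.qr(rng.normal(size=(X.shape[1], d_out)))[0] for _ in range(200)]
best = min(candidates, key=nn_error)
print("best projection's 1-NN CV error:", round(nn_error(best), 3))
```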


Algorithms , Artificial Intelligence , Bayes Theorem , Linear Models , Models, Statistical , Pattern Recognition, Automated/methods , Computer Simulation
19.
Proc Natl Acad Sci U S A ; 111(15): E1454-62, 2014 Apr 15.
Article En | MEDLINE | ID: mdl-24706770

Understanding the different categories of facial expressions of emotion regularly used by us is essential to gain insights into human cognition and affect as well as for the design of computational models and perceptual interfaces. Past research on facial expressions of emotion has focused on the study of six basic categories--happiness, surprise, anger, sadness, fear, and disgust. However, many more facial expressions of emotion exist and are used regularly by humans. This paper describes an important group of expressions, which we call compound emotion categories. Compound emotions are those that can be constructed by combining basic component categories to create new ones. For instance, happily surprised and angrily surprised are two distinct compound emotion categories. The present work defines 21 distinct emotion categories. Sample images of their facial expressions were collected from 230 human subjects. A Facial Action Coding System analysis shows that the production of these 21 categories is different but consistent with the subordinate categories they represent (e.g., a happily surprised expression combines muscle movements observed in happiness and surprise). We show that these differences are sufficient to distinguish between the 21 defined categories. We then use a computational model of face perception to demonstrate that most of these categories are also visually discriminable from one another.
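
A toy illustration of why compound categories remain distinguishable: if each category is described by a set of prototypical AUs, any two categories can be compared by the AUs present in one but not the other. The AU assignments below are invented for illustration and are not the paper's FACS results.

```python
# Hypothetical illustration: compound categories differ in their AU patterns.
from itertools import combinations

aus = {
    "happy":             {6, 12, 25},
    "surprised":         {1, 2, 25, 26},
    "happily_surprised": {1, 2, 6, 12, 25, 26},
    "angrily_surprised": {4, 25, 26},
}
for a, b in combinations(aus, 2):
    diff = aus[a] ^ aus[b]                           # AUs present in one but not the other
    print(f"{a} vs {b}: distinguishing AUs {sorted(diff)}")
```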


Emotions/classification , Emotions/physiology , Facial Expression , Models, Biological , Adult , Discrimination, Psychological/physiology , Facial Muscles/physiology , Female , Humans , Male , Ohio , Photography
20.
PLoS One ; 9(2): e86268, 2014.
Article En | MEDLINE | ID: mdl-24516528

To fully define the grammar of American Sign Language (ASL), a linguistic model of its nonmanuals needs to be constructed. While significant progress has been made to understand the features defining ASL manuals, after years of research, much still needs to be done to uncover the discriminant nonmanual components. The major barrier to achieving this goal is the difficulty in correlating facial features and linguistic features, especially since these correlations may be temporally defined. For example, a facial feature (e.g., head moves down) occurring at the end of the movement of another facial feature (e.g., brows move up) may specify a Hypothetical conditional, but only if this time relationship is maintained. In other instances, the single occurrence of a movement (e.g., brows move up) can be indicative of the same grammatical construction. In the present paper, we introduce a linguistic-computational approach to efficiently carry out this analysis. First, a linguistic model of the face is used to manually annotate a very large set of 2,347 videos of ASL nonmanuals (including tens of thousands of frames). Second, a computational approach is used to determine which features of the linguistic model are more informative of the grammatical rules under study. We used the proposed approach to study five types of sentences--Hypothetical conditionals, Yes/no questions, Wh-questions, Wh-questions postposed, and Assertions--plus their polarities--positive and negative. Our results verify several components of the standard model of ASL nonmanuals and, most importantly, identify several previously unreported features and their temporal relationships. Notably, our results uncovered a complex interaction between head position and mouth shape. These findings define some temporal structures of ASL nonmanuals not previously detected by other approaches.
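
The second, computational step can be sketched as a feature-informativeness ranking: estimate how much information each annotated facial feature carries about the sentence type. The feature names and data below are invented, and mutual information stands in for the discriminant analysis over temporally aligned annotations used in the paper.

```python
# Hypothetical sketch: rank annotated facial features by informativeness for sentence type.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(0)
feature_names = ["brows_up", "head_down", "head_forward", "mouth_open", "eye_widen"]
X = rng.integers(0, 2, size=(500, len(feature_names)))   # per-clip binary annotations
sentence_type = rng.integers(0, 5, size=500)             # e.g., five sentence types

scores = mutual_info_classif(X, sentence_type, discrete_features=True, random_state=0)
for name, s in sorted(zip(feature_names, scores), key=lambda t: -t[1]):
    print(f"{name:14s} MI = {s:.3f}")
```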


Manuals as Topic , Sign Language , Computer Simulation , Discriminant Analysis , Humans , Software , Time Factors , United States , Video Recording
...