Results 1 - 20 of 51
1.
JCI Insight ; 9(15)2024 Jun 20.
Article in English | MEDLINE | ID: mdl-38900571

ABSTRACT

Men who have sex with men (MSM) with HIV are at high risk for squamous intraepithelial lesions (SILs) and anal cancer. Identifying local immunological mechanisms involved in the development of anal dysplasia could aid treatment and diagnostics. Here, we studied 111 anal biopsies obtained from 101 MSM with HIV who participated in an anal screening program. We first assessed multiple immune subsets by flow cytometry, in addition to histological examination, in a discovery cohort. Selected molecules were further evaluated by immunohistochemistry in a validation cohort. Pathological samples were characterized by the presence of resident memory T cells with low expression of CD103 and by changes in natural killer cell subsets affecting residency and activation. Furthermore, potentially immunosuppressive subsets, including CD15+CD16+ mature neutrophils, gradually increased as the anal lesion progressed. Immunohistochemistry confirmed the association between the presence of CD15 in the epithelium and SIL diagnosis, with the strongest correlation for high-grade SIL. A complex immunological environment with imbalanced proportions of resident effectors and immunosuppressive subsets characterized pathological samples. Neutrophil infiltration, determined by CD15 staining, may represent a valuable pathological marker associated with the grade of dysplasia.


Subject(s)
Anus Neoplasms , HIV Infections , Lewis X Antigen , Humans , Male , HIV Infections/immunology , HIV Infections/complications , HIV Infections/pathology , Anus Neoplasms/pathology , Anus Neoplasms/immunology , Adult , Middle Aged , Lewis X Antigen/metabolism , Homosexuality, Male , Squamous Intraepithelial Lesions/pathology , Anal Canal/pathology , Killer Cells, Natural/immunology , Killer Cells, Natural/metabolism , Antigens, CD/metabolism , Neutrophils/immunology , Neutrophils/pathology , Neutrophils/metabolism , Biopsy , Immunohistochemistry , Integrin alpha Chains/metabolism
2.
Curr Opin HIV AIDS ; 19(2): 69-78, 2024 03 01.
Article in English | MEDLINE | ID: mdl-38169333

ABSTRACT

PURPOSE OF REVIEW: The complex nature and distribution of the HIV reservoir in the tissues of people with HIV remains one of the major obstacles to eliminating HIV persistence. Challenges include the tissue-specific states of latency and viral persistence, which translate into high levels of reservoir heterogeneity. Moreover, the best strategies to reach and eliminate these reservoirs may differ based on the intrinsic characteristics of each cellular and anatomical reservoir. RECENT FINDINGS: While the major focus has been on lymphoid tissues and follicular T helper cells, evidence of viral persistence in HIV and non-HIV antigen-specific CD4+ T cells and macrophages resident in multiple tissues, where they provide long-term protection, presents new challenges in the quest for an HIV cure. Considering the microenvironments where these cellular reservoirs persist opens new avenues for the delivery of drugs and immunotherapies to target these niches. New tools, such as single-cell RNA sequencing, CRISPR screenings, mRNA technology and tissue organoids, are developing quickly and providing detailed information about the complex nature of the tissue reservoirs. SUMMARY: Targeting persistence in tissue reservoirs represents a complex but essential step towards achieving an HIV cure. Combinatorial strategies capable of reaching and reactivating multiple long-lived reservoirs in the body, particularly when applied during the early phases of infection to limit the initial reservoirs, may lead the way.


Subject(s)
HIV Infections , Humans , HIV Infections/drug therapy , Virus Latency , CD4-Positive T-Lymphocytes
3.
Open Biol ; 13(1): 220200, 2023 01.
Article in English | MEDLINE | ID: mdl-36629019

ABSTRACT

Microglia are very sensitive to changes in their environment and respond through morphological, functional and metabolic adaptations. To depict the modifications microglia undergo under healthy and pathological conditions, we developed freely available image analysis scripts to quantify microglial morphologies and phagocytosis. Neuron-glia cultures, in which microglia express the reporter tdTomato, were exposed to excitotoxicity or to excitotoxicity + inflammation and analysed 8 h later. Neuronal death was assessed by SYTOX staining of nucleus debris, and phagocytosis was measured through the engulfment of SYTOX+ particles by microglia. We identified seven morphologies: round, hypertrophic, fried egg, bipolar and three 'inflamed' morphologies. We generated a classifier able to separate them and to assign one of the seven classes to each microglial cell in sample images. In control cultures, round and hypertrophic morphologies were predominant. Excitotoxicity had a limited effect on the composition of the populations. By contrast, excitotoxicity + inflammation promoted an enrichment in inflamed morphologies and increased the percentage of phagocytosing microglia. Our data suggest that inflammation is critical for promoting phenotypic changes in microglia. We also validated our tools for the segmentation of microglia in brain slices and performed morphometry with the obtained masks. Our method is versatile and useful for correlating microglial subpopulations and behaviour with environmental changes.
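
To make the quantification concrete, the sketch below runs the same kind of segmentation, shape-descriptor and engulfment measurements on two channels. It is a minimal illustration with scikit-image, not the published scripts: the file names, Otsu thresholds and choice of descriptors are all assumptions.

```python
import numpy as np
from skimage import io, filters, measure

tdtomato = io.imread("tdtomato.tif")  # microglia reporter channel (assumed file name)
sytox = io.imread("sytox.tif")        # SYTOX+ nuclear-debris channel (assumed file name)

# Segment microglia and SYTOX+ particles with Otsu thresholds.
cells = measure.label(tdtomato > filters.threshold_otsu(tdtomato))
debris = sytox > filters.threshold_otsu(sytox)

for cell in measure.regionprops(cells):
    # Shape descriptors of the kind a morphology classifier could be built on.
    circularity = 4 * np.pi * cell.area / max(cell.perimeter, 1) ** 2
    # Phagocytosis readout: SYTOX+ pixels engulfed within the cell mask.
    engulfed = int(np.sum(debris[cells == cell.label]))
    print(cell.label, round(circularity, 2), round(cell.solidity, 2), engulfed)
```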


Subject(s)
Microglia , Phagocytosis , Humans , Microglia/metabolism , Inflammation/metabolism , Cell Death , Neurons/metabolism
4.
Mach Learn Appl ; 10, 2022 Dec 15.
Article in English | MEDLINE | ID: mdl-36578375

ABSTRACT

The cosmetic outcome of the breast after breast-conserving therapy is essential for evaluating breast treatment and informing patients' choice of remedy. This prompts the need for objective and efficient methods of breast cosmesis evaluation. However, current evaluation methods rely on ratings from a small group of physicians or on semi-automated pipelines, making the process time-consuming and the results inconsistent. To solve this problem, in this study we propose: 1. a fully automatic machine-learning breast cosmetic evaluation algorithm leveraging state-of-the-art deep learning algorithms for breast detection and contour annotation; 2. a novel set of breast cosmesis features; 3. a new breast cosmetic dataset consisting of 3,000+ images from three clinical trials, with human annotations of both the breast components and their cosmesis scores. We show that our fully automatic framework can achieve performance comparable to the state of the art without the need for human input, leading to a more objective, low-cost and scalable solution for breast cosmetic evaluation in breast cancer treatment.
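
As one concrete illustration of the feature-extraction step, the sketch below computes two plausible asymmetry features from segmentation masks that an upstream breast detector would produce. These feature definitions are assumptions for illustration, not the paper's actual cosmesis feature set.

```python
import numpy as np

def asymmetry_features(mask_left: np.ndarray, mask_right: np.ndarray) -> dict:
    """Compare two boolean breast masks taken from the same-size image."""
    area_l, area_r = mask_left.sum(), mask_right.sum()
    # Relative area difference: 0 = perfectly symmetric.
    area_asym = abs(int(area_l) - int(area_r)) / max(area_l + area_r, 1)
    # Vertical position proxy: difference between mask centroid rows.
    cy_l = mask_left.nonzero()[0].mean()
    cy_r = mask_right.nonzero()[0].mean()
    height_asym = abs(cy_l - cy_r) / mask_left.shape[0]
    return {"area_asymmetry": area_asym, "height_asymmetry": height_asym}
```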

5.
Int J Comput Vis ; 129(4): 942-959, 2021 Apr.
Article in English | MEDLINE | ID: mdl-34211258

ABSTRACT

The performance of computer vision algorithms is near or superior to that of humans on visual problems including object recognition (especially of fine-grained categories), segmentation, and 3D object reconstruction from 2D views. Humans are, however, capable of higher-level image analyses. A clear example, involving theory of mind, is our ability to determine whether a perceived behavior or action was performed intentionally or not. In this paper, we derive an algorithm that can infer whether the behavior of an agent in a scene is intentional or unintentional based on its 3D kinematics, using knowledge of self-propelled motion, Newtonian motion and their relationship. We show how the addition of this basic knowledge leads to a simple, unsupervised algorithm. To test the derived algorithm, we constructed three dedicated datasets ranging from abstract geometric animations to realistic videos of agents performing intentional and unintentional actions. Experiments on these datasets show that our algorithm can recognize whether an action is intentional or not, even without training data. Its performance is quantitatively comparable to that of various supervised baselines, with sensible intentionality segmentation qualitatively.
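
The core physical test can be sketched in a few lines: acceleration that gravity alone cannot explain implies self-propelled motion, which the paper ties to intentionality. The gravity-only force model and the threshold below are simplifying assumptions, not the paper's full formulation.

```python
import numpy as np

G = np.array([0.0, 0.0, -9.81])  # gravity in m/s^2, z-up world frame

def is_intentional(traj: np.ndarray, dt: float, thresh: float = 1.0) -> bool:
    """traj: (T, 3) array of 3D positions sampled every dt seconds."""
    vel = np.gradient(traj, dt, axis=0)
    acc = np.gradient(vel, dt, axis=0)
    # Residual acceleration after removing the passive (Newtonian) component.
    residual = np.linalg.norm(acc - G, axis=1)
    # Sustained unexplained acceleration -> self-propelled -> intentional.
    return bool(np.median(residual) > thresh)
```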

6.
Article in English | MEDLINE | ID: mdl-33090835

ABSTRACT

The "spatial congruency bias" is a behavioral phenomenon where 2 objects presented sequentially are more likely to be judged as being the same object if they are presented in the same location (Golomb, Kupitz, & Thiemann, 2014), suggesting that irrelevant spatial location information may be bound to object representations. Here, we examine whether the spatial congruency bias extends to higher-level object judgments of facial identity and expression. On each trial, 2 real-world faces were sequentially presented in variable screen locations, and subjects were asked to make same-different judgments on the facial expression (Experiments 1-2) or facial identity (Experiment 3) of the stimuli. We observed a robust spatial congruency bias for judgments of facial identity, yet a more fragile one for judgments of facial expression. Subjects were more likely to judge 2 faces as displaying the same expression if they were presented in the same location (compared to in different locations), but only when the faces shared the same identity. On the other hand, a spatial congruency bias was found when subjects made judgments on facial identity, even across faces displaying different facial expressions. These findings suggest a possible difference between the binding of facial identity and facial expression to spatial location. (PsycInfo Database Record (c) 2020 APA, all rights reserved).

7.
Dev Psychol ; 55(9): 1965-1981, 2019 Sep.
Article in English | MEDLINE | ID: mdl-31464498

ABSTRACT

Computer vision algorithms have made tremendous advances in recent years. We now have algorithms that can detect and recognize objects, faces, and even facial actions in still images and video sequences. This is wonderful news for researchers who need to code facial articulations in large data sets of images and videos, because this task is time consuming, can only be completed by expert coders, and is therefore very expensive. The availability of computer algorithms that can automatically code facial actions in extremely large data sets also opens the door to studies in psychology and neuroscience that were not previously possible, for example, studying the development of the production of facial expressions from infancy to adulthood within and across cultures. Unfortunately, there is a lack of methodological understanding of how these algorithms should and should not be used, and of how to select the most appropriate algorithm for each study. This article aims to address this gap in the literature. Specifically, we present several methodologies for use in hypothesis-based and exploratory studies, explain how to select the computer algorithms that best fit the requirements of an experimental design, and detail how to evaluate whether the automatic annotations provided by existing algorithms are trustworthy. (PsycINFO Database Record (c) 2019 APA, all rights reserved).


Subject(s)
Algorithms , Emotions/physiology , Facial Expression , Machine Learning/standards , Research Design/standards , Child , Female , Humans , Male
8.
Psychol Sci Public Interest ; 20(1): 1-68, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31313636

ABSTRACT

It is commonly assumed that a person's emotional state can be readily inferred from his or her facial movements, typically called emotional expressions or facial expressions. This assumption influences legal judgments, policy decisions, national security protocols, and educational practices; guides the diagnosis and treatment of psychiatric illness, as well as the development of commercial applications; and pervades everyday social interactions as well as research in other scientific fields such as artificial intelligence, neuroscience, and computer vision. In this article, we survey examples of this widespread assumption, which we refer to as the common view, and we then examine the scientific evidence that tests this view, focusing on the six most popular emotion categories used by consumers of emotion research: anger, disgust, fear, happiness, sadness, and surprise. The available scientific evidence suggests that people do sometimes smile when happy, frown when sad, scowl when angry, and so on, as proposed by the common view, more often than would be expected by chance. Yet how people communicate anger, disgust, fear, happiness, sadness, and surprise varies substantially across cultures, situations, and even across people within a single situation. Furthermore, similar configurations of facial movements variably express instances of more than one emotion category. In fact, a given configuration of facial movements, such as a scowl, often communicates something other than an emotional state. Scientists agree that facial movements convey a range of information and are important for social communication, emotional or otherwise. But our review suggests an urgent need for research that examines how people actually move their faces to express emotions and other social information in the variety of contexts that make up everyday life, as well as careful study of the mechanisms by which people perceive instances of emotion in one another. We make specific research recommendations that will yield a more valid picture of how people move their faces to express emotions and how they infer emotional meaning from facial movements in situations of everyday life. This research is crucial to provide consumers of emotion research with the translational information they require.


Subject(s)
Emotions , Facial Expression , Facial Recognition , Movement , Female , Humans , Interpersonal Relations , Judgment , Male , Psychomotor Performance
9.
Proc Natl Acad Sci U S A ; 116(15): 7169-7171, 2019 04 09.
Article in English | MEDLINE | ID: mdl-30898883

Subject(s)
Emotions
10.
IEEE Trans Pattern Anal Mach Intell ; 41(12): 2835-2845, 2019 12.
Article in English | MEDLINE | ID: mdl-30188814

ABSTRACT

Color is a fundamental image feature of facial expressions. For example, when we furrow our eyebrows in anger, blood rushes in, turning some areas of the face red; when we go white in fear, blood drains from the face. Surprisingly, these image properties have not been exploited to recognize the facial action units (AUs) associated with these expressions. Herein, we present the first system to recognize AUs and their intensities using these functional color changes. These color features are shown to be robust to changes in identity, gender, race, ethnicity, and skin color. Specifically, we identify the chromaticity changes that define the transition of an AU from inactive to active and use an innovative Gabor-transform-based algorithm to gain invariance to the timing of these changes. Because these image changes are given by functions rather than vectors, we use functional classifiers to identify the most discriminant color features of an AU and its intensities. We demonstrate that, using these discriminant color features, one can achieve results superior to the state of the art. Finally, we define an algorithm that allows us to use the learned functional color representation on still images. This is done by learning the mapping between images and the identified functional color features in videos. Our algorithm works in real time, i.e., at 30 frames/second per CPU thread.
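
As a rough illustration of the feature idea, the sketch below tracks the chromaticity of one face region over time and takes the peak magnitude of its response to a 1D complex Gabor filter, so the score does not depend on exactly when the AU activates. The filter parameters and the random stand-in video are assumptions.

```python
import numpy as np

def chromaticity(rgb_patch: np.ndarray) -> np.ndarray:
    """Mean (r, g) chromaticity of an (H, W, 3) patch; discards intensity."""
    s = rgb_patch.sum(axis=2, keepdims=True) + 1e-8
    return (rgb_patch / s)[..., :2].reshape(-1, 2).mean(axis=0)

def gabor_response(signal: np.ndarray, freq: float, sigma: float) -> float:
    """Peak magnitude of the signal's response to a 1D complex Gabor filter."""
    t = np.arange(len(signal)) - len(signal) / 2
    gabor = np.exp(-(t ** 2) / (2 * sigma ** 2)) * np.exp(2j * np.pi * freq * t)
    resp = np.convolve(signal - signal.mean(), gabor, mode="same")
    return float(np.abs(resp).max())  # max over time = timing invariance

# Per-frame chromaticity of one facial region becomes a functional feature.
frames = [np.random.rand(16, 16, 3) for _ in range(90)]  # stand-in video patch
r_signal = np.array([chromaticity(p)[0] for p in frames])
print(gabor_response(r_signal, freq=0.05, sigma=10.0))
```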


Subject(s)
Face , Image Processing, Computer-Assisted/methods , Machine Learning , Algorithms , Color , Emotions/classification , Emotions/physiology , Face/anatomy & histology , Face/diagnostic imaging , Face/physiology , Humans , Skin Pigmentation/physiology , Video Recording
11.
Comput Vis ECCV ; 11214: 835-851, 2018 Sep.
Article in English | MEDLINE | ID: mdl-30465044

ABSTRACT

Recent advances in Generative Adversarial Networks (GANs) have shown impressive results for the task of facial expression synthesis. The most successful architecture is StarGAN [4], which conditions the GAN's generation process on images of a specific domain, namely a set of images of persons sharing the same expression. While effective, this approach can only generate a discrete number of expressions, determined by the content of the dataset. To address this limitation, in this paper we introduce a novel GAN conditioning scheme based on Action Unit (AU) annotations, which describe in a continuous manifold the anatomical facial movements defining a human expression. Our approach allows controlling the magnitude of activation of each AU and combining several of them. Additionally, we propose a fully unsupervised strategy to train the model that requires only images annotated with their activated AUs, and we exploit attention mechanisms that make our network robust to changing backgrounds and lighting conditions. Extensive evaluation shows that our approach goes beyond competing conditional generators, both in its capability to synthesize a much wider range of expressions governed by anatomically feasible muscle movements and in its capacity to deal with images in the wild.
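
The conditioning-plus-attention idea can be sketched as a small generator that receives the image together with a continuous AU activation vector and predicts a color map plus an attention mask that blends the color map with the input; the blend is what leaves backgrounds and lighting untouched. Layer sizes and the 17-AU default below are placeholder assumptions, not the published architecture.

```python
import torch
import torch.nn as nn

class AUGenerator(nn.Module):
    def __init__(self, n_aus: int = 17):
        super().__init__()
        # The AU vector is tiled over the image and concatenated as channels.
        self.body = nn.Sequential(
            nn.Conv2d(3 + n_aus, 64, 7, padding=3), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        )
        self.color = nn.Conv2d(64, 3, 7, padding=3)      # color map C
        self.attention = nn.Conv2d(64, 1, 7, padding=3)  # attention mask A

    def forward(self, img: torch.Tensor, aus: torch.Tensor) -> torch.Tensor:
        b, _, h, w = img.shape
        au_maps = aus.view(b, -1, 1, 1).expand(b, aus.shape[1], h, w)
        feat = self.body(torch.cat([img, au_maps], dim=1))
        c = torch.tanh(self.color(feat))
        a = torch.sigmoid(self.attention(feat))
        # Where A saturates to 1, input pixels pass through unchanged.
        return a * img + (1 - a) * c
```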

12.
IEEE Trans Pattern Anal Mach Intell ; 40(12): 3059-3066, 2018 12.
Article in English | MEDLINE | ID: mdl-29990100

ABSTRACT

Three-dimensional shape reconstruction from 2D landmark points on a single image is a hallmark of human vision, but a task that has proven difficult for computer vision algorithms. We define a feed-forward deep neural network algorithm that can reconstruct 3D shapes from 2D landmark points almost perfectly (i.e., with extremely small reconstruction errors), even when these 2D landmarks come from a single image. Our experimental results show an improvement of up to two-fold over state-of-the-art computer vision algorithms; the 3D shape reconstruction error (measured as the Procrustes distance between the reconstructed shape and the ground truth) is .0022 for cars, .022 for human bodies, and .0004 for highly deformable flags. Our algorithm was also a top performer at the 2016 3D Face Alignment in the Wild Challenge (held in conjunction with the European Conference on Computer Vision, ECCV), which required the reconstruction of 3D face shape from a single image. The derived algorithm can be trained in a couple of hours, and testing runs at more than 1,000 frames/s on an i7 desktop. We also present an innovative data augmentation approach that allows us to train the system efficiently with a small number of samples. Finally, the system is robust to noise (e.g., imprecise landmark points) and missing data (e.g., occluded or undetected landmark points).
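
The setting can be pictured as a regressor from flattened 2D landmarks to 3D coordinates, scored with the Procrustes distance used above. The network depth and width below are illustrative guesses, not the published architecture; the distance function is the standard orthogonal-Procrustes solution.

```python
import numpy as np
import torch.nn as nn

def make_regressor(n_landmarks: int) -> nn.Sequential:
    return nn.Sequential(
        nn.Linear(2 * n_landmarks, 512), nn.ReLU(),
        nn.Linear(512, 512), nn.ReLU(),
        nn.Linear(512, 3 * n_landmarks),  # predicted (x, y, z) per landmark
    )

def procrustes_distance(x: np.ndarray, y: np.ndarray) -> float:
    """Shape distance after removing translation, scale and rotation."""
    x = x - x.mean(0); y = y - y.mean(0)
    x = x / np.linalg.norm(x); y = y / np.linalg.norm(y)
    # Optimal rotation from the SVD of the cross-covariance matrix.
    u, _, vt = np.linalg.svd(x.T @ y)
    return float(np.linalg.norm(x @ (u @ vt) - y))
```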


Subject(s)
Algorithms , Imaging, Three-Dimensional/methods , Neural Networks, Computer , Databases, Factual , Face/anatomy & histology , Humans , Video Recording
13.
Proc Natl Acad Sci U S A ; 115(14): 3581-3586, 2018 04 03.
Article in English | MEDLINE | ID: mdl-29555780

ABSTRACT

Facial expressions of emotion in humans are believed to be produced by contracting one's facial muscles, generally called action units. However, the surface of the face is also innervated with a large network of blood vessels. Blood flow variations in these vessels yield visible color changes on the face. Here, we study the hypothesis that these visible facial colors allow emotion to be successfully transmitted and visually interpreted even in the absence of facial muscle activation. To study this hypothesis, we address the following two questions. Are observable facial colors consistent within, and differential between, emotion categories and positive vs. negative valence? And does the human visual system use these facial colors to decode emotion from faces? These questions suggest the existence of an important, unexplored mechanism of the production of facial expressions of emotion by a sender and their visual interpretation by an observer. The results of our studies provide evidence in favor of our hypothesis. We show that people successfully decode emotion using these color features, even in the absence of any facial muscle activation. We also demonstrate that this color signal is independent of that provided by facial muscle movements. These results support a revised model of the production and perception of facial expressions of emotion in which facial color is an effective mechanism to visually transmit and decode emotion.


Subject(s)
Color , Emotions/physiology , Face/physiology , Facial Expression , Facial Muscles/physiology , Pattern Recognition, Visual , Adult , Female , Humans , Male , Young Adult
14.
Article in English | MEDLINE | ID: mdl-31244515

ABSTRACT

We present a scalable, weakly supervised clustering approach to learn facial action units (AUs) from large, freely available web images. Unlike most existing methods (e.g., CNNs) that rely on fully annotated data, our method exploits web images with inaccurate annotations. Specifically, we derive a weakly supervised spectral algorithm that learns an embedding space coupling image appearance and semantics. The algorithm has an efficient gradient update and scales up to large quantities of images with a stochastic extension. With the learned embedding space, we adopt rank-order clustering to identify groups of visually and semantically similar images, and re-annotate these groups for training AU classifiers. Evaluation on the one-million-image EmotioNet dataset demonstrates the effectiveness of our approach: (1) our learned annotations reach on average 91.3% agreement with human annotations on 7 common AUs, (2) classifiers trained with re-annotated images perform comparably to, and sometimes even better than, their supervised CNN-based counterparts, and (3) our method offers intuitive outlier/noise pruning instead of forcing an annotation onto every image. Code is available.
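
The re-annotation loop can be pictured as follows; scikit-learn's SpectralEmbedding and KMeans stand in for the paper's custom weakly supervised spectral algorithm and its rank-order clustering, which are not reproduced here.

```python
import numpy as np
from sklearn.manifold import SpectralEmbedding
from sklearn.cluster import KMeans

def reannotate(features: np.ndarray, noisy_labels: np.ndarray, k: int) -> np.ndarray:
    """features: (n, d) image descriptors; noisy_labels: (n,) integer AU labels."""
    emb = SpectralEmbedding(n_components=16).fit_transform(features)
    clusters = KMeans(n_clusters=k, n_init=10).fit_predict(emb)
    clean = noisy_labels.copy()
    for c in range(k):
        members = clusters == c
        if members.any():
            # Majority vote inside a visually/semantically coherent group.
            clean[members] = np.bincount(noisy_labels[members]).argmax()
    return clean
```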

15.
Curr Opin Psychol ; 17: 27-33, 2017 10.
Article in English | MEDLINE | ID: mdl-28950969

ABSTRACT

Facial expressions of emotion are produced by contracting and relaxing the muscles of the face. I hypothesize that the human visual system solves the inverse problem of production; that is, to interpret emotion, the visual system attempts to identify the underlying muscle activations. I show converging computational, behavioral and imaging evidence in favor of this hypothesis. I detail the computations performed by the human visual system to achieve the decoding of these facial actions and identify a brain region where these computations likely take place. The resulting computational model explains how humans readily classify emotions into categories as well as continuous variables. This model also predicts the existence of a large number of previously unknown facial expressions, including compound emotions, affect attributes and mental states that are regularly used by people. I provide evidence in favor of this prediction.


Subject(s)
Brain/physiology , Emotions , Facial Recognition/physiology , Brain/diagnostic imaging , Computer Simulation , Emotions/physiology , Facial Expression , Humans , Models, Neurological , Models, Psychological
16.
Int J Colorectal Dis ; 32(2): 255-264, 2017 Feb.
Article in English | MEDLINE | ID: mdl-27757541

ABSTRACT

PURPOSE: Patients with locally advanced rectal cancer and a pathologic complete response to neoadjuvant chemoradiation therapy have lower rates of recurrence than those who do not. However, the influence of the pathologic response on surgical complications and survival remains unclear. This study aimed to investigate the influence of the response to neoadjuvant therapy for rectal cancer on postoperative morbidity and long-term survival. METHODS: This was a comparative study of consecutive patients who underwent laparoscopic total mesorectal excision for rectal cancer in two European tertiary hospitals between 2004 and 2014. Patients with and without a pathologic complete response were compared in terms of postoperative morbidity, mortality, and survival. RESULTS: Fifty patients with a complete response (ypT0N0) were compared with 141 patients with a non-complete response. No group differences were observed in postoperative mortality or morbidity rates. The median follow-up time was 57 months (range 1-121). Over this period, 11 (5.8%) patients, all of whom were in the non-complete response group, exhibited local recurrence. The 5-year overall survival and disease-free survival were significantly better in the complete response group: 92.5 vs. 75.3% (p = 0.004) and 89 vs. 73.4% (p = 0.002), respectively. CONCLUSIONS: The postoperative complication rate after laparoscopic total mesorectal excision is not associated with the grade of pathologic response to neoadjuvant chemoradiation therapy.


Subject(s)
Chemoradiotherapy , Laparoscopy , Neoadjuvant Therapy , Rectal Neoplasms/pathology , Rectal Neoplasms/therapy , Adult , Aged , Aged, 80 and over , Disease-Free Survival , Female , Humans , Male , Middle Aged , Morbidity , Neoplasm Staging , Postoperative Care , Rectal Neoplasms/epidemiology , Treatment Outcome
17.
Curr Dir Psychol Sci ; 26(3): 263-269, 2017 Jun.
Article in English | MEDLINE | ID: mdl-29307959

ABSTRACT

Faces are one of the most important means of communication in humans. For example, a short glance at a person's face provides information on identity and emotional state. What are the computations the brain uses to solve these problems so accurately and seemingly effortlessly? This article summarizes current research on computational modeling, a technique used to answer this question. Specifically, my research studies the hypothesis that the brain's algorithm is tasked with solving the inverse problem of production. For example, to recognize identity, our brain needs to identify shape and shading image features that are invariant to facial expression, pose and illumination. Similarly, to recognize emotion, the brain needs to identify shape and shading features that are invariant to identity, pose and illumination. If one defines the physics equations that render an image under different identities, expressions, poses and illuminations, then gaining invariance to these factors is readily resolved by computing the inverse of this rendering function. I describe our current understanding of the algorithms used by our brains to solve this inverse problem. I also discuss how these results are driving research in computer vision to design computer systems that are as accurate, robust and efficient as humans.

18.
J Neurosci ; 36(16): 4434-42, 2016 Apr 20.
Article in English | MEDLINE | ID: mdl-27098688

ABSTRACT

By combining different facial muscle actions, called action units, humans can produce an extraordinarily large number of facial expressions. Computational models and studies in cognitive science and social psychology have long hypothesized that the brain needs to visually interpret these action units to understand other people's actions and intentions. Surprisingly, no studies have identified the neural basis of the visual recognition of these action units. Here, using functional magnetic resonance imaging and an innovative machine learning analysis approach, we identify a consistent and differential coding of action units in the brain. Crucially, in a brain region thought to be responsible for the processing of changeable aspects of the face, multivoxel pattern analysis could decode the presence of specific action units in an image. This coding was found to be consistent across people, facilitating the estimation of the perceived action units on participants not used to train the multivoxel decoder. Furthermore, this coding of action units was identified when participants attended to the emotion category of the facial expression, suggesting an interaction between the visual analysis of action units and emotion categorization as predicted by the computational models mentioned above. These results provide the first evidence for a representation of action units in the brain and suggest a mechanism for the analysis of large numbers of facial actions and a loss of this capacity in psychopathologies. SIGNIFICANCE STATEMENT: Computational models and studies in cognitive and social psychology propound that visual recognition of facial expressions requires an intermediate step to identify visible facial changes caused by the movement of specific facial muscles. Because facial expressions are indeed created by moving one's facial muscles, it is logical to assume that our visual system solves this inverse problem. Here, using an innovative machine learning method and neuroimaging data, we identify for the first time a brain region responsible for the recognition of actions associated with specific facial muscles. Furthermore, this representation is preserved across subjects. Our machine learning analysis does not require mapping the data to a standard brain and may serve as an alternative to hyperalignment.
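
The cross-subject decoding claim corresponds to a standard leave-one-subject-out analysis: train a pattern classifier for "AU present vs. absent" on all subjects but one and test on the held-out subject. The sketch below uses random stand-in data; the shapes, trial counts and classifier choice are assumptions, not the study's actual pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 500))          # trials x voxels (stand-in patterns)
y = rng.integers(0, 2, 200)              # AU present (1) vs. absent (0)
subjects = np.repeat(np.arange(10), 20)  # 10 subjects, 20 trials each

scores = []
for train, test in LeaveOneGroupOut().split(X, y, groups=subjects):
    clf = LogisticRegression(max_iter=1000).fit(X[train], y[train])
    scores.append(clf.score(X[test], y[test]))
print(f"cross-subject decoding accuracy: {np.mean(scores):.2f}")
```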


Subject(s)
Brain/metabolism , Facial Expression , Facial Recognition/physiology , Photic Stimulation/methods , Adult , Brain Mapping/methods , Female , Humans , Magnetic Resonance Imaging/methods , Male
19.
Cognition ; 150: 77-84, 2016 May.
Article in English | MEDLINE | ID: mdl-26872248

ABSTRACT

Facial expressions of emotion are thought to have evolved from the development of facial muscles used in sensory regulation and later adapted to express moral judgment. Negative moral judgment includes the expressions of anger, disgust and contempt. Here, we study the hypothesis that these facial expressions of negative moral judgment have further evolved into a facial expression of negation regularly used as a grammatical marker in human language. Specifically, we show that people from different cultures expressing negation use the same facial muscles as those employed to express negative moral judgment. We then show that this nonverbal signal is used as a co-articulator in speech and that, in American Sign Language, it has been grammaticalized as a non-manual marker. Furthermore, this facial expression of negation exhibits the theta oscillation (3-8 Hz) universally seen in syllable and mouthing production in speech and signing. These results provide evidence for the hypothesis that some components of human language have evolved from facial expressions of emotion, and suggest an evolutionary route for the emergence of grammatical markers.


Subject(s)
Emotions/physiology , Facial Expression , Judgment , Photic Stimulation/methods , Adolescent , Adult , Female , Humans , Male , Young Adult
20.
IEEE Trans Neural Netw Learn Syst ; 27(10): 2072-83, 2016 10.
Article in English | MEDLINE | ID: mdl-26529784

ABSTRACT

Human preferences are usually measured using ordinal variables. A system whose goal is to estimate the preferences of humans and their underlying decision mechanisms must learn the ordering of any given sample set. We consider the solution of this ordinal regression problem using a support vector machine algorithm. Specifically, the goal is to learn a set of classifiers with common direction vectors and different biases that correctly separate the ordered classes. Current algorithms either must solve a quadratic optimization problem, which is computationally expensive, or are based on maximizing the minimum margin (i.e., a fixed-margin strategy) between a set of hyperplanes, which biases the solution toward the closest margin. Another drawback of these strategies is that they are limited to ordering the classes using a single ranking variable (e.g., perceived length). In this paper, we define a multiple ordinal regression algorithm based on maximizing the sum of the margins between every pair of consecutive classes with respect to one or more rankings (e.g., perceived length and weight). We provide derivations of an efficient, easy-to-implement iterative solution using a sequential minimal optimization procedure. We demonstrate the accuracy of our solutions on several data sets. In addition, we provide a key application of our algorithms: estimating human subjects' ordinal classification of attribute associations to object categories. We show that these ordinal associations perform better than the binary ones typically employed in the literature.
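
The shared-direction idea can be sketched compactly: one weight vector projects samples onto a line, and ordered biases cut that line into classes. The toy gradient loop below softly pushes every consecutive boundary toward a larger margin; it is a stand-in under simplifying assumptions, not the paper's sequential-minimal-optimization solver.

```python
import numpy as np

def predict(X, w, b):
    # Class index = number of ordered thresholds the projection exceeds.
    return (X @ w > b[:, None]).sum(axis=0)

def fit(X, y, n_classes, lr=0.01, epochs=200):
    rng = np.random.default_rng(0)
    w = rng.normal(size=X.shape[1]) * 0.01
    b = np.linspace(-1.0, 1.0, n_classes - 1)
    for _ in range(epochs):
        for k in range(n_classes - 1):
            t = np.where(y > k, 1.0, -1.0)    # is the sample above cut k?
            z = X @ w - b[k]
            viol = t * z < 1                  # margin violators for cut k
            # Hinge-loss gradient step on this cut's margin.
            w += lr * ((t[viol][:, None] * X[viol]).sum(0) / len(X) - 1e-3 * w)
            b[k] -= lr * t[viol].sum() / len(X)
        b.sort()                              # keep thresholds ordered
    return w, b
```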


Subject(s)
Neural Networks, Computer , Support Vector Machine , Algorithms , Decision Making , Humans , Learning