Results 1 - 20 of 51
1.
Int J Comput Vis ; 129(4): 942-959, 2021 Apr.
Article in English | MEDLINE | ID: mdl-34211258

ABSTRACT

The performance of computer vision algorithms is near or superior to that of humans on visual problems including object recognition (especially of fine-grained categories), segmentation, and 3D object reconstruction from 2D views. Humans are, however, capable of higher-level image analyses. A clear example, involving theory of mind, is our ability to determine whether a perceived behavior or action was performed intentionally or not. In this paper, we derive an algorithm that can infer whether the behavior of an agent in a scene is intentional or unintentional based on its 3D kinematics, using knowledge of self-propelled motion, Newtonian motion, and the relationship between them. We show how the addition of this basic knowledge leads to a simple, unsupervised algorithm. To test the derived algorithm, we constructed three dedicated datasets, ranging from abstract geometric animations to realistic videos of agents performing intentional and non-intentional actions. Experiments on these datasets show that our algorithm can recognize whether an action is intentional or not, even without training data. Quantitatively, its performance is comparable to that of various supervised baselines, and qualitatively it produces sensible intentionality segmentations.
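
A minimal sketch of the core unsupervised cue described above: motion whose acceleration cannot be explained by passive Newtonian dynamics is flagged as self-propelled, and hence potentially intentional. The gravity-only dynamics model, the z-up frame, and the deviation threshold are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

GRAVITY = np.array([0.0, 0.0, -9.81])  # assumed world z-up, metres/second^2

def intentionality_score(positions: np.ndarray, dt: float, thresh: float = 2.0) -> np.ndarray:
    """positions: (T, 3) array of 3D agent coordinates sampled every dt seconds.
    Returns a boolean array marking frames whose acceleration deviates from the
    Newtonian (gravity-only) prediction by more than `thresh` m/s^2."""
    velocity = np.gradient(positions, dt, axis=0)       # finite-difference velocity
    acceleration = np.gradient(velocity, dt, axis=0)    # finite-difference acceleration
    residual = np.linalg.norm(acceleration - GRAVITY, axis=1)
    return residual > thresh                            # True = likely self-propelled

# Example: a purely ballistic (unintentional) arc is not flagged.
t = np.linspace(0, 1, 50)[:, None]
ballistic = np.hstack([t, np.zeros_like(t), 1.0 + 0.5 * GRAVITY[2] * t ** 2])
flags = intentionality_score(ballistic, dt=t[1, 0] - t[0, 0])
print(flags[2:-2].mean())   # -> 0.0: interior frames show no self-propulsion
```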

2.
Proc Natl Acad Sci U S A ; 115(14): 3581-3586, 2018 04 03.
Article in English | MEDLINE | ID: mdl-29555780

ABSTRACT

Facial expressions of emotion in humans are believed to be produced by contracting one's facial muscles, generally called action units. However, the surface of the face is also innervated with a large network of blood vessels. Blood flow variations in these vessels yield visible color changes on the face. Here, we study the hypothesis that these visible facial colors allow observers to successfully transmit and visually interpret emotion even in the absence of facial muscle activation. To study this hypothesis, we address the following two questions. Are observable facial colors consistent within and differential between emotion categories and positive vs. negative valence? And does the human visual system use these facial colors to decode emotion from faces? These questions suggest the existence of an important, unexplored mechanism of the production of facial expressions of emotion by a sender and their visual interpretation by an observer. The results of our studies provide evidence in favor of our hypothesis. We show that people successfully decode emotion using these color features, even in the absence of any facial muscle activation. We also demonstrate that this color signal is independent of that provided by facial muscle movements. These results support a revised model of the production and perception of facial expressions of emotion in which facial color is an effective mechanism to visually transmit and decode emotion.


Subjects
Color, Emotions/physiology, Face/physiology, Facial Expression, Facial Muscles/physiology, Pattern Recognition, Visual, Adult, Female, Humans, Male, Young Adult
3.
J Neurosci ; 36(16): 4434-42, 2016 Apr 20.
Article in English | MEDLINE | ID: mdl-27098688

ABSTRACT

By combining different facial muscle actions, called action units, humans can produce an extraordinarily large number of facial expressions. Computational models and studies in cognitive science and social psychology have long hypothesized that the brain needs to visually interpret these action units to understand other people's actions and intentions. Surprisingly, no studies have identified the neural basis of the visual recognition of these action units. Here, using functional magnetic resonance imaging and an innovative machine learning analysis approach, we identify a consistent and differential coding of action units in the brain. Crucially, in a brain region thought to be responsible for the processing of changeable aspects of the face, multivoxel pattern analysis could decode the presence of specific action units in an image. This coding was found to be consistent across people, facilitating the estimation of the perceived action units on participants not used to train the multivoxel decoder. Furthermore, this coding of action units was identified when participants attended to the emotion category of the facial expression, suggesting an interaction between the visual analysis of action units and emotion categorization as predicted by the computational models mentioned above. These results provide the first evidence for a representation of action units in the brain and suggest a mechanism for the analysis of large numbers of facial actions and a loss of this capacity in psychopathologies. SIGNIFICANCE STATEMENT: Computational models and studies in cognitive and social psychology propound that visual recognition of facial expressions requires an intermediate step to identify visible facial changes caused by the movement of specific facial muscles. Because facial expressions are indeed created by moving one's facial muscles, it is logical to assume that our visual system solves this inverse problem. Here, using an innovative machine learning method and neuroimaging data, we identify for the first time a brain region responsible for the recognition of actions associated with specific facial muscles. Furthermore, this representation is preserved across subjects. Our machine learning analysis does not require mapping the data to a standard brain and may serve as an alternative to hyperalignment.
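
A minimal leave-one-subject-out decoding sketch in the spirit of the multivoxel analysis described above: a linear decoder is trained on voxel patterns from all-but-one subject and tested on the held-out subject. The data, decoder choice, and feature counts here are placeholders; the authors' region selection, preprocessing, and specific decoding method are not reproduced.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)
n_subjects, trials_per_subject, n_voxels = 8, 40, 200
X = rng.normal(size=(n_subjects * trials_per_subject, n_voxels))   # voxel patterns (placeholder)
y = rng.integers(0, 2, size=len(X))                                 # AU present / absent per trial
X[y == 1, :20] += 0.5                                                # weak AU signal shared across subjects
groups = np.repeat(np.arange(n_subjects), trials_per_subject)       # subject label per trial

# Decoding accuracy per held-out subject; above-chance scores indicate an AU code
# that generalises across people.
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                         groups=groups, cv=LeaveOneGroupOut())
print(scores.round(2), scores.mean().round(2))
```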


Subjects
Brain/metabolism, Facial Expression, Facial Recognition/physiology, Photic Stimulation/methods, Adult, Brain Mapping/methods, Female, Humans, Magnetic Resonance Imaging/methods, Male
4.
Int J Colorectal Dis ; 32(2): 255-264, 2017 Feb.
Article in English | MEDLINE | ID: mdl-27757541

ABSTRACT

PURPOSE: Patients with locally advanced rectal cancer and a pathologic complete response to neoadjuvant chemoradiation therapy have lower rates of recurrence than those who do not achieve a complete response. However, the influence of the pathologic response on surgical complications and survival remains unclear. This study aimed to investigate the influence of neoadjuvant therapy for rectal cancer on postoperative morbidity and long-term survival. METHODS: This was a comparative study of consecutive patients who underwent laparoscopic total mesorectal excision for rectal cancer in two European tertiary hospitals between 2004 and 2014. Patients with and without a pathologic complete response were compared in terms of postoperative morbidity, mortality, and survival. RESULTS: Fifty patients with a complete response (ypT0N0) were compared with 141 patients with a non-complete response. No group differences were observed in postoperative mortality or morbidity rates. The median follow-up time was 57 months (range 1-121). Over this period, 11 (5.8 %) patients, all of whom were in the non-complete response group, exhibited local recurrence. Five-year overall survival and disease-free survival were significantly better in the complete response group: 92.5 vs. 75.3 % (p = 0.004) and 89 vs. 73.4 % (p = 0.002), respectively. CONCLUSIONS: The postoperative complication rate after laparoscopic total mesorectal excision is not associated with the grade of pathologic response to neoadjuvant chemoradiation therapy.


Subjects
Chemoradiotherapy, Laparoscopy, Neoadjuvant Therapy, Rectal Neoplasms/pathology, Rectal Neoplasms/therapy, Adult, Aged, Aged, 80 and over, Disease-Free Survival, Female, Humans, Male, Middle Aged, Morbidity, Neoplasm Staging, Postoperative Care, Rectal Neoplasms/epidemiology, Treatment Outcome
5.
Proc Natl Acad Sci U S A ; 116(15): 7169-7171, 2019 04 09.
Article in English | MEDLINE | ID: mdl-30898883

Subjects
Emotions
6.
Proc Natl Acad Sci U S A ; 111(15): E1454-62, 2014 Apr 15.
Article in English | MEDLINE | ID: mdl-24706770

ABSTRACT

Understanding the different categories of facial expressions of emotion that humans regularly use is essential to gain insights into human cognition and affect as well as for the design of computational models and perceptual interfaces. Past research on facial expressions of emotion has focused on the study of six basic categories: happiness, surprise, anger, sadness, fear, and disgust. However, many more facial expressions of emotion exist and are used regularly by humans. This paper describes an important group of expressions, which we call compound emotion categories. Compound emotions are those that can be constructed by combining basic component categories to create new ones. For instance, happily surprised and angrily surprised are two distinct compound emotion categories. The present work defines 21 distinct emotion categories. Sample images of their facial expressions were collected from 230 human subjects. A Facial Action Coding System analysis shows that the production of these 21 categories is different from, but consistent with, the subordinate categories they represent (e.g., a happily surprised expression combines muscle movements observed in happiness and surprise). We show that these differences are sufficient to distinguish between the 21 defined categories. We then use a computational model of face perception to demonstrate that most of these categories are also visually discriminable from one another.


Subjects
Emotions/classification, Emotions/physiology, Facial Expression, Models, Biological, Adult, Discrimination, Psychological/physiology, Facial Muscles/physiology, Female, Humans, Male, Ohio, Photography
7.
Pattern Recognit ; 47(1)2014 Jan 01.
Article in English | MEDLINE | ID: mdl-24187386

ABSTRACT

Deformable shape detection is an important problem in computer vision and pattern recognition. However, standard detectors are typically limited to locating only a few salient landmarks, such as landmarks near edges or areas of high contrast, often conveying insufficient shape information. This paper presents a novel statistical pattern recognition approach to locate a dense set of salient and non-salient landmarks in images of a deformable object. We exploit the fact that several object classes exhibit a homogeneous structure such that each landmark position provides some information about the position of the other landmarks. In our model, the relationship between all pairs of landmarks is naturally encoded as a probabilistic graph. Dense landmark detections are then obtained with a new sampling algorithm that, given a set of candidate detections, selects the most likely positions so as to maximize the probability of the graph. Our experimental results demonstrate accurate, dense landmark detections within and across different databases.
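
A sketch of the pairwise-landmark idea: each pair of landmarks is modelled by a Gaussian over their relative offset, and one candidate is picked per landmark so that the sum of pairwise log-probabilities is maximised. The simple iterated conditional update below stands in for the authors' sampling algorithm, and the isotropic offset model is an assumption made for illustration.

```python
import numpy as np

def select_landmarks(candidates, mean_offsets, var=25.0, n_iter=10):
    """candidates: list of (k_i, 2) arrays of candidate positions per landmark.
    mean_offsets: (L, L, 2) array, expected offset from landmark i to landmark j.
    Returns one candidate index per landmark maximising pairwise compatibility."""
    L = len(candidates)
    choice = [0] * L                                     # start from the first candidate everywhere
    for _ in range(n_iter):                              # iterated conditional updates
        for i in range(L):
            pos_others = np.array([candidates[j][choice[j]] for j in range(L)])
            scores = []
            for c in candidates[i]:                      # score each candidate of landmark i
                diff = pos_others - c - mean_offsets[i]  # deviation from the expected offsets
                diff[i] = 0.0                            # a landmark does not constrain itself
                scores.append(-np.sum(diff ** 2) / var)  # isotropic Gaussian log-likelihood (up to a constant)
            choice[i] = int(np.argmax(scores))
    return choice

# Toy example: 3 collinear landmarks spaced 10 px apart; candidate 0 is a distractor,
# candidate 1 is the true point. The distractors are mutually inconsistent, so the
# pairwise model recovers the true points.
truth = np.array([[0.0, 0.0], [10.0, 0.0], [20.0, 0.0]])
mean_offsets = truth[None, :, :] - truth[:, None, :]     # offset from landmark i to landmark j
distractors = np.array([[35.0, -20.0], [-30.0, 25.0], [15.0, 40.0]])
cands = [np.array([t + d, t]) for t, d in zip(truth, distractors)]
print(select_landmarks(cands, mean_offsets))             # -> [1, 1, 1]
```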

8.
Curr Opin HIV AIDS ; 19(2): 69-78, 2024 03 01.
Article in English | MEDLINE | ID: mdl-38169333

ABSTRACT

PURPOSE OF REVIEW: The complex nature and distribution of the HIV reservoir in the tissues of people with HIV remains one of the major obstacles to eliminating HIV persistence. Challenges include the tissue-specific states of latency and viral persistence, which translate into high levels of reservoir heterogeneity. Moreover, the best strategies to reach and eliminate these reservoirs may differ based on the intrinsic characteristics of the cellular and anatomical reservoirs to be targeted. RECENT FINDINGS: While major efforts have focused on lymphoid tissues and follicular T helper cells, evidence of viral persistence in HIV and non-HIV antigen-specific CD4+ T cells and in macrophages resident in multiple tissues that provide long-term protection presents new challenges in the quest for an HIV cure. Considering the microenvironments where these cellular reservoirs persist opens new avenues for the delivery of drugs and immunotherapies to target these niches. New tools, such as single-cell RNA sequencing, CRISPR screenings, mRNA technology, and tissue organoids, are developing quickly and providing detailed information about the complex nature of tissue reservoirs. SUMMARY: Targeting persistence in tissue reservoirs represents a complex but essential step towards achieving an HIV cure. Combinatorial strategies capable of reaching and reactivating multiple long-lived reservoirs in the body, applied particularly during the early phases of infection to limit the initial reservoirs, may pave the way.


Subjects
HIV Infections, Humans, HIV Infections/drug therapy, Virus Latency, CD4-Positive T-Lymphocytes
9.
JCI Insight ; 2024 Jun 20.
Article in English | MEDLINE | ID: mdl-38900571

ABSTRACT

Men who have sex with men (MSM) with HIV are at high risk for squamous intraepithelial lesion (SIL) and anal cancer. Identifying local immunological mechanisms involved in the development of anal dysplasia could aid treatment and diagnostics. Here we studied 111 anal biopsies obtained from 101 MSM with HIV, who participated in an anal screening program. We first assessed multiple immune subsets by flow cytometry, in addition to histological examination, in a discovery cohort (n = 54). Selected molecules were further evaluated by immunohistochemistry in a validation cohort (n = 47). Pathological samples were characterized by the presence of Resident Memory T cells with low expression of CD103 and by changes in Natural Killer cell subsets, affecting residency and activation. Furthermore, potentially immune suppressive subsets, including CD15+CD16+ mature neutrophils, gradually increased as the anal lesion progressed. Immunohistochemistry confirmed the association between the presence of CD15 in the epithelium and SIL diagnosis, with a sensitivity of 80% and specificity of 71% (AUC 0.762) for the correlation with high-grade SIL. A complex immunological environment with imbalanced proportions of resident effectors and immune suppressive subsets characterizes pathological samples. Neutrophil infiltration, determined by CD15 staining, may represent a valuable pathological marker associated with the grade of dysplasia.
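
For readers unfamiliar with the reported performance figures, the sketch below shows how sensitivity, specificity, and ROC AUC for a candidate marker (such as epithelial CD15 staining) are computed. The marker scores and diagnoses are made-up placeholders, not the study's data, and the 0.5 cut-off is arbitrary.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

rng = np.random.default_rng(1)
hsil = rng.integers(0, 2, size=47)                                     # ground truth: high-grade SIL yes/no
marker = np.clip(hsil * 0.6 + rng.normal(0.3, 0.25, size=47), 0, 1)   # continuous marker score (placeholder)

auc = roc_auc_score(hsil, marker)
tn, fp, fn, tp = confusion_matrix(hsil, marker > 0.5).ravel()
sensitivity = tp / (tp + fn)                                           # proportion of HSIL cases detected
specificity = tn / (tn + fp)                                           # proportion of non-HSIL correctly ruled out
print(f"AUC={auc:.3f}  sensitivity={sensitivity:.2f}  specificity={specificity:.2f}")
```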

10.
J Vis ; 13(4): 13, 2013 Mar 18.
Article in English | MEDLINE | ID: mdl-23509409

ABSTRACT

Facial expressions of emotion are essential components of human behavior, yet little is known about the hierarchical organization of their cognitive analysis. We study the minimum exposure time needed to successfully classify the six classical facial expressions of emotion (joy, surprise, sadness, anger, disgust, fear) plus neutral as seen at different image resolutions (240 × 160 to 15 × 10 pixels). Our results suggest a consistent hierarchical analysis of these facial expressions regardless of the resolution of the stimuli. Happiness and surprise can be recognized after very short exposure times (10-20 ms), even at low resolutions. Fear and anger are recognized the slowest (100-250 ms), even in high-resolution images, suggesting a later computation. Sadness and disgust are recognized in between (70-200 ms). The minimum exposure time required for successful classification of each facial expression correlates with the ability of a human subject to identify it correctly at low resolutions. These results suggest a fast, early computation of expressions represented mostly by low spatial frequencies or global configural cues and a later, slower process for those categories requiring a more fine-grained analysis of the image. We also demonstrate that those expressions that are mostly visible in higher-resolution images are not recognized as accurately. We summarize implications for current computational models.


Subjects
Emotions, Facial Expression, Recognition, Psychology, Adult, Analysis of Variance, Female, Humans, Male, Photic Stimulation/methods, Sensory Thresholds/physiology, Time Factors, Young Adult
11.
Open Biol ; 13(1): 220200, 2023 01.
Article in English | MEDLINE | ID: mdl-36629019

ABSTRACT

Microglia are very sensitive to changes in the environment and respond through morphological, functional and metabolic adaptations. To depict the modifications microglia undergo under healthy and pathological conditions, we developed free access image analysis scripts to quantify microglia morphologies and phagocytosis. Neuron-glia cultures, in which microglia express the reporter tdTomato, were exposed to excitotoxicity or excitotoxicity + inflammation and analysed 8 h later. Neuronal death was assessed by SYTOX staining of nucleus debris and phagocytosis was measured through the engulfment of SYTOX+ particles in microglia. We identified seven morphologies: round, hypertrophic, fried egg, bipolar and three 'inflamed' morphologies. We generated a classifier able to separate them and assign one of the seven classes to each microglia in sample images. In control cultures, round and hypertrophic morphologies were predominant. Excitotoxicity had a limited effect on the composition of the populations. By contrast, excitotoxicity + inflammation promoted an enrichment in inflamed morphologies and increased the percentage of phagocytosing microglia. Our data suggest that inflammation is critical to promote phenotypical changes in microglia. We also validated our tools for the segmentation of microglia in brain slices and performed morphometry with the obtained mask. Our method is versatile and useful to correlate microglia sub-populations and behaviour with environmental changes.
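
The classifier mentioned above assigns each microglia one of seven morphology classes. A hedged sketch of that step: a random forest trained on per-cell shape descriptors. The feature names, the specific descriptors, and the training data below are assumptions for illustration; this is not the authors' released script, and the three 'inflamed' class names are placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

CLASSES = ["round", "hypertrophic", "fried egg", "bipolar",
           "inflamed A", "inflamed B", "inflamed C"]   # three 'inflamed' types, names assumed

rng = np.random.default_rng(0)
X_train = rng.normal(size=(700, 4))        # columns: area, circularity, branch count, soma size (assumed)
y_train = rng.integers(0, len(CLASSES), size=700)      # expert-assigned class per cell (placeholder)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Classify cells segmented from a new image and report the population composition,
# as done for control vs. excitotoxicity + inflammation cultures.
X_new = rng.normal(size=(50, 4))
counts = np.bincount(clf.predict(X_new), minlength=len(CLASSES))
for name, n in zip(CLASSES, counts):
    print(f"{name:>13s}: {100 * n / len(X_new):.0f}%")
```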


Subjects
Microglia, Phagocytosis, Humans, Microglia/metabolism, Inflammation/metabolism, Cell Death, Neurons/metabolism
12.
Pattern Recognit ; 45(4): 1792-1801, 2012 Apr.
Article in English | MEDLINE | ID: mdl-22308002

ABSTRACT

We propose an approach to the detection of highly deformable shapes in images via manifold learning with regression. Our method does not require that shape key points be defined at high-contrast image regions, nor do we need an initial estimate of the shape. We only require sufficient representative training data and a rough initial estimate of the object position and scale. We demonstrate the method for face shape learning and provide a comparison to a nonlinear Active Appearance Model. Our method is extremely accurate, to nearly pixel precision, and is capable of accurately detecting the shape of faces undergoing extreme expression changes. The technique is robust to occlusions such as glasses and gives reasonable results for extremely degraded image resolutions.
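
A sketch of the detection-by-regression pipeline: learn a low-dimensional shape manifold from training shapes, regress from image appearance to manifold coordinates, and reconstruct the dense shape. PCA and kernel ridge regression below are stand-ins for the nonlinear manifold learning and regression actually used in the paper, and all data are placeholders.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
n_train, n_points, n_pix = 300, 68, 32 * 32
proj = rng.normal(size=(2 * n_points, n_pix))       # toy shape-to-appearance mapping
shapes = rng.normal(size=(n_train, 2 * n_points))   # flattened (x, y) landmark shapes
images = shapes @ proj                               # placeholder appearance features

pca = PCA(n_components=10).fit(shapes)               # the learned shape manifold
coords = pca.transform(shapes)                       # manifold coordinates of each training shape
reg = KernelRidge(kernel="rbf", alpha=1.0).fit(images, coords)

# Test time: from the appearance inside a rough position/scale estimate, predict
# manifold coordinates and map them back to a dense set of landmarks.
test_image = rng.normal(size=(1, 2 * n_points)) @ proj
predicted_shape = pca.inverse_transform(reg.predict(test_image))
print(predicted_shape.shape)                         # (1, 136): 68 (x, y) landmarks
```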

13.
Mach Learn Appl ; 102022 Dec 15.
Article in English | MEDLINE | ID: mdl-36578375

ABSTRACT

The breast cosmetic outcome after breast-conserving therapy is essential for evaluating breast treatment and determining a patient's remedy selection, which prompts the need for objective and efficient methods of breast cosmesis evaluation. However, current evaluation methods rely on ratings from a small group of physicians or on semi-automated pipelines, making the processes time-consuming and their results inconsistent. To solve this problem, in this study we propose: (1) a fully automatic machine learning breast cosmetic evaluation algorithm leveraging state-of-the-art deep learning algorithms for breast detection and contour annotation; (2) a novel set of breast cosmesis features; and (3) a new breast cosmetic dataset consisting of 3,000+ images from three clinical trials with human annotations of both the breast components and their cosmesis scores. We show that our fully automatic framework can achieve performance comparable to the state of the art without the need for human input, leading to a more objective, low-cost, and scalable solution for breast cosmetic evaluation in breast cancer treatment.

14.
J Vis ; 11(13): 24, 2011 Nov 30.
Article in English | MEDLINE | ID: mdl-22131445

ABSTRACT

Much is known about how facial expressions of emotion are produced, including which individual muscles are most active in each expression. Yet little is known about how this information is interpreted by the human visual system. This paper presents a systematic study of the image dimensionality of facial expressions of emotion. In particular, we investigate how recognition degrades when the resolution of the image (i.e., the number of pixels when seen as a 5.3 by 8 degree stimulus) is reduced. We show that recognition is only impaired in practice when the image resolution goes below 20 × 30 pixels. A study of the confusion tables demonstrates that each expression of emotion is consistently confused with a small set of alternatives and that the confusion is not symmetric, i.e., misclassifying emotion a as b does not imply we will mistake b for a. This asymmetric pattern is consistent over the different image resolutions and cannot be explained by the similarity of muscle activation. Furthermore, although women are generally better at recognizing expressions of emotion at all resolutions, the asymmetry patterns are the same. We discuss the implications of these results for current models of face perception.
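
The reported asymmetry (mistaking emotion a for b does not imply mistaking b for a) can be quantified directly from a confusion matrix, as in the short sketch below. The matrix values are a small made-up example, not the paper's data.

```python
import numpy as np

emotions = ["joy", "surprise", "sadness", "anger", "disgust", "fear"]
# confusion[i, j] = fraction of category-i stimuli labelled as category j
confusion = np.array([
    [0.95, 0.03, 0.00, 0.00, 0.01, 0.01],
    [0.02, 0.90, 0.01, 0.01, 0.01, 0.05],
    [0.01, 0.01, 0.80, 0.05, 0.10, 0.03],
    [0.00, 0.01, 0.02, 0.75, 0.18, 0.04],
    [0.00, 0.01, 0.05, 0.04, 0.85, 0.05],
    [0.01, 0.15, 0.03, 0.05, 0.06, 0.70],
])

asym = confusion - confusion.T                 # zero everywhere iff confusions are symmetric
i, j = np.unravel_index(np.abs(np.triu(asym, k=1)).argmax(), asym.shape)
print(f"most asymmetric pair: {emotions[i]} -> {emotions[j]}: {confusion[i, j]:.2f} "
      f"vs. {emotions[j]} -> {emotions[i]}: {confusion[j, i]:.2f}")
```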


Subjects
Emotions/physiology, Facial Expression, Pattern Recognition, Visual/physiology, Accommodation, Ocular/physiology, Adult, Female, Humans, Male, Young Adult
15.
Int J Comput Vis ; 88(3): 404-424, 2010 Jul 01.
Article in English | MEDLINE | ID: mdl-23682206

ABSTRACT

We present an information-theoretic approach that defines the problem of structure from motion (SfM) as one of blind source separation. Given that, for almost all practical joint densities of shape points, the marginal densities are non-Gaussian, we show how higher-order statistics can be used to improve shape estimates over methods based on factorization via Singular Value Decomposition (SVD), bundle adjustment, and Bayesian approaches. Previous techniques have either explicitly or implicitly used only second-order statistics in models of shape or noise. A further advantage of viewing SfM as a blind source separation problem is that it easily allows for the inclusion of noise and shape models, resulting in Maximum Likelihood (ML) or Maximum a Posteriori (MAP) shape and motion estimates. A key result is that the blind source separation approach can recover the motion and shape matrices without explicit knowledge of the motion or shape pdf. We demonstrate that it suffices to know whether the pdf is sub- or super-Gaussian (i.e., semi-parametric estimation) and derive a simple formulation to determine this from the data. We provide extensive experimental results on synthetic and real tracked points in order to quantify the improvement obtained with this technique.
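
The semi-parametric step only needs the sign of the source's non-Gaussianity. A minimal sketch: estimate the excess kurtosis of the data and call the density super-Gaussian (positive) or sub-Gaussian (negative). This is the standard moment-based test, used here as an assumed stand-in for the paper's own formulation.

```python
import numpy as np

def gaussianity_sign(x: np.ndarray) -> str:
    """Return 'super-Gaussian' or 'sub-Gaussian' from the excess kurtosis of x."""
    x = (x - x.mean()) / x.std()
    excess_kurtosis = np.mean(x ** 4) - 3.0       # 0 for a Gaussian
    return "super-Gaussian" if excess_kurtosis > 0 else "sub-Gaussian"

rng = np.random.default_rng(0)
print(gaussianity_sign(rng.laplace(size=10_000)))   # heavy-tailed -> super-Gaussian
print(gaussianity_sign(rng.uniform(size=10_000)))   # flat distribution -> sub-Gaussian
```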

16.
Article in English | MEDLINE | ID: mdl-33090835

ABSTRACT

The "spatial congruency bias" is a behavioral phenomenon where 2 objects presented sequentially are more likely to be judged as being the same object if they are presented in the same location (Golomb, Kupitz, & Thiemann, 2014), suggesting that irrelevant spatial location information may be bound to object representations. Here, we examine whether the spatial congruency bias extends to higher-level object judgments of facial identity and expression. On each trial, 2 real-world faces were sequentially presented in variable screen locations, and subjects were asked to make same-different judgments on the facial expression (Experiments 1-2) or facial identity (Experiment 3) of the stimuli. We observed a robust spatial congruency bias for judgments of facial identity, yet a more fragile one for judgments of facial expression. Subjects were more likely to judge 2 faces as displaying the same expression if they were presented in the same location (compared to in different locations), but only when the faces shared the same identity. On the other hand, a spatial congruency bias was found when subjects made judgments on facial identity, even across faces displaying different facial expressions. These findings suggest a possible difference between the binding of facial identity and facial expression to spatial location. (PsycInfo Database Record (c) 2020 APA, all rights reserved).

17.
IEEE Trans Pattern Anal Mach Intell ; 31(5): 841-54, 2009 May.
Article in English | MEDLINE | ID: mdl-19299859

ABSTRACT

The task of finding a low-rank (r) matrix that best fits an original data matrix of higher rank is a recurring problem in science and engineering. The problem becomes especially difficult when the original data matrix has some missing entries and contains an unknown additive noise term in the remaining elements. The former problem can be solved by concatenating a set of r-column matrices that share a common single r-dimensional solution space. Unfortunately, the number of possible submatrices is generally very large and, hence, the results obtained with one set of r-column matrices will generally be different from that captured by a different set. Ideally, we would like to find that solution that is least affected by noise. This requires that we determine which of the r-column matrices (i.e., which of the original feature points) are less influenced by the unknown noise term. This paper presents a criterion to successfully carry out such a selection. Our key result is to formally prove that the more distinct the r vectors of the r-column matrices are, the less they are swayed by noise. This key result is then combined with the use of a noise model to derive an upper bound for the effect that noise and occlusions have on each of the r-column matrices. It is shown how this criterion can be effectively used to recover the noise-free matrix of rank r. Finally, we derive the affine and projective structure-from-motion (SFM) algorithms using the proposed criterion. Extensive validation on synthetic and real data sets shows the superiority of the proposed approach over the state of the art.
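
A sketch of the submatrix-selection idea: among candidate r-column submatrices built from fully observed rows, prefer the one whose columns are most distinct. The smallest singular value is used below as a simple proxy for the paper's criterion, and the data are synthetic; the recovered column space would then give the rank-r fit.

```python
import numpy as np
from itertools import combinations

def best_r_columns(M: np.ndarray, mask: np.ndarray, r: int):
    """M: data matrix with unknown entries; mask: True where the entry is observed.
    Returns the r column indices whose fully observed rows form the best-conditioned
    (least noise-sensitive) r-column submatrix."""
    best, best_score = None, -np.inf
    for cols in combinations(range(M.shape[1]), r):
        rows = mask[:, cols].all(axis=1)                # rows observed in all r columns
        if rows.sum() < r:
            continue                                     # not enough complete rows
        score = np.linalg.svd(M[np.ix_(rows, cols)], compute_uv=False)[-1]
        if score > best_score:                           # larger smallest singular value = more distinct columns
            best, best_score = cols, score
    return best

rng = np.random.default_rng(0)
M = rng.normal(size=(30, 2)) @ rng.normal(size=(2, 8)) + 0.01 * rng.normal(size=(30, 8))
mask = rng.random(M.shape) > 0.2                         # roughly 20% of entries missing
print(best_r_columns(M, mask, r=2))
```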


Subjects
Algorithms, Artificial Intelligence, Image Interpretation, Computer-Assisted/methods, Imaging, Three-Dimensional/methods, Movement, Pattern Recognition, Automated/methods, Photography/methods, Image Enhancement/methods, Motion, Reproducibility of Results, Sensitivity and Specificity
18.
J Vis ; 9(1): 5.1-11, 2009 Jan 08.
Article in English | MEDLINE | ID: mdl-19271875

ABSTRACT

Perception of facial expressions of emotion is generally assumed to correspond to underlying muscle movement. However, it is often observed that some individuals have sadder or angrier faces, even for neutral, motionless faces. Here, we report on one such effect caused by simple static configural changes. In particular, we show four variations in the relative vertical position of the nose, mouth, eyes, and eyebrows that affect the perception of emotion in neutral faces. The first two configurations make the vertical distance between the eyes and mouth shorter than average, resulting in the perception of an angrier face. The other two configurations make this distance larger than average, resulting in the perception of sadness. These perceptions increase with the amount of configural change, suggesting a representation based on variations from a norm (prototypical) face.


Subjects
Emotions, Facial Expression, Visual Perception/physiology, Adult, Anger, Humans, Photic Stimulation/methods
19.
Image Vis Comput ; 27(12): 1826-1844, 2009 Nov 01.
Article in English | MEDLINE | ID: mdl-20161003

ABSTRACT

The manual signs in sign languages are generated and interpreted using three basic building blocks: handshape, motion, and place of articulation. When combined, these three components (together with palm orientation) uniquely determine the meaning of a manual sign. This means that pattern recognition techniques that employ only a subset of these components are inappropriate for interpreting the sign or for building automatic recognizers of the language. In this paper, we define an algorithm to model these three basic components from a single video sequence of two-dimensional pictures of a sign. The recognition results for the three components are then combined to determine the class of the signs in the videos. Experiments are performed on a database of (isolated) American Sign Language (ASL) signs. The results demonstrate that, using semi-automatic detection, all three components can be reliably recovered from two-dimensional video sequences, allowing for an accurate representation and recognition of the signs.
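
One simple way to combine three component recognisers (handshape, motion, place of articulation) into a sign decision is late fusion of their per-class scores, sketched below. The independence assumption, the score values, and the tiny sign lexicon are illustrative only and are not taken from the paper.

```python
import numpy as np

signs = ["MOTHER", "FATHER", "PLEASE"]
# per-component probabilities over the lexicon, e.g. from three separate classifiers
scores = {
    "handshape": np.array([0.6, 0.3, 0.1]),
    "motion":    np.array([0.5, 0.2, 0.3]),
    "place":     np.array([0.4, 0.5, 0.1]),
}

log_joint = sum(np.log(s) for s in scores.values())   # assumes the components are independent
print(signs[int(np.argmax(log_joint))])                # -> MOTHER
```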

20.
Dev Psychol ; 55(9): 1965-1981, 2019 Sep.
Article in English | MEDLINE | ID: mdl-31464498

ABSTRACT

Computer vision algorithms have made tremendous advances in recent years. We now have algorithms that can detect and recognize objects, faces, and even facial actions in still images and video sequences. This is wonderful news for researchers who need to code facial articulations in large data sets of images and videos, because this task is time-consuming and can only be completed by expert coders, making it very expensive. The availability of computer algorithms that can automatically code facial actions in extremely large data sets also opens the door to studies in psychology and neuroscience that were not previously possible, for example, studying the development of the production of facial expressions from infancy to adulthood within and across cultures. Unfortunately, there is a lack of methodological understanding of how these algorithms should and should not be used, and of how to select the most appropriate algorithm for each study. This article aims to address this gap in the literature. Specifically, we present several methodologies for use in hypothesis-based and exploratory studies, explain how to select the computer algorithms that best fit the requirements of a given experimental design, and detail how to evaluate whether the automatic annotations provided by existing algorithms are trustworthy. (PsycINFO Database Record (c) 2019 APA, all rights reserved).
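
One common way to check whether automatic annotations are trustworthy, in the spirit of the article's recommendations, is to compare the algorithm's per-frame action-unit (AU) labels against a sample hand-coded by expert FACS coders and report per-AU agreement. The labels below are placeholders, and the acceptable agreement threshold depends on the study.

```python
import numpy as np
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
n_frames, aus = 500, ["AU1", "AU4", "AU6", "AU12"]
expert = rng.integers(0, 2, size=(n_frames, len(aus)))   # expert-coded AU presence/absence
algorithm = expert.copy()
flip = rng.random(expert.shape) < 0.1                     # simulate 10% algorithm-expert disagreement
algorithm[flip] = 1 - algorithm[flip]

for k, au in enumerate(aus):                              # per-AU F1 between algorithm and expert coding
    print(au, round(f1_score(expert[:, k], algorithm[:, k]), 2))
```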


Subjects
Algorithms, Emotions/physiology, Facial Expression, Machine Learning/standards, Research Design/standards, Child, Female, Humans, Male