Results 1 - 20 of 42
1.
Eur J Orthod ; 36(5): 506-11, 2014 Oct.
Article in English | MEDLINE | ID: mdl-25257926

ABSTRACT

Three-dimensional (3D) imaging technology has been widely used to analyse facial morphology and has revealed an influence of some medical conditions on craniofacial growth and morphology. The aim of the study was to investigate whether craniofacial morphology differs in atopic Caucasian children compared with controls. The study design was an observational longitudinal cohort study. Atopy was diagnosed via skin-prick tests performed at 7.5 years of age. The cohort was followed to 15 years of age as part of the Avon Longitudinal Study of Parents and Children (ALSPAC). A total of 734 atopic children and 2829 controls were identified. 3D laser surface facial scans were obtained at 15 years of age. Twenty-one reproducible facial landmarks (x, y, z co-ordinates) were identified on each facial scan. Inter-landmark distances and average facial shells for atopic and non-atopic children were compared to explore differences in face shape between the groups. Both total anterior face height (pg-g, pg-men) and mid-face height (Is-men, sn-men, n-sn) were longer (by 0.6 and 0.4 mm, respectively) in atopic children than in non-atopic children. No facial differences were detected in the transverse and antero-posterior relationships. Small but statistically significant differences were detected in total and mid-face height between atopic and non-atopic children; no differences were detected in the transverse and antero-posterior relationships.
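The face-shape comparison above comes down to Euclidean distances between labelled 3D landmark coordinates. A minimal sketch of that computation is given below; the landmark names and the array layout are illustrative assumptions, not the ALSPAC data format.

```python
import numpy as np

# Hypothetical layout: one row of (x, y, z) per named landmark, in mm.
LANDMARKS = ["glabella", "nasion", "subnasale", "labiale_superius", "pogonion", "menton"]

def inter_landmark_distance(scan, name_a, name_b):
    """Euclidean distance (mm) between two landmarks on one facial scan.

    `scan` is an (n_landmarks, 3) array ordered as in LANDMARKS.
    """
    a = scan[LANDMARKS.index(name_a)]
    b = scan[LANDMARKS.index(name_b)]
    return float(np.linalg.norm(a - b))

def group_mean_difference(scans_group_a, scans_group_b, name_a, name_b):
    """Mean difference in one inter-landmark distance between two groups of scans."""
    d_a = [inter_landmark_distance(s, name_a, name_b) for s in scans_group_a]
    d_b = [inter_landmark_distance(s, name_a, name_b) for s in scans_group_b]
    return np.mean(d_a) - np.mean(d_b)
```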


Subject(s)
Cephalometry/methods, Dermatitis, Atopic/pathology, Face, Imaging, Three-Dimensional/methods, Anatomic Landmarks/pathology, Body Height, Body Weight, Child, Cohort Studies, Female, Follow-Up Studies, Humans, Image Processing, Computer-Assisted/methods, Lasers, Longitudinal Studies, Male, Skin Tests, Vertical Dimension
2.
Eur J Orthod ; 36(4): 373-80, 2014 Aug.
Article in English | MEDLINE | ID: mdl-25074563

ABSTRACT

Respiratory activity may influence craniofacial development and interact with genetic and environmental factors. It has been suggested that certain medical conditions, such as asthma, influence face shape. The aim of the study was to investigate whether facial shape differs in individuals diagnosed with asthma compared with controls. The study design was an observational longitudinal cohort study. Asthma was defined as reported wheezing diagnosed at age 7 years and 6 months. The cohort was followed to 15 years of age as part of the Avon Longitudinal Study of Parents and Children. A total of 418 asthmatics and 3010 controls were identified. Three-dimensional laser surface facial scans were obtained. Twenty-one reproducible facial landmarks (x, y, z co-ordinates) were identified. Average facial shells were created for asthmatic and non-asthmatic males and females to explore surface differences. The inter-ala distance was 0.4 mm wider (95% CI) and mid-face height was 0.4 mm (95% CI) shorter in asthmatic females when compared with non-asthmatic females. No facial differences were detected in male subjects. Small but statistically significant differences were detected in mid-face height and inter-ala width between asthmatic and non-asthmatic females; no differences were detected in males. The absence of detectable facial differences in males may be explained by the greater facial variation arising from males being at different stages of pubertal facial growth, which may mask any underlying effect of the condition.


Subject(s)
Asthma/pathology, Face, Imaging, Three-Dimensional/methods, Anatomic Landmarks/pathology, Body Mass Index, Cephalometry/methods, Child, Cohort Studies, Eye/pathology, Female, Follow-Up Studies, Humans, Image Processing, Computer-Assisted/methods, Lasers, Lip/pathology, Longitudinal Studies, Male, Maxillofacial Development/physiology, Nasal Cartilages/pathology, Nose/pathology, Vertical Dimension
3.
IEEE Trans Pattern Anal Mach Intell ; 45(11): 13083-13099, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37335789

ABSTRACT

3D visual saliency aims to predict the regional importance of 3D surfaces in agreement with human visual perception and has been well researched in computer vision and graphics. However, recent eye-tracking experiments show that state-of-the-art 3D visual saliency methods remain poor at predicting human fixations. Cues emerging prominently from these experiments suggest that 3D visual saliency might be associated with 2D image saliency. This paper proposes a framework that combines a Generative Adversarial Network and a Conditional Random Field for learning the visual saliency of both a single 3D object and a scene composed of multiple 3D objects, using image saliency ground truth, to 1) investigate whether 3D visual saliency is an independent perceptual measure or merely a derivative of image saliency and 2) provide a weakly supervised method for more accurately predicting 3D visual saliency. Through extensive experiments, we not only demonstrate that our method significantly outperforms the state-of-the-art approaches, but also manage to answer the interesting and worthwhile question posed in the title of this paper.

4.
IEEE Trans Pattern Anal Mach Intell ; 45(1): 905-918, 2023 Jan.
Article in English | MEDLINE | ID: mdl-35104210

ABSTRACT

Face portrait line drawing is a unique style of art that is highly abstract and expressive. However, due to its high semantic constraints, many existing methods learn to generate portrait drawings from paired training data, which are costly and time-consuming to obtain. In this paper, we propose a novel method to automatically transform face photos into portrait drawings using unpaired training data, with two new features: our method can (1) learn to generate high-quality portrait drawings in multiple styles using a single network and (2) generate portrait drawings in a "new style" unseen in the training data. To achieve these benefits, we (1) propose a novel quality metric for portrait drawings that is learned from human perception, and (2) introduce a quality loss to guide the network toward generating better-looking portrait drawings. We observe that existing unpaired translation methods, such as CycleGAN, tend to embed invisible reconstruction information indiscriminately across the whole drawing because of the significant information imbalance between the photo and portrait-drawing domains, which leads to important facial features being lost. To address this problem, we propose a novel asymmetric cycle mapping that enforces the reconstruction information to be visible and embedded only in selected facial regions. Along with localized discriminators for important facial regions, our method preserves all important facial features well in the generated drawings. Generator dissection further shows that our model learns to incorporate face semantic information during drawing generation. Extensive experiments, including a user study, show that our model outperforms state-of-the-art methods.

5.
Eur J Orthod ; 34(6): 655-64, 2012 Dec.
Article in English | MEDLINE | ID: mdl-21934112

ABSTRACT

The aim of this study is to identify key components contributing to facial variation in a large population-based sample of 15.5-year-old children (2514 females and 2233 males). The subjects were recruited from the Avon Longitudinal Study of Parents and Children. Three-dimensional facial images were obtained for each subject using two high-resolution Konica Minolta laser scanners. Twenty-one reproducible facial landmarks were identified and their coordinates were recorded. The facial images were registered using Procrustes analysis. Principal component analysis was then employed to identify independent groups of correlated coordinates. For the total data set, 14 principal components (PCs) were identified which explained 82 per cent of the total variance, with the first three components accounting for 46 per cent of the variance. Similar results were obtained for males and females separately with only subtle gender differences in some PCs. Facial features may be treated as a multidimensional statistical continuum with respect to the PCs. The first three PCs characterize the face in terms of height, width, and prominence of the nose. The derived PCs may be useful to identify and classify faces according to a scale of normality.
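As a rough illustration of the registration-and-decomposition pipeline described above, the sketch below runs a simple generalized Procrustes alignment followed by scikit-learn's PCA on the flattened landmark coordinates. The array shapes, the number of alignment iterations and the 14-component choice are assumptions made for illustration rather than details of the published analysis.

```python
import numpy as np
from sklearn.decomposition import PCA

def align_to(reference, shape):
    """Optimally rotate a centred, unit-scaled copy of `shape` (k, 3) onto `reference` (k, 3)."""
    ref = reference - reference.mean(axis=0)
    sh = shape - shape.mean(axis=0)
    ref = ref / np.linalg.norm(ref)
    sh = sh / np.linalg.norm(sh)
    u, _, vt = np.linalg.svd(sh.T @ ref)   # Procrustes rotation
    return sh @ (u @ vt)

def procrustes_pca(shapes, n_components=14):
    """shapes: (n_subjects, k_landmarks, 3). Returns the fitted PCA and per-subject scores."""
    mean = shapes[0]
    for _ in range(5):                      # a few generalized Procrustes iterations
        aligned = np.stack([align_to(mean, s) for s in shapes])
        mean = aligned.mean(axis=0)
    flat = aligned.reshape(len(shapes), -1) # each face becomes one coordinate vector
    pca = PCA(n_components=n_components).fit(flat)
    return pca, pca.transform(flat)
```

The resulting component scores can then be inspected against the landmark geometry to see which combinations of face height, width and nose prominence each component captures.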


Subject(s)
Anatomic Landmarks/anatomy & histology, Face/anatomy & histology, Imaging, Three-Dimensional/methods, Adolescent, Female, Humans, Lasers, Longitudinal Studies, Male, Principal Component Analysis, United Kingdom, White Population
6.
IEEE Trans Vis Comput Graph ; 28(2): 1317-1327, 2022 Feb.
Article in English | MEDLINE | ID: mdl-32755863

ABSTRACT

3D models are commonly used in computer vision and graphics. With the wider availability of mesh data, an efficient and intrinsic deep learning approach to processing 3D meshes is greatly needed. Unlike images, 3D meshes have irregular connectivity, requiring careful design to capture relations in the data. To utilize the topology information while staying robust under different triangulations, we propose to encode mesh connectivity using Laplacian spectral analysis, along with mesh feature aggregation blocks (MFABs) that can split the surface domain into local pooling patches and aggregate global information among them. We build a mesh hierarchy from fine to coarse using Laplacian spectral clustering, which is flexible under isometric transformations. Inside the MFABs there are pooling layers to collect local information and multi-layer perceptrons to compute vertex features of increasing complexity. To obtain the relationships among different clusters, we introduce a Correlation Net to compute a correlation matrix, which can aggregate the features globally through matrix multiplication with the cluster features. Our network architecture is flexible enough to be used on meshes with different numbers of vertices. We conduct several experiments, including shape segmentation and classification, and our method outperforms state-of-the-art algorithms for these tasks on the ShapeNet and COSEG datasets.
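The Laplacian spectral clustering step used to build the fine-to-coarse hierarchy can be sketched roughly as follows, using a vertex adjacency matrix assembled from the triangle faces; the cluster count and the use of scikit-learn's SpectralClustering are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
import scipy.sparse as sp
from sklearn.cluster import SpectralClustering

def mesh_spectral_clusters(n_vertices, faces, n_clusters=32):
    """Group mesh vertices into local patches via spectral clustering.

    faces: (m, 3) integer array of vertex indices per triangle.
    """
    # Symmetric vertex adjacency built from the triangle edges.
    i = np.concatenate([faces[:, 0], faces[:, 1], faces[:, 2]])
    j = np.concatenate([faces[:, 1], faces[:, 2], faces[:, 0]])
    adj = sp.coo_matrix((np.ones(len(i)), (i, j)), shape=(n_vertices, n_vertices))
    adj = ((adj + adj.T) > 0).astype(float).tocsr()

    # Spectral clustering on the precomputed adjacency uses the graph Laplacian
    # internally, so the grouping depends on connectivity rather than a particular
    # triangulation of the surface.
    sc = SpectralClustering(n_clusters=n_clusters, affinity="precomputed",
                            assign_labels="kmeans", random_state=0)
    return sc.fit_predict(adj)
```

Repeating the clustering with fewer clusters on the resulting patch graph would give the coarser levels of such a hierarchy.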

7.
IEEE Trans Vis Comput Graph ; 27(1): 151-164, 2021 Jan.
Article in English | MEDLINE | ID: mdl-31329121

ABSTRACT

Recently, effort has been made to apply deep learning to the detection of mesh saliency. However, one major barrier is to collect a large amount of vertex-level annotation as saliency ground truth for training the neural networks. Quite a few pilot studies showed that this task is difficult. In this work, we solve this problem by developing a novel network trained in a weakly supervised manner. The training is end-to-end and does not require any saliency ground truth but only the class membership of meshes. Our Classification-for-Saliency CNN (CfS-CNN) employs a multi-view setup and contains a newly designed two-channel structure which integrates view-based features of both classification and saliency. It essentially transfers knowledge from 3D object classification to mesh saliency. Our approach significantly outperforms the existing state-of-the-art methods according to extensive experimental results. Also, the CfS-CNN can be directly used for scene saliency. We showcase two novel applications based on scene saliency to demonstrate its utility.

8.
IEEE Trans Pattern Anal Mach Intell ; 43(10): 3462-3475, 2021 10.
Article in English | MEDLINE | ID: mdl-32310761

ABSTRACT

Despite significant effort and the notable success of neural style transfer, it remains challenging for highly abstract styles, in particular line drawings. In this paper, we propose APDrawingGAN++, a generative adversarial network (GAN) for transforming face photos into artistic portrait drawings (APDrawings), which addresses substantial challenges including the highly abstract style, different drawing techniques for different facial features, and high perceptual sensitivity to artifacts. To address these, we propose a composite GAN architecture that consists of local networks (to learn effective representations for specific facial features) and a global network (to capture the overall content). We provide a theoretical explanation for the necessity of this composite GAN structure by proving that any GAN with a single generator cannot generate artistic styles like APDrawings. We further introduce a classification-and-synthesis approach for lips and hair, where different drawing styles are used by artists, which applies suitable styles to a given input. To capture the highly abstract art form inherent in APDrawings, we address two challenging operations, (1) coping with lines with small misalignments while penalizing large discrepancies and (2) generating more continuous lines, by introducing two novel loss terms: one is a novel distance transform loss with nonlinear mapping and the other is a novel line continuity loss, both of which improve line quality. We also develop dedicated data augmentation and pre-training to further improve results. Extensive experiments, including a user study, show that our method outperforms state-of-the-art methods, both qualitatively and quantitatively.

9.
Opt Express ; 18(14): 14730-44, 2010 Jul 05.
Article in English | MEDLINE | ID: mdl-20639959

ABSTRACT

A novel statistical model based on texture and shape is developed for fully automatic intraretinal layer segmentation of normal retinal tomograms obtained by a commercial 800 nm optical coherence tomography (OCT) system. While existing algorithms often fail dramatically due to strong speckle noise, non-optimal imaging conditions, shadows and other artefacts, the accuracy of the novel algorithm deteriorates only slowly as the difficulty of the segmentation task progressively increases. Evaluation against a large set of manual segmentations shows unprecedented robustness, even in the presence of additional strong speckle noise, with the dynamic range tested down to 12 dB, enabling segmentation of almost all intraretinal layers in cases previously inaccessible to existing algorithms. For the first time, an error measure is computed from a large, representative manually segmented data set (466 B-scans from 17 eyes, segmented twice by different operators) and compared to the automatic segmentation, with a difference of only 2.6% against the inter-observer variability.


Subject(s)
Algorithms, Fovea Centralis/anatomy & histology, Models, Statistical, Humans, Principal Component Analysis, Tomography, Optical Coherence
10.
IEEE Trans Pattern Anal Mach Intell ; 42(6): 1394-1407, 2020 Jun.
Article in English | MEDLINE | ID: mdl-30762528

ABSTRACT

In this paper we develop a family of shape measures. All the measures in the family evaluate the degree to which a shape looks like a predefined convex polygon. A rather new approach to designing object-shape-based measures is applied. In most cases, such measures were defined by exploiting some shape property: the property is optimized (e.g., maximized or minimized) by certain shapes, and the new shape measure is defined on that basis. An illustrative example is the shape circularity measure, derived from the well-known result that the circle has the largest area among all shapes with the same perimeter. There are many more such examples (e.g., the ellipticity, linearity, elongation, and squareness measures), and there are different approaches as well. In the approach applied here, no desired property is needed and no optimizing shape has to be found. We start from a desired convex polygon and develop the related shape measure. The method also allows a tuning parameter. Thus, there is a new two-fold family of shape measures, dependent on a predefined convex polygon and on a tuning parameter that controls the measure's behavior. The measures obtained range over the interval (0,1] and reach the maximal possible value, equal to 1, if and only if the measured shape coincides with the selected convex polygon used to develop the particular measure. All the measures are invariant with respect to translation, rotation, and scaling. An extension of the method leads to a family of new shape convexity measures.
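By way of contrast with the property-free construction described above, the classical circularity measure cited in the abstract takes only a few lines; the polygonal boundary representation used here is an assumption made for illustration.

```python
import numpy as np

def circularity(polygon):
    """Classical circularity 4*pi*A / P**2 for a closed polygon.

    `polygon` is an (n, 2) array of boundary points in order. By the isoperimetric
    inequality the value lies in (0, 1], and 1 is approached only by shapes that
    converge to a circle.
    """
    x, y = polygon[:, 0], polygon[:, 1]
    # Shoelace formula for the enclosed area.
    area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
    # Perimeter as the sum of edge lengths, including the closing edge.
    edges = np.diff(np.vstack([polygon, polygon[:1]]), axis=0)
    perimeter = np.sum(np.linalg.norm(edges, axis=1))
    return 4 * np.pi * area / perimeter ** 2
```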

11.
IEEE Trans Vis Comput Graph ; 26(6): 2204-2218, 2020 06.
Article in English | MEDLINE | ID: mdl-30530330

ABSTRACT

An importance measure of 3D objects inspired by human perception has a range of applications, since people want computers to behave like humans in many tasks. This paper revisits a well-defined measure, the distinction of a 3D surface mesh, which indicates how important a region of a mesh is with respect to classification. We develop a method to compute it based on a classification network and a Markov Random Field (MRF). The classification network learns view-based distinction by handling multiple views of a 3D object. Using a classification network has the advantage of avoiding the training-data problem, which has become a major obstacle to applying deep learning to 3D object understanding tasks. The MRF estimates the parameters of a linear model for combining the view-based distinction maps. Experiments using several publicly accessible datasets show that the distinctive regions detected by our method are not only significantly different from those detected by methods based on handcrafted features, but also more consistent with human perception. We also compare it with other perceptual measures and quantitatively evaluate its performance in the context of two applications. Furthermore, due to the view-based nature of our method, we can easily extend mesh distinction to 3D scenes containing multiple objects.

12.
Article in English | MEDLINE | ID: mdl-32203021

ABSTRACT

Mesh color edit propagation aims to propagate the color from a few color strokes to the whole mesh, which is useful for mesh colorization, color enhancement and color editing. Compared with image edit propagation, luminance information is not available for 3D mesh data, so color edit propagation is more difficult on 3D meshes than on images, and far less research has been carried out. This paper proposes a novel solution based on sparse graph regularization. First, a few color strokes are interactively drawn by the user; the color is then propagated to the whole mesh by minimizing a sparse graph regularized nonlinear energy function. The proposed method effectively measures geometric similarity over shapes using a set of complementary multiscale feature descriptors, and effectively controls color bleeding via a sparse ℓ1 optimization rather than the quadratic minimization used in existing work. The proposed framework can be applied to interactive mesh colorization, mesh color enhancement and mesh color editing. Extensive qualitative and quantitative experiments show that the proposed method outperforms the state-of-the-art methods.

13.
IEEE Trans Pattern Anal Mach Intell ; 42(6): 1537-1544, 2020 Jun.
Article in English | MEDLINE | ID: mdl-31056488

ABSTRACT

Finding the informative subspaces of high-dimensional datasets is at the core of numerous applications in computer vision, where spectral-based subspace clustering is arguably the most widely studied method due to its strong empirical performance. Such algorithms first compute an affinity matrix to construct a self-representation for each sample using other samples as a dictionary. Sparsity and connectivity of the self-representation play important roles in effective subspace clustering. However, simultaneous optimization of both factors is difficult due to their conflicting nature, and most existing methods are designed to address only one factor. In this paper, we propose a post-processing technique to optimize both sparsity and connectivity by finding good neighbors. Good neighbors induce key connections among samples within a subspace and not only have large affinity coefficients but are also strongly connected to each other. We reassign the coefficients of the good neighbors and eliminate other entries to generate a new coefficient matrix. We show that the few good neighbors can effectively recover the subspace, and the proposed post-processing step of finding good neighbors is complementary to most existing subspace clustering algorithms. Experiments on five benchmark datasets show that the proposed algorithm performs favorably against the state-of-the-art methods with negligible additional computation cost.
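One loose, assumption-laden reading of this kind of post-processing (not the published selection rule) is to keep only coefficients that are both large and reciprocated, then renormalize, as sketched below.

```python
import numpy as np

def keep_good_neighbors(C, k=5):
    """Post-process a self-representation matrix C (n, n).

    Keeps, for each sample, coefficients of neighbors that are in its top-k by
    magnitude and that also list it in their own top-k; all other entries are
    zeroed. This is an illustrative simplification of "good neighbor" selection.
    """
    n = C.shape[0]
    A = np.abs(C)
    np.fill_diagonal(A, 0.0)
    topk = np.argsort(-A, axis=1)[:, :k]            # each row's k largest entries
    mask = np.zeros_like(A, dtype=bool)
    rows = np.repeat(np.arange(n), k)
    mask[rows, topk.ravel()] = True
    mutual = mask & mask.T                          # keep only reciprocated links
    C_new = np.where(mutual, C, 0.0)
    # Renormalize rows so each sample is still represented by its kept neighbors.
    norms = np.linalg.norm(C_new, axis=1, keepdims=True)
    return C_new / np.maximum(norms, 1e-12)
```

The pruned coefficient matrix would then be symmetrized and passed to spectral clustering in the usual way.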

14.
PLoS One ; 15(9): e0239840, 2020.
Article in English | MEDLINE | ID: mdl-32970775

ABSTRACT

The association between alcohol outlets and violence has long been recognised and is commonly used to inform policing and licensing policies (such as staggered closing times and zoning). Less investigated, however, is the association between violent crime and other urban points of interest which, while associated with the city-centre alcohol consumption economy, are not explicitly alcohol outlets. Here, machine learning (specifically, LASSO regression) is used to model the distribution of violent crime for the central 9 km² of ten large UK cities. Densities of 620 different point-of-interest types (sourced from Ordnance Survey) are used as predictors, with the 10 most explanatory variables being automatically selected for each city. Cross-validation is used to test the generalisability of each model. Results show that including additional point-of-interest types produces a more accurate model, with significant increases in performance over a baseline univariate alcohol-outlet-only model. Analysis of the variables chosen for city-specific models suggests candidates for new strategies on a per-city basis, while the variables of the combined model show the general trend in the association between points of interest and violence across the UK. Although alcohol outlets remain the best individual predictor of violence, other points of interest should also be considered when modelling the distribution of violence in city centres. The presented method could be used to develop targeted, city-specific initiatives that go beyond alcohol outlets and also consider other locations.
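A minimal sketch of the modelling step described above, using scikit-learn's cross-validated LASSO, is shown below; the DataFrame layout and column names are placeholders rather than the Ordnance Survey schema used in the study.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

def fit_city_model(df, target="violent_crime_count", n_top=10):
    """Fit a cross-validated LASSO of violent crime on POI densities.

    `df`: one row per grid cell, with POI-density columns plus a crime-count
    column. Returns the fitted model and the n_top predictors by |coefficient|.
    """
    features = df.drop(columns=[target])
    X = StandardScaler().fit_transform(features.to_numpy(dtype=float))  # common scale
    y = df[target].to_numpy(dtype=float)
    model = LassoCV(cv=5, random_state=0).fit(X, y)
    coefs = pd.Series(model.coef_, index=features.columns)
    top = coefs.abs().sort_values(ascending=False).head(n_top)
    return model, coefs.loc[top.index]
```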


Asunto(s)
Crimen/estadística & datos numéricos , Población Urbana/estadística & datos numéricos , Ciudades/estadística & datos numéricos , Crimen/clasificación , Vivienda/estadística & datos numéricos , Humanos , Restaurantes/estadística & datos numéricos , Análisis Espacio-Temporal , Reino Unido
15.
IEEE Trans Neural Netw Learn Syst ; 31(8): 2832-2846, 2020 08.
Article in English | MEDLINE | ID: mdl-31199274

ABSTRACT

Class imbalance is a challenging problem in many classification tasks. It induces biased classification results for minority classes that contain fewer training samples than others. Most existing approaches aim to remedy the imbalanced number of instances among categories by resampling the majority and minority classes accordingly. However, the imbalance in the difficulty of recognizing different categories is also crucial, especially when distinguishing samples among many classes. For example, in the task of clinical skin disease recognition, several rare diseases have a small number of training samples but are easy to diagnose because of their distinct visual properties. On the other hand, some common skin diseases, e.g., eczema, are hard to recognize due to the lack of special symptoms. To address this problem, we propose a self-paced balance learning (SPBL) algorithm in this paper. Specifically, we introduce a comprehensive metric termed the complexity of image category, which combines both sample number and recognition difficulty. First, the complexity is initialized using the model of the first pace, where a pace denotes one iteration in the self-paced learning paradigm. We then assign each class a penalty weight that is larger for more complex categories and smaller for easier ones, after which the curriculum is reconstructed by rearranging the training samples. Consequently, the model can iteratively learn discriminative representations by balancing the complexity in each pace. Experimental results on the SD-198 and SD-260 benchmark data sets demonstrate that the proposed SPBL algorithm performs favorably against the state-of-the-art methods. We also demonstrate the SPBL algorithm's generalization capacity on other tasks, such as indoor scene image recognition and object classification.
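The abstract describes the complexity of an image category as a combination of sample number and recognition difficulty. The snippet below is a loose numerical sketch of such a weighting; the blending rule, constants and normalization are assumptions, not the published definition.

```python
import numpy as np

def class_complexity(sample_counts, error_rates, alpha=0.5):
    """Combine scarcity and difficulty into one per-class complexity score.

    sample_counts: number of training images per class.
    error_rates: current per-class error of the model (a proxy for difficulty).
    alpha blends the two terms; both are scaled into [0, 1] first.
    """
    counts = np.asarray(sample_counts, dtype=float)
    errors = np.asarray(error_rates, dtype=float)
    scarcity = 1.0 - counts / counts.max()           # fewer samples -> higher score
    difficulty = errors / max(errors.max(), 1e-12)   # higher error -> higher score
    return alpha * scarcity + (1.0 - alpha) * difficulty

def class_penalty_weights(complexity):
    """Larger penalty weights for more complex classes, normalized to mean 1."""
    w = 1.0 + np.asarray(complexity, dtype=float)
    return w / w.mean()
```

Such weights could then scale a per-class loss while the curriculum is re-ordered at each pace.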


Asunto(s)
Algoritmos , Aprendizaje Automático , Reconocimiento de Normas Patrones Automatizadas/métodos , Enfermedades de la Piel/diagnóstico , Bases de Datos Factuales/estadística & datos numéricos , Humanos , Reconocimiento de Normas Patrones Automatizadas/estadística & datos numéricos
16.
Biodivers Data J ; 8: e47051, 2020.
Article in English | MEDLINE | ID: mdl-32269476

ABSTRACT

Digitisation of natural history collections has evolved from creating databases for recording specimens' catalogue and label data to including digital images of specimens. This has been driven by several important factors, such as the need to increase global accessibility to specimens and to preserve the original specimens by limiting their manual handling. The size of the collections pointed to the need for high-throughput digitisation workflows. However, digital imaging of large numbers of fragile specimens is an expensive and time-consuming process that should be performed only once. To achieve this, the digital images produced need to be useful for the largest possible set of applications and have a potentially unlimited shelf life. The constraints on digitisation speed need to be balanced against the applicability and longevity of the images, which, in turn, depend directly on the quality of those images. As a result, the quality criteria that specimen images need to fulfil influence the design, implementation and execution of digitisation workflows. Different standards and guidelines for producing quality research images from specimens have been proposed; however, their actual adaptation to suit the needs of different types of specimens requires further analysis. This paper presents the digitisation workflow implemented by Meise Botanic Garden (MBG). This workflow is relevant because of its modular design, its strong focus on image quality assessment, its flexibility in combining in-house and outsourced digitisation, processing, preservation and publishing facilities, and its capacity to evolve by integrating alternative components from different sources. The design and operation of the digitisation workflow are described to show how the workflow was derived, with particular attention to its built-in audit trail, which ensures the scalable production of high-quality specimen images and ensures that new modules affect neither the speed of imaging nor the quality of the images produced.

17.
IEEE Trans Vis Comput Graph ; 15(4): 642-53, 2009.
Article in English | MEDLINE | ID: mdl-19423888

ABSTRACT

An algorithm is presented to automatically generate bas-reliefs based on adaptive histogram equalization (AHE), starting from an input height field. A mesh model may alternatively be provided, in which case a height field is first created via orthogonal or perspective projection. The height field is regularly gridded and treated as an image, enabling a modified AHE method to be used to generate a bas-relief with a user-chosen height range. We modify the original image-contrast-enhancement AHE method to also use gradient weights, which enhance the shape features of the bas-relief. To effectively compress the height field, we limit the height-dependent scaling factors used to compute relative height variations in the output from height variations in the input; this prevents any height difference from having too great an effect. Results of AHE over different neighborhood sizes are averaged to preserve information at different scales in the resulting bas-relief. Compared to previous approaches, the proposed algorithm is simple and yet largely preserves original shape features. Experiments show that our results are, in general, comparable to and in some cases better than the best previously published methods.
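The core idea, treating the gridded height field as an image and averaging adaptive histogram equalization results over several neighborhood sizes, can be approximated with scikit-image as sketched below; the gradient weighting and the height-dependent scaling limits described in the abstract are not reproduced here.

```python
import numpy as np
from skimage import exposure

def bas_relief_from_heights(height_field, kernel_sizes=(16, 32, 64),
                            clip_limit=0.02, output_range=1.0):
    """Compress a height field into a bas-relief via multi-scale AHE.

    height_field: 2D float array of heights. Equalization results over several
    neighborhood sizes are averaged, loosely following the multi-scale averaging
    described in the abstract.
    """
    h = np.asarray(height_field, dtype=float)
    h = (h - h.min()) / max(h.max() - h.min(), 1e-12)   # normalize to [0, 1]
    equalized = [
        exposure.equalize_adapthist(h, kernel_size=k, clip_limit=clip_limit)
        for k in kernel_sizes
    ]
    relief = np.mean(equalized, axis=0)
    return relief * output_range                          # rescale to the target depth
```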

18.
Article in English | MEDLINE | ID: mdl-31034411

ABSTRACT

Given a reference colour image and a destination grayscale image, this paper presents a novel automatic colourisation algorithm that transfers colour information from the reference image to the destination image. Since the reference and destination images may contain content at different or even varying scales (due to changes in distance between objects and the camera), existing texture-matching-based methods can often perform poorly. We propose a novel cross-scale texture matching method to improve the robustness and quality of the colourisation results. Suitable matching scales are considered locally and then fused using a global optimisation that minimises both the matching errors and the spatial change of scales. The minimisation is efficiently solved using a multi-label graph-cut algorithm. Since only low-level texture features are used, texture-matching-based colourisation can still produce semantically incorrect results, such as a meadow appearing above the sky. We consider a class of semantic violation in which the statistics of up-down relationships learnt from the reference image are violated, and propose an effective method to identify and correct unreasonable colourisation. Finally, a novel nonlocal ℓ1 optimisation framework is developed to propagate high-confidence micro-scribbles to regions of lower confidence to produce a fully colourised image. Qualitative and quantitative evaluations show that our method outperforms several state-of-the-art methods.

19.
IEEE Trans Image Process ; 28(8): 3973-3985, 2019 Aug.
Article in English | MEDLINE | ID: mdl-30843836

ABSTRACT

In this paper, we propose a unified framework to discover the number of clusters and group the data points into different clusters using subspace clustering simultaneously. Real data distributed in a high-dimensional space can be disentangled into a union of low-dimensional subspaces, which can benefit various applications. To explore such intrinsic structure, state-of-the-art subspace clustering approaches often optimize a self-representation problem among all samples, to construct a pairwise affinity graph for spectral clustering. However, a graph with pairwise similarities lacks robustness for segmentation, especially for samples which lie on the intersection of two subspaces. To address this problem, we design a hyper-correlation-based data structure termed as the triplet relationship, which reveals high relevance and local compactness among three samples. The triplet relationship can be derived from the self-representation matrix, and be utilized to iteratively assign the data points to clusters. Based on the triplet relationship, we propose a unified optimizing scheme to automatically calculate clustering assignments. Specifically, we optimize a model selection reward and a fusion reward by simultaneously maximizing the similarity of triplets from different clusters while minimizing the correlation of triplets from the same cluster. The proposed algorithm also automatically reveals the number of clusters and fuses groups to avoid over-segmentation. Extensive experimental results on both synthetic and real-world datasets validate the effectiveness and robustness of the proposed method.

20.
Emotion ; 19(4): 746-750, 2019 Jun.
Article in English | MEDLINE | ID: mdl-30080075

ABSTRACT

Recent research has linked facial expressions to mind perception. Specifically, Bowling and Banissy (2017) found that ambiguous doll-human morphs were judged as more likely to have a mind when smiling. Herein, we investigate 3 key potential boundary conditions of this "expression-to-mind" effect. First, we demonstrate that face inversion impairs the ability of happy expressions to signal mindful states in static faces; however, inversion does not disrupt this effect for dynamic displays of emotion. Finally, we demonstrate that not all emotions have equivalent effects. Whereas happy faces generate more mind ascription compared to neutral faces, we find that expressions of disgust actually generate less mind ascription than those of happiness. (PsycINFO Database Record (c) 2019 APA, all rights reserved).


Subject(s)
Emotions/physiology, Facial Expression, Adult, Female, Humans, Male, Young Adult