Results 1 - 6 of 6
1.
J Anat ; 243(2): 274-283, 2023 Aug.
Article in English | MEDLINE | ID: mdl-36943032

ABSTRACT

The effects of sex on human facial morphology have been widely documented. Because sexual dimorphism is relevant to a variety of scientific and applied disciplines, it is imperative to have a complete and accurate account of how and where male and female faces differ. We apply a comprehensive facial phenotyping strategy to a large set of existing 3D facial surface images. We investigate facial sexual dimorphism in terms of size, shape, and shape variance. We also assess the ability to correctly assign sex based on shape, both for the whole face and for subregions. We applied a predefined data-driven segmentation to partition the 3D facial surfaces of 2446 adults into 63 hierarchically linked regions, ranging from global (whole face) to highly localized subparts. Each facial region was then analyzed with spatially dense geometric morphometrics. To describe the major modes of shape variation, principal components analysis was applied to the Procrustes aligned 3D points comprising each of the 63 facial regions. Both nonparametric and permutation-based statistics were then used to quantify the facial size and shape differences and visualizations were generated. Males were significantly larger than females for all 63 facial regions. Statistically significant sex differences in shape were also seen in all regions and the effects tended to be more pronounced for the upper lip and forehead, with more subtle changes emerging as the facial regions became more granular. Males also showed greater levels of shape variance, with the largest effect observed for the central forehead. Classification accuracy was highest for the full face (97%), while most facial regions showed an accuracy of 75% or greater. In summary, sex differences in both size and shape were present across every part of the face. By breaking the face into subparts, some shape differences emerged that were not apparent when analyzing the face as a whole. 
The increase in facial shape variance suggests possible evolutionary origins and may offer insights for understanding congenital facial malformations. Our classification results indicate that a high degree of accuracy is possible with only parts of the face, which may have implications for biometrics applications.
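The core pipeline above (Procrustes alignment of dense 3D point configurations, then PCA to extract the major modes of shape variation) can be sketched compactly. This is a minimal illustration with toy data, not the study's code; the function name, point count, and sample size are invented for the example.

```python
# Sketch: Procrustes alignment (translation, scale, rotation removed) followed
# by PCA on the aligned coordinates. Toy data stand in for the 3D face scans.
import numpy as np

def procrustes_align(shape, reference):
    """Align one (n_points, 3) configuration to a reference (Kabsch rotation)."""
    X = shape - shape.mean(axis=0)            # remove translation
    Y = reference - reference.mean(axis=0)
    X = X / np.linalg.norm(X)                 # remove scale (unit centroid size)
    Y = Y / np.linalg.norm(Y)
    U, _, Vt = np.linalg.svd(X.T @ Y)         # optimal rotation
    d = np.sign(np.linalg.det(U @ Vt))        # guard against reflections
    return X @ U @ np.diag([1.0, 1.0, d]) @ Vt

rng = np.random.default_rng(0)
reference = rng.normal(size=(50, 3))          # toy "mean face" of 50 points
shapes = [reference + 0.05 * rng.normal(size=(50, 3)) for _ in range(20)]
aligned = np.stack([procrustes_align(s, reference) for s in shapes])

# PCA on flattened aligned coordinates: components are the shape modes.
flat = aligned.reshape(len(aligned), -1)
flat = flat - flat.mean(axis=0)
_, svals, components = np.linalg.svd(flat, full_matrices=False)
explained = svals**2 / np.sum(svals**2)       # variance explained per mode
```

In the study this analysis is repeated for each of the 63 facial regions, with sex differences then tested on the resulting principal-component scores.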


Subject(s)
Face , Lip , Adult , Humans , Female , Male , Face/anatomy & histology , Lip/anatomy & histology , Imaging, Three-Dimensional/methods , Sex Characteristics
2.
IEEE Trans Biom Behav Identity Sci ; 4(2): 163-172, 2022 Apr.
Article in English | MEDLINE | ID: mdl-36338273

ABSTRACT

Face recognition is a widely accepted biometric identifier, as the face contains a lot of information about the identity of a person. The goal of this study is to match the 3D face of an individual to a set of demographic properties (sex, age, BMI, and genomic background) that are extracted from unidentified genetic material. We introduce a triplet loss metric learner that compresses facial shape into a lower dimensional embedding while preserving information about the property of interest. The metric learner is trained for multiple facial segments to allow a global-to-local part-based analysis of the face. To learn directly from 3D mesh data, spiral convolutions are used along with a novel mesh-sampling scheme, which retains uniformly sampled points at different resolutions. The capacity of the model for establishing identity from facial shape against a list of probe demographics is evaluated by enrolling the embeddings for all properties into a support vector machine classifier or regressor and then combining them using a naive Bayes score fuser. Results obtained by a 10-fold cross-validation for biometric verification and identification show that part-based learning significantly improves the system's performance, whether the facial shape is encoded with our geometric metric learner or with principal component analysis.
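The triplet loss at the heart of the metric learner can be written in a few lines. This is a generic sketch of the standard triplet margin loss, not the paper's spiral-convolution network; the function name, margin value, and toy embeddings are illustrative.

```python
# Sketch of a triplet margin loss on embedding vectors: pull an anchor toward
# a "positive" example sharing the property of interest, push it away from a
# "negative" example that differs, up to a margin.
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Mean hinge loss over a batch of (anchor, positive, negative) triplets."""
    d_pos = np.linalg.norm(anchor - positive, axis=-1)  # same-property distance
    d_neg = np.linalg.norm(anchor - negative, axis=-1)  # different-property distance
    return float(np.maximum(d_pos - d_neg + margin, 0.0).mean())
```

Minimizing this loss makes embedding distance reflect the target property (e.g. sex or age group), so a simple classifier on the embeddings, as in the SVM-plus-score-fusion evaluation above, can recover it.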

3.
Orthod Craniofac Res ; 24 Suppl 2: 134-143, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34310057

ABSTRACT

OBJECTIVES: Palatal shape contains a lot of information that is of clinical interest. Moreover, palatal shape analysis can be used to guide or evaluate orthodontic treatments. A statistical shape model (SSM) is a tool that, by means of dimensionality reduction, aims at compactly modeling the variance of complex shapes for efficient analysis. In this report, we evaluate several competing approaches to constructing SSMs for the human palate. SETTING AND SAMPLE POPULATION: This study used a sample comprising digitized 3D maxillary dental casts from 1,324 individuals. MATERIALS AND METHODS: Principal component analysis (PCA) and autoencoders (AE) are popular approaches to construct SSMs. PCA is a dimension reduction technique that provides a compact description of shapes by uncorrelated variables. AEs are situated in the field of deep learning and provide a non-linear framework for dimension reduction. This work introduces the singular autoencoder (SAE), a hybrid approach that combines the most important properties of PCA and AEs. We assess the performance of the SAE using standard evaluation tools for SSMs, including accuracy, generalization, and specificity. RESULTS: We found that the SAE obtains equivalent results to PCA and AEs for all evaluation metrics. SAE scores were found to be uncorrelated and provided an optimally compact representation of the shapes. CONCLUSION: We conclude that the SAE is a promising tool for 3D palatal shape analysis, which effectively combines the power of PCA with the flexibility of deep learning. This opens future AI driven applications of shape analysis in orthodontics and other related clinical disciplines.
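The SAE itself is specific to this paper, but its PCA ingredient can be expressed as a linear autoencoder: encoding projects a shape onto the principal axes, decoding maps the scores back, and reconstruction error corresponds to the "accuracy" criterion mentioned above. The toy data and the number of retained components below are placeholders.

```python
# PCA written as a linear autoencoder over toy flattened palatal shapes:
# encode = project onto principal axes, decode = reconstruct from scores.
import numpy as np

rng = np.random.default_rng(1)
shapes = rng.normal(size=(100, 30))   # toy stand-in for flattened 3D shape vectors
mean = shapes.mean(axis=0)
_, _, Vt = np.linalg.svd(shapes - mean, full_matrices=False)
W = Vt[:5]                            # keep the 5 leading principal axes

def encode(x):
    """Shape vector(s) -> low-dimensional, uncorrelated scores."""
    return (x - mean) @ W.T

def decode(z):
    """Scores -> reconstructed shape vector(s)."""
    return z @ W + mean

recon = decode(encode(shapes))
err = float(np.mean((shapes - recon) ** 2))   # reconstruction accuracy
```

The PCA scores are mutually uncorrelated by construction, which is the property the abstract reports the SAE preserves while adding the non-linear flexibility of a deep autoencoder.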


Subject(s)
Deep Learning , Orthodontics , Humans , Maxilla , Models, Statistical , Palate
4.
Orthod Craniofac Res ; 24 Suppl 2: 144-152, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34169645

ABSTRACT

OBJECTIVES: To develop and evaluate a geometric deep-learning network to automatically place seven palatal landmarks on digitized maxillary dental casts. SETTINGS AND SAMPLE POPULATION: The sample comprised individuals with permanent dentition of various ethnicities. The network was trained from manual landmark annotations on 732 dental casts and evaluated on 104 dental casts. MATERIALS AND METHODS: A geometric deep-learning network was developed to hierarchically learn features from point-clouds representing the 3D surface of each cast. These features predict the locations of seven palatal landmarks. RESULTS: Repeat-measurement reliability was <0.3 mm for all landmarks on all casts. Accuracy is promising. The proportion of test subjects with errors less than 2 mm was between 0.93 and 0.68, depending on the landmark. Unusually shaped and large palates generate the highest errors. There was no evidence for a difference in mean palatal shape estimated from manual compared to the automatic landmarking. The automatic landmarking reduces sample variation around the mean and reduces measurements of palatal size. CONCLUSIONS: The automatic landmarking method shows excellent repeatability and promising accuracy, which can streamline patient assessment and research studies. However, landmark indications should be subject to visual quality control.
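The evaluation reported above reduces to per-landmark Euclidean error between automatic and manual 3D landmarks, and the proportion of casts with error under 2 mm. A minimal sketch, with synthetic placeholder coordinates rather than real cast data:

```python
# Sketch of the accuracy evaluation: Euclidean error (mm) per landmark between
# predicted and manual positions, and the per-landmark rate of errors < 2 mm.
import numpy as np

rng = np.random.default_rng(2)
manual = rng.uniform(0, 40, size=(104, 7, 3))          # 104 test casts, 7 landmarks, mm
predicted = manual + rng.normal(scale=0.8, size=manual.shape)  # toy predictions

errors = np.linalg.norm(predicted - manual, axis=-1)   # (104, 7) distances in mm
under_2mm = (errors < 2.0).mean(axis=0)                # per-landmark success rate
```

In the study these success rates ranged from 0.68 to 0.93 depending on the landmark; the sketch only shows how such a table is computed.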


Subject(s)
Deep Learning , Humans , Imaging, Three-Dimensional , Maxilla , Palate , Reproducibility of Results
5.
Autism Res ; 14(7): 1404-1420, 2021 Jul.
Article in English | MEDLINE | ID: mdl-33704930

ABSTRACT

Difficulties in automatic emotion processing in individuals with autism spectrum disorder (ASD) might remain concealed in behavioral studies due to compensatory strategies. To gain more insight into the mechanisms underlying facial emotion recognition, we recorded eye tracking and facial mimicry data of 20 school-aged boys with ASD and 20 matched typically developing controls while performing an explicit emotion recognition task. Proportional looking times to specific face regions (eyes, nose, and mouth) and face exploration dynamics were analyzed. In addition, facial mimicry was assessed. Boys with ASD and controls were equally capable of recognizing expressions and did not differ in proportional looking times, or in the number and duration of fixations. Yet, specific facial expressions elicited particular gaze patterns, especially within the control group. Both groups showed similar face scanning dynamics, although boys with ASD demonstrated smaller saccadic amplitudes. Regarding facial mimicry, we found no emotion-specific facial responses and no group differences in the responses to the displayed facial expressions. Our results indicate that boys with and without ASD employ similar eye gaze strategies to recognize facial expressions. Smaller saccadic amplitudes in boys with ASD might indicate a less exploratory face processing strategy. Yet, this slightly more persistent visual scanning behavior in boys with ASD does not imply less efficient emotion information processing, given the similar behavioral performance. Results on the facial mimicry data indicate similar facial responses to emotional faces in boys with and without ASD. LAY SUMMARY: We investigated (i) whether boys with and without autism apply different face exploration strategies when recognizing facial expressions and (ii) whether they mimic the displayed facial expression to a similar extent.
We found that boys with and without ASD recognize facial expressions equally well, and that both groups show similar facial reactions to the displayed facial emotions. Yet, boys with ASD visually explored the faces slightly less than the boys without ASD.
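The proportional-looking-time measure used above is simply each area of interest's total fixation duration divided by total fixation time on the face. A minimal sketch with invented fixation data (the AOI labels follow the abstract; durations are placeholders):

```python
# Sketch: proportional looking times per area of interest (AOI) from a list
# of (AOI, duration in ms) fixations. Data are invented for illustration.
fixations = [
    ("eyes", 310), ("nose", 120), ("mouth", 220),
    ("eyes", 450), ("mouth", 180), ("nose", 90),
]

totals = {}
for aoi, dur in fixations:
    totals[aoi] = totals.get(aoi, 0) + dur

grand_total = sum(totals.values())
proportions = {aoi: dur / grand_total for aoi, dur in totals.items()}
```

Group comparisons in the study are then run on these proportions (and on saccadic amplitudes), which is how the "no difference in proportional looking times" result is obtained.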


Subject(s)
Autism Spectrum Disorder , Autistic Disorder , Facial Recognition , Autism Spectrum Disorder/complications , Autistic Disorder/complications , Child , Emotions , Eye-Tracking Technology , Facial Expression , Humans , Male
6.
J Child Psychol Psychiatry ; 61(9): 1019-1029, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32003011

ABSTRACT

BACKGROUND: Difficulties with facial expression processing may be associated with the characteristic social impairments in individuals with autism spectrum disorder (ASD). Emotional face processing in ASD has been investigated in an abundance of behavioral and EEG studies, yielding, however, mixed and inconsistent results. METHODS: We combined fast periodic visual stimulation (FPVS) with EEG to assess the neural sensitivity to implicitly detect briefly presented facial expressions among a stream of neutral faces, in 23 boys with ASD and 23 matched typically developing (TD) boys. Neutral faces with different identities were presented at 6 Hz, periodically interleaved with an expressive face (angry, fearful, happy, sad in separate sequences) every fifth image (i.e., 1.2 Hz oddball frequency). These distinguishable frequency tags for neutral and expressive stimuli allowed direct and objective quantification of the expression-categorization responses, needing only four sequences of 60 s of recording per condition. RESULTS: Both groups show equal neural synchronization to the general face stimulation and similar neural responses to happy and sad faces. However, the ASD group displays significantly reduced responses to angry and fearful faces, compared to TD boys. At the individual subject level, these neural responses allow prediction of membership of the ASD group with an accuracy of 87%. Whereas TD participants show a significantly lower sensitivity to sad faces than to the other expressions, ASD participants show an equally low sensitivity to all the expressions. CONCLUSIONS: Our results indicate an emotion-specific processing deficit, instead of a general emotion-processing problem: Boys with ASD are less sensitive than TD boys to rapidly and implicitly detect angry and fearful faces. The implicit, fast, and straightforward nature of FPVS-EEG opens new perspectives for clinical diagnosis.
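The frequency-tagging logic of FPVS-EEG can be illustrated with a synthetic signal: a response locked to the 6 Hz face stream plus a smaller 1.2 Hz oddball (expression) response shows peaks at exactly those frequencies in the amplitude spectrum of a 60 s sequence. The sampling rate and amplitudes below are assumptions, not values from the study.

```python
# Sketch of FPVS frequency tagging: read base (6 Hz) and oddball (1.2 Hz)
# response amplitudes off the FFT of a simulated 60 s EEG sequence.
import numpy as np

fs = 250                                        # assumed sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)                    # one 60 s sequence, as in the study
signal = (np.sin(2 * np.pi * 6.0 * t)           # response to the 6 Hz face stream
          + 0.3 * np.sin(2 * np.pi * 1.2 * t))  # oddball expression response

spectrum = 2 * np.abs(np.fft.rfft(signal)) / len(t)   # single-sided amplitudes
freqs = np.fft.rfftfreq(len(t), 1 / fs)

base_amp = spectrum[np.argmin(np.abs(freqs - 6.0))]   # ~1.0 here
oddball_amp = spectrum[np.argmin(np.abs(freqs - 1.2))]  # ~0.3 here
```

Because 6 Hz and 1.2 Hz fall exactly on FFT bins of a 60 s window, the two responses separate cleanly, which is what makes the quantification "direct and objective" with so little recording time.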


Subject(s)
Anger , Autism Spectrum Disorder/physiopathology , Autism Spectrum Disorder/psychology , Facial Expression , Facial Recognition , Fear , Child , Humans , Male