Results 1 - 20 of 46
1.
Sci Justice ; 64(4): 421-442, 2024 Jul.
Article in English | MEDLINE | ID: mdl-39025567

ABSTRACT

In today's biometric and commercial settings, state-of-the-art image processing relies solely on artificial intelligence and machine learning, which provide a high level of accuracy. However, these principles are deeply rooted in abstract, complex "black-box systems". When applied to forensic image identification, concerns about transparency and accountability emerge. This study explores the impact of two challenging factors in automated facial identification: facial expressions and head poses. The sample comprised 3D faces with nine prototype expressions, collected from 41 participants (13 males, 28 females) of European descent aged 19.96 to 50.89 years. Pre-processing involved converting the 3D models to 2D color images (256 × 256 px). Probes comprised a set of 9 images per individual with head poses varying by 5° in both left-to-right (yaw) and up-and-down (pitch) directions under a neutral expression. A second set of 3,610 images per individual, covering viewpoints in 5° increments from -45° to 45° for head movements combined with different facial expressions, formed the targets. Pair-wise comparisons using ArcFace, a state-of-the-art face identification algorithm, yielded 54,615,690 dissimilarity scores. Results indicate that minor head deviations in probes have minimal impact, whereas performance diminished as targets deviated from the frontal position. Right-to-left (yaw) movements were less influential than up-and-down (pitch) movements, with downward pitch showing less impact than upward pitch; the lowest accuracy was observed for upward pitch at 45°. Dissimilarity scores were consistently higher for males than for females across all studied factors, and performance diverged particularly in upward movements, starting at 15°. Among the tested facial expressions, happiness and contempt performed best, while disgust exhibited the lowest AUC values.
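The pair-wise comparison step described above can be sketched as a cosine dissimilarity between embedding vectors. This is a minimal illustration, not the study's pipeline: `probe`, `same`, and `other` are random stand-ins for ArcFace's 512-dimensional embeddings.

```python
import numpy as np

def dissimilarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine dissimilarity between two face embeddings (lower = more alike)."""
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    return float(1.0 - np.dot(a, b))

# Hypothetical 512-d embeddings standing in for ArcFace output.
rng = np.random.default_rng(0)
probe = rng.normal(size=512)
same = probe + rng.normal(scale=0.1, size=512)   # same identity, slight pose change
other = rng.normal(size=512)                     # different identity

print(dissimilarity(probe, same) < dissimilarity(probe, other))  # True
```

Comparing every probe against every target image this way is what produces the large score matrix reported in the abstract.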


Subjects
Algorithms , Automated Facial Recognition , Facial Expression , Humans , Male , Female , Adult , Automated Facial Recognition/methods , Young Adult , Middle Aged , Imaging, Three-Dimensional , Image Processing, Computer-Assisted/methods , Biometric Identification/methods , Face/anatomy & histology , Head Movements/physiology , Posture/physiology
2.
Sensors (Basel) ; 24(13)2024 Jun 26.
Article in English | MEDLINE | ID: mdl-39000930

ABSTRACT

Convolutional neural networks (CNNs) have made significant progress in facial expression recognition (FER). However, owing to challenges such as occlusion, lighting variation, and changes in head pose, FER in real-world environments remains difficult. Moreover, methods based solely on CNNs rely heavily on local spatial features, lack global information, and struggle to balance computational complexity against recognition accuracy; CNN-based models therefore still fall short of addressing FER adequately. To address these issues, we propose a lightweight facial expression recognition method based on a hybrid vision transformer. The method captures multi-scale facial features through an improved attention module, achieving richer feature integration, enhancing the network's perception of key facial expression regions, and improving feature extraction. To further enhance performance, we designed a patch dropping (PD) module that emulates the attention allocation mechanism of the human visual system for local features, guiding the network to focus on the most discriminative features, reducing the influence of irrelevant features, and lowering computational cost. Extensive experiments demonstrate that our approach significantly outperforms other methods, achieving an accuracy of 86.51% on RAF-DB and nearly 70% on FER2013, with a model size of only 3.64 MB. These results show that our method offers a new perspective for the field of facial expression recognition.
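The patch dropping idea, keeping only the most-attended patch tokens and discarding the rest, can be sketched as follows. The attention scores and keep ratio here are illustrative stand-ins, not the paper's mechanism:

```python
import numpy as np

def patch_drop(patches: np.ndarray, scores: np.ndarray, keep_ratio: float = 0.7) -> np.ndarray:
    """Keep only the k most-attended patch tokens (k = keep_ratio * n),
    preserving their original order; the rest are dropped from further layers."""
    k = max(1, int(len(patches) * keep_ratio))
    keep = np.argsort(scores)[-k:]          # indices of the k highest scores
    return patches[np.sort(keep)]           # preserve original patch order

patches = np.arange(10 * 64).reshape(10, 64).astype(float)  # 10 patch tokens, 64-d each
scores = np.array([0.9, 0.1, 0.8, 0.2, 0.7, 0.3, 0.6, 0.4, 0.5, 0.05])
kept = patch_drop(patches, scores, keep_ratio=0.5)
print(kept.shape)  # (5, 64)
```

Dropping tokens this way reduces the number of items later attention layers must process, which is where the computational saving comes from.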


Subjects
Facial Expression , Neural Networks, Computer , Humans , Automated Facial Recognition/methods , Algorithms , Image Processing, Computer-Assisted/methods , Face , Pattern Recognition, Automated/methods
3.
PLoS One ; 19(7): e0301908, 2024.
Article in English | MEDLINE | ID: mdl-38990958

ABSTRACT

Real-time security surveillance and identity matching using face detection and recognition are central research areas within computer vision. Classical face detection techniques, such as Haar-like features, MTCNN, and AdaBoost, employ template matching and geometric facial features to detect faces, striving for a balance between detection time and accuracy. To improve on this trade-off, the current research presents an enhanced FaceNet network. RetinaFace is employed to perform fast face detection and alignment, after which FaceNet, with an improved loss function, is used to achieve face verification and recognition with high accuracy. The presented work compares the proposed framework against both traditional and deep learning techniques in terms of face detection and recognition performance. The experimental findings demonstrate that the enhanced FaceNet meets real-time facial recognition requirements, with a face recognition accuracy of 99.86%, which fulfills the practical requirement. Consequently, the proposed solution holds significant potential for face detection and recognition applications in real-time security surveillance within the education sector.
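Verification of the kind described, accepting a pair when the distance between embeddings falls below a tuned threshold, can be sketched as below. The 1.1 default is a common FaceNet-style convention, not a value from this paper, and the embeddings are random stand-ins:

```python
import numpy as np

def verify(emb_a: np.ndarray, emb_b: np.ndarray, threshold: float = 1.1) -> bool:
    """Accept the pair as the same identity when the Euclidean distance
    between L2-normalised embeddings falls below the threshold."""
    a = emb_a / np.linalg.norm(emb_a)
    b = emb_b / np.linalg.norm(emb_b)
    return float(np.linalg.norm(a - b)) < threshold

rng = np.random.default_rng(1)
enrolled = rng.normal(size=128)                            # gallery embedding
probe_same = enrolled + rng.normal(scale=0.05, size=128)   # same person, new photo
probe_other = rng.normal(size=128)                         # impostor

print(verify(enrolled, probe_same), verify(enrolled, probe_other))
```

In a full pipeline, RetinaFace-style detection and alignment would run first, and the aligned crop would be embedded before this comparison.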


Subjects
Deep Learning , Humans , Face , Computer Security , Security Measures , Automated Facial Recognition/methods , Facial Recognition , Algorithms
4.
Sci Rep ; 14(1): 12763, 2024 06 04.
Article in English | MEDLINE | ID: mdl-38834661

ABSTRACT

With the continuous progress of technology, the life sciences play an increasingly important role, and the application of artificial intelligence in the medical field has attracted growing attention. Bell's palsy, a neurological ailment characterized by facial muscle weakness or paralysis, profoundly affects patients' facial expressions and masticatory abilities, inflicting considerable distress on their overall quality of life and mental well-being. In this study, we designed a facial attribute recognition model specifically for individuals with Bell's palsy. The model utilizes an enhanced SSD network and scientific computing to perform a graded assessment of the patients' condition. By replacing the VGG backbone with a more efficient one, we improved the model's accuracy and significantly reduced its computational burden. The results show that the improved SSD network achieves an average precision of 87.9% in classifying mild, moderate, and severe facial palsy, and effectively grades patients with facial palsy, with the scientific calculations further increasing classification precision. This is one of the most significant contributions of this article, providing intelligent means and objective data for future research on intelligent diagnosis and treatment as well as progressive rehabilitation.


Subjects
Bell Palsy , Humans , Bell Palsy/diagnosis , Bell Palsy/physiopathology , Neural Networks, Computer , Female , Male , Facial Expression , Adult , Artificial Intelligence , Middle Aged , Facial Paralysis/diagnosis , Facial Paralysis/physiopathology , Facial Paralysis/psychology , Facial Recognition , Automated Facial Recognition/methods
5.
PLoS One ; 19(5): e0304610, 2024.
Article in English | MEDLINE | ID: mdl-38820451

ABSTRACT

Face Morphing Attacks pose a threat to the security of identity documents, especially with respect to a subsequent access control process, because they allow both involved individuals to use the same document. Several algorithms are currently being developed to detect Morphing Attacks, often requiring large datasets of morphed face images for training. In the present study, face embeddings serve two purposes: first, to pre-select images for the subsequent large-scale generation of Morphing Attacks, and second, to detect potential Morphing Attacks. Previous studies have demonstrated the power of embeddings in both use cases. We build on these studies by adding the more powerful MagFace model to both use cases, and by performing comprehensive analyses of the role of embeddings in pre-selection and attack detection in terms of the vulnerability of face recognition systems and attack detection algorithms. In particular, we use recent developments to assess the attack potential, and also investigate the influence of morphing algorithms. For the first objective, an algorithm is developed that pairs individuals based on the similarity of their face embeddings. Different state-of-the-art face recognition systems are used to extract embeddings for pre-selecting the face images, and different morphing algorithms are used to fuse them. The attack potential of the resulting morphed face images is quantified to compare the usability of the embeddings for automatically generating large numbers of successful Morphing Attacks. For the second objective, we compare the performance of the embeddings of two state-of-the-art face recognition systems with respect to their ability to detect morphed face images. Our results demonstrate that ArcFace and MagFace provide valuable face embeddings for image pre-selection. Various open-source and commercial off-the-shelf face recognition systems are vulnerable to the generated Morphing Attacks, and their vulnerability increases when image pre-selection is based on embeddings rather than random pairing. In particular, landmark-based closed-source morphing algorithms generate attacks that pose a high risk to every tested face recognition system. Remarkably, more accurate face recognition systems show a higher vulnerability to Morphing Attacks, and among the systems tested, commercial off-the-shelf systems were the most vulnerable. In addition, MagFace embeddings stand out as a robust alternative for detecting morphed face images compared to the previously used ArcFace embeddings. The results endorse the benefits of face embeddings for more effective image pre-selection for face morphing and for more accurate detection of morphed face images, as demonstrated by extensive analysis of various designed attacks. The MagFace model is a powerful alternative to the often-used ArcFace model for attack detection and can increase performance depending on the use case. The results also highlight the usability of embeddings for generating large-scale morphed face databases for various purposes, such as training Morphing Attack Detection algorithms as a countermeasure.
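The embedding-based pre-selection of morph candidates can be sketched as greedy pairing by cosine similarity: the most alike-looking subjects are matched first. This is an illustration under stated assumptions (random stand-in embeddings, simple greedy matching), not the authors' algorithm:

```python
import numpy as np

def preselect_pairs(embeddings: np.ndarray) -> list[tuple[int, int]]:
    """Greedily pair the most similar subjects (by embedding cosine
    similarity) as candidates for face morphing."""
    embs = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = embs @ embs.T                    # pairwise cosine similarity
    np.fill_diagonal(sim, -np.inf)         # never pair a subject with itself
    pairs, used = [], set()
    # Walk all (i, j) cells from most to least similar, pairing greedily.
    for i, j in zip(*np.unravel_index(np.argsort(sim, axis=None)[::-1], sim.shape)):
        if i < j and i not in used and j not in used:
            pairs.append((int(i), int(j)))
            used.update((i, j))
    return pairs

rng = np.random.default_rng(2)
base = rng.normal(size=(3, 64))
# Six subjects: indices (0,3), (1,4), (2,5) are look-alikes by construction.
embeddings = np.vstack([base, base + rng.normal(scale=0.1, size=(3, 64))])
print(preselect_pairs(embeddings))
```

Pairing similar faces rather than random ones is precisely why the resulting morphs attack recognition systems more successfully.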


Subjects
Algorithms , Computer Security , Humans , Face , Image Processing, Computer-Assisted/methods , Automated Facial Recognition/methods , Facial Recognition
6.
Sensors (Basel) ; 24(10)2024 May 18.
Article in English | MEDLINE | ID: mdl-38794068

ABSTRACT

Most facial analysis methods perform well in standardized testing but not in real-world testing, mainly because training models cannot easily learn the variety of human features and background noise, especially for facial landmark detection and head pose estimation tasks with limited and noisy training datasets. To narrow the gap between standardized and real-world testing, we propose a pseudo-labeling technique that leverages a face recognition dataset containing diverse people and background noise; training on this pseudo-labeled dataset helps overcome the lack of diversity among the people in conventional datasets. Our integrated framework is constructed using complementary multitask learning methods to extract robust features for each task. Furthermore, combining pseudo-labeling with multitask learning improves face recognition performance by enabling the learning of pose-invariant features. Our method achieves state-of-the-art (SOTA) or near-SOTA performance on the AFLW2000-3D and BIWI datasets for facial landmark detection and head pose estimation, with competitive face verification performance on the IJB-C test dataset for face recognition. We demonstrate this through a novel testing methodology that categorizes cases as soft, medium, or hard based on the pose values of IJB-C. The proposed method achieves stable performance even when the dataset lacks diverse face identities.
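Confidence-thresholded pseudo-labeling, the generic form of the technique named above, can be sketched like this; the 0.9 threshold and the three "pose bin" classes are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def pseudo_label(probs: np.ndarray, threshold: float = 0.9):
    """Keep only samples whose teacher-model confidence exceeds the threshold
    and use the argmax class as the pseudo-label for student training."""
    conf = probs.max(axis=1)
    keep = conf >= threshold
    return np.flatnonzero(keep), probs.argmax(axis=1)[keep]

# Teacher soft-max outputs for 4 unlabeled face images over 3 pose bins.
probs = np.array([
    [0.95, 0.03, 0.02],   # confident -> pseudo-labeled as class 0
    [0.40, 0.35, 0.25],   # ambiguous -> discarded
    [0.05, 0.92, 0.03],   # confident -> pseudo-labeled as class 1
    [0.33, 0.33, 0.34],   # ambiguous -> discarded
])
idx, labels = pseudo_label(probs, threshold=0.9)
print(idx.tolist(), labels.tolist())  # [0, 2] [0, 1]
```

The retained (image, pseudo-label) pairs then augment the labeled set, which is how a face recognition dataset can supply extra training signal for landmark and pose tasks.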


Subjects
Automated Facial Recognition , Face , Head , Humans , Face/anatomy & histology , Face/diagnostic imaging , Head/diagnostic imaging , Automated Facial Recognition/methods , Algorithms , Machine Learning , Facial Recognition , Databases, Factual , Image Processing, Computer-Assisted/methods
7.
BMC Pediatr ; 24(1): 361, 2024 May 24.
Article in English | MEDLINE | ID: mdl-38783283

ABSTRACT

BACKGROUND: Noonan syndrome (NS) is a rare genetic disease, and patients who suffer from it exhibit a facial morphology characterized by a high forehead, hypertelorism, ptosis, inner epicanthal folds, down-slanting palpebral fissures, a highly arched palate, a round nasal tip, and posteriorly rotated ears. Facial analysis technology has recently been applied to identify many genetic syndromes (GSs), but few studies have investigated the identification of NS based on the facial features of the subjects. OBJECTIVES: This study develops advanced models to enhance the accuracy of diagnosis of NS. METHODS: A total of 1,892 people were enrolled in this study, including 233 patients with NS, 863 patients with other GSs, and 796 healthy children. We took one to ten frontal photos of each subject to build a dataset, and then applied the multi-task convolutional neural network (MTCNN) for data pre-processing to generate standardized outputs with five crucial facial landmarks. The ImageNet dataset was used to pre-train the networks so that they could capture generalizable features and minimize data wastage. We subsequently constructed seven models for facial identification based on the VGG16, VGG19, VGG16-BN, VGG19-BN, ResNet50, MobileNet-V2, and squeeze-and-excitation network (SENet) architectures. The identification performance of the seven models was evaluated and compared with that of six physicians. RESULTS: All models exhibited high accuracy, precision, and specificity in recognizing NS patients. The VGG19-BN model delivered the best overall performance, with an accuracy of 93.76%, precision of 91.40%, specificity of 98.73%, and F1 score of 78.34%. The VGG16-BN model achieved the highest AUC value of 0.9787, and models based on VGG architectures were superior to the others overall. The highest scores of the six physicians in terms of accuracy, precision, specificity, and F1 score were 74.00%, 75.00%, 88.33%, and 61.76%, respectively. Every facial recognition model was superior to the best physician on all metrics. CONCLUSION: Computer-assisted facial recognition models can improve the rate of diagnosis of NS, and the VGG19-BN and VGG16-BN models can play an important role in diagnosing NS in clinical practice.
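The metrics reported above (accuracy, precision, specificity, F1) all derive from a binary confusion matrix. A small sketch with hypothetical counts, not the study's data:

```python
def binary_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Accuracy, precision, specificity and F1 from a binary confusion matrix
    (positives = NS patients, negatives = everyone else)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "precision": precision,
        "specificity": tn / (tn + fp),
        "f1": 2 * precision * recall / (precision + recall),
    }

# Illustrative counts only, not the paper's confusion matrix.
m = binary_metrics(tp=40, fp=5, tn=140, fn=15)
print({k: round(v, 4) for k, v in m.items()})
```

Note how a model can pair high accuracy and specificity with a much lower F1 when positives are rare, the same pattern visible in the VGG19-BN numbers.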


Subjects
Noonan Syndrome , Humans , Noonan Syndrome/diagnosis , Child , Female , Male , Child, Preschool , Neural Networks, Computer , Infant , Adolescent , Automated Facial Recognition/methods , Diagnosis, Computer-Assisted/methods , Sensitivity and Specificity , Case-Control Studies
8.
Neural Netw ; 175: 106275, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38653078

ABSTRACT

Face Anti-Spoofing (FAS) seeks to protect face recognition systems from spoofing attacks and is applied extensively in scenarios such as access control, electronic payment, and security surveillance. Face anti-spoofing requires the integration of local details and global semantic information. Existing CNN-based methods rely on small-stride or image patch-based feature extraction structures, which struggle to capture spatial and cross-layer feature correlations effectively, while Transformer-based methods have limitations in extracting discriminative detailed features. To address these issues, we introduce a multi-stage CNN-Transformer framework that extracts local features through convolutional layers and long-distance feature relationships via self-attention. On this basis, we propose cross-attention multi-stage feature fusion, employing semantically high-stage features to query task-relevant features in low-stage features for further cross-stage feature fusion. To sharpen the discrimination of local features for subtle differences, we design pixel-wise material classification supervision and add an auxiliary branch in the intermediate layers of the model. Moreover, to address the limitations of a single acquisition environment and the scarcity of acquisition devices in existing Near-Infrared datasets, we create a large-scale Near-Infrared Face Anti-Spoofing dataset with 380k pictures of 1,040 identities. The proposed method achieves the state of the art on OULU-NPU and our Near-Infrared dataset at just 1.3 GFlops and 3.2M parameters, demonstrating its effectiveness.


Subjects
Neural Networks, Computer , Humans , Automated Facial Recognition/methods , Image Processing, Computer-Assisted/methods , Face , Computer Security , Algorithms
9.
IEEE Trans Pattern Anal Mach Intell ; 46(8): 5209-5226, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38315605

ABSTRACT

Demographic biases in source datasets have been shown to be one of the causes of unfairness and discrimination in the predictions of machine learning models. One of the most prominent types of demographic bias is statistical imbalance in the representation of demographic groups in the datasets. In this article, we study the measurement of these biases by reviewing the existing metrics, including those that can be borrowed from other disciplines. We develop a taxonomy for the classification of these metrics, providing a practical guide for selecting appropriate ones. To illustrate the utility of our framework, and to further understand the practical characteristics of the metrics, we conduct a case study of 20 datasets used in Facial Emotion Recognition (FER), analyzing the biases present in them. Our experimental results show that many metrics are redundant and that a reduced subset may be sufficient to measure the amount of demographic bias. The article provides valuable insights for researchers in AI and related fields seeking to mitigate dataset bias and improve the fairness and accuracy of AI models.
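As a concrete example of a simple representational-bias metric of the kind such a taxonomy covers, Shannon evenness scores how balanced a dataset's group counts are; this particular metric is an illustrative choice on our part, not necessarily one the article selects:

```python
import math

def shannon_evenness(counts: list[int]) -> float:
    """Evenness of demographic group representation: 1.0 means perfectly
    balanced groups, values near 0 mean one group dominates the dataset."""
    total = sum(counts)
    probs = [c / total for c in counts if c > 0]
    entropy = -sum(p * math.log(p) for p in probs)
    return entropy / math.log(len(probs)) if len(probs) > 1 else 0.0

print(shannon_evenness([100, 100, 100, 100]))  # perfectly balanced
print(shannon_evenness([970, 10, 10, 10]))     # one dominant group
```

A metric like this summarizes one dataset-level imbalance in a single number, which is what makes cross-dataset comparisons such as the 20-dataset FER case study tractable.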


Subjects
Databases, Factual , Facial Expression , Humans , Automated Facial Recognition/methods , Algorithms , Bias , Machine Learning , Image Processing, Computer-Assisted/methods , Demography , Face/anatomy & histology , Face/diagnostic imaging , Pattern Recognition, Automated/methods
10.
Comput Math Methods Med ; 2021: 7748350, 2021.
Article in English | MEDLINE | ID: mdl-34824599

ABSTRACT

The application of face detection and recognition technology in security monitoring systems has made a huge contribution to public security. Face detection is an essential first step in many face analysis systems. In complex scenes, face detection accuracy is limited by missed and false detections of small faces, caused by image quality, face scale, lighting, and other factors. In this paper, a two-level face detection model called SR-YOLOv5 is proposed to address the problem of dense small faces in real-world scenarios. The research first optimizes the backbone and loss function of YOLOv5, aiming at better performance in terms of mean average precision (mAP) and speed. Then, to improve face detection in blurred scenes or low-resolution situations, image super-resolution technology is integrated into the detection head. In addition, representative deep-learning face detection algorithms are discussed by grouping them into a few major categories, and the popular face detection benchmarks are enumerated in detail. Finally, the WIDER FACE dataset is used to train and test the SR-YOLOv5 model. Compared with the multitask convolutional neural network (MTCNN), Contextual Multi-Scale Region-based CNN (CMS-RCNN), Finding Tiny Faces (HR), Single Shot Scale-invariant Face Detector (S3FD), and TinaFace algorithms, the proposed model is verified to achieve higher detection precision, exceeding the strongest of these baselines by 0.7%, 0.6%, and 2.9%. SR-YOLOv5 can effectively use face information to accurately detect hard-to-detect face targets in complex scenes.


Subjects
Algorithms , Automated Facial Recognition/methods , Face/anatomy & histology , Neural Networks, Computer , Automated Facial Recognition/statistics & numerical data , Computational Biology , Deep Learning , Humans , Security Measures/statistics & numerical data
11.
PLoS One ; 16(10): e0257923, 2021.
Article in English | MEDLINE | ID: mdl-34648520

ABSTRACT

Facial imaging and facial recognition technologies, now common in our daily lives, are also increasingly incorporated into health care processes, enabling touch-free appointment check-in, matching patients accurately, and assisting with the diagnosis of certain medical conditions. The use, sharing, and storage of facial data are expected to expand in coming years, yet little is documented about the perspectives of patients and participants regarding these uses. We developed a pair of surveys to gather public perspectives on uses of facial images and facial recognition technologies in healthcare and in health-related research in the United States. We used Qualtrics Panels to collect responses from general public respondents using two complementary and overlapping survey instruments: one focused on six types of biometrics (including facial images and DNA) and their uses in a wide range of societal contexts (including healthcare and research), and the other focused on facial imaging, facial recognition technology, and related data practices in health and research contexts specifically. We collected responses from a diverse group of 4,048 adults in the United States (2,038 and 2,010 from the two surveys, respectively). A majority of respondents (55.5%) indicated they were equally worried about the privacy of medical records, DNA, and facial images collected for precision health research. A vignette was used to gauge willingness to participate in a hypothetical precision health study, with respondents split among willing (39.6%), unwilling (30.1%), and unsure (30.3%). Nearly one-quarter of respondents (24.8%) reported they would prefer to opt out of the DNA component of such a study, and 22.0% would prefer to opt out of both the DNA and facial imaging components. Few indicated willingness to pay a fee to opt out of the collection of their research data. Finally, respondents were offered options for ideal governance of their data: "open science", "gated science", and "closed science"; no option elicited a majority response. Our findings indicate that while a majority of research participants might be comfortable with facial images and facial recognition technologies in healthcare and health-related research, a significant fraction expressed concern for the privacy of their own face-based data, similar to privacy concerns about DNA data and medical records. A nuanced approach to uses of face-based data in healthcare and health-related research is needed, taking into consideration storage protection plans and the contexts of use.


Subjects
Automated Facial Recognition/methods , Biomedical Research/methods , Data Management/methods , Delivery of Health Care/methods , Facial Recognition , Information Dissemination/methods , Public Opinion , Adolescent , Adult , Aged , Female , Humans , Male , Medical Records , Middle Aged , Privacy , Surveys and Questionnaires , United States , Young Adult
12.
PLoS One ; 16(10): e0258672, 2021.
Article in English | MEDLINE | ID: mdl-34665834

ABSTRACT

The aim of this study was to develop and evaluate a machine vision algorithm to assess the pain level in horses, using an automatic computational classifier based on the Horse Grimace Scale (HGS) and trained by machine learning methods. Use of the Horse Grimace Scale depends on a human observer, who is usually not available to evaluate the animal for long periods and who must also be well trained in order to apply the evaluation system correctly. In addition, even with adequate training, the presence of an unknown person near an animal in pain can cause behavioral changes, making the evaluation more complex. As a possible solution, an automatic video-imaging system could monitor pain responses in horses more accurately and in real time, allowing earlier diagnosis and more efficient treatment for the affected animals. This study is based on assessment of the facial expressions of 7 horses that underwent castration, recorded by a video system positioned on top of the feeder station, capturing images at 4 distinct timepoints daily for two days before and four days after surgical castration. A labeling process was applied to build a pain facial image database, and machine learning methods were used to train the computational pain classifier. The machine vision algorithm was developed by training a Convolutional Neural Network (CNN), which achieved an overall accuracy of 75.8% when classifying pain on three levels: not present, moderately present, and obviously present. When classifying between two categories (pain not present and pain present), the overall accuracy reached 88.3%. Although improvements are needed before the system can be used in a daily routine, the model appears promising and capable of automatically measuring pain from facial expressions in video images of horses.


Subjects
Automated Facial Recognition/methods , Orchiectomy/adverse effects , Pain Measurement/veterinary , Algorithms , Animals , Databases, Factual , Deep Learning , Facial Recognition , Horses , Humans , Image Interpretation, Computer-Assisted/methods , Neural Networks, Computer , Orchiectomy/veterinary , Video Recording
13.
IEEE Trans Image Process ; 30: 7636-7648, 2021.
Article in English | MEDLINE | ID: mdl-34469297

ABSTRACT

Convolutional neural networks are capable of extracting powerful representations for face recognition. However, they tend to suffer from poor generalization due to imbalanced data distributions, in which a small number of classes are over-represented (e.g. frontal or non-occluded faces) and some of the remaining classes rarely appear (e.g. profile or heavily occluded faces). As a result, performance is dramatically degraded in minority classes; for example, this issue is serious for recognizing masked faces during the ongoing COVID-19 pandemic. In this work, we propose an Attention Augmented Network, called AAN-Face, to handle this issue. First, an attention erasing (AE) scheme is proposed to randomly erase units in attention maps, preparing models for occlusions and pose variations. Second, an attention center loss (ACL) is proposed to learn a center for each attention map, so that the same attention map focuses on the same facial part. Consequently, discriminative facial regions are emphasized, while useless or noisy ones are suppressed. Third, the AE and the ACL are combined to form AAN-Face. Since the discriminative parts are randomly removed by the AE, the ACL is encouraged to learn different attention centers, leading to the localization of diverse and complementary facial parts. Comprehensive experiments on various test datasets, especially on masked faces, demonstrate that our AAN-Face models outperform state-of-the-art methods, showing the importance and effectiveness of the proposed schemes.
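The attention erasing (AE) scheme, randomly zeroing units of an attention map during training so the network cannot over-rely on any single region, can be sketched as a Bernoulli mask; the drop probability here is an illustrative assumption:

```python
import numpy as np

def attention_erase(attn: np.ndarray, drop_prob: float = 0.3, rng=None) -> np.ndarray:
    """Randomly zero out units of an attention map during training, forcing
    the network to rely on complementary facial regions."""
    if rng is None:
        rng = np.random.default_rng()
    mask = rng.random(attn.shape) >= drop_prob   # True = keep the unit
    return attn * mask

attn = np.ones((4, 7, 7))   # 4 attention maps, 7x7 spatial units each
erased = attention_erase(attn, drop_prob=0.3, rng=np.random.default_rng(0))
print(erased.shape)         # (4, 7, 7); roughly 70% of units survive
```

At inference time no erasing would be applied, analogous to how dropout is disabled at test time.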


Subjects
Automated Facial Recognition/methods , Face/anatomy & histology , Image Processing, Computer-Assisted/methods , Neural Networks, Computer , COVID-19 , Humans , Masks
14.
Plast Reconstr Surg ; 148(1): 45-54, 2021 Jul 01.
Article in English | MEDLINE | ID: mdl-34181603

ABSTRACT

BACKGROUND: Patients desire face-lifting procedures primarily to appear younger, more refreshed, and attractive. Because there are few objective studies assessing the success of face-lift surgery, the authors used artificial intelligence, in the form of convolutional neural network algorithms alongside FACE-Q patient-reported outcomes, to evaluate perceived age reduction and patient satisfaction following face-lift surgery. METHODS: Standardized preoperative and postoperative (1 year) images of 50 consecutive patients who underwent face-lift procedures (platysmaplasty, superficial musculoaponeurotic system-ectomy, cheek minimal access cranial suspension malar lift, or fat grafting) were used by four neural networks (trained to identify age based on facial features) to estimate age reduction after surgery. In addition, FACE-Q surveys were used to measure patient-reported facial aesthetic outcome. Patient satisfaction was compared to age reduction. RESULTS: The neural network preoperative age accuracy score demonstrated that all four neural networks were accurate in identifying ages (mean score, 100.8). Patient self-appraisal age reduction reported a greater age reduction than neural network age reduction after a face lift (-6.7 years versus -4.3 years). FACE-Q scores demonstrated a high level of patient satisfaction for facial appearance (75.1 ± 8.1), quality of life (82.4 ± 8.3), and satisfaction with outcome (79.0 ± 6.3). Finally, there was a positive correlation between neural network age reduction and patient satisfaction. CONCLUSION: Artificial intelligence algorithms can reliably estimate the reduction in apparent age after face-lift surgery; this estimated age reduction correlates with patient satisfaction. CLINICAL QUESTION/LEVEL OF EVIDENCE: Diagnostic, IV.


Subjects
Automated Facial Recognition/statistics & numerical data , Deep Learning/statistics & numerical data , Patient Satisfaction/statistics & numerical data , Rejuvenation , Rhytidoplasty/statistics & numerical data , Aged , Automated Facial Recognition/methods , Face/diagnostic imaging , Face/surgery , Feasibility Studies , Female , Follow-Up Studies , Humans , Image Processing, Computer-Assisted/methods , Image Processing, Computer-Assisted/statistics & numerical data , Middle Aged , Patient Reported Outcome Measures , Postoperative Period , Preoperative Period , Quality of Life , Reproducibility of Results , Treatment Outcome
15.
Plast Reconstr Surg ; 148(1): 162-169, 2021 Jul 01.
Article in English | MEDLINE | ID: mdl-34181613

ABSTRACT

BACKGROUND: Despite the wide range of cleft lip morphology, consistent scales to categorize preoperative severity do not exist. Machine learning has been used to increase accuracy and efficiency in detection and rating of multiple conditions, yet it has not been applied to cleft disease. The authors tested a machine learning approach to automatically detect and measure facial landmarks and assign severity grades using preoperative photographs. METHODS: Preoperative images were collected from 800 unilateral cleft lip patients, manually annotated for cleft-specific landmarks, and rated using a previously validated severity scale by eight expert reviewers. Five convolutional neural network models were trained for landmark detection and severity grade assignment. Mean squared error loss and Pearson correlation coefficient for cleft width ratio, nostril width ratio, and severity grade assignment were calculated. RESULTS: All five models performed well in landmark detection and severity grade assignment, with the largest and most complex model, Residual Network, performing best (mean squared error, 24.41; cleft width ratio correlation, 0.943; nostril width ratio correlation, 0.879; severity correlation, 0.892). The mobile device-compatible network, MobileNet, also showed a high degree of accuracy (mean squared error, 36.66; cleft width ratio correlation, 0.901; nostril width ratio correlation, 0.705; severity correlation, 0.860). CONCLUSIONS: Machine learning models demonstrate the ability to accurately measure facial features and assign severity grades according to validated scales. Such models hold promise for the creation of a simple, automated approach to classifying cleft lip morphology. Further potential exists for a mobile telephone-based application to provide real-time feedback to improve clinical decision making and patient counseling.
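The two scores used above to compare predicted and expert-annotated measurements, mean squared error and the Pearson correlation coefficient, can be computed as below; the severity grades shown are hypothetical, not study data:

```python
import numpy as np

def mse_and_pearson(pred: np.ndarray, true: np.ndarray) -> tuple[float, float]:
    """Mean squared error and Pearson r between model predictions and
    expert-annotated reference values."""
    mse = float(np.mean((pred - true) ** 2))
    r = float(np.corrcoef(pred, true)[0, 1])
    return mse, r

# Hypothetical severity grades: expert ratings vs. model predictions.
true = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
pred = np.array([1.2, 1.9, 3.3, 3.8, 5.1])
mse, r = mse_and_pearson(pred, true)
print(round(mse, 3), round(r, 3))
```

MSE penalizes absolute disagreement while Pearson r captures whether the model ranks severities consistently with the experts, which is why the study reports both.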


Subjects
Cleft Lip/diagnosis , Deep Learning , Image Processing, Computer-Assisted/methods , Nose/abnormalities , Severity of Illness Index , Anatomic Landmarks , Automated Facial Recognition/methods , Cleft Lip/complications , Cleft Lip/surgery , Clinical Decision-Making , Counseling , Datasets as Topic , Feasibility Studies , Humans , Mobile Applications , Nose/diagnostic imaging , Nose/surgery , Photography , Preoperative Period , Remote Consultation , Rhinoplasty
16.
Eur J Med Genet ; 64(9): 104267, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34161860

ABSTRACT

Down syndrome is one of the most common chromosomal anomalies in the world's population, with an estimated frequency of 1 in 700 live births. Despite its relatively high prevalence, diagnostic rates based on clinical features have remained under 70% in most of the developed world, and even lower in countries with limited resources. While genetic and cytogenetic confirmation greatly increases the diagnostic rate, such resources are often non-existent in many low- and middle-income countries, particularly in Sub-Saharan Africa. To address the needs of countries with limited resources, the implementation of mobile, user-friendly, and affordable technologies that aid in diagnosis would greatly increase the odds of success for a child born with a genetic condition. Given that the Democratic Republic of the Congo is estimated to have one of the highest rates of birth defects in the world, our team sought to determine whether smartphone-based facial analysis technology could accurately detect Down syndrome in individuals of Congolese descent. Prior to technology training, we confirmed the presence of trisomy 21 using low-cost genomic applications that do not require advanced expertise and are available in many low-resource countries. Our software technology, trained on 132 Congolese subjects, performed significantly better (91.67% accuracy, 95.45% sensitivity, 87.88% specificity) than previous technology trained on individuals not of Congolese origin (p < 0.05). In addition, we provide the list of the most discriminative facial features of Down syndrome and their ranges in the Congolese population. Collectively, our technology provides low-cost and accurate diagnosis of Down syndrome in the local population.
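The reported accuracy, sensitivity, and specificity follow directly from a binary confusion matrix. The sketch below uses counts chosen only to illustrate how figures like the ones reported for the 132-subject cohort can arise; the balanced 66/66 case-control split is an assumption, not a detail from the study.

```python
def diagnostic_metrics(tp, tn, fp, fn):
    """Accuracy, sensitivity, and specificity from a binary confusion matrix."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)  # true-positive rate among affected cases
    specificity = tn / (tn + fp)  # true-negative rate among unaffected controls
    return accuracy, sensitivity, specificity

# Hypothetical counts consistent with the reported figures under an assumed
# 66/66 split: 63/66 detected cases, 58/66 correctly cleared controls.
acc, sens, spec = diagnostic_metrics(tp=63, tn=58, fp=8, fn=3)
```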


Subjects
Automated Facial Recognition/methods , Down Syndrome/pathology , Facies , Image Processing, Computer-Assisted/methods , Automated Facial Recognition/economics , Automated Facial Recognition/standards , Democratic Republic of the Congo , Developing Countries , Down Syndrome/genetics , Genetic Testing , Humans , Image Processing, Computer-Assisted/economics , Image Processing, Computer-Assisted/standards , Machine Learning , Sensitivity and Specificity
17.
Plast Surg Nurs ; 41(2): 112-116, 2021.
Article in English | MEDLINE | ID: mdl-34033638

ABSTRACT

The number of applications of facial recognition technology is increasing, owing to the improvements in image quality, artificial intelligence, and computer processing power that have occurred over the last few decades. Algorithms can convert facial anthropometric landmarks into a computer representation, which can be used to help identify nonverbal information about an individual's health status. This article discusses the potential ways a facial recognition tool could perform a health assessment. Because facial attributes may be considered biometric data, clinicians should be informed about the clinical, ethical, and legal issues associated with their use.


Subjects
Automated Facial Recognition/instrumentation , Health Status , Nursing Assessment/methods , Artificial Intelligence/trends , Automated Facial Recognition/methods , Humans , Nursing Assessment/standards
18.
IEEE Trans Image Process ; 30: 5313-5326, 2021.
Article in English | MEDLINE | ID: mdl-34038362

ABSTRACT

In this paper, we propose a structure-coherent deep feature learning method for face alignment. Unlike most existing face alignment methods, which overlook facial structure cues, we explicitly exploit the relations among facial landmarks to make the detector robust to hard cases such as occlusion and large pose. Specifically, we leverage a landmark-graph relational network to enforce the structural relationships among landmarks. We treat the facial landmarks as nodes of a structural graph and carefully design the neighborhood to pass features among the most related nodes. Our method dynamically adapts the weights of each node's neighborhood to suppress distracting information from noisy nodes, such as occluded landmark points. Moreover, unlike most previous works, which only penalize the absolute positions of landmarks during training, we propose a relative location loss that supervises the relative locations of landmarks. This relative location supervision further regularizes the facial structure. Our approach models the interactions among facial landmarks and can easily be implemented on top of any convolutional backbone to boost performance. Extensive experiments on three popular benchmarks, including WFLW, COFW, and 300W, demonstrate the effectiveness of the proposed method. In particular, thanks to explicit structure modeling, our approach is especially robust to challenging cases, yielding impressively low failure rates on the COFW and WFLW datasets. The model and code are publicly available at https://github.com/BeierZhu/Sturcture-Coherency-Face-Alignment.
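The idea of a relative location loss, penalizing errors in the offsets between related landmarks rather than only their absolute positions, can be sketched as below. The neighbor-pair list and toy coordinates are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def relative_location_loss(pred, target, pairs):
    """Penalize errors in the offset vectors between paired landmarks.
    `pairs` is a hypothetical list of related-landmark index pairs."""
    pred = np.asarray(pred, dtype=float)
    target = np.asarray(target, dtype=float)
    loss = 0.0
    for i, j in pairs:
        pred_offset = pred[i] - pred[j]
        true_offset = target[i] - target[j]
        loss += np.sum((pred_offset - true_offset) ** 2)
    return loss / len(pairs)

# Three toy landmarks: the prediction is a pure translation of the target,
# so absolute error is large but the relative structure (and this loss) is zero.
target = [[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]]
pred = [[0.3, 0.3], [1.3, 0.3], [0.8, 1.3]]
loss = relative_location_loss(pred, target, pairs=[(0, 1), (1, 2), (0, 2)])
```

This illustrates why such a term regularizes facial structure: it is invariant to global shifts and only fires when the predicted shape itself is distorted.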


Subjects
Automated Facial Recognition/methods , Deep Learning , Face/anatomy & histology , Anatomic Landmarks/anatomy & histology , Databases, Factual , Humans
19.
Mol Genet Genomic Med ; 9(5): e1636, 2021 05.
Article in English | MEDLINE | ID: mdl-33773094

ABSTRACT

INTRODUCTION: Patients with Noonan and Williams-Beuren syndromes present similar facial phenotypes modulated by their ethnic background. Although distinctive facial features have been reported, studies show a variable incidence of these characteristics in populations of diverse ancestry. Hence, a differential diagnosis based on reported facial features can be challenging. Although accurate diagnoses are possible with genetic testing, it is not available in developing and remote regions. METHODS: We used facial analysis technology to identify the most discriminative facial metrics between 286 patients with Noonan syndrome and 161 with Williams-Beuren syndrome from diverse ethnic backgrounds. We quantified the most discriminative metrics and their ranges, both globally and in different ethnic groups. We also created population-based appearance images that are useful not only as clinical references but also for training purposes. Finally, we trained both global and ethnic-specific machine learning models on these metrics to distinguish between patients with Noonan and Williams-Beuren syndromes. RESULTS: We obtained a classification accuracy of 85.68% in the global population evaluated using cross-validation, which improved to 90.38% when we adapted the facial metrics to the ethnicity of the patients (p = 0.024). CONCLUSION: Our facial analysis provides, for the first time, quantitative reference facial metrics for the differential diagnosis of Noonan and Williams-Beuren syndromes in diverse populations.
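Cross-validated classification accuracy of the kind reported can be sketched with synthetic data and a nearest-class-mean classifier. Everything here is a stand-in: the study's actual facial metrics, model, and fold scheme are not specified in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D facial metric whose mean differs between the two syndromes,
# mimicking a single discriminative measurement (synthetic data).
group_a = rng.normal(0.0, 1.0, size=100)   # stand-in for one syndrome
group_b = rng.normal(2.5, 1.0, size=100)   # stand-in for the other
X = np.concatenate([group_a, group_b])
y = np.array([0] * 100 + [1] * 100)

# 5-fold cross-validation with a nearest-class-mean classifier.
idx = rng.permutation(len(X))
folds = np.array_split(idx, 5)
accs = []
for k in range(5):
    test = folds[k]
    train = np.concatenate([folds[j] for j in range(5) if j != k])
    m0 = X[train][y[train] == 0].mean()
    m1 = X[train][y[train] == 1].mean()
    pred = (np.abs(X[test] - m1) < np.abs(X[test] - m0)).astype(int)
    accs.append(float((pred == y[test]).mean()))
cv_accuracy = float(np.mean(accs))
```

Averaging held-out accuracy across folds, as above, is what allows a single figure such as 85.68% to summarize performance on data the model never trained on.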


Subjects
Automated Facial Recognition/methods , Diagnosis, Computer-Assisted/methods , Face/pathology , Noonan Syndrome/diagnosis , Phenotype , Williams Syndrome/diagnosis , Adolescent , Adult , Automated Facial Recognition/standards , Child , Child, Preschool , Diagnosis, Computer-Assisted/standards , Diagnosis, Differential , Female , Humans , Infant , Machine Learning , Male , Sensitivity and Specificity
20.
Biomed Res Int ; 2021: 6696357, 2021.
Article in English | MEDLINE | ID: mdl-33778081

ABSTRACT

BACKGROUND: Sedentary lifestyles and work-from-home schedules due to the ongoing COVID-19 pandemic in 2020 have caused a significant rise in obesity among adults. With visits to doctors limited during this period to avoid possible infection, there is currently no way to measure or track obesity. METHODS: We reviewed the literature on relationships between obesity and facial features in White, Black, Hispanic/Latino, and Korean populations and validated them against a cohort of Indian participants (n = 106). Body mass index (BMI) and waist-to-hip ratio (WHR) were obtained using anthropometric measurements, and body fat mass (BFM), percentage body fat (PBF), and visceral fat area (VFA) were measured using body composition analysis. Facial pictures were also collected and processed to characterize facial geometry. Regression analysis was conducted to determine correlations between body fat parameters and facial model parameters. RESULTS: Lower facial geometry was most highly correlated with BMI (R² = 0.77), followed by PBF (R² = 0.72), VFA (R² = 0.65), WHR (R² = 0.60), BFM (R² = 0.59), and weight (R² = 0.54). CONCLUSIONS: The ability to predict obesity from facial images through a mobile application or telemedicine can help with early diagnosis and timely medical intervention for people with obesity during the pandemic.
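The R² values reported here are coefficients of determination from regressing a body-fat parameter on a facial measurement. A minimal sketch follows; the facial-ratio and BMI values are hypothetical, not data from the study.

```python
import numpy as np

def r_squared(x, y):
    """Coefficient of determination (R²) for a simple linear fit of y on x."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (slope * x + intercept)
    ss_res = np.sum(residuals ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return float(1 - ss_res / ss_tot)

# Hypothetical lower-face geometry ratios vs. BMI for a few subjects.
face_ratio = [1.8, 1.9, 2.0, 2.1, 2.2, 2.3, 2.4]
bmi = [21.0, 23.5, 24.0, 26.5, 28.0, 29.5, 32.0]
r2 = r_squared(face_ratio, bmi)
```

An R² near 0.77, as reported for BMI, would mean roughly three quarters of the variance in BMI is explained by the facial measurement alone.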


Subjects
Anthropometry/methods , Automated Facial Recognition/methods , COVID-19/epidemiology , Obesity/diagnosis , Adult , Body Composition , Body Mass Index , Body Weight , Facial Recognition/physiology , Female , Humans , Male , Middle Aged , Obesity/epidemiology , Obesity/metabolism , Pandemics , Predictive Value of Tests , Prognosis , Risk Factors , SARS-CoV-2/isolation & purification , Waist Circumference , Waist-Hip Ratio