Results 1 - 20 of 224
1.
Q J Exp Psychol (Hove) ; : 17470218241275977, 2024 Aug 13.
Article in English | MEDLINE | ID: mdl-39138399

ABSTRACT

Developmental co-ordination disorder (DCD) is characterised by difficulties in motor control and coordination from early childhood. While problems processing facial identity are often associated with neurodevelopmental conditions, such issues have never been directly tested in adults with DCD. We tested this possibility through a range of tasks, and assessed the prevalence of developmental prosopagnosia (i.e., lifelong difficulties with faces), in a group comprising individuals who self-reported a diagnosis of, or suspected that they had, DCD. Strikingly, we found 53% of this probable DCD group met recently recommended criteria for a diagnosis of prosopagnosia, with 22% acquiring a diagnosis using traditional cognitive task-based methods. Moreover, their problems with faces were apparent on both unfamiliar and familiar face memory tests, as well as on a facial perception task (i.e., could they tell faces apart). Positive correlations were found between self-report measures assessing movement and coordination problems, and objective difficulties on experimental face identity processing tasks, suggesting widespread neurocognitive disruption in DCD. Importantly, issues in identity processing in our probable DCD group remained even after excluding participants with comorbid conditions traditionally associated with difficulties in face recognition, i.e., autism and dyslexia. We recommend that any diagnostic test for DCD should include an assessment for prosopagnosia. Given the high prevalence of prosopagnosia in our probable DCD group, and the positive correlations between DCD and prosopagnosia symptoms, there may be a stronger link between movement and facial identity abilities than previously thought.

2.
Postgrad Med J ; 2024 Jul 29.
Article in English | MEDLINE | ID: mdl-39075977

ABSTRACT

BACKGROUND: Williams-Beuren syndrome, Noonan syndrome, and Alagille syndrome are common types of genetic syndromes (GSs) characterized by distinct facial features, pulmonary stenosis, and delayed growth. In clinical practice, differentiating these three GSs remains a challenge. Facial gestalts serve as a diagnostic tool for recognizing Williams-Beuren syndrome, Noonan syndrome, and Alagille syndrome. Pretrained foundation models (PFMs) can serve as the foundation for small-scale tasks. By pretraining with a foundation model, we propose facial recognition models for identifying these syndromes. METHODS: A total of 3297 facial photos were obtained from 1666 children: those diagnosed with Williams-Beuren syndrome (n = 174), Noonan syndrome (n = 235), or Alagille syndrome (n = 51), and children without GSs (n = 1206). The photos were randomly divided into five subsets, with each syndrome and the non-GS group equally and randomly distributed across subsets. The ratio of the training set to the test set was 4:1. The ResNet-100 architecture was employed as the backbone model. By pretraining with a foundation model, we constructed two face recognition models: one utilizing the ArcFace loss function and the other employing the CosFace loss function. Additionally, we developed two models using the same architecture and loss functions but without pretraining. The accuracy, precision, recall, and F1 score of each model were evaluated. Finally, we compared the performance of the facial recognition models to that of five pediatricians. RESULTS: Among the four models, ResNet-100 with a PFM and the CosFace loss function achieved the best accuracy (84.8%). For each loss function, pretraining with the foundation model significantly improved performance (from 78.5% to 84.5% for ArcFace, and from 79.8% to 84.8% for CosFace). Both with and without the PFM, the performance of the CosFace models was similar to that of the ArcFace models (79.8% vs 78.5% without PFM; 84.8% vs 84.5% with PFM). Among the five pediatricians, the highest accuracy (0.700) was achieved by the most senior pediatrician with genetics training. The accuracy and F1 scores of the pediatricians were generally lower than those of the models. CONCLUSIONS: A facial recognition-based model has the potential to improve the identification of three common GSs with pulmonary stenosis. PFMs might be valuable for building screening models for facial recognition. Key messages. What is already known on this topic: Early identification of genetic syndromes (GSs) is crucial for the management and prognosis of children with pulmonary stenosis (PS). Facial phenotyping with convolutional neural networks (CNNs) often requires large-scale training data, limiting its usefulness for GSs. What this study adds: We successfully built multi-classification models based on face recognition using a CNN to accurately identify three common PS-associated GSs. ResNet-100 with a pretrained foundation model (PFM) and the CosFace loss function achieved the best accuracy (84.8%). Pretraining with the foundation model significantly improved the models' performance, although the impact of the type of loss function appeared to be minimal. How this study might affect research, practice, or policy: A facial recognition-based model has the potential to improve the identification of GSs in children with PS. The PFM might be valuable for building identification models for facial detection.
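As a rough illustration of the two loss functions contrasted in this abstract, the sketch below shows additive-margin softmax heads in the spirit of ArcFace and CosFace. This is not the authors' implementation; the scale and margin values are illustrative assumptions.

```python
# Hedged sketch: margin-based softmax heads in the spirit of ArcFace / CosFace.
# Not the study's code; hyperparameters (s, m) are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MarginSoftmaxHead(nn.Module):
    def __init__(self, emb_dim, n_classes, s=30.0, m=0.35, mode="cosface"):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(n_classes, emb_dim))
        self.s, self.m, self.mode = s, m, mode

    def forward(self, embeddings, labels):
        # Cosine similarity between L2-normalised embeddings and class weights.
        cos = F.linear(F.normalize(embeddings), F.normalize(self.weight))
        if self.mode == "cosface":
            # CosFace: subtract an additive cosine margin from the target logit.
            target = cos - self.m
        else:
            # ArcFace: add an additive angular margin to the target angle.
            theta = torch.acos(cos.clamp(-1 + 1e-7, 1 - 1e-7))
            target = torch.cos(theta + self.m)
        onehot = F.one_hot(labels, cos.size(1)).bool()
        logits = torch.where(onehot, target, cos) * self.s
        return F.cross_entropy(logits, labels)
```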

3.
J Biomed Inform ; 157: 104669, 2024 Jun 15.
Article in English | MEDLINE | ID: mdl-38880237

ABSTRACT

BACKGROUND: Studies confirm that significant biases exist in online recommendation platforms, exacerbating pre-existing disparities and leading to less-than-optimal outcomes for underrepresented demographics. We study issues of bias in inclusion and representativeness in the context of healthcare information disseminated via videos on the YouTube social media platform, a widely used online channel for media-rich information. With one in three US adults using the Internet to learn about a health concern, it is critical to assess inclusivity and representativeness in how health information is disseminated by digital platforms such as YouTube. METHODS: Leveraging methods from fair machine learning (ML), natural language processing, and voice and facial recognition, we examine the inclusivity and representativeness of video content presenters using a large corpus of videos and their metadata on a chronic condition (diabetes) extracted from the YouTube platform. Regression models are used to determine whether presenter demographics impact video popularity, measured by the video's average daily view count. A video that generates a higher view count is considered more popular. RESULTS: The voice and facial recognition methods predicted the gender and race of the presenter with reasonable success. Gender is predicted through voice recognition (accuracy = 78%, AUC = 76%), while the gender and race predictions use facial recognition (accuracy = 93%, AUC = 92% and accuracy = 82%, AUC = 80%, respectively). The gender of the presenter is significant for video views only when the presenter's face is not visible, and videos with male presenters whose faces are not visible show a positive relationship with view counts. Furthermore, videos with white and male presenters have a positive influence on view counts, while videos with female and non-white presenters also attain high view counts. CONCLUSION: Presenters' demographics do influence the average daily view count of videos on social media platforms, as shown by the voice and facial recognition algorithms used to assess the inclusion and representativeness of the video content. Future research can explore short videos and analyses at the channel level, because the popularity of the channel name and the number of videos associated with that channel also influence view counts.
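The regression step described here can be sketched as below. The data file and column names (avg_daily_views, gender, race, face_visible) are hypothetical placeholders, not the study's variables.

```python
# Hedged sketch of a popularity regression on presenter demographics; the
# input file and column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("diabetes_video_metadata.csv")  # hypothetical input file

# Log-transform the skewed view counts and regress on presenter attributes,
# allowing gender to interact with whether the face is visible.
model = smf.ols(
    "np.log1p(avg_daily_views) ~ C(gender) * C(face_visible) + C(race)",
    data=df,
).fit()
print(model.summary())
```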

4.
Forensic Sci Int ; 361: 112108, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38908069

ABSTRACT

Mass disaster events can result in high numbers of casualties that need to be identified. Whilst disaster victim identification (DVI) relies on the primary identifiers of DNA, fingerprints, and dental records, these require ante-mortem data that may not exist or be easily obtainable. Facial recognition technology may be able to assist. Automated facial recognition has advanced considerably, and ante-mortem facial images are readily available. Facial recognition could therefore be used to expedite the DVI process by narrowing down leads before primary identifiers become available. This research explores the feasibility of using automated facial recognition technology to support DVI. We evaluated the performance of a commercial-off-the-shelf facial recognition algorithm on post-mortem images (representing images taken after a mass disaster) against ante-mortem images (representing a database that may exist within agencies that hold face databases for identity documents, such as passports or driver's licenses). We explored facial recognition performance for different operational scenarios, with different levels of face image quality, and by cause of death. Our research is the largest facial recognition evaluation of post-mortem and ante-mortem images to date. We demonstrated that facial recognition technology would be valuable for DVI and that performance varies by image quality and cause of death. We provide recommendations for future research.
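The one-to-many search scenario evaluated here boils down to rank-k identification against an ante-mortem gallery. A hedged sketch of that metric follows, with random embeddings standing in for the unnamed commercial algorithm.

```python
# Hedged sketch of rank-k identification accuracy for a DVI-style search.
# Embeddings would come from a face recognition model; random vectors are
# used here as stand-ins for the commercial algorithm's outputs.
import numpy as np

def rank_k_accuracy(probe_emb, probe_ids, gallery_emb, gallery_ids, k=10):
    # Cosine similarity between every probe and every gallery entry.
    p = probe_emb / np.linalg.norm(probe_emb, axis=1, keepdims=True)
    g = gallery_emb / np.linalg.norm(gallery_emb, axis=1, keepdims=True)
    sims = p @ g.T
    hits = 0
    for i, pid in enumerate(probe_ids):
        top_k = np.argsort(-sims[i])[:k]        # best-matching gallery rows
        hits += int(pid in gallery_ids[top_k])  # is the true identity in the top k?
    return hits / len(probe_ids)

rng = np.random.default_rng(0)
gallery_ids = np.arange(500)
gallery_emb = rng.normal(size=(500, 512))
probe_ids = gallery_ids[:100]
probe_emb = gallery_emb[:100] + 0.5 * rng.normal(size=(100, 512))  # degraded probes
print(rank_k_accuracy(probe_emb, probe_ids, gallery_emb, gallery_ids, k=10))
```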


Subjects
Algorithms, Automated Facial Recognition, Disaster Victims, Humans, Face/anatomy & histology, Face/diagnostic imaging, Image Processing, Computer-Assisted, Male, Female, Photography
5.
Eur J Pediatr ; 183(9): 3797-3808, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38871980

ABSTRACT

Williams-Beuren syndrome (WBS) is a rare genetic disorder characterized by a distinctive facial gestalt, delayed development, and supravalvular aortic stenosis and/or stenosis of the branches of the pulmonary artery. We aim to develop and optimize accurate models of facial recognition to assist in the diagnosis of WBS, and to evaluate their effectiveness by using both five-fold cross-validation and an external test set. We used a total of 954 images from 135 patients with WBS, 124 patients suffering from other genetic disorders, and 183 healthy children. The training set comprised 852 images of 104 WBS cases, 91 cases of other genetic disorders, and 145 healthy children from September 2017 to December 2021 at the Guangdong Provincial People's Hospital. We constructed six binary classification models of facial recognition for WBS by using EfficientNet-b3, ResNet-50, VGG-16, VGG-16BN, VGG-19, and VGG-19BN. Transfer learning was used to pre-train the models, and each model was modified with a variable cosine learning rate. Each model was first evaluated by using five-fold cross-validation and then assessed on the external test set. The latter contained 102 images of 31 children suffering from WBS, 33 children with other genetic disorders, and 38 healthy children. To compare the capabilities of these models of recognition with those of human experts in terms of identifying cases of WBS, we recruited two pediatricians, a pediatric cardiologist, and a pediatric geneticist to identify the WBS patients based solely on their facial images. We constructed six models of facial recognition for diagnosing WBS using EfficientNet-b3, ResNet-50, VGG-16, VGG-16BN, VGG-19, and VGG-19BN. The model based on VGG-19BN achieved the best performance in terms of five-fold cross-validation, with an accuracy of 93.74% ± 3.18%, precision of 94.93% ± 4.53%, specificity of 96.10% ± 4.30%, and F1 score of 91.65% ± 4.28%, while the VGG-16BN model achieved the highest recall value of 91.63% ± 5.96%. The VGG-19BN model also achieved the best performance on the external test set, with an accuracy of 95.10%, precision of 100%, recall of 83.87%, specificity of 93.42%, and F1 score of 91.23%. The best performance by human experts on the external test set yielded values of accuracy, precision, recall, specificity, and F1 scores of 77.45%, 60.53%, 77.42%, 83.10%, and 66.67%, respectively. The F1 score of each human expert was lower than those of the EfficientNet-b3 (84.21%), ResNet-50 (74.51%), VGG-16 (85.71%), VGG-16BN (85.71%), VGG-19 (83.02%), and VGG-19BN (91.23%) models. CONCLUSION: The results showed that facial recognition technology can be used to accurately diagnose patients with WBS. Facial recognition models based on VGG-19BN can play a crucial role in its clinical diagnosis. Their performance can be improved by expanding the size of the training dataset, optimizing the CNN architectures applied, and modifying them with a variable cosine learning rate. WHAT IS KNOWN: • The facial gestalt of WBS, often described as "elfin," includes a broad forehead, periorbital puffiness, a flat nasal bridge, full cheeks, and a small chin. • Recent studies have demonstrated the potential of deep convolutional neural networks for facial recognition as a diagnostic tool for WBS. WHAT IS NEW: • This study develops six models of facial recognition, EfficientNet-b3, ResNet-50, VGG-16, VGG-16BN, VGG-19, and VGG-19BN, to improve WBS diagnosis. • The VGG-19BN model achieved the best performance, with an accuracy of 95.10% and specificity of 93.42%.
• The facial recognition model based on VGG-19BN can play a crucial role in the clinical diagnosis of WBS.
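A minimal sketch of the training recipe credited above, a pretrained VGG-19BN fine-tuned with a cosine learning-rate schedule, is shown below. Paths, transforms, and hyperparameters are illustrative assumptions, not the authors' settings.

```python
# Hedged sketch: fine-tuning a pretrained VGG-19BN for binary WBS / non-WBS
# classification with a cosine-annealed learning rate. All hyperparameters,
# the dataset layout, and the transforms are illustrative only.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import models, transforms, datasets

device = "cuda" if torch.cuda.is_available() else "cpu"
model = models.vgg19_bn(weights=models.VGG19_BN_Weights.IMAGENET1K_V1)
model.classifier[6] = nn.Linear(model.classifier[6].in_features, 2)  # WBS vs. other
model = model.to(device)

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_ds = datasets.ImageFolder("faces/train", transform=tfm)  # hypothetical layout
loader = DataLoader(train_ds, batch_size=32, shuffle=True)

opt = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
sched = torch.optim.lr_scheduler.CosineAnnealingLR(opt, T_max=30)  # "cosine" LR
loss_fn = nn.CrossEntropyLoss()

for epoch in range(30):
    for x, y in loader:
        opt.zero_grad()
        loss = loss_fn(model(x.to(device)), y.to(device))
        loss.backward()
        opt.step()
    sched.step()  # decay the learning rate along a cosine curve each epoch
```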


Subjects
Williams Syndrome, Humans, Williams Syndrome/diagnosis, Williams Syndrome/genetics, Child, Female, Male, Child, Preschool, Infant, Case-Control Studies, Adolescent, Facial Recognition, Automated Facial Recognition/methods
6.
Digit Health ; 10: 20552076241259664, 2024.
Article in English | MEDLINE | ID: mdl-38846372

ABSTRACT

Objective: Assessing pain in individuals with neurological conditions like cerebral palsy is challenging due to limited self-reporting and expression abilities. Current methods lack sensitivity and specificity, underlining the need for a reliable evaluation protocol. An automated facial recognition system could revolutionize pain assessment for such patients. The research focuses on two primary goals: developing a dataset of facial pain expressions for individuals with cerebral palsy and creating a deep learning-based automated system for pain assessment tailored to this group. Methods: The study trained ten neural networks using three pain image databases and a newly curated CP-PAIN Dataset of 109 images from cerebral palsy patients, classified by experts using the Facial Action Coding System. Results: The InceptionV3 model demonstrated promising results, achieving 62.67% accuracy and a 61.12% F1 score on the CP-PAIN dataset. Explainable AI techniques confirmed the consistency of crucial features for pain identification across models. Conclusion: The study underscores the potential of deep learning in developing reliable pain detection systems using facial recognition for individuals with communication impairments due to neurological conditions. A more extensive and diverse dataset could further enhance the models' sensitivity to subtle pain expressions in cerebral palsy patients and possibly extend to other complex neurological disorders. This research marks a significant step toward more empathetic and accurate pain management for vulnerable populations.

7.
Front Big Data ; 7: 1354659, 2024.
Article in English | MEDLINE | ID: mdl-38895177

ABSTRACT

Despite their pronounced potential, unacceptable risk AI systems, such as facial recognition, have been used as tools for, inter alia, digital surveillance, and policing. This usage raises concerns in relation to the protection of basic freedoms and liberties and upholding the rule of law. This article contributes to the legal discussion by investigating how the law must intervene, control, and regulate the use of unacceptable risk AI systems that concern biometric data from a human-rights and rule of law perspective. In doing so, the article first examines the collection of biometric data and the use of facial recognition technology. Second, it describes the nature of the obligation or duty of states to regulate in relation to new technologies. The article, lastly, assesses the legal implications resulting from the failure of states to regulate new technologies and investigates possible legal remedies. The article uses some relevant EU regulations as an illustrative example.

8.
Front Psychiatry ; 15: 1375751, 2024.
Article in English | MEDLINE | ID: mdl-38938460

ABSTRACT

Background: Individuals with anxiety disorders (ADs) often display hypervigilance to threat information, although this response may be less pronounced following psychotherapy. This study aims to investigate the unconscious recognition performance of facial expressions in patients with panic disorder (PD) post-treatment, shedding light on alterations in their emotional processing biases. Methods: Patients with PD (n=34) after (exposure-based) cognitive behavior therapy and healthy controls (n=43) performed a subliminal affective recognition task. Emotional facial expressions (fearful, happy, or mirrored) were displayed for 33 ms and backwardly masked by a neutral face. Participants completed a forced choice task to discriminate the briefly presented facial stimulus and an uncovered condition where only the neutral mask was shown. We conducted a secondary analysis to compare groups based on their four possible response types under the four stimulus conditions and examined the correlation of the false alarm rate for fear responses to non-fearful (happy, mirrored, and uncovered) stimuli with clinical anxiety symptoms. Results: The patient group showed a unique selection pattern in response to happy expressions, with significantly more correct "happy" responses compared to controls. Additionally, lower severity of anxiety symptoms after psychotherapy was associated with a decreased false fear response rate with non-threat presentations. Conclusion: These data suggest that patients with PD exhibited a "happy-face recognition advantage" after psychotherapy. Fewer symptoms after treatment were related to a reduced fear bias. Thus, a differential facial emotion detection task could be a suitable tool to monitor response patterns and biases in individuals with ADs in the context of psychotherapy.
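The secondary analysis described here, correlating false "fear" alarms on non-fearful trials with anxiety symptoms, could be computed roughly as below. The trial-level file and column names are hypothetical.

```python
# Hedged sketch of the secondary analysis: per-participant false-"fear" rate on
# non-fearful trials, correlated with an anxiety score. Column names are
# hypothetical, not the study's variables.
import pandas as pd
from scipy.stats import pearsonr

trials = pd.read_csv("subliminal_task_trials.csv")   # hypothetical per-trial data
non_fear = trials[trials["stimulus"].isin(["happy", "mirrored", "uncovered"])]

false_fear = (
    non_fear.assign(false_fear=lambda d: (d["response"] == "fearful").astype(int))
    .groupby("participant")["false_fear"]
    .mean()
)
anxiety = trials.groupby("participant")["anxiety_score"].first()

r, p = pearsonr(false_fear, anxiety.loc[false_fear.index])
print(f"r = {r:.2f}, p = {p:.3f}")
```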

9.
J Gambl Stud ; 2024 May 09.
Article in English | MEDLINE | ID: mdl-38724824

ABSTRACT

Computer technology has long been touted as a means of increasing the effectiveness of voluntary self-exclusion schemes, especially in terms of relieving gaming venue staff of the task of manually identifying and verifying the status of new customers. This paper reports on the government-led implementation of facial recognition technology as part of an automated self-exclusion program in the city of Adelaide in South Australia, one of the first jurisdiction-wide enforcements of this controversial technology in small-venue gambling. Drawing on stakeholder interviews, site visits, and documentary analysis over a two-year period, the paper contrasts initial claims that facial recognition offered a straightforward and benign improvement to the efficiency of the city's long-running self-excluded gambler program, with subsequent concerns that the new technology was associated with heightened inconsistencies, inefficiencies, and uncertainties. As such, the paper contends that regardless of the enthusiasms of government, the tech industry, and the gaming lobby, facial recognition does not offer a ready 'technical fix' to problem gambling. The South Australian case illustrates how this technology does not appear to better address the core issues underpinning problem gambling, and/or substantially improve conditions for problem gamblers to refrain from gambling. As such, it is concluded that the gambling sector needs to pay close attention to the practical outcomes arising from initial cases such as this, and resist industry pressures for the wider replication of this technology in other jurisdictions.

10.
BMC Pediatr ; 24(1): 361, 2024 May 24.
Article in English | MEDLINE | ID: mdl-38783283

ABSTRACT

BACKGROUND: Noonan syndrome (NS) is a rare genetic disease, and patients who suffer from it exhibit a facial morphology that is characterized by a high forehead, hypertelorism, ptosis, inner epicanthal folds, down-slanting palpebral fissures, a highly arched palate, a round nasal tip, and posteriorly rotated ears. Facial analysis technology has recently been applied to identify many genetic syndromes (GSs). However, few studies have investigated the identification of NS based on the facial features of the subjects. OBJECTIVES: This study develops advanced models to enhance the accuracy of diagnosis of NS. METHODS: A total of 1,892 people were enrolled in this study, including 233 patients with NS, 863 patients with other GSs, and 796 healthy children. We took one to 10 frontal photos of each subject to build a dataset, and then applied the multi-task convolutional neural network (MTCNN) for data pre-processing to generate standardized outputs with five crucial facial landmarks. The ImageNet dataset was used to pre-train the network so that it could capture generalizable features and minimize data wastage. We subsequently constructed seven models for facial identification based on the VGG16, VGG19, VGG16-BN, VGG19-BN, ResNet50, MobileNet-V2, and squeeze-and-excitation network (SENet) architectures. The identification performance of seven models was evaluated and compared with that of six physicians. RESULTS: All models exhibited a high accuracy, precision, and specificity in recognizing NS patients. The VGG19-BN model delivered the best overall performance, with an accuracy of 93.76%, precision of 91.40%, specificity of 98.73%, and F1 score of 78.34%. The VGG16-BN model achieved the highest AUC value of 0.9787, while all models based on VGG architectures were superior to the others on the whole. The highest scores of six physicians in terms of accuracy, precision, specificity, and the F1 score were 74.00%, 75.00%, 88.33%, and 61.76%, respectively. The performance of each model of facial recognition was superior to that of the best physician on all metrics. CONCLUSION: Models of computer-assisted facial recognition can improve the rate of diagnosis of NS. The models based on VGG19-BN and VGG16-BN can play an important role in diagnosing NS in clinical practice.
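A hedged sketch of the MTCNN preprocessing step described above is given below, using the facenet-pytorch implementation of MTCNN as a stand-in for the authors' pipeline; file names are placeholders.

```python
# Hedged sketch of MTCNN-based preprocessing: detect the face and its five
# landmarks, then save an aligned crop for a downstream classifier.
# facenet-pytorch's MTCNN stands in for the authors' implementation.
from PIL import Image
from facenet_pytorch import MTCNN

mtcnn = MTCNN(image_size=224, margin=20, post_process=False)

img = Image.open("frontal_photo.jpg").convert("RGB")      # hypothetical input
boxes, probs, landmarks = mtcnn.detect(img, landmarks=True)
if boxes is not None:
    # landmarks[0] holds five (x, y) points: eyes, nose tip, mouth corners.
    print("detection confidence:", probs[0])
    print("five landmarks:\n", landmarks[0])
    aligned = mtcnn(img, save_path="aligned_face.jpg")     # cropped face tensor
```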


Subjects
Noonan Syndrome, Humans, Noonan Syndrome/diagnosis, Child, Female, Male, Child, Preschool, Neural Networks, Computer, Infant, Adolescent, Automated Facial Recognition/methods, Diagnosis, Computer-Assisted/methods, Sensitivity and Specificity, Case-Control Studies
11.
Nano Lett ; 24(22): 6673-6682, 2024 Jun 05.
Article in English | MEDLINE | ID: mdl-38779991

ABSTRACT

Reliably discerning real human faces from fake ones, known as antispoofing, is crucial for facial recognition systems. While neuromorphic systems offer integrated sensing-memory-processing functions, they still struggle with efficient antispoofing techniques. Here we introduce a neuromorphic facial recognition system incorporating multidimensional deep ultraviolet (DUV) optoelectronic synapses to address these challenges. To overcome the complexity and high cost of producing DUV synapses using traditional wide-bandgap semiconductors, we developed a low-temperature (≤70 °C) solution process for fabricating DUV synapses based on PEA2PbBr4/C8-BTBT heterojunction field-effect transistors. This method enables the large-scale (4-in.), uniform, and transparent production of DUV synapses. These devices respond to both DUV and visible light, showing multidimensional features. Leveraging the unique ability of the multidimensional DUV synapse (MDUVS) to discriminate real human skin from artificial materials, we have achieved robust neuromorphic facial recognition with antispoofing capability, successfully identifying genuine human faces with an accuracy exceeding 92%.

12.
Diabetes Metab Syndr ; 18(4): 103003, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38615568

ABSTRACT

AIM: To build a facial image database and to explore the diagnostic efficacy and influencing factors of an artificial intelligence-based facial recognition (AI-FR) system for multiple endocrine and metabolic syndromes. METHODS: Individuals with multiple endocrine and metabolic syndromes and healthy controls were included from public literature and databases. In this facial image database, facial images and clinical data were collected for each participant, and the disease facial recognition intensity (dFRI) was calculated to quantify the facial complexity of each syndrome. AI-FR diagnosis models were trained for each disease using three algorithms: support vector machine (SVM), principal component analysis k-nearest neighbor (PCA-KNN), and adaptive boosting (AdaBoost). Diagnostic performance was evaluated, with optimal performance defined as the best result among the three models. Factors affecting AI-FR diagnosis were explored with regression analysis. RESULTS: 462 cases of 10 endocrine and metabolic syndromes and 2310 controls were included in the facial image database. The AI-FR diagnostic models showed diagnostic accuracies of 0.827-0.920 with SVM, 0.766-0.890 with PCA-KNN, and 0.818-0.935 with AdaBoost. Higher dFRI was associated with a higher optimal area under the curve (AUC) (P = 0.035). No significant correlation was observed between the sample size of the training set and diagnostic performance. CONCLUSIONS: A multi-ethnic, multi-regional, and multi-disease facial database for 10 endocrine and metabolic syndromes was built. The AI-FR models displayed ideal diagnostic performance, and dFRI was associated with that performance, suggesting that inherent facial features might contribute to the performance of AI-FR models.
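A rough sketch of the three classifier families named above, applied to pre-extracted facial feature vectors, is shown below. The feature extraction itself is not shown, the data are synthetic stand-ins, and all hyperparameters are illustrative.

```python
# Hedged sketch of the SVM, PCA-KNN, and AdaBoost classifiers described above,
# evaluated by cross-validated AUC on stand-in feature vectors.
import numpy as np
from sklearn.svm import SVC
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import AdaBoostClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 128))          # stand-in facial feature vectors
y = rng.integers(0, 2, size=200)         # stand-in syndrome / control labels

models = {
    "SVM": SVC(kernel="rbf", probability=True),
    "PCA-KNN": make_pipeline(PCA(n_components=20), KNeighborsClassifier(n_neighbors=5)),
    "AdaBoost": AdaBoostClassifier(n_estimators=200),
}
for name, clf in models.items():
    auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: AUC = {auc:.3f}")
```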


Subjects
Artificial Intelligence, Databases, Factual, Metabolic Syndrome, Humans, Metabolic Syndrome/diagnosis, Female, Male, Middle Aged, Endocrine System Diseases/diagnosis, Adult, Face/diagnostic imaging, Facial Recognition, Prognosis, Algorithms, Case-Control Studies, Follow-Up Studies
13.
Bioengineering (Basel) ; 11(4)2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38671805

ABSTRACT

BACKGROUND: Facial recognition systems utilizing deep learning techniques can improve the accuracy of facial recognition technology. However, it remains unclear whether these systems should be available for patient identification in a hospital setting. METHODS: We evaluated a facial recognition system using deep learning and the built-in camera of an iPad to identify patients. We tested the system under different conditions to assess its authentication scores (AS) and determine its efficacy. Our evaluation included 100 patients in sitting, supine, and lateral positions, with and without masks, and under nighttime sleeping conditions. RESULTS: Our results show that the unmasked certification rate of 99.7% was significantly higher than the masked rate of 90.8% (p < 0.0001). In addition, we found that the authentication rate exceeded 99% even during nighttime sleep. Furthermore, the facial recognition system was safe and acceptable for patient identification within a hospital environment. Even for patients wearing masks, we achieved a 100% success rate for authentication regardless of illumination if they were sitting with their eyes open. CONCLUSIONS: This is the first systematic study to evaluate facial recognition among hospitalized patients under different conditions. The facial recognition system using deep learning for patient identification shows promising results, demonstrating its safety and acceptability, especially in hospital settings where accurate patient identification is crucial.

14.
MedComm (2020) ; 5(4): e526, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38606361

ABSTRACT

Malnutrition is a prevalent and severe issue in hospitalized patients with chronic diseases. However, malnutrition screening is often overlooked or inaccurate due to lack of awareness and experience among health care providers. This study aimed to develop and validate a novel digital smartphone-based self-administered tool that uses facial features, especially the ocular area, as indicators of malnutrition in inpatient patients with chronic diseases. Facial photographs and malnutrition screening scales were collected from 619 patients in four different hospitals. A machine learning model based on back propagation neural network was trained, validated, and tested using these data. The model showed a significant correlation (p < 0.05) and a high accuracy (area under the curve 0.834-0.927) in different patient groups. The point-of-care mobile tool can be used to screen malnutrition with good accuracy and accessibility, showing its potential for screening malnutrition in patients with chronic diseases.

15.
Neuroimage ; 291: 120591, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38552812

ABSTRACT

Functional imaging has helped to understand the role of the human insula as a major processing network for integrating input with the current state of the body. However, these studies remain at a correlative level. Studies that have examined insula damage show lesion-specific performance deficits. Case reports have provided anecdotal evidence for deficits following insula damage, but group lesion studies offer a number of advances in providing evidence for functional representation of the insula. We conducted a systematic literature search to review group studies of patients with insula damage after stroke and identified 23 studies that tested emotional processing performance in these patients. Eight of these studies assessed emotional processing of visual (most commonly IAPS), auditory (e.g., prosody), somatosensory (emotional touch) and autonomic function (heart rate variability). Fifteen other studies looked at social processing, including emotional face recognition, gaming tasks and tests of empathy. Overall, there was a bias towards testing only patients with right-hemispheric lesions, making it difficult to consider hemisphere specificity. Although many studies included an overlay of lesion maps to characterise their patients, most did not differentiate lesion statistics between insula subunits and/or applied voxel-based associations between lesion location and impairment. This is probably due to small group sizes, which limit statistical comparisons. We conclude that multicentre analyses of lesion studies with comparable patients and performance tests are needed to definitively test the specific function of parts of the insula in emotional processing and social interaction.


Subjects
Emotions, Insular Cortex, Stroke, Humans, Stroke/diagnostic imaging, Stroke/complications, Stroke/physiopathology, Emotions/physiology, Insular Cortex/diagnostic imaging, Insular Cortex/physiopathology, Cerebral Cortex/diagnostic imaging, Cerebral Cortex/physiopathology
16.
J Stomatol Oral Maxillofac Surg ; 125(3S): 101843, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38521241

ABSTRACT

OBJECTIVES: This work aims to introduce a Python-based algorithm and delve into the recent paradigm shift in Maxillofacial Surgery propelled by technological advancement. The provided code exemplifies the utilization of the MediaPipe library, created by Google in C++, with an additional Python interface available as a binding. TECHNICAL NOTE: The advent of FaceMesh coupled with artificial intelligence (AI), has brought about a transformative wave in contemporary maxillofacial surgery. This cutting-edge deep neural network, seamlessly integrated with Virtual Surgical Planning (VSP), offers surgeons precise 4D facial mapping capabilities. It accurately identifies facial landmarks, tailoring surgical interventions to individual patients, and streamlining the overall surgical procedure. CONCLUSION: FaceMesh emerges as a revolutionary tool in modern maxillofacial surgery. This deep neural network empowers surgeons with detailed insights into facial morphology, aiding in personalized interventions and optimizing surgical outcomes. The real-time assessment of facial dynamics contributes to improved aesthetic and functional results, particularly in complex cases like facial asymmetries or reconstructions. Additionally, FaceMesh has the potential for early detection of medical conditions and disease prediction, further enhancing patient care. Ongoing refinement and validation are essential to address limitations and ensure the reliability and effectiveness of FaceMesh in clinical settings.
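A minimal sketch of landmark extraction with MediaPipe's Python bindings, in the spirit of the code this note refers to, is shown below; the image path and landmark handling are illustrative, not the authors' code.

```python
# Hedged sketch of FaceMesh landmark extraction via MediaPipe's Python bindings.
# The input file and the downstream use of the points are illustrative only.
import cv2
import mediapipe as mp

mp_face_mesh = mp.solutions.face_mesh

image = cv2.imread("patient_frontal.jpg")                 # hypothetical input
rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

with mp_face_mesh.FaceMesh(static_image_mode=True,
                           max_num_faces=1,
                           refine_landmarks=True) as face_mesh:
    results = face_mesh.process(rgb)

if results.multi_face_landmarks:
    landmarks = results.multi_face_landmarks[0].landmark   # 478 normalised points
    h, w = image.shape[:2]
    # Convert normalised coordinates to pixel coordinates for planning overlays.
    points = [(int(lm.x * w), int(lm.y * h)) for lm in landmarks]
    print(f"extracted {len(points)} facial landmarks")
```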


Subjects
Surgery, Computer-Assisted, Humans, Surgery, Computer-Assisted/methods, Face/surgery, Algorithms, Artificial Intelligence, Anatomic Landmarks, Surgery, Oral/methods, Neural Networks, Computer, Imaging, Three-Dimensional/methods, Oral Surgical Procedures/methods, Software
17.
J Imaging ; 10(3)2024 Feb 28.
Article in English | MEDLINE | ID: mdl-38535139

ABSTRACT

Personal privacy protection has been extensively investigated. The privacy protection of face recognition applications combines face privacy protection with face recognition. Traditional face privacy-protection methods encrypt or perturb facial images for protection. However, the original facial images or parameters need to be restored during recognition. In this paper, it is found that faces can still be recognized correctly when only some of the high-order and local feature information from faces is retained, while the rest of the information is fuzzed. Based on this, a privacy-preserving face recognition method combining random convolution and self-learning batch normalization is proposed. This method generates a privacy-preserving scrambled facial image whose degree of blurring approaches that of an encrypted image. The server directly recognizes the scrambled facial image, and the recognition accuracy is equivalent to that of the normal facial image. At the same time, the method ensures the revocability and irreversibility of face privacy protection. In the experiments, the proposed method is tested on the LFW, CelebA, and self-collected face datasets. On the three datasets, the proposed method outperforms existing face privacy-preserving recognition methods in terms of visual information elimination and recognition accuracy. The recognition accuracy is >99%, and the visual information elimination is close to an encryption effect.
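The core idea, scrambling the face with a fixed random convolution before it ever reaches the server, could look roughly like the sketch below. The paper's self-learning batch normalization scheme is not reproduced, and all parameters are assumptions.

```python
# Hedged sketch: pass the face through a randomly initialised, frozen
# convolution so the image sent to the server is visually scrambled.
# This is an illustration of the general idea, not the paper's method.
import torch
import torch.nn as nn

class RandomScrambler(nn.Module):
    def __init__(self, channels=3, kernel_size=7, seed=42):
        super().__init__()
        torch.manual_seed(seed)                      # a per-user seed acts like a key
        self.conv = nn.Conv2d(channels, channels, kernel_size,
                              padding=kernel_size // 2)
        self.bn = nn.BatchNorm2d(channels)
        for p in self.parameters():
            p.requires_grad_(False)                  # frozen: purely a scrambling map

    def forward(self, x):
        return self.bn(self.conv(x))

scrambler = RandomScrambler()
face = torch.rand(1, 3, 112, 112)                    # stand-in face image
scrambled = scrambler(face)                          # sent to the server instead of the face
print(scrambled.shape)
```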

18.
J Integr Neurosci ; 23(3): 48, 2024 Mar 01.
Article in English | MEDLINE | ID: mdl-38538212

ABSTRACT

In the context of perceiving individuals within and outside of social groups, there are distinct cognitive processes and mechanisms in the brain. Extensive research in recent years has delved into the neural mechanisms that underlie differences in how we perceive individuals from different social groups. To gain a deeper understanding of these neural mechanisms, we present a comprehensive review from the perspectives of facial recognition and memory, intergroup identification, empathy, and pro-social behavior. Specifically, we focus on studies that utilize functional magnetic resonance imaging (fMRI) and event-related potential (ERP) techniques to explore the relationship between brain regions and behavior. Findings from fMRI studies reveal that the brain regions associated with intergroup differentiation in perception and behavior do not operate independently but instead exhibit dynamic interactions. Similarly, ERP studies indicate that the amplitude of neural responses shows various combinations in relation to perception and behavior.


Subjects
Empathy, Facial Recognition, Humans, Magnetic Resonance Imaging, Brain/physiology, Evoked Potentials/physiology, Brain Mapping, Social Behavior
19.
J Med Internet Res ; 26: e42904, 2024 Mar 13.
Article in English | MEDLINE | ID: mdl-38477981

ABSTRACT

BACKGROUND: While characteristic facial features provide important clues for finding the correct diagnosis in genetic syndromes, valid assessment can be challenging. The next-generation phenotyping algorithm DeepGestalt analyzes patient images and provides syndrome suggestions. GestaltMatcher matches patient images with similar facial features. The new D-Score provides a score for the degree of facial dysmorphism. OBJECTIVE: We aimed to test state-of-the-art facial phenotyping tools by benchmarking GestaltMatcher and D-Score and comparing them to DeepGestalt. METHODS: Using a retrospective sample of 4796 images of patients with 486 different genetic syndromes (London Medical Database, GestaltMatcher Database, and literature images) and 323 inconspicuous control images, we determined the clinical use of D-Score, GestaltMatcher, and DeepGestalt, evaluating sensitivity; specificity; accuracy; the number of supported diagnoses; and potential biases such as age, sex, and ethnicity. RESULTS: DeepGestalt suggested 340 distinct syndromes and GestaltMatcher suggested 1128 syndromes. The top-30 sensitivity was higher for DeepGestalt (88%, SD 18%) than for GestaltMatcher (76%, SD 26%). DeepGestalt generally assigned lower scores but provided higher scores for patient images than for inconspicuous control images, thus allowing the 2 cohorts to be separated with an area under the receiver operating characteristic curve (AUROC) of 0.73. GestaltMatcher could not separate the 2 classes (AUROC 0.55). Trained for this purpose, D-Score achieved the highest discriminatory power (AUROC 0.86). D-Score's levels increased with the age of the depicted individuals. Male individuals yielded higher D-scores than female individuals. Ethnicity did not appear to influence D-scores. CONCLUSIONS: If used with caution, algorithms such as D-score could help clinicians with constrained resources or limited experience in syndromology to decide whether a patient needs further genetic evaluation. Algorithms such as DeepGestalt could support diagnosing rather common genetic syndromes with facial abnormalities, whereas algorithms such as GestaltMatcher could suggest rare diagnoses that are unknown to the clinician in patients with a characteristic, dysmorphic face.
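The patient-versus-control separation reported as an AUROC can be reproduced in outline as follows; the scores below are synthetic stand-ins, not DeepGestalt, GestaltMatcher, or D-Score outputs.

```python
# Hedged sketch: discriminatory power of a per-image dysmorphism score as the
# area under the ROC curve. Scores and group sizes mirror the cohort sizes in
# the abstract but the values themselves are synthetic.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
labels = np.concatenate([np.ones(4796), np.zeros(323)])   # patients vs. controls
scores = np.concatenate([
    rng.normal(1.0, 1.0, 4796),                            # higher scores for patients
    rng.normal(0.0, 1.0, 323),
])
print("AUROC:", roc_auc_score(labels, scores))
```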


Subjects
Algorithms, Benchmarking, Humans, Female, Male, Retrospective Studies, Area Under Curve, Computers
20.
Math Biosci Eng ; 21(3): 4165-4186, 2024 Feb 26.
Article in English | MEDLINE | ID: mdl-38549323

ABSTRACT

In recent years, the extensive use of facial recognition technology has raised concerns about data privacy and security in various applications, such as improving security and streamlining attendance systems and smartphone access. In this study, a blockchain-based decentralized facial recognition system (DFRS) is presented that has been designed to overcome these complexities. The DFRS takes a trailblazing approach, focusing on finding a critical balance between the benefits of facial recognition and the protection of individuals' privacy rights in an era of increasing monitoring. First, the facial traits are segmented into separate clusters, which are maintained by a specialized node that preserves data privacy and security. After that, data obfuscation is performed using generative adversarial networks. To ensure the security and authenticity of the data, the facial data is encoded and stored in the blockchain. The proposed system achieves significant results on the CelebA dataset, which shows the effectiveness of the proposed approach. The proposed model demonstrates enhanced efficacy over existing methods, attaining 99.80% accuracy on the dataset. The study's results emphasize the system's efficacy, especially in biometrics and privacy-focused applications, demonstrating outstanding precision and efficiency during its implementation. This research provides a complete and novel solution for secure facial recognition and data security for privacy protection.


Subjects
Blockchain, Deep Learning, Facial Recognition, Humans, Privacy, Phenotype