Results 1 - 5 of 5
1.
PLoS Genet ; 20(2): e1011168, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38412177

ABSTRACT

Artificial intelligence (AI) for facial diagnostics is increasingly used in the genetics clinic to evaluate patients with potential genetic conditions. Current approaches focus on one type of AI called deep learning (DL). While DL-based facial diagnostic platforms have a high accuracy rate for many conditions, less is understood about how this technology assesses and classifies (categorizes) images, and how this compares to humans. To compare human and computer attention, we performed eye-tracking analyses of geneticist clinicians (n = 22) and non-clinicians (n = 22) who viewed images of people with 10 different genetic conditions, as well as images of unaffected individuals. We calculated the Intersection-over-Union (IoU) and Kullback-Leibler divergence (KL) to compare the visual attention of the two participant groups, and then the clinician group against the saliency maps of our deep learning classifier. We found that human visual attention differs greatly from the DL model's saliency results. Averaging over all the test images, the IoU and KL metrics for the successful (accurate) clinicians' visual attention versus the saliency maps were 0.15 and 11.15, respectively. Individuals also tend to have a specific pattern of image inspection, and clinicians demonstrate different visual attention patterns than non-clinicians (the IoU and KL of clinicians versus non-clinicians were 0.47 and 2.73, respectively). This study shows that humans (at different levels of expertise) and a computer vision model examine images differently. Understanding these differences can improve the design and use of AI tools and lead to more meaningful interactions between clinicians and AI technologies.


Subject(s)
Artificial Intelligence, Computers, Humans, Computer Simulation
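The two attention-comparison metrics named in the abstract above can be sketched in a few lines. This is a minimal illustration, not the paper's pipeline: thresholding gaze heatmaps into binary masks for IoU and normalizing maps into pixel distributions for KL are assumptions about preprocessing.

```python
import numpy as np

def iou(mask_a, mask_b):
    """Intersection-over-Union of two binary attention masks
    (e.g. a thresholded gaze heatmap vs. a thresholded saliency map)."""
    intersection = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return intersection / union if union else 0.0

def kl_divergence(p_map, q_map, eps=1e-10):
    """KL divergence D(P||Q) between two attention maps, each first
    normalized into a probability distribution over pixels."""
    p = np.asarray(p_map, dtype=float).ravel()
    q = np.asarray(q_map, dtype=float).ravel()
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))
```

Identical maps give IoU 1 and KL 0; the more the two attention patterns diverge, the lower the IoU and the higher the KL, which is the direction of the clinician-versus-model values (0.15 and 11.15) reported above.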
2.
Bioinformatics ; 40(Supplement_1): i110-i118, 2024 Jun 28.
Article in English | MEDLINE | ID: mdl-38940144

ABSTRACT

Artificial intelligence (AI) is increasingly used in genomics research and practice, and generative AI has garnered significant recent attention. In clinical applications of generative AI, aspects of the underlying datasets can impact results, and confounders should be studied and mitigated. One example involves the facial expressions of people with genetic conditions. Stereotypically, Williams syndrome (WS) and Angelman syndrome (AS) are associated with a "happy" demeanor, including a smiling expression. Clinical geneticists may be more likely to identify these conditions in images of smiling individuals. To study the impact of facial expression, we analyzed publicly available facial images of approximately 3500 individuals with genetic conditions. Using a deep learning (DL) image classifier, we found that WS and AS images with non-smiling expressions had significantly lower prediction probabilities for the correct syndrome labels than those with smiling expressions. This was not seen for 22q11.2 deletion and Noonan syndromes, which are not associated with a smiling expression. To further explore the effect of facial expressions, we computationally altered the facial expressions in these images. We trained HyperStyle, a GAN-inversion technique compatible with StyleGAN2, to determine the vector representations of our images. Then, following the concept of InterfaceGAN, we edited these vectors to recreate the original images in a phenotypically accurate way but with a different facial expression. Through online surveys and an eye-tracking experiment, we examined how altered facial expressions affect the performance of human experts. Overall, we found that facial expression is associated with diagnostic accuracy to a variable degree across different genetic conditions.


Subject(s)
Facial Expression, Humans, Deep Learning, Artificial Intelligence, Medical Genetics/methods, Williams Syndrome/genetics
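The InterfaceGAN-style latent edit described in the abstract above can be sketched as a linear move in latent space. This is a simplified illustration: the class-mean difference below is a stand-in for the hyperplane normal that InterfaceGAN obtains with a linear classifier, and all names (`attribute_direction`, `edit_latent`, the "smile" attribute) are illustrative, not the authors' code.

```python
import numpy as np

def attribute_direction(latents, labels):
    """Estimate a 'smile' direction in GAN latent space as the difference
    of class means (smiling=1 vs. non-smiling=0) -- a simplified stand-in
    for the separating-hyperplane normal InterfaceGAN fits."""
    latents = np.asarray(latents, dtype=float)
    labels = np.asarray(labels)
    direction = latents[labels == 1].mean(axis=0) - latents[labels == 0].mean(axis=0)
    return direction / np.linalg.norm(direction)

def edit_latent(w, direction, alpha):
    """Move a GAN-inverted latent code w along the attribute direction by
    strength alpha; decoding the edited code through the generator would
    yield the same face with an altered expression."""
    return w + alpha * direction
```

In the paper's setup, the latent codes would come from HyperStyle inversion of the patient images, and the edited codes would be decoded by StyleGAN2; here only the vector arithmetic is shown.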
3.
Stud Health Technol Inform ; 302: 932-936, 2023 May 18.
Article in English | MEDLINE | ID: mdl-37203539

ABSTRACT

Computer vision has useful applications in precision medicine, and recognizing the facial phenotypes of genetic disorders is one of them. Many genetic disorders are known to affect the visual appearance and geometry of the face. Automated classification and similarity retrieval aid physicians in decision-making, helping diagnose possible genetic conditions as early as possible. Previous work has addressed this as a classification problem; however, the sparse label distribution, the small number of labeled samples, and large class imbalances across categories make representation learning and generalization harder. In this study, we used a facial recognition model trained on a large corpus of healthy individuals as a pretext task and transferred it to facial phenotype recognition. Furthermore, we created simple baselines of few-shot meta-learning methods to improve our base feature descriptor. Our quantitative results on the GestaltMatcher Database (GMDB) show that our CNN baseline surpasses previous works, including GestaltMatcher, and that few-shot meta-learning strategies improve retrieval performance in both frequent and rare classes.


Subject(s)
Computer-Assisted Diagnosis, Face, Inborn Genetic Diseases, Phenotype, Humans, Inborn Genetic Diseases/diagnostic imaging
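The transfer-plus-few-shot retrieval setup in the abstract above can be sketched as nearest-prototype retrieval over face embeddings. This is a hedged illustration: the abstract does not specify which meta-learning methods were used, so the prototypical-network-style class means below are one common few-shot baseline, and the embedding function (the pretrained face-recognition model) is assumed given.

```python
import numpy as np

def class_prototypes(embeddings, labels):
    """One prototype per condition: the mean of its few support
    embeddings (prototypical-network style). In the real system the
    embeddings would come from the face-recognition model pretrained
    on healthy individuals."""
    embeddings = np.asarray(embeddings, dtype=float)
    labels = np.asarray(labels)
    classes = np.unique(labels)
    protos = np.stack([embeddings[labels == c].mean(axis=0) for c in classes])
    return classes, protos

def rank_conditions(query, classes, protos):
    """Rank candidate conditions by cosine similarity between a query
    face embedding and each class prototype (similarity retrieval)."""
    q = query / np.linalg.norm(query)
    p = protos / np.linalg.norm(protos, axis=1, keepdims=True)
    return classes[np.argsort(-(p @ q))]
```

Averaging support embeddings into a single prototype per class is what makes the scheme workable for rare conditions with only a handful of labeled images.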
4.
medRxiv ; 2023 Jul 28.
Article in English | MEDLINE | ID: mdl-37577564

ABSTRACT

Deep learning (DL) and other types of artificial intelligence (AI) are increasingly used in many biomedical areas, including genetics. One frequent use in medical genetics involves evaluating images of people with potential genetic conditions to help with diagnosis. A central question involves better understanding how AI classifiers assess images compared to humans. To explore this, we performed eye-tracking analyses of geneticist clinicians and non-clinicians. We compared results to DL-based saliency maps. We found that human visual attention when assessing images differs greatly from the parts of images weighted by the DL model. Further, individuals tend to have a specific pattern of image inspection, and clinicians demonstrate different visual attention patterns than non-clinicians.

5.
Data Brief ; 35: 106909, 2021 Apr.
Article in English | MEDLINE | ID: mdl-33748360

ABSTRACT

Extensive use of the internet has enabled easy access to many different sources, such as news and social media. Content shared on the internet cannot be fully fact-checked, and, as a result, misinformation can spread quickly and easily. Recently, psychologists and economists have shown in many experiments that prior beliefs, knowledge, and the willingness to think deliberately are important determinants of who falls for fake news. Many of these studies rely only on self-reports, which suffer from social desirability bias. More objective measures of information processing, such as eye movements, are needed to effectively analyze the reading of news. To give the research community the opportunity to study human behavior in relation to news truthfulness, we propose the FakeNewsPerception dataset. FakeNewsPerception consists of eye movements during reading, perceived believability scores, questionnaires including the Cognitive Reflection Test (CRT) and News-Find-Me (NFM) perception, and political orientation, collected from 25 participants viewing 60 news items. Initial analyses of the eye movements reveal that human perception differs when viewing true and fake news.
