Results 1 - 5 of 5
1.
PLoS Genet ; 20(2): e1011168, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38412177

ABSTRACT

Artificial intelligence (AI) for facial diagnostics is increasingly used in the genetics clinic to evaluate patients with potential genetic conditions. Current approaches focus on one type of AI called deep learning (DL). While DL-based facial diagnostic platforms have a high accuracy rate for many conditions, less is understood about how this technology assesses and classifies (categorizes) images, and how this compares to humans. To compare human and computer attention, we performed eye-tracking analyses of geneticist clinicians (n = 22) and non-clinicians (n = 22) who viewed images of people with 10 different genetic conditions, as well as images of unaffected individuals. We calculated the Intersection-over-Union (IoU) and Kullback-Leibler divergence (KL) to compare the visual attention of the two participant groups, and then compared the clinician group against the saliency maps of our deep learning classifier. We found that human visual attention differs greatly from the DL model's saliency results. Averaging over all the test images, the IoU and KL metrics for the successful (accurate) clinician visual attention versus the saliency maps were 0.15 and 11.15, respectively. Individuals also tend to have a specific pattern of image inspection, and clinicians demonstrate different visual attention patterns than non-clinicians (IoU and KL of clinicians versus non-clinicians were 0.47 and 2.73, respectively). This study shows that humans (at different levels of expertise) and a computer vision model examine images differently. Understanding these differences can improve the design and use of AI tools, and lead to more meaningful interactions between clinicians and AI technologies.
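The two comparison metrics above can be computed straightforwardly from a pair of 2D attention maps. The sketch below is illustrative only, not the authors' code: the binarization threshold for IoU and the epsilon smoothing for KL are assumptions, since the abstract does not specify these preprocessing choices.

```python
import numpy as np

def attention_iou(a, b, threshold=0.5):
    """Intersection-over-Union of two attention maps after binarizing
    each at a fraction of its own maximum (threshold is an assumption)."""
    a_bin = a >= threshold * a.max()
    b_bin = b >= threshold * b.max()
    inter = np.logical_and(a_bin, b_bin).sum()
    union = np.logical_or(a_bin, b_bin).sum()
    return inter / union if union else 0.0

def attention_kl(p, q, eps=1e-10):
    """KL divergence D(p || q) after normalizing both maps so they
    sum to 1; eps avoids log(0) (smoothing scheme is an assumption)."""
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))
```

Identical maps give IoU = 1.0 and KL = 0; the reported clinician-vs-saliency values (IoU 0.15, KL 11.15) therefore indicate very low overlap.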


Subject(s)
Artificial Intelligence , Computers , Humans , Computer Simulation
2.
medRxiv ; 2023 Jul 28.
Article in English | MEDLINE | ID: mdl-37577564

ABSTRACT

Deep learning (DL) and other types of artificial intelligence (AI) are increasingly used in many biomedical areas, including genetics. One frequent use in medical genetics involves evaluating images of people with potential genetic conditions to help with diagnosis. A central question involves better understanding how AI classifiers assess images compared to humans. To explore this, we performed eye-tracking analyses of geneticist clinicians and non-clinicians. We compared results to DL-based saliency maps. We found that human visual attention when assessing images differs greatly from the parts of images weighted by the DL model. Further, individuals tend to have a specific pattern of image inspection, and clinicians demonstrate different visual attention patterns than non-clinicians.
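Before eye-tracking data can be compared against a saliency map, discrete gaze fixations are typically converted into a continuous attention heatmap. A common approach, sketched below, places a Gaussian at each fixation point; the bandwidth `sigma` and the normalization are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def fixation_heatmap(fixations, shape, sigma=10.0):
    """Turn a list of (row, col) gaze fixations into an attention
    heatmap by summing a Gaussian per fixation, then scaling the
    result to [0, 1]. Parameters here are illustrative."""
    h, w = shape
    rows, cols = np.mgrid[0:h, 0:w]
    heat = np.zeros(shape)
    for r, c in fixations:
        heat += np.exp(-((rows - r) ** 2 + (cols - c) ** 2) / (2 * sigma ** 2))
    return heat / heat.max()
```

The resulting map has the same shape as the image, so it can be compared pixel-wise against a model's saliency map.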

3.
Stud Health Technol Inform ; 302: 917-921, 2023 May 18.
Article in English | MEDLINE | ID: mdl-37203536

ABSTRACT

COVID-19 presence classification and severity prediction via (3D) thorax computed tomography scans have become important tasks in recent times. Especially for capacity planning of intensive care units, predicting the future severity of a COVID-19 patient is crucial. The presented approach follows state-of-the-art techniques to aid medical professionals in these situations. It comprises an ensemble learning strategy via 5-fold cross-validation that includes transfer learning and combines pre-trained 3D versions of ResNet34 and DenseNet121 for COVID-19 classification and severity prediction, respectively. Further, domain-specific preprocessing was applied to optimize model performance. In addition, medical information such as the infection-lung ratio, patient age, and sex was included. The presented model achieves an AUC of 79.0% for predicting COVID-19 severity and an AUC of 83.7% for classifying the presence of an infection, which is comparable with other currently popular methods. This approach is implemented using the AUCMEDI framework and relies on well-known network architectures to ensure robustness and reproducibility.
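The cross-validation ensembling described above follows a standard pattern: split the data into k folds, train one model per fold, and average the probability outputs of the k models at inference time. The minimal sketch below shows only that mechanism; the fold assignment and the averaging rule are generic assumptions, not the AUCMEDI implementation.

```python
import numpy as np

def kfold_indices(n, k=5, seed=0):
    """Shuffle n sample indices and split them into k folds
    (illustrative of the 5-fold cross-validation setup)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)
    return np.array_split(idx, k)

def ensemble_predict(fold_models, x):
    """Average the probability outputs of the k fold models,
    the usual cross-validation ensembling rule."""
    return np.mean([m(x) for m in fold_models], axis=0)
```

Each fold model sees a different 4/5 of the data during training, so averaging their outputs reduces variance compared with any single model.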


Subject(s)
COVID-19 , Humans , Reproducibility of Results , Intensive Care Units , Learning , Research Design
4.
Stud Health Technol Inform ; 302: 932-936, 2023 May 18.
Article in English | MEDLINE | ID: mdl-37203539

ABSTRACT

Computer vision has useful applications in precision medicine, and recognizing the facial phenotypes of genetic disorders is one of them. Many genetic disorders are known to affect the visual appearance and geometry of faces. Automated classification and similarity retrieval aid physicians in decision-making to diagnose possible genetic conditions as early as possible. Previous work has addressed the task as a classification problem; however, the sparse label distribution, few labeled samples, and large class imbalances across categories make representation learning and generalization harder. In this study, we used a facial recognition model trained on a large corpus of healthy individuals as a pre-task and transferred it to facial phenotype recognition. Furthermore, we created simple baselines of few-shot meta-learning methods to improve our base feature descriptor. Our quantitative results on the GestaltMatcher Database (GMDB) show that our CNN baseline surpasses previous works, including GestaltMatcher, and that few-shot meta-learning strategies improve retrieval performance in both frequent and rare classes.
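Similarity retrieval over a learned feature descriptor usually means embedding every gallery face with the transferred network and ranking gallery entries by cosine similarity to the query embedding. The sketch below shows that ranking step only; it assumes precomputed embeddings and is not the paper's or GestaltMatcher's actual retrieval code.

```python
import numpy as np

def cosine_retrieve(query, gallery, top_k=5):
    """Rank gallery embeddings (one row per face) by cosine
    similarity to a query embedding; returns the indices and
    similarities of the top_k matches. Illustrative sketch."""
    q = query / np.linalg.norm(query)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    sims = g @ q
    order = np.argsort(-sims)[:top_k]
    return order, sims[order]
```

Because retrieval only needs a good distance in embedding space rather than a trained classifier head, it degrades more gracefully for rare classes with very few labeled samples.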


Subject(s)
Diagnosis, Computer-Assisted , Face , Genetic Diseases, Inborn , Phenotype , Humans , Genetic Diseases, Inborn/diagnostic imaging
5.
Annu Int Conf IEEE Eng Med Biol Soc ; 2021: 2896-2902, 2021 11.
Article in English | MEDLINE | ID: mdl-34891852

ABSTRACT

Cancer is a major public health issue and takes the second-highest toll of deaths caused by non-communicable diseases worldwide. Automatically detecting lesions at an early stage is essential to increase the chance of a cure. This study proposes a novel dilated Faster R-CNN with modulated deformable convolution and modulated deformable position-sensitive region of interest (RoI) pooling to detect lesions in computed tomography images. A pre-trained VGG-16 is transferred as the backbone of Faster R-CNN, followed by a region proposal network and an RoI pooling layer to achieve lesion detection. The modulated deformable convolutional layers are employed to learn deformable convolutional filters, while the modulated deformable position-sensitive RoI pooling provides enhanced feature extraction on the feature maps. Moreover, dilated convolutions are combined with the modulated deformable convolutions to fine-tune the VGG-16 model with multi-scale receptive fields. In the experiments evaluated on the DeepLesion dataset, the modulated deformable position-sensitive RoI pooling model achieves the highest average sensitivity score of 58.8% with dilation of [4, 4, 4] and outperforms state-of-the-art models in the range of [2, 8] average false positives per image. This research demonstrates the suitability of dilation modifications and the possibility of enhancing performance using a modulated deformable position-sensitive RoI pooling layer for universal lesion detectors.
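The dilation mechanism the abstract relies on can be shown with a naive 2D convolution: with dilation d, a k-by-k kernel samples the input on a stride-d grid, covering an effective ((k-1)d + 1)-square receptive field with no extra parameters. The loop-based sketch below is purely didactic (valid padding, single channel), not the modulated deformable layers used in the paper.

```python
import numpy as np

def dilated_conv2d(x, kernel, dilation=1):
    """Naive 2D dilated convolution with valid padding. With
    dilation d, a 3x3 kernel spans a (2d+1)x(2d+1) window, which
    is how dilation enlarges the receptive field for free."""
    kh, kw = kernel.shape
    eff_h = (kh - 1) * dilation + 1  # effective kernel extent
    eff_w = (kw - 1) * dilation + 1
    out_h = x.shape[0] - eff_h + 1
    out_w = x.shape[1] - eff_w + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            # sample the input on a stride-`dilation` grid
            patch = x[i:i + eff_h:dilation, j:j + eff_w:dilation]
            out[i, j] = np.sum(patch * kernel)
    return out
```

A dilation schedule like the paper's [4, 4, 4] applies this enlargement at three successive layers, compounding the receptive-field growth across the backbone.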


Subject(s)
Neural Networks, Computer , Tomography, X-Ray Computed