Results 1 - 5 of 5
1.
Med Image Underst Anal; 14122: 48-63, 2024.
Article in English | MEDLINE | ID: mdl-39156493

ABSTRACT

Acquiring properly annotated data is expensive in the medical field as it requires experts, time-consuming protocols, and rigorous validation. Active learning attempts to minimize the need for large annotated samples by actively sampling the most informative examples for annotation. These examples contribute significantly to improving the performance of supervised machine learning models, and thus, active learning can play an essential role in selecting the most appropriate information in deep learning-based diagnosis, clinical assessments, and treatment planning. Although some existing works have proposed methods for sampling the best examples for annotation in medical image analysis, they are not task-agnostic and do not use multimodal auxiliary information in the sampler, which has the potential to increase robustness. Therefore, in this work, we propose a Multimodal Variational Adversarial Active Learning (M-VAAL) method that uses auxiliary information from additional modalities to enhance the active sampling. We applied our method to two datasets: i) brain tumor segmentation and multi-label classification using the BraTS2018 dataset, and ii) chest X-ray image classification using the COVID-QU-Ex dataset. Our results show a promising direction toward data-efficient learning under limited annotations.
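The selection step of the active learning loop described above can be sketched with a generic uncertainty-sampling criterion. Note this is an illustrative stand-in, not the M-VAAL sampler itself, which instead trains a variational adversarial network on multimodal inputs; the example ids and probabilities are hypothetical.

```python
import math

def entropy(probs):
    """Predictive entropy of a class-probability vector."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_for_annotation(pool_probs, budget):
    """Rank unlabeled examples by predictive entropy and return the
    `budget` most uncertain ones for expert annotation."""
    ranked = sorted(pool_probs, key=lambda k: entropy(pool_probs[k]), reverse=True)
    return ranked[:budget]

pool = {
    "scan_a": [0.50, 0.50],   # model is maximally unsure
    "scan_b": [0.95, 0.05],   # model is confident
    "scan_c": [0.60, 0.40],
}
print(select_for_annotation(pool, 2))  # → ['scan_a', 'scan_c']
```

Any acquisition function can be dropped in for `entropy`; the point is that annotation effort is spent only on the examples the current model finds most informative.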

2.
Article in English | MEDLINE | ID: mdl-37123015

ABSTRACT

Label noise is inevitable in medical image databases developed for deep learning, owing to inter-observer variability caused by the differing expertise of the annotators and, in some cases, to automated methods that generate labels from medical reports. Incorrect annotations, or label noise, can degrade the performance of supervised deep learning models and bias their evaluation. Existing literature shows that noise in one class has minimal impact on a model's performance for another class in natural image classification problems, where the target classes have relatively distinct shapes and share few visual cues that would allow knowledge transfer among classes. However, it is unclear how class-dependent label noise affects performance on medical images, where output classes can be difficult to distinguish even for experts and knowledge transfer across classes during training is highly likely. We hypothesize that for medical image classification tasks in which the classes share very similar shapes and differ only in texture, a noisy label in one class may affect performance across other classes, unlike the case where the target classes have distinct shapes and are visually distinguishable. In this paper, we test this hypothesis using two publicly available datasets: a 2D organ classification dataset whose target organ classes are visually distinct, and a histopathology image classification dataset whose target classes look very similar. Our results show that label noise in one class has a much higher impact on the model's performance on other classes for the histopathology dataset than for the organ dataset.
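Class-dependent label noise of the kind studied above can be simulated by corrupting only one class's labels. This is a minimal sketch of such an injection, assuming uniform flips to the other classes; the paper's exact corruption protocol may differ.

```python
import random

def inject_class_noise(labels, source_class, noise_rate, n_classes, seed=0):
    """Flip a fraction `noise_rate` of the labels belonging to `source_class`
    to a uniformly chosen other class; every other class stays clean.
    This confines the noise to a single class (class-dependent noise)."""
    rng = random.Random(seed)
    flipped = []
    for y in labels:
        if y == source_class and rng.random() < noise_rate:
            y = rng.choice([c for c in range(n_classes) if c != source_class])
        flipped.append(y)
    return flipped
```

Training a classifier on `flipped` versus the clean labels, while varying `source_class` and `noise_rate`, lets one measure how noise in one class spills over into accuracy on the others.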

3.
Data Eng Med Imaging; 14314: 78-90, 2023 Oct.
Article in English | MEDLINE | ID: mdl-39144367

ABSTRACT

Noisy labels hurt deep learning-based supervised image classification because models may overfit the noise and learn corrupted feature extractors. For natural image classification with noisy labeled data, initializing the model with contrastive self-supervised pretrained weights has been shown to reduce feature corruption and improve classification performance. However, no prior work has explored: i) how other self-supervised approaches, such as pretext task-based pretraining, affect learning with noisy labels, or ii) how any self-supervised pretraining method alone performs for medical images in noisy label settings. Medical images often feature smaller datasets and subtle inter-class variations, requiring human expertise for correct classification. It is therefore unclear whether methods that improve learning with noisy labels on natural image datasets such as CIFAR would also help with medical images. In this work, we explore contrastive and pretext task-based self-supervised pretraining to initialize the weights of a deep learning classification model for two medical datasets with self-induced noisy labels: NCT-CRC-HE-100K tissue histological images and COVID-QU-Ex chest X-ray images. Our results show that models initialized with pretrained weights obtained from self-supervised learning can effectively learn better features and improve robustness against noisy labels.
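The initialization scheme discussed above amounts to copying pretrained backbone weights into a freshly built classifier while leaving the task head random. A framework-agnostic sketch, with hypothetical parameter names standing in for real layer tensors:

```python
def init_from_pretrained(classifier_params, encoder_params):
    """Initialize a classifier's backbone from self-supervised encoder
    weights: any parameter whose name matches is copied over, while
    unmatched parameters (e.g. the classification head) keep their
    random initialization."""
    params = dict(classifier_params)
    for name, value in encoder_params.items():
        if name in params:
            params[name] = value
    return params

random_init = {"conv1.weight": "rand_a", "conv2.weight": "rand_b", "head.weight": "rand_h"}
pretrained  = {"conv1.weight": "ssl_a", "conv2.weight": "ssl_b"}
print(init_from_pretrained(random_init, pretrained))
```

In a real framework this is the usual "load state dict, skip the head" pattern; only the backbone benefits from the self-supervised features, and the head is then trained on the (noisy) labels.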

4.
ACS Omega; 6(49): 33837-33845, 2021 Dec 14.
Article in English | MEDLINE | ID: mdl-34926930

ABSTRACT

Paper-based analytical devices (PADs) employing colorimetric detection and smartphone images have gained wide acceptance in a variety of measurement applications. PADs are primarily meant for field settings, where assay and imaging conditions vary greatly and reduce accuracy. Recently, machine learning (ML)-assisted models have been used in image analysis. We evaluated four ML models (logistic regression, support vector machine (SVM), random forest, and artificial neural network (ANN)) combined with three image color spaces (RGB, HSV, and LAB) for their ability to predict analyte concentrations accurately. We created training and test datasets from images of PADs taken under varying lighting conditions, with different cameras and users, for food color and enzyme inhibition assays. Prediction accuracy was higher for the food color assay than the enzyme inhibition assay for most model and color space combinations, and all models predicted coarse-level classes better than fine-grained concentration classes. Including a reference color alongside the sample color improved prediction, likely because the reference partially factors out variation in ambient assay and imaging conditions. The best concentration class prediction accuracy was 0.966 for food color, using the ANN model with the LAB color space, and 0.908 for the enzyme inhibition assay, using the SVM model with the LAB color space. Appropriate combinations of model and color space make PADs a powerful, low-cost, rapid field-testing tool for analyzing large numbers of samples.
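The reference-color idea above can be sketched as a simple feature correction followed by a nearest-centroid class assignment. This is an illustrative sketch, not the paper's trained models; the color values and class centroids are hypothetical.

```python
def reference_corrected(sample_rgb, reference_rgb):
    """Express the sample patch color relative to a reference patch captured
    in the same photo, partially cancelling lighting and camera differences."""
    return tuple(s - r for s, r in zip(sample_rgb, reference_rgb))

def nearest_class(feature, centroids):
    """Assign the concentration class whose centroid is closest
    (squared Euclidean distance) to the corrected color feature."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda c: dist2(feature, centroids[c]))

centroids = {"low": (5, -5, 0), "high": (40, -30, -20)}   # hypothetical
feature = reference_corrected((140, 70, 80), (100, 100, 100))
print(nearest_class(feature, centroids))  # → high
```

An SVM or ANN trained on such corrected features plays the same role as the nearest-centroid rule here, but learns the decision boundary from data instead of fixed centroids.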

5.
Med Image Anal; 72: 102115, 2021 Aug.
Article in English | MEDLINE | ID: mdl-34134084

ABSTRACT

Scoliosis is a common medical condition that most often develops during the growth spurt just before puberty, and untreated scoliosis may cause long-term sequelae. Accurate, automated quantitative estimation of spinal curvature is therefore an important task for the clinical evaluation and treatment planning of scoliosis. Several attempts have been made at automated Cobb angle estimation on single-view x-rays, but high accuracy remains challenging because the information in x-rays is difficult to exploit efficiently. To advance methods for accurate automated spinal curvature estimation, the AASCE2019 challenge provides spinal anterior-posterior x-ray images with manual labels for training and testing the participating methods. We review the eight top-ranked methods from 12 teams. Experimental results show that the best-performing method achieved a symmetric mean absolute percentage error (SMAPE) of 21.71%. Limitations and possible future directions are also described in the paper. We hope the AASCE2019 dataset and this paper provide insight into quantitative measurement of the spine.
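The SMAPE metric used to rank the methods above can be computed as follows. SMAPE has several variants differing in the denominator; this sketch uses one common formulation, which may differ in detail from the challenge's exact scoring script.

```python
def smape(y_true, y_pred):
    """Symmetric mean absolute percentage error, in percent, using
    SMAPE = (100 / N) * sum(|t - p| / (|t| + |p|)).
    Assumes angle values are positive, so the denominator is never zero."""
    assert len(y_true) == len(y_pred)
    total = sum(abs(t - p) / (abs(t) + abs(p)) for t, p in zip(y_true, y_pred))
    return 100.0 * total / len(y_true)

# Hypothetical Cobb angles in degrees: one exact prediction, one off by 10.
print(round(smape([40.0, 20.0], [40.0, 10.0]), 2))  # → 16.67
```

Because the error is normalized by the sum of truth and prediction, over- and under-estimates of the same magnitude are penalized symmetrically, which matters for small Cobb angles.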


Subject(s)
Scoliosis, Spine, Algorithms, Humans, Radiography, Scoliosis/diagnostic imaging, Spine/diagnostic imaging, X-Rays