Results 1 - 5 of 5
1.
Phys Eng Sci Med ; 47(3): 967-979, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38573489

ABSTRACT

Following the great success of deep learning methods in image and object classification, the biomedical image processing community has also applied them extensively to automatic diagnosis tasks. Unfortunately, most deep learning-based classification attempts in the literature focus solely on maximizing accuracy scores, without considering interpretability or patient-wise separation of training and test data. For example, most lung nodule classification papers using deep learning randomly shuffle the data before splitting it into training, validation, and test sets, so that some images from a person's Computed Tomography (CT) scan end up in the training set while other images of the same person end up in the validation or test sets. This can result in misleading reported accuracy rates and the learning of irrelevant features, ultimately reducing the real-life usability of these models. When deep neural networks trained with this traditional, unfair data shuffling are challenged with images from new patients, they perform poorly. In contrast, deep neural networks trained with strict patient-level separation maintain their accuracy rates even when tested on new patients' images. Heat map visualizations of the activations of the deep neural networks trained with strict patient-level separation indicate a higher degree of focus on the relevant nodules. We argue that the research question posed in the title has a positive answer only if the deep neural networks are trained with images of patients that are strictly isolated from the validation and testing patient sets.


Subjects
Deep Learning; Tomography, X-Ray Computed; Humans; Neural Networks, Computer; Image Processing, Computer-Assisted
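As a concrete illustration of the patient-level separation this abstract argues for, the sketch below splits a slice-level dataset by patient identifier so that no patient contributes images to more than one subset. The file name and column names are hypothetical; scikit-learn's GroupShuffleSplit is used here as one convenient way to enforce the grouping, not as the authors' own pipeline.

```python
# Patient-level split: all CT slices from one patient stay in the same subset.
# Assumes a CSV with columns "slice_path", "label", "patient_id" (hypothetical names).
import pandas as pd
from sklearn.model_selection import GroupShuffleSplit

df = pd.read_csv("nodule_slices.csv")  # hypothetical index of CT slices

splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=42)
train_idx, test_idx = next(splitter.split(df, df["label"], groups=df["patient_id"]))

train_df, test_df = df.iloc[train_idx], df.iloc[test_idx]

# Sanity check: no patient appears in both sets.
assert set(train_df["patient_id"]).isdisjoint(test_df["patient_id"])
```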
2.
Comput Methods Programs Biomed ; 229: 107318, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36592580

ABSTRACT

BACKGROUND AND OBJECTIVE: For early breast cancer detection, regular screening with mammography imaging is recommended. Routine examinations result in datasets with a predominant amount of negative samples. The limited representation of positive cases can be problematic for learning Computer-Aided Diagnosis (CAD) systems. Collecting data from multiple institutions is a potential solution to mitigate this problem. Recently, federated learning has emerged as an effective tool for collaborative learning. In this setting, local models perform computation on their private data to update the global model. The order and the frequency of local updates influence the final global model. In the context of federated adversarial learning to improve multi-site breast cancer classification, we investigate the role of the order in which samples are locally presented to the optimizers.

METHODS: We define a novel memory-aware curriculum learning method for the federated setting. We aim to improve the consistency of the local models by penalizing inconsistent predictions, i.e., forgotten samples. Our curriculum controls the order of the training samples, prioritizing those that are forgotten after the deployment of the global model. Our approach is combined with unsupervised domain adaptation to deal with domain shift while preserving data privacy.

RESULTS: Two classification metrics, the area under the receiver operating characteristic curve (ROC-AUC) and the area under the precision-recall curve (PR-AUC), are used to evaluate the performance of the proposed method. Our method is evaluated on three clinical datasets from different vendors. An ablation study showed the contribution of each component of our method. ROC-AUC and PR-AUC are improved on average by 5% and 6%, respectively, compared to the conventional federated setting.

CONCLUSIONS: We demonstrated the benefits of curriculum learning for the first time in a federated setting. Our results verified the effectiveness of memory-aware curriculum federated learning for multi-site breast cancer classification. Our code is publicly available at: https://github.com/ameliajimenez/curriculum-federated-learning.


Subjects
Awareness; Neoplasms; Cognition; Curriculum; Learning; Mammography
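The sketch below illustrates the memory-aware curriculum idea described above for a single federated client; it is not the authors' implementation (their code is at the repository linked in the abstract). A PyTorch classifier is assumed and all names are hypothetical: samples that were classified correctly before the global update but incorrectly after it ("forgotten" samples) are moved to the front of the local training order.

```python
# Sketch of a memory-aware curriculum for one federated client (illustrative only).
# Assumption: `model` is a torch classifier, `dataset` yields (x, y) pairs.
import torch
from torch.utils.data import DataLoader, Subset

@torch.no_grad()
def correctness(model, dataset, device="cpu"):
    """Return a boolean tensor: was each sample classified correctly?"""
    model.eval()
    flags = []
    for x, y in DataLoader(dataset, batch_size=64):
        pred = model(x.to(device)).argmax(dim=1).cpu()
        flags.append(pred == y)
    return torch.cat(flags)

def curriculum_order(prev_correct, curr_correct):
    """Indices ordered so that 'forgotten' samples (correct before the
    global update, wrong after it) come first in local training."""
    forgotten = prev_correct & ~curr_correct
    order = torch.argsort(forgotten.to(torch.int), descending=True)
    return order.tolist()

# Usage per federated round (sketch):
# prev = correctness(local_model, train_set)      # before receiving global weights
# local_model.load_state_dict(global_weights)
# curr = correctness(local_model, train_set)      # after deployment of the global model
# loader = DataLoader(Subset(train_set, curriculum_order(prev, curr)),
#                     batch_size=32, shuffle=False)
# ... standard local training over `loader`, then send updates back to the server.
```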
3.
J Med Imaging (Bellingham) ; 9(4): 044502, 2022 Jul.
Article in English | MEDLINE | ID: mdl-35937560

ABSTRACT

Purpose: Vascular changes are observed from the initial stages of breast cancer, and monitoring of vessel structures helps in early detection of malignancies. In recent years, thermal imaging has been evaluated as a low-cost imaging modality to visualize and analyze early vascularity. However, visual inspection of thermal vascularity is challenging and subjective. Therefore, automated techniques are needed to assist physicians in visualizing and interpreting vascularity, by marking vessel structures and by providing quantified qualitative parameters that help in malignancy classification.

Approach: In the literature, there are very few approaches for vascular analysis and classification of breast thermal images using interpretable vascular features. One major challenge is the automated detection of breast vascularity due to diffuse vessel boundaries. We first propose a deep learning-based semantic segmentation approach that generates heatmaps of vessel structures from two-dimensional breast thermal images for quantitative assessment of breast vascularity. Second, we extract interpretable vascular parameters and propose a classifier that predicts the likelihood of breast cancer purely from the extracted vascular parameters.

Results: The results of the cancer classifier were validated on an independent clinical dataset of 258 participants. The results were encouraging: the proposed approach segmented vessels well and achieved good classification performance, with an area under the receiver operating characteristic curve of 0.85 using the proposed vascularity parameters.

Conclusions: The detected vasculature and its associated high classification performance show the utility of the proposed approach for interpretation of breast vascularity.
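To make the second stage concrete, here is a minimal sketch of classifying malignancy likelihood from a table of interpretable vascular parameters. The feature names are hypothetical and a simple logistic regression stands in for the paper's unspecified classifier; ROC-AUC is reported as in the abstract.

```python
# Illustrative sketch only: predict malignancy from interpretable vascular parameters.
# Column names and file name are assumptions, not taken from the paper.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("vascular_parameters.csv")      # hypothetical per-participant features
X = df[["vessel_density", "branch_count", "mean_vessel_width"]]  # assumed columns
y = df["malignant"]                               # 0/1 ground truth

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_predict(clf, X, y, cv=5, method="predict_proba")[:, 1]
print(f"ROC-AUC: {roc_auc_score(y, scores):.2f}")
```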

4.
Med Phys ; 48(12): 7913-7929, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34674280

ABSTRACT

PURPOSE: Feature maps created from deep convolutional neural networks (DCNNs) have been widely used for visual explanation of DCNN-based classification tasks. However, many clinical applications, such as benign-malignant classification of lung nodules, require quantitative and objective interpretability rather than visualization alone. In this paper, we propose a novel interpretable multi-task attention learning network, named IMAL-Net, for early invasive adenocarcinoma screening in chest computed tomography images, which takes advantage of a segmentation prior to assist interpretable classification.

METHODS: Two sub-ResNets are first integrated via a prior-attention mechanism for simultaneous nodule segmentation and invasiveness classification. Then, numerous radiomic features from the segmentation results are concatenated with high-level semantic features from the classification subnetwork through fully connected (FC) layers to achieve superior performance. Meanwhile, an end-to-end feature selection mechanism (FSM) is designed to quantify the crucial radiomic features that most strongly affect the prediction for each sample, and thus provides clinically applicable interpretability for the prediction result.

RESULTS: Nodule samples from a total of 1626 patients were collected from two grade-A hospitals for large-scale verification. Five-fold cross-validation demonstrated that the proposed IMAL-Net achieves an AUC of 93.8% ± 1.1% and a recall of 93.8% ± 2.8% for identification of invasive lung adenocarcinoma.

CONCLUSIONS: Fusing semantic features and radiomic features yields clear improvements in the invasiveness classification task. Moreover, by learning more fine-grained semantic features and highlighting the most important radiomic features, the proposed attention and FSM mechanisms not only further improve performance but can also be used for both visual explanation and objective analysis of the classification results.


Subjects
Adenocarcinoma of Lung; Adenocarcinoma; Lung Neoplasms; Adenocarcinoma/diagnostic imaging; Adenocarcinoma of Lung/diagnostic imaging; Humans; Lung Neoplasms/diagnostic imaging; Neural Networks, Computer; Tomography, X-Ray Computed
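A minimal sketch of the feature-fusion step described in the methods, not the IMAL-Net code itself: hand-crafted radiomic features are concatenated with high-level semantic features and passed through fully connected layers. All dimensions and names are assumed for illustration.

```python
# Sketch of the fusion idea: concatenate radiomic features with deep semantic
# features before a fully connected classifier head (dimensions assumed).
import torch
import torch.nn as nn

class FusionHead(nn.Module):
    def __init__(self, semantic_dim=512, radiomic_dim=100, n_classes=2):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(semantic_dim + radiomic_dim, 128),
            nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, semantic_feats, radiomic_feats):
        fused = torch.cat([semantic_feats, radiomic_feats], dim=1)
        return self.fc(fused)

# Usage with dummy tensors:
head = FusionHead()
logits = head(torch.randn(8, 512), torch.randn(8, 100))
print(logits.shape)  # torch.Size([8, 2])
```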
5.
PeerJ ; 5: e3874, 2017.
Article in English | MEDLINE | ID: mdl-29018612

ABSTRACT

Segmentation of histopathology sections is a necessary preprocessing step for digital pathology. Due to the large variability of biological tissue, machine learning techniques have shown superior performance over conventional image processing methods. Here we present our deep neural network-based approach for segmentation and classification of glands in tissue of benign and malignant colorectal cancer, which was developed to participate in the GlaS@MICCAI2015 colon gland segmentation challenge. We use two distinct deep convolutional neural networks (CNNs) for pixel-wise classification of Hematoxylin-Eosin stained images. While the first classifier separates glands from background, the second identifies gland-separating structures. In a subsequent step, a figure-ground segmentation based on weighted total variation produces the final segmentation result by regularizing the CNN predictions. We present both quantitative and qualitative segmentation results on the recently released, publicly available Warwick-QU colon adenocarcinoma dataset associated with the GlaS@MICCAI2015 challenge, and compare our approach to the other approaches developed concurrently for the same challenge. On two test sets, we demonstrate our segmentation performance and achieve a tissue classification accuracy of 98% and 95%, making use of the inherent capability of our system to distinguish between benign and malignant tissue. Our results show that deep learning approaches can yield highly accurate and reproducible results for biomedical image analysis, with the potential to significantly improve the quality and speed of medical diagnoses.
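As a rough illustration of the post-processing idea, the sketch below combines the two CNN probability maps and regularizes the result with total variation before thresholding into a gland mask. It uses scikit-image's TV denoising as a simple stand-in for the paper's weighted total variation figure-ground segmentation; function and parameter names are assumptions.

```python
# Illustrative post-processing sketch (not the paper's exact weighted-TV solver):
# combine gland and gland-separator probability maps, smooth with total variation,
# then threshold into a binary gland mask.
import numpy as np
from skimage.restoration import denoise_tv_chambolle

def gland_mask(p_gland, p_separator, tv_weight=0.1, threshold=0.5):
    """p_gland, p_separator: HxW probability maps in [0, 1] from the two CNNs."""
    score = np.clip(p_gland - p_separator, 0.0, 1.0)  # suppress gland-separating structures
    smooth = denoise_tv_chambolle(score, weight=tv_weight)
    return smooth > threshold

# Usage with random stand-in maps:
rng = np.random.default_rng(0)
mask = gland_mask(rng.random((256, 256)), rng.random((256, 256)))
print(mask.shape, mask.dtype)
```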
