ABSTRACT
The accurate segmentation of individual muscles is essential for quantitative MRI analysis of thigh images. Deep learning methods have achieved state-of-the-art results in segmentation, but they require large amounts of labeled data to perform well. However, labeling individual thigh muscles slice by slice for numerous volumes is a laborious and time-consuming task, which limits the availability of annotated datasets. To address this challenge, self-supervised learning (SSL) has emerged as a promising technique to enhance model performance by pretraining on unlabeled data. A recent approach, called positional contrastive learning, exploits the axial position of the slices to learn features transferable to the segmentation task. The aim of this work was to propose positional contrastive SSL for the segmentation of individual thigh muscles from MRI acquisitions in a population of healthy elderly subjects and to evaluate it at different levels of annotation availability. An unlabeled dataset of 72 T1w MRI thigh acquisitions was available for SSL pretraining, while a labeled dataset of 52 volumes, split into training and test sets, was employed for the final segmentation task. The effectiveness of SSL pretraining for fine-tuning a U-Net architecture for thigh muscle segmentation was compared with that of a randomly initialized model (RND), considering an increasing number of annotated volumes (S = 1, 2, 5, 10, 20, 30, 40). Our results demonstrated that SSL yields substantial improvements in Dice similarity coefficient (DSC) when only a very limited number of labeled volumes is available (e.g., for S = 1, DSC 0.631 versus 0.530 for SSL and RND, respectively). Moreover, improvements are achievable even when utilizing the full number of labeled subjects, with DSC = 0.927 for SSL and 0.924 for RND. In conclusion, positional contrastive SSL was effective in obtaining more accurate thigh muscle segmentation, even with very few labeled data, and could potentially speed up the annotation process in clinical practice.
Subject(s)
Magnetic Resonance Imaging , Muscle, Skeletal , Thigh , Humans , Magnetic Resonance Imaging/methods , Thigh/diagnostic imaging , Muscle, Skeletal/diagnostic imaging , Male , Female , Aged , Image Processing, Computer-Assisted/methods , Supervised Machine Learning , Middle Aged
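To make the pretraining idea above concrete, the following is a minimal sketch of an InfoNCE-style positional contrastive loss in PyTorch: slice embeddings whose normalized axial positions lie within a threshold are treated as positive pairs, and the rest of the batch acts as negatives. The function name, the positional threshold `tau_pos`, and the temperature `t` are illustrative assumptions, not values from the study.

```python
import torch
import torch.nn.functional as F

def positional_contrastive_loss(z, pos, tau_pos=0.1, t=0.1):
    """z: (N, D) slice embeddings; pos: (N,) axial positions normalized to [0, 1].
    Slices closer than tau_pos along the axial axis count as positives."""
    z = F.normalize(z, dim=1)                          # unit-norm embeddings
    sim = z @ z.T / t                                  # scaled cosine similarities
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    pos_mask = (torch.abs(pos[:, None] - pos[None, :]) < tau_pos) & ~self_mask
    sim = sim.masked_fill(self_mask, float("-inf"))    # never contrast with itself
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    n_pos = pos_mask.sum(dim=1).clamp(min=1)
    loss = -(log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1) / n_pos)
    return loss[pos_mask.any(dim=1)].mean()            # anchors with >= 1 positive

# toy usage: random embeddings for 8 slices with known axial positions
z = torch.randn(8, 128)
pos = torch.linspace(0, 1, 8)
print(positional_contrastive_loss(z, pos, tau_pos=0.2))
```

In the full method, `z` would come from the U-Net encoder followed by a projection head; after pretraining, the encoder weights initialize the segmentation network before fine-tuning on the labeled volumes.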
ABSTRACT
Muscular dystrophies present diagnostic challenges, requiring accurate classification for effective diagnosis and treatment. This study investigates the efficacy of deep learning methodologies in classifying these disorders using skeletal muscle MRI scans. Specifically, we assess the performance of the Swin Transformer (SwinT) architecture against traditional convolutional neural networks (CNNs) in distinguishing between healthy individuals, Becker muscular dystrophy (BMD) patients, and limb-girdle muscular dystrophy type 2 (LGMD2) patients. A retrospective dataset of 75 3T MRI scans (from 54 subjects) was utilized, with multiparametric protocols capturing various MRI contrasts, including T1-weighted and Dixon sequences. The dataset included 17 scans from healthy volunteers, 27 from BMD patients, and 31 from LGMD2 patients. SwinT and the CNNs were trained and validated on a subset of the dataset, with performance evaluated based on accuracy and F-score. Results indicate the superior accuracy of SwinT (0.96), particularly when fat fraction (FF) images are employed as input; FF proved a valuable parameter for enhancing classification accuracy. Despite limitations, including a modest cohort size, this study provides valuable insights into the application of AI-driven approaches for precise neuromuscular disorder classification, with potential implications for improving patient care.
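As an illustration of this classification setup, the sketch below fine-tunes torchvision's `swin_t` for the three classes (healthy, BMD, LGMD2). Using an ImageNet-pretrained Swin-Tiny backbone and feeding single-contrast images (e.g., FF maps) replicated to three channels are assumptions made for this sketch, not details confirmed by the abstract.

```python
import torch
import torch.nn as nn
from torchvision.models import swin_t, Swin_T_Weights

# ImageNet-pretrained Swin-Tiny backbone; the classification head is replaced
model = swin_t(weights=Swin_T_Weights.IMAGENET1K_V1)
model.head = nn.Linear(model.head.in_features, 3)      # healthy / BMD / LGMD2

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """images: (B, 3, 224, 224), e.g., FF maps replicated to 3 channels;
    labels: (B,) class indices."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

A CNN baseline (e.g., a ResNet with its final fully connected layer swapped for a 3-way output) can be trained with the same loop, making the architecture comparison straightforward.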
ABSTRACT
The segmentation and classification of cell nuclei are pivotal steps in pipelines for the analysis of bioimages. Deep learning (DL) approaches are leading the digital pathology field in the context of nuclei detection and classification. Nevertheless, the features that DL models exploit to make their predictions are difficult to interpret, hindering the deployment of such methods in clinical practice. On the other hand, pathomic features can be linked to an easier description of the characteristics exploited by the classifiers for making the final predictions. Thus, in this work, we developed an explainable computer-aided diagnosis (CAD) system that can be used to support pathologists in the evaluation of tumor cellularity in breast histopathological slides. In particular, we compared an end-to-end DL approach that exploits the Mask R-CNN instance segmentation architecture with a two-step pipeline, in which features describing the morphological and textural characteristics of the cell nuclei are first extracted. Classifiers based on support vector machines and artificial neural networks are then trained on top of these features to discriminate between tumor and non-tumor nuclei. Afterwards, the SHAP (Shapley additive explanations) explainable artificial intelligence technique was employed to perform a feature importance analysis, providing an understanding of the features the machine learning models rely on for their decisions. An expert pathologist validated the employed feature set, corroborating the clinical usability of the model. Even though the models resulting from the two-step pipeline are slightly less accurate than those of the end-to-end approach, the interpretability of their features is clearer and may help build trust for pathologists to adopt artificial intelligence-based CAD systems in their clinical workflow. To further show the validity of the proposed approach, it was tested on an external validation dataset, which was collected from IRCCS Istituto Tumori "Giovanni Paolo II" and made publicly available to ease research concerning the quantification of tumor cellularity.
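To illustrate the feature importance step, the following sketch runs Kernel SHAP on an SVM trained on pathomic features. The feature names and the synthetic data are placeholders standing in for the study's morphological and textural descriptors, not the actual feature set.

```python
import numpy as np
import shap
from sklearn.svm import SVC

rng = np.random.default_rng(0)
feature_names = ["area", "eccentricity", "solidity", "glcm_contrast"]  # placeholders
X = rng.normal(size=(300, len(feature_names)))    # stand-in pathomic features
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)     # stand-in tumor / non-tumor labels

clf = SVC(kernel="rbf", probability=True).fit(X, y)

# Kernel SHAP is model-agnostic, so the same analysis covers both the SVM and
# the ANN classifiers; a small background sample keeps estimation tractable.
explainer = shap.KernelExplainer(clf.predict_proba, shap.sample(X, 50))
sv = explainer.shap_values(X[:25])                # attributions for 25 nuclei
sv_tumor = sv[1] if isinstance(sv, list) else sv[..., 1]  # tumor-class attributions

# Global importance: mean absolute SHAP value per feature
for name, imp in sorted(zip(feature_names, np.abs(sv_tumor).mean(axis=0)),
                        key=lambda p: -p[1]):
    print(f"{name}: {imp:.4f}")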
ABSTRACT
Nuclei identification is a fundamental task in many areas of biomedical image analysis related to computational pathology applications. Nowadays, deep learning is the primary approach to nuclei segmentation, but accuracy is closely linked to the amount of histological ground truth available for training. In addition, most hematoxylin and eosin (H&E)-stained microscopy nuclei images are known to contain complex and irregular visual characteristics. Moreover, conventional semantic segmentation architectures grounded on convolutional neural networks (CNNs) are unable to recognize distinct overlapping and clustered nuclei. To overcome these problems, we present an innovative method based on gradient-weighted class activation mapping (Grad-CAM) saliency maps for image segmentation. The proposed solution comprises two steps. The first is semantic segmentation obtained with a CNN; the detection step is then based on the calculation of local maxima of the Grad-CAM analysis evaluated on the nucleus class, allowing us to determine the positions of the nuclei centroids. This approach, which we denote as NDG-CAM, performs in line with state-of-the-art methods, especially in isolating the different nuclei instances, and can be generalized to different organs and tissues. Experimental results demonstrated a precision of 0.833, a recall of 0.815, and a Dice coefficient of 0.824 on the publicly available validation set. When used in combination with instance segmentation architectures such as Mask R-CNN, the method surpasses state-of-the-art approaches, with a precision of 0.838, a recall of 0.934, and a Dice coefficient of 0.884. Furthermore, performance on the external, locally collected validation set, with a Dice coefficient of 0.914 for the combined model, shows the generalization capability of the implemented pipeline, which can detect nuclei related not only to tumor or normal epithelium but also to other cytotypes.
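As a sketch of the detection step, the code below extracts centroid candidates as local maxima of a Grad-CAM saliency map for the nucleus class, restricted to the CNN's segmentation mask. The Grad-CAM map is assumed to be precomputed here, and the peak distance and threshold values are illustrative rather than those used in NDG-CAM.

```python
import numpy as np
from skimage.feature import peak_local_max

def nuclei_centroids(cam, seg_mask, min_distance=5, threshold=0.3):
    """Return (row, col) centroid candidates: local maxima of the Grad-CAM
    map that also fall inside the semantic segmentation mask."""
    peaks = peak_local_max(cam, min_distance=min_distance,
                           threshold_abs=threshold, exclude_border=False)
    # keep only maxima consistent with the CNN's nucleus segmentation
    keep = seg_mask[peaks[:, 0], peaks[:, 1]] > 0
    return peaks[keep]

# toy usage: two Gaussian blobs stand in for a Grad-CAM saliency map
yy, xx = np.mgrid[0:64, 0:64]
cam = (np.exp(-((yy - 20)**2 + (xx - 20)**2) / 30)
       + np.exp(-((yy - 45)**2 + (xx - 40)**2) / 30))
mask = (cam > 0.2).astype(np.uint8)
print(nuclei_centroids(cam, mask))   # two peaks, near (20, 20) and (45, 40)
```

Separating detection (peak finding) from delineation (the segmentation mask) is what lets the method split overlapping and clustered nuclei that a plain semantic segmentation would merge into a single blob.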