1.
Comput Methods Programs Biomed; 257: 108408, 2024 Sep 07.
Article in English | MEDLINE | ID: mdl-39342876

ABSTRACT

BACKGROUND AND OBJECTIVE: In Pancreatic Ductal Adenocarcinoma (PDA), multi-omic models are emerging to address the unmet clinical need for novel quantitative prognostic factors. We developed a pipeline that relies on survival machine-learning (SML) classifiers and explainability based on patients' follow-up (FU) to stratify prognosis from the publicly available multi-omic datasets of the CPTAC-PDA project. MATERIALS AND METHODS: The analyzed datasets included tumor-annotated radiologic images and clinical and mutational data. Feature selection was based on univariate (UV) and multivariate (MV) survival analyses according to Overall Survival (OS) and recurrence (REC). In this study, we considered seven multi-omic datasets and compared four SML classifiers: Cox, survival random forest, generalized boosted, and support vector machines (SVM). For each classifier, we assessed the concordance (C) index on the validation set. The best classifiers on the validation set for both OS and REC underwent explainability analyses using SurvSHAP(t), which extends SHapley Additive exPlanations (SHAP). RESULTS: According to OS, the UV and MV analyses selected 18/37 and 10/37 multi-omic features, respectively. According to REC, the UV and MV analyses selected 10/35 and 5/35 determinants, respectively. In general, SML classifiers including radiomics outperformed those modelled on clinical or mutational predictors. For OS, the Cox model encompassing radiomic, clinical, and mutational features reached a C index of 75%, outperforming the other classifiers. For REC, the SVM model including only radiomics emerged as the best-performing, with a C index of 68%. For OS, SurvSHAP(t) identified the first-order Median Gray Level (GL) intensity, gender, tumor grade, the Joint Energy of the GL Co-occurrence Matrix (GLCM), and the GLCM Informational Measure of Correlation of type 1 as the most important features. For REC, the first-order Median GL intensity, the GL Size Zone Matrix Small Area Low GL Emphasis, and the first-order variance of GL intensities emerged as the most discriminative. CONCLUSIONS: In this work, radiomics showed the potential to improve patients' risk stratification in PDA. Furthermore, the time-dependent explainability of the top multi-omic predictors provided a deeper understanding of how radiomics can contribute to prognosis in PDA.
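As an illustration of the survival-classification step described above (not the authors' released code), the sketch below fits a Cox model with scikit-survival on synthetic stand-ins for the selected multi-omic features and reports the concordance index on a held-out split; the data, shapes, and regularization value are assumptions, and the SurvSHAP(t) explainability step is not shown.

```python
# Minimal sketch (synthetic data): fit a Cox survival model and report the
# validation concordance index, as in the pipeline described above.
import numpy as np
from sksurv.linear_model import CoxPHSurvivalAnalysis
from sksurv.metrics import concordance_index_censored
from sksurv.util import Surv

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 10))                 # stand-in for selected multi-omic features
time = rng.exponential(24, size=120)           # follow-up in months (synthetic)
event = rng.integers(0, 2, size=120).astype(bool)
y = Surv.from_arrays(event=event, time=time)   # structured array for scikit-survival

X_tr, X_va = X[:90], X[90:]
y_tr, y_va = y[:90], y[90:]

cox = CoxPHSurvivalAnalysis(alpha=0.1).fit(X_tr, y_tr)   # ridge-regularized Cox model
risk = cox.predict(X_va)                                  # higher risk = shorter expected survival
cindex = concordance_index_censored(y_va["event"], y_va["time"], risk)[0]
print(f"validation C index: {cindex:.2f}")
```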

2.
Comput Methods Programs Biomed; 244: 107966, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38091844

ABSTRACT

BACKGROUND: In Diffuse Large B-Cell Lymphoma (DLBCL), several methodologies are emerging to derive novel biomarkers to be incorporated into the risk assessment. We developed a pipeline that relies on autoencoders (AE) and Explainable Artificial Intelligence (XAI) to stratify prognosis and derive a gene-based signature. METHODS: An AE was exploited to learn an unsupervised representation of the gene expression (GE) from three publicly available datasets, each with its own technology. A multi-layer perceptron (MLP) was used to classify prognosis from the latent representation. GE data were preprocessed in three variants: normalized, scaled, and standardized. Four AE architectures (Large, Medium, Small, and Extra Small) were compared to find the most suitable for GE data. The joint AE-MLP classified patients on six different outcomes: overall survival at 12, 36, and 60 months and progression-free survival (PFS) at 12, 36, and 60 months. XAI techniques were used to derive a gene-based signature aimed at refining the Revised International Prognostic Index (R-IPI) risk, which was validated in a fourth independent publicly available dataset. We named our tool SurvIAE: Survival prediction with Interpretable AE. RESULTS: From the latent space of the AEs, we observed that scaled and standardized data reduced the batch effect. SurvIAE models outperformed the R-IPI, with a Matthews Correlation Coefficient of up to 0.42 vs. 0.18 on the validation set (PFS36) and 0.30 vs. 0.19 on the test set (PFS60). We selected SurvIAE-Small-PFS36 as the best model and, from its gene signature, stratified patients into three risk groups: R-IPI Poor patients with high levels of GAB1; R-IPI Poor patients with low levels of GAB1 or R-IPI Good/Very Good patients with low levels of GPR132; and R-IPI Good/Very Good patients with high levels of GPR132. CONCLUSIONS: SurvIAE showed the potential to derive a gene signature with translational value in DLBCL. The pipeline was made publicly available and can be reused for other pathologies.
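A minimal PyTorch sketch of the joint AE-MLP idea, assuming standardized gene expression as input; the layer sizes, latent dimension, and synthetic batch are illustrative assumptions, not the SurvIAE architecture itself.

```python
# Illustrative sketch: an autoencoder learns a latent representation of gene
# expression, and an MLP head classifies an outcome (e.g. PFS at 36 months).
import torch
import torch.nn as nn

n_genes, latent_dim = 2000, 32

encoder = nn.Sequential(nn.Linear(n_genes, 256), nn.ReLU(), nn.Linear(256, latent_dim))
decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, n_genes))
mlp = nn.Sequential(nn.Linear(latent_dim, 16), nn.ReLU(), nn.Linear(16, 1))

x = torch.randn(64, n_genes)                 # standardized expression (synthetic batch)
y = torch.randint(0, 2, (64, 1)).float()     # outcome label, e.g. PFS36

recon_loss = nn.MSELoss()(decoder(encoder(x)), x)       # unsupervised AE objective
clf_loss = nn.BCEWithLogitsLoss()(mlp(encoder(x)), y)   # supervised head on the latent space
(recon_loss + clf_loss).backward()                       # joint training step (one backward pass)
```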


Subjects
Artificial Intelligence; Lymphoma, Large B-Cell, Diffuse; Humans; Antineoplastic Combined Chemotherapy Protocols; Lymphoma, Large B-Cell, Diffuse/genetics; Lymphoma, Large B-Cell, Diffuse/drug therapy; Prognosis; Gene Expression; Retrospective Studies
3.
Comput Methods Programs Biomed; 242: 107814, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37722311

ABSTRACT

BACKGROUND AND OBJECTIVE: The Oxford Classification for IgA nephropathy is the most successful example of an evidence-based nephropathology classification system. The aim of our study was to replicate the glomerular components of the Oxford scoring with an end-to-end deep learning pipeline that involves automatic glomerular segmentation followed by classification for mesangial hypercellularity (M), endocapillary hypercellularity (E), segmental sclerosis (S), and active crescents (C). METHODS: A total of 1056 periodic acid-Schiff (PAS) whole slide images (WSIs), coming from 386 kidney biopsies, were annotated. Several detection models for glomeruli, based on the Mask R-CNN architecture, were trained on 587 WSIs, validated on 161 WSIs, and tested on 127 WSIs. For the development of the segmentation models, 20,529 glomeruli were annotated, of which 16,571 formed the training set and 3958 the validation set. The test set of the segmentation module comprised 2948 glomeruli. For the Oxford classification, 6206 expert-annotated glomeruli from 308 PAS WSIs were labelled for M, E, S, and C, and split into a training set of 4298 glomeruli from 207 WSIs and a test set of 1908 glomeruli. We chose the best-performing models to construct an end-to-end pipeline, which we named MESCnn (MESC classification by neural network), for the glomerular Oxford classification of WSIs. RESULTS: Instance segmentation yielded excellent results, with an AP50 ranging from 78.2% to 80.1% (79.4 ± 0.7%) on the validation set and from 75.1% to 77.7% (76.5 ± 0.9%) on the test set. The aggregated Jaccard Index ranged from 73.4% to 75.9% (75.0 ± 0.8%) on the validation set and from 69.1% to 73.4% (72.2 ± 1.4%) on the test set. At the granular glomerular level, the Oxford Classification was best replicated for M with EfficientNetV2-L, with a mean ROC-AUC of 90.2% and a mean precision/recall area under the curve (PR-AUC) of 81.8%; best for E with MobileNetV2 (ROC-AUC 94.7%) and ResNet50 (PR-AUC 75.8%); best for S with EfficientNetV2-M (mean ROC-AUC 92.7%, mean PR-AUC 87.7%); and best for C with EfficientNetV2-L (ROC-AUC 92.3%) and EfficientNetV2-S (PR-AUC 54.7%). At the biopsy level, the correlation between expert and deep learning labels fulfilled the demands of the Oxford Classification. CONCLUSION: We designed an end-to-end pipeline for glomerular Oxford Classification at both the granular glomerular and the entire biopsy level. Both the glomerular segmentation and the classification modules are freely available to the renal medicine community for further development.
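A hedged sketch of the two-stage design (glomerular instance segmentation followed by per-glomerulus lesion classification); torchvision's Mask R-CNN and an EfficientNetV2 classifier stand in for the pipeline's actual trained models, and the tile, score threshold, and crop size are assumptions.

```python
# Hedged sketch: detect glomeruli on a WSI tile, then classify each crop.
import torch
import torchvision
from torchvision.models.detection import maskrcnn_resnet50_fpn

detector = maskrcnn_resnet50_fpn(weights=None, num_classes=2).eval()    # background + glomerulus
classifier = torchvision.models.efficientnet_v2_s(weights=None, num_classes=2).eval()  # e.g. M0 vs M1

tile = torch.rand(3, 1024, 1024)                       # PAS-stained tile (synthetic)
with torch.no_grad():
    det = detector([tile])[0]                          # dict with boxes, labels, scores, masks
    for box, score in zip(det["boxes"], det["scores"]):
        x0, y0, x1, y1 = box.int().tolist()
        if score < 0.5 or x1 - x0 < 2 or y1 - y0 < 2:  # skip weak or degenerate detections
            continue
        crop = tile[:, y0:y1, x0:x1][None]
        crop = torch.nn.functional.interpolate(crop, size=(384, 384))
        lesion_logits = classifier(crop)               # one Oxford lesion head per glomerulus
```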


Subjects
Deep Learning; Glomerulonephritis, IGA; Humans; Glomerulonephritis, IGA/diagnosis; Glomerulonephritis, IGA/pathology; Glomerular Filtration Rate; Kidney Glomerulus/pathology; Kidney/diagnostic imaging
4.
Bioengineering (Basel); 10(7), 2023 Jun 21.
Article in English | MEDLINE | ID: mdl-37508774

ABSTRACT

The complex pathobiology of lung cancer and its worldwide spread have prompted research studies that combine radiomic and genomic approaches. Indeed, the early identification of genetic alterations and driver mutations affecting the tumor is fundamental for correctly formulating the prognosis and therapeutic response. In this work, we propose a radiogenomic workflow to detect the presence of KRAS and EGFR mutations using radiomic features extracted from computed tomography images of patients affected by lung adenocarcinoma. To this end, we investigated several feature selection algorithms to identify the most significant and uncorrelated sets of radiomic features, and different classification models to reveal the mutational status. We then employed the SHAP (SHapley Additive exPlanations) technique to better understand the contribution of specific radiomic features to the identification of the investigated mutations. Two cohorts of patients with lung adenocarcinoma were used for the study. The first, obtained from The Cancer Imaging Archive (TCIA), consisted of 60 cases (25% EGFR, 23% KRAS); the second, provided by the Azienda Ospedaliero-Universitaria 'Ospedali Riuniti' of Foggia, comprised 55 cases (16% EGFR, 28% KRAS). The best-performing models proposed in our study achieved an AUC of 0.69 and 0.82 on the validation set for predicting the mutational status of EGFR and KRAS, respectively. The Multi-layer Perceptron emerged as the top-performing model for both oncogenes, in some cases outperforming the state of the art. This study showed that radiomic features can be associated with EGFR and KRAS mutational status in patients with lung adenocarcinoma.
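A sketch of the radiogenomic workflow under stated assumptions: univariate feature selection, an MLP classifier for mutational status, and SHAP explanations. The feature matrix, labels, number of selected features, and the use of KernelExplainer are illustrative choices rather than the study's exact configuration.

```python
# Illustrative sketch: select radiomic features, fit a classifier for
# mutational status, and explain predictions with SHAP. Data are synthetic.
import numpy as np
import shap
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.normal(size=(115, 40))                 # radiomic features from CT ROIs (placeholder)
y = rng.integers(0, 2, size=115)               # e.g. KRAS mutated vs wild type (placeholder)

selector = SelectKBest(mutual_info_classif, k=10)      # keep the 10 most informative features
X_sel = selector.fit_transform(X, y)

clf = make_pipeline(StandardScaler(), MLPClassifier(max_iter=500, random_state=1))
clf.fit(X_sel, y)

explainer = shap.KernelExplainer(lambda a: clf.predict_proba(a)[:, 1], X_sel[:30])
shap_values = explainer.shap_values(X_sel[:5])          # per-feature contributions for 5 cases
```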

5.
Bioengineering (Basel); 10(4), 2023 Mar 23.
Article in English | MEDLINE | ID: mdl-37106583

ABSTRACT

The segmentation and classification of cell nuclei are pivotal steps in pipelines for the analysis of bioimages. Deep learning (DL) approaches lead the digital pathology field in nuclei detection and classification. Nevertheless, the features that DL models exploit to make their predictions are difficult to interpret, hindering the deployment of such methods in clinical practice. Pathomic features, on the other hand, can be linked to a more accessible description of the characteristics exploited by the classifiers for making the final predictions. In this work, we therefore developed an explainable computer-aided diagnosis (CAD) system that can support pathologists in the evaluation of tumor cellularity in breast histopathological slides. In particular, we compared an end-to-end DL approach that exploits the Mask R-CNN instance segmentation architecture with a two-step pipeline, in which features describing the morphological and textural characteristics of the cell nuclei are first extracted. Classifiers based on support vector machines and artificial neural networks were then trained on top of these features to discriminate between tumor and non-tumor nuclei. Afterwards, the SHAP (SHapley Additive exPlanations) explainable artificial intelligence technique was employed to perform a feature importance analysis, clarifying which features the machine learning models use to make their decisions. An expert pathologist validated the employed feature set, corroborating the clinical usability of the model. Even though the models resulting from the two-step pipeline are slightly less accurate than those of the end-to-end approach, the interpretability of their features is clearer and may help pathologists build the trust needed to adopt artificial intelligence-based CAD systems in their clinical workflow. To further demonstrate the validity of the proposed approach, it was tested on an external validation dataset, which was collected from IRCCS Istituto Tumori "Giovanni Paolo II" and made publicly available to ease research on the quantification of tumor cellularity.
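An illustrative sketch of the two-step pipeline's feature stage, assuming nuclei masks are already available: morphological descriptors are measured per nucleus with scikit-image and fed to an SVM. The toy rectangular mask and labels are placeholders, not the study's feature set.

```python
# Illustrative sketch: per-nucleus morphological features with scikit-image,
# then an SVM separating tumor from non-tumor nuclei. Mask and labels are toys.
import numpy as np
from skimage.measure import label, regionprops_table
from sklearn.svm import SVC

mask = np.zeros((256, 256), dtype=bool)
rects = [(20, 30, 12, 9), (60, 120, 15, 11), (100, 40, 9, 8),
         (140, 180, 18, 13), (190, 90, 11, 14), (220, 200, 14, 10)]
for r, c, h, w in rects:
    mask[r:r + h, c:c + w] = True                    # toy rectangular "nuclei"

props = regionprops_table(label(mask),
                          properties=("area", "perimeter", "eccentricity", "solidity"))
features = np.column_stack([props[k] for k in ("area", "perimeter", "eccentricity", "solidity")])
labels = np.array([0, 1, 0, 1, 0, 1])                # tumor vs non-tumor (placeholder)

svm = SVC(kernel="rbf").fit(features, labels)        # pathomic-feature classifier
```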

7.
Comput Methods Programs Biomed; 234: 107511, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37011426

ABSTRACT

BACKGROUND: Histological assessment of colorectal cancer (CRC) tissue is a crucial and demanding task for pathologists. Unfortunately, manual annotation by trained specialists is a burdensome operation that suffers from problems such as intra- and inter-pathologist variability. Computational models are revolutionizing the Digital Pathology field, offering reliable and fast approaches to challenges such as tissue segmentation and classification. In this respect, an important obstacle to overcome is stain color variation among different laboratories, which can decrease the performance of classifiers. In this work, we investigated the role of Unpaired Image-to-Image Translation (UI2IT) models for stain color normalization in CRC histology and compared them to classical normalization techniques for Hematoxylin-Eosin (H&E) images. METHODS: Five deep learning normalization models based on Generative Adversarial Networks (GANs) belonging to the UI2IT paradigm were thoroughly compared to build a robust stain color normalization pipeline. To avoid training a style transfer GAN between each pair of data domains, in this paper we introduce the concept of training on a meta-domain, which contains data coming from a wide variety of laboratories. The proposed framework greatly reduces training time by requiring only a single image normalization model for a target laboratory. To prove the applicability of the proposed workflow in clinical practice, we conceived a novel perceptive quality measure, which we defined as Pathologist Perceptive Quality (PPQ). The second stage involved the classification of tissue types in CRC histology, where deep features extracted from Convolutional Neural Networks were exploited to build a Computer-Aided Diagnosis system based on a Support Vector Machine (SVM). To prove the reliability of the system on new data, an external validation set composed of N = 15,857 tiles was collected at IRCCS Istituto Tumori "Giovanni Paolo II". RESULTS: Exploiting the meta-domain made it possible to train normalization models that achieved better classification results than normalization models explicitly trained on the source domain. The PPQ metric correlated with the quality of the generated distributions (Fréchet Inception Distance, FID) and with the similarity of the transformed image to the original one (Learned Perceptual Image Patch Similarity, LPIPS), showing that GAN quality measures introduced for natural image processing tasks can be linked to pathologists' evaluation of H&E images. Furthermore, FID correlated with the accuracy of the downstream classifiers. The SVM trained on DenseNet201 features obtained the highest classification results in all configurations. The normalization method based on FastCUT, the fast variant of CUT (Contrastive Unpaired Translation), trained with the meta-domain paradigm, achieved the best classification result for the downstream task and, correspondingly, showed the highest FID on the classification dataset. CONCLUSIONS: Stain color normalization is a difficult but fundamental problem in the histopathological setting. Several measures should be considered to properly assess normalization methods so that they can be introduced into clinical practice. UI2IT frameworks offer a powerful and effective way to perform the normalization process, providing realistic images with proper colorization, unlike traditional normalization methods, which introduce color artifacts. By adopting the proposed meta-domain framework, training time can be reduced and the accuracy of downstream classifiers can be increased.
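A minimal sketch of the downstream classification stage only (the GAN normalization itself is not shown): pooled DenseNet201 features feed an SVM that labels CRC tissue tiles. The tiles, labels, and the linear-SVM choice are assumptions, not the study's trained models.

```python
# Illustrative sketch: DenseNet201 deep features + SVM for CRC tissue tiles.
import torch
import torchvision
from sklearn.svm import LinearSVC

backbone = torchvision.models.densenet201(weights=None)
backbone.classifier = torch.nn.Identity()      # keep the 1920-d pooled features
backbone.eval()

tiles = torch.rand(8, 3, 224, 224)             # normalized H&E tiles (placeholder)
with torch.no_grad():
    feats = backbone(tiles).numpy()            # shape (8, 1920) deep features

tissue_labels = [0, 1, 0, 1, 1, 0, 1, 0]       # e.g. tumor vs stroma (placeholder)
svm = LinearSVC().fit(feats, tissue_labels)    # downstream tissue classifier
```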


Subjects
Colorectal Neoplasms; Coloring Agents; Humans; Reproducibility of Results; Neural Networks, Computer; Diagnosis, Computer-Assisted/methods; Image Processing, Computer-Assisted/methods; Colorectal Neoplasms/diagnostic imaging
8.
Bioengineering (Basel); 9(9), 2022 Sep 15.
Article in English | MEDLINE | ID: mdl-36135021

ABSTRACT

Nuclei identification is a fundamental task in many areas of biomedical image analysis related to computational pathology applications. Nowadays, deep learning is the primary approach for segmenting nuclei, but its accuracy is closely linked to the amount of histological ground truth data available for training. In addition, most hematoxylin and eosin (H&E)-stained microscopy images of nuclei exhibit complex and irregular visual characteristics. Moreover, conventional semantic segmentation architectures grounded on convolutional neural networks (CNNs) are unable to separate overlapping and clustered nuclei into distinct instances. To overcome these problems, we present an innovative method based on gradient-weighted class activation mapping (Grad-CAM) saliency maps for image segmentation. The proposed solution comprises two steps. The first is semantic segmentation obtained with a CNN; the second is a detection step based on the local maxima of the Grad-CAM analysis evaluated on the nucleus class, which yields the positions of the nuclei centroids. This approach, which we denote as NDG-CAM, performs in line with state-of-the-art methods, especially in isolating the different nuclei instances, and can be generalized across different organs and tissues. Experimental results demonstrated a precision of 0.833, a recall of 0.815, and a Dice coefficient of 0.824 on the publicly available validation set. When used in combination with instance segmentation architectures such as Mask R-CNN, the method surpasses state-of-the-art approaches, with a precision of 0.838, a recall of 0.934, and a Dice coefficient of 0.884. Furthermore, the performance on the external, locally collected validation set, with a Dice coefficient of 0.914 for the combined model, shows the generalization capability of the implemented pipeline, which detects nuclei not only belonging to tumor or normal epithelium but also to other cytotypes.
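A hedged sketch of the detection step only: given a Grad-CAM saliency map for the nucleus class (here a synthetic smoothed random map), candidate nuclei centroids are taken as local maxima. peak_local_max and its thresholds stand in for the paper's maxima search.

```python
# Illustrative sketch: nuclei centroids as local maxima of a saliency map.
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.feature import peak_local_max

rng = np.random.default_rng(3)
saliency = gaussian_filter(rng.random((256, 256)), sigma=8)   # stand-in Grad-CAM map

centroids = peak_local_max(saliency, min_distance=15, threshold_rel=0.6)
print(centroids[:5])   # (row, col) candidate nucleus centres
```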

9.
Bioengineering (Basel); 9(8), 2022 Jul 26.
Article in English | MEDLINE | ID: mdl-35892756

ABSTRACT

In prostate cancer, fusion biopsy, which couples magnetic resonance imaging (MRI) with transrectal ultrasound (TRUS), provides the basis for targeted biopsy by allowing information from both imaging modalities to be compared at the same time. Compared with the standard clinical procedure, it is a less invasive option for patients and increases the likelihood of sampling cancerous tissue regions for the subsequent pathology analyses. As a prerequisite to image fusion, segmentation must be achieved in both the MRI and TRUS domains. The automatic contour delineation of the prostate gland from TRUS images is a challenging task due to several factors, including unclear boundaries, speckle noise, and the variety of prostate anatomical shapes. Automatic methodologies, such as those based on deep learning, require a large quantity of training data to achieve satisfactory results. In this paper, the authors propose a novel optimization formulation to find the best-fitting superellipse, a deformable model that can accurately represent the prostate shape. The advantage of the proposed approach is that it does not require extensive annotations and can be used independently of the specific transducer employed during prostate biopsies. Moreover, to show the clinical applicability of the method, this study also presents a module for the automatic segmentation of the prostate gland from MRI, exploiting the nnU-Net framework. Lastly, the segmented contours from both imaging domains are fused with a customized registration algorithm to create a tool that helps the physician perform a targeted prostate biopsy by interacting with a graphical user interface.
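A sketch of the deformable-model idea under an assumed parameterization: a superellipse |x/a|^n + |y/b|^n = 1 is fitted to noisy boundary samples with least squares. The data, initial guess, and bounds are illustrative; the paper's full optimization formulation and the MRI/TRUS fusion are not reproduced here.

```python
# Illustrative sketch: least-squares fit of a superellipse to contour samples.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(4)
theta = np.linspace(0.0, 2.0 * np.pi, 200)
a0, b0, n0 = 30.0, 22.0, 2.5                           # "true" shape used to simulate a contour
x = a0 * np.sign(np.cos(theta)) * np.abs(np.cos(theta)) ** (2.0 / n0) + rng.normal(0, 0.5, theta.shape)
y = b0 * np.sign(np.sin(theta)) * np.abs(np.sin(theta)) ** (2.0 / n0) + rng.normal(0, 0.5, theta.shape)

def residuals(p):
    a, b, n = p
    return np.abs(x / a) ** n + np.abs(y / b) ** n - 1.0   # implicit superellipse equation

fit = least_squares(residuals, x0=[25.0, 25.0, 2.0], bounds=([1, 1, 1], [100, 100, 10]))
print(fit.x)   # recovered (a, b, n)
```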

10.
Sensors (Basel); 21(24), 2021 Dec 20.
Article in English | MEDLINE | ID: mdl-34960595

ABSTRACT

The coronavirus disease 2019 (COVID-19) pandemic has affected hundreds of millions of individuals and caused millions of deaths worldwide. Predicting the clinical course of the disease is of pivotal importance for managing patients. Several studies have found hematochemical alterations in COVID-19 patients, such as inflammatory markers. We retrospectively analyzed the anamnestic data and laboratory parameters of 303 patients diagnosed with COVID-19 who were admitted to the Polyclinic Hospital of Bari during the first phase of the COVID-19 global pandemic. After the pre-processing phase, we performed a survival analysis with Kaplan-Meier curves and Cox regression, with the aim of identifying the most unfavorable predictors. The target outcomes were mortality or admission to the intensive care unit (ICU). Different machine learning models were also compared to build a robust classifier relying on a small number of strongly significant factors to estimate the risk of death or admission to the ICU. The survival analysis showed that the most significant laboratory parameter for both outcomes was the minimum C-reactive protein: HR = 17.963 (95% CI 6.548-49.277, p < 0.001) for death and HR = 1.789 (95% CI 1.000-3.200, p = 0.050) for admission to the ICU. The second most important parameter was the maximum erythrocyte count: HR = 1.765 (95% CI 1.141-2.729, p < 0.05) for death and HR = 1.481 (95% CI 0.895-2.452, p = 0.127) for admission to the ICU. The best model for predicting the risk of death was the decision tree, with a ROC-AUC of 89.66%, whereas the best model for predicting admission to the ICU was the support vector machine, with a ROC-AUC of 95.07%. The hematochemical predictors identified in this study can be used as a strong prognostic signature to characterize the severity of the disease in COVID-19 patients.
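A minimal sketch of the survival workflow on synthetic data: a Kaplan-Meier fit plus a Cox model reporting hazard ratios for laboratory covariates such as minimum C-reactive protein. Column names and distributions are assumptions; the lifelines package stands in for the study's statistical tooling.

```python
# Illustrative sketch: Kaplan-Meier curve and Cox regression with lifelines.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter

rng = np.random.default_rng(6)
df = pd.DataFrame({
    "time": rng.exponential(20, 300),                  # days of follow-up (synthetic)
    "event": rng.integers(0, 2, 300),                  # death (1) or censored (0)
    "crp_min": rng.gamma(2.0, 3.0, 300),               # C-reactive protein, minimum value
    "erythrocytes_max": rng.normal(4.8, 0.5, 300),     # erythrocytes, maximum value
})

KaplanMeierFitter().fit(df["time"], df["event"])       # overall survival curve
cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
print(cph.hazard_ratios_)                              # hazard ratio per covariate
```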


Subjects
COVID-19; Hospital Mortality; Humans; Machine Learning; Prognosis; Retrospective Studies; SARS-CoV-2; Survival Analysis