Results 1 - 9 of 9

1.
Article in English | MEDLINE | ID: mdl-38865229

ABSTRACT

Developing AI models for digital pathology has traditionally relied on single-scale analysis of histopathology slides. However, a whole slide image is a rich digital representation of the tissue, captured at various magnification levels. Limiting the analysis to a single scale overlooks critical information, spanning from intricate high-resolution cellular details to broad low-resolution tissue structures. In this study, we propose a model-agnostic multiresolution feature aggregation framework tailored to the analysis of histopathology slides in the context of breast cancer, on a multicohort dataset of 2038 patient samples. We adapted 9 state-of-the-art multiple instance learning models to our multi-scale methodology and evaluated their performance on grade prediction, TP53 mutation status prediction and survival prediction. The results demonstrate the dominance of the multiresolution methodology: concatenating, or linearly transforming via a learnable layer, the feature vectors of image patches from high (20x) and low (10x) magnification factors achieves improved performance on all prediction tasks across both domain-specific and ImageNet-based features. In contrast, the performance of uniresolution baseline models was not consistent across domain-specific and ImageNet-based features. Moreover, we shed light on the inherent inconsistencies observed in models trained on whole tissue sections when validated against biopsy-based datasets. Despite these challenges, our findings underscore the superiority of multiresolution analysis over uniresolution methods. Finally, cross-scale analysis also benefits the explainability of attention-based architectures, since attention maps can be extracted at both the tissue and cell levels, improving the interpretation of the model's decisions. The code and results of this study can be found at github.com/tsikup/multiresolution_histopathology.
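The two best-performing fusion strategies described above can be sketched in plain Python; `concat_fusion` and `linear_fusion` are illustrative names, and the fixed weight matrix merely stands in for the learnable layer (assumptions for illustration, not the paper's code):

```python
# Illustrative sketch: per-patch feature vectors from the 20x and 10x
# magnifications are either concatenated or passed through a linear map
# (fixed weights here, standing in for the learnable layer).

def concat_fusion(feats_20x, feats_10x):
    """Concatenate matched per-patch feature vectors from the two scales."""
    return [f20 + f10 for f20, f10 in zip(feats_20x, feats_10x)]

def linear_fusion(feats_20x, feats_10x, weights):
    """Apply a linear map (rows of `weights`) to each concatenated vector."""
    fused = []
    for x in concat_fusion(feats_20x, feats_10x):
        fused.append([sum(w * v for w, v in zip(row, x)) for row in weights])
    return fused

# Two patches with 2-d features per scale -> 4-d concatenated vectors
f20 = [[1.0, 2.0], [3.0, 4.0]]
f10 = [[0.5, 0.5], [1.0, 1.0]]
cat = concat_fusion(f20, f10)                                  # 4-d per patch
proj = linear_fusion(f20, f10, [[1, 0, 0, 0], [0, 0, 0, 1]])   # 2-d per patch
```

In a real pipeline the fused vectors would then feed a multiple instance learning aggregator; here only the fusion step is shown.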

2.
Article in English | MEDLINE | ID: mdl-38083519

ABSTRACT

Digital histopathology image analysis of tumor tissue sections has seen great research interest for automating standard diagnostic tasks, but also for developing novel prognostic biomarkers. However, research has mainly focused on developing uniresolution models, capturing either high-resolution cellular features or low-resolution tissue architectural features. In addition, in the patch-based weakly supervised training of deep learning models, the features that represent intratumoral heterogeneity are lost. In this study, we propose a multiresolution attention-based multiple instance learning framework that can capture cellular and contextual features from the whole tissue for predicting patient-level outcomes. Several basic mathematical operations were examined for integrating multiresolution features, i.e., addition, mean, multiplication and concatenation. The proposed multiplication-based multiresolution model performed best (AUC=0.864), while all multiresolution models outperformed the uniresolution baseline models (AUC=0.669, 0.713) for breast-cancer grading. (Implementation: https://github.com/tsikup/multiresolution-clam).
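The four elementwise integration operations compared in this abstract can be sketched as follows (a plain-Python illustration on feature vectors, not the repository's implementation):

```python
# The four fusion operations examined for integrating multiresolution
# features: addition, mean, multiplication and concatenation.

def fuse(high_res, low_res, op):
    """Combine two equal-length feature vectors with the chosen operation."""
    if op == "add":
        return [h + l for h, l in zip(high_res, low_res)]
    if op == "mean":
        return [(h + l) / 2 for h, l in zip(high_res, low_res)]
    if op == "mul":
        return [h * l for h, l in zip(high_res, low_res)]
    if op == "concat":
        return high_res + low_res
    raise ValueError(f"unknown op: {op}")

cellular = [2.0, 4.0]     # high-resolution (cellular) features
contextual = [1.0, 3.0]   # low-resolution (contextual) features
fused = fuse(cellular, contextual, "mul")   # [2.0, 12.0]
```

Note that multiplication (the best performer in the study) acts as a gating of one scale's features by the other, while concatenation preserves both vectors intact at the cost of doubling the dimensionality.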


Subjects
Breast Neoplasms , Image Processing, Computer-Assisted , Humans , Female , Image Processing, Computer-Assisted/methods , Diagnostic Imaging , Breast Neoplasms/diagnosis , Breast Neoplasms/pathology
3.
Sci Rep ; 13(1): 714, 2023 01 13.
Article in English | MEDLINE | ID: mdl-36639671

ABSTRACT

Automatic segmentation of the prostate and the prostatic zones on MRI remains one of the most compelling research areas. While different image enhancement techniques are emerging as powerful tools for improving the performance of segmentation algorithms, their application still lacks consensus due to contrasting evidence regarding performance improvement and cross-model stability, further hampered by the inability to explain models' predictions. In particular, for prostate segmentation, the effectiveness of image enhancement on different Convolutional Neural Networks (CNNs) remains largely unexplored. The present work introduces a novel image enhancement method, named RACLAHE, to enhance the performance of CNN models for segmenting the prostate gland and the prostatic zones. The improvement in performance and consistency across five CNN models (U-Net, U-Net++, U-Net3+, ResU-net and USE-NET) is compared against four popular image enhancement methods. Additionally, a methodology is proposed to explain, both quantitatively and qualitatively, the relation between saliency maps and ground-truth probability maps. Overall, RACLAHE was the most consistent image enhancement algorithm in terms of performance improvement across CNN models, with the mean increase in Dice Score ranging from 3 to 9% for the different prostatic regions, while achieving minimal inter-model variability. The integration of a feature-driven methodology to explain the predictions after applying image enhancement methods enables the development of a concrete, trustworthy automated pipeline for prostate segmentation on MR images.
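RACLAHE itself is the paper's novel contribution, but the Dice Score used to quantify the improvement is a standard overlap metric and can be computed as follows (a generic helper, not the paper's evaluation code):

```python
def dice_score(pred, truth):
    """Dice similarity coefficient between two binary masks (flat 0/1 lists)."""
    intersection = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * intersection / total if total else 1.0

# One of two predicted foreground pixels overlaps the single true pixel:
score = dice_score([1, 1, 0, 0], [1, 0, 0, 0])   # 2*1 / (2+1) = 0.666...
```

The reported 3-9% gains refer to mean increases in this metric across the prostatic regions.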


Subjects
Image Processing, Computer-Assisted , Prostate , Male , Humans , Prostate/diagnostic imaging , Image Processing, Computer-Assisted/methods , Neural Networks, Computer , Magnetic Resonance Imaging/methods , Algorithms
4.
Cancers (Basel) ; 14(8)2022 Apr 14.
Article in English | MEDLINE | ID: mdl-35454904

ABSTRACT

The tumor immune microenvironment (TIME) is an important player in breast cancer pathophysiology. Surrogates for the antitumor immune response have been explored as predictive biomarkers for immunotherapy, though with several limitations: immunohistochemistry for programmed death ligand 1 suffers from analytical problems, immune signatures are devoid of spatial information, and histopathological evaluation of tumor-infiltrating lymphocytes exhibits interobserver variability. Towards an improved understanding of the complex interactions in the TIME, several emerging multiplex in situ methods for protein detection are being developed and gaining much attention. They enable the simultaneous evaluation of multiple targets in situ, the detection of cell densities/subpopulations, as well as estimations of the functional states of the immune infiltrate. Furthermore, they can characterize the spatial organization of the TIME (through cell-to-cell interaction analyses and the evaluation of distribution within different regions of interest and tissue compartments), while digital imaging and image analysis software allow for reproducibility of the various assays. In this review, we aim to provide an overview of the different multiplex in situ methods used in cancer research, with special focus on the breast cancer TIME in the neoadjuvant, adjuvant and metastatic settings. The spatial heterogeneity of the TIME and the importance of longitudinal evaluation of TIME changes under the pressure of therapy and metastatic progression are also addressed.

5.
Plants (Basel) ; 11(7)2022 Mar 29.
Article in English | MEDLINE | ID: mdl-35406899

ABSTRACT

Pollen identification is an important task for the botanical certification of honey. It is performed via thorough microscopic examination of the pollen present in honey, a process called melissopalynology. However, manual examination of the images is hard, time-consuming and subject to inter- and intra-observer variability. In this study, we investigated the applicability of deep learning models for the classification of pollen-grain images into 20 pollen types, based on the Cretan Pollen Dataset. In particular, we applied transfer and ensemble learning methods to achieve an accuracy of 97.5%, a sensitivity of 96.9%, a precision of 97%, an F1 score of 96.89% and an AUC of 0.9995. However, in a preliminary case study, when we applied the best-performing model to honey-based pollen-grain images, we found that it performed poorly, only 0.02 better than random guessing (i.e., an AUC of 0.52). This indicates that the model should be further fine-tuned on honey-based pollen-grain images to increase its effectiveness on such data.
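Ensemble learning over several transfer-learned backbones is commonly implemented as soft voting over class probabilities; the following is a minimal sketch under that assumption (illustrative names and numbers, not the paper's exact models):

```python
# Soft-voting ensemble: average per-class probabilities across models
# and predict the class with the highest mean probability.

def ensemble_predict(model_probs):
    """model_probs: one probability vector per model, all the same length."""
    n_models = len(model_probs)
    n_classes = len(model_probs[0])
    avg = [sum(p[c] for p in model_probs) / n_models for c in range(n_classes)]
    return max(range(n_classes), key=avg.__getitem__), avg

# Three transfer-learned models scoring three pollen classes
probs = [[0.6, 0.3, 0.1], [0.2, 0.6, 0.2], [0.4, 0.4, 0.2]]
best_class, avg = ensemble_predict(probs)   # class 1 wins on average
```

Averaging probabilities rather than hard votes lets a confident minority model sway the decision, which is one reason soft voting is the usual default for combining CNN classifiers.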

6.
Diagnostics (Basel) ; 11(8)2021 Aug 22.
Article in English | MEDLINE | ID: mdl-34441447

ABSTRACT

Intravascular ultrasound (IVUS) imaging offers accurate cross-sectional vessel information. To this end, registering temporal IVUS pullbacks acquired at two time points can assist clinicians in accurately assessing pathophysiological changes in the vessels, disease progression and the effect of the treatment intervention. In this paper, we present a novel two-stage registration framework for aligning pairs of longitudinal and axial IVUS pullbacks. Initially, we use a Dynamic Time Warping (DTW)-based algorithm to align the pullbacks in a temporal fashion. Subsequently, an intensity-based registration method is applied, which utilizes a variant of the Harmony Search optimizer to register each matched pair of the pullbacks by maximizing their Mutual Information. The presented method is fully automated and requires only two global image-based measurements, unlike other methods that require the extraction of morphology-based features. The evaluation data include 42 synthetically generated pullback pairs, on which the method achieved an alignment error of 0.1853 frames per pullback, a rotation error of 0.93° and a translation error of 0.0161 mm. It was also tested on 11 baseline and follow-up, and 10 baseline and post-stent-deployment real IVUS pullback pairs from two clinical centres, achieving an alignment error of 4.3 ± 3.9 frames for the longitudinal registration, and a distance error of 0.56 ± 0.323 mm and a rotational error of 12.4° ± 10.5° for the axial registration. Although the performance of the proposed method does not match that of the state of the art for the longitudinal registration, it relies on computationally lighter steps, which is crucial in real-time applications; for the axial registration, it performs on par with or better than the state of the art. The results indicate that the proposed method can support clinical decision making and diagnosis based on sequential imaging examinations.
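The first stage rests on Dynamic Time Warping; a minimal DTW cost computation over 1-d per-frame summaries looks like this (a textbook sketch with an absolute-difference local cost; the paper's features and cost function differ):

```python
def dtw_cost(seq_a, seq_b):
    """Classic O(n*m) DTW with absolute-difference local cost."""
    inf = float("inf")
    n, m = len(seq_a), len(seq_b)
    dist = [[inf] * (m + 1) for _ in range(n + 1)]
    dist[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(seq_a[i - 1] - seq_b[j - 1])
            dist[i][j] = cost + min(dist[i - 1][j],      # insertion
                                    dist[i][j - 1],      # deletion
                                    dist[i - 1][j - 1])  # match
    return dist[n][m]

# A repeated frame in the follow-up pullback costs nothing to align
cost = dtw_cost([1.0, 2.0, 3.0], [1.0, 2.0, 2.0, 3.0])   # 0.0
```

Backtracking through the `dist` table (omitted here) yields the frame-to-frame correspondence that the second, intensity-based stage then refines.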

7.
Comput Biol Med ; 135: 104599, 2021 08.
Article in English | MEDLINE | ID: mdl-34247130

ABSTRACT

Diabetic retinopathy is a retinal disease caused by diabetes mellitus, and it is the leading cause of blindness globally. Early detection and treatment are necessary in order to delay or avoid vision deterioration and vision loss. To that end, many artificial-intelligence-powered methods have been proposed by the research community for the detection and classification of diabetic retinopathy on fundus retina images. This review article provides a thorough analysis of the use of deep learning methods at the various steps of the diabetic retinopathy detection pipeline based on fundus images. We discuss several aspects of that pipeline, ranging from the datasets widely used by the research community and the preprocessing techniques employed (and how these accelerate and improve model performance), to the development of deep learning models for the diagnosis and grading of the disease as well as the localization of its lesions. We also discuss models that have been applied in real clinical settings. Finally, we conclude with some important insights and provide future research directions.


Subjects
Deep Learning , Diabetes Mellitus , Diabetic Retinopathy , Artificial Intelligence , Diabetic Retinopathy/diagnostic imaging , Female , Fundus Oculi , Humans , Uterus
8.
Exp Ther Med ; 20(5): 78, 2020 Nov.
Article in English | MEDLINE | ID: mdl-32968435

ABSTRACT

The coronavirus pandemic and its unprecedented global consequences have spurred the interest of the artificial intelligence research community. A plethora of published studies have investigated the role of imaging, such as chest X-rays and computed tomography, in the automated diagnosis of coronavirus disease 2019 (COVID-19). Open repositories of medical imaging data can play a significant role by promoting cooperation among institutes on a worldwide scale. However, they may introduce limitations related to variable data quality and intrinsic differences due to the wide variety of scanner vendors and imaging parameters. In this study, a state-of-the-art custom U-Net model is presented, with a Dice similarity coefficient of 99.6%, along with a transfer-learning VGG-19-based model for COVID-19 versus pneumonia differentiation exhibiting an area under the curve of 96.1%. The latter significantly improved over the baseline model trained without segmentation on selected tomographic slices of the same dataset. The presented study highlights the importance of a robust preprocessing protocol for image analysis within a heterogeneous imaging dataset and assesses the potential diagnostic value of the presented COVID-19 model by comparing its performance to the state of the art.
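One plausible reading of the segmentation-then-classification pipeline is that each CT slice is masked with the U-Net's predicted segmentation before classification; a minimal sketch of that masking step, under that assumption (the actual U-Net/VGG-19 pipeline is not reproduced here), is:

```python
# Illustrative preprocessing step: zero out pixels outside the predicted
# segmentation so the classifier only sees the region of interest.

def apply_mask(slice_2d, mask_2d, background=0.0):
    """Keep pixels where the binary mask is 1; replace the rest."""
    return [[px if m else background for px, m in zip(row, mrow)]
            for row, mrow in zip(slice_2d, mask_2d)]

ct = [[0.2, 0.8], [0.5, 0.9]]     # toy 2x2 CT slice intensities
lung = [[1, 0], [1, 1]]           # toy binary segmentation mask
masked = apply_mask(ct, lung)     # [[0.2, 0.0], [0.5, 0.9]]
```

Restricting the classifier's input in this way is a common strategy for suppressing scanner- and protocol-dependent background variation across heterogeneous datasets.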

9.
Exp Ther Med ; 20(2): 727-735, 2020 Aug.
Article in English | MEDLINE | ID: mdl-32742318

ABSTRACT

COVID-19 has led to an unprecedented healthcare crisis, with millions of infected people across the globe, often pushing infrastructures, healthcare workers and entire economies beyond their limits. The scarcity of testing kits, even in developed countries, has led to extensive research efforts towards alternative solutions with high sensitivity. Chest radiological imaging paired with artificial intelligence (AI) can offer significant advantages in the diagnosis of patients infected with the novel coronavirus. To this end, transfer learning techniques are used to overcome the limitations emanating from the lack of relevant big datasets, enabling specialized models to converge on limited data, as in the case of X-rays of COVID-19 patients. In this study, we present an interpretable AI framework assessed by expert radiologists on the basis of how well the attention maps focus on the diagnostically relevant image regions. The proposed transfer learning methodology achieves an overall area under the curve of 1 for a binary classification problem across a 5-fold training/testing dataset.
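The area under the ROC curve reported above can be computed directly from labels and scores via the rank (Mann-Whitney U) formulation; this is a generic implementation of the standard metric, not the study's evaluation code:

```python
# AUC = probability that a randomly chosen positive case receives a
# higher score than a randomly chosen negative case (ties count 0.5).

def roc_auc(labels, scores):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Perfectly separated scores give an AUC of 1, as reported in the abstract
auc = roc_auc([1, 1, 0, 0], [0.9, 0.8, 0.3, 0.1])   # 1.0
```

An AUC of exactly 1 means every positive case outscored every negative case in each fold, which is why such results are usually read alongside the radiologists' qualitative assessment of the attention maps.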
