Results 1 - 7 of 7
1.
IEEE Rev Biomed Eng ; PP, 2024 Jul 12.
Article in English | MEDLINE | ID: mdl-38995713

ABSTRACT

Searching for similar images in archives of histology and histopathology images is a crucial task that may aid in patient tissue comparison for various purposes, ranging from triaging and diagnosis to prognosis and prediction. Whole slide images (WSIs) are highly detailed digital representations of tissue specimens mounted on glass slides. Matching WSI to WSI can serve as the critical method for patient tissue comparison. In this paper, we report an extensive analysis and validation of four search methods: bag of visual words (BoVW), Yottixel, SISH, and RetCCL, along with some of their potential variants. We analyze their algorithms and structures and assess their performance. For this evaluation, we utilized four internal datasets (1269 patients) and three public datasets (1207 patients), totaling more than 200,000 patches from 38 different classes/subtypes across five primary sites. Certain search engines, for example BoVW, exhibit notable efficiency and speed but suffer from low accuracy. Search engines like Yottixel demonstrate efficiency and speed while providing moderately accurate results. Recent proposals, including SISH, display inefficiency and yield inconsistent outcomes, while alternatives like RetCCL prove inadequate in both accuracy and efficiency. Further research is imperative to address the dual aspects of accuracy and minimal storage requirements in histopathological image search.
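The bag-of-visual-words pipeline evaluated above can be sketched in a few lines: cluster patch descriptors into a codebook, represent each slide as a codeword histogram, and rank the archive by histogram similarity. Everything concrete here (plain k-means, the tiny descriptors, cosine similarity) is an illustrative assumption of this sketch, not the configuration used in the paper.

```python
import math

def kmeans(points, k, iters=20):
    """Plain k-means, initialized with the first k points for simplicity."""
    centroids = [list(p) for p in points[:k]]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centroids[i])))
            clusters[j].append(p)
        for i, c in enumerate(clusters):
            if c:  # keep old centroid if the cluster emptied
                centroids[i] = [sum(col) / len(c) for col in zip(*c)]
    return centroids

def bovw_histogram(descriptors, codebook):
    """L1-normalized histogram of nearest-codeword counts for one slide."""
    hist = [0.0] * len(codebook)
    for d in descriptors:
        j = min(range(len(codebook)),
                key=lambda i: sum((a - b) ** 2 for a, b in zip(d, codebook[i])))
        hist[j] += 1.0
    total = sum(hist) or 1.0
    return [h / total for h in hist]

def cosine(u, v):
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)) or 1.0
    return num / den

def search(query_hist, archive):
    """Rank archive slides (id -> histogram) by similarity to the query."""
    return sorted(archive, key=lambda sid: -cosine(query_hist, archive[sid]))
```

A usage sketch: build the codebook once from pooled patch descriptors, encode every slide in the archive, then answer queries with `search`.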

2.
Comput Biol Med ; 162: 107026, 2023 08.
Article in English | MEDLINE | ID: mdl-37267827

ABSTRACT

Considering their gigapixel sizes, the representation of whole slide images (WSIs) for classification and retrieval systems is a non-trivial task. Patch processing and multiple-instance learning (MIL) are common approaches to analyzing WSIs. However, in end-to-end training, these methods require high GPU memory consumption due to the simultaneous processing of multiple sets of patches. Furthermore, compact WSI representations through binary and/or sparse representations are urgently needed for real-time image retrieval within large medical archives. To address these challenges, we propose a novel framework for learning compact WSI representations utilizing deep conditional generative modeling and Fisher vector theory. The training of our method is instance-based, achieving better memory and computational efficiency during training. To achieve efficient large-scale WSI search, we introduce new loss functions, namely gradient sparsity and gradient quantization losses, for learning sparse and binary permutation-invariant WSI representations called Conditioned Sparse Fisher Vector (C-Deep-SFV) and Conditioned Binary Fisher Vector (C-Deep-BFV). The learned WSI representations are validated on the largest public WSI archive, The Cancer Genome Atlas (TCGA), as well as the Liver-Kidney-Stomach (LKS) dataset. For WSI search, the proposed method outperforms Yottixel and the Gaussian Mixture Model (GMM)-based Fisher vector in both retrieval accuracy and speed. For WSI classification, we achieve competitive performance against the state of the art on lung cancer data from TCGA and the public LKS benchmark.
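The Fisher vector idea underlying C-Deep-SFV/C-Deep-BFV can be illustrated with the classical GMM-based variant mentioned above: a set of patch descriptors is encoded as the gradient of its log-likelihood under a fixed GMM, which is a sum over descriptors and therefore permutation-invariant. Using fixed (not fitted) GMM parameters and keeping only the mean-gradient part are simplifications of this sketch, not the paper's method.

```python
import math

def gaussian(x, mu, sigma):
    """Density of a diagonal-covariance Gaussian at point x."""
    p = 1.0
    for xi, mi, si in zip(x, mu, sigma):
        p *= math.exp(-0.5 * ((xi - mi) / si) ** 2) / (si * math.sqrt(2 * math.pi))
    return p

def fisher_vector(descriptors, weights, means, sigmas):
    """Mean-gradient Fisher vector of a descriptor set under a fixed
    diagonal GMM (weights, means, sigmas); returns a flat K*D vector."""
    T, K, D = len(descriptors), len(weights), len(means[0])
    fv = [[0.0] * D for _ in range(K)]
    for x in descriptors:
        dens = [w * gaussian(x, m, s) for w, m, s in zip(weights, means, sigmas)]
        z = sum(dens) or 1.0
        for k in range(K):
            g = dens[k] / z  # posterior responsibility of component k for x
            for d in range(D):
                fv[k][d] += g * (x[d] - means[k][d]) / sigmas[k][d]
    # Normalize by set size and component weight (classical FV scaling)
    return [fv[k][d] / (T * math.sqrt(weights[k])) for k in range(K) for d in range(D)]
```

Because the encoding sums over descriptors, shuffling the patches of a slide leaves the representation unchanged, which is the permutation invariance the abstract refers to.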


Subjects
Benchmarking , Learning , Genomics , Kidney , Liver
3.
IEEE J Biomed Health Inform ; 26(9): 4611-4622, 2022 09.
Article in English | MEDLINE | ID: mdl-35687644

ABSTRACT

This paper investigates the effect of magnification on content-based image search in digital pathology archives and proposes to use multi-magnification image representation. Image search in large archives of digital pathology slides provides researchers and medical professionals with an opportunity to match records of current and past patients and learn from evidently diagnosed and treated cases. When working with microscopes, pathologists switch between different magnification levels while examining tissue specimens to find and evaluate various morphological features. Inspired by the conventional pathology workflow, we have investigated several magnification levels in digital pathology and their combinations to minimize the gap between AI-enabled image search methods and clinical settings. The proposed searching framework does not rely on any regional annotation and potentially applies to millions of unlabelled (raw) whole slide images. This paper suggests two approaches for combining magnification levels and compares their performance. The first approach obtains a single-vector deep feature representation for a digital slide, whereas the second approach works with a multi-vector deep feature representation. We report the search results of 20×, 10×, and 5× magnifications and their combinations on a subset of The Cancer Genome Atlas (TCGA) repository. The experiments verify that cell-level information at the highest magnification is essential for searching for diagnostic purposes. In contrast, low-magnification information may improve this assessment depending on the tumor type. Our multi-magnification approach achieved up to 11% F1-score improvement in searching among the urinary tract and brain tumor subtypes compared to the single-magnification image search.
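The two combination strategies above can be sketched directly: a single concatenated vector per slide versus a set of per-magnification vectors compared with a set-to-set distance. The Euclidean metric and the median-of-minimum-distances rule below are illustrative choices for this sketch, not necessarily those of the paper.

```python
import math

def concat_representation(feats_by_mag):
    """Approach 1: one vector per slide, concatenating per-magnification
    features in a fixed (sorted) magnification order."""
    return [v for mag in sorted(feats_by_mag) for v in feats_by_mag[mag]]

def euclid(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def multivector_distance(set_a, set_b):
    """Approach 2: each slide keeps several vectors; the slide-to-slide
    distance is the median over set_a of each vector's nearest-neighbour
    distance into set_b."""
    mins = sorted(min(euclid(a, b) for b in set_b) for a in set_a)
    n = len(mins)
    return mins[n // 2] if n % 2 else 0.5 * (mins[n // 2 - 1] + mins[n // 2])
```

The single-vector form plugs into any off-the-shelf nearest-neighbour index, while the multi-vector form preserves per-magnification detail at the cost of a custom distance.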


Subjects
Image Processing, Computer-Assisted , Microscopy , Humans , Image Processing, Computer-Assisted/methods , Microscopy/methods , Workflow
4.
J Pathol Inform ; 13: 100133, 2022.
Article in English | MEDLINE | ID: mdl-36605114

ABSTRACT

Image analysis in digital pathology has proven to be one of the most challenging fields in medical imaging for AI-driven classification and search tasks. Due to their gigapixel dimensions, whole slide images (WSIs) are difficult to represent for computational pathology. Self-supervised learning (SSL) has recently demonstrated excellent performance in learning effective representations on pretext objectives, which may improve generalization on downstream tasks. Previous self-supervised representation methods rely on patch selection and classification, so the effect of SSL on end-to-end WSI representation has not been investigated. In contrast to existing augmentation-based SSL methods, this paper proposes a novel self-supervised learning scheme based on the available primary site information. We also design a fully supervised contrastive learning setup to increase the robustness of the representations for WSI classification and search on both pretext and downstream tasks. We trained and evaluated the model on more than 6000 WSIs from The Cancer Genome Atlas (TCGA) repository provided by the National Cancer Institute. The proposed architecture achieved excellent results on most primary sites and cancer subtypes. We also achieved the best validation result on a lung cancer classification task.
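The fully supervised contrastive setup can be illustrated with a standard supervised contrastive loss over labeled embeddings, where the label plays the role of the primary-site information: same-label pairs are pulled together and all other pairs pushed apart. The temperature value and the exact loss form here are assumptions of this sketch, not the paper's specification.

```python
import math

def supcon_loss(embeddings, labels, tau=0.1):
    """Supervised contrastive loss over L2-normalized embeddings:
    for each anchor, positives are all other samples sharing its label."""
    def norm(v):
        n = math.sqrt(sum(x * x for x in v)) or 1.0
        return [x / n for x in v]
    z = [norm(v) for v in embeddings]
    # Temperature-scaled cosine similarities between every pair
    sim = [[sum(a * b for a, b in zip(zi, zj)) / tau for zj in z] for zi in z]
    loss, n = 0.0, len(z)
    for i in range(n):
        pos = [p for p in range(n) if p != i and labels[p] == labels[i]]
        if not pos:
            continue  # anchors without positives contribute nothing
        denom = sum(math.exp(sim[i][a]) for a in range(n) if a != i)
        loss += -sum(math.log(math.exp(sim[i][p]) / denom) for p in pos) / len(pos)
    return loss / n
```

Lower loss means same-site slides are already clustered in embedding space; the training loop would backpropagate this value through the encoder.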

5.
Med Image Anal ; 70: 102032, 2021 05.
Article in English | MEDLINE | ID: mdl-33773296

ABSTRACT

Feature vectors provided by pre-trained deep artificial neural networks have become a dominant source for image representation in recent literature. Their contribution to the performance of image analysis can be improved through fine-tuning. As an ultimate solution, one might even train a deep network from scratch with domain-relevant images, a highly desirable option that is generally impeded in pathology by the lack of labeled images and the computational expense. In this study, we propose a new network, namely KimiaNet, that employs the topology of DenseNet with four dense blocks, fine-tuned and trained with histopathology images in different configurations. We used more than 240,000 image patches of 1000×1000 pixels, acquired at 20× magnification through our proposed "high-cellularity mosaic" approach, to enable the usage of weak labels from 7126 whole slide images of formalin-fixed paraffin-embedded human pathology samples publicly available through The Cancer Genome Atlas (TCGA) repository. We tested KimiaNet using three public datasets, namely TCGA, endometrial cancer images, and colorectal cancer images, by evaluating the performance of search and classification when the corresponding features of different networks are used for image representation. In addition, we designed and trained multiple convolutional batch-normalized ReLU (CBR) networks. The results show that KimiaNet provides superior results compared to the original DenseNet and smaller CBR networks when used as a feature extractor to represent histopathology images.
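The "high-cellularity mosaic" patch selection can be caricatured with a simple proxy: in stained tissue, nucleus-dense (high-cellularity) regions tend to appear darker, so one may keep the darkest patches of each slide and attach the slide-level weak label to them. Using mean grayscale intensity as the cellularity proxy, and the keep fraction, are assumptions of this sketch, not KimiaNet's actual selection procedure.

```python
def high_cellularity_mosaic(patches, keep_fraction=0.5):
    """Select patch ids with the lowest mean intensity (darkest patches)
    as a crude stand-in for high cellularity; the slide's weak label is
    then assigned only to these informative patches.
    `patches` maps patch id -> flat list of grayscale pixel values."""
    mean_val = {pid: sum(px) / len(px) for pid, px in patches.items()}
    ranked = sorted(patches, key=lambda pid: mean_val[pid])  # darkest first
    k = max(1, int(len(ranked) * keep_fraction))
    return ranked[:k]
```

In practice a real pipeline would operate on stain-separated or thresholded tiles rather than raw mean intensity, but the weak-label mechanism is the same: the mosaic decides which patches inherit the slide's diagnosis.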


Subjects
Neoplasms , Neural Networks, Computer , Humans , Image Processing, Computer-Assisted , Neoplasms/diagnostic imaging
6.
PLoS One ; 15(1): e0226048, 2020.
Article in English | MEDLINE | ID: mdl-31935220

ABSTRACT

Recently, brain-computer interface (BCI) systems based on steady-state visual evoked potentials (SSVEPs) have attracted much attention due to their high information transfer rate (ITR) and increasing number of targets. However, SSVEP-based methods can still be improved in terms of accuracy and target detection time. We propose a new method based on canonical correlation analysis (CCA) that integrates subject-specific models with subject-independent information to enhance BCI performance. Specifically, we use training data from other subjects to optimize the hyperparameters of a specific subject's CCA-based model. An ensemble version of the proposed method is also developed for a fair comparison with ensemble task-related component analysis (TRCA). The proposed method is compared with the TRCA and extended CCA methods on a publicly available 35-subject SSVEP benchmark dataset, with performance quantified by classification accuracy and ITR. The ITR of the proposed method is higher than those of TRCA and extended CCA. The proposed method outperforms extended CCA in all conditions and TRCA for time windows greater than 0.3 s. It also outperforms TRCA when limited training blocks and electrodes are available. This study illustrates that adding subject-independent information to subject-specific models can improve the performance of SSVEP-based BCIs.
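Standard CCA-based SSVEP detection, which the method above builds on, correlates the recorded signal with sine/cosine references at each candidate stimulus frequency and picks the best-matching one. The sketch below handles only the single-channel special case, where CCA reduces to the multiple correlation of the channel with the reference pair; the multi-channel generalized eigenproblem and the paper's subject-independent hyperparameter transfer are beyond it.

```python
import math

def canonical_corr_1ch(x, freq, fs):
    """Correlation between one channel and the {sin, cos} references at
    `freq` Hz: least-squares fit of x onto the references, then the
    correlation of x with that fit (single-channel CCA special case)."""
    n = len(x)
    s = [math.sin(2 * math.pi * freq * i / fs) for i in range(n)]
    c = [math.cos(2 * math.pi * freq * i / fs) for i in range(n)]
    def center(v):
        m = sum(v) / len(v)
        return [vi - m for vi in v]
    x, s, c = center(x), center(s), center(c)
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))
    # Solve the 2x2 normal equations for x ≈ a*s + b*c
    ss, cc, sc = dot(s, s), dot(c, c), dot(s, c)
    xs, xc = dot(x, s), dot(x, c)
    det = ss * cc - sc * sc or 1e-12
    a = (xs * cc - xc * sc) / det
    b = (ss * xc - sc * xs) / det
    fit = [a * si + b * ci for si, ci in zip(s, c)]
    den = math.sqrt(dot(x, x) * dot(fit, fit)) or 1e-12
    return dot(x, fit) / den

def detect_frequency(x, candidates, fs):
    """SSVEP target detection: the stimulus frequency whose references
    correlate best with the recorded signal."""
    return max(candidates, key=lambda f: canonical_corr_1ch(x, f, fs))
```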


Subjects
Brain-Computer Interfaces , Evoked Potentials, Visual , Patient-Specific Modeling , Female , Humans , Male , Young Adult
7.
Front Hum Neurosci ; 12: 515, 2018.
Article in English | MEDLINE | ID: mdl-30618689

ABSTRACT

Several behavioral psycholinguistic studies have shown that concrete words are processed more efficiently than abstract words: they are remembered faster, recognized better, and learned more easily. This phenomenon is called the concreteness effect. Previous fMRI studies have compared the neural representations of concrete and abstract concepts in terms of activated regions. In the present study, we compare the condition-specific connectivity of functional networks (obtained by group ICA) during imagery of abstract and concrete words. The results reveal that the functional network connectivity between three pairs of networks during concrete imagery is significantly different from that of abstract imagery (FDR correction at a significance level of 0.05). This suggests that abstract and concrete concepts have different representations in terms of functional network connectivity patterns. Remarkably, in all of these network pairs, connectivity during concrete imagery is significantly higher than during abstract imagery. These more coherent networks include both linguistic and visual regions with a higher engagement of the right hemisphere, so the results are in line with dual coding theory. Additionally, these three pairs of networks include the contrasting regions that have shown stronger activation for either concrete or abstract word processing in earlier studies. The findings imply that the brain is more integrated and synchronized during concrete imagery, which may explain why concrete words are processed faster. To validate the results, we used functional network connectivity distributions (FNCDs). A Wilcoxon rank-sum test was used to check whether the abstract and concrete FNCDs extracted from all subjects are the same; it revealed that the corresponding distributions differ, indicating two different patterns of connectivity for abstract and concrete word processing. Also, the mean of the FNCD is significantly higher during concrete imagery than during abstract imagery. Furthermore, FNCDs at the single-subject level are significantly more left-skewed for concrete imagery or, equivalently, include more strong connections.
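Functional network connectivity as used above reduces, in its simplest form, to pairwise Pearson correlations between network time courses, whose mean can then be compared across conditions. The toy time courses and the plain (unsmoothed) sample correlation are assumptions of this sketch.

```python
import math

def pearson(u, v):
    """Sample Pearson correlation between two equal-length time courses."""
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    du = [a - mu for a in u]
    dv = [b - mv for b in v]
    num = sum(a * b for a, b in zip(du, dv))
    den = math.sqrt(sum(a * a for a in du) * sum(b * b for b in dv)) or 1e-12
    return num / den

def fnc(timecourses):
    """Functional network connectivity: the upper triangle of the matrix of
    pairwise correlations between network time courses (one list per network)."""
    n = len(timecourses)
    return [pearson(timecourses[i], timecourses[j])
            for i in range(n) for j in range(i + 1, n)]

def mean_connectivity(timecourses):
    """Mean over all network pairs -- the summary compared across conditions."""
    vals = fnc(timecourses)
    return sum(vals) / len(vals)
```

Collecting `fnc` values over subjects for each condition yields the FNCDs the study compares with a rank-sum test.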
