Results 1 - 20 of 94
1.
Artif Intell Med ; 157: 102972, 2024 Aug 31.
Article in English | MEDLINE | ID: mdl-39232270

ABSTRACT

The integration of morphological attributes extracted from histopathological images and genomic data holds significant importance in advancing tumor diagnosis, prognosis, and grading. Histopathological images are acquired through microscopic examination of tissue slices, providing valuable insights into cellular structures and pathological features. Genomic data, on the other hand, provide information about tumor gene expression and functionality. The fusion of these two distinct data types is crucial for gaining a more comprehensive understanding of tumor characteristics and progression. In the past, many studies relied on single-modal approaches for tumor diagnosis. However, these approaches had limitations as they were unable to fully harness the information from multiple data sources. To address these limitations, researchers have turned to multi-modal methods that concurrently leverage both histopathological images and genomic data. These methods better capture the multifaceted nature of tumors and enhance diagnostic accuracy. Nonetheless, existing multi-modal methods have, to some extent, oversimplified the extraction processes for both modalities and the fusion process. In this study, we present a dual-branch neural network, namely SG-Fusion. Specifically, for the histopathological modality, we utilize the Swin-Transformer structure to capture both local and global features and incorporate contrastive learning to encourage the model to discern commonalities and differences in the representation space. For the genomic modality, we develop a graph convolutional network based on gene functional and expression level similarities. Additionally, our model integrates a cross-attention module to enhance information interaction and employs divergence-based regularization to enhance the model's generalization performance. Validation conducted on glioma datasets from the Cancer Genome Atlas unequivocally demonstrates that our SG-Fusion model outperforms both single-modal methods and existing multi-modal approaches in both survival analysis and tumor grading.
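The cross-attention fusion described in this abstract can be illustrated with a minimal sketch (plain Python, toy dimensions; a hypothetical illustration, not the authors' SG-Fusion implementation): features from one modality act as queries, features from the other as keys and values.

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def cross_attention(queries, keys, values):
    """Scaled dot-product cross-attention: queries from one modality
    (e.g. histopathology tokens), keys/values from the other (genomics)."""
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
        w = softmax(scores)  # attention weights over the other modality, sum to 1
        out.append([sum(wi * v[j] for wi, v in zip(w, values))
                    for j in range(len(values[0]))])
    return out

# Toy example: 2 image tokens attending over 3 gene tokens
img = [[1.0, 0.0], [0.0, 1.0]]
genes = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
fused = cross_attention(img, genes, genes)
```

Each fused vector is a convex combination of the genomic value vectors, weighted by image-genomic similarity.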

2.
Biomed Eng Comput Biol ; 15: 11795972241271569, 2024.
Article in English | MEDLINE | ID: mdl-39156985

ABSTRACT

Cancer is the leading cause of mortality in the world, and among all cancers, lung and colon cancers are two of the most common causes of death and morbidity. The aim of this study was to develop an automated lung and colon cancer classification system using histopathological images. An automated lung and colon classification system was developed using histopathological images from the LC25000 dataset. The algorithm development included data splitting, deep neural network model selection, on-the-fly image augmentation, training, and validation. The core of the algorithm was a Swin Transformer V2 model, and 5-fold cross-validation was used to evaluate model performance. Model performance was evaluated using accuracy, kappa, confusion matrix, precision, recall, and F1 score. Extensive experiments were conducted to compare the performances of different neural networks, including both mainstream convolutional neural networks and vision transformers. The Swin Transformer V2 model achieved a perfect score of 1 (100%) on all metrics, the first single model to obtain perfect results on this dataset. The Swin Transformer V2 model has the potential to be used to assist pathologists in classifying lung and colon cancers using histopathology images.
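The k-fold cross-validation used here generalizes to any classifier; a minimal index-splitting sketch in plain Python (illustrative only — the study's actual pipeline is not shown in the abstract):

```python
import random

def kfold_indices(n, k, seed=0):
    """Split n sample indices into k shuffled, near-equal folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def cross_validate_splits(n_samples, k=5):
    """Yield (train, validation) index lists; each fold is held out once."""
    folds = kfold_indices(n_samples, k)
    splits = []
    for i, val in enumerate(folds):
        train = [j for f, fold in enumerate(folds) if f != i for j in fold]
        splits.append((train, val))
    return splits

splits = cross_validate_splits(25, k=5)
```

Every sample is used for validation exactly once, so the k fold scores can be averaged into a single performance estimate.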

3.
Acad Radiol ; 2024 Aug 29.
Article in English | MEDLINE | ID: mdl-39214816

ABSTRACT

RATIONALE AND OBJECTIVES: Accurately predicting the pathological response to chemotherapy before treatment is important for selecting the appropriate treatment groups, formulating individualized treatment plans, and improving the survival rates of patients with gastric cancer (GC). METHODS: We retrospectively enrolled 151 patients diagnosed with GC who underwent preoperative chemotherapy and surgical resection at the Affiliated Hospital of Qingdao University between January 2015 and June 2023. Both pretreatment contrast-enhanced computed tomography (CT) images and whole slide images of pathological hematoxylin and eosin-stained sections were available for each patient. The image features were extracted and used to construct an ensemble radiopathomics machine learning model. In addition, a nomogram was developed by combining the imaging features and clinical characteristics. RESULTS: In total, 962 radiomics and 999 pathomics signatures were extracted from 106 patients in the training cohort. A fusion radiopathomics model was constructed using 13 radiomics and 5 pathomics signatures. The fusion model showed favorable performance compared to single-omics models, with an area under the curve (AUC) of 0.789 in the validation cohort. Moreover, a combined radiopathomics nomogram (RPN) was developed based on radiopathomics features and the Borrmann type, which is a classification method for advanced GC according to tumor growth pattern and gross morphology. The RPN showed superior predictive performance in the training (AUC 0.880) and validation cohorts (AUC 0.797). The decision curve analysis showed that the RPN could provide favorable clinical benefits to patients with GC. CONCLUSIONS: The RPN was able to predict the pathological response to preoperative chemotherapy with high accuracy, and therefore provides a novel tool for personalized treatment of GC.
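The AUC values reported throughout these abstracts have a simple ranking interpretation: the probability that a randomly chosen positive case is scored above a randomly chosen negative one. A minimal pairwise computation (illustrative, with made-up scores):

```python
def auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney pairwise statistic:
    fraction of positive/negative pairs ranked correctly (ties count half)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: two responders (1) and two non-responders (0)
a = auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])  # 3 of 4 pairs correct -> 0.75
```

An AUC of 0.789, as in the fusion model above, therefore means roughly 79% of responder/non-responder pairs are ranked correctly.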

4.
Sensors (Basel) ; 24(16)2024 Aug 20.
Article in English | MEDLINE | ID: mdl-39205077

ABSTRACT

Stroke is the second leading cause of death and a major cause of disability around the world, and the development of atherosclerotic plaques in the carotid arteries is generally considered the leading cause of severe cerebrovascular events. In recent years, new reports have reinforced the role of accurate histopathological analysis of carotid plaques in stratifying affected patients and correctly preventing complications. This work proposes applying an unsupervised learning approach to analyze complex whole-slide images (WSIs) of atherosclerotic carotid plaques to allow a simple and fast examination of their most relevant features. All the code developed for the present analysis is freely available. The proposed method offers qualitative and quantitative tools to assist pathologists in examining the complexity of whole-slide images of carotid atherosclerotic plaques more effectively. Nevertheless, future studies using supervised methods should provide evidence of the correspondence between the clusters estimated using the proposed textural-based approach and the regions manually annotated by expert pathologists.


Subject(s)
Carotid Arteries , Plaque, Atherosclerotic , Unsupervised Machine Learning , Humans , Plaque, Atherosclerotic/pathology , Plaque, Atherosclerotic/diagnostic imaging , Carotid Arteries/pathology , Image Processing, Computer-Assisted/methods , Algorithms , Image Interpretation, Computer-Assisted/methods
5.
Diagnostics (Basel) ; 14(13)2024 Jul 01.
Article in English | MEDLINE | ID: mdl-39001292

ABSTRACT

Breast cancer diagnosis from histopathology images is often time consuming and prone to human error, impacting treatment and prognosis. Deep learning diagnostic methods offer the potential for improved accuracy and efficiency in breast cancer detection and classification. However, they struggle with limited data and subtle variations within and between cancer types. Attention mechanisms provide feature refinement capabilities that have shown promise in overcoming such challenges. To this end, this paper proposes the Efficient Channel Spatial Attention Network (ECSAnet), an architecture built on EfficientNetV2 and augmented with a convolutional block attention module (CBAM) and additional fully connected layers. ECSAnet was fine-tuned using the BreakHis dataset, employing Reinhard stain normalization and image augmentation techniques to minimize overfitting and enhance generalizability. In testing, ECSAnet outperformed AlexNet, DenseNet121, EfficientNetV2-S, InceptionNetV3, ResNet50, and VGG16 in most settings, achieving accuracies of 94.2% at 40×, 92.96% at 100×, 88.41% at 200×, and 89.42% at 400× magnifications. The results highlight the effectiveness of CBAM in improving classification accuracy and the importance of stain normalization for generalizability.
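The Reinhard stain normalization used in this study matches each color channel's mean and standard deviation to those of a reference image. A minimal per-channel sketch (plain Python lists standing in for LAB-space channels — the real method converts RGB to LAB first; this is an illustration, not the authors' code):

```python
def mean_std(xs):
    m = sum(xs) / len(xs)
    var = sum((x - m) ** 2 for x in xs) / len(xs)
    return m, var ** 0.5

def reinhard_channel(src, ref):
    """Shift and scale one channel so its mean/std match the reference's."""
    sm, ss = mean_std(src)
    rm, rs = mean_std(ref)
    scale = rs / ss if ss > 0 else 1.0
    return [(x - sm) * scale + rm for x in src]

# Hypothetical source and reference channel values
src = [10.0, 20.0, 30.0, 40.0]
ref = [100.0, 110.0, 120.0, 130.0]
out = reinhard_channel(src, ref)
```

After normalization the source channel's statistics equal the reference's, which is what reduces stain variability between slides scanned under different conditions.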

6.
Sensors (Basel) ; 24(12)2024 Jun 11.
Article in English | MEDLINE | ID: mdl-38931561

ABSTRACT

Breast cancer is the second most common cancer worldwide, primarily affecting women, and histopathological image analysis is one of the possible methods used to determine tumor malignancy. Regarding image analysis, the application of deep learning has become increasingly prevalent in recent years. However, a significant issue is the unbalanced nature of available datasets, with some classes having more images than others, which may impact the performance of the models due to poorer generalizability. A possible strategy to avoid this problem is downsampling the class with the most images to create a balanced dataset. Nevertheless, this approach is not recommended for small datasets as it can lead to poor model performance. Instead, techniques such as data augmentation are traditionally used to address this issue. These techniques apply simple transformations such as translation or rotation to the images to increase variability in the dataset. Another possibility is using generative adversarial networks (GANs), which can generate images from a relatively small training set. This work aims to enhance model performance in classifying histopathological images by applying data augmentation using GANs instead of traditional techniques.


Subject(s)
Breast Neoplasms , Image Processing, Computer-Assisted , Neural Networks, Computer , Humans , Breast Neoplasms/pathology , Breast Neoplasms/diagnostic imaging , Image Processing, Computer-Assisted/methods , Deep Learning , Female , Algorithms , Image Interpretation, Computer-Assisted/methods
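The "traditional" augmentations this abstract contrasts with GANs are simple geometric transforms. A toy sketch on a 2D array standing in for an image (illustrative only):

```python
def hflip(img):
    """Mirror a 2D image left-right."""
    return [row[::-1] for row in img]

def rot90(img):
    """Rotate a 2D image 90 degrees clockwise."""
    return [list(row) for row in zip(*img[::-1])]

def augment(img):
    """One image in, several geometric variants out: original,
    horizontal flip, and 90/180-degree rotations."""
    return [img, hflip(img), rot90(img), rot90(rot90(img))]

img = [[1, 2],
       [3, 4]]
variants = augment(img)
```

Unlike a GAN, these transforms only re-present existing pixels, which is why they add limited variability on very small or imbalanced datasets.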
7.
BMC Oral Health ; 24(1): 601, 2024 May 23.
Article in English | MEDLINE | ID: mdl-38783295

ABSTRACT

PROBLEM: Oral squamous cell carcinoma (OSCC) is the eighth most prevalent cancer globally, leading to the loss of structural integrity within the oral cavity layers and membranes. Given its high prevalence, early diagnosis is crucial for effective treatment. AIM: This study aimed to utilize recent advancements in deep learning for medical image classification to automate the early diagnosis of oral histopathology images, thereby facilitating prompt and accurate detection of oral cancer. METHODS: A deep learning convolutional neural network (CNN) model was used to categorize benign and malignant oral biopsy histopathological images. By leveraging 17 pretrained DL-CNN models, a two-step statistical analysis identified the pretrained EfficientNetB0 model as superior. Further enhancement of EfficientNetB0 was achieved by incorporating a dual attention network (DAN) into the model architecture. RESULTS: The improved EfficientNetB0 model demonstrated impressive performance metrics, including an accuracy of 91.1%, sensitivity of 92.2%, specificity of 91.0%, precision of 91.3%, false-positive rate (FPR) of 1.12%, F1 score of 92.3%, Matthews correlation coefficient (MCC) of 90.1%, kappa of 88.8%, and computational time of 66.41%. Notably, this model surpasses the performance of state-of-the-art approaches in the field. CONCLUSION: Integrating deep learning techniques, specifically the enhanced EfficientNetB0 model with the DAN, shows promising results for the automated early diagnosis of oral cancer through oral histopathology image analysis. This advancement has significant potential for improving the efficacy of oral cancer treatment strategies.


Subject(s)
Carcinoma, Squamous Cell , Deep Learning , Mouth Neoplasms , Neural Networks, Computer , Humans , Mouth Neoplasms/pathology , Mouth Neoplasms/diagnostic imaging , Mouth Neoplasms/diagnosis , Carcinoma, Squamous Cell/pathology , Carcinoma, Squamous Cell/diagnostic imaging , Carcinoma, Squamous Cell/diagnosis , Early Detection of Cancer/methods , Sensitivity and Specificity
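The metrics reported above (sensitivity, specificity, precision, F1, MCC) all derive from the same 2x2 confusion matrix; a compact reference computation (illustrative, with made-up counts):

```python
import math

def binary_metrics(tp, fp, fn, tn):
    """Standard binary-classification metrics from a confusion matrix."""
    sens = tp / (tp + fn)                  # sensitivity / recall
    spec = tn / (tn + fp)                  # specificity
    prec = tp / (tp + fp)                  # precision / PPV
    f1 = 2 * prec * sens / (prec + sens)   # harmonic mean of precision, recall
    mcc = ((tp * tn - fp * fn) /
           math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return {"sensitivity": sens, "specificity": spec,
            "precision": prec, "f1": f1, "mcc": mcc}

# Hypothetical counts for a balanced test set of 100 biopsies
m = binary_metrics(tp=45, fp=5, fn=5, tn=45)
```

MCC is the most informative single number here because, unlike accuracy, it stays low when a model merely exploits class imbalance.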
8.
Comput Methods Programs Biomed ; 251: 108207, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38723437

ABSTRACT

BACKGROUND AND OBJECTIVE: Lung cancer (LC) has a high fatality rate that continuously affects human lives all over the world. Early detection of LC prolongs human life and helps to prevent the disease. Histopathological inspection is a common method to diagnose LC. Visual inspection for histopathological diagnosis requires more inspection time, and the decision depends on the subjective perception of clinicians. Machine learning techniques usually depend on traditional feature extraction, which is labor-intensive and may not be appropriate for enormous data. In this work, a convolutional neural network (CNN)-based architecture is proposed for the more effective classification of lung tissue subtypes using histopathological images. METHODS: The authors utilized, for the first time, a non-local means (NLM) filter to suppress the effect of noise in histopathological images. The NLM filter efficiently eliminated noise while preserving the edges of images. The obtained denoised images were then given as input to the proposed multi-headed lung cancer classification convolutional neural network (ML3CNet). Furthermore, a model quantization technique was utilized to reduce the size of the proposed model for data storage. Reduction in model size requires less memory and speeds up data processing. RESULTS: The effectiveness of the proposed model was compared with other existing state-of-the-art methods. The proposed ML3CNet achieved an average classification accuracy of 99.72%, sensitivity of 99.66%, precision of 99.64%, specificity of 99.84%, F-1 score of 0.9965, and area under the curve of 0.9978. A quantized accuracy of 98.92% was attained by the proposed model. To validate the applicability of the proposed ML3CNet, it was also tested on the colon cancer dataset. CONCLUSION: The findings reveal that the proposed approach can be beneficial for automatically classifying LC subtypes, which might assist healthcare workers in making decisions more precisely. The proposed model can be implemented on hardware such as a Raspberry Pi for practical realization.


Subject(s)
Lung Neoplasms , Neural Networks, Computer , Humans , Lung Neoplasms/classification , Lung Neoplasms/pathology , Lung Neoplasms/diagnostic imaging , Algorithms , Machine Learning , Image Processing, Computer-Assisted/methods , Diagnosis, Computer-Assisted/methods
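Model quantization, as used for the Raspberry Pi deployment above, typically maps floating-point weights to small integers. A minimal uniform affine quantization sketch (a generic illustration of the idea, not the paper's specific scheme):

```python
def quantize(weights, bits=8):
    """Uniform affine quantization: map floats onto integers 0..2^bits - 1."""
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / (2 ** bits - 1) or 1.0  # guard against constant weights
    q = [round((w - lo) / scale) for w in weights]
    return q, scale, lo

def dequantize(q, scale, lo):
    """Recover approximate float weights from the integer codes."""
    return [qi * scale + lo for qi in q]

w = [-1.0, -0.25, 0.0, 0.5, 1.0]
q, scale, lo = quantize(w)
w2 = dequantize(q, scale, lo)
```

Storing 8-bit codes plus one (scale, offset) pair shrinks the model roughly 4x versus float32, at the cost of a bounded rounding error of at most half a quantization step per weight.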
9.
J Transl Med ; 22(1): 438, 2024 May 08.
Article in English | MEDLINE | ID: mdl-38720336

ABSTRACT

BACKGROUND: Advanced unresectable gastric cancer (GC) patients were previously treated with chemotherapy alone as the first-line therapy. However, with the Food and Drug Administration's (FDA) 2022 approval of a programmed cell death protein 1 (PD-1) inhibitor combined with chemotherapy as the first-line treatment for advanced unresectable GC, patients have significantly benefited. Nevertheless, the significant costs and potential adverse effects necessitate precise patient selection. In recent years, the advent of deep learning (DL) has revolutionized the medical field, particularly in predicting tumor treatment responses. Our study utilizes DL to analyze pathological images, aiming to predict the response to first-line PD-1 combined chemotherapy for advanced-stage GC. METHODS: In this multicenter retrospective analysis, Hematoxylin and Eosin (H&E)-stained slides were collected from advanced GC patients across four medical centers. Treatment response was evaluated according to iRECIST 1.1 criteria after a comprehensive first-line PD-1 immunotherapy combined with chemotherapy. Three DL models were employed in an ensemble approach to create the immune checkpoint inhibitors Response Score (ICIsRS) as a novel histopathological biomarker derived from Whole Slide Images (WSIs). RESULTS: Analyzing 148,181 patches from 313 WSIs of 264 advanced GC patients, the ensemble model exhibited superior predictive accuracy, leading to the creation of ICIsNet. The model demonstrated robust performance across four testing datasets, achieving AUC values of 0.92, 0.95, 0.96, and 1.00, respectively. The boxplot constructed from the ICIsRS reveals statistically significant disparities between the well-response and poor-response groups (all p-values ≤ 0.001).
CONCLUSION: ICIsRS, a DL-derived biomarker from WSIs, effectively predicts advanced GC patients' responses to PD-1 combined chemotherapy, offering a novel approach for personalized treatment planning and allowing for more individualized and potentially effective treatment strategies based on a patient's unique response situations.


Subject(s)
Deep Learning , Immune Checkpoint Inhibitors , Programmed Cell Death 1 Receptor , Stomach Neoplasms , Humans , Stomach Neoplasms/drug therapy , Stomach Neoplasms/pathology , Male , Female , Treatment Outcome , Middle Aged , Immune Checkpoint Inhibitors/therapeutic use , Programmed Cell Death 1 Receptor/antagonists & inhibitors , Aged , Retrospective Studies , ROC Curve , Adult
10.
Med Image Anal ; 95: 103162, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38593644

ABSTRACT

Active Learning (AL) has the potential to solve a major problem of digital pathology: the efficient acquisition of labeled data for machine learning algorithms. However, existing AL methods often struggle in realistic settings with artifacts, ambiguities, and class imbalances, as commonly seen in the medical field. The lack of precise uncertainty estimations leads to the acquisition of images with a low informative value. To address these challenges, we propose Focused Active Learning (FocAL), which combines a Bayesian Neural Network with Out-of-Distribution detection to estimate different uncertainties for the acquisition function. Specifically, the weighted epistemic uncertainty accounts for the class imbalance, aleatoric uncertainty for ambiguous images, and an OoD score for artifacts. We perform extensive experiments to validate our method on MNIST and the real-world Panda dataset for the classification of prostate cancer. The results confirm that other AL methods are 'distracted' by ambiguities and artifacts which harm the performance. FocAL effectively focuses on the most informative images, avoiding ambiguities and artifacts during acquisition. For both experiments, FocAL outperforms existing AL approaches, reaching a Cohen's kappa of 0.764 with only 0.69% of the labeled Panda data.


Subject(s)
Prostatic Neoplasms , Humans , Prostatic Neoplasms/diagnostic imaging , Prostatic Neoplasms/pathology , Male , Machine Learning , Bayes Theorem , Algorithms , Image Interpretation, Computer-Assisted/methods , Artifacts , Neural Networks, Computer
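The separation of epistemic and aleatoric uncertainty that FocAL relies on is commonly estimated from Monte Carlo samples of a Bayesian network's predictions; one standard formulation (BALD mutual information, shown here as a generic sketch rather than the paper's exact acquisition function) is predictive entropy minus the expected per-sample entropy:

```python
import math

def entropy(p):
    """Shannon entropy (nats) of a discrete distribution."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def bald(mc_probs):
    """Mutual information from Monte Carlo predictive samples:
    entropy of the mean prediction minus the mean per-sample entropy.
    High values flag epistemic (model) uncertainty; a high predictive
    entropy with low BALD flags aleatoric (ambiguous-image) uncertainty."""
    n = len(mc_probs)
    k = len(mc_probs[0])
    mean = [sum(p[j] for p in mc_probs) / n for j in range(k)]
    return entropy(mean) - sum(entropy(p) for p in mc_probs) / n

# Confident disagreement (informative) vs. consistent ambiguity (not)
disagree = bald([[0.95, 0.05], [0.05, 0.95]])
ambiguous = bald([[0.5, 0.5], [0.5, 0.5]])
```

Only the first case is worth labeling: the samples disagree, so a label would actually reduce model uncertainty, whereas the second case is intrinsically ambiguous.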
11.
J Neurooncol ; 168(2): 283-298, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38557926

ABSTRACT

PURPOSE: To develop and validate a pathomics signature for predicting the outcomes of Primary Central Nervous System Lymphoma (PCNSL). METHODS: In this study, 132 whole-slide images (WSIs) of 114 patients with PCNSL were enrolled. Quantitative features of hematoxylin and eosin (H&E) stained slides were extracted using CellProfiler. A pathomics signature was established and validated. Cox regression analysis, receiver operating characteristic (ROC) curves, calibration, decision curve analysis (DCA), and net reclassification improvement (NRI) were performed to assess the significance and performance. RESULTS: In total, 802 features were extracted using a fully automated pipeline. Six machine-learning classifiers demonstrated high accuracy in distinguishing malignant neoplasms. The pathomics signature remained a significant predictor of overall survival (OS) and progression-free survival (PFS) in the training cohort (OS: HR 7.423, p < 0.001; PFS: HR 2.143, p = 0.022) and the independent validation cohort (OS: HR 4.204, p = 0.017; PFS: HR 3.243, p = 0.005). A significantly lower response rate to initial treatment was found in the high Path-score group (19/35, 54.29%) compared with the low Path-score group (16/70, 22.86%; p < 0.001). The DCA and NRI analyses confirmed that the nomogram showed incremental performance compared with existing models. The ROC curve demonstrated a relatively sensitive and specific profile for the nomogram (1-, 2-, and 3-year AUC = 0.862, 0.932, and 0.927, respectively). CONCLUSION: As a novel, non-invasive, and convenient approach, the newly developed pathomics signature is a powerful predictor of OS and PFS in PCNSL and might be a potential predictive indicator for therapeutic response.


Subject(s)
Central Nervous System Neoplasms , Lymphoma , Machine Learning , Humans , Female , Male , Central Nervous System Neoplasms/pathology , Central Nervous System Neoplasms/diagnosis , Central Nervous System Neoplasms/mortality , Middle Aged , Prognosis , Lymphoma/pathology , Lymphoma/diagnosis , Lymphoma/mortality , Aged , Adult , ROC Curve , Aged, 80 and over , Survival Rate , Young Adult , Retrospective Studies , Biomarkers, Tumor/metabolism
12.
J Imaging Inform Med ; 37(3): 1177-1186, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38332407

ABSTRACT

Helicobacter pylori (H. pylori) is a widespread pathogenic bacterium, impacting over 4 billion individuals globally. It is primarily linked to gastric diseases, including gastritis, peptic ulcers, and cancer. The current histopathological method for diagnosing H. pylori involves labour-intensive examination of endoscopic biopsies by trained pathologists. However, this process can be time-consuming and may occasionally result in the oversight of small bacterial quantities. Our study explored the potential of five pre-trained models for binary classification of 204 histopathological images, distinguishing between H. pylori-positive and H. pylori-negative cases. These models include EfficientNet-b0, DenseNet-201, ResNet-101, MobileNet-v2, and Xception. To evaluate the models' performance, we conducted a five-fold cross-validation, ensuring the models' reliability across different subsets of the dataset. After extensive evaluation and comparison of the models, ResNet101 emerged as the most promising. It achieved an average accuracy of 0.920, with impressive scores for sensitivity, specificity, positive predictive value, negative predictive value, F1 score, Matthews's correlation coefficient, and Cohen's kappa coefficient. Our study achieved these robust results using a smaller dataset compared to previous studies, highlighting the efficacy of deep learning models even with limited data. These findings underscore the potential of deep learning models, particularly ResNet101, to support pathologists in achieving precise and dependable diagnostic procedures for H. pylori. This is particularly valuable in scenarios where swift and accurate diagnoses are essential.


Subject(s)
Deep Learning , Helicobacter Infections , Helicobacter pylori , Humans , Helicobacter Infections/pathology , Helicobacter Infections/microbiology , Helicobacter Infections/diagnosis , Helicobacter pylori/isolation & purification , Helicobacter pylori/pathogenicity , Image Interpretation, Computer-Assisted/methods , Reproducibility of Results , Sensitivity and Specificity
13.
Med Biol Eng Comput ; 62(6): 1899-1909, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38409645

ABSTRACT

Early detection is critical for successfully diagnosing cancer, and timely analysis of diagnostic tests is increasingly important. In the context of neuroendocrine tumors, the Ki-67 proliferation index serves as a fundamental biomarker, aiding pathologists in grading and diagnosing these tumors based on histopathological images. The appropriate treatment plan for the patient is determined based on the tumor grade. An artificial intelligence-based method is proposed to aid pathologists in the automated calculation and grading of the Ki-67 proliferation index. The proposed system first performs preprocessing to enhance image quality. Then, segmentation is performed using the U-Net architecture, a deep learning algorithm, to separate the nuclei from the background. The identified nuclei are then evaluated as Ki-67 positive or negative based on basic color space information and other features. The Ki-67 proliferation index is then calculated, and the neuroendocrine tumor is graded accordingly. The proposed system's performance was evaluated on a dataset obtained from the Department of Pathology at Meram Faculty of Medicine Hospital, Necmettin Erbakan University. The results of the pathologist and the proposed system were compared, and the proposed system was found to have an accuracy of 95% in tumor grading when compared to the pathologist's report.


Subject(s)
Artificial Intelligence , Cell Proliferation , Ki-67 Antigen , Neoplasm Grading , Neuroendocrine Tumors , Humans , Ki-67 Antigen/metabolism , Ki-67 Antigen/analysis , Neuroendocrine Tumors/pathology , Neuroendocrine Tumors/diagnosis , Neuroendocrine Tumors/metabolism , Algorithms , Deep Learning , Image Processing, Computer-Assisted/methods , Image Interpretation, Computer-Assisted/methods
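Once nuclei have been classified as Ki-67 positive or negative, the index and grade follow from simple arithmetic; a sketch using the commonly cited WHO cut-offs for neuroendocrine tumors (<3% G1, 3-20% G2, >20% G3 — an assumption here, since the abstract does not state the thresholds the system uses):

```python
def ki67_index(positive, negative):
    """Ki-67 proliferation index: percentage of positively stained nuclei."""
    return 100.0 * positive / (positive + negative)

def net_grade(index):
    """Neuroendocrine tumor grade from the Ki-67 index, using the
    commonly cited WHO cut-offs (<3% G1, 3-20% G2, >20% G3)."""
    if index < 3:
        return "G1"
    if index <= 20:
        return "G2"
    return "G3"

# Hypothetical counts from a segmented field
idx = ki67_index(positive=50, negative=950)  # 5.0%
grade = net_grade(idx)
```

The hard part the paper addresses is producing reliable positive/negative nucleus counts from raw images; the grading step itself is deterministic.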
14.
Entropy (Basel) ; 26(2)2024 Feb 15.
Article in English | MEDLINE | ID: mdl-38392420

ABSTRACT

Immunohistochemistry is a powerful technique that is widely used in biomedical research and clinics; it allows the expression levels of proteins of interest in tissue samples to be determined from the color intensity produced when specific antibodies bind their biomarkers. As such, immunohistochemical images are complex and their features are difficult to quantify. Recently, we proposed a novel method, including a first separation stage based on non-negative matrix factorization (NMF), that achieved good results. However, this method was highly dependent on the parameters that control sparseness and non-negativity, as well as on algorithm initialization. Furthermore, the previously proposed method required a reference image as a starting point for the NMF algorithm. In the present work, we propose a new, simpler and more robust method for the automated, unsupervised scoring of bright-field immunohistochemical images. Our work is focused on images from tumor tissues marked with blue (nuclei) and brown (protein of interest) stains. The new proposed method represents a simpler approach that, on the one hand, avoids the use of NMF in the separation stage and, on the other hand, circumvents the need for a control image. This new approach determines the subspace spanned by the two colors of interest using principal component analysis (PCA) with dimension reduction. This subspace is a two-dimensional space, allowing for color vector determination by considering the point density peaks. A new scoring stage is also developed in our method that, again, avoids reference images, making the procedure more robust and less dependent on parameters. Semi-quantitative image scoring experiments using five categories exhibit promising and consistent results when compared to manual scoring carried out by experts.

15.
Brief Bioinform ; 25(1)2023 11 22.
Article in English | MEDLINE | ID: mdl-38145948

ABSTRACT

Spatial transcriptomics unveils the complex dynamics of cell regulation and transcriptomes, but it is typically cost-prohibitive. Predicting spatial gene expression from histological images via artificial intelligence offers a more affordable option, yet existing methods fall short in extracting deep-level information from pathological images. In this paper, we present THItoGene, a hybrid neural network that utilizes dynamic convolutional and capsule networks to adaptively sense potential molecular signals in histological images for exploring the relationship between high-resolution pathology image phenotypes and regulation of gene expression. A comprehensive benchmark evaluation using datasets from human breast cancer and cutaneous squamous cell carcinoma has demonstrated the superior performance of THItoGene in spatial gene expression prediction. Moreover, THItoGene has demonstrated its capacity to decipher both the spatial context and enrichment signals within specific tissue regions. THItoGene can be freely accessed at https://github.com/yrjia1015/THItoGene.


Subject(s)
Carcinoma, Squamous Cell , Deep Learning , Skin Neoplasms , Humans , Artificial Intelligence , Gene Expression Profiling
16.
Bioengineering (Basel) ; 10(10)2023 Sep 28.
Article in English | MEDLINE | ID: mdl-37892874

ABSTRACT

The paper proposes a federated content-based medical image retrieval (FedCBMIR) tool that utilizes federated learning (FL) to address the challenges of acquiring a diverse medical data set for training CBMIR models. CBMIR is a tool to find the most similar cases in the data set to assist pathologists. Training such a tool necessitates a pool of whole-slide images (WSIs) to train the feature extractor (FE) to extract an optimal embedding vector. The strict regulations surrounding data sharing in hospitals make it difficult to collect a rich data set. FedCBMIR distributes an unsupervised FE to collaborating centers for training without sharing the data set, resulting in shorter training times and higher performance. FedCBMIR was evaluated in two experiments: two clients with two different breast cancer data sets, namely BreaKHis and Camelyon17 (CAM17), and four clients with the BreaKHis data set at four different magnifications. FedCBMIR increases the F1 score (F1S) of each client from 96% to 98.1% on CAM17 and from 95% to 98.4% on BreaKHis, with 11.44 fewer hours of training time. FedCBMIR provides 98%, 96%, 94%, and 97% F1S in the BreaKHis experiment with a generalized model and accomplishes this in 25.53 fewer hours of training.
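The aggregation step at the heart of such federated training is usually a size-weighted average of client model parameters (FedAvg). A minimal sketch — a generic illustration of the principle, not FedCBMIR's actual training loop:

```python
def fed_avg(client_weights, client_sizes):
    """Federated averaging: combine client model weight vectors, weighting
    each client by its local dataset size. Only weights travel between
    centers; the raw images never leave the hospital."""
    total = sum(client_sizes)
    n = len(client_weights[0])
    return [sum(w[j] * s for w, s in zip(client_weights, client_sizes)) / total
            for j in range(n)]

# Two hypothetical clients with different local dataset sizes
w_a = [0.0, 2.0]   # weights after local training at center A (100 WSIs)
w_b = [1.0, 0.0]   # weights after local training at center B (300 WSIs)
global_w = fed_avg([w_a, w_b], client_sizes=[100, 300])
```

The server redistributes the averaged weights, each center trains another local round, and the cycle repeats until convergence.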

17.
Bioengineering (Basel) ; 10(10)2023 Sep 29.
Article in English | MEDLINE | ID: mdl-37892876

ABSTRACT

Signet ring cell (SRC) carcinoma is a particularly serious type of cancer and a leading cause of death all over the world. SRC carcinoma has a more deceptive onset than other carcinomas and is mostly encountered in its later stages. Recognition of SRCs at their initial stages is thus a challenge because of their variation in shape and size and changes in illumination. Recognition of SRCs at their early stages is also costly because it requires medical experts. A timely diagnosis is important because the stage of the disease determines the severity, cure, and survival rate of victims. To tackle these challenges, a deep learning (DL)-based methodology is proposed in this paper, i.e., a custom CircleNet with ResNet-34 for SRC recognition and classification. We chose this method because of the circular shape of SRCs and achieved better performance with the CircleNet method. We utilized a challenging dataset for experimentation and performed augmentation to increase the number of dataset samples. The experiments were conducted using 35,000 images and attained 96.40% accuracy. We performed a comparative analysis and confirmed that our method outperforms the other methods.

18.
Comput Med Imaging Graph ; 110: 102302, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37839216

ABSTRACT

Image-based precision medicine research can help doctors make better decisions on treatments. Among all kinds of medical images, a special form called the Whole Slide Image (WSI) is used for diagnosing patients with cancer, with its high resolution enabling more accurate survival prediction. However, one unique challenge of WSI-based prediction models is processing the gigabyte-size or even terabyte-size WSIs, which would make most models computationally infeasible. Although existing models mostly use a pre-selected subset of key patches or patch clusters as input, they might discard some important morphology information, making the prediction inferior. Another challenge is improving the prediction models' explainability, which is crucial to help doctors understand the predictions given by the models and make faithful decisions with high confidence. To address the above two challenges, in this work, we propose a novel explainable survival prediction model based on the Vision Transformer. Specifically, we adopt dual-channel convolutional layers to utilize the complete WSIs for more accurate predictions. We also introduce aleatoric uncertainty into our model to understand its limitations and avoid overconfidence in using the prediction results. Additionally, we present a post-hoc explainable method to identify the most salient patches and distinct morphology features as supporting evidence for predictions. Evaluations on two large cancer datasets show that our proposed model is able to make survival predictions more effectively and has better explainability for cancer diagnosis.


Subject(s)
Neoplasms , Humans , Uncertainty , Survival Analysis , Neoplasms/diagnostic imaging
19.
Comput Biol Med ; 164: 107300, 2023 09.
Article in English | MEDLINE | ID: mdl-37557055

ABSTRACT

Automatic classification of breast cancer histopathological images can reduce pathologists' workload and provide accurate diagnoses. One challenge, however, is that empirical datasets are usually imbalanced, which degrades classification quality relative to conventional methods trained on balanced datasets. The recently proposed bilateral branch network (BBN) tackles this problem by considering both representation learning and classifier learning. We first apply a bilateral sampling strategy to imbalanced breast cancer histopathological image classification and propose a meta-adaptive-weighting-based bilateral multi-dimensional refined space feature attention network (MAW-BMRSFAN), composed of a BMRSFAN and a MAWN. Specifically, the refined space feature attention module (RSFAM) is based on convolutional long short-term memories (ConvLSTMs); it is designed to extract refined spatial features of different dimensions and is inserted into different layers of the classification model. Meanwhile, the MAWN models the mapping from a balanced meta-dataset to the imbalanced dataset, flexibly finding suitable weighting parameters for the BMRSFAN by learning adaptively and directly from a small balanced meta-dataset. Experiments show that MAW-BMRSFAN outperforms previous methods: its recognition accuracy under four different magnifications remains above 80% even at an imbalance factor of 16, indicating strong performance under extremely imbalanced conditions.
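The MAWN described above learns sample weights from a balanced meta-dataset; as a simpler, self-contained illustration of the reweighting idea it builds on (a different, well-known scheme, not the paper's method), class-balanced weights can be derived from the "effective number of samples" per class:

```python
def class_balanced_weights(counts, beta=0.999):
    """Class-balanced loss weights via the effective number of samples
    E_n = (1 - beta**n) / (1 - beta): rarer classes get larger weights.
    Weights are normalized so they sum to the number of classes."""
    effective = [(1.0 - beta ** n) / (1.0 - beta) for n in counts]
    raw = [1.0 / e for e in effective]
    scale = len(counts) / sum(raw)
    return [w * scale for w in raw]
```

With an imbalance factor of 16 (e.g. 1600 vs. 100 training images), the minority class receives a substantially larger weight, counteracting the skewed gradient contribution of the majority class.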


Subject(s)
Breast Neoplasms , Humans , Female , Breast Neoplasms/diagnostic imaging , Breast/diagnostic imaging , Diagnosis, Computer-Assisted/methods
20.
Comput Biol Med ; 164: 107201, 2023 09.
Article in English | MEDLINE | ID: mdl-37517325

ABSTRACT

Pathological examination is the optimal approach for diagnosing cancer, and advances in digital imaging technologies have spurred the emergence of computational histopathology, whose objective is to assist clinical tasks through image processing and analysis techniques. Early work analyzed histopathology images by extracting handcrafted mathematical features, but the performance of these models was unsatisfactory. With the development of artificial intelligence (AI), traditional machine learning methods were applied in this field; although model performance improved, generalization remained poor and manual feature extraction was tedious. The subsequent introduction of deep learning effectively addressed these problems, yet models based on conventional convolutional architectures still cannot adequately capture the contextual information and deep biological features in histopathology images. Because graphs naturally encode relationships between entities, graph-based models are highly suitable for feature extraction from tissue histopathology images and have achieved promising performance in numerous studies. In this article, we review existing graph-based methods in computational histopathology and propose a novel, more comprehensive graph construction approach. Additionally, we categorize the methods and techniques in computational histopathology by learning paradigm, summarize the common clinical applications of graph-based methods, discuss the core concepts of the field, and highlight current challenges and future research directions.
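A typical graph construction in this line of work links each tissue patch (or cell) to its nearest neighbours in feature space and then propagates features over the graph. A minimal sketch of that recipe (illustrative names and a plain mean-aggregation step, not the survey's proposed construction) is:

```python
import math

def knn_graph(feats, k=2):
    """Adjacency list of a k-nearest-neighbour graph over patch feature
    vectors, using Euclidean distance between embeddings."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    adj = []
    for i, f in enumerate(feats):
        order = sorted((j for j in range(len(feats)) if j != i),
                       key=lambda j: dist(f, feats[j]))
        adj.append(order[:k])
    return adj

def propagate(feats, adj):
    """One GCN-style smoothing step: each node's new feature is the mean
    of its own feature and its neighbours' features."""
    out = []
    for i, nbrs in enumerate(adj):
        group = [feats[i]] + [feats[j] for j in nbrs]
        out.append([sum(col) / len(group) for col in zip(*group)])
    return out
```

Stacking such propagation steps is what lets graph models aggregate tissue context beyond the fixed receptive field of a convolution.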


Subject(s)
Artificial Intelligence , Neural Networks, Computer , Machine Learning , Diagnostic Imaging , Image Processing, Computer-Assisted/methods