Results 1 - 20 of 50
1.
Quant Imaging Med Surg ; 14(8): 5831-5844, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39144041

ABSTRACT

Background: Axillary lymph node (ALN) status is a crucial prognostic indicator for breast cancer metastasis, with manual interpretation of whole slide images (WSIs) being the current standard practice. However, this method is subjective and time-consuming. Recent advancements in deep learning-based methods for medical image analysis have shown promise in improving clinical diagnosis. This study aims to leverage these technological advancements to develop a deep learning model, based on features extracted from primary tumor biopsies, for preoperatively identifying ALN metastasis in early-stage breast cancer patients with negative nodes. Methods: We present DLCNBC-SA, a deep learning-based network tailored for core needle biopsy and clinical data feature extraction, which integrates a self-attention mechanism (CNBC-SA). The proposed model consists of a convolutional neural network (CNN)-based feature extractor and an improved self-attention mechanism module, which preserves the independence of features in WSIs and enhances them to provide a rich feature representation. To validate the performance of the proposed model, we conducted comparative experiments and ablation studies on publicly available datasets and verified the results through quantitative analysis. Results: The comparative experiments illustrate the superior performance of the proposed model in binary classification of ALNs, as compared to alternative methods. Our method achieved outstanding performance [area under the curve (AUC): 0.882] in this task, significantly surpassing the state-of-the-art (SOTA) method on the same dataset (AUC: 0.862). The ablation experiments reveal that incorporating RandomRotation data augmentation and using the Adadelta optimizer can effectively enhance the performance of the proposed model.
Conclusions: The experimental results demonstrate that the model proposed in this paper outperforms the SOTA model on the same dataset, thereby establishing its reliability as an assistant for pathologists in analyzing WSIs of breast cancer. Consequently, it significantly enhances both the efficiency and accuracy of doctors during the diagnostic process.
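The self-attention module at the heart of CNBC-SA is not reproduced in the abstract; as a rough illustration only, the following is generic scaled dot-product self-attention over CNN patch features in numpy, with all dimensions and weights invented stand-ins for learned parameters.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(feats, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a set of patch feature vectors.

    feats: (n_patches, d) CNN features from WSI tiles; Wq/Wk/Wv: (d, d_k)
    projection matrices (random stand-ins for learned weights).
    """
    Q, K, V = feats @ Wq, feats @ Wk, feats @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])   # (n, n) pairwise affinities
    attn = softmax(scores, axis=-1)          # each row sums to 1
    return attn @ V                          # context-enriched patch features

rng = np.random.default_rng(0)
n, d, dk = 6, 8, 4
out = self_attention(rng.normal(size=(n, d)), rng.normal(size=(d, dk)),
                     rng.normal(size=(d, dk)), rng.normal(size=(d, dk)))
print(out.shape)  # (6, 4)
```

Each output row mixes information from all patches, which is how such a module enriches otherwise independent tile features.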

2.
Virchows Arch ; 2024 Aug 07.
Article in English | MEDLINE | ID: mdl-39107524

ABSTRACT

The aim of the present study was to develop and validate a quantitative image analysis (IA) algorithm to aid pathologists in assessing bright-field HER2 in situ hybridization (ISH) tests in solid cancers. A cohort of 80 sequential cases (40 HER2-negative and 40 HER2-positive) was evaluated for HER2 gene amplification with bright-field ISH. We developed an IA algorithm using the ISH Module of the HALO software to automatically quantify HER2 and CEP17 copy numbers per cell as well as the HER2/CEP17 ratio. We observed a high correlation between visual and IA quantification for the HER2/CEP17 ratio and the average HER2 and CEP17 copy numbers per cell (Pearson's correlation coefficients of 0.842, 0.916, and 0.765, respectively). IA was able to count from 124 to 47,044 cells (median of 5,565 cells). The margin of error for the visual quantification of the HER2/CEP17 ratio and of the average HER2 copy number per cell decreased from a median of 0.23 to 0.02 and from a median of 0.49 to 0.04, respectively, with IA. Curve-estimation regression models showed that a minimum of 469 or 953 invasive cancer cells per case is needed to reach an average margin of error below 0.1 for the HER2/CEP17 ratio or for the average HER2 copy number per cell, respectively. Lastly, a case took on average 212.1 s of IA execution time, meaning that the IA evaluates about 130 cells/s and requires 6.7 s/mm². The concordance of the IA software with visual scoring was 95%, with a sensitivity of 90% and a specificity of 100%. All four discordant cases achieved concordant results after adjustment of the region of interest. In conclusion, this validation study underscores the usefulness of IA in HER2 ISH testing, displaying excellent concordance with visual scoring and significantly reducing margins of error.
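The slide-level quantities in this abstract reduce to simple arithmetic over per-cell counts. The sketch below assumes "margin of error" means the 95% confidence half-width of the mean (1.96·SD/√n) and uses a simplified amplification rule; the study's exact definitions may differ, and the cell counts here are simulated, not the study's data.

```python
import numpy as np

def her2_summary(her2_per_cell, cep17_per_cell):
    """Summarize bright-field ISH counts into the quantities quoted above.

    Assumes "margin of error" is the 95% CI half-width of the mean
    (1.96 * SD / sqrt(n)) and uses a simplified amplification rule
    (ratio >= 2.0 and mean HER2 >= 4.0); real scoring criteria are
    more nuanced.
    """
    her2 = np.asarray(her2_per_cell, dtype=float)
    cep17 = np.asarray(cep17_per_cell, dtype=float)
    n = her2.size
    ratio = her2.sum() / cep17.sum()                 # HER2/CEP17 ratio
    moe = 1.96 * her2.std(ddof=1) / np.sqrt(n)       # shrinks as n grows
    return {"n_cells": n, "ratio": float(ratio),
            "mean_her2": float(her2.mean()), "moe_mean_her2": float(moe),
            "amplified": bool(ratio >= 2.0 and her2.mean() >= 4.0)}

# 200 simulated cells; IA counts thousands per case, which is why its
# margin of error drops to the ~0.04 reported above.
rng = np.random.default_rng(1)
s = her2_summary(rng.poisson(6, 200), rng.poisson(2, 200))
print(s["ratio"] > 2.0, s["amplified"])
```

The 1/√n factor in the margin of error is the arithmetic behind the study's finding that counting more cells (469 or 953, depending on the quantity) pushes the error below 0.1.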

3.
Med Image Anal ; 97: 103294, 2024 Aug 06.
Article in English | MEDLINE | ID: mdl-39128377

ABSTRACT

Multiple instance learning (MIL)-based methods have been widely adopted to process whole slide images (WSIs) in computational pathology. Due to the sparse slide-level supervision, these methods usually lack good localization of tumor regions, leading to poor interpretability. Moreover, they lack robust uncertainty estimation of prediction results, leading to poor reliability. To address these two limitations, we propose an explainable and evidential multiple instance learning (E2-MIL) framework for whole slide image classification. E2-MIL is mainly composed of three modules: a detail-aware attention distillation module (DAM), a structure-aware attention refinement module (SRM), and an uncertainty-aware instance classifier (UIC). Specifically, DAM helps the global network locate more detail-aware positive instances by using complementary sub-bags to learn detailed attention knowledge from the local network. In addition, a masked self-guidance loss is introduced to help bridge the gap between slide-level labels and the instance-level classification task. SRM generates a structure-aware attention map that locates the entire tumor region structure by effectively modeling the spatial relations between clustered instances. Moreover, UIC provides accurate instance-level classification results and robust predictive uncertainty estimation, improving model reliability based on subjective logic theory. Extensive experiments on three large multi-center subtyping datasets demonstrate the superiority of E2-MIL at both the slide level and the instance level.
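E2-MIL's DAM/SRM/UIC modules are not specified in the abstract; the sketch below shows only the generic attention-based MIL pooling such frameworks build on, with invented dimensions and random stand-in weights. The per-instance attention weights are what gives MIL models their (often weak) tumor localization, the limitation this paper targets.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_mil_pool(instances, V, w):
    """Generic attention-based MIL pooling (not E2-MIL itself).

    instances: (n, d) tile embeddings for one WSI "bag".
    V: (d, h), w: (h,) attention parameters (random stand-ins here).
    Returns the bag embedding and per-instance attention weights; the
    weights double as a coarse localization map over tiles.
    """
    scores = np.tanh(instances @ V) @ w     # (n,) unnormalized attention
    a = softmax(scores)                     # instance weights sum to 1
    bag = a @ instances                     # (d,) weighted bag embedding
    return bag, a

rng = np.random.default_rng(0)
inst = rng.normal(size=(10, 16))
bag, a = attention_mil_pool(inst, rng.normal(size=(16, 8)), rng.normal(size=8))
print(bag.shape, round(float(a.sum()), 6))
```

A bag-level classifier then maps the pooled embedding to the slide label, which is the only supervision available.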

4.
J Med Imaging (Bellingham) ; 11(4): 047501, 2024 Jul.
Article in English | MEDLINE | ID: mdl-39087085

ABSTRACT

Purpose: Endometrial cancer (EC) is one of the most common types of cancer affecting women. While hematoxylin and eosin (H&E) staining remains the standard for histological analysis, immunohistochemistry (IHC) provides molecular-level visualizations. Our study proposes a digital staining method to generate the hematoxylin-3,3'-diaminobenzidine (H-DAB) IHC stain of Ki-67 for the whole slide image of the EC tumor from its H&E-stained counterpart. Approach: We employed a color unmixing technique to yield stain density maps from the optical density (OD) of the stains and utilized a U-Net for end-to-end inference. The effectiveness of the proposed method was evaluated using the Pearson correlation between the digital and physical stain's labeling index (LI), a key metric indicating tumor proliferation. Two different cross-validation schemes were designed in our study: intraslide validation and cross-case validation (CCV). In the widely used intraslide scheme, the training and validation sets might include different regions from the same slide; the rigorous CCV scheme strictly prohibits any validation slide from contributing to training. Results: The proposed method yielded a high-resolution digital stain with preserved histological features and a reliable correlation with the physical stain in terms of the Ki-67 LI. In the intraslide scheme, using intraslide patches resulted in a biased accuracy (e.g., R = 0.98) significantly higher than that of CCV. The CCV scheme retained a fair correlation (e.g., R = 0.66) between the LIs calculated from the digital stain and its physical IHC counterpart. Inferring the OD of the IHC stain from that of the H&E stain enhanced the correlation metric, outperforming the baseline model using RGB space. Conclusions: Our study revealed that molecule-level insights can be obtained from H&E images using deep learning. Furthermore, the improvement brought by OD inference indicates a possible route toward more generalizable digital-staining models via per-stain analysis.
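The color-unmixing step can be sketched from first principles: convert RGB to optical density via Beer-Lambert, then solve a least-squares system against a stain matrix. The stain vectors below are the commonly used Ruifrok-Johnston H-DAB defaults, an assumption on my part; the study may estimate slide-specific vectors.

```python
import numpy as np

def rgb_to_od(rgb, background=255.0):
    # Beer-Lambert optical density; +1 avoids log(0) on saturated pixels.
    rgb = np.asarray(rgb, dtype=float)
    return -np.log10((rgb + 1.0) / background)

def unmix_hdab(od_pixels):
    """Separate H-DAB stain densities from OD pixels by least squares.

    Uses the standard Ruifrok-Johnston H-DAB stain vectors (an assumption;
    real pipelines often calibrate slide-specific vectors).
    od_pixels: (n, 3) OD values; returns (n, 2) [hematoxylin, DAB] densities.
    """
    hema = np.array([0.650, 0.704, 0.286])
    dab = np.array([0.268, 0.570, 0.776])
    M = np.stack([hema / np.linalg.norm(hema),
                  dab / np.linalg.norm(dab)])        # (2, 3) stain matrix
    densities, *_ = np.linalg.lstsq(M.T, od_pixels.T, rcond=None)
    return densities.T

pix = np.array([[30, 20, 120], [90, 60, 40]])        # two toy RGB pixels
dens = unmix_hdab(rgb_to_od(pix))
print(dens.shape)  # (2, 2)
```

Working in OD rather than RGB makes stain contributions additive, which is plausibly why OD inference outperformed the RGB baseline above.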

5.
Brief Bioinform ; 25(4)2024 May 23.
Article in English | MEDLINE | ID: mdl-38960406

ABSTRACT

Spatial transcriptomics data play a crucial role in cancer research, providing a nuanced understanding of the spatial organization of gene expression within tumor tissues. Unraveling the spatial dynamics of gene expression can unveil key insights into tumor heterogeneity and aid in identifying potential therapeutic targets. However, in many large-scale cancer studies, spatial transcriptomics data are limited, with bulk RNA-seq and corresponding whole slide image (WSI) data being more common (e.g. the TCGA project). To address this gap, there is a critical need for methodologies that can estimate gene expression at near-cell (spot) level resolution from existing WSI and bulk RNA-seq data. This capability is essential for reanalyzing expansive cohort studies and uncovering novel biomarkers that were overlooked in the initial assessments. In this study, we present STGAT (Spatial Transcriptomics Graph Attention Network), a novel approach leveraging Graph Attention Networks (GAT) to discern spatial dependencies among spots. Trained on spatial transcriptomics data, STGAT is designed to estimate gene expression profiles at spot-level resolution and to predict whether each spot represents tumor or non-tumor tissue, especially in patient samples where only WSI and bulk RNA-seq data are available. Comprehensive tests on two breast cancer spatial transcriptomics datasets demonstrated that STGAT outperforms existing methods in accurately predicting gene expression. Further analyses using the TCGA breast cancer dataset revealed that gene expression estimated from tumor-only spots (predicted by STGAT) provides more accurate molecular signatures for breast cancer subtype and tumor stage prediction, and also leads to improved patient survival and disease-free survival analyses. Availability: Code is available at https://github.com/compbiolabucf/STGAT.
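STGAT's actual implementation is at the linked repository; as a simplified illustration of the underlying mechanism only, here is textbook GAT attention for a single spot over its spatial neighbors (self-loop omitted, single head), with invented sizes and random weights.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def gat_attention_single_spot(h_i, h_neighbors, W, a):
    """One GAT attention head for one spot over its spatial neighbors.

    h_i: (d,) target-spot features; h_neighbors: (k, d) neighboring spots;
    W: (d, d_out) shared projection; a: (2*d_out,) attention vector.
    e_ij = LeakyReLU(a^T [W h_i || W h_j]); alpha = softmax_j(e_ij).
    """
    z_i = h_i @ W
    z_nb = h_neighbors @ W                                 # (k, d_out)
    cat = np.hstack([np.tile(z_i, (len(z_nb), 1)), z_nb])  # (k, 2*d_out)
    s = cat @ a
    e = np.where(s > 0, s, 0.2 * s)                        # LeakyReLU(0.2)
    alpha = softmax(e)                                     # weights over neighbors
    return alpha @ z_nb                                    # updated spot embedding

rng = np.random.default_rng(0)
d, d_out, k = 8, 4, 5
out = gat_attention_single_spot(rng.normal(size=d), rng.normal(size=(k, d)),
                                rng.normal(size=(d, d_out)),
                                rng.normal(size=(d, d_out)),
                                ) if False else gat_attention_single_spot(
    rng.normal(size=d), rng.normal(size=(k, d)),
    rng.normal(size=(d, d_out)), rng.normal(size=2 * d_out))
print(out.shape)  # (4,)
```

The learned attention lets each spot weight its neighbors unevenly, which is how spatial dependencies between spots enter the expression estimates.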


Subjects
Gene Expression Profiling , RNA-Seq , Transcriptome , Humans , RNA-Seq/methods , Gene Expression Profiling/methods , Breast Neoplasms/genetics , Breast Neoplasms/metabolism , Gene Expression Regulation, Neoplastic , Computational Biology/methods , Female , Biomarkers, Tumor/genetics , Biomarkers, Tumor/metabolism
6.
Front Oncol ; 14: 1346237, 2024.
Article in English | MEDLINE | ID: mdl-39035745

ABSTRACT

Pancreatic cancer is one of the most lethal cancers worldwide, with a 5-year survival rate of less than 5%, the lowest of all cancer types. Pancreatic ductal adenocarcinoma (PDAC) is the most common and aggressive pancreatic cancer and has been classified as a health emergency in the past few decades. The histopathological diagnosis and prognosis evaluation of PDAC are time-consuming, laborious, and challenging under current clinical practice conditions. Research on pathology artificial intelligence (AI) has been actively conducted lately; however, accessing medical data is challenging: the amount of open pathology data is small, and the absence of open annotation data drawn by medical staff makes it difficult to conduct pathology AI research. Here, we provide easily accessible, high-quality annotation data to address the abovementioned obstacles. For data evaluation, we performed supervised learning with a deep convolutional neural network to segment 11 PDAC histopathological whole slide images (WSIs) annotated directly by medical staff on an open WSI dataset. We visualized the segmentation results on the WSIs, including PDAC areas, achieving a Dice score of 73%, thus identifying areas important for PDAC diagnosis and demonstrating high data quality. Additionally, pathologists assisted by AI can significantly increase their work efficiency. The pathology AI guidelines we propose are effective for developing histopathological AI for PDAC and are significant for the clinical field.
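The 73% figure above is a Dice similarity coefficient; a minimal reference implementation on binary masks, with a toy example:

```python
import numpy as np

def dice_score(pred, target, eps=1e-8):
    """Dice similarity between two binary masks.

    pred, target: boolean arrays of the same shape
    (e.g. predicted vs annotated PDAC regions).
    """
    pred = np.asarray(pred, bool)
    target = np.asarray(target, bool)
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)

# Toy 4x4 masks: 4 positives each, overlapping on 3 pixels.
pred = np.zeros((4, 4), bool)
pred[0, :4] = True
target = np.zeros((4, 4), bool)
target[0, :3] = True
target[1, 0] = True
print(round(dice_score(pred, target), 3))  # 0.75
```

Dice weights the overlap against the sizes of both masks, so it penalizes both missed tumor and over-segmentation symmetrically.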

7.
Med Image Anal ; 97: 103257, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38981282

ABSTRACT

The alignment of tissue between histopathological whole slide images (WSIs) is crucial for research and clinical applications. Advances in computing, deep learning, and the availability of large WSI datasets have revolutionised WSI analysis; as a result, the current state of the art in WSI registration is unclear. To address this, we conducted the ACROBAT challenge, based on the largest WSI registration dataset to date, including 4,212 WSIs from 1,152 breast cancer patients. The challenge objective was to align WSIs of tissue stained with routine diagnostic immunohistochemistry to their H&E-stained counterparts. We compare the performance of eight WSI registration algorithms, including an investigation of the impact of different WSI properties and clinical covariates. We find that conceptually distinct WSI registration methods can achieve highly accurate registration and identify covariates that impact performance across methods. These results provide a comparison of the performance of current WSI registration methods and guide researchers in selecting and developing methods.

8.
Cancer Cytopathol ; 2024 Jul 14.
Article in English | MEDLINE | ID: mdl-39003588

ABSTRACT

BACKGROUND: This study evaluated the diagnostic effectiveness of the AIxURO platform, an artificial intelligence-based tool, to support urine cytology for bladder cancer management, which typically requires experienced cytopathologists and substantial diagnosis time. METHODS: One cytopathologist and two cytotechnologists reviewed 116 urine cytology slides and corresponding whole-slide images (WSIs) from urology patients. They used three diagnostic modalities: microscopy, WSI review, and AIxURO, per The Paris System for Reporting Urinary Cytology (TPS) criteria. Performance metrics, including TPS-guided and binary diagnosis, inter- and intraobserver agreement, and screening time, were compared across all methods and reviewers. RESULTS: AIxURO improved diagnostic accuracy by increasing sensitivity (from 25.0%-30.6% to 63.9%), positive predictive value (PPV; from 21.6%-24.3% to 31.1%), and negative predictive value (NPV; from 91.3%-91.6% to 95.3%) for atypical urothelial cell (AUC) cases. For suspicious for high-grade urothelial carcinoma (SHGUC) cases, it improved sensitivity (from 15.2%-27.3% to 33.3%), PPV (from 31.3%-47.4% to 61.1%), and NPV (from 91.6%-92.7% to 93.3%). Binary diagnoses exhibited an improvement in sensitivity (from 77.8%-82.2% to 90.0%) and NPV (from 91.7%-93.4% to 95.8%). Interobserver agreement across all methods showed moderate consistency (κ = 0.57-0.61), with the cytopathologist demonstrating higher intraobserver agreement than the two cytotechnologists across the methods (κ = 0.75-0.88). AIxURO significantly reduced screening time by 52.3%-83.2% from microscopy and 43.6%-86.7% from WSI review across all reviewers. Screening-positive (AUC+) cases required more time than negative cases across all methods and reviewers. CONCLUSIONS: AIxURO demonstrates the potential to improve both sensitivity and efficiency in bladder cancer diagnostics via urine cytology. 
Its integration into the cytopathological screening workflow could markedly decrease screening times, which would improve overall diagnostic processes.
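The sensitivity/PPV/NPV comparisons above derive from standard confusion-matrix arithmetic; a sketch with made-up counts (not the study's data):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, PPV and NPV from confusion counts,
    i.e. the quantities compared across microscopy, WSI review and
    an AI-assisted workflow."""
    return {
        "sensitivity": tp / (tp + fn),   # fraction of true positives caught
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),           # precision of a positive call
        "npv": tn / (tn + fn),           # reliability of a negative call
    }

# Hypothetical counts for illustration only.
m = diagnostic_metrics(tp=23, fp=51, tn=30, fn=12)
print({k: round(v, 3) for k, v in m.items()})
```

Note how PPV and NPV depend on the mix of positives and negatives reviewed, which is why atypical-cell categories with many borderline calls tend to have low PPV even when sensitivity improves.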

9.
ArXiv ; 2024 Apr 11.
Article in English | MEDLINE | ID: mdl-38903738

ABSTRACT

Whole Slide Images (WSIs), obtained by high-resolution digital scanning of microscope slides at multiple scales, are the cornerstone of modern Digital Pathology. However, they represent a particular challenge to AI-based/AI-mediated analysis because pathology labeling is typically done at the slide level rather than the tile level. It is not just that medical diagnoses are recorded at the specimen level; the detection of oncogene mutations is also experimentally obtained, and recorded by initiatives like The Cancer Genome Atlas (TCGA), at the slide level. This poses a dual challenge: a) accurately predicting the overall cancer phenotype and b) finding out which cellular morphologies are associated with it at the tile level. To address these challenges, a weakly supervised Multiple Instance Learning (MIL) approach was explored for two prevalent cancer types, Invasive Breast Carcinoma (TCGA-BRCA) and Lung Squamous Cell Carcinoma (TCGA-LUSC), for tumor detection at low magnification levels and for TP53 mutations at various levels. Our results show that a novel additive implementation of MIL matched the performance of the reference implementation (AUC 0.96) and was only slightly outperformed by Attention MIL (AUC 0.97). More interestingly, from the perspective of the molecular pathologist, these different AI architectures identify distinct sensitivities to morphological features (through the detection of Regions of Interest, RoIs) at different amplification levels. Tellingly, TP53 mutation was most sensitive to features at the higher amplifications, where cellular morphology is resolved.

10.
Sci Rep ; 14(1): 13304, 2024 Jun 10.
Article in English | MEDLINE | ID: mdl-38858367

ABSTRACT

The limited field of view of high-resolution microscopic images hinders the study of biological samples in a single shot. Stitching the microscope images (tiles) captured by the whole-slide imaging (WSI) technique solves this problem. However, stitching is challenging due to the repetitive textures of tissues, the non-informative background parts of the slide, and the large number of tiles, which impact performance and computational time. To address these challenges, we propose the Fast and Robust Microscopic Image Stitching (FRMIS) algorithm, which relies on pairwise and global alignment. Speeded-up robust features (SURF) are extracted and matched within a small part of the overlapping region to compute the transformation and align two neighboring tiles. In cases where the transformation cannot be computed due to an insufficient number of matched features, features are extracted from the entire overlapping region. This enhances the efficiency of the algorithm, since most of the computational load is related to pairwise registration, and reduces the misalignment that may occur when matching duplicated features in tiles with repetitive textures. Global alignment is then achieved by constructing a weighted graph, where the weight of each edge is determined by the normalized inverse of the number of matched features between two tiles. FRMIS has been evaluated on experimental and synthetic datasets from different modalities with different numbers of tiles and overlaps, demonstrating faster stitching than existing algorithms such as the Microscopy Image Stitching Tool (MIST) toolbox: FRMIS outperforms MIST in speed by 481% for bright-field, 259% for phase-contrast, and 282% for fluorescence modalities, while also being robust to uneven illumination.
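The global-alignment graph described above (edge weight from the inverse of the matched-feature count, then a spanning structure over tiles) can be sketched with Prim's minimum spanning tree. Tile names, match counts, and the exact normalization below are invented for illustration.

```python
import heapq

def stitching_tree(match_counts):
    """Build a global-alignment tree via Prim's MST.

    match_counts: dict {(tile_a, tile_b): n_matched_features}. Edges with
    more matches are more reliable, so weight = 1 / n_matches (FRMIS
    additionally normalizes; constants here are illustrative).
    Returns (parent, child, weight) edges along which pairwise
    transformations would be chained.
    """
    graph = {}
    for (a, b), n in match_counts.items():
        w = 1.0 / n
        graph.setdefault(a, []).append((w, b))
        graph.setdefault(b, []).append((w, a))
    start = next(iter(graph))                 # anchor tile
    visited, tree = {start}, []
    heap = [(w, start, nb) for w, nb in graph[start]]
    heapq.heapify(heap)
    while heap:
        w, a, b = heapq.heappop(heap)
        if b in visited:
            continue
        visited.add(b)
        tree.append((a, b, w))
        for w2, nb in graph[b]:
            if nb not in visited:
                heapq.heappush(heap, (w2, b, nb))
    return tree

# Four tiles in a 2x2 grid; counts of matched SURF features per overlap.
matches = {("t00", "t01"): 80, ("t00", "t10"): 5,
           ("t01", "t11"): 60, ("t10", "t11"): 90}
tree = stitching_tree(matches)
print(tree)
```

The unreliable 5-match edge is excluded from the tree, so its noisy transformation never contributes to the final mosaic.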

11.
Cancers (Basel) ; 16(11)2024 Jun 03.
Article in English | MEDLINE | ID: mdl-38893251

ABSTRACT

The presence of spread through air spaces (STAS) in early-stage lung adenocarcinoma is a significant prognostic factor associated with disease recurrence and poor outcomes. Although current STAS detection methods rely on pathological examinations, the advent of artificial intelligence (AI) offers opportunities for automated histopathological image analysis. This study developed a deep learning (DL) model for STAS prediction and investigated the correlation between the prediction results and patient outcomes. To develop the DL-based STAS prediction model, 1053 digital pathology whole-slide images (WSIs) from the competition dataset were enrolled in the training set, and 227 WSIs from the National Taiwan University Hospital were enrolled for external validation. A YOLOv5-based framework comprising preprocessing, candidate detection, false-positive reduction, and patient-based prediction was proposed for STAS prediction. The model achieved an area under the curve (AUC) of 0.83 in predicting STAS presence, with 72% accuracy, 81% sensitivity, and 63% specificity. Additionally, the DL model demonstrated prognostic value for disease-free survival compared with pathological evaluation. These findings suggest that DL-based STAS prediction could serve as an adjunctive screening tool and facilitate clinical decision-making in patients with early-stage lung adenocarcinoma.
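The patient-level AUC of 0.83 quoted above is the standard ROC area; via the rank (Mann-Whitney) formulation it takes only a few lines. The scores below are toy values, not the study's outputs.

```python
import numpy as np

def roc_auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney formulation:
    the probability that a random positive outranks a random negative.

    scores: model outputs; labels: 1 for STAS-present, 0 for absent.
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    # Count positive > negative pairs; ties count half.
    wins = (pos[:, None] > neg[None, :]).sum() \
        + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

auc = roc_auc([0.9, 0.8, 0.4, 0.3, 0.2], [1, 1, 1, 0, 0])
print(auc)  # 1.0 (every positive outranks every negative)
```

Because AUC is rank-based, it is insensitive to the choice of decision threshold, unlike the 72%/81%/63% accuracy, sensitivity, and specificity figures.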

12.
J Imaging Inform Med ; 2024 Jun 17.
Article in English | MEDLINE | ID: mdl-38886290

ABSTRACT

The efficacy of immune checkpoint inhibitors is significantly influenced by the tumor immune microenvironment (TIME). RNA sequencing of tumor tissue can offer valuable insights into the TIME, but its high cost and long turnaround time seriously restrict its utility in routine clinical examinations. Several recent studies have suggested that ultrahigh-resolution pathology images can be used to infer cellular and molecular characteristics. However, few studies have paid attention to the quantitative estimation of the various tumor-infiltrating immune cells from pathology images. In this paper, we integrated contrastive learning and weakly supervised learning to infer tumor-associated macrophages and potential immunotherapy benefit from whole slide images (WSIs) of H&E-stained pathological sections. We split the high-resolution WSIs into tiles and then apply contrastive learning to extract features of each tile. After aggregating the features at the tile level, we employ weak supervisory signals to fine-tune the encoder for various downstream tasks. Comprehensive experiments on two independent breast cancer cohorts and spatial transcriptomics data demonstrate that the computational pathological features accurately predict the proportion of tumor-infiltrating immune cells, particularly the infiltration level of macrophages, as well as the immune subtypes and potential immunotherapy benefit. These findings demonstrate that our model effectively captures pathological features beyond human vision, establishing a mapping between cellular composition and histological morphology, and thus expanding the clinical applications of digital pathology images.
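The abstract does not specify which contrastive objective is used on the tiles; a generic NT-Xent (SimCLR-style) loss, which is an assumption about the flavor, can be sketched as follows, with random embeddings standing in for encoder outputs.

```python
import numpy as np

def nt_xent_loss(z1, z2, tau=0.5):
    """NT-Xent contrastive loss over two augmented views of the same tiles.

    z1, z2: (n, d) embeddings of two augmentations of the same n tiles.
    Each tile's two views are positives; all other tiles are negatives.
    """
    z = np.concatenate([z1, z2])                        # (2n, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)    # cosine similarity
    sim = z @ z.T / tau
    np.fill_diagonal(sim, -np.inf)                      # exclude self-pairs
    n = len(z1)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim[np.arange(2 * n), pos] - np.log(np.exp(sim).sum(axis=1))
    return -log_prob.mean()

rng = np.random.default_rng(0)
z1 = rng.normal(size=(8, 16))
loss_random = nt_xent_loss(z1, rng.normal(size=(8, 16)))
loss_aligned = nt_xent_loss(z1, z1 + 0.01 * rng.normal(size=(8, 16)))
print(loss_aligned < loss_random)  # aligned views give a lower loss
```

Minimizing this loss pulls augmentations of the same tile together in feature space without any labels, which is what makes the subsequent weakly supervised fine-tuning feasible.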

13.
Lab Invest ; 104(8): 102094, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38871058

ABSTRACT

Accurate assessment of epidermal growth factor receptor (EGFR) mutation status and subtype is critical for the treatment of non-small cell lung cancer patients. Conventional molecular testing methods for detecting EGFR mutations have limitations. In this study, an artificial intelligence-powered deep learning framework was developed for the weakly supervised prediction of EGFR mutations in non-small cell lung cancer from hematoxylin and eosin-stained histopathology whole-slide images. The study cohort was partitioned into training and validation subsets. Foreground regions containing tumor tissue were extracted from whole-slide images. A convolutional neural network employing a contrastive learning paradigm was implemented to extract patch-level morphologic features. These features were aggregated using a vision transformer-based model to predict EGFR mutation status and classify patient cases. The established prediction model was validated on unseen data sets. In internal validation with a cohort from the University of Science and Technology of China (n = 172), the model achieved patient-level areas under the receiver-operating characteristic curve (AUCs) of 0.927 and 0.907, sensitivities of 81.6% and 83.3%, and specificities of 93.0% and 92.3%, for surgical resection and biopsy specimens, respectively, in EGFR mutation subtype prediction. External validation with cohorts from the Second Affiliated Hospital of Anhui Medical University and the First Affiliated Hospital of Wannan Medical College (n = 193) yielded patient-level AUCs of 0.849 and 0.867, sensitivities of 79.2% and 80.7%, and specificities of 91.7% and 90.7% for surgical and biopsy specimens, respectively. Further validation with The Cancer Genome Atlas data set (n = 81) showed an AUC of 0.861, a sensitivity of 84.6%, and a specificity of 90.5%. 
Deep learning solutions demonstrate potential advantages for automated, noninvasive, fast, cost-effective, and accurate inference of EGFR alterations from histomorphology. Integration of such artificial intelligence frameworks into routine digital pathology workflows could augment existing molecular testing pipelines.


Subjects
Carcinoma, Non-Small-Cell Lung , Deep Learning , ErbB Receptors , Hematoxylin , Lung Neoplasms , Mutation , Humans , ErbB Receptors/genetics , Carcinoma, Non-Small-Cell Lung/genetics , Carcinoma, Non-Small-Cell Lung/pathology , Lung Neoplasms/genetics , Lung Neoplasms/pathology , Eosine Yellowish-(YS) , Female , Male , Middle Aged , Aged
14.
Comput Biol Med ; 178: 108710, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38843570

ABSTRACT

BACKGROUND: Efficient and precise diagnosis of non-small cell lung cancer (NSCLC) is critical for subsequent targeted therapy and immunotherapy. Since the advent of whole slide images (WSIs), the transition from traditional histopathology to digital pathology has spurred the application of convolutional neural networks (CNNs) in histopathological recognition and diagnosis. HookNet can make full use of macroscopic and microscopic information for pathological diagnosis, but it cannot integrate other high-performing CNN structures. The new HookEfficientNet combines the HookNet structure with EfficientNet, which performs well in general object recognition. Here, a high-precision artificial intelligence-guided histopathological recognition system was established with HookEfficientNet to provide a basis for the intelligent differential diagnosis of NSCLC. METHODS: A total of 216 WSIs of lung adenocarcinoma (LUAD) and 192 WSIs of lung squamous cell carcinoma (LUSC) were collected from the First Affiliated Hospital of Zhengzhou University. Deep learning methods based on HookEfficientNet, HookNet and EfficientNet B4-B6 were developed and compared with each other using the area under the curve (AUC) and the Youden index. Temperature scaling was used to calibrate the heatmap and highlight the cancer region of interest. Four pathologists of different seniority levels blindly reviewed 108 WSIs of LUAD and LUSC, and their diagnostic results were compared with those of the various deep learning models. RESULTS: The HookEfficientNet model outperformed HookNet and EfficientNet B4-B6. After temperature scaling, the HookEfficientNet model achieved AUCs of 0.973, 0.980, and 0.989 and Youden index values of 0.863, 0.899, and 0.922 for LUAD, LUSC and normal lung tissue, respectively, in the testing set. The accuracy of the model was better than the average accuracy of the experienced pathologists, and the model was superior to the pathologists in the diagnosis of LUSC.
CONCLUSIONS: HookEfficientNet can effectively recognize LUAD and LUSC with performance superior to that of senior pathologists, especially for LUSC. The model has great potential to facilitate the application of deep learning-assisted histopathological diagnosis for LUAD and LUSC in the future.
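Two of the evaluation ingredients named above, temperature scaling and the Youden index, are compact enough to sketch. The logits and operating points below are illustrative values, not the study's.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))
    return e / e.sum()

def temperature_scale(logits, T):
    """Divide logits by a temperature before softmax; T > 1 softens an
    overconfident network's probabilities, smoothing a class heatmap."""
    return softmax(np.asarray(logits, dtype=float) / T)

def youden_index(sensitivity, specificity):
    # J = sensitivity + specificity - 1; 1.0 means a perfect test.
    return sensitivity + specificity - 1.0

p_raw = temperature_scale([4.0, 1.0, 0.5], T=1.0)
p_cal = temperature_scale([4.0, 1.0, 0.5], T=2.0)
print(round(float(p_raw[0]), 3), round(float(p_cal[0]), 3))
print(round(youden_index(0.95, 0.97), 2))  # 0.92
```

Temperature scaling changes confidences without changing the argmax class, so it recalibrates heatmaps while leaving classification decisions intact.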


Subjects
Carcinoma, Non-Small-Cell Lung , Deep Learning , Lung Neoplasms , Neural Networks, Computer , Humans , Carcinoma, Non-Small-Cell Lung/pathology , Lung Neoplasms/pathology , Image Interpretation, Computer-Assisted/methods , Diagnosis, Computer-Assisted/methods
15.
Comput Biol Med ; 178: 108714, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38889627

ABSTRACT

BACKGROUND: The emergence of the digital whole slide image (WSI) has driven the development of computational pathology. However, obtaining patch-level annotations is challenging and time-consuming due to the high resolution of WSIs, which limits the applicability of fully supervised methods. We aim to address the challenges related to patch-level annotations. METHODS: We propose a universal framework for weakly supervised WSI analysis based on Multiple Instance Learning (MIL). To achieve effective aggregation of instance features, we design a feature aggregation module along multiple dimensions, considering feature distribution, instance correlation and instance-level evaluation. First, we implement an instance-level standardization layer and a deep projection unit to improve the separation of instances in the feature space. Then, a self-attention mechanism is employed to explore dependencies between instances. Additionally, an instance-level pseudo-label evaluation method is introduced to enhance the information available during the weak supervision process. Finally, a bag-level classifier is used to obtain preliminary WSI classification results. To achieve even more accurate WSI label predictions, we designed a key instance selection module that strengthens the learning of local features for instances. Combining the results from both modules leads to an improvement in WSI prediction accuracy. RESULTS: Experiments conducted on the Camelyon16, TCGA-NSCLC, SICAPv2, PANDA and classical MIL benchmark datasets demonstrate that our proposed method achieves competitive performance compared with recent methods, with a maximum improvement of 14.6% in classification accuracy. CONCLUSION: Our method improves the classification accuracy of whole slide images in a weakly supervised way and more accurately detects lesion areas.


Subjects
Image Interpretation, Computer-Assisted , Humans , Image Interpretation, Computer-Assisted/methods , Image Processing, Computer-Assisted/methods , Algorithms
16.
Dig Dis Sci ; 69(8): 2985-2995, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38837111

ABSTRACT

BACKGROUND: Colorectal cancer (CRC) is a malignant tumor of the digestive tract with both a high incidence rate and high mortality. Early detection and intervention could improve patient clinical outcomes and survival. METHODS: This study computationally investigates a set of prognostic tissue and cell features from diagnostic tissue slides. Combined with clinical prognostic variables, the pathological image features can predict the prognosis of CRC patients. Our CRC prognosis prediction pipeline consists of three sequential modules: (1) a MultiTissue Net to delineate outlines of the different tissue types within the WSI of CRC, for further ROI selection by pathologists; (2) development of three-level quantitative image metrics related to tissue composition, cell shape, and hidden features from a deep network; (3) fusion of the multi-level features to build a prognostic CRC model for predicting survival. RESULTS: Experimental results suggest that each group of features has a particular relationship with the prognosis of patients in the independent test set. In the fused-feature combination experiment, the accuracy of predicting patients' prognosis and survival status is 81.52%, and the AUC value is 0.77. CONCLUSION: This paper constructs a model that can predict the postoperative survival of patients by using image features and clinical information. Some features were found to be associated with the prognosis and survival of patients.


Subjects
Colorectal Neoplasms , Humans , Colorectal Neoplasms/pathology , Colorectal Neoplasms/mortality , Prognosis , Male , Female , Image Interpretation, Computer-Assisted , Predictive Value of Tests
17.
Biomed Phys Eng Express ; 10(5)2024 Jul 17.
Article in English | MEDLINE | ID: mdl-38925106

ABSTRACT

Detecting the Kirsten Rat Sarcoma Virus (KRAS) gene mutation is significant for colorectal cancer (CRC) patients. The KRAS gene encodes a protein involved in the epidermal growth factor receptor (EGFR) signaling pathway, and mutations in this gene can negatively impact the use of monoclonal antibodies in anti-EGFR therapy and affect treatment decisions. Currently, commonly used methods like next-generation sequencing (NGS) identify KRAS mutations but are expensive, time-consuming, and may not be suitable for every cancer patient sample. To address these challenges, we have developed KRASFormer, a novel framework that predicts KRAS gene mutations from haematoxylin and eosin (H&E) stained WSIs that are widely available for most CRC patients. KRASFormer consists of two stages: the first stage filters out non-tumour regions and selects only tumour cells using a quality screening mechanism, and the second stage predicts the KRAS gene as either 'wildtype' or 'mutant' using a Vision Transformer-based XCiT method. The XCiT employs cross-covariance attention to capture clinically meaningful long-range representations of textural patterns in tumour tissue and KRAS-mutant cells. We evaluated the performance of the first stage using an independent CRC-5000 dataset, and the second stage included both The Cancer Genome Atlas colon and rectal cancer (TCGA-CRC-DX) and in-house cohorts. The results of our experiments showed that the XCiT outperformed existing state-of-the-art methods, achieving AUCs for ROC curves of 0.691 and 0.653 on the TCGA-CRC-DX and in-house datasets, respectively. Our findings emphasize three key consequences: the potential of using H&E-stained tissue slide images for predicting KRAS gene mutations as a cost-effective and time-efficient means of guiding treatment choice for CRC patients; the increase in performance metrics of a Transformer-based model; and the value of collaboration between pathologists and data scientists in deriving a morphologically meaningful model.
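XCiT's cross-covariance attention swaps the roles of tokens and channels, so cost grows linearly with the number of tile tokens. Below is a simplified single-head sketch: the exact normalisation and transposition conventions in the real XCiT differ slightly, the temperature is fixed rather than learned, and all weights are random.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_covariance_attention(X, Wq, Wk, Wv, tau=1.0):
    """Simplified cross-covariance attention (XCA): attention is computed
    between feature channels, not between tokens, so the attention map is
    (d, d) regardless of how many tile tokens there are.

    X: (n_tokens, d); Wq/Wk/Wv: (d, d); tau: temperature (fixed here).
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    # L2-normalise each channel over the token axis, as in XCiT.
    Qn = Q / (np.linalg.norm(Q, axis=0, keepdims=True) + 1e-8)
    Kn = K / (np.linalg.norm(K, axis=0, keepdims=True) + 1e-8)
    A = softmax((Qn.T @ Kn) / tau, axis=-1)   # (d, d) channel-to-channel map
    return V @ A.T                            # (n_tokens, d)

rng = np.random.default_rng(0)
X = rng.normal(size=(16, 8))                  # 16 tile tokens, 8 channels
out = cross_covariance_attention(X, *(rng.normal(size=(8, 8)) for _ in range(3)))
print(out.shape)  # (16, 8)
```

Because the (d, d) map is shared across tokens, long slides with many tiles remain tractable, which is plausibly why a Vision Transformer variant like XCiT suits WSI-scale inputs.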


Subjects
Colorectal Neoplasms, Mutation, Proto-Oncogene Proteins p21(ras), Humans, Colorectal Neoplasms/genetics, Colorectal Neoplasms/pathology, Proto-Oncogene Proteins p21(ras)/genetics, Algorithms, ErbB Receptors/genetics, High-Throughput Nucleotide Sequencing/methods, Image Processing, Computer-Assisted/methods, ROC Curve
18.
J Med Imaging (Bellingham) ; 11(3): 037501, 2024 May.
Article in English | MEDLINE | ID: mdl-38737492

ABSTRACT

Purpose: Semantic segmentation of high-resolution histopathology whole slide images (WSIs) is a fundamental task in various pathology applications. Convolutional neural networks (CNNs) are the state-of-the-art approach for image segmentation. A patch-based CNN approach is often employed because of the large size of WSIs; however, segmentation performance is sensitive to the field-of-view and resolution of the input patches, and balancing the trade-offs is challenging when there are drastic size variations in the segmented structures. We propose a multiresolution semantic segmentation approach capable of addressing the threefold trade-off between field-of-view, computational efficiency, and spatial resolution in histopathology WSIs. Approach: We propose a two-stage multiresolution approach for semantic segmentation of histopathology WSIs of mouse lung tissue and human placenta. In the first stage, we use four different CNNs to extract contextual information from input patches at four different resolutions. In the second stage, we use another CNN to aggregate the information extracted in the first stage and generate the final segmentation masks. Results: The proposed method achieved 95.6%, 92.5%, and 97.1% on our single-class placenta dataset and 97.1%, 87.3%, and 83.3% on our multiclass lung dataset for pixel-wise accuracy, mean Dice similarity coefficient, and mean positive predictive value, respectively. Conclusions: The proposed multiresolution approach demonstrated high accuracy and consistency in the semantic segmentation of biological structures of different sizes in our single-class placenta and multiclass lung histopathology WSI datasets. Our study can potentially be used in the automated analysis of biological structures, facilitating clinical research in histopathology applications.
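The first-stage idea of feeding co-centred patches at several resolutions can be sketched as follows. The `multires_patches` helper, the patch size, and the downsampling levels are hypothetical choices for illustration, and nearest-neighbour subsampling stands in for the resolution pyramid a real WSI reader would provide:

```python
import numpy as np

def multires_patches(wsi, center, patch=32, levels=(1, 2, 4, 8)):
    """Extract co-centred patches at several downsampling levels.

    Each level widens the field of view by `lvl` while keeping the output
    patch size fixed, mimicking the four resolution branches fed to the
    first-stage CNNs. No boundary handling; the center must be far enough
    from the slide edge for the largest level.
    """
    cy, cx = center
    out = []
    for lvl in levels:
        half = patch * lvl // 2
        region = wsi[cy - half:cy + half, cx - half:cx + half]
        out.append(region[::lvl, ::lvl])   # downsample to patch x patch
    return np.stack(out)                   # (len(levels), patch, patch)

wsi = np.arange(512 * 512, dtype=float).reshape(512, 512)  # toy "slide"
stack = multires_patches(wsi, center=(256, 256))
print(stack.shape)  # (4, 32, 32)
```

All four patches share the same center pixel but cover increasingly wide context, which is exactly the field-of-view vs. resolution trade-off the two-stage design balances.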

19.
Front Oncol ; 14: 1275769, 2024.
Article in English | MEDLINE | ID: mdl-38746682

ABSTRACT

Background: Whole slide image (WSI) analysis, driven by deep learning algorithms, has the potential to revolutionize tumor detection, classification, and treatment response prediction. However, challenges persist, such as limited model generalizability across cancer types, the labor-intensive nature of patch-level annotation, and the need to integrate multi-magnification information to attain a comprehensive understanding of pathological patterns. Methods: In response to these challenges, we introduce MAMILNet, a multi-scale attentional multiple-instance learning framework for WSI analysis. The incorporation of attention mechanisms into MAMILNet contributes to its generalizability across diverse cancer types and prediction tasks. The model treats whole slides as "bags" and individual patches as "instances." This approach eliminates the requirement for intricate patch-level labeling, significantly reducing the manual workload for pathologists. To enhance prediction accuracy, the model employs a multi-scale "consultation" strategy, aggregating test outcomes from various magnifications. Results: Our assessment of MAMILNet covers 1171 cases spanning a wide range of cancer types, showcasing its effectiveness on complex prediction tasks. MAMILNet achieved strong results in distinct domains: for breast cancer tumor detection, the area under the curve (AUC) was 0.8872, with an accuracy of 0.8760. For lung cancer subtyping, it achieved an AUC of 0.9551 and an accuracy of 0.9095. For predicting drug therapy response in ovarian cancer, MAMILNet achieved an AUC of 0.7358 and an accuracy of 0.7341. Conclusion: These outcomes underscore the potential of MAMILNet to advance precision medicine and individualized treatment planning in oncology.
By effectively addressing challenges related to model generalization, annotation workload, and multi-magnification integration, MAMILNet shows promise in enhancing healthcare outcomes for cancer patients. The framework's success in accurately detecting breast tumors, diagnosing lung cancer types, and predicting ovarian cancer therapy responses highlights its significant contribution to the field and paves the way for improved patient care.
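The "bags of instances" aggregation that MAMILNet builds on can be illustrated with a standard attention-based MIL pooling step (in the style of Ilse et al.'s attention MIL); the weight shapes and the single-layer scoring network below are assumptions for the sketch, not MAMILNet's actual architecture:

```python
import numpy as np

def attention_mil_pool(instances, w, v):
    """Attention pooling over a bag of patch embeddings.

    Each patch gets an attention score from a small scoring network; the
    slide-level ("bag") feature is the attention-weighted sum of patches,
    so no patch-level labels are needed.
    instances: (n, d) patch features; w: (d, h); v: (h,).
    """
    scores = np.tanh(instances @ w) @ v          # (n,) unnormalised scores
    a = np.exp(scores - scores.max())
    a = a / a.sum()                              # softmax over instances
    bag = a @ instances                          # (d,) weighted bag feature
    return bag, a

rng = np.random.default_rng(1)
bag_feats = rng.standard_normal((50, 16))        # 50 patches, 16-d features
w, v = rng.standard_normal((16, 4)), rng.standard_normal(4)
bag, a = attention_mil_pool(bag_feats, w, v)
print(bag.shape, round(a.sum(), 6))  # (16,) 1.0
```

A slide-level classifier is then trained on `bag` with only a slide label, and the learned weights `a` double as a heat map over patches, which is why attention MIL removes the patch-annotation burden the abstract describes.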

20.
Eur J Surg Oncol ; 50(7): 108369, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38703632

ABSTRACT

BACKGROUND: TNM staging is the main reference standard for prognostic prediction of colorectal cancer (CRC), but prognosis among patients with the same stage remains highly heterogeneous. This study aimed to classify the tumor microenvironment of patients with stage III CRC and quantify the classified tumor tissues based on deep learning, to explore the prognostic value of the developed tumor risk signature (TRS). METHODS: A tissue classification model was developed to identify nine tissue types (adipose, background, debris, lymphocytes, mucus, smooth muscle, normal mucosa, stroma, and tumor) in whole-slide images (WSIs) of stage III CRC patients. This model was used to extract tumor tissues from WSIs of 265 stage III CRC patients from The Cancer Genome Atlas and 70 stage III CRC patients from the Sixth Affiliated Hospital of Sun Yat-sen University. We used three different deep learning models for tumor feature extraction and applied a Cox model to establish the TRS. Survival analysis was conducted to explore the prognostic performance of the TRS. RESULTS: The tissue classification model achieved 94.4% accuracy in identifying the nine tissue types. The TRS showed a Harrell's concordance index of 0.736, 0.716, and 0.711 in the internal training, internal validation, and external validation sets, respectively. Survival analysis showed that the TRS had significant predictive ability (hazard ratio: 3.632, p = 0.03). CONCLUSION: The TRS is an independent and significant prognostic factor for progression-free survival (PFS) of stage III CRC patients and contributes to risk stratification of patients with different clinical stages.
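Harrell's concordance index reported for the TRS measures, over all comparable patient pairs, how often the patient with the higher predicted risk experiences the event earlier (0.5 is random, 1.0 is perfect ranking). A toy quadratic-time implementation on made-up data:

```python
def harrell_c_index(risk, time, event):
    """Harrell's concordance index for a risk score against survival data.

    A pair (i, j) is comparable when patient i's observed event precedes
    patient j's follow-up time. Concordant pairs are those where the
    earlier-failing patient also has the higher predicted risk; risk ties
    count as 0.5. Toy O(n^2) version for illustration only.
    """
    conc, comparable = 0.0, 0
    n = len(risk)
    for i in range(n):
        for j in range(n):
            if event[i] and time[i] < time[j]:
                comparable += 1
                if risk[i] > risk[j]:
                    conc += 1.0
                elif risk[i] == risk[j]:
                    conc += 0.5
    return conc / comparable

# Higher risk score -> earlier progression: perfectly concordant toy data
risk = [0.9, 0.7, 0.4, 0.1]
time = [2, 5, 8, 12]
event = [1, 1, 1, 0]   # last patient is censored
print(harrell_c_index(risk, time, event))  # 1.0
```

Against this yardstick, the TRS values of 0.736/0.716/0.711 indicate that roughly 7 of 10 comparable patient pairs are ranked correctly by the signature.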


Subjects
Colorectal Neoplasms, Deep Learning, Neoplasm Staging, Tumor Microenvironment, Humans, Colorectal Neoplasms/pathology, Prognosis, Male, Female, Middle Aged, Aged, Proportional Hazards Models