Results 1 - 20 of 261
1.
Brief Bioinform ; 25(4)2024 May 23.
Article in English | MEDLINE | ID: mdl-38960406

ABSTRACT

Spatial transcriptomics data play a crucial role in cancer research, providing a nuanced understanding of the spatial organization of gene expression within tumor tissues. Unraveling the spatial dynamics of gene expression can reveal key insights into tumor heterogeneity and aid in identifying potential therapeutic targets. However, in many large-scale cancer studies, spatial transcriptomics data are limited, with bulk RNA-seq and corresponding Whole Slide Image (WSI) data being more common (e.g. the TCGA project). To address this gap, there is a critical need to develop methodologies that can estimate gene expression at near-cell (spot) level resolution from existing WSI and bulk RNA-seq data. This approach is essential for reanalyzing expansive cohort studies and uncovering novel biomarkers that were overlooked in the initial assessments. In this study, we present STGAT (Spatial Transcriptomics Graph Attention Network), a novel approach leveraging Graph Attention Networks (GAT) to discern spatial dependencies among spots. Trained on spatial transcriptomics data, STGAT is designed to estimate gene expression profiles at spot-level resolution and to predict whether each spot represents tumor or non-tumor tissue, especially in patient samples where only WSI and bulk RNA-seq data are available. Comprehensive tests on two breast cancer spatial transcriptomics datasets demonstrated that STGAT outperformed existing methods in accurately predicting gene expression. Further analyses using the TCGA breast cancer dataset revealed that gene expression estimated from tumor-only spots (predicted by STGAT) provides more accurate molecular signatures for breast cancer subtype and tumor stage prediction, and also leads to improved patient survival and disease-free survival analyses. Availability: Code is available at https://github.com/compbiolabucf/STGAT.
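
As background for readers unfamiliar with graph attention, the sketch below is a minimal single-head graph-attention layer in plain PyTorch, illustrating the kind of attention-weighted message passing between neighbouring spots that a GAT-based model such as STGAT builds on. The feature dimensions, neighbourhood radius, and layer design are illustrative assumptions, not the authors' implementation (which is available at the repository above).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleGraphAttention(nn.Module):
    """Single-head graph attention: each spot aggregates neighbour features
    weighted by learned attention coefficients (GAT-style, simplified)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)
        self.a = nn.Linear(2 * out_dim, 1, bias=False)

    def forward(self, x, adj):
        # x: (n_spots, in_dim) per-spot features; adj: (n_spots, n_spots) adjacency
        h = self.W(x)
        n = h.size(0)
        hi = h.unsqueeze(1).expand(n, n, -1)            # h_i repeated along columns
        hj = h.unsqueeze(0).expand(n, n, -1)            # h_j repeated along rows
        e = F.leaky_relu(self.a(torch.cat([hi, hj], dim=-1)).squeeze(-1))
        e = e.masked_fill(adj == 0, float('-inf'))      # restrict to neighbours
        alpha = torch.softmax(e, dim=1)                 # attention per neighbour
        return alpha @ h                                # aggregated spot features

# toy usage: 6 spots, neighbours = spots within a fixed distance (includes self)
coords = torch.rand(6, 2)
adj = (torch.cdist(coords, coords) < 0.5).float()
out = SimpleGraphAttention(in_dim=32, out_dim=16)(torch.rand(6, 32), adj)  # (6, 16)
```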


Subject(s)
Gene Expression Profiling , RNA-Seq , Transcriptome , Humans , RNA-Seq/methods , Gene Expression Profiling/methods , Breast Neoplasms/genetics , Breast Neoplasms/metabolism , Gene Expression Regulation, Neoplastic , Computational Biology/methods , Female , Biomarkers, Tumor/genetics , Biomarkers, Tumor/metabolism
2.
Brief Bioinform ; 24(3)2023 05 19.
Article in English | MEDLINE | ID: mdl-37114657

ABSTRACT

PURPOSE: Evaluation of genetic mutations in cancers is important because distinct mutational profiles help determine individualized drug therapy. However, molecular analyses are not routinely performed in all cancers because they are expensive, time-consuming and not universally available. Artificial intelligence (AI) has shown the potential to determine a wide range of genetic mutations through histologic image analysis. Here, we assessed the status of mutation-prediction AI models on histologic images by a systematic review. METHODS: A literature search using the MEDLINE, Embase and Cochrane databases was conducted in August 2021. The articles were shortlisted by titles and abstracts. After a full-text review, publication trends, study characteristic analysis and comparison of performance metrics were performed. RESULTS: Twenty-four studies were found, mostly from developed countries, and their number is increasing. The major targets were gastrointestinal, genitourinary, gynecological, lung and head and neck cancers. Most studies used The Cancer Genome Atlas, with a few using an in-house dataset. The area under the curve of some of the cancer driver gene mutations in particular organs was satisfactory, such as 0.92 for BRAF in thyroid cancers and 0.79 for EGFR in lung cancers, whereas the average across all gene mutations was 0.64, which is still suboptimal. CONCLUSION: With appropriate caution, AI has the potential to predict gene mutations on histologic images. Further validation with larger datasets is still required before AI models can be used in clinical practice to predict gene mutations.


Subject(s)
Artificial Intelligence , Thyroid Neoplasms , Humans , Benchmarking , Databases, Factual , Mutation
3.
Lab Invest ; 104(8): 102094, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38871058

ABSTRACT

Accurate assessment of epidermal growth factor receptor (EGFR) mutation status and subtype is critical for the treatment of non-small cell lung cancer patients. Conventional molecular testing methods for detecting EGFR mutations have limitations. In this study, an artificial intelligence-powered deep learning framework was developed for the weakly supervised prediction of EGFR mutations in non-small cell lung cancer from hematoxylin and eosin-stained histopathology whole-slide images. The study cohort was partitioned into training and validation subsets. Foreground regions containing tumor tissue were extracted from whole-slide images. A convolutional neural network employing a contrastive learning paradigm was implemented to extract patch-level morphologic features. These features were aggregated using a vision transformer-based model to predict EGFR mutation status and classify patient cases. The established prediction model was validated on unseen data sets. In internal validation with a cohort from the University of Science and Technology of China (n = 172), the model achieved patient-level areas under the receiver-operating characteristic curve (AUCs) of 0.927 and 0.907, sensitivities of 81.6% and 83.3%, and specificities of 93.0% and 92.3%, for surgical resection and biopsy specimens, respectively, in EGFR mutation subtype prediction. External validation with cohorts from the Second Affiliated Hospital of Anhui Medical University and the First Affiliated Hospital of Wannan Medical College (n = 193) yielded patient-level AUCs of 0.849 and 0.867, sensitivities of 79.2% and 80.7%, and specificities of 91.7% and 90.7% for surgical and biopsy specimens, respectively. Further validation with The Cancer Genome Atlas data set (n = 81) showed an AUC of 0.861, a sensitivity of 84.6%, and a specificity of 90.5%. Deep learning solutions demonstrate potential advantages for automated, noninvasive, fast, cost-effective, and accurate inference of EGFR alterations from histomorphology. Integration of such artificial intelligence frameworks into routine digital pathology workflows could augment existing molecular testing pipelines.


Subject(s)
Carcinoma, Non-Small-Cell Lung , Deep Learning , ErbB Receptors , Hematoxylin , Lung Neoplasms , Mutation , Humans , ErbB Receptors/genetics , Carcinoma, Non-Small-Cell Lung/genetics , Carcinoma, Non-Small-Cell Lung/pathology , Lung Neoplasms/genetics , Lung Neoplasms/pathology , Eosine Yellowish-(YS) , Female , Male , Middle Aged , Aged
4.
Lab Invest ; 104(2): 100288, 2024 02.
Article in English | MEDLINE | ID: mdl-37977550

ABSTRACT

Liver transplantation is an effective treatment for end-stage liver disease, acute liver failure, and primary hepatic malignancy. However, the limited availability of donor organs remains a challenge. Severe large-droplet fat (LDF) macrovesicular steatosis, characterized by cytoplasmic replacement with large fat vacuoles, can lead to liver transplant complications. Artificial intelligence models, such as segmentation and detection models, are being developed to detect LDF hepatocytes. The Segment-Anything Model, utilizing the DEtection TRansformer architecture, has the ability to segment objects without prior knowledge of size or shape. We investigated the Segment-Anything Model's potential to detect LDF hepatocytes in liver biopsies. Pathologist-annotated specimens were used to evaluate model performance. The model showed high sensitivity but compromised specificity due to similarities with other structures. Filtering algorithms were developed to improve specificity. Integration of the Segment-Anything Model with rule-based algorithms accurately detected LDF hepatocytes. Improved diagnosis and treatment of liver diseases can be achieved through advancements in artificial intelligence algorithms for liver histology analysis.
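
The rule-based filtering combined with the segmentation output can be illustrated with a small sketch: candidate regions are retained only if their size and circularity are plausible for large fat vacuoles. The thresholds, the use of scikit-image, and the specific criteria are assumptions for illustration, not the filters used in the study.

```python
import numpy as np
from skimage.measure import label, regionprops

def filter_fat_vacuole_candidates(mask, min_area=200, max_area=20000,
                                  min_circularity=0.7):
    """Keep only roughly circular, appropriately sized candidate regions.

    mask: binary array where candidate segmentations are True.
    Returns a binary mask containing the surviving regions.
    """
    labeled = label(mask)
    keep = np.zeros_like(mask, dtype=bool)
    for region in regionprops(labeled):
        if region.perimeter == 0:
            continue
        # circularity = 1 for a perfect circle, lower for irregular shapes
        circularity = 4 * np.pi * region.area / (region.perimeter ** 2)
        if min_area <= region.area <= max_area and circularity >= min_circularity:
            keep[labeled == region.label] = True
    return keep
```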


Subject(s)
Fatty Liver , Liver Transplantation , Humans , Artificial Intelligence , Living Donors , Fatty Liver/diagnostic imaging , Fatty Liver/pathology , Liver/diagnostic imaging , Liver/pathology
5.
Lab Invest ; 104(6): 102049, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38513977

ABSTRACT

Although pathological tissue analysis is typically performed on single 2-dimensional (2D) histologic reference slides, 3-dimensional (3D) reconstruction from a sequence of histologic sections could provide novel opportunities for spatial analysis of the extracted tissue. In this review, we analyze recent works published after 2018 and report information on the extracted tissue types, the section thickness, and the number of sections used for reconstruction. By analyzing the technological requirements for 3D reconstruction, we observe that software tools exist, both free and commercial, which include the functionality to perform 3D reconstruction from a sequence of histologic images. Through the analysis of the most recent works, we provide an overview of the workflows and tools that are currently used for 3D reconstruction from histologic sections and address points for future work, such as a missing common file format or computer-aided analysis of the reconstructed model.


Subject(s)
Imaging, Three-Dimensional , Imaging, Three-Dimensional/methods , Humans , Software , Animals
6.
Breast Cancer Res ; 26(1): 31, 2024 02 23.
Article in English | MEDLINE | ID: mdl-38395930

ABSTRACT

BACKGROUND: Accurate classification of breast cancer molecular subtypes is crucial in determining treatment strategies and predicting clinical outcomes. This classification largely depends on the assessment of human epidermal growth factor receptor 2 (HER2), estrogen receptor (ER), and progesterone receptor (PR) status. However, variability in interpretation among pathologists poses challenges to the accuracy of this classification. This study evaluates the role of artificial intelligence (AI) in enhancing the consistency of these evaluations. METHODS: AI-powered HER2 and ER/PR analyzers, consisting of cell and tissue models, were developed using 1,259 HER2-, 744 ER-, and 466 PR-stained immunohistochemistry (IHC) whole-slide images of breast cancer. An external validation cohort comprising HER2, ER, and PR IHCs of 201 breast cancer cases was analyzed with these AI-powered analyzers. Three board-certified pathologists independently assessed these cases without AI annotation. Then, cases with differing interpretations between pathologists and the AI analyzer were revisited with AI assistance, focusing on evaluating the influence of AI assistance on the concordance among pathologists during the revised evaluation compared to the initial assessment. RESULTS: Reevaluation was required in 61 (30.3%), 42 (20.9%), and 80 (39.8%) of HER2, in 15 (7.5%), 17 (8.5%), and 11 (5.5%) of ER, and in 26 (12.9%), 24 (11.9%), and 28 (13.9%) of PR evaluations by the three pathologists, respectively. Compared to initial interpretations, AI assistance led to a notable increase in the agreement among the three pathologists on the status of HER2 (from 49.3 to 74.1%, p < 0.001), ER (from 93.0 to 96.5%, p = 0.096), and PR (from 84.6 to 91.5%, p = 0.006). This improvement was especially evident in cases of HER2 2+ and 1+, where the concordance increased significantly from 46.2 to 68.4% and from 26.5 to 70.7%, respectively. Consequently, a refinement in the classification of breast cancer molecular subtypes (from 58.2 to 78.6%, p < 0.001) was achieved with AI assistance. CONCLUSIONS: This study underscores the significant role of AI analyzers in improving pathologists' concordance in the classification of breast cancer molecular subtypes.
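
For orientation, the concordance percentages in the Results can be read as the proportion of cases on which all three pathologists assign the same category; a small sketch of that computation is shown below. The rater labels are made up, and whether the study used exactly this agreement definition is an assumption.

```python
def all_rater_agreement(labels_by_rater):
    """Proportion of cases on which every rater assigned the same category.

    labels_by_rater: list of per-rater label lists, aligned by case.
    """
    agree = [len(set(case)) == 1 for case in zip(*labels_by_rater)]
    return sum(agree) / len(agree)

# toy HER2 scores from three raters for five cases
rater_a = ["3+", "2+", "1+", "0", "2+"]
rater_b = ["3+", "2+", "0",  "0", "2+"]
rater_c = ["3+", "1+", "1+", "0", "2+"]
print(all_rater_agreement([rater_a, rater_b, rater_c]))  # 0.6 (3 of 5 cases)
```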


Subject(s)
Breast Neoplasms , Humans , Female , Breast Neoplasms/diagnosis , Breast Neoplasms/metabolism , Receptors, Estrogen/metabolism , Biomarkers, Tumor/metabolism , Artificial Intelligence , Observer Variation , Receptors, Progesterone/metabolism , Receptor, ErbB-2/metabolism
7.
Brief Bioinform ; 23(5)2022 09 20.
Article in English | MEDLINE | ID: mdl-35901472

ABSTRACT

MOTIVATION: Digital pathological analysis serves as the main examination used for cancer diagnosis. Recently, deep learning-driven feature extraction from pathology images has been able to detect genetic variations and the tumor environment, but few studies focus on differential gene expression in tumor cells. RESULTS: In this paper, we propose a self-supervised contrastive learning framework, HistCode, to infer differential gene expression from whole slide images (WSIs). We leveraged contrastive learning on large-scale unannotated WSIs to derive slide-level histopathological features in latent space, and then transferred them to tumor diagnosis and prediction of differentially expressed cancer driver genes. Our experiments showed that our method outperformed other state-of-the-art models in tumor diagnosis tasks and also effectively predicted differential gene expression. Interestingly, we found that genes with higher fold changes can be predicted more precisely. To intuitively illustrate the ability to extract informative features from pathological images, we spatially visualized the WSIs colored by the attention scores of image tiles. We found that the tumor and necrosis areas were highly consistent with the annotations of experienced pathologists. Moreover, the spatial heatmap generated by lymphocyte-specific gene expression patterns was also consistent with the manually labeled WSIs.
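
The attention-score visualisation mentioned at the end of the abstract is, in essence, painting each tile's attention weight back onto a low-resolution grid covering the slide. The sketch below shows that generic step; the tile size, coordinate convention, and normalisation are illustrative assumptions, not the HistCode implementation.

```python
import numpy as np

def attention_heatmap(tile_coords, attention_scores, tile_size, slide_shape):
    """Paint per-tile attention scores onto a downsampled slide-sized grid.

    tile_coords: (n, 2) array of (x, y) pixel origins of each tile on the WSI.
    attention_scores: (n,) attention weight per tile (e.g. from an MIL head).
    slide_shape: (width, height) of the WSI in pixels.
    """
    h = int(np.ceil(slide_shape[1] / tile_size))
    w = int(np.ceil(slide_shape[0] / tile_size))
    heat = np.zeros((h, w), dtype=float)
    for (x, y), score in zip(tile_coords, attention_scores):
        heat[y // tile_size, x // tile_size] = score
    # normalise to [0, 1] for display alongside the pathologist's annotations
    rng = heat.max() - heat.min()
    return (heat - heat.min()) / rng if rng > 0 else heat
```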


Subject(s)
Neoplasms , Oncogenes , Humans , Machine Learning , Neoplasms/diagnosis , Neoplasms/genetics , Neoplasms/pathology
8.
J Transl Med ; 22(1): 182, 2024 02 19.
Article in English | MEDLINE | ID: mdl-38373959

ABSTRACT

BACKGROUND: Digital histopathology provides valuable information for clinical decision-making. We hypothesized that a deep risk network (DeepRisk) based on a digital pathology signature (DPS) derived from whole-slide images could improve the prognostic value of the tumor, node, and metastasis (TNM) staging system and offer chemotherapeutic benefits for gastric cancer (GC). METHODS: DeepRisk is a multi-scale, attention-based learning model developed on 1120 GCs in the Zhongshan dataset and validated with two external datasets. Then, we assessed its association with prognosis and treatment response. Multi-omics analysis and multiplex immunohistochemistry were conducted to evaluate the potential pathogenesis and spatial immune contexture underlying the DPS. RESULTS: Multivariate analysis indicated that the DPS was an independent prognosticator with a better C-index (0.84 for overall survival and 0.71 for disease-free survival). Patients with a low DPS after neoadjuvant chemotherapy responded favorably to treatment. Spatial analysis indicated that exhausted immune clusters and increased infiltration of CD11b+CD11c+ immune cells were present at the invasive margin of the high-DPS group. Multi-omics data from The Cancer Genome Atlas stomach adenocarcinoma cohort (TCGA-STAD) hint at the relevance of the DPS to myeloid-derived suppressor cell infiltration and immune suppression. CONCLUSION: The DeepRisk network is a reliable tool that enhances the prognostic value of TNM staging and aids in precise treatment, providing insights into the underlying pathogenic mechanisms.
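
For reference, the C-index reported for the DPS measures how often pairs of patients are ranked consistently by risk score and survival time. Below is a plain-Python sketch of Harrell's C-index, ignoring ties in event time for brevity; it is not the evaluation code used in the study.

```python
def harrell_c_index(times, events, risk_scores):
    """Fraction of comparable patient pairs whose risk ordering matches their
    survival ordering (higher risk should mean shorter survival)."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # pair (i, j) is comparable if patient i had the event before time j
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5
    return concordant / comparable if comparable else float('nan')

# toy example: perfectly ranked risks give a C-index of 1.0
print(harrell_c_index([5, 8, 12, 20], [1, 1, 1, 0], [0.9, 0.7, 0.4, 0.1]))
```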


Subject(s)
Adenocarcinoma , Stomach Neoplasms , Humans , Stomach Neoplasms/drug therapy , Neoadjuvant Therapy , Clinical Decision-Making , Artificial Intelligence , Prognosis
9.
BMC Cancer ; 24(1): 368, 2024 Mar 22.
Article in English | MEDLINE | ID: mdl-38519974

ABSTRACT

OBJECTIVE: This study aimed to develop and validate an artificial intelligence radiopathological model using preoperative CT scans and postoperative hematoxylin and eosin (HE) stained slides to predict the pathological staging of gastric cancer (stage I-II versus stage III). METHODS: This study included a total of 202 gastric cancer patients with confirmed pathological staging (training cohort: n = 141; validation cohort: n = 61). Pathological histological features were extracted from HE slides, and pathological models were constructed using logistic regression (LR), support vector machine (SVM), and NaiveBayes. The optimal pathological model was selected through receiver operating characteristic (ROC) curve analysis. Machine learning algorithms were employed to construct radiomic models and radiopathological models using the optimal pathological model. Model performance was evaluated using ROC curve analysis, and clinical utility was estimated using decision curve analysis (DCA). RESULTS: A total of 311 pathological histological features were extracted from the HE images, including 101 Term Frequency-Inverse Document Frequency (TF-IDF) features and 210 deep learning features. A pathological model was constructed using 19 pathological features selected through dimension reduction, with the SVM model demonstrating superior predictive performance (AUC, training cohort: 0.949; validation cohort: 0.777). The radiomic model was built with the SVM algorithm using 6 features selected from the 1834 radiomic features extracted from CT scans. Simultaneously, a radiopathomics model was built using 17 non-zero-coefficient features obtained through dimension reduction from a total of 2145 features (combining both radiomics and pathomics features). The best discriminative ability was observed in the SVM_radiopathomics model (AUC, training cohort: 0.953; validation cohort: 0.851), and DCA demonstrated excellent clinical utility. CONCLUSION: The radiopathomics model, combining pathological and radiomic features, exhibited superior performance in distinguishing between stage I-II and stage III gastric cancer. This study predicted pathological staging from pathological tissue slides of surgical specimens obtained after curative gastric cancer surgery together with preoperative CT images, highlighting the feasibility of staging research that combines pathological slides and CT images.
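
The radiopathomics fusion step amounts to concatenating the selected radiomic and pathomic feature vectors and fitting a classifier such as an SVM. The sketch below illustrates that idea with scikit-learn on random placeholder data; the feature counts are taken from the abstract, but everything else (data, preprocessing, hyperparameters) is an assumption.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_patients = 202
radiomic = rng.normal(size=(n_patients, 6))      # selected radiomic features
pathomic = rng.normal(size=(n_patients, 19))     # selected pathomic features
stage = rng.integers(0, 2, size=n_patients)      # 0 = stage I-II, 1 = stage III

X = np.hstack([radiomic, pathomic])              # radiopathomics fusion
X_tr, X_te, y_tr, y_te = train_test_split(X, stage, test_size=0.3, random_state=0)

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
model.fit(X_tr, y_tr)
print("validation AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```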


Subject(s)
Stomach Neoplasms , Humans , Stomach Neoplasms/diagnostic imaging , Artificial Intelligence , Algorithms , Eosine Yellowish-(YS) , Tomography, X-Ray Computed
10.
Pathobiology ; 91(1): 8-17, 2024.
Article in English | MEDLINE | ID: mdl-36791682

ABSTRACT

The expanding digitalization of routine diagnostic histological slides holds potential for applying artificial intelligence (AI) to pathology, including bone marrow (BM) histology. In this perspective, we describe potential tasks in diagnostics that can be supported, investigations that can be guided, and questions that can be answered by the future application of AI to whole-slide images of BM biopsies. These range from characterization of cell lineages and quantification of cells and stromal structures to disease prediction. First glimpses show an exciting potential to detect subtle phenotypic changes with AI that are due to specific genotypes. The discussion is illustrated by examples of current AI research using BM biopsy slides. In addition, we briefly discuss current challenges for the implementation of AI-supported diagnostics.


Subject(s)
Artificial Intelligence , Bone Marrow , Humans , Biopsy , Cell Lineage , Genotype
11.
J Pathol ; 259(2): 125-135, 2023 02.
Article in English | MEDLINE | ID: mdl-36318158

ABSTRACT

Colorectal adenoma is a recognized precancerous lesion of colorectal cancer (CRC), and at least 80% of colorectal cancers arise from malignant transformation of adenomas. Therefore, it is essential to distinguish benign from malignant adenomas in the early screening of colorectal cancer. Many deep learning computational pathology studies based on whole slide images (WSIs) have been proposed. Most approaches require manual annotation of lesion regions on WSIs, which is time-consuming and labor-intensive. This study proposes a new approach, MIST - a Multiple Instance learning network based on the Swin Transformer - which can accurately classify colorectal adenoma WSIs with only slide-level labels. MIST uses the Swin Transformer as the backbone to extract image features through self-supervised contrastive learning and uses a dual-stream multiple instance learning network to predict the class of slides. We trained and validated MIST on 666 WSIs collected from 480 colorectal adenoma patients in the Department of Pathology, The Affiliated Drum Tower Hospital of Nanjing University Medical School. These slides contained six common types of colorectal adenomas. The accuracy of external validation on 273 newly collected WSIs from Nanjing First Hospital was 0.784, which was superior to existing methods and comparable to the local pathologist's accuracy of 0.806. Finally, we analyzed the interpretability of MIST and observed that the lesion areas of interest to MIST were generally consistent with those of interest to local pathologists. In conclusion, MIST is a low-burden, interpretable, and effective approach that can be used in colorectal cancer screening and may lead to a potential reduction in the mortality of CRC patients by assisting clinicians in the decision-making process. © 2022 The Pathological Society of Great Britain and Ireland.


Subject(s)
Adenocarcinoma , Adenoma , Colorectal Neoplasms , Humans , Pathologists , United Kingdom
12.
Methods ; 212: 31-38, 2023 04.
Article in English | MEDLINE | ID: mdl-36706825

ABSTRACT

The liver is an important metabolic organ in the human body and is sensitive to toxic chemicals or drugs. Adverse reactions caused by drug hepatotoxicity damage the liver, and hepatotoxicity is the leading cause of removal of approved drugs from the market. Therefore, it is of great significance to identify liver toxicity as early as possible in the drug development process. In this study, we developed a predictive model for drug hepatotoxicity based on histopathological whole slide images (WSIs), which are a by-product of drug experiments and have received little attention. To better represent the WSIs, we constructed a graph representation for each WSI by dividing it into small patches, taking sampled patches as nodes and using the correlation coefficients between node features as the edges of the graph structure. Then a WSI-level graph convolutional network (GCN) was built to effectively extract the node information of the graph and predict the toxicity. In addition, we introduced a gated attention global context vector (gaGCV) to incorporate the global context so that node features contain more comprehensive information. The results validated on rat liver in vivo data from Open TG-GATEs show that the use of WSIs for the prediction of toxicity is feasible and effective.
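
The graph construction described above - patches as nodes, correlations between patch features as edges - can be sketched in a few lines, together with a single standard graph-convolution layer. The feature dimensions, correlation threshold, and layer design are illustrative assumptions, not the gaGCV model itself.

```python
import numpy as np
import torch
import torch.nn as nn

def build_patch_graph(patch_features, threshold=0.5):
    """Nodes are sampled patches; an edge links two patches whose feature
    vectors have an absolute Pearson correlation above the threshold."""
    corr = np.corrcoef(patch_features)              # (n_patches, n_patches)
    adj = (np.abs(corr) > threshold).astype(float)
    np.fill_diagonal(adj, 1.0)                      # keep self-loops
    return torch.tensor(adj, dtype=torch.float32)

class GCNLayer(nn.Module):
    """One graph-convolution step: symmetric normalisation then projection."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        deg = adj.sum(dim=1)
        d_inv_sqrt = torch.diag(deg.pow(-0.5))
        adj_norm = d_inv_sqrt @ adj @ d_inv_sqrt
        return torch.relu(self.linear(adj_norm @ x))

features = np.random.rand(50, 128)                  # 50 patches, 128-dim features
adj = build_patch_graph(features)
out = GCNLayer(128, 64)(torch.tensor(features, dtype=torch.float32), adj)
```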


Subject(s)
Chemical and Drug Induced Liver Injury , Liver , Animals , Humans , Rats , Chemical and Drug Induced Liver Injury/etiology , Liver/pathology , Microscopy , Image Interpretation, Computer-Assisted
13.
Pediatr Dev Pathol ; 27(1): 32-38, 2024.
Article in English | MEDLINE | ID: mdl-37943723

ABSTRACT

INTRODUCTION: In osteosarcoma, the most significant indicators of prognosis are the histologic changes related to tumor response to preoperative chemotherapy, such as necrosis. We have developed a method to measure the osteosarcoma treatment effect on whole slide images (WSIs) with the open-source digital image analysis software QuPath. MATERIALS AND METHODS: In QuPath, each osteosarcoma case was treated as a project. All H&E slides from the entire representative slice of osteosarcoma were scanned into WSIs and imported into a project in QuPath. The regions of tumor and tumor necrosis were annotated, and their areas were measured in QuPath. To measure the osteosarcoma treatment effect, we needed to calculate the percentage of total necrosis area over total tumor area. We developed a tool that can automatically extract all values of tumor and necrosis areas from a QuPath project into an Excel file, sum these values for necrosis and whole tumor respectively, and calculate the necrosis/tumor percentage. CONCLUSION: Our method, which combines WSIs with QuPath, can provide an objective measurement to facilitate pathologists' assessment of osteosarcoma response to treatment. The proposed approach can also be used for other tumor types with a clinical need for post-treatment response assessment.
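
The aggregation step in the Methods reduces to summing the exported tumor and necrosis areas and taking their ratio. The sketch below assumes a tab-separated measurement export with 'Class' and 'Area' columns; the column names and file layout are assumptions rather than the exact QuPath export format.

```python
import pandas as pd

def necrosis_percentage(measurements_path):
    """Compute 100 * (total necrosis area) / (total tumor area) from an
    annotation measurement table exported from the image-analysis project."""
    df = pd.read_csv(measurements_path, sep="\t")            # assumed TSV export
    necrosis_area = df.loc[df["Class"] == "Necrosis", "Area"].sum()
    tumor_area = df.loc[df["Class"] == "Tumor", "Area"].sum()
    if tumor_area == 0:
        raise ValueError("No tumor annotations found in the export.")
    return 100.0 * necrosis_area / tumor_area

# e.g. necrosis_percentage("case_01_annotations.tsv") -> percent treatment effect
```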


Subject(s)
Bone Neoplasms , Osteosarcoma , Humans , Software , Osteosarcoma/diagnosis , Osteosarcoma/therapy , Osteosarcoma/pathology , Bone Neoplasms/diagnosis , Bone Neoplasms/therapy , Bone Neoplasms/pathology , Necrosis/pathology
14.
Dig Dis Sci ; 69(8): 2985-2995, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38837111

ABSTRACT

BACKGROUND: Colorectal cancer (CRC) is a malignant tumor of the digestive tract with both a high incidence and high mortality. Early detection and intervention could improve patient clinical outcomes and survival. METHODS: This study computationally investigates a set of prognostic tissue and cell features from diagnostic tissue slides. Combined with clinical prognostic variables, the pathological image features could predict the prognosis of CRC patients. Our CRC prognosis prediction pipeline consisted of three sequential modules: (1) a MultiTissue Net to delineate outlines of different tissue types within the whole slide image (WSI) of CRC for further ROI selection by pathologists; (2) development of three-level quantitative image metrics related to tissue composition, cell shape, and hidden features from a deep network; and (3) fusion of multi-level features to build a prognostic CRC model for predicting survival. RESULTS: Experimental results suggest that each group of features has a particular relationship with the prognosis of patients in the independent test set. In the fused-feature experiment, the accuracy of predicting patients' prognosis and survival status was 81.52%, and the AUC value was 0.77. CONCLUSION: This paper constructs a model that can predict the postoperative survival of patients by using image features and clinical information. Some features were found to be associated with the prognosis and survival of patients.


Subject(s)
Colorectal Neoplasms , Humans , Colorectal Neoplasms/pathology , Colorectal Neoplasms/mortality , Prognosis , Male , Female , Image Interpretation, Computer-Assisted , Predictive Value of Tests
15.
Microsc Microanal ; 30(1): 118-132, 2024 Mar 07.
Article in English | MEDLINE | ID: mdl-38156737

ABSTRACT

Automated quantification of human epidermal growth factor receptor 2 (HER2) immunohistochemistry (IHC) using whole slide imaging (WSI) is expected to eliminate subjectivity in visual assessment. However, the color intensity in WSI varies depending on the staining process and scanner device, and such variations affect the image analysis results. This paper presents methods to diminish the influence of color variation produced in the staining process using a calibrator slide consisting of peptide-coated microbeads. The calibrator slide is stained along with the tissue sample slides, and the 3,3'-diaminobenzidine (DAB) color intensities of the microbeads are used to calibrate the color variation of the sample slides. An off-the-shelf image analysis tool is employed for the automated assessment, in which cells are classified by thresholds on the membrane staining intensity. We adopted two methods for calibrating the color variation based on the DAB color intensities obtained from the calibrator slide: (1) the thresholds for classifying the DAB membranous intensity are adjusted, and (2) the color intensity of the WSI is corrected. In the experiment, the calibrator slides and breast cancer tissue slides were stained together on different days and used to test our protocol. With the proposed protocol, the discordance in the HER2 evaluation was reduced to one slide out of 120.
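
Method (2) above, correcting the color intensity of the WSI, can be illustrated as a linear mapping from the bead intensities measured on a given slide onto reference bead intensities, applied to the tissue measurements. The linear model and the example values are assumptions for illustration, not the calibration actually implemented.

```python
import numpy as np

def calibrate_dab(tissue_dab, slide_bead_dab, reference_bead_dab):
    """Linearly map a slide's DAB intensities onto the calibrator reference.

    slide_bead_dab: DAB intensities of the peptide-coated beads measured on
        this slide (one value per bead level).
    reference_bead_dab: target intensities for the same bead levels.
    """
    # least-squares fit of reference = gain * measured + offset over the beads
    gain, offset = np.polyfit(slide_bead_dab, reference_bead_dab, deg=1)
    return gain * np.asarray(tissue_dab) + offset

# toy usage: a slide stained slightly too strongly is pulled back to reference
corrected = calibrate_dab(tissue_dab=[0.35, 0.80, 1.10],
                          slide_bead_dab=[0.30, 0.65, 1.20],
                          reference_bead_dab=[0.25, 0.55, 1.00])
```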


Subject(s)
Breast Neoplasms , Coloring Agents , Humans , Female , Immunohistochemistry , Calibration , Image Processing, Computer-Assisted/methods
16.
BMC Oral Health ; 24(1): 434, 2024 Apr 09.
Article in English | MEDLINE | ID: mdl-38594651

ABSTRACT

BACKGROUND: The grading of oral epithelial dysplasia is often time-consuming for oral pathologists, and the results are poorly reproducible between observers. In this study, we aimed to establish an objective, accurate and useful detection and grading system for oral epithelial dysplasia in whole-slide images of oral leukoplakia. METHODS: Four convolutional neural networks were compared using image patches from 56 whole-slide images of oral leukoplakia labeled by pathologists as the gold standard. Sequentially, feature detection models were trained, validated and tested with 1,000 image patches using the optimal network. Lastly, a comprehensive system named E-MOD-plus was established by combining the feature detection models with a multiclass logistic model. RESULTS: EfficientNet-B0 was selected as the optimal network to build the feature detection models. In the internal dataset of whole-slide images, the prediction accuracy of E-MOD-plus was 81.3% (95% confidence interval: 71.4-90.5%) and the area under the receiver operating characteristic curve was 0.793 (95% confidence interval: 0.650 to 0.925); in the external dataset of 229 tissue microarray images, the prediction accuracy was 86.5% (95% confidence interval: 82.4-90.0%) and the area under the receiver operating characteristic curve was 0.669 (95% confidence interval: 0.496 to 0.843). CONCLUSIONS: E-MOD-plus was objective and accurate in the detection of pathological features as well as the grading of oral epithelial dysplasia, and has the potential to assist pathologists in clinical practice.


Subject(s)
Deep Learning , Humans , Leukoplakia, Oral/diagnosis
17.
Cancer Sci ; 114(10): 4114-4124, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37574759

ABSTRACT

Controversy exists regarding whether patients with low-risk papillary thyroid microcarcinoma (PTMC) should undergo surgery or active surveillance; the inaccuracy of preoperative clinical lymph node status assessment is one of the primary factors contributing to the controversy. It is therefore imperative to accurately predict the lymph node status of PTMC before surgery. We selected 208 preoperative fine-needle aspiration (FNA) liquid-based preparations of PTMC as our study material; all of these cases underwent lymph node dissection and, aside from lymph node status, were consistent with low-risk PTMC. We separated them into two groups according to whether the postoperative pathology showed central lymph node metastases. The deep learning model was expected to predict, based on the preoperative thyroid FNA liquid-based preparation, whether PTMC was accompanied by central lymph node metastases. Our deep learning model attained a sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and accuracy of 78.9% (15/19), 73.9% (17/23), 71.4% (15/21), 81.0% (17/21), and 76.2% (32/42), respectively. The area under the receiver operating characteristic curve was 0.8503. The predictive performance of the deep learning model was superior to that of traditional clinical evaluation, and further analysis revealed the cell morphologies that played key roles in model prediction. Our study suggests that a deep learning model based on preoperative thyroid FNA liquid-based preparations is a reliable strategy for predicting central lymph node metastases in papillary thyroid microcarcinoma, and its performance surpasses that of traditional clinical examination.
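
The reported performance figures follow directly from the confusion-matrix counts given in parentheses (TP = 15, FN = 4, TN = 17, FP = 6); the short calculation below reproduces them.

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard diagnostic metrics from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),                   # 15/19 = 0.789
        "specificity": tn / (tn + fp),                   # 17/23 = 0.739
        "ppv":         tp / (tp + fp),                   # 15/21 = 0.714
        "npv":         tn / (tn + fn),                   # 17/21 = 0.810
        "accuracy":    (tp + tn) / (tp + fp + tn + fn),  # 32/42 = 0.762
    }

print(diagnostic_metrics(tp=15, fp=6, tn=17, fn=4))
```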

18.
Mod Pathol ; 36(8): 100216, 2023 08.
Article in English | MEDLINE | ID: mdl-37178923

ABSTRACT

Identifying lymph node (LN) metastasis in invasive breast carcinoma can be tedious and time-consuming. We investigated an artificial intelligence (AI) algorithm to detect LN metastasis by screening hematoxylin and eosin (H&E) slides in a clinical digital workflow. The study included 2 sentinel LN (SLN) cohorts (a validation cohort with 234 SLNs and a consensus cohort with 102 SLNs) and 1 nonsentinel LN cohort (258 LNs enriched with lobular carcinoma and postneoadjuvant therapy cases). All H&E slides were scanned into whole slide images in a clinical digital workflow, and the whole slide images were automatically batch-analyzed using the Visiopharm Integrator System (VIS) metastasis AI algorithm. For the SLN validation cohort, the VIS metastasis AI algorithm detected all 46 metastases, including 19 macrometastases, 26 micrometastases, and 1 case with isolated tumor cells, with a sensitivity of 100%, specificity of 41.5%, positive predictive value of 29.5%, and negative predictive value (NPV) of 100%. The false positives were caused by histiocytes (52.7%), crushed lymphocytes (18.2%), and other structures (29.1%), which were readily recognized during pathologists' reviews. For the SLN consensus cohort, 3 pathologists examined all VIS AI-annotated H&E slides and cytokeratin immunohistochemistry slides with similar average concordance rates (99% for both modalities). However, the average time consumed by pathologists using VIS AI-annotated slides was significantly less than using immunohistochemistry slides (0.6 vs 1.0 minutes, P = .0377). For the nonsentinel LN cohort, the AI algorithm detected all 81 metastases, including 23 from lobular carcinoma and 31 from postneoadjuvant chemotherapy cases, with a sensitivity of 100%, specificity of 78.5%, positive predictive value of 68.1%, and NPV of 100%. The VIS AI algorithm showed perfect sensitivity and NPV in detecting LN metastasis and required less review time, suggesting its potential utility as a screening modality in the routine clinical digital pathology workflow to improve efficiency.


Subject(s)
Breast Neoplasms , Carcinoma, Lobular , Humans , Female , Lymphatic Metastasis/diagnosis , Lymphatic Metastasis/pathology , Breast Neoplasms/pathology , Sentinel Lymph Node Biopsy/methods , Carcinoma, Lobular/pathology , Artificial Intelligence , Workflow , Hematoxylin , Lymph Nodes/pathology
19.
Histopathology ; 82(7): 1105-1111, 2023 Jun.
Article in English | MEDLINE | ID: mdl-36849712

ABSTRACT

AIMS: Subclassification of large B cell lymphoma (LBCL) is challenging due to the overlap in histopathological, immunophenotypical and genetic data. In particular, the criteria to separate diffuse large B cell lymphoma (DLBCL) and high-grade B cell lymphoma (HGBL) are difficult to apply in practice. The Lunenburg Lymphoma Biomarker Consortium previously reported a cohort of over 5000 LBCLs that included fluorescence in-situ hybridisation (FISH) data. This cohort contained 209 cases with MYC rearrangement that were available for a validation study, by a panel of eight expert haematopathologists, of how various histopathological features are used. METHODS AND RESULTS: Digital whole slide images of haematoxylin and eosin-stained sections allowed the pathologists to score cases visually and independently, as well as to participate in virtual joint review conferences. Standardised consensus guidelines were formulated for scoring histopathological features and included overall architecture/growth pattern, presence or absence of a starry-sky pattern, cell size, nuclear pleomorphism, nucleolar prominence and a range of cytological characteristics. Despite the use of consensus guidelines, the results show a high degree of discordance among the eight expert pathologists. Approximately 50% of the cases lacked a majority score, and this discordance spanned all six histopathological features. Moreover, none of the histological variables aided in the prediction of MYC single- versus double/triple-hit status, immunoglobulin-partner FISH-based designations, or clinical outcome measures. CONCLUSIONS: Our findings indicate that there are no specific conventional morphological parameters that help to subclassify MYC-rearranged LBCL or select cases for FISH analysis, and that incorporation of FISH data is essential for accurate classification and prognostication.


Subject(s)
Lymphoma, Large B-Cell, Diffuse , Humans , Reproducibility of Results , Lymphoma, Large B-Cell, Diffuse/diagnosis , Lymphoma, Large B-Cell, Diffuse/genetics , Lymphoma, Large B-Cell, Diffuse/pathology , Biomarkers , Proto-Oncogene Proteins c-myc/genetics , Proto-Oncogene Proteins c-bcl-2/genetics , Proto-Oncogene Proteins c-bcl-6/genetics , Gene Rearrangement
20.
Histopathology ; 83(2): 211-228, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37071058

ABSTRACT

AIMS: Classification of histological patterns in lung adenocarcinoma (LUAD) is critical for clinical decision-making, especially in the early stage. However, the inter- and intraobserver subjectivity of pathologists makes the quantification of histological patterns varied and inconsistent. Moreover, the spatial information of histological patterns is not evident to the naked eye of pathologists. METHODS AND RESULTS: We establish the LUAD-subtype deep learning model (LSDLM) with an optimal ResNet34 backbone followed by a four-layer neural network classifier, based on 40 000 well-annotated patch-level tiles. The LSDLM shows robust performance for the identification of histopathological subtypes at the whole-slide level, with area under the curve (AUC) values of 0.93, 0.96 and 0.85 across one internal and two external validation data sets. The LSDLM is capable of accurately distinguishing different LUAD subtypes through confusion matrices, albeit with a bias towards high-risk subtypes. It possesses mixed-histology pattern recognition on a par with senior pathologists. Combining the LSDLM-based risk score with the spatial K score (K-RS) shows great capacity for stratifying patients. Furthermore, we found the corresponding gene-level signature (AI-SRSS) to be an independent risk factor correlated with prognosis. CONCLUSIONS: Leveraging state-of-the-art deep learning models, the LSDLM shows the capacity to assist pathologists in classifying histological patterns and stratifying prognosis in LUAD patients.
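
For illustration, a backbone-plus-classifier of the kind described (a ResNet34 feature extractor followed by a four-layer fully connected head over LUAD growth patterns) can be sketched as follows; the head sizes, number of classes and use of torchvision are assumptions, not the LSDLM configuration.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet34

NUM_SUBTYPES = 5  # e.g. lepidic, acinar, papillary, micropapillary, solid

backbone = resnet34()                      # ResNet34 tile-level feature extractor
in_features = backbone.fc.in_features      # 512 for ResNet34
backbone.fc = nn.Sequential(               # four-layer classifier head
    nn.Linear(in_features, 256), nn.ReLU(),
    nn.Linear(256, 128), nn.ReLU(),
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, NUM_SUBTYPES),
)

tiles = torch.rand(8, 3, 224, 224)         # a batch of tissue tiles
logits = backbone(tiles)                   # (8, NUM_SUBTYPES) subtype scores
```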


Subject(s)
Adenocarcinoma of Lung , Deep Learning , Lung Neoplasms , Humans , Adenocarcinoma of Lung/diagnosis , Adenocarcinoma of Lung/pathology , Lung Neoplasms/diagnosis , Lung Neoplasms/pathology , Prognosis , Risk Factors