ABSTRACT
Deep learning consistently achieves high performance in classifying and segmenting medical images such as CT, PET, and MRI. Whole slide images (WSIs) of stained tissue sections, however, are far larger than these modalities and therefore much more costly to process, especially with deep learning algorithms. To overcome these challenges, we present attention2majority, a weakly supervised multiple instance learning model that classifies WSIs automatically and efficiently. Our method first assigns each exhaustively sampled, label-free patch the label of its parent WSI and trains a convolutional neural network for patch-wise classification. An intelligent sampling step then collects the patches classified with high confidence to form a weak representation of each WSI. Finally, a multi-head attention-based multiple instance learning model performs slide-level classification on these intelligently sampled, high-confidence patches. Attention2majority was trained and tested on classifying the quality of 127 WSIs of regenerated kidney sections into three categories, achieving an average AUC of 97.4%±2.4 under four-fold cross-validation. We demonstrate that the intelligent sampling module within attention2majority is superior to the current state-of-the-art random sampling: replacing random sampling with intelligent sampling boosts the average four-fold cross-validation AUC from 94.9%±3.1 to 97.4%±2.4. A variant of attention2majority tested on the widely used Camelyon16 dataset achieved an AUC of 89.1%±0.8. Compared with random sampling, attention2majority also demonstrated excellent slide-level interpretability, and it provides an efficient framework for multi-class slide-level prediction.
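The abstract does not specify implementation details; the PyTorch sketch below is only one plausible reading of the two core steps (confidence-based intelligent sampling and multi-head attention pooling), with the embedding size, head count, and patch budget chosen as illustrative assumptions rather than taken from the paper.

```python
import torch
import torch.nn as nn

def intelligent_sample(patch_probs, slide_label, k=100):
    """Keep the k patches whose CNN confidence for the slide's weakly
    assigned label is highest, forming a compact bag for the WSI.
    patch_probs: (n_patches, n_classes) softmax outputs; returns indices."""
    conf = patch_probs[:, slide_label]
    return torch.topk(conf, min(k, conf.numel())).indices

class MultiHeadAttentionMIL(nn.Module):
    """Slide-level classifier that pools a bag of patch embeddings with
    several independent attention heads and concatenates the results.
    All dimensions here are assumptions, not the paper's values."""
    def __init__(self, in_dim=512, hidden_dim=128, n_heads=4, n_classes=3):
        super().__init__()
        self.heads = nn.ModuleList(
            nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.Tanh(),
                          nn.Linear(hidden_dim, 1))
            for _ in range(n_heads)
        )
        self.classifier = nn.Linear(n_heads * in_dim, n_classes)

    def forward(self, bag):                         # bag: (n_patches, in_dim)
        pooled = []
        for head in self.heads:
            w = torch.softmax(head(bag), dim=0)     # attention over patches
            pooled.append((w * bag).sum(dim=0))     # weighted sum: (in_dim,)
        return self.classifier(torch.cat(pooled))   # slide-level logits
```

Per-head attention weights `w` can be mapped back onto the slide, which is one way to obtain the slide-level interpretability the abstract describes.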
Subjects
Algorithms; Neural Networks, Computer; Humans; Kidney/diagnostic imaging
ABSTRACT
Failure to identify a difficult intubation is the leading cause of anesthesia-related death and morbidity. Despite preoperative airway assessment, 75-93% of difficult intubations are unanticipated, and airway examination methods underperform, with sensitivities of 20-62% and specificities of 82-97%. To overcome these impediments, we aim to develop a deep learning model that identifies difficult-to-intubate patients from frontal face images. We propose an ensemble of convolutional neural networks that leverages a database of celebrity facial images to learn robust features of multiple face regions. This ensemble extracts features from patient images (n = 152), which are then classified by a corresponding ensemble of attention-based multiple instance learning models, and majority voting classifies each patient as difficult or easy to intubate. On a cohort of 76 difficult-to-intubate and 76 easy-to-intubate patients, the proposed method achieved an AUC of 0.7105, whereas two conventional bedside tests yielded AUCs of 0.6042 and 0.4661 and generic features yielded AUCs of 0.4654-0.6278. The model can operate at high sensitivity and low specificity (0.9079 and 0.4474) or at low sensitivity and high specificity (0.3684 and 0.9605). The proposed ensemble model thus significantly surpasses conventional bedside tests, generic features, and existing deep learning methods; incorporating side facial images may improve its performance further. We expect our model to play an important role in the development of deep learning methods in which frontal face features are central.
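The majority-voting stage is simple enough to sketch concretely. The snippet below assumes each face-region model emits a probability of "difficult" and that ties are broken toward "easy"; neither the regions nor the tie-breaking rule comes from the abstract.

```python
from collections import Counter

def ensemble_predict(region_scores, threshold=0.5):
    """Majority vote over per-face-region attention-MIL outputs.
    region_scores: dict of region name -> probability of 'difficult'.
    The regions, threshold, and tie-breaking are assumptions."""
    votes = [int(p >= threshold) for p in region_scores.values()]
    counts = Counter(votes)
    return "difficult" if counts[1] > counts[0] else "easy"

print(ensemble_predict({"eyes": 0.81, "nose": 0.34, "mouth": 0.72}))  # difficult
```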
Subjects
Deep Learning; Databases, Factual; Face/diagnostic imaging; Humans; Neural Networks, Computer
ABSTRACT
BACKGROUND: Machine learning has been applied successfully to many diagnostic and prognostic problems in computational histopathology, yet few efforts have been made to model gene expression from histopathology. This study proposes a methodology that predicts selected gene expression values (microarray) from haematoxylin and eosin whole-slide images, used as an intermediate data modality, to identify fulminant-like ('supersusceptible') pulmonary tuberculosis in an experimentally infected cohort of Diversity Outbred mice (n = 77). METHODS: Gradient-boosted trees were used as a novel feature selector to identify gene transcripts predictive of fulminant-like pulmonary tuberculosis. A novel attention-based multiple instance learning model for regression then predicted the selected genes' expression from whole-slide images. The predicted expression values were shown to be accurate enough to identify supersusceptible mice when fed to gradient-boosted trees trained on ground-truth gene expression data. FINDINGS: The model was accurate, showing high positive correlations with ground-truth gene expression on both the cross-validation (n = 77, 0.63 ≤ ρ ≤ 0.84) and external test (n = 33, 0.65 ≤ ρ ≤ 0.84) sets. The sensitivity and specificity of the gene expression predictions for identifying supersusceptible mice were 0.88 and 0.95 on the cross-validation set (n = 77) and 0.88 and 0.93 on the external set (n = 33). IMPLICATIONS: Our methodology maps histopathology to gene expression with sufficient accuracy to predict a clinical outcome. It exemplifies a computational template for gene expression panels, in which relatively inexpensive and widely available tissue histopathology is mapped to specific genes' expression to serve as a diagnostic or prognostic tool. FUNDING: National Institutes of Health and American Lung Association.
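The abstract does not describe the feature-selection step in detail; as a minimal sketch, the snippet below ranks transcripts by gradient-boosted-tree importance with scikit-learn and keeps the most predictive ones. The estimator, hyperparameters, and panel size are assumptions, not the paper's.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def select_transcripts(X, y, top_n=10):
    """Rank gene transcripts by gradient-boosted-tree feature importance
    and keep the top_n most predictive of the supersusceptible phenotype.
    X: (n_mice, n_transcripts) microarray values; y: binary outcome.
    top_n and the default hyperparameters are illustrative assumptions."""
    gbt = GradientBoostingClassifier(random_state=0).fit(X, y)
    return np.argsort(gbt.feature_importances_)[::-1][:top_n]
```

The selected transcripts then become the regression targets for the attention-based multiple instance learning model.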