Results 1 - 4 of 4
1.
Med Image Anal ; 93: 103070, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38176354

ABSTRACT

We propose DiRL, a Diversity-inducing Representation Learning technique for histopathology imaging. Self-supervised learning (SSL) techniques, such as contrastive and non-contrastive approaches, have been shown to learn rich and effective representations of digitized tissue samples with limited pathologist supervision. Our analysis of vanilla SSL-pretrained models' attention distribution reveals an insightful observation: sparsity in attention, i.e., models tend to localize most of their attention to some prominent patterns in the image. Although attention sparsity can be beneficial in natural images, where these prominent patterns are often the objects of interest themselves, it can be sub-optimal in digital pathology; this is because, unlike natural images, digital pathology scans are not object-centric, but rather a complex phenotype of various spatially intermixed biological components. Inadequate diversification of attention in these complex images could result in crucial information loss. To address this, we leverage cell segmentation to densely extract multiple histopathology-specific representations, and then propose a prior-guided dense pretext task, designed to match the multiple corresponding representations between the views. Through this, the model learns to attend to various components more closely and evenly, thus inducing adequate diversification in attention for capturing context-rich representations. Through quantitative and qualitative analysis on multiple tasks across cancer types, we demonstrate the efficacy of our method and observe that the attention is more globally distributed.


Subject(s)
Image Processing, Computer-Assisted; Machine Learning; Pathology; Humans; Phenotype; Pathology/methods
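The core of the dense pretext task described above is matching multiple component-wise representations between two augmented views, rather than comparing one global vector per image. A minimal sketch of such a matching objective, assuming each view is already summarized as one embedding per segmented component (the function name and array shapes are illustrative, not the authors' implementation):

```python
import numpy as np

def dense_matching_loss(view_a: np.ndarray, view_b: np.ndarray) -> float:
    """Negative mean cosine similarity between corresponding component
    embeddings of two augmented views.

    view_a, view_b: (n_components, dim) arrays, one row per segmented
    cell/tissue component instead of a single global image vector.
    Minimizing this loss pulls each component's representation in one
    view toward its counterpart in the other view.
    """
    a = view_a / np.linalg.norm(view_a, axis=1, keepdims=True)
    b = view_b / np.linalg.norm(view_b, axis=1, keepdims=True)
    # Row-wise cosine similarity, averaged over components, negated.
    return -float(np.mean(np.sum(a * b, axis=1)))

rng = np.random.default_rng(0)
v = rng.normal(size=(8, 32))  # 8 components, 32-dim embeddings
loss = dense_matching_loss(v, v)  # identical views -> loss of -1.0
```

Because the loss is averaged over all components, every component contributes equally to the gradient, which is the mechanism by which attention is encouraged to spread across the tissue rather than collapse onto one prominent pattern.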
2.
Cureus ; 15(8): e44130, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37753018

ABSTRACT

BACKGROUND: Pneumonia is an infectious disease that is especially harmful to those with weak immune systems, such as children under the age of 5. While radiologists' diagnosis of pediatric pneumonia on chest radiographs (CXRs) is often accurate, subtle findings can be missed due to the subjective nature of the diagnosis process. Artificial intelligence (AI) techniques, such as convolutional neural networks (CNNs), can help make the process more objective and precise. However, off-the-shelf CNNs may perform poorly if they are not tuned to their appropriate hyperparameters. Our study aimed to identify the CNNs and their hyperparameter combinations (dropout, batch size, and optimizer) that optimize model performance. METHODOLOGY: Sixty models based on five CNNs (VGG 16, VGG 19, DenseNet 121, DenseNet 169, and InceptionResNet V2) and 12 hyperparameter combinations were tested. Adam, Root Mean Squared Propagation (RmsProp), and Mini-Batch Stochastic Gradient Descent (SGD) optimizers were used. Two batch sizes, 32 and 64, were utilized. A dropout rate of either 0.5 or 0.7 was used in all dropout layers. We used a deidentified CXR dataset of 4200 pneumonia (Figure 1a) and 1600 normal images (Figure 1b). Seventy percent of the CXRs in the dataset were used for training the model, 20% were used for validating the model, and 10% were used for testing the model. All CNNs were trained first on the ImageNet dataset. They were then trained, with frozen weights, on the CXR-containing dataset. RESULTS: Among the 60 models, VGG 19 (dropout of 0.5, batch size of 32, and Adam optimizer) was the most accurate. This model achieved an accuracy of 87.9%. A dropout of 0.5 consistently gave higher accuracy, area under the receiver operating characteristic curve (AUROC), and area under the precision-recall curve (AUPRC) compared to a dropout of 0.7. The CNNs InceptionResNet V2, DenseNet 169, VGG 16, and VGG 19 significantly outperformed the DenseNet 121 CNN in accuracy and AUROC. The Adam and RmsProp optimizers had improved AUROC and AUPRC compared to the SGD optimizer. The batch size had no statistically significant effect on model performance. CONCLUSION: We recommend using low dropout rates (0.5) and the RmsProp or Adam optimizer for pneumonia-detecting CNNs. Additionally, we discourage using the DenseNet 121 CNN when other CNNs are available. Finally, the batch size may be set to any value, depending on computational resources.
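The 60-model search space described in the methodology is the Cartesian product of 5 CNN architectures and 12 hyperparameter combinations (3 optimizers × 2 batch sizes × 2 dropout rates). A sketch of how such a grid can be enumerated, using only the choices named in the abstract:

```python
from itertools import product

# Factors taken from the study's methodology section.
cnns = ["VGG 16", "VGG 19", "DenseNet 121", "DenseNet 169", "InceptionResNet V2"]
optimizers = ["Adam", "RmsProp", "SGD"]
batch_sizes = [32, 64]
dropout_rates = [0.5, 0.7]

# 5 CNNs x (3 x 2 x 2 = 12 hyperparameter combinations) = 60 models.
grid = list(product(cnns, optimizers, batch_sizes, dropout_rates))
print(len(grid))  # 60
```

Each tuple in `grid` identifies one model configuration; the reported best performer corresponds to the tuple `("VGG 19", "Adam", 32, 0.5)`.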

3.
Diagnostics (Basel) ; 11(10)2021 Sep 30.
Article in English | MEDLINE | ID: mdl-34679510

ABSTRACT

In this study, we aimed to predict mechanical ventilation requirement and mortality using computational modeling of chest radiographs (CXRs) for coronavirus disease 2019 (COVID-19) patients. This two-center, retrospective study analyzed 530 deidentified CXRs from 515 COVID-19 patients treated at Stony Brook University Hospital and Newark Beth Israel Medical Center between March and August 2020. Linear discriminant analysis (LDA), quadratic discriminant analysis (QDA), and random forest (RF) machine learning classifiers were trained and evaluated to predict mechanical ventilation requirement and mortality using radiomic features extracted from patients' CXRs. Deep learning (DL) approaches were also explored for the clinical outcome prediction task, and a novel radiomic embedding framework was introduced. All results are compared against radiologist grading of CXRs (zone-wise expert severity scores). Radiomic classification models had mean areas under the receiver operating characteristic curve (mAUCs) of 0.78 ± 0.05 (sensitivity = 0.72 ± 0.07, specificity = 0.72 ± 0.06) and 0.78 ± 0.06 (sensitivity = 0.70 ± 0.09, specificity = 0.73 ± 0.09), compared with expert score mAUCs of 0.75 ± 0.02 (sensitivity = 0.67 ± 0.08, specificity = 0.69 ± 0.07) and 0.79 ± 0.05 (sensitivity = 0.69 ± 0.08, specificity = 0.76 ± 0.08) for mechanical ventilation requirement and mortality prediction, respectively. Classifiers using both expert severity scores and radiomic features for mechanical ventilation (mAUC = 0.79 ± 0.04, sensitivity = 0.71 ± 0.06, specificity = 0.71 ± 0.08) and mortality (mAUC = 0.83 ± 0.04, sensitivity = 0.79 ± 0.07, specificity = 0.74 ± 0.09) demonstrated improvement over either artificial intelligence or radiologist interpretation alone. Our results also suggest instances in which the inclusion of radiomic features in DL improves model predictions over DL alone. The models proposed in this study and the prognostic information they provide might aid physician decision making and efficient resource allocation during the COVID-19 pandemic.
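The combined classifiers above concatenate radiomic features with zone-wise expert severity scores into one feature vector before training. A minimal sketch of that fusion using a random forest (one of the classifiers named in the abstract); all data here is synthetic and the feature counts are illustrative, not the study's actual dimensions:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
n_patients = 200

# Stand-in radiomic features extracted from CXRs (20 per patient).
radiomics = rng.normal(size=(n_patients, 20))
# Stand-in zone-wise expert severity scores (6 lung zones, graded 0-3).
expert_scores = rng.integers(0, 4, size=(n_patients, 6))
# Synthetic binary outcome (e.g., mechanical ventilation requirement).
outcome = (radiomics[:, 0] + expert_scores.sum(axis=1) / 6 > 1).astype(int)

# Fuse both feature families into a single design matrix.
X = np.hstack([radiomics, expert_scores])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, outcome)
risk = clf.predict_proba(X)[:, 1]  # per-patient predicted risk
```

Simple concatenation lets the forest split on either feature family, which is one plausible way a combined model can outperform radiomics or radiologist scores alone.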

4.
ArXiv ; 2021 Jul 01.
Article in English | MEDLINE | ID: mdl-32699815

ABSTRACT

We predict mechanical ventilation requirement and mortality using computational modeling of chest radiographs (CXRs) for coronavirus disease 2019 (COVID-19) patients. This two-center, retrospective study analyzed 530 deidentified CXRs from 515 COVID-19 patients treated at Stony Brook University Hospital and Newark Beth Israel Medical Center between March and August 2020. Deep learning (DL) and machine learning classifiers to predict mechanical ventilation requirement and mortality were trained and evaluated using patient CXRs. A novel radiomic embedding framework was also explored for outcome prediction. All results are compared against radiologist grading of CXRs (zone-wise expert severity scores). Radiomic and DL classification models had mean areas under the receiver operating characteristic curve (mAUCs) of 0.78 ± 0.02 and 0.81 ± 0.04, compared with expert score mAUCs of 0.75 ± 0.02 and 0.79 ± 0.05 for mechanical ventilation requirement and mortality prediction, respectively. Combined classifiers using both radiomics and expert severity scores resulted in mAUCs of 0.79 ± 0.04 and 0.83 ± 0.04 for each prediction task, demonstrating improvement over either artificial intelligence or radiologist interpretation alone. Our results also suggest instances where inclusion of radiomic features in DL improves model predictions, something that might be explored in other pathologies. The models proposed in this study and the prognostic information they provide might aid physician decision making and resource allocation during the COVID-19 pandemic.
