1.
Bioengineering (Basel) ; 10(5)2023 May 05.
Article in English | MEDLINE | ID: mdl-37237626

ABSTRACT

The COVID-19 pandemic has posed unprecedented challenges to global healthcare systems, highlighting the need for accurate and timely risk prediction models that can prioritize patient care and allocate resources effectively. This study presents DeepCOVID-Fuse, a deep learning fusion model that predicts risk levels in patients with confirmed COVID-19 by combining chest radiographs (CXRs) and clinical variables. The study collected initial CXRs, clinical variables, and outcomes (i.e., mortality, intubation, hospital length of stay, intensive care unit (ICU) admission) from February to April 2020, with risk levels determined by the outcomes. The fusion model was trained on 1657 patients (age: 58.30 ± 17.74; female: 807) and validated on 428 patients (56.41 ± 17.03; 190) from the local healthcare system, and tested on 439 patients (56.51 ± 17.78; 205) from a different holdout hospital. The performance of well-trained fusion models on full or partial modalities was compared using DeLong and McNemar tests. Results show that DeepCOVID-Fuse significantly (p < 0.05) outperformed models trained only on CXRs or clinical variables, with an accuracy of 0.658 and an area under the receiver operating characteristic curve (AUC) of 0.842. The fusion model achieves good outcome predictions even when only one of the modalities is used in testing, demonstrating its ability to learn better feature representations across different modalities during training.
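The abstract's key claim is that the fusion model still predicts well when a modality is missing at test time. A minimal late-fusion sketch illustrating that property is shown below; the encoder dimensions, weights, and the zero-contribution treatment of a missing branch are illustrative assumptions, not the published DeepCOVID-Fuse architecture.

```python
import numpy as np

def softmax(z):
    """Row-wise softmax over risk-level logits."""
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

class LateFusionClassifier:
    """Toy late-fusion risk classifier: each modality contributes
    additively to the class logits, so either branch may be omitted."""

    def __init__(self, d_img, d_clin, n_classes, seed=0):
        rng = np.random.default_rng(seed)
        self.W_img = 0.1 * rng.normal(size=(d_img, n_classes))
        self.W_clin = 0.1 * rng.normal(size=(d_clin, n_classes))

    def predict_proba(self, x_img=None, x_clin=None):
        logits = 0.0
        if x_img is not None:          # image-branch features (e.g., CNN embedding)
            logits = logits + x_img @ self.W_img
        if x_clin is not None:         # clinical-variable features
            logits = logits + x_clin @ self.W_clin
        return softmax(np.atleast_2d(logits))

model = LateFusionClassifier(d_img=8, d_clin=4, n_classes=3)
p_full = model.predict_proba(np.ones((2, 8)), np.ones((2, 4)))   # both modalities
p_clin = model.predict_proba(x_clin=np.ones((2, 4)))             # clinical only
```

In a trained model the two weight matrices would be learned jointly, which is what lets the shared representation remain useful when only one input is available.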

2.
Microsc Microanal ; 27(4): 878-888, 2021 08.
Article in English | MEDLINE | ID: mdl-34108070

ABSTRACT

A profound characteristic of field cancerization is alterations in chromatin packing. This study aimed to quantify these alterations using electron microscopy image analysis of buccal mucosa cells of laryngeal, esophageal, and lung cancer patients. Analysis was done on normal-appearing mucosa, believed to be within the cancerization field, and not on the tumor itself. Large-scale electron microscopy (nanotomy) images were acquired of cancer patients and controls. Within the nuclei, the chromatin packing of euchromatin and heterochromatin was characterized. Furthermore, the chromatin organization was quantified through chromatin packing density scaling. A significant difference was found between the cancer and control groups in the chromatin packing density scaling parameter for length scales below the optical diffraction limit (200 nm) in both the euchromatin (p = 0.002) and the heterochromatin (p = 0.006). The chromatin packing scaling analysis also indicated that the chromatin organization of cancer patients deviated significantly from the control group. These alterations might allow for novel strategies for cancer risk stratification and diagnosis with high sensitivity. This could aid clinicians in personalizing screening strategies for high-risk patients and follow-up strategies for treated cancer patients.


Subject(s)
Chromatin , Mouth Mucosa , Mouth Neoplasms , Euchromatin , Heterochromatin , Humans , Microscopy, Electron , Mouth Mucosa/cytology , Mouth Neoplasms/diagnosis
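The scaling parameter in the abstract above comes from fitting a power law to a packing-density statistic over length scale. A generic sketch of such a fit is given here; the specific statistic, length scales, and preprocessing used in the study are not specified in the abstract, so this only shows the log-log regression step common to packing density scaling analyses.

```python
import numpy as np

def scaling_exponent(r, s):
    """Estimate beta in S(r) ~ r**beta by linear regression
    on a log-log scale. r: length scales (nm); s: packing statistic."""
    beta, _intercept = np.polyfit(np.log(r), np.log(s), 1)
    return beta

# Synthetic example: a statistic that scales as r^0.6 below ~200 nm
r = np.array([20.0, 50.0, 100.0, 150.0])
s = 2.0 * r ** 0.6
beta = scaling_exponent(r, s)
```

Group comparisons (cancer vs. control) would then be run on the fitted exponents, one per nucleus or per subject.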
3.
Radiology ; 299(1): E167-E176, 2021 04.
Article in English | MEDLINE | ID: mdl-33231531

ABSTRACT

Background There are characteristic findings of coronavirus disease 2019 (COVID-19) on chest images. An artificial intelligence (AI) algorithm to detect COVID-19 on chest radiographs might be useful for triage or infection control within a hospital setting, but prior reports have been limited by small data sets, poor data quality, or both. Purpose To present DeepCOVID-XR, a deep learning AI algorithm to detect COVID-19 on chest radiographs that was trained and tested on a large clinical data set. Materials and Methods DeepCOVID-XR is an ensemble of convolutional neural networks developed to detect COVID-19 on frontal chest radiographs, with reverse-transcription polymerase chain reaction test results as the reference standard. The algorithm was trained and validated on 14 788 images (4253 positive for COVID-19) from sites across the Northwestern Memorial Health Care System from February 2020 to April 2020 and was then tested on 2214 images (1192 positive for COVID-19) from a single hold-out institution. Performance of the algorithm was compared with interpretations from five experienced thoracic radiologists on 300 random test images using the McNemar test for sensitivity and specificity and the DeLong test for the area under the receiver operating characteristic curve (AUC). Results A total of 5853 patients (mean age, 58 years ± 19 [standard deviation]; 3101 women) were evaluated across data sets. For the entire test set, accuracy of DeepCOVID-XR was 83%, with an AUC of 0.90. For 300 random test images (134 positive for COVID-19), accuracy of DeepCOVID-XR was 82%, compared with that of individual radiologists (range, 76%-81%) and the consensus of all five radiologists (81%). DeepCOVID-XR had a significantly higher sensitivity (71%) than one radiologist (60%, P < .001) and significantly higher specificity (92%) than two radiologists (75%, P < .001; 84%, P = .009). AUC of DeepCOVID-XR was 0.88 compared with the consensus AUC of 0.85 (P = .13 for comparison). With consensus interpretation as the reference standard, the AUC of DeepCOVID-XR was 0.95 (95% CI: 0.92, 0.98). Conclusion DeepCOVID-XR, an artificial intelligence algorithm, detected coronavirus disease 2019 on chest radiographs with a performance similar to that of experienced thoracic radiologists in consensus. © RSNA, 2020 Supplemental material is available for this article. See also the editorial by van Ginneken in this issue.


Subject(s)
Artificial Intelligence , COVID-19/diagnostic imaging , Lung/diagnostic imaging , Radiographic Image Interpretation, Computer-Assisted/methods , Radiography, Thoracic/methods , Algorithms , Datasets as Topic , Female , Humans , Male , Middle Aged , SARS-CoV-2 , Sensitivity and Specificity , United States
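The sensitivity and specificity comparisons in the study above rely on the McNemar test, which uses only the discordant pairs (cases where exactly one of the two readers is correct). A minimal stdlib implementation of the chi-square version is sketched below; the counts in the usage example are made up for illustration and are not the study's data.

```python
from math import erfc, sqrt

def mcnemar(b, c, correction=True):
    """McNemar chi-square test on paired classifications.
    b: pairs where reader A is correct and reader B is wrong;
    c: pairs where reader B is correct and reader A is wrong.
    Returns (chi-square statistic, two-sided p-value, 1 df).
    For chi-square with 1 df, p = erfc(sqrt(x / 2))."""
    num = (abs(b - c) - 1) ** 2 if correction else (b - c) ** 2
    chi2 = num / (b + c)
    p = erfc(sqrt(chi2 / 2.0))
    return chi2, p

# Hypothetical discordant counts: AI correct/radiologist wrong on 30
# images, the reverse on 10
chi2, p = mcnemar(30, 10, correction=False)
```

For small discordant counts an exact binomial version is preferred over the chi-square approximation; the continuity correction (`correction=True`) is a common middle ground.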
4.
Shape Med Imaging (2020) ; 12474: 95-107, 2020 Oct.
Article in English | MEDLINE | ID: mdl-33283214

ABSTRACT

We propose a mesh-based technique to aid in the classification of Alzheimer's disease dementia (ADD) using mesh representations of the cortex and subcortical structures. Deep learning methods for classification tasks that utilize structural neuroimaging often require optimizing an extensive number of learnable parameters. Frequently, these approaches to automated medical diagnosis also lack visual interpretability for the areas in the brain involved in making a diagnosis. This work: (a) analyzes brain shape using surface information of the cortex and subcortical structures, (b) proposes a residual learning framework for state-of-the-art graph convolutional networks which offers a significant reduction in learnable parameters, and (c) offers visual interpretability of the network via class-specific gradient information that localizes important regions of interest in our inputs. With our proposed method leveraging the use of cortical and subcortical surface information, we outperform other machine learning methods with a 96.35% testing accuracy for the ADD vs. healthy control problem. We confirm the validity of our model by observing its performance in a 25-trial Monte Carlo cross-validation. The generated visualization maps in our study show correspondences with current knowledge regarding the structural localization of pathological changes in the brain associated with dementia of the Alzheimer's type.
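The residual graph-convolution building block described in the abstract above can be sketched in a few lines: node features on the mesh are smoothed by a normalized adjacency, transformed, and added back to the input. The specific normalization (symmetric, with self-loops) and the identity-sized weight matrix here are common conventions assumed for illustration, not the paper's exact layer.

```python
import numpy as np

def normalize_adj(A):
    """Symmetric normalization with self-loops:
    D^{-1/2} (A + I) D^{-1/2}, a standard GCN propagation matrix."""
    A_hat = A + np.eye(len(A))
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return d_inv_sqrt @ A_hat @ d_inv_sqrt

def res_gcn_layer(H, A_norm, W):
    """One residual graph-convolution block: H + ReLU(A_norm H W).
    W must be square so the residual shapes match."""
    return H + np.maximum(A_norm @ H @ W, 0.0)

# Tiny example: a 4-node ring graph (each mesh vertex has two neighbors)
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
H = np.ones((4, 3))          # constant 3-dim features per vertex
W = 0.5 * np.eye(3)          # toy weight matrix
out = res_gcn_layer(H, normalize_adj(A), W)
```

Because each block only passes features through a small square weight matrix plus a skip connection, stacking such blocks keeps the parameter count low, which is the reduction the abstract emphasizes.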
