Visual transformer and deep CNN prediction of high-risk COVID-19 infected patients using fusion of CT images and clinical data.
Tehrani, Sara Saberi Moghadam; Zarvani, Maral; Amiri, Paria; Ghods, Zahra; Raoufi, Masoomeh; Safavi-Naini, Seyed Amir Ahmad; Soheili, Amirali; Gharib, Mohammad; Abbasi, Hamid.
Affiliation
  • Tehrani SSM; Faculty of Engineering, Alzahra University, Tehran, Iran.
  • Zarvani M; Faculty of Engineering, Alzahra University, Tehran, Iran.
  • Amiri P; University of Erlangen-Nuremberg, Bavaria, Germany.
  • Ghods Z; Faculty of Engineering, Alzahra University, Tehran, Iran.
  • Raoufi M; Department of Radiology, School of Medicine, Imam Hossein Hospital, Shahid Beheshti University of Medical Sciences, Tehran, Iran.
  • Safavi-Naini SAA; Research Institute for Gastroenterology and Liver Diseases, Shahid Beheshti University of Medical Sciences, Tehran, Iran.
  • Soheili A; School of Medicine, Shahid Beheshti University of Medical Sciences, Tehran, Iran.
  • Gharib M; Auckland City Hospital, Auckland, 1010, New Zealand.
  • Abbasi H; Auckland Bioengineering Institute, University of Auckland, Auckland, 1010, New Zealand. h.abbasi@auckland.ac.nz.
BMC Med Inform Decis Mak; 23(1): 265, 2023 Nov 17.
Article in En | MEDLINE | ID: mdl-37978393
BACKGROUND: Despite globally declining hospitalization rates and the much lower risk of COVID-19 mortality, accurate diagnosis of the infection stage and prediction of outcomes remain of clinical interest. Current advanced technology can automate this process and help identify patients at higher risk of developing severe illness. This work explores and presents deep-learning-based schemes for predicting clinical outcomes in COVID-19-infected patients, using Visual Transformers and Convolutional Neural Networks (CNNs) fed with a 3D data fusion of CT scan images and patients' clinical data.

METHODS: We report the efficiency of Video Swin Transformers and several CNN models fed with fusion datasets and with CT scans only, versus a set of conventional classifiers fed with patients' clinical data only. A relatively large clinical dataset from 380 COVID-19-diagnosed patients was used to train and test the models.

RESULTS: The 3D Video Swin Transformers fed with fusion datasets of 64 sectional CT scans + 67 clinical labels outperformed all other approaches for predicting outcomes in COVID-19-infected patients (TPR = 0.95, FPR = 0.40, F0.5 score = 0.82, AUC = 0.77, Kappa = 0.6).

CONCLUSIONS: We demonstrate how our proposed novel 3D data fusion approach, which concatenates CT scan images with patients' clinical data, can markedly improve the models' performance in predicting COVID-19 infection outcomes.

SIGNIFICANCE: The findings indicate the possibility of predicting outcome severity from patients' CT images and clinical data collected at the time of hospital admission.
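To illustrate the kind of image-plus-tabular fusion the abstract describes, the sketch below shows a minimal PyTorch-style model that concatenates features from a stack of CT slices with a vector of clinical features before a shared classification head. This is an illustrative assumption, not the authors' architecture: the small 3D-CNN backbone (standing in for the Video Swin Transformer), the layer sizes, and the late-fusion-by-concatenation step are all hypothetical choices made for the example.

```python
# Illustrative sketch: late fusion of 3D CT features with tabular clinical data.
# NOT the paper's exact method; backbone, layer sizes, and fusion strategy are assumptions.
import torch
import torch.nn as nn


class FusionOutcomeClassifier(nn.Module):
    def __init__(self, num_clinical_features: int = 67, num_classes: int = 2):
        super().__init__()
        # Small 3D CNN encoder for a single-channel CT volume (depth x height x width).
        self.ct_encoder = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),   # global pooling -> (B, 32, 1, 1, 1)
            nn.Flatten(),              # -> (B, 32)
        )
        # Simple MLP encoder for the clinical (tabular) features.
        self.clinical_encoder = nn.Sequential(
            nn.Linear(num_clinical_features, 32), nn.ReLU(),
        )
        # Classification head on the concatenated (fused) representation.
        self.head = nn.Linear(32 + 32, num_classes)

    def forward(self, ct_volume: torch.Tensor, clinical: torch.Tensor) -> torch.Tensor:
        fused = torch.cat(
            [self.ct_encoder(ct_volume), self.clinical_encoder(clinical)], dim=1
        )
        return self.head(fused)


if __name__ == "__main__":
    model = FusionOutcomeClassifier()
    ct = torch.randn(2, 1, 64, 128, 128)  # batch of 2: 64 CT slices of 128x128 each
    clin = torch.randn(2, 67)             # 67 clinical features per patient
    print(model(ct, clin).shape)          # torch.Size([2, 2])
```

In this sketch the two modalities are encoded separately and fused only at the feature level; the paper's reported pipeline instead feeds a combined 3D fusion of the 64 CT sections and 67 clinical labels into a Video Swin Transformer, so the example should be read only as a generic template for image-plus-clinical-data fusion.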

Full text: 1 Collection: 01-internacional Database: MEDLINE Main subject: COVID-19 Limits: Humans Language: En Journal: BMC Med Inform Decis Mak Journal subject: INFORMATICA MEDICA Year: 2023 Document type: Article Affiliation country: Iran Country of publication: United Kingdom
