Results 1 - 3 of 3

1.
Quant Imaging Med Surg; 13(2): 572-584, 2023 Feb 01.
Article in English | MEDLINE | ID: mdl-36819269

ABSTRACT

Background: Accurate assessment of coronavirus disease 2019 (COVID-19) lung involvement on chest radiographs plays an important role in effective management of the infection. This study aims to develop a two-step feature-merging method that integrates image features from deep learning and radiomics to differentiate COVID-19, non-COVID-19 pneumonia, and normal chest radiographs (CXR).

Methods: A deformable convolutional neural network (deformable CNN) was developed and used as a feature extractor to obtain 1,024-dimensional deep learning latent representation (DLR) features. Then, 1,069-dimensional radiomics features were extracted from the region of interest (ROI) guided by the deformable CNN's attention. The two feature sets were concatenated to generate a merged feature set for classification. For comparative experiments, the same process was applied to the DLR-only feature set to verify the effectiveness of feature concatenation.

Results: Using the merged feature set resulted in an overall average accuracy of 91.0% for three-class classification, a statistically significant improvement of 0.6% over DLR-only classification. The recall and precision for the COVID-19 class were 0.926 and 0.976, respectively. The feature-merging method significantly improved classification performance compared with using only deep learning features, regardless of the choice of classifier (P<0.0001). The F1-scores for the normal, non-COVID-19 pneumonia, and COVID-19 classes were 0.892, 0.890, and 0.950, respectively.

Conclusions: A two-step COVID-19 classification framework integrating information from both DLR and radiomics features (guided by a deep learning attention mechanism) was developed. The proposed feature-merging method improved chest radiograph classification performance compared with using deep learning features alone.
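
The concatenation step at the core of this framework can be illustrated with a minimal sketch. The feature matrices, their shapes beyond the stated 1,024- and 1,069-dimensional sizes, and the logistic-regression classifier below are illustrative assumptions; the paper's deformable CNN, radiomics extraction, and actual classifiers are not reproduced here.

```python
# Minimal sketch of merging DLR and radiomics features before classification.
# Synthetic placeholders stand in for features extracted from real CXR images.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_images = 300

# Placeholder feature matrices; in the study these would come from the
# deformable CNN (DLR) and from radiomics extraction over the attention ROI.
dlr_features = rng.normal(size=(n_images, 1024))
radiomics_features = rng.normal(size=(n_images, 1069))
labels = rng.integers(0, 3, size=n_images)  # 0=normal, 1=non-COVID pneumonia, 2=COVID-19

# Feature-merging step: concatenate the two feature sets into one vector per image.
merged = np.concatenate([dlr_features, radiomics_features], axis=1)

X_train, X_test, y_train, y_test = train_test_split(
    merged, labels, test_size=0.2, random_state=0, stratify=labels)

# Any downstream classifier can consume the merged vectors; logistic regression
# is used here only to keep the sketch self-contained.
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```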

2.
Quant Imaging Med Surg; 13(1): 394-416, 2023 Jan 01.
Article in English | MEDLINE | ID: mdl-36620146

ABSTRACT

Background: The coronavirus disease 2019 (COVID-19) pandemic led to a dramatic increase in the number of patients with pneumonia worldwide. In this study, we aimed to develop an AI-assisted multistrategy image enhancement technique for chest X-ray (CXR) images to improve the accuracy of COVID-19 classification.

Methods: Our new classification strategy consisted of 3 parts. First, an improved U-Net model with a variational encoder segmented the lung region in CXR images processed by histogram equalization. Second, a residual net (ResNet) model with multidilated-rate convolution layers was used to suppress the bone signals in the 217 lung-only CXR images; 80% of the available data were allocated for training and validation, and the remaining 20% were used for testing, yielding enhanced CXR images containing only soft-tissue information. Third, a neural network model with a residual cascade was used for super-resolution reconstruction of the low-resolution bone-suppressed CXR images, with 1,200 training and 100 testing CXR images. To evaluate the new strategy, improved visual geometry group (VGG)-16 and ResNet-18 models were used for the COVID-19 classification task on 2,767 CXR images. The accuracy of the multistrategy-enhanced CXR images was verified through comparative experiments with variously enhanced images. For quantitative verification, 8-fold cross-validation was performed on the bone suppression model. To evaluate COVID-19 classification, the CXR images obtained with the improved method were used to train the 2 classification models.

Results: Compared with other methods, the CXR images obtained with the proposed model performed better on the metrics of peak signal-to-noise ratio and root mean square error. The super-resolution, bone-suppressed CXR images obtained with the neural network model were also anatomically close to the real CXR images. Compared with the initial CXR images, the classification accuracy rates on the internal and external testing data increased by 5.09% and 12.81%, respectively, for the VGG-16 model and by 3.51% and 18.20%, respectively, for the ResNet-18 model. These results were better than those of the single-enhancement, double-enhancement, and no-enhancement CXR images.

Conclusions: The multistrategy-enhanced CXR images can help to classify COVID-19 more accurately than the other existing methods.
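
As a small illustration of the quantitative checks named in this abstract, the sketch below computes peak signal-to-noise ratio and root mean square error between a reference image and an enhanced image using plain NumPy. The synthetic arrays and the 8-bit data range are assumptions for demonstration; the study's enhancement models themselves are not reimplemented.

```python
# Sketch of the image-quality metrics (PSNR, RMSE) used to compare enhancement
# outputs against reference chest radiographs.
import numpy as np

def rmse(reference: np.ndarray, test: np.ndarray) -> float:
    """Root mean square error between two images of equal shape."""
    diff = reference.astype(np.float64) - test.astype(np.float64)
    return float(np.sqrt(np.mean(diff ** 2)))

def psnr(reference: np.ndarray, test: np.ndarray, data_range: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB for a given intensity range."""
    err = rmse(reference, test)
    return float("inf") if err == 0 else 20.0 * np.log10(data_range / err)

# Synthetic stand-ins: a "reference" image and a slightly perturbed "enhanced" image.
rng = np.random.default_rng(0)
reference = rng.integers(0, 256, size=(256, 256)).astype(np.uint8)
enhanced = np.clip(reference.astype(np.int16) + rng.integers(-5, 6, size=reference.shape),
                   0, 255).astype(np.uint8)

print(f"RMSE: {rmse(reference, enhanced):.2f}")
print(f"PSNR: {psnr(reference, enhanced):.2f} dB")
```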

3.
Quant Imaging Med Surg; 12(7): 3917-3931, 2022 Jul.
Article in English | MEDLINE | ID: mdl-35782269

ABSTRACT

Background: Coronavirus disease 2019 (COVID-19) is a pandemic disease. Fast and accurate diagnosis of COVID-19 from chest radiography may enable more efficient allocation of scarce medical resources and hence improved patient outcomes. Deep learning classification of chest radiographs may be a plausible step towards this. We hypothesize that bone suppression of chest radiographs may improve the performance of deep learning classification of COVID-19 phenomena in chest radiographs.

Methods: Two bone suppression methods (Gusarev et al. and Rajaraman et al.) were implemented. The Gusarev and Rajaraman methods were trained on 217 pairs of normal and bone-suppressed chest radiographs from the X-ray Bone Shadow Suppression dataset (https://www.kaggle.com/hmchuong/xray-bone-shadow-supression). Two classifier methods with different network architectures were implemented. Binary classifier models were trained on the public RICORD-1c and RSNA Pneumonia Challenge datasets. An external test dataset was created retrospectively from 320 COVID-19-positive patients from Queen Elizabeth Hospital (Hong Kong, China) and 518 non-COVID-19 patients from Pamela Youde Nethersole Eastern Hospital (Hong Kong, China), and was used to evaluate the effect of bone suppression on classifier performance. Classification performance, quantified by sensitivity, specificity, negative predictive value (NPV), accuracy, and area under the receiver operating characteristic curve (AUC), was compared between non-suppressed and bone-suppressed radiographs. Some of the pre-trained models used in this study are published at https://github.com/danielnflam.

Results: Bone suppression of the external test data significantly (P<0.05) improved the AUC for one classifier architecture [from 0.698 (non-suppressed) to 0.732 (Rajaraman-suppressed)]. For the other classifier architecture, suppression did not significantly (P>0.05) improve or worsen classifier performance.

Conclusions: Rajaraman suppression significantly improved classification performance in one classification architecture and did not significantly worsen performance in the other. This research could be extended to explore the impact of bone suppression on the classification of different lung pathologies and the effect of other image enhancement techniques on classifier performance.
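
The evaluation metrics listed in this abstract (sensitivity, specificity, NPV, accuracy, AUC) can all be derived from a confusion matrix and the classifier's predicted scores, as in the hedged sketch below. The labels, scores, and 0.5 decision threshold are synthetic placeholders, not the study's classifiers or its external test data.

```python
# Sketch of binary-classifier evaluation for suppressed vs. non-suppressed radiographs.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=200)  # 1 = COVID-19 positive, 0 = non-COVID-19
# Synthetic classifier probabilities loosely correlated with the true labels.
y_score = np.clip(y_true * 0.6 + rng.normal(0.2, 0.25, size=200), 0, 1)
y_pred = (y_score >= 0.5).astype(int)  # assumed decision threshold

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
npv = tn / (tn + fn)
accuracy = (tp + tn) / (tp + tn + fp + fn)
auc = roc_auc_score(y_true, y_score)

print(f"Sensitivity={sensitivity:.3f}  Specificity={specificity:.3f}  "
      f"NPV={npv:.3f}  Accuracy={accuracy:.3f}  AUC={auc:.3f}")
```

Comparing these metrics computed once on non-suppressed inputs and once on bone-suppressed inputs, with an appropriate significance test, mirrors the comparison reported in the Results above.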
