Results 1 - 20 of 278
1.
Methods ; 226: 89-101, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38642628

ABSTRACT

Obtaining an accurate segmentation of pulmonary nodules in computed tomography (CT) images is challenging, due to (1) the heterogeneous nature of lung nodules and (2) the comparable visual characteristics of nodules and their surroundings. A robust multi-scale feature extraction mechanism that can effectively obtain multi-scale representations at a granular level can improve segmentation accuracy. UNet, the most commonly used network in lung nodule segmentation, its variants, and other image segmentation methods lack such a robust feature extraction mechanism. In this study, we propose a multi-stride residual 3D UNet (MRUNet-3D) to improve the segmentation accuracy of lung nodules in CT images. It incorporates a multi-stride Res2Net block (MSR), which replaces the simple sequence of convolution layers in each encoder stage to effectively extract multi-scale features at a granular level from different receptive fields and resolutions while conserving the strengths of 3D UNet. The proposed method has been extensively evaluated on the publicly available LUNA16 dataset. Experimental results show that it achieves competitive segmentation performance, with an average Dice similarity coefficient of 83.47% and an average surface distance of 0.35 mm on the dataset. More notably, our method has proven robust to the heterogeneity of lung nodules and performs better at segmenting small lung nodules. Ablation studies show that the proposed MSR and RFIA modules are fundamental to the performance of the proposed model.
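The Dice similarity coefficient reported above can be computed from two binary masks in a few lines. This is a minimal sketch, not the authors' implementation; the toy masks are invented for illustration:

```python
import numpy as np

def dice_coefficient(pred, target):
    """Dice similarity coefficient between two binary masks."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    return 2.0 * intersection / denom if denom else 1.0

# Toy 2D "nodule" masks: predicted vs. ground truth, each 9 pixels, 4 overlapping
pred = np.zeros((5, 5), dtype=bool); pred[1:4, 1:4] = True
gt = np.zeros((5, 5), dtype=bool); gt[2:5, 2:5] = True
print(round(dice_coefficient(pred, gt), 4))  # 2*4/(9+9) = 0.4444
```

The average surface distance additionally requires extracting mask boundaries and nearest-neighbor distances, which is omitted here.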


Subject(s)
Imaging, Three-Dimensional , Lung Neoplasms , Tomography, X-Ray Computed , Humans , Tomography, X-Ray Computed/methods , Lung Neoplasms/diagnostic imaging , Imaging, Three-Dimensional/methods , Solitary Pulmonary Nodule/diagnostic imaging , Algorithms , Radiographic Image Interpretation, Computer-Assisted/methods , Lung/diagnostic imaging
2.
BMC Med Imaging ; 24(1): 137, 2024 Jun 06.
Article in English | MEDLINE | ID: mdl-38844854

ABSTRACT

BACKGROUND: This study investigated whether the ComBat compensation method can remove the variability of radiomic features extracted from different scanners, and examined its impact on the predictive performance of subsequent machine learning models. MATERIALS AND METHODS: 135 CT images of Credence Cartridge Radiomics phantoms were collected and screened from three scanners manufactured by Siemens, Philips, and GE. 100 radiomic features were extracted, and 20 were selected using the Lasso regression method. The radiomic features extracted from the rubber- and resin-filled regions of the cartridges were labeled as different categories for evaluating the performance of the machine learning models. The features were grouped by scanner manufacturer and randomly divided into training and test sets at a ratio of 8:2. Five machine learning models (Lasso, logistic regression, random forest, support vector machine, neural network) were employed to evaluate the impact of ComBat on radiomic features. Variability among radiomic features was assessed using analysis of variance (ANOVA) and principal component analysis (PCA). Accuracy, precision, recall, and area under the receiver operating characteristic curve (AUC) were used as evaluation metrics for model classification. RESULTS: The PCA and ANOVA results show that the variability between scanner manufacturers was removed from the radiomic features (P > 0.05). After harmonization with the ComBat algorithm, the distributions of radiomic features were aligned in location and scale. The classification performance of the machine learning models improved, with the random forest model showing the largest gain: its AUC increased from 0.88 to 0.92. CONCLUSIONS: The ComBat algorithm reduced the variability of radiomic features from different scanners. In this phantom CT dataset, the classification performance of the machine learning models appears to have improved after ComBat harmonization. However, further investigation and validation are required to fully understand ComBat's impact on radiomic features in medical imaging.
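The location-and-scale alignment that harmonization performs can be illustrated with a deliberately simplified per-scanner standardization. This sketch omits ComBat's empirical-Bayes shrinkage of the batch parameters; all data below are synthetic:

```python
import numpy as np

def align_location_scale(features, batch):
    """Simplified per-scanner location/scale alignment (NOT full
    empirical-Bayes ComBat): standardize each batch, then restore
    the pooled mean and standard deviation."""
    features = np.asarray(features, dtype=float)
    batch = np.asarray(batch)
    pooled_mean, pooled_std = features.mean(axis=0), features.std(axis=0)
    out = np.empty_like(features)
    for b in np.unique(batch):
        idx = batch == b
        m, s = features[idx].mean(axis=0), features[idx].std(axis=0)
        out[idx] = (features[idx] - m) / np.where(s == 0, 1, s) * pooled_std + pooled_mean
    return out

rng = np.random.default_rng(0)
# Two "scanners" producing shifted feature distributions
f = np.concatenate([rng.normal(0, 1, (50, 3)), rng.normal(5, 2, (50, 3))])
b = np.array([0] * 50 + [1] * 50)
aligned = align_location_scale(f, b)
# After alignment the per-scanner means coincide
print(np.allclose(aligned[b == 0].mean(0), aligned[b == 1].mean(0)))  # True
```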


Subject(s)
Machine Learning , Phantoms, Imaging , Humans , Tomography, X-Ray Computed , Tomography Scanners, X-Ray Computed , Principal Component Analysis , Neural Networks, Computer , Algorithms , Radiomics
3.
Article in English | MEDLINE | ID: mdl-39107903

ABSTRACT

BACKGROUND: Sinusitis is a commonly encountered clinical condition that imposes a considerable burden on healthcare systems. A significant number of maxillary sinus opacifications are diagnosed as sinusitis, often without precise differentiation between cystic formations and inflammatory sinusitis, resulting in inappropriate clinical treatment. This study aims to improve diagnostic accuracy by investigating the feasibility of differentiating maxillary sinusitis, retention cysts, and normal sinuses. METHODS: We developed a deep learning-based automatic detection model to diagnose maxillary sinusitis using ostiomeatal unit computed tomography images. From 1080 randomly selected coronal-view CT images containing 2158 maxillary sinuses, a dataset of maxillary sinus lesions was assembled comprising 1138 normal sinuses, 366 cysts, and 654 sinusitis cases based on radiographic findings, and was divided into training (n = 648 CT images), validation (n = 216), and test (n = 216) sets. We utilized a You Only Look Once (YOLO)-based model for object detection, enhanced by transfer learning. To address the insufficiency of training data, various data augmentation techniques were adopted, improving the model's robustness. RESULTS: The trained You Only Look Once version 8 nano (YOLOv8n) model achieved an overall precision of 97.1%, with the following class precisions on the test set: normal = 96.9%, cyst = 95.2%, and sinusitis = 99.2%. The average F1 score was 95.4%; the F1 score was highest for normal sinuses, followed by sinusitis and then cysts. When performance was evaluated by difficulty level, precision decreased to 92.4% on a challenging test subset. CONCLUSIONS: The developed model is feasible for assisting clinicians in screening maxillary sinusitis lesions.

4.
J Xray Sci Technol ; 32(1): 123-139, 2024.
Article in English | MEDLINE | ID: mdl-37458060

ABSTRACT

BACKGROUND: By providing both functional and anatomical information from a single scan, hybrid imaging technologies such as PET/CT and PET/MRI are gaining popularity in the medical imaging industry. The SUVmax, a semi-quantitative statistic used to analyse PET and PET/CT images, is commonly used to evaluate lesions, while the median value (SUVmed) receives less attention, owing to disagreements about what defines a lesion. OBJECTIVE: This study aims to build an image processing technique that automatically detects and isolates lesions in PET/CT images and measures and assesses the SUVmed. METHODS: The image processing method uses mathematical morphology and the crescent region to separate the images into their respective lesions. A total of 18 lesion images were evaluated in this research. RESULTS: The findings reveal that both the SUVmax and the SUVmed satisfy the threshold for most lesion types. However, in six instances, the SUVmax and SUVmed values were discordant. CONCLUSION: The new information revealed by this study needs further investigation to determine whether it has practical value in diagnosing and monitoring lesions. However, the results suggest that SUVmed should receive more attention in the evaluation of lesions in PET and CT images.
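The contrast between the two statistics compared in this study can be seen in a few lines: a single hot voxel dominates SUVmax, while the median stays robust. The voxel values below are hypothetical:

```python
import numpy as np

def suv_stats(suv_values):
    """SUVmax and SUVmed over the voxels of a segmented lesion."""
    v = np.asarray(suv_values, dtype=float)
    return float(v.max()), float(np.median(v))

# Hypothetical lesion voxel SUVs: one hot voxel drives SUVmax far from SUVmed
lesion = np.array([2.1, 2.4, 2.6, 3.0, 9.8])
suv_max, suv_med = suv_stats(lesion)
print(suv_max, suv_med)  # 9.8 2.6
```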


Subject(s)
Positron Emission Tomography Computed Tomography , Tomography, X-Ray Computed , Tomography, X-Ray Computed/methods , Positron-Emission Tomography/methods , Multimodal Imaging/methods , Image Processing, Computer-Assisted , Fluorodeoxyglucose F18
5.
Zhongguo Yi Liao Qi Xie Za Zhi ; 48(4): 355-360, 2024 Jul 30.
Article in Chinese | MEDLINE | ID: mdl-39155245

ABSTRACT

In response to the issue that traditional lung nodule detection models cannot be dynamically optimized and updated as new data arrive, a new lung nodule detection model, the task incremental meta-learning model (TIMLM), is proposed. The model comprises two loops: the inner loop imposes incremental-learning regularization update constraints, while the outer loop employs a meta-update strategy that samples old and new knowledge and learns a set of generalized parameters adapting to both. While changing the main structure of the model as little as possible, it preserves previously learned knowledge. Experimental verification on a publicly available lung dataset showed that, compared with traditional deep network models and mainstream incremental models, TIMLM greatly improves accuracy, sensitivity, and other indicators, demonstrating good continual-learning and anti-forgetting capabilities.


Subject(s)
Lung Neoplasms , Humans , Lung Neoplasms/diagnostic imaging , Algorithms , Lung/diagnostic imaging , Machine Learning , Solitary Pulmonary Nodule/diagnostic imaging
6.
Funct Integr Genomics ; 23(2): 88, 2023 Mar 18.
Article in English | MEDLINE | ID: mdl-36933049

ABSTRACT

Metabolic reprogramming is essential for establishing the tumor microenvironment (TME). Glutamine has been implicated in cancer metabolism, but its role in clear cell renal cell carcinoma (ccRCC) remains unknown. Transcriptome data of patients with ccRCC and single-cell RNA sequencing (scRNA-seq) data were obtained from The Cancer Genome Atlas (TCGA; 539 ccRCC samples and 59 normal samples) and GSE152938 (5 ccRCC samples). Differentially expressed genes related to glutamine metabolism (GRGs) were obtained from the MSigDB database. Consensus cluster analysis distinguished metabolism-related ccRCC subtypes, and LASSO-Cox regression analysis was used to construct a metabolism-related prognostic model. The ssGSEA and ESTIMATE algorithms evaluated the level of immune cell infiltration in the TME, and the immunotherapy sensitivity score was obtained from TIDE. Cell-cell communication analysis was used to observe the distribution and effects of the target genes in the cell subsets. An imaging genomics model was constructed using imaging feature extraction and a machine learning algorithm. Results: Fourteen GRGs were identified. Overall survival and progression-free survival rates were lower in metabolic cluster 2 than in cluster 1. The matrix/ESTIMATE/immune scores decreased in cluster 1, while tumor purity increased in cluster 2. Immune cells were more active in the high-risk group, in which CD8+ T cells, follicular helper T cells, Th1 cells, and Th2 cells were significantly higher than in the low-risk group. The expression levels of immune checkpoints also differed significantly between the two groups. In the single-cell analysis, RIMKL mainly appeared in epithelial cells, whereas ARHGAP11B was sparsely distributed. The imaging genomics model proved effective in aiding clinical decisions. Glutamine metabolism plays a crucial role in the formation of immune TMEs in ccRCC; it is effective for differentiating risk and predicting survival in patients with ccRCC. Imaging features can be used as new biomarkers for predicting response to ccRCC immunotherapy.


Subject(s)
Carcinoma, Renal Cell , Kidney Neoplasms , Humans , Carcinoma, Renal Cell/diagnostic imaging , Carcinoma, Renal Cell/genetics , Glutamine , Kidney Neoplasms/diagnostic imaging , Kidney Neoplasms/genetics , Sequence Analysis, RNA , Tomography, X-Ray Computed , Tumor Microenvironment , GTPase-Activating Proteins
7.
J Microsc ; 291(2): 145-155, 2023 08.
Article in English | MEDLINE | ID: mdl-37155344

ABSTRACT

Characterising the microstructure of foams is an important task for improving foam manufacturing processes and building numerical foam models. This study proposed a method for measuring the thickness of individual cell walls of closed-cell foams in micro-CT images. It comprises: (1) a distance transform on the CT images to obtain thickness information for the cell walls; (2) a watershed transform on the distance matrix to locate the midlines of the cell walls; (3) identifying the intersections of the midlines by examining how many regions each pixel on a midline connects with; (4) disconnecting and numbering the midlines; (5) extracting the distance values of the pixels on the midlines (or midplanes); and (6) calculating the thickness of individual cell walls by multiplying the extracted distance values by two. Using this method, the cell wall thickness of a polymeric closed-cell foam was measured. Cell wall thickness measured in 2D images showed larger average values (around 1.5 times) and larger dispersion than thickness measured in volumetric images.
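The core idea of the measurement, that the distance transform evaluated on a wall midline equals half the wall thickness, can be sketched as follows. The watershed-based midline extraction is omitted; for this straight synthetic wall the maximum distance value is the midline value:

```python
import numpy as np
from scipy import ndimage

# Synthetic binary image: a vertical "cell wall" 4 pixels thick
wall = np.zeros((11, 11), dtype=bool)
wall[:, 4:8] = True

# Distance of each wall pixel to the nearest background pixel
dist = ndimage.distance_transform_edt(wall)

# On the wall midline the distance is half the wall thickness,
# so thickness = 2 * (distance value on the midline)
thickness = 2 * dist.max()
print(thickness)  # 4.0
```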


Subject(s)
Polymers , X-Ray Microtomography/methods
8.
BMC Med Imaging ; 23(1): 124, 2023 09 12.
Article in English | MEDLINE | ID: mdl-37700250

ABSTRACT

BACKGROUND: Brain extraction is an essential prerequisite for the automated diagnosis of intracranial lesions and determines, to a certain extent, the accuracy of subsequent lesion recognition, localization, and segmentation. Segmentation using a fully convolutional neural network (FCN) yields high accuracy but a relatively slow extraction speed. METHODS: This paper proposes an integrated algorithm, FABEM, to address these issues. The method first uses threshold segmentation, morphological closing, a convolutional neural network (CNN), and image filling to generate a candidate mask. It then counts the connected regions of the mask. If there is exactly one connected region, the extraction is completed by multiplying the mask directly with the original image. Otherwise, for images with a single-region brain distribution, the mask is further refined using the region growing method; for images with a multi-region brain distribution, Deeplabv3+ is used to adjust the mask. Finally, the mask is multiplied with the original image to complete the extraction. RESULTS: The algorithm and five FCN models were tested on 24 datasets containing different lesions. The algorithm achieved MPA = 0.9968, MIoU = 0.9936, and MBF = 0.9963, comparable to Deeplabv3+, but with a much faster extraction speed: it completes the brain extraction of a head CT image in about 0.43 s, about 3.8 times faster than Deeplabv3+. CONCLUSION: This method achieves accurate brain extraction from head CT images more quickly, creating a good basis for subsequent brain volume measurement and feature extraction of intracranial lesions.
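The branch on the number of connected regions can be sketched with `scipy.ndimage.label`. This is an illustrative reconstruction of the decision step, not the published FABEM code:

```python
import numpy as np
from scipy import ndimage

def needs_refinement(mask):
    """FABEM-style branch: if the mask is a single connected region,
    it can be multiplied directly with the original image; otherwise
    it must be refined (region growing or Deeplabv3+)."""
    _, n_regions = ndimage.label(mask)
    return n_regions != 1

single = np.array([[0, 1, 1],
                   [0, 1, 1],
                   [0, 0, 0]])
split = np.array([[1, 0, 1],
                  [1, 0, 1],
                  [0, 0, 0]])
print(needs_refinement(single), needs_refinement(split))  # False True
```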


Subject(s)
Algorithms , Brain , Humans , Brain/diagnostic imaging , Neural Networks, Computer , Tomography, X-Ray Computed
9.
Chemometr Intell Lab Syst ; 236: 104799, 2023 May 15.
Article in English | MEDLINE | ID: mdl-36883063

ABSTRACT

The pandemic caused by the coronavirus disease 2019 (COVID-19) has continuously wreaked havoc on human health. Computer-aided diagnosis (CAD) systems based on chest computed tomography (CT) have become a popular option for COVID-19 diagnosis. However, because data annotation in the medical field is expensive, unannotated data far outnumber annotated data, while a highly accurate CAD system typically requires a large amount of labeled training data. To solve this problem, this paper presents an automated and accurate COVID-19 diagnosis system that uses few labeled CT images. The overall framework of the system is based on self-supervised contrastive learning (SSCL). On top of this framework, our enhancements can be summarized as follows. 1) We integrate a two-dimensional discrete wavelet transform with contrastive learning to make full use of the image features. 2) We use the recently proposed COVID-Net as the encoder, redesigned to target the specificity of the task and improve learning efficiency. 3) A new pretraining strategy based on contrastive learning is applied for broader generalization ability. 4) An additional auxiliary task is added to promote classification performance. The final system attained 93.55%, 91.59%, 96.92%, and 94.18% for accuracy, recall, precision, and F1-score, respectively. Comparison with existing schemes demonstrates the performance enhancement and superiority of the proposed system.

10.
J Appl Clin Med Phys ; 24(6): e13996, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37082799

ABSTRACT

To address the difficulties of lumbar vertebrae segmentation in computed tomography (CT) images, we propose an automatic lumbar vertebrae segmentation method based on deep learning. The method comprises two parts: lumbar vertebra localization and lumbar vertebrae segmentation. First, we propose a UNet-based lumbar spine localization network that directly locates the lumbar spine in the image. Then, we propose a three-dimensional XUnet segmentation method to achieve automatic lumbar vertebrae segmentation. The proposed method was validated on lumbar spine CT images from the public VerSe 2020 dataset and our hospital's dataset. Through qualitative comparison and quantitative analysis, the experimental results show that the proposed method achieves good lumbar vertebrae segmentation performance and can be further applied to the detection of spinal anomalies and surgical treatment planning.


Subject(s)
Deep Learning , Lumbar Vertebrae , Humans , Lumbar Vertebrae/diagnostic imaging , Algorithms , Spine , Tomography, X-Ray Computed/methods , Hospitals , Image Processing, Computer-Assisted/methods
11.
Sensors (Basel) ; 23(17)2023 Sep 01.
Article in English | MEDLINE | ID: mdl-37688038

ABSTRACT

Segmenting the liver and liver tumors in computed tomography (CT) images is an important step toward quantifiable biomarkers for computer-aided decision-making and precise medical diagnosis. Radiologists and specialized physicians use CT images to diagnose and classify liver organs and tumors, but because other internal organs such as the heart, spleen, stomach, and kidneys have similar shape, texture, and intensity values, they complicate visual recognition of the liver and the delineation of tumors. Furthermore, visual identification of liver tumors is time-consuming, complicated, and error-prone, and incorrect diagnosis and segmentation can endanger the patient's life. Many automatic and semi-automatic methods based on machine learning algorithms have recently been suggested for liver recognition and tumor segmentation, but difficulties remain due to poor recognition precision, low speed, and a lack of dependability. This paper presents a novel deep learning-based technique for segmenting liver tumors and identifying the liver in CT images. Based on the LiTS17 database, the suggested technique comprises four Chebyshev graph convolution layers and a fully connected layer that can accurately segment the liver and liver tumors. The accuracy, Dice coefficient, mean IoU, sensitivity, precision, and recall obtained with the proposed method on the LiTS17 dataset are around 99.1%, 91.1%, 90.8%, 99.4%, 99.4%, and 91.2%, respectively. In addition, the effectiveness of the proposed method was evaluated in a noisy environment, and the network withstood a wide range of signal-to-noise ratios (SNRs); at SNR = -4 dB, the accuracy of liver segmentation remained around 90%. The proposed model obtained satisfactory and favorable results compared to previous research, and is expected to assist radiologists and specialist doctors in the near future.


Subject(s)
Liver Neoplasms , Humans , Liver Neoplasms/diagnostic imaging , Kidney , Algorithms , Computer Systems
12.
Sensors (Basel) ; 23(3)2023 Jan 25.
Article in English | MEDLINE | ID: mdl-36772394

ABSTRACT

The 2019 coronavirus disease (COVID-19) has rapidly spread across the globe. It is crucial to identify positive cases as rapidly as possible to provide appropriate treatment for patients and prevent the pandemic from spreading further. Both chest X-ray and computed tomography (CT) images are capable of accurately diagnosing COVID-19. To distinguish lung illnesses (i.e., COVID-19 and pneumonia) from normal cases using chest X-ray and CT images, we combined convolutional neural network (CNN) and recurrent neural network (RNN) models by replacing the fully connected layers of the CNN with an RNN. In this framework, the CNN extracts features, and the RNN models dependencies and performs classification based on the extracted features. The CNN models VGG19, ResNet152V2, and DenseNet121 were combined with the long short-term memory (LSTM) and gated recurrent unit (GRU) RNN models, which are convenient to develop because all of these networks are available on many platforms. The proposed method was evaluated using a large dataset totaling 16,210 X-ray and CT images (5252 COVID-19, 6154 pneumonia, and 4804 normal images) taken from several databases, with various image sizes, brightness levels, and viewing angles. Image quality was enhanced via normalization, gamma correction, and contrast-limited adaptive histogram equalization. The ResNet152V2 with GRU architecture performed best, with an accuracy of 93.37%, an F1 score of 93.54%, a precision of 93.73%, and a recall of 93.47%. The experimental results show that the proposed method is highly effective in distinguishing lung diseases. Furthermore, both CT and X-ray images can be used as input for classification, allowing for the rapid and easy detection of COVID-19.


Subject(s)
COVID-19 , Deep Learning , Pneumonia , Humans , COVID-19/diagnostic imaging , SARS-CoV-2 , X-Rays , Neural Networks, Computer , Tomography, X-Ray Computed/methods , COVID-19 Testing
13.
J Digit Imaging ; 36(5): 2138-2147, 2023 10.
Article in English | MEDLINE | ID: mdl-37407842

ABSTRACT

This study aimed to develop a deep learning-based model for detecting rib fractures on chest X-rays and to evaluate its performance in a multicenter study. Chest digital radiography (DR) images from 18,631 subjects were used for the training, testing, and validation of the deep learning fracture detection model. We first built a pretrained model using SimCLR, a simple framework for contrastive learning of visual representations, with the training set. SimCLR was then used as the backbone of a fully convolutional one-stage (FCOS) object detection network to identify rib fractures on chest X-ray images. The detection performance for four different types of rib fractures was evaluated on the testing set: 127 images from Data-CZ and 109 images from Data-CH, annotated for the four fracture types. For Data-CZ, the sensitivities of the detection model with no pretraining, ImageNet pretraining, and DR pretraining were 0.465, 0.735, and 0.822, respectively, with an average of five false positives per scan in all cases. For the Data-CH test set, the sensitivities of the three pretraining methods were 0.403, 0.655, and 0.748. Among the four fracture types, the detection model performed best on displaced fractures, with sensitivities of 0.873 and 0.774 for the Data-CZ and Data-CH test sets, respectively, at five false positives per scan, followed by nondisplaced fractures, buckle fractures, and old fractures. A pretrained model can significantly improve the performance of deep learning-based rib fracture detection on X-ray images, reducing missed diagnoses and improving diagnostic efficacy.
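The pair of numbers reported throughout this abstract, sensitivity at a fixed false-positive budget per scan, can be computed as follows. The counts below are hypothetical, not the study's:

```python
def detection_metrics(true_positives, false_negatives, false_positives, n_scans):
    """Per-detector sensitivity and false-positive rate per scan."""
    sensitivity = true_positives / (true_positives + false_negatives)
    fp_per_scan = false_positives / n_scans
    return sensitivity, fp_per_scan

# Hypothetical counts: 205 of 266 fractures found, 12 FPs across 60 scans
sens, fp = detection_metrics(true_positives=205, false_negatives=61,
                             false_positives=12, n_scans=60)
print(round(sens, 3), round(fp, 3))  # 0.771 0.2
```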


Subject(s)
Rib Fractures , Humans , Rib Fractures/diagnostic imaging , Tomography, X-Ray Computed/methods , X-Rays , Radiography , Retrospective Studies
14.
J Digit Imaging ; 36(5): 2015-2024, 2023 10.
Article in English | MEDLINE | ID: mdl-37268842

ABSTRACT

This paper aims to develop a prediction model that integrates clinical, radiomics, and deep features using transfer learning to stratify thymomas into high and low risk. Our study enrolled 150 patients with thymoma (76 low-risk and 74 high-risk) who underwent surgical resection and were pathologically confirmed at Shengjing Hospital of China Medical University from January 2018 to December 2020. The training cohort consisted of 120 patients (80%) and the test cohort of 30 patients (20%). 2590 radiomics features and 192 deep features were extracted from the non-enhanced, arterial, and venous phase CT images, and ANOVA, the Pearson correlation coefficient, PCA, and LASSO were used to select the most significant features. A fusion model integrating clinical, radiomics, and deep features was developed with SVM classifiers to predict the risk level of thymoma, and accuracy, sensitivity, specificity, ROC curves, and AUC were used to evaluate the classification model. In both the training and test cohorts, the fusion model demonstrated better performance in stratifying high- and low-risk thymoma, with AUCs of 0.99 and 0.95 and accuracies of 0.93 and 0.83, respectively. This compares to the clinical model (AUCs of 0.70 and 0.51, accuracies of 0.68 and 0.47), the radiomics model (AUCs of 0.97 and 0.82, accuracies of 0.93 and 0.80), and the deep model (AUCs of 0.94 and 0.85, accuracies of 0.88 and 0.80). The fusion model integrating clinical, radiomics, and deep features based on transfer learning was efficient for noninvasively stratifying high- and low-risk thymoma and could help determine the surgical strategy for thymoma.


Subject(s)
Thymoma , Thymus Neoplasms , Humans , Thymoma/diagnostic imaging , Thymoma/surgery , Multiomics , Learning , Thymus Neoplasms/diagnostic imaging , Machine Learning , Retrospective Studies
15.
J Digit Imaging ; 36(3): 827-836, 2023 06.
Article in English | MEDLINE | ID: mdl-36596937

ABSTRACT

Novel coronavirus disease 2019 (COVID-19) has rapidly spread throughout the world; however, it is difficult for clinicians to make early diagnoses. This study evaluates the feasibility of using deep learning (DL) models to identify asymptomatic COVID-19 patients based on chest CT images. In this retrospective study, six DL models (Xception, NASNet, ResNet, EfficientNet, ViT, and Swin), based on convolutional neural network (CNN) or transformer architectures, were trained to identify asymptomatic patients with COVID-19 on chest CT images. Data from Yangzhou were randomly split into a training set (n = 2140) and an internal-validation set (n = 360); data from Suzhou formed the external test set (n = 200). Model performance was assessed with accuracy, recall, and specificity and compared with the assessments of two radiologists. A total of 2700 chest CT images were collected in this study. On the validation set, the Swin model achieved the highest accuracy of 0.994, followed by the EfficientNet model (0.954); the recall and precision of the Swin model were 0.989 and 1.000, respectively. On the test set, the Swin model again achieved the highest accuracy (0.980). All the DL models performed markedly better than the two experts. Finally, the time the two experts spent diagnosing the test set (42 min 17 s for the junior and 29 min 43 s for the senior) was significantly longer than that of the DL models (all below 2 min). This study evaluated the feasibility of multiple DL models for distinguishing asymptomatic patients with COVID-19 from healthy subjects on chest CT images and found that a transformer-based model, the Swin model, performed best.
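The accuracy, recall, and specificity metrics used to compare the models all derive from the binary confusion matrix. A minimal sketch, with hypothetical counts rather than the study's:

```python
def classification_metrics(tp, fp, tn, fn):
    """Accuracy, recall (sensitivity), and specificity from a
    binary confusion matrix."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    recall = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return accuracy, recall, specificity

# Hypothetical external-test counts for a COVID-19/healthy classifier
acc, rec, spec = classification_metrics(tp=98, fp=0, tn=98, fn=4)
print(acc, round(rec, 3), spec)  # 0.98 0.961 1.0
```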


Subject(s)
COVID-19 , Deep Learning , Humans , COVID-19/diagnostic imaging , Retrospective Studies , Neural Networks, Computer , Tomography, X-Ray Computed
16.
Appl Soft Comput ; 132: 109851, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36447954

ABSTRACT

The world has been undergoing unprecedented circumstances caused by the coronavirus pandemic, which is having a devastating global effect on different aspects of life. Since there are as yet no effective antiviral treatments for Covid-19, it is crucial to detect the disease early and monitor its progression, thereby helping to reduce mortality. While different measures are being used to combat the virus, medical imaging techniques have been examined to support doctors in diagnosing the disease. In this paper, we present a practical solution for the detection of Covid-19 from chest X-ray (CXR) and lung computed tomography (LCT) images, exploiting cutting-edge machine learning techniques. As the main classification engine, we make use of EfficientNet and MixNet, two recently developed families of deep neural networks. Furthermore, to make training more effective and efficient, we apply three transfer learning algorithms. The ultimate aim is to build a reliable expert system to detect Covid-19 from different sources of images, making it a multi-purpose AI diagnosis system. We validated our proposed approach using four real-world datasets. The first two are CXR datasets consisting of 15,000 and 17,905 images, respectively; the other two are LCT datasets with 2,482 and 411,528 images, respectively. Five-fold cross-validation was used to evaluate the approach: the dataset is split into five parts, and the evaluation is conducted in five rounds, with four parts combined to form the training data and the remaining one used for testing in each round. We obtained encouraging prediction performance on all the considered datasets: in all configurations, the accuracy exceeds 95.0%. Compared to various existing studies, our approach yields a substantial performance gain, and the improvement is statistically significant.
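The five-fold protocol described above (split into five parts, train on four, test on one, rotate) can be sketched in a few lines:

```python
def five_fold_splits(n_samples):
    """Index splits for 5-fold cross-validation: each round holds out
    one fold for testing and trains on the remaining four."""
    indices = list(range(n_samples))
    fold_size = n_samples // 5
    folds = [indices[i * fold_size:(i + 1) * fold_size] for i in range(5)]
    folds[-1].extend(indices[5 * fold_size:])  # remainder goes to last fold
    for k in range(5):
        test = folds[k]
        train = [i for j, f in enumerate(folds) if j != k for i in f]
        yield train, test

splits = list(five_fold_splits(10))
print(len(splits), splits[0][1], len(splits[0][0]))  # 5 [0, 1] 8
```

In practice the split is usually stratified by class and shuffled; this sketch keeps only the rotation logic.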

17.
Expert Syst Appl ; 227: 120367, 2023 Oct 01.
Article in English | MEDLINE | ID: mdl-37193000

ABSTRACT

COVID-19 is one of the most significant obstacles that humanity is now facing. Computed tomography (CT) images are one means of recognizing COVID-19 at an early stage. In this study, an upgraded variant of the moth flame optimization algorithm (Es-MFO) is presented, incorporating a nonlinear self-adaptive parameter and a mathematical principle based on the Fibonacci approach, to achieve a higher level of accuracy in the classification of COVID-19 CT images. The proposed Es-MFO algorithm is evaluated on nineteen basic benchmark functions and the thirty- and fifty-dimensional IEEE CEC'2017 test functions, and its proficiency is compared with a variety of fundamental optimization techniques as well as MFO variants. The robustness and stability of Es-MFO have been evaluated with the Friedman rank test and the Wilcoxon rank test, along with convergence and diversity analyses. Furthermore, Es-MFO solves three CEC2020 engineering design problems to examine its problem-solving ability. The algorithm is then used to solve the COVID-19 CT image segmentation problem via multi-level thresholding with Otsu's method. Comparison of Es-MFO with basic and MFO variants proved the superiority of the newly developed algorithm.
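Otsu's criterion, which the thresholding step optimizes, selects the threshold maximizing the between-class variance of the grey-level histogram. A single-level sketch follows (the paper applies multi-level thresholding driven by Es-MFO; the toy data are synthetic):

```python
import numpy as np

def otsu_threshold(image, n_bins=256):
    """Single-level Otsu threshold: pick the cut maximizing the
    between-class variance w0*w1*(m0 - m1)^2 of the histogram."""
    hist, edges = np.histogram(image, bins=n_bins)
    hist = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    best_t, best_var = centers[0], -1.0
    for k in range(1, n_bins):
        w0, w1 = hist[:k].sum(), hist[k:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (hist[:k] * centers[:k]).sum() / w0
        m1 = (hist[k:] * centers[k:]).sum() / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, centers[k]
    return best_t

# Bimodal toy "image": dark background near 20, bright region near 200
rng = np.random.default_rng(1)
img = np.concatenate([rng.normal(20, 5, 500), rng.normal(200, 5, 100)])
t = otsu_threshold(img)
print(20 < t < 200)  # True
```

Multi-level thresholding generalizes this to several cuts, which is where a metaheuristic such as Es-MFO replaces the exhaustive search.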

18.
Comput Electr Eng ; 105: 108479, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36406625

ABSTRACT

Recent studies have shown that computed tomography (CT) scan images can characterize COVID-19 disease in patients. Several deep learning (DL) methods have been proposed for diagnosis in the literature, including convolutional neural networks (CNN). However, with inaccurate patient classification models, the number of 'False Negatives' can put lives at risk. The primary objective is to improve the model so that it does not misclassify 'Covid' cases as 'Non-Covid'. This study uses a Dense-CNN to categorize patients efficiently. A novel loss function based on cross-entropy has also been used to improve the CNN algorithm's convergence. The proposed model is built and tested on a recently published large dataset. An extensive study and comparison with well-known models reveal the effectiveness of the proposed method over known methods. The proposed model achieved a prediction accuracy of 93.78%, while the false-negative rate is only 6.5%. A significant advantage of this approach is accelerating the diagnosis and treatment of COVID-19.
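The abstract does not specify the loss beyond it being cross-entropy-based; one common way to bias a classifier against false negatives is to up-weight the positive-class term of the binary cross-entropy. The sketch below is an assumption for illustration, not the authors' actual formulation, and `fn_weight` is a hypothetical parameter.

```python
import math

def weighted_cross_entropy(y_true, p_pred, fn_weight=5.0):
    """Binary cross-entropy where a missed positive (predicting 'Non-Covid'
    for a 'Covid' case) is penalized fn_weight times more heavily."""
    eps = 1e-12
    losses = []
    for y, p in zip(y_true, p_pred):
        p = min(max(p, eps), 1 - eps)  # clamp to avoid log(0)
        if y == 1:
            losses.append(-fn_weight * math.log(p))  # heavier false-negative penalty
        else:
            losses.append(-math.log(1 - p))
    return sum(losses) / len(losses)
```

Frameworks expose the same idea directly, e.g. the `pos_weight` argument of PyTorch's `BCEWithLogitsLoss`.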

19.
Int J Med Sci ; 19(3): 490-498, 2022.
Article in English | MEDLINE | ID: mdl-35370462

ABSTRACT

Introduction: Early detection of lung cancer is one way to improve outcomes, so improving the detection of nodules on chest CT scans is important. Previous artificial intelligence (AI) modules have advanced rapidly, improving the performance of lung nodule detection on some datasets; however, they have a high false-positive (FP) rate, and their effectiveness in clinical practice has not yet been fully proven. We aimed to use AI assistance in CT scans to decrease FP. Materials and methods: CT images of 60 patients were obtained. Five senior doctors who were blinded to these cases participated in this study for the detection of lung nodules. Two doctors performed manual detection and labeling of lung nodules without AI assistance. Another three doctors used AI assistance to detect and label lung nodules before manual interpretation. The AI program is based on a deep learning framework. Results: In total, 266 nodules were identified. For doctors without AI assistance, the FP rate was 0.617-0.650/scan and the sensitivity was 59.2-67.0%. For doctors with AI assistance, the FP rate was 0.067-0.2/scan and the sensitivity was 59.2-77.3%. The AI-assisted program significantly reduced FP. The error-prone characteristics of lung nodules were central locations, ground-glass appearances, and small sizes. The AI-assisted program improved the detection of error-prone nodules. Conclusions: Detection of lung nodules is important for lung cancer treatment. When facing a large number of CT scans, error-prone nodules are a great challenge for doctors. The AI-assisted program improved the performance of detecting lung nodules, especially for error-prone nodules.
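The reported reader-study metrics follow directly from the detection counts; a minimal sketch of the computation, with illustrative counts rather than the study's raw numbers:

```python
def detection_metrics(true_positives, false_positives, total_nodules, n_scans):
    """Per-reader sensitivity and false positives per scan for a
    nodule-detection reader study."""
    sensitivity = true_positives / total_nodules   # fraction of real nodules found
    fp_per_scan = false_positives / n_scans        # spurious detections per CT scan
    return sensitivity, fp_per_scan
```

With the study's 266 nodules over 60 scans, a reader who found 158 nodules with 4 false marks would score a sensitivity of about 59.4% at 0.067 FP/scan — matching the lower end of the AI-assisted range reported above (the per-reader counts themselves are not given in the abstract).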


Subject(s)
Deep Learning, Lung Neoplasms, Artificial Intelligence, Humans, Lung, Lung Neoplasms/diagnostic imaging, Tomography, X-Ray Computed/methods
20.
BMC Med Imaging ; 22(1): 120, 2022 07 05.
Article in English | MEDLINE | ID: mdl-35790901

ABSTRACT

Covid-19 is a disease that can lead to pneumonia, respiratory syndrome, septic shock, multiple organ failure, and death. The fight against this pandemic, an enormous threat to the human population, is viewed as critical. Deep convolutional neural networks have recently proved their ability to perform well in classification and dimensionality reduction tasks. Selecting hyper-parameters is critical for these networks because the search space expands exponentially in size as the number of layers increases. All existing approaches utilize a pre-trained or hand-designed architecture as input; none of them takes design and pruning into account throughout the process. In fact, there exists a convolutional topology for any architecture, and each block of a CNN corresponds to an optimization problem with a large search space. However, there are no guidelines for designing an architecture for a specific purpose; thus, such design is highly subjective and heavily reliant on data scientists' knowledge and expertise. Motivated by this observation, we propose a topology optimization method for designing a convolutional neural network capable of classifying radiography images and detecting probable chest anomalies and infections, including COVID-19. Our method has been validated in a number of comparative studies against relevant state-of-the-art architectures.


Subject(s)
COVID-19, COVID-19/diagnostic imaging, Humans, Neural Networks, Computer, Tomography, X-Ray Computed/methods, X-Rays