Results 1 - 20 of 8,834
1.
BMC Med Imaging ; 24(1): 162, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38956470

ABSTRACT

BACKGROUND: The image quality of computed tomography angiography (CTA) images following endovascular aneurysm repair (EVAR) is not satisfactory, since artifacts resulting from metallic implants obstruct the clear depiction of the stent and isolation lumens as well as adjacent soft tissues. However, current techniques to reduce these artifacts still need further advancement because of higher radiation doses, longer processing times, and other drawbacks. Thus, the aim of this study is to assess the impact of utilizing Single-Energy Metal Artifact Reduction (SEMAR) alongside a novel deep learning image reconstruction technique, known as the Advanced Intelligent Clear-IQ Engine (AiCE), on the image quality of CTA follow-ups conducted after EVAR. MATERIALS AND METHODS: This retrospective study included 47 patients (mean age ± standard deviation: 68.6 ± 7.8 years; 37 males) who underwent CTA examinations following EVAR. Images were reconstructed using four different methods: hybrid iterative reconstruction (HIR), AiCE, the combination of HIR and SEMAR (HIR + SEMAR), and the combination of AiCE and SEMAR (AiCE + SEMAR). Two radiologists, blinded to the reconstruction techniques, independently evaluated the images. Quantitative assessments included measurements of image noise, signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), the longest length of artifacts (AL), and artifact index (AI). These parameters were subsequently compared across the different reconstruction methods. RESULTS: The subjective results indicated that AiCE + SEMAR performed best in terms of image quality. The mean image noise intensity was significantly lower in the AiCE + SEMAR group (25.35 ± 6.51 HU) than in the HIR (47.77 ± 8.76 HU), AiCE (42.93 ± 10.61 HU), and HIR + SEMAR (30.34 ± 4.87 HU) groups (p < 0.001). Additionally, AiCE + SEMAR exhibited the highest SNRs and CNRs, as well as the lowest AIs and ALs. Importantly, endoleaks and thrombi were most clearly visualized using AiCE + SEMAR. CONCLUSIONS: In comparison to other reconstruction methods, the combination of AiCE + SEMAR demonstrates superior image quality, thereby enhancing the detection capability and diagnostic confidence for potential complications such as early minor endoleaks and thrombi following EVAR. This improvement in image quality could lead to more accurate diagnoses and better patient outcomes.
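As a rough illustration of the quantitative metrics reported above, the following Python sketch computes image noise, SNR, CNR, and one common artifact-index definition from ROI statistics. The ROI values and the artifact-index formula are illustrative assumptions, not the study's actual measurement protocol.

import numpy as np

# Hypothetical ROI pixel values (HU); in practice these would be sampled from the
# stent lumen, adjacent muscle, and a region affected by metal artifacts.
rng = np.random.default_rng(0)
lumen = rng.normal(300.0, 25.0, size=500)     # contrast-filled lumen ROI
muscle = rng.normal(50.0, 25.0, size=500)     # background soft-tissue ROI
artifact = rng.normal(50.0, 60.0, size=500)   # ROI placed over streak artifacts

noise = lumen.std(ddof=1)                     # image noise = SD of a homogeneous ROI
snr = lumen.mean() / noise                    # signal-to-noise ratio
cnr = (lumen.mean() - muscle.mean()) / muscle.std(ddof=1)   # contrast-to-noise ratio

# One common artifact-index definition (an assumption here):
# AI = sqrt(SD_artifact^2 - SD_reference^2)
ai = (max(artifact.std(ddof=1) ** 2 - muscle.std(ddof=1) ** 2, 0.0)) ** 0.5

print(f"noise={noise:.1f} HU  SNR={snr:.2f}  CNR={cnr:.2f}  AI={ai:.2f}")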


Subject(s)
Artifacts , Computed Tomography Angiography , Endovascular Procedures , Humans , Retrospective Studies , Female , Computed Tomography Angiography/methods , Aged , Male , Endovascular Procedures/methods , Middle Aged , Aortic Aneurysm, Abdominal/surgery , Aortic Aneurysm, Abdominal/diagnostic imaging , Deep Learning , Radiographic Image Interpretation, Computer-Assisted/methods , Stents , Endovascular Aneurysm Repair
2.
BMC Med Imaging ; 24(1): 165, 2024 Jul 02.
Article in English | MEDLINE | ID: mdl-38956579

ABSTRACT

BACKGROUND: Pneumoconiosis has a significant impact on patients' quality of survival because its staging is difficult to diagnose and its prognosis is poor. This study aimed to develop a computer-aided diagnostic system for the screening and staging of pneumoconiosis based on a multi-stage joint deep learning approach using X-ray chest radiographs of pneumoconiosis patients. METHODS: In this study, a total of 498 medical chest radiographs were obtained from the Department of Radiology of West China Fourth Hospital. The dataset was randomly divided into a training set and a test set at a ratio of 4:1. Following histogram equalization for image enhancement, the images were segmented using the U-Net model, and staging was predicted using a convolutional neural network classification model. We first used EfficientNet for multi-class staging diagnosis, but the results showed that stages I/II of pneumoconiosis were difficult to diagnose. Therefore, based on clinical practice, we continued to improve the model by using a ResNet-34 multi-stage joint method. RESULTS: Of the 498 cases collected, the classification model using EfficientNet achieved an accuracy of 83% with a Quadratic Weighted Kappa (QWK) score of 0.889. The classification model using the multi-stage joint ResNet-34 approach achieved an accuracy of 89% with an area under the curve (AUC) of 0.98 and a high QWK score of 0.94. CONCLUSIONS: In this study, the diagnostic accuracy of pneumoconiosis staging was significantly improved by an innovative combined multi-stage approach, which provides a reference for clinical application and pneumoconiosis screening.
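For an ordinal staging task like this, the Quadratic Weighted Kappa can be computed with scikit-learn as sketched below; the stage labels are invented for illustration and do not come from the study.

from sklearn.metrics import cohen_kappa_score

# Hypothetical pneumoconiosis stages (0 = normal, 1-3 = stage I-III) on a test set:
# y_true are panel labels, y_pred are model predictions.
y_true = [0, 0, 1, 1, 2, 2, 3, 3, 1, 2]
y_pred = [0, 1, 1, 1, 2, 3, 3, 3, 0, 2]

# Quadratic weights penalize predictions that land further from the true stage,
# which is why QWK suits ordinal staging better than plain accuracy.
qwk = cohen_kappa_score(y_true, y_pred, weights="quadratic")
print(f"Quadratic Weighted Kappa: {qwk:.3f}")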


Subject(s)
Deep Learning , Pneumoconiosis , Humans , Pneumoconiosis/diagnostic imaging , Pneumoconiosis/pathology , Male , Middle Aged , Female , Radiography, Thoracic/methods , Aged , Adult , Neural Networks, Computer , China , Diagnosis, Computer-Assisted/methods , Radiographic Image Interpretation, Computer-Assisted/methods
3.
BMC Med Imaging ; 24(1): 163, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38956583

ABSTRACT

PURPOSE: To examine whether there is a significant difference in image quality between the deep learning reconstruction (DLR [AiCE, Advanced Intelligent Clear-IQ Engine]) and hybrid iterative reconstruction (HIR [AIDR 3D, adaptive iterative dose reduction three dimensional]) algorithms on conventional enhanced and CE-boost (contrast-enhancement-boost) images of indirect computed tomography venography (CTV) of the lower extremities. MATERIALS AND METHODS: In this retrospective study, seventy patients who underwent CTV from June 2021 to October 2022 to assess deep vein thrombosis and varicose veins were included. Unenhanced and enhanced images were reconstructed for AIDR 3D and AiCE; AIDR 3D-boost and AiCE-boost images were obtained using subtraction software. Objective and subjective image quality were assessed, and radiation doses were recorded. RESULTS: The CT values of the inferior vena cava (IVC), femoral vein (FV), and popliteal vein (PV) in the CE-boost images were approximately 1.3 (1.31-1.36) times higher than those of the enhanced images. There were no significant differences in mean CT values of the IVC, FV, and PV between AIDR 3D and AiCE, or between AIDR 3D-boost and AiCE-boost images. Noise in AiCE and AiCE-boost images was significantly lower than in AIDR 3D and AIDR 3D-boost images (P < 0.05). The SNR (signal-to-noise ratio), CNR (contrast-to-noise ratio), and subjective scores of AiCE-boost images were the highest among the four groups, surpassing AiCE, AIDR 3D, and AIDR 3D-boost images (all P < 0.05). CONCLUSION: In indirect CTV images of the lower extremities, DLR with the CE-boost technique can decrease image noise and improve CT values, SNR, CNR, and subjective image scores. AiCE-boost images received the highest subjective image quality scores and were more readily accepted by radiologists.
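The CE-boost technique is described only at a high level above; one plausible reading is that the unenhanced image is subtracted from the enhanced one and the resulting iodine signal is added back to boost vascular attenuation. The Python sketch below illustrates that interpretation on toy arrays and is an assumption, not the vendor's algorithm.

import numpy as np

# Toy 2D "CT" arrays in HU; real data would be co-registered unenhanced and
# enhanced volumes of the same patient.
rng = np.random.default_rng(1)
unenhanced = rng.normal(40.0, 10.0, size=(64, 64))
enhanced = unenhanced.copy()
enhanced[20:40, 20:40] += 150.0               # a contrast-filled vein on the enhanced scan

iodine_map = enhanced - unenhanced            # subtraction isolates the iodine signal
ce_boost = enhanced + iodine_map              # adding it back "boosts" vascular attenuation

vein_plain = enhanced[20:40, 20:40].mean()
vein_boost = ce_boost[20:40, 20:40].mean()
print(f"vein attenuation: enhanced={vein_plain:.0f} HU, CE-boost={vein_boost:.0f} HU "
      f"(ratio {vein_boost / vein_plain:.2f})")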


Subject(s)
Contrast Media , Deep Learning , Lower Extremity , Phlebography , Humans , Male , Retrospective Studies , Female , Middle Aged , Lower Extremity/blood supply , Lower Extremity/diagnostic imaging , Aged , Phlebography/methods , Adult , Algorithms , Venous Thrombosis/diagnostic imaging , Tomography, X-Ray Computed/methods , Radiographic Image Interpretation, Computer-Assisted/methods , Popliteal Vein/diagnostic imaging , Varicose Veins/diagnostic imaging , Vena Cava, Inferior/diagnostic imaging , Femoral Vein/diagnostic imaging , Radiation Dosage , Computed Tomography Angiography/methods , Aged, 80 and over , Radiographic Image Enhancement/methods
4.
Radiol Imaging Cancer ; 6(4): e230149, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38995172

ABSTRACT

Purpose To compare two deep learning-based commercially available artificial intelligence (AI) systems for mammography with digital breast tomosynthesis (DBT) and benchmark them against the performance of radiologists. Materials and Methods This retrospective study included consecutive asymptomatic patients who underwent mammography with DBT (2019-2020). Two AI systems (Transpara 1.7.0 and ProFound AI 3.0) were used to evaluate the DBT examinations. The systems were compared using receiver operating characteristic (ROC) analysis to calculate the area under the ROC curve (AUC) for detecting malignancy overall and within subgroups based on mammographic breast density. Breast Imaging Reporting and Data System results obtained from standard-of-care human double-reading were compared against AI results with use of the DeLong test. Results Of 419 female patients (median age, 60 years [IQR, 52-70 years]) included, 58 had histologically proven breast cancer. The AUC was 0.86 (95% CI: 0.85, 0.91), 0.93 (95% CI: 0.90, 0.95), and 0.98 (95% CI: 0.96, 0.99) for Transpara, ProFound AI, and human double-reading, respectively. For Transpara, a rule-out criterion of score 7 or lower yielded 100% (95% CI: 94.2, 100.0) sensitivity and 60.9% (95% CI: 55.7, 66.0) specificity. The rule-in criterion of higher than score 9 yielded 96.6% sensitivity (95% CI: 88.1, 99.6) and 78.1% specificity (95% CI: 73.8, 82.5). For ProFound AI, a rule-out criterion of lower than score 51 yielded 100% sensitivity (95% CI: 93.8, 100) and 67.0% specificity (95% CI: 62.2, 72.1). The rule-in criterion of higher than score 69 yielded 93.1% (95% CI: 83.3, 98.1) sensitivity and 82.0% (95% CI: 77.9, 86.1) specificity. Conclusion Both AI systems showed high performance in breast cancer detection but lower performance compared with human double-reading. Keywords: Mammography, Breast, Oncology, Artificial Intelligence, Deep Learning, Digital Breast Tomosynthesis © RSNA, 2024.
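The rule-out and rule-in operating points above are simply sensitivity/specificity pairs at fixed AI-score thresholds. The sketch below shows how such pairs can be computed; the scores and cancer labels are synthetic stand-ins, not data from either system.

import numpy as np

def sens_spec(scores, labels, threshold):
    """Sensitivity and specificity when scores above `threshold` are called positive."""
    scores = np.asarray(scores)
    labels = np.asarray(labels, dtype=bool)
    pred = scores > threshold
    sensitivity = (pred & labels).sum() / labels.sum()
    specificity = (~pred & ~labels).sum() / (~labels).sum()
    return sensitivity, specificity

# Synthetic AI scores on a 1-10 scale and ground-truth cancer labels.
rng = np.random.default_rng(2)
labels = rng.random(400) < 0.15
scores = np.where(labels, rng.integers(7, 11, 400), rng.integers(1, 10, 400))

print("rule-out (score > 7): sens=%.3f spec=%.3f" % sens_spec(scores, labels, 7))
print("rule-in  (score > 9): sens=%.3f spec=%.3f" % sens_spec(scores, labels, 9))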


Subject(s)
Artificial Intelligence , Breast Neoplasms , Mammography , Humans , Female , Breast Neoplasms/diagnostic imaging , Mammography/methods , Middle Aged , Retrospective Studies , Aged , Deep Learning , Breast/diagnostic imaging , Radiographic Image Interpretation, Computer-Assisted/methods , Sensitivity and Specificity
5.
BMC Med Imaging ; 24(1): 180, 2024 Jul 22.
Article in English | MEDLINE | ID: mdl-39039460

ABSTRACT

OBJECTIVES: Rheumatoid arthritis (RA) is a severe and common autoimmune disease. Conventional diagnostic methods are often subjective, error-prone, and involve repetitive work. There is an urgent need for a method to detect RA accurately. Therefore, this study aims to develop an automatic diagnostic system based on deep learning for recognizing and staging RA from radiographs to assist physicians in diagnosing RA quickly and accurately. METHODS: We developed a CNN-based fully automated RA diagnostic model, exploring five popular CNN architectures on two clinical applications. The model was trained on a radiograph dataset containing 240 hand radiographs, of which 39 are normal and 201 are RA across five stages. For evaluation, we used 104 hand radiographs, of which 13 are normal and 91 are RA across five stages. RESULTS: The CNN models achieve good performance in RA diagnosis based on hand radiographs. For RA recognition, all models achieve an AUC above 90% with a sensitivity over 98%. In particular, the AUC of the GoogLeNet-based model is 97.80%, and its sensitivity is 100.0%. For RA staging, all models achieve over 77% AUC with a sensitivity over 80%. Specifically, the VGG16-based model achieves 83.36% AUC with 92.67% sensitivity. CONCLUSION: The presented GoogLeNet-based and VGG16-based models have the best AUC and sensitivity for RA recognition and staging, respectively. The experimental results demonstrate the feasibility and applicability of CNNs in radiograph-based RA diagnosis. Therefore, this model has important clinical significance, especially for resource-limited areas and inexperienced physicians.


Subject(s)
Arthritis, Rheumatoid , Deep Learning , Neural Networks, Computer , Arthritis, Rheumatoid/diagnostic imaging , Humans , Sensitivity and Specificity , Radiographic Image Interpretation, Computer-Assisted/methods , Radiography/methods , Hand/diagnostic imaging , Male , Female
6.
Sci Rep ; 14(1): 15967, 2024 Jul 10.
Article in English | MEDLINE | ID: mdl-38987309

ABSTRACT

Labeling errors can significantly impact the performance of deep learning models used for screening chest radiographs. The deep learning model for detecting pulmonary nodules is particularly vulnerable to such errors, mainly because normal chest radiographs and those with nodules obscured by ribs appear similar. Thus, high-quality datasets with labels referenced to chest computed tomography (CT) are required to prevent the misclassification of nodular chest radiographs as normal. From this perspective, a deep learning strategy employing chest radiography data with pixel-level annotations referencing chest CT scans may improve nodule detection and localization compared to image-level labels. We trained models using a National Institutes of Health (NIH) chest radiograph-based labeling dataset and an AI-HUB CT-based labeling dataset, employing a DenseNet architecture with squeeze-and-excitation blocks. We developed four models to assess whether CT-based versus chest radiography-based and pixel-level versus image-level labeling would improve the deep learning model's performance in detecting nodules. The models' performance was evaluated using two external validation datasets. The AI-HUB dataset with image-level labeling outperformed the NIH dataset (AUC 0.88 vs 0.71 and 0.78 vs 0.73 in the two external datasets, respectively; both p < 0.001). However, the AI-HUB data annotated at the pixel level produced the best model (AUC 0.91 and 0.86 in the external datasets), and in terms of nodule localization, it significantly outperformed models trained with image-level annotation data, with a Dice coefficient ranging from 0.36 to 0.58. Our findings underscore the importance of accurately labeled data in developing reliable deep learning algorithms for nodule detection in chest radiography.
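The Dice coefficient used above to quantify localization agreement has a simple direct implementation; the toy masks below are illustrative only.

import numpy as np

def dice_coefficient(pred_mask, true_mask, eps=1e-7):
    """Dice = 2|A intersect B| / (|A| + |B|) for binary masks."""
    pred = np.asarray(pred_mask, dtype=bool)
    true = np.asarray(true_mask, dtype=bool)
    intersection = np.logical_and(pred, true).sum()
    return (2.0 * intersection + eps) / (pred.sum() + true.sum() + eps)

# Toy 2D masks: a reference nodule and a slightly shifted prediction.
true_mask = np.zeros((64, 64), dtype=bool)
true_mask[20:30, 20:30] = True
pred_mask = np.zeros_like(true_mask)
pred_mask[23:33, 22:32] = True

print(f"Dice: {dice_coefficient(pred_mask, true_mask):.3f}")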


Subject(s)
Deep Learning , Lung Neoplasms , Radiography, Thoracic , Tomography, X-Ray Computed , Humans , Tomography, X-Ray Computed/methods , Radiography, Thoracic/methods , Radiography, Thoracic/standards , Lung Neoplasms/diagnostic imaging , Solitary Pulmonary Nodule/diagnostic imaging , Data Accuracy , Radiographic Image Interpretation, Computer-Assisted/methods
7.
BMC Musculoskelet Disord ; 25(1): 547, 2024 Jul 16.
Article in English | MEDLINE | ID: mdl-39010001

ABSTRACT

OBJECTIVE: This study aimed to evaluate a new deep-learning model for diagnosing avascular necrosis of the femoral head (AVNFH) by analyzing pelvic anteroposterior digital radiographs. METHODS: The study sample included 1167 hips. The radiographs were independently classified into 6 stages by a radiologist using the corresponding simultaneous MRIs. The radiographs were then used to train and test the project's models, including SVM and an ANFIS layer, implemented in the Python programming language with the TensorFlow library. In the last step, the test set of hip radiographs was provided to two independent radiologists with different levels of experience, and their diagnostic performance was compared with that of the models using the F1 score and McNemar test. RESULTS: The performance of SVM for AVNFH detection (AUC = 82.88%) was slightly higher than that of less experienced radiologists (79.68%) and slightly lower than that of experienced radiologists (88.4%), without reaching significance (p-value > 0.05). Evaluation of the performance of SVM for pre-collapse AVNFH detection, with an AUC of 73.58%, showed significantly higher performance than less experienced radiologists (AUC = 60.70%, p-value < 0.001). On the other hand, no significant difference was noted between experienced radiologists and SVM for pre-collapse detection. The ANFIS algorithm for AVNFH detection, with an AUC of 86.60%, showed significantly higher performance than less experienced radiologists (AUC = 79.68%, p-value = 0.04). Although its performance was lower than that of experienced radiologists, the difference was not statistically significant (AUC = 88.40%, p-value = 0.20). CONCLUSIONS: Our study has shed light on the capabilities of SVM and ANFIS as diagnostic tools for AVNFH detection in radiography. Their ability to achieve high accuracy with remarkable efficiency makes them promising candidates for early detection and intervention, ultimately contributing to improved patient outcomes.
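The McNemar test used above compares paired correct/incorrect calls on the same radiographs; statsmodels provides it directly, as sketched below with invented counts.

import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

# 2x2 table of paired outcomes on the same test radiographs (invented counts):
# rows = model correct / incorrect, columns = radiologist correct / incorrect.
table = np.array([[70, 12],
                  [ 5, 17]])

# McNemar's test considers only the discordant cells (12 vs 5).
result = mcnemar(table, exact=True)
print(f"McNemar statistic = {result.statistic:.0f}, p-value = {result.pvalue:.3f}")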


Subject(s)
Deep Learning , Femur Head Necrosis , Humans , Female , Male , Middle Aged , Adult , Femur Head Necrosis/diagnostic imaging , Aged , Magnetic Resonance Imaging/methods , Young Adult , Diagnosis, Differential , Radiographic Image Interpretation, Computer-Assisted/methods , Adolescent
8.
Radiol Cardiothorac Imaging ; 6(4): e230328, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39023373

ABSTRACT

Purpose To investigate the impact of plaque size and density on virtual noncontrast (VNC)-based coronary artery calcium scoring (CACS) using photon-counting detector CT and to provide safety net reconstructions for improved detection of subtle plaques in patients whose VNC-based CACS would otherwise be erroneously zero when compared with true noncontrast (TNC)-based CACS. Materials and Methods In this prospective study, CACS was evaluated in a phantom containing calcifications with different diameters (5, 3, and 1 mm) and densities (800, 400, and 200 mg/cm3) and in participants who underwent TNC and contrast-enhanced cardiac photon-counting detector CT (July 2021-March 2022). VNC images were reconstructed at different virtual monoenergetic imaging (55-80 keV) and quantum iterative reconstruction (QIR) levels (QIR 1-4). TNC scans at 70 keV with QIR off served as the reference standard. In vitro CACS was analyzed using standard settings (3.0-mm sections, kernel Qr36, 130-HU threshold). Calcification detectability and CACS of small and low-density plaques were also evaluated using 1.0-mm sections, kernel Qr44, and 120- or 110-HU thresholds. Safety net reconstructions were defined based on background Agatston scores and evaluated in vivo in TNC plaques initially nondetectable using standard VNC reconstructions. Results The in vivo cohort included 63 participants (57.8 years ± 15.5 [SD]; 37 [59%] male, 26 [41%] female). Correlation and agreement between standard CACSVNC and CACSTNC were higher in large- and medium-sized and high- and medium-density than in low-density plaques (in vitro: intraclass correlation coefficient [ICC] ≥ 0.90; r > 0.9 vs ICC = 0.20-0.48; r = 0.5-0.6). Small plaques were not detectable using standard VNC reconstructions. Calcification detectability was highest for VNC reconstructions using 1.0-mm sections, kernel Qr44, 120- or 110-HU thresholds, and a QIR level of 2 or less. Compared with standard VNC, using safety net reconstructions (55 keV, QIR 2, 110-HU threshold) for in vivo subtle plaque detection led to higher detection (increased by 89% [50 of 56]) and improved correlation and agreement of CACSVNC with CACSTNC (in vivo: ICC = 0.51-0.61; r = 0.6). Conclusion Compared with TNC-based calcium scoring, VNC-based calcium scoring was limited for small and low-density plaques but improved using safety net reconstructions, which may be particularly useful in patients with low calcium scores who would otherwise be treated based on potentially false-negative results. Keywords: Coronary Artery Calcium CT, Photon-Counting Detector CT, Virtual Noncontrast, Plaque Size, Plaque Density Supplemental material is available for this article. © RSNA, 2024.
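Agatston-style calcium scoring, referred to throughout the abstract, follows a well-known recipe: threshold the slice at 130 HU (or the adjusted thresholds tested here), find connected lesions, weight each by its peak attenuation, and multiply by lesion area. The sketch below is a simplified single-slice version with toy data, not the scanner software's implementation.

import numpy as np
from scipy import ndimage

def agatston_slice(hu_slice, pixel_area_mm2, threshold=130):
    """Approximate Agatston score for one axial slice (simplified sketch)."""
    labeled, n_lesions = ndimage.label(hu_slice >= threshold)
    score = 0.0
    for lesion in range(1, n_lesions + 1):
        lesion_mask = labeled == lesion
        area = lesion_mask.sum() * pixel_area_mm2
        if area < 1.0:                      # ignore tiny specks likely due to noise
            continue
        peak = hu_slice[lesion_mask].max()
        if peak >= 400:
            weight = 4
        elif peak >= 300:
            weight = 3
        elif peak >= 200:
            weight = 2
        else:
            weight = 1
        score += area * weight
    return score

# Toy slice with one ~9 mm2 calcification peaking above 400 HU.
slice_hu = np.full((128, 128), 40.0)
slice_hu[60:63, 60:63] = 450.0
print(f"Agatston score (toy slice): {agatston_slice(slice_hu, pixel_area_mm2=1.0):.0f}")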


Subject(s)
Coronary Artery Disease , Phantoms, Imaging , Plaque, Atherosclerotic , Humans , Male , Female , Prospective Studies , Plaque, Atherosclerotic/diagnostic imaging , Plaque, Atherosclerotic/pathology , Middle Aged , Coronary Artery Disease/diagnostic imaging , Coronary Artery Disease/pathology , Aged , Photons , Coronary Vessels/diagnostic imaging , Coronary Vessels/pathology , Vascular Calcification/diagnostic imaging , Vascular Calcification/pathology , Tomography, X-Ray Computed/methods , Radiographic Image Interpretation, Computer-Assisted/methods , Coronary Angiography/methods , Contrast Media
9.
Eur Radiol Exp ; 8(1): 80, 2024 Jul 15.
Article in English | MEDLINE | ID: mdl-39004645

ABSTRACT

INTRODUCTION: Breast arterial calcifications (BAC) are common incidental findings on routine mammograms and have been suggested as a sex-specific biomarker of cardiovascular disease (CVD) risk. Previous work showed the efficacy of a pretrained convolutional neural network (CNN), VGG16, for automatic BAC detection. In this study, we further tested the method through a comparative analysis with ten other CNNs. MATERIAL AND METHODS: Four-view standard mammography exams from 1,493 women were included in this retrospective study and labeled as BAC or non-BAC by experts. The comparative study was conducted using eleven pretrained CNNs with varying depths from five architectures, including Xception, VGG, ResNetV2, MobileNet, and DenseNet, fine-tuned for the binary BAC classification task. Performance evaluation involved area under the receiver operating characteristics curve (AUC-ROC) analysis, F1-score (the harmonic mean of precision and recall), and generalized gradient-weighted class activation mapping (Grad-CAM++) for visual explanations. RESULTS: The dataset exhibited a BAC prevalence of 194/1,493 women (13.0%) and 581/5,972 images (9.7%). Among the retrained models, VGG, MobileNet, and DenseNet demonstrated the most promising results, achieving AUC-ROCs > 0.70 in both the training and independent testing subsets. In terms of testing F1-score, VGG16 ranked first, higher than MobileNet (0.51) and VGG19 (0.46). Qualitative analysis showed that the Grad-CAM++ heatmaps generated by VGG16 consistently outperformed those produced by the other models, offering finer-grained and more discriminative localization of calcified regions within images. CONCLUSION: Deep transfer learning showed promise in automated BAC detection on mammograms, where relatively shallow networks demonstrated superior performance while requiring shorter training times and fewer resources. RELEVANCE STATEMENT: Deep transfer learning is a promising approach to enhance reporting of BAC on mammograms and to facilitate the development of efficient tools for cardiovascular risk stratification in women, leveraging large-scale mammographic screening programs. KEY POINTS: • We tested different pretrained convolutional networks (CNNs) for BAC detection on mammograms. • VGG and MobileNet demonstrated promising performance, outperforming their deeper, more complex counterparts. • Visual explanations using Grad-CAM++ highlighted VGG16's superior performance in localizing BAC.
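A minimal Keras sketch of the transfer-learning setup described above, fine-tuning a pretrained VGG16 backbone for binary BAC classification, follows; the input size, head layers, and hyperparameters are assumptions rather than the study's exact configuration.

import tensorflow as tf

# Pretrained VGG16 backbone without its ImageNet classification head.
base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = False                       # freeze convolutional features initially

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # BAC vs non-BAC
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auc")])

# model.fit(train_ds, validation_data=val_ds, epochs=10)   # mammogram datasets not shown
model.summary()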


Subject(s)
Breast Diseases , Deep Learning , Mammography , Humans , Mammography/methods , Female , Retrospective Studies , Middle Aged , Breast Diseases/diagnostic imaging , Aged , Adult , Breast/diagnostic imaging , Vascular Calcification/diagnostic imaging , Calcinosis/diagnostic imaging , Radiographic Image Interpretation, Computer-Assisted/methods
10.
Eur Radiol Exp ; 8(1): 84, 2024 Jul 24.
Article in English | MEDLINE | ID: mdl-39046565

ABSTRACT

BACKGROUND: Computed tomography (CT) reconstruction algorithms can improve image quality, especially deep learning reconstruction (DLR). We compared DLR, iterative reconstruction (IR), and filtered back projection (FBP) for lesion detection in neck CT. METHODS: Nine patient-mimicking neck phantoms were examined with a 320-slice scanner at six doses: 0.5, 1, 1.6, 2.1, 3.1, and 5.2 mGy. Each of eight phantoms contained one circular lesion (diameter 1 cm; contrast -30 HU to the background) in the parapharyngeal space; one phantom had no lesions. Reconstruction was made using FBP, IR, and DLR. Thirteen readers were tasked with identifying and localizing lesions in 32 images with a lesion and 20 without lesions for each dose and reconstruction algorithm. Receiver operating characteristic (ROC) and localization ROC (LROC) analysis were performed. RESULTS: DLR improved lesion detection with ROC area under the curve (AUC) 0.724 ± 0.023 (mean ± standard error of the mean) using DLR versus 0.696 ± 0.021 using IR (p = 0.037) and 0.671 ± 0.023 using FBP (p < 0.001). Likewise, DLR improved lesion localization, with LROC AUC 0.407 ± 0.039 versus 0.338 ± 0.041 using IR (p = 0.002) and 0.313 ± 0.044 using FBP (p < 0.001). Dose reduction to 0.5 mGy compromised lesion detection in FBP-reconstructed images compared to doses ≥ 2.1 mGy (p ≤ 0.024), while no effect was observed with DLR or IR (p ≥ 0.058). CONCLUSION: DLR improved the detectability of lesions in neck CT imaging. Dose reduction to 0.5 mGy maintained lesion detectability when denoising reconstruction was used. RELEVANCE STATEMENT: Deep learning enhances lesion detection in neck CT imaging compared to iterative reconstruction and filtered back projection, offering improved diagnostic performance and potential for x-ray dose reduction. KEY POINTS: Low-contrast lesion detectability was assessed in anatomically realistic neck CT phantoms. Deep learning reconstruction (DLR) outperformed filtered back projection and iterative reconstruction. Dose has little impact on lesion detectability against anatomical background structures.


Subject(s)
Deep Learning , Head and Neck Neoplasms , Phantoms, Imaging , Tomography, X-Ray Computed , Humans , Tomography, X-Ray Computed/methods , Head and Neck Neoplasms/diagnostic imaging , Radiographic Image Interpretation, Computer-Assisted/methods , Algorithms , Radiation Dosage
11.
Math Biosci Eng ; 21(4): 5735-5761, 2024 Apr 24.
Article in English | MEDLINE | ID: mdl-38872556

ABSTRACT

Precise segmentation of liver tumors from computed tomography (CT) scans is a prerequisite step in various clinical applications. Multi-phase CT imaging enhances tumor characterization, thereby assisting radiologists in accurate identification. However, existing automatic liver tumor segmentation models did not fully exploit multi-phase information and lacked the capability to capture global information. In this study, we developed a pioneering multi-phase feature interaction Transformer network (MI-TransSeg) for accurate liver tumor segmentation and a subsequent microvascular invasion (MVI) assessment in contrast-enhanced CT images. In the proposed network, an efficient multi-phase features interaction module was introduced to enable bi-directional feature interaction among multiple phases, thus maximally exploiting the available multi-phase information. To enhance the model's capability to extract global information, a hierarchical transformer-based encoder and decoder architecture was designed. Importantly, we devised a multi-resolution scales feature aggregation strategy (MSFA) to optimize the parameters and performance of the proposed model. Subsequent to segmentation, the liver tumor masks generated by MI-TransSeg were applied to extract radiomic features for the clinical applications of the MVI assessment. With Institutional Review Board (IRB) approval, a clinical multi-phase contrast-enhanced CT abdominal dataset was collected that included 164 patients with liver tumors. The experimental results demonstrated that the proposed MI-TransSeg was superior to various state-of-the-art methods. Additionally, we found that the tumor mask predicted by our method showed promising potential in the assessment of microvascular invasion. In conclusion, MI-TransSeg presents an innovative paradigm for the segmentation of complex liver tumors, thus underscoring the significance of multi-phase CT data exploitation. The proposed MI-TransSeg network has the potential to assist radiologists in diagnosing liver tumors and assessing microvascular invasion.


Subject(s)
Algorithms , Contrast Media , Liver Neoplasms , Microvessels , Tomography, X-Ray Computed , Humans , Liver Neoplasms/diagnostic imaging , Liver Neoplasms/pathology , Liver Neoplasms/blood supply , Microvessels/diagnostic imaging , Microvessels/pathology , Neoplasm Invasiveness , Image Processing, Computer-Assisted/methods , Liver/diagnostic imaging , Liver/pathology , Liver/blood supply , Radiographic Image Interpretation, Computer-Assisted/methods , Male , Female
12.
Sheng Wu Yi Xue Gong Cheng Xue Za Zhi ; 41(3): 503-510, 2024 Jun 25.
Article in Chinese | MEDLINE | ID: mdl-38932536

ABSTRACT

Automatic detection of pulmonary nodules based on computed tomography (CT) images can significantly improve the diagnosis and treatment of lung cancer. However, there is a lack of effective interactive tools to record radiologists' marked results in real time and feed them back to the algorithm model for iterative optimization. This paper designed and developed an online interactive review system supporting the assisted diagnosis of lung nodules in CT images. Lung nodules were detected by the preset model and presented to doctors, who marked or corrected the detected lung nodules using their professional knowledge; the AI model was then iteratively optimized with an active learning strategy according to the radiologists' marked results, continuously improving the accuracy of the model. Subsets 5-9 of the Lung Nodule Analysis 2016 (LUNA16) dataset were used for the iteration experiments. Precision, F1-score, and mIoU improved steadily as the number of iterations increased, with precision increasing from 0.2139 to 0.5656. The results show that the system not only uses a deep segmentation model to assist radiologists, but also makes maximum use of radiologists' feedback to optimize the model, iteratively improving its accuracy and better assisting radiologists.
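The review-and-retrain loop described above can be pictured as a simple active-learning cycle: detect, route the most uncertain cases to radiologists, fold their corrections back into the training data, and fine-tune. The Python sketch below only mimics that control flow with toy stand-ins for the scans and the model.

import numpy as np

rng = np.random.default_rng(3)

# Toy stand-ins: each "scan" is reduced to the model's uncertainty about it, and
# labeling simply moves the case from the unlabeled pool into the training set.
unlabeled = list(rng.random(200))
training_set = []

def review_round(unlabeled, training_set, top_k=20):
    """Send the model's most uncertain cases to radiologists for review."""
    unlabeled.sort(reverse=True)              # most uncertain first
    reviewed, remaining = unlabeled[:top_k], unlabeled[top_k:]
    training_set.extend(reviewed)             # corrected marks join the training data
    return remaining, training_set            # the detector would be fine-tuned here

for round_idx in range(3):
    unlabeled, training_set = review_round(unlabeled, training_set)
    print(f"round {round_idx + 1}: training set size = {len(training_set)}")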


Subject(s)
Algorithms , Diagnosis, Computer-Assisted , Lung Neoplasms , Tomography, X-Ray Computed , Humans , Lung Neoplasms/diagnostic imaging , Diagnosis, Computer-Assisted/methods , Solitary Pulmonary Nodule/diagnostic imaging , Multiple Pulmonary Nodules/diagnostic imaging , Radiographic Image Interpretation, Computer-Assisted/methods , Machine Learning
13.
BMC Med Imaging ; 24(1): 159, 2024 Jun 26.
Article in English | MEDLINE | ID: mdl-38926711

ABSTRACT

BACKGROUND: To assess the improvement in image quality and diagnostic acceptance of thinner-slice iodine maps enabled by deep learning image reconstruction (DLIR) in abdominal dual-energy CT (DECT). METHODS: This study prospectively included 104 participants with 136 lesions. Four series of iodine maps were generated based on portal-venous scans of contrast-enhanced abdominal DECT: 5-mm and 1.25-mm using adaptive statistical iterative reconstruction-V (Asir-V) with 50% blending (AV-50), and 1.25-mm using DLIR with medium (DLIR-M) and high strength (DLIR-H). The iodine concentrations (IC) and their standard deviations at nine anatomical sites were measured, and the corresponding coefficients of variation (CV) were calculated. The noise power spectrum (NPS) and edge rise slope (ERS) were measured. Five radiologists rated image quality in terms of image noise, contrast, sharpness, texture, and small structure visibility, and evaluated overall diagnostic acceptability of images and lesion conspicuity. RESULTS: The four reconstructions maintained the IC values unchanged at the nine anatomical sites (all p > 0.999). Compared to 1.25-mm AV-50, 1.25-mm DLIR-M and DLIR-H significantly reduced CV values (all p < 0.001) and presented lower noise and noise peak (both p < 0.001). Compared to 5-mm AV-50, 1.25-mm images had higher ERS (all p < 0.001). The differences in peak and average spatial frequency among the four reconstructions were relatively small but statistically significant (both p < 0.001). The 1.25-mm DLIR-M images were rated higher than the 5-mm and 1.25-mm AV-50 images for diagnostic acceptability and lesion conspicuity (all P < 0.001). CONCLUSIONS: DLIR may facilitate thinner-slice iodine maps in abdominal DECT with improved image quality, diagnostic acceptability, and lesion conspicuity.


Subject(s)
Contrast Media , Deep Learning , Radiographic Image Interpretation, Computer-Assisted , Radiography, Abdominal , Radiography, Dual-Energy Scanned Projection , Tomography, X-Ray Computed , Humans , Prospective Studies , Female , Male , Middle Aged , Aged , Tomography, X-Ray Computed/methods , Radiographic Image Interpretation, Computer-Assisted/methods , Radiography, Abdominal/methods , Radiography, Dual-Energy Scanned Projection/methods , Adult , Iodine , Aged, 80 and over
14.
IEEE J Transl Eng Health Med ; 12: 457-467, 2024.
Article in English | MEDLINE | ID: mdl-38899144

ABSTRACT

OBJECTIVE: Pulmonary cavity lesions are commonly seen in the lung and are caused by a variety of malignant and non-malignant diseases. Diagnosis of a cavity lesion is commonly based on accurate recognition of its typical morphological characteristics. A deep learning-based model to automatically detect, segment, and quantify the region of a cavity lesion on CT scans has potential in clinical diagnosis, monitoring, and treatment efficacy assessment. METHODS: A weakly-supervised deep learning-based method named CSA2-ResNet was proposed in this paper to quantitatively characterize cavity lesions. The lung parenchyma was first segmented using a pretrained 2D segmentation model, and the output, with or without cavity lesions, was then fed into the developed deep neural network containing hybrid attention modules. Next, the visualized lesion was generated from the activation region of the classification network using gradient-weighted class activation mapping, and image processing was applied for post-processing to obtain the expected segmentation results of cavity lesions. Finally, automatic measurement of cavity lesion characteristics (e.g., area and thickness) was developed and verified. RESULTS: The proposed weakly-supervised segmentation method achieved an accuracy, precision, specificity, recall, and F1-score of 98.48%, 96.80%, 97.20%, 100%, and 98.36%, respectively. This represents a significant improvement (P < 0.05) over other methods. Quantitative characterization of morphology also yielded good results. CONCLUSIONS: The proposed easily-trained and high-performance deep learning model provides a fast and effective way for the diagnosis and dynamic monitoring of pulmonary cavity lesions in the clinic. Clinical and Translational Impact Statement: This model used artificial intelligence to achieve the detection and quantitative analysis of pulmonary cavity lesions in CT scans. The morphological features revealed in experiments can be utilized as potential indicators for diagnosis and dynamic monitoring of patients with cavity lesions.


Subject(s)
Deep Learning , Lung , Tomography, X-Ray Computed , Humans , Tomography, X-Ray Computed/methods , Lung/diagnostic imaging , Lung/pathology , Radiographic Image Interpretation, Computer-Assisted/methods , Lung Diseases/diagnostic imaging , Lung Diseases/pathology , Lung Neoplasms/diagnostic imaging , Lung Neoplasms/pathology , Neural Networks, Computer , Supervised Machine Learning , Algorithms
15.
BMC Med Imaging ; 24(1): 151, 2024 Jun 18.
Article in English | MEDLINE | ID: mdl-38890572

ABSTRACT

BACKGROUND: Abdominal CT scans are vital for diagnosing abdominal diseases but have limitations in tissue analysis and soft tissue detection. Dual-energy CT (DECT) can improve these issues by offering low-keV virtual monoenergetic images (VMI), enhancing lesion detection and tissue characterization. However, its cost limits widespread use. PURPOSE: To develop a model that converts conventional images (CI) into generative virtual monoenergetic images at 40 keV (Gen-VMI40keV) for upper abdominal CT scans. METHODS: In total, 444 patients who underwent upper abdominal spectral contrast-enhanced CT were enrolled and assigned to the training and validation datasets (7:3). Then, 40-keV portal-venous-phase virtual monoenergetic images (VMI40keV) and CI generated from the spectral CT scans served as target and source images, respectively. These images were employed to build and train a CI-VMI40keV model. Metrics such as mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and structural similarity (SSIM) were utilized to determine the best generator model. An additional 198 cases were divided into three test groups, including Group 1 (58 cases with visible abnormalities), Group 2 (40 cases with hepatocellular carcinoma [HCC]) and Group 3 (100 cases from a publicly available HCC dataset). Both subjective and objective evaluations were performed. Comparisons, correlation analyses and Bland-Altman plot analyses were performed. RESULTS: The 192nd iteration produced the best generator model (lowest MAE and highest PSNR and SSIM). In test groups 1 and 2, both VMI40keV and Gen-VMI40keV significantly improved CT values, as well as SNR and CNR, for all organs compared to CI. Significant positive correlations for objective indexes were found between Gen-VMI40keV and VMI40keV in various organs and lesions. Bland-Altman analysis showed that the differences between the two imaging types mostly fell within the 95% confidence interval. Pearson's and Spearman's correlation coefficients for objective scores between Gen-VMI40keV and VMI40keV in Groups 1 and 2 ranged from 0.645 to 0.980. In Group 3, Gen-VMI40keV yielded significantly higher CT values for HCC (220.5 HU vs. 109.1 HU) and liver (220.0 HU vs. 112.8 HU) compared to CI (p < 0.01). The CNR for HCC/liver was also significantly higher in Gen-VMI40keV (2.0 vs. 1.2) than in CI (p < 0.01). Additionally, Gen-VMI40keV was subjectively evaluated to have higher image quality compared to CI. CONCLUSION: The CI-VMI40keV model can generate Gen-VMI40keV from conventional CT scans that closely resemble VMI40keV.
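The generator-selection metrics named above (MAE, PSNR, SSIM) can be computed with NumPy and scikit-image as sketched below; the arrays are random stand-ins for matched VMI40keV targets and generated images.

import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(4)
target = rng.random((256, 256)).astype(np.float32)       # stands in for VMI40keV
generated = np.clip(target + rng.normal(0, 0.05, target.shape), 0, 1).astype(np.float32)

mae = np.mean(np.abs(generated - target))
psnr = peak_signal_noise_ratio(target, generated, data_range=1.0)
ssim = structural_similarity(target, generated, data_range=1.0)

print(f"MAE={mae:.4f}  PSNR={psnr:.2f} dB  SSIM={ssim:.3f}")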


Subject(s)
Tomography, X-Ray Computed , Humans , Tomography, X-Ray Computed/methods , Female , Male , Middle Aged , Radiography, Abdominal/methods , Aged , Adult , Radiographic Image Interpretation, Computer-Assisted/methods , Liver Neoplasms/diagnostic imaging , Signal-To-Noise Ratio , Radiography, Dual-Energy Scanned Projection/methods , Carcinoma, Hepatocellular/diagnostic imaging , Aged, 80 and over , Contrast Media
16.
BMC Med Imaging ; 24(1): 141, 2024 Jun 11.
Article in English | MEDLINE | ID: mdl-38862884

ABSTRACT

OBJECTIVE: To evaluate the consistency between doctors and artificial intelligence (AI) software in analysing and diagnosing pulmonary nodules, and to assess whether the characteristics of pulmonary nodules derived from the two methods are consistent for the interpretation of carcinomatous nodules. MATERIALS AND METHODS: This retrospective study analysed participants aged 40-74 years in the local area from 2011 to 2013. Pulmonary nodules were examined radiologically using a low-dose chest CT scan and evaluated both by an expert panel of doctors from the radiology, oncology, and thoracic departments and by a computer-aided diagnosis (CAD) system based on a three-dimensional (3D) convolutional neural network (CNN) with a DenseNet architecture (InferRead CT Lung, IRCL). Consistency tests were employed to assess the uniformity of the radiological characteristics of the pulmonary nodules. The receiver operating characteristic (ROC) curve was used to evaluate diagnostic accuracy. Logistic regression analysis was used to determine whether the two methods yield the same predictive factors for cancerous nodules. RESULTS: A total of 570 subjects were included in this retrospective study. The AI software demonstrated high consistency with the panel's evaluation in determining the position and diameter of the pulmonary nodules (kappa = 0.883, concordance correlation coefficient (CCC) = 0.809, p < 0.001). The comparison of the solid nodules' attenuation characteristics also showed acceptable consistency (kappa = 0.503). In patients diagnosed with lung cancer, the areas under the curve (AUC) for the panel and the AI were 0.873 (95%CI: 0.829-0.909) and 0.921 (95%CI: 0.884-0.949), respectively; however, the difference was not significant (p = 0.0950). Maximum diameter, solid nodules, and subsolid nodules were the crucial factors for interpreting carcinomatous nodules in both the expert panel's and IRCL's analyses of pulmonary nodule characteristics. CONCLUSION: AI software can assist doctors in diagnosing nodules and is consistent with doctors' evaluations and diagnosis of pulmonary nodules.
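The concordance correlation coefficient (CCC) used above for nodule diameters follows directly from Lin's definition; the paired measurements below are synthetic, chosen only to show the calculation.

import numpy as np

def concordance_correlation(x, y):
    """Lin's concordance correlation coefficient between two raters' measurements."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    cov_xy = np.cov(x, y, ddof=1)[0, 1]
    return 2 * cov_xy / (x.var(ddof=1) + y.var(ddof=1) + (x.mean() - y.mean()) ** 2)

# Synthetic nodule diameters (mm): expert panel vs AI software.
rng = np.random.default_rng(5)
panel = rng.uniform(4, 20, 100)
ai = panel + rng.normal(0.3, 1.0, 100)     # AI reads slightly larger, with some noise

print(f"CCC = {concordance_correlation(panel, ai):.3f}")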


Subject(s)
Artificial Intelligence , Diagnosis, Computer-Assisted , Lung Neoplasms , Tomography, X-Ray Computed , Humans , Lung Neoplasms/diagnostic imaging , Retrospective Studies , Middle Aged , Male , Aged , Female , Adult , Diagnosis, Computer-Assisted/methods , Tomography, X-Ray Computed/methods , Early Detection of Cancer/methods , ROC Curve , Neural Networks, Computer , Radiographic Image Interpretation, Computer-Assisted/methods , Software
17.
Med Image Anal ; 96: 103212, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38830326

ABSTRACT

Deformable image registration is an essential component of medical image analysis and plays an irreplaceable role in clinical practice. In recent years, deep learning-based registration methods have demonstrated significant improvements in convenience, robustness and execution time compared to traditional algorithms. However, registering images with large displacements, such as those of the liver organ, remains underexplored and challenging. In this study, we present a novel convolutional neural network (CNN)-based unsupervised learning registration method, Cascaded Multi-scale Spatial-Channel Attention-guided Network (CMAN), which addresses the challenge of large deformation fields using a double coarse-to-fine registration approach. The main contributions of CMAN include: (i) local coarse-to-fine registration in the base network, which generates the displacement field for each resolution and progressively propagates these local deformations as auxiliary information for the final deformation field; (ii) global coarse-to-fine registration, which stacks multiple base networks for sequential warping, thereby incorporating richer multi-layer contextual details into the final deformation field; (iii) integration of the spatial-channel attention module in the decoder stage, which better highlights important features and improves the quality of feature maps. The proposed network was trained using two public datasets and evaluated on another public dataset as well as a private dataset across several experimental scenarios. We compared CMAN with four state-of-the-art CNN-based registration methods and two well-known traditional algorithms. The results show that the proposed double coarse-to-fine registration strategy outperforms other methods in most registration evaluation metrics. In conclusion, CMAN can effectively handle the large-deformation registration problem and show potential for application in clinical practice. The source code is made publicly available at https://github.com/LocPham263/CMAN.git.
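At the heart of any deformable registration pipeline, including the one above, is resampling the moving image with a dense displacement field. The minimal backward-warping step below (written with SciPy) is a generic illustration of that operation, not the CMAN network itself.

import numpy as np
from scipy.ndimage import map_coordinates

def warp_image(moving, displacement):
    """Backward-warp a 2D image: output[y, x] = moving[y + dy, x + dx]."""
    h, w = moving.shape
    grid_y, grid_x = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    coords = np.stack([grid_y + displacement[0], grid_x + displacement[1]])
    return map_coordinates(moving, coords, order=1, mode="nearest")

# Toy example: sample the moving image 5 pixels below and 3 pixels to the left
# of each output pixel, so the bright block appears shifted up and to the right.
moving = np.zeros((64, 64))
moving[20:30, 20:30] = 1.0
field = np.stack([np.full((64, 64), 5.0), np.full((64, 64), -3.0)])
warped = warp_image(moving, field)
print("warped block center of mass:", np.argwhere(warped > 0.5).mean(axis=0))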


Subject(s)
Imaging, Three-Dimensional , Liver , Neural Networks, Computer , Tomography, X-Ray Computed , Humans , Tomography, X-Ray Computed/methods , Liver/diagnostic imaging , Imaging, Three-Dimensional/methods , Algorithms , Deep Learning , Radiographic Image Interpretation, Computer-Assisted/methods
18.
Eur J Radiol ; 176: 111538, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38838412

ABSTRACT

OBJECTIVES: This study aimed to investigate the diagnostic performance of computed tomography (CT) fractional flow reserve (CT-FFR) derived from standard images (STD) and images processed via first-generation (SnapShot Freeze, SSF1) and second-generation (SnapShot Freeze 2, SSF2) motion correction algorithms. METHODS: 151 patients who underwent coronary CT angiography (CCTA) and invasive coronary angiography (ICA)/FFR within 3 months were retrospectively included. CCTA images were reconstructed using an iterative reconstruction technique and then further processed through SSF1 and SSF2 algorithms. All images were divided into three groups: STD, SSF1, and SSF2. Obstructive stenosis was defined as a diameter stenosis of ≥ 50 % in the left main artery or ≥ 70 % in other epicardial vessels. Stenosis with an FFR of ≤ 0.8 or a diameter stenosis of ≥ 90 % (as revealed via ICA) was considered ischemic. In patients with multiple lesions, the lesion with lowest CT-FFR was used for patient-level analysis. RESULTS: The overall quality score in SSF2 group (median = 3.67) was markedly higher than that in STD (median = 3) and SSF1 (median = 3) groups (P < 0.001). The best correlation (r = 0.652, P < 0.001) and consistency (mean difference = 0.04) between the CT-FFR and FFR values were observed in the SSF2 group. At the per-lesion level, CT-FFRSSF2 outperformed CT-FFRSSF1 in diagnosing ischemic lesions (area under the curve = 0.887 vs. 0.795, P < 0.001). At the per-patient level, the SSF2 group also demonstrated the highest diagnostic performance. CONCLUSION: The SSF2 algorithm significantly improved CCTA image quality and enhanced its diagnostic performance for evaluating stenosis severity and CT-FFR calculations.


Subject(s)
Algorithms , Computed Tomography Angiography , Coronary Angiography , Coronary Stenosis , Fractional Flow Reserve, Myocardial , Humans , Fractional Flow Reserve, Myocardial/physiology , Female , Male , Computed Tomography Angiography/methods , Middle Aged , Retrospective Studies , Coronary Angiography/methods , Coronary Stenosis/diagnostic imaging , Coronary Stenosis/physiopathology , Aged , Reproducibility of Results , Radiographic Image Interpretation, Computer-Assisted/methods , Sensitivity and Specificity , Motion
19.
Radiol Med ; 129(7): 999-1007, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38935247

ABSTRACT

PURPOSE: To determine the optimal window setting for virtual monoenergetic images (VMI) reconstructed from dual-layer spectral coronary computed tomography angiography (DE-CCTA) datasets. MATERIAL AND METHODS: 50 patients (30 males; mean age 61.1 ± 12.4 years) who underwent DE-CCTA from May 2021 to June 2022 for suspected coronary artery disease were retrospectively included. Image quality assessment was performed on conventional images and VMI reconstructions at 70 and 40 keV. Objective image quality was assessed using the contrast-to-noise ratio (CNR). Two independent observers manually identified the best window settings (B-W/L) for VMI 70 and VMI 40 visualization. B-W/L were then normalized to aortic attenuation using linear regression analysis to obtain the optimized W/L (O-W/L) settings. Additionally, subjective image quality was evaluated using a 5-point Likert scale, and vessel diameters were measured to examine any potential impact of different W/L settings. RESULTS: VMI 40 demonstrated higher CNR values compared to conventional images and VMI 70. The B-W/L settings identified were 1180/280 HU for VMI 70 and 3290/900 HU for VMI 40. Subsequent linear regression analysis yielded O-W/L settings of 1155/270 HU for VMI 70 and 3230/880 HU for VMI 40. VMI 40 O-W/L received the highest scores for each parameter compared to conventional images (all p < 0.0027). Using O-W/L settings for VMI 70 and VMI 40 did not result in significant differences in vessel measurements compared to conventional images. CONCLUSION: Optimization of VMI requires adjustments in W/L settings. Our results recommend W/L settings of 1155/270 HU for VMI 70 and 3230/880 HU for VMI 40.
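One plausible reading of 'normalized to aortic attenuation using linear regression' is to regress each observer-chosen width and level on the patient's aortic attenuation and evaluate the fit at the cohort mean. The sketch below illustrates that interpretation with synthetic numbers loosely inspired by the abstract; it is an assumption, not the authors' exact procedure.

import numpy as np

rng = np.random.default_rng(6)

# Synthetic per-patient data: aortic attenuation (HU) and the window width/level
# each observer judged best for VMI 40 keV.
aorta_hu = rng.normal(900, 120, 50)
best_width = 3.6 * aorta_hu + rng.normal(0, 80, 50)
best_level = 1.0 * aorta_hu + rng.normal(0, 40, 50)

# Fit W and L as linear functions of aortic attenuation, then evaluate at the mean
# attenuation to obtain a single optimized W/L setting for the cohort.
w_fit = np.polyfit(aorta_hu, best_width, 1)
l_fit = np.polyfit(aorta_hu, best_level, 1)
mean_hu = aorta_hu.mean()
opt_w, opt_l = np.polyval(w_fit, mean_hu), np.polyval(l_fit, mean_hu)
print(f"optimized W/L ~ {opt_w:.0f}/{opt_l:.0f} HU at mean aortic attenuation {mean_hu:.0f} HU")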


Subject(s)
Computed Tomography Angiography , Coronary Angiography , Coronary Artery Disease , Humans , Male , Middle Aged , Female , Computed Tomography Angiography/methods , Retrospective Studies , Coronary Artery Disease/diagnostic imaging , Coronary Angiography/methods , Aged , Coronary Vessels/diagnostic imaging , Radiographic Image Interpretation, Computer-Assisted/methods
20.
Pediatr Radiol ; 54(8): 1315-1324, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38839610

ABSTRACT

BACKGROUND: Low-iodine-dose computed tomography (CT) protocols have emerged to mitigate the risks associated with contrast injection, often resulting in decreased image quality. OBJECTIVE: To evaluate the image quality of low-iodine-dose CT combined with an artificial intelligence (AI)-based contrast-boosting technique in abdominal CT, compared to a standard-iodine-dose protocol in children. MATERIALS AND METHODS: This single-center retrospective study included 35 pediatric patients (mean age 9.2 years, range 1-17 years) who underwent two sequential abdominal CT scans within a 4-month interval from January 2022 to July 2022: one with a standard-iodine-dose protocol (standard-dose group, Iobitridol 350 mgI/mL) and another with a low-iodine-dose protocol (low-dose group, Iohexol 240 mgI/mL). Images from the low-iodine-dose protocol were additionally reconstructed using an AI-based contrast-boosting technique (contrast-boosted group). Quantitative and qualitative parameters were measured in the three groups. For qualitative parameters, interobserver agreement was assessed using the intraclass correlation coefficient, and mean values were employed for subsequent analyses. For quantitative analysis of the three groups, repeated measures one-way analysis of variance with post hoc pairwise analysis was used. For qualitative analysis, the Friedman test followed by post hoc pairwise analysis was used. Paired t-tests were employed to compare radiation dose and iodine uptake between the standard- and low-dose groups. RESULTS: The standard-dose group exhibited higher attenuation, contrast-to-noise ratio (CNR), and signal-to-noise ratio (SNR) of organs and vessels compared to the low-dose group (all P-values < 0.05 except for liver SNR, P = 0.12). However, noise levels did not differ between the standard- and low-dose groups (P = 0.86). The contrast-boosted group had increased attenuation, CNR, and SNR of organs and vessels, and reduced noise, compared with the low-dose group (all P < 0.05). The contrast-boosted group showed no differences in attenuation, CNR, and SNR of organs and vessels (all P > 0.05), and lower noise (P = 0.002), compared with the standard-dose group. In qualitative analysis, the contrast-boosted group did not differ regarding vessel enhancement and lesion conspicuity (P > 0.05) but had lower noise (P < 0.05) and higher organ enhancement and artifacts (all P < 0.05) than the standard-dose group. While iodine uptake was significantly reduced in low-iodine-dose CT (P < 0.001), there was no difference in radiation dose between standard- and low-iodine-dose CT (all P > 0.05). CONCLUSION: Low-iodine-dose abdominal CT combined with an AI-based contrast-boosting technique exhibited comparable organ and vessel enhancement, as well as lesion conspicuity, compared to standard-iodine-dose CT in children. Moreover, image noise decreased in the contrast-boosted group, albeit with an increase in artifacts.


Subject(s)
Artificial Intelligence , Contrast Media , Tomography, X-Ray Computed , Humans , Retrospective Studies , Child , Female , Male , Contrast Media/administration & dosage , Child, Preschool , Tomography, X-Ray Computed/methods , Infant , Adolescent , Iohexol/administration & dosage , Radiation Dosage , Radiographic Image Interpretation, Computer-Assisted/methods , Radiography, Abdominal/methods