1.
Cancer Imaging ; 24(1): 60, 2024 May 09.
Article in English | MEDLINE | ID: mdl-38720391

ABSTRACT

BACKGROUND: This study systematically compares the impact of innovative deep learning image reconstruction (DLIR, TrueFidelity) to conventionally used iterative reconstruction (IR) on nodule volumetry and subjective image quality (IQ) at highly reduced radiation doses. This is essential in the context of low-dose CT lung cancer screening, where accurate volumetry and characterization of pulmonary nodules in repeated CT scanning are indispensable. MATERIALS AND METHODS: A standardized CT dataset was established using an anthropomorphic chest phantom (Lungman, Kyoto Kagaku Inc., Kyoto, Japan) containing a set of 3D-printed lung nodules spanning six diameters (4 to 9 mm) and three morphology classes (lobular, spiculated, smooth), with an established ground truth. Images were acquired at varying radiation doses (6.04, 3.03, 1.54, 0.77, 0.41 and 0.20 mGy) and reconstructed with combinations of reconstruction kernels (soft and hard) and reconstruction algorithms (ASIR-V and DLIR at low, medium and high strength). Semi-automatic volumetry measurements and subjective image quality scores recorded by five radiologists were analyzed with multiple linear regression and mixed-effect ordinal logistic regression models. RESULTS: Volumetric errors of nodules imaged with DLIR are up to 50% lower compared to ASIR-V, especially at radiation doses below 1 mGy and when reconstructed with a hard kernel. Also, across all nodule diameters and morphologies, volumetric errors are commonly lower with DLIR. Furthermore, DLIR renders higher subjective IQ, especially at sub-mGy doses. Radiologists were up to nine times more likely to assign the highest IQ score to these images compared to those reconstructed with ASIR-V. Lung nodules with irregular margins and small diameters were also more likely (up to five times) to be ascribed the best IQ scores when reconstructed with DLIR.
CONCLUSION: We observed that DLIR performs as well as or even outperforms conventionally used reconstruction algorithms in terms of volumetric accuracy and subjective IQ of nodules in an anthropomorphic chest phantom. As such, DLIR may allow lowering the radiation dose for participants of lung cancer screening without compromising accurate measurement and characterization of lung nodules.
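The volumetric error analyzed in this study is conventionally the percent deviation of a semi-automatic volume measurement from the 3D-printed ground truth. A minimal sketch of that metric, assuming spherical ground-truth nodules (function names are illustrative, not from the study):

```python
from math import pi

def sphere_volume(diameter_mm):
    """Volume of an ideal spherical nodule of the given diameter, in mm^3."""
    r = diameter_mm / 2
    return 4 / 3 * pi * r ** 3

def percent_volume_error(measured_mm3, true_mm3):
    """Signed volumetric error relative to ground truth, in percent."""
    return 100 * (measured_mm3 - true_mm3) / true_mm3
```

For example, a measurement 10% above truth yields a +10% volumetric error regardless of nodule size, which is what makes the metric comparable across the 4-9 mm diameter range.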


Subject(s)
Deep Learning , Lung Neoplasms , Multiple Pulmonary Nodules , Phantoms, Imaging , Radiation Dosage , Tomography, X-Ray Computed , Humans , Tomography, X-Ray Computed/methods , Multiple Pulmonary Nodules/diagnostic imaging , Multiple Pulmonary Nodules/pathology , Lung Neoplasms/diagnostic imaging , Lung Neoplasms/pathology , Solitary Pulmonary Nodule/diagnostic imaging , Solitary Pulmonary Nodule/pathology , Radiographic Image Interpretation, Computer-Assisted/methods , Image Processing, Computer-Assisted/methods
3.
Radiology ; 311(2): e232178, 2024 May.
Article in English | MEDLINE | ID: mdl-38742970

ABSTRACT

Background Accurate characterization of suspicious small renal masses is crucial for optimized management. Deep learning (DL) algorithms may assist with this effort. Purpose To develop and validate a DL algorithm for identifying benign small renal masses at contrast-enhanced multiphase CT. Materials and Methods Surgically resected renal masses measuring 3 cm or less in diameter at contrast-enhanced CT were included. The DL algorithm was developed by using retrospective data from one hospital between 2009 and 2021, with patients randomly allocated to training and internal test sets in an 8:2 ratio. Between 2013 and 2021, external testing was performed on data from five independent hospitals. A prospective test set was obtained between 2021 and 2022 from one hospital. Algorithm performance was evaluated by using the area under the receiver operating characteristic curve (AUC) and compared with the results of seven clinicians using the DeLong test. Results A total of 1703 patients (mean age, 56 years ± 12 [SD]; 619 female) with a single renal mass per patient were evaluated. The retrospective data set included 1063 lesions (874 in the training set, 189 in the internal test set); the multicenter external test set included 537 lesions (66 benign [12.3%]), of which 89 (16.6%) were subcentimeter (≤1 cm); and the prospective test set included 103 lesions (14 benign [13.6%]), of which 20 (19.4%) were subcentimeter. The DL algorithm performance was comparable with that of urological radiologists: for the external test set, AUC was 0.80 (95% CI: 0.75, 0.85) versus 0.84 (95% CI: 0.78, 0.88) (P = .61); for the prospective test set, AUC was 0.87 (95% CI: 0.79, 0.93) versus 0.92 (95% CI: 0.86, 0.96) (P = .70). For subcentimeter lesions in the external test set, the algorithm and urological radiologists had similar AUCs of 0.74 (95% CI: 0.63, 0.83) and 0.81 (95% CI: 0.68, 0.92) (P = .78), respectively.
Conclusion The multiphase CT-based DL algorithm showed comparable performance with that of radiologists for identifying benign small renal masses, including lesions of 1 cm or less. Published under a CC BY 4.0 license. Supplemental material is available for this article.
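The AUC values this study compares with the DeLong test have a useful probabilistic reading: the chance that a randomly chosen positive case receives a higher algorithm score than a randomly chosen negative case. A pure-Python sketch of that rank-based (Mann-Whitney) equivalence, for illustration only:

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve computed as the Mann-Whitney statistic:
    the fraction of positive/negative pairs in which the positive case
    scores higher (ties count as 1/2)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

An AUC of 0.80, as reported for the external test set, thus means the algorithm ranks a random benign-vs-malignant pair correctly about 80% of the time.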


Subject(s)
Contrast Media , Deep Learning , Kidney Neoplasms , Tomography, X-Ray Computed , Humans , Female , Male , Middle Aged , Kidney Neoplasms/diagnostic imaging , Kidney Neoplasms/pathology , Retrospective Studies , Tomography, X-Ray Computed/methods , Prospective Studies , Radiographic Image Interpretation, Computer-Assisted/methods , Aged , Algorithms , Kidney/diagnostic imaging , Adult
5.
Radiology ; 311(2): e233270, 2024 May.
Article in English | MEDLINE | ID: mdl-38713028

ABSTRACT

Background Generating radiologic findings from chest radiographs is pivotal in medical image analysis. The emergence of OpenAI's generative pretrained transformer, GPT-4 with vision (GPT-4V), has opened new perspectives on the potential for automated image-text pair generation. However, the application of GPT-4V to real-world chest radiography is yet to be thoroughly examined. Purpose To investigate the capability of GPT-4V to generate radiologic findings from real-world chest radiographs. Materials and Methods In this retrospective study, 100 chest radiographs with free-text radiology reports were annotated by a cohort of radiologists (two attending physicians and three residents) to establish a reference standard. Of the 100 chest radiographs, 50 were randomly selected from the National Institutes of Health (NIH) chest radiographic data set, and 50 were randomly selected from the Medical Imaging and Data Resource Center (MIDRC). The performance of GPT-4V at detecting imaging findings from each chest radiograph was assessed in the zero-shot setting (where it operates without prior examples) and the few-shot setting (where it operates with two examples). Its outcomes were compared with the reference standard with regard to clinical conditions and their corresponding codes in the International Statistical Classification of Diseases, Tenth Revision (ICD-10), including the anatomic location (hereafter, laterality). Results In the zero-shot setting, in the task of detecting ICD-10 codes alone, GPT-4V attained an average positive predictive value (PPV) of 12.3%, average true-positive rate (TPR) of 5.8%, and average F1 score of 7.3% on the NIH data set, and an average PPV of 25.0%, average TPR of 16.8%, and average F1 score of 18.2% on the MIDRC data set.
When both the ICD-10 codes and their corresponding laterality were considered, GPT-4V produced an average PPV of 7.8%, average TPR of 3.5%, and average F1 score of 4.5% on the NIH data set, and an average PPV of 10.9%, average TPR of 4.9%, and average F1 score of 6.4% on the MIDRC data set. With few-shot learning, GPT-4V showed improved performance on both data sets. When contrasting zero-shot and few-shot learning, there were improved average TPRs and F1 scores in the few-shot setting, but there was not a substantial increase in the average PPV. Conclusion Although GPT-4V has shown promise in understanding natural images, it had limited effectiveness in interpreting real-world chest radiographs. © RSNA, 2024 Supplemental material is available for this article.
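The PPV, TPR, and F1 figures above summarize set overlap between the codes GPT-4V predicts per radiograph and the reference-standard codes. A sketch of one common convention, micro-averaging over per-image code sets (the paper's exact averaging may differ, and the code strings here are hypothetical):

```python
def micro_prf(predicted, reference):
    """Micro-averaged PPV (precision), TPR (recall), and F1 over paired
    per-image sets of diagnostic codes."""
    tp = fp = fn = 0
    for pred, ref in zip(predicted, reference):
        tp += len(pred & ref)   # codes found and correct
        fp += len(pred - ref)   # codes predicted but absent from reference
        fn += len(ref - pred)   # reference codes the model missed
    ppv = tp / (tp + fp) if tp + fp else 0.0
    tpr = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * ppv * tpr / (ppv + tpr) if ppv + tpr else 0.0
    return ppv, tpr, f1
```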


Subject(s)
Radiography, Thoracic , Humans , Radiography, Thoracic/methods , Retrospective Studies , Female , Male , Middle Aged , Radiographic Image Interpretation, Computer-Assisted/methods , Aged , Adult
6.
Eur Radiol Exp ; 8(1): 54, 2024 May 03.
Article in English | MEDLINE | ID: mdl-38698099

ABSTRACT

BACKGROUND: We aimed to improve the image quality (IQ) of sparse-view computed tomography (CT) images using a U-Net for lung metastasis detection and to determine the best tradeoff between number of views, IQ, and diagnostic confidence. METHODS: CT images from 41 subjects (mean age, 62.8 ± 10.6 years [mean ± standard deviation]; 23 men), 34 with lung metastasis and 7 healthy, were retrospectively selected (2016-2018) and forward projected onto 2,048-view sinograms. Six corresponding sparse-view CT data subsets at varying levels of undersampling were reconstructed from sinograms using filtered backprojection with 16, 32, 64, 128, 256, and 512 views. A dual-frame U-Net was trained and evaluated for each subsampling level on 8,658 images from 22 diseased subjects. A representative image per scan was selected from 19 subjects (12 diseased, 7 healthy) for a single-blinded multireader study. These slices, for all levels of subsampling, with and without U-Net postprocessing, were presented to three readers. IQ and diagnostic confidence were ranked using predefined scales. Subjective nodule segmentation was evaluated using sensitivity and the Dice similarity coefficient (DSC); a clustered Wilcoxon signed-rank test was used. RESULTS: The 64-projection sparse-view images resulted in 0.89 sensitivity and 0.81 DSC, while their counterparts, postprocessed with the U-Net, had improved metrics (0.94 sensitivity and 0.85 DSC) (p = 0.400). Fewer views led to insufficient IQ for diagnosis. For increased views, no substantial discrepancies were noted between sparse-view and postprocessed images. CONCLUSIONS: Projection views can be reduced from 2,048 to 64 while maintaining IQ and radiologists' confidence at a satisfactory level. RELEVANCE STATEMENT: Our reader study demonstrates the benefit of U-Net postprocessing for regular CT screenings of patients with lung metastasis to increase the IQ and diagnostic confidence while reducing the dose.
KEY POINTS: • Sparse-projection-view streak artifacts reduce the quality and usability of sparse-view CT images. • U-Net-based postprocessing removes sparse-view artifacts while maintaining diagnostically accurate IQ. • Postprocessed sparse-view CTs drastically increase radiologists' confidence in diagnosing lung metastasis.
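The DSC used to score nodule segmentations in this study measures overlap between two binary masks as twice the intersection over the sum of mask sizes. A minimal sketch on flattened 0/1 masks:

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks given as
    flat sequences of 0/1 values: 2|A ∩ B| / (|A| + |B|)."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    # Convention: two empty masks are in perfect agreement.
    return 2 * inter / total if total else 1.0
```

A DSC of 0.85, as reported for the U-Net-postprocessed 64-view images, indicates substantial but not pixel-perfect overlap with the reader's reference outline.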


Subject(s)
Lung Neoplasms , Tomography, X-Ray Computed , Humans , Lung Neoplasms/diagnostic imaging , Male , Middle Aged , Tomography, X-Ray Computed/methods , Female , Retrospective Studies , Radiographic Image Interpretation, Computer-Assisted/methods , Aged
7.
J Appl Clin Med Phys ; 25(5): e14337, 2024 May.
Article in English | MEDLINE | ID: mdl-38576183

ABSTRACT

PURPOSE: The quality of on-board imaging systems, including cone-beam computed tomography (CBCT), plays a vital role in image-guided radiation therapy (IGRT) and adaptive radiotherapy. Recently, the CBCT systems integrated into the O-ring linear accelerators have been upgraded to HyperSight, featuring high imaging performance. As the characterization of a new imaging system is essential, we evaluated the image quality of the HyperSight system by comparing it with Halcyon 3.0 CBCT and providing benchmark data for routine imaging quality assurance. METHODS: HyperSight features ultra-fast scan times, a larger kilovoltage (kV) detector, a more powerful kV tube, and an advanced reconstruction algorithm. Imaging protocols in the two modes of operation, the treatment mode with IGRT and the CBCT for planning (CBCTp) mode, were evaluated and compared with Halcyon 3.0 CBCT. Image quality metrics, including spatial resolution, contrast resolution, uniformity, noise, computed tomography (CT) number linearity, and calibration error, were assessed using a Catphan and an electron density phantom and analyzed with TotalQA software. RESULTS: HyperSight demonstrated substantial improvements in contrast-to-noise ratio and noise in both IGRT and CBCTp modes compared to Halcyon 3.0 CBCT. The CT number calibration error of the HyperSight CBCTp mode (1.06%) closely matches that of a full CT scanner (0.72%), making it suitable for adaptive planning. In addition, the advanced hardware of HyperSight, such as the ultra-fast scan time (5.9 s) and 2.5 times larger heat unit capacity, enhanced clinical efficiency in our experience. CONCLUSIONS: HyperSight represents a significant advancement in CBCT imaging. With its image quality, CT number accuracy, and ultra-fast scans, HyperSight has the potential to transform patient care and treatment outcomes.
The enhanced scan speed and image quality of HyperSight are expected to significantly improve the quality and efficiency of treatment, particularly benefiting patients.
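The contrast-to-noise ratio improvements reported here rest on a standard ROI-based definition, although exact CNR formulas vary between QA protocols. One common form, sketched for illustration, divides the mean signal difference between two regions of interest by the background noise:

```python
from statistics import mean, stdev

def cnr(roi_signal, roi_background):
    """Contrast-to-noise ratio from two ROIs (lists of pixel values):
    absolute difference of ROI means divided by the background
    standard deviation. One of several CNR conventions in use."""
    return abs(mean(roi_signal) - mean(roi_background)) / stdev(roi_background)
```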


Subject(s)
Algorithms , Cone-Beam Computed Tomography , Image Processing, Computer-Assisted , Particle Accelerators , Phantoms, Imaging , Radiotherapy Dosage , Radiotherapy Planning, Computer-Assisted , Radiotherapy, Image-Guided , Cone-Beam Computed Tomography/methods , Particle Accelerators/instrumentation , Humans , Radiotherapy Planning, Computer-Assisted/methods , Image Processing, Computer-Assisted/methods , Radiotherapy, Image-Guided/methods , Radiotherapy, Intensity-Modulated/methods , Quality Assurance, Health Care/standards , Radiographic Image Interpretation, Computer-Assisted/methods
8.
Radiol Artif Intell ; 6(3): e230375, 2024 May.
Article in English | MEDLINE | ID: mdl-38597784

ABSTRACT

Purpose To explore the stand-alone breast cancer detection performance, at different risk score thresholds, of a commercially available artificial intelligence (AI) system. Materials and Methods This retrospective study included information from 661 695 digital mammographic examinations performed among 242 629 female individuals screened as a part of BreastScreen Norway, 2004-2018. The study sample included 3807 screen-detected cancers and 1110 interval breast cancers. A continuous examination-level risk score by the AI system was used to measure performance as the area under the receiver operating characteristic curve (AUC) with 95% CIs and cancer detection at different AI risk score thresholds. Results The AUC of the AI system was 0.93 (95% CI: 0.92, 0.93) for screen-detected cancers and interval breast cancers combined and 0.97 (95% CI: 0.97, 0.97) for screen-detected cancers. In a setting where the 10% of examinations with the highest AI risk scores were defined as positive and the 90% with the lowest scores as negative, 92.0% (3502 of 3807) of the screen-detected cancers and 44.6% (495 of 1110) of the interval breast cancers were identified with AI. In this scenario, 68.5% (10 987 of 16 040) of false-positive screening results (negative recall assessment) were considered negative by AI. When 50% was used as the cutoff, 99.3% (3781 of 3807) of the screen-detected cancers and 85.2% (946 of 1110) of the interval breast cancers were identified as positive by AI, whereas 17.0% (2725 of 16 040) of the false-positive results were considered negative. Conclusion The AI system showed high performance in detecting breast cancers within 2 years of screening mammography and potential for use in triaging low-risk mammograms to reduce radiologist workload. Keywords: Mammography, Breast, Screening, Convolutional Neural Network (CNN), Deep Learning Algorithms Supplemental material is available for this article. © RSNA, 2024 See also commentary by Bahl and Do in this issue.
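The thresholding scheme in this study (flag the top 10% or top 50% of continuous risk scores as positive) can be sketched in a few lines. This is an illustrative simplification; ties at the cutoff could flag slightly more examinations than the nominal fraction:

```python
def flag_top_fraction(risk_scores, fraction):
    """Mark the given fraction of examinations with the highest AI risk
    scores as positive (1) and the rest as negative (0)."""
    n_pos = round(len(risk_scores) * fraction)
    top = sorted(risk_scores, reverse=True)[:n_pos]
    threshold = min(top) if top else float("inf")
    return [1 if s >= threshold else 0 for s in risk_scores]
```

Sweeping `fraction` from 0 to 1 traces out the tradeoff reported in the abstract between cancers captured and false positives retained.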


Subject(s)
Artificial Intelligence , Breast Neoplasms , Early Detection of Cancer , Mammography , Humans , Breast Neoplasms/diagnostic imaging , Breast Neoplasms/epidemiology , Breast Neoplasms/diagnosis , Female , Mammography/methods , Norway/epidemiology , Retrospective Studies , Middle Aged , Early Detection of Cancer/methods , Aged , Adult , Mass Screening/methods , Radiographic Image Interpretation, Computer-Assisted/methods
9.
Radiol Artif Intell ; 6(3): e230318, 2024 May.
Article in English | MEDLINE | ID: mdl-38568095

ABSTRACT

Purpose To develop an artificial intelligence (AI) model for the diagnosis of breast cancer on digital breast tomosynthesis (DBT) images and to investigate whether it could improve diagnostic accuracy and reduce radiologist reading time. Materials and Methods A deep learning AI algorithm was developed and validated for DBT with retrospectively collected examinations (January 2010 to December 2021) from 14 institutions in the United States and South Korea. A multicenter reader study was performed to compare the performance of 15 radiologists (seven breast specialists, eight general radiologists) in interpreting DBT examinations in 258 women (mean age, 56 years ± 13.41 [SD]), including 65 cancer cases, with and without the use of AI. Area under the receiver operating characteristic curve (AUC), sensitivity, specificity, and reading time were evaluated. Results The AUC for stand-alone AI performance was 0.93 (95% CI: 0.92, 0.94). With AI, radiologists' AUC improved from 0.90 (95% CI: 0.86, 0.93) to 0.92 (95% CI: 0.88, 0.96) (P = .003) in the reader study. AI showed higher specificity (89.64% [95% CI: 85.34%, 93.94%]) than radiologists (77.34% [95% CI: 75.82%, 78.87%]) (P < .001). When reading with AI, radiologists' sensitivity increased from 85.44% (95% CI: 83.22%, 87.65%) to 87.69% (95% CI: 85.63%, 89.75%) (P = .04), with no evidence of a difference in specificity. Reading time decreased from 54.41 seconds (95% CI: 52.56, 56.27) without AI to 48.52 seconds (95% CI: 46.79, 50.25) with AI (P < .001). Interreader agreement measured by Fleiss κ increased from 0.59 to 0.62. Conclusion The AI model showed better diagnostic accuracy than radiologists in breast cancer detection, as well as reduced reading times. The concurrent use of AI in DBT interpretation could improve both accuracy and efficiency. 
Keywords: Breast, Computer-Aided Diagnosis (CAD), Tomosynthesis, Artificial Intelligence, Digital Breast Tomosynthesis, Breast Cancer, Computer-Aided Detection, Screening Supplemental material is available for this article. © RSNA, 2024 See also the commentary by Bae in this issue.
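The interreader agreement reported above uses Fleiss' kappa, which compares observed agreement among multiple readers against agreement expected by chance. A compact sketch of the standard formula (illustrative; not the study's analysis code):

```python
def fleiss_kappa(ratings):
    """Fleiss' kappa for inter-reader agreement. `ratings` is a list of
    items; each item is a list of counts of readers per category, with
    every item rated by the same number of readers."""
    n_items = len(ratings)
    n_raters = sum(ratings[0])
    # Mean per-item agreement across reader pairs.
    p_bar = sum(
        (sum(c * c for c in item) - n_raters) / (n_raters * (n_raters - 1))
        for item in ratings
    ) / n_items
    # Chance agreement from the category marginals.
    totals = [sum(item[j] for item in ratings) for j in range(len(ratings[0]))]
    p_e = sum((t / (n_items * n_raters)) ** 2 for t in totals)
    return (p_bar - p_e) / (1 - p_e)
```

Values near 0.6, as in this reader study, are conventionally read as moderate-to-substantial agreement.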


Subject(s)
Artificial Intelligence , Breast Neoplasms , Mammography , Sensitivity and Specificity , Humans , Female , Breast Neoplasms/diagnostic imaging , Middle Aged , Mammography/methods , Retrospective Studies , Radiographic Image Interpretation, Computer-Assisted/methods , Republic of Korea/epidemiology , Deep Learning , Adult , Time Factors , Algorithms , United States , Reproducibility of Results
10.
Comput Biol Med ; 175: 108505, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38688129

ABSTRACT

The latest developments in deep learning have demonstrated the importance of CT medical imaging for the classification of pulmonary nodules. However, challenges remain in fully leveraging the relevant medical annotations of pulmonary nodules and distinguishing between the benign and malignant labels of adjacent nodules. Therefore, this paper proposes the Nodule-CLIP model, which deeply mines the potential relationships between CT images, complex attributes of lung nodules, and the benign and malignant attributes of lung nodules through a contrastive learning method, and uses these similarities and differences to optimize the image feature extraction network, improving its ability to distinguish similar lung nodules. First, we segment the 3D lung nodule region with a U-Net to reduce the interference caused by the background and focus on the lung nodule images. Second, the image features, class features, and complex attribute features are aligned by contrastive learning and a loss function in Nodule-CLIP to optimize the lung nodule image representation and improve classification ability. A series of testing and ablation experiments was conducted on the public dataset LIDC-IDRI; the final benign-malignant classification accuracy was 90.6%, and the recall was 92.81%. The experimental results show the advantages of this method in terms of lung nodule classification as well as interpretability.
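CLIP-style alignment of the kind Nodule-CLIP builds on pulls matched image/attribute pairs together and pushes mismatched pairs apart via a symmetric cross-entropy over a similarity matrix. A simplified pure-Python sketch of that idea (not the paper's exact loss; matched pairs are assumed to lie on the diagonal):

```python
from math import exp, log

def clip_style_loss(sim):
    """Symmetric InfoNCE-style contrastive loss over a square similarity
    matrix sim[i][j] between image i and attribute/text embedding j."""
    n = len(sim)

    def ce_rows(m):
        # Cross-entropy pushing each row's diagonal entry to dominate.
        loss = 0.0
        for i in range(n):
            denom = sum(exp(v) for v in m[i])
            loss += -log(exp(m[i][i]) / denom)
        return loss / n

    transposed = [list(col) for col in zip(*sim)]
    return 0.5 * (ce_rows(sim) + ce_rows(transposed))
```

Training drives the diagonal similarities up, so the loss shrinks toward zero as image and attribute features become aligned.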


Subject(s)
Lung Neoplasms , Solitary Pulmonary Nodule , Tomography, X-Ray Computed , Humans , Lung Neoplasms/diagnostic imaging , Lung Neoplasms/classification , Lung Neoplasms/pathology , Tomography, X-Ray Computed/methods , Solitary Pulmonary Nodule/diagnostic imaging , Deep Learning , Lung/diagnostic imaging , Radiographic Image Interpretation, Computer-Assisted/methods , Databases, Factual
11.
J Thorac Imaging ; 39(3): 194-199, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38640144

ABSTRACT

PURPOSE: To develop and evaluate a deep convolutional neural network (DCNN) model for the classification of acute and chronic lung nodules from nontuberculous mycobacterial lung disease (NTM-LD) on computed tomography (CT). MATERIALS AND METHODS: We collected a data set of 650 nodules (316 acute and 334 chronic) from the CT scans of 110 patients with NTM-LD. The data set was divided into training, validation, and test sets in a ratio of 4:1:1. Bounding boxes were used to crop the 2D CT images down to the area of interest. A DCNN model was built using 11 convolutional layers and trained on these images. The performance of the model was evaluated on the hold-out test set and compared with that of 3 radiologists who independently reviewed the images. RESULTS: The DCNN model achieved an area under the receiver operating characteristic curve of 0.806 for differentiating acute and chronic NTM-LD nodules, corresponding to sensitivity, specificity, and accuracy of 76%, 68%, and 72%, respectively. The performance of the model was comparable to that of the 3 radiologists, who had areas under the receiver operating characteristic curve of 0.693 to 0.771, sensitivities of 61% to 82%, specificities of 59% to 73%, and accuracies of 60% to 73%. CONCLUSIONS: This study demonstrated the feasibility of using a DCNN model for the classification of the activity of NTM-LD nodules on chest CT. The model performance was comparable to that of radiologists. This approach could potentially improve the efficiency of diagnosis and management of NTM-LD.
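The sensitivity, specificity, and accuracy triple reported for both the model and the radiologists comes straight from the binary confusion matrix. A minimal sketch (labels are hypothetical; 1 = acute, 0 = chronic):

```python
def binary_metrics(pred, truth):
    """Sensitivity, specificity, and accuracy for paired binary labels."""
    tp = sum(p == 1 and t == 1 for p, t in zip(pred, truth))
    tn = sum(p == 0 and t == 0 for p, t in zip(pred, truth))
    fp = sum(p == 1 and t == 0 for p, t in zip(pred, truth))
    fn = sum(p == 0 and t == 1 for p, t in zip(pred, truth))
    sens = tp / (tp + fn)   # fraction of acute nodules caught
    spec = tn / (tn + fp)   # fraction of chronic nodules correctly cleared
    acc = (tp + tn) / len(truth)
    return sens, spec, acc
```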


Subject(s)
Deep Learning , Lung Neoplasms , Pneumonia , Humans , Neural Networks, Computer , Tomography, X-Ray Computed/methods , Radiographic Image Interpretation, Computer-Assisted/methods , Retrospective Studies , Lung Neoplasms/diagnostic imaging
12.
Comput Biol Med ; 174: 108420, 2024 May.
Article in English | MEDLINE | ID: mdl-38613896

ABSTRACT

BACKGROUND AND OBJECTIVE: Liver tumor segmentation (LiTS) accuracy on contrast-enhanced computed tomography (CECT) images is higher than that on non-contrast computed tomography (NCCT) images. However, CECT requires contrast medium and repeated scans to obtain multiphase enhanced CT images, which is time-consuming and costly. Therefore, despite its lower accuracy, LiTS on NCCT images still plays an irreplaceable role in some clinical settings, such as guided brachytherapy, ablation, or the evaluation of patients with impaired renal function. In this study, we aim to generate enhanced high-contrast pseudo-color CT (PCCT) images to improve the accuracy of LiTS and RECIST diameter measurement on NCCT images. METHODS: To generate high-contrast CT liver tumor region images, an intensity-based tumor conspicuity enhancement (ITCE) model was first developed. In the ITCE model, a pseudo-color conversion function based on the intensity distribution of the tumor was established and applied to NCCT to generate enhanced PCCT images. Additionally, we designed a tumor conspicuity enhancement-based liver tumor segmentation (TCELiTS) model, which was applied to improve the segmentation of liver tumors on NCCT images. The TCELiTS model consists of three components: an image enhancement module based on the ITCE model, a segmentation module based on a deep convolutional neural network, and an attention loss module based on restricted activation. Segmentation performance was analyzed using the Dice similarity coefficient (DSC), sensitivity, specificity, and RECIST diameter error. RESULTS: To develop the deep learning model, 100 patients with histopathologically confirmed liver tumors (hepatocellular carcinoma, 64 patients; hepatic hemangioma, 36 patients) were randomly divided into a training set (75 patients) and an independent test set (25 patients).
Compared with existing automatic tumor segmentation networks trained on CECT images (U-Net, nnU-Net, DeepLab-V3, Modified U-Net), the DSCs achieved on the enhanced PCCT images are all improved relative to those on NCCT images: from 0.696 to 0.713 for U-Net, 0.715 to 0.776 for nnU-Net, 0.748 to 0.788 for DeepLab-V3, and 0.733 to 0.799 for Modified U-Net. In addition, an observer study including 5 doctors was conducted to compare the segmentation performance on enhanced PCCT images with that on NCCT images and showed that enhanced PCCT images are more advantageous for doctors segmenting tumor regions. The results showed an accuracy improvement of approximately 3%-6%, while the time required to segment a single CT image was reduced by approximately 50%. CONCLUSIONS: Experimental results show that the ITCE model can generate high-contrast enhanced PCCT images, especially in liver regions, and the TCELiTS model can improve LiTS accuracy on NCCT images.
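The core idea of an intensity-to-pseudo-color transfer is to map Hounsfield values near the tumor's intensity peak to a distinct color while leaving distant intensities visually cool. The sketch below is purely illustrative; the actual ITCE function is derived from the measured tumor intensity distribution, and this toy version just uses a mean/SD window:

```python
def pseudo_color(hu, tumor_mean, tumor_sd):
    """Map a Hounsfield value to an RGB triple that highlights a tumor
    intensity window (hypothetical transfer function, not the paper's)."""
    # Normalized distance from the tumor intensity peak, clipped to [0, 1].
    z = min(abs(hu - tumor_mean) / (3 * tumor_sd), 1.0)
    warmth = 1.0 - z   # 1 inside the tumor window, 0 far away
    return (round(255 * warmth), round(128 * warmth), round(255 * z))
```

Voxels at the tumor peak render warm orange, and voxels far outside the window render blue, which is the conspicuity boost the enhanced PCCT images aim for.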


Subject(s)
Liver Neoplasms , Tomography, X-Ray Computed , Humans , Liver Neoplasms/diagnostic imaging , Tomography, X-Ray Computed/methods , Male , Female , Radiographic Image Interpretation, Computer-Assisted/methods , Liver/diagnostic imaging , Middle Aged , Aged
13.
Comput Biol Med ; 173: 108361, 2024 May.
Article in English | MEDLINE | ID: mdl-38569236

ABSTRACT

Deep learning plays a significant role in the detection of pulmonary nodules in low-dose computed tomography (LDCT) scans, contributing to the diagnosis and treatment of lung cancer. Nevertheless, its effectiveness often relies on the availability of extensive, meticulously annotated datasets. In this paper, we explore the utilization of an incompletely annotated dataset for pulmonary nodule detection and introduce the FULFIL (Forecasting Uncompleted Labels For Inexpensive Lung nodule detection) algorithm as an innovative approach. By instructing annotators to label only the nodules they are most confident about, without requiring complete coverage, we can substantially reduce annotation costs. However, this approach results in an incompletely annotated dataset, which presents challenges when training deep learning models. Within the FULFIL algorithm, we employ a Graph Convolution Network (GCN) to discover the relationships between annotated and unannotated nodules for self-adaptively completing the annotation. Meanwhile, a teacher-student framework is employed for self-adaptive learning on the completed annotation dataset. Furthermore, we have designed a Dual-Views loss to leverage different data perspectives, aiding the model in acquiring robust features and enhancing generalization. We carried out experiments using the LUng Nodule Analysis (LUNA) dataset, achieving a sensitivity of 0.574 at 0.125 false positives per scan (FPs/scan) with only 10% instance-level annotations for nodules, outperforming comparative methods by 7.00%. Experimental comparisons were also conducted to evaluate the performance of our model against human experts on the test dataset; the results demonstrate that our model can achieve a level of performance comparable to that of human experts.
The comprehensive experimental results demonstrate that FULFIL can effectively leverage an incomplete pulmonary nodule dataset to develop a robust deep learning model, making it a promising tool for assisting in lung nodule detection.
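The "sensitivity at 0.125 FPs/scan" figure is an operating point on a FROC-style curve: sweep the detector's score threshold and report sensitivity at the loosest threshold whose false-positive rate per scan stays within budget. A simplified sketch of that computation (input format is hypothetical):

```python
def sensitivity_at_fp_rate(candidates, n_true_nodules, n_scans, fps_per_scan):
    """Sensitivity at a FROC operating point. `candidates` is a list of
    (score, is_true_nodule) detection pairs pooled across scans."""
    best_sens = 0.0
    for t in sorted({s for s, _ in candidates}, reverse=True):
        kept = [(s, lab) for s, lab in candidates if s >= t]
        fps = sum(1 for _, lab in kept if not lab) / n_scans
        if fps <= fps_per_scan:
            sens = sum(1 for _, lab in kept if lab) / n_true_nodules
            best_sens = max(best_sens, sens)
    return best_sens
```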


Subject(s)
Deep Learning , Lung Neoplasms , Solitary Pulmonary Nodule , Humans , Solitary Pulmonary Nodule/diagnostic imaging , Lung Neoplasms/diagnostic imaging , Radiographic Image Interpretation, Computer-Assisted/methods , Lung/diagnostic imaging
14.
Biomed Phys Eng Express ; 10(4)2024 May 07.
Article in English | MEDLINE | ID: mdl-38663368

ABSTRACT

The intricate nature of lung cancer treatment poses considerable challenges upon diagnosis. Early detection plays a pivotal role in mitigating its escalating global mortality rates. Consequently, there are pressing demands for robust and dependable early detection and diagnostic systems. However, the technological limitations and complexity of the disease make it challenging to implement an efficient lung cancer screening system. AI-based CT image analysis techniques are contributing significantly to the development of computer-assisted detection (CAD) systems for lung cancer screening. Various research groups are working on CT image analysis systems for assessing and classifying lung cancer. However, the complexity of the different structures inside a CT image is high, and comprehending the significant information they contain remains difficult even after applying advanced feature extraction and feature selection techniques. Traditional and classical feature selection techniques may struggle to capture complex interdependencies between features; they may get stuck in local optima and sometimes require additional exploration strategies. Traditional techniques may also struggle with combinatorial optimization problems when applied to a large feature space. This paper proposes a methodology to overcome these challenges by applying feature extraction using a Vision Transformer (FexViT) and feature selection using a quantum computing based quadratic unconstrained binary optimization (QC-FSelQUBO) technique. The proposed methodology showed better performance than other existing techniques when evaluated with the necessary output measures: accuracy, area under the receiver operating characteristic (ROC) curve, precision, sensitivity, and specificity of 94.28%, 99.10%, 96.17%, 90.16%, and 97.46%, respectively. 
The further advancement of CAD systems is essential to meet the demand for more reliable detection and diagnosis of cancer, which can be addressed by advancing the proposed quantum computing approach together with growing AI-based technology.
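QUBO-based feature selection, as used here, encodes which features to keep as a binary vector x and minimizes the quadratic form x^T Q x, where Q rewards informative features and penalizes redundant pairs. For small feature counts the problem can be brute-forced, which is a useful sanity check on whatever annealer or quantum solver is used (the matrix below is a toy example, not from the paper):

```python
from itertools import product

def solve_qubo(Q):
    """Brute-force minimizer of x^T Q x over binary vectors x.
    Tractable only for small n; real QUBO solvers handle larger n."""
    n = len(Q)
    best_x, best_e = None, float("inf")
    for bits in product((0, 1), repeat=n):
        e = sum(Q[i][j] * bits[i] * bits[j] for i in range(n) for j in range(n))
        if e < best_e:
            best_x, best_e = bits, e
    return best_x, best_e
```

Here negative diagonal entries reward selecting a feature and positive off-diagonal entries penalize selecting two redundant features together.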


Subject(s)
Algorithms , Lung Neoplasms , Tomography, X-Ray Computed , Humans , Lung Neoplasms/diagnostic imaging , Lung Neoplasms/pathology , Tomography, X-Ray Computed/methods , Radiographic Image Interpretation, Computer-Assisted/methods , Image Processing, Computer-Assisted/methods , Early Detection of Cancer/methods , ROC Curve , Quantum Theory
15.
J Cancer Res Ther ; 20(2): 615-624, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38687932

ABSTRACT

AIM: The accurate reconstruction of cone-beam computed tomography (CBCT) from sparse projections is one of the most important areas of study. Compressed sensing theory has been widely employed in the sparse reconstruction of CBCT. However, the total variation (TV) approach solely uses information from the i-coordinate, j-coordinate, and k-coordinate gradients to reconstruct the CBCT image. MATERIALS AND METHODS: It is well recognized that the CBCT image can be reconstructed more accurately with more gradient information from different directions. Thus, this study introduces a novel approach, named the multi-gradient direction total variation minimization method. The method also uses gradient information from the ij-coordinate, ik-coordinate, and jk-coordinate directions to reconstruct CBCT images, incorporating nine types of gradient information from nine directions in total. RESULTS: This study assessed the efficacy of the proposed methodology using under-sampled projections from four experiments, including two digital phantoms, one patient head dataset, and one physical phantom dataset. The results indicated that the proposed method achieved the lowest RMSE and the highest SSIM. Meanwhile, we compared the voxel intensity curves of the reconstructed images to assess edge structure preservation. Among the methods compared, the curves generated by the proposed method exhibited the highest consistency with those of the gold standard images. CONCLUSION: In summary, the proposed method shows significant potential for enhancing the quality and accuracy of CBCT image reconstruction.
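The idea of augmenting axis-aligned TV with diagonal gradient directions can be illustrated in 2D, where adding the two diagonal differences to the horizontal and vertical ones is the plane analogue of the ij/ik/jk directions above. A minimal sketch of such an anisotropic multi-direction TV (illustration only, not the paper's 3D formulation):

```python
def multi_direction_tv(img):
    """Anisotropic total variation of a 2D array summed over four
    gradient directions: horizontal, vertical, and both diagonals."""
    rows, cols = len(img), len(img[0])
    steps = [(0, 1), (1, 0), (1, 1), (1, -1)]  # the four directions
    tv = 0.0
    for r in range(rows):
        for c in range(cols):
            for dr, dc in steps:
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols:
                    tv += abs(img[rr][cc] - img[r][c])
    return tv
```

In iterative reconstruction this quantity serves as the regularizer being minimized; the extra directions penalize streaks that axis-aligned TV alone can miss.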


Subject(s)
Algorithms , Cone-Beam Computed Tomography , Image Processing, Computer-Assisted , Phantoms, Imaging , Humans , Cone-Beam Computed Tomography/methods , Image Processing, Computer-Assisted/methods , Radiographic Image Interpretation, Computer-Assisted/methods , Head/diagnostic imaging
16.
J Appl Clin Med Phys ; 25(5): e14299, 2024 May.
Article in English | MEDLINE | ID: mdl-38520072

ABSTRACT

A new-generation cone-beam computed tomography (CBCT) system with a new hardware design and advanced image reconstruction algorithms is available for radiation treatment simulation and adaptive radiotherapy (HyperSight CBCT imaging solution, Varian Medical Systems, a Siemens Healthineers company). This study assesses CBCT image quality metrics using the criteria routinely applied for diagnostic CT scanner accreditation, as a first step towards the future use of HyperSight CBCT images for treatment planning and target/organ delineation. Image performance was evaluated using American College of Radiology (ACR) accreditation program phantom tests for diagnostic computed tomography (CT) systems, and HyperSight images were compared with those from a standard treatment-planning diagnostic CT scanner (Siemens SOMATOM Edge) and from existing CBCT systems (Varian TrueBeam version 2.7 and Varian Halcyon version 2.0). Image quality for all vendor-provided Varian HyperSight CBCT imaging protocols was assessed using ACR head and body ring CT phantoms and then compared with the existing imaging modalities. Image quality analysis metrics included contrast-to-noise ratio (CNR), spatial resolution, Hounsfield unit (HU) accuracy, image scaling, and uniformity. All image quality assessments followed the recommendations and passing criteria provided by the ACR. The Varian HyperSight CBCT imaging system demonstrated excellent image quality, with the majority of vendor-provided imaging protocols passing all ACR CT accreditation standards. Nearly all (8/11) vendor-provided protocols passed ACR criteria using the ACR head phantom; the Abdomen Large, Pelvis Large, and H&N protocols produced HU uniformity values slightly exceeding the passing criteria but remained within the allowable minor-deviation levels (5-7 HU maximum differences).
Compared to other existing CT and CBCT imaging modalities, both HyperSight Head and Pelvis imaging protocols matched the performance of the SOMATOM CT scanner, and both the HyperSight and SOMATOM CT substantially surpassed the performance of the Halcyon 2.0 and TrueBeam version 2.7 systems. Varian HyperSight CBCT imaging system could pass almost all tests for all vendor-provided protocols using ACR accreditation criteria, with image quality similar to those produced by diagnostic CT scanners and significantly better than existing linac-based CBCT imaging systems.
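The HU uniformity criterion cited above (pass within 5 HU, minor deviation up to 7 HU) can be sketched as a simple center-versus-periphery ROI comparison. The ROI geometry and helper names here are illustrative, not taken from the ACR test procedure or the HyperSight software.

```python
import numpy as np

def roi_mean(img, center, radius):
    """Mean pixel value inside a circular ROI of a 2D HU image."""
    yy, xx = np.ogrid[:img.shape[0], :img.shape[1]]
    mask = (yy - center[0]) ** 2 + (xx - center[1]) ** 2 <= radius ** 2
    return float(img[mask].mean())

def uniformity_result(img, radius=10, margin=20):
    """Grade uniformity by the max |peripheral - central| ROI difference."""
    h, w = img.shape
    central = roi_mean(img, (h // 2, w // 2), radius)
    periphery = [(margin, w // 2), (h - margin, w // 2),
                 (h // 2, margin), (h // 2, w - margin)]
    max_dev = max(abs(roi_mean(img, p, radius) - central) for p in periphery)
    if max_dev <= 5:
        return "pass"
    return "minor deviation" if max_dev <= 7 else "fail"
```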


Subject(s)
Benchmarking , Cone-Beam Computed Tomography , Image Processing, Computer-Assisted , Particle Accelerators , Phantoms, Imaging , Radiotherapy Planning, Computer-Assisted , Humans , Cone-Beam Computed Tomography/methods , Cone-Beam Computed Tomography/instrumentation , Particle Accelerators/instrumentation , Image Processing, Computer-Assisted/methods , Radiotherapy Planning, Computer-Assisted/methods , Algorithms , Radiotherapy, Intensity-Modulated/methods , Radiotherapy Dosage , Accreditation , Radiographic Image Interpretation, Computer-Assisted/methods
17.
J Am Heart Assoc ; 13(6): e032665, 2024 Mar 19.
Article in English | MEDLINE | ID: mdl-38497470

ABSTRACT

BACKGROUND: Dual-layer spectral-detector dual-energy computed tomography angiography (DLCTA) can distinguish components of carotid plaques. Data on identifying symptomatic carotid plaques with DLCTA are not available. METHODS AND RESULTS: In this prospective observational study, patients with carotid plaques were enrolled and underwent DLCTA. The attenuation on both the polyenergetic image and the virtual monoenergetic images (40, 70, 100, and 140 keV), as well as the Z-effective value, was recorded in the noncalcified regions of plaques. Logistic regression models were used to assess the association between DLCTA attenuations and the presence of symptomatic carotid plaques. In total, 100 participants (mean±SD age, 64.37±8.31 years; 82.0% men) were included, and 36% of cases were classified as symptomatic. DLCTA parameters differed between the two groups (symptomatic versus asymptomatic: computed tomography [CT] 40 keV, 152.63 [interquartile range (IQR), 70.22-259.78] versus 256.78 [IQR, 150.34-408.13]; CT 70 keV, 81.28 [IQR, 50.13-119.33] versus 108.87 [IQR, 77.01-165.88]; slope40-140 keV, 0.91 [IQR, 0.35-1.87] versus 1.92 [IQR, 0.96-3.00]; Z-effective value, 7.92 [IQR, 7.53-8.46] versus 8.41 [IQR, 7.94-8.92]), whereas no difference was found on conventional polyenergetic images. The risk of symptomatic plaque was lower in the highest tertiles of attenuation at CT 40 keV (adjusted odds ratio [OR], 0.243 [95% CI, 0.078-0.754]), CT 70 keV (adjusted OR, 0.313 [95% CI, 0.104-0.940]), Z-effective value (adjusted OR, 0.138 [95% CI, 0.039-0.490]), and slope40-140 keV (adjusted OR, 0.157 [95% CI, 0.046-0.539]), with all P values and P trends <0.05. The areas under the curve for CT 40 keV, CT 70 keV, slope40-140 keV, and Z-effective value were 0.64, 0.61, 0.64, and 0.63, respectively. CONCLUSIONS: DLCTA parameters may help distinguish symptomatic carotid plaques.
Further studies with a larger sample size may address the overlap and improve the diagnostic accuracy.
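The slope40-140 keV parameter reported in this abstract is conventionally derived from the 40 keV and 140 keV virtual monoenergetic attenuations. A minimal sketch, assuming this standard definition (the study's exact convention is not stated here), with illustrative HU values only:

```python
def spectral_slope(hu_40kev, hu_140kev):
    """Attenuation slope (HU/keV) across the 40-140 keV range:
    slope = (HU_40keV - HU_140keV) / (140 - 40)."""
    return (hu_40kev - hu_140kev) / (140.0 - 40.0)

# Illustrative values chosen to match the median slopes quoted above:
# a flatter slope was associated with symptomatic plaques in the study.
symptomatic_like = spectral_slope(150.0, 60.0)   # 0.9 HU/keV
asymptomatic_like = spectral_slope(250.0, 60.0)  # 1.9 HU/keV
```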


Subject(s)
Carotid Artery Diseases , Plaque, Atherosclerotic , Male , Humans , Middle Aged , Aged , Female , Computed Tomography Angiography/methods , Signal-To-Noise Ratio , Tomography, X-Ray Computed/methods , Carotid Artery Diseases/diagnostic imaging , Retrospective Studies , Radiographic Image Interpretation, Computer-Assisted/methods
18.
Cancer Imaging ; 24(1): 40, 2024 Mar 20.
Article in English | MEDLINE | ID: mdl-38509635

ABSTRACT

BACKGROUND: Low-dose computed tomography (LDCT) has been shown to be useful in early lung cancer detection. This study aimed to develop a novel deep learning model for detecting pulmonary nodules on chest LDCT images. METHODS: In this secondary analysis, three lung nodule datasets, including Lung Nodule Analysis 2016 (LUNA16), Lung Nodule Received Operation (LNOP), and Lung Nodule in Health Examination (LNHE), were used to train and test deep learning models. The 3D region proposal network (RPN) was modified via a series of pruning experiments for better predictive performance. The performance of each modified deep learning model was evaluated based on sensitivity and the competition performance metric (CPM). Furthermore, the performance of the modified 3D RPN trained on the three datasets was evaluated by 10-fold cross-validation. Temporal validation was conducted to assess the reliability of the modified 3D RPN for detecting lung nodules. RESULTS: The pruning experiments indicated that the modified 3D RPN comprising the CSP-ResNeXt module (a Residual Network Xt [ResNeXt] backbone with the Cross Stage Partial Network [CSPNet] approach), a feature pyramid network (FPN), the nearest-anchor method, and post-processing masking had the optimal predictive performance, with a CPM of 92.2%. The modified 3D RPN trained on the LUNA16 dataset had the highest CPM (90.1%), followed by the LNOP dataset (CPM: 74.1%) and the LNHE dataset (CPM: 70.2%). When the modified 3D RPN was trained and tested on the same dataset, the sensitivities were 94.6%, 84.8%, and 79.7% for LUNA16, LNOP, and LNHE, respectively. The temporal validation analysis revealed that the modified 3D RPN achieved a CPM of 71.6% and a sensitivity of 85.7% on the LNOP test set, and a CPM of 71.7% and a sensitivity of 83.5% on the LNHE test set.
CONCLUSION: A modified 3D RPN for detecting lung nodules on LDCT scans was designed and validated, which may serve as a computer-aided diagnosis system to facilitate lung nodule detection and lung cancer diagnosis.


We established a modified 3D RPN for detecting lung nodules on CT images that exhibited greater sensitivity and a higher CPM than several previously reported CAD detection models.
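The CPM reported throughout this abstract can be sketched as the mean sensitivity at seven predefined false-positive-per-scan rates, as in the LUNA16 evaluation; the linear FROC interpolation below is a simplification of the official challenge code.

```python
import numpy as np

# Seven operating points used by the LUNA16 competition performance metric.
FP_RATES = [0.125, 0.25, 0.5, 1, 2, 4, 8]

def cpm(fp_per_scan, sensitivity):
    """Mean sensitivity at the predefined FP/scan rates.

    fp_per_scan, sensitivity: matched, increasing-x arrays describing
    a FROC curve (sensitivity as a fraction in [0, 1]).
    """
    sens_at_rates = np.interp(FP_RATES, fp_per_scan, sensitivity)
    return float(sens_at_rates.mean())
```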


Subject(s)
Lung Neoplasms , Solitary Pulmonary Nodule , Humans , Solitary Pulmonary Nodule/diagnostic imaging , Reproducibility of Results , Imaging, Three-Dimensional/methods , Lung , Tomography, X-Ray Computed/methods , Lung Neoplasms/diagnostic imaging , Radiographic Image Interpretation, Computer-Assisted/methods
19.
PLoS One ; 19(3): e0300325, 2024.
Article in English | MEDLINE | ID: mdl-38512860

ABSTRACT

Worldwide, lung cancer is the leading cause of cancer-related deaths. To manage lung nodules, radiologists observe computed tomography images, review various imaging findings, and record these in radiology reports. The report contents should be of high quality and uniform regardless of the radiologist. Here, we propose an artificial intelligence system that automatically generates descriptions related to lung nodules in computed tomography images. Our system consists of an image recognition method that extracts content from images (namely, bronchopulmonary segments and nodule characteristics) and a natural language processing method that generates fluent descriptions. To verify our system's clinical usefulness, we conducted an experiment in which two radiologists created descriptions of nodule findings using our system. With our system, the similarity of the described contents between the two radiologists (p = 0.001) and the comprehensiveness of the contents (p = 0.025) improved, while the accuracy did not significantly deteriorate (p = 0.484).


Subject(s)
Lung Neoplasms , Solitary Pulmonary Nodule , Humans , Artificial Intelligence , Lung Neoplasms/diagnostic imaging , Tomography, X-Ray Computed/methods , Lung , Radiologists , Solitary Pulmonary Nodule/diagnostic imaging , Radiographic Image Interpretation, Computer-Assisted/methods
20.
Clin Radiol ; 79(5): e651-e658, 2024 May.
Article in English | MEDLINE | ID: mdl-38433041

ABSTRACT

AIM: To investigate the improvement in image quality of triple-low-protocol (low radiation dose, low contrast medium dose, low injection speed) renal artery computed tomography angiography (RACTA) using deep learning image reconstruction (DLIR), in comparison with standard-dose single-energy and dual-energy CT (DECT) using the adaptive statistical iterative reconstruction-Veo (ASIR-V) algorithm. MATERIALS AND METHODS: Ninety patients undergoing RACTA were divided into groups: standard-dose single-energy CT (S group) with ASIR-V at 60% strength (60%ASIR-V); DECT (DE group) with 60%ASIR-V, including virtual monochromatic images at 40 keV (DE40 group) and 70 keV (DE70 group); and triple-low-protocol single-energy CT (L group) with DLIR at high level (DLIR-H). The effective dose (ED), contrast medium dose, injection speed, standard deviation (SD), signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR) of the abdominal aorta (AA) and left/right renal arteries (LRA, RRA), and subjective scores were compared among the groups. RESULTS: Compared with the S and DE groups, the L group significantly reduced the ED by 37.6% and 31.2%, the contrast medium dose by 33.9% and 30.5%, and the injection speed by 30% and 30%, respectively. The L group had the lowest SD values for all arteries (p<0.001). The SNR of the RRA and LRA was highest in the L group, and the CNR of all arteries was highest in the DE40 group (p<0.05). The L group had the best comprehensive score, with good consistency (p<0.05). CONCLUSIONS: The triple-low-protocol RACTA with DLIR-H significantly reduces the ED, contrast medium dose, and injection speed while providing good comprehensive image quality.
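The objective metrics compared in this abstract can be sketched with their common ROI-based definitions, SNR = mean HU / SD and CNR = (mean HU of vessel - mean HU of background) / SD of background; the study's exact ROI placement and background tissue choice are assumptions.

```python
import numpy as np

def snr(roi_hu):
    """Signal-to-noise ratio of a single ROI: mean HU over its SD."""
    roi_hu = np.asarray(roi_hu, dtype=float)
    return roi_hu.mean() / roi_hu.std()

def cnr(vessel_hu, background_hu):
    """Contrast-to-noise ratio between a vessel ROI and a background ROI,
    normalized by the background noise (SD)."""
    vessel_hu = np.asarray(vessel_hu, dtype=float)
    background_hu = np.asarray(background_hu, dtype=float)
    return (vessel_hu.mean() - background_hu.mean()) / background_hu.std()
```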


Subject(s)
Computed Tomography Angiography , Deep Learning , Humans , Renal Artery/diagnostic imaging , Tomography, X-Ray Computed/methods , Angiography , Image Processing, Computer-Assisted , Radiographic Image Interpretation, Computer-Assisted/methods , Radiation Dosage , Algorithms