1 - 6 of 6
1.
Radiol Artif Intell ; 6(4): e230275, 2024 Jul.
Article En | MEDLINE | ID: mdl-38717293

Purpose To explore the potential benefits of deep learning-based artifact reduction in sparse-view cranial CT scans and its impact on automated hemorrhage detection. Materials and Methods In this retrospective study, a U-Net was trained for artifact reduction on simulated sparse-view cranial CT scans in 3000 patients, obtained from a public dataset and reconstructed with varying sparse-view levels. Additionally, EfficientNet-B2 was trained on full-view CT data from 17 545 patients for automated hemorrhage detection. Detection performance was evaluated using the area under the receiver operating characteristic curve (AUC), with differences assessed using the DeLong test, along with confusion matrices. A total variation (TV) postprocessing approach, commonly applied to sparse-view CT, served as the basis for comparison. A Bonferroni-corrected significance level of .001/6 = .00017 was used to account for multiple hypothesis testing. Results Images with U-Net postprocessing were superior to unprocessed and TV-processed images with respect to both image quality and automated hemorrhage detection. With U-Net postprocessing, the number of views could be reduced from 4096 (AUC: 0.97 [95% CI: 0.97, 0.98]) to 512 (0.97 [95% CI: 0.97, 0.98], P < .00017) and to 256 views (0.97 [95% CI: 0.96, 0.97], P < .00017) with a minimal decrease in hemorrhage detection performance. This was accompanied by mean structural similarity index measure increases of 0.0210 (95% CI: 0.0210, 0.0211) and 0.0560 (95% CI: 0.0559, 0.0560) relative to unprocessed images. Conclusion U-Net-based artifact reduction substantially enhanced automated hemorrhage detection in sparse-view cranial CT scans. Keywords: CT, Head/Neck, Hemorrhage, Diagnosis, Supervised Learning Supplemental material is available for this article. © RSNA, 2024.
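The evaluation pipeline described in this abstract (sparse-view reconstruction compared against a densely sampled reference via SSIM) can be illustrated with a toy example. The sketch below is not the authors' code: it simulates sparse-view artifacts on a phantom with filtered back-projection from scikit-image, compares reconstructions by SSIM, and prints the Bonferroni-corrected significance level quoted above. The phantom and the 1024-view reference are stand-ins for the study's 4096-view cranial CT data.

```python
# A minimal, hypothetical sketch (not the authors' code) of sparse-view CT simulation
# and SSIM-based image-quality assessment, assuming scikit-image is installed.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, resize
from skimage.metrics import structural_similarity as ssim

# Small phantom as a stand-in for a cranial CT slice.
phantom = resize(shepp_logan_phantom(), (128, 128))

def reconstruct(image, n_views):
    """Filtered back-projection from a limited number of projection angles."""
    angles = np.linspace(0.0, 180.0, n_views, endpoint=False)
    sinogram = radon(image, theta=angles)
    return iradon(sinogram, theta=angles, filter_name="ramp")

# Dense reference reconstruction (the study used 4096 views; 1024 keeps this toy fast).
full_view = reconstruct(phantom, 1024)
for n_views in (512, 256):
    sparse = reconstruct(phantom, n_views)  # sparse-view reconstruction with streak artifacts
    score = ssim(full_view, sparse, data_range=full_view.max() - full_view.min())
    print(f"{n_views:4d} views: SSIM vs. dense reference = {score:.4f}")

# Bonferroni-corrected significance level quoted in the abstract: .001 / 6
print(f"corrected alpha = {0.001 / 6:.5f}")
```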


Artifacts , Deep Learning , Tomography, X-Ray Computed , Humans , Retrospective Studies , Tomography, X-Ray Computed/methods , Radiographic Image Interpretation, Computer-Assisted/methods , Male , Female , Intracranial Hemorrhages/diagnostic imaging , Intracranial Hemorrhages/diagnosis
2.
Commun Med (Lond) ; 2(1): 147, 2022 Nov 21.
Article En | MEDLINE | ID: mdl-36411311

BACKGROUND: Alternative medical imaging methods are currently sought for assessing pulmonary involvement in patients infected with COVID-19 that combine a higher sensitivity than conventional (attenuation-based) chest radiography with a lower radiation dose than CT imaging. METHODS: Sixty patients with COVID-19-associated lung changes on CT and 40 subjects without pathologic lung changes visible on CT were included (100 in total; 59 male; mean age, 58 ± 14 years). All patients gave written informed consent. We employed a clinical setup for grating-based dark-field chest radiography, obtaining both a dark-field and a conventional attenuation image in a single acquisition. Attenuation images alone, dark-field images alone, and both displayed simultaneously were assessed for the presence of COVID-19-associated lung changes on a scale from 1 to 6 (1 = surely not, 6 = surely) by four blinded radiologists. Statistical analysis was performed by evaluating the areas under the receiver operating characteristic curves (AUC) using Obuchowski's method at a 0.05 level of significance. RESULTS: We show that dark-field imaging has a higher sensitivity for COVID-19 pneumonia than attenuation-based imaging and that the combination of both is superior to either modality alone. Furthermore, a quantitative image analysis shows a significant reduction of the dark-field signal in patients with COVID-19. CONCLUSIONS: Dark-field imaging complements and improves conventional radiography for the visualisation and detection of COVID-19 pneumonia.
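As a rough illustration of the reader-study evaluation, the sketch below computes per-reader AUCs from ordinal 1-6 confidence ratings of the kind described in the abstract, using scikit-learn. The ratings and the "separation" parameter are purely hypothetical, and the study's actual multireader analysis (Obuchowski's method) is not reproduced here.

```python
# A hypothetical sketch of per-reader AUC computation from ordinal 1-6 confidence
# ratings; the ratings are simulated and the study's Obuchowski multireader analysis
# is not reproduced here.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_covid, n_control = 60, 40                              # cohort sizes from the abstract
y_true = np.r_[np.ones(n_covid), np.zeros(n_control)]    # 1 = COVID-19 pneumonia on CT

def simulated_ratings(separation):
    """Ordinal 1-6 scores; a larger separation mimics a more sensitive reading modality."""
    covid = np.clip(np.round(rng.normal(4.0 + separation, 1.2, n_covid)), 1, 6)
    control = np.clip(np.round(rng.normal(3.0 - separation, 1.2, n_control)), 1, 6)
    return np.r_[covid, control]

for modality, sep in [("attenuation", 0.4), ("dark-field", 0.8), ("combined", 1.0)]:
    scores = simulated_ratings(sep)
    print(f"{modality:>11s}: AUC = {roc_auc_score(y_true, scores):.3f}")
```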


Computed tomography (CT) imaging uses X-rays to obtain images of the inside of the body and is used to look at lung damage in patients with COVID-19. However, CT imaging exposes the patient to a considerable amount of radiation. As radiation exposure can lead to the development of cancer, it should be minimised. Conventional plain X-ray imaging uses lower amounts of radiation but lacks sensitivity. We used dark-field chest X-ray imaging, which also uses low amounts of radiation, to assess the lungs of patients with COVID-19. Radiologists identified pneumonia more easily from dark-field images than from conventional plain X-ray images. We anticipate that dark-field X-ray imaging will be useful for following up patients suspected of having lung damage.

3.
Sci Rep ; 11(1): 15857, 2021 08 04.
Article En | MEDLINE | ID: mdl-34349135

We present a method to generate synthetic thorax radiographs with realistic nodules from CT scans, together with perfect ground-truth knowledge, and evaluated the detection performance of nine radiologists and two convolutional neural networks in a reader study. Nodules were artificially inserted into the lung of a CT volume, and synthetic radiographs were obtained by forward-projecting the volume. Hence, our framework allowed a detailed evaluation of CAD systems' and radiologists' performance owing to the availability of accurate ground-truth labels for nodules from synthetic data. Radiographs for network training (U-Net and RetinaNet) were generated from 855 CT scans of a public dataset. For the reader study, 201 radiographs were generated from 21 nodule-free CT scans with varying positions, sizes and counts of inserted nodules. Averaged over the nine radiologists, there were 248.8 true positive detections, 51.7 false positive and 121.2 false negative predictions. The best performing CAD system achieved 268 true positives, 66 false positives and 102 false negatives. The corresponding weighted alternative free-response receiver operating characteristic figures of merit (wAFROC FOM) for the radiologists ranged from 0.54 to 0.87, compared with 0.81 (CI 0.75-0.87) for the best performing CNN. The CNN did not perform significantly better than the combined average of the nine readers (p = 0.49). Paramediastinal nodules accounted for most false positive and false negative detections by readers, which can be explained by the presence of more tissue in this area.
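The core simulation step, forward-projecting a CT volume with an inserted nodule into a synthetic radiograph, can be sketched with plain NumPy as below. This is a simplified parallel-beam projection with made-up HU values and geometry, not the authors' framework.

```python
# A simplified, hypothetical sketch of nodule insertion and parallel-beam forward
# projection of a CT volume into a synthetic radiograph; values are illustrative only.
import numpy as np

def insert_nodule(volume_hu, center, radius, nodule_hu=50.0):
    """Overwrite a spherical region with soft-tissue-like attenuation."""
    z, y, x = np.ogrid[:volume_hu.shape[0], :volume_hu.shape[1], :volume_hu.shape[2]]
    mask = (z - center[0]) ** 2 + (y - center[1]) ** 2 + (x - center[2]) ** 2 <= radius ** 2
    out = volume_hu.copy()
    out[mask] = nodule_hu
    return out

def forward_project(volume_hu, axis=1, mu_water=0.02):
    """Convert HU to linear attenuation and integrate along one axis (line integrals)."""
    mu = np.clip(mu_water * (1.0 + volume_hu / 1000.0), 0.0, None)
    return mu.sum(axis=axis)

ct = np.full((64, 64, 64), -800.0)                 # toy air-filled "lung" volume in HU
ct = insert_nodule(ct, center=(32, 32, 32), radius=6)
drr = forward_project(ct)                          # synthetic radiograph (projection image)
print(drr.shape, float(drr.max()))
```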


Multiple Pulmonary Nodules/diagnosis , Neural Networks, Computer , Radiographic Image Interpretation, Computer-Assisted/methods , Radiography, Thoracic/methods , Radiologists/statistics & numerical data , Solitary Pulmonary Nodule/diagnosis , Humans , Observer Variation , ROC Curve
4.
Sci Rep ; 10(1): 12987, 2020 07 31.
Article En | MEDLINE | ID: mdl-32737389

Lung cancer is a major cause of death worldwide. As early detection can improve outcomes, regular screening is of great interest, especially for certain risk groups. Besides low-dose computed tomography, chest X-ray is a potential option for screening. Convolutional neural network (CNN)-based computer-aided diagnosis systems have proven their ability to identify nodules in radiographs and thus may assist radiologists in clinical practice. Based on segmented pulmonary nodules, we trained a CNN-based one-stage detector (RetinaNet) with 257 annotated radiographs and 154 additional radiographs from a public dataset. We compared the performance of the convolutional network with that of two radiologists by conducting a reader study with 75 cases. Furthermore, the potential use for screening at the patient level and the impact of foreign bodies on false-positive detections were investigated. For nodule location detection, the architecture achieved 43 true positives, 26 false positives and 22 false negatives. In comparison, the performance of the two readers was 42 ± 2 true positives, 28 ± 0 false positives and 23 ± 2 false negatives. For the screening task, we obtained an ROC AUC of 0.87 on the reader-study test set. We found the trained RetinaNet architecture to be only slightly prone to misclassifications caused by foreign bodies: out of 59 additional radiographs containing foreign bodies, false-positive detections attributable to foreign bodies occurred in only two.
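The true-positive, false-positive and false-negative counts quoted above presuppose some rule for matching predicted nodule locations to ground-truth annotations. The sketch below shows one plausible scheme, greedy IoU matching between bounding boxes; the threshold and boxes are hypothetical, and the study's exact matching criterion may differ.

```python
# A hypothetical sketch of counting true positives, false positives and false negatives
# by greedy IoU matching of predicted and ground-truth nodule boxes.
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def match_detections(pred_boxes, gt_boxes, iou_thresh=0.2):
    """Each ground-truth nodule may be claimed by at most one prediction."""
    unmatched_gt = list(range(len(gt_boxes)))
    tp = fp = 0
    for p in pred_boxes:
        best = max(unmatched_gt, key=lambda g: iou(p, gt_boxes[g]), default=None)
        if best is not None and iou(p, gt_boxes[best]) >= iou_thresh:
            tp += 1
            unmatched_gt.remove(best)
        else:
            fp += 1
    return tp, fp, len(unmatched_gt)   # (true positives, false positives, false negatives)

gt = [(100, 100, 140, 140), (300, 200, 340, 240)]      # two annotated nodules
pred = [(105, 102, 138, 139), (500, 500, 540, 540)]    # one hit, one miss
print(match_detections(pred, gt))                      # -> (1, 1, 1)
```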


Foreign Bodies/diagnostic imaging , Lung/diagnostic imaging , Neural Networks, Computer , Solitary Pulmonary Nodule/diagnostic imaging , Tomography, X-Ray Computed , False Positive Reactions , Humans
5.
PLoS One ; 15(7): e0235765, 2020.
Article En | MEDLINE | ID: mdl-32667947

Automatic evaluation of 3D volumes is important for speeding up clinical decision making. We describe a method to classify computed tomography scans at the volume level for the presence of non-acute cerebral infarction. This is not a trivial task, as the lesions are often similar to other areas of the brain in shape and intensity. A three-stage architecture is used for classification: 1) a cranial cavity segmentation network is developed, trained and applied; 2) region proposals are generated; 3) connected regions are classified using a multi-resolution, densely connected 3D convolutional network. Mean area under the curve values for subject-level classification are 0.95 for the unstratified test set, 0.88 for stratification by patient age and 0.93 for stratification by CT scanner model. We use a partly segmented dataset of 555 scans, of which 186 are used in the unstratified test set. Furthermore, we examine possible dataset bias with respect to scanner model and patient age. We show a successful application of the proposed three-stage model for full-volume classification. In contrast to black-box approaches, the convolutional network's decision can be further assessed by examining intermediate segmentation results.
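The three-stage data flow (cavity segmentation, region proposals, proposal classification aggregated to a subject-level score) can be sketched as below. The placeholder implementations, simple HU thresholding and connected-component labelling with SciPy, merely stand in for the trained networks and are not the authors' models.

```python
# A hypothetical sketch of the three-stage data flow; thresholding and connected
# components stand in for the trained segmentation and classification networks.
import numpy as np
from scipy import ndimage

def stage1_segment_cranial_cavity(volume_hu):
    """Placeholder for the segmentation network: keep voxels in a brain-like HU window."""
    return (volume_hu > 0) & (volume_hu < 100)

def stage2_region_proposals(volume_hu, cavity_mask, lo=20, hi=40):
    """Placeholder proposal stage: connected hypodense-ish regions inside the cavity."""
    candidate = cavity_mask & (volume_hu >= lo) & (volume_hu <= hi)
    labels, n_regions = ndimage.label(candidate)
    return [np.argwhere(labels == i + 1) for i in range(n_regions)]

def stage3_classify(proposals):
    """Placeholder for the 3D CNN: score proposals and aggregate to a subject-level score."""
    if not proposals:
        return 0.0
    return max(min(1.0, len(p) / 500.0) for p in proposals)   # bigger region -> higher score

volume = np.random.default_rng(1).normal(35.0, 10.0, size=(32, 64, 64))   # toy volume in HU
mask = stage1_segment_cranial_cavity(volume)
proposals = stage2_region_proposals(volume, mask)
print("subject-level score:", stage3_classify(proposals))
```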


Algorithms , Cerebral Infarction/classification , Image Processing, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Neural Networks, Computer , Tomography, X-Ray Computed/methods , Aged , Automation , Case-Control Studies , Cerebral Infarction/diagnostic imaging , Cerebral Infarction/pathology , Female , Humans , Male , Retrospective Studies
6.
Biomed Phys Eng Express ; 6(1): 015038, 2020 01 30.
Article En | MEDLINE | ID: mdl-33438626

PURPOSE: To evaluate the benefit of the additional information present in spectral CT datasets, compared with conventional CT datasets, when utilizing convolutional neural networks for fully automatic localisation and classification of liver lesions in CT images. MATERIALS AND METHODS: Conventional and spectral CT images (iodine maps, virtual monochromatic images (VMI)) were obtained from a spectral dual-layer CT system. Patient diagnoses were known from the clinical reports and classified as healthy, cyst or hypodense metastasis. To compare the value of spectral versus conventional datasets as input to machine learning algorithms, we implemented a weakly supervised convolutional neural network (CNN) that learns liver lesion localisation without pixel-level ground-truth annotations. Regions of interest were selected automatically based on the localisation results and used to train a second CNN for liver lesion classification (healthy, cyst, hypodense metastasis). The accuracy of lesion localisation was evaluated using the Euclidean distances between the ground-truth centres of mass and the predicted centres of mass. Lesion classification was evaluated by precision, recall, accuracy and F1-score. RESULTS: Lesion localisation showed the best results for spectral information, with distances of 8.22 ± 10.72 mm, 8.78 ± 15.21 mm and 8.29 ± 12.97 mm for iodine maps, 40 keV VMIs and 70 keV VMIs, respectively. With conventional data, distances of 10.58 ± 17.65 mm were measured. For lesion classification, the 40 keV VMIs achieved the highest overall accuracy of 0.899, compared with 0.854 for conventional data. CONCLUSION: Enhanced localisation and classification are reported for spectral CT data, demonstrating that combining machine learning technology with spectral CT information may in future improve both the clinical workflow and diagnostic accuracy.
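The two evaluation measures named above, Euclidean centre-of-mass distance for localisation and precision/recall/accuracy/F1 for classification, are straightforward to compute. The sketch below uses hypothetical voxel coordinates, spacing and labels, with scikit-learn for the classification metrics; it is not the authors' evaluation code.

```python
# A hypothetical sketch of the evaluation measures named in the abstract:
# Euclidean centre-of-mass distance (localisation) and precision/recall/accuracy/F1
# (classification); coordinates, spacing and labels are illustrative only.
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def com_distance_mm(pred_com_voxel, gt_com_voxel, spacing_mm):
    """Euclidean distance between centres of mass, converted from voxels to millimetres."""
    delta = (np.asarray(pred_com_voxel) - np.asarray(gt_com_voxel)) * np.asarray(spacing_mm)
    return float(np.linalg.norm(delta))

print(com_distance_mm((120, 200, 45), (118, 196, 44), spacing_mm=(0.8, 0.8, 2.0)))

classes = ["healthy", "cyst", "hypodense_metastasis"]
y_true = ["healthy", "cyst", "hypodense_metastasis", "cyst", "healthy"]
y_pred = ["healthy", "cyst", "cyst", "cyst", "healthy"]
precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, labels=classes, average="macro", zero_division=0)
print(f"accuracy={accuracy_score(y_true, y_pred):.3f}  "
      f"precision={precision:.3f}  recall={recall:.3f}  F1={f1:.3f}")
```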


Algorithms , Liver Diseases/pathology , Neural Networks, Computer , Radiographic Image Interpretation, Computer-Assisted/methods , Radiography, Dual-Energy Scanned Projection/methods , Signal-To-Noise Ratio , Tomography, X-Ray Computed/methods , Humans , Liver Diseases/classification , Machine Learning
...