1.
Eur J Radiol ; 172: 111347, 2024 Mar.
Article En | MEDLINE | ID: mdl-38325189

OBJECTIVES: This study aimed to evaluate the performance of a deep learning radiomics (DLR) model, which integrates multimodal MRI features and clinical information, in diagnosing sacroiliitis related to axial spondyloarthritis (axSpA). MATERIAL & METHODS: A total of 485 patients diagnosed with sacroiliitis related to axSpA (n = 288) or non-sacroiliitis (n = 197) by sacroiliac joint (SIJ) MRI between May 2018 and October 2022 were retrospectively included. The patients were randomly divided into training (n = 388) and testing (n = 97) cohorts. Data were collected using three MRI scanners. A convolutional neural network (CNN), 3D U-Net, was applied for automated SIJ segmentation. Additionally, three CNNs (ResNet50, ResNet101, and DenseNet121) were used to diagnose axSpA-related sacroiliitis from a single modality. The prediction results of all CNN models across the different modalities were integrated by a stacking method based on different algorithms to construct ensemble models, and the optimal ensemble model was used as the DLR signature. A combined model incorporating the DLR signature and clinical factors was developed using multivariable logistic regression. Model performance was evaluated using receiver operating characteristic (ROC) curves, calibration curves, and decision curve analysis (DCA). RESULTS: Automated deep learning-based segmentation correlated well with manual delineation. ResNet50, the optimal base model, achieved an area under the curve (AUC) of 0.839 and an accuracy of 0.804. The combined model yielded the highest performance in diagnosing axSpA-related sacroiliitis (AUC: 0.910; accuracy: 0.856) and outperformed the best ensemble model (AUC: 0.868; accuracy: 0.825) (all P < 0.05). Moreover, DCA showed good clinical utility for the combined model.
CONCLUSION: We developed a diagnostic model for axSpA-related sacroiliitis by combining the DLR signature with clinical factors, which resulted in excellent diagnostic performance.
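The final modeling step described above — feeding a DLR signature together with clinical factors into a multivariable logistic regression — can be sketched as follows. This is a minimal illustration only: all data are synthetic, the feature names are hypothetical stand-ins, and only the 388/97 split sizes mirror the abstract.

```python
# Hedged sketch: combine a deep-learning radiomics (DLR) signature with
# clinical factors via multivariable logistic regression. Data are
# synthetic; "dlr_signature" stands in for the ensemble model's output.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 485
y = rng.integers(0, 2, n)                           # sacroiliitis label (toy)
dlr_signature = y + rng.normal(0, 0.8, n)           # stand-in ensemble output
clinical = rng.normal(0, 1, (n, 2)) + y[:, None] * 0.3  # hypothetical clinical factors
X = np.column_stack([dlr_signature, clinical])

# Train on 388 cases, evaluate on the remaining 97, as in the abstract's split
model = LogisticRegression().fit(X[:388], y[:388])
auc = roc_auc_score(y[388:], model.predict_proba(X[388:])[:, 1])
print(round(auc, 3))
```

On this toy data the combined model recovers an AUC well above chance; with real features, calibration curves and decision curve analysis would be assessed on the same held-out cohort.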


Axial Spondyloarthritis; Deep Learning; Sacroiliitis; Humans; Magnetic Resonance Imaging/methods; Radiomics; Retrospective Studies; Sacroiliac Joint/diagnostic imaging; Sacroiliitis/diagnostic imaging
2.
Otol Neurotol ; 45(3): e193-e197, 2024 Mar 01.
Article En | MEDLINE | ID: mdl-38361299

OBJECTIVE: To validate how an automated model for vestibular schwannoma (VS) segmentation developed on an external homogeneous dataset performs when applied to internal heterogeneous data. PATIENTS: The external dataset comprised 242 patients with previously untreated, sporadic unilateral VS undergoing Gamma Knife radiosurgery, with homogeneous magnetic resonance imaging (MRI) scans. The internal dataset comprised 10 patients from our institution, with heterogeneous MRI scans. INTERVENTIONS: An automated VS segmentation model was developed on the external dataset. The model was tested on the internal dataset. MAIN OUTCOME MEASURE: Dice score, which measures agreement between ground-truth and predicted segmentations. RESULTS: When applied to the internal patient scans, the automated model achieved a mean Dice score of 61% across all 10 images. Three tumors were not detected; these tumors measured 0.01 ml on average (SD = 0.00 ml). The mean Dice score for the seven tumors that were detected was 87% (SD = 14%). There was one outlier with a Dice score of 55%; on further review of this scan, it was discovered that hyperintense petrous bone had been included in the tumor segmentation. CONCLUSIONS: We show that an automated segmentation model developed using a restrictive set of siloed institutional data can be successfully adapted for data from different imaging systems and patient populations. This is an important step toward the validation of automated VS segmentation. However, there are significant shortcomings that likely reflect limitations of the data used to train the model. Further validation is needed to make automated segmentation for VS generalizable.
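The Dice score used as the main outcome measure above is straightforward to compute from two binary masks; a minimal sketch on toy arrays (not the study's data):

```python
# Dice coefficient between two binary segmentation masks: twice the
# overlap divided by the sum of the two mask volumes.
import numpy as np

def dice(gt: np.ndarray, pred: np.ndarray) -> float:
    gt, pred = gt.astype(bool), pred.astype(bool)
    denom = gt.sum() + pred.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(gt, pred).sum() / denom

gt = np.zeros((8, 8), dtype=int);   gt[2:6, 2:6] = 1    # 16-voxel "tumor"
pred = np.zeros((8, 8), dtype=int); pred[3:7, 3:7] = 1  # same size, offset by 1
print(dice(gt, pred))  # overlap is 9 voxels -> 2*9/32 = 0.5625
```

A score of 1.0 means perfect agreement; the undetected 0.01-ml tumors in the study would score 0, which is why the per-tumor mean (87%) is reported separately from the overall mean (61%).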


Neuroma, Acoustic; Humans; Neuroma, Acoustic/diagnostic imaging; Magnetic Resonance Imaging/methods
3.
J Comput Assist Tomogr ; 48(1): 55-63, 2024.
Article En | MEDLINE | ID: mdl-37558647

OBJECTIVE: The aim of this study was to compare diatrizoate and iohexol regarding patient acceptance and fecal-tagging performance in noncathartic computed tomography colonography. METHODS: This study enrolled 284 volunteers with fecal tagging by either diatrizoate or iohexol at an iodine concentration of 13.33 mg/mL and an iodine load of 24 g. Patient acceptance was rated on a 4-point scale of gastrointestinal discomfort. Two gastrointestinal radiologists jointly analyzed image quality, fecal-tagging density and homogeneity, and residual contrast agent in the small intestine. The results were compared by the generalized estimating equation method. RESULTS: Patient acceptance was comparable between the 2 groups (3.95 ± 0.22 vs 3.96 ± 0.20, P = 0.777). The diatrizoate group had less residual fluid and stool than the iohexol group (P = 0.019 and P = 0.004, respectively). There was no significant difference in colorectal distention or in residual fluid and stool tagging quality between the 2 groups (all P > 0.05). The mean 2-dimensional image quality score was 4.59 ± 0.68 with diatrizoate and 3.60 ± 1.14 with iohexol (P < 0.001). The attenuation of tagged feces was 581 ± 66 HU with diatrizoate and 1038 ± 117 HU with iohexol (P < 0.001). Residual contrast agent in the small intestine was observed in 55.3% and 62.3% of the diatrizoate and iohexol groups, respectively (P = 0.003). CONCLUSIONS: Compared with iohexol, diatrizoate provided better image quality, appropriate fecal-tagging density, and more homogeneous tagging, along with comparably excellent patient acceptance, and may be more suitable for fecal tagging in noncathartic computed tomography colonography.


Colonography, Computed Tomographic; Iodine; Humans; Contrast Media; Iohexol; Diatrizoate; Colonography, Computed Tomographic/methods; Feces
4.
Radiol Artif Intell ; 5(3): e220082, 2023 May.
Article En | MEDLINE | ID: mdl-37293342

Purpose: To investigate the correlation between differences in data distributions and federated deep learning (Fed-DL) algorithm performance in tumor segmentation on CT and MR images. Materials and Methods: Two Fed-DL datasets were retrospectively collected (from November 2020 to December 2021): one dataset of liver tumor CT images (Federated Imaging in Liver Tumor Segmentation [FILTS]; three sites, 692 scans) and one publicly available dataset of brain tumor MR images (Federated Tumor Segmentation [FeTS]; 23 sites, 1251 scans). Scans from both datasets were grouped according to site, tumor type, tumor size, dataset size, and tumor intensity. To quantify differences in data distributions, the following four distance metrics were calculated: earth mover's distance (EMD), Bhattacharyya distance (BD), χ2 distance (CSD), and Kolmogorov-Smirnov distance (KSD). Both federated and centralized nnU-Net models were trained using the same grouped datasets. Fed-DL model performance was evaluated using the ratio of Dice coefficients, θ, between federated and centralized models trained and tested on the same 80:20 split datasets. Results: The Dice coefficient ratio (θ) between federated and centralized models was strongly negatively correlated with the distances between data distributions, with correlation coefficients of -0.920 for EMD, -0.893 for BD, and -0.899 for CSD. However, KSD was only weakly correlated with θ, with a correlation coefficient of -0.479. Conclusion: Performance of Fed-DL models in tumor segmentation on CT and MRI datasets was strongly negatively correlated with the distances between data distributions. Keywords: CT, Abdomen/GI, Liver, Comparative Studies, MR Imaging, Brain/Brain Stem, Convolutional Neural Network (CNN), Federated Deep Learning, Tumor Segmentation, Data Distribution. Supplemental material is available for this article. © RSNA, 2023. See also the commentary by Kwak and Bai in this issue.
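The four distribution-distance metrics named above can be computed between two intensity samples as follows. This is an illustrative sketch on synthetic data, not the study's pipeline; the histogram binning is an assumption.

```python
# Hedged sketch of the four distances: EMD, Bhattacharyya (BD),
# chi-square (CSD), and Kolmogorov-Smirnov (KSD), between two synthetic
# 1-D intensity samples standing in for two sites' image data.
import numpy as np
from scipy.stats import wasserstein_distance, ks_2samp

rng = np.random.default_rng(1)
a = rng.normal(0.0, 1.0, 2000)   # site-A intensities (stand-in)
b = rng.normal(0.5, 1.2, 2000)   # site-B intensities (stand-in)

emd = wasserstein_distance(a, b)        # earth mover's distance
ksd = ks_2samp(a, b).statistic          # Kolmogorov-Smirnov distance

# BD and CSD are computed on normalized histograms over shared bins
bins = np.histogram_bin_edges(np.concatenate([a, b]), bins=50)
p, _ = np.histogram(a, bins=bins)
q, _ = np.histogram(b, bins=bins)
p = p / p.sum(); q = q / q.sum()
bd = -np.log(np.sum(np.sqrt(p * q)) + 1e-12)         # Bhattacharyya distance
csd = 0.5 * np.sum((p - q) ** 2 / (p + q + 1e-12))   # chi-square distance
print(emd, bd, csd, ksd)
```

EMD, BD, and CSD compare full distribution shapes, which may help explain the study's finding that they track federated-model degradation more closely than KSD, a supremum-based statistic.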

5.
J Digit Imaging ; 36(5): 2025-2034, 2023 10.
Article En | MEDLINE | ID: mdl-37268841

Ankylosing spondylitis (AS) is a chronic inflammatory disease that causes inflammatory low back pain and may even limit activity. The grading diagnosis of sacroiliitis on imaging plays a central role in diagnosing AS. However, grading sacroiliitis on computed tomography (CT) images is viewer-dependent and may vary between radiologists and medical institutions. In this study, we aimed to develop a fully automatic method to segment the sacroiliac joint (SIJ) and then grade sacroiliitis associated with AS on CT. We studied 435 CT examinations from patients with AS and controls at two hospitals. No-new-UNet (nnU-Net) was used to segment the SIJ, and a 3D convolutional neural network (CNN) was used to grade sacroiliitis with a three-class method, using the grading results of three veteran musculoskeletal radiologists as the ground truth. According to the modified New York criteria, we defined grades 0-I as class 0, grade II as class 1, and grades III-IV as class 2. nnU-Net segmentation of the SIJ achieved Dice, Jaccard, and relative volume difference (RVD) coefficients of 0.915, 0.851, and 0.040 with the validation set, respectively, and 0.889, 0.812, and 0.098 with the test set, respectively. The areas under the curves (AUCs) for classes 0, 1, and 2 using the 3D CNN were 0.91, 0.80, and 0.96 with the validation set, respectively, and 0.94, 0.82, and 0.93 with the test set, respectively. The 3D CNN was superior to the junior and senior radiologists in grading class 1 for the validation set and inferior to the expert for the test set (P < 0.05). The fully automatic method constructed in this study can thus be used for SIJ segmentation followed by accurate grading and diagnosis of sacroiliitis associated with AS on CT images, especially for classes 0 and 2. The method was less effective for class 1 but still more accurate than the senior radiologist.
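The three segmentation metrics reported above (Dice, Jaccard, and RVD) can all be derived from a predicted mask and a ground-truth mask; a minimal sketch on toy 1-D masks, not the study's data:

```python
# Dice, Jaccard, and relative volume difference (RVD) between a
# ground-truth and a predicted binary segmentation mask.
import numpy as np

def seg_metrics(gt, pred):
    gt, pred = gt.astype(bool), pred.astype(bool)
    inter = np.logical_and(gt, pred).sum()
    union = np.logical_or(gt, pred).sum()
    dice = 2 * inter / (gt.sum() + pred.sum())      # overlap-based
    jaccard = inter / union                          # intersection over union
    rvd = abs(pred.sum() - gt.sum()) / gt.sum()      # volume agreement
    return dice, jaccard, rvd

gt = np.zeros(100, dtype=int);   gt[:40] = 1
pred = np.zeros(100, dtype=int); pred[:36] = 1       # slight under-segmentation
d, j, r = seg_metrics(gt, pred)
print(round(d, 3), round(j, 3), round(r, 3))  # 0.947 0.9 0.1
```

Note that Dice and Jaccard reward overlap (higher is better) while RVD is an error (lower is better), matching the pattern of the reported values (0.915, 0.851, 0.040).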


Sacroiliitis; Spondylitis, Ankylosing; Humans; Spondylitis, Ankylosing/diagnosis; Sacroiliitis/diagnostic imaging; Sacroiliac Joint/diagnostic imaging; Neural Networks, Computer; Tomography, X-Ray Computed/methods; Image Processing, Computer-Assisted/methods
6.
Cancers (Basel) ; 15(3)2023 Jan 20.
Article En | MEDLINE | ID: mdl-36765610

BACKGROUND: A multitude of studies have shown that cancer patients infected with COVID-19 have poor outcomes, attributable to older age and immune systems weakened by cancer and chemotherapy. In this study, the CT examinations of 22 confirmed COVID-19 cancer patients were analyzed. METHODOLOGY: A retrospective analysis was conducted on 28 cancer patients, of whom 22 were COVID-19 positive. The CT scan changes before and after treatment and the extent of structural damage to the lungs after COVID-19 infection were analyzed. Structural damage to a lung was indicated by a change in density, measured in Hounsfield units (HUs), and by lung volume reduction. A 3D radiometric analysis was also performed, and lung and lesion histograms were compared. RESULTS: A total of 22 cancer patients were diagnosed with COVID-19 infection. A repeat CT scan was performed in 15 patients after they recovered from infection. Most of the study patients were diagnosed with leukemia. A secondary clinical analysis was performed to examine the associations between COVID-19 treatment, laboratory data, and mortality. Post COVID-19, there was a decrease of >50% in lung volume and an increase in density (in HUs) due to scar tissue formation after infection. CONCLUSION: COVID-19 infection may have further detrimental effects on the lungs of cancer patients, decreasing their lung volume and increasing their lung density due to scar formation.
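The two quantities the study tracks on follow-up CT — lung volume and mean lung density in HU — reduce to simple operations on a CT array and a lung mask. The array, mask, and 1 mm isotropic voxel size below are all illustrative assumptions, not patient data:

```python
# Toy computation of lung volume (voxel count x voxel volume) and mean
# lung density in Hounsfield units from a synthetic CT and lung mask.
import numpy as np

rng = np.random.default_rng(4)
ct_hu = rng.normal(-700, 50, (40, 64, 64))         # aerated lung is roughly -700 HU
lung_mask = np.zeros(ct_hu.shape, dtype=bool)
lung_mask[:, 16:48, 16:48] = True                  # hypothetical segmentation

voxel_mm3 = 1.0                                    # assumed 1 mm isotropic voxels
volume_ml = lung_mask.sum() * voxel_mm3 / 1000.0   # mm^3 -> mL
mean_hu = ct_hu[lung_mask].mean()
print(volume_ml, mean_hu)
```

Scarring shifts masked voxels toward higher (less negative) HU values and shrinks the mask, which is exactly the post-infection pattern the abstract reports.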

7.
Acad Radiol ; 27(12): 1665-1678, 2020 12.
Article En | MEDLINE | ID: mdl-33046370

OBJECTIVE: This study aimed to investigate CT quantification of COVID-19 pneumonia and its impact on the assessment of disease severity and the prediction of clinical outcomes in the management of COVID-19 patients. MATERIALS AND METHODS: Ninety-nine COVID-19 patients confirmed by positive nucleic acid test (NAT) of RT-PCR and hospitalized from January 19, 2020 to February 19, 2020 were collected for this retrospective study. All patients underwent arterial blood gas testing, routine blood testing, chest CT examination, and physical examination on admission. In addition, follow-up clinical data, including disease severity, clinical treatment, and clinical outcomes, were collected for each patient. Lung volume, lesion volume, nonlesion lung volume (NLLV) (lung volume - lesion volume), and fraction of nonlesion lung volume (%NLLV) (nonlesion lung volume / lung volume) were quantified in CT images using two U-Net models trained for segmentation of the lungs and COVID-19 lesions. Furthermore, we calculated 20 histogram textures for lesion volume and NLLV, respectively. To investigate the validity of CT quantification in the management of COVID-19, we built random forest (RF) models for classification and regression: to assess disease severity (moderate, severe, and critical) and to predict the need for and length of ICU stay, the durations of oxygen inhalation, hospitalization, and sputum NAT positivity, and patient prognosis. The performance of the RF classifiers was evaluated using the area under the receiver operating characteristic curve (AUC), and that of the RF regressors using the root-mean-square error. RESULTS: Patients were classified into three severity groups according to clinical staging: moderate (n = 25), severe (n = 47), and critical (n = 27). Of these, 32 patients were admitted to the ICU: 1 of 25 moderate, 6 of 47 severe, and 25 of 27 critical patients. The median ICU stays in the three severity groups were 0, 0, and 12 days; the durations of oxygen inhalation 10, 15, and 28 days; the hospitalizations 12, 16, and 28 days; and the durations of sputum NAT positivity 8, 9, and 13 days, respectively. The clinical outcomes were complete recovery (n = 3), partial recovery with residual pulmonary damage (n = 80), prolonged recovery (n = 15), and death (n = 1). The %NLLV in the three severity groups was 92.18 ± 9.89%, 82.94 ± 16.49%, and 66.19 ± 24.15%, with P < 0.05 between each pair of groups. The AUCs of RF classifiers using hybrid models were 0.927 and 0.929 for classification of moderate vs (severe + critical) and severe vs critical, respectively, significantly higher than those of either radiomics or clinical models (P < 0.05). The root-mean-square errors of the RF regressors were 0.88 weeks for duration of hospitalization (mean: 2.60 ± 1.01 weeks), 0.92 weeks for duration of oxygen inhalation (mean: 2.44 ± 1.08 weeks), 0.90 weeks for duration of sputum NAT positivity (mean: 1.59 ± 0.98 weeks), and 0.69 weeks for ICU stay (mean: 1.32 ± 0.67 weeks). The AUCs for prediction of ICU treatment and prognosis (partial recovery vs prolonged recovery) were 0.945 and 0.960, respectively. CONCLUSION: CT quantification and machine-learning models show great potential for assisting decision-making in the management of COVID-19 patients by assessing disease severity and predicting clinical outcomes.
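The classification arm of the pipeline above can be sketched as a random forest separating moderate from (severe + critical) cases using quantitative CT features. Everything below is a synthetic stand-in: the %NLLV values and "texture" features are simulated, not the study's data, and only the cohort size mirrors the abstract.

```python
# Hedged sketch: random forest classifier on quantitative CT features
# (a %NLLV stand-in plus simulated histogram textures), evaluated by AUC.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 99
severe = rng.integers(0, 2, n)                            # 1 = severe/critical (toy)
nllv_frac = 0.92 - 0.2 * severe + rng.normal(0, 0.08, n)  # %NLLV drops with severity
texture = rng.normal(0, 1, (n, 5)) + severe[:, None] * 0.4  # simulated textures
X = np.column_stack([nllv_frac, texture])

Xtr, Xte, ytr, yte = train_test_split(X, severe, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(Xtr, ytr)
auc = roc_auc_score(yte, clf.predict_proba(Xte)[:, 1])
print(round(auc, 3))
```

The regression arm (predicting lengths of stay in weeks) would use `RandomForestRegressor` with root-mean-square error in place of AUC.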


Coronavirus Infections; Lung; Pandemics; Pneumonia, Viral; Betacoronavirus; COVID-19; Humans; Lung/diagnostic imaging; Machine Learning; Prognosis; Retrospective Studies; SARS-CoV-2; Tomography, X-Ray Computed
8.
IEEE Trans Pattern Anal Mach Intell ; 42(6): 1289-1302, 2020 Jun.
Article En | MEDLINE | ID: mdl-30794166

This paper proposes a disocclusion inpainting framework for depth-based view synthesis. It consists of four modules: foreground extraction, motion compensation, improved background reconstruction, and inpainting. The foreground extraction module detects foreground objects and removes them from both the depth map and the rendered video; the motion compensation module adapts the background reconstruction model to moving-camera scenarios; the improved background reconstruction module constructs a stable background video by exploiting the temporal correlation in both the 2D video and its corresponding depth map; and the constructed background video and the inpainting module are used to fill the holes in the synthesized view. Analysis and experiments indicate that the proposed framework has good generality, scalability, and effectiveness: most existing background reconstruction and image inpainting methods can be employed or extended as modules in the framework. Comparison results demonstrate that the proposed framework achieves better synthesis quality and temporal consistency, with lower running time, than the other methods.
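A minimal flavor of the background-reconstruction idea: with a static camera, a per-pixel temporal median over frames suppresses moving foreground and yields a stable background. The paper's module is far more elaborate (it adds motion compensation and depth guidance); the toy frames below are synthetic.

```python
# Temporal-median background reconstruction: a moving foreground blob
# occupies each pixel only briefly, so the per-pixel median over time
# recovers the static background value.
import numpy as np

frames = np.full((10, 6, 6), 100, dtype=float)   # static background = 100
for t in range(10):                              # a 2x1 foreground blob sweeping
    c = t % 6                                    # across the columns over time
    frames[t, 2:4, c] = 255

background = np.median(frames, axis=0)           # per-pixel temporal median
print(np.allclose(background, 100))              # True: foreground removed
```

Each pixel is occluded by the blob in at most 2 of the 10 frames, so the median ignores those outliers; this is why the constructed background video can then fill disocclusion holes in the synthesized view.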

9.
SLAS Technol ; 22(5): 557-564, 2017 10.
Article En | MEDLINE | ID: mdl-28314109

Bioinformatics studies have emerged in the domain of larval behavior analysis in recent years. This article proposes a dynamic survival detection and analysis system that automatically monitors a large number of mosquito larvae in bioassays with multiwell plates by acquiring and processing videos. In our system, equipment is designed to acquire video of mosquito larvae in several multiwell plates simultaneously with a single camera, and a video analysis module detects the survival state of the larvae in each well in real time. A novel model and a new image registration algorithm are also proposed to accurately determine the survival state by analyzing the motion activities and weights of the larvae in each well. In our experiments, several spinosad bioassays against second-instar Aedes aegypti in 96-well plates were used to evaluate the proposed system, and the accuracy of larval survival-state detection exceeded 85%. Moreover, this investigation indicates that the developed system can be used not only in mosquito larval bioassays but also to detect and analyze the behavior of large numbers of other larvae.
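The core of survival-state detection from video can be illustrated by per-well motion activity from frame differencing: a well whose activity stays below a threshold is flagged as dead. The frames, wells, and threshold below are all synthetic assumptions, not the paper's model.

```python
# Hedged sketch: per-well motion activity via frame differencing.
# Well 0 contains moving (random) content; well 1 is frozen, standing
# in for a dead larva.
import numpy as np

rng = np.random.default_rng(5)
frames = rng.normal(0, 1, (20, 2, 32, 32))   # 20 frames x 2 wells x 32x32 px
frames[:, 1] = frames[0, 1]                  # well 1: identical in every frame

# Mean absolute inter-frame difference, averaged per well
activity = np.abs(np.diff(frames, axis=0)).mean(axis=(0, 2, 3))
alive = activity > 0.1                       # hypothetical activity threshold
print(alive.tolist())                        # [True, False]
```

The paper combines such motion cues with larval weight estimates and image registration to correct for plate movement, which simple differencing alone cannot handle.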


Aedes/drug effects; Automation, Laboratory/methods; Biological Assay/methods; Image Processing, Computer-Assisted/methods; Insecticides/pharmacology; Survival Analysis; Video Recording/methods; Animals; Automation, Laboratory/instrumentation; Biological Assay/instrumentation; Drug Combinations; Image Processing, Computer-Assisted/instrumentation; Larva/drug effects; Macrolides/pharmacology; Video Recording/instrumentation