Results 1 - 7 of 7
1.
Eur J Radiol ; 164: 110858, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37209462

ABSTRACT

PURPOSE: To develop a generative adversarial network (GAN) to quantify COVID-19 pneumonia on chest radiographs automatically.
MATERIALS AND METHODS: This retrospective study included 50,000 consecutive non-COVID-19 chest CT scans from 2015-2017 for training. Anteroposterior virtual chest, lung, and pneumonia radiographs were generated from the whole, segmented-lung, and pneumonia pixels of each CT scan. Two GANs were trained sequentially: one to generate lung images from radiographs and one to generate pneumonia images from lung images. GAN-driven pneumonia extent (pneumonia area/lung area) was expressed from 0% to 100%. We examined the correlation of GAN-driven pneumonia extent with the semi-quantitative Brixia X-ray severity score (one dataset, n = 4707) and with quantitative CT-driven pneumonia extent (four datasets, n = 54-375), and analyzed the measurement difference between the GAN- and CT-driven extents. Three datasets (n = 243-1481), in which unfavorable outcomes (respiratory failure, intensive care unit admission, and death) occurred in 10%, 38%, and 78% of patients, respectively, were used to examine the predictive power of GAN-driven pneumonia extent.
RESULTS: GAN-driven radiographic pneumonia extent was correlated with the severity score (0.611) and with the CT-driven extent (0.640). The 95% limits of agreement between the GAN- and CT-driven extents were -27.1% to 17.4%. GAN-driven pneumonia extent provided odds ratios of 1.05-1.18 per percent for unfavorable outcomes in the three datasets, with areas under the receiver operating characteristic curve (AUCs) of 0.614-0.842. When combined with demographic information only, and with both demographic and laboratory information, the prediction models yielded AUCs of 0.643-0.841 and 0.688-0.877, respectively.
CONCLUSION: The generative adversarial network automatically quantified COVID-19 pneumonia on chest radiographs and identified patients with unfavorable outcomes.
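The agreement analysis above compares paired GAN- and CT-driven extents via 95% limits of agreement; a minimal Bland-Altman sketch (the extent values below are hypothetical, not the study's data) looks like:

```python
import numpy as np

def limits_of_agreement(measure_a, measure_b, level=1.96):
    """Bland-Altman 95% limits of agreement between paired measurements."""
    d = np.asarray(measure_a, dtype=float) - np.asarray(measure_b, dtype=float)
    bias = d.mean()          # mean difference (systematic bias)
    sd = d.std(ddof=1)       # sample SD of the differences
    return bias - level * sd, bias + level * sd

# Hypothetical paired pneumonia extents (%): GAN-driven vs. CT-driven
gan_extent = [12.0, 30.5, 55.0, 8.2, 41.0]
ct_extent = [15.0, 28.0, 60.3, 10.1, 39.5]
low, high = limits_of_agreement(gan_extent, ct_extent)
```

About 95% of paired differences are expected to fall between the two returned limits, which is how the reported -27.1% to 17.4% range should be read.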


Subjects
COVID-19, Pneumonia, Humans, COVID-19/diagnostic imaging, Retrospective Studies, SARS-CoV-2, Pneumonia/diagnostic imaging, Lung/diagnostic imaging
2.
Abdom Radiol (NY) ; 48(8): 2547-2556, 2023 08.
Article in English | MEDLINE | ID: mdl-37222771

ABSTRACT

PURPOSE: The Liver Imaging Reporting and Data System (LI-RADS) is limited by interreader variability. Thus, our study aimed to develop a deep-learning model for classifying LI-RADS major features on subtraction magnetic resonance imaging (MRI) images.
METHODS: This single-center retrospective study included 222 consecutive patients who underwent resection for hepatocellular carcinoma (HCC) between January 2015 and December 2017. Subtraction arterial, portal venous, and transitional phase images of preoperative gadoxetic acid-enhanced MRI were used to train and test the deep-learning models. First, a three-dimensional (3D) nnU-Net-based deep-learning model was developed for HCC segmentation. Subsequently, a 3D U-Net-based deep-learning model was developed to assess three LI-RADS major features (nonrim arterial phase hyperenhancement [APHE], nonperipheral washout, and enhancing capsule [EC]), using the assessments of board-certified radiologists as the reference standard. HCC segmentation performance was assessed using the Dice similarity coefficient (DSC), sensitivity, and precision. The sensitivity, specificity, and accuracy of the deep-learning model for classifying LI-RADS major features were calculated.
RESULTS: The average DSC, sensitivity, and precision of our model for HCC segmentation were 0.884, 0.891, and 0.887, respectively, across all phases. Our model demonstrated a sensitivity, specificity, and accuracy of 96.6% (28/29), 66.7% (4/6), and 91.4% (32/35), respectively, for nonrim APHE; 95.0% (19/20), 50.0% (4/8), and 82.1% (23/28), respectively, for nonperipheral washout; and 86.7% (26/30), 54.2% (13/24), and 72.2% (39/54), respectively, for EC.
CONCLUSION: We developed an end-to-end deep-learning model that classifies the LI-RADS major features on subtraction MRI images. Our model exhibited satisfactory performance in classifying LI-RADS major features.
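The segmentation metric used above, the Dice similarity coefficient, measures the overlap between a predicted and a reference mask; a minimal sketch:

```python
import numpy as np

def dice_coefficient(pred, gt):
    """Dice similarity coefficient (DSC) between two binary masks."""
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    # DSC = 2 |A ∩ B| / (|A| + |B|), ranging from 0 (no overlap) to 1
    return 2.0 * np.logical_and(pred, gt).sum() / denom
```

The reported per-phase averages (e.g. 0.884) would be the mean of this value over the test cases.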


Subjects
Hepatocellular Carcinoma, Deep Learning, Liver Neoplasms, Humans, Hepatocellular Carcinoma/diagnostic imaging, Liver Neoplasms/diagnostic imaging, Liver Neoplasms/pathology, Retrospective Studies, Contrast Media, Sensitivity and Specificity, Magnetic Resonance Imaging/methods
3.
Quant Imaging Med Surg ; 13(2): 747-762, 2023 Feb 01.
Article in English | MEDLINE | ID: mdl-36819253

ABSTRACT

Background: This study aimed (I) to investigate the clinical implications of computed tomography (CT) cavity volume in tuberculosis (TB) and non-tuberculous mycobacterial pulmonary disease (NTM-PD), and (II) to develop a three-dimensional (3D) nnU-Net model to automatically detect and quantify cavity volume on CT images.
Methods: We retrospectively included a convenience sample of 206 TB and 186 NTM-PD patients from a tertiary referral hospital who underwent thin-section chest CT from 2012 through 2019. TB was microbiologically confirmed, and NTM-PD was diagnosed according to the 2007 Infectious Diseases Society of America/American Thoracic Society guidelines. Reference cavities were semi-automatically segmented on CT images, and a 3D nnU-Net model was built with 298 cases (240 for training, 20 for tuning, and 38 for internal validation). Receiver operating characteristic curves were used to evaluate the accuracy of CT cavity volume for two clinically relevant parameters: sputum smear positivity in TB and the need for treatment in NTM-PD. The sensitivity and false-positive rate of nnU-Net cavity detection were calculated using radiologist-detected cavities as the reference, and the intraclass correlation coefficient (ICC) between the reference and U-Net-derived cavity volumes was analyzed.
Results: The mean CT cavity volumes in TB and NTM-PD patients were 11.3 and 16.4 cm3, respectively, and were significantly greater in smear-positive TB (P<0.001) and in NTM-PD necessitating treatment (P=0.020). CT cavity volume provided areas under the curve of 0.701 [95% confidence interval (CI): 0.620-0.782] for TB sputum positivity and 0.834 (95% CI: 0.773-0.894) for the necessity of NTM-PD treatment. The nnU-Net provided a per-patient sensitivity of 100% (19/19) and a per-lesion sensitivity of 83.7% (41/49) in the validation dataset, with an average of 0.47 false-positive small cavities per patient (median volume, 0.26 cm3). The mean Dice similarity coefficient between the manually segmented and U-Net-derived cavities was 78.9%. The ICCs between the reference and U-Net-derived volumes were 0.991 (95% CI: 0.983-0.995) and 0.933 (95% CI: 0.897-0.957) on a per-patient and per-lesion basis, respectively.
Conclusions: CT cavity volume was associated with sputum positivity in TB and with the necessity of treatment in NTM-PD. The 3D nnU-Net model could automatically detect and quantify mycobacterial cavities on chest CT, helping assess TB infectivity and guide the initiation of NTM-PD treatment.
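The per-lesion volumetry described above reduces to counting voxels per cavity and scaling by voxel size; a sketch, assuming a labeled cavity mask and known voxel spacing:

```python
import numpy as np

def cavity_volumes_cm3(labels, spacing_mm):
    """Per-cavity volumes (cm^3) from a labeled 3-D mask.

    labels: integer array in which 0 is background and each detected
        cavity carries a distinct positive label.
    spacing_mm: (z, y, x) voxel spacing in millimetres.
    """
    voxel_mm3 = float(np.prod(spacing_mm))
    # Voxel count per label; index 0 (background) is dropped
    counts = np.bincount(np.asarray(labels).ravel())[1:]
    return counts * voxel_mm3 / 1000.0  # mm^3 -> cm^3
```

Summing the returned array gives the per-patient cavity volume used in the ROC analyses; the median of its entries corresponds to the per-lesion statistics reported.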

4.
J Magn Reson Imaging ; 57(3): 871-881, 2023 03.
Article in English | MEDLINE | ID: mdl-35775971

ABSTRACT

BACKGROUND: Accurate and rapid measurement of meningioma volume on MRI is essential in clinical practice to determine tumor growth rate. Imperfect automation and disappointing performance for small meningiomas limit the use of previous automated volumetric tools in routine clinical practice.
PURPOSE: To develop and validate a computational model for fully automated meningioma segmentation and volume measurement on contrast-enhanced MRI scans using deep learning.
STUDY TYPE: Retrospective.
POPULATION: A total of 659 intracranial meningioma patients (median age, 59.0 years; interquartile range: 53.0-66.0 years), including 554 women and 105 men.
FIELD STRENGTH/SEQUENCE: 1.0 T, 1.5 T, and 3.0 T; three-dimensional, T1-weighted gradient-echo imaging with contrast enhancement.
ASSESSMENT: The tumors were manually segmented by two neurosurgeons, H.K. and C.-K.P., with 10 and 26 years of clinical experience, respectively, for use as the ground truth. Deep learning models based on U-Net and nnU-Net were trained on 459 subjects and tested on 100 patients from a single institution (internal validation set [IVS]) and 100 patients from 24 other institutions (external validation set [EVS]). The performance of each model was evaluated against the ground truth with the Sørensen-Dice similarity coefficient (DSC).
STATISTICAL TESTS: After the normality of the data distribution was verified by the Shapiro-Wilk test, variables with three or more categories were compared by the Kruskal-Wallis test with Dunn's post hoc analysis.
RESULTS: A two-dimensional (2D) nnU-Net showed the highest median DSCs: 0.922 and 0.893 for the IVS and EVS, respectively. The nnU-Nets achieved superior performance in meningioma segmentation compared with the U-Nets. The DSCs of the 2D nnU-Net for small meningiomas (less than 1 cm3) were 0.769 and 0.780 for the IVS and EVS, respectively.
DATA CONCLUSION: A fully automated, accurate nnU-Net-based volumetric measurement tool for meningioma was developed, with clinically applicable performance for small meningiomas.
EVIDENCE LEVEL: 3
TECHNICAL EFFICACY: Stage 2.
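The omnibus Kruskal-Wallis comparison named in STATISTICAL TESTS can be run with SciPy; the per-patient DSC lists below are illustrative placeholders, not the study's data:

```python
from scipy.stats import kruskal

# Hypothetical per-patient DSC values for three segmentation models
dsc_unet_2d = [0.86, 0.88, 0.83, 0.90, 0.85]
dsc_nnunet_2d = [0.92, 0.93, 0.90, 0.94, 0.91]
dsc_nnunet_3d = [0.90, 0.91, 0.89, 0.92, 0.90]

# Nonparametric test of whether the three DSC distributions differ;
# a small p-value would motivate Dunn's pairwise post hoc analysis
statistic, p_value = kruskal(dsc_unet_2d, dsc_nnunet_2d, dsc_nnunet_3d)
```

The Kruskal-Wallis test is the rank-based counterpart of one-way ANOVA, appropriate here because DSC values are bounded and often non-normally distributed.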


Subjects
Deep Learning, Meningeal Neoplasms, Meningioma, Male, Humans, Female, Middle Aged, Meningioma/diagnostic imaging, Retrospective Studies, Image Processing, Computer-Assisted/methods, Magnetic Resonance Imaging/methods, Meningeal Neoplasms/diagnostic imaging
5.
Radiology ; 306(3): e220292, 2023 03.
Article in English | MEDLINE | ID: mdl-36283113

ABSTRACT

Background: Total lung capacity (TLC) has been estimated from chest radiographs with time-consuming methods, such as planimetric techniques and manual measurements.
Purpose: To develop a deep learning-based, multidimensional model capable of estimating TLC from chest radiographs and demographic variables, and to validate its technical performance and clinical utility using multicenter retrospective data sets.
Materials and Methods: A deep learning model was pretrained using 50,000 consecutive chest CT scans performed between January 2015 and June 2017. The model was fine-tuned on 3523 pairs of posteroanterior chest radiographs and plethysmographic TLC measurements from consecutive patients who underwent pulmonary function testing on the same day. The model was tested with multicenter retrospective data sets from two tertiary care centers and one community hospital, including (a) external test set 1 (n = 207) and external test set 2 (n = 216) for technical performance and (b) patients with idiopathic pulmonary fibrosis (n = 217) for clinical utility. Technical performance was evaluated with various agreement measures, and clinical utility was assessed in terms of the prognostic value for overall survival using multivariable Cox regression.
Results: The mean absolute difference and within-subject SD between observed and estimated TLC were 0.69 L and 0.73 L, respectively, in external test set 1 (161 men; median age, 70 years [IQR: 61-76 years]) and 0.52 L and 0.53 L in external test set 2 (113 men; median age, 63 years [IQR: 51-70 years]). In patients with idiopathic pulmonary fibrosis (145 men; median age, 67 years [IQR: 61-73 years]), a greater estimated TLC percentage was associated with lower mortality risk (adjusted hazard ratio, 0.97 per percent; 95% CI: 0.95, 0.98; P < .001).
Conclusion: A fully automatic, deep learning-based model estimated total lung capacity from chest radiographs, and the model predicted survival in idiopathic pulmonary fibrosis.
© RSNA, 2022. Online supplemental material is available for this article. See also the editorial by Sorkness in this issue.
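The two agreement measures in the Results (mean absolute difference and within-subject SD between observed and estimated TLC) can be sketched as follows, assuming Bland's two-measurement formula for the within-subject SD:

```python
import numpy as np

def tlc_agreement(observed_l, estimated_l):
    """Mean absolute difference and within-subject SD (both in litres).

    The within-subject SD uses Bland's formula for two measurements per
    subject: sqrt(sum(d_i^2) / (2 * n)), where d_i are paired differences.
    """
    d = np.asarray(observed_l, dtype=float) - np.asarray(estimated_l, dtype=float)
    mean_abs_diff = np.abs(d).mean()
    within_subject_sd = np.sqrt(np.sum(d ** 2) / (2 * d.size))
    return mean_abs_diff, within_subject_sd
```

Applied to each external test set's paired plethysmographic and model-estimated TLC values, these two numbers correspond to the 0.69 L / 0.73 L and 0.52 L / 0.53 L pairs reported above.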


Subjects
Deep Learning, Idiopathic Pulmonary Fibrosis, Male, Humans, Aged, Middle Aged, Retrospective Studies, Radiography, Idiopathic Pulmonary Fibrosis/diagnostic imaging, Lung Volume Measurements, Lung/diagnostic imaging
6.
Clin Nutr ; 40(8): 5038-5046, 2021 08.
Article in English | MEDLINE | ID: mdl-34365038

ABSTRACT

BACKGROUND & AIMS: Body composition analysis on CT images is a valuable tool for sarcopenia assessment. We aimed to develop and validate a deep neural network applicable to whole-body CT images from PET-CT scans for the automatic volumetric segmentation of body composition.
METHODS: For model development, 100 whole-body or torso 18F-fluorodeoxyglucose PET-CT scans of 100 patients were retrospectively included. Two radiologists semi-automatically labeled the following seven body components in every CT image slice, providing a total of 46,967 image slices from the 100 scans for training the 3D U-Net (training, 39,268 slices; tuning, 3116 slices; internal validation, 4583 slices): skin, bone, muscle, abdominal visceral fat, subcutaneous fat, internal organs with vessels, and central nervous system. Segmentation accuracy was assessed using reference masks from three external datasets: two Korean centers (4668 and 4796 image slices from 20 CT scans each) and a French public dataset (3763 image slices from 24 CT scans). The 3D U-Net-derived values were clinically validated against bioelectrical impedance analysis (BIA) and by assessing the model's diagnostic performance for sarcopenia in a community-based elderly cohort (n = 522).
RESULTS: The 3D U-Net achieved accurate body composition segmentation, with an average Dice similarity coefficient of 96.5%-98.9% for all masks and 92.3%-99.3% for muscle, abdominal visceral fat, and subcutaneous fat in the validation datasets. The 3D U-Net-derived torso volumes of skeletal muscle and fat tissue, and the average areas of those tissues at the waist, were correlated with BIA-derived appendicular lean mass (correlation coefficients: 0.71 and 0.72, respectively) and fat mass (correlation coefficients: 0.95 and 0.93, respectively). The 3D U-Net-derived average areas of skeletal muscle and fat tissue at the waist were independently associated with sarcopenia (both P < .001) after adjustment for age and sex, providing an area under the curve of 0.858 (95% CI: 0.815 to 0.901).
CONCLUSIONS: This deep neural network model enabled the automatic volumetric segmentation of body composition on whole-body CT images, potentially expanding adjunctive sarcopenia assessment on PET-CT scans and volumetric assessment of metabolism in whole-body muscle and fat tissues.
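Slice-wise tissue areas of the kind correlated with BIA above reduce to counting pixels per class and scaling by pixel size; the label scheme below is an assumption for illustration, not the study's actual encoding:

```python
import numpy as np

# Assumed label scheme for the seven segmented compartments (hypothetical)
TISSUE_CLASSES = {1: "skin", 2: "bone", 3: "muscle", 4: "visceral_fat",
                  5: "subcutaneous_fat", 6: "organs_vessels", 7: "cns"}

def tissue_areas_cm2(label_slice, pixel_mm):
    """Per-class area (cm^2) of one labeled axial CT slice.

    label_slice: 2-D integer array, 0 = background, 1..7 = tissue classes.
    pixel_mm: (row, col) pixel spacing in millimetres.
    """
    pixel_cm2 = (pixel_mm[0] * pixel_mm[1]) / 100.0  # mm^2 -> cm^2
    counts = np.bincount(np.asarray(label_slice).ravel(), minlength=8)
    return {name: counts[k] * pixel_cm2 for k, name in TISSUE_CLASSES.items()}
```

Averaging the muscle and fat entries across the waist-level slices would yield the "average area in the waist" quantities used in the sarcopenia analysis.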


Subjects
Body Composition, Neural Networks (Computer), Positron Emission Tomography Computed Tomography/methods, Sarcopenia/diagnosis, Whole Body Imaging/methods, Abdomen/diagnostic imaging, Aged, Female, Fluorodeoxyglucose F18, Humans, Intra-Abdominal Fat/diagnostic imaging, Male, Middle Aged, Muscle, Skeletal/diagnostic imaging, Nutrition Assessment, Radiopharmaceuticals, Republic of Korea, Retrospective Studies, Subcutaneous Fat/diagnostic imaging
7.
Eur Radiol ; 31(12): 9012-9021, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34009411

ABSTRACT

OBJECTIVES: To develop a deep learning-based pulmonary vessel segmentation algorithm (DLVS) for noncontrast chest CT and to investigate its clinical implications in assessing vascular remodeling in patients with chronic obstructive pulmonary disease (COPD).
METHODS: For development, 104 pulmonary CT angiography scans (49,054 slices) acquired on a dual-source CT scanner were collected, and spatiotemporally matched virtual noncontrast and 50-keV images were generated. Vessel maps were extracted from the 50-keV images. The 3-dimensional U-Net-based DLVS was trained to segment pulmonary vessels (with a vessel map as the output) from virtual noncontrast images (as the input). For external validation, vendor-independent noncontrast CT images (n = 14) and the VESSEL12 challenge open dataset (n = 3) were used. For each case, 200 points were selected, including 20 intra-lesional points, and the probability value for each point was extracted. For clinical validation, we included 281 COPD patients with low-dose noncontrast CT scans. The DLVS-calculated volume of vessels with a cross-sectional area < 5 mm2 (PVV5) and the PVV5 divided by the total vessel volume (%PVV5) were measured.
RESULTS: The DLVS correctly segmented 99.1% of the intravascular points (1,387/1,400) and 93.1% of the extravascular points (1,309/1,400). The areas under the receiver operating characteristic curve (AUROCs) were 0.977 and 0.969 for the two external validation datasets. In the COPD patients, both PVV5 and %PVV5 successfully differentiated severe patients with FEV1 < 50% (AUROCs, 0.715 and 0.804) and were significantly correlated with the emphysema index (both P < .05).
CONCLUSIONS: The DLVS successfully segmented pulmonary vessels on noncontrast chest CT by utilizing spatiotemporally matched 50-keV images from a dual-source CT scanner and showed promising clinical applicability in COPD.
KEY POINTS:
• We developed a deep learning pulmonary vessel segmentation algorithm using virtual noncontrast images and 50-keV enhanced images produced by a dual-source CT scanner.
• Our algorithm successfully segmented vessels in diseased lungs.
• Our algorithm showed promising results in assessing the loss of small-vessel density in COPD patients.
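The PVV5 measurement above (volume of vessels with cross-sectional area < 5 mm2) can be sketched per axial slice, assuming in-plane connected components of the vessel mask have already been labeled upstream:

```python
import numpy as np

def pvv5_cm3(labeled_slices, pixel_area_mm2, slice_thickness_mm,
             threshold_mm2=5.0):
    """Small-vessel volume (PVV5, cm^3) from per-slice vessel components.

    labeled_slices: iterable of 2-D integer arrays, one per axial slice,
        where each in-plane vessel cross-section carries a distinct
        positive label (connected-component labeling assumed upstream).
    """
    small_voxels = 0
    for lab in labeled_slices:
        counts = np.bincount(np.asarray(lab).ravel())
        areas = counts[1:] * pixel_area_mm2             # per-component area, mm^2
        small_labels = np.flatnonzero(areas < threshold_mm2) + 1
        small_voxels += counts[small_labels].sum()      # keep only small vessels
    return small_voxels * pixel_area_mm2 * slice_thickness_mm / 1000.0
```

%PVV5 then follows by dividing this value by the total vessel volume computed the same way without the area threshold.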


Subjects
Deep Learning, Algorithms, Computed Tomography Angiography, Humans, Thorax, Tomography, X-Ray Computed