Results 1 - 20 of 56
1.
Article in English | MEDLINE | ID: mdl-39371589

ABSTRACT

Volumetric assessment of edema due to anasarca can help monitor the progression of diseases such as kidney, liver or heart failure. The ability to measure edema non-invasively by automatic segmentation from abdominal CT scans may be of clinical importance. The current state-of-the-art method for edema segmentation using intensity priors is susceptible to false positives or under-segmentation errors. The application of modern supervised deep learning methods for 3D edema segmentation is limited due to challenges in manual annotation of edema. In the absence of accurate 3D annotations of edema, we propose a weakly supervised learning method that uses edema segmentations produced by intensity priors as pseudo-labels, along with pseudo-labels of muscle, subcutaneous and visceral adipose tissues for context, to produce more refined segmentations with demonstrably lower segmentation errors. The proposed method employs nnU-Nets in multiple stages to produce the final edema segmentation. The results demonstrate the potential of weakly supervised learning using edema and tissue pseudo-labels in improved quantification of edema for clinical applications.

2.
Med Biol Eng Comput ; 2024 Oct 25.
Article in English | MEDLINE | ID: mdl-39448511

ABSTRACT

Ultrasound (US)-based patient-specific rupture risk analysis of abdominal aortic aneurysms (AAAs) has shown promising results. Input for these models is the patient-specific geometry of the AAA. However, segmentation of the intraluminal thrombus (ILT) remains challenging in US images due to the low ILT-blood contrast. This study aims to improve AAA and ILT segmentation in time-resolved three-dimensional (3D + t) US images using a deep learning approach. In this study, a "no new net" (nnU-Net) model was trained on 3D + t US data using either US-based or (co-registered) computed tomography (CT)-based annotations. The optimal training strategy for this low-contrast data was determined for a limited dataset. The merit of augmentation was investigated, as well as the inclusion of low-contrast areas. Segmentation results were validated with CT-based geometries as the ground truth. The model trained on CT-based masks showed the best performance in terms of Dice index, Hausdorff distance, and diameter differences, covering a larger part of the AAA. With higher accuracy and less manual input, the model outperforms conventional methods, with a mean Hausdorff distance of 4.4 mm for the vessel and 7.8 mm for the lumen. However, visibility of the lumen-ILT interface remains the limiting factor, necessitating improvements in image acquisition to ensure broader patient inclusion and enable rupture risk assessment of AAAs in the future.
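The Hausdorff distances reported above can be reproduced in principle with a simple symmetric point-cloud computation. The following is an illustrative sketch, not the study's code; the function name and toy coordinates are ours:

```python
import numpy as np

def hausdorff_distance(pts_a, pts_b):
    """Symmetric Hausdorff distance between two point clouds of shape (N, 3)."""
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# Toy check: two unit segments offset by 1 mm along z
a = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
b = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 1.0]])
print(hausdorff_distance(a, b))  # 1.0
```

In practice the point clouds would be the surface voxels of the segmented and reference geometries, expressed in millimeters.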

3.
J Imaging Inform Med ; 2024 Oct 09.
Article in English | MEDLINE | ID: mdl-39384719

ABSTRACT

Ischemic changes are not visible on non-contrast head CT until several hours after infarction, though deep convolutional neural networks have shown promise in the detection of subtle imaging findings. This study aims to assess whether dual-energy CT (DECT) acquisition can improve early infarct visibility for machine learning. The retrospective dataset consisted of 330 DECTs acquired up to 48 h prior to confirmation of a DWI-positive infarct on MRI between 2016 and 2022. Infarct segmentation maps were generated from the MRI and co-registered to the CT to serve as ground truth for segmentation. A self-configuring 3D nnU-Net was trained for segmentation on (1) standard 120 kV mixed images, (2) 190 keV virtual monochromatic images, and (3) 120 kV + 190 keV images as dual-channel inputs. Algorithm performance was assessed on a test set using Dice scores and paired t-tests. Global aggregate Dice scores were 0.616, 0.645, and 0.665 for standard 120 kV images, 190 keV images, and combined channel inputs, respectively. Differences in overall Dice scores were statistically significant, with the highest performance for combined channel inputs (p < 0.01). Small but statistically significant differences were observed for infarcts between 6 and 12 h from last-known-well, with higher performance for larger infarcts. Volumetric accuracy trended higher with combined inputs, but differences were not statistically significant (p = 0.07). Supplementation of standard head CT images with dual-energy data provides earlier and more accurate segmentation of infarcts for machine learning, particularly between 6 and 12 h after last-known-well.
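The Dice scores used throughout these studies reduce to a simple overlap ratio between binary masks; a minimal sketch (our own helper, not the authors' code):

```python
import numpy as np

def dice_score(pred, gt):
    """Dice similarity coefficient between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(pred, gt).sum() / denom

pred = np.zeros((4, 4), dtype=bool); pred[1:3, 1:3] = True  # 4 voxels
gt = np.zeros((4, 4), dtype=bool); gt[1:3, 1:4] = True      # 6 voxels, 4 overlapping
print(dice_score(pred, gt))  # 0.8
```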

4.
Heliyon ; 10(19): e38118, 2024 Oct 15.
Article in English | MEDLINE | ID: mdl-39398015

ABSTRACT

Purpose: To develop a deep learning-based algorithm that automatically and accurately classifies patients as either having pulmonary emboli (PE) or not in CT pulmonary angiography (CTPA) examinations. Materials and methods: For model development, 700 CTPA examinations from 652 patients performed at a single institution were used, of which 149 examinations contained 1497 PE traced by radiologists. The nnU-Net deep learning-based segmentation framework was trained using 5-fold cross-validation. To enhance classification, we applied logical rules based on PE volume and probability thresholds. External model evaluation was performed in 34 and 770 CTPAs from two independent datasets. Results: A total of 1483 CTPA examinations were evaluated. In the internal cross-validation and test set, the trained model correctly classified 123 of 128 examinations as positive for PE (sensitivity 96.1%; 95% CI 91-98%; P < .05) and 521 of 551 as negative (specificity 94.6%; 95% CI 92-96%; P < .05), achieving an area under the receiver operating characteristic curve (AUROC) of 96.4% (95% CI 79-99%; P < .05). In the first external test dataset, the trained model correctly classified 31 of 32 examinations as positive (sensitivity 96.9%; 95% CI 84-99%; P < .05) and 2 of 2 as negative (specificity 100%; 95% CI 34-100%; P < .05), achieving an AUROC of 98.6% (95% CI 83-100%; P < .05). In the second external test dataset, the trained model correctly classified 379 of 385 examinations as positive (sensitivity 98.4%; 95% CI 97-99%; P < .05) and 346 of 385 as negative (specificity 89.9%; 95% CI 86-93%; P < .05), achieving an AUROC of 98.5% (95% CI 83-100%; P < .05). Conclusion: Our automatic pipeline achieved beyond state-of-the-art diagnostic performance for PE in CTPA, using nnU-Net for segmentation and volume- and probability-based post-processing for classification.
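The volume- and probability-threshold post-processing described above can be sketched as follows; the function, thresholds, and voxel volume are hypothetical illustrations, not the paper's actual rule:

```python
import numpy as np

def classify_pe(prob_map, voxel_volume_ml, prob_thr=0.5, min_volume_ml=0.1):
    """Call a CTPA exam positive when the segmented clot volume at a given
    probability cutoff exceeds a minimum volume (thresholds are illustrative)."""
    mask = prob_map >= prob_thr
    volume_ml = mask.sum() * voxel_volume_ml
    return volume_ml >= min_volume_ml, volume_ml

probs = np.zeros((10, 10, 10))
probs[:5, :5, :5] = 0.9          # a candidate clot of 125 voxels
positive, vol = classify_pe(probs, voxel_volume_ml=0.001)
print(positive, vol)  # True 0.125
```

Such a rule converts a voxel-wise segmentation into an exam-level decision, which is how segmentation networks are commonly repurposed for classification.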

5.
J Bone Oncol ; 48: 100630, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39281712

ABSTRACT

Objective: Variability exists in the subjective delineation of tumor areas in MRI scans of patients with spinal bone metastases. This research aims to investigate the efficacy of the nnUNet radiomics model for automatic segmentation and identification of spinal bone metastases. Methods: A cohort of 118 patients diagnosed with spinal bone metastases at our institution between January 2020 and December 2023 was enrolled. They were randomly divided into a training set (n = 78) and a test set (n = 40). The nnUNet radiomics segmentation model was developed, employing manual delineations of tumor areas by physicians as the reference standard. Both methods were used to compute tumor area measurements, and the segmentation performance and consistency of the nnUNet model were assessed. Results: The nnUNet model demonstrated effective localization and segmentation of metastases, including smaller lesions. The Dice coefficients for the training and test sets were 0.926 and 0.824, respectively. Within the test set, the Dice coefficients for lumbar and thoracic vertebrae were 0.838 and 0.785, respectively. A strong linear correlation was observed between the nnUNet model segmentation and physician-delineated tumor areas in 40 patients (R² = 0.998, P < 0.001). Conclusions: The nnUNet model exhibits efficacy in automatically localizing and segmenting spinal bone metastases in MRI scans.
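The reported R² agreement between automated and manual tumor areas is the squared Pearson correlation over paired measurements; a sketch with made-up numbers (not study data):

```python
import numpy as np

# Hypothetical paired tumor-area measurements (cm^2), not study data
manual = np.array([2.1, 4.7, 8.3, 12.0, 15.6])
auto = np.array([2.0, 4.9, 8.1, 12.3, 15.4])

r = np.corrcoef(manual, auto)[0, 1]  # Pearson correlation
r_squared = r ** 2
print(round(r_squared, 4))
```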

6.
BMC Med Imaging ; 24(1): 233, 2024 Sep 06.
Article in English | MEDLINE | ID: mdl-39243001

ABSTRACT

High-Intensity Focused Ultrasound (HIFU) ablation represents a rapidly advancing non-invasive treatment modality that has achieved considerable success in addressing uterine fibroids, which constitute over 50% of benign gynecological tumors. Preoperative Magnetic Resonance Imaging (MRI) plays a pivotal role in the planning and guidance of HIFU surgery for uterine fibroids, wherein the segmentation of tumors holds critical significance. Segmentation was previously executed manually by medical experts, a time-consuming and labor-intensive procedure heavily reliant on clinical expertise. This study introduced deep learning-based nnU-Net models as a cost-effective approach for segmenting uterine fibroids in preoperative MRI images. Furthermore, 3D reconstruction of the segmented targets was implemented to guide HIFU surgery. Segmentation and 3D reconstruction performance were evaluated with a focus on enhancing the safety and effectiveness of HIFU surgery. Results demonstrated the nnU-Net's commendable performance in segmenting uterine fibroids and their surrounding organs. Specifically, 3D nnU-Net achieved Dice Similarity Coefficients (DSC) of 92.55% for the uterus, 95.63% for fibroids, 92.69% for the spine, 89.63% for the endometrium, 97.75% for the bladder, and 90.45% for the urethral orifice. Compared to other state-of-the-art methods such as HIFUNet, U-Net, R2U-Net, ConvUNeXt, and 2D nnU-Net, 3D nnU-Net demonstrated significantly higher DSC values, highlighting its superior accuracy and robustness. In conclusion, the efficacy of the 3D nnU-Net model for automated segmentation of the uterus and its surrounding organs was robustly validated. When integrated with intra-operative ultrasound imaging, this segmentation method and 3D reconstruction hold substantial potential to enhance the safety and efficiency of HIFU surgery in the clinical treatment of uterine fibroids.


Subjects
High-Intensity Focused Ultrasound Ablation, Imaging, Three-Dimensional, Leiomyoma, Magnetic Resonance Imaging, Uterine Neoplasms, Humans, Leiomyoma/diagnostic imaging, Leiomyoma/surgery, Female, Imaging, Three-Dimensional/methods, High-Intensity Focused Ultrasound Ablation/methods, Magnetic Resonance Imaging/methods, Uterine Neoplasms/diagnostic imaging, Uterine Neoplasms/surgery, Deep Learning, Surgery, Computer-Assisted/methods
7.
Neuroinformatics ; 2024 Sep 11.
Article in English | MEDLINE | ID: mdl-39259472

ABSTRACT

This study concentrates on the segmentation of intracranial aneurysms, a pivotal aspect of diagnosis and treatment planning. We aim to overcome the inherent instance imbalance and morphological variability by introducing a novel morphology and texture loss reweighting approach. Our method incorporates tailored weights within the loss function of deep neural networks. Specifically designed to account for aneurysm size, shape, and texture, this approach strategically guides the model to capture discriminative information from imbalanced features. We conducted extensive experiments on the ADAM and RENJI TOF-MRA datasets to validate the proposed approach. The results demonstrate the effectiveness of the introduced methodology in improving aneurysm segmentation accuracy. By dynamically adapting to the variance in aneurysm features, the model overcomes the challenge posed by instance imbalance and shows promising outcomes for accurate diagnostic insights. In conclusion, the proposed morphology and texture loss reweighting approach, with its tailored weights and dynamic adaptability, enhances segmentation precision, suggesting the potential for accurate diagnostic insights and informed treatment strategies in this critical domain of medical imaging.
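One simple form of instance loss reweighting is to weight each connected component's soft-Dice term inversely to its size, so small aneurysms are not drowned out by large ones. This sketch illustrates only a size term under our own simplifications; the paper's loss also incorporates shape and texture weights we do not attempt to reproduce:

```python
import numpy as np
from scipy import ndimage

def size_reweighted_dice_loss(probs, gt, eps=1e-6):
    """Soft-Dice loss with each connected component weighted inversely to its
    size, so small instances are not drowned out by large ones. Illustrates
    only the size term of a reweighting scheme; prediction mass outside the
    ground truth is ignored for simplicity."""
    labels, n = ndimage.label(gt)
    loss, wsum = 0.0, 0.0
    for i in range(1, n + 1):
        m = labels == i
        w = 1.0 / m.sum()                  # smaller instance -> larger weight
        inter = (probs * m).sum()          # prediction mass inside the instance
        dice = (2 * inter + eps) / (inter + m.sum() + eps)
        loss += w * (1.0 - dice)
        wsum += w
    return loss / max(wsum, eps)

gt = np.zeros((16, 16), dtype=bool)
gt[2:4, 2:4] = True      # small aneurysm-like instance (4 voxels)
gt[8:14, 8:14] = True    # large instance (36 voxels)
perfect = gt.astype(float)
missed_small = perfect.copy()
missed_small[2:4, 2:4] = 0.0   # miss only the small instance
print(size_reweighted_dice_loss(perfect, gt),
      size_reweighted_dice_loss(missed_small, gt))
```

Note that missing only the small instance still incurs most of the possible loss, which is exactly the behavior a size-reweighted objective is meant to produce.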

8.
Article in English | MEDLINE | ID: mdl-39271574

ABSTRACT

PURPOSE: Anasarca is a condition that results from organ dysfunctions, such as heart, kidney, or liver failure, characterized by the presence of edema throughout the body. The quantification of accumulated edema may have potential clinical benefits. This work focuses on accurately estimating the amount of edema non-invasively using abdominal CT scans, with minimal false positives. However, edema segmentation is challenging due to the complex appearance of edema and the lack of manually annotated volumes. METHODS: We propose a weakly supervised approach for edema segmentation using initial edema labels from the current state-of-the-art method for edema segmentation (Intensity Prior), along with labels of surrounding tissues as anatomical priors. A multi-class 3D nnU-Net was employed as the segmentation network, and training was performed using an iterative annotation workflow. RESULTS: We evaluated segmentation accuracy on a test set of 25 patients with edema. The average Dice Similarity Coefficient of the proposed method was similar to Intensity Prior (61.5% vs. 61.7%; p = 0.83). However, the proposed method reduced the average False Positive Rate significantly, from 1.8% to 1.1% (p < 0.001). Edema volumes computed using automated segmentation had a strong correlation with manual annotation (R² = 0.87). CONCLUSION: Weakly supervised learning using 3D multi-class labels and iterative annotation is an efficient way to perform high-quality edema segmentation with minimal false positives. Automated edema segmentation can produce edema volume estimates that are highly correlated with manual annotation. The proposed approach is promising for clinical applications to monitor anasarca using estimated edema volumes.
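The voxel-wise False Positive Rate compared above is FP / (FP + TN); a minimal sketch (our own helper, not the authors' code):

```python
import numpy as np

def false_positive_rate(pred, gt):
    """Voxel-wise false positive rate: FP / (FP + TN)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    fp = np.logical_and(pred, ~gt).sum()
    tn = np.logical_and(~pred, ~gt).sum()
    return fp / (fp + tn)

pred = np.zeros(100, dtype=bool); pred[:10] = True   # 10 predicted voxels
gt = np.zeros(100, dtype=bool); gt[:5] = True        # 5 true voxels
print(false_positive_rate(pred, gt))  # 5 / 95
```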

9.
Radiol Artif Intell ; 6(5): e230115, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39166971

ABSTRACT

Purpose To evaluate nnU-Net-based segmentation models for automated delineation of medulloblastoma tumors on multi-institutional MRI scans. Materials and Methods This retrospective study included 78 pediatric patients (52 male, 26 female), with ages ranging from 2 to 18 years, with medulloblastomas, from three different sites (28 from hospital A, 18 from hospital B, and 32 from hospital C), who had data available from three clinical MRI protocols (gadolinium-enhanced T1-weighted, T2-weighted, and fluid-attenuated inversion recovery). The scans were retrospectively collected from the year 2000 until May 2019. Reference standard annotations of the tumor habitat, including enhancing tumor, edema, and cystic core plus nonenhancing tumor subcompartments, were performed by two experienced neuroradiologists. Preprocessing included registration to age-appropriate atlases, skull stripping, bias correction, and intensity matching. The two models were trained as follows: (a) the transfer learning nnU-Net model was pretrained on an adult glioma cohort (n = 484) and fine-tuned on medulloblastoma studies using Models Genesis and (b) the direct deep learning nnU-Net model was trained directly on the medulloblastoma datasets, across fivefold cross-validation. Model robustness was evaluated on the three datasets when using different combinations of training and test sets, with data from two sites at a time used for training and data from the third site used for testing. Results Analysis on the three test sites yielded Dice scores of 0.81, 0.86, and 0.86 and 0.80, 0.86, and 0.85 for tumor habitat; 0.68, 0.84, and 0.77 and 0.67, 0.83, and 0.76 for enhancing tumor; 0.56, 0.71, and 0.69 and 0.56, 0.71, and 0.70 for edema; and 0.32, 0.48, and 0.43 and 0.29, 0.44, and 0.41 for cystic core plus nonenhancing tumor for the transfer learning and direct nnU-Net models, respectively. The models were largely robust to site-specific variations. 
Conclusion nnU-Net segmentation models hold promise for accurate, robust automated delineation of medulloblastoma tumor subcompartments, potentially leading to more effective radiation therapy planning in pediatric medulloblastoma. Keywords: Pediatrics, MR Imaging, Segmentation, Transfer Learning, Medulloblastoma, nnU-Net, MRI. Supplemental material is available for this article. © RSNA, 2024. See also the commentary by Rudie and Correia de Verdier in this issue.


Subjects
Cerebellar Neoplasms, Medulloblastoma, Multiparametric Magnetic Resonance Imaging, Humans, Medulloblastoma/diagnostic imaging, Medulloblastoma/pathology, Child, Adolescent, Female, Male, Retrospective Studies, Cerebellar Neoplasms/diagnostic imaging, Cerebellar Neoplasms/pathology, Child, Preschool, Multiparametric Magnetic Resonance Imaging/methods, Deep Learning, Image Interpretation, Computer-Assisted/methods, Neural Networks, Computer
10.
Stud Health Technol Inform ; 316: 606-610, 2024 Aug 22.
Article in English | MEDLINE | ID: mdl-39176815

ABSTRACT

Machine Learning (ML) has evolved beyond being a specialized technique exclusively used by computer scientists. Besides the general ease of use, automated pipelines allow for training sophisticated ML models with minimal knowledge of computer science. In recent years, Automated ML (AutoML) frameworks have become serious competitors for specialized ML models and have even been able to outperform the latter for specific tasks. Moreover, this success is not limited to simple tasks but also complex ones, like tumor segmentation in histopathological tissue, a very time-consuming task requiring years of expertise by medical professionals. Regarding medical image segmentation, the leading AutoML frameworks are nnU-Net and deepflash2. In this work, we begin to compare those two frameworks in the area of histopathological image segmentation. This use case proves especially challenging, as tumor and healthy tissue are often not clearly distinguishable by hard borders but rather through heterogeneous transitions. A dataset of 103 whole-slide images from 56 glioblastoma patients was used for the evaluation. Training and evaluation were run on a notebook with consumer hardware, determining the suitability of the frameworks for their application in clinical scenarios rather than high-performance scenarios in research labs.


Subjects
Glioblastoma, Humans, Glioblastoma/diagnostic imaging, Brain Neoplasms/diagnostic imaging, Machine Learning, Image Interpretation, Computer-Assisted/methods, Neural Networks, Computer
11.
Med Phys ; 2024 Aug 13.
Article in English | MEDLINE | ID: mdl-39137294

ABSTRACT

BACKGROUND: The use of magnetic resonance (MR) imaging for proton therapy treatment planning is gaining attention as a highly effective method for guidance. At the core of this approach is the generation of computed tomography (CT) images from MR scans. However, the critical issue in this process is accurately aligning the MR and CT images, a task that becomes particularly challenging in frequently moving body areas, such as the head-and-neck. Misalignments in these images can result in blurred synthetic CT (sCT) images, adversely affecting the precision and effectiveness of the treatment planning. PURPOSE: This study introduces a novel network that cohesively unifies image generation and registration processes to enhance the quality and anatomical fidelity of sCTs derived from better-aligned MR images. METHODS: The approach synergizes a generation network (G) with a deformable registration network (R), optimizing them jointly in MR-to-CT synthesis. This goal is achieved by alternately minimizing the discrepancies between the generated/registered CT images and their corresponding reference CT counterparts. The generation network employs a UNet architecture, while the registration network leverages an implicit neural representation (INR) of the displacement vector fields (DVFs). We validated this method on a dataset comprising 60 head-and-neck patients, reserving 12 cases for holdout testing. RESULTS: Compared to the baseline Pix2Pix method with an MAE of 124.95 ± 30.74 HU, the proposed technique achieved an MAE of 80.98 ± 7.55 HU. The unified translation-registration network produced sharper and more anatomically congruent outputs, showing superior efficacy in converting MR images to sCTs. Additionally, from a dosimetric perspective, the plan recalculated on the resulting sCTs showed a markedly reduced discrepancy relative to the reference proton plans.
CONCLUSIONS: This study demonstrates that a holistic MR-based CT synthesis approach, integrating both image-to-image translation and deformable registration, significantly improves the precision and quality of sCT generation, particularly for challenging body areas with varied anatomic changes between corresponding MR and CT images.

12.
J Imaging Inform Med ; 2024 Aug 13.
Article in English | MEDLINE | ID: mdl-39138749

ABSTRACT

Segmentation of infarcts is clinically important in ischemic stroke management and prognostication. It is unclear what role the combination of DWI, ADC, and FLAIR MRI sequences plays for deep learning in infarct segmentation. Recent technologies in model self-configuration have promised greater performance and generalizability through automated optimization. We assessed the utility of DWI, ADC, and FLAIR sequences for ischemic stroke segmentation, compared self-configuring nnU-Net models to conventional U-Net models without manual optimization, and evaluated the generalizability of results on an external clinical dataset. 3D self-configuring nnU-Net models and standard 3D U-Net models built with MONAI were trained on 200 infarcts using DWI, ADC, and FLAIR sequences separately and in all combinations. Segmentation results were compared between models using paired t-tests on a hold-out test set of 50 cases. The highest-performing model was externally validated on a clinical dataset of 50 MRIs. nnU-Net with DWI sequences attained a Dice score of 0.810 ± 0.155. There was no statistically significant difference when DWI sequences were supplemented with ADC and FLAIR images (Dice score of 0.813 ± 0.150; p = 0.15). nnU-Net models significantly outperformed standard U-Net models for all sequence combinations (p < 0.001). On the external dataset, Dice scores measured 0.704 ± 0.199 for positive cases, with false positives occurring in cases with intracranial hemorrhage. Highly optimized neural networks such as nnU-Net provide excellent stroke segmentation even when given only DWI images, with no significant improvement from other sequences. This differs from, and significantly outperforms, standard U-Net architectures. Results translated well to the external clinical environment and provide the groundwork for optimized acute stroke segmentation on MRI.

13.
J Imaging Inform Med ; 2024 Jul 02.
Article in English | MEDLINE | ID: mdl-38955963

ABSTRACT

Abnormalities in adrenal gland size may be associated with various diseases. Monitoring adrenal gland volume can provide a quantitative imaging indicator for conditions such as adrenal hyperplasia, adrenal adenoma, and adrenal cortical adenocarcinoma. However, current adrenal gland segmentation models have notable limitations in sample selection and imaging parameters, in particular insufficient training on low-dose imaging parameters, which limits the generalization ability of the models and restricts their widespread application in routine clinical practice. To address these issues, we developed a fully automated adrenal gland volume quantification and visualization tool based on a no new U-Net (nnU-Net) deep learning segmentation model. We built this tool using a large dataset spanning multiple imaging parameters, machine types, radiation doses, slice thicknesses, scanning modes, phases, and adrenal gland morphologies to achieve high accuracy and broad adaptability. The tool can meet clinical needs such as screening, monitoring, and preoperative visualization assistance for adrenal gland diseases. Experimental results demonstrate that our model achieves an overall Dice coefficient of 0.88 on all images and 0.87 on low-dose CT scans. Compared to other deep learning models and nnU-Net model tools, our model exhibits higher accuracy and broader adaptability in adrenal gland segmentation.

14.
Front Oncol ; 14: 1423774, 2024.
Article in English | MEDLINE | ID: mdl-38966060

ABSTRACT

Purpose: Addressing the challenges of unclear tumor boundaries and confusion between cysts and tumors in liver tumor segmentation, this study aims to develop an auto-segmentation method that combines a Gaussian filter with the nnUNet architecture to effectively distinguish between tumors and cysts, enhancing the accuracy of liver tumor auto-segmentation. Methods: First, 130 cases from the Liver Tumor Segmentation Challenge 2017 (LiTS2017) were used for training and validating the nnU-Net-based auto-segmentation model. Then, 14 cases from the 3D-IRCADb dataset and 25 liver cancer cases retrospectively collected at our hospital were used for testing. The Dice similarity coefficient (DSC) was used to evaluate the accuracy of the auto-segmentation model by comparison with manual contours. Results: The nnU-Net achieved an average DSC value of 0.86 for the validation set (20 LiTS cases) and 0.82 for the public testing set (14 3D-IRCADb cases). For the clinical testing set, the standalone nnU-Net model achieved an average DSC value of 0.75, which increased to 0.81 after post-processing with the Gaussian filter (P<0.05), demonstrating its effectiveness in mitigating the influence of liver cysts on liver tumor segmentation. Conclusion: Experiments show that the Gaussian filter is beneficial for improving the accuracy of liver tumor segmentation in the clinic.
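Gaussian-filter post-processing of a tumor probability map can be sketched with SciPy; the sigma and threshold here are illustrative choices, not the paper's settings:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_then_threshold(prob_map, sigma=1.0, thr=0.5):
    """Gaussian-smooth a tumor probability map before thresholding, which
    suppresses small isolated responses (sigma/thr are illustrative)."""
    return gaussian_filter(prob_map, sigma=sigma) >= thr

probs = np.zeros((20, 20))
probs[5:15, 5:15] = 0.9   # solid tumor-like blob survives smoothing
probs[2, 17] = 1.0        # isolated spurious voxel is smoothed away
mask = smooth_then_threshold(probs)
print(mask[10, 10], mask[2, 17])  # True False
```

Smoothing spreads an isolated voxel's probability over its neighborhood, dropping its peak below the threshold, while a solid blob's interior is barely affected.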

15.
Diagnostics (Basel) ; 14(13)2024 Jun 23.
Article in English | MEDLINE | ID: mdl-39001223

ABSTRACT

PURPOSE: Type A aortic dissection (TAAD) is a life-threatening aortic disease. The tear involves the ascending aorta and progresses into the separation of the layers of the aortic wall and the occurrence of a false lumen. Accurate segmentation of TAAD could provide assistance for disease assessment and guidance for clinical treatment. METHODS: This study applied nnU-Net, a state-of-the-art biomedical segmentation network architecture, to segment contrast-enhanced CT images and quantify the morphological features of TAAD. CT datasets were acquired from 24 patients with TAAD. Manual segmentation and annotation of the CT images were used as the ground truth. Two-dimensional (2D) nnU-Net and three-dimensional (3D) nnU-Net architectures with Dice- and cross-entropy-based loss functions were utilized to segment the true lumen (TL), false lumen (FL), and intimal flap on the images. Four-fold cross-validation was performed to evaluate the performance of the two nnU-Net architectures. Six metrics, including accuracy, precision, recall, Intersection over Union (IoU), Dice similarity coefficient (DSC), and Hausdorff distance, were calculated to evaluate the performance of the 2D and 3D nnU-Net algorithms on the TAAD datasets. Aortic morphological features from both the 2D and 3D nnU-Net algorithms were quantified based on the segmented results and compared. RESULTS: Overall, the 3D nnU-Net architecture had better performance on the TAAD CT datasets, with TL and FL segmentation accuracy up to 99.9%. The DSCs of TLs and FLs based on the 3D nnU-Net were 88.42% and 87.10%. For the aortic TL and FL measurements, the FL area calculated from the segmentation results of the 3D nnU-Net architecture had smaller relative errors (3.89-6.80%) than that of the 2D nnU-Net architecture (relative errors: 4.35-9.48%).
CONCLUSIONS: The nnU-Net architectures may serve as a basis for automatic segmentation and quantification of TAAD, which could aid in rapid diagnosis, surgical planning, and subsequent biomechanical simulation of the aorta.
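Five of the six reported metrics follow directly from voxel-wise confusion counts (the Hausdorff distance requires surface geometry and is omitted); a sketch with our own toy masks:

```python
import numpy as np

def segmentation_metrics(pred, gt):
    """Voxel-wise accuracy, precision, recall, IoU, and Dice from binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    tn = np.logical_and(~pred, ~gt).sum()
    iou = tp / (tp + fp + fn)
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "precision": tp / (tp + fp),
        "recall": tp / (tp + fn),
        "iou": iou,
        "dice": 2 * iou / (1 + iou),  # DSC = 2*IoU / (1 + IoU)
    }

pred = np.array([1, 1, 1, 0, 0, 0], dtype=bool)
gt = np.array([1, 1, 0, 1, 0, 0], dtype=bool)
m = segmentation_metrics(pred, gt)
print(m["iou"], m["dice"])  # 0.5 and 2/3
```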

16.
Comput Biol Med ; 179: 108853, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39013341

ABSTRACT

BACKGROUND: Methods to monitor cardiac functioning non-invasively can accelerate preclinical and clinical research into novel treatment options for heart failure. However, manual image analysis of cardiac substructures is resource-intensive and error-prone. While automated methods exist for clinical CT images, translating these to preclinical µCT data is challenging. We employed deep learning to automate the extraction of quantitative data from both CT and µCT images. METHODS: We collected a public dataset of cardiac CT images of human patients, as well as acquired µCT images of wild-type and accelerated aging mice. The left ventricle, myocardium, and right ventricle were manually segmented in the µCT training set. After template-based heart detection, two separate segmentation neural networks were trained using the nnU-Net framework. RESULTS: The mean Dice score of the CT segmentation results (0.925 ± 0.019, n = 40) was superior to those achieved by state-of-the-art algorithms. Automated and manual segmentations of the µCT training set were nearly identical. The estimated median Dice score (0.940) of the test set results was comparable to existing methods. The automated volume metrics were similar to manual expert observations. In aging mice, ejection fractions had significantly decreased, and myocardial volume increased by age 24 weeks. CONCLUSIONS: With further optimization, automated data extraction expands the application of (µ)CT imaging, while reducing subjectivity and workload. The proposed method efficiently measures the left and right ventricular ejection fraction and myocardial mass. With uniform translation between image types, cardiac functioning in diastolic and systolic phases can be monitored in both animals and humans.
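The ejection fraction mentioned above is the fractional volume change between end-diastole and end-systole; as a sketch with illustrative (non-study) volumes:

```python
def ejection_fraction(edv, esv):
    """Ejection fraction (%) from end-diastolic and end-systolic volumes."""
    return 100.0 * (edv - esv) / edv

# Illustrative left-ventricle volumes in microliters (not study data)
print(ejection_fraction(edv=50.0, esv=20.0))  # 60.0
```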


Subjects
Deep Learning, Tomography, X-Ray Computed, Mice, Animals, Humans, Tomography, X-Ray Computed/methods, Heart Ventricles/diagnostic imaging, Heart Ventricles/physiopathology, Neural Networks, Computer, X-Ray Microtomography, Image Processing, Computer-Assisted/methods
17.
Sci Rep ; 14(1): 11987, 2024 05 25.
Article in English | MEDLINE | ID: mdl-38796521

ABSTRACT

Unenhanced CT scans exhibit high specificity in detecting moderate-to-severe hepatic steatosis. Although many CT scans are acquired in health screening and various diagnostic contexts, their potential for hepatic steatosis detection has largely remained unexplored. The accuracy of previous methodologies has been limited by the inclusion of non-parenchymal liver regions. To overcome this limitation, we present a novel deep-learning (DL) based method tailored for the automatic selection of parenchymal portions in CT images. This method automatically delineates circular regions for effectively detecting hepatic steatosis. We use 1014 multinational CT images to develop a DL model for segmenting the liver and selecting the parenchymal regions. The results demonstrate outstanding performance in both tasks. By excluding non-parenchymal portions, our DL-based method surpasses previous limitations, achieving radiologist-level accuracy in liver attenuation measurements and hepatic steatosis detection. To ensure reproducibility, we have openly shared the 1014 annotated CT images and the DL system code. Our research contributes to the refinement of automated detection methodologies for hepatic steatosis on CT images, enhancing the accuracy and efficiency of healthcare screening processes.
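Measuring liver attenuation inside a circular parenchymal ROI amounts to averaging HU values within a disk; a sketch with a synthetic uniform slice (the helper and values are ours):

```python
import numpy as np

def mean_hu_in_circle(ct_slice, center, radius):
    """Mean attenuation (HU) inside a circular ROI on one axial slice."""
    rows, cols = np.ogrid[:ct_slice.shape[0], :ct_slice.shape[1]]
    mask = (rows - center[0]) ** 2 + (cols - center[1]) ** 2 <= radius ** 2
    return ct_slice[mask].mean()

slice_hu = np.full((64, 64), 55.0)  # synthetic uniform parenchyma, ~55 HU
print(mean_hu_in_circle(slice_hu, center=(32, 32), radius=10))  # 55.0
```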


Subjects
Deep Learning, Fatty Liver, Liver, Tomography, X-Ray Computed, Humans, Tomography, X-Ray Computed/methods, Fatty Liver/diagnostic imaging, Fatty Liver/pathology, Liver/diagnostic imaging, Liver/pathology, Male, Reproducibility of Results, Female
18.
J Med Imaging (Bellingham) ; 11(3): 034502, 2024 May.
Article in English | MEDLINE | ID: mdl-38817711

ABSTRACT

Purpose: Evaluation of lung fissure integrity is required to determine whether emphysema patients have complete fissures and are candidates for endobronchial valve (EBV) therapy. We propose a deep learning (DL) approach to segment fissures using a three-dimensional patch-based convolutional neural network (CNN) and to quantitatively assess fissure integrity on CT in subjects with severe emphysema. Approach: From an anonymized image database of patients with severe emphysema, 129 CT scans were used. Lung lobe segmentations were performed to identify lobar regions, and the boundaries among these regions were used to construct approximate interlobar regions of interest (ROIs). The interlobar ROIs were annotated by expert image analysts to identify voxels where the fissure was present and to create a reference ROI that excluded non-fissure voxels (where the fissure is incomplete). A CNN configured by nnU-Net was trained using 86 CT scans and their corresponding reference ROIs to segment the ROIs of the left oblique fissure (LOF), right oblique fissure (ROF), and right horizontal fissure (RHF). For an independent test set of 43 cases, fissure integrity was quantified by mapping the segmented fissure ROI along the interlobar ROI. A fissure integrity score (FIS) was then calculated as the percentage of interlobar ROI voxels labeled as fissure. The predicted FIS (p-FIS) was quantified from the CNN output, and statistical analyses compared the p-FIS and reference FIS (r-FIS). Results: The mean (±SD) absolute percent error between r-FIS and p-FIS on the test set was 4.0% (±4.1%), 6.0% (±9.3%), and 12.2% (±12.5%) for the LOF, ROF, and RHF, respectively. Conclusions: A DL approach was developed to segment lung fissures on CT images and accurately quantify the FIS. It has potential to assist in identifying emphysema patients who would benefit from EBV treatment.
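The FIS defined above (fissure-labeled voxels as a percentage of the interlobar ROI) reduces to a simple voxel count; a toy sketch, with made-up masks:

```python
import numpy as np

def fissure_integrity_score(fissure_mask: np.ndarray,
                            interlobar_mask: np.ndarray) -> float:
    """FIS (%) = fissure voxels within the interlobar ROI / ROI voxels * 100."""
    total = interlobar_mask.sum()
    labeled = (fissure_mask & interlobar_mask).sum()
    return 100.0 * labeled / total

# Toy volume: a flat interlobar surface of 400 voxels,
# with the fissure present in 360 of them
interlobar = np.zeros((20, 20, 20), dtype=bool)
interlobar[:, 10, :] = True
fissure = np.zeros_like(interlobar)
fissure[:18, 10, :] = True

fis = fissure_integrity_score(fissure, interlobar)
print(fis)  # 90.0
```

A threshold on the FIS (often around 90%, though the exact cut-off is an assumption here) is what distinguishes "complete" from "incomplete" fissures when screening EBV candidates.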

19.
Radiol Artif Intell ; 6(4): e230471, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38809148

ABSTRACT

Sex-specific abdominal organ volumes and proton density fat fraction (PDFF) in people with obesity during a weight loss intervention were assessed with automated multiorgan segmentation of quantitative water-fat MRI. An nnU-Net architecture was employed for automatic segmentation of abdominal organs, including visceral and subcutaneous adipose tissue, liver, and psoas and erector spinae muscles, based on quantitative chemical shift-encoded MRI and using ground truth labels generated from participants of the Lifestyle Intervention (LION) study. Each organ's volume and fat content were examined in 127 participants (73 female and 54 male; body mass index, 30-39.9 kg/m2) and in 81 (54 female and 32 male) of these participants after an 8-week formula-based low-calorie diet. Dice scores ranging from 0.91 to 0.97 were achieved for the automatic segmentation. PDFF was lower in visceral adipose tissue than in subcutaneous adipose tissue in both male and female participants. Before the intervention, female participants exhibited higher PDFF in subcutaneous adipose tissue (90.6% vs 89.7%; P < .001) and lower PDFF in liver (8.6% vs 13.3%; P < .001) and visceral adipose tissue (76.4% vs 81.3%; P < .001) compared with male participants. This relation persisted after the intervention. In response to caloric restriction, male participants lost significantly more visceral adipose tissue volume (1.76 L vs 0.91 L; P < .001) and showed a greater decrease in subcutaneous adipose tissue PDFF (2.7% vs 1.5%; P < .001) than female participants. Automated body composition analysis on quantitative water-fat MRI data provides new insights into the sex-specific metabolic response to caloric restriction and weight loss in people with obesity. Keywords: Obesity, Chemical Shift-encoded MRI, Abdominal Fat Volume, Proton Density Fat Fraction, nnU-Net. ClinicalTrials.gov registration no. NCT04023942. Supplemental material is available for this article.
Published under a CC BY 4.0 license.
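The PDFF metric reported per organ above is, at its core, the fat signal as a fraction of the total (fat + water) signal from chemical shift-encoded MRI; a minimal sketch with toy signal values (not the study's processing chain):

```python
import numpy as np

def pdff(fat: np.ndarray, water: np.ndarray) -> np.ndarray:
    """Proton density fat fraction (%): fat / (fat + water) * 100."""
    return 100.0 * fat / (fat + water)

# Toy co-registered fat/water signal samples from one organ ROI
fat = np.array([80.0, 90.0, 85.0])
water = np.array([20.0, 10.0, 15.0])

voxel_pdff = pdff(fat, water)
print(voxel_pdff)          # [80. 90. 85.]
print(voxel_pdff.mean())   # 85.0
```

Averaging voxel-wise PDFF within each automatically segmented organ mask yields the per-organ values (e.g., liver vs visceral adipose tissue) that the study compares between sexes.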


Subjects
Abdominal Fat , Magnetic Resonance Imaging , Humans , Male , Female , Magnetic Resonance Imaging/methods , Abdominal Fat/diagnostic imaging , Middle Aged , Adult , Sex Factors , Obesity/diagnostic imaging , Obesity/diet therapy , Protons , Caloric Restriction
20.
Clin Biomech (Bristol, Avon) ; 116: 106265, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38810478

ABSTRACT

BACKGROUND: Metastatic femoral tumors may lead to pathological fractures during daily activities. CT-based finite element analysis of a patient's femurs has been shown to assist orthopedic surgeons in making informed decisions about fracture risk and the need for prophylactic fixation. Improving the accuracy of such analyses requires automatic and accurate segmentation of the tumors and their automatic inclusion in the finite element model. We present herein a deep learning algorithm (nnU-Net) to automatically segment lytic tumors within the femur. METHOD: A dataset of fifty CT scans of patients with manually annotated femoral tumors was created. Forty of them, chosen randomly, were used for training the nnU-Net, while the remaining ten CT scans were used for testing. The deep learning model's performance was compared to that of two experienced radiologists. FINDINGS: The proposed algorithm outperformed current state-of-the-art solutions, achieving Dice similarity scores of 0.67 and 0.68 on the test data when compared to the two experienced radiologists, while the Dice similarity score for inter-observer variability between the radiologists was 0.73. INTERPRETATION: The automatic algorithm may segment lytic femoral tumors in CT scans as accurately as experienced radiologists, with similar Dice similarity scores. The influence of including the realistic tumors in an autonomous finite element algorithm is presented in (Rachmil et al., "The Influence of Femoral Lytic Tumors Segmentation on Autonomous Finite Element Analyses", Clinical Biomechanics, 112, paper 106192, (2024)).
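The Dice similarity score used throughout these records to compare automatic and manual segmentations is the overlap of two binary masks relative to their combined size; a self-contained toy example (not tied to any study's data):

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * (a & b).sum() / denom

# Two 6x6 squares offset by one voxel: 36 voxels each, 25 overlapping
auto = np.zeros((10, 10), dtype=bool)
auto[2:8, 2:8] = True
manual = np.zeros_like(auto)
manual[3:9, 3:9] = True

score = dice(auto, manual)
print(round(score, 3))  # 0.694
```

Inter-observer Dice between two radiologists (0.73 here) sets a practical ceiling: an automatic method scoring near that value against either reader is performing at roughly human-expert agreement.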


Subjects
Algorithms , Deep Learning , Femoral Neoplasms , Femur , Finite Element Analysis , Tomography, X-Ray Computed , Humans , Tomography, X-Ray Computed/methods , Femur/diagnostic imaging , Femur/physiopathology , Femoral Neoplasms/diagnostic imaging , Male , Female , Image Processing, Computer-Assisted/methods