Results 1 - 20 of 148
1.
Neuroimage Clin ; 42: 103611, 2024.
Article in English | MEDLINE | ID: mdl-38703470

ABSTRACT

Automated segmentation of brain white matter lesions is crucial for both clinical assessment and scientific research in multiple sclerosis (MS). Over a decade ago, we introduced an engineered lesion segmentation tool, LST. While recent lesion segmentation approaches have leveraged artificial intelligence (AI), they often remain proprietary and difficult to adopt. As an open-source tool, we present LST-AI, an advanced deep learning-based extension of LST that consists of an ensemble of three 3D U-Nets. LST-AI explicitly addresses the imbalance between white matter (WM) lesions and non-lesioned WM. It employs a composite loss function incorporating binary cross-entropy and Tversky loss to improve segmentation of the highly heterogeneous MS lesions. We train the network ensemble on 491 pairs of T1-weighted and FLAIR images from MS patients, collected in-house on a 3T MRI scanner, with the lesion maps used for training manually segmented by expert neuroradiologists. LST-AI also includes a lesion location annotation tool, labeling lesions as periventricular, infratentorial, and juxtacortical according to the 2017 McDonald criteria, and, additionally, as subcortical. We conduct evaluations on 103 test cases consisting of publicly available data using the Anima segmentation validation tools and compare LST-AI with several publicly available lesion segmentation models. Our empirical analysis shows that LST-AI achieves superior performance compared to existing methods. Its Dice and F1 scores exceeded 0.62, outperforming LST, SAMSEG (Sequence Adaptive Multimodal SEGmentation), and the popular nnUNet framework, which all scored below 0.56. Notably, LST-AI demonstrated exceptional performance on the MSSEG-1 challenge dataset, an international WM lesion segmentation challenge, with a Dice score of 0.65 and an F1 score of 0.63, surpassing all other competing models at the time of the challenge. The lesion detection rate increased rapidly with lesion volume, exceeding 75% for lesions with a volume between 10 mm³ and 100 mm³. Given its higher segmentation performance, we recommend that research groups currently using LST transition to LST-AI. To facilitate broad adoption, we are releasing LST-AI as an open-source model, available as a command-line tool, dockerized container, or Python script, enabling diverse applications across multiple platforms.
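A minimal sketch of the composite loss described above, combining binary cross-entropy with a Tversky term; the loss weights and the Tversky alpha/beta here are illustrative assumptions, not the published LST-AI values:

```python
# Hedged sketch: composite BCE + Tversky loss (weights/alpha/beta are assumptions).
import torch
import torch.nn.functional as F

def tversky_loss(pred, target, alpha=0.3, beta=0.7, eps=1e-6):
    """Tversky loss generalizes Dice; alpha/beta trade off false positives vs. false negatives."""
    p, t = pred.flatten(), target.flatten()
    tp = (p * t).sum()
    fp = (p * (1 - t)).sum()
    fn = ((1 - p) * t).sum()
    return 1 - (tp + eps) / (tp + alpha * fp + beta * fn + eps)

def composite_loss(logits, target, w_bce=0.5, w_tversky=0.5):
    """Weighted sum of binary cross-entropy and Tversky loss."""
    bce = F.binary_cross_entropy_with_logits(logits, target)
    tv = tversky_loss(torch.sigmoid(logits), target)
    return w_bce * bce + w_tversky * tv
```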


Subject(s)
Deep Learning , Magnetic Resonance Imaging , Multiple Sclerosis , White Matter , Humans , Multiple Sclerosis/diagnostic imaging , Multiple Sclerosis/pathology , Magnetic Resonance Imaging/methods , White Matter/diagnostic imaging , White Matter/pathology , Brain/diagnostic imaging , Brain/pathology , Image Processing, Computer-Assisted/methods , Female , Neuroimaging/methods , Neuroimaging/standards , Male , Adult
2.
Sci Data ; 11(1): 496, 2024 May 15.
Article in English | MEDLINE | ID: mdl-38750041

ABSTRACT

Meningiomas are the most common primary intracranial tumors and can be associated with significant morbidity and mortality. Radiologists, neurosurgeons, neuro-oncologists, and radiation oncologists rely on brain MRI for diagnosis, treatment planning, and longitudinal treatment monitoring. However, automated, objective, and quantitative tools for non-invasive assessment of meningiomas on multi-sequence MR images are not available. Here we present the BraTS Pre-operative Meningioma Dataset, the largest multi-institutional, expert-annotated, multilabel, multi-sequence MR image dataset of meningioma to date. This dataset includes 1,141 multi-sequence MR images from six sites, each with four structural MRI sequences (T2-, T2/FLAIR-, pre-contrast T1-, and post-contrast T1-weighted) accompanied by expert manually refined segmentations of three distinct meningioma sub-compartments: enhancing tumor, non-enhancing tumor, and surrounding non-enhancing T2/FLAIR hyperintensity. Basic demographic data are provided, including age at time of initial imaging, sex, and CNS WHO grade. The goal of releasing this dataset is to facilitate the development of automated computational methods for meningioma segmentation and expedite their incorporation into clinical practice, ultimately targeting improvement in the care of meningioma patients.
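For orientation, a hedged sketch of loading one multi-sequence case with nibabel; the directory layout, case identifier, and file naming below are assumptions for illustration only, so the actual dataset documentation should be consulted:

```python
# Hypothetical layout; consult the BraTS meningioma dataset docs for the real naming.
import nibabel as nib

sequences = ["t1", "t1c", "t2", "flair"]  # pre-/post-contrast T1, T2, T2/FLAIR
case = "BraTS-MEN-00000-000"  # hypothetical case identifier
images = {s: nib.load(f"{case}/{case}-{s}.nii.gz").get_fdata() for s in sequences}
seg = nib.load(f"{case}/{case}-seg.nii.gz").get_fdata()  # 3 sub-compartment labels
```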


Subject(s)
Magnetic Resonance Imaging , Meningeal Neoplasms , Meningioma , Meningioma/diagnostic imaging , Humans , Meningeal Neoplasms/diagnostic imaging , Male , Female , Image Processing, Computer-Assisted/methods , Middle Aged , Aged
3.
Neuro Oncol ; 2024 May 30.
Article in English | MEDLINE | ID: mdl-38813990

ABSTRACT

BACKGROUND: Surgical resection is the standard of care for patients with large or symptomatic brain metastases (BMs). Despite improved local control after adjuvant stereotactic radiotherapy, the risk of local failure (LF) persists. Therefore, we aimed to develop and externally validate a pre-therapeutic radiomics-based prediction tool to identify patients at high LF risk. METHODS: Data were collected from A Multicenter Analysis of Stereotactic Radiotherapy to the Resection Cavity of Brain Metastases (AURORA) retrospective study (training cohort: 253 patients from two centers; external test cohort: 99 patients from five centers). Radiomic features were extracted from the contrast-enhancing BM (T1-CE MRI sequence) and the surrounding edema (FLAIR sequence). Different combinations of radiomic and clinical features were compared. The final models were trained on the entire training cohort with the best parameter set previously determined by internal 5-fold cross-validation and tested on the external test set. RESULTS: The best performance in the external test was achieved by an elastic net regression model trained with a combination of radiomic and clinical features with a concordance index (CI) of 0.77, outperforming any clinical model (best CI: 0.70). The model effectively stratified patients by LF risk in a Kaplan-Meier analysis (p < 0.001) and demonstrated an incremental net clinical benefit. At 24 months, we found LF in 9% and 74% of the low- and high-risk groups, respectively. CONCLUSIONS: A combination of clinical and radiomic features predicted freedom from LF better than any clinical feature set alone. Patients at high risk for LF may benefit from stricter follow-up routines or intensified therapy.
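A hedged sketch of the modeling step described above: fitting an elastic net-penalized Cox model on combined radiomic and clinical features and scoring it with the concordance index, here via scikit-survival; the feature matrix, outcomes, and hyperparameters are placeholders, not the study's data or settings:

```python
# Hedged sketch: elastic net Cox regression + concordance index (placeholder data).
import numpy as np
from sksurv.linear_model import CoxnetSurvivalAnalysis
from sksurv.metrics import concordance_index_censored
from sksurv.util import Surv

X_train = np.random.rand(253, 20)  # placeholder radiomic + clinical features
y_train = Surv.from_arrays(event=np.random.rand(253) > 0.5,
                           time=np.random.rand(253) * 24)  # months, placeholder

model = CoxnetSurvivalAnalysis(l1_ratio=0.5)  # elastic net penalty
model.fit(X_train, y_train)

risk = model.predict(X_train)
cindex = concordance_index_censored(y_train["event"], y_train["time"], risk)[0]
```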

4.
Radiother Oncol ; 197: 110338, 2024 May 22.
Article in English | MEDLINE | ID: mdl-38782301

ABSTRACT

BACKGROUND: Volume of interest (VOI) segmentation is a crucial step for Radiomics analyses and radiotherapy (RT) treatment planning. Because it can be time-consuming and subject to inter-observer variability, we developed and tested a Deep Learning-based automatic segmentation (DLBAS) algorithm to reproducibly predict the primary gross tumor as VOI for Radiomics analyses in extremity soft tissue sarcomas (STS). METHODS: A DLBAS algorithm was trained on a cohort of 157 patients and externally tested on an independent cohort of 87 patients using contrast-enhanced MRI. Manual tumor delineations by a radiation oncologist served as ground truths (GTs). A benchmark study with 20 cases from the test cohort compared the DLBAS predictions against manual VOI segmentations of two residents (ERs) and clinical delineations of two radiation oncologists (ROs). The ROs rated DLBAS predictions regarding their direct applicability. RESULTS: The DLBAS achieved a median Dice similarity coefficient (DSC) of 0.88 against the GTs in the entire test cohort (interquartile range (IQR): 0.11) and a median DSC of 0.89 (IQR 0.07) and 0.82 (IQR 0.10) in comparison to ERs and ROs, respectively. Radiomics feature stability was high with a median intraclass correlation coefficient of 0.97, 0.95 and 0.94 for GTs, ERs, and ROs, respectively. DLBAS predictions were deemed clinically suitable by the two ROs in 35% and 20% of cases, respectively. CONCLUSION: The results demonstrate that the DLBAS algorithm provides reproducible VOI predictions for radiomics feature extraction. Variability remains regarding direct clinical applicability of predictions for RT treatment planning.
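For reference, a minimal implementation of the Dice similarity coefficient (DSC), the agreement measure reported throughout this study:

```python
# Standard Dice similarity coefficient for binary masks.
import numpy as np

def dice(a: np.ndarray, b: np.ndarray, eps: float = 1e-8) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|); 1.0 means perfect overlap."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum() + eps)
```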

5.
Nat Methods ; 2024 Apr 22.
Article in English | MEDLINE | ID: mdl-38649742

ABSTRACT

Automated detection of specific cells in three-dimensional datasets such as whole-brain light-sheet image stacks is challenging. Here, we present DELiVR, a virtual reality-trained deep-learning pipeline for detecting c-Fos+ cells as markers for neuronal activity in cleared mouse brains. Virtual reality annotation substantially accelerated training data generation, enabling DELiVR to outperform state-of-the-art cell-segmenting approaches. Our pipeline is available in a user-friendly Docker container that runs with a standalone Fiji plugin. DELiVR features a comprehensive toolkit for data visualization and can be customized to other cell types of interest, as we did here for microglia somata, using Fiji for dataset-specific training. We applied DELiVR to investigate cancer-related brain activity, unveiling an activation pattern that distinguishes weight-stable cancer from cancers associated with weight loss. Overall, DELiVR is a robust deep-learning tool that does not require advanced coding skills to analyze whole-brain imaging data in health and disease.

6.
Neurooncol Adv ; 6(1): vdad171, 2024.
Article in English | MEDLINE | ID: mdl-38435962

ABSTRACT

Background: The diffuse growth pattern of glioblastoma is one of the main challenges for accurate treatment. Computational tumor growth modeling has emerged as a promising tool to guide personalized therapy. Here, we performed clinical and biological validation of a novel growth model, aiming to close the gap between the experimental state and clinical implementation. Methods: One hundred and twenty-four patients from The Cancer Genome Atlas (TCGA) and 397 patients from the UCSF Glioma Dataset were assessed for significant correlations between clinical data, genetic pathway activation maps (generated with PARADIGM; TCGA only), and infiltration (Dw) as well as proliferation (ρ) parameters stemming from a Fisher-Kolmogorov growth model. To further evaluate clinical potential, we performed the same growth modeling on preoperative magnetic resonance imaging data from 30 patients of our institution and compared model-derived tumor volume and recurrence coverage with standard radiotherapy plans. Results: The parameter ratio Dw/ρ (P < .05 in TCGA) as well as the simulated tumor volume (P < .05 in TCGA/UCSF) were significantly inversely correlated with overall survival. Interestingly, we found a significant correlation between 11 proliferation pathways and the estimated proliferation parameter. Depending on the cutoff value for tumor cell density, we observed a significant improvement in recurrence coverage without significantly increased radiation volume utilizing model-derived target volumes instead of standard radiation plans. Conclusions: Identifying a significant correlation between computed growth parameters and clinical and biological data, we highlight the potential of tumor growth modeling for individualized therapy of glioblastoma. This might improve the accuracy of radiation planning in the near future.
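A hedged sketch of one explicit finite-difference step of the Fisher-Kolmogorov model, du/dt = D ∇²u + ρ u (1 − u), shown on a 2D grid with periodic boundaries for brevity; the study solves it in 3D patient anatomy with tissue-dependent diffusion, so the parameters and boundary handling below are illustrative assumptions:

```python
# Hedged sketch: one explicit Euler step of Fisher-Kolmogorov tumor growth in 2D.
import numpy as np

def fisher_kolmogorov_step(u, D=0.1, rho=0.05, dt=0.1, dx=1.0):
    """u: normalized tumor cell density in [0, 1]; periodic boundaries via np.roll."""
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
           np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u) / dx**2
    return u + dt * (D * lap + rho * u * (1 - u))  # diffusion + logistic proliferation
```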

7.
ArXiv ; 2024 Mar 07.
Article in English | MEDLINE | ID: mdl-38495563

ABSTRACT

Biophysical modeling, particularly involving partial differential equations (PDEs), offers significant potential for tailoring disease treatment protocols to individual patients. However, the inverse problem-solving aspect of these models presents a substantial challenge, either due to the high computational requirements of model-based approaches or the limited robustness of deep learning (DL) methods. We propose a novel framework that leverages the unique strengths of both approaches in a synergistic manner. Our method incorporates a DL ensemble for initial parameter estimation, facilitating efficient downstream evolutionary sampling initialized with this DL-based prior. We showcase the effectiveness of integrating a rapid deep-learning algorithm with a high-precision evolution strategy in estimating brain tumor cell concentrations from magnetic resonance images. The DL prior plays a pivotal role, significantly constraining the effective sampling-parameter space. This reduction results in a fivefold convergence acceleration and a Dice score of 95%.
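A hedged sketch of the sampling strategy described above, using the cma package with the evolution strategy initialized at a deep-learning ensemble's estimate; the parameter vector, step size, and objective are placeholders rather than the paper's image-based likelihood:

```python
# Hedged sketch: evolution strategy warm-started from a DL prior (placeholders).
import cma
import numpy as np

dl_prior_mean = np.array([0.1, 0.05, 10.0])  # e.g. (D, rho, time) from a DL ensemble
dl_prior_sigma = 0.1  # small initial step size: the prior constrains the search

def objective(theta):
    # Placeholder fitness; the paper evaluates agreement with MRI-derived data.
    return float(np.sum((theta - np.array([0.12, 0.04, 11.0])) ** 2))

es = cma.CMAEvolutionStrategy(dl_prior_mean, dl_prior_sigma)
while not es.stop():
    solutions = es.ask()
    es.tell(solutions, [objective(s) for s in solutions])
```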

8.
Med Image Anal ; 94: 103099, 2024 May.
Article in English | MEDLINE | ID: mdl-38395009

ABSTRACT

Statistical shape models are an essential tool for various tasks in medical image analysis, including shape generation, reconstruction and classification. Shape models are learned from a population of example shapes, which are typically obtained through segmentation of volumetric medical images. In clinical practice, highly anisotropic volumetric scans with large slice distances are prevalent, e.g., to reduce radiation exposure in CT or image acquisition time in MR imaging. For existing shape modeling approaches, the resolution of the emerging model is limited to the resolution of the training shapes. Therefore, any missing information between slices prohibits existing methods from learning a high-resolution shape prior. We propose a novel shape modeling approach that can be trained on sparse, binary segmentation masks with large slice distances. This is achieved through employing continuous shape representations based on neural implicit functions. After training, our model can reconstruct shapes from various sparse inputs at high target resolutions beyond the resolution of individual training examples. We successfully reconstruct high-resolution shapes from as few as three orthogonal slices. Furthermore, our shape model allows us to embed various sparse segmentation masks into a common, low-dimensional latent space - independent of the acquisition direction, resolution, spacing, and field of view. We show that the emerging latent representation discriminates between healthy and pathological shapes, even when provided with sparse segmentation masks. Lastly, we qualitatively demonstrate that the emerging latent space is smooth and captures characteristic modes of shape variation. We evaluate our shape model on two anatomical structures: the lumbar vertebra and the distal femur, both from publicly available datasets.
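A hedged sketch of the core idea, a neural implicit shape function: an MLP maps a per-shape latent code plus a continuous 3D coordinate to occupancy, so a shape can be queried at arbitrary resolution regardless of training slice spacing (layer sizes and architecture details are assumptions):

```python
# Hedged sketch: continuous shape representation via a neural implicit function.
import torch
import torch.nn as nn

class ImplicitShape(nn.Module):
    def __init__(self, latent_dim=64, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, z, xyz):
        # z: (B, latent_dim) per-shape latent code; xyz: (B, N, 3) query coordinates
        z = z.unsqueeze(1).expand(-1, xyz.shape[1], -1)
        return torch.sigmoid(self.net(torch.cat([z, xyz], dim=-1)))  # occupancy in [0, 1]
```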


Subject(s)
Algorithms , Models, Statistical , Humans , Magnetic Resonance Imaging , Image Processing, Computer-Assisted/methods
9.
Nat Methods ; 21(2): 182-194, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38347140

ABSTRACT

Validation metrics are key for tracking scientific progress and bridging the current chasm between artificial intelligence research and its translation into practice. However, increasing evidence shows that, particularly in image analysis, metrics are often chosen inadequately. Although taking into account the individual strengths, weaknesses and limitations of validation metrics is a critical prerequisite to making educated choices, the relevant knowledge is currently scattered and poorly accessible to individual researchers. Based on a multistage Delphi process conducted by a multidisciplinary expert consortium as well as extensive community feedback, the present work provides a reliable and comprehensive common point of access to information on pitfalls related to validation metrics in image analysis. Although focused on biomedical image analysis, the addressed pitfalls generalize across application domains and are categorized according to a newly created, domain-agnostic taxonomy. The work serves to enhance global comprehension of a key topic in image analysis validation.


Subject(s)
Artificial Intelligence
10.
IEEE Trans Pattern Anal Mach Intell ; 46(5): 3784-3795, 2024 May.
Article in English | MEDLINE | ID: mdl-38198270

ABSTRACT

Deep learning models for medical image segmentation can fail unexpectedly and spectacularly for pathological cases and images acquired at different centers than training images, with labeling errors that violate expert knowledge. Such errors undermine the trustworthiness of deep learning models for medical image segmentation. Mechanisms for detecting and correcting such failures are essential for safely translating this technology into clinics and are likely to be a requirement of future regulations on artificial intelligence (AI). In this work, we propose a trustworthy AI theoretical framework and a practical system that can augment any backbone AI system using a fallback method and a fail-safe mechanism based on Dempster-Shafer theory. Our approach relies on an actionable definition of trustworthy AI. Our method automatically discards voxel-level labels predicted by the backbone AI that violate expert knowledge and relies on a fallback for those voxels. We demonstrate the effectiveness of the proposed trustworthy AI approach on the largest reported annotated dataset of fetal MRI consisting of 540 manually annotated fetal brain 3D T2w MRIs from 13 centers. Our trustworthy AI method improves the robustness of four backbone AI models for fetal brain MRIs acquired across various centers and for fetuses with various brain abnormalities.
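A heavily simplified, hedged sketch of the fallback idea: where backbone predictions violate an expert-knowledge check, a fallback prediction is substituted per voxel. The actual method fuses evidence with Dempster-Shafer theory rather than the hard mask shown here:

```python
# Hedged, simplified sketch of a voxel-level fail-safe fallback (not the DS fusion).
import numpy as np

def fail_safe_fusion(backbone_seg, fallback_seg, violation_mask):
    """violation_mask: boolean array flagging voxels that break anatomical rules."""
    out = backbone_seg.copy()
    out[violation_mask] = fallback_seg[violation_mask]  # fall back where untrusted
    return out
```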


Subject(s)
Algorithms , Artificial Intelligence , Magnetic Resonance Imaging , Fetus/diagnostic imaging , Brain/diagnostic imaging
11.
IEEE Trans Med Imaging ; 43(6): 2061-2073, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38224512

ABSTRACT

Optical coherence tomography angiography (OCTA) is a non-invasive imaging modality that can acquire high-resolution volumes of the retinal vasculature and aid the diagnosis of ocular, neurological and cardiac diseases. Segmenting the visible blood vessels is a common first step when extracting quantitative biomarkers from these images. Classical segmentation algorithms based on thresholding are strongly affected by image artifacts and limited signal-to-noise ratio. The use of modern, deep learning-based segmentation methods has been inhibited by a lack of large datasets with detailed annotations of the blood vessels. To address this issue, recent work has employed transfer learning, where a segmentation network is trained on synthetic OCTA images and is then applied to real data. However, the previously proposed simulations fail to faithfully model the retinal vasculature and do not provide effective domain adaptation. Because of this, current methods are unable to fully segment the retinal vasculature, in particular the smallest capillaries. In this work, we present a lightweight simulation of the retinal vascular network based on space colonization for faster and more realistic OCTA synthesis. We then introduce three contrast adaptation pipelines to decrease the domain gap between real and artificial images. We demonstrate the superior segmentation performance of our approach in extensive quantitative and qualitative experiments on three public datasets that compare our method to traditional computer vision algorithms and supervised training using human annotations. Finally, we make our entire pipeline publicly available, including the source code, pretrained models, and a large dataset of synthetic OCTA images.
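As a hedged illustration of contrast adaptation in its simplest form, histogram matching a synthetic OCTA image to a real reference with scikit-image; the paper's three dedicated adaptation pipelines go beyond this baseline:

```python
# Hedged baseline: narrow the synthetic-to-real domain gap by histogram matching.
import numpy as np
from skimage.exposure import match_histograms

synthetic = np.random.rand(304, 304)       # placeholder synthetic OCTA slice
real_reference = np.random.rand(304, 304)  # placeholder real OCTA slice
adapted = match_histograms(synthetic, real_reference)
```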


Subject(s)
Algorithms , Image Processing, Computer-Assisted , Retinal Vessels , Tomography, Optical Coherence , Tomography, Optical Coherence/methods , Humans , Retinal Vessels/diagnostic imaging , Image Processing, Computer-Assisted/methods , Angiography/methods
12.
IEEE Trans Med Imaging ; 43(6): 2074-2085, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38241120

ABSTRACT

Ultra-wideband raster-scan optoacoustic mesoscopy (RSOM) is a novel modality that has demonstrated unprecedented ability to visualize epidermal and dermal structures in-vivo. However, an automatic and quantitative analysis of three-dimensional RSOM datasets remains unexplored. In this work, we present our framework, the Deep Learning RSOM Analysis Pipeline (DeepRAP), to analyze and quantify morphological skin features recorded by RSOM and extract imaging biomarkers for disease characterization. DeepRAP uses a multi-network segmentation strategy based on convolutional neural networks with transfer learning. This strategy enabled the automatic recognition of skin layers and subsequent segmentation of dermal microvasculature with an accuracy equivalent to human assessment. DeepRAP was validated against manual segmentation on 25 psoriasis patients under treatment, and our biomarker extraction was shown to characterize disease severity and progression well, with a strong correlation to physician evaluation and histology. In a unique validation experiment, we applied DeepRAP to a time series sequence of occlusion-induced hyperemia from 10 healthy volunteers. We observe how the biomarkers decrease and recover during the occlusion and release process, demonstrating accurate performance and reproducibility of DeepRAP. Furthermore, we analyzed a cohort of 75 volunteers and defined a relationship between aging and microvascular features in-vivo. More precisely, this study revealed that fine microvascular features in the dermal layer have the strongest correlation to age. The ability of our newly developed framework to enable the rapid study of human skin morphology and microvasculature in-vivo promises to replace biopsy studies, increasing the translational potential of RSOM.


Subject(s)
Biomarkers , Photoacoustic Techniques , Psoriasis , Skin , Humans , Psoriasis/diagnostic imaging , Photoacoustic Techniques/methods , Skin/diagnostic imaging , Skin/blood supply , Deep Learning , Machine Learning , Adult , Skin Aging/physiology , Female , Middle Aged , Male
13.
medRxiv ; 2024 Mar 11.
Article in English | MEDLINE | ID: mdl-38045345

ABSTRACT

Automated segmentation of brain white matter lesions is crucial for both clinical assessment and scientific research in multiple sclerosis (MS). Over a decade ago, we introduced an engineered lesion segmentation tool, LST. While recent lesion segmentation approaches have leveraged artificial intelligence (AI), they often remain proprietary and difficult to adopt. As an open-source tool, we present LST-AI, an advanced deep learning-based extension of LST that consists of an ensemble of three 3D-UNets. LST-AI explicitly addresses the imbalance between white matter (WM) lesions and non-lesioned WM. It employs a composite loss function incorporating binary cross-entropy and Tversky loss to improve segmentation of the highly heterogeneous MS lesions. We train the network ensemble on 491 pairs of T1w and FLAIR images from MS patients, collected in-house from a 3T MRI scanner, with the lesion maps used for training manually segmented by expert neuroradiologists. LST-AI additionally includes a lesion location annotation tool, labeling lesion location according to the 2017 McDonald criteria (periventricular, infratentorial, juxtacortical, subcortical). We conduct evaluations on 103 test cases consisting of publicly available data using the Anima segmentation validation tools and compare LST-AI with several publicly available lesion segmentation models. Our empirical analysis shows that LST-AI achieves superior performance compared to existing methods. Its Dice and F1 scores exceeded 0.62, outperforming LST, SAMSEG (Sequence Adaptive Multimodal SEGmentation), and the popular nnUNet framework, which all scored below 0.56. Notably, LST-AI demonstrated exceptional performance on the MSSEG-1 challenge dataset, an international WM lesion segmentation challenge, with a Dice score of 0.65 and an F1 score of 0.63, surpassing all other competing models at the time of the challenge. The lesion detection rate increased rapidly with lesion volume, exceeding 75% for lesions with a volume between 10 mm³ and 100 mm³. Given its higher segmentation performance, we recommend that research groups currently using LST transition to LST-AI. To facilitate broad adoption, we are releasing LST-AI as an open-source model, available as a command-line tool, dockerized container, or Python script, enabling diverse applications across multiple platforms.

14.
ArXiv ; 2024 Jan 23.
Article in English | MEDLINE | ID: mdl-38076515

ABSTRACT

Predicting the infiltration of Glioblastoma (GBM) from medical MRI scans is crucial for understanding tumor growth dynamics and designing personalized radiotherapy treatment plans. Mathematical models of GBM growth can complement the data in the prediction of spatial distributions of tumor cells. However, this requires estimating patient-specific parameters of the model from clinical data, which is a challenging inverse problem due to limited temporal data and the limited time between imaging and diagnosis. This work proposes a method that uses Physics-Informed Neural Networks (PINNs) to estimate patient-specific parameters of a reaction-diffusion PDE model of GBM growth from a single 3D structural MRI snapshot. PINNs embed both the data and the PDE into a loss function, thus integrating theory and data. Key innovations include the identification and estimation of characteristic non-dimensional parameters, a pre-training step that utilizes the non-dimensional parameters, and a fine-tuning step to determine the patient-specific parameters. Additionally, the diffuse domain method is employed to handle the complex brain geometry within the PINN framework. Our method is validated both on synthetic and patient datasets, and shows promise for real-time parametric inference in the clinical setting for personalized GBM treatment.
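A hedged sketch of the PINN construction: a network u(x, t) is trained on a composite loss of data fit plus the reaction-diffusion PDE residual, shown here in 1D for brevity; the paper works on 3D brain geometry with a diffuse-domain method and non-dimensional parameters, so everything below is an illustrative reduction:

```python
# Hedged 1D sketch: PINN loss = data misfit + Fisher-KPP PDE residual.
import torch

net = torch.nn.Sequential(torch.nn.Linear(2, 64), torch.nn.Tanh(),
                          torch.nn.Linear(64, 64), torch.nn.Tanh(),
                          torch.nn.Linear(64, 1))
D = torch.nn.Parameter(torch.tensor(0.1))     # diffusion coefficient, learned
rho = torch.nn.Parameter(torch.tensor(0.05))  # proliferation rate, learned

def pde_residual(x, t):
    x.requires_grad_(True); t.requires_grad_(True)
    u = net(torch.cat([x, t], dim=1))
    u_t = torch.autograd.grad(u.sum(), t, create_graph=True)[0]
    u_x = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x.sum(), x, create_graph=True)[0]
    return u_t - D * u_xx - rho * u * (1 - u)  # residual of du/dt = D u_xx + rho u(1-u)

def pinn_loss(x_data, t_data, u_data, x_col, t_col):
    u_pred = net(torch.cat([x_data, t_data], dim=1))
    data_fit = ((u_pred - u_data) ** 2).mean()
    physics = (pde_residual(x_col, t_col) ** 2).mean()  # at collocation points
    return data_fit + physics
```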

15.
ArXiv ; 2024 Mar 08.
Article in English | MEDLINE | ID: mdl-37292481

ABSTRACT

Pediatric tumors of the central nervous system are the most common cause of cancer-related death in children. The five-year survival rate for high-grade gliomas in children is less than 20%. Due to their rarity, the diagnosis of these entities is often delayed, their treatment is mainly based on historic treatment concepts, and clinical trials require multi-institutional collaborations. The MICCAI Brain Tumor Segmentation (BraTS) Challenge is a landmark community benchmark event with a successful history of 12 years of resource creation for the segmentation and analysis of adult glioma. Here we present the CBTN-CONNECT-DIPGR-ASNR-MICCAI BraTS-PEDs 2023 challenge, which represents the first BraTS challenge focused on pediatric brain tumors with data acquired across multiple international consortia dedicated to pediatric neuro-oncology and clinical trials. The BraTS-PEDs 2023 challenge focuses on benchmarking the development of volumetric segmentation algorithms for pediatric brain glioma through standardized quantitative performance evaluation metrics utilized across the BraTS 2023 cluster of challenges. Models gaining knowledge from the BraTS-PEDs multi-parametric structural MRI (mpMRI) training data will be evaluated on separate validation and unseen test mpMRI data of high-grade pediatric glioma. The CBTN-CONNECT-DIPGR-ASNR-MICCAI BraTS-PEDs 2023 challenge brings together clinicians and AI/imaging scientists to accelerate the development of automated segmentation techniques that could benefit clinical trials and, ultimately, the care of children with brain tumors.

16.
Med Image Anal ; 91: 103029, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37988921

ABSTRACT

Imaging markers of cerebral small vessel disease provide valuable information on brain health, but their manual assessment is time-consuming and hampered by substantial intra- and interrater variability. Automated rating may benefit biomedical research, as well as clinical assessment, but the diagnostic reliability of existing algorithms is unknown. Here, we present the results of the VAscular Lesions DetectiOn and Segmentation (Where is VALDO?) challenge that was run as a satellite event at the international conference on Medical Image Computing and Computer Aided Intervention (MICCAI) 2021. This challenge aimed to promote the development of methods for automated detection and segmentation of small and sparse imaging markers of cerebral small vessel disease, namely enlarged perivascular spaces (EPVS) (Task 1), cerebral microbleeds (Task 2) and lacunes of presumed vascular origin (Task 3), while leveraging weak and noisy labels. Overall, 12 teams participated in the challenge proposing solutions for one or more tasks (4 for Task 1-EPVS, 9 for Task 2-Microbleeds and 6 for Task 3-Lacunes). Multi-cohort data were used in both training and evaluation. Results showed a large variability in performance both across teams and across tasks, with promising results, notably for Task 1-EPVS and Task 2-Microbleeds, but not yet practically useful results for Task 3-Lacunes. It also highlighted a performance inconsistency across cases that may deter use at an individual level, while still proving useful at a population level.


Subject(s)
Cerebral Small Vessel Diseases , Magnetic Resonance Imaging , Humans , Magnetic Resonance Imaging/methods , Reproducibility of Results , Cerebral Small Vessel Diseases/diagnostic imaging , Cerebral Hemorrhage , Computers
17.
Brainlesion ; 13769: 68-79, 2023.
Article in English | MEDLINE | ID: mdl-37928819

ABSTRACT

Convolutional neural networks (CNNs) have shown promising performance in various 2D computer vision tasks due to the availability of large amounts of 2D training data. In contrast, medical imaging deals with 3D data and usually lacks an equivalent extent and diversity of data for developing AI models. Transfer learning provides the means to use models trained for one application as a starting point for another application. In this work, we leverage 2D pre-trained models as a starting point in 3D medical applications by exploring the concept of Axial-Coronal-Sagittal (ACS) convolutions. We have incorporated ACS as an alternative to native 3D convolutions in the Generally Nuanced Deep Learning Framework (GaNDLF), providing various well-established and state-of-the-art network architectures with the availability of pre-trained encoders from 2D data. Results of our experimental evaluation on 3D MRI data of brain tumor patients for i) tumor segmentation and ii) radiogenomic classification show a model size reduction of ~22% and an improvement in validation accuracy of ~33%. Our findings support the advantage of ACS convolutions in pre-trained 2D CNNs over 3D CNNs without pre-training for 3D segmentation and classification tasks, democratizing existing models trained on datasets of unprecedented size and showing promise in the field of healthcare.
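A hedged, simplified sketch of the ACS idea: 2D-style kernels are applied as view-specific 3D convolutions over three output-channel groups and concatenated; the original formulation additionally reuses the same pre-trained 2D weights across the three views, which this sketch omits:

```python
# Hedged sketch: Axial-Coronal-Sagittal (ACS) convolution, simplified.
import torch
import torch.nn as nn

class ACSConv(nn.Module):
    def __init__(self, in_ch, out_ch, k=3, pad=1):
        super().__init__()
        o = out_ch // 3  # split output channels across the three anatomical views
        self.axial    = nn.Conv3d(in_ch, o, (1, k, k), padding=(0, pad, pad))
        self.coronal  = nn.Conv3d(in_ch, o, (k, 1, k), padding=(pad, 0, pad))
        self.sagittal = nn.Conv3d(in_ch, out_ch - 2 * o, (k, k, 1),
                                  padding=(pad, pad, 0))

    def forward(self, x):  # x: (B, C, D, H, W)
        return torch.cat([self.axial(x), self.coronal(x), self.sagittal(x)], dim=1)
```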

18.
Eur Radiol Exp ; 7(1): 70, 2023 11 14.
Article in English | MEDLINE | ID: mdl-37957426

ABSTRACT

BACKGROUND: Automated segmentation of spinal magnetic resonance imaging (MRI) plays a vital role both scientifically and clinically. However, accurately delineating posterior spine structures is challenging. METHODS: This retrospective study, approved by the ethical committee, involved translating T1-weighted and T2-weighted images into computed tomography (CT) images in a total of 263 pairs of CT/MR series. Landmark-based registration was performed to align image pairs. We compared two-dimensional (2D) paired - Pix2Pix, denoising diffusion implicit models (DDIM) image mode, DDIM noise mode - and unpaired (SynDiff, contrastive unpaired translation) image-to-image translation using peak signal-to-noise ratio as the quality measure. A publicly available segmentation network segmented the synthesized CT datasets, and Dice similarity coefficients (DSC) were evaluated on in-house test sets and the "MRSpineSeg Challenge" volumes. The 2D findings were extended to three-dimensional (3D) Pix2Pix and DDIM. RESULTS: 2D paired methods and SynDiff exhibited similar translation performance and DSC on paired data. DDIM image mode achieved the highest image quality. SynDiff, Pix2Pix, and DDIM image mode demonstrated similar DSC (0.77). For craniocaudal axis rotations, at least two landmarks per vertebra were required for registration. The 3D translation outperformed the 2D approach, resulting in improved DSC (0.80) and anatomically accurate segmentations with higher spatial resolution than that of the original MRI series. CONCLUSIONS: Registration with two landmarks per vertebra enabled paired image-to-image translation from MRI to CT and outperformed all unpaired approaches. The 3D techniques provided anatomically correct segmentations, avoiding underprediction of small structures like the spinous process. RELEVANCE STATEMENT: This study addresses the unresolved issue of translating spinal MRI to CT, making CT-based tools usable for MRI data. It generates whole-spine segmentation, previously unavailable in MRI, a prerequisite for biomechanical modeling and feature extraction for clinical applications. KEY POINTS: • Unpaired image translation fails to convert spine MRI to CT effectively. • Paired translation requires registration with at least two landmarks per vertebra. • Paired image-to-image translation enables segmentation transfer to other domains. • 3D translation enables super-resolution from MRI to CT. • 3D translation prevents underprediction of small structures.
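For reference, a minimal implementation of the peak signal-to-noise ratio (PSNR) used above as the translation quality measure:

```python
# Standard PSNR for images rescaled to a known maximum intensity (data_range).
import numpy as np

def psnr(reference, test, data_range=1.0):
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(data_range**2 / mse)
```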


Subject(s)
Image Processing, Computer-Assisted , Tomography, X-Ray Computed , Image Processing, Computer-Assisted/methods , Retrospective Studies , Tomography, X-Ray Computed/methods , Magnetic Resonance Imaging/methods , Spine/diagnostic imaging
19.
Radiother Oncol ; 188: 109901, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37678623

ABSTRACT

BACKGROUND: Many automatic approaches to brain tumor segmentation employ multiple magnetic resonance imaging (MRI) sequences. The goal of this project was to compare different combinations of input sequences to determine which MRI sequences are needed for effective automated brain metastasis (BM) segmentation. METHODS: We analyzed preoperative imaging (T1-weighted sequence ± contrast-enhancement (T1/T1-CE), T2-weighted sequence (T2), and T2 fluid-attenuated inversion recovery (T2-FLAIR) sequence) from 339 patients with BMs from seven centers. A baseline 3D U-Net with all four sequences and six U-Nets with plausible sequence combinations (T1-CE, T1, T2-FLAIR, T1-CE + T2-FLAIR, T1-CE + T1 + T2-FLAIR, T1-CE + T1) were trained on 239 patients from two centers and subsequently tested on an external cohort of 100 patients from five centers. RESULTS: The model based on T1-CE alone achieved the best segmentation performance for BM segmentation with a median Dice similarity coefficient (DSC) of 0.96. Models trained without T1-CE performed worse (T1-only: DSC = 0.70 and T2-FLAIR-only: DSC = 0.73). For edema segmentation, models that included both T1-CE and T2-FLAIR performed best (DSC = 0.93), while the remaining four models, which did not include both of these sequences simultaneously, reached a median DSC of 0.81-0.89. CONCLUSIONS: A T1-CE-only protocol suffices for the segmentation of BMs. The combination of T1-CE and T2-FLAIR is important for edema segmentation. Missing either T1-CE or T2-FLAIR decreases performance. These findings may improve imaging routines by omitting unnecessary sequences, thus allowing for faster procedures in daily clinical practice while enabling optimal neural network-based target definitions.

20.
Phys Med Biol ; 68(19)2023 09 18.
Article in English | MEDLINE | ID: mdl-37567235

ABSTRACT

Objective. In an MR-only clinical workflow, replacing CT with MR imaging improves workflow efficiency and reduces radiation exposure to the patient. An important step required to eliminate the CT scan from the workflow is to generate the information provided by CT via an MR image. In this work, we aim to demonstrate a method to generate accurate synthetic CT (sCT) from an MR image to suit the radiation therapy (RT) treatment planning workflow. We show the feasibility of the method and make way for a broader clinical evaluation. Approach. We present a machine learning method for sCT generation from zero-echo-time (ZTE) MRI aimed at structural and quantitative accuracy of the image, with a particular focus on accurate bone density value prediction. Misestimation of bone density in the radiation path could lead to unintended dose delivery to the target volume and result in a suboptimal treatment outcome. We propose a loss function that favors a spatially sparse bone region in the image. We harness the ability of the multi-task network to produce correlated outputs as a framework to enable localization of the region of interest (RoI) via segmentation, emphasize regression of values within the RoI, and still retain overall accuracy via global regression. The network is optimized by a composite loss function that combines a dedicated loss from each task. Main results. We included 54 brain patient images in this study and tested the sCT images against reference CT on a subset of 20 cases. A pilot dose evaluation was performed on 9 of the 20 test cases to demonstrate the viability of the generated sCT in RT planning. The average quantitative metrics produced by the proposed method over the test set were: (a) mean absolute error (MAE) of 70 ± 8.6 HU; (b) peak signal-to-noise ratio (PSNR) of 29.4 ± 2.8 dB; (c) structural similarity metric (SSIM) of 0.95 ± 0.02; and (d) Dice coefficient of the body region of 0.984 ± 0. Significance. We demonstrate that the proposed method generates sCT images that resemble the visual characteristics of a real CT image and have a quantitative accuracy that suits the RT dose planning application. We compare the dose calculation from the proposed sCT and the real CT in a radiation therapy treatment planning setup and show that sCT-based planning falls within 0.5% target dose error. The method presented here, with an initial dose evaluation, is an encouraging precursor to a broader clinical evaluation of sCT-based RT planning on different anatomical regions.
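A hedged sketch of the multi-task composite loss idea described above: a global regression term, an RoI-emphasized regression term over the bone region driven by a jointly predicted segmentation, and a segmentation term. The weights and exact terms are assumptions, not the paper's published formulation:

```python
# Hedged sketch: composite multi-task loss for synthetic CT (weights are assumptions).
import torch
import torch.nn.functional as F

def sct_loss(hu_pred, hu_true, bone_logits, bone_mask,
             w_global=1.0, w_bone=2.0, w_seg=1.0):
    """bone_mask: boolean tensor marking the sparse bone RoI in the reference CT."""
    global_reg = F.l1_loss(hu_pred, hu_true)  # overall HU accuracy
    bone_reg = (F.l1_loss(hu_pred[bone_mask], hu_true[bone_mask])
                if bone_mask.any() else hu_pred.new_zeros(()))  # RoI emphasis
    seg = F.binary_cross_entropy_with_logits(bone_logits, bone_mask.float())
    return w_global * global_reg + w_bone * bone_reg + w_seg * seg
```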


Subject(s)
Image Processing, Computer-Assisted , Machine Learning , Humans , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Radiotherapy Planning, Computer-Assisted/methods , Tomography, X-Ray Computed/methods , Radiotherapy Dosage