Results 1 - 20 of 35
1.
Cell ; 185(26): 5040-5058.e19, 2022 Dec 22.
Article in English | MEDLINE | ID: mdl-36563667

ABSTRACT

Spatial molecular profiling of complex tissues is essential to investigate cellular function in physiological and pathological states. However, methods for molecular analysis of large biological specimens imaged in 3D are lacking. Here, we present DISCO-MS, a technology that combines whole-organ/whole-organism clearing and imaging, deep-learning-based image analysis, robotic tissue extraction, and ultra-high-sensitivity mass spectrometry. DISCO-MS yielded proteome data indistinguishable from uncleared samples in both rodent and human tissues. We used DISCO-MS to investigate microglia activation along axonal tracts after brain injury and characterized early- and late-stage individual amyloid-beta plaques in a mouse model of Alzheimer's disease. DISCO-bot robotic sample extraction enabled us to study the regional heterogeneity of immune cells in intact mouse bodies and aortic plaques in a complete human heart. DISCO-MS enables unbiased proteome analysis of preclinical and clinical tissues after unbiased imaging of entire specimens in 3D, identifying diagnostic and therapeutic opportunities for complex diseases. VIDEO ABSTRACT.


Subject(s)
Alzheimer Disease , Proteome , Mice , Humans , Animals , Proteome/analysis , Proteomics/methods , Alzheimer Disease/pathology , Amyloid beta-Peptides , Mass Spectrometry , Plaque, Amyloid
2.
Nat Methods ; 2024 Apr 22.
Article in English | MEDLINE | ID: mdl-38649742

ABSTRACT

Automated detection of specific cells in three-dimensional datasets such as whole-brain light-sheet image stacks is challenging. Here, we present DELiVR, a virtual reality-trained deep-learning pipeline for detecting c-Fos+ cells as markers for neuronal activity in cleared mouse brains. Virtual reality annotation substantially accelerated training data generation, enabling DELiVR to outperform state-of-the-art cell-segmenting approaches. Our pipeline is available in a user-friendly Docker container that runs with a standalone Fiji plugin. DELiVR features a comprehensive toolkit for data visualization and can be customized to other cell types of interest, as we did here for microglia somata, using Fiji for dataset-specific training. We applied DELiVR to investigate cancer-related brain activity, unveiling an activation pattern that distinguishes weight-stable cancer from cancers associated with weight loss. Overall, DELiVR is a robust deep-learning tool that does not require advanced coding skills to analyze whole-brain imaging data in health and disease.

3.
Nat Methods ; 21(2): 182-194, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38347140

ABSTRACT

Validation metrics are key for tracking scientific progress and bridging the current chasm between artificial intelligence research and its translation into practice. However, increasing evidence shows that, particularly in image analysis, metrics are often chosen inadequately. Although taking into account the individual strengths, weaknesses and limitations of validation metrics is a critical prerequisite to making educated choices, the relevant knowledge is currently scattered and poorly accessible to individual researchers. Based on a multistage Delphi process conducted by a multidisciplinary expert consortium as well as extensive community feedback, the present work provides a reliable and comprehensive common point of access to information on pitfalls related to validation metrics in image analysis. Although focused on biomedical image analysis, the addressed pitfalls generalize across application domains and are categorized according to a newly created, domain-agnostic taxonomy. The work serves to enhance global comprehension of a key topic in image analysis validation.


Subject(s)
Artificial Intelligence
4.
Nat Methods ; 21(2): 195-212, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38347141

ABSTRACT

Increasing evidence shows that flaws in machine learning (ML) algorithm validation are an underestimated global problem. In biomedical image analysis, chosen performance metrics often do not reflect the domain interest and thus fail to adequately measure scientific progress, hindering the translation of ML techniques into practice. To overcome this, we created Metrics Reloaded, a comprehensive framework guiding researchers in the problem-aware selection of metrics. Developed by a large international consortium in a multistage Delphi process, it is based on the novel concept of a problem fingerprint: a structured representation of the given problem that captures all aspects relevant for metric selection, from the domain interest to the properties of the target structure(s), dataset and algorithm output. On the basis of the problem fingerprint, users are guided through the process of choosing and applying appropriate validation metrics while being made aware of potential pitfalls. Metrics Reloaded targets image analysis problems that can be interpreted as classification tasks at image, object or pixel level, namely image-level classification, object detection, semantic segmentation and instance segmentation tasks. To improve the user experience, we implemented the framework in the Metrics Reloaded online tool. Following the convergence of ML methodology across application domains, Metrics Reloaded fosters the convergence of validation methodology. Its applicability is demonstrated for various biomedical use cases.


Subject(s)
Algorithms , Image Processing, Computer-Assisted , Machine Learning , Semantics
5.
Eur Radiol ; 33(8): 5882-5893, 2023 Aug.
Article in English | MEDLINE | ID: mdl-36928566

ABSTRACT

OBJECTIVES: T2-weighted (w) fat sat (fs) sequences, which are important in spine MRI, require a significant amount of scan time. Generative adversarial networks (GANs) can generate synthetic T2-w fs images. We evaluated the potential of synthetic T2-w fs images by comparing them to their true counterparts regarding image and fat saturation quality, and diagnostic agreement in a heterogeneous, multicenter dataset. METHODS: A GAN was used to synthesize T2-w fs from T1- and non-fs T2-w images. The training dataset comprised scans of 73 patients from two scanners, and the test dataset, scans of 101 patients from 38 multicenter scanners. Apparent signal- and contrast-to-noise ratios (aSNR/aCNR) were measured in true and synthetic T2-w fs. Two neuroradiologists graded image (5-point scale) and fat saturation quality (3-point scale). To evaluate whether the T2-w fs images are indistinguishable, a Turing test was performed by eleven neuroradiologists. Six pathologies were graded on the synthetic protocol (with synthetic T2-w fs) and the original protocol (with true T2-w fs) by the two neuroradiologists. RESULTS: aSNR and aCNR were not significantly different between the synthetic and true T2-w fs images. Subjective image quality was graded higher for synthetic T2-w fs (p = 0.023). In the Turing test, synthetic and true T2-w fs could not be distinguished from each other. The intermethod agreement between the synthetic and original protocols ranged from substantial to almost perfect for the evaluated pathologies. DISCUSSION: Synthetic T2-w fs images might replace physically acquired T2-w fs images. Our approach, validated on a challenging multicenter dataset, is highly generalizable and allows for shorter scan protocols. KEY POINTS: • Generative adversarial networks can be used to generate synthetic T2-weighted fat sat images from T1- and non-fat sat T2-weighted images of the spine.
• The synthetic T2-weighted fat sat images might replace physically acquired T2-weighted fat sat images, showing better image quality and excellent diagnostic agreement with the true T2-weighted fat sat images. • The present approach, validated on a challenging multicenter dataset, is highly generalizable and allows for significantly shorter scan protocols.
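The apparent signal- and contrast-to-noise ratios (aSNR/aCNR) used for this comparison can be estimated from ROI statistics. A minimal sketch, assuming one common definition (mean tissue signal, or tissue contrast, over the standard deviation of a background ROI); the ROIs and values are illustrative, not the study's code:

```python
import numpy as np

def asnr(signal_roi, noise_roi):
    # apparent SNR: mean tissue signal over the standard deviation
    # of a background (air) region
    return float(np.mean(signal_roi) / np.std(noise_roi))

def acnr(roi_a, roi_b, noise_roi):
    # apparent CNR: absolute difference of two tissue means,
    # normalized by the background standard deviation
    return float(abs(np.mean(roi_a) - np.mean(roi_b)) / np.std(noise_roi))

# hypothetical ROIs drawn from a synthetic image
rng = np.random.default_rng(0)
cord = rng.normal(100.0, 5.0, size=1000)        # spinal cord ROI
csf = rng.normal(160.0, 5.0, size=1000)         # CSF ROI
background = rng.normal(0.0, 4.0, size=1000)    # air/background ROI
print(asnr(cord, background), acnr(csf, cord, background))
```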


Subject(s)
Magnetic Resonance Imaging , Spine , Humans , Spine/diagnostic imaging , Magnetic Resonance Imaging/methods , Radionuclide Imaging
6.
Neuroradiology ; 63(11): 1831-1851, 2021 Nov.
Article in English | MEDLINE | ID: mdl-33835238

ABSTRACT

PURPOSE: Advanced MRI-based biomarkers offer comprehensive and quantitative information for the evaluation and characterization of brain tumors. In this study, we report initial clinical experience in routine glioma imaging with a novel, fully 3D multiparametric quantitative transient-state imaging (QTI) method for tissue characterization based on T1 and T2 values. METHODS: To demonstrate the viability of the proposed 3D QTI technique, nine glioma patients (grade II-IV), with a variety of disease states and treatment histories, were included in this study. First, we investigated the feasibility of 3D QTI (6:25 min scan time) for its use in clinical routine imaging, focusing on image reconstruction, parameter estimation, and contrast-weighted image synthesis. Second, for an initial assessment of 3D QTI-based quantitative MR biomarkers, we performed a ROI-based analysis to characterize T1 and T2 components in tumor and peritumoral tissue. RESULTS: The 3D acquisition combined with a compressed sensing reconstruction and neural network-based parameter inference produced parametric maps with high isotropic resolution (1.125 × 1.125 × 1.125 mm3 voxel size) and whole-brain coverage (22.5 × 22.5 × 22.5 cm3 FOV), enabling the synthesis of clinically relevant T1-weighted, T2-weighted, and FLAIR contrasts without any extra scan time. Our study revealed increased T1 and T2 values in tumor and peritumoral regions compared to contralateral white matter, good agreement with healthy volunteer data, and high inter-subject consistency. CONCLUSION: 3D QTI demonstrated comprehensive tissue assessment of tumor substructures captured in T1 and T2 parameters. Aiming for fast acquisition of quantitative MR biomarkers, 3D QTI has the potential to improve disease characterization in brain tumor patients under tight clinical time constraints.
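Contrast-weighted images can be synthesized from quantitative T1/T2 maps without extra scan time. A minimal sketch using the classic spin-echo signal equation; the equation choice and all parameter values are illustrative assumptions, not the QTI implementation:

```python
import numpy as np

def synthesize_spin_echo(pd, t1, t2, tr, te):
    # classic spin-echo signal equation:
    # S = PD * (1 - exp(-TR/T1)) * exp(-TE/T2)
    return pd * (1.0 - np.exp(-tr / t1)) * np.exp(-te / t2)

# hypothetical proton-density, T1 and T2 maps (2x2 voxels, times in ms)
pd_map = np.array([[1.0, 0.8], [0.9, 1.0]])
t1_map = np.array([[1000.0, 1400.0], [900.0, 4000.0]])
t2_map = np.array([[100.0, 110.0], [80.0, 2000.0]])

# long TR + long TE gives T2-weighted contrast; short TE would give T1/PD weighting
t2w = synthesize_spin_echo(pd_map, t1_map, t2_map, tr=4000.0, te=100.0)
```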


Subject(s)
Glioma , Protons , Brain , Feasibility Studies , Glioma/diagnostic imaging , Humans , Imaging, Three-Dimensional , Magnetic Resonance Imaging
7.
Nat Biotechnol ; 42(4): 617-627, 2024 Apr.
Article in English | MEDLINE | ID: mdl-37430076

ABSTRACT

Whole-body imaging techniques play a vital role in exploring the interplay of physiological systems in maintaining health and driving disease. We introduce wildDISCO, a new approach for whole-body immunolabeling, optical clearing and imaging in mice, circumventing the need for transgenic reporter animals or nanobody labeling and so overcoming existing technical limitations. We identified heptakis(2,6-di-O-methyl)-β-cyclodextrin as a potent enhancer of cholesterol extraction and membrane permeabilization, enabling deep, homogeneous penetration of standard antibodies without aggregation. WildDISCO facilitates imaging of peripheral nervous systems, lymphatic vessels and immune cells in whole mice at cellular resolution by labeling diverse endogenous proteins. Additionally, we examined rare proliferating cells and the effects of biological perturbations, as demonstrated in germ-free mice. We applied wildDISCO to map tertiary lymphoid structures in the context of breast cancer, considering both primary tumor and metastases throughout the mouse body. An atlas of high-resolution images showcasing mouse nervous, lymphatic and vascular systems is accessible at http://discotechnologies.org/wildDISCO/atlas/index.php .


Subject(s)
Imaging, Three-Dimensional , Immunoglobulin G , Mice , Animals
8.
Neurooncol Adv ; 6(1): vdad171, 2024.
Article in English | MEDLINE | ID: mdl-38435962

ABSTRACT

Background: The diffuse growth pattern of glioblastoma is one of the main challenges for accurate treatment. Computational tumor growth modeling has emerged as a promising tool to guide personalized therapy. Here, we performed clinical and biological validation of a novel growth model, aiming to close the gap between the experimental state and clinical implementation. Methods: One hundred and twenty-four patients from The Cancer Genome Atlas (TCGA) and 397 patients from the UCSF Glioma Dataset were assessed for significant correlations between clinical data, genetic pathway activation maps (generated with PARADIGM; TCGA only), and infiltration (Dw) as well as proliferation (ρ) parameters stemming from a Fisher-Kolmogorov growth model. To further evaluate clinical potential, we performed the same growth modeling on preoperative magnetic resonance imaging data from 30 patients of our institution and compared model-derived tumor volume and recurrence coverage with standard radiotherapy plans. Results: The parameter ratio Dw/ρ (P < .05 in TCGA) as well as the simulated tumor volume (P < .05 in TCGA/UCSF) were significantly inversely correlated with overall survival. Interestingly, we found a significant correlation between 11 proliferation pathways and the estimated proliferation parameter. Depending on the cutoff value for tumor cell density, we observed a significant improvement in recurrence coverage without significantly increased radiation volume utilizing model-derived target volumes instead of standard radiation plans. Conclusions: Identifying a significant correlation between computed growth parameters and clinical and biological data, we highlight the potential of tumor growth modeling for individualized therapy of glioblastoma. This might improve the accuracy of radiation planning in the near future.
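The Fisher-Kolmogorov growth model behind the Dw and ρ parameters is commonly written as the reaction-diffusion equation below (standard form; the study's exact formulation may differ in detail):

```latex
\frac{\partial u}{\partial t} = \nabla \cdot \left( D(\mathbf{x}) \, \nabla u \right) + \rho \, u \left( 1 - u \right)
```

where u is the normalized tumor cell density, D(x) the spatially varying infiltration (diffusion) coefficient taking the value Dw in white matter, and ρ the proliferation rate; the ratio Dw/ρ is commonly read as a measure of how infiltrative versus proliferative the tumor is.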

9.
ArXiv ; 2024 Mar 07.
Article in English | MEDLINE | ID: mdl-38495563

ABSTRACT

Biophysical modeling, particularly involving partial differential equations (PDEs), offers significant potential for tailoring disease treatment protocols to individual patients. However, the inverse problem-solving aspect of these models presents a substantial challenge, either due to the high computational requirements of model-based approaches or the limited robustness of deep learning (DL) methods. We propose a novel framework that leverages the unique strengths of both approaches in a synergistic manner. Our method incorporates a DL ensemble for initial parameter estimation, facilitating efficient downstream evolutionary sampling initialized with this DL-based prior. We showcase the effectiveness of integrating a rapid deep-learning algorithm with a high-precision evolution strategy in estimating brain tumor cell concentrations from magnetic resonance images. The DL prior plays a pivotal role, significantly constraining the effective sampling-parameter space. This reduction results in a fivefold convergence acceleration and a Dice score of 95%.
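The coupling of a learned prior with evolutionary sampling can be sketched as follows; this toy greedy (1, λ) evolution strategy and the quadratic stand-in objective are illustrative assumptions, not the paper's sampler:

```python
import numpy as np

def es_with_prior(objective, prior_mean, sigma=0.5, pop=16, iters=60, seed=0):
    # greedy (1, lambda) evolution strategy whose search is initialized
    # at a deep-learning-derived prior estimate of the PDE parameters
    rng = np.random.default_rng(seed)
    mean = np.asarray(prior_mean, dtype=float)
    for _ in range(iters):
        candidates = mean + sigma * rng.standard_normal((pop, mean.size))
        scores = np.array([objective(c) for c in candidates])
        mean = candidates[np.argmin(scores)]  # keep the best candidate
        sigma *= 0.95                         # anneal the step size
    return mean

# toy objective standing in for the image-match term of the inverse problem;
# the DL prior (0.6, 2.4) is assumed to lie close to the true optimum (1, 2)
target = np.array([1.0, 2.0])
fit = es_with_prior(lambda x: float(np.sum((x - target) ** 2)),
                    prior_mean=[0.6, 2.4])
```

Because the DL prior starts the search near the optimum, far fewer evolutionary iterations are needed than with a cold start.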

10.
medRxiv ; 2024 Mar 11.
Article in English | MEDLINE | ID: mdl-38045345

ABSTRACT

Automated segmentation of brain white matter lesions is crucial for both clinical assessment and scientific research in multiple sclerosis (MS). Over a decade ago, we introduced an engineered lesion segmentation tool, LST. While recent lesion segmentation approaches have leveraged artificial intelligence (AI), they often remain proprietary and difficult to adopt. As an open-source tool, we present LST-AI, an advanced deep learning-based extension of LST that consists of an ensemble of three 3D-UNets. LST-AI explicitly addresses the imbalance between white matter (WM) lesions and non-lesioned WM. It employs a composite loss function incorporating binary cross-entropy and Tversky loss to improve segmentation of the highly heterogeneous MS lesions. We train the network ensemble on 491 pairs of T1w and FLAIR images from patients with MS, collected in-house from a 3T MRI scanner; expert neuroradiologists manually segmented the lesion maps used for training. LST-AI additionally includes a lesion location annotation tool, labeling lesion location according to the 2017 McDonald criteria (periventricular, infratentorial, juxtacortical, subcortical). We conduct evaluations on 103 test cases consisting of publicly available data using the Anima segmentation validation tools and compare LST-AI with several publicly available lesion segmentation models. Our empirical analysis shows that LST-AI achieves superior performance compared to existing methods. Its Dice and F1 scores exceeded 0.62, outperforming LST, SAMSEG (Sequence Adaptive Multimodal SEGmentation), and the popular nnUNet framework, which all scored below 0.56. Notably, LST-AI demonstrated exceptional performance on the MSSEG-1 challenge dataset, an international WM lesion segmentation challenge, with a Dice score of 0.65 and an F1 score of 0.63, surpassing all other competing models at the time of the challenge.
The lesion detection rate increased rapidly with lesion volume, exceeding 75% for lesions with a volume between 10 mm3 and 100 mm3. Given its higher segmentation performance, we recommend that research groups currently using LST transition to LST-AI. To facilitate broad adoption, we are releasing LST-AI as an open-source model, available as a command-line tool, dockerized container, or Python script, enabling diverse applications across multiple platforms.
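The composite objective described above, binary cross-entropy plus Tversky loss, can be sketched in NumPy on flattened probability maps; the α/β values and the equal weighting are illustrative assumptions, not LST-AI's actual settings:

```python
import numpy as np

def binary_cross_entropy(pred, target, eps=1e-7):
    # mean voxel-wise BCE; predictions clipped away from 0/1 for stability
    pred = np.clip(pred, eps, 1.0 - eps)
    return float(-np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred)))

def tversky_loss(pred, target, alpha=0.3, beta=0.7, eps=1e-7):
    # alpha weights false positives, beta weights false negatives;
    # beta > alpha penalizes missed lesion voxels more heavily,
    # addressing the lesion/background imbalance
    tp = np.sum(pred * target)
    fp = np.sum(pred * (1 - target))
    fn = np.sum((1 - pred) * target)
    return float(1.0 - (tp + eps) / (tp + alpha * fp + beta * fn + eps))

def composite_loss(pred, target, w_bce=1.0, w_tversky=1.0):
    return w_bce * binary_cross_entropy(pred, target) + \
           w_tversky * tversky_loss(pred, target)
```

With α = β = 0.5 the Tversky loss reduces to the familiar Dice loss, which is why Tversky is often described as its generalization.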

11.
IEEE Trans Pattern Anal Mach Intell ; 46(5): 3784-3795, 2024 May.
Article in English | MEDLINE | ID: mdl-38198270

ABSTRACT

Deep learning models for medical image segmentation can fail unexpectedly and spectacularly for pathological cases and images acquired at different centers than training images, with labeling errors that violate expert knowledge. Such errors undermine the trustworthiness of deep learning models for medical image segmentation. Mechanisms for detecting and correcting such failures are essential for safely translating this technology into clinics and are likely to be a requirement of future regulations on artificial intelligence (AI). In this work, we propose a trustworthy AI theoretical framework and a practical system that can augment any backbone AI system using a fallback method and a fail-safe mechanism based on Dempster-Shafer theory. Our approach relies on an actionable definition of trustworthy AI. Our method automatically discards voxel-level labels predicted by the backbone AI that violate expert knowledge and relies on a fallback for those voxels. We demonstrate the effectiveness of the proposed trustworthy AI approach on the largest reported annotated dataset of fetal MRI consisting of 540 manually annotated fetal brain 3D T2w MRIs from 13 centers. Our trustworthy AI method improves the robustness of four backbone AI models for fetal brain MRIs acquired across various centers and for fetuses with various brain abnormalities.
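The fail-safe mechanism rests on Dempster-Shafer theory, whose core operation is Dempster's rule of combination. A minimal sketch over frozenset focal elements; the two mass functions below are hypothetical, not the paper's calibrated beliefs:

```python
def dempster_combine(m1, m2):
    # Dempster's rule: multiply masses of intersecting focal elements
    # and renormalize by the non-conflicting mass
    combined = {}
    conflict = 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

A, B = frozenset({"lesion"}), frozenset({"background"})
AB = A | B  # mass on the whole frame expresses ignorance
backbone = {A: 0.6, AB: 0.4}   # hypothetical backbone AI belief per voxel
fallback = {B: 0.7, AB: 0.3}   # hypothetical fallback/atlas belief
fused = dempster_combine(backbone, fallback)
```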


Subject(s)
Algorithms , Artificial Intelligence , Magnetic Resonance Imaging , Fetus/diagnostic imaging , Brain/diagnostic imaging
12.
Neuroimage Clin ; 42: 103611, 2024.
Article in English | MEDLINE | ID: mdl-38703470

ABSTRACT

Automated segmentation of brain white matter lesions is crucial for both clinical assessment and scientific research in multiple sclerosis (MS). Over a decade ago, we introduced an engineered lesion segmentation tool, LST. While recent lesion segmentation approaches have leveraged artificial intelligence (AI), they often remain proprietary and difficult to adopt. As an open-source tool, we present LST-AI, an advanced deep learning-based extension of LST that consists of an ensemble of three 3D U-Nets. LST-AI explicitly addresses the imbalance between white matter (WM) lesions and non-lesioned WM. It employs a composite loss function incorporating binary cross-entropy and Tversky loss to improve segmentation of the highly heterogeneous MS lesions. We train the network ensemble on 491 pairs of T1-weighted and FLAIR images from patients with MS, collected in-house from a 3T MRI scanner; expert neuroradiologists manually segmented the lesion maps used for training. LST-AI also includes a lesion location annotation tool, labeling lesions as periventricular, infratentorial, and juxtacortical according to the 2017 McDonald criteria, and, additionally, as subcortical. We conduct evaluations on 103 test cases consisting of publicly available data using the Anima segmentation validation tools and compare LST-AI with several publicly available lesion segmentation models. Our empirical analysis shows that LST-AI achieves superior performance compared to existing methods. Its Dice and F1 scores exceeded 0.62, outperforming LST, SAMSEG (Sequence Adaptive Multimodal SEGmentation), and the popular nnUNet framework, which all scored below 0.56. Notably, LST-AI demonstrated exceptional performance on the MSSEG-1 challenge dataset, an international WM lesion segmentation challenge, with a Dice score of 0.65 and an F1 score of 0.63, surpassing all other competing models at the time of the challenge.
The lesion detection rate increased rapidly with lesion volume, exceeding 75% for lesions with a volume between 10 mm3 and 100 mm3. Given its higher segmentation performance, we recommend that research groups currently using LST transition to LST-AI. To facilitate broad adoption, we are releasing LST-AI as an open-source model, available as a command-line tool, dockerized container, or Python script, enabling diverse applications across multiple platforms.


Subject(s)
Deep Learning , Magnetic Resonance Imaging , Multiple Sclerosis , White Matter , Humans , Multiple Sclerosis/diagnostic imaging , Multiple Sclerosis/pathology , Magnetic Resonance Imaging/methods , White Matter/diagnostic imaging , White Matter/pathology , Brain/diagnostic imaging , Brain/pathology , Image Processing, Computer-Assisted/methods , Female , Neuroimaging/methods , Neuroimaging/standards , Male , Adult
13.
Neuro Oncol ; 2024 May 30.
Article in English | MEDLINE | ID: mdl-38813990

ABSTRACT

BACKGROUND: Surgical resection is the standard of care for patients with large or symptomatic brain metastases (BMs). Despite improved local control after adjuvant stereotactic radiotherapy, the risk of local failure (LF) persists. Therefore, we aimed to develop and externally validate a pre-therapeutic radiomics-based prediction tool to identify patients at high LF risk. METHODS: Data were collected from A Multicenter Analysis of Stereotactic Radiotherapy to the Resection Cavity of Brain Metastases (AURORA) retrospective study (training cohort: 253 patients from two centers; external test cohort: 99 patients from five centers). Radiomic features were extracted from the contrast-enhancing BM (T1-CE MRI sequence) and the surrounding edema (FLAIR sequence). Different combinations of radiomic and clinical features were compared. The final models were trained on the entire training cohort with the best parameter set previously determined by internal 5-fold cross-validation and tested on the external test set. RESULTS: The best performance in the external test was achieved by an elastic net regression model trained with a combination of radiomic and clinical features with a concordance index (CI) of 0.77, outperforming any clinical model (best CI: 0.70). The model effectively stratified patients by LF risk in a Kaplan-Meier analysis (p < 0.001) and demonstrated an incremental net clinical benefit. At 24 months, we found LF in 9% and 74% of the low and high-risk groups, respectively. CONCLUSIONS: A combination of clinical and radiomic features predicted freedom from LF better than any clinical feature set alone. Patients at high risk for LF may benefit from stricter follow-up routines or intensified therapy.
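The concordance index used to compare the models above is Harrell's C: the fraction of comparable patient pairs whose predicted risks are ordered consistently with their observed times. A minimal sketch, illustrative rather than the study's evaluation code:

```python
def concordance_index(times, events, risk):
    # Harrell's C-index for right-censored data: a pair (i, j) is
    # comparable if i fails before j's observed time and i's event
    # occurred; it is concordant if i also has the higher predicted risk
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if times[i] < times[j] and events[i]:
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1.0
                elif risk[i] == risk[j]:
                    concordant += 0.5  # tied risks count half
    return concordant / comparable
```

A C-index of 0.5 corresponds to random ordering and 1.0 to perfect risk ranking, so the reported 0.77 versus 0.70 is a meaningful gap.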

14.
ArXiv ; 2024 Feb 23.
Article in English | MEDLINE | ID: mdl-36945687

ABSTRACT

Validation metrics are key for the reliable tracking of scientific progress and for bridging the current chasm between artificial intelligence (AI) research and its translation into practice. However, increasing evidence shows that particularly in image analysis, metrics are often chosen inadequately in relation to the underlying research problem. This could be attributed to a lack of accessibility of metric-related knowledge: While taking into account the individual strengths, weaknesses, and limitations of validation metrics is a critical prerequisite to making educated choices, the relevant knowledge is currently scattered and poorly accessible to individual researchers. Based on a multi-stage Delphi process conducted by a multidisciplinary expert consortium as well as extensive community feedback, the present work provides the first reliable and comprehensive common point of access to information on pitfalls related to validation metrics in image analysis. Focusing on biomedical image analysis but with the potential of transfer to other fields, the addressed pitfalls generalize across application domains and are categorized according to a newly created, domain-agnostic taxonomy. To facilitate comprehension, illustrations and specific examples accompany each pitfall. As a structured body of information accessible to researchers of all levels of expertise, this work enhances global comprehension of a key topic in image analysis validation.

15.
ArXiv ; 2024 Mar 08.
Article in English | MEDLINE | ID: mdl-37292481

ABSTRACT

Pediatric tumors of the central nervous system are the most common cause of cancer-related death in children. The five-year survival rate for high-grade gliomas in children is less than 20%. Due to their rarity, the diagnosis of these entities is often delayed, their treatment is mainly based on historic treatment concepts, and clinical trials require multi-institutional collaborations. The MICCAI Brain Tumor Segmentation (BraTS) Challenge is a landmark community benchmark event with a successful history of 12 years of resource creation for the segmentation and analysis of adult glioma. Here we present the CBTN-CONNECT-DIPGR-ASNR-MICCAI BraTS-PEDs 2023 challenge, which represents the first BraTS challenge focused on pediatric brain tumors with data acquired across multiple international consortia dedicated to pediatric neuro-oncology and clinical trials. The BraTS-PEDs 2023 challenge focuses on benchmarking the development of volumetric segmentation algorithms for pediatric brain glioma through standardized quantitative performance evaluation metrics utilized across the BraTS 2023 cluster of challenges. Models gaining knowledge from the BraTS-PEDs multi-parametric structural MRI (mpMRI) training data will be evaluated on separate validation and unseen test mpMRI data of high-grade pediatric glioma. The CBTN-CONNECT-DIPGR-ASNR-MICCAI BraTS-PEDs 2023 challenge brings together clinicians and AI/imaging scientists to accelerate the development of automated segmentation techniques that could benefit clinical trials and, ultimately, the care of children with brain tumors.

16.
Sci Data ; 11(1): 496, 2024 May 15.
Article in English | MEDLINE | ID: mdl-38750041

ABSTRACT

Meningiomas are the most common primary intracranial tumors and can be associated with significant morbidity and mortality. Radiologists, neurosurgeons, neuro-oncologists, and radiation oncologists rely on brain MRI for diagnosis, treatment planning, and longitudinal treatment monitoring. However, automated, objective, and quantitative tools for non-invasive assessment of meningiomas on multi-sequence MR images are not available. Here we present the BraTS Pre-operative Meningioma Dataset, as the largest multi-institutional expert annotated multilabel meningioma multi-sequence MR image dataset to date. This dataset includes 1,141 multi-sequence MR images from six sites, each with four structural MRI sequences (T2-, T2/FLAIR-, pre-contrast T1-, and post-contrast T1-weighted) accompanied by expert manually refined segmentations of three distinct meningioma sub-compartments: enhancing tumor, non-enhancing tumor, and surrounding non-enhancing T2/FLAIR hyperintensity. Basic demographic data are provided including age at time of initial imaging, sex, and CNS WHO grade. The goal of releasing this dataset is to facilitate the development of automated computational methods for meningioma segmentation and expedite their incorporation into clinical practice, ultimately targeting improvement in the care of meningioma patients.


Subject(s)
Magnetic Resonance Imaging , Meningeal Neoplasms , Meningioma , Meningioma/diagnostic imaging , Humans , Meningeal Neoplasms/diagnostic imaging , Male , Female , Image Processing, Computer-Assisted/methods , Middle Aged , Aged
17.
Med Image Anal ; 91: 103029, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37988921

ABSTRACT

Imaging markers of cerebral small vessel disease provide valuable information on brain health, but their manual assessment is time-consuming and hampered by substantial intra- and interrater variability. Automated rating may benefit biomedical research, as well as clinical assessment, but the diagnostic reliability of existing algorithms is unknown. Here, we present the results of the VAscular Lesions DetectiOn and Segmentation (Where is VALDO?) challenge that was run as a satellite event at the international conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) 2021. This challenge aimed to promote the development of methods for automated detection and segmentation of small and sparse imaging markers of cerebral small vessel disease, namely enlarged perivascular spaces (EPVS) (Task 1), cerebral microbleeds (Task 2) and lacunes of presumed vascular origin (Task 3) while leveraging weak and noisy labels. Overall, 12 teams participated in the challenge proposing solutions for one or more tasks (4 for Task 1-EPVS, 9 for Task 2-Microbleeds and 6 for Task 3-Lacunes). Multi-cohort data were used in both training and evaluation. Results showed a large variability in performance both across teams and across tasks, with promising results notably for Task 1-EPVS and Task 2-Microbleeds, but not yet practically useful results for Task 3-Lacunes. It also highlighted the performance inconsistency across cases that may deter use at an individual level, while still proving useful at a population level.


Subject(s)
Cerebral Small Vessel Diseases , Magnetic Resonance Imaging , Humans , Magnetic Resonance Imaging/methods , Reproducibility of Results , Cerebral Small Vessel Diseases/diagnostic imaging , Cerebral Hemorrhage , Computers
18.
ArXiv ; 2024 Apr 29.
Article in English | MEDLINE | ID: mdl-38235066

ABSTRACT

The Circle of Willis (CoW) is an important network of arteries connecting major circulations of the brain. Its vascular architecture is believed to affect the risk, severity, and clinical outcome of serious neurovascular diseases. However, characterizing the highly variable CoW anatomy is still a manual and time-consuming expert task. The CoW is usually imaged by two angiographic imaging modalities, magnetic resonance angiography (MRA) and computed tomography angiography (CTA), but there exist limited public datasets with annotations on CoW anatomy, especially for CTA. Therefore, we organized the TopCoW Challenge in 2023 with the release of an annotated CoW dataset. The TopCoW dataset was the first public dataset with voxel-level annotations for thirteen possible CoW vessel components, enabled by virtual-reality (VR) technology. It was also the first large dataset with paired MRA and CTA from the same patients. The TopCoW challenge formalized the CoW characterization problem as a multiclass anatomical segmentation task with an emphasis on topological metrics. We invited submissions worldwide for the CoW segmentation task, which attracted over 140 registered participants from four continents. The top performing teams managed to segment many CoW components to Dice scores around 90%, but with lower scores for communicating arteries and rare variants. There were also topological mistakes for predictions with high Dice scores. Additional topological analysis revealed further areas for improvement in detecting certain CoW components and matching CoW variant topology accurately. TopCoW represented a first attempt at benchmarking the CoW anatomical segmentation task for MRA and CTA, both morphologically and topologically.
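The per-component Dice scores reported above follow the standard overlap definition. A minimal sketch for a multiclass label map; illustrative only, since the challenge's official evaluation additionally uses topological metrics:

```python
import numpy as np

def dice_per_class(pred, gt, labels):
    # pred, gt: integer label maps of identical shape;
    # Dice = 2|P ∩ G| / (|P| + |G|) per class, defined as 1.0 if both empty
    scores = {}
    for c in labels:
        p, g = (pred == c), (gt == c)
        denom = int(p.sum() + g.sum())
        scores[c] = 1.0 if denom == 0 else 2.0 * int(np.logical_and(p, g).sum()) / denom
    return scores

pred = np.array([1, 1, 2, 0])  # hypothetical predicted vessel labels
gt = np.array([1, 2, 2, 0])    # hypothetical reference labels
print(dice_per_class(pred, gt, labels=[0, 1, 2]))
```

Note that Dice is purely voxel-wise: a prediction can score around 90% and still contain the topological mistakes the challenge analysis highlights.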

19.
Diagnostics (Basel) ; 13(5)2023 Mar 03.
Article in English | MEDLINE | ID: mdl-36900118

ABSTRACT

(1) Background and Purpose: In magnetic resonance imaging (MRI) of the spine, T2-weighted (T2-w) fat-saturated (fs) images improve the diagnostic assessment of pathologies. In the daily clinical setting, however, additional T2-w fs images are frequently missing due to time constraints or motion artifacts. Generative adversarial networks (GANs) can generate synthetic T2-w fs images in a clinically feasible time. By simulating the radiological workflow with a heterogeneous dataset, this study therefore evaluated the diagnostic value of additional synthetic, GAN-based T2-w fs images in the clinical routine. (2) Methods: 174 patients with MRI of the spine were retrospectively identified. A GAN was trained to synthesize T2-w fs images from the T1-w and non-fs T2-w images of 73 patients scanned at our institution. The GAN was then used to create synthetic T2-w fs images for the 101 previously unseen patients from multiple institutions. In this test dataset, the additional diagnostic value of the synthetic T2-w fs images was assessed for six pathologies by two neuroradiologists. Pathologies were first graded on T1-w and non-fs T2-w images only; synthetic T2-w fs images were then added, and the pathologies were graded again. The additional diagnostic value of the synthetic protocol was evaluated by calculating Cohen's κ and accuracy against a ground-truth (GT) grading based on real T2-w fs images, prior or follow-up scans, other imaging modalities, and clinical information. (3) Results: Adding the synthetic T2-w fs images to the imaging protocol led to more precise grading of abnormalities than grading based on T1-w and non-fs T2-w images only (mean κ GT versus synthetic protocol = 0.65; mean κ GT versus T1/T2 = 0.56; p = 0.043). (4) Conclusions: Implementing synthetic T2-w fs images in the radiological workflow significantly improves the overall assessment of spine pathologies. High-quality synthetic T2-w fs images can be generated by a GAN from heterogeneous, multicenter T1-w and non-fs T2-w contrasts in a clinically feasible time, underlining the reproducibility and generalizability of the approach.
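The inter-rater agreement statistic used above, Cohen's κ, compares observed agreement with the agreement expected by chance. A minimal sketch follows; the grade lists and function name are illustrative assumptions, not the study's data or code:

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa for two paired sequences of categorical ratings."""
    assert len(a) == len(b)
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n          # observed agreement
    ca, cb = Counter(a), Counter(b)
    pe = sum(ca[k] * cb[k] for k in ca) / (n * n)       # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical pathology grades (0 = none, 1 = mild, 2 = severe)
gt_grades  = [0, 1, 2, 1, 0, 2, 1, 0]  # ground-truth grading
rdr_grades = [0, 1, 2, 2, 0, 2, 1, 1]  # reader grading on a given protocol
print(round(cohens_kappa(gt_grades, rdr_grades), 3))
```

κ = 1 indicates perfect agreement and κ = 0 agreement no better than chance, so the rise from 0.56 to 0.65 reported above reflects closer alignment with the ground-truth grading.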

20.
Med Image Anal ; 83: 102672, 2023 01.
Article in English | MEDLINE | ID: mdl-36395623

ABSTRACT

Current treatment planning for patients diagnosed with a brain tumor, such as glioma, could benefit significantly from access to the spatial distribution of tumor cell concentration. Existing diagnostic modalities, e.g. magnetic resonance imaging (MRI), contrast areas of high cell density sufficiently well. In gliomas, however, they do not portray areas of low cell concentration, which can often serve as a source for the secondary appearance of the tumor after treatment. To estimate tumor cell densities beyond the visible boundaries of the lesion, numerical simulations of tumor growth can complement imaging information by providing estimates of the full spatial distribution of tumor cells. Over recent years, a corpus of literature on medical-image-based tumor modeling has been published. It includes different mathematical formalisms describing the forward tumor growth model, and various parametric inference schemes have been developed to perform efficient tumor model personalization, i.e. to solve the inverse problem. However, the unifying drawback of all existing approaches is the time complexity of the model personalization, which prohibits integration of the modeling into clinical settings. In this work, we introduce a deep-learning-based methodology for inferring the patient-specific spatial distribution of brain tumors from T1Gd and FLAIR MRI scans. Coined Learn-Morph-Infer, the method achieves real-time performance on the order of minutes on widely available hardware, and the compute time is stable across tumor models of different complexity, such as reaction-diffusion and reaction-advection-diffusion models. We believe the proposed inverse-solution approach not only paves the way for clinical translation of brain tumor personalization but can also be adopted in other scientific and engineering domains.
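The reaction-diffusion forward model mentioned above can be sketched in its simplest 1D form, the Fisher-KPP equation ∂u/∂t = D ∂²u/∂x² + ρ u(1 − u), where u is normalized tumor cell density, D the diffusion coefficient, and ρ the proliferation rate. The grid sizes, coefficients, and function name below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def fisher_kpp_1d(n=200, steps=2000, D=1e-3, rho=1.0, dx=0.01, dt=0.01):
    """Explicit finite-difference forward model of du/dt = D*u_xx + rho*u*(1-u).

    Stable for D*dt/dx**2 <= 0.5 (here 0.1). Boundary cells keep a zero
    Laplacian, so density enters them only through local proliferation.
    """
    u = np.zeros(n)
    u[:10] = 1.0  # seed tumor cell density at the left end of the domain
    for _ in range(steps):
        lap = np.zeros_like(u)
        lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2  # discrete u_xx
        u = u + dt * (D * lap + rho * u * (1 - u))
        u = np.clip(u, 0.0, 1.0)  # keep density in its physical range
    return u

u = fisher_kpp_1d()
# A traveling front forms: density saturates behind it and decays ahead of it.
```

Personalizing such a model means inverting it, i.e. finding D, ρ, and the seed location that best reproduce a patient's scans; the time cost of that inversion is exactly what Learn-Morph-Infer replaces with a learned mapping.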


Subject(s)
Brain Neoplasms , Humans , Brain Neoplasms/diagnostic imaging