ABSTRACT
The zebrafish Danio rerio has become a popular model host to explore disease pathology caused by infectious agents. A main advantage is its transparency at an early age, which enables live imaging of infection dynamics. While multispecies infections are common in patients, the zebrafish model is rarely used to study them, although the model would be ideal for investigating pathogen-pathogen and pathogen-host interactions. This may be due to the absence of an established multispecies infection protocol for a defined organ and the lack of suitable image analysis pipelines for automated image processing. To address these issues, we developed a protocol for establishing and tracking single and multispecies bacterial infections in the inner ear structure (otic vesicle) of the zebrafish by imaging. Subsequently, we generated an image analysis pipeline that involved deep learning for the automated segmentation of the otic vesicle, and scripts for quantifying pathogen frequencies through fluorescence intensity measures. We used Pseudomonas aeruginosa, Acinetobacter baumannii, and Klebsiella pneumoniae, three of the difficult-to-treat ESKAPE pathogens, to show that our infection protocol and image analysis pipeline work both for single pathogens and pairwise pathogen combinations. Thus, our protocols provide a comprehensive toolbox for studying single and multispecies infections in real time in zebrafish.
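The frequency-quantification step described above can be sketched as a simple intensity-ratio computation. Everything below — the channel roles, the background offset, and the function name — is a hypothetical illustration, not the published scripts:

```python
import numpy as np

# Hypothetical sketch: estimate the relative frequency of two co-infecting,
# differently labeled pathogens from their summed above-background
# fluorescence inside the segmented otic vesicle. Channel roles, the
# background offset, and the function name are illustrative assumptions.

def pathogen_frequencies(chan_a, chan_b, vesicle_mask, background=10.0):
    """Return the intensity-based frequency of each pathogen inside the mask."""
    a = np.clip(chan_a[vesicle_mask] - background, 0, None).sum()
    b = np.clip(chan_b[vesicle_mask] - background, 0, None).sum()
    total = a + b
    if total == 0:
        return 0.0, 0.0
    return a / total, b / total

# Toy image: pathogen A dominates the top half, pathogen B the bottom half.
chan_a = np.zeros((4, 4)); chan_a[:2, :] = 110.0
chan_b = np.zeros((4, 4)); chan_b[2:, :] = 60.0
mask = np.ones((4, 4), dtype=bool)
freq_a, freq_b = pathogen_frequencies(chan_a, chan_b, mask)
```

Restricting the sums to the segmented vesicle mask is what ties this step to the deep-learning segmentation stage.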
Subject(s)
Image Processing, Computer-Assisted; Pseudomonas aeruginosa; Zebrafish; Zebrafish/microbiology; Animals; Image Processing, Computer-Assisted/methods; Bacterial Infections/microbiology; Bacterial Infections/diagnostic imaging; Acinetobacter baumannii/pathogenicity; Disease Models, Animal; Host-Pathogen Interactions; Klebsiella pneumoniae/pathogenicity; Ear, Inner/microbiology; Ear, Inner/diagnostic imaging; Deep Learning
ABSTRACT
BACKGROUND: Surgical resection is the standard of care for patients with large or symptomatic brain metastases (BMs). Despite improved local control after adjuvant stereotactic radiotherapy, the risk of local failure (LF) persists. Therefore, we aimed to develop and externally validate a pre-therapeutic radiomics-based prediction tool to identify patients at high LF risk. METHODS: Data were collected from A Multicenter Analysis of Stereotactic Radiotherapy to the Resection Cavity of BMs (AURORA) retrospective study (training cohort: 253 patients from 2 centers; external test cohort: 99 patients from 5 centers). Radiomic features were extracted from the contrast-enhancing BM (T1-CE MRI sequence) and the surrounding edema (T2-FLAIR sequence). Different combinations of radiomic and clinical features were compared. The final models were trained on the entire training cohort with the best parameter set previously determined by internal 5-fold cross-validation and tested on the external test set. RESULTS: The best performance in the external test was achieved by an elastic net regression model trained with a combination of radiomic and clinical features with a concordance index (CI) of 0.77, outperforming any clinical model (best CI: 0.70). The model effectively stratified patients by LF risk in a Kaplan-Meier analysis (P < .001) and demonstrated an incremental net clinical benefit. At 24 months, we found LF in 9% and 74% of the low- and high-risk groups, respectively. CONCLUSIONS: A combination of clinical and radiomic features predicted freedom from LF better than any clinical feature set alone. Patients at high risk for LF may benefit from stricter follow-up routines or intensified therapy.
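The concordance index reported above can be computed as follows. This is a minimal sketch of Harrell's C-index that, for simplicity, assumes every patient experienced local failure (no censoring); the times and risk scores are made-up illustrative values:

```python
from itertools import combinations

# Minimal sketch of Harrell's concordance index (CI): the fraction of
# comparable patient pairs in which the model assigns the higher risk score
# to the patient who fails earlier. Censoring is omitted for brevity.

def concordance_index(times, risks):
    """CI over all pairs with distinct event times."""
    concordant, comparable = 0.0, 0
    for i, j in combinations(range(len(times)), 2):
        if times[i] == times[j]:
            continue                      # tied times are skipped in this sketch
        comparable += 1
        # the patient failing earlier should carry the higher risk score
        early, late = (i, j) if times[i] < times[j] else (j, i)
        if risks[early] > risks[late]:
            concordant += 1.0
        elif risks[early] == risks[late]:
            concordant += 0.5             # ties in risk count as half-concordant
    return concordant / comparable

times = [5, 10, 15, 20]       # months to local failure (toy data)
risks = [0.9, 0.7, 0.8, 0.1]  # model-predicted risk scores
ci = concordance_index(times, risks)  # 5 of 6 pairs concordant
```

A CI of 0.5 corresponds to random ranking and 1.0 to perfect ranking, which is why 0.77 vs. 0.70 is a meaningful gap.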
Subject(s)
Brain Neoplasms; Magnetic Resonance Imaging; Radiosurgery; Humans; Brain Neoplasms/secondary; Brain Neoplasms/surgery; Brain Neoplasms/diagnostic imaging; Brain Neoplasms/radiotherapy; Radiosurgery/methods; Male; Female; Retrospective Studies; Middle Aged; Magnetic Resonance Imaging/methods; Aged; Prognosis; Follow-Up Studies; Adult; Radiomics
ABSTRACT
BACKGROUND: Volume of interest (VOI) segmentation is a crucial step for Radiomics analyses and radiotherapy (RT) treatment planning. Because it can be time-consuming and subject to inter-observer variability, we developed and tested a Deep Learning-based automatic segmentation (DLBAS) algorithm to reproducibly predict the primary gross tumor as VOI for Radiomics analyses in extremity soft tissue sarcomas (STS). METHODS: A DLBAS algorithm was trained on a cohort of 157 patients and externally tested on an independent cohort of 87 patients using contrast-enhanced MRI. Manual tumor delineations by a radiation oncologist served as ground truths (GTs). A benchmark study with 20 cases from the test cohort compared the DLBAS predictions against manual VOI segmentations of two residents (ERs) and clinical delineations of two radiation oncologists (ROs). The ROs rated DLBAS predictions regarding their direct applicability. RESULTS: The DLBAS achieved a median dice similarity coefficient (DSC) of 0.88 against the GTs in the entire test cohort (interquartile range (IQR): 0.11) and a median DSC of 0.89 (IQR 0.07) and 0.82 (IQR 0.10) in comparison to ERs and ROs, respectively. Radiomics feature stability was high with a median intraclass correlation coefficient of 0.97, 0.95 and 0.94 for GTs, ERs, and ROs, respectively. DLBAS predictions were deemed clinically suitable by the two ROs in 35% and 20% of cases, respectively. CONCLUSION: The results demonstrate that the DLBAS algorithm provides reproducible VOI predictions for radiomics feature extraction. Variability remains regarding direct clinical applicability of predictions for RT treatment planning.
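The Dice similarity coefficient used throughout the benchmark above is a simple overlap measure; a minimal sketch with toy masks:

```python
import numpy as np

# Minimal sketch of the Dice similarity coefficient (DSC); the two toy masks
# below stand in for a predicted and a ground-truth VOI.

def dice(a, b, eps=1e-8):
    """DSC = 2|A ∩ B| / (|A| + |B|) for boolean masks."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum() + eps)

pred = np.array([[1, 1, 0], [0, 1, 0], [0, 0, 0]])
gt   = np.array([[1, 1, 0], [0, 0, 0], [0, 0, 0]])
d = dice(pred, gt)  # overlap of 2 voxels, sizes 3 and 2 -> DSC = 0.8
```

A DSC of 1.0 means perfect overlap; the study's median of 0.88 against ground truth is in the range typically considered strong for soft-tissue structures.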
Subject(s)
Algorithms; Benchmarking; Deep Learning; Extremities; Magnetic Resonance Imaging; Sarcoma; Humans; Sarcoma/diagnostic imaging; Sarcoma/radiotherapy; Sarcoma/pathology; Magnetic Resonance Imaging/methods; Male; Female; Extremities/diagnostic imaging; Middle Aged; Adult; Aged; Radiotherapy Planning, Computer-Assisted/methods; Soft Tissue Neoplasms/diagnostic imaging; Soft Tissue Neoplasms/radiotherapy; Soft Tissue Neoplasms/pathology; Radiomics
ABSTRACT
Automated segmentation of brain white matter lesions is crucial for both clinical assessment and scientific research in multiple sclerosis (MS). Over a decade ago, we introduced an engineered lesion segmentation tool, LST. While recent lesion segmentation approaches have leveraged artificial intelligence (AI), they often remain proprietary and difficult to adopt. As an open-source tool, we present LST-AI, an advanced deep learning-based extension of LST that consists of an ensemble of three 3D U-Nets. LST-AI explicitly addresses the imbalance between white matter (WM) lesions and non-lesioned WM. It employs a composite loss function incorporating binary cross-entropy and Tversky loss to improve segmentation of the highly heterogeneous MS lesions. We train the network ensemble on 491 pairs of T1-weighted and FLAIR images from MS patients, collected in-house from a 3T MRI scanner; the lesion maps used for training were manually segmented by expert neuroradiologists. LST-AI also includes a lesion location annotation tool, labeling lesions as periventricular, infratentorial, and juxtacortical according to the 2017 McDonald criteria, and, additionally, as subcortical. We conduct evaluations on 103 test cases consisting of publicly available data using the Anima segmentation validation tools and compare LST-AI with several publicly available lesion segmentation models. Our empirical analysis shows that LST-AI achieves superior performance compared to existing methods. Its Dice and F1 scores exceeded 0.62, outperforming LST, SAMSEG (Sequence Adaptive Multimodal SEGmentation), and the popular nnUNet framework, which all scored below 0.56. Notably, LST-AI demonstrated exceptional performance on the MSSEG-1 challenge dataset, an international WM lesion segmentation challenge, with a Dice score of 0.65 and an F1 score of 0.63, surpassing all other competing models at the time of the challenge.
The lesion detection rate rose rapidly with increasing lesion volume, exceeding 75% for lesions with a volume between 10 mm³ and 100 mm³. Given its higher segmentation performance, we recommend that research groups currently using LST transition to LST-AI. To facilitate broad adoption, we are releasing LST-AI as an open-source model, available as a command-line tool, dockerized container, or Python script, enabling diverse applications across multiple platforms.
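The composite loss described above — binary cross-entropy plus a Tversky loss — can be sketched as follows. The Tversky index generalizes Dice by weighting false positives and false negatives differently; the constants alpha, beta, and the 50/50 mixing weight are illustrative assumptions, not LST-AI's settings:

```python
import numpy as np

# Sketch of a composite segmentation loss: binary cross-entropy (BCE) plus a
# Tversky loss. alpha penalizes false positives, beta false negatives; the
# specific values below are illustrative, not those used by LST-AI.

def bce(p, y, eps=1e-7):
    p = np.clip(p, eps, 1 - eps)
    return float(-np.mean(y * np.log(p) + (1 - y) * np.log(1 - p)))

def tversky_loss(p, y, alpha=0.3, beta=0.7, eps=1e-7):
    tp = np.sum(p * y)          # soft true positives
    fp = np.sum(p * (1 - y))    # soft false positives
    fn = np.sum((1 - p) * y)    # soft false negatives
    return float(1.0 - (tp + eps) / (tp + alpha * fp + beta * fn + eps))

def composite_loss(p, y, w=0.5):
    return w * bce(p, y) + (1 - w) * tversky_loss(p, y)

p = np.array([0.9, 0.8, 0.2, 0.1])  # predicted lesion probabilities
y = np.array([1.0, 1.0, 0.0, 0.0])  # ground-truth voxel labels
loss = composite_loss(p, y)
```

Setting beta above alpha makes missed lesion voxels costlier than spurious ones, which is one common way to counter the lesion/background imbalance the abstract mentions.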
Subject(s)
Deep Learning; Magnetic Resonance Imaging; Multiple Sclerosis; White Matter; Humans; Multiple Sclerosis/diagnostic imaging; Multiple Sclerosis/pathology; Magnetic Resonance Imaging/methods; White Matter/diagnostic imaging; White Matter/pathology; Brain/diagnostic imaging; Brain/pathology; Image Processing, Computer-Assisted/methods; Female; Neuroimaging/methods; Neuroimaging/standards; Male; Adult
ABSTRACT
Meningiomas are the most common primary intracranial tumors and can be associated with significant morbidity and mortality. Radiologists, neurosurgeons, neuro-oncologists, and radiation oncologists rely on brain MRI for diagnosis, treatment planning, and longitudinal treatment monitoring. However, automated, objective, and quantitative tools for non-invasive assessment of meningiomas on multi-sequence MR images are not available. Here we present the BraTS Pre-operative Meningioma Dataset, the largest multi-institutional, expert-annotated, multilabel, multi-sequence meningioma MR image dataset to date. This dataset includes 1,141 multi-sequence MR images from six sites, each with four structural MRI sequences (T2-, T2/FLAIR-, pre-contrast T1-, and post-contrast T1-weighted) accompanied by expert manually refined segmentations of three distinct meningioma sub-compartments: enhancing tumor, non-enhancing tumor, and surrounding non-enhancing T2/FLAIR hyperintensity. Basic demographic data are provided including age at time of initial imaging, sex, and CNS WHO grade. The goal of releasing this dataset is to facilitate the development of automated computational methods for meningioma segmentation and expedite their incorporation into clinical practice, ultimately targeting improvement in the care of meningioma patients.
Subject(s)
Magnetic Resonance Imaging; Meningeal Neoplasms; Meningioma; Meningioma/diagnostic imaging; Humans; Meningeal Neoplasms/diagnostic imaging; Male; Female; Image Processing, Computer-Assisted/methods; Middle Aged; Aged
ABSTRACT
Automated detection of specific cells in three-dimensional datasets such as whole-brain light-sheet image stacks is challenging. Here, we present DELiVR, a virtual reality-trained deep-learning pipeline for detecting c-Fos+ cells as markers for neuronal activity in cleared mouse brains. Virtual reality annotation substantially accelerated training data generation, enabling DELiVR to outperform state-of-the-art cell-segmenting approaches. Our pipeline is available in a user-friendly Docker container that runs with a standalone Fiji plugin. DELiVR features a comprehensive toolkit for data visualization and can be customized to other cell types of interest, as we did here for microglia somata, using Fiji for dataset-specific training. We applied DELiVR to investigate cancer-related brain activity, unveiling an activation pattern that distinguishes weight-stable cancer from cancers associated with weight loss. Overall, DELiVR is a robust deep-learning tool that does not require advanced coding skills to analyze whole-brain imaging data in health and disease.
Subject(s)
Brain; Deep Learning; Virtual Reality; Animals; Brain/diagnostic imaging; Mice; Neurons; Software; Image Processing, Computer-Assisted/methods; Proto-Oncogene Proteins c-fos/metabolism; Humans
ABSTRACT
Biophysical modeling, particularly involving partial differential equations (PDEs), offers significant potential for tailoring disease treatment protocols to individual patients. However, the inverse problem-solving aspect of these models presents a substantial challenge, either due to the high computational requirements of model-based approaches or the limited robustness of deep learning (DL) methods. We propose a novel framework that leverages the unique strengths of both approaches in a synergistic manner. Our method incorporates a DL ensemble for initial parameter estimation, facilitating efficient downstream evolutionary sampling initialized with this DL-based prior. We showcase the effectiveness of integrating a rapid deep-learning algorithm with a high-precision evolution strategy in estimating brain tumor cell concentrations from magnetic resonance images. The DL prior plays a pivotal role, significantly constraining the effective sampling-parameter space. This reduction results in a fivefold convergence acceleration and a Dice score of 95%.
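The two-stage idea — a deep-learning estimate used as a prior that initializes and constrains evolutionary sampling — can be illustrated with a toy objective. The elitist strategy, the quadratic objective, and all constants below are stand-ins for the paper's tumor-model inverse problem, not its actual implementation:

```python
import numpy as np

# Toy sketch of the hybrid strategy: an evolutionary sampler whose search
# distribution starts at a deep-learning parameter estimate (the "DL prior"),
# which shrinks the effective sampling space and speeds convergence.

rng = np.random.default_rng(0)
true_params = np.array([1.2, 0.4])  # unknown model parameters to recover
objective = lambda x: float(np.sum((x - true_params) ** 2))  # stand-in misfit

best_x = np.array([1.0, 0.5])  # DL-prior estimate, already close to the truth
best_f = objective(best_x)
sigma = 0.2                    # small initial step size, justified by the prior

for _ in range(40):            # simple elitist evolution strategy
    candidates = best_x + sigma * rng.standard_normal((8, 2))
    for c in candidates:
        f = objective(c)
        if f < best_f:         # keep the best sample seen so far
            best_x, best_f = c, f
    sigma *= 0.9               # anneal the sampling radius
```

Starting the sampler from a cold, wide prior instead would require far more objective evaluations — which is the acceleration the abstract attributes to the DL prior.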
ABSTRACT
Background: The diffuse growth pattern of glioblastoma is one of the main challenges for accurate treatment. Computational tumor growth modeling has emerged as a promising tool to guide personalized therapy. Here, we performed clinical and biological validation of a novel growth model, aiming to close the gap between the experimental state and clinical implementation. Methods: One hundred and twenty-four patients from The Cancer Genome Atlas (TCGA) and 397 patients from the UCSF Glioma Dataset were assessed for significant correlations between clinical data, genetic pathway activation maps (generated with PARADIGM; TCGA only), and infiltration (Dw) as well as proliferation (ρ) parameters stemming from a Fisher-Kolmogorov growth model. To further evaluate clinical potential, we performed the same growth modeling on preoperative magnetic resonance imaging data from 30 patients of our institution and compared model-derived tumor volume and recurrence coverage with standard radiotherapy plans. Results: The parameter ratio Dw/ρ (P < .05 in TCGA) as well as the simulated tumor volume (P < .05 in TCGA/UCSF) were significantly inversely correlated with overall survival. Interestingly, we found a significant correlation between 11 proliferation pathways and the estimated proliferation parameter. Depending on the cutoff value for tumor cell density, we observed a significant improvement in recurrence coverage without significantly increased radiation volume utilizing model-derived target volumes instead of standard radiation plans. Conclusions: Identifying a significant correlation between computed growth parameters and clinical and biological data, we highlight the potential of tumor growth modeling for individualized therapy of glioblastoma. This might improve the accuracy of radiation planning in the near future.
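The Fisher-Kolmogorov growth model named above is, in its generic reaction-diffusion form (a sketch of the standard equation; the study's exact variant may differ):

```latex
\frac{\partial c}{\partial t} = \nabla \cdot \left( D_w \, \nabla c \right) + \rho \, c \left( 1 - c \right)
```

where c(x, t) is the normalized tumor cell density, D_w the infiltration (diffusion) parameter, and ρ the proliferation rate. The ratio D_w/ρ correlated with survival above is commonly read as contrasting invasiveness against proliferation.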
ABSTRACT
Validation metrics are key for tracking scientific progress and bridging the current chasm between artificial intelligence research and its translation into practice. However, increasing evidence shows that, particularly in image analysis, metrics are often chosen inadequately. Although taking into account the individual strengths, weaknesses and limitations of validation metrics is a critical prerequisite to making educated choices, the relevant knowledge is currently scattered and poorly accessible to individual researchers. Based on a multistage Delphi process conducted by a multidisciplinary expert consortium as well as extensive community feedback, the present work provides a reliable and comprehensive common point of access to information on pitfalls related to validation metrics in image analysis. Although focused on biomedical image analysis, the addressed pitfalls generalize across application domains and are categorized according to a newly created, domain-agnostic taxonomy. The work serves to enhance global comprehension of a key topic in image analysis validation.
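One classic pitfall of the kind catalogued above can be shown numerically: for a small structure, plain voxel accuracy looks excellent even when the segmentation misses the target entirely, while an overlap metric such as Dice exposes the failure. The 100-voxel image with a 5-voxel lesion is a made-up example:

```python
import numpy as np

# Accuracy vs. Dice on an imbalanced segmentation task: predicting
# "no lesion anywhere" still scores 95% voxel accuracy here.

gt = np.zeros(100, dtype=bool)
gt[:5] = True                      # small lesion: 5% of the image
pred = np.zeros(100, dtype=bool)   # model predicts background everywhere

accuracy = float(np.mean(pred == gt))             # high despite total failure
overlap = 2 * np.logical_and(pred, gt).sum()
dice = overlap / max(pred.sum() + gt.sum(), 1)    # zero: no lesion found
```

This is exactly the class-imbalance pitfall that motivates choosing overlap- or detection-based metrics for segmentation validation.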
Subject(s)
Artificial Intelligence
ABSTRACT
Statistical shape models are an essential tool for various tasks in medical image analysis, including shape generation, reconstruction and classification. Shape models are learned from a population of example shapes, which are typically obtained through segmentation of volumetric medical images. In clinical practice, highly anisotropic volumetric scans with large slice distances are prevalent, e.g., to reduce radiation exposure in CT or image acquisition time in MR imaging. For existing shape modeling approaches, the resolution of the emerging model is limited to the resolution of the training shapes. Therefore, any missing information between slices prohibits existing methods from learning a high-resolution shape prior. We propose a novel shape modeling approach that can be trained on sparse, binary segmentation masks with large slice distances. This is achieved through employing continuous shape representations based on neural implicit functions. After training, our model can reconstruct shapes from various sparse inputs at high target resolutions beyond the resolution of individual training examples. We successfully reconstruct high-resolution shapes from as few as three orthogonal slices. Furthermore, our shape model allows us to embed various sparse segmentation masks into a common, low-dimensional latent space, independent of the acquisition direction, resolution, spacing, and field of view. We show that the emerging latent representation discriminates between healthy and pathological shapes, even when provided with sparse segmentation masks. Lastly, we qualitatively demonstrate that the emerging latent space is smooth and captures characteristic modes of shape variation. We evaluate our shape model on two anatomical structures: the lumbar vertebra and the distal femur, both from publicly available datasets.
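The core idea behind continuous (implicit) shape representations can be sketched in a few lines: occupancy is a function defined at every point in space, so the same model can be sampled at any target resolution. An analytic sphere stands in here for a trained neural implicit function:

```python
import numpy as np

# Resolution-independent implicit shape: occupancy(x) is defined everywhere,
# so the shape can be voxelized at any grid size after "training". The
# analytic sphere is a stand-in for a learned neural implicit function.

def occupancy(points, center=(0.0, 0.0, 0.0), radius=0.5):
    """1.0 inside the shape, 0.0 outside -- valid for arbitrary coordinates."""
    d = np.linalg.norm(points - np.asarray(center), axis=-1)
    return (d <= radius).astype(float)

def voxelize(n):
    """Sample the implicit shape on an n^3 grid covering [-1, 1]^3."""
    g = np.linspace(-1.0, 1.0, n)
    pts = np.stack(np.meshgrid(g, g, g, indexing="ij"), axis=-1)
    return occupancy(pts)

low = voxelize(8)    # coarse reconstruction
high = voxelize(64)  # fine reconstruction from the very same representation
```

Because the representation itself is continuous, nothing ties the output resolution to the (possibly sparse, anisotropic) resolution of the training masks — which is the property the approach above exploits.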
Subject(s)
Algorithms; Models, Statistical; Humans; Magnetic Resonance Imaging; Image Processing, Computer-Assisted/methods
ABSTRACT
Optical coherence tomography angiography (OCTA) is a non-invasive imaging modality that can acquire high-resolution volumes of the retinal vasculature and aid the diagnosis of ocular, neurological and cardiac diseases. Segmenting the visible blood vessels is a common first step when extracting quantitative biomarkers from these images. Classical segmentation algorithms based on thresholding are strongly affected by image artifacts and limited signal-to-noise ratio. The use of modern, deep learning-based segmentation methods has been inhibited by a lack of large datasets with detailed annotations of the blood vessels. To address this issue, recent work has employed transfer learning, where a segmentation network is trained on synthetic OCTA images and is then applied to real data. However, the previously proposed simulations fail to faithfully model the retinal vasculature and do not provide effective domain adaptation. Because of this, current methods are unable to fully segment the retinal vasculature, in particular the smallest capillaries. In this work, we present a lightweight simulation of the retinal vascular network based on space colonization for faster and more realistic OCTA synthesis. We then introduce three contrast adaptation pipelines to decrease the domain gap between real and artificial images. We demonstrate the superior segmentation performance of our approach in extensive quantitative and qualitative experiments on three public datasets that compare our method to traditional computer vision algorithms and supervised training using human annotations. Finally, we make our entire pipeline publicly available, including the source code, pretrained models, and a large dataset of synthetic OCTA images.
Subject(s)
Angiography; Retinal Vessels; Tomography, Optical Coherence; Angiography/methods; Retinal Vessels/diagnostic imaging; Humans; Image Processing, Computer-Assisted; Deep Learning; Machine Learning
ABSTRACT
Ultra-wideband raster-scan optoacoustic mesoscopy (RSOM) is a novel modality that has demonstrated unprecedented ability to visualize epidermal and dermal structures in-vivo. However, an automatic and quantitative analysis of three-dimensional RSOM datasets remains unexplored. In this work, we present the Deep Learning RSOM Analysis Pipeline (DeepRAP), a framework to analyze and quantify morphological skin features recorded by RSOM and extract imaging biomarkers for disease characterization. DeepRAP uses a multi-network segmentation strategy based on convolutional neural networks with transfer learning. This strategy enabled the automatic recognition of skin layers and subsequent segmentation of dermal microvasculature with an accuracy equivalent to human assessment. DeepRAP was validated against manual segmentation on 25 psoriasis patients under treatment, and our biomarker extraction was shown to characterize disease severity and progression well, with a strong correlation to physician evaluation and histology. In a unique validation experiment, we applied DeepRAP to a time series of occlusion-induced hyperemia from 10 healthy volunteers. We observe how the biomarkers decrease and recover during the occlusion and release process, demonstrating the accurate performance and reproducibility of DeepRAP. Furthermore, we analyzed a cohort of 75 volunteers and defined a relationship between aging and microvascular features in-vivo. More precisely, this study revealed that fine microvascular features in the dermal layer have the strongest correlation to age. The ability of our newly developed framework to enable the rapid study of human skin morphology and microvasculature in-vivo promises to replace biopsy studies, increasing the translational potential of RSOM.
Subject(s)
Biomarkers; Photoacoustic Techniques; Psoriasis; Skin; Humans; Psoriasis/diagnostic imaging; Photoacoustic Techniques/methods; Skin/diagnostic imaging; Skin/blood supply; Deep Learning; Machine Learning; Adult; Skin Aging/physiology; Female; Middle Aged; Male
ABSTRACT
Deep learning models for medical image segmentation can fail unexpectedly and spectacularly for pathological cases and images acquired at different centers than training images, with labeling errors that violate expert knowledge. Such errors undermine the trustworthiness of deep learning models for medical image segmentation. Mechanisms for detecting and correcting such failures are essential for safely translating this technology into clinics and are likely to be a requirement of future regulations on artificial intelligence (AI). In this work, we propose a trustworthy AI theoretical framework and a practical system that can augment any backbone AI system using a fallback method and a fail-safe mechanism based on Dempster-Shafer theory. Our approach relies on an actionable definition of trustworthy AI. Our method automatically discards voxel-level labels predicted by the backbone AI that violate expert knowledge and relies on a fallback for those voxels. We demonstrate the effectiveness of the proposed trustworthy AI approach on the largest reported annotated dataset of fetal MRI, consisting of 540 manually annotated fetal brain 3D T2w MRIs from 13 centers. Our trustworthy AI method improves the robustness of four backbone AI models for fetal brain MRIs acquired across various centers and for fetuses with various brain abnormalities.
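The discard-and-fallback idea described above can be sketched at its simplest: voxel labels from a backbone model that violate an expert-knowledge constraint are replaced by a conservative fallback labeling. The "tissue must lie inside the head mask" rule and all toy arrays are illustrative stand-ins for the paper's Dempster-Shafer formulation:

```python
import numpy as np

# Fail-safe sketch: wherever the backbone labels tissue outside an
# expert-knowledge head mask, discard its label and use the fallback.
# The constraint and arrays are toy stand-ins, not the published method.

def failsafe(backbone_labels, fallback_labels, head_mask):
    """Replace backbone labels wherever tissue is predicted outside the head."""
    violates = (backbone_labels > 0) & ~head_mask
    out = backbone_labels.copy()
    out[violates] = fallback_labels[violates]
    return out

head = np.array([[0, 1, 1],
                 [0, 1, 1]], dtype=bool)
backbone = np.array([[2, 2, 1],          # labels tissue outside the head at (0, 0)
                     [0, 1, 1]])
fallback = np.zeros((2, 3), dtype=int)   # conservative fallback: background
fixed = failsafe(backbone, fallback, head)
```

The design point is that the backbone is left untouched wherever it satisfies the constraint, so the fail-safe can wrap any backbone model.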
Subject(s)
Algorithms; Artificial Intelligence; Magnetic Resonance Imaging; Fetus/diagnostic imaging; Brain/diagnostic imaging
ABSTRACT
Predicting the infiltration of Glioblastoma (GBM) from medical MRI scans is crucial for understanding tumor growth dynamics and designing personalized radiotherapy treatment plans. Mathematical models of GBM growth can complement the data in the prediction of spatial distributions of tumor cells. However, this requires estimating patient-specific parameters of the model from clinical data, which is a challenging inverse problem due to limited temporal data and the limited time between imaging and diagnosis. This work proposes a method that uses Physics-Informed Neural Networks (PINNs) to estimate patient-specific parameters of a reaction-diffusion PDE model of GBM growth from a single 3D structural MRI snapshot. PINNs embed both the data and the PDE into a loss function, thus integrating theory and data. Key innovations include the identification and estimation of characteristic non-dimensional parameters, a pre-training step that utilizes the non-dimensional parameters, and a fine-tuning step to determine the patient-specific parameters. Additionally, the diffuse domain method is employed to handle the complex brain geometry within the PINN framework. Our method is validated both on synthetic and patient datasets, and shows promise for real-time parametric inference in the clinical setting for personalized GBM treatment.
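In generic PINN notation, the step of embedding "both the data and the PDE into a loss function" combines a data-misfit term with the residual of the reaction-diffusion model; the weighting λ and the collocation counts N_d, N_r are standard PINN bookkeeping, not values from the paper:

```latex
\mathcal{L}(\theta) =
\underbrace{\frac{1}{N_d} \sum_{i=1}^{N_d} \bigl( u_\theta(x_i, t_i) - u_i \bigr)^2}_{\text{data misfit}}
+ \lambda \,
\underbrace{\frac{1}{N_r} \sum_{j=1}^{N_r} \Bigl( \partial_t u_\theta - \nabla \cdot \bigl( D \, \nabla u_\theta \bigr) - \rho \, u_\theta \bigl( 1 - u_\theta \bigr) \Bigr)^2 \Big|_{(x_j, t_j)}}_{\text{PDE residual}}
```

where u_θ is the network's tumor-cell-density prediction. Minimizing L(θ) fits the single MRI snapshot while simultaneously enforcing the growth model, which is what allows parameter estimation despite the lack of temporal data.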
ABSTRACT
Pediatric tumors of the central nervous system are the most common cause of cancer-related death in children. The five-year survival rate for high-grade gliomas in children is less than 20%. Due to their rarity, the diagnosis of these entities is often delayed, their treatment is mainly based on historic treatment concepts, and clinical trials require multi-institutional collaborations. The MICCAI Brain Tumor Segmentation (BraTS) Challenge is a landmark community benchmark event with a successful history of 12 years of resource creation for the segmentation and analysis of adult glioma. Here we present the CBTN-CONNECT-DIPGR-ASNR-MICCAI BraTS-PEDs 2023 challenge, which represents the first BraTS challenge focused on pediatric brain tumors with data acquired across multiple international consortia dedicated to pediatric neuro-oncology and clinical trials. The BraTS-PEDs 2023 challenge focuses on benchmarking the development of volumetric segmentation algorithms for pediatric brain glioma through standardized quantitative performance evaluation metrics utilized across the BraTS 2023 cluster of challenges. Models gaining knowledge from the BraTS-PEDs multi-parametric structural MRI (mpMRI) training data will be evaluated on separate validation and unseen test mpMRI data of high-grade pediatric glioma. The CBTN-CONNECT-DIPGR-ASNR-MICCAI BraTS-PEDs 2023 challenge brings together clinicians and AI/imaging scientists to accelerate the development of automated segmentation techniques that could benefit clinical trials, and ultimately the care of children with brain tumors.
ABSTRACT
Imaging markers of cerebral small vessel disease provide valuable information on brain health, but their manual assessment is time-consuming and hampered by substantial intra- and interrater variability. Automated rating may benefit biomedical research, as well as clinical assessment, but diagnostic reliability of existing algorithms is unknown. Here, we present the results of the VAscular Lesions DetectiOn and Segmentation (Where is VALDO?) challenge that was run as a satellite event at the international conference on Medical Image Computing and Computer Aided Intervention (MICCAI) 2021. This challenge aimed to promote the development of methods for automated detection and segmentation of small and sparse imaging markers of cerebral small vessel disease, namely enlarged perivascular spaces (EPVS) (Task 1), cerebral microbleeds (Task 2) and lacunes of presumed vascular origin (Task 3) while leveraging weak and noisy labels. Overall, 12 teams participated in the challenge proposing solutions for one or more tasks (4 for Task 1-EPVS, 9 for Task 2-Microbleeds and 6 for Task 3-Lacunes). Multi-cohort data was used in both training and evaluation. Results showed a large variability in performance both across teams and across tasks, with promising results notably for Task 1-EPVS and Task 2-Microbleeds and not practically useful results yet for Task 3-Lacunes. It also highlighted the performance inconsistency across cases that may deter use at an individual level, while still proving useful at a population level.
Subject(s)
Cerebral Small Vessel Diseases; Magnetic Resonance Imaging; Humans; Magnetic Resonance Imaging/methods; Reproducibility of Results; Cerebral Small Vessel Diseases/diagnostic imaging; Cerebral Hemorrhage; Computers
ABSTRACT
Convolutional neural networks (CNNs) have shown promising performance in various 2D computer vision tasks due to the availability of large amounts of 2D training data. In contrast, medical imaging deals with 3D data and usually lacks data of equivalent extent and diversity for developing AI models. Transfer learning provides the means to use models trained for one application as a starting point for another. In this work, we leverage 2D pre-trained models as a starting point in 3D medical applications by exploring the concept of Axial-Coronal-Sagittal (ACS) convolutions. We have incorporated ACS as an alternative to native 3D convolutions in the Generally Nuanced Deep Learning Framework (GaNDLF), providing various well-established and state-of-the-art network architectures with the availability of pre-trained encoders from 2D data. Results of our experimental evaluation on 3D MRI data of brain tumor patients for i) tumor segmentation and ii) radiogenomic classification show a model size reduction of ~22% and an improvement in validation accuracy of ~33%. Our findings support the advantage of ACS convolutions in pre-trained 2D CNNs over 3D CNNs without pre-training for 3D segmentation and classification tasks, democratizing existing models trained on datasets of unprecedented size and showing promise in the field of healthcare.
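A back-of-the-envelope sketch shows why ACS convolutions reduce model size: a native 3D layer needs k³ weights per in/out channel pair, while ACS reuses k² 2D kernels, splitting the output channels across the axial, coronal, and sagittal views. The channel counts below are illustrative; the ~22% overall reduction quoted above is smaller than the per-layer factor because not all model parameters live in convolutional kernels:

```python
# Weight-count comparison for a single layer: native 3D convolution vs. an
# ACS convolution, which applies 2D (k x k) kernels and divides the output
# channels among the three anatomical planes. Channel counts are illustrative.

def conv3d_params(c_in, c_out, k=3):
    return c_in * c_out * k ** 3

def acs_params(c_in, c_out, k=3):
    # each ACS filter is k x k instead of k x k x k
    return c_in * c_out * k ** 2

p3d = conv3d_params(64, 64)   # 110,592 weights
pacs = acs_params(64, 64)     # 36,864 weights: a 3x per-layer reduction
```

The same 2D kernel shape is also what makes loading pre-trained 2D encoder weights possible, which is the transfer-learning benefit the abstract emphasizes.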