Results 1 - 20 of 36
1.
Cell ; 185(26): 5040-5058.e19, 2022 Dec 22.
Article in English | MEDLINE | ID: mdl-36563667

ABSTRACT

Spatial molecular profiling of complex tissues is essential to investigate cellular function in physiological and pathological states. However, methods for molecular analysis of large biological specimens imaged in 3D are lacking. Here, we present DISCO-MS, a technology that combines whole-organ/whole-organism clearing and imaging, deep-learning-based image analysis, robotic tissue extraction, and ultra-high-sensitivity mass spectrometry. DISCO-MS yielded proteome data indistinguishable from uncleared samples in both rodent and human tissues. We used DISCO-MS to investigate microglia activation along axonal tracts after brain injury and characterized early- and late-stage individual amyloid-beta plaques in a mouse model of Alzheimer's disease. DISCO-bot robotic sample extraction enabled us to study the regional heterogeneity of immune cells in intact mouse bodies and aortic plaques in a complete human heart. DISCO-MS enables unbiased proteome analysis of preclinical and clinical tissues after unbiased imaging of entire specimens in 3D, identifying diagnostic and therapeutic opportunities for complex diseases. VIDEO ABSTRACT.


Subjects
Alzheimer Disease , Proteome , Mice , Humans , Animals , Proteome/analysis , Proteomics/methods , Alzheimer Disease/pathology , Amyloid beta-Peptides , Mass Spectrometry , Amyloid Plaque
2.
Nat Methods ; 21(7): 1306-1315, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38649742

ABSTRACT

Automated detection of specific cells in three-dimensional datasets such as whole-brain light-sheet image stacks is challenging. Here, we present DELiVR, a virtual reality-trained deep-learning pipeline for detecting c-Fos+ cells as markers for neuronal activity in cleared mouse brains. Virtual reality annotation substantially accelerated training data generation, enabling DELiVR to outperform state-of-the-art cell-segmenting approaches. Our pipeline is available in a user-friendly Docker container that runs with a standalone Fiji plugin. DELiVR features a comprehensive toolkit for data visualization and can be customized to other cell types of interest, as we did here for microglia somata, using Fiji for dataset-specific training. We applied DELiVR to investigate cancer-related brain activity, unveiling an activation pattern that distinguishes weight-stable cancer from cancers associated with weight loss. Overall, DELiVR is a robust deep-learning tool that does not require advanced coding skills to analyze whole-brain imaging data in health and disease.


Subjects
Brain , Deep Learning , Virtual Reality , Animals , Brain/diagnostic imaging , Mice , Neurons , Software , Computer-Assisted Image Processing/methods , Proto-Oncogene Proteins c-fos/metabolism , Humans
3.
Nat Methods ; 21(2): 182-194, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38347140

ABSTRACT

Validation metrics are key for tracking scientific progress and bridging the current chasm between artificial intelligence research and its translation into practice. However, increasing evidence shows that, particularly in image analysis, metrics are often chosen inadequately. Although taking into account the individual strengths, weaknesses and limitations of validation metrics is a critical prerequisite to making educated choices, the relevant knowledge is currently scattered and poorly accessible to individual researchers. Based on a multistage Delphi process conducted by a multidisciplinary expert consortium as well as extensive community feedback, the present work provides a reliable and comprehensive common point of access to information on pitfalls related to validation metrics in image analysis. Although focused on biomedical image analysis, the addressed pitfalls generalize across application domains and are categorized according to a newly created, domain-agnostic taxonomy. The work serves to enhance global comprehension of a key topic in image analysis validation.
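
As a minimal illustration of such a pitfall (a sketch for orientation, not an example from the paper): overlap metrics such as the Dice coefficient react sharply to single-pixel differences when the target structure is small, which is one reason metric choice must be matched to the properties of the target structure.

    import numpy as np

    def dice(gt, pred):
        # Dice coefficient for two binary masks.
        inter = np.logical_and(gt, pred).sum()
        return 2.0 * inter / (gt.sum() + pred.sum())

    gt = np.zeros((100, 100), dtype=bool)
    gt[50:52, 50:52] = True               # tiny 2 x 2 ground-truth structure
    pred = np.roll(gt, shift=1, axis=0)   # prediction shifted by a single pixel

    print(dice(gt, gt))    # 1.0
    print(dice(gt, pred))  # 0.5 -- one pixel of shift halves the score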


Subjects
Artificial Intelligence
4.
Nat Methods ; 21(2): 195-212, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38347141

ABSTRACT

Increasing evidence shows that flaws in machine learning (ML) algorithm validation are an underestimated global problem. In biomedical image analysis, chosen performance metrics often do not reflect the domain interest, and thus fail to adequately measure scientific progress and hinder translation of ML techniques into practice. To overcome this, we created Metrics Reloaded, a comprehensive framework guiding researchers in the problem-aware selection of metrics. Developed by a large international consortium in a multistage Delphi process, it is based on the novel concept of a problem fingerprint-a structured representation of the given problem that captures all aspects that are relevant for metric selection, from the domain interest to the properties of the target structure(s), dataset and algorithm output. On the basis of the problem fingerprint, users are guided through the process of choosing and applying appropriate validation metrics while being made aware of potential pitfalls. Metrics Reloaded targets image analysis problems that can be interpreted as classification tasks at image, object or pixel level, namely image-level classification, object detection, semantic segmentation and instance segmentation tasks. To improve the user experience, we implemented the framework in the Metrics Reloaded online tool. Following the convergence of ML methodology across application domains, Metrics Reloaded fosters the convergence of validation methodology. Its applicability is demonstrated for various biomedical use cases.
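
A small, hedged sketch of why problem-aware metric selection matters at the image level (illustrative data, not from the paper): under heavy class imbalance, plain accuracy rewards a model that never predicts the rare class, whereas prevalence-aware metrics expose it.

    import numpy as np
    from sklearn.metrics import accuracy_score, balanced_accuracy_score, f1_score

    # Imbalanced image-level classification: 95 negatives, 5 positives.
    y_true = np.array([0] * 95 + [1] * 5)
    y_pred = np.zeros(100, dtype=int)   # a model that always predicts "negative"

    print(accuracy_score(y_true, y_pred))                              # 0.95 -- looks strong
    print(balanced_accuracy_score(y_true, y_pred))                     # 0.50 -- chance level
    print(f1_score(y_true, y_pred, average="macro", zero_division=0))  # ~0.49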


Subjects
Algorithms , Computer-Assisted Image Processing , Machine Learning , Semantics
5.
Eur Radiol ; 33(8): 5882-5893, 2023 Aug.
Article in English | MEDLINE | ID: mdl-36928566

ABSTRACT

OBJECTIVES: T2-weighted (w) fat sat (fs) sequences, which are important in spine MRI, require a significant amount of scan time. Generative adversarial networks (GANs) can generate synthetic T2-w fs images. We evaluated the potential of synthetic T2-w fs images by comparing them to their true counterpart regarding image and fat saturation quality, and diagnostic agreement in a heterogeneous, multicenter dataset. METHODS: A GAN was used to synthesize T2-w fs from T1- and non-fs T2-w. The training dataset comprised scans of 73 patients from two scanners, and the test dataset, scans of 101 patients from 38 multicenter scanners. Apparent signal- and contrast-to-noise ratios (aSNR/aCNR) were measured in true and synthetic T2-w fs. Two neuroradiologists graded image (5-point scale) and fat saturation quality (3-point scale). To evaluate whether the T2-w fs images are indistinguishable, a Turing test was performed by eleven neuroradiologists. Six pathologies were graded on the synthetic protocol (with synthetic T2-w fs) and the original protocol (with true T2-w fs) by the two neuroradiologists. RESULTS: aSNR and aCNR were not significantly different between the synthetic and true T2-w fs images. Subjective image quality was graded higher for synthetic T2-w fs (p = 0.023). In the Turing test, synthetic and true T2-w fs could not be distinguished from each other. The intermethod agreement between synthetic and original protocol ranged from substantial to almost perfect agreement for the evaluated pathologies. DISCUSSION: The synthetic T2-w fs might replace a physical T2-w fs. Our approach validated on a challenging, multicenter dataset is highly generalizable and allows for shorter scan protocols. KEY POINTS: • Generative adversarial networks can be used to generate synthetic T2-weighted fat sat images from T1- and non-fat sat T2-weighted images of the spine. • The synthetic T2-weighted fat sat images might replace a physically acquired T2-weighted fat sat showing a better image quality and excellent diagnostic agreement with the true T2-weighted fat sat images. • The present approach validated on a challenging, multicenter dataset is highly generalizable and allows for significantly shorter scan protocols.
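
The abstract does not spell out how the apparent ratios were computed; a common ROI-based definition, shown here only as an assumption, divides mean signal by the standard deviation of a noise region:

    import numpy as np

    def apparent_snr(signal_roi, noise_roi):
        # aSNR: mean intensity of a tissue ROI divided by the standard
        # deviation of a background/noise ROI (assumed definition).
        return signal_roi.mean() / noise_roi.std()

    def apparent_cnr(roi_a, roi_b, noise_roi):
        # aCNR: absolute difference of two tissue ROI means, normalized
        # by the noise standard deviation (assumed definition).
        return abs(roi_a.mean() - roi_b.mean()) / noise_roi.std()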


Subjects
Magnetic Resonance Imaging , Spine , Humans , Spine/diagnostic imaging , Magnetic Resonance Imaging/methods , Radionuclide Imaging
6.
Neuroradiology ; 63(11): 1831-1851, 2021 Nov.
Article in English | MEDLINE | ID: mdl-33835238

ABSTRACT

PURPOSE: Advanced MRI-based biomarkers offer comprehensive and quantitative information for the evaluation and characterization of brain tumors. In this study, we report initial clinical experience in routine glioma imaging with a novel, fully 3D multiparametric quantitative transient-state imaging (QTI) method for tissue characterization based on T1 and T2 values. METHODS: To demonstrate the viability of the proposed 3D QTI technique, nine glioma patients (grade II-IV), with a variety of disease states and treatment histories, were included in this study. First, we investigated the feasibility of 3D QTI (6:25 min scan time) for its use in clinical routine imaging, focusing on image reconstruction, parameter estimation, and contrast-weighted image synthesis. Second, for an initial assessment of 3D QTI-based quantitative MR biomarkers, we performed a ROI-based analysis to characterize T1 and T2 components in tumor and peritumoral tissue. RESULTS: The 3D acquisition combined with a compressed sensing reconstruction and neural network-based parameter inference produced parametric maps with high isotropic resolution (1.125 × 1.125 × 1.125 mm3 voxel size) and whole-brain coverage (22.5 × 22.5 × 22.5 cm3 FOV), enabling the synthesis of clinically relevant T1-weighted, T2-weighted, and FLAIR contrasts without any extra scan time. Our study revealed increased T1 and T2 values in tumor and peritumoral regions compared to contralateral white matter, good agreement with healthy volunteer data, and high inter-subject consistency. CONCLUSION: 3D QTI demonstrated comprehensive tissue assessment of tumor substructures captured in T1 and T2 parameters. Aiming for fast acquisition of quantitative MR biomarkers, 3D QTI has potential to improve disease characterization in brain tumor patients under tight clinical time-constraints.
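
The abstract does not give the synthesis model; a standard spin-echo signal equation of the kind typically used to generate contrast-weighted images from quantitative T1, T2 and proton-density (PD) maps is

    S(\mathrm{TR}, \mathrm{TE}) \propto \mathrm{PD}\,\bigl(1 - e^{-\mathrm{TR}/T_1}\bigr)\, e^{-\mathrm{TE}/T_2}

with FLAIR-like contrasts additionally weighted by an inversion-recovery term such as (1 - 2 e^{-TI/T_1} + e^{-TR/T_1}); choosing TR, TE and TI then yields synthetic T1-weighted, T2-weighted and FLAIR images from the same parametric maps without extra scan time.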


Subjects
Glioma , Protons , Brain , Feasibility Studies , Glioma/diagnostic imaging , Humans , Three-Dimensional Imaging , Magnetic Resonance Imaging
7.
J Clin Exp Dent ; 16(5): e547-e555, 2024 May.
Article in English | MEDLINE | ID: mdl-38988762

ABSTRACT

Background: Artificial Intelligence (AI) has increasingly been integrated into dental practices, notably in radiographic imaging like Orthopantomograms (OPGs), transforming diagnostic protocols. Eye tracking technology offers a method to understand how dentists' visual attention may differ between conventional and AI-assisted diagnostics, but its integration into daily clinical practice is challenged by the cost and complexity of traditional systems. Material and Methods: Thirty experienced practitioners and dental students participated to evaluate the effectiveness of two low-budget eye-tracking systems, including the Peye Tracker (Eye Tracking Systems LTD, Southsea, UK) and Webgazer.js (Brown University, Providence, Rhode Island) in a clinical setting to assess their utility in capturing dentists' visual engagement with OPGs. The hardware and software setup, environmental conditions, and the process for eye-tracking data collection and analysis are illustrated. Results: The study found significant differences in eye-tracking accuracy between the two systems, with Webgazer.js showing higher accuracy compared to Peye Tracker (p<0.001). Additionally, the influence of visual aids (glasses vs. contact lenses) on the performance of eye-tracking systems revealed significant differences for both Peye Tracker (p<0.05) and Webgazer.js (p<0.05). Conclusions: Low-budget eye-tracking devices present challenges in achieving the desired accuracy for analyzing dentists' visual attention in clinical practice, highlighting the need for continued innovation and improvement in this technology. Key words:Artificial intelligence, Eye-tracking device, low-budget, dentistry.

8.
Nat Biotechnol ; 42(4): 617-627, 2024 Apr.
Article in English | MEDLINE | ID: mdl-37430076

ABSTRACT

Whole-body imaging techniques play a vital role in exploring the interplay of physiological systems in maintaining health and driving disease. We introduce wildDISCO, a new approach for whole-body immunolabeling, optical clearing and imaging in mice, circumventing the need for transgenic reporter animals or nanobody labeling and so overcoming existing technical limitations. We identified heptakis(2,6-di-O-methyl)-β-cyclodextrin as a potent enhancer of cholesterol extraction and membrane permeabilization, enabling deep, homogeneous penetration of standard antibodies without aggregation. WildDISCO facilitates imaging of peripheral nervous systems, lymphatic vessels and immune cells in whole mice at cellular resolution by labeling diverse endogenous proteins. Additionally, we examined rare proliferating cells and the effects of biological perturbations, as demonstrated in germ-free mice. We applied wildDISCO to map tertiary lymphoid structures in the context of breast cancer, considering both primary tumor and metastases throughout the mouse body. An atlas of high-resolution images showcasing mouse nervous, lymphatic and vascular systems is accessible at http://discotechnologies.org/wildDISCO/atlas/index.php .


Subjects
Three-Dimensional Imaging , Immunoglobulin G , Mice , Animals
9.
ArXiv ; 2024 Mar 07.
Article in English | MEDLINE | ID: mdl-38495563

ABSTRACT

Biophysical modeling, particularly involving partial differential equations (PDEs), offers significant potential for tailoring disease treatment protocols to individual patients. However, the inverse problem-solving aspect of these models presents a substantial challenge, either due to the high computational requirements of model-based approaches or the limited robustness of deep learning (DL) methods. We propose a novel framework that leverages the unique strengths of both approaches in a synergistic manner. Our method incorporates a DL ensemble for initial parameter estimation, facilitating efficient downstream evolutionary sampling initialized with this DL-based prior. We showcase the effectiveness of integrating a rapid deep-learning algorithm with a high-precision evolution strategy in estimating brain tumor cell concentrations from magnetic resonance images. The DL prior plays a pivotal role, significantly constraining the effective sampling-parameter space. This reduction results in a fivefold convergence acceleration and a Dice score of 95%.
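
A hedged sketch of the general pattern described (a DL ensemble supplies the starting point, an evolution strategy refines it); the objective, parameter values and the use of the cma package are stand-ins, not the paper's actual implementation:

    import numpy as np
    import cma  # pip install cma

    true_params = np.array([0.2, 1.5, 0.4])   # hypothetical growth parameters
    dl_prior = np.array([0.25, 1.3, 0.5])     # initial estimate from the DL ensemble (illustrative)

    def fit_mismatch(params):
        # Placeholder objective: in the real setting this would measure the
        # discrepancy between the PDE tumor simulation for `params` and the MRI data.
        return float(np.sum((np.asarray(params) - true_params) ** 2))

    es = cma.CMAEvolutionStrategy(dl_prior, 0.1)  # small step size: the DL prior narrows the search
    while not es.stop():
        candidates = es.ask()
        es.tell(candidates, [fit_mismatch(c) for c in candidates])
    print(es.result.xbest)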

10.
Neurooncol Adv ; 6(1): vdad171, 2024.
Article in English | MEDLINE | ID: mdl-38435962

ABSTRACT

Background: The diffuse growth pattern of glioblastoma is one of the main challenges for accurate treatment. Computational tumor growth modeling has emerged as a promising tool to guide personalized therapy. Here, we performed clinical and biological validation of a novel growth model, aiming to close the gap between the experimental state and clinical implementation. Methods: One hundred and twenty-four patients from The Cancer Genome Atlas (TCGA) and 397 patients from the UCSF Glioma Dataset were assessed for significant correlations between clinical data, genetic pathway activation maps (generated with PARADIGM; TCGA only), and infiltration (Dw) as well as proliferation (ρ) parameters stemming from a Fisher-Kolmogorov growth model. To further evaluate clinical potential, we performed the same growth modeling on preoperative magnetic resonance imaging data from 30 patients of our institution and compared model-derived tumor volume and recurrence coverage with standard radiotherapy plans. Results: The parameter ratio Dw/ρ (P < .05 in TCGA) as well as the simulated tumor volume (P < .05 in TCGA/UCSF) were significantly inversely correlated with overall survival. Interestingly, we found a significant correlation between 11 proliferation pathways and the estimated proliferation parameter. Depending on the cutoff value for tumor cell density, we observed a significant improvement in recurrence coverage without significantly increased radiation volume utilizing model-derived target volumes instead of standard radiation plans. Conclusions: Identifying a significant correlation between computed growth parameters and clinical and biological data, we highlight the potential of tumor growth modeling for individualized therapy of glioblastoma. This might improve the accuracy of radiation planning in the near future.
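
For reference, the Fisher-Kolmogorov reaction-diffusion model named here is commonly written as (standard notation, with c the normalized tumor cell density, D the infiltration/diffusion coefficient and ρ the proliferation rate):

    \frac{\partial c}{\partial t} = \nabla \cdot (D \nabla c) + \rho \, c (1 - c)

The width of the infiltrative front roughly scales with \sqrt{D/\rho}, which is why the ratio Dw/ρ is a natural candidate prognostic parameter.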

11.
medRxiv ; 2024 Mar 11.
Article in English | MEDLINE | ID: mdl-38045345

ABSTRACT

Automated segmentation of brain white matter lesions is crucial for both clinical assessment and scientific research in multiple sclerosis (MS). Over a decade ago, we introduced an engineered lesion segmentation tool, LST. While recent lesion segmentation approaches have leveraged artificial intelligence (AI), they often remain proprietary and difficult to adopt. As an open-source tool, we present LST-AI, an advanced deep learning-based extension of LST that consists of an ensemble of three 3D-UNets. LST-AI explicitly addresses the imbalance between white matter (WM) lesions and non-lesioned WM. It employs a composite loss function incorporating binary cross-entropy and Tversky loss to improve segmentation of the highly heterogeneous MS lesions. We train the network ensemble on 491 MS pairs of T1w and FLAIR images, collected in-house from a 3T MRI scanner, and expert neuroradiologists manually segmented the utilized lesion maps for training. LST-AI additionally includes a lesion location annotation tool, labeling lesion location according to the 2017 McDonald criteria (periventricular, infratentorial, juxtacortical, subcortical). We conduct evaluations on 103 test cases consisting of publicly available data using the Anima segmentation validation tools and compare LST-AI with several publicly available lesion segmentation models. Our empirical analysis shows that LST-AI achieves superior performance compared to existing methods. Its Dice and F1 scores exceeded 0.62, outperforming LST, SAMSEG (Sequence Adaptive Multimodal SEGmentation), and the popular nnUNet framework, which all scored below 0.56. Notably, LST-AI demonstrated exceptional performance on the MSSEG-1 challenge dataset, an international WM lesion segmentation challenge, with a Dice score of 0.65 and an F1 score of 0.63, surpassing all other competing models at the time of the challenge. With increasing lesion volume, the lesion detection rate rapidly increased with a detection rate of >75% for lesions with a volume between 10 mm3 and 100 mm3. Given its higher segmentation performance, we recommend that research groups currently using LST transition to LST-AI. To facilitate broad adoption, we are releasing LST-AI as an open-source model, available as a command-line tool, dockerized container, or Python script, enabling diverse applications across multiple platforms.
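
A hedged PyTorch sketch of a composite binary cross-entropy + Tversky loss of the kind described; the weighting of the two terms and the Tversky alpha/beta values below are placeholders, not the settings used by LST-AI:

    import torch
    import torch.nn.functional as F

    def tversky_loss(probs, target, alpha=0.3, beta=0.7, eps=1e-6):
        # Tversky index: TP / (TP + alpha*FP + beta*FN); beta > alpha penalizes
        # missed lesion voxels more, addressing the lesion/background imbalance.
        tp = (probs * target).sum()
        fp = (probs * (1 - target)).sum()
        fn = ((1 - probs) * target).sum()
        return 1 - (tp + eps) / (tp + alpha * fp + beta * fn + eps)

    def composite_loss(logits, target, bce_weight=1.0, tversky_weight=1.0):
        # target: binary lesion mask as a float tensor with the same shape as logits.
        probs = torch.sigmoid(logits)
        bce = F.binary_cross_entropy_with_logits(logits, target)
        return bce_weight * bce + tversky_weight * tversky_loss(probs, target)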

12.
IEEE Trans Pattern Anal Mach Intell ; 46(5): 3784-3795, 2024 May.
Article in English | MEDLINE | ID: mdl-38198270

ABSTRACT

Deep learning models for medical image segmentation can fail unexpectedly and spectacularly for pathological cases and images acquired at different centers than training images, with labeling errors that violate expert knowledge. Such errors undermine the trustworthiness of deep learning models for medical image segmentation. Mechanisms for detecting and correcting such failures are essential for safely translating this technology into clinics and are likely to be a requirement of future regulations on artificial intelligence (AI). In this work, we propose a trustworthy AI theoretical framework and a practical system that can augment any backbone AI system using a fallback method and a fail-safe mechanism based on Dempster-Shafer theory. Our approach relies on an actionable definition of trustworthy AI. Our method automatically discards the voxel-level labeling predicted by the backbone AI that violate expert knowledge and relies on a fallback for those voxels. We demonstrate the effectiveness of the proposed trustworthy AI approach on the largest reported annotated dataset of fetal MRI consisting of 540 manually annotated fetal brain 3D T2w MRIs from 13 centers. Our trustworthy AI method improves the robustness of four backbone AI models for fetal brain MRIs acquired across various centers and for fetuses with various brain abnormalities.
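
For orientation, the core of Dempster-Shafer theory is Dempster's rule of combination, which fuses two mass functions m1 and m2 over subsets of the frame of discernment (standard form; the paper's specific fallback construction is not reproduced here):

    (m_1 \oplus m_2)(A) = \frac{1}{1 - K} \sum_{B \cap C = A} m_1(B)\, m_2(C), \qquad K = \sum_{B \cap C = \emptyset} m_1(B)\, m_2(C), \quad A \neq \emptyset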


Subjects
Algorithms , Artificial Intelligence , Magnetic Resonance Imaging , Fetus/diagnostic imaging , Brain/diagnostic imaging
13.
Neuroimage Clin ; 42: 103611, 2024.
Article in English | MEDLINE | ID: mdl-38703470

ABSTRACT

Automated segmentation of brain white matter lesions is crucial for both clinical assessment and scientific research in multiple sclerosis (MS). Over a decade ago, we introduced an engineered lesion segmentation tool, LST. While recent lesion segmentation approaches have leveraged artificial intelligence (AI), they often remain proprietary and difficult to adopt. As an open-source tool, we present LST-AI, an advanced deep learning-based extension of LST that consists of an ensemble of three 3D U-Nets. LST-AI explicitly addresses the imbalance between white matter (WM) lesions and non-lesioned WM. It employs a composite loss function incorporating binary cross-entropy and Tversky loss to improve segmentation of the highly heterogeneous MS lesions. We train the network ensemble on 491 MS pairs of T1-weighted and FLAIR images, collected in-house from a 3T MRI scanner, and expert neuroradiologists manually segmented the utilized lesion maps for training. LST-AI also includes a lesion location annotation tool, labeling lesions as periventricular, infratentorial, and juxtacortical according to the 2017 McDonald criteria, and, additionally, as subcortical. We conduct evaluations on 103 test cases consisting of publicly available data using the Anima segmentation validation tools and compare LST-AI with several publicly available lesion segmentation models. Our empirical analysis shows that LST-AI achieves superior performance compared to existing methods. Its Dice and F1 scores exceeded 0.62, outperforming LST, SAMSEG (Sequence Adaptive Multimodal SEGmentation), and the popular nnUNet framework, which all scored below 0.56. Notably, LST-AI demonstrated exceptional performance on the MSSEG-1 challenge dataset, an international WM lesion segmentation challenge, with a Dice score of 0.65 and an F1 score of 0.63, surpassing all other competing models at the time of the challenge. With increasing lesion volume, the lesion detection rate rapidly increased with a detection rate of >75% for lesions with a volume between 10 mm3 and 100 mm3. Given its higher segmentation performance, we recommend that research groups currently using LST transition to LST-AI. To facilitate broad adoption, we are releasing LST-AI as an open-source model, available as a command-line tool, dockerized container, or Python script, enabling diverse applications across multiple platforms.


Subjects
Deep Learning , Magnetic Resonance Imaging , Multiple Sclerosis , White Matter , Humans , Multiple Sclerosis/diagnostic imaging , Multiple Sclerosis/pathology , Magnetic Resonance Imaging/methods , White Matter/diagnostic imaging , White Matter/pathology , Brain/diagnostic imaging , Brain/pathology , Computer-Assisted Image Processing/methods , Female , Neuroimaging/methods , Neuroimaging/standards , Male , Adult
14.
Neuro Oncol ; 2024 May 30.
Article in English | MEDLINE | ID: mdl-38813990

ABSTRACT

BACKGROUND: Surgical resection is the standard of care for patients with large or symptomatic brain metastases (BMs). Despite improved local control after adjuvant stereotactic radiotherapy, the risk of local failure (LF) persists. Therefore, we aimed to develop and externally validate a pre-therapeutic radiomics-based prediction tool to identify patients at high LF risk. METHODS: Data were collected from A Multicenter Analysis of Stereotactic Radiotherapy to the Resection Cavity of Brain Metastases (AURORA) retrospective study (training cohort: 253 patients from two centers; external test cohort: 99 patients from five centers). Radiomic features were extracted from the contrast-enhancing BM (T1-CE MRI sequence) and the surrounding edema (FLAIR sequence). Different combinations of radiomic and clinical features were compared. The final models were trained on the entire training cohort with the best parameter set previously determined by internal 5-fold cross-validation and tested on the external test set. RESULTS: The best performance in the external test was achieved by an elastic net regression model trained with a combination of radiomic and clinical features with a concordance index (CI) of 0.77, outperforming any clinical model (best CI: 0.70). The model effectively stratified patients by LF risk in a Kaplan-Meier analysis (p < 0.001) and demonstrated an incremental net clinical benefit. At 24 months, we found LF in 9% and 74% of the low and high-risk groups, respectively. CONCLUSIONS: A combination of clinical and radiomic features predicted freedom from LF better than any clinical feature set alone. Patients at high risk for LF may benefit from stricter follow-up routines or intensified therapy.
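
A hedged sketch of how a concordance index of the kind reported can be computed for a risk model on time-to-local-failure data, using the lifelines utility (variable names and values are illustrative, not study data):

    import numpy as np
    from lifelines.utils import concordance_index

    # Illustrative data: months to local failure (or censoring), event indicator,
    # and a model-derived risk score (higher = higher predicted risk of failure).
    time_to_event  = np.array([6.0, 24.0, 12.0, 30.0, 9.0])
    event_observed = np.array([1, 0, 1, 0, 1])
    risk_score     = np.array([0.9, 0.2, 0.6, 0.1, 0.8])

    # concordance_index expects scores where larger values mean longer event-free
    # time, so the risk score is negated.
    print(concordance_index(time_to_event, -risk_score, event_observed))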

15.
ArXiv ; 2024 Feb 23.
Article in English | MEDLINE | ID: mdl-36945687

ABSTRACT

Validation metrics are key for the reliable tracking of scientific progress and for bridging the current chasm between artificial intelligence (AI) research and its translation into practice. However, increasing evidence shows that particularly in image analysis, metrics are often chosen inadequately in relation to the underlying research problem. This could be attributed to a lack of accessibility of metric-related knowledge: While taking into account the individual strengths, weaknesses, and limitations of validation metrics is a critical prerequisite to making educated choices, the relevant knowledge is currently scattered and poorly accessible to individual researchers. Based on a multi-stage Delphi process conducted by a multidisciplinary expert consortium as well as extensive community feedback, the present work provides the first reliable and comprehensive common point of access to information on pitfalls related to validation metrics in image analysis. Focusing on biomedical image analysis but with the potential of transfer to other fields, the addressed pitfalls generalize across application domains and are categorized according to a newly created, domain-agnostic taxonomy. To facilitate comprehension, illustrations and specific examples accompany each pitfall. As a structured body of information accessible to researchers of all levels of expertise, this work enhances global comprehension of a key topic in image analysis validation.

16.
ArXiv ; 2024 Mar 08.
Article in English | MEDLINE | ID: mdl-37292481

ABSTRACT

Pediatric tumors of the central nervous system are the most common cause of cancer-related death in children. The five-year survival rate for high-grade gliomas in children is less than 20%. Due to their rarity, the diagnosis of these entities is often delayed, their treatment is mainly based on historic treatment concepts, and clinical trials require multi-institutional collaborations. The MICCAI Brain Tumor Segmentation (BraTS) Challenge is a landmark community benchmark event with a successful history of 12 years of resource creation for the segmentation and analysis of adult glioma. Here we present the CBTN-CONNECT-DIPGR-ASNR-MICCAI BraTS-PEDs 2023 challenge, which represents the first BraTS challenge focused on pediatric brain tumors with data acquired across multiple international consortia dedicated to pediatric neuro-oncology and clinical trials. The BraTS-PEDs 2023 challenge focuses on benchmarking the development of volumetric segmentation algorithms for pediatric brain glioma through standardized quantitative performance evaluation metrics utilized across the BraTS 2023 cluster of challenges. Models gaining knowledge from the BraTS-PEDs multi-parametric structural MRI (mpMRI) training data will be evaluated on separate validation and unseen test mpMRI data of high-grade pediatric glioma. The CBTN-CONNECT-DIPGR-ASNR-MICCAI BraTS-PEDs 2023 challenge brings together clinicians and AI/imaging scientists to lead to faster development of automated segmentation techniques that could benefit clinical trials, and ultimately the care of children with brain tumors.

17.
Sci Data ; 11(1): 496, 2024 May 15.
Article in English | MEDLINE | ID: mdl-38750041

ABSTRACT

Meningiomas are the most common primary intracranial tumors and can be associated with significant morbidity and mortality. Radiologists, neurosurgeons, neuro-oncologists, and radiation oncologists rely on brain MRI for diagnosis, treatment planning, and longitudinal treatment monitoring. However, automated, objective, and quantitative tools for non-invasive assessment of meningiomas on multi-sequence MR images are not available. Here we present the BraTS Pre-operative Meningioma Dataset, as the largest multi-institutional expert annotated multilabel meningioma multi-sequence MR image dataset to date. This dataset includes 1,141 multi-sequence MR images from six sites, each with four structural MRI sequences (T2-, T2/FLAIR-, pre-contrast T1-, and post-contrast T1-weighted) accompanied by expert manually refined segmentations of three distinct meningioma sub-compartments: enhancing tumor, non-enhancing tumor, and surrounding non-enhancing T2/FLAIR hyperintensity. Basic demographic data are provided including age at time of initial imaging, sex, and CNS WHO grade. The goal of releasing this dataset is to facilitate the development of automated computational methods for meningioma segmentation and expedite their incorporation into clinical practice, ultimately targeting improvement in the care of meningioma patients.


Subjects
Magnetic Resonance Imaging , Meningeal Neoplasms , Meningioma , Meningioma/diagnostic imaging , Humans , Meningeal Neoplasms/diagnostic imaging , Male , Female , Computer-Assisted Image Processing/methods , Middle Aged , Aged
18.
Med Image Anal ; 91: 103029, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37988921

ABSTRACT

Imaging markers of cerebral small vessel disease provide valuable information on brain health, but their manual assessment is time-consuming and hampered by substantial intra- and interrater variability. Automated rating may benefit biomedical research, as well as clinical assessment, but diagnostic reliability of existing algorithms is unknown. Here, we present the results of the VAscular Lesions DetectiOn and Segmentation (Where is VALDO?) challenge that was run as a satellite event at the international conference on Medical Image Computing and Computer Aided Intervention (MICCAI) 2021. This challenge aimed to promote the development of methods for automated detection and segmentation of small and sparse imaging markers of cerebral small vessel disease, namely enlarged perivascular spaces (EPVS) (Task 1), cerebral microbleeds (Task 2) and lacunes of presumed vascular origin (Task 3) while leveraging weak and noisy labels. Overall, 12 teams participated in the challenge proposing solutions for one or more tasks (4 for Task 1-EPVS, 9 for Task 2-Microbleeds and 6 for Task 3-Lacunes). Multi-cohort data was used in both training and evaluation. Results showed a large variability in performance both across teams and across tasks, with promising results notably for Task 1-EPVS and Task 2-Microbleeds and not practically useful results yet for Task 3-Lacunes. It also highlighted the performance inconsistency across cases that may deter use at an individual level, while still proving useful at a population level.


Subjects
Cerebral Small Vessel Diseases , Magnetic Resonance Imaging , Humans , Magnetic Resonance Imaging/methods , Reproducibility of Results , Cerebral Small Vessel Diseases/diagnostic imaging , Cerebral Hemorrhage , Computers
19.
ArXiv ; 2024 Apr 29.
Article in English | MEDLINE | ID: mdl-38235066

ABSTRACT

The Circle of Willis (CoW) is an important network of arteries connecting major circulations of the brain. Its vascular architecture is believed to affect the risk, severity, and clinical outcome of serious neuro-vascular diseases. However, characterizing the highly variable CoW anatomy is still a manual and time-consuming expert task. The CoW is usually imaged by two angiographic imaging modalities, magnetic resonance angiography (MRA) and computed tomography angiography (CTA), but there exist limited public datasets with annotations on CoW anatomy, especially for CTA. Therefore we organized the TopCoW Challenge in 2023 with the release of an annotated CoW dataset. The TopCoW dataset was the first public dataset with voxel-level annotations for thirteen possible CoW vessel components, enabled by virtual-reality (VR) technology. It was also the first large dataset with paired MRA and CTA from the same patients. TopCoW challenge formalized the CoW characterization problem as a multiclass anatomical segmentation task with an emphasis on topological metrics. We invited submissions worldwide for the CoW segmentation task, which attracted over 140 registered participants from four continents. The top performing teams managed to segment many CoW components to Dice scores around 90%, but with lower scores for communicating arteries and rare variants. There were also topological mistakes for predictions with high Dice scores. Additional topological analysis revealed further areas for improvement in detecting certain CoW components and matching CoW variant topology accurately. TopCoW represented a first attempt at benchmarking the CoW anatomical segmentation task for MRA and CTA, both morphologically and topologically.

20.
Diagnostics (Basel) ; 13(5)2023 Mar 03.
Article in English | MEDLINE | ID: mdl-36900118

ABSTRACT

(1) Background and Purpose: In magnetic resonance imaging (MRI) of the spine, T2-weighted (T2-w) fat-saturated (fs) images improve the diagnostic assessment of pathologies. However, in the daily clinical setting, additional T2-w fs images are frequently missing due to time constraints or motion artifacts. Generative adversarial networks (GANs) can generate synthetic T2-w fs images in a clinically feasible time. Therefore, by simulating the radiological workflow with a heterogeneous dataset, this study's purpose was to evaluate the diagnostic value of additional synthetic, GAN-based T2-w fs images in the clinical routine. (2) Methods: 174 patients with MRI of the spine were retrospectively identified. A GAN was trained to synthesize T2-w fs images from T1-w, and non-fs T2-w images of 73 patients scanned in our institution. Subsequently, the GAN was used to create synthetic T2-w fs images for the previously unseen 101 patients from multiple institutions. In this test dataset, the additional diagnostic value of synthetic T2-w fs images was assessed in six pathologies by two neuroradiologists. Pathologies were first graded on T1-w and non-fs T2-w images only, then synthetic T2-w fs images were added, and pathologies were graded again. Evaluation of the additional diagnostic value of the synthetic protocol was performed by calculation of Cohen's κ and accuracy in comparison to a ground truth (GT) grading based on real T2-w fs images, pre- or follow-up scans, other imaging modalities, and clinical information. (3) Results: The addition of the synthetic T2-w fs to the imaging protocol led to a more precise grading of abnormalities than when grading was based on T1-w and non-fs T2-w images only (mean κ GT versus synthetic protocol = 0.65; mean κ GT versus T1/T2 = 0.56; p = 0.043). (4) Conclusions: The implementation of synthetic T2-w fs images in the radiological workflow significantly improves the overall assessment of spine pathologies. Thereby, high-quality, synthetic T2-w fs images can be virtually generated by a GAN from heterogeneous, multicenter T1-w and non-fs T2-w contrasts in a clinically feasible time, which underlines the reproducibility and generalizability of our approach.
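
A hedged sketch of how intermethod agreement of the kind reported (Cohen's κ between gradings) can be computed with scikit-learn; the gradings below are illustrative, not study data:

    from sklearn.metrics import cohen_kappa_score

    # Illustrative ordinal gradings of one pathology (0 = absent, 1 = mild, 2 = severe)
    # from the ground-truth standard and from the synthetic protocol.
    grading_ground_truth = [0, 1, 2, 1, 0, 2, 1, 1]
    grading_synthetic    = [0, 1, 2, 2, 0, 2, 1, 0]

    print(cohen_kappa_score(grading_ground_truth, grading_synthetic))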
