Results 1 - 7 of 7
1.
Neurooncol Adv; 6(1): vdad171, 2024.
Article in English | MEDLINE | ID: mdl-38435962

ABSTRACT

Background: The diffuse growth pattern of glioblastoma is one of the main challenges for accurate treatment. Computational tumor growth modeling has emerged as a promising tool to guide personalized therapy. Here, we performed clinical and biological validation of a novel growth model, aiming to close the gap between experimental research and clinical implementation.
Methods: One hundred and twenty-four patients from The Cancer Genome Atlas (TCGA) and 397 patients from the UCSF Glioma Dataset were assessed for significant correlations between clinical data, genetic pathway activation maps (generated with PARADIGM; TCGA only), and the infiltration (Dw) and proliferation (ρ) parameters of a Fisher-Kolmogorov growth model. To further evaluate clinical potential, we performed the same growth modeling on preoperative magnetic resonance imaging data from 30 patients of our institution and compared model-derived tumor volume and recurrence coverage with standard radiotherapy plans.
Results: The parameter ratio Dw/ρ (P < .05 in TCGA) as well as the simulated tumor volume (P < .05 in TCGA/UCSF) were significantly inversely correlated with overall survival. Interestingly, we found a significant correlation between 11 proliferation pathways and the estimated proliferation parameter. Depending on the cutoff value for tumor cell density, we observed a significant improvement in recurrence coverage without a significantly increased radiation volume when using model-derived target volumes instead of standard radiation plans.
Conclusions: Having identified significant correlations between computed growth parameters and clinical and biological data, we highlight the potential of tumor growth modeling for individualized therapy of glioblastoma. This might improve the accuracy of radiation planning in the near future.
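
As a rough illustration of the Fisher-Kolmogorov (reaction-diffusion) model underlying the infiltration (Dw) and proliferation (ρ) parameters above, the following Python sketch evolves dc/dt = D·∇²c + ρ·c·(1 − c) on a 2D grid. It is a toy forward simulation with assumed parameter values, explicit Euler stepping, and periodic boundaries, not the personalization pipeline used in the study.

    # Minimal 2D Fisher-Kolmogorov (reaction-diffusion) forward simulation.
    # D (infiltration) and rho (proliferation) are illustrative values, not
    # parameters estimated in the study above.
    import numpy as np

    def simulate_fisher_kolmogorov(c0, D=0.1, rho=0.02, dx=1.0, dt=0.1, steps=1000):
        """Evolve dc/dt = D * laplacian(c) + rho * c * (1 - c) with explicit Euler."""
        c = c0.copy()
        for _ in range(steps):
            lap = (np.roll(c, 1, 0) + np.roll(c, -1, 0) +
                   np.roll(c, 1, 1) + np.roll(c, -1, 1) - 4.0 * c) / dx**2
            c = c + dt * (D * lap + rho * c * (1.0 - c))
            np.clip(c, 0.0, 1.0, out=c)   # keep cell density in [0, 1]
        return c

    # Seed a small tumor nucleus and run the forward model.
    c0 = np.zeros((128, 128))
    c0[64, 64] = 1.0
    density = simulate_fisher_kolmogorov(c0)
    print("voxels above 1% cell density:", int((density > 0.01).sum()))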

2.
ArXiv; 2024 Apr 29.
Article in English | MEDLINE | ID: mdl-38235066

ABSTRACT

The Circle of Willis (CoW) is an important network of arteries connecting the major circulations of the brain. Its vascular architecture is believed to affect the risk, severity, and clinical outcome of serious neurovascular diseases. However, characterizing the highly variable CoW anatomy is still a manual and time-consuming expert task. The CoW is usually imaged by two angiographic imaging modalities, magnetic resonance angiography (MRA) and computed tomography angiography (CTA), but there are only limited public datasets with annotations of CoW anatomy, especially for CTA. Therefore, we organized the TopCoW Challenge in 2023 with the release of an annotated CoW dataset. The TopCoW dataset was the first public dataset with voxel-level annotations for thirteen possible CoW vessel components, enabled by virtual-reality (VR) technology. It was also the first large dataset with paired MRA and CTA from the same patients. The TopCoW Challenge formalized the CoW characterization problem as a multiclass anatomical segmentation task with an emphasis on topological metrics. We invited submissions worldwide for the CoW segmentation task, which attracted over 140 registered participants from four continents. The top-performing teams segmented many CoW components with Dice scores around 90%, but with lower scores for communicating arteries and rare variants. There were also topological mistakes in predictions with high Dice scores. Additional topological analysis revealed further areas for improvement in detecting certain CoW components and in matching CoW variant topology accurately. TopCoW represented a first attempt at benchmarking the CoW anatomical segmentation task for MRA and CTA, both morphologically and topologically.
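
As a point of reference for the Dice scores reported above, this short Python sketch computes a per-class Dice coefficient for integer label maps. The label IDs and random volumes are illustrative assumptions, not the official TopCoW labeling scheme or evaluation code.

    # Per-class Dice coefficient for multiclass segmentation label maps.
    import numpy as np

    def dice_per_class(pred, gt, labels):
        """Return {label: Dice} for integer label maps pred and gt of equal shape."""
        scores = {}
        for lab in labels:
            p, g = (pred == lab), (gt == lab)
            denom = p.sum() + g.sum()
            scores[lab] = 2.0 * np.logical_and(p, g).sum() / denom if denom else np.nan
        return scores

    # Toy example with random volumes and two foreground labels.
    pred = np.random.randint(0, 3, size=(64, 64, 64))
    gt = np.random.randint(0, 3, size=(64, 64, 64))
    print(dice_per_class(pred, gt, labels=[1, 2]))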

3.
Med Image Anal; 83: 102672, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36395623

ABSTRACT

Current treatment planning for patients diagnosed with a brain tumor, such as glioma, could benefit significantly from access to the spatial distribution of tumor cell concentration. Existing diagnostic modalities, e.g. magnetic resonance imaging (MRI), delineate areas of high cell density sufficiently well. In gliomas, however, they do not portray areas of low cell concentration, which can often serve as a source of tumor recurrence after treatment. To estimate tumor cell densities beyond the visible boundaries of the lesion, numerical simulations of tumor growth can complement imaging information by providing estimates of the full spatial distribution of tumor cells. In recent years, a corpus of literature on medical-image-based tumor modeling has been published. It includes different mathematical formalisms describing the forward tumor growth model, and various parametric inference schemes have been developed to perform efficient tumor model personalization, i.e. to solve the inverse problem. However, the unifying drawback of all existing approaches is the time complexity of the model personalization, which prohibits integration of the modeling into clinical settings. In this work, we introduce a deep-learning-based methodology for inferring the patient-specific spatial distribution of brain tumors from T1Gd and FLAIR MRI scans. Coined Learn-Morph-Infer, the method achieves real-time performance on the order of minutes on widely available hardware, and the compute time is stable across tumor models of different complexity, such as reaction-diffusion and reaction-advection-diffusion models. We believe the proposed inverse-solution approach not only paves the way for clinical translation of brain tumor personalization but can also be adapted to other scientific and engineering domains.


Subjects
Brain Neoplasms; Humans; Brain Neoplasms/diagnostic imaging
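
To make the inverse-problem setting of entry 3 more concrete, the sketch below shows the general shape of such a mapping: a small 3D CNN regressing a handful of growth-model parameters directly from a two-channel (T1Gd, FLAIR) volume. The architecture, the five-parameter output, and the input size are illustrative assumptions and do not reproduce the published Learn-Morph-Infer network.

    # Schematic 3D CNN that regresses growth-model parameters (e.g. D, rho and a
    # tumor-origin estimate) from a two-channel (T1Gd, FLAIR) volume.
    # Layer sizes and the 5-parameter output are illustrative assumptions.
    import torch
    import torch.nn as nn

    class GrowthParamRegressor(nn.Module):
        def __init__(self, n_params=5):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv3d(2, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
                nn.Conv3d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
                nn.Conv3d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool3d(1),
            )
            self.head = nn.Linear(64, n_params)

        def forward(self, x):                      # x: (batch, 2, D, H, W)
            return self.head(self.features(x).flatten(1))

    model = GrowthParamRegressor()
    params = model(torch.randn(1, 2, 64, 64, 64))  # -> (1, 5) parameter estimates
    print(params.shape)
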
4.
Med Image Anal; 77: 102387, 2022 Apr.
Article in English | MEDLINE | ID: mdl-35180675

ABSTRACT

Voluntary and involuntary patient motion is a major problem for data quality in the clinical routine of Magnetic Resonance Imaging (MRI). It has been thoroughly investigated, yet it remains unresolved. In quantitative MRI, motion artifacts impair the entire temporal evolution of the magnetization and cause errors in parameter estimation. Here, we present a novel strategy based on residual learning for retrospective motion correction in fast 3D whole-brain multiparametric MRI. We propose a 3D multiscale convolutional neural network (CNN) that learns the non-linear relationship between the motion-affected quantitative parameter maps and the residual error relative to their motion-free reference. For supervised model training, despite limited data availability, we propose a physics-informed simulation to generate self-contained paired datasets from a priori motion-free data. We evaluate the motion-correction performance of the proposed method for the example of 3D Quantitative Transient-state Imaging at 1.5 T and 3 T. We show the robustness of the motion correction for various motion regimes and demonstrate the generalization capabilities of the residual CNN on real-motion in vivo data from healthy volunteers and clinical patient cases, including pediatric and adult patients with large brain lesions. Our study demonstrates that the proposed motion correction outperforms the current state of the art, reliably providing high, clinically relevant image quality for mild to pronounced patient movements. This has important implications for clinical setups, where large amounts of motion-affected data must otherwise be discarded as diagnostically unusable.


Subjects
Multiparametric Magnetic Resonance Imaging; Adult; Artifacts; Child; Humans; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Motion; Retrospective Studies
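
The residual-learning idea described in entry 4 can be sketched compactly: a network is supervised on the residual between motion-corrupted parameter maps and their motion-free reference, and the correction subtracts the predicted residual at inference time. The tiny single-channel 3D CNN below is an illustrative stand-in, not the multiscale architecture from the paper.

    # Residual learning for retrospective correction: predict (corrupted - reference)
    # and subtract it at inference time. Network size and data are illustrative.
    import torch
    import torch.nn as nn

    residual_net = nn.Sequential(
        nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
        nn.Conv3d(16, 16, 3, padding=1), nn.ReLU(),
        nn.Conv3d(16, 1, 3, padding=1),
    )

    def train_step(motion_affected, motion_free, optimizer):
        """Supervise the network on the residual (corrupted - reference)."""
        target_residual = motion_affected - motion_free
        pred_residual = residual_net(motion_affected)
        loss = nn.functional.mse_loss(pred_residual, target_residual)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

    def correct(motion_affected):
        """Apply the learned correction at inference time."""
        with torch.no_grad():
            return motion_affected - residual_net(motion_affected)

    optimizer = torch.optim.Adam(residual_net.parameters(), lr=1e-4)
    corrupted = torch.randn(1, 1, 32, 32, 32)
    reference = torch.randn(1, 1, 32, 32, 32)
    print("training loss:", train_step(corrupted, reference, optimizer))
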
5.
Front Neuroimaging; 1: 977491, 2022.
Article in English | MEDLINE | ID: mdl-37555157

ABSTRACT

Registration methods facilitate the comparison of multiparametric magnetic resonance images acquired at different stages of brain tumor treatment. Image-based registration solutions are influenced by the sequences chosen to compute the distance measure and by the lack of image correspondences caused by resection cavities and pathological tissues. Nonetheless, an evaluation of the impact of these input parameters on the registration of longitudinal data is still missing. This work evaluates the influence of multiple sequences, namely T1-weighted (T1), T2-weighted (T2), contrast-enhanced T1-weighted (T1-CE), and T2 Fluid Attenuated Inversion Recovery (FLAIR), and of the exclusion of pathological tissues, on the non-rigid registration of pre- and post-operative images. We investigate two types of registration methods, an iterative approach and a convolutional neural network solution based on a 3D U-Net. We employ two test sets to compute the mean target registration error (mTRE) based on corresponding landmarks. In the first set, markers are positioned exclusively in the surroundings of the pathology. The methods employing T1-CE achieve the lowest mTREs, with an improvement of up to 0.8 mm for the iterative solution. The errors are higher than the baseline when using the FLAIR sequence. When excluding the pathology, lower mTREs are observed for most of the methods. In the second test set, corresponding landmarks are located throughout the entire brain volume. Both solutions employing T1-CE obtain the lowest mTREs, with a decrease of up to 1.16 mm for the iterative method, whereas the results worsen when using FLAIR. When excluding the pathology, an improvement is observed for the CNN method using T1-CE. Overall, both approaches utilizing the T1-CE sequence obtain the best mTREs, whereas FLAIR is the least informative sequence for guiding the registration process. Moreover, excluding the pathology from the distance measure computation improves the registration of the brain tissues surrounding the tumor. Thus, this work provides the first numerical evaluation of the influence of these parameters on the registration of longitudinal magnetic resonance images, and it can inform the development of future algorithms.
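
For reference, the mean target registration error (mTRE) used above is simply the mean Euclidean distance between corresponding landmarks after registration. A minimal sketch, assuming landmark coordinates in millimetres:

    # Mean target registration error (mTRE) from corresponding landmark pairs.
    import numpy as np

    def mtre(warped_landmarks, target_landmarks):
        """Both arrays have shape (n_landmarks, 3), coordinates in millimetres."""
        return np.linalg.norm(warped_landmarks - target_landmarks, axis=1).mean()

    # Illustrative landmark coordinates, not data from the study.
    warped = np.array([[10.0, 22.5, 31.0], [40.2, 18.9, 27.4]])
    target = np.array([[10.8, 22.1, 30.5], [39.7, 19.3, 28.0]])
    print(f"mTRE = {mtre(warped, target):.2f} mm")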

6.
Front Neurosci; 14: 125, 2020.
Article in English | MEDLINE | ID: mdl-32410929

ABSTRACT

Despite great advances in brain tumor segmentation and a clear clinical need, translation of state-of-the-art computational methods into clinical routine and scientific practice remains a major challenge. Several factors impede successful implementation, including data standardization and preprocessing; yet these steps are pivotal for the deployment of state-of-the-art image segmentation algorithms. To overcome these issues, we present the BraTS Toolkit, a holistic approach to brain tumor segmentation consisting of three components. First, the BraTS Preprocessor facilitates data standardization and preprocessing for researchers and clinicians alike. It covers the entire image analysis workflow prior to tumor segmentation, from image conversion and registration to brain extraction. Second, the BraTS Segmentor enables orchestration of BraTS brain tumor segmentation algorithms for the generation of fully automated segmentations. Finally, the BraTS Fusionator combines the resulting candidate segmentations into consensus segmentations using fusion methods such as majority voting and iterative SIMPLE fusion. The capabilities of our tools are illustrated with a practical example to enable easy translation to clinical and scientific practice.
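
Of the fusion strategies mentioned above, majority voting is the simplest; the following numpy sketch illustrates the underlying per-voxel vote. It is a generic illustration of the idea, not the BraTS Fusionator API.

    # Generic majority-vote fusion of candidate segmentations (per-voxel vote).
    import numpy as np

    def majority_vote(segmentations):
        """Fuse a list of integer label maps of identical shape by per-voxel vote."""
        stack = np.stack(segmentations, axis=0)            # (n_models, ...)
        n_labels = int(stack.max()) + 1
        votes = np.stack([(stack == lab).sum(0) for lab in range(n_labels)], axis=0)
        return votes.argmax(axis=0).astype(stack.dtype)

    # Toy example: five random candidate segmentations with four labels.
    candidates = [np.random.randint(0, 4, size=(32, 32, 32)) for _ in range(5)]
    consensus = majority_vote(candidates)
    print(consensus.shape, np.unique(consensus))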

7.
IEEE J Biomed Health Inform; 24(9): 2599-2608, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32054593

ABSTRACT

Extranodal natural killer/T-cell lymphoma (ENKL), nasal type, is a rare disease with a low survival rate that primarily affects Asian and South American populations. Segmentation of ENKL lesions is crucial for clinical decision support and treatment planning. This paper is the first study on computer-aided diagnosis systems for the ENKL segmentation problem. We propose an automatic, coarse-to-fine approach for ENKL segmentation using adversarial networks. In the coarse stage, we extract the region of interest bounding the lesions using a segmentation neural network. In the fine stage, we use an adversarial segmentation network and further introduce a multi-scale L1 loss function to drive the network to learn both global and local features. The generator and discriminator are trained alternately by backpropagation in an adversarial min-max game. Furthermore, we present the first exploration of zone-based uncertainty estimates based on the Monte Carlo dropout technique in the context of deep networks for medical image segmentation. Specifically, we propose uncertainty criteria based on the lesion and the background and linearly normalize them to a specific interval. This not only provides a crucial criterion for evaluating the quality of the algorithm, but also permits subsequent optimization by engineers and revision by clinicians once the main source of uncertainty, whether from the background or the lesion zone, is quantitatively understood. Experimental results demonstrate that the proposed method is more effective and more stable in the lesion zone than state-of-the-art deep-learning-based segmentation models.


Subjects
Lymphoma, T-Cell; Positron Emission Tomography Computed Tomography; Algorithms; Humans; Image Processing, Computer-Assisted; Neural Networks, Computer; Uncertainty
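
The Monte Carlo dropout technique referenced in entry 7 can be sketched in a few lines: dropout is kept active at test time, the network is sampled T times, and the per-voxel spread of the predictions serves as an uncertainty map. The tiny 2D network and the use of variance as the uncertainty measure are illustrative assumptions, not the zone-based criteria proposed in the paper.

    # Monte Carlo dropout: sample the network T times with dropout enabled and use
    # the per-voxel variance of the predictions as an uncertainty map.
    import torch
    import torch.nn as nn

    net = nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.Dropout2d(0.5),
        nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
    )

    def mc_dropout_predict(x, T=20):
        net.train()                      # keep dropout active during inference
        with torch.no_grad():
            samples = torch.stack([net(x) for _ in range(T)], dim=0)
        return samples.mean(0), samples.var(0)   # mean prediction, uncertainty map

    mean_pred, uncertainty = mc_dropout_predict(torch.randn(1, 1, 64, 64))
    print(mean_pred.shape, uncertainty.shape)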