1.
IEEE Trans Med Imaging; 42(3): 697-712, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36264729

ABSTRACT

Image registration is a fundamental medical image analysis task, and a wide variety of approaches have been proposed. However, only a few studies have comprehensively compared medical image registration approaches on a wide range of clinically relevant tasks. This limits the development of registration methods, the adoption of research advances into practice, and fair benchmarking across competing approaches. The Learn2Reg challenge addresses these limitations by providing a multi-task medical image registration data set for comprehensive characterisation of deformable registration algorithms. Continuous evaluation is possible at https://learn2reg.grand-challenge.org. Learn2Reg covers a wide range of anatomies (brain, abdomen, and thorax), modalities (ultrasound, CT, MR), availability of annotations, as well as intra- and inter-patient registration evaluation. We established an easily accessible framework for training and validation of 3D registration methods, which enabled the compilation of results from over 65 individual method submissions by more than 20 unique teams. We used a complementary set of metrics, including robustness, accuracy, plausibility, and runtime, enabling unique insight into the current state of the art of medical image registration. This paper describes the datasets, tasks, evaluation methods, and results of the challenge, as well as further analyses of transferability to new datasets, the importance of label supervision, and resulting bias. While no single approach worked best across all tasks, many methodological aspects could be identified that push medical image registration to a new state of the art. Furthermore, we dispelled the common belief that conventional registration methods have to be much slower than deep-learning-based methods.
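Several of the accuracy metrics named above, such as the Dice overlap, reduce to simple set arithmetic on label masks. A minimal illustrative sketch (not the challenge's evaluation code) for two binary segmentations:

```python
import numpy as np

def dice_score(a, b):
    """Dice overlap between two binary masks (1.0 = perfect overlap)."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * inter / denom if denom else 1.0

# Toy 2D example: two 4x4 squares shifted by one pixel
fixed = np.zeros((8, 8), bool); fixed[2:6, 2:6] = True    # reference mask
moving = np.zeros((8, 8), bool); moving[3:7, 3:7] = True  # warped mask
print(round(dice_score(fixed, moving), 4))  # 2*9/(16+16) = 0.5625
```

In challenge settings the same formula is typically applied per anatomical label and averaged.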


Subject(s)
Abdominal Cavity; Deep Learning; Humans; Algorithms; Brain/diagnostic imaging; Abdomen/diagnostic imaging; Image Processing, Computer-Assisted/methods
2.
Annu Int Conf IEEE Eng Med Biol Soc; 2022: 4736-4739, 2022 Jul.
Article in English | MEDLINE | ID: mdl-36086627

ABSTRACT

In metastatic breast cancer, bone metastases are prevalent and associated with multiple complications, so assessing their response to treatment is crucial. Most deep learning methods segment or detect lesions on a single acquisition, while only a few focus on longitudinal studies. In this work, 45 patients with baseline (BL) and follow-up (FU) images, recruited in the context of the EPICUREseinmeta study, were analyzed. The aim was to determine whether a network trained for a particular timepoint generalizes well to another, and to explore different improvement strategies. Four networks based on the same 3D U-Net framework for segmenting bone lesions on BL and FU images were trained with different strategies and compared. They were trained (1) only with BL images, (2) only with FU images, (3) with both BL and FU images, and (4) only with FU images but with registered BL images and bone lesion segmentations as additional input channels. From the obtained segmentations, we computed the PET Bone Index (PBI), which assesses a patient's bone metastases burden, and analyzed its potential for treatment response evaluation. The four networks respectively obtained Dice scores of 0.53, 0.55, 0.59, and 0.62 on FU acquisitions. The under-performance of the first and third networks may be explained by the lower SUV uptake in FU images compared to BL images, due to treatment response. The fourth network outperformed the second, showing the importance of adding BL PET images and bone lesion segmentations as prior knowledge. With an AUC of 0.86, the difference in PBI between two acquisitions could be used to assess treatment response. Clinical relevance: To assess the treatment response of bone metastases, it is crucial to detect and segment them on several acquisitions from the same patient. We proposed a fully automatic method to detect and segment these metastases on longitudinal 18F-FDG PET/CT images in the context of metastatic breast cancer. We also proposed an automatic PBI to quantitatively assess the evolution of a patient's bone metastases burden and to automatically evaluate their response to treatment.
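The abstract does not give the exact PBI formula, but an index of bone metastases burden can be illustrated as the fraction of segmented bone occupied by bone lesions. The sketch below is an assumption for illustration only, not the paper's definition:

```python
import numpy as np

def pet_bone_index(lesion_mask, bone_mask):
    """Illustrative bone-burden index: percentage of segmented bone
    voxels that belong to bone lesions (a simplification; the paper's
    exact PBI definition may differ)."""
    bone = np.asarray(bone_mask, bool)
    lesion = np.asarray(lesion_mask, bool) & bone  # keep lesions inside bone only
    return 100.0 * lesion.sum() / bone.sum()

# Toy 3D masks: a 6^3 bone block containing a 2^3 lesion
bone = np.zeros((10, 10, 10), bool); bone[2:8, 2:8, 2:8] = True   # 216 voxels
lesion = np.zeros_like(bone); lesion[3:5, 3:5, 3:5] = True        # 8 voxels
print(round(pet_bone_index(lesion, bone), 2))  # 100 * 8/216 -> 3.7
```

Comparing this value between baseline and follow-up acquisitions is the kind of difference the abstract evaluates for treatment response.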


Subject(s)
Bone Neoplasms; Breast Neoplasms; Bone Neoplasms/diagnostic imaging; Bone Neoplasms/secondary; Breast Neoplasms/diagnostic imaging; Breast Neoplasms/pathology; Female; Fluorodeoxyglucose F18; Humans; Positron Emission Tomography Computed Tomography/methods; Positron-Emission Tomography
3.
Phys Med Biol; 67(15), 2022 Jul 21.
Article in English | MEDLINE | ID: mdl-35785776

ABSTRACT

Objective. This paper proposes a novel approach for the longitudinal registration of PET images acquired for the monitoring of patients with metastatic breast cancer. Unlike in other image analysis tasks, the use of deep learning (DL) has not significantly improved the performance of image registration. With this work, we propose a new registration approach to bridge the performance gap between conventional and DL-based methods: medical image registration method regularized by architecture (MIRRBA). Approach. MIRRBA is a subject-specific deformable registration method which relies on a deep pyramidal architecture to parametrize the deformation field. Diverging from the usual deep-learning paradigms, MIRRBA does not require a learning database, only the pair of images to be registered, which is used to optimize the network's parameters. We applied MIRRBA to a private dataset of 110 whole-body PET images of patients with metastatic breast cancer. We used different architecture configurations to produce the deformation field and studied the results obtained. We also compared our method to several standard registration approaches: two conventional iterative methods (ANTs and Elastix) and two supervised DL-based models (LapIRN and VoxelMorph). Registration accuracy was evaluated using the Dice score, the target registration error, the average Hausdorff distance, and the detection rate, while the realism of the obtained registration was evaluated using the determinant of the Jacobian. The ability of the different methods to shrink disappearing lesions was also measured via the disappearing rate. Main results. MIRRBA significantly improved all metrics when compared to the DL-based approaches: the organ and lesion Dice scores of VoxelMorph improved by 6% and 52% respectively, while those of LapIRN increased by 5% and 65%. Compared to the conventional approaches, MIRRBA obtained comparable results, showing the feasibility of our method. Significance. We also demonstrate the regularizing power of deep architectures and present new elements for understanding the role of the architecture in DL methods used for registration.
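The realism check mentioned above relies on the determinant of the Jacobian of the deformation: values at or below zero indicate physically implausible folding. A minimal 2D finite-difference sketch (an illustration, not the authors' implementation):

```python
import numpy as np

def jacobian_determinant_2d(disp):
    """Per-pixel Jacobian determinant of a 2D displacement field disp
    of shape (2, H, W), for the transform phi(x) = x + u(x).
    Values <= 0 indicate folding of the deformation."""
    du0_d1 = np.gradient(disp[0], axis=1)  # d u_row / d col
    du0_d0 = np.gradient(disp[0], axis=0)  # d u_row / d row
    du1_d1 = np.gradient(disp[1], axis=1)  # d u_col / d col
    du1_d0 = np.gradient(disp[1], axis=0)  # d u_col / d row
    # det(I + du) for a 2x2 Jacobian at every pixel
    return (1 + du0_d0) * (1 + du1_d1) - du0_d1 * du1_d0

disp = np.zeros((2, 16, 16))  # identity transform: zero displacement
det = jacobian_determinant_2d(disp)
print(bool(np.allclose(det, 1.0)))  # identity => determinant 1 everywhere
```

In 3D the same idea applies with a 3x3 Jacobian per voxel; the fraction of non-positive determinants is a common plausibility score.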


Subject(s)
Breast Neoplasms; Image Processing, Computer-Assisted; Algorithms; Breast Neoplasms/diagnostic imaging; Female; Humans; Image Processing, Computer-Assisted/methods; Positron-Emission Tomography
4.
Cancers (Basel); 14(1), 2021 Dec 26.
Article in English | MEDLINE | ID: mdl-35008265

ABSTRACT

Metastatic breast cancer patients receive lifelong medication and are regularly monitored for disease progression. The aims of this work were to (1) propose networks to segment breast cancer metastatic lesions on longitudinal whole-body PET/CT and (2) extract imaging biomarkers from the segmentations and evaluate their potential to determine treatment response. Baseline and follow-up PET/CT images of 60 patients from the EPICUREseinmeta study were used to train two deep-learning models to segment breast cancer metastatic lesions: one for baseline images and one for follow-up images. From the automatic segmentations, four imaging biomarkers were computed and evaluated: SULpeak, Total Lesion Glycolysis (TLG), PET Bone Index (PBI), and PET Liver Index (PLI). The first network obtained a mean Dice score of 0.66 on baseline acquisitions; the second, a mean Dice score of 0.58 on follow-up acquisitions. SULpeak, with a 32% decrease between baseline and follow-up, was the biomarker best able to assess patients' response (sensitivity 87%, specificity 87%), followed by TLG (43% decrease, sensitivity 73%, specificity 81%) and PBI (8% decrease, sensitivity 69%, specificity 69%). Our networks constitute promising tools for the automatic segmentation of lesions in patients with metastatic breast cancer, allowing treatment response assessment with several biomarkers.
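Of the biomarkers above, TLG has a particularly simple standard definition: mean uptake inside the lesion multiplied by the lesion volume. A hedged sketch (the uptake values and voxel volume below are illustrative assumptions, not study data):

```python
import numpy as np

def total_lesion_glycolysis(suv, lesion_mask, voxel_volume_ml):
    """TLG = mean SUV inside the lesion x metabolic lesion volume (ml)."""
    suv = np.asarray(suv, float)
    mask = np.asarray(lesion_mask, bool)
    volume_ml = mask.sum() * voxel_volume_ml
    return suv[mask].mean() * volume_ml

# Toy volume: uniform SUV of 5.0, an 8-voxel lesion, 0.5 ml voxels
suv = np.full((4, 4, 4), 5.0)
mask = np.zeros_like(suv, dtype=bool); mask[1:3, 1:3, 1:3] = True
print(total_lesion_glycolysis(suv, mask, voxel_volume_ml=0.5))  # 5.0 * 4 ml = 20.0
```

In practice TLG is summed over all segmented lesions, which is why accurate whole-body segmentation matters for this biomarker.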

5.
Annu Int Conf IEEE Eng Med Biol Soc; 2020: 1532-1535, 2020 Jul.
Article in English | MEDLINE | ID: mdl-33018283

ABSTRACT

18FDG PET/CT imaging is commonly used in the diagnosis and follow-up of metastatic breast cancer, but its quantitative analysis is complicated by the number and location heterogeneity of metastatic lesions. Considering that bones are the most common metastatic site, this work compares different approaches to segmenting the bones and bone metastatic lesions in breast cancer. Two deep learning methods based on U-Net were developed and trained to segment either both bones and bone lesions, or bone lesions alone, on PET/CT images. These methods were cross-validated on 24 patients from the prospective EPICUREseinmeta metastatic breast cancer study and were evaluated using recall and precision to measure lesion detection, as well as the Dice score to assess the accuracy of bone and bone lesion segmentation. Results show that taking bone information into account during training improves the precision of lesion detection as well as the Dice score of the segmented lesions. Moreover, using the obtained bone and bone lesion masks, we were able to compute a PET Bone Index (PBI) inspired by the recognized Bone Scan Index (BSI). This automatically computed PBI agrees globally with the one calculated from ground-truth delineations. Clinical relevance: We propose a fully automatic deep-learning-based method to detect and segment bones and bone lesions on 18FDG PET/CT in the context of metastatic breast cancer. We also introduce an automatic PET bone index which could be incorporated into the monitoring and decision process.
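The lesion-detection metrics used above reduce to simple count ratios over matched lesions. An illustrative sketch with hypothetical counts (not the study's results):

```python
def detection_metrics(tp, fp, fn):
    """Lesion-level recall (fraction of true lesions found) and
    precision (fraction of detections that are real lesions)."""
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    return recall, precision

# Hypothetical counts: 18 lesions found, 2 missed, 4 spurious detections
recall, precision = detection_metrics(tp=18, fp=4, fn=2)
print(round(recall, 2), round(precision, 2))  # 0.9 0.82
```

Adding bone masks to the training input, as the abstract reports, mainly drives down the false-positive count and therefore raises precision.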


Subject(s)
Breast Neoplasms; Deep Learning; Fluorodeoxyglucose F18; Breast Neoplasms/diagnostic imaging; Humans; Positron Emission Tomography Computed Tomography; Prospective Studies; Tomography, X-Ray Computed
6.
Annu Int Conf IEEE Eng Med Biol Soc; 2020: 1536-1539, 2020 Jul.
Article in English | MEDLINE | ID: mdl-33018284

ABSTRACT

Semi-automatic measurements are performed on 18FDG PET-CT images to monitor the evolution of metastatic sites in the clinical follow-up of metastatic breast cancer patients. Apart from being time-consuming and prone to subjective approximation, semi-automatic tools cannot distinguish cancerous regions from active organs that present a high 18FDG uptake. In this work, we combine a deep-learning-based approach with a superpixel segmentation method to segment the main active organs (brain, heart, bladder) from full-body PET images. In particular, we integrate the SLIC superpixel algorithm at different levels of a convolutional network, and compare the results with those of a deep learning segmentation network alone. The methods are cross-validated on full-body PET images of 36 patients and tested on the acquisitions of 24 patients from a different study center, in the context of the ongoing EPICUREseinmeta study. The similarity between the manually defined organ masks and the results is evaluated with the Dice score, and the amount of false positives is evaluated through the positive predictive value (PPV). According to the computed Dice scores, all approaches accurately segment the target organs. However, the networks integrating superpixels are better suited to transferring knowledge across datasets acquired at multiple sites (domain adaptation) and, according to the PPV, are less likely to segment structures outside of the target organs. Hence, combining deep learning with superpixels allows organs presenting a high 18FDG uptake to be segmented on PET images without selecting cancerous lesions, and thus improves the precision of the semi-automatic tools monitoring the evolution of breast cancer metastases. Clinical relevance: We demonstrate the utility of combining deep learning and superpixel segmentation methods to accurately find the contours of active organs in metastatic breast cancer images across different dataset distributions.
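The PPV used above penalizes segmentations that spill outside the target organ, which is exactly the failure mode the superpixel variants reduce. A minimal voxel-wise sketch with toy masks (not the study's data):

```python
import numpy as np

def positive_predictive_value(pred, truth):
    """Voxel-wise PPV: fraction of predicted organ voxels that fall
    inside the manual organ mask (penalises over-segmentation)."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    predicted = pred.sum()
    return (pred & truth).sum() / predicted if predicted else 0.0

# Toy masks: the prediction spills one column outside the true organ
truth = np.zeros((8, 8), bool); truth[2:6, 2:6] = True  # 16 voxels
pred = np.zeros((8, 8), bool); pred[2:6, 2:7] = True    # 20 voxels
print(round(positive_predictive_value(pred, truth), 2))  # 16/20 = 0.8
```

Unlike the Dice score, PPV does not penalize missed organ voxels, which is why the abstract reports both metrics.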


Subject(s)
Breast Neoplasms; Deep Learning; Algorithms; Brain; Breast Neoplasms/diagnostic imaging; Breast Neoplasms/pathology; Humans; Neoplasm Metastasis; Positron Emission Tomography Computed Tomography
7.
Magn Reson Imaging; 58: 97-107, 2019 May.
Article in English | MEDLINE | ID: mdl-30695721

ABSTRACT

Resting-state functional magnetic resonance imaging is used to study how brain regions are functionally connected, by measuring the temporal correlation of fMRI signals while a subject is at rest. Sparse dictionary learning is used to estimate a dictionary of resting-state networks by decomposing the whole-brain signals into several temporal features (atoms), each shared by a set of voxels associated with a network. Recently, we proposed and validated a new method entitled Sparsity-based Analysis of Reliable K-hubness (SPARK), suggesting that connector hubs of brain networks participating in inter-network communication can be identified by counting the number of atoms involved in each voxel (the sparse number k). However, such hub analysis can be corrupted by the presence of noise-related atoms: physiological fluctuations from cardiorespiratory processes may remain even after band-pass filtering and regression of confound signals from the white matter and cerebrospinal fluid. Handling this issue might require manual classification of noisy atoms, which is a time-consuming and subjective task. Motivated by the fact that these physiological fluctuations are often localized in tissues close to large vasculature, such as the sagittal sinus, we propose an automatic classification of physiological noise-related atoms for SPARK using spatial priors and a stepwise regression procedure. We measured the degree to which the noise-characteristic time courses within the mask are explained by each atom, and classified noise-related atoms using a subject-specific threshold estimated with a bootstrap resampling strategy. Using real data from healthy subjects (N = 25), manual classification of the atoms by two independent reviewers showed the presence of sagittal sinus related noise in 65% of the runs; applying the same manual classification after the proposed automatic removal reduced this rate to 19%. A 10-fold cross-validation on real data showed good specificity and accuracy of the proposed automated method in classifying the target noise (area under the ROC curve = 0.89), compared to the manual classification considered as the reference. We demonstrated a decrease in k-hubness values in the voxels involved in the sagittal sinus at both the individual and group levels, suggesting a significant improvement of SPARK, which is particularly important when considering clinical applications.
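The sparse number k at the heart of SPARK is simply the count of dictionary atoms with a non-zero coefficient at each voxel. A toy sketch (the coefficient matrix below is invented purely for illustration):

```python
import numpy as np

def k_hubness(coefficients, tol=1e-8):
    """Sparse number k per voxel: how many dictionary atoms have a
    non-zero coefficient at that voxel (rows = atoms, cols = voxels)."""
    C = np.asarray(coefficients, float)
    return (np.abs(C) > tol).sum(axis=0)

# 4 atoms x 5 voxels; voxel 2 participates in 3 networks (a connector hub)
C = np.array([[1.0, 0.0, 0.5, 0.0, 0.0],
              [0.0, 1.0, 0.8, 0.0, 0.0],
              [0.0, 0.0, 0.3, 1.0, 0.0],
              [0.0, 0.0, 0.0, 0.0, 1.0]])
print(k_hubness(C).tolist())  # [1, 1, 3, 1, 1]
```

Removing a noise-related atom (a row of C) before counting is, in essence, what the proposed automatic classification contributes to the k-hubness maps.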


Subject(s)
Brain Mapping; Brain/diagnostic imaging; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging; Pattern Recognition, Automated; Adult; Algorithms; Brain/physiology; Female; Healthy Volunteers; Humans; Male; ROC Curve; Regression Analysis; Reproducibility of Results; Sensitivity and Specificity; Young Adult