Results 1 - 6 of 6
2.
Med Biol Eng Comput; 60(2): 583-598, 2022 Feb.
Article in English | MEDLINE | ID: mdl-35029812

ABSTRACT

Free-breathing external beam radiotherapy remains challenging due to the complex elastic or irregular motion of abdominal organs, as imaging moving organs leads to motion-blurring artifacts. In this paper, we propose a radial-based MRI reconstruction method for 3D free-breathing abdominal data using spatio-temporal geodesic trajectories, to quantify motion during radiotherapy. The prospective study was approved by the institutional review board and consent was obtained from all participants. A total of 25 healthy volunteers, 12 women and 13 men (38 years ± 12 [standard deviation]), and 11 liver cancer patients underwent imaging on a 3.0 T clinical MRI system. The radial acquisition, based on golden-angle sparse sampling, was performed with a 3D stack-of-stars gradient-echo sequence and reconstructed using a discretized piecewise spatio-temporal trajectory defined in a low-dimensional embedding, which tracks the inhale and exhale phases, allowing separation of distinct motion phases. Liver displacement between phases, measured with the proposed radial approach from the deformation vector fields, was compared to a navigator-based approach. Images reconstructed with the proposed technique with 20 motion states and registered with the multiscale B-spline approach received on average the highest Likert scores for overall image quality and visual SNR (score 3.2 ± 0.3, mean ± standard deviation), with liver displacement errors between 0.1 and 2.0 mm (mean 0.8 ± 0.6 mm). Compared to navigator-based approaches, the proposed method yields similar deformation vector field magnitudes and angle distributions, with improved reconstruction accuracy in terms of mean squared error. Schematic illustration of the proposed 4D-MRI reconstruction method, based on radial golden-angle acquisitions and a respiratory motion model from a manifold embedding used for motion tracking:
First, data are extracted from the center of k-space using golden-angle sampling and mapped onto a low-dimensional embedding describing the relationship between neighboring samples in the breathing cycle. The trained model is then used to extract the respiratory motion signal for slice re-ordering. The process then improves image quality through deformable image registration: using a reference volume, the deformation vector fields (DVFs) of sequential motion states are extracted, followed by deformable registrations. The output is a 4D MRI that allows motion to be visualized and quantified during free breathing.
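The first step of the pipeline above — extracting a respiratory surrogate signal from repeated k-space center samples and binning slices into motion states — can be sketched as follows. This is a minimal illustrative stand-in (PCA substitutes for the paper's manifold embedding, and all data here are synthetic), not the authors' implementation:

```python
import numpy as np

def respiratory_signal(kspace_centers, n_components=1):
    """Extract a 1D respiratory surrogate from repeated k-space center
    samples via PCA (a simple stand-in for a manifold embedding)."""
    X = kspace_centers - kspace_centers.mean(axis=0)
    # Principal direction of variation across the breathing cycle
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return X @ vt[:n_components].T

# Synthetic example: a sinusoidal breathing pattern buried in noise
rng = np.random.default_rng(0)
t = np.linspace(0, 4 * np.pi, 200)
breathing = np.sin(t)
centers = np.outer(breathing, rng.normal(size=32)) \
    + 0.05 * rng.normal(size=(200, 32))
s = respiratory_signal(centers).ravel()

# Re-order slices by respiratory amplitude into 20 motion states,
# matching the number of states used in the paper
order = np.argsort(s)
states = np.array_split(order, 20)
```

The recovered signal tracks the breathing amplitude up to an arbitrary sign, which is sufficient for sorting slices into inhale/exhale phase bins.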


Subjects
Three-Dimensional Imaging, Respiration, Artifacts, Female, Humans, Magnetic Resonance Imaging, Male, Motion (Physics), Prospective Studies
3.
Med Image Anal; 75: 102260, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34670149

ABSTRACT

Radiotherapy is a widely used treatment modality for various types of cancer. A challenge for precise delivery of radiation to the treatment site is the management of internal motion caused by the patient's breathing, especially around abdominal organs such as the liver. Current image-guided radiation therapy (IGRT) solutions rely on ionising imaging modalities such as X-ray or CBCT, which do not allow real-time target tracking. Ultrasound imaging (US), on the other hand, is relatively inexpensive, portable and non-ionising. Although 2D US can be acquired at a sufficient temporal frequency, it does not allow target tracking in multiple planes, while 3D US acquisitions are not fast enough for real-time use. In this work, a novel deep learning-based motion modelling framework is presented for ultrasound IGRT. Our solution combines an image similarity-based rigid alignment module with a deep deformable motion model. Leveraging the representational capabilities of convolutional autoencoders, our deformable motion model associates complex 3D deformations with 2D surrogate US images through a common learned low-dimensional representation. The model is trained on a variety of deformations and anatomies, which enables it to generate the 3D motion experienced by the liver of a previously unseen subject. During inference, our framework requires only two pre-treatment 3D volumes of the liver at extreme breathing phases and a live 2D surrogate image representing the current state of the organ. In this study, the presented model is evaluated on a 3D+t US dataset of 20 volunteers based on image similarity as well as anatomical target tracking performance. We report results that surpass comparable methodologies in both metric categories, with a mean tracking error of 3.5 ± 2.4 mm, demonstrating the potential of this technique for IGRT.
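The inference step described above — a live 2D surrogate image mapped through a shared low-dimensional representation to a dense 3D deformation — can be sketched with linear maps standing in for the trained encoder and decoder. Everything here is hypothetical (weight shapes, dimensions, and the linear stand-ins are illustrative assumptions, not the paper's network):

```python
import numpy as np

# Hypothetical "trained" weights: an encoder for 2D surrogate slices and a
# decoder from the shared latent space to dense 3D deformation fields.
rng = np.random.default_rng(1)
d_img, d_lat = 8 * 8, 8          # surrogate slice pixels, latent dimension
grid = (16, 16, 16)              # toy 3D deformation grid
d_def = 3 * np.prod(grid)        # 3 displacement components per voxel
W_enc = rng.normal(scale=0.1, size=(d_lat, d_img))
W_dec = rng.normal(scale=0.1, size=(d_def, d_lat))

def infer_deformation(surrogate_2d):
    """Map a live 2D surrogate image to a dense 3D deformation field
    through the common learned low-dimensional representation."""
    z = W_enc @ surrogate_2d.ravel()            # encode surrogate to latent code
    dvf = (W_dec @ z).reshape(3, *grid)         # decode latent to 3D DVF
    return dvf

live_slice = rng.normal(size=(8, 8))
dvf = infer_deformation(live_slice)
```

The key design point is that only the cheap 2D acquisition happens at treatment time; the expensive 3D information is baked into the decoder during training.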


Subjects
Three-Dimensional Imaging, Image-Guided Radiotherapy, Humans, Motion (Physics), Ultrasonography, Interventional Ultrasonography
4.
Med Image Anal; 74: 102250, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34601453

ABSTRACT

Organ shape and location variability induced by respiration constitutes one of the main challenges during dose delivery in radiotherapy. Providing up-to-date volumetric information during treatment can improve tumor tracking, thereby increasing treatment efficiency and reducing damage to healthy tissue. We propose a novel probabilistic model to address the problem of volumetric estimation with a scalable predictive horizon from image-based surrogates during radiotherapy treatments, thus enabling out-of-plane tracking of targets. The problem is formulated as a conditional learning task, where the predictive variables are the 2D surrogate images and a pre-operative static 3D volume. The model learns a distribution of realistic motion fields over a population dataset. Simultaneously, a seq2seq-inspired temporal mechanism acts on the surrogate images, yielding extrapolated-in-time representations. The phase-specific motion distributions are associated with the predicted temporal representations, allowing the recovery of dense organ deformation at multiple time points. Due to its generative nature, the model enables uncertainty estimation by sampling the latent space multiple times. Furthermore, it can be readily personalized to a new subject via fine-tuning and does not require inter-subject correspondences. The proposed model was evaluated on free-breathing 4D MRI and ultrasound datasets from 25 healthy volunteers, as well as on 11 cancer patients. A navigator-based data augmentation strategy was used during the slice reordering process to increase model robustness against inter-cycle variability. The patient data were used as a hold-out test set. Our approach yields volumetric prediction from image surrogates with mean errors of 1.67 ± 1.68 mm and 2.17 ± 0.82 mm on unseen cases of the patient MRI and US datasets, respectively.
Moreover, model personalization yields a mean landmark error of 1.4 ± 1.1 mm compared to ground-truth annotations in the volunteer MRI dataset, with statistically significant improvements over the state of the art.
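The uncertainty estimation described above — sampling the latent space multiple times and decoding each draw — follows the standard Monte-Carlo pattern for generative models. A minimal sketch, with a toy linear decoder standing in for the trained model (all names and dimensions are illustrative):

```python
import numpy as np

def sample_predictions(decode, z_mean, z_std, n_samples=50, rng=None):
    """Monte-Carlo uncertainty estimation: draw repeatedly from the latent
    distribution and decode each sample into a motion-field prediction."""
    rng = np.random.default_rng(0) if rng is None else rng
    draws = np.stack([
        decode(z_mean + z_std * rng.normal(size=z_mean.shape))
        for _ in range(n_samples)
    ])
    # Mean over draws is the point prediction; per-element spread is the
    # model's uncertainty about the deformation at that location.
    return draws.mean(axis=0), draws.std(axis=0)

# Toy decoder: 8-dim latent code -> 100-element flattened motion field
W = np.random.default_rng(1).normal(size=(100, 8))
prediction, uncertainty = sample_predictions(
    lambda z: W @ z, z_mean=np.zeros(8), z_std=np.ones(8))
```

Regions with high per-voxel spread flag parts of the anatomy where the predicted deformation should be trusted less.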


Subjects
Image-Guided Radiotherapy, Humans, Magnetic Resonance Imaging, Statistical Models, Respiration, Ultrasonography
5.
Int J Comput Assist Radiol Surg; 16(7): 1213-1225, 2021 Jul.
Article in English | MEDLINE | ID: mdl-34114173

ABSTRACT

PURPOSE: Respiratory motion of thoracic organs poses a severe challenge for the administration of image-guided radiotherapy treatments. Providing online and up-to-date volumetric information during free breathing can improve target tracking, ultimately increasing treatment efficiency and reducing toxicity to surrounding healthy tissue. In this work, a novel population-based generative network is proposed to address the problem of 3D target location prediction from 2D image-based surrogates during radiotherapy, thus enabling out-of-plane tracking of treatment targets using images acquired in real time. METHODS: The proposed model is trained to simultaneously create a low-dimensional manifold representation of 3D non-rigid deformations and to predict, ahead of time, the motion of the treatment target. The predictive capabilities of the model allow correcting target location errors that can arise due to system latency, using only a baseline volume of the patient anatomy. Importantly, the method does not require supervised information such as ground-truth registration fields, organ segmentation, or anatomical landmarks. RESULTS: The proposed architecture was evaluated on both free-breathing 4D MRI and ultrasound datasets. Potential challenges present in a realistic therapy setting, such as different acquisition protocols, were taken into account by using an independent hold-out test set. Our approach enables 3D target tracking from single-view slices with mean landmark errors of 1.8 mm, 2.4 mm and 5.2 mm in the volunteer MRI, patient MRI and US datasets, respectively, without requiring any prior subject-specific 4D acquisition. CONCLUSIONS: This model presents several advantages over state-of-the-art approaches. Namely, it benefits from an explainable latent space with explicit respiratory phase discrimination. Thanks to the strong generalization capabilities of neural networks, it does not require establishing inter-subject correspondences.
Once trained, it can be quickly deployed with an inference time of only 8 ms. The results show the capability of the network to predict future anatomical changes and track tumors in real time, yielding statistically significant improvements over related methods.
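The latency-correction idea above — predicting the target's position ahead of time so that the 8 ms inference budget plus system delays do not leave the beam aiming at stale anatomy — can be sketched with a simple linear extrapolation of the latent trajectory. This is an illustrative stand-in, not the paper's temporal predictor:

```python
import numpy as np

def predict_ahead(latent_history, dt, latency):
    """Extrapolate the latent respiratory trajectory forward by the total
    system latency (a linear stand-in for a learned temporal predictor).

    latent_history : array of shape (n_steps, d_lat), most recent last
    dt             : sampling interval of the surrogate stream (seconds)
    latency        : how far ahead to predict (seconds)
    """
    velocity = (latent_history[-1] - latent_history[-2]) / dt
    return latent_history[-1] + velocity * latency

# Toy 1D latent trajectory sampled every 100 ms, predicted 8 ms ahead
history = np.array([[0.0], [0.1], [0.2]])
z_future = predict_ahead(history, dt=0.1, latency=0.008)
```

A learned predictor replaces the finite-difference velocity with a model of the quasi-periodic breathing dynamics, but the interface — latent history in, future latent code out — is the same.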


Subjects
Three-Dimensional Imaging/methods, Magnetic Resonance Imaging/methods, Neural Networks (Computer), Image-Guided Radiotherapy/methods, Thoracic Neoplasms/radiotherapy, Ultrasonography/methods, Humans, Respiration, Thoracic Neoplasms/diagnosis
6.
Med Image Anal; 64: 101754, 2020 Aug.
Article in English | MEDLINE | ID: mdl-32580056

ABSTRACT

External beam radiotherapy is a commonly used treatment option for patients with cancer in the thoracic and abdominal regions. However, respiratory motion constitutes a major limitation during the intervention: it may cause the pre-defined target and trajectories determined during planning to deviate from the actual anatomy. We propose a novel framework to predict in-plane organ motion. We introduce a recurrent encoder-decoder architecture which leverages feature representations at multiple scales. It simultaneously learns to map dense deformations between consecutive images of a given input sequence and to extrapolate them through time. Subsequently, several cascade-arranged spatial transformers use the predicted deformation fields to generate a future image sequence. We propose a composite loss function which minimizes the difference between ground-truth and predicted images while maintaining smooth deformations. Our model is trained end-to-end in an unsupervised manner, so it does not require additional information beyond image data. Moreover, no pre-processing steps such as segmentation or registration are needed. We report results on 85 different cases (healthy subjects and patients) belonging to multiple datasets across different imaging modalities. Experiments investigated the importance of the proposed multi-scale architecture design and the effect of increasing the number of predicted frames on the overall accuracy of the model. The proposed model predicted vessel positions in the next temporal image with a median accuracy of 0.45 (0.55) mm, 0.45 (0.74) mm and 0.28 (0.58) mm in the MRI, US and CT datasets, respectively. The obtained results show the strong potential of the model, achieving accurate matching between predicted and target images across several imaging modalities.
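The spatial-transformer step above — warping the current frame with a predicted deformation field to synthesize the future frame — amounts to backward warping with bilinear interpolation. A minimal 2D sketch using scipy (illustrative only; the paper's transformers are differentiable network layers, not scipy calls):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp(image, dvf):
    """Backward-warp a 2D image with a dense deformation field, as a
    spatial transformer does when generating the predicted next frame.

    dvf has shape (2, H, W): per-pixel (row, col) displacements telling
    each output pixel where in the source image to sample from.
    """
    h, w = image.shape
    grid = np.mgrid[0:h, 0:w].astype(float)     # identity sampling grid
    coords = grid + dvf                          # displaced sample locations
    # Bilinear interpolation; edge values are clamped
    return map_coordinates(image, coords, order=1, mode='nearest')

img = np.arange(25, dtype=float).reshape(5, 5)
# Deformation that samples one row below: content shifts up by one row
dvf = np.stack([np.ones((5, 5)), np.zeros((5, 5))])
out = warp(img, dvf)
```

Because the warp is differentiable in the deformation field, the composite image-difference loss can be backpropagated through it, which is what makes the end-to-end unsupervised training possible.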


Subjects
Magnetic Resonance Imaging, Respiration, Humans