1.
Med Phys; 51(4): 2367-2377, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38408022

ABSTRACT

BACKGROUND: Deep learning-based unsupervised image registration has recently been proposed, promising fast registration. However, it has yet to be adopted in the online adaptive magnetic resonance imaging-guided radiotherapy (MRgRT) workflow.

PURPOSE: In this paper, we design an unsupervised, joint rigid and deformable registration framework for contour propagation in MRgRT of prostate cancer.

METHODS: Three-dimensional pelvic T2-weighted MRIs of 143 prostate cancer patients undergoing radiotherapy were collected and divided into 110, 13, and 20 patients for training, validation, and testing. We designed a framework using convolutional neural networks (CNNs) for rigid and deformable registration. We selected the deformable registration network architecture among U-Net, MS-D Net, and LapIRN and optimized the training strategy (end-to-end vs. sequential). The framework was compared against an iterative baseline registration. We evaluated registration accuracy (Dice coefficient and Hausdorff distance of the prostate and bladder contours), structural similarity index, and folding percentage to compare the methods. We also evaluated the framework's robustness to rigid and elastic deformations and to bias field perturbations.

RESULTS: The end-to-end trained framework with LapIRN as the deformable component achieved the best median (interquartile range) prostate and bladder Dice of 0.89 (0.85-0.91) and 0.86 (0.80-0.91), respectively. This accuracy was comparable to the iterative baseline registration: prostate and bladder Dice of 0.91 (0.88-0.93) and 0.86 (0.80-0.92). The best models complete rigid and deformable registration in 0.002 (0.0005) s and 0.74 (0.43) s, respectively (Nvidia Tesla V100-PCIe 32 GB GPU). We found that the models are robust to translations up to 52 mm, rotations up to 15°, elastic deformations up to 40 mm, and bias fields.
CONCLUSIONS: Our proposed unsupervised, deep learning-based registration framework can perform rigid and deformable registration in less than a second with contour propagation accuracy comparable with iterative registration.
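The contour-propagation accuracy reported above is measured with the Dice similarity coefficient, the overlap metric between a propagated contour and a reference contour. As a minimal illustration of how that metric is computed on binary masks (the `dice` helper is a hypothetical sketch, not code from the paper):

```python
import numpy as np


def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks.

    Returns 2|A ∩ B| / (|A| + |B|); defined as 1.0 when both masks are empty.
    """
    a = a.astype(bool)
    b = b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    return 2.0 * float(intersection) / float(total) if total > 0 else 1.0
```

A Dice of 0.89 for the prostate, as reported, means the propagated and reference contours share roughly 89% of their combined volume.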


Subjects
Deep Learning, Prostatic Neoplasms, Male, Humans, Prostate/diagnostic imaging, Prostate/pathology, Pelvis, Prostatic Neoplasms/diagnostic imaging, Prostatic Neoplasms/radiotherapy, Prostatic Neoplasms/pathology, Radiotherapy Planning, Computer-Assisted/methods, Magnetic Resonance Imaging/methods, Image Processing, Computer-Assisted/methods, Algorithms
2.
Phys Imaging Radiat Oncol; 25: 100416, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36969503

ABSTRACT

Background and purpose: To improve cone-beam computed tomography (CBCT), deep learning (DL) models are being explored to generate synthetic CTs (sCT). sCT evaluation is mainly focused on image quality and CT number accuracy. However, correct representation of the daily anatomy of the CBCT is also important for sCTs in adaptive radiotherapy. The aim of this study was to emphasize the importance of anatomical correctness by quantitatively assessing sCT scans generated from CBCT scans using different paired and unpaired DL models.

Materials and methods: Planning CTs (pCT) and CBCTs of 56 prostate cancer patients were included to generate sCTs. Three DL models, Dual-UNet, Single-UNet, and Cycle-consistent Generative Adversarial Network (CycleGAN), were evaluated on image quality and anatomical correctness. Image quality was assessed using image metrics such as Mean Absolute Error (MAE). Anatomical correctness between sCT and CBCT was quantified using organ-at-risk volumes and average surface distances (ASD).

Results: MAE was 24 Hounsfield Units (HU) [range: 19-30 HU] for Dual-UNet, 40 HU [range: 34-56 HU] for Single-UNet, and 41 HU [range: 37-46 HU] for CycleGAN. Bladder ASD was 4.5 mm [range: 1.6-12.3 mm] for Dual-UNet, 0.7 mm [range: 0.4-1.2 mm] for Single-UNet, and 0.9 mm [range: 0.4-1.1 mm] for CycleGAN.

Conclusions: Although Dual-UNet performed best on standard image quality measures such as MAE, the contour-based anatomical comparison with the CBCT showed that Dual-UNet performed worst anatomically. This emphasizes the importance of adding anatomy-based evaluation of sCTs generated by DL models. For applications in the pelvic area, direct anatomical comparison with the CBCT may provide a useful method to assess the clinical applicability of DL-based sCT generation methods.
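The headline image-quality metric in this study, Mean Absolute Error in Hounsfield units, is simply the voxel-wise mean of |sCT - reference|, typically restricted to a body mask so that air outside the patient does not dilute the error. A minimal sketch under those assumptions (the `mae_hu` helper is hypothetical, not code from the paper):

```python
from typing import Optional

import numpy as np


def mae_hu(sct: np.ndarray, ref: np.ndarray,
           mask: Optional[np.ndarray] = None) -> float:
    """Mean absolute error in Hounsfield units between an sCT and a reference CT.

    If a boolean mask is given (e.g. a body contour), the error is averaged
    only over voxels inside the mask.
    """
    diff = np.abs(sct.astype(np.float64) - ref.astype(np.float64))
    if mask is not None:
        diff = diff[mask.astype(bool)]
    return float(diff.mean())
```

Note that a low MAE alone does not guarantee the sCT reproduces the daily anatomy, which is exactly the gap the contour-based ASD comparison in this study is meant to expose.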
