Results 1 - 10 of 10
1.
Med Phys ; 51(4): 2367-2377, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38408022

ABSTRACT

BACKGROUND: Deep learning-based unsupervised image registration has recently been proposed, promising fast registration. However, it has yet to be adopted in the online adaptive magnetic resonance imaging-guided radiotherapy (MRgRT) workflow. PURPOSE: In this paper, we design an unsupervised joint rigid and deformable registration framework for contour propagation in MRgRT of prostate cancer. METHODS: Three-dimensional pelvic T2-weighted MRIs of 143 prostate cancer patients undergoing radiotherapy were collected and divided into 110, 13, and 20 patients for training, validation, and testing. We designed a framework using convolutional neural networks (CNNs) for rigid and deformable registration. We selected the deformable registration network architecture among U-Net, MS-D Net, and LapIRN and optimized the training strategy (end-to-end vs. sequential). The framework was compared against an iterative baseline registration. We evaluated registration accuracy (the Dice and Hausdorff distance of the prostate and bladder contours), structural similarity index, and folding percentage to compare the methods. We also evaluated the framework's robustness to rigid and elastic deformations and bias field perturbations. RESULTS: The end-to-end trained framework comprising LapIRN for the deformable component achieved the best median (interquartile range) prostate and bladder Dice of 0.89 (0.85-0.91) and 0.86 (0.80-0.91), respectively. This accuracy was comparable to the iterative baseline registration: prostate and bladder Dice of 0.91 (0.88-0.93) and 0.86 (0.80-0.92). The best models complete rigid and deformable registration in 0.002 (0.0005) and 0.74 (0.43) s (Nvidia Tesla V100-PCIe 32 GB GPU), respectively. We found that the models are robust to translations up to 52 mm, rotations up to 15°, elastic deformations up to 40 mm, and bias fields.
CONCLUSIONS: Our proposed unsupervised, deep learning-based registration framework can perform rigid and deformable registration in less than a second with contour propagation accuracy comparable with iterative registration.
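The Dice overlap used throughout these results is straightforward to compute from two binary contour masks. A minimal NumPy sketch (the toy masks below are illustrative, not data from the study):

```python
import numpy as np

def dice(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# Toy example: two overlapping 2-D "contours"
a = np.zeros((10, 10), dtype=bool); a[2:7, 2:7] = True   # 25 voxels
b = np.zeros((10, 10), dtype=bool); b[4:9, 4:9] = True   # 25 voxels
print(round(dice(a, b), 2))  # overlap = 3x3 = 9, so 2*9/50 = 0.36
```

The same function applies unchanged to 3-D voxel masks such as the prostate and bladder contours evaluated in the paper.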


Subjects
Deep Learning , Prostatic Neoplasms , Male , Humans , Prostate/diagnostic imaging , Prostate/pathology , Pelvis , Prostatic Neoplasms/diagnostic imaging , Prostatic Neoplasms/radiotherapy , Prostatic Neoplasms/pathology , Radiotherapy Planning, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Image Processing, Computer-Assisted/methods , Algorithms
2.
Phys Imaging Radiat Oncol ; 25: 100416, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36969503

ABSTRACT

Background and purpose: To improve cone-beam computed tomography (CBCT), deep-learning (DL) models are being explored to generate synthetic CTs (sCT). sCT evaluation is mainly focused on image quality and CT number accuracy. However, correct representation of the daily anatomy of the CBCT is also important for sCTs in adaptive radiotherapy. The aim of this study was to emphasize the importance of anatomical correctness by quantitatively assessing sCT scans generated from CBCT scans using different paired and unpaired DL models. Materials and methods: Planning CTs (pCT) and CBCTs of 56 prostate cancer patients were included to generate sCTs. Three different DL models, Dual-UNet, Single-UNet and Cycle-consistent Generative Adversarial Network (CycleGAN), were evaluated on image quality and anatomical correctness. Image quality was assessed using image metrics, such as Mean Absolute Error (MAE). The anatomical correctness between sCT and CBCT was quantified using organs-at-risk volumes and average surface distances (ASD). Results: MAE was 24 Hounsfield Units (HU) [range: 19-30 HU] for Dual-UNet, 40 HU [range: 34-56 HU] for Single-UNet and 41 HU [range: 37-46 HU] for CycleGAN. Bladder ASD was 4.5 mm [range: 1.6-12.3 mm] for Dual-UNet, 0.7 mm [range: 0.4-1.2 mm] for Single-UNet and 0.9 mm [range: 0.4-1.1 mm] for CycleGAN. Conclusions: Although Dual-UNet performed best on standard image quality measures such as MAE, the contour-based anatomical comparison with the CBCT showed that Dual-UNet performed worst anatomically. This emphasizes the importance of adding anatomy-based evaluation of sCTs generated by DL models. For applications in the pelvic area, direct anatomical comparison with the CBCT may provide a useful method to assess the clinical applicability of DL-based sCT generation methods.
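The MAE figures above compare sCT and reference HU values voxel by voxel. A minimal NumPy sketch (toy volumes, not study data; restricting the error to a body mask is an assumption here, as such details vary between studies):

```python
import numpy as np

def mae_hu(sct: np.ndarray, ref_ct: np.ndarray, body_mask=None) -> float:
    """Mean absolute error in HU, optionally restricted to a body mask."""
    diff = np.abs(sct.astype(np.float64) - ref_ct.astype(np.float64))
    if body_mask is not None:
        diff = diff[body_mask]
    return float(diff.mean())

# Toy volumes in HU: uniform soft tissue plus an alternating +/-20 HU error
ref = np.full((4, 4, 4), 40.0)
syn = ref + np.tile([20.0, -20.0], 32).reshape(4, 4, 4)
print(mae_hu(syn, ref))  # prints 20.0
```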

3.
Dentomaxillofac Radiol ; 51(7): 20210437, 2022 Sep 01.
Article in English | MEDLINE | ID: mdl-35532946

ABSTRACT

Computer-assisted surgery (CAS) allows clinicians to personalize treatments and surgical interventions and has therefore become an increasingly popular treatment modality in maxillofacial surgery. The current maxillofacial CAS consists of three main steps: (1) CT image reconstruction, (2) bone segmentation, and (3) surgical planning. However, each of these three steps can introduce errors that can heavily affect the treatment outcome. As a consequence, tedious and time-consuming manual post-processing is often necessary to ensure that each step is performed adequately. One way to overcome this issue is by developing and implementing neural networks (NNs) within the maxillofacial CAS workflow. These learning algorithms can be trained to perform specific tasks without the need for explicitly defined rules. In recent years, an extremely large number of novel NN approaches have been proposed for a wide variety of applications, which makes it a difficult task to keep up with all relevant developments. This study therefore aimed to summarize and review all relevant NN approaches applied for CT image reconstruction, bone segmentation, and surgical planning. After full text screening, 76 publications were identified: 32 focusing on CT image reconstruction, 33 focusing on bone segmentation and 11 focusing on surgical planning. Generally, convolutional NNs were most widely used in the identified studies, although the multilayer perceptron was most commonly applied in surgical planning tasks. Moreover, the drawbacks of current approaches and promising research avenues are discussed.


Subjects
Deep Learning , Surgery, Oral , Humans , Image Processing, Computer-Assisted/methods , Neural Networks, Computer , Tomography, X-Ray Computed/methods
4.
Comput Methods Programs Biomed ; 208: 106261, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34289437

ABSTRACT

BACKGROUND AND OBJECTIVES: Deep learning is being increasingly used for deformable image registration and unsupervised approaches, in particular, have shown great potential. However, the registration of abdominopelvic Computed Tomography (CT) images remains challenging due to the larger displacements compared to those in brain or prostate Magnetic Resonance Imaging datasets that are typically considered as benchmarks. In this study, we investigate the use of the commonly used unsupervised deep learning framework VoxelMorph for the registration of a longitudinal abdominopelvic CT dataset acquired in patients with bone metastases from breast cancer. METHODS: As a pre-processing step, the abdominopelvic CT images were refined by automatically removing the CT table and all other extra-corporeal components. To improve the learning capabilities of the VoxelMorph framework when only a limited amount of training data is available, a novel incremental training strategy is proposed based on simulated deformations of consecutive CT images in the longitudinal dataset. This devised training strategy was compared against training on simulated deformations of a single CT volume. A widely used software toolbox for deformable image registration called NiftyReg was used as a benchmark. The evaluations were performed by calculating the Dice Similarity Coefficient (DSC) between manual vertebrae segmentations and the Structural Similarity Index (SSIM). RESULTS: The CT table removal procedure allowed both VoxelMorph and NiftyReg to achieve significantly better registration performance. In a 4-fold cross-validation scheme, the incremental training strategy resulted in better registration performance compared to training on a single volume, with a mean DSC of 0.929±0.037 and 0.883±0.033, and a mean SSIM of 0.984±0.009 and 0.969±0.007, respectively. 
Although our deformable image registration method did not outperform NiftyReg in terms of DSC (0.988±0.003) or SSIM (0.995±0.002), the registrations were approximately 300 times faster. CONCLUSIONS: This study showed the feasibility of deep learning-based deformable registration of longitudinal abdominopelvic CT images via a novel incremental training strategy based on simulated deformations.
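Simulated deformations for this kind of training can be generated as smoothed random displacement fields applied to an existing CT volume. A hedged sketch of that general idea using SciPy (parameter values and function names are illustrative; this is not the authors' exact procedure):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def random_elastic_deform(vol, max_disp=5.0, smooth=4.0, rng=None):
    """Warp a 3-D volume with a random, Gaussian-smoothed displacement field."""
    rng = np.random.default_rng(rng)
    shape = vol.shape
    # One random displacement component per axis, smoothed for spatial coherence
    disp = [gaussian_filter(rng.uniform(-1, 1, shape), smooth) for _ in shape]
    disp = [d / (np.abs(d).max() + 1e-8) * max_disp for d in disp]
    coords = np.meshgrid(*[np.arange(s) for s in shape], indexing="ij")
    warped_coords = [c + d for c, d in zip(coords, disp)]
    return map_coordinates(vol, warped_coords, order=1, mode="nearest")

vol = np.random.default_rng(0).random((16, 16, 16))
warped = random_elastic_deform(vol, max_disp=2.0, rng=0)
print(vol.shape == warped.shape)  # prints True: shape is preserved
```

Pairs of (original, warped) volumes with the known displacement field can then serve as training samples when real deformation ground truth is unavailable.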


Subjects
Deep Learning , Humans , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Male , Software , Tomography, X-Ray Computed
5.
Med Phys ; 46(11): 5027-5035, 2019 Nov.
Article in English | MEDLINE | ID: mdl-31463937

ABSTRACT

PURPOSE: In order to attain anatomical models, surgical guides and implants for computer-assisted surgery, accurate segmentation of bony structures in cone-beam computed tomography (CBCT) scans is required. However, this image segmentation step is often impeded by metal artifacts. Therefore, this study aimed to develop a mixed-scale dense convolutional neural network (MS-D network) for bone segmentation in CBCT scans affected by metal artifacts. METHOD: Training data were acquired from 20 dental CBCT scans affected by metal artifacts. An experienced medical engineer segmented the bony structures in all CBCT scans using global thresholding and manually removed all remaining noise and metal artifacts. The resulting gold standard segmentations were used to train an MS-D network comprising 100 convolutional layers using far fewer trainable parameters than alternative convolutional neural network (CNN) architectures. The bone segmentation performance of the MS-D network was evaluated using a leave-2-out scheme and compared with a clinical snake evolution algorithm and two state-of-the-art CNN architectures (U-Net and ResNet). All segmented CBCT scans were subsequently converted into standard tessellation language (STL) models and geometrically compared with the gold standard. RESULTS: CBCT scans segmented using the MS-D network, U-Net, ResNet and the snake evolution algorithm demonstrated mean Dice similarity coefficients of 0.87 ± 0.06, 0.87 ± 0.07, 0.86 ± 0.05, and 0.78 ± 0.07, respectively. The STL models acquired using the MS-D network, U-Net, ResNet and the snake evolution algorithm demonstrated mean absolute deviations of 0.44 mm ± 0.13 mm, 0.43 mm ± 0.16 mm, 0.40 mm ± 0.12 mm and 0.57 mm ± 0.22 mm, respectively. In contrast to the MS-D network, the ResNet introduced wave-like artifacts in the STL models, whereas the U-Net incorrectly labeled background voxels as bone around the vertebrae in 4 of the 9 CBCT scans containing vertebrae. 
CONCLUSION: The MS-D network was able to accurately segment bony structures in CBCT scans affected by metal artifacts.
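The gold-standard segmentations in this study were produced by global thresholding followed by manual clean-up. The thresholding step itself amounts to a single comparison per voxel; a minimal sketch (the 300 HU cutoff is an illustrative value, not taken from the paper):

```python
import numpy as np

def threshold_bone(volume_hu: np.ndarray, lower: float = 300.0) -> np.ndarray:
    """Global thresholding: voxels at or above `lower` HU are labeled bone.
    Metal-artifact streaks can also exceed this cutoff, which is exactly why
    manual clean-up (or a learned segmentation) is needed afterwards."""
    return volume_hu >= lower

# Toy slice: air, soft tissue, bone, metal streak, muscle, dense bone (HU)
vol = np.array([[-1000.0,  40.0, 350.0],
                [ 1500.0, 200.0, 800.0]])
mask = threshold_bone(vol)
print(int(mask.sum()))  # prints 3: the bone voxel, the streak, the dense bone
```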


Subjects
Artifacts , Cone-Beam Computed Tomography , Image Processing, Computer-Assisted/methods , Metals , Neural Networks, Computer , Tooth/diagnostic imaging , Humans , Prostheses and Implants
6.
Radiat Prot Dosimetry ; 179(1): 58-68, 2018 Apr 01.
Article in English | MEDLINE | ID: mdl-29040707

ABSTRACT

The objective of the present study was to assess and compare the effective doses in the wrist region resulting from a conventional radiography device, a multislice computed tomography (MSCT) device and two cone beam computed tomography (CBCT) devices, using MOSFET dosemeters and a custom-made anthropomorphic RANDO phantom according to the ICRP 103 recommendation. The effective dose for conventional radiography was 1.0 µSv. The effective doses ranged between 0.7 µSv and 1.6 µSv for the NewTom 5G CBCT, and were 2.4 µSv for the Planmed Verity CBCT and 8.6 µSv for the MSCT. Compared with the effective dose for AP and LAT projections of a conventional radiographic device, this corresponds to an 8.6-fold effective dose for the standard MSCT protocol and a 0.7- to 2.4-fold effective dose for the standard CBCT protocols. Compared to the MSCT device, the CBCT devices offer a 3D view of the wrist at significantly lower effective doses.
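The fold-increases quoted above are plain ratios against the conventional radiography dose; a quick check of the reported numbers:

```python
# Fold-increase of effective dose relative to conventional wrist radiography
# (1.0 uSv, AP + LAT projections), using the values reported in the abstract.
conventional = 1.0  # uSv
doses = {"NewTom 5G (min)": 0.7, "NewTom 5G (max)": 1.6,
         "Planmed Verity": 2.4, "MSCT": 8.6}
folds = {name: d / conventional for name, d in doses.items()}
print(folds["MSCT"])  # prints 8.6, the 8.6-fold increase stated in the text
```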


Subjects
Cone-Beam Computed Tomography/instrumentation , Multidetector Computed Tomography/instrumentation , Radiation Dosage , Wrist/radiation effects , Humans , Phantoms, Imaging
7.
Med Phys ; 45(1): 92-100, 2018 Jan.
Article in English | MEDLINE | ID: mdl-29091278

ABSTRACT

PURPOSE: Imaging phantoms are widely used for testing and optimization of imaging devices without the need to expose humans to irradiation. However, commercially available phantoms are commonly manufactured in simple, generic forms and sizes and therefore do not resemble the clinical situation for many patients. METHODS: Using 3D printing techniques, we created a life-size phantom based on a clinical CT scan of the thorax from a patient with lung cancer. It was assembled from bony structures printed in gypsum, lung structures consisting of airways, blood vessels >1 mm, and outer lung surface, three lung tumors printed in nylon, and soft tissues represented by silicone (poured into a 3D-printed mold). RESULTS: Kilovoltage x-ray and CT images of the phantom closely resemble those of the real patient in terms of size, shapes, and structures. Surface comparison using 3D models obtained from the phantom and the 3D models used for printing showed mean differences <1 mm for all structures. Tensile tests of the materials used for the phantom show that the phantom is able to endure radiation doses over 24,000 Gy. CONCLUSIONS: It is feasible to create an anthropomorphic thorax phantom using 3D printing and molding techniques. The phantom closely resembles a real patient in terms of spatial accuracy and is currently being used to evaluate x-ray-based imaging quality and positional verification techniques for radiotherapy.
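A surface comparison like the one above (mean differences under 1 mm between the phantom-derived 3D models and the print models) can be computed as nearest-neighbour distances between surface point clouds. A sketch using SciPy (not the authors' actual software; the point sets are toy data):

```python
import numpy as np
from scipy.spatial import cKDTree

def mean_surface_distance(points_a: np.ndarray, points_b: np.ndarray) -> float:
    """Mean nearest-neighbour distance from surface A to surface B (mm)."""
    tree = cKDTree(points_b)           # fast nearest-neighbour lookup on B
    dists, _ = tree.query(points_a)    # closest B point for each A point
    return float(dists.mean())

# Toy surfaces: a 3x3 grid of points and the same grid shifted 0.5 mm in z
a = np.array([[x, y, 0.0] for x in range(3) for y in range(3)], dtype=float)
b = a + np.array([0.0, 0.0, 0.5])
print(mean_surface_distance(a, b))  # prints 0.5
```

For a symmetric measure, the distance is usually averaged over both directions (A to B and B to A).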


Subjects
Phantoms, Imaging , Printing, Three-Dimensional , Thorax/diagnostic imaging , Tomography, X-Ray Computed/instrumentation , Humans
8.
Sci Rep ; 7(1): 10021, 2017 08 30.
Article in English | MEDLINE | ID: mdl-28855717

ABSTRACT

Surgical reconstruction of cartilaginous defects remains a major challenge. In the current study, we aimed to identify an imaging strategy for the development of patient-specific constructs that aid in the reconstruction of nasal deformities. Magnetic Resonance Imaging (MRI) was performed on a human cadaver head to find the optimal MRI sequence for nasal cartilage. This sequence was subsequently used on a volunteer. Images of both were assessed by three independent researchers to determine measurement error and total segmentation time. Three-dimensionally (3D) reconstructed alar cartilage was then additively manufactured. Validity was assessed by comparing manually segmented MR images to the gold standard (micro-CT). Manual segmentation allowed delineation of the nasal cartilages. Inter- and intra-observer agreement was acceptable in the cadaver (coefficient of variation 4.6-12.5%) but lower in the volunteer (coefficient of variation 0.6-21.9%). Segmentation times did not differ between observers (cadaver P = 0.36; volunteer P = 0.6). The lateral crus of the alar cartilage was consistently identified by all observers, whereas part of the medial crus was consistently missed. This study suggests that MRI is a feasible imaging modality for the development of 3D alar constructs for patient-specific reconstruction.
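The observer agreement above is reported as a coefficient of variation, i.e. the standard deviation relative to the mean. A minimal sketch with hypothetical repeated measurements (not the study's data):

```python
import numpy as np

def coefficient_of_variation(measurements) -> float:
    """CV (%) = sample standard deviation / mean * 100."""
    m = np.asarray(measurements, dtype=float)
    return float(m.std(ddof=1) / m.mean() * 100.0)

# Hypothetical repeated measurements of one cartilage dimension (mm)
obs = [4.0, 4.2, 3.8]
print(round(coefficient_of_variation(obs), 1))  # prints 5.0
```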


Subjects
Magnetic Resonance Imaging/methods , Nasal Cartilages/diagnostic imaging , Patient-Specific Modeling , Reconstructive Surgical Procedures/methods , Printing, Three-Dimensional , Aged , Female , Humans , Nasal Cartilages/surgery
9.
Int J Comput Assist Radiol Surg ; 12(4): 607-615, 2017 Apr.
Article in English | MEDLINE | ID: mdl-27718124

ABSTRACT

PURPOSE: Medical additive manufacturing requires standard tessellation language (STL) models. Such models are commonly derived from computed tomography (CT) images using thresholding. Threshold selection can be performed manually or automatically. The aim of this study was to assess the impact of manual and default threshold selection on the reliability and accuracy of skull STL models using different CT technologies. METHOD: One female and one male human cadaver head were imaged using multi-detector row CT, dual-energy CT, and two cone-beam CT scanners. Four medical engineers manually thresholded the bony structures on all CT images. The lowest and highest selected mean threshold values and the default threshold value were used to generate skull STL models. Geometric variations between all manually thresholded STL models were calculated. Furthermore, in order to calculate the accuracy of the manually and default thresholded STL models, all STL models were superimposed on an optical scan of the dry female and male skulls ("gold standard"). RESULTS: The intra- and inter-observer variability of the manual threshold selection was good (intra-class correlation coefficients >0.9). All engineers selected grey values closer to soft tissue to compensate for bone voids. Geometric variations between the manually thresholded STL models were 0.13 mm (multi-detector row CT), 0.59 mm (dual-energy CT), and 0.55 mm (cone-beam CT). All STL models demonstrated inaccuracies ranging from -0.8 to +1.1 mm (multi-detector row CT), -0.7 to +2.0 mm (dual-energy CT), and -2.3 to +4.8 mm (cone-beam CT). CONCLUSIONS: This study demonstrates that manual threshold selection results in better STL models than default thresholding. The use of dual-energy CT and cone-beam CT technology in its present form does not deliver reliable or accurate STL models for medical additive manufacturing. New approaches are required that are based on pattern recognition and machine learning algorithms.


Subjects
Head/diagnostic imaging , Imaging, Three-Dimensional/methods , Skull/diagnostic imaging , Tomography, X-Ray Computed/methods , Algorithms , Female , Humans , Male , Reproducibility of Results
10.
J Oral Maxillofac Surg ; 74(8): 1608-12, 2016 Aug.
Article in English | MEDLINE | ID: mdl-27137437

ABSTRACT

Fractures of the orbital floor are often a result of traffic accidents or interpersonal violence. To date, numerous materials and methods have been used to reconstruct the orbital floor. However, simple and cost-effective 3-dimensional (3D) printing technologies for the treatment of orbital floor fractures are still sought. This study describes a simple, precise, cost-effective method of treating orbital fractures using 3D printing technologies in combination with autologous bone. Enophthalmos and diplopia developed in a 64-year-old female patient with an orbital floor fracture. A virtual 3D model of the fracture site was generated from computed tomography images of the patient. The fracture was virtually closed using spline interpolation. Furthermore, a virtual individualized mold of the defect site was created, which was manufactured using an inkjet printer. The tangible mold was subsequently used during surgery to sculpture an individualized autologous orbital floor implant. Virtual reconstruction of the orbital floor and the resulting mold enhanced the overall accuracy and efficiency of the surgical procedure. The sculptured autologous orbital floor implant showed an excellent fit in vivo. The combination of virtual planning and 3D printing offers an accurate and cost-effective treatment method for orbital floor fractures.
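The virtual closing of the fracture by spline interpolation can be illustrated in one dimension: fit a spline to the intact bone on either side of the defect and evaluate it across the gap. A sketch with illustrative coordinates (not patient data, and a simplification of the 3-D procedure):

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Heights (mm) of the intact orbital floor on both sides of a 4-10 mm defect
x_intact = np.array([0.0, 2.0, 4.0, 10.0, 12.0, 14.0])
z_intact = np.array([0.0, 0.1, 0.3, 0.3, 0.1, 0.0])
spline = CubicSpline(x_intact, z_intact)

# Evaluate the spline at positions inside the defect to "close" the fracture
x_gap = np.linspace(4.0, 10.0, 7)
z_reconstructed = spline(x_gap)
print(z_reconstructed.shape)  # prints (7,): one interpolated height per position
```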


Subjects
Fracture Fixation, Internal/methods , Orbital Fractures/diagnostic imaging , Orbital Fractures/surgery , Orbital Implants , Reconstructive Surgical Procedures/methods , Printing, Three-Dimensional , Bicycling/injuries , Bone Transplantation , Female , Humans , Ilium/transplantation , Middle Aged , Prosthesis Design , Tomography, X-Ray Computed , Transplantation, Autologous