1.
Med Image Anal; 97: 103276, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39068830

ABSTRACT

Radiation therapy plays a crucial role in cancer treatment, necessitating precise delivery of radiation to tumors while sparing healthy tissues over multiple days. Computed tomography (CT) is integral for treatment planning, offering electron density data crucial for accurate dose calculations. However, accurately representing patient anatomy is challenging, especially in adaptive radiotherapy, where CT is not acquired daily. Magnetic resonance imaging (MRI) provides superior soft-tissue contrast. Still, it lacks electron density information, while cone beam CT (CBCT) lacks direct electron density calibration and is mainly used for patient positioning. Adopting MRI-only or CBCT-based adaptive radiotherapy eliminates the need for CT planning but presents challenges. Synthetic CT (sCT) generation techniques aim to address these challenges by using image synthesis to bridge the gap between MRI, CBCT, and CT. The SynthRAD2023 challenge was organized to compare synthetic CT generation methods using multi-center ground truth data from 1080 patients, divided into two tasks: (1) MRI-to-CT and (2) CBCT-to-CT. The evaluation included image similarity and dose-based metrics from proton and photon plans. The challenge attracted significant participation, with 617 registrations and 22/17 valid submissions for tasks 1/2. Top-performing teams achieved high structural similarity indices (≥0.87/0.90) and gamma pass rates for photon (≥98.1%/99.0%) and proton (≥97.3%/97.0%) plans. However, no significant correlation was found between image similarity metrics and dose accuracy, emphasizing the need for dose evaluation when assessing the clinical applicability of sCT. SynthRAD2023 facilitated the investigation and benchmarking of sCT generation techniques, providing insights for developing MRI-only and CBCT-based adaptive radiotherapy. It showcased the growing capacity of deep learning to produce high-quality sCT, reducing reliance on conventional CT for treatment planning.


Subject(s)
Cone-Beam Computed Tomography, Magnetic Resonance Imaging, Computer-Assisted Radiotherapy Planning, Humans, Cone-Beam Computed Tomography/methods, Computer-Assisted Radiotherapy Planning/methods, Magnetic Resonance Imaging/methods, X-Ray Computed Tomography/methods, Radiotherapy Dosage, Neoplasms/radiotherapy, Neoplasms/diagnostic imaging, Image-Guided Radiotherapy/methods
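For illustration, the image-similarity metrics reported in this abstract (mean absolute error in HU and the structural similarity index) can be computed roughly as in the following Python sketch. It is not the challenge's evaluation code; the array shapes, the body mask, and the use of scikit-image are assumptions. Dose-based metrics such as gamma pass rates require a treatment planning system and are not reproduced here.

```python
import numpy as np
from skimage.metrics import structural_similarity

def image_similarity(sct: np.ndarray, ct: np.ndarray, mask: np.ndarray):
    """Compare a synthetic CT to the ground-truth CT inside a body mask.

    sct, ct : co-registered 3D volumes in Hounsfield units (illustrative).
    mask    : boolean 3D array marking the patient outline.
    """
    mae = float(np.mean(np.abs(sct[mask] - ct[mask])))  # mean absolute error, HU
    ssim = structural_similarity(ct, sct, data_range=float(ct.max() - ct.min()))
    return mae, ssim
```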
2.
Med Phys; 51(4): 2367-2377, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38408022

ABSTRACT

BACKGROUND: Deep learning-based unsupervised image registration has recently been proposed, promising fast registration. However, it has yet to be adopted in the online adaptive magnetic resonance imaging-guided radiotherapy (MRgRT) workflow. PURPOSE: In this paper, we design an unsupervised joint rigid and deformable registration framework for contour propagation in MRgRT of prostate cancer. METHODS: Three-dimensional pelvic T2-weighted MRIs of 143 prostate cancer patients undergoing radiotherapy were collected and divided into 110, 13, and 20 patients for training, validation, and testing. We designed a framework using convolutional neural networks (CNNs) for rigid and deformable registration. We selected the deformable registration network architecture among U-Net, MS-D Net, and LapIRN and optimized the training strategy (end-to-end vs. sequential). The framework was compared against an iterative baseline registration. We evaluated registration accuracy (the Dice and Hausdorff distance of the prostate and bladder contours), structural similarity index, and folding percentage to compare the methods. We also evaluated the framework's robustness to rigid and elastic deformations and bias field perturbations. RESULTS: The end-to-end trained framework comprising LapIRN for the deformable component achieved the best median (interquartile range) prostate and bladder Dice of 0.89 (0.85-0.91) and 0.86 (0.80-0.91), respectively. This accuracy was comparable to the iterative baseline registration: prostate and bladder Dice of 0.91 (0.88-0.93) and 0.86 (0.80-0.92). The best models complete rigid and deformable registration in 0.002 (0.0005) and 0.74 (0.43) s (Nvidia Tesla V100-PCIe 32 GB GPU), respectively. We found that the models are robust to translations up to 52 mm, rotations up to 15°, elastic deformations up to 40 mm, and bias fields. CONCLUSIONS: Our proposed unsupervised, deep learning-based registration framework can perform rigid and deformable registration in less than a second with contour propagation accuracy comparable with iterative registration.


Subject(s)
Deep Learning, Prostatic Neoplasms, Male, Humans, Prostate/diagnostic imaging, Prostate/pathology, Pelvis, Prostatic Neoplasms/diagnostic imaging, Prostatic Neoplasms/radiotherapy, Prostatic Neoplasms/pathology, Computer-Assisted Radiotherapy Planning/methods, Magnetic Resonance Imaging/methods, Computer-Assisted Image Processing/methods, Algorithms
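A minimal sketch of the contour-propagation metrics mentioned above (Dice and Hausdorff distance between propagated and reference masks); it is not the authors' implementation, and the boolean masks and voxel spacing are assumptions.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two boolean masks."""
    intersection = np.logical_and(a, b).sum()
    return 2.0 * intersection / (a.sum() + b.sum())

def hausdorff(a: np.ndarray, b: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    """Symmetric Hausdorff distance (mm) between two boolean masks."""
    pa = np.argwhere(a) * np.asarray(spacing)  # voxel indices -> mm coordinates
    pb = np.argwhere(b) * np.asarray(spacing)
    return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])
```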
3.
Phys Imaging Radiat Oncol; 25: 100416, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36969503

ABSTRACT

Background and purpose: To improve cone-beam computed tomography (CBCT), deep learning (DL) models are being explored to generate synthetic CTs (sCT). The sCT evaluation is mainly focused on image quality and CT number accuracy. However, correct representation of the daily anatomy of the CBCT is also important for sCTs in adaptive radiotherapy. The aim of this study was to emphasize the importance of anatomical correctness by quantitatively assessing sCT scans generated from CBCT scans using different paired and unpaired DL models. Materials and methods: Planning CTs (pCT) and CBCTs of 56 prostate cancer patients were included to generate sCTs. Three different DL models, Dual-UNet, Single-UNet, and the Cycle-consistent Generative Adversarial Network (CycleGAN), were evaluated on image quality and anatomical correctness. The image quality was assessed using image metrics, such as Mean Absolute Error (MAE). The anatomical correctness between sCT and CBCT was quantified using organs-at-risk volumes and average surface distances (ASD). Results: MAE was 24 Hounsfield Units (HU) [range: 19-30 HU] for Dual-UNet, 40 HU [range: 34-56 HU] for Single-UNet, and 41 HU [range: 37-46 HU] for CycleGAN. Bladder ASD was 4.5 mm [range: 1.6-12.3 mm] for Dual-UNet, 0.7 mm [range: 0.4-1.2 mm] for Single-UNet, and 0.9 mm [range: 0.4-1.1 mm] for CycleGAN. Conclusions: Although Dual-UNet performed best in standard image quality measures, such as MAE, the contour-based anatomical feature comparison with the CBCT showed that Dual-UNet performed worst on anatomical comparison. This emphasizes the importance of adding anatomy-based evaluation of sCTs generated by DL models. For applications in the pelvic area, direct anatomical comparison with the CBCT may provide a useful method to assess the clinical applicability of DL-based sCT generation methods.
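As an illustration of the anatomical evaluation described above, the symmetric average surface distance between an organ delineated on the sCT and on the CBCT could be computed as follows; this is a generic sketch with assumed boolean masks and voxel spacing, not the study's software.

```python
import numpy as np
from scipy import ndimage

def average_surface_distance(a: np.ndarray, b: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    """Symmetric average surface distance (mm) between two boolean organ masks."""
    surf_a = a ^ ndimage.binary_erosion(a)  # boundary voxels of mask a
    surf_b = b ^ ndimage.binary_erosion(b)
    # Euclidean distance maps to the nearest surface voxel of each mask
    dist_to_a = ndimage.distance_transform_edt(~surf_a, sampling=spacing)
    dist_to_b = ndimage.distance_transform_edt(~surf_b, sampling=spacing)
    return 0.5 * (dist_to_b[surf_a].mean() + dist_to_a[surf_b].mean())
```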

4.
Dentomaxillofac Radiol; 51(7): 20210437, 2022 Sep 01.
Article in English | MEDLINE | ID: mdl-35532946

ABSTRACT

Computer-assisted surgery (CAS) allows clinicians to personalize treatments and surgical interventions and has therefore become an increasingly popular treatment modality in maxillofacial surgery. The current maxillofacial CAS consists of three main steps: (1) CT image reconstruction, (2) bone segmentation, and (3) surgical planning. However, each of these three steps can introduce errors that can heavily affect the treatment outcome. As a consequence, tedious and time-consuming manual post-processing is often necessary to ensure that each step is performed adequately. One way to overcome this issue is by developing and implementing neural networks (NNs) within the maxillofacial CAS workflow. These learning algorithms can be trained to perform specific tasks without the need for explicitly defined rules. In recent years, an extremely large number of novel NN approaches have been proposed for a wide variety of applications, which makes it a difficult task to keep up with all relevant developments. This study therefore aimed to summarize and review all relevant NN approaches applied for CT image reconstruction, bone segmentation, and surgical planning. After full text screening, 76 publications were identified: 32 focusing on CT image reconstruction, 33 focusing on bone segmentation and 11 focusing on surgical planning. Generally, convolutional NNs were most widely used in the identified studies, although the multilayer perceptron was most commonly applied in surgical planning tasks. Moreover, the drawbacks of current approaches and promising research avenues are discussed.


Subject(s)
Deep Learning, Oral Surgery, Humans, Computer-Assisted Image Processing/methods, Neural Networks (Computer), X-Ray Computed Tomography/methods
5.
Comput Methods Programs Biomed; 208: 106261, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34289437

ABSTRACT

BACKGROUND AND OBJECTIVES: Deep learning is being increasingly used for deformable image registration and unsupervised approaches, in particular, have shown great potential. However, the registration of abdominopelvic Computed Tomography (CT) images remains challenging due to the larger displacements compared to those in brain or prostate Magnetic Resonance Imaging datasets that are typically considered as benchmarks. In this study, we investigate the use of the commonly used unsupervised deep learning framework VoxelMorph for the registration of a longitudinal abdominopelvic CT dataset acquired in patients with bone metastases from breast cancer. METHODS: As a pre-processing step, the abdominopelvic CT images were refined by automatically removing the CT table and all other extra-corporeal components. To improve the learning capabilities of the VoxelMorph framework when only a limited amount of training data is available, a novel incremental training strategy is proposed based on simulated deformations of consecutive CT images in the longitudinal dataset. This devised training strategy was compared against training on simulated deformations of a single CT volume. A widely used software toolbox for deformable image registration called NiftyReg was used as a benchmark. The evaluations were performed by calculating the Dice Similarity Coefficient (DSC) between manual vertebrae segmentations and the Structural Similarity Index (SSIM). RESULTS: The CT table removal procedure allowed both VoxelMorph and NiftyReg to achieve significantly better registration performance. In a 4-fold cross-validation scheme, the incremental training strategy resulted in better registration performance compared to training on a single volume, with a mean DSC of 0.929±0.037 and 0.883±0.033, and a mean SSIM of 0.984±0.009 and 0.969±0.007, respectively. Although our deformable image registration method did not outperform NiftyReg in terms of DSC (0.988±0.003) or SSIM (0.995±0.002), the registrations were approximately 300 times faster. CONCLUSIONS: This study showed the feasibility of deep learning based deformable registration of longitudinal abdominopelvic CT images via a novel incremental training strategy based on simulated deformations.


Subject(s)
Deep Learning, Humans, Computer-Assisted Image Processing, Magnetic Resonance Imaging, Male, Software, X-Ray Computed Tomography
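The incremental training strategy relies on simulated deformations of consecutive CT volumes. A rough stand-in for generating such deformations is sketched below; the smoothing and maximum-displacement parameters are assumptions, and this is not the VoxelMorph training code itself. The original and warped volumes then form a registration pair for training.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def simulate_deformation(volume: np.ndarray, max_disp=10.0, smoothing=20.0, seed=0):
    """Warp a CT volume with a smooth random displacement field (illustrative)."""
    rng = np.random.default_rng(seed)
    shape = volume.shape
    # one smoothed random displacement component per axis, scaled to max_disp voxels
    disp = [gaussian_filter(rng.standard_normal(shape), smoothing) for _ in shape]
    disp = [d / np.abs(d).max() * max_disp for d in disp]
    grid = np.meshgrid(*[np.arange(s) for s in shape], indexing="ij")
    coords = [g + d for g, d in zip(grid, disp)]
    return map_coordinates(volume, coords, order=1, mode="nearest")
```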
6.
Med Phys; 46(11): 5027-5035, 2019 Nov.
Article in English | MEDLINE | ID: mdl-31463937

ABSTRACT

PURPOSE: In order to attain anatomical models, surgical guides and implants for computer-assisted surgery, accurate segmentation of bony structures in cone-beam computed tomography (CBCT) scans is required. However, this image segmentation step is often impeded by metal artifacts. Therefore, this study aimed to develop a mixed-scale dense convolutional neural network (MS-D network) for bone segmentation in CBCT scans affected by metal artifacts. METHOD: Training data were acquired from 20 dental CBCT scans affected by metal artifacts. An experienced medical engineer segmented the bony structures in all CBCT scans using global thresholding and manually removed all remaining noise and metal artifacts. The resulting gold standard segmentations were used to train an MS-D network comprising 100 convolutional layers using far fewer trainable parameters than alternative convolutional neural network (CNN) architectures. The bone segmentation performance of the MS-D network was evaluated using a leave-2-out scheme and compared with a clinical snake evolution algorithm and two state-of-the-art CNN architectures (U-Net and ResNet). All segmented CBCT scans were subsequently converted into standard tessellation language (STL) models and geometrically compared with the gold standard. RESULTS: CBCT scans segmented using the MS-D network, U-Net, ResNet and the snake evolution algorithm demonstrated mean Dice similarity coefficients of 0.87 ± 0.06, 0.87 ± 0.07, 0.86 ± 0.05, and 0.78 ± 0.07, respectively. The STL models acquired using the MS-D network, U-Net, ResNet and the snake evolution algorithm demonstrated mean absolute deviations of 0.44 mm ± 0.13 mm, 0.43 mm ± 0.16 mm, 0.40 mm ± 0.12 mm and 0.57 mm ± 0.22 mm, respectively. In contrast to the MS-D network, the ResNet introduced wave-like artifacts in the STL models, whereas the U-Net incorrectly labeled background voxels as bone around the vertebrae in 4 of the 9 CBCT scans containing vertebrae. CONCLUSION: The MS-D network was able to accurately segment bony structures in CBCT scans affected by metal artifacts.


Subject(s)
Artifacts, Cone-Beam Computed Tomography, Computer-Assisted Image Processing/methods, Metals, Neural Networks (Computer), Tooth/diagnostic imaging, Humans, Prostheses and Implants
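The conversion of a binary bone segmentation into an STL model, as performed before the geometric comparison above, could look roughly like this; the use of scikit-image's marching cubes and trimesh for export is an assumption, not the authors' toolchain.

```python
import numpy as np
import trimesh
from skimage import measure

def mask_to_stl(mask: np.ndarray, spacing: tuple, path: str) -> None:
    """Extract the bone surface from a boolean mask and write it as an STL file."""
    verts, faces, _, _ = measure.marching_cubes(
        mask.astype(np.uint8), level=0.5, spacing=spacing
    )
    trimesh.Trimesh(vertices=verts, faces=faces).export(path)  # format inferred from .stl
```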
7.
Radiat Prot Dosimetry; 179(1): 58-68, 2018 Apr 01.
Article in English | MEDLINE | ID: mdl-29040707

ABSTRACT

The objective of the present study was to assess and compare the effective doses in the wrist region resulting from a conventional radiography device, a multislice computed tomography (MSCT) device, and two cone beam computed tomography (CBCT) devices, using MOSFET dosemeters and a custom-made anthropomorphic RANDO phantom according to the ICRP 103 recommendation. The effective dose for conventional radiography was 1.0 µSv. The effective doses for the NewTom 5G CBCT ranged between 0.7 µSv and 1.6 µSv, for the Planmed Verity CBCT it was 2.4 µSv, and for the MSCT 8.6 µSv. Compared with the effective dose for the AP and LAT projections of a conventional radiographic device, this study showed an 8.6-fold effective dose for the standard MSCT protocol and a 0.7- to 2.4-fold effective dose for the standard CBCT protocols. Compared to the MSCT device, the CBCT devices offer a 3D view of the wrist at significantly lower effective doses.


Subject(s)
Cone-Beam Computed Tomography/instrumentation, Multidetector Computed Tomography/instrumentation, Radiation Dosage, Wrist/radiation effects, Humans, Imaging Phantoms
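Effective dose under ICRP 103 is the weighted sum of tissue equivalent doses, E = Σ_T w_T · H_T. The sketch below illustrates the calculation for a small, assumed subset of tissues relevant to a wrist examination; the organ doses in the example are hypothetical, not the study's measurements.

```python
# Subset of ICRP 103 tissue weighting factors (the full formalism sums over all tissues).
TISSUE_WEIGHTS = {
    "red_bone_marrow": 0.12,
    "bone_surface": 0.01,
    "skin": 0.01,
    "remainder": 0.12,
}

def effective_dose(organ_doses_uSv: dict) -> float:
    """Effective dose E = sum_T w_T * H_T (microsieverts) from tissue equivalent doses."""
    return sum(TISSUE_WEIGHTS[tissue] * dose for tissue, dose in organ_doses_uSv.items())

# Hypothetical example: averaged MOSFET readings per tissue for one CBCT protocol.
print(effective_dose({"red_bone_marrow": 5.0, "bone_surface": 20.0, "skin": 30.0}))
```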
8.
Med Phys; 45(1): 92-100, 2018 Jan.
Article in English | MEDLINE | ID: mdl-29091278

ABSTRACT

PURPOSE: Imaging phantoms are widely used for testing and optimization of imaging devices without the need to expose humans to irradiation. However, commercially available phantoms are commonly manufactured in simple, generic forms and sizes and therefore do not resemble the clinical situation for many patients. METHODS: Using 3D printing techniques, we created a life-size phantom based on a clinical CT scan of the thorax from a patient with lung cancer. It was assembled from bony structures printed in gypsum; lung structures consisting of airways, blood vessels >1 mm, and the outer lung surface; three lung tumors printed in nylon; and soft tissues represented by silicone (poured into a 3D-printed mold). RESULTS: Kilovoltage x-ray and CT images of the phantom closely resemble those of the real patient in terms of size, shapes, and structures. Surface comparison using 3D models obtained from the phantom and the 3D models used for printing showed mean differences <1 mm for all structures. Tensile tests of the materials used for the phantom showed that the phantom can endure radiation doses over 24,000 Gy. CONCLUSIONS: It is feasible to create an anthropomorphic thorax phantom using 3D printing and molding techniques. The phantom closely resembles a real patient in terms of spatial accuracy and is currently being used to evaluate x-ray-based imaging quality and positional verification techniques for radiotherapy.


Subject(s)
Imaging Phantoms, Three-Dimensional Printing, Thorax/diagnostic imaging, X-Ray Computed Tomography/instrumentation, Humans
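The surface comparison between the model reconstructed from the phantom scan and the model used for printing can be approximated by a nearest-neighbour distance between sampled surface points, as sketched below; the (N, 3) vertex arrays are assumptions and this is not the software used in the study.

```python
import numpy as np
from scipy.spatial import cKDTree

def mean_surface_difference(points_a: np.ndarray, points_b: np.ndarray) -> float:
    """Mean nearest-neighbour distance (mm) from surface A to surface B.

    points_a, points_b : (N, 3) arrays of vertices sampled from the two 3D models.
    """
    distances, _ = cKDTree(points_b).query(points_a)
    return float(distances.mean())
```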
9.
Sci Rep; 7(1): 10021, 2017 Aug 30.
Article in English | MEDLINE | ID: mdl-28855717

ABSTRACT

Surgical reconstruction of cartilaginous defects remains a major challenge. In the current study, we aimed to identify an imaging strategy for the development of patient-specific constructs that aid in the reconstruction of nasal deformities. Magnetic Resonance Imaging (MRI) was performed on a human cadaver head to find the optimal MRI sequence for nasal cartilage. This sequence was subsequently used on a volunteer. Images of both were assessed by three independent researchers to determine measurement error and total segmentation time. The three-dimensionally (3D) reconstructed alar cartilage was then additively manufactured. Validity was assessed by comparing manually segmented MR images to the gold standard (micro-CT). Manual segmentation allowed delineation of the nasal cartilages. Inter- and intra-observer agreement was acceptable in the cadaver (coefficient of variation 4.6-12.5%) but lower in the volunteer (coefficient of variation 0.6-21.9%). Segmentation times did not differ between observers (cadaver P = 0.36; volunteer P = 0.6). The lateral crus of the alar cartilage was consistently identified by all observers, whereas part of the medial crus was consistently missed. This study suggests that MRI is a feasible imaging modality for the development of 3D alar constructs for patient-specific reconstruction.


Subject(s)
Magnetic Resonance Imaging/methods, Nasal Cartilages/diagnostic imaging, Patient-Specific Modeling, Plastic Surgery Procedures/methods, Three-Dimensional Printing, Aged, Female, Humans, Nasal Cartilages/surgery
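Observer agreement is reported above as a coefficient of variation; a minimal sketch of that statistic follows, with hypothetical example measurements.

```python
import numpy as np

def coefficient_of_variation(measurements) -> float:
    """Coefficient of variation (%) of repeated measurements of the same structure."""
    values = np.asarray(measurements, dtype=float)
    return float(values.std(ddof=1) / values.mean() * 100.0)

# Hypothetical example: one cartilage dimension (mm) measured by three observers.
print(coefficient_of_variation([14.2, 13.8, 15.1]))
```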
10.
Int J Comput Assist Radiol Surg; 12(4): 607-615, 2017 Apr.
Article in English | MEDLINE | ID: mdl-27718124

ABSTRACT

PURPOSE: Medical additive manufacturing requires standard tessellation language (STL) models. Such models are commonly derived from computed tomography (CT) images using thresholding. Threshold selection can be performed manually or automatically. The aim of this study was to assess the impact of manual and default threshold selection on the reliability and accuracy of skull STL models using different CT technologies. METHOD: One female and one male human cadaver head were imaged using multi-detector row CT, dual-energy CT, and two cone-beam CT scanners. Four medical engineers manually thresholded the bony structures on all CT images. The lowest and highest selected mean threshold values and the default threshold value were used to generate skull STL models. Geometric variations between all manually thresholded STL models were calculated. Furthermore, in order to calculate the accuracy of the manually and default thresholded STL models, all STL models were superimposed on an optical scan of the dry female and male skulls ("gold standard"). RESULTS: The intra- and inter-observer variability of the manual threshold selection was good (intra-class correlation coefficients >0.9). All engineers selected grey values closer to soft tissue to compensate for bone voids. Geometric variations between the manually thresholded STL models were 0.13 mm (multi-detector row CT), 0.59 mm (dual-energy CT), and 0.55 mm (cone-beam CT). All STL models demonstrated inaccuracies ranging from -0.8 to +1.1 mm (multi-detector row CT), -0.7 to +2.0 mm (dual-energy CT), and -2.3 to +4.8 mm (cone-beam CT). CONCLUSIONS: This study demonstrates that manual threshold selection results in better STL models than default thresholding. The use of dual-energy CT and cone-beam CT technology in its present form does not deliver reliable or accurate STL models for medical additive manufacturing. New approaches are required that are based on pattern recognition and machine learning algorithms.


Subject(s)
Head/diagnostic imaging, Three-Dimensional Imaging/methods, Skull/diagnostic imaging, X-Ray Computed Tomography/methods, Algorithms, Female, Humans, Male, Reproducibility of Results
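The thresholding step examined above reduces, in essence, to a single comparison against a grey-value cut-off; a minimal sketch is shown below. The 300 HU default is a placeholder assumption, not a value from the study.

```python
import numpy as np

def threshold_bone(ct_hu: np.ndarray, threshold: float = 300.0) -> np.ndarray:
    """Global thresholding of a CT volume (in HU) into a binary bone mask.

    The resulting mask is what subsequently gets converted into an STL model;
    the study shows that the choice of threshold strongly affects that model.
    """
    return ct_hu >= threshold
```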
11.
J Oral Maxillofac Surg; 74(8): 1608-1612, 2016 Aug.
Article in English | MEDLINE | ID: mdl-27137437

ABSTRACT

Fractures of the orbital floor are often a result of traffic accidents or interpersonal violence. To date, numerous materials and methods have been used to reconstruct the orbital floor. However, simple and cost-effective 3-dimensional (3D) printing technologies for the treatment of orbital floor fractures are still sought. This study describes a simple, precise, cost-effective method of treating orbital fractures using 3D printing technologies in combination with autologous bone. Enophthalmos and diplopia developed in a 64-year-old female patient with an orbital floor fracture. A virtual 3D model of the fracture site was generated from computed tomography images of the patient. The fracture was virtually closed using spline interpolation. Furthermore, a virtual individualized mold of the defect site was created, which was manufactured using an inkjet printer. The tangible mold was subsequently used during surgery to sculpt an individualized autologous orbital floor implant. Virtual reconstruction of the orbital floor and the resulting mold enhanced the overall accuracy and efficiency of the surgical procedure. The sculpted autologous orbital floor implant showed an excellent fit in vivo. The combination of virtual planning and 3D printing offers an accurate and cost-effective treatment method for orbital floor fractures.


Subject(s)
Internal Fracture Fixation/methods, Orbital Fractures/diagnostic imaging, Orbital Fractures/surgery, Orbital Implants, Plastic Surgery Procedures/methods, Three-Dimensional Printing, Bicycling/injuries, Bone Transplantation, Female, Humans, Ilium/transplantation, Middle Aged, Prosthesis Design, X-Ray Computed Tomography, Autologous Transplantation
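The virtual closure of the defect by spline interpolation could, in a simplified 2.5D setting, look like the sketch below: a smoothing spline is fitted to intact orbital-floor surface points and evaluated over the defect. This is an assumed simplification, not the planning software used in the case.

```python
import numpy as np
from scipy.interpolate import SmoothBivariateSpline

def close_defect(x, y, z, defect_xy):
    """Fit a smoothing spline to intact surface points (x, y, z) and
    predict surface heights at the (M, 2) defect coordinates."""
    spline = SmoothBivariateSpline(x, y, z, kx=3, ky=3)
    return spline(defect_xy[:, 0], defect_xy[:, 1], grid=False)
```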