Results 1-20 of 24
1.
Med Phys ; 51(4): 2367-2377, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38408022

ABSTRACT

BACKGROUND: Deep learning-based unsupervised image registration has recently been proposed, promising fast registration. However, it has yet to be adopted in the online adaptive magnetic resonance imaging-guided radiotherapy (MRgRT) workflow. PURPOSE: In this paper, we design an unsupervised joint rigid and deformable registration framework for contour propagation in MRgRT of prostate cancer. METHODS: Three-dimensional pelvic T2-weighted MRIs of 143 prostate cancer patients undergoing radiotherapy were collected and divided into 110, 13, and 20 patients for training, validation, and testing. We designed a framework using convolutional neural networks (CNNs) for rigid and deformable registration. We selected the deformable registration network architecture among U-Net, MS-D Net, and LapIRN and optimized the training strategy (end-to-end vs. sequential). The framework was compared against an iterative baseline registration. We evaluated registration accuracy (the Dice and Hausdorff distance of the prostate and bladder contours), structural similarity index, and folding percentage to compare the methods. We also evaluated the framework's robustness to rigid and elastic deformations and bias field perturbations. RESULTS: The end-to-end trained framework comprising LapIRN for the deformable component achieved the best median (interquartile range) prostate and bladder Dice of 0.89 (0.85-0.91) and 0.86 (0.80-0.91), respectively. This accuracy was comparable to the iterative baseline registration: prostate and bladder Dice of 0.91 (0.88-0.93) and 0.86 (0.80-0.92). The best models complete rigid and deformable registration in 0.002 (0.0005) and 0.74 (0.43) s (Nvidia Tesla V100-PCIe 32 GB GPU), respectively. We found that the models are robust to translations up to 52 mm, rotations up to 15°, elastic deformations up to 40 mm, and bias fields. CONCLUSIONS: Our proposed unsupervised, deep learning-based registration framework can perform rigid and deformable registration in less than a second with contour propagation accuracy comparable with iterative registration.
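
The two contour-propagation metrics reported above (Dice and Hausdorff distance) can be reproduced on binary segmentation masks with a few lines of NumPy/SciPy. The sketch below is illustrative only and is not the authors' code; it assumes non-empty boolean masks and a voxel spacing given in mm.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def dice(a, b):
    """Dice similarity coefficient of two boolean masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def hausdorff(a, b, spacing=(1.0, 1.0, 1.0)):
    """Symmetric Hausdorff distance (mm) between two non-empty boolean masks."""
    a, b = a.astype(bool), b.astype(bool)
    # Distance from every voxel to the nearest foreground voxel of the other mask.
    dist_to_b = distance_transform_edt(~b, sampling=spacing)
    dist_to_a = distance_transform_edt(~a, sampling=spacing)
    return max(dist_to_b[a].max(), dist_to_a[b].max())
```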


Subject(s)
Deep Learning; Prostatic Neoplasms; Male; Humans; Prostate/diagnostic imaging; Prostate/pathology; Pelvis; Prostatic Neoplasms/diagnostic imaging; Prostatic Neoplasms/radiotherapy; Prostatic Neoplasms/pathology; Radiotherapy Planning, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Image Processing, Computer-Assisted/methods; Algorithms
2.
Phys Imaging Radiat Oncol ; 25: 100416, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36969503

ABSTRACT

Background and purpose: To improve cone-beam computed tomography (CBCT), deep learning (DL) models are being explored to generate synthetic CTs (sCT). The sCT evaluation is mainly focused on image quality and CT number accuracy. However, correct representation of the daily anatomy of the CBCT is also important for sCTs in adaptive radiotherapy. The aim of this study was to emphasize the importance of anatomical correctness by quantitatively assessing sCT scans generated from CBCT scans using different paired and unpaired DL models. Materials and methods: Planning CTs (pCT) and CBCTs of 56 prostate cancer patients were included to generate sCTs. Three different DL models, Dual-UNet, Single-UNet and Cycle-consistent Generative Adversarial Network (CycleGAN), were evaluated on image quality and anatomical correctness. The image quality was assessed using image metrics, such as the Mean Absolute Error (MAE). The anatomical correctness between sCT and CBCT was quantified using organs-at-risk volumes and average surface distances (ASD). Results: MAE was 24 Hounsfield Units (HU) [range: 19-30 HU] for Dual-UNet, 40 HU [range: 34-56 HU] for Single-UNet and 41 HU [range: 37-46 HU] for CycleGAN. Bladder ASD was 4.5 mm [range: 1.6-12.3 mm] for Dual-UNet, 0.7 mm [range: 0.4-1.2 mm] for Single-UNet and 0.9 mm [range: 0.4-1.1 mm] for CycleGAN. Conclusions: Although Dual-UNet performed best in standard image quality measures, such as MAE, the contour-based anatomical feature comparison with the CBCT showed that Dual-UNet performed worst in the anatomical comparison. This emphasizes the importance of adding anatomy-based evaluation of sCTs generated by DL models. For applications in the pelvic area, direct anatomical comparison with the CBCT may provide a useful method to assess the clinical applicability of DL-based sCT generation methods.
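
For context, the image-quality metric quoted above (MAE in Hounsfield units) is typically computed voxel-wise inside a body contour. A minimal sketch, not taken from the paper; the body mask is an assumed input:

```python
import numpy as np

def mae_hu(sct, ref_ct, body_mask):
    """Mean absolute HU difference between sCT and reference CT inside the body mask."""
    diff = np.abs(sct.astype(np.float64) - ref_ct.astype(np.float64))
    return float(diff[body_mask.astype(bool)].mean())
```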

3.
Dentomaxillofac Radiol ; 51(7): 20220104, 2022 Sep 01.
Article in English | MEDLINE | ID: mdl-35766951

ABSTRACT

OBJECTIVE: Cone beam computed tomography (CBCT) images are being increasingly used to acquire three-dimensional (3D) models of the skull for additive manufacturing purposes. However, the accuracy of such models remains a challenge, especially in the orbital area. The aim of this study was to assess the impact of four different CBCT imaging positions on the accuracy of the resulting 3D models in the orbital area. METHODS: An anthropomorphic head phantom was manufactured by submerging a dry human skull in silicone to mimic the soft tissue attenuation and scattering properties of the human head. The phantom was scanned on a ProMax 3D MAX CBCT scanner using 90 and 120 kV for four different field of view positions: standard; elevated; backwards tilted; and forward tilted. All CBCT images were subsequently converted into 3D models and geometrically compared with a "gold-standard" optical scan of the dry skull. RESULTS: Mean absolute deviations of the 3D models ranged between 0.15 ± 0.11 mm and 0.56 ± 0.28 mm. The elevated imaging position in combination with a 120 kV tube voltage resulted in an improved representation of the orbital walls in the resulting 3D model without compromising the accuracy. CONCLUSIONS: Head positioning during CBCT imaging can influence the accuracy of the resulting 3D model. The accuracy of such models may be improved by positioning the region of interest (e.g. the orbital area) in the focal plane of the CBCT X-ray beam.


Subject(s)
Cone-Beam Computed Tomography; Silicon; Cone-Beam Computed Tomography/methods; Head/diagnostic imaging; Humans; Imaging, Three-Dimensional/methods; Phantoms, Imaging; Skull/diagnostic imaging
4.
Dentomaxillofac Radiol ; 51(7): 20210437, 2022 Sep 01.
Article in English | MEDLINE | ID: mdl-35532946

ABSTRACT

Computer-assisted surgery (CAS) allows clinicians to personalize treatments and surgical interventions and has therefore become an increasingly popular treatment modality in maxillofacial surgery. The current maxillofacial CAS consists of three main steps: (1) CT image reconstruction, (2) bone segmentation, and (3) surgical planning. However, each of these three steps can introduce errors that can heavily affect the treatment outcome. As a consequence, tedious and time-consuming manual post-processing is often necessary to ensure that each step is performed adequately. One way to overcome this issue is by developing and implementing neural networks (NNs) within the maxillofacial CAS workflow. These learning algorithms can be trained to perform specific tasks without the need for explicitly defined rules. In recent years, an extremely large number of novel NN approaches have been proposed for a wide variety of applications, which makes it a difficult task to keep up with all relevant developments. This study therefore aimed to summarize and review all relevant NN approaches applied for CT image reconstruction, bone segmentation, and surgical planning. After full text screening, 76 publications were identified: 32 focusing on CT image reconstruction, 33 focusing on bone segmentation and 11 focusing on surgical planning. Generally, convolutional NNs were most widely used in the identified studies, although the multilayer perceptron was most commonly applied in surgical planning tasks. Moreover, the drawbacks of current approaches and promising research avenues are discussed.


Subject(s)
Deep Learning; Surgery, Oral; Humans; Image Processing, Computer-Assisted/methods; Neural Networks, Computer; Tomography, X-Ray Computed/methods
5.
J Imaging ; 7(3)2021 Mar 02.
Article in English | MEDLINE | ID: mdl-34460700

ABSTRACT

The reconstruction of computed tomography (CT) images is an active area of research. Following the rise of deep learning methods, many data-driven models have been proposed in recent years. In this work, we present the results of a data challenge that we organized, bringing together algorithm experts from different institutes to jointly work on quantitative evaluation of several data-driven methods on two large, public datasets during a ten-day sprint. We focus on two applications of CT, namely, low-dose CT and sparse-angle CT. This enables us to fairly compare different methods using standardized settings. As a general result, we observe that the deep learning-based methods are able to improve the reconstruction quality metrics in both CT applications, while the top-performing methods show only minor differences in terms of peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). We further discuss a number of other important criteria that should be taken into account when selecting a method, such as the availability of training data, knowledge of the physical measurement model and the reconstruction speed.
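
The two reconstruction-quality metrics used in the challenge, PSNR and SSIM, are available in scikit-image. The snippet below is a generic illustration rather than the challenge evaluation code; the array names and the assumption of same-shaped float images are ours:

```python
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_reconstruction(recon, ground_truth):
    """Return PSNR and SSIM of a reconstruction against its ground truth image."""
    data_range = float(ground_truth.max() - ground_truth.min())
    return {
        "psnr": peak_signal_noise_ratio(ground_truth, recon, data_range=data_range),
        "ssim": structural_similarity(ground_truth, recon, data_range=data_range),
    }
```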

6.
Comput Methods Programs Biomed ; 208: 106261, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34289437

ABSTRACT

BACKGROUND AND OBJECTIVES: Deep learning is being increasingly used for deformable image registration and unsupervised approaches, in particular, have shown great potential. However, the registration of abdominopelvic Computed Tomography (CT) images remains challenging due to the larger displacements compared to those in brain or prostate Magnetic Resonance Imaging datasets that are typically considered benchmarks. In this study, we investigate the use of the commonly used unsupervised deep learning framework VoxelMorph for the registration of a longitudinal abdominopelvic CT dataset acquired in patients with bone metastases from breast cancer. METHODS: As a pre-processing step, the abdominopelvic CT images were refined by automatically removing the CT table and all other extra-corporeal components. To improve the learning capabilities of the VoxelMorph framework when only a limited amount of training data is available, a novel incremental training strategy is proposed based on simulated deformations of consecutive CT images in the longitudinal dataset. This training strategy was compared against training on simulated deformations of a single CT volume. A widely used software toolbox for deformable image registration called NiftyReg was used as a benchmark. The evaluations were performed by calculating the Dice Similarity Coefficient (DSC) between manual vertebrae segmentations and the Structural Similarity Index (SSIM). RESULTS: The CT table removal procedure allowed both VoxelMorph and NiftyReg to achieve significantly better registration performance. In a 4-fold cross-validation scheme, the incremental training strategy resulted in better registration performance compared to training on a single volume, with a mean DSC of 0.929±0.037 and 0.883±0.033, and a mean SSIM of 0.984±0.009 and 0.969±0.007, respectively. Although our deformable image registration method did not outperform NiftyReg in terms of DSC (0.988±0.003) or SSIM (0.995±0.002), the registrations were approximately 300 times faster. CONCLUSIONS: This study showed the feasibility of deep learning-based deformable registration of longitudinal abdominopelvic CT images via a novel incremental training strategy based on simulated deformations.
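
The incremental training strategy relies on simulated deformations of consecutive CT volumes. The sketch below shows one generic way to produce such a simulated deformation (a random, Gaussian-smoothed displacement field applied with linear interpolation); the amplitude and smoothness values are illustrative assumptions and not the parameters used in the study:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def random_deform(volume, max_disp_vox=8.0, smooth_sigma=12.0, seed=None):
    """Warp a 3D volume with a random, smoothly varying displacement field."""
    rng = np.random.default_rng(seed)
    grid = np.meshgrid(*[np.arange(s) for s in volume.shape], indexing="ij")
    warped_coords = []
    for axis_grid in grid:
        # Smooth white noise to obtain a plausible, low-frequency displacement.
        disp = gaussian_filter(rng.standard_normal(volume.shape), smooth_sigma)
        disp *= max_disp_vox / (np.abs(disp).max() + 1e-8)  # cap the displacement
        warped_coords.append(axis_grid + disp)
    return map_coordinates(volume, warped_coords, order=1, mode="nearest")
```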


Subject(s)
Deep Learning; Humans; Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Male; Software; Tomography, X-Ray Computed
7.
Phys Med Biol ; 66(13)2021 07 01.
Article in English | MEDLINE | ID: mdl-34107467

ABSTRACT

High cone-angle artifacts (HCAAs) appear frequently in circular cone-beam computed tomography (CBCT) images and can heavily affect diagnosis and treatment planning. To reduce HCAAs in CBCT scans, we propose a novel deep learning approach that reduces the three-dimensional (3D) nature of HCAAs to two-dimensional (2D) problems in an efficient way. Specifically, we exploit the relationship between HCAAs and the rotational scanning geometry by training a convolutional neural network (CNN) using image slices that were radially sampled from CBCT scans. We evaluated this novel approach using a dataset of input CBCT scans affected by HCAAs and high-quality artifact-free target CBCT scans. Two different CNN architectures were employed, namely U-Net and a mixed-scale dense CNN (MS-D Net). The artifact reduction performance of the proposed approach was compared to that of a Cartesian slice-based artifact reduction deep learning approach in which a CNN was trained to remove the HCAAs from Cartesian slices. In addition, all processed CBCT scans were segmented to investigate the impact of HCAAs reduction on the quality of CBCT image segmentation. We demonstrate that the proposed deep learning approach with geometry-aware dimension reduction greatly reduces HCAAs in CBCT scans and outperforms the Cartesian slice-based deep learning approach. Moreover, the proposed artifact reduction approach markedly improves the accuracy of the subsequent segmentation task compared to the Cartesian slice-based workflow.
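
A generic illustration of the radial sampling idea described above (not the authors' implementation): 2D slices through the rotation axis are extracted at evenly spaced angles by rotating the volume in the axial plane. The (z, y, x) axis order and the number of angles are assumptions:

```python
import numpy as np
from scipy.ndimage import rotate

def radial_slices(volume, n_angles=180):
    """Return 2D slices through the rotation (z) axis at evenly spaced angles."""
    centre_x = volume.shape[2] // 2
    slices = []
    for angle in np.linspace(0.0, 180.0, n_angles, endpoint=False):
        # Rotate in the y-x plane (around the z-axis), then cut the plane x = centre.
        rotated = rotate(volume, angle, axes=(1, 2), reshape=False, order=1)
        slices.append(rotated[:, :, centre_x])
    return slices
```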


Subject(s)
Artifacts; Deep Learning; Cone-Beam Computed Tomography; Image Processing, Computer-Assisted; Neural Networks, Computer
8.
Comput Methods Programs Biomed ; 207: 106192, 2021 Aug.
Article in English | MEDLINE | ID: mdl-34062493

ABSTRACT

BACKGROUND AND OBJECTIVE: Over the past decade, convolutional neural networks (CNNs) have revolutionized the field of medical image segmentation. Prompted by the developments in computational resources and the availability of large datasets, a wide variety of different two-dimensional (2D) and three-dimensional (3D) CNN training strategies have been proposed. However, a systematic comparison of the impact of these strategies on the image segmentation performance is still lacking. Therefore, this study aimed to compare eight different CNN training strategies, namely 2D (axial, sagittal and coronal slices), 2.5D (3 and 5 adjacent slices), majority voting, randomly oriented 2D cross-sections and 3D patches. METHODS: These eight strategies were used to train a U-Net and an MS-D network for the segmentation of simulated cone-beam computed tomography (CBCT) images comprising randomly placed non-overlapping cylinders and experimental CBCT images of anthropomorphic phantom heads. The resulting segmentation performances were quantitatively compared by calculating Dice similarity coefficients. In addition, all segmented and gold standard experimental CBCT images were converted into virtual 3D models and compared using orientation-based surface comparisons. RESULTS: The CNN training strategy that generally resulted in the best performances on both simulated and experimental CBCT images was majority voting. When employing 2D training strategies, the segmentation performance can be optimized by training on image slices that are perpendicular to the predominant orientation of the anatomical structure of interest. Such spatial features should be taken into account when choosing or developing novel CNN training strategies for medical image segmentation. CONCLUSIONS: The results of this study will help clinicians and engineers to choose the most suitable CNN training strategy for CBCT image segmentation.
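
The majority-voting strategy that performed best can be expressed compactly: a voxel is labelled as foreground if at least two of the three orthogonally trained networks agree. A minimal sketch assuming binary masks of equal shape:

```python
import numpy as np

def majority_vote(pred_axial, pred_sagittal, pred_coronal):
    """Voxel-wise majority vote of three binary segmentations."""
    votes = (pred_axial.astype(np.uint8) + pred_sagittal.astype(np.uint8)
             + pred_coronal.astype(np.uint8))
    return votes >= 2
```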


Subject(s)
Image Processing, Computer-Assisted; Tooth; Cone-Beam Computed Tomography; Neural Networks, Computer
9.
J Appl Clin Med Phys ; 22(5): 128-138, 2021 May.
Article in English | MEDLINE | ID: mdl-33811787

ABSTRACT

The aim of the study was to estimate and compare the effective doses in the elbow region resulting from four different x-ray imaging modalities. Absorbed organ doses were measured using 11 metal oxide field effect transistor (MOSFET) dosimeters that were placed in a custom-made anthropomorphic elbow RANDO phantom. Examinations were performed using a Shimadzu FH-21 HR radiography device, a Siemens Sensation Open 24-slice MSCT device, a NewTom 5G CBCT device, and a Planmed Verity CBCT device, and the effective doses were calculated according to ICRP 103 recommendations. The effective dose for the conventional radiographic device was 1.5 µSv. The effective dose for the NewTom 5G CBCT device ranged between 2.0 and 6.7 µSv; the effective dose was 2.6 µSv for the Planmed Verity CBCT device and 37.4 µSv for the Siemens Sensation MSCT device. Compared with conventional 2D radiography, this study demonstrated a 1.4- to 4.6-fold increase in effective dose for the CBCT protocols and a 25-fold increase for the standard MSCT protocol. When compared with the 3D CBCT protocols, the study showed a 6- to 19-fold increase in effective dose for the standard MSCT protocol. CBCT devices offer a feasible low-dose alternative to MSCT for 3D imaging of the elbow.
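
For readers unfamiliar with the dose formalism: the effective dose is the weighted sum of organ equivalent doses, E = Σ w_T · H_T, using the ICRP 103 tissue weighting factors (for x-rays the equivalent dose equals the absorbed dose). The sketch below is a simplified illustration, not the calculation used in the study; it lists only a few example weights and omits the handling of the remainder tissues:

```python
# ICRP 103 tissue weighting factors for a few tissues relevant to extremity imaging.
# Handling of the "remainder" tissue group (w_T = 0.12 applied to their mean dose)
# is deliberately left out of this simplified sketch.
ICRP103_WEIGHTS = {
    "red_bone_marrow": 0.12,
    "skin": 0.01,
    "bone_surface": 0.01,
}

def effective_dose(organ_doses_usv, weights=ICRP103_WEIGHTS):
    """Effective dose (µSv) as the weighted sum of organ equivalent doses."""
    return sum(weights[t] * h for t, h in organ_doses_usv.items() if t in weights)
```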


Subject(s)
Elbow; Spiral Cone-Beam Computed Tomography; Cone-Beam Computed Tomography; Humans; Phantoms, Imaging; Radiation Dosage; Radiography; Thermoluminescent Dosimetry
10.
Sci Data ; 6(1): 215, 2019 10 22.
Article in English | MEDLINE | ID: mdl-31641152

ABSTRACT

Unlike previous works, this open data collection consists of X-ray cone-beam (CB) computed tomography (CT) datasets specifically designed for machine learning applications and high cone-angle artefact reduction. Forty-two walnuts were scanned with a laboratory X-ray set-up to provide not only data from a single object but from a class of objects with natural variability. For each walnut, CB projections on three different source orbits were acquired to provide CB data with different cone angles as well as to allow the computation of artefact-free, high-quality ground truth images from the combined data that can be used for supervised learning. We provide the complete image reconstruction pipeline: raw projection data, a description of the scanning geometry, pre-processing and reconstruction scripts using open software, and the reconstructed volumes. As a result, the dataset can be used not only for high cone-angle artefact reduction but also for algorithm development and evaluation in other tasks, such as image reconstruction from limited or sparse-angle (low-dose) scanning, super-resolution, or segmentation.

11.
Forensic Sci Int ; 304: 109963, 2019 Nov.
Article in English | MEDLINE | ID: mdl-31610335

ABSTRACT

Clinical radiology is increasingly used as a source of data to test or develop forensic anthropological methods, especially in countries where contemporary skeletal collections are not available. Naturally, this requires analysis of the error that results from the low accuracy of the modality (i.e. the accuracy of the segmentation) and the error that arises due to difficulties in landmark recognition in virtual models. The cumulative effect of these errors ultimately determines whether virtual and dry bone measurements can be used interchangeably. To test the interchangeability of virtual and dry bone measurements, 13 male and 14 female intact cadavers from the body donation program of the Amsterdam UMC were CT scanned using a standard patient scanning protocol and processed to obtain the dry os coxae. These were again CT scanned using the same scanning protocol. All CT scans were segmented to create 3D virtual bone models of the os coxae ('dry' CT models and 'clinical' CT models). An Artec Spider 3D optical scanner was used to produce gold standard 'optical 3D models' of ten dry os coxae. The deviation of the surfaces of the 3D virtual bone models compared to the gold standard was used to calculate the accuracy of the CT models, both for the overall os coxae and for selected landmarks. Landmark recognition was studied by comparing the technical error of measurement (TEM) and relative TEM (%TEM) of nine traditional inter-landmark distances (ILDs). The percentage difference for the various ILDs between modalities was used to gauge the practical implications of both errors combined. Results showed that 'dry' CT models were 0.36-0.45 mm larger than the 'optical 3D models' (deviations -0.27 mm to 2.86 mm). 'Clinical' CT models were 0.64-0.88 mm larger than the 'optical 3D models' (deviations -4.99 mm to 5.00 mm). The accuracies of the regions of interest were variable and larger for 'clinical' CT models than for 'dry' CT models. TEM and %TEM were generally in the acceptable ranges for all ILDs, whilst no single modality was obviously more or less reliable than the others. For almost all ILDs, the average percentage difference between modalities was substantially larger than the average percentage difference between observers in 'dry bone' measurements only. Our results show that the combined segmentation and landmark recognition error can be substantial, which may preclude the use of 'clinical' CT scans as an alternative source of forensic anthropological reference data.
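
The technical error of measurement (TEM) used above is defined, for two measurement sessions over N specimens, as TEM = sqrt(Σ d_i² / 2N), with %TEM expressing it relative to the grand mean of both sessions. A minimal sketch (not the authors' code):

```python
import numpy as np

def tem(session1, session2):
    """TEM = sqrt(sum(d_i^2) / (2N)) for paired repeated measurements."""
    d = np.asarray(session1, float) - np.asarray(session2, float)
    return float(np.sqrt(np.sum(d ** 2) / (2 * d.size)))

def relative_tem(session1, session2):
    """%TEM: TEM as a percentage of the grand mean of both sessions."""
    grand_mean = np.mean(np.concatenate([np.ravel(session1), np.ravel(session2)]))
    return 100.0 * tem(session1, session2) / float(grand_mean)
```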


Subject(s)
Imaging, Three-Dimensional; Pelvic Bones/diagnostic imaging; Tomography, X-Ray Computed; Aged; Aged, 80 and over; Anatomic Landmarks; Cadaver; Computer Simulation; Female; Forensic Anthropology; Humans; Male; Middle Aged; Pelvic Bones/anatomy & histology
12.
Med Phys ; 46(11): 5027-5035, 2019 Nov.
Article in English | MEDLINE | ID: mdl-31463937

ABSTRACT

PURPOSE: In order to attain anatomical models, surgical guides and implants for computer-assisted surgery, accurate segmentation of bony structures in cone-beam computed tomography (CBCT) scans is required. However, this image segmentation step is often impeded by metal artifacts. Therefore, this study aimed to develop a mixed-scale dense convolutional neural network (MS-D network) for bone segmentation in CBCT scans affected by metal artifacts. METHOD: Training data were acquired from 20 dental CBCT scans affected by metal artifacts. An experienced medical engineer segmented the bony structures in all CBCT scans using global thresholding and manually removed all remaining noise and metal artifacts. The resulting gold standard segmentations were used to train an MS-D network comprising 100 convolutional layers using far fewer trainable parameters than alternative convolutional neural network (CNN) architectures. The bone segmentation performance of the MS-D network was evaluated using a leave-2-out scheme and compared with a clinical snake evolution algorithm and two state-of-the-art CNN architectures (U-Net and ResNet). All segmented CBCT scans were subsequently converted into standard tessellation language (STL) models and geometrically compared with the gold standard. RESULTS: CBCT scans segmented using the MS-D network, U-Net, ResNet and the snake evolution algorithm demonstrated mean Dice similarity coefficients of 0.87 ± 0.06, 0.87 ± 0.07, 0.86 ± 0.05, and 0.78 ± 0.07, respectively. The STL models acquired using the MS-D network, U-Net, ResNet and the snake evolution algorithm demonstrated mean absolute deviations of 0.44 mm ± 0.13 mm, 0.43 mm ± 0.16 mm, 0.40 mm ± 0.12 mm and 0.57 mm ± 0.22 mm, respectively. In contrast to the MS-D network, the ResNet introduced wave-like artifacts in the STL models, whereas the U-Net incorrectly labeled background voxels as bone around the vertebrae in 4 of the 9 CBCT scans containing vertebrae. CONCLUSION: The MS-D network was able to accurately segment bony structures in CBCT scans affected by metal artifacts.


Subject(s)
Artifacts; Cone-Beam Computed Tomography; Image Processing, Computer-Assisted/methods; Metals; Neural Networks, Computer; Tooth/diagnostic imaging; Humans; Prostheses and Implants
13.
Comput Biol Med ; 103: 130-139, 2018 12 01.
Article in English | MEDLINE | ID: mdl-30366309

ABSTRACT

BACKGROUND: The most tedious and time-consuming task in medical additive manufacturing (AM) is image segmentation. The aim of the present study was to develop and train a convolutional neural network (CNN) for bone segmentation in computed tomography (CT) scans. METHOD: The CNN was trained with CT scans acquired using six different scanners. Standard tessellation language (STL) models of 20 patients who had previously undergone craniotomy and cranioplasty using additively manufactured skull implants served as "gold standard" models during CNN training. The CNN segmented all patient CT scans using a leave-2-out scheme. All segmented CT scans were converted into STL models and geometrically compared with the gold standard STL models. RESULTS: The CT scans segmented using the CNN demonstrated a large overlap with the gold standard segmentation and resulted in a mean Dice similarity coefficient of 0.92 ± 0.04. The CNN-based STL models demonstrated mean surface deviations ranging between -0.19 mm ± 0.86 mm and 1.22 mm ± 1.75 mm, when compared to the gold standard STL models. No major differences were observed between the mean deviations of the CNN-based STL models acquired using six different CT scanners. CONCLUSIONS: The fully automated CNN was able to accurately segment the skull. CNNs thus offer the opportunity of removing the current prohibitive barriers of time and effort during CT image segmentation, making patient-specific AM constructs more accessible.


Subject(s)
Imaging, Three-Dimensional/methods; Neural Networks, Computer; Prosthesis Design/methods; Skull/diagnostic imaging; Tomography, X-Ray Computed/methods; Algorithms; Humans; Prostheses and Implants; Skull/pathology; Skull/surgery
14.
Eur J Orthod ; 40(1): 58-64, 2018 01 23.
Article in English | MEDLINE | ID: mdl-28453722

ABSTRACT

Objective: To assess the accuracy of five different computed tomography (CT) scanners for the evaluation of the oropharynx morphology. Methods: An existing cone-beam computed tomography (CBCT) data set was used to fabricate an anthropomorphic phantom of the upper airway volume that extended from the uvula to the epiglottis (oropharynx) with known dimensions (gold standard). This phantom was scanned using two multi-detector row computed tomography (MDCT) scanners (GE Discovery CT750 HD, Siemens Somatom Sensation) and three CBCT scanners (NewTom 5G, 3D Accuitomo 170, Vatech PaX Zenith 3D). All CT images were segmented by two observers and converted into standard tessellation language (STL) models. The volume and the cross-sectional area of the oropharynx were measured on the acquired STL models. Finally, all STL models were registered and compared with the gold standard. Results: The intra- and inter-observer reliability of the oropharynx segmentation was fair to excellent. The most accurate volume measurements were acquired using the Siemens MDCT (98.4%; 14.3 cm3) and Vatech CBCT (98.9%; 14.4 cm3) scanners. The GE MDCT, NewTom 5G CBCT, and Accuitomo CBCT scanners resulted in smaller volumes, viz., 92.1% (13.4 cm3), 91.5% (13.3 cm3), and 94.6% (13.8 cm3), respectively. The most accurate cross-sectional area measurements were acquired using the Siemens MDCT (94.6%; 282.4 mm2), Accuitomo CBCT (95.1%; 283.8 mm2), and Vatech CBCT (95.3%; 284.5 mm2) scanners. The GE MDCT and NewTom 5G CBCT scanners resulted in smaller areas, viz., 89.3% (266.5 mm2) and 89.8% (268.0 mm2), respectively. Limitations: Images of the phantom were acquired using the vendor-supplied default airway scanning protocol for each scanner. Conclusion: Significant differences were observed in the volume and cross-sectional area measurements of the oropharynx acquired using different MDCT and CBCT scanners. The Siemens MDCT and the Vatech CBCT scanners were more accurate than the GE MDCT, NewTom 5G, and Accuitomo CBCT scanners. In clinical settings, CBCT scanners offer an alternative to MDCT scanners in the assessment of the oropharynx morphology.


Subject(s)
Oropharynx/diagnostic imaging; Adult; Anthropometry/methods; Cone-Beam Computed Tomography/instrumentation; Cone-Beam Computed Tomography/methods; Female; Humans; Imaging, Three-Dimensional/methods; Oropharynx/anatomy & histology; Phantoms, Imaging; Reproducibility of Results; Tomography, X-Ray Computed/instrumentation; Tomography, X-Ray Computed/methods
15.
Radiat Prot Dosimetry ; 179(1): 58-68, 2018 Apr 01.
Article in English | MEDLINE | ID: mdl-29040707

ABSTRACT

The objective of the present study was to assess and compare the effective doses in the wrist region resulting from a conventional radiography device, a multislice computed tomography (MSCT) device and two cone beam computed tomography (CBCT) devices, using MOSFET dosemeters and a custom-made anthropomorphic RANDO phantom according to the ICRP 103 recommendation. The effective dose for the conventional radiography device was 1.0 µSv. The effective doses for the NewTom 5G CBCT ranged between 0.7 µSv and 1.6 µSv; the effective dose was 2.4 µSv for the Planmed Verity CBCT and 8.6 µSv for the MSCT. When compared with the effective dose for the AP and LAT projections of a conventional radiographic device, this study showed an 8.6-fold effective dose for the standard MSCT protocol and a 0.7- to 2.4-fold effective dose for the standard CBCT protocols. When compared to the MSCT device, the CBCT devices offer a 3D view of the wrist at significantly lower effective doses.


Subject(s)
Cone-Beam Computed Tomography/instrumentation; Multidetector Computed Tomography/instrumentation; Radiation Dosage; Wrist/radiation effects; Humans; Phantoms, Imaging
16.
Med Phys ; 45(1): 92-100, 2018 Jan.
Article in English | MEDLINE | ID: mdl-29091278

ABSTRACT

PURPOSE: Imaging phantoms are widely used for testing and optimization of imaging devices without the need to expose humans to irradiation. However, commercially available phantoms are commonly manufactured in simple, generic forms and sizes and therefore do not resemble the clinical situation for many patients. METHODS: Using 3D printing techniques, we created a life-size phantom based on a clinical CT scan of the thorax from a patient with lung cancer. It was assembled from bony structures printed in gypsum, lung structures consisting of airways, blood vessels >1 mm, and outer lung surface, three lung tumors printed in nylon, and soft tissues represented by silicone (poured into a 3D-printed mold). RESULTS: Kilovoltage x-ray and CT images of the phantom closely resemble those of the real patient in terms of size, shapes, and structures. Surface comparison using 3D models obtained from the phantom and the 3D models used for printing showed mean differences <1 mm for all structures. Tensile tests of the materials used for the phantom show that the phantom is able to endure radiation doses over 24,000 Gy. CONCLUSIONS: It is feasible to create an anthropomorphic thorax phantom using 3D printing and molding techniques. The phantom closely resembles a real patient in terms of spatial accuracy and is currently being used to evaluate x-ray-based imaging quality and positional verification techniques for radiotherapy.


Subject(s)
Phantoms, Imaging; Printing, Three-Dimensional; Thorax/diagnostic imaging; Tomography, X-Ray Computed/instrumentation; Humans
17.
Med Eng Phys ; 51: 6-16, 2018 01.
Article in English | MEDLINE | ID: mdl-29096986

ABSTRACT

AIM OF THE STUDY: The accuracy of additive manufactured medical constructs is limited by errors introduced during image segmentation. The aim of this study was to review the existing literature on different image segmentation methods used in medical additive manufacturing. METHODS: Thirty-two publications that reported on the accuracy of bone segmentation based on computed tomography images were identified using PubMed, ScienceDirect, Scopus, and Google Scholar. The advantages and disadvantages of the different segmentation methods used in these studies were evaluated and reported accuracies were compared. RESULTS: The spread between the reported accuracies was large (0.04 mm - 1.9 mm). Global thresholding was the most commonly used segmentation method with accuracies under 0.6 mm. The disadvantage of this method is the extensive manual post-processing required. Advanced thresholding methods could improve the accuracy to under 0.38 mm. However, such methods are currently not included in commercial software packages. Statistical shape model methods resulted in accuracies from 0.25 mm to 1.9 mm but are only suitable for anatomical structures with moderate anatomical variations. CONCLUSIONS: Thresholding remains the most widely used segmentation method in medical additive manufacturing. To improve the accuracy and reduce the costs of patient-specific additive manufactured constructs, more advanced segmentation methods are required.
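
Global thresholding, the most commonly used method in the reviewed studies, amounts to a single HU cut-off applied to the whole volume; the 300 HU value in the sketch below is an assumed example, not a recommendation from the review, and, as noted above, the result typically still requires extensive manual post-processing:

```python
import numpy as np

def global_threshold(ct_hu, threshold_hu=300.0):
    """Binary bone mask: every voxel at or above the HU threshold is labelled bone."""
    return np.asarray(ct_hu) >= threshold_hu
```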


Subject(s)
Bone and Bones/diagnostic imaging; Image Processing, Computer-Assisted/methods; Printing, Three-Dimensional; Tomography, X-Ray Computed
18.
Injury ; 48(12): 2872-2878, 2017 Dec.
Article in English | MEDLINE | ID: mdl-28988806

ABSTRACT

OBJECTIVES: In the Netherlands, cyclists continue to outnumber other road users in injuries and deaths. The wearing of bicycle helmets is not mandatory in the Netherlands, even though research has shown that wearing bicycle helmets can reduce head and brain injuries by up to 88%. Therefore, the aim of this study was to assess the feasibility of using 3D technology to evaluate bicycle-related head injuries and helmet protection. METHODS: Three patients who had been involved in a bicycle accident while wearing a helmet were subjected to multi-detector row computed tomography (MDCT) imaging after trauma. The helmets were separately scanned using the same MDCT scanner with tube voltages ranging from 80 kVp to 140 kVp and tube currents ranging from 10 mAs to 300 mAs in order to determine the best image acquisition parameters for helmets. The acquired helmet images were converted into virtual 3D surface models in Standard Tessellation Language (STL) format and merged with the MDCT-derived STL models of the patients' skulls. Finally, all skull fractures and corresponding helmet damage were visualized and related. RESULTS: Imaging bicycle helmets on an MDCT scanner proved to be feasible using a tube voltage of 120 kVp and a tube current of 120 mAs. Merging the resulting STL models of the patients' skull and helmet allowed the overall damage sustained by both skull and helmet to be related. CONCLUSION: Our proposed 3D method of assessing bicycle helmet damage and corresponding head injuries could offer valuable information for the development and design of safer bicycle helmets.


Subject(s)
Bicycling/injuries; Craniocerebral Trauma/prevention & control; Equipment Failure Analysis/methods; Head Protective Devices; Image Processing, Computer-Assisted; Imaging, Three-Dimensional; Mandibular Injuries/prevention & control; Accidents, Traffic; Adult; Equipment Design; Feasibility Studies; Humans; Netherlands; Tomography, X-Ray Computed
19.
Sci Rep ; 7(1): 10021, 2017 08 30.
Article in English | MEDLINE | ID: mdl-28855717

ABSTRACT

Surgical reconstruction of cartilaginous defects remains a major challenge. In the current study, we aimed to identify an imaging strategy for the development of patient-specific constructs that aid in the reconstruction of nasal deformities. Magnetic Resonance Imaging (MRI) was performed on a human cadaver head to find the optimal MRI sequence for nasal cartilage. This sequence was subsequently used on a volunteer. Images of both were assessed by three independent researchers to determine measurement error and total segmentation time. Three-dimensionally (3D) reconstructed alar cartilage was then additively manufactured. Validity was assessed by comparing manually segmented MR images to the gold standard (micro-CT). Manual segmentation allowed delineation of the nasal cartilages. Inter- and intra-observer agreement was acceptable in the cadaver (coefficient of variation 4.6-12.5%), but lower in the volunteer (coefficient of variation 0.6-21.9%). Segmentation times did not differ between observers (cadaver P = 0.36; volunteer P = 0.6). The lateral crus of the alar cartilage was consistently identified by all observers, whereas part of the medial crus was consistently missed. This study suggests that MRI is a feasible imaging modality for the development of 3D alar constructs for patient-specific reconstruction.


Subject(s)
Magnetic Resonance Imaging/methods; Nasal Cartilages/diagnostic imaging; Patient-Specific Modeling; Plastic Surgery Procedures/methods; Printing, Three-Dimensional; Aged; Female; Humans; Nasal Cartilages/surgery
20.
Dentomaxillofac Radiol ; 46(6): 20170043, 2017 Aug.
Article in English | MEDLINE | ID: mdl-28467118

ABSTRACT

OBJECTIVES: The aim of this study was to assess the reliability and accuracy of three different imaging software packages for three-dimensional analysis of the upper airway using CBCT images. METHODS: To assess the reliability of the software packages, 15 NewTom 5G® (QR Systems, Verona, Italy) CBCT data sets were randomly and retrospectively selected. Two observers measured the volume, minimum cross-sectional area and the length of the upper airway using the Amira® (Visage Imaging Inc., Carlsbad, CA), 3Diagnosys® (3diemme, Cantu, Italy) and OnDemand3D® (CyberMed, Seoul, Republic of Korea) software packages. The intra- and inter-observer reliability of the upper airway measurements were determined using intraclass correlation coefficients and Bland-Altman agreement tests. To assess the accuracy of the software packages, one NewTom 5G® CBCT data set was used to print a three-dimensional anthropomorphic phantom with known dimensions to be used as the "gold standard". This phantom was subsequently scanned using a NewTom 5G® scanner. Based on the CBCT data set of the phantom, one observer measured the volume, minimum cross-sectional area, and length of the upper airway using Amira®, 3Diagnosys®, and OnDemand3D®, and compared these measurements with the gold standard. RESULTS: The intra- and inter-observer reliability of the measurements of the upper airway using the different software packages were excellent (intraclass correlation coefficient ≥0.75). There was excellent agreement between all three software packages in volume, minimum cross-sectional area and length measurements. All software packages underestimated the upper airway volume by 8.8-12.3%, the minimum cross-sectional area by 6.2-14.6%, and the length by 1.6-2.9%. CONCLUSIONS: All three software packages offered reliable volume, minimum cross-sectional area and length measurements of the upper airway. The length measurements of the upper airway were the most accurate in all software packages. All software packages underestimated the upper airway dimensions of the anthropomorphic phantom.


Subject(s)
Cone-Beam Computed Tomography; Imaging, Three-Dimensional; Oropharynx/diagnostic imaging; Radiographic Image Interpretation, Computer-Assisted/methods; Software; Humans; Phantoms, Imaging; Reproducibility of Results; Retrospective Studies