Results 1 - 3 of 3
1.
Med Phys ; 51(5): 3360-3375, 2024 May.
Article in English | MEDLINE | ID: mdl-38150576

ABSTRACT

BACKGROUND: Due to the high attenuation of metals, severe artifacts occur in cone beam computed tomography (CBCT). Metal segmentation in CBCT projections usually serves as a prerequisite for metal artifact reduction (MAR) algorithms. PURPOSE: Truncation caused by the limited detector size leads to incomplete acquisition of metal masks by threshold-based methods in the CBCT volume. Therefore, this work pursues segmenting metal directly in CBCT projections. METHODS: Since the generation of high-quality clinical training data is a constant challenge, this study proposes generating simulated digital radiographs (data I) from real CT data combined with self-designed computer-aided design (CAD) implants. In addition to the simulated projections generated from 3D volumes, 2D x-ray images combined with projections of implants serve as a complementary data set (data II) to improve network performance. SwinConvUNet, which uses shifted-window (Swin) vision transformers (ViTs) with patch merging as the encoder, is proposed for metal segmentation. RESULTS: The model's performance is evaluated on accurately labeled test data from cadaver scans as well as on unlabeled clinical projections. When trained on data I only, the convolutional neural network (CNN) encoder-based networks UNet and TransUNet achieve only limited performance on the cadaver test data, with average Dice scores of 0.821 and 0.850, respectively. After training on both data I and data II, the average Dice scores of the two models increase to 0.906 and 0.919, respectively. By replacing the CNN encoder with a Swin transformer, the proposed SwinConvUNet reaches an average Dice score of 0.933 on cadaver projections when trained on data I only. Furthermore, SwinConvUNet achieves the highest average Dice score, 0.953, on cadaver projections when trained on the combined data set.
CONCLUSIONS: Our experiments quantitatively demonstrate the effectiveness of combining the projections simulated via the two pathways for network training. Moreover, the proposed SwinConvUNet, trained on the simulated projections, achieves state-of-the-art, robust metal segmentation, as demonstrated in experiments on cadaver and clinical data sets. With the accurate segmentations from the proposed model, MAR can be conducted even for highly truncated CBCT scans.
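The Dice scores reported above compare predicted and reference metal masks. A minimal sketch of the metric in plain Python (the helper name and the toy masks are illustrative, not taken from the paper):

```python
def dice_score(pred, target):
    """Dice coefficient between two binary masks given as flat 0/1 lists."""
    intersection = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    if total == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * intersection / total

# toy example: 4-pixel masks, one overlapping pixel
print(dice_score([1, 1, 0, 0], [1, 0, 0, 0]))  # 2*1/(2+1) ≈ 0.667
```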


Subjects
Artifacts; Cone-Beam Computed Tomography; Image Processing, Computer-Assisted; Metals; Cone-Beam Computed Tomography/methods; Metals/chemistry; Image Processing, Computer-Assisted/methods; Humans; Computer Simulation; Algorithms
2.
Med Phys ; 49(5): 2914-2930, 2022 May.
Article in English | MEDLINE | ID: mdl-35305271

ABSTRACT

PURPOSE: Fiducial markers are commonly used in navigation-assisted minimally invasive spine surgery to transfer image coordinates into real-world coordinates. In practice, these markers might lie outside the field-of-view (FOV) of the C-arm cone-beam computed tomography (CBCT) systems used intraoperatively, due to the limited detector sizes. As a consequence, reconstructed markers in CBCT volumes suffer from artifacts and have distorted shapes, which hinders navigation. METHODS: In this work, we propose two fiducial marker detection methods: direct detection from distorted markers (direct method) and detection after marker recovery (recovery method). For direct detection from distorted markers in reconstructed volumes, an efficient automatic marker detection method using two neural networks and a conventional circle detection algorithm is proposed. For marker recovery, a task-specific data preparation strategy is proposed to recover markers from severely truncated data; afterwards, a conventional marker detection algorithm is applied for position detection. The networks in both methods are trained on simulated data. For the direct method, 6800 and 10 000 images are generated to train the U-Net and ResNet50, respectively. For the recovery method, the training set includes 1360 images for FBPConvNet and Pix2pixGAN. A simulated data set with 166 markers and four cadaver cases with real fiducials are used for evaluation. RESULTS: The two methods are evaluated on simulated data and real cadaver data. The direct method achieves 100% detection rates within 1 mm detection error on simulated data with normal truncation and on simulated data with heavier noise, but detects only 94.6% of the markers in the extremely severe truncation case. The recovery method detects all markers successfully in the three test data sets, and around 95% of the markers are detected within 0.5 mm error.
For real cadaver data, both methods achieve 100% marker detection rates with a mean registration error below 0.2 mm. CONCLUSIONS: Our experiments demonstrate that the direct method detects distorted markers accurately and that the recovery method, with its task-specific data preparation strategy, is highly robust and generalizable across various data sets. The task-specific data preparation reconstructs structures of interest outside the FOV from severely truncated data better than conventional data preparation.
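The detection rates above count a marker as found when a detection lies within a distance tolerance of its ground-truth position. A minimal greedy nearest-neighbor sketch in plain Python (the function name, the matching strategy, and the toy coordinates are assumptions for illustration, not the paper's implementation):

```python
import math

def detection_rate(detected, ground_truth, tol_mm):
    """Fraction of ground-truth markers with a detection within tol_mm,
    matched greedily to the nearest unused detection."""
    remaining = list(detected)
    hits = 0
    for gt in ground_truth:
        best_i, best_d = None, float("inf")
        for i, d in enumerate(remaining):
            dist = math.dist(gt, d)  # Euclidean distance in mm
            if dist < best_d:
                best_i, best_d = i, dist
        if best_i is not None and best_d <= tol_mm:
            hits += 1
            remaining.pop(best_i)  # each detection matches at most one marker
    return hits / len(ground_truth)

# toy example: two markers, both detected within 1 mm
gt = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0)]
det = [(0.3, 0.0, 0.0), (10.0, 0.9, 0.0)]
print(detection_rate(det, gt, tol_mm=1.0))  # 1.0
```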


Subjects
Cone-Beam Computed Tomography; Fiducial Markers; Algorithms; Artifacts; Cadaver; Cone-Beam Computed Tomography/methods; Humans; Image Processing, Computer-Assisted/methods; Phantoms, Imaging
3.
Med Image Anal ; 70: 102028, 2021 05.
Article in English | MEDLINE | ID: mdl-33744833

ABSTRACT

Due to the lack of a standardized 3D cephalometric analysis methodology, 2D cephalograms synthesized from 3D cone-beam computed tomography (CBCT) volumes are widely used for cephalometric analysis in dental CBCT systems. However, compared with conventional X-ray-film-based cephalograms, such synthetic cephalograms lack image contrast and resolution, which impairs cephalometric landmark identification. In addition, the increased radiation dose required to acquire the scan for 3D reconstruction poses potential health risks. In this work, we propose a sigmoid-based intensity transform that exploits the nonlinear optical properties of X-ray films to increase the image contrast of cephalograms synthesized from 3D volumes. To improve image resolution, super-resolution deep learning techniques are investigated. For low-dose imaging, the pixel-to-pixel generative adversarial network (pix2pixGAN) is proposed for 2D cephalogram synthesis directly from two cone-beam projections. For landmark detection in the synthetic cephalograms, an efficient automatic landmark detection method combining LeNet-5 and ResNet50 is proposed. Our experiments demonstrate the efficacy of pix2pixGAN in 2D cephalogram synthesis, achieving an average peak signal-to-noise ratio (PSNR) of 33.8 with reference to the cephalograms synthesized from 3D CBCT volumes. Pix2pixGAN also achieves the best super-resolution performance, with an average PSNR of 32.5 and no checkerboard or jagging artifacts. Our proposed automatic landmark detection method achieves an 86.7% successful detection rate within the 2 mm clinically acceptable range on the ISBI Test1 data, which is comparable to state-of-the-art methods.
The method trained on conventional cephalograms can be applied directly to landmark detection in the synthetic cephalograms, achieving successful detection rates of 93.0% and 80.7% within the 4 mm precision range for synthetic cephalograms from 3D volumes and from 2D projections, respectively.
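The sigmoid-based intensity transform and the PSNR metric mentioned above can be sketched in a few lines of plain Python (the midpoint `x0` and steepness `k` values are illustrative placeholders, not the paper's fitted parameters):

```python
import math

def sigmoid_contrast(x, x0=0.5, k=10.0):
    """Sigmoid intensity transform on a normalized intensity in [0, 1]:
    stretches contrast around midpoint x0 with steepness k."""
    return 1.0 / (1.0 + math.exp(-k * (x - x0)))

def psnr(ref, img, max_val=1.0):
    """Peak signal-to-noise ratio between two equally sized flat images."""
    mse = sum((r - i) ** 2 for r, i in zip(ref, img)) / len(ref)
    return float("inf") if mse == 0 else 10.0 * math.log10(max_val ** 2 / mse)

# mid-gray values are spread apart, increasing local contrast
print([round(sigmoid_contrast(v), 3) for v in (0.4, 0.5, 0.6)])  # [0.269, 0.5, 0.731]
```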


Subjects
Cone-Beam Computed Tomography; Imaging, Three-Dimensional; Cephalometry; Humans; Image Processing, Computer-Assisted; Signal-To-Noise Ratio