Results 1 - 20 of 51
1.
Opt Express ; 31(26): 44772-44797, 2023 Dec 18.
Article in English | MEDLINE | ID: mdl-38178538

ABSTRACT

To extend the field of view while reducing dimensions of the C-arm, we propose a carbon nanotube (CNT)-based C-arm computed tomography (CT) system with multiple X-ray sources. A prototype system was developed using three CNT X-ray sources, enabling a feasibility study. Geometry calibration and image reconstruction were performed to improve the quality of image acquisition. However, the geometry of the prototype system led to projection truncation for each source and an overlap between the object regions covered by the sources in the two-dimensional Radon space, necessitating specific corrective measures. We addressed these problems by implementing truncation correction and applying weighting techniques to the overlap region during the image reconstruction phase. Furthermore, to enable image reconstruction with a scan angle of less than 360°, we designed a weighting function to resolve the data redundancy caused by the short scan angle. The accuracy of the geometry calibration method was evaluated via computer simulations. We also quantified the improvements in reconstructed image quality using mean-squared error and structural similarity. Moreover, detector lag correction was applied to address the afterglow observed in the experimental data obtained from the prototype system. Our evaluation of image quality involved comparing reconstructed images obtained with and without incorporating the geometry calibration results, and images with and without lag correction. The outcomes of our simulation study and experimental investigation demonstrated the efficacy of the proposed geometry calibration, image reconstruction method, and lag correction in reducing image artifacts.
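The abstract does not give the short-scan weighting function that was designed; the classical Parker weight for fan-beam short scans illustrates how such redundancy weights work. This is an illustrative sketch, not the authors' actual function:

```python
import numpy as np

def parker_weight(beta, gamma, gamma_m):
    """Classical Parker weight for a fan-beam short scan over
    beta in [0, pi + 2*gamma_m], where gamma is the fan angle of the
    ray and gamma_m the half fan angle. The redundant ray pair
    (beta, gamma) and (beta + pi - 2*gamma, -gamma) receives weights
    that sum to one, removing the short-scan data redundancy."""
    if beta < 2.0 * (gamma_m + gamma):
        # smooth ramp-up at the start of the scan
        return float(np.sin(np.pi / 4.0 * beta / (gamma_m + gamma)) ** 2)
    if beta <= np.pi + 2.0 * gamma:
        return 1.0  # fully weighted, non-redundant region
    # smooth ramp-down at the end of the scan
    return float(np.sin(np.pi / 4.0 * (np.pi + 2.0 * gamma_m - beta)
                        / (gamma_m - gamma)) ** 2)
```

Because the weights of every redundant pair are complementary, each Radon sample contributes with unit total weight to the reconstruction.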

2.
Opt Express ; 27(7): 10108-10126, 2019 Apr 01.
Article in English | MEDLINE | ID: mdl-31045157

ABSTRACT

We propose a multi-pass approach to reduce cone-beam artifacts in a circular-orbit cone-beam computed tomography (CT) system. Employing a large 2D detector array reduces the scan time but produces cone-beam artifacts in the Feldkamp, Davis, and Kress (FDK) reconstruction because of insufficient sampling for exact reconstruction. While the two-pass algorithm proposed by Hsieh is effective at reducing cone-beam artifacts, its correction performance is degraded when the bone density is moderate and the cone angle is large. In this work, we treated the cone-beam artifacts generated from bone and soft tissue as if they were from less dense bone objects and corrected them iteratively. The proposed method was validated using a numerical Defrise phantom, XCAT phantom data, and experimental data from a pediatric phantom, followed by image quality assessment for FDK, the two-pass algorithm, the proposed method, and total variation minimization-based iterative reconstruction (TV-IR). The results show that the proposed method was superior to the two-pass algorithm in cone-beam artifact reduction and effectively reduced the overcorrection by the two-pass algorithm near bone regions. The proposed method also produced better correction with fewer iterations than the TV-IR algorithm. A quantitative evaluation with mean-squared error, structural similarity, and structural dissimilarity demonstrated the effectiveness of the proposed method.

3.
Opt Express ; 25(15): 17280-17293, 2017 Jul 24.
Article in English | MEDLINE | ID: mdl-28789221

ABSTRACT

We propose a method to measure the directional in-plane modulation transfer function (MTF) of a digital tomosynthesis system using a sphere phantom. To assess the spatial resolution of an in-plane image of the tomosynthesis system, projection data of a sphere phantom were generated within a limited data acquisition range of 40° and reconstructed by the FDK algorithm. To measure the in-plane MTF, we divided the Fourier transform of the reconstructed sphere phantom by that of the ideal sphere phantom and then performed a plane integral along the fz-direction. When dividing, small values in the denominator can introduce estimation errors; these errors were reduced by the proposed method. To evaluate the performance of the proposed method, the in-plane MTF estimated from simulation and experimental data was compared to the ideal in-plane MTF generated by computer simulations using a point object. For quantitative evaluation, we measured frequency values at half-maximum and full-maximum of the directional in-plane MTF along three different directions (i.e., the f0°-, f30°-, and f60°-directions) and compared them with those of the ideal in-plane MTF. Although the sphere phantom has been regarded as an inappropriate object due to the anisotropic characteristics of tomosynthesis images, our results show that the proposed method has reliable estimation performance, demonstrating that the sphere phantom is suitable for measuring the directional in-plane MTF of a digital tomosynthesis system.
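The division-and-integration step can be sketched in a few lines; the threshold used here to guard small denominator values is an assumption, and the paper's actual error-reduction scheme may differ:

```python
import numpy as np

def inplane_mtf(recon, ideal, eps=1e-3):
    """Estimate an in-plane MTF by dividing the Fourier transform of a
    reconstructed sphere by that of the ideal sphere, then integrating
    along fz. Frequencies where the denominator is close to zero are
    masked out to avoid amplifying estimation errors (a simple guard)."""
    F_recon = np.abs(np.fft.fftn(recon))
    F_ideal = np.abs(np.fft.fftn(ideal))
    mask = F_ideal > eps * F_ideal.max()
    mtf3d = np.zeros_like(F_recon)
    mtf3d[mask] = F_recon[mask] / F_ideal[mask]
    inplane = mtf3d.sum(axis=2)      # plane integral along fz
    return inplane / inplane.max()   # normalize to unity at the peak
```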

4.
Opt Express ; 24(17): 18843-59, 2016 Aug 22.
Article in English | MEDLINE | ID: mdl-27557168

ABSTRACT

We investigate the effect of anatomical noise on the detectability of cone beam CT (CBCT) images with different slice directions, slice thicknesses, and volume glandular fractions (VGFs). Anatomical noise is generated using a power law spectrum of breast anatomy, and spherical objects with diameters from 1 mm to 11 mm are used as breast masses. CBCT projection images are simulated and reconstructed using the FDK algorithm. A channelized Hotelling observer (CHO) with Laguerre-Gauss (LG) channels is used to evaluate detectability for the signal-known-exactly (SKE) binary detection task. Detectability is calculated for various slice thicknesses in the transverse and longitudinal planes for 15%, 30%, and 60% VGFs. The optimal slice thicknesses that maximize the detectability of the objects are determined. The results show that the β value increases as the slice thickness increases, but that thicker slices yield higher detectability in the transverse and longitudinal planes, except for the case of a 1 mm diameter spherical object. It is also shown that the longitudinal plane with a 0.1 mm slice thickness provides higher detectability than the transverse plane, despite its higher β value. With optimal slice thicknesses, the longitudinal plane exhibits better detectability for all VGFs and spherical objects.
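A power-law anatomical background of the kind described can be synthesized by spectral filtering of white noise; the exponent used below is only an illustrative assumption, not the value from the paper:

```python
import numpy as np

def powerlaw_background(n, beta=3.0, seed=0):
    """Generate an anatomical-noise background whose power spectrum
    follows P(f) ~ 1/f**beta by shaping white Gaussian noise in the
    Fourier domain. The DC term is zeroed, so the output has zero mean."""
    rng = np.random.default_rng(seed)
    fx = np.fft.fftfreq(n)
    f = np.hypot(*np.meshgrid(fx, fx, indexing="ij"))
    H = np.zeros((n, n))
    H[f > 0] = f[f > 0] ** (-beta / 2.0)  # amplitude filter sqrt(P)
    white = rng.standard_normal((n, n))
    return np.real(np.fft.ifft2(np.fft.fft2(white) * H))
```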


Subjects
Algorithms; Breast/diagnostic imaging; Cone-Beam Computed Tomography/instrumentation; Image Processing, Computer-Assisted/methods; Phantoms, Imaging; Radiographic Image Interpretation, Computer-Assisted/methods; Female; Humans; Reproducibility of Results
5.
Opt Express ; 24(4): 3749-64, 2016 Feb 22.
Article in English | MEDLINE | ID: mdl-26907031

ABSTRACT

We investigate the detection performance of transverse and longitudinal planes for various signal sizes (i.e., 1 mm to 8 mm diameter spheres) in cone beam computed tomography (CBCT) images. CBCT images are generated by computer simulation and reconstructed using the FDK algorithm. For each slice direction and signal size, a human observer study is conducted with a signal-known-exactly/background-known-exactly (SKE/BKE) binary detection task. The detection performance of human observers is compared with that of a channelized Hotelling observer (CHO). The detection performance of an ideal linear observer is also calculated using a CHO with Laguerre-Gauss (LG) channels. The detectability of high-contrast small signals (i.e., up to 4-mm-diameter spheres) is higher in the longitudinal plane than in the transverse plane. It is also shown that CHO performance correlates well with human observer performance in both transverse and longitudinal plane images.
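A channelized Hotelling observer over Laguerre-Gauss channels can be sketched as follows; the channel width `a` and the channel count are illustrative assumptions, not the study's settings:

```python
import numpy as np

def laguerre_polys(nmax, x):
    """First nmax Laguerre polynomials L_0..L_{nmax-1} via the
    standard three-term recurrence."""
    polys = [np.ones_like(x), 1.0 - x]
    for k in range(1, nmax - 1):
        polys.append(((2 * k + 1 - x) * polys[-1] - k * polys[-2]) / (k + 1))
    return polys[:nmax]

def lg_channels(npix, n_channels=5, a=10.0):
    """LG channel profiles u_j(r) ~ exp(-pi r^2/a^2) L_j(2 pi r^2/a^2),
    returned as a (npix*npix, n_channels) matrix with unit-norm columns."""
    y, x = np.indices((npix, npix)) - npix // 2
    r2 = 2.0 * np.pi * (x**2 + y**2) / a**2
    g = np.exp(-r2 / 2.0)
    U = np.stack([(g * L).ravel() for L in laguerre_polys(n_channels, r2)],
                 axis=1)
    return U / np.linalg.norm(U, axis=0)

def cho_snr(signal_imgs, noise_imgs, U):
    """Channelized Hotelling observer SNR for an SKE/BKE binary task:
    project images onto the channels, then apply the Hotelling template
    built from the channelized means and average covariance."""
    v1 = signal_imgs.reshape(len(signal_imgs), -1) @ U
    v0 = noise_imgs.reshape(len(noise_imgs), -1) @ U
    dv = v1.mean(axis=0) - v0.mean(axis=0)
    S = 0.5 * (np.cov(v1, rowvar=False) + np.cov(v0, rowvar=False))
    return float(np.sqrt(dv @ np.linalg.solve(S, dv)))
```

In this formulation, a larger SNR corresponds to higher detectability of the signal-present class.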

6.
Opt Express ; 23(6): 7514-26, 2015 Mar 23.
Article in English | MEDLINE | ID: mdl-25837090

ABSTRACT

A multi-source inverse-geometry CT (MS-IGCT) system consists of a small 2D detector array and multiple x-ray sources. During data acquisition, each source is activated sequentially and may have random intensity fluctuations relative to its nominal intensity. While a conventional third-generation CT system uses a reference channel to monitor source intensity fluctuation, each source of the MS-IGCT system illuminates only a small portion of the entire field-of-view (FOV). Therefore, it is difficult for all sources to illuminate the reference channel, and projection data computed by standard normalization using the flat-field data of each source contain errors that can cause significant artifacts. In this work, we present a raw-data normalization algorithm to reduce the image artifacts caused by source intensity fluctuation. The proposed method was tested using computer simulations with a uniform water phantom and a Shepp-Logan phantom, and experimental data of an ice-filled PMMA phantom and a rabbit. The effect on image resolution and the robustness to noise were tested using the MTF and the standard deviation of the reconstructed noise image. With intensity fluctuation and no correction, reconstructed images from simulation and experimental data show high-frequency artifacts and ring artifacts, which are effectively removed by the proposed method. It is also observed that the proposed method does not degrade image resolution and is very robust to the presence of noise.


Subjects
Statistics as Topic; Tomography, X-Ray Computed/methods; Computer Simulation; Phantoms, Imaging; Radiographic Image Interpretation, Computer-Assisted
7.
Opt Express ; 22(11): 13380-92, 2014 Jun 02.
Article in English | MEDLINE | ID: mdl-24921532

ABSTRACT

Ring artifacts in computed tomography (CT) images degrade image quality and obscure the true shapes of objects. Although several correction methods have been developed, their performance is often task-dependent and not generally applicable. Here, we propose a novel method to reduce ring artifacts by calculating the ratio of adjacent detector elements in the projection data, termed the line-ratio. Our method estimates the sensitivity of each detector element and equalizes the sensitivities in sinogram space. As a result, the stripe pattern can be effectively removed from the sinogram data, thereby also removing ring artifacts from the reconstructed CT image. Numerical simulations were performed to evaluate and compare the performance of our method with that of conventional methods. We also tested our method experimentally and demonstrated that it outperforms the other methods.
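One simplified reading of the line-ratio idea is sketched below; the median estimator and the polynomial detrending are assumptions, and the paper's actual estimator may differ:

```python
import numpy as np

def line_ratio_correct(sino, detrend_deg=3):
    """Sketch of line-ratio style ring-artifact reduction: the median
    over views of the ratio of adjacent detector columns estimates the
    relative gain of neighboring elements; cumulating these ratios gives
    each element's sensitivity, whose smooth (object-driven) trend is
    removed so that only the high-frequency gain variation is equalized."""
    ratio = np.median(sino[:, 1:] / sino[:, :-1], axis=0)
    log_gain = np.concatenate([[0.0], np.cumsum(np.log(ratio))])
    i = np.arange(log_gain.size)
    trend = np.polyval(np.polyfit(i, log_gain, detrend_deg), i)
    gain_hf = np.exp(log_gain - trend)  # per-detector sensitivity estimate
    return sino / gain_hf
```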

8.
Phys Med Biol ; 69(10)2024 Apr 29.
Article in English | MEDLINE | ID: mdl-38593820

ABSTRACT

Objective. Limited-angle computed tomography (CT) presents a challenge due to its ill-posed nature. In such scenarios, analytical reconstruction methods often exhibit severe artifacts. To tackle this inverse problem, several supervised deep learning-based approaches have been proposed. However, they are constrained by limitations such as generalization issues and the difficulty of acquiring a large amount of paired CT images. Approach. In this work, we propose an iterative neural reconstruction framework designed for limited-angle CT. By leveraging a coordinate-based neural representation, we formulate tomographic reconstruction as a convex optimization problem involving a deep neural network. We then employ a differentiable projection layer to optimize this network by minimizing the discrepancy between the predicted and measured projection data. In addition, we introduce a prior-based weight initialization method to ensure the network starts optimization with an informed initial guess. This strategic initialization significantly improves the quality of iterative reconstruction by stabilizing the divergent behavior of ill-posed neural fields. Our method operates in a self-supervised manner, thereby eliminating the need for extensive data. Main results. The proposed method outperforms other iterative and learning-based methods. Experimental results on the XCAT and Mayo Clinic datasets demonstrate the effectiveness of our approach in restoring anatomical features as well as structures. This finding was substantiated by visual inspections and quantitative evaluations using NRMSE, PSNR, and SSIM. Moreover, we conduct a comprehensive investigation into the divergent behavior of iterative neural reconstruction, revealing its suboptimal convergence when starting from scratch. In contrast, our method consistently produced accurate images by incorporating an initial estimate as informed initialization. Significance. This work showcases the feasibility of reconstructing high-fidelity CT images from limited-angle x-ray projections. The proposed methodology introduces a novel data-free approach to enhance medical imaging, holding promise across various clinical applications.


Subjects
Image Processing, Computer-Assisted; Tomography, X-Ray Computed; Tomography, X-Ray Computed/methods; Image Processing, Computer-Assisted/methods; Neural Networks, Computer; Humans; Deep Learning
9.
Med Phys ; 51(3): 1637-1652, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38289987

ABSTRACT

BACKGROUND: Developing a deep-learning network for denoising low-dose CT (LDCT) images necessitates paired computed tomography (CT) images acquired at different dose levels. However, it is challenging to obtain such images from the same patient. PURPOSE: In this study, we introduce a novel approach to generate CT images at different dose levels. METHODS: Our method involves the direct estimation of the quantum noise power spectrum (NPS) from patient CT images without the need for prior information. By modeling the anatomical NPS with a power-law function and estimating the quantum NPS from the measured NPS after removing the anatomical NPS, we create synthesized quantum noise by applying the estimated quantum NPS as a filter to random noise. By adding the synthesized noise to CT images, synthesized CT images can be generated as if they were obtained at a lower dose. This leads to the generation of paired images at different dose levels for training denoising networks. RESULTS: The proposed method accurately estimates the reference quantum NPS. The denoising network trained with paired data generated using synthesized quantum noise achieves denoising performance comparable to that of networks trained using the Mayo Clinic data, as confirmed by mean-squared error (MSE), structural similarity index (SSIM), and peak signal-to-noise ratio (PSNR) scores. CONCLUSIONS: This approach offers a promising solution for developing LDCT denoising networks without multiple scans of the same patient at different doses.
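The final "apply the estimated quantum NPS as a filter to random noise" step can be sketched as follows; the NPS estimation itself is omitted, and the normalization convention is an assumption:

```python
import numpy as np

def synthesize_quantum_noise(quantum_nps, seed=None):
    """Generate correlated noise whose power spectrum follows the given
    quantum NPS by filtering white Gaussian noise with sqrt(NPS) in the
    Fourier domain. With this convention the output variance equals the
    mean of the NPS over all frequencies (by Parseval's theorem)."""
    rng = np.random.default_rng(seed)
    white = rng.standard_normal(quantum_nps.shape)
    return np.real(np.fft.ifft2(np.fft.fft2(white) * np.sqrt(quantum_nps)))
```

Adding the synthesized noise to a CT image then emulates an acquisition at a lower dose, as described in the abstract.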


Subjects
Image Processing, Computer-Assisted; Tomography, X-Ray Computed; Humans; Tomography, X-Ray Computed/methods; Signal-To-Noise Ratio; Image Processing, Computer-Assisted/methods; Algorithms
10.
Med Phys ; 51(4): 2817-2833, 2024 Apr.
Article in English | MEDLINE | ID: mdl-37883787

ABSTRACT

BACKGROUND: In recent times, deep-learning-based super-resolution (DL-SR) techniques for computed tomography (CT) images have shown outstanding results in terms of full-reference image quality (FR-IQ) metrics (e.g., root mean square error and the structural similarity index metric), which assess IQ by measuring similarity to the high-resolution (HR) image. In addition, IQ can be evaluated via task-based IQ (Task-IQ) metrics that evaluate the ability to perform specific tasks. Ironically, most proposed image-domain SR techniques cannot improve a Task-IQ metric, which assesses the amount of information related to diagnosis. PURPOSE: In the case of CT imaging systems, sinogram domain data can be utilized for SR techniques. Therefore, this study aims to investigate the impact of utilizing sinogram domain data on the ability to restore diagnostic information. METHODS: We evaluated three DL-SR techniques: using image domain data (Image-SR), using sinogram domain data (Sinogram-SR), and using sinogram as well as image domain data (Dual-SR). For Task-IQ evaluation, the Rayleigh discrimination task was used to evaluate diagnostic ability by focusing on the resolving-power aspect, and an ideal observer (IO) can be used to perform the task. In this study, we used a convolutional neural network (CNN)-based IO that approximates the IO performance. We compared the IO performances of the SR techniques according to the data domain to evaluate the ability to restore discriminative information. RESULTS: Overall, the low-resolution (LR) and SR images exhibit lower IO performance than the HR images owing to the discriminative information degraded by detector binning. Next, among the SR techniques, Image-SR does not show superior IO performance compared to the LR image, whereas Sinogram-SR and Dual-SR do. Furthermore, for Sinogram-SR, we confirm that FR-IQ and IO performance are positively correlated. These observations demonstrate that sinogram domain upsampling improves the representation of discriminative information in the image domain compared to LR and Image-SR. CONCLUSIONS: Unlike Image-SR, Sinogram-SR can improve the amount of discriminative information present in the image domain. This demonstrates that to improve the amount of discriminative information relevant to resolving power, sinogram domain processing is necessary.


Subjects
Image Processing, Computer-Assisted; Tomography, X-Ray Computed; Image Processing, Computer-Assisted/methods; Tomography, X-Ray Computed/methods; Neural Networks, Computer
11.
Med Phys ; 51(4): 2510-2525, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38011539

ABSTRACT

BACKGROUND: Tomosynthesis systems are three-dimensional (3-D) medical imaging devices that operate over limited acquisition angles using low radiation dosages. To measure the spatial resolution performance of a tomosynthesis system, the modulation transfer function (MTF) is widely used as a quantitative evaluation metric. PURPOSE: We previously introduced a method to estimate the full 3-D MTF of a cone-beam computed tomography system using two-dimensional (2-D) Richardson-Lucy (RL) deconvolution with Tikhonov-Miller regularization. However, this method cannot be applied directly to estimate the 3-D MTF of a tomosynthesis system, since the unique artifacts of the system (i.e., shadow artifacts, spreading tails, directional blurring, and high-level noise) produce several errors that lower the estimation performance. Varying positions of the negative pixels due to shadow artifacts and spreading tails cause inconsistent deconvolution performance at each directional projection, and the severe noise in the reconstructed images causes noise amplification during estimation. This work proposes several modifications to the previous method that resolve the inconsistent performance and noise amplification errors and thereby increase the full 3-D MTF estimation accuracy. METHODS: Three modifications were introduced to the 2-D RL deconvolution to prevent estimation errors and improve MTF estimation performance: a non-negativity relaxation function, a cost function to terminate the iterative RL deconvolution process, and a regularization strength for noise control. To validate the effectiveness of the proposed modifications, we reconstructed sphere phantoms from simulation and experimental tomosynthesis studies in the iso-center and offset-center positions and estimated the full 3-D MTFs using the previous and proposed methods. We compared the 3-D rendered images, central plane images, and center profiles of the estimated 3-D MTFs and calculated the full widths at half and tenth maximum (FWHM and FWTM) for quantitative evaluation. RESULTS: The previous method cannot estimate the full 3-D MTF of a tomosynthesis system; its inaccurate negative-pixel relaxation produces circular-shaped errors, and its simple mean-squared-error-based cost function for termination causes inconsistent estimation at each directional projection, diminishing the clear edges of the low-frequency drop and missing sample regions. Noise amplification from the lack of noise regularization is also seen in the previous method's results. Compared to the previous method, the proposed method shows superior estimation performance, reducing errors in both the simulation and experimental studies regardless of object position. The proposed method preserves the low-frequency drop, the missing sample regions from the limited acquisition angles, and the missing cone region from the offset-center position; the estimated MTFs also show FWHM and FWTM values closer to those of the ideal MTFs than the previous method does. CONCLUSIONS: This work presents a method to estimate the full 3-D MTF of a tomosynthesis system. The proposed modifications prevent the circular-shaped errors and noise amplification caused by the limited-acquisition-angle geometry and high noise levels. Compared to our previous method, the proposed scheme shows better performance for estimating the 3-D MTF of the tomosynthesis system.
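The standard RL deconvolution at the core of both the previous and proposed methods follows the usual multiplicative update. This minimal version has none of the paper's modifications (no relaxation function, cost-based termination, or regularization):

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(img, psf, n_iter=25, eps=1e-12):
    """Standard Richardson-Lucy deconvolution: iteratively rescale the
    estimate by the back-projected ratio of the data to the re-blurred
    estimate. Assumes a normalized, non-negative PSF."""
    est = np.full_like(img, img.mean())
    psf_flip = psf[::-1, ::-1]  # adjoint (flipped) PSF for back-projection
    for _ in range(n_iter):
        reblur = fftconvolve(est, psf, mode="same")
        est *= fftconvolve(img / np.maximum(reblur, eps), psf_flip, mode="same")
    return est
```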


Subjects
Algorithms; Cone-Beam Computed Tomography; Computer Simulation; Radiation Dosage; Phantoms, Imaging
12.
Med Phys ; 51(6): 4181-4200, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38478305

ABSTRACT

BACKGROUND: In enhancing the image quality of low-dose computed tomography (LDCT), various denoising methods have achieved meaningful improvements. However, they commonly produce over-smoothed results; the denoised images tend to be more blurred than the normal-dose targets (NDCTs). Furthermore, many recent denoising methods employ deep learning (DL)-based models, which require a vast amount of CT images (or image pairs). PURPOSE: Our goal is to address the problem of over-smoothed results and to design an algorithm that achieves plausible denoising without requiring a large training dataset. Over-smoothed images negatively affect diagnosis and treatment because radiologists have developed clinical experience with NDCT. Besides, a large-scale training dataset is often not available in clinical situations. To overcome these limitations, we propose locally-adaptive noise-level matching (LANCH), emphasizing that the output should retain the same noise level and characteristics as the NDCT without additional training. METHODS: We represent the NDCT image as the pixel-wise weighted sum of an over-smoothed output from an off-the-shelf denoiser (OSD) and the difference between the LDCT image and the OSD output. Herein, LANCH determines a 2D ratio map (i.e., a pixel-wise weight matrix) by locally matching the noise level of the output to that of the NDCT, where the LDCT-to-NDCT device flux (mAs) ratio reveals the NDCT noise level. Thereby, LANCH can preserve important details in the LDCT and enhance the sharpness of the noise-free regions. Note that LANCH can enhance any LDCT denoiser without additional training data (i.e., zero-shot). RESULTS: The proposed method is applicable to any OSD, yielding significant improvements in texture plausibility over the baseline denoisers both quantitatively and qualitatively. Notably, the denoising accuracy achieved by our method with a zero-shot denoiser was comparable or superior to that of the best training-based denoisers; our results showed 1% and 33% gains in terms of SSIM and DISTS, respectively. A reader study with experienced radiologists showed significant image quality improvements, a gain of +1.18 on a five-point mean opinion score scale. CONCLUSIONS: In this paper, we propose a technique to enhance any low-dose CT denoiser by leveraging the fundamental physical relationship between x-ray flux and noise variance. Our method is capable of operating in a zero-shot condition, meaning that only a single low-dose CT image is required for the enhancement process. We demonstrate that our approach is comparable or even superior to supervised DL-based denoisers trained using numerous CT images. Extensive experiments illustrate that our method consistently improves the performance of all tested LDCT denoisers.
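A heavily simplified reading of the noise-level matching idea is sketched below. The window size, the global target-noise estimate, and the exact form of the ratio map are all assumptions; the paper's actual scheme differs in detail:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lanch_sketch(ldct, osd_out, mas_ratio, win=9):
    """Sketch of locally-adaptive noise-level matching: the output is
    osd_out + w * (ldct - osd_out), where the pixel-wise ratio map w
    scales the local residual so its std matches the NDCT noise level
    implied by the flux ratio (noise std ~ 1/sqrt(mAs), so
    sigma_ND = sigma_LD * sqrt(mas_ratio), mas_ratio = mAs_LD / mAs_ND)."""
    resid = ldct - osd_out
    m = uniform_filter(resid, win)
    local_std = np.sqrt(np.clip(uniform_filter(resid**2, win) - m**2,
                                1e-12, None))
    # global NDCT noise level estimated from the median local residual std
    sigma_target = np.sqrt(mas_ratio) * np.median(local_std)
    w = np.clip(sigma_target / local_std, 0.0, 1.0)  # 2D ratio map
    return osd_out + w * resid
```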


Subjects
Image Processing, Computer-Assisted; Radiation Dosage; Signal-To-Noise Ratio; Tomography, X-Ray Computed; Image Processing, Computer-Assisted/methods; Humans; Algorithms; Deep Learning
13.
Med Phys ; 50(10): 6390-6408, 2023 Oct.
Article in English | MEDLINE | ID: mdl-36971505

ABSTRACT

BACKGROUND: Since human observer studies are resource-intensive, mathematical model observers are frequently used to assess task-based image quality. The most common implementations of these model observers assume that the signal information is exactly known. However, such tasks cannot thoroughly represent situations where the signal information is not exactly known in terms of size and shape. PURPOSE: Considering the limitations of tasks for which the signal information is exactly known, we propose a convolutional neural network (CNN)-based model observer for signal-known-statistically (SKS) and background-known-statistically (BKS) detection tasks in breast tomosynthesis images. METHODS: A wide parameter search was conducted across six different acquisition angles (i.e., 10°, 20°, 30°, 40°, 50°, and 60°) at the same dose level (i.e., 2.3 mGy) under two separate acquisition schemes: (1) a constant total number of projections, and (2) constant angular separation between projections. Two different types of signals were used: spherical (i.e., signal-known-exactly, SKE, tasks) and spiculated (i.e., SKS tasks). The detection performance of the CNN-based model observer was compared with that of the Hotelling observer (HO), used as a surrogate for the ideal observer (IO). A pixel-wise gradient-weighted class activation mapping (pGrad-CAM) map was extracted from each reconstructed tomosynthesis image to provide an intuitive understanding of the trained CNN-based model observer. RESULTS: The CNN-based model observer achieved higher detection performance than the HO for all tasks. Moreover, the improvement in its detection performance was greater for SKS tasks than for SKE tasks. These results demonstrate that the added nonlinearity improved the detection performance under background and signal variation. Interestingly, the pGrad-CAM results effectively localized the class-specific discriminative regions, further supporting the quantitative evaluation results of the CNN-based model observer. In addition, we verified that the CNN-based model observer required fewer images to achieve the detection performance of the HO. CONCLUSIONS: In this work, we proposed a CNN-based model observer for SKS and BKS detection tasks in breast tomosynthesis images. Throughout the study, we demonstrated that the detection performance of the proposed CNN-based model observer was superior to that of the HO.


Subjects
Image Processing, Computer-Assisted; Neural Networks, Computer; Humans; Image Processing, Computer-Assisted/methods; Breast/diagnostic imaging; Models, Theoretical; Observer Variation
14.
Phys Med Biol ; 68(20)2023 10 04.
Article in English | MEDLINE | ID: mdl-37722388

ABSTRACT

Objective. This paper proposes a new objective function to improve the quality of synthesized breast CT images generated by a GAN and compares GAN performance for transfer-learning datasets from different image domains. Approach. The proposed objective function, named the beta loss function, is based on the fact that x-ray-based breast images follow a power-law spectrum. Accordingly, the exponent of the power-law spectrum (beta value) for breast CT images is approximately two. The beta loss function is defined as the L1 distance between the beta value of synthetic images and that of validation samples. To compare GAN performance for transfer-learning datasets from different image domains, ImageNet and anatomical noise images are used as transfer-learning datasets. We employ StyleGAN2 as the backbone network and add the proposed beta loss function. A patient-derived breast CT dataset is used for training and validation; 7355 and 212 images are used for network training and validation, respectively. We use the beta value evaluation and the Fréchet inception distance (FID) score for quantitative evaluation. Main results. For qualitative assessment, we attempt to replicate images from the validation dataset using the trained GAN. Our results show that the proposed beta loss function achieves a beta value closer to that of real images and a lower FID score. Moreover, we observe that the GAN pretrained with anatomical noise images achieves better quality than the one pretrained with ImageNet in terms of both the beta value evaluation and the FID score. Finally, the beta loss function with anatomical noise as the transfer-learning dataset achieves the lowest FID score. Significance. Overall, the GAN using the proposed beta loss function with anatomical noise images as the transfer-learning dataset provides the lowest FID score among all tested cases. Hence, this work has implications for developing GAN-based breast image synthesis methods for medical imaging applications.
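The beta value can be estimated by a log-log fit to the radially averaged power spectrum, after which the loss is a plain L1 distance. The binning and fitting details below are assumptions; the paper's estimator may differ:

```python
import numpy as np

def estimate_beta(img):
    """Fit the exponent beta of the radially averaged power spectrum,
    P(f) ~ c / f**beta, by least squares in log-log coordinates
    (x-ray breast images have beta close to 2 per the abstract)."""
    P2d = np.fft.fftshift(np.abs(np.fft.fft2(img - img.mean())) ** 2)
    ny, nx = img.shape
    y, x = np.indices((ny, nx))
    r = np.hypot(x - nx // 2, y - ny // 2)
    bins = np.arange(1, min(nx, ny) // 2)
    P = np.array([P2d[(r >= b) & (r < b + 1)].mean() for b in bins])
    slope, _ = np.polyfit(np.log(bins), np.log(P), 1)
    return -slope

def beta_loss(beta_synthetic, beta_real):
    """L1 distance between beta values, as the proposed loss defines it."""
    return abs(beta_synthetic - beta_real)
```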


Assuntos
Mama , Tomografia Computadorizada por Raios X , Humanos , Mama/diagnóstico por imagem , Aprendizado de Máquina , Processamento de Imagem Assistida por Computador
15.
Med Phys ; 50(5): 2787-2804, 2023 May.
Article in English | MEDLINE | ID: mdl-36734478

ABSTRACT

BACKGROUND: The purpose of a convolutional neural network (CNN)-based denoiser is to increase the diagnostic accuracy of low-dose computed tomography (LDCT) imaging. To increase diagnostic accuracy, a method is needed that reflects diagnosis-related features during the denoising process. PURPOSE: To provide a training strategy for LDCT denoisers that relies more on diagnostic task-related features to improve diagnostic accuracy. METHODS: An attentive map derived from a lesion classifier (i.e., one determining whether a lesion is present) is created to represent the extent to which each pixel influences the classifier's decision. This map is used as a weight to emphasize important parts of the image. The proposed training method consists of two steps. In the first, the initial parameters of the CNN denoiser are trained using LDCT and normal-dose CT image pairs via supervised learning. In the second, the learned parameters are readjusted using the attentive map to restore the fine details of the image. RESULTS: Structural details and contrast are better preserved in images generated by the denoiser trained via the proposed method than in those generated by conventional denoisers. The proposed denoiser also yields higher lesion detectability and localization accuracy than conventional denoisers. CONCLUSIONS: A denoiser trained using the proposed method preserves small structures and contrast in the denoised images better than one trained without it. Specifically, using the attentive map improves the lesion detectability and localization accuracy of the denoiser.
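The second training step can be sketched as an attentive-map-weighted reconstruction loss; the exact weighting form and the `lam` parameter are assumptions, not the paper's formulation:

```python
import numpy as np

def attentive_weighted_mse(pred, target, attn, lam=1.0):
    """Sketch of a fine-tuning objective: an MSE whose per-pixel weight
    grows with the lesion classifier's attentive map, so errors on
    diagnostically influential pixels are penalized more heavily."""
    w = 1.0 + lam * attn / attn.max()   # weight in [1, 1 + lam]
    return float(np.mean(w * (pred - target) ** 2))
```

Under this weighting, an identical pixel error costs more where the classifier's attention is high, steering the denoiser toward restoring task-relevant detail.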


Subjects
Image Processing, Computer-Assisted; Neural Networks, Computer; Image Processing, Computer-Assisted/methods; Tomography, X-Ray Computed/methods
16.
Med Phys ; 50(12): 7714-7730, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37401539

ABSTRACT

BACKGROUND: Limited scan angles cause severe distortions and artifacts in tomosynthesis images reconstructed with the Feldkamp-Davis-Kress (FDK) algorithm, which degrades clinical diagnostic performance. These blurring artifacts are especially detrimental in chest tomosynthesis images because precise vertebrae segmentation is crucial for various diagnostic analyses, such as early diagnosis, surgical planning, and injury detection. Moreover, because most spinal pathologies are related to vertebral conditions, the development of methods for accurate and objective vertebrae segmentation in medical images is an important and challenging research area. PURPOSE: Existing point-spread-function (PSF)-based deblurring methods use the same PSF in all sub-volumes, ignoring the spatially varying property of tomosynthesis images. This increases the PSF estimation error and thus degrades deblurring performance. In contrast, the proposed method estimates the PSF more accurately by using sub-CNNs that contain a deconvolution layer for each sub-system, which improves deblurring performance. METHODS: To minimize the effect of the spatially varying property, the proposed deblurring network architecture comprises four modules: (1) a block division module, (2) a partial PSF module, (3) a deblurring block module, and (4) an assembling block module. We compared the proposed deep learning (DL)-based method with the FDK algorithm, total-variation iterative reconstruction with GP-BB (TV-IR), 3D U-Net, FBPConvNet, and a two-phase deblurring method. To investigate the deblurring performance of the proposed method, we evaluated its vertebrae segmentation performance by comparing the pixel accuracy (PA), intersection-over-union (IoU), and F-score values of reference images with those of the deblurred images. Pixel-based evaluations of the reference and deblurred images were also performed by comparing their root-mean-squared error (RMSE) and visual information fidelity (VIF) values.
In addition, 2D analysis of the deblurred images was performed using the artifact spread function (ASF) and the full width at half maximum (FWHM) of the ASF curve. RESULTS: The proposed method recovered the original structures to a significant degree, thereby further improving image quality, and yielded the best deblurring performance in terms of vertebrae segmentation and similarity. The IoU, F-score, and VIF values of chest tomosynthesis images reconstructed using the proposed SV method were 53.5%, 28.7%, and 63.2% higher, respectively, than those of images reconstructed using the FDK method, and the RMSE value was 80.3% lower. These quantitative results indicate that the proposed method can effectively restore both the vertebrae and the surrounding soft tissue. CONCLUSIONS: We proposed a chest tomosynthesis deblurring technique for vertebrae segmentation that accounts for the spatially varying property of tomosynthesis systems. Quantitative evaluations indicated that the vertebrae segmentation performance of the proposed method was better than that of existing deblurring methods.
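As context for the segmentation metrics reported above, pixel accuracy, IoU, and F-score all derive from the same confusion counts on binary masks. A minimal generic sketch (not the authors' evaluation code):

```python
import numpy as np

def segmentation_metrics(pred, ref):
    """Pixel accuracy, IoU, and F-score for binary masks (generic sketch)."""
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    tp = np.logical_and(pred, ref).sum()     # true positives
    fp = np.logical_and(pred, ~ref).sum()    # false positives
    fn = np.logical_and(~pred, ref).sum()    # false negatives
    tn = np.logical_and(~pred, ~ref).sum()   # true negatives
    pa = (tp + tn) / pred.size               # pixel accuracy
    iou = tp / (tp + fp + fn)                # intersection over union
    f_score = 2 * tp / (2 * tp + fp + fn)    # Dice / F1
    return pa, iou, f_score
```

For example, a prediction with one extra pixel over a one-pixel reference yields PA 0.75, IoU 0.5, and F-score 2/3 on a 2x2 mask.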


Subjects
Algorithms; Image Processing, Computer-Assisted; Image Processing, Computer-Assisted/methods; Tomography, X-Ray Computed; Artifacts; Spine/diagnostic imaging
17.
Phys Med Biol ; 68(11)2023 05 30.
Article in English | MEDLINE | ID: mdl-37137323

ABSTRACT

Objective. In this work, we propose a convolutional neural network (CNN)-based multi-slice ideal model observer using transfer learning (TL-CNN) to reduce the required number of training samples. Approach. To train the model observers, we generate simulated breast CT image volumes reconstructed using the Feldkamp-Davis-Kress algorithm with a ramp filter and a Hanning-weighted ramp filter. Observer performance is evaluated on a background-known-statistically (BKS)/signal-known-exactly (SKE) task with a spherical signal, and a BKS/signal-known-statistically (SKS) task with a random signal generated by a stochastic growth method. We compare the detectability of the CNN-based model observer with that of conventional linear model observers for multi-slice images (i.e., a multi-slice channelized Hotelling observer (CHO) and a volumetric CHO). We also analyze the detectability of the TL-CNN for different numbers of training samples to examine its robustness to a limited number of training samples. To further analyze the effectiveness of transfer learning, we calculate the correlation coefficients of the filter weights in the CNN-based multi-slice model observer. Main results. With transfer learning, the TL-CNN matches the performance of the CNN-based multi-slice ideal model observer trained without it while using 91.7% fewer training samples. Moreover, compared to the conventional linear model observers, the proposed CNN-based multi-slice model observers achieve 45% higher detectability in the SKS detection tasks and 13% higher detectability in the SKE detection tasks.
In the correlation coefficient analysis, the filters in most of the layers are highly correlated, demonstrating the effectiveness of transfer learning for multi-slice model observer training. Significance. Deep learning-based model observers require large numbers of training samples, and the required number increases with the dimensions of the image (i.e., the number of slices). By applying transfer learning, the required number of training samples is significantly reduced without a performance drop.
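For reference, the linear CHO baselines mentioned above reduce each image to a small set of channel outputs and form a Hotelling template from their sample statistics. A minimal numpy sketch under generic assumptions (the channel matrix and data below are placeholders, not the paper's breast CT setup):

```python
import numpy as np

def cho_detectability(signal_imgs, noise_imgs, channels):
    """Channelized Hotelling observer detectability d' from sample statistics.

    signal_imgs, noise_imgs: (n_samples, n_pixels) arrays of flattened images.
    channels: (n_pixels, n_channels) channel matrix.
    """
    vs = signal_imgs @ channels               # channel outputs, signal-present
    vn = noise_imgs @ channels                # channel outputs, signal-absent
    dv = vs.mean(axis=0) - vn.mean(axis=0)    # mean channel-output difference
    # average intra-class covariance of the channel outputs
    s = 0.5 * (np.cov(vs, rowvar=False) + np.cov(vn, rowvar=False))
    w = np.linalg.solve(s, dv)                # Hotelling template in channel space
    return float(np.sqrt(dv @ w))             # detectability index d'
```

With unit-variance white noise and a unit mean shift confined to one retained channel, d' comes out near 1, as expected for this idealized case.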


Subjects
Image Processing, Computer-Assisted; Neural Networks, Computer; Image Processing, Computer-Assisted/methods; Tomography, X-Ray Computed/methods; Algorithms; Machine Learning
18.
Med Phys ; 50(12): 7731-7747, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37303108

ABSTRACT

BACKGROUND: Sparse-view computed tomography (CT) has attracted considerable attention as a way to reduce both scanning time and radiation dose. However, sparsely sampled projection data generate severe streak artifacts in the reconstructed images. Many sparse-view CT reconstruction techniques based on fully supervised learning have been proposed in recent years and have shown promising results, but acquiring paired full-view and sparse-view CT images is not feasible in real clinical practice. PURPOSE: In this study, we propose a novel self-supervised convolutional neural network (CNN) method to reduce streak artifacts in sparse-view CT images. METHODS: We generate the training dataset using only sparse-view CT data and train the CNN based on self-supervised learning. Since the streak artifacts can be estimated using prior images under the same CT geometry, we acquire prior images by iteratively applying the trained network to the given sparse-view CT images. We then subtract the estimated streak artifacts from the given sparse-view CT images to produce the final results. RESULTS: We validated the imaging performance of the proposed method using extended cardiac-torso (XCAT) phantom data and the 2016 AAPM Low-Dose CT Grand Challenge dataset from the Mayo Clinic. Based on visual inspection and the modulation transfer function (MTF), the proposed method preserved anatomical structures effectively and showed higher image resolution than various streak artifact reduction methods for all projection views. CONCLUSIONS: We propose a new framework for streak artifact reduction when only sparse-view CT data are available. Although no information from full-view CT data is used for CNN training, the proposed method achieved the highest performance in preserving fine details. By overcoming the dataset requirements of fully supervised methods, we expect our framework to be useful throughout the medical imaging field.
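The MTF used for the resolution assessment above is commonly estimated from an edge profile: differentiate the edge spread function (ESF) to obtain the line spread function (LSF), then normalize the magnitude of its Fourier transform. A minimal sketch of that generic procedure (not the authors' implementation):

```python
import numpy as np

def mtf_from_esf(esf):
    """MTF from a 1D edge spread function (generic sketch)."""
    lsf = np.diff(esf)                 # ESF derivative -> line spread function
    mtf = np.abs(np.fft.rfft(lsf))     # magnitude spectrum of the LSF
    return mtf / mtf[0]                # normalize to unity at zero frequency
```

A sanity check: for an ideal step edge the LSF is a single impulse, so the MTF is flat at 1 across all frequencies; any blurring of the edge pulls the curve below 1 at high frequencies.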


Subjects
Artifacts; Image Processing, Computer-Assisted; Image Processing, Computer-Assisted/methods; Neural Networks, Computer; Tomography, X-Ray Computed/methods; Radionuclide Imaging; Algorithms; Phantoms, Imaging
19.
Bioeng Transl Med ; 8(6): e10480, 2023 Nov.
Article in English | MEDLINE | ID: mdl-38023698

ABSTRACT

Ultrasonography is one of the key medical imaging modalities for evaluating breast lesions. For differentiating benign from malignant lesions, computer-aided diagnosis (CAD) systems have greatly assisted radiologists by automatically segmenting and identifying features of lesions. Here, we present deep learning (DL)-based methods to segment the lesions and then classify benign from malignant, utilizing both B-mode and strain elastography (SE-mode) images. We propose a weighted multimodal U-Net (W-MM-U-Net) model for segmenting lesions where optimum weight is assigned on different imaging modalities using a weighted-skip connection method to emphasize its importance. We design a multimodal fusion framework (MFF) on cropped B-mode and SE-mode ultrasound (US) lesion images to classify benign and malignant lesions. The MFF consists of an integrated feature network (IFN) and a decision network (DN). Unlike other recent fusion methods, the proposed MFF method can simultaneously learn complementary information from convolutional neural networks (CNNs) trained using B-mode and SE-mode US images. The features from the CNNs are ensembled using the multimodal EmbraceNet model and DN classifies the images using those features. The experimental results (sensitivity of 100 ± 0.00% and specificity of 94.28 ± 7.00%) on the real-world clinical data showed that the proposed method outperforms the existing single- and multimodal methods. The proposed method predicts seven benign patients as benign three times out of five trials and six malignant patients as malignant five out of five trials. The proposed method would potentially enhance the classification accuracy of radiologists for breast cancer detection in US images.

20.
PLoS One ; 17(5): e0267850, 2022.
Article in English | MEDLINE | ID: mdl-35587494

ABSTRACT

We investigated the effect of optical blurring of the X-ray source on digital breast tomosynthesis (DBT) image quality using a well-designed DBT simulator and a table-top experimental system. To measure the in-plane modulation transfer function (MTF), we used a simulated sphere phantom and a Teflon sphere phantom and generated their projection data using two acquisition modes (i.e., step-and-shoot mode and continuous mode). After reconstruction, we measured the in-plane MTF using the reconstructed sphere phantom images. In addition, we measured the anatomical noise power spectrum (aNPS) and signal detectability. We constructed simulated breast phantoms with a 50% volume glandular fraction (VGF) of breast anatomy using the power-law spectrum and inserted spherical objects with 1 mm, 2 mm, and 5 mm diameters as breast masses. Projection data were acquired using the two acquisition modes, and in-plane breast images were reconstructed using the Feldkamp-Davis-Kress (FDK) algorithm. For the experimental study, we used the BR3D breast phantom with 50% VGF and obtained projection data using a table-top experimental system. To compare the detection performance of the two acquisition modes, we calculated signal detectability using a channelized Hotelling observer (CHO) with Laguerre-Gauss (LG) channels. Our results show that the spatial resolution of the in-plane image in continuous mode was degraded by the optical blurring of the X-ray source. This blurring effect was reflected in the aNPS, resulting in large β values. From a signal detectability perspective, detectability in step-and-shoot mode was higher than in continuous mode for small spherical signals but not for large spherical signals. Although step-and-shoot mode is at a disadvantage in scan time compared to continuous mode, it is better for detecting small signals, indicating a tradeoff between scan time and image quality.
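The Laguerre-Gauss channels used by the CHO above have a standard closed form, C_n(r) proportional to exp(-pi r^2 / a^2) L_n(2 pi r^2 / a^2), with L_n the nth Laguerre polynomial. A minimal numpy sketch of a discretized channel matrix (the width parameter and ROI size below are illustrative, not the paper's values):

```python
import numpy as np
from numpy.polynomial.laguerre import lagval

def lg_channels(dim, a, n_channels):
    """Laguerre-Gauss channel matrix for a dim x dim ROI (generic sketch).

    Returns a (dim*dim, n_channels) matrix with unit-norm columns;
    a is the Gaussian width parameter of the channel profiles.
    """
    c = (dim - 1) / 2.0                        # ROI center
    y, x = np.mgrid[0:dim, 0:dim]
    r2 = (x - c) ** 2 + (y - c) ** 2           # squared radius per pixel
    u = 2.0 * np.pi * r2 / a ** 2              # Laguerre argument
    channels = np.empty((dim * dim, n_channels))
    for n in range(n_channels):
        coef = np.zeros(n + 1)
        coef[n] = 1.0                          # select the nth Laguerre polynomial
        ch = np.exp(-u / 2.0) * lagval(u, coef)  # exp(-pi r^2/a^2) * L_n(...)
        channels[:, n] = (ch / np.linalg.norm(ch)).ravel()
    return channels
```

The resulting matrix plugs directly into a CHO as the channelizing step; rotational symmetry makes LG channels a common choice for sphere-like signals such as the masses described above.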


Subjects
Breast; Mammography; Algorithms; Breast/diagnostic imaging; Image Processing, Computer-Assisted/methods; Mammography/methods; Phantoms, Imaging; X-Rays