Results 1 - 9 of 9
1.
Eur J Nucl Med Mol Imaging ; 51(13): 3874-3887, 2024 Nov.
Article in English | MEDLINE | ID: mdl-39042332

ABSTRACT

PURPOSE: Technological advances in instrumentation have greatly promoted the development of positron emission tomography (PET) scanners. State-of-the-art PET scanners such as uEXPLORER can collect PET images of significantly higher quality. However, these scanners are not currently available in most local hospitals due to the high cost of manufacturing and maintenance. Our study aims to convert low-quality PET images acquired by common PET scanners into images of comparable quality to those obtained by state-of-the-art scanners, without the need for paired low- and high-quality PET images. METHODS: We propose an improved CycleGAN (IE-CycleGAN) model for unpaired PET image enhancement. The method is based on CycleGAN, with a correlation coefficient loss and a patient-specific prior loss added to constrain the structure of the generated images. Furthermore, we define a normalX-to-advanced training strategy to enhance the generalization ability of the network. The proposed method was validated on unpaired uEXPLORER datasets and Biograph Vision local hospital datasets. RESULTS: On the uEXPLORER dataset, the proposed method achieved better results than non-local mean filtering (NLM), block-matching and 3D filtering (BM3D), and deep image prior (DIP), and results comparable to Unet (supervised) and CycleGAN (supervised). On the Biograph Vision local hospital datasets, the proposed method achieved higher contrast-to-noise ratios (CNR) and tumor-to-background SUVmax ratios (TBR) than NLM, BM3D, and DIP. In addition, it showed higher contrast, SUVmax, and TBR than Unet (supervised) and CycleGAN (supervised) when applied to images from different scanners. CONCLUSION: The proposed unpaired PET image enhancement method outperforms NLM, BM3D, and DIP. Moreover, it performs better than Unet (supervised) and CycleGAN (supervised) when implemented on local hospital datasets, which demonstrates its excellent generalization ability.
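To make the structural constraint concrete, here is a minimal, purely illustrative sketch of what a correlation coefficient loss of the kind mentioned above could look like in PyTorch; the batched Pearson form, the 1 - r objective, and the epsilon are assumptions, not the published IE-CycleGAN definition.

import torch

def correlation_coefficient_loss(generated: torch.Tensor, reference: torch.Tensor) -> torch.Tensor:
    # Penalize low Pearson correlation between generated and reference image batches
    # (hypothetical form; the actual IE-CycleGAN loss may be defined differently).
    g = generated.flatten(start_dim=1)
    r = reference.flatten(start_dim=1)
    g = g - g.mean(dim=1, keepdim=True)
    r = r - r.mean(dim=1, keepdim=True)
    corr = (g * r).sum(dim=1) / (g.norm(dim=1) * r.norm(dim=1) + 1e-8)
    return (1.0 - corr).mean()  # 0 when each generated image is perfectly correlated with its reference

Such a term only constrains global structure; the patient-specific prior loss described in the abstract would be an additional, separately weighted term.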


Subject(s)
Image Processing, Computer-Assisted; Positron-Emission Tomography; Positron-Emission Tomography/methods; Humans; Image Processing, Computer-Assisted/methods; Image Enhancement/methods; Neural Networks, Computer
2.
Comput Med Imaging Graph ; 113: 102351, 2024 04.
Article in English | MEDLINE | ID: mdl-38335784

ABSTRACT

Low resolution of positron emission tomography (PET) limits its diagnostic performance. Deep learning has been successfully applied to achieve super-resolution PET. However, commonly used supervised learning methods in this context require many pairs of low- and high-resolution (LR and HR) PET images. Although unsupervised learning utilizes unpaired images, the results are not as good as those obtained with supervised deep learning. In this paper, we propose a quasi-supervised learning method, a new type of weakly supervised learning, to recover HR PET images from LR counterparts by leveraging the similarity between unpaired LR and HR image patches. Specifically, LR image patches are taken from a patient as inputs, while the most similar HR patches from other patients are found as labels. The similarity between the matched HR and LR patches serves as a prior for network construction. Our proposed method can be implemented by designing a new network or modifying an existing network. As an example in this study, we modified the cycle-consistent generative adversarial network (CycleGAN) for super-resolution PET. Our numerical and experimental results qualitatively and quantitatively show the merits of our method relative to state-of-the-art methods. The code is publicly available at https://github.com/PigYang-ops/CycleGAN-QSDL.
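As a rough sketch of the patch-matching idea (not the released code at the URL above), the following NumPy routine pairs each LR patch with its most similar HR patch from other patients using a mean-squared-distance criterion; the patch size, the distance metric, and the pre-downsampling of HR patches are assumptions.

import numpy as np

def match_patches(lr_patches, hr_patches_down):
    # lr_patches: (N, p, p) LR patches from one patient.
    # hr_patches_down: (M, p, p) HR patches from other patients, downsampled to the LR grid.
    # Returns the index of the best-matching HR patch for each LR patch and the distance,
    # which can serve as the similarity prior mentioned in the abstract.
    lr = lr_patches.reshape(len(lr_patches), -1)
    hr = hr_patches_down.reshape(len(hr_patches_down), -1)
    d2 = ((lr[:, None, :] - hr[None, :, :]) ** 2).mean(axis=2)  # (N, M) pairwise distances
    idx = d2.argmin(axis=1)
    return idx, d2[np.arange(len(lr)), idx]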


Subject(s)
Positron-Emission Tomography; Supervised Machine Learning; Humans
3.
Comput Biol Med ; 168: 107761, 2024 01.
Article in English | MEDLINE | ID: mdl-38039894

ABSTRACT

Though deep learning-based surgical smoke removal methods have shown significant improvements in effectiveness and efficiency, the lack of paired smoke and smoke-free images in real surgical scenarios limits the performance of these methods. Therefore, methods that can achieve good generalization performance without paired in-vivo data are in high demand. In this work, we propose a smoke veil prior regularized two-stage smoke removal framework based on the physical model of smoke image formation. More precisely, in the first stage, we leverage a reconstruction loss, a consistency loss, and a smoke veil prior-based regularization term to perform fully supervised training on a synthetic paired image dataset. Then a self-supervised training stage is deployed on real smoke images, where only the consistency loss and the smoke veil prior-based loss are minimized. Experiments show that the proposed method outperforms state-of-the-art methods on the synthetic dataset. The average PSNR, SSIM and RMSE values are 21.99±2.34, 0.9001±0.0252 and 0.2151±0.0643, respectively. Qualitative visual inspection on the real dataset further demonstrates the effectiveness of the proposed method.
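The abstract does not define the smoke veil prior itself, so the snippet below is only a hypothetical illustration of such a regularizer: it estimates a bright, low-frequency "veil" component with a large box filter and penalizes its mean intensity in the desmoked output. The kernel size and the penalty form are assumptions.

import torch
import torch.nn.functional as F

def smoke_veil_penalty(desmoked: torch.Tensor, kernel_size: int = 31) -> torch.Tensor:
    # desmoked: (B, C, H, W) images in [0, 1]; illustrative definition only.
    pad = kernel_size // 2
    veil = F.avg_pool2d(desmoked, kernel_size, stride=1, padding=pad)  # crude low-pass veil estimate
    return veil.mean()  # smaller when little smoke-like haze remains in the output

In the two-stage scheme described above, a term like this would be added to the reconstruction and consistency losses in the supervised stage and kept, together with the consistency loss, in the self-supervised stage.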


Subject(s)
Image Processing, Computer-Assisted; Physical Examination
4.
Comput Biol Med ; 165: 107461, 2023 10.
Article in English | MEDLINE | ID: mdl-37708716

ABSTRACT

Magnetic particle imaging (MPI) is an emerging medical imaging technique with high sensitivity, high contrast, and excellent depth penetration. In MPI, x-space is a reconstruction method that transforms the measured voltages into particle concentrations. The reconstructed native image can be modeled as a convolution of the magnetic particle concentration with a point-spread function (PSF). The PSF is one of the important parameters in deconvolution. However, accurately measuring or modeling the hardware PSF used for deconvolution is challenging due to environmental variations and magnetic particle relaxation. Inaccurate PSF estimation may lead to loss of the content structure of the MPI image, especially in low gradient fields. In this study, we developed a Dual Adversarial Network (DAN) with a patch-wise contrastive constraint to deblur the MPI image. This method can overcome the limitations of unpaired data in acquisition scenarios and removes blur around boundaries more effectively than the common deconvolution method. We evaluated the performance of the proposed DAN model on simulated and real data. Experimental results confirm that our model performs favorably against the deconvolution method commonly used for deblurring MPI images, as well as against other GAN-based deep learning models.
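For orientation, the following is a generic InfoNCE-style patch-wise contrastive term of the kind the abstract alludes to; the feature extractor, temperature, and patch sampling are unstated in the abstract and therefore assumed here.

import torch
import torch.nn.functional as F

def patch_nce_loss(feat_src: torch.Tensor, feat_out: torch.Tensor, tau: float = 0.07) -> torch.Tensor:
    # feat_src, feat_out: (P, D) features of P corresponding patches from the blurred input
    # and the deblurred output; the diagonal pairs are treated as positives.
    src = F.normalize(feat_src, dim=1)
    out = F.normalize(feat_out, dim=1)
    logits = out @ src.t() / tau                        # (P, P) similarity matrix
    targets = torch.arange(len(src), device=src.device)
    return F.cross_entropy(logits, targets)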


Subject(s)
Diagnostic Imaging; Magnetic Phenomena
5.
Phys Med Biol ; 68(20)2023 Oct 02.
Article in English | MEDLINE | ID: mdl-37708896

ABSTRACT

Deep learning has been successfully applied to low-dose CT (LDCT) image denoising for reducing potential radiation risk. However, the widely reported supervised LDCT denoising networks require a training set of paired images, which is expensive to obtain and cannot be perfectly simulated. Unsupervised learning utilizes unpaired data and is highly desirable for LDCT denoising. As an example, an artifact disentanglement network (ADN) relies on unpaired images and obviates the need for supervision, but the results of artifact reduction are not as good as those achieved through supervised learning. An important observation is that there is often hidden similarity among unpaired data that can be utilized. This paper introduces a new learning mode, called quasi-supervised learning, to empower ADN for LDCT image denoising. For every LDCT image, the best matched image is first found from an unpaired normal-dose CT (NDCT) dataset. Then, the matched pairs and the corresponding matching degree are used as prior information to construct and train our ADN-type network for LDCT denoising. The proposed method is different from (but compatible with) supervised and semi-supervised learning modes and can be easily implemented by modifying existing networks. The experimental results show that the method is competitive with state-of-the-art methods in terms of noise suppression and contextual fidelity. The code and working dataset are publicly available at https://github.com/ruanyuhui/ADN-QSDL.git.
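To illustrate the matching step only (the released code at the URL above is authoritative), one simple way to pick the best NDCT counterpart and its matching degree is a similarity search such as the SSIM-based sketch below; the use of SSIM and per-slice matching are assumptions.

import numpy as np
from skimage.metrics import structural_similarity as ssim

def best_match(ldct_slice, ndct_slices):
    # ldct_slice: 2D array; ndct_slices: list of 2D arrays of the same shape.
    # Returns the index of the most similar NDCT slice and the similarity score,
    # which could be reused as the matching-degree prior described above.
    scores = []
    for s in ndct_slices:
        rng = float(max(ldct_slice.max(), s.max()) - min(ldct_slice.min(), s.min()))
        scores.append(ssim(ldct_slice, s, data_range=rng))
    best = int(np.argmax(scores))
    return best, scores[best]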

6.
Phys Med Biol ; 67(14)2022 07 08.
Article in English | MEDLINE | ID: mdl-35732167

ABSTRACT

Objective. With the progress of artificial intelligence (AI) in magnetic resonance imaging (MRI), large-scale multi-center MRI datasets have a great influence on diagnosis accuracy and model performance. However, multi-center images are highly variable due to the variety of scanners or scanning parameters in use, which has a negative effect on the generality of AI-based diagnosis models. To address this problem, we propose a self-supervised harmonization (SSH) method. Approach. Mapping the style of images between centers allows harmonization without traveling phantoms to be formalized as an unpaired image-to-image translation problem between two domains. The mapping is a two-stage transform, consisting of a modified cycle generative adversarial network (cycleGAN) for style transfer and a histogram matching module for structure fidelity. The proposed algorithm is demonstrated using female pelvic MRI images from two 3 T systems and compared with three state-of-the-art methods and one conventional method. In the absence of traveling phantoms, we evaluate harmonization from three perspectives: image fidelity, ability to remove inter-center differences, and influence on the downstream model. Main results. Improved image sharpness and structure fidelity are observed using the proposed harmonization pipeline. It largely decreases the number of features with a significant difference between the two systems (from 64 to 45, lower than dualGAN: 57, cycleGAN: 59, ComBat: 64, and CLAHE: 54). In the downstream cervical cancer classification, it yields an area under the receiver operating characteristic curve of 0.894 (higher than dualGAN: 0.828, cycleGAN: 0.812, ComBat: 0.685, and CLAHE: 0.770). Significance. Our SSH method yields superior generality of downstream cervical cancer classification models by significantly decreasing the difference in radiomics features, and it achieves greater image fidelity.
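As a small sketch of the second stage only (the histogram matching module for structure fidelity), one plausible implementation uses scikit-image; the choice of reference image and per-image application are assumptions, and the style-transfer stage is omitted.

import numpy as np
from skimage.exposure import match_histograms

def match_to_reference(translated, reference):
    # translated: output of the style-transfer (modified cycleGAN) stage.
    # reference: the image whose intensity distribution the result should follow
    # (which image serves as reference is an assumption of this sketch).
    return match_histograms(translated, reference)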


Subject(s)
Image Processing, Computer-Assisted; Uterine Cervical Neoplasms; Artificial Intelligence; Female; Humans; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Supervised Machine Learning
7.
Comput Biol Med ; 136: 104763, 2021 09.
Article in English | MEDLINE | ID: mdl-34449305

ABSTRACT

Medical image acquisition plays a significant role in the diagnosis and management of diseases. Magnetic Resonance (MR) and Computed Tomography (CT) are considered two of the most popular modalities for medical image acquisition. Some considerations, such as cost and radiation dose, may limit the acquisition of certain image modalities. Therefore, medical image synthesis can be used to generate required medical images without actual acquisition. In this paper, we propose a paired-unpaired Unsupervised Attention Guided Generative Adversarial Network (uagGAN) model to translate MR images to CT images and vice versa. The uagGAN model is pre-trained with a paired dataset for initialization and then retrained on an unpaired dataset using a cascading process. In the paired pre-training stage, we enhance the loss function of our model by combining the Wasserstein GAN adversarial loss with a new combination of non-adversarial losses (content loss and L1) to generate fine-structure images. This ensures global consistency and better captures the high- and low-frequency details of the generated images. The uagGAN model is employed because it generates more accurate and sharper images through the production of attention masks. Knowledge from a non-medical pre-trained model is also transferred to the uagGAN model for improved learning and better image translation performance. Quantitative evaluation and qualitative perceptual analysis by radiologists indicate that employing transfer learning with the proposed paired-unpaired uagGAN model achieves better performance than rival image-to-image translation models.
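A hedged sketch of the kind of combined generator objective described for the paired pre-training stage follows; the loss weights and the choice of feature network for the content term are illustrative assumptions rather than the published configuration.

import torch
import torch.nn.functional as F

def generator_loss(critic_fake, fake, real, feat_fake, feat_real,
                   lambda_l1=10.0, lambda_content=1.0):
    # critic_fake: critic scores on generated images (WGAN setting).
    # fake/real: generated and ground-truth images; feat_fake/feat_real: features of each
    # from some pre-trained network (assumed), giving the content loss.
    adv = -critic_fake.mean()                      # WGAN adversarial term for the generator
    l1 = F.l1_loss(fake, real)                     # pixel-wise fidelity
    content = F.mse_loss(feat_fake, feat_real)     # feature-space (content) fidelity
    return adv + lambda_l1 * l1 + lambda_content * content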


Subject(s)
Image Processing, Computer-Assisted; Tomography, X-Ray Computed; Attention; Brain/diagnostic imaging; Machine Learning; Magnetic Resonance Spectroscopy
8.
J Appl Stat ; 48(6): 961-976, 2021.
Article in English | MEDLINE | ID: mdl-35707734

ABSTRACT

In this paper, we provide a unified framework for two-sample t-tests with partially paired data. We show that many existing two-sample t-tests for partially paired data can be viewed as special members of our unified framework, and we discuss some of their shortcomings. We also propose an asymptotically optimal weighted linear combination of the test statistics comparing all four paired and unpaired data sets. Simulation studies illustrate the performance of our proposed asymptotically optimal weighted combinations of test statistics and compare them with existing methods; the proposed test statistic is found to be generally more powerful. Three real data sets, on CD4 counts, DNA extraction concentrations, and quality of sleep, are also analyzed using the newly introduced test statistic.
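For intuition only, a naive weighted combination of a paired and an unpaired t-statistic can be formed as below; the fixed weight w is a placeholder and not the asymptotically optimal weighting derived in the paper.

import numpy as np
from scipy import stats

def combined_statistic(x_paired, y_paired, x_only, y_only, w=0.5):
    # x_paired/y_paired: the complete pairs; x_only/y_only: the unpaired remainders.
    t_paired, _ = stats.ttest_rel(x_paired, y_paired)                 # paired t on complete pairs
    t_unpaired, _ = stats.ttest_ind(x_only, y_only, equal_var=False)  # Welch t on the rest
    return w * t_paired + (1 - w) * t_unpaired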

9.
PeerJ ; 4: e2104, 2016.
Article in English | MEDLINE | ID: mdl-27330862

ABSTRACT

Metabolomic profiling is an increasingly important method for identifying potential biomarkers in cancer cells with a view towards improved diagnosis and treatment. Nuclear magnetic resonance (NMR) provides a potentially noninvasive means to accurately characterize differences in the metabolomic profiles of cells. In this work, we use ¹H NMR to measure the metabolomic profiles of water soluble metabolites extracted from isogenic control and oncogenic HRAS-, KRAS-, and NRAS-transduced BEAS2B lung epithelial cells to determine the robustness of NMR metabolomic profiling in detecting differences between the transformed cells and their untransformed counterparts as well as differences among the RAS-transformed cells. Unique metabolomic signatures between control and RAS-transformed cell lines as well as among the three RAS isoform-transformed lines were found by applying principal component analysis to the NMR data. This study provides a proof of principle demonstration that NMR-based metabolomic profiling can robustly distinguish untransformed and RAS-transformed cells as well as cells transformed with different RAS oncogenic isoforms. Thus, our data may potentially provide new diagnostic signatures for RAS-transformed cells.
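As a minimal sketch of the analysis step described (principal component analysis of per-sample metabolite features), the following uses scikit-learn; the binning, scaling, and number of components are assumptions rather than the study's pipeline.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def pca_scores(spectra, n_components=2):
    # spectra: (n_samples, n_features) array of binned NMR intensities per cell extract.
    X = StandardScaler().fit_transform(spectra)
    return PCA(n_components=n_components).fit_transform(X)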
