Results 1 - 8 of 8
1.
Adv Exp Med Biol ; 1213: 23-44, 2020.
Article in English | MEDLINE | ID: mdl-32030661

ABSTRACT

Medical images are widely used in clinics, providing visual representations of the internal tissues of the human body. By applying different imaging protocols, diverse modalities of medical images with unique visualization characteristics can be produced. Considering the cost of scanning high-quality single-modality images or homogeneous multiple modalities of images, medical image synthesis methods have been extensively explored for clinical applications. Among them, deep learning approaches, especially convolutional neural networks (CNNs) and generative adversarial networks (GANs), have rapidly become dominant for medical image synthesis in recent years. In this chapter, based on a general review of medical image synthesis methods, we focus on introducing typical CNN and GAN models for medical image synthesis. In particular, we elaborate on our recent work on low-dose-to-high-dose PET image synthesis and cross-modality MR image synthesis using these models.


Subject(s)
Deep Learning; Image Processing, Computer-Assisted/methods; Humans
2.
Neuroimage ; 174: 550-562, 2018 07 01.
Article in English | MEDLINE | ID: mdl-29571715

ABSTRACT

Positron emission tomography (PET) is a widely used imaging modality, providing insight into both the biochemical and physiological processes of the human body. Usually, a full-dose radioactive tracer is required to obtain high-quality PET images for clinical needs. This inevitably raises concerns about potential health hazards. On the other hand, dose reduction increases noise in the reconstructed PET images, which degrades image quality. In this paper, in order to reduce radiation exposure while maintaining high image quality, we propose a novel method based on 3D conditional generative adversarial networks (3D c-GANs) to estimate high-quality full-dose PET images from low-dose ones. Generative adversarial networks (GANs) include a generator network and a discriminator network, which are trained simultaneously with the goal of one beating the other. Similar to GANs, in the proposed 3D c-GANs, we condition the model on an input low-dose PET image and generate a corresponding output full-dose PET image. Specifically, to preserve the underlying information shared between the low-dose and full-dose PET images, a 3D U-Net-like deep architecture, which combines hierarchical features through skip connections, is designed as the generator network to synthesize the full-dose image. To guarantee that the synthesized PET image is close to the real one, we take into account an estimation error loss in addition to the discriminator feedback when training the generator network. Furthermore, a concatenated 3D c-GANs-based progressive refinement scheme is proposed to further improve the quality of the estimated images. Validation was performed on a real human brain dataset including both normal subjects and subjects diagnosed with mild cognitive impairment (MCI). Experimental results show that our proposed 3D c-GANs method outperforms the benchmark methods and achieves much better performance than the state-of-the-art methods in both qualitative and quantitative measures.
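
As a rough sketch of the generator objective described above (estimation error loss plus discriminator feedback), the following PyTorch fragment combines the two terms. This is an illustrative reconstruction, not the authors' code; the binary cross-entropy adversarial term and the weight lam are assumptions.

```python
import torch
import torch.nn as nn

adv_criterion = nn.BCEWithLogitsLoss()  # adversarial (discriminator feedback) term
est_criterion = nn.L1Loss()             # estimation error term

def generator_loss(disc_logits_fake, fake_full_dose, real_full_dose, lam=100.0):
    # The generator is rewarded for fooling the discriminator on synthesized
    # patches and for staying voxel-wise close to the real full-dose image.
    adv_loss = adv_criterion(disc_logits_fake, torch.ones_like(disc_logits_fake))
    est_loss = est_criterion(fake_full_dose, real_full_dose)
    return adv_loss + lam * est_loss  # lam trades off the two terms (assumed value)
```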


Subject(s)
Brain/diagnostic imaging; Image Processing, Computer-Assisted/methods; Positron-Emission Tomography/methods; Adult; Deep Learning; Female; Humans; Male; Radiation Dosage; Reproducibility of Results; Signal-To-Noise Ratio; Young Adult
3.
IEEE Trans Med Imaging ; 40(5): 1417-1427, 2021 05.
Article in English | MEDLINE | ID: mdl-33534704

ABSTRACT

In clinics, information about the appearance and location of brain tumors is essential to assist doctors in diagnosis and treatment. Automatic brain tumor segmentation on images acquired by magnetic resonance imaging (MRI) is a common way to obtain this information. However, MR images are not quantitative and can exhibit significant variation in signal depending on a range of factors, which increases the difficulty of training an automatic segmentation network and applying it to new MR images. To deal with this issue, this paper proposes to learn a sample-adaptive intensity lookup table (LuT) that dynamically transforms the intensity contrast of each input MR image to suit the subsequent segmentation task. Specifically, the proposed deep SA-LuT-Net framework consists of a LuT module and a segmentation module, trained in an end-to-end manner: the LuT module learns a sample-specific non-linear intensity mapping function through communication with the segmentation module, aiming at improving the final segmentation performance. To make the LuT learning sample-adaptive, we parameterize the intensity mapping function by exploring two families of non-linear functions (i.e., piece-wise linear and power functions) and predict the function parameters for each given sample. These sample-specific parameters make the intensity mapping adaptive to samples. We develop our SA-LuT-Nets separately on two backbone segmentation networks, i.e., DMFNet and a modified 3D U-Net, and validate them on the BRATS2018 and BRATS2019 datasets for brain tumor segmentation. Our experimental results clearly demonstrate the superior performance of the proposed SA-LuT-Nets using either single or multiple MR modalities. They not only significantly improve the two baselines (DMFNet and the modified 3D U-Net), but also outperform a set of state-of-the-art segmentation methods. Moreover, we show that the LuTs learnt with one segmentation model can also be applied to improve the performance of another segmentation model, indicating that the LuTs capture general segmentation information.
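
A minimal sketch of the piece-wise linear branch of the intensity mapping is given below. The number of knots, the normalization of intensities to [0, 1], and the helper name piecewise_linear_lut are illustrative assumptions; in SA-LuT-Net the knot parameters would be predicted per sample by the LuT module rather than fixed as here.

```python
import torch

def piecewise_linear_lut(x, knot_values):
    # Map intensities x in [0, 1] through a piece-wise linear LuT whose output
    # values at K+1 evenly spaced knots are given by knot_values (shape: K+1).
    k = knot_values.shape[0] - 1                  # number of linear segments
    seg = torch.clamp((x * k).long(), max=k - 1)  # segment index per voxel
    t = x * k - seg.float()                       # position within the segment
    lo = knot_values[seg]                         # left knot value
    hi = knot_values[seg + 1]                     # right knot value
    return lo + t * (hi - lo)                     # linear interpolation

x = torch.rand(1, 1, 8, 8, 8)             # a toy MR patch with intensities in [0, 1]
knots = torch.linspace(0, 1, 6).pow(0.7)  # example parameters for 5 segments
y = piecewise_linear_lut(x, knots)        # contrast-adjusted patch for segmentation
```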


Subject(s)
Brain Neoplasms; Deep Learning; Brain Neoplasms/diagnostic imaging; Humans; Magnetic Resonance Imaging
4.
Phys Med Biol ; 66(7)2021 03 23.
Article in English | MEDLINE | ID: mdl-33621965

ABSTRACT

Dose reduction in cerebral CT perfusion (CTP) imaging is desirable but is accompanied by an increase in noise that can compromise image quality and the accuracy of image-based haemodynamic modelling used for clinical decision support in acute ischaemic stroke. The few reported methods aimed at denoising low-dose CTP images lack practicality, either considering only small sections of the brain or being computationally expensive. Moreover, the prediction of infarct and penumbra size and location, the chief means of decision support for treatment options, from denoised data has not been explored using these approaches. In this work, we present the first application of a 3D generative adversarial network (3D GAN) for predicting normal-dose CTP data from low-dose CTP data. The feasibility of the approach was tested using real data from 30 acute ischaemic stroke patients in conjunction with low-dose simulation. The 3D GAN model was applied to 64³-voxel patches extracted from two different configurations of the CTP data, frame-based and stacked. The method led to whole-brain denoised data being generated for haemodynamic modelling within 90 s. The accuracy of the method was evaluated using standard image quality metrics and the extent to which the clinical content and lesion characteristics of the denoised CTP data were preserved. Results showed an average improvement of 5.15-5.32 dB in peak signal-to-noise ratio (PSNR) and 0.025-0.033 in structural similarity index (SSIM) for CTP images, and 2.66-3.95 dB PSNR and 0.036-0.067 SSIM for functional maps, at 50% and 25% of normal dose using the GAN model in conjunction with a stacked data regime for image synthesis. Consequently, the average lesion volumetric error was reduced significantly (p-value < 0.05) by 18%-29%, and the Dice coefficient improved significantly by 15%-22%. We conclude that GAN-based denoising is a promising practical approach for reducing radiation dose in CTP studies and improving lesion characterisation.
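
For reference, the PSNR figure of merit quoted above can be computed as in the following sketch; the data-range convention is an assumption, and the SSIM values would come from an analogous structural-similarity computation.

```python
import numpy as np

def psnr(reference, denoised, data_range=None):
    # Peak signal-to-noise ratio (dB) between a normal-dose reference volume
    # and the denoised low-dose volume produced by the GAN.
    reference = np.asarray(reference, dtype=np.float64)
    denoised = np.asarray(denoised, dtype=np.float64)
    if data_range is None:  # assumed convention: full dynamic range of the reference
        data_range = reference.max() - reference.min()
    mse = np.mean((reference - denoised) ** 2)
    return 10.0 * np.log10((data_range ** 2) / mse)
```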


Subject(s)
Brain Ischemia; Stroke; Brain/diagnostic imaging; Brain Ischemia/diagnostic imaging; Drug Tapering; Feasibility Studies; Humans; Image Processing, Computer-Assisted/methods; Perfusion Imaging; Tomography, X-Ray Computed/methods
5.
IEEE Trans Med Imaging ; 39(7): 2339-2350, 2020 07.
Article in English | MEDLINE | ID: mdl-31995478

ABSTRACT

Generative adversarial networks (GANs) have been widely explored for cross-modality medical image synthesis. Existing GAN models usually learn, adversarially, a global sample-space mapping from the source modality to the target modality and then indiscriminately apply this mapping to all samples for prediction. However, given the scarcity of training samples in contrast to the complicated nature of medical image synthesis, learning a single global sample-space mapping that is "optimal" for all samples is very challenging, if not intractable. To address this issue, this paper proposes sample-adaptive GAN models, which not only cater for the global sample-space mapping between the source and target modalities but also explore the local space around each given sample to extract its unique characteristics. Specifically, the proposed sample-adaptive GANs decompose the learning model into two cooperative paths. The baseline path learns a common GAN model by fitting all the training samples as usual, for the global sample-space mapping. The new sample-adaptive path additionally models each sample by learning its relationship with its neighboring training samples and using the target-modality features of these training samples as auxiliary information for synthesis. Enhanced by this sample-adaptive path, the proposed sample-adaptive GANs are able to flexibly adjust to different samples and therefore optimize synthesis performance. Our models have been verified on three cross-modality MR image synthesis tasks from two public datasets, where they significantly outperform state-of-the-art methods. Moreover, the experiments indicate that our sample-adaptive strategy can be used to improve various backbone GAN models: it complements existing GAN models and can be readily integrated when needed.
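
A very rough sketch of how the sample-adaptive path might merge neighbor-derived target-modality features with baseline-path features is shown below. The concatenation-based fusion, the channel sizes, and the SampleAdaptiveHead module are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class SampleAdaptiveHead(nn.Module):
    # Fuses baseline-path features with auxiliary features derived from the
    # target-modality images of a sample's neighboring training samples.
    def __init__(self, feat_ch=32, neigh_ch=16):
        super().__init__()
        self.fuse = nn.Conv3d(feat_ch + neigh_ch, feat_ch, kernel_size=3, padding=1)
        self.out = nn.Conv3d(feat_ch, 1, kernel_size=1)  # synthesized volume

    def forward(self, baseline_feats, neighbor_feats):
        x = torch.cat([baseline_feats, neighbor_feats], dim=1)
        return self.out(torch.relu(self.fuse(x)))

head = SampleAdaptiveHead()
b = torch.rand(1, 32, 8, 8, 8)  # features from the baseline (global) GAN path
n = torch.rand(1, 16, 8, 8, 8)  # features pooled from neighboring samples
y = head(b, n)                  # sample-adapted synthesis output
```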


Subject(s)
Image Processing, Computer-Assisted
6.
IEEE Trans Med Imaging ; 38(7): 1750-1762, 2019 07.
Article in English | MEDLINE | ID: mdl-30714911

ABSTRACT

Magnetic resonance (MR) imaging is a widely used medical imaging protocol that can be configured to provide different contrasts between the tissues of the human body. By setting different scanning parameters, each MR imaging modality reflects unique visual characteristics of the scanned body part, benefiting the subsequent analysis from multiple perspectives. To utilize the complementary information from multiple imaging modalities, cross-modality MR image synthesis has attracted increasing research interest recently. However, most existing methods only focus on minimizing pixel/voxel-wise intensity differences and ignore the textural details of image content structure, which affects the quality of the synthesized images. In this paper, we propose edge-aware generative adversarial networks (Ea-GANs) for cross-modality MR image synthesis. Specifically, we integrate edge information, which reflects the textural structure of image content and depicts the boundaries of different objects in images, to address this limitation. Corresponding to different learning strategies, two frameworks are proposed, i.e., a generator-induced Ea-GAN (gEa-GAN) and a discriminator-induced Ea-GAN (dEa-GAN). The gEa-GAN incorporates the edge information via its generator, while the dEa-GAN does so in both the generator and the discriminator, so that the edge similarity is also adversarially learned. In addition, the proposed Ea-GANs are 3D-based and utilize hierarchical features to capture contextual information. The experimental results demonstrate that the proposed Ea-GANs, especially the dEa-GAN, outperform multiple state-of-the-art methods for cross-modality MR image synthesis in both qualitative and quantitative measures. Moreover, the dEa-GAN also generalizes well to generic image synthesis tasks on benchmark datasets of facades, maps, and cityscapes.
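
The edge-similarity term can be sketched as follows; note that simple finite differences stand in here for the Sobel operator commonly used for edge extraction, and the function names are illustrative.

```python
import torch

def edge_map(vol):
    # Gradient-magnitude edge map of a 5D tensor (N, C, D, H, W), using
    # finite differences as a stand-in for a 3D Sobel operator.
    dz = torch.abs(vol[:, :, 1:, :, :] - vol[:, :, :-1, :, :])
    dy = torch.abs(vol[:, :, :, 1:, :] - vol[:, :, :, :-1, :])
    dx = torch.abs(vol[:, :, :, :, 1:] - vol[:, :, :, :, :-1])
    # crop the three difference volumes to a common shape before summing
    return dz[:, :, :, 1:, 1:] + dy[:, :, 1:, :, 1:] + dx[:, :, 1:, 1:, :]

def edge_loss(fake, real):
    # L1 difference between the edge maps of the synthesized and real volumes;
    # in the gEa-GAN this augments the generator's intensity loss, and in the
    # dEa-GAN the edge information is also exposed to the discriminator.
    return torch.mean(torch.abs(edge_map(fake) - edge_map(real)))
```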


Subject(s)
Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Brain/diagnostic imaging; Humans; Neural Networks, Computer
7.
IEEE Trans Med Imaging ; 38(6): 1328-1339, 2019 06.
Article in English | MEDLINE | ID: mdl-30507527

ABSTRACT

Positron emission tomography (PET) has been used substantially in recent years. To minimize the potential health risk caused by the tracer radiation inherent to PET scans, it is of great interest to synthesize high-quality PET images from low-dose ones to reduce the radiation exposure. In this paper, we propose a 3D auto-context-based locality-adaptive multi-modality generative adversarial networks model (LA-GANs) to synthesize a high-quality FDG PET image from a low-dose one together with the accompanying MRI images that provide anatomical information. Our work makes four contributions. First, different from the traditional methods that treat each image modality as an input channel and apply the same kernel to convolve the whole image, we argue that the contributions of different modalities can vary at different image locations, and therefore a unified kernel for a whole image is not optimal. To address this issue, we propose a locality-adaptive strategy for multi-modality fusion. Second, we utilize a 1 × 1 × 1 kernel to learn this locality-adaptive fusion, so that the number of additional parameters incurred by our method is kept to a minimum. Third, the proposed locality-adaptive fusion mechanism is learned jointly with the PET image synthesis in a 3D conditional GANs model, which generates high-quality PET images by employing large-sized image patches and hierarchical features. Fourth, we apply the auto-context strategy to our scheme and propose an auto-context LA-GANs model to further refine the quality of the synthesized images. Experimental results show that our method outperforms the traditional multi-modality fusion methods used in deep networks, as well as the state-of-the-art PET estimation approaches.
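
A minimal sketch of the locality-adaptive fusion idea follows: a 1 × 1 × 1 convolution predicts a per-voxel weight for each modality, and the fused volume is the voxel-wise weighted sum of the input channels. The 1 × 1 × 1 kernel comes from the abstract above; the softmax normalization of the weights and the module name are assumptions.

```python
import torch
import torch.nn as nn

class LocalityAdaptiveFusion(nn.Module):
    def __init__(self, num_modalities=2):
        super().__init__()
        # one weight map per modality, learned with a 1x1x1 kernel so that
        # very few extra parameters are introduced
        self.weights = nn.Conv3d(num_modalities, num_modalities, kernel_size=1)

    def forward(self, x):
        # x: (N, num_modalities, D, H, W), e.g. low-dose PET + T1 MRI channels
        w = torch.softmax(self.weights(x), dim=1)  # per-voxel modality weights
        return (w * x).sum(dim=1, keepdim=True)    # fused single-channel volume

fusion = LocalityAdaptiveFusion()
pet_mri = torch.rand(1, 2, 16, 16, 16)  # toy low-dose PET + MRI patch pair
fused = fusion(pet_mri)                 # pseudo input for the synthesis GAN
```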


Subject(s)
Deep Learning; Imaging, Three-Dimensional/methods; Positron-Emission Tomography/methods; Brain/diagnostic imaging; Databases, Factual; Humans; Magnetic Resonance Imaging/methods; Phantoms, Imaging; Radiation Dosage
8.
Med Image Comput Comput Assist Interv ; 11070: 329-337, 2018 Sep.
Article in English | MEDLINE | ID: mdl-31058275

ABSTRACT

Positron emission tomography (PET) has been used substantially in recent years. To minimize the potential health risks caused by the tracer radiation inherent to PET scans, it is of great interest to synthesize a high-quality full-dose PET image from a low-dose one, reducing the radiation exposure while maintaining the image quality. In this paper, we propose a locality-adaptive multi-modality generative adversarial networks model (LA-GANs) to synthesize the full-dose PET image from both the low-dose one and the accompanying T1-weighted MRI, incorporating anatomical information for better PET image synthesis. This paper makes the following contributions. First, we propose a new mechanism to fuse multi-modality information in deep neural networks. Different from the traditional methods that treat each image modality as an input channel and apply the same kernel to convolve the whole image, we argue that the contributions of different modalities can vary at different image locations, and therefore a unified kernel for a whole image is not appropriate. To address this issue, we propose a method that is locality-adaptive for multi-modality fusion. Second, to learn this locality-adaptive fusion, we utilize a 1 × 1 × 1 kernel so that the number of additional parameters incurred by our method is kept to a minimum. This also naturally produces a fused image that acts as a pseudo input for the subsequent learning stages. Third, the proposed locality-adaptive fusion mechanism is learned jointly with the PET image synthesis in an end-to-end trained 3D conditional GANs model that we developed. Our 3D GANs model generates high-quality PET images by employing large-sized image patches and hierarchical features. Experimental results show that our method outperforms the traditional multi-modality fusion methods used in deep networks, as well as the state-of-the-art PET estimation approaches.


Subject(s)
Magnetic Resonance Imaging; Multimodal Imaging; Positron-Emission Tomography; Algorithms; Electrons; Magnetic Resonance Imaging/methods; Multimodal Imaging/methods; Neural Networks, Computer; Positron-Emission Tomography/methods; Reproducibility of Results; Sensitivity and Specificity