Results 1 - 5 of 5
1.
Phys Med Biol; 69(15), 2024 Jul 19.
Article in English | MEDLINE | ID: mdl-38981593

ABSTRACT

Objective. Head and neck radiotherapy planning requires electron densities of different tissues for dose calculation. Dose calculation from imaging modalities such as MRI remains an unsolved problem, since this modality does not provide information about electron density. Approach. We propose a generative adversarial network (GAN) approach that synthesizes CT (sCT) images from T1-weighted MRI acquisitions in head and neck cancer patients. Our contribution is to exploit new features that are relevant for improving multimodal image synthesis, and thus the quality of the generated CT images. More precisely, we propose a dual-branch generator based on the U-Net architecture and an augmented multi-planar branch. The augmented branch learns specific 3D dynamic features, which describe dynamic image shape variations and are extracted from different viewpoints of the volumetric input MRI. The architecture of the proposed model relies on an end-to-end convolutional U-Net embedding network. Results. The proposed model achieves a mean absolute error (MAE) of 18.76 ± 5.167 in the target Hounsfield unit (HU) space on sagittal head and neck acquisitions, with a mean structural similarity (MSSIM) of 0.95 ± 0.09 and a Fréchet inception distance (FID) of 145.60 ± 8.38. The model yields an MAE of 26.83 ± 8.27 when generating specific primary tumor regions on axial patient acquisitions, with a Dice score of 0.73 ± 0.06 and an FID of 122.58 ± 7.55. The improvement of our model over other state-of-the-art GAN approaches is 3.8% on a tumor test set. On both sagittal and axial acquisitions, the model yields the best peak signal-to-noise ratios of 27.89 ± 2.22 and 26.08 ± 2.95 when synthesizing MRI from CT input. Significance. The proposed model synthesizes both sagittal and axial CT tumor images, used for radiotherapy treatment planning in head and neck cancer cases.
The performance analysis across different imaging metrics and under different evaluation strategies demonstrates the effectiveness of our dual CT synthesis model in producing high-quality sCT images compared to other state-of-the-art approaches. Our model could improve clinical tumor analysis, though further clinical validation remains to be explored.
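Two building blocks of the evaluation above can be sketched concretely: extracting multi-planar views from a volumetric MRI (the idea behind the augmented multi-planar branch) and computing the MAE in HU space between a synthetic and a reference CT. This is a minimal illustration, not the authors' implementation; the helper names are hypothetical.

```python
import numpy as np

def multiplanar_views(volume):
    """Extract axial, sagittal, and coronal views of a 3D (z, y, x) volume
    by permuting its axes (hypothetical helper illustrating the
    multi-planar viewpoints fed to the augmented branch)."""
    axial = volume                               # slices along z
    sagittal = np.transpose(volume, (2, 0, 1))   # slices along x
    coronal = np.transpose(volume, (1, 0, 2))    # slices along y
    return axial, sagittal, coronal

def mae_hu(sct, ct):
    """Mean absolute error between synthetic and reference CT, in HU."""
    return float(np.mean(np.abs(sct.astype(np.float64) - ct.astype(np.float64))))
```

For example, a volume of shape `(2, 3, 4)` yields sagittal slices of shape `(4, 2, 3)` and coronal slices of shape `(3, 2, 4)`.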


Subjects
Head and Neck Neoplasms; Imaging, Three-Dimensional; Magnetic Resonance Imaging; Tomography, X-Ray Computed; Humans; Magnetic Resonance Imaging/methods; Head and Neck Neoplasms/diagnostic imaging; Head and Neck Neoplasms/radiotherapy; Imaging, Three-Dimensional/methods; Multimodal Imaging/methods; Neural Networks, Computer
2.
Phys Med Biol; 68(12), 2023 Jun 15.
Article in English | MEDLINE | ID: mdl-37257456

ABSTRACT

Objective. Multi-parametric MR image synthesis is an effective approach for several clinical applications where specific modalities needed to reach a diagnosis may be unavailable. While technical and practical conditions limit the acquisition of new modalities for a patient, multimodal image synthesis combines multiple modalities to synthesize the desired one. Approach. In this paper, we propose a new multi-parametric magnetic resonance imaging (MRI) synthesis model, which generates the target MRI modality from two other available modalities, in pathological MR images. We first adopt a contrastive learning approach that trains an encoder network to extract a suitable feature representation of the target space. Secondly, we build a synthesis network that generates the target image from a common feature space that approximately matches the contrastively learned space of the target modality. We incorporate a bidirectional feature learning strategy that learns a multimodal feature matching function, in two opposite directions, to transform the augmented multichannel input into the learned target space. Overall, our training synthesis loss is expressed as the combination of a reconstruction loss and a bidirectional triplet loss, using a pair of features. Main results. Compared to other state-of-the-art methods, the proposed model achieved average improvement rates of 3.9% and 3.6% on the IXI and BraTS'18 datasets, respectively. On the tumor BraTS'18 dataset, our model records the highest Dice score of 0.793 (0.04) for preserving the synthesized tumor regions in the segmented images. Significance. Validation of the proposed model on two public datasets confirms its efficiency in generating different MR contrasts and preserving tumor areas in the synthesized images. In addition, the model can flexibly generate head and neck CT images from MR acquisitions.
In future work, we plan to validate the model using interventional iMRI contrasts for MR-guided neurosurgery applications, and also for radiotherapy applications. Clinical measurements will be collected during surgery to evaluate the model's performance.
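The combined training loss described above (reconstruction loss plus a bidirectional triplet term over a pair of features) can be sketched in a few lines. This is a simplified numpy illustration under assumed conventions (L1 reconstruction, Euclidean feature distances, a weighting factor `lam`); the abstract does not specify these details.

```python
import numpy as np

def triplet(anchor, positive, negative, margin=1.0):
    """Standard triplet hinge: pull the positive closer than the negative
    by at least `margin` in feature space."""
    d_ap = np.linalg.norm(anchor - positive)
    d_an = np.linalg.norm(anchor - negative)
    return max(0.0, d_ap - d_an + margin)

def synthesis_loss(recon, target, feat_src, feat_tgt, feat_neg,
                   lam=0.1, margin=1.0):
    """Reconstruction loss plus a bidirectional triplet term: the triplet
    is evaluated in both directions by swapping the anchor role between
    the source and target features (hypothetical weighting `lam`)."""
    recon_loss = float(np.mean(np.abs(recon - target)))
    bi_triplet = (triplet(feat_tgt, feat_src, feat_neg, margin)
                  + triplet(feat_src, feat_tgt, feat_neg, margin))
    return recon_loss + lam * bi_triplet
```

When the source and target features already coincide and the negative is far away, both triplet terms vanish and only the reconstruction term remains.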


Subjects
Deep Learning; Multiparametric Magnetic Resonance Imaging; Humans; Magnetic Resonance Imaging/methods; Machine Learning; Image Processing, Computer-Assisted/methods
3.
Int J Comput Assist Radiol Surg; 18(6): 971-979, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37103727

ABSTRACT

PURPOSE: During MR-guided neurosurgical procedures, several factors may limit the acquisition of additional MR sequences, which neurosurgeons need to adjust surgical plans or ensure complete tumor resection. Automatically synthesized MR contrasts generated from other available heterogeneous MR sequences could alleviate these timing constraints. METHODS: We propose a new multimodal MR synthesis approach that leverages a combination of MR modalities presenting glioblastomas to generate an additional modality. The proposed learning approach relies on a least-squares GAN (LSGAN) with an unsupervised contrastive learning strategy. We incorporate a contrastive encoder, which extracts an invariant contrastive representation from augmented pairs of the generated and real target MR contrasts. This contrastive representation describes a pair of features for each input channel, regularizing the generator to be invariant to high-frequency orientations. Moreover, when training the generator, we add to the LSGAN loss another term, formulated as the combination of a reconstruction loss and a novel perceptual loss based on a pair of features. RESULTS: When compared to other multimodal MR synthesis approaches evaluated on the BraTS'18 brain dataset, the model yields the highest Dice score with [Formula: see text] and achieves the lowest variation of information of [Formula: see text], with a probabilistic Rand index score of [Formula: see text] and a global consistency error of [Formula: see text]. CONCLUSION: The proposed model generates reliable MR contrasts with enhanced tumors in the synthesized image on a brain tumor dataset (BraTS'18). In future work, we will perform a clinical evaluation of residual tumor segmentations during MR-guided neurosurgeries, where limited MR contrasts will be acquired during the procedure.
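The least-squares GAN objective underpinning this method is well established: the discriminator regresses real samples toward 1 and fakes toward 0, while the generator pushes its fakes toward 1. A minimal numpy sketch of just these two standard LSGAN terms (the paper's additional reconstruction and perceptual terms are omitted):

```python
import numpy as np

def lsgan_d_loss(d_real, d_fake):
    """LSGAN discriminator loss: least-squares penalty pushing scores
    on real samples toward 1 and scores on generated samples toward 0."""
    return 0.5 * float(np.mean((d_real - 1.0) ** 2) + np.mean(d_fake ** 2))

def lsgan_g_loss(d_fake):
    """LSGAN generator loss: least-squares penalty pushing the
    discriminator's scores on generated samples toward 1."""
    return 0.5 * float(np.mean((d_fake - 1.0) ** 2))
```

A perfect discriminator (reals scored 1, fakes scored 0) has zero loss, while the generator's loss is then at its maximum, which is what drives the adversarial game.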


Subjects
Deep Learning; Image Processing, Computer-Assisted; Humans; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Least-Squares Analysis; Brain
4.
Phys Med Biol; 66(9), 2021 Apr 23.
Article in English | MEDLINE | ID: mdl-33761478

ABSTRACT

With the emergence of online MRI radiotherapy treatments, MR-based workflows have grown in clinical importance. However, proper dose planning still requires CT images to calculate dose attenuation due to bony structures. In this paper, we present a novel deep image synthesis model that generates CT images from diagnostic MRI, in an unsupervised manner, for radiotherapy planning. The proposed model, based on a generative adversarial network (GAN), learns a new invariant representation to generate synthetic CT (sCT) images from high-frequency and appearance patterns. This new representation encodes each convolutional feature map of the convolutional GAN discriminator, making the training of the proposed model particularly robust in terms of image synthesis quality. Our model includes an analysis of common histogram features in the training process, reinforcing the generator so that the output sCT image exhibits a histogram matching that of the ground-truth CT. This CT-matched histogram is then embedded in a multi-resolution framework by assessing the evaluation over all layers of the discriminator network, which allows the model to robustly classify the output synthetic image. Experiments were conducted on head and neck images of 56 cancer patients with a wide range of shapes, sizes, and spatial image resolutions. The obtained results confirm the efficiency of the proposed model compared to other generative models: the mean absolute error yielded by our model was 26.44 (0.62), with a Hounsfield unit error of 45.3 (1.87) and an overall Dice coefficient of 0.74 (0.05), demonstrating the potential of the synthesis model for radiotherapy planning applications.
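The histogram-matching constraint described above can be illustrated with classical quantile (sorted-value) mapping: each sCT voxel is replaced by the ground-truth CT value at the same intensity rank. This is a generic sketch of histogram matching, not the paper's learned in-network constraint; the function name is hypothetical.

```python
import numpy as np

def match_histogram(source, reference):
    """Map source intensities so their distribution matches the reference's,
    via quantile (sorted-value) mapping. Illustrates the kind of
    histogram-matching constraint imposed on the generated sCT."""
    shape = source.shape
    src = source.ravel()
    ref = np.sort(reference.ravel())
    # rank each source voxel, then pull the reference value at the same quantile
    ranks = np.argsort(np.argsort(src))
    quantiles = ranks / max(len(src) - 1, 1)
    idx = np.round(quantiles * (len(ref) - 1)).astype(int)
    return ref[idx].reshape(shape)
```

For example, mapping the source `[0, 50, 100]` onto the HU-range reference `[-1000, 0, 3000]` sends the lowest source value to -1000 HU and the highest to 3000 HU, preserving intensity order.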


Subjects
Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Tomography, X-Ray Computed; Head and Neck Neoplasms/diagnostic imaging; Humans
5.
Article in English | MEDLINE | ID: mdl-31425034

ABSTRACT

This work presents a Bayesian statistical approach to the multimodal change detection (CD) problem in remote sensing imagery. More precisely, we formulate the multimodal CD problem in an unsupervised Markovian framework. The main novelty of the proposed Markovian model lies in the use of an observation field built from pixel-pairwise modeling of the bitemporal heterogeneous satellite image pair. Such modeling allows us to rely on a robust visual cue with the appealing property of being quasi-invariant to the imaging (multi-)modality. To use this observation cue as part of a stochastic likelihood model, we first rely on a preliminary iterative estimation technique that takes into account the variety of the laws in the distribution mixture and estimates the parameters of the Markovian mixture model. Once this estimation step is completed, the maximum a posteriori (MAP) solution of the change detection map, based on the previously estimated parameters, is computed with a stochastic optimization process. Experimental results and comparisons involving a mixture of different types of imaging modalities confirm the robustness of the proposed approach.
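The MAP estimation of a Markovian change map can be sketched with iterated conditional modes (ICM), a simple deterministic relaxation, standing in for the stochastic optimizer used in the paper. The data term and Potts smoothness prior below are generic assumptions for illustration; the paper's actual likelihood comes from its pixel-pairwise mixture model.

```python
import numpy as np

def icm_change_map(score, beta=1.0, n_iter=5):
    """Tiny ICM sketch of a MAP binary change map. `score` is per-pixel
    change evidence (higher means more likely changed), e.g. derived from
    a pixel-pairwise cue; the Potts prior with weight `beta` favours
    labels that agree with their 4-neighbours."""
    labels = (score > 0).astype(int)
    h, w = score.shape
    for _ in range(n_iter):
        for i in range(h):
            for j in range(w):
                neigh = []
                if i > 0: neigh.append(labels[i - 1, j])
                if i < h - 1: neigh.append(labels[i + 1, j])
                if j > 0: neigh.append(labels[i, j - 1])
                if j < w - 1: neigh.append(labels[i, j + 1])
                # energy per candidate label: data term + Potts disagreement
                e = [(-lab * score[i, j]) + beta * sum(n != lab for n in neigh)
                     for lab in (0, 1)]
                labels[i, j] = int(e[1] < e[0])
    return labels
```

On a patch of uniformly strong change evidence with one noisy pixel, the Potts prior flips the outlier so the final map is spatially coherent.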
