Results 1 - 4 of 4
1.
IEEE Trans Med Imaging ; PP, 2024 Apr 02.
Article in English | MEDLINE | ID: mdl-38564344

ABSTRACT

Treatment planning, a critical component of the radiotherapy workflow, is typically carried out by a medical physicist in a time-consuming trial-and-error manner. Previous studies have proposed knowledge-based or deep-learning-based methods for predicting dose distribution maps to help medical physicists improve the efficiency of treatment planning. However, these dose prediction methods usually fail to effectively utilize distance information between surrounding tissues and targets or organs-at-risk (OARs). Moreover, they are poor at preserving the distribution characteristics of ray paths in the predicted dose distribution maps, resulting in a loss of valuable information. In this paper, we propose a distance-aware diffusion model (DoseDiff) for precise prediction of dose distribution. We define dose prediction as a sequence of denoising steps, in which the predicted dose distribution map is generated conditioned on the computed tomography (CT) image and signed distance maps (SDMs). The SDMs are obtained by applying a distance transformation to the masks of targets or OARs, and they give the distance from each pixel in the image to the outline of the corresponding target or OAR. We further propose a multi-encoder and multi-scale fusion network (MMFNet) that incorporates multi-scale and transformer-based fusion modules to enhance feature-level information fusion between the CT image and the SDMs. We evaluate our model on two in-house datasets and a public dataset. The results demonstrate that DoseDiff outperforms state-of-the-art dose prediction methods in both quantitative performance and visual quality.
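As an aside, the signed distance maps described in this abstract can be sketched with standard tooling. The sign convention (negative inside the structure, positive outside) and the use of `scipy.ndimage.distance_transform_edt` are assumptions for illustration, not details taken from the paper:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance_map(mask):
    """Signed Euclidean distance from each pixel to the structure outline.

    Assumed convention: negative inside the target/OAR mask, positive outside.
    """
    mask = mask.astype(bool)
    # Distance to the nearest background pixel (nonzero only inside the mask)
    inside = distance_transform_edt(mask)
    # Distance to the nearest foreground pixel (nonzero only outside the mask)
    outside = distance_transform_edt(~mask)
    return outside - inside

# Toy example: a 5x5 mask with a single foreground pixel at the centre
mask = np.zeros((5, 5), dtype=np.uint8)
mask[2, 2] = 1
sdm = signed_distance_map(mask)
```

In a dose-prediction pipeline, one such map would be computed per target or OAR mask and stacked with the CT image as conditioning input.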

2.
IEEE Trans Med Imaging ; 42(8): 2313-2324, 2023 08.
Article in English | MEDLINE | ID: mdl-37027663

ABSTRACT

Adaptive radiation therapy (ART) aims to deliver radiotherapy accurately and precisely in the presence of anatomical changes, and the synthesis of computed tomography (CT) from cone-beam CT (CBCT) is an important step in this process. However, because of severe motion artifacts, CBCT-to-CT synthesis remains a challenging task for breast-cancer ART. Existing synthesis methods usually ignore motion artifacts, which limits their performance on chest CBCT images. In this paper, we decompose CBCT-to-CT synthesis into artifact reduction and intensity correction, and we introduce breath-hold CBCT images to guide both steps. To achieve superior synthesis performance, we propose a multimodal unsupervised representation disentanglement (MURD) learning framework that disentangles the content, style, and artifact representations of CBCT and CT images in the latent space. MURD can synthesize different forms of images by recombining the disentangled representations. We also propose a multipath consistency loss to improve structural consistency in synthesis and a multidomain generator to improve synthesis performance. Experiments on our breast-cancer dataset show that MURD achieves impressive performance, with a mean absolute error of 55.23±9.94 HU, a structural similarity index measure of 0.721±0.042, and a peak signal-to-noise ratio of 28.26±1.93 dB for synthetic CT. The results show that, compared with state-of-the-art unsupervised synthesis methods, our method produces better synthetic CT images in terms of both accuracy and visual quality.
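For reference, two of the image-quality metrics reported above can be computed as follows. The HU dynamic range used for PSNR is an assumed value (the paper does not state its choice), and for SSIM one would typically call an existing implementation such as `skimage.metrics.structural_similarity` rather than re-deriving it:

```python
import numpy as np

def mae_hu(ct_pred, ct_true):
    """Mean absolute error between two CT volumes, in Hounsfield units."""
    return float(np.mean(np.abs(ct_pred - ct_true)))

def psnr_db(ct_pred, ct_true, data_range=4000.0):
    """Peak signal-to-noise ratio in dB.

    `data_range` is an assumed HU span (e.g. -1000..3000), not a value
    taken from the paper.
    """
    mse = np.mean((ct_pred - ct_true) ** 2)
    return float(10.0 * np.log10(data_range ** 2 / mse))

# Toy 2x2 example
truth = np.array([[0.0, 100.0], [200.0, 300.0]])
pred = np.array([[10.0, 90.0], [210.0, 290.0]])
```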


Subjects
Breast Neoplasms , Spiral Cone-Beam Computed Tomography , Humans , Female , Cone-Beam Computed Tomography/methods , Signal-To-Noise Ratio , Phantoms, Imaging , Breast Neoplasms/diagnostic imaging , Image Processing, Computer-Assisted/methods , Radiotherapy Planning, Computer-Assisted/methods
3.
Opt Express ; 29(15): 23764-23776, 2021 Jul 19.
Article in English | MEDLINE | ID: mdl-34614635

ABSTRACT

Recently, single- or multi-layer spherical lenses (monocentric lenses) coupled with a microlens array (MLA) and an imaging sensor have been under investigation as a way to expand the field of view (FOV) of handheld plenoptic cameras. However, models of their point spread functions (PSFs), needed to improve imaging quality and to reconstruct the light field in object space, have been lacking. In this paper, a generic image formation model is proposed for wide-FOV plenoptic cameras that use a monocentric lens and an MLA. By analyzing the optical characteristics of the monocentric lens, we propose to approximate it by a superposition of a series of concentric lenses with variable apertures. Based on geometric simplification and wave propagation, the equivalent imaging process of each portion of a wide-FOV plenoptic camera is modeled, from which the PSF is derived. The validity of this model is verified by comparing PSFs captured by a real wide-FOV plenoptic camera with those generated by the proposed model. Further, reconstruction is performed by deconvolving captured images with the PSFs generated by the proposed model. Experimental results show that the quality of the reconstructed images is better than that of the subaperture images, demonstrating that the proposed PSF model is beneficial for imaging-quality improvement and light field reconstruction.
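The deconvolution step mentioned above can be illustrated with a minimal Wiener filter. The frequency-domain formulation and the regularisation constant `k` are textbook choices used here for illustration, not the paper's actual reconstruction pipeline:

```python
import numpy as np

def wiener_deconvolve(image, psf, k=1e-3):
    """Frequency-domain Wiener deconvolution of `image` by `psf`.

    `psf` is a centred kernel of the same shape as `image`; `k` is an
    assumed regularisation constant taming near-zero PSF frequencies.
    """
    H = np.fft.fft2(np.fft.ifftshift(psf), s=image.shape)
    G = np.fft.fft2(image)
    # Wiener filter: conj(H) / (|H|^2 + k)
    F = np.conj(H) * G / (np.abs(H) ** 2 + k)
    return np.real(np.fft.ifft2(F))

# Toy check: blur a sharp test image with a known PSF, then restore it.
img = np.zeros((16, 16))
img[5:8, 5:8] = 1.0
psf = np.zeros((16, 16))
psf[8, 8], psf[8, 9], psf[9, 8] = 0.7, 0.15, 0.15  # centred, sums to 1
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) *
                               np.fft.fft2(np.fft.ifftshift(psf))))
restored = wiener_deconvolve(blurred, psf)
```

In the paper's setting, a model-generated PSF (which varies across the field of view) would take the place of the toy kernel above.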

4.
Opt Express ; 28(3): 3428-3441, 2020 Feb 03.
Article in English | MEDLINE | ID: mdl-32122011

ABSTRACT

Because of its delicate structure, the exact geometry parameters of a focused plenoptic camera cannot be retrieved after packaging, which leads to inaccurate light field processing, such as visible artifacts in the rendered images. This paper proposes a novel blind calibration method that computes the geometry parameters of focused plenoptic cameras with high precision. It recasts the problem of deriving the geometry parameters as that of deriving the pixel patch size of each micro-image used in subaperture image rendering, based on the geometric projection of the relay imaging process in the focused plenoptic camera. A dark-image calibration algorithm is then proposed to retrieve the position and geometry parameters of the microlens array (MLA) for subaperture image rendering. A triple-level calibration board with random texture is designed to enable blind confirmation of the focal plane, to facilitate capturing light field images at different object distances in a single shot, and to aid intensity-feature matching when determining the rendering patch size. The rendering patch size is found by the proposed Gradient-SSIM-based fractional-pixel matching, built on the geometric projection analysis. Experiments on simulated data and a real imaging system demonstrate that the proposed method acquires the geometry parameters with high accuracy and is robust across different focused plenoptic cameras.
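The fractional-pixel matching idea can be sketched in one dimension. The sum-of-squared-differences score below is a simple stand-in for the paper's Gradient-SSIM criterion, and the candidate shift grid is an illustrative assumption:

```python
import numpy as np

def fractional_shift(signal, shift):
    """Resample a 1-D signal at positions x + shift (linear interpolation)."""
    x = np.arange(len(signal))
    return np.interp(x + shift, x, signal)

def best_fractional_shift(reference, target, candidates):
    """Pick the sub-pixel shift of `target` that best matches `reference`.

    Scores candidates by sum of squared differences; the paper's
    Gradient-SSIM measure would slot in here instead.
    """
    scores = [np.sum((reference - fractional_shift(target, s)) ** 2)
              for s in candidates]
    return float(candidates[int(np.argmin(scores))])

# Toy check: recover a known 0.3-pixel shift of a smooth signal.
t = np.linspace(0.0, 2.0 * np.pi, 64)
sig = np.sin(t)
ref = fractional_shift(sig, 0.3)
found = best_fractional_shift(ref, sig, np.arange(0.0, 1.0, 0.1))
```

The actual method matches texture features across micro-images to fix a rendering patch size; this sketch only shows the sub-pixel search step in its simplest form.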
