Results 1 - 9 of 9
1.
Med Phys ; 51(3): 2066-2080, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37665773

ABSTRACT

BACKGROUND AND OBJECTIVE: Metallic implants can introduce magnetic field distortions in magnetic resonance imaging (MRI), resulting in image distortions such as bulk shifts and signal-loss artifacts. The Metal Artifacts Region Inpainting Network (MARINet), which exploits the symmetry of brain MRI images, has been developed to generate normal MRI images in the image domain and improve image quality. METHODS: T1-weighted MRI images containing or located near the teeth were collected from 100 patients. A total of 9000 slices were obtained after data augmentation. MARINet, based on a U-Net with a dual-path encoder, was then employed to inpaint the artifacts in MRI images. The input of MARINet contains the original image and the flipped registered image, with partial convolution used concurrently. MARINet was then compared with PConv (partial convolution), GConv (gated convolution), and SDEdit (a diffusion model) for inpainting the artifact region of MRI images. The mean absolute error (MAE) and peak signal-to-noise ratio (PSNR) over the mask were used to compare the results of these methods. In addition, the artifact regions of clinical MRI images, delineated by physicians, were also inpainted. RESULTS: MARINet could directly and effectively inpaint the incomplete MRI images generated by masks in the image domain. For the test results of PConv, GConv, SDEdit, and MARINet, the masked MAEs were 0.1938, 0.1904, 0.1876, and 0.1834, respectively, and the masked PSNRs were 17.39, 17.40, 17.49, and 17.60 dB, respectively. The visualization results also suggest that the network can recover the tissue texture, alveolar shape, and tooth contour. Additionally, for clinical artifact MRI images, MARINet completed the artifact region inpainting task more effectively than the other models.
CONCLUSIONS: By leveraging the quasi-symmetry of brain MRI images, MARINet can directly and effectively inpaint the metal artifacts in MRI images in the image domain, restoring the tooth contour and detail, thereby enhancing the image quality.
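The masked MAE and PSNR used above restrict the error computation to the artifact mask; a minimal NumPy sketch (function names and the normalization range are assumptions for illustration, not from the paper):

```python
import numpy as np

def masked_mae(pred, target, mask):
    """Mean absolute error restricted to the masked (inpainted) region."""
    m = mask.astype(bool)
    return float(np.abs(pred[m] - target[m]).mean())

def masked_psnr(pred, target, mask, data_range=1.0):
    """PSNR over the masked region; data_range is the image value span."""
    m = mask.astype(bool)
    mse = np.mean((pred[m] - target[m]) ** 2)
    return float(20.0 * np.log10(data_range / np.sqrt(mse)))
```

Restricting both metrics to the mask avoids the unmasked background, which is copied through unchanged, inflating the scores.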


Subjects
Artifacts; Image Processing, Computer-Assisted; Humans; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Brain/diagnostic imaging; Signal-To-Noise Ratio
2.
Technol Cancer Res Treat ; 22: 15330338231199287, 2023.
Article in English | MEDLINE | ID: mdl-37709267

ABSTRACT

As an important branch of artificial intelligence and machine learning, deep learning (DL) has been widely used in many aspects of auxiliary cancer diagnosis, among which cancer prognosis is one of the most important. High-accuracy cancer prognosis benefits the clinical management of patients with cancer, and compared with other methods, DL models can significantly improve prediction accuracy. This article is therefore a systematic review of the latest research on DL in cancer prognosis prediction. First, the data types, construction process, and performance evaluation indices of DL models are introduced in detail. Then, the current mainstream baseline DL cancer prognosis prediction models, namely, deep neural networks, convolutional neural networks, deep belief networks, deep residual networks, and vision transformers, are discussed, including their network architectures, latest applications in cancer prognosis, and respective characteristics. Next, key factors that affect the predictive performance of a model and common performance enhancement techniques are listed. Finally, the limitations of DL cancer prognosis prediction models in clinical practice are summarized, and future research directions are outlined. This article provides relevant researchers with a comprehensive understanding of DL cancer prognostic models and is expected to promote research progress in cancer prognosis prediction.
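One evaluation index commonly used for survival-type prognosis models (the abstract does not name specific metrics, so this is only an example) is the concordance index; a minimal sketch for fully observed, uncensored survival times:

```python
def concordance_index(times, scores):
    """Fraction of comparable patient pairs whose predicted risk ordering
    matches the observed survival ordering (censoring not handled here)."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(i + 1, n):
            if times[i] == times[j]:
                continue  # tied times are not comparable without a tie rule
            comparable += 1
            # the shorter survival time should receive the higher risk score
            if (times[i] < times[j]) == (scores[i] > scores[j]):
                concordant += 1
            elif scores[i] == scores[j]:
                concordant += 0.5  # tied scores count as half-concordant
    return concordant / comparable
```

A value of 1.0 means perfect risk ranking, 0.5 is random; clinical implementations additionally handle censored observations.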


Subjects
Deep Learning; Neoplasms; Humans; Artificial Intelligence; Neural Networks, Computer; Neoplasms/diagnosis; Prognosis
3.
Technol Cancer Res Treat ; 22: 15330338231194546, 2023.
Article in English | MEDLINE | ID: mdl-37700675

ABSTRACT

Purpose: During ultrasound (US)-guided radiotherapy, probe pressure deforms the tissue, so changes in tissue and organ position and geometry limit the alignment of the US image with the computed tomography (CT) image, leading to poor registration. Accordingly, a pixel displacement-based method for producing nondeformed US images is proposed. Methods: US image deformation is corrected by calculating the pixel displacement of the image. The positioning CT image (CTstd) is used as the gold standard. The deformed US image (USdef) is input into the Harris algorithm to extract corner points for selecting feature points, and the displacement of adjacent pixels of the feature points in the US video stream is calculated with the Lucas-Kanade optical flow algorithm. The moving least squares algorithm then corrects USdef globally and locally according to the image pixel displacement to generate a nondeformed US image (USrev). In addition, USdef and USrev were separately registered with CTstd to evaluate the improvement in alignment accuracy achieved by deformation correction. Results: In the phantom experiment, the overall and local average correction errors of the US image under the optimal probe pressure were 1.0944 and 0.7388 mm, respectively, and the registration accuracy of USdef and USrev with CTstd was 0.6764 and 0.9016, respectively. In the volunteer experiment, the correction error over all 12 patients' data ranged from -1.7525 to 1.5685 mm, with a mean absolute error of 0.8612 mm. The improvement in US-CT registration accuracy before and after image deformation correction in the 12 patients, evaluated by a normalized correlation coefficient, ranged from 0.1232 to 0.2476. Conclusion: The pixel displacement-based deformation correction method can overcome the limitation that image deformation imposes on image alignment in US-guided radiotherapy. Compared with USdef, USrev aligned better with CT.
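The Lucas-Kanade step above estimates displacement by solving a small least-squares system built from image gradients; a minimal whole-window sketch of that step for pure translation (the paper's full pipeline with Harris corners, per-feature windows, and moving-least-squares warping is not reproduced here):

```python
import numpy as np

def lk_translation(img0, img1):
    """Estimate the (dx, dy) translation between two frames by solving the
    Lucas-Kanade normal equations over the whole image as one window."""
    iy, ix = np.gradient(img0.astype(float))  # np.gradient: d/d(row), d/d(col)
    it = img1.astype(float) - img0.astype(float)
    # Normal equations: [[S(ix*ix), S(ix*iy)], [S(ix*iy), S(iy*iy)]] @ d = -[S(ix*it), S(iy*it)]
    a = np.array([[np.sum(ix * ix), np.sum(ix * iy)],
                  [np.sum(ix * iy), np.sum(iy * iy)]])
    b = -np.array([np.sum(ix * it), np.sum(iy * it)])
    return np.linalg.solve(a, b)  # (dx, dy) in pixels
```

In practice this step is applied per feature window and iterated in a pyramid (as OpenCV's `calcOpticalFlowPyrLK` does); the single-window form shows only the core linearization.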


Subjects
Ultrasonography, Interventional; Humans; Algorithms; Phantoms, Imaging; Tomography, X-Ray Computed/methods; Ultrasonography, Interventional/methods; Radiotherapy, Image-Guided/methods
4.
Comput Methods Programs Biomed ; 231: 107393, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36739623

ABSTRACT

OBJECTIVE: A generative adversarial network (TCBCTNet) was proposed to generate synthetic CT (sCT) from truncated low-dose cone-beam computed tomography (CBCT) and planning CT (pCT). The sCT was applied to radiotherapy dose calculation for patients with breast cancer. METHODS: The low-dose CBCT and pCT images of 80 female thoracic patients were used for training. The CBCT, pCT, and replanning CT (rCT) images of 20 thoracic patients and 20 patients with breast cancer were used for testing. All patients were fixed in the same posture with a vacuum pad. The CBCT images were scanned under the Fast Chest M20 protocol with a 50% reduction in projection frames compared with the standard Chest M20 protocol. Rigid registration was performed between pCT and CBCT, and deformable registration was performed between rCT and CBCT. In the training stage of TCBCTNet, truncated CBCT images obtained from complete CBCT images by simulation were used. The input of the CBCT→CT generator was the truncated CBCT and the pCT, and TCBCTNet was applied to patients with breast cancer after training. The accuracy of the sCT was evaluated in terms of anatomy and dosimetry and compared with generative adversarial networks with U-Net and ResNet generators (UnetGAN and ResGAN). RESULTS: All three models could improve the image quality of CBCT and reduce scattering artifacts while preserving the anatomical geometry of CBCT. For the chest test set, TCBCTNet achieved the best mean absolute error (MAE, 21.18±3.76 HU), better than the 23.06±3.90 HU of UnetGAN and 22.47±3.57 HU of ResGAN. When applied to patients with breast cancer, TCBCTNet performance decreased, with an MAE of 25.34±6.09 HU. Compared with rCT, sCT by TCBCTNet showed consistent dose distribution and only subtle absolute dose differences between the target and the organs at risk. The 3D gamma pass rates were 98.98%±0.64% and 99.69%±0.22% at 2 mm/2% and 3 mm/3%, respectively.
Ablation experiments confirmed that pCT and content loss played important roles in TCBCTNet. CONCLUSIONS: High-quality sCT images could be synthesized from truncated low-dose CBCT and pCT by using the proposed TCBCTNet model. In addition, sCT could be used to accurately calculate the dose distribution for patients with breast cancer.
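The gamma pass rates quoted above combine a dose-difference criterion with a distance-to-agreement (DTA) criterion. A brute-force 2-D sketch of a global-normalization gamma analysis (a deliberate simplification of the clinical 3-D computation; function and parameter names are illustrative):

```python
import numpy as np

def gamma_pass_rate(ref, eval_, dd=0.02, dta=2.0, spacing=1.0):
    """Fraction of reference points with gamma <= 1 under a dd (fraction of
    the global max dose) / dta (mm) criterion; brute-force 2-D search."""
    dose_tol = dd * ref.max()                    # global dose tolerance
    r = int(np.ceil(dta / spacing)) + 1          # search radius in voxels
    ny, nx = ref.shape
    passed = 0
    for y in range(ny):
        for x in range(nx):
            best = np.inf
            # minimize the gamma metric over a small spatial neighborhood
            for j in range(max(0, y - r), min(ny, y + r + 1)):
                for i in range(max(0, x - r), min(nx, x + r + 1)):
                    dist2 = ((j - y) ** 2 + (i - x) ** 2) * spacing ** 2
                    diff2 = (eval_[j, i] - ref[y, x]) ** 2
                    best = min(best, dist2 / dta ** 2 + diff2 / dose_tol ** 2)
            passed += best <= 1.0
    return passed / (ny * nx)
```

Clinical tools additionally interpolate between grid points, apply low-dose thresholds, and work in 3-D; this sketch shows only the core min-over-neighborhood structure of the gamma index.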


Subjects
Breast Neoplasms; Spiral Cone-Beam Computed Tomography; Humans; Female; Radiotherapy Planning, Computer-Assisted/methods; Cone-Beam Computed Tomography/methods; Radiometry
5.
Med Biol Eng Comput ; 61(7): 1757-1772, 2023 Jul.
Article in English | MEDLINE | ID: mdl-36897469

ABSTRACT

This study aimed to inpaint the truncated areas of CT images by using a generative adversarial network with gated convolution (GatedConv) and to apply these images to dose calculations in radiotherapy. CT images were collected from 100 patients with esophageal cancer immobilized with a thermoplastic membrane, and 85 cases were used for training based on randomly generated circle masks. In the prediction stage, 15 cases were used to evaluate the anatomical and dosimetric accuracy of the inpainted CT based on a mask with a truncated volume covering 40% of the arm volume, and the results were compared with inpainted CT synthesized by U-Net, pix2pix, and PConv (partial convolution). The results showed that GatedConv could directly and effectively inpaint incomplete CT images in the image domain. For U-Net, pix2pix, PConv, and GatedConv, the mean absolute errors for the truncated tissue were 195.54, 196.20, 190.40, and 158.45 HU, respectively. The mean doses of the planning target volume, heart, and lung in the truncated CT were statistically different (p < 0.05) from those of the ground truth CT ([Formula: see text]). The differences in dose distribution between the inpainted CT obtained by the four models and [Formula: see text] were minimal. The inpainting of clinical truncated CT images based on GatedConv was more stable than that of the other models. GatedConv can effectively inpaint the truncated areas with high image quality, and it is closer to [Formula: see text] in terms of image visualization and dosimetry than the other inpainting models.
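Training relied on randomly generated circle masks marking the truncated region; a minimal sketch of such mask generation (shape and radius parameters are illustrative, not from the paper):

```python
import numpy as np

def random_circle_mask(shape, radius_range=(10, 40), rng=None):
    """Boolean mask with one random filled circle (True = truncated region).
    The radius is drawn from [radius_range[0], radius_range[1])."""
    if rng is None:
        rng = np.random.default_rng()
    h, w = shape
    r = int(rng.integers(*radius_range))
    # keep the circle fully inside the image
    cy, cx = int(rng.integers(r, h - r)), int(rng.integers(r, w - r))
    yy, xx = np.ogrid[:h, :w]
    return (yy - cy) ** 2 + (xx - cx) ** 2 <= r ** 2
```

At training time, the mask is applied to a complete CT slice to simulate truncation, and the network is asked to reconstruct the masked pixels.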


Subjects
Radiotherapy, Intensity-Modulated; Tomography, X-Ray Computed; Humans; Tomography, X-Ray Computed/methods; Radiotherapy Planning, Computer-Assisted; Image Processing, Computer-Assisted/methods; Lung/diagnostic imaging; Radiotherapy Dosage
6.
Comput Methods Programs Biomed ; 221: 106932, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35671601

ABSTRACT

BACKGROUND AND OBJECTIVE: Multi-modal medical images, which carry complementary feature information, are beneficial for radiotherapy. A new radiotherapy treatment mode based on a triangle generative adversarial network (TGAN) model was proposed to synthesize pseudo-medical images between multi-modal datasets. METHODS: CBCT, MRI, and CT images of 80 patients with nasopharyngeal carcinoma were selected. The TGAN model, based on a multi-scale discriminant network, was used for training between the different image domains. The generator of the TGAN model draws on cGAN and CycleGAN, and a single generation network can establish the non-linear mapping relationships among multiple image domains. The discriminator uses a multi-scale discrimination network to guide the generator to synthesize pseudo-medical images that are similar to real images at both shallow and deep levels. The accuracy of the pseudo-medical images was verified in terms of anatomy and dosimetry. RESULTS: In the three synthesis directions, namely, CBCT → CT, CBCT → MRI, and MRI → CT, there were significant differences (p < 0.05) in the three-fold cross-validation results on the PSNR and SSIM metrics between the pseudo-medical images obtained with TGAN and the real images. In the testing stage, for TGAN, the MAE results in the three synthesis directions (CBCT → CT, CBCT → MRI, and MRI → CT), presented as mean (standard deviation), were 68.67 (5.83), 83.14 (8.48), and 79.96 (7.59), and the NMI results were 0.8643 (0.0253), 0.8051 (0.0268), and 0.8146 (0.0267), respectively. In terms of dose verification, the differences in dose distribution between the pseudo-CT obtained by TGAN and the real CT were minimal. The H values of the dose uncertainty measurements in PGTV, PGTVnd, PTV1, and PTV2 were 42.510, 43.121, 17.054, and 7.795, respectively (p < 0.05); the differences were statistically significant.
The gamma pass rate (2%/2 mm) of pseudo-CT obtained by the new model was 94.94% (0.73%), and the numerical results were better than those of the three other comparison models. CONCLUSIONS: The pseudo-medical images acquired based on TGAN were close to the real images in anatomy and dosimetry. The pseudo-medical images synthesized by the TGAN model have good application prospects in clinical adaptive radiotherapy.
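The NMI metric reported above can be computed from a joint intensity histogram; a minimal sketch using the 2·I(A;B)/(H(A)+H(B)) normalization (the paper's exact NMI variant and bin count are not stated, so both are assumptions here):

```python
import numpy as np

def nmi(a, b, bins=32):
    """Normalized mutual information between two images, estimated from a
    joint intensity histogram: 2*I(A;B) / (H(A) + H(B))."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                    # joint probability
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)    # marginals

    def entropy(p):
        p = p[p > 0]                             # skip empty bins
        return -np.sum(p * np.log2(p))

    hx, hy, hxy = entropy(px), entropy(py), entropy(pxy.ravel())
    mi = hx + hy - hxy                           # mutual information
    return 2.0 * mi / (hx + hy)
```

With this normalization, identical images give 1 and independent images give values near 0 (slightly above, due to finite-sample histogram bias).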


Subjects
Image Processing, Computer-Assisted; Radiotherapy Planning, Computer-Assisted; Humans; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging; Radiometry; Radiotherapy Dosage; Radiotherapy Planning, Computer-Assisted/methods
7.
Phys Med Biol ; 67(3)2022 01 28.
Article in English | MEDLINE | ID: mdl-34879356

ABSTRACT

Objective. A multi-discriminator-based cycle generative adversarial network (MD-CycleGAN) model is proposed to synthesize higher-quality pseudo-CT from MRI images. Approach. MRI and CT images obtained at the simulation stage from patients with cervical cancer were selected to train the model. The generator adopted DenseNet as the main architecture. Local and global discriminators based on a convolutional neural network jointly discriminated the authenticity of the input image data. In the testing phase, the model was verified by a fourfold cross-validation method. In the prediction stage, the data were selected to evaluate the accuracy of the pseudo-CT in anatomy and dosimetry, and they were compared with pseudo-CT synthesized by GANs with generators based on the ResNet, sUNet, and FCN architectures. Main results. There are significant differences (P < 0.05) in the fourfold cross-validation results on the peak signal-to-noise ratio and structural similarity index metrics between the pseudo-CT obtained based on MD-CycleGAN and the ground truth CT (CTgt). The pseudo-CT synthesized by MD-CycleGAN had anatomical information closer to the CTgt, with a root mean square error of 47.83 ± 2.92 HU, a normalized mutual information value of 0.9014 ± 0.0212, and a mean absolute error value of 46.79 ± 2.76 HU. The differences in dose distribution between the pseudo-CT obtained by MD-CycleGAN and the CTgt were minimal. The mean absolute dose errors of Dose_max, Dose_min, and Dose_mean in the planning target volume were used to evaluate the dose uncertainty of the four pseudo-CTs. The u-values of the Wilcoxon test were 55.407, 41.82, and 56.208, and the differences were statistically significant.
The 2%/2 mm-based gamma pass rate (%) of the proposed method was 95.45 ± 1.91, and those of the comparison methods (ResNet_GAN, sUnet_GAN, and FCN_GAN) were 93.33 ± 1.20, 89.64 ± 1.63, and 87.31 ± 1.94, respectively. Significance. The pseudo-CT images obtained based on MD-CycleGAN have higher imaging quality and are closer to the CTgt in terms of anatomy and dosimetry than those of other GAN models.
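The fourfold cross-validation used for verification partitions the cases into four disjoint folds, each serving once as the held-out set; a minimal index-splitting sketch (names illustrative):

```python
import numpy as np

def kfold_indices(n_items, k=4, seed=0):
    """Shuffle item indices and split them into k disjoint folds; yields
    (train_idx, test_idx) pairs, one per fold."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_items)
    folds = np.array_split(idx, k)       # k near-equal disjoint chunks
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, test
```

For patient data, splitting should be done at the patient level (as here, over case indices) so that slices from one patient never appear in both train and test folds.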


Subjects
Image Processing, Computer-Assisted; Tomography, X-Ray Computed; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging; Neural Networks, Computer; Signal-To-Noise Ratio
8.
Med Phys ; 49(10): 6424-6438, 2022 Oct.
Article in English | MEDLINE | ID: mdl-35982470

ABSTRACT

PURPOSE: Magnetic resonance imaging (MRI) plays an important role in clinical diagnosis but is susceptible to metal artifacts. The generative adversarial network GatedConv, with gated convolution (GC) and contextual attention (CA), was used to inpaint the metal artifact regions in MRI images. METHODS: MRI images containing or located near the teeth of 70 patients were collected; the scanning sequence was a T1-weighted high-resolution isotropic volume examination sequence. A total of 10 000 slices were obtained after data augmentation, of which 8000 were used for training. MRI images were normalized to [-1, 1]. Based on randomly generated masks, U-Net, pix2pix, PConv (partial convolution), and GatedConv were used to inpaint the artifact regions of MRI images. The mean absolute error (MAE) and peak signal-to-noise ratio (PSNR) over the mask were used to compare the results of these methods. The inpainting effect on the test dataset using dental masks was also evaluated. In addition, the artifact areas of clinical MRI images were inpainted based on masks sketched by physicians. Finally, earring artifacts and artifacts caused by abnormal signal foci were inpainted to verify the generalization of the models. RESULTS: GatedConv could directly and effectively inpaint the incomplete MRI images generated by masks in the image domain. For U-Net, pix2pix, PConv, and GatedConv, the masked MAEs were 0.1638, 0.1812, 0.1688, and 0.1596, respectively, and the masked PSNRs were 18.2136, 17.5692, 18.2258, and 18.3035 dB, respectively. Using dental masks, the results of U-Net, pix2pix, and PConv differed more from the real images in terms of alveolar shape and surrounding tissue than those of GatedConv. GatedConv could inpaint the metal artifact regions in clinical MRI images more effectively than the other models, although an increase in mask area reduced the inpainting effect.
The MRI images inpainted by GatedConv agreed in alveolar and tissue structure with CT images processed with metal artifact reduction, and GatedConv could successfully inpaint artifacts caused by abnormal signal foci, whereas the other models failed. The ablation study demonstrated that GC and CA increased the reliability of GatedConv's inpainting performance. CONCLUSION: MRI images are affected by metal, and signal-void areas appear near it. GatedConv can directly and effectively inpaint the MRI metal artifact region in the image domain and improve image quality. Medical image inpainting by GatedConv has potential value for tasks such as positron emission tomography (PET) attenuation correction in PET/MRI and adaptive radiotherapy with synthetic CT based on MRI.
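The gated convolution (GC) at the core of GatedConv computes, at each layer, a feature map and a sigmoid gate from the same input and multiplies them, letting the network learn which pixels are valid. A minimal single-channel NumPy sketch with random placeholder weights (real implementations are multi-channel, learned end-to-end, and built in a deep-learning framework):

```python
import numpy as np

def conv2d(x, w):
    """'Valid' 2-D cross-correlation of a single-channel image with kernel w."""
    kh, kw = w.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * w)
    return out

def gated_conv(x, w_feat, w_gate):
    """Gated convolution: a feature branch modulated elementwise by a learned
    sigmoid gate, which can suppress outputs at invalid (masked) pixels."""
    feature = np.tanh(conv2d(x, w_feat))               # feature activation
    gate = 1.0 / (1.0 + np.exp(-conv2d(x, w_gate)))    # sigmoid gate in (0, 1)
    return feature * gate
```

Unlike partial convolution, which uses a hard rule-based validity mask, the gate here is learned, which is what lets the network adapt to free-form artifact masks.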


Subjects
Artifacts; Tomography, X-Ray Computed; Humans; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Reproducibility of Results; Signal-To-Noise Ratio; Tomography, X-Ray Computed/methods
9.
Comput Methods Programs Biomed ; 215: 106600, 2022 Mar.
Article in English | MEDLINE | ID: mdl-34971855

ABSTRACT

BACKGROUND AND OBJECTIVES: Thyroid nodules are a common disorder of the endocrine system. Segmentation of thyroid nodules on ultrasound images is an important step in the evaluation and diagnosis of nodules and an initial step in computer-aided diagnostic systems. The accuracy and consistency of segmentation remain a challenge due to the low contrast, speckle noise, and low resolution of ultrasound images. Therefore, the study of deep learning-based algorithms for thyroid nodule segmentation is important. This study utilizes soft shape supervision to improve the performance of detection and segmentation of boundaries of nodules. Soft shape supervision can emphasize the boundary features and assist the network in segmenting nodules accurately. METHODS: We propose a dual-path convolution neural network, including region and shape paths, which use DeepLabV3+ as the backbone. Soft shape supervision blocks are inserted between the two paths to implement cross-path attention mechanisms. The blocks enhance the representation of shape features and add them to the region path as auxiliary information. Thus, the network can accurately detect and segment thyroid nodules. RESULTS: We collect 3786 ultrasound images of thyroid nodules to train and test our network. Compared with the ground truth, the test results achieve an accuracy of 95.81% and a DSC of 85.33. The visualization results also suggest that the network has learned clear and accurate boundaries of the nodules. The evaluation metrics and visualization results demonstrate the superior segmentation performance of the network to other classical deep learning-based networks. CONCLUSIONS: The proposed dual-path network can accurately realize automatic segmentation of thyroid nodules on ultrasound images. It can also be used as an initial step in computer-aided diagnosis. It shows superior performance to other classical methods and demonstrates the potential for accurate segmentation of nodules in clinical applications.
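The DSC reported above measures the overlap between the predicted and ground-truth nodule masks; a minimal sketch:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks:
    2*|A & B| / (|A| + |B|); eps guards against empty masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return float((2.0 * inter + eps) / (pred.sum() + target.sum() + eps))
```

A DSC of 1 means perfect overlap and 0 means none; for small structures such as nodules it is a stricter measure than pixel accuracy, which is dominated by background.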


Subjects
Thyroid Nodule; Algorithms; Diagnosis, Computer-Assisted; Humans; Image Processing, Computer-Assisted; Neural Networks, Computer; Thyroid Nodule/diagnostic imaging; Ultrasonography