1.
Comput Med Imaging Graph ; 113: 102344, 2024 04.
Article in English | MEDLINE | ID: mdl-38320336

ABSTRACT

Cone Beam Computed Tomography (CBCT) plays a crucial role in Image-Guided Radiation Therapy (IGRT), providing essential assurance of accuracy in radiation treatment by monitoring changes in anatomical structures during the treatment process. However, CBCT images often suffer from scatter noise and artifacts, which makes it challenging to rely on CBCT alone for precise dose calculation and accurate tissue localization. There is an urgent need to enhance the quality of CBCT images to enable their more practical application in IGRT. This study introduces EGDiff, a novel diffusion-model-based framework designed to address the scatter noise and artifacts in CBCT images. In our approach, we employ a forward diffusion process that adds Gaussian noise to CT images, followed by a reverse denoising process that uses a ResUNet with an attention mechanism to predict the noise intensity, ultimately synthesizing CT-like images from CBCT. Additionally, we design an energy-guided function to retain domain-independent features and discard domain-specific features during the denoising process, improving the effectiveness of CBCT-to-CT generation. We conduct extensive experiments on a thorax dataset and a pancreas dataset. The results demonstrate that EGDiff performs well on the thoracic tumor dataset, with an SSIM of 0.850, MAE of 26.87 HU, PSNR of 19.83 dB, and NCC of 0.874. EGDiff outperforms state-of-the-art (SoTA) CBCT-to-CT synthesis methods on the pancreas dataset, with an SSIM of 0.754, MAE of 32.19 HU, PSNR of 19.35 dB, and NCC of 0.846. By improving the accuracy and reliability of CBCT images, EGDiff can enhance the precision of radiation therapy, minimize radiation exposure to healthy tissues, and ultimately contribute to more effective and personalized cancer treatment strategies.
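The forward diffusion process the abstract describes has a standard closed form: at step t, a noisy sample can be drawn directly from the clean image using the cumulative noise schedule. The sketch below illustrates only that closed-form forward step; the schedule values and function names are illustrative assumptions, and the attention-ResUNet noise predictor and energy-guided function from EGDiff are not reproduced here.

```python
import numpy as np

def make_schedule(T=1000, beta_start=1e-4, beta_end=0.02):
    """Linear beta schedule; returns cumulative products alpha_bar_t."""
    betas = np.linspace(beta_start, beta_end, T)
    alphas = 1.0 - betas
    return np.cumprod(alphas)

def forward_diffuse(x0, t, alpha_bars, rng):
    """Sample x_t ~ q(x_t | x_0) in closed form:
    x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps."""
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps
    return xt, eps  # eps is the regression target for the denoiser
```

In training, a network (here, EGDiff's attention-equipped ResUNet) would be fit to predict `eps` from `xt` and `t`; at inference, the learned denoiser is applied in reverse from pure noise, conditioned on the CBCT input.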


Subject(s)
Spiral Cone-Beam Computed Tomography , Reproducibility of Results , Thorax , Phantoms, Imaging
2.
Article in English | MEDLINE | ID: mdl-37782591

ABSTRACT

Automated anesthesia promises to enable more precise and personalized anesthetic administration and to free anesthesiologists from repetitive tasks, allowing them to focus on the most critical aspects of a patient's surgical care. Current research has typically focused on creating simulated environments from which agents can learn. These approaches have demonstrated good experimental results but remain far from clinical application. In this paper, Policy Constraint Q-Learning (PCQL), a data-driven reinforcement learning algorithm for learning anesthesia strategies from real-world clinical data, is proposed. Conservative Q-Learning is first introduced to alleviate the problem of Q-function overestimation in an offline setting. A policy constraint term is then added to agent training to keep the agent's policy distribution consistent with the anesthesiologist's, ensuring safer decisions in anesthesia scenarios. The effectiveness of PCQL was validated by extensive experiments on a real clinical anesthesia dataset we collected. Experimental results show that PCQL is estimated to achieve higher returns than the baseline approach while maintaining good agreement with the reference doses given by the anesthesiologist, using a lower total dose, and responding more readily to the patient's vital signs. In addition, the agent's confidence intervals were investigated and were found to cover most of the anesthesiologist's clinical decisions. Finally, an interpretable method, SHAP, was used to analyze the components contributing to the model's predictions, increasing the transparency of the model.
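The loss structure the abstract describes combines three ingredients: an ordinary TD objective, a conservative (CQL-style) penalty that suppresses overestimated Q-values on unseen actions, and a policy constraint that keeps the learned policy near the clinician's logged actions. The tabular sketch below is a minimal illustration under assumed names and weights; PCQL's actual network architecture, constraint form, and hyperparameters are not specified in the abstract.

```python
import numpy as np

def pcql_loss(q, states, actions, rewards, next_states, dones,
              gamma=0.99, alpha_cql=1.0, lam_bc=0.1):
    """q: [num_states, num_actions] tabular Q-values (illustrative stand-in
    for a Q-network). Batch arrays index logged clinician transitions."""
    # 1) Standard TD loss on the logged transitions.
    td_target = rewards + gamma * (1 - dones) * q[next_states].max(axis=1)
    td_loss = np.mean((q[states, actions] - td_target) ** 2)
    # 2) Conservative penalty: logsumexp over all actions minus the logged
    #    action's Q pushes down out-of-distribution action values.
    logsumexp = np.log(np.exp(q[states]).sum(axis=1))
    cql_pen = np.mean(logsumexp - q[states, actions])
    # 3) Policy constraint: penalize disagreement between the greedy policy
    #    and the clinician's logged action (a behavior-cloning-style term).
    greedy = q[states].argmax(axis=1)
    bc_pen = np.mean(greedy != actions)
    return td_loss + alpha_cql * cql_pen + lam_bc * bc_pen
```

The conservative penalty is always non-negative (logsumexp upper-bounds any single Q-value), so minimizing it shrinks the gap between the logged action's value and the values of all alternatives.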

3.
Sheng Wu Yi Xue Gong Cheng Xue Za Zhi ; 40(3): 482-491, 2023 Jun 25.
Article in Chinese | MEDLINE | ID: mdl-37380387

ABSTRACT

Recently, deep learning has achieved impressive results in medical image tasks. However, such methods usually require large-scale annotated data, and medical images are expensive to annotate, so learning efficiently from limited annotations is a challenge. The two commonly used remedies are transfer learning and self-supervised learning, but both have been little studied on multimodal medical images, so this study proposes a contrastive learning method for multimodal medical images. The method treats images of different modalities from the same patient as positive samples, which effectively increases the number of positives during training and helps the model fully learn the similarities and differences of lesions across modalities, thereby improving the model's understanding of medical images and its diagnostic accuracy. Because commonly used data augmentation methods are not suitable for multimodal images, this paper also proposes a domain-adaptive denormalization method that transforms source-domain images using statistical information from the target domain. The method is validated on two multimodal medical image classification tasks: in the microvascular invasion recognition task, it achieves an accuracy of (74.79 ± 0.74)% and an F1 score of (78.37 ± 1.94)%, improvements over conventional learning methods; in the brain tumor pathology grading task, it likewise achieves significant improvements. The results show that the method performs well on multimodal medical images and can serve as a reference solution for pre-training on multimodal medical images.
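Both ideas in the abstract can be sketched compactly: the domain-adaptive denormalization re-maps a source image's mean and standard deviation onto target-domain statistics, and the cross-modal contrastive objective is an InfoNCE-style loss in which embeddings of the two modalities of the same patient form the positive pairs. All function names, the temperature, and the exact statistic computation below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def domain_adaptive_denorm(src, tgt_mean, tgt_std, eps=1e-6):
    """Re-map source-image statistics onto target-domain statistics."""
    src_mean, src_std = src.mean(), src.std()
    return (src - src_mean) / (src_std + eps) * tgt_std + tgt_mean

def cross_modal_infonce(z_a, z_b, tau=0.1):
    """InfoNCE where row i of z_a (modality A) and row i of z_b
    (modality B) come from the same patient and are positives."""
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    logits = z_a @ z_b.T / tau                     # [N, N] similarities
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))            # diagonal = same patient
```

Minimizing the loss pulls same-patient, cross-modality embeddings together while pushing apart embeddings of different patients, which is how pairing modalities enlarges the positive set relative to single-modality augmentation.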


Subject(s)
Algorithms , Brain Neoplasms , Humans , Brain/diagnostic imaging , Brain Neoplasms/diagnostic imaging , Recognition, Psychology