Results 1 - 20 of 33
1.
Phys Med Biol ; 69(8)2024 Apr 03.
Article in English | MEDLINE | ID: mdl-38471171

ABSTRACT

Objective. The aim of this study was to reconstruct volumetric computed tomography (CT) images in real time from ultra-sparse two-dimensional x-ray projections, facilitating easier navigation and positioning during image-guided radiation therapy. Approach. Our approach leverages a voxel-space-searching Transformer model to overcome the limitations of conventional CT reconstruction techniques, which require extensive x-ray projections and lead to high radiation doses and equipment constraints. Main results. The proposed XTransCT algorithm demonstrated superior performance in terms of image quality, structural accuracy, and generalizability across different datasets, including a hospital set of 50 patients, the large-scale public LIDC-IDRI dataset, and the LNDb dataset for cross-validation. Notably, the algorithm achieved an approximately 300% improvement in reconstruction speed, at 44 ms per 3D image reconstruction compared with previous 3D convolution-based methods. Significance. The XTransCT architecture has the potential to impact clinical practice by providing high-quality CT images faster and with substantially reduced radiation exposure for patients. The model's generalizability suggests it is potentially applicable in a variety of healthcare settings.
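
The abstract does not give implementation details; the following is a minimal, hypothetical PyTorch sketch of the general voxel-query idea (attend from 3D voxel coordinates to 2D projection features). Class and parameter names are invented, and this is not the XTransCT architecture.

```python
import torch
import torch.nn as nn

class VoxelQueryAttention(nn.Module):
    """Attend from 3D voxel-coordinate queries to 2D projection features."""
    def __init__(self, feat_dim=64, num_heads=4):
        super().__init__()
        self.encoder = nn.Sequential(                      # toy 2D feature extractor
            nn.Conv2d(1, feat_dim, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_dim, feat_dim, 3, padding=1), nn.ReLU())
        self.key_pos = nn.Linear(2, feat_dim)              # positional code for detector pixels
        self.query_embed = nn.Linear(3, feat_dim)          # voxel (x, y, z) -> query
        self.attn = nn.MultiheadAttention(feat_dim, num_heads, batch_first=True)
        self.head = nn.Linear(feat_dim, 1)                 # predicted voxel intensity

    def forward(self, proj, voxel_xyz):
        # proj: (B, 1, H, W) one x-ray projection; voxel_xyz: (B, N, 3) in [0, 1]
        feat = self.encoder(proj)                          # (B, C, H, W)
        B, C, H, W = feat.shape
        ys, xs = torch.meshgrid(torch.linspace(0, 1, H), torch.linspace(0, 1, W),
                                indexing='ij')
        pos = torch.stack([ys, xs], -1).reshape(1, H * W, 2)
        kv = feat.flatten(2).transpose(1, 2) + self.key_pos(pos)   # (B, H*W, C)
        q = self.query_embed(voxel_xyz)                    # (B, N, C)
        out, _ = self.attn(q, kv, kv)                      # voxel-space search via attention
        return self.head(out).squeeze(-1)                  # (B, N) voxel intensities

model = VoxelQueryAttention()
volume = model(torch.rand(2, 1, 64, 64), torch.rand(2, 128, 3))
print(volume.shape)    # torch.Size([2, 128]) -- intensities for 128 queried voxels
```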


Subjects
Image-Guided Radiotherapy, X-Ray Computed Tomography, Humans, X-Ray Computed Tomography/methods, X-Rays, Cone-Beam Computed Tomography/methods, Three-Dimensional Imaging, Algorithms, Computer-Assisted Image Processing/methods, Imaging Phantoms
2.
Comput Med Imaging Graph ; 112: 102336, 2024 03.
Article in English | MEDLINE | ID: mdl-38244280

ABSTRACT

Rigid pre-registration is crucial in scenarios involving local-global matching or other large deformations. Current popular methods rely on unsupervised learning based on grayscale similarity, but when different poses lead to varying tissue structures, or when image quality is poor, these methods tend to be unstable and inaccurate. In this study, we propose a novel method for medical image registration based on matching arbitrary voxel points of interest, called the query point quizzer (QUIZ). QUIZ focuses on the correspondence between local and global matching points, specifically employing a CNN for feature extraction and the Transformer architecture for global point-matching queries, followed by applying the average displacement for a local rigid image transformation. We validated this approach on a large-deformation dataset of cervical cancer patients, with results indicating substantially smaller deviations than state-of-the-art methods. Remarkably, even for cross-modality subjects, it achieves results surpassing the current state of the art.
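
The final step — turning matched point pairs into a rigid transform — can be illustrated with plain NumPy. The sketch below shows the average-displacement (translation-only) estimate mentioned in the abstract and, for comparison, the standard Kabsch/SVD rigid solution; it is illustrative, not the QUIZ implementation.

```python
import numpy as np

def average_displacement(src, dst):
    """Translation-only estimate: mean of dst - src, shape (3,)."""
    return (dst - src).mean(axis=0)

def kabsch_rigid(src, dst):
    """Least-squares rotation R and translation t with dst ~= src @ R.T + t."""
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(0) - src.mean(0) @ R.T
    return R, t

rng = np.random.default_rng(0)
src = rng.random((50, 3))                            # matched source points
dst = src + np.array([2.0, -1.0, 0.5])               # pure shift for the demo
print(average_displacement(src, dst))                # ~[2, -1, 0.5]
R, t = kabsch_rigid(src, dst)
print(np.round(R, 3), np.round(t, 3))                # R ~ identity, t ~ shift
```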


Subjects
Algorithms, Uterine Cervical Neoplasms, Female, Humans, Uterine Cervical Neoplasms/diagnostic imaging, Computer-Assisted Image Processing/methods
3.
Med Image Anal ; 91: 102984, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37837690

ABSTRACT

The accurate delineation of organs-at-risk (OARs) is a crucial step in treatment planning during radiotherapy, as it minimizes the potential adverse effects of radiation on surrounding healthy organs. However, manual contouring of OARs in computed tomography (CT) images is labor-intensive and susceptible to errors, particularly for low-contrast soft tissue. Deep learning-based artificial intelligence algorithms surpass traditional methods but require large datasets. Obtaining annotated medical images is both time-consuming and expensive, hindering the collection of extensive training sets. To enhance the performance of medical image segmentation, augmentation strategies such as rotation and Gaussian smoothing are employed during preprocessing. However, these conventional data augmentation techniques cannot generate sufficiently realistic deformations, limiting improvements in accuracy. To address this issue, this study introduces a statistical deformation model-based data augmentation method for volumetric medical image segmentation. By applying diverse and realistic data augmentation to CT images from a limited patient cohort, our method significantly improves the fully automated segmentation of OARs across various body parts. We evaluate our framework on three datasets containing tumor OARs from the head, neck, chest, and abdomen. Test results demonstrate that the proposed method achieves state-of-the-art performance in numerous OAR segmentation challenges. This approach holds considerable potential as a powerful tool for various medical imaging-related sub-fields, effectively addressing the challenge of limited data access.
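
As a rough illustration of what a statistical deformation model can look like, the hedged sketch below builds a PCA model over example displacement fields and samples new deformations to warp an image. The 2D toy setting, shapes, and PCA formulation are assumptions, not the paper's pipeline.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def build_sdm(dvfs):
    """dvfs: (K, 2, H, W) example displacement fields -> mean and principal modes."""
    K = dvfs.shape[0]
    flat = dvfs.reshape(K, -1)
    mean = flat.mean(axis=0)
    U, S, Vt = np.linalg.svd(flat - mean, full_matrices=False)
    return mean, Vt, S / np.sqrt(max(K - 1, 1))      # modes and per-mode std dev

def sample_dvf(mean, modes, stds, shape, scale=1.0, rng=None):
    rng = rng or np.random.default_rng()
    coeff = rng.normal(0.0, scale * stds)            # random mode coefficients
    return (mean + coeff @ modes).reshape(shape)

def warp(image, dvf):
    ys, xs = np.mgrid[0:image.shape[0], 0:image.shape[1]].astype(float)
    coords = np.stack([ys + dvf[0], xs + dvf[1]])
    return map_coordinates(image, coords, order=1, mode='nearest')

rng = np.random.default_rng(1)
dvfs = rng.normal(0, 1.5, size=(10, 2, 64, 64))      # stand-in training DVFs
image = rng.random((64, 64))                          # stand-in CT slice
mean, modes, stds = build_sdm(dvfs)
augmented = warp(image, sample_dvf(mean, modes, stds, (2, 64, 64), rng=rng))
print(augmented.shape)                                # (64, 64) new training sample
```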


Subjects
Artificial Intelligence, Neoplasms, Humans, Algorithms, Neck, X-Ray Computed Tomography/methods, Computer-Assisted Image Processing/methods, Computer-Assisted Radiotherapy Planning/methods
4.
Med Image Anal ; 91: 102998, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37857066

ABSTRACT

Radiotherapy serves as a pivotal treatment modality for malignant tumors. However, the accuracy of radiotherapy is significantly compromised by respiration-induced fluctuations in the size, shape, and position of the tumor. To address this challenge, we introduce a deep learning-based volumetric tumor tracking methodology that employs single-angle X-ray projection images. This process involves aligning the intraoperative two-dimensional (2D) X-ray images with the pre-treatment three-dimensional (3D) planning computed tomography (CT) scans, enabling the extraction of the 3D tumor position and segmentation. Prior to therapy, a bespoke patient-specific tumor tracking model is built, leveraging a hybrid data augmentation, style correction, and registration network to create a mapping from single-angle 2D X-ray images to the corresponding 3D tumors. During the treatment phase, real-time X-ray images are fed into the trained model, producing the corresponding 3D tumor positions. Rigorous validation conducted on actual patient lung data and lung phantoms attests to the high localization precision of our method at lowered radiation doses, representing a promising step toward enhancing the precision of radiotherapy.


Subjects
Deep Learning, Neoplasms, Humans, Three-Dimensional Imaging/methods, X-Rays, X-Ray Computed Tomography/methods, Neoplasms/diagnostic imaging, Neoplasms/radiotherapy, Cone-Beam Computed Tomography/methods
5.
Phys Med Biol ; 68(24)2023 Dec 08.
Article in English | MEDLINE | ID: mdl-37844603

ABSTRACT

Objective. Medical image registration represents a fundamental challenge in medical image processing. Specifically, CT-CBCT registration has significant implications in the context of image-guided radiation therapy (IGRT). However, traditional iterative methods often require considerable computational time, and deep learning-based methods, especially when dealing with low-contrast organs, are frequently trapped in local optima. Approach. To address these limitations, we introduce a registration method based on volumetric feature point integration with bio-structure-informed guidance. A surface point cloud is generated from segmentation labels during the training stage, with both the surface-registered point pairs and the voxel feature point pairs co-guiding the training process, thereby achieving higher registration accuracy. Main results. Our findings have been validated on paired CT-CBCT datasets. In comparison with other deep learning registration methods, our approach improves precision by 6%, reaching state-of-the-art performance. Significance. The integration of voxel feature points and bio-structure feature points to guide the training of the medical image registration network achieves promising results and provides a meaningful direction for further research in medical image registration and IGRT.
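
The surface-point-cloud idea can be illustrated with a small NumPy/SciPy sketch: extract boundary voxels from a segmentation label and score surface agreement with a symmetric Chamfer-style distance. This only loosely mirrors the "surface-registered point pairs" guidance; all names and the toy labels are illustrative.

```python
import numpy as np
from scipy.ndimage import binary_erosion
from scipy.spatial import cKDTree

def surface_points(mask):
    """Boundary voxels of a binary mask, returned as (N, 3) coordinates."""
    boundary = mask & ~binary_erosion(mask)
    return np.argwhere(boundary).astype(float)

def chamfer_distance(a, b):
    """Mean nearest-neighbour distance, symmetrized over both point clouds."""
    d_ab, _ = cKDTree(b).query(a)
    d_ba, _ = cKDTree(a).query(b)
    return 0.5 * (d_ab.mean() + d_ba.mean())

mask = np.zeros((32, 32, 32), dtype=bool)
mask[8:24, 8:24, 8:24] = True                 # stand-in organ label
moved = np.roll(mask, shift=2, axis=0)        # stand-in "registered" label
print(chamfer_distance(surface_points(mask), surface_points(moved)))
```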


Subjects
Cone-Beam Computed Tomography, Image-Guided Radiotherapy, Cone-Beam Computed Tomography/methods, Computer-Assisted Image Processing/methods, Image-Guided Radiotherapy/methods, Algorithms
6.
Phys Med Biol ; 68(20)2023 Oct 04.
Article in English | MEDLINE | ID: mdl-37714184

ABSTRACT

Objective. Computed tomography (CT) is a widely employed imaging technology for disease detection. However, CT images often suffer from ring artifacts, which may result from hardware defects and other factors. These artifacts compromise image quality and impede diagnosis. To address this challenge, we propose a novel method based on a dual contrastive learning image style transformation network (DCLGAN) that effectively eliminates ring artifacts from CT images while preserving texture details. Approach. Our method involves simulating ring artifacts on real CT data to generate uncorrected CT (uCT) data and transforming the images into polar coordinates, where the ring artifacts become stripe artifacts. The DCLGAN synthesis network is then applied in the polar coordinate system to remove the stripe artifacts and generate a synthetic CT (sCT). We compare the uCT and sCT images to obtain a residual image, which is then filtered to extract the stripe artifacts. An inverse polar transformation yields the ring artifacts, which are subtracted from the original CT image to produce a corrected image. Main results. To validate the effectiveness of our approach, we tested it on real CT data, simulated data, and cone-beam computed tomography images of patients' brains. The corrected CT images showed a reduction in mean absolute error of 12.36 Hounsfield units (HU), a decrease in root mean square error of 18.94 HU, an increase in peak signal-to-noise ratio of 3.53 decibels (dB), and an improvement in structural similarity index of 9.24%. Significance. These results demonstrate the efficacy of our method in eliminating ring artifacts while preserving image details, making it a valuable tool for CT imaging.
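
A rough, hypothetical sketch of the correction steps around the synthesis network: map to polar coordinates (rings become stripes), take the residual between the uncorrected polar image and the network output, low-pass filter it along the angular axis to isolate the stripe component (the choice of a simple uniform filter is an assumption), map back, and subtract. The network output is faked with a clean image, and the geometry handling is deliberately simplified.

```python
import numpy as np
from scipy.ndimage import map_coordinates, uniform_filter1d

def to_polar(img, n_r=128, n_theta=360):
    cy, cx = (np.array(img.shape) - 1) / 2.0
    r = np.linspace(0, min(cy, cx), n_r)
    t = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    R, T = np.meshgrid(r, t, indexing='ij')
    return map_coordinates(img, [cy + R * np.sin(T), cx + R * np.cos(T)], order=1), r, t

def to_cartesian(polar, r, t, shape):
    cy, cx = (np.array(shape) - 1) / 2.0
    Y, X = np.mgrid[0:shape[0], 0:shape[1]]
    ri = np.hypot(Y - cy, X - cx) / (r[1] - r[0])
    ti = np.mod(np.arctan2(Y - cy, X - cx), 2 * np.pi) / (t[1] - t[0])
    return map_coordinates(polar, [ri, ti], order=1, mode='nearest')

def extract_rings(u_polar, net_polar, r, t, shape, size=61):
    residual = u_polar - net_polar                            # stripes + leaked details
    stripes = uniform_filter1d(residual, size=size, axis=1)   # low-pass along the angle
    return to_cartesian(stripes, r, t, shape)                 # back to ring artifacts

rng = np.random.default_rng(0)
clean = rng.random((128, 128))
u_polar, r, t = to_polar(clean)
u_polar += np.sin(np.arange(len(r)))[:, None] * 0.3           # synthetic stripes (rings)
net_polar, _, _ = to_polar(clean)                             # stand-in for the DCLGAN output
uct = to_cartesian(u_polar, r, t, clean.shape)
corrected = uct - extract_rings(u_polar, net_polar, r, t, clean.shape)
ring_free = to_cartesian(net_polar, r, t, clean.shape)        # reference without rings
print(float(np.abs(corrected - ring_free).mean()))            # ~0 in this synthetic case
```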

7.
Comput Biol Med ; 165: 107377, 2023 10.
Article in English | MEDLINE | ID: mdl-37651766

ABSTRACT

PURPOSE: Cone-beam computed tomography (CBCT) is widely utilized in modern radiotherapy; however, CBCT images exhibit increased scatter artifacts compared with planning CT (pCT), compromising image quality and limiting further applications. Scatter correction is thus crucial for improving CBCT image quality. METHODS: In this study, we proposed an unsupervised contrastive learning method for CBCT scatter correction. Initially, we transformed low-quality CBCT into high-quality synthetic pCT (spCT) and generated forward projections of the CBCT and the spCT. By computing the difference between these projections, we obtained a residual image containing image details and scatter artifacts. Image details primarily comprise high-frequency signals, while scatter artifacts consist mainly of low-frequency signals. We extracted the scatter projection signal by applying a low-pass filter to remove image details. The corrected CBCT (cCBCT) projection signal was obtained by subtracting the scatter projection signal from the original CBCT projection. Finally, we employed the FDK reconstruction algorithm to generate the cCBCT image. RESULTS: To evaluate cCBCT image quality, we aligned the CBCT and pCT of six patients. In comparison with CBCT, cCBCT maintains anatomical consistency and significantly improves CT number accuracy, spatial homogeneity, and artifact suppression. The mean absolute error (MAE) of the test data decreased from 88.0623 ± 26.6700 HU to 17.5086 ± 3.1785 HU. The MAE of fat regions of interest (ROIs) declined from 370.2980 ± 64.9730 HU to 8.5149 ± 1.8265 HU, and the error between their maximum and minimum CT numbers decreased from 572.7528 HU to 132.4648 HU. The MAE of muscle ROIs decreased from 354.7689 ± 25.0139 HU to 16.4475 ± 3.6812 HU. We also compared our proposed method with several conventional unsupervised synthetic image generation techniques, demonstrating superior performance. CONCLUSIONS: Our approach effectively enhances CBCT image quality and shows promising potential for future clinical adoption.
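
A minimal sketch of the projection-domain correction step is shown below, assuming the synthesis network and the projector/FDK reconstruction are available elsewhere (both are replaced by stand-ins here); the Gaussian low-pass filter and its width are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def correct_projection(cbct_proj, spct_proj, sigma=15.0):
    """Return the scatter-corrected CBCT projection for one view."""
    residual = cbct_proj - spct_proj            # image detail (high freq) + scatter (low freq)
    scatter = gaussian_filter(residual, sigma)  # low-pass filter isolates the scatter signal
    return cbct_proj - scatter

# Stand-in projections for a single view (real ones would come from forward projection).
rng = np.random.default_rng(0)
spct_proj = rng.random((256, 256))
scatter_truth = gaussian_filter(rng.random((256, 256)), 40)   # smooth, low-frequency
cbct_proj = spct_proj + scatter_truth
corrected = correct_projection(cbct_proj, spct_proj)
print(np.abs(corrected - spct_proj).mean())     # much smaller than the raw scatter level
# The corrected projections of all views would then be reconstructed with FDK.
```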


Subjects
Algorithms, Cone-Beam Computed Tomography, Humans, Cone-Beam Computed Tomography/methods, Artifacts, Computer-Assisted Image Processing/methods, Imaging Phantoms, Radiation Scattering
8.
Int J Surg ; 109(7): 2010-2024, 2023 Jul 01.
Article in English | MEDLINE | ID: mdl-37300884

ABSTRACT

BACKGROUND: Peritoneal recurrence (PR) is the predominant pattern of relapse after curative-intent surgery in gastric cancer (GC) and indicates a dismal prognosis. Accurate prediction of PR is crucial for patient management and treatment. The authors aimed to develop a noninvasive imaging biomarker from computed tomography (CT) for PR evaluation and to investigate its associations with prognosis and chemotherapy benefit. METHODS: In this multicenter study including five independent cohorts of 2005 GC patients, the authors extracted 584 quantitative features from the intratumoral and peritumoral regions on contrast-enhanced CT images. Artificial intelligence algorithms were used to select significant PR-related features, which were then integrated into a radiomic imaging signature. Improvements in clinicians' diagnostic accuracy for PR with signature assistance were quantified. Using Shapley values, the authors determined the most relevant features and provided explanations for the predictions. The authors further evaluated the signature's predictive performance for prognosis and chemotherapy response. RESULTS: The developed radiomics signature had consistently high accuracy in predicting PR in the training cohort (area under the curve: 0.732) and the internal and Sun Yat-sen University Cancer Center validation cohorts (0.721 and 0.728). The radiomics signature was the most important feature in the Shapley interpretation. The diagnostic accuracy of PR with radiomics signature assistance improved by 10.13-18.86% for clinicians (P < 0.001). Furthermore, the signature was also applicable to survival prediction. In multivariable analysis, the radiomics signature remained an independent predictor of PR and prognosis (P < 0.001 for all). Importantly, patients predicted to be at high risk of PR by the radiomics signature gained a survival benefit from adjuvant chemotherapy, whereas chemotherapy had no impact on survival for patients with a predicted low risk of PR. CONCLUSION: The noninvasive and explainable model developed from preoperative CT images could accurately predict PR and chemotherapy benefit in patients with GC, allowing the optimization of individual decision-making.
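
As a generic illustration of how such a signature is often built (not the authors' modelling pipeline), the sketch below selects outcome-related features and fits an L1-penalized logistic model on synthetic data with scikit-learn.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 584))                 # 584 intra-/peritumoral features (synthetic)
y = (X[:, :5].sum(axis=1) + rng.normal(size=400) > 0).astype(int)   # toy PR label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
signature = make_pipeline(
    StandardScaler(),
    SelectKBest(f_classif, k=20),               # keep the most outcome-related features
    LogisticRegression(penalty='l1', solver='liblinear', C=0.5),
)
signature.fit(X_tr, y_tr)
print('AUC:', roc_auc_score(y_te, signature.predict_proba(X_te)[:, 1]))
```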


Subjects
Peritoneal Neoplasms, Stomach Neoplasms, Humans, Stomach Neoplasms/diagnostic imaging, Stomach Neoplasms/drug therapy, Stomach Neoplasms/surgery, Artificial Intelligence, Peritoneal Neoplasms/diagnostic imaging, Peritoneal Neoplasms/drug therapy, Retrospective Studies, Local Neoplasm Recurrence/diagnostic imaging, Gastrectomy
9.
Comput Biol Med ; 161: 106888, 2023 07.
Article in English | MEDLINE | ID: mdl-37244146

ABSTRACT

X-ray computed tomography (CT) plays a vitally important role in clinical diagnosis, but radiation exposure can also increase the risk of cancer for patients. Sparse-view CT reduces the impact of radiation on the human body through sparsely sampled projections. However, images reconstructed from sparse-view sinograms often suffer from serious streaking artifacts. To overcome this issue, we propose an end-to-end attention-based deep network for image correction. First, the sparse-view projections are reconstructed with the filtered back-projection algorithm. Next, the reconstructed results are fed into the deep network for artifact correction. More specifically, we integrate an attention-gating module into U-Net pipelines, which implicitly learns to emphasize relevant features beneficial for a given task while suppressing background regions. Attention is used to combine the local feature vectors extracted at intermediate stages of the convolutional neural network with the global feature vector extracted from the coarse-scale activation map. To improve the performance of our network, we incorporated a pre-trained ResNet50 model into the architecture. The model was trained and tested using the dataset from The Cancer Imaging Archive (TCIA), which consists of images of various human organs obtained from multiple views. The experiments demonstrate that the developed method is highly effective in removing streaking artifacts while preserving structural details. Additionally, quantitative evaluation of the proposed model shows significant improvement in peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and root mean squared error (RMSE) compared with other methods, with an average PSNR of 33.9538, SSIM of 0.9435, and RMSE of 45.1208 at 20 views. Finally, the transferability of the network was verified using the 2016 AAPM dataset. This approach therefore holds great promise for achieving high-quality sparse-view CT images.
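
Below is a hedged PyTorch sketch of an additive attention gate of the kind used in Attention U-Net, where a coarse gating signal re-weights skip-connection features; channel sizes and the 2D setting are illustrative, and this is not the authors' exact module.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionGate(nn.Module):
    def __init__(self, skip_ch, gate_ch, inter_ch):
        super().__init__()
        self.w_x = nn.Conv2d(skip_ch, inter_ch, kernel_size=1)   # project skip features
        self.w_g = nn.Conv2d(gate_ch, inter_ch, kernel_size=1)   # project gating signal
        self.psi = nn.Conv2d(inter_ch, 1, kernel_size=1)         # scalar attention map

    def forward(self, x, g):
        # x: skip features (B, skip_ch, H, W); g: coarser gating signal (B, gate_ch, h, w)
        g_up = F.interpolate(self.w_g(g), size=x.shape[2:], mode='bilinear',
                             align_corners=False)
        attn = torch.sigmoid(self.psi(F.relu(self.w_x(x) + g_up)))   # (B, 1, H, W) weights
        return x * attn                                              # re-weighted skip features

gate = AttentionGate(skip_ch=64, gate_ch=128, inter_ch=32)
x = torch.rand(1, 64, 56, 56)
g = torch.rand(1, 128, 28, 28)
print(gate(x, g).shape)    # torch.Size([1, 64, 56, 56])
```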


Subjects
Computer-Assisted Image Processing, Neural Networks (Computer), Humans, Computer-Assisted Image Processing/methods, X-Ray Computed Tomography/methods, Algorithms, Artifacts
10.
BMC Med Inform Decis Mak ; 23(1): 64, 2023 04 06.
Article in English | MEDLINE | ID: mdl-37024893

ABSTRACT

BACKGROUND: Breast cancer (BC) is one of the most common cancers among women. Since diverse features can be collected, how to stably select the powerful ones for accurate BC diagnosis remains challenging. METHODS: A hybrid framework is designed for successively investigating both feature ranking (FR) stability and cancer diagnosis effectiveness. Specifically, on 4 BC datasets (BCDR-F03, WDBC, GSE10810 and GSE15852), the stability of 23 FR algorithms is evaluated via an advanced estimator (S), and the predictive power of the stable feature ranks is further tested by using different machine learning classifiers. RESULTS: Experimental results identify 3 algorithms achieving good stability ([Formula: see text]) on the four datasets and generalized Fisher score (GFS) leading to state-of-the-art performance. Moreover, GFS ranks suggest that shape features are crucial in BC image analysis (BCDR-F03 and WDBC) and that using a few genes can well differentiate benign and malignant tumor cases (GSE10810 and GSE15852). CONCLUSIONS: The proposed framework recognizes a stable FR algorithm for accurate BC diagnosis. Stable and effective features could deepen the understanding of BC diagnosis and related decision-making applications.
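
The sketch below is illustrative only (it is not the paper's stability estimator S): it ranks features with a simple Fisher score and quantifies ranking stability across bootstrap resamples with Spearman correlation.

```python
import numpy as np
from scipy.stats import spearmanr

def fisher_score(X, y):
    """Between-class over within-class variance per feature (higher = more discriminative)."""
    overall = X.mean(axis=0)
    num = np.zeros(X.shape[1])
    den = np.zeros(X.shape[1])
    for c in np.unique(y):
        Xc = X[y == c]
        num += len(Xc) * (Xc.mean(axis=0) - overall) ** 2
        den += len(Xc) * Xc.var(axis=0)
    return num / (den + 1e-12)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))                               # synthetic feature matrix
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=200) > 0).astype(int)

ranks = []
for _ in range(20):                                          # bootstrap resampling
    idx = rng.integers(0, len(y), len(y))
    ranks.append(np.argsort(-fisher_score(X[idx], y[idx])).argsort())
rhos = [spearmanr(ranks[i], ranks[j])[0]
        for i in range(20) for j in range(i + 1, 20)]
print('mean rank stability (Spearman):', round(float(np.mean(rhos)), 3))
```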


Subjects
Breast Neoplasms, Female, Humans, Breast Neoplasms/diagnosis, Algorithms, Machine Learning
11.
IEEE Trans Med Imaging ; 42(5): 1495-1508, 2023 05.
Article in English | MEDLINE | ID: mdl-37015393

ABSTRACT

A novel method is proposed to obtain four-dimensional (4D) cone-beam computed tomography (CBCT) images from a routine scan in patients with upper abdominal cancer. The projections are sorted according to the location of the lung diaphragm before being reconstructed into phase-sorted images. A multiscale-discriminator generative adversarial network (MSD-GAN) is proposed to alleviate the severe streaking artifacts in the original images. The MSD-GAN is trained using simulated CBCT datasets generated from patient planning CT images. The enhanced images are further used to estimate the deformation vector field (DVF) among breathing phases using a deformable image registration method. The estimated DVF is then applied in a motion-compensated ordered-subset simultaneous algebraic reconstruction approach to generate 4D CBCT images. The proposed MSD-GAN is compared with U-Net in terms of image enhancement performance. Results show that the proposed method significantly outperforms the total variation regularization-based iterative reconstruction approach and the method using only the MSD-GAN to enhance the original phase-sorted images, in both simulation and patient studies, on 4D reconstruction quality. The MSD-GAN also shows higher accuracy than the U-Net. The proposed method enables a practical way to achieve 4D CBCT imaging from a single routine scan in upper abdominal cancer treatment, including liver and pancreatic tumors.
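
The phase-sorting step can be illustrated with a simplified NumPy sketch: given a per-projection diaphragm-position surrogate (its extraction is assumed to happen elsewhere), assign each projection to one of N respiratory phase bins. The surrogate signal and binning rule here are made up for illustration.

```python
import numpy as np

def phase_sort(diaphragm, n_phases=10):
    """Return a phase index (0..n_phases-1) for each projection."""
    # Peaks of the surrogate mark end-inhale; phase is the fractional position
    # between consecutive peaks.
    peaks = [i for i in range(1, len(diaphragm) - 1)
             if diaphragm[i] >= diaphragm[i - 1] and diaphragm[i] > diaphragm[i + 1]]
    phases = np.zeros(len(diaphragm), dtype=int)
    for a, b in zip(peaks[:-1], peaks[1:]):
        frac = (np.arange(a, b) - a) / (b - a)
        phases[a:b] = np.minimum((frac * n_phases).astype(int), n_phases - 1)
    return phases

t = np.linspace(0, 30, 600)                       # ~30 s scan, 600 projections
diaphragm = np.sin(2 * np.pi * t / 4.0)           # 4 s breathing period surrogate
phases = phase_sort(diaphragm)
print(np.bincount(phases, minlength=10))          # projections per phase bin
```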


Subjects
Cone-Beam Computed Tomography, Deep Learning, Image Enhancement, Neoplasms, Cone-Beam Computed Tomography/methods, Datasets as Topic, Neoplasms/diagnostic imaging
12.
Front Oncol ; 13: 1127866, 2023.
Article in English | MEDLINE | ID: mdl-36910636

ABSTRACT

Objective: To develop a contrastive learning-based generative (CLG) model for the generation of high-quality synthetic computed tomography (sCT) from low-quality cone-beam CT (CBCT), improving the performance of deformable image registration (DIR). Methods: This study included 100 patients after breast-conserving surgery, with pCT images, CBCT images, and physician-delineated target contours. The sCT images were generated from the CBCT images via the proposed CLG model. We used the sCT images as the fixed images instead of the CBCT images to achieve accurate multi-modality image registration. The deformation vector field was applied to propagate the target contour from the pCT to the CBCT to realize automatic target segmentation on CBCT images. We calculated the Dice similarity coefficient (DSC), 95% Hausdorff distance (HD95), and average surface distance (ASD) between the predicted and reference segmentations to evaluate the proposed method. Results: The DSC, HD95, and ASD of the target contours with the proposed method were 0.87 ± 0.04, 4.55 ± 2.18, and 1.41 ± 0.56, respectively. Compared with the traditional method without synthetic CT assistance (0.86 ± 0.05, 5.17 ± 2.60, and 1.55 ± 0.72), the proposed method performed better, especially for soft-tissue targets such as the tumor bed region. Conclusion: The CLG model proposed in this study can create high-quality sCT from low-quality CBCT and improve the performance of DIR between the CBCT and the pCT. The target segmentation accuracy is better than that of traditional DIR.
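
The evaluation metrics named in the abstract (DSC, HD95, ASD) can be sketched from binary masks with distance transforms, as below; this is a generic illustration, not the authors' exact evaluation code, and voxel-spacing handling is simplified.

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice(a, b):
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def surface_distances(a, b, spacing=(1.0, 1.0, 1.0)):
    """Distances from the surface of `a` to the surface of `b` (one direction)."""
    surf_a = a & ~binary_erosion(a)
    surf_b = b & ~binary_erosion(b)
    dist_to_b = distance_transform_edt(~surf_b, sampling=spacing)
    return dist_to_b[surf_a]

def hd95_and_asd(a, b):
    d_ab, d_ba = surface_distances(a, b), surface_distances(b, a)
    both = np.concatenate([d_ab, d_ba])
    return np.percentile(both, 95), both.mean()

ref = np.zeros((48, 48, 48), dtype=bool); ref[10:30, 10:30, 10:30] = True
pred = np.roll(ref, 2, axis=0)                   # stand-in propagated contour
print('DSC:', round(dice(ref, pred), 3))
print('HD95, ASD:', hd95_and_asd(ref, pred))
```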

13.
Bioengineering (Basel) ; 10(2)2023 Jan 21.
Article in English | MEDLINE | ID: mdl-36829638

ABSTRACT

Two-dimensional (2D)/three-dimensional (3D) registration is critical in clinical applications. However, existing methods suffer from long alignment times and high doses. In this paper, a non-rigid 2D/3D registration method based on deep learning with orthogonal angle projections is proposed. The application can quickly achieve alignment using only two orthogonal angle projections. We tested the method with lungs (with and without tumors) and phantom data. The results show that the Dice and normalized cross-correlations are greater than 0.97 and 0.92, respectively, and the registration time is less than 1.2 seconds. In addition, the proposed model showed the ability to track lung tumors, highlighting the clinical potential of the proposed method.
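
For reference, a small sketch of the normalized cross-correlation (NCC) metric reported above; the stand-in arrays are arbitrary and this is not the authors' evaluation code.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two images of the same shape."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float((a * b).mean())

rng = np.random.default_rng(0)
reference = rng.random((128, 128))
registered = reference + 0.05 * rng.random((128, 128))   # stand-in registered image
print(round(ncc(reference, registered), 4))              # close to 1 for good alignment
```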

14.
J Digit Imaging ; 36(3): 923-931, 2023 06.
Article in English | MEDLINE | ID: mdl-36717520

ABSTRACT

The aim of this study was to evaluate a regional deformable model based on deep unsupervised learning for automatic contour propagation in breast cone-beam computed tomography-guided adaptive radiation therapy. A deep unsupervised learning model was introduced to map the breast tumor bed, clinical target volume, heart, left lung, right lung, and spinal cord from planning CT to cone-beam CT. To improve on the performance of traditional image registration methods, we used a regional deformable framework based on narrow-band mapping, which can mitigate the effect of image artifacts on the cone-beam CT. We retrospectively selected 373 anonymized cone-beam CT volumes from 111 patients with breast cancer, divided into three sets: 311/20/42 volumes for training, validation, and testing, respectively. Manual contours were used as the reference for the testing set, and we compared the reference with the model predictions to evaluate performance. The mean Dice between the manual reference segmentations and the model-predicted segmentations for breast tumor bed, clinical target volume, heart, left lung, right lung, and spinal cord was 0.78 ± 0.09, 0.90 ± 0.03, 0.88 ± 0.04, 0.94 ± 0.03, 0.95 ± 0.02, and 0.77 ± 0.07, respectively. These results demonstrate good agreement between the reference and the proposed contours. The proposed deep learning-based regional deformable model can automatically propagate contours for breast cancer adaptive radiotherapy. Deep learning in contour propagation is promising, but further investigation is warranted.
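
The narrow-band idea can be sketched as restricting the similarity computation to voxels within a fixed distance of a structure's surface, so artifacts far from the organ contribute less; the band width and the masked mean-squared-difference similarity below are assumptions, not the paper's exact formulation.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def narrow_band(mask, width_mm=5.0, spacing=(1.0, 1.0, 1.0)):
    """Voxels within width_mm of the mask surface, on either side of it."""
    d_out = distance_transform_edt(~mask, sampling=spacing)   # 0 inside the mask
    d_in = distance_transform_edt(mask, sampling=spacing)     # 0 outside the mask
    return (d_out <= width_mm) & (d_in <= width_mm)

def banded_similarity(fixed, moving, band):
    """Mean-squared difference restricted to the narrow band."""
    return float(((fixed - moving) ** 2)[band].mean())

mask = np.zeros((48, 48, 48), dtype=bool); mask[12:36, 12:36, 12:36] = True
band = narrow_band(mask, width_mm=3.0)
fixed = np.random.default_rng(0).random((48, 48, 48))
moving = fixed + 0.1                                          # stand-in misaligned image
print(int(band.sum()), round(banded_similarity(fixed, moving, band), 4))   # ~0.01
```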


Subjects
Breast Neoplasms, Unsupervised Machine Learning, Humans, Female, Retrospective Studies, Algorithms, Computer-Assisted Radiotherapy Planning/methods, Cone-Beam Computed Tomography/methods, Breast Neoplasms/diagnostic imaging, Breast Neoplasms/radiotherapy, Computer-Assisted Image Processing/methods
15.
J Immunother Cancer ; 11(11)2023 11 21.
Article in English | MEDLINE | ID: mdl-38179695

ABSTRACT

BACKGROUND: Although immune checkpoint inhibitors have provided remarkable benefits in gastric cancer (GC), predictions of treatment response and prognosis remain unsatisfactory, making the identification of biomarkers desirable. The aim of this study was to develop and validate a CT imaging biomarker to predict the immunotherapy response in patients with GC and to investigate the associated immune infiltration patterns. METHODS: This retrospective study included 294 GC patients who received anti-PD-1/PD-L1 immunotherapy at three independent medical centers between January 2017 and April 2022. A radiomics score (RS) was developed from the intratumoral and peritumoral features of pretreatment CT images to predict immunotherapy-related progression-free survival (irPFS). The performance of the RS was evaluated by the area under the time-dependent receiver operating characteristic curve (AUC). Multivariable Cox regression analysis was performed to construct a predictive nomogram of irPFS, whose performance was assessed with the C-index. Bulk RNA sequencing of tumors from 42 patients in The Cancer Genome Atlas was used to investigate the RS-associated immune infiltration patterns. RESULTS: Overall, 89 of 294 patients (median age, 57 years (IQR 48-66 years); 171 males) had an objective response to immunotherapy. The RS included 13 CT features and yielded AUCs for 12-month irPFS of 0.787, 0.810, and 0.785 in the training, internal validation, and external validation 1 cohorts, respectively, and an AUC for 24-month irPFS of 0.805 in the external validation 2 cohort. Patients with a low RS had longer irPFS in each cohort (p<0.05). Multivariable Cox regression analyses showed that the RS is an independent prognostic factor for irPFS. The nomogram integrating the RS and clinical characteristics showed improved performance in predicting irPFS, with a C-index of 0.687-0.778 in the training and validation cohorts. The CT imaging biomarker was associated with M1 macrophage infiltration. CONCLUSION: The findings of this prognostic study suggest that the noninvasive CT imaging biomarker can effectively predict immunotherapy outcomes in patients with GC and is associated with innate immune signaling, and it can serve as a potential tool for individual treatment decisions.
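
A hypothetical sketch of the survival-modelling step is shown below: a multivariable Cox model of a time-to-event outcome including the RS and one clinical covariate, summarized by the concordance index, using the lifelines package on toy data (all values are simulated).

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 200
rs = rng.normal(size=n)                                    # radiomics score (toy)
age = rng.normal(60, 10, size=n)                           # clinical covariate (toy)
risk = 0.8 * rs + 0.01 * (age - 60)
time = rng.exponential(scale=np.exp(-risk) * 12)           # months; higher risk -> shorter
event = (rng.random(n) < 0.7).astype(int)                  # 1 = progression observed
df = pd.DataFrame({'RS': rs, 'age': age, 'time': time, 'event': event})

cph = CoxPHFitter()
cph.fit(df, duration_col='time', event_col='event')
print(cph.summary[['coef', 'exp(coef)', 'p']])
print('C-index:', round(cph.concordance_index_, 3))
```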


Subjects
Immunotherapy, Stomach Neoplasms, Humans, Male, Middle Aged, Biomarkers, Retrospective Studies, Stomach Neoplasms/diagnostic imaging, Stomach Neoplasms/drug therapy, X-Ray Computed Tomography, Female, Aged
16.
Nat Commun ; 13(1): 5095, 2022 08 30.
Article in English | MEDLINE | ID: mdl-36042205

ABSTRACT

The tumor immune microenvironment (TIME) is associated with tumor prognosis and immunotherapy response. Here we develop and validate a CT-based radiomics score (RS) using 2272 gastric cancer (GC) patients to investigate the relationship between the radiomics imaging biomarker and the neutrophil-to-lymphocyte ratio (NLR) in the TIME, including its correlation with prognosis and immunotherapy response in advanced GC. The RS achieves an AUC of 0.795-0.861 in predicting the NLR in the TIME. Notably, the radiomics imaging biomarker is indistinguishable from the IHC-derived NLR status in predicting DFS and OS in each cohort (HR range: 1.694-3.394, P < 0.001). We find that the objective response rates of patients receiving anti-PD-1 immunotherapy are significantly higher in the low-RS group (60.9% and 42.9%) than in the high-RS group (8.1% and 14.3%). The radiomics imaging biomarker is a noninvasive method to evaluate the TIME and may correlate with prognosis and anti-PD-1 immunotherapy response in GC patients.


Subjects
Stomach Neoplasms, Biomarkers, Humans, Immunotherapy, Lymphocytes/pathology, Neutrophils/pathology, Stomach Neoplasms/diagnostic imaging, Stomach Neoplasms/pathology, Stomach Neoplasms/therapy, Tumor Microenvironment
17.
IEEE J Biomed Health Inform ; 26(10): 5247-5257, 2022 10.
Article in English | MEDLINE | ID: mdl-35849683

ABSTRACT

Because the tumor moves with the patient's respiration during treatment, real-time prediction of respiratory motion is required to improve the efficacy of radiotherapy. Several RNN-based respiratory management methods have been proposed for this purpose. However, these existing RNN-based methods often suffer from degraded generalization performance for a long-term window (such as 600 ms) because of their structural consistency constraints. In this paper, we propose an innovative Long Short-term Transformer (LSTformer) for accurate long-term real-time respiratory prediction. Specifically, a novel Long-term Information Enhancement (LIE) module is proposed to address the performance degradation under a long window by increasing the long-term memory of latent variables, and a lightweight Transformer Encoder (LTE) is proposed to satisfy the real-time requirement by simplifying the architecture and limiting the number of layers. In addition, we propose an application-oriented data augmentation strategy to generalize the LSTformer to practical application scenarios, especially robotic radiotherapy. Extensive experiments on our augmented dataset and a publicly available dataset demonstrate the state-of-the-art performance of our method while satisfying the real-time requirement.
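
As a generic illustration of Transformer-based respiratory forecasting (not the LSTformer architecture), the sketch below encodes a window of past breathing-signal samples and regresses the amplitude 600 ms ahead; the sampling rate, window length, and layer sizes are made up.

```python
import math
import torch
import torch.nn as nn

class RespPredictor(nn.Module):
    """Encode a window of past samples; regress the amplitude a fixed horizon ahead."""
    def __init__(self, d_model=64, nhead=4, num_layers=2, max_len=512):
        super().__init__()
        self.embed = nn.Linear(1, d_model)
        self.pos = nn.Parameter(torch.zeros(1, max_len, d_model))   # learned positions
        layer = nn.TransformerEncoderLayer(d_model, nhead, dim_feedforward=128,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.head = nn.Linear(d_model, 1)

    def forward(self, x):                          # x: (B, T, 1)
        h = self.embed(x) + self.pos[:, :x.size(1)]
        return self.head(self.encoder(h)[:, -1])   # predicted future amplitude

# Synthetic breathing trace at 25 Hz; predict 600 ms (15 samples) ahead from a 3 s window.
t = torch.arange(0, 2000) / 25.0
signal = torch.sin(2 * math.pi * t / 4.0)          # 4 s breathing period
window, horizon, n = 75, 15, 1800
X = torch.stack([signal[i:i + window] for i in range(n)]).unsqueeze(-1)
y = signal[window + horizon - 1:window + horizon - 1 + n].unsqueeze(-1)

model = RespPredictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss = nn.functional.mse_loss(model(X[:256]), y[:256])   # one illustrative training step
opt.zero_grad(); loss.backward(); opt.step()
print('one-step MSE:', float(loss))
```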


Subjects
Neoplasms, Respiration, Humans, Movement, Respiratory Rate
18.
Phys Med Biol ; 67(5)2022 03 03.
Article in English | MEDLINE | ID: mdl-35172290

ABSTRACT

Objective. Four-dimensional cone-beam computed tomography (4D CBCT) has unique advantages for moving-target localization, tracking, and therapeutic dose accumulation in adaptive radiotherapy. However, the severe fringe artifacts and noise degradation caused by 4D CBCT reconstruction restrict its clinical application. We propose a novel deep unsupervised learning model to generate high-quality 4D CBCT from poor-quality 4D CBCT. Approach. The proposed model uses a contrastive loss function to preserve the anatomical structure in the corrected image. To preserve the relationship between the input and output images, we use a multilayer, patch-based method rather than operating on entire images. Furthermore, we draw negatives from within the input 4D CBCT rather than from the rest of the dataset. Main results. The results showed that the streak and motion artifacts were significantly suppressed, and the spatial resolution of the pulmonary vessels and microstructure was also improved. To present the results from different directions, we provide a supplementary animation showing different views of the predicted corrected image. Significance. The proposed method can be integrated into any 4D CBCT reconstruction method and may be a practical way to enhance the image quality of 4D CBCT.
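
The patchwise contrastive idea with internal negatives can be sketched as an InfoNCE-style loss in which each location's positive is the corresponding feature of the corrected image and the negatives are other locations drawn from the same input; shapes and the temperature are illustrative, and this is not the authors' exact loss.

```python
import torch
import torch.nn.functional as F

def patch_nce_loss(feat_src, feat_out, tau=0.07):
    """feat_src, feat_out: (B, C, N) L2-normalized patch features of input and output."""
    B, C, N = feat_src.shape
    logits = torch.bmm(feat_out.transpose(1, 2), feat_src) / tau   # (B, N, N) similarities
    targets = torch.arange(N, device=logits.device).expand(B, N)   # positive = same location
    return F.cross_entropy(logits.reshape(B * N, N), targets.reshape(B * N))

feat_src = F.normalize(torch.randn(2, 64, 256), dim=1)             # input patch features
feat_out = F.normalize(feat_src + 0.1 * torch.randn(2, 64, 256), dim=1)   # output features
print(patch_nce_loss(feat_src, feat_out).item())
```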


Subjects
Artifacts, Spiral Cone-Beam Computed Tomography, Four-Dimensional Computed Tomography, Motion (Physics), Unsupervised Machine Learning
19.
Comput Biol Med ; 141: 105139, 2022 02.
Article in English | MEDLINE | ID: mdl-34942395

ABSTRACT

PURPOSE: To develop a deep unsupervised learning method with control volume (CV) mapping from daily patient positioning CT (dCT) to planning computed tomography (pCT) for precise patient positioning. METHODS: We propose an unsupervised learning framework that maps CVs from the dCT to the pCT to automatically generate the couch shifts, including translation and rotation dimensions. The network inputs are the dCT, the pCT, and the CV positions in the pCT; the output is the transformation parameters of the dCT used to set up head and neck cancer (HNC) patients. The network is trained to maximize image similarity between the CVs in the pCT and the CVs in the dCT. A total of 554 CT scans from 158 HNC patients were used to evaluate the proposed model; each patient had multiple CT scans acquired at different time points. For testing, couch shifts are calculated by averaging the translations and rotations obtained from the CVs. The ground-truth shifts come from bone landmarks determined by an experienced radiation oncologist. RESULTS: The systematic positioning errors of translation and rotation are less than 0.47 mm and 0.17°, respectively, and the random positioning errors are less than 1.13 mm and 0.29°, respectively. The proposed method increased the proportion of cases registered within a preset tolerance (2.0 mm/1.0°) from 66.67% to 90.91% compared with standard registration. CONCLUSIONS: We proposed a deep unsupervised learning architecture for patient positioning that includes CV mapping, which weights the CV regions differently to mitigate any potential adverse influence of image artifacts on the registration. Our experimental results show that the proposed method achieves efficient and effective HNC patient positioning.
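
A toy sketch of the shift aggregation and tolerance check described above: average the per-CV translations and rotations into one couch correction and compare it with the ground truth at the 2.0 mm / 1.0° tolerance; all numbers below are made up.

```python
import numpy as np

def aggregate_shifts(cv_translations_mm, cv_rotations_deg):
    """Mean translation (x, y, z) and rotation (rx, ry, rz) across control volumes."""
    return np.mean(cv_translations_mm, axis=0), np.mean(cv_rotations_deg, axis=0)

def within_tolerance(pred_t, pred_r, gt_t, gt_r, tol_mm=2.0, tol_deg=1.0):
    return bool(np.all(np.abs(pred_t - gt_t) <= tol_mm) and
                np.all(np.abs(pred_r - gt_r) <= tol_deg))

cv_t = np.array([[1.2, -0.4, 0.8], [1.0, -0.6, 0.9], [1.3, -0.5, 0.7]])   # per-CV mm
cv_r = np.array([[0.2, 0.1, -0.1], [0.3, 0.0, -0.2], [0.1, 0.1, -0.1]])   # per-CV deg
gt_t, gt_r = np.array([1.1, -0.5, 0.8]), np.array([0.2, 0.1, -0.1])       # landmark-based
pred_t, pred_r = aggregate_shifts(cv_t, cv_r)
print(pred_t, pred_r, within_tolerance(pred_t, pred_r, gt_t, gt_r))
```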


Subjects
Head and Neck Neoplasms, Image-Guided Radiotherapy, Cone-Beam Computed Tomography/methods, Head and Neck Neoplasms/diagnostic imaging, Head and Neck Neoplasms/radiotherapy, Humans, Computer-Assisted Radiotherapy Planning/methods, Image-Guided Radiotherapy/methods, X-Ray Computed Tomography
20.
Quant Imaging Med Surg ; 11(12): 4881-4894, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34888196

ABSTRACT

Modern conformal beam delivery techniques require image guidance to ensure that the prescribed dose is delivered as planned. Recent advances in artificial intelligence (AI) have greatly augmented our ability to accurately localize the treatment target while sparing normal tissues. In this paper, we review the applications of AI-based algorithms in image-guided radiotherapy (IGRT) and discuss the implications of these applications for the future clinical practice of radiotherapy. The benefits, limitations, and some important trends in the research and development of AI-based IGRT techniques are also discussed. AI-based IGRT techniques have the potential to monitor tumor motion, reduce treatment uncertainty, and improve treatment precision. In particular, these techniques also allow more healthy tissue to be spared while maintaining or even improving tumor coverage.
