Results 1 - 20 of 52
1.
Phys Med Biol ; 69(8)2024 Apr 03.
Article in English | MEDLINE | ID: mdl-38471171

ABSTRACT

Objective. The aim of this study was to reconstruct volumetric computed tomography (CT) images in real time from ultra-sparse two-dimensional x-ray projections, facilitating easier navigation and positioning during image-guided radiation therapy. Approach. Our approach leverages a voxel-space-searching Transformer model to overcome the limitations of conventional CT reconstruction techniques, which require extensive x-ray projections, leading to high radiation doses and equipment constraints. Main results. The proposed XTransCT algorithm demonstrated superior performance in terms of image quality, structural accuracy, and generalizability across different datasets, including a hospital set of 50 patients, the large-scale public LIDC-IDRI dataset, and the LNDb dataset for cross-validation. Notably, the algorithm achieved an approximately 300% improvement in reconstruction speed, at 44 ms per 3D image, compared with previous 3D convolution-based methods. Significance. The XTransCT architecture has the potential to impact clinical practice by providing high-quality CT images faster and with substantially reduced radiation exposure for patients. The model's generalizability suggests it is potentially applicable in various healthcare settings.
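As an illustration of the projection-to-volume idea described above, the hedged sketch below builds a toy transformer that maps two sparse x-ray projections to a coarse CT volume. The architecture, layer sizes, and two-projection input are illustrative assumptions, not the authors' XTransCT implementation.

```python
import torch
import torch.nn as nn

class ProjectionToVolume(nn.Module):
    """Toy stand-in for a voxel-space-searching transformer: encodes two
    64x64 projections and decodes a coarse 32^3 volume."""
    def __init__(self, proj_size=64, vol_size=32, d_model=128):
        super().__init__()
        self.encoder = nn.Sequential(                    # CNN feature extractor
            nn.Conv2d(2, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, d_model, 3, stride=2, padding=1), nn.ReLU(),
        )
        n_tokens = (proj_size // 4) ** 2                 # tokens after 4x downsampling
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        self.decoder = nn.Linear(d_model * n_tokens, vol_size ** 3)
        self.vol_size = vol_size

    def forward(self, projections):                      # (B, 2, H, W)
        feats = self.encoder(projections)                # (B, C, H/4, W/4)
        tokens = feats.flatten(2).transpose(1, 2)        # (B, N, C)
        tokens = self.transformer(tokens)                # global attention
        vol = self.decoder(tokens.flatten(1))            # project to voxels
        return vol.view(-1, self.vol_size, self.vol_size, self.vol_size)

model = ProjectionToVolume()
volume = model(torch.randn(1, 2, 64, 64))                # -> (1, 32, 32, 32)
```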


Subject(s)
Radiotherapy, Image-Guided; Tomography, X-Ray Computed; Humans; Tomography, X-Ray Computed/methods; X-Rays; Cone-Beam Computed Tomography/methods; Imaging, Three-Dimensional; Algorithms; Image Processing, Computer-Assisted/methods; Phantoms, Imaging
2.
Comput Med Imaging Graph ; 112: 102336, 2024 03.
Article in English | MEDLINE | ID: mdl-38244280

ABSTRACT

Rigid pre-registration is crucial in local-global matching and other large-deformation scenarios. Current popular methods rely on unsupervised learning based on grayscale similarity, but when different poses lead to varying tissue structures, or when image quality is poor, these methods tend to be unstable and inaccurate. In this study, we propose a novel method for medical image registration based on arbitrary voxel point-of-interest matching, called query point quizzer (QUIZ). QUIZ focuses on the correspondence between local-global matching points, employing a CNN for feature extraction and a Transformer architecture for global point-matching queries, followed by applying the average displacement for a local rigid image transformation. We validated this approach on a large-deformation dataset of cervical cancer patients, with results indicating substantially smaller deviations than state-of-the-art methods. Remarkably, even for cross-modality subjects it achieves results surpassing the current state of the art.
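The final step of this pipeline, deriving a rigid transform from matched query points, reduces to averaging displacements. A minimal sketch under that reading, with the CNN/Transformer matcher omitted and `moving_pts`/`fixed_pts` (hypothetical names) standing in for its outputs:

```python
import numpy as np

def rigid_shift_from_matches(moving_pts, fixed_pts):
    """Average displacement of matched voxel points -> a global 3D shift."""
    return (fixed_pts - moving_pts).mean(axis=0)

moving = np.array([[10., 12., 30.], [40., 45., 32.], [22., 18., 29.]])
fixed = moving + np.array([2.0, -1.0, 0.5])     # simulated ground-truth shift
print(rigid_shift_from_matches(moving, fixed))  # ~ [ 2.  -1.   0.5]
```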


Subject(s)
Algorithms; Uterine Cervical Neoplasms; Female; Humans; Uterine Cervical Neoplasms/diagnostic imaging; Image Processing, Computer-Assisted/methods
3.
IEEE Trans Med Imaging ; PP, 2024 Jan 09.
Article in English | MEDLINE | ID: mdl-38194400

ABSTRACT

During computed tomography (CT), metallic implants often cause disruptive artifacts in the reconstructed images, impeding accurate diagnosis. Many supervised deep learning-based approaches have been proposed for metal artifact reduction (MAR). However, these methods heavily rely on training with paired simulated data, which are challenging to acquire. This limitation can lead to decreased performance when applying these methods in clinical practice. Existing unsupervised MAR methods, whether learning-based or not, typically work within a single domain, either the image domain or the sinogram domain. In this paper, we propose an unsupervised MAR method based on the diffusion model, a generative model with a high capacity to represent data distributions. Specifically, we first train a diffusion model using CT images without metal artifacts. Subsequently, we iteratively introduce the diffusion priors in both the sinogram domain and the image domain to restore the portions degraded by metal artifacts. In addition, we design temporally dynamic weight masks for the image-domain fusion. The dual-domain processing empowers our approach to outperform existing unsupervised MAR methods, including another diffusion-model-based MAR method. The effectiveness has been validated qualitatively and quantitatively on synthetic datasets. Moreover, our method demonstrates superior visual results among both supervised and unsupervised methods on clinical datasets. Code is available at github.com/DeepXuan/DuDoDp-MAR.
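A hedged sketch of the image-domain fusion idea: at each reverse-diffusion step, a time-dependent weight mask blends the diffusion-restored image with the degraded input inside the metal-affected region. The linear mask schedule below is an illustrative assumption, not the paper's exact design.

```python
import numpy as np

def fuse_image_domain(restored, degraded, metal_mask, t, T):
    """Blend diffusion-restored and measured images at reverse step t.

    Inside the metal trace the prior is trusted more as t -> 0; outside,
    the measured image is kept unchanged.
    """
    w = metal_mask * (1.0 - t / T)          # dynamic weight mask at step t
    return w * restored + (1.0 - w) * degraded

rng = np.random.default_rng(0)
restored = rng.normal(size=(4, 4))          # toy stand-ins for real images
degraded = rng.normal(size=(4, 4))
mask = np.zeros((4, 4)); mask[1:3, 1:3] = 1.0   # metal-affected region
fused = fuse_image_domain(restored, degraded, mask, t=200, T=1000)
```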

4.
Med Image Anal ; 91: 102984, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37837690

ABSTRACT

The accurate delineation of organs-at-risk (OARs) is a crucial step in treatment planning during radiotherapy, as it minimizes the potential adverse effects of radiation on surrounding healthy organs. However, manual contouring of OARs in computed tomography (CT) images is labor-intensive and susceptible to errors, particularly for low-contrast soft tissue. Deep learning-based artificial intelligence algorithms surpass traditional methods but require large datasets, and obtaining annotated medical images is both time-consuming and expensive, hindering the collection of extensive training sets. To enhance the performance of medical image segmentation, augmentation strategies such as rotation and Gaussian smoothing are employed during preprocessing. However, these conventional techniques cannot generate realistic anatomical deformations, limiting improvements in accuracy. To address this issue, this study introduces a statistical deformation model-based data augmentation method for volumetric medical image segmentation. By applying diverse and realistic data augmentation to CT images from a limited patient cohort, our method significantly improves the fully automated segmentation of OARs across various body parts. We evaluate our framework on three datasets containing tumor OARs from the head, neck, chest, and abdomen. Test results demonstrate that the proposed method achieves state-of-the-art performance in numerous OAR segmentation tasks. This approach holds considerable potential as a powerful tool for medical imaging-related sub-fields, effectively addressing the challenge of limited data access.
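One common way to realize a statistical deformation model, assumed here for illustration, is PCA over example deformation vector fields (DVFs) followed by sampling along the principal modes; the authors' exact model may differ.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def sample_deformation(dvfs, n_modes=3, scale=1.0, rng=None):
    """PCA over example DVFs; draw a new, statistically plausible DVF.

    dvfs: (K, 3, D, H, W) deformation fields, e.g. from inter-patient registration.
    """
    rng = rng if rng is not None else np.random.default_rng()
    flat = dvfs.reshape(len(dvfs), -1)
    mean = flat.mean(axis=0)
    _, s, vt = np.linalg.svd(flat - mean, full_matrices=False)
    coeffs = rng.normal(0.0, scale, n_modes) * s[:n_modes] / np.sqrt(len(dvfs))
    return (mean + coeffs @ vt[:n_modes]).reshape(dvfs.shape[1:])

def warp(volume, dvf):
    """Backward-warp a CT volume with a (3, D, H, W) deformation field."""
    grid = np.indices(volume.shape).astype(float)
    return map_coordinates(volume, grid + dvf, order=1, mode="nearest")
```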


Subject(s)
Artificial Intelligence; Neoplasms; Humans; Algorithms; Neck; Tomography, X-Ray Computed/methods; Image Processing, Computer-Assisted/methods; Radiotherapy Planning, Computer-Assisted/methods
5.
Med Image Anal ; 91: 102998, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37857066

ABSTRACT

Radiotherapy serves as a pivotal treatment modality for malignant tumors. However, its accuracy is significantly compromised by respiratory-induced fluctuations in the size, shape, and position of the tumor. To address this challenge, we introduce a deep learning-based volumetric tumor tracking method that employs single-angle X-ray projection images. The intraoperative two-dimensional (2D) X-ray images are aligned with the pre-treatment three-dimensional (3D) planning computed tomography (CT) scans, enabling extraction of the 3D tumor position and segmentation. Prior to therapy, a patient-specific tumor tracking model is built, leveraging a hybrid data augmentation, style correction, and registration network to create a mapping from single-angle 2D X-ray images to the corresponding 3D tumors. During treatment, real-time X-ray images are fed into the trained model, producing the respective 3D tumor positions. Validation on actual patient lung data and lung phantoms attests to the high localization precision of our method at lowered radiation doses, a promising stride towards enhancing the precision of radiotherapy.


Subject(s)
Deep Learning; Neoplasms; Humans; Imaging, Three-Dimensional/methods; X-Rays; Tomography, X-Ray Computed/methods; Neoplasms/diagnostic imaging; Neoplasms/radiotherapy; Cone-Beam Computed Tomography/methods
6.
Bioengineering (Basel) ; 10(11)2023 Nov 14.
Article in English | MEDLINE | ID: mdl-38002438

ABSTRACT

The detection of coronavirus disease 2019 (COVID-19) is crucial for controlling the spread of the virus. Current research utilizes X-ray imaging and artificial intelligence for COVID-19 diagnosis. However, conventional X-ray scans expose patients to excessive radiation, rendering repeated examinations impractical. Ultra-low-dose X-ray imaging technology enables rapid and accurate COVID-19 detection with minimal additional radiation exposure. In this retrospective cohort study, ULTRA-X-COVID, a deep neural network specifically designed for automatic detection of COVID-19 infections using ultra-low-dose X-ray images, is presented. The study included a multinational, multicenter dataset consisting of 30,882 X-ray images obtained from approximately 16,600 patients across 51 countries, with no overlap between the training and test sets. The data analysis was conducted from 1 April 2020 to 1 January 2022. The model was evaluated using the area under the receiver operating characteristic curve (AUC), accuracy, specificity, and F1 score. In the test set, the model demonstrated an AUC of 0.968 (95% CI, 0.956-0.983), accuracy of 94.3%, specificity of 88.9%, and F1 score of 99.0%. Notably, the ULTRA-X-COVID model demonstrated performance comparable to conventional X-ray doses, with a prediction time of only 0.1 s per image. These findings suggest that the ULTRA-X-COVID model can effectively identify COVID-19 cases using ultra-low-dose X-ray scans, providing a novel alternative for COVID-19 detection. Moreover, the model exhibits potential adaptability for the diagnosis of various other diseases.
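The reported metrics can be reproduced with standard tooling. A short sketch using scikit-learn on toy predictions (not the study's data); specificity is derived from the confusion matrix, since scikit-learn has no direct function for it:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, accuracy_score, f1_score, confusion_matrix

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])            # toy ground truth
y_score = np.array([0.9, 0.2, 0.8, 0.7, 0.4, 0.1, 0.6, 0.3])
y_pred = (y_score >= 0.5).astype(int)                   # thresholded labels

auc = roc_auc_score(y_true, y_score)
acc = accuracy_score(y_true, y_pred)
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
spec = tn / (tn + fp)                                   # specificity
f1 = f1_score(y_true, y_pred)
print(f"AUC={auc:.3f} accuracy={acc:.3f} specificity={spec:.3f} F1={f1:.3f}")
```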

7.
Phys Med Biol ; 68(24)2023 Dec 08.
Article in English | MEDLINE | ID: mdl-37844603

ABSTRACT

Objective. Medical image registration is a fundamental challenge in medical image processing. In particular, CT-CBCT registration has significant implications for image-guided radiation therapy (IGRT). However, traditional iterative methods often require considerable computational time, and deep learning-based methods, especially when dealing with low-contrast organs, frequently become trapped in local optima. Approach. To address these limitations, we introduce a registration method based on volumetric feature point integration with bio-structure-informed guidance. A surface point cloud is generated from segmentation labels during the training stage, with both the surface-registered point pairs and the voxel feature point pairs co-guiding the training process, thereby achieving higher registration accuracy. Main results. Our findings were validated on paired CT-CBCT datasets. In comparison with other deep learning registration methods, our approach improved precision by 6%, reaching state-of-the-art status. Significance. The integration of voxel feature points and bio-structure feature points to guide the training of the medical image registration network has achieved promising results, providing a meaningful direction for further research in medical image registration and IGRT.
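A minimal sketch of what a co-guided training objective could look like: one term for surface-registered point pairs and one for voxel feature point pairs, combined with a weight `alpha`. The form and weighting are illustrative assumptions, not the paper's exact loss.

```python
import torch

def co_guided_loss(warped_surface, fixed_surface,
                   warped_feat_pts, fixed_feat_pts, alpha=0.5):
    """Mean point-to-point distance on surface pairs plus feature-point pairs."""
    surf = torch.norm(warped_surface - fixed_surface, dim=-1).mean()
    feat = torch.norm(warped_feat_pts - fixed_feat_pts, dim=-1).mean()
    return alpha * surf + (1.0 - alpha) * feat

loss = co_guided_loss(torch.rand(100, 3), torch.rand(100, 3),
                      torch.rand(50, 3), torch.rand(50, 3))
```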


Subject(s)
Cone-Beam Computed Tomography; Radiotherapy, Image-Guided; Cone-Beam Computed Tomography/methods; Image Processing, Computer-Assisted/methods; Radiotherapy, Image-Guided/methods; Algorithms
8.
Phys Med Biol ; 68(20)2023 Oct 04.
Article in English | MEDLINE | ID: mdl-37714184

ABSTRACT

Objective. Computed tomography (CT) is a widely employed imaging technology for disease detection. However, CT images often suffer from ring artifacts, which may result from hardware defects and other factors. These artifacts compromise image quality and impede diagnosis. To address this challenge, we propose a novel method based on a dual contrastive learning image style transformation network (DCLGAN) that effectively eliminates ring artifacts from CT images while preserving texture details. Approach. Our method simulates ring artifacts on real CT data to generate uncorrected CT (uCT) data; a polar-coordinate transformation converts the ring artifacts into strip artifacts. The DCLGAN synthesis network is then applied in the polar coordinate system to remove the strip artifacts and generate a synthetic CT (sCT). We compare the uCT and sCT images to obtain a residual image, which is filtered to extract the strip artifacts. An inverse polar transformation yields the ring artifacts, which are subtracted from the original CT image to produce a corrected image. Main results. To validate the effectiveness of our approach, we tested it on real CT data, simulated data, and cone-beam computed tomography images of patients' brains. The corrected CT images showed a reduction in mean absolute error of 12.36 Hounsfield units (HU), a decrease in root mean square error of 18.94 HU, an increase in peak signal-to-noise ratio of 3.53 decibels (dB), and an improvement in structural similarity index of 9.24%. Significance. These results demonstrate the efficacy of our method in eliminating ring artifacts while preserving image details, making it a valuable tool for CT imaging.
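The geometry of this pipeline can be sketched without the DCLGAN itself: in polar coordinates, rings become stripes that are constant along the angle axis. Below, a simple per-radius profile estimate stands in for the learned artifact removal (an illustrative substitution), using OpenCV's polar warps:

```python
import cv2
import numpy as np
from scipy.ndimage import median_filter

def remove_rings(ct):
    """Estimate and subtract ring artifacts via a polar-domain profile."""
    h, w = ct.shape
    center = (w / 2.0, h / 2.0)
    radius = min(h, w) / 2.0
    polar = cv2.warpPolar(ct.astype(np.float32), (w, h), center, radius,
                          cv2.WARP_POLAR_LINEAR)        # rows: angle, cols: radius
    profile = polar.mean(axis=0)                        # per-radius mean over angles
    stripe = profile - median_filter(profile, size=31)  # stripe = ring offsets
    stripe_img = np.tile(stripe, (h, 1)).astype(np.float32)
    rings = cv2.warpPolar(stripe_img, (w, h), center, radius,
                          cv2.WARP_POLAR_LINEAR | cv2.WARP_INVERSE_MAP)
    return ct - rings                                   # corrected image
```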

9.
Comput Biol Med ; 165: 107377, 2023 10.
Article in English | MEDLINE | ID: mdl-37651766

ABSTRACT

PURPOSE: Cone-beam computed tomography (CBCT) is widely utilized in modern radiotherapy; however, CBCT images exhibit increased scatter artifacts compared with planning CT (pCT), compromising image quality and limiting further applications. Scatter correction is thus crucial for improving CBCT image quality. METHODS: In this study, we proposed an unsupervised contrastive learning method for CBCT scatter correction. Initially, we transformed low-quality CBCT into high-quality synthetic pCT (spCT) and generated forward projections of the CBCT and the spCT. By computing the difference between these projections, we obtained a residual image containing image details and scatter artifacts. Image details primarily comprise high-frequency signals, while scatter artifacts consist mainly of low-frequency signals. We extracted the scatter projection signal by applying a low-pass filter to remove the image details. The corrected CBCT (cCBCT) projection signal was obtained by subtracting the scatter projection signal from the original CBCT projection. Finally, we employed the FDK reconstruction algorithm to generate the cCBCT image. RESULTS: To evaluate cCBCT image quality, we aligned the CBCT and pCT of six patients. In comparison with CBCT, cCBCT maintains anatomical consistency and significantly improves CT number accuracy, spatial homogeneity, and artifact suppression. The mean absolute error (MAE) of the test data decreased from 88.0623 ± 26.6700 HU to 17.5086 ± 3.1785 HU. The MAE of fat regions of interest (ROIs) declined from 370.2980 ± 64.9730 HU to 8.5149 ± 1.8265 HU, and the error between their maximum and minimum CT numbers decreased from 572.7528 HU to 132.4648 HU. The MAE of muscle ROIs decreased from 354.7689 ± 25.0139 HU to 16.4475 ± 3.6812 HU. We also compared our proposed method with several conventional unsupervised synthetic image generation techniques, demonstrating superior performance. CONCLUSIONS: Our approach effectively enhances CBCT image quality and shows promising potential for future clinical adoption.
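A minimal sketch of the scatter-extraction step described in the methods: the projection-domain residual between CBCT and synthetic pCT is low-pass filtered (scatter being predominantly low-frequency) and subtracted. The Gaussian filter and its cutoff are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def correct_projection(cbct_proj, spct_proj, sigma=20.0):
    """Subtract the low-frequency part of the CBCT-vs-spCT residual."""
    residual = cbct_proj - spct_proj            # image details + scatter
    scatter = gaussian_filter(residual, sigma)  # keep low-frequency scatter
    return cbct_proj - scatter                  # scatter-corrected projection
```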


Subject(s)
Algorithms; Cone-Beam Computed Tomography; Humans; Cone-Beam Computed Tomography/methods; Artifacts; Image Processing, Computer-Assisted/methods; Phantoms, Imaging; Scattering, Radiation
10.
Int J Surg ; 109(7): 2010-2024, 2023 Jul 01.
Article in English | MEDLINE | ID: mdl-37300884

ABSTRACT

BACKGROUND: Peritoneal recurrence (PR) is the predominant pattern of relapse after curative-intent surgery in gastric cancer (GC) and indicates a dismal prognosis. Accurate prediction of PR is crucial for patient management and treatment. The authors aimed to develop a noninvasive imaging biomarker from computed tomography (CT) for PR evaluation and to investigate its associations with prognosis and chemotherapy benefit. METHODS: In this multicenter study including five independent cohorts of 2005 GC patients, the authors extracted 584 quantitative features from the intratumoral and peritumoral regions on contrast-enhanced CT images. Artificial intelligence algorithms were used to select significant PR-related features, which were then integrated into a radiomics imaging signature. Improvements in clinicians' diagnostic accuracy for PR with signature assistance were quantified. Using Shapley values, the authors determined the most relevant features and provided explanations for the predictions. The authors further evaluated the signature's predictive performance for prognosis and chemotherapy response. RESULTS: The developed radiomics signature had consistently high accuracy in predicting PR in the training cohort (area under the curve: 0.732) and the internal and Sun Yat-sen University Cancer Center validation cohorts (0.721 and 0.728). The radiomics signature was the most important feature in the Shapley interpretation. With signature assistance, clinicians' diagnostic accuracy for PR improved by 10.13-18.86% (P<0.001). Furthermore, the signature was also applicable to survival prediction. In multivariable analysis, the radiomics signature remained an independent predictor of PR and prognosis (P<0.001 for all). Importantly, patients with a predicted high risk of PR gained a survival benefit from adjuvant chemotherapy, whereas chemotherapy had no impact on survival for patients with a predicted low risk of PR. CONCLUSION: The noninvasive and explainable model developed from preoperative CT images could accurately predict PR and chemotherapy benefit in patients with GC, allowing the optimization of individual decision-making.


Subject(s)
Peritoneal Neoplasms; Stomach Neoplasms; Humans; Stomach Neoplasms/diagnostic imaging; Stomach Neoplasms/drug therapy; Stomach Neoplasms/surgery; Artificial Intelligence; Peritoneal Neoplasms/diagnostic imaging; Peritoneal Neoplasms/drug therapy; Retrospective Studies; Neoplasm Recurrence, Local/diagnostic imaging; Gastrectomy
11.
Comput Biol Med ; 161: 106888, 2023 07.
Article in English | MEDLINE | ID: mdl-37244146

ABSTRACT

X-ray computed tomography (CT) plays a vitally important role in clinical diagnosis, but radiation exposure also carries a cancer risk for patients. Sparse-view CT reduces this dose by sparsely sampling the projections. However, images reconstructed from sparse-view sinograms often suffer from serious streaking artifacts. To overcome this issue, we propose an end-to-end attention-based deep network for image correction. First, images are reconstructed from the sparse projections by the filtered back-projection algorithm. Next, the reconstructed results are fed into the deep network for artifact correction. More specifically, we integrate attention-gating modules into U-Net pipelines, which implicitly learn to emphasize relevant features beneficial for a given task while suppressing background regions. Attention is used to combine the local feature vectors extracted at intermediate stages of the convolutional neural network with the global feature vector extracted from the coarse-scale activation map. To improve performance, we fused a pre-trained ResNet50 model into our architecture. The model was trained and tested on the dataset from The Cancer Imaging Archive (TCIA), which consists of images of various human organs acquired from multiple views. The experiments demonstrate that the developed method is highly effective at removing streaking artifacts while preserving structural details. Additionally, quantitative evaluation shows significant improvement in peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and root mean squared error (RMSE) compared with other methods, with an average PSNR of 33.9538, SSIM of 0.9435, and RMSE of 45.1208 at 20 views. Finally, the transferability of the network was verified on the 2016 AAPM dataset. This approach therefore holds great promise for achieving high-quality sparse-view CT images.
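A hedged sketch of an attention gate in the spirit of Attention U-Net, where a coarse gating signal re-weights skip-connection features to suppress background regions. Channel sizes and the exact formulation are illustrative, not necessarily the paper's:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionGate(nn.Module):
    """Re-weights skip-connection features with a coarse gating signal."""
    def __init__(self, skip_ch, gate_ch, inter_ch):
        super().__init__()
        self.theta = nn.Conv2d(skip_ch, inter_ch, 1)   # project skip features
        self.phi = nn.Conv2d(gate_ch, inter_ch, 1)     # project gating signal
        self.psi = nn.Conv2d(inter_ch, 1, 1)           # scalar attention map

    def forward(self, skip, gate):
        g = F.interpolate(self.phi(gate), size=skip.shape[2:],
                          mode="bilinear", align_corners=False)
        attn = torch.sigmoid(self.psi(F.relu(self.theta(skip) + g)))
        return skip * attn                              # background suppressed

gate = AttentionGate(skip_ch=64, gate_ch=128, inter_ch=32)
out = gate(torch.randn(1, 64, 64, 64), torch.randn(1, 128, 32, 32))
```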


Subject(s)
Image Processing, Computer-Assisted; Neural Networks, Computer; Humans; Image Processing, Computer-Assisted/methods; Tomography, X-Ray Computed/methods; Algorithms; Artifacts
12.
BMC Med Inform Decis Mak ; 23(1): 64, 2023 04 06.
Article in English | MEDLINE | ID: mdl-37024893

ABSTRACT

BACKGROUND: Breast cancer (BC) is one of the most common cancers among women. Since diverse features can be collected, stably selecting the most informative ones for accurate BC diagnosis remains challenging. METHODS: A hybrid framework is designed to successively investigate both feature ranking (FR) stability and cancer diagnosis effectiveness. Specifically, on 4 BC datasets (BCDR-F03, WDBC, GSE10810 and GSE15852), the stability of 23 FR algorithms is evaluated via an advanced estimator (S), and the predictive power of the stable feature ranks is further tested using different machine learning classifiers. RESULTS: Experimental results identify 3 algorithms achieving good stability ([Formula: see text]) on the four datasets, with the generalized Fisher score (GFS) leading to state-of-the-art performance. Moreover, the GFS ranks suggest that shape features are crucial in BC image analysis (BCDR-F03 and WDBC) and that a few genes can well differentiate benign and malignant tumor cases (GSE10810 and GSE15852). CONCLUSIONS: The proposed framework identifies a stable FR algorithm for accurate BC diagnosis. Stable and effective features could deepen the understanding of BC diagnosis and related decision-making applications.
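Stability of feature rankings is often scored by agreement across resamples; one common choice, assumed here for illustration (the study's estimator S may differ), is the average pairwise Spearman correlation between rankings:

```python
import numpy as np
from itertools import combinations
from scipy.stats import spearmanr

def ranking_stability(rank_lists):
    """Mean pairwise Spearman correlation across feature rankings.

    rank_lists: (R, F) array; row r is the rank of each feature on resample r.
    """
    pairs = combinations(range(len(rank_lists)), 2)
    return float(np.mean([spearmanr(rank_lists[i], rank_lists[j])[0]
                          for i, j in pairs]))

ranks = np.array([[1, 2, 3, 4], [1, 3, 2, 4], [2, 1, 3, 4]])  # toy rankings
print(ranking_stability(ranks))   # close to 1.0 -> stable ranking
```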


Subject(s)
Breast Neoplasms; Female; Humans; Breast Neoplasms/diagnosis; Algorithms; Machine Learning
13.
IEEE Trans Med Imaging ; 42(5): 1495-1508, 2023 05.
Article in English | MEDLINE | ID: mdl-37015393

ABSTRACT

A novel method is proposed to obtain four-dimensional (4D) cone-beam computed tomography (CBCT) images from a routine scan in patients with upper abdominal cancer. The projections are sorted according to the location of the lung diaphragm before being reconstructed into phase-sorted data. A multiscale-discriminator generative adversarial network (MSD-GAN) is proposed to alleviate the severe streaking artifacts in the original images. The MSD-GAN is trained using simulated CBCT datasets generated from patient planning CT images. The enhanced images are further used to estimate the deformable vector field (DVF) among breathing phases using a deformable image registration method. The estimated DVF is then applied in a motion-compensated ordered-subset simultaneous algebraic reconstruction approach to generate the 4D CBCT images. The proposed MSD-GAN is compared with U-Net on image enhancement performance. In simulation and patient studies, the proposed method significantly outperforms both the total variation regularization-based iterative reconstruction approach and MSD-GAN enhancement of the original phase-sorted images in 4D reconstruction quality, and the MSD-GAN also shows higher accuracy than the U-Net. The proposed method enables a practical way to obtain 4D-CBCT imaging from a single routine scan in upper abdominal cancer treatment, including liver and pancreatic tumors.
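A small sketch of amplitude-based phase sorting, assuming a 1D diaphragm-position signal has already been extracted from the projections (the extraction itself, and the paper's exact binning scheme, are not shown here):

```python
import numpy as np

def sort_into_phases(diaphragm_pos, n_phases=10):
    """Assign each projection an amplitude-based respiratory phase bin."""
    bins = np.linspace(diaphragm_pos.min(), diaphragm_pos.max(), n_phases + 1)
    idx = np.digitize(diaphragm_pos, bins) - 1
    return np.clip(idx, 0, n_phases - 1)

signal = 10 + 5 * np.sin(np.linspace(0, 8 * np.pi, 600))  # toy breathing trace
phases = sort_into_phases(signal)                          # bin index per projection
```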


Subject(s)
Cone-Beam Computed Tomography; Deep Learning; Image Enhancement; Neoplasms; Cone-Beam Computed Tomography/methods; Datasets as Topic; Neoplasms/diagnostic imaging
14.
Front Oncol ; 13: 1127866, 2023.
Article in English | MEDLINE | ID: mdl-36910636

ABSTRACT

Objective: To develop a contrastive learning-based generative (CLG) model for generating high-quality synthetic computed tomography (sCT) from low-quality cone-beam CT (CBCT), thereby improving the performance of deformable image registration (DIR). Methods: This study included 100 patients after breast-conserving surgery, with pCT images, CBCT images, and physician-delineated target contours. sCT images were generated from the CBCT images via the proposed CLG model and used as the fixed images instead of the CBCT images to achieve accurate multi-modality image registration. The deformation vector field was applied to propagate the target contour from the pCT to the CBCT, realizing automatic target segmentation on CBCT images. We calculated the Dice similarity coefficient (DSC), 95% Hausdorff distance (HD95), and average surface distance (ASD) between the predicted and reference segmentations to evaluate the proposed method. Results: The DSC, HD95, and ASD of the target contours with the proposed method were 0.87 ± 0.04, 4.55 ± 2.18, and 1.41 ± 0.56, respectively. The proposed method outperformed the traditional method without synthetic CT assistance (0.86 ± 0.05, 5.17 ± 2.60, and 1.55 ± 0.72), especially for soft-tissue targets such as the tumor bed region. Conclusion: The proposed CLG model can create high-quality sCT from low-quality CBCT and improve the performance of DIR between the CBCT and the pCT. The target segmentation accuracy is better than with the traditional DIR.
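For reference, the Dice similarity coefficient used in the evaluation can be computed as below; HD95 and ASD would additionally require surface-distance computations (omitted here). The sketch assumes non-empty binary masks:

```python
import numpy as np

def dice(pred, ref):
    """Dice similarity coefficient of two non-empty binary masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    return 2.0 * np.logical_and(pred, ref).sum() / (pred.sum() + ref.sum())
```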

15.
Comput Biol Med ; 155: 106710, 2023 03.
Article in English | MEDLINE | ID: mdl-36842222

ABSTRACT

PURPOSE: Metal artifacts can significantly decrease the quality of computed tomography (CT) images. This occurs as X-rays penetrate implanted metals, causing severe attenuation and resulting in metal artifacts in the CT images. This degradation in image quality can hinder subsequent clinical diagnosis and treatment planning. Beam hardening artifacts are often manifested as severe strip artifacts in the image domain, affecting the overall quality of the reconstructed CT image. In the sinogram domain, metal is typically located in specific areas, and image processing in these regions can preserve image information in other areas, making the model more robust. To address this issue, we propose a region-based correction of beam hardening artifacts in the sinogram domain using deep learning. METHODS: We present a model composed of three modules: (a) a Sinogram Metal Segmentation Network (Seg-Net), (b) a Sinogram Enhancement Network (Sino-Net), and (c) a Fusion Module. The model starts by using the Attention U-Net network to segment the metal regions in the sinogram. The segmented metal regions are then interpolated to obtain a sinogram image free of metal. The Sino-Net is then applied to compensate for the loss of organizational and artifact information in the metal regions. The corrected metal sinogram and the interpolated metal-free sinogram are then used to reconstruct the metal CT and metal-free CT images, respectively. Finally, the Fusion Module combines the two CT images to produce the result. RESULTS: Our proposed method shows strong performance in both qualitative and quantitative evaluations. The peak signal-to-noise ratio (PSNR) of the CT image before and after correction was 18.22 and 30.32, respectively. The structural similarity index measure (SSIM) improved from 0.75 to 0.99, and the weighted peak signal-to-noise ratio (WPSNR) increased from 21.69 to 35.68. CONCLUSIONS: Our proposed method demonstrates the reliability of high-accuracy correction of beam hardening artifacts.
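The interpolation step that produces the metal-free sinogram can be sketched independently of Seg-Net/Sino-Net: for each projection angle, the masked metal trace is filled by linear interpolation from its neighbors. A minimal sketch under that reading:

```python
import numpy as np

def inpaint_metal_trace(sinogram, metal_mask):
    """Fill the metal trace of each sinogram row by linear interpolation."""
    out = sinogram.copy()
    cols = np.arange(sinogram.shape[1])
    for i, mask in enumerate(metal_mask):       # one row per projection angle
        if mask.any() and (~mask).any():
            out[i, mask] = np.interp(cols[mask], cols[~mask], sinogram[i, ~mask])
    return out
```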


Subject(s)
Artifacts; Deep Learning; Reproducibility of Results; Tomography, X-Ray Computed/methods; Metals; Image Processing, Computer-Assisted/methods; Phantoms, Imaging; Algorithms
16.
Bioengineering (Basel) ; 10(2)2023 Jan 21.
Article in English | MEDLINE | ID: mdl-36829638

ABSTRACT

Two-dimensional (2D)/three-dimensional (3D) registration is critical in clinical applications. However, existing methods suffer from long alignment times and high radiation doses. In this paper, a non-rigid 2D/3D registration method based on deep learning with orthogonal-angle projections is proposed. The method can quickly achieve alignment using only two orthogonal-angle projections. We tested it on lungs (with and without tumors) and phantom data. The results show that the Dice and normalized cross-correlation scores are greater than 0.97 and 0.92, respectively, and the registration time is less than 1.2 seconds. In addition, the proposed model showed the ability to track lung tumors, highlighting its clinical potential.

17.
J Digit Imaging ; 36(3): 923-931, 2023 06.
Article in English | MEDLINE | ID: mdl-36717520

ABSTRACT

The aim of this study is to evaluate a regional deformable model based on deep unsupervised learning for automatic contour propagation in breast cone-beam computed tomography-guided adaptive radiation therapy. A deep unsupervised learning model was introduced to map the breast tumor bed, clinical target volume, heart, left lung, right lung, and spinal cord from planning computed tomography to cone-beam CT. To improve on the traditional image registration method's performance, we used a regional deformable framework based on narrow-band mapping, which can mitigate the effect of image artifacts on the cone-beam CT. We retrospectively selected 373 anonymized cone-beam CT volumes from 111 patients with breast cancer, divided into three sets: 311/20/42 volumes for training, validation, and testing, respectively. Manual contours served as the reference for the testing set, against which the model predictions were compared. The mean Dice between the manual reference segmentations and the model-predicted segmentations for breast tumor bed, clinical target volume, heart, left lung, right lung, and spinal cord were 0.78 ± 0.09, 0.90 ± 0.03, 0.88 ± 0.04, 0.94 ± 0.03, 0.95 ± 0.02, and 0.77 ± 0.07, respectively. The results demonstrate good agreement between the reference and the proposed contours. The proposed deep learning-based regional deformable model can automatically propagate contours for breast cancer adaptive radiotherapy; deep learning in contour propagation is promising, but further investigation is warranted.


Subject(s)
Breast Neoplasms; Unsupervised Machine Learning; Humans; Female; Retrospective Studies; Algorithms; Radiotherapy Planning, Computer-Assisted/methods; Cone-Beam Computed Tomography/methods; Breast Neoplasms/diagnostic imaging; Breast Neoplasms/radiotherapy; Image Processing, Computer-Assisted/methods
18.
J Immunother Cancer ; 11(11)2023 11 21.
Article in English | MEDLINE | ID: mdl-38179695

ABSTRACT

BACKGROUND: Although immune checkpoint inhibitors have provided remarkable benefits in gastric cancer (GC), prediction of treatment response and prognosis remains unsatisfactory, making the identification of biomarkers desirable. The aim of this study was to develop and validate a CT imaging biomarker to predict the immunotherapy response in patients with GC and to investigate the associated immune infiltration patterns. METHODS: This retrospective study included 294 GC patients who received anti-PD-1/PD-L1 immunotherapy at three independent medical centers between January 2017 and April 2022. A radiomics score (RS) was developed from intratumoral and peritumoral features on pretreatment CT images to predict immunotherapy-related progression-free survival (irPFS). The performance of the RS was evaluated by the area under the time-dependent receiver operating characteristic curve (AUC). Multivariable Cox regression analysis was performed to construct a predictive nomogram for irPFS, whose performance was assessed with the C-index. Bulk RNA sequencing of tumors from 42 patients in The Cancer Genome Atlas was used to investigate the RS-associated immune infiltration patterns. RESULTS: Overall, 89 of 294 patients (median age, 57 years (IQR 48-66 years); 171 males) had an objective response to immunotherapy. The RS included 13 CT features and yielded AUCs for 12-month irPFS of 0.787, 0.810, and 0.785 in the training, internal validation, and external validation 1 cohorts, respectively, and an AUC for 24-month irPFS of 0.805 in the external validation 2 cohort. Patients with low RS had longer irPFS in each cohort (p<0.05). Multivariable Cox regression analyses showed that the RS is an independent prognostic factor for irPFS. The nomogram integrating the RS and clinical characteristics showed improved performance in predicting irPFS, with a C-index of 0.687-0.778 in the training and validation cohorts. The CT imaging biomarker was associated with M1 macrophage infiltration. CONCLUSION: The findings of this prognostic study suggest that this non-invasive CT imaging biomarker can effectively predict immunotherapy outcomes in patients with GC and is associated with innate immune signaling, making it a potential tool for individual treatment decisions.


Subject(s)
Immunotherapy; Stomach Neoplasms; Humans; Male; Middle Aged; Biomarkers; Retrospective Studies; Stomach Neoplasms/diagnostic imaging; Stomach Neoplasms/drug therapy; Tomography, X-Ray Computed; Female; Aged
19.
Bioengineering (Basel) ; 9(12)2022 Dec 14.
Article in English | MEDLINE | ID: mdl-36551010

ABSTRACT

Automatic pain estimation plays an important role in medicine and health. In previous studies, the entire image frame was usually fed directly into the model, allowing background differences to negatively affect the results. To tackle this issue, we propose a parallel-CNN framework with regional attention for automatic pain intensity estimation at the frame level. This modified convolutional neural network structure incorporates BlurPool to enhance translation invariance during network learning. The improved networks can focus on learning core regions while supplementing global information, thereby obtaining parallel feature information. The core regions are determined mainly by the tradeoff between the weights of the channel attention modules and the spatial attention modules, while the background information of the non-core regions is shielded by the DropBlock algorithm. These steps enable the model to learn facial pain features adaptively, rather than being limited to a single image pattern. Our proposed model outperforms many state-of-the-art methods on the RMSE and PCC metrics when evaluated on the diverse pain levels of over 12,000 images provided by the publicly available UNBC dataset, reaching an accuracy of 95.11%. The experimental results show that the proposed method is highly efficient at extracting facial pain features and predicts pain levels with high accuracy.
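A hedged sketch of the BlurPool idea referenced above (anti-aliased downsampling: blur with a fixed low-pass kernel before striding). The 3-tap binomial kernel is the common choice and an assumption here, not necessarily the paper's configuration:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BlurPool2d(nn.Module):
    """Blur with a fixed binomial kernel, then subsample with a stride."""
    def __init__(self, channels, stride=2):
        super().__init__()
        k = torch.tensor([1.0, 2.0, 1.0])
        kernel = (k[:, None] * k[None, :]) / 16.0          # 3x3 low-pass filter
        self.register_buffer("kernel", kernel[None, None].repeat(channels, 1, 1, 1))
        self.stride, self.channels = stride, channels

    def forward(self, x):
        x = F.pad(x, (1, 1, 1, 1), mode="reflect")
        return F.conv2d(x, self.kernel, stride=self.stride, groups=self.channels)

pool = BlurPool2d(16)
y = pool(torch.randn(1, 16, 64, 64))                       # -> (1, 16, 32, 32)
```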

20.
Nat Commun ; 13(1): 5095, 2022 08 30.
Article in English | MEDLINE | ID: mdl-36042205

ABSTRACT

The tumor immune microenvironment (TIME) is associated with tumor prognosis and immunotherapy response. Here we develop and validate a CT-based radiomics score (RS) using 2272 gastric cancer (GC) patients to investigate the relationship between this radiomics imaging biomarker and the neutrophil-to-lymphocyte ratio (NLR) in the TIME, including its correlation with prognosis and immunotherapy response in advanced GC. The RS achieves an AUC of 0.795-0.861 in predicting the NLR in the TIME. Notably, the radiomics imaging biomarker is indistinguishable from the IHC-derived NLR status in predicting DFS and OS in each cohort (HR range: 1.694-3.394, P < 0.001). We find that objective response rates among anti-PD-1 immunotherapy patients are significantly higher in the low-RS groups (60.9% and 42.9%) than in the high-RS groups (8.1% and 14.3%). The radiomics imaging biomarker is a noninvasive method to evaluate the TIME and may correlate with prognosis and anti-PD-1 immunotherapy response in GC patients.


Subject(s)
Stomach Neoplasms; Biomarkers; Humans; Immunotherapy; Lymphocytes/pathology; Neutrophils/pathology; Stomach Neoplasms/diagnostic imaging; Stomach Neoplasms/pathology; Stomach Neoplasms/therapy; Tumor Microenvironment