Results 1 - 5 of 5
1.
Diagnostics (Basel) ; 14(18)2024 Sep 23.
Article in English | MEDLINE | ID: mdl-39335776

ABSTRACT

Objectives: Early detection and accurate diagnosis of lymph node metastasis (LNM) in head and neck cancer (HNC) are crucial for enhancing patient prognosis and survival rates. Current imaging methods have limitations, necessitating the evaluation of new diagnostic techniques. This study investigates the potential of combining pre-operative CT and intra-operative fluorescence lifetime imaging (FLIm) to enhance LNM prediction in HNC using primary tumor signatures. Methods: CT and FLIm data were collected from 46 HNC patients. A total of 42 FLIm features and 924 CT radiomic features were extracted from the primary tumor site and fused. A support vector machine (SVM) model with a radial basis function kernel was trained to predict LNM. Hyperparameter tuning was conducted using 10-fold nested cross-validation. Prediction performance was evaluated using balanced accuracy (bACC) and the area under the ROC curve (AUC). Results: The model leveraging combined CT and FLIm features demonstrated improved testing accuracy (bACC: 0.71, AUC: 0.79) over the CT-only (bACC: 0.58, AUC: 0.67) and FLIm-only (bACC: 0.61, AUC: 0.72) models. Feature selection showed that a subset of 10 FLIm and 10 CT features provided the best predictive capability. Feature contribution analysis identified high-pass and low-pass wavelet-filtered CT images, as well as Laguerre coefficients from FLIm, as key predictors. Conclusions: Combining CT and FLIm of the primary tumor improves the prediction of HNC LNM compared to either modality alone. Significance: This study underscores the potential of combining pre-operative radiomics with intra-operative FLIm for more accurate LNM prediction in HNC, offering promise for enhancing patient outcomes.
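As an illustration of the modeling setup described in this abstract (feature-level fusion of FLIm and CT radiomics, an RBF-kernel SVM, nested 10-fold cross-validation, bACC/AUC scoring), a minimal Python/scikit-learn sketch follows. The placeholder arrays, random labels, and hyperparameter grid are assumptions for illustration only and do not reproduce the authors' implementation.

# Minimal sketch of the abstract's modeling setup; placeholder data only.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
X_flim = rng.normal(size=(46, 42))   # 42 FLIm features per patient (placeholder values)
X_ct = rng.normal(size=(46, 924))    # 924 CT radiomic features per patient (placeholder values)
y = rng.integers(0, 2, size=46)      # LNM label per patient (placeholder values)

X = np.hstack([X_flim, X_ct])        # simple feature-level fusion

pipe = Pipeline([
    ("scale", StandardScaler()),
    ("svm", SVC(kernel="rbf", probability=True)),
])
param_grid = {"svm__C": [0.1, 1, 10, 100], "svm__gamma": ["scale", 0.01, 0.001]}  # illustrative grid

inner = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
outer = StratifiedKFold(n_splits=10, shuffle=True, random_state=1)

# Nested CV: the inner loop tunes hyperparameters, the outer loop estimates generalization.
search = GridSearchCV(pipe, param_grid, cv=inner, scoring="balanced_accuracy")
bacc = cross_val_score(search, X, y, cv=outer, scoring="balanced_accuracy")
auc = cross_val_score(search, X, y, cv=outer, scoring="roc_auc")
print(f"bACC {bacc.mean():.2f}, AUC {auc.mean():.2f}")

Replacing the placeholder arrays with per-patient feature matrices and labels reproduces the general protocol; feature selection, as described in the abstract, would be an additional pipeline step.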

2.
Med Phys ; 49(5): 3263-3277, 2022 May.
Article in English | MEDLINE | ID: mdl-35229904

ABSTRACT

PURPOSE: Image guidance is used to improve the accuracy of radiation therapy delivery but results in increased dose to patients. This is of particular concern in children, who need to be treated per Pediatric Image Gently Protocols due to the long-term risks of radiation exposure. The purpose of this study is to design a deep neural network architecture and loss function for improving soft-tissue contrast and preserving small anatomical features in ultra-low-dose cone-beam CT (CBCT) imaging of head and neck cancer (HNC). METHODS: A 2D compound U-Net architecture (modified U-Net++) with different depths was proposed to enhance the network's ability to capture small-volume structures. A mask-weighted loss function (Mask-Loss) was applied to enhance soft-tissue contrast. Fifty-five paired CBCT and CT images of HNC patients were retrospectively collected for network training and testing. The enhanced CBCT images produced in the present study were evaluated with quantitative metrics including mean absolute error (MAE), signal-to-noise ratio (SNR), and structural similarity (SSIM), and compared with those from previously proposed network architectures (U-Net and wide U-Net) trained with an MAE loss function. A visual assessment of ten selected structures in the enhanced CBCT images of each patient was performed to evaluate image quality improvement, blindly scored by an experienced radiation oncologist specializing in HN cancer. RESULTS: All the enhanced CBCT images showed reduced artifactual distortion and image noise. U-Net++ outperformed the U-Net and wide U-Net in terms of MAE, contrast near structure boundaries, and small structures. The proposed Mask-Loss improved image contrast and the accuracy of the soft-tissue regions. The enhanced CBCT images predicted with U-Net++ and Mask-Loss demonstrated improvement over the U-Net in terms of average MAE (52.41 vs 42.85 HU), SNR (14.14 vs 15.07 dB), and SSIM (0.84 vs 0.87) (p < 0.01 in all paired t-tests). The visual assessment showed that the proposed U-Net++ and Mask-Loss significantly improved the original CBCTs (p < 0.01) compared to the U-Net and MAE loss. CONCLUSIONS: The proposed network architecture and loss function effectively improved image quality in terms of soft-tissue contrast, organ boundaries, and small-structure preservation for ultra-low-dose CBCT following the Image Gently Protocol. This method has the potential to provide sufficient anatomical representation on the enhanced CBCT images for accurate treatment delivery and potentially fast online adaptive re-planning for HN cancer patients.
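The mask-weighted loss described above can be sketched as a weighted L1 (MAE) term in PyTorch. The specific weighting scheme below (a hypothetical factor w_in applied inside a soft-tissue mask) is an illustrative assumption, not the paper's exact formulation.

# Rough sketch of a mask-weighted L1 (MAE) loss; weighting factors are assumed for illustration.
import torch

def mask_weighted_l1(pred, target, mask, w_in=5.0, w_out=1.0):
    """pred/target: (N, 1, H, W) enhanced CBCT and reference CT; mask: (N, 1, H, W) in {0, 1}."""
    weights = mask * w_in + (1.0 - mask) * w_out   # up-weight voxels inside the soft-tissue mask
    return (weights * (pred - target).abs()).sum() / weights.sum()

# Hypothetical usage inside a training step:
#   loss = mask_weighted_l1(unetpp(cbct_batch), ct_batch, soft_tissue_mask)
#   loss.backward()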


Subjects
Deep Learning, Head and Neck Neoplasms, Child, Cone-Beam Computed Tomography/methods, Head and Neck Neoplasms/diagnostic imaging, Head and Neck Neoplasms/radiotherapy, Humans, Image Processing, Computer-Assisted/methods, Neural Networks, Computer, Radiotherapy Planning, Computer-Assisted/methods, Retrospective Studies
3.
Phys Med Biol ; 65(21): 215020, 2020 11 05.
Article in English | MEDLINE | ID: mdl-32707565

ABSTRACT

Reducing the radiation dose of x-ray computed tomography (CT), and thereby the potential risk to patients, is desirable in CT imaging. Deep neural networks (DNNs) have been proposed to reduce noise in low-dose CT (LdCT) images and have shown promising results. However, most existing DNN-based methods require training a neural network using high-quality CT images as the reference. A lack of high-quality reference data has therefore been the bottleneck in current DNN-based methods. Recently, a noise-to-noise (Noise2Noise) training method was proposed to train a denoising neural network with only noisy images. It has also been applied to LdCT data in both the count domain and the image domain. However, the method still requires a separately acquired, independent noisy reference image to supervise the training procedure. To address this limitation, we propose a novel method to generate both training inputs and training labels from existing CT scans, which does not require any additional high-dose CT images or repeated scans. Existing large noisy datasets can therefore be fully exploited for training a denoising neural network. Our experimental results show that the trained networks can reduce noise in existing CT images and hence improve image quality for clinical diagnosis.
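A generic Noise2Noise-style training step is sketched below to illustrate the supervision principle discussed in this abstract, namely that both the input and the target are noisy images rather than a clean reference. How the paired noisy realizations are derived from a single existing CT scan is the paper's contribution and is not reproduced here; make_pair is a hypothetical placeholder for that step.

# Illustrative Noise2Noise-style training step in PyTorch (assumptions noted above).
import torch
import torch.nn as nn

def train_step(denoiser: nn.Module, optimizer, ct_slice: torch.Tensor, make_pair):
    # make_pair is a hypothetical function returning two noisy realizations of the same anatomy.
    noisy_input, noisy_target = make_pair(ct_slice)
    optimizer.zero_grad()
    # The denoiser is supervised by another noisy image, not by a clean high-dose reference.
    loss = nn.functional.mse_loss(denoiser(noisy_input), noisy_target)
    loss.backward()
    optimizer.step()
    return loss.item()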


Subjects
Deep Learning, Image Processing, Computer-Assisted/methods, Signal-To-Noise Ratio, Tomography, X-Ray Computed, Humans
4.
Phys Med Biol ; 65(3): 035003, 2020 01 27.
Article in English | MEDLINE | ID: mdl-31842014

ABSTRACT

To improve the image quality and CT number accuracy of fast-scan low-dose cone-beam computed tomography (CBCT) through a deep-learning convolutional neural network (CNN) methodology for head-and-neck (HN) radiotherapy. Fifty-five paired CBCT and CT images from HN patients were retrospectively analysed. Among them, 15 patients underwent adaptive replanning during treatment and thus had same-day CT/CBCT pairs. The remaining 40 patients (post-operative) had paired planning CT and first-fraction CBCT images with minimal anatomic changes. A 2D U-Net architecture with 27 layers across 5 depths was built for the CNN. CNN training was performed using data from 40 post-operative HN patients with 2080 paired CT/CBCT slices. The validation and test datasets comprised 5 same-day datasets with 260 slice pairs and 10 same-day datasets with 520 slice pairs, respectively. To examine the impact of training dataset selection and of network performance as a function of training data size, additional networks were trained using 30, 40, and 50 datasets. The image quality of the enhanced CBCT images was quantitatively compared against the CT images using the mean absolute error (MAE) of Hounsfield units (HU), signal-to-noise ratio (SNR), and structural similarity (SSIM). Enhanced CBCT images showed reduced artifact distortion and improved soft-tissue contrast. Networks trained with 40 datasets had imaging performance comparable to those trained with 50 datasets and outperformed those trained with 30 datasets. Comparison of CBCT and enhanced CBCT images demonstrated improvement in average MAE from 172.73 to 49.28 HU, SNR from 8.27 to 14.25 dB, and SSIM from 0.42 to 0.85. The image processing time was 2 s per patient using an NVIDIA GeForce GTX 1080 Ti GPU. The proposed deep-learning methodology was fast and effective for image quality enhancement of fast-scan low-dose CBCT. This method has the potential to support fast online adaptive re-planning for HN cancer patients.
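The reported image-quality metrics (MAE in HU, SNR, SSIM) can be computed for an enhanced CBCT slice against its paired CT slice roughly as follows, assuming NumPy/scikit-image and HU-valued 2D arrays. The SNR definition used here (reference power over error power, in dB) is one common convention and may differ from the paper's.

# Sketch of the reported image-quality metrics; conventions are assumptions, not the paper's code.
import numpy as np
from skimage.metrics import structural_similarity

def image_quality(enhanced_cbct: np.ndarray, ct: np.ndarray):
    err = enhanced_cbct - ct
    mae_hu = np.abs(err).mean()                                   # mean absolute error in HU
    snr_db = 10.0 * np.log10((ct ** 2).mean() / (err ** 2).mean())  # one common SNR convention, in dB
    ssim = structural_similarity(enhanced_cbct, ct, data_range=ct.max() - ct.min())
    return mae_hu, snr_db, ssim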


Subjects
Cone-Beam Computed Tomography/methods, Head and Neck Neoplasms/diagnostic imaging, Head and Neck Neoplasms/radiotherapy, Image Enhancement/methods, Image Processing, Computer-Assisted/methods, Neural Networks, Computer, Humans, Radiotherapy Planning, Computer-Assisted/methods, Retrospective Studies, Signal-To-Noise Ratio
5.
Front Artif Intell ; 3: 614384, 2020.
Article in English | MEDLINE | ID: mdl-33733226

ABSTRACT

Purpose: To assess image quality and uncertainty in organ-at-risk segmentation on cone-beam computed tomography (CBCT) enhanced by a deep-learning convolutional neural network (DCNN) for head and neck cancer. Methods: An in-house DCNN was trained using forty post-operative head and neck cancer patients with their planning CT and first-fraction CBCT images. An additional fifteen patients with a repeat simulation CT (rCT) and a CBCT scan taken on the same day (oCBCT) were used for validation and clinical utility assessment. Enhanced CBCT (eCBCT) images were generated from the oCBCT using the in-house DCNN. Quantitative image quality improvement was evaluated using HU accuracy, signal-to-noise ratio (SNR), and the structural similarity index measure (SSIM). Organs-at-risk (OARs) were delineated on oCBCT and eCBCT and compared with manual structures on the same-day rCT. Contour accuracy was assessed using the Dice similarity coefficient (DSC), Hausdorff distance (HD), and center-of-mass (COM) displacement. A qualitative assessment of users' confidence in manually segmenting OARs was performed on both eCBCT and oCBCT by visual scoring. Results: eCBCT organs-at-risk showed significant improvements in mean pixel values, SNR (p < 0.05), and SSIM (p < 0.05) compared to oCBCT images. The mean DSC of eCBCT-to-rCT (0.83 ± 0.06) was higher than that of oCBCT-to-rCT (0.70 ± 0.13). Improvement was also observed in the mean HD of eCBCT-to-rCT (0.42 ± 0.13 cm) vs. oCBCT-to-rCT (0.72 ± 0.25 cm). Mean COM displacement was smaller for eCBCT-to-rCT (0.28 ± 0.19 cm) compared to oCBCT-to-rCT (0.44 ± 0.22 cm). Visual scores showed that OAR segmentation was easier on eCBCT than on oCBCT images. Conclusion: The DCNN improved fast-scan low-dose CBCT in terms of HU accuracy, image contrast, and OAR delineation accuracy, demonstrating the potential of eCBCT for adaptive radiotherapy.
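A minimal sketch of the contour-accuracy metrics named above (DSC, HD, COM displacement) between two binary OAR masks is given below, assuming NumPy/SciPy and isotropic voxel spacing supplied in cm; this is a generic implementation, not the study's evaluation code.

# Generic contour-accuracy metrics between two binary masks; spacing value is a hypothetical example.
import numpy as np
from scipy.ndimage import center_of_mass
from scipy.spatial.distance import directed_hausdorff

def contour_metrics(mask_a: np.ndarray, mask_b: np.ndarray, spacing_cm: float = 0.1):
    # Dice similarity coefficient
    inter = np.logical_and(mask_a, mask_b).sum()
    dsc = 2.0 * inter / (mask_a.sum() + mask_b.sum())
    # Symmetric Hausdorff distance between the voxel point sets, in cm
    pts_a = np.argwhere(mask_a) * spacing_cm
    pts_b = np.argwhere(mask_b) * spacing_cm
    hd = max(directed_hausdorff(pts_a, pts_b)[0], directed_hausdorff(pts_b, pts_a)[0])
    # Center-of-mass displacement, in cm
    com_shift = np.linalg.norm(
        (np.array(center_of_mass(mask_a)) - np.array(center_of_mass(mask_b))) * spacing_cm
    )
    return dsc, hd, com_shift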
