Results 1-20 of 26
1.
Meas Sci Technol ; 34(5): 054002, 2023 May 01.
Article in English | MEDLINE | ID: mdl-36743834

ABSTRACT

Accurate tracking of anatomic landmarks is critical for motion management in liver radiation therapy. Ultrasound (US) is a safe, low-cost technology that is broadly available and offers real-time imaging capability. This study proposed a deep learning-based tracking method for US image-guided radiation therapy. The proposed cascade deep learning model is composed of an attention network, a mask region-based convolutional neural network (mask R-CNN), and a long short-term memory (LSTM) network. The attention network learns a mapping from a US image to a suspected area of landmark motion in order to reduce the search region. The mask R-CNN then produces multiple region-of-interest proposals in the reduced region and identifies the proposed landmark via three network heads: bounding box regression, proposal classification, and landmark segmentation. The LSTM network models the temporal relationship among successive image frames for bounding box regression and proposal classification. To consolidate the final proposal, a selection method is designed according to the similarities between sequential frames. The proposed method was tested on the liver US tracking datasets used in the Medical Image Computing and Computer Assisted Intervention (MICCAI) 2015 challenges, where the landmarks were annotated by three experienced observers to obtain their mean positions. Five-fold cross-validation on the 24 given US sequences with ground truths shows that the mean tracking error for all landmarks is 0.65 ± 0.56 mm, and the errors of all landmarks are within 2 mm. We further tested the proposed model on 69 landmarks from the testing dataset whose image patterns are similar to those in the training data, resulting in a mean tracking error of 0.94 ± 0.83 mm. The proposed deep-learning model was implemented on a graphics processing unit (GPU), tracking 47-81 frames per second. Our experimental results demonstrate the feasibility and accuracy of the proposed method in tracking liver anatomic landmarks using US images, providing a potential solution for real-time liver tracking for active motion management during radiation therapy.
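The cascade described above chains three generic building blocks. As a rough illustration of how the pieces fit together, the following PyTorch sketch wires an attention-based region reducer, a stand-in detection backbone (a full implementation would use an actual mask R-CNN), and an LSTM temporal head; all layer sizes, module names, and the toy input are assumptions for illustration, not the authors' code.

```python
import torch
import torch.nn as nn

class CascadeTracker(nn.Module):
    """Minimal sketch of the attention -> detection -> LSTM cascade."""

    def __init__(self, hidden=128):
        super().__init__()
        # Stage 1: attention network maps a full US frame to a saliency
        # map that suppresses regions far from the suspected landmark.
        self.attention = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid())
        # Stage 2: stand-in feature extractor; a real implementation
        # would use a mask R-CNN with box/class/mask heads here.
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        # Stage 3: LSTM models temporal context across frames for
        # bounding-box regression and proposal classification.
        self.lstm = nn.LSTM(32, hidden, batch_first=True)
        self.box_head = nn.Linear(hidden, 4)   # (x, y, w, h)
        self.cls_head = nn.Linear(hidden, 2)   # landmark vs. background

    def forward(self, frames):                 # frames: (B, T, 1, H, W)
        b, t = frames.shape[:2]
        att = self.attention(frames.flatten(0, 1))
        roi = frames * att.view(b, t, 1, *frames.shape[-2:])
        feats = self.backbone(roi.flatten(0, 1)).view(b, t, -1)
        h, _ = self.lstm(feats)
        return self.box_head(h), self.cls_head(h)

boxes, scores = CascadeTracker()(torch.randn(2, 8, 1, 64, 64))
print(boxes.shape, scores.shape)  # torch.Size([2, 8, 4]) torch.Size([2, 8, 2])
```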

2.
Photoacoustics ; 34: 100575, 2023 Dec.
Article in English | MEDLINE | ID: mdl-38174105

ABSTRACT

Accurate needle guidance is crucial for safe and effective clinical diagnosis and treatment procedures. Conventional ultrasound (US)-guided needle insertion often encounters challenges in consistently and precisely visualizing the needle, necessitating the development of reliable methods to track it. As a powerful tool in image processing, deep learning has shown promise for enhancing needle visibility in US images, although its dependence on manual annotation or simulated data as ground truth can introduce bias or hinder generalization to real US images. Photoacoustic (PA) imaging has demonstrated its capability for high-contrast needle visualization. In this study, we explore the potential of PA imaging as a reliable ground truth for deep learning network training without the need for expert annotation. Our network (UIU-Net), trained on ex vivo tissue image datasets, has shown remarkable precision in localizing needles within US images. The evaluation of needle segmentation performance extends across previously unseen ex vivo data and in vivo human data (collected from an open-source data repository). Specifically, for human data, the Modified Hausdorff Distance (MHD) value is approximately 3.73 and the targeting error is around 2.03, indicating strong similarity and small orientation deviation between the predicted and actual needle locations. A key advantage of our method is its applicability beyond US images captured from specific imaging systems, extending to images from other US imaging systems.
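One way to realize annotation-free supervision of this kind is to derive binary needle labels directly from the co-registered, high-contrast PA frames and train a standard segmentation loss against them. The sketch below is a hypothetical illustration of that idea; the thresholding rule, quantile, and loss choice are assumptions, not details taken from the paper.

```python
import torch

def pa_to_mask(pa_img: torch.Tensor, q: float = 0.99) -> torch.Tensor:
    """Hypothetical rule: threshold a high-contrast PA frame at a high
    intensity quantile to obtain a binary needle label, replacing
    manual annotation (the thresholding choice is an assumption)."""
    return (pa_img >= torch.quantile(pa_img.flatten(), q)).float()

def dice_loss(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6):
    """Soft-Dice loss between a sigmoid prediction and the PA-derived
    mask; one plausible objective for training the segmentation net."""
    inter = (pred * target).sum()
    return 1 - (2 * inter + eps) / (pred.sum() + target.sum() + eps)

us_pred = torch.rand(1, 1, 128, 128)          # stand-in network output
pa_mask = pa_to_mask(torch.rand(1, 1, 128, 128))
print(dice_loss(us_pred, pa_mask).item())
```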

3.
Med Phys ; 49(12): 7545-7554, 2022 Dec.
Article in English | MEDLINE | ID: mdl-35869866

ABSTRACT

PURPOSE: Quality assurance (QA) CT scans are usually acquired during cancer radiotherapy to assess for anatomical changes, which may cause an unacceptable dose deviation and therefore warrant a replan. Accurate and rapid deformable image registration (DIR) is needed to support contour propagation from the planning CT (pCT) to the QA CT to facilitate dose volume histogram (DVH) review. Further, the generated deformation maps can be used to track anatomical variations throughout the treatment course and to calculate the corresponding accumulated dose from one or more treatment plans. METHODS: In this study, we aim to develop a deep learning (DL)-based method for automatic deformable registration to align the pCT and the QA CT. Our proposed method, named the dual-feasible framework, was implemented by a mutual network that functions as both a forward module and a backward module. The mutual network was trained to predict two deformation vector fields (DVFs) simultaneously, which were then used to register the pCT and QA CT in both directions. A novel dual-feasible loss was proposed to train the mutual network. The dual-feasible framework provides additional DVF regularization during network training, which preserves topology and reduces folding problems. We conducted experiments on 65 head-and-neck cancer patients (228 CTs in total), each with 1 pCT and 2-6 QA CTs. For evaluation, we calculated the mean absolute error (MAE), peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and target registration error (TRE) between the deformed and target images, as well as the Jacobian determinant of the predicted DVFs. RESULTS: Within the body contour, the mean MAE, PSNR, SSIM, and TRE were 122.7 HU, 21.8 dB, 0.62, and 4.1 mm before registration and 40.6 HU, 30.8 dB, 0.94, and 2.0 mm after registration using the proposed method. These results demonstrate the feasibility and efficacy of our proposed method for pCT-to-QA-CT DIR. CONCLUSION: In summary, we proposed a DL-based method for automatic DIR to match the pCT to the QA CT. Such a DIR method would not only benefit the current workflow of evaluating DVHs on QA CTs but may also facilitate studies of treatment response assessment and radiomics that depend heavily on accurate localization of tissues across longitudinal images.
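The abstract does not spell out the dual-feasible loss, but a bidirectional objective of this general shape, image similarity in both registration directions plus an inverse-consistency term that discourages folding, is one plausible reading. The 2-D PyTorch sketch below is offered purely as an assumption-labeled illustration, not the authors' formulation.

```python
import torch
import torch.nn.functional as F

def warp(img, dvf):
    """Warp an image with a dense DVF (B, 2, H, W) given in pixels,
    channel 0 = x-displacement, channel 1 = y-displacement."""
    b, _, h, w = img.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), 0).float().unsqueeze(0) + dvf
    grid = torch.stack((2 * grid[:, 0] / (w - 1) - 1,   # normalize to [-1, 1]
                        2 * grid[:, 1] / (h - 1) - 1), -1)
    return F.grid_sample(img, grid, align_corners=True)

def dual_feasible_loss(pct, qact, dvf_fwd, dvf_bwd, lam=0.1):
    """Similarity in both registration directions plus an
    inverse-consistency term that discourages folding."""
    sim = F.l1_loss(warp(pct, dvf_fwd), qact) + F.l1_loss(warp(qact, dvf_bwd), pct)
    inv = (warp(dvf_fwd, dvf_bwd) + dvf_bwd).abs().mean()  # fwd o bwd ~ identity
    return sim + lam * inv

pct, qact = torch.rand(1, 1, 32, 32), torch.rand(1, 1, 32, 32)
zero = torch.zeros(1, 2, 32, 32)                           # toy DVFs
print(dual_feasible_loss(pct, qact, zero, zero).item())
```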


Subject(s)
Algorithms; Head and Neck Neoplasms; Humans; Image Processing, Computer-Assisted/methods; Neural Networks, Computer; Tomography, X-Ray Computed/methods; Radiotherapy Planning, Computer-Assisted/methods
4.
Phys Med Biol ; 67(2)2022 01 21.
Article in English | MEDLINE | ID: mdl-34794138

ABSTRACT

Magnetic resonance imaging (MRI) allows accurate and reliable organ delineation for many disease sites in radiation therapy because it offers superb soft-tissue contrast. Manual organ-at-risk (OAR) delineation, however, is labor-intensive and time-consuming. This study aims to develop a deep-learning-based automated multi-organ segmentation method to reduce this labor and accelerate the treatment planning process for head-and-neck (HN) cancer radiotherapy. A novel regional convolutional neural network (R-CNN) architecture, namely mask scoring R-CNN, was developed in this study. In the proposed model, a deep attention feature pyramid network is used as a backbone to extract coarse features from MRI, followed by feature refinement using R-CNN. The final segmentation is obtained through mask and mask scoring networks that take the refined feature maps as input. By incorporating the mask scoring mechanism into conventional mask supervision, the classification error inherent in the conventional mask R-CNN architecture can be greatly reduced. A cohort of 60 HN cancer patients receiving external beam radiation therapy was used for experimental validation. Five-fold cross-validation was performed to assess the proposed method. The Dice similarity coefficients of brain stem, left/right cochlea, left/right eye, larynx, left/right lens, mandible, optic chiasm, left/right optic nerve, oral cavity, left/right parotid, pharynx, and spinal cord were 0.89 ± 0.06, 0.68 ± 0.14/0.68 ± 0.18, 0.89 ± 0.07/0.89 ± 0.05, 0.90 ± 0.07, 0.67 ± 0.18/0.67 ± 0.10, 0.82 ± 0.10, 0.61 ± 0.14, 0.67 ± 0.11/0.68 ± 0.11, 0.92 ± 0.07, 0.85 ± 0.06/0.86 ± 0.05, 0.80 ± 0.13, and 0.77 ± 0.15, respectively. After model training, all OARs can be segmented within 1 min.
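The distinguishing ingredient, mask scoring, adds a small head that regresses the IoU between each predicted mask and its ground truth, so a mask's final confidence becomes its classification score multiplied by the predicted mask IoU. Below is a minimal sketch of such a head; the shapes and layer sizes are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class MaskScoringHead(nn.Module):
    """Regresses the IoU between a predicted mask and its ground truth
    from the ROI features concatenated with that mask; the final mask
    confidence is classification score x predicted IoU."""

    def __init__(self, feat_ch=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(feat_ch + 1, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, 1), nn.Sigmoid())

    def forward(self, roi_feat, pred_mask):   # (N, 256, 14, 14), (N, 1, 14, 14)
        return self.net(torch.cat((roi_feat, pred_mask), dim=1)).squeeze(1)

head = MaskScoringHead()
iou = head(torch.randn(4, 256, 14, 14), torch.rand(4, 1, 14, 14))
final_score = torch.rand(4) * iou              # rescored mask confidence
print(final_score)
```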


Subject(s)
Head and Neck Neoplasms; Organs at Risk; Head and Neck Neoplasms/diagnostic imaging; Head and Neck Neoplasms/radiotherapy; Humans; Magnetic Resonance Imaging; Neural Networks, Computer; Organs at Risk/diagnostic imaging; Tomography, X-Ray Computed
5.
Med Phys ; 48(12): 7747-7756, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34724712

ABSTRACT

PURPOSE: Ultrasound (US) imaging is an established modality capable of offering video-rate volumetric images without ionizing radiation. It has the potential for intra-fraction motion tracking in radiation therapy. In this study, a deep learning-based method has been developed to tackle the challenges of motion tracking using US imaging. METHODS: We present a Markov-like network, implemented via generative adversarial networks, to extract features from sequential US frames (one tracked frame followed by untracked frames) and thereby estimate a set of deformation vector fields (DVFs) through the registration of the tracked frame and the untracked frames. The positions of the landmarks in the untracked frames are finally determined by shifting the landmarks in the tracked frame according to the estimated DVFs. The performance of the proposed method was evaluated on the testing dataset by calculating the tracking error (TE) between the predicted and ground-truth landmarks on each frame. RESULTS: The proposed method was evaluated using the MICCAI CLUST 2015 dataset, which was collected using seven US scanners with eight types of transducers, and the Cardiac Acquisitions for Multi-structure Ultrasound Segmentation (CAMUS) dataset, which was acquired using GE Vivid E95 ultrasound scanners. The CLUST dataset contains 63 2D and 22 3D US image sequences from 42 and 18 subjects, respectively, and the CAMUS dataset includes 2D US images from 450 patients. On the CLUST dataset, our proposed method achieved a mean tracking error of 0.70 ± 0.38 mm for the 2D sequences and 1.71 ± 0.84 mm for the 3D sequences using the publicly available annotations. On the CAMUS dataset, a mean tracking error of 0.54 ± 1.24 mm was achieved for the landmarks in the left atrium. CONCLUSIONS: A novel motion tracking algorithm using US images based on modern deep learning techniques has been demonstrated in this study. The proposed method can offer millimeter-level tumor motion prediction in real time, which has the potential to be adopted into routine tumor motion management in radiation therapy.
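The final step, moving landmarks from the tracked frame into each untracked frame with the estimated DVF, amounts to sampling the displacement field at each landmark position and adding it to the coordinates. Below is a simplified 2-D NumPy sketch of that step (not the authors' code; field shape and conventions are assumptions).

```python
import numpy as np

def sample_dvf(dvf: np.ndarray, x: float, y: float) -> np.ndarray:
    """Bilinearly sample a dense DVF (2, H, W) at sub-pixel (x, y)."""
    h, w = dvf.shape[1:]
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    top = (1 - fx) * dvf[:, y0, x0] + fx * dvf[:, y0, x1]
    bot = (1 - fx) * dvf[:, y1, x0] + fx * dvf[:, y1, x1]
    return (1 - fy) * top + fy * bot

def track_landmarks(landmarks: np.ndarray, dvf: np.ndarray) -> np.ndarray:
    """Propagate (x, y) landmarks from the tracked frame into an
    untracked frame by adding the locally interpolated displacement."""
    return np.array([[x, y] + sample_dvf(dvf, x, y) for x, y in landmarks])

dvf = 0.5 * np.random.randn(2, 64, 64)    # toy displacement field (pixels)
pts = np.array([[10.5, 20.2], [33.0, 40.7]])
print(track_landmarks(pts, dvf))
```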


Subject(s)
Deep Learning; Radiotherapy, Image-Guided; Humans; Imaging, Three-Dimensional; Motion; Ultrasonography
6.
Med Phys ; 48(11): 7063-7073, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34609745

ABSTRACT

PURPOSE: The delineation of organs at risk (OARs) is fundamental to cone-beam CT (CBCT)-based adaptive radiotherapy treatment planning, but it is time-consuming, labor-intensive, and subject to interoperator variability. We investigated a deep learning-based rapid multiorgan delineation method for use in CBCT-guided adaptive pancreatic radiotherapy. METHODS: To improve the accuracy of OAR delineation, two innovative solutions were proposed in this study. First, instead of directly segmenting organs on CBCT images, a pretrained cycle-consistent generative adversarial network (cycleGAN) was applied to generate synthetic CT images from CBCT images. Second, an advanced deep learning model called mask-scoring regional convolutional neural network (MS R-CNN) was applied to those synthetic CT images to detect the positions and shapes of multiple organs simultaneously for final segmentation. The OAR contours delineated by the proposed method were validated and compared with expert-drawn contours for geometric agreement using the Dice similarity coefficient (DSC), 95th percentile Hausdorff distance (HD95), mean surface distance (MSD), and residual mean square distance (RMS). RESULTS: Across eight abdominal OARs, including the duodenum, large bowel, small bowel, left and right kidneys, liver, spinal cord, and stomach, the geometric comparisons between automated and expert contours are as follows: 0.92 (0.89-0.97) mean DSC, 2.90 mm (1.63-4.19 mm) mean HD95, 0.89 mm (0.61-1.36 mm) mean MSD, and 1.43 mm (0.90-2.10 mm) mean RMS. Compared with competing methods, our proposed method achieved significant improvements (p < 0.05) in all metrics for all eight organs. Once the model was trained, the contours of the eight OARs could be obtained on the order of seconds. CONCLUSIONS: We demonstrated the feasibility of a synthetic CT-aided deep learning framework for automated delineation of multiple OARs on CBCT. The proposed method could be implemented in the setting of pancreatic adaptive radiotherapy to rapidly contour OARs with high accuracy.
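The first stage relies on the standard unpaired cycle-consistent objective: two generators map CBCT to synthetic CT and back, and cycle terms force each round trip to reproduce the input. A condensed sketch of the generator-side losses follows; the least-squares adversarial form and the weighting follow common cycleGAN practice and are assumptions, not details from the paper.

```python
import torch
import torch.nn as nn

def cyclegan_generator_loss(G, F_, D_ct, D_cbct, cbct, ct, lam=10.0):
    """Generator-side cycleGAN objective. G: CBCT -> sCT, F_: CT -> sCBCT;
    D_ct / D_cbct judge realism in each image domain."""
    l1, mse = nn.L1Loss(), nn.MSELoss()
    sct, scbct = G(cbct), F_(ct)
    # least-squares adversarial terms (generators try to fool the critics)
    adv = mse(D_ct(sct), torch.ones_like(D_ct(sct))) + \
          mse(D_cbct(scbct), torch.ones_like(D_cbct(scbct)))
    # cycle consistency: CBCT -> sCT -> CBCT and CT -> sCBCT -> CT
    cyc = l1(F_(sct), cbct) + l1(G(scbct), ct)
    return adv + lam * cyc

ident = nn.Identity()                     # stand-in generators for the demo
loss = cyclegan_generator_loss(ident, ident, nn.Conv2d(1, 1, 1), nn.Conv2d(1, 1, 1),
                               torch.rand(2, 1, 64, 64), torch.rand(2, 1, 64, 64))
print(loss.item())
```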


Subject(s)
Pancreas; Radiotherapy Planning, Computer-Assisted; Spiral Cone-Beam Computed Tomography; Cone-Beam Computed Tomography; Humans; Image Processing, Computer-Assisted; Organs at Risk
7.
Med Phys ; 48(10): 5862-5873, 2021 Oct.
Article in English | MEDLINE | ID: mdl-34342878

ABSTRACT

PURPOSE: Auto-segmentation algorithms offer a potential solution to eliminate the labor-intensive, time-consuming, and observer-dependent manual delineation of organs-at-risk (OARs) in radiotherapy treatment planning. This study aimed to develop a deep learning-based automated OAR delineation method to address the remaining challenge of achieving reliably expert-level performance with state-of-the-art auto-delineation algorithms. METHODS: The accuracy of OAR delineation is expected to improve by exploiting the complementary contrasts provided by computed tomography (CT) (bony-structure contrast) and magnetic resonance imaging (MRI) (soft-tissue contrast). Given CT images, synthetic MR images were first generated by a pretrained cycle-consistent generative adversarial network. The features of CT and synthetic MRI were then extracted and combined for the final delineation of organs using a mask scoring regional convolutional neural network. Both in-house and public datasets containing CT scans from head-and-neck (HN) cancer patients were used to quantitatively evaluate the performance of the proposed method against current state-of-the-art algorithms in metrics including the Dice similarity coefficient (DSC), 95th percentile Hausdorff distance (HD95), mean surface distance (MSD), and residual mean square distance (RMS). RESULTS: Across all 18 OARs in our in-house dataset, the proposed method achieved an average DSC, HD95, MSD, and RMS of 0.77 (0.58-0.90), 2.90 mm (1.32-7.63 mm), 0.89 mm (0.42-1.85 mm), and 1.44 mm (0.71-3.15 mm), respectively, outperforming the current state-of-the-art algorithms by 6%, 16%, 25%, and 36%, respectively. On the public datasets, an average DSC of 0.86 (0.73-0.97) was achieved across all nine OARs, 6% better than the competing methods. CONCLUSION: We demonstrated the feasibility of a synthetic MRI-aided deep learning framework for automated delineation of OARs in HN radiotherapy treatment planning. The proposed method could be adopted into routine HN cancer radiotherapy treatment planning to rapidly contour OARs with high accuracy.
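A minimal sketch of the fusion idea follows, combining bony-structure features from CT with soft-tissue features from the synthetic MRI before a detection/segmentation head; the two-encoder-plus-concatenation design and all layer sizes are assumptions for illustration.

```python
import torch
import torch.nn as nn

class DualModalityEncoder(nn.Module):
    """Two small encoders, one per contrast, whose feature maps are
    concatenated and fused before the downstream segmentation head."""

    def __init__(self):
        super().__init__()
        enc = lambda: nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU())
        self.ct_enc, self.smri_enc = enc(), enc()
        self.fuse = nn.Conv2d(64, 64, 1)   # 1x1 conv mixes the two streams

    def forward(self, ct, smri):
        return self.fuse(torch.cat((self.ct_enc(ct), self.smri_enc(smri)), 1))

fused = DualModalityEncoder()(torch.rand(1, 1, 96, 96), torch.rand(1, 1, 96, 96))
print(fused.shape)   # torch.Size([1, 64, 96, 96])
```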


Subject(s)
Head and Neck Neoplasms; Organs at Risk; Head/diagnostic imaging; Head and Neck Neoplasms/diagnostic imaging; Head and Neck Neoplasms/radiotherapy; Humans; Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Neural Networks, Computer; Radiotherapy Planning, Computer-Assisted
8.
Med Phys ; 48(7): 3916-3926, 2021 Jul.
Article in English | MEDLINE | ID: mdl-33993508

ABSTRACT

PURPOSE: Ultrasound (US) imaging has been widely used in diagnosis, image-guided intervention, and therapy, where high-quality three-dimensional (3D) images are highly desired but often must be reconstructed from sparsely acquired two-dimensional (2D) images. This study aims to develop a deep learning-based algorithm to reconstruct high-resolution (HR) 3D US images relying only on the acquired, sparsely distributed 2D images. METHODS: We propose a self-supervised learning framework using cycle-consistent generative adversarial networks (cycleGAN), where two independent cycleGAN models are trained with paired original US images and two sets of low-resolution (LR) US images, respectively. The two sets of LR US images are obtained by down-sampling the original US images along the two in-plane axes, respectively. In US imaging, in-plane spatial resolution is generally much higher than through-plane resolution. By learning the mapping from down-sampled in-plane LR images to original HR US images, the cycleGAN can generate through-plane HR images from the original sparsely distributed 2D images. Finally, HR 3D US images are reconstructed by combining the generated 2D images from the two cycleGAN models. RESULTS: The proposed method was assessed on two different datasets: one comprising automatic breast ultrasound (ABUS) images from 70 breast cancer patients, the other collected from 45 prostate cancer patients. Applying a spatial resolution enhancement factor of 3 to the breast cases, our proposed method achieved a mean absolute error (MAE) of 0.90 ± 0.15, a peak signal-to-noise ratio (PSNR) of 37.88 ± 0.88 dB, and a visual information fidelity (VIF) of 0.69 ± 0.01, significantly outperforming bicubic interpolation. Similar performance was achieved using an enhancement factor of 5 in the breast cases and using enhancement factors of 5 and 10 in the prostate cases. CONCLUSIONS: We have proposed and investigated a new deep learning-based algorithm for reconstructing HR 3D US images from sparsely acquired 2D images. A significant improvement in through-plane resolution was achieved using only the acquired 2D images, without any external atlas images. Its self-supervision capability could accelerate HR US imaging.
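The self-supervision trick is that training data come from the volume itself: decimating a high-resolution in-plane axis mimics the sparse through-plane sampling, yielding an LR domain and an HR domain for each of the two cycleGANs. A small NumPy sketch of that data preparation follows; the axis convention and factor are illustrative assumptions.

```python
import numpy as np

def make_lr_domain(volume: np.ndarray, factor: int, axis: int) -> np.ndarray:
    """Decimate one high-resolution in-plane axis of a US volume by
    `factor`, mimicking sparse through-plane sampling; the decimated
    stack forms the LR domain and the original slices the HR domain
    for one of the two cycleGANs."""
    keep = np.arange(0, volume.shape[axis], factor)
    return np.take(volume, keep, axis=axis)

vol = np.random.rand(128, 128, 16)        # e.g. 16 sparsely acquired slices
lr = make_lr_domain(vol, factor=3, axis=0)
print(lr.shape, vol.shape)                # (43, 128, 16) (128, 128, 16)
```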


Subject(s)
Image Processing, Computer-Assisted; Imaging, Three-Dimensional; Humans; Male; Signal-To-Noise Ratio; Supervised Machine Learning; Ultrasonography
9.
Phys Med Biol ; 66(4): 045021, 2021 02 11.
Article in English | MEDLINE | ID: mdl-33412527

ABSTRACT

Organ-at-risk (OAR) delineation is a key step in cone-beam CT (CBCT)-based adaptive radiotherapy planning, but it can be a time-consuming, labor-intensive process that is subject to variability. We aim to develop a fully automated approach, aided by synthetic MRI, for rapid and accurate CBCT multi-organ contouring in head-and-neck (HN) cancer patients. MRI has superb soft-tissue contrast, while CBCT offers bony-structure contrast. Using the complementary information provided by MRI and CBCT is expected to enable accurate multi-organ segmentation in HN cancer patients. In our proposed method, MR images are first synthesized from CBCT using a pretrained cycle-consistent generative adversarial network. The features of CBCT and synthetic MRI (sMRI) are then extracted using dual pyramid networks for the final delineation of organs. CBCT images and their corresponding manual contours were used as pairs to train and test the proposed model. Quantitative metrics, including the Dice similarity coefficient (DSC), 95th percentile Hausdorff distance (HD95), mean surface distance, and residual mean square distance (RMS), were used to evaluate the proposed method. The proposed method was evaluated on a cohort of 65 HN cancer patients; the CBCT images were collected from patients who received proton therapy. Overall, for the OARs commonly used in treatment planning (brain stem, left/right cochlea, left/right eye, larynx, left/right lens, mandible, optic chiasm, left/right optic nerve, oral cavity, left/right parotid, pharynx, and spinal cord), DSC values of 0.87 ± 0.03, 0.79 ± 0.10/0.79 ± 0.11, 0.89 ± 0.08/0.89 ± 0.07, 0.90 ± 0.08, 0.75 ± 0.06/0.77 ± 0.06, 0.86 ± 0.13, 0.66 ± 0.14, 0.78 ± 0.05/0.77 ± 0.04, 0.96 ± 0.04, 0.89 ± 0.04/0.89 ± 0.04, 0.83 ± 0.02, and 0.84 ± 0.07, respectively, were achieved. This study provides a rapid and accurate OAR auto-delineation approach that can be used for adaptive radiation therapy.


Subject(s)
Cone-Beam Computed Tomography; Head and Neck Neoplasms/radiotherapy; Organs at Risk/radiation effects; Radiotherapy, Image-Guided/methods; Humans; Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Radiotherapy Planning, Computer-Assisted
10.
Phys Med Biol ; 65(21): 215025, 2020 11 27.
Article in English | MEDLINE | ID: mdl-33245059

ABSTRACT

Correcting or reducing the effects of voxel intensity non-uniformity (INU) within a given tissue type is a crucial issue for quantitative magnetic resonance (MR) image analysis in daily clinical practice. Although it has little impact on visual diagnosis, INU can severely degrade the performance of automatic quantitative analyses such as segmentation, registration, feature extraction, and radiomics. In this study, we present an advanced deep learning-based INU correction algorithm called the residual cycle generative adversarial network (res-cycle GAN), which integrates the residual block concept into a cycle-consistent GAN (cycle-GAN). In the cycle-GAN, an inverse transformation between the INU-uncorrected and corrected magnetic resonance imaging (MRI) images was implemented to constrain the model by forcing the calculation of both an INU-corrected MRI and a synthetic corrected MRI. A fully convolutional neural network integrating residual blocks was applied in the generator of the cycle-GAN to enhance the end-to-end transformation from raw MRI to INU-corrected MRI. A cohort of 55 abdominal patients with T1-weighted MR INU images and their corrections obtained with a clinically established and commonly used method, N4ITK, were used as pairs to evaluate the proposed res-cycle GAN-based INU correction algorithm. Quantitative comparisons of the normalized mean absolute error (NMAE), peak signal-to-noise ratio (PSNR), normalized cross-correlation (NCC) indices, and spatial non-uniformity (SNU) were made between the proposed method and other approaches. Our res-cycle GAN-based method achieved an NMAE of 0.011 ± 0.002, a PSNR of 28.0 ± 1.9 dB, an NCC of 0.970 ± 0.017, and an SNU of 0.298 ± 0.085. Our proposed method shows significant improvements (p < 0.05) in NMAE, PSNR, NCC, and SNU over other algorithms, including a conventional GAN and U-net. Once the model is well trained, our approach can automatically generate the corrected MR images in a few minutes, eliminating the need for manual parameter setting.
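The residual blocks inserted into the cycle-GAN generator follow the standard identity-skip pattern: each block learns only a correction to its input, a natural fit for INU removal, where the output is the input minus a smooth bias field. A typical block might look like the following sketch; the channel count and normalization choice are assumptions.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Identity-skip residual block of the kind commonly placed in
    cycle-GAN generators; the block outputs input + learned correction."""

    def __init__(self, ch=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.InstanceNorm2d(ch), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.InstanceNorm2d(ch))

    def forward(self, x):
        return x + self.body(x)           # identity skip connection

x = torch.randn(1, 64, 32, 32)
print(ResidualBlock()(x).shape)           # torch.Size([1, 64, 32, 32])
```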


Subject(s)
Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging; Neural Networks, Computer; Humans; Signal-To-Noise Ratio
11.
Med Phys ; 47(12): 6343-6354, 2020 Dec.
Article in English | MEDLINE | ID: mdl-33053202

ABSTRACT

PURPOSE: Complementary information obtained from multiple tissue contrasts helps physicians assess, diagnose, and plan treatment for a variety of diseases. However, acquiring magnetic resonance images (MRI) in multiple contrasts for every patient using multiple pulse sequences is time-consuming and expensive; medical image synthesis has been demonstrated to be an effective alternative. The purpose of this study is to develop a unified framework for multimodal MR image synthesis. METHODS: A unified generative adversarial network consisting of only a single generator and a single discriminator was developed to learn the mappings among images of four different modalities. The generator takes an image and its modality label as inputs and learns to synthesize the image in the target modality, while the discriminator is trained to distinguish between real and synthesized images and to classify them into their corresponding modalities. The network was trained and tested using multimodal brain MRI consisting of four different contrasts: T1-weighted (T1), T1-weighted and contrast-enhanced (T1c), T2-weighted (T2), and fluid-attenuated inversion recovery (Flair). Quantitative assessment of our proposed method was made by computing the normalized mean absolute error (NMAE), peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), visual information fidelity (VIF), and naturalness image quality evaluator (NIQE). RESULTS: The proposed model was trained and tested on a cohort of 274 glioma patients with well-aligned multi-type MRI scans. After training, tests were conducted using each of T1, T1c, T2, and Flair as the single input modality to generate the respective remaining modalities. Our proposed method shows high accuracy and robustness for image synthesis with any MRI modality available in the database as input. For example, with T1 as the input modality, the NMAEs for the generated T1c, T2, and Flair are 0.034 ± 0.005, 0.041 ± 0.006, and 0.041 ± 0.006, the PSNRs are 32.353 ± 2.525 dB, 30.016 ± 2.577 dB, and 29.091 ± 2.795 dB, the SSIMs are 0.974 ± 0.059, 0.969 ± 0.059, and 0.959 ± 0.059, the VIFs are 0.750 ± 0.087, 0.706 ± 0.097, and 0.654 ± 0.062, and the NIQEs are 1.396 ± 0.401, 1.511 ± 0.460, and 1.259 ± 0.358, respectively. CONCLUSIONS: We proposed a novel multimodal MR image synthesis method based on a unified generative adversarial network. The network takes an image and its modality label as inputs and synthesizes multimodal images in a single forward pass. The results demonstrate that the proposed method is able to accurately synthesize multimodal MR images from a single MR image.
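A single generator can serve all four modalities if it is conditioned on the target-modality label. One common mechanism, StarGAN-style label tiling, is shown below as an assumption rather than the authors' exact design: a one-hot label is broadcast into extra input channels alongside the image.

```python
import torch
import torch.nn as nn

MODALITIES = ["T1", "T1c", "T2", "Flair"]

def with_label(img: torch.Tensor, target: str) -> torch.Tensor:
    """Tile a one-hot target-modality label into extra input channels,
    so one generator can synthesize any of the four contrasts."""
    b, _, h, w = img.shape
    onehot = torch.zeros(b, len(MODALITIES), h, w)
    onehot[:, MODALITIES.index(target)] = 1.0
    return torch.cat((img, onehot), dim=1)     # (B, 1 + 4, H, W)

gen = nn.Conv2d(1 + len(MODALITIES), 1, 3, padding=1)  # stand-in generator
t2_like = gen(with_label(torch.rand(2, 1, 64, 64), "T2"))
print(t2_like.shape)                            # torch.Size([2, 1, 64, 64])
```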


Subject(s)
Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Brain/diagnostic imaging; Humans; Signal-To-Noise Ratio
12.
Med Phys ; 47(9): 4115-4124, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32484573

ABSTRACT

PURPOSE: High-dose-rate (HDR) brachytherapy is an established technique used as a monotherapy option or as a focal boost in conjunction with external beam radiation therapy (EBRT) for treating prostate cancer. Radiation source path reconstruction is a critical procedure in HDR treatment planning. Manually identifying the source path is labor-intensive and time-inefficient. In recent years, magnetic resonance imaging (MRI) has become a valuable imaging modality for image-guided HDR prostate brachytherapy because of its superb soft-tissue contrast for target delineation and normal tissue contouring. The purpose of this study is to investigate a deep-learning-based method to automatically reconstruct multiple catheters in MRI for prostate cancer HDR brachytherapy treatment planning. METHODS: An attention-gated U-Net incorporating a total variation (TV) regularization model was developed for multi-catheter segmentation in MRI. The attention gates were used to improve the accuracy of identifying small catheter points, while TV regularization was adopted to encode the natural spatial continuity of catheters into the model. The model was trained using binary catheter annotation images provided by experienced physicists as ground truth, paired with the original MR images. After the network was trained, MR images of a new prostate cancer patient receiving HDR brachytherapy were fed into the model to predict the locations and shapes of all the catheters. Quantitative assessment of our proposed method was based on catheter shaft and tip errors compared with the ground truth. RESULTS: Our method detected 299 catheters from 20 patients receiving HDR prostate brachytherapy with a catheter tip error of 0.37 ± 1.68 mm and a catheter shaft error of 0.93 ± 0.50 mm. For catheter tip detection, our method localized 87% of the tips within an error of less than ± 2.0 mm, and more than 71% of the tips within an absolute error of no more than 1.0 mm. For catheter shaft localization, 97% of catheters were detected with an error of less than 2.0 mm, while 63% were within 1.0 mm. CONCLUSIONS: In this study, we proposed a novel multi-catheter detection method to precisely localize the tips and shafts of catheters in three-dimensional MR images for HDR prostate brachytherapy. It paves the way for elevating the quality and outcome of MRI-guided HDR prostate brachytherapy.
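The TV regularizer penalizes spatial gradients of the predicted catheter probability map, which favors the thin, continuous paths that real catheters trace. A standard anisotropic TV term combined with a segmentation loss might look like the following sketch; the weighting factor and the toy target are assumptions.

```python
import torch
import torch.nn.functional as F

def tv_loss(prob: torch.Tensor) -> torch.Tensor:
    """Anisotropic total-variation penalty on a (B, 1, H, W) catheter
    probability map; small values mean spatially continuous predictions."""
    dh = (prob[..., 1:, :] - prob[..., :-1, :]).abs().mean()
    dw = (prob[..., :, 1:] - prob[..., :, :-1]).abs().mean()
    return dh + dw

seg = torch.sigmoid(torch.randn(1, 1, 64, 64, requires_grad=True))
target = torch.zeros_like(seg)            # toy all-background label
total = F.binary_cross_entropy(seg, target) + 0.1 * tv_loss(seg)
total.backward()                          # TV gradients flow with the BCE term
print(total.item())
```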


Subject(s)
Brachytherapy; Prostatic Neoplasms; Catheters; Humans; Magnetic Resonance Imaging; Male; Neural Networks, Computer; Prostate/diagnostic imaging; Prostatic Neoplasms/diagnostic imaging; Prostatic Neoplasms/radiotherapy; Radiotherapy Dosage
13.
J Biophotonics ; 13(9): e202000066, 2020 09.
Article in English | MEDLINE | ID: mdl-32445254

ABSTRACT

X-ray-induced luminescence computed tomography (XLCT) is an emerging molecular imaging modality. Challenges in improving spatial resolution and reducing scan time over a whole-body field of view (FOV) still remain for practical in vivo applications. In this study, we present a novel XLCT technique capable of obtaining three-dimensional (3D) images from a single snapshot. Specifically, a custom two-planar-mirror component is integrated into a cone-beam XLCT imaging system to obtain multiple optical views of an object simultaneously. Furthermore, a compressive sensing-based algorithm is adopted to improve the efficiency of 3D XLCT image reconstruction. Numerical simulations and experiments were conducted to validate single-snapshot X-ray-induced luminescence computed tomography (SS-XLCT). The results show that the 3D distribution of the nanophosphor targets can be visualized much faster than with the conventional cone-beam XLCT imaging method used in our comparisons, while maintaining comparable spatial resolution. SS-XLCT has the potential to harness the power of XLCT for rapid whole-body in vivo molecular imaging of small animals.
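Compressive-sensing reconstruction of this kind typically solves an l1-regularized least-squares problem over a known forward (photon-transport) system matrix. The abstract does not name the solver, so the plain ISTA iteration below is only a representative sketch with a synthetic system matrix, not the paper's algorithm.

```python
import numpy as np

def ista(A, y, lam=0.05, iters=500):
    """Plain ISTA solving min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - A.T @ (A @ x - y) / L      # gradient step on the data term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((64, 256))         # toy underdetermined system matrix
x_true = np.zeros(256)
x_true[[10, 100, 200]] = [1.0, -0.5, 2.0]  # three sparse luminescent targets
x_hat = ista(A, A @ x_true)
print(np.sort(np.argsort(np.abs(x_hat))[-3:]))  # should sit on [10 100 200]
```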


Subject(s)
Image Processing, Computer-Assisted; Luminescence; Algorithms; Animals; Phantoms, Imaging; Tomography, X-Ray Computed; X-Rays
14.
Opt Lett ; 44(19): 4769-4772, 2019 Oct 01.
Article in English | MEDLINE | ID: mdl-31568438

ABSTRACT

X-ray luminescence computed tomography (XLCT) based on x-ray-excitable nanophosphors has been proposed as a new modality for molecular imaging. The technique has two main advantages compared with other modalities. First, autofluorescence, which is problematic for fluorescence imaging, can be substantially reduced. Second, deep-tissue in vivo imaging with high optical contrast and spatial resolution becomes achievable. Here, we extend the XLCT modality from the visible and infrared regions to shortwave infrared wavelengths by developing x-ray-induced shortwave infrared luminescence computed tomography (SWIR-XLCT). For this application, rare-earth nanophosphors (RENPs) were synthesized as core/shell structures consisting of a Ho-doped NaYbF4 core surrounded by a NaYF4 shell, which emit light efficiently in the shortwave infrared spectral region under x-ray excitation. Through numerical simulations and phantom experiments, we showed the feasibility of SWIR-XLCT and demonstrated its potential for x-ray luminescence imaging with high spatial resolution at depth.

15.
Med Phys ; 46(12): 5696-5702, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31512753

ABSTRACT

PURPOSE: X-ray fluorescence computed tomography (XFCT) is an emerging molecular imaging modality for preclinical and clinical applications with high-atomic-number contrast agents. XFCT allows detection of molecular biomarkers at tissue depths of 4-9 mm at L-shell energies and several centimeters at K-shell energies, while maintaining high spatial resolution; this is typically not possible with other molecular imaging modalities. The purpose of this study is to demonstrate XFCT imaging with reduced acquisition times. To accomplish this, x-ray focusing polycapillary optics are used to simultaneously increase the x-ray fluence rate and spatial resolution in L-shell XFCT imaging. MATERIALS AND METHODS: A prototype imaging system using a polycapillary focusing optic was demonstrated. The optic, custom-designed for this prototype, provided a focal spot size of 2.6 mm at a source-to-isocenter distance of 3 cm, with a fluence rate ten times higher than standard collimation. The study evaluates three different phantoms to explore the trade-offs and limitations of L-shell XFCT imaging: a low-contrast gold phantom and a high-contrast gold phantom, each with three target regions (gold concentrations of 60, 80, and 100 µg/ml for low contrast and 200, 600, and 1000 µg/ml for high contrast), and a mouse-sized water phantom with gold concentrations between 300 and 500 µg/ml. X-ray fluorescence photons were measured using a silicon drift detector (SDD) with an energy resolution of 180 eV FWHM at an x-ray energy of 11 keV. Images were reconstructed with an iterative image reconstruction algorithm and analyzed for contrast-to-noise ratio (CNR) and signal-to-noise ratio (SNR). RESULTS: The XFCT data acquisition time could be reduced from 17 h to under 1 h. The polycapillary x-ray optic increases the x-ray fluence rate and lowers the amount of background scatter, which leads to reduced imaging time and improved sensitivity. The quantitative analysis of the reconstructed images validates that gold concentrations of 60 µg/ml can be visualized with L-shell XFCT imaging. For a mouse-sized phantom, a concentration of 300 µg/ml gold was detected within a 66 min measurement. CONCLUSIONS: With a high-fluence-rate pencil beam from a polycapillary x-ray source, a reduction in signal integration time is achieved. We show that small amounts of contrast agent can be detected with L-shell XFCT within biologically relevant time frames. Our measurements show that polycapillary x-ray source technology is suitable for preclinical L-shell XFCT imaging. The integration of more SDDs into the system will lower the dose and increase the sensitivity.
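CNR is the figure of merit that determines whether a given gold concentration is "visible". A standard definition compares ROI signal against background statistics; the paper's exact variant is not restated in the abstract, so the following is an assumption-labeled sketch.

```python
import numpy as np

def cnr(img: np.ndarray, roi: np.ndarray, bg: np.ndarray) -> float:
    """Contrast-to-noise ratio of a reconstructed image:
    (mean ROI signal - mean background) / background standard deviation."""
    return (img[roi].mean() - img[bg].mean()) / img[bg].std()

img = np.random.rand(64, 64)               # toy reconstructed slice
roi = np.zeros_like(img, dtype=bool)
roi[28:36, 28:36] = True
img[roi] += 0.5                             # simulated gold uptake in the ROI
print(cnr(img, roi, ~roi))
```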


Subject(s)
Fluorescence; Image Processing, Computer-Assisted/methods; Tomography, X-Ray Computed; Animals; Mice; Phantoms, Imaging; Radiation Dosage; Time Factors
16.
Phys Med Biol ; 64(12): 125015, 2019 06 20.
Article in English | MEDLINE | ID: mdl-31117059

ABSTRACT

We present a novel fluorescence molecular tomography (FMT) endoscope that uses a MEMS scanning mirror and an optical fiberscope. The diameter of this highly miniaturized FMT device is only 5 mm; to our knowledge, it is the smallest FMT device reported to date. Several phantom experiments based on indocyanine green (ICG) were conducted to demonstrate the imaging ability of the device. Two tumor-bearing mice were systemically injected with tumor-targeted NIR fluorescent probes (ATF-PEG-IO-830) and were then imaged to further demonstrate the ability of this FMT endoscope to image small animals.


Subject(s)
Endoscopy/instrumentation; Fiber Optic Technology/instrumentation; Fluorescence; Micro-Electrical-Mechanical Systems/instrumentation; Miniaturization/methods; Phantoms, Imaging; Tomography/instrumentation; Animals; Fluorescent Dyes; Indocyanine Green; Mice
17.
Appl Opt ; 57(27): 7938-7941, 2018 Sep 20.
Article in English | MEDLINE | ID: mdl-30462063

ABSTRACT

We present a novel method called full-density fluorescence molecular tomography (FD-FMT) that can considerably improve the performance of conventional FMT. By converting each source (or detector) into a detector (or source) through the use of a dichroic mirror, FD-FMT not only increases the number of optical projections more than fourfold (compared with conventional FMT) to achieve high-resolution image reconstruction, but also offers the possibility of realizing miniaturized FMT systems.

18.
J Biophotonics ; 11(3)2018 03.
Article in English | MEDLINE | ID: mdl-28696034

ABSTRACT

Advances in epilepsy studies have shown that specific changes in hemodynamics precede and accompany seizure onset and propagation. However, it has been challenging to detect these changes noninvasively, in real time, and in humans, owing to the lack of fast functional neuroimaging tools. In this study, we present a functional diffuse optical tomography (DOT) method guided by an anatomical human head atlas for three-dimensional mapping of the brain in real time. Central to our DOT system is a human head interface coupled with a technique that incorporates topological information of the brain surface into the DOT image reconstruction. The performance of the DOT system was tested by imaging brain activity during motor tasks in six subjects (three epilepsy patients and three healthy controls). We observed diffuse areas of activation in the reconstructed total hemoglobin ([HbT]) images of patients, relative to the more focal activations in healthy subjects. Moreover, significant pre-task hemodynamic activations were seen in the motor cortex of patients, indicating persistent abnormal activity in the epileptic brain. This work demonstrates that fast functional DOT is a valuable tool for noninvasive three-dimensional mapping of brain hemodynamics.


Subject(s)
Brain/diagnostic imaging; Tomography, Optical; Adolescent; Adult; Brain/blood supply; Case-Control Studies; Epilepsy/diagnostic imaging; Epilepsy/physiopathology; Female; Hemodynamics; Humans; Imaging, Three-Dimensional; Male; Middle Aged; Phantoms, Imaging; Time Factors; Young Adult
19.
Opt Lett ; 42(7): 1456-1459, 2017 Apr 01.
Article in English | MEDLINE | ID: mdl-28362791

ABSTRACT

In this Letter, we present a photoacoustic imaging (PAI) system based on a low-cost, high-power miniature light-emitting diode (LED) that is capable of mapping vasculature networks in biological tissue in vivo. Overdriven with 200 ns pulses at a repetition rate of 40 kHz, a 1.2 W, 405 nm LED with a radiation area of 1000 µm × 1000 µm and a size of 3.5 mm × 3.5 mm was used to excite photoacoustic signals in tissue. Phantoms including black stripes, lead, and hair were used to validate the system, in which a volumetric PAI image was obtained by scanning the transducer and the light beam in a two-dimensional x-y plane over the object. In vivo imaging of the vasculature of a mouse ear shows that LED-based PAI could have great potential for label-free biomedical imaging applications where the use of bulky and expensive pulsed lasers is impractical.

20.
Appl Sci (Basel) ; 7(12)2017 Dec.
Article in English | MEDLINE | ID: mdl-31205772

ABSTRACT

It is highly desirable to develop novel approaches to improve the survival rate of pancreatic cancer patients through early detection. Here, we present such an approach based on photoacoustic and fluorescence molecular imaging of pancreatic tumors using a miniature multimodal endoscope in combination with targeted multifunctional iron oxide nanoparticles (IONPs). A novel fan-shaped scanning mechanism was developed to minimize invasiveness during endoscopic imaging of pancreatic tumors. The results show that the enhancements in photoacoustic and fluorescence signals using amino-terminal fragment (ATF)-targeted IONPs were approximately four to six times higher than those using non-targeted IONPs. Our study indicates the potential of combining multimodal photoacoustic-fluorescence endoscopy with targeted multifunctional nanoparticles as an efficient tool to provide improved specificity and sensitivity for pancreatic cancer detection.
