Results 1 - 20 of 20
1.
Int J Mol Sci ; 25(10)2024 May 17.
Article in English | MEDLINE | ID: mdl-38791508

ABSTRACT

Cryogenic electron tomography (cryoET) is a powerful tool in structural biology, enabling detailed 3D imaging of biological specimens at nanometer resolution. Despite its potential, cryoET faces challenges such as the missing wedge problem, which limits reconstruction quality because data are collected over an incomplete angular range. Recently, supervised deep learning methods based on convolutional neural networks (CNNs) have substantially mitigated this issue; however, their pretraining requirements make them susceptible to inaccuracies and artifacts, particularly when representative training data are scarce. To overcome these limitations, we introduce a proof-of-concept unsupervised learning approach using coordinate networks (CNs) that optimizes network weights directly against the input projections. This eliminates the need for pretraining, reducing reconstruction runtime by 3-20× compared to supervised methods. Our in silico results show improved shape completion and reduced missing wedge artifacts, assessed with several voxel-based image quality metrics in real space and a novel directional Fourier Shell Correlation (FSC) metric. Our study illuminates the benefits and trade-offs of both supervised and unsupervised approaches, guiding the development of improved reconstruction strategies.
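The core mechanism above, optimizing a coordinate-based representation directly against the measured tilt projections with no pretraining, can be sketched with a linear stand-in for the coordinate network (random Fourier features in place of an MLP). The toy object, tilt range, and feature bandwidth below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2D object on [-1, 1]^2 (a Gaussian blob) standing in for a tomogram slice
def object_fn(pts):
    return np.exp(-((pts[:, 0] - 0.2) ** 2 + pts[:, 1] ** 2) / 0.05)

# Random Fourier features: a linear stand-in for the coordinate network
B = rng.normal(0.0, 2.0, (2, 128))
def features(pts):
    z = 2 * np.pi * pts @ B
    return np.concatenate([np.sin(z), np.cos(z)], axis=1)

# Limited tilt range (the missing wedge): angles only within +/-60 degrees
angles = np.deg2rad(np.linspace(-60, 60, 5))
offsets = np.linspace(-0.8, 0.8, 8)
s = np.linspace(-1.0, 1.0, 40)                # sample points along each ray

rows, meas = [], []
for th in angles:
    d = np.array([np.cos(th), np.sin(th)])    # ray direction
    n = np.array([-np.sin(th), np.cos(th)])   # perpendicular offset direction
    for t in offsets:
        pts = t * n + s[:, None] * d          # points along one ray
        rows.append(features(pts).sum(axis=0))  # projection is linear in the weights
        meas.append(object_fn(pts).sum())
A, y = np.array(rows), np.array(meas)

# "Training" the representation: fit the weights directly to the projections
w, *_ = np.linalg.lstsq(A, y, rcond=None)
residual = np.linalg.norm(A @ w - y) / np.linalg.norm(y)

# Query the fitted representation on a grid to reconstruct the slice
gx, gy = np.meshgrid(np.linspace(-1, 1, 32), np.linspace(-1, 1, 32))
grid = np.stack([gx.ravel(), gy.ravel()], axis=1)
recon = features(grid) @ w
```

Because this stand-in is linear in its weights, "training" reduces to a least-squares solve; an actual coordinate network would instead be fitted by gradient descent on the same projection loss.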


Subjects
Image Processing, Computer-Assisted; Neural Networks, Computer; Unsupervised Machine Learning; Image Processing, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Electron Microscope Tomography/methods; Cryoelectron Microscopy/methods; Algorithms; Deep Learning
2.
bioRxiv ; 2024 Apr 28.
Article in English | MEDLINE | ID: mdl-38712113

ABSTRACT

Cryogenic electron tomography (cryoET) is a powerful tool in structural biology, enabling detailed 3D imaging of biological specimens at nanometer resolution. Despite its potential, cryoET faces challenges such as the missing wedge problem, which limits reconstruction quality because data are collected over an incomplete angular range. Recently, supervised deep learning methods based on convolutional neural networks (CNNs) have substantially mitigated this issue; however, their pretraining requirements make them susceptible to inaccuracies and artifacts, particularly when representative training data are scarce. To overcome these limitations, we introduce a proof-of-concept unsupervised learning approach using coordinate networks (CNs) that optimizes network weights directly against the input projections. This eliminates the need for pretraining, reducing reconstruction runtime by 3-20× compared to supervised methods. Our in silico results show improved shape completion and reduced missing wedge artifacts, assessed with several voxel-based image quality metrics in real space and a novel directional Fourier Shell Correlation (FSC) metric. Our study illuminates the benefits and trade-offs of both supervised and unsupervised approaches, guiding the development of improved reconstruction strategies.

3.
Med Phys ; 51(4): 2526-2537, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38014764

ABSTRACT

BACKGROUND: Volumetric reconstruction of magnetic resonance imaging (MRI) from sparse samples is desirable for 3D motion tracking and promises to improve the precision of magnetic resonance (MR)-guided radiation treatment. Data-driven sparse MRI reconstruction, however, requires large-scale training datasets for prior learning, which are time-consuming and challenging to acquire in clinical settings. PURPOSE: To investigate volumetric reconstruction of MRI from sparse samples of two orthogonal slices, aided by sparse priors from two static 3D MRI scans, through implicit neural representation (NeRP) learning, in support of 3D motion tracking during MR-guided radiotherapy. METHODS: A multi-layer perceptron network was trained to parameterize the NeRP model of a patient-specific MRI dataset, where the network takes 4D data coordinates of voxel locations and motion states as inputs and outputs the corresponding voxel intensities. By first training the network to learn the NeRP of two static 3D MRI scans with different breathing motion states, prior information about patient breathing motion was embedded into the network weights through optimization. The prior information was then augmented from two motion states to 31 motion states by querying the optimized network at interpolated and extrapolated motion state coordinates. Starting from the prior-augmented NeRP model as an initialization point, we further trained the network to fit sparse samples of two orthogonal MRI slices, and the final volumetric reconstruction was obtained by querying the trained network at 3D spatial locations. We evaluated the proposed method using 5-min volumetric MRI time series with 340 ms temporal resolution for seven abdominal patients with hepatocellular carcinoma, acquired with a golden-angle radial MRI sequence and reconstructed through retrospective sorting. Two volumetric MRI scans, one at inhale and one at exhale, were selected from the first 30 s of the time series for prior embedding and augmentation.
The remaining 4.5-min time series was used for volumetric reconstruction evaluation, where we retrospectively subsampled each MRI to two orthogonal slices and compared model-reconstructed images to ground truth images in terms of image quality and the capability to support 3D target motion tracking. RESULTS: Across the seven patients evaluated, the peak signal-to-noise ratio between model-reconstructed and ground truth MR images was 38.02 ± 2.60 dB and the structural similarity index measure was 0.98 ± 0.01. Throughout the 4.5-min period, gross tumor volume (GTV) motion estimated by deforming a reference-state MRI to the model-reconstructed and ground truth MRI showed good consistency. The 95th-percentile Hausdorff distance between GTV contours was 2.41 ± 0.77 mm, less than the voxel dimension. The mean GTV centroid position difference between ground truth and model estimation was less than 1 mm in all three orthogonal directions. CONCLUSION: A prior-augmented NeRP model has been developed to reconstruct volumetric MRI from sparse samples of orthogonal cine slices. Only one exhale and one inhale 3D MRI scan were needed to train the model to learn prior information about patient breathing motion for sparse image reconstruction. The proposed model has the potential to support 3D motion tracking during MR-guided radiotherapy for improved treatment precision and promises a major simplification of the workflow by eliminating the need for large-scale training datasets.


Subjects
Abdomen; Magnetic Resonance Imaging; Humans; Retrospective Studies; Motion; Respiration; Magnetic Resonance Spectroscopy; Imaging, Three-Dimensional
4.
Phys Med Biol ; 68(20)2023 Oct 16.
Article in English | MEDLINE | ID: mdl-37757838

ABSTRACT

Objective. Supervised deep learning for image super-resolution (SR) has limitations in biomedical imaging due to the lack of large numbers of low- and high-resolution image pairs for model training. In this work, we propose a reference-free statistical implicit neural representation (INR) framework, which needs only a single or a few observed low-resolution (LR) image(s), to generate high-quality SR images. Approach. The framework models the statistics of the observed LR images via maximum likelihood estimation and trains the INR network to represent the latent high-resolution (HR) image as a continuous function in the spatial domain. The INR network is constructed as a coordinate-based multi-layer perceptron, whose inputs are image spatial coordinates and whose outputs are the corresponding pixel intensities. The trained INR not only enforces functional smoothness but also allows SR imaging at arbitrary scales. Main results. We demonstrate the efficacy of the proposed framework on various biomedical images, including computed tomography (CT), magnetic resonance imaging (MRI), fluorescence microscopy, and ultrasound images, across SR magnification scales of 2×, 4×, and 8×. A limited number of LR images were used for each SR imaging task to show the potential of the proposed statistical INR framework. Significance. The proposed method provides an urgently needed unsupervised deep learning framework for the many biomedical SR applications that lack HR reference images.
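The observation model described above, in which the latent HR image is a continuous function whose pixel-footprint averages must match the observed LR image, can be sketched in 1D with a linear random-feature stand-in for the INR network; the toy scene and feature settings are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Continuous 1D "scene" standing in for the latent HR image (an assumption)
def scene(t):
    return np.sin(3 * t) + 0.5 * np.sin(17 * t)

scale = 4                                    # SR magnification factor
n_lr = 32
t_hr = np.linspace(0.0, 1.0, n_lr * scale, endpoint=False)
lr = scene(t_hr).reshape(n_lr, scale).mean(axis=1)   # LR pixel = footprint average

# Random Fourier features: a linear stand-in for the coordinate-based MLP
B = rng.normal(0.0, 8.0, (1, 128))
def feats(t):
    z = 2 * np.pi * t[:, None] @ B
    return np.concatenate([np.sin(z), np.cos(z)], axis=1)

# Each LR pixel constrains the average of the representation over its footprint
A = feats(t_hr).reshape(n_lr, scale, -1).mean(axis=1)    # (32, 256)
w, *_ = np.linalg.lstsq(A, lr, rcond=None)

# Arbitrary-scale querying: evaluate the fitted representation on the 4x grid
sr = feats(t_hr) @ w
lr_check = sr.reshape(n_lr, scale).mean(axis=1)          # re-downsample the SR result
```

Because the fitted representation is continuous, the same weights can be queried at any grid density, which is the arbitrary-scale property noted in the abstract.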


Subjects
Algorithms; Neural Networks, Computer; Magnetic Resonance Imaging; Tomography, X-Ray Computed; Microscopy, Fluorescence; Image Processing, Computer-Assisted/methods
5.
Int J Mol Sci ; 24(16)2023 Aug 08.
Article in English | MEDLINE | ID: mdl-37628743

ABSTRACT

Immunochromatographic assay (ICA) plays an important role in in vitro diagnostics because of its simplicity, convenience, speed, sensitivity, accuracy, and low cost. The use of magnetic nanoparticles (MNPs), which possess both excellent optical properties and magnetic separation functionality, can effectively improve the performance of ICA. In this study, an MNP-based ICA (MNP-ICA) was successfully developed for the sensitive detection of carcinoembryonic antigen (CEA). The magnetic probes were prepared by covalently conjugating carboxylated MNPs with a specific monoclonal antibody against CEA; they were employed not only to enrich and extract CEA from serum samples under an external magnetic field but also as a signal output through their inherent optical properties. Under optimal parameters, the limit of detection (LOD) for qualitative detection with the naked eye was 1.0 ng/mL, and quantitative detection could be achieved with a portable optical reader: the optical signal intensity ratio correlated well with CEA concentrations ranging from 1.0 ng/mL to 64.0 ng/mL (R² = 0.9997). Additionally, a method comparison demonstrated that the magnetic probes improved sensitivity by reducing matrix effects after magnetic separation; the MNP-ICA is eight times more sensitive than an ICA based on colloidal gold nanoparticles. The developed MNP-ICA will provide sensitive, convenient, and efficient technical support for rapid biomarker screening in cancer diagnosis and prognosis.


Subjects
Carcinoembryonic Antigen; Magnetite Nanoparticles; Gold; Antibodies, Monoclonal; Immunoassay
6.
Int J Radiat Oncol Biol Phys ; 117(2): 505-514, 2023 10 01.
Article in English | MEDLINE | ID: mdl-37141982

ABSTRACT

PURPOSE: This study explored deep-learning-based patient-specific auto-segmentation using transfer learning on daily RefleXion kilovoltage computed tomography (kVCT) images to facilitate adaptive radiation therapy, based on data from the first group of patients treated with the RefleXion system. METHODS AND MATERIALS: For head and neck (HaN) and pelvic cancers, a deep convolutional segmentation network was initially trained on population datasets containing 67 and 56 patient cases, respectively. The pretrained population network was then adapted to each specific RefleXion patient by fine-tuning the network weights with a transfer learning method. For each of the 6 collected RefleXion HaN cases and 4 pelvic cases, the initial planning computed tomography (CT) scan and 5 to 26 sets of daily kVCT images were used separately for patient-specific learning and evaluation. The performance of the patient-specific network was compared with that of the population network and the clinical rigid registration method, evaluated by the Dice similarity coefficient (DSC) with manual contours as the reference. The corresponding dosimetric effects of the different auto-segmentation and registration methods were also investigated. RESULTS: The proposed patient-specific network achieved mean DSC results of 0.88 for 3 HaN organs at risk (OARs) of interest and 0.90 for 8 pelvic targets and OARs, outperforming the population network (0.70 and 0.63) and the registration method (0.72 and 0.72). The DSC of the patient-specific network gradually increased as longitudinal training cases were added and approached saturation beyond 6 training cases. Compared with the registration contour, the target and OAR mean doses and dose-volume histograms obtained with the patient-specific auto-segmentation were closer to the results obtained with the manual contour.
CONCLUSIONS: Auto-segmentation of RefleXion kVCT images based on patient-specific transfer learning achieved higher accuracy, outperforming a common population network and a clinical registration-based method. This approach shows promise for improving dose evaluation accuracy in RefleXion adaptive radiation therapy.
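Segmentation quality above is reported as the Dice similarity coefficient; for reference, a minimal implementation evaluated on hypothetical masks:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a = a.astype(bool)
    b = b.astype(bool)
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * inter / denom if denom else 1.0

# Hypothetical manual and auto contours: two 20x20 squares offset by 2 rows
manual = np.zeros((64, 64), dtype=bool)
manual[20:40, 20:40] = True
auto = np.zeros((64, 64), dtype=bool)
auto[22:42, 20:40] = True

score = dice(manual, auto)   # overlap of 18x20 pixels out of 400 + 400
```

A DSC of 1.0 means perfect overlap; the toy offset above yields 2·360/800 = 0.9.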


Subjects
Image Processing, Computer-Assisted; Radiotherapy Planning, Computer-Assisted; Humans; Radiotherapy Planning, Computer-Assisted/methods; Image Processing, Computer-Assisted/methods; Organs at Risk/diagnostic imaging; Organs at Risk/radiation effects; Radiometry; Tomography, X-Ray Computed
7.
IEEE Trans Med Imaging ; 42(7): 1932-1943, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37018314

ABSTRACT

The collection and curation of large-scale medical datasets from multiple institutions is essential for training accurate deep learning models, but privacy concerns often hinder data sharing. Federated learning (FL) is a promising solution that enables privacy-preserving collaborative learning among different institutions, but it generally suffers from performance deterioration due to heterogeneous data distributions and a lack of quality labeled data. In this paper, we present a robust and label-efficient self-supervised FL framework for medical image analysis. Our method introduces a novel Transformer-based self-supervised pre-training paradigm that pre-trains models directly on decentralized target task datasets using masked image modeling, to facilitate more robust representation learning on heterogeneous data and effective knowledge transfer to downstream models. Extensive empirical results on simulated and real-world medical imaging non-IID federated datasets show that masked image modeling with Transformers significantly improves the robustness of models against various degrees of data heterogeneity. Notably, under severe data heterogeneity, our method, without relying on any additional pre-training data, achieves an improvement of 5.06%, 1.53% and 4.58% in test accuracy on retinal, dermatology and chest X-ray classification compared to the supervised baseline with ImageNet pre-training. In addition, we show that our federated self-supervised pre-training methods yield models that generalize better to out-of-distribution data and perform more effectively when fine-tuning with limited labeled data, compared to existing FL algorithms. The code is available at https://github.com/rui-yan/SSL-FL.
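This is not the paper's Transformer, but the mechanics of the masked-image-modeling objective described above, masking a fraction of patches and scoring reconstruction only on the masked region, can be sketched as follows (patch size and mask ratio are assumed values):

```python
import numpy as np

rng = np.random.default_rng(0)

def mask_patches(img, patch=4, ratio=0.75, rng=rng):
    """Randomly mask a fraction of non-overlapping patches (masked image modeling)."""
    h, w = img.shape
    gh, gw = h // patch, w // patch
    n_mask = int(gh * gw * ratio)
    idx = rng.choice(gh * gw, size=n_mask, replace=False)
    mask = np.zeros((gh, gw), dtype=bool)
    mask.flat[idx] = True
    mask = np.kron(mask, np.ones((patch, patch), dtype=bool))  # expand to pixels
    masked = img.copy()
    masked[mask] = 0.0
    return masked, mask

def mim_loss(pred, target, mask):
    """Reconstruction loss computed only on the masked pixels."""
    return float(((pred - target)[mask] ** 2).mean())

img = rng.standard_normal((32, 32))
masked, mask = mask_patches(img)
loss_perfect = mim_loss(img, img, mask)   # a perfect reconstruction scores 0
```

In the paper's setting, each client would run this pre-training objective locally on its own unlabeled images, with a Transformer as the predictor, before federated aggregation and downstream fine-tuning.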


Subjects
Algorithms; Diagnostic Imaging; Radiography; Retina
8.
Med Phys ; 50(5): 3137-3147, 2023 May.
Article in English | MEDLINE | ID: mdl-36621812

ABSTRACT

BACKGROUND: Linear accelerator (Linac) beam data commissioning and quality assurance (QA) play a vital role in accurate radiation treatment delivery and entail a large number of measurements across a variety of field sizes. Ways to reduce the data-acquisition effort while maintaining a high quality of medical physics practice have long been sought. PURPOSE: We propose to model Linac beam data through implicit neural representation (NeRP) learning. The potential of the beam model to predict beam data from sparse measurements and to detect data collection errors was evaluated, with the goal of using the beam model to verify beam data collection accuracy and simplify the commissioning and QA process. MATERIALS AND METHODS: NeRP models with continuous and differentiable functions parameterized by multilayer perceptrons (MLPs) were used to represent various beam data, including percentage depth dose (PDD) and profiles of 6 MV beams with and without a flattening filter. Prior knowledge of the beam data was embedded into the MLP network by learning the NeRP of a vendor-provided "golden" beam dataset. The prior-embedded network was then trained to fit clinical beam data collected at one field size and used to predict beam data at other field sizes. We evaluated the prediction accuracy by comparing network-predicted beam data to water tank measurements collected from 14 clinical Linacs. Beam datasets with intentionally introduced errors were used to investigate the potential use of the NeRP model for beam data verification, by evaluating the model performance when trained with erroneous beam data samples. RESULTS: Linac beam data predicted by the model agreed well with water tank measurements, with average Gamma passing rates (1%/1 mm criteria) higher than 95% and average mean absolute errors less than 0.6%.
Beam data samples with measurement errors were revealed by inconsistent beam predictions between networks trained with correct versus erroneous data samples, characterized by a Gamma passing rate lower than 90%. CONCLUSION: A NeRP beam data modeling technique has been established for predicting beam characteristics from sparse measurements. The model provides a valuable tool to verify beam data collection accuracy and promises to simplify commissioning/QA processes by reducing the number of measurements without compromising the quality of medical physics service.
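The Gamma analysis used above to compare predicted and measured curves can be sketched in 1D; the toy PDD curve and the helper name `gamma_pass_rate` are illustrative assumptions, with the quoted 1%/1 mm tolerances:

```python
import numpy as np

def gamma_pass_rate(ref_pos, ref_dose, eval_pos, eval_dose,
                    dose_tol=0.01, dist_tol=1.0):
    """1D gamma analysis: fraction of reference points with gamma <= 1.
    dose_tol is a fraction of the reference maximum; dist_tol is in mm."""
    d_norm = dose_tol * ref_dose.max()
    passed = 0
    for p, d in zip(ref_pos, ref_dose):
        dd = (eval_dose - d) / d_norm          # dose differences, normalized
        dr = (eval_pos - p) / dist_tol         # distances, normalized
        gamma = np.sqrt(dd ** 2 + dr ** 2).min()
        passed += gamma <= 1.0
    return passed / len(ref_pos)

pos = np.linspace(0, 300, 301)                          # depth in mm
pdd = 100 * np.exp(-0.005 * np.maximum(pos - 15, 0))    # toy PDD-like curve
rate_identical = gamma_pass_rate(pos, pdd, pos, pdd)    # identical curves pass fully
```

Identical curves give a 100% passing rate; a grossly wrong evaluated curve (for example, doubled dose) fails at many reference points.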


Subjects
Radiotherapy, Intensity-Modulated; Radiotherapy, Intensity-Modulated/methods; Radiotherapy Dosage; Radiotherapy Planning, Computer-Assisted/methods; Particle Accelerators; Water
9.
Nat Commun ; 13(1): 7142, 2022 11 21.
Article in English | MEDLINE | ID: mdl-36414658

ABSTRACT

Single-cell RNA sequencing is a promising technique for determining the states of individual cells and classifying novel cell subtypes. In current sequence data analysis, however, genes with low expression are omitted, which leads to inaccurate gene counts and hinders downstream analysis. Recovering these omitted expression values presents a challenge because of the large size of the data. Here, we introduce a data-driven gene expression recovery framework, referred to as the self-consistent expression recovery machine (SERM), to impute the missing expressions. Using a neural network, the technique first learns the underlying data distribution from a subset of the noisy data. It then recovers the overall expression data by imposing self-consistency on the expression matrix, ensuring that expression levels are similarly distributed in different parts of the matrix. We show that SERM improves the accuracy of gene imputation with orders-of-magnitude gains in computational efficiency compared to state-of-the-art imputation techniques.
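SERM's self-consistency machinery is more elaborate than this, but its distribution-matching ingredient, forcing values to follow a distribution learned from a reliable subset of the data, can be illustrated with a rank-based quantile mapping on hypothetical data:

```python
import numpy as np

rng = np.random.default_rng(1)

def quantile_match(values, reference):
    """Map `values` onto the empirical quantiles of `reference` by rank."""
    order = np.argsort(values)
    targets = np.quantile(reference, np.linspace(0, 1, len(values)))
    out = np.empty(len(values))
    out[order] = targets      # i-th smallest value -> i-th reference quantile
    return out

# Hypothetical data: a well-measured reference subset, and a column whose low
# expressions were shrunk to zero (dropout-like distortion)
reference = rng.gamma(2.0, 2.0, 5000)
column = rng.gamma(2.0, 2.0, 300)
column[column < 1.0] = 0.0           # simulated dropout
recovered = quantile_match(column, reference)
```

After the mapping, the recovered column follows the reference distribution exactly while preserving the rank order of the observed values.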


Subjects
Selective Estrogen Receptor Modulators; Gene Expression
10.
Article in English | MEDLINE | ID: mdl-35657845

ABSTRACT

Image reconstruction is an inverse problem that solves for a computational image based on sampled sensor measurements. Sparsely sampled image reconstruction poses additional challenges due to the limited number of measurements. In this work, we propose a methodology of implicit Neural Representation learning with Prior embedding (NeRP) to reconstruct a computational image from sparsely sampled measurements. The method differs fundamentally from previous deep learning-based image reconstruction approaches in that NeRP exploits the internal information in an image prior together with the physics of the sparsely sampled measurements to produce a representation of the unknown subject. No large-scale training data are required, only a prior image and the sparsely sampled measurements. In addition, we demonstrate that NeRP is a general methodology that generalizes to different imaging modalities such as computed tomography (CT) and magnetic resonance imaging (MRI). We also show that NeRP can robustly capture subtle yet significant image changes required for assessing tumor progression.
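The two-stage recipe above, embedding a prior image into the representation and then fitting the sparse measurements starting from that prior, can be sketched with a linear random-feature stand-in for the network, where prior embedding becomes ridge regularization toward the prior-fitted weights (an analogy under stated assumptions, not the paper's MLP):

```python
import numpy as np

rng = np.random.default_rng(0)

# 1D stand-ins for a prior image and a new, slightly changed subject
x = np.linspace(-1.0, 1.0, 200)[:, None]
prior_img = np.exp(-x.ravel() ** 2 / 0.08)
target_img = np.exp(-(x.ravel() - 0.1) ** 2 / 0.08)

# Random Fourier features: a linear stand-in for the coordinate MLP
B = rng.normal(0.0, 2.0, (1, 64))
z = 2 * np.pi * x @ B
F = np.concatenate([np.sin(z), np.cos(z)], axis=1)       # (200, 128)

# Stage 1 (prior embedding): fit the representation to the prior image
w_prior, *_ = np.linalg.lstsq(F, prior_img, rcond=None)

# Stage 2: fit sparse measurements of the new subject, staying close to the prior
idx = np.arange(0, 200, 8)                 # sparsely sampled locations
A, y = F[idx], target_img[idx]
lam = 1e-2                                 # strength of the pull toward the prior
w = np.linalg.solve(A.T @ A + lam * np.eye(F.shape[1]),
                    A.T @ y + lam * w_prior)

res_prior = np.linalg.norm(A @ w_prior - y)   # prior alone vs. the measurements
res_post = np.linalg.norm(A @ w - y)          # prior plus sparse-data refinement
```

By construction, the prior-regularized solution fits the sparse measurements at least as well as the prior alone, while the regularizer keeps the unconstrained parts of the representation anchored to the prior.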

11.
Med Phys ; 49(9): 6110-6119, 2022 Sep.
Article in English | MEDLINE | ID: mdl-35766221

ABSTRACT

PURPOSE: To develop a geometry-informed deep learning framework for volumetric MRI with sub-second acquisition time in support of 3D motion tracking, which is highly desirable for improved radiotherapy precision but hindered by the long image acquisition time. METHODS: A 2D-3D deep learning network with an explicitly defined geometry module that embeds geometric priors of the k-space encoding pattern was investigated, where a 2D generation network first augmented the sparsely sampled image dataset by generating new 2D representations of the underlying 3D subject. A geometry module then unfolded the 2D representations to the volumetric space. Finally, a 3D refinement network took the unfolded 3D data and outputted high-resolution volumetric images. Patient-specific models were trained for seven abdominal patients to reconstruct volumetric MRI from both orthogonal cine slices and sparse radial samples. To evaluate the robustness of the proposed method to longitudinal patient anatomy and position changes, we tested the trained model on separate datasets acquired more than one month later and evaluated 3D target motion tracking accuracy using the model-reconstructed images by deforming a reference MRI with gross tumor volume (GTV) contours to a 5-min time series of both ground truth and model-reconstructed volumetric images with a temporal resolution of 340 ms. RESULTS: Across the seven patients evaluated, the median distances between model-predicted and ground truth GTV centroids in the superior-inferior direction were 0.4 ± 0.3 mm and 0.5 ± 0.4 mm for cine and radial acquisitions, respectively. The 95-percentile Hausdorff distances between model-predicted and ground truth GTV contours were 4.7 ± 1.1 mm and 3.2 ± 1.5 mm for cine and radial acquisitions, which are of the same scale as cross-plane image resolution. 
CONCLUSION: Incorporating geometric priors into the deep learning model enables volumetric imaging with high spatial and temporal resolution, which is particularly valuable for 3D motion tracking and has the potential to greatly improve MRI-guided radiotherapy precision.


Subjects
Deep Learning; Radiotherapy, Image-Guided; Humans; Imaging, Three-Dimensional/methods; Magnetic Resonance Imaging; Motion; Radiotherapy, Image-Guided/methods
12.
Comput Biol Med ; 148: 105710, 2022 09.
Article in English | MEDLINE | ID: mdl-35715260

ABSTRACT

Deep learning affords enormous opportunities to augment the armamentarium of biomedical imaging. However, the pure data-driven nature of deep learning models may limit the model generalizability and application scope. Here we establish a geometry-informed deep learning framework for ultra-sparse 3D tomographic image reconstruction. We introduce a novel mechanism for integrating geometric priors of the imaging system. We demonstrate that the seamless inclusion of known priors is essential to enhance the performance of 3D volumetric computed tomography imaging with ultra-sparse sampling. The study opens new avenues for data-driven biomedical imaging and promises to provide substantially improved imaging tools for various clinical imaging and image-guided interventions.


Subjects
Deep Learning; Algorithms; Cone-Beam Computed Tomography; Image Processing, Computer-Assisted; Imaging, Three-Dimensional
13.
Phys Med Biol ; 67(12)2022 06 13.
Article in English | MEDLINE | ID: mdl-35477171

ABSTRACT

Objective. Dose distribution data play a pivotal role in radiotherapy treatment planning. The data are typically represented using voxel grids, with sizes ranging from 10⁶ to 10⁸ voxels. A concise representation of the treatment plan is of great value in facilitating treatment planning and downstream applications. This work aims to develop an implicit neural representation of 3D dose distribution data. Approach. Instead of storing the dose values at each voxel, the proposed approach employs the weights of a multilayer perceptron (MLP) to characterize the dosimetric data for plan representation and subsequent applications. We train a coordinate-based MLP with sinusoidal activations to map voxel spatial coordinates to the corresponding dose values. We identify the best architecture for a given parameter budget and use it to train a model for each patient. The trained MLP is evaluated at each voxel location to reconstruct the dose distribution. We perform extensive experiments on dose distributions of prostate, spine, and head and neck tumor cases to evaluate the quality of the proposed representation. We also study how representation quality changes with model size and activation function. Main results. Using coordinate-based MLPs with sinusoidal activations, we can learn implicit representations that achieve a mean-squared error of 10⁻⁶ and a peak signal-to-noise ratio greater than 50 dB at a target bitrate of ∼1 across all the datasets, with a compression ratio of ∼32. Our results also show that model sizes with a bitrate of 1-2 achieve optimal accuracy. For smaller bitrates, performance starts to drop significantly. Significance. The proposed model provides a low-dimensional, implicit, and continuous representation of 3D dose data. In summary, given a dose distribution, we systematically show how to find a compact model that fits the data accurately.
This study lays the groundwork for future applications of neural representations of dose data in radiation oncology.
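The quoted bitrate and compression figures are mutually consistent: with float32 voxel storage, the compression ratio is 32 divided by the bitrate (bits of model weights per voxel). A back-of-envelope check, with the grid size assumed for illustration:

```python
# Back-of-envelope check of the figures quoted above. The dose grid size is an
# assumption; "bitrate" here means model bits per dose voxel.
voxels = 128 ** 3                     # hypothetical dose grid (~2.1e6 voxels)
orig_bits = voxels * 32               # storing each voxel as a float32
bitrate = 1.0                         # ~1 bit per voxel, as reported
model_bits = voxels * bitrate         # total bit budget for the MLP weights
compression_ratio = orig_bits / model_bits   # equals 32 / bitrate
n_params = model_bits / 32            # float32 parameters affordable in budget
```

At a bitrate of 1 this yields the reported ∼32× compression; at the 1-2 range found optimal above, the ratio is 16-32×.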


Subjects
Radiotherapy Planning, Computer-Assisted; Radiotherapy, Intensity-Modulated; Humans; Male; Neural Networks, Computer; Radiometry; Radiotherapy Dosage; Radiotherapy Planning, Computer-Assisted/methods; Radiotherapy, Intensity-Modulated/methods
14.
Med Image Anal ; 77: 102372, 2022 04.
Article in English | MEDLINE | ID: mdl-35131701

ABSTRACT

X-ray imaging is a widely used approach for viewing the internal structure of a subject for clinical diagnosis, image-guided interventions, and decision-making. X-ray projections acquired at different view angles provide complementary information about the patient's anatomy and are required for stereoscopic or volumetric imaging of the subject. In reality, obtaining multiple-view projections inevitably increases the radiation dose and complicates the clinical workflow. Here we investigate a strategy for obtaining an X-ray projection image at a novel view angle from a given projection image at a specific view angle, to alleviate the need for actual projection measurement. Specifically, a Deep Learning-based Geometry-Integrated Projection Synthesis (DL-GIPS) framework is proposed for the generation of novel-view X-ray projections. The proposed deep learning model extracts geometry and texture features from a source-view projection and then conducts geometry transformation on the geometry features to accommodate the change of view angle. At the final stage, the X-ray projection in the target view is synthesized from the transformed geometry and the shared texture features via an image generator. The feasibility and potential impact of the proposed DL-GIPS model are demonstrated using lung imaging cases. The proposed strategy generalizes to the synthesis of multiple projections from multiple input views and potentially provides a new paradigm for various stereoscopic and volumetric imaging applications with substantially reduced data-acquisition effort.


Subjects
Deep Learning; Algorithms; Cone-Beam Computed Tomography/methods; Humans; Image Processing, Computer-Assisted/methods; Lung; Phantoms, Imaging; Radiography; X-Rays
15.
Sci Rep ; 12(1): 1408, 2022 01 26.
Article in English | MEDLINE | ID: mdl-35082346

ABSTRACT

Magnetic resonance imaging offers unrivaled visualization of the fetal brain, forming the basis for establishing age-specific morphologic milestones. However, gauging age-appropriate neural development remains a difficult task due to the constantly changing appearance of the fetal brain, variable image quality, and frequent motion artifacts. Here we present an end-to-end, attention-guided deep learning model that predicts gestational age with an R² score of 0.945, a mean absolute error of 6.7 days, and a concordance correlation coefficient of 0.970. The convolutional neural network was trained on a heterogeneous dataset of 741 developmentally normal fetal brain images ranging from 19 to 39 weeks in gestational age. We also demonstrate model performance and generalizability using independent datasets from four academic institutions across the U.S. and Turkey, with R² scores of 0.81-0.90 after minimal fine-tuning. The proposed regression algorithm provides an automated, machine-enabled tool with the potential to better characterize in utero neurodevelopment and guide real-time gestational age estimation after the first trimester.


Subjects
Brain/diagnostic imaging; Deep Learning; Gestational Age; Image Processing, Computer-Assisted/statistics & numerical data; Magnetic Resonance Imaging/standards; Neuroimaging/standards; Artifacts; Brain/growth & development; Datasets as Topic; Female; Fetus; Humans; Magnetic Resonance Imaging/methods; Neuroimaging/methods; Pregnancy; Pregnancy Trimesters/physiology; Turkey; United States
16.
Quant Imaging Med Surg ; 11(12): 4881-4894, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34888196

ABSTRACT

Modern conformal beam delivery techniques require image guidance to ensure that the prescribed dose is delivered as planned. Recent advances in artificial intelligence (AI) have greatly augmented our ability to accurately localize the treatment target while sparing normal tissues. In this paper, we review the applications of AI-based algorithms in image-guided radiotherapy (IGRT) and discuss their implications for the future clinical practice of radiotherapy. The benefits, limitations, and some important trends in the research and development of AI-based IGRT techniques are also discussed. AI-based IGRT techniques have the potential to monitor tumor motion, reduce treatment uncertainty, and improve treatment precision. In particular, these techniques allow more healthy tissue to be spared while keeping tumor coverage the same or even improving it.

17.
IEEE Trans Med Imaging ; 40(12): 3369-3378, 2021 12.
Article in English | MEDLINE | ID: mdl-34048339

ABSTRACT

Deep learning is becoming an indispensable tool for imaging applications, such as image segmentation, classification, and detection. In this work, we reformulate a standard deep learning problem into a new neural network architecture with multi-output channels, which reflects different facets of the objective, and apply the deep neural network to improve the performance of image segmentation. By adding one or more interrelated auxiliary-output channels, we impose an effective consistency regularization for the main task of pixelated classification (i.e., image segmentation). Specifically, multi-output-channel consistency regularization is realized by residual learning via additive paths that connect main-output channel and auxiliary-output channels in the network. The method is evaluated on the detection and delineation of lung and liver tumors with public data. The results clearly show that multi-output-channel consistency implemented by residual learning improves the standard deep neural network. The proposed framework is quite broad and should find widespread applications in various deep learning problems.
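Structurally, each auxiliary output channel described above is the main channel plus an additive residual path, so penalizing the residual's contribution acts as a consistency regularizer on the main task. A schematic with hypothetical sizes, not the paper's segmentation network:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for shared backbone features of 16 pixels (hypothetical sizes)
feats = rng.standard_normal((16, 8))
W_main = 0.1 * rng.standard_normal((8, 1))   # main-channel head
W_res = 0.1 * rng.standard_normal((8, 1))    # residual path to the auxiliary head

main_out = feats @ W_main             # main channel: segmentation logits
aux_out = main_out + feats @ W_res    # auxiliary channel = main + residual path

# The residual path carries exactly the main/auxiliary disagreement, so a
# penalty on it regularizes the main prediction toward the auxiliary target
consistency = float(((aux_out - main_out) ** 2).mean())
```

Supervising `aux_out` on a related objective then back-propagates through both the residual path and the main channel, which is the coupling the abstract describes.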


Subjects
Neoplasms; Neural Networks, Computer; Humans; Image Processing, Computer-Assisted; Neoplasms/diagnostic imaging
18.
IEEE Trans Med Imaging ; 40(4): 1113-1122, 2021 04.
Article in English | MEDLINE | ID: mdl-33351753

ABSTRACT

Multi-domain data are widely leveraged in vision applications taking advantage of complementary information from different modalities, e.g., brain tumor segmentation from multi-parametric magnetic resonance imaging (MRI). However, due to possible data corruption and different imaging protocols, the availability of images for each domain could vary amongst multiple data sources in practice, which makes it challenging to build a universal model with a varied set of input data. To tackle this problem, we propose a general approach to complete the random missing domain(s) data in real applications. Specifically, we develop a novel multi-domain image completion method that utilizes a generative adversarial network (GAN) with a representational disentanglement scheme to extract shared content encoding and separate style encoding across multiple domains. We further illustrate that the learned representation in multi-domain image completion could be leveraged for high-level tasks, e.g., segmentation, by introducing a unified framework consisting of image completion and segmentation with a shared content encoder. The experiments demonstrate consistent performance improvement on three datasets for brain tumor segmentation, prostate segmentation, and facial expression image completion respectively.
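The representational disentanglement idea behind this method can be illustrated with a toy numpy sketch (not the GAN itself): if every domain image factors into a shared content code and a domain-specific style code, a missing domain can be completed by recovering content from an available domain and re-rendering it with the missing domain's style. The decoder, scalar style codes, and domain names below are hypothetical simplifications:

```python
import numpy as np

rng = np.random.default_rng(2)

# Shared anatomy (content code) underlying all domains
content = rng.standard_normal((8, 8))
# Per-domain style codes (scalars here; vectors in a real model)
style = {"T1": 0.5, "T2": -1.5}

def render(c, s):
    """Hypothetical decoder: combine content with a domain's style code."""
    return c * np.exp(s)

observed = {d: render(content, s) for d, s in style.items()}

# Suppose the T2 image is missing: invert the decoder on the T1 image
# to recover the shared content, then re-render it in T2 style.
recovered_content = observed["T1"] / np.exp(style["T1"])
completed_T2 = render(recovered_content, style["T2"])
```

In the paper's setting the encoder/decoder are learned adversarially and the shared content encoder also feeds the downstream segmentation network; this sketch only shows why disentangling content from style makes cross-domain completion possible.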


Subjects
Brain Neoplasms; Image Processing, Computer-Assisted; Brain Neoplasms/diagnostic imaging; Humans; Magnetic Resonance Imaging; Male
19.
Nat Biomed Eng ; 3(11): 880-888, 2019 11.
Article in English | MEDLINE | ID: mdl-31659306

ABSTRACT

Tomographic imaging using penetrating waves generates cross-sectional views of the internal anatomy of a living subject. For artefact-free volumetric imaging, projection views from a large number of angular positions are required. Here we show that a deep-learning model trained to map projection radiographs of a patient to the corresponding 3D anatomy can subsequently generate volumetric tomographic X-ray images of the patient from a single projection view. We demonstrate the feasibility of the approach with upper-abdomen, lung, and head-and-neck computed tomography scans from three patients. Volumetric reconstruction via deep learning could be useful in image-guided interventional procedures such as radiation therapy and needle biopsy, and might help simplify the hardware of tomographic imaging systems.
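The input-output pairing such a model learns can be sketched in numpy: a 2D projection radiograph (here an idealized parallel-beam sum through the volume) is the input, and the 3D volume is the training target. The toy volume, shapes, and parallel-beam simplification below are assumptions for illustration, not the paper's acquisition geometry:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 3D "CT volume" (depth x height x width); a real pipeline
# would use patient CT data
volume = rng.random((16, 32, 32))

def project(vol, axis=0):
    """Idealized parallel-beam radiograph: integrate attenuation along one axis."""
    return vol.sum(axis=axis)

# The 2D radiograph is the model input; the 3D volume is the target
# the network learns to reconstruct from that single view.
radiograph = project(volume, axis=0)
print(radiograph.shape)  # (32, 32)
```

The mapping from one (32, 32) projection back to a (16, 32, 32) volume is severely underdetermined, which is why it must be learned from patient-specific prior anatomy rather than solved analytically.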


Subjects
Cone-Beam Computed Tomography/methods; Deep Learning; Radiotherapy Planning, Computer-Assisted/methods; Abdomen/diagnostic imaging; Biopsy, Needle; Head/diagnostic imaging; Humans; Imaging, Three-Dimensional/methods; Lung/diagnostic imaging; Neck/diagnostic imaging; Radiotherapy
20.
Int J Radiat Oncol Biol Phys ; 105(2): 432-439, 2019 10 01.
Article in English | MEDLINE | ID: mdl-31201892

ABSTRACT

PURPOSE: Deep learning is an emerging technique that allows us to capture imaging information beyond what is visually recognizable by a human observer. Because of the anatomic characteristics and location of pancreatic tumors, on-board target verification for radiation delivery is a challenging task. Our goal was to use a deep neural network to localize the pancreatic tumor target on kV x-ray images acquired using an on-board imager for image guided radiation therapy. METHODS AND MATERIALS: The network is set up so that the input is either a digitally reconstructed radiograph image or a monoscopic x-ray projection image acquired by the on-board imager from a given direction, and the output is the location of the planning target volume in the projection image. To produce a sufficient number of training x-ray images reflecting the vast number of possible clinical scenarios of anatomy distribution, a series of changes were introduced to the planning computed tomography images, including deformation, rotation, and translation, to simulate inter- and intrafractional variations. After model training, the accuracy of the model was evaluated by retrospectively studying patients who underwent pancreatic cancer radiation therapy. Statistical analysis using mean absolute differences (MADs) and Lin's concordance correlation coefficient was used to assess the accuracy of the predicted target positions. RESULTS: MADs between the model-predicted and actual positions were found to be less than 2.60 mm in the anteroposterior, lateral, and oblique directions for both axes in the detector plane. For comparison studies with and without fiducials, MADs were less than 2.49 mm. For all cases, Lin's concordance correlation coefficients between the predicted and actual positions were better than 93%, demonstrating the success of the proposed deep learning approach for image guided radiation therapy.
CONCLUSIONS: We demonstrated that markerless pancreatic tumor target localization is achievable with high accuracy using a deep learning approach.
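The two evaluation statistics used above have standard closed forms and are easy to compute; this numpy sketch uses made-up predicted and ground-truth target positions (in mm, along one detector axis) purely to show the arithmetic:

```python
import numpy as np

def mean_absolute_difference(pred, actual):
    """MAD between predicted and actual positions."""
    pred, actual = np.asarray(pred, float), np.asarray(actual, float)
    return np.mean(np.abs(pred - actual))

def lins_ccc(pred, actual):
    """Lin's concordance correlation coefficient between two series."""
    pred, actual = np.asarray(pred, float), np.asarray(actual, float)
    mx, my = pred.mean(), actual.mean()
    vx, vy = pred.var(), actual.var()
    cov = np.mean((pred - mx) * (actual - my))
    return 2 * cov / (vx + vy + (mx - my) ** 2)

# Hypothetical target positions (mm); not data from the study
predicted = [10.2, 12.1, 9.8, 11.5, 13.0]
actual    = [10.0, 12.4, 9.5, 11.9, 12.6]

print(mean_absolute_difference(predicted, actual))  # ~0.32 mm
print(lins_ccc(predicted, actual))
```

Unlike the Pearson correlation, Lin's CCC penalizes any systematic offset or scale difference between the two series (through the (mx - my)^2 and variance terms), which is why it is the appropriate agreement measure for position prediction.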


Subjects
Deep Learning; Pancreatic Neoplasms/diagnostic imaging; Pancreatic Neoplasms/radiotherapy; Radiotherapy, Image-Guided/methods; Datasets as Topic; Fiducial Markers; Humans; Image Processing, Computer-Assisted; Neural Networks, Computer; Organ Motion; Pancreas/diagnostic imaging; Radiography