Results 1 - 20 of 30
1.
Phys Med Biol ; 69(11)2024 May 21.
Article in English | MEDLINE | ID: mdl-38688292

ABSTRACT

Objective. The mean squared error (MSE), also known as L2 loss, has been widely used as a loss function to optimize image denoising models due to its strong performance as a mean estimator of the Gaussian noise model. Recently, various low-dose computed tomography (LDCT) image denoising methods using deep learning combined with the MSE loss have been developed; however, this approach has been observed to suffer from the regression-to-the-mean problem, leading to over-smoothed edges and degradation of texture in the image. Approach. To overcome this issue, we propose a stochastic function in the loss function to improve the texture of the denoised CT images, rather than relying on complicated networks or feature-space losses. The proposed loss function includes the MSE loss to learn the mean distribution and the Pearson divergence loss to learn feature textures. Specifically, the Pearson divergence loss is computed in an image space to measure the distance between two intensity measures of denoised low-dose and normal-dose CT images. The evaluation of the proposed model employs a novel approach of multi-metric quantitative analysis utilizing relative texture feature distance. Results. Our experimental results show that the proposed Pearson divergence loss leads to a significant improvement in texture compared to the conventional MSE loss and a generative adversarial network (GAN), both qualitatively and quantitatively. Significance. Achieving consistent texture preservation in LDCT is a challenge for conventional GAN-type methods due to the adversarial aspects of minimizing noise while preserving texture. By incorporating the Pearson regularizer in the loss function, we can easily achieve a balance between these two conflicting properties. Consistent high-quality CT images can significantly help clinicians in diagnosis and support researchers in the development of AI diagnostic models.
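As a rough sketch of the loss design described in this abstract, the fragment below combines an MSE term with a Pearson (chi-square) divergence between normalized intensity histograms, standing in for the "intensity measures" of the two images. The bin count, histogram range, and weighting factor `lam` are illustrative assumptions, not the paper's settings.

```python
def mse(x, y):
    # Mean squared error between two equal-length intensity lists.
    return sum((a - b) ** 2 for a, b in zip(x, y)) / len(x)

def histogram(values, bins, lo, hi):
    # Normalized intensity histogram: an empirical intensity measure.
    counts = [0] * bins
    width = (hi - lo) / bins
    for v in values:
        i = min(int((v - lo) / width), bins - 1)
        counts[i] += 1
    total = len(values)
    return [c / total for c in counts]

def pearson_divergence(p, q, eps=1e-8):
    # Pearson chi-square divergence: sum((p - q)^2 / q).
    return sum((pi - qi) ** 2 / (qi + eps) for pi, qi in zip(p, q))

def combined_loss(denoised, target, lam=0.1):
    # MSE learns the mean; the Pearson term penalizes mismatched
    # intensity distributions (texture).  lam is an assumed weight.
    p = histogram(denoised, 16, 0.0, 1.0)
    q = histogram(target, 16, 0.0, 1.0)
    return mse(denoised, target) + lam * pearson_divergence(p, q)
```

The Pearson term vanishes when both images share the same intensity distribution, so it acts purely as a texture regularizer on top of the mean estimator.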


Subjects
Image Processing, Computer-Assisted; Radiation Dosage; Signal-To-Noise Ratio; Tomography, X-Ray Computed; Tomography, X-Ray Computed/methods; Image Processing, Computer-Assisted/methods; Humans; Deep Learning
2.
Phys Med Biol ; 68(20)2023 Oct 06.
Article in English | MEDLINE | ID: mdl-37726013

ABSTRACT

Objective. Ultrasound is extensively utilized as a convenient and cost-effective method in emergency situations. Unfortunately, the limited availability of skilled clinicians in emergency settings hinders the wider adoption of point-of-care ultrasound. To overcome this challenge, this paper aims to aid less experienced healthcare providers in emergency lung ultrasound scans. Approach. To assist healthcare providers, it is important to have a comprehensive model that can automatically guide the entire lung ultrasound process based on the clinician's workflow. In this paper, we propose a framework for diagnosing pneumothorax with artificial intelligence (AI) assistance. Specifically, the proposed framework for lung ultrasound scanning follows the steps taken by skilled physicians. It begins with finding the appropriate transducer position on the chest to locate the pleural line accurately in B-mode. The next step involves acquiring temporal M-mode data to determine the presence of lung sliding, a crucial indicator for pneumothorax. To mimic the sequential process of clinicians, two deep learning (DL) models were developed. The first model focuses on quality assurance (QA) and regression of the pleural-line region of interest (ROI), while the second model classifies lung sliding. To achieve inference on a mobile device, the EfficientNet-Lite0 model was further reduced to fewer than 3 million parameters. Main results. Both the QA and lung sliding classification models achieved over 95% in area under the receiver operating characteristic curve (AUC), while the ROI performance reached 89% in the Dice similarity coefficient. The entire stepwise pipeline was simulated using retrospective data, yielding an AUC of 89%. Significance. The stepwise AI framework for pneumothorax diagnosis with QA offers an intelligible guide for each step of the clinical workflow and achieves high precision with real-time inference.
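The stepwise workflow (B-mode quality assurance first, then M-mode lung-sliding classification) can be sketched as a gated pipeline. The model callables, threshold, and return fields below are hypothetical placeholders, not the paper's interfaces.

```python
def stepwise_pneumothorax_pipeline(bmode_frame, mmode_trace,
                                   qa_model, sliding_model, qa_threshold=0.5):
    """Mimic the clinician's workflow: check probe placement quality in
    B-mode first, and classify lung sliding in M-mode only if QA passes.
    qa_model and sliding_model are illustrative stand-ins that return a
    score in [0, 1]; the threshold is an assumption."""
    qa_score = qa_model(bmode_frame)            # step 1: quality assurance
    if qa_score < qa_threshold:
        return {"status": "reposition_probe", "qa": qa_score}
    sliding_prob = sliding_model(mmode_trace)   # step 2: lung sliding present?
    return {
        "status": "ok",
        "qa": qa_score,
        # absent lung sliding is the key indicator for pneumothorax
        "pneumothorax_suspected": sliding_prob < 0.5,
    }
```

Gating the second model on the first mirrors the sequential decision a skilled sonographer makes before interpreting M-mode.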


Subjects
Pneumothorax; Humans; Pneumothorax/diagnostic imaging; Retrospective Studies; Point-of-Care Systems; Artificial Intelligence; Ultrasonography/methods
3.
Med Image Anal ; 80: 102519, 2022 08.
Article in English | MEDLINE | ID: mdl-35767910

ABSTRACT

Recently, deep learning-based denoising methods have been increasingly used for PET image denoising and have shown great achievements. Among these methods, one interesting framework is the conditional deep image prior (CDIP), an unsupervised method that does not need prior training or a large number of training pairs. In this work, we combined CDIP with Logan parametric image estimation to generate high-quality parametric images. In our method, the kinetic model is the Logan reference tissue model, which avoids arterial sampling. The neural network was utilized to represent the images of the Logan slope and intercept. The patient's computed tomography (CT) image or magnetic resonance (MR) image was used as the network input to provide anatomical information. The optimization function was constructed and solved by the alternating direction method of multipliers (ADMM) algorithm. Both simulation and clinical patient datasets demonstrated that the proposed method could generate parametric images with more detailed structures. Quantification results showed that the proposed method achieved higher contrast-to-noise ratio (CNR) improvement ratios (PET/CT datasets: 62.25% ± 29.93%; striatum of brain PET datasets: 129.51% ± 32.13%; thalamus of brain PET datasets: 128.24% ± 31.18%) than Gaussian-filtered results (PET/CT datasets: 23.33% ± 18.63%; striatum: 74.71% ± 8.71%; thalamus: 73.02% ± 9.34%) and nonlocal mean (NLM) denoised results (PET/CT datasets: 37.55% ± 26.56%; striatum: 100.89% ± 16.13%; thalamus: 103.59% ± 16.37%).
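For context, the Logan slope and intercept that the network represents are, in the classical (non-deep) formulation, obtained by a straight-line fit over the late, linear portion of the Logan plot. A minimal ordinary-least-squares fit, assuming the transformed integral terms of the reference and target time-activity curves have already been computed:

```python
def logan_fit(x, y):
    """Ordinary least-squares slope and intercept, as used on the linear
    part of a Logan plot (slope relates to distribution volume ratio).
    x and y are the transformed integral terms; this is the textbook
    estimator, not the paper's network-based one."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    return slope, intercept
```

In the paper, a network outputs per-voxel slope and intercept images instead of fitting each voxel independently, which is what allows the anatomical prior to regularize the estimate.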


Subjects
Image Processing, Computer-Assisted; Positron-Emission Tomography; Algorithms; Computer Simulation; Humans; Image Processing, Computer-Assisted/methods; Positron Emission Tomography Computed Tomography; Positron-Emission Tomography/methods
4.
Phys Med Biol ; 67(11)2022 05 16.
Article in English | MEDLINE | ID: mdl-35390782

ABSTRACT

Objective. There are several x-ray computed tomography (CT) scanning strategies used to reduce radiation dose, such as (1) sparse-view CT, (2) low-dose CT, and (3) region-of-interest (ROI) CT (called interior tomography). To further reduce the dose, sparse-view and/or low-dose CT settings can be applied together with interior tomography. Interior tomography has various advantages in terms of reducing the number of detectors and decreasing the x-ray radiation dose. However, a large patient or a small field-of-view (FOV) detector can cause truncated projections, and the reconstructed images then suffer from severe cupping artifacts. In addition, although low-dose CT can reduce the radiation exposure, analytic reconstruction algorithms produce image noise. Recently, many researchers have utilized image-domain deep learning (DL) approaches to remove each artifact and demonstrated impressive performance, and the theory of deep convolutional framelets explains the reason for the improvement. Approach. In this paper, we found that it is difficult to solve coupled artifacts using an image-domain convolutional neural network (CNN) based on deep convolutional framelets. Significance. To address the coupled problem, we decouple it into two sub-problems: (i) image-domain noise reduction inside the truncated projection, to solve the low-dose CT problem, and (ii) extrapolation of the projection outside the truncated projection, to solve the ROI CT problem. The decoupled sub-problems are solved directly with a novel end-to-end learning method using dual-domain CNNs. Main results. We demonstrate that the proposed method outperforms conventional image-domain DL methods, and a projection-domain CNN shows better performance than the image-domain CNNs commonly used by many researchers.


Subjects
Deep Learning; Algorithms; Artifacts; Humans; Image Processing, Computer-Assisted/methods; Tomography, X-Ray Computed/methods; X-Rays
5.
Diagnostics (Basel) ; 12(1)2022 Jan 03.
Article in English | MEDLINE | ID: mdl-35054267

ABSTRACT

Imaging plays an important role in assessing the severity of COVID-19 pneumonia. Recent COVID-19 research indicates that disease progression propagates from the bottom of the lungs to the top. However, chest radiography (CXR) cannot directly provide a quantitative metric of radiographic opacities, and existing AI-assisted CXR analysis methods do not quantify regional severity. In this paper, to assist regional analysis, we developed a fully automated framework using deep learning-based four-region segmentation and detection models to assist the quantification of COVID-19 pneumonia. Specifically, a segmentation model is first applied to separate the left and right lungs, and then a detection network for the carina and left hilum is used to separate the upper and lower lungs. To improve segmentation performance, an ensemble strategy with five models is exploited. We evaluated the clinical relevance of the proposed method by comparison with the Radiographic Assessment of Lung Edema (RALE) score annotated by physicians. Mean intensities of the four segmented regions show a positive correlation to the regional extent and density scores of pulmonary opacities based on the RALE. Therefore, the proposed method can accurately assist the quantification of regional pulmonary opacities in COVID-19 pneumonia patients.
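The final quantification step, mean intensity per lung region after the four-way split, can be sketched as follows. Splitting left/right at the image midline and upper/lower at a single row are simplifying assumptions standing in for the segmentation and carina/left-hilum detection outputs.

```python
def four_region_means(image, lung_mask, split_row):
    """Mean intensity in upper-left (UL), upper-right (UR), lower-left (LL),
    and lower-right (LR) lung regions.  image and lung_mask are 2-D lists;
    split_row stands in for the detected carina/hilum boundary."""
    h, w = len(image), len(image[0])
    sums = {"UL": [0.0, 0], "UR": [0.0, 0], "LL": [0.0, 0], "LR": [0.0, 0]}
    for r in range(h):
        for c in range(w):
            if not lung_mask[r][c]:
                continue  # only pixels inside the lung mask count
            key = ("U" if r < split_row else "L") + ("L" if c < w // 2 else "R")
            sums[key][0] += image[r][c]
            sums[key][1] += 1
    return {k: (s / n if n else 0.0) for k, (s, n) in sums.items()}
```

In the paper these regional means correlate with the RALE extent and density scores; here the split logic is deliberately reduced to its geometric core.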

6.
Med Phys ; 48(12): 7657-7672, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34791655

ABSTRACT

PURPOSE: Deep learning-based image denoising and reconstruction methods have demonstrated promising performance on low-dose CT imaging in recent years. However, most existing deep learning-based low-dose CT reconstruction methods require normal-dose images for training. Such clean images sometimes do not exist, for example in dynamic CT imaging or for very large patients. The purpose of this work is to develop a deep learning-based low-dose CT image reconstruction algorithm that does not need clean images for training. METHODS: In this paper, we propose a novel reconstruction algorithm in which the image prior is expressed via a Noise2Noise network whose weights are fine-tuned along with the image during iterative reconstruction. The Noise2Noise network builds a self-consistent loss by splitting the projection data and mapping the corresponding filtered backprojection (FBP) results to each other with a deep neural network. The network weights are optimized along with the image to be reconstructed under an alternating optimization scheme. In the proposed method, no clean image is needed for network training, and the testing-time fine-tuning leads to an optimization for each reconstruction. RESULTS: We used the 2016 Low-dose CT Challenge dataset to validate the feasibility of the proposed method. We compared its performance to several existing iterative reconstruction algorithms that do not need clean training data, including total variation, non-local mean, convolutional sparse coding, and Noise2Noise denoising. The proposed Noise2Noise reconstruction achieved better RMSE, SSIM, and texture preservation than the other methods. The performance is also robust against the different noise levels, hyperparameters, and network structures used in the reconstruction. Furthermore, we demonstrated that the proposed method achieves competitive results without any pre-training of the network at all, that is, using randomly initialized network weights during testing. The proposed iterative reconstruction algorithm also shows empirical convergence with and without network pre-training. CONCLUSIONS: The proposed Noise2Noise reconstruction method can achieve promising image quality in low-dose CT image reconstruction. The method works both with and without pre-training, and only noisy data are required for pre-training.
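The core Noise2Noise idea, training against another noisy realization instead of a clean target, can be illustrated in a few lines. The identity predictor and the hand-picked zero-mean noise values below are toy assumptions, not the paper's FBP-based splits.

```python
def noise2noise_loss(predict, split_a, split_b):
    """Self-consistent Noise2Noise loss: the reconstruction from one data
    split is asked to match the OTHER noisy split; no clean target is used."""
    pred = [predict(y) for y in split_a]
    return sum((p - t) ** 2 for p, t in zip(pred, split_b)) / len(pred)

# Why this works: for a constant predictor c, the loss E[(c - y_b)^2] is
# minimized at the mean of the noisy targets, which equals the clean signal
# when the noise is zero-mean.  Deterministic toy values for illustration:
clean = 5.0
noise = [0.8, -0.8, 0.3, -0.3, 0.5, -0.5]   # zero-mean "noise"
split_b = [clean + n for n in noise]
best_c = sum(split_b) / len(split_b)        # least-squares constant predictor
```

In the paper the predictor is a deep network mapping one FBP half-reconstruction to the other, and its weights are additionally updated during each patient's iterative reconstruction.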


Subjects
Image Processing, Computer-Assisted; Neural Networks, Computer; Algorithms; Humans; Phantoms, Imaging; Research Design; Tomography, X-Ray Computed
7.
Phys Med Biol ; 66(23)2021 11 26.
Article in English | MEDLINE | ID: mdl-34768246

ABSTRACT

Segmentation has been widely used in diagnosis, lesion detection, and surgery planning. Although deep learning (DL)-based segmentation methods currently outperform traditional methods, most DL-based segmentation models are computationally expensive and memory inefficient, which makes them unsuitable for the intervention of liver surgery. A simple solution is to make the segmentation model very small for fast inference; however, there is a trade-off between model size and performance. In this paper, we propose a DL-based real-time 3-D liver CT segmentation method in which knowledge distillation (KD), i.e., knowledge transfer from a teacher to a student model, is incorporated to compress the model while preserving performance. Because knowledge transfer is known to be inefficient when the disparity between teacher and student model sizes is large, we propose a growing teacher assistant network (GTAN) to gradually learn the knowledge without extra computational cost, which can efficiently transfer knowledge even with a large gap between teacher and student model sizes. In our results, the Dice similarity coefficient of the student model with KD improved by 1.2% (85.9% to 87.1%) compared to the student model without KD, matching the performance of the teacher model while using only 8% (100k) of its parameters. Furthermore, with a student model of 2% (30k) of the parameters, the proposed model using the GTAN improved the Dice coefficient by about 2% compared to the student model without KD, with an inference time of 13 ms per 3-D image. Therefore, the proposed method has great potential for intervention in liver surgery as well as many other real-time applications.
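A generic form of the knowledge-distillation objective (hard-label cross-entropy plus temperature-softened teacher targets) is sketched below. The temperature, weighting, and logits are illustrative, and the growing-teacher-assistant schedule itself is not reproduced here.

```python
import math

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; higher T softens the distribution.
    exps = [math.exp(z / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distillation_loss(student_logits, teacher_logits, label, T=2.0, alpha=0.5):
    """Classic knowledge-distillation objective: cross-entropy with the hard
    label plus cross-entropy with the teacher's softened outputs at
    temperature T (scaled by T^2 to keep gradient magnitudes comparable).
    A generic KD sketch, not the paper's exact loss."""
    p_s = softmax(student_logits)
    hard = -math.log(p_s[label])                       # hard-label term
    p_s_T = softmax(student_logits, T)
    p_t_T = softmax(teacher_logits, T)
    soft = -sum(t * math.log(s) for t, s in zip(p_t_T, p_s_T))
    return alpha * hard + (1 - alpha) * (T * T) * soft
```

The GTAN idea addresses the case where this loss transfers poorly because the teacher is far larger than the student: intermediate "assistant" models bridge the capacity gap step by step.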


Subjects
Liver; Tomography, X-Ray Computed; Humans; Liver/diagnostic imaging; Radionuclide Imaging
8.
PET Clin ; 16(4): 533-542, 2021 Oct.
Article in English | MEDLINE | ID: mdl-34537129

ABSTRACT

PET can provide functional images revealing physiologic processes in vivo. Although PET has many applications, some limitations still compromise its precision: the absorption of photons in the body causes signal attenuation; the dead-time limit of system components leads to loss of count rate; scattered and random events received by the detector introduce additional noise; the characteristics of the detector limit the spatial resolution; and the scan-time limit (e.g., in dynamic scans) and dose concerns cause a low signal-to-noise ratio. The early PET reconstruction methods were analytical approaches based on an idealized mathematical model.


Subjects
Algorithms; Artificial Intelligence; Humans; Image Processing, Computer-Assisted; Positron-Emission Tomography; Signal-To-Noise Ratio
9.
Phys Med Biol ; 66(15)2021 07 19.
Article in English | MEDLINE | ID: mdl-34198277

ABSTRACT

Our study aims to improve the signal-to-noise ratio of positron emission tomography (PET) imaging using conditional unsupervised learning. The proposed method does not require low- and high-quality pairs for network training and can be easily applied to existing PET/computed tomography (CT) and PET/magnetic resonance (MR) datasets. This method consists of two steps: populational training and individual fine-tuning. For populational training, a network was first pre-trained on a group of patients' noisy PET images and the corresponding anatomical prior images from CT or MR. For individual fine-tuning, a new network with initial parameters inherited from the pre-trained network was fine-tuned with the test patient's noisy PET image and the corresponding anatomical prior image. Only the last few layers were fine-tuned to take advantage of the populational information and the pre-training effort. Both networks shared the same structure and took the CT or MR images as the network input, so that the network output was conditioned on the patient's anatomical prior information. The noisy PET images were used as the training and fine-tuning labels. The proposed method was evaluated on a 68Ga-PPRGD2 PET/CT dataset and an 18F-FDG PET/MR dataset. For the PET/CT dataset, with the original noisy PET image as the baseline, the proposed method has a significantly higher contrast-to-noise ratio (CNR) improvement (71.85% ± 27.05%) than the Gaussian method (12.66% ± 6.19%, P = 0.002), the nonlocal mean method (22.60% ± 13.11%, P = 0.002), and the conditional deep image prior method (52.94% ± 21.79%, P = 0.0039). For the PET/MR dataset, compared to Gaussian (18.73% ± 9.98%, P < 0.0001), NLM (26.01% ± 19.40%, P < 0.0001), and CDIP (47.48% ± 25.36%, P < 0.0001), the CNR improvement ratio of the proposed method (58.07% ± 28.45%) is the highest. In addition, the denoised images from both datasets showed that the proposed method can accurately restore tumor structures while smoothing out the noise.
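The figure of merit quoted throughout this abstract, the contrast-to-noise ratio (CNR) improvement over the noisy baseline, can be computed as below. The ROI/background definitions (e.g., tumor vs. surrounding tissue) are assumed given.

```python
def cnr(roi, background):
    """Contrast-to-noise ratio: (mean ROI - mean background) / std background.
    One common definition; papers vary in the exact normalization."""
    m_roi = sum(roi) / len(roi)
    m_bg = sum(background) / len(background)
    var_bg = sum((b - m_bg) ** 2 for b in background) / len(background)
    return (m_roi - m_bg) / var_bg ** 0.5

def cnr_improvement(roi_before, bg_before, roi_after, bg_after):
    """Percent CNR improvement of the denoised image over the noisy baseline."""
    before = cnr(roi_before, bg_before)
    after = cnr(roi_after, bg_after)
    return 100.0 * (after - before) / before
```

Halving the background noise while keeping the contrast fixed doubles the CNR, i.e., a 100% improvement ratio, which is the scale on which the abstract's percentages should be read.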


Subjects
Positron Emission Tomography Computed Tomography; Unsupervised Machine Learning; Humans; Image Processing, Computer-Assisted; Positron-Emission Tomography; Signal-To-Noise Ratio
10.
Phys Med Biol ; 66(15)2021 07 27.
Article in English | MEDLINE | ID: mdl-34126602

ABSTRACT

Compared to conventional computed tomography (CT), spectral CT provides the capability of material decomposition, which can be used in many clinical diagnosis applications. However, the decomposed images can be very noisy due to the dose limit in CT scanning and the noise magnification of the material decomposition process. To alleviate this situation, we propose an iterative one-step inversion material decomposition algorithm with a Noise2Noise prior. The algorithm estimates material images directly from projection data and uses a Noise2Noise prior for denoising. In contrast to supervised deep learning methods, the designed Noise2Noise prior is built on self-supervised learning and does not need external data for training. In our method, the data consistency term and the Noise2Noise network are alternately optimized in the iterative framework, using a separable quadratic surrogate (SQS) and the Adam algorithm, respectively. The proposed iterative algorithm was validated and compared to other methods on simulated spectral CT data, preclinical photon-counting CT data, and clinical dual-energy CT data. Quantitative analysis showed that our proposed method performs promisingly on noise suppression and structural detail recovery.
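The inversion step that makes decomposed images noisy is, in its simplest two-material, two-energy form, a per-pixel 2x2 linear solve. The basis attenuation values below are made up for illustration; real decompositions work with energy-dependent attenuation of materials such as water and iodine.

```python
def decompose_two_materials(mu_low, mu_high, basis):
    """Solve [mu_low, mu_high] = A @ [a1, a2] for the material coefficients,
    where the 2x2 matrix A (rows: low/high energy) holds the basis-material
    attenuation values.  Small det(A) amplifies measurement noise, which is
    the effect the paper's Noise2Noise prior is designed to suppress."""
    (m1l, m2l), (m1h, m2h) = basis
    det = m1l * m2h - m2l * m1h
    a1 = (mu_low * m2h - m2l * mu_high) / det
    a2 = (m1l * mu_high - mu_low * m1h) / det
    return a1, a2
```

The paper performs this inversion "one-step" from projection data inside an iterative framework rather than pixel-by-pixel on reconstructed images; this sketch only shows why the operation is ill-conditioned.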


Subjects
Algorithms; Image Processing, Computer-Assisted; Phantoms, Imaging; Signal-To-Noise Ratio; Tomography, X-Ray Computed
11.
Eur J Radiol ; 139: 109583, 2021 Jun.
Article in English | MEDLINE | ID: mdl-33846041

ABSTRACT

PURPOSE: As of August 30th, there were a total of 25.1 million confirmed cases and 845 thousand deaths caused by coronavirus disease 2019 (COVID-19) worldwide. With overwhelming demands on medical resources, patient stratification based on risk is essential. In this multi-center study, we built prognosis models to predict severity outcomes, combining patients' electronic health records (EHR), which included vital signs and laboratory data, with deep learning- and CT-based severity prediction. METHOD: We first developed a CT segmentation network using datasets from multiple institutions worldwide. Two biomarkers were extracted from the CT images: total opacity ratio (TOR) and consolidation ratio (CR). After obtaining TOR and CR, further prognosis analysis was conducted on datasets from INSTITUTE-1, INSTITUTE-2, and INSTITUTE-3. For each data cohort, a generalized linear model (GLM) was applied for prognosis prediction. RESULTS: For the deep learning model, the correlation coefficient between the network prediction and manual segmentation was 0.755, 0.919, and 0.824 for the three cohorts, respectively. The AUC (95% CI) of the final prognosis models was 0.85 (0.77, 0.92), 0.93 (0.87, 0.98), and 0.86 (0.75, 0.94) for the INSTITUTE-1, INSTITUTE-2, and INSTITUTE-3 cohorts, respectively. Either TOR or CR appears in each of the three final prognosis models. Age, white blood cell (WBC) count, and platelet (PLT) count were selected as predictors in two cohorts. Oxygen saturation (SpO2) was a selected predictor in one cohort. CONCLUSION: The developed deep learning method can segment lung infection regions. The prognosis results indicated that age, SpO2, the CT biomarkers, PLT, and WBC were the most important prognostic predictors of COVID-19 in our prognosis model.
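A GLM for a binary severity outcome is, in its simplest form, a logistic regression. The sketch below fits one by plain gradient descent; the single feature is a hypothetical stand-in for predictors such as TOR, CR, age, WBC, PLT, or SpO2, and the learning rate and epoch count are arbitrary.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(xs, ys, lr=0.5, epochs=2000):
    """Logistic regression (a GLM with logit link) fitted by batch
    gradient descent on the cross-entropy loss.  xs: list of feature
    vectors, ys: 0/1 outcomes.  A didactic sketch, not the study's model."""
    w = [0.0] * len(xs[0])
    b = 0.0
    n = len(xs)
    for _ in range(epochs):
        gw = [0.0] * len(w)
        gb = 0.0
        for x, y in zip(xs, ys):
            # prediction error drives both weight and bias gradients
            err = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) - y
            for i, xi in enumerate(x):
                gw[i] += err * xi
            gb += err
        w = [wi - lr * gi / n for wi, gi in zip(w, gw)]
        b -= lr * gb / n
    return w, b
```

In the study, predictor selection within the GLM is what identifies TOR/CR and the EHR variables as the important prognostic factors; the fitting machinery itself is standard.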


Subjects
COVID-19; Deep Learning; Electronic Health Records; Humans; Lung; Prognosis; SARS-CoV-2; Tomography, X-Ray Computed
12.
Med Image Anal ; 70: 101993, 2021 05.
Article in English | MEDLINE | ID: mdl-33711739

ABSTRACT

In recent years, deep learning-based image analysis methods have been widely applied in computer-aided detection, diagnosis, and prognosis, and have shown their value during the public health crisis of the novel coronavirus disease 2019 (COVID-19) pandemic. The chest radiograph (CXR) has been playing a crucial role in COVID-19 patient triaging, diagnosis, and monitoring, particularly in the United States. Considering the mixed and unspecific signals in CXR, an image retrieval model of CXR that provides both similar images and associated clinical information can be more clinically meaningful than a direct image diagnostic model. In this work we develop a novel CXR image retrieval model based on deep metric learning. Unlike traditional diagnostic models, which aim at learning a direct mapping from images to labels, the proposed model aims at learning an optimized embedding space of images, where images with the same labels and similar contents are pulled together. The proposed model utilizes a multi-similarity loss with a hard-mining sampling strategy and an attention mechanism to learn the optimized embedding space, and provides similar images, visualizations of disease-related attention maps, and useful clinical information to assist clinical decisions. The model is trained and validated on an international multi-site COVID-19 dataset collected from 3 different sources. Experimental results on COVID-19 image retrieval and diagnosis tasks show that the proposed model can serve as a robust solution for CXR analysis and patient management for COVID-19. The model is also tested for its transferability on a different clinical decision support task for COVID-19, where the pre-trained model is applied to extract image features from a new dataset without any further training. The extracted features are then combined with COVID-19 patients' vitals, lab tests, and medical histories to predict the possibility of airway intubation within 72 hours, which is strongly associated with patient prognosis and is crucial for patient care and hospital resource planning. These results demonstrate that our deep metric learning-based image retrieval model is highly effective for CXR retrieval, diagnosis, and prognosis, and thus has great clinical value for the treatment and management of COVID-19 patients.
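Retrieval in a learned embedding space reduces, at query time, to a nearest-neighbour search: images whose embeddings lie closest to the query's are returned together with their clinical information. A minimal cosine-similarity version, with placeholder embeddings instead of learned CXR features:

```python
def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv)

def retrieve(query_emb, gallery, k=3):
    """Return the k gallery items most similar to the query, most similar
    first, as (id, similarity) pairs.  gallery maps case ids to embedding
    vectors; both ids and vectors here are hypothetical placeholders."""
    ranked = sorted(gallery.items(),
                    key=lambda item: cosine(query_emb, item[1]),
                    reverse=True)
    return [(name, round(cosine(query_emb, emb), 4)) for name, emb in ranked[:k]]
```

The metric-learning losses (such as the multi-similarity loss named in the abstract) are what shape the embedding space so that this simple search returns clinically similar cases.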


Subjects
COVID-19/diagnostic imaging; Deep Learning; Image Interpretation, Computer-Assisted; Tomography, X-Ray Computed; Algorithms; Female; Humans; Male; Middle Aged; Pandemics
13.
Sci Rep ; 11(1): 858, 2021 01 13.
Article in English | MEDLINE | ID: mdl-33441578

ABSTRACT

To compare the performance of artificial intelligence (AI) and Radiographic Assessment of Lung Edema (RALE) scores from frontal chest radiographs (CXRs) for predicting patient outcomes and the need for mechanical ventilation in COVID-19 pneumonia. Our IRB-approved study included 1367 serial CXRs from 405 adult patients (mean age 65 ± 16 years) from two sites in the US (Site A) and South Korea (Site B). We recorded information pertaining to patient demographics (age, gender), smoking history, comorbid conditions (such as cancer, cardiovascular, and other diseases), vital signs (temperature, oxygen saturation), and available laboratory data (such as WBC count and CRP). Two thoracic radiologists performed a qualitative assessment of all CXRs based on the RALE score for assessing the severity of lung involvement. All CXRs were processed with a commercial AI algorithm to obtain the percentage of the lung affected by findings related to COVID-19 (AI score). Independent t- and chi-square tests were used in addition to multiple logistic regression with the area under the curve (AUC) as output for predicting disease outcome and the need for mechanical ventilation. The RALE and AI scores had a strong positive correlation in CXRs from each site (r2 = 0.79-0.86; p < 0.0001). Patients who died or received mechanical ventilation had significantly higher RALE and AI scores than those who recovered or did not need mechanical ventilation (p < 0.001). Patients with a larger difference between baseline and maximum RALE and AI scores had a higher prevalence of death and mechanical ventilation (p < 0.001). The addition of patients' age, gender, WBC count, and peripheral oxygen saturation increased the outcome prediction AUC from 0.87 to 0.94 (95% CI 0.90-0.97) for the RALE scores and from 0.82 to 0.91 (95% CI 0.87-0.95) for the AI scores. The AI algorithm is as robust a predictor of adverse patient outcome (death or need for mechanical ventilation) as the subjective RALE score in patients with COVID-19 pneumonia.
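The AUC values quoted here can be computed directly from scores and binary outcomes using the rank (Mann-Whitney) formulation, without building an ROC curve explicitly:

```python
def auc(scores, labels):
    """Area under the ROC curve: the probability that a randomly chosen
    positive case scores higher than a randomly chosen negative case,
    counting ties as one half.  labels are 0/1 outcomes."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))
```

An AUC of 0.94 (RALE plus clinical variables) therefore means a 94% chance that a patient with an adverse outcome receives a higher model score than one without.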


Subjects
Artificial Intelligence; COVID-19/diagnosis; COVID-19/therapy; Respiration, Artificial; Adult; Aged; Aged, 80 and over; COVID-19/diagnostic imaging; Cohort Studies; Female; Humans; Image Processing, Computer-Assisted; Lung/diagnostic imaging; Lung/pathology; Male; Middle Aged; Organ Size; Prognosis; Tomography, X-Ray Computed; Young Adult
14.
IEEE J Biomed Health Inform ; 24(12): 3529-3538, 2020 12.
Article in English | MEDLINE | ID: mdl-33044938

ABSTRACT

Early and accurate diagnosis of coronavirus disease (COVID-19) is essential for patient isolation and contact tracing so that the spread of infection can be limited. Computed tomography (CT) can provide important information in COVID-19, especially for patients with moderate to severe disease as well as those with worsening cardiopulmonary status. As an automatic tool, deep learning methods can be utilized to perform semantic segmentation of affected lung regions, which is important to establish disease severity and predict prognosis. Both the extent and type of pulmonary opacities help assess disease severity. However, manual pixel-level multi-class labelling is time-consuming, subjective, and non-quantitative. In this article, we propose a hybrid weak-label-based deep learning method that utilizes both the manually annotated pulmonary opacities from COVID-19 pneumonia and the patient-level disease-type information available from the clinical report. A UNet was first trained with semantic labels to segment the total infected region. It was used to initialize another UNet, which was trained to segment the consolidations with patient-level information using the Expectation-Maximization (EM) algorithm. To demonstrate the performance of the proposed method, multi-institutional CT datasets from Iran, Italy, South Korea, and the United States were utilized. Results show that our proposed method can predict both the infected regions and the consolidation regions with good correlation to human annotation.
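The Expectation-Maximization machinery used to exploit patient-level weak labels can be illustrated on the textbook case of a two-component Gaussian mixture: the E-step soft-assigns observations to components, the M-step re-estimates component parameters. This toy (equal weights, fixed variance) is not the paper's segmentation objective.

```python
import math

def gaussian(x, mu, var):
    # Gaussian density at x for mean mu and variance var.
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def em_two_gaussians(data, mu1, mu2, var=1.0, iters=50):
    """Bare-bones EM for a two-component 1-D Gaussian mixture with equal
    weights and fixed variance.  Illustrates the E/M alternation that the
    paper applies to consolidation labels, in the simplest possible setting."""
    for _ in range(iters):
        # E-step: responsibility of component 1 for each point
        r1 = []
        for x in data:
            p1 = gaussian(x, mu1, var)
            p2 = gaussian(x, mu2, var)
            r1.append(p1 / (p1 + p2))
        # M-step: responsibility-weighted mean updates
        s1 = sum(r1)
        s2 = len(data) - s1
        mu1 = sum(r * x for r, x in zip(r1, data)) / s1
        mu2 = sum((1 - r) * x for r, x in zip(r1, data)) / s2
    return mu1, mu2
```

In the paper, the latent quantity is the pixel-level consolidation map and the observed weak signal is the patient-level disease type; the alternation between inferring the latent labels and re-fitting the model is the same.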


Subjects
COVID-19/diagnostic imaging; Deep Learning; Tomography, X-Ray Computed/methods; Algorithms; COVID-19/virology; Female; Humans; Male; Retrospective Studies; SARS-CoV-2/isolation & purification; Severity of Illness Index
15.
N Engl J Med ; 382(20): 1926-1932, 2020 05 14.
Article in English | MEDLINE | ID: mdl-32402162

ABSTRACT

We report the implantation of patient-derived midbrain dopaminergic progenitor cells, differentiated in vitro from autologous induced pluripotent stem cells (iPSCs), in a patient with idiopathic Parkinson's disease. The patient-specific progenitor cells were produced under Good Manufacturing Practice conditions and characterized as having the phenotypic properties of substantia nigra pars compacta neurons; testing in a humanized mouse model (involving peripheral-blood mononuclear cells) indicated an absence of immunogenicity to these cells. The cells were implanted into the putamen (left hemisphere followed by right hemisphere, 6 months apart) of a patient with Parkinson's disease, without the need for immunosuppression. Positron-emission tomography with the use of fluorine-18-L-dihydroxyphenylalanine suggested graft survival. Clinical measures of symptoms of Parkinson's disease after surgery stabilized or improved at 18 to 24 months after implantation. (Funded by the National Institutes of Health and others.).


Subjects
Dopaminergic Neurons/cytology; Induced Pluripotent Stem Cells/transplantation; Parkinson Disease/therapy; Pars Compacta/cytology; Aged; Animals; Basal Ganglia/diagnostic imaging; Basal Ganglia/metabolism; Cell Differentiation; Disease Models, Animal; Dopaminergic Neurons/metabolism; Dopaminergic Neurons/transplantation; Follow-Up Studies; Humans; Induced Pluripotent Stem Cells/immunology; Male; Mice; Mice, SCID; Parkinson Disease/diagnostic imaging; Positron-Emission Tomography; Putamen/diagnostic imaging; Tomography, X-Ray Computed; Transplantation, Autologous; Transplantation, Homologous
16.
Phys Med Biol ; 65(16): 165007, 2020 08 19.
Article in English | MEDLINE | ID: mdl-32454466

ABSTRACT

It is important to measure the respiratory cycle in positron emission tomography (PET) to enhance the contrast of the tumor as well as the accuracy of its localization in organs such as the lung and liver. Several types of data-driven respiratory gating methods, such as center of mass and principal component analysis, have been developed to directly measure the breathing cycle from PET images and list-mode data. However, the breathing cycle is still hard to detect in low signal-to-noise ratio (SNR) data, particularly in low-dose PET/CT scans. To address this issue, time-of-flight (TOF) PET is currently utilized for data-driven respiratory gating because of its higher SNR and better localization of the region of interest. To further improve the accuracy of respiratory gating with TOF information, we propose an accurate data-driven respiratory gating method that retrospectively derives the respiratory signal using a localized sensing method based on a diaphragm mask in TOF PET data. To assess the accuracy of the proposed method, its performance is evaluated with three patient datasets and compared against a pressure-belt signal as the ground truth. In our experiments, we validate that the respiratory signal obtained by the proposed data-driven gating method is well matched to the pressure-belt respiratory signal, with less than 5% peak-time errors and over 80% trace correlations. Based on the gated signals, the respiratory-gated image of the proposed method provides clearer organ edges compared to images using conventional non-TOF methods. Therefore, we demonstrate that the proposed method improves the accuracy of gating signals and image quality.
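The center-of-mass family of data-driven gating methods mentioned above reduces, per time frame, to a counts-weighted average of axial position; breathing shows up as a periodic oscillation of that average. The sketch below uses whole 2-D frames rather than a TOF-localized diaphragm mask, which is a simplification.

```python
def center_of_mass_signal(frames):
    """Data-driven respiratory trace: the axial (row) center of mass of
    counts in each time frame.  frames is a list of 2-D count arrays;
    the paper restricts counts to a diaphragm mask with TOF localization,
    which we omit here for simplicity."""
    signal = []
    for frame in frames:
        total = 0.0
        weighted = 0.0
        for r, row in enumerate(frame):
            row_sum = sum(row)
            total += row_sum
            weighted += r * row_sum
        signal.append(weighted / total)
    return signal
```

Gating then amounts to binning list-mode events by the phase of this trace; the paper's contribution is making the trace itself more reliable at low SNR.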


Subjects
Diaphragm/diagnostic imaging , Liver Neoplasms/diagnostic imaging , Lung Neoplasms/diagnostic imaging , Positron-Emission Tomography/methods , Respiratory-Gated Imaging Techniques/methods , Humans , Respiration , Retrospective Studies , Signal-To-Noise Ratio
17.
Med Phys ; 47(7): 3064-3077, 2020 Jul.
Article in English | MEDLINE | ID: mdl-32279317

ABSTRACT

PURPOSE: To develop a magnetic resonance (MR)-based method for estimating continuous linear attenuation coefficients (LACs) in positron emission tomography (PET) using a physical compartmental model and ultrashort echo time (UTE)/multi-echo Dixon (mUTE) acquisitions. METHODS: We propose a three-dimensional (3D) mUTE sequence that acquires signals from water, fat, and short-T2 components (e.g., bone) simultaneously in a single acquisition. The proposed mUTE sequence integrates 3D UTE with multi-echo Dixon acquisitions and uses sparse radial trajectories to shorten acquisition time. Errors in the radial k-space trajectories are measured using a dedicated k-space trajectory mapping sequence and corrected before image reconstruction. A physical compartmental model is used to fit the measured multi-echo MR signals to obtain the fractions of the water, fat, and bone components for each voxel, which are then used to estimate the continuous LAC map for PET attenuation correction. RESULTS: The performance of the proposed method was evaluated via phantom and in vivo human studies, using LACs from computed tomography (CT) as the reference. Compared to Dixon- and atlas-based MRAC methods, the proposed method yielded PET images with higher correlation and similarity with respect to the reference. The relative absolute errors of the PET activity values reconstructed by the proposed method were below 5% in all four lobes (frontal, temporal, parietal, and occipital), the cerebellum, and the whole white-matter and gray-matter regions across all subjects (n = 6). CONCLUSIONS: The proposed mUTE method can generate subject-specific, continuous LAC maps for PET attenuation correction in PET/MR.
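The compartmental fit can be illustrated with a toy per-voxel least-squares problem. The mono-exponential decay model, the T2* values, and the 511 keV attenuation coefficients below are rough illustrative assumptions, not the paper's values (the actual model also accounts for effects such as fat chemical shift):

```python
import numpy as np

# Illustrative tissue parameters (assumptions, not the paper's values):
# effective T2* in ms and 511 keV LAC in 1/cm for water, fat, and bone.
T2S = {"water": 40.0, "fat": 30.0, "bone": 0.4}
MU = {"water": 0.096, "fat": 0.090, "bone": 0.151}

def fit_fractions(signal, echo_times_ms):
    """Least-squares fit of a toy one-exponential-per-compartment model.
    Returns normalized water/fat/bone fractions for one voxel."""
    A = np.column_stack(
        [np.exp(-echo_times_ms / T2S[k]) for k in ("water", "fat", "bone")]
    )
    frac, *_ = np.linalg.lstsq(A, signal, rcond=None)
    frac = np.clip(frac, 0, None)   # fractions cannot be negative
    return frac / frac.sum()

def voxel_lac(frac):
    """Mix the component LACs by the fitted fractions."""
    return frac @ np.array([MU["water"], MU["fat"], MU["bone"]])
```

The key point is that the short-T2 bone signal decays almost completely between the UTE echo and the later Dixon echoes, which is what makes the bone fraction identifiable from multi-echo data.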


Subjects
Image Processing, Computer-Assisted , Positron-Emission Tomography , Humans , Magnetic Resonance Imaging , Phantoms, Imaging , Tomography, X-Ray Computed
18.
IEEE Trans Comput Imaging ; 5(4): 530-539, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31723575

ABSTRACT

The intrinsically limited spatial resolution of PET confounds image quantitation. This paper presents an image deblurring and super-resolution framework for PET that uses anatomical guidance provided by high-resolution MR images. The framework relies on image-domain post-processing of already-reconstructed PET images by means of spatially variant deconvolution stabilized by an MR-based joint-entropy penalty function. The method is validated through simulation studies based on the BrainWeb digital phantom, experimental studies based on the Hoffman phantom, and clinical neuroimaging studies of aging and Alzheimer's disease. The developed technique was compared with direct deconvolution and with deconvolution stabilized by a quadratic difference penalty, a total variation penalty, and a Bowsher penalty. The BrainWeb simulation study showed that the technique improves image quality and quantitative accuracy as measured by the contrast-to-noise ratio, structural similarity index, root-mean-square error, and peak signal-to-noise ratio. The Hoffman phantom study indicated a noticeable improvement in the structural similarity index (relative to the MR image) and the gray-to-white contrast-to-noise ratio. Finally, clinical amyloid and tau imaging studies for Alzheimer's disease showed a lowering of the coefficient of variation in several key brain regions associated with the two target pathologies.
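A minimal 1D sketch conveys the penalized-deconvolution idea. Here a quadratic smoothness penalty (one of the comparison baselines above) stands in for the MR-based joint-entropy prior, and the blur is shift-invariant rather than spatially variant; the kernel, step size, and penalty weight are illustrative choices:

```python
import numpy as np

def blur(x, psf):
    """Shift-invariant blur: convolution with a point-spread function."""
    return np.convolve(x, psf, mode="same")

def deconvolve(y, psf, beta=0.05, step=0.5, n_iter=200):
    """Penalized Landweber deconvolution: gradient descent on
    ||blur(x) - y||^2 + beta * ||grad x||^2  (a quadratic stand-in for
    the MR joint-entropy penalty used in the paper)."""
    x = y.copy()
    for _ in range(n_iter):
        resid = blur(x, psf) - y
        data_grad = blur(resid, psf[::-1])    # adjoint of the blur operator
        # gradient of the quadratic smoothness penalty: -Laplacian of x
        pen_grad = np.convolve(x, [-1.0, 2.0, -1.0], mode="same")
        x = x - step * (data_grad + beta * pen_grad)
    return x
```

Swapping the quadratic penalty gradient for the gradient of a joint-entropy term evaluated against a registered MR image would give the anatomically guided variant, at the cost of a nonconvex objective.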

19.
Eur J Nucl Med Mol Imaging ; 46(13): 2780-2789, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31468181

ABSTRACT

PURPOSE: The image quality of positron emission tomography (PET) is limited by various physical degradation factors. Our study aims to denoise PET images by utilizing prior information from the same patient. The proposed method is based on unsupervised deep learning, in which no training pairs are needed. METHODS: In this method, a prior high-quality image from the patient is employed as the network input, and the noisy PET image itself is treated as the training label. Constrained by the network structure and the prior image input, the network is trained to learn the intrinsic structure information from the noisy image and to output a restored PET image. To validate the performance of the proposed method, a computer simulation study based on the BrainWeb phantom was first performed. A 68Ga-PRGD2 PET/CT dataset containing 10 patients and an 18F-FDG PET/MR dataset containing 30 patients were then used for clinical evaluation. Gaussian filtering, non-local mean (NLM) filtering using the CT/MR image as a prior, BM4D, and the Deep Decoder method were included as reference methods. Contrast-to-noise ratio (CNR) improvements were used to rank the different methods based on the Wilcoxon signed-rank test. RESULTS: For the simulation study, contrast recovery coefficient (CRC) vs. standard deviation (STD) curves showed that the proposed method achieved the best performance regarding the bias-variance tradeoff. For the clinical PET/CT dataset, the proposed method achieved the highest CNR improvement ratio (53.35% ± 21.78%), compared with the Gaussian (12.64% ± 6.15%, P = 0.002), NLM guided by CT (24.35% ± 16.30%, P = 0.002), BM4D (38.31% ± 20.26%, P = 0.002), and Deep Decoder (41.67% ± 22.28%, P = 0.002) methods. For the clinical PET/MR dataset, the CNR improvement ratio of the proposed method reached 46.80% ± 25.23%, higher than the Gaussian (18.16% ± 10.02%, P < 0.0001), NLM guided by MR (25.36% ± 19.48%, P < 0.0001), BM4D (37.02% ± 21.38%, P < 0.0001), and Deep Decoder (30.03% ± 20.64%, P < 0.0001) methods. The restored images for all datasets demonstrate that the proposed method can effectively smooth out noise while recovering image details. CONCLUSION: The proposed unsupervised deep learning framework provides excellent image restoration, outperforming the Gaussian, NLM, BM4D, and Deep Decoder methods.
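The core idea of constraining the output through a limited-capacity model fed with the patient's own prior image can be sketched in 1D. The tiny single-kernel convolutional "network", its size, and the training loop below are illustrative stand-ins for the paper's deep network, not its architecture; the capacity limit and early stopping play the regularizing role:

```python
import numpy as np

def conv1d(x, w):
    return np.convolve(x, w, mode="same")

def prior_guided_restore(prior, noisy, ksize=5, lr=0.05, n_iter=300):
    """Toy stand-in for the unsupervised scheme: a tiny convolutional
    model maps the patient's high-quality prior image to the noisy PET
    image. Its limited capacity (one small kernel) plus early stopping
    keep it from fitting the noise, so its output is a restored image."""
    w = np.zeros(ksize)
    w[ksize // 2] = 1.0  # start from the identity kernel
    for _ in range(n_iter):
        resid = conv1d(prior, w) - noisy
        # gradient of ||conv(prior, w) - noisy||^2 w.r.t. each kernel tap
        # (np.roll wraps at the edges; acceptable for a sketch)
        grad = np.array(
            [np.dot(np.roll(prior, k - ksize // 2), resid) for k in range(ksize)]
        ) / prior.size
        w -= lr * grad
    return conv1d(prior, w)
```

In the paper the model is a deep network and the prior is a co-registered CT/MR or earlier PET image; the sketch only shows why a constrained model trained with the noisy image as its own label does not simply reproduce the noise.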


Subjects
Deep Learning , Image Enhancement/methods , Positron-Emission Tomography , Signal-To-Noise Ratio , Unsupervised Machine Learning , Adult , Aged , Aged, 80 and over , Female , Humans , Image Processing, Computer-Assisted , Lung Neoplasms/diagnostic imaging , Male , Middle Aged , Phantoms, Imaging , Quality Control
20.
Med Phys ; 46(11): 4763-4776, 2019 Nov.
Article in English | MEDLINE | ID: mdl-31132144

ABSTRACT

PURPOSE: Deep neural network-based image reconstruction has demonstrated promising performance in medical imaging for undersampled and low-dose scenarios. However, it requires a large amount of memory and extensive time for training. Training reconstruction networks is especially challenging for three-dimensional computed tomography (CT) because of the high resolution of CT images. The purpose of this work is to reduce the memory and time consumption of training reconstruction networks for CT, making it practical on current hardware while maintaining the quality of the reconstructed images. METHODS: We unrolled the proximal gradient descent algorithm for iterative image reconstruction into a finite number of iterations and replaced the terms related to the penalty function with trainable convolutional neural networks (CNNs). The network was trained greedily, iteration by iteration, in the image domain on patches, which requires a reasonable amount of memory and time on a mainstream graphics processing unit (GPU). To overcome the local-minimum problem caused by greedy learning, we used a deep U-Net as the CNN and incorporated separable quadratic surrogates with ordered subsets for data fidelity, so that the solution could escape shallow local minima and achieve better image quality. RESULTS: The proposed method achieved image quality comparable to state-of-the-art neural networks for CT image reconstruction on two-dimensional (2D) sparse-view and limited-angle problems on the low-dose CT challenge dataset. The differences in root-mean-square error (RMSE) and structural similarity index (SSIM) were within [-0.23, 0.47] HU and [0, 0.001], respectively, at the 95% confidence level. For three-dimensional (3D) image reconstruction with an ordinary-size CT volume, the proposed method needed only 2 GB of GPU memory and 0.45 s per training iteration as the minimum requirement, whereas existing methods may require 417 GB and 31 min. The proposed method achieved improved performance compared to total variation- and dictionary learning-based iterative reconstruction for both 2D and 3D problems. CONCLUSIONS: We proposed a neural network for CT image reconstruction that is computationally efficient at training time. The proposed method achieved image quality comparable to state-of-the-art neural networks for CT reconstruction, with significantly reduced memory and time requirements during training. The proposed method is applicable to 3D image reconstruction problems such as cone-beam CT and tomosynthesis on mainstream GPUs.
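The unrolled scheme can be sketched on a toy linear system: each unrolled iteration takes a gradient step on the data-fidelity term ||Ax - y||^2 and then applies a "prox" module. In the paper that module is a trained U-Net (one per iteration, learned greedily); here a fixed smoothing filter stands in for it, and a small well-conditioned matrix stands in for the CT projector:

```python
import numpy as np

def unrolled_recon(A, y, n_unroll=20, step=0.9):
    """Sketch of unrolled proximal gradient descent.

    Each iteration: gradient step on ||Ax - y||^2, then a prox-like
    module. A fixed smoothing kernel stands in for the paper's
    trainable CNN, which is an illustrative simplification."""
    x = A.T @ y                                  # back-projection initializer
    step = step / np.linalg.norm(A.T @ A, 2)     # safe step size (1/Lipschitz)
    smooth = np.array([0.25, 0.5, 0.25])         # stand-in prox module
    for _ in range(n_unroll):
        x = x - step * (A.T @ (A @ x - y))       # data-fidelity gradient step
        x = np.convolve(x, smooth, mode="same")  # trained CNN would go here
    return x
```

Because the prox module is applied a fixed, small number of times, training it iteration by iteration only ever needs activations for one unrolled step in memory, which is the source of the memory savings described above.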


Subjects
Deep Learning , Image Processing, Computer-Assisted/methods , Tomography, X-Ray Computed , Imaging, Three-Dimensional , Quality Control , Radiation Dosage , Time Factors