Results 1 - 20 of 122
1.
Eur J Nucl Med Mol Imaging ; 51(2): 358-368, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37787849

ABSTRACT

PURPOSE: Due to various physical degradation factors and limited counts received, PET image quality needs further improvement. The denoising diffusion probabilistic model (DDPM) is a distribution-learning model that transforms a normal distribution into a specific data distribution through iterative refinements. In this work, we proposed and evaluated different DDPM-based methods for PET image denoising. METHODS: Under the DDPM framework, one way to perform PET image denoising is to provide the PET image and/or the prior image as the input. Another way is to supply the prior image as the network input and include the PET image in the refinement steps, which can accommodate scenarios of different noise levels. 150 brain [18F]FDG datasets and 140 brain [18F]MK-6240 (imaging neurofibrillary tangle deposition) datasets were used to evaluate the proposed DDPM-based methods. RESULTS: Quantification showed that the DDPM-based frameworks incorporating PET information generated better results than the nonlocal mean, Unet, and generative adversarial network (GAN)-based denoising methods. Adding an MR prior to the model helped achieve better performance and further reduced the uncertainty during image denoising. Relying solely on the MR prior while ignoring the PET information resulted in large bias. Regional and surface quantification showed that employing the MR prior as the network input while embedding the PET image as a data-consistency constraint during inference achieved the best performance. CONCLUSION: DDPM-based PET image denoising is a flexible framework that can efficiently utilize prior information and achieves better performance than the nonlocal mean, Unet, and GAN-based denoising methods.
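The best-performing configuration above uses the MR prior as the network input and embeds the measured PET image as a data-consistency constraint during the reverse diffusion. A minimal sketch of that idea is shown below; it is not the authors' implementation, and `model`, `alphas_cumprod`, and the blending weight `lam` are illustrative assumptions.

```python
import torch

def ddpm_denoise_with_pet_consistency(model, mr_prior, noisy_pet, alphas_cumprod, lam=0.5):
    """Simplified DDPM sampling where an MR prior conditions the network and the
    measured PET image is blended in as a data-consistency step. All names are
    illustrative; this is not the authors' code."""
    T = len(alphas_cumprod)
    x = torch.randn_like(noisy_pet)                      # start from pure noise
    for t in reversed(range(T)):
        a_bar = alphas_cumprod[t]
        # network predicts the noise component, conditioned on the MR prior
        eps = model(torch.cat([x, mr_prior], dim=1), t)
        # estimate the clean image implied by the current sample (simplified)
        x0_hat = (x - (1 - a_bar).sqrt() * eps) / a_bar.sqrt()
        # data-consistency: pull the estimate toward the measured PET image
        x0_hat = (1 - lam) * x0_hat + lam * noisy_pet
        a_bar_prev = alphas_cumprod[t - 1] if t > 0 else torch.tensor(1.0)
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        # re-noise to the previous timestep
        x = a_bar_prev.sqrt() * x0_hat + (1 - a_bar_prev).sqrt() * noise
    return x
```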


Subjects
Image Processing, Computer-Assisted; Positron-Emission Tomography; Humans; Image Processing, Computer-Assisted/methods; Positron-Emission Tomography/methods; Signal-To-Noise Ratio; Models, Statistical; Algorithms
2.
N Engl J Med ; 382(20): 1926-1932, 2020 05 14.
Article in English | MEDLINE | ID: mdl-32402162

ABSTRACT

We report the implantation of patient-derived midbrain dopaminergic progenitor cells, differentiated in vitro from autologous induced pluripotent stem cells (iPSCs), in a patient with idiopathic Parkinson's disease. The patient-specific progenitor cells were produced under Good Manufacturing Practice conditions and characterized as having the phenotypic properties of substantia nigra pars compacta neurons; testing in a humanized mouse model (involving peripheral-blood mononuclear cells) indicated an absence of immunogenicity to these cells. The cells were implanted into the putamen (left hemisphere followed by right hemisphere, 6 months apart) of a patient with Parkinson's disease, without the need for immunosuppression. Positron-emission tomography with the use of fluorine-18-L-dihydroxyphenylalanine suggested graft survival. Clinical measures of symptoms of Parkinson's disease after surgery stabilized or improved at 18 to 24 months after implantation. (Funded by the National Institutes of Health and others.).


Subjects
Dopaminergic Neurons/cytology; Induced Pluripotent Stem Cells/transplantation; Parkinson Disease/therapy; Pars Compacta/cytology; Aged; Animals; Basal Ganglia/diagnostic imaging; Basal Ganglia/metabolism; Cell Differentiation; Disease Models, Animal; Dopaminergic Neurons/metabolism; Dopaminergic Neurons/transplantation; Follow-Up Studies; Humans; Induced Pluripotent Stem Cells/immunology; Male; Mice; Mice, SCID; Parkinson Disease/diagnostic imaging; Positron-Emission Tomography; Putamen/diagnostic imaging; Tomography, X-Ray Computed; Transplantation, Autologous; Transplantation, Homologous
3.
NMR Biomed ; 35(4): e4224, 2022 04.
Article in English | MEDLINE | ID: mdl-31865615

ABSTRACT

Arterial spin labeling (ASL) imaging is a powerful magnetic resonance imaging technique that allows quantitative, non-invasive measurement of blood perfusion and has great potential for assessing tissue viability in various clinical settings. However, the clinical applications of ASL are currently limited by its low signal-to-noise ratio (SNR), limited spatial resolution, and long imaging time. In this work, we propose an unsupervised deep learning-based image denoising and reconstruction framework to improve the SNR and accelerate the imaging speed of high-resolution ASL imaging. The unique feature of the proposed framework is that it does not require any prior training pairs, only the subject's own anatomical prior, such as T1-weighted images, as network input. The neural network was trained from scratch during the denoising or reconstruction process, with noisy images or sparsely sampled k-space data as training labels. Performance of the proposed method was evaluated using in vivo data from 3 healthy subjects on a 3T MR scanner, using ASL images acquired with a 44-min acquisition time as the ground truth. Both qualitative and quantitative analyses demonstrate the superior performance of the proposed framework over the reference methods. In summary, our proposed unsupervised deep learning-based denoising and reconstruction framework can improve the image quality and accelerate the imaging speed of ASL imaging.
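The training scheme described here resembles a deep-image-prior setup: the anatomical image is the only network input and the noisy acquisition is the label, so no external training pairs are needed. A minimal sketch under those assumptions (network architecture, learning rate, and iteration count are illustrative, not the authors' settings):

```python
import torch
import torch.nn as nn

def train_unsupervised_denoiser(net, t1_prior, noisy_asl, n_iters=2000, lr=1e-3):
    """Unsupervised scheme sketched from the abstract: the subject's own anatomical
    image (e.g., T1-weighted) is the only network input and the noisy ASL image is
    the training label. Hypothetical names; no pre-training pairs are used."""
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(n_iters):
        opt.zero_grad()
        pred = net(t1_prior)             # anatomical prior in, perfusion estimate out
        loss = loss_fn(pred, noisy_asl)  # the noisy ASL image serves as the label
        loss.backward()
        opt.step()
    return net(t1_prior).detach()        # the (early-stopped) output is the denoised image
```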


Subjects
Deep Learning; Brain; Humans; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Signal-To-Noise Ratio; Spin Labels
4.
Eur Radiol ; 32(4): 2235-2245, 2022 Apr.
Article in English | MEDLINE | ID: mdl-34988656

ABSTRACT

BACKGROUND: Main challenges for COVID-19 include the lack of a rapid diagnostic test, of a suitable tool to monitor and predict a patient's clinical course, and of an efficient way to share data among multiple centers. We thus developed a novel artificial intelligence system based on deep learning (DL) and federated learning (FL) for the diagnosis, monitoring, and prediction of a patient's clinical course. METHODS: CT imaging derived from 6 different multicenter cohorts was used for a stepwise diagnostic algorithm to diagnose COVID-19, with or without clinical data. The monitoring algorithm was trained on patients with more than 3 consecutive CT scans. FL was applied for decentralized refinement of independently built DL models. RESULTS: A total of 1,552,988 CT slices from 4804 patients were used. The model can diagnose COVID-19 based on CT alone with an AUC of 0.98 (95% CI 0.97-0.99), and outperforms the radiologists' assessment. We also successfully tested the incorporation of the DL diagnostic model into the FL framework. Its auto-segmentation analyses correlated well with those by radiologists and achieved a high Dice coefficient of 0.77. It can produce a predictive curve of a patient's clinical course if serial CT assessments are available. INTERPRETATION: The system has high consistency in diagnosing COVID-19 based on CT, with or without clinical data. Alternatively, it can be implemented on an FL platform, which could encourage data sharing in the future. It can also produce an objective predictive curve of a patient's clinical course for visualization. KEY POINTS: • CoviDet could diagnose COVID-19 based on chest CT with high consistency, outperforming the radiologists' assessment. Its auto-segmentation analyses correlated well with those by radiologists and could potentially monitor and predict a patient's clinical course if serial CT assessments are available. It can be integrated into the federated learning framework. • CoviDet can be used as an adjunct to aid clinicians with the CT diagnosis of COVID-19 and can potentially be used for disease monitoring; federated learning can potentially open opportunities for global collaboration.
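For reference, the Dice coefficient reported for the auto-segmentation (0.77) is the standard overlap measure between a predicted and a reference mask; a straightforward implementation is sketched below (an illustration, not the authors' evaluation code).

```python
import numpy as np

def dice_coefficient(pred_mask: np.ndarray, ref_mask: np.ndarray, eps: float = 1e-8) -> float:
    """Dice similarity between a predicted and a reference binary segmentation."""
    pred = pred_mask.astype(bool)
    ref = ref_mask.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    return 2.0 * intersection / (pred.sum() + ref.sum() + eps)
```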


Subjects
Artificial Intelligence; COVID-19; Algorithms; Humans; Radiologists; Tomography, X-Ray Computed/methods
5.
Neuroimage ; 240: 118380, 2021 10 15.
Article in English | MEDLINE | ID: mdl-34252526

ABSTRACT

Parametric imaging based on dynamic positron emission tomography (PET) has wide applications in neurology. Compared to indirect methods, direct reconstruction methods, which reconstruct parametric images directly from the raw PET data, have superior image quality due to better noise modeling and richer information extracted from the PET raw data. For low-dose scenarios, the advantages of direct methods are more obvious. However, the wide adoption of direct reconstruction is inevitably impeded by the excessive computational demand and limited access to raw data. In addition, motion modeling within dynamic PET image reconstruction raises further computational challenges for direct reconstruction methods. In this work, we focused on the 18F-FDG Patlak model and proposed a data-driven approach that estimates motion-corrected, full-dose direct Patlak images from the dynamic PET reconstruction series, based on a novel temporal non-local convolutional neural network. During network training, direct reconstruction with motion correction based on full-dose dynamic PET sinograms was performed to obtain the training labels. The reconstructed full-dose/low-dose dynamic PET images were supplied as the network input. In addition, a temporal non-local block based on the dynamic PET images was proposed to better recover the structural information and reduce the image noise. During testing, the proposed network can directly output high-quality Patlak parametric images from the full-dose/low-dose dynamic PET images in seconds. Experiments based on 15 full-dose and 15 low-dose 18F-FDG brain datasets were conducted and analyzed to validate the feasibility of the proposed framework. Results show that the proposed framework can generate better image quality than the reference methods.
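For reference, the Patlak graphical model underlying this work relates the tissue time-activity curve C_T to the plasma input C_p after an equilibration time t*; the slope K_i is the net influx rate that forms the main parametric image and V_0 is the intercept. This is the standard formulation from the dynamic FDG-PET literature, not restated in the abstract:

```latex
\frac{C_T(t)}{C_p(t)} \;=\; K_i \,\frac{\int_0^{t} C_p(\tau)\,\mathrm{d}\tau}{C_p(t)} \;+\; V_0,
\qquad t > t^{*}
```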


Subjects
Brain/diagnostic imaging; Brain/metabolism; Data Interpretation, Statistical; Image Processing, Computer-Assisted/methods; Neural Networks, Computer; Positron-Emission Tomography/methods; Female; Humans; Male
6.
Eur J Nucl Med Mol Imaging ; 48(5): 1351-1361, 2021 05.
Article in English | MEDLINE | ID: mdl-33108475

ABSTRACT

PURPOSE: PET measures of amyloid and tau pathologies are powerful biomarkers for the diagnosis and monitoring of Alzheimer's disease (AD). Because cortical regions are close to bone, the quantitation accuracy of amyloid and tau PET imaging can be significantly influenced by errors in attenuation correction (AC). This work presents an MR-based AC method that combines deep learning with a novel ultrashort echo time (UTE)/multi-echo Dixon (mUTE) sequence for amyloid and tau imaging. METHODS: Thirty-five subjects who underwent both 11C-PiB and 18F-MK6240 scans were included in this study. The proposed method was compared with the Dixon-based atlas method as well as magnetization-prepared rapid acquisition with gradient echo (MPRAGE)- or Dixon-based deep learning methods. The Dice coefficient and validation loss of the generated pseudo-CT images were used for comparison. PET error images regarding standardized uptake value ratio (SUVR) were quantified through regional and surface analysis to evaluate the final AC accuracy. RESULTS: The Dice coefficients of the deep learning methods based on MPRAGE, Dixon, and mUTE images were 0.84 (0.91), 0.84 (0.92), and 0.87 (0.94) for the whole-brain (above-eye) bone regions, respectively, higher than those of the atlas method, 0.52 (0.64). The regional SUVR error for the atlas method was around 6%, higher than the regional SUV error. The regional SUV and SUVR errors for all deep learning methods were below 2%, with the mUTE-based deep learning method performing the best. As for the surface analysis, the atlas method showed the largest error (> 10%) near vertices inside the superior frontal, lateral occipital, superior parietal, and inferior temporal cortices. The mUTE-based deep learning method resulted in the fewest regions with error higher than 1%, with the largest error (> 5%) occurring near the inferior temporal and medial orbitofrontal cortices. CONCLUSION: Deep learning with mUTE can generate accurate AC for amyloid and tau imaging in PET/MR.


Subjects
Deep Learning; Humans; Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Multimodal Imaging; Positron-Emission Tomography; Tomography, X-Ray Computed
7.
PLoS Comput Biol ; 16(9): e1008186, 2020 09.
Article in English | MEDLINE | ID: mdl-32941425

ABSTRACT

Identifying heterogeneous cognitive impairment markers at an early stage is vital for Alzheimer's disease diagnosis. However, due to complex and uncertain brain connectivity features in the cognitive domains, it remains challenging to quantify functional brain connectomic changes during non-pharmacological interventions for amnestic mild cognitive impairment (aMCI) patients. We present a quantitative method for functional brain network analysis of fMRI data based on the multi-graph unsupervised Gaussian embedding method (MG2G). This neural network-based model can effectively learn low-dimensional Gaussian distributions from the original high-dimensional sparse functional brain networks, quantify uncertainties in link prediction, and discover the intrinsic dimensionality of brain networks. Using the Wasserstein distance to measure probabilistic changes, we discovered that brain regions in the default mode network and somatosensory/somatomotor hand, fronto-parietal task control, memory retrieval, and visual and dorsal attention systems had relatively large variations during non-pharmacological training, which might provide distinct biomarkers for fine-grained monitoring of aMCI cognitive alteration. An important finding of our study is the ability of the new method to capture subtle changes for individual patients before and after short-term intervention. More broadly, the MG2G method can be used in studying multiple brain disorders and injuries, e.g., in Parkinson's disease or traumatic brain injury (TBI), and hence it will be useful to the wider neuroscience community.
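The Wasserstein distance used above compares Gaussian node embeddings. For Gaussians with diagonal covariances, the squared 2-Wasserstein distance has a simple closed form, sketched below; the reduction to means and standard deviations is a standard property of the general formula, while the embedding inputs are illustrative.

```python
import numpy as np

def wasserstein2_diag_gaussians(mu1, sigma1, mu2, sigma2):
    """Squared 2-Wasserstein distance between two Gaussian embeddings with diagonal
    covariances. The general formula
        W2^2 = ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 (S2^(1/2) S1 S2^(1/2))^(1/2))
    reduces, for diagonal covariances, to squared differences of means and
    standard deviations."""
    mu1, mu2 = np.asarray(mu1), np.asarray(mu2)
    sigma1, sigma2 = np.asarray(sigma1), np.asarray(sigma2)
    return float(np.sum((mu1 - mu2) ** 2) + np.sum((sigma1 - sigma2) ** 2))
```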


Subjects
Brain; Cognitive Dysfunction; Diagnosis, Computer-Assisted/methods; Normal Distribution; Brain/diagnostic imaging; Brain/physiopathology; Cognitive Dysfunction/diagnostic imaging; Cognitive Dysfunction/physiopathology; Cognitive Dysfunction/therapy; Connectome; Humans; Magnetic Resonance Imaging; Memory/physiology; Mental Status and Dementia Tests; Middle Aged; Unsupervised Machine Learning
8.
Biochem Biophys Res Commun ; 509(1): 16-23, 2019 01 29.
Article in English | MEDLINE | ID: mdl-30581005

ABSTRACT

Pathological cardiac hypertrophy is a leading cause of morbidity and mortality worldwide, but the molecular mechanisms underlying its progression remain unclear. In this study, we showed that expression of leukocyte immunoglobulin-like receptor B4 (LILRB4), which is associated with the pathological development of various inflammatory diseases, was down-regulated in pressure overload-induced hearts of patients and mice. LILRB4-knockout mice developed aggravated cardiac hypertrophy and heart failure, with increased cardiac dysfunction, fibrosis, inflammation, and apoptosis. Mechanistically, transforming growth factor β1 (TGF-β1) expression was significantly increased by LILRB4 deficiency in the hearts of mice after aortic banding (AB) surgery. AB-induced inflammation in cardiac tissue was accelerated by LILRB4 deletion through elevated nuclear factor κB (NF-κB) signaling. Furthermore, apoptosis triggered by the AB operation in heart tissue was markedly enhanced in LILRB4-KO mice through increased Caspase-3 activation. Importantly, the in vitro study indicated that the fibrosis, inflammation, and apoptosis promoted by LILRB4 knockdown acted largely via NF-κB signaling. Together, these findings identify LILRB4 as a negative regulator of cardiac remodeling and a potential therapeutic target for the prevention of cardiac hypertrophy and heart failure.


Subjects
Cardiomegaly/pathology; Inflammation/pathology; Membrane Glycoproteins/immunology; Myocardium/pathology; NF-kappa B/immunology; Receptors, Cell Surface/immunology; Receptors, Immunologic/immunology; Animals; Apoptosis; Cardiomegaly/genetics; Cardiomegaly/immunology; Cells, Cultured; Down-Regulation; Fibrosis; Humans; Inflammation/genetics; Inflammation/immunology; Male; Membrane Glycoproteins/genetics; Mice, Inbred C57BL; Mice, Knockout; Myocardium/metabolism; Receptors, Cell Surface/genetics; Receptors, Immunologic/genetics
9.
Eur J Nucl Med Mol Imaging ; 46(13): 2780-2789, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31468181

ABSTRACT

PURPOSE: Image quality of positron emission tomography (PET) is limited by various physical degradation factors. Our study aims to perform PET image denoising by utilizing prior information from the same patient. The proposed method is based on unsupervised deep learning, where no training pairs are needed. METHODS: In this method, a prior high-quality image from the patient was employed as the network input and the noisy PET image itself was treated as the training label. Constrained by the network structure and the prior image input, the network was trained to learn the intrinsic structure information from the noisy image and output a restored PET image. To validate the performance of the proposed method, a computer simulation study based on the BrainWeb phantom was first performed. A 68Ga-PRGD2 PET/CT dataset containing 10 patients and an 18F-FDG PET/MR dataset containing 30 patients were then used for clinical evaluation. The Gaussian, non-local mean (NLM) using CT/MR images as priors, BM4D, and Deep Decoder methods were included as reference methods. The contrast-to-noise ratio (CNR) improvements were used to rank the different methods based on the Wilcoxon signed-rank test. RESULTS: For the simulation study, contrast recovery coefficient (CRC) vs. standard deviation (STD) curves showed that the proposed method achieved the best performance regarding the bias-variance tradeoff. For the clinical PET/CT dataset, the proposed method achieved the highest CNR improvement ratio (53.35% ± 21.78%), compared with the Gaussian (12.64% ± 6.15%, P = 0.002), NLM guided by CT (24.35% ± 16.30%, P = 0.002), BM4D (38.31% ± 20.26%, P = 0.002), and Deep Decoder (41.67% ± 22.28%, P = 0.002) methods. For the clinical PET/MR dataset, the CNR improvement ratio of the proposed method reached 46.80% ± 25.23%, higher than the Gaussian (18.16% ± 10.02%, P < 0.0001), NLM guided by MR (25.36% ± 19.48%, P < 0.0001), BM4D (37.02% ± 21.38%, P < 0.0001), and Deep Decoder (30.03% ± 20.64%, P < 0.0001) methods. Restored images for all the datasets demonstrate that the proposed method can effectively smooth out noise while recovering image details. CONCLUSION: The proposed unsupervised deep learning framework provides excellent image restoration, outperforming the Gaussian, NLM, BM4D, and Deep Decoder methods.
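The ranking metric above is the CNR improvement ratio. One common way to compute it is sketched below; exact ROI and background definitions are study-specific, so this is an illustration rather than the authors' evaluation code.

```python
import numpy as np

def cnr(image, roi_mask, background_mask):
    """Contrast-to-noise ratio of a region of interest against a background region
    (one common definition; conventions vary between studies)."""
    roi = image[roi_mask]
    bkg = image[background_mask]
    return (roi.mean() - bkg.mean()) / bkg.std()

def cnr_improvement_ratio(noisy, denoised, roi_mask, background_mask):
    """Relative CNR improvement of the denoised image over the noisy input,
    the kind of figure of merit (e.g., 53.35% +/- 21.78%) used to rank methods above."""
    cnr_noisy = cnr(noisy, roi_mask, background_mask)
    cnr_denoised = cnr(denoised, roi_mask, background_mask)
    return (cnr_denoised - cnr_noisy) / cnr_noisy
```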


Subjects
Deep Learning; Image Enhancement/methods; Positron-Emission Tomography; Signal-To-Noise Ratio; Unsupervised Machine Learning; Adult; Aged; Aged, 80 and over; Female; Humans; Image Processing, Computer-Assisted; Lung Neoplasms/diagnostic imaging; Male; Middle Aged; Phantoms, Imaging; Quality Control
10.
J Geriatr Psychiatry Neurol ; 32(6): 344-353, 2019 11.
Article in English | MEDLINE | ID: mdl-31480987

ABSTRACT

It is widely recognized that depression may precipitate the incidence of dementia in elderly individuals, and in individuals with amnestic mild cognitive impairment (aMCI) in particular. However, the association between subthreshold depression (SD) and cognitive deficits in patients with aMCI remains unclear. To address this, we collected demographic information and conducted a battery of neuropsychological cognitive assessments in 33 aMCI participants with SD (aMCI/SD+), 33 nondepressed aMCI participants (aMCI/SD-), and 53 normal controls (NC). Both aMCI groups showed significantly poorer performance in most cognitive domains relative to the NC group (ie, memory, language, processing speed, and executive function). Notably, the aMCI/SD+ group showed significantly poorer attention/working memory compared with the aMCI/SD- group. Multiple linear regression analyses revealed a significant negative association between the severity of depressive symptoms and attention/working memory capacity (β = -.024, P = .024), accounting for 8.28% of the variance in this cognitive domain. All statistical analyses were adjusted for age, sex, and years of education. A logistic regression model had an accuracy of 72.4% in discriminating between the aMCI/SD+ and aMCI/SD- groups based on individual cognitive profiles over 6 domains. Our findings indicate that patients with aMCI with and without SD have distinct patterns of cognitive impairment. This finding may facilitate the diagnosis and treatment of SD in patients with aMCI.


Subjects
Amnesia/psychology; Cognitive Dysfunction/psychology; Depression/physiopathology; Neuropsychological Tests/standards; Aged; Aged, 80 and over; Female; Humans; Male; Middle Aged
11.
Neuroimage ; 180(Pt A): 267-279, 2018 10 15.
Article in English | MEDLINE | ID: mdl-28712993

ABSTRACT

Visual gamma oscillations have been proposed to subserve perceptual binding, but their strong modulation by diverse stimulus features confounds interpretations of their precise functional role. Overcoming this challenge necessitates a comprehensive account of the relationship between gamma responses and stimulus features. Here we used multivariate pattern analyses on human MEG data to characterize the relationships between gamma responses and one basic stimulus feature, the orientation of contrast edges. Our findings confirmed that we could decode orientation information from induced responses in two dominant frequency bands at 24-32 Hz and 50-58 Hz. Decoding was higher for cardinal than oblique orientations, with similar results also obtained for evoked MEG responses. In contrast to the multivariate analyses, orientation information was mostly absent in univariate signals: evoked and induced responses in early visual cortex were similar across all orientations; the only exception was an inverse oblique effect in induced responses, such that cardinal orientations produced weaker oscillatory signals than oblique orientations. Taken together, our results show that multivariate methods are well suited to the analysis of gamma oscillations, with multivariate patterns robustly encoding orientation information and predominantly discriminating cardinal from oblique stimuli.
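A multivariate decoding analysis of the kind described above can be sketched as a cross-validated classifier on band-limited power features; the sketch below uses scikit-learn with illustrative data shapes and is not the authors' pipeline.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

def decode_orientation(band_power: np.ndarray, labels: np.ndarray, n_folds: int = 5) -> float:
    """Cross-validated decoding of stimulus orientation from band-limited power.
    band_power: (n_trials, n_sensors) features for one frequency band (e.g., 50-58 Hz);
    labels: orientation class per trial. Shapes and preprocessing are illustrative."""
    clf = make_pipeline(StandardScaler(), LinearSVC())
    scores = cross_val_score(clf, band_power, labels, cv=n_folds)
    return scores.mean()
```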


Subjects
Brain Mapping/methods; Pattern Recognition, Visual/physiology; Signal Processing, Computer-Assisted; Visual Cortex/physiology; Adult; Evoked Potentials, Visual/physiology; Female; Humans; Magnetoencephalography/methods; Male; Multivariate Analysis; Orientation/physiology; Support Vector Machine; Young Adult
12.
JAMA ; 318(22): 2199-2210, 2017 12 12.
Article in English | MEDLINE | ID: mdl-29234806

ABSTRACT

Importance: Application of deep learning algorithms to whole-slide pathology images can potentially improve diagnostic accuracy and efficiency. Objective: Assess the performance of automated deep learning algorithms at detecting metastases in hematoxylin and eosin-stained tissue sections of lymph nodes of women with breast cancer and compare it with pathologists' diagnoses in a diagnostic setting. Design, Setting, and Participants: Researcher challenge competition (CAMELYON16) to develop automated solutions for detecting lymph node metastases (November 2015-November 2016). A training data set of whole-slide images from 2 centers in the Netherlands with (n = 110) and without (n = 160) nodal metastases verified by immunohistochemical staining were provided to challenge participants to build algorithms. Algorithm performance was evaluated in an independent test set of 129 whole-slide images (49 with and 80 without metastases). The same test set of corresponding glass slides was also evaluated by a panel of 11 pathologists with time constraint (WTC) from the Netherlands to ascertain likelihood of nodal metastases for each slide in a flexible 2-hour session, simulating routine pathology workflow, and by 1 pathologist without time constraint (WOTC). Exposures: Deep learning algorithms submitted as part of a challenge competition or pathologist interpretation. Main Outcomes and Measures: The presence of specific metastatic foci and the absence vs presence of lymph node metastasis in a slide or image using receiver operating characteristic curve analysis. The 11 pathologists participating in the simulation exercise rated their diagnostic confidence as definitely normal, probably normal, equivocal, probably tumor, or definitely tumor. Results: The area under the receiver operating characteristic curve (AUC) for the algorithms ranged from 0.556 to 0.994. The top-performing algorithm achieved a lesion-level, true-positive fraction comparable with that of the pathologist WOTC (72.4% [95% CI, 64.3%-80.4%]) at a mean of 0.0125 false-positives per normal whole-slide image. For the whole-slide image classification task, the best algorithm (AUC, 0.994 [95% CI, 0.983-0.999]) performed significantly better than the pathologists WTC in a diagnostic simulation (mean AUC, 0.810 [range, 0.738-0.884]; P < .001). The top 5 algorithms had a mean AUC that was comparable with the pathologist interpreting the slides in the absence of time constraints (mean AUC, 0.960 [range, 0.923-0.994] for the top 5 algorithms vs 0.966 [95% CI, 0.927-0.998] for the pathologist WOTC). Conclusions and Relevance: In the setting of a challenge competition, some deep learning algorithms achieved better diagnostic performance than a panel of 11 pathologists participating in a simulation exercise designed to mimic routine pathology workflow; algorithm performance was comparable with an expert pathologist interpreting whole-slide images without time constraints. Whether this approach has clinical utility will require evaluation in a clinical setting.


Subjects
Breast Neoplasms/pathology; Lymphatic Metastasis/diagnosis; Machine Learning; Pathologists; Algorithms; Female; Humans; Lymphatic Metastasis/pathology; Pathology, Clinical; ROC Curve
13.
Alzheimers Dement ; 13(11): 1261-1269, 2017 Nov.
Article in English | MEDLINE | ID: mdl-28366797

ABSTRACT

INTRODUCTION: Misfolded tau and amyloid β (Aβ) proteins progressively accumulate in the human brain, causing altered neuronal function and neurodegeneration. This study sought to investigate whether the wide spectrum of functional reorganization in aging brains of cognitively normal individuals relates to specific pathological patterns of tau and Aβ deposits. METHODS: We used functional connectivity neuroimaging and in vivo tau and Aβ positron emission tomography scans to study cortical spatial relationships between imaging modalities. RESULTS: We found that a negative association between tau and functional connectivity combined with a positive association between Aβ and functional connectivity is the most frequent cortical pattern among elderly subjects. Moreover, we found specific brain areas that interrelate hypoconnectivity and hyperconnectivity regions. DISCUSSION: Our findings have critical implications for understanding how the two main elements of Alzheimer's disease-related pathology affect the aging brain and how they cause alterations in large-scale neuronal circuits.


Subjects
Aging/pathology; Brain/metabolism; Brain/pathology; Neural Pathways/pathology; tau Proteins/metabolism; Aged; Aged, 80 and over; Aniline Compounds; Brain/diagnostic imaging; Brain Mapping; Female; Humans; Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Male; Neural Pathways/diagnostic imaging; Oxygen/blood; Positron-Emission Tomography; Thiazoles
14.
Sheng Wu Yi Xue Gong Cheng Xue Za Zhi ; 34(6): 842-849, 2017 Dec 01.
Article in Chinese | MEDLINE | ID: mdl-29761977

ABSTRACT

In this paper, a deep learning method is proposed to build an automatic classification algorithm for the severity of chronic obstructive pulmonary disease. Large-sample clinical data used as input features were analyzed for their weights in the classification. Through feature selection, model training, parameter optimization, and model testing, a classification prediction model based on a deep belief network was built to predict the severity grades defined by the Global Initiative for Chronic Obstructive Lung Disease (GOLD). We obtained prediction accuracy over 90% for the two versions of the severity criteria issued in 2007 and 2011, respectively. Moreover, we obtained the contribution ranking of the different input features by analyzing the model coefficient matrix and confirmed that the most contributive input features agreed to a certain degree with clinical diagnostic knowledge, which supports the validity of the deep belief network model. This study provides an effective solution for applying deep learning methods to automatic diagnostic decision making.

15.
Neuroimage ; 125: 587-600, 2016 Jan 15.
Article in English | MEDLINE | ID: mdl-26481679

ABSTRACT

Motivated by recent progress in signal processing on graphs, we have developed a matched signal detection (MSD) theory for signals with intrinsic structures described by weighted graphs. First, we regard graph Laplacian eigenvalues as frequencies of graph-signals and assume that the signal is in a subspace spanned by the first few graph Laplacian eigenvectors associated with lower eigenvalues. The conventional matched subspace detector can be applied to this case. Furthermore, we study signals that may not merely live in a subspace. Concretely, we consider signals with bounded variation on graphs and more general signals that are randomly drawn from a prior distribution. For bounded variation signals, the test is a weighted energy detector. For the random signals, the test statistic is the difference of signal variations on associated graphs, if a degenerate Gaussian distribution specified by the graph Laplacian is adopted. We evaluate the effectiveness of the MSD on graphs both with simulated and real data sets. Specifically, we apply MSD to the brain imaging data classification problem of Alzheimer's disease (AD) based on two independent data sets: 1) positron emission tomography data with Pittsburgh compound-B tracer of 30 AD and 40 normal control (NC) subjects, and 2) resting-state functional magnetic resonance imaging (R-fMRI) data of 30 early mild cognitive impairment and 20 NC subjects. Our results demonstrate that the MSD approach is able to outperform the traditional methods and help detect AD at an early stage, probably due to the success of exploiting the manifold structure of the data.
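The core idea of the matched-subspace variant described above can be sketched in a few lines: treat the first k eigenvectors of the graph Laplacian as the low-graph-frequency signal subspace and use the energy of the projection onto it as the detection statistic. Thresholding and the bounded-variation and random-signal variants are omitted; variable names are illustrative.

```python
import numpy as np

def graph_subspace_energy(signal: np.ndarray, adjacency: np.ndarray, k: int = 10) -> float:
    """Energy of a graph-signal projected onto the subspace spanned by the first k
    graph Laplacian eigenvectors (the lowest 'graph frequencies')."""
    degree = np.diag(adjacency.sum(axis=1))
    laplacian = degree - adjacency
    eigvals, eigvecs = np.linalg.eigh(laplacian)   # eigenvalues in ascending order
    basis = eigvecs[:, :k]                         # low-frequency subspace
    projection = basis @ (basis.T @ signal)
    return float(np.sum(projection ** 2))          # detection statistic
```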


Subjects
Alzheimer Disease/diagnosis; Brain Mapping/methods; Brain/pathology; Image Interpretation, Computer-Assisted/methods; Models, Neurological; Algorithms; Humans; Machine Learning; Magnetic Resonance Imaging; Models, Theoretical; Positron-Emission Tomography
16.
Med Phys ; 51(5): 3309-3321, 2024 May.
Article in English | MEDLINE | ID: mdl-38569143

ABSTRACT

BACKGROUND: Patient head motion is a common source of image artifacts in computed tomography (CT) of the head, leading to degraded image quality and potentially incorrect diagnoses. Partial angle reconstruction (PAR) divides the CT projections into several consecutive angular segments and reconstructs each segment individually. Although motion estimation and compensation using PAR has been developed and investigated for cardiac CT scans, its potential for reducing motion artifacts in head CT scans remains unexplored. PURPOSE: To develop a deep learning (DL) model capable of directly estimating head motion from PAR images of head CT scans and to integrate the estimated motion into an iterative reconstruction process to compensate for the motion. METHODS: Head motion is considered a rigid transformation described by six time-variant variables: three for translation and three for rotation. Each motion variable is modeled using a B-spline defined by five control points (CP) along time. We split the full 360° projections into 25 consecutive PARs and input them into a convolutional neural network (CNN) that outputs the estimated CPs for each motion variable. The estimated CPs are used to calculate the object motion in each projection, which is incorporated into the forward projection and backprojection of an iterative reconstruction algorithm to reconstruct the motion-compensated image. The performance of our DL model is evaluated through both simulation and phantom studies. RESULTS: The DL model achieved high accuracy in estimating head motion, as demonstrated in both the simulation study (mean absolute error (MAE) ranging from 0.28 to 0.45 mm or degree across different motion variables) and the phantom study (MAE ranging from 0.40 to 0.48 mm or degree). The resulting motion-corrected image, I_DL,PAR, exhibited a significant reduction in motion artifacts compared to traditional filtered back-projection reconstructions, as evidenced in both the simulation study (image MAE drops from 178 ± 33 HU to 37 ± 9 HU; structural similarity index (SSIM) increases from 0.60 ± 0.06 to 0.98 ± 0.01) and the phantom study (image MAE drops from 117 ± 17 HU to 42 ± 19 HU; SSIM increases from 0.83 ± 0.04 to 0.98 ± 0.02). CONCLUSIONS: We demonstrate that using PAR and our proposed deep learning model enables accurate estimation of patient head motion and effectively reduces motion artifacts in the resulting head CT images.
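Each motion variable above is a B-spline curve over acquisition time defined by five control points estimated by the CNN. A sketch of evaluating such a curve at every projection view is shown below; the clamped knot placement is an assumption for illustration, not necessarily the authors' choice.

```python
import numpy as np
from scipy.interpolate import BSpline

def motion_curve_from_control_points(control_points, n_views=360, degree=3):
    """Evaluate one time-variant rigid-motion variable (a translation or rotation)
    parameterized by a B-spline with a handful of control points (e.g., the five
    values predicted by the CNN) at every projection view."""
    cps = np.asarray(control_points, dtype=float)
    n = len(cps)
    # clamped knot vector so the curve starts/ends at the first/last control point
    inner = np.linspace(0.0, 1.0, n - degree + 1)[1:-1]
    knots = np.concatenate([np.zeros(degree + 1), inner, np.ones(degree + 1)])
    spline = BSpline(knots, cps, degree)
    t = np.linspace(0.0, 1.0, n_views)     # normalized acquisition time
    return spline(t)                        # motion value per projection view
```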


Subjects
Artifacts; Deep Learning; Image Processing, Computer-Assisted; Tomography, X-Ray Computed; Humans; Tomography, X-Ray Computed/methods; Image Processing, Computer-Assisted/methods; Head/diagnostic imaging; Head Movements; Phantoms, Imaging
17.
Comput Med Imaging Graph ; 115: 102389, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38692199

ABSTRACT

Accurate reconstruction of a high-resolution 3D volume of the heart is critical for comprehensive cardiac assessments. However, cardiac magnetic resonance (CMR) data are usually acquired as a stack of 2D short-axis (SAX) slices, which suffers from inter-slice misalignment due to cardiac motion and from data sparsity caused by the large gaps between SAX slices. We therefore propose an end-to-end deep learning (DL) model that addresses these two challenges simultaneously, employing specific model components for each challenge. The objective is to reconstruct a high-resolution 3D volume of the heart (VHR) from the acquired CMR SAX slices (VLR). We define the transformation from VLR to VHR as a sequential process of motion correction and super-resolution. Accordingly, our DL model incorporates two distinct components. The first component performs motion correction by predicting displacement vectors to re-position each SAX slice accurately. The second component takes the motion-corrected SAX slices from the first component and performs super-resolution to fill the data gaps. These two components operate sequentially, and the entire model is trained end-to-end. Our model significantly reduced inter-slice misalignment from 3.33±0.74 mm to 1.36±0.63 mm and generated accurate high-resolution 3D volumes, with Dice of 0.974±0.010 for the left ventricle (LV) and 0.938±0.017 for the myocardium in a simulation dataset. When compared to the LAX contours in a real-world dataset, our model achieved Dice of 0.945±0.023 for the LV and 0.786±0.060 for the myocardium. In both datasets, our model with specific components for motion correction and super-resolution significantly enhances performance compared to a model without such design considerations. The code for our model is available at https://github.com/zhennongchen/CMR_MC_SR_End2End.


Subjects
Deep Learning; Heart; Imaging, Three-Dimensional; Magnetic Resonance Imaging; Humans; Imaging, Three-Dimensional/methods; Heart/diagnostic imaging; Magnetic Resonance Imaging/methods; Motion; Image Processing, Computer-Assisted/methods
18.
Med Phys ; 51(3): 2096-2107, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37776263

ABSTRACT

BACKGROUND: Radiotherapy (RT) combined with cetuximab is the standard treatment for patients with inoperable head and neck cancers. Segmentation of head and neck (H&N) tumors is a prerequisite for radiotherapy planning but a time-consuming process. In recent years, deep convolutional neural networks (DCNN) have become the de facto standard for automated image segmentation. However, due to the expensive computational cost associated with enlarging the field of view in DCNNs, their ability to model long-range dependency is still limited, and this can result in sub-optimal segmentation performance for objects with background context spanning long distances. On the other hand, Transformer models have demonstrated excellent capabilities in capturing such long-range information in several semantic segmentation tasks performed on medical images. PURPOSE: Despite the impressive representation capacity of vision transformer models, current vision transformer-based segmentation models still suffer from inconsistent and incorrect dense predictions when fed with multi-modal input data. We suspect that the power of their self-attention mechanism may be limited in extracting the complementary information that exists in multi-modal data. To this end, we propose a novel segmentation model, dubbed Cross-modal Swin Transformer (SwinCross), with a cross-modal attention (CMA) module to incorporate cross-modal feature extraction at multiple resolutions. METHODS: We propose a novel architecture for cross-modal 3D semantic segmentation with two main components: (1) a cross-modal 3D Swin Transformer for integrating information from multiple modalities (PET and CT), and (2) a cross-modal shifted window attention block for learning complementary information from the modalities. To evaluate the efficacy of our approach, we conducted experiments and ablation studies on the HECKTOR 2021 challenge dataset. We compared our method against nnU-Net (the backbone of the top-5 methods in HECKTOR 2021) and other state-of-the-art transformer-based models, including UNETR and Swin UNETR. The experiments employed a five-fold cross-validation setup using PET and CT images. RESULTS: Empirical evidence demonstrates that our proposed method consistently outperforms the comparative techniques. This success can be attributed to the CMA module's capacity to enhance inter-modality feature representations between PET and CT during head-and-neck tumor segmentation. Notably, SwinCross consistently surpasses Swin UNETR across all five folds, showcasing its proficiency in learning multi-modal feature representations at varying resolutions through the cross-modal attention modules. CONCLUSIONS: We introduced a cross-modal Swin Transformer for automating the delineation of head and neck tumors in PET and CT images. Our model incorporates a cross-modality attention module, enabling the exchange of features between modalities at multiple resolutions. The experimental results establish the superiority of our method in capturing improved inter-modality correlations between PET and CT for head-and-neck tumor segmentation. Furthermore, the proposed methodology holds applicability to other semantic segmentation tasks involving different imaging modalities, such as SPECT/CT or PET/MRI. Code: https://github.com/yli192/SwinCross_CrossModalSwinTransformer_for_Medical_Image_Segmentation.
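A generic cross-modal attention block in the spirit described above lets features from one modality (e.g., PET) attend to the other (CT). The sketch below is a minimal single-resolution illustration, not the SwinCross CMA module itself; dimensions and head count are assumptions.

```python
import torch
import torch.nn as nn

class CrossModalAttention(nn.Module):
    """Queries come from one modality and keys/values from the other, so each PET
    token can aggregate complementary CT information; a residual connection and
    layer norm keep the original PET features."""
    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, pet_tokens: torch.Tensor, ct_tokens: torch.Tensor) -> torch.Tensor:
        # pet_tokens, ct_tokens: (batch, tokens, dim) flattened patch features
        fused, _ = self.attn(query=pet_tokens, key=ct_tokens, value=ct_tokens)
        return self.norm(pet_tokens + fused)   # residual fusion of the two modalities
```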


Subjects
Head and Neck Neoplasms; Positron Emission Tomography Computed Tomography; Humans; Positron-Emission Tomography; Head and Neck Neoplasms/diagnostic imaging; Learning; Neural Networks, Computer; Image Processing, Computer-Assisted
19.
IEEE Trans Med Imaging ; 43(6): 2098-2112, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38241121

ABSTRACT

To address the lack of high-quality training labels in positron emission tomography (PET) imaging, weakly-supervised reconstruction methods that generate network-based mappings between prior images and noisy targets have been developed. However, the learned model has an intrinsic variance proportional to the average variance of the target image. To suppress noise and improve the accuracy and generalizability of the learned model, we propose a conditional weakly-supervised multi-task learning (MTL) strategy, in which an auxiliary task is introduced serving as an anatomical regularizer for the PET reconstruction main task. In the proposed MTL approach, we devise a novel multi-channel self-attention (MCSA) module that helps learn an optimal combination of shared and task-specific features by capturing both local and global channel-spatial dependencies. The proposed reconstruction method was evaluated on NEMA phantom PET datasets acquired at different positions in a PET/CT scanner and 26 clinical whole-body PET datasets. The phantom results demonstrate that our method outperforms state-of-the-art learning-free and weakly-supervised approaches obtaining the best noise/contrast tradeoff with a significant noise reduction of approximately 50.0% relative to the maximum likelihood (ML) reconstruction. The patient study results demonstrate that our method achieves the largest noise reductions of 67.3% and 35.5% in the liver and lung, respectively, as well as consistently small biases in 8 tumors with various volumes and intensities. In addition, network visualization reveals that adding the auxiliary task introduces more anatomical information into PET reconstruction than adding only the anatomical loss, and the developed MCSA can abstract features and retain PET image details.


Subjects
Algorithms; Image Processing, Computer-Assisted; Phantoms, Imaging; Positron-Emission Tomography; Supervised Machine Learning; Humans; Image Processing, Computer-Assisted/methods; Positron-Emission Tomography/methods; Whole Body Imaging/methods; Positron Emission Tomography Computed Tomography/methods
20.
Phys Med Biol ; 69(11)2024 May 21.
Article in English | MEDLINE | ID: mdl-38688292

ABSTRACT

Objective. The mean squared error (MSE), also known as the L2 loss, has been widely used as a loss function to optimize image denoising models due to its strong performance as a mean estimator of the Gaussian noise model. Recently, various low-dose computed tomography (LDCT) image denoising methods using deep learning combined with the MSE loss have been developed; however, this approach has been observed to suffer from the regression-to-the-mean problem, leading to over-smoothed edges and degradation of texture in the image. Approach. To overcome this issue, we propose a stochastic function in the loss function to improve the texture of the denoised CT images, rather than relying on complicated networks or feature-space losses. The proposed loss function includes the MSE loss to learn the mean distribution and the Pearson divergence loss to learn feature textures. Specifically, the Pearson divergence loss is computed in image space to measure the distance between two intensity measures of denoised low-dose and normal-dose CT images. The evaluation of the proposed model employs a novel approach of multi-metric quantitative analysis utilizing relative texture feature distance. Results. Our experimental results show that the proposed Pearson divergence loss leads to a significant improvement in texture compared to the conventional MSE loss and generative adversarial network (GAN), both qualitatively and quantitatively. Significance. Achieving consistent texture preservation in LDCT is a challenge in conventional GAN-type methods due to adversarial aspects aimed at minimizing noise while preserving texture. By incorporating the Pearson regularizer in the loss function, we can easily achieve a balance between these two conflicting properties. Consistent high-quality CT images can significantly help clinicians in diagnosis and support researchers in the development of AI diagnostic models.
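The loss described above combines an MSE term for the mean behaviour with a Pearson (χ²) divergence term between intensity measures of the denoised low-dose and normal-dose images. A hedged formalization is sketched below; the weighting λ and the exact choice of intensity measures are assumptions not given in the abstract.

```latex
\mathcal{L}(\theta) \;=\;
\mathbb{E}\big[\,\|f_\theta(x_{\mathrm{LD}}) - x_{\mathrm{ND}}\|_2^2\,\big]
\;+\; \lambda\, \chi^2\!\big(P_{f_\theta(x_{\mathrm{LD}})} \,\big\|\, P_{x_{\mathrm{ND}}}\big),
\qquad
\chi^2(P\,\|\,Q) \;=\; \int \Big(\tfrac{\mathrm{d}P}{\mathrm{d}Q} - 1\Big)^{2}\, \mathrm{d}Q
```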


Subjects
Image Processing, Computer-Assisted; Radiation Dosage; Signal-To-Noise Ratio; Tomography, X-Ray Computed; Tomography, X-Ray Computed/methods; Image Processing, Computer-Assisted/methods; Humans; Deep Learning