Results 1 - 17 of 17
1.
AJNR Am J Neuroradiol ; 45(4): 406-411, 2024 Apr 08.
Article in English | MEDLINE | ID: mdl-38331959

ABSTRACT

BACKGROUND AND PURPOSE: Predicting long-term clinical outcome in acute ischemic stroke is beneficial for prognosis, clinical trial design, resource management, and patient expectations. This study used a deep learning-based predictive model (DLPD) to predict 90-day mRS outcomes and compared its predictions with those made by physicians. MATERIALS AND METHODS: A previously developed DLPD that incorporated DWI and clinical data from the acute period was used to predict 90-day mRS outcomes in 80 consecutive patients with acute ischemic stroke from a single-center registry. We assessed the predictions of the model alongside those of 5 physicians (2 stroke neurologists and 3 neuroradiologists provided with the same imaging and clinical information). The primary analysis was the agreement between the ordinal mRS predictions of the model or physician and the ground truth using the Gwet Agreement Coefficient. We also evaluated the ability to identify unfavorable outcomes (mRS >2) using the area under the curve, sensitivity, and specificity. Noninferiority analyses were undertaken using limits of 0.1 for the Gwet Agreement Coefficient and 0.05 for the area under the curve analysis. The accuracy of prediction was also assessed using the mean absolute error, percentage of predictions within ±1 category of the ground truth (±1 accuracy [ACC]), and percentage of exact predictions (ACC). RESULTS: To predict the specific mRS score, the DLPD yielded a Gwet Agreement Coefficient score of 0.79 (95% CI, 0.71-0.86), surpassing the physicians' score of 0.76 (95% CI, 0.67-0.84), and was noninferior to the readers (P < .001). For identifying unfavorable outcome, the model achieved an area under the curve of 0.81 (95% CI, 0.72-0.89), again noninferior to the readers' area under the curve of 0.79 (95% CI, 0.69-0.87) (P < .005). The mean absolute error, ±1ACC, and ACC were 0.89, 81%, and 36% for the DLPD.
CONCLUSIONS: A deep learning method using acute clinical and imaging data for long-term functional outcome prediction in patients with acute ischemic stroke, the DLPD, made predictions noninferior to those of clinical readers.
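For readers unfamiliar with the primary endpoint, Gwet's first-order agreement coefficient (AC1) for two raters can be computed as below. This is a minimal illustrative sketch; the function and variable names are ours, not from the study.

```python
from collections import Counter

def gwet_ac1(ratings_a, ratings_b, categories):
    """Gwet's AC1 for two raters: AC1 = (pa - pe) / (1 - pe), where pa is
    the observed agreement and pe is chance agreement derived from the
    average category prevalence across both raters."""
    n = len(ratings_a)
    q = len(categories)
    # Observed agreement: fraction of subjects rated identically.
    pa = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Average prevalence of each category across both raters.
    counts = Counter(ratings_a) + Counter(ratings_b)
    pe = sum((counts[k] / (2 * n)) * (1 - counts[k] / (2 * n))
             for k in categories) / (q - 1)
    return (pa - pe) / (1 - pe)
```

For ordinal mRS predictions the categories would be the seven mRS scores 0-6, with the model's (or a physician's) predictions as one "rater" and the 90-day ground truth as the other.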


Subjects
Deep Learning, Ischemic Stroke, Stroke, Humans, Predictive Value of Tests, Stroke/diagnostic imaging, Prognosis
2.
J Magn Reson Imaging ; 59(3): 1010-1020, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37259967

ABSTRACT

BACKGROUND: 18F-fluorodeoxyglucose (FDG) positron emission tomography (PET) is valuable for determining the presence of viable tumor, but is limited by geographical restrictions, radiation exposure, and high cost. PURPOSE: To generate diagnostic-quality PET equivalent imaging for patients with brain neoplasms by deep learning with multi-contrast MRI. STUDY TYPE: Retrospective. SUBJECTS: Patients (59 studies from 51 subjects; age 56 ± 13 years; 29 males) who underwent 18F-FDG PET and MRI for determining recurrent brain tumor. FIELD STRENGTH/SEQUENCE: 3T; 3D GRE T1, 3D GRE T1c, 3D FSE T2-FLAIR, and 3D FSE ASL, 18F-FDG PET imaging. ASSESSMENT: Convolutional neural networks were trained using the four MRI contrasts as inputs and the acquired FDG PET images as output. The agreement between the acquired and synthesized PET was evaluated by quality metrics and Bland-Altman plots for standardized uptake value ratio. Three physicians scored image quality on a 5-point scale, with score ≥3 as high-quality. They assessed the lesions on a 5-point scale, which was binarized to analyze diagnostic consistency of the synthesized PET compared to the acquired PET. STATISTICAL TESTS: The agreement in ratings between the acquired and synthesized PET was tested with Gwet's AC and the exact Bowker test of symmetry. Agreement of the readers was assessed by Gwet's AC. P = 0.05 was used as the cutoff for statistical significance. RESULTS: The synthesized PET visually resembled the acquired PET and showed significant improvement in quality metrics (+21.7% on PSNR, +22.2% on SSIM, -31.8% on RMSE) compared with ASL. A total of 49.7% of the synthesized PET images were considered high-quality compared to 73.4% of the acquired PET, which was statistically significant, but with distinct variability between readers. For the positive/negative lesion assessment, the synthesized PET had an accuracy of 87% but had a tendency to overcall.
CONCLUSION: The proposed deep learning model has the potential to synthesize diagnostic-quality FDG PET images without the use of radiotracers. EVIDENCE LEVEL: 3 TECHNICAL EFFICACY: Stage 2.
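As a reference point for the reported quality metrics, PSNR between an acquired and a synthesized image can be computed as below; a minimal sketch over flattened intensity lists (the names are ours):

```python
import math

def psnr(reference, synthesized, max_val=255.0):
    """Peak signal-to-noise ratio between two equal-size images,
    given as flat lists of intensities: 10 * log10(MAX^2 / MSE)."""
    mse = sum((r - s) ** 2 for r, s in zip(reference, synthesized)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)
```

A higher PSNR means the synthesized PET deviates less from the acquired PET; SSIM and RMSE complement it by capturing structural similarity and absolute error.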


Subjects
Brain Neoplasms, Deep Learning, Male, Humans, Adult, Middle Aged, Aged, Fluorodeoxyglucose F18, Retrospective Studies, Positron-Emission Tomography/methods, Magnetic Resonance Imaging/methods
3.
Med Image Comput Comput Assist Interv ; 14220: 279-289, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37961067

ABSTRACT

Interpretability is a key issue when applying deep learning models to longitudinal brain MRIs. One way to address this issue is by visualizing the high-dimensional latent spaces generated by deep learning via self-organizing maps (SOM). SOM separates the latent space into clusters and then maps the cluster centers to a discrete (typically 2D) grid, preserving the high-dimensional relationship between clusters. However, learning SOM in a high-dimensional latent space tends to be unstable, especially in a self-supervised setting. Furthermore, the learned SOM grid does not necessarily capture clinically interesting information, such as brain age. To resolve these issues, we propose the first self-supervised SOM approach that derives a high-dimensional, interpretable representation stratified by brain age solely based on longitudinal brain MRIs (i.e., without demographic or cognitive information). Called Longitudinally-consistent Self-Organized Representation learning (LSOR), the method is stable during training as it relies on soft clustering (vs. the hard cluster assignments used by existing SOM). Furthermore, our approach generates a latent space stratified according to brain age by aligning trajectories inferred from longitudinal MRIs to the reference vector associated with the corresponding SOM cluster. When applied to longitudinal MRIs of the Alzheimer's Disease Neuroimaging Initiative (ADNI, N=632), LSOR generates an interpretable latent space and achieves accuracy comparable to or higher than state-of-the-art representations with respect to the downstream tasks of classification (stable vs. progressive mild cognitive impairment) and regression (determining the ADAS-Cog scores of all subjects). The code is available at https://github.com/ouyangjiahong/longitudinal-som-single-modality.
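The soft clustering that stabilizes training can be illustrated as a softmax over (negative) distances to the SOM prototypes, followed by a softly weighted prototype update. This is a simplified sketch under our own naming, temperature, and learning-rate choices, not the authors' code:

```python
import math

def soft_assignments(z, prototypes, temperature=1.0):
    """Soft cluster assignment: softmax over negative squared distances
    from latent vector z to each SOM prototype."""
    d2 = [sum((zi - pi) ** 2 for zi, pi in zip(z, p)) for p in prototypes]
    logits = [-d / temperature for d in d2]
    m = max(logits)                         # subtract max for stability
    exps = [math.exp(l - m) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def soft_update(prototypes, z, weights, lr=0.1):
    """Move every prototype toward z, weighted by its soft assignment
    (vs. moving only the single winning prototype in a hard SOM)."""
    return [[p_i + lr * w * (z_i - p_i) for p_i, z_i in zip(p, z)]
            for p, w in zip(prototypes, weights)]
```

Because every prototype receives a graded update, gradients stay smooth and training avoids the instability of hard winner-take-all assignment.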

5.
Stroke ; 54(9): 2316-2327, 2023 09.
Article in English | MEDLINE | ID: mdl-37485663

ABSTRACT

BACKGROUND: Predicting long-term clinical outcome based on the early acute ischemic stroke information is valuable for prognostication, resource management, clinical trials, and patient expectations. Current methods require subjective decisions about which imaging features to assess and may require time-consuming postprocessing. This study's goal was to predict ordinal 90-day modified Rankin Scale (mRS) score in acute ischemic stroke patients by fusing a Deep Learning model of diffusion-weighted imaging images and clinical information from the acute period. METHODS: A total of 640 acute ischemic stroke patients who underwent magnetic resonance imaging within 1 to 7 days poststroke and had 90-day mRS follow-up data were randomly divided into 70% (n=448) for model training, 15% (n=96) for validation, and 15% (n=96) for internal testing. Additionally, external testing on a cohort from Lausanne University Hospital (n=280) was performed to further evaluate model generalization. Accuracy for ordinal mRS, accuracy within ±1 mRS category, mean absolute prediction error, and determination of unfavorable outcome (mRS score >2) were evaluated for clinical only, imaging only, and 2 fused clinical-imaging models. RESULTS: The fused models demonstrated superior performance in predicting ordinal mRS score and unfavorable outcome in both internal and external test cohorts when compared with the clinical and imaging models. For the internal test cohort, the top fused model had the highest area under the curve of 0.92 for unfavorable outcome prediction and the lowest mean absolute error (0.96 [95% CI, 0.77-1.16]), with the highest proportion of mRS score predictions within ±1 category (79% [95% CI, 71%-88%]). 
On the external Lausanne University Hospital cohort, the best fused model had an area under the curve of 0.90 for unfavorable outcome prediction and outperformed other models with a mean absolute error of 0.90 (95% CI, 0.79-1.01), and the highest percentage of mRS score predictions within ±1 category (83% [95% CI, 78%-87%]). CONCLUSIONS: A Deep Learning-based imaging model fused with clinical variables can be used to predict 90-day stroke outcome with reduced subjectivity and user burden.
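The three accuracy measures used for ordinal mRS predictions (mean absolute error, exact accuracy, and ±1-category accuracy) can be sketched as follows; a minimal illustration with names of our choosing:

```python
def mrs_metrics(predicted, actual):
    """Mean absolute error, exact accuracy, and within-±1-category
    accuracy for ordinal mRS predictions (scores 0-6)."""
    errors = [abs(p - a) for p, a in zip(predicted, actual)]
    n = len(errors)
    return {
        "mae": sum(errors) / n,
        "acc": sum(e == 0 for e in errors) / n,      # exact match
        "acc_pm1": sum(e <= 1 for e in errors) / n,  # within one category
    }
```

Treating mRS as ordinal rather than binary is what makes ±1 accuracy meaningful: a prediction of 3 against a ground truth of 2 is a near miss, not a plain error.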


Subjects
Deep Learning, Ischemic Stroke, Stroke, Humans, Prognosis, Magnetic Resonance Imaging
6.
J Neurointerv Surg ; 15(6): 521-525, 2023 Jun.
Article in English | MEDLINE | ID: mdl-35483913

ABSTRACT

BACKGROUND: Digital subtraction angiography (DSA) is the gold-standard method of assessing arterial blood flow and blockages prior to endovascular thrombectomy. OBJECTIVE: To detect anatomical features and arterial occlusions with DSA using artificial intelligence techniques. METHODS: We included 82 patients with acute ischemic stroke who underwent DSA imaging and whose carotid terminus was visible in at least one run. Two neurointerventionalists labeled the carotid location (when visible) and vascular occlusions on 382 total individual DSA runs. For detecting the carotid terminus, positive and negative image patches (either containing or not containing the internal carotid artery terminus) were extracted in a 1:1 ratio. Two convolutional neural network architectures (ResNet-50 pretrained on ImageNet and ResNet-50 trained from scratch) were evaluated. Area under the curve (AUC) of the receiver operating characteristic and pixel distance from the ground truth were calculated. The same training and analysis methods were used for detecting arterial occlusions. RESULTS: The ResNet-50 trained from scratch most accurately detected the carotid terminus (AUC 0.998 (95% CI 0.997 to 0.999), p<0.00001) and arterial occlusions (AUC 0.973 (95% CI 0.971 to 0.975), p<0.0001). Average pixel distances from ground truth for carotid terminus and occlusion localization were 63±45 and 98±84, corresponding to approximately 1.26±0.90 cm and 1.96±1.68 cm for a standard angiographic field-of-view. CONCLUSION: These results may serve as an unbiased standard for clinical stroke trials, as optimal standardization would be useful for core laboratories in endovascular thrombectomy studies, and also expedite decision-making during DSA-based procedures.
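The AUC reported for carotid-terminus and occlusion detection is equivalent to the Mann-Whitney statistic: the probability that a randomly chosen positive patch scores above a randomly chosen negative one, counting ties as half. A minimal sketch (names are ours):

```python
def roc_auc(scores_pos, scores_neg):
    """ROC AUC via the Mann-Whitney U statistic: the fraction of
    (positive, negative) pairs ranked correctly, with ties worth 0.5."""
    wins = 0.0
    for sp in scores_pos:
        for sn in scores_neg:
            if sp > sn:
                wins += 1.0
            elif sp == sn:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

An AUC of 0.998, as reported for the carotid terminus, means almost every positive patch outscored every negative patch.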


Subjects
Arterial Occlusive Diseases, Deep Learning, Ischemic Stroke, Stroke, Humans, Digital Subtraction Angiography/methods, Ischemic Stroke/diagnostic imaging, Ischemic Stroke/surgery, Artificial Intelligence, Stroke/diagnostic imaging, Stroke/surgery, Retrospective Studies
7.
Radiology ; 307(1): e220882, 2023 04.
Article in English | MEDLINE | ID: mdl-36472536

ABSTRACT

Background Perfusion imaging is important to identify a target mismatch in stroke but requires contrast agents and postprocessing software. Purpose To use a deep learning model to predict the hypoperfusion lesion in stroke and identify patients with a target mismatch profile from diffusion-weighted imaging (DWI) and clinical information alone, using perfusion MRI as the reference standard. Materials and Methods Imaging data sets of patients with acute ischemic stroke with baseline perfusion MRI and DWI were retrospectively reviewed from multicenter data available from 2008 to 2019 (Imaging Collaterals in Acute Stroke, Diffusion and Perfusion Imaging Evaluation for Understanding Stroke Evolution 2, and University of California, Los Angeles stroke registry). For perfusion MRI, rapid processing of perfusion and diffusion software automatically segmented the hypoperfusion lesion (time to maximum, ≥6 seconds) and ischemic core (apparent diffusion coefficient [ADC], ≤620 × 10⁻⁶ mm²/sec). A three-dimensional U-Net deep learning model was trained using baseline DWI, ADC, National Institutes of Health Stroke Scale score, and stroke symptom sidedness as inputs, with the union of hypoperfusion and ischemic core segmentation serving as the ground truth. Model performance was evaluated using the Dice score coefficient (DSC). Target mismatch classification based on the model was compared with that of the clinical-DWI mismatch approach defined by the DAWN trial by using the McNemar test. Results Overall, 413 patients (mean age, 67 years ± 15 [SD]; 207 men) were included for model development and primary analysis using fivefold cross-validation (247, 83, and 83 patients in the training, validation, and test sets, respectively, for each fold). The model predicted the hypoperfusion lesion with a median DSC of 0.61 (IQR, 0.45-0.71).
The model identified patients with target mismatch with a sensitivity of 90% (254 of 283; 95% CI: 86, 93) and specificity of 77% (100 of 130; 95% CI: 69, 83) compared with the clinical-DWI mismatch sensitivity of 50% (140 of 281; 95% CI: 44, 56) and specificity of 89% (116 of 130; 95% CI: 83, 94) (P < .001 for all). Conclusion A three-dimensional U-Net deep learning model predicted the hypoperfusion lesion from diffusion-weighted imaging (DWI) and clinical information and identified patients with a target mismatch profile with higher sensitivity than the clinical-DWI mismatch approach. ClinicalTrials.gov registration nos. NCT02225730, NCT01349946, NCT02586415 © RSNA, 2022 Supplemental material is available for this article. See also the editorial by Kallmes and Rabinstein in this issue.
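The Dice score coefficient used to evaluate the predicted hypoperfusion lesion is twice the overlap divided by the total size of the two masks; a minimal sketch over flattened binary masks (names are ours):

```python
def dice(pred_mask, true_mask):
    """Dice score coefficient between two binary masks (flat 0/1 lists):
    DSC = 2 * |A ∩ B| / (|A| + |B|)."""
    inter = sum(p and t for p, t in zip(pred_mask, true_mask))
    size = sum(pred_mask) + sum(true_mask)
    return 2.0 * inter / size if size else 1.0  # both empty: perfect overlap
```

By the abstract's convention, a DSC of at least 0.5 (half of each mask shared) counts as significant overlap, so a median DSC of 0.61 indicates the predicted lesion typically overlapped the perfusion-defined lesion substantially.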


Subjects
Brain Ischemia, Deep Learning, Ischemic Stroke, Stroke, Male, Humans, Aged, Ischemic Stroke/diagnostic imaging, Retrospective Studies, Stroke/pathology, Diffusion Magnetic Resonance Imaging/methods, Magnetic Resonance Imaging/methods, Brain Ischemia/diagnostic imaging, Ischemia
8.
Med Image Anal ; 82: 102571, 2022 11.
Article in English | MEDLINE | ID: mdl-36115098

ABSTRACT

In recent years, several deep learning studies have recommended first representing Magnetic Resonance Imaging (MRI) as latent features before performing a downstream task of interest (such as classification or regression). The performance of the downstream task generally improves when these latent representations are explicitly associated with factors of interest. For example, we derived such a representation for capturing brain aging by applying self-supervised learning to longitudinal MRIs and then used the resulting encoding to automatically identify diseases accelerating the aging of the brain. We now propose a refinement of this representation by replacing the linear modeling of brain aging with one that is consistent in local neighborhoods in the latent space. Called Longitudinal Neighborhood Embedding (LNE), we derive an encoding so that neighborhoods are age-consistent (i.e., brain MRIs of different subjects with similar brain ages are in close proximity to each other) and progression-consistent, i.e., the latent space is defined by a smooth trajectory field where each trajectory captures changes in brain ages between a pair of MRIs extracted from a longitudinal sequence. To make the problem computationally tractable, we further propose a strategy for mini-batch sampling so that the resulting local neighborhoods accurately approximate the ones that would be defined based on the whole cohort.
We evaluate LNE on three different downstream tasks: (1) to predict chronological age from T1-w MRI of 274 healthy subjects participating in a study at SRI International; (2) to distinguish Normal Control (NC) from Alzheimer's Disease (AD) and stable Mild Cognitive Impairment (sMCI) from progressive Mild Cognitive Impairment (pMCI) based on T1-w MRI of 632 participants of the Alzheimer's Disease Neuroimaging Initiative (ADNI); and (3) to distinguish no-to-low from moderate-to-heavy alcohol drinkers based on fractional anisotropy derived from diffusion tensor MRIs of 764 adolescents recruited by the National Consortium on Alcohol and NeuroDevelopment in Adolescence (NCANDA). Across the three data sets, the visualization of the smooth trajectory vector fields and superior accuracy on downstream tasks demonstrate the strength of the proposed method over existing self-supervised methods in extracting information related to brain aging, which could help study the impact of substance use and neurodegenerative disorders. The code is available at https://github.com/ouyangjiahong/longitudinal-neighbourhood-embedding.
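The trajectory-field idea can be illustrated by computing each subject's latent trajectory between two visits and its cosine alignment with the mean trajectory of its latent-space neighbors. This is an illustrative sketch of the consistency notion, not the authors' training objective:

```python
import math

def trajectory(z1, z2):
    """Latent trajectory vector between two visits of one subject."""
    return [b - a for a, b in zip(z1, z2)]

def cosine(u, v):
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(x * x for x in v))
    return sum(a * b for a, b in zip(u, v)) / (nu * nv)

def neighborhood_consistency(subject_traj, neighbor_trajs):
    """Cosine similarity between a subject's trajectory and the mean
    trajectory of its latent-space neighbors (1.0 = perfectly aligned,
    the progression-consistent ideal)."""
    mean = [sum(t[i] for t in neighbor_trajs) / len(neighbor_trajs)
            for i in range(len(subject_traj))]
    return cosine(subject_traj, mean)
```

A smooth trajectory field corresponds to consistency values near 1 throughout the latent space: neighboring subjects progress in similar directions.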


Subjects
Alzheimer Disease, Cognitive Dysfunction, Humans, Adolescent, Alzheimer Disease/diagnostic imaging, Alzheimer Disease/pathology, Magnetic Resonance Imaging/methods, Neuroimaging/methods, Cognitive Dysfunction/diagnostic imaging, Brain/diagnostic imaging, Brain/pathology, Supervised Machine Learning
9.
IEEE Trans Med Imaging ; 41(10): 2558-2569, 2022 10.
Article in English | MEDLINE | ID: mdl-35404811

ABSTRACT

The continuous progression of neurological diseases is often categorized into conditions according to their severity. To relate the severity to changes in brain morphometry, there is a growing interest in replacing these categories with a continuous severity scale that longitudinal MRIs are mapped onto via deep learning algorithms. However, existing methods based on supervised learning require large numbers of samples, and those that do not, such as self-supervised models, fail to clearly separate the disease effect from normal aging. Here, we propose to explicitly disentangle those two factors via weak supervision. In other words, training is based on longitudinal MRIs being labelled either normal or diseased, so that the training data can be augmented with samples from disease categories that are not of primary interest to the analysis. We do so by encouraging trajectories of controls to be fully encoded by the direction associated with brain aging. Furthermore, an orthogonal direction linked to disease severity captures the residual component from normal aging in the diseased cohort. Hence, the proposed method quantifies disease severity and its progression speed in individuals without knowing their condition. We apply the proposed method to data from the Alzheimer's Disease Neuroimaging Initiative (ADNI, N=632). We then show that the model properly disentangled normal aging from the severity of cognitive impairment by plotting the resulting disentangled factors of each subject and generating simulated MRIs for a given chronological age and condition. Moreover, our representation obtains higher balanced accuracy when used for two downstream classification tasks compared to other pre-training approaches. The code for our weakly-supervised approach is available at https://github.com/ouyangjiahong/longitudinal-direction-disentangle.
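The disentanglement amounts to decomposing each latent trajectory into a component along a unit aging direction and a component along an orthogonal unit severity direction. A toy sketch in a 2-D latent space, with names of our choosing:

```python
def decompose(traj, aging_dir, severity_dir):
    """Project a latent trajectory onto a unit aging direction and an
    orthogonal unit severity direction. Controls should have (nearly)
    zero severity component; diseased subjects carry the residual there.
    Returns (aging_component, severity_component)."""
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    return dot(traj, aging_dir), dot(traj, severity_dir)
```

In this picture, the aging component quantifies how far along normal brain aging a subject has moved between visits, while the severity component quantifies disease-specific change, without needing the subject's diagnostic label.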


Subjects
Alzheimer Disease, Magnetic Resonance Imaging, Aging, Alzheimer Disease/diagnostic imaging, Brain/diagnostic imaging, Humans, Magnetic Resonance Imaging/methods, Neuroimaging, Severity of Illness Index
10.
EBioMedicine ; 73: 103613, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34656880

ABSTRACT

BACKGROUND: Laboratory testing is routinely used to assay blood biomarkers to provide information on physiologic state beyond what clinicians can evaluate from interpreting medical imaging. We hypothesized that deep learning interpretation of echocardiogram videos can provide additional value in understanding disease states and can evaluate common biomarker results. METHODS: We developed EchoNet-Labs, a video-based deep learning algorithm to detect evidence of anemia, elevated B-type natriuretic peptide (BNP), troponin I, and blood urea nitrogen (BUN), as well as values of ten additional lab tests directly from echocardiograms. We included patients (n = 39,460) aged 18 years or older with one or more apical-4-chamber echocardiogram videos (n = 70,066) from Stanford Healthcare for training and internal testing of EchoNet-Labs' performance in estimating the most proximal biomarker result. Without fine-tuning, the performance of EchoNet-Labs was further evaluated on an additional external test dataset (n = 1,301) from Cedars-Sinai Medical Center. We calculated the area under the curve (AUC) of the receiver operating characteristic curve for the internal and external test datasets. FINDINGS: On the held-out test set of Stanford patients not previously seen during model training, EchoNet-Labs achieved an AUC of 0.80 (0.79-0.81) in detecting anemia (low hemoglobin), 0.86 (0.85-0.88) in detecting elevated BNP, 0.75 (0.73-0.78) in detecting elevated troponin I, and 0.74 (0.72-0.76) in detecting elevated BUN. On the external test dataset from Cedars-Sinai, EchoNet-Labs achieved an AUC of 0.80 (0.77-0.82) in detecting anemia, of 0.82 (0.79-0.84) in detecting elevated BNP, of 0.75 (0.72-0.78) in detecting elevated troponin I, and of 0.69 (0.66-0.71) in detecting elevated BUN. We further demonstrate the utility of the model in detecting abnormalities in 10 additional lab tests.
We investigate the features necessary for EchoNet-Labs to make successful detection and identify potential mechanisms for each biomarker using well-known and novel explainability techniques. INTERPRETATION: These results show that deep learning applied to diagnostic imaging can provide additional clinical value and identify phenotypic information beyond current imaging interpretation methods. FUNDING: J.W.H. and B.H. are supported by the NSF Graduate Research Fellowship. D.O. is supported by NIH K99 HL157421-01. J.Y.Z. is supported by NSF CAREER 1942926, NIH R21 MD012867-01, NIH P30AG059307 and by a Chan-Zuckerberg Biohub Fellowship.
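Detection performance at a fixed operating point (e.g., flagging elevated BNP from a video-level score) reduces to sensitivity and specificity of a thresholded score; a minimal sketch (threshold and names are ours):

```python
def sens_spec(scores, labels, threshold):
    """Sensitivity and specificity of the rule 'score >= threshold is a
    positive call' against binary ground-truth labels (1 = abnormal)."""
    tp = sum(s >= threshold and l for s, l in zip(scores, labels))
    fn = sum(s < threshold and l for s, l in zip(scores, labels))
    tn = sum(s < threshold and not l for s, l in zip(scores, labels))
    fp = sum(s >= threshold and not l for s, l in zip(scores, labels))
    return tp / (tp + fn), tn / (tn + fp)
```

Sweeping the threshold over all values traces out the ROC curve whose area is the AUC reported above.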


Subjects
Biomarkers, Deep Learning, Echocardiography, Image Interpretation, Computer-Assisted/methods, Image Processing, Computer-Assisted/methods, Algorithms, Humans, ROC Curve, Software
11.
IEEE J Biomed Health Inform ; 25(6): 2082-2092, 2021 06.
Article in English | MEDLINE | ID: mdl-33270567

ABSTRACT

Many neurological diseases are characterized by gradual deterioration of brain structure and function. Large longitudinal MRI datasets have revealed such deterioration, in part, by applying machine and deep learning to predict diagnosis. A popular approach is to apply Convolutional Neural Networks (CNNs) to extract informative features from each visit of the longitudinal MRI and then use those features to classify each visit via Recurrent Neural Networks (RNNs). Such modeling neglects the progressive nature of the disease, which may result in clinically implausible classifications across visits. To avoid this issue, we propose to combine features across visits by coupling feature extraction with a novel longitudinal pooling layer and enforce consistency of the classification across visits in line with disease progression. We evaluate the proposed method on the longitudinal structural MRIs from three neuroimaging datasets: Alzheimer's Disease Neuroimaging Initiative (ADNI, N=404), a dataset composed of 274 normal controls and 329 patients with Alcohol Use Disorder (AUD), and 255 youths from the National Consortium on Alcohol and NeuroDevelopment in Adolescence (NCANDA). In all three experiments, our method is superior to other widely used approaches for longitudinal classification, thus making a unique contribution towards more accurate tracking of the impact of conditions on the brain. The code is available at https://github.com/ouyangjiahong/longitudinal-pooling.
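The consistency idea, that classifications should not contradict disease progression across visits, can be illustrated post hoc by making per-visit disease probabilities non-decreasing over time. Note this running-maximum smoothing is our illustrative stand-in for the constraint, not the paper's learned longitudinal pooling layer:

```python
def enforce_progression(visit_probs):
    """Make per-visit disease probabilities non-decreasing over time:
    a progressive disease should not appear to 'recover' at a later
    visit. Illustrative running-maximum smoothing only."""
    out, running = [], 0.0
    for p in visit_probs:
        running = max(running, p)
        out.append(running)
    return out
```

A per-visit classifier that outputs [0.2, 0.1, 0.5] implies an implausible mid-course recovery; the constrained sequence [0.2, 0.2, 0.5] respects progression.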


Subjects
Alzheimer Disease, Magnetic Resonance Imaging, Adolescent, Alzheimer Disease/diagnostic imaging, Disease Progression, Humans, Neural Networks, Computer, Neuroimaging
12.
Inf Process Med Imaging ; 12729: 321-333, 2021 Jun.
Article in English | MEDLINE | ID: mdl-35173402

ABSTRACT

Multi-modal MRIs are widely used in neuroimaging applications since different MR sequences provide complementary information about brain structures. Recent works have suggested that multi-modal deep learning analysis can benefit from explicitly disentangling anatomical (shape) and modality (appearance) information into separate image representations. In this work, we challenge mainstream strategies by showing that they do not naturally lead to representation disentanglement, both in theory and in practice. To address this issue, we propose a margin loss that regularizes the similarity in relationships of the representations across subjects and modalities. To enable robust training, we further use a conditional convolution to design a single model for encoding images of all modalities. Lastly, we propose a fusion function to combine the disentangled anatomical representations as a set of modality-invariant features for downstream tasks. We evaluate the proposed method on three multi-modal neuroimaging datasets. Experiments show that our proposed method can achieve superior disentangled representations compared to existing disentanglement strategies. Results also indicate that the fused anatomical representation has potential in the downstream tasks of zero-dose PET reconstruction and brain tumor segmentation.
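A margin loss over similarity relationships can be sketched as a hinge penalty that activates when a mismatched pair of representations is not at least a margin less similar than a matched pair. The margin value and names below are our assumptions, not the paper's hyperparameters:

```python
def margin_loss(sim_matched, sim_mismatched, margin=0.5):
    """Hinge-style margin loss: zero when the matched pair is at least
    `margin` more similar than the mismatched pair, linear otherwise."""
    return max(0.0, margin - (sim_matched - sim_mismatched))
```

During training such a term pushes, e.g., anatomical representations of the same subject across modalities together, while keeping representations of different subjects apart by at least the margin.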

13.
Article in English | MEDLINE | ID: mdl-35727732

ABSTRACT

Longitudinal MRIs are often used to capture the gradual deterioration of brain structure and function caused by aging or neurological diseases. Analyzing this data via machine learning generally requires a large number of ground-truth labels, which are often missing or expensive to obtain. Reducing the need for labels, we propose a self-supervised strategy for representation learning named Longitudinal Neighborhood Embedding (LNE). Motivated by concepts in contrastive learning, LNE explicitly models the similarity between trajectory vectors across different subjects. We do so by building a graph in each training iteration defining neighborhoods in the latent space so that the progression direction of a subject follows the direction of its neighbors. This results in a smooth trajectory field that captures the global morphological change of the brain while maintaining the local continuity. We apply LNE to longitudinal T1w MRIs of two neuroimaging studies: a dataset composed of 274 healthy subjects, and Alzheimer's Disease Neuroimaging Initiative (ADNI, N = 632). The visualization of the smooth trajectory vector field and superior performance on downstream tasks demonstrate the strength of the proposed method over existing self-supervised methods in extracting information associated with normal aging and in revealing the impact of neurodegenerative disorders. The code is available at https://github.com/ouyangjiahong/longitudinal-neighbourhood-embedding.

14.
Eur J Nucl Med Mol Imaging ; 47(13): 2998-3007, 2020 12.
Article in English | MEDLINE | ID: mdl-32535655

ABSTRACT

PURPOSE: We aimed to evaluate the performance of deep learning-based generalization of ultra-low-count amyloid PET/MRI enhancement when applied to studies acquired with different scanning hardware and protocols. METHODS: Eighty simultaneous [18F]florbetaben PET/MRI studies were acquired, split equally between two sites (site 1: Signa PET/MRI, GE Healthcare, 39 participants, 67 ± 8 years, 23 females; site 2: mMR, Siemens Healthineers, 64 ± 11 years, 23 females) with different MRI protocols. Twenty minutes of list-mode PET data (90-110 min post-injection) were reconstructed as ground-truth. Ultra-low-count data obtained from undersampling by a factor of 100 (site 1) or the first minute of PET acquisition (site 2) were reconstructed for ultra-low-dose/ultra-short-time (1% dose and 5% time, respectively) PET images. A deep convolutional neural network was pre-trained with site 1 data and either (A) directly applied or (B) trained further on site 2 data using transfer learning. Networks were also trained from scratch based on (C) site 2 data or (D) all data. Certified physicians determined amyloid uptake (+/-) status for accuracy and scored the image quality. The peak signal-to-noise ratio, structural similarity, and root-mean-squared error were calculated between images and their ground-truth counterparts. Mean regional standardized uptake value ratios (SUVR, reference region: cerebellar cortex) from 37 successful site 2 FreeSurfer segmentations were analyzed. RESULTS: All network-synthesized images had lower noise than their ultra-low-count reconstructions. Quantitatively, image metrics improved the most using method B, where SUVRs had the least variability from the ground-truth and the highest effect size to differentiate between positive and negative images. Method A images had lower accuracy and image quality than other methods; images synthesized from methods B-D scored similarly or better than the ground-truth images.
CONCLUSIONS: Deep learning can successfully produce diagnostic amyloid PET images from short frame reconstructions. Data bias should be considered when applying pre-trained deep ultra-low-count amyloid PET/MRI networks for generalization.
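The SUVR analysis normalizes mean regional uptake by the cerebellar-cortex reference region; a minimal sketch (names are ours):

```python
def suvr(region_uptake, reference_uptake):
    """Standardized uptake value ratio: mean uptake in a target region
    divided by mean uptake in the reference region (here the cerebellar
    cortex), both given as lists of voxel values."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(region_uptake) / mean(reference_uptake)
```

Because SUVR is a ratio against a region assumed free of specific binding, it is comparable across scanners and doses, which is why it serves as the quantitative endpoint for the cross-site comparison.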


Subjects
Deep Learning, Amyloid, Female, Humans, Image Processing, Computer-Assisted, Magnetic Resonance Imaging, Positron-Emission Tomography, Tomography, X-Ray Computed
15.
JAMA Netw Open ; 3(3): e200772, 2020 03 02.
Article in English | MEDLINE | ID: mdl-32163165

ABSTRACT

Importance: Predicting infarct size and location is important for decision-making and prognosis in patients with acute stroke. Objectives: To determine whether a deep learning model can predict final infarct lesions using magnetic resonance images (MRIs) acquired at initial presentation (baseline) and to compare the model with current clinical prediction methods. Design, Setting, and Participants: In this multicenter prognostic study, a specific type of neural network for image segmentation (U-net) was trained, validated, and tested using patients from the Imaging Collaterals in Acute Stroke (iCAS) study from April 14, 2014, to April 15, 2018, and the Diffusion Weighted Imaging Evaluation for Understanding Stroke Evolution Study-2 (DEFUSE-2) study from July 14, 2008, to September 17, 2011 (reported in October 2012). Patients underwent baseline perfusion-weighted and diffusion-weighted imaging and MRI at 3 to 7 days after baseline. Patients were grouped into unknown, minimal, partial, and major reperfusion status based on 24-hour imaging results. Baseline images acquired at presentation were inputs, and the final true infarct lesion at 3 to 7 days was considered the ground truth for the model. The model calculated the probability of infarction for every voxel, which can be thresholded to produce a prediction. Data were analyzed from July 1, 2018, to March 7, 2019. Main Outcomes and Measures: Area under the curve, Dice score coefficient (DSC) (a metric from 0-1 indicating the extent of overlap between the prediction and the ground truth; a DSC of ≥0.5 represents significant overlap), and volume error. Current clinical methods were compared with model performance in subgroups of patients with minimal or major reperfusion. 
Results: Among the 182 patients included in the model (97 women [53.3%]; mean [SD] age, 65 [16] years), the deep learning model achieved a median area under the curve of 0.92 (interquartile range [IQR], 0.87-0.96), DSC of 0.53 (IQR, 0.31-0.68), and volume error of 9 (IQR, -14 to 29) mL. In subgroups with minimal (DSC, 0.58 [IQR, 0.31-0.67] vs 0.55 [IQR, 0.40-0.65]; P = .37) or major (DSC, 0.48 [IQR, 0.29-0.65] vs 0.45 [IQR, 0.15-0.54]; P = .002) reperfusion for which comparison with existing clinical methods was possible, the deep learning model had comparable or better performance. Conclusions and Relevance: The deep learning model appears to have successfully predicted infarct lesions from baseline imaging without reperfusion information and achieved comparable performance to existing clinical methods. Predicting the subacute infarct lesion may help clinicians prepare for decompression treatment and aid in patient selection for neuroprotective clinical trials.
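The Dice score coefficient reported above is a standard overlap metric between a thresholded voxelwise probability map and the binary ground-truth lesion. A minimal NumPy sketch (not the authors' code; threshold and arrays are illustrative):

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray, threshold: float = 0.5) -> float:
    """Dice score between a thresholded probability map and a binary ground truth."""
    pred_bin = pred >= threshold          # binarize the voxelwise infarct probabilities
    truth_bin = truth.astype(bool)
    intersection = np.logical_and(pred_bin, truth_bin).sum()
    denom = pred_bin.sum() + truth_bin.sum()
    if denom == 0:
        return 1.0                        # both empty: perfect agreement by convention
    return 2.0 * intersection / denom

# Toy example: predicted probabilities vs. a 3-voxel ground-truth lesion
pred = np.array([0.9, 0.8, 0.2, 0.1])
truth = np.array([1, 1, 1, 0])
print(dice_score(pred, truth))  # 0.8
```

A DSC of 1 means perfect overlap and 0 means none, which is why the abstract treats ≥0.5 as substantial agreement between prediction and final infarct.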


Subjects
Brain Ischemia/diagnosis , Deep Learning/statistics & numerical data , Magnetic Resonance Imaging/methods , Patient Selection , Aged , Brain Ischemia/physiopathology , Female , Humans , Male , Middle Aged , Prognosis , Retrospective Studies
16.
Biomed Opt Express ; 10(10): 5291-5324, 2019 Oct 01.
Article in English | MEDLINE | ID: mdl-31646047

ABSTRACT

Optical Coherence Tomography (OCT) is an imaging modality that has been widely adopted for visualizing corneal, retinal and limbal tissue structure with micron resolution. It can be used to diagnose pathological conditions of the eye, and for developing pre-operative surgical plans. In contrast to the posterior retina, imaging the anterior tissue structures, such as the limbus and cornea, results in B-scans that exhibit increased speckle noise patterns and imaging artifacts. These artifacts, such as shadowing and specularity, pose a challenge during the analysis of the acquired volumes as they substantially obfuscate the location of tissue interfaces. To deal with the artifacts and speckle noise patterns and accurately segment the shallowest tissue interface, we propose a cascaded neural network framework, which comprises a conditional Generative Adversarial Network (cGAN) and a Tissue Interface Segmentation Network (TISN). The cGAN pre-segments OCT B-scans by removing undesired specular artifacts and speckle noise patterns just above the shallowest tissue interface, and the TISN combines the original OCT image with the pre-segmentation to segment the shallowest interface. We show the applicability of the cascaded framework to corneal datasets, demonstrate that it precisely segments the shallowest corneal interface, and also show its generalization capacity to limbal datasets. We also propose a hybrid framework, wherein the cGAN pre-segmentation is passed to a traditional image analysis-based segmentation algorithm, and describe the improved segmentation performance. To the best of our knowledge, this is the first approach to remove severe specular artifacts and speckle noise patterns (prior to the shallowest interface) that affect the interpretation of anterior segment OCT datasets, thereby resulting in the accurate segmentation of the shallowest tissue interface.
To the best of our knowledge, this is the first work to show the potential of incorporating a cGAN into larger deep learning frameworks for improved corneal and limbal OCT image segmentation. Our cGAN design directly improves the visualization of corneal and limbal OCT images from OCT scanners, and improves the performance of current OCT segmentation algorithms.
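One common way to "combine the original OCT image with the pre-segmentation," as the cascade above describes, is to stack the two as channels of a single network input. A minimal NumPy sketch of that idea (the function name, shapes, and channel ordering are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def make_two_channel_input(bscan: np.ndarray, preseg: np.ndarray) -> np.ndarray:
    """Stack an OCT B-scan with its artifact-suppressed pre-segmentation
    into a channel-first 2-channel array, the kind of combined input a
    segmentation network could consume."""
    assert bscan.shape == preseg.shape, "B-scan and pre-segmentation must align"
    return np.stack([bscan, preseg], axis=0)

bscan = np.random.rand(256, 512).astype(np.float32)  # toy noisy B-scan
preseg = np.zeros_like(bscan)                        # toy pre-segmentation
x = make_two_channel_input(bscan, preseg)
print(x.shape)  # (2, 256, 512)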

17.
Med Phys ; 46(8): 3555-3564, 2019 Aug.
Article in English | MEDLINE | ID: mdl-31131901

ABSTRACT

PURPOSE: Our goal was to use a generative adversarial network (GAN) with feature matching and task-specific perceptual loss to synthesize standard-dose amyloid positron emission tomography (PET) images of high quality, with accurate pathological features, from ultra-low-dose PET images alone. METHODS: Forty PET datasets from 39 participants were acquired with a simultaneous PET/MRI scanner following injection of 330 ± 30 MBq of the amyloid radiotracer 18F-florbetaben. The raw list-mode PET data were reconstructed as the standard-dose ground truth and were randomly undersampled by a factor of 100 to reconstruct 1% low-dose PET scans. A 2D encoder-decoder network was implemented as the generator to synthesize a standard-dose image, and a discriminator was used to evaluate the synthesized images. The two networks competed with each other to achieve high-visual-quality PET from the ultra-low-dose PET. Multi-slice inputs were used to reduce noise by providing the network with 2.5D information. Feature matching was applied to reduce hallucinated structures. Task-specific perceptual loss was designed to maintain the correct pathological features. Image quality was evaluated by peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and root mean square error (RMSE) metrics with and without each of these modules. Two expert radiologists were asked to score image quality on a 5-point scale and to identify the amyloid status (positive or negative). RESULTS: With only low-dose PET as input, the proposed method significantly outperformed Chen et al.'s method (Chen et al. Radiology. 2018;290:649-656) (which shows the best performance in this task) with the same input (PET-only model) by 1.87 dB in PSNR, 2.04% in SSIM, and 24.75% in RMSE. It also achieved comparable results to Chen et al.'s method which used additional magnetic resonance imaging (MRI) inputs (PET-MR model).
Experts' reading results showed that the proposed method could achieve better overall image quality and better maintain the pathological features indicating amyloid status than both the PET-only and PET-MR models proposed by Chen et al. CONCLUSION: Standard-dose amyloid PET images can be synthesized from ultra-low-dose images using a GAN. Applying adversarial learning, feature matching, and task-specific perceptual loss is essential to ensure image quality and the preservation of pathological features.
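The PSNR and RMSE metrics used to evaluate the synthesized images have simple closed forms. A minimal NumPy sketch (illustrative only; the paper's exact normalization and peak convention may differ):

```python
import numpy as np

def rmse(ref: np.ndarray, img: np.ndarray) -> float:
    """Root mean square error between a reference and a test image."""
    return float(np.sqrt(np.mean((ref - img) ** 2)))

def psnr(ref: np.ndarray, img: np.ndarray) -> float:
    """Peak signal-to-noise ratio in dB, taking the reference's maximum as the peak."""
    err = rmse(ref, img)
    return float("inf") if err == 0 else 20.0 * np.log10(ref.max() / err)

# Toy example: a uniform +0.1 error against a reference peaking at 1.0
ref = np.array([[1.0, 0.5], [0.25, 0.0]])
noisy = ref + 0.1
print(round(rmse(ref, noisy), 3))  # 0.1
print(round(psnr(ref, noisy), 2))  # 20.0
```

Lower RMSE and higher PSNR indicate closer agreement with the standard-dose ground truth, which is the direction of the gains reported above.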


Subjects
Image Processing, Computer-Assisted/methods , Machine Learning , Positron-Emission Tomography , Radiation Doses , Signal-To-Noise Ratio