1.
Eur Radiol ; 2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38536464

ABSTRACT

BACKGROUND: Accurate mortality risk quantification is crucial for the management of hepatocellular carcinoma (HCC); however, most scoring systems are subjective. PURPOSE: To develop and independently validate a machine learning mortality risk quantification method for HCC patients using standard-of-care clinical data and liver radiomics on baseline magnetic resonance imaging (MRI). METHODS: This retrospective study included all patients treated at our institution who had multiphasic contrast-enhanced MRI at the time of diagnosis. Patients were censored at their last date of follow-up, end of observation, or liver transplantation. The data were randomly sampled into independent cohorts, with 85% for development and 15% for independent validation. An automated liver segmentation framework was adopted for radiomic feature extraction. A random survival forest combined clinical and radiomic variables to predict overall survival (OS), and performance was evaluated using Harrell's C-index. RESULTS: A total of 555 treatment-naïve HCC patients (mean age, 63.8 years ± 8.9 [standard deviation]; 118 females) with MRI at the time of diagnosis were included, of whom 287 (51.7%) died after a median time of 14.40 (interquartile range, 22.23) months; the median follow-up was 32.47 (interquartile range, 61.5) months. The developed risk prediction framework required 1.11 min on average and yielded C-indices of 0.8503 and 0.8234 in the development and independent validation cohorts, respectively, outperforming conventional clinical staging systems. Predicted risk scores were significantly associated with OS (p < .00001 in both cohorts). CONCLUSIONS: Machine learning reliably, rapidly, and reproducibly predicts mortality risk in patients with hepatocellular carcinoma from data routinely acquired in clinical practice. CLINICAL RELEVANCE STATEMENT: Precision mortality risk prediction using routinely available standard-of-care clinical data and automated MRI radiomic features could enable personalized follow-up strategies, guide management decisions, and improve clinical workflow efficiency in tumor boards. KEY POINTS: • Machine learning enables hepatocellular carcinoma mortality risk prediction using standard-of-care clinical data and automated radiomic features from multiphasic contrast-enhanced MRI. • Automated mortality risk prediction achieved state-of-the-art performance for mortality risk quantification and outperformed conventional clinical staging systems. • Patients were stratified into low-, intermediate-, and high-risk groups with significantly different survival times, generalizable to an independent evaluation cohort.
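
As a rough illustration of the modeling step described above (a random survival forest over combined clinical and radiomic variables, evaluated with Harrell's C-index), the sketch below uses scikit-survival with placeholder data and hypothetical hyperparameters; it is not the authors' implementation.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sksurv.ensemble import RandomSurvivalForest
from sksurv.metrics import concordance_index_censored
from sksurv.util import Surv

# X: one row per patient, columns = clinical variables + MRI liver radiomics (placeholder)
rng = np.random.default_rng(0)
X = rng.normal(size=(555, 40))
time = rng.exponential(scale=30.0, size=555)    # months to death or censoring (placeholder)
event = rng.random(555) < 0.5                   # True = death observed (placeholder)

y = Surv.from_arrays(event=event, time=time)    # structured survival target

# 85% development / 15% independent validation, mirroring the study design
X_dev, X_val, y_dev, y_val = train_test_split(X, y, test_size=0.15, random_state=42)

rsf = RandomSurvivalForest(n_estimators=500, min_samples_leaf=10, random_state=42)
rsf.fit(X_dev, y_dev)

risk = rsf.predict(X_val)                       # higher score = higher mortality risk
cindex = concordance_index_censored(y_val["event"], y_val["time"], risk)[0]
print(f"Harrell's C-index (validation): {cindex:.3f}")
```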

2.
bioRxiv ; 2024 Jan 08.
Article in English | MEDLINE | ID: mdl-38260465

ABSTRACT

Amyloid accumulation in Alzheimer's disease (AD) is associated with synaptic damage and altered connectivity in brain networks. While measures of amyloid accumulation and biochemical changes in mouse models have utility for translational studies of certain therapeutics, preclinical analysis of altered brain connectivity using clinically relevant fMRI measures has not been well developed for agents intended to improve neural networks. Here, we conduct a longitudinal study in a double knock-in mouse model for AD (AppNL-G-F/hMapt), monitoring brain connectivity by means of resting-state fMRI. While the 4-month-old AD mice are indistinguishable from wild-type controls (WT), decreased connectivity in the default-mode network is significant for the AD mice relative to WT mice by 6 months of age and is pronounced by 9 months of age. In a second cohort of 20-month-old mice with persistent functional connectivity deficits in AD relative to WT, we assess the impact of two months of oral treatment with a silent allosteric modulator of mGluR5 (BMS-984923) known to rescue synaptic density. Functional connectivity deficits in the aged AD mice are reversed by the mGluR5-directed treatment. The longitudinal application of fMRI has enabled us to define the preclinical time trajectory of AD-related changes in functional connectivity, and to demonstrate a translatable metric for monitoring disease emergence, progression, and response to synapse-rescuing treatment.
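
For readers unfamiliar with the connectivity readout used above: resting-state functional connectivity is commonly summarized as the Pearson correlation between ROI-averaged BOLD time series, often Fisher z-transformed before group comparison (AD vs. WT). The snippet below is a generic, hypothetical sketch with placeholder data, not the authors' preprocessing or statistical pipeline.

```python
import numpy as np

# ts: ROI-averaged BOLD time series, shape (n_timepoints, n_rois),
# e.g., regions of a default-mode-like network (placeholder data)
rng = np.random.default_rng(0)
ts = rng.normal(size=(600, 6))

# Pairwise Pearson correlation between ROI time series
fc = np.corrcoef(ts, rowvar=False)              # (n_rois, n_rois) connectivity matrix

# Fisher z-transform is often applied before group-level comparisons
fc_z = np.arctanh(np.clip(fc, -0.999999, 0.999999))
np.fill_diagonal(fc_z, 0.0)
print(fc_z.round(2))
```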

3.
Eur Radiol ; 2024 Jan 13.
Article in English | MEDLINE | ID: mdl-38217704

ABSTRACT

OBJECTIVES: To develop and evaluate a deep convolutional neural network (DCNN) for automated liver segmentation, volumetry, and radiomic feature extraction on contrast-enhanced portal venous phase magnetic resonance imaging (MRI). MATERIALS AND METHODS: This retrospective study included hepatocellular carcinoma patients from an institutional database with portal venous MRI. After manual segmentation, the data was randomly split into independent training, validation, and internal testing sets. From a collaborating institution, de-identified scans were used for external testing. The public LiverHccSeg dataset was used for further external validation. A 3D DCNN was trained to automatically segment the liver. Segmentation accuracy was quantified by the Dice similarity coefficient (DSC) with respect to manual segmentation. A Mann-Whitney U test was used to compare the internal and external test sets. Agreement of volumetry and radiomic features was assessed using the intraclass correlation coefficient (ICC). RESULTS: In total, 470 patients met the inclusion criteria (63.9±8.2 years; 376 males) and 20 patients were used for external validation (41±12 years; 13 males). DSC segmentation accuracy of the DCNN was similarly high between the internal (0.97±0.01) and external (0.96±0.03) test sets (p=0.28) and demonstrated robust segmentation performance on public testing (0.93±0.03). Agreement of liver volumetry was satisfactory in the internal (ICC, 0.99), external (ICC, 0.97), and public (ICC, 0.85) test sets. Radiomic features demonstrated excellent agreement in the internal (mean ICC, 0.98±0.04), external (mean ICC, 0.94±0.10), and public (mean ICC, 0.91±0.09) datasets. CONCLUSION: Automated liver segmentation yields robust and generalizable segmentation performance on MRI data and can be used for volumetry and radiomic feature extraction. CLINICAL RELEVANCE STATEMENT: Liver volumetry, anatomic localization, and extraction of quantitative imaging biomarkers require accurate segmentation, but manual segmentation is time-consuming. A deep convolutional neural network demonstrates fast and accurate segmentation performance on T1-weighted portal venous MRI. KEY POINTS: • This deep convolutional neural network yields robust and generalizable liver segmentation performance on internal, external, and public testing data. • Automated liver volumetry demonstrated excellent agreement with manual volumetry. • Automated liver segmentations can be used for robust and reproducible radiomic feature extraction.
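
For reference, the Dice similarity coefficient used above measures overlap between a predicted and a manual mask as twice the intersection divided by the sum of the two foreground volumes. A minimal NumPy version (generic, not the authors' code):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, ref: np.ndarray, eps: float = 1e-8) -> float:
    """Dice similarity coefficient between two binary masks of equal shape."""
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    return float(2.0 * intersection / (pred.sum() + ref.sum() + eps))

# Example: two toy 3D liver masks
a = np.zeros((8, 8, 8), dtype=bool); a[2:6, 2:6, 2:6] = True
b = np.zeros((8, 8, 8), dtype=bool); b[3:7, 2:6, 2:6] = True
print(f"DSC = {dice_coefficient(a, b):.3f}")
```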

4.
IEEE Trans Med Imaging ; 43(1): 203-215, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37432807

ABSTRACT

Automated volumetric meshing of patient-specific heart geometry can help expedite various biomechanics studies, such as post-intervention stress estimation. Prior meshing techniques often neglect important modeling characteristics for successful downstream analyses, especially for thin structures like the valve leaflets. In this work, we present DeepCarve (Deep Cardiac Volumetric Mesh): a novel deformation-based deep learning method that automatically generates patient-specific volumetric meshes with high spatial accuracy and element quality. The main novelty in our method is the use of minimally sufficient surface mesh labels for precise spatial accuracy and the simultaneous optimization of isotropic and anisotropic deformation energies for volumetric mesh quality. Mesh generation takes only 0.13 seconds/scan during inference, and each mesh can be directly used for finite element analyses without any manual post-processing. Calcification meshes can also be subsequently incorporated for increased simulation accuracy. Numerous stent deployment simulations validate the viability of our approach for large-batch analyses. Our code is available at https://github.com/danpak94/Deep-Cardiac-Volumetric-Mesh.


Subjects
Deep Learning, Humans, Biomechanical Phenomena, Computer Simulation, Patient-Specific Modeling, Heart/diagnostic imaging
5.
IEEE Trans Biomed Eng ; 71(3): 1084-1091, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37874731

ABSTRACT

OBJECTIVE: To compute a dense prostate cancer risk map for the individual patient post-biopsy from magnetic resonance imaging (MRI) and to provide a more reliable evaluation of its fitness in prostate regions that were not identified as suspicious for cancer by a human reader in pre- and intra-biopsy imaging analysis. METHODS: Low-level pre-biopsy MRI biomarkers from targeted and non-targeted biopsy locations were extracted and statistically tested for representativeness against biomarkers from non-biopsied prostate regions. A probabilistic machine learning classifier was optimized to map biomarkers to their core-level pathology, followed by extrapolation of pathology scores to non-biopsied prostate regions. Goodness-of-fit was assessed at targeted and non-targeted biopsy locations for the post-biopsy individual patient. RESULTS: Our experiments showed high predictability of imaging biomarkers in differentiating histopathology scores in thousands of non-targeted core-biopsy locations (ROC-AUCs: 0.85-0.88), but also high variability between patients (median ROC-AUC [IQR]: 0.81-0.89 [0.29-0.40]). CONCLUSION: The sparseness of prostate biopsy data makes the validation of whole-gland risk mapping a non-trivial task. Previous studies i) focused on targeted-biopsy locations, although biopsy specimens drawn from systematically scattered locations across the prostate constitute a more representative sample of non-biopsied regions, and ii) estimated prediction power across predicted instances (e.g., biopsy specimens) with no patient distinction, which may lead to unreliable estimation of model fitness to the individual patient due to variation between patients in instance count, imaging characteristics, and pathologies. SIGNIFICANCE: This study proposes personalized whole-gland prostate cancer risk mapping post-biopsy to allow clinicians to better stage disease and personalize focal therapy treatment plans.
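
The per-patient validation argument above (estimating model fitness for each individual rather than pooling all biopsy cores) can be illustrated with a per-patient ROC-AUC computation. The sketch below uses hypothetical column names and placeholder data, not the study's classifier:

```python
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score

# One row per biopsy core: patient ID, predicted cancer probability,
# and core-level pathology label (all placeholder data)
rng = np.random.default_rng(0)
cores = pd.DataFrame({
    "patient_id": np.repeat(np.arange(30), 12),
    "pred_prob": rng.random(360),
    "pathology": rng.integers(0, 2, size=360),
})

# Pooled AUC across all cores (no patient distinction)
pooled_auc = roc_auc_score(cores["pathology"], cores["pred_prob"])

# Per-patient AUC: only computable when a patient has both classes
per_patient = (
    cores.groupby("patient_id")
    .apply(lambda g: roc_auc_score(g["pathology"], g["pred_prob"])
           if g["pathology"].nunique() == 2 else np.nan)
    .dropna()
)
iqr = per_patient.quantile(0.75) - per_patient.quantile(0.25)
print(f"pooled AUC={pooled_auc:.2f}, "
      f"median per-patient AUC={per_patient.median():.2f}, IQR={iqr:.2f}")
```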


Subjects
Prostatic Neoplasms, Male, Humans, Large-Core Needle Biopsy/methods, Prostatic Neoplasms/diagnostic imaging, Prostatic Neoplasms/pathology, Image-Guided Biopsy/methods, Magnetic Resonance Imaging/methods, Biomarkers
6.
Article in English | MEDLINE | ID: mdl-38090633

ABSTRACT

Prostate cancer lesion segmentation in multi-parametric magnetic resonance imaging (mpMRI) is crucial for pre-biopsy diagnosis and targeted biopsy guidance. Deep convolutional neural networks have been widely utilized for lesion segmentation. However, these methods fail to achieve a high Dice coefficient because of the large variations in lesion size and location within the gland. To address this problem, we integrate the clinically meaningful prostate-specific antigen density (PSAD) biomarker into the deep learning model using feature-wise transformations to condition the features in latent space, and thus control the size of the lesion prediction. We tested our models on a public dataset with 214 annotated mpMRI scans and compared the segmentation performance to a baseline 3D U-Net model. Results demonstrate that integrating the PSAD biomarker significantly improves segmentation performance in both the Dice coefficient and the centroid distance metric.
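
The feature-wise conditioning described above is in the spirit of FiLM-style modulation: a small network maps the scalar PSAD value to per-channel scale and shift parameters that transform latent feature maps. The PyTorch sketch below illustrates that general mechanism only; the layer sizes are assumptions and it is not the authors' architecture.

```python
import torch
import torch.nn as nn

class FeatureWiseConditioning(nn.Module):
    """FiLM-like modulation of feature maps by a scalar biomarker (e.g., PSAD)."""

    def __init__(self, num_channels: int):
        super().__init__()
        # Map the scalar biomarker to per-channel scale (gamma) and shift (beta)
        self.film = nn.Sequential(
            nn.Linear(1, 64), nn.ReLU(),
            nn.Linear(64, 2 * num_channels),
        )

    def forward(self, feats: torch.Tensor, biomarker: torch.Tensor) -> torch.Tensor:
        # feats: (N, C, D, H, W) latent features; biomarker: (N, 1) scalar per scan
        gamma, beta = self.film(biomarker).chunk(2, dim=1)
        gamma = gamma.view(*gamma.shape, 1, 1, 1)   # broadcast over spatial dims
        beta = beta.view(*beta.shape, 1, 1, 1)
        return gamma * feats + beta

# Toy usage: condition a 32-channel latent volume on PSAD
layer = FeatureWiseConditioning(num_channels=32)
feats = torch.randn(2, 32, 8, 16, 16)
psad = torch.tensor([[0.12], [0.31]])
print(layer(feats, psad).shape)   # torch.Size([2, 32, 8, 16, 16])
```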

7.
Article in English | MEDLINE | ID: mdl-38111738

ABSTRACT

Head motion during brain positron emission tomography (PET) image acquisition degrades image quality and induces quantification errors. We previously introduced a Deep Learning Head Motion Correction (DL-HMC) method based on supervised learning of gold-standard measurements from the Polaris Vicra motion tracking device and showed the potential of this approach. In this study, we upgrade our network to a multi-task architecture in order to include image appearance prediction in the learning process. This multi-task Deep Learning Head Motion Correction (mtDL-HMC) model was trained on 21 subjects and showed enhanced motion prediction performance compared to our previous DL-HMC method in both quantitative and qualitative evaluations on 5 testing subjects. We also evaluate the trustworthiness of network predictions by performing Monte Carlo Dropout at inference on testing subjects. We discard data associated with high motion prediction uncertainty and show that this does not harm the quality of the reconstructed images and can even improve it.
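
The uncertainty filtering mentioned above relies on Monte Carlo Dropout: dropout is kept active at inference, the same input is passed through the network several times, and the spread of the predictions serves as an uncertainty estimate. A generic PyTorch sketch follows; the toy model and threshold are illustrative assumptions, not the mtDL-HMC network.

```python
import torch
import torch.nn as nn

def enable_mc_dropout(model: nn.Module) -> None:
    """Put only dropout layers in train mode so they stay stochastic at inference."""
    model.eval()
    for m in model.modules():
        if isinstance(m, (nn.Dropout, nn.Dropout2d, nn.Dropout3d)):
            m.train()

@torch.no_grad()
def mc_dropout_predict(model: nn.Module, x: torch.Tensor, n_samples: int = 30):
    enable_mc_dropout(model)
    samples = torch.stack([model(x) for _ in range(n_samples)])  # (S, N, out_dim)
    return samples.mean(dim=0), samples.std(dim=0)               # prediction, uncertainty

# Toy model predicting 6 rigid-motion parameters from a feature vector
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Dropout(p=0.2), nn.Linear(64, 6))
mean_motion, motion_std = mc_dropout_predict(model, torch.randn(4, 128))

# Discard predictions whose mean uncertainty exceeds a chosen threshold (illustrative value)
keep = motion_std.mean(dim=1) < 0.5
print(mean_motion.shape, keep)
```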

8.
Data Brief ; 51: 109662, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37869619

ABSTRACT

Accurate segmentation of liver and tumor regions in medical imaging is crucial for the diagnosis, treatment, and monitoring of hepatocellular carcinoma (HCC) patients. However, manual segmentation is time-consuming and subject to inter- and intra-rater variability. Therefore, automated methods are necessary but require rigorous validation against high-quality segmentations based on a consensus of raters. To address the need for reliable and comprehensive data in this domain, we present LiverHccSeg, a dataset that provides liver and tumor segmentations on multiphasic contrast-enhanced magnetic resonance imaging from two board-approved abdominal radiologists, along with an analysis of inter-rater agreement. LiverHccSeg provides a curated resource for liver and HCC tumor segmentation tasks. The dataset includes a scientific reading and co-registered contrast-enhanced multiphasic magnetic resonance imaging (MRI) scans with corresponding manual segmentations by the two radiologists, along with relevant metadata, and offers researchers a comprehensive foundation for external validation and benchmarking of liver and tumor segmentation algorithms. The dataset also provides an analysis of the agreement between the two sets of liver and tumor segmentations. Through the calculation of appropriate segmentation metrics, we provide insights into the consistency and variability of liver and tumor segmentations among the radiologists. A total of 17 cases were included for liver segmentation and 14 cases for HCC tumor segmentation. Liver segmentations demonstrated high agreement (mean Dice, 0.95 ± 0.01 [standard deviation]), whereas HCC tumor segmentations showed higher variability (mean Dice, 0.85 ± 0.16 [standard deviation]). The applications of LiverHccSeg can be manifold, ranging from testing machine learning algorithms on public external data to radiomic feature analyses. Leveraging the inter-rater agreement analysis within the dataset, researchers can investigate the impact of variability on segmentation performance and explore methods to enhance the accuracy and robustness of liver and tumor segmentation algorithms in HCC patients. By making this dataset publicly available, LiverHccSeg aims to foster collaborations, facilitate innovative solutions, and ultimately improve patient outcomes in the diagnosis and treatment of HCC.

9.
IEEE Trans Radiat Plasma Med Sci ; 7(4): 344-353, 2023 Apr.
Article in English | MEDLINE | ID: mdl-37842204

ABSTRACT

Whole-body dynamic FDG-PET imaging with a continuous-bed-motion (CBM) multi-pass acquisition protocol is a promising approach for measuring metabolism. However, inter-pass misalignment originating from body movement can degrade parametric quantification. We aim to apply a non-rigid registration method for inter-pass motion correction in whole-body dynamic PET. Twenty-seven subjects underwent a 90-min whole-body FDG CBM PET scan on a Biograph mCT (Siemens Healthineers), acquiring 9 over-the-heart single-bed passes and subsequently 19 CBM passes (frames). The inter-pass motion correction was executed using non-rigid image registration with multi-resolution, B-spline free-form deformations. The parametric images were then generated by Patlak analysis. The overlaid Patlak slope Ki and y-intercept Vb images were visualized to qualitatively evaluate motion impact and correction effect. The normalized weighted mean squared Patlak fitting errors (NFE) were compared in the whole body, head, and hypermetabolic regions of interest (ROI). In Ki images, ROI statistics were collected and malignancy discrimination capacity was estimated by the area under the receiver operating characteristic curve (AUC). After the inter-pass motion correction was applied, the spatial misalignment between Ki and Vb images was successfully reduced. Voxel-wise normalized fitting error maps showed global error reduction after motion correction. The NFE in the whole body (p = 0.0013), head (p = 0.0021), and ROIs (p = 0.0377) significantly decreased. The visual appearance of each hypermetabolic ROI in Ki images was enhanced, while 3.59% and 3.67% average absolute percentage changes were observed in mean and maximum Ki values, respectively, across all evaluated ROIs. The estimated mean Ki values changed substantially with motion correction (p = 0.0021). The AUC of both mean Ki and maximum Ki increased after motion correction, suggesting the potential of enhancing oncological discrimination capacity through inter-pass motion correction.
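
For context, Patlak graphical analysis fits a line to transformed tissue and plasma time-activity curves: once the tracer reaches a quasi-steady state, C_T(t)/C_p(t) plotted against the running integral of C_p divided by C_p(t) is approximately linear with slope Ki (net influx rate) and intercept Vb. A self-contained NumPy sketch with synthetic curves (not the study's implementation):

```python
import numpy as np

# Synthetic time-activity curves (placeholder data)
t = np.linspace(1, 90, 60)                      # minutes
cp = 100.0 * np.exp(-0.05 * t) + 5.0            # plasma input function
dt = np.gradient(t)
int_cp = np.cumsum(cp * dt)                     # running integral of the input function

ki_true, vb_true = 0.02, 0.08
ct = ki_true * int_cp + vb_true * cp            # idealized irreversible-uptake tissue curve

# Patlak transformation: y = Ct/Cp, x = integral(Cp)/Cp
x = int_cp / cp
y = ct / cp

# Fit the late, linear portion of the plot (illustrative cutoff)
ki_est, vb_est = np.polyfit(x[20:], y[20:], deg=1)
print(f"Ki = {ki_est:.4f} /min, Vb = {vb_est:.3f}")
```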

10.
Sci Rep ; 13(1): 7579, 2023 05 10.
Article in English | MEDLINE | ID: mdl-37165035

ABSTRACT

Tumor recurrence affects up to 70% of early-stage hepatocellular carcinoma (HCC) patients, depending on treatment option. Deep learning algorithms allow in-depth exploration of imaging data to discover imaging features that may be predictive of recurrence. This study explored the use of convolutional neural networks (CNN) to predict HCC recurrence in patients with early-stage HCC from pre-treatment magnetic resonance (MR) images. This retrospective study included 120 patients with early-stage HCC. Pre-treatment MR images were fed into a machine learning pipeline (VGG16 and XGBoost) to predict recurrence within six different time frames (range 1-6 years). Model performance was evaluated with the area under the receiver operating characteristic curve (AUC-ROC). After prediction, the model's clinical relevance was evaluated using Kaplan-Meier analysis with recurrence-free survival (RFS) as the endpoint. Of 120 patients, 44 had disease recurrence after therapy. The six models yielded AUC values between 0.71 and 0.85. In the Kaplan-Meier analysis, five of the six models reached statistical significance when predicting RFS (log-rank p < 0.05). Our proof-of-concept study indicates that deep learning algorithms can be utilized to predict early-stage HCC recurrence. Successful identification of candidates at high risk of recurrence may help optimize follow-up imaging and improve long-term outcomes post-treatment.
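
The downstream clinical evaluation above (Kaplan-Meier analysis of recurrence-free survival stratified by the model's prediction, with a log-rank test) can be sketched with the lifelines package. All variable names and data below are hypothetical placeholders:

```python
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# One row per patient: months of recurrence-free follow-up, recurrence indicator,
# and the model's binary prediction of recurrence (placeholder data)
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "rfs_months": rng.exponential(scale=36, size=120).round(1),
    "recurred": rng.random(120) < 0.37,
    "predicted_high_risk": rng.random(120) < 0.4,
})

high, low = df[df["predicted_high_risk"]], df[~df["predicted_high_risk"]]

kmf_low = KaplanMeierFitter().fit(low["rfs_months"], low["recurred"], label="predicted low risk")
kmf_high = KaplanMeierFitter().fit(high["rfs_months"], high["recurred"], label="predicted high risk")
print(kmf_low.median_survival_time_, kmf_high.median_survival_time_)

result = logrank_test(high["rfs_months"], low["rfs_months"],
                      event_observed_A=high["recurred"], event_observed_B=low["recurred"])
print(f"log-rank p = {result.p_value:.4f}")
```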


Subjects
Hepatocellular Carcinoma, Liver Neoplasms, Humans, Hepatocellular Carcinoma/pathology, Liver Neoplasms/pathology, Local Neoplasm Recurrence/diagnostic imaging, Retrospective Studies, Magnetic Resonance Imaging, Machine Learning
11.
Med Image Anal ; 88: 102840, 2023 08.
Article in English | MEDLINE | ID: mdl-37216735

ABSTRACT

Myocardial perfusion imaging (MPI) using single-photon emission computed tomography (SPECT) is widely applied for the diagnosis of cardiovascular diseases. Attenuation maps (µ-maps) derived from computed tomography (CT) are utilized for attenuation correction (AC) to improve the diagnostic accuracy of cardiac SPECT. However, in clinical practice, SPECT and CT scans are acquired sequentially, potentially inducing misregistration between the two images and further producing AC artifacts. Conventional intensity-based registration methods show poor performance in the cross-modality registration of SPECT and CT-derived µ-maps since the two imaging modalities might present totally different intensity patterns. Deep learning has shown great potential in medical image registration. However, existing deep learning strategies for medical image registration encoded the input images by simply concatenating the feature maps of different convolutional layers, which might not fully extract or fuse the input information. In addition, deep-learning-based cross-modality registration of cardiac SPECT and CT-derived µ-maps has not been investigated before. In this paper, we propose a novel Dual-Channel Squeeze-Fusion-Excitation (DuSFE) co-attention module for the cross-modality rigid registration of cardiac SPECT and CT-derived µ-maps. DuSFE is designed based on the co-attention mechanism of two cross-connected input data streams. The channel-wise or spatial features of SPECT and µ-maps are jointly encoded, fused, and recalibrated in the DuSFE module. DuSFE can be flexibly embedded at multiple convolutional layers to enable gradual feature fusion in different spatial dimensions. Experiments using clinical patient MPI studies demonstrated that the DuSFE-embedded neural network generated significantly lower registration errors and more accurate AC SPECT images than existing methods. We also showed that the DuSFE-embedded network did not over-correct or degrade the registration performance of motion-free cases. The source code of this work is available at https://github.com/XiongchaoChen/DuSFE_CrossRegistration.


Subjects
Single Photon Emission Computed Tomography, X-Ray Computed Tomography, Humans, X-Ray Computed Tomography/methods, Single Photon Emission Computed Tomography/methods, Heart, Imaging Phantoms, Computer-Assisted Image Processing/methods
12.
Eur Radiol ; 33(9): 6599-6607, 2023 Sep.
Article in English | MEDLINE | ID: mdl-36988714

ABSTRACT

OBJECTIVES: The objective of this study was to translate a deep learning (DL) approach for semiautomated analysis of body composition (BC) measures from standard of care CT images to investigate the prognostic value of BC in pediatric, adolescent, and young adult (AYA) patients with lymphoma. METHODS: This 10-year retrospective, single-site study of 110 pediatric and AYA patients with lymphoma involved manual segmentation of fat and muscle tissue from 260 CT imaging datasets obtained as part of routine imaging at initial staging and first therapeutic follow-up. A DL model was trained to perform semiautomated image segmentation of adipose and muscle tissue. The association between BC measures and the occurrence of 3-year late effects was evaluated using Cox proportional hazards regression analyses. RESULTS: DL-guided measures of BC were in close agreement with those obtained by a human rater, as demonstrated by high Dice scores (≥ 0.95) and correlations (r > 0.99) for each tissue of interest. Cox proportional hazards regression analyses revealed that patients with elevated subcutaneous adipose tissue at baseline and first follow-up, along with patients who possessed lower volumes of skeletal muscle at first follow-up, have increased risk of late effects compared to their peers. CONCLUSIONS: DL provides rapid and accurate quantification of image-derived measures of BC that are associated with risk for treatment-related late effects in pediatric and AYA patients with lymphoma. Image-based monitoring of BC measures may enhance future opportunities for personalized medicine for children with lymphoma by identifying patients at the highest risk for late effects of treatment. KEY POINTS: • Deep learning-guided CT image analysis of body composition measures achieved high agreement level with manual image analysis. • Pediatric patients with more fat and less muscle during the course of cancer treatment were more likely to experience a serious adverse event compared to their clinical counterparts. • Deep learning of body composition may add value to routine CT imaging by offering real-time monitoring of pediatric, adolescent, and young adults at high risk for late effects of cancer treatment.


Subjects
Body Composition, Deep Learning, Lymphoma, Adolescent, Child, Humans, Disease Progression, Lymphoma/diagnostic imaging, Retrospective Studies, X-Ray Computed Tomography, Male, Female, Proportional Hazards Models, Predictive Value of Tests
13.
Med Image Anal ; 84: 102711, 2023 02.
Article in English | MEDLINE | ID: mdl-36525845

ABSTRACT

Myocardial ischemia/infarction causes wall-motion abnormalities in the left ventricle. Therefore, reliable motion estimation and strain analysis using 3D+time echocardiography for localization and characterization of myocardial injury are valuable for early detection and targeted interventions. Previous unsupervised cardiac motion tracking methods rely on heavily weighted regularization functions to smooth out the noisy displacement fields in echocardiography. In this work, we present a Co-Attention Spatial Transformer Network (STN) for improved motion tracking and strain analysis in 3D echocardiography. Co-Attention STN aims to extract inter-frame dependent features to improve motion tracking in otherwise noisy 3D echocardiography images. We also propose a novel temporal constraint to further regularize the motion field to produce smooth and realistic cardiac displacement paths over time without prior assumptions on cardiac motion. Our experimental results on both synthetic and in vivo 3D echocardiography datasets demonstrate that our Co-Attention STN provides superior performance compared to existing methods. Strain analysis from the Co-Attention STN also corresponds well with the matched SPECT perfusion maps, demonstrating the clinical utility of using 3D echocardiography for infarct localization.


Subjects
Three-Dimensional Echocardiography, Myocardial Infarction, Left Ventricular Dysfunction, Humans, Heart, Three-Dimensional Echocardiography/methods, Echocardiography/methods
14.
Phys Med Biol ; 68(3)2023 Jan 24.
Article in English | MEDLINE | ID: mdl-36584395

ABSTRACT

Objective. In PET/CT imaging, CT is used for positron emission tomography (PET) attenuation correction (AC). CT artifacts or misalignment between PET and CT can cause AC artifacts and quantification errors in PET. Simultaneous reconstruction (MLAA) of PET activity (λ-MLAA) and attenuation (µ-MLAA) maps was proposed to solve those issues using the time-of-flight PET raw data only. However, λ-MLAA still suffers from quantification error as compared to reconstruction using the gold-standard CT-based attenuation map (µ-CT). Recently, a deep learning (DL)-based framework was proposed to improve MLAA by predicting µ-DL from λ-MLAA and µ-MLAA using an image domain loss function (IM-loss). However, IM-loss does not directly measure the AC errors according to the PET attenuation physics. Our preliminary studies showed that an additional physics-based loss function can lead to more accurate PET AC. The main objective of this study is to optimize the attenuation map generation framework for clinical full-dose 18F-FDG studies. We also investigate the effectiveness of the optimized network on predicting attenuation maps for synthetic low-dose oncological PET studies. Approach. We optimized the proposed DL framework by applying different preprocessing steps and hyperparameter optimization, including patch size, weights of the loss terms and number of angles in the projection-domain loss term. The optimization was performed based on 100 skull-to-toe 18F-FDG PET/CT scans with minimal misalignment. The optimized framework was further evaluated on 85 clinical full-dose neck-to-thigh 18F-FDG cancer datasets as well as synthetic low-dose studies with only 10% of the full-dose raw data. Main results. Clinical evaluation of tumor quantification as well as physics-based figure-of-merit metric evaluation validated the promising performance of our proposed method. For both full-dose and low-dose studies, the proposed framework achieved <1% error in tumor standardized uptake value measures. Significance. It is of great clinical interest to achieve CT-less PET reconstruction, especially for low-dose PET studies.


Subjects
Deep Learning, Neoplasms, Humans, Positron Emission Tomography Computed Tomography, Multimodal Imaging/methods, Computer-Assisted Image Processing/methods, Fluorodeoxyglucose F18, Magnetic Resonance Imaging/methods, Algorithms, Positron-Emission Tomography/methods
15.
Med Image Comput Comput Assist Interv ; 14229: 710-719, 2023 Oct.
Article in English | MEDLINE | ID: mdl-38174207

ABSTRACT

Head motion correction is an essential component of brain PET imaging, in which even motion of small magnitude can greatly degrade image quality and introduce artifacts. Building upon previous work, we propose a new head motion correction framework taking fast reconstructions as input. The main characteristics of the proposed method are: (i) the adoption of a high-resolution short-frame fast reconstruction workflow; (ii) the development of a novel encoder for PET data representation extraction; and (iii) the implementation of data augmentation techniques. Ablation studies are conducted to assess the individual contributions of each of these design choices. Furthermore, multi-subject studies are conducted on an 18F-FPEB dataset, and the method's performance is qualitatively and quantitatively evaluated through a MOLAR reconstruction study and corresponding brain region-of-interest (ROI) standardized uptake value (SUV) evaluation. We also compared our method with a conventional intensity-based registration method. Our results demonstrate that the proposed method outperforms other methods on all subjects and can accurately estimate motion for subjects outside the training set. All code is publicly available on GitHub: https://github.com/OnofreyLab/dl-hmc_fast_recon_miccai2023.

16.
Mach Learn Clin Neuroimaging (2023) ; 14312: 34-45, 2023 Oct.
Article in English | MEDLINE | ID: mdl-38174216

ABSTRACT

Head movement during long scan sessions degrades the quality of reconstruction in positron emission tomography (PET) and introduces artifacts, which limits clinical diagnosis and treatment. Recent deep learning-based motion correction work utilized raw PET list-mode data and hardware motion tracking (HMT) to learn head motion in a supervised manner. However, motion prediction results were not robust to testing subjects outside the training data domain. In this paper, we integrate a cross-attention mechanism into the supervised deep learning network to improve motion correction across test subjects. Specifically, cross-attention learns the spatial correspondence between the reference images and moving images to explicitly focus the model on the most correlative inherent information, the head region, for motion correction. We validate our approach on brain PET data from two different scanners: HRRT without time of flight (ToF) and mCT with ToF. Compared with traditional and deep learning benchmarks, our network improved the performance of motion correction by 58% and 26% in translation and rotation, respectively, in multi-subject testing on HRRT studies. In mCT studies, our approach improved performance by 66% and 64% for translation and rotation, respectively. Our results demonstrate that cross-attention has the potential to improve the quality of brain PET image reconstruction without dependence on HMT. All code will be released on GitHub: https://github.com/OnofreyLab/dl_hmc_attention_mlcn2023.
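
The cross-attention idea above (queries from the reference image attend to keys and values from the moving image, so the model focuses on spatially corresponding head regions) can be illustrated with PyTorch's built-in multi-head attention. This is a generic sketch with assumed token shapes, not the paper's network:

```python
import torch
import torch.nn as nn

class CrossAttentionBlock(nn.Module):
    """Reference-image tokens attend to moving-image tokens (generic illustration)."""

    def __init__(self, embed_dim: int = 128, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(embed_dim)

    def forward(self, ref_tokens: torch.Tensor, mov_tokens: torch.Tensor) -> torch.Tensor:
        # ref_tokens, mov_tokens: (N, num_patches, embed_dim) patch embeddings
        attended, _ = self.attn(query=ref_tokens, key=mov_tokens, value=mov_tokens)
        return self.norm(ref_tokens + attended)   # residual connection

# Toy usage: 64 patch tokens per reference and moving image
block = CrossAttentionBlock()
ref = torch.randn(2, 64, 128)
mov = torch.randn(2, 64, 128)
print(block(ref, mov).shape)   # torch.Size([2, 64, 128])
```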

17.
Neuroimage ; 264: 119678, 2022 12 01.
Article in English | MEDLINE | ID: mdl-36261057

ABSTRACT

Head motion presents a continuing problem in brain PET studies. A wealth of motion correction (MC) algorithms have been proposed, including both hardware-based methods and data-driven methods. However, in most real brain PET studies, in the absence of ground truth or a gold standard of motion information, it is challenging to objectively evaluate MC quality. For MC evaluation, image-domain metrics, e.g., the standardized uptake value (SUV) change before and after MC, are commonly used, but such measures lack objectivity because 1) other factors, e.g., attenuation correction, scatter correction and parameters used in the reconstruction, will confound MC effectiveness; 2) SUV only reflects final image quality, and it cannot precisely inform when an MC method performed well or poorly during the scan time period; and 3) SUV is tracer-dependent and head motion may cause increases or decreases in SUV for different tracers, so evaluating MC effectiveness is complicated. Here, we present a new algorithm, motion-corrected centroid-of-distribution (MCCOD), to perform objective quality control for measured or estimated rigid motion information. MCCOD is a three-dimensional surrogate trace of the center of tracer distribution after performing rigid MC using the existing motion information. MCCOD is used to inform whether the motion information is accurate, using the PET raw data only, i.e., without PET image reconstruction, where inaccurate motion information typically leads to abrupt changes in the MCCOD trace. MCCOD was validated using simulation studies and was tested on real studies acquired from both time-of-flight (TOF) and non-TOF scanners. A deep learning-based brain mask segmentation was implemented, which is shown to be necessary for non-TOF MCCOD generation. MCCOD is shown to be effective in detecting abrupt translation motion errors in slowly varying tracer distributions caused by the motion tracking hardware and can be used to compare different motion estimation methods as well as to improve existing motion information.
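
The MCCOD trace builds on a simple quantity: the centroid (center of mass) of the detected-event distribution computed in short time windows directly from the raw data. The sketch below shows a generic centroid-of-distribution trace on placeholder events; the actual MCCOD algorithm additionally applies the measured rigid motion before computing the trace.

```python
import numpy as np

def centroid_trace(event_xyz: np.ndarray, event_t: np.ndarray, window_s: float = 1.0):
    """Center of mass of event positions in consecutive time windows.

    event_xyz: (n_events, 3) estimated event coordinates (placeholder convention)
    event_t:   (n_events,) event times in seconds
    Returns an (n_windows, 3) centroid trace; abrupt jumps suggest head motion.
    """
    edges = np.arange(event_t.min(), event_t.max() + window_s, window_s)
    idx = np.digitize(event_t, edges) - 1
    n_windows = len(edges) - 1
    trace = np.full((n_windows, 3), np.nan)
    for w in range(n_windows):
        pts = event_xyz[idx == w]
        if len(pts):
            trace[w] = pts.mean(axis=0)
    return trace

# Toy data: a step change in the x-centroid mimicking an abrupt head movement at t = 30 s
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 60, 200_000))
xyz = rng.normal(size=(t.size, 3))
xyz[t > 30, 0] += 5.0
trace = centroid_trace(xyz, t)
print(trace[28:32].round(2))   # jump visible in the x-component around window 30
```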


Subjects
Computer-Assisted Image Processing, Positron-Emission Tomography, Humans, Computer-Assisted Image Processing/methods, Positron-Emission Tomography/methods, Motion (Physics), Algorithms, Brain/diagnostic imaging
18.
JCO Clin Cancer Inform ; 6: e2200016, 2022 09.
Article in English | MEDLINE | ID: mdl-36179281

ABSTRACT

PURPOSE: There is ongoing clinical need to improve estimates of disease outcome in prostate cancer. Machine learning (ML) approaches to pathologic diagnosis and prognosis are a promising and increasingly used strategy. In this study, we use an ML algorithm for prediction of adverse outcomes at radical prostatectomy (RP) using whole-slide images (WSIs) of prostate biopsies with Grade Group (GG) 2 or 3 disease. METHODS: We performed a retrospective review of prostate biopsies collected at our institution that had a corresponding RP, GG 2 or 3 disease in one or more cores, and no biopsies with higher than GG 3 disease. A hematoxylin and eosin-stained core needle biopsy from each site with GG 2 or 3 disease was scanned and used as the sole input for the algorithm. The ML pipeline had three phases: image preprocessing, feature extraction, and adverse outcome prediction. First, patches were extracted from each biopsy scan. Subsequently, the pre-trained Visual Geometry Group-16 convolutional neural network was used for feature extraction. A representative feature vector was then used as input to an Extreme Gradient Boosting classifier for predicting the binary adverse outcome. We subsequently assessed patient clinical risk using the CAPRA score for comparison with the ML pipeline results. RESULTS: The data set included 361 WSIs from 107 patients (56 with adverse pathology at RP). The areas under the receiver operating characteristic curve for the ML classification were 0.72 (95% CI, 0.62 to 0.81), 0.65 (95% CI, 0.53 to 0.79), and 0.89 (95% CI, 0.79 to 1.00) for the entire cohort, GG 2 patients, and GG 3 patients, respectively, similar to the performance of the CAPRA clinical risk assessment. CONCLUSION: We provide evidence for the potential of ML algorithms to use WSIs of needle core prostate biopsies to estimate clinically relevant prostate cancer outcomes.
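
The three-phase pipeline above (patch extraction, pre-trained VGG-16 feature extraction, Extreme Gradient Boosting classification) can be outlined roughly as follows. The sketch uses torchvision's VGG-16 and xgboost with assumed shapes and a simple mean aggregation of patch features; it is not the authors' code and omits patch extraction, cross-validation, and the CAPRA comparison.

```python
import numpy as np
import torch
import xgboost as xgb
from sklearn.metrics import roc_auc_score
from torchvision.models import vgg16

# Pre-trained VGG-16 as a fixed feature extractor (convolutional part only)
backbone = vgg16(weights="IMAGENET1K_V1")
backbone.classifier = torch.nn.Identity()   # keep the 25088-dim flattened conv features
backbone.eval()

@torch.no_grad()
def slide_feature_vector(patches: torch.Tensor) -> np.ndarray:
    """Average patch-level VGG-16 features into one representative slide vector.

    patches: (n_patches, 3, 224, 224), ideally ImageNet-normalized.
    """
    feats = backbone(patches)                # (n_patches, 25088)
    return feats.mean(dim=0).numpy()

# Toy training data: one feature vector and one adverse-pathology label per biopsy
X = np.stack([slide_feature_vector(torch.randn(4, 3, 224, 224)) for _ in range(16)])
y = np.array([0, 1] * 8)                     # placeholder adverse-outcome labels

clf = xgb.XGBClassifier(n_estimators=200, max_depth=3, eval_metric="logloss")
clf.fit(X, y)
print("training AUC:", roc_auc_score(y, clf.predict_proba(X)[:, 1]))
```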


Subjects
Prostate, Prostatic Neoplasms, Biopsy, Large-Core Needle Biopsy, Eosine Yellowish-(YS), Hematoxylin, Humans, Machine Learning, Male, Prostate/pathology, Prostate/surgery, Prostatectomy, Prostatic Neoplasms/diagnosis, Prostatic Neoplasms/pathology, Prostatic Neoplasms/surgery
19.
J Vasc Interv Radiol ; 33(7): 814-824.e3, 2022 07.
Article in English | MEDLINE | ID: mdl-35460887

ABSTRACT

PURPOSE: To assess the Liver Imaging Reporting and Data System (LI-RADS) and radiomic features in pretreatment magnetic resonance (MR) imaging for predicting progression-free survival (PFS) in patients with nodular hepatocellular carcinoma (HCC) treated with radiofrequency (RF) ablation. MATERIAL AND METHODS: Sixty-five therapy-naïve patients with 85 nodular HCC tumors <5 cm in size were included in this Health Insurance Portability and Accountability Act-compliant, institutional review board-approved, retrospective study. All patients underwent RF ablation as first-line treatment and demonstrated complete response on the first follow-up imaging. Pretreatment gadolinium-enhanced MR images were analyzed for LI-RADS features by 2 board-certified radiologists and for nodular and perinodular radiomic features extracted from 3-dimensional segmentations. A radiomic signature was calculated with the most informative features of a least absolute shrinkage and selection operator Cox regression model using leave-one-out cross-validation. The associations of both LI-RADS features and the radiomic signature with PFS were assessed via Kaplan-Meier analysis and a weighted log-rank test. RESULTS: The median PFS was 19 months (95% confidence interval, 16.1-19.4) for a follow-up period of 24 months. Multifocality (P = .033); the appearance of capsular continuity, compared with an absent or discontinuous capsule (P = .012); and a higher radiomic signature based on nodular and perinodular features (P = .030) were associated with poorer PFS in early-stage HCC. The observation size, presence of arterial hyperenhancement, nonperipheral washout, and appearance of an enhancing "capsule" were not associated with PFS (P > .05). CONCLUSIONS: Although multifocal HCC clearly indicates a more aggressive phenotype even in early-stage disease, the continuity of an enhancing capsule and a higher radiomic signature may add value as MR imaging biomarkers for poor PFS in HCC treated with RF ablation.


Subjects
Hepatocellular Carcinoma, Catheter Ablation, Liver Neoplasms, Biomarkers, Hepatocellular Carcinoma/diagnostic imaging, Hepatocellular Carcinoma/pathology, Hepatocellular Carcinoma/surgery, Contrast Media, Humans, Liver Neoplasms/diagnostic imaging, Liver Neoplasms/pathology, Liver Neoplasms/surgery, Magnetic Resonance Imaging/methods, Retrospective Studies
20.
Neuroimage ; 252: 119031, 2022 05 15.
Article in English | MEDLINE | ID: mdl-35257856

ABSTRACT

Head motion during PET scans causes image quality degradation, decreased concentration in regions with high uptake and incorrect outcome measures from kinetic analysis of dynamic datasets. Previously, we proposed a data-driven method, center of tracer distribution (COD), to detect head motion without an external motion tracking device. There, motion was detected using one dimension of the COD trace with a semiautomatic detection algorithm, requiring multiple user-defined parameters and manual intervention. In this study, we developed a new data-driven motion detection algorithm, which is automatic, self-adaptive to noise level, does not require user-defined parameters and uses all three dimensions of the COD trace (3DCOD). 3DCOD was first validated and tested using 30 simulation studies (18F-FDG, N = 15; 11C-raclopride (RAC), N = 15) with large motion. The proposed motion correction method was tested on 22 real human datasets, with 20 acquired from a high-resolution research tomograph (HRRT) scanner (18F-FDG, N = 10; 11C-RAC, N = 10) and 2 acquired from the Siemens Biograph mCT scanner. Real-time hardware-based motion tracking information (Vicra) was available for all real studies and was used as the gold standard. 3DCOD was compared to Vicra, no motion correction (NMC), one-direction COD (our previous method, called 1DCOD) and two conventional frame-based image registration (FIR) algorithms, i.e., FIR1 (based on predefined frames reconstructed with attenuation correction) and FIR2 (without attenuation correction), for both simulation and real studies. For the simulation studies, 3DCOD yielded -2.3 ± 1.4% (mean ± standard deviation across all subjects and 11 brain regions) error in region of interest (ROI) uptake for 18F-FDG (-3.4 ± 1.7% for 11C-RAC across all subjects and 2 regions) as compared to Vicra (perfect correction), while NMC, FIR1, FIR2 and 1DCOD yielded -25.4 ± 11.1% (-34.5 ± 16.1% for 11C-RAC), -13.4 ± 3.5% (-16.1 ± 4.6%), -5.7 ± 3.6% (-8.0 ± 4.5%) and -2.6 ± 1.5% (-5.1 ± 2.7%), respectively. For real HRRT studies, 3DCOD yielded -0.3 ± 2.8% difference for 18F-FDG (-0.4 ± 3.2% for 11C-RAC) as compared to Vicra, while NMC, FIR1, FIR2 and 1DCOD yielded -14.9 ± 9.0% (-24.5 ± 14.6%), -3.6 ± 4.9% (-13.4 ± 14.3%), -0.6 ± 3.4% (-6.7 ± 5.3%) and -1.5 ± 4.2% (-2.2 ± 4.1%), respectively. In summary, the proposed motion correction method yielded comparable performance to the hardware-based motion tracking method for multiple tracers, including very challenging cases with large frequent head motion, in studies performed on a non-TOF scanner.


Subjects
Computer-Assisted Image Processing, Positron-Emission Tomography, Algorithms, Brain/diagnostic imaging, Humans, Computer-Assisted Image Processing/methods, Kinetics, Motion (Physics), Movement, Positron-Emission Tomography/methods