ABSTRACT
Noninvasive extracellular pH (pHe) mapping with Biosensor Imaging of Redundant Deviation in Shifts (BIRDS) using MR spectroscopic imaging (MRSI) has been demonstrated on 3T clinical MR scanners at 8 × 8 × 10 mm³ spatial resolution and applied to study various liver cancer treatments. Although pHe imaging at higher resolution can be achieved by extending the acquisition time, a postprocessing method to increase the resolution is preferable to minimize the time the subject spends in the MR scanner. In this work, we propose to improve the spatial resolution of pHe mapping with BIRDS by incorporating anatomical information in the form of multiparametric MRI and using an unsupervised deep-learning technique, Deep Image Prior (DIP). Specifically, we used high-resolution T1, T2, and diffusion-weighted imaging (DWI) MR images of rabbits with VX2 liver tumors as inputs to a U-Net architecture to provide anatomical information. The U-Net parameters were optimized to minimize the mean absolute error between the output super-resolution image and the experimentally acquired low-resolution pHe image. In this way, the super-resolution pHe image is consistent with both the anatomical MR images and the low-resolution pHe measurement from the scanner. The method was developed on data from 49 rabbits implanted with VX2 liver tumors. For evaluation, we also acquired high-resolution pHe images from two rabbits, which served as ground truth. The results indicate a good match between the spatial characteristics of the super-resolution images and the high-resolution ground truth, supported by a low pixelwise absolute error.
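The data-consistency objective described above can be sketched in a few lines of NumPy. This is a minimal illustration, assuming a 2x downsampling factor and optimizing the super-resolution pixels directly in place of the U-Net parameterization (the anatomical-prior inputs are omitted); all values are toy numbers, not the paper's data:

```python
import numpy as np

def block_downsample(img, factor):
    """Average non-overlapping factor x factor blocks (simulating the low-res acquisition)."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def mae(a, b):
    return np.abs(a - b).mean()

# Toy low-resolution "pHe" map (4 x 4) and an 8 x 8 super-resolution candidate.
rng = np.random.default_rng(0)
lowres = 6.8 + 0.4 * rng.random((4, 4))
sr = np.full((8, 8), 7.0)

# Subgradient descent on the data-consistency term MAE(D(sr), lowres).
# In the paper the SR image is produced by a U-Net fed with T1/T2/DWI inputs
# and the network weights are optimized; here we optimize pixels directly.
lr = 0.05
for _ in range(200):
    resid = block_downsample(sr, 2) - lowres
    # Upsample the MAE subgradient back to SR pixels (constants folded into lr).
    sr -= lr * np.kron(np.sign(resid), np.ones((2, 2))) / 4.0

print(round(mae(block_downsample(sr, 2), lowres), 3))
```

In the actual DIP setting, the anatomical MR inputs regularize which of the many SR images consistent with the low-res measurement is produced; the sketch above shows only the consistency term itself.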
Subjects
Liver Neoplasms, Multiparametric Magnetic Resonance Imaging, Animals, Hydrogen-Ion Concentration, Liver Neoplasms/diagnostic imaging, Liver Neoplasms/pathology, Rabbits, Deep Learning, Extracellular Space/diagnostic imaging, Extracellular Space/metabolism, Diffusion Magnetic Resonance Imaging
ABSTRACT
Magnetic Resonance Spectroscopic Imaging (MRSI) is a noninvasive imaging technique for studying metabolism and has become a crucial tool for understanding neurological diseases, cancers, and diabetes. High-spatial-resolution MRSI is needed to characterize lesions, but in practice MRSI is acquired at low resolution due to time and sensitivity restrictions imposed by the low metabolite concentrations. There is therefore a pressing need for a postprocessing approach that generates high-resolution MRSI from low-resolution data that can be acquired quickly and with high sensitivity. Deep learning-based super-resolution methods have provided promising results for improving the spatial resolution of MRSI, but their capability to generate accurate, high-quality images remains limited. Recently, diffusion models have demonstrated superior learning capability to other generative models in various tasks, but sampling from diffusion models requires iterating through a large number of diffusion steps, which is time-consuming. This work introduces a Flow-based Truncated Denoising Diffusion Model (FTDDM) for super-resolution MRSI, which shortens the diffusion process by truncating the diffusion chain; the truncated steps are estimated using a normalizing flow-based network. The network is conditioned on upscaling factors to enable multi-scale super-resolution. To train and evaluate the deep-learning models, we developed a 1H-MRSI dataset acquired from 25 high-grade glioma patients. We demonstrate that FTDDM outperforms existing generative models while speeding up the sampling process by over 9-fold compared to the baseline diffusion model. Neuroradiologists' evaluations confirmed the clinical advantages of our method, which also supports uncertainty estimation and sharpness adjustment, extending its potential clinical applications.
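The truncation idea can be illustrated with a toy NumPy sketch of the forward diffusion schedule. The linear schedule, the truncation point `T_trunc`, and all shapes here are illustrative assumptions, not the paper's actual configuration; the point is only that the reverse chain starts at `T_trunc` (whose distribution the normalizing flow learns to sample) instead of at `T`:

```python
import numpy as np

# Linear beta schedule for a 1000-step DDPM baseline (illustrative values).
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alpha_bar = np.cumprod(1.0 - betas)

# Truncation: keep only the last T_trunc reverse denoising steps. The
# marginal q(x_{T_trunc} | x_0) is what the flow network must sample.
T_trunc = 100                       # hypothetical truncation point
speedup = T / T_trunc

def q_sample(x0, t, rng):
    """Forward diffusion: x_t = sqrt(ab_t) * x0 + sqrt(1 - ab_t) * eps."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

rng = np.random.default_rng(0)
x0 = rng.standard_normal((8, 8))          # toy "MRSI slice"
x_start = q_sample(x0, T_trunc - 1, rng)  # where truncated reverse sampling begins
print(speedup)  # 10.0 -> a 10-fold reduction in reverse steps in this toy setup
```

Because `x_{T_trunc}` still retains signal from `x0` (unlike pure Gaussian noise at step `T`), a single flow-network sample can replace the truncated portion of the chain.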
ABSTRACT
Automatic detection of thin-cap fibroatheroma (TCFA) on intravascular optical coherence tomography images is essential for the prevention of acute coronary syndrome. However, existing methods require the exact location of TCFAs to be marked on each frame as supervision, which is extremely time-consuming and expensive. Hence, a new weakly supervised framework is proposed to detect TCFAs using only image-level tags as supervision. The framework comprises cut, feature-extraction, relation, and detection modules. First, based on prior knowledge, a cut module was designed to generate a small number of specific region proposals. Then, to learn global information, a relation module was designed to learn spatial adjacency and order relationships at the feature level, and an attention-based strategy was introduced in the detection module to effectively aggregate the classification results of the region proposals into an image-level predicted score. The results demonstrate that the proposed method surpasses state-of-the-art weakly supervised detection methods.
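The attention-based aggregation step can be sketched in NumPy: per-proposal classification scores are pooled into a single image-level score using learned attention weights, so only an image-level tag is needed for supervision. All names and values below are illustrative, not the paper's implementation:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Toy per-proposal TCFA classification scores (sigmoid outputs) and
# attention logits from the detection module (hypothetical values).
proposal_scores = np.array([0.1, 0.9, 0.2, 0.8])
attention_logits = np.array([-2.0, 3.0, -1.0, 2.0])

# Attention-weighted aggregation -> single image-level predicted score.
weights = softmax(attention_logits)
image_score = float(weights @ proposal_scores)
print(round(image_score, 3))
```

The softmax concentrates weight on the most suspicious proposals, so the image-level score is dominated by the high-scoring regions rather than diluted by averaging.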
Subjects
Atherosclerotic Plaque, Humans, Atherosclerotic Plaque/diagnostic imaging, Optical Coherence Tomography/methods, Supervised Machine Learning
ABSTRACT
Automated analysis of vessel structure in intravascular optical coherence tomography (IVOCT) images is critical for assessing vessel health and monitoring coronary artery disease progression. However, deep learning-based methods usually require large, well-annotated datasets, which are difficult to obtain in medical image analysis. Hence, an automatic layer-segmentation method based on meta-learning is proposed, which can simultaneously extract the surfaces of the lumen, intima, media, and adventitia using only a handful of annotated samples. Specifically, we leverage a bi-level gradient strategy to train a meta-learner that captures the meta-knowledge shared among different anatomical layers and quickly adapts to unknown anatomical layers. Then, a Claw-type network and a contrast-consistency loss were designed to better learn the meta-knowledge according to the annotation characteristics of the lumen and anatomical layers. Experimental results on two cardiovascular IVOCT datasets show that the proposed method achieves state-of-the-art performance.
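The bi-level gradient strategy can be illustrated with a MAML-style toy example in NumPy, where each 1-D quadratic task stands in for one anatomical layer. This is a sketch of the general bi-level idea (inner-loop adaptation, outer-loop meta-update), not the paper's Claw-type network; all tasks and learning rates are made up:

```python
import numpy as np

# Toy tasks f_c(w) = (w - c)^2, one per "anatomical layer" (illustrative).
tasks = [-1.0, 0.5, 2.0]     # per-task optima
alpha, beta = 0.1, 0.05      # inner / outer learning rates
theta = 5.0                  # meta-initialization to be learned

for _ in range(300):
    meta_grad = 0.0
    for c in tasks:
        # Inner loop: one gradient step of task adaptation from theta.
        w = theta - alpha * 2 * (theta - c)
        # Outer loop: gradient of the post-adaptation loss w.r.t. theta,
        # d f_c(w)/d theta = 2 (w - c) * (1 - 2*alpha)  (chain rule).
        meta_grad += 2 * (w - c) * (1 - 2 * alpha)
    theta -= beta * meta_grad / len(tasks)

print(round(theta, 2))  # converges near the mean of the task optima, 0.5
```

The meta-learned `theta` is not optimal for any single task, but a single inner step from it lands close to each task's optimum, which is the "quick adaptation to unknown layers" behavior described above.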
Subjects
Coronary Artery Disease, Optical Coherence Tomography, Humans, Optical Coherence Tomography/methods, Lung
ABSTRACT
Head motion correction is an essential component of brain PET imaging, in which even motion of small magnitude can greatly degrade image quality and introduce artifacts. Building upon previous work, we propose a new head motion correction framework that takes fast reconstructions as input. The main characteristics of the proposed method are: (i) the adoption of a high-resolution short-frame fast reconstruction workflow; (ii) the development of a novel encoder for PET data representation extraction; and (iii) the implementation of data augmentation techniques. Ablation studies are conducted to assess the individual contribution of each of these design choices. Furthermore, multi-subject studies are conducted on an 18F-FPEB dataset, and the method's performance is qualitatively and quantitatively evaluated through a MOLAR reconstruction study and evaluation of the corresponding brain region-of-interest (ROI) standardized uptake values (SUVs). Additionally, we compared our method with a conventional intensity-based registration method. Our results demonstrate that the proposed method outperforms other methods on all subjects and can accurately estimate motion for subjects outside the training set. All code is publicly available on GitHub: https://github.com/OnofreyLab/dl-hmc_fast_recon_miccai2023.
ABSTRACT
Head movement during long scan sessions degrades the quality of reconstruction in positron emission tomography (PET) and introduces artifacts, which limits clinical diagnosis and treatment. Recent deep learning-based motion correction work utilized raw PET list-mode data and hardware motion tracking (HMT) to learn head motion in a supervised manner. However, motion prediction results were not robust for testing subjects outside the training data domain. In this paper, we integrate a cross-attention mechanism into the supervised deep-learning network to improve motion correction across test subjects. Specifically, cross-attention learns the spatial correspondence between the reference images and moving images to explicitly focus the model on the most correlative inherent information - the head region - for motion correction. We validate our approach on brain PET data from two different scanners: HRRT without time of flight (ToF) and mCT with ToF. Compared with traditional and deep-learning benchmarks, our network improved the performance of motion correction by 58% and 26% in translation and rotation, respectively, in multi-subject testing on HRRT studies. On mCT studies, our approach improved performance by 66% and 64% for translation and rotation, respectively. Our results demonstrate that cross-attention has the potential to improve the quality of brain PET image reconstruction without dependence on HMT. All code will be released on GitHub: https://github.com/OnofreyLab/dl_hmc_attention_mlcn2023.
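A minimal NumPy sketch of the cross-attention step, with queries drawn from reference-image features and keys/values from moving-image features; the learned projection matrices are omitted, and all dimensions and token counts are illustrative assumptions:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
d = 16                                        # feature dimension (illustrative)
ref_tokens = rng.standard_normal((10, d))     # reference-image features (queries)
mov_tokens = rng.standard_normal((10, d))     # moving-image features (keys/values)

# Cross-attention: each reference location attends to the moving-image
# regions it correlates with, producing motion-aware fused features.
scores = ref_tokens @ mov_tokens.T / np.sqrt(d)
attn = softmax(scores, axis=-1)               # each row sums to 1
fused = attn @ mov_tokens

print(fused.shape, bool(np.allclose(attn.sum(axis=-1), 1.0)))
```

Because the attention map is computed between the two images rather than within one, it naturally highlights the mutually informative region (the head) that drives the motion estimate.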
ABSTRACT
Clinically, Fundus Fluorescein Angiography (FA) is a more common means for Diabetic Retinopathy (DR) detection, since DR appears with much higher contrast in FA than in Color Fundus (CF) images. However, acquiring FA carries a risk of death due to allergic reactions to the fluorescein dye. Thus, in this paper, we explore a novel unpaired CycleGAN-based model for FA synthesis from CF, in which strict structural-similarity constraints are employed to guarantee an accurate mapping from one domain to the other. First, a triple multi-scale network architecture with multi-scale inputs, multi-scale discriminators, and multi-scale cycle-consistency losses is proposed to enhance the similarity between the two retinal modalities across different scales. Second, a self-attention mechanism is introduced to improve the adaptive domain-mapping ability of the model. Third, to further enforce strict constraints at the feature level, a quality loss is employed between each generation and reconstruction step. Qualitative examples, as well as quantitative evaluations, are provided to support the robustness and accuracy of the proposed method.
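A minimal NumPy sketch of a multi-scale cycle-consistency loss of the kind described above, assuming simple block-average downsampling and equal scale weights (both are illustrative choices, not the paper's exact formulation):

```python
import numpy as np

def downsample(img, factor):
    """Block-average downsampling (stand-in for the multi-scale pyramid)."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def multiscale_cycle_loss(original, reconstructed, scales=(1, 2, 4)):
    """Sum of L1 cycle-consistency terms at several spatial scales."""
    return sum(
        np.abs(downsample(original, s) - downsample(reconstructed, s)).mean()
        for s in scales
    )

rng = np.random.default_rng(0)
cf = rng.random((8, 8))                             # toy color-fundus "image"
recon_good = cf + 0.01 * rng.standard_normal((8, 8))  # faithful CF->FA->CF cycle
recon_bad = rng.random((8, 8))                        # structure-destroying cycle

print(multiscale_cycle_loss(cf, recon_good) < multiscale_cycle_loss(cf, recon_bad))  # -> True
```

Penalizing the cycle at coarse scales as well as the native resolution constrains large-scale retinal structure, not just pixelwise agreement, which is why the multi-scale variant tightens the domain mapping.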