Results 1 - 6 of 6
1.
NMR Biomed ; : e5145, 2024 Mar 15.
Article in English | MEDLINE | ID: mdl-38488205

ABSTRACT

Noninvasive extracellular pH (pHe) mapping with Biosensor Imaging of Redundant Deviation in Shifts (BIRDS) using MR spectroscopic imaging (MRSI) has been demonstrated on 3T clinical MR scanners at 8 × 8 × 10 mm³ spatial resolution and applied to study various liver cancer treatments. Although pHe imaging at higher resolution can be achieved by extending the acquisition time, a postprocessing method to increase the resolution is preferable, to minimize the time the subject spends in the MR scanner. In this work, we propose to improve the spatial resolution of pHe mapping with BIRDS by incorporating anatomical information in the form of multiparametric MRI and using an unsupervised deep-learning technique, Deep Image Prior (DIP). Specifically, we used high-resolution T1, T2, and diffusion-weighted imaging (DWI) MR images of rabbits with VX2 liver tumors as inputs to a U-Net architecture to provide anatomical information. U-Net parameters were optimized to minimize the mean-absolute error between the output super-resolution image and the experimentally acquired low-resolution pHe image. In this way, the super-resolution pHe image is consistent with both the anatomical MR images and the low-resolution pHe measurement from the scanner. The method was developed on data from 49 rabbits implanted with VX2 liver tumors. For evaluation, we also acquired high-resolution pHe images from two rabbits, which served as ground truth. The results indicate a good match between the spatial characteristics of the super-resolution images and the high-resolution ground truth, supported by low pixelwise absolute error.
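The data-consistency objective described above can be sketched as follows (a minimal illustration, not the authors' implementation; the block-average downsampling forward model and the integer factor are assumptions):

```python
def block_downsample(img, factor):
    """Average-pool a 2D map by an integer factor (assumed forward model
    linking the super-resolution grid to the scanner's low-res grid)."""
    h, w = len(img), len(img[0])
    out = []
    for i in range(0, h, factor):
        row = []
        for j in range(0, w, factor):
            block = [img[i + di][j + dj]
                     for di in range(factor) for dj in range(factor)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

def dip_consistency_loss(sr_output, lr_measured, factor):
    """Mean-absolute error between the downsampled network output and the
    acquired low-resolution pHe map; in DIP this is minimized with respect
    to the U-Net weights, with the anatomical images as fixed input."""
    ds = block_downsample(sr_output, factor)
    errs = [abs(ds[i][j] - lr_measured[i][j])
            for i in range(len(ds)) for j in range(len(ds[0]))]
    return sum(errs) / len(errs)
```

Because only this loss ties the output to data, the anatomical inputs and the network structure itself supply the prior that regularizes the super-resolution result.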

2.
J Biophotonics ; 16(9): e202300059, 2023 09.
Article in English | MEDLINE | ID: mdl-37289201

ABSTRACT

Automated analysis of the vessel structure in intravascular optical coherence tomography (IVOCT) images is critical for assessing vessel health and monitoring coronary artery disease progression. However, deep learning-based methods usually require large, well-annotated datasets, which are difficult to obtain in medical image analysis. Hence, an automatic layer segmentation method based on meta-learning was proposed that can simultaneously extract the surfaces of the lumen, intima, media, and adventitia using a handful of annotated samples. Specifically, we leverage a bi-level gradient strategy to train a meta-learner that captures the meta-knowledge shared among different anatomical layers and quickly adapts to unknown anatomical layers. Then, a Claw-type network and a contrast consistency loss were designed to better learn the meta-knowledge according to the annotation characteristics of the lumen and anatomical layers. Experimental results on two cardiovascular IVOCT datasets show that the proposed method achieved state-of-the-art performance.
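The bi-level gradient strategy can be illustrated on a toy problem (the scalar meta-parameter and quadratic per-task losses below are illustrative assumptions, not the paper's segmentation losses):

```python
def post_adapt_loss(theta, c, lr_inner):
    """Inner level: one gradient step adapts the shared meta-parameter to a
    task with loss (theta - c)^2, then the adapted loss is evaluated."""
    adapted = theta - lr_inner * 2.0 * (theta - c)
    return (adapted - c) ** 2

def meta_step(theta, task_targets, lr_inner, lr_outer, eps=1e-5):
    """Outer level: descend the sum of post-adaptation losses over tasks,
    estimating the meta-gradient by central finite differences."""
    grad = sum(
        (post_adapt_loss(theta + eps, c, lr_inner)
         - post_adapt_loss(theta - eps, c, lr_inner)) / (2.0 * eps)
        for c in task_targets
    )
    return theta - lr_outer * grad
```

Repeated meta-steps drive the meta-parameter toward the point from which every task is easiest to adapt to (here, the mean of the task targets), which is the sense in which the meta-learner "captures shared meta-knowledge."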


Subject(s)
Coronary Artery Disease , Tomography, Optical Coherence , Humans , Tomography, Optical Coherence/methods , Lung
3.
J Biophotonics ; 16(5): e202200343, 2023 05.
Article in English | MEDLINE | ID: mdl-36635865

ABSTRACT

Automatic detection of thin-cap fibroatheroma (TCFA) in intravascular optical coherence tomography images is essential for the prevention of acute coronary syndrome. However, existing methods require the exact location of TCFAs to be marked on each frame as supervision, which is extremely time-consuming and expensive. Hence, a new weakly supervised framework is proposed that detects TCFAs using only image-level tags as supervision. The framework comprises cut, feature extraction, relation, and detection modules. First, based on prior knowledge, a cut module was designed to generate a small number of specific region proposals. Then, to learn global information, a relation module was designed to learn spatial adjacency and order relationships at the feature level, and an attention-based strategy was introduced in the detection module to effectively aggregate the classification results of the region proposals into the image-level predicted score. The results demonstrate that the proposed method surpasses state-of-the-art weakly supervised detection methods.
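Aggregating per-proposal classification scores into one image-level score with attention might look like the following (a generic sketch; in the actual model the attention logits are learned from proposal features, which this snippet does not show):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of logits."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention_aggregate(proposal_scores, attention_logits):
    """Image-level predicted score as the attention-weighted sum of the
    per-proposal classification scores; only the image-level tag
    supervises this aggregate in a weakly supervised setup."""
    weights = softmax(attention_logits)
    return sum(w * s for w, s in zip(weights, proposal_scores))
```

With uniform logits this reduces to mean pooling; a sharply peaked logit lets a single confident proposal dominate the image-level prediction, which is what lets image-level tags localize TCFAs.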


Subject(s)
Plaque, Atherosclerotic , Humans , Plaque, Atherosclerotic/diagnostic imaging , Tomography, Optical Coherence/methods , Supervised Machine Learning
4.
Med Image Comput Comput Assist Interv ; 14229: 710-719, 2023 Oct.
Article in English | MEDLINE | ID: mdl-38174207

ABSTRACT

Head motion correction is an essential component of brain PET imaging, in which even motion of small magnitude can greatly degrade image quality and introduce artifacts. Building upon previous work, we propose a new head motion correction framework that takes fast reconstructions as input. The main characteristics of the proposed method are: (i) the adoption of a high-resolution short-frame fast reconstruction workflow; (ii) the development of a novel encoder for PET data representation extraction; and (iii) the implementation of data augmentation techniques. Ablation studies are conducted to assess the individual contribution of each of these design choices. Furthermore, multi-subject studies are conducted on an 18F-FPEB dataset, and the method's performance is qualitatively and quantitatively evaluated by a MOLAR reconstruction study and the corresponding brain region-of-interest (ROI) standard uptake value (SUV) evaluation. Additionally, we compared our method with a conventional intensity-based registration method. Our results demonstrate that the proposed method outperforms the other methods on all subjects and can accurately estimate motion for subjects outside the training set. All code is publicly available on GitHub: https://github.com/OnofreyLab/dl-hmc_fast_recon_miccai2023.
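For reference, the ROI SUV metric used in the evaluation normalizes tissue activity concentration by injected dose per unit body weight (the units below are assumptions for illustration; tissue density is taken as 1 g/mL, as is conventional):

```python
def suv(roi_activity_kbq_per_ml, injected_dose_mbq, body_weight_kg):
    """Standard uptake value: ROI activity concentration divided by the
    injected dose per gram of body weight (1 MBq = 1000 kBq,
    1 kg = 1000 g, and 1 mL of tissue is assumed to weigh 1 g)."""
    dose_per_gram_kbq = injected_dose_mbq * 1000.0 / (body_weight_kg * 1000.0)
    return roi_activity_kbq_per_ml / dose_per_gram_kbq
```

Because SUV is sensitive to the exact voxels falling inside each ROI, residual head motion shifts ROI boundaries and biases SUV, which is why it serves as a quantitative endpoint for motion correction.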

5.
Mach Learn Clin Neuroimaging (2023) ; 14312: 34-45, 2023 Oct.
Article in English | MEDLINE | ID: mdl-38174216

ABSTRACT

Head movement during long scan sessions degrades the quality of reconstruction in positron emission tomography (PET) and introduces artifacts, which limits clinical diagnosis and treatment. Recent deep learning-based motion correction work used raw PET list-mode data and hardware motion tracking (HMT) to learn head motion in a supervised manner. However, motion prediction results were not robust for testing subjects outside the training data domain. In this paper, we integrate a cross-attention mechanism into the supervised deep learning network to improve motion correction across test subjects. Specifically, cross-attention learns the spatial correspondence between the reference images and moving images to explicitly focus the model on the most correlative inherent information: the head region relevant to motion correction. We validate our approach on brain PET data from two different scanners: HRRT without time of flight (ToF) and mCT with ToF. Compared with traditional and deep learning benchmarks, our network improved motion correction performance by 58% and 26% for translation and rotation, respectively, in multi-subject testing on HRRT studies. In mCT studies, our approach improved performance by 66% and 64% for translation and rotation, respectively. Our results demonstrate that cross-attention has the potential to improve the quality of brain PET image reconstruction without dependence on HMT. All code will be released on GitHub: https://github.com/OnofreyLab/dl_hmc_attention_mlcn2023.
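A minimal single-head sketch of the cross-attention operation (plain scaled dot-product attention; the actual network embeds the reference and moving PET representations into feature vectors before this step, which is omitted here):

```python
import math

def cross_attention(queries, keys_values):
    """Each reference-image query attends over the moving-image features:
    softmax(q . k / sqrt(d)) weights applied to the moving-image values,
    yielding a correspondence-weighted feature per query."""
    d = len(queries[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys_values]
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        weights = [e / z for e in exps]
        out.append([sum(w * v[j] for w, v in zip(weights, keys_values))
                    for j in range(d)])
    return out
```

Each output row is a convex combination of moving-image features, so regions of the reference that match the head strongly weight the corresponding moving-image locations, which is the claimed focusing effect.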

6.
Annu Int Conf IEEE Eng Med Biol Soc ; 2020: 1592-1595, 2020 07.
Article in English | MEDLINE | ID: mdl-33018298

ABSTRACT

Clinically, fundus fluorescein angiography (FA) is the more common means of diabetic retinopathy (DR) detection, since DR appears with much higher contrast in FA than in color fundus (CF) images. However, acquiring FA carries a risk of death due to fluorescein allergy. Thus, in this paper, we explore a novel unpaired CycleGAN-based model for FA synthesis from CF, in which strict structure similarity constraints are employed to guarantee accurate mapping from one domain to the other. First, a triple multi-scale network architecture with multi-scale inputs, multi-scale discriminators, and multi-scale cycle consistency losses is proposed to enhance the similarity between the two retinal modalities at different scales. Second, a self-attention mechanism is introduced to improve the adaptive domain mapping ability of the model. Third, to further enforce strict constraints at the feature level, a quality loss is employed between each generation and reconstruction step. Qualitative examples, as well as quantitative evaluation, are provided to support the robustness and accuracy of our proposed method.
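The multi-scale cycle-consistency term can be sketched as an L1 loss between the input and its cycle reconstruction, summed over progressively downsampled copies (the number of scales and the per-scale weights below are assumptions):

```python
def downsample2(img):
    """Halve a 2D image by 2x2 average pooling."""
    h, w = len(img), len(img[0])
    return [[(img[i][j] + img[i][j + 1]
              + img[i + 1][j] + img[i + 1][j + 1]) / 4.0
             for j in range(0, w, 2)]
            for i in range(0, h, 2)]

def l1(a, b):
    """Mean absolute difference between two same-sized 2D images."""
    n = len(a) * len(a[0])
    return sum(abs(a[i][j] - b[i][j])
               for i in range(len(a)) for j in range(len(a[0]))) / n

def multiscale_cycle_loss(x, x_rec, num_scales=3, weights=(1.0, 0.5, 0.25)):
    """Weighted sum of L1 cycle-consistency losses comparing x and its
    cycle reconstruction G_B(G_A(x)) at full, half, and quarter scale."""
    loss = 0.0
    for s in range(num_scales):
        loss += weights[s] * l1(x, x_rec)
        x, x_rec = downsample2(x), downsample2(x_rec)
    return loss
```

Penalizing the reconstruction error at coarse scales constrains global retinal structure, while the full-resolution term constrains fine vessel detail, matching the multi-scale similarity motivation above.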


Subject(s)
Diabetic Retinopathy , Retina , Attention , Diabetic Retinopathy/diagnosis , Fluorescein Angiography , Fundus Oculi , Humans , Retina/diagnostic imaging