Results 1 - 20 of 41
1.
Biomed Opt Express ; 15(5): 3000-3017, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38855668

ABSTRACT

An ultrahigh-speed, wide-field OCT system for imaging the anterior segment, posterior segment, and ocular biometry is crucial for obtaining comprehensive ocular parameters and quantifying the size of ocular pathology. Here, we demonstrate a multi-parametric ophthalmic OCT system running at up to 1 MHz for wide-field imaging of the retina and 50 kHz for anterior-chamber imaging and ocular biometric measurement. A spectrum correction algorithm is proposed to ensure accurate pairing of adjacent A-lines and raise the A-scan speed from 500 kHz to 1 MHz for retinal imaging. A registration method employing position feedback signals is introduced, reducing pixel offsets between forward and reverse galvanometer scanning by a factor of 2.3. Experimental validation on glass sheets and the human eye confirms the system's feasibility and efficacy. In addition, we propose a revised formula to determine the "true" fundus size using axial length parameters from different fields of view. The efficient algorithms and compact design enhance the system's compatibility with clinical requirements, showing promise for widespread commercialization.

2.
Biomed Opt Express ; 15(5): 2958-2976, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38855701

ABSTRACT

Optical coherence tomography (OCT), owing to its non-invasive nature, has demonstrated tremendous potential in clinical practice and has become a prevalent diagnostic method. Nevertheless, the inherent speckle noise and low sampling rate in OCT imaging often limit the quality of OCT images. In this paper, we propose a lightweight Transformer to efficiently reconstruct high-quality images from noisy, low-resolution OCT images acquired by short scans. Our method, PSCAT, employs spatial window self-attention and channel attention in parallel in the Transformer block to aggregate features from both the spatial and channel dimensions. It explores the potential of the Transformer in denoising and super-resolution for OCT, reducing computational costs and enhancing the speed of image processing. To effectively assist in restoring high-frequency details, we introduce a hybrid loss function covering both the spatial and frequency domains. Extensive experiments demonstrate that our PSCAT has fewer network parameters and lower computational costs than state-of-the-art methods while delivering competitive performance both qualitatively and quantitatively.

3.
Article in English | MEDLINE | ID: mdl-38848235

ABSTRACT

Weakly supervised object localization (WSOL), which adopts only image-level annotations to learn a pixel-level localization model, can release human resources in the annotation process. Most one-stage WSOL methods learn the localization model with multi-instance learning, making them activate only discriminative object parts rather than the whole object. In our work, we attribute this problem to the domain shift between the training and test processes of WSOL and provide a novel perspective that views WSOL as a domain adaptation (DA) task. Under this perspective, a DA-WSOL pipeline is elaborated to better assist WSOL with DA approaches by considering the specificities of adapting WSOL. Our DA-WSOL pipeline can discern the source-related and Universum samples from other target samples based on a proposed target sampling strategy and then utilize them to address the sample imbalance and label mismatch between the source and target domains of WSOL. Experiments show that our pipeline outperforms SOTA methods on three WSOL benchmarks and can improve the performance of downstream weakly supervised semantic segmentation tasks. Code is available at https://github.com/zh460045050/dawsol.

4.
Br J Ophthalmol ; 2024 May 02.
Article in English | MEDLINE | ID: mdl-38697799

ABSTRACT

BACKGROUND/AIMS: To investigate the comprehensive ability to predict cognitive impairment in a general elderly population using a combination of multimodal ophthalmic imaging and artificial neural networks. METHODS: Patients with cognitive impairment and cognitively healthy individuals were recruited. All subjects underwent medical history taking, blood pressure measurement, the Montreal Cognitive Assessment, medical optometry, intraocular pressure measurement and custom-built multimodal ophthalmic imaging, which integrated pupillary light reaction, multispectral imaging, laser speckle contrast imaging and retinal oximetry. Multidimensional parameters were analysed by Student's t-test. Logistic regression analysis and a back-propagation neural network (BPNN) were used to identify the predictive capability for cognitive impairment. RESULTS: This study included 104 cognitive impairment patients (61.5% female; mean (SD) age, 68.3 (9.4) years) and 94 cognitively healthy age-matched and sex-matched subjects (56.4% female; mean (SD) age, 65.9 (7.6) years). The variation of most parameters in cognitive impairment, including decreased pupil constriction amplitude (CA), relative CA, average constriction velocity, venous diameter, venous blood flow and increased centred retinal reflectance at 548 nm (RC548), was consistent with previous studies, while the reduced flow acceleration index and oxygen metabolism were reported for the first time. Compared with the logistic regression model, the BPNN had better predictive performance (accuracy: 0.91 vs 0.69; sensitivity: 93.3% vs 61.70%; specificity: 90.0% vs 68.66%). CONCLUSIONS: This study demonstrates that retinal spectral signature alteration, neurodegeneration and angiopathy occur concurrently in cognitive impairment. The combination of multimodal ophthalmic imaging and a BPNN can be a useful tool for predicting cognitive impairment with high performance for community screening.
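The screening performance reported above (accuracy, sensitivity, specificity) follows the standard confusion-matrix definitions. A minimal sketch; the function name and toy labels are illustrative, not from the paper:

```python
# Sketch: computing the screening metrics reported in the abstract
# from binary predictions (1 = cognitive impairment, 0 = healthy).

def screening_metrics(y_true, y_pred):
    """y_true/y_pred: lists of 0/1 class labels of equal length."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "accuracy": (tp + tn) / len(y_true),
        "sensitivity": tp / (tp + fn),  # true positive rate
        "specificity": tn / (tn + fp),  # true negative rate
    }
```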

5.
IEEE Trans Med Imaging ; PP, 2024 Apr 30.
Article in English | MEDLINE | ID: mdl-38687654

ABSTRACT

Accurate segmentation of anatomical structures in computed tomography (CT) images is crucial for clinical diagnosis, treatment planning, and disease monitoring. Current deep learning segmentation methods are hindered by factors such as data scale and model size. Inspired by how doctors identify tissues, we propose a novel approach, the Prior Category Network (PCNet), that boosts segmentation performance by leveraging prior knowledge between different categories of anatomical structures. Our PCNet comprises three key components: the prior category prompt (PCP), the hierarchy category system (HCS), and the hierarchy category loss (HCL). The PCP utilizes Contrastive Language-Image Pretraining (CLIP), along with attention modules, to systematically define the relationships between anatomical categories as identified by clinicians. The HCS guides the segmentation model in distinguishing between specific organs, anatomical structures, and functional systems through hierarchical relationships. The HCL serves as a consistency constraint, fortifying the directional guidance provided by the HCS to enhance the segmentation model's accuracy and robustness. We conducted extensive experiments to validate the effectiveness of our approach, and the results indicate that PCNet can generate a high-performance, universal model for CT segmentation. The PCNet framework also demonstrates significant transferability on multiple downstream tasks. Ablation experiments show that the methodology employed in constructing the HCS is of critical importance. The prompt and HCS can be accessed at https://github.com/YixinChen-AI/PCNet.

6.
Transl Vis Sci Technol ; 13(3): 18, 2024 Mar 01.
Article in English | MEDLINE | ID: mdl-38512284

ABSTRACT

Purpose: To investigate the choroidal vascularity index (CVI) and choroidal structural changes in children with nephrotic syndrome. Methods: This was a cross-sectional study involving 45 children with primary nephrotic syndrome and 40 normal controls. All participants underwent enhanced depth imaging-optical coherence tomography examinations. An automatic segmentation method based on deep learning was used to segment the choroidal vessels and stroma, and the choroidal volume (CV), vascular volume (VV), and CVI within a 4.5 mm diameter circular area centered around the macular fovea were obtained. Clinical data, including blood lipids, serum proteins, renal function, and renal injury indicators, were collected from the patients. Results: Compared with normal controls, children with nephrotic syndrome had a significant increase in CV (nephrotic syndrome: 4.132 ± 0.464 vs. normal controls: 3.873 ± 0.574; P = 0.024); no significant change in VV (nephrotic syndrome: 1.276 ± 0.173 vs. normal controls: 1.277 ± 0.165; P = 0.971); and a significant decrease in the CVI (nephrotic syndrome: 0.308 [range, 0.270-0.386] vs. normal controls: 0.330 [range, 0.288-0.387]; P < 0.001). In the correlation analysis, the CVI was positively correlated with serum total protein, serum albumin, serum prealbumin, ratio of serum albumin to globulin, and 24-hour urine volume and was negatively correlated with total cholesterol, low-density lipoprotein cholesterol, urinary protein concentration, and ratio of urinary transferrin to creatinine (all P < 0.05). Conclusions: The CVI is significantly reduced in children with nephrotic syndrome, and the decrease in the CVI parallels the severity of kidney disease, indicating choroidal involvement in the process of nephrotic syndrome. Translational Relevance: Our findings contribute to a deeper understanding of how nephrotic syndrome affects the choroid.


Subject(s)
Nephrotic Syndrome , Child , Humans , Nephrotic Syndrome/complications , Cross-Sectional Studies , Choroid/diagnostic imaging , Fovea Centralis , Cholesterol
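As the abstract above describes, the CVI reduces to the ratio of vascular volume to total choroidal volume within the analysed region. A minimal sketch, plugging in the reported group means as an illustration (the abstract's CVI figures are medians, so the numbers only roughly agree):

```python
# Minimal sketch of the choroidal vascularity index (CVI):
# CVI = vascular volume (VV) / total choroidal volume (CV).

def choroidal_vascularity_index(vascular_volume, choroidal_volume):
    if choroidal_volume <= 0:
        raise ValueError("choroidal volume must be positive")
    return vascular_volume / choroidal_volume

# Mean values reported for the nephrotic-syndrome group:
cvi = choroidal_vascularity_index(1.276, 4.132)  # ~0.309
```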
7.
IEEE Trans Pattern Anal Mach Intell ; 45(12): 14175-14191, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37643092

ABSTRACT

Weakly supervised object localization (WSOL) relaxes the requirement of dense annotations for object localization by using image-level annotation to supervise the learning process. However, most WSOL methods only focus on forcing the object classifier to produce high activation scores on object parts, without considering the influence of background locations, causing excessive background activations and ill-posed background score estimation. Based on this observation, our work proposes a novel mechanism called the background-aware classification activation map (B-CAM) to add background awareness to WSOL training. Besides aggregating an object image-level feature for supervision, our B-CAM produces an additional background image-level feature to represent the pure-background sample. This additional feature provides background cues for the object classifier to suppress background activations on object localization maps. Moreover, our B-CAM also trains a background classifier with image-level annotation to produce adaptive background scores when determining the binary localization mask. Experiments indicate the effectiveness of the proposed B-CAM on four different types of WSOL benchmarks: the CUB-200, ILSVRC, OpenImages, and VOC2012 datasets.

8.
Med Image Anal ; 89: 102884, 2023 10.
Article in English | MEDLINE | ID: mdl-37459674

ABSTRACT

Deep neural networks (DNNs) have been widely applied in the medical image community, contributing to automatic ophthalmic screening systems for some common diseases. However, the incidence of fundus diseases exhibits a typical long-tailed distribution: in the clinic, a small number of common fundus diseases have sufficient observed cases for large-scale analysis, while most fundus diseases are infrequent. For these rare diseases, with their extremely low-data regimes, it is challenging to train DNNs for automatic diagnosis. In this work, we develop an automatic diagnosis system for rare fundus diseases based on the meta-learning framework. The system incorporates a co-regularization loss and an ensemble-learning strategy into the meta-learning framework, fully leveraging the advantage of multi-scale hierarchical feature embedding. We first conduct comparative experiments on our newly constructed lightweight multi-disease fundus image dataset for the few-shot recognition task (namely, FundusData-FS). Moreover, we verify cross-domain transferability from miniImageNet to FundusData-FS and further confirm our method's good repeatability. Rigorous experiments demonstrate that our method can detect rare fundus diseases and is superior to state-of-the-art methods. These investigations demonstrate the promising potential of our method for real clinical practice.


Subject(s)
Rare Diseases , Humans , Rare Diseases/diagnostic imaging , Fundus Oculi , Learning
9.
Front Oncol ; 13: 1129918, 2023.
Article in English | MEDLINE | ID: mdl-37025592

ABSTRACT

Purpose: To propose and evaluate a comprehensive modeling approach combining radiomics, dosiomics and clinical components for more accurate prediction of locoregional recurrence risk after radiotherapy in patients with locoregionally advanced HPSCC. Materials and methods: Clinical data of 77 HPSCC patients were retrospectively investigated; the median follow-up duration was 23.27 (range, 4.83-81.40) months. From the planning CT and dose distribution, 1321 radiomic features and 1321 dosiomic features were extracted, respectively, from the planning gross tumor volume (PGTV) region of each patient. After a stability test, feature dimension was further reduced by principal component analysis (PCA), yielding radiomic and dosiomic principal components (RPCs and DPCs), respectively. Multiple Cox regression models were constructed using various combinations of RPCs, DPCs and clinical variables as the predictors. The Akaike information criterion (AIC) and C-index were used to evaluate the performance of the Cox regression models. Results: PCA was performed on the 338 radiomic and 873 dosiomic features that tested as stable (ICC1 > 0.7 and ICC2 > 0.95), yielding 5 RPCs and 5 DPCs, respectively. Three components (RPC0, P < 0.01; DPC0, P < 0.01; and DPC3, P < 0.05) were significant in the individual radiomic or dosiomic Cox regression models. The model combining these features and the clinical variable (total stage IVB) provided the best risk stratification of locoregional recurrence (C-index, 0.815; 95% CI, 0.770-0.859) and the best balance between predictive accuracy and complexity (AIC, 143.65) among all investigated models using either single factors or two combined components. Conclusion: This study provides quantitative tools and additional evidence for personalized treatment selection and protocol optimization in HPSCC, a relatively rare cancer. By combining complementary information from radiomics, dosiomics, and clinical variables, the proposed comprehensive model provided more accurate prediction of locoregional recurrence risk after radiotherapy.
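The C-index used above to rank the Cox models is the fraction of usable patient pairs in which the patient with the higher predicted risk actually fails earlier. A simplified pure-Python sketch of Harrell's concordance index (pairs tied in time or with the earlier time censored are skipped; risk ties count one half):

```python
# Sketch of Harrell's concordance index (C-index) for survival models.

def c_index(times, events, risks):
    """times: follow-up times; events: 1 = recurrence, 0 = censored;
    risks: model risk scores (higher = worse prognosis)."""
    concordant, usable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(i + 1, n):
            # order the pair so that a precedes b in time
            a, b = (i, j) if times[i] < times[j] else (j, i)
            if times[a] == times[b] or events[a] == 0:
                continue  # pair not usable for concordance
            usable += 1
            if risks[a] > risks[b]:
                concordant += 1.0
            elif risks[a] == risks[b]:
                concordant += 0.5  # risk tie counts one half
    return concordant / usable
```

A perfectly ordered model scores 1.0, a perfectly anti-ordered one 0.0, and random risk scores hover around 0.5.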

10.
Mol Ther Oncolytics ; 28: 182-196, 2023 Mar 16.
Article in English | MEDLINE | ID: mdl-36820302

ABSTRACT

Endogenous microRNAs (miRNAs) in tumors are currently under exhaustive investigation as potential therapeutic agents for cancer treatment. Nevertheless, RNase degradation, inefficient and untargeted delivery, limited biological effect, and currently unclear side effects remain unsettled issues that frustrate clinical application. To address this, a versatile targeted delivery system for multiple therapeutic and diagnostic agents should be adapted for miRNA. In this study, we developed membrane-coated PLGA-b-PEG DC-chol nanoparticles (m-PPDCNPs) co-encapsulating doxorubicin (Dox) and miRNA-190-Cy7. Such a system showed low biotoxicity, high loading efficiency, and superior targeting ability. Systemic delivery of m-PPDCNPs in mouse models showed exceptionally specific tumor accumulation. Sustained release of miR-190 inhibited tumor angiogenesis, tumor growth, and migration by regulating a large group of angiogenic effectors. Moreover, m-PPDCNPs also enhanced the sensitivity of Dox by suppressing TGF-β signaling in colorectal cancer cell lines and mouse models. Together, our results demonstrate a promising m-PPDCNP nanoplatform for colorectal cancer theranostics.

11.
Comput Med Imaging Graph ; 103: 102164, 2023 01.
Article in English | MEDLINE | ID: mdl-36563513

ABSTRACT

Hemodynamic imaging of the retinal microcirculation has been demonstrated to be a promising means of evaluating ophthalmic diseases, cardio-cerebrovascular diseases, and metabolic diseases. However, existing structural and functional imaging techniques are insufficient in spatial or temporal resolution. Sphygmus-gated laser speckle angiography (SGLSA) is proposed for structural and functional imaging with high spatiotemporal resolution. Compared with classic LSCI algorithms, SGLSA presents a much clearer perfusion image and pulsatility with a higher signal-to-noise ratio. The SGLSA algorithm also performs better on patients than traditional LSCI methods. The high spatiotemporal resolution provided by the SGLSA algorithm greatly enhances the capacity for retinal microcirculation analysis, compensating for the deficiency of LSCI technology, and benefits retinal hemodynamic imaging, biomarker research, and clinical diagnosis.


Subject(s)
Angiography , Hemodynamics , Humans , Blood Flow Velocity , Microcirculation , Lasers
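Classic LSCI, which SGLSA refines, derives perfusion maps from the local speckle contrast K = σ/μ computed over a small sliding window (lower K implies more blood-flow-induced blurring). A minimal pure-Python sketch; the window size and border handling are assumptions, not the SGLSA implementation:

```python
# Sketch of the spatial speckle contrast computation underlying LSCI:
# K = (population std) / mean over a win x win sliding window.
from statistics import mean, pstdev

def speckle_contrast(img, win=3):
    """img: 2D list of raw speckle intensities; returns a K map of the
    same size, with borders left as None."""
    h, w = len(img), len(img[0])
    r = win // 2
    k = [[None] * w for _ in range(h)]
    for y in range(r, h - r):
        for x in range(r, w - r):
            patch = [img[y + dy][x + dx]
                     for dy in range(-r, r + 1)
                     for dx in range(-r, r + 1)]
            m = mean(patch)
            k[y][x] = pstdev(patch) / m if m else 0.0
    return k
```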
12.
Biomed Opt Express ; 13(10): 5400-5417, 2022 Oct 01.
Article in English | MEDLINE | ID: mdl-36425629

ABSTRACT

The retina is one of the most metabolically active tissues in the body. Dysfunction of oxygen kinetics in the retina is closely related to disease and has important clinical value. Dynamic imaging and comprehensive analysis of oxygen kinetics in the retina depend on the fusion of structural and functional imaging at high spatiotemporal resolution, but this is currently not clinically available, particularly from a single imaging device. Therefore, this work aims to develop a retinal oxygen kinetics imaging and analysis (ROKIA) technology by integrating dual-wavelength imaging with laser speckle contrast imaging modalities, which achieves structural and functional analysis with high spatial resolution and dynamic measurement, taking both external and lumen vessel diameters into account. ROKIA systematically evaluates eight vascular metrics, four blood flow metrics, and fifteen oxygenation metrics. The single-device scheme overcomes the incompatibility of optical designs, harmonizes the field of view and resolution of the different modalities, and reduces the difficulty of registration and image processing algorithms. More importantly, many of the metrics (such as oxygen delivery, oxygen metabolism, vessel wall thickness, etc.) derived from the fusion of structural and functional information are unique to ROKIA. The oxygen kinetics analysis technology proposed in this paper is, to our knowledge, the first demonstration of vascular, blood flow, and oxygenation metrics via a single system, and it will potentially become a powerful tool for disease diagnosis and clinical research.
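Two of the fused metrics named above, oxygen delivery and oxygen metabolism, follow standard physiology relations: delivery scales with blood flow and arterial oxygen content, and metabolism with the arteriovenous saturation difference. A sketch under assumed constants; the haemoglobin value and function names are illustrative, not ROKIA's actual calibration:

```python
# Illustrative sketch of fused flow + oximetry metrics of the kind
# ROKIA derives. Constants are assumptions for the example.

HB = 134.0          # haemoglobin concentration, g/L (assumed)
O2_PER_G_HB = 1.34  # mL O2 carried per gram of haemoglobin

def oxygen_delivery(flow_ml_min, sa):
    """flow in mL/min, sa = arterial O2 saturation (0..1)."""
    return flow_ml_min * HB / 1000 * O2_PER_G_HB * sa

def oxygen_metabolism(flow_ml_min, sa, sv):
    """O2 extracted per minute, via the arteriovenous difference."""
    return flow_ml_min * HB / 1000 * O2_PER_G_HB * (sa - sv)
```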

13.
Article in English | MEDLINE | ID: mdl-36099219

ABSTRACT

RGB-depth (RGB-D) salient object detection (SOD) has recently attracted increasing research interest, and many deep learning methods based on encoder-decoder architectures have emerged. However, most existing RGB-D SOD models conduct explicit and controllable cross-modal feature fusion in either the encoder or the decoder stage alone, which hardly guarantees sufficient cross-modal fusion ability. To this end, we make the first attempt at addressing RGB-D SOD through 3-D convolutional neural networks. The proposed model aims at prefusion in the encoder stage and in-depth fusion in the decoder stage to effectively promote the full integration of RGB and depth streams. Specifically, the network first conducts prefusion across RGB and depth modalities through a 3-D encoder obtained by inflating 2-D ResNet, and later provides in-depth feature fusion through a 3-D decoder equipped with rich back-projection paths (RBPPs) that leverage the extensive aggregation ability of 3-D convolutions. Toward an improved model, we propose to disentangle the conventional 3-D convolution into successive spatial and temporal convolutions and, meanwhile, discard unnecessary zero padding. This eventually yields a 2-D convolutional equivalent that facilitates optimization and reduces parameters and computation costs. Thanks to such a progressive-fusion strategy involving both the encoder and the decoder, effective and thorough interactions between the two modalities can be exploited to boost detection accuracy. As an additional boost, we also introduce channel-modality attention and its variant after each path of the RBPP to attend to important features. Extensive experiments on seven widely used benchmark datasets demonstrate that our models perform favorably against 14 state-of-the-art RGB-D SOD approaches in terms of five key evaluation metrics. Our code will be made publicly available at https://github.com/PPOLYpubki/RD3D.

14.
Comput Med Imaging Graph ; 101: 102110, 2022 10.
Article in English | MEDLINE | ID: mdl-36057184

ABSTRACT

Medical image segmentation is a critical step in pathology assessment and monitoring. Existing methods typically utilize a deep convolutional neural network for various medical segmentation tasks, such as polyp segmentation and skin lesion segmentation. However, due to the inherent difficulty of medical images and tremendous data variations, they usually perform poorly in some intractable cases. In this paper, we propose an input-specific network called the conditional-synergistic convolution and lesion decoupling network (CCLDNet) to solve these issues. First, in contrast to existing CNN-based methods with stationary convolutions, we propose the conditional synergistic convolution (CSConv), which aims to generate a specialist convolution kernel for each lesion. CSConv has dynamic modeling ability and can be leveraged as a basic block to construct other networks for a broad range of vision tasks. Second, we devise a lesion decoupling strategy (LDS) that decouples the original lesion segmentation map into two soft labels, i.e., a lesion center label and a lesion boundary label, to reduce segmentation difficulty. In addition, we use a transformer network as the backbone, further discarding the fixed structure of the standard CNN and empowering the dynamic modeling capability of the whole framework. Our CCLDNet outperforms state-of-the-art approaches by a large margin on a variety of benchmarks, including polyp segmentation (89.22% Dice score on EndoScene) and skin lesion segmentation (91.15% Dice score on ISIC2018). Our code is available at https://github.com/QianChen98/CCLD-Net.


Subject(s)
Image Processing, Computer-Assisted , Skin Diseases , Algorithms , Humans , Image Processing, Computer-Assisted/methods
15.
Med Phys ; 49(9): 5899-5913, 2022 Sep.
Article in English | MEDLINE | ID: mdl-35678232

ABSTRACT

PURPOSE: Deep neural networks (DNNs) have been widely applied in medical image classification, benefiting from their powerful mapping capability among medical images. However, these existing deep learning-based methods depend on an enormous amount of carefully labeled images. Meanwhile, noise is inevitably introduced in the labeling process, degrading the performance of models. Hence, it is important to devise robust training strategies to mitigate label noise in medical image classification tasks. METHODS: In this work, we propose a novel Bayesian statistics-guided label refurbishment mechanism (BLRM) for DNNs to prevent overfitting to noisy images. BLRM utilizes the maximum a posteriori probability in Bayesian statistics and an exponentially time-weighted technique to selectively correct the labels of noisy images. The training images are purified gradually over the training epochs when BLRM is activated, further improving classification performance. RESULTS: Comprehensive experiments on both synthetic noisy images (the public OCT and Messidor datasets) and real-world noisy images (ANIMAL-10N) demonstrate that BLRM refurbishes noisy labels selectively, curbing the adverse effects of noisy data. Moreover, the anti-noise BLRM integrated with DNNs is effective at different noise ratios and is independent of the backbone DNN architecture. In addition, BLRM is superior to state-of-the-art anti-noise methods. CONCLUSIONS: These investigations indicate that the proposed BLRM is well capable of mitigating label noise in medical image classification tasks.


Subject(s)
Animals , Bayes Theorem , Signal-To-Noise Ratio
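The exponentially time-weighted correction described above can be illustrated, in a heavily simplified form, as an exponential moving average of per-sample posteriors followed by confident relabeling. BLRM's actual MAP-based rule is more involved; the function name, decay factor and threshold here are assumptions:

```python
# Heavily simplified sketch of label refurbishment: smooth each sample's
# softmax outputs across epochs with an exponential moving average, then
# relabel when the averaged posterior for some class is confident enough.

def refurbish_labels(labels, softmax_history, beta=0.7, tau=0.9):
    """labels: current int labels; softmax_history: per-sample list of
    per-epoch probability vectors (oldest first)."""
    new_labels = []
    for y, history in zip(labels, softmax_history):
        ema = list(history[0])
        for probs in history[1:]:  # exponentially time-weighted average
            ema = [beta * e + (1 - beta) * p for e, p in zip(ema, probs)]
        best = max(range(len(ema)), key=ema.__getitem__)
        # only refurbish when the smoothed posterior is confident
        new_labels.append(best if ema[best] > tau else y)
    return new_labels
```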
16.
IEEE Trans Med Imaging ; 41(11): 3357-3372, 2022 11.
Article in English | MEDLINE | ID: mdl-35724282

ABSTRACT

Optical coherence tomography (OCT) is a widely used modality in clinical imaging that inevitably suffers from speckle noise. Deep learning has proven its superior capability in OCT image denoising, but the difficulty of acquiring large numbers of well-registered OCT image pairs limits the development of paired learning methods. To solve this problem, some unpaired learning methods have been proposed, in which the denoising networks can be trained with unpaired OCT data. However, the majority of them are modified from the cycleGAN framework. These cycleGAN-based methods train at least two generators and two discriminators, while only one generator is needed for inference. The dual-generator and dual-discriminator structure of cycleGAN-based methods demands a large amount of computing resources, which may be redundant for OCT denoising tasks. In this work, we propose a novel triplet cross-fusion learning (TCFL) strategy for unpaired OCT image denoising. The model complexity of our strategy is much lower than that of the cycleGAN-based methods. During training, the clean components and the noise components from a triplet of three unpaired images are cross-fused, helping the network extract more speckle noise information to improve denoising accuracy. Furthermore, a TCFL-based network trained with triplets can deal with limited-training-data scenarios. The results demonstrate that the TCFL strategy outperforms state-of-the-art unpaired methods both qualitatively and quantitatively, and even achieves denoising performance comparable with paired methods. Code is available at: https://github.com/gengmufeng/TCFL-OCT.


Subject(s)
Image Processing, Computer-Assisted , Tomography, Optical Coherence , Tomography, Optical Coherence/methods , Image Processing, Computer-Assisted/methods , Signal-To-Noise Ratio
17.
Phys Med Biol ; 67(8), 2022 Apr 01.
Article in English | MEDLINE | ID: mdl-35299162

ABSTRACT

Objective. The choroid is the most vascularized structure in the human eye; its layer structure and vessel distribution are both critical for the physiology of the retina and the pathogenesis of eye disease. Although some works have used graph-based methods or convolutional neural networks to separate the choroid layer from the outer-choroid structure, few have focused on further distinguishing the inner-choroid structure, such as the choroid vessels and choroid stroma. Approach. Inspired by the multi-task learning strategy, in this paper we propose a segmentation pipeline for choroid analysis which can separate the choroid layer from other structures and segment the choroid vessels synergistically. The key component of this pipeline is the proposed choroidal U-shape network (CUNet), which captures both correlation features and task-specific features between the choroid layer and the choroid vessels. Pixel-wise classification is then generated from these two types of features to obtain the choroid layer and vessel segmentations. In addition, the training of CUNet is supervised by a proposed adaptive multi-task segmentation loss, which adds a regularization term to balance the performance of the two tasks. Main results. Experiments show the high performance (4% higher Dice score) and lower computational complexity (18.85 M smaller model size) of our proposed strategy. Significance. The high performance and generalization on both choroid layer and vessel segmentation indicate the clinical potential of our proposed pipeline.


Subject(s)
Deep Learning , Choroid/diagnostic imaging , Humans , Image Processing, Computer-Assisted/methods , Retina
18.
Med Phys ; 49(6): 3705-3716, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35306668

ABSTRACT

PURPOSE: Optical coherence tomography angiography (OCTA) is a premium imaging modality for noninvasive microvasculature studies. Deep learning networks have achieved promising results in the OCTA reconstruction task, benefiting from their powerful modeling capability. However, two limitations exist in current deep learning-based OCTA reconstruction methods: (a) angiogram information extraction is limited to locally consecutive B-scans; and (b) all reconstruction models are confined to 2D convolutional network architectures, lacking effective temporal modeling. As a result, the valuable neighborhood information and inherent temporal characteristics of OCTA are not fully utilized. In this paper, we designed a neighborhood information-fused Pseudo-3D U-Net (NI-P3D-U) for OCTA reconstruction. METHODS: The proposed NI-P3D-U was investigated on an in vivo animal dataset by a cross-validation strategy under both fully supervised and weakly supervised learning pipelines. To demonstrate the OCTA reconstruction capability of the proposed NI-P3D-U, we compared it with several state-of-the-art methods. RESULTS: The results showed that the proposed network outperformed state-of-the-art deep learning-based OCTA algorithms in terms of visual quality and quantitative metrics, and demonstrated effective generalization across different training strategies (fully supervised and weakly supervised) and imaging protocols. Meanwhile, the idea of neighborhood information fusion was also extended to other network architectures, resulting in significant improvements. CONCLUSIONS: These investigations indicate that the proposed network, which combines the neighborhood information strategy with a temporal modeling architecture, is well capable of performing OCTA reconstruction and has a certain potential for clinical applications.


Subject(s)
Deep Learning , Tomography, Optical Coherence , Algorithms , Angiography , Animals , Fluorescein Angiography , Microvessels , Tomography, Optical Coherence/methods
19.
IEEE Trans Med Imaging ; 41(2): 407-419, 2022 02.
Article in English | MEDLINE | ID: mdl-34529565

ABSTRACT

Medical image denoising is in great demand yet faces great challenges. With its distinctive characteristics, medical image denoising in the image domain requires innovative deep learning strategies. In this study, we propose a simple yet effective strategy, the content-noise complementary learning (CNCL) strategy, in which two deep learning predictors learn the content and the noise of the image dataset complementarily. A medical image denoising pipeline based on the CNCL strategy is presented and implemented as a generative adversarial network, in which various representative networks (including U-Net, DnCNN, and SRDenseNet) are investigated as the predictors. The performance of these implemented models has been validated on medical imaging datasets including CT, MR, and PET. The results show that this strategy outperforms state-of-the-art denoising algorithms in terms of visual quality and quantitative metrics, and demonstrates robust generalization capability. These findings validate that this simple yet effective strategy has promising potential for medical image denoising tasks and could exert a clinical impact in the future. Code is available at: https://github.com/gengmufeng/CNCL-denoising.


Subject(s)
Image Processing, Computer-Assisted , Tomography, X-Ray Computed , Algorithms , Image Processing, Computer-Assisted/methods , Signal-To-Noise Ratio
20.
J Biophotonics ; 15(2): e202100285, 2022 02.
Article in English | MEDLINE | ID: mdl-34726828

ABSTRACT

A novel integration of retinal multispectral imaging (MSI), retinal oximetry and laser speckle contrast imaging (LSCI) is presented for functional imaging of retinal blood vessels that could potentially allow early detection or monitoring of functional changes. We designed and built a cost-effective, scalable, retinal imaging instrument that integrates structural and functional retinal imaging techniques, including MSI, retinal oximetry and LSCI. Color fundus imaging was performed with 470 nm, 550 nm and 600 nm wavelength light emitting diode (LED) illumination. Retinal oximetry was performed using 550 nm and 600 nm LED illumination. LSCI of blood flow was performed using 850 nm laser diode illumination at 82 frames per second. LSCI can visualize retinal and choroidal vasculature without requiring exogenous contrast agents and can provide time-resolved information on blood flow, generating a cardiac pulse waveform from retinal vasculature. The technology can rapidly acquire structural MSI images, retinal oximetry and LSCI blood flow information in a simplified clinical workflow without requiring patients to move between instruments. Results from multiple modalities can be combined and registered to provide structural as well as functional information on the retina. These advances can reduce barriers for clinical adoption, accelerating research using MSI, retinal oximetry and LSCI of blood flow for diagnosis, monitoring and elucidating disease pathogenesis.


Subject(s)
Diagnostic Imaging , Laser Speckle Contrast Imaging , Fundus Oculi , Humans , Oximetry , Retinal Vessels/diagnostic imaging
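Retinal oximetry with the 550/600 nm pair described above typically rests on a two-wavelength optical density ratio: 600 nm is oxygen-sensitive while 550 nm is near-isosbestic, and the ratio maps approximately linearly to oxygen saturation. A sketch with assumed calibration constants (a, b and the function names are illustrative, not values from this paper):

```python
# Sketch of two-wavelength optical-density-ratio retinal oximetry.
import math

def optical_density(i_vessel, i_background):
    """OD = log10(I_background / I_vessel) for one wavelength."""
    return math.log10(i_background / i_vessel)

def estimate_so2(iv550, ib550, iv600, ib600, a=1.28, b=-1.23):
    """SO2 estimated via the linear calibration SO2 = a + b * ODR,
    where ODR = OD(600 nm) / OD(550 nm); a, b are assumed constants."""
    odr = optical_density(iv600, ib600) / optical_density(iv550, ib550)
    return a + b * odr
```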