Results 1 - 20 of 56
1.
IEEE Trans Biomed Eng ; PP, 2024 Apr 25.
Article in English | MEDLINE | ID: mdl-38662563

ABSTRACT

OBJECTIVE: Optical coherence tomography (OCT) images provide non-invasive visualization of fundus lesions; however, scanners from different OCT manufacturers vary greatly from one another, which often causes model performance to deteriorate on unseen OCT scanners due to domain shift. METHODS: To produce the T-styles of the potential target domain, an Orthogonal Style Space Reparameterization (OSSR) method is proposed to apply orthogonal constraints to the sampled marginal styles in the latent orthogonal style space. To leverage the high-level features of multi-source domains and the potential T-styles in the graph semantic space, a Graph Adversarial Network (GAN) is constructed to align the generated samples with the source domain samples. To align features with the same label based on their semantic features in the graph semantic space, Graph Semantic Alignment (GSA) is performed to focus on the shape and morphological differences between lesions and their surrounding regions. RESULTS: Comprehensive experiments were performed on two OCT image datasets. Compared to state-of-the-art methods, the proposed method achieves better segmentation. CONCLUSION: The proposed fundus lesion segmentation method can be trained on labeled OCT images from multiple manufacturers' scanners and tested on an unseen manufacturer's scanner with better domain generalization. SIGNIFICANCE: The proposed method can be used in routine clinical practice when an OCT image from an unseen manufacturer's scanner is available for a patient.
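The orthogonal constraint named in the METHODS section can be illustrated with a minimal numerical sketch (a hypothetical illustration, not the authors' OSSR code): sampled style vectors are pushed toward mutual orthogonality by penalizing the deviation of their Gram matrix from the identity.

```python
import numpy as np

def orthogonality_penalty(styles: np.ndarray) -> float:
    """Frobenius-norm penalty pushing row-wise style vectors toward
    mutual orthogonality: ||S S^T - I||_F^2 on L2-normalized rows."""
    # L2-normalize each sampled style vector
    norms = np.linalg.norm(styles, axis=1, keepdims=True)
    s = styles / np.clip(norms, 1e-8, None)
    gram = s @ s.T                      # pairwise cosine similarities
    identity = np.eye(s.shape[0])
    return float(np.sum((gram - identity) ** 2))

# Orthogonal styles incur zero penalty; collinear styles do not.
orthogonal = np.array([[1.0, 0.0], [0.0, 1.0]])
collinear = np.array([[1.0, 0.0], [2.0, 0.0]])
print(orthogonality_penalty(orthogonal))  # 0.0
print(orthogonality_penalty(collinear))   # 2.0
```

Minimizing such a penalty over sampled styles spreads them apart in the style space, which is one way to encourage diverse target-domain styles.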

2.
IEEE Trans Biomed Eng ; PP, 2024 Mar 21.
Article in English | MEDLINE | ID: mdl-38512744

ABSTRACT

OBJECTIVE: Multi-modal magnetic resonance (MR) image segmentation is an important task in disease diagnosis and treatment, but it is usually difficult to obtain multiple modalities for a single patient in clinical applications. To address this issue, a cross-modal consistency framework is proposed for single-modal MR image segmentation. METHODS: To enable single-modal MR image segmentation in the inference stage, a weighted cross-entropy loss and a pixel-level feature consistency loss are proposed to train the target network under the guidance of the teacher network and the auxiliary network. To fuse dual-modal MR images in the training stage, cross-modal consistency is measured with a Dice similarity entropy loss and a Dice similarity contrastive loss, so as to maximize the prediction similarity of the teacher network and the auxiliary network. To reduce the difference in image contrast between different MR images of the same organs, a contrast alignment network is proposed to align input images of varying contrast to reference images with good contrast. RESULTS: Comprehensive experiments were performed on a publicly available prostate dataset and an in-house pancreas dataset to verify the effectiveness of the proposed method. Compared to state-of-the-art methods, the proposed method achieves better segmentation. CONCLUSION: The proposed image segmentation method can fuse dual-modal MR images in the training stage and needs only single-modal MR images in the inference stage. SIGNIFICANCE: The proposed method can be used in routine clinical practice when only a single-modal MR image with variable contrast is available for a patient.
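A Dice-style consistency term between two networks' probability maps, of the general kind the METHODS section describes, can be sketched as follows (hypothetical code with made-up example predictions, not the authors' exact losses):

```python
import numpy as np

def dice_consistency_loss(p: np.ndarray, q: np.ndarray, eps: float = 1e-6) -> float:
    """Soft Dice-style dissimilarity between two probability maps:
    1 - 2<p,q> / (|p|^2 + |q|^2). Identical maps give 0; disjoint maps ~1."""
    inter = np.sum(p * q)
    return float(1.0 - (2.0 * inter + eps) / (np.sum(p * p) + np.sum(q * q) + eps))

teacher = np.array([[0.9, 0.1], [0.2, 0.8]])    # hypothetical teacher prediction
auxiliary = np.array([[0.8, 0.2], [0.3, 0.7]])  # hypothetical auxiliary prediction
print(dice_consistency_loss(teacher, teacher))    # 0.0 (perfect agreement)
print(dice_consistency_loss(teacher, auxiliary))  # small positive value
```

Driving such a term toward zero during training maximizes the prediction similarity of the two networks, which is the stated goal of the consistency losses above.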

3.
Phys Med Biol ; 69(7), 2024 Mar 21.
Article in English | MEDLINE | ID: mdl-38394676

ABSTRACT

Objective. Neovascular age-related macular degeneration (nAMD) and polypoidal choroidal vasculopathy (PCV) present many similar clinical features, but they differ significantly in progression, so an accurate diagnosis is crucial for treatment. In this paper, we propose a structure-radiomic fusion network (DRFNet) to differentiate PCV and nAMD in optical coherence tomography (OCT) images. Approach. One subnetwork (RIMNet) is designed to automatically segment nAMD and PCV lesions. Another subnetwork (StrEncoder) is designed to extract deep structural features from the segmented lesions. A third subnetwork (RadEncoder) is designed to extract radiomic features from the segmented lesions. In this study, 305 eyes (155 with nAMD and 150 with PCV) were included, with the CNV regions manually annotated. The proposed method was trained and evaluated by 4-fold cross-validation on the collected data and compared with advanced differentiation methods. Main results. The proposed method achieved high classification performance in nAMD/PCV differentiation in OCT images, an improvement of 4.68 over the next best method. Significance. The presented structure-radiomic fusion network (DRFNet) performs well in diagnosing nAMD and PCV and has high clinical value, using OCT instead of indocyanine green angiography.


Subject(s)
Choroid , Polypoidal Choroidal Vasculopathy , Humans , Choroid/blood supply , Tomography, Optical Coherence/methods , Radiomics , Fluorescein Angiography/methods , Retrospective Studies
4.
Psychiatry Res Neuroimaging ; 337: 111762, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38043369

ABSTRACT

PURPOSE: This study explores the subcortices and their intrinsic functional connectivity (iFC) in adults with autism spectrum disorder (ASD) and investigates their relationship with clinical severity. METHODS: Resting-state functional magnetic resonance imaging (rs-fMRI) data were acquired from 74 ASD patients and 63 gender- and age-matched typically developing (TD) adults. Independent component analysis (ICA) was conducted to evaluate the subcortical patterns of the basal ganglia (BG) and thalamus. These two brain areas were treated as regions of interest to further calculate whole-brain FC. In addition, we employed multivariate machine learning on subcortices-based FC brain patterns and clinical scores to classify ASD adults versus TD subjects. RESULTS: In ASD individuals, the Autism Diagnostic Observation Schedule (ADOS) score was negatively correlated with the BG network. Similarly, the Social Responsiveness Scale (SRS) score was negatively correlated with the thalamus network. The BG-based iFC analysis revealed that adults with ASD had lower FC than TD adults, and BG FC with the right medial temporal lobe (MTL) was positively correlated with SRS and ADOS separately. ASD could be predicted with a balanced accuracy of around 60.0% using brain patterns and 84.7% using clinical variables. CONCLUSION: Our results revealed that abnormal subcortical iFC may be related to autism symptoms.


Subject(s)
Autism Spectrum Disorder , Autistic Disorder , Adult , Humans , Autism Spectrum Disorder/diagnostic imaging , Brain Mapping/methods , Magnetic Resonance Imaging/methods , Brain/diagnostic imaging
5.
Clin Exp Hypertens ; 45(1): 2228518, 2023 Dec 31.
Article in English | MEDLINE | ID: mdl-37366048

ABSTRACT

OBJECTIVE: To explore the association of renal surface nodularity (RSN) with increased adverse vascular event (AVE) risk in patients with arterial hypertension. METHODS: This cross-sectional study included patients with arterial hypertension aged 18-60 years who underwent contrast-enhanced computed tomography (CT) of the kidney from January 2012 to December 2020. The subjects were classified into AVE and non-AVE groups, matched for age (difference ≤5 years) and sex. Their CT images were analyzed using both qualitative (semiRSN) and quantitative (qRSN) methods. Their clinical characteristics included age, sex, systolic blood pressure (SBP), diastolic blood pressure, hypertension course, diabetes history, hyperlipidemia, and estimated glomerular filtration rate (eGFR). RESULTS: Compared with the non-AVE group (n = 91), the AVE group (n = 91) was younger, had higher SBP, and had lower rates of diabetes and hyperlipidemia history (all P < .01). The rate of positive semiRSN was higher in the AVE group than the non-AVE group (49.45% vs 14.29%, P < .001). qRSN was larger in the AVE group than the non-AVE group [1.03 (0.85, 1.33) vs 0.86 (0.75, 1.03), P < .001]. Increased AVE risk was associated with semiRSN (odds ratio = 7.04, P < .001) and qRSN (odds ratio = 5.09, P = .003), respectively. For distinguishing AVE from non-AVE, the area under the receiver operating characteristic curve was larger for the models combining the clinical characteristics with either semiRSN or qRSN than for semiRSN or qRSN alone (P ≤ .01). CONCLUSION: Among patients with arterial hypertension aged 18-60 years, CT imaging-based RSN was associated with increased AVE risk.


Subject(s)
Hypertension , Humans , Cross-Sectional Studies , Hypertension/complications , Kidney/diagnostic imaging , Blood Pressure , Glomerular Filtration Rate , Risk Factors
6.
IEEE Trans Biomed Eng ; 70(7): 2013-2024, 2023 07.
Article in English | MEDLINE | ID: mdl-37018248

ABSTRACT

Macular hole (MH) and cystoid macular edema (CME) are two common retinal pathologies that cause vision loss. Accurate segmentation of MH and CME in retinal OCT images can greatly aid ophthalmologists in evaluating the relevant diseases. However, it remains challenging due to the complicated pathological features of MH and CME in retinal OCT images, such as diverse morphologies, low imaging contrast, and blurred boundaries. In addition, the lack of pixel-level annotation data is one of the important factors hindering further improvement of segmentation accuracy. Focusing on these challenges, we propose a novel self-guided optimization semi-supervised method, termed Semi-SGO, for joint segmentation of MH and CME in retinal OCT images. To improve the model's ability to learn the complicated pathological features of MH and CME, while alleviating the feature learning tendency problem that may be caused by skip connections in U-shaped segmentation architectures, we develop a novel dual-decoder dual-task fully convolutional neural network (D3T-FCN). Meanwhile, building on the proposed D3T-FCN, we introduce a knowledge distillation technique to design Semi-SGO, a novel semi-supervised segmentation method that can leverage unlabeled data to further improve segmentation accuracy. Comprehensive experimental results show that the proposed Semi-SGO outperforms other state-of-the-art segmentation networks. Furthermore, we also develop an automatic method for measuring the clinical indicators of MH and CME to validate the clinical significance of Semi-SGO. The code will be released on GitHub.


Subject(s)
Macular Edema , Retinal Perforations , Humans , Macular Edema/diagnostic imaging , Retinal Perforations/complications , Tomography, Optical Coherence/methods , Retina/diagnostic imaging , Neural Networks, Computer
7.
IEEE J Biomed Health Inform ; 27(7): 3467-3477, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37099475

ABSTRACT

Skin wound segmentation in photographs allows non-invasive analysis of wounds, which supports dermatological diagnosis and treatment. In this paper, we propose a novel feature augment network (FANet) to achieve automatic segmentation of skin wounds, and design an interactive feature augment network (IFANet) to provide interactive adjustment of the automatic segmentation results. The FANet contains an edge feature augment (EFA) module and a spatial relationship feature augment (SFA) module, which make full use of the notable edge information and the spatial relationship information between the wound and the skin. The IFANet, with FANet as the backbone, takes the user interactions and the initial result as inputs, and outputs the refined segmentation result. The proposed networks were tested on a dataset composed of miscellaneous skin wound images and on a public foot ulcer segmentation challenge dataset. The results indicate that the FANet gives good segmentation results, while the IFANet can effectively improve them based on simple markings. Comprehensive comparative experiments show that the proposed networks outperform other existing automatic or interactive segmentation methods, respectively.


Subject(s)
Polysorbates , Skin , Humans , Image Processing, Computer-Assisted , Skin/diagnostic imaging
8.
Phys Med Biol ; 68(9), 2023 05 03.
Article in English | MEDLINE | ID: mdl-37054733

ABSTRACT

Objective. Corneal confocal microscopy (CCM) is a rapid and non-invasive ophthalmic imaging technique that can reveal corneal nerve fibers. Automatic segmentation of corneal nerve fibers in CCM images is vital for subsequent abnormality analysis, which is the main basis for the early diagnosis of degenerative neurological systemic diseases such as diabetic peripheral neuropathy. Approach. In this paper, a U-shaped encoder-decoder structure based multi-scale and local feature guidance neural network (MLFGNet) is proposed for automatic corneal nerve fiber segmentation in CCM images. Three novel modules, including a multi-scale progressive guidance (MFPG) module, a local feature guided attention (LFGA) module, and a multi-scale deep supervision (MDS) module, are proposed and applied in the skip connections, the bottom of the encoder, and the decoder path, respectively; they are designed from both multi-scale information fusion and local information extraction perspectives to enhance the network's ability to discriminate the global and local structure of nerve fibers. The proposed MFPG module solves the imbalance between semantic information and spatial information, the LFGA module enables the network to capture attention relationships on local feature maps, and the MDS module fully utilizes the relationship between high-level and low-level features for feature reconstruction in the decoder path. Main results. The proposed MLFGNet is evaluated on three CCM image datasets; the Dice coefficients reach 89.33%, 89.41%, and 88.29%, respectively. Significance. The proposed method has excellent segmentation performance for corneal nerve fibers and outperforms other state-of-the-art methods.


Subject(s)
Eye , Face , Information Storage and Retrieval , Nerve Fibers , Neural Networks, Computer , Image Processing, Computer-Assisted
9.
Comput Methods Programs Biomed ; 233: 107454, 2023 May.
Article in English | MEDLINE | ID: mdl-36921468

ABSTRACT

BACKGROUND AND OBJECTIVE: Retinal vessel segmentation plays an important role in automatic retinal disease screening and diagnosis. How to segment thin vessels and maintain the connectivity of vessels are the key challenges of the retinal vessel segmentation task. Optical coherence tomography angiography (OCTA) is a noninvasive imaging technique that can reveal high-resolution retinal vessels. To make full use of this high resolution, a new end-to-end transformer-based network named OCT2Former (OCT-a Transformer) is proposed to segment retinal vessels accurately in OCTA images. METHODS: The proposed OCT2Former is based on an encoder-decoder structure, which mainly includes a dynamic transformer encoder and a lightweight decoder. The dynamic transformer encoder consists of a dynamic token aggregation transformer and an auxiliary convolution branch: the dynamic token aggregation transformer, built on multi-head dynamic token aggregation attention, is designed to capture global retinal vessel context information from the first layer throughout the network, and the auxiliary convolution branch is proposed to compensate for the transformer's lack of inductive bias and to assist in efficient feature extraction. A convolution-based lightweight decoder is proposed to decode features efficiently and reduce the complexity of OCT2Former. RESULTS: The proposed OCT2Former is validated on three publicly available datasets, i.e., OCTA-SS, ROSE-1, and OCTA-500 (subsets OCTA-6M and OCTA-3M). The Jaccard indexes of OCT2Former on these datasets are 0.8344, 0.7855, 0.8099, and 0.8513, respectively, outperforming the best convolution-based network by 1.43%, 1.32%, 0.75%, and 1.46%, respectively. CONCLUSION: The experimental results demonstrate that the proposed OCT2Former can achieve competitive performance on retinal OCTA vessel segmentation tasks.
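The Jaccard index reported above is a standard overlap metric for binary masks; a minimal reference computation (illustrative only, not tied to the paper's evaluation code) is:

```python
import numpy as np

def jaccard_index(pred: np.ndarray, gt: np.ndarray) -> float:
    """Jaccard index (intersection over union) between two binary vessel masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:               # both masks empty: define IoU as 1
        return 1.0
    return float(np.logical_and(pred, gt).sum() / union)

pred = np.array([[1, 1, 0], [0, 1, 0]])  # toy predicted mask
gt   = np.array([[1, 0, 0], [0, 1, 1]])  # toy ground-truth mask
print(jaccard_index(pred, gt))  # intersection 2 / union 4 = 0.5
```

Higher values indicate better overlap between predicted and ground-truth vessels; 1.0 means the masks are identical.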


Subject(s)
Mass Screening , Retinal Vessels , Retinal Vessels/diagnostic imaging , Fluorescein Angiography/methods , Tomography, Optical Coherence/methods
10.
IEEE J Biomed Health Inform ; 27(3): 1237-1248, 2023 03.
Article in English | MEDLINE | ID: mdl-35759605

ABSTRACT

Lung tumor segmentation in PET-CT images plays an important role in assisting physicians to accurately diagnose and treat lung cancer in clinical applications. However, it is still a challenging task in the medical image processing field. Due to respiration and movement, the appearance of a lung tumor varies greatly between PET images and CT images. Even when the two images are collected almost simultaneously and registered, the shape and size of the lung tumor in the PET image differ from those in the CT image. To address these issues, a modality-specific segmentation network (MoSNet) is proposed for lung tumor segmentation in PET-CT images. MoSNet can simultaneously segment the modality-specific lung tumor in PET images and CT images. MoSNet learns a modality-specific representation to describe the inconsistency between PET images and CT images and a modality-fused representation to encode the common features of lung tumors in PET images and CT images. An adversarial method is proposed to minimize an approximate modality discrepancy through an adversarial objective with respect to a modality discriminator, while preserving the modality-common representation. This improves the representation power of the network for modality-specific lung tumor segmentation in PET images and CT images. The novelty of MoSNet is its ability to produce a modality-specific map that explicitly quantifies the modality-specific weights for the features in each modality. To demonstrate the superiority of our method, MoSNet is validated on 126 PET-CT images with NSCLC. Experimental results show that MoSNet outperforms state-of-the-art lung tumor segmentation methods.


Subject(s)
Carcinoma, Non-Small-Cell Lung , Lung Neoplasms , Humans , Positron Emission Tomography Computed Tomography/methods , Lung Neoplasms/diagnostic imaging , Image Processing, Computer-Assisted/methods , Tomography, X-Ray Computed/methods
11.
IEEE Trans Med Imaging ; 42(3): 713-725, 2023 03.
Article in English | MEDLINE | ID: mdl-36260572

ABSTRACT

Accurate segmentation of retinal images can assist ophthalmologists in determining the degree of retinopathy and diagnosing other systemic diseases. However, the structure of the retina is complex, and different anatomical structures often affect the segmentation of fundus lesions. In this paper, a new segmentation strategy, a dual-stream segmentation network embedded in a conditional generative adversarial network, is proposed to improve the accuracy of retinal lesion segmentation. First, a dual-stream encoder is proposed to utilize the capabilities of two different networks and extract more feature information. Second, a multiple-level fuse block is proposed to decode the richer and more effective features from the two parallel encoders. Third, the proposed network is further trained in a semi-supervised adversarial manner to leverage labeled images and unlabeled images with high-confidence pseudo labels, which are selected by the dual-stream Bayesian segmentation network. An annotation discriminator is further proposed to reduce the negative effect whereby predictions tend to become increasingly similar to the inaccurate predictions for unlabeled images. The proposed method is cross-validated on 384 clinical fundus fluorescein angiography images and 1040 optical coherence tomography images. Compared to state-of-the-art methods, the proposed method achieves better segmentation of the retinal capillary non-perfusion region and choroidal neovascularization.


Subject(s)
Retina , Retinal Diseases , Humans , Bayes Theorem , Fundus Oculi , Retina/diagnostic imaging , Tomography, Optical Coherence
12.
Med Phys ; 50(3): 1586-1600, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36345139

ABSTRACT

BACKGROUND: Medical image segmentation is an important task in the diagnosis and treatment of cancers. Low contrast and highly flexible anatomical structures make it challenging to accurately segment organs or lesions. PURPOSE: To improve the segmentation accuracy of organs or lesions in magnetic resonance (MR) images, which can be useful in the clinical diagnosis and treatment of cancers. METHODS: First, a selective feature interaction (SFI) module is designed to selectively extract similar features from the sequence images based on similarity interaction. Second, a multi-scale guided feature reconstruction (MGFR) module is designed to reconstruct low-level semantic features and focus on small targets and the edges of the pancreas. Third, to reduce the manual annotation of large amounts of data, a semi-supervised training method is also proposed. Uncertainty estimation is used to further improve segmentation accuracy. RESULTS: Three hundred ninety-five 3D MR images from 395 patients with pancreatic cancer, 259 3D MR images from 259 patients with brain tumors, and a four-fold cross-validation strategy are used to evaluate the proposed method. Compared to state-of-the-art deep learning segmentation networks, the proposed method achieves better segmentation of the pancreas or tumors in MR images. CONCLUSIONS: SFI-Net can fuse dual-sequence MR images for abnormal pancreas or tumor segmentation. The proposed semi-supervised strategy can further improve the performance of SFI-Net.


Subject(s)
Brain Neoplasms , Pancreatic Neoplasms , Humans , Magnetic Resonance Imaging/methods , Pancreatic Neoplasms/diagnostic imaging , Image Processing, Computer-Assisted/methods
13.
Phys Med Biol ; 67(22), 2022 11 07.
Article in English | MEDLINE | ID: mdl-36220014

ABSTRACT

Although positron emission tomography-computed tomography (PET-CT) images have been widely used, it is still challenging to accurately segment lung tumors. Respiration, movement, and differences in imaging modality lead to a large discrepancy in the appearance of lung tumors between PET images and CT images. To overcome these difficulties, a novel network is designed to simultaneously obtain the corresponding lung tumors in PET images and CT images. The proposed network can fuse complementary information and preserve the modality-specific features of PET images and CT images. Because PET images and CT images are complementary, the two modalities should be fused for automatic lung tumor segmentation. Therefore, cross-modality decoding blocks are designed to extract modality-specific features of PET images and CT images under the constraints of the other modality. An edge consistency loss is also designed to address the problem of blurred boundaries in PET images and CT images. The proposed method is tested on 126 PET-CT images with non-small cell lung cancer, and the Dice similarity coefficient scores of lung tumor segmentation reach 75.66 ± 19.42 in CT images and 79.85 ± 16.76 in PET images, respectively. Extensive comparisons with state-of-the-art lung tumor segmentation methods have also been performed to demonstrate the superiority of the proposed network.
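The Dice similarity coefficient used in the evaluation above can be sketched for binary masks as follows (a generic reference implementation, not the authors' evaluation code; the paper's reported values appear to be on a 0-100 scale):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice similarity coefficient 2|A∩B| / (|A| + |B|) for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    total = pred.sum() + gt.sum()
    if total == 0:        # both masks empty: treat as perfect agreement
        return 1.0
    return float(2.0 * np.logical_and(pred, gt).sum() / total)

pred = np.array([[1, 1, 0], [0, 1, 0]])  # toy predicted tumor mask
gt   = np.array([[1, 1, 0], [0, 0, 1]])  # toy ground-truth tumor mask
print(dice_coefficient(pred, gt))  # 2*2 / (3+3) ≈ 0.667
```

Dice weights the intersection twice relative to the mask sizes, so it is somewhat more forgiving of small boundary errors than the Jaccard index, which makes it a common choice for tumor segmentation benchmarks.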


Subject(s)
Carcinoma, Non-Small-Cell Lung , Lung Neoplasms , Humans , Positron Emission Tomography Computed Tomography/methods , Lung Neoplasms/diagnostic imaging , Image Processing, Computer-Assisted/methods
14.
Comput Biol Med ; 151(Pt A): 106228, 2022 12.
Article in English | MEDLINE | ID: mdl-36306579

ABSTRACT

The morphology of tissues in pathological images is routinely used by pathologists to assess the degree of malignancy of pancreatic ductal adenocarcinoma (PDAC). Automatic and accurate segmentation of tumor cells and their surrounding tissues is often a crucial step in obtaining reliable morphological statistics. Nonetheless, it is still a challenge due to the great variation in appearance and morphology. In this paper, a selected multi-scale attention network (SMANet) is proposed to segment tumor cells, blood vessels, nerves, islets, and ducts in pancreatic pathological images. The selected multi-scale attention module is proposed to enhance effective information, supplement useful information, and suppress redundant information at different scales from the encoder and decoder. It includes a selection unit (SU) module and a multi-scale attention (MA) module. The selection unit module can effectively filter features. The multi-scale attention module enhances effective information through spatial attention and channel attention, and combines different-level features to supplement useful information. This helps the network learn information from different receptive fields to improve the segmentation of tumor cells, blood vessels, and nerves. An original-feature fusion unit is also proposed to supplement the original image information and reduce the under-segmentation of small tissues such as islets and ducts. The proposed method outperforms state-of-the-art deep learning algorithms on our PDAC pathological images and achieves competitive results on the GlaS challenge dataset. The mDice and mIoU reach 0.769 and 0.665, respectively, on our PDAC dataset.


Subject(s)
Pancreatic Neoplasms , Humans , Pancreatic Neoplasms/diagnostic imaging , Cell Count , Algorithms , Image Processing, Computer-Assisted
15.
16.
Phys Med Biol ; 67(12), 2022 06 15.
Article in English | MEDLINE | ID: mdl-35613604

ABSTRACT

Objective. Retinal fluid mainly includes intra-retinal fluid (IRF), sub-retinal fluid (SRF), and pigment epithelial detachment (PED), whose accurate segmentation in optical coherence tomography (OCT) images is of great importance to the diagnosis and treatment of the related fundus diseases. Approach. In this paper, a novel two-stage multi-class retinal fluid joint segmentation framework based on cascaded convolutional neural networks is proposed. In the pre-segmentation stage, a U-shaped encoder-decoder network is adopted to acquire the retinal mask and generate a retinal relative distance map, which provides spatial prior information for the subsequent fluid segmentation. In the fluid segmentation stage, an improved context attention and fusion network based on a context shrinkage encode module and a multi-scale and multi-category semantic supervision module (named ICAF-Net) is proposed to jointly segment IRF, SRF, and PED. Main results. The proposed segmentation framework was evaluated on the RETOUCH challenge dataset. The average Dice similarity coefficient, intersection over union, and accuracy (Acc) reach 76.39%, 64.03%, and 99.32%, respectively. Significance. The proposed framework achieves good performance in the joint segmentation of multi-class fluid in retinal OCT images and outperforms some state-of-the-art segmentation networks.


Subject(s)
Neural Networks, Computer , Retina , Image Processing, Computer-Assisted/methods , Retina/diagnostic imaging , Tomography, Optical Coherence/methods
17.
IEEE Trans Med Imaging ; 41(9): 2273-2284, 2022 09.
Article in English | MEDLINE | ID: mdl-35324437

ABSTRACT

Capturing long-range dependencies and restoring the spatial information of down-sampled feature maps are the basis of encoder-decoder structure networks in medical image segmentation. U-Net based methods use feature fusion to alleviate these two problems, but the global feature extraction ability and spatial information recovery ability of U-Net are still insufficient. In this paper, we propose a Global Feature Reconstruction (GFR) module to efficiently capture global context features and a Local Feature Reconstruction (LFR) module to dynamically up-sample features. For the GFR module, we first extract global features with category representation from the feature map, then use the different-level global features to reconstruct features at each location. The GFR module establishes a connection for each pair of feature elements in the entire space from a global perspective and transfers semantic information from the deep layers to the shallow layers. For the LFR module, we use low-level feature maps to guide the up-sampling of high-level feature maps; specifically, we reconstruct features from local neighborhoods to transfer spatial information. Based on the encoder-decoder architecture, we propose a Global and Local Feature Reconstruction Network (GLFRNet), in which the GFR modules serve as skip connections and the LFR modules constitute the decoder path. The proposed GLFRNet is applied to four different medical image segmentation tasks and achieves state-of-the-art performance.


Subject(s)
Image Processing, Computer-Assisted , Neural Networks, Computer , Image Processing, Computer-Assisted/methods , Semantics
18.
Int J Hypertens ; 2022: 1553700, 2022.
Article in English | MEDLINE | ID: mdl-35284141

ABSTRACT

Background: This study sought to explore the association between quantitative classification of renal surface nodularity (qRSN) based on computed tomography (CT) imaging and early renal injury (ERI) in patients with arterial hypertension. Methods: A total of 143 patients with a history of hypertension were retrospectively enrolled; clinical information (age, sex, hypertension grade, and hypertension course), laboratory tests, and qRSN were collected or assessed. The subjects were divided into an ERI group (n = 60) or a control group (CP, n = 83) according to an ERI diagnosis based on the criterion cystatin C > 1.02 mg/L. Univariate analysis and multiple logistic regression were used to assess the association between ERI and qRSN. Receiver operating characteristic (ROC) curve analysis was performed to compare multiple logistic regression models with or without qRSN for differentiating the ERI group from the control group. Results: In univariate analysis, hypertension grade, hypertension course, triglycerides (TG), and qRSN were related to ERI in patients with arterial hypertension (all P < 0.1), with strong interrater agreement for qRSN. Multiple logistic regression analysis showed an area under the ROC curve of 0.697 for the model without qRSN and 0.790 for the model with qRSN, a significant difference (Z = 2.314, P = 0.021). Conclusion: CT imaging-based qRSN was associated with ERI in patients with arterial hypertension and may be an imaging biomarker of early renal injury.

19.
IEEE J Biomed Health Inform ; 26(2): 648-659, 2022 02.
Article in English | MEDLINE | ID: mdl-34242175

ABSTRACT

Quantitative measurements of corneal sub-basal nerves are biomarkers for many ocular surface disorders and are also important for the early diagnosis and assessment of progression of neurodegenerative diseases. This paper aims to develop an automatic method for nerve fiber segmentation from in vivo corneal confocal microscopy (CCM) images, which is fundamental for nerve morphology quantification. A novel multi-discriminator adversarial convolutional network (MDACN) is proposed, in which both the generator and the two discriminators emphasize multi-scale feature representations. The generator is a U-shaped fully convolutional network with multi-scale split-and-concatenate blocks, and the two discriminators have different effective receptive fields, making them sensitive to features of different scales. A novel loss function is also proposed that enables the network to pay more attention to thin fibers. The MDACN framework was evaluated on four datasets. Experimental results show that our method has excellent segmentation performance for corneal nerve fibers and outperforms some state-of-the-art methods.


Subject(s)
Image Processing, Computer-Assisted , Nerve Fibers , Humans , Image Processing, Computer-Assisted/methods , Microscopy, Confocal
20.
Biomed Opt Express ; 12(11): 7185-7198, 2021 Nov 01.
Article in English | MEDLINE | ID: mdl-34858709

ABSTRACT

Anti-vascular endothelial growth factor (anti-VEGF) therapy is effective for reducing the severity of diabetic retinopathy (DR). However, it is difficult to determine the in vivo spatial and temporal expression of VEGF in the DR retina at an early stage. Here, we report a quantitative fluorescence molecular imaging and image analysis method based on a VEGF-targeted fluorescence imaging probe, which can potentially detect and predict anti-VEGF treatment response. Moreover, ex vivo multiscale fluorescence imaging demonstrated the spatial correlation between relative VEGF expression and vascular abnormalities in two and three dimensions. It revealed that VEGF was mainly abnormally expressed at the bifurcations of the microvessels, which advances the knowledge of DR progression through molecular fluorescence imaging. Our study has the potential to achieve early detection of DR, provide more insight into anti-VEGF treatment, and may help stratify patients based on molecular imaging of retinal VEGF.
