Results 1 - 20 of 88
1.
IEEE Trans Biomed Eng ; PP, 2024 Apr 25.
Article in English | MEDLINE | ID: mdl-38662563

ABSTRACT

OBJECTIVE: Optical coherence tomography (OCT) images provide non-invasive visualization of fundus lesions; however, scanners from different OCT manufacturers vary considerably from one another, which often causes models to deteriorate on unseen OCT scanners due to domain shift. METHODS: To produce the T-styles of the potential target domain, an Orthogonal Style Space Reparameterization (OSSR) method is proposed that applies orthogonal constraints to the sampled marginal styles in the latent orthogonal style space. To leverage the high-level features of multi-source domains and the potential T-styles in the graph semantic space, a Graph Adversarial Network (GAN) is constructed to align the generated samples with the source-domain samples. To align features that share a label based on their semantic features in the graph semantic space, Graph Semantic Alignment (GSA) is performed, focusing on the shape and morphological differences between lesions and their surrounding regions. RESULTS: Comprehensive experiments were performed on two OCT image datasets. Compared to state-of-the-art methods, the proposed method achieves better segmentation. CONCLUSION: The proposed fundus lesion segmentation method can be trained with labeled OCT images from multiple manufacturers' scanners and tested on an unseen manufacturer's scanner with better domain generalization. SIGNIFICANCE: The proposed method can be used in routine clinical practice when only an unseen manufacturer's OCT image is available for a patient.

2.
Med Phys ; 2024 Mar 01.
Article in English | MEDLINE | ID: mdl-38426594

ABSTRACT

BACKGROUND: Deep learning based optical coherence tomography (OCT) segmentation methods have achieved excellent results, allowing quantitative analysis of large-scale data. However, OCT images are often acquired by different devices or under different imaging protocols, which leads to a serious domain shift problem and, in turn, to performance degradation of segmentation models. PURPOSE: To address the domain shift problem, we propose a two-stage adversarial learning based network (TSANet) that accomplishes unsupervised cross-domain OCT segmentation. METHODS: In the first stage, a Fourier transform based approach is adopted to reduce image style differences at the image level. Then, adversarial learning networks, including a segmenter and a discriminator, are designed to achieve inter-domain consistency in the segmentation output. In the second stage, pseudo labels of selected unlabeled target domain training data are used to fine-tune the segmenter, which further improves its generalization capability. The proposed method was tested on cross-domain datasets for choroid and retinoschisis segmentation tasks. For choroid segmentation, the model was trained on 400 images and validated on 100 images from the source domain, then trained on 1320 unlabeled images and tested on 330 images from target domain I, and trained on 400 unlabeled images and tested on 200 images from target domain II. For retinoschisis segmentation, the model was trained on 1284 images and validated on 312 images from the source domain, then trained on 1024 unlabeled images and tested on 200 images from the target domain. RESULTS: The proposed method achieved significantly better results than the baseline without domain adaptation, with improvements in intersection over union (IoU) of 8.34%, 55.82%, and 3.53% on the three test sets, respectively. Its performance also exceeds that of several state-of-the-art domain adaptation methods.
CONCLUSIONS: The proposed TSANet, with image-level adaptation, feature-level adaptation, and pseudo-label based fine-tuning, achieved excellent cross-domain generalization. This alleviates the burden of obtaining additional manual labels when adapting the deep learning model to new OCT data.
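The Fourier transform based style reduction described in the first stage resembles the well-known Fourier domain adaptation idea: swap the low-frequency amplitude spectrum of a source image with that of a target image while keeping the source phase. The abstract gives no implementation details, so the following NumPy sketch is an assumption about the approach; in particular, the window fraction `beta` and the square swap region are illustrative choices, not values from the paper.

```python
import numpy as np

def fourier_style_transfer(src: np.ndarray, tgt: np.ndarray, beta: float = 0.1) -> np.ndarray:
    """Replace the low-frequency amplitude of `src` with that of `tgt`.

    Both inputs are 2-D grayscale images of the same shape. `beta`
    controls the half-width of the swapped low-frequency square,
    as a fraction of the image size.
    """
    fft_src = np.fft.fft2(src)
    fft_tgt = np.fft.fft2(tgt)

    amp_src, pha_src = np.abs(fft_src), np.angle(fft_src)
    amp_tgt = np.abs(fft_tgt)

    # Center the spectra so low frequencies sit in the middle.
    amp_src = np.fft.fftshift(amp_src)
    amp_tgt = np.fft.fftshift(amp_tgt)

    h, w = src.shape
    b = int(min(h, w) * beta)
    ch, cw = h // 2, w // 2
    # Swap the central (low-frequency) amplitude block.
    amp_src[ch - b:ch + b, cw - b:cw + b] = amp_tgt[ch - b:ch + b, cw - b:cw + b]

    amp_src = np.fft.ifftshift(amp_src)
    # Recombine the swapped amplitude with the original phase.
    out = np.fft.ifft2(amp_src * np.exp(1j * pha_src))
    return np.real(out)
```

The appeal of this kind of image-level adaptation is that it needs no training: style (amplitude) is exchanged while content (phase) is preserved.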

3.
IEEE Trans Biomed Eng ; PP, 2024 Mar 21.
Article in English | MEDLINE | ID: mdl-38512744

ABSTRACT

OBJECTIVE: Multi-modal magnetic resonance (MR) image segmentation is an important task in disease diagnosis and treatment, but it is usually difficult to obtain multiple modalities for a single patient in clinical applications. To address this issue, a cross-modal consistency framework is proposed for single-modal MR image segmentation. METHODS: To enable single-modal MR image segmentation in the inference stage, a weighted cross-entropy loss and a pixel-level feature consistency loss are proposed to train the target network under the guidance of the teacher network and the auxiliary network. To fuse dual-modal MR images in the training stage, cross-modal consistency is measured with a Dice similarity entropy loss and a Dice similarity contrastive loss, so as to maximize the prediction similarity of the teacher network and the auxiliary network. To reduce contrast differences between MR images of the same organs, a contrast alignment network is proposed to align input images of varying contrast to reference images with good contrast. RESULTS: Comprehensive experiments were performed on a publicly available prostate dataset and an in-house pancreas dataset to verify the effectiveness of the proposed method. Compared to state-of-the-art methods, the proposed method achieves better segmentation. CONCLUSION: The proposed image segmentation method can fuse dual-modal MR images in the training stage and requires only single-modal MR images in the inference stage. SIGNIFICANCE: The proposed method can be used in routine clinical practice when only a single-modal MR image with variable contrast is available for a patient.

4.
Biomed Opt Express ; 15(2): 725-742, 2024 Feb 01.
Article in English | MEDLINE | ID: mdl-38404326

ABSTRACT

Retinopathy of prematurity (ROP) usually occurs in premature or low birth weight infants and has been an important cause of childhood blindness worldwide. Diagnosis and treatment of ROP are mainly based on stage, zone and disease, with the zone being more important than the stage in serious ROP. However, because ROP zoning diagnosis is highly subjective and varies between ophthalmologists, it is challenging to achieve accurate and objective ROP zoning. To address this, we propose a new key area location (KAL) system that achieves automatic and objective ROP zoning based on its definition, consisting of a key point location network and an object detection network. First, to balance real-time performance against accuracy, a lightweight residual heatmap network (LRH-Net) is designed to locate the optic disc (OD) and macular center; it converts the location problem into a pixel-level regression problem based on heatmap regression and maximum likelihood estimation theory. In addition, to meet clinical requirements for accuracy and real-time detection, we use the one-stage object detection framework Yolov3 to locate ROP lesions. Finally, the experimental results demonstrate that the proposed KAL system achieves good performance on both key point location (errors of 6.13 and 17.03 pixels for OD and macular center location, respectively) and ROP lesion location (93.05% AP50). The ROP zoning results derived from it agree well with those manually labeled by clinicians, so the system can support clinical decision-making, help ophthalmologists interpret ROP zoning correctly, reduce subjective differences in diagnosis, and increase the interpretability of zoning results.
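The heatmap-regression formulation used by LRH-Net is a standard trick: each key point is encoded as a 2-D Gaussian "probability map", the network regresses that map, and the coordinate is recovered as its argmax. A minimal sketch of the encode/decode pair (the Gaussian `sigma` is an illustrative assumption, not a value from the paper):

```python
import numpy as np

def encode_heatmap(h: int, w: int, cy: int, cx: int, sigma: float = 2.0) -> np.ndarray:
    """Render a key point at (cy, cx) as an h-by-w Gaussian heatmap peaking at 1."""
    ys, xs = np.mgrid[0:h, 0:w]
    return np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2.0 * sigma ** 2))

def decode_heatmap(heatmap: np.ndarray) -> tuple[int, int]:
    """Recover the key-point location as the heatmap's argmax (row, col)."""
    idx = int(np.argmax(heatmap))
    _, w = heatmap.shape
    return idx // w, idx % w
```

Regressing a dense map rather than two raw coordinates gives the network a spatially smooth target, which is what makes the pixel-level regression view of localization work.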

5.
Phys Med Biol ; 69(7)2024 Mar 21.
Article in English | MEDLINE | ID: mdl-38394676

ABSTRACT

Objective. Neovascular age-related macular degeneration (nAMD) and polypoidal choroidal vasculopathy (PCV) present many similar clinical features but differ significantly in their progression, so an accurate diagnosis is crucial for treatment. In this paper, we propose a structure-radiomic fusion network (DRFNet) to differentiate PCV from nAMD in optical coherence tomography (OCT) images. Approach. One subnetwork (RIMNet) is designed to automatically segment nAMD and PCV lesions. A second subnetwork (StrEncoder) extracts deep structural features from the segmented lesion, and a third (RadEncoder) extracts radiomic features from the segmented lesions. This study includes 305 eyes (155 with nAMD and 150 with PCV) with manually annotated CNV regions. The proposed method was trained and evaluated by 4-fold cross validation on the collected data and compared with advanced differentiation methods. Main results. The proposed method achieved high classification performance for nAMD/PCV differentiation in OCT images, an improvement of 4.68 over the best competing method. Significance. The presented structure-radiomic fusion network (DRFNet) performs well in diagnosing nAMD and PCV and has high clinical value, since it uses OCT instead of indocyanine green angiography.


Subject(s)
Choroid , Polypoidal Choroidal Vasculopathy , Humans , Choroid/blood supply , Tomography, Optical Coherence/methods , Radiomics , Fluorescein Angiography/methods , Retrospective Studies
8.
Nat Commun ; 14(1): 6757, 2023 10 24.
Article in English | MEDLINE | ID: mdl-37875484

ABSTRACT

Failure to recognize samples from classes unseen during training is a major limitation of artificial intelligence in real-world recognition and classification of retinal anomalies. We establish an uncertainty-inspired open set (UIOS) model, trained with fundus images of 9 retinal conditions. Besides assessing the probability of each category, UIOS also calculates an uncertainty score to express its confidence. Our UIOS model with a thresholding strategy achieves F1 scores of 99.55%, 97.01%, and 91.91% on the internal testing set, the external target categories (TC)-JSIEC dataset, and the TC-unseen testing set, respectively, compared to F1 scores of 92.20%, 80.69%, and 64.74% for the standard AI model. Furthermore, UIOS correctly assigns high uncertainty scores, which would prompt a manual check, to datasets of non-target-category retinal diseases, low-quality fundus images, and non-fundus images. UIOS provides a robust method for real-world screening of retinal anomalies.
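The thresholding strategy described for UIOS can be pictured as a triage rule: any prediction whose uncertainty score exceeds a threshold is routed to manual review rather than assigned one of the trained classes. The sketch below uses normalized predictive entropy of the class probabilities as a stand-in uncertainty score; the paper derives its score differently, and the threshold value here is an arbitrary assumption.

```python
import numpy as np

def triage(probs: np.ndarray, threshold: float = 0.5) -> str:
    """Return a class label, or 'manual check' if the model is too uncertain.

    `probs` is a 1-D vector of class probabilities summing to 1.
    Uncertainty is measured as predictive entropy, normalized to [0, 1].
    """
    eps = 1e-12
    entropy = -np.sum(probs * np.log(probs + eps))
    uncertainty = entropy / np.log(len(probs))  # 0 = fully confident, 1 = uniform
    if uncertainty > threshold:
        return "manual check"
    return f"class {int(np.argmax(probs))}"
```

The point of such a rule is exactly what the abstract reports: out-of-distribution inputs (unseen diseases, poor-quality or non-fundus images) tend to produce diffuse probabilities, so they land above the threshold and get flagged instead of silently misclassified.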


Subject(s)
Eye Abnormalities , Retinal Diseases , Humans , Artificial Intelligence , Algorithms , Uncertainty , Retina/diagnostic imaging , Fundus Oculi , Retinal Diseases/diagnostic imaging
9.
World J Clin Cases ; 11(25): 6025-6030, 2023 Sep 06.
Article in English | MEDLINE | ID: mdl-37727494

ABSTRACT

BACKGROUND: Since May 2022, outbreaks of monkeypox have occurred in many countries around the world, and several cases have been reported in China. CASE SUMMARY: A 38-year-old man presented with a small, painless, shallow ulcer on the coronal sulcus that had been present for 8 days. One day after the rash appeared, the patient developed inguinal lymphadenopathy with fever. The patient had a history of male-male sexual activity and denied a recent history of travel abroad. Monkeypox virus was detected by quantitative polymerase chain reaction from the rash site and a throat swab. Based on the epidemiological history, clinical manifestations, and nucleic acid test results, the patient was diagnosed with monkeypox. CONCLUSION: Monkeypox is an emerging infectious disease in China. Monkeypox presenting as a chancre-like rash is easily misdiagnosed. Diagnosis can be made based on exposure history, clinical manifestations, and nucleic acid test results.

10.
Phys Med Biol ; 68(11)2023 05 30.
Article in English | MEDLINE | ID: mdl-37137316

ABSTRACT

Retinal detachment (RD) and retinoschisis (RS) are the main complications leading to vision loss in high myopia. Accurate segmentation of RD and RS, including the subcategories of RS (outer, middle, and inner retinoschisis), in optical coherence tomography images is of great clinical significance in the diagnosis and management of high myopia. For this multi-class segmentation task, we propose a novel framework named complementary multi-class segmentation networks. Based on domain knowledge, a three-class segmentation path (TSP) and a five-class segmentation path (FSP) are designed, and their outputs are integrated through additional decision fusion layers to achieve improved segmentation in a complementary manner. In TSP, a cross-fusion global feature module is adopted to achieve a global receptive field. In FSP, a novel three-dimensional contextual information perception module is proposed to capture long-range contexts, and a classification branch is designed to provide useful features for segmentation. A new category loss is also proposed in FSP to help better identify lesion categories. Experimental results show that the proposed method achieves superior performance for joint segmentation of RD and the three subcategories of RS, with an average Dice coefficient of 84.83%.


Subject(s)
Myopia , Retinal Detachment , Retinoschisis , Humans , Retinoschisis/diagnostic imaging , Retinoschisis/complications , Retinal Detachment/diagnostic imaging , Retinal Detachment/complications , Retina/diagnostic imaging , Tomography, Optical Coherence/methods , Myopia/complications , Myopia/pathology , Image Processing, Computer-Assisted
11.
IEEE Trans Biomed Eng ; 70(7): 2013-2024, 2023 07.
Article in English | MEDLINE | ID: mdl-37018248

ABSTRACT

Macular hole (MH) and cystoid macular edema (CME) are two common retinal pathologies that cause vision loss. Accurate segmentation of MH and CME in retinal OCT images can greatly aid ophthalmologists in evaluating the relevant diseases. However, this remains challenging because of the complicated pathological features of MH and CME in retinal OCT images, such as diverse morphologies, low imaging contrast, and blurred boundaries. In addition, the scarcity of pixel-level annotation data is an important factor hindering further improvement of segmentation accuracy. Focusing on these challenges, we propose a novel self-guided optimization semi-supervised method, termed Semi-SGO, for joint segmentation of MH and CME in retinal OCT images. To improve the model's ability to learn the complicated pathological features of MH and CME, while alleviating the feature learning tendency problem that may be caused by skip connections in U-shaped segmentation architectures, we develop a novel dual-decoder dual-task fully convolutional neural network (D3T-FCN). Building on D3T-FCN, we introduce a knowledge distillation technique to design Semi-SGO, which can leverage unlabeled data to further improve segmentation accuracy. Comprehensive experimental results show that Semi-SGO outperforms other state-of-the-art segmentation networks. Furthermore, we develop an automatic method for measuring the clinical indicators of MH and CME to validate the clinical significance of Semi-SGO. The code will be released on GitHub.


Subject(s)
Macular Edema , Retinal Perforations , Humans , Macular Edema/diagnostic imaging , Retinal Perforations/complications , Tomography, Optical Coherence/methods , Retina/diagnostic imaging , Neural Networks, Computer
12.
IEEE Trans Med Imaging ; 42(11): 3140-3154, 2023 11.
Article in English | MEDLINE | ID: mdl-37022267

ABSTRACT

Choroidal neovascularization (CNV) is a typical symptom of age-related macular degeneration (AMD) and one of the leading causes of blindness. Accurate segmentation of CNV and detection of retinal layers are critical for eye disease diagnosis and monitoring. In this paper, we propose a novel graph attention U-Net (GA-UNet) for retinal layer surface detection and CNV segmentation in optical coherence tomography (OCT) images. Due to retinal layer deformation caused by CNV, it is challenging for existing models to segment CNV and detect retinal layer surfaces in the correct topological order. We propose two novel modules to address this challenge. The first is a graph attention encoder (GAE) in a U-Net model that automatically integrates topological and pathological knowledge of retinal layers into the U-Net structure to achieve effective feature embedding. The second is a graph decorrelation module (GDM) that takes the features reconstructed by the U-Net decoder as inputs, then decorrelates them and removes information unrelated to the retinal layers for improved surface detection. In addition, we propose a new loss function to maintain the correct topological order of retinal layers and the continuity of their boundaries. The proposed model learns graph attention maps automatically during training and, during inference, uses them to perform retinal layer surface detection and CNV segmentation simultaneously. We evaluated the proposed model on our private AMD dataset and another public dataset. Experimental results show that the proposed model outperformed the competing methods for retinal layer surface detection and CNV segmentation and achieved new state-of-the-art results on these datasets.


Subject(s)
Choroidal Neovascularization , Macular Degeneration , Humans , Tomography, Optical Coherence/methods , Retina/diagnostic imaging , Choroidal Neovascularization/diagnostic imaging , Choroidal Neovascularization/pathology , Macular Degeneration/diagnostic imaging , Diagnostic Techniques, Ophthalmological
13.
IEEE J Biomed Health Inform ; 27(7): 3467-3477, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37099475

ABSTRACT

Skin wound segmentation in photographs allows non-invasive analysis of wounds, which supports dermatological diagnosis and treatment. In this paper, we propose a novel feature augment network (FANet) to achieve automatic segmentation of skin wounds, and design an interactive feature augment network (IFANet) to provide interactive adjustment of the automatic segmentation results. FANet contains an edge feature augment (EFA) module and a spatial relationship feature augment (SFA) module, which make full use of the notable edge information and the spatial relationship information between the wound and the skin. IFANet, with FANet as the backbone, takes the user interactions and the initial result as inputs and outputs the refined segmentation result. The proposed networks were tested on a dataset composed of miscellaneous skin wound images and on a public foot ulcer segmentation challenge dataset. The results indicate that FANet gives good segmentation results, while IFANet can effectively improve them based on simple marking. Comprehensive comparative experiments show that our proposed networks outperform several existing automatic and interactive segmentation methods, respectively.


Subject(s)
Polysorbates , Skin , Humans , Image Processing, Computer-Assisted , Skin/diagnostic imaging
14.
Phys Med Biol ; 68(9)2023 05 03.
Article in English | MEDLINE | ID: mdl-37054733

ABSTRACT

Objective. Corneal confocal microscopy (CCM) is a rapid and non-invasive ophthalmic imaging technique that can reveal corneal nerve fibers. Automatic segmentation of corneal nerve fibers in CCM images is vital for subsequent abnormality analysis, which is the main basis for early diagnosis of degenerative neurological systemic diseases such as diabetic peripheral neuropathy. Approach. In this paper, a U-shaped encoder-decoder structure based multi-scale and local feature guidance neural network (MLFGNet) is proposed for automatic corneal nerve fiber segmentation in CCM images. Three novel modules, a multi-scale progressive guidance (MFPG) module, a local feature guided attention (LFGA) module, and a multi-scale deep supervision (MDS) module, are proposed and applied in the skip connections, at the bottom of the encoder, and in the decoder path, respectively. They are designed from both multi-scale information fusion and local information extraction perspectives to enhance the network's ability to discriminate the global and local structure of nerve fibers. The proposed MFPG module resolves the imbalance between semantic and spatial information, the LFGA module enables the network to capture attention relationships on local feature maps, and the MDS module fully utilizes the relationship between high-level and low-level features for feature reconstruction in the decoder path. Main results. The proposed MLFGNet is evaluated on three CCM image datasets; the Dice coefficients reach 89.33%, 89.41%, and 88.29%, respectively. Significance. The proposed method has excellent segmentation performance for corneal nerve fibers and outperforms other state-of-the-art methods.
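The Dice coefficient reported here (and throughout the segmentation papers in this list) measures overlap between a predicted mask and its ground truth: Dice = 2|A∩B| / (|A| + |B|). A minimal reference implementation for binary masks:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    """Dice overlap of two binary masks; 1.0 = perfect match, 0.0 = disjoint."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    # eps guards against division by zero when both masks are empty.
    return float(2.0 * intersection / (pred.sum() + truth.sum() + eps))
```

Dice weights the intersection twice, so it is more forgiving of small masks than plain intersection-over-union; the two are monotonically related (Dice = 2·IoU / (1 + IoU)).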


Subject(s)
Eye , Face , Information Storage and Retrieval , Nerve Fibers , Neural Networks, Computer , Image Processing, Computer-Assisted
15.
Biomed Opt Express ; 14(2): 799-814, 2023 Feb 01.
Article in English | MEDLINE | ID: mdl-36874500

ABSTRACT

Keratoconus (KC) is a noninflammatory ectatic disease characterized by progressive thinning and an apical cone-shaped protrusion of the cornea. In recent years, increasing research effort has been devoted to automatic and semi-automatic KC detection based on corneal topography. However, there are few studies on the severity grading of KC, which is particularly important for its treatment. In this work, we propose a lightweight KC grading network (LKG-Net) for 4-level KC grading (Normal, Mild, Moderate, and Severe). First, we use depth-wise separable convolution to design a novel feature extraction block based on the self-attention mechanism, which not only extracts rich features but also reduces feature redundancy and greatly reduces the number of parameters. Then, to improve model performance, a multi-level feature fusion module is proposed to fuse features from the upper and lower levels, yielding more abundant and effective features. The proposed LKG-Net was evaluated on the corneal topography of 488 eyes from 281 people with 4-fold cross-validation. Compared with other state-of-the-art classification methods, the proposed method achieves a weighted recall (W_R) of 89.55%, a weighted precision (W_P) of 89.98%, a weighted F1 score (W_F1) of 89.50%, and a Kappa of 94.38%. In addition, LKG-Net was also evaluated on KC screening, and the experimental results confirm its effectiveness.

16.
Comput Methods Programs Biomed ; 233: 107454, 2023 May.
Article in English | MEDLINE | ID: mdl-36921468

ABSTRACT

BACKGROUND AND OBJECTIVE: Retinal vessel segmentation plays an important role in automatic retinal disease screening and diagnosis. Segmenting thin vessels and maintaining vessel connectivity are the key challenges of the retinal vessel segmentation task. Optical coherence tomography angiography (OCTA) is a noninvasive imaging technique that can reveal high-resolution retinal vessels. To make full use of this high resolution, a new end-to-end transformer based network named OCT2Former (OCT-a Transformer) is proposed to segment retinal vessels accurately in OCTA images. METHODS: The proposed OCT2Former is based on an encoder-decoder structure, mainly comprising a dynamic transformer encoder and a lightweight decoder. The dynamic transformer encoder consists of a dynamic token aggregation transformer and an auxiliary convolution branch: the transformer, built on multi-head dynamic token aggregation attention, captures global retinal vessel context from the first layer onward, while the auxiliary convolution branch compensates for the transformer's lack of inductive bias and assists in efficient feature extraction. A convolution based lightweight decoder is proposed to decode features efficiently and reduce the complexity of OCT2Former. RESULTS: The proposed OCT2Former is validated on three publicly available datasets: OCTA-SS, ROSE-1, and OCTA-500 (subsets OCTA-6M and OCTA-3M). Its Jaccard indexes on these datasets are 0.8344, 0.7855, 0.8099, and 0.8513, respectively, outperforming the best convolution based network by 1.43%, 1.32%, 0.75%, and 1.46%, respectively. CONCLUSION: The experimental results demonstrate that the proposed OCT2Former achieves competitive performance on retinal OCTA vessel segmentation tasks.


Subject(s)
Mass Screening , Retinal Vessels , Retinal Vessels/diagnostic imaging , Fluorescein Angiography/methods , Tomography, Optical Coherence/methods
17.
Med Phys ; 50(8): 4839-4853, 2023 Aug.
Article in English | MEDLINE | ID: mdl-36789971

ABSTRACT

BACKGROUND: Choroidal neovascularization (CNV) has no obvious symptoms in the early stage, but as it gradually expands, leaks, ruptures, and bleeds, it can cause vision loss and central scotoma, and in severe cases permanent visual impairment. PURPOSE: Accurate prediction of disease progression can greatly help ophthalmologists formulate appropriate treatment plans and prevent further deterioration. We therefore aim to predict the growth trend of CNV to help the attending physician judge the effectiveness of treatment. METHODS: In this paper, we develop a CNN-based method for CNV growth prediction. We first design a registration network to rigidly register the spectral domain optical coherence tomography (SD-OCT) B-scans of each subject at different time points, eliminating retinal displacements in the longitudinal data. Then, considering the correlation of longitudinal data, we propose a co-segmentation network with a correlation attention guidance (CAG) module to cooperatively segment the CNV lesions in a group of follow-up images and use them as input for growth prediction. Finally, based on the above registration and segmentation networks, an encoder-recurrent-decoder framework is developed for CNV growth prediction, in which an attention-based gated recurrent unit (AGRU) is embedded as the recurrent neural network to recurrently learn robust representations. RESULTS: The registration network rigidly registers the follow-up images of patients to the reference images with a root mean square error (RMSE) of 6.754 pixels. Compared with other state-of-the-art segmentation methods, the proposed segmentation network achieves a high Dice similarity coefficient (DSC) of 85.27%.
Building on these results, the proposed growth prediction network can predict future CNV morphology; the predicted CNV has a DSC of 83.69% against the ground truth, in close agreement with the actual follow-up visit. CONCLUSION: The proposed registration and segmentation networks make growth prediction possible. Moreover, accurately predicting CNV growth lets us assess a drug's efficacy for an individual patient in advance, creating opportunities to formulate appropriate treatment plans.


Subject(s)
Choroid , Choroidal Neovascularization , Humans , Choroid/pathology , Tomography, Optical Coherence/methods , Choroidal Neovascularization/diagnostic imaging , Choroidal Neovascularization/drug therapy , Retina/pathology , Disease Progression
18.
Med Phys ; 50(3): 1586-1600, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36345139

ABSTRACT

BACKGROUND: Medical image segmentation is an important task in the diagnosis and treatment of cancers. Low contrast and highly flexible anatomical structure make it challenging to accurately segment organs or lesions. PURPOSE: To improve the segmentation accuracy of organs or lesions in magnetic resonance (MR) images, which can be useful in the clinical diagnosis and treatment of cancers. METHODS: First, a selective feature interaction (SFI) module is designed to selectively extract similar features from the sequence images based on similarity interaction. Second, a multi-scale guided feature reconstruction (MGFR) module is designed to reconstruct low-level semantic features and focus on small targets and the edges of the pancreas. Third, to reduce the manual annotation of large amounts of data, a semi-supervised training method is proposed, with uncertainty estimation used to further improve segmentation accuracy. RESULTS: Three hundred ninety-five 3D MR images from 395 patients with pancreatic cancer, 259 3D MR images from 259 patients with brain tumors, and a four-fold cross-validation strategy were used to evaluate the proposed method. Compared to state-of-the-art deep learning segmentation networks, the proposed method achieves better segmentation of the pancreas and of tumors in MR images. CONCLUSIONS: SFI-Net can fuse dual-sequence MR images for abnormal pancreas or tumor segmentation. The proposed semi-supervised strategy further improves the performance of SFI-Net.


Subject(s)
Brain Neoplasms , Pancreatic Neoplasms , Humans , Magnetic Resonance Imaging/methods , Pancreatic Neoplasms/diagnostic imaging , Image Processing, Computer-Assisted/methods
19.
Comput Biol Med ; 151(Pt A): 106228, 2022 12.
Article in English | MEDLINE | ID: mdl-36306579

ABSTRACT

The morphology of tissues in pathological images is routinely used by pathologists to assess the degree of malignancy of pancreatic ductal adenocarcinoma (PDAC). Automatic and accurate segmentation of tumor cells and their surrounding tissues is often a crucial step in obtaining reliable morphological statistics. Nonetheless, it remains a challenge due to the great variation in appearance and morphology. In this paper, a selected multi-scale attention network (SMANet) is proposed to segment tumor cells, blood vessels, nerves, islets, and ducts in pancreatic pathological images. The selected multi-scale attention module is proposed to enhance effective information, supplement useful information, and suppress redundant information at different scales from the encoder and decoder. It comprises a selection unit (SU) module and a multi-scale attention (MA) module. The selection unit module effectively filters features. The multi-scale attention module enhances effective information through spatial and channel attention, and combines features from different levels to supplement useful information. This helps the network learn information at different receptive fields and improves the segmentation of tumor cells, blood vessels, and nerves. An original-feature fusion unit is also proposed to reintroduce the original image information, reducing the under-segmentation of small tissues such as islets and ducts. The proposed method outperforms state-of-the-art deep learning algorithms on our PDAC pathological images and achieves competitive results on the GlaS challenge dataset, with mDice and mIoU reaching 0.769 and 0.665, respectively, on our PDAC dataset.


Subject(s)
Pancreatic Neoplasms , Humans , Pancreatic Neoplasms/diagnostic imaging , Cell Count , Algorithms , Image Processing, Computer-Assisted , Pancreatic Neoplasms
20.
Biomed Opt Express ; 13(8): 4087-4101, 2022 Aug 01.
Article in English | MEDLINE | ID: mdl-36032570

ABSTRACT

Retinopathy of prematurity (ROP) is a proliferative vascular disease and one of the most dangerous and severe ocular complications in premature infants. An automatic ROP detection system can assist ophthalmologists in diagnosing ROP in a safe, objective, and cost-effective way. Unfortunately, due to large local redundancy and complex global dependencies in medical image processing, it is challenging to learn discriminative representations from ROP-related fundus images. To bridge this gap, a novel attention-awareness and deep supervision based network (ADS-Net) is proposed to detect the existence of ROP (Normal or ROP) and perform 3-level ROP grading (Mild, Moderate, or Severe). First, to balance large local redundancy against complex global dependencies in images, we design a multi-semantic feature aggregation (MsFA) module based on the self-attention mechanism that takes full advantage of both convolution and self-attention, generating attention-aware expressive features. Then, to address the difficulty of training deep models and to further improve ROP detection performance, we propose an optimization strategy with a deeply supervised loss. Finally, the proposed ADS-Net is evaluated on ROP screening and grading tasks with per-image and per-examination strategies, respectively. Under the per-image classification pattern, ADS-Net achieves Kappa indexes of 0.9552 and 0.9037 in ROP screening and grading, respectively. Experimental results demonstrate that the proposed ADS-Net generally outperforms other state-of-the-art classification networks, showing the effectiveness of the proposed method.
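The Kappa index quoted for ADS-Net (and for LKG-Net in item 15) is, presumably, Cohen's kappa, which corrects raw classification accuracy for agreement expected by chance. A minimal implementation from a confusion matrix, assuming that interpretation:

```python
import numpy as np

def cohens_kappa(confusion: np.ndarray) -> float:
    """Cohen's kappa from a square confusion matrix (rows: truth, cols: prediction).

    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement
    and p_e is the agreement expected by chance from the marginals.
    """
    confusion = confusion.astype(float)
    n = confusion.sum()
    observed = np.trace(confusion) / n                                # p_o
    expected = (confusion.sum(axis=0) * confusion.sum(axis=1)).sum() / n ** 2  # p_e
    return float((observed - expected) / (1.0 - expected))
```

Kappa of 1.0 means perfect agreement and 0.0 means chance-level agreement, which is why it is preferred over raw accuracy for imbalanced grading tasks like ROP severity.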
