1.
Comput Biol Med ; 178: 108639, 2024 May 21.
Article in English | MEDLINE | ID: mdl-38878394

ABSTRACT

The optic cup (OC) and optic disc (OD) are two critical structures in retinal fundus images, and their relative positions and sizes are essential for effectively diagnosing eye diseases. With the success of deep learning in computer vision, deep learning-based segmentation models have been widely used for joint optic cup and disc segmentation. However, three prominent issues impact segmentation performance. First, significant differences among datasets collected from various institutions, protocols, and devices lead to performance degradation. Second, we find that images carrying only RGB information struggle to counteract the interference caused by brightness variations, limiting color representation capability. Finally, existing methods typically ignore edge perception, making it difficult to obtain clear and smooth edge segmentation results. To address these drawbacks, we propose a novel framework based on Style Alignment and Multi-Color Fusion (SAMCF) for joint OC and OD segmentation. First, we introduce a domain generalization method that generates uniformly styled images without damaging image content, mitigating domain shift. Next, based on multiple color spaces, we propose a feature extraction and fusion network that handles brightness variation interference and improves color representation capability. Finally, an edge-aware loss is designed to produce fine edge segmentation results. Experiments on three public datasets, DGS, RIM, and REFUGE, demonstrate that the proposed SAMCF achieves superior performance to existing state-of-the-art methods. Moreover, SAMCF exhibits remarkable generalization ability across multiple retinal fundus image datasets.
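As a concrete illustration of the multi-color-space idea, the sketch below maps one RGB pixel to a fused feature vector over three color spaces using Python's stdlib `colorsys`. The function name and the choice of HSV and YIQ are illustrative assumptions, not the paper's actual channel set or network input.

```python
import colorsys

def rgb_to_multicolor_features(pixel):
    """Map one RGB pixel (floats in [0, 1]) to a fused feature vector
    combining RGB, HSV, and YIQ channels; a network would then learn
    which channels best resist brightness variation."""
    r, g, b = pixel
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    y, i, q = colorsys.rgb_to_yiq(r, g, b)
    # Concatenate the three color-space representations into one vector.
    return [r, g, b, h, s, v, y, i, q]

features = rgb_to_multicolor_features((1.0, 0.0, 0.0))  # pure red
```

In a real pipeline this conversion would be applied per pixel (or as extra input planes) before the fusion network.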

2.
Comput Biol Med ; 171: 108184, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38417386

ABSTRACT

Fusing low-level and high-level features effectively is crucial to improving the accuracy of medical image segmentation. Most CNN-based segmentation models adopt attention mechanisms to fuse features from different levels, but they do not effectively exploit the guidance carried by high-level features, which is often highly beneficial to segmentation performance, when extracting low-level features. To address this problem, we design multiple guided modules and develop a boundary-guided filter network (BGF-Net) for more accurate medical image segmentation. To the best of our knowledge, this is the first time boundary-guided information has been introduced into the medical image segmentation task. Specifically, we first propose a simple yet effective channel boundary guided module that makes the segmentation model pay more attention to the relevant channel weights. We further design a novel spatial boundary guided module that complements the channel boundary guided module and makes the model aware of the most important spatial positions. Finally, we propose a boundary guided filter that preserves the structural information of the previous feature map and guides the model to learn more important feature information. We conduct extensive experiments on skin lesion, polyp, and gland segmentation datasets, including ISIC 2016, CVC-EndoSceneStil, and GlaS, to test the proposed BGF-Net. The experimental results demonstrate that BGF-Net outperforms other state-of-the-art methods.
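The channel-weighting idea above can be sketched as a generic channel-attention gate: pool each channel to a scalar, squash it through a sigmoid, and rescale the channel. This is a minimal stand-in for the channel boundary guided module, not the exact BGF-Net formulation; the function name is illustrative.

```python
import math

def channel_gate(feature_maps):
    """Re-weight each channel by a sigmoid of its global average
    activation. feature_maps: list of 2D lists, one per channel."""
    gated = []
    for ch in feature_maps:
        mean = sum(sum(row) for row in ch) / (len(ch) * len(ch[0]))
        w = 1.0 / (1.0 + math.exp(-mean))  # gate in (0, 1)
        gated.append([[w * v for v in row] for row in ch])
    return gated

# One "active" channel and one "silent" channel.
out = channel_gate([[[1.0, 1.0], [1.0, 1.0]],
                    [[0.0, 0.0], [0.0, 0.0]]])
```

In the paper's setting the gate would presumably be conditioned on boundary information rather than a plain mean, but the rescaling mechanism is the same shape.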


Subject(s)
Image Processing, Computer-Assisted , Learning
3.
Math Biosci Eng ; 21(1): 49-74, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38303413

ABSTRACT

Retinal vessel segmentation is very important for diagnosing and treating certain eye diseases. Recently, many deep learning-based retinal vessel segmentation methods have been proposed; however, they still have shortcomings (e.g., unsatisfactory results on cross-domain data or when segmenting small blood vessels). To alleviate these problems while avoiding overly complex models, we propose a novel network based on multi-scale features and style transfer (MSFST-NET) for retinal vessel segmentation. Specifically, we first construct a lightweight segmentation module named MSF-Net, which introduces the selective kernel (SK) module to increase the multi-scale feature extraction ability of the model and thereby improve small blood vessel segmentation. Then, to alleviate performance degradation on cross-domain datasets, we propose a style transfer module and a pseudo-label learning strategy. The style transfer module reduces the style difference between source-domain and target-domain images to improve segmentation of target-domain images. The pseudo-label learning strategy is designed to work with the style transfer module to further boost the generalization ability of the model. We trained and tested the proposed MSFST-NET on the DRIVE and CHASE_DB1 datasets. The experimental results demonstrate that MSFST-NET effectively improves cross-domain generalization and achieves better retinal vessel segmentation results than other state-of-the-art methods.
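A common way to realize a pseudo-label learning strategy is to keep only confident target-domain predictions and ignore the rest. The sketch below shows that thresholding step; the function name and the 0.3/0.7 thresholds are illustrative assumptions, not values from the paper.

```python
def make_pseudo_labels(prob_map, lo=0.3, hi=0.7):
    """Turn a target-domain probability map into pseudo-labels:
    confident pixels become 0/1, uncertain ones become None and
    would be excluded from the self-training loss."""
    labels = []
    for row in prob_map:
        labels.append([1 if p >= hi else 0 if p <= lo else None
                       for p in row])
    return labels

labels = make_pseudo_labels([[0.9, 0.5], [0.1, 0.8]])
```

The pseudo-labeled pixels can then supervise another training round on the (style-transferred) target-domain images.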


Subject(s)
Image Processing, Computer-Assisted , Retinal Vessels , Retinal Vessels/diagnostic imaging , Algorithms
4.
Article in English | MEDLINE | ID: mdl-38090822

ABSTRACT

Segmentation of the Optic Disc (OD) and Optic Cup (OC) is crucial for the early detection and treatment of glaucoma. Despite the strides made in deep neural networks, incorporating trained segmentation models for clinical application remains challenging due to domain shifts arising from disparities in fundus images across different healthcare institutions. To tackle this challenge, this study introduces an innovative unsupervised domain adaptation technique called Multi-scale Adaptive Adversarial Learning (MAAL), which consists of three key components. The Multi-scale Wasserstein Patch Discriminator (MWPD) module is designed to extract domain-specific features at multiple scales, enhancing domain classification performance and offering valuable guidance for the segmentation network. To further enhance model generalizability and explore domain-invariant features, we introduce the Adaptive Weighted Domain Constraint (AWDC) module. During training, this module dynamically assigns varying weights to different scales, allowing the model to adaptively focus on informative features. Furthermore, the Pixel-level Feature Enhancement (PFE) module enhances low-level features extracted at shallow network layers by incorporating refined high-level features. This integration ensures the preservation of domain-invariant information, effectively addressing domain variation and mitigating the loss of global features. Two publicly accessible fundus image databases are employed to demonstrate the effectiveness of our MAAL method in mitigating model degradation and improving segmentation performance. The achieved results outperform current state-of-the-art (SOTA) methods in both OD and OC segmentation. Codes are available at https://github.com/M4cheal/MAAL.
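The adaptive weighting idea in the AWDC module can be sketched as a softmax over per-scale scores, so more informative scales receive larger weights that always sum to one. This is a generic sketch of adaptive scale weighting under that assumption, not the exact AWDC formulation.

```python
import math

def adaptive_scale_weights(scale_scores):
    """Softmax over per-scale scores: higher-scoring (more informative)
    scales get proportionally larger weights; weights sum to 1."""
    exps = [math.exp(s) for s in scale_scores]
    total = sum(exps)
    return [e / total for e in exps]

# Three scales, the first judged most informative.
weights = adaptive_scale_weights([2.0, 1.0, 0.1])
```

During training these weights would multiply the per-scale adversarial losses, letting the model focus adaptively.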

5.
Comput Biol Med ; 164: 107269, 2023 09.
Article in English | MEDLINE | ID: mdl-37562323

ABSTRACT

There has been steady progress in deep learning-based blood vessel segmentation. However, several challenges continue to limit it, including inadequate sample sizes, the neglect of contextual information, and the loss of microvascular details. To address these limitations, we propose a dual-path deep learning framework for blood vessel segmentation. In our framework, the fundus images are divided into concentric patches at different scales to alleviate overfitting. Then, a Multi-scale Context Dense Aggregation Network (MCDAU-Net) is proposed to accurately extract blood vessel boundaries from these patches. In MCDAU-Net, a Cascaded Dilated Spatial Pyramid Pooling (CDSPP) module is designed and incorporated into intermediate layers of the model, enlarging the receptive field and producing feature maps enriched with contextual information. To improve segmentation of low-contrast vessels, we propose an InceptionConv (IConv) module, which explores deeper semantic features and suppresses the propagation of non-vessel information. Furthermore, we design a Multi-scale Adaptive Feature Aggregation (MAFA) module to fuse multi-scale features by assigning adaptive weight coefficients to different feature maps through skip connections. Finally, to exploit complementary contextual information and enhance the continuity of microvascular structures, a fusion module is designed to combine the segmentation results obtained from patches of different sizes, achieving fine microvascular segmentation. We evaluated our approach on three widely used public datasets: DRIVE, CHASE-DB1, and STARE. The results reveal a marked advance over current state-of-the-art (SOTA) techniques, with mean Se and F1 scores improving by 7.9% and 4.7%, respectively. The code is available at https://github.com/bai101315/MCDAU-Net.
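The concentric-patch idea can be sketched as cropping square patches of several odd sizes, all sharing the image centre. The function below is a minimal illustration on nested lists; patch sizes and centring on the image midpoint are assumptions for the example.

```python
def concentric_patches(image, sizes):
    """Crop square patches of the given odd sizes, all centred on the
    image centre, mimicking multi-scale concentric patch extraction."""
    h, w = len(image), len(image[0])
    cy, cx = h // 2, w // 2
    patches = []
    for s in sizes:
        r = s // 2
        patches.append([row[cx - r:cx + r + 1]
                        for row in image[cy - r:cy + r + 1]])
    return patches

# 7x7 toy "image" whose value encodes its (row, col) position.
img = [[y * 10 + x for x in range(7)] for y in range(7)]
patches = concentric_patches(img, [3, 5])
```

In the full framework each patch scale would be segmented separately and the results merged by the fusion module.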


Subject(s)
Retinal Vessels , Semantics , Retinal Vessels/diagnostic imaging , Fundus Oculi , Sample Size , Image Processing, Computer-Assisted , Algorithms
6.
Comput Biol Med ; 164: 107215, 2023 09.
Article in English | MEDLINE | ID: mdl-37481947

ABSTRACT

Glaucoma is a leading cause of blindness and visual impairment worldwide, making early screening and diagnosis crucial to preventing vision loss. Cup-to-Disc Ratio (CDR) evaluation is a widely applied approach for effective glaucoma screening. Deep learning methods have exhibited outstanding performance in optic disc (OD) and optic cup (OC) segmentation and are mature enough for deployment in CAD systems. However, owing to the complexity of clinical data, these techniques can be constrained. Therefore, an original Coarse-to-Fine Transformer Network (C2FTFNet) is designed to segment the OD and OC jointly in two stages. In the coarse stage, to eliminate the effects of irrelevant tissue on the segmented OC and OD regions, we employ U-Net and the Circular Hough Transform (CHT) to segment the Region of Interest (ROI) around the OD. A TransUnet3+ model is then designed in the fine segmentation stage to extract the OC and OD regions more accurately from the ROI. In this model, to alleviate the limited receptive field of traditional convolutions, a Transformer module is introduced into the backbone to capture long-distance dependencies and retain more global information. A Multi-Scale Dense Skip Connection (MSDC) module is then proposed to fuse low-level and high-level features from different layers, reducing the semantic gap between feature levels. Comprehensive experiments on the DRIONS-DB, Drishti-GS, and REFUGE datasets validate the superior effectiveness of the proposed C2FTFNet compared to existing state-of-the-art approaches.
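Once OD and OC masks are available, the vertical CDR screening statistic is just the ratio of their vertical extents. A minimal sketch on binary masks (the mask representation and helper name are assumptions for the example):

```python
def vertical_cdr(cup_mask, disc_mask):
    """Vertical cup-to-disc ratio: ratio of the vertical extents
    (rows containing the structure) of the cup and disc masks."""
    def vertical_extent(mask):
        rows = [i for i, row in enumerate(mask) if any(row)]
        return (rows[-1] - rows[0] + 1) if rows else 0
    return vertical_extent(cup_mask) / vertical_extent(disc_mask)

# Toy masks: disc spans 3 rows, cup spans 1 row.
disc = [[0, 0, 0], [1, 1, 1], [1, 1, 1], [1, 1, 1], [0, 0, 0]]
cup  = [[0, 0, 0], [0, 0, 0], [0, 1, 0], [0, 0, 0], [0, 0, 0]]
cdr = vertical_cdr(cup, disc)
```

A large CDR (clinical rules of thumb are often quoted around 0.6 and above, though thresholds vary) flags the image for glaucoma review.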


Subject(s)
Glaucoma , Optic Disk , Humans , Optic Disk/diagnostic imaging , Glaucoma/diagnostic imaging , Image Interpretation, Computer-Assisted/methods , Diagnostic Techniques, Ophthalmological , Mass Screening , Fundus Oculi , Image Processing, Computer-Assisted/methods
7.
Front Neurosci ; 17: 1139181, 2023.
Article in English | MEDLINE | ID: mdl-36968487

ABSTRACT

Background: Glaucoma is the leading cause of irreversible vision loss, and accurate Optic Disc (OD) and Optic Cup (OC) segmentation is beneficial for glaucoma diagnosis. In recent years, deep learning has achieved remarkable performance in OD and OC segmentation. However, OC segmentation is more challenging than OD segmentation because of its large shape variability and cryptic boundaries, which degrade the performance of deep learning models applied to OC segmentation. Moreover, existing methods either segment the OD and OC independently or require a pre-processing step to extract an OD-centered region. Methods: In this paper, we propose a one-stage network named EfficientNet and Attention-based Residual Depth-wise Separable Convolution (EARDS) for joint OD and OC segmentation. In EARDS, EfficientNet-b0 serves as the encoder to capture more effective boundary representations. To suppress irrelevant regions and highlight features of fine OD and OC regions, an Attention Gate (AG) is incorporated into the skip connection. A Residual Depth-wise Separable Convolution (RDSC) block is also developed to improve segmentation performance and computational efficiency. Further, a novel decoder network is proposed by combining the AG, the RDSC block, and a Batch Normalization (BN) layer, which mitigates the vanishing gradient problem and accelerates convergence. Finally, a weighted combination of the focal loss and dice loss is designed to guide the network toward accurate OD and OC segmentation. Results and discussion: Extensive experimental results on the Drishti-GS and REFUGE datasets indicate that the proposed EARDS outperforms state-of-the-art approaches. The code is available at https://github.com/M4cheal/EARDS.
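The weighted focal-plus-dice objective mentioned above can be sketched for the binary case as follows. The weight w=0.5 and gamma=2.0 are illustrative defaults, not the paper's settings.

```python
import math

def dice_loss(pred, target, eps=1e-7):
    """Soft Dice loss over flat probability/label lists."""
    inter = sum(p * t for p, t in zip(pred, target))
    return 1.0 - (2.0 * inter + eps) / (sum(pred) + sum(target) + eps)

def focal_loss(pred, target, gamma=2.0, eps=1e-7):
    """Binary focal loss: down-weights easy, well-classified pixels."""
    total = 0.0
    for p, t in zip(pred, target):
        pt = p if t == 1 else 1.0 - p
        total += -((1.0 - pt) ** gamma) * math.log(pt + eps)
    return total / len(pred)

def combined_loss(pred, target, w=0.5):
    """Weighted combination of focal and dice losses."""
    return w * focal_loss(pred, target) + (1.0 - w) * dice_loss(pred, target)

loss = combined_loss([0.9, 0.1, 0.8], [1, 0, 1])
```

Dice handles class imbalance at the region level while the focal term concentrates gradient on hard pixels, which is why the two are often combined for OD/OC segmentation.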

8.
Front Public Health ; 10: 1056226, 2022.
Article in English | MEDLINE | ID: mdl-36483248

ABSTRACT

Background: High-precision segmentation of retinal blood vessels from retinal images is a significant step in diagnosing many diseases such as glaucoma and cardiovascular diseases. However, in the peripheral region of vessels, previous U-Net-based segmentation methods often fail to preserve low-contrast tiny vessels. Methods: To address this challenge, we propose a novel network model called Bi-directional ConvLSTM Residual U-Net (BCR-UNet), which takes full advantage of U-Net, DropBlock, residual convolution, and Bi-directional ConvLSTM (BConvLSTM). In the proposed BCR-UNet model, we introduce a novel Structured Dropout Residual Block (SDRB), replacing the original U-Net convolutional block, to construct the network skeleton and improve its robustness. Furthermore, to improve the discriminative ability of the network and preserve more of the original semantic information of tiny vessels, we adopt BConvLSTM to integrate, in a nonlinear manner, the feature maps captured from the first residual block and the last up-convolutional layer. Results and discussion: We conduct experiments on four public retinal blood vessel datasets, and the results show that the proposed BCR-UNet preserves more tiny blood vessels in low-contrast peripheral regions, outperforming previous state-of-the-art methods.
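Structured dropout differs from ordinary dropout in that it zeroes contiguous regions rather than independent units. The sketch below drops a single random block from a 2D feature map; real DropBlock drops many blocks with a tuned rate, so this is a minimal single-block illustration, not the SDRB implementation.

```python
import random

def drop_block(feature_map, block_size=2, seed=0):
    """Structured dropout: zero one contiguous block_size x block_size
    region at a random position of a 2D feature map."""
    rng = random.Random(seed)
    h, w = len(feature_map), len(feature_map[0])
    top = rng.randrange(h - block_size + 1)
    left = rng.randrange(w - block_size + 1)
    out = [row[:] for row in feature_map]
    for y in range(top, top + block_size):
        for x in range(left, left + block_size):
            out[y][x] = 0.0
    return out

fm = [[1.0] * 4 for _ in range(4)]
dropped = drop_block(fm)
```

Zeroing whole regions forces the network to rely on spatially distributed evidence, which is the robustness argument behind the SDRB.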


Subject(s)
Delayed Emergence from Anesthesia , Physicians , Humans , Retinal Vessels/diagnostic imaging
9.
Comput Intell Neurosci ; 2022: 5596676, 2022.
Article in English | MEDLINE | ID: mdl-35463259

ABSTRACT

Time series are complex structured data with special characteristics such as high dimensionality, dynamic behavior, and high noise. Multivariate time series (MTS) forecasting, which uses historical data to predict future trends, has become a crucial topic in data mining, and accurate MTS prediction has attracted much attention in the era of rapid information growth and big data. In this paper, a novel deep learning architecture based on the encoder-decoder framework is proposed for MTS forecasting. In this architecture, the gated recurrent unit (GRU) is first taken as the main unit of both the encoding and decoding procedures to extract useful sequential feature information. Then, unlike existing models, an attention mechanism (AM) is introduced to exploit the importance of different historical data for reconstruction at the decoding stage. Meanwhile, feature reuse is realized by skip connections based on the residual network, alleviating the influence of earlier features on data reconstruction. Finally, to enhance the performance and discriminative ability of the model, convolutional and fully connected modules are added. To validate the effectiveness of the approach, extensive experiments are executed on two different types of MTS, stock data and shared bicycle data. The experimental results adequately demonstrate the effectiveness and feasibility of the proposed method.
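The decoding-stage attention described above can be sketched as dot-product attention: score each encoder state against the current decoder state, softmax the scores, and form a weighted context vector. This is a generic AM sketch with illustrative names, not the paper's exact scoring function.

```python
import math

def attend(decoder_state, encoder_states):
    """Dot-product attention: returns the context vector (weighted sum
    of encoder states) and the softmax attention weights."""
    scores = [sum(d * e for d, e in zip(decoder_state, h))
              for h in encoder_states]
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(encoder_states[0])
    context = [sum(w * h[i] for w, h in zip(weights, encoder_states))
               for i in range(dim)]
    return context, weights

# Decoder state aligned with the first of two encoder states.
context, weights = attend([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]])
```

The context vector is then fed to the decoder GRU so historical steps contribute in proportion to their relevance.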


Subject(s)
Data Mining , Neural Networks, Computer , Big Data , Forecasting , Time Factors
10.
Med Phys ; 49(5): 3144-3158, 2022 May.
Article in English | MEDLINE | ID: mdl-35172016

ABSTRACT

PURPOSE: Accurately segmenting curvilinear structures, for example, retinal blood vessels or nerve fibers, in medical images is essential to the clinical diagnosis of many diseases. Recently, deep learning has become a popular technology for image segmentation and has obtained remarkable achievements. However, existing methods still have many problems when segmenting curvilinear structures in medical images, such as losing the details of curvilinear structures and producing many false-positive segmentation results. To mitigate these problems, we propose a novel end-to-end curvilinear structure segmentation network called Curv-Net. METHODS: Curv-Net is an effective encoder-decoder architecture constructed based on selective kernel (SK) and multi-bidirectional convolutional LSTM (multi-Bi-ConvLSTM) modules. Specifically, we first employ the SK module in the convolutional layers to adaptively extract multi-scale features of the input image, and then we design a multi-Bi-ConvLSTM as the skip concatenation to fuse the information learned at the same stage and propagate feature information from the deep stages to the shallow stages. This enables the features captured by Curv-Net to contain more detail and more high-level semantic information simultaneously, improving segmentation performance. RESULTS: The effectiveness and reliability of the proposed Curv-Net are verified on three public datasets: two color fundus datasets (DRIVE and CHASE_DB1) and one corneal nerve fiber dataset (CCM-2). We calculate the accuracy (ACC), sensitivity (SE), specificity (SP), Dice similarity coefficient (Dice), and area under the receiver operating characteristic curve (AUC) for the DRIVE and CHASE_DB1 datasets. The ACC, SE, SP, Dice, and AUC on the DRIVE dataset are 0.9629, 0.8175, 0.9858, 0.8352, and 0.9810, respectively. For the CHASE_DB1 dataset, the values are 0.9810, 0.8564, 0.9899, 0.8143, and 0.9832, respectively. To validate the corneal nerve fiber segmentation performance of the proposed Curv-Net, we test it on the CCM-2 dataset and calculate the Dice, SE, and false discovery rate (FDR) metrics. The Dice, SE, and FDR achieved by Curv-Net are 0.8114 ± 0.0062, 0.8903 ± 0.0113, and 0.2547 ± 0.0104, respectively. CONCLUSIONS: Curv-Net is evaluated on three public datasets. Extensive experimental results demonstrate that Curv-Net outperforms other leading curvilinear structure segmentation methods.


Subject(s)
Image Processing, Computer-Assisted , Neural Networks, Computer , Fundus Oculi , Reproducibility of Results , Retinal Vessels
11.
Comput Intell Neurosci ; 2021: 5486328, 2021.
Article in English | MEDLINE | ID: mdl-34912446

ABSTRACT

The demand forecast for shared bicycles directly determines vehicle utilization rates and project operating benefits. Accurate prediction based on existing operating data can reduce unnecessary redeployment. Because the use of shared bicycles is susceptible to time dependence and external factors, most existing works consider only some attributes of shared bicycles, resulting in insufficient modeling and unsatisfactory prediction performance. To address these limitations, this paper establishes a novel prediction model based on a convolutional recurrent neural network with an attention mechanism, named CNN-GRU-AM. The proposed model has four parts. First, a convolutional neural network (CNN) with two layers extracts local features from multiple data sources. Second, a gated recurrent unit (GRU) captures the time-series relationships in the CNN output. Third, an attention mechanism (AM) is introduced to mine the potential relationships among series features, assigning different weights to the corresponding features according to their importance. Finally, a fully connected module with three layers learns features and outputs the prediction results. To evaluate the performance of the proposed method, we conducted extensive experiments on two datasets: a real-world mobile bicycle dataset and a public shared bicycle dataset. The experimental results show that the proposed model outperforms other prediction models, indicating its potential social benefits.


Subject(s)
Bicycling , Neural Networks, Computer , Forecasting
12.
Comput Intell Neurosci ; 2021: 4026132, 2021.
Article in English | MEDLINE | ID: mdl-34777492

ABSTRACT

Anomaly detection (AD) aims to distinguish data points that are inconsistent with the overall pattern of the data. Recently, unsupervised anomaly detection methods have attracted considerable attention. Among these methods, feature representation (FR) plays an important role and can directly affect detection performance. Sparse representation (SR), which can be regarded as a matrix factorization (MF) method, is a powerful tool for FR. However, the original SR has some limitations. On the one hand, it learns only shallow feature representations, which leads to poor anomaly detection performance. On the other hand, the local geometric structure information of the data is ignored. To address these shortcomings, a graph regularized deep sparse representation (GRDSR) approach is proposed for unsupervised anomaly detection in this work. In GRDSR, a deep representation framework is first designed by extending single-layer MF to multilayer MF to extract hierarchical structure from the original data. Next, a graph regularization term is introduced to capture the intrinsic local geometric structure of the original data during FR, making the deep features preserve neighborhood relationships well. Then, an L1-norm-based sparsity constraint is added to enhance the discriminative ability of the deep features. Finally, the reconstruction error is used to identify anomalies. To demonstrate the effectiveness of the proposed approach, we conduct extensive experiments on ten datasets. Compared with state-of-the-art methods, the proposed approach achieves the best performance.
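The final scoring step, flagging anomalies by reconstruction error, is straightforward to sketch. Given each sample and its reconstruction (however produced, e.g. by a learned deep sparse representation), score by squared error and flag the largest; function names and the top-k rule are illustrative.

```python
def anomaly_scores(data, reconstructions):
    """Score each sample by its squared-L2 reconstruction error;
    higher score = more anomalous."""
    return [sum((x - r) ** 2 for x, r in zip(xs, rs))
            for xs, rs in zip(data, reconstructions)]

def flag_anomalies(scores, k=1):
    """Indices of the k samples with the largest reconstruction error."""
    return sorted(range(len(scores)), key=lambda i: scores[i],
                  reverse=True)[:k]

# Sample 0 is reconstructed well; sample 1 is not.
scores = anomaly_scores([[1.0, 2.0], [0.0, 0.0]],
                        [[1.0, 2.1], [3.0, 4.0]])
flags = flag_anomalies(scores)
```

The intuition: a model fit to normal data reconstructs normal samples well, so poorly reconstructed samples are likely anomalies.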


Subject(s)
Unsupervised Machine Learning
13.
Sensors (Basel) ; 20(10)2020 May 18.
Article in English | MEDLINE | ID: mdl-32443591

ABSTRACT

As the Internet of Things (IoT) is expected to deal with many problems involving big data, its applications have become increasingly dependent on visual data and deep learning technology, and finding suitable methods for IoT systems to analyze image data is a major challenge. Traditional deep learning methods do not explicitly take the color differences of data into account, yet experience from human vision suggests that colors play differently significant roles in recognizing things. This paper proposes a weight initialization method for deep learning in image recognition problems based on the RGB influence proportion, aiming to improve the training process of learning algorithms. We extract the RGB proportion and utilize it in the weight initialization process. We conduct several experiments on different datasets to evaluate the effectiveness of our proposal, and it proves effective on small datasets. In addition, regarding access to the RGB influence proportion, we also provide an expedient approach to obtain an early estimate of the proportion for subsequent use. We expect that the proposed method can be used by IoT sensors to securely analyze complex data in the future.
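One plausible reading of "extracting the RGB proportion" is computing each channel's share of total intensity over an image, which could then scale the initial weights of the corresponding input channels. The sketch below shows only that extraction step; the exact proportion definition and how it enters initialization are assumptions, not the paper's specification.

```python
def rgb_proportions(pixels):
    """Per-channel share of total intensity across an image's pixels.
    pixels: iterable of (r, g, b) tuples with float components."""
    sums = [0.0, 0.0, 0.0]
    for r, g, b in pixels:
        sums[0] += r
        sums[1] += g
        sums[2] += b
    total = sum(sums)
    return [s / total for s in sums]

# Toy red-dominant image: red carries 70% of the intensity.
props = rgb_proportions([(0.8, 0.1, 0.1), (0.6, 0.2, 0.2)])
```

A channel with a larger proportion would then receive proportionally scaled initial weights in the first convolutional layer.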

14.
Comput Math Methods Med ; 2019: 8973287, 2019.
Article in English | MEDLINE | ID: mdl-31827591

ABSTRACT

Accurate optic disc and optic cup segmentation plays an important role in diagnosing glaucoma. However, most existing segmentation approaches suffer from the following limitations. On the one hand, imaging devices or illumination variations often lead to intensity inhomogeneity in the fundus image. On the other hand, the spatial prior knowledge of the optic disc and optic cup, e.g., that the optic cup is always contained inside the optic disc region, is ignored. As a result, the effectiveness of such segmentation approaches is greatly reduced. Different from most previous approaches, we present a novel locally statistical active contour model with a structure prior (LSACM-SP) to jointly and robustly segment the optic disc and optic cup structures. First, some preprocessing techniques are used to automatically extract the initial object contour. Then, we apply the locally statistical active contour model (LSACM) to optic disc and optic cup segmentation in the presence of intensity inhomogeneity. Finally, taking the specific morphology of the optic disc and optic cup into consideration, a novel structure prior is proposed to guide the model toward accurate segmentation results. Experimental results demonstrate the advantage and superiority of our approach on two publicly available databases, i.e., DRISHTI-GS and RIM-ONE r2, in comparison with several well-known algorithms.
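The containment prior (cup inside disc) can be enforced post hoc by intersecting the two masks, which is a minimal sketch of the constraint rather than the paper's variational formulation, where the prior shapes the contour evolution itself.

```python
def enforce_cup_in_disc(cup_mask, disc_mask):
    """Structure prior: the optic cup must lie inside the optic disc,
    so keep only cup pixels that are also disc pixels."""
    return [[c & d for c, d in zip(crow, drow)]
            for crow, drow in zip(cup_mask, disc_mask)]

# Toy masks: one cup pixel leaks outside the disc and is removed.
cup  = [[1, 1], [0, 1]]
disc = [[1, 0], [0, 1]]
fixed = enforce_cup_in_disc(cup, disc)
```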


Subject(s)
Diagnosis, Computer-Assisted/methods , Glaucoma/diagnostic imaging , Retina/diagnostic imaging , Algorithms , Cluster Analysis , False Positive Reactions , Fundus Oculi , Glaucoma/pathology , Humans , Image Processing, Computer-Assisted/methods , Models, Statistical , Normal Distribution , Ophthalmology/methods , Optic Disk/diagnostic imaging , Pattern Recognition, Automated , ROC Curve , Reproducibility of Results , Software
15.
Med Biol Eng Comput ; 57(9): 2055-2067, 2019 Sep.
Article in English | MEDLINE | ID: mdl-31352661

ABSTRACT

Glaucoma is a sight-threatening disease that can lead to irreversible blindness. Currently, extracting the vertical cup-to-disc ratio (CDR) from 2D retinal fundus images is a promising route to automatic glaucoma diagnosis. In this paper, we present a novel sparse coding approach for glaucoma diagnosis called adaptive weighted locality-constrained sparse coding (AWLCSC). Different from existing reconstruction-based glaucoma diagnosis approaches, the weight matrix in AWLCSC is constructed by adaptively fusing multiple distance measures between the reference images and the test image, making our approach more robust and effective for glaucoma diagnosis. In our approach, the disc image is first extracted and reconstructed according to the proposed AWLCSC technique. Then, using the obtained reconstruction coefficients and a series of reference disc images with known CDRs, the CDR of the test disc image can be automatically estimated for glaucoma diagnosis. The performance of the proposed AWLCSC is evaluated on the publicly available DRISHTI-GS1 and RIM-ONE r2 databases. The experimental results indicate that the proposed approach outperforms the state-of-the-art approaches. Graphical abstract: the flowchart of the proposed approach for glaucoma diagnosis.
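The final estimation step can be sketched as a coefficient-weighted average of the reference CDRs. In the actual method the coefficients come from solving the AWLCSC objective; here they are given directly, and the normalization is an assumption for the example.

```python
def estimate_cdr(coefficients, reference_cdrs):
    """Estimate the test image's CDR as a reconstruction-coefficient-
    weighted average of the reference images' known CDRs."""
    total = sum(coefficients)
    return sum(c * r for c, r in zip(coefficients, reference_cdrs)) / total

# Test disc is reconstructed mostly from the first reference image.
cdr = estimate_cdr([0.6, 0.3, 0.1], [0.4, 0.5, 0.7])
```

References that contribute more to the sparse reconstruction thus pull the estimate toward their own CDR.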


Subject(s)
Algorithms , Glaucoma/diagnostic imaging , Image Interpretation, Computer-Assisted/methods , Databases, Factual , Fundus Oculi , Humans , Image Processing, Computer-Assisted/methods
16.
Sensors (Basel) ; 19(2)2019 Jan 14.
Article in English | MEDLINE | ID: mdl-30646611

ABSTRACT

Riding the wave of visual sensor equipment (e.g., personal smartphones, home security cameras, vehicle cameras, and camcorders), image retrieval (IR) technology has received increasing attention due to its potential applications in e-commerce, visual surveillance, and intelligent traffic. However, designing an effective feature descriptor has proven to be the main bottleneck in retrieving a set of images of interest. In this paper, we first construct a six-layer color quantizer to extract a color map. Then, motivated by the human visual system, we design a local parallel cross pattern (LPCP) in which the local binary pattern (LBP) map is amalgamated with the color map in "parallel" and "cross" manners. Finally, to reduce computational complexity and improve robustness to image rotation, the LPCP is extended to the uniform local parallel cross pattern (ULPCP) and the rotation-invariant local parallel cross pattern (RILPCP). Extensive experiments are performed on eight benchmark datasets. The results provide an in-depth comparison of the proposed descriptors against eight state-of-the-art color texture descriptors in terms of effectiveness, efficiency, robustness, and computational complexity. Additionally, compared with a series of Convolutional Neural Network (CNN)-based models, the proposed descriptors still achieve competitive results.
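The LBP map that LPCP builds on is computed per pixel by comparing each of the 8 neighbours to the centre. A minimal sketch for one 3x3 patch (the clockwise bit ordering from the top-left is one common convention; implementations differ):

```python
def lbp_code(patch):
    """8-neighbour local binary pattern for the centre of a 3x3 patch:
    each neighbour >= centre contributes one bit, read clockwise
    starting from the top-left neighbour."""
    c = patch[1][1]
    neighbours = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
                  patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    code = 0
    for bit, n in enumerate(neighbours):
        if n >= c:
            code |= 1 << bit
    return code

code = lbp_code([[9, 1, 9],
                 [1, 5, 9],
                 [1, 1, 1]])  # neighbours >= 5 at bits 0, 2, 3
```

Applying this over every pixel yields the LBP map that is then fused with the color map in the "parallel" and "cross" manners described above.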

17.
Comput Math Methods Med ; 2018: 1942582, 2018.
Article in English | MEDLINE | ID: mdl-30013614

ABSTRACT

The optic disc is a key anatomical structure in retinal images. The ability to detect optic discs in retinal images plays an important role in automated screening systems. Inspired by the fact that humans can find optic discs in retinal images by observing some local features, we propose a local feature spectrum analysis (LFSA) that eliminates the influence caused by the variable spatial positions of local features. In LFSA, a dictionary of local features is used to reconstruct new optic disc candidate images, and the utilization frequencies of every atom in the dictionary are considered as a type of "spectrum" that can be used for classification. We also employ the sparse dictionary selection approach to construct a compact and representative dictionary. Unlike previous approaches, LFSA does not require the segmentation of vessels, and its method of considering the varying information in the retinal images is both simple and robust, making it well-suited for automated screening systems. Experimental results on the largest publicly available dataset indicate the effectiveness of our proposed approach.
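The "spectrum" feature described above, i.e. how often each dictionary atom is used across the reconstructions, can be sketched as a usage-frequency count over sparse codes. The representation (nonzero coefficient = atom used) is an assumption for the example.

```python
def atom_spectrum(sparse_codes, dict_size):
    """Count how often each dictionary atom carries a nonzero
    coefficient across all reconstructions; this usage-frequency
    'spectrum' serves as the classification feature."""
    spectrum = [0] * dict_size
    for code in sparse_codes:
        for atom, coeff in enumerate(code):
            if coeff != 0:
                spectrum[atom] += 1
    return spectrum

# Two reconstructions over a 3-atom dictionary.
spec = atom_spectrum([[0.5, 0, 0.2], [0, 0, 0.9]], 3)
```

Because only usage counts matter, the feature is insensitive to where in the image a local structure appears, which is the spatial-position invariance the approach relies on.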


Subject(s)
Image Interpretation, Computer-Assisted , Optic Disk/diagnostic imaging , Algorithms , Color , Humans , Spectrum Analysis
18.
Sensors (Basel) ; 18(6)2018 Jun 15.
Article in English | MEDLINE | ID: mdl-29914068

ABSTRACT

Currently, visual sensors are becoming increasingly affordable and popular, accelerating the growth of image data. Image retrieval has attracted increasing interest due to space exploration, industrial, and biomedical applications. Nevertheless, designing an effective feature representation is acknowledged as a hard yet fundamental issue. This paper presents a fusion feature representation called a hybrid histogram descriptor (HHD) for image retrieval. The proposed descriptor jointly comprises two histograms: a perceptually uniform histogram, extracted by exploiting the color and edge orientation information in perceptually uniform regions; and a motif co-occurrence histogram, acquired by calculating the probability of pairs of motif patterns. To evaluate its performance, we benchmarked the proposed descriptor on the RSSCN7, AID, Outex-00013, Outex-00014, and ETHZ-53 datasets. Experimental results suggest that the proposed descriptor is more effective and robust than ten recent fusion-based descriptors under the content-based image retrieval framework. The computational complexity was also analyzed to give an in-depth evaluation. Furthermore, compared with state-of-the-art convolutional neural network (CNN)-based descriptors, the proposed descriptor achieves comparable performance without requiring any training.

19.
Comput Math Methods Med ; 2017: 9854825, 2017.
Article in English | MEDLINE | ID: mdl-28512511

ABSTRACT

Red lesions are among the earliest lesions in diabetic retinopathy (DR), and their automatic detection plays a critical role in DR diagnosis. In this paper, a novel superpixel Multichannel Multifeature (MCMF) classification approach is proposed for red lesion detection. First, a new superpixel-based candidate extraction method is proposed. Then, these candidates are characterized by multichannel features as well as a contextual feature. Next, an FDA classifier is introduced to classify the red lesions among the candidates. Finally, a postprocessing technique based on multiscale blood vessel detection is adapted to remove non-lesions that appear red. Experiments on the publicly available DiaretDB1 database verify the effectiveness of the proposed method.


Subject(s)
Diabetic Retinopathy/diagnostic imaging , Image Interpretation, Computer-Assisted , Databases, Factual , Humans , Image Interpretation, Computer-Assisted/standards , Pattern Recognition, Automated
20.
Comput Math Methods Med ; 2017: 2483137, 2017.
Article in English | MEDLINE | ID: mdl-28421125

ABSTRACT

Recently, microaneurysm (MA) detection has attracted a lot of attention in the medical image processing community. Since MAs can be seen as the earliest lesions in diabetic retinopathy, their detection plays a critical role in diabetic retinopathy diagnosis. In this paper, we propose a novel MA detection approach named multifeature fusion dictionary learning (MFFDL). The proposed method consists of four steps: preprocessing, candidate extraction, multifeature dictionary learning, and classification. The novelty of our proposed approach lies in incorporating the semantic relationships among multifeatures and dictionary learning into a unified framework for automatic detection of MAs. We evaluate the proposed algorithm by comparing it with the state-of-the-art approaches and the experimental results validate the effectiveness of our algorithm.


Subject(s)
Algorithms , Diabetic Retinopathy/diagnostic imaging , Diagnostic Techniques, Ophthalmological , Microaneurysm/diagnostic imaging , Humans , Image Processing, Computer-Assisted , Learning