Results 1 - 20 of 24
1.
Sensors (Basel) ; 20(10)2020 May 18.
Article in English | MEDLINE | ID: mdl-32443591

ABSTRACT

As the Internet of Things (IoT) is expected to deal with a variety of problems based on big data, its applications have become increasingly dependent on visual data and deep learning technology, and finding a suitable method for IoT systems to analyze image data is a major challenge. Traditional deep learning methods have never explicitly taken the color differences of data into account, yet from the experience of human vision, colors play roles of differing significance in recognizing things. This paper proposes a weight initialization method for deep learning in image recognition problems based on the RGB influence proportion, aiming to improve the training process of the learning algorithms. In this paper, we extract the RGB proportion and utilize it in the weight initialization process. We conduct several experiments on different datasets to evaluate the effectiveness of our proposal, and it proves effective on small datasets. In addition, regarding access to the RGB influence proportion, we also provide an expedient approach to obtain an early estimate of the proportion for subsequent use. We anticipate that the proposed method can be used by IoT sensors to securely analyze complex data in the future.
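A minimal sketch of how such an RGB-influence-based initialization could look in PyTorch, assuming the proportion is each channel's share of the mean intensity over a training sample and that it rescales a Kaiming-initialized first convolution (the paper's exact formulation is not given in the abstract):

```python
import torch
import torch.nn as nn

def rgb_proportion(images: torch.Tensor) -> torch.Tensor:
    """Estimate the relative influence of the R, G, B channels as each
    channel's share of the total mean intensity over a batch (N, 3, H, W)."""
    channel_means = images.mean(dim=(0, 2, 3))          # shape (3,)
    return channel_means / channel_means.sum()

def init_first_conv(conv: nn.Conv2d, proportion: torch.Tensor) -> None:
    """Kaiming-initialize the first conv layer, then rescale the weights of
    each input (color) channel by its estimated influence proportion."""
    nn.init.kaiming_normal_(conv.weight, nonlinearity='relu')
    with torch.no_grad():
        # weight shape: (out_channels, 3, kH, kW); scale along the RGB axis
        conv.weight.mul_(proportion.view(1, 3, 1, 1) * 3.0)  # *3 keeps the overall scale

# Usage on a small sample of the training set
sample = torch.rand(16, 3, 32, 32)        # stand-in for real training images
first_conv = nn.Conv2d(3, 64, kernel_size=3, padding=1)
init_first_conv(first_conv, rgb_proportion(sample))
```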

2.
Sensors (Basel) ; 19(2)2019 Jan 14.
Article in English | MEDLINE | ID: mdl-30646611

ABSTRACT

Riding the wave of visual sensor equipment (e.g., personal smartphones, home security cameras, vehicle cameras, and camcorders), image retrieval (IR) technology has received increasing attention due to its potential applications in e-commerce, visual surveillance, and intelligent traffic. However, designing an effective feature descriptor has proven to be the main bottleneck for retrieving a set of images of interest. In this paper, we first construct a six-layer color quantizer to extract a color map. Then, motivated by the human visual system, we design a local parallel cross pattern (LPCP) in which the local binary pattern (LBP) map is amalgamated with the color map in "parallel" and "cross" manners. Finally, to reduce the computational complexity and improve robustness to image rotation, the LPCP is extended to the uniform local parallel cross pattern (ULPCP) and the rotation-invariant local parallel cross pattern (RILPCP), respectively. Extensive experiments are performed on eight benchmark datasets. The experimental results provide an in-depth comparison of the proposed descriptors against eight state-of-the-art color texture descriptors in terms of effectiveness, efficiency, robustness, and computational complexity. Additionally, compared with a series of Convolutional Neural Network (CNN)-based models, the proposed descriptors still achieve competitive results.
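A rough illustration of the two ingredients named above, using scikit-image: an LBP texture map and a coarse color quantization map, fused here only as a simple joint histogram (the six-layer quantizer and the "parallel"/"cross" fusion rules are not reproduced):

```python
import numpy as np
from skimage import color, feature

def lbp_and_color_maps(rgb: np.ndarray, n_colors: int = 64):
    """Building blocks of an LPCP-style descriptor: an LBP texture map and a
    coarsely quantized color map.  rgb is assumed to be an (H, W, 3) uint8 image."""
    gray = color.rgb2gray(rgb)
    # Rotation-invariant uniform LBP, 8 neighbors at radius 1
    lbp = feature.local_binary_pattern(gray, P=8, R=1.0, method='uniform')
    # Naive uniform RGB quantization into n_colors bins (4 levels per channel)
    levels = int(round(n_colors ** (1 / 3)))
    q = (rgb.astype(np.float64) / 256.0 * levels).astype(int)
    color_map = q[..., 0] * levels * levels + q[..., 1] * levels + q[..., 2]
    return lbp, color_map

def joint_histogram(lbp, color_map, lbp_bins=10, color_bins=64):
    """A simple joint LBP/color histogram as a stand-in for the fused pattern."""
    h, _, _ = np.histogram2d(lbp.ravel(), color_map.ravel(),
                             bins=[lbp_bins, color_bins])
    return (h / h.sum()).ravel()
```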

3.
Sensors (Basel) ; 18(6)2018 Jun 15.
Article in English | MEDLINE | ID: mdl-29914068

ABSTRACT

Currently, visual sensors are becoming increasingly affordable and widespread, accelerating the growth of image data. Image retrieval has attracted increasing interest due to space exploration, industrial, and biomedical applications. Nevertheless, designing an effective feature representation is acknowledged as a hard yet fundamental issue. This paper presents a fusion feature representation called a hybrid histogram descriptor (HHD) for image retrieval. The proposed descriptor jointly comprises two histograms: a perceptually uniform histogram, extracted by exploiting the color and edge orientation information in perceptually uniform regions, and a motif co-occurrence histogram, acquired by calculating the probability of a pair of motif patterns. To evaluate the performance, we benchmarked the proposed descriptor on the RSSCN7, AID, Outex-00013, Outex-00014 and ETHZ-53 datasets. Experimental results suggest that the proposed descriptor is more effective and robust than ten recent fusion-based descriptors under the content-based image retrieval framework. The computational complexity was also analyzed to give an in-depth evaluation. Furthermore, compared with state-of-the-art convolutional neural network (CNN)-based descriptors, the proposed descriptor achieves comparable performance but does not require any training process.
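As a hedged sketch of the first component only, the snippet below builds a joint histogram over quantized CIELAB lightness and local edge orientation with scikit-image; the bin counts are arbitrary and the motif co-occurrence histogram is omitted:

```python
import numpy as np
from skimage import color, filters

def color_edge_histogram(rgb, color_bins=8, orient_bins=8):
    """Joint histogram over quantized CIELAB lightness (a perceptually uniform
    space) and Sobel edge orientation; an illustration, not the exact HHD."""
    lab = color.rgb2lab(rgb)
    gray = color.rgb2gray(rgb)
    gx = filters.sobel_h(gray)
    gy = filters.sobel_v(gray)
    orientation = (np.arctan2(gy, gx) + np.pi) / (2 * np.pi)   # in [0, 1]
    # Quantize lightness L* (0..100) and orientation, then count jointly
    l_idx = np.clip((lab[..., 0] / 100.0 * color_bins).astype(int), 0, color_bins - 1)
    o_idx = np.clip((orientation * orient_bins).astype(int), 0, orient_bins - 1)
    hist = np.zeros((color_bins, orient_bins))
    np.add.at(hist, (l_idx.ravel(), o_idx.ravel()), 1)
    return (hist / hist.sum()).ravel()
```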

4.
ScientificWorldJournal ; 2014: 972125, 2014.
Article in English | MEDLINE | ID: mdl-24688449

ABSTRACT

Feature selection is a key issue in the domain of machine learning and related fields. The results of feature selection can directly affect the classifier's classification accuracy and generalization performance. Recently, a statistical feature selection method named effective range based gene selection (ERGS) was proposed. However, ERGS only considers the overlapping area (OA) among the effective ranges of each class for every feature; it fails to handle the inclusion relation of effective ranges. To overcome this limitation, a novel, efficient statistical feature selection approach called improved feature selection based on effective range (IFSER) is proposed in this paper. In IFSER, an including area (IA) is introduced to characterize the inclusion relation of effective ranges. Moreover, the samples' proportion for each feature of every class in both OA and IA is also taken into consideration. Therefore, IFSER outperforms the original ERGS and some other state-of-the-art algorithms. Experiments on several well-known databases are performed to demonstrate the effectiveness of the proposed method.
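A small NumPy sketch of the effective-range idea underlying ERGS/IFSER: per-class ranges of the form mean ± gamma·std and a pairwise overlapping-area score per feature. The gamma value, the scoring rule, and the omission of the inclusion area and sample proportions are all assumptions of this sketch:

```python
import numpy as np

def effective_ranges(X, y, gamma=1.732):
    """Per-class effective range [mean - gamma*std, mean + gamma*std] for every
    feature (gamma is an assumed confidence factor)."""
    classes = np.unique(y)
    lo = np.array([X[y == c].mean(0) - gamma * X[y == c].std(0) for c in classes])
    hi = np.array([X[y == c].mean(0) + gamma * X[y == c].std(0) for c in classes])
    return lo, hi   # each of shape (n_classes, n_features)

def overlap_score(lo, hi):
    """Total pairwise overlapping area of class effective ranges per feature;
    a smaller overlap suggests a more discriminative feature."""
    n_classes, n_features = lo.shape
    score = np.zeros(n_features)
    for i in range(n_classes):
        for j in range(i + 1, n_classes):
            score += np.maximum(0.0, np.minimum(hi[i], hi[j]) - np.maximum(lo[i], lo[j]))
    return score

# Rank features by ascending overlap (IFSER additionally weighs an inclusion
# area and per-class sample proportions, which are omitted here):
# ranking = np.argsort(overlap_score(*effective_ranges(X, y)))
```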


Subject(s)
Models, Theoretical; Algorithms
5.
Comput Biol Med ; 171: 108184, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38417386

ABSTRACT

How to fuse low-level and high-level features effectively is crucial to improving the accuracy of medical image segmentation. Most CNN-based segmentation models adopt attention mechanisms to fuse features at different levels, but they do not effectively use the guidance information carried by high-level features, which is often highly beneficial to segmentation performance, to steer the extraction of low-level features. To address this problem, we design multiple guided modules and develop a boundary-guided filter network (BGF-Net) to obtain more accurate medical image segmentation. To the best of our knowledge, this is the first time that boundary-guided information has been introduced into the medical image segmentation task. Specifically, we first propose a simple yet effective channel boundary guided module to make the segmentation model pay more attention to the relevant channel weights. We further design a novel spatial boundary guided module to complement the channel boundary guided module and make the model aware of the most important spatial positions. Finally, we propose a boundary guided filter to preserve the structural information from the previous feature map and guide the model to learn more important feature information. Moreover, we conduct extensive experiments on skin lesion, polyp, and gland segmentation datasets including ISIC 2016, CVC-EndoSceneStil and GlaS to test the proposed BGF-Net. The experimental results demonstrate that BGF-Net performs better than other state-of-the-art methods.
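To make the channel-guidance idea concrete, here is a toy PyTorch block in which a boundary probability map re-weights the channels of a low-level feature map; the structure, reduction ratio, and gating choice are illustrative and not BGF-Net's actual module:

```python
import torch
import torch.nn as nn

class ChannelBoundaryGuide(nn.Module):
    """Toy channel-attention block in the spirit of a 'channel boundary guided
    module': a boundary probability map re-weights the channels of a low-level
    feature map.  The real BGF-Net design is not reproduced here."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, feats: torch.Tensor, boundary: torch.Tensor) -> torch.Tensor:
        # feats: (N, C, H, W); boundary: (N, 1, H, W) in [0, 1]
        guided = feats * boundary                      # emphasize boundary regions
        weights = self.mlp(guided.mean(dim=(2, 3)))    # (N, C) channel weights
        return feats * weights.unsqueeze(-1).unsqueeze(-1)

# x = torch.randn(2, 64, 128, 128); b = torch.rand(2, 1, 128, 128)
# out = ChannelBoundaryGuide(64)(x, b)
```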


Subject(s)
Image Processing, Computer-Assisted; Learning
6.
Math Biosci Eng ; 21(1): 49-74, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38303413

ABSTRACT

Retinal vessel segmentation is very important for diagnosing and treating certain eye diseases. Recently, many deep learning-based retinal vessel segmentation methods have been proposed; however, there are still many shortcomings (e.g., they cannot obtain satisfactory results when dealing with cross-domain data or segmenting small blood vessels). To alleviate these problems and avoid overly complex models, we propose a novel network based on multi-scale features and style transfer (MSFST-NET) for retinal vessel segmentation. Specifically, we first construct a lightweight segmentation module named MSF-Net, which introduces the selective kernel (SK) module to increase the multi-scale feature extraction ability of the model and thereby improve small blood vessel segmentation. Then, to alleviate the problem of model performance degradation when segmenting cross-domain datasets, we propose a style transfer module and a pseudo-label learning strategy. The style transfer module is used to reduce the style difference between the source domain image and the target domain image to improve the segmentation performance for the target domain image. The pseudo-label learning strategy is designed to be combined with the style transfer module to further boost the generalization ability of the model. Moreover, we trained and tested the proposed MSFST-NET on the DRIVE and CHASE_DB1 datasets. The experimental results demonstrate that MSFST-NET can effectively improve the generalization ability of the model on cross-domain datasets and achieve better retinal vessel segmentation results than other state-of-the-art methods.
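One common way to realize such style alignment is per-channel statistics matching between a source image and a target-domain image; the AdaIN-style sketch below is an assumption, not necessarily MSFST-NET's actual style transfer module:

```python
import torch

def match_style(source: torch.Tensor, target: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    """Minimal style-alignment sketch: re-normalize each channel of the source
    image to the per-channel mean/std of a target-domain image."""
    s_mean = source.mean(dim=(2, 3), keepdim=True)
    s_std = source.std(dim=(2, 3), keepdim=True) + eps
    t_mean = target.mean(dim=(2, 3), keepdim=True)
    t_std = target.std(dim=(2, 3), keepdim=True) + eps
    return (source - s_mean) / s_std * t_std + t_mean

# src = torch.rand(1, 3, 512, 512)   # source-domain fundus image
# tgt = torch.rand(1, 3, 512, 512)   # target-domain fundus image
# src_in_target_style = match_style(src, tgt)
```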


Subject(s)
Image Processing, Computer-Assisted; Retinal Vessels; Retinal Vessels/diagnostic imaging; Algorithms
7.
Comput Biol Med ; 178: 108639, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38878394

ABSTRACT

The optic cup (OC) and optic disc (OD) are two critical structures in retinal fundus images, and their relative positions and sizes are essential for effectively diagnosing eye diseases. With the success of deep learning in computer vision, deep learning-based segmentation models have been widely used for joint optic cup and disc segmentation. However, three prominent issues impact segmentation performance. First, significant differences among datasets collected from various institutions, protocols, and devices lead to performance degradation of models. Second, we find that images with only RGB information struggle to counteract the interference caused by brightness variations, affecting color representation capability. Finally, existing methods typically ignore edge perception and therefore struggle to obtain clear and smooth edge segmentation results. To address these drawbacks, we propose a novel framework based on Style Alignment and Multi-Color Fusion (SAMCF) for joint OC and OD segmentation. Initially, we introduce a domain generalization method to generate uniformly styled images without damaging image content, mitigating domain shift issues. Next, based on multiple color spaces, we propose a feature extraction and fusion network aiming to handle brightness variation interference and improve color representation capability. Lastly, an edge-aware loss is designed to generate fine edge segmentation results. Our experiments conducted on three public datasets, DGS, RIM, and REFUGE, demonstrate that our proposed SAMCF achieves superior performance to existing state-of-the-art methods. Moreover, SAMCF exhibits remarkable generalization ability across multiple retinal fundus image datasets, showcasing its outstanding generality.
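A simple reading of the multi-color-space idea is to stack several representations of the same fundus image as extra input channels; the snippet below does this with RGB, HSV, and CIELAB via scikit-image (the actual spaces and fusion network used by SAMCF are not specified here):

```python
import numpy as np
from skimage import color

def multi_color_stack(rgb: np.ndarray) -> np.ndarray:
    """Stack several color-space representations of a fundus image into a
    single multi-channel input.  rgb is assumed to be an (H, W, 3) uint8 image."""
    rgb_f = rgb.astype(np.float32) / 255.0
    hsv = color.rgb2hsv(rgb_f)
    lab = color.rgb2lab(rgb_f)
    lab = lab / np.array([100.0, 128.0, 128.0])      # rough per-channel normalization
    return np.concatenate([rgb_f, hsv, lab], axis=-1)   # (H, W, 9)
```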


Subject(s)
Deep Learning; Optic Disk; Humans; Optic Disk/diagnostic imaging; Color; Algorithms; Image Interpretation, Computer-Assisted/methods; Image Processing, Computer-Assisted/methods
8.
Comput Biol Med ; 164: 107269, 2023 09.
Article in English | MEDLINE | ID: mdl-37562323

ABSTRACT

There has been steady progress in the field of deep learning-based blood vessel segmentation. However, several challenging issues continue to limit its progress, including inadequate sample sizes, the neglect of contextual information, and the loss of microvascular details. To address these limitations, we propose a dual-path deep learning framework for blood vessel segmentation. In our framework, the fundus images are divided into concentric patches with different scales to alleviate the overfitting problem. Then, a Multi-scale Context Dense Aggregation Network (MCDAU-Net) is proposed to accurately extract the blood vessel boundaries from these patches. In MCDAU-Net, a Cascaded Dilated Spatial Pyramid Pooling (CDSPP) module is designed and incorporated into intermediate layers of the model, enhancing the receptive field and producing feature maps enriched with contextual information. To improve segmentation performance for low-contrast vessels, we propose an InceptionConv (IConv) module, which can explore deeper semantic features and suppress the propagation of non-vessel information. Furthermore, we design a Multi-scale Adaptive Feature Aggregation (MAFA) module to fuse multi-scale features by assigning adaptive weight coefficients to different feature maps through skip connections. Finally, to explore complementary contextual information and enhance the continuity of microvascular structures, a fusion module is designed to combine the segmentation results obtained from patches of different sizes, achieving fine microvascular segmentation performance. To assess the effectiveness of our approach, we conducted evaluations on three widely used public datasets: DRIVE, CHASE-DB1, and STARE. Our findings reveal a remarkable advancement over the current state-of-the-art (SOTA) techniques, with mean Se and F1 scores increasing by 7.9% and 4.7%, respectively. The code is available at https://github.com/bai101315/MCDAU-Net.
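For intuition, a generic cascaded dilated spatial pyramid pooling block might look like the PyTorch sketch below, with each dilated convolution feeding the next and all intermediate outputs fused by a 1x1 convolution; the dilation rates and fusion choice are assumptions, not MCDAU-Net's exact design:

```python
import torch
import torch.nn as nn

class CascadedDilatedSPP(nn.Module):
    """Sketch of a cascaded dilated spatial pyramid pooling block: dilated
    3x3 convolutions applied in cascade, with all intermediate outputs
    concatenated and fused by a 1x1 convolution."""
    def __init__(self, channels: int, dilations=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=d, dilation=d, bias=False),
                nn.BatchNorm2d(channels), nn.ReLU(inplace=True))
            for d in dilations])
        self.fuse = nn.Conv2d(channels * len(dilations), channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        outs, cur = [], x
        for branch in self.branches:      # each stage sees the previous stage's output
            cur = branch(cur)
            outs.append(cur)
        return self.fuse(torch.cat(outs, dim=1))
```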


Subject(s)
Retinal Vessels; Semantics; Retinal Vessels/diagnostic imaging; Fundus Oculi; Sample Size; Image Processing, Computer-Assisted; Algorithms
9.
Front Neurosci ; 17: 1139181, 2023.
Article in English | MEDLINE | ID: mdl-36968487

ABSTRACT

Background: Glaucoma is the leading cause of irreversible vision loss. Accurate Optic Disc (OD) and Optic Cup (OC) segmentation is beneficial for glaucoma diagnosis. In recent years, deep learning has achieved remarkable performance in OD and OC segmentation. However, OC segmentation is more challenging than OD segmentation due to its large shape variability and cryptic boundaries, which lead to performance degradation when applying deep learning models to segment the OC. Moreover, existing methods either segment the OD and OC independently or require pre-processing procedures to extract an OD-centered region beforehand. Methods: In this paper, we propose a one-stage network named EfficientNet and Attention-based Residual Depth-wise Separable Convolution (EARDS) for joint OD and OC segmentation. In EARDS, EfficientNet-b0 is used as the encoder to capture more effective boundary representations. To suppress irrelevant regions and highlight features of fine OD and OC regions, an Attention Gate (AG) is incorporated into the skip connection. Also, a Residual Depth-wise Separable Convolution (RDSC) block is developed to improve segmentation performance and computational efficiency. Further, a novel decoder network is proposed by combining the AG, the RDSC block and a Batch Normalization (BN) layer, which is utilized to eliminate the vanishing gradient problem and accelerate the convergence speed. Finally, a weighted combination of focal loss and Dice loss is designed to guide the network toward accurate OD and OC segmentation. Results and discussion: Extensive experimental results on the Drishti-GS and REFUGE datasets indicate that the proposed EARDS outperforms state-of-the-art approaches. The code is available at https://github.com/M4cheal/EARDS.
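The weighted loss mentioned above can be sketched as a plain PyTorch function for binary masks; the focal parameters and the 0.5/0.5 weighting are placeholders rather than EARDS' reported settings:

```python
import torch
import torch.nn.functional as F

def focal_dice_loss(logits, target, alpha=0.5, gamma=2.0, w_focal=0.5, w_dice=0.5):
    """Weighted focal + Dice loss for binary OD/OC masks (illustrative weights)."""
    prob = torch.sigmoid(logits)
    # Focal loss: down-weight easy pixels by (1 - p_t)^gamma
    bce = F.binary_cross_entropy_with_logits(logits, target, reduction='none')
    p_t = prob * target + (1 - prob) * (1 - target)
    a_t = alpha * target + (1 - alpha) * (1 - target)
    focal = (a_t * (1 - p_t) ** gamma * bce).mean()
    # Soft Dice loss over the whole batch
    inter = (prob * target).sum()
    dice = 1 - (2 * inter + 1.0) / (prob.sum() + target.sum() + 1.0)
    return w_focal * focal + w_dice * dice
```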

10.
Article in English | MEDLINE | ID: mdl-38090822

ABSTRACT

Segmentation of the Optic Disc (OD) and Optic Cup (OC) is crucial for the early detection and treatment of glaucoma. Despite the strides made in deep neural networks, incorporating trained segmentation models for clinical application remains challenging due to domain shifts arising from disparities in fundus images across different healthcare institutions. To tackle this challenge, this study introduces an innovative unsupervised domain adaptation technique called Multi-scale Adaptive Adversarial Learning (MAAL), which consists of three key components. The Multi-scale Wasserstein Patch Discriminator (MWPD) module is designed to extract domain-specific features at multiple scales, enhancing domain classification performance and offering valuable guidance for the segmentation network. To further enhance model generalizability and explore domain-invariant features, we introduce the Adaptive Weighted Domain Constraint (AWDC) module. During training, this module dynamically assigns varying weights to different scales, allowing the model to adaptively focus on informative features. Furthermore, the Pixel-level Feature Enhancement (PFE) module enhances low-level features extracted at shallow network layers by incorporating refined high-level features. This integration ensures the preservation of domain-invariant information, effectively addressing domain variation and mitigating the loss of global features. Two publicly accessible fundus image databases are employed to demonstrate the effectiveness of our MAAL method in mitigating model degradation and improving segmentation performance. The achieved results outperform current state-of-the-art (SOTA) methods in both OD and OC segmentation. Codes are available at https://github.com/M4cheal/MAAL.
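As a hedged illustration of output-space adversarial adaptation, a small fully convolutional patch discriminator over the segmentation probability maps could look as follows; MAAL's multi-scale Wasserstein formulation and adaptive weighting are not reproduced:

```python
import torch
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    """Small fully convolutional patch discriminator of the kind used for
    output-space adversarial domain adaptation; an illustration only."""
    def __init__(self, in_channels: int = 2):          # e.g. OD + OC probability maps
        super().__init__()
        chans = [in_channels, 64, 128, 256]
        layers = []
        for c_in, c_out in zip(chans[:-1], chans[1:]):
            layers += [nn.Conv2d(c_in, c_out, 4, stride=2, padding=1),
                       nn.LeakyReLU(0.2, inplace=True)]
        layers.append(nn.Conv2d(chans[-1], 1, kernel_size=3, padding=1))
        self.net = nn.Sequential(*layers)

    def forward(self, seg_probs: torch.Tensor) -> torch.Tensor:
        return self.net(seg_probs)                     # per-patch domain scores
```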

11.
Comput Biol Med ; 164: 107215, 2023 09.
Article in English | MEDLINE | ID: mdl-37481947

ABSTRACT

Glaucoma is a leading cause of worldwide blindness and visual impairment, making early screening and diagnosis crucial to prevent vision loss. Cup-to-Disk Ratio (CDR) evaluation serves as a widely applied approach for effective glaucoma screening. At present, deep learning methods have exhibited outstanding performance in optic disk (OD) and optic cup (OC) segmentation and have been maturely deployed in CAD systems. However, owing to the complexity of clinical data, these techniques can be constrained. Therefore, an original Coarse-to-Fine Transformer Network (C2FTFNet) is designed to segment the OD and OC jointly, and it is composed of two stages. In the coarse stage, to eliminate the effects of irrelevant tissue on the segmented OC and OD regions, we employ U-Net and the Circular Hough Transform (CHT) to segment the Region of Interest (ROI) of the OD. Meanwhile, a TransUnet3+ model is designed in the fine segmentation stage to extract the OC and OD regions more accurately from the ROI. In this model, to alleviate the limitation of the receptive field caused by traditional convolutional methods, a Transformer module is introduced into the backbone to capture long-distance dependent features and retain more global information. Then, a Multi-Scale Dense Skip Connection (MSDC) module is proposed to fuse the low-level and high-level features from different layers, reducing the semantic gap among features at different levels. Comprehensive experiments conducted on the DRIONS-DB, Drishti-GS, and REFUGE datasets validate the superior effectiveness of the proposed C2FTFNet compared to existing state-of-the-art approaches.
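A rough OpenCV sketch of the circular-Hough ROI step: detect a single dominant circle and crop a margin around it. For brevity it runs on the green channel directly rather than on a U-Net probability map, and all parameters are illustrative:

```python
import cv2
import numpy as np

def extract_od_roi(fundus_bgr: np.ndarray, margin: float = 1.5):
    """Crop a region of interest around the optic disc using a Circular Hough
    Transform; fundus_bgr is assumed to be an 8-bit BGR fundus image."""
    green = fundus_bgr[..., 1]
    blurred = cv2.GaussianBlur(green, (9, 9), 2)
    circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=2,
                               minDist=blurred.shape[0],   # expect a single disc
                               param1=50, param2=30,
                               minRadius=30, maxRadius=150)
    if circles is None:
        return None
    x, y, r = circles[0, 0]
    half = int(r * margin)
    y0, y1 = max(0, int(y) - half), int(y) + half
    x0, x1 = max(0, int(x) - half), int(x) + half
    return fundus_bgr[y0:y1, x0:x1]
```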


Subject(s)
Glaucoma; Optic Disk; Humans; Optic Disk/diagnostic imaging; Glaucoma/diagnostic imaging; Image Interpretation, Computer-Assisted/methods; Diagnostic Techniques, Ophthalmological; Mass Screening; Fundus Oculi; Image Processing, Computer-Assisted/methods
12.
Comput Intell Neurosci ; 2022: 5596676, 2022.
Article in English | MEDLINE | ID: mdl-35463259

ABSTRACT

Time series are a kind of complex structured data characterized by high dimensionality, dynamics, and high noise. Moreover, multivariate time series (MTS) have become a crucial subject in data mining. MTS forecasting uses historical data to predict future trends and has become a research hotspot. In the era of rapid information development and big data, accurate prediction of MTS has attracted much attention. In this paper, a novel deep learning architecture based on the encoder-decoder framework is proposed for MTS forecasting. In this architecture, the gated recurrent unit (GRU) is first taken as the main unit structure of both the encoding and decoding procedures to extract useful sequential feature information. Then, unlike existing models, an attention mechanism (AM) is introduced to exploit the importance of different historical data for reconstruction at the decoding stage. Meanwhile, feature reuse is realized by skip connections based on the residual network to alleviate the influence of previous features on data reconstruction. Finally, a convolutional structure and a fully connected module are established to enhance the performance and discriminative ability of the new MTS representation. Furthermore, to validate the effectiveness of MTS forecasting, extensive experiments are executed on two different types of MTS: stock data and shared bicycle data. The experimental results adequately demonstrate the effectiveness and feasibility of the proposed method.
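A compact PyTorch sketch of a GRU encoder with an attention-weighted readout for MTS forecasting is shown below; the layer sizes, the dot-product attention, and the single-step head are assumptions, and the paper's skip connections and convolutional module are omitted:

```python
import torch
import torch.nn as nn

class GRUAttentionForecaster(nn.Module):
    """GRU encoder + dot-product attention over encoder states + linear head."""
    def __init__(self, n_features: int, hidden: int = 64, horizon: int = 1):
        super().__init__()
        self.encoder = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden * 2, n_features * horizon)
        self.horizon, self.n_features = horizon, n_features

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, n_features)
        states, last = self.encoder(x)                 # states: (B, T, H)
        query = last[-1].unsqueeze(1)                  # (B, 1, H)
        scores = torch.softmax(states.bmm(query.transpose(1, 2)), dim=1)  # (B, T, 1)
        context = (scores * states).sum(dim=1)         # attention-weighted summary
        out = self.head(torch.cat([context, last[-1]], dim=1))
        return out.view(-1, self.horizon, self.n_features)

# model = GRUAttentionForecaster(n_features=5)
# prediction = model(torch.randn(8, 30, 5))   # 30 past steps -> 1 future step
```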


Subject(s)
Data Mining; Neural Networks, Computer; Big Data; Forecasting; Time Factors
13.
Front Public Health ; 10: 1056226, 2022.
Article in English | MEDLINE | ID: mdl-36483248

ABSTRACT

Background: High-precision segmentation of retinal blood vessels from retinal images is a significant step for doctors to diagnose many diseases such as glaucoma and cardiovascular diseases. However, at the peripheral region of vessels, previous U-Net-based segmentation methods fail to preserve the low-contrast tiny vessels. Methods: To address this challenge, we propose a novel network model called Bi-directional ConvLSTM Residual U-Net (BCR-UNet), which takes full advantage of U-Net, DropBlock, residual convolution and Bi-directional ConvLSTM (BConvLSTM). In the proposed BCR-UNet model, we propose a novel Structured Dropout Residual Block (SDRB), used in place of the original U-Net convolutional block, to construct the network skeleton and improve the robustness of the network. Furthermore, to improve the discriminative ability of the network and preserve more of the original semantic information of tiny vessels, we adopt BConvLSTM to integrate the feature maps captured from the first residual block and the last up-convolutional layer in a nonlinear manner. Results and discussion: We conduct experiments on four public retinal blood vessel datasets, and the results show that the proposed BCR-UNet can preserve more tiny blood vessels at the low-contrast peripheral regions, even outperforming previous state-of-the-art methods.


Subject(s)
Delayed Emergence from Anesthesia; Physicians; Humans; Retinal Vessels/diagnostic imaging
14.
Med Phys ; 49(5): 3144-3158, 2022 May.
Article in English | MEDLINE | ID: mdl-35172016

ABSTRACT

PURPOSE: Accurately segmenting curvilinear structures, for example, retinal blood vessels or nerve fibers, in medical images is essential to the clinical diagnosis of many diseases. Recently, deep learning has become a popular technology for image segmentation and has obtained remarkable achievements. However, existing methods still have many problems when segmenting curvilinear structures in medical images, such as losing the details of curvilinear structures and producing many false-positive segmentation results. To mitigate these problems, we propose a novel end-to-end curvilinear structure segmentation network called Curv-Net. METHODS: Curv-Net is an effective encoder-decoder architecture constructed based on selective kernel (SK) and multi-bidirectional convolutional LSTM (multi-Bi-ConvLSTM) modules. To be specific, we first employ the SK module in the convolutional layer to adaptively extract the multi-scale features of the input image, and then we design a multi-Bi-ConvLSTM as the skip concatenation to fuse the information learned in the same stage and propagate the feature information from the deep stages to the shallow stages, which enables the features captured by Curv-Net to contain both detailed information and high-level semantic information and thus improves segmentation performance. RESULTS: The effectiveness and reliability of the proposed Curv-Net are verified on three public datasets: two color fundus datasets (DRIVE and CHASE_DB1) and one corneal nerve fiber dataset (CCM-2). We calculate the accuracy (ACC), sensitivity (SE), specificity (SP), Dice similarity coefficient (Dice), and area under the receiver operating characteristic curve (AUC) for the DRIVE and CHASE_DB1 datasets. The ACC, SE, SP, Dice, and AUC on the DRIVE dataset are 0.9629, 0.8175, 0.9858, 0.8352, and 0.9810, respectively. For the CHASE_DB1 dataset, the values are 0.9810, 0.8564, 0.9899, 0.8143, and 0.9832, respectively. To validate the corneal nerve fiber segmentation performance of the proposed Curv-Net, we test it on the CCM-2 dataset and calculate the Dice, SE, and false discovery rate (FDR) metrics. The Dice, SE, and FDR achieved by Curv-Net are 0.8114 ± 0.0062, 0.8903 ± 0.0113, and 0.2547 ± 0.0104, respectively. CONCLUSIONS: Curv-Net is evaluated on three public datasets. Extensive experimental results demonstrate that Curv-Net outperforms the other superior curvilinear structure segmentation methods.
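A minimal selective-kernel block, in which two branches with different receptive fields are mixed by channel-wise soft attention, could be sketched in PyTorch as follows; the branch configuration and reduction ratio are generic choices, not necessarily Curv-Net's settings:

```python
import torch
import torch.nn as nn

class SelectiveKernel(nn.Module):
    """Minimal selective-kernel (SK) block: two branches with different
    receptive fields whose outputs are mixed by channel-wise soft attention."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.branch3 = nn.Conv2d(channels, channels, 3, padding=1)
        self.branch5 = nn.Conv2d(channels, channels, 3, padding=2, dilation=2)
        mid = max(channels // reduction, 8)
        self.fc = nn.Sequential(nn.Linear(channels, mid), nn.ReLU(inplace=True))
        self.attn = nn.Linear(mid, channels * 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        u3, u5 = self.branch3(x), self.branch5(x)
        s = (u3 + u5).mean(dim=(2, 3))                  # (N, C) global descriptor
        a = self.attn(self.fc(s)).view(-1, 2, u3.size(1))
        a = torch.softmax(a, dim=1)                     # per-channel branch weights
        w3 = a[:, 0].unsqueeze(-1).unsqueeze(-1)
        w5 = a[:, 1].unsqueeze(-1).unsqueeze(-1)
        return u3 * w3 + u5 * w5
```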


Subject(s)
Image Processing, Computer-Assisted; Neural Networks, Computer; Fundus Oculi; Reproducibility of Results; Retinal Vessels
15.
Comput Intell Neurosci ; 2021: 4026132, 2021.
Article in English | MEDLINE | ID: mdl-34777492

ABSTRACT

Anomaly detection (AD) aims to identify the data points that are inconsistent with the overall pattern of the data. Recently, unsupervised anomaly detection methods have attracted considerable attention. Among these methods, feature representation (FR) plays an important role and can directly affect the performance of anomaly detection. Sparse representation (SR) can be regarded as one of the matrix factorization (MF) methods, which are a powerful tool for FR. However, the original SR has some limitations. On the one hand, it learns only shallow feature representations, which leads to poor anomaly detection performance. On the other hand, the local geometric structure information of the data is ignored. To address these shortcomings, a graph regularized deep sparse representation (GRDSR) approach is proposed for unsupervised anomaly detection in this work. In GRDSR, a deep representation framework is first designed by extending single-layer MF to multilayer MF for extracting hierarchical structure from the original data. Next, a graph regularization term is introduced to capture the intrinsic local geometric structure information of the original data during FR, making the deep features preserve the neighborhood relationships well. Then, an L1-norm-based sparsity constraint is added to enhance the discriminant ability of the deep features. Finally, the reconstruction error is used to identify anomalies. To demonstrate the effectiveness of the proposed approach, we conduct extensive experiments on ten datasets. Compared with the state-of-the-art methods, the proposed approach achieves the best performance.
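To illustrate the reconstruction-error principle only, the scikit-learn snippet below scores anomalies with a single-layer sparse dictionary model; the multilayer factorization and graph Laplacian regularizer of GRDSR are not implemented:

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

def anomaly_scores(X: np.ndarray, n_atoms: int = 32, sparsity: float = 1.0):
    """Reconstruction-error anomaly scoring with a single-layer sparse
    representation; deeper layers and graph regularization are omitted."""
    dl = DictionaryLearning(n_components=n_atoms, alpha=sparsity,
                            transform_algorithm='lasso_lars', max_iter=200)
    codes = dl.fit_transform(X)                 # sparse codes, shape (n, n_atoms)
    recon = codes @ dl.components_              # reconstruction in input space
    return np.linalg.norm(X - recon, axis=1)    # larger error -> more anomalous

# scores = anomaly_scores(X); points with the largest scores are flagged as anomalies
```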


Subject(s)
Unsupervised Machine Learning
16.
Comput Intell Neurosci ; 2021: 5486328, 2021.
Article in English | MEDLINE | ID: mdl-34912446

ABSTRACT

Demand forecasting for shared bicycles directly determines vehicle utilization rates and the operational benefits of such projects. Accurate prediction based on existing operating data can reduce unnecessary deployment. Since the use of shared bicycles is susceptible to time dependence and external factors, most existing works consider only some of the relevant attributes, resulting in insufficient modeling and unsatisfactory prediction performance. To address these limitations, this paper establishes a novel prediction model based on a convolutional recurrent neural network with an attention mechanism, named CNN-GRU-AM. The proposed CNN-GRU-AM model has four parts. First, a convolutional neural network (CNN) with two layers is used to extract local features from multiple data sources. Second, a gated recurrent unit (GRU) is employed to capture the time-series relationships in the output of the CNN. Third, an attention mechanism (AM) is introduced to mine the potential relationships among the series features, assigning different weights to the corresponding features according to their importance. Finally, a fully connected module with three layers is added to learn features and output the prediction results. To evaluate the performance of the proposed method, we conducted extensive experiments on two datasets: a real mobile bicycle dataset and a public shared bicycle dataset. The experimental results show that the prediction performance of the proposed model is better than that of other prediction models, indicating its potential social benefits.


Subject(s)
Bicycling; Neural Networks, Computer; Forecasting
17.
Comput Math Methods Med ; 2019: 8973287, 2019.
Article in English | MEDLINE | ID: mdl-31827591

ABSTRACT

Accurate optic disc and optic cup segmentation plays an important role in diagnosing glaucoma. However, most existing segmentation approaches suffer from the following limitations. On the one hand, imaging devices or illumination variations often lead to intensity inhomogeneity in the fundus image. On the other hand, the spatial prior knowledge of the optic disc and optic cup, e.g., that the optic cup is always contained inside the optic disc region, is ignored. Therefore, the effectiveness of segmentation approaches is greatly reduced. Different from most previous approaches, we present a novel locally statistical active contour model with structure prior (LSACM-SP) approach to jointly and robustly segment the optic disc and optic cup structures. First, some preprocessing techniques are used to automatically extract the initial contour of the object. Then, we introduce the locally statistical active contour model (LSACM) for optic disc and optic cup segmentation in the presence of intensity inhomogeneity. Finally, taking the specific morphology of the optic disc and optic cup into consideration, a novel structure prior is proposed to guide the model to generate accurate segmentation results. Experimental results demonstrate the advantage and superiority of our approach on two publicly available databases, i.e., DRISHTI-GS and RIM-ONE r2, in comparison with some well-known algorithms.


Subject(s)
Diagnosis, Computer-Assisted/methods; Glaucoma/diagnostic imaging; Retina/diagnostic imaging; Algorithms; Cluster Analysis; False Positive Reactions; Fundus Oculi; Glaucoma/pathology; Humans; Image Processing, Computer-Assisted/methods; Models, Statistical; Normal Distribution; Ophthalmology/methods; Optic Disk/diagnostic imaging; Pattern Recognition, Automated; ROC Curve; Reproducibility of Results; Software
18.
Med Biol Eng Comput ; 57(9): 2055-2067, 2019 Sep.
Article in English | MEDLINE | ID: mdl-31352661

ABSTRACT

Glaucoma is a sight-threatening disease which can lead to irreversible blindness. Currently, extracting the vertical cup-to-disc ratio (CDR) from 2D retinal fundus images is a promising route to automatic glaucoma diagnosis. In this paper, we present a novel sparse coding approach for glaucoma diagnosis called adaptive weighted locality-constrained sparse coding (AWLCSC). Different from existing reconstruction-based glaucoma diagnosis approaches, the weighted matrix in AWLCSC is constructed by adaptively fusing multiple distance measurements between the reference images and the testing image, making our approach more robust and effective for glaucoma diagnosis. In our approach, the disc image is first extracted and reconstructed according to the proposed AWLCSC technique. Then, using the obtained reconstruction coefficients and a series of reference disc images with known CDRs, the CDR of the testing disc image can be automatically estimated for glaucoma diagnosis. The performance of the proposed AWLCSC is evaluated on the publicly available DRISHTI-GS1 and RIM-ONE r2 databases. The experimental results indicate that the proposed approach outperforms the state-of-the-art approaches. Graphical abstract: the flowchart of the proposed approach for glaucoma diagnosis.
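The reconstruction-then-estimate idea can be sketched with scikit-learn as below, coding the test disc feature over reference discs and reading the CDR off as a coefficient-weighted average; plain positive L1 coding stands in for the paper's adaptive weighted locality constraint, and the feature extraction step is assumed to be done elsewhere:

```python
import numpy as np
from sklearn.linear_model import Lasso

def estimate_cdr(test_feat: np.ndarray, ref_feats: np.ndarray, ref_cdrs: np.ndarray,
                 alpha: float = 0.01) -> float:
    """Code the test disc's feature vector over reference discs with known CDRs,
    then return a coefficient-weighted average CDR (illustrative scheme only)."""
    coder = Lasso(alpha=alpha, positive=True, max_iter=10000)
    coder.fit(ref_feats.T, test_feat)          # columns of ref_feats.T = references
    w = coder.coef_
    if w.sum() == 0:
        return float(ref_cdrs.mean())          # degenerate fallback
    return float(np.dot(w, ref_cdrs) / w.sum())

# ref_feats: (n_refs, n_dims) features of reference disc images
# ref_cdrs:  (n_refs,) their known cup-to-disc ratios
# cdr = estimate_cdr(test_feat, ref_feats, ref_cdrs)
```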


Subject(s)
Algorithms; Glaucoma/diagnostic imaging; Image Interpretation, Computer-Assisted/methods; Databases, Factual; Fundus Oculi; Humans; Image Processing, Computer-Assisted/methods
19.
Comput Math Methods Med ; 2018: 1942582, 2018.
Article in English | MEDLINE | ID: mdl-30013614

ABSTRACT

The optic disc is a key anatomical structure in retinal images. The ability to detect optic discs in retinal images plays an important role in automated screening systems. Inspired by the fact that humans can find optic discs in retinal images by observing some local features, we propose a local feature spectrum analysis (LFSA) that eliminates the influence caused by the variable spatial positions of local features. In LFSA, a dictionary of local features is used to reconstruct new optic disc candidate images, and the utilization frequencies of every atom in the dictionary are considered as a type of "spectrum" that can be used for classification. We also employ the sparse dictionary selection approach to construct a compact and representative dictionary. Unlike previous approaches, LFSA does not require the segmentation of vessels, and its method of considering the varying information in the retinal images is both simple and robust, making it well-suited for automated screening systems. Experimental results on the largest publicly available dataset indicate the effectiveness of our proposed approach.


Subject(s)
Image Interpretation, Computer-Assisted; Optic Disk/diagnostic imaging; Algorithms; Color; Humans; Spectrum Analysis
20.
Comput Math Methods Med ; 2017: 9854825, 2017.
Article in English | MEDLINE | ID: mdl-28512511

ABSTRACT

Red lesions can be regarded as among the earliest lesions in diabetic retinopathy (DR), and automatic detection of red lesions plays a critical role in diabetic retinopathy diagnosis. In this paper, a novel superpixel Multichannel Multifeature (MCMF) classification approach is proposed for red lesion detection. First, a new candidate extraction method based on superpixels is proposed. Then, these candidates are characterized by multichannel features as well as a contextual feature. Next, an FDA classifier is introduced to classify the red lesions among the candidates. Finally, a postprocessing technique based on multiscale blood vessel detection is adapted to remove non-lesions that appear red. Experiments on the publicly available DiaretDB1 database are conducted to verify the effectiveness of the proposed method.
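A hedged sketch of superpixel-based candidate extraction with scikit-image: over-segment the image with SLIC and keep the darkest superpixels in the green channel, where red lesions appear dark; the segment count and selection rule here are illustrative rather than the paper's actual criteria:

```python
import numpy as np
from skimage import segmentation

def red_lesion_candidates(rgb: np.ndarray, n_segments: int = 600, top_k: int = 40):
    """Over-segment a fundus image with SLIC and keep the superpixels whose
    mean green-channel intensity is lowest, as rough red-lesion candidates."""
    labels = segmentation.slic(rgb, n_segments=n_segments, compactness=10,
                               start_label=0)
    green = rgb[..., 1].astype(np.float64)
    means = np.array([green[labels == lab].mean() for lab in range(labels.max() + 1)])
    candidate_labels = np.argsort(means)[:top_k]       # darkest superpixels
    mask = np.isin(labels, candidate_labels)
    return mask, labels
```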


Subject(s)
Diabetic Retinopathy/diagnostic imaging; Image Interpretation, Computer-Assisted; Databases, Factual; Humans; Image Interpretation, Computer-Assisted/standards; Pattern Recognition, Automated