1.
Pak J Med Sci ; 37(6): 1595-1599, 2021.
Article in English | MEDLINE | ID: mdl-34712289

ABSTRACT

OBJECTIVE: To address the low accuracy of existing methods in extracting small blood vessels from retinal images, a retinal blood vessel segmentation method is proposed that combines a multi-scale line detector with local and global image enhancement. METHODS: The multi-scale line detector is divided into two parts: small scales and large scales. The small scales are applied to the locally enhanced image, and the large scales to the globally enhanced image. The response functions at the different scales are fused to obtain the final retinal vascular structure. RESULTS: Experiments on two public databases, STARE and DRIVE, show that the algorithm achieves average accuracies of 96.62% and 96.45% and average true positive rates of 75.52% and 83.07%, respectively. CONCLUSION: The method achieves high segmentation accuracy and yields better blood vessel segmentation results.
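The multi-scale line detector at the heart of this method can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the scale set, window size, and number of orientations are assumptions, the fusion here is a plain average, and the detector is applied to a single image rather than to separately enhanced local and global versions. For fundus images the inverted green channel is normally used, so vessels appear bright.

```python
import numpy as np

def line_response(img, length, n_angles=12, window=15):
    """Line-detector response: for each pixel, the maximum mean intensity
    along a line of `length` pixels over several orientations, minus the
    mean intensity of the surrounding window."""
    h, w = img.shape
    pad = window // 2
    padded = np.pad(img, pad, mode="reflect")
    # Mean over the local window (baseline the line average is compared to).
    win_mean = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            win_mean[y, x] = padded[y:y + window, x:x + window].mean()
    resp = np.full((h, w), -np.inf)
    offsets = np.arange(length) - length // 2
    for k in range(n_angles):
        theta = k * np.pi / n_angles
        dy = np.round(offsets * np.sin(theta)).astype(int)
        dx = np.round(offsets * np.cos(theta)).astype(int)
        for y in range(h):
            for x in range(w):
                ys = np.clip(y + pad + dy, 0, h + 2 * pad - 1)
                xs = np.clip(x + pad + dx, 0, w + 2 * pad - 1)
                line_mean = padded[ys, xs].mean()
                resp[y, x] = max(resp[y, x], line_mean - win_mean[y, x])
    return resp

def multiscale_line_detector(img, scales=(5, 9, 13)):
    """Fuse (here: average) the responses from small and large scales."""
    return np.mean([line_response(img, L) for L in scales], axis=0)
```

Elongated bright structures score high because some oriented line average far exceeds the window mean, while flat regions score near zero.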

2.
Biomed Eng Online ; 19(1): 21, 2020 Apr 15.
Article in English | MEDLINE | ID: mdl-32295576

ABSTRACT

BACKGROUND: As one of the major complications of diabetes, diabetic retinopathy (DR) is a leading cause of visual impairment and blindness when diagnosis and intervention are delayed. Microaneurysms are the earliest symptom of DR, so their accurate and reliable detection in color fundus images is of great importance for DR screening. METHODS: A microaneurysm detection method using machine learning based on directional local contrast (DLC) is proposed for the early diagnosis of DR. First, blood vessels were enhanced and segmented using an improved enhancement function based on the eigenvalues of the Hessian matrix. Next, with blood vessels excluded, microaneurysm candidate regions were obtained using shape characteristics and connected-component analysis. After the image was divided into patches, features were extracted from each candidate patch, and each patch was classified as microaneurysm or non-microaneurysm. The main contributions of our study are (1) using directional local contrast for microaneurysm detection for the first time, which improves microaneurysm classification, and (2) applying three different machine learning techniques for classification and comparing their performance. The proposed algorithm was trained and tested on the e-ophtha MA database and further tested on the independent DIARETDB1 database. Detection results on the two databases were evaluated at the lesion level and compared with existing algorithms. RESULTS: The proposed method achieved better accuracy and computation time than existing algorithms. On the e-ophtha MA and DIARETDB1 databases, the area under the receiver operating characteristic (ROC) curve (AUC) was 0.87 and 0.86, respectively, and the free-response ROC (FROC) score was 0.374 and 0.210, respectively. The computation time per image with resolutions of 2544×1969, 1400×960 and 1500×1152 was 29 s, 3 s and 2.6 s, respectively. CONCLUSIONS: The proposed method, using machine learning based on the directional local contrast of image patches, can effectively detect microaneurysms in color fundus images and provides an effective scientific basis for early clinical DR diagnosis.
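The Hessian-eigenvalue vessel enhancement step mentioned in the methods can be sketched as a standard Frangi-style vesselness filter. The paper uses an improved enhancement function whose exact form is not given in the abstract, so the parameters `beta` and `c` below are illustrative assumptions:

```python
import numpy as np

def hessian_vesselness(img, beta=0.5, c=0.08):
    """Frangi-style vesselness from the 2-D Hessian eigenvalues. Tubular
    (vessel-like) structures have one near-zero and one large-magnitude
    eigenvalue; blobs and flat regions are suppressed."""
    gy, gx = np.gradient(img)
    hxx = np.gradient(gx, axis=1)
    hxy = np.gradient(gx, axis=0)
    hyy = np.gradient(gy, axis=0)
    # Analytic eigenvalues of the symmetric 2x2 Hessian at every pixel.
    root = np.sqrt((hxx - hyy) ** 2 + 4 * hxy ** 2)
    l1 = (hxx + hyy + root) / 2
    l2 = (hxx + hyy - root) / 2
    swap = np.abs(l1) > np.abs(l2)          # order so |lam1| <= |lam2|
    lam1 = np.where(swap, l2, l1)
    lam2 = np.where(swap, l1, l2)
    rb2 = (lam1 / (np.abs(lam2) + 1e-12)) ** 2   # blob-vs-line measure
    s2 = lam1 ** 2 + lam2 ** 2                   # second-order structureness
    v = np.exp(-rb2 / (2 * beta ** 2)) * (1 - np.exp(-s2 / (2 * c ** 2)))
    v[lam2 > 0] = 0   # bright vessel on dark background implies lam2 < 0
    return v
```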


Subject(s)
Fundus Oculi , Image Processing, Computer-Assisted/methods , Machine Learning , Microaneurysm/diagnostic imaging , Molecular Imaging , Area Under Curve , Humans , ROC Curve , Time Factors
3.
J Digit Imaging ; 31(4): 553-561, 2018 08.
Article in English | MEDLINE | ID: mdl-29209841

ABSTRACT

Retinal fundus images are often corrupted by non-uniform and/or poor illumination caused by imperfections in the image acquisition process. This unwanted variation in brightness limits the pathological information that can be gained from the image. Studies have shown that poor illumination can impede human grading in about 10-15% of retinal images; for automated grading, the effect can be even higher. We therefore propose a novel method for illumination correction in the context of retinal imaging. The method splits the color image into luminosity and chroma (i.e., color) components and performs illumination correction in the luminosity channel based on a novel background estimation technique. Extensive subjective and objective experiments were conducted on publicly available DIARETDB1 and EyePACS images to evaluate the proposed method. The subjective experiment confirmed that the method does not create false colors or artifacts and outperforms the traditional method in 84 out of 89 cases. The objective experiment shows a 4% improvement in automated disease-grading accuracy when illumination correction is performed with the proposed method rather than the traditional one.
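A minimal sketch of the luminosity-channel correction idea, assuming the background is estimated with a large mean filter; the paper's actual background estimation technique is novel and not specified in the abstract. The input here is the luminosity channel already split from the chroma components, scaled to [0, 1]:

```python
import numpy as np

def box_blur(img, k):
    """k x k mean filter via a summed-area table (no SciPy needed)."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    s = np.cumsum(np.cumsum(p, axis=0), axis=1)
    s = np.pad(s, ((1, 0), (1, 0)))          # leading zero row/column
    h, w = img.shape
    return (s[k:k + h, k:k + w] - s[:h, k:k + w]
            - s[k:k + h, :w] + s[:h, :w]) / (k * k)

def correct_illumination(lum, k=31):
    """Estimate the slowly varying background of the luminosity channel
    with a large mean filter, then remove it while preserving the
    channel's overall brightness."""
    background = box_blur(lum, k)
    return lum - background + background.mean()
```

Because only the luminosity channel is altered and the chroma components are carried through untouched, the correction cannot introduce false colors, which matches the design goal described above.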


Subject(s)
Fundus Oculi , Image Enhancement/methods , Image Processing, Computer-Assisted/methods , Photography/methods , Retinal Diseases/diagnostic imaging , Artifacts , Diagnostic Imaging/methods , Female , Humans , Male , Optical Imaging/methods , Retinal Diseases/pathology , Risk Assessment , Sensitivity and Specificity
4.
Biomed Eng Online ; 16(1): 122, 2017 Oct 26.
Article in English | MEDLINE | ID: mdl-29073912

ABSTRACT

BACKGROUND: Non-proliferative diabetic retinopathy is the early stage of diabetic retinopathy. Its automatic detection is significant for clinical diagnosis, early screening, and monitoring of disease progression. METHODS: This paper introduces the design and implementation of an automatic system for screening non-proliferative diabetic retinopathy based on color fundus images. First, the fundus structures, including blood vessels, optic disc and macula, are extracted and located. In particular, a new optic disc localization method using parabolic fitting is proposed, based on the physiological structure of the optic disc and blood vessels. Then, early lesions such as microaneurysms, hemorrhages and hard exudates are detected based on their respective characteristics. An equivalent optical model simulating the human eye is designed based on the anatomical structure of the retina, and the main structures and early lesions are reconstructed in 3D space for better visualization. Finally, the severity of each image is graded according to the international criteria for diabetic retinopathy. RESULTS: The system has been tested on public databases and on images from hospitals. Experimental results demonstrate that the proposed system achieves high accuracy in detecting the main structures and early lesions, and its severity classification of non-proliferative diabetic retinopathy is also accurate. CONCLUSIONS: Our system can assist ophthalmologists in clinical diagnosis, automatic screening, and monitoring of disease progression.
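The parabolic-fitting idea for optic disc localization rests on the observation that the major temporal vessel arcades roughly trace a parabola whose vertex lies near the optic disc. A simplified illustration of that idea (not the paper's algorithm, which also exploits other physiological structure cues) is to fit a parabola to vessel centreline points and take its vertex:

```python
import numpy as np

def locate_optic_disc(vessel_points):
    """Fit x = a*y^2 + b*y + c to main vessel-arcade centreline points
    given as (y, x) pairs; the parabola's vertex approximates the
    optic-disc centre, since the arcades open away from the disc."""
    pts = np.asarray(vessel_points, dtype=float)
    y, x = pts[:, 0], pts[:, 1]
    a, b, c = np.polyfit(y, x, 2)            # least-squares parabola
    y_vertex = -b / (2 * a)                  # vertex of a quadratic
    x_vertex = np.polyval([a, b, c], y_vertex)
    return y_vertex, x_vertex
```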


Subject(s)
Diabetic Retinopathy/diagnostic imaging , Fundus Oculi , Image Processing, Computer-Assisted , Automation , Color , Humans
5.
Med Biol Eng Comput ; 60(5): 1431-1448, 2022 May.
Article in English | MEDLINE | ID: mdl-35267149

ABSTRACT

Age-related macular degeneration (AMD) is a degenerative disorder of the macular region of the eye and the leading cause of irreversible vision loss in the elderly. With the world's population aging, there is an urgent need for low-cost, hassle-free, portable diagnostic and analytical tools that enable early diagnosis. Because AMD is detected by examining fundus images, diagnosis depends heavily on medical personnel and their experience; computer-aided algorithms can reduce this dependence. The proposed work offers an effective solution to the AMD detection problem: a novel 13-layer deep convolutional neural network (DCNN) architecture that screens fundus images for direct signs of AMD. Five pairs of convolution and max-pooling layers and three fully connected layers are utilized in the proposed network. Extensive simulations on the original and augmented versions of two datasets (iChallenge-AMD and ARIA) consisting of healthy and diseased cases show classification accuracies of 89.75%, 91.69%, and 99.45% on the original and augmented versions of iChallenge-AMD and 90.00%, 93.03%, and 99.55% on ARIA, using 10-fold cross-validation. This surpasses the best-known DCNN-based algorithm by 2%.
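The 13-layer budget matches the abstract's description: 5 convolution + 5 max-pool + 3 fully connected layers. A quick way to sanity-check such an architecture is to trace the feature-map size through the five conv/pool blocks; the 224×224 input size and 'same' convolution padding below are assumptions, since the abstract gives neither:

```python
def dcnn_shape_walk(input_size=224, n_blocks=5):
    """Trace the spatial size through the conv(3x3, 'same') + 2x2 max-pool
    blocks of the 13-layer design (5 conv + 5 pool + 3 fully connected).
    'Same' convolution keeps the size; each pool halves it (floor division),
    so the list returned is the size after each block."""
    sizes = [input_size]
    for _ in range(n_blocks):
        s = sizes[-1]            # convolution with padding: size unchanged
        sizes.append(s // 2)     # 2x2 max-pool, stride 2: size halved
    return sizes
```

The final spatial size (7×7 for a 224 input) times the last convolution's channel count gives the flattened vector length fed to the first fully connected layer.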


Subject(s)
Macular Degeneration , Neural Networks, Computer , Aged , Algorithms , Fundus Oculi , Humans , Macular Degeneration/diagnostic imaging
6.
Eye Vis (Lond) ; 9(1): 13, 2022 Apr 01.
Article in English | MEDLINE | ID: mdl-35361278

ABSTRACT

BACKGROUND: Myopic maculopathy (MM) has become a major cause of visual impairment and blindness worldwide, especially in East Asian countries. Deep learning approaches such as deep convolutional neural networks (DCNN) have been successfully applied to identify common retinal diseases and show great potential for the intelligent analysis of MM. This study aimed to build a reliable approach for automated detection of MM from retinal fundus images using DCNN models. METHODS: A dual-stream DCNN (DCNN-DS) model was designed that perceives features from both the original images and corresponding images processed by a color histogram distribution optimization method, classifying each image as no MM, tessellated fundus (TF), or pathologic myopia (PM). A total of 36,515 gradable images from four hospitals were used for model development, and 14,986 gradable images from two other hospitals for external testing. We also compared the performance of the DCNN-DS model and four ophthalmologists on 3,000 randomly sampled fundus images. RESULTS: On the two external testing datasets, the DCNN-DS model achieved sensitivities of 93.3% and 91.0%, specificities of 99.6% and 98.7%, and areas under the receiver operating characteristic curve (AUC) of 0.998 and 0.994 for detecting PM, and sensitivities of 98.8% and 92.8%, specificities of 95.6% and 94.1%, and AUCs of 0.986 and 0.970 for detecting TF. In the sampled testing dataset, the sensitivities of the four ophthalmologists ranged from 88.3% to 95.8% for detecting PM and 81.1% to 89.1% for detecting TF, with specificities ranging from 95.9% to 99.2% and 77.8% to 97.3%, respectively; the DCNN-DS model achieved sensitivities of 90.8% and 97.9% and specificities of 99.1% and 94.0% for detecting PM and TF, respectively. CONCLUSIONS: The proposed DCNN-DS approach demonstrated reliable performance, with high sensitivity, specificity, and AUC, in classifying different MM levels on fundus photographs sourced from clinics. It can help identify MM automatically in large myopic populations and shows great potential for real-life applications.
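For reference, the per-class figures reported above are standard binary sensitivity and specificity computed from confusion-matrix counts; a minimal sketch (the counts in the usage example are illustrative, not the study's):

```python
def binary_metrics(tp, fn, tn, fp):
    """Sensitivity (true-positive rate) and specificity (true-negative
    rate) for one class, e.g. PM-vs-rest or TF-vs-rest."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity
```

For example, `binary_metrics(90, 10, 95, 5)` returns `(0.9, 0.95)`, i.e. 90.0% sensitivity and 95.0% specificity.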

7.
Ophthalmol Sci ; 2(2): 100116, 2022 Jun.
Article in English | MEDLINE | ID: mdl-36249700

ABSTRACT

Purpose: Multimodal imaging was used to identify and characterize the cause of hyperpigmentation seen on color fundus images (CFIs) of eyes with intermediate age-related macular degeneration (iAMD). Design: Retrospective review of a prospective study. Participants: Patients with iAMD. Methods: Color fundus images with macular hyperpigmentation were compared with same-day images obtained using fundus autofluorescence (FAF), near infrared reflectance (NIR), and swept-source (SS) OCT imaging. Two SS OCT en face slabs were generated: a retinal slab to identify hyperreflective foci within the retina and a slab from beneath the retinal pigment epithelium (RPE; the sub-RPE slab) that was used to detect regions that cause decreased light transmission into the choroid, also known as hypotransmission defects. All images were registered to allow for qualitative comparisons by 2 independent graders. Main Outcome Measures: Comparison between foci of macular hyperpigmentation seen on CFIs with the detection of these regions on FAF, NIR, and SS OCT en face images. Results: Compared with CFIs, FAF imaging seemed to be the least sensitive method for the detection of hyperpigmentation, whereas NIR and SS OCT imaging reliably detected these hyperpigmented areas. Although NIR imaging detected most of the hyperpigmentation seen in CFIs, SS OCT imaging detected all the areas of hyperpigmentation and anatomically localized these areas by using both en face and B-scan images. En face OCT slabs of the retina and sub-RPE region were registered to the CFIs, and areas of hyperpigmentation were shown to correspond to hyperreflective foci in the retina and regions of thickened RPE seen on OCT B-scans. Although both hyperpigmentation and early atrophic lesions appeared bright on NIR imaging, en face SS OCT imaging was able to distinguish these lesions because hyperpigmentary changes appeared dark and early atrophic lesions appeared bright on the sub-RPE slab. 
Conclusions: En face OCT imaging in conjunction with OCT B-scans was able to reliably identify and localize the hyperpigmentation seen on CFIs. This hyperpigmentation was not only associated with intraretinal hyperreflective foci but also corresponded to areas of thickened RPE.
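The en face slab generation described in the methods amounts to collapsing a depth range of the OCT volume into a 2-D image. A minimal sketch, assuming the volume is indexed depth-first and the slab boundaries (e.g., the RPE position for the sub-RPE slab) are already known from segmentation:

```python
import numpy as np

def en_face_slab(volume, z_start, z_end):
    """Collapse an OCT volume (axes: depth, B-scan row, A-scan column)
    into an en face image by averaging reflectivity over a depth range,
    e.g. a retinal slab or a sub-RPE slab. In a sub-RPE slab, regions of
    reduced light transmission into the choroid (hypotransmission
    defects) appear bright... no: blocked transmission appears dark,
    while atrophic lesions with increased transmission appear bright."""
    return volume[z_start:z_end].mean(axis=0)
```

This is why the sub-RPE slab separates the two lesion types that look alike on NIR: hyperpigmentation blocks light and darkens the slab, whereas early atrophy increases transmission and brightens it.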

8.
Article in Chinese | WPRIM | ID: wpr-934280

ABSTRACT

Objective: To propose an automatic method for measuring global and local tessellation density on color fundus images based on a deep convolutional neural network (DCNN). Methods: An applied study. An artificial intelligence (AI) database was constructed containing 1005 color fundus images captured from 1024 eyes of 514 myopic patients at the Northern Hospital of Qingdao Eye Hospital from May to July 2021. The images were preprocessed using an RGB color-channel recalibration method (CCR algorithm), the CLAHE algorithm in Lab color space, a Retinex algorithm with multiple iterative illumination estimation, and a multi-scale Retinex algorithm. The effects of these image enhancement methods on tessellation segmentation, trained with the Dice, edge overlap rate and clDice losses, were compared. A tessellation segmentation model for extracting the tessellated region from the full fundus image and a tissue detection model for locating the optic disc and macular fovea were built; the fundus tessellation density (FTD), macular tessellation density (MTD) and peripapillary tessellation density (PTD) were then calculated automatically. Results: When the CCR algorithm was used for preprocessing together with the combined training-loss strategy, the Dice coefficient, accuracy, sensitivity, specificity and Youden index for fundus tessellation segmentation were 0.7234, 94.25%, 74.03%, 96.00% and 70.03%, respectively. Compared with manual annotations, the mean absolute errors of the automatically measured FTD, MTD and PTD were 0.0143, 0.0207 and 0.0267, and the root mean square errors were 0.0178, 0.0323 and 0.0365, respectively. Conclusion: The DCNN-based segmentation and detection method can automatically measure tessellation density in global and local regions of the fundus of myopic patients, and can more accurately assist clinical monitoring and evaluation of the impact of fundus tessellation changes on the development of myopia.
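The Dice measure and the tessellation-density figures above map directly onto binary-mask arithmetic. A minimal sketch, where the region mask selects the whole fundus (FTD), the macular area (MTD) or the peripapillary ring (PTD):

```python
import numpy as np

def dice(pred, target):
    """Dice overlap between a predicted and a reference binary mask."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return 2 * inter / (pred.sum() + target.sum())

def tessellation_density(tess_mask, region_mask):
    """Fraction of a region covered by segmented tessellation: FTD, MTD
    or PTD depending on the region mask supplied."""
    region = region_mask.astype(bool)
    covered = np.logical_and(tess_mask.astype(bool), region).sum()
    return covered / region.sum()
```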

9.
J Med Imaging (Bellingham) ; 4(2): 024503, 2017 Apr.
Article in English | MEDLINE | ID: mdl-28560245

ABSTRACT

Computer-assisted diagnostic (CAD) tools are of interest as they enable efficient decision-making in clinics and the screening of diseases. The traditional approach to CAD algorithm design focuses on the automated detection of abnormalities independent of the end-user, who can be an image reader or an expert. We propose a reader-centric system design wherein a reader's attention is drawn to abnormal regions in a least-obtrusive yet effective manner, using saliency-based emphasis of abnormalities and without altering the appearance of the background tissues. We present an assistive lesion-emphasis system (ALES) based on the above idea, for fundus image-based diabetic retinopathy diagnosis. Lesion-saliency is learnt using a convolutional neural network (CNN), inspired by the saliency model of Itti and Koch. The CNN is used to fine-tune standard low-level filters and learn high-level filters for deriving a lesion-saliency map, which is then used to perform lesion-emphasis via a spatially variant version of gamma correction. The proposed system has been evaluated on public datasets and benchmarked against other saliency models. It was found to outperform other saliency models by 6% to 30% and boost the contrast-to-noise ratio of lesions by more than 30%. Results of a perceptual study also underscore the effectiveness and, hence, the potential of ALES as an assistive tool for readers.
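The lesion-emphasis step, spatially variant gamma correction driven by a saliency map, can be sketched as follows. In ALES the saliency map is produced by the learnt CNN; here it is taken as given, and the mapping from saliency to gamma and the `strength` parameter are illustrative assumptions:

```python
import numpy as np

def lesion_emphasis(img, saliency, strength=0.6):
    """Spatially variant gamma correction: pixels with high lesion
    saliency get a gamma below 1 (brightened, hence emphasised), while
    the background, with saliency near 0, keeps gamma ~1 and is left
    almost unchanged. `img` and `saliency` are float arrays in [0, 1]."""
    gamma = 1.0 - strength * np.clip(saliency, 0.0, 1.0)  # in [1-strength, 1]
    return np.power(np.clip(img, 0.0, 1.0), gamma)
```

Because gamma equals 1 wherever saliency is zero, the background tissue appearance is preserved exactly, matching the "least-obtrusive" design goal stated above.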
