ABSTRACT
This study investigated the possibility of using low-cost, handheld retinal imaging devices for the automatic extraction of quantifiable measures of retinal blood vessels. Initially, the available handheld devices were compared using a Zeiss model eye incorporating a USAF resolution test chart to assess their optical properties. The only suitable camera of the five evaluated was the Horus DEC 200. This device was then subjected to a detailed evaluation in which images of human eyes taken with the handheld camera were compared quantitatively with images of the same eyes taken with a Canon CR-DGi desktop retinal camera. We found that the Horus DEC 200 exhibited shortcomings in imaging human eyes compared with the Canon: more images were rejected as unevaluable or failed automatic segmentation than with the Canon, and even after exclusion of the affected images, the Horus yielded lower vessel-density measurements than the Canon. A number of issues affecting handheld cameras in general, and some features of the Horus in particular, were identified that might contribute to the observed differences in performance. Potential mitigations are discussed that might improve performance and thereby facilitate the use of handheld retinal imaging devices for quantitative retinal microvascular measurements.
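The abstract does not define the vessel-density measure on which the two cameras differed; in retinal image analysis it is commonly computed as the fraction of pixels within a region of interest that the segmentation labels as vessel. The following minimal Python sketch illustrates that common definition (the function name and toy data are ours, not taken from the study):

    import numpy as np

    def vessel_density(vessel_mask, roi_mask):
        # Fraction of region-of-interest pixels that the segmentation
        # labels as vessel; both inputs are boolean arrays of equal shape.
        roi_pixels = roi_mask.sum()
        if roi_pixels == 0:
            return 0.0
        return float((vessel_mask & roi_mask).sum()) / float(roi_pixels)

    # Toy example: a one-pixel-wide vertical "vessel" in a 4x4 image.
    vessels = np.zeros((4, 4), dtype=bool)
    vessels[:, 1] = True
    roi = np.ones((4, 4), dtype=bool)      # evaluate the whole image
    print(vessel_density(vessels, roi))    # 4 / 16 = 0.25

Under this definition, a camera whose images blur or drop fine vessels produces sparser segmentation masks and hence lower density values, which is consistent with the comparison reported above.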
ABSTRACT
Energy and storage restrictions are relevant constraints for software applications running in low-power environments. Computer vision (CV) applications exemplify this concern well, since conventional uniform image sensors typically capture large amounts of data to be further handled by the appropriate CV algorithms. Moreover, much of the acquired data is often redundant and outside the application's interest, which leads to unnecessary processing and energy expenditure. In the literature, techniques for sensing and re-sampling images in non-uniform fashion have emerged to cope with these problems. In this study, we propose Application-Oriented Retinal Image Models, which define a space-variant configuration of uniform images and account for the energy-consumption and storage requirements of CV applications. We hypothesize that our models might decrease energy consumption in CV tasks. We show how to create the models and validate their use in a face detection/recognition application, demonstrating the trade-off between storage, energy, and accuracy.
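The abstract does not spell out how the proposed retinal models re-sample a uniform image; log-polar sampling is one widely used space-variant scheme that conveys the general idea. The sketch below is illustrative only (the function name and parameters are ours), showing how a foveated grid keeps full resolution near a point of interest while storing far fewer samples overall:

    import numpy as np

    def log_polar_sample(image, center, n_rings=32, n_wedges=64):
        # Re-sample a grayscale image onto a log-polar grid. Ring radii
        # grow exponentially, so resolution is high near `center` (the
        # "fovea") and coarse in the periphery -- far fewer samples than
        # the uniform image while keeping detail where it is needed.
        h, w = image.shape
        cy, cx = center
        max_r = min(cy, cx, h - 1 - cy, w - 1 - cx)
        radii = np.exp(np.linspace(0.0, np.log(max_r), n_rings))
        angles = np.linspace(0.0, 2.0 * np.pi, n_wedges, endpoint=False)
        ys = (cy + radii[:, None] * np.sin(angles[None, :])).round().astype(int)
        xs = (cx + radii[:, None] * np.cos(angles[None, :])).round().astype(int)
        return image[ys, xs]               # shape (n_rings, n_wedges)

    # A 512x512 uniform image (262,144 samples) reduces to 32*64 = 2,048.
    img = np.random.rand(512, 512)
    compact = log_polar_sample(img, center=(256, 256))
    print(compact.shape)                   # (32, 64)

The two-orders-of-magnitude reduction in stored samples is the kind of storage and energy saving the proposed models target, at the cost of peripheral detail and hence, potentially, of accuracy.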
ABSTRACT
INTRODUCTION: This is a systematic review of the main algorithms using machine learning (ML) in retinal image processing for glaucoma diagnosis and detection. ML has proven to be a significant tool for the development of computer-aided technology, and secondary research on the topic has been widely conducted over the years for ophthalmologists. These aspects indicate the importance of ML in the context of retinal image processing. METHODS: The publications composing this review were gathered from the Scopus, PubMed, IEEE Xplore and Science Direct databases, and the papers published between 2014 and 2019 were selected. Studies that used the segmented optic disc method were excluded, and only methods that applied a classification process were considered. A systematic analysis was performed on these studies and the results were summarized. DISCUSSION: Among the ML architectures used in retinal image processing, some studies applied feature extraction and dimensionality reduction to detect and isolate important parts of the analyzed image, whereas other works used deep convolutional networks. Based on the evaluated studies, the main differences between the architectures are the number of images demanded for processing and the high computational cost required by deep learning techniques. CONCLUSIONS: All the analyzed publications indicated that it is possible to develop an automated system for glaucoma diagnosis. The severity of the disease and its high occurrence rates justify the research that has been carried out. Recent computational techniques, such as deep learning, have proven to be promising technologies in fundus imaging. Although such techniques require extensive databases and high computational costs, the studies show that data augmentation and transfer learning have been applied as alternative ways to optimize and reduce network training.
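As a concrete illustration of the transfer learning and data augmentation strategies the review highlights, the sketch below fine-tunes only the classification head of an ImageNet-pretrained network for a binary glaucoma/healthy task. It is a generic PyTorch example under our own assumptions (ResNet-18 backbone, dummy batch), not a pipeline from any of the reviewed studies:

    import torch
    import torch.nn as nn
    from torchvision import models, transforms

    # Load an ImageNet-pretrained ResNet-18 and freeze its feature
    # extractor, so only the new classification head is trained -- the
    # core idea of transfer learning with a small fundus-image dataset.
    model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    for param in model.parameters():
        param.requires_grad = False
    model.fc = nn.Linear(model.fc.in_features, 2)  # glaucoma vs. healthy

    # Typical augmentation for fundus images: flips and small rotations
    # enlarge the effective training set without new labeled examples.
    train_transform = transforms.Compose([
        transforms.RandomHorizontalFlip(),
        transforms.RandomRotation(degrees=10),
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ])

    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
    criterion = nn.CrossEntropyLoss()

    # One illustrative training step on a dummy batch.
    images = torch.randn(8, 3, 224, 224)   # stand-in for fundus images
    labels = torch.randint(0, 2, (8,))
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    print(float(loss))

Because only the final linear layer is updated, far fewer labeled fundus images and far less computation are needed than when training a deep network from scratch, which is the optimization the reviewed studies exploit.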