ABSTRACT
PURPOSE: Glaucoma is the leading cause of irreversible blindness worldwide. An estimated 60 million people around the world have the disease, and only a fraction of them know it. Timely, early diagnosis is vital to delay or prevent blindness. Deep learning (DL) could give ophthalmologists a tool for a more informed and objective diagnosis; however, few studies apply DL to glaucoma detection in the Latino population. Our contribution is to use transfer learning to retrain the MobileNet and Inception V3 models with retinal nerve fiber layer (RNFL) thickness map images of Mexican patients, obtained with optical coherence tomography (OCT) at the Instituto de la Visión, a clinic in northern Mexico. METHODS: The IBM Foundational Methodology for Data Science guided this study. The MobileNet and Inception V3 topologies were chosen as the analytical approaches for classifying OCT images into two classes, glaucomatous and non-glaucomatous. The OCT files were collected from a Zeiss OCT machine at the Instituto de la Visión and labeled by an expert into the two classes under study, forming a dataset of 333 files in total. Because this work focuses on RNFL thickness map images, each OCT file was cropped to retain only the RNFL thickness map of the corresponding eye; this was done for both classes. Images damaged by black spots of missing data were excluded. After this preparation, 50 images per class were used for training, and 15 further images per class, disjoint from the training set, were used for running predictions. In total, 260 images were used in the experiments, 130 per eye. Four models were generated: two trained with MobileNet, one for the left eye and one for the right eye, and another two trained with Inception V3.
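The retraining setup described above can be sketched with TensorFlow/Keras. This is a minimal illustration, not the authors' exact configuration: the optimizer, head size, and freezing strategy below are assumptions.

```python
import tensorflow as tf

def build_transfer_model(num_classes=2, img_size=224, weights="imagenet"):
    # Reuse MobileNet's pretrained convolutional features and drop its
    # original ImageNet classification head (the transfer-learning step).
    base = tf.keras.applications.MobileNet(
        input_shape=(img_size, img_size, 3),
        include_top=False,
        weights=weights,
        pooling="avg",
    )
    base.trainable = False  # freeze pretrained layers; train only the new head
    model = tf.keras.Sequential([
        base,
        # New two-way head: glaucomatous vs. non-glaucomatous.
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

Swapping in `tf.keras.applications.InceptionV3` (with its native 299×299 input size) would give the second topology under the same scheme.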
TensorFlow was used to run transfer learning. RESULTS: The MobileNet model for the left eye achieved 86% accuracy and 87% precision, recall, and F1 score. The MobileNet model for the right eye and the Inception V3 models for the left and right eyes each achieved 90% accuracy, precision, recall, and F1 score. CONCLUSION: On average, the evaluation results for right-eye images were the same for both models. For left-eye images, the Inception V3 model showed slightly better average results than the MobileNet model.
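The reported metrics follow the standard definitions over a binary confusion matrix, with the glaucomatous class taken as positive. A minimal sketch; the counts below are hypothetical, chosen only to illustrate a 90%-accuracy run over 30 prediction images (15 per class):

```python
def binary_metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall, and F1 from a 2x2 confusion matrix,
    with 'glaucomatous' taken as the positive class."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Hypothetical counts for a 30-image prediction run:
# 13 true positives, 1 false positive, 2 false negatives, 14 true negatives.
acc, prec, rec, f1 = binary_metrics(tp=13, fp=1, fn=2, tn=14)
print(f"accuracy={acc:.2f} precision={prec:.2f} recall={rec:.2f} f1={f1:.2f}")
# → accuracy=0.90 precision=0.93 recall=0.87 f1=0.90
```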