Fast and Accurate Ophthalmic Medication Bottle Identification Using Deep Learning on a Smartphone Device.
Tran, Tammy T; Richardson, Alexander J W; Chen, Victoria M; Lin, Ken Y.
Affiliation
  • Tran TT; Gavin Herbert Eye Institute, Department of Ophthalmology, UC Irvine School of Medicine, Irvine, California.
  • Richardson AJW; Gavin Herbert Eye Institute, Department of Ophthalmology, UC Irvine School of Medicine, Irvine, California.
  • Chen VM; Department of Computer Science, University of California, Irvine, California.
  • Lin KY; Gavin Herbert Eye Institute, Department of Ophthalmology, UC Irvine School of Medicine, Irvine, California; Department of Biomedical Engineering, University of California, Irvine, California. Electronic address: linky@hs.uci.edu.
Ophthalmol Glaucoma ; 5(2): 188-194, 2022.
Article in En | MEDLINE | ID: mdl-34389508
PURPOSE: To assess the accuracy and efficacy of deep learning models, specifically convolutional neural networks (CNNs), in identifying glaucoma medication bottles.

DESIGN: Algorithm development for predicting ophthalmic medication bottles using a large mobile image-based dataset.

PARTICIPANTS: A total of 3750 mobile images of 5 ophthalmic medication bottles were included: brimonidine tartrate, dorzolamide-timolol, latanoprost, prednisolone acetate, and moxifloxacin.

METHODS: Seven CNN models were initially pretrained on a large-scale image database and subsequently retrained to classify 5 commonly prescribed topical ophthalmic medications using a training dataset of 2250 mobile-phone-captured images. The retrained CNN models' accuracies were compared using k-fold cross-validation (k = 10). The top 2 performing CNN models were then embedded into separate iOS apps and evaluated using 1500 mobile images not included in the training dataset.

MAIN OUTCOME MEASURES: Prediction accuracy and image processing time.

RESULTS: Of the 7 CNN architectures, MobileNet V2 yielded the highest k-fold cross-validation accuracy, 0.974 (95% confidence interval [CI], 0.966-0.980), and the shortest average image processing time, 3.45 (95% CI, 3.13-3.77) sec/image. ResNet V2 had the second-highest accuracy, 0.961 (95% CI, 0.952-0.969). When the 2 app-embedded CNNs were compared, MobileNet V2 achieved a significantly higher image prediction accuracy, 0.86 (95% CI, 0.84-0.88), than ResNet V2, 0.68 (95% CI, 0.66-0.71) (Table 1). Sensitivities and specificities varied between medications (Table 1). There was no significant difference in average image processing time: 0.32 (95% CI, 0.28-0.36) sec/image for MobileNet V2 and 0.31 (95% CI, 0.29-0.33) sec/image for ResNet V2. Information on beta-testing of the iOS app can be found at https://lin.hs.uci.edu/research/.

CONCLUSIONS: We retrained MobileNet V2 to accurately identify ophthalmic medication bottles and demonstrated that this neural network can operate in a smartphone environment. This work serves as a proof of concept for a CNN-based smartphone application to empower patients by decreasing the risk of medication error.
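Editor's note: the retraining and cross-validation workflow summarized in METHODS can be illustrated with a minimal sketch. The code below is not the authors' implementation; it assumes a TensorFlow/Keras environment, an ImageNet-pretrained MobileNet V2 backbone, a hypothetical 224 x 224 input size, and in-memory image/label arrays already preprocessed for MobileNet V2, and it mirrors the 5-class, k = 10 cross-validation design described in the abstract.

    # Illustrative sketch only (not the authors' code): fine-tune an
    # ImageNet-pretrained MobileNetV2 to classify 5 ophthalmic medication
    # bottle classes and estimate accuracy with 10-fold cross-validation.
    import numpy as np
    import tensorflow as tf
    from sklearn.model_selection import StratifiedKFold

    NUM_CLASSES = 5        # brimonidine tartrate, dorzolamide-timolol,
                           # latanoprost, prednisolone acetate, moxifloxacin
    IMG_SIZE = (224, 224)  # assumed input resolution

    def build_model():
        """MobileNetV2 backbone pretrained on ImageNet with a new 5-way softmax head."""
        base = tf.keras.applications.MobileNetV2(
            input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
        base.trainable = False  # freeze backbone; retrain only the new head
        model = tf.keras.Sequential([
            base,
            tf.keras.layers.GlobalAveragePooling2D(),
            tf.keras.layers.Dropout(0.2),
            tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
        ])
        model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        return model

    def cross_validate(images, labels, k=10):
        """k-fold cross-validation accuracy (k = 10, as in the abstract).

        images: float array preprocessed with
                tf.keras.applications.mobilenet_v2.preprocess_input
        labels: integer class labels in [0, NUM_CLASSES)
        """
        skf = StratifiedKFold(n_splits=k, shuffle=True, random_state=0)
        fold_accuracies = []
        for train_idx, val_idx in skf.split(images, labels):
            model = build_model()
            model.fit(images[train_idx], labels[train_idx],
                      epochs=10, batch_size=32, verbose=0)
            _, acc = model.evaluate(images[val_idx], labels[val_idx], verbose=0)
            fold_accuracies.append(acc)
        return float(np.mean(fold_accuracies)), fold_accuracies

For on-device deployment as described for the iOS apps, a Keras model of this kind could be converted with a tool such as coremltools before embedding; the abstract does not specify the authors' conversion or deployment pipeline.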
Subjects
Keywords

Full text: 1 Database: MEDLINE Main subject: Deep Learning Study type: Diagnostic_studies / Prognostic_studies Limits: Humans Language: En Journal: Ophthalmol Glaucoma Publication year: 2022 Document type: Article