Results 1 - 12 of 12
1.
Graefes Arch Clin Exp Ophthalmol ; 260(4): 1329-1335, 2022 Apr.
Article in English | MEDLINE | ID: mdl-34734349

ABSTRACT

PURPOSE: To assess the performance of artificial intelligence in the automated classification of images, taken with a tablet device, of patients with blepharoptosis and subjects with normal eyelids. METHODS: This was a prospective, observational study. A total of 1276 eyelid images (624 images from 347 blepharoptosis cases and 652 images from 367 normal controls) from 606 participants were analyzed. To obtain a sufficient number of images for analysis, 1 to 4 eyelid images were obtained from each participant. We developed a model by fully retraining the pre-trained MobileNetV2 convolutional neural network and then verified whether automatic diagnosis of blepharoptosis was possible from the images. In addition, we visualized how the model captured the features of the test data with Score-CAM. k-fold cross-validation (k = 5) was adopted to split the data into training and validation sets. Sensitivity, specificity, and the area under the receiver operating characteristic curve (AUC) for detecting blepharoptosis were examined. RESULTS: The model had a sensitivity of 83.0% (95% confidence interval [CI], 79.8-85.9) and a specificity of 82.5% (95% CI, 79.4-85.4). The accuracy on the validation data was 82.8%, and the AUC was 0.900 (95% CI, 0.882-0.917). CONCLUSION: Artificial intelligence was able to classify images of blepharoptosis and normal eyelids taken with a tablet device with high accuracy. Thus, the diagnosis of blepharoptosis with a tablet device is possible at a high level of accuracy. TRIAL REGISTRATION: Date of registration: 2021-06-25. Trial registration number: UMIN000044660. Registration site: https://upload.umin.ac.jp/cgi-open-bin/ctr/ctr_view.cgi?recptno=R000051004.
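The five-fold validation scheme and metrics described above can be sketched on synthetic data. This is an illustrative stand-in (a logistic regression on random feature vectors), not the authors' MobileNetV2 pipeline; all variable names and the toy classifier are assumptions:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, confusion_matrix

rng = np.random.default_rng(0)
# Synthetic stand-in for image-derived feature vectors: 200 samples, 10 features.
X = rng.normal(size=(200, 10))
y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(int)

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
sens, spec, aucs = [], [], []
for train_idx, val_idx in skf.split(X, y):
    clf = LogisticRegression().fit(X[train_idx], y[train_idx])
    prob = clf.predict_proba(X[val_idx])[:, 1]
    pred = (prob >= 0.5).astype(int)
    tn, fp, fn, tp = confusion_matrix(y[val_idx], pred).ravel()
    sens.append(tp / (tp + fn))      # per-fold sensitivity
    spec.append(tn / (tn + fp))      # per-fold specificity
    aucs.append(roc_auc_score(y[val_idx], prob))

print(f"sensitivity={np.mean(sens):.3f} specificity={np.mean(spec):.3f} AUC={np.mean(aucs):.3f}")
```

In the paper, each fold would instead hold out whole participants (so that a given patient's 1-4 images never appear in both splits), a detail omitted here for brevity.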


Subjects
Artificial Intelligence; Blepharoptosis; Blepharoptosis/diagnosis; Humans; Machine Learning; Neural Networks, Computer; Prospective Studies
2.
Int Ophthalmol ; 39(6): 1269-1275, 2019 Jun.
Article in English | MEDLINE | ID: mdl-29744763

ABSTRACT

PURPOSE: To predict exudative age-related macular degeneration (AMD), we combined a deep convolutional neural network (DCNN), a machine-learning algorithm, with Optos, an ultra-wide-field fundus imaging system. METHODS: First, to evaluate the diagnostic accuracy of the DCNN, 364 photographic images (AMD: 137) were augmented, and the area under the curve (AUC), sensitivity, and specificity were examined. Furthermore, to compare the diagnostic abilities of the DCNN and six ophthalmologists, we prepared 84 images comprising 50% normal and 50% wet-AMD data, and calculated the correct answer rate, specificity, sensitivity, and response times. RESULTS: The DCNN exhibited 100% sensitivity and 97.31% specificity for wet-AMD images, with an average AUC of 99.76%. Moreover, in the comparison with the six ophthalmologists, the average accuracy of the DCNN was 100%. In contrast, the accuracy of the ophthalmologists, who judged only from Optos images without a fundus examination, was 81.9%. CONCLUSION: A combination of the DCNN with Optos images is no better than a medical examination; however, it can identify exudative AMD with a high level of accuracy. Our system is considered useful for screening and telemedicine.
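The image augmentation step mentioned above can be sketched with simple geometric transforms. This is a generic illustration on a toy array, not the authors' actual augmentation recipe:

```python
import numpy as np

def augment(img):
    """Yield simple geometric variants of a 2-D image array:
    the original, horizontal/vertical flips, and 90/180/270-degree rotations."""
    yield img
    yield np.fliplr(img)
    yield np.flipud(img)
    for k in (1, 2, 3):
        yield np.rot90(img, k)

img = np.arange(12).reshape(3, 4)
variants = list(augment(img))
print(len(variants))  # 6 variants per source image
```

Fundus augmentation pipelines often also add brightness/contrast jitter and small random crops; only the flip/rotate core is shown here.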


Subjects
Deep Learning; Diagnosis, Computer-Assisted/methods; Ophthalmoscopy/methods; Wet Macular Degeneration/diagnosis; Aged; Aged, 80 and over; Algorithms; Female; Humans; Male; Middle Aged; Neural Networks, Computer; Sensitivity and Specificity
3.
J Clin Med ; 11(18)2022 Sep 14.
Article in English | MEDLINE | ID: mdl-36143048

ABSTRACT

An artificial intelligence-based system was implemented for preoperative safety management in cataract surgery, covering facial recognition, laterality (right/left eye) confirmation, and intraocular lens (IOL) parameter verification. A deep-learning model was constructed with a face-identification development kit for facial recognition, the You Only Look Once version 3 (YOLOv3) algorithm for laterality confirmation, and the Visual Geometry Group-16 (VGG-16) network for IOL parameter verification. In 171 patients undergoing phacoemulsification and IOL implantation, a mobile device camera (iPad mini, Apple Inc.) was used to capture patients' faces, the location of the surgical drape aperture, and the IOL parameter descriptions on the packages, which were then checked against the information stored in the referral database. The authentication rates on the first attempt and after repeated attempts were 92.0% and 96.3% for facial recognition, 82.5% and 98.2% for laterality confirmation, and 67.4% and 88.9% for IOL parameter verification, respectively. After authentication, both the false rejection rate and the false acceptance rate were 0% for all three checks. An artificial intelligence-based system for preoperative safety management was thus implemented in real cataract surgery with a passable authentication rate and very high accuracy.
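The final verification step — comparing recognized values against the referral database — amounts to a field-by-field record check. The sketch below is purely hypothetical: the field names, values, and helper are invented for illustration and do not come from the paper:

```python
# Hypothetical scheduled-surgery record; field names are assumptions.
scheduled = {"patient_id": "P-0042", "eye": "right",
             "iol_power_d": 21.5, "iol_model": "XY-60"}

def verify_iol(read_params, scheduled):
    """Compare parameters read from the IOL package (e.g., by a recognition
    model) against the scheduled values; return (all_match, mismatched_keys)."""
    mismatches = [k for k, v in scheduled.items() if read_params.get(k) != v]
    return (len(mismatches) == 0, mismatches)

ok, bad = verify_iol({"patient_id": "P-0042", "eye": "right",
                      "iol_power_d": 21.5, "iol_model": "XY-60"}, scheduled)
```

A false acceptance here would mean `ok` is True despite a real mismatch, which is why the abstract reports that rate separately from the first-attempt authentication rate.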

4.
Sci Rep ; 11(1): 18559, 2021 09 17.
Article in English | MEDLINE | ID: mdl-34535722

ABSTRACT

The efficacy of deep learning in predicting successful big-bubble (SBB) formation during deep anterior lamellar keratoplasty (DALK) was evaluated. Medical records of patients who underwent DALK at the University of Cologne, Germany, between March 2013 and July 2019 were retrospectively analyzed. Patients were divided into two groups: (1) SBB or (2) failed big bubble (FBB). Preoperative anterior segment optical coherence tomography images and corneal biometric values (corneal thickness, corneal curvature, and densitometry) were evaluated. A deep neural network model, Visual Geometry Group-16 (VGG-16), was used to classify the validation data, evaluate the model, create heat-map images, and calculate the area under the curve (AUC). This pilot study included 46 patients overall (11 women, 35 men). SBBs were more common in keratoconus eyes (KC eyes) than in corneal opacifications of other etiologies (non-KC eyes) (p = 0.006). The AUC was 0.746 (95% confidence interval [CI] 0.603-0.889). The determination success rate was 78.3% (18/23 eyes; 95% CI 56.3-92.5%) for SBB and 69.6% (16/23 eyes; 95% CI 47.1-86.8%) for FBB. This automated system demonstrates the potential of SBB prediction in DALK. Although KC eyes had a higher SBB rate, no other specific findings emerged from the corneal biometric data.
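The per-group success rates above (18/23 eyes) are consistent with an exact Clopper-Pearson binomial confidence interval, which can be reproduced as follows; the helper name is an assumption:

```python
from scipy.stats import beta

def clopper_pearson(successes, n, alpha=0.05):
    """Exact (Clopper-Pearson) two-sided binomial confidence interval."""
    lo = beta.ppf(alpha / 2, successes, n - successes + 1) if successes > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, successes + 1, n - successes) if successes < n else 1.0
    return lo, hi

lo, hi = clopper_pearson(18, 23)
print(f"{18/23:.1%} (95% CI {lo:.1%}-{hi:.1%})")  # prints the rate with its exact 95% CI
```

With small denominators like 23 eyes, the exact interval is noticeably wider than a normal-approximation interval, which matches the wide CIs reported in the abstract.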


Subjects
Cornea/surgery; Corneal Transplantation; Deep Learning; Adult; Aged; Biometry; Corneal Transplantation/methods; Female; Humans; Keratoconus/surgery; Male; Middle Aged; Pilot Projects; Retrospective Studies
5.
Sci Rep ; 10(1): 19369, 2020 11 09.
Article in English | MEDLINE | ID: mdl-33168888

ABSTRACT

This study examined whether age and brachial-ankle pulse-wave velocity (baPWV) can be predicted from ultra-wide-field pseudo-color (UWPC) images using deep learning (DL). We examined 170 UWPC images of both eyes of 85 participants (40 men and 45 women; mean age, 57.5 ± 20.9 years). Three types of images (total, central, and peripheral) were analyzed by k-fold cross-validation (k = 5) using Visual Geometry Group-16. After bias was eliminated using a generalized linear mixed model, the standardized regression coefficients (SRCs) between the actual values of age and baPWV and the values predicted by the neural network from the UWPC images were calculated, and the prediction accuracy of the DL model for age and baPWV was examined. The SRC between actual and predicted age was 0.833 for total images, 0.818 for central images, and 0.649 for peripheral images (all P < 0.001), and that between actual and predicted baPWV was 0.390 for total images, 0.419 for central images, and 0.312 for peripheral images (all P < 0.001). These results show the potential of DL to predict age and vascular aging and could be useful for disease prevention and early treatment.
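For a single predictor, the standardized regression coefficient reported above is the slope obtained after z-scoring both variables, which coincides with the Pearson correlation. A minimal sketch on simulated age data (the noise level and sample are assumptions, not the study's data):

```python
import numpy as np

def standardized_regression_coef(actual, predicted):
    """Slope of the least-squares fit after z-scoring both variables;
    for simple regression this equals the Pearson correlation."""
    a = (actual - actual.mean()) / actual.std()
    p = (predicted - predicted.mean()) / predicted.std()
    return float(np.polyfit(p, a, 1)[0])

rng = np.random.default_rng(1)
age = rng.uniform(20, 90, 100)
pred = age + rng.normal(0, 8, 100)  # stand-in for the network's age predictions
src = standardized_regression_coef(age, pred)
```

The study's mixed-model bias correction (both eyes of each participant are correlated) is a separate step not reproduced here.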


Subjects
Ankle Brachial Index; Color Perception; Deep Learning; Hypertension/physiopathology; Pulse Wave Analysis; Adult; Aged; Female; Humans; Male; Middle Aged
6.
PLoS One ; 15(4): e0227240, 2020.
Article in English | MEDLINE | ID: mdl-32298265

ABSTRACT

This study examined and compared the performance of deep learning (DL) in identifying swept-source optical coherence tomography (OCT) images without myopic macular lesions [i.e., no high myopia (nHM) vs. high myopia (HM)] and OCT images with myopic macular lesions [e.g., myopic choroidal neovascularization (mCNV) and retinoschisis (RS)]. A total of 910 SS-OCT images were included and analyzed by k-fold cross-validation (k = 5) using the widely used Visual Geometry Group-16 model: nHM, 146 images; HM, 531 images; mCNV, 122 images; and RS, 111 images. The binary classification of OCT images with or without myopic macular lesions, the binary classification of HM images versus images with myopic macular lesions (i.e., mCNV and RS images), and the ternary classification of HM, mCNV, and RS images were examined. Sensitivity, specificity, and the area under the curve (AUC) for the binary classifications, as well as the correct answer rate for the ternary classification, were calculated. The classification results for OCT images with or without myopic macular lesions were as follows: AUC, 0.970; sensitivity, 90.6%; specificity, 94.2%. The classification results for HM images versus images with myopic macular lesions were as follows: AUC, 1.000; sensitivity, 100.0%; specificity, 100.0%. The correct answer rates in the ternary classification were as follows: HM images, 96.5%; mCNV images, 77.9%; RS images, 67.6%; mean, 88.9%. Using noninvasive, easy-to-obtain swept-source OCT images, the DL model was able to classify OCT images without myopic macular lesions and OCT images with myopic macular lesions such as mCNV and RS with high accuracy. The results suggest the possibility of highly accurate screening of ocular diseases using artificial intelligence, which may improve the prevention of blindness and reduce workloads for ophthalmologists.
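The per-class "correct answer rate" used in the ternary classification above is the per-class recall from the confusion matrix. A small sketch on invented labels (the tiny label lists are illustrative only):

```python
import numpy as np

def per_class_correct_rate(y_true, y_pred, classes):
    """Correct-answer rate per class (per-class recall) and the overall rate."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    rates = {c: float((y_pred[y_true == c] == c).mean()) for c in classes}
    overall = float((y_true == y_pred).mean())
    return rates, overall

y_true = ["HM"] * 4 + ["mCNV"] * 3 + ["RS"] * 3
y_pred = ["HM", "HM", "HM", "mCNV", "mCNV", "mCNV", "RS", "RS", "HM", "RS"]
rates, overall = per_class_correct_rate(y_true, y_pred, ["HM", "mCNV", "RS"])
```

Note that with unbalanced classes (531 HM images vs. 111 RS images in the study), the overall rate is dominated by the largest class, which is why the abstract reports both per-class rates and the mean.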


Subjects
Choroidal Neovascularization/diagnosis; Deep Learning; Image Interpretation, Computer-Assisted/methods; Myopia/diagnosis; Retinoschisis/diagnosis; Adult; Aged; Blindness/prevention & control; Choroid/diagnostic imaging; Choroidal Neovascularization/complications; Datasets as Topic; Diagnosis, Differential; Female; Humans; Macula Lutea/diagnostic imaging; Male; Mass Screening/methods; Middle Aged; Myopia/etiology; ROC Curve; Retinoschisis/complications; Severity of Illness Index; Tomography, Optical Coherence
7.
PLoS One ; 14(11): e0223965, 2019.
Article in English | MEDLINE | ID: mdl-31697697

ABSTRACT

We aimed to assess the ability of deep learning (DL) and a support vector machine (SVM) to detect a nonperfusion area (NPA) caused by retinal vein occlusion (RVO) in optical coherence tomography angiography (OCTA) images. The study included 322 OCTA images (normal: 148; NPA owing to RVO: 174 [128 branch RVO images and 46 central RVO images]). The DL model was trained on OCTA images using deep convolutional neural network (DNN) algorithms. The SVM was implemented with the scikit-learn library using a radial basis function kernel. The area under the curve (AUC), sensitivity, and specificity for detecting an NPA were examined, and the diagnostic ability (sensitivity, specificity, and average required time) was compared between the DNN, the SVM, and seven ophthalmologists. Heat maps were generated. For the DNN, the mean AUC, sensitivity, specificity, and average required time for distinguishing RVO OCTA images with an NPA from normal OCTA images were 0.986, 93.7%, 97.3%, and 176.9 s, respectively. For the SVM, the mean AUC, sensitivity, and specificity were 0.880, 79.3%, and 81.1%, respectively. For the seven ophthalmologists, the mean AUC, sensitivity, specificity, and average required time were 0.962, 90.8%, 89.2%, and 700.6 s, respectively. In the heat maps, the DNN focused on the foveal avascular zone and the NPA. The performance of the DNN was significantly better than that of the SVM in all parameters (p < 0.01 for all) and better than that of the ophthalmologists in AUC and specificity (p < 0.01 for all). The combination of DL and OCTA images detected NPAs with high accuracy, and it might be useful in clinical practice and retinal screening.
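The SVM baseline named above — scikit-learn with a radial basis function kernel — can be sketched on synthetic data. This is a generic stand-in (random feature vectors with a radial decision boundary), not the study's OCTA features:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
# Synthetic stand-in for image-derived feature vectors; the true boundary is
# radial in the first two features, which a linear model cannot capture.
X = rng.normal(size=(300, 5))
y = (np.hypot(X[:, 0], X[:, 1]) > 1.2).astype(int)

# RBF-kernel SVM with probability estimates, as needed for an ROC/AUC analysis.
clf = SVC(kernel="rbf", probability=True, random_state=0).fit(X[:200], y[:200])
auc = roc_auc_score(y[200:], clf.predict_proba(X[200:])[:, 1])
```

`probability=True` enables the calibrated probability outputs that an AUC comparison against a DNN's softmax scores requires; with hard `predict` labels only sensitivity/specificity could be compared.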


Subjects
Retinal Vein Occlusion/physiopathology; Retinal Vessels/physiopathology; Aged; Deep Learning; Female; Fluorescein Angiography/methods; Fovea Centralis/physiopathology; Humans; Male; Nerve Net/physiopathology; Perfusion/methods; Sensitivity and Specificity; Tomography, Optical Coherence/methods; Visual Acuity/physiology
8.
Int J Ophthalmol ; 12(1): 94-99, 2019.
Article in English | MEDLINE | ID: mdl-30662847

ABSTRACT

AIM: To investigate and compare the efficacy of two machine-learning technologies, deep learning (DL) and a support vector machine (SVM), for the detection of branch retinal vein occlusion (BRVO) using ultrawide-field fundus images. METHODS: This study included 237 images from 236 patients with BRVO (mean ± standard deviation age, 66.3 ± 10.6 years) and 229 images from 176 non-BRVO healthy subjects (mean age, 64.9 ± 9.4 years). The DL model was constructed by training a deep convolutional neural network on ultrawide-field fundus images. The sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and area under the curve (AUC) were calculated to compare the diagnostic abilities of the DL and SVM models. RESULTS: For the DL model, the sensitivity, specificity, PPV, NPV, and AUC for diagnosing BRVO were 94.0% (95%CI: 93.8%-98.8%), 97.0% (95%CI: 89.7%-96.4%), 96.5% (95%CI: 94.3%-98.7%), 93.2% (95%CI: 90.5%-96.0%), and 0.976 (95%CI: 0.960-0.993), respectively. In contrast, for the SVM model, these values were 80.5% (95%CI: 77.8%-87.9%), 84.3% (95%CI: 75.8%-86.1%), 83.5% (95%CI: 78.4%-88.6%), 75.2% (95%CI: 72.1%-78.3%), and 0.857 (95%CI: 0.811-0.903), respectively. The DL model outperformed the SVM model in all the aforementioned parameters (P<0.001). CONCLUSION: These results indicate that the combination of the DL model and ultrawide-field fundus ophthalmoscopy may distinguish between healthy and BRVO eyes with a high level of accuracy. The proposed combination may be used for automatically diagnosing BRVO in patients residing in remote areas lacking access to an ophthalmic medical center.
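The four diagnostic indices compared above all derive from a single 2x2 confusion matrix. A minimal helper (the counts below are invented for illustration, not the study's data):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, PPV, and NPV from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),  # recall among diseased eyes
        "specificity": tn / (tn + fp),  # recall among healthy eyes
        "ppv": tp / (tp + fp),          # precision of a positive call
        "npv": tn / (tn + fn),          # precision of a negative call
    }

m = diagnostic_metrics(tp=90, fp=10, tn=95, fn=5)
```

Unlike sensitivity and specificity, PPV and NPV depend on the class balance of the test set, so they would shift in a screening population with far fewer BRVO eyes than this roughly 1:1 dataset.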

9.
PeerJ ; 7: e6900, 2019.
Article in English | MEDLINE | ID: mdl-31119087

ABSTRACT

We evaluated the discrimination ability of a deep convolutional neural network for ultrawide-field pseudocolor imaging and ultrawide-field autofluorescence imaging of retinitis pigmentosa. In total, 373 ultrawide-field pseudocolor and ultrawide-field autofluorescence images (150 retinitis pigmentosa; 223 normal) obtained from patients who visited the Department of Ophthalmology, Tsukazaki Hospital were used. A convolutional neural network was trained on these data and evaluated with K-fold cross-validation (K = 5). The mean area under the curve of the ultrawide-field pseudocolor group was 0.998 (95% confidence interval (CI) [0.9953-1.0]) and that of the ultrawide-field autofluorescence group was 1.0 (95% CI [0.9994-1.0]). The sensitivity and specificity of the ultrawide-field pseudocolor group were 99.3% (95% CI [96.3%-100.0%]) and 99.1% (95% CI [96.1%-99.7%]), and those of the ultrawide-field autofluorescence group were 100% (95% CI [97.6%-100%]) and 99.5% (95% CI [96.8%-99.9%]), respectively. Heat maps were in accordance with the clinician's observations. Using the proposed deep neural network model, retinitis pigmentosa can be distinguished from healthy eyes with high sensitivity and specificity on ultrawide-field pseudocolor and ultrawide-field autofluorescence images.

10.
J Glaucoma ; 27(7): 647-652, 2018 07.
Article in English | MEDLINE | ID: mdl-29781835

ABSTRACT

PURPOSE: To evaluate the accuracy of detecting glaucoma visual field defect severity using a deep-learning (DL) classifier with an ultrawide-field scanning laser ophthalmoscope. METHODS: One eye of each of 982 open-angle glaucoma (OAG) patients and 417 healthy eyes were enrolled. Glaucoma patients were categorized into 3 groups according to visual field damage (Humphrey Field Analyzer 24-2 program): early, mean deviation (MD) of -6 dB or better; moderate, MD between -6 and -12 dB; severe, MD of -12 dB or worse. In total, 558 images (446 for training and 112 for grading) from early OAG patients, 203 images (162 for training and 41 for grading) from moderate OAG patients, 221 images (176 for training and 45 for grading) from severe OAG patients, and 417 images (333 for training and 84 for grading) from normal subjects were analyzed using DL. The area under the receiver operating characteristic curve (AUC) was used to evaluate the accuracy after 100 trials. RESULTS: For normal versus all glaucoma patients, the mean AUC was 0.872, the sensitivity was 81.3%, and the specificity was 80.2%. For normal versus early OAG, the mean AUC was 0.830, the sensitivity was 83.8%, and the specificity was 75.3%. For normal versus moderate OAG, the mean AUC was 0.864, the sensitivity was 77.5%, and the specificity was 90.2%. For normal versus severe OAG, the mean AUC was 0.934, the sensitivity was 90.9%, and the specificity was 95.8%. CONCLUSIONS: Even with an ultrawide-field scanning laser ophthalmoscope, DL can detect glaucoma characteristics and glaucoma visual field defect severity with high reliability.
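The mean-deviation thresholds used for grouping can be sketched as a small helper; the function name and the handling of the exact boundary values are assumptions consistent with the grouping stated above:

```python
def severity_group(md_db):
    """Assign a severity group from Humphrey 24-2 mean deviation (dB):
    early (MD >= -6), moderate (-12 < MD < -6), severe (MD <= -12)."""
    if md_db >= -6:
        return "early"
    if md_db > -12:
        return "moderate"
    return "severe"
```

This is the Hodapp-Parrish-Anderson-style staging widely used in glaucoma studies; how the boundary values -6 and -12 dB themselves are assigned is a convention the abstract does not spell out.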


Subjects
Deep Learning; Glaucoma/classification; Glaucoma/diagnosis; Ophthalmoscopes; Visual Field Tests/instrumentation; Visual Field Tests/methods; Adult; Aged; Cross-Sectional Studies; Female; Glaucoma/pathology; Humans; Image Interpretation, Computer-Assisted/instrumentation; Image Interpretation, Computer-Assisted/methods; Intraocular Pressure; Male; Microscopy, Confocal/instrumentation; Microscopy, Confocal/methods; Middle Aged; Reproducibility of Results; Retrospective Studies; Sensitivity and Specificity; Severity of Illness Index; Vision Disorders/classification; Vision Disorders/diagnosis; Vision Disorders/pathology; Visual Fields
11.
J Ophthalmol ; 2018: 1875431, 2018.
Article in English | MEDLINE | ID: mdl-30515316

ABSTRACT

The aim of this study was to assess the performance of two machine-learning technologies, deep learning (DL) and support vector machine (SVM) algorithms, for detecting central retinal vein occlusion (CRVO) in ultrawide-field fundus images. Images from 125 CRVO patients (n=125 images) and 202 non-CRVO normal subjects (n=238 images) were included. The DL model was constructed by training deep convolutional neural network algorithms on ultrawide-field fundus images. The SVM was implemented with the scikit-learn library using a radial basis function kernel. The diagnostic abilities of DL and the SVM were compared by assessing their sensitivity, specificity, and area under the receiver operating characteristic curve (AUC) for CRVO. For diagnosing CRVO, the DL model had a sensitivity of 98.4% (95% confidence interval (CI), 94.3-99.8%) and a specificity of 97.9% (95% CI, 94.6-99.1%) with an AUC of 0.989 (95% CI, 0.980-0.999). In contrast, the SVM model had a sensitivity of 84.0% (95% CI, 76.3-89.3%) and a specificity of 87.5% (95% CI, 82.7-91.1%) with an AUC of 0.895 (95% CI, 0.859-0.931). Thus, the DL model outperformed the SVM model in all indices assessed (P < 0.001 for all). These data suggest that a DL model derived from ultrawide-field fundus images can distinguish between normal and CRVO images with a high level of accuracy, and that automatic CRVO detection in ultrawide-field fundus ophthalmoscopy is possible. The proposed DL-based model could also be used in ultrawide-field fundus ophthalmoscopy to accurately diagnose CRVO and improve medical care in remote locations where it is difficult for patients to attend an ophthalmic medical center.

12.
Sci Rep ; 7(1): 9425, 2017 08 25.
Article in English | MEDLINE | ID: mdl-28842613

ABSTRACT

Rhegmatogenous retinal detachment (RRD) is a serious condition that can lead to blindness; however, it is highly treatable with timely and appropriate treatment. Thus, early diagnosis and treatment of RRD is crucial. In this study, we applied deep learning, a machine-learning technology, to detect RRD using ultra-wide-field fundus images and investigated its performance. In total, 411 images (329 for training and 82 for grading) from 407 RRD patients and 420 images (336 for training and 84 for grading) from 238 non-RRD patients were used in this study. The deep learning model demonstrated a high sensitivity of 97.6% [95% confidence interval (CI), 94.2-100%] and a high specificity of 96.5% (95% CI, 90.2-100%), and the area under the curve was 0.988 (95% CI, 0.981-0.995). This model can improve medical care in remote areas where eye clinics are not available by using ultra-wide-field fundus ophthalmoscopy for the accurate diagnosis of RRD. Early diagnosis of RRD can prevent blindness.


Subjects
Deep Learning; Fundus Oculi; Machine Learning; Ophthalmoscopy; Retinal Detachment/diagnosis; Aged; Female; Humans; Male; Middle Aged; Ophthalmoscopy/methods; ROC Curve; Retinal Detachment/etiology; Sensitivity and Specificity