Results 1 - 12 of 12
1.
Sleep Breath ; 25(4): 2297-2305, 2021 12.
Article in English | MEDLINE | ID: mdl-33559004

ABSTRACT

PURPOSE: In 2-dimensional lateral cephalometric radiographs, patients with severe obstructive sleep apnea (OSA) exhibit a more crowded oropharynx than non-OSA patients. We tested the hypothesis that machine learning, an application of artificial intelligence (AI), could be used to detect patients with severe OSA based on 2-dimensional images. METHODS: A deep convolutional neural network was developed (n = 1258; 90%) and tested (n = 131; 10%) using data from 1389 (100%) lateral cephalometric radiographs obtained from individuals diagnosed with severe OSA (n = 867; apnea hypopnea index > 30 events/h sleep) or non-OSA (n = 522; apnea hypopnea index < 5 events/h sleep) at a single center for sleep disorders. Three kinds of data sets were prepared by changing the area of interest within a single image: the original image without any modification (full image), an image containing the facial profile, upper airway, and craniofacial soft/hard tissues (main region), and an image containing part of the occipital region (head only). A radiologist also performed a conventional manual cephalometric analysis of the full image for comparison. RESULTS: The sensitivity/specificity was 0.87/0.82 for the full image, 0.88/0.75 for the main region, 0.71/0.63 for head only, and 0.54/0.80 for the manual analysis. The area under the receiver-operating characteristic curve was highest for the main region (0.92), followed by the full image (0.89), manual cephalometric analysis (0.75), and head only (0.70). CONCLUSIONS: A deep convolutional neural network identified individuals with severe OSA with high accuracy. These findings encourage further research on AI-based image analysis for OSA triage.


Subjects
Cephalometry, Deep Learning, Radiography, Obstructive Sleep Apnea/diagnostic imaging, Adult, Cephalometry/methods, Cephalometry/standards, Female, Humans, Male, Middle Aged, Radiography/methods, Radiography/standards, Sensitivity and Specificity
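The abstracts in this listing report sensitivity, specificity, and the area under the receiver-operating characteristic curve (AUC). As an illustrative sketch only (made-up toy labels and scores, not the study's data), these metrics can be computed with scikit-learn:

```python
# Illustrative sketch (not the authors' code): sensitivity, specificity, and
# ROC AUC for a binary classifier, computed on made-up labels and scores.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

y_true = np.array([1, 1, 1, 1, 0, 0, 0, 0])     # 1 = severe OSA, 0 = non-OSA
y_score = np.array([0.9, 0.8, 0.7, 0.4, 0.6, 0.3, 0.2, 0.1])
y_pred = (y_score >= 0.5).astype(int)           # threshold the scores at 0.5

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)    # true-positive rate
specificity = tn / (tn + fp)    # true-negative rate
auc = roc_auc_score(y_true, y_score)            # threshold-free summary
```

Sensitivity and specificity depend on the chosen threshold, while the AUC summarizes performance across all thresholds, which is why the abstracts report both.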
2.
PLoS One ; 14(11): e0223965, 2019.
Article in English | MEDLINE | ID: mdl-31697697

ABSTRACT

We aimed to assess the ability of deep learning (DL) and a support vector machine (SVM) to detect a nonperfusion area (NPA) caused by retinal vein occlusion (RVO) in optical coherence tomography angiography (OCTA) images. The study included 322 OCTA images (normal: 148; NPA owing to RVO: 174 [128 branch RVO images and 46 central RVO images]). The DL model was constructed by training a deep convolutional neural network (DNN) on the OCTA images. The SVM was implemented with the scikit-learn library using a radial basis function kernel. The area under the curve (AUC), sensitivity, and specificity for detecting an NPA were examined, and heat maps were generated. We compared the diagnostic ability (sensitivity, specificity, and average required time) of the DNN, the SVM, and seven ophthalmologists. For the DNN, the mean AUC, sensitivity, specificity, and average required time for distinguishing RVO OCTA images with an NPA from normal OCTA images were 0.986, 93.7%, 97.3%, and 176.9 s, respectively. For the SVM, the mean AUC, sensitivity, and specificity were 0.880, 79.3%, and 81.1%, respectively. For the seven ophthalmologists, the mean AUC, sensitivity, specificity, and average required time were 0.962, 90.8%, 89.2%, and 700.6 s, respectively. In the heat maps, the DNN focused on the foveal avascular zone and the NPA. The performance of the DNN was significantly better than that of the SVM in all parameters (p < 0.01 for all) and better than that of the ophthalmologists in AUC and specificity (p < 0.01 for all). The combination of DL and OCTA images detected an NPA with high accuracy and might be useful in clinical practice and retinal screening.


Subjects
Retinal Vein Occlusion/physiopathology, Retinal Vessels/physiopathology, Aged, Deep Learning, Female, Fluorescein Angiography/methods, Fovea Centralis/physiopathology, Humans, Male, Nerve Net/physiopathology, Perfusion/methods, Sensitivity and Specificity, Optical Coherence Tomography/methods, Visual Acuity/physiology
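The SVM baseline described above uses scikit-learn with a radial basis function kernel. A minimal sketch follows, with synthetic feature vectors standing in for the OCTA-derived features (the abstract does not specify how the images were vectorized, so the data here are assumptions for illustration):

```python
# Hedged sketch of an RBF-kernel SVM classifier like the baseline above,
# via scikit-learn's SVC. The features are synthetic stand-ins; the abstract
# does not state how OCTA images were vectorized for the SVM.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = np.vstack([
    rng.normal(1.0, 0.5, size=(40, 16)),   # stand-in "NPA" feature vectors
    rng.normal(-1.0, 0.5, size=(40, 16)),  # stand-in "normal" feature vectors
])
y = np.array([1] * 40 + [0] * 40)

clf = SVC(kernel="rbf", gamma="scale")     # radial basis function kernel
clf.fit(X, y)
train_acc = clf.score(X, y)                # should be high on separable data
```

The RBF kernel lets the SVM learn a nonlinear decision boundary, which is why it is a common shallow baseline against deep networks in studies like these.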
3.
PeerJ ; 7: e6900, 2019.
Article in English | MEDLINE | ID: mdl-31119087

ABSTRACT

We evaluated the discrimination ability of a deep convolutional neural network for ultrawide-field pseudocolor imaging and ultrawide-field autofluorescence imaging of retinitis pigmentosa. In total, 373 ultrawide-field pseudocolor and ultrawide-field autofluorescence images (150 retinitis pigmentosa; 223 normal) obtained from patients who visited the Department of Ophthalmology, Tsukazaki Hospital were used. A convolutional neural network was trained on these data, and we performed K-fold cross-validation (K = 5). The mean area under the curve of the ultrawide-field pseudocolor group was 0.998 (95% confidence interval (CI) [0.9953-1.0]) and that of the ultrawide-field autofluorescence group was 1.0 (95% CI [0.9994-1.0]). The sensitivity and specificity of the ultrawide-field pseudocolor group were 99.3% (95% CI [96.3%-100.0%]) and 99.1% (95% CI [96.1%-99.7%]), and those of the ultrawide-field autofluorescence group were 100% (95% CI [97.6%-100%]) and 99.5% (95% CI [96.8%-99.9%]), respectively. Heat maps were in accordance with the clinicians' observations. Using the proposed deep neural network model, retinitis pigmentosa can be distinguished from healthy eyes with high sensitivity and specificity in ultrawide-field pseudocolor and ultrawide-field autofluorescence images.
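The 5-fold cross-validation scheme mentioned above can be sketched as follows. This is an illustration only: a logistic-regression stand-in replaces the convolutional network, and the two-class data are synthetic.

```python
# Sketch of 5-fold cross-validation as in the study above, with a
# logistic-regression stand-in for the CNN and synthetic two-class data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(1)
X = np.vstack([
    rng.normal(1.0, 1.0, size=(50, 8)),    # stand-in "retinitis pigmentosa"
    rng.normal(-1.0, 1.0, size=(50, 8)),   # stand-in "normal"
])
y = np.array([1] * 50 + [0] * 50)

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=1)
aucs = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                       cv=cv, scoring="roc_auc")
mean_auc = aucs.mean()                     # per-fold AUCs averaged over K = 5
```

Cross-validation reuses every image for both training and evaluation across folds, which is valuable when, as here, only a few hundred images are available.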

4.
Int Ophthalmol ; 39(10): 2153-2159, 2019 Oct.
Article in English | MEDLINE | ID: mdl-30798455

ABSTRACT

PURPOSE: We investigated the use of ultrawide-field fundus images with a deep convolutional neural network (DCNN), a machine-learning technology, to detect treatment-naïve proliferative diabetic retinopathy (PDR). METHODS: We trained the DCNN on 378 photographic images (132 PDR and 246 non-PDR) and constructed a deep learning model. The area under the curve (AUC), sensitivity, and specificity were examined. RESULTS: The constructed deep learning model demonstrated a high sensitivity of 94.7% and a high specificity of 97.2%, with an AUC of 0.969. CONCLUSION: Our findings suggest that PDR can be diagnosed using wide-angle camera images and deep learning.


Subjects
Deep Learning, Diabetic Retinopathy/diagnosis, Computer-Assisted Diagnosis/methods, Ophthalmoscopy/methods, Adult, Aged, Area Under Curve, Female, Humans, Male, Middle Aged, Sensitivity and Specificity
5.
Int J Ophthalmol ; 12(1): 94-99, 2019.
Article in English | MEDLINE | ID: mdl-30662847

ABSTRACT

AIM: To investigate and compare the efficacy of two machine-learning technologies, deep learning (DL) and a support vector machine (SVM), for the detection of branch retinal vein occlusion (BRVO) using ultrawide-field fundus images. METHODS: This study included 237 images from 236 BRVO patients (mean ± standard deviation age, 66.3 ± 10.6 years) and 229 images from 176 non-BRVO healthy subjects (mean age, 64.9 ± 9.4 years). The DL model was constructed by training a deep convolutional neural network on ultrawide-field fundus images. The sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and area under the curve (AUC) were calculated to compare the diagnostic abilities of the DL and SVM models. RESULTS: For the DL model, the sensitivity, specificity, PPV, NPV, and AUC for diagnosing BRVO were 94.0% (95%CI: 93.8%-98.8%), 97.0% (95%CI: 89.7%-96.4%), 96.5% (95%CI: 94.3%-98.7%), 93.2% (95%CI: 90.5%-96.0%), and 0.976 (95%CI: 0.960-0.993), respectively. For the SVM model, these values were 80.5% (95%CI: 77.8%-87.9%), 84.3% (95%CI: 75.8%-86.1%), 83.5% (95%CI: 78.4%-88.6%), 75.2% (95%CI: 72.1%-78.3%), and 0.857 (95%CI: 0.811-0.903), respectively. The DL model outperformed the SVM model in all of these parameters (P<0.001). CONCLUSION: These results indicate that the combination of the DL model and ultrawide-field fundus ophthalmoscopy may distinguish between healthy and BRVO eyes with a high level of accuracy. The proposed combination may be used for automatically diagnosing BRVO in patients residing in remote areas lacking access to an ophthalmic medical center.
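The PPV and NPV reported above derive directly from confusion-matrix counts, alongside sensitivity and specificity. A minimal worked example with illustrative counts (not the study's data):

```python
# Worked example of the four reported metrics from confusion-matrix counts.
# The counts are illustrative only, not the study's data.
tp, fp, tn, fn = 90, 10, 85, 15   # true/false positives and negatives

sensitivity = tp / (tp + fn)      # 90 / 105: P(positive test | disease)
specificity = tn / (tn + fp)      # 85 / 95:  P(negative test | healthy)
ppv = tp / (tp + fp)              # positive predictive value: 90 / 100
npv = tn / (tn + fn)              # negative predictive value: 85 / 100
```

Unlike sensitivity and specificity, PPV and NPV depend on the class balance of the evaluation set, which is worth keeping in mind when comparing them across studies.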

6.
Int Ophthalmol ; 39(8): 1871-1877, 2019 Aug.
Article in English | MEDLINE | ID: mdl-30218173

ABSTRACT

PURPOSE: In this study, we compared deep learning (DL) with a support vector machine (SVM), both using three-dimensional optical coherence tomography (3D-OCT) images, for detecting epiretinal membrane (ERM). METHODS: In total, 529 3D-OCT images from the Tsukazaki Hospital ophthalmology database (184 non-ERM subjects and 205 ERM patients) were assessed; 80% of the images were assigned to training and 20% to testing, as follows: 423 training images (non-ERM 245, ERM 178) and 106 test images (non-ERM 59, ERM 47). Using the 423 training images, models were created with a deep convolutional neural network and an SVM, and then evaluated on the test data. RESULTS: The DL model's sensitivity was 97.6% [95% confidence interval (CI), 87.7-99.9%], its specificity was 98.0% (95% CI, 89.7-99.9%), and its area under the curve (AUC) was 0.993 (95% CI, 0.993-0.994). In contrast, the SVM model's sensitivity was 97.6% (95% CI, 87.7-99.9%), its specificity was 94.2% (95% CI, 84.0-98.7%), and its AUC was 0.988 (95% CI, 0.987-0.988). CONCLUSION: The DL model outperformed the SVM model in detecting ERM using 3D-OCT images.


Subjects
Epiretinal Membrane/diagnosis, Three-Dimensional Imaging/methods, Machine Learning, Retina/diagnostic imaging, Support Vector Machine, Optical Coherence Tomography/methods, Visual Acuity, Aged, Deep Learning, Early Diagnosis, Female, Humans, Male
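The 80%/20% split described above (423 training and 106 test images from 529) can be reproduced in outline with scikit-learn's `train_test_split`. The labels below are stand-ins using the abstract's class totals (304 non-ERM and 225 ERM images); the placeholder array is an assumption for illustration, not the study's data.

```python
# Sketch of an 80/20 stratified train/test split like the one described above.
# The label vector uses the abstract's class totals (304 non-ERM, 225 ERM);
# X is a placeholder index array standing in for the 3D-OCT images.
import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(529).reshape(-1, 1)            # 529 stand-in "images"
y = np.array([0] * 304 + [1] * 225)          # 0 = non-ERM, 1 = ERM

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)
```

Stratifying on `y` keeps the ERM/non-ERM proportions similar in both subsets, so test-set sensitivity and specificity are estimated from a representative class mix.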
7.
Int Ophthalmol ; 39(6): 1269-1275, 2019 Jun.
Article in English | MEDLINE | ID: mdl-29744763

ABSTRACT

PURPOSE: To predict exudative age-related macular degeneration (AMD), we combined a deep convolutional neural network (DCNN), a machine-learning algorithm, with Optos, an ultra-wide-field fundus imaging system. METHODS: First, to evaluate the diagnostic accuracy of the DCNN, 364 photographic images (AMD: 137) were amplified, and the area under the curve (AUC), sensitivity, and specificity were examined. Furthermore, to compare the diagnostic abilities of the DCNN and six ophthalmologists, we prepared a set of 84 images comprising equal proportions of normal and wet-AMD data, and calculated the accuracy, specificity, sensitivity, and response times. RESULTS: The DCNN exhibited 100% sensitivity and 97.31% specificity for wet-AMD images, with an average AUC of 99.76%. Moreover, in the comparison with the six ophthalmologists, the average accuracy of the DCNN was 100%, whereas the accuracy of the ophthalmologists, determined from Optos images alone without a fundus examination, was 81.9%. CONCLUSION: A combination of the DCNN with Optos images is not better than a medical examination; however, it can identify exudative AMD with a high level of accuracy. Our system is considered useful for screening and telemedicine.


Subjects
Deep Learning, Computer-Assisted Diagnosis/methods, Ophthalmoscopy/methods, Wet Macular Degeneration/diagnosis, Aged, Aged 80 and over, Algorithms, Female, Humans, Male, Middle Aged, Neural Networks (Computer), Sensitivity and Specificity
8.
J Ophthalmol ; 2018: 1875431, 2018.
Article in English | MEDLINE | ID: mdl-30515316

ABSTRACT

The aim of this study was to assess the performance of two machine-learning technologies, namely deep learning (DL) and support vector machine (SVM) algorithms, for detecting central retinal vein occlusion (CRVO) in ultrawide-field fundus images. Images from 125 CRVO patients (n=125 images) and 202 non-CRVO normal subjects (n=238 images) were included in this study. The DL model was constructed by training deep convolutional neural network algorithms on ultrawide-field fundus images. The SVM was implemented with the scikit-learn library using a radial basis function kernel. The diagnostic abilities of DL and the SVM were compared by assessing their sensitivity, specificity, and area under the receiver operating characteristic curve (AUC) for CRVO. For diagnosing CRVO, the DL model had a sensitivity of 98.4% (95% confidence interval (CI), 94.3-99.8%) and a specificity of 97.9% (95% CI, 94.6-99.1%), with an AUC of 0.989 (95% CI, 0.980-0.999). In contrast, the SVM model had a sensitivity of 84.0% (95% CI, 76.3-89.3%) and a specificity of 87.5% (95% CI, 82.7-91.1%), with an AUC of 0.895 (95% CI, 0.859-0.931). Thus, the DL model outperformed the SVM model in all indices assessed (P < 0.001 for all). Our data suggest that a DL model derived using ultrawide-field fundus images can distinguish between normal and CRVO images with a high level of accuracy, and that automatic CRVO detection in ultrawide-field fundus ophthalmoscopy is possible. The proposed DL-based model can also be used in ultrawide-field fundus ophthalmoscopy to accurately diagnose CRVO and improve medical care in remote locations where it is difficult for patients to attend an ophthalmic medical center.

9.
PeerJ ; 6: e5696, 2018.
Article in English | MEDLINE | ID: mdl-30370184

ABSTRACT

We aimed to investigate the detection of idiopathic macular holes (MHs) using ultra-wide-field fundus images (Optos) with deep learning, a machine-learning technology. The study included 910 Optos color images (715 normal, 195 MH). Of these 910 images, 637 were learning images (501 normal, 136 MH) and 273 were test images (214 normal, 59 MH). We trained a deep convolutional neural network (CNN) on the learning images and constructed a deep-learning model. The CNN exhibited a high sensitivity of 100% (95% confidence interval (CI) [93.5-100%]) and a high specificity of 99.5% (95% CI [97.1-99.9%]). The area under the curve was 0.9993 (95% CI [0.9993-0.9994]). Our findings suggest that MHs could be diagnosed using an approach involving wide-angle camera images and deep learning.

10.
J Glaucoma ; 27(7): 647-652, 2018 07.
Article in English | MEDLINE | ID: mdl-29781835

ABSTRACT

PURPOSE: To evaluate the accuracy of detecting glaucoma visual field defect severity using a deep-learning (DL) classifier with an ultrawide-field scanning laser ophthalmoscope. METHODS: One eye each of 982 open-angle glaucoma (OAG) patients and 417 healthy eyes were enrolled. We categorized glaucoma patients into 3 groups according to glaucoma visual field damage (Humphrey Field Analyzer 24-2 program): early, mean deviation of -6 dB or better; moderate, between -6 and -12 dB; and severe, -12 dB or worse. In total, 558 images (446 for training, 112 for grading) from early OAG patients, 203 images (162 for training, 41 for grading) from moderate OAG patients, 221 images (176 for training, 45 for grading) from severe OAG patients, and 417 images (333 for training, 84 for grading) from normal subjects were analyzed using DL. The area under the receiver operating characteristic curve (AUC) was used to evaluate the accuracy after 100 trials. RESULTS: For normal versus all glaucoma patients, the mean AUC was 0.872, the sensitivity was 81.3%, and the specificity was 80.2%. For normal versus early OAG, the mean AUC was 0.830, the sensitivity was 83.8%, and the specificity was 75.3%. For normal versus moderate OAG, the mean AUC was 0.864, the sensitivity was 77.5%, and the specificity was 90.2%. For normal versus severe OAG, the mean AUC was 0.934, the sensitivity was 90.9%, and the specificity was 95.8%. CONCLUSIONS: Even with an ultrawide-field scanning laser ophthalmoscope, DL can detect glaucoma characteristics and glaucoma visual field defect severity with high reliability.


Subjects
Deep Learning, Glaucoma/classification, Glaucoma/diagnosis, Ophthalmoscopes, Visual Field Tests/instrumentation, Visual Field Tests/methods, Adult, Aged, Cross-Sectional Studies, Female, Glaucoma/pathology, Humans, Computer-Assisted Image Interpretation/instrumentation, Computer-Assisted Image Interpretation/methods, Intraocular Pressure, Male, Confocal Microscopy/instrumentation, Confocal Microscopy/methods, Middle Aged, Reproducibility of Results, Retrospective Studies, Sensitivity and Specificity, Severity of Illness Index, Vision Disorders/classification, Vision Disorders/diagnosis, Vision Disorders/pathology, Visual Fields
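The three severity groups in the study above are defined by Humphrey 24-2 mean deviation (MD) thresholds. A hypothetical helper (not from the paper's code) implementing that grouping:

```python
# Hypothetical helper (not from the paper) implementing the severity grouping
# described above, based on Humphrey 24-2 mean deviation (MD) in decibels.
def severity_from_md(md_db: float) -> str:
    """Map mean deviation to the study's three glaucoma severity groups."""
    if md_db >= -6.0:
        return "early"       # MD of -6 dB or better
    elif md_db > -12.0:
        return "moderate"    # MD between -6 and -12 dB
    else:
        return "severe"      # MD of -12 dB or worse
```

Note that mean deviation is negative in glaucomatous eyes, so "better" means a value closer to zero; the boundary values -6 dB and -12 dB are assigned to the early and severe groups respectively, matching the abstract's "or better"/"or worse" wording.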
11.
Sci Rep ; 7(1): 9425, 2017 08 25.
Article in English | MEDLINE | ID: mdl-28842613

ABSTRACT

Rhegmatogenous retinal detachment (RRD) is a serious condition that can lead to blindness; however, it responds well to timely and appropriate treatment. Thus, early diagnosis and treatment of RRD are crucial. In this study, we applied deep learning, a machine-learning technology, to detect RRD in ultra-wide-field fundus images and investigated its performance. In total, 411 images (329 for training, 82 for grading) from 407 RRD patients and 420 images (336 for training, 84 for grading) from 238 non-RRD patients were used. The deep learning model demonstrated a high sensitivity of 97.6% [95% confidence interval (CI), 94.2-100%] and a high specificity of 96.5% (95% CI, 90.2-100%), and the area under the curve was 0.988 (95% CI, 0.981-0.995). By enabling accurate diagnosis of RRD with ultra-wide-field fundus ophthalmoscopy, this model can improve medical care in remote areas where eye clinics are not available. Early diagnosis of RRD can prevent blindness.


Subjects
Deep Learning, Fundus Oculi, Machine Learning, Ophthalmoscopy, Retinal Detachment/diagnosis, Aged, Female, Humans, Male, Middle Aged, Ophthalmoscopy/methods, ROC Curve, Retinal Detachment/etiology, Sensitivity and Specificity
12.
Chem Asian J ; 9(11): 3136-40, 2014 Nov.
Article in English | MEDLINE | ID: mdl-25169009

ABSTRACT

Perylene bisimides (PBIs) are fascinating dyes with various potential applications. To study the effects of introducing a dibenzo-fused structure into the perylene moiety, π-extended PBI derivatives with a dibenzo-fused structure at both the a and f bonds were synthesized. The twisted structure was characterized by X-ray crystal structure analysis. In cyclic voltammograms, the dibenzo[a,f]-fused PBI showed a reversible oxidation wave at a much less positive potential relative to a dibenzo[a,o]-fused PBI derivative. These data indicate that two ring fusions on both sides of one naphthalene moiety, which construct a tetracene core, raise the HOMO level more effectively than fusion of one ring at each naphthalene moiety (two anthracene cores). The dibenzo[a,f]-fused PBI derivative showed an absorption band at 735 nm with a shoulder band reaching 900 nm.
