Results 1 - 20 of 30
1.
J Gene Med ; 25(2): e3464, 2023 02.
Article in English | MEDLINE | ID: mdl-36413603

ABSTRACT

BACKGROUND: Exon-skipping is a powerful genetic tool, especially when delivering genes via an AAV-mediated full-length gene supplementation strategy is difficult owing to the large size of the gene. Here, we used engineered human induced pluripotent stem cells and artificial intelligence to evaluate clustered regularly interspaced short palindromic repeats (CRISPR)/CRISPR-associated protein 9-based exon-skipping vectors targeting genes of the retinal pigment epithelium (RPE). The model system was choroideremia, an X-linked inherited retinal disease caused by mutation of the CHM gene. METHODS: We explored whether artificial intelligence could detect the differentiation into RPE of human OTX2, PAX6 and MITF (hOPM) cells, in which OTX2, PAX6 and MITF expression was induced by doxycycline treatment. Plasmids encoding CHM exon-skipping modules targeting the splice donor site of exon 6 were constructed. A clonal hOPM cell line with a frameshift mutation in exon 6 was generated and differentiated into RPE. CHM exon 6-skipping was induced, and the effects of skipping on phagocytic activity, cell death and prenylation of Rab small GTPases (RABs) were evaluated using flow cytometry, an in vitro prenylation assay and western blotting. RESULTS: Artificial intelligence-based evaluation of RPE differentiation was successful. RPE cells with a frameshift mutation in exon 6 showed increased cell death, reduced phagocytic activity and increased cytosolic unprenylated RABs only under oxidative stress. The latter two phenotypes were partially rescued by CHM exon 6-skipping. CONCLUSIONS: CHM exon 6-skipping contributed to RPE phagocytosis, probably by increasing RAB38 prenylation under oxidative stress.


Subjects
Choroideremia, Induced Pluripotent Stem Cells, Retinal Pigment Epithelium, Humans, Artificial Intelligence, Choroideremia/genetics, Choroideremia/therapy, Choroideremia/metabolism, CRISPR-Cas Systems/genetics, Exons/genetics, Induced Pluripotent Stem Cells/metabolism, Retinal Pigment Epithelium/metabolism
2.
Graefes Arch Clin Exp Ophthalmol ; 260(4): 1329-1335, 2022 Apr.
Article in English | MEDLINE | ID: mdl-34734349

ABSTRACT

PURPOSE: To assess the performance of artificial intelligence in the automated classification of tablet-device images of patients with blepharoptosis and subjects with normal eyelids. METHODS: This was a prospective, observational study. A total of 1276 eyelid images (624 images from 347 blepharoptosis cases and 652 images from 367 normal controls) from 606 participants were analyzed. To obtain a sufficient number of images for analysis, 1 to 4 eyelid images were obtained from each participant. We developed a model by fully retraining the pre-trained MobileNetV2 convolutional neural network and verified whether automatic diagnosis of blepharoptosis was possible from the images. In addition, we visualized how the model captured the features of the test data with Score-CAM. k-fold cross-validation (k = 5) was adopted for splitting the training and validation data. Sensitivity, specificity, and the area under the curve (AUC) of the receiver operating characteristic curve for detecting blepharoptosis were examined. RESULTS: The model had a sensitivity of 83.0% (95% confidence interval [CI], 79.8-85.9) and a specificity of 82.5% (95% CI, 79.4-85.4). The accuracy on the validation data was 82.8%, and the AUC was 0.900 (95% CI, 0.882-0.917). CONCLUSION: Artificial intelligence was able to classify images of blepharoptosis and normal eyelids taken with a tablet device with high accuracy; thus, diagnosis of blepharoptosis with a tablet device is possible at a high level of accuracy. TRIAL REGISTRATION: Date of registration: 2021-06-25. TRIAL REGISTRATION NUMBER: UMIN000044660. Registration site: https://upload.umin.ac.jp/cgi-open-bin/ctr/ctr_view.cgi?recptno=R000051004.


Subjects
Artificial Intelligence, Blepharoptosis, Blepharoptosis/diagnosis, Humans, Machine Learning, Neural Networks, Computer, Prospective Studies
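The sensitivity, specificity, and AUC figures reported in studies like the one above can be reproduced from raw predictions with a minimal sketch (illustrative only, not the study's actual evaluation code); the AUC is computed here via its rank interpretation, i.e. the probability that a randomly chosen positive case scores higher than a randomly chosen negative one:

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP), for 0/1 labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

def auc(y_true, scores):
    """AUC as the probability that a random positive outscores a random
    negative (Mann-Whitney U formulation); ties count half."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

For example, thresholding continuous model scores at 0.5 gives the binary predictions fed to `sensitivity_specificity`, while `auc` consumes the raw scores directly.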
3.
Graefes Arch Clin Exp Ophthalmol ; 259(6): 1569-1577, 2021 Jun.
Article in English | MEDLINE | ID: mdl-33576859

ABSTRACT

PURPOSE: We assessed the ability of deep learning (DL) models to distinguish between the tear menisci of lacrimal duct obstruction (LDO) patients and normal subjects using anterior segment optical coherence tomography (ASOCT) images. METHODS: The study included 117 ASOCT images (19 men and 98 women; mean age, 66.6 ± 13.6 years) from 101 LDO patients and 113 ASOCT images (29 men and 84 women; mean age, 38.3 ± 19.9 years) from 71 normal subjects. We constructed 9 single DL models and 502 ensemble DL models based on 9 different network structures, and calculated the area under the curve (AUC), sensitivity, and specificity to compare the distinguishing abilities of these single and ensemble DL models. RESULTS: For the best single DL model (DenseNet169), the AUC, sensitivity, and specificity for distinguishing LDO were 0.778, 64.6%, and 72.1%, respectively. For the best ensemble DL model (VGG16, ResNet50, DenseNet121, DenseNet169, InceptionResNetV2, InceptionV3, and Xception), the AUC, sensitivity, and specificity were 0.824, 84.8%, and 58.8%, respectively. Heat maps indicated that these DL models focused on the tear meniscus region of the ASOCT images. CONCLUSION: The combination of DL and ASOCT images could distinguish between the tear menisci of LDO patients and normal subjects with a high level of accuracy. These results suggest that DL might be useful for automatic screening of patients for LDO.


Subjects
Deep Learning, Lacrimal Duct Obstruction, Meniscus, Adult, Aged, Female, Humans, Lacrimal Duct Obstruction/diagnosis, Male, Tears, Tomography, Optical Coherence
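Ensemble DL models like those above combine the outputs of several trained networks. A common way to do this (assumed here for illustration; the study does not specify its exact combination rule) is soft voting, i.e. averaging the per-model positive-class probabilities:

```python
def ensemble_probability(model_probs):
    """Soft-voting ensemble: average the positive-class probability that
    each model assigns to each sample.

    model_probs: list of per-model probability lists, all the same length.
    Returns one averaged probability per sample.
    """
    n_models = len(model_probs)
    n_samples = len(model_probs[0])
    return [sum(m[i] for m in model_probs) / n_models for i in range(n_samples)]
```

The averaged probabilities can then be thresholded or fed into an AUC calculation exactly like single-model scores.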
4.
Eye Contact Lens ; 46(2): 121-126, 2020 Mar.
Article in English | MEDLINE | ID: mdl-31425350

ABSTRACT

PURPOSE: To evaluate the efficacy of deep learning in judging the need for rebubbling after Descemet membrane endothelial keratoplasty (DMEK). METHODS: This retrospective study included eyes that underwent rebubbling after DMEK (rebubbling group: RB group) and the same number of eyes that did not require rebubbling (non-RB group), based on medical records. To classify the RB group, randomly selected anterior segment optical coherence tomography images from postoperative day 5 were evaluated by corneal specialists. The criterion for rebubbling was graft detachment reaching the central 4.0-mm pupil area. We trained nine types of deep neural network structures (VGG16, VGG19, ResNet50, InceptionV3, InceptionResNetV2, Xception, DenseNet121, DenseNet169, and DenseNet201) and built nine models, each of which was then evaluated on the validation data. RESULTS: This study included 496 images (31 eyes from 24 patients) in the RB group and 496 images (31 eyes from 29 patients) in the non-RB group; because 16 images were obtained from the same point of each eye, a total of 992 images were analyzed. The VGG19 model had the highest area under the receiver operating characteristic curve (AUC) of all models. The AUC, sensitivity, and specificity of the VGG19 model were 0.964, 0.967, and 0.915, respectively, whereas those of the best ensemble model were 0.956, 0.913, and 0.921, respectively. CONCLUSIONS: This automated system, which alerts the physician to the need for rebubbling, may be clinically useful.


Subjects
Deep Learning, Descemet Membrane/surgery, Descemet Stripping Endothelial Keratoplasty, Fuchs' Endothelial Dystrophy/surgery, Reoperation, Aged, Area Under Curve, Corneal Endothelial Cell Loss/prevention & control, Female, Humans, Male, Models, Theoretical, ROC Curve, Retrospective Studies, Tomography, Optical Coherence, Visual Acuity/physiology
5.
Allergol Int ; 69(4): 505-509, 2020 Oct.
Article in English | MEDLINE | ID: mdl-32563623

ABSTRACT

We summarize past efforts and results in objective measurement methods for conjunctival hyperemia classification. Severity classification using the conjunctival blood vessel occupancy rate, ocular surface temperature analysis, and artificial intelligence has been reported to be clinically useful, as these measures correlate with clinicians' grading of conjunctival hyperemia severity. The AI method based on slit-lamp microscope images is designed primarily for routine clinical practice and can be deployed worldwide. It may thus lay the foundation for clinical research using large amounts of clinical data collected on a common basis, free of human bias.


Subjects
Artificial Intelligence, Conjunctivitis, Allergic/diagnosis, Hyperemia/diagnosis, Humans, Severity of Illness Index, Slit Lamp Microscopy
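The conjunctival blood vessel occupancy rate mentioned above can be sketched as the fraction of the conjunctival region covered by binarised vessel pixels. This is an illustrative formulation under that assumption, not the published algorithm, which operates on segmented slit-lamp images:

```python
def vessel_occupancy_rate(vessel_mask, region_mask):
    """Fraction of the conjunctival region covered by vessel pixels.

    Both masks are equal-length flat lists of 0/1 values: region_mask marks
    conjunctiva pixels, vessel_mask marks pixels binarised as vessel.
    """
    region_pixels = sum(region_mask)
    vessel_in_region = sum(v for v, r in zip(vessel_mask, region_mask) if r)
    return vessel_in_region / region_pixels
```

A higher occupancy rate would then map to a more severe hyperemia grade.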
6.
Int Ophthalmol ; 39(6): 1269-1275, 2019 Jun.
Article in English | MEDLINE | ID: mdl-29744763

ABSTRACT

PURPOSE: To predict exudative age-related macular degeneration (AMD), we combined a deep convolutional neural network (DCNN), a machine-learning algorithm, with Optos, an ultra-wide-field fundus imaging system. METHODS: First, to evaluate the diagnostic accuracy of the DCNN, 364 photographic images (137 AMD) were augmented, and the area under the curve (AUC), sensitivity and specificity were examined. Furthermore, to compare the diagnostic abilities of the DCNN and six ophthalmologists, we prepared 84 images comprising 50% normal and 50% wet-AMD data and calculated the correct answer rate, specificity, sensitivity, and response times. RESULTS: The DCNN exhibited 100% sensitivity and 97.31% specificity for wet-AMD images, with an average AUC of 99.76%. Moreover, in the comparison between the DCNN and the six ophthalmologists, the average accuracy of the DCNN was 100%, whereas the accuracy of the ophthalmologists, judging only from Optos images without a fundus examination, was 81.9%. CONCLUSION: The combination of the DCNN with Optos images is not superior to a medical examination; however, it can identify exudative AMD with a high level of accuracy. Our system is considered useful for screening and telemedicine.


Subjects
Deep Learning, Diagnosis, Computer-Assisted/methods, Ophthalmoscopy/methods, Wet Macular Degeneration/diagnosis, Aged, Aged, 80 and over, Algorithms, Female, Humans, Male, Middle Aged, Neural Networks, Computer, Sensitivity and Specificity
7.
Int Ophthalmol ; 39(8): 1871-1877, 2019 Aug.
Article in English | MEDLINE | ID: mdl-30218173

ABSTRACT

PURPOSE: In this study, we compared deep learning (DL) with a support vector machine (SVM), both using three-dimensional optical coherence tomography (3D-OCT) images, for detecting epiretinal membrane (ERM). METHODS: In total, 529 3D-OCT images from the Tsukazaki Hospital ophthalmology database (184 non-ERM subjects and 205 ERM patients) were assessed; 80% of the images were used for training and 20% for testing: 423 training images (245 non-ERM, 178 ERM) and 106 test images (59 non-ERM, 47 ERM). Using the 423 training images, models were created with a deep convolutional neural network and an SVM, and the test data were evaluated. RESULTS: The DL model's sensitivity was 97.6% [95% confidence interval (CI), 87.7-99.9%], its specificity was 98.0% (95% CI, 89.7-99.9%), and its area under the curve (AUC) was 0.993 (95% CI, 0.993-0.994). In contrast, the SVM model's sensitivity was 97.6% (95% CI, 87.7-99.9%), its specificity was 94.2% (95% CI, 84.0-98.7%), and its AUC was 0.988 (95% CI, 0.987-0.988). CONCLUSION: The DL model outperformed the SVM model in detecting ERM from 3D-OCT images.


Subjects
Epiretinal Membrane/diagnosis, Imaging, Three-Dimensional/methods, Machine Learning, Retina/diagnostic imaging, Support Vector Machine, Tomography, Optical Coherence/methods, Visual Acuity, Aged, Deep Learning, Early Diagnosis, Female, Humans, Male
8.
Int Ophthalmol ; 39(10): 2153-2159, 2019 Oct.
Article in English | MEDLINE | ID: mdl-30798455

ABSTRACT

PURPOSE: We investigated using ultrawide-field fundus images with a deep convolutional neural network (DCNN), which is a machine learning technology, to detect treatment-naïve proliferative diabetic retinopathy (PDR). METHODS: We conducted training with the DCNN using 378 photographic images (132 PDR and 246 non-PDR) and constructed a deep learning model. The area under the curve (AUC), sensitivity, and specificity were examined. RESULT: The constructed deep learning model demonstrated a high sensitivity of 94.7% and a high specificity of 97.2%, with an AUC of 0.969. CONCLUSION: Our findings suggested that PDR could be diagnosed using wide-angle camera images and deep learning.


Subjects
Deep Learning, Diabetic Retinopathy/diagnosis, Diagnosis, Computer-Assisted/methods, Ophthalmoscopy/methods, Adult, Aged, Area Under Curve, Female, Humans, Male, Middle Aged, Sensitivity and Specificity
9.
J Gen Virol ; 96(8): 2099-2103, 2015 Aug.
Article in English | MEDLINE | ID: mdl-25957096

ABSTRACT

Ticks transmit viruses responsible for severe emerging and re-emerging infectious diseases, some of which have a significant impact on public health. In Japan, little is known about the distribution of tick-borne viruses. In this study, we collected and tested ticks to investigate the distribution of tick-borne arboviruses in Kyoto, Japan, and isolated the first Thogoto virus (THOV) to our knowledge from Haemaphysalis longicornis in far-eastern Asia. The Japanese isolate was genetically distinct from a cluster of other isolates from Africa, Europe and the Middle East. Various cell lines derived from mammals and ticks were susceptible to the isolate, but it was not pathogenic in mice. These results advance understanding of the distribution and ecology of THOV.


Subjects
Arachnid Vectors/virology, Ixodidae/virology, Thogotovirus/isolation & purification, Tick-Borne Diseases/virology, Animals, Female, Humans, Japan, Male, Mice, Mice, Inbred BALB C, Molecular Sequence Data, Phylogeny, Thogotovirus/classification, Thogotovirus/genetics, Tick-Borne Diseases/transmission
10.
Cornea ; 42(5): 544-548, 2023 May 01.
Article in English | MEDLINE | ID: mdl-35543586

ABSTRACT

PURPOSE: To develop an artificial intelligence (AI) algorithm enabling corneal surgeons to predict the probability of rebubbling after Descemet membrane endothelial keratoplasty (DMEK) from images obtained using optical coherence tomography (OCT). METHODS: Anterior segment OCT data of patients undergoing DMEK by 2 different DMEK surgeons (C.C. and B.B.; University of Cologne, Cologne, Germany) were extracted from the prospective Cologne DMEK database. An AI algorithm, based on the EfficientNet architecture, was trained on a data set from C.C. to detect graft detachments and predict the probability of a rebubbling. This algorithm was then applied to OCT scans of patients operated on by B.B., and its transferability in predicting a rebubbling after DMEK was analyzed. RESULTS: The algorithm reached an area under the curve of 0.875 (95% confidence interval: 0.880-0.929). The cutoff value based on the Youden index was 0.214, with a sensitivity of 78.9% (67.6%-87.7%) and a specificity of 78.6% (69.5%-86.1%) at this value. CONCLUSIONS: The developed AI algorithm transferred well to another surgeon, reaching high accuracy in predicting rebubbling after DMEK from OCT image data.


Subjects
Descemet Stripping Endothelial Keratoplasty, Fuchs' Endothelial Dystrophy, Humans, Descemet Membrane/surgery, Artificial Intelligence, Prospective Studies, Visual Acuity, Descemet Stripping Endothelial Keratoplasty/methods, Algorithms, Retrospective Studies, Endothelium, Corneal, Fuchs' Endothelial Dystrophy/surgery
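The Youden-index cutoff used above picks the score threshold that maximizes J = sensitivity + specificity − 1. A minimal sketch of that selection over candidate thresholds (illustrative; the study's pipeline is not published as code):

```python
def youden_cutoff(y_true, scores):
    """Return (threshold, J) maximizing J = sensitivity + specificity - 1,
    scanning every distinct score as a candidate cutoff (predict 1 if
    score >= cutoff)."""
    best_j, best_cut = -1.0, None
    for cut in sorted(set(scores)):
        pred = [1 if s >= cut else 0 for s in scores]
        tp = sum(1 for t, p in zip(y_true, pred) if t == 1 and p == 1)
        fn = sum(1 for t, p in zip(y_true, pred) if t == 1 and p == 0)
        tn = sum(1 for t, p in zip(y_true, pred) if t == 0 and p == 0)
        fp = sum(1 for t, p in zip(y_true, pred) if t == 0 and p == 1)
        j = tp / (tp + fn) + tn / (tn + fp) - 1
        if j > best_j:
            best_j, best_cut = j, cut
    return best_cut, best_j
```

The sensitivity and specificity reported at the chosen cutoff are simply those of the thresholded predictions.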
11.
Sci Rep ; 12(1): 16036, 2022 09 26.
Article in English | MEDLINE | ID: mdl-36163451

ABSTRACT

This study aimed to develop a diagnostic software system to evaluate enlarged extraocular muscles (EEM) in patients with Graves' ophthalmopathy (GO) using a deep neural network. This prospective observational study involved 371 participants (199 EEM patients with GO and 172 controls with normal extraocular muscles) whose extraocular muscles were examined with orbital coronal computed tomography. A participant was classified as an EEM patient with GO when at least one rectus muscle (right or left superior, inferior, medial, or lateral) was 4.0 mm or larger. We used 222 patient images as training data, 74 images as validation data, and 75 images as test data to train the deep neural network to judge the thickness of the extraocular muscles on computed tomography, and then validated the performance of the network. In the test data, the area under the curve was 0.946 (95% confidence interval (CI) 0.894-0.998), and receiver operating characteristic analysis demonstrated 92.5% (95% CI 0.796-0.984) sensitivity and 88.6% (95% CI 0.733-0.968) specificity. The results suggest that the deep neural network-based system can detect EEM in patients with GO.


Subjects
Graves Ophthalmopathy, Oculomotor Muscles, Graves Ophthalmopathy/diagnostic imaging, Humans, Hypertrophy, Neural Networks, Computer, Oculomotor Muscles/diagnostic imaging, Prospective Studies, Tomography, X-Ray Computed
12.
J Clin Med ; 10(5)2021 Mar 06.
Article in English | MEDLINE | ID: mdl-33800825

ABSTRACT

The present study describes the use of machine learning (ML) to predict postoperative refraction after cataract surgery and compares the accuracy of this method with that of conventional intraocular lens (IOL) power calculation formulas. In total, 3331 eyes from 2010 patients were assessed. The data were divided into training and test sets. The constants for the IOL power calculation formulas and the ML models were optimized using the training data. Postoperative refraction was then predicted on the test data using the conventional formulas and the ML models. We evaluated the SRK/T, Haigis, Holladay 1, Hoffer Q, and Barrett Universal II (BU-II) formulas; as ML methods, we assessed support vector regression (SVR), random forest regression (RFR), gradient boosting regression (GBR), and a neural network (NN). Among the conventional formulas, BU-II had the lowest mean and median absolute prediction errors, so we compared the accuracy of our method with that of BU-II. The absolute errors of some ML methods were lower than those of BU-II, but no statistically significant difference was observed. Thus, the accuracy of our method was not inferior to that of BU-II.
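The mean and median absolute prediction errors used above to rank the formulas can be computed with a short sketch (illustrative only; the error is the absolute difference between predicted and achieved postoperative refraction per eye):

```python
import statistics

def prediction_errors(predicted, actual):
    """Mean and median absolute error between predicted and achieved
    postoperative refraction values (e.g., in diopters)."""
    abs_err = [abs(p - a) for p, a in zip(predicted, actual)]
    return sum(abs_err) / len(abs_err), statistics.median(abs_err)
```

Comparing formulas then amounts to computing these two numbers for each formula's predictions on the same test eyes.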

13.
Sci Rep ; 11(1): 18559, 2021 09 17.
Article in English | MEDLINE | ID: mdl-34535722

ABSTRACT

The efficacy of deep learning in predicting successful big-bubble (SBB) formation during deep anterior lamellar keratoplasty (DALK) was evaluated. Medical records of patients undergoing DALK at the University of Cologne, Germany between March 2013 and July 2019 were retrospectively analyzed. Patients were divided into two groups: (1) SBB or (2) failed big-bubble (FBB). Preoperative images of anterior segment optical coherence tomography and corneal biometric values (corneal thickness, corneal curvature, and densitometry) were evaluated. A deep neural network model, Visual Geometry Group-16, was selected to test the validation data, evaluate the model, create a heat map image, and calculate the area under the curve (AUC). This pilot study included 46 patients overall (11 women, 35 men). SBBs were more common in keratoconus eyes (KC eyes) than in corneal opacifications of other etiologies (non KC eyes) (p = 0.006). The AUC was 0.746 (95% confidence interval [CI] 0.603-0.889). The determination success rate was 78.3% (18/23 eyes) (95% CI 56.3-92.5%) for SBB and 69.6% (16/23 eyes) (95% CI 47.1-86.8%) for FBB. This automated system demonstrates the potential of SBB prediction in DALK. Although KC eyes had a higher SBB rate, no other specific findings were found in the corneal biometric data.


Subjects
Cornea/surgery, Corneal Transplantation, Deep Learning, Adult, Aged, Biometry, Corneal Transplantation/methods, Female, Humans, Keratoconus/surgery, Male, Middle Aged, Pilot Projects, Retrospective Studies
14.
J Clin Med ; 10(4)2021 Feb 18.
Article in English | MEDLINE | ID: mdl-33670732

ABSTRACT

We aimed to predict keratoconus progression and the need for corneal crosslinking (CXL) using deep learning (DL). Two hundred and seventy-four corneal tomography images taken with a Pentacam HR® (Oculus, Wetzlar, Germany) from 158 keratoconus patients were examined. All patients were examined two or more times and were divided into two groups: the progression group and the non-progression group. An axial map of the frontal corneal plane, a pachymetry map, and a combination of these two maps at the initial examination were assessed together with the patients' age. A convolutional neural network was trained on these data. Ninety eyes showed progression and 184 eyes showed no progression. The axial map, the pachymetry map, and their combination, each combined with patients' age, showed mean AUC values of 0.783, 0.784, and 0.814 (95% confidence intervals (0.721-0.845), (0.722-0.846), and (0.755-0.872), respectively), with sensitivities of 87.8%, 77.8%, and 77.8% ((79.2-93.7), (67.8-85.9), and (67.8-85.9)) and specificities of 59.8%, 65.8%, and 69.6% ((52.3-66.9), (58.4-72.6), and (62.4-76.1)), respectively. Using the proposed DL model, keratoconus progression can be predicted from corneal tomography maps combined with patients' age.

15.
J Ophthalmol ; 2021: 6651175, 2021.
Article in English | MEDLINE | ID: mdl-33884202

ABSTRACT

PURPOSE: The present study aimed to compare the accuracy of diabetic retinopathy (DR) staging with a deep convolutional neural network (DCNN) using two different types of fundus camera images and composite images. METHOD: The study included 491 ultra-wide-field fundus ophthalmoscopy and optical coherence tomography angiography (OCTA) images that passed an image-quality review and were graded as no apparent DR (NDR; 169 images), mild nonproliferative DR (NPDR; 76 images), moderate NPDR (54 images), severe NPDR (90 images), or proliferative DR (PDR; 102 images) by three retinal experts according to the International Clinical Diabetic Retinopathy Severity Scale. Tests 1 and 2, identifying NDR and PDR, respectively, were then assessed. For each verification, the DCNN was applied to Optos, OCTA, and combined Optos OCTA images. RESULT: For distinguishing NDR from DR, the Optos, OCTA, and Optos OCTA tests showed mean areas under the curve (AUC) of 0.790, 0.883, and 0.847; sensitivities of 80.9%, 83.9%, and 78.6%; and specificities of 55.0%, 71.6%, and 69.8%, respectively. For distinguishing NDR from PDR, they showed mean AUCs of 0.981, 0.928, and 0.964; sensitivities of 90.2%, 74.5%, and 80.4%; and specificities of 97.0%, 97.0%, and 96.4%, respectively. CONCLUSION: The combination of Optos and OCTA imaging with the DCNN could detect DR at desirable levels of accuracy and may be useful in clinical practice and retinal screening. Although combining multiple imaging techniques might overcome their individual weaknesses and provide comprehensive imaging, artificial intelligence classification of multimodal images has not always produced accurate results.

16.
Spine J ; 21(10): 1652-1658, 2021 10.
Article in English | MEDLINE | ID: mdl-33722728

ABSTRACT

BACKGROUND CONTEXT: Accurate diagnosis of osteoporotic vertebral fracture (OVF) is important for improving treatment outcomes; however, a gold standard has not yet been established. Deep-learning approaches based on convolutional neural networks (CNNs) have attracted attention in the medical imaging field. PURPOSE: To construct a CNN to detect fresh OVFs on magnetic resonance (MR) images. STUDY DESIGN/SETTING: Retrospective analysis of MR images. PATIENT SAMPLE: This retrospective study included 814 patients with fresh OVF. For CNN training and validation, 1624 slices of T1-weighted MR images were obtained and used. OUTCOME MEASURES: We plotted the receiver operating characteristic (ROC) curve and calculated the area under the curve (AUC) to evaluate the performance of the CNN. The sensitivity, specificity, and accuracy of the CNN diagnosis were then compared with those of two spine surgeons. METHODS: We constructed an optimal model using an ensemble method combining nine types of CNNs to detect fresh OVFs. Furthermore, two spine surgeons independently evaluated 100 vertebrae randomly extracted from the test data. RESULTS: The ensemble of VGG16, VGG19, DenseNet201, and ResNet50 was the combination with the highest AUC of the ROC curve, at 0.949. The evaluation metrics of the diagnosis (CNN/surgeon 1/surgeon 2) for the 100 vertebrae were as follows: sensitivity, 88.1%/88.1%/100%; specificity, 87.9%/86.2%/65.5%; accuracy, 88.0%/87.0%/80.0%. CONCLUSIONS: In detecting fresh OVFs on MR images, the performance of the CNN was comparable to that of the two spine surgeons.


Subjects
Artificial Intelligence, Osteoporotic Fractures, Humans, Magnetic Resonance Imaging, Osteoporotic Fractures/diagnostic imaging, Retrospective Studies, Spine
17.
J Clin Med ; 9(12)2020 Nov 30.
Article in English | MEDLINE | ID: mdl-33266345

ABSTRACT

Surgical skill levels of young ophthalmologists tend to be judged instinctively by practicing ophthalmologists, so a single ophthalmologist does not always receive a stable evaluation. Although standardizing skill levels is said to be difficult because surgical methods vary greatly, machine-learning approaches appear promising for this purpose. In this study, we propose a method for displaying, in real time, the information necessary to quantify surgical technique in cataract surgery. The proposed method consists of two steps. First, we use InceptionV3, an image classification network, to extract important surgical phases and to detect surgical problems. Next, a segmentation network, scSE-FC-DenseNet, is used to detect the cornea, the tip of the surgical instrument, and the incision site in the continuous curvilinear capsulorrhexis, a particularly important phase in cataract surgery. The first step is evaluated in terms of the area under the receiver operating characteristic curve (true-positive rate versus false-positive rate; AUC), and the second step in terms of the intersection over union (IoU) between the ground truth and the prediction for the region of interest. As a result, in the first step, the network was able to detect surgical problems with an AUC of 0.97. In the second step, the detection rate of the cornea was 99.7% at an IoU of 0.8 or more, and the detection rates of the forceps tip and the incision site were 86.9% and 94.9% at an IoU of 0.1 or more, respectively. The proposed method is thus expected to be one of the basic techniques for standardizing surgical skill levels.
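The intersection over union (IoU) used above to score segmentation quality is the overlap between the predicted and ground-truth regions divided by their union. A minimal sketch on flat binary masks (illustrative, not the study's code):

```python
def intersection_over_union(pred_mask, true_mask):
    """IoU between a predicted and a ground-truth binary mask, each given
    as an equal-length flat list of 0/1 pixel values."""
    inter = sum(1 for p, t in zip(pred_mask, true_mask) if p and t)
    union = sum(1 for p, t in zip(pred_mask, true_mask) if p or t)
    return inter / union
```

A detection is then counted as successful when its IoU meets the chosen threshold (0.8 for the cornea, 0.1 for the forceps tip and incision site above).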

18.
Sci Rep ; 10(1): 5640, 2020 03 27.
Article in English | MEDLINE | ID: mdl-32221317

ABSTRACT

This study was performed to estimate choroidal thickness by fundus photography, based on image processing and deep learning. Colour fundus photography and central choroidal thickness examinations were performed in 200 normal eyes and 200 eyes with central serous chorioretinopathy (CSC). Choroidal thickness under the fovea was measured using optical coherence tomography images. The adaptive binarisation method was used to delineate choroidal vessels within colour fundus photographs. Correlation coefficients were calculated between the choroidal vascular density (defined as the choroidal vasculature appearance index of the binarisation image) and choroidal thickness. The correlations between choroidal vasculature appearance index and choroidal thickness were -0.60 for normal eyes (p < 0.01) and -0.46 for eyes with CSC (p < 0.01). A deep convolutional neural network model was independently created and trained with augmented training data by K-Fold Cross Validation (K = 5). The correlation coefficients between the value predicted from the colour image and the true choroidal thickness were 0.68 for normal eyes (p < 0.01) and 0.48 for eyes with CSC (p < 0.01). Thus, choroidal thickness could be estimated from colour fundus photographs in both normal eyes and eyes with CSC, using imaging analysis and deep learning.


Subjects
Central Serous Chorioretinopathy/pathology, Choroid/physiology, Adolescent, Adult, Aged, Aged, 80 and over, Child, Choroid/blood supply, Color, Deep Learning, Female, Fluorescein Angiography/methods, Humans, Male, Middle Aged, Tomography, Optical Coherence/methods, Visual Acuity/physiology, Young Adult
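The K-fold cross-validation (K = 5) used above trains K models, each validated on a different held-out fifth of the data. A minimal index-splitting sketch (illustrative; real pipelines usually also shuffle and stratify):

```python
def k_fold_indices(n_samples, k):
    """Split sample indices 0..n_samples-1 into k folds; each fold serves
    once as the validation set while the rest form the training set.
    Returns a list of (train_indices, val_indices) pairs."""
    folds = [list(range(i, n_samples, k)) for i in range(k)]
    splits = []
    for i, val in enumerate(folds):
        train = [idx for j, f in enumerate(folds) for idx in f if j != i]
        splits.append((train, val))
    return splits
```

Metrics from the k validation folds are then averaged, so every sample contributes to validation exactly once.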
19.
Sci Rep ; 10(1): 19369, 2020 11 09.
Article in English | MEDLINE | ID: mdl-33168888

ABSTRACT

This study examined whether age and brachial-ankle pulse-wave velocity (baPWV) can be predicted from ultra-wide-field pseudo-color (UWPC) images using deep learning (DL). We examined 170 UWPC images of both eyes of 85 participants (40 men and 45 women; mean age, 57.5 ± 20.9 years). Three types of images (total, central, and peripheral) were included and analyzed by k-fold cross-validation (k = 5) using Visual Geometry Group-16. After bias was eliminated using a generalized linear mixed model, the standardized regression coefficients (SRCs) between the actual and predicted values of age and baPWV were calculated, and the prediction accuracies of the DL model for age and baPWV were examined. The SRC between actual and predicted age was 0.833 for total images, 0.818 for central images, and 0.649 for peripheral images (all P < 0.001), and that between actual and predicted baPWV was 0.390 for total images, 0.419 for central images, and 0.312 for peripheral images (all P < 0.001). These results show the potential of DL to predict age and vascular aging and could be useful for disease prevention and early treatment.


Subjects
Ankle Brachial Index, Color Perception, Deep Learning, Hypertension/physiopathology, Pulse Wave Analysis, Adult, Aged, Female, Humans, Male, Middle Aged
20.
PLoS One ; 15(4): e0227240, 2020.
Article in English | MEDLINE | ID: mdl-32298265

ABSTRACT

This study examined and compared the performance of deep learning (DL) in identifying swept-source optical coherence tomography (SS-OCT) images without myopic macular lesions [i.e., no high myopia (nHM) vs. high myopia (HM)] and SS-OCT images with myopic macular lesions [e.g., myopic choroidal neovascularization (mCNV) and retinoschisis (RS)]. A total of 910 SS-OCT images were included and analyzed by k-fold cross-validation (k = 5) using the well-established DL model Visual Geometry Group-16: nHM, 146 images; HM, 531 images; mCNV, 122 images; and RS, 111 images (n = 910). The binary classification of OCT images with or without myopic macular lesions, the binary classification of HM images versus images with myopic macular lesions (i.e., mCNV and RS images), and the ternary classification of HM, mCNV, and RS images were examined. Sensitivity, specificity, and the area under the curve (AUC) were examined for the binary classifications, and the correct answer rate for the ternary classification. The classification of OCT images with or without myopic macular lesions yielded an AUC of 0.970, a sensitivity of 90.6%, and a specificity of 94.2%. The classification of HM images versus images with myopic macular lesions yielded an AUC of 1.000, a sensitivity of 100.0%, and a specificity of 100.0%. The correct answer rates in the ternary classification of HM, mCNV, and RS images were 96.5%, 77.9%, and 67.6%, respectively, with a mean of 88.9%. Using noninvasive, easy-to-obtain SS-OCT images, the DL model was able to classify OCT images without myopic macular lesions and OCT images with myopic macular lesions such as mCNV and RS with high accuracy. These results suggest the possibility of highly accurate screening of ocular diseases using artificial intelligence, which may improve the prevention of blindness and reduce ophthalmologists' workloads.


Subjects
Choroidal Neovascularization/diagnosis, Deep Learning, Image Interpretation, Computer-Assisted/methods, Myopia/diagnosis, Retinoschisis/diagnosis, Adult, Aged, Blindness/prevention & control, Choroid/diagnostic imaging, Choroidal Neovascularization/complications, Datasets as Topic, Diagnosis, Differential, Female, Humans, Macula Lutea/diagnostic imaging, Male, Mass Screening/methods, Middle Aged, Myopia/etiology, ROC Curve, Retinoschisis/complications, Severity of Illness Index, Tomography, Optical Coherence
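The per-class correct answer rates reported above for the ternary HM/mCNV/RS classification are simply, for each class, the fraction of that class's images the model labeled correctly. A minimal sketch (illustrative; the class names are taken from the abstract):

```python
def per_class_accuracy(y_true, y_pred, classes):
    """Correct-answer rate for each class of a multi-class classifier:
    for class c, the fraction of samples with true label c that were
    predicted as c."""
    rates = {}
    for c in classes:
        idx = [i for i, t in enumerate(y_true) if t == c]
        rates[c] = sum(1 for i in idx if y_pred[i] == c) / len(idx)
    return rates
```

The overall mean correct answer rate weights each sample equally, so it need not equal the average of the per-class rates when class sizes differ.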