1.
Ultrasound Med Biol ; 50(8): 1262-1272, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38777640

ABSTRACT

OBJECTIVE: This study aimed to develop and evaluate a deep learning-based model that could automatically measure anterior segment (AS) parameters on preoperative ultrasound biomicroscopy (UBM) images of implantable Collamer lens (ICL) surgery candidates. METHODS: A total of 1164 panoramic UBM images were obtained preoperatively from 321 patients who underwent ICL surgery at the Eye Center of Renmin Hospital of Wuhan University (Wuhan, China) to build an imaging database. First, the UNet++ network was used to automatically segment AS tissues such as the cornea, lens, and iris. Next, image processing techniques and geometric localization algorithms were developed to automatically identify the anatomical landmarks (ALs) of pupil diameter (PD), anterior chamber depth (ACD), angle-to-angle distance (ATA), and sulcus-to-sulcus distance (STS). Based on the results of these two steps, PD, ACD, ATA, and STS could be measured. In addition, an external dataset of 294 images from Huangshi Aier Eye Hospital was used to assess the model's performance at another center. Finally, a random subset of 100 images from the external test set was chosen to compare the performance of the model with that of senior experts. RESULTS: In both the internal and external test datasets, with manual labeling as the reference standard, the model achieved a mean Dice coefficient exceeding 0.880. The intra-class correlation coefficients (ICCs) of the ALs' coordinates were all greater than 0.947, and more than 95.24% of ALs had a Euclidean distance from the reference within 250 µm. The ICCs for PD, ACD, ATA, and STS were greater than 0.957, and the average relative errors (AREs) of PD, ACD, ATA, and STS were below 2.41%. In the human-versus-machine comparison, the ICCs between the measurements made by the model and those made by senior experts were all greater than 0.931. CONCLUSION: A deep learning-based model could measure AS parameters on UBM images of ICL candidates, with performance similar to that of a senior ophthalmologist.
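The evaluation quantities named in this abstract (Dice coefficient, landmark agreement within 250 µm, and average relative error) are standard metrics and can be reproduced with short routines. The sketch below is illustrative only, not the authors' code; the array shapes and the `um_per_pixel` scale factor are assumptions, and the ICC values reported would typically be obtained with a dedicated statistics package (e.g., `pingouin.intraclass_corr`).

```python
# Illustrative sketch (not the authors' code): evaluation metrics of the kind
# reported above, assuming binary segmentation masks and paired measurements
# as NumPy arrays.
import numpy as np

def dice_coefficient(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice overlap between a predicted and a reference binary mask."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    return 2.0 * intersection / (pred.sum() + ref.sum() + 1e-8)

def average_relative_error(model: np.ndarray, manual: np.ndarray) -> float:
    """Mean relative error (%) of model measurements vs. manual reference."""
    return float(np.mean(np.abs(model - manual) / manual) * 100.0)

def landmarks_within(pred_xy: np.ndarray, ref_xy: np.ndarray,
                     threshold_um: float = 250.0,
                     um_per_pixel: float = 1.0) -> float:
    """Fraction of landmarks whose Euclidean error falls below a threshold.

    um_per_pixel is an assumed calibration factor from pixels to micrometers.
    """
    dist_um = np.linalg.norm(pred_xy - ref_xy, axis=1) * um_per_pixel
    return float(np.mean(dist_um <= threshold_um))
```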


Subject(s)
Anterior Eye Segment , Deep Learning , Microscopy, Acoustic , Humans , Microscopy, Acoustic/methods , Anterior Eye Segment/diagnostic imaging , Male , Female , Adult , Phakic Intraocular Lenses , Lens Implantation, Intraocular , Young Adult , Middle Aged , Image Processing, Computer-Assisted/methods
2.
Am J Ophthalmol ; 262: 178-185, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38360335

ABSTRACT

PURPOSE: To investigate the correlation between the open or closed state of the anterior chamber angle (ACA) and the density of limbal epithelial basal cells (LEBCs) in subjects with primary angle-closure glaucoma (PACG). DESIGN: Cross-sectional observational study. METHODS: A total of 54 eyes of 29 patients diagnosed with PACG were included in the study, and 54 eyes from normal subjects were included as controls. An automatic evaluation system for ultrasound biomicroscopy images of the anterior chamber angle was used to assist ophthalmologists in identifying the open or closed state of the ACA, and in vivo confocal microscopy (IVCM) was used to evaluate the density of LEBCs in different limbal regions. RESULTS: (1) The average density of LEBCs in the superior, inferior, nasal, and temporal limbus of eyes in the PACG group was lower than that in the control group, and the pattern did not follow the density distribution observed in the control group. (2) In early, moderate, and advanced PACG, the density of LEBCs corresponding to the closed angle was lower than that in the control group (P < .05). When the density of LEBCs corresponding to the closed angle was compared with that corresponding to the open angle, the closed-angle density was lower at the early, moderate, and advanced stages (P < .05 in the early and moderate stages; P > .05 in the advanced stage). (3) Basal cell density was normalized by dimensionless analysis. Whether calculated from the mean or the minimum, the dimensionless values for the closed angle were smaller than those for the open angle (P < .05). (4) Comparative analysis was conducted among the normal, open-angle, and closed-angle conditions in the superior, inferior, nasal, and temporal limbus. In early PACG, significant differences were observed in 4 limbal regions (P < .05); in moderate PACG, in 3 limbal regions (P < .05); and in advanced PACG, in 2 limbal regions (P < .05). These findings suggest that in early PACG, angle closure is the predominant factor influencing LEBC density, whereas in the advanced stage the decrease in density reflects a combination of angle closure and the natural progression of the disease. CONCLUSIONS: There is a significant correlation between anterior chamber angle status and LEBCs. Limbal stem cell deficiency (LSCD) should be strongly suspected in advanced PACG and in eyes with angle closure.
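The "dimensionless analysis" of basal cell density is not specified in detail in this abstract; the sketch below shows one plausible reading, in which each region's LEBC density is divided by the per-eye mean (or minimum) before closed-angle and open-angle sectors are compared. The column names, normalization choice, and the use of a Mann-Whitney U test are assumptions for illustration, not the authors' protocol.

```python
# Illustrative sketch (assumed reading of the "dimensionless analysis"), not the
# authors' protocol: normalize per-eye LEBC densities, then compare sectors.
import pandas as pd
from scipy.stats import mannwhitneyu

# Hypothetical long-format table: one row per eye x limbal region.
# Columns: eye_id, region (superior/inferior/nasal/temporal),
#          angle_state ("open" or "closed"), lebc_density (cells/mm^2).
df = pd.read_csv("lebc_densities.csv")  # hypothetical file

# Dimensionless values: density divided by the per-eye mean and, alternatively,
# by the per-eye minimum.
df["norm_mean"] = df["lebc_density"] / df.groupby("eye_id")["lebc_density"].transform("mean")
df["norm_min"] = df["lebc_density"] / df.groupby("eye_id")["lebc_density"].transform("min")

for col in ("norm_mean", "norm_min"):
    closed = df.loc[df["angle_state"] == "closed", col]
    open_ = df.loc[df["angle_state"] == "open", col]
    stat, p = mannwhitneyu(closed, open_, alternative="less")
    print(f"{col}: closed vs. open, U={stat:.1f}, one-sided p={p:.4f}")
```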


Subject(s)
Anterior Chamber , Glaucoma, Angle-Closure , Intraocular Pressure , Limbus Corneae , Microscopy, Acoustic , Microscopy, Confocal , Stem Cells , Humans , Glaucoma, Angle-Closure/diagnosis , Glaucoma, Angle-Closure/physiopathology , Cross-Sectional Studies , Limbus Corneae/pathology , Limbus Corneae/diagnostic imaging , Male , Female , Middle Aged , Anterior Chamber/diagnostic imaging , Anterior Chamber/pathology , Cell Count , Aged , Stem Cells/pathology , Intraocular Pressure/physiology , Gonioscopy , Limbal Stem Cell Deficiency
3.
Ultrasound Med Biol ; 49(12): 2497-2509, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37730479

ABSTRACT

OBJECTIVE: The goal of this work was to develop and assess a deep learning-based model that could automatically segment anterior chamber angle (ACA) tissues; classify iris curvature (I-Curv), iris root insertion (IRI), and angle closure (AC); automatically locate the scleral spur; and measure ACA parameters in ultrasound biomicroscopy (UBM) images. METHODS: A total of 11,006 UBM images were obtained from 1538 patients with primary angle-closure glaucoma admitted to the Eye Center of Renmin Hospital of Wuhan University (Wuhan, China) to build an imaging database. The UNet++ network was used to segment ACA tissues automatically. In addition, two support vector machine (SVM) algorithms were developed to classify I-Curv and AC, and a logistic regression (LR) algorithm was developed to classify IRI. An algorithm was also developed to automatically locate the scleral spur and measure ACA parameters. An external dataset of 1,658 images from Huangshi Aier Eye Hospital was used to evaluate the performance of the model under different conditions, and an additional 439 images were collected to compare the performance of the model with that of experts. RESULTS: The model achieved accuracies of 95.2%, 88.9%, and 85.6% in the classification of AC, I-Curv, and IRI, respectively. In the comparison with ophthalmologists, the model achieved an accuracy of 0.765 in classifying AC, I-Curv, and IRI, which was as high as that of the ophthalmologists (p > 0.05). The average relative errors (AREs) of the ACA parameters were less than 15% in the internal datasets. Intraclass correlation coefficients (ICCs) of all angle-related parameters were greater than 0.911, and ICCs of all iris thickness parameters were greater than 0.884. Accurate measurement of ACA parameters depended in part on accurate localization of the scleral spur (p < 0.001). CONCLUSION: The model could effectively and accurately evaluate the ACA based on fully automated analysis of UBM images, and it is potentially a promising tool to assist ophthalmologists. The present study suggests that the deep learning model can be broadly applied to the evaluation of the ACA and AC-related biometric risk factors, and it may extend the use of UBM imaging in clinical research on primary angle-closure glaucoma.
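The abstract names the classifier families (SVMs for I-Curv and AC, logistic regression for IRI) but not the features that feed them. The sketch below only illustrates how such classifiers are commonly fit on geometric features derived from segmented ACA images; the feature files, label files, and train/test split are hypothetical placeholders, not the study's pipeline.

```python
# Illustrative sketch, not the study pipeline: fit the classifier types named
# above (SVMs for I-Curv / AC, logistic regression for IRI) on geometric
# features derived from segmented UBM images. File names are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X: per-image geometric features (e.g., angle-opening and iris-shape
# descriptors); y_*: expert labels. All placeholders here.
X = np.load("aca_features.npy")
y_ac = np.load("labels_angle_closure.npy")
y_icurv = np.load("labels_iris_curvature.npy")
y_iri = np.load("labels_iris_root_insertion.npy")

def fit_and_score(model, X, y) -> float:
    """Hold out 20% of samples, fit the model, and return test accuracy."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    model.fit(X_tr, y_tr)
    return model.score(X_te, y_te)

acc_ac = fit_and_score(make_pipeline(StandardScaler(), SVC(kernel="rbf")), X, y_ac)
acc_icurv = fit_and_score(make_pipeline(StandardScaler(), SVC(kernel="rbf")), X, y_icurv)
acc_iri = fit_and_score(make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)), X, y_iri)
print(f"AC: {acc_ac:.3f}  I-Curv: {acc_icurv:.3f}  IRI: {acc_iri:.3f}")
```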


Subject(s)
Deep Learning , Glaucoma, Angle-Closure , Humans , Glaucoma, Angle-Closure/diagnostic imaging , Microscopy, Acoustic/methods , Gonioscopy , Tomography, Optical Coherence/methods , Anterior Chamber
4.
Front Med (Lausanne) ; 10: 1164188, 2023.
Article in English | MEDLINE | ID: mdl-37153082

ABSTRACT

Objective: To automatically and rapidly recognize the layers of corneal images obtained with in vivo confocal microscopy (IVCM) and classify them as normal or abnormal, a deep learning-based computer-aided diagnostic model was developed and tested to reduce physicians' workload. Methods: A total of 19,612 corneal images were retrospectively collected from 423 patients who underwent IVCM between January 2021 and August 2022 at Renmin Hospital of Wuhan University (Wuhan, China) and Zhongnan Hospital of Wuhan University (Wuhan, China). Images were reviewed and categorized by three corneal specialists before the models were trained and tested; these comprised a layer recognition model (epithelium, Bowman's membrane, stroma, and endothelium) and a diagnostic model, which together identify the layer of each corneal image and distinguish normal from abnormal images. In total, 580 database-independent IVCM images were used in a human-machine competition to assess the speed and accuracy of image recognition by 4 ophthalmologists and the artificial intelligence (AI) model. To evaluate the efficacy of the model, 8 trainees were asked to recognize these 580 images both with and without model assistance, and the two sets of results were compared to explore the effect of model assistance. Results: In the internal test dataset, the accuracy of the model for recognizing the 4 layers (epithelium, Bowman's membrane, stroma, and endothelium) was 0.914, 0.957, 0.967, and 0.950, respectively, and the accuracy for recognizing normal/abnormal images at each layer was 0.961, 0.932, 0.945, and 0.959, respectively. In the external test dataset, the accuracy of corneal layer recognition was 0.960, 0.965, 0.966, and 0.964, respectively, and the accuracy of normal/abnormal image recognition was 0.983, 0.972, 0.940, and 0.982, respectively. In the human-machine competition, the model achieved an accuracy of 0.929, which was similar to that of the specialists and higher than that of the senior physicians, and its recognition speed was 237 times faster than that of the specialists. With model assistance, the accuracy of the trainees increased from 0.712 to 0.886. Conclusion: A computer-aided diagnostic model based on deep learning was developed for IVCM images; it rapidly recognizes the layers of corneal images and classifies them as normal or abnormal. This model can increase the efficiency of clinical diagnosis and assist physicians in training and learning for clinical purposes.
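The abstract describes a two-stage design: a layer-recognition model followed by a per-layer normal/abnormal diagnostic model. It does not name the network backbone, so the sketch below uses a generic ResNet-18 from torchvision purely to illustrate the cascade at inference time; the weight files, class orderings, and preprocessing are assumptions, not the published model.

```python
# Illustrative sketch of the two-stage cascade described above; the backbone,
# weight files, and class orderings are assumptions, not the published model.
import torch
from torchvision import models, transforms
from PIL import Image

LAYERS = ["epithelium", "bowmans_membrane", "stroma", "endothelium"]

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.Grayscale(num_output_channels=3),  # IVCM frames are grayscale
    transforms.ToTensor(),
])

def load_classifier(weights_path: str, num_classes: int) -> torch.nn.Module:
    """Build a ResNet-18 head with the given class count and load saved weights."""
    net = models.resnet18(weights=None)
    net.fc = torch.nn.Linear(net.fc.in_features, num_classes)
    net.load_state_dict(torch.load(weights_path, map_location="cpu"))
    return net.eval()

layer_net = load_classifier("layer_recognition.pt", num_classes=4)  # hypothetical file
diag_nets = {l: load_classifier(f"diagnosis_{l}.pt", num_classes=2) for l in LAYERS}  # hypothetical

@torch.no_grad()
def classify(image_path: str) -> tuple[str, str]:
    """Stage 1: predict the corneal layer. Stage 2: normal/abnormal for that layer."""
    x = preprocess(Image.open(image_path)).unsqueeze(0)
    layer = LAYERS[layer_net(x).argmax(dim=1).item()]
    verdict = ["normal", "abnormal"][diag_nets[layer](x).argmax(dim=1).item()]
    return layer, verdict
```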
