Results 1 - 18 of 18
1.
Med Phys ; 2024 Oct 12.
Article in English | MEDLINE | ID: mdl-39395206

ABSTRACT

BACKGROUND: Although the uterus, bladder, and rectum are distinct organs, their muscular fasciae are often interconnected. Clinical experience suggests that they may share common risk factors and associations. When one organ experiences prolapse, it can potentially affect the neighboring organs. However, the current assessment of disease severity still relies on manual measurements, which can yield varying results depending on the physician, thereby leading to diagnostic inaccuracies. PURPOSE: This study aims to develop a multilabel grading model based on deep learning to classify the degree of prolapse of three organs in the female pelvis using stress magnetic resonance imaging (MRI) and provide interpretable result analysis. METHODS: We utilized sagittal MRI sequences taken at rest and during maximum Valsalva maneuver from 662 subjects. The training, validation, and test sets included 464, 98, and 100 subjects, respectively. We designed a feature extraction module specifically for pelvic floor MRI using the vision transformer architecture and employed a label-masking training strategy and pre-training methods to enhance model convergence. The grading results were evaluated using Precision, Kappa, Recall, and Area Under the Curve (AUC). To validate its effectiveness, the designed model was compared with classic grading methods. Finally, we provided interpretability charts illustrating the model's operational principles on the grading task. RESULTS: In terms of pelvic organ prolapse (POP) grading detection, the model achieved an average Precision, Kappa coefficient, Recall, and AUC of 0.86, 0.77, 0.76, and 0.86, respectively. Compared to existing studies, our model achieved the highest performance metrics. The average time taken to diagnose a patient was 0.38 s.
CONCLUSIONS: The proposed model achieved detection accuracy comparable to, or even exceeding, that of physicians, demonstrating the effectiveness of the vision transformer architecture and the label-masking training strategy for assisting in the grading of POP under static and maximum Valsalva conditions. This offers a promising option for computer-aided diagnosis and treatment planning of POP.
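The label-masking strategy described above (computing the grading loss only on organs whose prolapse grade is annotated) can be sketched in plain NumPy. This is a minimal illustration; all function and variable names are assumptions, not details from the paper:

```python
import numpy as np

def masked_grading_loss(logits, labels, mask):
    """Cross-entropy over per-organ grading heads, ignoring masked (unlabeled) organs.

    logits: (n_organs, n_grades) raw scores, one row per organ head
    labels: (n_organs,) integer grade per organ
    mask:   (n_organs,) 1 if the organ's grade is annotated, 0 otherwise
    """
    # softmax per organ head (numerically stabilized)
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs = e / e.sum(axis=1, keepdims=True)
    # negative log-likelihood of the true grade for each organ
    nll = -np.log(probs[np.arange(len(labels)), labels])
    # average only over annotated organs
    return (nll * mask).sum() / max(mask.sum(), 1)

# Example: bladder and rectum grades annotated, uterus grade unknown (masked out)
logits = np.array([[2.0, 0.5, 0.1], [0.2, 1.5, 0.3], [0.1, 0.1, 3.0]])
labels = np.array([0, 1, 2])
mask = np.array([1, 0, 1])
loss = masked_grading_loss(logits, labels, mask)
```

The masked organ contributes nothing to the gradient, so partially annotated subjects can still be used for training.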

2.
Int J Ophthalmol ; 17(7): 1184-1192, 2024.
Article in English | MEDLINE | ID: mdl-39026919

ABSTRACT

AIM: To evaluate the application of an intelligent diagnostic model for pterygium. METHODS: For intelligent diagnosis of pterygium, the attention mechanisms SENet, ECANet, CBAM, and Self-Attention were fused with the lightweight MobileNetV2 model structure to construct a tri-classification model. The study used 1220 anterior segment images spanning the three pterygium categories, provided by the Eye Hospital of Nanjing Medical University. Conventional classification models (VGG16, ResNet50, MobileNetV2, and EfficientNetB7) were trained on the same dataset for comparison. To evaluate model performance in terms of accuracy, Kappa value, test time, sensitivity, specificity, area under the curve (AUC), and visual heat maps, 470 anterior segment test images were used. RESULTS: The accuracy of the MobileNetV2+Self-Attention model, with a model size of 281 MB, was 92.77%, and its Kappa value was 88.92%. The testing time was 9 ms/image on the server and 138 ms/image on a local computer. The sensitivity, specificity, and AUC for diagnosis using normal anterior segment images were 99.47%, 100%, and 100%, respectively; using observation-period anterior segment images, they were 88.30%, 95.32%, and 96.70%, respectively; and using surgery-period anterior segment images, they were 88.18%, 94.44%, and 97.30%, respectively. CONCLUSION: The developed model is lightweight and can be used not only for detection but also for assessing the severity of pterygium.
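A squeeze-and-excitation (SE) channel-attention block of the kind fused into MobileNetV2 in such models can be sketched as a NumPy forward pass. The weights and tensor sizes below are arbitrary illustrations, not the paper's implementation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_block(feature_map, w1, w2):
    """Squeeze-and-excitation: reweight channels by globally pooled statistics.

    feature_map: (C, H, W) activations
    w1: (C//r, C) squeeze projection; w2: (C, C//r) excitation projection
    """
    squeezed = feature_map.mean(axis=(1, 2))      # global average pool -> (C,)
    hidden = np.maximum(w1 @ squeezed, 0.0)       # bottleneck + ReLU
    weights = sigmoid(w2 @ hidden)                # per-channel gates in (0, 1)
    return feature_map * weights[:, None, None]   # rescale each channel

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4, 4))                # 8 channels, 4x4 spatial map
w1 = rng.standard_normal((2, 8)) * 0.1            # reduction ratio r = 4
w2 = rng.standard_normal((8, 2)) * 0.1
out = se_block(x, w1, w2)
```

Because each gate lies in (0, 1), the block can only attenuate channels, letting the network emphasize diagnostically informative feature maps.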

3.
Front Neurosci ; 18: 1339075, 2024.
Article in English | MEDLINE | ID: mdl-38808029

ABSTRACT

Aim: Conventional approaches to diagnosing common eye diseases using B-mode ultrasonography are labor-intensive and time-consuming, and require expert intervention for accuracy. This study aims to address these challenges by proposing an intelligence-assisted five-classification analysis model for diagnosing common eye diseases using B-mode ultrasound images. Methods: This research utilizes 2064 B-mode ultrasound images of the eye to train a novel model integrating artificial intelligence technology. Results: The ConvNeXt-L model achieved outstanding performance with an accuracy rate of 84.3% and a Kappa value of 80.3%. Across five classifications (no obvious abnormality, vitreous opacity, posterior vitreous detachment, retinal detachment, and choroidal detachment), the model demonstrated sensitivity values of 93.2%, 67.6%, 86.1%, 89.4%, and 81.4%, respectively, and specificity values ranging from 94.6% to 98.1%. F1 scores ranged from 71% to 92%, while AUC values ranged from 89.7% to 97.8%. Conclusion: Among the models compared, the ConvNeXt-L model exhibited superior performance. It effectively categorizes and visualizes pathological changes, providing essential assistive information for ophthalmologists and enhancing diagnostic accuracy and efficiency.

4.
Med Phys ; 51(8): 5236-5249, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38767532

ABSTRACT

BACKGROUND: Bladder prolapse is a common clinical disorder of pelvic floor dysfunction in women, and early diagnosis and treatment can help them recover. Pelvic magnetic resonance imaging (MRI) is one of the most important methods used by physicians to diagnose bladder prolapse; however, it is highly subjective and largely dependent on the clinical experience of physicians. The application of computer-aided diagnostic techniques to achieve a graded diagnosis of bladder prolapse can help improve its accuracy and shorten the learning curve. PURPOSE: The purpose of this study is to combine a convolutional neural network (CNN) and a vision transformer (ViT) for grading bladder prolapse in place of traditional neural networks, and to incorporate attention mechanisms into the mobile vision transformer (MobileViT) to assist in the grading of bladder prolapse. METHODS: This study focuses on the grading of bladder prolapse in pelvic organs using a combination of a CNN and a ViT. First, this study used MobileNetV2 to extract the local features of the images. Next, a ViT was used to extract the global features by modeling the non-local dependencies at a distance. Finally, a channel attention module (i.e., a squeeze-and-excitation network) was used to improve the feature extraction network and enhance its feature representation capability. The final grading of the degree of bladder prolapse was thus achieved. RESULTS: Using pelvic MRI images provided by Huzhou Maternal and Child Health Care Hospital, this study used the proposed method to grade patients with bladder prolapse. The accuracy, Kappa value, sensitivity, specificity, precision, and area under the curve of the proposed method were 86.34%, 78.27%, 83.75%, 95.43%, 85.70%, and 95.05%, respectively. In comparison with other CNN models, the proposed method performed better.
CONCLUSIONS: Thus, the model based on attention mechanisms exhibits better classification performance than existing methods for grading bladder prolapse in pelvic organs, and it can effectively assist physicians in achieving a more accurate bladder prolapse diagnosis.


Subject(s)
Magnetic Resonance Imaging , Magnetic Resonance Imaging/methods , Humans , Female , Neural Networks, Computer , Image Processing, Computer-Assisted/methods , Pelvis/diagnostic imaging , Urinary Bladder Diseases/diagnostic imaging
5.
Sci Rep ; 14(1): 883, 2024 Jan 09.
Article in English | MEDLINE | ID: mdl-38195826

ABSTRACT

The effectiveness of sequence-to-sequence (seq2seq) models in natural language processing has been well established over time, and recent studies have extended their utility by treating mathematical computing tasks as instances of machine translation, achieving remarkable results. However, our exploratory experiments have revealed that the seq2seq model, when employing a generic sorting strategy, is incapable of inference on matrices of unseen rank, resulting in suboptimal performance. This paper aims to address this limitation by focusing on the matrix-to-sequence process and proposing a novel diagonal-based sorting method. The method constructs a stable ordering structure of elements for the shared leading principal submatrix sections in matrices with varying ranks. We conduct experiments involving maximal independent sets and Sudoku laws, comparing seq2seq models utilizing different sorting methods. Our findings demonstrate the advantages of the proposed diagonal-based sorting in inference, particularly when dealing with matrices of unseen ranks. By introducing and advocating for this method, we enhance the suitability of seq2seq models for investigating the laws of matrix inclusion and exploring their potential in solving matrix-related tasks.
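One way to realize such a rank-stable ordering is to serialize the matrix shell by shell, so that every k x k leading principal submatrix occupies a fixed prefix of the sequence. The sketch below is an illustrative reconstruction of that idea; the paper's exact diagonal ordering may differ:

```python
def diagonal_serialize(matrix):
    """Flatten a square matrix shell by shell so that every k x k leading
    principal submatrix occupies a stable prefix of the output sequence.

    Shell k contributes row k's first k+1 entries and column k's first k
    entries, i.e. exactly the elements that the (k+1) x (k+1) leading
    principal submatrix adds over the k x k one.
    """
    n = len(matrix)
    seq = []
    for k in range(n):
        seq.extend(matrix[k][j] for j in range(k + 1))   # new row segment
        seq.extend(matrix[i][k] for i in range(k))       # new column segment
    return seq

m3 = [[1, 2, 3],
      [4, 5, 6],
      [7, 8, 9]]
m2 = [[1, 2],
      [4, 5]]   # the 2 x 2 leading principal submatrix of m3
```

Because `diagonal_serialize(m2)` is a prefix of `diagonal_serialize(m3)`, a model trained on low-rank matrices sees the same token prefix when a higher-rank matrix shares the leading principal submatrix.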

6.
Indian J Ophthalmol ; 72(Suppl 1): S53-S59, 2024 Jan 01.
Article in English | MEDLINE | ID: mdl-38131543

ABSTRACT

PURPOSE: We aimed to develop an artificial intelligence-based myopic maculopathy grading method using EfficientNet to overcome the delayed grading and diagnosis of different degrees of myopic maculopathy. METHODS: The cooperating hospital provided 4642 color fundus photographs comprising the four degrees of myopic maculopathy and healthy fundi. The myopic maculopathy grading models were trained using the EfficientNet-B0 to EfficientNet-B7 models, and the diagnostic results were compared with those of the VGG16 and ResNet50 classification models. The leading evaluation indicators were sensitivity, specificity, F1 score, area under the receiver operating characteristic (ROC) curve (AUC), 95% confidence interval, kappa value, and accuracy. The ROC curves of the ten grading models were also compared. RESULTS: We used 1199 color fundus photographs to evaluate the myopic maculopathy grading models. The size of the EfficientNet-B0 myopic maculopathy grading model was 15.6 MB, and it had the highest kappa value (88.32%) and accuracy (83.58%). The model's sensitivities for diagnosing tessellated fundus (TF), diffuse chorioretinal atrophy (DCA), patchy chorioretinal atrophy (PCA), and macular atrophy (MA) were 96.86%, 75.98%, 64.67%, and 88.75%, respectively. The specificity was above 93%, and the AUCs were 0.992, 0.960, 0.964, and 0.989, respectively. CONCLUSION: The EfficientNet models were used to design grading diagnostic models for myopic maculopathy. Based on the collected fundus images, the models could diagnose a healthy fundus and four types of myopic maculopathy. The models might help ophthalmologists make preliminary diagnoses of different degrees of myopic maculopathy.


Subject(s)
Macular Degeneration , Myopia, Degenerative , Retinal Diseases , Humans , Myopia, Degenerative/diagnosis , Visual Acuity , Artificial Intelligence , Risk Factors , Retinal Diseases/diagnosis , Atrophy
7.
Int J Ophthalmol ; 16(9): 1386-1394, 2023.
Article in English | MEDLINE | ID: mdl-37724272

ABSTRACT

Pterygium is a prevalent ocular disease that can cause discomfort and vision impairment. Early and accurate diagnosis is essential for effective management. Recently, artificial intelligence (AI) has shown promising potential in assisting clinicians with pterygium diagnosis. This paper provides an overview of AI-assisted pterygium diagnosis, including the AI techniques used, such as machine learning, deep learning, and computer vision. Furthermore, recent studies that evaluated the diagnostic performance of AI-based systems for pterygium detection, classification, and segmentation are summarized. The advantages and limitations of AI-assisted pterygium diagnosis are also analyzed, and potential future developments in this field are discussed. The review aims to provide insights into the current state of the art of AI and its potential applications in pterygium diagnosis, which may facilitate the development of more efficient and accurate diagnostic tools for this common ocular disease.

8.
Int J Ophthalmol ; 16(7): 995-1004, 2023.
Article in English | MEDLINE | ID: mdl-37465510

ABSTRACT

AIM: To classify high myopic maculopathy (HMM), covering tessellated fundus, diffuse chorioretinal atrophy, patchy chorioretinal atrophy, and macular atrophy, using limited datasets while minimizing annotation costs, and to optimize the ALFA-Mix active learning algorithm and apply it to HMM classification. METHODS: The optimized ALFA-Mix algorithm (ALFA-Mix+) was compared with five algorithms, including ALFA-Mix. Four models, including ResNet18, were established, and each algorithm was combined with each model for experiments on the HMM dataset. Each experiment consisted of 20 active learning rounds, with 100 images selected per round. The algorithms were evaluated by comparing the number of rounds in which ALFA-Mix+ outperformed the others. Finally, this study employed six models, including EfficientFormer, to classify HMM; the best-performing model was selected as the baseline and combined with the ALFA-Mix+ algorithm to achieve satisfactory classification results with a small dataset. RESULTS: ALFA-Mix+ outperformed the other algorithms by an average of 16.6, 14.75, 16.8, and 16.7 rounds in terms of accuracy, sensitivity, specificity, and Kappa value, respectively. This study also conducted experiments on classifying HMM using several advanced deep learning models with a complete training set of 4252 images. EfficientFormer achieved the best results, with an accuracy, sensitivity, specificity, and Kappa value of 0.8821, 0.8334, 0.9693, and 0.8339, respectively. By combining ALFA-Mix+ with EfficientFormer, this study achieved an accuracy, sensitivity, specificity, and Kappa value of 0.8964, 0.8643, 0.9721, and 0.8537, respectively. CONCLUSION: The ALFA-Mix+ algorithm reduces the number of required samples without compromising accuracy, outperforms the other algorithms in more rounds of experiments, and more effectively selects valuable samples. In HMM classification, combining ALFA-Mix+ with EfficientFormer enhances model performance, further demonstrating the effectiveness of ALFA-Mix+.
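The pool-based active learning protocol described in the methods (20 rounds, 100 images per round) can be sketched generically as follows. A simple score-based selector stands in for the actual ALFA-Mix+ criterion, which interpolates unlabeled features with labeled anchors; all names here are illustrative assumptions:

```python
import numpy as np

def active_learning_rounds(pool_scores, rounds=20, per_round=100):
    """Generic pool-based loop: each round, move the `per_round` unlabeled
    samples with the highest acquisition score into the labeled set.
    In practice the scores would be recomputed after retraining each round.
    """
    unlabeled = set(range(len(pool_scores)))
    labeled = []
    for _ in range(rounds):
        if not unlabeled:
            break
        candidates = sorted(unlabeled, key=lambda i: pool_scores[i], reverse=True)
        picked = candidates[:per_round]
        labeled.extend(picked)
        unlabeled.difference_update(picked)
    return labeled

rng = np.random.default_rng(1)
scores = rng.random(4252)          # one acquisition score per pool image
selected = active_learning_rounds(scores, rounds=20, per_round=100)
# 20 rounds x 100 images = 2000 labeled samples drawn from the 4252-image pool
```

The comparison in the abstract counts, per metric, how many of these rounds one acquisition strategy beats another, which is why results are reported in "rounds".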

9.
Indian J Ophthalmol ; 71(5): 2115-2131, 2023 05.
Article in English | MEDLINE | ID: mdl-37203092

ABSTRACT

Purpose: Recently, the proportion of patients with high myopia has shown a continuous growing trend, with a shift toward younger age groups. This study aimed to predict changes in spherical equivalent refraction (SER) and axial length (AL) in children using machine learning methods. Methods: This is a retrospective study. The cooperating ophthalmology hospital provided data on 179 sets of childhood myopia examinations; the data collected included AL and SER from grades 1 to 6. Six machine learning models were used to predict AL and SER based on the data, and six evaluation indicators were used to evaluate the prediction results. Results: For predicting SER in grades 6, 5, 4, 3, and 2, the best results were obtained with the multilayer perceptron (MLP) algorithm, MLP algorithm, orthogonal matching pursuit (OMP) algorithm, OMP algorithm, and OMP algorithm, respectively; the R² values of the five models were 0.8997, 0.7839, 0.7177, 0.5118, and 0.1758, respectively. For predicting AL in grades 6, 5, 4, 3, and 2, the best results were obtained with the Extra Tree (ET) algorithm, MLP algorithm, kernel ridge (KR) algorithm, KR algorithm, and MLP algorithm, respectively; the R² values of the five models were 0.7546, 0.5456, 0.8755, 0.9072, and 0.8534, respectively. Conclusion: In predicting SER, the OMP model performed better than the other models in most experiments; in predicting AL, the KR and MLP models were better than the other models in most experiments.
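The R² values reported above measure how much of the variance in SER or AL the predictions explain. A minimal sketch, using hypothetical refraction values rather than the study's data:

```python
import numpy as np

def r2_score(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)            # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)     # total variance around the mean
    return 1.0 - ss_res / ss_tot

# Hypothetical grade-6 SER values (diopters) and a model's predictions
actual = [-3.0, -1.5, -4.2, -0.5, -2.8]
pred   = [-2.9, -1.7, -4.0, -0.7, -2.5]
score = r2_score(actual, pred)   # close to 1 for a good fit
```

An R² of 0.8997, as reported for grade-6 SER, means the model explains roughly 90% of the variance; the 0.1758 for grade 2 means predictions that far ahead explain little.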


Subject(s)
Myopia , Refraction, Ocular , Humans , Child , Retrospective Studies , Vision Tests , Myopia/diagnosis , Myopia/epidemiology , Machine Learning , Axial Length, Eye
10.
Front Comput Neurosci ; 16: 1079155, 2022.
Article in English | MEDLINE | ID: mdl-36568576

ABSTRACT

Purpose: To assess the value of an automated classification model for dry and wet macular degeneration based on the ConvNeXT model. Methods: A total of 672 fundus images of normal, dry, and wet macular degeneration were collected from the Affiliated Eye Hospital of Nanjing Medical University, and the fundus images of dry macular degeneration were expanded. The ConvNeXT three-category model was trained on the original and expanded datasets, and compared to the results of the VGG16, ResNet18, ResNet50, EfficientNetB7, and RegNet three-category models. A total of 289 fundus images were used to test the models, and the classification results of the models on the different datasets were compared. The main evaluation indicators were sensitivity, specificity, F1-score, area under the curve (AUC), accuracy, and kappa. Results: Using the 289 fundus images, the three-category models trained on the original and expanded datasets were assessed. The ConvNeXT model trained on the expanded dataset was the most effective, with a diagnostic accuracy of 96.89%, a kappa value of 94.99%, and high diagnostic consistency. The sensitivity, specificity, F1-score, and AUC values for normal fundus images were 100.00, 99.41, 99.59, and 99.80%, respectively. The sensitivity, specificity, F1-score, and AUC values for dry macular degeneration diagnosis were 87.50, 98.76, 90.32, and 97.10%, respectively. The sensitivity, specificity, F1-score, and AUC values for wet macular degeneration diagnosis were 97.52, 97.02, 96.72, and 99.10%, respectively. Conclusion: The ConvNeXT-based three-category model automatically identified dry and wet macular degeneration, aiding rapid and accurate clinical diagnosis.

11.
J Healthc Eng ; 2022: 3942110, 2022.
Article in English | MEDLINE | ID: mdl-36451763

ABSTRACT

A two-category model and a segmentation model of pterygium were proposed to assist ophthalmologists in establishing the diagnosis of ophthalmic diseases. A total of 367 normal anterior segment images and 367 pterygium anterior segment images were collected at the Affiliated Eye Hospital of Nanjing Medical University. AlexNet, VGG16, ResNet18, and ResNet50 models were used to train the two-category pterygium models. A total of 150 normal and 150 pterygium anterior segment images were used to test the models, and the results were compared. The main evaluation indicators, including sensitivity, specificity, area under the curve, kappa value, and receiver operating characteristic curves of the four models, were compared. Simultaneously, 367 pterygium anterior segment images were used to train two improved pterygium segmentation models based on PSPNet. A total of 150 pterygium images were used to test the models, and the results were compared with those of four other segmentation models. The main evaluation indicators included mean intersection over union (MIOU), IOU, mean pixel accuracy (MPA), and pixel accuracy (PA). Among the two-category models of pterygium, the best diagnostic result was obtained using the VGG16 model: the diagnostic accuracy, kappa value, diagnostic sensitivity of pterygium, diagnostic specificity of pterygium, and F1-score were 99%, 98%, 98.67%, 99.33%, and 99%, respectively. Among the pterygium segmentation models, the double phase-fusion PSPNet model had the best results, with MIOU, IOU, MPA, and PA of 86.57%, 78.1%, 92.3%, and 86.96%, respectively. This study designed a pterygium two-category model and a pterygium segmentation model for normal anterior segment and pterygium anterior segment images, which could help patients self-screen easily and assist ophthalmologists in establishing the diagnosis of ophthalmic diseases and marking the actual scope of surgery.
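The segmentation metrics used here (PA and MIoU) can both be derived from a per-pixel confusion matrix. A minimal NumPy sketch with a toy two-class example:

```python
import numpy as np

def segmentation_metrics(pred, target, n_classes):
    """Pixel accuracy (PA) and mean intersection-over-union (MIoU)
    from per-pixel class labels."""
    cm = np.zeros((n_classes, n_classes), dtype=np.int64)
    for t, p in zip(target.ravel(), pred.ravel()):
        cm[t, p] += 1                              # rows: truth, cols: prediction
    pa = np.trace(cm) / cm.sum()                   # fraction of correctly labeled pixels
    tp = np.diag(cm)
    # IoU per class: TP / (TP + FP + FN); MIoU averages over classes
    iou = tp / (cm.sum(axis=0) + cm.sum(axis=1) - tp)
    return pa, np.nanmean(iou)

target = np.array([[0, 0, 1, 1],
                   [0, 1, 1, 1]])    # ground-truth mask (1 = pterygium region)
pred   = np.array([[0, 0, 1, 0],
                   [0, 1, 1, 1]])    # predicted mask with one missed pixel
pa, miou = segmentation_metrics(pred, target, n_classes=2)
```

MIoU is stricter than PA because a small lesion class can drag the mean down even when overall pixel accuracy is high.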


Subject(s)
Deep Learning , Pterygium , Humans , Pterygium/diagnostic imaging , Research , Universities
12.
Front Neurol ; 13: 949805, 2022.
Article in English | MEDLINE | ID: mdl-35968300

ABSTRACT

Purpose: To assess the value of automatic disc-fovea angle (DFA) measurement using the DeepLabv3+ segmentation model. Methods: A total of 682 normal fundus image datasets were collected from the Eye Hospital of Nanjing Medical University. The following parts of the images were labeled and subsequently reviewed by ophthalmologists: optic disc center, macular center, optic disc area, and virtual macular area. A total of 477 normal fundus images were used to train the DeepLabv3+, U-Net, and PSPNet models, which were used to obtain the optic disc area and virtual macular area. The coordinates of the optic disc center and macular center were then obtained using the minimum enclosing circle technique, and finally the DFA was calculated. Results: In this study, 205 normal fundus images were used to test the models. The experimental results showed that the errors in automatic DFA measurement using the DeepLabv3+, U-Net, and PSPNet segmentation models were 0.76°, 1.4°, and 2.12°, respectively. The mean intersection over union (MIoU), mean pixel accuracy (MPA), average error in the center of the optic disc, and average error in the center of the virtual macula obtained using the DeepLabv3+ model were 94.77%, 97.32%, 10.94 pixels, and 13.44 pixels, respectively. Automatic DFA measurement using DeepLabv3+ produced smaller errors than the other segmentation models, so the DeepLabv3+ segmentation model was chosen to measure the DFA automatically. Conclusions: The DeepLabv3+ segmentation model-based automatic segmentation technique can produce accurate and rapid DFA measurements.
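Once the optic disc and macular centers are located, the DFA reduces to the angle between the horizontal and the line joining the two centers. A minimal sketch with hypothetical pixel coordinates (not values from the study):

```python
import math

def disc_fovea_angle(disc_center, macula_center):
    """Disc-fovea angle in degrees: the angle between the horizontal and the
    line from the optic disc center to the macular (foveal) center.
    Image coordinates are assumed: x grows rightward, y grows downward.
    """
    dx = macula_center[0] - disc_center[0]
    dy = macula_center[1] - disc_center[1]
    return math.degrees(math.atan2(dy, dx))

# Hypothetical centers extracted from the segmented disc and macular regions
angle = disc_fovea_angle(disc_center=(420, 300), macula_center=(620, 325))
# fovea slightly below the disc -> a small positive angle
```

The sub-degree errors reported above are therefore driven directly by the pixel errors in the two estimated centers.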

13.
Front Med (Lausanne) ; 9: 808402, 2022.
Article in English | MEDLINE | ID: mdl-35280876

ABSTRACT

Purpose: A six-category model of common retinal diseases is proposed to help primary medical institutions in the preliminary screening of five common retinal diseases. Methods: A total of 2,400 fundus images of normal fundi and five common retinal diseases were provided by a cooperating hospital. Two six-category deep learning models of common retinal diseases, based on the EfficientNet-B4 and ResNet50 models, were trained, and their results were compared with those of a five-category model based on ResNet50 from our previous study. A total of 1,315 fundus images were used to test the models, and the clinical diagnosis results were compared with the diagnosis results of the two six-category models. The main evaluation indicators were sensitivity, specificity, F1-score, area under the curve (AUC), 95% confidence interval, kappa, and accuracy, and the receiver operating characteristic curves of the two six-category models were compared. Results: The diagnostic accuracy rate of the EfficientNet-B4 model was 95.59%, the kappa value was 94.61%, and there was high diagnostic consistency. The AUCs of the normal diagnosis and the five retinal diseases were all above 0.95. The sensitivity, specificity, and F1-score for the diagnosis of normal fundus images were 100, 99.9, and 99.83%, respectively. The sensitivity, specificity, and F1-score for RVO diagnosis were 95.68, 98.61, and 93.09%, respectively. The sensitivity, specificity, and F1-score for high myopia diagnosis were 96.1, 99.6, and 97.37%, respectively. The sensitivity, specificity, and F1-score for glaucoma diagnosis were 97.62, 99.07, and 94.62%, respectively. The sensitivity, specificity, and F1-score for DR diagnosis were 90.76, 99.16, and 93.3%, respectively. The sensitivity, specificity, and F1-score for MD diagnosis were 92.27, 98.5, and 91.51%, respectively. Conclusion: The EfficientNet-B4 model was used to design a six-category model of common retinal diseases.
It can be used to diagnose the normal fundus and five common retinal diseases based on fundus images. It can help primary doctors in the screening for common retinal diseases, and give suitable suggestions and recommendations. Timely referral can improve the efficiency of diagnosis of eye diseases in rural areas and avoid delaying treatment.

14.
Front Psychol ; 12: 759229, 2021.
Article in English | MEDLINE | ID: mdl-34744935

ABSTRACT

Objective: This study aims to implement and investigate the application of a special intelligent diagnostic system based on deep learning in the diagnosis of pterygium using anterior segment photographs. Methods: A total of 1,220 anterior segment photographs of normal eyes and pterygium patients were collected for training (using 750 images) and testing (using 470 images) to develop an intelligent pterygium diagnostic model. The images were classified into three categories by the experts and the intelligent pterygium diagnosis system: (i) the normal group, (ii) the observation group of pterygium, and (iii) the operation group of pterygium. The intelligent diagnostic results were compared with those of the expert diagnosis. Indicators including accuracy, sensitivity, specificity, kappa value, the area under the receiver operating characteristic curve (AUC), as well as 95% confidence interval (CI) and F1-score were evaluated. Results: The accuracy rate of the intelligent diagnosis system on the 470 testing photographs was 94.68%; the diagnostic consistency was high; the kappa values of the three groups were all above 85%. Additionally, the AUC values approached 100% in group 1 and 95% in the other two groups. The best results generated from the proposed system for sensitivity, specificity, and F1-scores were 100, 99.64, and 99.74% in group 1; 90.06, 97.32, and 92.49% in group 2; and 92.73, 95.56, and 89.47% in group 3, respectively. Conclusion: The intelligent pterygium diagnosis system based on deep learning can not only judge the presence of pterygium but also classify the severity of pterygium. This study is expected to provide a new screening tool for pterygium and benefit patients from areas lacking medical resources.

15.
BMC Health Serv Res ; 21(1): 1067, 2021 Oct 09.
Article in English | MEDLINE | ID: mdl-34627239

ABSTRACT

BACKGROUND: In the development of artificial intelligence (AI) in ophthalmology, recognition issues related to ophthalmic AI are prominent, but there is a lack of research into people's familiarity with and attitudes toward ophthalmic AI. This survey aims to assess medical workers' and other professional technicians' familiarity with, attitudes toward, and concerns about AI in ophthalmology. METHODS: This is a cross-sectional study. An electronic questionnaire was designed with the app Questionnaire Star and sent to respondents through WeChat, China's equivalent of Facebook or WhatsApp. Participation was voluntary and anonymous. The questionnaire consisted of four parts: the respondents' background, their basic understanding of AI, their attitudes toward AI, and their concerns about AI. A total of 562 valid questionnaires were returned, and the results were compiled in an Excel 2003 spreadsheet. RESULTS: A total of 291 medical workers and 271 other professional technicians completed the questionnaire. About one third of the respondents understood AI and ophthalmic AI. The percentages of people who understood ophthalmic AI among medical workers and other professional technicians were about 42.6% and 15.6%, respectively. About 66.0% of the respondents thought that AI in ophthalmology would partly replace doctors, and about 59.07% had a relatively high acceptance level of ophthalmic AI. Meanwhile, among those with experience of AI applications in ophthalmology (30.6%), above 70% of respondents held a fully accepting attitude toward AI in ophthalmology. The respondents expressed medical ethics concerns about AI in ophthalmology, and among the respondents who understood AI in ophthalmology, almost all said that there was a need to increase the study of medical ethics issues in the ophthalmic AI field.
CONCLUSIONS: The survey results revealed that the medical workers had a higher understanding level of AI in ophthalmology than other professional technicians, making it necessary to popularize ophthalmic AI education among other professional technicians. Most of the respondents did not have any experience in ophthalmic AI but generally had a relatively high acceptance level of AI in ophthalmology, and there was a need to strengthen research into medical ethics issues.


Subject(s)
Ophthalmology , Artificial Intelligence , Attitude of Health Personnel , Cross-Sectional Studies , Humans , Surveys and Questionnaires
16.
Dis Markers ; 2021: 7651462, 2021.
Article in English | MEDLINE | ID: mdl-34367378

ABSTRACT

AIMS: The lack of primary ophthalmologists in China leaves basic-level hospitals unable to diagnose pterygium patients. To solve this problem, an intelligent-assisted lightweight pterygium diagnosis model based on anterior segment images is proposed in this study. METHODS: Pterygium is a common and frequently occurring disease in ophthalmology, and fibrous tissue hyperplasia is both a diagnostic and a surgical biomarker; the model diagnosed pterygium based on these biomarkers. First, a total of 436 anterior segment images were collected; then, two intelligent-assisted lightweight pterygium diagnosis models (MobileNet1 and MobileNet2), based on raw data and augmented data respectively, were trained via transfer learning, and their results were compared with the clinical results. Classic models (AlexNet, VGG16, and ResNet18) were also trained and tested, and their results were compared with those of the lightweight models. A total of 188 anterior segment images were used for testing. Sensitivity, specificity, F1-score, accuracy, kappa, area under the curve (AUC), 95% CI, model size, and parameter count were the evaluation indicators in this study. RESULTS: The 188 anterior segment images were used to test the five intelligent-assisted pterygium diagnosis models, and the overall evaluation indices of the MobileNet2 model were the best. The sensitivity, specificity, F1-score, and AUC of the MobileNet2 model for normal anterior segment images were 96.72%, 98.43%, 96.72%, and 0.976, respectively; for observation-period anterior segment images, they were 83.7%, 90.48%, 82.54%, and 0.872, respectively; and for surgery-period anterior segment images, they were 84.62%, 93.50%, 85.94%, and 0.891, respectively. The kappa value of the MobileNet2 model was 77.64%, the accuracy was 85.11%, the model size was 13.5 M, and the parameter size was 4.2 M. CONCLUSION: This study used deep learning methods to propose a three-category intelligent lightweight-assisted pterygium diagnosis model. The developed model can be used to screen patients for pterygium, provide reasonable suggestions, and provide timely referrals. It can help primary doctors improve pterygium diagnoses, confer social benefits, and lay the foundation for future models to be embedded in mobile devices.
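The kappa values reported throughout these studies are Cohen's kappa, a chance-corrected agreement measure computable from a confusion matrix. A minimal sketch with hypothetical three-category counts (normal / observation / surgery), not the study's actual results:

```python
import numpy as np

def cohens_kappa(confusion):
    """Cohen's kappa: agreement beyond chance, from a confusion matrix
    (rows = reference diagnosis, columns = model diagnosis)."""
    cm = np.asarray(confusion, dtype=float)
    total = cm.sum()
    p_observed = np.trace(cm) / total                                # raw agreement
    p_expected = np.sum(cm.sum(axis=0) * cm.sum(axis=1)) / total**2  # chance agreement
    return (p_observed - p_expected) / (1.0 - p_expected)

# Hypothetical three-category test results
cm = [[60, 2, 0],
      [3, 55, 5],
      [0, 6, 57]]
kappa = cohens_kappa(cm)
```

Unlike raw accuracy, kappa discounts agreement that would occur by chance given the class frequencies, which is why it is the consistency indicator favored across these diagnostic studies.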


Subject(s)
Image Interpretation, Computer-Assisted/methods , Pterygium/diagnostic imaging , Artificial Intelligence , China , Early Diagnosis , Humans , Models, Theoretical , Sensitivity and Specificity , Slit Lamp Microscopy
17.
Transl Vis Sci Technol ; 10(7): 20, 2021 06 01.
Article in English | MEDLINE | ID: mdl-34132760

ABSTRACT

Purpose: The discrepancy between the numbers of ophthalmologists and patients in China is large. Retinal vein occlusion (RVO), high myopia, glaucoma, and diabetic retinopathy (DR) are common fundus diseases. Therefore, in this study, a five-category intelligent auxiliary diagnosis model for common fundus diseases is proposed, and the model's area of focus is marked. Methods: A total of 2000 fundus images were collected; 3 different five-category intelligent auxiliary diagnosis models for common fundus diseases were trained via different transfer learning and image preprocessing techniques. A total of 1134 fundus images were used for testing. The models' diagnostic results were compared with the clinical diagnostic results. The main evaluation indicators included sensitivity, specificity, F1-score, area under the receiver operating characteristic curve (AUC), 95% confidence interval (CI), kappa, and accuracy. Interpretation methods were used to obtain each model's area of focus in the fundus image. Results: The accuracy rates of the 3 intelligent auxiliary diagnosis models on the 1134 fundus images were all above 90%, the kappa values were all above 88%, the diagnostic consistency was good, and the AUC approached 0.90. For the 4 common fundus diseases, the best sensitivity, specificity, and F1-scores of the 3 models were 88.27%, 97.12%, and 84.02%; 89.94%, 99.52%, and 93.90%; 95.24%, 96.43%, and 85.11%; and 88.24%, 98.21%, and 89.55%, respectively. Conclusions: This study designed a five-category intelligent auxiliary diagnosis model for common fundus diseases. It can be used to obtain the diagnostic category of a fundus image and the model's area of focus. Translational Relevance: This study will help primary doctors provide effective services to all ophthalmologic patients.
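The kappa values reported above measure agreement between the model's output and the clinical diagnosis beyond what chance alone would produce. A minimal sketch of unweighted Cohen's kappa, using small hypothetical label lists (the five categories and the labels below are illustrative, not the study's data):

```python
def cohen_kappa(y_true, y_pred):
    """Unweighted Cohen's kappa: (p_o - p_e) / (1 - p_e), where p_o is
    observed agreement and p_e is chance agreement from the marginals."""
    labels = sorted(set(y_true) | set(y_pred))
    n = len(y_true)
    p_o = sum(t == p for t, p in zip(y_true, y_pred)) / n
    p_e = sum(
        (y_true.count(c) / n) * (y_pred.count(c) / n) for c in labels
    )
    return (p_o - p_e) / (1 - p_e)

# Hypothetical 5-category labels (0 = normal, 1 = RVO, 2 = high myopia,
# 3 = glaucoma, 4 = DR):
y_true = [0, 0, 1, 1, 2, 2, 3, 3, 4, 4]
y_pred = [0, 0, 1, 2, 2, 2, 3, 3, 4, 0]
print(round(cohen_kappa(y_true, y_pred), 3))  # 0.75
```

Because kappa discounts chance agreement, a model can have high raw accuracy but a modest kappa when class frequencies are skewed; kappa values above 88% therefore indicate agreement well beyond chance.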


Subject(s)
Diabetic Retinopathy , Glaucoma , Ophthalmologists , China , Diabetic Retinopathy/diagnosis , Fundus Oculi , Humans
18.
Diabetes Ther ; 10(5): 1811-1822, 2019 Oct.
Article in English | MEDLINE | ID: mdl-31290125

ABSTRACT

INTRODUCTION: In April 2018, the US Food and Drug Administration (FDA) approved the world's first artificial intelligence (AI) medical device for detecting diabetic retinopathy (DR), the IDx-DR. However, evaluation systems for DR intelligent diagnostic technology are lacking. METHODS: Five hundred color fundus photographs of diabetic patients were selected, with DR severity varying from grade 0 to 4 and 100 photographs for each grade. These photographs were then diagnosed by both ophthalmologists and the intelligent technology, and the results were compared by applying the evaluation system. The system includes primary, intermediate, and advanced evaluations, of which the intermediate evaluation incorporates two methods. The main evaluation indicators were sensitivity, specificity, and kappa value. RESULTS: The AI technology diagnosed 93 photographs with no DR, 107 with mild non-proliferative DR (NPDR), 107 with moderate NPDR, 108 with severe NPDR, and 85 with proliferative DR (PDR). The sensitivity, specificity, and kappa value of the AI diagnoses in the primary evaluation were 98.8%, 88.0%, and 0.89, respectively. In method 1 of the intermediate evaluation, the sensitivity of the AI diagnosis was 98.0%, the specificity 97.0%, and the kappa value 0.95. In method 2 of the intermediate evaluation, the sensitivity of the AI diagnosis was 95.5%, the specificity 99.3%, and the kappa value 0.95. In the advanced evaluation, the kappa value of the intelligent diagnosis was 0.86. CONCLUSIONS: This article proposes an evaluation system for color fundus photograph-based intelligent diagnostic technology for DR and demonstrates an application of this system in a clinical setting. The results from this evaluation system serve as the basis for selecting scenarios in which DR intelligent diagnostic technology can be applied.
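A screening-style evaluation of graded DR output typically collapses the five grades to a binary referable/non-referable decision before computing sensitivity and specificity. The sketch below assumes the common convention that grade 2 (moderate NPDR) or worse is referable; the threshold and the sample gradings are illustrative, not taken from the study:

```python
def referable_metrics(true_grades, pred_grades, threshold=2):
    """Collapse DR grades 0-4 to referable (>= threshold) vs not,
    then compute sensitivity and specificity of the AI reading."""
    tp = fn = fp = tn = 0
    for t, p in zip(true_grades, pred_grades):
        t_ref, p_ref = t >= threshold, p >= threshold
        if t_ref and p_ref:
            tp += 1          # referable, correctly flagged
        elif t_ref:
            fn += 1          # referable, missed
        elif p_ref:
            fp += 1          # non-referable, over-called
        else:
            tn += 1          # non-referable, correctly passed
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative gradings (ophthalmologist reference vs AI):
truth = [0, 0, 1, 1, 2, 2, 3, 3, 4, 4]
ai    = [0, 1, 1, 2, 2, 2, 3, 2, 4, 4]
sens, spec = referable_metrics(truth, ai)
print(sens, spec)
```

Note how a grading error that stays on the same side of the threshold (e.g., grade 3 read as grade 2) does not hurt the binary metrics; distinguishing adjacent grades is what the stricter, grade-level kappa evaluations capture.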
