Results 1 - 20 of 32
1.
Asia Pac J Ophthalmol (Phila) ; 13(4): 100096, 2024.
Article in English | MEDLINE | ID: mdl-39209215

ABSTRACT

PURPOSE: To discuss the worldwide applications and potential impact of artificial intelligence (AI) for the diagnosis, management, and analysis of treatment outcomes of common retinal diseases. METHODS: We performed an online literature review, using PubMed Central (PMC), of AI applications to evaluate and manage retinal diseases. Search terms included AI for screening, diagnosis, monitoring, management, and treatment outcomes for age-related macular degeneration (AMD), diabetic retinopathy (DR), retinal surgery, retinal vascular disease, retinopathy of prematurity (ROP), and sickle cell retinopathy (SCR). Additional search terms included AI and color fundus photographs, optical coherence tomography (OCT), and OCT angiography (OCTA). We included original research articles and review articles. RESULTS: Research studies have investigated and shown the utility of AI for screening for diseases such as DR, AMD, ROP, and SCR. Research studies using validated and labeled datasets confirmed that AI algorithms could predict disease progression and response to treatment. Studies showed that AI facilitated rapid and quantitative interpretation of retinal biomarkers seen on OCT and OCTA imaging. Research articles suggest AI may be useful for planning and performing robotic surgery. Studies suggest AI holds the potential to help lessen the impact of socioeconomic disparities on the outcomes of retinal diseases. CONCLUSIONS: AI applications for retinal diseases can assist the clinician not only in disease screening and monitoring for disease recurrence but also in quantitative analysis of treatment outcomes and prediction of treatment response. The public health impact on the prevention of blindness from DR, AMD, and other retinal vascular diseases remains to be determined.


Subjects
Artificial Intelligence; Image Interpretation, Computer-Assisted; Mass Screening; Retinal Diseases; Retinal Diseases/diagnostic imaging; Retinal Diseases/therapy; Mass Screening/methods; Biomarkers/analysis; Disease Progression; Humans; Image Interpretation, Computer-Assisted/methods; Image Interpretation, Computer-Assisted/standards; Retina/diagnostic imaging
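As a rough illustration of the search workflow described in the record above, the sketch below queries the NCBI E-utilities through Biopython's Entrez module. The database, search term, contact address, and retmax value are placeholders, not the authors' actual query.

    # Sketch: querying PubMed Central via NCBI E-utilities (Biopython); terms are illustrative.
    from Bio import Entrez

    Entrez.email = "you@example.org"  # NCBI requires a contact address (placeholder)

    # Illustrative query combining AI with one retinal disease named in the abstract
    term = ('("artificial intelligence"[Title/Abstract]) AND '
            '("diabetic retinopathy"[Title/Abstract])')

    handle = Entrez.esearch(db="pmc", term=term, retmax=100)
    record = Entrez.read(handle)
    handle.close()

    print(record["Count"], "matching records")
    print(record["IdList"][:10])  # first ten record IDs, e.g., for manual screening
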
2.
bioRxiv ; 2024 Apr 29.
Article in English | MEDLINE | ID: mdl-38746183

ABSTRACT

Background: Training Large Language Models (LLMs) with in-domain data can significantly enhance their performance, leading to more accurate and reliable question-answering (QA) systems essential for supporting clinical decision-making and educating patients. Methods: This study introduces LLMs trained on in-domain, well-curated ophthalmic datasets. We also present an open-source, substantial ophthalmic language dataset for model training. Our LLMs (EYE-Llama) were first pre-trained on an ophthalmology-specific dataset including paper abstracts, textbooks, EyeWiki, and Wikipedia articles. Subsequently, the models underwent fine-tuning using a diverse range of QA datasets. The LLMs at each stage were then compared to the baseline Llama 2, ChatDoctor, and ChatGPT (GPT-3.5) models using four distinct test sets and evaluated quantitatively (accuracy, F1 score, and BERTScore) and qualitatively by two ophthalmologists. Results: When evaluated on the American Academy of Ophthalmology (AAO) test set with BERTScore as the metric, our models surpassed both Llama 2 and ChatDoctor and performed on par with ChatGPT, which was trained with 175 billion parameters (EYE-Llama: 0.57, Llama 2: 0.56, ChatDoctor: 0.56, ChatGPT: 0.57). When evaluated on the MedMCQA test set, the fine-tuned models demonstrated higher accuracy than the Llama 2 and ChatDoctor models (EYE-Llama: 0.39, Llama 2: 0.33, ChatDoctor: 0.29); however, ChatGPT outperformed EYE-Llama with an accuracy of 0.55. When tested on the PubMedQA set, the fine-tuned model showed improved accuracy over the Llama 2, ChatGPT, and ChatDoctor models (EYE-Llama: 0.96, Llama 2: 0.90, ChatGPT: 0.93, ChatDoctor: 0.92). Conclusion: The study shows that pre-training and fine-tuning LLMs like EYE-Llama enhance their performance in specific medical domains. Our EYE-Llama models surpass baseline Llama 2 in all evaluations, highlighting the effectiveness of specialized LLMs in medical QA systems. (Funded by NEI R15EY035804 (MNA) and UNC Charlotte Faculty Research Grant (MNA).)
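The record above describes domain pre-training and fine-tuning of Llama-family models on ophthalmic text. Below is a minimal, hedged sketch of supervised fine-tuning of a causal language model with Hugging Face transformers; the base checkpoint, data file, and hyperparameters are assumptions and do not reproduce the EYE-Llama training recipe.

    # Sketch: domain fine-tuning of a causal LLM on ophthalmic QA text (Hugging Face transformers).
    # Model name, file name, and hyperparameters are placeholders, not the EYE-Llama recipe.
    from datasets import load_dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer, TrainingArguments)

    base = "meta-llama/Llama-2-7b-hf"        # assumed base checkpoint (gated; requires access)
    tok = AutoTokenizer.from_pretrained(base)
    tok.pad_token = tok.eos_token
    model = AutoModelForCausalLM.from_pretrained(base)

    # qa.jsonl is assumed to hold one {"text": "Q: ... A: ..."} record per line
    ds = load_dataset("json", data_files="qa.jsonl", split="train")
    ds = ds.map(lambda ex: tok(ex["text"], truncation=True, max_length=512),
                remove_columns=ds.column_names)

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="eye-llm-sft", num_train_epochs=1,
                               per_device_train_batch_size=2, learning_rate=2e-5),
        train_dataset=ds,
        data_collator=DataCollatorForLanguageModeling(tok, mlm=False),  # causal LM objective
    )
    trainer.train()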

3.
medRxiv ; 2024 Apr 24.
Article in English | MEDLINE | ID: mdl-38464168

ABSTRACT

Purpose: This study explores the feasibility of using generative machine learning (ML) to translate optical coherence tomography (OCT) images into OCT angiography (OCTA) images, potentially bypassing the need for specialized OCTA hardware. Methods: The method implements a generative adversarial network framework that includes a 2D vascular segmentation model and a 2D OCTA image translation model. The study utilizes a public dataset of 500 patients, divided into subsets based on resolution and disease status, to validate the quality of the translated OCTA (TR-OCTA) images. The validation employs several quality and quantitative metrics to compare the translated images with ground-truth OCTAs (GT-OCTA). We then quantitatively compare vascular features generated in TR-OCTAs with those in GT-OCTAs to assess the feasibility of using TR-OCTA for objective disease diagnosis. Results: TR-OCTAs showed high image quality in both the 3 mm and 6 mm datasets (high resolution, with moderate structural similarity and contrast quality compared with GT-OCTAs). There were slight discrepancies in vascular metrics, especially in diseased patients. Blood vessel features such as tortuosity and vessel perimeter index showed a better trend than density features, which are affected by local vascular distortions. Conclusion: This study presents a promising solution to the limitations of OCTA adoption in clinical practice by using vascular features from TR-OCTA for disease detection. Translational Relevance: This study has the potential to significantly enhance the diagnostic process for retinal diseases by making detailed vascular imaging more widely available and reducing dependency on costly OCTA equipment.
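A minimal sketch of the kind of adversarial image-translation step described above (OCT to OCTA), written in PyTorch. The tiny generator and discriminator, the loss weights, and the dummy tensors are illustrative stand-ins, not the paper's architecture or data.

    # Sketch: a pix2pix-style translation step (OCT -> OCTA) in PyTorch; toy networks only.
    import torch
    import torch.nn as nn

    class TinyGenerator(nn.Module):
        """Maps a 1-channel OCT input to a 1-channel OCTA-like output."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 1, 3, padding=1), nn.Tanh())
        def forward(self, x):
            return self.net(x)

    class TinyDiscriminator(nn.Module):
        """PatchGAN-style critic over concatenated (OCT, OCTA) pairs."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(2, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                nn.Conv2d(32, 1, 4, stride=2, padding=1))
        def forward(self, oct_img, octa_img):
            return self.net(torch.cat([oct_img, octa_img], dim=1))

    G, D = TinyGenerator(), TinyDiscriminator()
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

    oct_img = torch.randn(4, 1, 128, 128)   # dummy OCT batch
    gt_octa = torch.randn(4, 1, 128, 128)   # dummy ground-truth OCTA batch

    # Discriminator step: real pairs scored as 1, translated pairs as 0
    fake = G(oct_img).detach()
    pred_real, pred_fake = D(oct_img, gt_octa), D(oct_img, fake)
    d_loss = bce(pred_real, torch.ones_like(pred_real)) + bce(pred_fake, torch.zeros_like(pred_fake))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: fool the discriminator while staying close to the ground truth (L1 term)
    fake = G(oct_img)
    pred_fake = D(oct_img, fake)
    g_loss = bce(pred_fake, torch.ones_like(pred_fake)) + 100.0 * l1(fake, gt_octa)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()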

4.
Front Med (Lausanne) ; 10: 1259017, 2023.
Article in English | MEDLINE | ID: mdl-37901412

ABSTRACT

This paper presents a federated learning (FL) approach to train deep learning models for classifying age-related macular degeneration (AMD) using optical coherence tomography image data. We employ residual network and vision transformer encoders for the normal vs. AMD binary classification, integrating four distinct domain adaptation techniques to address domain-shift issues caused by heterogeneous data distributions across institutions. Experimental results indicate that FL strategies can achieve performance competitive with centralized models even though each local model has access to only a portion of the training data. Notably, the Adaptive Personalization FL strategy stood out in our FL evaluations, consistently delivering high performance across all tests owing to its additional local model. Furthermore, the study provides valuable insights into the efficacy of simpler architectures in image classification tasks with both encoders, particularly in scenarios where data privacy and decentralization are critical. The results suggest future exploration of deeper models and other FL strategies for a more nuanced understanding of these models' performance. Data and code are available at https://github.com/QIAIUNCC/FL_UNCC_QIAI.
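To make the federated learning setup above concrete, here is a minimal FedAvg-style round in PyTorch: each simulated institution trains a copy of the global model on its private data loader, and the server averages the weights by sample count. This is a generic sketch that omits the domain-adaptation and Adaptive Personalization components the study evaluates.

    # Sketch: one FedAvg round over simulated institutions (PyTorch); not the paper's framework.
    import copy
    import torch
    import torch.nn as nn

    def local_update(global_model, loader, epochs=1, lr=1e-3):
        """Train a copy of the global model on one institution's private data."""
        model = copy.deepcopy(global_model)
        opt = torch.optim.SGD(model.parameters(), lr=lr)
        loss_fn = nn.CrossEntropyLoss()
        for _ in range(epochs):
            for x, y in loader:
                opt.zero_grad()
                loss_fn(model(x), y).backward()
                opt.step()
        n_samples = sum(len(y) for _, y in loader)
        return model.state_dict(), n_samples

    def fedavg_round(global_model, site_loaders):
        """Aggregate locally trained weights, weighting each site by its sample count."""
        states, sizes = zip(*[local_update(global_model, dl) for dl in site_loaders])
        total = sum(sizes)
        averaged = {k: sum(s[k] * (n / total) for s, n in zip(states, sizes))
                    for k in states[0]}
        global_model.load_state_dict(averaged)
        return global_model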

5.
Sci Rep ; 13(1): 6047, 2023 04 13.
Article in English | MEDLINE | ID: mdl-37055475

ABSTRACT

Diabetic retinopathy (DR) is a major cause of vision impairment in diabetic patients worldwide. Due to its prevalence, early clinical diagnosis is essential to improve treatment management of DR patients. Despite recent demonstrations of successful machine learning (ML) models for automated DR detection, there is a significant clinical need for robust models that can be trained with smaller cohorts of data and still perform with high diagnostic accuracy on independent clinical datasets (i.e., high model generalizability). Toward this need, we have developed a self-supervised contrastive learning (CL) based pipeline for classification of referable vs. non-referable DR. Self-supervised CL-based pretraining allows enhanced data representation and, therefore, the development of robust and generalizable deep learning (DL) models, even with small labeled datasets. We have integrated a neural style transfer (NST) augmentation into the CL pipeline to produce models with better representations and initializations for the detection of DR in color fundus images. We compare our CL-pretrained model performance with two state-of-the-art baseline models pretrained with ImageNet weights. We further investigate model performance with reduced labeled training data (down to 10 percent) to test the robustness of the model when trained with small labeled datasets. The model was trained and validated on the EyePACS dataset and tested independently on clinical datasets from the University of Illinois Chicago (UIC). Compared to baseline models, our CL-pretrained FundusNet model had higher area under the receiver operating characteristic (ROC) curve (AUC) (CI) values (0.91 (0.898 to 0.930) vs. 0.80 (0.783 to 0.820) and 0.83 (0.801 to 0.853) on UIC data). At 10 percent labeled training data, the FundusNet AUC was 0.81 (0.78 to 0.84) vs. 0.58 (0.56 to 0.64) and 0.63 (0.60 to 0.66) for the baseline models when tested on the UIC dataset. CL-based pretraining with NST significantly improves DL classification performance, helps the model generalize well (transferable from EyePACS to UIC data), and allows training with small annotated datasets, thereby reducing the ground-truth annotation burden on clinicians.


Subjects
Diabetes Mellitus; Diabetic Retinopathy; Humans; Diabetic Retinopathy/diagnosis; Neural Networks, Computer; Algorithms; Machine Learning; Fundus Oculi
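A minimal sketch of the NT-Xent objective commonly used for SimCLR-style contrastive pretraining, which is the general form of the self-supervised CL stage described above; it is not the FundusNet implementation and omits the neural style transfer augmentation.

    # Sketch: NT-Xent (normalized temperature-scaled cross-entropy) loss for contrastive
    # pretraining of a fundus-image encoder. Illustrative only, not the FundusNet code.
    import torch
    import torch.nn.functional as F

    def nt_xent(z1, z2, temperature=0.5):
        """z1, z2: [N, D] projections of two augmented views of the same N fundus images."""
        z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # [2N, D] unit vectors
        sim = z @ z.t() / temperature                        # scaled cosine similarities
        n = z1.shape[0]
        mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
        sim.masked_fill_(mask, float("-inf"))                # exclude self-similarity
        # the positive for sample i is its other augmented view (index i+n, or i-n)
        targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
        return F.cross_entropy(sim, targets)

    # usage with random stand-in projections
    loss = nt_xent(torch.randn(8, 128), torch.randn(8, 128))
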
6.
J Clin Med ; 12(1)2023 Jan 03.
Article in English | MEDLINE | ID: mdl-36615186

ABSTRACT

With the progression of diabetic retinopathy (DR) from the non-proliferative (NPDR) to the proliferative (PDR) stage, the possibility of vision impairment increases significantly. Therefore, it is clinically important to detect progression to the PDR stage for proper intervention. We propose a segmentation-assisted DR classification methodology that builds on (and improves) current methods by using a fully convolutional network (FCN) to segment retinal neovascularizations (NV) in retinal images prior to image classification. This study utilizes the Kaggle EyePACS dataset, containing retinal photographs from patients with varying degrees of DR (mild, moderate, and severe NPDR, and PDR). Two graders (a board-certified ophthalmologist and a trained medical student) annotated the NV. Segmentation was performed by training an FCN to locate neovascularization on 669 retinal fundus photographs labeled with PDR status according to NV presence. The trained segmentation model was then used to locate probable NV in images from the classification dataset. Finally, a CNN was trained to classify the combined images and probability maps into categories of PDR. The mean accuracy of segmentation-assisted classification was 87.71% on the test set (SD = 7.71%). Segmentation-assisted classification of PDR achieved accuracy that was 7.74% better than classification alone. Our study shows that segmentation assistance improves identification of the most severe stage of diabetic retinopathy and has the potential to improve deep learning performance in other imaging problems with limited data availability.
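The core of the segmentation-assisted idea above is feeding the classifier the NV probability map alongside the photograph. A minimal sketch in PyTorch follows; both networks are placeholders (the 1 x 1 convolution stands in for the trained FCN segmenter), not the study's models.

    # Sketch: segmentation-assisted classification by stacking the NV probability map
    # onto the fundus image as an extra input channel (PyTorch). Placeholder networks.
    import torch
    import torch.nn as nn
    from torchvision.models import resnet18

    seg_model = nn.Conv2d(3, 1, kernel_size=1)    # stand-in for the trained FCN NV segmenter

    classifier = resnet18(num_classes=2)          # PDR vs. non-PDR
    # accept 4 channels: RGB fundus photograph + NV probability map
    classifier.conv1 = nn.Conv2d(4, 64, kernel_size=7, stride=2, padding=3, bias=False)

    def predict_pdr(fundus_rgb):
        """fundus_rgb: [N, 3, H, W] batch of retinal photographs."""
        with torch.no_grad():
            nv_prob = torch.sigmoid(seg_model(fundus_rgb))   # [N, 1, H, W] probability map
        x = torch.cat([fundus_rgb, nv_prob], dim=1)          # [N, 4, H, W] combined input
        return classifier(x)

    logits = predict_pdr(torch.randn(2, 3, 224, 224))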

7.
J Clin Med ; 11(24)2022 Dec 14.
Article in English | MEDLINE | ID: mdl-36556019

ABSTRACT

Hyperreflective foci (HRF) have been associated with retinal disease progression and demonstrated to be a negative prognostic biomarker for visual function. Automated segmentation of HRF in retinal optical coherence tomography (OCT) scans can help identify the formation and movement of the HRF biomarker as a retinal disease progresses and can serve as the first step in understanding the nature and severity of the disease. In this paper, we propose a fully automated deep neural network-based HRF segmentation model for OCT images. We enhance the model's performance by using a patch-based strategy that concentrates the model's computation on the HRF pixels. The patch-based strategy is evaluated against state-of-the-art HRF segmentation pipelines on clinical retinal image data. Our results show that the patch-based approach achieves a high precision score and intersection over union (IoU) using a ResNet34 segmentation model with a binary cross-entropy loss function. The HRF segmentation pipeline can be used for analyzing HRF biomarkers in different retinopathies.
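A minimal sketch of a patch-based strategy in the spirit described above: sample patches centered on labeled HRF pixels so more computation lands on them, and score predictions with intersection over union. Patch size, the sampling rule, and the absence of any network here are simplifications, not the paper's settings.

    # Sketch: HRF-centered patch extraction plus an IoU metric (NumPy). Illustrative only.
    import numpy as np

    def hrf_centered_patches(bscan, mask, patch=64, max_patches=16):
        """Crop square patches centered on labeled HRF pixels of one OCT B-scan."""
        ys, xs = np.nonzero(mask)
        idx = np.random.choice(len(ys), size=min(max_patches, len(ys)), replace=False)
        half = patch // 2
        out = []
        for y, x in zip(ys[idx], xs[idx]):
            y0 = np.clip(y - half, 0, bscan.shape[0] - patch)
            x0 = np.clip(x - half, 0, bscan.shape[1] - patch)
            out.append((bscan[y0:y0 + patch, x0:x0 + patch],
                        mask[y0:y0 + patch, x0:x0 + patch]))
        return out

    def iou(pred, target, eps=1e-7):
        """Intersection over union for binary masks."""
        inter = np.logical_and(pred, target).sum()
        union = np.logical_or(pred, target).sum()
        return (inter + eps) / (union + eps)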

8.
Radiol Artif Intell ; 4(3): e210174, 2022 May.
Article in English | MEDLINE | ID: mdl-35652118

ABSTRACT

Purpose: To develop a deep learning-based risk stratification system for thyroid nodules using US cine images. Materials and Methods: In this retrospective study, 192 biopsy-confirmed thyroid nodules (175 benign, 17 malignant) in 167 unique patients (mean age, 56 years ± 16 [SD], 137 women) undergoing cine US between April 2017 and May 2018 with American College of Radiology (ACR) Thyroid Imaging Reporting and Data System (TI-RADS)-structured radiology reports were evaluated. A deep learning-based system that exploits the cine images obtained during three-dimensional volumetric thyroid scans and outputs malignancy risk was developed and compared, using fivefold cross-validation, against a two-dimensional (2D) deep learning-based model (Static-2DCNN), a radiomics-based model using cine images (Cine-Radiomics), and the ACR TI-RADS level, with histopathologic diagnosis as ground truth. The system was used to revise the ACR TI-RADS recommendation, and its diagnostic performance was compared against the original ACR TI-RADS. Results: The system achieved higher average area under the receiver operating characteristic curve (AUC, 0.88) than Static-2DCNN (0.72, P = .03) and tended toward higher average AUC than Cine-Radiomics (0.78, P = .16) and ACR TI-RADS level (0.80, P = .21). The system downgraded recommendations for 92 benign and two malignant nodules and upgraded none. The revised recommendation achieved higher specificity (139 of 175, 79.4%) than the original ACR TI-RADS (47 of 175, 26.9%; P < .001), with no difference in sensitivity (12 of 17, 71% and 14 of 17, 82%, respectively; P = .63). Conclusion: The risk stratification system using US cine images had higher diagnostic performance than prior models and improved specificity of ACR TI-RADS when used to revise the ACR TI-RADS recommendation. Keywords: Neural Networks, US, Abdomen/GI, Head/Neck, Thyroid, Computer Applications-3D, Oncology, Diagnosis, Supervised Learning, Transfer Learning, Convolutional Neural Network (CNN). Supplemental material is available for this article. © RSNA, 2022.
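A minimal sketch of the fivefold cross-validated AUC evaluation described above, using scikit-learn on random stand-in features; it illustrates the evaluation protocol only, not the cine-image deep learning system itself.

    # Sketch: stratified fivefold cross-validation with per-fold AUC (scikit-learn).
    # Features are random placeholders standing in for per-nodule model inputs.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import StratifiedKFold

    rng = np.random.default_rng(0)
    X = rng.normal(size=(192, 16))     # 192 nodules, toy feature vectors
    y = np.zeros(192, dtype=int)
    y[:17] = 1                         # 17 malignant, 175 benign, mirroring the cohort size
    rng.shuffle(y)

    aucs = []
    skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    for train_idx, test_idx in skf.split(X, y):
        clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
        aucs.append(roc_auc_score(y[test_idx], clf.predict_proba(X[test_idx])[:, 1]))

    print("per-fold AUC:", np.round(aucs, 3), "mean:", round(float(np.mean(aucs)), 3))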

9.
Exp Biol Med (Maywood) ; 246(20): 2159-2169, 2021 10.
Article in English | MEDLINE | ID: mdl-34404252

ABSTRACT

Age-related macular degeneration (AMD) is a leading cause of severe vision loss. With our aging population, it may affect 288 million people globally by the year 2040. AMD progresses from an early and intermediate dry form to an advanced form, which manifests as choroidal neovascularization and geographic atrophy. Conversion to AMD-related exudation is known as progression to neovascular AMD, and the presence of geographic atrophy is known as progression to advanced dry AMD. Predicting AMD progression could enable timely monitoring, earlier detection and treatment, and improved vision outcomes. Machine learning approaches, a subset of artificial intelligence applications, applied to imaging data are showing promising results in predicting progression. Extracted biomarkers, specifically from optical coherence tomography scans, are informative in predicting progression events. The purpose of this mini review is to provide an overview of current machine learning applications for predicting AMD progression and to describe the various methods, data-input types, and imaging modalities used to identify high-risk patients. With advances in computational capabilities, artificial intelligence applications are likely to transform patient care and management in AMD. External validation studies that improve generalizability across populations and devices, as well as evaluations of systems in real-world clinical settings, are needed to improve the clinical translation of artificial intelligence applications in AMD.


Subjects
Deep Learning; Macular Degeneration/diagnostic imaging; Macular Degeneration/diagnosis; Tomography, Optical Coherence/methods; Aging/physiology; Algorithms; Biomarkers/analysis; Computational Biology/methods; Disease Progression; Female; Humans; Macular Degeneration/pathology; Prognosis; Retinal Vessels/diagnostic imaging; Visual Acuity/physiology
11.
Quant Imaging Med Surg ; 11(3): 1102-1119, 2021 Mar.
Article in English | MEDLINE | ID: mdl-33654680

ABSTRACT

Quantitative retinal imaging is essential for eye disease detection, staging classification, and treatment assessment. It is known that different eye diseases or severity stages can affect the artery and vein systems in different ways. Therefore, differential artery-vein (AV) analysis can improve the performance of quantitative retinal imaging. In this article, we provide a brief summary of technical rationales and clinical applications of differential AV analysis in fundus photography, optical coherence tomography (OCT), and OCT angiography (OCTA).

12.
Retina ; 41(3): 538-545, 2021 Mar 01.
Article in English | MEDLINE | ID: mdl-32568980

ABSTRACT

PURPOSE: This study aimed to verify the feasibility of using vascular complexity features for objective differentiation of controls, nonproliferative diabetic retinopathy (NPDR) patients, and proliferative diabetic retinopathy (PDR) patients. METHODS: This was a cross-sectional study conducted in a tertiary, subspecialty, academic practice. The cohort included 20 control subjects, 60 NPDR patients, and 56 PDR patients. Three vascular complexity features, comprising the vessel complexity index, fractal dimension, and blood vessel tortuosity, were derived from each optical coherence tomography angiography image. A shifting-window measurement was further implemented to identify local feature distortions due to localized neovascularization and mesh structures in PDR. RESULTS: With mean-value analysis of the whole image, only the vessel complexity index and blood vessel tortuosity were able to classify NPDR versus PDR patients. Comparative shifting-window measurement revealed increased sensitivity of complexity feature analysis, particularly for NPDR versus PDR classification. A multivariate regression model indicated that the combination of all three vascular complexity features with shifting-window measurement provided the best classification accuracy for controls versus NPDR versus PDR. CONCLUSION: The vessel complexity index and blood vessel tortuosity were the most sensitive features for differentiating NPDR and PDR patients. A shifting-window measurement significantly increased the sensitivity of objective optical coherence tomography angiography classification of diabetic retinopathy.


Subjects
Diabetic Retinopathy/diagnosis; Fluorescein Angiography/methods; Retinal Vessels/diagnostic imaging; Tomography, Optical Coherence/methods; Adult; Cross-Sectional Studies; Female; Fundus Oculi; Humans; Male; Middle Aged
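A minimal sketch of a shifting-window measurement over a binarized OCTA vessel map, computing a local box-counting fractal dimension in each window; the window size, stride, and choice of per-window feature are illustrative assumptions, not the study's parameters.

    # Sketch: shifting-window feature measurement over a binary OCTA vessel map (NumPy).
    import numpy as np

    def box_count_fd(patch):
        """Box-counting fractal dimension of a square binary patch (side a power of two)."""
        if patch.sum() == 0:
            return 0.0                                     # no vessel pixels in this window
        sizes, counts = [], []
        s = patch.shape[0]
        while s >= 2:
            view = patch.reshape(patch.shape[0] // s, s, patch.shape[1] // s, s)
            counts.append(int((view.sum(axis=(1, 3)) > 0).sum()))   # occupied boxes of side s
            sizes.append(s)
            s //= 2
        slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(np.array(counts)), 1)
        return slope

    def shifting_window_map(vessel_map, win=64, stride=32):
        """Return a grid of local fractal dimensions to reveal focal vascular distortions."""
        h, w = vessel_map.shape
        rows = []
        for y in range(0, h - win + 1, stride):
            rows.append([box_count_fd(vessel_map[y:y + win, x:x + win])
                         for x in range(0, w - win + 1, stride)])
        return np.array(rows)
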
13.
Biomed Opt Express ; 11(9): 5249-5257, 2020 Sep 01.
Article in English | MEDLINE | ID: mdl-33014612

ABSTRACT

This study demonstrates deep learning for automated artery-vein (AV) classification in optical coherence tomography angiography (OCTA). The AV-Net, a fully convolutional network (FCN) based on a modified U-shaped CNN architecture, incorporates en face OCT and OCTA to differentiate arteries and veins. In the multimodal training process, the en face OCT works as a near-infrared fundus image to provide vessel intensity profiles, and the OCTA contains blood flow strength and vessel geometry features. A transfer learning process is also integrated to compensate for the limited dataset size available for OCTA, which is a relatively new imaging modality. By providing an average accuracy of 86.75%, the AV-Net promises a fully automated platform to foster clinical deployment of differential AV analysis in OCTA.
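A minimal sketch of the multimodal input idea described above: the en face OCT and OCTA images are stacked as two input channels for a U-shaped segmentation network. The tiny encoder-decoder below is a placeholder, not the AV-Net architecture.

    # Sketch: two-channel (en face OCT + OCTA) input for artery-vein segmentation (PyTorch).
    import torch
    import torch.nn as nn

    class TinyUNetLike(nn.Module):
        """Minimal U-shaped stand-in: 2 input channels -> 3 classes (artery / vein / background)."""
        def __init__(self):
            super().__init__()
            self.down = nn.Sequential(nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
            self.up = nn.Sequential(nn.ConvTranspose2d(16, 16, 2, stride=2), nn.ReLU(),
                                    nn.Conv2d(16, 3, 1))
        def forward(self, x):
            return self.up(self.down(x))

    enface_oct = torch.rand(1, 1, 256, 256)   # vessel intensity profiles (near-infrared-like)
    octa = torch.rand(1, 1, 256, 256)         # blood flow strength and vessel geometry
    x = torch.cat([enface_oct, octa], dim=1)  # [1, 2, H, W] multimodal input
    av_logits = TinyUNetLike()(x)             # [1, 3, H, W] per-pixel artery/vein logits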

14.
Transl Vis Sci Technol ; 9(2): 35, 2020 07.
Article in English | MEDLINE | ID: mdl-32855839

ABSTRACT

Purpose: To test the feasibility of using deep learning for optical coherence tomography angiography (OCTA) detection of diabetic retinopathy. Methods: A deep-learning convolutional neural network (CNN) architecture, VGG16, was employed for this study. A transfer learning process was implemented to retrain the CNN for robust OCTA classification. One dataset, consisting of images of 32 healthy eyes, 75 eyes with diabetic retinopathy (DR), and 24 eyes with diabetes but no DR (NoDR), was used for training and cross-validation. A second dataset consisting of 20 NoDR and 26 DR eyes was used for external validation. To demonstrate the feasibility of using artificial intelligence (AI) screening of DR in clinical environments, the CNN was incorporated into a graphical user interface (GUI) platform. Results: With the last nine layers retrained, the CNN architecture achieved the best performance for automated OCTA classification. The cross-validation accuracy of the retrained classifier for differentiating among healthy, NoDR, and DR eyes was 87.27%, with 83.76% sensitivity and 90.82% specificity. The AUC metrics for binary classification of healthy, NoDR, and DR eyes were 0.97, 0.98, and 0.97, respectively. The GUI platform enabled easy validation of the method for AI screening of DR in a clinical environment. Conclusions: With a transfer learning process for retraining, a CNN can be used for robust OCTA classification of healthy, NoDR, and DR eyes. The AI-based OCTA classification platform may provide a practical solution to reducing the burden on experienced ophthalmologists with regard to mass screening of DR patients. Translational Relevance: Deep-learning-based OCTA classification can alleviate the need for manual graders and improve DR screening efficiency.


Subjects
Diabetes Mellitus; Diabetic Retinopathy; Angiography; Artificial Intelligence; Diabetic Retinopathy/diagnosis; Humans; Machine Learning; Retinal Vessels; Tomography, Optical Coherence
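A minimal sketch of the transfer-learning setup described above, using torchvision's VGG16: freeze the early layers, unfreeze the later ones, and replace the final layer with a 3-class head (healthy / NoDR / DR). Which layers are unfrozen here is an illustrative choice, not necessarily the paper's "last nine layers".

    # Sketch: VGG16 transfer learning for 3-class OCTA classification (torchvision).
    import torch.nn as nn
    from torchvision.models import vgg16, VGG16_Weights

    model = vgg16(weights=VGG16_Weights.IMAGENET1K_V1)

    # Freeze all convolutional features, then unfreeze the later conv layers for retraining
    for p in model.features.parameters():
        p.requires_grad = False
    for p in model.features[24:].parameters():   # illustrative split of the last conv block
        p.requires_grad = True

    # Replace the final classifier layer with a 3-class head (always trainable)
    model.classifier[6] = nn.Linear(model.classifier[6].in_features, 3)

    trainable = [p for p in model.parameters() if p.requires_grad]
    print(sum(p.numel() for p in trainable), "trainable parameters")
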
15.
Retina ; 40(2): 322-332, 2020 Feb.
Article in English | MEDLINE | ID: mdl-31972803

ABSTRACT

PURPOSE: This study aims to characterize quantitative optical coherence tomography angiography (OCTA) features of nonproliferative diabetic retinopathy (NPDR) and to validate them for computer-aided NPDR staging. METHODS: One hundred and twenty OCTA images from 60 NPDR (mild, moderate, and severe stage) patients and 40 images from 20 control subjects were used for this study, conducted in a tertiary, subspecialty, academic practice. Both eyes were photographed, and all OCTAs were 6 mm × 6 mm macular scans. Six quantitative features, that is, blood vessel tortuosity, blood vascular caliber, vessel perimeter index, blood vessel density, foveal avascular zone (FAZ) area, and FAZ contour irregularity (FAZ-CI), were derived from each OCTA image. A support vector machine (SVM) classification model was trained and tested for computer-aided classification of NPDR stages. Sensitivity, specificity, and accuracy were used as performance metrics of computer-aided classification, and the receiver operating characteristic curve was plotted to measure the sensitivity-specificity tradeoff of the classification algorithm. RESULTS: Among the 6 individual OCTA features, blood vessel density showed the best classification accuracies, 93.89% and 90.89% for control versus disease and control versus mild NPDR, respectively. Combined-feature classification achieved improved accuracies of 94.41% and 92.96%, respectively. Moreover, the temporal-perifoveal region was the most sensitive region for early detection of DR. For multiclass classification, the SVM algorithm achieved 84% accuracy. CONCLUSION: Blood vessel density was the most sensitive feature, and the temporal-perifoveal region was the most sensitive region, for early detection of DR. Quantitative OCTA analysis enabled computer-aided identification and staging of NPDR.


Subjects
Algorithms; Diabetic Retinopathy/classification; Fluorescein Angiography/methods; Macula Lutea/pathology; Tomography, Optical Coherence/methods; Adult; Diabetic Retinopathy/diagnosis; Female; Fundus Oculi; Humans; Male; Middle Aged; ROC Curve; Severity of Illness Index
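A minimal sketch of support vector machine staging on the six quantitative OCTA features, with cross-validated accuracy in scikit-learn; the feature matrix and labels below are random placeholders, not the study's measurements.

    # Sketch: SVM staging on six quantitative OCTA features (scikit-learn). Toy data only.
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    feature_names = ["BVT", "BVC", "VPI", "BVD", "FAZ_area", "FAZ_CI"]
    rng = np.random.default_rng(1)
    X = rng.normal(size=(160, len(feature_names)))   # 160 OCTA scans (toy feature vectors)
    y = rng.integers(0, 4, size=160)                 # 0=control, 1=mild, 2=moderate, 3=severe

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    scores = cross_val_score(clf, X, y, cv=5)
    print("multiclass accuracy per fold:", np.round(scores, 3))
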
17.
Biomed Opt Express ; 10(5): 2493-2503, 2019 May 01.
Article in English | MEDLINE | ID: mdl-31149381

ABSTRACT

This study establishes quantitative features of vascular geometry in optical coherence tomography angiography (OCTA) and validates them for the objective classification of diabetic retinopathy (DR). Six geometric features, including total vessel branching angle (VBA: θ), child branching angles (CBAs: α1 and α2), vessel branching coefficient (VBC), and children-to-parent vessel width ratios (VWR1 and VWR2), were automatically derived from each vessel branch in OCTA. A comparative analysis of healthy control, diabetes with no DR (NoDR), and non-proliferative DR (NPDR) eyes was conducted. Our study reveals four quantitative OCTA features (VBA, CBA1, VBC, and VWR1) that produce robust DR detection and staging classification (ANOVA, P < 0.05).
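A minimal sketch of computing branch-geometry features at a single bifurcation from direction vectors and vessel widths; the branching-coefficient formula used here ((d1^2 + d2^2)/d0^2) and the angle conventions are illustrative assumptions, since the abstract does not give the exact definitions.

    # Sketch: branch-geometry features at one vessel bifurcation (NumPy). Definitions assumed.
    import numpy as np

    def angle_deg(u, v):
        """Angle between two 2-D direction vectors, in degrees."""
        cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

    def branch_features(parent_dir, child1_dir, child2_dir, d0, d1, d2):
        cba1 = angle_deg(parent_dir, child1_dir)   # child branching angle 1
        cba2 = angle_deg(parent_dir, child2_dir)   # child branching angle 2
        vba = angle_deg(child1_dir, child2_dir)    # total branching angle between children
        return {"VBA": vba, "CBA1": cba1, "CBA2": cba2,
                "VBC": (d1**2 + d2**2) / d0**2,    # branching coefficient (illustrative form)
                "VWR1": d1 / d0, "VWR2": d2 / d0}  # children-to-parent width ratios

    print(branch_features(np.array([1.0, 0.0]), np.array([1.0, 0.5]),
                          np.array([1.0, -0.7]), d0=18.0, d1=14.0, d2=12.0))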

18.
Ophthalmol Retina ; 3(10): 826-834, 2019 10.
Article in English | MEDLINE | ID: mdl-31227330

ABSTRACT

PURPOSE: To correlate quantitative OCT angiography (OCTA) biomarkers with clinical features and to predict the extent of visual improvement after ranibizumab treatment for diabetic macular edema (DME) with OCTA biomarkers. DESIGN: Retrospective, longitudinal study in Taiwan. PARTICIPANTS: Fifty eyes of 50 patients with DME and 22 eyes of 22 healthy persons (with the exception of cataract and refractive error) from 1 hospital. METHODS: Each eye underwent OCT angiography (RTVue XR Avanti System with AngioVue software version 2017.1; Optovue, Fremont, CA), and 3 × 3-mm² en face OCTA images of the superficial layer and the deep layer were obtained at baseline and after 3 monthly injections of ranibizumab in the study group. OCT angiography images also were acquired from the control group. MAIN OUTCOME MEASURES: Five OCTA biomarkers, including foveal avascular zone (FAZ) area (FAZ-A), FAZ contour irregularity (FAZ-CI), average vessel caliber (AVC), vessel tortuosity (VT), and vessel density (VD), were analyzed comprehensively. Best-corrected visual acuity (BCVA) and central retinal thickness (CRT) also were obtained. Student t tests were used to compare the OCTA biomarkers between the study group and the control group. Linear regression models were used to evaluate the correlations between the baseline OCTA biomarkers and the changes in BCVA and CRT after treatment. RESULTS: Eyes with DME had larger AVC, VT, FAZ-A, and FAZ-CI and lower VD than those in the control group (P < 0.001 for all). After the loading ranibizumab treatment, these OCTA biomarkers improved but did not return to normal levels. Among all biomarkers, higher inner parafoveal VD in the superficial layer at baseline correlated most significantly with visual gain after treatment in the multiple regression model with adjustment for CRT and ellipsoid zone disruption (P < 0.001). To predict visual improvement, outer parafoveal VD in the superficial layer at baseline showed the largest area under the receiver operating characteristic curve (0.787; P = 0.004). No baseline OCTA biomarkers showed any significant correlation specifically with anatomic improvement. CONCLUSIONS: For eyes with DME, parafoveal VD in the superficial layer at baseline was an independent predictor of visual improvement after the loading ranibizumab treatment.


Subjects
Diabetic Retinopathy/drug therapy; Fluorescein Angiography/methods; Macula Lutea/pathology; Macular Edema/drug therapy; Ranibizumab/administration & dosage; Tomography, Optical Coherence/methods; Visual Acuity; Angiogenesis Inhibitors/administration & dosage; Diabetic Retinopathy/complications; Diabetic Retinopathy/diagnosis; Female; Follow-Up Studies; Fundus Oculi; Humans; Intravitreal Injections; Macula Lutea/drug effects; Macular Edema/diagnosis; Macular Edema/etiology; Male; Middle Aged; Prognosis; Retrospective Studies; Vascular Endothelial Growth Factor A/antagonists & inhibitors
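A minimal sketch of the two baseline-biomarker analyses described above: a multiple linear regression of visual gain on baseline parafoveal vessel density adjusted for central retinal thickness, and an ROC analysis for predicting improvement. All data below are random placeholders; only the variable names mirror the abstract's biomarkers.

    # Sketch: baseline-biomarker analyses (statsmodels / scikit-learn). Toy data only.
    import numpy as np
    import statsmodels.api as sm
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(2)
    n = 50
    parafoveal_vd = rng.normal(45, 5, n)   # baseline superficial parafoveal vessel density (%)
    crt = rng.normal(420, 60, n)           # baseline central retinal thickness (um)
    bcva_change = 0.02 * parafoveal_vd - 0.001 * crt + rng.normal(0, 0.1, n)   # toy outcome

    # Multiple linear regression: visual gain vs. baseline VD, adjusted for CRT
    X = sm.add_constant(np.column_stack([parafoveal_vd, crt]))
    print(sm.OLS(bcva_change, X).fit().summary())

    # ROC analysis: does baseline VD discriminate eyes that gained vision?
    improved = (bcva_change > np.median(bcva_change)).astype(int)
    print("AUC:", roc_auc_score(improved, parafoveal_vd))
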
19.
J Clin Med ; 8(6)2019 Jun 18.
Article in English | MEDLINE | ID: mdl-31216768

ABSTRACT

Artificial intelligence (AI) classification holds promise as a novel and affordable screening tool for clinical management of ocular diseases. Rural and underserved areas, which suffer from a lack of access to experienced ophthalmologists, may particularly benefit from this technology. Quantitative optical coherence tomography angiography (OCTA) imaging provides excellent capability to identify subtle vascular distortions, which are useful for classifying retinovascular diseases. However, the application of AI for differentiation and classification of multiple eye diseases is not yet established. In this study, we demonstrate supervised machine learning-based multi-task OCTA classification. We sought 1) to differentiate normal from diseased ocular conditions, 2) to differentiate different ocular disease conditions from each other, and 3) to stage the severity of each ocular condition. Quantitative OCTA features, including blood vessel tortuosity (BVT), blood vascular caliber (BVC), vessel perimeter index (VPI), blood vessel density (BVD), foveal avascular zone (FAZ) area (FAZ-A), and FAZ contour irregularity (FAZ-CI), were fully automatically extracted from the OCTA images. A stepwise backward elimination approach was employed to identify sensitive OCTA features and optimal feature combinations for the multi-task classification. For proof-of-concept demonstration, diabetic retinopathy (DR) and sickle cell retinopathy (SCR) were used to validate the supervised machine learning classifier. The presented AI classification methodology is applicable and can be readily extended to other ocular diseases, holding promise to enable a mass-screening platform for clinical deployment and telemedicine.
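A minimal sketch of stepwise backward feature elimination over the six OCTA features, using scikit-learn's SequentialFeatureSelector; the classifier, selection target, and random data are illustrative assumptions, not the study's exact procedure.

    # Sketch: stepwise backward elimination of OCTA features (scikit-learn). Toy data only.
    import numpy as np
    from sklearn.feature_selection import SequentialFeatureSelector
    from sklearn.svm import SVC

    feature_names = np.array(["BVT", "BVC", "VPI", "BVD", "FAZ_A", "FAZ_CI"])
    rng = np.random.default_rng(3)
    X = rng.normal(size=(120, 6))
    y = rng.integers(0, 3, size=120)   # e.g., control / DR / SCR (toy labels)

    selector = SequentialFeatureSelector(SVC(kernel="linear"), n_features_to_select=3,
                                         direction="backward", cv=5)
    selector.fit(X, y)
    print("retained features:", feature_names[selector.get_support()])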

20.
Biomed Opt Express ; 10(4): 2055-2066, 2019 Apr 01.
Article in English | MEDLINE | ID: mdl-31061771

ABSTRACT

Differential artery-vein analysis promises better sensitivity for retinal disease detection and classification. However, clinical optical coherence tomography angiography (OCTA) instruments lack the function of artery-vein differentiation. This study aims to verify the feasibility of using OCT intensity feature analysis to guide artery-vein differentiation in OCTA. Four OCT intensity profile features, including i) ratio of vessel width to central reflex, ii) average of maximum profile brightness, iii) average of median profile intensity, and iv) optical density of vessel boundary intensity compared to background intensity, are used to classify artery-vein source nodes in OCT. A blood vessel tracking algorithm is then employed to automatically generate the OCT artery-vein map. Given the fact that OCT and OCTA are intrinsically reconstructed from the same raw spectrogram, the OCT artery-vein map is able to guide artery-vein differentiation in OCTA directly.
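A minimal sketch of the four OCT intensity-profile features for a single vessel cross-section; how the central reflex, vessel boundary, and optical density are computed here are assumptions for illustration, since the abstract does not give exact formulas.

    # Sketch: the four OCT intensity-profile features for one vessel cross-section (NumPy).
    # The central-reflex window, boundary estimate, and optical-density form are assumed.
    import numpy as np

    def profile_features(profile, background):
        """profile: 1-D OCT intensities across one vessel; background: nearby non-vessel intensity."""
        vessel_width = len(profile)
        third = vessel_width // 3
        central_reflex = profile[third:vessel_width - third].max()   # bright reflex near the center
        boundary = 0.5 * (profile[0] + profile[-1])                  # intensity at the vessel edges
        return {
            "width_to_central_reflex": vessel_width / central_reflex,   # feature i
            "max_brightness": float(profile.max()),                     # feature ii
            "median_intensity": float(np.median(profile)),              # feature iii
            "optical_density": float(np.log10(background / boundary)),  # feature iv (assumed form)
        }

    print(profile_features(np.array([80.0, 60.0, 55.0, 70.0, 58.0, 62.0, 85.0]), background=120.0))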
