Results 1 - 20 of 44
1.
J Med Internet Res ; 26: e52506, 2024 Aug 14.
Article in English | MEDLINE | ID: mdl-39141915

ABSTRACT

BACKGROUND: For medical artificial intelligence (AI) training and validation, human expert labels are considered the gold standard that represents the correct answers or desired outputs for a given data set. These labels serve as a reference or benchmark against which the model's predictions are compared. OBJECTIVE: This study aimed to assess the accuracy of a custom deep learning (DL) algorithm in classifying diabetic retinopathy (DR) and to demonstrate how label errors may affect this assessment in a nationwide DR-screening program. METHODS: Fundus photographs from the Lifeline Express, a nationwide DR-screening program, were analyzed to identify the presence of referable DR using both (1) manual grading by National Health Service England-certified graders and (2) a DL-based DR-screening algorithm with validated laboratory performance. To assess the accuracy of the labels, a random sample of images on which the DL algorithm and the labels disagreed was adjudicated by ophthalmologists who were masked to the previous grading results. The label error rates in this sample were then used to correct the numbers of negative and positive cases in the entire data set, yielding postcorrection labels. The DL algorithm's performance was evaluated against both pre- and postcorrection labels. RESULTS: The analysis included 736,083 images from 237,824 participants. The DL algorithm exhibited a gap between real-world and lab-reported performance in this nationwide data set, with a sensitivity increase of 12.5% (from 79.6% to 92.5%, P<.001) and a specificity increase of 6.9% (from 91.6% to 98.5%, P<.001) after label correction. In the random sample, 63.6% (560/880) of negative images and 5.2% (140/2710) of positive images were misclassified in the precorrection human labels. High myopia was the primary reason for misclassifying non-DR images as referable DR, while laser spots were predominantly responsible for misclassified referable cases. The estimated label error rate for the entire data set was 1.2%. Label correction was estimated to bring about a 12.5% enhancement in the estimated sensitivity of the DL algorithm (P<.001). CONCLUSIONS: Label errors from human image grading, although small in percentage, can significantly affect the performance evaluation of DL algorithms in real-world DR screening.


Subject(s)
Deep Learning, Diabetic Retinopathy, Diabetic Retinopathy/diagnosis, Humans, Algorithms, Mass Screening/methods, Mass Screening/standards, Female, Male, Middle Aged
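The label-correction step above amounts to simple confusion-matrix arithmetic. A minimal sketch, assuming (as an illustration, not necessarily the authors' exact procedure) that the adjudicated error rates apply uniformly to all disagreement cases:

```python
def corrected_sens_spec(tp, fp, tn, fn, neg_error_rate, pos_error_rate):
    """Recompute sensitivity/specificity after label correction.

    neg_error_rate: fraction of label-negative disagreement images that
        adjudication found truly positive (abstract: 560/880 = 63.6%).
    pos_error_rate: fraction of label-positive disagreement images that
        adjudication found truly negative (abstract: 140/2710 = 5.2%).
    By the precorrection labels, fp = model-positive/label-negative and
    fn = model-negative/label-positive disagreements.
    """
    fp_to_tp = fp * neg_error_rate  # label was wrong, model was right
    fn_to_tn = fn * pos_error_rate
    tp, fp = tp + fp_to_tp, fp - fp_to_tp
    tn, fn = tn + fn_to_tn, fn - fn_to_tn
    return tp / (tp + fn), tn / (tn + fp)
```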
2.
NPJ Aging ; 10(1): 36, 2024 Aug 05.
Article in English | MEDLINE | ID: mdl-39103390

ABSTRACT

The comorbidity of Alzheimer's disease (AD) and age-related macular degeneration (AMD) has been established in clinical and genetic studies. There is growing interest in determining the shared environmental factors associated with both conditions. Recent advancements in record-linkage techniques enable the contributing factors to AD and AMD to be identified from a wide range of variables. We therefore first constructed a knowledge graph based on the literature, which included all statistically significant risk factors for AD and AMD. An environment-wide association study (EWAS) was then conducted in the UK Biobank to assess the contribution of various environmental factors to the comorbidity of AD and AMD. Based on conditional Q-Q plots and a Bayesian algorithm, several shared environmental factors were identified, which could be categorized into the domains of health condition, biological sample parameters, body index, and attendance availability. Finally, we generated a shared etiology landscape for AD and AMD by combining existing knowledge with our novel findings.
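The EWAS step can be pictured as one covariate-adjusted regression per candidate exposure, with false-discovery-rate control across all tests. A minimal sketch with statsmodels, using hypothetical column names (the abstract does not specify the authors' exact model):

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.multitest import multipletests

def ewas(df: pd.DataFrame, exposures, outcome="comorbid_ad_amd",
         covariates=("age", "sex")):
    """One covariate-adjusted logistic model per environmental factor.

    Column names are hypothetical; exposures and covariates must be
    valid column identifiers in `df`, and `outcome` must be binary.
    """
    rows = []
    for x in exposures:
        formula = f"{outcome} ~ {x} + " + " + ".join(covariates)
        fit = smf.logit(formula, data=df).fit(disp=0)
        rows.append((x, fit.params[x], fit.pvalues[x]))
    res = pd.DataFrame(rows, columns=["exposure", "beta", "p"])
    # Benjamini-Hochberg false-discovery-rate control across all exposures
    res["significant"] = multipletests(res["p"], method="fdr_bh")[0]
    return res
```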

3.
Mult Scler Relat Disord ; 88: 105753, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38996710

ABSTRACT

BACKGROUND: There is growing evidence that vascular abnormalities contribute to multiple sclerosis (MS), and the retinal microvasculature serves as a visible window for observing vessels. This study tests the hypothesis that retinal vascular curve tortuosity is associated with MS. METHODS: Participants from the UK Biobank with complete clinical records and gradable fundus photographs were included in the study. Arteriolar and venular curve tortuosity and vessel area density were quantified automatically using a deep learning system. Individuals with MS were matched to healthy controls using propensity score matching (PSM). Conditional logistic regression was used to investigate the association between retinal vascular characteristics and MS. A receiver operating characteristic (ROC) curve was used to assess diagnostic performance for MS. RESULTS: Venular curve tortuosity (VCT) was significantly associated with MS: patients with MS were more likely to have lower VCT than the non-MS group (OR = 0.22 [95% CI, 0.05 to 0.92], P < 0.05). CONCLUSIONS: Our study reveals a significant association between vessel curve tortuosity and MS. Lower curve tortuosity of the retinal venular network may indicate a higher risk of incident multiple sclerosis.


Subject(s)
Biological Specimen Banks, Multiple Sclerosis, Retinal Vessels, Humans, Multiple Sclerosis/physiopathology, Multiple Sclerosis/diagnostic imaging, Multiple Sclerosis/epidemiology, Multiple Sclerosis/diagnosis, Female, Male, Middle Aged, United Kingdom, Retinal Vessels/diagnostic imaging, Retinal Vessels/pathology, Cross-Sectional Studies, Adult, Microvessels/pathology, Microvessels/diagnostic imaging, Microvessels/physiopathology, Aged, Deep Learning, UK Biobank
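Propensity score matching of the kind described above can be sketched with scikit-learn: a logistic model estimates each participant's probability of being an MS case, and each case is matched to the nearest control on the logit of that score. Column names and covariates below are hypothetical; conditional logistic regression on the matched pairs would follow.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def propensity_match(df: pd.DataFrame, treat_col="ms",
                     covariates=("age", "sex", "bmi")) -> pd.DataFrame:
    """1:1 nearest-neighbor matching (with replacement, for simplicity)
    on the logit of the propensity score."""
    X = df[list(covariates)].to_numpy()
    t = df[treat_col].to_numpy()
    ps = LogisticRegression(max_iter=1000).fit(X, t).predict_proba(X)[:, 1]
    logit_ps = np.log(ps / (1 - ps)).reshape(-1, 1)
    cases = np.flatnonzero(t == 1)
    controls = np.flatnonzero(t == 0)
    nn = NearestNeighbors(n_neighbors=1).fit(logit_ps[controls])
    _, idx = nn.kneighbors(logit_ps[cases])
    matched_controls = controls[idx.ravel()]
    return df.iloc[np.concatenate([cases, matched_controls])]
```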
4.
J Med Internet Res ; 2024 Jul 15.
Article in English | MEDLINE | ID: mdl-39046096

ABSTRACT

BACKGROUND: Large language models (LLMs) have demonstrated advanced performance in processing clinical information. However, commercially available LLMs lack specialized medical knowledge and remain susceptible to generating inaccurate information. Given the need for self-management in diabetes, patients commonly seek information online. We introduce the RISE framework and evaluate its performance in enhancing LLMs to provide accurate responses to diabetes-related inquiries. OBJECTIVE: This study aimed to evaluate the potential of the RISE framework, an information retrieval and augmentation tool, to improve LLMs' ability to respond accurately and safely to diabetes-related inquiries. METHODS: RISE, a retrieval-augmentation framework, comprises four steps: Rewriting Query, Information Retrieval, Summarization, and Execution. Using a set of 43 common diabetes-related questions, we evaluated three base LLMs (GPT-4, Anthropic Claude 2, Google Bard) and their RISE-enhanced versions. Responses were assessed by clinicians for accuracy and comprehensiveness, and by patients for understandability. RESULTS: The integration of RISE significantly improved the accuracy and comprehensiveness of responses from all three base LLMs. On average, the percentage of accurate responses increased by 12% ((122 − 107)/129) with RISE. Specifically, the rate of accurate responses increased by 7% ((42 − 39)/43) for GPT-4, 19% ((39 − 31)/43) for Claude 2, and 9% ((41 − 37)/43) for Google Bard. The framework also enhanced response comprehensiveness, with mean scores improving by 0.44; understandability improved by 0.19 on average. Data were collected from September 30, 2023, to February 5, 2024. CONCLUSIONS: RISE significantly improves LLMs' performance in responding to diabetes-related inquiries, enhancing accuracy, comprehensiveness, and understandability. These improvements have important implications for RISE's future role in patient education and chronic illness self-management, which could help relieve pressure on medical resources and raise public awareness of medical knowledge.
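The four RISE steps map naturally onto a simple pipeline. A sketch of the control flow only, where `llm(prompt)` and `search_kb(query)` are hypothetical placeholders rather than the authors' implementation:

```python
def rise_answer(question: str, llm, search_kb) -> str:
    """Sketch of the four RISE steps; `llm(prompt)` returns text and
    `search_kb(query)` returns a list of passages (both hypothetical)."""
    # 1. Rewriting Query: normalize the patient's question for retrieval.
    query = llm(f"Rewrite as a concise medical search query: {question}")
    # 2. Information Retrieval: pull passages from a curated diabetes corpus.
    passages = search_kb(query)
    # 3. Summarization: compress retrieved evidence to fit the context window.
    summary = llm("Summarize for answering a patient question:\n"
                  + "\n".join(passages))
    # 4. Execution: answer grounded in the summarized evidence.
    return llm(f"Using only this evidence:\n{summary}\n\nAnswer: {question}")
```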

5.
iScience ; 27(7): 110021, 2024 Jul 19.
Article in English | MEDLINE | ID: mdl-39055931

ABSTRACT

Existing automatic analysis of fundus fluorescein angiography (FFA) images faces limitations, including reliance on a predetermined set of possible image classifications and confinement to text-based question-answering (QA) approaches. This study addresses these limitations by developing an end-to-end unified model that uses synthetic data to train a visual question-answering model for FFA images. To achieve this, we employed ChatGPT to generate 4,110,581 QA pairs for a large FFA dataset comprising 654,343 FFA images from 9,392 participants. We then fine-tuned the Bootstrapping Language-Image Pre-training (BLIP) framework to handle vision and language simultaneously. The performance of the fine-tuned model (ChatFFA) was thoroughly evaluated through automated and manual assessments, as well as case studies based on an external validation set, demonstrating satisfactory results. In conclusion, our ChatFFA system paves the way for improved efficiency and feasibility in medical imaging analysis by leveraging generative large language models.

6.
Arch Gerontol Geriatr ; 126: 105546, 2024 Nov.
Article in English | MEDLINE | ID: mdl-38941948

ABSTRACT

OBJECTIVES: To examine the association between environmental measures and brain volumes, and its potential mediators. STUDY DESIGN: This was a prospective study. METHODS: Our analysis included 34,454 participants (53.4% female) aged 40-73 years at baseline (between 2006 and 2010) from the UK Biobank. Brain volumes were measured using magnetic resonance imaging between 2014 and 2019. RESULTS: Greater proximity to greenspace buffered at 1000 m at baseline was associated with larger volumes of total brain measured 8.8 years after baseline assessment (standardized β (95% CI) for each 10% increment in coverage: 0.013 (0.005, 0.020)), grey matter (0.013 (0.006, 0.020)), and white matter (0.011 (0.004, 0.017)) after adjustment for covariates and air pollution. The corresponding estimates for natural environment buffered at 1000 m were 0.010 (0.004, 0.017), 0.009 (0.004, 0.015), and 0.010 (0.004, 0.016), respectively. Similar results were observed for greenspace and natural environment buffered at 300 m. The strongest mediator of the association between greenspace buffered at 1000 m and total brain volume was smoking (percentage (95% CI) of total variance explained: 7.9% (5.5-11.4%)), followed by mean sphered cell volume (3.3% (1.8-5.8%)), vitamin D (2.9% (1.6-5.1%)), and blood creatinine (2.7% (1.6-4.7%)). Significant mediators combined explained 18.5% (13.2-25.3%) of the association with total brain volume and 32.9% (95% CI: 22.3-45.7%) of the association with grey matter volume. The percentage (95% CI) of the association between natural environment and total brain volume explained by significant mediators combined was 20.6% (14.7-28.1%). CONCLUSIONS: Higher coverage of greenspace and natural environment may benefit brain health by promoting a healthy lifestyle and improving biomarkers including vitamin D and red blood cell indices.


Subject(s)
Biomarkers, Brain, Life Style, Magnetic Resonance Imaging, Humans, Female, Male, Middle Aged, Brain/diagnostic imaging, Aged, Prospective Studies, Adult, Biomarkers/blood, Urban Population/statistics & numerical data, United Kingdom, Organ Size, Environment
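The mediation percentages above are shares of the exposure-outcome association explained by each mediator. One common estimator is the difference method; a toy illustration (the abstract does not state the exact estimator used):

```python
def proportion_mediated(beta_total: float, beta_direct: float) -> float:
    """Difference method: share of the total exposure effect that is
    removed once the mediator enters the model."""
    return (beta_total - beta_direct) / beta_total

# Hypothetical coefficients: if adjusting for smoking shrinks the
# greenspace coefficient from 1.000 to 0.921 (standardized units),
# the mediated share is 7.9%, the order of magnitude reported above.
print(proportion_mediated(1.000, 0.921))  # 0.079
```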
7.
Adv Sci (Weinh) ; 11(28): e2403507, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38733084

ABSTRACT

Defects in perovskite films cause charge-carrier trapping, which shortens carrier lifetime and diffusion length, so defect passivation has become a promising direction in perovskite research. However, how defects disturb carrier transport, and how passivation affects carrier transport in CsPbBr3, remain unclear. Here, the carrier dynamics and diffusion processes of pristine and LiBr-passivated CsPbBr3 films are investigated using transient absorption spectroscopy and transient absorption microscopy. We find a fast hot-carrier trapping process under above-bandgap excitation; hot-carrier trapping decreases the population of the diffusible cold carriers and thereby lowers the carrier diffusion constant. LiBr is shown to passivate the defects and lower the trapping probability of hot carriers, thus improving the carrier diffusion rate. These findings demonstrate the influence of hot-carrier trapping on carrier diffusion in CsPbBr3 films.
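In transient absorption microscopy, the diffusion constant is commonly extracted from the time-dependent broadening of an initially Gaussian carrier profile, sigma^2(t) = sigma^2(0) + 2*D*t. A sketch of that standard analysis (not necessarily the authors' exact pipeline):

```python
import numpy as np
from scipy.stats import linregress

def diffusion_constant(t_ps: np.ndarray, sigma_um: np.ndarray) -> float:
    """Fit sigma^2(t) = sigma^2(0) + 2*D*t for a spreading Gaussian
    carrier profile imaged at a series of pump-probe delays.

    t_ps: delays in picoseconds; sigma_um: Gaussian widths in microns.
    Returns D in cm^2/s.
    """
    slope, *_ = linregress(t_ps, sigma_um ** 2)  # slope = 2D in um^2/ps
    return (slope / 2) * 1e4                     # um^2/ps -> cm^2/s
```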

8.
NPJ Digit Med ; 7(1): 111, 2024 May 03.
Article in English | MEDLINE | ID: mdl-38702471

ABSTRACT

Fundus fluorescein angiography (FFA) is a crucial diagnostic tool for chorioretinal diseases, but its interpretation requires significant expertise and time. Prior studies have used Artificial Intelligence (AI)-based systems to assist FFA interpretation, but these systems lack user interaction and comprehensive evaluation by ophthalmologists. Here, we used large language models (LLMs) to develop an automated interpretation pipeline for both report generation and medical question-answering (QA) for FFA images. The pipeline comprises two parts: an image-text alignment module (Bootstrapping Language-Image Pre-training) for report generation and an LLM (Llama 2) for interactive QA. The model was developed using 654,343 FFA images with 9392 reports. It was evaluated both automatically, using language-based and classification-based metrics, and manually by three experienced ophthalmologists. The automatic evaluation of the generated reports demonstrated that the system can generate coherent and comprehensible free-text reports, achieving a BERTScore of 0.70 and F1 scores ranging from 0.64 to 0.82 for detecting top-5 retinal conditions. The manual evaluation revealed acceptable accuracy (68.3%, Kappa 0.746) and completeness (62.3%, Kappa 0.739) of the generated reports. The generated free-form answers were evaluated manually, with the majority meeting the ophthalmologists' criteria (error-free: 70.7%, complete: 84.0%, harmless: 93.7%, satisfied: 65.3%, Kappa: 0.762-0.834). This study introduces an innovative framework that combines multi-modal transformers and LLMs, enhancing ophthalmic image interpretation, and facilitating interactive communications during medical consultation.
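The BERTScore reported above can be reproduced for any report pair with the open-source `bert-score` package; a minimal sketch with hypothetical report text:

```python
from bert_score import score  # pip install bert-score

candidates = ["Late-phase leakage at the macula consistent with CNV."]
references = ["Late leakage in the macular region suggesting "
              "choroidal neovascularization."]

# P, R, F1 are tensors with one value per candidate/reference pair.
P, R, F1 = score(candidates, references, lang="en", verbose=False)
print(f"BERTScore F1: {F1.mean().item():.2f}")
```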

9.
Br J Ophthalmol ; 2024 May 24.
Article in English | MEDLINE | ID: mdl-38789133

ABSTRACT

PURPOSE: To evaluate the capabilities and limitations of a GPT-4V(ision)-based chatbot in interpreting ocular multimodal images. METHODS: We developed a digital ophthalmologist app using GPT-4V and evaluated its performance with a dataset (60 images, 60 ophthalmic conditions, 6 modalities) that included slit-lamp, scanning laser ophthalmoscopy, fundus photography of the posterior pole (FPP), optical coherence tomography, fundus fluorescein angiography and ocular ultrasound images. The chatbot was tested with ten open-ended questions per image, covering examination identification, lesion detection, diagnosis and decision support. The responses were manually assessed for accuracy, usability, safety and diagnosis repeatability. Auto-evaluation was performed using sentence similarity and GPT-4-based auto-evaluation. RESULTS: Of 600 responses, 30.6% were accurate, 21.5% were highly usable and 55.6% were deemed harmless. GPT-4V performed best with slit-lamp images, with 42.0%, 38.5% and 68.5% of responses being accurate, highly usable and harmless, respectively. Its performance was weaker on FPP images, with only 13.7%, 3.7% and 38.5% in the same categories. GPT-4V correctly identified 95.6% of the imaging modalities and showed varying accuracies in lesion identification (25.6%), diagnosis (16.1%) and decision support (24.0%). The overall repeatability of GPT-4V in diagnosing ocular images was 63.3% (38/60). The overall sentence similarity between responses generated by GPT-4V and human answers was 55.5%, with Spearman correlations of 0.569 for accuracy and 0.576 for usability. CONCLUSION: GPT-4V is not yet suitable for clinical decision-making in ophthalmology. Our study serves as a benchmark for enhancing ophthalmic multimodal models.

10.
Br J Ophthalmol ; 2024 Mar 26.
Article in English | MEDLINE | ID: mdl-38508675

ABSTRACT

BACKGROUND: Indocyanine green angiography (ICGA) is vital for diagnosing chorioretinal diseases, but its interpretation and the associated patient communication require extensive expertise and time. We aimed to develop a bilingual ICGA report-generation and question-answering (QA) system. METHODS: Our dataset comprised 213 129 ICGA images from 2919 participants. The system comprised two stages: image-text alignment for report generation by a multimodal transformer architecture, and large language model (LLM)-based QA with ICGA text reports and human-input questions. Performance was assessed using both quantitative metrics (including Bilingual Evaluation Understudy (BLEU), Consensus-based Image Description Evaluation (CIDEr), Recall-Oriented Understudy for Gisting Evaluation-Longest Common Subsequence (ROUGE-L), Semantic Propositional Image Caption Evaluation (SPICE), accuracy, sensitivity, specificity, precision and F1 score) and subjective evaluation by three experienced ophthalmologists using 5-point scales (5 refers to high quality). RESULTS: We produced 8757 ICGA reports covering 39 disease-related conditions after bilingual translation (66.7% English, 33.3% Chinese). The ICGA-GPT model's report-generation performance was evaluated with BLEU scores (1-4) of 0.48, 0.44, 0.40 and 0.37; CIDEr of 0.82; ROUGE-L of 0.41 and SPICE of 0.18. For disease-based metrics, the average specificity, accuracy, precision, sensitivity and F1 score were 0.98, 0.94, 0.70, 0.68 and 0.64, respectively. Assessing the quality of 50 images (100 reports), three ophthalmologists achieved substantial agreement (kappa=0.723 for completeness, kappa=0.738 for accuracy), yielding scores from 3.20 to 3.55. In an interactive QA scenario involving 100 generated answers, the ophthalmologists provided scores of 4.24, 4.22 and 4.10, displaying good consistency (kappa=0.779). CONCLUSION: This study introduces the ICGA-GPT model for report generation and interactive QA, underscoring the potential of LLMs in assisting with automated ICGA image interpretation.
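BLEU-1 through BLEU-4, as reported above, can be computed with NLTK; a minimal sketch on hypothetical tokenized reports:

```python
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

# One hypothesis with one reference; both are hypothetical report tokens.
references = [[["late", "staining", "of", "the", "optic", "disc"]]]
hypotheses = [["late", "staining", "at", "the", "optic", "disc"]]

smooth = SmoothingFunction().method1
for n in range(1, 5):
    # Uniform weights over 1..n-grams, zero beyond (BLEU-n).
    weights = tuple(1.0 / n for _ in range(n)) + (0.0,) * (4 - n)
    bleu = corpus_bleu(references, hypotheses, weights=weights,
                       smoothing_function=smooth)
    print(f"BLEU-{n}: {bleu:.2f}")
```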

11.
NPJ Digit Med ; 7(1): 34, 2024 Feb 12.
Article in English | MEDLINE | ID: mdl-38347098

ABSTRACT

Age-related macular degeneration (AMD) is the leading cause of central vision impairment among the elderly, and effective, accurate AMD screening tools are urgently needed. Indocyanine green angiography (ICGA) is a well-established technique for detecting chorioretinal diseases, but its invasive nature and potential risks impede its routine clinical application. Here, we developed a deep-learning model capable of generating realistic ICGA images from color fundus photography (CF) using generative adversarial networks (GANs) and evaluated its performance in AMD classification. The model was developed with 99,002 CF-ICGA pairs from a tertiary center. The quality of the generated ICGA images underwent objective evaluation using mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and structural similarity measures (SSIM), among others, and subjective evaluation by two experienced ophthalmologists. The model generated realistic early, mid- and late-phase ICGA images, with SSIM spanning 0.57 to 0.65. Subjective quality scores ranged from 1.46 to 2.74 on a five-point scale (1 refers to real ICGA image quality; kappa 0.79-0.84). Moreover, we assessed the application of translated ICGA images in AMD screening on an external dataset (n = 13,887) by calculating the area under the ROC curve (AUC) for AMD classification. Combining generated ICGA with real CF images improved the accuracy of AMD classification, with the AUC increasing from 0.93 to 0.97 (P < 0.001). These results suggest that CF-to-ICGA translation can serve as a cross-modal data-augmentation method to address the data hunger often encountered in deep-learning research, and as a promising add-on for population-based AMD screening. Real-world validation is warranted before clinical use.
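The objective image-quality metrics named above (MAE, PSNR, SSIM) are available in scikit-image; a minimal sketch, assuming grayscale images scaled to [0, 1]:

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def generation_metrics(real: np.ndarray, fake: np.ndarray) -> dict:
    """Objective quality of a generated image against its real counterpart.

    real, fake: 2D grayscale arrays scaled to [0, 1].
    """
    return {
        "MAE": float(np.mean(np.abs(real - fake))),
        "PSNR": peak_signal_noise_ratio(real, fake, data_range=1.0),
        "SSIM": structural_similarity(real, fake, data_range=1.0),
    }
```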

12.
NPJ Digit Med ; 7(1): 43, 2024 Feb 21.
Article in English | MEDLINE | ID: mdl-38383738

ABSTRACT

Artificial intelligence (AI) models have shown great accuracy in health screening. However, for real-world implementation, high accuracy may not guarantee cost-effectiveness: improving an AI model's sensitivity finds more high-risk patients but may raise medical costs, while increasing specificity reduces unnecessary referrals but may weaken detection capability. To evaluate the trade-off between AI model performance and long-run cost-effectiveness, we conducted a cost-effectiveness analysis in a nationwide diabetic retinopathy (DR) screening program in China, comprising 251,535 participants with diabetes, over 30 years. We tested a validated AI model at 1100 different diagnostic performance levels (presented as sensitivity/specificity pairs) and modeled annual screening scenarios. The status quo was defined as the scenario with the most accurate AI performance. The incremental cost-effectiveness ratio (ICER) was calculated for the other scenarios against the status quo as the cost-effectiveness metric. Compared with the status quo (sensitivity/specificity: 93.3%/87.7%), six scenarios were cost-saving and seven were cost-effective. To be cost-saving or cost-effective, the AI model needed to reach a minimum sensitivity of 88.2% and specificity of 80.4%. The most cost-effective AI model exhibited higher sensitivity (96.3%) and lower specificity (80.4%) than the status quo. In settings with higher DR prevalence and willingness-to-pay levels, the AI needed higher sensitivity for optimal cost-effectiveness. Urban regions and younger patient groups also required higher sensitivity in AI-based screening. In real-world DR screening, the most accurate AI model may not be the most cost-effective; cost-effectiveness should be evaluated independently, and it is most strongly affected by the AI's sensitivity.
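The ICER compares each scenario with the status quo as incremental cost divided by incremental health effect. A toy illustration with hypothetical costs and quality-adjusted life years (QALYs):

```python
def icer(cost: float, effect: float,
         cost_ref: float, effect_ref: float) -> float:
    """Incremental cost-effectiveness ratio versus a reference scenario,
    e.g. cost per quality-adjusted life year (QALY) gained."""
    return (cost - cost_ref) / (effect - effect_ref)

# Hypothetical numbers: a higher-sensitivity model that costs more
# overall but averts more vision loss.
print(icer(cost=1_050_000, effect=402.0,
           cost_ref=1_000_000, effect_ref=400.0))
# -> 25000.0 per QALY; cost-effective if below willingness-to-pay.
```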

13.
Ophthalmol Sci ; 4(3): 100441, 2024.
Article in English | MEDLINE | ID: mdl-38420613

ABSTRACT

Purpose: To use fundus fluorescein angiography (FFA) to label the capillaries on color fundus (CF) photographs, train a deep learning model to quantify retinal capillaries noninvasively from CF, and apply it to cardiovascular disease (CVD) risk assessment. Design: Cross-sectional and longitudinal study. Participants: A total of 90732 pairs of CF-FFA images from 3893 participants for segmentation model development, and 49229 participants in the UK Biobank for association analysis. Methods: We matched the vessels extracted from FFA and CF, and used the vessels from FFA as labels to train a deep learning model (RMHAS-FA) to segment retinal capillaries in CF. We tested the model's accuracy on a manually labeled internal test set (FundusCapi). For external validation, we tested the segmentation model on 7 vessel-segmentation datasets and investigated the clinical value of the segmented vessels in predicting CVD events in the UK Biobank. Main Outcome Measures: Area under the receiver operating characteristic curve (AUC), accuracy, sensitivity, and specificity for segmentation; hazard ratio (HR; 95% confidence interval [CI]) for Cox regression analysis. Results: On the FundusCapi dataset, the segmentation performance was AUC = 0.95, accuracy = 0.94, sensitivity = 0.90, and specificity = 0.93. Skeleton density of smaller vessels correlated more strongly with CVD risk factors and incidence (P < 0.01). Reduced density of small-vessel skeletons was strongly associated with an increased risk of CVD incidence and mortality in women (HR [95% CI] = 0.91 [0.84-0.98] and 0.68 [0.54-0.86], respectively). Conclusions: Using paired CF-FFA images, we automated the laborious manual labeling process and enabled noninvasive capillary quantification from CF, supporting its potential as a sensitive screening method for identifying individuals at high risk of future CVD events. Financial Disclosures: Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
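The survival analysis described above can be sketched with the `lifelines` package; the toy table below stands in for the UK Biobank data (all values hypothetical):

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical table: one row per participant with follow-up time,
# CVD event indicator, the retinal measurement, and a covariate.
df = pd.DataFrame({
    "followup_years":   [10.2, 8.7, 11.0, 9.5, 10.8, 7.9],
    "cvd_event":        [0, 1, 0, 1, 0, 1],
    "skeleton_density": [0.042, 0.031, 0.045, 0.029, 0.040, 0.030],
    "age":              [58, 63, 55, 67, 60, 65],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="followup_years", event_col="cvd_event")
cph.print_summary()  # hazard ratios (exp(coef)) with 95% CIs
```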

14.
Transl Vis Sci Technol ; 13(1): 2, 2024 01 02.
Article in English | MEDLINE | ID: mdl-38165718

ABSTRACT

Purpose: This study aimed to investigate the association between quantitative retinal vascular measurements and the risk of all-cause and premature mortality. Methods: In this population-based cohort study using UK Biobank data, we employed the Retina-based Microvascular Health Assessment System to assess fundus images for image quality and extracted 392 retinal vascular measurements per fundus image. These measurements encompass six categories of vascular features: caliber, density, length, tortuosity, branching angle, and complexity. Univariate Cox regression models were used to identify potential indicators of mortality risk, using data on all-cause and premature mortality from death registries. Multivariate Cox regression models were then used to test these associations while controlling for confounding factors. Results: The final analysis included 66,415 participants. After adjusting for demographic, health, and lifestyle factors and genetic risk score, 18 and 10 retinal vascular measurements were significantly associated with all-cause mortality and premature mortality, respectively. In the fully adjusted model, the following measurements of different vascular features were significantly associated with all-cause and premature mortality: arterial bifurcation density (branching angle), number of arterial segments (complexity), interquartile range and median absolute deviation of arterial curve angle (tortuosity), mean and median of the mean pixel widths of all arterial segments in each image (caliber), skeleton density of arteries in the macular area (density), and minimum venular arc length (length). Conclusions: The study revealed 18 retinal vascular measurements significantly associated with all-cause mortality and 10 associated with premature mortality. These parameters should be studied further to uncover the biological mechanisms connecting them to increased mortality risk. Translational Relevance: This study identifies retinal biomarkers of increased mortality risk and provides novel targets for investigating the underlying biological mechanisms.


Subject(s)
Retinal Vessels, UK Biobank, Humans, Retinal Vessels/diagnostic imaging, Cohort Studies, Biological Specimen Banks, Retina/diagnostic imaging
15.
Adv Ophthalmol Pract Res ; 3(4): 192-198, 2023.
Article in English | MEDLINE | ID: mdl-38059165

ABSTRACT

Background: Fundus autofluorescence (FAF) is a valuable imaging technique used to assess metabolic alterations in the retinal pigment epithelium (RPE) associated with various age-related and disease-related changes, and its practical uses are ever-growing. This study aimed to evaluate the effectiveness of a generative deep learning (DL) model in translating color fundus (CF) images into synthetic FAF images and to explore its potential for enhancing screening of age-related macular degeneration (AMD). Methods: A generative adversarial network (GAN) model was trained on pairs of CF and FAF images to generate synthetic FAF images. The quality of the synthesized FAF images was assessed objectively by common generation metrics. Additionally, the clinical effectiveness of the generated FAF images in AMD classification was evaluated by measuring the area under the curve (AUC) on the LabelMe dataset. Results: A total of 8410 FAF images from 2586 patients were analyzed. The synthesized FAF images showed good objective quality, achieving a multi-scale structural similarity index (MS-SSIM) of 0.67. On the LabelMe dataset, combining the generated FAF images with CF images yielded a noteworthy improvement in AMD classification accuracy, with the AUC increasing from 0.931 to 0.968. Conclusions: This study presents the first attempt to use a generative deep learning model to create realistic, high-quality FAF images from CF images. Incorporating the translated FAF images on top of CF images improved the accuracy of AMD classification. Overall, this study presents a promising approach to enhance large-scale AMD screening.

16.
Transl Vis Sci Technol ; 12(12): 20, 2023 12 01.
Article in English | MEDLINE | ID: mdl-38133514

ABSTRACT

Purpose: To improve the automated diagnosis of glaucomatous optic neuropathy (GON), we propose a generative adversarial network (GAN) model that translates Optain images to Topcon images. Methods: We trained the GAN model on 725 paired images from Topcon and Optain cameras and externally validated it using an additional 843 paired images collected from the Aravind Eye Hospital in India. An optic disc segmentation model was used to assess disparities in disc parameters across cameras. The performance of the translated images was evaluated using root mean square error (RMSE), peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), 95% limits of agreement (LOA), Pearson correlations, and Cohen's kappa coefficient. The evaluation compared the performance of the GON model on Optain photographs and GAN-translated photographs against Topcon photographs as the reference. Results: The GAN model significantly reduced Optain false positives for GON diagnosis. The RMSE, PSNR, and SSIM of GAN images were 0.067, 14.31, and 0.64, respectively; the mean difference in vertical cup-to-disc ratio (VCDR) and cup-to-disc area ratio between Topcon and GAN images was 0.03, with 95% LOA ranging from -0.09 to 0.15 and from -0.05 to 0.10, respectively. Pearson correlation coefficients increased from 0.61 to 0.85 for VCDR and from 0.70 to 0.89 for cup-to-disc area ratio, whereas Cohen's kappa improved from 0.32 to 0.60 after GAN translation. Conclusions: Image-to-image translation across cameras can be achieved with a GAN to solve the problem of disc overexposure in Optain cameras. Translational Relevance: Our approach enhances the generalizability of deep learning diagnostic models, ensuring their performance on cameras outside the original training data set.


Subject(s)
Glaucoma, Optic Disk, Optic Nerve Diseases, Humans, Glaucoma/diagnosis, Optic Disk/diagnostic imaging, Optic Nerve Diseases/diagnosis
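The 95% limits of agreement quoted above follow the Bland-Altman convention: mean difference plus or minus 1.96 standard deviations of the paired differences. A minimal sketch:

```python
import numpy as np

def limits_of_agreement(a: np.ndarray, b: np.ndarray):
    """Bland-Altman 95% limits of agreement between two measurement
    methods (e.g. VCDR from Topcon vs. GAN-translated images)."""
    diff = a - b
    mean_diff = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return mean_diff, (mean_diff - half_width, mean_diff + half_width)
```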
17.
Ophthalmol Sci ; 3(4): 100401, 2023 Dec.
Article in English | MEDLINE | ID: mdl-38025160

ABSTRACT

Purpose: To develop and validate a deep learning model that can transform color fundus (CF) photography into corresponding venous- and late-phase fundus fluorescein angiography (FFA) images. Design: Cross-sectional study. Participants: We included 51 370 CF-venous FFA pairs and 14 644 CF-late FFA pairs from 4438 patients for model development. External testing involved 50 eyes with CF-FFA pairs and 2 public datasets for diabetic retinopathy (DR) classification: 86 952 CF images from EyePACs and 1744 CF images from MESSIDOR2. Methods: We trained a deep learning model to transform CF into corresponding venous- and late-phase FFA images. The quality of the translated FFA images was evaluated quantitatively on the internal test set and subjectively on 100 eyes with CF-FFA paired images (50 from the external set), based on the realism of the global image, anatomical landmarks (macula, optic disc, and vessels), and lesions. Moreover, we validated the clinical utility of the translated FFA for classifying 5-class DR and diabetic macular edema (DME) in the EyePACs and MESSIDOR2 datasets. Main Outcome Measures: Image generation was quantitatively assessed by structural similarity measures (SSIM), and subjectively by 2 clinical experts on a 5-point scale (1 refers to real FFA); intergrader agreement was assessed by kappa. DR classification accuracy was assessed by the area under the receiver operating characteristic curve. Results: The SSIM of the translated FFA images was > 0.6, and the subjective quality scores ranged from 1.37 to 2.60. Both experts reported similar quality scores with substantial agreement (all kappas > 0.8). Adding the generated FFA on top of CF improved DR classification in the EyePACs and MESSIDOR2 datasets, with the area under the receiver operating characteristic curve increasing from 0.912 to 0.939 on EyePACs and from 0.952 to 0.972 on MESSIDOR2. The DME area under the receiver operating characteristic curve also increased from 0.927 to 0.974 on MESSIDOR2. Conclusions: Our CF-to-FFA framework produced realistic FFA images, and adding the translated FFA images on top of CF improved the accuracy of DR screening. These results suggest that CF-to-FFA translation could be used as a surrogate when FFA examination is not feasible, and as a simple add-on to improve DR screening. Financial Disclosures: Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
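The add-on evaluation, classification with CF features alone versus CF plus generated-FFA features, can be sketched end to end on synthetic data (everything below is illustrative, not the authors' classifier):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
y = rng.integers(0, 2, n)                            # toy referable-DR labels
X_cf = rng.normal(size=(n, 8)) + y[:, None] * 0.4    # CF-only features
X_ffa = rng.normal(size=(n, 8)) + y[:, None] * 0.4   # generated-FFA features
X_both = np.hstack([X_cf, X_ffa])

for name, X in [("CF only", X_cf), ("CF + generated FFA", X_both)]:
    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
    auc = roc_auc_score(yte, clf.predict_proba(Xte)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```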

18.
Artif Intell Med ; 143: 102611, 2023 09.
Article in English | MEDLINE | ID: mdl-37673579

ABSTRACT

Medical visual question answering (VQA) lies at the intersection of medical artificial intelligence and the popular VQA challenge. Given a medical image and a clinically relevant question in natural language, a medical VQA system is expected to predict a plausible and convincing answer. Although general-domain VQA has been extensively studied, medical VQA still needs specific investigation and exploration due to its task-specific features. In the first part of this survey, we collect and discuss the publicly available medical VQA datasets to date, covering data source, data quantity, and task features. In the second part, we review the approaches used in medical VQA tasks, summarizing and discussing their techniques, innovations, and potential improvements. In the last part, we analyze some medical-specific challenges for the field and discuss future research directions. Our goal is to provide comprehensive and helpful information for researchers interested in the medical visual question answering field and to encourage further research in this area.


Subject(s)
Artificial Intelligence
19.
Atherosclerosis ; 380: 117196, 2023 09.
Article in English | MEDLINE | ID: mdl-37562159

ABSTRACT

BACKGROUND AND AIMS: The high mortality rate and huge disease burden of coronary heart disease (CHD) highlight the importance of its early detection and timely intervention. Given the non-invasive nature of fundus photography and recent progress in quantifying retinal microvascular parameters with deep learning techniques, our study investigates the association between incident CHD and retinal microvascular parameters. METHODS: UK Biobank participants with gradable fundus images and without a history of diagnosed CHD at recruitment were included in the analysis. A fully automated artificial intelligence system was used to extract quantitative measurements representing the density and complexity of the retinal microvasculature, including fractal dimension (Df), number of vascular segments (NS), vascular skeleton density (VSD) and vascular area density (VAD). RESULTS: A total of 57,947 participants (mean age 55.6 ± 8.1 years; 56% female) without a history of diagnosed CHD were included. During a median follow-up of 11.0 (interquartile range, 10.88 to 11.19) years, 3211 incident CHD events occurred. In multivariable Cox proportional hazards models, decreasing Df (adjusted HR = 0.80, 95% CI, 0.65-0.98, p = 0.033), lower NS of arteries (adjusted HR = 0.69, 95% CI, 0.54-0.88, p = 0.002) and venules (adjusted HR = 0.77, 95% CI, 0.61-0.97, p = 0.024), and reduced arterial VSD (adjusted HR = 0.72, 95% CI, 0.57-0.91, p = 0.007) and venous VSD (adjusted HR = 0.78, 95% CI, 0.62-0.98, p = 0.034) were related to an increased risk of incident CHD. CONCLUSIONS: Our study revealed a significant association between retinal microvascular parameters and incident CHD. As lower complexity and density of the retinal vascular network may indicate an increased risk of incident CHD, quantitative measurements of retinal structure may empower CHD risk prediction.


Subject(s)
Artificial Intelligence, Coronary Disease, Humans, Female, Middle Aged, Male, Microvascular Density, Risk Factors, Coronary Disease/diagnosis, Coronary Disease/epidemiology, Microvessels, Incidence
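Fractal dimension (Df) of a binarized vessel map is typically estimated by box counting: tile the image with boxes of shrinking side length and fit the slope of log N against log(1/side). A sketch, assuming a square mask whose side is a power of two and which contains at least one vessel pixel:

```python
import numpy as np

def box_counting_dimension(vessel_mask: np.ndarray) -> float:
    """Estimate fractal dimension of a binary vessel map by box counting.

    vessel_mask: 2D boolean array (True where a vessel pixel is present);
    assumed square, with side length a power of two for clean tilings.
    """
    size = vessel_mask.shape[0]
    sizes, counts = [], []
    box = size // 2
    while box >= 1:
        # Count boxes of side `box` containing at least one vessel pixel.
        tiles = vessel_mask.reshape(size // box, box, size // box, box)
        counts.append(tiles.any(axis=(1, 3)).sum())
        sizes.append(box)
        box //= 2
    # Df is the slope of log N(box) against log(1/box).
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope
```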
20.
Asia Pac J Ophthalmol (Phila) ; 12(4): 377-383, 2023.
Article in English | MEDLINE | ID: mdl-37523429

ABSTRACT

PURPOSE: Repeated low-level red-light (RLRL) therapy has been confirmed as a novel intervention for myopia control in children. This study aims to investigate longitudinal changes in choroidal structure in myopic children following 12-month RLRL treatment. MATERIALS AND METHODS: The current study is a secondary analysis of a multicenter, randomized controlled trial (NCT04073238). Choroidal parameters were derived from baseline and follow-up swept-source optical coherence tomography scans taken at 1, 3, 6, and 12 months. These parameters included the luminal area (LA), stromal area (SA), total choroidal area (TCA; the sum of LA and SA), and choroidal vascularity index (CVI; the ratio of LA to TCA), all automatically measured by a validated custom choroidal structure assessment tool. RESULTS: A total of 143 children (88.3% of all participants) with sufficient image quality were included in the analysis (n=67 in the RLRL group and n=76 in the control group). At the 12-month visit, all choroidal parameters had increased in the RLRL group, with changes from baseline of 11.70×10³ µm² (95% CI: 4.14-19.26×10³ µm²), 3.92×10³ µm² (95% CI: 0.56-7.27×10³ µm²), 15.61×10³ µm² (95% CI: 5.02-26.20×10³ µm²), and 0.21% (95% CI: -0.09% to 0.51%) for LA, SA, TCA, and CVI, respectively, whereas these parameters decreased in the control group. CONCLUSIONS: Following RLRL therapy, choroidal thickening was accompanied by increases in both the vessel LA and the SA, with the increase in LA being greater than that in SA. In the control group, with myopia progression, both LA and SA decreased over time.


Subject(s)
Choroid, Myopia, Child, Humans, Light, Myopia/therapy, Tomography, Optical Coherence, Phototherapy
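The choroidal vascularity index is the ratio of the luminal area to the total choroidal area. A small worked example with hypothetical baseline areas, applying the mean 12-month changes reported above:

```python
def choroidal_vascularity_index(luminal_area: float,
                                stromal_area: float) -> float:
    """CVI = luminal area / total choroidal area, with TCA = LA + SA."""
    return luminal_area / (luminal_area + stromal_area)

# Hypothetical baseline areas in 10^3 um^2: if LA rises by 11.70 and SA
# by 3.92, CVI shifts only slightly, because both terms grow together.
print(choroidal_vascularity_index(300.0, 200.0))                 # 0.600
print(choroidal_vascularity_index(300.0 + 11.70, 200.0 + 3.92))  # ~0.604
```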