Results 1 - 20 of 30
1.
JAMA Ophthalmol ; 141(7): 677-685, 2023 07 01.
Article in English | MEDLINE | ID: mdl-37289463

ABSTRACT

Importance: Best-corrected visual acuity (BCVA) is a measure used to manage diabetic macular edema (DME), sometimes suggesting development of DME or consideration of initiating, repeating, withholding, or resuming treatment with anti-vascular endothelial growth factor. Using artificial intelligence (AI) to estimate BCVA from fundus images could help clinicians manage DME by reducing the personnel needed for refraction, the time presently required for assessing BCVA, or even the number of office visits if imaged remotely. Objective: To evaluate the potential application of AI techniques for estimating BCVA from fundus photographs with and without ancillary information. Design, Setting, and Participants: Deidentified color fundus images taken after dilation were used post hoc to train AI systems to perform regression from image to BCVA and to evaluate resultant estimation errors. Participants were patients enrolled in the VISTA randomized clinical trial through 148 weeks wherein the study eye was treated with aflibercept or laser. The data from study participants included macular images, clinical information, and BCVA scores by trained examiners following protocol refraction and VA measurement on Early Treatment Diabetic Retinopathy Study (ETDRS) charts. Main Outcomes: Primary outcome was regression evaluated by mean absolute error (MAE); the secondary outcome included percentage of predictions within 10 letters, computed over the entire cohort as well as over subsets categorized by baseline BCVA, determined from baseline through the 148-week visit. Results: Analysis included 7185 macular color fundus images of the study and fellow eyes from 459 participants. Overall, the mean (SD) age was 62.2 (9.8) years, and 250 (54.5%) were male. The baseline BCVA score for the study eyes ranged from 73 to 24 letters (approximate Snellen equivalent 20/40 to 20/320). Using ResNet50 architecture, the MAE for the testing set (n = 641 images) was 9.66 (95% CI, 9.05-10.28); 33% of the values (95% CI, 30%-37%) were within 0 to 5 letters and 28% (95% CI, 25%-32%) within 6 to 10 letters. For BCVA of 100 letters or less but more than 80 letters (20/10 to 20/25, n = 161) and 80 letters or less but more than 55 letters (20/32 to 20/80, n = 309), the MAE was 8.84 letters (95% CI, 7.88-9.81) and 7.91 letters (95% CI, 7.28-8.53), respectively. Conclusions and Relevance: This investigation suggests AI can estimate BCVA directly from fundus photographs in patients with DME, without refraction or subjective visual acuity measurements, often within 1 to 2 lines on an ETDRS chart, supporting this AI concept if additional improvements in estimates can be achieved.
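
The abstract gives no implementation details; as a rough illustration of the setup it describes (a ResNet50 backbone regressing a BCVA letter score from a fundus image, evaluated by mean absolute error), the following Python/PyTorch sketch shows one plausible arrangement. The image size, optimizer, and stand-in data are assumptions, not details from the study.

```python
# Hedged sketch: ResNet50 regression from a fundus image to a BCVA letter score.
# The architecture choice mirrors the abstract; all hyperparameters are assumptions.
import torch
import torch.nn as nn
from torchvision import models

class BCVARegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = models.resnet50(weights=None)  # pretrained weights optional
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, 1)  # single scalar output

    def forward(self, x):
        return self.backbone(x).squeeze(1)  # (N,) predicted letter scores

model = BCVARegressor()
criterion = nn.L1Loss()  # L1 loss == mean absolute error in letters
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Dummy batch standing in for dilated color fundus images and ETDRS letter scores.
images = torch.randn(4, 3, 224, 224)
letters = torch.tensor([70.0, 55.0, 62.0, 40.0])

pred = model(images)
loss = criterion(pred, letters)  # MAE on this batch
loss.backward()
optimizer.step()
```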


Subjects
Diabetes Mellitus, Diabetic Retinopathy, Macular Edema, Humans, Male, Middle Aged, Female, Macular Edema/diagnosis, Macular Edema/drug therapy, Macular Edema/physiopathology, Diabetic Retinopathy/diagnosis, Diabetic Retinopathy/drug therapy, Diabetic Retinopathy/complications, Angiogenesis Inhibitors/therapeutic use, Artificial Intelligence, Vascular Endothelial Growth Factor A, Visual Acuity, Algorithms, Diabetes Mellitus/drug therapy
2.
Neural Comput ; 34(3): 716-753, 2022 02 17.
Article in English | MEDLINE | ID: mdl-35016212

ABSTRACT

We propose a novel method for enforcing AI fairness with respect to protected or sensitive factors. This method uses a dual strategy performing training and representation alteration (TARA) for the mitigation of prominent causes of AI bias. It includes the use of representation learning alteration via adversarial independence, to suppress the bias-inducing dependence of the data representation on protected factors, and training set alteration via intelligent augmentation, to address bias-causing data imbalance by using generative models that allow the fine control of sensitive factors related to underrepresented populations via domain adaptation and latent space manipulation. When testing our methods on image analytics, experiments demonstrate that TARA significantly or fully debiases baseline models while outperforming competing debiasing methods that have the same amount of information; for example, with (% overall accuracy, % accuracy gap) = (78.8, 0.5) versus the baseline method's score of (71.8, 10.5) for EyePACS, and (73.7, 11.8) versus (69.1, 21.7) for CelebA. Furthermore, recognizing certain limitations in current metrics used for assessing debiasing performance, we propose novel conjunctive debiasing metrics. Our experiments also demonstrate the ability of these novel metrics to assess the Pareto efficiency of the proposed methods.
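
TARA's representation-alteration component is described only at a high level; one common way to impose adversarial independence between a learned representation and a protected factor is a gradient-reversal adversary, sketched below as an illustration. The layer sizes and the use of gradient reversal (rather than the paper's exact formulation) are assumptions.

```python
# Hedged sketch: adversarial independence between a representation and a protected factor,
# in the spirit of TARA's representation alteration; not the authors' exact formulation.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad):
        return -ctx.lam * grad, None  # reverse gradients flowing back into the encoder

encoder = nn.Sequential(nn.Linear(128, 64), nn.ReLU())  # stand-in for an image encoder
task_head = nn.Linear(64, 2)                            # main prediction (e.g., diagnosis)
adversary = nn.Linear(64, 2)                            # tries to predict the protected factor

x = torch.randn(32, 128)                                # stand-in input features
y_task = torch.randint(0, 2, (32,))
y_protected = torch.randint(0, 2, (32,))

z = encoder(x)
loss_task = nn.functional.cross_entropy(task_head(z), y_task)
# The adversary tries to recover the protected factor; gradient reversal pushes the
# encoder to make that impossible, suppressing the bias-inducing dependence.
loss_adv = nn.functional.cross_entropy(adversary(GradReverse.apply(z, 1.0)), y_protected)
(loss_task + loss_adv).backward()
```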


Subjects
Psychological Generalization, Computer-Assisted Image Processing, Artificial Intelligence, Computer-Assisted Image Processing/methods
3.
JAMA Ophthalmol ; 140(2): 185-189, 2022 Feb 01.
Article in English | MEDLINE | ID: mdl-34967890

ABSTRACT

IMPORTANCE: Anomaly detectors could be pursued for retinal diagnoses based on artificial intelligence systems that may not have access to training examples for all retinal diseases in all phenotypic presentations. Possible applications could include screening of populations for any retinal disease rather than a specific disease such as diabetic retinopathy, detection of novel retinal diseases or novel presentations of common retinal diseases, and detection of rare diseases with little or no data available for training. OBJECTIVE: To study the application of anomaly detection to retinal diseases. DESIGN, SETTING, AND PARTICIPANTS: High-resolution fundus images from the publicly available EyePACS data set were used, each with a label ranging from 0 to 4 representing increasing severity of diabetic retinopathy. Sixteen variants of anomaly detectors were designed. For evaluation, a surrogate problem was constructed, using diabetic retinopathy images, in which only retinas with nonreferable diabetic retinopathy, ie, no diabetic macular edema, and no diabetic retinopathy or mild to moderate nonproliferative diabetic retinopathy were used for training an artificial intelligence system, but both nonreferable and referable diabetic retinopathy (including diabetic macular edema or proliferative diabetic retinopathy) were used to test the system for detecting retinal disease. MAIN OUTCOMES AND MEASURES: Anomaly detectors were evaluated by commonly accepted performance metrics, including area under the receiver operating characteristic curve, F1 score, and accuracy. RESULTS: A total of 88 692 high-resolution retinal images of 44 346 individuals with varying severity of diabetic retinopathy were analyzed. The best-performing anomaly detector had an area under the receiver operating characteristic curve of 0.808 (95% CI, 0.789-0.827) and was obtained using an embedding method that involved a self-supervised network. CONCLUSIONS AND RELEVANCE: This study suggests that when abnormal (diseased) data, ie, referable diabetic retinopathy in this study, were not available for training retinal diagnostic systems and only nonreferable diabetic retinopathy was used for training, anomaly detection techniques were useful in identifying images with and without referable diabetic retinopathy. This suggests that anomaly detectors may be used to detect retinal diseases in more generalized settings and potentially could play a role in screening of populations for retinal diseases or identifying novel diseases and phenotyping or detecting unusual presentations of common retinal diseases.
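
The specific anomaly detectors are not spelled out in the abstract; one generic pattern consistent with the description (an embedding network plus an anomaly score) is to embed images with a trained encoder and score each test image by its distance to the embeddings of nonreferable training images. The k-NN scoring rule and the synthetic stand-in embeddings below are assumptions used only to illustrate the mechanics.

```python
# Hedged sketch: anomaly scoring on deep embeddings via distance to "normal" training
# embeddings. The k-NN rule is illustrative; the study evaluated 16 detector variants.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
train_emb = rng.normal(size=(1000, 256))                 # embeddings of nonreferable-DR images
test_emb = np.vstack([rng.normal(size=(100, 256)),       # nonreferable test images
                      rng.normal(0.5, 1, (100, 256))])   # referable test images (anomalous)
test_labels = np.array([0] * 100 + [1] * 100)            # 1 = referable DR

knn = NearestNeighbors(n_neighbors=5).fit(train_emb)
dist, _ = knn.kneighbors(test_emb)                       # distances to 5 nearest normal embeddings
anomaly_score = dist.mean(axis=1)                        # larger distance -> more anomalous

print("AUC:", roc_auc_score(test_labels, anomaly_score))
```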


Subjects
Deep Learning, Diabetic Retinopathy, Macular Edema, Artificial Intelligence, Diabetic Retinopathy/diagnosis, Fundus Oculi, Humans
4.
Neural Comput ; 33(9): 2473-2510, 2021 Aug 19.
Article in English | MEDLINE | ID: mdl-34412112

ABSTRACT

We investigate the use of parameterized families of information-theoretic measures to generalize the loss functions of generative adversarial networks (GANs) with the objective of improving performance. A new generator loss function, least kth-order GAN (LkGAN), is introduced, generalizing the least squares GANs (LSGANs) by using a kth-order absolute error distortion measure with k≥1 (which recovers the LSGAN loss function when k=2). It is shown that minimizing this generalized loss function under an (unconstrained) optimal discriminator is equivalent to minimizing the kth-order Pearson-Vajda divergence. Another novel GAN generator loss function is next proposed in terms of Rényi cross-entropy functionals with order α>0, α≠1. It is demonstrated that this Rényi-centric generalized loss function, which provably reduces to the original GAN loss function as α→1, preserves the equilibrium point satisfied by the original GAN based on the Jensen-Rényi divergence, a natural extension of the Jensen-Shannon divergence. Experimental results indicate that the proposed loss functions, applied to the MNIST and CelebA data sets, under both DCGAN and StyleGAN architectures, confer performance benefits by virtue of the extra degrees of freedom provided by the parameters k and α, respectively. More specifically, experiments show improvements with regard to the quality of the generated images as measured by the Fréchet inception distance score and training stability. While it was applied to GANs in this study, the proposed approach is generic and can be used in other applications of information theory to deep learning, for example, the issues of fairness or privacy in artificial intelligence.
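
As a concrete reading of the generalized loss described above, the sketch below writes a kth-order absolute-error generator objective that reduces to the LSGAN generator loss at k = 2; the exact normalization and target label are assumptions, since they are not given in the abstract.

```python
# Hedged sketch of an LkGAN-style generator loss: a kth-order absolute error between the
# discriminator's output on generated samples and the "real" target. k = 2 recovers LSGAN.
import torch

def lk_generator_loss(d_fake: torch.Tensor, k: float = 2.0, target: float = 1.0) -> torch.Tensor:
    # d_fake: discriminator scores D(G(z)) for a batch of generated samples
    return (d_fake - target).abs().pow(k).mean()

d_fake = torch.rand(16)                  # stand-in discriminator outputs
print(lk_generator_loss(d_fake, k=1.0))  # least-absolute-deviation variant
print(lk_generator_loss(d_fake, k=2.0))  # least-squares (LSGAN) case
```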

5.
Transl Vis Sci Technol ; 10(2): 13, 2021 02 05.
Article in English | MEDLINE | ID: mdl-34003898

ABSTRACT

Purpose: This study evaluated generative methods to potentially mitigate artificial intelligence (AI) bias when diagnosing diabetic retinopathy (DR) resulting from training data imbalance or domain generalization, which occurs when deep learning systems (DLSs) face concepts at test/inference time that they were not initially trained on. Methods: The public domain Kaggle EyePACS dataset (88,692 fundi and 44,346 individuals, originally diverse for ethnicity) was modified by adding clinician-annotated labels and constructing an artificial scenario of data imbalance and domain generalization by disallowing training (but not testing) exemplars for images of retinas with DR warranting referral (DR-referable) from darker-skin individuals, who presumably have a greater concentration of melanin within uveal melanocytes, on average, contributing to retinal image pigmentation. A traditional/baseline diagnostic DLS was compared against new DLSs that would use training data augmented via generative models for debiasing. Results: Accuracy (95% confidence intervals [CIs]) of the baseline diagnostic DLS was 73.0% (66.9% to 79.2%) for fundus images of lighter-skin individuals versus 60.5% (53.5% to 67.3%) for darker-skin individuals, demonstrating bias/disparity (delta = 12.5%; Welch t-test t = 2.670, P = 0.008) in AI performance across protected subpopulations. Using novel generative methods to address the missing subpopulation training data (DR-referable darker-skin), the DLS instead achieved accuracy of 72.0% (65.8% to 78.2%) for lighter-skin individuals and 71.5% (65.2% to 77.8%) for darker-skin individuals, demonstrating closer parity (delta = 0.5%) in accuracy across subpopulations (Welch t-test t = 0.111, P = 0.912). Conclusions: Findings illustrate how data imbalance and domain generalization can lead to disparity of accuracy across subpopulations, and show that novel generative methods of synthetic fundus images may play a role in debiasing AI. Translational Relevance: New AI methods have possible applications to address potential AI bias in DR diagnostics arising from fundus pigmentation, and potentially in other ophthalmic DLSs as well.
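
The disparity statistics reported above (accuracy deltas compared with Welch t tests) can be reproduced in outline from per-image correctness indicators; the sketch below shows that comparison on synthetic stand-in data. Treating per-image 0/1 correctness as the per-sample unit is an assumption about how the test was applied.

```python
# Hedged sketch: comparing diagnostic accuracy across two subpopulations with a Welch t test,
# mirroring the accuracy-gap analysis described above (synthetic stand-in data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
correct_lighter = rng.binomial(1, 0.73, size=200)   # 1 = fundus image correctly classified
correct_darker = rng.binomial(1, 0.605, size=200)

delta = correct_lighter.mean() - correct_darker.mean()
t, p = stats.ttest_ind(correct_lighter, correct_darker, equal_var=False)  # Welch's t test
print(f"accuracy gap = {delta:.3f}, t = {t:.3f}, p = {p:.3f}")
```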


Subjects
Artificial Intelligence, Diabetic Retinopathy, Diabetic Retinopathy/diagnosis, Fundus Oculi, Humans, Mass Screening, Retina
6.
Neural Comput ; 33(3): 802-826, 2021 03.
Article in English | MEDLINE | ID: mdl-33513320

ABSTRACT

Our work focuses on unsupervised and generative methods that address the following goals: (1) learning unsupervised generative representations that discover latent factors controlling image semantic attributes, (2) studying how this ability to control attributes formally relates to the issue of latent factor disentanglement, clarifying related but dissimilar concepts that had been confounded in the past, and (3) developing anomaly detection methods that leverage representations learned in the first goal. For goal 1, we propose a network architecture that exploits the combination of multiscale generative models with mutual information (MI) maximization. For goal 2, we derive an analytical result, lemma 1, that brings clarity to two related but distinct concepts: the ability of generative networks to control semantic attributes of images they generate, resulting from MI maximization, and the ability to disentangle latent space representations, obtained via total correlation minimization. More specifically, we demonstrate that maximizing semantic attribute control encourages disentanglement of latent factors. Using lemma 1 and adopting MI in our loss function, we then show empirically that for image generation tasks, the proposed approach exhibits superior performance as measured in the quality and disentanglement of the generated images when compared to other state-of-the-art methods, with quality assessed via the Fréchet inception distance (FID) and disentanglement via mutual information gap. For goal 3, we design several systems for anomaly detection exploiting representations learned in goal 1 and demonstrate their performance benefits when compared to state-of-the-art generative and discriminative algorithms. Our contributions in representation learning have potential applications in addressing other important problems in computer vision, such as bias and privacy in AI.
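
The MI-maximization component of goal 1 is not detailed in the abstract; one standard way to encourage a generator's output to retain information about a latent code is an auxiliary network that reconstructs the code from the generated sample (a variational MI lower bound, as in InfoGAN). The sketch below illustrates that idea with a continuous code and an MSE reconstruction term; reading it as the authors' actual architecture would be an over-interpretation.

```python
# Hedged sketch: encouraging mutual information between a latent code c and generated
# output G(z, c) via an auxiliary head Q that predicts c back from the sample.
import torch
import torch.nn as nn

code_dim, noise_dim, img_dim = 4, 16, 64
G = nn.Sequential(nn.Linear(noise_dim + code_dim, 128), nn.ReLU(), nn.Linear(128, img_dim))
Q = nn.Sequential(nn.Linear(img_dim, 64), nn.ReLU(), nn.Linear(64, code_dim))

z = torch.randn(32, noise_dim)
c = torch.rand(32, code_dim)                   # latent factors meant to control attributes
fake = G(torch.cat([z, c], dim=1))

mi_loss = nn.functional.mse_loss(Q(fake), c)   # low reconstruction error ~ high MI(c; G(z, c))
# In training, mi_loss would be added (weighted) to the usual generator adversarial loss.
mi_loss.backward()
```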

7.
JAMA Ophthalmol ; 138(10): 1070-1077, 2020 10 01.
Article in English | MEDLINE | ID: mdl-32880609

ABSTRACT

Importance: Recent studies have demonstrated the successful application of artificial intelligence (AI) for automated retinal disease diagnostics but have not addressed a fundamental challenge for deep learning systems: the current need for large, criterion standard-annotated retinal data sets for training. Low-shot learning algorithms, aiming to learn from a relatively low number of training data, may be beneficial for clinical situations involving rare retinal diseases or when addressing potential bias resulting from data that may not adequately represent certain groups for training, such as individuals older than 85 years. Objective: To evaluate whether low-shot deep learning methods are beneficial when using small training data sets for automated retinal diagnostics. Design, Setting, and Participants: This cross-sectional study, conducted from July 1, 2019, to June 21, 2020, compared different diabetic retinopathy classification algorithms, traditional and low-shot, for 2-class designations (diabetic retinopathy warranting referral vs not warranting referral). The public domain EyePACS data set was used, which originally included 88 692 fundi from 44 346 individuals. Statistical analysis was performed from February 1 to June 21, 2020. Main Outcomes and Measures: The performance (95% CIs) of the various AI algorithms was measured via receiver operating characteristic curves and their area under the curve (AUC), precision recall curves, accuracy, and F1 score, evaluated for different training data sizes, ranging from 5120 to 10 samples per class. Results: Deep learning algorithms, when trained with sufficiently large data sets (5120 samples per class), yielded comparable performance, with an AUC of 0.8330 (95% CI, 0.8140-0.8520) for a traditional approach (eg, fine-tuned ResNet), compared with low-shot methods (AUC, 0.8348 [95% CI, 0.8159-0.8537]) (using self-supervised Deep InfoMax [our method denoted as DIM]). However, when far fewer training images were available (n = 160), the traditional deep learning approach had an AUC that decreased to 0.6585 (95% CI, 0.6332-0.6838) and was outperformed by a low-shot method using self-supervision with an AUC of 0.7467 (95% CI, 0.7239-0.7695). At very low shots (n = 10), the traditional approach had performance close to chance, with an AUC of 0.5178 (95% CI, 0.4909-0.5447) compared with the best low-shot method (AUC, 0.5778 [95% CI, 0.5512-0.6044]). Conclusions and Relevance: These findings suggest the potential benefits of using low-shot methods for AI retinal diagnostics when a limited number of annotated training retinal images are available (eg, with rare ophthalmic diseases or when addressing potential AI bias).
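
The evaluation protocol above (performance as the per-class training size shrinks from 5120 toward 10) can be illustrated with a generic harness; the logistic-regression stand-in classifier and synthetic features below are assumptions used only to show the subsampling-and-AUC loop, not the study's models.

```python
# Hedged sketch: evaluating a classifier at shrinking per-class training sizes, as in the
# low-shot comparison described above (synthetic features; a simple stand-in classifier).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)

def make_split(n_per_class, dim=64):
    X = np.vstack([rng.normal(0.0, 1, (n_per_class, dim)),
                   rng.normal(0.5, 1, (n_per_class, dim))])
    y = np.array([0] * n_per_class + [1] * n_per_class)
    return X, y

X_test, y_test = make_split(1000)
for n in [5120, 640, 160, 10]:                      # per-class training sizes
    X_tr, y_tr = make_split(n)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
    print(f"n per class = {n:5d}  AUC = {auc:.3f}")
```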


Subjects
Algorithms, Artificial Intelligence, Deep Learning, Diabetic Retinopathy/diagnosis, Neural Networks (Computer), Rare Diseases/diagnosis, Cross-Sectional Studies, Female, Humans, Male, ROC Curve, Retrospective Studies
8.
Comput Biol Med ; 125: 103977, 2020 10.
Article in English | MEDLINE | ID: mdl-32949845

ABSTRACT

This study examines the use of AI methods and deep learning (DL) for prescreening skin lesions and detecting the characteristic erythema migrans rash of acute Lyme disease. Accurate identification of erythema migrans allows for early diagnosis and treatment, which avoids the potential for later neurologic, rheumatologic, and cardiac complications of Lyme disease. We develop and test several deep learning models for detecting erythema migrans versus several other clinically relevant skin conditions, including cellulitis, tinea corporis, herpes zoster, erythema multiforme, lesions due to tick bites and insect bites, as well as non-pathogenic normal skin. We consider a set of clinically-relevant binary and multiclass classification problems of increasing complexity. We train the DL models on a combination of publicly available images and test on public images as well as images obtained in the clinical setting. We report performance metrics that measure agreement with a gold standard, as well as a receiver operating characteristic curve and associated area under the curve. On public images, we find that the DL system has an accuracy ranging from 71.58% (and 95% error margin equal to 3.77%) for an 8-class problem of EM versus 7 other classes including other skin pathologies, insect bites and normal skin, to 94.23% (3.66%) for a binary problem of EM vs. non-pathological skin. On clinical images of affected individuals, the DL system has a sensitivity of 88.55% (2.39%). These results suggest that a DL system can help in prescreening and referring individuals to physicians for earlier diagnosis and treatment, in the presence of clinically relevant confusers, thereby reducing further complications and morbidity.
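
The abstract does not name the network architecture used; one conventional setup for the 8-class problem it describes is to fine-tune an ImageNet-pretrained backbone with a replaced classification head, as sketched below. The choice of ResNet18 and the hyperparameters are assumptions.

```python
# Hedged sketch: fine-tuning a pretrained CNN for the 8-class skin-lesion problem
# (erythema migrans vs 7 other classes). Backbone choice and settings are assumptions.
import torch
import torch.nn as nn
from torchvision import models

num_classes = 8
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, num_classes)  # replace the ImageNet head

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

images = torch.randn(8, 3, 224, 224)                 # stand-in skin images
labels = torch.randint(0, num_classes, (8,))

logits = model(images)
loss = criterion(logits, labels)
loss.backward()
optimizer.step()
```

For the binary EM-vs-normal-skin problem reported above, the same sketch applies with num_classes set to 2.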


Subjects
Erythema Chronicum Migrans, Lyme Disease, Erythema, Humans, ROC Curve, Skin
9.
Prog Retin Eye Res ; 72: 100759, 2019 09.
Article in English | MEDLINE | ID: mdl-31048019

ABSTRACT

The advent of computer graphics processing units, improvements in mathematical models, and the availability of big data have allowed artificial intelligence (AI) using machine learning (ML) and deep learning (DL) techniques to achieve robust performance for broad applications in social media, the internet of things, the automotive industry and healthcare. DL systems in particular provide improved capability in image, speech and motion recognition as well as in natural language processing. In medicine, significant progress of AI and DL systems has been demonstrated in image-centric specialties such as radiology, dermatology, pathology and ophthalmology. New studies, including pre-registered prospective clinical trials, have shown DL systems are accurate and effective in detecting diabetic retinopathy (DR), glaucoma, age-related macular degeneration (AMD), retinopathy of prematurity, refractive error and in identifying cardiovascular risk factors and diseases, from digital fundus photographs. There is also increasing attention on the use of AI and DL systems in identifying disease features, progression and treatment response for retinal diseases such as neovascular AMD and diabetic macular edema using optical coherence tomography (OCT). Additionally, the application of ML to visual fields may be useful in detecting glaucoma progression. There are limited studies that incorporate clinical data, including electronic health records, in AI and DL algorithms, and no prospective studies to demonstrate that AI and DL algorithms can predict the development of clinical eye disease. This article describes global eye disease burden, unmet needs and common conditions of public health importance for which AI and DL systems may be applicable. Technical and clinical aspects of building a DL system to address those needs, and the potential challenges for clinical adoption, are discussed. AI, ML and DL will likely play a crucial role in clinical ophthalmology practice, with implications for screening, diagnosis and follow-up of the major causes of vision impairment in the setting of ageing populations globally.


Subjects
Deep Learning, Ophthalmological Diagnostic Techniques, Eye Diseases/diagnosis, Ophthalmology/methods, Humans
11.
Comput Biol Med ; 105: 151-156, 2019 02.
Article in English | MEDLINE | ID: mdl-30654165

ABSTRACT

Lyme disease can lead to neurological, cardiac, and rheumatologic complications when untreated. Timely recognition of the erythema migrans rash of acute Lyme disease by patients and clinicians is crucial to early diagnosis and treatment. Our objective in this study was to develop deep learning approaches using deep convolutional neural networks for detecting acute Lyme disease from erythema migrans images of varying quality and acquisition conditions. This study used a cross-sectional dataset of images to train a model employing a deep convolutional neural network to perform classification of erythema migrans versus other skin conditions including tinea corporis and herpes zoster, and normal, non-pathogenic skin. Evaluation of the machine's ability to classify skin types was also performed on a validation set of images. Machine performance for detecting erythema migrans was further tested against a panel of non-medical humans. Online, publicly available images of both erythema migrans and non-Lyme confounding skin lesions were mined, and combined with erythema migrans images from an ongoing, longitudinal study of participants with acute Lyme disease enrolled in 2016 and 2017 who were recruited from primary and urgent care centers. The final dataset had 1834 images, including 1718 expert clinician-curated online images from unknown individuals with erythema migrans, tinea corporis, herpes zoster, and normal skin. It also included 116 images taken of 63 research participants from the Mid-Atlantic region. Two clinicians carefully annotated all lesion images. A convenience sample of 7 non-medically-trained humans was used as a panel to compare against machine performance. We calculated several performance metrics, including accuracy and Kappa (characterizing agreement with the gold standard), as well as a receiver operating characteristic curve and associated area under the curve. For detecting erythema migrans, the machine had an accuracy (95% confidence interval error margin) of 86.53% (2.70), an ROC AUC of 0.9510 (0.0171), and a Kappa of 0.7143. Our results suggested substantial agreement between the machine and the clinician criterion standard. Comparison of the machine with the non-medically-trained human panel indicated that the machine almost always exceeded acceptable specificity, and could operate with higher sensitivity. This could have benefits for prescreening prior to physician referral, earlier treatment, and reductions in morbidity.
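
The agreement and discrimination metrics reported above (accuracy, Kappa against the clinician criterion standard, and ROC AUC) can be computed as in the sketch below; the synthetic labels and scores are placeholders, not study data.

```python
# Hedged sketch: accuracy, Cohen's kappa, and ROC AUC against a criterion standard,
# the metrics reported for the erythema migrans detector (placeholder data).
import numpy as np
from sklearn.metrics import accuracy_score, cohen_kappa_score, roc_auc_score

rng = np.random.default_rng(3)
y_true = rng.integers(0, 2, size=500)                               # criterion-standard labels
scores = np.clip(y_true * 0.6 + rng.normal(0.2, 0.25, 500), 0, 1)   # machine probabilities
y_pred = (scores >= 0.5).astype(int)

print("accuracy:", accuracy_score(y_true, y_pred))
print("kappa:   ", cohen_kappa_score(y_true, y_pred))
print("ROC AUC: ", roc_auc_score(y_true, scores))
```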


Subjects
Deep Learning, Erythema Chronicum Migrans/diagnostic imaging, Computer-Assisted Image Processing, Skin/diagnostic imaging, Adult, Female, Humans, Male, Middle Aged, ROC Curve
12.
JAMA Ophthalmol ; 137(3): 258-264, 2019 03 01.
Article in English | MEDLINE | ID: mdl-30629091

ABSTRACT

Importance: Deep learning (DL) used for discriminative tasks in ophthalmology, such as diagnosing diabetic retinopathy or age-related macular degeneration (AMD), requires large image data sets graded by human experts to train deep convolutional neural networks (DCNNs). In contrast, generative DL techniques could synthesize large new data sets of artificial retina images with different stages of AMD. Such images could enhance existing data sets of common and rare ophthalmic diseases without concern for personally identifying information to assist medical education of students, residents, and retinal specialists, as well as for training new DL diagnostic models for which extensive data sets from large clinical trials of expertly graded images may not exist. Objective: To develop DL techniques for synthesizing high-resolution realistic fundus images serving as proxy data sets for use by retinal specialists and DL machines. Design, Setting, and Participants: Generative adversarial networks were trained on 133 821 color fundus images from 4613 study participants from the Age-Related Eye Disease Study (AREDS), generating synthetic fundus images with and without AMD. We compared retinal specialists' ability to diagnose AMD on both real and synthetic images, asking them to assess image gradability and testing their ability to discern real from synthetic images. The performance of AMD diagnostic DCNNs (referable vs not referable AMD) trained on either all-real vs all-synthetic data sets was compared. Main Outcomes and Measures: Accuracy of 2 retinal specialists (T.Y.A.L. and K.D.P.) for diagnosing and distinguishing AMD on real vs synthetic images and diagnostic performance (area under the curve) of DL algorithms trained on synthetic vs real images. Results: The diagnostic accuracy of 2 retinal specialists on real vs synthetic images was similar. The accuracy of diagnosis as referable vs nonreferable AMD compared with certified human graders for retinal specialist 1 was 84.54% (error margin, 4.06%) on real images vs 84.12% (error margin, 4.16%) on synthetic images and for retinal specialist 2 was 89.47% (error margin, 3.45%) on real images vs 89.19% (error margin, 3.54%) on synthetic images. Retinal specialists could not distinguish real from synthetic images, with an accuracy of 59.50% (error margin, 3.93%) for retinal specialist 1 and 53.67% (error margin, 3.99%) for retinal specialist 2. The DCNNs trained on real data showed an area under the curve of 0.9706 (error margin, 0.0029), and those trained on synthetic data showed an area under the curve of 0.9235 (error margin, 0.0045). Conclusions and Relevance: Deep learning-synthesized images appeared to be realistic to retinal specialists, and DCNNs achieved diagnostic performance on synthetic data close to that for real images, suggesting that DL generative techniques hold promise for training humans and machines.


Subjects
Deep Learning, Ophthalmological Diagnostic Techniques, Macular Degeneration/diagnosis, Fundus Oculi, Humans, Reproducibility of Results
13.
Comput Biol Med ; 105: 46-53, 2019 02.
Article in English | MEDLINE | ID: mdl-30583249

ABSTRACT

We address the challenge of finding anomalies in ultrasound images via deep learning, specifically applying this to screening for myopathies and finding rare presentations of myopathic disease. Among myopathic diseases, this study focuses on the use case of myositis given the spectrum of muscle involvement seen in these inflammatory muscle diseases, as well as the potential for treatment. For this study, we have developed a fully annotated dataset (called "Myositis3K"), which includes 3586 images of 89 individuals (35 controls and 54 with myositis) acquired with informed consent. We approach this challenge as one of performing unsupervised novelty detection (ND), and use tools leveraging deep embeddings combined with several novelty scoring methods. We evaluated these various ND algorithms and compared their performance against human clinician performance, against other methods including supervised binary classification approaches, and against unsupervised novelty detection approaches using generative methods. Our best-performing approach resulted in an ROC AUC (95% CI error margin) of 0.7192 (0.0164), which is a promising baseline for developing future clinical tools for unsupervised prescreening of myopathies.
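
The novelty-scoring methods paired with the deep embeddings are not enumerated in the abstract; a representative unsupervised choice is a one-class model fit on control-only embeddings, as sketched below with an Isolation Forest. Using embeddings as fixed feature vectors and the specific scorer are assumptions for illustration.

```python
# Hedged sketch: unsupervised novelty detection on deep embeddings of muscle ultrasound,
# fitting a one-class model on control images only and scoring a mixed test set.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
control_emb = rng.normal(0, 1, size=(600, 128))         # embeddings of control-muscle images
test_emb = np.vstack([rng.normal(0, 1, (100, 128)),     # unseen controls
                      rng.normal(0.7, 1, (100, 128))])  # myositis images (novel class)
test_labels = np.array([0] * 100 + [1] * 100)

iso = IsolationForest(random_state=0).fit(control_emb)
novelty_score = -iso.score_samples(test_emb)            # higher score = more novel
print("ROC AUC:", roc_auc_score(test_labels, novelty_score))
```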


Subjects
Factual Databases, Computer-Assisted Image Processing, Machine Learning, Myositis/diagnostic imaging, Female, Humans, Male, Ultrasonography
14.
JAMA Ophthalmol ; 136(12): 1359-1366, 2018 12 01.
Article in English | MEDLINE | ID: mdl-30242349

ABSTRACT

Importance: Although deep learning (DL) can identify the intermediate or advanced stages of age-related macular degeneration (AMD) as a binary yes or no, stratified gradings using the more granular Age-Related Eye Disease Study (AREDS) 9-step detailed severity scale for AMD provide more precise estimation of 5-year progression to advanced stages. The AREDS 9-step detailed scale's complexity and implementation solely with highly trained fundus photograph graders potentially hampered its clinical use, warranting development and use of an alternate AREDS simple scale, which although valuable, has less predictive ability. Objective: To describe DL techniques for the AREDS 9-step detailed severity scale for AMD to estimate 5-year risk probability with reasonable accuracy. Design, Setting, and Participants: This study used data collected from November 13, 1992, to November 30, 2005, from 4613 study participants of the AREDS data set to develop deep convolutional neural networks that were trained to provide detailed automated AMD grading on several AMD severity classification scales, using a multiclass classification setting. Two AMD severity classification problems using criteria based on 4-step (AMD-1, AMD-2, AMD-3, and AMD-4 from classifications developed for AREDS eligibility criteria) and 9-step (from AREDS detailed severity scale) AMD severity scales were investigated. The performance of these algorithms was compared with a contemporary human grader and against a criterion standard (fundus photograph reading center graders) used at the time of AREDS enrollment and follow-up. Three methods for estimating 5-year risk were developed, including one based on DL regression. Data were analyzed from December 1, 2017, through April 15, 2018. Main Outcomes and Measures: Weighted κ scores and mean unsigned errors for estimating 5-year risk probability of progression to advanced AMD. Results: This study used 67 401 color fundus images from the 4613 study participants. The weighted κ scores were 0.77 for the 4-step and 0.74 for the 9-step AMD severity scales. The overall mean estimation error for the 5-year risk ranged from 3.5% to 5.3%. Conclusions and Relevance: These findings suggest that DL AMD grading has, for the 4-step classification evaluation, performance comparable with that of humans and achieves promising results for providing AMD detailed severity grading (9-step classification), which normally requires highly trained graders, and for estimating 5-year risk of progression to advanced AMD. Use of DL has the potential to assist physicians in longitudinal care for individualized, detailed risk assessment as well as clinical studies of disease progression during treatment or as public screening or monitoring worldwide.
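
The outcome measures named above (weighted kappa for the 4-step and 9-step grades, mean unsigned error for the 5-year risk estimates) can be computed as in the sketch below; the quadratic weighting and the placeholder grades are assumptions, since the abstract does not state the weighting scheme.

```python
# Hedged sketch: weighted kappa for ordinal AMD severity grades and mean unsigned error
# for 5-year risk estimates (placeholder values; the weighting scheme is an assumption).
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(5)
true_grade = rng.integers(1, 10, size=1000)                        # 9-step severity scale
pred_grade = np.clip(true_grade + rng.integers(-1, 2, size=1000), 1, 9)
print("weighted kappa:", cohen_kappa_score(true_grade, pred_grade, weights="quadratic"))

true_risk = rng.uniform(0, 0.5, size=1000)                         # 5-year progression risk
pred_risk = np.clip(true_risk + rng.normal(0, 0.04, size=1000), 0, 1)
print("mean unsigned error: %.1f%%" % (100 * np.abs(pred_risk - true_risk).mean()))
```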


Subjects
Algorithms, Deep Learning, Ophthalmological Diagnostic Techniques, Macula Lutea/diagnostic imaging, Macular Degeneration/diagnosis, Risk Assessment/methods, Aged, Disease Progression, Female, Follow-Up Studies, Humans, Incidence, Male, Middle Aged, Reproducibility of Results, Retrospective Studies, Risk Factors, Severity of Illness Index, Time Factors, United States/epidemiology
16.
Clin Exp Rheumatol ; 36(6): 996-1002, 2018.
Article in English | MEDLINE | ID: mdl-29745890

ABSTRACT

OBJECTIVES: Imaging plays a role in myositis assessment by detecting muscle changes indicative of pathology. This study was conducted to determine the ultrasonographic pattern of muscle involvement in patients with inclusion body myositis (IBM) through an assessment of muscle echointensity. METHODS: Sixty-two individuals were consecutively studied, 18 with IBM, 16 with polymyositis or dermatomyositis and 28 normal controls. Standardised scans were completed bilaterally for the deltoids, biceps, flexor digitorum profundus (FDP), flexor carpi ulnaris, rectus femoris, tibialis anterior and gastrocnemius assessing for muscle echointensity changes. RESULTS: Patients with IBM had a markedly increased muscle echointensity when compared with comparator groups for all muscles studied. This was most discriminating at the FDP, gastrocnemius and rectus femoris. Asymmetry between sides and a heterogeneously increased echointensity were also seen. CONCLUSIONS: Ultrasonography can aid in the assessment of IBM by displaying an increased echointensity in characteristically involved muscles, particularly when combined with assessments for asymmetry and echotexture.


Subjects
Skeletal Muscle/diagnostic imaging, Inclusion Body Myositis/diagnostic imaging, Ultrasonography/methods, Aged, Aged 80 and over, Case-Control Studies, Female, Humans, Male, Middle Aged, Predictive Value of Tests
17.
JAMA Ophthalmol ; 135(11): 1170-1176, 2017 11 01.
Article in English | MEDLINE | ID: mdl-28973096

ABSTRACT

Importance: Age-related macular degeneration (AMD) affects millions of people throughout the world. The intermediate stage may go undetected, as it typically is asymptomatic. However, the preferred practice patterns for AMD recommend identifying individuals with this stage of the disease to educate how to monitor for the early detection of the choroidal neovascular stage before substantial vision loss has occurred and to consider dietary supplements that might reduce the risk of the disease progressing from the intermediate to the advanced stage. Identification, though, can be time-intensive and requires expertly trained individuals. Objective: To develop methods for automatically detecting AMD from fundus images using a novel application of deep learning methods to the automated assessment of these images and to leverage artificial intelligence advances. Design, Setting, and Participants: Deep convolutional neural networks that are explicitly trained for performing automated AMD grading were compared with an alternate deep learning method that used transfer learning and universal features and with a trained clinical grader. Age-related macular degeneration automated detection was applied to a 2-class classification problem in which the task was to distinguish the disease-free/early stages from the referable intermediate/advanced stages. Using several experiments that entailed different data partitioning, the performance of the machine algorithms and human graders in evaluating over 130 000 images that were deidentified with respect to age, sex, and race/ethnicity from 4613 patients against a gold standard included in the National Institutes of Health Age-related Eye Disease Study data set was evaluated. Main Outcomes and Measures: Accuracy, receiver operating characteristics and area under the curve, and kappa score. Results: The deep convolutional neural network method yielded accuracy (SD) that ranged between 88.4% (0.5%) and 91.6% (0.1%), the area under the receiver operating characteristic curve was between 0.94 and 0.96, and kappa coefficient (SD) between 0.764 (0.010) and 0.829 (0.003), which indicated a substantial agreement with the gold standard Age-related Eye Disease Study data set. Conclusions and Relevance: Applying a deep learning-based automated assessment of AMD from fundus images can produce results that are similar to human performance levels. This study demonstrates that automated algorithms could play a role that is independent of expert human graders in the current management of AMD and could address the costs of screening or monitoring, access to health care, and the assessment of novel treatments that address the development or progression of AMD.


Subjects
Algorithms, Machine Learning, Neural Networks (Computer), Wet Macular Degeneration/diagnosis, Fundus Oculi, Humans, ROC Curve, Reproducibility of Results
18.
PLoS One ; 12(8): e0184059, 2017.
Article in English | MEDLINE | ID: mdl-28854220

ABSTRACT

OBJECTIVE: To evaluate the use of ultrasound coupled with machine learning (ML) and deep learning (DL) techniques for automated or semi-automated classification of myositis. METHODS: Eighty subjects, comprising 19 with inclusion body myositis (IBM), 14 with polymyositis (PM), 14 with dermatomyositis (DM), and 33 normal (N) subjects, were included in this study, in which 3214 muscle ultrasound images of 7 muscles (observed bilaterally) were acquired. We considered three problems of classification including (A) normal vs. affected (DM, PM, IBM); (B) normal vs. IBM patients; and (C) IBM vs. other types of myositis (DM or PM). We studied the use of an automated DL method using deep convolutional neural networks (DL-DCNNs) for diagnostic classification and compared it with a semi-automated conventional ML method based on random forests (ML-RF) and "engineered" features. We used the known clinical diagnosis as the gold standard for evaluating performance of muscle classification. RESULTS: The performance of the DL-DCNN method resulted in accuracies ± standard deviation of 76.2% ± 3.1% for problem (A), 86.6% ± 2.4% for (B) and 74.8% ± 3.9% for (C), while the ML-RF method led to accuracies of 72.3% ± 3.3% for problem (A), 84.3% ± 2.3% for (B) and 68.9% ± 2.5% for (C). CONCLUSIONS: This study demonstrates the application of machine learning methods for automatically or semi-automatically classifying inflammatory muscle disease using muscle ultrasound. Compared to the conventional random forest machine learning method used here, which has the drawback of requiring manual delineation of muscle/fat boundaries, DCNN-based classification by and large improved the accuracies in all classification problems while providing a fully automated approach to classification.
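
The ML-RF arm described above is a conventional random-forest classifier over engineered features from manually delineated muscle regions; a minimal sketch of that arm follows, with synthetic feature vectors standing in for the engineered echo-intensity features.

```python
# Hedged sketch of the ML-RF arm: a random forest over engineered muscle-ultrasound features
# (synthetic stand-ins), evaluated with cross-validated accuracy against clinical diagnosis.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
X = rng.normal(size=(300, 12))    # e.g., per-muscle echo-intensity statistics per image
y = rng.integers(0, 2, size=300)  # 0 = normal, 1 = affected (DM/PM/IBM), problem (A)

rf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(rf, X, y, cv=5, scoring="accuracy")
print(f"accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```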


Subjects
Machine Learning, Muscles/diagnostic imaging, Myositis/diagnostic imaging, Neural Networks (Computer), Ultrasonography/methods, Adult, Aged, Aged 80 and over, Dermatomyositis/diagnostic imaging, Female, Humans, Male, Middle Aged, Inclusion Body Myositis/diagnostic imaging, Polymyositis/diagnostic imaging, Young Adult
19.
Comput Biol Med ; 82: 80-86, 2017 03 01.
Article in English | MEDLINE | ID: mdl-28167406

ABSTRACT

BACKGROUND: When left untreated, age-related macular degeneration (AMD) is the leading cause of vision loss in people over fifty in the US. Currently it is estimated that about eight million US individuals have the intermediate stage of AMD that is often asymptomatic with regard to visual deficit. These individuals are at high risk for progressing to the advanced stage where the often treatable choroidal neovascular form of AMD can occur. Careful monitoring to detect the onset and prompt treatment of the neovascular form as well as dietary supplementation can reduce the risk of vision loss from AMD, therefore, preferred practice patterns recommend identifying individuals with the intermediate stage in a timely manner. METHODS: Past automated retinal image analysis (ARIA) methods applied on fundus imagery have relied on engineered and hand-designed visual features. We instead detail the novel application of a machine learning approach using deep learning for the problem of ARIA and AMD analysis. We use transfer learning and universal features derived from deep convolutional neural networks (DCNN). We address clinically relevant 4-class, 3-class, and 2-class AMD severity classification problems. RESULTS: Using 5664 color fundus images from the NIH AREDS dataset and DCNN universal features, we obtain values for accuracy for the (4-, 3-, 2-) class classification problem of (79.4%, 81.5%, 93.4%) for machine vs. (75.8%, 85.0%, 95.2%) for physician grading. DISCUSSION: This study demonstrates the efficacy of machine grading based on deep universal features/transfer learning when applied to ARIA and is a promising step in providing a pre-screener to identify individuals with intermediate AMD and also as a tool that can facilitate identifying such individuals for clinical studies aimed at developing improved therapies. It also demonstrates comparable performance between computer and physician grading.
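
The "universal features" approach above extracts fixed activations from a pretrained DCNN and trains a conventional classifier on them; the sketch below shows that pattern with a frozen torchvision backbone feeding a logistic-regression head. The specific backbone and head are assumptions, not the study's exact choices.

```python
# Hedged sketch: transfer learning with "universal" DCNN features - a frozen pretrained
# backbone produces fixed feature vectors, and a simple classifier is trained on top.
import torch
import torch.nn as nn
from torchvision import models
from sklearn.linear_model import LogisticRegression

backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = nn.Identity()  # expose the 2048-d pooled features
backbone.eval()

images = torch.randn(16, 3, 224, 224)        # stand-in fundus images
labels = (torch.arange(16) % 2).numpy()      # e.g., intermediate/advanced AMD vs not

with torch.no_grad():
    feats = backbone(images).numpy()         # (16, 2048) universal features

clf = LogisticRegression(max_iter=1000).fit(feats, labels)
print("train accuracy:", clf.score(feats, labels))
```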


Subjects
Algorithms, Fluorescein Angiography/methods, Machine Learning, Macular Degeneration/diagnostic imaging, Macular Degeneration/pathology, Automated Pattern Recognition/methods, Early Diagnosis, Humans, Computer-Assisted Image Interpretation, Macular Degeneration/classification, Observer Variation, Reproducibility of Results, Sensitivity and Specificity, Severity of Illness Index
20.
Annu Int Conf IEEE Eng Med Biol Soc ; 2016: 411-414, 2016 Aug.
Article in English | MEDLINE | ID: mdl-28268360

ABSTRACT

Retinal prosthetic devices can significantly and positively impact the ability of visually challenged individuals to live a more independent life. We describe a visual processing system which leverages image analysis techniques to produce visual patterns and allows the user to more effectively perceive their environment. These patterns are used to stimulate a retinal prosthesis to allow self guidance and a higher degree of autonomy for the affected individual. Specifically, we describe an image processing pipeline that allows for object and face localization in cluttered environments as well as various contrast enhancement strategies in the "implanted image." Finally, we describe a real-time implementation and deployment of this system on the Argus II platform. We believe that these advances can significantly improve the effectiveness of the next generation of retinal prostheses.
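
As an illustration of the kind of pipeline described (face localization in cluttered scenes, contrast enhancement, and reduction to a low-resolution stimulation pattern), the OpenCV sketch below is a generic approximation; the Haar cascade detector, CLAHE enhancement, and the 60x60 output grid are assumptions, not details of the Argus II implementation.

```python
# Hedged sketch: a simple visual-processing pipeline for a retinal prosthesis -
# face localization, contrast enhancement (CLAHE), and downsampling to a coarse
# stimulation grid. All components are generic stand-ins, not the Argus II pipeline.
import cv2
import numpy as np

def process_frame(frame_bgr: np.ndarray, grid=(60, 60)) -> np.ndarray:
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)

    # Localize a face in the cluttered scene and crop to it if one is found.
    cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) > 0:
        x, y, w, h = faces[0]
        gray = gray[y:y + h, x:x + w]

    # Contrast enhancement, then reduce to the coarse "implanted image" resolution.
    enhanced = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(gray)
    return cv2.resize(enhanced, grid, interpolation=cv2.INTER_AREA)

stimulus = process_frame(np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8))
print(stimulus.shape)  # coarse pattern that would drive the electrode array
```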


Subjects
Algorithms, Face, Visual Prostheses, Humans, Computer-Assisted Image Processing, Visual Pattern Recognition/physiology, Visually Impaired Persons