1.
Ophthalmology. 2019 Apr;126(4):552-564.
Article in English | MEDLINE | ID: mdl-30553900

ABSTRACT

PURPOSE: To understand the impact of deep learning diabetic retinopathy (DR) algorithms on physician readers in computer-assisted settings.

DESIGN: Evaluation of diagnostic technology.

PARTICIPANTS: One thousand seven hundred ninety-six retinal fundus images from 1612 diabetic patients.

METHODS: Ten ophthalmologists (5 general ophthalmologists, 4 retina specialists, 1 retina fellow) read images for DR severity based on the International Clinical Diabetic Retinopathy disease severity scale in each of 3 conditions: unassisted, grades only, or grades plus heatmap. Grades-only assistance comprised a histogram of DR predictions (grades) from a trained deep-learning model. For grades plus heatmap, we additionally showed explanatory heatmaps.

MAIN OUTCOME MEASURES: For each experiment arm, we computed sensitivity and specificity of each reader and of the algorithm for different levels of DR severity against an adjudicated reference standard. We also measured accuracy (exact 5-class level agreement and Cohen's quadratically weighted κ), reader-reported confidence (5-point Likert scale), and grading time.

RESULTS: Readers graded more accurately with model assistance than without for the grades-only condition (P < 0.001). Grades plus heatmap improved accuracy for patients with DR (P < 0.001) but reduced accuracy for patients without DR (P = 0.006). Both forms of assistance increased readers' sensitivity for moderate-or-worse DR (unassisted: mean, 79.4% [95% confidence interval (CI), 72.3%-86.5%]; grades only: mean, 87.5% [95% CI, 85.1%-89.9%]; grades plus heatmap: mean, 88.7% [95% CI, 84.9%-92.5%]) without a corresponding drop in specificity (unassisted: mean, 96.6% [95% CI, 95.9%-97.4%]; grades only: mean, 96.1% [95% CI, 95.5%-96.7%]; grades plus heatmap: mean, 95.5% [95% CI, 94.8%-96.1%]). Algorithmic assistance increased the accuracy of retina specialists above that of the unassisted reader or the model alone, and increased grading confidence and grading time across all readers. For most cases, grades plus heatmap was only as effective as grades only. Over the course of the experiment, grading time decreased across all conditions, although most sharply for grades plus heatmap.

CONCLUSIONS: Deep learning algorithms can improve the accuracy of, and confidence in, DR diagnosis in an assisted-read setting. They may also increase grading time, although these effects may be ameliorated with experience.
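
As a side note on the accuracy metric above, the following is a minimal Python sketch (not from the paper) of Cohen's quadratically weighted κ for 5-class ordinal grades; the grade vectors are illustrative placeholders rather than study data, and the same quantity is available as sklearn.metrics.cohen_kappa_score(..., weights="quadratic").

```python
import numpy as np

def quadratic_weighted_kappa(rater_a, rater_b, n_classes=5):
    """Cohen's kappa with quadratic weights for ordinal grades 0..n_classes-1."""
    rater_a = np.asarray(rater_a)
    rater_b = np.asarray(rater_b)
    # Observed joint distribution of the two graders (normalized confusion matrix).
    observed = np.zeros((n_classes, n_classes))
    for a, b in zip(rater_a, rater_b):
        observed[a, b] += 1
    observed /= observed.sum()
    # Expected joint distribution under independence, from the two marginals.
    expected = np.outer(observed.sum(axis=1), observed.sum(axis=0))
    # Quadratic disagreement weights: w_ij = (i - j)^2 / (k - 1)^2.
    i, j = np.meshgrid(np.arange(n_classes), np.arange(n_classes), indexing="ij")
    weights = (i - j) ** 2 / (n_classes - 1) ** 2
    return 1.0 - (weights * observed).sum() / (weights * expected).sum()

# Illustrative 5-class grades (0 = no DR ... 4 = proliferative DR); not study data.
reader_grades = [0, 0, 1, 2, 3, 4, 2, 1, 0, 3]
reference_grades = [0, 0, 1, 2, 4, 4, 2, 0, 0, 3]
print(f"quadratic weighted kappa = {quadratic_weighted_kappa(reader_grades, reference_grades):.3f}")
```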


Subjects
Algorithms; Deep Learning; Diabetic Retinopathy/classification; Diabetic Retinopathy/diagnosis; Diagnosis, Computer-Assisted/methods; Female; Humans; Male; Ophthalmologists/standards; Photography/methods; ROC Curve; Reference Standards; Reproducibility of Results; Sensitivity and Specificity
2.
Invest Ophthalmol Vis Sci. 2018 Jun 1;59(7):2861-2868.
Article in English | MEDLINE | ID: mdl-30025129

ABSTRACT

Purpose: We evaluate how deep learning can be applied to extract novel information, such as refractive error, from retinal fundus imaging.

Methods: Retinal fundus images used in this study were 45- and 30-degree field-of-view images from the UK Biobank and Age-Related Eye Disease Study (AREDS) clinical trials, respectively. Refractive error was measured by autorefraction in UK Biobank and by subjective refraction in AREDS. We trained a deep learning algorithm to predict refractive error from a total of 226,870 images and validated it on 24,007 UK Biobank and 15,750 AREDS images. Our model used the "attention" method to identify features that are correlated with refractive error.

Results: The resulting algorithm had a mean absolute error (MAE) of 0.56 diopters (95% confidence interval [CI]: 0.55-0.56) for estimating spherical equivalent on the UK Biobank data set and 0.91 diopters (95% CI: 0.89-0.93) on the AREDS data set. The baseline expected MAE (obtained by simply predicting the mean of the population) was 1.81 diopters (95% CI: 1.79-1.84) for UK Biobank and 1.63 diopters (95% CI: 1.60-1.67) for AREDS. Attention maps suggested that the foveal region was one of the most important areas used by the algorithm to make this prediction, though other regions also contributed.

Conclusions: To our knowledge, it was not previously known that refractive error could be estimated with high accuracy from retinal fundus photographs; this result demonstrates that deep learning can be applied to make novel predictions from medical images.
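
To make the MAE comparison concrete, here is a small Python sketch (not from the paper) showing how a model's MAE and the baseline MAE obtained by always predicting the population mean can be computed; the spherical-equivalent values and the residual noise level are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic spherical-equivalent refractive errors in diopters; not UK Biobank/AREDS data.
true_se = rng.normal(loc=-0.3, scale=2.0, size=1000)
# Stand-in model predictions: the truth plus an assumed residual error.
predicted_se = true_se + rng.normal(scale=0.7, size=true_se.shape)

# Mean absolute error of the model's predictions.
model_mae = np.mean(np.abs(predicted_se - true_se))
# Baseline described in the abstract: always predict the population mean.
baseline_mae = np.mean(np.abs(true_se.mean() - true_se))

print(f"model MAE    = {model_mae:.2f} D")
print(f"baseline MAE = {baseline_mae:.2f} D")
```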


Subjects
Deep Learning; Fundus Oculi; Refractive Errors/diagnosis; Retina/diagnostic imaging; Adult; Aged; Algorithms; Datasets as Topic; Female; Humans; Male; Middle Aged; Refraction, Ocular; Vision Tests; Visual Fields/physiology
3.
Nat Biomed Eng. 2018 Mar;2(3):158-164.
Article in English | MEDLINE | ID: mdl-31015713

ABSTRACT

Traditionally, medical discoveries are made by observing associations, making hypotheses from them and then designing and running experiments to test the hypotheses. However, with medical images, observing and quantifying associations can often be difficult because of the wide variety of features, patterns, colours, values and shapes that are present in real data. Here, we show that deep learning can extract new knowledge from retinal fundus images. Using deep-learning models trained on data from 284,335 patients and validated on two independent datasets of 12,026 and 999 patients, we predicted cardiovascular risk factors not previously thought to be present or quantifiable in retinal images, such as age (mean absolute error within 3.26 years), gender (area under the receiver operating characteristic curve (AUC) = 0.97), smoking status (AUC = 0.71), systolic blood pressure (mean absolute error within 11.23 mmHg) and major adverse cardiac events (AUC = 0.70). We also show that the trained deep-learning models used anatomical features, such as the optic disc or blood vessels, to generate each prediction.
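
The AUC figures above summarize how well a continuous model output separates two classes (for example, smokers from non-smokers). Below is a short, self-contained Python sketch (not from the paper) that computes AUC through its rank-sum (Mann-Whitney) interpretation on synthetic labels and scores.

```python
import numpy as np

def auc_from_scores(labels, scores):
    """AUC as the probability that a randomly chosen positive case receives a
    higher score than a randomly chosen negative case. Assumes continuous
    scores (no ties)."""
    labels = np.asarray(labels, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    order = scores.argsort()
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = labels.sum()
    n_neg = labels.size - n_pos
    return (ranks[labels].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Synthetic binary risk factor (e.g., smoking status) and weakly informative
# model scores; not study data.
rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, size=500)
y_score = 0.6 * y_true + rng.normal(scale=0.5, size=500)
print(f"AUC = {auc_from_scores(y_true, y_score):.2f}")
```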


Subjects
Cardiovascular Diseases; Deep Learning; Image Interpretation, Computer-Assisted/methods; Retina/diagnostic imaging; Aged; Aged, 80 and over; Algorithms; Cardiovascular Diseases/diagnostic imaging; Cardiovascular Diseases/epidemiology; Female; Fundus Oculi; Humans; Male; Middle Aged; Risk Factors
4.
Biochim Biophys Acta. 2016 Jul;1858(7 Pt A):1499-1506.
Article in English | MEDLINE | ID: mdl-27033412

ABSTRACT

Cell-penetrating peptides (CPPs) have emerged as a potentially powerful tool for drug delivery due to their ability to efficiently transport a whole host of biologically active cargoes into cells. Although concerted efforts have shed some light on the cellular internalization pathways of CPPs, quantification of CPP uptake has proved problematic. Here we describe an experimental approach that combines two powerful biophysical techniques, fluorescence-activated cell sorting (FACS) and fluorescence correlation spectroscopy (FCS), to directly, accurately and precisely measure the cellular uptake of fluorescently-labeled molecules. This rapid and technically simple approach is highly versatile and can readily be applied to characterize all major CPP properties that normally require multiple assays, including amount taken up by cells (in moles/cell), uptake efficiency, internalization pathways, intracellular distribution, intracellular degradation and toxicity threshold. The FACS-FCS approach provides a means for quantifying any intracellular biochemical entity, whether expressed in the cell or introduced exogenously and transported across the plasma membrane.
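
For readers unfamiliar with the moles/cell unit mentioned above, the Python sketch below shows only the underlying unit conversion: an intracellular concentration (the kind of quantity FCS can provide) times an assumed cell volume gives moles per cell, and Avogadro's number converts that to molecules per cell. The 1 µM concentration and the ~2.6 pL HeLa cell volume are illustrative assumptions, not values from the paper.

```python
# Avogadro's number (molecules per mole).
AVOGADRO = 6.02214076e23

def moles_per_cell(intracellular_conc_molar, cell_volume_litres):
    """Convert an intracellular concentration (mol/L) and a cell volume (L)
    into moles of labelled peptide per cell."""
    return intracellular_conc_molar * cell_volume_litres

# Illustrative numbers only: a 1 uM intracellular CPP concentration in a HeLa
# cell of roughly 2.6 pL; neither value comes from the abstract.
conc = 1e-6        # mol/L
volume = 2.6e-12   # L
moles = moles_per_cell(conc, volume)
molecules = moles * AVOGADRO
print(f"{moles:.2e} mol/cell  (~{molecules:.1e} molecules/cell)")
```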


Subjects
Cell Membrane/metabolism; Cell-Penetrating Peptides/analysis; Staining and Labeling/methods; Ammonium Chloride/pharmacology; Biotin/chemistry; Cell Membrane/drug effects; Cell Membrane Permeability/drug effects; Cell-Penetrating Peptides/metabolism; Chlorpromazine/pharmacology; Cytochalasin D/pharmacology; Endocytosis/drug effects; Filipin/pharmacology; Flow Cytometry; Fluorescent Dyes/chemistry; HeLa Cells; Humans; Kinetics; Protein Transport/drug effects; Spectrometry, Fluorescence/methods; Streptavidin/chemistry; Succinimides/chemistry; beta-Cyclodextrins/pharmacology
5.
Nucleic Acids Res. 2016 Jun 20;44(11):e102.
Article in English | MEDLINE | ID: mdl-27036861

ABSTRACT

Scalable production of DNA nanostructures remains a substantial obstacle to realizing new applications of DNA nanotechnology. Typical DNA nanostructures comprise hundreds of DNA oligonucleotide strands, where each unique strand requires a separate synthesis step. New design methods that reduce the strand count for a given shape while maintaining overall size and complexity would be highly beneficial for efficiently producing DNA nanostructures. Here, we report a method for folding a custom template strand by binding individual staple sequences to multiple locations on the template. We built several nanostructures for well-controlled testing of various design rules, and demonstrate folding of a 6-kb template by as few as 10 unique strand sequences binding to 10 ± 2 locations on the template strand.
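
As an illustration of the design idea (each unique staple sequence binding the template at multiple complementary locations), here is a toy Python sketch (not from the paper) that counts where a staple could hybridize on a template by searching for the staple's reverse complement; the sequences are made up.

```python
def reverse_complement(seq):
    """Reverse complement of a DNA sequence over the alphabet A, C, G, T."""
    comp = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(comp[b] for b in reversed(seq))

def binding_sites(template, staple):
    """Start positions on the template that are complementary to the staple
    (i.e., where the staple could hybridize), allowing overlapping matches."""
    target = reverse_complement(staple)
    sites, start = [], template.find(target)
    while start != -1:
        sites.append(start)
        start = template.find(target, start + 1)
    return sites

# Toy template with a deliberately repeated binding motif; not the 6-kb
# template from the paper.
motif = "ATGCATGGTC"
template = ("GAC" + reverse_complement(motif) + "TTACG") * 4
print(binding_sites(template, motif))  # one site per repeat of the motif
```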


Subjects
DNA/chemistry; Nanostructures; Nucleic Acid Conformation; Base Sequence; Nanotechnology; Oligonucleotides/chemistry