Results 1 - 7 of 7
1.
Nature; 542(7639): 115-118, 2017 Feb 2.
Article in English | MEDLINE | ID: mdl-28117445

ABSTRACT

Skin cancer, the most common human malignancy, is primarily diagnosed visually, beginning with an initial clinical screening and followed potentially by dermoscopic analysis, a biopsy and histopathological examination. Automated classification of skin lesions using images is a challenging task owing to the fine-grained variability in the appearance of skin lesions. Deep convolutional neural networks (CNNs) show potential for general and highly variable tasks across many fine-grained object categories. Here we demonstrate classification of skin lesions using a single CNN, trained end-to-end from images directly, using only pixels and disease labels as inputs. We train a CNN using a dataset of 129,450 clinical images (two orders of magnitude larger than previous datasets) consisting of 2,032 different diseases. We test its performance against 21 board-certified dermatologists on biopsy-proven clinical images with two critical binary classification use cases: keratinocyte carcinomas versus benign seborrheic keratoses; and malignant melanomas versus benign nevi. The first case represents the identification of the most common cancers; the second represents the identification of the deadliest skin cancer. The CNN achieves performance on par with all tested experts across both tasks, demonstrating an artificial intelligence capable of classifying skin cancer with a level of competence comparable to dermatologists. Outfitted with deep neural networks, mobile devices can potentially extend the reach of dermatologists outside of the clinic. With 6.3 billion smartphone subscriptions projected to exist by the year 2021 (ref. 13), such devices could provide low-cost universal access to vital diagnostic care.
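Below is a minimal sketch of the kind of end-to-end training pipeline the abstract describes: a pretrained CNN fine-tuned on labelled lesion photographs using only pixels and disease labels. It assumes PyTorch/torchvision, an Inception-style backbone, and a hypothetical lesion_images/train folder with one sub-directory per disease label; it is an illustration, not the authors' actual pipeline or hyperparameters.

```python
# Minimal sketch: end-to-end fine-tuning of a pretrained CNN on labelled
# lesion images. Paths, class count, and hyperparameters are illustrative.
import torch
from torch import nn
from torchvision import datasets, models, transforms

preprocess = transforms.Compose([
    transforms.Resize((299, 299)),          # Inception-style input size
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical directory layout: one sub-folder per disease label.
train_set = datasets.ImageFolder("lesion_images/train", transform=preprocess)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.inception_v3(weights="IMAGENET1K_V1", aux_logits=True)
num_classes = len(train_set.classes)
model.fc = nn.Linear(model.fc.in_features, num_classes)
model.AuxLogits.fc = nn.Linear(model.AuxLogits.fc.in_features, num_classes)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:                       # one illustrative epoch
    optimizer.zero_grad()
    outputs, aux_outputs = model(images)            # Inception returns aux logits in train mode
    loss = criterion(outputs, labels) + 0.4 * criterion(aux_outputs, labels)
    loss.backward()
    optimizer.step()
```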


Subjects
Dermatologists/standards; Neural Networks, Computer; Skin Neoplasms/classification; Skin Neoplasms/diagnosis; Automation; Cell Phone/statistics & numerical data; Datasets as Topic; Humans; Keratinocytes/pathology; Keratosis, Seborrheic/classification; Keratosis, Seborrheic/diagnosis; Keratosis, Seborrheic/pathology; Melanoma/classification; Melanoma/diagnosis; Melanoma/pathology; Nevus/classification; Nevus/diagnosis; Nevus/pathology; Photography; Reproducibility of Results; Skin Neoplasms/pathology
2.
Nature; 546(7660): 686, 2017 Jun 28.
Article in English | MEDLINE | ID: mdl-28658222

ABSTRACT

This corrects the article DOI: 10.1038/nature21056.

3.
Ophthalmology; 129(2): 139-146, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34352302

ABSTRACT

PURPOSE: To develop and evaluate an automated, portable algorithm to differentiate active corneal ulcers from healed scars using only external photographs. DESIGN: A convolutional neural network was trained and tested using photographs of corneal ulcers and scars. PARTICIPANTS: De-identified photographs of corneal ulcers were obtained from the Steroids for Corneal Ulcers Trial (SCUT), Mycotic Ulcer Treatment Trial (MUTT), and Byers Eye Institute at Stanford University. METHODS: Photographs of corneal ulcers (n = 1313) and scars (n = 1132) from the SCUT and MUTT were used to train a convolutional neural network (CNN). The CNN was tested on 2 different patient populations from eye clinics in India (n = 200) and the Byers Eye Institute at Stanford University (n = 101). Accuracy was evaluated against gold standard clinical classifications. Feature importances for the trained model were visualized using gradient-weighted class activation mapping. MAIN OUTCOME MEASURES: Accuracy of the CNN was assessed via F1 score. The area under the receiver operating characteristic (ROC) curve (AUC) was used to measure the trade-off between sensitivity and specificity. RESULTS: The CNN correctly classified 115 of 123 active ulcers and 65 of 77 scars in patients with corneal ulcers from India (F1 score, 92.0% [95% confidence interval (CI), 88.2%-95.8%]; sensitivity, 93.5% [95% CI, 89.1%-97.9%]; specificity, 84.42% [95% CI, 79.42%-89.42%]; ROC: AUC, 0.9731). The CNN correctly classified 43 of 55 active ulcers and 42 of 46 scars in patients with corneal ulcers from Northern California (F1 score, 84.3% [95% CI, 77.2%-91.4%]; sensitivity, 78.2% [95% CI, 67.3%-89.1%]; specificity, 91.3% [95% CI, 85.8%-96.8%]; ROC: AUC, 0.9474). The CNN visualizations correlated with clinically relevant features such as corneal infiltrate, hypopyon, and conjunctival injection. CONCLUSIONS: The CNN classified corneal ulcers and scars with high accuracy and generalized to patient populations outside of its training data. The CNN focused on clinically relevant features when it made a diagnosis. The CNN demonstrated potential as an inexpensive diagnostic approach that may aid triage in communities with limited access to eye care.
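The outcome measures above (F1 score, sensitivity, specificity, ROC AUC) can be computed from a classifier's outputs with standard tooling. The sketch below uses scikit-learn on small illustrative arrays; the labels, probabilities, and 0.5 threshold are assumptions for demonstration, not the study's data.

```python
# Minimal sketch of the evaluation metrics described above (F1, sensitivity,
# specificity, ROC AUC), computed with scikit-learn on illustrative arrays.
import numpy as np
from sklearn.metrics import f1_score, roc_auc_score, confusion_matrix

# Hypothetical ground truth (1 = active ulcer, 0 = healed scar) and
# predicted probabilities from a trained classifier.
y_true = np.array([1, 1, 0, 1, 0, 0, 1, 0])
y_prob = np.array([0.91, 0.78, 0.12, 0.65, 0.40, 0.08, 0.55, 0.30])
y_pred = (y_prob >= 0.5).astype(int)    # assumed decision threshold

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)            # recall on the "active ulcer" class
specificity = tn / (tn + fp)

print("F1:", f1_score(y_true, y_pred))
print("Sensitivity:", sensitivity)
print("Specificity:", specificity)
print("ROC AUC:", roc_auc_score(y_true, y_prob))
```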


Subjects
Cicatrix/diagnostic imaging; Corneal Ulcer/diagnostic imaging; Deep Learning; Eye Infections, Bacterial/diagnostic imaging; Eye Infections, Fungal/diagnostic imaging; Photography; Wound Healing/physiology; Algorithms; Area Under Curve; Cicatrix/physiopathology; Corneal Ulcer/classification; Corneal Ulcer/microbiology; Eye Infections, Bacterial/classification; Eye Infections, Bacterial/microbiology; Eye Infections, Fungal/classification; Eye Infections, Fungal/microbiology; False Positive Reactions; Humans; Predictive Value of Tests; ROC Curve; Retrospective Studies; Sensitivity and Specificity; Slit Lamp Microscopy
4.
Nat Med; 25(1): 24-29, 2019 Jan.
Article in English | MEDLINE | ID: mdl-30617335

ABSTRACT

Here we present deep-learning techniques for healthcare, centering our discussion on deep learning in computer vision, natural language processing, reinforcement learning, and generalized methods. We describe how these computational techniques can impact a few key areas of medicine and explore how to build end-to-end systems. Our discussion of computer vision focuses largely on medical imaging, and we describe the application of natural language processing to domains such as electronic health record data. Similarly, reinforcement learning is discussed in the context of robotic-assisted surgery, and generalized deep-learning methods for genomics are reviewed.


Subjects
Deep Learning; Delivery of Health Care; Diagnostic Imaging; Electronic Health Records; Humans; Natural Language Processing
6.
IEEE Trans Pattern Anal Mach Intell; 35(5): 1039-1050, 2013 May.
Article in English | MEDLINE | ID: mdl-23520250

ABSTRACT

We describe a method for 3D object scanning by aligning depth scans that were taken from around an object with a Time-of-Flight (ToF) camera. These ToF cameras can measure depth scans at video rate. Because the underlying technology is comparatively simple, they have the potential to be produced economically in large volumes. Our easy-to-use, cost-effective scanning solution, which is based on such a sensor, could make 3D scanning technology more accessible to everyday users. The algorithmic challenge we face is that the sensor's level of random noise is substantial and there is a nontrivial systematic bias. In this paper, we show the surprising result that 3D scans of reasonable quality can also be obtained with a sensor of such low data quality. Established filtering and scan alignment techniques from the literature fail to achieve this goal. In contrast, our algorithm is based on a new combination of a 3D superresolution method with a probabilistic scan alignment approach that explicitly takes into account the sensor's noise characteristics.
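As a rough illustration of the noise problem the abstract describes, the sketch below fuses several already-aligned ToF depth frames by inverse-variance weighting, which suppresses random noise (though not systematic bias). This is not the authors' superresolution or probabilistic alignment algorithm; the frame count, noise model, and scene are assumptions.

```python
# Illustrative sketch (not the authors' algorithm): inverse-variance weighted
# fusion of several already-aligned ToF depth frames to suppress random noise.
import numpy as np

def fuse_depth_frames(frames, variances):
    """frames: list of HxW depth maps (metres), NaN where no measurement.
    variances: per-frame noise variance estimates of the same shape."""
    frames = np.stack(frames)              # (N, H, W)
    weights = 1.0 / np.stack(variances)    # inverse-variance weights
    weights[np.isnan(frames)] = 0.0        # ignore missing measurements
    frames = np.nan_to_num(frames)
    fused = (weights * frames).sum(axis=0) / np.clip(weights.sum(axis=0), 1e-9, None)
    return fused

# Hypothetical usage: 30 noisy frames of a static scene, constant noise model.
rng = np.random.default_rng(0)
truth = np.full((240, 320), 1.5)                       # flat wall at 1.5 m
frames = [truth + rng.normal(0, 0.02, truth.shape) for _ in range(30)]
variances = [np.full(truth.shape, 0.02 ** 2)] * 30
print(np.abs(fuse_depth_frames(frames, variances) - truth).mean())
```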

7.
IEEE Trans Biomed Eng; 58(1): 159-171, 2011 Jan.
Article in English | MEDLINE | ID: mdl-20934939

ABSTRACT

Recent advances in optical imaging have led to the development of miniature microscopes that can be brought to the patient for visualizing tissue structures in vivo. These devices have the potential to revolutionize health care by replacing tissue biopsy with in vivo pathology. One of the primary limitations of these microscopes, however, is that the constrained field of view can make image interpretation and navigation difficult. In this paper, we show that image mosaicing can be a powerful tool for widening the field of view and creating image maps of microanatomical structures. First, we present an efficient algorithm for pairwise image mosaicing that can be implemented in real time. Then, we address two of the main challenges associated with image mosaicing in medical applications: cumulative image registration errors and scene deformation. To deal with cumulative errors, we present a global alignment algorithm that draws upon techniques commonly used in probabilistic robotics. To accommodate scene deformation, we present a local alignment algorithm that incorporates deformable surface models into the mosaicing framework. These algorithms are demonstrated on image sequences acquired in vivo with various imaging devices including a hand-held dual-axes confocal microscope, a miniature two-photon microscope, and a commercially available confocal microendoscope.
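A pairwise mosaicing step of the kind described above can be sketched with standard feature matching and homography estimation. The example below uses OpenCV ORB features and RANSAC; the file names are placeholders, and the paper's real-time, globally consistent, and deformable-surface extensions are not reproduced here.

```python
# Minimal sketch of pairwise image mosaicing via feature matching and a
# homography, assuming OpenCV. File names are illustrative placeholders.
import cv2
import numpy as np

img1 = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:200]

src = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)   # maps img2 into img1's frame

# Warp the second frame into the first frame's coordinates, then paste img1 on top.
h, w = img1.shape
mosaic = cv2.warpPerspective(img2, H, (w * 2, h * 2))
mosaic[:h, :w] = img1
cv2.imwrite("mosaic.png", mosaic)
```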


Subjects
Endoscopes; Image Processing, Computer-Assisted/methods; Microscopy, Confocal; Algorithms; Animals; Brain/anatomy & histology; Brain/blood supply; Endoscopy/methods; Hand; Humans; Mice; Microscopy, Confocal/instrumentation; Microscopy, Confocal/methods; Miniaturization; Robotics/instrumentation; Skin/anatomy & histology