Results 1 - 2 of 2
1.
Comput Biol Med; 107: 18-29, 2019 Apr.
Article in English | MEDLINE | ID: mdl-30771549

ABSTRACT

About one in eight women in the U.S. will develop invasive breast cancer at some point in life. Breast cancer is the most common cancer found in women, and if it is identified at an early stage through mammograms (x-ray images of the breast), the chances of successful treatment can be high. Typically, mammograms are screened by radiologists, who determine whether a biopsy is necessary to ascertain the presence of cancer. Although historical screening methods have been effective, recent advances in computer vision and web technologies may be able to improve the accuracy, speed, cost, and accessibility of mammogram screenings. We propose a total screening solution comprising three main components: a web service for uploading images and reviewing results, a machine learning algorithm for accepting or rejecting images as valid mammograms, and an artificial neural network for locating potential malignancies. Once an image is uploaded to our web service, an image acceptor determines whether or not the image is a mammogram. The image acceptor is primarily a one-class SVM built on features derived with a variational autoencoder. If an image is accepted as a mammogram, the malignancy identifier, a ResNet-101 Faster R-CNN, locates tumors within the mammogram. On test data, the image acceptor had only 2 misclassifications out of 410 mammograms and 2 misclassifications out of 1,640 non-mammograms, while the malignancy identifier achieved 0.951 AUROC when tested on BI-RADS 1, 5, and 6 images from the INbreast dataset.
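The image-acceptor design described above, a one-class SVM over VAE latent features, can be sketched roughly as follows. This is a hypothetical illustration, not the authors' code: the `encode` function stands in for the trained VAE encoder (not shown), random vectors substitute for real latents, and all hyperparameter values are assumptions.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

def encode(images):
    # Placeholder for the VAE encoder: maps each image to a latent vector.
    # A real implementation would run the encoder network here.
    return rng.normal(size=(len(images), 32))

# Fit the one-class SVM on latents of images known to be mammograms,
# so it models the "mammogram" distribution and flags everything else.
mammogram_latents = encode(range(200))
acceptor = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05)
acceptor.fit(mammogram_latents)

# predict() returns +1 for images judged in-distribution (accepted as
# mammograms) and -1 for outliers (rejected).
labels = acceptor.predict(encode(range(10)))
```

The appeal of a one-class formulation here is that "non-mammogram" is an open-ended category; training only on valid mammograms avoids having to enumerate negatives.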


Subject(s)
Mammography/methods, Neural Networks, Computer, Radiographic Image Interpretation, Computer-Assisted/methods, Telemedicine/methods, Algorithms, Breast/diagnostic imaging, Breast Neoplasms/diagnostic imaging, Female, Humans, Machine Learning
2.
Comput Biol Med; 111: 103351, 2019 Aug.
Article in English | MEDLINE | ID: mdl-31325742

ABSTRACT

Automatic detection of anatomical landmarks and diseases in medical images is a challenging task which could greatly aid medical diagnosis and reduce the cost and time of investigational procedures. Two particular challenges of digital image processing in medical applications are the sparsity of annotated medical images and the lack of uniformity across images and image classes. This paper presents methodologies for maximizing classification accuracy on a small medical image dataset, the Kvasir dataset, by performing robust image preprocessing and applying state-of-the-art deep learning. Images are classified as being or involving an anatomical landmark (pylorus, z-line, cecum), a diseased state (esophagitis, ulcerative colitis, polyps), or a medical procedure (dyed lifted polyps, dyed resection margins). A framework for modular and automatic preprocessing of gastrointestinal tract images (MAPGI) is proposed, which applies edge removal, contrast enhancement, filtering, color mapping and scaling to each image in the dataset. Gamma correction values are automatically calculated for individual images such that the mean pixel value for each image is normalized to 90 ± 1 in a 0-255 pixel value range. Three state-of-the-art neural network architectures, Inception-ResNet-v2, Inception-v4, and NASNet, are trained on the Kvasir dataset, and their classification performance is compared on validation data. In each case, 85% of the images from the Kvasir dataset are used for training, while the other 15% are reserved for validation. The resulting accuracies achieved using Inception-v4, Inception-ResNet-v2, and NASNet were 0.9845, 0.9848, and 0.9735, respectively. In addition, Inception-v4 achieved an average of 0.938 precision, 0.939 recall, 0.991 specificity, 0.938 F1 score, and 0.929 Matthews correlation coefficient (MCC). Bootstrapping provided NASNet, the worst performing model, a lower bound of 0.9723 accuracy on the 95% confidence interval.
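The per-image gamma normalization step described above can be sketched as a one-dimensional search: since the mean of a gamma-corrected image decreases monotonically as gamma increases, a bisection finds the gamma that brings the mean to 90 ± 1. This is a minimal sketch under that assumption; the actual MAPGI procedure and its search range may differ.

```python
import numpy as np

def normalize_mean_gamma(img, target=90.0, tol=1.0):
    """Find a gamma so the corrected image's mean pixel value is target +/- tol.

    `img` is a uint8 array with values in 0-255. Gamma correction maps each
    normalized pixel x to x ** gamma; larger gamma darkens the image.
    """
    x = img.astype(np.float64) / 255.0
    lo, hi = 0.05, 20.0                  # assumed search range for gamma
    g = 1.0
    for _ in range(60):                  # bisection on the monotonic mean
        g = (lo + hi) / 2.0
        mean = 255.0 * np.mean(x ** g)
        if abs(mean - target) <= tol:
            break
        if mean > target:                # still too bright -> raise gamma
            lo = g
        else:                            # too dark -> lower gamma
            hi = g
    return (255.0 * x ** g).astype(np.uint8), g

# Example: a synthetic gradient image gets pulled to a mean near 90.
img = np.arange(256, dtype=np.uint8).repeat(4).reshape(32, 32)
out, gamma = normalize_mean_gamma(img)
```

Normalizing per-image brightness this way removes one source of the cross-image non-uniformity the paper highlights, so the networks see inputs on a consistent intensity scale.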


Subject(s)
Anatomic Landmarks, Deep Learning, Endoscopy, Gastrointestinal/methods, Gastrointestinal Tract, Image Processing, Computer-Assisted/methods, Anatomic Landmarks/anatomy & histology, Anatomic Landmarks/diagnostic imaging, Anatomic Landmarks/pathology, Databases, Factual, Gastrointestinal Tract/anatomy & histology, Gastrointestinal Tract/diagnostic imaging, Gastrointestinal Tract/pathology, Humans, Sensitivity and Specificity
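The bootstrapped accuracy lower bound mentioned in the second abstract can be illustrated as follows: resample the per-image correctness indicators with replacement many times and take the 2.5th percentile of the resampled accuracies as the lower end of a 95% confidence interval. The validation-set size and the simulated hit/miss labels below are assumptions for illustration, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(42)

n = 1200                                  # assumed validation-set size
# Simulated per-image correctness at roughly NASNet's reported accuracy.
correct = rng.random(n) < 0.9735

# 2000 bootstrap resamples of the validation set, each giving one accuracy.
accs = [rng.choice(correct, size=n, replace=True).mean()
        for _ in range(2000)]
lower = float(np.percentile(accs, 2.5))   # lower end of the 95% CI
```

Reporting a bootstrap lower bound rather than a point accuracy gives a conservative estimate that accounts for the small validation set.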