Results 1 - 20 of 24
1.
Comput Biol Med ; 138: 104890, 2021 11.
Article in English | MEDLINE | ID: mdl-34601391

ABSTRACT

Cervical cancer is a disease of significant concern affecting women's health worldwide. Early detection and treatment at the precancerous stage can help reduce mortality. High-grade cervical abnormalities and precancer are confirmed by microscopic analysis of cervical histopathology. However, manual analysis of cervical biopsy slides is time-consuming, requires expert pathologists, and suffers from reader variability. Prior work in the literature has suggested using automated image analysis algorithms for cervical histopathology images captured with whole-slide digital scanners (e.g., Aperio, Hamamatsu). However, whole-slide digital tissue scanners with good optical magnification and acceptable imaging quality are cost-prohibitive and difficult to acquire in low- and middle-resource regions. Hence, the development of low-cost imaging systems and automated image analysis algorithms is of critical importance. Motivated by this, we conduct an experimental study to assess the feasibility of a low-cost diagnostic system that pairs such an imaging setup with an analysis algorithm for H&E-stained cervical tissue images. In our imaging system, images are acquired by a smartphone affixed to the top of a commonly available light microscope that magnifies the cervical tissue. The images are not captured at a constant optical magnification, and, unlike whole-slide scanners, our imaging system is unable to record the magnification. The images are megapixel images and are labeled based on the presence of abnormal cells. Our dataset contains a total of 1331 images (train: 846, validation: 116, test: 369). We formulate the classification task as a deep multiple instance learning problem and quantitatively evaluate the classification performance of four types of multiple instance learning algorithms trained with five architectures designed with varying instance sizes.
Finally, we design a sparse attention-based multiple instance learning framework that achieves up to 84.55% classification accuracy on the test set.
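The attention-based multiple instance learning pooling underlying such a framework has a simple generic form: score each instance, softmax the scores into attention weights, and take the weighted sum of instance features as the bag embedding. The sketch below is illustrative only; the toy bag, feature dimensions, and `score_fn` are assumptions, and the paper's sparse-attention variant is not reproduced here.

```python
import math

def softmax(xs):
    m = max(xs)                      # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention_mil_pool(instances, score_fn):
    """Pool a bag of instance feature vectors into one bag embedding:
    the attention-weighted sum of the instances."""
    weights = softmax([score_fn(x) for x in instances])
    dim = len(instances[0])
    return [sum(w * x[d] for w, x in zip(weights, instances))
            for d in range(dim)]

# toy bag of three 2-D instance features; the score is just the first
# coordinate here (a learned scoring network would be used in practice)
bag = [[1.0, 0.0], [0.0, 1.0], [2.0, 2.0]]
pooled = attention_mil_pool(bag, score_fn=lambda x: x[0])
```

A classifier head would then operate on `pooled`, so an image-level label can supervise the whole bag without instance-level annotation.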


Subject(s)
Image Processing, Computer-Assisted; Uterine Cervical Neoplasms; Algorithms; Female; Humans; Microscopy; Uterine Cervical Neoplasms/diagnostic imaging
2.
IEEE Access ; 9: 53266-53275, 2021.
Article in English | MEDLINE | ID: mdl-34178558

ABSTRACT

Cervical cancer is caused by persistent infection with certain types of the Human Papillomavirus (HPV) and is a leading cause of female mortality, particularly in low- and middle-income countries (LMIC). Visual inspection of the cervix with acetic acid (VIA) is a commonly used technique in cervical screening. While this technique is inexpensive, clinical assessment is highly subjective, and relatively poor reproducibility has been reported. A deep learning-based algorithm for automatic visual evaluation (AVE) of aceto-whitened cervical images was shown to be effective in detecting confirmed precancer (i.e., the direct precursor to invasive cervical cancer). The images were selected from a large longitudinal study conducted by the National Cancer Institute in the Guanacaste province of Costa Rica. The training of AVE used annotation of the cervix boundary, and the data scarcity challenge was addressed with manually optimized data augmentation. In contrast, we present a novel approach for cervical precancer detection using a deep metric learning (DML) framework that requires no cervix boundary marking. DML is an advanced learning strategy that can better handle data scarcity and the biased training caused by class imbalance. Three widely used state-of-the-art DML techniques are evaluated: (a) contrastive loss minimization, (b) N-pair embedding loss minimization, and (c) batch-hard loss minimization. Three popular deep convolutional neural networks (ResNet-50, MobileNet, NasNet) are configured for training with DML to produce class-separated (i.e., linearly separable) image feature descriptors. Finally, a K-Nearest Neighbor (KNN) classifier is trained with the extracted deep features. Both the feature quality and classification performance are quantitatively evaluated on the same dataset as used in AVE.
The results show that, unlike AVE and without using any data augmentation, the best model produced by our research improves specificity in disease detection without compromising sensitivity. The present research thus paves the way for new research directions in the field.
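Of the three DML objectives listed, contrastive loss has the simplest form: pull same-class embedding pairs together, and push different-class pairs apart until they are at least a margin apart. A minimal sketch with illustrative toy embeddings and margin (not taken from the paper):

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def contrastive_loss(a, b, same_class, margin=1.0):
    """Contrastive loss on one embedding pair: squared distance for
    same-class pairs, squared hinge on (margin - distance) otherwise."""
    d = euclidean(a, b)
    if same_class:
        return 0.5 * d ** 2
    return 0.5 * max(0.0, margin - d) ** 2

pull = contrastive_loss([0.0, 0.0], [0.6, 0.8], same_class=True)
push = contrastive_loss([0.0, 0.0], [0.3, 0.4], same_class=False)
```

Minimizing this over many pairs drives the embedding toward class-separated clusters, which is what makes a simple KNN classifier effective on the extracted features.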

3.
J Clin Med ; 10(5)2021 Mar 01.
Article in English | MEDLINE | ID: mdl-33804469

ABSTRACT

Uterine cervical cancer is a leading cause of women's mortality worldwide. Cervical tissue ablation is an effective surgical excision of high-grade lesions that are determined to be precancerous. Our prior work on the Automated Visual Examination (AVE) method demonstrated a highly effective technique for analyzing digital images of the cervix to identify precancer. The next step is to determine whether the patient is treatable using ablation. However, not all women are eligible for this therapy due to cervical characteristics. We present a machine learning algorithm that uses a deep learning object detection architecture to determine whether a cervix is eligible for ablative treatment based on visual characteristics presented in the image. The algorithm builds on the well-known RetinaNet architecture to derive a simpler and novel architecture in which the last convolutional layer is constructed by upsampling and concatenating specific RetinaNet pretrained layers, followed by an output module consisting of a Global Average Pooling (GAP) layer and a fully connected layer. To explain the recommendation of the deep learning algorithm and determine whether it is consistent with lesion presentation on the cervical anatomy, we visualize classification results using two techniques: (i) our Class-selective Relevance Map (CRM), which has been reported earlier, and (ii) the Class Activation Map (CAM). The class prediction heatmaps were evaluated by a gynecologic oncologist with more than 20 years of experience. Based on our observations and the expert's opinion, the customized architecture not only outperforms the baseline RetinaNet network in treatability classification, but also provides insight into the features and regions the network considers significant, explaining the reasons for its treatment recommendation.
Furthermore, by investigating the heatmaps on Gaussian-blurred images that serve as surrogates for out-of-focus cervical pictures, we demonstrate the effect of image quality degradation on cervical treatability classification and underscore the need for images with good visual quality.
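Of the two visualization techniques, CAM has a standard textbook construction: weight each channel of the last convolutional feature maps by the fully connected layer's weight for the target class, then sum over channels to obtain a spatial heatmap. A minimal sketch (the toy maps and weights are illustrative):

```python
def class_activation_map(feature_maps, class_weights):
    """Class Activation Map: channel-wise weighted sum of the last
    convolutional feature maps, using the target class's FC weights."""
    h, w = len(feature_maps[0]), len(feature_maps[0][0])
    cam = [[0.0] * w for _ in range(h)]
    for fmap, cw in zip(feature_maps, class_weights):
        for i in range(h):
            for j in range(w):
                cam[i][j] += cw * fmap[i][j]
    return cam

# two 2x2 channel maps, with class weights 1.0 and 0.5
maps = [[[1.0, 0.0], [0.0, 0.0]],
        [[0.0, 2.0], [0.0, 0.0]]]
heat = class_activation_map(maps, [1.0, 0.5])
```

The resulting map is upsampled to the input image size and overlaid as a heatmap; high values mark regions the network considered significant for the class.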

4.
Article in English | MEDLINE | ID: mdl-35445152

ABSTRACT

Visual inspection of the cervix with acetic acid (VIA), though error prone, has long been used for screening women and to guide management for cervical cancer. The automated visual evaluation (AVE) technique, in which deep learning is used to predict precancer based on a digital image of the acetowhitened cervix, has demonstrated its promise as a low-cost method to improve on human performance. However, there are several challenges in moving AVE beyond proof-of-concept and deploying it as a practical adjunct tool in visual screening. One of them is making AVE robust across images captured using different devices. We propose a new deep learning based clustering approach to investigate whether the images taken by three different devices (a common smartphone, a custom smartphone-based handheld device for cervical imaging, and a clinical colposcope equipped with SLR digital camera-based imaging capability) can be well distinguished from each other with respect to the visual appearance/content within their cervix regions. We argue that disparity in visual appearance of a cervix across devices could be a significant confounding factor in training and generalizing AVE performance. Our method consists of four components: cervix region detection, feature extraction, feature encoding, and clustering. Multiple experiments are conducted to demonstrate the effectiveness of each component and compare alternative methods in each component. Our proposed method achieves high clustering accuracy (97%) and significantly outperforms several representative deep clustering methods on our dataset. The high clustering performance indicates the images taken from these three devices are different with respect to visual appearance. Our results and analysis establish a need for developing a method that minimizes such variance among the images acquired from different devices. 
Our results also underscore the need for a large number of training images from different sources to achieve robust, device-independent AVE performance worldwide.
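The last of the four components can be illustrated with plain k-means over already-extracted feature vectors. This is a generic sketch with toy 2-D features standing in for deep cervix-region descriptors; it is not the paper's actual deep clustering method, which also includes cervix detection and feature encoding.

```python
def kmeans(points, k, iters=20):
    """Plain k-means: repeatedly assign each point to its nearest center,
    then move each center to the mean of its assigned points.
    Uses the first k points as initial centers (deterministic sketch)."""
    centers = [list(p) for p in points[:k]]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda c: sum((a - b) ** 2
                                            for a, b in zip(p, centers[c])))
            clusters[nearest].append(p)
        for c in range(k):
            if clusters[c]:
                centers[c] = [sum(v) / len(v) for v in zip(*clusters[c])]
    return centers

# two well-separated toy "device" groups in feature space
pts = [[0.0, 0.0], [0.2, 0.0], [5.0, 5.0], [5.2, 5.0]]
centers = kmeans(pts, k=2)
```

High clustering accuracy on such features is exactly what indicates that images from different devices occupy distinct regions of the feature space.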

5.
Med Image Anal ; 67: 101816, 2021 01.
Article in English | MEDLINE | ID: mdl-33080509

ABSTRACT

Histopathological analysis is the present gold standard for precancerous lesion diagnosis. Automated histopathological classification from digital images requires supervised training, which in turn demands a large number of expert annotations that can be expensive and time-consuming to collect. Meanwhile, accurate classification of image patches cropped from whole-slide images is essential for standard sliding window-based histopathology slide classification methods. To mitigate these issues, we propose a carefully designed conditional GAN model, namely HistoGAN, for synthesizing realistic histopathology image patches conditioned on class labels. We also investigate a novel synthetic augmentation framework that selectively adds new synthetic image patches generated by our proposed HistoGAN, rather than directly expanding the training set with synthetic images. By selecting synthetic images based on the confidence of their assigned labels and their feature similarity to real labeled images, our framework provides quality assurance for synthetic augmentation. Our models are evaluated on two datasets: a cervical histopathology image dataset with limited annotations, and another dataset of lymph node histopathology images with metastatic cancer. We show that leveraging HistoGAN-generated images with selective augmentation results in significant and consistent improvements in classification performance (6.7% and 2.8% higher accuracy) on the cervical histopathology and metastatic cancer datasets, respectively.
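The selection rule, keeping a synthetic patch only when its assigned label is confident and it resembles real labeled data, can be sketched as follows. The cosine-similarity measure and the threshold values are illustrative assumptions, not the paper's exact criteria:

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return sum(x * y for x, y in zip(a, b)) / (na * nb) if na and nb else 0.0

def select_synthetic(candidates, real_feats, conf_thresh=0.9, sim_thresh=0.8):
    """Keep a synthetic patch only if (a) the classifier is confident in
    its assigned label and (b) its feature vector is close to at least one
    real labeled image -- a quality gate before augmenting the training set."""
    return [(feat, conf) for feat, conf in candidates
            if conf >= conf_thresh
            and any(cosine(feat, r) >= sim_thresh for r in real_feats)]

real = [[1.0, 0.0]]
cands = [([0.9, 0.1], 0.95),   # confident and similar -> kept
         ([0.0, 1.0], 0.99),   # confident but dissimilar -> dropped
         ([1.0, 0.0], 0.50)]   # similar but low confidence -> dropped
kept = select_synthetic(cands, real)
```

Gating on both criteria is what distinguishes selective augmentation from naively dumping all generator output into the training set.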


Subject(s)
Neoplasms; Humans
6.
Diagnostics (Basel) ; 10(7)2020 Jul 03.
Article in English | MEDLINE | ID: mdl-32635269

ABSTRACT

Automated Visual Examination (AVE) is a deep learning algorithm that aims to improve the effectiveness of cervical precancer screening, particularly in low- and medium-resource regions. It was trained on data from a large longitudinal study conducted by the National Cancer Institute (NCI) and has been shown to accurately identify cervices with early stages of cervical neoplasia for clinical evaluation and treatment. The algorithm processes images of the uterine cervix taken with a digital camera and alerts the user if the woman is a candidate for further evaluation. This requires that the algorithm be presented with images of the cervix, the object of interest, of acceptable quality, i.e., in sharp focus, with good illumination, without shadows or other occlusions, and showing the entire squamo-columnar transformation zone. Our prior work has addressed some of these constraints to help discard images that do not meet these criteria. Another requirement is determining that the image actually contains the cervix to a sufficient extent: non-cervix or otherwise inadequate images could lead to suboptimal or wrong results, and manual removal of such images is labor-intensive and time-consuming, particularly when working with large retrospective collections acquired with inadequate quality control. In this work, we present a novel ensemble deep learning method to identify cervix and non-cervix images in a smartphone-acquired cervical image dataset. The ensemble combines the assessments of three deep learning architectures, RetinaNet, Deep SVDD, and a customized CNN (Convolutional Neural Network), each using a different strategy to arrive at its decision: object detection, one-class classification, and binary classification, respectively. We examined the performance of each individual architecture and of the ensemble of all three.
An average accuracy and F-1 score of 91.6% and 0.890, respectively, were achieved on a separate test dataset consisting of more than 30,000 smartphone-captured images.
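One natural way to combine three binary assessments is a majority vote. The abstract does not specify the exact combination rule, so treat this sketch as an assumption:

```python
def ensemble_is_cervix(detector_hit, one_class_inlier, classifier_positive):
    """Combine the three models' binary decisions (object detection hit,
    one-class inlier, binary classifier positive) by majority vote."""
    return (int(detector_hit) + int(one_class_inlier)
            + int(classifier_positive)) >= 2

vote = ensemble_is_cervix(detector_hit=True, one_class_inlier=True,
                          classifier_positive=False)
```

Because the three models use very different decision strategies, their errors tend to be less correlated, which is the usual rationale for ensembling them.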

7.
Diagnostics (Basel) ; 10(1)2020 Jan 14.
Article in English | MEDLINE | ID: mdl-31947707

ABSTRACT

Evidence from recent research shows that automatic visual evaluation (AVE) of photographic images of the uterine cervix using deep learning-based algorithms presents a viable solution for improving cervical cancer screening by visual inspection with acetic acid (VIA). However, a significant performance determinant in AVE is the photographic image quality. While this includes image sharpness and focus, an important criterion is the localization of the cervix region. Deep learning networks have been successfully applied for object localization and segmentation in images, providing impetus for studying their use for fine contour segmentation of the cervix. In this paper, we present an evaluation of two state-of-the-art deep learning-based object localization and segmentation methods, viz., Mask R-convolutional neural network (CNN) and MaskX R-CNN, for automatic cervix segmentation using three datasets. We carried out extensive experimental tests and algorithm comparisons on each individual dataset and across datasets, and achieved performance either notably higher than, or comparable to, that reported in the literature. The highest Dice and intersection-over-union (IoU) scores that we obtained using Mask R-CNN were 0.947 and 0.901, respectively.
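The two reported metrics have standard set-based definitions over mask pixels, sketched here on toy masks:

```python
def dice_iou(pred, truth):
    """pred, truth: sets of (row, col) pixels inside each segmentation
    mask. Dice = 2|A∩B| / (|A|+|B|); IoU = |A∩B| / |A∪B|."""
    inter = len(pred & truth)
    union = len(pred | truth)
    dice = 2 * inter / (len(pred) + len(truth)) if union else 1.0
    iou = inter / union if union else 1.0
    return dice, iou

# toy 3-pixel masks overlapping in 2 pixels
d, i = dice_iou({(0, 0), (0, 1), (1, 0)}, {(0, 1), (1, 0), (1, 1)})
```

Dice is always at least as large as IoU for the same pair of masks, which is why the reported Dice (0.947) exceeds the reported IoU (0.901).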

8.
J Natl Cancer Inst ; 111(9): 923-932, 2019 09 01.
Article in English | MEDLINE | ID: mdl-30629194

ABSTRACT

BACKGROUND: Human papillomavirus vaccination and cervical screening are lacking in most lower resource settings, where approximately 80% of more than 500 000 cancer cases occur annually. Visual inspection of the cervix following acetic acid application is practical but not reproducible or accurate. The objective of this study was to develop a "deep learning"-based visual evaluation algorithm that automatically recognizes cervical precancer/cancer. METHODS: A population-based longitudinal cohort of 9406 women ages 18-94 years in Guanacaste, Costa Rica was followed for 7 years (1993-2000), incorporating multiple cervical screening methods and histopathologic confirmation of precancers. Tumor registry linkage identified cancers up to 18 years. Archived, digitized cervical images from screening, taken with a fixed-focus camera ("cervicography"), were used for training/validation of the deep learning-based algorithm. The resultant image prediction score (0-1) could be categorized to balance sensitivity and specificity for detection of precancer/cancer. All statistical tests were two-sided. RESULTS: Automated visual evaluation of enrollment cervigrams identified cumulative precancer/cancer cases with greater accuracy (area under the curve [AUC] = 0.91, 95% confidence interval [CI] = 0.89 to 0.93) than original cervigram interpretation (AUC = 0.69, 95% CI = 0.63 to 0.74; P < .001) or conventional cytology (AUC = 0.71, 95% CI = 0.65 to 0.77; P < .001). A single visual screening round restricted to women at the prime screening ages of 25-49 years could identify 127 (55.7%) of 228 precancers (cervical intraepithelial neoplasia 2/cervical intraepithelial neoplasia 3/adenocarcinoma in situ [AIS]) diagnosed cumulatively in the entire adult population (ages 18-94 years) while referring 11.0% for management. CONCLUSIONS: The results support consideration of automated visual evaluation of cervical images from contemporary digital cameras. 
If achieved, this might permit dissemination of effective point-of-care cervical screening.


Subject(s)
Cervix Uteri/diagnostic imaging; Deep Learning; Image Processing, Computer-Assisted; Uterine Cervical Neoplasms/diagnostic imaging; Uterine Cervical Neoplasms/epidemiology; Adult; Aged; Aged, 80 and over; Algorithms; Area Under Curve; Case-Control Studies; Cervix Uteri/pathology; Colposcopy; Early Detection of Cancer; Female; Humans; Image Processing, Computer-Assisted/methods; Mass Screening; Middle Aged; Population Surveillance; Sensitivity and Specificity; Severity of Illness Index; Uterine Cervical Neoplasms/pathology; Young Adult; Uterine Cervical Dysplasia/diagnostic imaging; Uterine Cervical Dysplasia/epidemiology; Uterine Cervical Dysplasia/pathology
9.
Neuroinformatics ; 16(3-4): 383-392, 2018 10.
Article in English | MEDLINE | ID: mdl-29725916

ABSTRACT

Inspired by classic Generative Adversarial Networks (GANs), we propose a novel end-to-end adversarial neural network, called SegAN, for the task of medical image segmentation. Since image segmentation requires dense, pixel-level labeling, the single scalar real/fake output of a classic GAN's discriminator may be ineffective in producing stable and sufficient gradient feedback to the networks. Instead, we use a fully convolutional neural network as the segmentor to generate segmentation label maps, and propose a novel adversarial critic network with a multi-scale L1 loss function to force the critic and segmentor to learn both global and local features that capture long- and short-range spatial relationships between pixels. In our SegAN framework, the segmentor and critic networks are trained in an alternating fashion in a min-max game: the critic is trained by maximizing the multi-scale loss function, while the segmentor is trained with only the gradients passed along by the critic, with the aim of minimizing the multi-scale loss function. We show that such a SegAN framework is more effective and stable for the segmentation task, and that it leads to better performance than the state-of-the-art U-Net segmentation method. We tested our SegAN method using datasets from the MICCAI BRATS brain tumor segmentation challenge. Extensive experimental results demonstrate the effectiveness of the proposed SegAN with multi-scale loss: on BRATS 2013, SegAN gives performance comparable to the state of the art for whole tumor and tumor core segmentation while achieving better precision and sensitivity for Gd-enhanced tumor core segmentation; on BRATS 2015, SegAN achieves better performance than the state of the art in both Dice score and precision.
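The multi-scale L1 objective can be sketched as the mean absolute difference between the critic's features for the two inputs (image masked by the predicted map vs. by the ground truth), averaged over the critic's layers (scales). The flattened toy feature vectors below are illustrative stand-ins for real critic activations:

```python
def multiscale_l1(critic_feats_pred, critic_feats_true):
    """SegAN-style multi-scale L1: per-scale mean absolute difference
    between critic features of the two inputs, averaged over scales."""
    per_scale = [sum(abs(a - b) for a, b in zip(fp, ft)) / len(fp)
                 for fp, ft in zip(critic_feats_pred, critic_feats_true)]
    return sum(per_scale) / len(per_scale)

# two scales of flattened critic features (toy values)
loss = multiscale_l1([[1.0, 2.0], [0.5]], [[1.0, 0.0], [0.0]])
```

Because every critic layer contributes a term, the segmentor receives dense gradient feedback at multiple receptive-field sizes instead of a single real/fake scalar.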


Subject(s)
Brain Neoplasms/diagnostic imaging; Databases, Factual; Magnetic Resonance Imaging; Neural Networks, Computer; Humans
10.
Pattern Recognit ; 63: 468-475, 2017 Mar.
Article in English | MEDLINE | ID: mdl-28603299

ABSTRACT

Cervical cancer is one of the most common types of cancer in women worldwide. Most deaths due to the disease occur in less developed areas of the world. In this work, we introduce a new image dataset along with expert-annotated diagnoses for evaluating image-based cervical disease classification algorithms. A large number of Cervigram® images were selected from a database provided by the US National Cancer Institute. For each image, we extract three complementary pyramid features: a pyramid histogram in L*A*B* color space (PLAB), a Pyramid Histogram of Oriented Gradients (PHOG), and a pyramid histogram of Local Binary Patterns (PLBP). Beyond hand-crafted pyramid features, we also investigate the performance of convolutional neural network (CNN) features for cervical disease classification. Our experimental results demonstrate the effectiveness of both our hand-crafted and our deep features. We intend to release this multi-feature dataset, and our extensive evaluations using seven classic classifiers can serve as baselines.
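The three hand-crafted descriptors share the same spatial-pyramid construction: histograms computed over successively finer grids of cells, concatenated into one vector. A generic sketch, with scalar pixel values standing in for LBP codes, gradient orientations, or color channels:

```python
def histogram(values, bins, lo=0.0, hi=1.0):
    """Fixed-range histogram of scalar values."""
    h = [0] * bins
    for v in values:
        h[min(int((v - lo) / (hi - lo) * bins), bins - 1)] += 1
    return h

def pyramid_histogram(img, levels=2, bins=2):
    """At pyramid level l the image splits into 2^l x 2^l cells;
    per-cell histograms from all levels are concatenated."""
    h, w = len(img), len(img[0])
    desc = []
    for level in range(levels):
        n = 2 ** level
        for r in range(n):
            for c in range(n):
                cell = [img[i][j]
                        for i in range(r * h // n, (r + 1) * h // n)
                        for j in range(c * w // n, (c + 1) * w // n)]
                desc += histogram(cell, bins)
    return desc

img = [[0.1, 0.9, 0.1, 0.9],
       [0.1, 0.9, 0.1, 0.9],
       [0.1, 0.1, 0.1, 0.1],
       [0.9, 0.9, 0.9, 0.9]]
desc = pyramid_histogram(img, levels=2, bins=2)  # 1 + 4 cells, 2 bins each
```

The coarse level captures global statistics while the finer cells add spatial layout, which is what makes pyramid features more discriminative than a single global histogram.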

11.
IEEE Trans Med Imaging ; 34(1): 229-45, 2015 Jan.
Article in English | MEDLINE | ID: mdl-25167547

ABSTRACT

Cervical cancer is the second most common type of cancer in women. Existing screening programs for cervical cancer, such as the Pap smear, suffer from low sensitivity; thus, many patients who are ill are not detected in the screening process. Using images of the cervix as an aid in cervical cancer screening has the potential to greatly improve sensitivity, and can be especially useful in resource-poor regions of the world. In this paper, we develop a data-driven computer algorithm for interpreting cervical images based on color and texture. We obtain 74% sensitivity and 90% specificity when differentiating high-grade cervical lesions from low-grade lesions and normal tissue. On the same dataset, using Pap tests alone yields a sensitivity of 37% and specificity of 96%, and using the HPV test alone gives 57% sensitivity and 93% specificity. Furthermore, we develop a comprehensive algorithmic framework based on Multimodal Entity Coreference for combining various tests to perform disease classification and diagnosis. When integrating multiple tests, we adopt information gain and gradient-based approaches for learning the relative weights of the different tests. In our evaluation, we present a novel algorithm that integrates cervical images, Pap, HPV, and patient age, which yields 83.21% sensitivity and 94.79% specificity, a statistically significant improvement over using any single source of information alone.
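The information-gain weighting mentioned above follows the textbook definition: a test whose outcomes cleanly separate diseased from healthy cases earns a larger weight. A minimal sketch (the toy outcome counts are illustrative, not from the paper):

```python
import math

def entropy(pos, neg):
    """Binary entropy of a (pos, neg) count pair, in bits."""
    total = pos + neg
    e = 0.0
    for count in (pos, neg):
        if count:
            p = count / total
            e -= p * math.log2(p)
    return e

def information_gain(outcome_counts):
    """outcome_counts: (pos, neg) disease counts per test outcome.
    IG = H(parent) - weighted average of per-outcome entropies."""
    pos = sum(p for p, n in outcome_counts)
    neg = sum(n for p, n in outcome_counts)
    total = pos + neg
    children = sum((p + n) / total * entropy(p, n)
                   for p, n in outcome_counts)
    return entropy(pos, neg) - children

perfect = information_gain([(5, 0), (0, 5)])   # test splits classes exactly
useless = information_gain([(2, 2), (3, 3)])   # outcomes carry no signal
```

Normalizing the per-test gains then gives the relative weights used when fusing the tests' scores.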


Subject(s)
Diagnosis, Computer-Assisted/methods; Uterine Cervical Dysplasia/diagnosis; Uterine Cervical Dysplasia/pathology; Uterine Cervical Dysplasia/virology; Algorithms; Cervix Uteri/pathology; Cervix Uteri/virology; Female; Human Papillomavirus DNA Tests; Humans; Papanicolaou Test; Photography; Sensitivity and Specificity
12.
Comput Methods Programs Biomed ; 107(3): 538-56, 2012 Sep.
Article in English | MEDLINE | ID: mdl-22436890

ABSTRACT

This paper presents an overview of image analysis techniques in the domain of histopathology, specifically for the objective of automated carcinoma detection and classification. As in other biomedical imaging areas such as radiology, many computer-assisted diagnosis (CAD) systems have been implemented to aid histopathologists and clinicians in cancer diagnosis and research; these systems attempt to significantly reduce the labor and subjectivity of traditional manual analysis of histology images. The task of automated histology image analysis is usually not simple due to the unique characteristics of histology imaging, including the variability in image preparation techniques, clinical interpretation protocols, and the complex structures and very large size of the images themselves. In this paper we discuss those characteristics, provide relevant background information about slide preparation and interpretation, and review the application of digital image processing techniques to the field of histology image analysis. In particular, emphasis is given to state-of-the-art image segmentation methods for feature extraction and disease classification. Four major carcinomas, of the cervix, prostate, breast, and lung, are selected to illustrate the functions and capabilities of existing CAD systems.


Subject(s)
Carcinoma/diagnosis; Diagnosis, Computer-Assisted/methods; Early Detection of Cancer/methods; Histological Techniques; Image Processing, Computer-Assisted/methods; Algorithms; Automation; Bayes Theorem; Breast Neoplasms/diagnosis; Electronic Data Processing; Female; Humans; Lung Neoplasms/diagnosis; Male; Microscopy, Electron, Scanning/methods; Microscopy, Electron, Transmission/methods; Prostatic Neoplasms/diagnosis; Tomography, X-Ray Computed; Uterine Cervical Neoplasms/diagnosis
13.
AMIA Annu Symp Proc ; 2012: 1023-9, 2012.
Article in English | MEDLINE | ID: mdl-23304378

ABSTRACT

Effective capability to search biomedical articles based on the visual properties of article images may significantly augment information retrieval in the future. In this paper, we present a new method to classify the window setting types of brain CT images. Windowing is a technique frequently used in the evaluation of CT scans to enhance contrast for the particular tissue or abnormality type being evaluated. In particular, it provides radiologists with an enhanced view of certain types of cranial abnormalities, such as skull lesions and bone dysplasia, which are usually examined using the "bone window" setting and illustrated in biomedical articles using "bone window" images. Due to the inherent large variation of images among articles, it is important that the proposed method be robust. Our algorithm attained 90% accuracy in classifying images as bone window or non-bone window on a 210-image dataset.


Subject(s)
Algorithms; Brain/diagnostic imaging; Skull/diagnostic imaging; Tomography, X-Ray Computed/classification; Brain/pathology; Humans; Skull/pathology
14.
Comput Med Imaging Graph ; 34(8): 593-604, 2010 Dec.
Article in English | MEDLINE | ID: mdl-20510585

ABSTRACT

Segmentation is a fundamental component of many medical image-processing applications, and it has long been recognized as a challenging problem. In this paper, we report our research and development efforts on analyzing and extracting clinically meaningful regions from uterine cervix images in a large database created for the study of cervical cancer. In addition to proposing new algorithms, we also focus on developing open source tools in synchrony with the research objectives. These efforts have resulted in three Web-accessible tools that address three important and interrelated sub-topics in medical image segmentation: the Boundary Marking Tool (BMT), the Cervigram Segmentation Tool (CST), and the Multi-Observer Segmentation Evaluation System (MOSES). The BMT is for manual segmentation, typically to collect "ground truth" image regions from medical experts. The CST is for automatic segmentation, and MOSES is for segmentation evaluation. These tools are designed as a unified set in which data can be conveniently exchanged. They have value not only for improving the reliability and accuracy of algorithms for uterine cervix image segmentation, but also for promoting collaboration between biomedical experts and engineers, which is crucial to medical image-processing applications. Although the CST is designed for the unique characteristics of cervigrams, the BMT and MOSES are very general and extensible, and can be easily adapted to other biomedical image collections.


Subject(s)
Cervix Uteri/diagnostic imaging; Image Processing, Computer-Assisted/methods; Cervix Uteri/anatomy & histology; Female; Humans; Internet; Radiography; Software; User-Computer Interface
15.
Methods Inf Med ; 48(4): 371-80, 2009.
Article in English | MEDLINE | ID: mdl-19621115

ABSTRACT

OBJECTIVES: An increasing number of articles are published electronically in the scientific literature, but access is limited to alphanumerical search on title, author, or abstract, and may disregard numerous figures. In this paper, we estimate the benefits of using content-based image retrieval (CBIR) on article figures to augment traditional access to articles. METHODS: We selected four high-impact journals from the Journal Citations Report (JCR) 2005. Figures were automatically extracted from the PDF article files, and manually classified on their content and number of sub-figure panels. We make a quantitative estimate by projecting from data from the Cross-Language Evaluation Forum (ImageCLEF) campaigns, and qualitatively validate it through experiments using the Image Retrieval in Medical Applications (IRMA) project. RESULTS: Based on 2077 articles with 11,753 pages, 4493 figures, and 11,238 individual images, the predicted accuracy for article retrieval may reach 97.08%. CONCLUSIONS: Therefore, CBIR potentially has a high impact in medical literature search and retrieval.


Subject(s)
Databases, Bibliographic; Diagnostic Imaging; Information Storage and Retrieval; Internet; Humans
16.
Int J Med Inform ; 78 Suppl 1: S13-24, 2009 Apr.
Article in English | MEDLINE | ID: mdl-18996737

ABSTRACT

PURPOSE: With the increasing use of images in disease research, education, and clinical medicine, the need for methods that effectively archive, query, and retrieve these images by their content is underscored. This paper describes the implementation of a Web-based retrieval system called SPIRS (Spine Pathology & Image Retrieval System), which permits exploration of a large biomedical database of digitized spine X-ray images and data from a national health survey using a combination of visual and textual queries. METHODS: SPIRS is a generalizable framework that consists of four components: a client applet, a gateway, an indexing and retrieval system, and a database of images and associated text data. The prototype system is demonstrated using text and imaging data collected as part of the second U.S. National Health and Nutrition Examination Survey (NHANES II). Users search the image data by providing a sketch of the vertebral outline or selecting an example vertebral image and some relevant text parameters. Pertinent pathology on the image/sketch can be annotated and weighted to indicate importance. RESULTS: During the course of development, we explored different algorithms to perform functions such as segmentation, indexing, and retrieval. Each algorithm was tested individually and then implemented as part of SPIRS. To evaluate the overall system, we first tested the system's ability to return similar vertebral shapes from the database given a query shape. Initial evaluations using visual queries only (no text) have shown that the system achieves up to 68% accuracy in finding images in the database that exhibit similar abnormality type and severity. Relevance feedback mechanisms have been shown to increase accuracy by an additional 22% after three iterations. 
While we primarily demonstrate this system in the context of retrieving vertebral shape, our framework has also been adapted to search a collection of 100,000 uterine cervix images to study the progression of cervical cancer. CONCLUSIONS: SPIRS is automated, easily accessible, and integratable with other complementary information retrieval systems. The system supports the ability for users to intuitively query large amounts of imaging data by providing visual examples and text keywords and has beneficial implications in the areas of research, education, and patient care.
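Relevance feedback of the kind reported in the results is classically implemented with the Rocchio update; this generic sketch is illustrative only and is not necessarily the mechanism SPIRS actually uses:

```python
def rocchio(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    """Classic Rocchio update: move the query vector toward the centroid
    of results the user marked relevant, and away from non-relevant ones."""
    dim = len(query)

    def centroid(vecs):
        if not vecs:
            return [0.0] * dim
        return [sum(v[d] for v in vecs) / len(vecs) for d in range(dim)]

    rel, nonrel = centroid(relevant), centroid(nonrelevant)
    return [alpha * query[d] + beta * rel[d] - gamma * nonrel[d]
            for d in range(dim)]

# toy 2-D shape-feature query refined by one round of user feedback
updated = rocchio([1.0, 0.0], relevant=[[0.0, 2.0]], nonrelevant=[[2.0, 0.0]])
```

Iterating this update is what allows retrieval accuracy to climb over successive feedback rounds, as reported above.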


Subject(s)
Information Storage and Retrieval; Internet; Radiology Information Systems; Algorithms; Humans; Nutrition Surveys
17.
Int J Healthc Inf Syst Inform ; 4(1): 1-16, 2009 Jan 01.
Article in English | MEDLINE | ID: mdl-20523757

ABSTRACT

Content-based image retrieval (CBIR) technology has been proposed to benefit not only the management of increasingly large image collections, but also to aid clinical care, biomedical research, and education. Based on a literature review, we conclude that there is widespread enthusiasm for CBIR in the engineering research community, but the application of this technology to solve practical medical problems is a goal yet to be realized. Furthermore, we highlight "gaps" between desired CBIR system functionality and what has been achieved to date, present for illustration a comparative analysis of four state-of-the-art CBIR implementations using the gap approach, and suggest that high-priority gaps to be overcome lie in CBIR interfaces and functionality that better serve the clinical and biomedical research communities.

18.
Stud Health Technol Inform ; 129(Pt 1): 188-92, 2007.
Article in English | MEDLINE | ID: mdl-17911704

ABSTRACT

With the increasing use of medical images in clinical medicine, disease research, and education, the need for methods that effectively archive, query, and retrieve these images by their content is underscored. This paper presents the implementation of a Web-based retrieval system called SPIRS (Spine Pathology & Image Retrieval System) at the U.S. National Library of Medicine that demonstrates recent developments in shape representation and retrieval from a large dataset of 17,000 digitized X-ray images of the spine and associated text records. Users can search these images by providing a sketch of the vertebral outline or selecting an example vertebral image and some relevant text parameters. Pertinent pathology on the image/sketch can be annotated and weighted to indicate importance. This hybrid text-image query yields images containing similar vertebrae along with relevant fields from associated text records, which allows users to examine the pathologies of vertebral abnormalities. Initial experiments with SPIRS have demonstrated the potential for this system, particularly on a large dataset of clinical images.
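A hybrid text-image query of the kind described above has to fold both evidence sources into a single ranking. One simple scheme is a weighted combination of shape distance and agreement on the query's text fields; the sketch below is a hypothetical illustration of that idea, not SPIRS's actual scoring function, and the field names are invented rather than taken from the NHANES II record schema.

```python
# Hybrid text-image ranking sketch (illustrative; field names are hypothetical).

def shape_distance(a, b):
    """Euclidean distance between two shape descriptors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def hybrid_score(query_shape, query_text, record, w_shape=0.7, w_text=0.3):
    """Lower is better: weighted shape distance minus a bonus for each
    text field that matches the query."""
    d = shape_distance(query_shape, record["shape"])
    matches = sum(1 for k, v in query_text.items() if record["text"].get(k) == v)
    frac = matches / len(query_text) if query_text else 0.0
    return w_shape * d - w_text * frac

records = [
    {"id": "A", "shape": [0.2, 0.4], "text": {"severity": "moderate", "region": "lumbar"}},
    {"id": "B", "shape": [0.25, 0.38], "text": {"severity": "slight", "region": "cervical"}},
]
q_shape = [0.22, 0.39]
q_text = {"severity": "moderate", "region": "lumbar"}
ranked = sorted(records, key=lambda r: hybrid_score(q_shape, q_text, r))
print([r["id"] for r in ranked])
```

The weights play the same role as the user-assigned importance weights mentioned in the abstract: raising `w_text` makes matching clinical fields dominate over geometric similarity.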


Subject(s)
Diagnostic Imaging, Information Storage and Retrieval, Radiology Information Systems, Abstracting and Indexing, Algorithms, Databases as Topic, Internet, Radiographic Image Interpretation, Computer-Assisted, Spine/diagnostic imaging
19.
Stud Health Technol Inform ; 129(Pt 1): 493-7, 2007.
Article in English | MEDLINE | ID: mdl-17911766

ABSTRACT

There is a significant increase in the use of medical images in clinical medicine, disease research, and education. While the literature lists several successful systems for content-based image retrieval and image management methods, they have been unable to make significant inroads in routine medical informatics. This can be attributed to the following: (i) the challenging nature of medical images, (ii) need for specialized methods specific to each image type and detail, (iii) lack of advances in image indexing methods, and (iv) lack of a uniform data and resource exchange framework between complementary systems. Most systems tend to focus on varying degrees of the first two items, making them very versatile in a small sampling of the variety of medical images but unable to share their strengths. This paper proposes to overcome these shortcomings by defining a data and resource exchange framework using open standards and software to develop geographically distributed toolkits. As proof-of-concept, we describe the coupling of two complementary geographically separated systems: the IRMA system at Aachen University of Technology in Germany, and the SPIRS system at the U.S. National Library of Medicine in the United States of America.
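Coupling two geographically separated systems of this kind hinges on a system-neutral message format: one site packages a query's features, the other parses the message and answers with a ranked list. The abstract does not give the framework's actual schema, so the sketch below uses a hypothetical JSON message layout purely to illustrate the exchange pattern.

```python
# Sketch of a system-neutral exchange message for coupling two retrieval
# services, e.g. feature extraction on one site and ranking on another.
# The message layout is hypothetical; the paper's open-standards schema
# is not given in the abstract.
import json

def make_request(query_id, feature_type, vector):
    """Local side: serialize a query's features into a portable message."""
    return json.dumps({"query_id": query_id,
                       "feature": {"type": feature_type, "values": vector}})

def handle_request(message, index):
    """Remote side: parse the message, rank its own index by distance,
    and serialize a reply."""
    req = json.loads(message)
    q = req["feature"]["values"]
    dist = lambda v: sum((x - y) ** 2 for x, y in zip(q, v)) ** 0.5
    ranked = sorted(index, key=lambda item: dist(item[1]))
    return json.dumps({"query_id": req["query_id"],
                       "results": [label for label, _ in ranked]})

# Toy remote index of (image label, descriptor) pairs.
index = [("img-2", [0.8, 0.1]), ("img-1", [0.1, 0.2])]
reply = json.loads(handle_request(make_request("q7", "shape", [0.12, 0.18]), index))
print(reply["results"])
```

Because neither side depends on the other's internals, either system (feature extractor or ranker) can be swapped out, which is the point of a uniform exchange framework.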


Subject(s)
Computer Communication Networks, Diagnostic Imaging, Information Storage and Retrieval, Radiology Information Systems, Access to Information, Computer Systems, Germany, Humans, Internet, Medical Informatics Applications, Software, United States
20.
AMIA Annu Symp Proc ; : 826-30, 2007 Oct 11.
Article in English | MEDLINE | ID: mdl-18693952

ABSTRACT

The National Library of Medicine (NLM) and the National Cancer Institute (NCI) are creating a digital archive of 100,000 cervicographic images and clinical and diagnostic data obtained through two major longitudinal studies. In addition to developing tools for Web access to these data, we are conducting research in Content-Based Image Retrieval (CBIR) techniques for retrieving visually similar and pathologically relevant images. The resulting system of tools is expected to greatly benefit medical education and research into uterine cervical cancer, which is the second most common cancer affecting women worldwide. Our current prototype system with fundamental CBIR functions operates on a small test subset of images and retrieves relevant cervix images containing tissue regions similar in color, texture, size, and/or location to a query image region marked by the user. Initial average precision for retrieval by color is 52% for acetowhite lesions and 64.2% for the columnar epithelium.
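Retrieval by color of a user-marked region is typically done by comparing color histograms of the query region against histograms of candidate regions. The sketch below illustrates that general technique with histogram intersection; it is not the NLM/NCI prototype's actual feature pipeline, and the "regions" are synthetic pixel lists standing in for acetowhite-like and columnar-epithelium-like tissue.

```python
# Region retrieval by color: histogram intersection sketch (illustrative;
# the prototype's actual color features are not specified in the abstract).

def histogram(pixels, bins=4):
    """Coarse per-channel RGB histogram, normalized to sum to 1."""
    h = [0] * (bins * 3)
    for r, g, b in pixels:
        for c, v in enumerate((r, g, b)):
            h[c * bins + min(v * bins // 256, bins - 1)] += 1
    total = 3 * len(pixels)
    return [x / total for x in h]

def intersection(h1, h2):
    """Histogram intersection similarity in [0, 1]; 1 means identical."""
    return sum(min(a, b) for a, b in zip(h1, h2))

# Synthetic regions: pale pixels (acetowhite-like) vs. reddish pixels
# (columnar-epithelium-like).
white_region = [(250, 245, 240)] * 50
red_region = [(200, 60, 70)] * 50
query = [(245, 240, 235)] * 50  # user-marked pale region

candidates = {"white": histogram(white_region), "red": histogram(red_region)}
best = max(candidates, key=lambda k: intersection(histogram(query), candidates[k]))
print(best)
```

Ranking every stored region by this similarity and checking how many of the top results share the query's tissue label is exactly what the reported average-precision figures measure.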


Subject(s)
Cervix Uteri/anatomy & histology, Databases as Topic, Information Storage and Retrieval/methods, Archives, Cervix Uteri/pathology, Computer Graphics, Female, Humans, Image Processing, Computer-Assisted, Papillomavirus Infections/pathology, User-Computer Interface, Uterine Cervical Neoplasms/pathology