Results 1 - 4 of 4
1.
Sci Data; 8(1): 151, 2021 06 10.
Article in English | MEDLINE | ID: mdl-34112812

ABSTRACT

Amidst the current health crisis and social distancing, telemedicine has become an important part of mainstream healthcare, and building and deploying computational tools that support screening more efficiently is an increasing medical priority. The early identification of cervical cancer precursor lesions by the Pap smear test can identify candidates for subsequent treatment. However, one of the main challenges is the accuracy of the conventional method, which is often subject to high false-negative rates. While machine learning has been highlighted as a way to reduce the limitations of the test, the absence of high-quality curated datasets has prevented the development of strategies to improve cervical cancer screening. The Center for Recognition and Inspection of Cells (CRIC) platform enabled the creation of the CRIC Cervix collection, currently comprising 400 images (1,376 × 1,020 pixels) curated from conventional Pap smears, with manual classification of 11,534 cells. This collection has the potential to advance current efforts in training and testing machine learning algorithms that automate tasks in the routine cytopathological analysis performed by laboratories.


Subjects
Cervix Uteri/pathology , Internet Use , Papanicolaou Test , Uterine Cervical Neoplasms/pathology , Early Detection of Cancer , Female , Humans , Machine Learning
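A collection of manually classified cells like this one lends itself to simple tabular handling before any model training. A minimal sketch of how per-cell annotations might be organized and summarized (the record fields and Bethesda label set here are illustrative assumptions, not the CRIC platform's actual schema):

```python
from collections import Counter
from dataclasses import dataclass

# Hypothetical per-cell record for a CRIC-style collection; field
# names and labels are illustrative, not the platform's schema.
@dataclass(frozen=True)
class CellAnnotation:
    image_id: int        # source Pap smear image in the collection
    x: int               # cell position in the image, pixels
    y: int
    bethesda_class: str  # e.g. "NILM", "ASC-US", "LSIL", "HSIL"

annotations = [
    CellAnnotation(1, 210, 340, "NILM"),
    CellAnnotation(1, 480, 120, "LSIL"),
    CellAnnotation(2, 305, 660, "NILM"),
    CellAnnotation(2, 90, 415, "ASC-US"),
]

def class_distribution(cells):
    """Count cells per class: the first summary one would compute
    before training a classifier on a manually classified collection."""
    return Counter(c.bethesda_class for c in cells)
```

Such a distribution is what reveals the class imbalance (normal cells vastly outnumbering abnormal ones) that any screening classifier trained on the collection must cope with.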
2.
Comput Methods Programs Biomed; 182: 105053, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31521047

ABSTRACT

BACKGROUND AND OBJECTIVES: Saliency refers to the visual perception quality that makes objects in a scene stand out from others and attract attention. While computational saliency models can simulate an expert's visual attention, there is little evidence about how these models perform when used to predict a cytopathologist's eye fixations. Saliency models may be the key to enabling fast object detection on large Pap smear slides under real conditions with noise, artifacts, and cell occlusions. This paper describes how our computational schemes retrieve regions of interest (ROI) of clinical relevance using visual attention models. We also compare the performance of different computed saliency models as part of cell screening tasks, aiming to design a computer-aided diagnosis system that supports cytopathologists. METHOD: We recorded eye fixation maps from cytopathologists at work and compared them with 13 different saliency prediction algorithms, including deep learning models. We developed cell-specific convolutional neural networks (CNNs) to investigate the impact of bottom-up and top-down factors on saliency prediction from real routine exams. By combining the eye-tracking data from pathologists with computed saliency models, we assessed the algorithms' reliability in identifying clinically relevant cells. RESULTS: The proposed cell-specific CNN model outperforms all other saliency prediction methods, particularly regarding the number of false positives. Our algorithm also detects the most clinically relevant cells, which are among the three most salient regions, with accuracy above 98% for all diseases except carcinoma (87%). Bottom-up methods performed satisfactorily, with saliency maps that enabled ROI detection above 75% for carcinoma and 86% for other pathologies.
CONCLUSIONS: Extracting ROIs with our saliency prediction methods enabled ranking the most relevant clinical areas within the image, a viable data reduction strategy to guide automatic analyses of Pap smear slides. Top-down factors for saliency prediction on cell images increase the accuracy of the estimated maps, while bottom-up algorithms proved useful for predicting the cytopathologist's eye fixations, depending on parameters such as the numbers of false positives and negatives. Our contributions are a comparison of 13 state-of-the-art saliency models against cytopathologists' visual attention, and a method that associates the most conspicuous regions with clinically relevant cells.


Subjects
Cervix Uteri/pathology , Deep Learning , Neural Networks, Computer , Female , Humans , Papanicolaou Test
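Comparing a computed saliency map against recorded eye fixations, as in this paper's evaluation, is commonly done with metrics such as Normalized Scanpath Saliency (NSS). A minimal sketch of that metric (the abstract does not state which agreement metrics the authors used, so NSS here is an assumption):

```python
import numpy as np

def normalized_scanpath_saliency(saliency, fixations):
    """Normalized Scanpath Saliency (NSS): z-score the saliency map,
    then average its values at the recorded fixation coordinates.
    Positive values mean the model's salient regions coincide with
    where the observer actually looked; ~0 means chance level."""
    s = (saliency - saliency.mean()) / (saliency.std() + 1e-12)
    rows, cols = zip(*fixations)
    return float(s[list(rows), list(cols)].mean())

# Toy example: a 5x5 map with a single salient peak. A fixation on
# the peak scores high; a fixation on the background scores below 0.
saliency_map = np.zeros((5, 5))
saliency_map[2, 2] = 1.0
```

On real slides the fixation coordinates would come from the eye tracker and the map from one of the 13 evaluated predictors, but the scoring step is the same.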
3.
Comput Med Imaging Graph; 72: 13-21, 2019 03.
Article in English | MEDLINE | ID: mdl-30763802

ABSTRACT

Ninety years after its invention, the Pap test continues to be the most widely used method for the early identification of cervical precancerous lesions. In this test, cytopathologists look for microscopic abnormalities in and around the cells, a task that is time-consuming and prone to human error. This paper introduces computational tools for cytological analysis that incorporate deep learning techniques for cell segmentation. These techniques can process both free-lying cells and clumps of abnormal cells with a high overlapping rate in digitized images of conventional Pap smears. Our methodology employs a preprocessing step that discards images with a low probability of containing abnormal cells, without prior segmentation, and therefore runs faster than existing methods. It also ranks outputs by the likelihood that the images contain abnormal cells. We evaluate our methodology on an image database of conventional Pap smears from real scenarios, with 108 fields of view containing at least one abnormal cell and 86 containing only normal cells, corresponding to millions of cells. Our results show that the proposed approach achieves accurate results (MAP = 0.936), runs faster than existing methods, and is robust to the presence of white blood cells and other contaminants.


Subjects
Deep Learning , Image Processing, Computer-Assisted/methods , Algorithms , Female , Humans , Neural Networks, Computer , Papanicolaou Test , Uterine Cervical Neoplasms/pathology
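The MAP figure reported above scores how well the method ranks abnormal fields of view ahead of normal ones. A minimal sketch of mean average precision over ranked lists of binary relevance flags (the exact query structure of the paper's evaluation is not given in the abstract, so the binary framing is an assumption):

```python
def average_precision(ranked_relevance):
    """Average precision (AP) for one ranked list of binary relevance
    flags (1 = field of view contains an abnormal cell): the mean of
    precision@k over the ranks k at which a relevant item appears."""
    hits, precisions = 0, []
    for k, relevant in enumerate(ranked_relevance, start=1):
        if relevant:
            hits += 1
            precisions.append(hits / k)
    return sum(precisions) / hits if hits else 0.0

def mean_average_precision(ranked_lists):
    """MAP: average precision averaged over several ranked lists."""
    return sum(average_precision(r) for r in ranked_lists) / len(ranked_lists)
```

A perfect ranking (all abnormal images first) gives AP = 1.0; every abnormal image pushed down the list lowers the score, so MAP directly rewards the data-reduction goal of reviewing the most suspicious fields first.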
4.
IEEE J Biomed Health Inform; 21(2): 441-450, 2017 03.
Article in English | MEDLINE | ID: mdl-26800556

ABSTRACT

In this paper, we introduce and evaluate the systems submitted to the first Overlapping Cervical Cytology Image Segmentation Challenge, held in conjunction with the IEEE International Symposium on Biomedical Imaging 2014. This challenge was organized to encourage the development and benchmarking of techniques capable of segmenting individual cells from overlapping cellular clumps in cervical cytology images, a prerequisite for the next generation of computer-aided diagnosis systems for cervical cancer. In particular, these automated systems must detect and accurately segment both the nucleus and the cytoplasm of each cell, even when cells are clumped together and hence partially occluded. This remains an unsolved problem due to the poor contrast of cytoplasm boundaries, the large variation in the size and shape of cells, the presence of debris, and the large degree of cellular overlap. The challenge initially used a database of 16 high-resolution (×40 magnification) images of complex cellular fields of view; the isolated real cells in these images were used to construct a database of 945 synthesized cervical cytology images with varying numbers of cells and degrees of overlap, providing full access to the segmentation ground truth. These synthetic images provided a reliable and comprehensive framework for quantitative evaluation of this segmentation problem. Results from the submitted methods demonstrate that all the methods are effective in segmenting clumps containing at most three cells, with overlap coefficients up to 0.3. This highlights the intrinsic difficulty of the challenge and provides motivation for significant future improvement.


Subjects
Algorithms , Cervix Uteri/cytology , Image Processing, Computer-Assisted/methods , Microscopy/methods , Cervix Uteri/diagnostic imaging , Female , Humans , Papanicolaou Test/methods , Uterine Cervical Neoplasms
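The overlap coefficients and segmentation accuracy discussed above are both defined over binary cell masks. A minimal sketch of two such measures (the challenge's exact evaluation formulas may differ in detail; these are common definitions offered as an illustration):

```python
import numpy as np

def dice(a, b):
    """Dice similarity between two binary masks: a standard measure
    for comparing a predicted nucleus or cytoplasm mask with ground
    truth (1.0 = identical, 0.0 = disjoint)."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def overlap_coefficient(a, b):
    """Pairwise cell overlap: intersection area divided by the smaller
    cell's area, one common way to quantify how occluded a clump is."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return inter / min(a.sum(), b.sum())

# Toy masks: two 4x4 "cells" of 8 pixels each, sharing one 4-pixel row.
cell_a = np.zeros((4, 4), dtype=bool); cell_a[0:2, :] = True
cell_b = np.zeros((4, 4), dtype=bool); cell_b[1:3, :] = True
```

Synthesizing clumps with a controlled overlap coefficient, as the challenge did, is what makes it possible to report that methods remain effective up to coefficients of about 0.3.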