ABSTRACT
Fiber-reinforced ceramic-matrix composites are advanced, temperature-resistant materials with applications in aerospace engineering. Their analysis involves the detection and separation of fibers, embedded in a fiber bed, from an imaged sample. Currently, this is mostly done using semi-supervised techniques. Here, we present an open, automated computational pipeline to detect fibers from a tomographically reconstructed X-ray volume. We apply our pipeline to a non-trivial dataset by Larson et al. To separate the fibers in these samples, we tested four different architectures of convolutional neural networks. When comparing our neural network approach to a semi-supervised one, we obtained Dice and Matthews coefficients reaching up to 98%, showing that these automated approaches can match human-supervised methods, in some cases separating fibers that human-curated algorithms could not find. The software written for this project is open source, released under a permissive license, and can be freely adapted and re-used in other domains.
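As an illustration of the agreement metrics reported above, the sketch below computes the Dice and Matthews coefficients between a predicted fiber mask and a reference mask. It is a minimal, self-contained example, not the authors' released pipeline; the volume shape and the random reference mask are placeholder assumptions.

```python
import numpy as np

def dice_coefficient(pred, ref):
    """Dice similarity between two boolean segmentation masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    return 2.0 * intersection / (pred.sum() + ref.sum())

def matthews_coefficient(pred, ref):
    """Matthews correlation coefficient between two boolean masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    tp = float(np.logical_and(pred, ref).sum())
    tn = float(np.logical_and(~pred, ~ref).sum())
    fp = float(np.logical_and(pred, ~ref).sum())
    fn = float(np.logical_and(~pred, ref).sum())
    denom = np.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

# Placeholder volumes: a random "reference" mask and an identical "prediction".
rng = np.random.default_rng(0)
reference = rng.random((64, 64, 64)) > 0.5
prediction = reference.copy()
print(dice_coefficient(prediction, reference))      # 1.0 for identical masks
print(matthews_coefficient(prediction, reference))  # 1.0 for identical masks
```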
ABSTRACT
Amid the current health crisis and social distancing, telemedicine has become an important part of mainstream healthcare, and building and deploying computational tools that support more efficient screening is an increasing medical priority. The early identification of cervical cancer precursor lesions by the Pap smear test can identify candidates for subsequent treatment. However, one of the main challenges is the accuracy of the conventional method, which is often subject to high rates of false negatives. While machine learning has been highlighted as a way to reduce the limitations of the test, the absence of high-quality curated datasets has prevented the development of strategies to improve cervical cancer screening. The Center for Recognition and Inspection of Cells (CRIC) platform enables the creation of the CRIC Cervix collection, currently with 400 images (1,376 × 1,020 pixels) curated from conventional Pap smears, with manual classification of 11,534 cells. This collection has the potential to advance current efforts in training and testing machine learning algorithms for the automation of tasks that are part of the cytopathological analysis in the routine work of laboratories.
Subjects
Cervix Uteri/pathology, Internet Use, Papanicolaou Test, Uterine Cervical Neoplasms/pathology, Early Detection of Cancer, Female, Humans, Machine Learning
ABSTRACT
BACKGROUND AND OBJECTIVES: Saliency refers to the visual perception quality that makes objects in a scene stand out from others and attract attention. While computational saliency models can simulate the expert's visual attention, there is little evidence about how these models perform when used to predict the cytopathologist's eye fixations. Saliency models may be the key to instrumenting fast object detection on large Pap smear slides under real conditions of noise, artifacts, and cell occlusions. This paper describes how our computational schemes retrieve regions of interest (ROI) of clinical relevance using visual attention models. We also compare the performance of different computed saliency models as part of cell screening tasks, aiming to design a computer-aided diagnosis system that supports cytopathologists. METHOD: We record eye fixation maps from cytopathologists at work and compare them with 13 different saliency prediction algorithms, including deep learning. We develop cell-specific convolutional neural networks (CNN) to investigate the impact of bottom-up and top-down factors on saliency prediction from real routine exams. By combining the eye tracking data from pathologists with computed saliency models, we assess the algorithms' reliability in identifying clinically relevant cells. RESULTS: The proposed cell-specific CNN model outperforms all other saliency prediction methods, particularly regarding the number of false positives. Our algorithm also detects the most clinically relevant cells, which are among the three top salient regions, with accuracy above 98% for all diseases except carcinoma (87%). Bottom-up methods performed satisfactorily, with saliency maps that enabled ROI detection above 75% for carcinoma and 86% for other pathologies. CONCLUSIONS: ROI extraction using our saliency prediction methods enabled ranking the most relevant clinical areas within the image, a viable data reduction strategy to guide automatic analyses of Pap smear slides. Top-down factors for saliency prediction on cell images increase the accuracy of the estimated maps, while bottom-up algorithms proved useful for predicting the cytopathologist's eye fixations, depending on parameters such as the numbers of false positives and negatives. Our contributions are a comparison of 13 state-of-the-art saliency models against cytopathologists' visual attention and a method that associates the most conspicuous regions with clinically relevant cells.
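One common way to compare a computed saliency map with recorded eye fixations is the normalized scanpath saliency (NSS), sketched below; the paper's exact evaluation protocol and its cell-specific CNN are not reproduced here, and the maps used are synthetic stand-ins.

```python
import numpy as np

def nss(saliency_map, fixation_map):
    """Mean standardized saliency value at fixated pixels (higher is better)."""
    s = (saliency_map - saliency_map.mean()) / (saliency_map.std() + 1e-8)
    return s[fixation_map.astype(bool)].mean()

# Synthetic stand-ins: a Gaussian saliency blob and fixations near its center.
yy, xx = np.mgrid[0:256, 0:256]
saliency = np.exp(-((yy - 128) ** 2 + (xx - 160) ** 2) / (2 * 30.0 ** 2))
fixations = np.zeros((256, 256), dtype=bool)
fixations[120:136, 152:168] = True
print(round(float(nss(saliency, fixations)), 2))  # large positive value => agreement
```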
Subjects
Cervix Uteri/pathology, Deep Learning, Neural Networks (Computer), Female, Humans, Papanicolaou Test
ABSTRACT
Ninety years after its invention, the Pap test continues to be the most widely used method for the early identification of cervical precancerous lesions. In this test, cytopathologists look for microscopic abnormalities in and around the cells, a time-consuming task that is prone to human error. This paper introduces computational tools for cytological analysis that incorporate deep learning techniques for cell segmentation. These techniques are capable of processing both free-lying abnormal cells and clumps of abnormal cells with a high overlapping rate from digitized images of conventional Pap smears. Our methodology employs a preprocessing step that discards images with a low probability of containing abnormal cells without prior segmentation and, therefore, performs faster than existing methods. It also ranks outputs based on the likelihood that the images contain abnormal cells. We evaluate our methodology on an image database of conventional Pap smears from real scenarios, with 108 fields of view containing at least one abnormal cell and 86 containing only normal cells, corresponding to millions of cells. Our results show that the proposed approach achieves accurate results (MAP = 0.936), runs faster than existing methods, and is robust to the presence of white blood cells and other contaminants.
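The pre-screening and ranking idea described above can be sketched as follows: score each field of view with a lightweight classifier, discard images whose score falls below a threshold before any segmentation, and rank the remainder by likelihood. The scoring function and threshold below are placeholders, not the authors' trained model.

```python
import numpy as np

def prescreen(images, score_fn, threshold=0.2):
    """Return (index, score) pairs of retained images, most suspicious first."""
    scores = np.array([score_fn(img) for img in images])
    keep = np.flatnonzero(scores >= threshold)      # discard unlikely fields of view
    order = keep[np.argsort(scores[keep])[::-1]]    # rank by likelihood, descending
    return [(int(i), float(scores[i])) for i in order]

# Toy usage with a stand-in scorer; real use would plug in a trained CNN.
rng = np.random.default_rng(1)
fields_of_view = [rng.random((128, 128)) for _ in range(5)]
ranking = prescreen(fields_of_view, score_fn=lambda img: 1.0 - float(img.mean()))
print(ranking)
```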
Subjects
Deep Learning, Computer-Assisted Image Processing/methods, Algorithms, Female, Humans, Neural Networks (Computer), Papanicolaou Test, Uterine Cervical Neoplasms/pathology
ABSTRACT
BACKGROUND: Immunofluorescence (IF) plays a major role in quantifying protein expression in situ and understanding cell function. It is widely applied in assessing disease mechanisms and in drug discovery research. Automation of IF analysis can transform studies using experimental cell models. However, IF analysis of postmortem human tissue relies mostly on manual interaction, which is often low-throughput and prone to error, leading to low inter- and intra-observer reproducibility. Human postmortem brain samples challenge neuroscientists because of the high level of autofluorescence caused by accumulation of lipofuscin pigment during aging, which hinders systematic analyses. We propose a method for automating cell counting and classification in IF microscopy of human postmortem brains. Our algorithm speeds up the quantification task while improving reproducibility. NEW METHOD: Dictionary learning and sparse coding allow for constructing improved cell representations from IF images. These representations are input to detection and segmentation methods. Classification is performed by means of color distances between cells and a learned set. RESULTS: Our method successfully detected and classified cells in 49 human brain images. We evaluated our results in terms of true positives, false positives, false negatives, precision, recall, false positive rate, and F1 score. We also measured user experience and time saved compared to manual counting. COMPARISON WITH EXISTING METHODS: We compared our results to four open-access IF-based cell-counting tools available in the literature. Our method showed improved accuracy for all data samples. CONCLUSION: The proposed method satisfactorily detects and classifies cells from human postmortem brain IF images, with potential to be generalized to other counting tasks.
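A minimal sketch of the dictionary-learning and sparse-coding step, using scikit-learn on small image patches; the patch size, number of atoms, sparsity level, and the synthetic image are illustrative assumptions, and the downstream color-distance classifier is not shown.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d

# Stand-in for one immunofluorescence channel; real use would load microscopy data.
rng = np.random.default_rng(0)
image = rng.random((256, 256))

# Extract small patches and remove their mean intensity.
patches = extract_patches_2d(image, (8, 8), max_patches=2000, random_state=0)
X = patches.reshape(len(patches), -1)
X = X - X.mean(axis=1, keepdims=True)

# Learn a dictionary of 64 atoms and sparse-code each patch with at most 5 atoms.
dico = MiniBatchDictionaryLearning(n_components=64, batch_size=128,
                                   transform_algorithm="omp",
                                   transform_n_nonzero_coefs=5, random_state=0)
codes = dico.fit(X).transform(X)
print(codes.shape, float((codes != 0).mean()))  # sparse representation per patch
```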
Subjects
Brain/cytology, Computer-Assisted Image Processing/methods, Machine Learning, Fluorescence Microscopy/methods, Automated Pattern Recognition/methods, Alzheimer Disease/pathology, Cell Count/methods, Immunofluorescence/methods, Humans, Reproducibility of Results
ABSTRACT
Automated retinal screening relies on vasculature segmentation before the identification of other anatomical structures of the retina. Vasculature extraction can also serve as input to image quality ranking, neovascularization detection, and image registration. The extensive related literature often excludes the inherent heterogeneity of ophthalmic clinical images. The contribution of this paper is an algorithm that uses front propagation to segment the vessel network, adding a penalty to the wait queue of the fast marching method that minimizes leakage of the evolving boundary. The algorithm requires no manual labeling of seeds and a minimal number of parameters, and it is capable of segmenting color ocular fundus images in real scenarios, where multi-ethnicity and brightness variations are part of the problem.
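A simplified front-propagation sketch in the spirit of the algorithm described above: a priority-queue traversal grows the segmentation from a seed, and the traversal cost penalizes crossing bright (non-vessel) pixels so the evolving boundary is discouraged from leaking. The exact wait-queue penalty, the automatic seed selection, and the color handling of the paper are not reproduced here.

```python
import heapq
import numpy as np

def propagate(cost, seeds, max_cost):
    """Return a boolean mask of pixels reachable from the seeds within max_cost."""
    arrival = np.full(cost.shape, np.inf)
    heap = [(0.0, s) for s in seeds]
    for _, s in heap:
        arrival[s] = 0.0
    heapq.heapify(heap)
    while heap:
        t, (r, c) = heapq.heappop(heap)
        if t > arrival[r, c]:
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < cost.shape[0] and 0 <= cc < cost.shape[1]:
                nt = t + cost[rr, cc]
                if nt < arrival[rr, cc] and nt <= max_cost:
                    arrival[rr, cc] = nt
                    heapq.heappush(heap, (nt, (rr, cc)))
    return np.isfinite(arrival)

# Toy fundus-like image: a dark vessel on a bright background.
img = np.ones((64, 64))
img[30:34, :] = 0.1                      # synthetic vessel
cost = img + 0.01                        # bright pixels are expensive to cross
mask = propagate(cost, seeds=[(32, 0)], max_cost=5.0)
print(int(mask.sum()), "pixels segmented")
```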
Subjects
Algorithms, Colorimetry/methods, Diabetic Retinopathy/pathology, Computer-Assisted Image Interpretation/methods, Automated Pattern Recognition/methods, Retinal Vessels/pathology, Retinoscopy/methods, Artificial Intelligence, Humans, Image Enhancement/methods, Reproducibility of Results, Sensitivity and Specificity
ABSTRACT
Wind field analysis from synthetic aperture radar images allows the estimation of wind direction and speed based on image descriptors. In this paper, we propose a framework to automate wind direction retrieval based on wavelet decomposition associated with spectral processing. We extend existing undecimated wavelet transform approaches by including the à trous algorithm with a B3 spline scaling function, in addition to other wavelet bases such as Gabor and Mexican hat. The purpose is to extract more reliable directional information when wind speed values range from 5 to 10 m s⁻¹. Using C-band empirical models associated with the estimated directional information, we calculate local wind speed values and compare our results with QuikSCAT scatterometer data. The proposed approach has potential application in the evaluation of oil spills and wind farms.
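A minimal sketch of one à trous decomposition level with the B3 spline kernel, followed by a spectral estimate of the dominant orientation in the detail plane; the kernel normalization, the single-level decomposition, and the convention relating spectral orientation to wind direction are assumptions for illustration, not the paper's full framework.

```python
import numpy as np
from scipy.ndimage import convolve1d

def a_trous_level(image, level):
    """One stationary-wavelet level: return (smooth, detail) planes."""
    base = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0   # B3 spline taps
    kernel = np.zeros((len(base) - 1) * 2 ** level + 1)
    kernel[:: 2 ** level] = base                         # insert 'holes' between taps
    smooth = convolve1d(convolve1d(image, kernel, axis=0, mode="reflect"),
                        kernel, axis=1, mode="reflect")
    return smooth, image - smooth

def dominant_orientation(plane):
    """Dominant orientation (degrees, modulo 180) of the plane's power spectrum."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(plane))) ** 2
    h, w = plane.shape
    fy, fx = np.mgrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
    spectrum[h // 2, w // 2] = 0.0                       # remove the DC component
    idx = np.argmax(spectrum)
    return np.degrees(np.arctan2(fy.ravel()[idx], fx.ravel()[idx])) % 180.0

# Toy SAR-like field with streaks oriented at 30 degrees.
yy, xx = np.mgrid[0:256, 0:256]
sar = np.sin(2 * np.pi * (xx * np.cos(np.radians(30)) +
                          yy * np.sin(np.radians(30))) / 16.0)
smooth, detail = a_trous_level(sar, level=1)
print(round(dominant_orientation(detail), 1))            # close to 30.0
```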