Results 1 - 3 of 3

1.
J Microsc; 279(2): 98-113, 2020 Aug.
Article in English | MEDLINE | ID: mdl-32406521

ABSTRACT

This paper addresses the problem of creating a large quantity of high-quality training segmentation masks from scanning electron microscopy (SEM) images. The images are acquired from concrete samples that exhibit progressive amounts of degradation resulting from alkali-silica reaction (ASR), a leading cause of deterioration, cracking and loss of capacity in much of the nation's infrastructure. The target damage classes in concrete SEM images are defined as paste damage, aggregate damage, air voids and no damage. We approached the SEM segmentation problem by applying convolutional neural network (CNN)-based methods to predict the damage class due to ASR for each image pixel. The challenges in using CNN-based methods lie in preparing large numbers of high-quality labelled training images with limited human resources. To address these challenges, we designed damage- and context-assisted approaches that lower the demand on human resources. We then evaluated the accuracy of CNN-based segmentation methods using the datasets prepared with these two approaches.

LAY DESCRIPTION: This work concerns automated segmentation of scanning electron microscopy (SEM) images taken from core and prism samples of concrete. The segmentation must detect several damage classes in each image in order to understand the properties of concrete structures over time. The segmentation problem is approached with an artificial intelligence (AI) based model. The training data for the AI model are created using damage- and context-assisted approaches that lower the demand on human resources. All training data, together with a web-based validation system for scoring segmented images, are available at https://isg.nist.gov/deepzoomweb/data/concreteScoring.
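
As an illustration only, the sketch below shows a minimal per-pixel, four-class segmentation model of the kind the abstract describes; it is not the authors' architecture. The network shape, PyTorch as the framework, and the class ordering (no damage, paste damage, aggregate damage, air void) are assumptions made for this example.

    import torch
    import torch.nn as nn

    # Hypothetical class order: 0 = no damage, 1 = paste damage,
    # 2 = aggregate damage, 3 = air void.
    NUM_CLASSES = 4

    class TinySegNet(nn.Module):
        """Small encoder-decoder mapping a grayscale SEM tile to per-pixel class logits."""
        def __init__(self, num_classes=NUM_CLASSES):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),                  # halve spatial resolution
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            )
            self.decoder = nn.Sequential(
                nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
                nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, num_classes, 1),    # 1x1 conv -> per-pixel logits
            )

        def forward(self, x):
            return self.decoder(self.encoder(x))

    model = TinySegNet()
    tile = torch.randn(1, 1, 256, 256)                   # one grayscale SEM tile
    mask = torch.randint(0, NUM_CLASSES, (1, 256, 256))  # stand-in training mask
    logits = model(tile)                                 # shape (1, 4, 256, 256)
    loss = nn.CrossEntropyLoss()(logits, mask)           # real training uses the prepared masks
    pred = logits.argmax(dim=1)                          # predicted damage class per pixel

The paper's contribution is the damage- and context-assisted preparation of the training masks themselves; any segmentation network of this general shape could consume them.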

2.
J Microsc; 260(1): 86-99, 2015 Oct.
Article in English | MEDLINE | ID: mdl-26046924

ABSTRACT

New microscopy technologies are enabling image acquisition of terabyte-sized data sets consisting of hundreds of thousands of images. In order to retrieve and analyse the biological information in these large data sets, segmentation is needed to detect the regions containing cells or cell colonies. Our work with hundreds of large images (each 21,000 × 21,000 pixels) requires a segmentation method that: (1) yields high segmentation accuracy, (2) is applicable to multiple cell lines with various densities of cells and cell colonies, and to several imaging modalities, (3) can process large data sets in a timely manner, (4) has a low memory footprint and (5) has a small number of user-set parameters that do not require adjustment during the segmentation of large image sets. None of the currently available segmentation methods meets all these requirements. Segmentation based on image gradient thresholding is fast and has a low memory footprint. However, existing techniques that automate the selection of the gradient image threshold do not work across image modalities, multiple cell lines and a wide range of foreground/background densities (requirement 2), and all fail the requirement for robust parameters that do not require readjustment over time (requirement 5). We present a novel, empirically derived image gradient threshold selection method for separating foreground and background pixels in an image that meets all the requirements listed above. We quantify the difference between our approach and existing ones in terms of accuracy, execution speed, memory usage and number of adjustable parameters on a reference data set. This reference data set consists of 501 validation images with manually determined segmentations and image sizes ranging from 0.36 megapixels to 850 megapixels. It includes four different cell lines and two image modalities: phase contrast and fluorescent. Our new technique, called Empirical Gradient Threshold (EGT), is derived from this reference data set with a 10-fold cross-validation method. EGT segments cells or colonies with resulting Dice accuracy index measurements above 0.92 for all cross-validation data sets. EGT results have also been visually verified on a much larger data set that includes bright field and Differential Interference Contrast (DIC) images, 16 cell lines and 61 time-sequence data sets, for a total of 17,479 images. This method is implemented as an open-source ImageJ plugin as well as a standalone executable that can be downloaded from https://isg.nist.gov/.
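
For intuition, here is a simplified gradient-threshold segmentation in the spirit of EGT. The published method derives its threshold from an empirically fitted model of the gradient histogram; this sketch substitutes a plain percentile rule, so the percentile value and the morphological cleanup steps are assumptions, not the paper's formula. The Dice index shown is the accuracy measure quoted in the abstract.

    import numpy as np
    from scipy import ndimage

    def gradient_threshold_segment(image, percentile=90.0):
        """Segment foreground (cells/colonies) by thresholding gradient magnitude."""
        image = image.astype(np.float64)
        gx = ndimage.sobel(image, axis=1)
        gy = ndimage.sobel(image, axis=0)
        grad = np.hypot(gx, gy)                            # gradient magnitude
        thresh = np.percentile(grad, percentile)           # stand-in for the EGT rule
        mask = grad > thresh                               # high-gradient pixels = edges
        mask = ndimage.binary_fill_holes(mask)             # close colony interiors
        mask = ndimage.binary_opening(mask, iterations=2)  # drop small speckle
        return mask

    def dice_index(pred, truth):
        """Dice accuracy index used to score segmentations against manual masks."""
        pred, truth = pred.astype(bool), truth.astype(bool)
        inter = np.logical_and(pred, truth).sum()
        return 2.0 * inter / (pred.sum() + truth.sum())

Because the per-pixel work is a gradient, a histogram and simple morphology, this family of methods keeps both the execution time and the memory footprint low, which is what makes it viable for 21,000 × 21,000-pixel images.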


Subject(s)
Image Processing, Computer-Assisted; Myocytes, Smooth Muscle/ultrastructure; Pluripotent Stem Cells/ultrastructure; Animals; Cell Line; Datasets as Topic; Humans; Image Processing, Computer-Assisted/instrumentation; Image Processing, Computer-Assisted/methods; Mice; Microscopy, Fluorescence/methods; Microscopy, Phase-Contrast/methods; Models, Theoretical; Muscle, Smooth, Vascular/cytology; NIH 3T3 Cells
3.
J Microsc; 257(3): 226-37, 2015 Mar.
Article in English | MEDLINE | ID: mdl-25623496

ABSTRACT

Several computational challenges associated with large-scale background image correction of terabyte-sized fluorescent images are discussed and analysed in this paper. Dark current, flat-field and background correction models are applied over a mosaic of hundreds of spatially overlapping fields of view (FOVs) taken over the course of several days, during which the background diminishes as cell colonies grow. The motivation for our work comes from the need to quantify the dynamics of OCT-4 gene expression via a fluorescent reporter in human stem cell colonies. Our approach to background correction is formulated as an optimization problem over two image partitioning schemes and four analytical correction models. The optimization objective function is evaluated in terms of (1) the minimum root mean square (RMS) error remaining after image correction, (2) the maximum signal-to-noise ratio (SNR) reached after downsampling and (3) the minimum execution time. Based on the analyses with measured dark current noise and flat-field images, the optimal GFP background correction is obtained by partitioning the data into a set of submosaic images and fitting a polynomial surface background model to each. The resulting corrected image is characterized by an RMS of about 8 and, after 4 × 4 downsampling, an SNR value above 5 by the Rose criterion. The new technique produces an image with half the RMS value and double the SNR value compared to an approach that assumes a constant background throughout the mosaic. We show that the background noise in terabyte-sized fluorescent image mosaics can be corrected computationally with the optimized triplet (data partition, model, SNR-driven downsampling) such that the total RMS value from background noise does not exceed the magnitude of the measured dark current noise. In this case, the dark current noise serves as a benchmark for the lowest noise level that an imaging system can achieve. Previous fluorescent image background correction methods were designed for a single FOV and have not been applied to terabyte-sized images with large mosaic FOVs, low SNR and diminishing access to background information over time as cell colonies grow to span multiple FOVs. The code is available as open source from https://isg.nist.gov/.
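
To make the correction pipeline concrete, the sketch below fits a polynomial surface background to one submosaic image by least squares, applies dark-current and flat-field correction first, and reports the residual RMS. The degree-2 surface, the helper names and the precomputed background mask are assumptions for illustration; the paper's full method additionally optimizes the data partition and the SNR-driven downsampling.

    import numpy as np

    def fit_polynomial_background(img, bg_mask, degree=2):
        """Fit z = P(x, y) over pixels flagged as background; return the fitted surface."""
        yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]]
        # Design matrix with all monomials x^i * y^j such that i + j <= degree.
        terms = [(i, j) for i in range(degree + 1)
                        for j in range(degree + 1 - i)]
        A = np.stack([(xx[bg_mask] ** i) * (yy[bg_mask] ** j) for i, j in terms],
                     axis=1)
        coef, *_ = np.linalg.lstsq(A, img[bg_mask].astype(np.float64), rcond=None)
        return sum(c * (xx ** i) * (yy ** j) for c, (i, j) in zip(coef, terms))

    def correct_and_score(img, dark, flat, bg_mask):
        """Dark-current/flat-field correction, then polynomial background subtraction."""
        img = (img.astype(np.float64) - dark) / np.maximum(flat, 1e-6)
        background = fit_polynomial_background(img, bg_mask)
        corrected = img - background
        rms = np.sqrt(np.mean(corrected[bg_mask] ** 2))  # residual background RMS
        return corrected, rms

Running this per submosaic rather than once over the whole mosaic is what lets the surface track the spatially varying background, which is why the submosaic partition wins in the paper's optimization.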


Subject(s)
Image Enhancement/methods; Image Processing, Computer-Assisted/methods; Microscopy, Fluorescence/methods; Time-Lapse Imaging/methods; Gene Expression Regulation; Humans; Octamer Transcription Factor-3/metabolism; Stem Cells