Results 1 - 6 of 6
1.
J Microsc; 279(2): 98-113, 2020 Aug.
Article in English | MEDLINE | ID: mdl-32406521

ABSTRACT

This paper addresses the problem of creating a large quantity of high-quality training segmentation masks from scanning electron microscopy (SEM) images. The images are acquired from concrete samples that exhibit progressive amounts of degradation resulting from alkali-silica reaction (ASR), a leading cause of deterioration, cracking and loss of capacity in much of the nation's infrastructure. The target damage classes in concrete SEM images are defined as paste damage, aggregate damage, air voids and no damage. We approached the SEM segmentation problem by applying convolutional neural network (CNN)-based methods to predict the damage classes due to ASR for each image pixel. The challenge in using CNN-based methods lies in preparing large numbers of high-quality labelled training images with limited human resources. To address this challenge, we designed damage- and context-assisted approaches to lower the demands on human labour. We then evaluated the accuracy of CNN-based segmentation methods using the datasets prepared with these two approaches.

LAY DESCRIPTION: This work is about automated segmentation of scanning electron microscopy (SEM) images taken from core and prism samples of concrete. The segmentation must detect several damage classes in each image in order to understand the properties of concrete structures over time. The segmentation problem is approached with an artificial intelligence (AI) based model. The training data for the AI model are created using damage- and context-assisted approaches to lower the demands on human labour. All training data, together with a web-based validation system for scoring segmented images, are available at https://isg.nist.gov/deepzoomweb/data/concreteScoring.
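The per-pixel, multi-class formulation described above can be sketched as follows. The four class names come from the abstract; `score_maps` (per-class CNN output maps) and the argmax decision rule are illustrative assumptions, not the paper's published code:

```python
# The four target damage classes named in the paper.
CLASSES = ["paste_damage", "aggregate_damage", "air_void", "no_damage"]

def predict_mask(score_maps):
    """Assign each pixel the highest-scoring class.

    score_maps: dict mapping class name -> 2D list of per-pixel scores
    (standing in for CNN output maps). Returns a 2D list of class names.
    """
    h = len(score_maps[CLASSES[0]])
    w = len(score_maps[CLASSES[0]][0])
    return [[max(CLASSES, key=lambda c: score_maps[c][y][x])
             for x in range(w)]
            for y in range(h)]
```

Any per-pixel scoring model can be plugged in; only the argmax-over-classes step that turns scores into a segmentation mask is shown here.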

2.
J Microsc; 260(3): 363-76, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26268699

ABSTRACT

There is no segmentation method that performs perfectly with any dataset in comparison to human segmentation. Evaluation procedures for segmentation algorithms become critical for their selection. The problems associated with segmentation performance evaluations and visual verification of segmentation results are exaggerated when dealing with thousands of three-dimensional (3D) image volumes because of the amount of computation and manual inputs needed. We address the problem of evaluating 3D segmentation performance when segmentation is applied to thousands of confocal microscopy images (z-stacks). Our approach is to incorporate experimental imaging and geometrical criteria, and map them into computationally efficient segmentation algorithms that can be applied to a very large number of z-stacks. This is an alternative approach to considering existing segmentation methods and evaluating most state-of-the-art algorithms. We designed a methodology for 3D segmentation performance characterization that consists of design, evaluation and verification steps. The characterization integrates manual inputs from projected surrogate 'ground truth' of statistically representative samples and from visual inspection into the evaluation. The novelty of the methodology lies in (1) designing candidate segmentation algorithms by mapping imaging and geometrical criteria into algorithmic steps, and constructing plausible segmentation algorithms with respect to the order of algorithmic steps and their parameters, (2) evaluating segmentation accuracy using samples drawn from probability distribution estimates of candidate segmentations and (3) minimizing human labour needed to create surrogate 'truth' by approximating z-stack segmentations with 2D contours from three orthogonal z-stack projections and by developing visual verification tools. We demonstrate the methodology by applying it to a dataset of 1253 mesenchymal stem cells. 
The cells reside on 10 different types of biomaterial scaffolds, and are stained for actin and nucleus, yielding 128 460 image frames (on average, 125 cells/scaffold × 10 scaffold types × 2 stains × 51 frames/cell). After constructing and evaluating six candidate 3D segmentation algorithms, the most accurate one achieved an average precision of 0.82 and an accuracy of 0.84 as measured by the Dice similarity index, where values greater than 0.7 indicate good spatial overlap. The probability of segmentation success was 0.85 based on visual verification, and the computation time was 42.3 h to process all z-stacks. While the most accurate segmentation technique was 4.2 times slower than the second most accurate algorithm, it consumed on average 9.65 times less memory per z-stack segmentation.
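The Dice similarity index used as the accuracy measure here has a short closed form, 2|A ∩ B| / (|A| + |B|); a minimal sketch over flattened binary masks:

```python
def dice_index(mask_a, mask_b):
    """Dice similarity index between two flat binary (0/1) masks:
    2 * |A ∩ B| / (|A| + |B|). As in the paper, values greater
    than 0.7 are usually read as good spatial overlap."""
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    total = sum(mask_a) + sum(mask_b)
    # Two empty masks agree perfectly by convention.
    return 2.0 * inter / total if total else 1.0
```

For example, `dice_index([1, 1, 1, 0], [1, 1, 0, 0])` returns 0.8: the masks share 2 foreground pixels out of 3 + 2 total.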


Subject(s)
Image Processing, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Microscopy, Confocal/methods; Stem Cells/cytology; Humans
3.
J Microsc; 260(1): 86-99, 2015 Oct.
Article in English | MEDLINE | ID: mdl-26046924

ABSTRACT

New microscopy technologies are enabling image acquisition of terabyte-sized data sets consisting of hundreds of thousands of images. In order to retrieve and analyze the biological information in these large data sets, segmentation is needed to detect the regions containing cells or cell colonies. Our work with hundreds of large images (each 21,000×21,000 pixels) requires a segmentation method that: (1) yields high segmentation accuracy, (2) is applicable to multiple cell lines with various densities of cells and cell colonies, and several imaging modalities, (3) can process large data sets in a timely manner, (4) has a low memory footprint and (5) has a small number of user-set parameters that do not require adjustment during the segmentation of large image sets. None of the currently available segmentation methods meet all these requirements. Segmentation based on image gradient thresholding is fast and has a low memory footprint. However, existing techniques that automate the selection of the gradient image threshold do not work across image modalities, multiple cell lines, and a wide range of foreground/background densities (requirement 2) and all failed the requirement for robust parameters that do not require re-adjustment with time (requirement 5). We present a novel and empirically derived image gradient threshold selection method for separating foreground and background pixels in an image that meets all the requirements listed above. We quantify the difference between our approach and existing ones in terms of accuracy, execution speed, memory usage and number of adjustable parameters on a reference data set. This reference data set consists of 501 validation images with manually determined segmentations and image sizes ranging from 0.36 Megapixels to 850 Megapixels. It includes four different cell lines and two image modalities: phase contrast and fluorescent. 
Our new technique, called Empirical Gradient Threshold (EGT), is derived from this reference data set with a 10-fold cross-validation method. EGT segments cells or colonies with resulting Dice accuracy index measurements above 0.92 for all cross-validation data sets. EGT results have also been visually verified on a much larger data set that includes bright field and Differential Interference Contrast (DIC) images, 16 cell lines and 61 time-sequence data sets, for a total of 17,479 images. This method is implemented as an open-source plugin to ImageJ as well as a standalone executable that can be downloaded from the following link: https://isg.nist.gov/.
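Gradient-image thresholding of the kind EGT builds on can be sketched as below. The percentile-based threshold rule here is an illustrative stand-in; the paper's contribution is precisely an empirically derived selection rule, which is not reproduced here:

```python
def gradient_threshold_segment(img, percentile=40):
    """Segment foreground by thresholding the gradient-magnitude image.

    img: 2D list of intensities. The threshold is taken as a fixed
    percentile of the gradient histogram (a stand-in for the EGT rule);
    pixels whose gradient exceeds it are marked as foreground (1).
    """
    h, w = len(img), len(img[0])
    grad = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Central differences, clamped at the image border.
            gx = (img[y][min(x + 1, w - 1)] - img[y][max(x - 1, 0)]) / 2.0
            gy = (img[min(y + 1, h - 1)][x] - img[max(y - 1, 0)][x]) / 2.0
            grad[y][x] = (gx * gx + gy * gy) ** 0.5
    flat = sorted(v for row in grad for v in row)
    thr = flat[min(len(flat) - 1, len(flat) * percentile // 100)]
    return [[1 if grad[y][x] > thr else 0 for x in range(w)]
            for y in range(h)]
```

The appeal of this family of methods, as the abstract notes, is speed and a low memory footprint: one pass for gradients, one sort for the threshold, one pass for the mask.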


Subject(s)
Image Processing, Computer-Assisted; Myocytes, Smooth Muscle/ultrastructure; Pluripotent Stem Cells/ultrastructure; Animals; Cell Line; Datasets as Topic; Humans; Image Processing, Computer-Assisted/instrumentation; Image Processing, Computer-Assisted/methods; Mice; Microscopy, Fluorescence/methods; Microscopy, Phase-Contrast/methods; Models, Theoretical; Muscle, Smooth, Vascular/cytology; NIH 3T3 Cells
4.
J Microsc; 257(3): 226-37, 2015 Mar.
Article in English | MEDLINE | ID: mdl-25623496

ABSTRACT

Several computational challenges associated with large-scale background image correction of terabyte-sized fluorescent images are discussed and analysed in this paper. Dark current, flat-field and background correction models are applied over a mosaic of hundreds of spatially overlapping fields of view (FOVs) taken over the course of several days, during which the background diminishes as cell colonies grow. The motivation for our work comes from the need to quantify the dynamics of OCT-4 gene expression via a fluorescent reporter in human stem cell colonies. Our approach to background correction is formulated as an optimization problem over two image partitioning schemes and four analytical correction models. The optimization objective function is evaluated in terms of (1) the minimum root mean square (RMS) error remaining after image correction, (2) the maximum signal-to-noise ratio (SNR) reached after downsampling and (3) the minimum execution time. Based on the analyses with measured dark current noise and flat-field images, the optimal GFP background correction is obtained by partitioning the data into a set of submosaic images with a polynomial surface background model. The resulting corrected image is characterized by an RMS of about 8, and an SNR value above 5 (the Rose criterion) after 4 × 4 downsampling. The new technique generates an image with half the RMS value and double the SNR value compared to an approach that assumes constant background throughout the mosaic. We show that the background noise in terabyte-sized fluorescent image mosaics can be corrected computationally with the optimized triplet (data partition, model, SNR-driven downsampling) such that the total RMS value from background noise does not exceed the magnitude of the measured dark current noise. In this case, the dark current noise serves as a benchmark for the lowest noise level that an imaging system can achieve.
In comparison to previous work, past fluorescent image background correction methods were designed for a single FOV and have not been applied to terabyte-sized images with large mosaics of FOVs, low SNR and diminishing access to background information over time as cell colonies grow to span multiple FOVs entirely. The code is available as open-source from the following link: https://isg.nist.gov/.
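Of the three correction models the abstract names, the classical dark-current and flat-field step has a standard per-pixel form, (raw − dark) / (flat − dark); a minimal sketch over flat pixel lists (not the paper's optimized submosaic polynomial background model):

```python
def flat_field_correct(raw, dark, flat, eps=1e-9):
    """Classical flat-field correction: (raw - dark) / (flat - dark),
    rescaled by the mean gain so corrected intensities stay in the
    original range. raw, dark, flat are flat lists of pixel values from
    the sample image, a dark (shutter-closed) frame and a uniformly
    illuminated flat frame, respectively."""
    gain = [f - d for f, d in zip(flat, dark)]
    mean_gain = sum(gain) / len(gain)
    return [(r - d) / (g if abs(g) > eps else eps) * mean_gain
            for r, d, g in zip(raw, dark, gain)]
```

A uniformly lit scene seen through uneven pixel gain comes back uniform: `flat_field_correct([12, 22], [2, 2], [12, 22])` returns `[15.0, 15.0]`.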


Subject(s)
Image Enhancement/methods; Image Processing, Computer-Assisted/methods; Microscopy, Fluorescence/methods; Time-Lapse Imaging/methods; Gene Expression Regulation; Humans; Octamer Transcription Factor-3/metabolism; Stem Cells
5.
J Microsc; 249(1): 41-52, 2013 Jan.
Article in English | MEDLINE | ID: mdl-23126432

ABSTRACT

We present a new method for segmenting phase contrast images of NIH 3T3 fibroblast cells that is accurate even when cells are physically in contact with each other. Segmentation when cells are in contact poses a challenge to the accurate automation of cell counting, tracking and lineage modelling in cell biology. The segmentation method presented in this paper consists of (1) background reconstruction to obtain noise-free foreground pixels and (2) incorporation of biological insight about dividing and nondividing cells into the segmentation process to achieve reliable separation of foreground pixels into those associated with individual cells. The segmentation results for a time-lapse image stack were compared against 238 manually segmented images (8219 cells) provided by experts, which we consider as reference data. We chose two metrics to measure the accuracy of segmentation: the 'Adjusted Rand Index', which compares similarities at a pixel level between masks resulting from manual and automated segmentation, and the 'Number of Cells per Field' (NCF), which compares the number of cells identified in the field by manual versus automated analysis. Our results show that the automated segmentation has an average Adjusted Rand Index of 0.96 relative to manual segmentation (1 being a perfect match), with a standard deviation of 0.03, and an average difference in the number of cells per field of 5.39% with a standard deviation of 4.6%.
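The Adjusted Rand Index used here is the standard chance-corrected agreement between two labelings of the same pixels; a self-contained sketch using the usual pair-counting formula:

```python
from math import comb

def adjusted_rand_index(labels_a, labels_b):
    """ARI = (index - expected) / (max_index - expected), where the
    counts are over pairs of items placed together in each labeling.
    Returns 1.0 for a perfect match (up to relabeling), ~0 for chance."""
    pairs, count_a, count_b = {}, {}, {}
    for a, b in zip(labels_a, labels_b):
        pairs[(a, b)] = pairs.get((a, b), 0) + 1
        count_a[a] = count_a.get(a, 0) + 1
        count_b[b] = count_b.get(b, 0) + 1
    index = sum(comb(n, 2) for n in pairs.values())
    sum_a = sum(comb(n, 2) for n in count_a.values())
    sum_b = sum(comb(n, 2) for n in count_b.values())
    expected = sum_a * sum_b / comb(len(labels_a), 2)
    max_index = (sum_a + sum_b) / 2.0
    if max_index == expected:  # degenerate case: all items in one class
        return 1.0
    return (index - expected) / (max_index - expected)
```

Unlike raw pixel accuracy, the ARI is insensitive to how the cell labels are numbered, which is why it suits comparing independently produced manual and automated masks.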


Subject(s)
Fibroblasts/cytology; Image Processing, Computer-Assisted/methods; Microscopy, Phase-Contrast/methods; Time-Lapse Imaging/methods; Animals; Cell Adhesion; Cell Count; Cell Division; Cell Shape; Computational Biology; Mice; NIH 3T3 Cells; Reproducibility of Results; Sensitivity and Specificity
6.
J Microsc; 221(Pt 1): 30-45, 2006 Jan.
Article in English | MEDLINE | ID: mdl-16438687

ABSTRACT

The distribution of looping patterns of laminin in uveal melanomas and other tumours has been associated with adverse outcome. These patterns are generated by highly invasive tumour cells through the process of vasculogenic mimicry and are therefore not blood vessels; nevertheless, these extravascular matrix patterns conduct plasma. The three-dimensional (3D) configuration of these laminin-rich patterns compared with blood vessels has been the subject of speculation and intensive investigation. We have developed a method for the 3D volume reconstruction of these extravascular matrix proteins from serial paraffin sections cut at 4 µm thickness and stained with a fluorescently labelled antibody to laminin (Maniotis et al., 2002). Each section was examined via confocal laser-scanning microscopy (CLSM) and 13 images were recorded in the Z-dimension for each slide. The input CLSM imagery is composed of a set of 3D sub-volumes (stacks of 2D images) acquired at multiple confocal depths, from a sequence of consecutive slides. Steps for automated reconstruction included (1) unsupervised methods for selecting an image frame from a sub-volume based on entropy and contrast criteria, (2) a fully automated registration technique for image alignment and (3) an improved histogram equalization method that compensates for spatially varying image intensities in CLSM imagery due to photo-bleaching. We compared the image alignment accuracy of the fully automated method with the registration accuracy achieved by human subjects using a manual method. Automated 3D volume reconstruction was found to provide significant improvement in accuracy, consistency of results and performance time for CLSM images acquired from serial paraffin sections.
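The global histogram equalization that the paper's spatially varying, photo-bleaching-compensating variant builds on can be sketched as follows (the classical step only, not the improved method itself):

```python
def equalize(pixels, levels=256):
    """Classical global histogram equalization for integer intensities
    in [0, levels): map each level through the normalized cumulative
    histogram so intensities spread over the full output range."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)  # first occupied level
    denom = max(len(pixels) - cdf_min, 1)
    return [round((cdf[p] - cdf_min) / denom * (levels - 1))
            for p in pixels]
```

For example, a low-contrast frame with intensities `[50, 50, 100, 100]` is stretched to `[0, 0, 255, 255]`. A photo-bleaching-aware variant would apply such a mapping locally rather than from one global histogram.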


Subject(s)
Extracellular Matrix Proteins/analysis; Image Processing, Computer-Assisted/methods; Melanoma/chemistry; Microscopy, Confocal/methods; Uveal Neoplasms/chemistry; Humans; Microtomy; Paraffin Embedding