ABSTRACT
Neutron interferometry uniquely combines neutron imaging and scattering methods to enable characterization of multiple length scales from 1 nm to 10 µm. However, building, operating, and using such neutron imaging instruments imposes constraints on the acquisition time and on the number of measured images per sample. These experiment time constraints yield quantities of measured images that are too small for automating image analyses using supervised artificial intelligence (AI) models. One approach alleviates this problem by supplementing annotated measured images with synthetic images. To this end, we create a data-driven simulation framework that supplements training data beyond typical data-driven augmentations by leveraging statistical intensity models, such as the Johnson family of probability density functions (PDFs). We follow the simulation framework steps for an image segmentation task: Estimate PDFs → Validate PDFs → Design Image Masks → Generate Intensities → Train AI Model for Segmentation. Our goal is to minimize the manual labor needed to execute these steps and to maximize our confidence in the simulations and in segmentation accuracy. We report results for a set of nine known materials (calibration phantoms) that were imaged with a neutron interferometer acquiring four-dimensional images and segmented by AI models trained on synthetic and measured images and their masks.
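The intensity-modeling step above can be illustrated with a brief sketch. Assuming scipy's Johnson S_U distribution as the statistical intensity model (the abstract names the Johnson family but not a specific member or library), the example below fits a PDF to measured pixel intensities, validates the fit with a Kolmogorov-Smirnov test, and fills a designed mask with sampled synthetic intensities; the function names and parameters are illustrative only.

```python
import numpy as np
from scipy import stats

def estimate_pdf(measured_pixels):
    """Fit a Johnson S_U distribution to the measured intensities of one material."""
    params = stats.johnsonsu.fit(measured_pixels)                 # (a, b, loc, scale)
    # Validate the fit with a Kolmogorov-Smirnov test against the fitted PDF.
    ks_stat, p_value = stats.kstest(measured_pixels, 'johnsonsu', args=params)
    return params, ks_stat, p_value

def generate_region(mask, params, rng):
    """Fill the True pixels of a designed mask with synthetic intensities."""
    a, b, loc, scale = params
    synthetic = np.zeros(mask.shape, dtype=np.float32)
    synthetic[mask] = stats.johnsonsu.rvs(a, b, loc=loc, scale=scale,
                                          size=int(mask.sum()), random_state=rng)
    return synthetic

# Example with stand-in "measured" intensities for a single material region.
rng = np.random.default_rng(0)
measured = rng.normal(100.0, 5.0, size=5000)
params, ks_stat, p_value = estimate_pdf(measured)
mask = np.zeros((64, 64), dtype=bool)
mask[16:48, 16:48] = True
synthetic_image = generate_region(mask, params, rng)
```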
ABSTRACT
We present a graph neural network (GNN)-based framework applied to large-scale microscopy image segmentation tasks. While deep learning models, such as convolutional neural networks (CNNs), have become common for automating image segmentation tasks, they are limited by the image size that can fit in the memory of computational hardware. In a GNN framework, large-scale images are converted into graphs using superpixels (regions of pixels with similar color/intensity values), allowing us to input information from the entire image into the model. By converting images with hundreds of millions of pixels into graphs with thousands of nodes, we can segment large images using memory-limited computational resources. We compare the performance of GNN- and CNN-based segmentation in terms of accuracy, training time, and required graphics processing unit (GPU) memory. Based on our experiments with microscopy images of biological cells and cell colonies, GNN-based segmentation used one to three orders of magnitude fewer computational resources with a change in accuracy of only −2 % to +0.3 %. Furthermore, errors due to superpixel generation can be reduced by either using better superpixel generation algorithms or increasing the number of superpixels, thereby allowing further improvement in the GNN framework's accuracy. This trade-off between accuracy and computational cost relative to CNN models makes the GNN framework attractive for many large-scale microscopy image segmentation tasks in biology.
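The image-to-graph conversion described above can be sketched as follows. This is an assumed, simplified implementation (not the paper's code): SLIC superpixels become graph nodes with mean-intensity and size features, and superpixels that touch become edges; it assumes scikit-image >= 0.19 for the `channel_axis` argument.

```python
import numpy as np
from skimage.segmentation import slic

def image_to_superpixel_graph(image, n_segments=2000):
    """Convert a 2D grayscale image into node features and an edge list."""
    labels = slic(image, n_segments=n_segments, start_label=0, channel_axis=None)
    n_nodes = int(labels.max()) + 1

    # Node features: mean intensity and size of each superpixel.
    sums = np.bincount(labels.ravel(), weights=image.ravel(), minlength=n_nodes)
    sizes = np.bincount(labels.ravel(), minlength=n_nodes)
    features = np.stack([sums / np.maximum(sizes, 1), sizes], axis=1)

    # Edges: pairs of different superpixel labels that touch horizontally or vertically.
    pairs = set()
    for a, b in [(labels[:, :-1], labels[:, 1:]), (labels[:-1, :], labels[1:, :])]:
        touching = a != b
        pairs.update(zip(a[touching].tolist(), b[touching].tolist()))
    pairs |= {(j, i) for i, j in pairs}                       # make the graph undirected
    edge_index = np.array(sorted(pairs)).T                    # shape (2, n_edges)
    return features, edge_index, labels

# A GNN (e.g., a few graph-convolution layers) then classifies each node, and node
# predictions are mapped back to pixels through the `labels` image.
```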
ABSTRACT
In the field of tissue engineering, 3D scaffolds and cells are often combined to yield constructs that are used as therapeutics to repair or restore tissue function in patients. Viable cells are often required to achieve the intended mechanism of action for the therapy, where the live cells may build new tissue or may release factors that induce tissue regeneration. Thus, there is a need to reliably measure cell viability in 3D scaffolds as a quality attribute of a tissue-engineered medical product. Here, we developed a noninvasive, label-free, 3D optical coherence tomography (OCT) method to rapidly (2.5 min) image large sample volumes (1 mm³) to assess cell viability and distribution within scaffolds. OCT imaging was assessed using a model scaffold-cell system consisting of a polysaccharide-based hydrogel seeded with human Jurkat cells. Four test systems were used: hydrogel seeded with live cells, hydrogel seeded with heat-shocked dead cells, hydrogel seeded with fixed dead cells, and hydrogel without any cells. Time-series OCT images demonstrated changes in the time-dependent speckle patterns due to refractive index (RI) variations within live cells that were not observed for pure hydrogel samples or hydrogels with dead cells. The changes in speckle patterns were used to generate live-cell contrast by image subtraction. In this way, objects with large changes in RI were binned as live cells. Using this approach, on average, OCT imaging measurements counted 326 ± 52 live cells per 0.288 mm³ for hydrogels that were seeded with 288 live cells (as determined by the acridine orange-propidium iodide cell counting method prior to seeding cells in gels). Considering the substantial uncertainties in fabricating the scaffold-cell constructs, such as the error from pipetting and counting cells, a 13% difference in the live-cell count is reasonable. Additionally, the 3D distribution of live cells was mapped within a hydrogel scaffold to assess the uniformity of their distribution across the volume. Our results demonstrate a real-time, noninvasive method to rapidly assess the spatial distribution of live cells within a 3D scaffold that could be useful for assessing tissue-engineered medical products.
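A minimal sketch of the image-subtraction contrast mechanism described above is shown below, under the assumption that live cells are detected as connected regions of large frame-to-frame intensity change; the threshold, minimum object size, and function name are illustrative, not the authors' actual parameters.

```python
import numpy as np
from scipy import ndimage

def count_live_cells(oct_frames, change_threshold, min_voxels=10):
    """oct_frames: array of shape (time, z, y, x) of OCT intensities from one volume."""
    # Mean absolute frame-to-frame difference highlights time-varying speckle (live cells).
    change_map = np.mean(np.abs(np.diff(oct_frames.astype(np.float32), axis=0)), axis=0)
    live_mask = change_map > change_threshold
    # Label connected high-change regions and keep only those above a minimum size.
    labels, n_objects = ndimage.label(live_mask)
    sizes = ndimage.sum(live_mask, labels, index=np.arange(1, n_objects + 1))
    return int(np.sum(sizes >= min_voxels)), change_map
```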
Subjects
Tissue Engineering, Optical Coherence Tomography, Humans, Tissue Engineering/methods, Cell Survival, Tissue Scaffolds, Hydrogels/pharmacology
ABSTRACT
The properties and structure of the cellular microenvironment can influence cell behavior. Sites of cell adhesion to the extracellular matrix (ECM) initiate intracellular signaling that directs cell functions such as proliferation, differentiation, and apoptosis. Electrospun fibers mimic the fibrous nature of native ECM proteins, and cell culture in fibers affects cell shape and dimensionality, which can drive specific functions, such as the osteogenic differentiation of primary human bone marrow stromal cells (hBMSCs). To probe how scaffolds affect cell shape and behavior, cell-fiber contacts were imaged and their shape and dimensionality were assessed through a novel approach. Fluorescent polymeric fiber scaffolds were made so that they could be imaged by confocal fluorescence microscopy. Fluorescent polymer films were made as a planar control. hBMSCs were cultured on the fluorescent substrates, and the cells and substrates were imaged. Two different image analysis approaches, one based on geometrical assumptions and the other on statistical assumptions, were used to analyze the 3D structure of cell-scaffold contacts. The cells cultured in scaffolds contacted the fibers in multiple planes over the surface of the cell, while the cells cultured on films had contacts confined to the bottom surface of the cell. Shape metric analysis indicated that cell-fiber contacts had greater dimensionality and greater 3D character than the cell-film contacts. These results suggest that cell adhesion site-initiated signaling could emanate from multiple planes over the cell surface during culture in fibers, as opposed to emanating only from the cell's basal surface during culture on planar surfaces.
Subjects
Mesenchymal Stem Cells, Osteogenesis, Humans, Tissue Scaffolds/chemistry, Cell Differentiation, Extracellular Matrix/metabolism, Cultured Cells, Tissue Engineering/methods, Bone Marrow Cells
Subjects
Metadata, Microscopy/instrumentation, Microscopy/methods, Microscopy/standards, Animals, Biomedical Research/organization & administration, Calibration, Data Collection, Data Mining/standards, Humans, Quality Control, Reproducibility of Results, Scientific Societies, Software, Systems Integration, User-Computer Interface
ABSTRACT
This paper addresses the problem of designing trojan detectors in neural networks (NNs) using interactive simulations. Trojans in NNs are defined as triggers in inputs that cause misclassification of such inputs into a class (or classes) unintended by the design of an NN-based model. The goal of our work is to understand encodings of a variety of trojan types in fully connected layers of neural networks. Our approach is (1) to simulate nine types of trojan embeddings into dot patterns, (2) to devise measurements of NN states, and (3) to design trojan detectors in NN-based classification models. The interactive simulations are built on top of TensorFlow Playground with in-memory storage of data and NN coefficients. The simulations provide analytical, visualization, and output operations performed on training datasets and NN architectures. The measurements of an NN include (a) model inefficiency using a modified Kullback-Leibler (KL) divergence from uniformly distributed states and (b) model sensitivity to variables related to data and NNs. Using the KL divergence measurements at each NN layer and per predicted class label, a trojan detector is devised to discriminate NN models with or without trojans. To document the robustness of such a trojan detector with respect to NN architectures, dataset perturbations, and trojan types, several properties of the KL divergence measurement are presented. For general use, the web-based simulation is deployed via GitHub Pages at https://github.com/usnistgov/nn-calculator.
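A simplified reading of the per-layer, per-class KL-divergence measurement is sketched below: hidden-layer activations are binarized into discrete states, the empirical state histogram over inputs predicted as one class is formed, and its divergence from a uniform distribution over the observed states is computed. The binarization scheme and the restriction to observed states are assumptions, not necessarily the nn-calculator implementation.

```python
import numpy as np

def state_histogram(layer_activations, predicted_labels, target_class):
    """layer_activations: (n_samples, n_neurons); predicted_labels: (n_samples,)."""
    acts = layer_activations[predicted_labels == target_class]
    states = (acts > 0).astype(np.uint8)            # active/inactive pattern per sample
    _, counts = np.unique(states, axis=0, return_counts=True)
    return counts / counts.sum()

def kl_from_uniform(p):
    """KL(p || uniform) over the states that were actually observed."""
    q = np.full_like(p, 1.0 / p.size)
    return float(np.sum(p * np.log(p / q)))

# A large divergence for some layer/class combination, relative to a clean reference
# model, is the kind of signal a trojan detector can threshold on.
```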
ABSTRACT
A modern-day light microscope has evolved from a tool devoted to making primarily empirical observations into a sophisticated, quantitative device that is an integral part of both physical and life science research. Nowadays, microscopes are found in nearly every experimental laboratory. However, despite their prevalent use in capturing and quantifying scientific phenomena, neither a thorough understanding of the principles underlying quantitative imaging techniques nor appropriate knowledge of how to calibrate, operate and maintain microscopes can be taken for granted. This is clearly demonstrated by the well-documented and widespread difficulties that are routinely encountered in evaluating acquired data and reproducing scientific experiments. Indeed, studies have shown that more than 70% of researchers have tried and failed to repeat another scientist's experiments, while more than half have even failed to reproduce their own experiments. One factor behind the reproducibility crisis of experiments published in scientific journals is the frequent underreporting of imaging methods caused by a lack of awareness and/or a lack of knowledge of the applied technique. Whereas quality control procedures for some methods used in biomedical research, such as genomics (e.g. DNA sequencing, RNA-seq) or cytometry, have been introduced (e.g. ENCODE), this issue has not been tackled for optical microscopy instrumentation and images. Although many calibration standards and protocols have been published, there is a lack of awareness and agreement on common standards and guidelines for quality assessment and reproducibility. In April 2020, the QUality Assessment and REProducibility for instruments and images in Light Microscopy (QUAREP-LiMi) initiative was formed. This initiative comprises imaging scientists from academia and industry who share a common interest in achieving a better understanding of the performance and limitations of microscopes and improved quality control (QC) in light microscopy. The ultimate goal of the QUAREP-LiMi initiative is to establish a set of common QC standards, guidelines, metadata models and tools, including detailed protocols, that improve the reproducibility of scientific research. This White Paper (1) summarizes the major obstacles identified in the field that motivated the launch of the QUAREP-LiMi initiative; (2) identifies the urgent need to address these obstacles in a grassroots manner, through a community of stakeholders including researchers, imaging scientists, bioimage analysts, bioimage informatics developers, corporate partners, funding agencies, standards organizations, scientific publishers and observers; (3) outlines the current actions of the QUAREP-LiMi initiative; and (4) proposes future steps that can be taken to improve the dissemination and acceptance of the proposed guidelines to manage QC. In summary, the principal goal of the QUAREP-LiMi initiative is to improve the overall quality and reproducibility of light microscope image data by introducing broadly accepted standard practices and accurately captured image data metrics.
Subjects
Microscopy, Reference Standards, Reproducibility of Results
ABSTRACT
We address the problem of performing exact (tiling-error-free) out-of-core semantic segmentation inference on arbitrarily large images using fully convolutional neural networks (FCNs). FCN models have the property that, once trained, they can be applied to arbitrarily sized images, although inference is still constrained by the available GPU memory. This work is motivated by overcoming the GPU memory size constraint without numerically impacting the final result. Our approach is to select a tile size that fits into GPU memory together with a halo border of half the network receptive field, and then to stride across the image by that tile size without the halo. The input tile halos overlap, while the output tiles join exactly at the seams. Such an approach enables inference to be performed on whole-slide microscopy images, such as those generated by a slide scanner. The novelty of this work is in documenting the formulas for determining tile size and stride and then validating them on U-Net and FC-DenseNet architectures. In addition, we quantify the errors due to tiling configurations that do not satisfy the constraints, and we explore the use of architecture effective receptive fields to estimate the tiling parameters.
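The tiling scheme can be made concrete with a short sketch. It assumes a 2D single-channel image, a `model` callable that returns a same-sized output for any input tile, and a known receptive field; these names and the reflect-padding of image borders are illustrative choices, not the paper's exact formulas.

```python
import numpy as np

def tiled_inference(image, model, tile_size, receptive_field):
    """Exact tiled inference: halo = half the receptive field, stride = tile size."""
    halo = receptive_field // 2
    h, w = image.shape
    out = np.zeros((h, w), dtype=np.float32)
    padded = np.pad(image, halo, mode='reflect')          # so border tiles also get a halo
    for y in range(0, h, tile_size):
        for x in range(0, w, tile_size):
            th = min(tile_size, h - y)
            tw = min(tile_size, w - x)
            # Input tile: core plus halo on all sides (coordinates are in the padded image).
            tile = padded[y:y + th + 2 * halo, x:x + tw + 2 * halo]
            pred = model(tile)
            # Keep only the halo-free core; cores from neighboring tiles abut exactly.
            out[y:y + th, x:x + tw] = pred[halo:halo + th, halo:halo + tw]
    return out
```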
ABSTRACT
Predicting retinal pigment epithelium (RPE) cell functions in stem cell implants using non-invasive bright-field microscopy imaging is a critical task for clinical deployment of stem cell therapies. Such cell function predictions can be carried out using artificial intelligence (AI)-based models. In this paper, we used traditional machine learning (TML)- and deep learning (DL)-based AI models for cell function prediction tasks. TML models depend on manual feature engineering, whereas DL models perform feature engineering automatically but have higher modeling complexity. This work aims at exploring the trade-offs among three approaches that use TML- and DL-based models for RPE cell function prediction from microscopy images, and at understanding the relationships between the pixel-, cell-feature-, and implant-label-level accuracies of the models. Among the three compared approaches to cell function prediction, the direct approach that predicts cell function from images is slightly more accurate than the indirect approaches that use intermediate segmentation and/or feature engineering steps. We also evaluated accuracy variations with respect to model selection (five TML models and two DL models) and model configuration (with and without transfer learning). Finally, we quantified the relationships between segmentation accuracy and the number of samples used for training a model, between segmentation accuracy and cell feature error, and between cell feature error and the accuracy of implant labels. We concluded that, for the RPE cell data set, there is a monotonic relationship between the number of training samples and image segmentation accuracy, and between segmentation accuracy and cell feature error, but there is no such relationship between segmentation accuracy and the accuracy of RPE implant labels.
ABSTRACT
Increases in the number of cell therapies in the preclinical and clinical phases have prompted the need for reliable and noninvasive assays to validate transplant function in clinical biomanufacturing. We developed a robust characterization methodology composed of quantitative bright-field absorbance microscopy (QBAM) and deep neural networks (DNNs) to noninvasively predict tissue function and cellular donor identity. The methodology was validated using clinical-grade induced pluripotent stem cell-derived retinal pigment epithelial cells (iPSC-RPE). QBAM images of iPSC-RPE were used to train DNNs that predicted iPSC-RPE monolayer transepithelial resistance, predicted polarized vascular endothelial growth factor (VEGF) secretion, and matched iPSC-RPE monolayers to the stem cell donors. DNN predictions were supplemented with traditional machine-learning algorithms that identified shape and texture features of single cells, which were then used to predict tissue function and iPSC donor identity. These results demonstrate that noninvasive cell therapy characterization can be achieved with QBAM and machine learning.
Subjects
Cell Differentiation, Deep Learning, Computer-Assisted Image Processing, Induced Pluripotent Stem Cells, Microscopy, Retinal Pigment Epithelium, Humans, Induced Pluripotent Stem Cells/cytology, Induced Pluripotent Stem Cells/metabolism, Retinal Pigment Epithelium/cytology, Retinal Pigment Epithelium/metabolism
ABSTRACT
In living systems, it is frequently stated that form follows function by virtue of evolutionary pressures on organism development, but in the study of how functions emerge at the cellular level, function often follows form. We study this chicken-versus-egg problem of emergent structure-property relationships in living systems in the context of primary human bone marrow stromal cells cultured in a variety of microenvironments that have been shown to cause distinct patterns of cell function and differentiation. Through analysis of a publicly available catalog of three-dimensional (3D) cell shape data, we introduce a family of metrics to characterize the 'form' of the cell populations that emerge from a variety of diverse microenvironments. In particular, we consider measures of form that are expected to have direct significance for cell function, signaling and metabolic activity: dimensionality, polarizability and capacitance. Dimensionality was assessed by an intrinsic measure of cell shape obtained from the polarizability tensor. This tensor defines ellipsoids for arbitrary cell shapes, and the thinnest dimension of these ellipsoids, P1, defines a reference minimal scale for cells cultured in a 3D microenvironment. Polarizability governs the electric field generated by a cell and determines the cell's ability to detect electric fields. Capacitance controls the shape dependence of the rate at which diffusing molecules contact the surface of the cell, which has great significance for inter-cellular signaling. These results invite new approaches for designing scaffolds that explicitly direct cell dimensionality, polarizability and capacitance to guide the emergence of new cell functions derived from the acquired form.
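A simplified, assumed stand-in for the polarizability-tensor analysis is sketched below: the second-moment (gyration) tensor of a 3D binary cell mask defines a characteristic ellipsoid, and the cell extent along the smallest principal axis plays the role of the thinnest dimension P1. The actual polarizability and capacitance calculations in the paper are more involved and are not reproduced here.

```python
import numpy as np

def thinnest_dimension(mask, voxel_size=(1.0, 1.0, 1.0)):
    """mask: 3D boolean array of one segmented cell; voxel_size: (z, y, x) spacing."""
    coords = np.argwhere(mask) * np.asarray(voxel_size)   # voxel centers in physical units
    centered = coords - coords.mean(axis=0)
    gyration = centered.T @ centered / len(centered)      # 3x3 second-moment tensor
    eigvals, eigvecs = np.linalg.eigh(gyration)           # eigenvalues in ascending order
    # Cell extent along the weakest principal axis; for a uniform solid ellipsoid the
    # corresponding full axis length would be 2*sqrt(5*lambda_min).
    axis = eigvecs[:, 0]
    projections = centered @ axis
    return projections.max() - projections.min()
```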
Subjects
Cell Culture Techniques/methods, Cell Differentiation/drug effects, Cellular Microenvironment, Mesenchymal Stem Cells/cytology, Tissue Scaffolds/chemistry, Algorithms, Animals, Cell Nucleus/metabolism, Cell Shape, Electricity, Fibrinogen/chemistry, Humans, Mice, Confocal Microscopy, Nanofibers/chemistry, Polystyrenes/chemistry, Probability, Signal Transduction, Thrombin/chemistry
ABSTRACT
Light field cameras are an emerging imaging device for acquiring 3-D information about a scene by capturing a field of light rays traveling in space. As light field cameras become portable, hand-held, and affordable, their potential as a 3-D measurement instrument is growing in many applications, including 3-D evidence imaging in crime scene investigations. We evaluated the lateral resolution of commercially available light field cameras, which is one of the fundamental specifications of imaging instruments. For the evaluation of the cameras' lateral resolution, we imaged Siemens stars under various imaging configurations and experimental conditions, including changes in the distance between the camera and the resolution target plate, illumination, zoom level, location of the Siemens star in the camera's field of view, and cameras of the same model. The results of a full factorial experiment showed that (i) at a lower zoom level, the lateral resolution tended not to be affected by distance, whereas at a higher zoom level it tended to decrease significantly with distance, (ii) the center region of the camera's field of view provided a better lateral resolution than the peripheral regions, (iii) a higher zoom level yielded a higher lateral resolution, (iv) the two cameras of the same model used in the study did not show a significant difference in lateral resolution, and (v) changes in illumination did not affect the lateral resolution of the cameras.
ABSTRACT
BACKGROUND: Cell-scaffold contact measurements are derived from pairs of co-registered volumetric fluorescent confocal laser scanning microscopy (CLSM) images (z-stacks) of stained cells and three types of scaffolds (i.e., spun coat, large microfiber, and medium microfiber). Our analysis of the acquired terabyte-sized collection is motivated by the need to understand the nature of the shape dimensionality (1D vs 2D vs 3D) of cell-scaffold interactions relevant to tissue engineers that grow cells on biomaterial scaffolds. RESULTS: We designed five statistical and three geometrical contact models, and then down-selected to one from each category using a validation approach based on measurements physically orthogonal to CLSM. The two selected models were applied to 414 z-stacks with three scaffold types, and all contact results were visually verified. A planar geometrical model for the spun coat scaffold type was validated from atomic force microscopy images by computing a surface roughness of 52.35 nm ± 31.76 nm, which was 2 to 8 times smaller than the CLSM resolution. A cylindrical model for fiber scaffolds was validated from multi-view 2D scanning electron microscopy (SEM) images. The fiber scaffold segmentation error was assessed by comparing fiber diameters from SEM and CLSM and found to be between 0.46% and 3.8% of the SEM reference values. For contact verification, we constructed a web-based visual verification system with 414 pairs of images with cells and their segmentation results, and with 4968 movies with animated cell, scaffold, and contact overlays. Based on visual verification by three experts, we report the accuracy of cell segmentation to be 96.4% with 94.3% precision, and the accuracy of cell-scaffold contact to be 62.6% with 76.7% precision for the statistical model and 93.5% with 87.6% precision for the geometrical model. CONCLUSIONS: The novelty of our approach lies in (1) representing cell-scaffold contact sites with statistical intensity and geometrical shape models, (2) designing a methodology for validating 3D geometrical contact models, and (3) devising a mechanism for visual verification of hundreds of 3D measurements. The raw and processed data are publicly available from https://isg.nist.gov/deepzoomweb/data/ together with the web-based verification system.
Subjects
Three-Dimensional Imaging/methods, Biological Models, Tissue Scaffolds/chemistry, Algorithms, Biocompatible Materials/chemistry, Bone Marrow Cells/cytology, Humans, Internet, Male, Mesenchymal Stem Cells/cytology, Atomic Force Microscopy, Confocal Microscopy, Scanning Electron Microscopy, User-Computer Interface, X-Ray Microtomography, Young Adult
ABSTRACT
Automated microscopy can image specimens larger than the microscope's field of view (FOV) by stitching overlapping image tiles. It also enables time-lapse studies of entire cell cultures in multiple imaging modalities. We created MIST (Microscopy Image Stitching Tool) for rapid and accurate stitching of large 2D time-lapse mosaics. MIST estimates the mechanical stage model parameters (actuator backlash and stage repeatability r) from computed pairwise translations and then minimizes stitching errors by optimizing the translations within a (4r)² square area. MIST has a performance-oriented implementation utilizing multicore hybrid CPU/GPU computing resources, which can process terabytes of time-lapse multi-channel mosaics 15 to 100 times faster than existing tools. We created 15 reference datasets to quantify MIST's stitching accuracy. The datasets consist of three preparations of stem cell colonies seeded at low density and imaged with varying overlap (10% to 50%). The location and size of 1150 colonies were measured to quantify stitching accuracy. MIST generated stitched images with an average centroid distance error of less than 2% of a FOV. The sources of these errors include mechanical uncertainties, specimen photobleaching, segmentation, and stitching inaccuracies. MIST produced higher stitching accuracy than three open-source tools. MIST is available in ImageJ at isg.nist.gov.
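The pairwise-translation step can be illustrated with FFT-based phase correlation, a standard way to compute tile-to-tile shifts; whether MIST uses exactly this normalization is an assumption, and the stage-model optimization over the (4r)² search area is not shown.

```python
import numpy as np

def phase_correlation_shift(tile_a, tile_b):
    """Return the (dy, dx) shift that best aligns tile_b to tile_a."""
    fa = np.fft.fft2(tile_a)
    fb = np.fft.fft2(tile_b)
    cross_power = fa * np.conj(fb)
    cross_power /= np.maximum(np.abs(cross_power), 1e-12)   # keep phase information only
    correlation = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(correlation), correlation.shape)
    # Wrap peak locations beyond half the image size around to negative shifts.
    shifts = [p if p <= s // 2 else p - s for p, s in zip(peak, correlation.shape)]
    return tuple(shifts)
```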
ABSTRACT
This paper addresses the problem of classifying materials from microspectroscopy at the pixel level. The challenges lie in identifying discriminatory spectral features and obtaining accurate and interpretable models relating spectra and class labels. We approach the problem by designing a supervised classifier from a tandem of Artificial Neural Network (ANN) models that identify relevant features in raw spectra and achieve high classification accuracy. The tandem of ANN models is meshed with classification rule extraction methods to lower the model complexity and to achieve interpretability of the resulting model. The contribution of this work is in designing each ANN model based on the microspectroscopy hypothesis that a discriminatory feature of a certain target class is composed of a linear combination of spectra. The novelty lies in meshing ANN and decision rule models into a tandem configuration to achieve accurate and interpretable classification results. The proposed method was evaluated using a set of broadband coherent anti-Stokes Raman scattering (BCARS) microscopy cell images (600 000 pixel-level spectra) and a reference four-class rule-based model previously created by biochemical experts. The generated classification rule-based model was on average 85% accurate as measured by the Dice pixel label similarity metric, and on average 96% similar to the reference rules as measured by the vector cosine metric.
Subjects
Neural Networks (Computer), Spectral Analysis/methods, Algorithms, Reproducibility of Results, Sensitivity and Specificity, Spectral Analysis/standards
ABSTRACT
Many biomaterial scaffolds have been advanced to provide synthetic cell niches for tissue engineering and drug screening applications; however, current methods for comparing scaffold niches focus on cell functional outcomes or attempt to normalize materials properties between different scaffold formats. We demonstrate a three-dimensional (3D) cellular morphotyping strategy for comparing cell niches across different biomaterial scaffold formats. Primary human bone marrow stromal cells (hBMSCs) were cultured on 8 different biomaterial scaffolds, including fibrous scaffolds, hydrogels, and porous sponges, in 10 treatment groups to compare a variety of biomaterial scaffolds and cell morphologies. A bioinformatics approach was used to determine the 3D cellular morphotype for each treatment group by using 82 shape metrics to analyze approximately 1000 cells. We found that hBMSCs cultured on planar substrates yielded planar cell morphotypes, while those cultured in 3D scaffolds had elongated or equiaxial cellular morphotypes with greater height. Multivariate analysis was effective at distinguishing the mean shapes of cells on flat substrates from those of cells in scaffolds, as was the metric L1-depth (the cell height along its shortest axis after aligning cells with a characteristic ellipsoid). The 3D cellular morphotyping technique enables direct comparison of cellular microenvironments between widely different types of scaffolds and the design of scaffolds based on cell structure-function relationships.
ABSTRACT
Recent work demonstrates that osteoprogenitor cell culture on nanofiber scaffolds can promote differentiation. This response may be driven by changes in cell morphology caused by the three-dimensional (3D) structure of nanofibers. We hypothesized that nanofiber effects on cell behavior may be mediated by changes in organelle structure and function. To test this hypothesis, human bone marrow stromal cells (hBMSCs) were cultured on poly(ε-caprolactone) (PCL) nanofiber scaffolds and on PCL flat spun-coat films. After 1 day of culture, hBMSCs were stained for actin, nucleus, mitochondria, and peroxisomes, and then imaged using 3D confocal microscopy. Imaging revealed that the hBMSC cell body (actin) and peroxisomal volume were reduced during culture on nanofibers. In addition, the nucleus and peroxisomes occupied a larger fraction of cell volume during culture on nanofibers than on films, suggesting enhancement of the nuclear and peroxisomal functional capacity. Organelles adopted morphologies with greater 3D character on nanofibers, where the Z-depth (a measure of cell thickness) was increased. Comparisons of organelle positions indicated that the nucleus, mitochondria, and peroxisomes were closer to the cell center (actin) on nanofibers, suggesting that nanofiber culture induced active organelle positioning. The smaller cell volume and more centralized organelle positioning would reduce the energy cost of inter-organelle vesicular transport during culture on nanofibers. Finally, hBMSC bioassay measurements (DNA, peroxidase, bioreductive potential, lactate, and adenosine triphosphate (ATP)) indicated that peroxidase activity may be enhanced during nanofiber culture. These results demonstrate that culture of hBMSCs on nanofibers caused changes in organelle structure and positioning, which may affect organelle functional capacity and transport. Published 2016. This article is a U.S. Government work and is in the public domain in the USA. J Biomed Mater Res Part B: Appl Biomater, 105B: 989-1001, 2017. © 2016 Wiley Periodicals, Inc.