ABSTRACT
Quantification of in vitro osteoclast cultures (e.g. cell number) often relies on manual counting methods. These approaches are labour intensive, time consuming and result in substantial inter- and intra-user variability. This study aimed to develop and validate an automated workflow to robustly quantify in vitro osteoclast cultures. The ilastik software, a machine learning-based image analysis tool, was trained on images of tartrate-resistant acid phosphatase-stained mouse osteoclasts cultured on dentine discs. Assessment of algorithm training showed that osteoclast numbers correlated strongly between manually and automatically quantified values (r = 0.87). The model consistently and faithfully segmented osteoclasts when its output was visually compared to the original reflected-light images. The ability of this method to detect changes in osteoclast number in response to different treatments was validated using zoledronate, ticagrelor, and co-culture with MCF7 breast cancer cells. Manual and automated counting methods both detected a 70% reduction (p < 0.05) in osteoclast number when cultured with 10 nM zoledronate, and a dose-dependent decrease with 1-10 µM ticagrelor (p < 0.05). Co-culture with MCF7 cells increased osteoclast number by ≥ 50% irrespective of quantification method. Overall, an automated image segmentation and analysis workflow that consistently and sensitively identified in vitro osteoclasts was developed. Advantages of this workflow are (1) a significant reduction in user variability of endpoint measurements (93%) and in analysis time (80%); (2) detection of osteoclasts cultured on different substrates and from different species; and (3) ease of use, with the tool and tutorial resources freely available.
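The agreement between manual and automated counts reported above is summarized with a Pearson correlation coefficient. As an illustration of that comparison, a minimal pure-Python sketch; the paired counts below are hypothetical, not the study's data:

```python
import math

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient between two paired lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical paired osteoclast counts per dentine disc
# (illustrative numbers only, not the study's data)
manual = [12, 30, 45, 22, 8, 51, 33, 19]
automated = [14, 28, 47, 20, 9, 49, 35, 17]
print(f"r = {pearson_r(manual, automated):.2f}")
```

A value close to 1 indicates that the automated counts track the manual ones; the study reports r = 0.87 on its own data.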
Subjects
Bone Resorption , Osteoclasts , Mice , Animals , Zoledronic Acid , Ticagrelor , Coculture Techniques , Cultured Cells , Acid Phosphatase/analysis , Tartrate-Resistant Acid Phosphatase , Cell Differentiation
ABSTRACT
The demand for single-cell-level data is constantly increasing within the life sciences. To meet this demand, robust cell segmentation methods that can tackle challenging in vivo tissues with complex morphology are required. However, currently available cell segmentation and volumetric analysis methods perform poorly on 3D images. Here, we generated ShapeMetrics, a MATLAB-based script that segments cells in 3D and, by performing unbiased clustering visualized as a heatmap, separates the cells into subgroups according to their volumetric and morphological differences. The cells can be accurately segregated according to biologically meaningful features such as cell ellipticity, longest axis, cell elongation, or the ratio between cell volume and surface area. Our machine learning-based script enables extraction of a large amount of novel data from microscope images in addition to the traditional information based on fluorescent biomarkers. Furthermore, the cells in different subgroups can be spatially mapped back to their original locations in the tissue image to help elucidate their roles in their respective morphological contexts. To facilitate the transition from bulk analysis to single-cell-level accuracy, we emphasize the user-friendliness of our method by providing detailed step-by-step instructions throughout the pipeline, thereby aiming to reach users with less experience in computational biology.
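ShapeMetrics performs this clustering in MATLAB; as an illustration of the idea only, the sketch below groups cells by morphometric features in Python using a hand-rolled single-linkage agglomerative clustering over z-scored values. All cell measurements here are hypothetical:

```python
import math

def zscore(values):
    """Standardize one feature column (mean 0, sd 1) so no scale dominates."""
    m = sum(values) / len(values)
    sd = math.sqrt(sum((v - m) ** 2 for v in values) / len(values))
    return [(v - m) / sd for v in values]

def single_linkage(points, k):
    """Agglomerative clustering (single linkage) down to k clusters of indices."""
    clusters = [[i] for i in range(len(points))]
    while len(clusters) > k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(math.dist(points[a], points[b])
                        for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] += clusters.pop(j)  # merge the closest pair of clusters
    return clusters

# Hypothetical per-cell features: [ellipticity, longest axis (um), volume/surface ratio]
cells = [[0.9, 12.0, 1.1], [0.85, 11.5, 1.0], [0.3, 30.0, 2.8],
         [0.35, 28.0, 2.6], [0.88, 13.0, 1.2]]
cols = list(zip(*cells))
std = list(zip(*(zscore(list(c)) for c in cols)))
groups = single_linkage(std, 2)
print(groups)  # two morphologically distinct subgroups of cell indices
```

The subgroup labels can then be mapped back to each cell's position in the tissue, as the pipeline above describes.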
Subjects
Computer-Assisted Image Processing/methods , Three-Dimensional Imaging/methods , Algorithms , Animals , Computational Biology , Humans , Microscopy , Software , Spatial Analysis
ABSTRACT
BACKGROUND: The model species Tetranychus urticae causes substantial plant injury and economic losses in the field. The currently accepted method for quantifying spider mite damage in Arabidopsis whole rosettes is time consuming and constitutes a bottleneck for large-scale studies such as mutant screening or quantitative genetic analyses. Here, we describe an improved version of the existing method based on an automatic protocol. The accuracy, precision, reproducibility and concordance of the new approach were validated in two Arabidopsis accessions with opposite damage phenotypes, and the results were compared to the currently available manual method. RESULTS: Image acquisition experiments revealed that automatic settings plus a brightness value of 10 and a black background are the optimal conditions for specific recognition of spider mite damage by software programs. Among the different methods tested, the machine learning-based Ilastik-Fiji tandem was the best procedure, quantifying the damage while maintaining the differential damage range between accessions. In addition, the Ilastik-Fiji tandem showed the lowest variability within a set of conditions and the highest stability under different lighting or background surroundings. Bland-Altman concordance results showed a negative bias for Ilastik-Fiji, which implies a slight underestimation of the damage relative to the manual standard method. CONCLUSIONS: The novel approach using the Ilastik and Fiji programs represents a substantial improvement for quantifying specific spider mite damage in Arabidopsis whole rosettes. The automation of the proposed method, based on interactive machine learning, eliminates the subjectivity and inter-rater variability of the previous manual protocol. In addition, this method saves time and avoids the damage overestimation observed with other methods.
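The Bland-Altman concordance analysis mentioned above reduces to the mean difference (bias) between paired measurements and its 95% limits of agreement; a negative bias means the automated method estimates less damage than the manual standard. A minimal sketch with hypothetical damage scores (not the study's data):

```python
import math

def bland_altman(a, b):
    """Bias (mean difference a - b) and 95% limits of agreement for paired data."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical damage estimates (% rosette area): automated vs manual scoring
ilastik_fiji = [4.1, 7.8, 12.2, 3.5, 9.0]
manual = [4.6, 8.1, 13.0, 4.0, 9.3]
bias, lo, hi = bland_altman(ilastik_fiji, manual)
print(f"bias = {bias:.2f}, limits of agreement = [{lo:.2f}, {hi:.2f}]")
```

Here the bias is negative, mirroring the under-estimation reported for the Ilastik-Fiji tandem.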
Subjects
Agriculture/methods , Automation/instrumentation , Herbivory , Tetranychidae/physiology , Agriculture/instrumentation , Animals , Arabidopsis/physiology , Botany/instrumentation , Botany/methods , Entomology/instrumentation , Entomology/methods
ABSTRACT
Phase-contrast micrographs are often used to confirm proliferation and viability assays. However, they usually serve only as a qualitative tool and cannot exclude with certainty the presence of assay interference by test substances. The complexity of image analysis workflows hinders life scientists from routinely utilizing micrograph data. Here, we present an open-source software-based, combined ilastik segmentation/ImageJ measurement of area (ISIMA) approach for cell monolayer segmentation and confluence percentage measurement in phase-contrast micrographs of Hep G2 cells. The aim of this study is to test whether the proposed approach is suitable for quantitative confirmation of proliferation data acquired by the 3-(4,5-dimethyl-2-thiazolyl)-2,5-diphenyl-2H-tetrazolium bromide (MTT) assay. Our results show that ISIMA is user-friendly and provides reproducible data that correlate strongly with the results of the MTT assay. In conclusion, ISIMA is an affordable, simple, and fast approach for confluence quantification by researchers without an image analysis background.
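Once a micrograph is segmented into monolayer versus background, confluence is simply the covered fraction of the image. A pure-Python sketch on a toy binary mask (a stand-in for what an ilastik segmentation export looks like after thresholding):

```python
def confluence_percent(mask):
    """Percentage of pixels labelled as cell monolayer in a binary mask
    (list of rows of 0/1 values)."""
    total = sum(len(row) for row in mask)
    covered = sum(sum(row) for row in mask)
    return 100.0 * covered / total

# Toy 4x5 binary mask (1 = cell monolayer, 0 = background)
mask = [[1, 1, 0, 0, 0],
        [1, 1, 1, 0, 0],
        [1, 1, 1, 1, 0],
        [0, 1, 1, 0, 0]]
print(f"{confluence_percent(mask):.1f}% confluent")  # 55.0% confluent
```

In the ISIMA workflow this area measurement is done in ImageJ; the arithmetic is the same.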
Subjects
Cell Culture Techniques/instrumentation , Computer-Assisted Image Processing , Phase-Contrast Microscopy/methods , Algorithms , Cell Proliferation , Cell Survival , Hep G2 Cells , Humans
ABSTRACT
There is a growing need for single-cell-level data analysis alongside advances in microscopy techniques. Morphology-based statistics gathered from individual cells are essential for detecting and quantifying even subtle changes within complex tissues, yet the information available from high-resolution imaging is oftentimes sub-optimally utilized due to the lack of proper computational analysis software. Here we present ShapeMetrics, a 3D cell segmentation pipeline that we developed to identify, analyze, and quantify single cells in an image. This MATLAB-based script enables users to extract morphological parameters such as ellipticity, longest axis, cell elongation, or the ratio between cell volume and surface area. We have specifically invested in creating a user-friendly pipeline aimed at biologists with a limited computational background. Our pipeline is presented with detailed stepwise instructions, starting from the establishment of machine learning-based prediction files of immuno-labeled cell membranes, followed by the application of the 3D cell segmentation and parameter extraction script, and leading to the morphometric analysis and spatial visualization of cell clusters defined by their morphometric features.
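ShapeMetrics extracts these parameters in MATLAB; to make the voxel-level meaning of "volume" and "surface area" concrete, here is a crude pure-Python stand-in that counts filled voxels and exposed voxel faces for one segmented cell (the tiny cube below is a toy object, not real data):

```python
def volume_and_surface(voxels):
    """Voxel count and exposed-face count for a 3D binary volume,
    a crude discrete stand-in for cell volume and surface area."""
    filled = {(z, y, x)
              for z, plane in enumerate(voxels)
              for y, row in enumerate(plane)
              for x, v in enumerate(row) if v}
    faces = 0
    for z, y, x in filled:
        # a face is exposed when the neighbouring voxel is empty
        for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)):
            if (z + dz, y + dy, x + dx) not in filled:
                faces += 1
    return len(filled), faces

# Toy 2x2x2 solid cube of voxels representing one segmented cell
cube = [[[1, 1], [1, 1]],
        [[1, 1], [1, 1]]]
vol, surf = volume_and_surface(cube)
print(f"volume = {vol} voxels, surface = {surf} faces, ratio = {vol / surf:.3f}")
```

The volume-to-surface ratio computed this way is one of the morphometric features the pipeline clusters on.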
Subjects
Three-Dimensional Imaging , Software , Three-Dimensional Imaging/methods , Microscopy/methods , Cell Cycle , Single-Cell Analysis/methods , Computer-Assisted Image Processing/methods
ABSTRACT
Visualizing nerve cells has been fundamental for the systematic description of brain structure and function in humans and other species. Different approaches have aimed to unravel the morphological features of neuron types and their diversity. The inherent complexity of human nervous tissue and the need for proper histological processing have made studying human dendrites and spines in postmortem samples challenging. In this study, we used Golgi data and open-source software for 3D image reconstruction of human neurons from the cortical amygdaloid nucleus, showing different dendrites and pleomorphic spines at different angles. The procedures required minimal equipment and generated high-quality images of differently shaped cells. We used the "single-section" Golgi method adapted for the human brain to generate 3D reconstructed images of the neuronal cell body and the dendritic ramification by adopting a neuronal tracing procedure. In addition, we produced 3D reconstructions to visualize heterogeneous dendritic spines using a supervised machine learning-based algorithm for image segmentation. These tools enhanced the visual display of information on the spatial orientation of dendritic branches and on dendritic spines of varied sizes and shapes in these human subcortical neurons. The same approach can be adapted to other techniques, other areas of the central or peripheral nervous system, and comparative analyses between species.
Subjects
Dendrites , Olfactory Cortex , Humans , Dendrites/physiology , Three-Dimensional Imaging , Neurons , Software , Dendritic Spines/physiology
ABSTRACT
Trichomes are unicellular or multicellular hair-like appendages that develop on the aerial epidermis of most plant species and act as a protective barrier against natural hazards. For this reason, evaluating trichome density is a valuable approach for elucidating plant defence responses to a continuously challenging environment. However, previous methods for trichome counting, although reliable, require specialised equipment, software or prior manipulation of the plant tissue, which poses a complicated hurdle for many laboratories. Here, we propose a new fast, accessible and user-friendly method to quantify trichomes that overcomes these drawbacks and makes trichome quantification a reachable option for the scientific community. Specifically, this new method uses machine learning as a reliable tool for quantifying trichomes, following an Ilastik-Fiji tandem approach performed directly on 2D images. Our method shows high reliability and efficacy for trichome quantification in Arabidopsis thaliana, as shown by comparing manual and automated results in Arabidopsis accessions with diverse trichome densities. Owing to the plasticity that machine learning provides, this method is also adaptable to other plant species, demonstrating its potential to serve a wider scientific community.
Subjects
Arabidopsis/anatomy & histology , Machine Learning , Trichomes/anatomy & histology , Arabidopsis Proteins/analysis , Machine Learning/standards , Machine Learning/trends , Plant Epidermis/anatomy & histology , Reproducibility of Results , Trichomes/growth & development
ABSTRACT
The proper inspection of a crack pattern over time is a critical diagnostic step in building a thorough knowledge of the health state of a structure. When monitoring cracks propagating on a planar surface, a single-image-based approach is a more convenient solution (in cost and logistics) than subjective operator-based solutions. Machine learning (ML)-based monitoring solutions offer the advantage of automated crack detection; however, complex and time-consuming training must usually be carried out. This study presents a simple and automated ML-based crack monitoring approach, implemented in open-source software, that requires only a single image for training. The effectiveness of the approach is assessed through work at a controlled site and a real case-study site. For both sites, the generated outputs are significant in terms of accuracy (~1 mm), repeatability (sub-mm) and precision (sub-pixel). The presented results highlight that successful crack detection is achievable with a straightforward ML-based training procedure conducted on only a single image of the multi-temporal sequence. Furthermore, an innovative camera kit enabled automated acquisition and transmission, which are fundamental for Internet of Things (IoT) structural health monitoring, reducing user-based operations and increasing safety.
ABSTRACT
The damage that herbivores inflict on plants is a key component of their interaction. Several methods have been proposed to quantify the damage caused by chewing insects, but such methods are not very successful when the damage is inflicted by a cell-sucking organism. Here, we present a protocol that allows non-destructive quantification of the damage inflicted by cell-sucking arthropods, robustly filtering out leaf vascular structures that might otherwise be mistakenly classified as damage in many plant species. The protocol is designed for the laboratory environment and uses Fiji and ilastik, two free software packages.
Subjects
Arthropods , Herbivory , Animals , Insects , Plant Leaves , Plants
ABSTRACT
Measuring the concentration and viability of fungal cells is an important and fundamental procedure in scientific research and industrial fermentation. Given the drawbacks of manual cell counting, large quantities of fungal cells require methods that provide easy, objective and reproducible high-throughput counts, especially for samples with complicated backgrounds. To meet this challenge, we developed an easy-to-use fungal cell counting pipeline that combines the machine learning-based ilastik tool with the freeware ImageJ, together with a conventional photomicroscope. Briefly, learning from labels provided by the user, ilastik performs segmentation and classification automatically in batch processing mode and thus discriminates fungal cells from complex backgrounds. The files processed by ilastik can be read by ImageJ, which computes the numeric results with the macro 'Fungal Cell Counter'. Taking the yeast Cryptococcus deneoformans and the filamentous fungus Pestalotiopsis microspora as examples, we observed that the customizable software algorithm reduced inter-operator errors significantly and achieved accurate and objective results, whereas manual counting with a haemocytometer exhibited errors between repeats and required more time. In summary, a convenient, rapid, reproducible and extremely low-cost method to count yeast cells and fungal spores is described here, which can be applied to many kinds of eukaryotic microorganisms in genetics, cell biology and industrial fermentation.
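The counting step at the end of such a pipeline amounts to labelling connected components in the segmented binary image and discarding objects below a size threshold (debris). The ImageJ macro does this internally; a pure-Python sketch of the same idea on a toy mask:

```python
def count_cells(mask, min_size=1):
    """Count connected components (4-connectivity) in a binary image,
    ignoring objects smaller than min_size pixels."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    count = 0
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                # flood-fill one object and measure its size
                stack, size = [(y, x)], 0
                seen[y][x] = True
                while stack:
                    cy, cx = stack.pop()
                    size += 1
                    for ny, nx in ((cy + 1, cx), (cy - 1, cx),
                                   (cy, cx + 1), (cy, cx - 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                if size >= min_size:
                    count += 1
    return count

# Toy segmentation mask with three separate "cells"
mask = [[1, 1, 0, 0, 1],
        [1, 0, 0, 0, 1],
        [0, 0, 1, 1, 0]]
print(count_cells(mask))  # 3
```

Raising `min_size` filters out small spurious detections, the same role size filtering plays in the macro-based count.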
ABSTRACT
Tracking cells is one of the main challenges in biology, as it often requires time-consuming annotations, and the images can have a low signal-to-noise ratio while containing a large number of cells. Here we present two methods for detecting and tracking cells using the open-source Fiji and ilastik frameworks. A straightforward approach is described using Fiji, consisting of a pre-processing and segmentation phase followed by a tracking phase based on the overlap of objects across the image sequence. Using ilastik, a classifier is trained through manual annotations both to detect cells against the background and to recognize false detections and merging cells. We describe these two methods in a step-by-step fashion, using a time-lapse microscopy movie of HeLa cells as an example.
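The overlap-based tracking phase links each labelled object in one frame to the object in the next frame that shares the most pixels with it. A minimal pure-Python sketch of that linking step on two toy label images (not the Fiji implementation itself, just the underlying idea):

```python
def link_by_overlap(frame_a, frame_b):
    """Map each labelled object in frame_a to the frame_b label it overlaps
    most, mimicking overlap-based linking between consecutive frames."""
    overlap = {}
    for row_a, row_b in zip(frame_a, frame_b):
        for a, b in zip(row_a, row_b):
            if a and b:  # both pixels belong to some object
                overlap[(a, b)] = overlap.get((a, b), 0) + 1
    links = {}
    for (a, b), n in overlap.items():
        if a not in links or n > overlap[(a, links[a])]:
            links[a] = b  # keep the candidate with the largest overlap
    return links

# Two toy label images: object 1 drifts one pixel right, object 2 stays put
t0 = [[1, 1, 0, 0],
      [0, 0, 2, 2]]
t1 = [[0, 1, 1, 0],
      [0, 0, 2, 2]]
print(link_by_overlap(t0, t1))  # {1: 1, 2: 2}
```

Objects with no overlap in the next frame get no link, which is where disappearance (or a detection gap) would be handled in a full tracker.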
Subjects
Cell Tracking/methods , Computer-Assisted Image Processing/methods , Intravital Microscopy/methods , Software , Time-Lapse Imaging/methods , Cell Culture Techniques , HeLa Cells , Humans , Intravital Microscopy/instrumentation , Signal-To-Noise Ratio , Time-Lapse Imaging/instrumentation
ABSTRACT
Segmentation is one of the most ubiquitous problems in biological image analysis. Here we present a machine learning-based solution to it as implemented in the open-source ilastik toolkit. We give a broad description of the underlying theory and demonstrate two workflows: Pixel Classification and Autocontext. We illustrate their use on a challenging problem in electron microscopy image segmentation. After following this walk-through, we expect readers to be able to apply the necessary steps to their own data and segment their images with either workflow.
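The core idea of Pixel Classification is that each pixel gets a feature vector (in ilastik, a bank of intensity, edge and texture filters) and a classifier trained on sparse user annotations labels every pixel. ilastik uses a random forest; the sketch below swaps in a much simpler nearest-centroid classifier over two toy features (raw intensity and 3x3 neighbourhood mean) purely to show the shape of the workflow. Image, seeds and class names are all hypothetical:

```python
def pixel_features(img, y, x):
    """Per-pixel feature vector: intensity and 3x3 neighbourhood mean
    (a tiny stand-in for ilastik's filter bank)."""
    h, w = len(img), len(img[0])
    neigh = [img[ny][nx]
             for ny in range(max(0, y - 1), min(h, y + 2))
             for nx in range(max(0, x - 1), min(w, x + 2))]
    return (img[y][x], sum(neigh) / len(neigh))

def train_centroids(img, labels):
    """Mean feature vector per annotated class (labels: (y, x) -> class name)."""
    sums = {}
    for (y, x), c in labels.items():
        f = pixel_features(img, y, x)
        (s0, s1), n = sums.get(c, ((0.0, 0.0), 0))
        sums[c] = ((s0 + f[0], s1 + f[1]), n + 1)
    return {c: (s[0] / n, s[1] / n) for c, (s, n) in sums.items()}

def classify(img, centroids):
    """Assign every pixel to the class with the nearest centroid in feature space."""
    def nearest(f):
        return min(centroids,
                   key=lambda c: sum((a - b) ** 2 for a, b in zip(f, centroids[c])))
    return [[nearest(pixel_features(img, y, x)) for x in range(len(img[0]))]
            for y in range(len(img))]

# Toy image: bright object on dark background, with two sparse user annotations
img = [[10, 12, 90, 95],
       [11, 13, 92, 96],
       [ 9, 10, 88, 94]]
seeds = {(0, 0): "bg", (1, 3): "cell"}
result = classify(img, train_centroids(img, seeds))
print(result)
```

From two annotated pixels, every pixel in the toy image is labelled, which is the essence of the interactive workflow; Autocontext then feeds a first round of predictions back in as extra features.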
Subjects
Computer-Assisted Image Processing/methods , Machine Learning , Software , Animals , Datasets as Topic , Mice , Electron Microscopy/methods , Mitochondria , Somatosensory Cortex/cytology , Somatosensory Cortex/diagnostic imaging , Workflow
ABSTRACT
Connectomics, the study of how neurons wire together in the brain, is at the forefront of modern neuroscience research. However, many connectomics studies are limited by the time and precision needed to correctly segment large volumes of electron microscopy (EM) image data. We present here a semi-automated segmentation pipeline using freely available software that can significantly decrease the segmentation time needed to extract both nuclei and cell bodies from EM image volumes.