Results 1 - 20 of 197
1.
Brief Bioinform ; 25(5)2024 Jul 25.
Article in English | MEDLINE | ID: mdl-39154193

ABSTRACT

Cell segmentation is a fundamental task in analyzing biomedical images. Many computational methods have been developed for cell and instance segmentation, but their performance across diverse scenarios is not well understood. We systematically evaluated 18 segmentation methods for cell nucleus and whole-cell segmentation using light microscopy and fluorescence staining images. We found that general-purpose methods incorporating the attention mechanism exhibit the best overall performance. We identified factors influencing segmentation performance, including image channels, choice of training data, and cell morphology, and evaluated the generalizability of methods across image modalities. We also provide guidelines for choosing the optimal segmentation method in various real application scenarios. Finally, we developed Seggal, an online resource for downloading segmentation models pre-trained on various tissue and cell types, substantially reducing the time and effort required to train cell segmentation models.


Subjects
Image Processing, Computer-Assisted; Humans; Image Processing, Computer-Assisted/methods; Computational Biology/methods; Algorithms; Cell Nucleus
2.
Brief Bioinform ; 25(2)2024 Jan 22.
Article in English | MEDLINE | ID: mdl-38271484

ABSTRACT

Accurate approaches for quantifying muscle fibers are essential in biomedical research and meat production. In this study, we address the limitations of existing approaches for hematoxylin and eosin-stained muscle fibers by manually and semi-automatically labeling over 660,000 muscle fibers to create a large dataset. We then designed MyoV, an automated image segmentation and quantification tool based on Mask R-CNN (mask regions with convolutional neural networks) with a residual network and feature pyramid network as the backbone, a design that allows the tool to process muscle fibers of different sizes and ages. MyoV achieves detection rates of 0.93-0.96 and precision levels of 0.91-0.97 and exhibits superior quantification performance, surpassing both manual methods and commonly employed algorithms and software, particularly for whole slide images (WSIs). Moreover, MyoV proves to be a powerful and suitable tool for species with different muscle development, including mice, a crucial model for muscle disease diagnosis, and agricultural animals, a significant meat source for humans. Finally, we integrate this tool into visualization software with functions such as segmentation, area determination and automatic labeling, allowing seamless processing of over 400,000 muscle fibers within a WSI without model adjustment and providing researchers with an easy-to-use visual interface to browse functional options and quantify muscle fibers from WSIs.
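
The architecture named above, Mask R-CNN with a residual network and feature pyramid network backbone, is available off the shelf in torchvision. The sketch below is a minimal, hypothetical illustration of running such a model on a single image; it is not the authors' MyoV tool, the COCO weights would need fine-tuning on fiber annotations, and the file name and confidence threshold are placeholders.

    import torch
    import torchvision
    from torchvision.io import read_image
    from torchvision.transforms.functional import convert_image_dtype

    # Mask R-CNN with a ResNet-50 + FPN backbone, standing in for the
    # ResNet/FPN design described in the abstract (COCO-pretrained weights).
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
    model.eval()

    img = convert_image_dtype(read_image("muscle_section.png"), torch.float)  # hypothetical file
    with torch.no_grad():
        pred = model([img])[0]  # dict with 'boxes', 'labels', 'scores', 'masks'

    keep = pred["scores"] > 0.5              # placeholder confidence cut-off
    masks = pred["masks"][keep][:, 0] > 0.5  # one boolean mask per detected object
    print(f"Detected {masks.shape[0]} candidate fibers")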


Subjects
Deep Learning; Humans; Animals; Mice; Image Processing, Computer-Assisted/methods; Muscle Fibers, Skeletal; Neural Networks, Computer; Algorithms
3.
Cytometry A ; 105(4): 266-275, 2024 04.
Article in English | MEDLINE | ID: mdl-38111162

ABSTRACT

In biomedicine, automatic processing of medical microscope images plays a key role in subsequent analysis and diagnosis. Cell or nucleus segmentation is one of the most challenging tasks in microscope image processing. Because of frequent overlap, few segmentation methods achieve satisfactory accuracy. In this paper, we propose an approach to separate overlapped cells or nuclei based on outer Canny edges and morphological erosion. Threshold selection is first used to segment the foreground and background of cell or nucleus images. For each binary connected domain in the segmented image, an intersection-based edge selection method is proposed to choose the outer Canny edges of the overlapped cells or nuclei. The outer Canny edges are used to generate a binary cell or nucleus image, from which cell or nucleus seeds are computed by the proposed morphological erosion method. Nuclei of human U2OS cells, mouse NIH 3T3 cells and synthetic cells are used to evaluate the proposed approach. Quantitative accuracy is measured by the Dice score, and the proposed approach achieves 95.53%. Both quantitative and qualitative comparisons show that the proposed approach separates cells or nuclei in three publicly available datasets more accurately than the area constrained morphological erosion (ACME), iterative erosion (IE), morphology and watershed (MW), generalized Laplacian of Gaussian filters (GLGF) and ellipse fitting (EF) methods.
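
The pipeline outlined above (threshold the foreground, keep outer Canny edges, erode to obtain seeds, then grow regions) maps closely onto standard scikit-image operations. The following is a rough sketch under that reading, not the authors' code; the Canny sigma and erosion footprint are arbitrary placeholders, and the Dice helper matches the metric quoted in the abstract.

    import numpy as np
    from scipy import ndimage as ndi
    from skimage import feature, filters, morphology, segmentation

    def separate_nuclei(gray):
        """Rough overlap separation: Otsu foreground, Canny edges, eroded seeds, watershed."""
        fg = gray > filters.threshold_otsu(gray)             # foreground/background split
        edges = feature.canny(gray, sigma=2.0)               # sigma is a placeholder
        interior = fg & ~morphology.binary_dilation(edges)   # suppress edge pixels
        seeds = morphology.binary_erosion(interior, morphology.disk(5))  # shrink towards one seed per nucleus
        markers, _ = ndi.label(seeds)
        dist = ndi.distance_transform_edt(fg)
        return segmentation.watershed(-dist, markers, mask=fg)

    def dice(pred, truth):
        """Dice score between two binary masks."""
        pred, truth = pred.astype(bool), truth.astype(bool)
        return 2 * np.logical_and(pred, truth).sum() / (pred.sum() + truth.sum())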


Subjects
Algorithms; Cell Nucleus; Humans; Animals; Mice; NIH 3T3 Cells; Microscopy; Image Processing, Computer-Assisted/methods
4.
Cytometry A ; 105(7): 536-546, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38420862

ABSTRACT

The gold standard of leukocyte differentiation is manual examination of blood smears, which is not only time- and labor-intensive but also susceptible to human error. For automatic classification, there is still no comparative study spanning cell segmentation, feature extraction, and cell classification in which a variety of machine and deep learning models are compared with home-developed approaches. In this study, traditional machine learning (K-means clustering) and deep learning (U-Net, U-Net + ResNet18, and U-Net + ResNet34) were used for cell segmentation, producing segmentation accuracies of 94.36% versus 99.17% on the CellaVision dataset and 93.20% versus 98.75% on the BCCD dataset, confirming that deep learning produces higher performance than traditional machine learning in leukocyte classification. In addition, a series of deep learning models, including AlexNet, VGG16, and ResNet18, was adopted for feature extraction and cell classification of leukocytes, producing classification accuracies of 91.31%, 97.83%, and 100% on CellaVision and 81.18%, 91.64% and 97.82% on BCCD, confirming the benefit of deeper neural networks in leukocyte classification. As demonstrations, this study further conducted cell-type classification on the ALL-IDB2 and PCB-HBC datasets, producing accuracies of 100% and 98.49%, the highest among published studies, validating the deep learning models used in this study.
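
The classification stage described above corresponds to standard transfer learning with torchvision backbones. A hedged sketch of the ResNet18 variant follows; the class count, learning rate, and training-step details are illustrative assumptions rather than the study's settings.

    import torch
    import torch.nn as nn
    from torchvision import models

    NUM_CLASSES = 5  # assumption: five leukocyte types (neutrophil, lymphocyte, monocyte, eosinophil, basophil)

    # ImageNet-pretrained ResNet18 with its classifier head replaced.
    net = models.resnet18(weights="DEFAULT")
    net.fc = nn.Linear(net.fc.in_features, NUM_CLASSES)

    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(net.parameters(), lr=1e-4)  # placeholder learning rate

    def train_step(images, labels):
        """One optimisation step on a batch of segmented leukocyte crops."""
        optimizer.zero_grad()
        loss = criterion(net(images), labels)
        loss.backward()
        optimizer.step()
        return loss.item()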


Subjects
Deep Learning; Leukocytes; Neural Networks, Computer; Humans; Leukocytes/cytology; Leukocytes/classification; Machine Learning; Image Processing, Computer-Assisted/methods; Algorithms
5.
J Microsc ; 296(1): 79-93, 2024 Oct.
Article in English | MEDLINE | ID: mdl-38994744

ABSTRACT

Micropatterning is a reliable method for quantifying the pluripotency of human-induced pluripotent stem cells (hiPSCs), which differentiate to form a spatial pattern of three sorted, ordered and non-overlapping germ layers on the micropattern. In this study, we propose a deep learning method to quantify the spatial patterning of germ layers in the early differentiation stage of hiPSCs from micropattern images. We propose encoder-decoder U-Net structures that learn labelled Hoechst (DNA-stained) hiPSC regions from corresponding Hoechst and bright-field micropattern images, in order to segment hiPSCs in Hoechst or bright-field images. We also propose a U-Net structure to extract extraembryonic regions on a micropattern, and an algorithm that compares the intensities of the fluorescence images staining the respective germ-layer cells and extracts their regions. The proposed method can thus quantify the pluripotency of an hiPSC line through spatial patterning, including the numbers, areas and distributions of germ-layer and extraembryonic cells on a micropattern, and can reveal the formation of hiPSCs and germ layers in the early differentiation stage by segmenting live-cell bright-field images. In our assay, cell-number accuracy reached 86% and 85%, and cell-region accuracy 89% and 81%, for segmenting Hoechst and bright-field micropattern images, respectively. Applications to micropattern images of multiple hiPSC lines, micropattern sizes, marker groups, and living and fixed cells show that the proposed method can serve as a useful protocol and tool for quantifying the pluripotency of a new hiPSC line before providing it to the scientific community.


Subjects
Cell Differentiation; Deep Learning; Induced Pluripotent Stem Cells; Humans; Induced Pluripotent Stem Cells/cytology; Image Processing, Computer-Assisted/methods; Germ Layers/cytology
6.
J Microsc ; 294(1): 5-13, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38196346

ABSTRACT

Quantitative phase imaging (QPI) is a powerful tool for label-free visualisation of living cells. Here, we compare two QPI microscopes - the Telight Q-Phase microscope and the Nanolive 3D Cell Explorer-fluo microscope. Both systems provide unbiased information about cell morphology, such as individual cell dry mass, perimeter and area. The Q-Phase microscope uses artefact-free, coherence-controlled holographic imaging technology to visualise cells in real time with minimal phototoxicity. The 3D Cell Explorer-fluo employs laser-based holotomography to reconstruct 3D images of living cells, visualising their internal structures and dynamics. Here, we analysed the strengths and limitations of both microscopes when examining two morphologically distinct cell lines - the cuboidal epithelial MDCK cells which form multicellular clusters and solitary growing Rat2 fibroblasts. We focus mainly on the ability of the devices to generate images suitable for single-cell segmentation by the built-in software, and we discuss the segmentation results and quantitative data generated from the segmented images. We show that both microscopes offer slightly different advantages, and the choice between them depends on the specific requirements and goals of the user.


Subjects
Holography; Microscopy; Microscopy/methods; Quantitative Phase Imaging; Cell Line; Holography/methods; Lasers
7.
BMC Med Inform Decis Mak ; 24(1): 124, 2024 May 15.
Article in English | MEDLINE | ID: mdl-38750526

ABSTRACT

BACKGROUND: Spatial molecular profiling depends on accurate cell segmentation. Identification and quantitation of individual cells in dense tissues, e.g., highly inflamed tissue caused by viral infection or immune reaction, remains a challenge. METHODS: We first assess the performance of 18 deep learning-based cell segmentation models, either pre-trained or trained by us using two public image sets, on a set of immunofluorescence images stained with immune cell surface markers in skin tissue obtained during human herpes simplex virus (HSV) infection. We then further train eight of these models using up to 10,000+ training instances from the current image set. Finally, we seek to improve performance by tuning parameters of the most successful method from the previous step. RESULTS: The best model before fine-tuning achieves a mean Average Precision (mAP) of 0.516. Prediction performance improves substantially after training. The best model is the cyto model from Cellpose. After training, it achieves an mAP of 0.694; with further parameter tuning, the mAP reaches 0.711. CONCLUSION: Selecting the best model among the existing approaches and further training the model with images of interest produce the most gain in prediction performance. The performance of the resulting model compares favorably to human performance. The imperfection of the final model performance can be attributed to the moderate signal-to-noise ratio in the image set.
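
For readers who want to reproduce the 'cyto' baseline, the classic Cellpose Python API (versions 1 to 3; later releases changed the interface) looks roughly like the sketch below. The channel setting, diameter handling, and file name are assumptions, not the paper's tuned parameters.

    from cellpose import models
    import tifffile

    imgs = [tifffile.imread("if_skin_tile.tif")]   # hypothetical immunofluorescence tile

    # Pre-trained generalist cytoplasm model ("cyto"); gpu=False keeps the sketch portable.
    model = models.Cellpose(gpu=False, model_type="cyto")

    # channels=[0, 0] treats the image as single-channel grayscale;
    # diameter=None lets Cellpose estimate the typical cell diameter.
    masks, flows, styles, diams = model.eval(imgs, diameter=None, channels=[0, 0])

    print("cells found:", masks[0].max())          # labels run 1..N, 0 is background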


Subjects
Deep Learning; Humans; Image Processing, Computer-Assisted/methods; Herpes Simplex; Skin/diagnostic imaging; Biomarkers
8.
Expert Syst Appl ; 238(Pt D)2024 Mar 15.
Article in English | MEDLINE | ID: mdl-38646063

ABSTRACT

Accurate and automatic segmentation of individual cell instances in microscopy images is a vital step for quantifying the cellular attributes, which can subsequently lead to new discoveries in biomedical research. In recent years, data-driven deep learning techniques have shown promising results in this task. Despite the success of these techniques, many fail to accurately segment cells in microscopy images with high cell density and low signal-to-noise ratio. In this paper, we propose a novel 3D cell segmentation approach DeepSeeded, a cascaded deep learning architecture that estimates seeds for a classical seeded watershed segmentation. The cascaded architecture enhances the cell interior and border information using Euclidean distance transforms and detects the cell seeds by performing voxel-wise classification. The data-driven seed estimation process proposed here allows segmenting touching cell instances from a dense, intensity-inhomogeneous microscopy image volume. We demonstrate the performance of the proposed method in segmenting 3D microscopy images of a particularly dense cell population called bacterial biofilms. Experimental results on synthetic and two real biofilm datasets suggest that the proposed method leads to superior segmentation results when compared to state-of-the-art deep learning methods and a classical method.
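
DeepSeeded's final step, a classical seeded watershed driven by learned seeds, can be approximated with purely classical seed detection when no trained network is available. The sketch below shows that classical fallback (Euclidean distance transform plus local maxima in scikit-image), not the authors' cascaded CNN; the minimum seed distance is a placeholder.

    import numpy as np
    from scipy import ndimage as ndi
    from skimage.feature import peak_local_max
    from skimage.segmentation import watershed

    def seeded_watershed_3d(binary_volume, min_seed_distance=5):
        """Split touching cells in a 3D binary mask via EDT-based seeds + watershed."""
        dist = ndi.distance_transform_edt(binary_volume)
        # Local maxima of the distance map act as one seed per (roughly convex) cell.
        coords = peak_local_max(dist, min_distance=min_seed_distance, labels=binary_volume)
        seeds = np.zeros(binary_volume.shape, dtype=np.int32)
        seeds[tuple(coords.T)] = np.arange(1, len(coords) + 1)
        return watershed(-dist, seeds, mask=binary_volume)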

9.
BMC Bioinformatics ; 24(1): 388, 2023 Oct 12.
Article in English | MEDLINE | ID: mdl-37828466

ABSTRACT

BACKGROUND: Image segmentation pipelines are commonly used in microscopy to identify cellular compartments such as the nucleus and cytoplasm, but there are few standards for comparing segmentation accuracy across pipelines. The process of selecting a segmentation assessment pipeline can seem daunting to researchers due to the number and variety of metrics available for evaluating segmentation quality. RESULTS: Here we present automated pipelines to obtain a comprehensive set of 69 metrics to evaluate segmented data and propose a selection methodology for models based on quantitative analysis, dimension reduction or unsupervised classification techniques and informed selection criteria. CONCLUSION: We show that the metrics used here can often be reduced to a small number of metrics that give a more complete understanding of segmentation accuracy, with different groups of metrics providing sensitivity to different types of segmentation error. These tools are delivered as easy-to-use Python libraries, command-line tools, Common Workflow Language tools, and Web Image Processing Pipeline interactive plugins to ensure a wide range of users can access and use them. We also show how our evaluation methods can be used to observe changes in segmentations across modern machine learning/deep learning workflows and use cases.
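
Reducing a large metric panel to a few informative axes, as described above, can be prototyped with a plain PCA on the metrics table. A minimal sketch follows, assuming the metrics are already collected in a CSV; the file name and column layout are hypothetical and this is not the authors' pipeline.

    import pandas as pd
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    metrics = pd.read_csv("segmentation_metrics.csv", index_col=0)  # rows: images, cols: 69 metrics

    # Standardise, then keep enough components to explain 95% of the variance.
    scaled = StandardScaler().fit_transform(metrics.values)
    pca = PCA(n_components=0.95).fit(scaled)

    print("components retained:", pca.n_components_)
    loadings = pd.DataFrame(pca.components_, columns=metrics.columns)
    # Metrics with large absolute loadings on the first component are the least redundant ones.
    print(loadings.iloc[0].abs().sort_values(ascending=False).head(10))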


Subjects
Algorithms; Image Processing, Computer-Assisted; Image Processing, Computer-Assisted/methods; Microscopy; Machine Learning; Cytoplasm
10.
Cytometry A ; 103(3): 189-192, 2023 03.
Article in English | MEDLINE | ID: mdl-36602064

ABSTRACT

The purpose of this 20-target imaging mass cytometry (IMC) panel is to identify the main cell types in formalin-fixed paraffin-embedded (FFPE) mouse liver tissue with the Hyperion™ mass cytometer from Standard BioTools (formerly Fluidigm). The antibody panel includes markers to identify hepatocytes (E-cadherin, HNF4α (hepatocyte nuclear factor 4 alpha), Arginase-1), liver sinusoidal endothelial cells (LSECs; CD206), Kupffer cells (F4/80, CD206), neutrophils (Ly6G, CD11b), bone marrow-derived myeloid cells (BMDMs; CD11b), cholangiocytes (E-cadherin high), endothelial cells (CD31, α-SMA), plasmacytoid dendritic cells (CD317), B cells (CD19), T cells (CD3e, CD4, CD8a) and NK cells (CD161), as well as markers of cell activation (CD44, CD74) and proliferation (Ki-67), and markers to aid in cell segmentation (Pan-Actin, E-cadherin, histone H3). The panel has been tested in other mouse tissues, namely the spleen, colon and lung, and therefore is likely to work across various mouse FFPE samples of interest. It has not been tested using human samples, frozen samples or in suspension mass cytometry because FFPE treatment profoundly changes epitope conformation. In summary, this panel is a powerful tool for pre-clinical research to determine cellular abundance and spatial distribution within mouse tissues and serves as a scaffold to which more targets can be added for project-specific requirements.


Subjects
Endothelial Cells; Liver; Humans; Mice; Animals; Paraffin Embedding/methods; Liver/metabolism; Formaldehyde/metabolism; Image Cytometry; Tissue Fixation/methods
11.
BMC Med Imaging ; 23(1): 137, 2023 09 21.
Article in English | MEDLINE | ID: mdl-37735354

ABSTRACT

BACKGROUND: Cervical cell segmentation is a fundamental step in automated cervical cancer cytology screening. The aim of this study was to develop and evaluate a deep ensemble model for cervical cell segmentation, including both cytoplasm and nucleus segmentation. METHODS: The Cx22 dataset was used to develop the automated cervical cell segmentation algorithm. U-Net, U-Net++, DeepLabV3, DeepLabV3Plus, TransUNet, and SegFormer were used as candidate model architectures, and each of the first four architectures adopted two different encoders chosen from ResNet34, ResNet50 and DenseNet121. Models were trained under two settings: trained from scratch, or with encoders initialized from ImageNet pre-trained models and all layers then fine-tuned. For every segmentation task, four models were chosen as base models, and unweighted averaging was adopted as the model ensemble method. RESULTS: U-Net and U-Net++ with ResNet34 and DenseNet121 encoders trained using transfer learning consistently performed better than other models, so they were chosen as base models. The ensemble model obtained Dice similarity coefficient, sensitivity and specificity of 0.9535 (95% CI: 0.9534-0.9536), 0.9621 (0.9619-0.9622) and 0.9835 (0.9834-0.9836) for cytoplasm segmentation, and 0.7863 (0.7851-0.7876), 0.9581 (0.9573-0.959) and 0.9961 (0.9961-0.9962) for nucleus segmentation. The Dice, sensitivity and specificity of the baseline models were 0.948, 0.954 and 0.9823 for cytoplasm segmentation and 0.750, 0.713 and 0.9988 for nucleus segmentation. Except for the specificity of cytoplasm segmentation, all metrics outperformed the best baseline models (P < 0.05) by a moderate margin. CONCLUSIONS: The proposed algorithm achieved better performance on cervical cell segmentation than baseline models. It could potentially be used in an automated cervical cancer cytology screening system.
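
The base models named above (U-Net and U-Net++ with ResNet34/DenseNet121 encoders) and their unweighted average can be sketched with the segmentation_models_pytorch package. This is an illustrative reconstruction, not the study's code; the input size and binarization threshold are placeholders.

    import torch
    import segmentation_models_pytorch as smp

    # Four base models: two architectures x two ImageNet-pretrained encoders.
    base_models = [
        smp.Unet(encoder_name="resnet34", encoder_weights="imagenet", classes=1),
        smp.Unet(encoder_name="densenet121", encoder_weights="imagenet", classes=1),
        smp.UnetPlusPlus(encoder_name="resnet34", encoder_weights="imagenet", classes=1),
        smp.UnetPlusPlus(encoder_name="densenet121", encoder_weights="imagenet", classes=1),
    ]

    def ensemble_predict(image_batch):
        """Unweighted average of the per-model foreground probabilities."""
        probs = []
        with torch.no_grad():
            for m in base_models:
                m.eval()
                probs.append(torch.sigmoid(m(image_batch)))
        return torch.stack(probs).mean(dim=0) > 0.5  # binary mask; 0.5 is a placeholder threshold

    x = torch.randn(1, 3, 256, 256)                  # dummy RGB cervical-cell tile
    mask = ensemble_predict(x)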


Subjects
Uterine Cervical Neoplasms; Humans; Female; Uterine Cervical Neoplasms/diagnostic imaging; Algorithms; Neck; Machine Learning
12.
BMC Biol ; 20(1): 174, 2022 08 05.
Article in English | MEDLINE | ID: mdl-35932043

ABSTRACT

BACKGROUND: High-throughput live-cell imaging is a powerful tool to study dynamic cellular processes in single cells but creates a bottleneck at the stage of data analysis, due to the large amount of data generated and limitations of analytical pipelines. Recent progress on deep learning dramatically improved cell segmentation and tracking. Nevertheless, manual data validation and correction is typically still required and tools spanning the complete range of image analysis are still needed. RESULTS: We present Cell-ACDC, an open-source user-friendly GUI-based framework written in Python, for segmentation, tracking and cell cycle annotations. We included state-of-the-art deep learning models for single-cell segmentation of mammalian and yeast cells alongside cell tracking methods and an intuitive, semi-automated workflow for cell cycle annotation of single cells. Using Cell-ACDC, we found that mTOR activity in hematopoietic stem cells is largely independent of cell volume. By contrast, smaller cells exhibit higher p38 activity, consistent with a role of p38 in regulation of cell size. Additionally, we show that, in S. cerevisiae, histone Htb1 concentrations decrease with replicative age. CONCLUSIONS: Cell-ACDC provides a framework for the application of state-of-the-art deep learning models to the analysis of live cell imaging data without programming knowledge. Furthermore, it allows for visualization and correction of segmentation and tracking errors as well as annotation of cell cycle stages. We embedded several smart algorithms that make the correction and annotation process fast and intuitive. Finally, the open-source and modularized nature of Cell-ACDC will enable simple and fast integration of new deep learning-based and traditional methods for cell segmentation, tracking, and downstream image analysis. Source code: https://github.com/SchmollerLab/Cell_ACDC.


Subjects
Image Processing, Computer-Assisted; Saccharomyces cerevisiae; Cell Cycle; Cell Tracking/methods; Image Processing, Computer-Assisted/methods; Software
13.
BMC Biol ; 20(1): 263, 2022 11 30.
Article in English | MEDLINE | ID: mdl-36447211

ABSTRACT

BACKGROUND: Deep-learning-based image segmentation models are required for accurate processing of high-throughput timelapse imaging data of bacterial cells. However, the performance of any such model strictly depends on the quality and quantity of training data, which is difficult to generate for bacterial cell images. Here, we present a novel method of bacterial image segmentation using machine learning models trained with Synthetic Micrographs of Bacteria (SyMBac). RESULTS: We have developed SyMBac, a tool that allows for rapid, automatic creation of arbitrary amounts of training data, combining detailed models of cell growth, physical interactions, and microscope optics to create synthetic images which closely resemble real micrographs, and is capable of training accurate image segmentation models. The major advantages of our approach are as follows: (1) synthetic training data can be generated virtually instantly and on demand; (2) these synthetic images are accompanied by perfect ground truth positions of cells, meaning no data curation is required; (3) different biological conditions, imaging platforms, and imaging modalities can be rapidly simulated, meaning any change in one's experimental setup no longer requires the laborious process of manually generating new training data for each change. Deep-learning models trained with SyMBac data are capable of analysing data from various imaging platforms and are robust to drastic changes in cell size and morphology. Our benchmarking results demonstrate that models trained on SyMBac data generate more accurate cell identifications and precise cell masks than those trained on human-annotated data, because the model learns the true position of the cell irrespective of imaging artefacts. We illustrate the approach by analysing the growth and size regulation of bacterial cells during entry and exit from dormancy, which revealed novel insights about the physiological dynamics of cells under various growth conditions. CONCLUSIONS: The SyMBac approach will help to adapt and improve the performance of deep-learning-based image segmentation models for accurate processing of high-throughput timelapse image data.


Subjects
Microscopy; Neural Networks, Computer; Humans; Bacteria; Machine Learning; Cell Cycle
14.
Cytometry A ; 101(6): 521-528, 2022 06.
Article in English | MEDLINE | ID: mdl-35084791

ABSTRACT

Increasingly, highly multiplexed tissue imaging methods are used to profile protein expression at the single-cell level. However, a critical limitation is the lack of robust cell segmentation tools for tissue sections. We present Multiplexed Image Resegmentation of Internal Aberrant Membranes (MIRIAM) that combines (a) a pipeline for cell segmentation and quantification that incorporates machine learning-based pixel classification to define cellular compartments, (b) a novel method for extending incomplete cell membranes, and (c) a deep learning-based cell shape descriptor. Using human colonic adenomas as an example, we show that MIRIAM is superior to widely utilized segmentation methods and provides a pipeline that is broadly applicable to different imaging platforms and tissue types.


Subjects
Deep Learning; Cell Shape; Humans; Image Processing, Computer-Assisted/methods; Machine Learning
15.
Cytometry A ; 101(8): 658-674, 2022 08.
Article in English | MEDLINE | ID: mdl-35388957

ABSTRACT

The development of mouse spermatozoa is a continuous process from spermatogonia, through spermatocytes and spermatids, to mature sperm. These developing germ cells (spermatogonia, spermatocytes, and spermatids), together with the supporting Sertoli cells, are all enclosed inside the seminiferous tubules of the testis, and their identification is key to testis histology and pathology analysis. Automated segmentation of all these cells is challenging because of their dynamic changes at different stages, yet accurate segmentation of testicular cells is critical for developing computerized spermatogenesis staging. In this paper, we present a novel segmentation model, SED-Net, which incorporates a squeeze-and-excitation (SE) module and a dense unit. The SE module optimizes and obtains features from different channels, whereas the dense unit uses fewer parameters to enhance the use of features. A human-in-the-loop strategy, named deep interactive learning, is developed to achieve better segmentation performance while reducing the workload of manual annotation and time consumption. Across a cohort of 274 seminiferous tubules from stages VI to VIII, SED-Net achieved a pixel accuracy of 0.930, a mean pixel accuracy of 0.866, a mean intersection over union of 0.710, and a frequency-weighted intersection over union of 0.878 for the four types of testicular cell segmentation. There was no significant difference in cell composition analysis between manually annotated tubules and SED-Net segmentation results for tubules from stages VI to VIII. In addition, we performed cell composition analysis on 2346 segmented seminiferous tubule images from 12 segmented testicular section results. The results quantify the various testicular cell types across 12 stages and reflect the tendency of cell variation across the 12 stages of mouse spermatozoa development. The method enables analysis of cell morphology and staging during the development of mouse spermatozoa and could potentially be applied to the study of reproductive diseases such as infertility.
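
The squeeze-and-excitation module referred to above is a small, well-documented building block. A generic PyTorch version is shown below; it is not the authors' SED-Net implementation, and the reduction ratio is the conventional default rather than a value taken from the paper.

    import torch
    import torch.nn as nn

    class SEBlock(nn.Module):
        """Squeeze-and-excitation: global pooling -> bottleneck MLP -> per-channel gating."""
        def __init__(self, channels, reduction=16):
            super().__init__()
            self.pool = nn.AdaptiveAvgPool2d(1)      # "squeeze" to 1x1 per channel
            self.fc = nn.Sequential(
                nn.Linear(channels, channels // reduction),
                nn.ReLU(inplace=True),
                nn.Linear(channels // reduction, channels),
                nn.Sigmoid(),                        # "excitation": channel weights in (0, 1)
            )

        def forward(self, x):
            b, c, _, _ = x.shape
            w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
            return x * w                             # re-weight feature maps channel-wise

    feats = torch.randn(2, 64, 128, 128)
    print(SEBlock(64)(feats).shape)                  # torch.Size([2, 64, 128, 128])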


Subjects
Simulation Training; Testis; Animals; Humans; Male; Mice; Semen; Seminiferous Tubules/anatomy & histology; Seminiferous Tubules/metabolism; Sertoli Cells/metabolism; Spermatids; Spermatogenesis; Spermatozoa
16.
Mol Syst Biol ; 17(6): e10108, 2021 06.
Article in English | MEDLINE | ID: mdl-34057817

ABSTRACT

RNA hybridization-based spatial transcriptomics provides unparalleled detection sensitivity. However, inaccuracies in segmentation of image volumes into cells cause misassignment of mRNAs which is a major source of errors. Here, we develop JSTA, a computational framework for joint cell segmentation and cell type annotation that utilizes prior knowledge of cell type-specific gene expression. Simulation results show that leveraging existing cell type taxonomy increases RNA assignment accuracy by more than 45%. Using JSTA, we were able to classify cells in the mouse hippocampus into 133 (sub)types revealing the spatial organization of CA1, CA3, and Sst neuron subtypes. Analysis of within cell subtype spatial differential gene expression of 80 candidate genes identified 63 with statistically significant spatial differential gene expression across 61 (sub)types. Overall, our work demonstrates that known cell type expression patterns can be leveraged to improve the accuracy of RNA hybridization-based spatial transcriptomics while providing highly granular cell (sub)type information. The large number of newly discovered spatial gene expression patterns substantiates the need for accurate spatial transcriptomic measurements that can provide information beyond cell (sub)type labels.


Subjects
Gene Expression Profiling; Transcriptome; Animals; Computer Simulation; Mice; Neurons; RNA, Messenger; Transcriptome/genetics
17.
Graefes Arch Clin Exp Ophthalmol ; 260(4): 1215-1224, 2022 Apr.
Article in English | MEDLINE | ID: mdl-34741660

ABSTRACT

PURPOSE: Specular microscopy is an indispensable tool for clinicians seeking to monitor the corneal endothelium. Automated methods of determining endothelial cell density (ECD) are limited in their ability to analyze images of poor quality. We describe and assess an image processing algorithm to analyze corneal endothelial images. METHODS: A set of corneal endothelial images acquired with a Konan CellChek specular microscope was analyzed using three methods: flex-center, Konan Auto Tracer, and the proposed method. In this technique, the algorithm determines the region of interest, filters the image to differentiate cell boundaries from their interiors, and utilizes stochastic watershed segmentation to draw cell boundaries and assess ECD based on the masked region. We compared ECD measured by the algorithm with manual and automated results from the specular microscope. RESULTS: We analyzed a total of 303 images manually, using the Auto Tracer, and with the proposed image processing method. Relative to manual analysis across all images, the mean error was 0.04% for the proposed method (p = 0.23 for difference), whereas the Auto Tracer demonstrated a bias towards overestimation, with a mean error of 5.7% (p = 2.06 × 10⁻⁸). The relative mean absolute errors were 6.9% and 7.9% for the proposed method and the Auto Tracer, respectively. The average analysis time per image with the proposed method was 2.5 s. CONCLUSION: We demonstrate a computationally efficient algorithm to analyze corneal endothelial cell density that can be implemented on devices for clinical and research use.
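
Once cell boundaries have been drawn inside the masked region, ECD reduces to a count-per-area calculation. The small sketch below illustrates that final step only, assuming a labelled segmentation and a known pixel pitch; both example values are placeholders and this is not the authors' algorithm.

    import numpy as np
    from skimage import measure

    def endothelial_cell_density(label_image, roi_mask, um_per_pixel):
        """Cells per mm^2 inside the analysed region of interest."""
        # Count labelled cells whose centroid falls inside the ROI.
        n_cells = sum(
            roi_mask[tuple(np.round(p.centroid).astype(int))]
            for p in measure.regionprops(label_image)
        )
        area_mm2 = roi_mask.sum() * (um_per_pixel ** 2) / 1e6  # pixels -> mm^2
        return n_cells / area_mm2

    # Example with synthetic inputs (placeholder pixel pitch of 1.0 um/pixel).
    labels = np.zeros((100, 100), dtype=int)
    labels[10:20, 10:20] = 1
    labels[40:50, 40:55] = 2
    roi = np.ones_like(labels, dtype=bool)
    print(round(endothelial_cell_density(labels, roi, 1.0)))   # 2 cells / 0.01 mm^2 = 200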


Subjects
Endothelium, Corneal; Microscopy; Cell Count; Humans; Image Processing, Computer-Assisted/methods; Microscopy/methods; Reproducibility of Results
18.
BMC Biol ; 19(1): 99, 2021 05 11.
Article in English | MEDLINE | ID: mdl-33975602

ABSTRACT

BACKGROUND: Visualizing and quantifying cellular heterogeneity is of central importance for studying tissue complexity, development, and physiology and has a vital role in understanding pathologies. Mass spectrometry-based methods, including imaging mass cytometry (IMC), have in recent years emerged as powerful approaches for assessing cellular heterogeneity in tissues. IMC is an innovative multiplex imaging method that combines imaging with up to 40 metal-conjugated antibodies and provides distributions of protein markers in tissues at a resolution of 1 µm². However, resolving the output signals of individual cells within the tissue sample, i.e., single-cell segmentation, remains challenging. To address this problem, we developed MATISSE (iMaging mAss cyTometry mIcroscopy Single cell SegmEntation), a method that combines high-resolution fluorescence microscopy with the multiplex capability of IMC into a single workflow to achieve improved segmentation over the current state-of-the-art. RESULTS: MATISSE results in improved quality and quantity of segmented cells when compared to IMC-only segmentation in sections of heterogeneous tissues. Additionally, MATISSE enables more complete and accurate identification of epithelial cells, fibroblasts, and infiltrating immune cells in densely packed cellular areas in tissue sections. MATISSE has been designed based on commonly used open-access tools and regular fluorescence microscopy, allowing easy implementation by labs using multiplex IMC in their analysis methods. CONCLUSION: MATISSE allows segmentation of densely packed cellular areas and provides a qualitative and quantitative improvement when compared to IMC-based segmentation. We expect that implementing MATISSE into tissue section analysis pipelines will yield improved cell segmentation and enable more accurate analysis of the tissue microenvironment in epithelial tissue pathologies, such as autoimmunity and cancer.


Subjects
Image Cytometry; Biomarkers; Mass Spectrometry; Microscopy, Fluorescence
19.
Sheng Wu Yi Xue Gong Cheng Xue Za Zhi ; 39(3): 471-479, 2022 Jun 25.
Article in Chinese | MEDLINE | ID: mdl-35788516

ABSTRACT

The counting and recognition of white blood cells in blood smear images play an important role in the diagnosis of blood diseases, including leukemia. Traditional manual examination is easily disturbed by many factors, so an automatic leukocyte analysis system is needed to provide doctors with auxiliary diagnosis, and blood leukocyte segmentation is the basis of such automatic analysis. In this paper, we improved the U-Net model and proposed a leukocyte image segmentation algorithm based on a dual path network and atrous spatial pyramid pooling. First, the dual path network was introduced into the feature encoder to extract multi-scale leukocyte features, and atrous spatial pyramid pooling was used to enhance the feature extraction ability of the network. Then a feature decoder composed of convolution and deconvolution was used to restore the segmented target to the original image size, realizing pixel-level segmentation of blood leukocytes. Finally, qualitative and quantitative experiments were carried out on three leukocyte datasets to verify the effectiveness of the algorithm. The results showed that, compared with other representative algorithms, the proposed blood leukocyte segmentation algorithm achieved better segmentation results, with mIoU values above 0.97. We hope the method will be conducive to the automatic auxiliary diagnosis of blood diseases in the future.
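
The mIoU figure quoted above is the mean of per-class intersection-over-union values. A plain NumPy sketch of that metric follows; the class count and toy inputs are illustrative only.

    import numpy as np

    def mean_iou(pred, target, num_classes=2):
        """Mean intersection-over-union across classes, ignoring absent classes."""
        ious = []
        for c in range(num_classes):
            p, t = pred == c, target == c
            union = np.logical_or(p, t).sum()
            if union == 0:
                continue                   # class absent in both -> skip
            ious.append(np.logical_and(p, t).sum() / union)
        return float(np.mean(ious))

    pred = np.array([[0, 1], [1, 1]])
    truth = np.array([[0, 1], [0, 1]])
    print(mean_iou(pred, truth))           # (1/2 + 2/3) / 2 ≈ 0.583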


Subjects
Algorithms; Leukocytes
20.
Dev Biol ; 462(1): 7-19, 2020 06 01.
Article in English | MEDLINE | ID: mdl-32061886

ABSTRACT

The demand for single-cell level data is constantly increasing within the life sciences. To meet this demand, robust cell segmentation methods that can tackle challenging in vivo tissues with complex morphology are required. However, currently available cell segmentation and volumetric analysis methods perform poorly on 3D images. Here, we generated ShapeMetrics, a MATLAB-based script that segments cells in 3D and, by performing unbiased clustering using a heatmap, separates the cells into subgroups according to their volumetric and morphological differences. The cells can be accurately segregated according to biologically meaningful features such as cell ellipticity, longest axis, cell elongation, or the ratio between cell volume and surface area. Our machine-learning-based script enables dissection of a large amount of novel data from microscope images in addition to the traditional information based on fluorescent biomarkers. Furthermore, the cells in different subgroups can be spatially mapped back to their original locations in the tissue image to help elucidate their roles in their respective morphological contexts. To facilitate the transition from bulk analysis to single-cell accuracy, we emphasize the user-friendliness of our method by providing detailed step-by-step instructions for the pipeline, aiming to reach users with less experience in computational biology.
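
For readers working in Python rather than MATLAB, the same feature-then-cluster idea can be sketched with scikit-image region properties and hierarchical clustering. The property names below use scikit-image's long-standing naming, the elongation ratio is a simplification of the features listed above, and the cluster count is an arbitrary placeholder; this is not the ShapeMetrics script itself.

    import pandas as pd
    from scipy.cluster.hierarchy import fcluster, linkage
    from scipy.stats import zscore
    from skimage import measure

    def cluster_cells_3d(label_volume, n_clusters=4):
        """Per-cell 3D shape features -> z-scored matrix -> Ward hierarchical clustering."""
        props = pd.DataFrame(measure.regionprops_table(
            label_volume,
            properties=("label", "area", "major_axis_length", "equivalent_diameter"),
        ))  # for 3D labels, "area" is the voxel count (i.e. volume)
        props["elongation"] = props["major_axis_length"] / props["equivalent_diameter"]
        features = zscore(props.drop(columns="label").values, axis=0)
        props["cluster"] = fcluster(linkage(features, method="ward"),
                                    n_clusters, criterion="maxclust")
        return props  # cluster labels can be mapped back onto label_volume by cell label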


Subjects
Image Processing, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Algorithms; Animals; Computational Biology; Humans; Microscopy; Software; Spatial Analysis