Results 1 - 20 of 191
1.
Brief Bioinform ; 25(5)2024 Jul 25.
Article in English | MEDLINE | ID: mdl-39154193

ABSTRACT

Cell segmentation is a fundamental task in analyzing biomedical images. Many computational methods have been developed for cell segmentation and instance segmentation, but their performance in various scenarios is not well understood. We systematically evaluated the performance of 18 segmentation methods for cell-nucleus and whole-cell segmentation using light microscopy and fluorescence staining images. We found that general-purpose methods incorporating the attention mechanism exhibit the best overall performance. We identified various factors influencing segmentation performance, including image channels, choice of training data, and cell morphology, and evaluated the generalizability of the methods across image modalities. We also provide guidelines for choosing the optimal segmentation method in various real application scenarios. Finally, we developed Seggal, an online resource for downloading segmentation models pre-trained on various tissue and cell types, substantially reducing the time and effort needed to train cell segmentation models.
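Benchmarks of this kind typically score predictions by the intersection-over-union (IoU) between predicted and ground-truth instance masks. A minimal sketch of the pairwise IoU computation (illustrative only, not the paper's evaluation code):

```python
import numpy as np

def instance_iou(gt, pred):
    """IoU matrix between every ground-truth and predicted instance.

    gt, pred: 2D integer label maps (0 = background)."""
    gt_ids = np.unique(gt)[1:]      # drop background label 0
    pred_ids = np.unique(pred)[1:]
    iou = np.zeros((len(gt_ids), len(pred_ids)))
    for i, g in enumerate(gt_ids):
        gmask = gt == g
        for j, p in enumerate(pred_ids):
            pmask = pred == p
            inter = np.logical_and(gmask, pmask).sum()
            union = np.logical_or(gmask, pmask).sum()
            iou[i, j] = inter / union if union else 0.0
    return iou

# toy example: one ground-truth cell, one diagonally shifted prediction
gt = np.zeros((6, 6), int);   gt[1:4, 1:4] = 1
pred = np.zeros((6, 6), int); pred[2:5, 2:5] = 1
print(instance_iou(gt, pred))   # IoU = 4/14 ≈ 0.286
```

Real benchmarks vectorize this with a joint histogram, but the nested-loop form makes the definition explicit.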


Subjects
Image Processing, Computer-Assisted; Humans; Image Processing, Computer-Assisted/methods; Computational Biology/methods; Algorithms; Cell Nucleus
2.
MethodsX ; 13: 102855, 2024 Dec.
Article in English | MEDLINE | ID: mdl-39105087

ABSTRACT

Study of morphogenesis and its regulation requires analytical tools that enable simultaneous assessment of processes operating at the cellular level, such as synthesis of transcription factors (TFs), and of their effects at the tissue scale. Most current studies conduct histological, cellular, and immunohistochemical (IHC) analyses in separate steps, introducing inevitable biases in finding and aligning areas of interest at vastly different scales of organization, as well as image distortion associated with image repositioning or file modification. These problems are particularly severe for longitudinal analyses of growing structures that change size and shape. Here we introduce a Python-based application for automated, complete whole-slide measurement of the expression of multiple TFs and the associated cellular morphology. The plugin collects data at customizable scales, from the cell level to the entire structure, records each data point with positional information, accounts for ontogenetic transformation of structures and variation in slide positioning with a scalable grid, and includes a customizable file manager that outputs collected data together with full details of image classification (e.g., ontogenetic stage, population, IHC assay). We demonstrate the utility and accuracy of this application by automated measurement of morphology and associated expression of eight TFs for more than six million cells, recorded with full positional information, in beak tissues across 12 developmental stages and 25 study populations of a wild passerine bird. Our script is freely available as an open-source Fiji plugin and can be applied to IHC slides from any imaging platform and to any transcription factor.

3.
J Microsc ; 2024 Jul 12.
Article in English | MEDLINE | ID: mdl-38994744

ABSTRACT

Micropatterning is a reliable method for quantifying the pluripotency of human induced pluripotent stem cells (hiPSCs), which differentiate to form a spatial pattern of the three germ layers, sorted, ordered, and non-overlapping, on the micropattern. In this study, we propose a deep learning method to quantify the spatial patterning of the germ layers in the early differentiation stage of hiPSCs using micropattern images. We propose encoder-decoder U-Net structures that learn labelled Hoechst (DNA-stained) hiPSC regions from corresponding Hoechst and bright-field micropattern images, in order to segment hiPSCs in Hoechst or bright-field images. We also propose a U-Net structure to extract extraembryonic regions on a micropattern, and an algorithm that compares the intensities of the fluorescence images staining the respective germ-layer cells and extracts their regions. The proposed method can thus quantify the pluripotency of an hiPSC line through spatial patterning, including the numbers, areas, and distributions of germ-layer and extraembryonic cells on a micropattern, and can reveal the formation of hiPSCs and germ layers in the early differentiation stage by segmenting live-cell bright-field images. In our assay, cell-number accuracy reached 86% and 85%, and cell-region accuracy 89% and 81%, for segmenting Hoechst and bright-field micropattern images, respectively. Applications to micropattern images of multiple hiPSC lines, micropattern sizes, marker groups, and living and fixed cells show that the proposed method can serve as a useful protocol and tool for quantifying the pluripotency of a new hiPSC line before providing it to the scientific community.

4.
Med Image Anal ; 97: 103243, 2024 Jun 24.
Article in English | MEDLINE | ID: mdl-38954941

ABSTRACT

Instance segmentation of biological cells is important in medical image analysis for identifying and segmenting individual cells, and quantitative measurement of subcellular structures requires further cell-level subcellular part segmentation. Subcellular structure measurements are critical for cell phenotyping and quality analysis. For these purposes, an instance-aware part segmentation network is first introduced to distinguish individual cells and segment subcellular structures for each detected cell. This approach is demonstrated on human sperm cells, since the World Health Organization has established quantitative standards for sperm quality assessment. Specifically, a novel Cell Parsing Net (CP-Net) is proposed for accurate instance-level cell parsing. An attention-based feature fusion module is designed to alleviate contour misalignments for cells with irregular shapes by using instance masks as spatial cues rather than as strict constraints to differentiate instances. A coarse-to-fine segmentation module is developed to effectively segment tiny subcellular structures within a cell through hierarchical whole-to-part segmentation, instead of directly segmenting each cell part. Moreover, a sperm parsing dataset is built, comprising 320 annotated sperm images with five semantic subcellular part labels. Extensive experiments on the collected dataset demonstrate that the proposed CP-Net outperforms state-of-the-art instance-aware part segmentation networks.

5.
J Pathol Inform ; 15: 100384, 2024 Dec.
Article in English | MEDLINE | ID: mdl-39027045

ABSTRACT

Analysis of gene expression at the single-cell level could help predict the effectiveness of therapies for chronic inflammatory diseases such as arthritis. Here, we demonstrate an adapted approach for processing images from the Slide-seq method. A puck, consisting of about 50,000 DNA-barcoded beads, is used to read the RNA sequence of a cell. The pucks are repeatedly brought into contact with liquids and then imaged with a conventional epifluorescence microscope. The image analysis consists of stitching the partial images of each sequencing round, registering images across rounds, and finally reading out the bases. The new method enables the use of an inexpensive epifluorescence microscope instead of a confocal microscope.
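Both the stitching and the cross-round registration steps rest on estimating the translation between overlapping images. A minimal FFT cross-correlation sketch of that core idea, assuming a pure integer translation (illustrative, not the authors' implementation):

```python
import numpy as np

def register_shift(ref, mov):
    """Estimate the integer (row, col) displacement of `mov`
    relative to `ref` via FFT cross-correlation."""
    corr = np.fft.ifft2(np.fft.fft2(mov) * np.conj(np.fft.fft2(ref))).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape
    # wrap shifts past the midpoint around to negative offsets
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)

rng = np.random.default_rng(0)
ref = rng.random((64, 64))
mov = np.roll(ref, (5, -3), axis=(0, 1))   # synthetic 5-down, 3-left shift
print(register_shift(ref, mov))            # (5, -3)
```

In practice one would use skimage.registration.phase_cross_correlation, which adds spectral normalization and subpixel refinement on top of this.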

6.
Sci Rep ; 14(1): 16389, 2024 07 16.
Article in English | MEDLINE | ID: mdl-39013980

ABSTRACT

Fluorescence polarization (Fpol) imaging of methylene blue (MB) is a promising quantitative approach to thyroid cancer detection. Clinical translation of MB Fpol technology requires reducing the data analysis time, which can be achieved via deep learning-based automated cell segmentation with a 2D U-Net convolutional neural network. The model was trained and tested on images of pathologically diverse human thyroid cells and evaluated by comparing the number of cells selected, the segmented areas, and the Fpol values obtained using automated (AU) and manual (MA) data processing. Overall, the model segmented 15.8% more cells than the human operator. Differences in AU and MA segmented cell areas varied between −55.2% and +31.0%, whereas differences in Fpol values varied from −20.7% to +10.7%. No statistically significant differences between AU- and MA-derived Fpol data were observed. The largest differences in Fpol values correlated with the greatest discrepancies between AU and MA segmented cell areas. Automated processing took 10 s, versus the one hour required for MA data processing. Implementation of automated cell analysis makes quantitative fluorescence polarization-based diagnosis clinically feasible.


Subjects
Deep Learning; Thyroid Neoplasms; Humans; Thyroid Neoplasms/pathology; Thyroid Neoplasms/diagnostic imaging; Thyroid Neoplasms/diagnosis; Methylene Blue; Fluorescence Polarization/methods; Image Processing, Computer-Assisted/methods; Neural Networks, Computer; Thyroid Gland/pathology; Thyroid Gland/diagnostic imaging; Cytology
7.
J Imaging ; 10(7)2024 Jul 17.
Article in English | MEDLINE | ID: mdl-39057743

ABSTRACT

Deep-learning algorithms for cell segmentation typically require large data sets with high-quality annotations for training. However, the cost of obtaining such annotations may be prohibitive. Our work aims to reduce the time necessary to create high-quality annotations of cell images by using a relatively small, well-annotated data set to train a convolutional neural network that upgrades lower-quality annotations produced at lower annotation cost. We investigate the performance of our solution when upgrading annotations affected by three types of annotation error: omission, inclusion, and bias. We observe that our method can upgrade annotations affected by high error levels from 0.3 to 0.9 Dice similarity with the ground-truth annotations. We also show that a relatively small well-annotated set, enlarged with samples carrying upgraded annotations, can be used to train cell segmentation networks that outperform networks trained only on the well-annotated set. Moreover, we present a use case in which our solution successfully increases the quality of the predictions of a segmentation network trained on just 10 annotated samples.
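The Dice similarity used here to grade annotation quality is straightforward to compute; a minimal version for binary masks (illustrative, not the paper's code):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient of two binary masks:
    2|A ∩ B| / (|A| + |B|); 1.0 for two empty masks by convention."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# a ground-truth square vs. an annotation shifted one pixel to the right
gt = np.zeros((4, 4), int);  gt[0:2, 0:2] = 1
ann = np.zeros((4, 4), int); ann[0:2, 1:3] = 1
print(dice(gt, ann))  # 2 overlapping pixels of 4 + 4 -> 0.5
```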

8.
Neurosci Lett ; 836: 137871, 2024 Jul 27.
Article in English | MEDLINE | ID: mdl-38857698

ABSTRACT

Parkinson's disease (PD) entails the progressive loss of dopaminergic (DA) neurons in the substantia nigra pars compacta (SNc), leading to movement-related impairments. Accurate assessment of DA neuron health is vital for research applications; manual analysis, however, is laborious and subjective. To address this, we introduce TrueTH, a user-friendly and robust pipeline for unbiased quantification of DA neurons. Existing deep learning tools for counting tyrosine hydroxylase-positive (TH+) neurons often lack accessibility or require advanced programming skills. TrueTH bridges this gap with an open-source, user-friendly solution for PD research. We demonstrate TrueTH's performance across various PD rodent models, showcasing its accuracy and ease of use. TrueTH exhibits remarkable resilience to staining variations and extreme conditions, accurately identifying TH+ neurons even in lightly stained images and distinguishing brain section fragments from neurons. Furthermore, evaluation of the pipeline's performance in segmenting fluorescence images shows strong correlation with ground truth and greater accuracy than existing models. In summary, TrueTH offers a user-friendly interface, is pretrained on a diverse range of images, and provides a practical solution for DA neuron quantification in Parkinson's disease research.


Subjects
Deep Learning; Dopaminergic Neurons; Dopaminergic Neurons/metabolism; Animals; Tyrosine 3-Monooxygenase/metabolism; Parkinson Disease/metabolism; Parkinson Disease/pathology; Male; Mice; Rats
9.
Dev Cell ; 2024 Jun 04.
Article in English | MEDLINE | ID: mdl-38848718

ABSTRACT

Characterizing cellular features during seed germination is crucial for understanding the complex biological functions of different embryonic cells in regulating seed vigor and seedling establishment. We performed spatially enhanced resolution omics sequencing (Stereo-seq) and single-cell RNA sequencing (scRNA-seq) to capture spatially resolved single-cell transcriptomes of germinating rice embryos. An automated, deep learning-based cell-segmentation model was developed to accommodate the analysis requirements. The spatial transcriptomes at 6, 24, 36, and 48 h after imbibition revealed both known and previously unreported embryo cell types, including two scutellum cell types not previously described, corroborated by in situ hybridization and functional exploration of marker genes. Temporal transcriptomic profiling delineated gene expression dynamics in distinct embryonic cell types during seed germination, highlighting key genes involved in nutrient metabolism and in phytohormone biosynthesis and signaling that are reprogrammed in a cell-type-specific manner. Our study provides a detailed spatiotemporal transcriptome of the rice embryo and presents a previously undescribed methodology for exploring the roles of different embryonic cells in seed germination.

10.
Synth Syst Biotechnol ; 9(4): 627-637, 2024 Dec.
Article in English | MEDLINE | ID: mdl-38798889

ABSTRACT

Fluorescent cell imaging technology is fundamental in life science research, offering a rich source of image data crucial for understanding cell spatial positioning, differentiation, and decision-making mechanisms. As the volume of this data expands, precise image analysis becomes increasingly critical. Cell segmentation, a key analysis step, significantly influences quantitative analysis outcomes. However, selecting the most effective segmentation method is challenging, hindered by the inaccuracy of existing evaluation methods, their lack of graded evaluation, and their narrow assessment scope. Addressing this, we developed a novel framework with two modules, StyleGAN2-based contour generation and Pix2PixHD-based image rendering, that produces diverse cell images of graded density. Using this dataset, we evaluated three leading cell segmentation methods: DeepCell, CellProfiler, and Cellpose. Our comprehensive comparison revealed CellProfiler's superior accuracy in segmenting cytoplasm and nuclei. Our framework diversifies cell image data generation and systematically addresses evaluation challenges in cell segmentation technologies, establishing a solid foundation for advancing research and applications in cell image analysis.

11.
Heliyon ; 10(9): e30239, 2024 May 15.
Article in English | MEDLINE | ID: mdl-38707416

ABSTRACT

Classification of live or fixed cells based on their unlabeled microscopic images would be a powerful tool for cell biology and pathology. For such software, the first step is the generation of a ground-truth database that can be used for training and testing AI classification algorithms. The application of cells expressing fluorescent reporter proteins allows ground-truth datasets to be built in a straightforward way. In this study, we present an automated imaging pipeline utilizing the Cellpose algorithm for precise cell segmentation and measurement of fluorescent cellular intensities across multiple channels. We analyzed the cell cycle of HeLa-FUCCI cells expressing red and green fluorescent reporter proteins at levels that depend on cell cycle state. To build the dataset, 37,000 fixed cells were automatically scanned using a standard motorized microscope, capturing phase-contrast and red/green fluorescence images. The fluorescent pixel intensities of each cell were integrated to calculate its total fluorescence, based on cell segmentation in the phase-contrast channel, yielding a precise intensity value for each cell in both channels. Furthermore, we conducted a comparative analysis of the segmentation performance of Cellpose 1.0 and Cellpose 2.0. Cellpose 2.0 demonstrated notable improvements, achieving a false-positive rate of 2.7% and a false-negative rate of 1.4%. The cellular fluorescence was visualized in a 2D plot (map) of the red and green intensities of the FUCCI construct, revealing the continuous distribution of cells across the cell cycle. This 2D map enables the selection, and potential isolation, of single cells in a specific phase. In the corresponding heatmap, two clusters appeared, representing cells in the red and green states. Our pipeline allows high-throughput, accurate measurement of cellular fluorescence, providing extensive statistical information on thousands of cells, with potential applications in developmental and cancer biology. Furthermore, our method can be used to build ground-truth datasets automatically for training and testing AI cell classification. The automated pipeline can analyze thousands of cells within 2 h of placing the sample on the microscope.
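The per-cell intensity integration described here reduces to a weighted histogram over a label mask. A minimal sketch assuming a Cellpose-style integer label map (0 = background); this is not the authors' pipeline:

```python
import numpy as np

def integrated_intensity(labels, fluo):
    """Total fluorescence per labelled cell.

    labels: 2D integer mask (0 = background), e.g. Cellpose output.
    fluo:   2D fluorescence image of the same shape.
    Returns {cell_id: integrated intensity}."""
    sums = np.bincount(labels.ravel(), weights=fluo.ravel())
    return {cell_id: float(sums[cell_id]) for cell_id in range(1, len(sums))}

labels = np.array([[1, 1, 0],
                   [0, 2, 2]])
fluo = np.array([[1.0, 2.0, 5.0],
                 [1.0, 3.0, 4.0]])
print(integrated_intensity(labels, fluo))  # {1: 3.0, 2: 7.0}
```

The same bincount trick applied to each fluorescence channel gives the per-cell red/green coordinates plotted on a 2D map.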

12.
bioRxiv ; 2024 May 06.
Article in English | MEDLINE | ID: mdl-38766074

ABSTRACT

Cell segmentation is a fundamental task: only by segmenting can we define the quantitative spatial unit for collecting measurements to draw biological conclusions. Deep learning has revolutionized 2D cell segmentation, enabling generalized solutions across cell types and imaging modalities, driven by the ease of scaling up image acquisition, annotation, and computation. However, 3D cell segmentation, which requires dense annotation of 2D slices, still poses significant challenges. Labelling every cell in every 2D slice is prohibitive. Moreover, it is ambiguous, necessitating cross-referencing with other orthoviews. Lastly, there is limited ability to unambiguously record and visualize thousands of annotated cells. Here we develop a theory and toolbox, u-Segment3D, for 2D-to-3D segmentation, compatible with any 2D segmentation method. Given optimal 2D segmentations, u-Segment3D generates the optimal 3D segmentation without data training, as demonstrated on 11 real-life datasets comprising more than 70,000 cells and spanning single cells, cell aggregates, and tissue.
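The naive baseline that 2D-to-3D methods improve on is a plain 3D connected-components pass over stacked per-slice masks; a sketch of that baseline (not u-Segment3D itself):

```python
import numpy as np
from scipy import ndimage as ndi

# Stack of per-slice binary masks, axes (z, y, x)
stack = np.zeros((4, 8, 8), dtype=bool)
stack[:, 1:4, 1:4] = True      # cell A: footprint in every slice
stack[1:3, 5:7, 5:7] = True    # cell B: present only in slices 1-2

# A 3D connected-components pass links each cell's 2D
# footprints across z into one 3D instance label
labels3d, n_cells = ndi.label(stack)
print(n_cells)  # 2
```

This fails as soon as two distinct cells touch across slices, which is why consistency-aware 2D-to-3D aggregation is needed in practice.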

13.
BMC Med Inform Decis Mak ; 24(1): 124, 2024 May 15.
Article in English | MEDLINE | ID: mdl-38750526

ABSTRACT

BACKGROUND: Spatial molecular profiling depends on accurate cell segmentation. Identification and quantitation of individual cells in dense tissue, e.g. highly inflamed tissue caused by viral infection or immune reaction, remains a challenge. METHODS: We first assess the performance of 18 deep learning-based cell segmentation models, either pre-trained or trained by us on two public image sets, on a set of immunofluorescence images stained with immune cell surface markers in skin tissue obtained during human herpes simplex virus (HSV) infection. We then further train eight of these models using up to 10,000+ training instances from the current image set. Finally, we seek to improve performance by tuning the parameters of the most successful method from the previous step. RESULTS: The best model before fine-tuning achieves a mean average precision (mAP) of 0.516. Prediction performance improves substantially after training. The best model is the cyto model from Cellpose: after training it achieves an mAP of 0.694, and with further parameter tuning the mAP reaches 0.711. CONCLUSION: Selecting the best of the existing models and further training it on images of interest produces the greatest gain in prediction performance. The resulting model compares favorably with human performance. Its remaining imperfection can be attributed to the moderate signal-to-noise ratio of the image set.
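mAP figures like these average a per-threshold score of the form TP/(TP+FP+FN) over IoU cutoffs. A simplified sketch, valid for thresholds ≥ 0.5 where true-to-predicted matches are necessarily one-to-one (not the study's evaluation code):

```python
import numpy as np

def ap_at_iou(iou, thresh=0.5):
    """Average precision at one IoU threshold from an
    (n_true, n_pred) instance IoU matrix: TP / (TP + FP + FN)."""
    n_true, n_pred = iou.shape
    tp = int((iou >= thresh).any(axis=1).sum())  # matched true objects
    fp, fn = n_pred - tp, n_true - tp
    return tp / (tp + fp + fn) if (tp + fp + fn) else 1.0

# two true cells, two predictions: one good match, one miss
iou = np.array([[0.8, 0.1],
                [0.0, 0.3]])
print(ap_at_iou(iou, 0.5))  # 1 TP, 1 FP, 1 FN -> 1/3
```

Averaging ap_at_iou over a ladder of thresholds (e.g. 0.5 to 0.95) gives a mean AP in the sense used by cell-segmentation benchmarks.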


Subjects
Deep Learning; Humans; Image Processing, Computer-Assisted/methods; Herpes Simplex; Skin/diagnostic imaging; Biomarkers
14.
Comput Methods Programs Biomed ; 252: 108215, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38781811

ABSTRACT

BACKGROUND AND OBJECTIVE: Cell segmentation in bright-field histological slides is a crucial topic in medical image analysis. Access to accurate segmentations allows researchers to examine the relationship between cellular morphology and clinical observations. Unfortunately, most segmentation methods known today are limited to nuclei and cannot segment the cytoplasm. METHODS: We present a new network architecture, Cyto R-CNN, that is able to accurately segment whole cells (both nucleus and cytoplasm) in bright-field images. We also present a new dataset, CytoNuke, consisting of several thousand manual annotations of head and neck squamous cell carcinoma cells. Utilizing this dataset, we compared the performance of Cyto R-CNN to other popular cell segmentation algorithms, including QuPath's built-in algorithm, StarDist, Cellpose, and a multi-scale Attention Deeplabv3+. To evaluate segmentation performance, we calculated AP50 and AP75 and measured 17 morphological and staining-related features for all detected cells. We compared these measurements to the gold standard of manual segmentation using the Kolmogorov-Smirnov test. RESULTS: Cyto R-CNN achieved an AP50 of 58.65% and an AP75 of 11.56% in whole-cell segmentation, outperforming all other methods (QuPath 19.46/0.91%; StarDist 45.33/2.32%; Cellpose 31.85/5.61%; Deeplabv3+ 3.97/1.01%). Cell features derived from Cyto R-CNN showed the best agreement with the gold standard (D̄ = 0.15), outperforming QuPath (D̄ = 0.22), StarDist (D̄ = 0.25), Cellpose (D̄ = 0.23), and Deeplabv3+ (D̄ = 0.33). CONCLUSION: Our newly proposed Cyto R-CNN architecture outperforms current algorithms in whole-cell segmentation while providing more reliable cell measurements than any other model. This could improve digital pathology workflows, potentially leading to improved diagnosis. Moreover, our published dataset can be used to develop further models in the future.
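The Kolmogorov-Smirnov comparison of feature distributions reduces to the maximum gap between two empirical CDFs. A dependency-free sketch of the statistic D̄ is built from (scipy.stats.ks_2samp is the usual library route; this is not the paper's code):

```python
import numpy as np

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic
    D = max_x |ECDF_a(x) - ECDF_b(x)|."""
    a, b = np.sort(a), np.sort(b)
    grid = np.concatenate([a, b])          # D is attained at a sample point
    cdf_a = np.searchsorted(a, grid, side="right") / len(a)
    cdf_b = np.searchsorted(b, grid, side="right") / len(b)
    return float(np.max(np.abs(cdf_a - cdf_b)))

# identical feature distributions -> D = 0; disjoint ones -> D = 1
x = np.array([1.0, 2.0, 3.0])
print(ks_statistic(x, x), ks_statistic(x, x + 10))  # 0.0 1.0
```

Averaging D over the 17 per-cell features, each method against manual segmentation, would yield a summary in the spirit of the D̄ values reported.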


Subjects
Algorithms; Image Processing, Computer-Assisted; Neural Networks, Computer; Humans; Image Processing, Computer-Assisted/methods; Cell Nucleus; Head and Neck Neoplasms/diagnostic imaging; Head and Neck Neoplasms/pathology; Squamous Cell Carcinoma of Head and Neck/diagnostic imaging; Squamous Cell Carcinoma of Head and Neck/pathology; Cytoplasm; Reproducibility of Results; Carcinoma, Squamous Cell/diagnostic imaging; Carcinoma, Squamous Cell/pathology
15.
bioRxiv ; 2024 Mar 12.
Article in English | MEDLINE | ID: mdl-38559093

ABSTRACT

Background: Cell segmentation is crucial in bioimage informatics, as its accuracy directly impacts conclusions drawn from cellular analyses. While many approaches to 2D cell segmentation have been described, 3D cell segmentation has received much less attention. 3D segmentation faces significant challenges, including limited training data availability due to the difficulty of the task for human annotators, and inherent three-dimensional complexity. As a result, existing 3D cell segmentation methods often lack broad applicability across different imaging modalities. Results: To address this, we developed a generalizable approach for using 2D cell segmentation methods to produce accurate 3D cell segmentations. We implemented this approach in 3DCellComposer, a versatile, open-source package that allows users to choose any existing 2D segmentation model appropriate for their tissue or cell type(s) without requiring any additional training. Importantly, we have enhanced our open-source CellSegmentationEvaluator quality evaluation tool to support 3D images. It provides metrics that allow selection of the best approach for a given imaging source and modality, without the need for human annotations to assess performance. Using these metrics, we demonstrated that our approach produced high-quality 3D segmentations of tissue images, and that it could outperform an existing 3D segmentation method on the cell culture images with which that method was trained. Conclusions: 3DCellComposer, when paired with well-trained 2D segmentation models, provides an important alternative to acquiring human-annotated 3D images for new sample types or imaging modalities and then training 3D segmentation models on them. It is expected to be of significant value for large-scale projects such as the Human BioMolecular Atlas Program.

16.
Expert Syst Appl ; 238(Pt D)2024 Mar 15.
Article in English | MEDLINE | ID: mdl-38646063

ABSTRACT

Accurate and automatic segmentation of individual cell instances in microscopy images is a vital step for quantifying cellular attributes, which can subsequently lead to new discoveries in biomedical research. In recent years, data-driven deep learning techniques have shown promising results in this task. Despite their success, many fail to accurately segment cells in microscopy images with high cell density and low signal-to-noise ratio. In this paper, we propose a novel 3D cell segmentation approach, DeepSeeded, a cascaded deep learning architecture that estimates seeds for a classical seeded watershed segmentation. The cascaded architecture enhances cell interior and border information using Euclidean distance transforms and detects the cell seeds by performing voxel-wise classification. The data-driven seed estimation process proposed here allows segmenting touching cell instances in dense, intensity-inhomogeneous microscopy image volumes. We demonstrate the performance of the proposed method in segmenting 3D microscopy images of a particularly dense cell population: bacterial biofilms. Experimental results on synthetic data and two real biofilm datasets suggest that the proposed method leads to superior segmentation results when compared to state-of-the-art deep learning methods and a classical method.

17.
Front Neurosci ; 18: 1363288, 2024.
Article in English | MEDLINE | ID: mdl-38601089

ABSTRACT

Background: Automatic segmentation of corneal stromal cells can help ophthalmologists detect abnormal morphology in confocal microscopy images, and thereby assess viral infection or conical mutation of corneas and avoid irreversible pathological damage. However, images of corneal stromal cells often suffer from uneven illumination and disordered vascular occlusion, resulting in inaccurate segmentation. Methods: In response to these challenges, this study proposes a novel approach: an nnUNet and nested Transformer-based network integrated with dual high-order channel attention, named U-NTCA. Unlike nnUNet, this architecture allows recursive transmission of crucial contextual features and direct interaction of features across layers, improving the accuracy of cell recognition in low-quality regions. The methodology involves multiple steps. First, three underlying features with the same channel number are sent into an attention module named gnConv to facilitate higher-order interaction of local context. Second, we leverage different layers in U-Net to integrate a Transformer nested with gnConv, and concatenate multiple Transformers to transmit multi-scale features in a bottom-up manner. We encode the downsampling features, the corresponding upsampling features, and the low-level feature information transmitted from lower layers to model potential correlations between features of varying sizes and resolutions. These multi-scale features refine the position information and morphological details of the current layer through recursive transmission. Results: Experimental results on a clinical dataset of 136 images show that the proposed method achieves competitive performance, with a Dice score of 82.72% and an AUC (area under the curve) of 90.92%, both higher than those of nnUNet.
Conclusion: The experimental results indicate that our model provides a cost-effective, high-precision segmentation solution for corneal stromal cells, particularly in challenging image scenarios.

18.
J Biomed Opt ; 29(Suppl 2): S22706, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38638450

ABSTRACT

Significance: Three-dimensional quantitative phase imaging (QPI) has rapidly emerged as a complementary tool to fluorescence imaging, as it provides an objective measure of cell morphology and dynamics, free of variability due to contrast agents. It has opened up new directions of investigation by providing systematic and correlative analysis of various cellular parameters without the limitations of photobleaching and phototoxicity. While current QPI systems allow rapid acquisition of tomographic images, the pipeline to analyze these raw three-dimensional (3D) tomograms is not well developed. We focus on a critical, yet often underappreciated, step of the analysis pipeline: 3D cell segmentation from the acquired tomograms. Aim: We report the CellSNAP (Cell Segmentation via Novel Algorithm for Phase Imaging) algorithm for the 3D segmentation of QPI images. Approach: The cell segmentation algorithm mimics the gemstone extraction process, initiating with a coarse 3D extrusion from a two-dimensional (2D) segmented mask to outline the cell structure. A 2D image is generated, and a segmentation algorithm identifies the boundary in the x-y plane. Leveraging cell continuity in consecutive z-stacks, a refined 3D segmentation, akin to fine chiseling in gemstone carving, completes the process. Results: The CellSNAP algorithm outstrips the current gold standard in terms of speed, robustness, and ease of implementation, achieving cell segmentation in under 2 s per cell on a single-core processor. The implementation of CellSNAP can easily be parallelized on a multi-core system for further speed improvements. For cases where segmentation is possible with the existing standard method, our algorithm displays an average difference of 5% for dry mass and 8% for volume measurements. We also show that CellSNAP can handle challenging image datasets where cells are clumped and marred by interferogram drifts, which pose major difficulties for all QPI-focused AI-based segmentation tools.
Conclusion: Our proposed method is less memory-intensive and significantly faster than existing methods, and can easily be run on a student laptop. Since the approach is rule-based, there is no need to collect large amounts of imaging data and manually annotate them for machine-learning-based model training. We envision that our work will lead to broader adoption of QPI imaging for high-throughput analysis, which has, in part, been stymied by a lack of suitable image segmentation tools.


Subjects
Image Processing, Computer-Assisted; Imaging, Three-Dimensional; Humans; Image Processing, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Quantitative Phase Imaging; Algorithms; Optical Imaging
19.
Math Biosci Eng ; 21(2): 2163-2188, 2024 Jan 10.
Article in English | MEDLINE | ID: mdl-38454678

ABSTRACT

An automatic recognition system for white blood cells can assist hematologists in the diagnosis of many diseases, and accuracy and efficiency are paramount for computer-based systems. In this paper, we present a new image processing system to recognize the five types of white blood cells in peripheral blood, with marked improvement in efficiency over mainstream methods. Prevailing deep learning segmentation solutions often use millions of parameters to extract high-level image features and neglect prior domain knowledge, which consumes substantial computational resources and increases the risk of overfitting, especially when only limited medical image samples are available for training. To address these challenges, we propose a novel memory-efficient strategy that exploits graph structures derived from the images. Specifically, we introduce a lightweight superpixel-based graph neural network (GNN) and break new ground by introducing superpixel metric learning to segment nucleus and cytoplasm. Remarkably, our proposed segmentation model, the superpixel metric graph neural network (SMGNN), achieves state-of-the-art segmentation performance while using up to 10,000× fewer parameters than existing approaches. The subsequent segmentation-based cell type classification shows that such automatic recognition algorithms are accurate and efficient enough for use in hematological laboratories. Our code is publicly available at https://github.com/jyh6681/SPXL-GNN.
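The superpixel-graph idea can be illustrated without the paper's actual pipeline: partition the image into regions, take one feature per region, and connect regions that touch. A toy sketch using a fixed grid in place of SLIC superpixels (illustrative only; the names and the grid partition are not from the paper):

```python
import numpy as np

def grid_superpixels(img, s):
    """Trivial s-by-s grid 'superpixels' with mean-intensity node
    features and region-adjacency edges -- a stand-in for SLIC that
    shows how an image becomes the node/edge lists a GNN consumes."""
    h, w = img.shape
    cols = (w + s - 1) // s
    labels = (np.arange(h)[:, None] // s) * cols + np.arange(w)[None, :] // s
    n = labels.max() + 1
    feats = np.array([img[labels == k].mean() for k in range(n)])
    # an edge wherever two different labels touch horizontally or vertically
    pairs = np.concatenate([
        np.stack([labels[:, :-1].ravel(), labels[:, 1:].ravel()], 1),
        np.stack([labels[:-1, :].ravel(), labels[1:, :].ravel()], 1)])
    edges = {(int(min(a, b)), int(max(a, b))) for a, b in pairs if a != b}
    return labels, feats, sorted(edges)

img = np.arange(16, dtype=float).reshape(4, 4)
labels, feats, edges = grid_superpixels(img, 2)
print(feats)   # per-node mean intensity
print(edges)   # region-adjacency edges: [(0, 1), (0, 2), (1, 3), (2, 3)]
```

Working on a few hundred superpixel nodes instead of tens of thousands of pixels is what makes the graph formulation so parameter- and memory-efficient.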


Subjects
Algorithms; Neural Networks, Computer; Image Processing, Computer-Assisted/methods; Leukocytes; Cytoplasm
20.
Methods Mol Biol ; 2779: 407-423, 2024.
Article in English | MEDLINE | ID: mdl-38526797

ABSTRACT

The complexities and cellular heterogeneity of tissues necessitate the concurrent detection of markers beyond the limitations of conventional imaging approaches, in order to spatially resolve the relationships between immune cell populations and their environments. This is a necessary complement to single-cell suspension-based methods and informs a better understanding of the events that may underlie pathological conditions. Imaging mass cytometry is a high-dimensional imaging modality that allows the concurrent detection of up to 40 protein markers of interest across tissues at subcellular resolution. Here, we present an optimized staining protocol for imaging mass cytometry with modifications that integrate RNAscope. This addition enables combined protein and single-molecule RNA detection, thereby extending the utility of imaging mass cytometry to researchers investigating low-abundance or noncoding targets. The procedure described is broadly applicable to comprehensive immune profiling of host-pathogen interactions, tumor microenvironments, and inflammatory conditions, all within the tissue context.


Subjects
Proteins; RNA; Staining and Labeling; Image Cytometry/methods; Flow Cytometry/methods