ABSTRACT
Cells are among the most dynamic entities, constantly undergoing growth, division, movement, and interaction with other cells and the environment. Time-lapse microscopy is central to capturing these dynamic behaviors, providing detailed temporal and spatial information that allows biologists to observe and analyze cellular activities in real time. The analysis of time-lapse microscopy data relies on two fundamental tasks: cell segmentation and cell tracking. Integrating deep learning into bioimage analysis has revolutionized cell segmentation, producing models with high precision across a wide range of biological images. However, developing generalizable deep-learning models for tracking cells over time remains challenging due to the scarcity of large, diverse annotated datasets of time-lapse movies of cells. To address this bottleneck, we propose a GAN-based time-lapse microscopy generator, termed tGAN, designed to significantly enhance the quality and diversity of synthetic annotated time-lapse microscopy data. Our model features a dual-resolution architecture that synthesizes both low- and high-resolution images, capturing the intricate dynamics of cellular processes essential for accurate tracking. We demonstrate the performance of tGAN in generating high-quality, realistic, annotated time-lapse videos. Our findings indicate that tGAN reduces the dependency on extensive manual annotation while enhancing the precision of cell tracking models for time-lapse microscopy.
ABSTRACT
Deep learning is transforming bioimage analysis, but its application in single-cell segmentation is limited by the lack of large, diverse annotated datasets. We addressed this by introducing a CycleGAN-based architecture, cGAN-Seg, that enhances the training of cell segmentation models with limited annotated datasets. During training, cGAN-Seg generates annotated synthetic phase-contrast or fluorescent images with morphological details and nuances closely mimicking real images. This increases the variability seen by the segmentation model, enhancing the authenticity of synthetic samples and thereby improving predictive accuracy and generalization. Experimental results show that cGAN-Seg significantly improves the performance of widely used segmentation models over conventional training techniques. Our approach has the potential to accelerate the development of foundation models for microscopy image analysis, indicating its significance in advancing bioimage analysis with efficient training methodologies.
ABSTRACT
Embryonic stem cells (ESCs) can self-organize in vitro into developmental patterns with spatial organization and molecular similarity to early embryonic stages. This self-organization of ESCs requires transmission of signaling cues, via addition of small-molecule chemicals or recombinant proteins, to induce distinct embryonic cellular fates and subsequent assembly into structures that can mimic aspects of early embryonic development. During natural embryonic development, different embryonic cell types co-develop together, where each cell type expresses specific fate-inducing transcription factors through activation of non-coding regulatory elements and interactions with neighboring cells. However, previous studies have not fully explored the possibility of engineering endogenous regulatory elements to shape self-organization of ESCs into spatially-ordered embryo models. Here, we hypothesized that cell-intrinsic activation of a minimum number of such endogenous regulatory elements is sufficient to self-organize ESCs into early embryonic models. Our results show that CRISPR-based activation (CRISPRa) of only two endogenous regulatory elements in the genome of pluripotent stem cells is sufficient to generate embryonic patterns that show spatial and molecular resemblance to pre-gastrulation mouse embryonic development. Quantitative single-cell live fluorescent imaging showed that the emergence of spatially-ordered embryonic patterns arises through the intrinsic induction of cell fate that leads to an orchestrated collective cellular motion. Based on these results, we propose a straightforward approach to efficiently form 3D embryo models through intrinsic CRISPRa-based epigenome editing, independent of external signaling cues. CRISPRa-Programmed Embryo Models (CPEMs) show a highly consistent composition of major embryonic cell types that are spatially organized, with nearly 80% of the structures forming an embryonic cavity.
Single cell transcriptomics confirmed the presence of main embryonic cell types in CPEMs with transcriptional similarity to pre-gastrulation mouse embryos and revealed novel signaling communication links between different embryonic cell types. Our findings offer a programmable embryo model and demonstrate that minimum intrinsic epigenome editing is sufficient to self-organize ESCs into highly consistent pre-gastrulation embryo models.
ABSTRACT
The application of deep learning is rapidly transforming the field of bioimage analysis. While deep learning has shown great promise in complex microscopy tasks such as single-cell segmentation, the development of generalizable foundation deep-learning segmentation models is hampered by the scarcity of large and diverse annotated datasets of cell images for training purposes. Generative Adversarial Networks (GANs) can generate realistic images that can be used to train deep learning models without requiring large collections of manually annotated microscopy images. Here, we propose a customized CycleGAN architecture to train an enhanced cell segmentation model with limited annotated cell images, effectively addressing the paucity of annotated data in microscopy imaging. Our customized CycleGAN model can generate realistic synthetic images of cells with morphological details and nuances very similar to those of real images. This method not only increases the variability seen during training but also enhances the authenticity of synthetic samples, thereby improving the overall predictive accuracy and robustness of the cell segmentation model. Our experimental results show that our CycleGAN-based method significantly improves the performance of the segmentation model compared to conventional training techniques. Interestingly, we demonstrate that our model can extrapolate its knowledge by synthesizing imaging scenarios that were not seen during training. Our proposed customized CycleGAN method will accelerate the development of foundation models for cell segmentation in microscopy images.
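The abstract above does not include an implementation; as an illustration of the cycle-consistency idea at the heart of CycleGAN-style training, the toy sketch below uses invertible linear maps as placeholder "generators" (the real models are deep CNNs) to show the L1 cycle term that training drives toward zero, alongside the adversarial losses. All names here are illustrative, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "generators" standing in for CycleGAN's mapping networks:
# G maps domain X (e.g. phase-contrast images) to domain Y (e.g. masks),
# F maps Y back to X. Choosing F as the exact inverse of G gives a pair
# with (near-)zero cycle loss, the target of the cycle-consistency term.
A = rng.normal(size=(16, 16))
G = lambda x: x @ A
F = lambda y: y @ np.linalg.inv(A)

def cycle_consistency_loss(x_batch, y_batch):
    """L1 cycle-consistency term used alongside the adversarial losses:
    F(G(x)) should recover x, and G(F(y)) should recover y."""
    forward = np.abs(F(G(x_batch)) - x_batch).mean()
    backward = np.abs(G(F(y_batch)) - y_batch).mean()
    return forward + backward

x = rng.normal(size=(8, 16))
y = rng.normal(size=(8, 16))
loss = cycle_consistency_loss(x, y)  # ~0 for this perfectly inverse pair
```

In the full method, this term is minimized jointly with two adversarial losses (one discriminator per domain), which is what lets unpaired images and masks be used for training.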
ABSTRACT
Time-lapse microscopy is the only method that can directly capture the dynamics and heterogeneity of fundamental cellular processes at the single-cell level with high temporal resolution. Successful application of single-cell time-lapse microscopy requires automated segmentation and tracking of hundreds of individual cells over several time points. However, segmentation and tracking of single cells remain challenging for the analysis of time-lapse microscopy images, in particular for widely available and non-toxic imaging modalities such as phase-contrast imaging. This work presents a versatile and trainable deep-learning model, termed DeepSea, that allows for both segmentation and tracking of single cells in sequences of phase-contrast live microscopy images with higher precision than existing models. We showcase the application of DeepSea by analyzing cell size regulation in embryonic stem cells.
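DeepSea's exact tracking algorithm is not detailed in the abstract; as a hedged sketch of the linking step such trackers must perform, the snippet below matches segmented cell centroids between consecutive frames by minimizing total Euclidean distance with the Hungarian algorithm, a standard baseline. The function name and `max_dist` gating threshold are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def link_frames(centroids_t, centroids_t1, max_dist=20.0):
    """Match cell centroids in frame t to frame t+1 by minimal total
    Euclidean distance (Hungarian algorithm). Pairs farther apart than
    max_dist are treated as disappearances/new cells and left unmatched."""
    cost = np.linalg.norm(
        centroids_t[:, None, :] - centroids_t1[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_dist]

# Two frames with three cells; the third "cell" jumps far away, so its
# track is terminated rather than force-matched.
frame_t  = np.array([[10., 10.], [50., 50.], [90., 90.]])
frame_t1 = np.array([[12., 11.], [48., 52.], [150., 150.]])
links = link_frames(frame_t, frame_t1)  # [(0, 0), (1, 1)]
```

Deep-learning trackers such as DeepSea replace or augment the raw distance cost with learned appearance and motion features, but the assignment structure of the problem is the same.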
Subjects
Deep Learning; Microscopy; Time-Lapse Imaging/methods; Microscopy, Phase-Contrast
ABSTRACT
BACKGROUND AND OBJECTIVE: This study aims to develop and evaluate a unique global mammographic image feature analysis scheme to predict the likelihood that a suspicious breast mass detected in a case is malignant. METHODS: From the entire breast area depicted on the mammograms, 59 features were initially computed to characterize the breast tissue properties in both the spatial and frequency domains. Given that each case consists of two cranio-caudal and two medio-lateral oblique view images of the left and right breasts, two feature pools were built, containing the features computed from either the two images of the positive breast or all four images of both breasts. Next, for each feature pool, a particle swarm optimization (PSO) method was applied to determine the optimal feature cluster, followed by training a support vector machine (SVM) classifier to generate a final score for predicting the likelihood of the case being malignant. To test the scheme, we assembled a dataset of 275 patients who underwent biopsy due to suspicious findings on mammograms; 134 cases are malignant and 141 are benign. A ten-fold cross-validation method was used to train and test the scheme. RESULTS: The classification performance levels, measured by the areas under the ROC curves, are 0.79 ± 0.07 and 0.75 ± 0.08 for the SVM classifiers trained using image features computed from the two-view and four-view images, respectively. CONCLUSIONS: This study demonstrates the feasibility of a new global mammographic image feature analysis-based scheme to predict the likelihood of a case being malignant without lesion segmentation.
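The PSO-plus-SVM pipeline described above can be sketched as follows. This is a minimal illustration on synthetic data, not the study's code: a binary PSO searches over feature-subset masks, and each particle's fitness is the cross-validated ROC AUC of an SVM trained on the selected features. Swarm size, iteration count, inertia/acceleration constants, and the synthetic dataset are all assumptions chosen to keep the example fast.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(42)

# Synthetic stand-in for the 59 mammographic features over 275 cases.
X, y = make_classification(n_samples=200, n_features=20, n_informative=5,
                           n_redundant=5, random_state=42)

def fitness(mask):
    """Cross-validated AUC of an SVM on the selected feature subset."""
    if mask.sum() == 0:
        return 0.0
    return cross_val_score(SVC(), X[:, mask.astype(bool)], y,
                           cv=5, scoring="roc_auc").mean()

def binary_pso(n_features, n_particles=8, n_iters=10, w=0.7, c1=1.5, c2=1.5):
    # Particles are 0/1 masks over features; velocities are real-valued.
    pos = (rng.random((n_particles, n_features)) > 0.5).astype(float)
    vel = rng.normal(scale=0.1, size=(n_particles, n_features))
    pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
    best = pbest[pbest_fit.argmax()].copy()
    best_fit = pbest_fit.max()
    for _ in range(n_iters):
        r1, r2 = rng.random(vel.shape), rng.random(vel.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (best - pos)
        # Sigmoid transfer turns velocities into bit-flip probabilities.
        pos = (rng.random(vel.shape) < 1.0 / (1.0 + np.exp(-vel))).astype(float)
        fits = np.array([fitness(p) for p in pos])
        improved = fits > pbest_fit
        pbest[improved], pbest_fit[improved] = pos[improved], fits[improved]
        if pbest_fit.max() > best_fit:
            best = pbest[pbest_fit.argmax()].copy()
            best_fit = pbest_fit.max()
    return best, best_fit

best_mask, best_auc = binary_pso(X.shape[1])
```

The study additionally evaluates the final selected cluster under ten-fold cross-validation; here a five-fold score doubles as the PSO fitness purely to keep the sketch short.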
Subjects
Breast Neoplasms/diagnostic imaging; Mammography/methods; Radiographic Image Interpretation, Computer-Assisted/methods; Algorithms; Breast Density; Databases, Factual/statistics & numerical data; Female; Humans; Mammography/statistics & numerical data; ROC Curve; Radiographic Image Interpretation, Computer-Assisted/statistics & numerical data; Support Vector Machine
ABSTRACT
The tumor-stroma ratio (TSR) reflected in hematoxylin and eosin (H&E)-stained histological images is a potential prognostic factor for survival. Automatic image processing techniques that allow high-throughput, precise discrimination of tumor epithelium and stroma are required to elevate the prognostic significance of the TSR. As a variant of deep learning techniques, transfer learning leverages natural-image features learned by deep convolutional neural networks (CNNs) to reduce the immense sample sizes deep CNNs otherwise require when handling biomedical classification problems. Here we studied different transfer learning strategies for accurately distinguishing epithelial and stromal regions of H&E-stained histological images acquired from either breast or ovarian cancer tissue. We compared the performance of prominent deep CNNs used either as a feature extractor or as an architecture for fine-tuning on target images. Moreover, we addressed the currently contested question of whether higher-level features generalize worse than lower-level ones because they are more specific to the source-image domain. Under our experimental setting, the transfer learning approach achieved an accuracy of 90.2 (vs. 91.1 for fine-tuning) with GoogLeNet, suggesting its feasibility for assisting pathology-based binary classification problems. Our results also show that whether the lower-level or the higher-level features performed better was determined by the architecture of the deep CNN.
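The two strategies compared above, a frozen feature extractor versus fine-tuning the representation on target data, can be illustrated with a low-dimensional toy analogue. The sketch below uses PCA as a stand-in representation learner and synthetic data in place of natural images and H&E patches; it is not the GoogLeNet pipeline, and all dataset and variable choices are assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)

# One synthetic dataset split into a "source" domain and a noise-shifted
# "target" domain, loosely mimicking natural images vs. H&E patches.
X, y = make_classification(n_samples=800, n_features=30, n_informative=10,
                           random_state=0)
Xs = X[:500]                                          # source domain
Xt = X[500:] + rng.normal(scale=0.3, size=(300, 30))  # shifted target
yt = y[500:]
Xtr, Xte, ytr, yte = train_test_split(Xt, yt, test_size=0.3, random_state=0)

# Strategy 1 - frozen feature extractor: the representation (PCA here)
# is learned on the source domain and reused unchanged on the target;
# only a shallow classifier is trained on target data.
frozen = PCA(n_components=10).fit(Xs)
clf_frozen = LogisticRegression(max_iter=1000).fit(frozen.transform(Xtr), ytr)
acc_frozen = clf_frozen.score(frozen.transform(Xte), yte)

# Strategy 2 - "fine-tuning" analogue: re-fit the representation on the
# target training data itself before training the classifier.
tuned = PCA(n_components=10).fit(Xtr)
clf_tuned = LogisticRegression(max_iter=1000).fit(tuned.transform(Xtr), ytr)
acc_tuned = clf_tuned.score(tuned.transform(Xte), yte)
```

The study's finding maps onto this structure: which strategy wins, and how deep a layer to reuse, depends on how far the target domain sits from the source and on the network architecture.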
Subjects
Breast Neoplasms/pathology; Deep Learning; Image Processing, Computer-Assisted; Ovarian Neoplasms/pathology; Databases, Factual; Female; Humans; Tissue Array Analysis
ABSTRACT
This study aimed to investigate the feasibility of integrating image features computed in both the spatial and frequency domains to better describe tumor heterogeneity for precise prediction of tumor response to postsurgical chemotherapy in patients with advanced-stage ovarian cancer. A computer-aided scheme was applied to first compute 133 features from five categories: shape and density, fast Fourier transform, discrete cosine transform (DCT), wavelet, and gray-level difference method. An optimal feature cluster was then determined using the particle swarm optimization algorithm, aiming to achieve a discrimination power unattainable with single features. The scheme was tested on a balanced dataset (responders and non-responders defined using 6-month PFS) retrospectively collected from 120 ovarian cancer patients. Evaluating the performance of individual features among the five categories, the DCT features achieved higher predictive accuracy than the features in the other groups. By comparison, a quantitative image marker generated from the optimal feature cluster yielded an area under the ROC curve (AUC) of 0.86, while the top-performing single feature only had an AUC of 0.74. Furthermore, the features computed in the frequency domain were as important as those computed in the spatial domain. In conclusion, this study demonstrates the potential of the proposed quantitative image marker, fusing features computed in both the spatial and frequency domains, for reliable prediction of tumor response to postsurgical chemotherapy.
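Frequency-domain texture features of the DCT family highlighted above can be sketched as follows. The snippet summarizes a 2D image patch by statistics of its low-frequency DCT block; the specific statistics and feature names are illustrative assumptions, not the study's 133-feature definition.

```python
import numpy as np
from scipy.fft import dctn

def dct_features(patch, k=8):
    """Summarize a 2D patch by statistics of its k x k low-frequency
    DCT coefficient block (a simple frequency-domain texture descriptor)."""
    coeffs = dctn(patch, norm="ortho")[:k, :k]
    mags = np.abs(coeffs)
    p = mags / mags.sum()
    entropy = -(p * np.log2(p + 1e-12)).sum()
    return {"dct_energy": float((mags ** 2).sum()),
            "dct_mean": float(mags.mean()),
            "dct_entropy": float(entropy)}

rng = np.random.default_rng(1)
smooth = np.outer(np.linspace(0, 1, 64), np.linspace(0, 1, 64))  # low texture
noisy = rng.random((64, 64))                                     # high texture
f_smooth, f_noisy = dct_features(smooth), dct_features(noisy)
# A more heterogeneous patch spreads energy across frequencies, so its
# DCT entropy is higher than that of a smooth gradient.
```

Features like these would then enter the PSO-based cluster selection alongside the spatial-domain shape and density features.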