Results 1 - 6 of 6
1.
bioRxiv ; 2024 Jun 14.
Article in English | MEDLINE | ID: mdl-38915545

ABSTRACT

Cells are among the most dynamic entities, constantly undergoing processes such as growth, division, movement, and interaction with other cells and with the environment. Time-lapse microscopy is central to capturing these dynamic behaviors, providing the detailed temporal and spatial information that allows biologists to observe and analyze cellular activities in real time. The analysis of time-lapse microscopy data relies on two fundamental tasks: cell segmentation and cell tracking. Integrating deep learning into bioimage analysis has revolutionized cell segmentation, producing models with high precision across a wide range of biological images. However, developing generalizable deep-learning models for tracking cells over time remains challenging because large, diverse annotated datasets of time-lapse cell movies are scarce. To address this bottleneck, we propose a GAN-based time-lapse microscopy generator, termed tGAN, designed to significantly enhance the quality and diversity of synthetic annotated time-lapse microscopy data. Our model features a dual-resolution architecture that synthesizes both low- and high-resolution images, capturing the intricate dynamics of cellular processes essential for accurate tracking. We demonstrate the performance of tGAN in generating high-quality, realistic, annotated time-lapse videos. Our findings indicate that tGAN reduces dependency on extensive manual annotation while improving the precision of cell tracking models for time-lapse microscopy.
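The abstract describes the dual-resolution generator only at a high level. As a hedged illustration, the sketch below shows one way a two-headed, coarse-to-fine GAN generator could be laid out in PyTorch; all layer sizes, names, and the shared-trunk wiring are assumptions for illustration, not the published tGAN architecture.

```python
# Minimal sketch of a dual-resolution generator (illustrative, not tGAN's code).
import torch
import torch.nn as nn

class DualResolutionGenerator(nn.Module):
    def __init__(self, latent_dim=128):
        super().__init__()
        # Shared trunk maps the latent code to a coarse 4x4 feature map.
        self.trunk = nn.Sequential(
            nn.Linear(latent_dim, 256 * 4 * 4),
            nn.ReLU(inplace=True),
        )
        # Low-resolution head: 4x4 features -> 32x32 frame (global dynamics).
        self.low_head = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1),  # 8x8
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),   # 16x16
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 1, 4, stride=2, padding=1),     # 32x32
            nn.Tanh(),
        )
        # High-resolution head refines the upsampled coarse frame to 128x128.
        self.high_head = nn.Sequential(
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
            nn.Conv2d(1, 32, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 3, padding=1),
            nn.Tanh(),
        )

    def forward(self, z):
        h = self.trunk(z).view(-1, 256, 4, 4)
        low = self.low_head(h)      # coarse frame
        high = self.high_head(low)  # refined frame with fine cellular detail
        return low, high

z = torch.randn(2, 128)
low, high = DualResolutionGenerator()(z)
print(low.shape, high.shape)  # torch.Size([2, 1, 32, 32]) torch.Size([2, 1, 128, 128])
```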

2.
iScience ; 27(5): 109740, 2024 May 17.
Article in English | MEDLINE | ID: mdl-38706861

ABSTRACT

Deep learning is transforming bioimage analysis, but its application in single-cell segmentation is limited by the lack of large, diverse annotated datasets. We addressed this by introducing a CycleGAN-based architecture, cGAN-Seg, that enhances the training of cell segmentation models with limited annotated datasets. During training, cGAN-Seg generates annotated synthetic phase-contrast or fluorescent images whose morphological details and nuances closely mimic real images. This increases the variability seen by the segmentation model and enhances the authenticity of synthetic samples, thereby improving predictive accuracy and generalization. Experimental results show that cGAN-Seg significantly improves the performance of widely used segmentation models over conventional training techniques. Our approach has the potential to accelerate the development of foundation models for microscopy image analysis, underscoring its value for advancing bioimage analysis with efficient training methodologies.
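As a rough illustration of the augmentation strategy described above, the sketch below mixes GAN-generated image/mask pairs into a real annotated pool before training a segmentation model. The `generator` callable, `make_synthetic_pairs`, and `build_loader` are hypothetical stand-ins, not the cGAN-Seg API.

```python
# Hedged sketch: augment a limited annotated set with synthetic pairs.
import torch
from torch.utils.data import ConcatDataset, DataLoader, TensorDataset

def make_synthetic_pairs(generator, masks, device="cpu"):
    """Render a synthetic phase-contrast image for each annotated mask."""
    generator.eval()
    with torch.no_grad():
        images = generator(masks.to(device)).cpu()
    # The synthetic images inherit the real masks as free annotations.
    return TensorDataset(images, masks)

def build_loader(real_ds, generator, masks, batch_size=8):
    """Combine real (image, mask) pairs with synthetic ones in one loader."""
    synth_ds = make_synthetic_pairs(generator, masks)
    return DataLoader(ConcatDataset([real_ds, synth_ds]),
                      batch_size=batch_size, shuffle=True)
```

The design point is that the generator turns every existing mask into an extra annotated training sample, so annotation effort is paid once but reused many times.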

3.
bioRxiv ; 2023 Jul 28.
Article in English | MEDLINE | ID: mdl-37546774

ABSTRACT

The application of deep learning is rapidly transforming the field of bioimage analysis. While deep learning has shown great promise in complex microscopy tasks such as single-cell segmentation, the development of generalizable foundation segmentation models is hampered by the scarcity of large and diverse annotated datasets of cell images for training. Generative adversarial networks (GANs) can produce realistic images that can be used to train deep learning models without requiring large, manually annotated microscopy datasets. Here, we propose a customized CycleGAN architecture to train an enhanced cell segmentation model with limited annotated cell images, effectively addressing the paucity of annotated data in microscopy imaging. Our customized CycleGAN model generates realistic synthetic cell images with morphological details and nuances very similar to those of real images. This method not only increases the variability seen during training but also enhances the authenticity of synthetic samples, thereby improving the overall predictive accuracy and robustness of the cell segmentation model. Our experimental results show that our CycleGAN-based method significantly improves the performance of the segmentation model compared with conventional training techniques. Interestingly, we demonstrate that our model can extrapolate its knowledge by synthesizing imaging scenarios that were not seen during training. Our customized CycleGAN method will accelerate the development of foundation models for cell segmentation in microscopy images.
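The cycle-consistency idea at the heart of any CycleGAN variant can be written compactly. The sketch below shows the standard two-cycle L1 term, assuming PyTorch generators mapping masks to images and back; the weight of 10 follows the common CycleGAN default and is not necessarily this paper's setting.

```python
# Hedged sketch of the standard CycleGAN cycle-consistency loss.
import torch.nn.functional as F

def cycle_consistency_loss(g_mask2img, f_img2mask, real_image, real_mask,
                           lam=10.0):
    # Forward cycle: mask -> synthetic image -> reconstructed mask.
    rec_mask = f_img2mask(g_mask2img(real_mask))
    # Backward cycle: image -> predicted mask -> reconstructed image.
    rec_image = g_mask2img(f_img2mask(real_image))
    # Penalize both reconstructions so the generators stay invertible.
    return lam * (F.l1_loss(rec_mask, real_mask)
                  + F.l1_loss(rec_image, real_image))
```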

4.
Cell Rep Methods ; 3(6): 100500, 2023 Jun 26.
Article in English | MEDLINE | ID: mdl-37426758

ABSTRACT

Time-lapse microscopy is the only method that can directly capture the dynamics and heterogeneity of fundamental cellular processes at the single-cell level with high temporal resolution. Successful application of single-cell time-lapse microscopy requires automated segmentation and tracking of hundreds of individual cells over several time points. However, segmentation and tracking of single cells remain challenging for the analysis of time-lapse microscopy images, in particular for widely available and non-toxic imaging modalities such as phase-contrast imaging. This work presents a versatile and trainable deep-learning model, termed DeepSea, that allows for both segmentation and tracking of single cells in sequences of phase-contrast live microscopy images with higher precision than existing models. We showcase the application of DeepSea by analyzing cell size regulation in embryonic stem cells.


Subject(s)
Deep Learning; Microscopy; Time-Lapse Imaging/methods; Microscopy, Phase-Contrast
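To make the tracking half of the task concrete, here is a minimal tracking-by-detection sketch: segmented cells in consecutive frames are linked by maximizing bounding-box IoU with the Hungarian algorithm. This illustrates the general task, not DeepSea's own tracker.

```python
# Hedged sketch: IoU-based frame-to-frame cell linking.
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(box_a, box_b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def link_frames(prev_boxes, curr_boxes, min_iou=0.3):
    """Match cells between frames; returns (prev_idx, curr_idx) pairs."""
    cost = np.array([[1.0 - iou(p, c) for c in curr_boxes]
                     for p in prev_boxes])
    rows, cols = linear_sum_assignment(cost)
    # Reject weak matches: unmatched cells become track ends or births.
    return [(r, c) for r, c in zip(rows, cols)
            if cost[r, c] <= 1.0 - min_iou]
```
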
5.
Front Neurosci ; 17: 1333725, 2023.
Article in English | MEDLINE | ID: mdl-38312737

ABSTRACT

Mild traumatic brain injury (mTBI) is a public health concern. The present study aimed to develop an automatic classifier to distinguish between patients with chronic mTBI (n = 83) and healthy controls (HCs) (n = 40). Resting-state functional MRI (rs-fMRI) and positron emission tomography (PET) imaging were acquired from the subjects. We proposed a novel deep-learning-based framework, including an autoencoder (AE), to extract high-level latent features, using rectified linear unit (ReLU) and sigmoid activation functions. Single-modality and multimodality algorithms integrating multiple rs-fMRI metrics and PET data were developed. We hypothesized that combining different imaging modalities provides complementary information and improves classification performance. Additionally, a novel data interpretation approach was utilized to identify the top-performing features learned by the AEs. Our method delivered classification accuracies in the range of 79-91.67% for single neuroimaging modalities; classification performance improved to 95.83% when employing the multimodality model. The models identified several brain regions located in the default mode network, sensorimotor network, visual cortex, cerebellum, and limbic system as the most discriminative features. We suggest that this approach could be extended to identify objective biomarkers for predicting mTBI in clinical settings.
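As a hedged sketch of the kind of model the abstract describes, the PyTorch autoencoder below uses ReLU encoder layers and a sigmoid output to compress a vector of imaging-derived features into a low-dimensional latent code. Layer sizes and the feature dimensionality are illustrative assumptions, not the paper's configuration.

```python
# Hedged sketch of a feature autoencoder with ReLU/sigmoid activations.
import torch.nn as nn

class FeatureAutoencoder(nn.Module):
    def __init__(self, n_features=200, latent_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, latent_dim), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.ReLU(),
            nn.Linear(64, n_features), nn.Sigmoid(),  # assumes inputs scaled to [0, 1]
        )

    def forward(self, x):
        z = self.encoder(x)           # latent code fed to the downstream classifier
        return self.decoder(z), z
```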

6.
Front Neurosci ; 16: 1099560, 2022.
Article in English | MEDLINE | ID: mdl-36699521

ABSTRACT

Mild traumatic brain injury (mTBI) is a major public health concern that can result in a broad spectrum of short-term and long-term symptoms. Recently, machine learning (ML) algorithms have been used in neuroscience research for diagnostics and prognostic assessment of brain disorders. The present study aimed to develop an automatic classifier to distinguish patients suffering from chronic mTBI from healthy controls (HCs) utilizing multilevel metrics of resting-state functional magnetic resonance imaging (rs-fMRI). Sixty mTBI patients and forty HCs were enrolled and allocated to training and testing datasets with a ratio of 80:20. Several rs-fMRI metrics, including fractional amplitude of low-frequency fluctuation (fALFF), regional homogeneity (ReHo), degree centrality (DC), voxel-mirrored homotopic connectivity (VMHC), functional connectivity strength (FCS), and seed-based FC, were generated from two main analytical categories: local measures and network measures. A two-sample t-test was employed to compare the mTBI and HC groups. Then, for each rs-fMRI metric, features were selected by extracting the mean values from the clusters showing significant differences. Finally, support vector machine (SVM) models based on separate and multilevel metrics were built, and the performance of the classifiers was assessed using five-fold cross-validation and the area under the receiver operating characteristic curve (AUC). Feature importance was estimated using Shapley additive explanation (SHAP) values. Among local measures, the range of AUC was 86.67-100%, and the optimal SVM models were those based on the combined multilevel rs-fMRI metrics and on DC as a separate model, each with an AUC of 100%. Among network measures, the range of AUC was 80.42-93.33%, and the optimal SVM model was the one based on the combined multilevel seed-based FC metrics. The SHAP analysis revealed the DC value in the left postcentral region and the seed-based FC value between the motor ventral network and the right superior temporal region as the most important local and network features, with the greatest contribution to the classification models. Our findings demonstrate that different rs-fMRI metrics can provide complementary information for classifying patients suffering from chronic mTBI. Moreover, we showed that the ML approach is a promising tool for detecting patients with mTBI and might serve as a potential imaging biomarker to identify patients at the individual level. Clinical trial registration: [clinicaltrials.gov], identifier [NCT03241732].
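The evaluation protocol described above (an SVM scored by AUC under five-fold cross-validation) maps directly onto standard scikit-learn calls. The sketch below uses a synthetic feature matrix as a placeholder for the cluster-mean rs-fMRI features; the RBF kernel is an assumption, as the abstract does not state the kernel used.

```python
# Hedged sketch: SVM with five-fold cross-validated AUC, as in the protocol above.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 12))        # stand-in for cluster-mean rs-fMRI features
y = rng.integers(0, 2, size=100)      # 0 = HC, 1 = mTBI (synthetic labels)

# Standardize features, then fit an SVM; the AUC scorer uses the SVM's
# decision_function, so probability calibration is not required.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
aucs = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(f"AUC: {aucs.mean():.2f} +/- {aucs.std():.2f}")
```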
