Results 1 - 4 of 4
1.
J Biomed Opt ; 29(Suppl 2): S22710, 2024 Jun.
Article in English | MEDLINE | ID: mdl-39184400

ABSTRACT

Significance: Accurate cell segmentation and classification in three-dimensional (3D) images are vital for studying live cell behavior and drug responses in 3D tissue culture. Evaluating diverse cell populations in 3D cell culture over time requires non-toxic staining methods, as specific fluorescent tags may not be suitable and immunofluorescence staining can be cytotoxic to prolonged live cell cultures.

Aim: We aim to perform machine learning-based cell classification within a live, heterogeneous cell population grown in 3D tissue culture, relying only on reflectance, transmittance, and nuclei-counterstained images obtained by confocal microscopy.

Approach: We employed a supervised convolutional neural network (CNN) to classify tumor cells and fibroblasts within 3D-grown spheroids. The cells are first segmented using the marker-controlled watershed image processing method. Training data comprised nuclei counterstain, reflectance, and transmitted-light images, with stained fibroblasts and tumor cells as ground-truth labels.

Results: Marker-controlled watershed segmentation successfully separated 84% of spheroid cells into single cells. We achieved a median cell-type classification accuracy of 67% (95% confidence interval of the median: 65% to 71%). We also reconstruct the original 3D images from the CNN-classified cells to visualize the cell distribution of the original 3D-stained image.

Conclusion: This study introduces a non-invasive, toxicity-free approach to 3D cell culture evaluation that combines machine learning with confocal microscopy, opening avenues for advanced cell studies.


Subjects
Cell Nucleus; Neural Networks, Computer; Stromal Cells; Humans; Stromal Cells/cytology; Stromal Cells/pathology; Spheroids, Cellular/pathology; Microscopy, Confocal/methods; Cell Culture Techniques, Three Dimensional/methods; Fibroblasts/cytology; Image Processing, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Cell Line, Tumor; Neoplasms/diagnostic imaging; Neoplasms/pathology
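The marker-controlled watershed step named in this abstract can be illustrated with off-the-shelf tools. Below is a minimal, hypothetical Python sketch using scikit-image; the file name, smoothing sigma, and minimum peak distance are assumptions for illustration, not values from the paper.

    import numpy as np
    from scipy import ndimage as ndi
    from skimage import filters, io, segmentation
    from skimage.feature import peak_local_max

    # Nuclei counterstain channel; the file name is an assumption.
    nuclei = io.imread("nuclei_counterstain.tif")
    smoothed = filters.gaussian(nuclei, sigma=2)
    mask = smoothed > filters.threshold_otsu(smoothed)  # foreground mask

    # Markers: one seed per nucleus, from local maxima of the distance transform.
    distance = ndi.distance_transform_edt(mask)
    peaks = peak_local_max(distance, min_distance=7, labels=mask)
    markers = np.zeros(distance.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)

    # Watershed flooded from the markers, confined to the foreground.
    labels = segmentation.watershed(-distance, markers, mask=mask)
    print(f"segmented {labels.max()} candidate cells")

The marker image is what makes the watershed "marker-controlled": flooding starts only from the nucleus seeds, so touching cells are split along the ridge between their distance-transform basins.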
2.
Opt Express ; 28(2): 2511-2535, 2020 Jan 20.
Article in English | MEDLINE | ID: mdl-32121939

ABSTRACT

Deep neural networks (DNNs) are efficient solvers for ill-posed problems and have been shown to outperform classical optimization techniques in several computational imaging problems. In supervised mode, DNNs are trained by minimizing a measure of the difference between their actual and desired outputs; the choice of measure, referred to as the "loss function," strongly affects performance and generalization ability. In a recent paper [A. Goy et al., Phys. Rev. Lett. 121(24), 243902 (2018)], we showed that DNNs trained with the negative Pearson correlation coefficient (NPCC) as the loss function are particularly well suited to photon-starved phase-retrieval problems, though the reconstructions are manifestly deficient at high spatial frequencies. In this paper, we show that reconstructions by DNNs trained with the default feature loss (defined at VGG layer ReLU-22) contain more fine detail; however, grid-like artifacts appear and are enhanced as photon counts become very low. Two additional key findings related to these artifacts are presented here. First, the frequency signature of the artifacts depends on the inner VGG layer on which the perceptual loss is defined, halving with each successive MaxPooling2D layer deeper in the VGG. Second, VGG ReLU-12 outperforms all other layers as the defining layer for the perceptual loss.
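For concreteness, here is a hedged PyTorch sketch of the two kinds of loss the abstract compares: the NPCC and a feature ("perceptual") loss taken at an inner VGG layer. The layer index used to reach ReLU-2-2 of VGG-16, the epsilon guard, and the tensor shapes are assumptions, not details from the paper.

    import torch
    import torchvision

    def npcc_loss(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
        """NPCC over a batch: -cov(pred, target) / (std(pred) * std(target))."""
        p = pred.flatten(start_dim=1)
        t = target.flatten(start_dim=1)
        p = p - p.mean(dim=1, keepdim=True)
        t = t - t.mean(dim=1, keepdim=True)
        corr = (p * t).sum(dim=1) / (p.norm(dim=1) * t.norm(dim=1) + eps)
        return -corr.mean()  # perfectly correlated reconstructions give -1

    class VGGFeatureLoss(torch.nn.Module):
        """Feature loss at an inner VGG-16 layer; index 8 = ReLU-2-2 (assumed mapping)."""
        def __init__(self, layer_index: int = 8):
            super().__init__()
            vgg = torchvision.models.vgg16(weights="IMAGENET1K_V1").features
            layers = list(vgg.children())[: layer_index + 1]
            self.features = torch.nn.Sequential(*layers).eval()
            for param in self.features.parameters():
                param.requires_grad_(False)

        def forward(self, pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
            # VGG expects 3-channel input; single-channel reconstructions would
            # need channel replication first (an assumption, not from the paper).
            return torch.nn.functional.mse_loss(self.features(pred), self.features(target))

Cutting the feature stack at a shallower or deeper layer changes which MaxPooling2D stages the comparison passes through, which is the mechanism behind the frequency-halving of the artifacts described above.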

3.
Proc Natl Acad Sci U S A ; 116(40): 19848-19856, 2019 Oct 1.
Article in English | MEDLINE | ID: mdl-31527279

ABSTRACT

We present a machine learning-based method for tomographic reconstruction of dense layered objects, with the range of projection angles limited to [Formula: see text]. Whereas previous approaches to phase tomography generally require two steps (first retrieving phase projections from intensity projections, then performing tomographic reconstruction on the retrieved phase projections), in our work a physics-informed preprocessor followed by a deep neural network (DNN) conducts the three-dimensional reconstruction directly from the intensity projections. We demonstrate this single-step method experimentally in the visible optical domain on a scaled-up integrated circuit phantom. We show that, even under highly attenuated photon fluxes, a DNN trained only on synthetic data can successfully reconstruct physical samples disjoint from the synthetic training set, reducing the need to produce a large number of physical examples for training. The method is generally applicable to tomography with electromagnetic or other types of radiation at all bands.
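As a rough illustration of the single-step idea only (not the paper's actual preprocessor or network), the sketch below stands in an unfiltered backprojection as the physics-informed preprocessor and leaves the refining DNN as a hypothetical trained model; the angular range, array sizes, and photon level are assumptions.

    import numpy as np

    def backproject(projections: np.ndarray, angles_deg: np.ndarray, depth: int) -> np.ndarray:
        """Smear each intensity projection back through the volume along its angle."""
        n_proj, height, width = projections.shape
        volume = np.zeros((depth, height, width))
        z = np.arange(depth) - depth / 2
        for proj, theta in zip(projections, np.deg2rad(angles_deg)):
            # Small-angle parallel-beam approximation: shift each slice laterally.
            shifts = np.round(z * np.tan(theta)).astype(int)
            for k, s in enumerate(shifts):
                volume[k] += np.roll(proj, s, axis=1)
        return volume / n_proj

    angles = np.linspace(-10, 10, 21)  # limited angular range (assumed values)
    rng = np.random.default_rng(0)
    projections = rng.poisson(1.0, (21, 64, 64)).astype(float)  # synthetic photon-starved data
    approximant = backproject(projections, angles, depth=64)
    # reconstruction = trained_dnn(approximant)  # hypothetical trained refiner

The point of the approximant is that the DNN never has to learn the measurement physics from scratch: the preprocessor hands it a crude but physically grounded volume to refine.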

4.
Phys Rev Lett ; 121(24): 243902, 2018 Dec 14.
Article in English | MEDLINE | ID: mdl-30608745

ABSTRACT

The performance of imaging systems at low light intensity is limited by shot noise, which becomes increasingly strong as the power of the light source decreases. In this Letter, we experimentally demonstrate the use of deep neural networks to recover objects illuminated with weak light, achieving better performance than the classical Gerchberg-Saxton phase retrieval algorithm at an equivalent signal-to-noise ratio. The prior contained in the training image set can be leveraged by the deep neural network to detect features at a signal-to-noise ratio close to one. We apply this principle to a phase retrieval problem and show successful recovery of the object's most salient features with as little as one photon per detector pixel, on average, in the illumination beam. We also show that the phase reconstruction is significantly improved by training the neural network with an initial estimate of the object rather than with the raw intensity measurement.
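The measurement model behind these numbers can be sketched directly: shot noise is Poisson, and at an average of one photon per pixel the raw frame is mostly zeros. The sketch below also forms a crude approximant (a plain inverse Fourier step standing in for the paper's initial estimate) of the kind the abstract reports works better than raw intensity as DNN input; the array size and the random phase object are assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    phase = rng.uniform(0, 2 * np.pi, (256, 256))   # synthetic phase object
    field = np.exp(1j * phase)

    # Far-field intensity, scaled so the mean is one photon per pixel.
    intensity = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2
    intensity /= intensity.mean()
    photons = rng.poisson(intensity)                # shot-noise-limited measurement

    # Crude initial estimate: impose the measured modulus and propagate back;
    # a trained DNN would take this approximant, not the raw frame, as input.
    approximant = np.fft.ifft2(np.fft.ifftshift(np.sqrt(photons)))
    dnn_input = np.angle(approximant)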
