Results 1 - 2 of 2
1.
Nat Commun ; 15(1): 2755, 2024 Mar 29.
Article in English | MEDLINE | ID: mdl-38553438

ABSTRACT

Projection imaging accelerates volumetric interrogation in fluorescence microscopy, but for multi-cellular samples the resulting images may lack contrast, because many structures and background haze are summed together. Here we demonstrate rapid projective light-sheet imaging with parameter selection (props) of imaging depth, position and viewing angle. This allows us to selectively image different sub-volumes of a sample, rapidly switch between them, and exclude background fluorescence. We demonstrate the power of props by functionally imaging distinct regions of the zebrafish brain, monitoring calcium firing inside muscle cells of moving Drosophila larvae, performing super-resolution imaging of selected cell layers, and optically unwrapping the curved surface of a Drosophila embryo. We anticipate that props will accelerate volumetric interrogation at scales ranging from the subcellular to the mesoscopic.
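The benefit of restricting the projected depth range, as described in the abstract, can be mimicked computationally. The sketch below is a minimal post-hoc analogue in NumPy, not the paper's optical implementation; the function name `selective_projection` and the synthetic stack are illustrative assumptions. Projecting only a chosen sub-volume excludes haze contributed by out-of-range depths.

```python
import numpy as np

def selective_projection(volume, z_start, z_end, axis=0, mode="max"):
    """Project only a chosen depth range of a 3D stack.

    Illustrative analogue of selective projection: summing/maxing a
    sub-volume rather than the whole stack excludes background haze
    from depths outside [z_start, z_end).
    """
    sub = volume.take(range(z_start, z_end), axis=axis)
    if mode == "max":
        return sub.max(axis=axis)
    return sub.mean(axis=axis)

# Synthetic 3D stack: a bright structure at depth 10 plus dim haze throughout.
rng = np.random.default_rng(0)
vol = rng.random((32, 64, 64)) * 0.1
vol[10, 20:40, 20:40] += 1.0

full = selective_projection(vol, 0, 32)   # whole-stack projection
sel = selective_projection(vol, 8, 12)    # only the depth range of interest
```

The selective projection retains the bright structure while averaging or maxing over far fewer haze-contributing planes.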


Subjects
Drosophila , Zebrafish , Animals , Microscopy, Fluorescence/methods , Brain/ultrastructure , Larva
2.
bioRxiv ; 2024 May 06.
Article in English | MEDLINE | ID: mdl-38766074

ABSTRACT

Cell segmentation is a fundamental task: only by segmenting can we define the quantitative spatial unit for collecting the measurements from which biological conclusions are drawn. Deep learning has revolutionized 2D cell segmentation, enabling generalized solutions across cell types and imaging modalities, driven by the ease of scaling up image acquisition, annotation and computation. However, 3D cell segmentation, which requires dense annotation of 2D slices, still poses significant challenges. Labelling every cell in every 2D slice is prohibitive. Moreover, it is ambiguous, necessitating cross-referencing with other orthoviews. Lastly, there is limited ability to unambiguously record and visualize thousands of annotated cells. Here we develop a theory and toolbox, u-Segment3D, for 2D-to-3D segmentation, compatible with any 2D segmentation method. Given optimal 2D segmentations, u-Segment3D generates the optimal 3D segmentation without data training, as demonstrated on 11 real-life datasets comprising >70,000 cells and spanning single cells, cell aggregates and tissue.
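To build intuition for the 2D-to-3D problem the abstract describes, the sketch below links per-slice 2D masks into consistent 3D labels by overlap between adjacent slices. This is a naive IoU-linking baseline written for illustration only; it is not u-Segment3D's actual aggregation method, and the function name and threshold are assumptions.

```python
import numpy as np

def link_slices_by_iou(slices, iou_thresh=0.5):
    """Naive 2D-to-3D stitching baseline (NOT u-Segment3D's method).

    Each 2D mask inherits the label of the best-overlapping mask in the
    previous slice when their IoU exceeds a threshold; otherwise it
    starts a new 3D label.
    """
    out = np.zeros((len(slices),) + slices[0].shape, dtype=int)
    next_id = 1
    prev = None
    for z, seg in enumerate(slices):
        for lbl in np.unique(seg):
            if lbl == 0:
                continue  # 0 is background
            mask = seg == lbl
            new_id = 0
            if prev is not None:
                cand = prev[mask]
                cand = cand[cand > 0]
                if cand.size:
                    # most frequent overlapping label in the previous slice
                    best = np.bincount(cand).argmax()
                    inter = np.logical_and(mask, prev == best).sum()
                    union = np.logical_or(mask, prev == best).sum()
                    if inter / union >= iou_thresh:
                        new_id = best
            if new_id == 0:
                new_id = next_id
                next_id += 1
            out[z][mask] = new_id
        prev = out[z]
    return out

# Two slices of one cell whose footprints overlap strongly between slices.
s0 = np.zeros((8, 8), dtype=int); s0[2:6, 2:6] = 1
s1 = np.zeros((8, 8), dtype=int); s1[2:6, 3:7] = 1
seg3d = link_slices_by_iou([s0, s1])
```

Such greedy slice-by-slice linking is exactly where ambiguity creeps in (split/merge errors at low overlap), which motivates the trained-free global aggregation that u-Segment3D proposes instead.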
