1.
PLoS Comput Biol ; 20(8): e1012361, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39178193

ABSTRACT

Segmentation is required to quantify cellular structures in microscopy images, and typically depends on fluorescent labeling of those structures. Convolutional neural networks (CNNs) can detect these structures in transmitted light images alone. This eliminates the need for transgenic or dye-based fluorescent labeling, frees up imaging channels, reduces phototoxicity, and speeds up imaging. However, this approach currently requires optimized experimental conditions and computational specialists. Here, we introduce "aiSEGcell", a user-friendly CNN-based software tool to segment nuclei and cells in bright field images. We extensively evaluated it for nucleus segmentation across different primary cell types in 2D cultures and different imaging modalities, using hand-curated published and novel imaging data sets. We provide this curated ground-truth data, comprising 1.1 million nuclei in 20,000 images. aiSEGcell accurately segments nuclei even from challenging bright field images, closely matching manual segmentation. It retains biologically relevant information, e.g. for the demanding quantification of noisy biosensors reporting signaling pathway activity dynamics. aiSEGcell is readily adaptable to new use cases, requiring only 32 images for retraining. It is accessible through both a command line interface and a napari graphical user interface, is agnostic to computational environments, and does not require coding expertise from users.
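The downstream quantification described above, extracting biosensor signal per segmented nucleus, can be sketched as follows. This is an illustrative example and not aiSEGcell's actual API: the function name, the labeled-mask convention (label 0 = background), and the synthetic data are all assumptions.

```python
# Illustrative sketch (not aiSEGcell's actual API): given a labeled nucleus
# mask produced by any segmentation tool, quantify a noisy biosensor channel
# per nucleus, as in the signaling-dynamics use case described above.
import numpy as np

def per_nucleus_mean(label_mask: np.ndarray, biosensor: np.ndarray) -> dict:
    """Mean biosensor intensity for each labeled nucleus (label 0 = background)."""
    means = {}
    for label in np.unique(label_mask):
        if label == 0:
            continue
        means[int(label)] = float(biosensor[label_mask == label].mean())
    return means

# Tiny synthetic example: two 2x2 "nuclei" in a 4x4 image.
mask = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 0],
                 [0, 0, 2, 2],
                 [0, 0, 2, 2]])
signal = np.where(mask == 1, 10.0, np.where(mask == 2, 20.0, 0.0))
print(per_nucleus_mean(mask, signal))  # → {1: 10.0, 2: 20.0}
```

Tracking these per-nucleus means over a time-lapse sequence yields the pathway activity dynamics the abstract refers to.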


Subject(s)
Cell Nucleus , Deep Learning , Image Processing, Computer-Assisted , Software , Image Processing, Computer-Assisted/methods , Humans , Computational Biology/methods , Animals , Neural Networks, Computer , Mice
2.
Nat Commun ; 13(1): 2999, 2022 05 30.
Article in English | MEDLINE | ID: mdl-35637179

ABSTRACT

Liquid handling robots have the potential to automate many procedures in the life sciences. However, they are not in widespread use in academic settings, where funding, space, and maintenance specialists are usually limited. In addition, current robots require lengthy programming by specialists and are incompatible with most academic laboratories, whose small-scale projects change constantly. Here, we present the Pipetting Helper Imaging Lid (PHIL), an inexpensive, small, open-source personal liquid handling robot. It is designed for inexperienced users and can be self-produced from cheap commercial and 3D-printable components together with custom control software. Using 3D-printed peristaltic pumps, PHIL successfully automates pipetting (including aspiration), e.g. for tissue immunostaining and for stimulation of live stem and progenitor cells during time-lapse microscopy. PHIL is cheap enough to put a personal pipetting robot within the reach of most labs and enables users without programming skills to easily automate a large range of experiments.
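The core control task for a stepper-driven peristaltic pump, turning a requested volume into motor steps, can be sketched as follows. This is not PHIL's actual control software; the calibration constant (`UL_PER_REV`), step count, and stepping rate are hypothetical values that would be measured for a specific pump and tubing.

```python
# Illustrative sketch (not PHIL's actual control software): converting a
# requested dispense volume into stepper-motor steps for a peristaltic pump,
# given a hypothetical calibration constant for a specific tubing.
STEPS_PER_REV = 200   # typical full-step count for a small stepper motor
UL_PER_REV = 50.0     # assumed calibration: microliters per rotor revolution

def steps_for_volume(volume_ul: float) -> int:
    """Number of whole motor steps needed to dispense volume_ul microliters."""
    if volume_ul < 0:
        raise ValueError("volume must be non-negative")
    return round(volume_ul * STEPS_PER_REV / UL_PER_REV)

def dispense_time_s(volume_ul: float, steps_per_s: float = 400.0) -> float:
    """Dispense duration in seconds at a given stepping rate."""
    return steps_for_volume(volume_ul) / steps_per_s

print(steps_for_volume(25.0))  # → 100 steps (half a revolution)
print(dispense_time_s(25.0))   # → 0.25 seconds at 400 steps/s
```

In practice each pump channel would be calibrated by weighing dispensed water, replacing the assumed `UL_PER_REV` with a measured value.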


Subject(s)
Biological Science Disciplines , Robotics , Microscopy , Robotics/methods , Software