ABSTRACT
Magnetic resonance imaging (MRI) is widely used for ischemic stroke lesion detection in mice. A key challenge is that lesion segmentation often relies on manual tracing by trained experts, which is labor-intensive, time-consuming, and prone to inter- and intra-rater variability. Here, we present a fully automated ischemic stroke lesion segmentation method for mouse T2-weighted MRI data. As an end-to-end deep learning approach, the automated lesion segmentation requires very little preprocessing and works directly on the raw MRI scans. We randomly split a large dataset of 382 MRI scans into a subset (n = 293) to train the automated lesion segmentation and a subset (n = 89) to evaluate its performance. We compared Dice coefficients and lesion-volume accuracy against manual segmentation, and additionally evaluated performance on an independent dataset from an open repository with different imaging characteristics. The automated lesion segmentation produced segmentation masks with a smooth, compact, and realistic appearance that were in high agreement with manual segmentation. We report Dice scores higher than the agreement between two human raters reported in previous studies, highlighting the method's ability to remove individual human bias and standardize the process across research studies and centers.
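The Dice coefficient used for evaluation here can be computed directly from two binary masks; a minimal NumPy sketch with made-up toy masks (not the study's actual evaluation code):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: count as perfect agreement
    return float(2.0 * np.logical_and(pred, truth).sum() / denom)

# Toy example: two overlapping 2D "lesion" masks of 4 voxels each,
# sharing 2 voxels, so Dice = 2*2 / (4+4) = 0.5
a = np.zeros((4, 4), dtype=bool); a[1:3, 1:3] = True
b = np.zeros((4, 4), dtype=bool); b[1:3, 2:4] = True
print(dice_coefficient(a, b))  # 0.5
```

A score of 1.0 means the automated and manual masks coincide voxel for voxel; inter-rater agreement between human experts is typically well below that.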
Subject(s)
Deep Learning , Ischemic Stroke , Obstetric Labor , Stroke , Humans , Pregnancy , Female , Animals , Mice , Stroke/diagnostic imaging , Magnetic Resonance Imaging
ABSTRACT
The growing size of EM volumes is a significant barrier to findable, accessible, interoperable, and reusable (FAIR) sharing: storage, sharing, visualization, and processing are all challenging for large datasets. Here we discuss a recent development toward the standardized storage of volume electron microscopy (vEM) data which addresses many of the issues that researchers face. The OME-Zarr format splits data into more manageable, performant chunks, enabling streaming-based access, and unifies important metadata such as multiresolution pyramid descriptions. The file format is designed for both centralized and remote storage (e.g., a shared file system or cloud object storage) and is therefore ideal for sharing large data. By coalescing around a common, community-wide format, these benefits will grow as ever more data is made available to the scientific community.
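The streaming benefit described above comes from chunking: the n-dimensional volume is split into fixed-size blocks stored as separate objects, so a reader fetches only the blocks its region of interest overlaps. A minimal sketch of that index arithmetic (illustrative only, not the OME-Zarr implementation; shapes are made up):

```python
from itertools import product

def chunks_for_region(start, stop, chunk_shape):
    """Return grid indices of every chunk the half-open region [start, stop) overlaps."""
    ranges = [
        range(s // c, (e - 1) // c + 1)
        for s, e, c in zip(start, stop, chunk_shape)
    ]
    return list(product(*ranges))

# A 256^3 volume stored in 64^3 chunks holds 4^3 = 64 chunks in total.
# Reading a 100^3 corner region touches only 2^3 = 8 of them, so a
# remote reader needs to download 8 blocks instead of the whole volume.
needed = chunks_for_region((0, 0, 0), (100, 100, 100), (64, 64, 64))
print(len(needed))  # 8
```

The same arithmetic applies at every level of a multiresolution pyramid, which is why coarse previews of very large volumes can be streamed almost instantly.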
Subject(s)
Information Storage and Retrieval , Volume Electron Microscopy
ABSTRACT
A growing community is constructing a next-generation file format (NGFF) for bioimaging to overcome problems of scalability and heterogeneity. Organized by the Open Microscopy Environment (OME), individuals and institutes across diverse modalities facing these problems have designed a format specification process (OME-NGFF) to address these needs. This paper brings together a wide range of those community members to describe the cloud-optimized format itself, OME-Zarr, along with tools and data resources available today to increase FAIR access and remove barriers in the scientific process. The current momentum offers an opportunity to unify a key component of the bioimaging domain: the file format that underlies so many personal, institutional, and global data management and analysis tasks.
Subject(s)
Microscopy , Software , Humans , Community Support
ABSTRACT
Understanding the function of biological tissues requires a coordinated study of physiology and structure, exploring volumes that contain complete functional units at a level of detail that resolves the relevant features. Here, we introduce an approach to address this challenge: mouse brain tissue sections containing a region where function had been recorded using in vivo 2-photon calcium imaging were stained, dehydrated, resin-embedded, and imaged with synchrotron X-ray computed tomography with propagation-based phase contrast (SXRT). SXRT provided context at subcellular detail and could be followed by targeted acquisition of multiple volumes using serial block-face electron microscopy (SBEM). In the olfactory bulb, combining SXRT and SBEM enabled disambiguation of in vivo-assigned regions of interest. In the hippocampus, we found that superficial pyramidal neurons in CA1a displayed a higher density of spine apparatuses than deeper ones. Altogether, this approach can enable a functional and structural investigation of subcellular features in the context of cells and tissues.
Subject(s)
Three-Dimensional Imaging , Synchrotrons , Animals , Brain/diagnostic imaging , Brain/ultrastructure , Diffusion Magnetic Resonance Imaging , Mice , Electron Microscopy , Scanning Electron Microscopy , X-Ray Microtomography/methods
ABSTRACT
We report webKnossos, an in-browser annotation tool for 3D electron microscopic data. webKnossos provides flight mode, a single-view egocentric reconstruction method enabling trained annotator crowds to reconstruct at a speed of 1.5 ± 0.6 mm/h for axons and 2.1 ± 0.9 mm/h for dendrites in 3D electron microscopic data from mammalian cortex. webKnossos accelerates neurite reconstruction for connectomics by 4- to 13-fold compared with current state-of-the-art tools, thus extending the range of connectomes that can realistically be mapped in the future.