Results 1 - 12 of 12
1.
Nat Methods; 20(8): 1256-1265, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37429995

ABSTRACT

Three-dimensional (3D) reconstruction of living brain tissue down to the level of individual synapses would create opportunities for decoding the dynamics and structure-function relationships of the brain's complex and dense information-processing network; however, this has been hindered by insufficient 3D resolution, inadequate signal-to-noise ratio and prohibitive light burden in optical imaging, whereas electron microscopy is inherently static. Here we solved these challenges by developing an integrated optical/machine-learning technology, LIONESS (live information-optimized nanoscopy enabling saturated segmentation). It leverages optical modifications to stimulated emission depletion microscopy in comprehensively, extracellularly labeled tissue, together with prior information on sample structure incorporated via machine learning, to simultaneously achieve isotropic super-resolution, high signal-to-noise ratio and compatibility with living tissue. This allows dense deep-learning-based instance segmentation and 3D reconstruction at the synapse level, incorporating molecular, activity and morphodynamic information. LIONESS opens up avenues for studying the dynamic functional (nano-)architecture of living brain tissue.


Subject(s)
Brain; Synapses; Microscopy, Fluorescence/methods; Image Processing, Computer-Assisted
2.
Science; 384(6696): eadk4858, 2024 May 10.
Article in English | MEDLINE | ID: mdl-38723085

ABSTRACT

To fully understand how the human brain works, knowledge of its structure at high resolution is needed. Presented here is a computationally intensive reconstruction of the ultrastructure of a cubic millimeter of human temporal cortex that was surgically removed to gain access to an underlying epileptic focus. It contains about 57,000 cells, about 230 millimeters of blood vessels, and about 150 million synapses, and comprises 1.4 petabytes. Our analysis showed that glia outnumber neurons 2:1, oligodendrocytes were the most common cell type, deep-layer excitatory neurons could be classified on the basis of dendritic orientation, and among the thousands of weak connections to each neuron, there exist rare powerful axonal inputs of up to 50 synapses. Further studies using this resource may bring valuable insights into the mysteries of the human brain.


Subject(s)
Cerebral Cortex; Humans; Axons/physiology; Axons/ultrastructure; Cerebral Cortex/blood supply; Cerebral Cortex/ultrastructure; Dendrites/physiology; Neurons/ultrastructure; Oligodendroglia/ultrastructure; Synapses/physiology; Synapses/ultrastructure; Temporal Lobe/ultrastructure; Microscopy
3.
IEEE Trans Pattern Anal Mach Intell; 45(10): 11707-11719, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37339034

ABSTRACT

Unpaired image-to-image translation (UNIT) aims to map images between two visual domains without paired training data. However, given a UNIT model trained on certain domains, it is difficult for current methods to incorporate new domains because they often need to retrain the full model on both existing and new domains. To address this problem, we propose a new domain-scalable UNIT method, termed latent space anchoring, which can be efficiently extended to new visual domains without fine-tuning the encoders and decoders of existing domains. Our method anchors images of different domains to the same latent space of frozen GANs by learning lightweight encoder and regressor models to reconstruct single-domain images. At inference, the learned encoders and decoders of different domains can be arbitrarily combined to translate images between any two domains without fine-tuning. Experiments on various datasets show that the proposed method achieves superior performance on both standard and domain-scalable UNIT tasks compared with state-of-the-art methods.
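The plug-and-play pairing of encoders and decoders described in this abstract can be reduced to a toy sketch, assuming plain linear maps stand in for the learned per-domain encoders and regressors (all names, shapes, and dimensions here are illustrative, not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM = 8  # dimensionality of the shared (frozen) latent space

def make_domain(img_dim):
    # Lightweight per-domain encoder/decoder pair, reduced here to plain
    # linear maps into and out of one shared latent space.
    enc = rng.normal(size=(LATENT_DIM, img_dim))   # image -> shared latent
    dec = rng.normal(size=(img_dim, LATENT_DIM))   # shared latent -> image
    return enc, dec

enc_a, dec_a = make_domain(16)   # domain A "images": 16-dim vectors
enc_b, dec_b = make_domain(24)   # domain B "images": 24-dim vectors

def translate(x, enc_src, dec_tgt):
    # Any encoder pairs with any decoder because every domain is anchored
    # to the same latent space; no joint fine-tuning is required.
    return dec_tgt @ (enc_src @ x)

x_a = rng.normal(size=16)
x_ab = translate(x_a, enc_a, dec_b)   # translate A -> B
x_aa = translate(x_a, enc_a, dec_a)   # A -> A reconstruction path
```

In this setup, adding a third domain C would only require training a new `make_domain`-style pair against the frozen latent space; the existing encoders and decoders stay untouched.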

4.
IEEE J Biomed Health Inform; 27(8): 4018-4027, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37252868

ABSTRACT

3D instance segmentation for unlabeled imaging modalities is a challenging but essential task, as collecting expert annotations can be expensive and time-consuming. Existing works segment a new modality either by deploying pre-trained models optimized on diverse training data or by sequentially conducting image translation and segmentation with two relatively independent networks. In this work, we propose a novel Cyclic Segmentation Generative Adversarial Network (CySGAN) that conducts image translation and instance segmentation simultaneously using a unified network with weight sharing. Since the image translation layer can be removed at inference time, our proposed model does not introduce additional computational cost beyond that of a standard segmentation model. For optimizing CySGAN, besides the CycleGAN losses for image translation and supervised losses for the annotated source domain, we also utilize self-supervised and segmentation-based adversarial objectives to improve model performance by leveraging unlabeled target-domain images. We benchmark our approach on the task of 3D neuronal nuclei segmentation with annotated electron microscopy (EM) images and unlabeled expansion microscopy (ExM) data. The proposed CySGAN outperforms pre-trained generalist models, feature-level domain adaptation models, and baselines that conduct image translation and segmentation sequentially. Our implementation and the newly collected, densely annotated ExM zebrafish brain nuclei dataset, named NucExM, are publicly available at https://connectomics-bazaar.github.io/proj/CySGAN/index.html.
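As a rough illustration of how the four loss families listed in this abstract might be combined into a single training objective (the weights below are placeholders, not the paper's values, and the function name is hypothetical):

```python
def cysgan_loss(l_cycle, l_seg_src, l_adv_seg, l_self,
                w_cycle=1.0, w_seg=1.0, w_adv=0.1, w_self=0.1):
    """Weighted sum of: CycleGAN translation losses, supervised segmentation
    losses on the annotated source domain, segmentation-based adversarial
    objectives, and self-supervised objectives on unlabeled target images.
    Weights are illustrative and would be tuned in practice."""
    return (w_cycle * l_cycle + w_seg * l_seg_src
            + w_adv * l_adv_seg + w_self * l_self)
```

Only the segmentation branch of the unified network would be kept at inference, which is why the translation losses add no deployment cost.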


Subject(s)
Benchmarking; Zebrafish; Animals; Microscopy; Image Processing, Computer-Assisted
5.
Res Sq; 2023 Jul 06.
Article in English | MEDLINE | ID: mdl-37461609

ABSTRACT

Mapping neuronal networks that underlie behavior has become a central focus in neuroscience. While serial section electron microscopy (ssEM) can reveal the fine structure of neuronal networks (connectomics), it does not provide the molecular information that helps identify cell types or their functional properties. Volumetric correlated light and electron microscopy (vCLEM) combines ssEM and volumetric fluorescence microscopy to incorporate molecular labeling into ssEM datasets. We developed an approach that uses small fluorescent single-chain variable fragment (scFv) immuno-probes to perform multiplexed detergent-free immuno-labeling and ssEM on the same samples. We generated eight such fluorescent scFvs that targeted useful markers for brain studies (green fluorescent protein, glial fibrillary acidic protein, calbindin, parvalbumin, voltage-gated potassium channel subfamily A member 2, vesicular glutamate transporter 1, postsynaptic density protein 95, and neuropeptide Y). To test the vCLEM approach, six different fluorescent probes were imaged in a sample of the cortex of a cerebellar lobule (Crus 1), using confocal microscopy with spectral unmixing, followed by ssEM imaging of the same sample. The results show excellent ultrastructure with superimposition of the multiple fluorescence channels. Using this approach we could document a poorly described cell type in the cerebellum, two types of mossy fiber terminals, and the subcellular localization of one type of ion channel. Because scFvs can be derived from existing monoclonal antibodies, hundreds of such probes can be generated to enable molecular overlays for connectomic studies.

6.
bioRxiv; 2023 Nov 01.
Article in English | MEDLINE | ID: mdl-37961104

ABSTRACT

Connectomics is a nascent neuroscience field that maps and analyzes neuronal networks. It provides a new way to investigate abnormalities in brain tissue, including in models of Alzheimer's disease (AD). This age-related disease is associated with alterations in amyloid-β (Aβ) and phosphorylated tau (pTau). These alterations correlate with AD's clinical manifestations, but causal links remain unclear. Therefore, studying these molecular alterations within the context of the local neuronal and glial milieu may provide insight into disease mechanisms. Volume electron microscopy (vEM) is an ideal tool for performing connectomics studies at the ultrastructural level, but localizing specific biomolecules within large-volume vEM data has been challenging. Here we report a volumetric correlated light and electron microscopy (vCLEM) approach that uses fluorescent nanobodies as immuno-probes to localize Alzheimer's disease-related molecules in a large vEM volume. Three molecules (pTau, Aβ, and CD11b, a marker for activated microglia) were labeled without the need for detergents by three nanobody probes in a hippocampal sample from the 3xTg Alzheimer's disease model mouse. Confocal microscopy followed by vEM imaging of the same sample allowed the locations of the molecules to be registered within the volume. This dataset revealed Aβ and pTau in novel subcellular locations, with associated ultrastructural abnormalities. For example, two pTau-positive, post-synaptic spine-like protrusions innervated by axon terminals were found projecting from the axon initial segment of a pyramidal cell. Three pyramidal neurons with intracellular Aβ or pTau were reconstructed in 3D. Automatic synapse detection, which is necessary for connectomics analysis, revealed changes in synapse density and volume at different distances from an Aβ plaque. This vCLEM approach can uncover molecular alterations within large-scale volume electron microscopy data, opening a new connectomics pathway to study Alzheimer's disease and other types of dementia.
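The distance-resolved synapse statistic mentioned above (synapse density at different distances from an amyloid plaque) can be sketched as a radial-shell density profile. This is a minimal numpy illustration; the function name, binning, and inputs are assumptions, not the authors' detection pipeline:

```python
import numpy as np

def synapse_density_by_distance(synapse_xyz, plaque_xyz, bin_edges_um):
    """Bin synapse counts into spherical shells around a plaque centroid
    and normalize by shell volume to obtain a density profile.
    synapse_xyz: (N, 3) synapse coordinates in micrometers.
    plaque_xyz:  (3,) plaque centroid in micrometers.
    bin_edges_um: radial bin edges in micrometers."""
    d = np.linalg.norm(np.asarray(synapse_xyz, float)
                       - np.asarray(plaque_xyz, float), axis=1)
    counts, edges = np.histogram(d, bins=bin_edges_um)
    # Volume of each spherical shell: (4/3)*pi*(r_outer^3 - r_inner^3)
    shell_vol = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
    return counts / shell_vol   # synapses per cubic micrometer per shell
```

Comparing such profiles between plaque-adjacent and plaque-distant tissue is one way to quantify the density changes the abstract describes.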

7.
bioRxiv; 2023 Jul 07.
Article in English | MEDLINE | ID: mdl-37292964

ABSTRACT

Mapping neuronal networks that underlie behavior has become a central focus in neuroscience. While serial section electron microscopy (ssEM) can reveal the fine structure of neuronal networks (connectomics), it does not provide the molecular information that helps identify cell types or their functional properties. Volumetric correlated light and electron microscopy (vCLEM) combines ssEM and volumetric fluorescence microscopy to incorporate molecular labeling into ssEM datasets. We developed an approach that uses small fluorescent single-chain variable fragment (scFv) immuno-probes to perform multiplexed detergent-free immuno-labeling and ssEM on the same samples. We generated eight such fluorescent scFvs that targeted useful markers for brain studies (green fluorescent protein, glial fibrillary acidic protein, calbindin, parvalbumin, voltage-gated potassium channel subfamily A member 2, vesicular glutamate transporter 1, postsynaptic density protein 95, and neuropeptide Y). To test the vCLEM approach, six different fluorescent probes were imaged in a sample of the cortex of a cerebellar lobule (Crus 1), using confocal microscopy with spectral unmixing, followed by ssEM imaging of the same sample. The results show excellent ultrastructure with superimposition of the multiple fluorescence channels. Using this approach we could document a poorly described cell type in the cerebellum, two types of mossy fiber terminals, and the subcellular localization of one type of ion channel. Because scFvs can be derived from existing monoclonal antibodies, hundreds of such probes can be generated to enable molecular overlays for connectomic studies.

8.
IEEE Trans Med Imaging; 42(12): 3956-3971, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37768797

ABSTRACT

In this paper, we present the results of the MitoEM challenge on 3D instance segmentation of mitochondria from electron microscopy images, organized in conjunction with the IEEE-ISBI 2021 conference. Our benchmark dataset consists of two large-scale 3D volumes, one from human and one from rat cortex tissue, which are 1,986 times larger than previously used datasets. At the time of paper submission, 257 participants had registered for the challenge, 14 teams had submitted their results, and six teams participated in the challenge workshop. Here, we present eight top-performing approaches from the challenge participants, along with our own baseline strategies. After the challenge, annotation errors in the ground truth were corrected without altering the final ranking. Additionally, we present a retrospective evaluation of the scoring system, which revealed that: 1) the challenge metric was permissive toward false positive predictions; and 2) size-based grouping of instances did not correctly categorize the mitochondria of interest. We therefore propose a new scoring system that better reflects the correctness of the segmentation results. Although several of the top methods compare favorably with our own baselines, substantial errors remain for mitochondria with challenging morphologies. The challenge thus remains open for submission and automatic evaluation, with all volumes available for download.
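A scoring rule that explicitly charges false positives, in the spirit of the critique above, might look like the sketch below: greedy one-to-one matching at an illustrative IoU threshold, scored with instance-level F1. This is an assumption-laden illustration, not the challenge's exact revised metric:

```python
import numpy as np

def instance_f1(iou, thr=0.75):
    """Match GT and predicted instances greedily above an IoU threshold,
    then score with instance-level F1. Unlike a recall-leaning metric,
    every unmatched prediction counts as a false positive.
    iou: matrix of shape (num_gt, num_pred); thr is illustrative."""
    iou = np.asarray(iou, dtype=float).copy()
    n_gt, n_pred = iou.shape
    tp = 0
    while iou.size and iou.max() >= thr:
        g, p = np.unravel_index(np.argmax(iou), iou.shape)
        tp += 1
        iou[g, :] = 0.0   # enforce one-to-one matching
        iou[:, p] = 0.0
    precision = tp / max(n_pred, 1)
    recall = tp / max(n_gt, 1)
    return 2 * precision * recall / max(precision + recall, 1e-9)
```

A predictor that emits many spurious instances sees its precision, and hence its F1, drop, which is exactly the behavior the original permissive metric failed to enforce.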


Subject(s)
Cerebral Cortex; Mitochondria; Humans; Rats; Animals; Retrospective Studies; Microscopy, Electron; Image Processing, Computer-Assisted/methods
9.
Article in English | MEDLINE | ID: mdl-36465475

ABSTRACT

Evaluation practices for image super-resolution (SR) use a single-value metric, such as PSNR or SSIM, to determine model performance. This provides little insight into the source of errors and model behavior. Therefore, it is beneficial to move beyond the conventional approach and reconceptualize evaluation with interpretability as the main priority. We focus on a thorough error analysis from a variety of perspectives. Our key contribution is to leverage a texture classifier, which enables us to assign semantic labels to patches, to identify the source of SR errors both globally and locally. We then use this to determine (a) the semantic alignment of SR datasets, (b) how SR models perform on each label, (c) to what extent high-resolution (HR) and SR patches semantically correspond, and more. Through these different angles, we are able to highlight potential pitfalls and blind spots. Our overall investigation surfaces numerous unexpected insights. We hope this work serves as an initial step for debugging black-box SR networks.
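The per-label error breakdown described in this abstract can be illustrated with a small sketch, assuming texture labels are already assigned (in the paper they come from a texture classifier; function names here are hypothetical):

```python
import numpy as np

def psnr(x, y, peak=1.0):
    # Standard PSNR between two arrays with the given peak signal value.
    mse = np.mean((np.asarray(x, float) - np.asarray(y, float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def psnr_by_label(hr_patches, sr_patches, labels):
    """Aggregate PSNR per semantic texture label instead of one global
    number, exposing which content types drive SR errors."""
    out = {}
    for lbl in set(labels):
        idx = [i for i, l in enumerate(labels) if l == lbl]
        out[lbl] = float(np.mean([psnr(hr_patches[i], sr_patches[i])
                                  for i in idx]))
    return out
```

A model with a strong global PSNR can still score poorly on one label (say, fine textures), which is precisely the kind of blind spot a single-value metric hides.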

10.
Proc IEEE Int Conf Comput Vis; 2021: 4268-4277, 2021 Oct.
Article in English | MEDLINE | ID: mdl-35368831

ABSTRACT

Deep convolutional neural networks (CNNs) have pushed forward the frontier of super-resolution (SR) research. However, current CNN models exhibit a major flaw: they are biased towards learning low-frequency signals. This bias becomes more problematic for the image SR task, which targets reconstructing all fine details and image textures. To tackle this challenge, we propose to improve the learning of high-frequency features both locally and globally, and introduce two novel architectural units into existing SR models. Specifically, we propose a dynamic high-pass filtering (HPF) module that locally applies adaptive filter weights for each spatial location and channel group to preserve high-frequency signals. We also propose a matrix multi-spectral channel attention (MMCA) module that predicts the attention map of features decomposed in the frequency domain. This module operates in a global context to adaptively recalibrate feature responses at different frequencies. Extensive qualitative and quantitative results demonstrate that the proposed modules achieve better accuracy and visual quality than state-of-the-art methods on several benchmark datasets.
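The idea of a location-adaptive high-pass filter can be sketched as follows. In this simplification, the per-pixel 3x3 kernels are supplied directly rather than predicted from features and grouped by channel as in the proposed module:

```python
import numpy as np

def dynamic_highpass(x, weights):
    """Apply a different 3x3 kernel at every spatial location.
    x: (H, W) single-channel feature map.
    weights: (H, W, 3, 3) per-location kernels (in the paper's module
    these would be predicted adaptively; here they are given)."""
    H, W = x.shape
    pad = np.pad(x, 1, mode="edge")
    out = np.empty((H, W), dtype=float)
    for i in range(H):
        for j in range(W):
            patch = pad[i:i + 3, j:j + 3]
            out[i, j] = float(np.sum(patch * weights[i, j]))
    return out
```

With a Laplacian-like kernel (weights summing to zero) at every location, constant regions map to zero while edges and textures produce strong responses, which is the high-frequency signal such a module aims to preserve.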

11.
Comput Vis ECCV; 12363: 103-120, 2020 Aug.
Article in English | MEDLINE | ID: mdl-33345257

ABSTRACT

For large-scale vision tasks in biomedical images, labeled data is often too limited to train effective deep models. Active learning is a common solution, in which a query suggestion method selects representative unlabeled samples for annotation, and the new labels are used to improve the base model. However, most query suggestion models optimize their learnable parameters only on the limited labeled data and consequently become less effective on the more challenging unlabeled data. To tackle this, we propose a two-stream active query suggestion approach. In addition to the supervised feature extractor, we introduce an unsupervised one optimized on all raw images to capture diverse image features, which can later be improved by fine-tuning on new labels. As a use case, we build an end-to-end active learning framework with our query suggestion method for 3D synapse detection and mitochondria segmentation in connectomics. With this framework, we curate, to the best of our knowledge, the largest connectomics dataset with dense synapse and mitochondria annotations. On this new dataset, our method outperforms previous state-of-the-art methods by 3.1% for synapses and 3.8% for mitochondria in terms of region-of-interest proposal accuracy. We also apply our method to image classification, where it outperforms previous approaches on CIFAR-10 under the same limited annotation budget. The project page is https://zudi-lin.github.io/projects/#two_stream_active.
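One simple way to picture a two-stream selection rule is to concatenate the supervised and unsupervised feature streams and then pick unlabeled samples far from the current labeled set (k-center greedy). This is an illustrative stand-in under those assumptions, not the paper's exact query suggestion model:

```python
import numpy as np

def suggest_queries(sup_feats, unsup_feats, labeled_idx, k):
    """Pick k annotation candidates via k-center greedy selection on the
    concatenation of two feature streams.
    sup_feats, unsup_feats: (N, d1) and (N, d2) per-sample features.
    labeled_idx: indices of already-labeled samples."""
    feats = np.concatenate([np.asarray(sup_feats, float),
                            np.asarray(unsup_feats, float)], axis=1)
    chosen = list(labeled_idx)
    picked = []
    for _ in range(k):
        # Distance from each sample to its nearest already-chosen sample.
        d = np.min(np.linalg.norm(feats[:, None, :] - feats[None, chosen, :],
                                  axis=2), axis=1)
        d[chosen] = -1.0   # never re-pick covered samples
        best = int(np.argmax(d))
        picked.append(best)
        chosen.append(best)
    return picked
```

Each pick updates the covered set, so the k suggestions spread across feature space instead of clustering near one unlabeled mode.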

12.
Med Image Comput Comput Assist Interv; 12265: 66-76, 2020 Oct.
Article in English | MEDLINE | ID: mdl-33283212

ABSTRACT

Electron microscopy (EM) allows the identification of intracellular organelles such as mitochondria, providing insights for clinical and scientific studies. However, public mitochondria segmentation datasets only contain hundreds of instances with simple shapes. It is unclear whether existing methods that achieve human-level accuracy on these small datasets are robust in practice. To this end, we introduce the MitoEM dataset, a 3D mitochondria instance segmentation dataset with two (30 µm)³ volumes from human and rat cortices, respectively, 3,600× larger than previous benchmarks. With around 40K instances, we find a great diversity of mitochondria in terms of shape and density. For evaluation, we tailor the implementation of the average precision (AP) metric for 3D data with a 45× speedup. On MitoEM, we find that existing instance segmentation methods often fail to correctly segment mitochondria with complex shapes or close contacts with other instances. Thus, our MitoEM dataset poses new challenges to the field. We release our code and data: https://donglaiw.github.io/page/mitoEM/index.html.
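A common way to make 3D instance evaluation fast is to count all pairwise overlaps in a single histogram pass instead of comparing mask pairs one by one; the sketch below shows that trick (it is an assumption that the tailored AP implementation works this way, and this is not the reference code):

```python
import numpy as np

def pairwise_overlaps(gt, pred):
    """Count overlap voxels for every (GT instance, predicted instance)
    pair in one bincount pass over fused label ids.
    gt, pred: non-negative integer label volumes of the same shape,
    with 0 as background."""
    gt = np.asarray(gt).ravel().astype(np.int64)
    pred = np.asarray(pred).ravel().astype(np.int64)
    n_gt, n_pred = int(gt.max()) + 1, int(pred.max()) + 1
    fused = gt * n_pred + pred              # unique id per (gt, pred) pair
    counts = np.bincount(fused, minlength=n_gt * n_pred)
    return counts.reshape(n_gt, n_pred)     # [gt_id, pred_id] -> voxel count
```

From this overlap table, per-pair IoU (and hence AP at any threshold) follows from the per-instance voxel counts along each row and column, with cost linear in the number of voxels rather than quadratic in the number of instances.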
