ABSTRACT
Three-dimensional (3D) reconstruction of living brain tissue down to the level of individual synapses would create opportunities for decoding the dynamics and structure-function relationships of the brain's complex and dense information processing network; however, this has been hindered by insufficient 3D resolution, inadequate signal-to-noise ratio and prohibitive light burden in optical imaging, whereas electron microscopy is inherently static. Here we solved these challenges by developing an integrated optical/machine-learning technology, LIONESS (live information-optimized nanoscopy enabling saturated segmentation). This leverages optical modifications to stimulated emission depletion microscopy in comprehensively, extracellularly labeled tissue and prior information on sample structure via machine learning to simultaneously achieve isotropic super-resolution, high signal-to-noise ratio and compatibility with living tissue. This allows dense deep-learning-based instance segmentation and 3D reconstruction at the synapse level, incorporating molecular, activity and morphodynamic information. LIONESS opens up avenues for studying the dynamic functional (nano-)architecture of living brain tissue.
Subjects
Brain , Synapses , Microscopy, Fluorescence/methods , Image Processing, Computer-Assisted
ABSTRACT
STUDY QUESTION: Can the BlastAssist deep learning pipeline perform comparably to or outperform human experts and embryologists at measuring interpretable, clinically relevant features of human embryos in IVF? SUMMARY ANSWER: The BlastAssist pipeline can measure a comprehensive set of interpretable features of human embryos and either outperform or perform comparably to embryologists and human experts in measuring these features. WHAT IS KNOWN ALREADY: Some studies have applied deep learning and developed 'black-box' algorithms to predict embryo viability directly from microscope images and videos but these lack interpretability and generalizability. Other studies have developed deep learning networks to measure individual features of embryos but fail to conduct careful comparisons to embryologists' performance, which are fundamental to demonstrate the network's effectiveness. STUDY DESIGN, SIZE, DURATION: We applied the BlastAssist pipeline to 67,043,973 images (32,939 embryos) recorded in the IVF lab from 2012 to 2017 in Tel Aviv Sourasky Medical Center. We first compared the pipeline measurements of individual images/embryos to manual measurements by human experts for sets of features, including: (i) fertilization status (n = 207 embryos), (ii) cell symmetry (n = 109 embryos), (iii) degree of fragmentation (n = 6664 images), and (iv) developmental timing (n = 21,036 images). We then conducted detailed comparisons between pipeline outputs and annotations made by embryologists during routine treatments for features, including: (i) fertilization status (n = 18,922 embryos), (ii) pronuclei (PN) fade time (n = 13,781 embryos), (iii) degree of fragmentation on Day 2 (n = 11,582 embryos), and (iv) time of blastulation (n = 3266 embryos). In addition, we compared the pipeline outputs to the implantation results of 723 single embryo transfer (SET) cycles, and to the live birth results of 3421 embryos transferred in 1801 cycles. PARTICIPANTS/MATERIALS, SETTING, METHODS: In addition to EmbryoScope™ image data, manual embryo grading and annotations, and electronic health record (EHR) data on treatment outcomes were also included. We integrated the deep learning networks we developed for individual features to construct the BlastAssist pipeline. Pearson's χ² test was used to evaluate the statistical independence of individual features and implantation success. Bayesian statistics was used to evaluate the association of the probability of an embryo resulting in live birth to BlastAssist inputs. MAIN RESULTS AND THE ROLE OF CHANCE: The BlastAssist pipeline integrates five deep learning networks and measures comprehensive, interpretable, and quantitative features in clinical IVF. The pipeline performs similarly to or better than manual measurements. For fertilization status, the network performs with very good specificity and sensitivity (area under the receiver operating characteristics (AUROC) 0.84-0.94). For symmetry score, the pipeline performs comparably to the human expert at both 2-cell (r = 0.71 ± 0.06) and 4-cell stages (r = 0.77 ± 0.07). For degree of fragmentation, the pipeline (acc = 69.4%) slightly under-performs compared to human experts (acc = 73.8%). For developmental timing, the pipeline (acc = 90.0%) performs similarly to human experts (acc = 91.4%). There is also strong agreement between pipeline outputs and annotations made by embryologists during routine treatments.
For fertilization status, the pipeline and embryologists strongly agree (acc = 79.6%), and there is a strong correlation between the two measurements (r = 0.683). For degree of fragmentation, the pipeline and embryologists mostly agree (acc = 55.4%), and there is also a strong correlation between the two measurements (r = 0.648). For both PN fade time (r = 0.787) and time of blastulation (r = 0.887), there is a strong correlation between the pipeline and embryologists. For SET cycles, 2-cell time (P < 0.01) and 2-cell symmetry (P < 0.03) are significantly correlated with implantation success rate, while other features showed correlations with implantation success without statistical significance. In addition, 2-cell time (P < 5 × 10⁻¹¹), PN fade time (P < 5 × 10⁻¹⁰), degree of fragmentation on Day 3 (P < 5 × 10⁻⁴), and 2-cell symmetry (P < 5 × 10⁻³) showed statistically significant correlation with the probability of the transferred embryo resulting in live birth. LIMITATIONS, REASONS FOR CAUTION: We have not tested the BlastAssist pipeline on data from other clinics or other time-lapse microscopy (TLM) systems. The association study we conducted with live birth results does not take into account confounding variables, which will be necessary to construct an embryo selection algorithm. Randomized controlled trials (RCTs) will be necessary to determine whether the pipeline can improve success rates in clinical IVF. WIDER IMPLICATIONS OF THE FINDINGS: BlastAssist provides a comprehensive and holistic means of evaluating human embryos. Instead of using a black-box algorithm, BlastAssist outputs meaningful measurements of embryos that can be interpreted and corroborated by embryologists, which is crucial in clinical decision making. Furthermore, the unprecedentedly large dataset generated by BlastAssist measurements can be used as a powerful resource for further research in human embryology and IVF. STUDY FUNDING/COMPETING INTEREST(S): This work was supported by the Harvard Quantitative Biology Initiative, the NSF-Simons Center for Mathematical and Statistical Analysis of Biology at Harvard (award number 1764269), the National Institutes of Health (award number R01HD104969), the Perelson Fund, and the Sagol fund for embryos and stem cells as part of the Sagol Network. The authors declare no competing interests. TRIAL REGISTRATION NUMBER: Not applicable.
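As a hedged illustration of the independence testing described above, the sketch below runs Pearson's χ² test between a binned pipeline feature and implantation outcome. The feature name, bin count, and placeholder data are assumptions for illustration; only scipy's chi2_contingency call is standard, and this is not the authors' analysis code.

# Hypothetical sketch: test whether a pipeline-measured feature (e.g., 2-cell time,
# binned into quartiles) is statistically independent of implantation outcome,
# in the spirit of the Pearson chi-squared analysis described above.
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency

# df is assumed to hold one row per single-embryo-transfer cycle, with columns
# "t_2cell_hours" (pipeline output) and "implanted" (0/1 from EHR data).
df = pd.DataFrame({
    "t_2cell_hours": np.random.normal(26, 2, 723),   # placeholder values
    "implanted": np.random.binomial(1, 0.35, 723),    # placeholder values
})

df["t_2cell_bin"] = pd.qcut(df["t_2cell_hours"], q=4, labels=False)
table = pd.crosstab(df["t_2cell_bin"], df["implanted"])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.3g}")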
Subjects
Deep Learning , Pregnancy , Female , Humans , Embryo Implantation , Single Embryo Transfer/methods , Blastocyst , Live Birth , Fertilization in Vitro , Retrospective Studies
ABSTRACT
AIM: This scoping review aims to assess the utility of ultrasound as a prospective tool in measuring body composition and nutritional status in the paediatric population. We provide a comprehensive summary of the existing literature, identify gaps, and propose future research directions. METHODS: We conducted a systematic scoping review following the PRISMA Extension for Scoping Reviews guidelines. This involved screening titles and abstracts of relevant studies, followed by a detailed full-text review and extraction of pertinent data. RESULTS: We identified and synthesised 34 articles. The review revealed that while ultrasound has been used to assess body composition and bone properties in children, significant gaps remain in the literature. These include limited studies on ultrasound performance, insufficient attention to relevant sample characteristics, reliance on manual image measurements, and limited sample diversity. CONCLUSION: Point-of-care ultrasound shows significant promise for assessing paediatric body composition and nutritional status. To validate and enhance its effectiveness, further research is needed. Future studies should include larger and more diverse patient cohorts and conduct longitudinal investigations to evaluate nutritional interventions. Additionally, developing artificial intelligence (AI) for standardising and automating data interpretation will be crucial in improving the accuracy and efficiency of ultrasound assessments.
ABSTRACT
Although the human visual system is remarkable at perceiving and interpreting motions, it has limited sensitivity, and we cannot see motions that are smaller than some threshold. Although difficult to visualize, tiny motions below this threshold are important and can reveal physical mechanisms, or be precursors to large motions in the case of mechanical failure. Here, we present a "motion microscope," a computational tool that quantifies tiny motions in videos and then visualizes them by producing a new video in which the motions are made large enough to see. Three scientific visualizations are shown, spanning macroscopic to nanoscopic length scales. They are the resonant vibrations of a bridge demonstrating simultaneous spatial and temporal modal analysis, micrometer vibrations of a metamaterial demonstrating wave propagation through an elastic matrix with embedded resonating units, and nanometer motions of an extracellular tissue found in the inner ear demonstrating a mechanism of frequency separation in hearing. In these instances, the motion microscope uncovers hidden dynamics over a variety of length scales, leading to the discovery of previously unknown phenomena.
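The paper's quantitative motion analysis is phase-based; as a rough, hedged stand-in for the general idea, the sketch below performs simple linear Eulerian magnification by temporally band-passing each pixel's time series and amplifying the result. Frame rate, passband, gain, and the synthetic clip are placeholder assumptions, not the authors' algorithm.

# Simplified linear Eulerian motion magnification: band-pass each pixel's time
# series and add an amplified copy back to the original video. This is only an
# illustrative stand-in for the quantitative "motion microscope" described above.
import numpy as np
from scipy.signal import butter, filtfilt

def magnify(video, fs, f_lo=1.0, f_hi=5.0, alpha=20.0):
    """video: (T, H, W) grayscale float array in [0, 1]; fs: frame rate in Hz."""
    b, a = butter(2, [f_lo / (fs / 2), f_hi / (fs / 2)], btype="band")
    bandpassed = filtfilt(b, a, video, axis=0)   # temporal filtering per pixel
    return np.clip(video + alpha * bandpassed, 0.0, 1.0)

# Example: amplify ~1-5 Hz vibrations in a synthetic 30 fps clip.
rng = np.random.default_rng(0)
video = rng.random((300, 64, 64)).astype(np.float64)
magnified = magnify(video, fs=30.0)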
Subjects
Image Processing, Computer-Assisted/methods , Microscopy/methods , Video Recording , Lasers , Motion (Physics)
ABSTRACT
Pulmonary diseases rank prominently among the principal causes of death worldwide. Curing them will require, among other things, a better understanding of the complex 3D tree-shaped structures within the pulmonary system, such as airways, arteries, and veins. Traditional approaches that apply standard CNNs to dense voxel grids of high-resolution image stacks suffer from poor computational efficiency and limited resolution, capture only local context, and inadequately preserve shape topology. Our method addresses these issues by shifting from dense voxel to sparse point representation, offering better memory efficiency and global context utilization. However, the inherent sparsity in point representation can lead to a loss of crucial connectivity in tree-shaped structures. To mitigate this, we introduce graph learning on skeletonized structures, incorporating differentiable feature fusion for improved topology and long-distance context capture. Furthermore, we employ an implicit function for efficient conversion of sparse representations into dense reconstructions end-to-end. The proposed method not only delivers state-of-the-art performance in labeling accuracy, both overall and at key locations, but also enables efficient inference and the generation of closed surface shapes. Addressing data scarcity in this field, we have also curated a comprehensive dataset to validate our approach. Data and code are available at https://github.com/M3DV/pulmonary-tree-labeling.
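A minimal sketch of the dense-to-sparse step described above, assuming a binary mask of the pulmonary tree is already available: sample foreground voxels as a point cloud and link nearby points into a k-nearest-neighbor graph. The sampling size and neighbor count are arbitrary placeholders, not the authors' settings.

# Hypothetical sketch: convert a dense binary mask of a pulmonary tree into a
# sparse point cloud plus a k-nearest-neighbor graph for downstream graph learning.
import numpy as np
from scipy.spatial import cKDTree

def mask_to_point_graph(mask, n_points=4096, k=8, seed=0):
    """mask: (D, H, W) boolean array; returns points (N, 3) and an edge list (M, 2)."""
    coords = np.argwhere(mask)                      # all foreground voxel coordinates
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(coords), size=min(n_points, len(coords)), replace=False)
    points = coords[idx].astype(np.float32)

    tree = cKDTree(points)
    _, neighbors = tree.query(points, k=k + 1)      # first neighbor is the point itself
    edges = [(i, int(j)) for i in range(len(points)) for j in neighbors[i, 1:]]
    return points, np.array(edges)

mask = np.zeros((64, 64, 64), dtype=bool)
mask[10:50, 30:34, 30:34] = True                    # toy "branch"
points, edges = mask_to_point_graph(mask)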
ABSTRACT
Large-scale electron microscopy (EM) has enabled the reconstruction of brain connectomes at the synaptic level by serially scanning over massive areas of sample sections. The acquired big EM data sets pose a great challenge for image mosaicking at high accuracy. Current practice simply follows conventional algorithms designed for natural images, which are usually composed of only a few tiles, and relies on a single type of keypoint feature, sacrificing speed for stronger matching performance. Even so, in the process of stitching hundreds of thousands of tiles for large EM data, errors are still inevitable and diverse. Moreover, there has not yet been an appropriate metric to quantitatively evaluate the stitching of biomedical EM images. Here we propose a two-stage error detection method to improve EM image mosaicking. It first uses point-based error detection in combination with a hybrid feature framework to expedite the stitching computation while maintaining high accuracy. A second stage then detects unresolved errors using a newly designed metric for EM stitched image quality assessment (EMSIQA). The detection-based mosaicking pipeline is tested on large EM data sets and shown to be more effective than, and as accurate as, existing methods.
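For context, the conventional single-feature baseline that the hybrid framework improves on can be sketched as pairwise tile alignment with ORB keypoints and RANSAC; the detector choice and thresholds below are assumptions for illustration, not the paper's implementation.

# Illustrative pairwise tile alignment with a single keypoint type (ORB) and
# RANSAC, i.e., the conventional baseline that the hybrid-feature pipeline improves on.
import cv2
import numpy as np

def estimate_tile_offset(tile_a, tile_b, max_matches=200):
    """tile_a, tile_b: grayscale uint8 images; returns a 2x3 affine mapping b onto a."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp_a, des_a = orb.detectAndCompute(tile_a, None)
    kp_b, des_b = orb.detectAndCompute(tile_b, None)
    if des_a is None or des_b is None:
        return None
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_b, des_a), key=lambda m: m.distance)[:max_matches]
    if len(matches) < 3:
        return None
    src = np.float32([kp_b[m.queryIdx].pt for m in matches])
    dst = np.float32([kp_a[m.trainIdx].pt for m in matches])
    # Partial affine (rotation + translation + scale) with RANSAC outlier rejection.
    M, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC,
                                             ransacReprojThreshold=3.0)
    return M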
Subjects
Algorithms , Image Processing, Computer-Assisted , Microscopy, Electron , Image Processing, Computer-Assisted/methods , Microscopy, Electron/methods , Brain/diagnostic imaging , Humans , Connectome/methods , Animals
ABSTRACT
The size of image volumes in connectomics studies now reaches terabyte and often petabyte scales with a great diversity of appearance due to different sample preparation procedures. However, manual annotation of neuronal structures (e.g., synapses) in these huge image volumes is time-consuming, so labeled training data often cover less than 0.001% of the large-scale image volumes encountered in applications. Methods that can utilize in-domain labeled data and generalize to out-of-domain unlabeled data are urgently needed. Although many domain adaptation approaches have been proposed to address such issues in the natural image domain, few of them have been evaluated on connectomics data due to a lack of domain adaptation benchmarks. Therefore, to enable the development of domain-adaptive synapse detection methods for large-scale connectomics applications, we annotated 14 image volumes from a biologically diverse set of Megaphragma viggianii brain regions originating from three different whole-brain datasets and organized the WASPSYN challenge at ISBI 2023. The annotations include coordinates of pre-synapses and post-synapses in the 3D space, together with their one-to-many connectivity information. This paper describes the dataset, the tasks, the proposed baseline, the evaluation method, and the results of the challenge. Limitations of the challenge and the impact on neuroscience research are also discussed. The challenge is and will continue to be available at https://codalab.lisn.upsaclay.fr/competitions/9169. Successful algorithms that emerge from our challenge may potentially revolutionize real-world connectomics research and advance efforts to unravel the complexity of brain structure and function.
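Evaluating predicted synapse coordinates against such annotations is essentially distance-thresholded matching; the sketch below computes an F1 score with one-to-one matching, where the distance threshold and matching rule are assumptions rather than the challenge's official metric.

# Hypothetical sketch: match predicted pre-synapse coordinates to ground-truth
# coordinates within a distance threshold and report an F1 score.
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def synapse_f1(pred, gt, max_dist=500.0):
    """pred: (N, 3) and gt: (M, 3) coordinates in nm; one-to-one matching."""
    if len(pred) == 0 or len(gt) == 0:
        return 0.0
    d = cdist(pred, gt)
    row, col = linear_sum_assignment(d)             # minimum-cost assignment
    tp = int(np.sum(d[row, col] <= max_dist))       # matches within the threshold
    precision = tp / len(pred)
    recall = tp / len(gt)
    return 2 * precision * recall / (precision + recall + 1e-9)

pred = np.random.rand(50, 3) * 1e4
gt = np.random.rand(45, 3) * 1e4
print(f"F1 = {synapse_f1(pred, gt):.3f}")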
ABSTRACT
Mapping neuronal networks is a central focus in neuroscience. While volume electron microscopy (vEM) can reveal the fine structure of neuronal networks (connectomics), it does not provide molecular information to identify cell types or functions. We developed an approach that uses fluorescent single-chain variable fragments (scFvs) to perform multiplexed detergent-free immunolabeling and volumetric correlated light and electron microscopy on the same sample. We generated eight fluorescent scFvs targeting brain markers. Six fluorescent probes were imaged in the cerebellum of a female mouse, using confocal microscopy with spectral unmixing, followed by vEM of the same sample. The results provide excellent ultrastructure superimposed with multiple fluorescence channels. Using this approach, we documented a poorly described cell type, two types of mossy fiber terminals, and the subcellular localization of one type of ion channel. Because scFvs can be derived from existing monoclonal antibodies, hundreds of such probes can be generated to enable molecular overlays for connectomic studies.
Subjects
Cerebellar Cortex , Animals , Female , Mice , Cerebellar Cortex/metabolism , Cerebellar Cortex/cytology , Cerebellar Cortex/ultrastructure , Microscopy, Confocal/methods , Microscopy, Electron/methods , Connectome/methods , Neurons/metabolism , Neurons/ultrastructure , Fluorescent Dyes/chemistry , Mice, Inbred C57BL , Cytology
ABSTRACT
To fully understand how the human brain works, knowledge of its structure at high resolution is needed. Presented here is a computationally intensive reconstruction of the ultrastructure of a cubic millimeter of human temporal cortex that was surgically removed to gain access to an underlying epileptic focus. It contains about 57,000 cells, about 230 millimeters of blood vessels, and about 150 million synapses and comprises 1.4 petabytes. Our analysis showed that glia outnumber neurons 2:1, oligodendrocytes were the most common cell, deep layer excitatory neurons could be classified on the basis of dendritic orientation, and among thousands of weak connections to each neuron, there exist rare powerful axonal inputs of up to 50 synapses. Further studies using this resource may bring valuable insights into the mysteries of the human brain.
Subjects
Cerebral Cortex , Humans , Axons/physiology , Axons/ultrastructure , Cerebral Cortex/blood supply , Cerebral Cortex/ultrastructure , Dendrites/physiology , Neurons/ultrastructure , Oligodendroglia/ultrastructure , Synapses/physiology , Synapses/ultrastructure , Temporal Lobe/ultrastructure , Microscopy
ABSTRACT
Unpaired image-to-image translation (UNIT) aims to map images between two visual domains without paired training data. However, given a UNIT model trained on certain domains, it is difficult for current methods to incorporate new domains because they often need to train the full model on both existing and new domains. To address this problem, we propose a new domain-scalable UNIT method, termed latent space anchoring, which can be efficiently extended to new visual domains and does not need to fine-tune encoders and decoders of existing domains. Our method anchors images of different domains to the same latent space of frozen GANs by learning lightweight encoder and regressor models to reconstruct single-domain images. In the inference phase, the learned encoders and decoders of different domains can be arbitrarily combined to translate images between any two domains without fine-tuning. Experiments on various datasets show that the proposed method achieves superior performance on both standard and domain-scalable UNIT tasks in comparison with the state-of-the-art methods.
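In outline, anchoring trains a lightweight per-domain encoder (plus a regressor) against a frozen generator so that single-domain images are reconstructed from the shared latent space. The PyTorch sketch below uses made-up module names and a plain L1 reconstruction loss to illustrate that idea; it is not the authors' exact objective.

# Hypothetical PyTorch sketch of the anchoring idea: a lightweight encoder maps a
# new-domain image into the latent space of a frozen GAN generator, trained only
# with a single-domain reconstruction objective.
import torch
import torch.nn as nn

class LightEncoder(nn.Module):
    def __init__(self, latent_dim=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, 2, 1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

def train_step(encoder, frozen_generator, regressor, images, optimizer):
    """One anchoring step; optimizer covers encoder + regressor parameters only."""
    for p in frozen_generator.parameters():
        p.requires_grad_(False)              # generator weights stay fixed
    z = encoder(images)
    shared = frozen_generator(z)             # gradients still flow back into z
    recon = regressor(shared)                # lightweight per-domain regressor/decoder
    loss = nn.functional.l1_loss(recon, images)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()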
ABSTRACT
RGB color is a basic visual feature. Here we use machine learning on visual evoked potential (VEP) data from electroencephalography (EEG) to investigate the temporal and spatial features that support its decoding, and whether these features depend on a common cortical channel. We show that RGB color information can be decoded from EEG data and that, under a task-irrelevant paradigm, features can be decoded across fast changes in VEP stimuli. These results are consistent with both event-related potential (ERP) and P300 mechanisms. The decoding latency is shorter and more temporally precise for RGB color stimuli than for the P300, a result that does not depend on a task-relevant paradigm, suggesting that RGB color is an updating signal that separates visual events. Meanwhile, spatial distribution features are evident across the cortical EEG channels, providing a spatial correlate of RGB color in terms of classification accuracy and channel location. Finally, spatial decoding of RGB color depends on the channel classification accuracy and location obtained by training and testing on the EEG data, consistent with the channel power distributions produced by both VEP and electrophysiological stimulation mechanisms.
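A generic decoding setup of the kind described above can be sketched as cross-validated classification of color labels from epoched EEG responses; channel counts, epoch lengths, and the synthetic data below are placeholders, and the study's actual classifier and features may differ.

# Hypothetical sketch: classify the RGB color label of each stimulus from its
# epoched EEG/VEP response with a cross-validated linear model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

n_trials, n_channels, n_times = 300, 64, 200         # placeholder dimensions
rng = np.random.default_rng(0)
epochs = rng.standard_normal((n_trials, n_channels, n_times))
labels = rng.integers(0, 3, n_trials)                 # 0 = R, 1 = G, 2 = B

X = epochs.reshape(n_trials, -1)                      # flatten channels x time
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=2000))
scores = cross_val_score(clf, X, labels, cv=5)
print(f"decoding accuracy: {scores.mean():.2f} ± {scores.std():.2f}")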
ABSTRACT
We introduce MedMNIST v2, a large-scale MNIST-like dataset collection of standardized biomedical images, including 12 datasets for 2D and 6 datasets for 3D. All images are pre-processed into a small size of 28 × 28 (2D) or 28 × 28 × 28 (3D) with the corresponding classification labels so that no background knowledge is required for users. Covering primary data modalities in biomedical images, MedMNIST v2 is designed to perform classification on lightweight 2D and 3D images with various dataset scales (from 100 to 100,000) and diverse tasks (binary/multi-class, ordinal regression, and multi-label). The resulting dataset, consisting of 708,069 2D images and 9,998 3D images in total, could support numerous research/educational purposes in biomedical image analysis, computer vision, and machine learning. We benchmark several baseline methods on MedMNIST v2, including 2D/3D neural networks and open-source/commercial AutoML tools. The data and code are publicly available at https://medmnist.com/ .
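Loading one of the 2D subsets typically takes a few lines with the medmnist package; the sketch below follows the package's documented usage (an INFO lookup plus a dataset class), but exact names can vary across library versions, so treat it as an assumption-laden example rather than a guaranteed API.

# Sketch of loading a MedMNIST v2 subset for classification; assumes the medmnist
# package (pip install medmnist) and PyTorch/torchvision. Names follow the public
# documentation but may differ between library versions.
import torchvision.transforms as T
from torch.utils.data import DataLoader
import medmnist
from medmnist import INFO

data_flag = "pathmnist"                               # one of the 12 2D subsets
info = INFO[data_flag]
DataClass = getattr(medmnist, info["python_class"])

transform = T.Compose([T.ToTensor(), T.Normalize(mean=[0.5], std=[0.5])])
train_ds = DataClass(split="train", transform=transform, download=True)
train_loader = DataLoader(train_ds, batch_size=128, shuffle=True)

images, targets = next(iter(train_loader))
print(images.shape, targets.shape)                    # e.g. (128, 3, 28, 28), (128, 1)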
Subjects
Imaging, Three-Dimensional , Benchmarking , Image Processing, Computer-Assisted/methods , Imaging, Three-Dimensional/classification , Imaging, Three-Dimensional/methods , Machine Learning , Neural Networks, Computer
ABSTRACT
Automatic rib labeling and anatomical centerline extraction are common prerequisites for various clinical applications. Prior studies either use in-house datasets that are inaccessible to the community, or focus on rib segmentation while neglecting the clinical significance of rib labeling. To address these issues, we extend our prior dataset (RibSeg) on the binary rib segmentation task to a comprehensive benchmark, named RibSeg v2, with 660 CT scans (15,466 individual ribs in total) and annotations manually inspected by experts for rib labeling and anatomical centerline extraction. Based on RibSeg v2, we develop a pipeline including deep learning-based methods for rib labeling and a skeletonization-based method for centerline extraction. To improve computational efficiency, we propose a sparse point cloud representation of CT scans and compare it with standard dense voxel grids. Moreover, we design and analyze evaluation metrics to address the key challenges of each task. Our dataset, code, and model are available online to facilitate open research at https://github.com/M3DV/RibSeg.
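A hedged sketch of the two preprocessing ideas mentioned above, with placeholder thresholds and sampling sizes: threshold a CT volume at a bone-like Hounsfield value to obtain a sparse point cloud, and skeletonize a rib mask to approximate its centerline. This is an illustration, not the released pipeline.

# Hypothetical sketch: sparse point cloud from a CT volume, and a skeleton-based
# centerline from a single rib mask. Threshold and sample counts are placeholders.
import numpy as np
from skimage.morphology import skeletonize   # recent scikit-image accepts 3D input;
                                             # older versions used skeletonize_3d

def ct_to_point_cloud(ct_hu, threshold=200, n_points=30000, seed=0):
    """ct_hu: (D, H, W) volume in Hounsfield units -> (N, 3) voxel coordinates."""
    coords = np.argwhere(ct_hu > threshold)
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(coords), size=min(n_points, len(coords)), replace=False)
    return coords[idx].astype(np.float32)

def rib_centerline(rib_mask):
    """rib_mask: boolean (D, H, W) mask of one rib -> (M, 3) centerline voxels."""
    skeleton = skeletonize(rib_mask)         # 3D thinning to a one-voxel-wide curve
    return np.argwhere(skeleton)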
ABSTRACT
Recent advances in high-resolution connectomics provide researchers with access to accurate petascale reconstructions of neuronal circuits and brain networks for the first time. Neuroscientists are analyzing these networks to better understand information processing in the brain. In particular, scientists are interested in identifying specific small network motifs, i.e., repeating subgraphs of the larger brain network that are believed to be neuronal building blocks. Although such motifs are typically small (e.g., 2 - 6 neurons), the vast data sizes and intricate data complexity present significant challenges to the search and analysis process. To analyze these motifs, it is crucial to review instances of a motif in the brain network and then map the graph structure to detailed 3D reconstructions of the involved neurons and synapses. We present Vimo, an interactive visual approach to analyze neuronal motifs and motif chains in large brain networks. Experts can sketch network motifs intuitively in a visual interface and specify structural properties of the involved neurons and synapses to query large connectomics datasets. Motif instances (MIs) can be explored in high-resolution 3D renderings. To simplify the analysis of MIs, we designed a continuous focus&context metaphor inspired by visual abstractions. This allows users to transition from a highly-detailed rendering of the anatomical structure to views that emphasize the underlying motif structure and synaptic connectivity. Furthermore, Vimo supports the identification of motif chains where a motif is used repeatedly (e.g., 2 - 4 times) to form a larger network structure. We evaluate Vimo in a user study and an in-depth case study with seven domain experts on motifs in a large connectome of the fruit fly, including more than 21,000 neurons and 20 million synapses. We find that Vimo enables hypothesis generation and confirmation through fast analysis iterations and connectivity highlighting.
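Conceptually, finding instances of a sketched motif is a subgraph-isomorphism query over the connectome graph. The toy sketch below runs such a query with networkx on a random directed graph purely to illustrate the query shape; Vimo itself queries large connectomics backends, not networkx.

# Toy illustration of motif matching: find all instances of a 3-neuron feed-forward
# motif (A->B, A->C, B->C) in a small directed connectivity graph.
import networkx as nx
from networkx.algorithms.isomorphism import DiGraphMatcher

connectome = nx.gnp_random_graph(50, 0.08, directed=True, seed=1)  # placeholder graph

motif = nx.DiGraph()
motif.add_edges_from([("A", "B"), ("A", "C"), ("B", "C")])

matcher = DiGraphMatcher(connectome, motif)
instances = list(matcher.subgraph_isomorphisms_iter())
print(f"found {len(instances)} motif instances")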
ABSTRACT
Connectomics allows mapping of cells and their circuits at the nanometer scale in volumes of approximately 1 mm3. Given that the human cerebral cortex can be 3 mm in thickness, larger volumes are required. Larger-volume circuit reconstructions of human brain are limited by 1) the availability of fresh biopsies; 2) the need for excellent preservation of ultrastructure, including extracellular space; and 3) the requirement of uniform staining throughout the sample, among other technical challenges. Cerebral cortical samples from neurosurgical patients are available owing to lead placement for deep brain stimulation. Described here is an immersion fixation, heavy metal staining, and tissue processing method that consistently provides excellent ultrastructure throughout human and rodent surgical brain samples of volumes 2 × 2 × 2 mm3 and up to 37 mm3 with one dimension ≤2 mm. This method should allow synapse-level circuit analysis in samples from patients with psychiatric and neurologic disorders.
Subjects
Connectome , Humans , Connectome/methods , Immersion , Microscopy, Electron , Staining and Labeling , Brain , Biopsy
ABSTRACT
In this study, sulfonated starch (SS) was successfully synthesized using sulfamic acid as a sulfonating agent in a deep eutectic solvent (DES). A four-factor, three-level orthogonal experiment was conducted to determine the optimal preparation conditions, which were found to be a molar ratio of starch to urea of 1:20, a reaction temperature of 90 °C, a reaction time of 5 h, and a stirring speed of 200 rpm. The sulfonation reaction mechanism was extensively studied using various techniques, including Fourier transform infrared spectroscopy, elemental analysis, X-ray diffraction, molecular weight, particle distribution, X-ray photoelectron spectroscopy, scanning electron microscopy, and DFT calculations. The results showed that the sulfonation reaction slightly damaged the starch granules and occurred on the granule surface, at the O6 atoms of the glucose units. SS exhibited a wide pH range of application (5-10), a fast adsorption rate (400 s to reach adsorption equilibrium), and a high adsorption capacity (118.3 mg/g) under optimal conditions. The adsorption of methylene blue onto SS followed the pseudo-first-order kinetic model and was consistent with the Langmuir isotherm; the process was endothermic and spontaneous. Adsorption was attributed to hydrogen bonding and electrostatic interactions.
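The kinetic and isotherm models cited above have standard closed forms, q(t) = qe·(1 − e^(−k1·t)) and qe = qmax·KL·Ce/(1 + KL·Ce); the sketch below fits them to placeholder data with scipy, not to the study's measurements.

# Hedged illustration: fit the pseudo-first-order kinetic model and the Langmuir
# isotherm mentioned above to placeholder adsorption data with scipy's curve_fit.
import numpy as np
from scipy.optimize import curve_fit

def pseudo_first_order(t, qe, k1):
    # q(t) = qe * (1 - exp(-k1 * t))
    return qe * (1.0 - np.exp(-k1 * t))

def langmuir(ce, qmax, kl):
    # qe = qmax * kl * ce / (1 + kl * ce)
    return qmax * kl * ce / (1.0 + kl * ce)

t = np.linspace(0, 600, 30)                           # seconds (placeholder)
q_t = pseudo_first_order(t, 118.3, 0.012) + np.random.normal(0, 2, t.size)
(qe_fit, k1_fit), _ = curve_fit(pseudo_first_order, t, q_t, p0=[100, 0.01])

ce = np.linspace(5, 300, 20)                          # mg/L equilibrium concentration
q_e = langmuir(ce, 120.0, 0.05) + np.random.normal(0, 2, ce.size)
(qmax_fit, kl_fit), _ = curve_fit(langmuir, ce, q_e, p0=[100, 0.01])
print(f"qe={qe_fit:.1f} mg/g, k1={k1_fit:.4f} 1/s; qmax={qmax_fit:.1f} mg/g, KL={kl_fit:.3f} L/mg")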
Subjects
Deep Eutectic Solvents , Water Pollutants, Chemical , Solvents , Adsorption , Starch/chemistry , Temperature , Spectroscopy, Fourier Transform Infrared , Kinetics , Water Pollutants, Chemical/chemistry , Hydrogen-Ion Concentration
ABSTRACT
3D instance segmentation for unlabeled imaging modalities is a challenging but essential task, as collecting expert annotation can be expensive and time-consuming. Existing works segment a new modality by either deploying pre-trained models optimized on diverse training data or sequentially conducting image translation and segmentation with two relatively independent networks. In this work, we propose a novel Cyclic Segmentation Generative Adversarial Network (CySGAN) that conducts image translation and instance segmentation simultaneously using a unified network with weight sharing. Since the image translation layer can be removed at inference time, our proposed model does not introduce additional computational cost beyond a standard segmentation model. For optimizing CySGAN, besides the CycleGAN losses for image translation and supervised losses for the annotated source domain, we also utilize self-supervised and segmentation-based adversarial objectives to enhance the model performance by leveraging unlabeled target domain images. We benchmark our approach on the task of 3D neuronal nuclei segmentation with annotated electron microscopy (EM) images and unlabeled expansion microscopy (ExM) data. The proposed CySGAN outperforms pre-trained generalist models, feature-level domain adaptation models, and baselines that conduct image translation and segmentation sequentially. Our implementation and the newly collected, densely annotated ExM zebrafish brain nuclei dataset, named NucExM, are publicly available at https://connectomics-bazaar.github.io/proj/CySGAN/index.html.
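Schematically, the joint objective combines cycle-consistency terms for unpaired translation with a supervised segmentation term on the annotated source domain. The PyTorch sketch below illustrates that combination with placeholder module names and weights; the paper's adversarial and self-supervised terms are omitted, so this is not the authors' exact loss.

# Hypothetical PyTorch sketch of the joint objective: cycle-consistency losses for
# unpaired EM<->ExM translation plus a supervised segmentation loss on annotated EM.
import torch.nn.functional as F

def cysgan_like_loss(gen_em2exm, gen_exm2em, seg_head, em_images, em_labels,
                     exm_images, w_cycle=10.0, w_seg=1.0):
    # Cycle consistency: EM -> ExM -> EM and ExM -> EM -> ExM should reconstruct inputs.
    fake_exm = gen_em2exm(em_images)
    fake_em = gen_exm2em(exm_images)
    cycle = (F.l1_loss(gen_exm2em(fake_exm), em_images)
             + F.l1_loss(gen_em2exm(fake_em), exm_images))

    # Supervised segmentation on the annotated source (EM) domain.
    # em_labels: (N, D, H, W) integer class map matching the logits' spatial shape.
    seg_logits = seg_head(em_images)
    seg = F.cross_entropy(seg_logits, em_labels)

    return w_cycle * cycle + w_seg * seg   # adversarial/self-supervised terms omitted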
Subjects
Benchmarking , Zebrafish , Animals , Microscopy , Image Processing, Computer-Assisted
ABSTRACT
Mapping neuronal networks that underlie behavior has become a central focus in neuroscience. While serial section electron microscopy (ssEM) can reveal the fine structure of neuronal networks (connectomics), it does not provide the molecular information that helps identify cell types or their functional properties. Volumetric correlated light and electron microscopy (vCLEM) combines ssEM and volumetric fluorescence microscopy to incorporate molecular labeling into ssEM datasets. We developed an approach that uses small fluorescent single-chain variable fragment (scFv) immuno-probes to perform multiplexed detergent-free immuno-labeling and ssEM on the same samples. We generated eight such fluorescent scFvs that targeted useful markers for brain studies (green fluorescent protein, glial fibrillary acidic protein, calbindin, parvalbumin, voltage-gated potassium channel subfamily A member 2, vesicular glutamate transporter 1, postsynaptic density protein 95, and neuropeptide Y). To test the vCLEM approach, six different fluorescent probes were imaged in a sample of the cortex of a cerebellar lobule (Crus 1), using confocal microscopy with spectral unmixing, followed by ssEM imaging of the same sample. The results show excellent ultrastructure with superimposition of the multiple fluorescence channels. Using this approach we could document a poorly described cell type in the cerebellum, two types of mossy fiber terminals, and the subcellular localization of one type of ion channel. Because scFvs can be derived from existing monoclonal antibodies, hundreds of such probes can be generated to enable molecular overlays for connectomic studies.
ABSTRACT
Connectomics is a nascent neuroscience field to map and analyze neuronal networks. It provides a new way to investigate abnormalities in brain tissue, including in models of Alzheimer's disease (AD). This age-related disease is associated with alterations in amyloid-β (Aβ) and phosphorylated tau (pTau). These alterations correlate with AD's clinical manifestations, but causal links remain unclear. Therefore, studying these molecular alterations within the context of the local neuronal and glial milieu may provide insight into disease mechanisms. Volume electron microscopy (vEM) is an ideal tool for performing connectomics studies at the ultrastructural level, but localizing specific biomolecules within large-volume vEM data has been challenging. Here we report a volumetric correlated light and electron microscopy (vCLEM) approach using fluorescent nanobodies as immuno-probes to localize Alzheimer's disease-related molecules in a large vEM volume. Three molecules (pTau, Aβ, and a marker for activated microglia (CD11b)) were labeled without the need for detergents by three nanobody probes in a sample of the hippocampus of the 3xTg Alzheimer's disease model mouse. Confocal microscopy followed by vEM imaging of the same sample allowed for registration of the location of the molecules within the volume. This dataset revealed several ultrastructural abnormalities, placing Aβ and pTau in novel locations. For example, two pTau-positive post-synaptic spine-like protrusions innervated by axon terminals were found projecting from the axon initial segment of a pyramidal cell. Three pyramidal neurons with intracellular Aβ or pTau were 3D reconstructed. Automatic synapse detection, which is necessary for connectomics analysis, revealed changes in the density and volume of synapses at different distances from an Aβ plaque. This vCLEM approach is useful to uncover molecular alterations within large-scale volume electron microscopy data, opening a new connectomics pathway to study Alzheimer's disease and other types of dementia.
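The density-versus-distance analysis mentioned at the end could be computed by binning detected synapse centroids into spherical shells around the plaque; the sketch below does this with placeholder units and bin widths, and is not the authors' pipeline.

# Hypothetical sketch: bin automatically detected synapse centroids by distance to
# an amyloid plaque centroid and report synapse density per spherical shell.
import numpy as np

def density_by_distance(synapse_xyz, plaque_xyz, bin_edges_um):
    """synapse_xyz: (N, 3) centroids in um; returns counts per shell volume."""
    d = np.linalg.norm(synapse_xyz - plaque_xyz, axis=1)
    counts, _ = np.histogram(d, bins=bin_edges_um)
    shell_volumes = 4.0 / 3.0 * np.pi * (bin_edges_um[1:] ** 3 - bin_edges_um[:-1] ** 3)
    return counts / shell_volumes                     # synapses per um^3 in each shell

edges = np.arange(0, 55, 5.0)                         # 5-um shells out to 50 um
rng = np.random.default_rng(0)
synapses = rng.uniform(-50, 50, size=(2000, 3))       # placeholder detections
print(density_by_distance(synapses, np.zeros(3), edges))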