Results 1 - 20 of 123
1.
Science ; 384(6696): eadk4858, 2024 May 10.
Article in English | MEDLINE | ID: mdl-38723085

ABSTRACT

To fully understand how the human brain works, knowledge of its structure at high resolution is needed. Presented here is a computationally intensive reconstruction of the ultrastructure of a cubic millimeter of human temporal cortex that was surgically removed to gain access to an underlying epileptic focus. It contains about 57,000 cells, about 230 millimeters of blood vessels, and about 150 million synapses and comprises 1.4 petabytes. Our analysis showed that glia outnumber neurons 2:1, oligodendrocytes were the most common cell, deep layer excitatory neurons could be classified on the basis of dendritic orientation, and among thousands of weak connections to each neuron, there exist rare powerful axonal inputs of up to 50 synapses. Further studies using this resource may bring valuable insights into the mysteries of the human brain.
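A quick back-of-envelope check of the quoted data size; this sketch assumes 4 x 4 nm pixels, 33 nm section thickness, and one byte per voxel, none of which are stated in the abstract:

```python
# Rough storage estimate for a 1 mm^3 EM volume.
# Assumed acquisition parameters (not from the abstract):
# 4 nm x 4 nm pixels, 33 nm sections, 8-bit grayscale.
NM_PER_MM = 1_000_000

pixels_per_section = (NM_PER_MM / 4) ** 2        # 6.25e10 pixels per section
num_sections = NM_PER_MM / 33                    # ~30,300 serial sections
total_bytes = pixels_per_section * num_sections  # 1 byte per voxel

print(f"~{total_bytes / 1e15:.1f} PB raw")  # ~1.9 PB, the same order as the
                                            # 1.4 PB reported after processing
```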


Subject(s)
Neurons; Synapses; Temporal Lobe; Humans; Neurons/ultrastructure; Synapses/physiology; Synapses/ultrastructure; Oligodendroglia/cytology; Neuroglia; Cerebral Cortex/blood supply; Cerebral Cortex/cytology; Cerebral Cortex/ultrastructure; Dendrites/physiology; Axons/physiology; Axons/ultrastructure
2.
IEEE Trans Med Imaging ; 2024 May 13. [Online ahead of print]
Article in English | MEDLINE | ID: mdl-38739506

ABSTRACT

The size of image volumes in connectomics studies now reaches terabyte and often petabyte scales, with great diversity in appearance due to different sample preparation procedures. However, manual annotation of neuronal structures (e.g., synapses) in these huge image volumes is time-consuming, so the labeled training data available in practice often covers less than 0.001% of a large-scale image volume. Methods that can utilize in-domain labeled data and generalize to out-of-domain unlabeled data are urgently needed. Although many domain adaptation approaches have been proposed to address such issues in the natural image domain, few have been evaluated on connectomics data due to a lack of domain adaptation benchmarks. Therefore, to enable the development of domain-adaptive synapse detection methods for large-scale connectomics applications, we annotated 14 image volumes from a biologically diverse set of Megaphragma viggianii brain regions originating from three different whole-brain datasets and organized the WASPSYN challenge at ISBI 2023. The annotations include coordinates of pre-synapses and post-synapses in 3D space, together with their one-to-many connectivity information. This paper describes the dataset, the tasks, the proposed baseline, the evaluation method, and the results of the challenge. Limitations of the challenge and the impact on neuroscience research are also discussed. The challenge is and will continue to be available at https://codalab.lisn.upsaclay.fr/competitions/9169. Successful algorithms that emerge from our challenge may potentially revolutionize real-world connectomics research and advance the effort to unravel the complexity of brain structure and function.
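One way to picture the annotation format described above (3D pre-/post-synaptic coordinates with one-to-many connectivity) is as a small record type; the field names here are illustrative, not the challenge's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class SynapseAnnotation:
    """One pre-synaptic site linked to one or more post-synaptic partners."""
    pre_xyz: tuple[float, float, float]            # pre-synapse coordinate (voxels)
    post_xyz: list[tuple[float, float, float]] = field(default_factory=list)

# A one-to-many synapse: one pre-site, two post-synaptic partners.
s = SynapseAnnotation(pre_xyz=(120.0, 88.5, 40.0),
                      post_xyz=[(125.0, 90.0, 41.0), (118.0, 84.0, 39.5)])
print(len(s.post_xyz), "post-synaptic partners")
```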

3.
Int J Comput Vis ; 132(4): 1148-1166, 2024.
Article in English | MEDLINE | ID: mdl-38549787

ABSTRACT

Portrait viewpoint and illumination editing is an important problem with several applications in VR/AR, movies, and photography. Comprehensive knowledge of geometry and illumination is critical for obtaining photorealistic results. Current methods are unable to explicitly model in 3D while handling both viewpoint and illumination editing from a single image. In this paper, we propose VoRF, a novel approach that can take even a single portrait image as input and relight human heads under novel illuminations that can be viewed from arbitrary viewpoints. VoRF represents a human head as a continuous volumetric field and learns a prior model of human heads using a coordinate-based MLP with individual latent spaces for identity and illumination. The prior model is learned in an auto-decoder manner over a diverse class of head shapes and appearances, allowing VoRF to generalize to novel test identities from a single input image. Additionally, VoRF has a reflectance MLP that uses the intermediate features of the prior model to render One-Light-at-A-Time (OLAT) images under novel views. We synthesize novel illuminations by combining these OLAT images with target environment maps. Qualitative and quantitative evaluations demonstrate the effectiveness of VoRF for relighting and novel view synthesis, even when applied to unseen subjects under uncontrolled illumination. This work is an extension of Rao et al. (VoRF: Volumetric Relightable Faces, 2022). We provide extensive evaluation and ablative studies of our model and also present an application in which any face can be relit using textual input.
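The final relighting step, combining OLAT images with a target environment map, exploits the linearity of light transport; a minimal numpy sketch, where the array shapes and environment-map sampling are assumptions:

```python
import numpy as np

def relight(olat_images: np.ndarray, env_weights: np.ndarray) -> np.ndarray:
    """Linearly combine OLAT renders into a relit image.

    olat_images: (L, H, W, 3) -- one render per light direction
    env_weights: (L, 3)       -- target environment map sampled (RGB) at
                                 each OLAT light direction
    """
    # Light transport is linear, so relighting is a weighted sum over lights.
    return np.einsum('lhwc,lc->hwc', olat_images, env_weights)

olat = np.random.rand(16, 64, 64, 3)  # 16 dummy OLAT renders
env = np.random.rand(16, 3)           # dummy environment samples
relit = relight(olat, env)            # (64, 64, 3) relit image
```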

4.
Hum Reprod ; 39(4): 698-708, 2024 Apr 03.
Article in English | MEDLINE | ID: mdl-38396213

ABSTRACT

STUDY QUESTION: Can the BlastAssist deep learning pipeline perform comparably to or outperform human experts and embryologists at measuring interpretable, clinically relevant features of human embryos in IVF? SUMMARY ANSWER: The BlastAssist pipeline can measure a comprehensive set of interpretable features of human embryos and either outperform or perform comparably to embryologists and human experts in measuring these features. WHAT IS KNOWN ALREADY: Some studies have applied deep learning and developed 'black-box' algorithms to predict embryo viability directly from microscope images and videos, but these lack interpretability and generalizability. Other studies have developed deep learning networks to measure individual features of embryos but fail to conduct careful comparisons to embryologists' performance, which are fundamental to demonstrating the networks' effectiveness. STUDY DESIGN, SIZE, DURATION: We applied the BlastAssist pipeline to 67 043 973 images (32 939 embryos) recorded in the IVF lab from 2012 to 2017 at Tel Aviv Sourasky Medical Center. We first compared the pipeline measurements of individual images/embryos to manual measurements by human experts for sets of features, including: (i) fertilization status (n = 207 embryos), (ii) cell symmetry (n = 109 embryos), (iii) degree of fragmentation (n = 6664 images), and (iv) developmental timing (n = 21 036 images). We then conducted detailed comparisons between pipeline outputs and annotations made by embryologists during routine treatments for features, including: (i) fertilization status (n = 18 922 embryos), (ii) pronuclei (PN) fade time (n = 13 781 embryos), (iii) degree of fragmentation on Day 2 (n = 11 582 embryos), and (iv) time of blastulation (n = 3266 embryos). In addition, we compared the pipeline outputs to the implantation results of 723 single embryo transfer (SET) cycles, and to the live birth results of 3421 embryos transferred in 1801 cycles. PARTICIPANTS/MATERIALS, SETTING, METHODS: In addition to EmbryoScope™ image data, manual embryo grading and annotations and electronic health record (EHR) data on treatment outcomes were included. We integrated the deep learning networks we developed for individual features to construct the BlastAssist pipeline. Pearson's χ² test was used to evaluate the statistical independence of individual features and implantation success. Bayesian statistics were used to evaluate the association between the probability of an embryo resulting in live birth and the BlastAssist inputs. MAIN RESULTS AND THE ROLE OF CHANCE: The BlastAssist pipeline integrates five deep learning networks and measures comprehensive, interpretable, and quantitative features in clinical IVF. The pipeline performs similarly to or better than manual measurements. For fertilization status, the network achieves very good specificity and sensitivity (area under the receiver operating characteristic curve (AUROC) 0.84-0.94). For symmetry score, the pipeline performs comparably to the human expert at both the 2-cell (r = 0.71 ± 0.06) and 4-cell stages (r = 0.77 ± 0.07). For degree of fragmentation, the pipeline (acc = 69.4%) slightly underperforms human experts (acc = 73.8%). For developmental timing, the pipeline (acc = 90.0%) performs similarly to human experts (acc = 91.4%). There is also strong agreement between pipeline outputs and annotations made by embryologists during routine treatments.
For fertilization status, the pipeline and embryologists strongly agree (acc = 79.6%), and there is a strong correlation between the two measurements (r = 0.683). For degree of fragmentation, the pipeline and embryologists mostly agree (acc = 55.4%), and there is also a strong correlation between the two measurements (r = 0.648). For both PN fade time (r = 0.787) and time of blastulation (r = 0.887), there is a strong correlation between the pipeline and embryologists. For SET cycles, 2-cell time (P < 0.01) and 2-cell symmetry (P < 0.03) are significantly correlated with implantation success rate, while other features showed correlations with implantation success without statistical significance. In addition, 2-cell time (P < 5 × 10⁻¹¹), PN fade time (P < 5 × 10⁻¹⁰), degree of fragmentation on Day 3 (P < 5 × 10⁻⁴), and 2-cell symmetry (P < 5 × 10⁻³) showed statistically significant correlations with the probability of the transferred embryo resulting in live birth. LIMITATIONS, REASONS FOR CAUTION: We have not tested the BlastAssist pipeline on data from other clinics or other time-lapse microscopy (TLM) systems. The association study we conducted with live birth results does not take into account confounding variables, which will be necessary to construct an embryo selection algorithm. Randomized controlled trials (RCTs) will be necessary to determine whether the pipeline can improve success rates in clinical IVF. WIDER IMPLICATIONS OF THE FINDINGS: BlastAssist provides a comprehensive and holistic means of evaluating human embryos. Instead of using a black-box algorithm, BlastAssist outputs meaningful measurements of embryos that can be interpreted and corroborated by embryologists, which is crucial in clinical decision making. Furthermore, the unprecedentedly large dataset generated by BlastAssist measurements can be used as a powerful resource for further research in human embryology and IVF. STUDY FUNDING/COMPETING INTEREST(S): This work was supported by the Harvard Quantitative Biology Initiative, the NSF-Simons Center for Mathematical and Statistical Analysis of Biology at Harvard (award number 1764269), the National Institutes of Health (award number R01HD104969), the Perelson Fund, and the Sagol fund for embryos and stem cells as part of the Sagol Network. The authors declare no competing interests. TRIAL REGISTRATION NUMBER: Not applicable.
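The two statistical tools named above (Pearson's χ² test and AUROC) are standard; a hedged sketch of how each might be applied, using toy numbers rather than the study's data:

```python
import numpy as np
from scipy.stats import chi2_contingency
from sklearn.metrics import roc_auc_score

# Chi-squared test of independence between a binarized embryo feature
# (e.g., fast vs. slow 2-cell time) and implantation outcome (toy counts).
table = np.array([[120, 80],    # fast: implanted / not implanted
                  [ 70, 130]])  # slow: implanted / not implanted
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2={chi2:.1f}, p={p:.2g}")

# AUROC of a fertilization-status classifier (toy labels and scores).
y_true = [0, 0, 1, 1, 1, 0, 1]
y_score = [0.1, 0.4, 0.8, 0.7, 0.9, 0.3, 0.6]
print(f"AUROC={roc_auc_score(y_true, y_score):.2f}")
```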


Subject(s)
Deep Learning; Pregnancy; Female; Humans; Embryo Implantation; Single Embryo Transfer/methods; Blastocyst; Live Birth; Fertilization in Vitro; Retrospective Studies
5.
IEEE Trans Vis Comput Graph ; 30(1): 458-468, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37878442

ABSTRACT

Badminton is a fast-paced sport that requires a strategic combination of spatial, temporal, and technical tactics. To gain a competitive edge at high-level competitions, badminton professionals frequently analyze match videos to gain insights and develop game strategies. However, the current process for analyzing matches is time-consuming and relies heavily on manual note-taking, due to the lack of automatic data collection and appropriate visualization tools. As a result, there is a gap in effectively analyzing matches and communicating insights among badminton coaches and players. This work proposes an end-to-end immersive match analysis pipeline designed in close collaboration with badminton professionals, including Olympic and national coaches and players. We present VIRD, a VR Bird (i.e., shuttle) immersive analysis tool that supports interactive badminton game analysis in an immersive environment based on 3D reconstructed game views of the match video. We propose a top-down analytic workflow that allows users to seamlessly move from a high-level match overview to a detailed game view of individual rallies and shots, using situated 3D visualizations and video. We collect 3D spatial and dynamic shot data and player poses with computer vision models and visualize them in VR. Through immersive visualizations, coaches can interactively analyze situated spatial data (player positions, poses, and shot trajectories) from flexible viewpoints while navigating between shots and rallies effectively with embodied interaction. We evaluated the usefulness of VIRD with Olympic and national-level coaches and players in real matches. Results show that immersive analytics supports effective badminton match analysis with reduced context-switching costs and enhances spatial understanding with a high sense of presence.


Subject(s)
Mentoring; Racquet Sports; Computer Graphics
6.
IEEE Trans Vis Comput Graph ; 30(1): 1380-1390, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37889813

ABSTRACT

We present a hybrid multi-volume rendering approach based on a novel Residency Octree that combines the advantages of out-of-core volume rendering using page tables with those of standard octrees. Octree approaches work by performing hierarchical tree traversal. However, in octree volume rendering, tree traversal and the selection of data resolution are intrinsically coupled. This makes fine-grained empty-space skipping costly. Page tables, on the other hand, allow access to any cached brick from any resolution. However, they do not offer a clear and efficient strategy for substituting missing high-resolution data with lower-resolution data. We enable flexible mixed-resolution out-of-core multi-volume rendering by decoupling the cache residency of multi-resolution data from a resolution-independent spatial subdivision determined by the tree. Instead of one-to-one node-to-brick correspondences, each residency octree node is mapped to a set of bricks from different resolution levels. This makes it possible to efficiently and adaptively choose and mix resolutions, adapt sampling rates, and compensate for cache misses. At the same time, residency octrees support fine-grained empty-space skipping, independent of the data subdivision used for caching. Finally, to facilitate collaboration and outreach, and to eliminate local data storage, our implementation is a web-based, pure client-side renderer using WebGPU and WebAssembly. Our method is faster than prior approaches and efficient for many data channels with a flexible and adaptive choice of data resolution.
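The decoupling described above can be pictured as a node type whose spatial extent is fixed by the tree while its data can come from whichever resolution levels happen to be cached; this is an illustrative sketch, not the paper's implementation:

```python
from dataclasses import dataclass, field

@dataclass
class ResidencyNode:
    """Octree node: fixed spatial extent, flexible data residency."""
    bounds: tuple                                         # (min_xyz, max_xyz)
    resident_bricks: dict = field(default_factory=dict)   # level -> brick id
    children: list = field(default_factory=list)          # 0 or 8 children
    empty: bool = False                                   # empty-space skipping

    def best_brick(self, preferred_level: int):
        """Finest cached brick at or coarser than the preferred level
        (level 0 = finest); returning coarser data compensates for misses."""
        usable = sorted(l for l in self.resident_bricks if l >= preferred_level)
        return self.resident_bricks[usable[0]] if usable else None
```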

7.
IEEE Trans Vis Comput Graph ; 30(1): 76-86, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37883267

ABSTRACT

Existing model evaluation tools mainly focus on evaluating classification models, leaving a gap in evaluating more complex models, such as object detection. In this paper, we develop an open-source visual analysis tool, Uni-Evaluator, to support a unified model evaluation for classification, object detection, and instance segmentation in computer vision. The key idea behind our method is to formulate both discrete and continuous predictions in different tasks as unified probability distributions. Based on these distributions, we develop 1) a matrix-based visualization to provide an overview of model performance; 2) a table visualization to identify the problematic data subsets where the model performs poorly; 3) a grid visualization to display the samples of interest. These visualizations work together to facilitate the model evaluation from a global overview to individual samples. Two case studies demonstrate the effectiveness of Uni-Evaluator in evaluating model performance and making informed improvements.
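How discrete and continuous predictions are unified is only hinted at in the abstract; the sketch below is one loose reading (softmax class scores, with a detection's localization quality folded in via IoU), not the paper's exact formulation:

```python
import numpy as np

def softmax(logits):
    e = np.exp(logits - np.max(logits))
    return e / e.sum()

# Classification: logits directly induce a distribution over classes.
cls_dist = softmax(np.array([2.0, 0.5, -1.0]))

# Detection: scale class scores by box IoU with the matched ground truth,
# and send the remaining mass to a "background" bucket.
iou = 0.62
det_scores = softmax(np.array([1.5, 0.2, 0.0])) * iou
det_dist = np.append(det_scores, 1.0 - det_scores.sum())

print(cls_dist.round(3), det_dist.round(3))
```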

8.
IEEE Trans Vis Comput Graph ; 30(1): 348-358, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37922171

ABSTRACT

Trust is an essential aspect of data visualization, as it plays a crucial role in the interpretation and decision-making processes of users. While research in social sciences outlines the multi-dimensional factors that can play a role in trust formation, most data visualization trust researchers employ a single-item scale to measure trust. We address this gap by proposing a comprehensive, multidimensional conceptualization and operationalization of trust in visualization. We do this by applying general theories of trust from social sciences, as well as synthesizing and extending earlier work and factors identified by studies in the visualization field. We apply a two-dimensional approach to trust in visualization, to distinguish between cognitive and affective elements, as well as between visualization and data-specific trust antecedents. We use our framework to design and run a large crowd-sourced study to quantify the role of visual complexity in establishing trust in science visualizations. Our study provides empirical evidence for several aspects of our proposed theoretical framework, most notably the impact of cognition, affective responses, and individual differences when establishing trust in visualizations.

9.
Article in English | MEDLINE | ID: mdl-38096098

ABSTRACT

We present VoxAR, a method to facilitate effective visualization of volume-rendered objects in optical see-through head-mounted displays (OST-HMDs). The potential of augmented reality (AR) to integrate digital information into the physical world provides new opportunities for visualizing and interpreting scientific data. However, a limitation of OST-HMD technology is that rendered pixels of a virtual object can interfere with the colors of the real world, making it challenging to perceive the augmented virtual information accurately. We address this challenge in a two-step approach. First, VoxAR determines an appropriate placement of the volume-rendered object in the real-world scene by evaluating a set of spatial and environmental objectives, managed as user-selected preferences and pre-defined constraints. We achieve a real-time solution by implementing the objectives in a GPU shader language. Next, VoxAR adjusts the colors of the input transfer function (TF) based on the real-world placement region. Specifically, we introduce a novel optimization method that adjusts the TF colors such that the resulting volume-rendered pixels are discernible against the background and the TF maintains the perceptual mapping between colors and data intensity values. Finally, we present an assessment of our approach through objective evaluations and subjective user studies.
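The first step, choosing a placement region, reads as a preference-weighted multi-objective score; a simplified sketch with illustrative objective names (the paper evaluates its objectives in a GPU shader, not in Python):

```python
def placement_score(candidate: dict, weights: dict) -> float:
    """Weighted sum of spatial/environmental objectives, each normalized
    to [0, 1]; higher is better."""
    objectives = {
        "background_uniformity": candidate["uniformity"],
        "distance_to_user": 1.0 - candidate["norm_distance"],
        "surface_flatness": candidate["flatness"],
    }
    return sum(weights[name] * value for name, value in objectives.items())

weights = {"background_uniformity": 0.5, "distance_to_user": 0.2,
           "surface_flatness": 0.3}
candidates = [
    {"uniformity": 0.9, "norm_distance": 0.4, "flatness": 0.7},
    {"uniformity": 0.5, "norm_distance": 0.1, "flatness": 0.9},
]
best = max(candidates, key=lambda c: placement_score(c, weights))
```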

10.
bioRxiv ; 2023 Nov 01.
Article in English | MEDLINE | ID: mdl-37961104

ABSTRACT

Connectomics is a nascent neuroscience field concerned with mapping and analyzing neuronal networks. It provides a new way to investigate abnormalities in brain tissue, including in models of Alzheimer's disease (AD). This age-related disease is associated with alterations in amyloid-β (Aβ) and phosphorylated tau (pTau). These alterations correlate with AD's clinical manifestations, but causal links remain unclear. Therefore, studying these molecular alterations within the context of the local neuronal and glial milieu may provide insight into disease mechanisms. Volume electron microscopy (vEM) is an ideal tool for performing connectomics studies at the ultrastructural level, but localizing specific biomolecules within large-volume vEM data has been challenging. Here we report a volumetric correlated light and electron microscopy (vCLEM) approach that uses fluorescent nanobodies as immuno-probes to localize Alzheimer's disease-related molecules in a large vEM volume. Three molecules (pTau, Aβ, and CD11b, a marker for activated microglia) were labeled without the need for detergents by three nanobody probes in a hippocampal sample from the 3xTg Alzheimer's disease model mouse. Confocal microscopy followed by vEM imaging of the same sample allowed the locations of these molecules to be registered within the volume. This dataset revealed several ultrastructural abnormalities, including Aβ and pTau in previously undescribed locations. For example, two pTau-positive post-synaptic spine-like protrusions innervated by axon terminals were found projecting from the axon initial segment of a pyramidal cell. Three pyramidal neurons with intracellular Aβ or pTau were 3D reconstructed. Automatic synapse detection, which is necessary for connectomics analysis, revealed changes in the density and volume of synapses at different distances from an Aβ plaque. This vCLEM approach is useful for uncovering molecular alterations within large-scale volume electron microscopy data, opening a new connectomics pathway to study Alzheimer's disease and other types of dementia.
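The plaque-distance analysis at the end of the abstract amounts to radial binning of detected synapse positions; a hedged illustration with dummy data, not the authors' code:

```python
import numpy as np

def synapse_density_by_distance(synapse_xyz, plaque_center, bin_edges_um):
    """Synapse density (count / um^3) in concentric shells around a plaque."""
    d = np.linalg.norm(synapse_xyz - plaque_center, axis=1)
    counts, edges = np.histogram(d, bins=bin_edges_um)
    shell_volumes = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
    return counts / shell_volumes

xyz = np.random.rand(5000, 3) * 50.0        # dummy synapse coordinates (um)
center = np.array([25.0, 25.0, 25.0])       # dummy plaque centroid
density = synapse_density_by_distance(xyz, center, np.arange(0, 30, 5))
```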

11.
Article in English | MEDLINE | ID: mdl-37883279

ABSTRACT

Recent advances in high-resolution connectomics provide researchers with access to accurate petascale reconstructions of neuronal circuits and brain networks for the first time. Neuroscientists are analyzing these networks to better understand information processing in the brain. In particular, scientists are interested in identifying specific small network motifs, i.e., repeating subgraphs of the larger brain network that are believed to be neuronal building blocks. Although such motifs are typically small (e.g., 2 - 6 neurons), the vast data sizes and intricate data complexity present significant challenges to the search and analysis process. To analyze these motifs, it is crucial to review instances of a motif in the brain network and then map the graph structure to detailed 3D reconstructions of the involved neurons and synapses. We present Vimo, an interactive visual approach to analyze neuronal motifs and motif chains in large brain networks. Experts can sketch network motifs intuitively in a visual interface and specify structural properties of the involved neurons and synapses to query large connectomics datasets. Motif instances (MIs) can be explored in high-resolution 3D renderings. To simplify the analysis of MIs, we designed a continuous focus&context metaphor inspired by visual abstractions. This allows users to transition from a highly-detailed rendering of the anatomical structure to views that emphasize the underlying motif structure and synaptic connectivity. Furthermore, Vimo supports the identification of motif chains where a motif is used repeatedly (e.g., 2 - 4 times) to form a larger network structure. We evaluate Vimo in a user study and an in-depth case study with seven domain experts on motifs in a large connectome of the fruit fly, including more than 21,000 neurons and 20 million synapses. We find that Vimo enables hypothesis generation and confirmation through fast analysis iterations and connectivity highlighting.
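Stripped of its visual interface, the core query Vimo answers is subgraph isomorphism on the connectome graph; a toy networkx version of a motif search (Vimo's own sketch-based engine is far more elaborate):

```python
import networkx as nx
from networkx.algorithms import isomorphism

# Toy connectome: neurons as nodes, synaptic connections as directed edges.
brain = nx.DiGraph([(1, 2), (2, 3), (3, 1), (3, 4), (4, 2)])

# Motif to find: a three-neuron feedback loop A -> B -> C -> A.
motif = nx.DiGraph([("A", "B"), ("B", "C"), ("C", "A")])

matcher = isomorphism.DiGraphMatcher(brain, motif)
for mapping in matcher.subgraph_isomorphisms_iter():
    print(mapping)  # each motif instance, e.g. {1: 'A', 2: 'B', 3: 'C'}
```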

12.
Article in English | MEDLINE | ID: mdl-37871050

ABSTRACT

Labels are widely used in augmented reality (AR) to display digital information. Ensuring the readability of AR labels requires placing them in an occlusion-free manner while keeping visual links legible, especially when multiple labels exist in the scene. Although existing optimization-based methods, such as force-based methods, are effective in managing AR labels in static scenarios, they often struggle in dynamic scenarios with constantly moving objects. This is due to their focus on generating layouts optimal for the current moment, neglecting future moments and leading to sub-optimal or unstable layouts over time. In this work, we present RL-LABEL, a deep reinforcement learning-based method intended for managing the placement of AR labels in scenarios involving moving objects. RL-LABEL considers both the current and predicted future states of objects and labels, such as positions and velocities, as well as the user's viewpoint, to make informed decisions about label placement. It balances the trade-offs between immediate and long-term objectives. We tested RL-LABEL in simulated AR scenarios on two real-world datasets, showing that it effectively learns the decision-making process for long-term optimization, outperforming two baselines (i.e., no view management and a force-based method) by minimizing label occlusions, line intersections, and label movement distance. Additionally, a user study involving 18 participants indicates that, within our simulated environment, RL-LABEL excels over the baselines in aiding users to identify, compare, and summarize data on labels in dynamic scenes.
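The three quantities RL-LABEL is reported to minimize suggest a per-step reward of roughly this shape; the weights and exact terms are assumptions, not the paper's reward function:

```python
def label_reward(occluded_area: float, n_line_crossings: int,
                 move_dist: float, w_occ: float = 1.0,
                 w_cross: float = 0.5, w_move: float = 0.1) -> float:
    """Penalize occlusion, leader-line intersections, and label movement,
    so the learned policy trades immediate legibility against stability."""
    return -(w_occ * occluded_area
             + w_cross * n_line_crossings
             + w_move * move_dist)

r = label_reward(occluded_area=0.02, n_line_crossings=1, move_dist=3.5)
```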

13.
IEEE Trans Med Imaging ; 42(12): 3956-3971, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37768797

ABSTRACT

In this paper, we present the results of the MitoEM challenge on 3D instance segmentation of mitochondria from electron microscopy images, organized in conjunction with the IEEE-ISBI 2021 conference. Our benchmark dataset consists of two large-scale 3D volumes, one from human and one from rat cortex tissue, which are 1,986 times larger than previously used datasets. At the time of paper submission, 257 participants had registered for the challenge, 14 teams had submitted their results, and six teams participated in the challenge workshop. Here, we present eight top-performing approaches from the challenge participants, along with our own baseline strategies. After the challenge, annotation errors in the ground truth were corrected without altering the final ranking. Additionally, we present a retrospective evaluation of the scoring system, which revealed that: 1) the challenge metric was permissive toward false positive predictions; and 2) size-based grouping of instances did not correctly categorize mitochondria of interest. Thus, we propose a new scoring system that better reflects the correctness of the segmentation results. Although several of the top methods compare favorably to our own baselines, substantial errors remain for mitochondria with challenging morphologies. Thus, the challenge remains open for submission and automatic evaluation, with all volumes available for download.
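Why a metric can be "permissive" toward false positives is easiest to see in a matching sketch: if the score is computed only from matched (true positive) instances, unmatched predictions cost little. This greedy IoU matcher is illustrative, not the challenge's scoring code:

```python
import numpy as np

def match_instances(iou_matrix: np.ndarray, thr: float = 0.5):
    """Greedily match predictions to ground-truth instances by IoU;
    returns (true positives, false positives, false negatives)."""
    n_pred, n_gt = iou_matrix.shape
    matched_gt, tp = set(), 0
    for p in range(n_pred):
        g = int(np.argmax(iou_matrix[p]))
        if iou_matrix[p, g] >= thr and g not in matched_gt:
            matched_gt.add(g)
            tp += 1
    return tp, n_pred - tp, n_gt - tp

iou = np.array([[0.80, 0.10],
                [0.05, 0.60],
                [0.30, 0.20]])   # 3 predictions vs. 2 ground truths
print(match_instances(iou))      # (2, 1, 0): the extra prediction is an FP
```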


Subject(s)
Cerebral Cortex; Mitochondria; Humans; Rats; Animals; Retrospective Studies; Microscopy, Electron; Image Processing, Computer-Assisted/methods
14.
IEEE Trans Med Imaging ; 2023 Sep 11. [Online ahead of print]
Article in English | MEDLINE | ID: mdl-37695967

ABSTRACT

Automatic rib labeling and anatomical centerline extraction are common prerequisites for various clinical applications. Prior studies either use in-house datasets that are inaccessible to the community, or focus on rib segmentation while neglecting the clinical significance of rib labeling. To address these issues, we extend our prior dataset (RibSeg) on the binary rib segmentation task to a comprehensive benchmark, named RibSeg v2, with 660 CT scans (15,466 individual ribs in total) and annotations manually inspected by experts for rib labeling and anatomical centerline extraction. Based on RibSeg v2, we develop a pipeline that includes deep learning-based methods for rib labeling and a skeletonization-based method for centerline extraction. To improve computational efficiency, we propose a sparse point cloud representation of CT scans and compare it with standard dense voxel grids. Moreover, we design and analyze evaluation metrics to address the key challenges of each task. Our dataset, code, and model are available online to facilitate open research at https://github.com/M3DV/RibSeg.
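The dense-to-sparse conversion mentioned above is simple to sketch: keep only the coordinates of voxels above a bone-like intensity; the threshold value is illustrative:

```python
import numpy as np

def ct_to_point_cloud(volume: np.ndarray, hu_threshold: float = 200.0):
    """Convert a dense CT grid to a sparse (N, 3) array of voxel coordinates
    above the threshold; ribs occupy a tiny fraction of the scan, so N is
    far smaller than the full grid."""
    return np.argwhere(volume > hu_threshold).astype(np.float32)

vol = np.random.normal(0.0, 100.0, size=(64, 64, 64))  # dummy HU-like volume
pts = ct_to_point_cloud(vol)
print(pts.shape, "points from", vol.size, "voxels")
```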

15.
bioRxiv ; 2023 Jul 28.
Article in English | MEDLINE | ID: mdl-37546753

ABSTRACT

Advances in electron microscopy, image segmentation, and computational infrastructure have given rise to large-scale, richly annotated connectomic datasets that are increasingly shared across communities. To enable collaboration, users need to be able to concurrently create new annotations and correct errors in the automated segmentation by proofreading. In large datasets, every proofreading edit relabels cell identities of millions of voxels and thousands of annotations such as synapses. For analysis, users require immediate and reproducible access to this constantly changing and expanding data landscape. Here, we present the Connectome Annotation Versioning Engine (CAVE), a computational infrastructure for immediate and reproducible connectome analysis in up to petascale datasets (~1 mm³) while proofreading and annotation are ongoing. For segmentation, CAVE provides a distributed proofreading infrastructure for continuous versioning of large reconstructions. Annotations in CAVE are defined by locations so that they can be quickly assigned to the underlying segment, which enables fast analysis queries of CAVE's data for arbitrary time points. CAVE supports schematized, extensible annotations, so that researchers can readily design novel annotation types. CAVE is already used for many connectomics datasets, including the largest datasets available to date.
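The location-anchored, versioned annotation model can be pictured generically: an annotation stores a point, and a query resolves that point to the segment that owned it at a given timestamp. This is a conceptual sketch, not the CAVE API:

```python
from datetime import datetime, timezone

# Edit history per voxel location: [(valid_from, segment_id), ...].
history = {
    (10, 20, 30): [(datetime(2023, 1, 1, tzinfo=timezone.utc), 111),
                   (datetime(2023, 6, 1, tzinfo=timezone.utc), 222)],  # proofreading edit
}

def segment_at(location, when):
    """Segment ID owning `location` as of `when`, making analyses
    reproducible against any past version of the reconstruction."""
    seg = None
    for valid_from, seg_id in history[location]:
        if valid_from <= when:
            seg = seg_id
    return seg

print(segment_at((10, 20, 30), datetime(2023, 3, 1, tzinfo=timezone.utc)))  # 111
```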

16.
Article in English | MEDLINE | ID: mdl-37506003

ABSTRACT

Data transformation is an essential step in data science. While experts primarily use programming to transform their data, there is an increasing need to support non-programmers with user interface-based tools. Given the rapid development of interaction techniques and computing environments, we report our empirical findings on the effects of interaction techniques and environments on data transformation tasks. Specifically, we studied the potential benefits of direct interaction and virtual reality (VR) for data transformation. We compared gesture interaction with a standard WIMP user interface, each on the desktop and in VR. With the tested data and tasks, we found that time performance was similar between desktop and VR. Meanwhile, VR showed preliminary evidence of better supporting provenance and sense-making throughout the data transformation process. Our exploration of performing data transformation in VR also provides initial affirmation for enabling an iterative and fully immersive data science workflow.

17.
Nat Methods ; 20(8): 1256-1265, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37429995

ABSTRACT

Three-dimensional (3D) reconstruction of living brain tissue down to an individual synapse level would create opportunities for decoding the dynamics and structure-function relationships of the brain's complex and dense information processing network; however, this has been hindered by insufficient 3D resolution, inadequate signal-to-noise ratio and prohibitive light burden in optical imaging, whereas electron microscopy is inherently static. Here we solved these challenges by developing an integrated optical/machine-learning technology, LIONESS (live information-optimized nanoscopy enabling saturated segmentation). This leverages optical modifications to stimulated emission depletion microscopy in comprehensively, extracellularly labeled tissue and previous information on sample structure via machine learning to simultaneously achieve isotropic super-resolution, high signal-to-noise ratio and compatibility with living tissue. This allows dense deep-learning-based instance segmentation and 3D reconstruction at a synapse level, incorporating molecular, activity and morphodynamic information. LIONESS opens up avenues for studying the dynamic functional (nano-)architecture of living brain tissue.


Subject(s)
Brain; Synapses; Microscopy, Fluorescence/methods; Image Processing, Computer-Assisted
18.
Res Sq ; 2023 Jul 06.
Article in English | MEDLINE | ID: mdl-37461609

ABSTRACT

Mapping neuronal networks that underlie behavior has become a central focus in neuroscience. While serial section electron microscopy (ssEM) can reveal the fine structure of neuronal networks (connectomics), it does not provide the molecular information that helps identify cell types or their functional properties. Volumetric correlated light and electron microscopy (vCLEM) combines ssEM and volumetric fluorescence microscopy to incorporate molecular labeling into ssEM datasets. We developed an approach that uses small fluorescent single-chain variable fragment (scFv) immuno-probes to perform multiplexed detergent-free immuno-labeling and ssEM on the same samples. We generated eight such fluorescent scFvs that targeted useful markers for brain studies (green fluorescent protein, glial fibrillary acidic protein, calbindin, parvalbumin, voltage-gated potassium channel subfamily A member 2, vesicular glutamate transporter 1, postsynaptic density protein 95, and neuropeptide Y). To test the vCLEM approach, six different fluorescent probes were imaged in a sample of the cortex of a cerebellar lobule (Crus 1), using confocal microscopy with spectral unmixing, followed by ssEM imaging of the same sample. The results show excellent ultrastructure with superimposition of the multiple fluorescence channels. Using this approach we could document a poorly described cell type in the cerebellum, two types of mossy fiber terminals, and the subcellular localization of one type of ion channel. Because scFvs can be derived from existing monoclonal antibodies, hundreds of such probes can be generated to enable molecular overlays for connectomic studies.

19.
bioRxiv ; 2023 Jul 07.
Article in English | MEDLINE | ID: mdl-37292964

ABSTRACT

Mapping neuronal networks that underlie behavior has become a central focus in neuroscience. While serial section electron microscopy (ssEM) can reveal the fine structure of neuronal networks (connectomics), it does not provide the molecular information that helps identify cell types or their functional properties. Volumetric correlated light and electron microscopy (vCLEM) combines ssEM and volumetric fluorescence microscopy to incorporate molecular labeling into ssEM datasets. We developed an approach that uses small fluorescent single-chain variable fragment (scFv) immuno-probes to perform multiplexed detergent-free immuno-labeling and ssEM on the same samples. We generated eight such fluorescent scFvs that targeted useful markers for brain studies (green fluorescent protein, glial fibrillary acidic protein, calbindin, parvalbumin, voltage-gated potassium channel subfamily A member 2, vesicular glutamate transporter 1, postsynaptic density protein 95, and neuropeptide Y). To test the vCLEM approach, six different fluorescent probes were imaged in a sample of the cortex of a cerebellar lobule (Crus 1), using confocal microscopy with spectral unmixing, followed by ssEM imaging of the same sample. The results show excellent ultrastructure with superimposition of the multiple fluorescence channels. Using this approach we could document a poorly described cell type in the cerebellum, two types of mossy fiber terminals, and the subcellular localization of one type of ion channel. Because scFvs can be derived from existing monoclonal antibodies, hundreds of such probes can be generated to enable molecular overlays for connectomic studies.

20.
IEEE Trans Pattern Anal Mach Intell ; 45(10): 11707-11719, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37339034

ABSTRACT

Unpaired image-to-image translation (UNIT) aims to map images between two visual domains without paired training data. However, given a UNIT model trained on certain domains, it is difficult for current methods to incorporate new domains, because they often need to retrain the full model on both existing and new domains. To address this problem, we propose a new domain-scalable UNIT method, termed latent space anchoring, which can be efficiently extended to new visual domains without fine-tuning the encoders and decoders of existing domains. Our method anchors images of different domains to the same latent space of frozen GANs by learning lightweight encoder and regressor models to reconstruct single-domain images. In the inference phase, the learned encoders and decoders of different domains can be arbitrarily combined to translate images between any two domains without fine-tuning. Experiments on various datasets show that the proposed method achieves superior performance on both standard and domain-scalable UNIT tasks compared with state-of-the-art methods.
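The inference-time recombination composes a source-domain encoder with the frozen shared generator and a target-domain decoder; a PyTorch-flavored sketch in which every module is a stand-in (the real models are learned as described above):

```python
import torch
import torch.nn as nn

LATENT_DIM, FEAT_DIM = 64, 128

G = nn.Linear(LATENT_DIM, FEAT_DIM)  # stand-in for the frozen GAN generator
for p in G.parameters():
    p.requires_grad_(False)          # the shared generator is never updated

enc_A = nn.Linear(FEAT_DIM, LATENT_DIM)  # lightweight encoder for domain A
dec_B = nn.Linear(FEAT_DIM, FEAT_DIM)    # lightweight regressor/decoder for domain B

def translate_A_to_B(x_A: torch.Tensor) -> torch.Tensor:
    """Anchor a domain-A input in the shared latent space, run the frozen
    generator, then decode with domain B's head -- no fine-tuning needed."""
    z = enc_A(x_A)
    shared_feats = G(z)
    return dec_B(shared_feats)

x = torch.randn(1, FEAT_DIM)  # dummy domain-A features
y = translate_A_to_B(x)
```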
