ABSTRACT
Characterizing cellular diversity at different levels of biological organization and across data modalities is a prerequisite to understanding the function of cell types in the brain. Classification of neurons is also essential to manipulate cell types in controlled ways and to understand their variation and vulnerability in brain disorders. The BRAIN Initiative Cell Census Network (BICCN) is an integrated network of data-generating centers, data archives, and data standards developers, with the goal of systematic multimodal brain cell type profiling and characterization. Emphasis of the BICCN is on the whole mouse brain with demonstration of prototype feasibility for human and nonhuman primate (NHP) brains. Here, we provide a guide to the cellular and spatial approaches employed by the BICCN, and to accessing and using these data and extensive resources, including the BRAIN Cell Data Center (BCDC), which serves to manage and integrate data across the ecosystem. We illustrate the power of the BICCN data ecosystem through vignettes highlighting several BICCN analysis and visualization tools. Finally, we present emerging standards that have been developed or adopted toward Findable, Accessible, Interoperable, and Reusable (FAIR) neuroscience. The combined BICCN ecosystem provides a comprehensive resource for the exploration and analysis of cell types in the brain.
Subject(s)
Brain, Neurosciences, Animals, Humans, Mice, Ecosystem, Neurons
ABSTRACT
Objective. Speech brain-computer interfaces (BCIs) have the potential to augment communication in individuals with impaired speech due to muscle weakness, for example in amyotrophic lateral sclerosis (ALS) and other neurological disorders. However, to achieve long-term, reliable use of a speech BCI, it is essential for speech-related neural signal changes to be stable over long periods of time. Here we study, for the first time, the stability of speech-related electrocorticographic (ECoG) signals recorded from a chronically implanted ECoG BCI over a 12-month period. Approach. ECoG signals were recorded by an ECoG array implanted over the ventral sensorimotor cortex in a clinical trial participant with ALS. Because ECoG-based speech decoding has most often relied on broadband high gamma (HG) signal changes relative to baseline (non-speech) conditions, we studied longitudinal changes of HG band power at baseline and during speech, and we compared these with residual high-frequency noise levels at baseline. Stability was further assessed by longitudinal measurements of signal-to-noise ratio, activation ratio, and peak speech-related HG response magnitude (HG response peaks). Lastly, we analyzed the stability of the event-related HG power changes (HG responses) for individual syllables at each electrode. Main Results. We found that speech-related ECoG signal responses were stable over a range of syllables activating different articulators for the first year after implantation. Significance. Together, our results indicate that ECoG can be a stable recording modality for long-term speech BCI systems for those living with severe paralysis. Clinical Trial Information. ClinicalTrials.gov, registration number NCT03567213.
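As an illustration of the kind of longitudinal signal metrics described in this abstract, the following minimal sketch (not the authors' analysis code) computes broadband high-gamma band power for one ECoG channel and a simple speech-versus-baseline power ratio; the sampling rate, band edges, and synthetic trials are assumptions made for the example.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1000            # sampling rate in Hz (assumed for this example)
HG_BAND = (70, 170)  # broadband high-gamma range in Hz (assumed)

def bandpower(x, fs=FS, band=HG_BAND):
    """Mean power of x within `band`, via a zero-phase Butterworth band-pass."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, x)
    return np.mean(filtered ** 2)

def hg_snr(speech_trials, baseline_trials):
    """Ratio of mean speech HG power to mean baseline (non-speech) HG power."""
    speech_power = np.mean([bandpower(t) for t in speech_trials])
    baseline_power = np.mean([bandpower(t) for t in baseline_trials])
    return speech_power / baseline_power

# Synthetic stand-ins for single-channel, 1 s trials on one electrode.
rng = np.random.default_rng(0)
baseline = [rng.normal(size=FS) for _ in range(20)]
speech = [3.0 * rng.normal(size=FS) for _ in range(20)]
print(f"HG speech-vs-baseline power ratio: {hg_snr(speech, baseline):.2f}")
```

Tracking a metric like this per electrode and per session is one simple way to quantify the longitudinal stability the study reports.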
Subject(s)
Amyotrophic Lateral Sclerosis, Brain-Computer Interfaces, Electrocorticography, Speech, Humans, Amyotrophic Lateral Sclerosis/physiopathology, Longitudinal Studies, Electrocorticography/methods, Speech/physiology, Male, Gamma Rhythm/physiology, Middle Aged, Female, Implanted Electrodes
ABSTRACT
The development of novel imaging platforms has improved our ability to collect and analyze large three-dimensional (3D) biological imaging datasets. Advances in computing have led to an ability to extract complex spatial information from these data, such as the composition, morphology, and interactions of multi-cellular structures, rare events, and integration of multi-modal features combining anatomical, molecular, and transcriptomic (among other) information. Yet, the accuracy of these quantitative results is intrinsically limited by the quality of the input images, which can contain missing or damaged regions, or can be of poor resolution due to mechanical, temporal, or financial constraints. In applications ranging from intact imaging (e.g. light-sheet microscopy and magnetic resonance imaging) to sectioning-based platforms (e.g. serial histology and serial section transmission electron microscopy), the quality and resolution of imaging data have become paramount. Here, we address these challenges by leveraging frame interpolation for large image motion (FILM), a generative AI model originally developed for temporal interpolation, for spatial interpolation of a range of 3D image types. Comparative analysis demonstrates the superiority of FILM over traditional linear interpolation to produce functional synthetic images, due to its ability to better preserve biological information including microanatomical features and cell counts, as well as image quality, such as contrast, variance, and luminance. FILM repairs tissue damage in images and reduces stitching artifacts. We show that FILM can decrease imaging time by synthesizing skipped images. We demonstrate the versatility of our method with a wide range of imaging modalities (histology, tissue-clearing/light-sheet microscopy, magnetic resonance imaging, serial section transmission electron microscopy), species (human, mouse), healthy and diseased tissues (pancreas, lung, brain), staining techniques (IHC, H&E), and pixel resolutions (8 nm, 2 µm, 1 mm). Overall, we demonstrate the potential of generative AI in improving the resolution, throughput, and quality of biological image datasets, enabling improved 3D imaging.
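The sketch below shows the linear-interpolation baseline that the abstract compares against: synthesizing a missing z-slice from its two acquired neighbors to densify a 3D stack. A learned interpolator such as FILM would slot in at the marked line; the name film_interpolate is a hypothetical placeholder, not the model's actual API.

```python
import numpy as np

def linear_midpoint(slice_a: np.ndarray, slice_b: np.ndarray) -> np.ndarray:
    """Baseline: synthesize the slice halfway between two z-neighbors."""
    return 0.5 * slice_a.astype(np.float32) + 0.5 * slice_b.astype(np.float32)

def fill_skipped_slices(stack: np.ndarray, interpolate=linear_midpoint) -> np.ndarray:
    """Double the z-resolution of a (Z, Y, X) stack by inserting one synthetic
    slice between every pair of acquired slices."""
    z, y, x = stack.shape
    out = np.empty((2 * z - 1, y, x), dtype=np.float32)
    out[0::2] = stack
    for i in range(z - 1):
        # A learned model would replace this call, e.g. a hypothetical
        # film_interpolate(stack[i], stack[i + 1]) wrapper around FILM.
        out[2 * i + 1] = interpolate(stack[i], stack[i + 1])
    return out

# Toy example: a 10-slice stack of 64x64 images with every other slice skipped.
stack = np.random.rand(10, 64, 64).astype(np.float32)
dense = fill_skipped_slices(stack)
print(stack.shape, "->", dense.shape)  # (10, 64, 64) -> (19, 64, 64)
```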
ABSTRACT
BACKGROUND: Brain-computer interfaces (BCIs) can restore communication for movement- and/or speech-impaired individuals by enabling neural control of computer typing applications. Single command click detectors provide a basic yet highly functional capability. METHODS: We sought to test the performance and long-term stability of click decoding using a chronically implanted high density electrocorticographic (ECoG) BCI with coverage of the sensorimotor cortex in a human clinical trial participant (ClinicalTrials.gov, NCT03567213) with amyotrophic lateral sclerosis. We trained the participant's click detector using a small amount of training data (<44 min across 4 days) collected up to 21 days prior to BCI use, and then tested it over a period of 90 days without any retraining or updating. RESULTS: Using this click detector to navigate a switch-scanning speller interface, the study participant maintained a median spelling rate of 10.2 characters per min. Though a transient reduction in signal power modulation interrupted use of the fixed model, a new click detector achieved comparable performance despite being trained with even less data (<15 min, within 1 day). CONCLUSIONS: These results demonstrate that a click detector can be trained with a small ECoG dataset while retaining robust performance for extended periods, providing functional text-based communication to BCI users.
Amyotrophic lateral sclerosis (ALS) is a progressive disease of the nervous system that causes muscle weakness and leads to paralysis. People living with ALS therefore struggle to communicate with family and caregivers. We investigated whether the brain signals of a participant with ALS could be used to control a spelling application. Specifically, when the participant attempted a grasping movement, a computer method detected increased brain signals from electrodes implanted on the surface of his brain, and thereby generated a mouse click. The participant clicked on letters or words from a spelling application to type sentences. Our method was trained using 44 minutes' worth of brain signals and performed reliably for three months without any retraining. This approach can potentially be used to restore communication to other severely paralyzed individuals over an extended time period and after only a short training period.
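As a schematic illustration of single-command click detection, and not the participant's actual decoder, the sketch below thresholds high-gamma power (z-scored against a separately recorded rest baseline) and emits a click when the signal stays elevated for a sustained window; the threshold, hold time, and refractory period are assumptions chosen for the toy example.

```python
import numpy as np

def detect_clicks(hg_power, baseline, threshold=3.0, hold_samples=10, refractory=100):
    """Emit a click when high-gamma power (z-scored against a rest baseline)
    stays above `threshold` for `hold_samples` consecutive samples, with a
    refractory period to suppress immediate re-triggers."""
    z = (hg_power - baseline.mean()) / baseline.std()
    clicks, run_length, last_click = [], 0, -refractory
    for t, value in enumerate(z):
        run_length = run_length + 1 if value > threshold else 0
        if run_length >= hold_samples and t - last_click >= refractory:
            clicks.append(t)
            last_click = t
            run_length = 0
    return clicks

# Toy trace: baseline noise plus two sustained bursts standing in for attempted
# grasps; roughly one click is expected per burst.
rng = np.random.default_rng(1)
rest = rng.normal(0, 1, 500)      # rest-period recording used as the baseline
trace = rng.normal(0, 1, 1000)
trace[200:260] += 6.0
trace[700:760] += 6.0
print(detect_clicks(trace, rest))
```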
ABSTRACT
Recent advances in high-resolution connectomics provide researchers with access to accurate petascale reconstructions of neuronal circuits and brain networks for the first time. Neuroscientists are analyzing these networks to better understand information processing in the brain. In particular, scientists are interested in identifying specific small network motifs, i.e., repeating subgraphs of the larger brain network that are believed to be neuronal building blocks. Although such motifs are typically small (e.g., 2-6 neurons), the vast data sizes and intricate data complexity present significant challenges to the search and analysis process. To analyze these motifs, it is crucial to review instances of a motif in the brain network and then map the graph structure to detailed 3D reconstructions of the involved neurons and synapses. We present Vimo, an interactive visual approach to analyze neuronal motifs and motif chains in large brain networks. Experts can sketch network motifs intuitively in a visual interface and specify structural properties of the involved neurons and synapses to query large connectomics datasets. Motif instances (MIs) can be explored in high-resolution 3D renderings. To simplify the analysis of MIs, we designed a continuous focus&context metaphor inspired by visual abstractions. This allows users to transition from a highly detailed rendering of the anatomical structure to views that emphasize the underlying motif structure and synaptic connectivity. Furthermore, Vimo supports the identification of motif chains where a motif is used repeatedly (e.g., 2-4 times) to form a larger network structure. We evaluate Vimo in a user study and an in-depth case study with seven domain experts on motifs in a large connectome of the fruit fly, including more than 21,000 neurons and 20 million synapses. We find that Vimo enables hypothesis generation and confirmation through fast analysis iterations and connectivity highlighting.
ABSTRACT
The immense scale and complexity of neuronal electron microscopy (EM) datasets pose significant challenges in data processing, validation, and interpretation, necessitating the development of efficient, automated, and scalable error-detection methodologies. This paper proposes a novel approach that employs mesh processing techniques to identify potential error locations near neuronal tips. Error detection at tips is a particularly important challenge since these errors usually indicate that many synapses are falsely split from their parent neuron, compromising the integrity of the connectomic reconstruction. Additionally, we report results and implications from an implementation of this error detection in a semi-automated proofreading pipeline. Manual proofreading is a laborious, costly, and currently necessary method for identifying errors in the machine-learning-based segmentation of neural tissue. This approach streamlines the process of proofreading by systematically highlighting areas likely to contain inaccuracies and guiding proofreaders towards potential continuations, accelerating the rate at which errors are corrected.
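The full mesh-based analysis is beyond a short example, but the underlying idea, treating endpoints of a reconstructed neuron as candidate locations for split errors, can be sketched as follows. Here the neuron is represented by a simple skeleton edge list, and the degree-1 rule plus the minimum terminal-edge-length filter are illustrative assumptions rather than the paper's algorithm.

```python
import networkx as nx

def candidate_error_tips(edges, node_positions, min_edge_um=5.0):
    """Return skeleton endpoints (degree-1 nodes) whose terminal edge is longer
    than `min_edge_um`, as candidate locations for split errors."""
    g = nx.Graph()
    for u, v in edges:
        # Edge length from 3-D node coordinates (assumed to be in microns).
        dist = sum((a - b) ** 2 for a, b in zip(node_positions[u], node_positions[v])) ** 0.5
        g.add_edge(u, v, length=dist)
    tips = []
    for node in g.nodes:
        if g.degree(node) == 1:
            neighbor = next(iter(g[node]))
            if g[node][neighbor]["length"] >= min_edge_um:
                tips.append(node)
    return tips

# Toy skeleton: a very short terminal twig (node 3) and two long terminal
# branches (nodes 0 and 4); only the long ones are flagged for review.
positions = {0: (0, 0, 0), 1: (10, 0, 0), 2: (20, 0, 0), 3: (21, 1, 0), 4: (20, 30, 0)}
edges = [(0, 1), (1, 2), (2, 3), (2, 4)]
print(candidate_error_tips(edges, positions))  # -> [0, 4]
```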
ABSTRACT
Background: Brain-computer interfaces (BCIs) can restore communication in movement- and/or speech-impaired individuals by enabling neural control of computer typing applications. Single command "click" decoders provide a basic yet highly functional capability. Methods: We sought to test the performance and long-term stability of click-decoding using a chronically implanted high density electrocorticographic (ECoG) BCI with coverage of the sensorimotor cortex in a human clinical trial participant (ClinicalTrials.gov, NCT03567213) with amyotrophic lateral sclerosis (ALS). We trained the participant's click decoder using a small amount of training data (< 44 minutes across four days) collected up to 21 days prior to BCI use, and then tested it over a period of 90 days without any retraining or updating. Results: Using this click decoder to navigate a switch-scanning spelling interface, the study participant was able to maintain a median spelling rate of 10.2 characters per min. Though a transient reduction in signal power modulation interrupted testing with this fixed model, a new click decoder achieved comparable performance despite being trained with even less data (< 15 min, within one day). Conclusion: These results demonstrate that a click decoder can be trained with a small ECoG dataset while retaining robust performance for extended periods, providing functional text-based communication to BCI users.
ABSTRACT
Brain-computer interfaces (BCIs) can be used to control assistive devices by patients with neurological disorders like amyotrophic lateral sclerosis (ALS) that limit speech and movement. For assistive control, it is desirable for BCI systems to be accurate and reliable, preferably with minimal setup time. In this study, a participant with severe dysarthria due to ALS operates computer applications with six intuitive speech commands via a chronic electrocorticographic (ECoG) implant over the ventral sensorimotor cortex. Speech commands are accurately detected and decoded (median accuracy: 90.59%) throughout a 3-month study period without model retraining or recalibration. Use of the BCI does not require exogenous timing cues, enabling the participant to issue self-paced commands at will. These results demonstrate that a chronically implanted ECoG-based speech BCI can reliably control assistive devices over long time periods with only initial model training and calibration, supporting the feasibility of unassisted home use.
Subject(s)
Amyotrophic Lateral Sclerosis, Brain-Computer Interfaces, Humans, Speech, Amyotrophic Lateral Sclerosis/complications, Electrocorticography
ABSTRACT
We are now in the era of millimeter-scale electron microscopy (EM) volumes collected at nanometer resolution (Shapson-Coe et al., 2021; Consortium et al., 2021). Dense reconstruction of cellular compartments in these EM volumes has been enabled by recent advances in Machine Learning (ML) (Lee et al., 2017; Wu et al., 2021; Lu et al., 2021; Macrina et al., 2021). Automated segmentation methods can now yield exceptionally accurate reconstructions of cells, but despite this accuracy, laborious post-hoc proofreading is still required to generate large connectomes free of merge and split errors. The elaborate 3-D meshes of neurons produced by these segmentations contain detailed morphological information, from the diameter, shape, and branching patterns of axons and dendrites, down to the fine-scale structure of dendritic spines. However, extracting information about these features can require substantial effort to piece together existing tools into custom workflows. Building on existing open-source software for mesh manipulation, here we present "NEURD", a software package that decomposes each meshed neuron into a compact and extensively annotated graph representation. With these feature-rich graphs, we implement workflows for state-of-the-art automated post-hoc proofreading of merge errors, cell classification, spine detection, axon-dendrite proximities, and other features that can enable many downstream analyses of neural morphology and connectivity. NEURD can make these new massive and complex datasets more accessible to neuroscience researchers focused on a variety of scientific questions.
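One of the analyses mentioned above, finding axon-dendrite proximities, reduces at its core to a radius query between point clouds sampled from two compartments. The sketch below is a generic illustration using a k-d tree, not NEURD's implementation; the 3 µm proximity radius, coordinate units, and toy geometry are assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def proximities(axon_points, dendrite_points, radius_um=3.0):
    """Return (axon_index, dendrite_index, distance) triples for all point
    pairs closer than `radius_um`. Points are (N, 3) arrays in microns."""
    tree = cKDTree(dendrite_points)
    pairs = []
    for i, p in enumerate(axon_points):
        for j in tree.query_ball_point(p, r=radius_um):
            pairs.append((i, j, float(np.linalg.norm(p - dendrite_points[j]))))
    return pairs

# Toy data: an axon passing within ~2 um of one stretch of a dendrite.
axon = np.array([[x, 0.0, 2.0] for x in np.arange(0, 50, 5.0)])
dendrite = np.array([[25.0, y, 0.0] for y in np.arange(-20, 20, 5.0)])
close = proximities(axon, dendrite)
print(len(close), "proximity pair(s) within 3 um")
```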
ABSTRACT
Neuroscientists can leverage technological advances to image neural tissue across a range of different scales, potentially forming the basis for the next generation of brain atlases and circuit reconstructions at submicron resolution, using Electron Microscopy and X-ray Microtomography modalities. However, there is variability in data collection, annotation, and storage approaches, which limits effective comparative and secondary analysis. There has been great progress in standardizing interfaces for large-scale spatial image data, but more work is needed to standardize annotations, especially metadata associated with neuroanatomical entities. Standardization will enable validation, sharing, and replication, greatly amplifying investment throughout the connectomics community. We share key design considerations and a metadata use case developed for a recent large-scale dataset.
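To make the idea of standardized annotation metadata concrete, here is a hypothetical record for one neuroanatomical entity in a volumetric dataset, expressed as a small Python structure that could be serialized to JSON. Every field name here is an illustrative assumption, not the authors' actual schema.

```python
import json
from dataclasses import dataclass, asdict, field

@dataclass
class AnnotationMetadata:
    """Hypothetical metadata record for one annotated neuroanatomical entity."""
    entity_id: int                 # stable identifier within the dataset
    entity_type: str               # e.g. "neuron", "synapse", "mitochondrion"
    dataset: str                   # dataset name or DOI
    modality: str                  # e.g. "EM", "X-ray microtomography"
    voxel_size_nm: tuple           # (x, y, z) resolution of the source volume
    bounding_box_vox: tuple        # ((x0, y0, z0), (x1, y1, z1)) in voxels
    annotator: str                 # person or algorithm that produced it
    proofread: bool = False
    cross_references: dict = field(default_factory=dict)  # links to other atlases/IDs

record = AnnotationMetadata(
    entity_id=864691135341239000,
    entity_type="neuron",
    dataset="example-cortical-mm3",          # placeholder dataset name
    modality="EM",
    voxel_size_nm=(4, 4, 40),
    bounding_box_vox=((1024, 2048, 100), (4096, 6144, 400)),
    annotator="automated-segmentation-v1",
    cross_references={"cell_type_atlas": "L2/3 pyramidal (putative)"},
)
print(json.dumps(asdict(record), indent=2))
```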
ABSTRACT
Technological advances in imaging and data acquisition are leading to the development of petabyte-scale neuroscience image datasets. These large-scale volumetric datasets pose unique challenges since analyses often span the entire volume, requiring a unified platform to access it. In this paper, we describe the Brain Observatory Storage Service and Database (BossDB), a cloud-based solution for storing and accessing petascale image datasets. BossDB provides support for data ingest, storage, visualization, and sharing through a RESTful Application Programming Interface (API). A key feature is the scalable indexing of spatial data and automatic and manual annotations to facilitate data discovery. Our project is open source, can be easily and cost-effectively used for a variety of modalities and applications, and has been used successfully with datasets over a petabyte in size.
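For readers unfamiliar with BossDB, a typical access pattern from Python uses the intern client's numpy-like cutout interface, roughly as sketched below. The channel URI is a placeholder, and the exact call signature should be confirmed against the current intern documentation.

```python
# Minimal access sketch, assuming the intern client (pip install intern) and a
# publicly readable BossDB channel; the URI below is a placeholder, not a real path.
from intern import array

# Wrap a BossDB channel as a lazily evaluated, numpy-like volume.
vol = array("bossdb://example_collection/example_experiment/em")

# Download a small cutout (z, y, x order) into a numpy array for local analysis.
cutout = vol[100:110, 2000:2512, 2000:2512]
print(cutout.shape, cutout.dtype)
```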
ABSTRACT
Rigorous comparisons of human and machine learning algorithm performance on the same task help to support accurate claims about algorithm success rates and advance understanding of their performance relative to that of human performers. In turn, these comparisons are critical for supporting advances in artificial intelligence. However, the machine learning community has lacked a standardized, consensus framework for performing the evaluations of human performance necessary for comparison. We demonstrate common pitfalls in designing the human performance evaluation and propose a framework for the evaluation of human performance, illustrating guiding principles for a successful comparison. These principles are, first, to design the human evaluation with an understanding of the differences between human and algorithm cognition; second, to match trials between human participants and the algorithm evaluation; and third, to employ best practices for psychology research studies, such as the collection and analysis of supplementary and subjective data and adhering to ethical review protocols. We demonstrate our framework's utility for designing a study to evaluate human performance on a one-shot learning task. Adoption of this common framework may provide a standard approach to evaluate algorithm performance and aid in the reproducibility of comparisons between human and machine learning algorithm performance.
Subject(s)
Artificial Intelligence, Machine Learning, Algorithms, Humans, Reproducibility of Results
ABSTRACT
Understanding the cortical representations of movements and their stability can shed light on improved brain-machine interface (BMI) approaches to decode these representations without frequent recalibration. Here, we characterize the spatial organization (somatotopy) and stability of the bilateral sensorimotor map of forearm muscles in a study participant with an incomplete, high-level spinal cord injury implanted bilaterally in the primary motor and sensory cortices with Utah microelectrode arrays (MEAs). We built representation maps by recording bilateral multiunit activity (MUA) and surface electromyography (EMG) as the participant executed voluntary contractions of the extensor carpi radialis (ECR) and attempted motions of the flexor carpi radialis (FCR), which was paralyzed. To assess stability, we repeatedly mapped and compared left- and right-wrist-extensor-related activity across several sessions, comparing the somatotopy of active electrodes as well as neural signals at both the within-electrode (multiunit) and cross-electrode (network) levels. Wrist motions showed significant activation in motor and sensory cortical electrodes. Within electrodes, firing strength stability diminished as the time between consecutive measurements increased (hours within a session, or days across sessions), with higher stability observed in sensory cortex than in motor cortex, and in the contralateral hemisphere than in the ipsilateral hemisphere. However, we observed no differences at the network level, and no evidence of decoding instabilities for wrist EMG, either across timespans of hours or days, or across recording areas. While map stability differs between brain areas and hemispheres at the multiunit/electrode level, these differences are nullified at the ensemble level.
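As a simplified illustration of the kind of cross-session stability comparison described above (not the study's analysis code), the sketch below correlates per-electrode movement-related firing rates across two sessions; the electrode count, rate distributions, and choice of Pearson correlation as the stability index are assumptions for the example.

```python
import numpy as np
from scipy.stats import pearsonr

def map_stability(rates_session_a, rates_session_b):
    """Correlate per-electrode mean firing rates (spikes/s) across two sessions
    as a simple index of somatotopic map stability."""
    r, p = pearsonr(rates_session_a, rates_session_b)
    return r, p

# Toy data: a 96-electrode array whose day-2 map is a noisy copy of day 1.
rng = np.random.default_rng(2)
day1 = rng.gamma(shape=2.0, scale=5.0, size=96)   # wrist-extension-related rates
day2 = day1 + rng.normal(0, 2.0, size=96)         # same map plus measurement noise
r, p = map_stability(day1, day2)
print(f"cross-session map correlation r = {r:.2f} (p = {p:.1e})")
```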
Subject(s)
Forearm, Skeletal Muscle, Electromyography, Forearm/physiology, Humans, Movement/physiology, Skeletal Muscle/physiology, Quadriplegia
ABSTRACT
Advances in intelligent robotic systems and brain-machine interfaces (BMI) have helped restore functionality and independence to individuals living with sensorimotor deficits; however, tasks requiring bimanual coordination and fine manipulation remain unsolved given the technical complexity of controlling multiple degrees of freedom (DOF) across multiple limbs in a coordinated way through user input. To address this challenge, we implemented a collaborative shared control strategy to manipulate and coordinate two Modular Prosthetic Limbs (MPL) for performing a bimanual self-feeding task. A human participant with microelectrode arrays in sensorimotor brain regions provided commands to both MPLs to perform the self-feeding task, which included bimanual cutting. Motor commands were decoded from bilateral neural signals to control up to two DOFs on each MPL at a time. The shared control strategy enabled the participant to map his four-DOF control inputs, two per hand, to as many as 12 DOFs for specifying robot end effector position and orientation. Using neurally driven shared control, the participant successfully and simultaneously controlled movements of both robotic limbs to cut and eat food in a complex bimanual self-feeding task. This demonstration of bimanual robotic system control via a BMI in collaboration with intelligent robot behavior has major implications for restoring complex movement behaviors for those living with sensorimotor deficits.
ABSTRACT
BACKGROUND AND OBJECTIVES: The restoration of touch to fingers and fingertips is critical to achieving dexterous neuroprosthetic control for individuals with sensorimotor dysfunction. However, localized fingertip sensations have not been evoked via intracortical microstimulation (ICMS). METHODS: Using a novel intraoperative mapping approach, we implanted electrode arrays in the finger areas of left and right somatosensory cortex and delivered ICMS over a 2-year period in a human participant with spinal cord injury. RESULTS: Stimulation evoked tactile sensations in 8 fingers, including fingertips, spanning both hands. Evoked percepts followed expected somatotopic arrangements. The subject was able to reliably identify up to 7 finger-specific sites spanning both hands in a finger discrimination task. The size of the evoked percepts was on average 33% larger than a finger pad, as assessed via manual markings of a hand image. The size of the evoked percepts increased modestly with increased stimulation intensity, growing 21% as pulse amplitude increased from 20 to 80 µA. Detection thresholds were estimated on a subset of electrodes, with estimates of 9.2 to 35 µA observed, roughly consistent with prior studies. DISCUSSION: These results suggest that ICMS can enable the delivery of consistent and localized fingertip sensations during object manipulation by neuroprostheses for individuals with somatosensory deficits. CLINICALTRIALS.GOV IDENTIFIER: NCT03161067.
Subject(s)
Somatosensory Cortex, Spinal Cord Injuries, Electric Stimulation/methods, Hand, Humans, Touch
ABSTRACT
Recent advances in neuroscience have enabled the exploration of brain structure at the level of individual synaptic connections. These connectomics datasets continue to grow in size and complexity; methods to search for and identify interesting graph patterns offer a promising approach to quickly reduce data dimensionality and enable discovery. These graphs are often too large to be analyzed manually, presenting significant barriers to searching for structure and testing hypotheses. We combine graph database and analysis libraries with an easy-to-use neuroscience grammar suitable for rapidly constructing queries and searching for subgraphs and patterns of interest. Our approach abstracts many of the computer science and graph theory challenges associated with nanoscale brain network analysis and allows scientists to quickly conduct research at scale. We demonstrate the utility of these tools by searching for motifs on simulated data and real public connectomics datasets, and we share simple and complex structures relevant to the neuroscience community. We contextualize our findings and provide case studies and software to motivate future neuroscience exploration.
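To give a flavor of the query-by-motif workflow described above, the sketch below searches a toy directed connectome for a three-node feedforward motif using networkx subgraph matching; this is a generic stand-in for the paper's grammar-based tooling, and the example graph is synthetic.

```python
import networkx as nx
from networkx.algorithms.isomorphism import DiGraphMatcher

# Motif to search for: a feedforward triad A -> B, B -> C, A -> C.
motif = nx.DiGraph([("A", "B"), ("B", "C"), ("A", "C")])

# Toy "connectome": nodes are neuron IDs, edges are synaptic connections.
connectome = nx.DiGraph(
    [(1, 2), (2, 3), (1, 3),   # one feedforward triad
     (3, 4), (4, 5), (5, 3)]   # a cycle, which should not match
)

# Enumerate all embeddings of the motif's edges into the larger graph.
matcher = DiGraphMatcher(connectome, motif)
matches = [
    {motif_node: brain_node for brain_node, motif_node in mapping.items()}
    for mapping in matcher.subgraph_monomorphisms_iter()
]
print(f"{len(matches)} motif instance(s) found")
for m in matches:
    print(m)   # e.g. {'A': 1, 'B': 2, 'C': 3}
```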
Subject(s)
Connectome, Databases as Topic, Search Engine, Software, Animals, Caenorhabditis elegans/physiology, Drosophila melanogaster/physiology, Mice, Reproducibility of Results
ABSTRACT
Aerial images are frequently used in geospatial analysis to inform responses to crises and disasters but can pose unique challenges for visual search when they have low resolution, degraded color information, and small object sizes. Aerial image analysis is often performed by humans, but machine learning approaches are being developed to complement manual analysis. To date, however, relatively little work has explored how humans perform visual search on these tasks, and understanding this could ultimately help enable human-machine teaming. We designed a set of studies to understand what features of an aerial image make visual search difficult for humans and what strategies humans use when performing these tasks. Across two experiments, we tested human performance on a counting task with a series of aerial images and examined the influence of features such as target size, location, color, clarity, and number of targets on accuracy and search strategies. Both experiments presented trials consisting of an aerial satellite image; participants were asked to find all instances of a search template in the image. Target size was consistently a significant predictor of performance, influencing not only the accuracy of selections but also the order in which participants selected target instances in the trial. Experiment 2 demonstrated that the clarity of the target instance and the match between the color of the search template and the color of the target instance also predicted accuracy. Furthermore, color also predicted the order in which instances were selected in the trial. These experiments establish not only a benchmark of typical human performance on visual search of aerial images but also identify several features that can influence the task difficulty level for humans. These results have implications for understanding human visual search on real-world tasks and when humans may benefit from automated approaches.
ABSTRACT
As neuroimagery datasets continue to grow in size, the complexity of data analyses can require a detailed understanding and implementation of systems computer science for storage, access, processing, and sharing. Currently, several general data standards (e.g., Zarr, HDF5, precomputed) and purpose-built ecosystems (e.g., BossDB, CloudVolume, DVID, and Knossos) exist. Each of these systems has advantages and limitations and is most appropriate for different use cases. Using datasets that do not fit into RAM in this heterogeneous environment is challenging, and significant barriers exist to leveraging underlying research investments. In this manuscript, we outline our perspective for how to approach this challenge through the use of community-provided, standardized interfaces that unify various computational backends and abstract computer science challenges from the scientist. We introduce desirable design patterns and share our reference implementation called intern.
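The design pattern argued for above, a thin, array-like abstraction over heterogeneous storage backends, can be illustrated with a minimal Python protocol. This is an illustrative sketch of the general idea, not intern's actual class hierarchy.

```python
from typing import Protocol, Tuple
import numpy as np

class VolumeBackend(Protocol):
    """Minimal numpy-like contract a volumetric backend should satisfy."""
    shape: Tuple[int, int, int]
    dtype: np.dtype

    def __getitem__(self, zyx_slices) -> np.ndarray:
        """Return a cutout for (z, y, x) slices without loading the full volume."""
        ...

class InMemoryBackend:
    """Trivial reference backend; a Zarr-, HDF5-, or BossDB-backed class would
    implement the same contract while fetching chunks lazily."""
    def __init__(self, data: np.ndarray):
        self._data = data
        self.shape, self.dtype = data.shape, data.dtype

    def __getitem__(self, zyx_slices) -> np.ndarray:
        return self._data[zyx_slices]

def mean_intensity(volume: VolumeBackend, z0: int, z1: int) -> float:
    """Analysis code written against the contract works with any backend."""
    return float(volume[z0:z1, :, :].mean())

vol = InMemoryBackend(np.random.rand(32, 128, 128))
print(mean_intensity(vol, 0, 8))
```

Because analysis code depends only on the contract, swapping a local array for a remote, larger-than-RAM store requires no changes to downstream scripts.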
Subject(s)
Datasets as Topic/standards, Neurosciences
ABSTRACT
The nanoscale connectomics community has recently generated automated and semi-automated "wiring diagrams" of brain subregions from terabytes and petabytes of dense 3D neuroimagery. This process involves many challenging and imperfect technical steps, including dense 3D image segmentation, anisotropic nonrigid image alignment and coregistration, and pixel classification of each neuron and its individual synaptic connections. As data volumes continue to grow in size, and connectome generation becomes increasingly commonplace, it is important that the scientific community is able to rapidly assess the quality and accuracy of a connectome product to promote dataset analysis and reuse. In this work, we share our scalable toolkit for assessing the quality of a connectome reconstruction via targeted inquiry and large-scale graph analysis, and for providing insights into how such connectome proofreading processes may be improved and optimized in the future. We illustrate the applications and ecosystem on a recent reference dataset. Clinical relevance: Large-scale electron microscopy (EM) data offers a novel opportunity to characterize etiologies and neurological diseases and conditions at an unprecedented scale. EM is useful for low-level analyses such as biopsies; this increased scale offers new possibilities for research into areas such as neural networks if certain bottlenecks and problems are overcome.
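A few of the graph-level quality checks alluded to above can be expressed in a handful of lines. The sketch below reports fragment counts, the fraction of poorly connected ("orphan") nodes, and mean synaptic degree for a toy wiring diagram; the specific indicators and thresholds are illustrative, not the toolkit's.

```python
import networkx as nx

def connectome_quality_report(g: nx.DiGraph, min_degree=1):
    """Simple reconstruction-quality indicators for a wiring diagram: weakly
    connected fragment count, fraction of poorly connected ("orphan") nodes,
    and summary statistics of synaptic degree."""
    fragments = list(nx.weakly_connected_components(g))
    degrees = dict(g.degree())
    orphans = [n for n, d in degrees.items() if d <= min_degree]
    return {
        "neurons": g.number_of_nodes(),
        "synapses": g.number_of_edges(),
        "fragments": len(fragments),
        "largest_fragment": max(len(c) for c in fragments),
        "orphan_fraction": len(orphans) / g.number_of_nodes(),
        "mean_degree": sum(degrees.values()) / g.number_of_nodes(),
    }

# Toy wiring diagram: one well-connected component plus two tiny fragments,
# standing in for likely split errors that proofreading should resolve.
g = nx.gnp_random_graph(50, 0.1, seed=3, directed=True)
g.add_edges_from([(100, 101), (102, 103)])
print(connectome_quality_report(g))
```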