Results 1 - 13 of 13
1.
Front Robot AI ; 11: 1295308, 2024.
Article in English | MEDLINE | ID: mdl-38756983

ABSTRACT

Dance plays a vital role in human societies across time and culture, with different communities having invented different systems for artistic expression through movement (genres). Differences between genres can be described by experts in words and movements, but these descriptions can only be appreciated by people with certain background abilities. Existing dance notation schemes could be applied to describe genre differences; however, they fall substantially short of capturing the important details of movement across a wide spectrum of genres. Our knowledge and practice around dance would benefit from a general, quantitative, and human-understandable method of characterizing meaningful differences between aspects of any dance style: a computational kinematics of dance. Here we introduce and apply a novel system for encoding bodily movement as 17 macroscopic, interpretable features, such as expandedness of the body or the frequency of sharp movements. We use this encoding to analyze Hip Hop dance genres, in part by building a low-cost machine-learning classifier that distinguishes genre with high accuracy. Our study relies on an open dataset (AIST++) of pose sequences from dancers instructed to perform one of ten Hip Hop genres, such as Breakdance, Popping, or Krump. For comparison, we evaluated moderately experienced human observers at discerning these sequences' genres from movements alone (38%, where chance = 10%). The performance of a baseline Ridge classifier model was fair (48%), and that of the model resulting from our automated machine-learning pipeline was strong (76%). This indicates that the selected features represent important dimensions of movement for the expression of the attitudes, stories, and aesthetic values manifested in these dance forms. Our study offers a new window into significant relations of similarity and difference between the genres studied. Given the rich, complex, and culturally shaped nature of these genres, the interpretability of our features, and the lightweight techniques used, our approach has significant potential for generalization to other movement domains and movement-related applications.
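Two of the named features can be sketched directly from pose sequences. A minimal numpy sketch, assuming poses arrive as a (frames, joints, coordinates) array; the function names and the acceleration threshold are illustrative, not the paper's exact definitions:

```python
import numpy as np

def expandedness(poses):
    # Mean distance of each joint from the per-frame body centroid,
    # averaged over all frames: a spread-out pose scores higher.
    centroids = poses.mean(axis=1, keepdims=True)
    return float(np.linalg.norm(poses - centroids, axis=2).mean())

def sharp_movement_rate(poses, accel_threshold):
    # Fraction of frames whose mean joint acceleration magnitude
    # exceeds a threshold: a crude proxy for "sharp" movements.
    velocity = np.diff(poses, axis=0)
    acceleration = np.diff(velocity, axis=0)
    magnitude = np.linalg.norm(acceleration, axis=2).mean(axis=1)
    return float((magnitude > accel_threshold).mean())
```

A feature vector built from scalars like these, one row per pose sequence, is what a lightweight classifier such as the Ridge baseline would consume.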

2.
bioRxiv ; 2023 Oct 23.
Article in English | MEDLINE | ID: mdl-37961670

ABSTRACT

The immense scale and complexity of neuronal electron microscopy (EM) datasets pose significant challenges in data processing, validation, and interpretation, necessitating the development of efficient, automated, and scalable error-detection methodologies. This paper proposes a novel approach that employs mesh processing techniques to identify potential error locations near neuronal tips. Error detection at tips is a particularly important challenge since these errors usually indicate that many synapses are falsely split from their parent neuron, compromising the integrity of the connectomic reconstruction. Additionally, we draw implications and results from an implementation of this error detection in a semi-automated proofreading pipeline. Manual proofreading is a laborious, costly, and currently necessary method for identifying the errors in the machine-learning-based segmentation of neural tissue. This approach streamlines the process of proofreading by systematically highlighting areas likely to contain inaccuracies and guiding proofreaders towards potential continuations, accelerating the rate at which errors are corrected.
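The paper operates on meshes, but the idea of flagging candidate error locations at tips can be illustrated on a skeletonized neuron. A minimal sketch, assuming the skeleton is given as an edge list; the helper name is hypothetical:

```python
def tip_candidates(edges):
    # edges: list of (a, b) node pairs forming a neuron's skeleton graph.
    # Degree-1 nodes are candidate tips: places where a branch ends and
    # where a false split may have cut off a continuation, so they are
    # natural targets for focused proofreading.
    degree = {}
    for a, b in edges:
        degree[a] = degree.get(a, 0) + 1
        degree[b] = degree.get(b, 0) + 1
    return sorted(node for node, d in degree.items() if d == 1)
```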

3.
Article in English | MEDLINE | ID: mdl-37883279

ABSTRACT

Recent advances in high-resolution connectomics provide researchers with access to accurate petascale reconstructions of neuronal circuits and brain networks for the first time. Neuroscientists are analyzing these networks to better understand information processing in the brain. In particular, scientists are interested in identifying specific small network motifs, i.e., repeating subgraphs of the larger brain network that are believed to be neuronal building blocks. Although such motifs are typically small (e.g., 2 - 6 neurons), the vast data sizes and intricate data complexity present significant challenges to the search and analysis process. To analyze these motifs, it is crucial to review instances of a motif in the brain network and then map the graph structure to detailed 3D reconstructions of the involved neurons and synapses. We present Vimo, an interactive visual approach to analyze neuronal motifs and motif chains in large brain networks. Experts can sketch network motifs intuitively in a visual interface and specify structural properties of the involved neurons and synapses to query large connectomics datasets. Motif instances (MIs) can be explored in high-resolution 3D renderings. To simplify the analysis of MIs, we designed a continuous focus&context metaphor inspired by visual abstractions. This allows users to transition from a highly-detailed rendering of the anatomical structure to views that emphasize the underlying motif structure and synaptic connectivity. Furthermore, Vimo supports the identification of motif chains where a motif is used repeatedly (e.g., 2 - 4 times) to form a larger network structure. We evaluate Vimo in a user study and an in-depth case study with seven domain experts on motifs in a large connectome of the fruit fly, including more than 21,000 neurons and 20 million synapses. We find that Vimo enables hypothesis generation and confirmation through fast analysis iterations and connectivity highlighting.
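The motif search that sketched queries drive can be illustrated with brute-force subgraph matching, which is tractable at the stated motif sizes (2 - 6 nodes). A sketch assuming the connectome is a directed edge list; this is not Vimo's actual query engine:

```python
from itertools import permutations

def find_motif_instances(graph_edges, motif_edges):
    # Brute-force motif matching: return every assignment of distinct
    # graph nodes to motif variables such that all motif edges exist.
    # Exponential in motif size, but fine for small motifs.
    edges = set(graph_edges)
    nodes = sorted({n for edge in graph_edges for n in edge})
    variables = sorted({v for edge in motif_edges for v in edge})
    matches = []
    for assignment in permutations(nodes, len(variables)):
        binding = dict(zip(variables, assignment))
        if all((binding[a], binding[b]) in edges for a, b in motif_edges):
            matches.append(binding)
    return matches
```

Each returned binding corresponds to one motif instance that a tool like Vimo would then render in 3D for inspection.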

4.
Sci Rep ; 13(1): 2236, 2023 02 08.
Article in English | MEDLINE | ID: mdl-36755135

ABSTRACT

As clinicians are faced with a deluge of clinical data, data science can play an important role in highlighting key features driving patient outcomes, aiding in the development of new clinical hypotheses. Insight derived from machine learning can serve as a clinical support tool by connecting care providers with reliable results from big data analysis that identify previously undetected clinical patterns. In this work, we show an example of collaboration between clinicians and data scientists during the COVID-19 pandemic, identifying sub-groups of COVID-19 patients with unanticipated outcomes or who are high-risk for severe disease or death. We apply a random forest classifier model to predict adverse patient outcomes early in the disease course, and we connect our classification results to unsupervised clustering of patient features that may underpin patient risk. The paradigm for using data science for hypothesis generation and clinical decision support, as well as our triaged classification approach and unsupervised clustering methods to determine patient cohorts, are applicable to driving rapid hypothesis generation and iteration in a variety of clinical challenges, including future public health crises.
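The triaged classification idea, routing patients to cohorts by predicted risk, can be sketched independently of the underlying random forest. The thresholds and function name below are illustrative, not from the study:

```python
def triage_patients(risk_scores, low=0.2, high=0.7):
    # Partition patients into cohorts by a classifier's predicted risk
    # of an adverse outcome. Thresholds here are placeholders; in
    # practice they would be tuned on validation data with clinician
    # input, since they trade sensitivity against review workload.
    cohorts = {"low": [], "medium": [], "high": []}
    for patient_id, score in risk_scores.items():
        if score >= high:
            cohorts["high"].append(patient_id)
        elif score >= low:
            cohorts["medium"].append(patient_id)
        else:
            cohorts["low"].append(patient_id)
    return cohorts
```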


Subject(s)
COVID-19, Humans, COVID-19/epidemiology, Pandemics, Machine Learning, Patients, Big Data
5.
Front Neuroinform ; 16: 828787, 2022.
Article in English | MEDLINE | ID: mdl-35242021

ABSTRACT

Technological advances in imaging and data acquisition are leading to the development of petabyte-scale neuroscience image datasets. These large-scale volumetric datasets pose unique challenges since analyses often span the entire volume, requiring a unified platform to access it. In this paper, we describe the Brain Observatory Storage Service and Database (BossDB), a cloud-based solution for storing and accessing petascale image datasets. BossDB provides support for data ingest, storage, visualization, and sharing through a RESTful Application Programming Interface (API). A key feature is the scalable indexing of spatial data and automatic and manual annotations to facilitate data discovery. Our project is open source, can be used easily and cost-effectively for a variety of modalities and applications, and has worked effectively with datasets over a petabyte in size.
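The scalable spatial indexing mentioned above rests on chunked storage: a cutout request is resolved to the set of stored chunks it overlaps, so only those are read. A generic sketch of that resolution step, not BossDB's actual internals:

```python
def chunks_for_cutout(start, stop, chunk_size):
    # Given a requested cuboid cutout [start, stop) per axis and the
    # chunk size used by the store, list the (x, y, z) chunk indices
    # that must be fetched to serve the request.
    ranges = [
        range(s // c, (e - 1) // c + 1)
        for s, e, c in zip(start, stop, chunk_size)
    ]
    return [(x, y, z) for x in ranges[0] for y in ranges[1] for z in ranges[2]]
```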

6.
Annu Int Conf IEEE Eng Med Biol Soc ; 2021: 2413-2418, 2021 11.
Article in English | MEDLINE | ID: mdl-34891768

ABSTRACT

As neuroimagery datasets continue to grow in size, the complexity of data analyses can require a detailed understanding and implementation of systems computer science for storage, access, processing, and sharing. Currently, several general data standards (e.g., Zarr, HDF5, precomputed) and purpose-built ecosystems (e.g., BossDB, CloudVolume, DVID, and Knossos) exist. Each of these systems has advantages and limitations and is most appropriate for different use cases. Using datasets that don't fit into RAM in this heterogeneous environment is challenging, and significant barriers exist to leveraging underlying research investments. In this manuscript, we outline our perspective for how to approach this challenge through the use of community-provided, standardized interfaces that unify various computational backends and abstract computer science challenges from the scientist. We introduce desirable design patterns and share our reference implementation called intern.
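The standardized-interface pattern advocated here can be sketched as an abstract volume-access class with interchangeable backends. The class and method names below are illustrative, not intern's actual API:

```python
from abc import ABC, abstractmethod

class VolumeBackend(ABC):
    # The unified interface: scientists request subvolumes by
    # coordinates and never see which storage backend serves them.
    @abstractmethod
    def get_cutout(self, start, stop):
        ...

class InMemoryBackend(VolumeBackend):
    # Toy backend over a nested list; a real implementation would wrap
    # Zarr, HDF5, or a remote store behind the same method signature.
    def __init__(self, volume):
        self.volume = volume

    def get_cutout(self, start, stop):
        return [row[start[1]:stop[1]] for row in self.volume[start[0]:stop[0]]]
```

Swapping storage systems then means changing which subclass is constructed, with no changes to analysis code.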


Subject(s)
Datasets as Topic/standards, Neurosciences
7.
Annu Int Conf IEEE Eng Med Biol Soc ; 2021: 2444-2450, 2021 11.
Article in English | MEDLINE | ID: mdl-34891774

ABSTRACT

The nanoscale connectomics community has recently generated automated and semi-automated "wiring diagrams" of brain subregions from terabytes and petabytes of dense 3D neuroimagery. This process involves many challenging and imperfect technical steps, including dense 3D image segmentation, anisotropic nonrigid image alignment and coregistration, and pixel classification of each neuron and their individual synaptic connections. As data volumes continue to grow in size, and connectome generation becomes increasingly commonplace, it is important that the scientific community is able to rapidly assess the quality and accuracy of a connectome product to promote dataset analysis and reuse. In this work, we share our scalable toolkit for assessing the quality of a connectome reconstruction via targeted inquiry and large-scale graph analysis, and to provide insights into how such connectome proofreading processes may be improved and optimized in the future. We illustrate the applications and ecosystem on a recent reference dataset.

Clinical relevance: Large-scale electron microscopy (EM) data offers a novel opportunity to characterize etiologies and neurological diseases and conditions at an unprecedented scale. EM is useful for low-level analyses such as biopsies; this increased scale offers new possibilities for research into areas such as neural networks if certain bottlenecks and problems are overcome.
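One large-scale graph check of the kind described, counting small orphan fragments that may signal an over-split segmentation, can be sketched in a few lines. The metric and names are illustrative, not the toolkit's actual implementation:

```python
def connected_components(edges, nodes):
    # Undirected connected components via iterative depth-first search.
    adjacency = {n: set() for n in nodes}
    for a, b in edges:
        adjacency[a].add(b)
        adjacency[b].add(a)
    seen, components = set(), []
    for n in nodes:
        if n in seen:
            continue
        stack, component = [n], set()
        while stack:
            current = stack.pop()
            if current in component:
                continue
            component.add(current)
            stack.extend(adjacency[current] - component)
        seen |= component
        components.append(component)
    return components

def fragment_rate(edges, nodes, small=2):
    # Fraction of components at or below a size cutoff: a high value
    # suggests many orphan fragments worth routing to proofreaders.
    components = connected_components(edges, nodes)
    return sum(1 for c in components if len(c) <= small) / len(components)
```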


Subject(s)
Connectome, Ecosystem, Three-Dimensional Imaging, Electron Microscopy, Neurons
8.
Sci Rep ; 11(1): 13045, 2021 06 22.
Article in English | MEDLINE | ID: mdl-34158519

ABSTRACT

Recent advances in neuroscience have enabled the exploration of brain structure at the level of individual synaptic connections. These connectomics datasets continue to grow in size and complexity; methods to search for and identify interesting graph patterns offer a promising approach to quickly reduce data dimensionality and enable discovery. These graphs are often too large to be analyzed manually, presenting significant barriers to searching for structure and testing hypotheses. We combine graph database and analysis libraries with an easy-to-use neuroscience grammar suitable for rapidly constructing queries and searching for subgraphs and patterns of interest. Our approach abstracts many of the computer science and graph theory challenges associated with nanoscale brain network analysis and allows scientists to quickly conduct research at scale. We demonstrate the utility of these tools by searching for motifs on simulated data and real public connectomics datasets, and we share simple and complex structures relevant to the neuroscience community. We contextualize our findings and provide case studies and software to motivate future neuroscience exploration.
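A query grammar of the kind described can be illustrated with a toy parser that reads one edge constraint per line; the real grammar also supports attribute filters and is not reproduced here:

```python
def parse_motif(query):
    # Parse a tiny motif grammar, one edge constraint per line:
    #   "A -> B" means the motif requires a synaptic edge from A to B.
    # The result is an edge list suitable for a subgraph-matching step.
    edges = []
    for line in query.strip().splitlines():
        source, arrow, target = line.split()
        if arrow != "->":
            raise ValueError(f"unsupported edge operator: {arrow}")
        edges.append((source, target))
    return edges
```

Keeping the grammar declarative is what abstracts the graph-theory machinery away from the scientist: the same query can be executed against a graph database or an in-memory matcher.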


Subject(s)
Connectome, Databases as Topic, Search Engine, Software, Animals, Caenorhabditis elegans/physiology, Drosophila melanogaster/physiology, Mice, Reproducibility of Results
9.
Gigascience ; 9(12)2020 12 21.
Article in English | MEDLINE | ID: mdl-33347572

ABSTRACT

BACKGROUND: Emerging neuroimaging datasets (collected with imaging techniques such as electron microscopy, optical microscopy, or X-ray microtomography) describe the location and properties of neurons and their connections at unprecedented scale, promising new ways of understanding the brain. These modern imaging techniques used to interrogate the brain can quickly accumulate gigabytes to petabytes of structural brain imaging data. Unfortunately, many neuroscience laboratories lack the computational resources to work with datasets of this size: computer vision tools are often not portable or scalable, and there is considerable difficulty in reproducing results or extending methods.

RESULTS: We developed an ecosystem of neuroimaging data analysis pipelines that use open-source algorithms to create standardized modules and end-to-end optimized approaches. As exemplars we apply our tools to estimate synapse-level connectomes from electron microscopy data and cell distributions from X-ray microtomography data. To facilitate scientific discovery, we propose a generalized processing framework, which connects and extends existing open-source projects to provide large-scale data storage, reproducible algorithms, and workflow execution engines.

CONCLUSIONS: Our accessible methods and pipelines demonstrate that approaches across multiple neuroimaging experiments can be standardized and applied to diverse datasets. The techniques developed are demonstrated on neuroimaging datasets but may be applied to similar problems in other domains.
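The standardized-module idea can be sketched as a pipeline runner that chains stages behind a uniform callable interface and records intermediate results for reproducibility; the names are illustrative, not the framework's API:

```python
def run_pipeline(volume, stages):
    # Chain standardized processing modules: each stage is a (name,
    # callable) pair taking the previous stage's output. The record of
    # intermediate outputs is a simple provenance trail, the kind of
    # bookkeeping that makes pipeline runs reproducible and debuggable.
    record = [("input", volume)]
    for name, stage in stages:
        volume = stage(volume)
        record.append((name, volume))
    return volume, record
```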


Subject(s)
Ecosystem, Software, Algorithms, Neuroimaging, Workflow
10.
PLoS One ; 15(4): e0231300, 2020.
Article in English | MEDLINE | ID: mdl-32324754

ABSTRACT

Incorporating expert knowledge at the time machine learning models are trained holds promise for producing models that are easier to interpret. The main objectives of this study were to use a feature engineering approach to incorporate clinical expert knowledge prior to applying machine learning techniques, and to assess the impact of the approach on model complexity and performance. Four machine learning models were trained to predict mortality with a severe asthma case study. Experiments to select fewer input features based on a discriminative score showed low to moderate precision for discovering clinically meaningful triplets, indicating that a discriminative score alone cannot replace clinical input. When compared to baseline machine learning models, we found a decrease in model complexity with use of fewer features informed by discriminative score and filtering of laboratory features with clinical input. We also found a small difference in performance for the mortality prediction task when comparing baseline ML models to models that used filtered features. Encoding demographic and triplet information in ML models with filtered features appeared to show performance improvements over the baseline. These findings indicate that the use of filtered features may reduce model complexity with little impact on performance.
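The feature-filtering step can be sketched with a simple discriminative score, here the absolute difference of class means divided by the pooled standard deviation; this stands in for (but is not) the study's exact score:

```python
import statistics

def discriminative_scores(rows, labels):
    # Score each feature column by |mean(class 1) - mean(class 0)|
    # divided by the pooled standard deviation. Higher means the
    # feature better separates outcomes on its own.
    n_features = len(rows[0])
    scores = []
    for j in range(n_features):
        positive = [r[j] for r, y in zip(rows, labels) if y == 1]
        negative = [r[j] for r, y in zip(rows, labels) if y == 0]
        spread = statistics.pstdev([r[j] for r in rows]) or 1.0
        scores.append(abs(statistics.mean(positive) - statistics.mean(negative)) / spread)
    return scores

def top_k_features(rows, labels, k):
    # Keep only the k highest-scoring feature indices. The study's
    # point stands even in this sketch: the score ranks candidates,
    # but clinical review of the kept features is still needed.
    scores = discriminative_scores(rows, labels)
    return sorted(range(len(scores)), key=lambda j: -scores[j])[:k]
```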


Subject(s)
Asthma/drug therapy, Asthma/mortality, Machine Learning, Health Knowledge, Attitudes, Practice, Humans, Prognosis
11.
Article in English | MEDLINE | ID: mdl-33880186

ABSTRACT

BACKGROUND: As the scope of scientific questions increases and datasets grow larger, the visualization of relevant information correspondingly becomes more difficult and complex. Sharing visualizations amongst collaborators and with the public can be especially onerous, as it is challenging to reconcile software dependencies, data formats, and specific user needs in an easily accessible package.

RESULTS: We present substrate, a data-visualization framework designed to simplify communication and code reuse across diverse research teams. Our platform provides a simple, powerful, browser-based interface for scientists to rapidly build effective three-dimensional scenes and visualizations. We aim to reduce the limitations of existing systems, which commonly prescribe a limited set of high-level components that are rarely optimized for arbitrarily large data visualization or for custom data types.

CONCLUSIONS: To further engage the broader scientific community and enable seamless integration with existing scientific workflows, we also present pytri, a Python library that bridges the use of substrate with the ubiquitous scientific computing platform, Jupyter. Our intention is to lower the activation energy required to transition between exploratory data analysis, data visualization, and publication-quality interactive scenes.

13.
J Digit Imaging ; 31(3): 315-320, 2018 06.
Article in English | MEDLINE | ID: mdl-29740715

ABSTRACT

Medical imaging analysis depends on the reproducibility of complex computation. Linux containers enable the abstraction, installation, and configuration of environments so that software can be both distributed in self-contained images and used repeatably by tool consumers. While several initiatives in neuroimaging have adopted approaches for creating and sharing more reliable scientific methods and findings, Linux containers are not yet mainstream in clinical settings. We explore related technologies and their efficacy in this setting, highlight important shortcomings, demonstrate a simple use-case, and endorse the use of Linux containers for medical image analysis.


Subject(s)
Diagnostic Imaging/methods, Computer-Assisted Image Processing/methods, Radiology Information Systems, Humans, Reproducibility of Results, Workflow