2.
Adv Anat Embryol Cell Biol ; 219: 263-72, 2016.
Article in English | MEDLINE | ID: mdl-27207370

ABSTRACT

Bioimage informatics is a field in which high-throughput image informatics methods are used to solve challenging scientific problems in biology and medicine. As image datasets grow larger and more complicated, many conventional image analysis approaches are no longer applicable. Here, we discuss two critical challenges of large-scale bioimage informatics applications: data accessibility and adaptive data analysis. We highlight case studies showing that these challenges can be tackled with distributed image computing and with machine learning from image examples in a multidimensional environment.
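The "machine learning from image examples" idea above can be sketched minimally: train a classifier on a few labeled image patches, then apply it to new data. This is an illustrative toy (a nearest-centroid classifier over flattened patches), not the authors' method; all names are ours.

```python
import numpy as np

def train_centroids(patches, labels):
    """Nearest-centroid model: one mean feature vector per class."""
    classes = sorted(set(labels))
    return {c: np.mean([p for p, l in zip(patches, labels) if l == c], axis=0)
            for c in classes}

def classify(patch, centroids):
    """Assign a patch to the class whose centroid is nearest."""
    return min(centroids, key=lambda c: np.linalg.norm(patch - centroids[c]))

# Toy training set: bright patches labeled 1 (signal), dark labeled 0 (background).
train = [np.full(9, 0.9), np.full(9, 0.8), np.full(9, 0.1), np.full(9, 0.2)]
labels = [1, 1, 0, 0]
model = train_centroids(train, labels)
print(classify(np.full(9, 0.85), model))  # → 1
```

In a real adaptive-analysis setting the features and model would be far richer; the point is only that labeled examples, rather than hand-tuned rules, drive the analysis.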


Subjects
Computational Biology/statistics & numerical data; Image Processing, Computer-Assisted/statistics & numerical data; Machine Learning; Molecular Imaging/methods; Computational Biology/methods; Data Interpretation, Statistical; Humans; Image Processing, Computer-Assisted/methods; Microscopy, Fluorescence/instrumentation; Microscopy, Fluorescence/methods; Molecular Imaging/instrumentation; Pattern Recognition, Automated/statistics & numerical data
3.
Front Neuroinform ; 16: 828787, 2022.
Article in English | MEDLINE | ID: mdl-35242021

ABSTRACT

Technological advances in imaging and data acquisition are leading to the development of petabyte-scale neuroscience image datasets. These large-scale volumetric datasets pose unique challenges, since analyses often span the entire volume, requiring a unified platform for access. In this paper, we describe the Brain Observatory Storage Service and Database (BossDB), a cloud-based solution for storing and accessing petascale image datasets. BossDB provides support for data ingest, storage, visualization, and sharing through a RESTful Application Programming Interface (API). A key feature is the scalable indexing of spatial data and of automatic and manual annotations to facilitate data discovery. Our project is open source, can be used easily and cost-effectively for a variety of modalities and applications, and has worked effectively with datasets over a petabyte in size.
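A common building block for the kind of scalable spatial indexing the abstract mentions is a space-filling-curve key, which maps 3D block coordinates to a 1D key so that nearby blocks get nearby keys. The sketch below shows a Z-order (Morton) encoding as one illustration of the idea; it is not a claim about BossDB's internal scheme.

```python
def morton_key(x, y, z, bits=10):
    """Interleave the bits of (x, y, z) into a single Z-order key,
    so blocks that are close in 3D space tend to get nearby keys."""
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (3 * i)      # x bits land at positions 0, 3, 6, ...
        key |= ((y >> i) & 1) << (3 * i + 1)  # y bits at 1, 4, 7, ...
        key |= ((z >> i) & 1) << (3 * i + 2)  # z bits at 2, 5, 8, ...
    return key

print(morton_key(1, 0, 0))  # → 1
print(morton_key(0, 1, 0))  # → 2
print(morton_key(1, 1, 1))  # → 7
```

Range queries over a cutout of the volume then translate into a small number of contiguous key ranges, which is what makes the index scale.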

4.
Gigascience ; 9(12)2020 12 21.
Article in English | MEDLINE | ID: mdl-33347572

ABSTRACT

BACKGROUND: Emerging neuroimaging datasets (collected with imaging techniques such as electron microscopy, optical microscopy, or X-ray microtomography) describe the location and properties of neurons and their connections at unprecedented scale, promising new ways of understanding the brain. These modern imaging techniques used to interrogate the brain can quickly accumulate gigabytes to petabytes of structural brain imaging data. Unfortunately, many neuroscience laboratories lack the computational resources to work with datasets of this size: computer vision tools are often not portable or scalable, and there is considerable difficulty in reproducing results or extending methods. RESULTS: We developed an ecosystem of neuroimaging data analysis pipelines that use open-source algorithms to create standardized modules and end-to-end optimized approaches. As exemplars we apply our tools to estimate synapse-level connectomes from electron microscopy data and cell distributions from X-ray microtomography data. To facilitate scientific discovery, we propose a generalized processing framework, which connects and extends existing open-source projects to provide large-scale data storage, reproducible algorithms, and workflow execution engines. CONCLUSIONS: Our accessible methods and pipelines demonstrate that approaches across multiple neuroimaging experiments can be standardized and applied to diverse datasets. The techniques developed are demonstrated on neuroimaging datasets but may be applied to similar problems in other domains.
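The "standardized modules" composed into "end-to-end optimized approaches" can be pictured as a simple sequential pipeline in which each module consumes the previous module's data product. The sketch below is a generic illustration under our own naming, not the authors' framework.

```python
from typing import Callable, List

def run_pipeline(data, steps: List[Callable]):
    """Apply standardized processing modules in sequence; each module
    takes the current data product and returns the next one."""
    for step in steps:
        data = step(data)
    return data

# Hypothetical stand-ins for real modules such as intensity
# normalization, detection thresholding, and object counting.
normalize = lambda v: [x / max(v) for x in v]
threshold = lambda v: [1 if x > 0.5 else 0 for x in v]
count = lambda v: sum(v)

print(run_pipeline([2, 8, 10, 1], [normalize, threshold, count]))  # → 2
```

Standardizing the interface between steps is what lets the same modules be reused across electron-microscopy and X-ray-microtomography workflows.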


Subjects
Ecosystem; Software; Algorithms; Neuroimaging; Workflow
5.
Front Neuroinform ; 12: 74, 2018.
Article in English | MEDLINE | ID: mdl-30455638

ABSTRACT

Neuroscientists are actively pursuing high-precision maps, or graphs consisting of networks of neurons and connecting synapses in mammalian and non-mammalian brains. Such graphs, when coupled with physiological and behavioral data, are likely to facilitate greater understanding of how circuits in these networks give rise to complex information processing capabilities. Given that the automated or semi-automated methods required to achieve the acquisition of these graphs are still evolving, we developed a metric for measuring the performance of such methods by comparing their output with those generated by human annotators ("ground truth" data). Whereas classic metrics for comparing annotated neural tissue reconstructions generally do so at the voxel level, the metric proposed here measures the "integrity" of neurons based on the degree to which a collection of synaptic terminals belonging to a single neuron of the reconstruction can be matched to those of a single neuron in the ground truth data. The metric is largely insensitive to small errors in segmentation and more directly measures accuracy of the generated brain graph. It is our hope that use of the metric will facilitate the broader community's efforts to improve upon existing methods for acquiring brain graphs. Herein we describe the metric in detail, provide demonstrative examples of the intuitive scores it generates, and apply it to a synthesized neural network with simulated reconstruction errors. Demonstration code is available.
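The core of the "integrity" idea above — scoring a reconstructed neuron by how well its synaptic terminals concentrate on a single ground-truth neuron — can be sketched in a few lines. The paper's metric involves matching in both directions; this toy, with our own function name, shows only the one-sided intuition.

```python
from collections import Counter

def integrity(recon_terminals):
    """For one reconstructed neuron, the fraction of its synaptic
    terminals owned by the single best-matching ground-truth neuron.
    recon_terminals: ground-truth neuron id for each terminal."""
    counts = Counter(recon_terminals)
    best = counts.most_common(1)[0][1]
    return best / len(recon_terminals)

# A reconstruction whose 5 terminals mostly belong to ground-truth neuron "A":
print(integrity(["A", "A", "A", "A", "B"]))  # → 0.8
```

Because the score depends on terminal ownership rather than voxel overlap, a small segmentation wobble that moves no terminals leaves the score unchanged, which is exactly the insensitivity the abstract describes.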

6.
Gigascience ; 6(5): 1-10, 2017 05 01.
Article in English | MEDLINE | ID: mdl-28327935

ABSTRACT

Modern technologies are enabling scientists to collect extraordinary amounts of complex and sophisticated data across a huge range of scales like never before. With this onslaught of data, the focal point can shift from data collection to data analysis. Unfortunately, a lack of standardized sharing mechanisms and practices often makes reproducing or extending scientific results very difficult. With the creation of data organization structures and tools that drastically improve code portability, we now have the opportunity to design such a framework for communicating extensible scientific discoveries. Our proposed solution leverages these existing technologies and standards, and provides an accessible and extensible model for reproducible research, called 'science in the cloud' (SIC). Exploiting scientific containers, cloud computing, and cloud data services, we show the capability to compute in the cloud and run a web service that enables intimate interaction with the tools and data presented. We hope this model will inspire the community to produce reproducible and, importantly, extensible results that will enable us to collectively accelerate the rate at which scientific breakthroughs are discovered, replicated, and extended.


Subjects
Cloud Computing; Science; Connectome; Humans; Image Processing, Computer-Assisted; Internet; Magnetic Resonance Imaging; Software
7.
Annu Int Conf IEEE Eng Med Biol Soc ; 2016: 411-414, 2016 Aug.
Article in English | MEDLINE | ID: mdl-28268360

ABSTRACT

Retinal prosthetic devices can significantly and positively impact the ability of visually challenged individuals to live a more independent life. We describe a visual processing system that leverages image analysis techniques to produce visual patterns and allows the user to perceive their environment more effectively. These patterns are used to stimulate a retinal prosthesis to allow self-guidance and a higher degree of autonomy for the affected individual. Specifically, we describe an image processing pipeline that allows for object and face localization in cluttered environments, as well as various contrast enhancement strategies in the "implanted image." Finally, we describe a real-time implementation and deployment of this system on the Argus II platform. We believe that these advances can significantly improve the effectiveness of the next generation of retinal prostheses.
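One common contrast enhancement strategy of the kind mentioned above is a percentile-based contrast stretch, which remaps the useful intensity range onto the full output range. This is a generic sketch of the technique, not the Argus II pipeline; the function name and percentiles are ours.

```python
import numpy as np

def stretch_contrast(img, lo_pct=2, hi_pct=98):
    """Percentile-based contrast stretch: map the [lo, hi] intensity
    range onto [0, 1], clipping outliers, so that salient structure
    is easier to perceive on a low-resolution implant."""
    lo, hi = np.percentile(img, [lo_pct, hi_pct])
    return np.clip((img - lo) / (hi - lo), 0.0, 1.0)

# A low-contrast patch (intensities 0.4–0.55) stretched to full range:
img = np.array([[0.4, 0.45], [0.5, 0.55]])
out = stretch_contrast(img, 0, 100)
print(out.min(), out.max())  # → 0.0 1.0
```

Clipping at percentiles rather than the raw min/max keeps a few hot pixels from compressing the rest of the image into a narrow band.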


Subjects
Algorithms; Face; Visual Prostheses; Humans; Image Processing, Computer-Assisted; Pattern Recognition, Visual/physiology; Visually Impaired Persons
8.
Front Neuroinform ; 9: 20, 2015.
Article in English | MEDLINE | ID: mdl-26321942

ABSTRACT

Reconstructing a map of neuronal connectivity is a critical challenge in contemporary neuroscience. Recent advances in high-throughput serial section electron microscopy (EM) have produced massive 3D image volumes of nanoscale brain tissue for the first time. The resolution of EM allows for individual neurons and their synaptic connections to be directly observed. Recovering neuronal networks by manually tracing each neuronal process at this scale is unmanageable, and therefore researchers are developing automated image processing modules. Thus far, state-of-the-art algorithms focus only on the solution to a particular task (e.g., neuron segmentation or synapse identification). In this manuscript we present the first fully-automated images-to-graphs pipeline (i.e., a pipeline that begins with an imaged volume of neural tissue and produces a brain graph without any human interaction). To evaluate overall performance and select the best parameters and methods, we also develop a metric to assess the quality of the output graphs. We evaluate a set of algorithms and parameters, searching possible operating points to identify the best available brain graph for our assessment metric. Finally, we deploy a reference end-to-end version of the pipeline on a large, publicly available data set. This provides a baseline result and framework for community analysis and future algorithm development and testing. All code and data derivatives have been made publicly available in support of eventually unlocking new biofidelic computational primitives and understanding of neuropathologies.
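The final step of an images-to-graphs pipeline — turning detected synapses into a brain graph — amounts to aggregating (pre-neuron, post-neuron) detections into a weighted adjacency structure. The sketch below is a minimal illustration under our own naming, not the paper's pipeline code.

```python
from collections import defaultdict

def build_graph(synapses):
    """Aggregate detected synapses, given as (pre_neuron, post_neuron)
    pairs, into a weighted directed adjacency map: the brain graph."""
    graph = defaultdict(lambda: defaultdict(int))
    for pre, post in synapses:
        graph[pre][post] += 1
    return {u: dict(v) for u, v in graph.items()}

# Three detections: two synapses from neuron 1 to 2, one from 2 to 3.
detections = [(1, 2), (1, 2), (2, 3)]
print(build_graph(detections))  # → {1: {2: 2}, 2: {3: 1}}
```

Errors in upstream segmentation or synapse detection propagate into this graph, which is why the paper evaluates candidate pipelines with a graph-level quality metric rather than voxel-level scores alone.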

9.
Article in English | MEDLINE | ID: mdl-24401992

ABSTRACT

We describe a scalable database cluster for the spatial analysis and annotation of high-throughput brain imaging data, initially for 3D electron microscopy image stacks, but for time-series and multi-channel data as well. The system was designed primarily for workloads that build connectomes (neural connectivity maps of the brain) using the parallel execution of computer vision algorithms on high-performance compute clusters. These services and open-science data sets are publicly available at openconnecto.me. The system design inherits much from NoSQL scale-out and data-intensive computing architectures. We distribute data to cluster nodes by partitioning a spatial index. We direct I/O to different systems (reads to parallel disk arrays, writes to solid-state storage) to avoid I/O interference and maximize throughput. All programming interfaces are RESTful Web services, which are simple and stateless, improving scalability and usability. We include a performance evaluation of the production system, highlighting the effectiveness of spatial data organization.
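"Distributing data to cluster nodes by partitioning a spatial index" can be pictured as mapping each fixed-size block of the volume to a node via its block coordinates. The sketch below is a generic stand-in for that idea, not the actual partitioning scheme of the system described; all names and the block size are ours.

```python
def node_for_block(x, y, z, n_nodes, block=64):
    """Assign the data block containing voxel (x, y, z) to a cluster
    node by hashing its block coordinates; every voxel in the same
    64^3 block lands on the same node."""
    bx, by, bz = x // block, y // block, z // block
    return hash((bx, by, bz)) % n_nodes

# Voxels (10,10,10) and (20,20,20) share block (0,0,0), hence a node:
print(node_for_block(10, 10, 10, 4) == node_for_block(20, 20, 20, 4))  # → True
```

Keeping spatially adjacent voxels on one node means a cutout read touches few nodes, while distinct blocks spread across the cluster for parallel throughput.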
