Results 1 - 10 of 10
1.
J Synchrotron Radiat; 25(Pt 4): 1261-1270, 2018 Jul 01.
Article in English | MEDLINE | ID: mdl-29979189

ABSTRACT

Xi-cam is an extensible platform for data management, analysis and visualization. Xi-cam aims to provide a flexible and extensible approach to synchrotron data treatment as a solution to rising demands for high-volume/high-throughput processing pipelines. The core of Xi-cam is an extensible plugin-based graphical user interface platform which provides users with an interactive interface to processing algorithms. Plugins are available for SAXS/WAXS/GISAXS/GIWAXS, tomography and NEXAFS data. With Xi-cam's "advanced" mode, data processing steps are designed as a graph-based workflow, which can be executed live, locally or remotely. Remote execution utilizes high-performance computing or de-localized resources, allowing for the effective reduction of high-throughput data. Xi-cam's plugin-based architecture targets cross-facility and cross-technique collaborative development, in support of multi-modal analysis. Xi-cam is open-source and cross-platform, and available for download on GitHub.
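
A rough Python sketch of the graph-based workflow idea described above might look like the following. The class names (Operation, Workflow) and the example pipeline are illustrative assumptions and do not reflect Xi-cam's actual API; the point is only how named processing steps can be chained and executed locally.

```python
# Minimal, hypothetical sketch of a graph-based processing workflow.
# Not Xi-cam code: Operation and Workflow are illustrative stand-ins.
from typing import Callable, Dict, List

class Operation:
    def __init__(self, name: str, func: Callable, inputs: List[str]):
        self.name, self.func, self.inputs = name, func, inputs

class Workflow:
    """Runs operations in the order added, passing named intermediate results along."""
    def __init__(self):
        self.ops: List[Operation] = []

    def add(self, op: Operation) -> "Workflow":
        self.ops.append(op)
        return self

    def execute(self, **initial: float) -> Dict[str, float]:
        results: Dict[str, float] = dict(initial)
        for op in self.ops:
            results[op.name] = op.func(*(results[i] for i in op.inputs))
        return results

# Example pipeline: dark-field subtraction followed by normalization.
wf = (Workflow()
      .add(Operation("corrected", lambda raw, dark: raw - dark, ["raw", "dark"]))
      .add(Operation("normalized", lambda c: c / 200.0, ["corrected"])))
print(wf.execute(raw=250.0, dark=50.0))
# -> {'raw': 250.0, 'dark': 50.0, 'corrected': 200.0, 'normalized': 1.0}
```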

2.
J Synchrotron Radiat; 24(Pt 5): 1065-1077, 2017 Sep 01.
Article in English | MEDLINE | ID: mdl-28862630

ABSTRACT

Three-dimensional (3D) micro-tomography (µ-CT) has proven to be an important imaging modality in industry and scientific domains. Understanding the properties of material structure and behavior has produced many scientific advances. An important component of the 3D µ-CT pipeline is image partitioning (or image segmentation), a step that is used to separate various phases or components in an image. Image partitioning schemes require specific rules for different scientific fields, but a common strategy consists of devising metrics to quantify performance and accuracy. The present article proposes a set of protocols to systematically analyze and compare the results of unsupervised classification methods used for segmentation of synchrotron-based data. The proposed dataflow for Materials Segmentation and Metrics (MSM) provides 3D micro-tomography image segmentation algorithms, such as statistical region merging (SRM), k-means algorithm and parallel Markov random field (PMRF), while offering different metrics to evaluate segmentation quality, confidence and conformity with standards. Both experimental and synthetic data are assessed, illustrating quantitative results through the MSM dashboard, which can return sample information such as media porosity and permeability. The main contributions of this work are: (i) to deliver tools to improve material design and quality control; (ii) to provide datasets for benchmarking and reproducibility; (iii) to yield good practices in the absence of standards or ground-truth for ceramic composite analysis.
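
The sketch below is a hedged illustration of the kind of unsupervised segmentation step plus quality metric that such a dataflow evaluates: scikit-learn's k-means applied to a synthetic 3D volume, followed by a simple porosity estimate. It is not the MSM implementation, and the synthetic data and thresholds are assumptions for demonstration only.

```python
# Hedged sketch: unsupervised k-means segmentation of a synthetic 3D volume and
# a porosity estimate. Illustrative only; not the MSM dataflow's code.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
volume = rng.normal(loc=0.2, scale=0.05, size=(32, 32, 32))            # "pore" background
volume[8:24, 8:24, 8:24] += rng.normal(0.6, 0.05, size=(16, 16, 16))   # brighter "solid" inclusion

# Cluster voxel intensities into two phases.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    volume.reshape(-1, 1)).reshape(volume.shape)

# Identify the pore phase as the cluster with the lower mean intensity.
pore_label = int(np.argmin([volume[labels == k].mean() for k in (0, 1)]))
porosity = float((labels == pore_label).mean())
print(f"estimated porosity: {porosity:.3f}")   # expected near 0.875 for this toy volume
```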

3.
Sci Rep; 13(1): 3155, 2023 Mar 13.
Article in English | MEDLINE | ID: mdl-36914705

ABSTRACT

A Gaussian Process (GP) is a prominent mathematical framework for stochastic function approximation in science and engineering applications. Its success is largely attributed to the GP's analytical tractability, robustness, and natural inclusion of uncertainty quantification. Unfortunately, the use of exact GPs is prohibitively expensive for large datasets due to their unfavorable numerical complexity of O(N^3) in computation and O(N^2) in storage. All existing methods addressing this issue utilize some form of approximation, usually considering subsets of the full dataset or finding representative pseudo-points that render the covariance matrix well-structured and sparse. These approximate methods can lead to inaccuracies in function approximations and often limit the user's flexibility in designing expressive kernels. Instead of inducing sparsity via data-point geometry and structure, we propose to take advantage of naturally-occurring sparsity by allowing the kernel to discover, rather than induce, sparse structure. The premise of this paper is that the data sets and physical processes modeled by GPs often exhibit natural or implicit sparsities, but commonly-used kernels do not allow us to exploit such sparsity. The core concept of exact, and at the same time sparse, GPs relies on kernel definitions that provide enough flexibility to learn and encode not only non-zero but also zero covariances. This principle of ultra-flexible, compactly-supported, and non-stationary kernels, combined with HPC and constrained optimization, lets us scale exact GPs well beyond 5 million data points.
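
To make the core idea concrete, here is a small, illustrative Python sketch (not the paper's implementation): a compactly supported kernel produces exact zeros in the covariance matrix, so the exact GP linear algebra can use sparse factorizations. The Wendland kernel, the support radius, and the toy 1-D data are assumptions chosen for demonstration.

```python
# Hedged toy example: a compactly supported (Wendland) kernel gives an exactly
# sparse covariance matrix, enabling exact GP regression with sparse solvers.
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import splu

def wendland_kernel(x1, x2, radius=0.05, sigma=1.0):
    """Wendland C2 kernel: exactly zero beyond `radius`, positive definite in 1-3 D."""
    d = np.abs(x1[:, None] - x2[None, :]) / radius
    return sigma**2 * np.where(d < 1.0, (1.0 - d)**4 * (4.0 * d + 1.0), 0.0)

x = np.sort(np.random.default_rng(1).uniform(0.0, 1.0, 2000))
y = np.sin(10 * x) + 0.1 * np.random.default_rng(2).normal(size=x.size)

K = wendland_kernel(x, x) + 1e-4 * np.eye(x.size)      # covariance plus noise jitter
K_sparse = sparse.csc_matrix(K)                         # zeros are dropped, not approximated
print(f"nonzero fraction of K: {K_sparse.nnz / K.size:.3f}")

alpha = splu(K_sparse).solve(y)                         # exact GP weights via sparse LU
x_new = np.array([0.5])
mean = wendland_kernel(x_new, x) @ alpha                # posterior mean at a test point
print(f"posterior mean at 0.5: {mean[0]:.3f}  (true value ~ {np.sin(5.0):.3f})")
```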

4.
Article in English | MEDLINE | ID: mdl-38130938

ABSTRACT

Scientific user facilities present a unique set of challenges for image processing due to the large volume of data generated from experiments and simulations. Furthermore, developing and implementing algorithms for real-time processing and analysis while correcting for any artifacts or distortions in images remains a complex task, given the computational requirements of the processing algorithms. In a collaborative effort across multiple Department of Energy national laboratories, the "MLExchange" project is focused on addressing these challenges. MLExchange is a Machine Learning framework deploying interactive web interfaces to enhance and accelerate data analysis. The platform allows users to easily upload, visualize, and label data, and to train networks. The resulting models can be deployed on real data, while both results and models can be shared with other scientists. The MLExchange web-based application for image segmentation allows for training, testing, and evaluating multiple machine learning models on hand-labeled tomography data. This environment provides users with an intuitive interface for segmenting images using a variety of machine learning algorithms and deep-learning neural networks. Additionally, these tools have the potential to overcome limitations in traditional image segmentation techniques, particularly for complex and low-contrast images.
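
As a hedged illustration of the label-then-train workflow described above, the sketch below fits a classifier on hand-labeled pixels and applies it to the full image. A random forest on simple per-pixel features stands in for the platform's neural networks, and the synthetic image and label thresholds are assumptions for demonstration.

```python
# Hedged sketch of "label a few pixels, train a model, segment the rest".
# Not MLExchange code: a random forest stands in for the deep-learning models.
import numpy as np
from scipy import ndimage
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
image = ndimage.gaussian_filter(rng.normal(size=(128, 128)), sigma=3)   # synthetic image

def pixel_features(img):
    """Simple per-pixel features: raw intensity plus two smoothed versions."""
    return np.stack([img,
                     ndimage.gaussian_filter(img, 1),
                     ndimage.gaussian_filter(img, 4)], axis=-1).reshape(-1, 3)

# Pretend a user hand-labeled only the brightest and darkest pixels.
labels = np.full(image.size, -1)
labels[image.ravel() > 0.08] = 1     # "solid"
labels[image.ravel() < -0.08] = 0    # "pore"
mask = labels >= 0

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(pixel_features(image)[mask], labels[mask])

segmentation = clf.predict(pixel_features(image)).reshape(image.shape)
print("solid fraction:", float(segmentation.mean()))
```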

5.
Article in English | MEDLINE | ID: mdl-38131031

ABSTRACT

Machine learning (ML) algorithms are increasingly helping scientific communities across different disciplines and institutions address large and diverse data problems. However, many available ML tools are programmatically demanding and computationally costly. The MLExchange project aims to build a collaborative platform equipped with enabling tools that allow scientists and facility users who do not have a deep ML background to use ML and computational resources in scientific discovery. At a high level, we are targeting a full user experience in which managing and exchanging ML algorithms, workflows, and data are readily available through web applications. Since each component is an independent container, the whole platform or its individual service(s) can be easily deployed on servers of different scales, ranging from a personal device (laptop, smartphone, etc.) to high-performance computing (HPC) clusters accessed (simultaneously) by many users. Thus, MLExchange supports flexible usage scenarios: users can either access the services and resources from a remote server or run the whole platform or its individual service(s) within their local network.

6.
Article in English | MEDLINE | ID: mdl-38947249

ABSTRACT

Mathematical optimization lies at the core of many science and industry applications. One important issue with many current optimization strategies is a well-known trade-off between the number of function evaluations and the probability to find the global, or at least sufficiently high-quality local optima. In machine learning (ML), and by extension in active learning - for instance for autonomous experimentation - mathematical optimization is often used to find the underlying uncertain surrogate model from which subsequent decisions are made and therefore ML relies on high-quality optima to obtain the most accurate models. Active learning often has the added complexity of missing offline training data; therefore, the training has to be conducted during the data collection which can stall the acquisition if standard methods are used. In this work, we highlight recent efforts to create a high-performance hybrid optimization algorithm (HGDL), combining derivative-free global optimization strategies with local, derivative-based optimization, ultimately yielding an ordered list of unique local optima. Redundancies are avoided by deflating the objective function around earlier encountered optima. HGDL is designed to take full advantage of parallelism by having the most computationally expensive process, the local first and second-order-derivative-based optimizations, run in parallel on separate compute nodes in separate processes. In addition, the algorithm runs asynchronously; as soon as the first solution is found, it can be used while the algorithm continues to find more solutions. We apply the proposed optimization and training strategy to Gaussian-Process-driven stochastic function approximation and active learning.
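
A minimal Python sketch of the multi-start-with-deflation idea follows. It is illustrative only: a Gaussian bump penalty stands in for HGDL's actual deflation operator, scipy's BFGS stands in for the paper's second-order local solvers, and the searches run sequentially rather than asynchronously on separate compute nodes.

```python
# Hedged sketch: collect unique local minima by repeated local searches, while a
# bump penalty (stand-in for deflation) steers later searches away from minima
# already found. The 1-D objective is an arbitrary example.
import numpy as np
from scipy.optimize import minimize

def f(x):
    """A 1-D objective with two local minima (roughly x ~ -1.27 and x ~ +1.18)."""
    return x[0]**4 - 3 * x[0]**2 + 0.5 * x[0]

found = []  # unique local minima discovered so far

def deflated(x, amplitude=10.0, radius=0.3):
    """f plus a Gaussian bump around each minimum already found."""
    penalty = sum(amplitude * np.exp(-np.sum((x - m)**2) / (2 * radius**2)) for m in found)
    return f(x) + penalty

rng = np.random.default_rng(0)
for start in rng.uniform(-2.0, 2.0, size=(8, 1)):
    res = minimize(deflated, start, method="BFGS")     # local search on the deflated objective
    grad_f = 4 * res.x[0]**3 - 6 * res.x[0] + 0.5      # gradient of the *original* objective
    is_new = all(np.linalg.norm(res.x - m) > 0.2 for m in found)
    if res.success and abs(grad_f) < 1e-3 and is_new:  # keep genuine, previously unseen optima
        found.append(res.x)

print("unique local minima:", sorted(round(float(m[0]), 3) for m in found))
```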

7.
Sci Adv; 6(51), 2020 Dec.
Article in English | MEDLINE | ID: mdl-33328228

ABSTRACT

The analysis of chemical states and morphology in nanomaterials is central to many areas of science. We address this need with an ultrahigh-resolution scanning transmission soft x-ray microscope. Our instrument provides multiple analysis tools in a compact assembly and can achieve few-nanometer spatial resolution and high chemical sensitivity via x-ray ptychography and conventional scanning microscopy. A novel scanning mechanism, coupled to advanced x-ray detectors, a high-brightness x-ray source, and high-performance computing for analysis, provides a revolutionary step forward in imaging speed and resolution. We present x-ray microscopy with 8-nm full-period spatial resolution and use this capability in conjunction with operando sample environments and cryogenic imaging, which are now routinely available. Our multimodal approach will find wide use across many fields of science and facilitate correlative analysis of materials with other types of probes.

8.
Nat Commun; 9(1): 921, 2018 Mar 02.
Article in English | MEDLINE | ID: mdl-29500344

ABSTRACT

Battery function is determined by the efficiency and reversibility of the electrochemical phase transformations at solid electrodes. The microscopic tools available to study the chemical states of matter with the required spatial resolution and chemical specificity are intrinsically limited when studying complex architectures by their reliance on two-dimensional projections of thick material. Here, we report the development of soft X-ray ptychographic tomography, which resolves chemical states in three dimensions at 11 nm spatial resolution. We study an ensemble of nano-plates of lithium iron phosphate extracted from a battery electrode at 50% state of charge. Using a set of nanoscale tomograms, we quantify the electrochemical state and resolve phase boundaries throughout the volume of individual nanoparticles. These observations reveal multiple reaction points, intra-particle heterogeneity, and size effects that highlight the importance of multi-dimensional analytical tools in providing novel insight to the design of the next generation of high-performance devices.

9.
Adv Mater; 27(42): 6591-6597, 2015 Nov.
Article in English | MEDLINE | ID: mdl-26423560

ABSTRACT

High-resolution X-ray microscopy is used to investigate the sequence of lithiation in LiFePO4 porous electrodes. For electrodes with homogeneous interparticle electronic connectivity via the carbon black network, the smaller particles lithiate first. For electrodes with heterogeneous connectivity, the better-connected particles preferentially lithiate. Correlative electron and X-ray microscopy also reveal the presence of incoherent nanodomains that lithiate as if they are separate particles.

10.
IEEE Trans Vis Comput Graph; 18(6): 966-977, 2012 Jun.
Article in English | MEDLINE | ID: mdl-21519102

ABSTRACT

Many flow visualization techniques, especially integration-based methods, are problematic when the measured data exhibit noise and discretization issues. This is particularly the case for flow-sensitive phase-contrast magnetic resonance imaging (PC-MRI) data sets, which record not only anatomic information but also time-varying flow information. We propose a novel approach for the visualization of such data sets using integration-based methods. Our ideas are based upon finite-time Lyapunov exponents (FTLE) and enable identification of vessel boundaries in the data as regions of high separation. This allows us to correctly restrict integration-based visualization to blood vessels. We validate our technique by comparing our approach to existing anatomy-based methods as well as addressing the benefits and limitations of using FTLE to restrict flow. We also discuss the importance of parameters, i.e., advection length and data resolution, in establishing a well-defined vessel boundary. We extract appropriate flow lines and surfaces that enable the visualization of blood flow within the vessels. We further enhance the visualization by analyzing flow behavior in the seeded region and generating simplified depictions.
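
For readers unfamiliar with FTLE, the hedged sketch below computes an FTLE field for the standard analytic double-gyre flow: particles are advected to obtain the flow map, and the largest eigenvalue of the Cauchy-Green tensor gives the separation rate. Real PC-MRI data would replace the analytic velocity with measured, time-varying flow; the grid size, integration time, and Euler time stepping here are simplifying assumptions.

```python
# Hedged sketch: finite-time Lyapunov exponent (FTLE) field on the analytic
# 2-D double-gyre flow. Ridges of high FTLE mark strong separation.
import numpy as np

def velocity(x, y, t, A=0.1, eps=0.25, omega=2 * np.pi / 10):
    """Time-dependent double-gyre velocity field on [0,2] x [0,1]."""
    a = eps * np.sin(omega * t)
    b = 1 - 2 * a
    f = a * x**2 + b * x
    dfdx = 2 * a * x + b
    u = -np.pi * A * np.sin(np.pi * f) * np.cos(np.pi * y)
    v = np.pi * A * np.cos(np.pi * f) * np.sin(np.pi * y) * dfdx
    return u, v

def ftle(nx=200, ny=100, T=15.0, dt=0.1):
    x, y = np.meshgrid(np.linspace(0, 2, nx), np.linspace(0, 1, ny))
    px, py = x.copy(), y.copy()
    # Advect a grid of particles with simple Euler steps (RK4 would be more accurate).
    for t in np.arange(0, T, dt):
        u, v = velocity(px, py, t)
        px, py = px + dt * u, py + dt * v
    # Flow-map gradient via finite differences on the advected positions.
    dpx_dy, dpx_dx = np.gradient(px, y[:, 0], x[0])
    dpy_dy, dpy_dx = np.gradient(py, y[:, 0], x[0])
    # Largest eigenvalue of the Cauchy-Green tensor C = F^T F at each grid point.
    c11 = dpx_dx**2 + dpy_dx**2
    c12 = dpx_dx * dpx_dy + dpy_dx * dpy_dy
    c22 = dpx_dy**2 + dpy_dy**2
    lam_max = 0.5 * (c11 + c22) + np.sqrt(0.25 * (c11 - c22)**2 + c12**2)
    return np.log(np.sqrt(lam_max)) / abs(T)

field = ftle()
print("FTLE range:", float(field.min()), "to", float(field.max()))
```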


Subject(s)
Computer Graphics; Image Processing, Computer-Assisted/methods; Magnetic Resonance Angiography/methods; Models, Cardiovascular; Algorithms; Blood Flow Velocity; Humans; Thorax/anatomy & histology; Thorax/blood supply