Results 1 - 4 of 4
1.
Chem Rev ; 124(16): 9633-9732, 2024 Aug 28.
Article in English | MEDLINE | ID: mdl-39137296

ABSTRACT

Self-driving laboratories (SDLs) promise an accelerated application of the scientific method. By automating experimental workflows and planning experiments autonomously, SDLs hold the potential to greatly accelerate research in chemistry and materials discovery. This review provides an in-depth analysis of the state of the art in SDL technology, its applications across various scientific disciplines, and its potential implications for research and industry. It also surveys the enabling technologies for SDLs, including their hardware, software, and integration with laboratory infrastructure. Most importantly, it explores the diverse range of scientific domains where SDLs have made significant contributions, from drug discovery and materials science to genomics and chemistry. We provide a comprehensive review of existing real-world examples of SDLs, their different levels of automation, and the challenges and limitations associated with each domain.

2.
STAR Protoc ; 4(2): 102329, 2023 May 31.
Article in English | MEDLINE | ID: mdl-37267112

ABSTRACT

Learn how to build a Closed-loop Spectroscopy Lab: Light-mixing demo (CLSLab:Light) that performs color matching via RGB LEDs and a light sensor for under 100 USD and less than an hour of setup. Our tutorial covers ordering parts, verifying prerequisites, software setup, sensor mounting, testing, and a comparison of optimization algorithms. We use secure IoT-style communication via MQTT, MicroPython firmware on a pre-soldered Pico W microcontroller, and the self-driving-lab-demo Python package. A video tutorial is available at https://youtu.be/D54yfxRSY6s. For complete details on the use and execution of this protocol, please refer to Baird et al.1.
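The closed loop described above (suggest an RGB setting, measure, compare against a target) can be sketched in plain Python. This is a minimal illustration, not the protocol's actual code: the MQTT transport and the physical light sensor are replaced by a simulated measurement, and the target color and random-search optimizer are hypothetical stand-ins for the package's algorithm comparison.

```python
import random

# Simulated sensor: in the real CLSLab:Light demo the Pico W drives the LEDs
# and reads a light sensor over MQTT; here the measurement is faked so the
# loop runs anywhere. The target RGB triple below is arbitrary.
TARGET = (120, 45, 200)

def measure(rgb):
    """Pretend to set the LEDs to `rgb` and read the sensor back.
    Returns the mean absolute error against the hidden target color."""
    return sum(abs(a - b) for a, b in zip(rgb, TARGET)) / 3.0

def random_search(n_iter=500, seed=0):
    """Minimal optimizer standing in for the tutorial's algorithm comparison:
    propose random RGB settings and keep the best one seen so far."""
    rng = random.Random(seed)
    best_rgb, best_err = None, float("inf")
    for _ in range(n_iter):
        rgb = tuple(rng.randrange(256) for _ in range(3))
        err = measure(rgb)
        if err < best_err:
            best_rgb, best_err = rgb, err
    return best_rgb, best_err

best_rgb, best_err = random_search()
```

In the actual demo, `measure` would publish the RGB command over MQTT and wait for the sensor reading to come back, but the suggest-measure-update structure of the loop is the same.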

3.
Data Brief ; 50: 109487, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37655231

ABSTRACT

In scientific disciplines, benchmarks play a vital role in driving progress. For a benchmark to be effective, it must closely resemble real-world tasks; if its difficulty or relevance is inadequate, it can impede progress in the field. Moreover, benchmarks should have low computational overhead to ensure accessibility and repeatability. The objective is to achieve a kind of "Turing test" by creating a surrogate model that is practically indistinguishable from the ground-truth observation, at least within the dataset's explored boundaries. This objective necessitates a large quantity of data encompassing numerous features characteristic of industry-relevant chemistry and materials science optimization tasks: high levels of noise, multiple fidelities, multiple objectives, linear constraints, non-linear correlations, and failure regions. We performed 494,498 random hard-sphere packing simulations representing 206 CPU days' worth of computational overhead. Simulations required nine input parameters with linear constraints and two discrete fidelities, each with continuous fidelity parameters. The data was logged in a free-tier shared MongoDB Atlas database, producing two core tabular datasets: a failure probability dataset and a regression dataset. The failure probability dataset maps unique input parameter sets to the estimated probabilities that the simulation will fail. The regression dataset maps input parameter sets (including repeats) to particle packing fractions and computational runtimes for each of the two steps. These two datasets were used to create a surrogate model as close as possible to running the actual simulations by incorporating simulation failure and heteroskedastic noise. In the regression dataset, percentile ranks were calculated for each group of identical parameter sets to account for heteroskedastic noise, thereby ensuring reliable and accurate data. This differs from the conventional approach, which imposes a priori assumptions such as Gaussian noise by specifying a mean and standard deviation. The technique can be extended to other benchmark datasets to bridge the gap between optimization benchmarks with low computational overhead and the complex optimization scenarios encountered in the real world.
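The within-group percentile-rank step can be sketched as follows. The parameter sets and packing-fraction repeats below are hypothetical (the real data live in the MongoDB Atlas datasets); the point is only the rank transform itself, which characterizes each group's noise nonparametrically instead of fitting a Gaussian.

```python
import numpy as np

# Hypothetical repeated observations: each key is one input parameter set,
# each value the packing fractions measured across repeat simulations.
groups = {
    ("a",): [0.61, 0.64, 0.58, 0.66],
    ("b",): [0.70, 0.70, 0.69],
}

def percentile_ranks(values):
    """Rank each repeat within its own group and map ranks to (0, 1).
    Using (rank + 0.5) / n keeps every rank strictly inside the unit
    interval; ties are broken by input order."""
    ranks = np.argsort(np.argsort(values))  # 0-based rank of each repeat
    return (ranks + 0.5) / len(values)

ranks = {k: percentile_ranks(v) for k, v in groups.items()}
```

A surrogate trained on such ranks can then emit noise by sampling a quantile per prediction rather than a mean plus Gaussian perturbation, which preserves heteroskedastic and non-Gaussian behavior.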

4.
MethodsX ; 9: 101731, 2022.
Article in English | MEDLINE | ID: mdl-35664040

ABSTRACT

We present a method for performing efficient barycentric interpolation for large grain boundary octonion point sets that reside on the surface of a hypersphere. This method includes removal of degenerate dimensions via singular value decomposition (SVD) transformations and linear projections, determination of intersecting facets via nearest neighbor (NN) searches, and interpolation. The method is useful for hyperspherical point sets in applications such as grain boundary structure-property models, robotics, and specialized neural networks. We provide a case study of the method applied to the 7-sphere, along with 1-sphere and 2-sphere visualizations to illustrate important aspects of these dimension reduction and interpolation methods. A MATLAB implementation is available at github.com/sgbaird-5dof/interp.
• Barycentric interpolation is combined with hypersphere facet intersections, dimensionality reduction, and linear projections to reduce computational complexity without loss of information.
• A maximum nearest-neighbor threshold is used in conjunction with facet intersection determination to reduce computational runtime.
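The core barycentric step can be illustrated on a toy facet. This is not the MATLAB implementation from the repository: it is a minimal NumPy sketch in which a triangle in 3-D stands in for a hypersphere facet already located by the nearest-neighbor search, and the property values attached to the vertices are made up.

```python
import numpy as np

# Toy facet: a 2-simplex (triangle) embedded in 3-D, standing in for a
# hypersphere facet found by the NN search. `vals` are hypothetical
# property values attached to each facet vertex.
verts = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])
vals = np.array([10.0, 20.0, 40.0])

def barycentric_interp(p, verts, vals):
    """Solve verts.T @ w = p subject to sum(w) = 1 for the barycentric
    weights w, then blend the vertex values with those weights."""
    n = len(verts)
    A = np.vstack([verts.T, np.ones(n)])       # append the sum-to-one row
    b = np.append(p, 1.0)
    w, *_ = np.linalg.lstsq(A, b, rcond=None)  # least squares: system is over-determined
    return float(w @ vals), w

p = verts.mean(axis=0)  # centroid, so the weights should all be 1/3
value, w = barycentric_interp(p, verts, vals)
```

Negative weights in `w` would indicate that `p` falls outside the facet, which is the signal the facet-intersection search uses to move on to another candidate facet.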
