Results 1 - 20 of 315
1.
Proc Natl Acad Sci U S A ; 121(31): e2404676121, 2024 Jul 30.
Article in English | MEDLINE | ID: mdl-39042681

ABSTRACT

This work establishes a new paradigm for digital molecular spaces and their efficient navigation by exploiting sigma profiles. To do so, the remarkable capability of Gaussian processes (GPs), a type of stochastic machine learning model, to correlate and predict physicochemical properties from sigma profiles is demonstrated, outperforming previously published state-of-the-art neural networks. The amount of chemical information encoded in sigma profiles eases the learning burden of machine learning models, permitting GPs to be trained on small datasets; owing to their negligible computational cost and ease of implementation, GPs are ideal models to combine with optimization tools such as gradient search or Bayesian optimization (BO). Gradient search is used to efficiently navigate the sigma profile digital space, quickly converging to local extrema of target physicochemical properties. While this requires GP models pretrained on existing datasets, such limitations are eliminated by BO, which can find global extrema within a limited number of iterations. A striking example is BO of boiling temperature. Holding no knowledge of chemistry except for the sigma profile and boiling temperature of carbon monoxide (the worst possible initial guess), BO finds the global maximum of the available boiling temperature dataset (over 1,000 molecules encompassing more than 40 families of organic and inorganic compounds) in just 15 iterations (i.e., 15 property measurements), cementing sigma profiles as a powerful digital chemical space for molecular optimization and discovery, particularly when little to no experimental data is initially available.
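To make the mechanics concrete, the sketch below fits a scikit-learn Gaussian process to descriptor vectors standing in for sigma profiles and runs a minimal expected-improvement loop over a discrete candidate pool; the 51-bin descriptors, the synthetic property, and the pool size are placeholders, not the authors' data or code.

```python
# Minimal sketch of GP regression + expected-improvement BO over a discrete
# pool of descriptor vectors (stand-ins for sigma profiles). Synthetic data;
# not the authors' dataset or implementation.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)
pool = rng.random((500, 51))                 # hypothetical 51-bin "sigma profiles"
true_prop = pool @ rng.normal(size=51)       # hypothetical target property (e.g., Tb)

observed = list(rng.choice(len(pool), size=3, replace=False))  # small initial design
for it in range(15):                                           # 15 "property measurements"
    X, y = pool[observed], true_prop[observed]
    gp = GaussianProcessRegressor(ConstantKernel() * RBF(), normalize_y=True)
    gp.fit(X, y)
    mu, sigma = gp.predict(pool, return_std=True)
    best = y.max()
    z = (mu - best) / np.maximum(sigma, 1e-12)
    ei = (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)       # expected improvement
    ei[observed] = -np.inf                                     # do not re-measure
    observed.append(int(np.argmax(ei)))

print("best property found:", true_prop[observed].max())
```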

2.
Brief Bioinform ; 24(1)2023 Jan 19.
Article in English | MEDLINE | ID: mdl-36562723

ABSTRACT

Directed protein evolution applies repeated rounds of genetic mutagenesis and phenotypic screening and is often limited by experimental throughput. Through in silico prioritization of mutant sequences, machine learning has been applied to reduce the wet-lab burden to a level practical for human researchers. Robotics, on the other hand, permits large batches and rapid iterations for protein engineering cycles, but such capacity has not been well exploited in existing machine-learning-assisted directed evolution approaches. Here, we report a scalable and batched method, the Bayesian Optimization-guided EVOlutionary (BO-EVO) algorithm, to guide multiple rounds of robotic experiments exploring the protein fitness landscapes of combinatorial mutagenesis libraries. We first examined various design specifications based on an empirical landscape of protein G domain B1. BO-EVO was then successfully generalized to another empirical landscape, that of the Escherichia coli kinase PhoQ, as well as to simulated NK landscapes with up to moderate epistasis. The approach was then applied to guide robotic library creation and screening to engineer the enzyme specificity of RhlA, a key biosynthetic enzyme for rhamnolipid biosurfactants. A 4.8-fold improvement in producing a target rhamnolipid congener was achieved within four iterations, after examining less than 1% of all possible mutants. Overall, BO-EVO proves to be an efficient and general approach for guiding combinatorial protein engineering without prior knowledge.
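A minimal sketch of the batched idea, under stated assumptions: a Gaussian process surrogate over one-hot-encoded variants proposes a whole 96-well batch per round by greedily taking the top upper-confidence-bound scores (a simple stand-in for a proper batch acquisition function). The additive toy landscape and encoding are not the BO-EVO implementation.

```python
# Hedged sketch of batched Bayesian optimization over a combinatorial library.
# Synthetic fitness landscape; not the BO-EVO implementation.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(1)
n_sites, n_aa = 4, 20                                    # 4 mutated sites, 20 amino acids
library = rng.integers(0, n_aa, size=(5000, n_sites))    # sampled variants
onehot = np.eye(n_aa)[library].reshape(len(library), -1)
fitness = onehot @ rng.normal(size=onehot.shape[1])      # additive toy landscape

measured = list(rng.choice(len(library), size=96, replace=False))  # initial plate
for round_ in range(4):                                  # 4 robotic rounds
    gp = GaussianProcessRegressor(Matern(nu=2.5), normalize_y=True)
    gp.fit(onehot[measured], fitness[measured])
    mu, sd = gp.predict(onehot, return_std=True)
    ucb = mu + 2.0 * sd                                  # upper confidence bound
    ucb[measured] = -np.inf                              # skip already-screened variants
    batch = np.argsort(ucb)[-96:]                        # next 96-well batch
    measured.extend(batch.tolist())

print("best variant fitness:", fitness[measured].max())
```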


Subjects
Protein Engineering, Proteins, Humans, Bayes Theorem, Proteins/genetics, Biological Evolution, Algorithms
3.
Brief Bioinform ; 24(3)2023 05 19.
Article in English | MEDLINE | ID: mdl-36935112

ABSTRACT

Cardiac conduction disease is a major cause of morbidity and mortality worldwide. Early detection of these diseases is of considerable clinical significance and is increasingly needed so that preventive treatment can succeed before more severe arrhythmias occur. However, developing such early screening tools is challenging because of the lack of early electrocardiograms (ECGs) recorded before symptoms occur in patients. Mouse models are widely used in cardiac arrhythmia research. The goal of this paper is to develop deep learning models that predict cardiac conduction diseases in mice from their early ECGs. We hypothesize that mutant mice exhibit subtle abnormalities in their early ECGs before severe arrhythmias develop. These subtle patterns can be detected by deep learning even though they are difficult to identify by eye. We propose a deep transfer learning model, DeepMiceTL, which leverages knowledge from human ECGs to learn mouse ECG patterns. We further apply Bayesian optimization and k-fold cross-validation to tune the hyperparameters of DeepMiceTL. Our results show that DeepMiceTL achieves promising performance (F1-score: 83.8%, accuracy: 84.8%) in predicting the occurrence of cardiac conduction diseases from early mouse ECGs. This study is among the first efforts to use state-of-the-art deep transfer learning to identify ECG patterns during the early course of cardiac conduction disease in mice. Our approach not only could help cardiac conduction disease research in mice but also suggests that early clinical diagnosis of human cardiac conduction diseases and other cardiac arrhythmias using deep transfer learning may be feasible in the future.
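As a rough illustration of pairing Bayesian optimization with k-fold cross-validation for hyperparameter tuning, the sketch below uses scikit-optimize's gp_minimize around a generic gradient-boosting classifier; the classifier, search space, and synthetic data are stand-ins for DeepMiceTL and the ECG dataset.

```python
# Sketch of Bayesian optimization + k-fold cross-validation for hyperparameter
# tuning, with a generic classifier standing in for the transfer-learning model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from skopt import gp_minimize
from skopt.space import Integer, Real

X, y = make_classification(n_samples=600, n_features=40, random_state=0)

def objective(params):
    lr, depth, n_est = params
    model = GradientBoostingClassifier(learning_rate=lr, max_depth=depth,
                                       n_estimators=n_est, random_state=0)
    # 5-fold CV F1 score; gp_minimize minimizes, so return the negative.
    return -cross_val_score(model, X, y, cv=5, scoring="f1").mean()

space = [Real(1e-3, 0.3, prior="log-uniform", name="learning_rate"),
         Integer(2, 6, name="max_depth"),
         Integer(50, 400, name="n_estimators")]

result = gp_minimize(objective, space, n_calls=25, random_state=0)
print("best CV F1:", -result.fun, "at", result.x)
```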


Subjects
Cardiac Arrhythmias, Electrocardiography, Humans, Animals, Mice, Bayes Theorem, Cardiac Arrhythmias/diagnosis, Cardiac Arrhythmias/genetics, Cardiac Arrhythmias/epidemiology, Electrocardiography/adverse effects, Research Design, Machine Learning
4.
Hum Brain Mapp ; 45(5): e26638, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38520365

ABSTRACT

Connectome spectrum electromagnetic tomography (CSET) combines diffusion-MRI-derived structural connectivity data with well-established graph signal processing tools to solve the M/EEG inverse problem. Using simulated EEG signals derived from fMRI responses and two EEG datasets on visual-evoked potentials, we provide evidence that (i) CSET captures realistic neurophysiological patterns with better accuracy than state-of-the-art methods, and (ii) CSET reconstructs brain responses more accurately and with greater robustness to intrinsic noise in the EEG signal. These results demonstrate that CSET offers high spatio-temporal accuracy, enabling neuroscientists to extend their research beyond the current limitations of low sampling frequency in functional MRI and the poor spatial resolution of M/EEG.


Subjects
Connectome, Humans, Connectome/methods, Electroencephalography/methods, Brain/diagnostic imaging, Brain/physiology, Magnetic Resonance Imaging/methods, Electromagnetic Phenomena
5.
Small ; 20(2): e2304437, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37691013

ABSTRACT

Bioinspired fibrillar structures are promising for a wide range of disruptive adhesive applications. In particular, the micro/nanofibrillar structures on gecko toes provide strong and controllable adhesion and shear on a wide range of surfaces, with residue-free, repeatable, self-cleaning, and other unique features. Synthetic dry fibrillar adhesives inspired by such biological fibrils are optimized in different respects to increase their performance. Previous fibril designs for shear optimization have been limited to predefined standard shapes in a narrow range, chosen primarily by human intuition, which restricts their maximum performance. This study combines machine-learning-based optimization with finite-element shear mechanics simulations to find shear-optimized fibril designs automatically. In addition, fabrication limitations are integrated into the simulations to make the results more experimentally relevant. The computationally discovered shear-optimized structures are fabricated, experimentally validated, and compared with the simulations. The results show that the computed shear-optimized fibrils perform better than the predefined standard fibril designs. This design optimization method can be used in future real-world shear-based gripping or nonslip surface applications, such as robotic pick-and-place grippers, climbing robots, gloves, electronic devices, and medical and wearable devices.

6.
Magn Reson Med ; 91(6): 2358-2373, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38193277

ABSTRACT

PURPOSE: Spoke pulses improve excitation homogeneity in parallel-transmit MRI. We propose an efficient global optimization algorithm, Bayesian optimization of gradient trajectory (BOGAT), for single-slice and simultaneous multislice imaging. THEORY AND METHODS: BOGAT adds an outer loop that optimizes the kT-space positions. For each position, the RF coefficients are optimized (e.g., with magnitude least squares) and the cost function is evaluated. Bayesian optimization progressively estimates the cost function and automatically chooses which kT-space positions to sample, achieving fast convergence and often coming close to the globally optimal spoke positions. We investigated the typical features of spokes cost functions by a grid search with field maps comprising 85 slabs from 14 volunteers. We tested BOGAT on this database, and prospectively in a phantom and in vivo, comparing it with the vendor-provided Fourier transform approach while using the same magnitude least squares RF optimizer. RESULTS: The cost function is nonconvex and is seen empirically to be piecewise smooth, with discontinuities where the underlying RF optimum changes sharply. BOGAT converged to within 10% of the global minimum cost within 30 iterations in 93% of slices in our database. BOGAT achieved up to 56% lower flip-angle RMS error (RMSE) or 55% lower pulse energy in phantoms versus the Fourier transform approach, and up to 30% lower RMSE and 29% lower energy in vivo, with 7.8 s of extra computation. CONCLUSION: BOGAT efficiently estimated near-globally optimal spoke positions for the two-spoke tests, reducing flip-angle RMSE and/or pulse energy within a computation time (~10 s) suitable for online optimization.
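The nested structure described above can be sketched as follows: an outer Bayesian optimization (here via scikit-optimize) over the four coordinates of two kT-space positions, with an inner least-squares RF solve per candidate. The B1+ maps, voxel geometry, bounds, and the use of plain least squares in place of magnitude least squares are all simplifying assumptions, not the BOGAT implementation.

```python
# Hedged sketch of an outer BO loop over two-spoke kT-space positions with an
# inner least-squares RF-shim solve per candidate. Random placeholder B1+ maps;
# plain least squares stands in for magnitude least squares.
import numpy as np
from skopt import gp_minimize
from skopt.space import Real

rng = np.random.default_rng(2)
n_vox, n_ch = 400, 8
r = rng.uniform(-0.1, 0.1, size=(n_vox, 2))                  # voxel positions (m)
b1 = rng.normal(size=(n_vox, n_ch)) + 1j * rng.normal(size=(n_vox, n_ch))
target = np.ones(n_vox)                                      # uniform flip-angle target

def spoke_cost(k):
    kx1, ky1, kx2, ky2 = k                                   # two kT-space positions
    phases = np.exp(1j * (r @ np.array([[kx1, kx2], [ky1, ky2]])))   # (n_vox, 2)
    A = np.hstack([b1 * phases[:, [0]], b1 * phases[:, [1]]])        # (n_vox, 2*n_ch)
    w, *_ = np.linalg.lstsq(A, target, rcond=None)           # inner RF solve
    return float(np.sqrt(np.mean(np.abs(A @ w - target) ** 2)))      # flip-angle RMSE

space = [Real(-80.0, 80.0) for _ in range(4)]                # rad/m, placeholder bounds
res = gp_minimize(spoke_cost, space, n_calls=30, random_state=0)
print("best RMSE:", res.fun, "at kT positions:", res.x)
```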


Subjects
Algorithms, Magnetic Resonance Imaging, Humans, Bayes Theorem, Magnetic Resonance Imaging/methods, Imaging Phantoms, Least-Squares Analysis, Brain/diagnostic imaging
7.
Chemphyschem ; 25(16): e202300850, 2024 Aug 19.
Article in English | MEDLINE | ID: mdl-38763901

ABSTRACT

The discovery and optimization of novel nanoporous materials (NPMs) such as metal-organic frameworks (MOFs) and covalent organic frameworks (COFs) are crucial for addressing global challenges like climate change, energy security, and environmental degradation. Traditional experimental approaches for optimizing these materials are time-consuming and resource-intensive. This paper presents a strategy using Bayesian optimization (BO) to efficiently navigate the complex design spaces of NPMs for gas storage applications. For a MOF dataset drawn from 19 different sources, we present a quantitative evaluation of BO over a curated set of surrogate-model/acquisition-function pairs. We used machine learning (ML) techniques to conduct regression analysis with many candidate models and identified the three most accurate, including the conventional Gaussian process (GP) model, which were then chosen as surrogates. We found that a GP with expected improvement (EI) as the acquisition function, but without the gamma prior that is standard in the BoTorch Bayesian optimization Python library, outperforms the other surrogate models. Notably, the ML model that best predicts the target variable is not necessarily the most suitable surrogate for BO; this observation is significant and warrants further investigation. This comprehensive framework accelerates the pace of materials discovery and addresses urgent needs in energy storage and environmental sustainability. Rather than identifying new MOFs, BO primarily enhances computational efficiency by reducing the reliance on more demanding calculations, such as those involved in grand canonical Monte Carlo (GCMC) or density functional theory (DFT).
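A sketch of the GP-plus-EI setup referenced above in BoTorch, following the library's standard tutorial pattern (exact call names vary across versions, e.g. fit_gpytorch_mll vs. fit_gpytorch_model); the descriptor dimensionality and the placeholder objective are assumptions, and the default SingleTaskGP priors are kept rather than modified.

```python
# Hedged BoTorch sketch: GP surrogate + analytic expected improvement.
# Placeholder "MOF descriptors" and objective; not the paper's dataset.
import torch
from botorch.models import SingleTaskGP
from botorch.fit import fit_gpytorch_mll
from botorch.acquisition import ExpectedImprovement
from botorch.optim import optimize_acqf
from gpytorch.mlls import ExactMarginalLogLikelihood

torch.manual_seed(0)
train_X = torch.rand(20, 5, dtype=torch.double)        # 5 placeholder MOF descriptors
train_Y = train_X.sum(dim=-1, keepdim=True)            # placeholder uptake objective

gp = SingleTaskGP(train_X, train_Y)                    # default priors kept; changing the
mll = ExactMarginalLogLikelihood(gp.likelihood, gp)    # lengthscale prior would require a
fit_gpytorch_mll(mll)                                  # custom covar_module

ei = ExpectedImprovement(gp, best_f=train_Y.max())
bounds = torch.stack([torch.zeros(5), torch.ones(5)]).double()
candidate, acq_val = optimize_acqf(ei, bounds=bounds, q=1,
                                   num_restarts=5, raw_samples=64)
print("next candidate descriptor to evaluate:", candidate)
```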

8.
Biotechnol Bioeng ; 121(5): 1569-1582, 2024 May.
Article in English | MEDLINE | ID: mdl-38372656

ABSTRACT

Optimizing complex bioprocesses poses a significant challenge in several fields, particularly in cell therapy manufacturing. The development of customized, closed, and automated processes is crucial for their industrial translation and for addressing large patient populations at a sustainable price. Limited understanding of the underlying biological mechanisms, coupled with highly resource-intensive experimentation, are two factors that make the development of these next-generation processes challenging. Bayesian optimization (BO) is an iterative experimental design methodology that addresses these challenges, but it has not been extensively tested in situations that require parallel experimentation with significant experimental variability. In this study, we evaluate noisy, parallel BO for increasing noise levels and parallel batch sizes on two in silico bioprocesses and compare it to the industry state of the art. As an in vitro showcase, we apply the method to the optimization of a monocyte purification unit operation. The in silico results show that BO significantly outperforms the state of the art, requiring approximately 50% fewer experiments on average. This study highlights the potential of noisy, parallel BO as a valuable tool for cell therapy process development and optimization.


Subjects
Cell- and Tissue-Based Therapy, Research Design, Humans, Bayes Theorem
9.
Philos Trans A Math Phys Eng Sci ; 382(2279): 20230364, 2024 Sep 23.
Article in English | MEDLINE | ID: mdl-39129401

ABSTRACT

Locally resonant metamaterials (LRMs) have recently emerged in the search for lightweight noise and vibration solutions. These materials can create stop bands, which arise from the sub-wavelength addition of identical resonators to a host structure and result in strong vibration attenuation. However, their manufacturing inevitably introduces variability, such that the system as manufactured often deviates significantly from the original design. This can reduce attenuation performance, but may also broaden the attenuation band. This work focuses on the impact of variability, within tolerance ranges, of resonator properties on the vibration attenuation of metamaterial beams. Following a qualitative pre-study, two non-intrusive uncertainty propagation approaches are applied to find the upper and lower bounds of three performance metrics by evaluating deterministic metamaterial models with uncertain parameters defined as interval variables. A global search approach is used and compared with a machine learning (ML)-based uncertainty propagation approach, which significantly reduces the required number of simulations. Variability in resonator stiffnesses and masses is found to have the greatest impact; variability in resonator positions has a comparable impact only for less deeply sub-wavelength designs. The broadening potential of varying resonator properties is exploited in broadband optimization, and the robustness of the optimized metamaterial is assessed. This article is part of the theme issue 'Current developments in elastic and acoustic metamaterials science (Part 2)'.

10.
Environ Res ; 249: 118378, 2024 May 15.
Article in English | MEDLINE | ID: mdl-38311206

ABSTRACT

Since the second industrial revolution, mining and metallurgical processes have generated large volumes of tailings and mine waste (TMW), worsening global environmental pollution. Studying the occurrence of metal and metalloid elements in TMW is an effective approach to evaluating pollution linked to TMW. However, traditional laboratory-based measurements are complicated and time-consuming; thus, an empirical method is urgently needed that can rapidly and accurately determine elemental occurrence forms. In this study, a model combining Bayesian optimization and random forest (RF) approaches was proposed to predict TMW occurrence forms. To build the RF model, a dataset of 2376 samples was obtained, with mineral composition, elemental properties, and total concentration composition used as inputs and the percentage of occurrence forms as the model output. The correlation coefficient (R), coefficient of determination, mean absolute error, root mean squared error, and root mean squared logarithmic error were used for model evaluation. After Bayesian optimization, the optimal RF model achieved accurate predictive performance, with R values of 0.99 and 0.965 on the training and test sets, respectively. Feature significance was analyzed using feature importance and Shapley additive explanation values, which revealed that the electronegativity and total concentration of the elements were the two features with the greatest influence on the model output. As the electronegativity of an element increases, its corresponding residual fraction content gradually decreases, because solubility typically increases with the solvent's polarity and electronegativity. Overall, this study proposes an RF model based on the nature of TMW that can rapidly and accurately predict the percentage values of metal and metalloid element occurrence forms in TMW. This method can minimize testing time requirements, help assess TMW pollution risks, and further promote safe TMW management and recycling.
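For flavor, a hedged sketch of Bayesian-optimized random-forest regression using scikit-optimize's BayesSearchCV; the synthetic features and target stand in for the TMW mineralogy/elemental inputs and occurrence-form percentages, and the search space is illustrative.

```python
# Sketch of Bayesian-optimized random-forest regression on synthetic data
# (the TMW dataset, features, and targets are not reproduced here).
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from skopt import BayesSearchCV
from skopt.space import Integer

X, y = make_regression(n_samples=1000, n_features=12, noise=5.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

search = BayesSearchCV(
    RandomForestRegressor(random_state=0),
    {"n_estimators": Integer(100, 600),
     "max_depth": Integer(3, 30),
     "min_samples_leaf": Integer(1, 10)},
    n_iter=25, cv=5, scoring="r2", random_state=0)
search.fit(X_tr, y_tr)

best_rf = search.best_estimator_
print("test R^2:", best_rf.score(X_te, y_te))
print("feature importances:", best_rf.feature_importances_.round(3))
```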


Subjects
Artificial Intelligence, Bayes Theorem, Mining, Industrial Waste/analysis, Environmental Monitoring/methods, Metals/analysis
11.
Article in English | MEDLINE | ID: mdl-38941056

ABSTRACT

Forward addition/backward elimination (FABE) has been the standard for population pharmacokinetic (PPK) model selection since NONMEM® was introduced. We investigated five machine learning (ML) algorithms (genetic algorithm [GA], Gaussian process [GP], random forest [RF], gradient-boosted random tree [GBRT], and particle swarm optimization [PSO]) as alternatives to FABE. These algorithms were applied to PPK model selection with a focus on comparing their efficiency and robustness. Each ML algorithm was combined with a local downhill search, which consisted of systematically changing one or two "features" at a time (a one-bit or a two-bit local search), alternating with the ML method. An exhaustive search (all possible combinations of model features, N = 1,572,864 models) was the gold standard for robustness, and the number of models examined prior to identification of the final model was the metric for efficiency. All algorithms identified the optimal model when combined with the two-bit local downhill search. GA, RF, GBRT, and GP identified the optimal model with only a one-bit local search; PSO required the two-bit local downhill search. In our analysis, GP was the most efficient algorithm as measured by the number of models examined before finding the optimum (495 models), whereas PSO was the least efficient, requiring 1710 unique models before finding the best solution. However, GP also needed the longest elapsed time, 2975.6 min, compared with GA, which required only 321.8 min.
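The one-bit/two-bit local downhill search is simple to sketch; below, a toy binary feature vector and a placeholder objective stand in for a population-PK model specification and its fit criterion.

```python
# Hedged sketch of a one-bit / two-bit local downhill search over binary
# "model features". The objective is a placeholder for an actual PPK model
# fit criterion (e.g., a penalized -2LL from NONMEM), not the paper's code.
import itertools
import numpy as np

rng = np.random.default_rng(3)
n_features = 12
weights = rng.normal(size=n_features)          # toy contribution of each feature

def objective(bits):
    # Placeholder "model fitness": lower is better.
    return float(-np.dot(bits, weights) + 0.5 * bits.sum())

def local_downhill(bits, n_bits=2):
    """Flip up to n_bits features at a time; accept any improving move."""
    best, best_obj = bits.copy(), objective(bits)
    improved = True
    while improved:
        improved = False
        for k in range(1, n_bits + 1):
            for idx in itertools.combinations(range(n_features), k):
                cand = best.copy()
                cand[list(idx)] ^= 1            # flip k bits
                obj = objective(cand)
                if obj < best_obj:
                    best, best_obj, improved = cand, obj, True
    return best, best_obj

start = rng.integers(0, 2, size=n_features)
model, score = local_downhill(start, n_bits=2)
print("selected features:", model, "objective:", score)
```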

12.
Chem Pharm Bull (Tokyo) ; 72(6): 529-539, 2024.
Article in English | MEDLINE | ID: mdl-38839372

ABSTRACT

Lipid nanoparticles (LNPs), used in mRNA vaccines against severe acute respiratory syndrome coronavirus 2, protect mRNA and deliver it into cells, making them an essential delivery technology for RNA medicine. The LNP manufacturing process consists of two steps: an upstream process of preparing the LNPs and a downstream process of removing ethanol (EtOH) and exchanging buffers. Generally, a microfluidic device is used in the upstream process and a dialysis membrane in the downstream process. However, both processes involve many parameters, and it is difficult to determine how variations in these manufacturing parameters affect LNP quality and to establish a manufacturing process that yields high-quality LNPs. This study focused on manufacturing mRNA-LNPs using a microfluidic device. Extreme gradient boosting (XGBoost), a machine learning technique, identified EtOH concentration (flow-rate ratio), buffer pH, and total flow rate as the process parameters that significantly affected particle size and encapsulation efficiency. Based on these results, we derived manufacturing conditions for LNPs of different particle sizes (approximately 80 and 200 nm) using Bayesian optimization. In addition, the particle size of the LNPs significantly affected the protein expression level of the mRNA in cells. The findings of this study are expected to provide useful information enabling the rapid and efficient development of mRNA-LNP manufacturing processes using microfluidic devices.
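A short sketch of the XGBoost step, ranking the three process parameters by importance on synthetic data; the parameter ranges and the particle-size response are invented placeholders, not the LNP dataset.

```python
# Sketch of using XGBoost to rank process parameters by importance.
# Synthetic process data; not the study's measurements.
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(4)
n = 300
etoh_ratio = rng.uniform(0.2, 0.5, n)     # EtOH concentration (flow-rate ratio)
buffer_ph = rng.uniform(3.5, 6.5, n)
total_flow = rng.uniform(2.0, 12.0, n)    # mL/min, placeholder range
X = np.column_stack([etoh_ratio, buffer_ph, total_flow])
# Placeholder particle-size response with noise.
size_nm = 60 + 250 * etoh_ratio - 8 * buffer_ph + 3 * total_flow + rng.normal(0, 5, n)

model = xgb.XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05)
model.fit(X, size_nm)

for name, imp in zip(["EtOH ratio", "buffer pH", "total flow"],
                     model.feature_importances_):
    print(f"{name}: importance {imp:.3f}")
```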


Subjects
Lipids, Machine Learning, Nanoparticles, Particle Size, Messenger RNA, Nanoparticles/chemistry, Lipids/chemistry, Humans, SARS-CoV-2/genetics, Ethanol/chemistry, Bayes Theorem, Lab-On-A-Chip Devices, Liposomes
13.
Ultrason Imaging ; 46(1): 17-28, 2024 01.
Article in English | MEDLINE | ID: mdl-37981781

ABSTRACT

Efficient Neural Architecture Search (ENAS) is a recent development in searching for optimal cell structures in convolutional neural network (CNN) design. It has been used successfully in various applications, including ultrasound image classification for breast lesions. However, the existing ENAS approach optimizes only the cell structures, not the whole CNN architecture or its trainable hyperparameters. This paper presents a novel framework for the automatic design of CNN architectures that combines the strengths of ENAS and Bayesian optimization in two stages. First, we use ENAS to search for optimal normal and reduction cells. Second, with the optimal cells and a suitable hyperparameter search space, we adopt Bayesian optimization to find the optimal depth of the network and the optimal configuration of the trainable hyperparameters. To test the validity of the proposed framework, a dataset of 1522 breast lesion ultrasound images is used for searching and modeling. We then evaluate the robustness of the proposed approach by testing the optimized CNN model on three external datasets consisting of 727 benign and 506 malignant lesion images. We further compare the CNN model with the default ENAS-based CNN model and with CNN models based on state-of-the-art architectures. The results (an error rate of no more than 20.6% on internal tests and 17.3% on average across external tests) show that the proposed framework generates robust and lightweight CNN models.


Subjects
Neural Networks (Computer), Breast Ultrasonography, Female, Humans, Bayes Theorem, Ultrasonography, Breast/diagnostic imaging
14.
Sensors (Basel) ; 24(8)2024 Apr 10.
Article in English | MEDLINE | ID: mdl-38676048

ABSTRACT

In addition to the filter coefficients, the placement of the microphone array is a crucial factor in the overall performance of a beamformer, and optimal placement can considerably enhance speech quality. However, the optimization problem with microphone configuration variables is non-convex and highly non-linear. The heuristic algorithms frequently employed take a long time and may miss the optimal microphone array placement. We extend the Bayesian optimization method to solve the microphone array configuration design problem. The proposed method does not depend on gradient or Hessian approximations and makes use of all the information available from prior evaluations. The Bayesian optimization method comprises Gaussian process regression and acquisition functions: Gaussian process regression places a probabilistic prior model on the objective function and exploits this model while integrating out uncertainty, and the acquisition function decides the next placement point based on the incumbent optimum and the posterior distribution. Numerical experiments demonstrate that the Bayesian optimization method finds a similar or better microphone array placement than the hybrid descent method while significantly reducing computational time; in our numerical results, the proposed method is at least four times faster than the hybrid descent method at finding the optimal microphone array configuration.

15.
Sensors (Basel) ; 24(9)2024 May 06.
Article in English | MEDLINE | ID: mdl-38733048

ABSTRACT

This study proposes an optimization method for temperature modulation in chemiresistor-type gas sensors based on Bayesian optimization (BO) and investigates its applicability. Our previously proposed waveform was employed as the sensor heater voltage, and the parameters determining the voltage range were optimized. Employing the Davies-Bouldin index (DBI) as the objective function (OBJ), BO was used to minimize the DBI calculated from a feature matrix built from the collected data after pre-processing. The sensor responses were measured using five test gases at five concentrations, amounting to 2500 data points per parameter set. After seven trials with four initial parameter sets (ten parameter sets were tested in total), the DBI was successfully reduced from 2.1 to 1.5. The classification accuracy for the test gases based on a support vector machine tends to increase with decreasing DBI, indicating that the DBI acts as a good OBJ. The accuracy itself increased from 85.4% to 93.2% through optimization. Deviations from this tendency for some parameter sets are also discussed. Overall, the proposed BO-based optimization method is shown to be promising for temperature modulation.
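A minimal sketch of using the Davies-Bouldin index (scikit-learn's davies_bouldin_score) as the objective, with gas labels playing the role of clusters; a plain parameter sweep stands in for the BO loop, and the synthetic "sensor features" and voltage-span response are assumptions.

```python
# Sketch: Davies-Bouldin index as an objective for tuning a measurement
# parameter, plus the SVM accuracy it is meant to track. Synthetic features;
# not the study's sensor data or BO implementation.
import numpy as np
from sklearn.metrics import davies_bouldin_score
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(5)

def measure_features(voltage_span, n_per_gas=100, n_gases=5, n_feat=6):
    """Placeholder for collecting temperature-modulated sensor responses."""
    sep = 2.0 * voltage_span - voltage_span ** 2          # toy class separation
    centers = rng.normal(scale=3.0, size=(n_gases, n_feat))
    X = np.vstack([sep * c + rng.normal(size=(n_per_gas, n_feat)) for c in centers])
    y = np.repeat(np.arange(n_gases), n_per_gas)
    return X, y

for span in [0.2, 0.5, 1.0, 1.5]:                         # candidate parameter values
    X, y = measure_features(span)
    dbi = davies_bouldin_score(X, y)                      # lower is better
    acc = cross_val_score(SVC(), X, y, cv=5).mean()
    print(f"span={span:.1f}  DBI={dbi:.2f}  SVM accuracy={acc:.3f}")
```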

16.
Sensors (Basel) ; 24(2)2024 Jan 18.
Article in English | MEDLINE | ID: mdl-38257703

ABSTRACT

Highway bridges are paramount elements of transportation infrastructure systems, and the ability to recover swiftly after extreme events such as earthquakes is a fundamental trait of resilient communities. Expediting the recovery process therefore requires near-real-time diagnosis of structural damage that provides dependable information. In this study, a data-driven approach to damage detection and assessment is investigated, focusing on bridge columns, the pivotal supporting elements of bridge systems, based on simulations derived from nonlinear time-history analysis. The research introduces a set of cumulative intensity-based damage features whose efficacy is demonstrated through unsupervised learning techniques. Leveraging the support vector machine, a prominent pattern recognition algorithm in supervised learning, alongside Bayesian optimization with a Gaussian process, seismic damage detection and assessment are explored. Encouragingly, the methodology yields high estimation accuracies for both binary outcomes (indicating the presence of damage or the occurrence of collapse) and multi-class classifications (indicating the severity of damage). These results open avenues for the practical implementation of on-board sensor computing, enabling near-real-time damage detection and assessment in bridge structures.

17.
Sensors (Basel) ; 24(3)2024 Jan 29.
Article in English | MEDLINE | ID: mdl-38339599

ABSTRACT

Photovoltaic (PV) power prediction plays a critical role amid the accelerating adoption of renewable energy sources. This paper introduces a bidirectional long short-term memory (BiLSTM) deep learning (DL) model designed for forecasting photovoltaic power one hour ahead. The dataset under examination originates from a small PV installation located at the Polytechnic School of the University of Alcala. To improve the quality of historical data and optimize model performance, a robust data preprocessing algorithm is implemented. The BiLSTM model is synergistically combined with a Bayesian optimization algorithm (BOA) to fine-tune its primary hyperparameters, thereby enhancing its predictive efficacy. The performance of the proposed model is evaluated across diverse meteorological and seasonal conditions. In deterministic forecasting, the findings indicate its superiority over alternative models employed in this research domain, specifically a multilayer perceptron (MLP) neural network model and a random forest (RF) ensemble model. Compared with the MLP and RF reference models, the proposed model achieves reductions in the normalized mean absolute error (nMAE) of 75.03% and 77.01%, respectively, demonstrating its effectiveness in this type of prediction. Moreover, interval prediction utilizing the bootstrap resampling method is conducted, with the acquired prediction intervals carefully adjusted to meet the desired confidence levels, thereby enhancing the robustness and flexibility of the predictions.

18.
Sensors (Basel) ; 24(15)2024 Jul 25.
Article in English | MEDLINE | ID: mdl-39123880

ABSTRACT

In this paper, we propose a Bayesian optimization (BO)-based strategy using Gaussian processes (GPs) for feature detection of a known but non-cooperative space object by a chaser equipped with a monocular camera and a single-beam LIDAR in a close-proximity operation. Specifically, the objective of the proposed Space Object Chaser-Resident Assessment Feature Tracking (SOCRAFT) algorithm is to determine the camera directional angles so that the maximum number of features within the camera range is detected while the chaser moves in a predefined orbit around the target. To provide a spatial incentive for the chaser, rewards are assigned to the chaser states from a combined model with two components: a feature detection score and a sinusoidal reward. Calculating the sinusoidal reward requires estimated feature locations, which are predicted by Gaussian process models. Another Gaussian process model provides the reward distribution, which Bayesian optimization then uses to determine the camera directional angles. Simulations are conducted in both 2D and 3D domains. The results demonstrate that SOCRAFT can generally detect the maximum number of features within the limited camera range and field of view.

19.
Sensors (Basel) ; 24(3)2024 Feb 01.
Article in English | MEDLINE | ID: mdl-38339680

ABSTRACT

Accurately and efficiently predicting elephant flows (elephants) is crucial for optimizing network performance and resource utilization. Current prediction approaches for software-defined networks (SDNs) typically rely on complete traffic and statistics moving from switches to controllers, which occupies extra control-channel bandwidth and adds network delay. To address this issue, this paper proposes a prediction strategy based on incomplete traffic sampled by the timeouts for the installation or reactivation of flow entries. The strategy assigns a very short hard timeout (Tinitial) to flow entries and then increases it at a rate r until flows are identified as elephants or reach the end of their lifespans. Predicted elephants are switched to an idle timeout of 5 s. Logistic regression is used to model elephants on a complete dataset, and Bayesian optimization is then used to tune Tinitial and r for the trained model over the incomplete dataset. The process of feature selection, model learning, and optimization is explained. An extensive evaluation shows that the proposed approach achieves over 90% generalization accuracy across 7 different datasets, including campus, backbone, and Internet of Things (IoT) traffic. Elephants can be correctly predicted for about half of their lifetime. The proposed approach can significantly reduce controller-switch interaction in campus and IoT networks, although packet completion approaches may need to be applied in networks with a short mean packet inter-arrival time.

20.
Nano Lett ; 23(23): 11129-11136, 2023 Dec 13.
Article in English | MEDLINE | ID: mdl-38038194

ABSTRACT

The photon upconverting properties of lanthanide-doped nanoparticles drive their applications in imaging, optoelectronics, and additive manufacturing. To maximize their brightness, these upconverting nanoparticles (UCNPs) are often synthesized as core/shell heterostructures. However, the large numbers of compositional and structural parameters in multishell heterostructures make optimizing optical properties challenging. Here, we demonstrate the use of Bayesian optimization (BO) to learn the structure and design rules for multishell UCNPs with bright ultraviolet and violet emission. We leverage an automated workflow that iteratively recommends candidate UCNP structures and then simulates their emission spectra using kinetic Monte Carlo. Yb3+/Er3+- and Yb3+/Er3+/Tm3+-codoped UCNP nanostructures optimized with this BO workflow achieve 10- and 110-fold brighter emission within 22 and 40 iterations, respectively. This workflow can be expanded to structures with higher compositional and structural complexity, accelerating the discovery of novel UCNPs while domain-specific knowledge is being developed.
