Results 1 - 20 of 54
1.
Proc Natl Acad Sci U S A ; 121(32): e2403449121, 2024 Aug 06.
Article in English | MEDLINE | ID: mdl-39088394

ABSTRACT

Most problems within and beyond the scientific domain can be framed as one of the following three levels of complexity of function approximation. Type 1: Approximate an unknown function given input/output data. Type 2: Consider a collection of variables and functions, some of which are unknown, indexed by the nodes and hyperedges of a hypergraph (a generalized graph where edges can connect more than two vertices). Given partial observations of the variables of the hypergraph (satisfying the functional dependencies imposed by its structure), approximate all the unobserved variables and unknown functions. Type 3: Expanding on Type 2, if the hypergraph structure itself is unknown, use partial observations of the variables of the hypergraph to discover its structure and approximate its unknown functions. These hypergraphs offer a natural platform for organizing, communicating, and processing computational knowledge. While most scientific problems can be framed as the data-driven discovery of unknown functions in a computational hypergraph whose structure is known (Type 2), many require the data-driven discovery of the structure (connectivity) of the hypergraph itself (Type 3). We introduce an interpretable Gaussian Process (GP) framework for such (Type 3) problems that does not require randomization of the data, access to or control over its sampling, or sparsity of the unknown functions in a known or learned basis. Its polynomial complexity, which contrasts sharply with the super-exponential complexity of causal inference methods, is enabled by the nonlinear ANOVA capabilities of GPs used as a sensing mechanism.
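
As a loose illustration of the "GP as sensing mechanism" idea (not the authors' framework; the data and variable names below are invented), an automatic-relevance-determination kernel lets the fitted lengthscales reveal which candidate variables a target actually depends on:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 3))          # candidate parent variables x1, x2, x3
y = np.sin(2 * X[:, 0]) + X[:, 1] ** 2         # target depends only on x1 and x2

# ARD kernel: one lengthscale per input dimension
kernel = RBF(length_scale=np.ones(3), length_scale_bounds=(1e-2, 1e3))
gp = GaussianProcessRegressor(kernel=kernel, alpha=1e-4, normalize_y=True).fit(X, y)

# A very large learned lengthscale means that input barely influences y,
# so no (hyper)edge between that variable and the target would be inferred.
print("learned lengthscales:", gp.kernel_.length_scale)
```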

2.
Mol Biol Evol ; 41(7)2024 Jul 03.
Article in English | MEDLINE | ID: mdl-38958167

ABSTRACT

Admixture between populations and species is common in nature. Since the influx of new genetic material might be either facilitated or hindered by selection, variation in mixture proportions along the genome is expected in organisms undergoing recombination. Various graph-based models have been developed to better understand these evolutionary dynamics of population splits and mixtures. However, current models assume a single mixture rate for the entire genome and do not explicitly account for linkage. Here, we introduce TreeSwirl, a novel method for inferring branch lengths and locus-specific mixture proportions by using genome-wide allele frequency data, assuming that the admixture graph is known or has been inferred. TreeSwirl builds upon TreeMix, which uses Gaussian processes to estimate the presence of gene flow between diverged populations. However, in contrast to TreeMix, our model infers locus-specific mixture proportions employing a hidden Markov model that accounts for linkage. Through simulated data, we demonstrate that TreeSwirl can accurately estimate locus-specific mixture proportions and handle complex demographic scenarios. It also outperforms related D- and f-statistics in terms of accuracy and sensitivity in detecting introgressed loci.


Subject(s)
Gene Frequency , Models, Genetic , Genetics, Population/methods , Markov Chains , Gene Flow , Genome , Computer Simulation , Genetic Linkage
3.
Nano Lett ; 24(7): 2149-2156, 2024 Feb 21.
Article in English | MEDLINE | ID: mdl-38329715

ABSTRACT

The integration time and signal-to-noise ratio are inextricably linked when performing scanning probe microscopy based on raster scanning. This often yields a large lower bound on the measurement time, for example, in nano-optical imaging experiments performed using a scanning near-field optical microscope (SNOM). Here, we utilize sparse scanning augmented with Gaussian process regression to bypass the time constraint. We apply this approach to image charge-transfer polaritons in graphene residing on ruthenium trichloride (α-RuCl3) and obtain key features such as polariton damping and dispersion. Critically, nano-optical SNOM imaging data obtained via sparse sampling are in good agreement with those extracted from traditional raster scans but require 11 times fewer sampled points. As a result, Gaussian process-aided sparse spiral scans offer a major decrease in scanning time.
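
A toy sketch of the sparse-sampling idea (not the authors' SNOM pipeline; the synthetic fringe pattern and sampling fraction are illustrative) fits a Gaussian process to a small random subset of pixels and reconstructs the full image:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Synthetic damped-fringe signal standing in for a near-field polariton image
x = np.linspace(0, 1, 60)
X1, X2 = np.meshgrid(x, x)
image = np.exp(-3 * X1) * np.cos(2 * np.pi * 8 * X1)   # damping + dispersion fringes

coords = np.column_stack([X1.ravel(), X2.ravel()])
values = image.ravel()

# Sample roughly 1/11 of the pixels and fit a GP to the sparse measurements
rng = np.random.default_rng(1)
idx = rng.choice(values.size, size=values.size // 11, replace=False)
kernel = RBF(length_scale=0.05) + WhiteKernel(noise_level=1e-4)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(coords[idx], values[idx])

# Predict the full raster from the sparse scan
reconstruction = gp.predict(coords).reshape(image.shape)
print("max abs reconstruction error:", np.abs(reconstruction - image).max())
```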

4.
BMC Bioinformatics ; 25(1): 104, 2024 Mar 08.
Article in English | MEDLINE | ID: mdl-38459430

ABSTRACT

The identification of tumor-specific molecular dependencies is essential for the development of effective cancer therapies. Genetic and chemical perturbations are powerful tools for discovering these dependencies. Even though chemical perturbations can be applied to primary cancer samples at large scale, the interpretation of experiment outcomes is often complicated by the fact that one chemical compound can affect multiple proteins. To overcome this challenge, Batzilla et al. (PLoS Comput Biol 18(8): e1010438, 2022) proposed DepInfeR, a regularized multi-response regression model designed to identify and estimate specific molecular dependencies of individual cancers from their ex-vivo drug sensitivity profiles. Inspired by their work, we propose a Bayesian extension to DepInfeR. Our proposed approach offers several advantages over DepInfeR, including the ability to handle missing values in both protein-drug affinity and drug sensitivity profiles without the need for data pre-processing steps such as imputation. Moreover, our approach uses Gaussian Processes to capture more complex molecular dependency structures, and provides probabilistic statements about whether a protein in the protein-drug affinity profiles is informative to the drug sensitivity profiles. Simulation studies demonstrate that our proposed approach achieves better prediction accuracy, and is able to discover unreported dependency structures.


Subject(s)
Neoplasms , Humans , Bayes Theorem , Neoplasms/drug therapy , Neoplasms/metabolism , Computer Simulation
5.
Hum Brain Mapp ; 45(7): e26692, 2024 May.
Article in English | MEDLINE | ID: mdl-38712767

ABSTRACT

In neuroimaging studies, combining data collected from multiple study sites or scanners is becoming common to increase the reproducibility of scientific discoveries. At the same time, unwanted variations arise from using different scanners (inter-scanner biases), which need to be corrected before downstream analyses to facilitate replicable research and prevent spurious findings. While statistical harmonization methods such as ComBat have become popular in mitigating inter-scanner biases in neuroimaging, recent methodological advances have shown that harmonizing heterogeneous covariances results in higher data quality. In vertex-level cortical thickness data, heterogeneity in spatial autocorrelation is a critical factor that affects covariance heterogeneity. Our work proposes a new statistical harmonization method called spatial autocorrelation normalization (SAN) that preserves homogeneous covariance in vertex-level cortical thickness data across different scanners. We use an explicit Gaussian process to characterize scanner-invariant and scanner-specific variations to reconstruct spatially homogeneous data across scanners. SAN is computationally feasible, and it easily allows the integration of existing harmonization methods. We demonstrate the utility of the proposed method using cortical thickness data from the Social Processes Initiative in the Neurobiology of the Schizophrenia(s) (SPINS) study. SAN is publicly available as an R package.


Subject(s)
Cerebral Cortex , Magnetic Resonance Imaging , Schizophrenia , Humans , Magnetic Resonance Imaging/standards , Magnetic Resonance Imaging/methods , Schizophrenia/diagnostic imaging , Schizophrenia/pathology , Cerebral Cortex/diagnostic imaging , Cerebral Cortex/anatomy & histology , Neuroimaging/methods , Neuroimaging/standards , Image Processing, Computer-Assisted/methods , Image Processing, Computer-Assisted/standards , Male , Female , Adult , Normal Distribution , Brain Cortical Thickness
6.
Hum Brain Mapp ; 45(10): e26763, 2024 Jul 15.
Article in English | MEDLINE | ID: mdl-38943369

ABSTRACT

In this article, we develop an analytical approach for estimating brain connectivity networks that accounts for subject heterogeneity. More specifically, we consider a novel extension of a multi-subject Bayesian vector autoregressive model that estimates group-specific directed brain connectivity networks and accounts for the effects of covariates on the network edges. We adopt a flexible approach, allowing for (possibly) nonlinear effects of the covariates on edge strength via a novel Bayesian nonparametric prior that employs a weighted mixture of Gaussian processes. For posterior inference, we achieve computational scalability by implementing a variational Bayes scheme. Our approach enables simultaneous estimation of group-specific networks and selection of relevant covariate effects. We show improved performance over competing two-stage approaches on simulated data. We apply our method on resting-state functional magnetic resonance imaging data from children with a history of traumatic brain injury (TBI) and healthy controls to estimate the effects of age and sex on the group-level connectivities. Our results highlight differences in the distribution of parent nodes. They also suggest alteration in the relation of age, with peak edge strength in children with TBI, and differences in effective connectivity strength between males and females.


Subject(s)
Bayes Theorem , Brain Injuries, Traumatic , Connectome , Magnetic Resonance Imaging , Humans , Brain Injuries, Traumatic/diagnostic imaging , Brain Injuries, Traumatic/physiopathology , Female , Male , Child , Adolescent , Connectome/methods , Brain/diagnostic imaging , Brain/physiopathology , Nerve Net/diagnostic imaging , Nerve Net/physiopathology , Models, Neurological
7.
Hum Brain Mapp ; 45(3): e26632, 2024 Feb 15.
Article in English | MEDLINE | ID: mdl-38379519

ABSTRACT

Since the introduction of the BrainAGE method, novel machine learning methods for brain age prediction have continued to emerge. The idea of estimating the chronological age from magnetic resonance images proved to be an interesting field of research due to the relative simplicity of its interpretation and its potential use as a biomarker of brain health. We revised our previous BrainAGE approach, originally utilising relevance vector regression (RVR), and replaced it with Gaussian process regression (GPR), which enables more stable processing of larger datasets, such as the UK Biobank (UKB). In addition, we extended the global BrainAGE approach to regional BrainAGE, providing spatially specific scores for five brain lobes per hemisphere. We tested the performance of the new algorithms under several different conditions and investigated their validity on the ADNI and schizophrenia samples, as well as on a synthetic dataset of neocortical thinning. The results show an improved performance of the reframed global model on the UKB sample with a mean absolute error (MAE) of less than 2 years and a significant difference in BrainAGE between healthy participants and patients with Alzheimer's disease and schizophrenia. Moreover, the workings of the algorithm show meaningful effects for a simulated neocortical atrophy dataset. The regional BrainAGE model performed well on two clinical samples, showing disease-specific patterns for different levels of impairment. The results demonstrate that the new improved algorithms provide reliable and valid brain age estimations.
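
A minimal BrainAGE-style sketch, assuming synthetic features in place of preprocessed imaging data (this is not the revised BrainAGE workflow itself), trains a GPR age model and reports the predicted-minus-chronological age gap:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, p = 500, 20
features = rng.normal(size=(n, p))                       # stand-in for preprocessed structural features
age = 50 + 10 * features[:, 0] - 5 * features[:, 1] + rng.normal(scale=3, size=n)

X_tr, X_te, y_tr, y_te = train_test_split(features, age, random_state=0)

kernel = RBF(length_scale=np.sqrt(p)) + WhiteKernel(noise_level=1.0)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_tr, y_tr)

predicted_age = gp.predict(X_te)
brain_age_gap = predicted_age - y_te                     # BrainAGE score: predicted minus chronological age
print("MAE (years):", np.abs(predicted_age - y_te).mean())
print("mean BrainAGE gap:", brain_age_gap.mean())
```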


Subject(s)
Alzheimer Disease , Schizophrenia , Humans , Workflow , Brain/diagnostic imaging , Brain/pathology , Schizophrenia/diagnostic imaging , Schizophrenia/pathology , Alzheimer Disease/diagnostic imaging , Alzheimer Disease/pathology , Machine Learning , Magnetic Resonance Imaging/methods
8.
J Comput Chem ; 45(11): 761-776, 2024 Apr 30.
Article in English | MEDLINE | ID: mdl-38124290

ABSTRACT

Structure and function in nanoscale atomistic assemblies are tightly coupled, and every atom with its specific position and even every electron will have a decisive effect on the electronic structure, and hence, on the molecular properties. Molecular simulations of nanoscopic atomistic structures therefore require accurately resolved three-dimensional input structures. If extracted from experiment, these structures often suffer from severe uncertainties, of which the lack of information on hydrogen atoms is a prominent example. Hence, experimental structures require careful review and curation, which is a time-consuming and error-prone process. Here, we present a fast and robust protocol for the automated structure analysis and pH-consistent protonation, in short, ASAP. For biomolecules as a target, the ASAP protocol integrates sequence analysis and error assessment of a given input structure. ASAP allows for pKa prediction from reference data through Gaussian process regression including uncertainty estimation and connects to system-focused atomistic modeling described in Brunken and Reiher (J. Chem. Theory Comput. 16, 2020, 1646). Although focused on biomolecules, ASAP can be extended to other nanoscopic objects, because most of its design elements rely on a general graph-based foundation guaranteeing transferability. The modular character of the underlying pipeline supports different degrees of automation, which allows for (i) efficient feedback loops for human-machine interaction with a low entrance barrier and for (ii) integration into autonomous procedures such as automated force field parametrizations. This facilitates fast switching of the pH-state through on-the-fly system-focused reparametrization during a molecular simulation at virtually no extra computational cost.
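
The pKa-regression step could be sketched as follows, with hypothetical descriptors and reference values standing in for the ASAP training data (this is not the ASAP implementation):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern, WhiteKernel

# Hypothetical reference set: per-residue descriptors -> experimental pKa values
rng = np.random.default_rng(0)
descriptors = rng.normal(size=(120, 5))                  # e.g. local electrostatics, burial, H-bond counts
pka = 6.5 + 1.2 * descriptors[:, 0] - 0.8 * descriptors[:, 2] + rng.normal(scale=0.3, size=120)

kernel = Matern(length_scale=np.ones(5), nu=2.5) + WhiteKernel(noise_level=0.1)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(descriptors, pka)

# Prediction with an uncertainty estimate for a new titratable site
new_site = rng.normal(size=(1, 5))
mean, std = gp.predict(new_site, return_std=True)
print(f"predicted pKa = {mean[0]:.2f} +/- {std[0]:.2f}")
```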

9.
J Comput Chem ; 45(15): 1235-1246, 2024 Jun 05.
Article in English | MEDLINE | ID: mdl-38345165

ABSTRACT

Machine learning (ML) force fields are revolutionizing molecular dynamics (MD) simulations as they bypass the computational cost associated with ab initio methods but do not sacrifice accuracy in the process. In this work, the GPyTorch library is used to create Gaussian process regression (GPR) models that are interfaced with the next-generation ML force field FFLUX. These models predict atomic properties of different molecular configurations that appear in a progressing MD simulation. An improved kernel function is utilized to correctly capture the periodicity of the input descriptors. The first FFLUX molecular simulations of ammonia, methanol, and malondialdehyde with the updated kernel are performed. Geometry optimizations with the GPR models result in highly accurate final structures with a maximum root-mean-squared deviation of 0.064 Å and sub-kJ mol⁻¹ total energy predictions. Additionally, the models are tested in 298 K MD simulations with FFLUX to benchmark for robustness. The resulting energy and force predictions throughout the simulation are in excellent agreement with ab initio data for ammonia and methanol but decrease in quality for malondialdehyde due to the increased system complexity. GPR model improvements are discussed, which will ensure the future scalability to larger systems.
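
Because the abstract names the GPyTorch library, a minimal exact-GP model with a periodic kernel might look like the sketch below; the toy 1-D data and the model class `PeriodicGP` are illustrative, not the FFLUX models:

```python
import math
import torch
import gpytorch

class PeriodicGP(gpytorch.models.ExactGP):
    def __init__(self, train_x, train_y, likelihood):
        super().__init__(train_x, train_y, likelihood)
        self.mean_module = gpytorch.means.ConstantMean()
        # Periodic kernel so the model respects the periodicity of the input descriptor
        self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.PeriodicKernel())

    def forward(self, x):
        return gpytorch.distributions.MultivariateNormal(
            self.mean_module(x), self.covar_module(x))

# Toy periodic training data standing in for an atomic property vs. a dihedral angle
train_x = torch.linspace(0.0, 2.0 * math.pi, 40)
train_y = torch.sin(train_x) + 0.05 * torch.randn(train_x.size(0))

likelihood = gpytorch.likelihoods.GaussianLikelihood()
model = PeriodicGP(train_x, train_y, likelihood)

# Train by maximizing the exact marginal log likelihood
model.train(); likelihood.train()
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)
mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model)
for _ in range(100):
    optimizer.zero_grad()
    loss = -mll(model(train_x), train_y)
    loss.backward()
    optimizer.step()

# Predict beyond the training window; the periodic kernel extrapolates the repeat unit
model.eval(); likelihood.eval()
with torch.no_grad(), gpytorch.settings.fast_pred_var():
    test_x = torch.linspace(0.0, 4.0 * math.pi, 200)
    pred = likelihood(model(test_x))
    lower, upper = pred.confidence_region()
    print(pred.mean[:5])
```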

10.
Biometrics ; 80(1)2024 Jan 29.
Article in English | MEDLINE | ID: mdl-38372403

ABSTRACT

Precision medicine is a promising framework for generating evidence to improve health and health care. Yet, a gap persists between the ever-growing number of statistical precision medicine strategies for evidence generation and implementation in real-world clinical settings, and the strategies for closing this gap will likely be context-dependent. In this paper, we consider the specific context of partial compliance to wound management among patients with peripheral artery disease. Using a Gaussian process surrogate for the value function, we show the feasibility of using Bayesian optimization to learn optimal individualized treatment rules. Further, we expand beyond the common precision medicine task of learning an optimal individualized treatment rule to the characterization of classes of individualized treatment rules and show how those findings can be translated into clinical contexts.
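
A bare-bones version of Bayesian optimization with a GP surrogate and expected-improvement acquisition (the one-dimensional treatment-rule parameter and noisy value function are hypothetical, not the paper's clinical model) could look like this:

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def value(theta):
    """Hypothetical noisy value of an individualized treatment rule indexed by theta."""
    return -(theta - 0.6) ** 2 + 0.05 * np.random.randn()

def expected_improvement(mu, sigma, best):
    sigma = np.maximum(sigma, 1e-12)             # guard against zero predictive std
    z = (mu - best) / sigma
    return (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(5, 1))               # initial rule parameters
y = np.array([value(x[0]) for x in X])
grid = np.linspace(0, 1, 200).reshape(-1, 1)

for _ in range(20):
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-3, normalize_y=True).fit(X, y)
    mu, sigma = gp.predict(grid, return_std=True)
    x_next = grid[np.argmax(expected_improvement(mu, sigma, y.max()))]
    X = np.vstack([X, x_next])
    y = np.append(y, value(x_next[0]))

print("estimated optimal rule parameter:", X[np.argmax(y)])
```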


Subject(s)
Precision Medicine , Humans , Bayes Theorem
11.
Stat Med ; 43(16): 3062-3072, 2024 Jul 20.
Article in English | MEDLINE | ID: mdl-38803150

ABSTRACT

This article is concerned with sample size determination methodology for prediction models. We propose to combine the individual calculations via learning-type curves. We suggest two distinct ways of doing so, a deterministic skeleton of a learning curve and a Gaussian process centered upon its deterministic counterpart. We employ several learning algorithms for modeling the primary endpoint and distinct measures for trial efficacy. We find that the performance may vary with the sample size, but borrowing information across sample sizes universally improves the performance of such calculations. The Gaussian process-based learning curve appears more robust and statistically efficient, while computational efficiency is comparable. We suggest that anchoring against historical evidence when extrapolating sample sizes should be adopted when such data are available. The methods are illustrated on binary and survival endpoints.
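
A hedged sketch of the "GP centered on a deterministic skeleton" idea, with invented pilot performance estimates: fit an inverse-power-law learning curve, model the residuals with a GP, and extrapolate with uncertainty:

```python
import numpy as np
from scipy.optimize import curve_fit
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical pilot estimates of model AUC at small training-set sizes
n_pilot = np.array([50.0, 100.0, 150.0, 200.0, 300.0])
auc_pilot = np.array([0.62, 0.67, 0.70, 0.72, 0.74])

# Deterministic skeleton: inverse-power-law learning curve auc(n) = a - b * n**(-c)
def skeleton(n, a, b, c):
    return a - b * n ** (-c)

params, _ = curve_fit(skeleton, n_pilot, auc_pilot, p0=[0.8, 1.0, 0.5], maxfev=10000)

# GP centered on the skeleton: model the residuals, then add the skeleton back
X = np.log(n_pilot).reshape(-1, 1)
resid = auc_pilot - skeleton(n_pilot, *params)
gp = GaussianProcessRegressor(kernel=RBF(1.0) + WhiteKernel(1e-4)).fit(X, resid)

# Extrapolate to candidate sample sizes with uncertainty bands
n_new = np.array([400.0, 600.0, 1000.0])
resid_mean, resid_std = gp.predict(np.log(n_new).reshape(-1, 1), return_std=True)
for n, m, s in zip(n_new, skeleton(n_new, *params) + resid_mean, resid_std):
    print(f"n = {n:.0f}: projected AUC {m:.3f} +/- {1.96 * s:.3f}")
```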


Subject(s)
Algorithms , Models, Statistical , Humans , Sample Size , Learning Curve , Normal Distribution , Computer Simulation , Survival Analysis
12.
Stat Med ; 43(6): 1135-1152, 2024 Mar 15.
Article in English | MEDLINE | ID: mdl-38197220

ABSTRACT

The prevalence of chronic non-communicable diseases such as obesity has noticeably increased in the last decade. The study of these diseases in early life is of paramount importance in determining their course in adult life and in supporting clinical interventions. Recently, attention has been drawn to approaches that study the alteration of metabolic pathways in obese children. In this work, we propose a novel joint modeling approach for the analysis of growth biomarkers and metabolite associations, to unveil metabolic pathways related to childhood obesity. Within a Bayesian framework, we flexibly model the temporal evolution of growth trajectories and metabolic associations through the specification of a joint nonparametric random effect distribution, with the main goal of clustering subjects, thus identifying risk sub-groups. Growth profiles as well as patterns of metabolic associations determine the clustering structure. Inclusion of risk factors is straightforward through the specification of a regression term. We demonstrate the proposed approach on data from the Growing Up in Singapore Towards healthy Outcomes cohort study, based in Singapore. Posterior inference is obtained via a tailored MCMC algorithm, involving a nonparametric prior with mixed support. Our analysis has identified potential key pathways in obese children that allow for the exploration of possible molecular mechanisms associated with childhood obesity.


Subject(s)
Pediatric Obesity , Adult , Humans , Child , Pediatric Obesity/epidemiology , Cohort Studies , Bayes Theorem , Risk Factors , Biomarkers
13.
J Anim Ecol ; 93(5): 632-645, 2024 May.
Article in English | MEDLINE | ID: mdl-38297453

ABSTRACT

Identifying important demographic drivers of population dynamics is fundamental for understanding life-history evolution and implementing effective conservation measures. Integrated population models (IPMs) coupled with transient life table response experiments (tLTREs) allow ecologists to quantify the contributions of demographic parameters to observed population change. While IPMs can estimate parameters that are not estimable using any data source alone, for example, immigration, the estimated contribution of such parameters to population change is prone to bias. Currently, it is unclear when robust conclusions can be drawn from them. We sought to understand the drivers of a rebounding southern elephant seal population on Marion Island using the IPM-tLTRE framework, applied to count and mark-recapture data on 9500 female seals over nearly 40 years. Given the uncertainty around IPM-tLTRE estimates of immigration, we also aimed to investigate the utility of simulation and sensitivity analyses as general tools for evaluating the robustness of conclusions obtained in this framework. Using a Bayesian IPM and tLTRE analysis, we quantified the contributions of survival, immigration and population structure to population growth. We assessed the sensitivity of our estimates to the choice of multivariate priors on immigration and other vital rates. To do so, we make a novel application of Gaussian process priors, in comparison with commonly used shrinkage priors. Using simulation, we assessed our model's ability to estimate the demographic contribution of immigration under different levels of temporal variance in immigration. The tLTRE analysis suggested that adult survival and immigration were the most important drivers of recent population growth. While the contribution of immigration was sensitive to prior choices, the estimate was consistently large. Furthermore, our simulation study validated the importance of immigration by showing that our estimate of its demographic contribution is unlikely to be a biased overestimate. Our results highlight the connectivity between distant populations of southern elephant seals, illustrating that female dispersal can be important in regulating the abundance of local populations even when natal site fidelity is high. More generally, we demonstrate how robust ecological conclusions may be obtained about immigration from the IPM-tLTRE framework, by combining sensitivity analysis and simulation.


Subject(s)
Models, Biological , Population Dynamics , Seals, Earless , Animals , Seals, Earless/physiology , Female , Animal Migration , Bayes Theorem , Computer Simulation
14.
BMC Med Res Methodol ; 24(1): 26, 2024 Jan 27.
Article in English | MEDLINE | ID: mdl-38281017

ABSTRACT

BACKGROUND: The rapidly growing burden of non-communicable diseases (NCDs) among people living with HIV in sub-Saharan Africa (SSA) has expanded the number of multidisease models predicting future care needs and health system priorities. Usefulness of these models depends on their ability to replicate real-life data and be readily understood and applied by public health decision-makers; yet existing simulation models of HIV comorbidities are computationally expensive and require large numbers of parameters and long run times, which hinders their utility in resource-constrained settings. METHODS: We present a novel, user-friendly emulator that can efficiently approximate complex simulators of long-term HIV and NCD outcomes in Africa. We describe how to implement the emulator via a tutorial based on publicly available data from Kenya. Emulator parameters relating to incidence and prevalence of HIV, hypertension and depression were derived from our own agent-based simulation model and other published literature. Gaussian processes were used to fit the emulator to simulator estimates, assuming presence of noise for design points. Bayesian posterior predictive checks and leave-one-out cross validation confirmed the emulator's descriptive accuracy. RESULTS: In this example, our emulator resulted in a 13-fold (95% Confidence Interval (CI): 8-22) improvement in computing time compared to that of more complex chronic disease simulation models. One emulator run took 3.00 seconds (95% CI: 1.65-5.28) on a 64-bit operating system laptop with 8.00 gigabytes (GB) of Random Access Memory (RAM), compared to > 11 hours for 1000 simulator runs on a high-performance computing cluster with 1500 GBs of RAM. Pareto k estimates were < 0.70 for all emulations, which demonstrates sufficient predictive accuracy of the emulator. CONCLUSIONS: The emulator presented in this tutorial offers a practical and flexible modelling tool that can help inform health policy-making in countries with a generalized HIV epidemic and growing NCD burden. Future emulator applications could be used to forecast the changing burden of HIV, hypertension and depression over an extended (> 10 year) period, estimate longer-term prevalence of other co-occurring conditions (e.g., postpartum depression among women living with HIV), and project the impact of nationally-prioritized interventions such as national health insurance schemes and differentiated care models.
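
A toy emulator along these lines (not the HIV/NCD model; the `slow_simulator` function and design points are stand-ins) fits a GP to noisy design-point outputs and checks itself with leave-one-out cross-validation:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern, WhiteKernel

def slow_simulator(x):
    """Stand-in for an expensive agent-based simulator (e.g. projected prevalence)."""
    return 0.25 + 0.1 * np.sin(3 * x[0]) + 0.05 * x[1]

rng = np.random.default_rng(0)
design = rng.uniform(0, 1, size=(30, 2))                 # design points (e.g. incidence, coverage inputs)
outputs = np.array([slow_simulator(x) for x in design]) + rng.normal(scale=0.005, size=30)

# Noise term (WhiteKernel) reflects the assumption of noisy design points
kernel = Matern(length_scale=[0.3, 0.3], nu=2.5) + WhiteKernel(noise_level=1e-4)

# Leave-one-out check of the emulator's predictive accuracy
loo_errors = []
for i in range(len(design)):
    keep = np.delete(np.arange(len(design)), i)
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(design[keep], outputs[keep])
    loo_errors.append(gp.predict(design[[i]])[0] - outputs[i])
print("LOO RMSE:", np.sqrt(np.mean(np.square(loo_errors))))

# Final emulator: near-instant predictions in place of long simulator runs
emulator = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(design, outputs)
print(emulator.predict([[0.4, 0.7]]))
```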


Subject(s)
HIV Infections , Hypertension , Noncommunicable Diseases , Humans , Female , HIV Infections/epidemiology , HIV Infections/therapy , Noncommunicable Diseases/epidemiology , Noncommunicable Diseases/therapy , Bayes Theorem , Computer Simulation , Hypertension/epidemiology , Hypertension/therapy
15.
J Biopharm Stat ; : 1-11, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38557411

ABSTRACT

The incorporation of real-world data (RWD) into medical product development and evaluation has exhibited consistent growth. However, there is no universally adopted method for deciding how much information to borrow from external data. This paper proposes a study design methodology called Tree-based Monte Carlo (TMC) that dynamically integrates patients from various RWD sources to calculate the treatment effect based on the similarity between the clinical trial and RWD. Initially, a propensity score is developed to gauge the resemblance between clinical trial data and each real-world dataset. Utilizing this similarity metric, we construct a hierarchical clustering tree that delineates varying degrees of similarity between each RWD source and the clinical trial data. Ultimately, a Gaussian process methodology is employed across this hierarchical clustering framework to synthesize the projected treatment effects of the external group. Simulation results show that our clustering tree successfully identifies similarity. Data sources exhibiting greater similarity with the clinical trial are accorded higher weights in the treatment estimation process, while less congruent sources receive comparatively lower emphasis. Compared with another Bayesian method, the meta-analytic predictive prior (MAP), our proposed method's estimator is closer to the true value and has smaller bias.
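
A rough sketch of the similarity-and-clustering step only (the GP synthesis of treatment effects is not shown; all data sources and the propensity-score summary are invented) might proceed as follows:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
trial = rng.normal(0.0, 1.0, size=(200, 4))                      # trial-patient covariates
rwd_sources = {f"RWD_{k}": rng.normal(0.3 * k, 1.0, size=(200, 4)) for k in range(4)}

# Similarity of each source to the trial: mean propensity of its patients to "look like" trial patients
similarity = []
for name, ext in rwd_sources.items():
    X = np.vstack([trial, ext])
    z = np.r_[np.ones(len(trial)), np.zeros(len(ext))]           # 1 = trial membership
    ps = LogisticRegression(max_iter=1000).fit(X, z).predict_proba(ext)[:, 1]
    similarity.append(ps.mean())
similarity = np.array(similarity).reshape(-1, 1)

# Hierarchical clustering tree over the sources' similarity scores
tree = linkage(pdist(similarity), method="average")
print(dict(zip(rwd_sources, fcluster(tree, t=2, criterion="maxclust"))))
```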

16.
Phytochem Anal ; 35(6): 1345-1357, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38686612

ABSTRACT

INTRODUCTION: Nonstationary, nonlinear mass transfer in traditional Chinese medicine (TCM) extraction poses challenges to correlating process characteristics with quality parameters, particularly in defining clear parameter ranges for the process. OBJECTIVES: The aim of the study was to provide a solution for quality consistency analysis in TCM preparation processes. MATERIALS AND METHODS: Salvia miltiorrhiza was used as an example, with 15 batches of standard decoction prepared. Using aqueous extract, alcoholic extract, and the content of salvianolic acid B as herb material key quality attributes, multiple nonlinear regression, Gaussian process regression, and artificial neural network models were employed to predict the key quality attributes including the paste yield, the content of salvianolic acid B, and the transfer rate. The evaluation criteria were root mean square error, mean absolute percentage error, and R2. RESULTS: The Gaussian process regression model had the best prediction effect on the paste yield, the content of salvianolic acid B, and the transfer rate, with R2 being 0.918, 0.934, and 0.919, respectively. Utilizing the Gaussian process regression model's confidence intervals, along with Shewhart control charts and intervals optimized through process capability index analysis, the quality control range of the standard decoction was determined as follows: paste yield, 25.14%-33.19%; salvianolic acid B content, 2.62%-4.78%; and transfer rate, 56.88%-64.80%. CONCLUSION: This study combined the preparation process of standard decoction with the Gaussian process regression model, accurately predicted the key quality attributes, and determined the quality parameter range using process analysis tools, providing a new approach to quality consistency standards for TCM processes.
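
A small illustration of deriving a provisional control range from GPR predictive intervals, using made-up batch data rather than the study's measurements:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical batch records: standardized herb-material attributes -> paste yield (%)
rng = np.random.default_rng(0)
herb_attrs = rng.normal(size=(15, 3))            # e.g. aqueous extract, alcoholic extract, Sal B content
paste_yield = 29 + 2.0 * herb_attrs[:, 0] + rng.normal(scale=0.8, size=15)

kernel = RBF(length_scale=np.ones(3)) + WhiteKernel(noise_level=0.5)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(herb_attrs, paste_yield)

# Predictive mean and std for the observed batches -> provisional 95% control range
mean, std = gp.predict(herb_attrs, return_std=True)
lower, upper = (mean - 1.96 * std).min(), (mean + 1.96 * std).max()
print(f"provisional paste-yield control range: {lower:.2f}% - {upper:.2f}%")
```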


Subject(s)
Benzofurans , Drugs, Chinese Herbal , Salvia miltiorrhiza , Salvia miltiorrhiza/chemistry , Drugs, Chinese Herbal/chemistry , Drugs, Chinese Herbal/standards , Drugs, Chinese Herbal/analysis , Benzofurans/analysis , Regression Analysis , Quality Control , Neural Networks, Computer , Normal Distribution , Depsides
17.
Sensors (Basel) ; 24(8)2024 Apr 10.
Article in English | MEDLINE | ID: mdl-38676048

ABSTRACT

In addition to the filter coefficients, the location of the microphone array is a crucial factor in improving the overall performance of a beamformer. The optimal microphone array placement can considerably enhance speech quality. However, the optimization problem with microphone configuration variables is non-convex and highly non-linear. Heuristic algorithms that are frequently employed take a long time and have a chance of missing the optimal microphone array placement design. We extend the Bayesian optimization method to solve the microphone array configuration design problem. The proposed Bayesian optimization method does not depend on gradient and Hessian approximations and makes use of all the information available from prior evaluations. Furthermore, Gaussian process regression and acquisition functions make up the Bayesian optimization method. The objective function is given a prior probabilistic model through Gaussian process regression, which exploits this model while integrating out uncertainty. The acquisition function is adopted to decide the next placement point based upon the incumbent optimum and the posterior distribution. Numerical experiments have demonstrated that the Bayesian optimization method could find a similar or better microphone array placement compared with the hybrid descent method, while computational time is significantly reduced. From the numerical results, our proposed method is at least four times faster than the hybrid descent method at finding the optimal microphone array configuration.

18.
Sensors (Basel) ; 24(9)2024 Apr 24.
Article in English | MEDLINE | ID: mdl-38732793

ABSTRACT

During the implementation of the Internet of Things (IoT), the performance of communication and sensing antennas that are embedded in smart surfaces or smart devices can be affected by objects in their reactive near field due to detuning and antenna mismatch. Matching networks have been proposed to re-establish impedance matching when antennas become detuned due to environmental factors. In this work, the change in the reflection coefficient at the antenna, due to the presence of objects, is first characterized as a function of the frequency and object distance by applying Gaussian process regression on experimental data. Based on this characterization, for random object positions, it is shown through simulation that a dynamic environment can lower the reliability of a matching network by up to 90%, depending on the type of object, the probability distribution of the object distance, and the required bandwidth. As an alternative to complex and power-consuming real-time adaptive matching, a new, resilient network tuning strategy is proposed that takes into account these random variations. This new approach increases the reliability of the system by 10% to 40% in these dynamic environment scenarios.
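
An illustrative sketch of characterizing the reflection coefficient as a function of frequency and object distance with GPR; the synthetic |S11| surface below merely mimics detuning and is not the measured data:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Synthetic stand-in for measured |S11|: a resonance that shifts as an object approaches
rng = np.random.default_rng(0)
freq = rng.uniform(2.3, 2.6, size=300)                   # GHz
dist = rng.uniform(0.0, 50.0, size=300)                  # mm from the antenna
f_res = 2.45 - 0.05 * np.exp(-dist / 10.0)               # detuned resonance frequency
s11 = 1 - 0.9 * np.exp(-((freq - f_res) / 0.02) ** 2) + rng.normal(scale=0.02, size=300)

X = np.column_stack([freq, dist])
kernel = RBF(length_scale=[0.05, 10.0]) + WhiteKernel(noise_level=1e-3)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, s11)

# Query the characterized surface at an untested (frequency, distance) pair
mean, std = gp.predict([[2.45, 5.0]], return_std=True)
print(f"|S11| at 2.45 GHz, 5 mm: {mean[0]:.3f} +/- {std[0]:.3f}")
```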

19.
Sensors (Basel) ; 24(3)2024 Jan 24.
Article in English | MEDLINE | ID: mdl-38339487

ABSTRACT

Remote sensing data represent one of the most important sources for automized yield prediction. High temporal and spatial resolution, historical record availability, reliability, and low cost are key factors in predicting yields around the world. Yield prediction as a machine learning task is challenging, as reliable ground truth data are difficult to obtain, especially since new data points can only be acquired once a year during harvest. Factors that influence annual yields are plentiful, and data acquisition can be expensive, as crop-related data often need to be captured by experts or specialized sensors. A solution to both problems can be provided by deep transfer learning based on remote sensing data. Satellite images are free of charge, and transfer learning allows recognition of yield-related patterns within countries where data are plentiful and transfers the knowledge to other domains, thus limiting the number of ground truth observations needed. Within this study, we examine the use of transfer learning for yield prediction, where the data preprocessing towards histograms is unique. We present a deep transfer learning framework for yield prediction and demonstrate its successful application to transfer knowledge gained from US soybean yield prediction to soybean yield prediction within Argentina. We perform a temporal alignment of the two domains and improve transfer learning by applying several transfer learning techniques, such as L2-SP, BSS, and layer freezing, to overcome catastrophic forgetting and negative transfer problems. Lastly, we exploit spatio-temporal patterns within the data by applying a Gaussian process. We are able to improve the performance of soybean yield prediction in Argentina by a total of 19% in terms of RMSE and 39% in terms of R2 compared to predictions without transfer learning and Gaussian processes. This proof of concept for advanced transfer learning techniques for yield prediction and remote sensing data in the form of histograms can enable successful yield prediction, especially in emerging and developing countries, where reliable data are usually limited.

20.
Sensors (Basel) ; 24(7)2024 Mar 26.
Article in English | MEDLINE | ID: mdl-38610329

ABSTRACT

Surface roughness prediction is a pivotal aspect of the manufacturing industry, as it directly influences product quality and process optimization. This study introduces a predictive model for surface roughness in the turning of complex-structured workpieces utilizing Gaussian Process Regression (GPR) informed by vibration signals. The model captures parameters from both the time and frequency domains of the turning tool, encompassing the mean, median, standard deviation (STD), and root mean square (RMS) values. The signal is transformed from the time domain to the frequency domain using Welch's method, complemented by time-frequency domain analysis employing a three-level Daubechies Wavelet Packet Transform (WPT). The selected features are then utilized as inputs for the GPR model to forecast surface roughness. Empirical evidence indicates that the GPR model can accurately predict the surface roughness of turned complex-structured workpieces. This predictive strategy has the potential to improve product quality, streamline manufacturing processes, and minimize waste within the industry.
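
A simplified sketch of the feature-extraction and GPR steps (Welch PSD plus time-domain statistics; the wavelet packet features are omitted, and all signals and roughness values are synthetic):

```python
import numpy as np
from scipy.signal import welch
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def vibration_features(signal, fs=10_000):
    """Time-domain statistics plus total Welch spectral power for one turning pass."""
    f, psd = welch(signal, fs=fs, nperseg=1024)
    return [signal.mean(), np.median(signal), signal.std(),
            np.sqrt(np.mean(signal ** 2)), float(np.sum(psd))]

# Synthetic vibration signals whose amplitude (and hence roughness) drifts over passes
rng = np.random.default_rng(0)
n_passes = 40
t = np.arange(0, 0.5, 1e-4)                               # 0.5 s at 10 kHz
signals = [np.sin(2 * np.pi * 200 * t) * (1 + 0.2 * k / n_passes)
           + rng.normal(scale=0.1, size=t.size) for k in range(n_passes)]
roughness = np.array([0.8 + 0.5 * k / n_passes + rng.normal(scale=0.03) for k in range(n_passes)])  # Ra, um

X = np.array([vibration_features(s) for s in signals])
kernel = RBF(length_scale=np.ones(X.shape[1])) + WhiteKernel(noise_level=1e-2)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X[:-5], roughness[:-5])

# Forecast surface roughness for the last five passes from their vibration features
pred, std = gp.predict(X[-5:], return_std=True)
print("predicted Ra:", np.round(pred, 3), "+/-", np.round(std, 3))
```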
