Results 1 - 20 of 590
1.
Proc Natl Acad Sci U S A ; 121(32): e2403449121, 2024 Aug 06.
Article in English | MEDLINE | ID: mdl-39088394

ABSTRACT

Most problems within and beyond the scientific domain can be framed at one of the following three levels of complexity of function approximation. Type 1: Approximate an unknown function given input/output data. Type 2: Consider a collection of variables and functions, some of which are unknown, indexed by the nodes and hyperedges of a hypergraph (a generalized graph where edges can connect more than two vertices). Given partial observations of the variables of the hypergraph (satisfying the functional dependencies imposed by its structure), approximate all the unobserved variables and unknown functions. Type 3: Expanding on Type 2, if the hypergraph structure itself is unknown, use partial observations of the variables of the hypergraph to discover its structure and approximate its unknown functions. These hypergraphs offer a natural platform for organizing, communicating, and processing computational knowledge. While most scientific problems can be framed as the data-driven discovery of unknown functions in a computational hypergraph whose structure is known (Type 2), many require the data-driven discovery of the structure (connectivity) of the hypergraph itself (Type 3). We introduce an interpretable Gaussian Process (GP) framework for such (Type 3) problems that does not require randomization of the data, access to or control over its sampling, or sparsity of the unknown functions in a known or learned basis. Its polynomial complexity, which contrasts sharply with the super-exponential complexity of causal inference methods, is enabled by the nonlinear ANOVA capabilities of GPs used as a sensing mechanism.

2.
Proc Natl Acad Sci U S A ; 120(9): e2218375120, 2023 02 28.
Article in English | MEDLINE | ID: mdl-36821583

ABSTRACT

The recent increase in openly available ancient human DNA samples allows for large-scale meta-analysis applications. Trans-generational past human mobility is one of the key aspects to which ancient genomics can contribute, since changes in genetic ancestry (unlike cultural changes seen in the archaeological record) necessarily reflect movements of people. Here, we present an algorithm for spatiotemporal mapping of genetic profiles, which allows for direct estimates of past human mobility from large ancient genomic datasets. The key idea of the method is to derive a spatial probability surface of genetic similarity for each individual in its respective past. This is achieved by first creating an interpolated ancestry field through space and time based on multivariate statistics and Gaussian process regression and then using this field to map the ancient individuals into space according to their genetic profile. We apply this algorithm to a dataset of 3138 aDNA samples with genome-wide data from Western Eurasia in the last 10,000 y. Finally, we condense this sample-wise record with a simple summary statistic into a diachronic measure of mobility for subregions in Western, Central, and Southern Europe. For regions and periods with sufficient data coverage, our similarity surfaces and mobility estimates show general concordance with previous results and provide a meta-perspective of genetic changes and human mobility.
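The core idea of an interpolated ancestry field through space and time can be imitated with ordinary GP regression over spatiotemporal coordinates. A minimal sketch in Python with scikit-learn; the ancestry field, coordinates, and length scales below are invented for illustration and are not the authors' data or code:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical "ancestry component" observed at (longitude, latitude, age) coordinates.
rng = np.random.default_rng(0)
coords = rng.uniform([0.0, 40.0, 0.0], [30.0, 60.0, 10000.0], size=(200, 3))

def field(c):
    # Invented smooth ancestry gradient in space (east-west) and time.
    return (0.5 + 0.3 * np.tanh((c[:, 0] - 15.0) / 10.0)
                - 0.2 * np.tanh((c[:, 2] - 5000.0) / 3000.0))

obs = field(coords) + rng.normal(0.0, 0.02, size=200)

# Anisotropic RBF: separate length scales for the two spatial axes (degrees)
# and the time axis (years), plus a noise term for measurement error.
kernel = RBF(length_scale=[10.0, 10.0, 3000.0]) + WhiteKernel(noise_level=1e-3)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(coords, obs)

# Interpolated ancestry value (with uncertainty) at an unsampled place and time.
mu, sd = gpr.predict([[20.0, 50.0, 3000.0]], return_std=True)
```

The posterior standard deviation is what turns such a field into the per-individual spatial probability surface the abstract describes.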


Subjects
DNA, Ancient; Genomics; Humans; History, Ancient; DNA, Ancient/analysis; Europe
3.
Mol Biol Evol ; 41(7)2024 Jul 03.
Article in English | MEDLINE | ID: mdl-38958167

ABSTRACT

Admixture between populations and species is common in nature. Since the influx of new genetic material might be either facilitated or hindered by selection, variation in mixture proportions along the genome is expected in organisms undergoing recombination. Various graph-based models have been developed to better understand these evolutionary dynamics of population splits and mixtures. However, current models assume a single mixture rate for the entire genome and do not explicitly account for linkage. Here, we introduce TreeSwirl, a novel method for inferring branch lengths and locus-specific mixture proportions by using genome-wide allele frequency data, assuming that the admixture graph is known or has been inferred. TreeSwirl builds upon TreeMix that uses Gaussian processes to estimate the presence of gene flow between diverged populations. However, in contrast to TreeMix, our model infers locus-specific mixture proportions employing a hidden Markov model that accounts for linkage. Through simulated data, we demonstrate that TreeSwirl can accurately estimate locus-specific mixture proportions and handle complex demographic scenarios. It also outperforms related D- and f-statistics in terms of accuracy and sensitivity to detect introgressed loci.


Subjects
Gene Frequency; Models, Genetic; Genetics, Population/methods; Markov Chains; Gene Flow; Genome; Computer Simulation; Genetic Linkage
4.
Nano Lett ; 24(7): 2149-2156, 2024 Feb 21.
Article in English | MEDLINE | ID: mdl-38329715

ABSTRACT

The integration time and signal-to-noise ratio are inextricably linked when performing scanning probe microscopy based on raster scanning. This often yields a large lower bound on the measurement time, for example, in nano-optical imaging experiments performed using a scanning near-field optical microscope (SNOM). Here, we utilize sparse scanning augmented with Gaussian process regression to bypass the time constraint. We apply this approach to image charge-transfer polaritons in graphene residing on ruthenium trichloride (α-RuCl3) and obtain key features such as polariton damping and dispersion. Critically, nano-optical SNOM imaging data obtained via sparse sampling are in good agreement with those extracted from traditional raster scans but require 11 times fewer sampled points. As a result, Gaussian process-aided sparse spiral scans offer a major decrease in scanning time.
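The sparse-sampling idea can be imitated on synthetic data: observe only a small random subset of pixels and let GP regression fill in the rest. A hedged sketch (the fringe pattern and kernel choices are assumptions, not the paper's SNOM data):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Synthetic damped-fringe pattern as a stand-in for a nano-optical image.
def signal(x, y):
    r = np.hypot(x, y)
    return np.exp(-r / 3.0) * np.cos(2.0 * np.pi * r / 1.5)

# Full raster grid (the slow reference measurement).
g = np.linspace(0.0, 4.0, 40)
XX, YY = np.meshgrid(g, g)
full = signal(XX, YY)

# Sparse sampling: observe only ~10% of the 1600 pixels.
rng = np.random.default_rng(1)
idx = rng.choice(40 * 40, size=160, replace=False)
pts = np.column_stack([XX.ravel()[idx], YY.ravel()[idx]])
obs = full.ravel()[idx]

# GP regression reconstructs the full image from the sparse samples.
gpr = GaussianProcessRegressor(kernel=RBF(0.5) + WhiteKernel(1e-4), normalize_y=True)
gpr.fit(pts, obs)
recon = gpr.predict(np.column_stack([XX.ravel(), YY.ravel()])).reshape(40, 40)
rmse = np.sqrt(np.mean((recon - full) ** 2))
```

Here a tenth of the raster pixels suffice to reconstruct the smooth fringe field, mirroring in spirit the roughly 11-fold reduction in sampled points reported in the abstract.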

5.
BMC Bioinformatics ; 25(1): 104, 2024 Mar 08.
Article in English | MEDLINE | ID: mdl-38459430

ABSTRACT

The identification of tumor-specific molecular dependencies is essential for the development of effective cancer therapies. Genetic and chemical perturbations are powerful tools for discovering these dependencies. Even though chemical perturbations can be applied to primary cancer samples at large scale, the interpretation of experiment outcomes is often complicated by the fact that one chemical compound can affect multiple proteins. To overcome this challenge, Batzilla et al. (PLoS Comput Biol 18(8): e1010438, 2022) proposed DepInfeR, a regularized multi-response regression model designed to identify and estimate specific molecular dependencies of individual cancers from their ex-vivo drug sensitivity profiles. Inspired by their work, we propose a Bayesian extension to DepInfeR. Our proposed approach offers several advantages over DepInfeR, including e.g. the ability to handle missing values in both protein-drug affinity and drug sensitivity profiles without the need for data pre-processing steps such as imputation. Moreover, our approach uses Gaussian Processes to capture more complex molecular dependency structures, and provides probabilistic statements about whether a protein in the protein-drug affinity profiles is informative to the drug sensitivity profiles. Simulation studies demonstrate that our proposed approach achieves better prediction accuracy, and is able to discover unreported dependency structures.


Subjects
Neoplasms; Humans; Bayes Theorem; Neoplasms/drug therapy; Neoplasms/metabolism; Computer Simulation
6.
Hum Brain Mapp ; 45(7): e26692, 2024 May.
Article in English | MEDLINE | ID: mdl-38712767

ABSTRACT

In neuroimaging studies, combining data collected from multiple study sites or scanners is becoming common to increase the reproducibility of scientific discoveries. At the same time, unwanted variations arise from the use of different scanners (inter-scanner biases), which need to be corrected before downstream analyses to facilitate replicable research and prevent spurious findings. While statistical harmonization methods such as ComBat have become popular in mitigating inter-scanner biases in neuroimaging, recent methodological advances have shown that harmonizing heterogeneous covariances results in higher data quality. In vertex-level cortical thickness data, heterogeneity in spatial autocorrelation is a critical factor that affects covariance heterogeneity. Our work proposes a new statistical harmonization method called spatial autocorrelation normalization (SAN) that preserves homogeneous covariance in vertex-level cortical thickness data across different scanners. We use an explicit Gaussian process to characterize scanner-invariant and scanner-specific variations to reconstruct spatially homogeneous data across scanners. SAN is computationally feasible, and it easily allows the integration of existing harmonization methods. We demonstrate the utility of the proposed method using cortical thickness data from the Social Processes Initiative in the Neurobiology of the Schizophrenia(s) (SPINS) study. SAN is publicly available as an R package.


Subjects
Cerebral Cortex; Magnetic Resonance Imaging; Schizophrenia; Humans; Magnetic Resonance Imaging/standards; Magnetic Resonance Imaging/methods; Schizophrenia/diagnostic imaging; Schizophrenia/pathology; Cerebral Cortex/diagnostic imaging; Cerebral Cortex/anatomy & histology; Neuroimaging/methods; Neuroimaging/standards; Image Processing, Computer-Assisted/methods; Image Processing, Computer-Assisted/standards; Male; Female; Adult; Normal Distribution; Cerebral Cortical Thickness
7.
Hum Brain Mapp ; 45(10): e26763, 2024 Jul 15.
Article in English | MEDLINE | ID: mdl-38943369

ABSTRACT

In this article, we develop an analytical approach for estimating brain connectivity networks that accounts for subject heterogeneity. More specifically, we consider a novel extension of a multi-subject Bayesian vector autoregressive model that estimates group-specific directed brain connectivity networks and accounts for the effects of covariates on the network edges. We adopt a flexible approach, allowing for (possibly) nonlinear effects of the covariates on edge strength via a novel Bayesian nonparametric prior that employs a weighted mixture of Gaussian processes. For posterior inference, we achieve computational scalability by implementing a variational Bayes scheme. Our approach enables simultaneous estimation of group-specific networks and selection of relevant covariate effects. We show improved performance over competing two-stage approaches on simulated data. We apply our method on resting-state functional magnetic resonance imaging data from children with a history of traumatic brain injury (TBI) and healthy controls to estimate the effects of age and sex on the group-level connectivities. Our results highlight differences in the distribution of parent nodes. They also suggest alteration in the relation of age, with peak edge strength in children with TBI, and differences in effective connectivity strength between males and females.


Subjects
Bayes Theorem; Brain Injuries, Traumatic; Connectome; Magnetic Resonance Imaging; Humans; Brain Injuries, Traumatic/diagnostic imaging; Brain Injuries, Traumatic/physiopathology; Female; Male; Child; Adolescent; Connectome/methods; Brain/diagnostic imaging; Brain/physiopathology; Nerve Net/diagnostic imaging; Nerve Net/physiopathology; Models, Neurological
8.
Hum Brain Mapp ; 45(3): e26632, 2024 Feb 15.
Article in English | MEDLINE | ID: mdl-38379519

ABSTRACT

Since the introduction of the BrainAGE method, novel machine learning methods for brain age prediction have continued to emerge. The idea of estimating the chronological age from magnetic resonance images proved to be an interesting field of research due to the relative simplicity of its interpretation and its potential use as a biomarker of brain health. We revised our previous BrainAGE approach, originally utilising relevance vector regression (RVR), and substituted it with Gaussian process regression (GPR), which enables more stable processing of larger datasets, such as the UK Biobank (UKB). In addition, we extended the global BrainAGE approach to regional BrainAGE, providing spatially specific scores for five brain lobes per hemisphere. We tested the performance of the new algorithms under several different conditions and investigated their validity on the ADNI and schizophrenia samples, as well as on a synthetic dataset of neocortical thinning. The results show an improved performance of the reframed global model on the UKB sample with a mean absolute error (MAE) of less than 2 years and a significant difference in BrainAGE between healthy participants and patients with Alzheimer's disease and schizophrenia. Moreover, the workings of the algorithm show meaningful effects for a simulated neocortical atrophy dataset. The regional BrainAGE model performed well on two clinical samples, showing disease-specific patterns for different levels of impairment. The results demonstrate that the new improved algorithms provide reliable and valid brain age estimations.


Subjects
Alzheimer Disease; Schizophrenia; Humans; Workflow; Brain/diagnostic imaging; Brain/pathology; Schizophrenia/diagnostic imaging; Schizophrenia/pathology; Alzheimer Disease/diagnostic imaging; Alzheimer Disease/pathology; Machine Learning; Magnetic Resonance Imaging/methods
9.
J Comput Chem ; 45(10): 638-647, 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38082539

ABSTRACT

In the last several years, there has been a surge in the development of machine learning potential (MLP) models for describing molecular systems. We are interested in a particular area of this field - the training of system-specific MLPs for reactive systems - with the goal of using these MLPs to accelerate free energy simulations of chemical and enzyme reactions. To help new members in our labs become familiar with the basic techniques, we have put together a self-guided Colab tutorial (https://cc-ats.github.io/mlp_tutorial/), which we expect to be also useful to other young researchers in the community. Our tutorial begins with the introduction of simple feedforward neural network (FNN) and kernel-based (using Gaussian process regression, GPR) models by fitting the two-dimensional Müller-Brown potential. Subsequently, two simple descriptors are presented for extracting features of molecular systems: symmetry functions (including the ANI variant) and embedding neural networks (such as DeepPot-SE). Lastly, these features will be fed into FNN and GPR models to reproduce the energies and forces for the molecular configurations in a Claisen rearrangement reaction.
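In the spirit of the tutorial's first step, a minimal GPR fit of the two-dimensional Müller-Brown potential might look like the following sketch. This uses scikit-learn rather than the tutorial's own stack, and the sampling region and kernel settings are illustrative assumptions:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Mueller-Brown potential with its standard parameters.
A  = np.array([-200.0, -100.0, -170.0, 15.0])
a  = np.array([-1.0, -1.0, -6.5, 0.7])
b  = np.array([0.0, 0.0, 11.0, 0.6])
c  = np.array([-10.0, -10.0, -6.5, 0.7])
x0 = np.array([1.0, 0.0, -0.5, -1.0])
y0 = np.array([0.0, 0.5, 1.5, 1.0])

def mueller_brown(x, y):
    dx, dy = x - x0, y - y0
    return np.sum(A * np.exp(a * dx**2 + b * dx * dy + c * dy**2))

# Random training configurations over the region holding the three minima.
rng = np.random.default_rng(0)
X = rng.uniform([-1.5, -0.5], [1.2, 2.0], size=(400, 2))
y = np.array([mueller_brown(px, py) for px, py in X])

gpr = GaussianProcessRegressor(
    kernel=ConstantKernel(100.0) * RBF(0.3), normalize_y=True, alpha=1e-6
).fit(X, y)

# Accuracy on held-out configurations.
Xt = rng.uniform([-1.5, -0.5], [1.2, 2.0], size=(50, 2))
yt = np.array([mueller_brown(px, py) for px, py in Xt])
pred = gpr.predict(Xt)
rmse = np.sqrt(np.mean((pred - yt) ** 2))
```

Forces would follow from differentiating the GP mean, which is the step that matters for MD applications.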

10.
J Comput Chem ; 45(11): 761-776, 2024 Apr 30.
Article in English | MEDLINE | ID: mdl-38124290

ABSTRACT

Structure and function in nanoscale atomistic assemblies are tightly coupled, and every atom with its specific position and even every electron will have a decisive effect on the electronic structure, and hence, on the molecular properties. Molecular simulations of nanoscopic atomistic structures therefore require accurately resolved three-dimensional input structures. If extracted from experiment, these structures often suffer from severe uncertainties, of which the lack of information on hydrogen atoms is a prominent example. Hence, experimental structures require careful review and curation, which is a time-consuming and error-prone process. Here, we present a fast and robust protocol for the automated structure analysis and pH-consistent protonation, in short, ASAP. For biomolecules as a target, the ASAP protocol integrates sequence analysis and error assessment of a given input structure. ASAP allows for pKa prediction from reference data through Gaussian process regression including uncertainty estimation and connects to system-focused atomistic modeling described in Brunken and Reiher (J. Chem. Theory Comput. 16, 2020, 1646). Although focused on biomolecules, ASAP can be extended to other nanoscopic objects, because most of its design elements rely on a general graph-based foundation guaranteeing transferability. The modular character of the underlying pipeline supports different degrees of automation, which allows for (i) efficient feedback loops for human-machine interaction with a low entrance barrier and for (ii) integration into autonomous procedures such as automated force field parametrizations. This facilitates fast switching of the pH-state through on-the-fly system-focused reparametrization during a molecular simulation at virtually no extra computational cost.

11.
J Comput Chem ; 45(15): 1235-1246, 2024 Jun 05.
Article in English | MEDLINE | ID: mdl-38345165

ABSTRACT

Machine learning (ML) force fields are revolutionizing molecular dynamics (MD) simulations as they bypass the computational cost associated with ab initio methods but do not sacrifice accuracy in the process. In this work, the GPyTorch library is used to create Gaussian process regression (GPR) models that are interfaced with the next-generation ML force field FFLUX. These models predict atomic properties of different molecular configurations that appear in a progressing MD simulation. An improved kernel function is utilized to correctly capture the periodicity of the input descriptors. The first FFLUX molecular simulations of ammonia, methanol, and malondialdehyde with the updated kernel are performed. Geometry optimizations with the GPR models result in highly accurate final structures with a maximum root-mean-squared deviation of 0.064 Å and sub-kJ mol-1 total energy predictions. Additionally, the models are tested in 298 K MD simulations with FFLUX to benchmark for robustness. The resulting energy and force predictions throughout the simulation are in excellent agreement with ab initio data for ammonia and methanol but decrease in quality for malondialdehyde due to the increased system complexity. GPR model improvements are discussed, which will ensure the future scalability to larger systems.
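The point about capturing periodicity in the input descriptors can be illustrated with a periodic kernel. This sketch uses scikit-learn's ExpSineSquared kernel on an invented torsional profile; it is not the FFLUX/GPyTorch implementation:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ExpSineSquared, WhiteKernel

# Toy 3-fold torsional energy profile: 2*pi-periodic in the angle.
theta = np.linspace(0.0, 2.0 * np.pi, 30, endpoint=False).reshape(-1, 1)
energy = np.cos(3.0 * theta).ravel()

# Periodic kernel with the period fixed at 2*pi, so the model is smooth
# across the 0 / 2*pi boundary where a plain RBF would see a discontinuity.
kernel = ExpSineSquared(length_scale=1.0, periodicity=2.0 * np.pi,
                        periodicity_bounds="fixed") + WhiteKernel(1e-5)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(theta, energy)

# Predictions just above 0 and just below 2*pi should agree.
lo, hi = gpr.predict([[0.01], [2.0 * np.pi - 0.01]])
```

With a non-periodic kernel, the two boundary predictions would generally disagree even though they describe nearly identical molecular geometries.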

12.
Biometrics ; 80(1)2024 Jan 29.
Article in English | MEDLINE | ID: mdl-38372403

ABSTRACT

Precision medicine is a promising framework for generating evidence to improve health and health care. Yet, a gap persists between the ever-growing number of statistical precision medicine strategies for evidence generation and implementation in real-world clinical settings, and the strategies for closing this gap will likely be context-dependent. In this paper, we consider the specific context of partial compliance to wound management among patients with peripheral artery disease. Using a Gaussian process surrogate for the value function, we show the feasibility of using Bayesian optimization to learn optimal individualized treatment rules. Further, we expand beyond the common precision medicine task of learning an optimal individualized treatment rule to the characterization of classes of individualized treatment rules and show how those findings can be translated into clinical contexts.


Subjects
Precision Medicine; Humans; Bayes Theorem
13.
Stat Med ; 43(16): 3062-3072, 2024 Jul 20.
Article in English | MEDLINE | ID: mdl-38803150

ABSTRACT

This article is concerned with sample size determination methodology for prediction models. We propose to combine the individual calculations via learning-type curves. We suggest two distinct ways of doing so, a deterministic skeleton of a learning curve and a Gaussian process centered upon its deterministic counterpart. We employ several learning algorithms for modeling the primary endpoint and distinct measures for trial efficacy. We find that the performance may vary with the sample size, but borrowing information across sample size universally improves the performance of such calculations. The Gaussian process-based learning curve appears more robust and statistically efficient, while computational efficiency is comparable. We suggest that anchoring against historical evidence when extrapolating sample sizes should be adopted when such data are available. The methods are illustrated on binary and survival endpoints.
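A minimal sketch of the GP-centered learning-curve idea, assuming a toy classifier and synthetic data rather than the paper's binary and survival endpoints:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Simulate a learning curve: test accuracy of a classifier vs. training-set size.
X, y = make_classification(n_samples=3000, n_features=20, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=1000, random_state=0)

sizes = np.array([50, 100, 200, 400, 800, 1600])
acc = np.array([
    LogisticRegression(max_iter=1000).fit(Xtr[:n], ytr[:n]).score(Xte, yte)
    for n in sizes
])

# A GP over log sample size smooths the noisy evaluations and attaches
# uncertainty to performance extrapolated to an unevaluated sample size.
gpr = GaussianProcessRegressor(kernel=RBF(1.0) + WhiteKernel(1e-3), normalize_y=True)
gpr.fit(np.log(sizes).reshape(-1, 1), acc)
mu, sd = gpr.predict(np.log([[2000.0]]), return_std=True)
```

Borrowing strength across sample sizes in this way is the mechanism the abstract credits for the improved calculations; anchoring against historical evaluations would simply add more (n, accuracy) points.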


Subjects
Algorithms; Models, Statistical; Humans; Sample Size; Learning Curve; Normal Distribution; Computer Simulation; Survival Analysis
14.
Stat Med ; 43(6): 1135-1152, 2024 Mar 15.
Article in English | MEDLINE | ID: mdl-38197220

ABSTRACT

The prevalence of chronic non-communicable diseases such as obesity has noticeably increased in the last decade. The study of these diseases in early life is of paramount importance in determining their course in adult life and in supporting clinical interventions. Recently, attention has been drawn to approaches that study the alteration of metabolic pathways in obese children. In this work, we propose a novel joint modeling approach for the analysis of growth biomarkers and metabolite associations, to unveil metabolic pathways related to childhood obesity. Within a Bayesian framework, we flexibly model the temporal evolution of growth trajectories and metabolic associations through the specification of a joint nonparametric random effect distribution, with the main goal of clustering subjects, thus identifying risk sub-groups. Growth profiles as well as patterns of metabolic associations determine the clustering structure. Inclusion of risk factors is straightforward through the specification of a regression term. We demonstrate the proposed approach on data from the Growing Up in Singapore Towards healthy Outcomes cohort study, based in Singapore. Posterior inference is obtained via a tailored MCMC algorithm, involving a nonparametric prior with mixed support. Our analysis has identified potential key pathways in obese children that allow for the exploration of possible molecular mechanisms associated with childhood obesity.


Subjects
Pediatric Obesity; Adult; Humans; Child; Pediatric Obesity/epidemiology; Cohort Studies; Bayes Theorem; Risk Factors; Biomarkers
15.
J Anim Ecol ; 93(5): 632-645, 2024 May.
Article in English | MEDLINE | ID: mdl-38297453

ABSTRACT

Identifying important demographic drivers of population dynamics is fundamental for understanding life-history evolution and implementing effective conservation measures. Integrated population models (IPMs) coupled with transient life table response experiments (tLTREs) allow ecologists to quantify the contributions of demographic parameters to observed population change. While IPMs can estimate parameters that are not estimable using any data source alone, for example, immigration, the estimated contribution of such parameters to population change is prone to bias. Currently, it is unclear when robust conclusions can be drawn from them. We sought to understand the drivers of a rebounding southern elephant seal population on Marion Island using the IPM-tLTRE framework, applied to count and mark-recapture data on 9500 female seals over nearly 40 years. Given the uncertainty around IPM-tLTRE estimates of immigration, we also aimed to investigate the utility of simulation and sensitivity analyses as general tools for evaluating the robustness of conclusions obtained in this framework. Using a Bayesian IPM and tLTRE analysis, we quantified the contributions of survival, immigration and population structure to population growth. We assessed the sensitivity of our estimates to the choice of multivariate priors on immigration and other vital rates. To do so we make a novel application of Gaussian process priors, in comparison with commonly used shrinkage priors. Using simulation, we assessed our model's ability to estimate the demographic contribution of immigration under different levels of temporal variance in immigration. The tLTRE analysis suggested that adult survival and immigration were the most important drivers of recent population growth. While the contribution of immigration was sensitive to prior choices, the estimate was consistently large. Furthermore, our simulation study validated the importance of immigration by showing that our estimate of its demographic contribution is unlikely to be a biased overestimate. Our results highlight the connectivity between distant populations of southern elephant seals, illustrating that female dispersal can be important in regulating the abundance of local populations even when natal site fidelity is high. More generally, we demonstrate how robust ecological conclusions may be obtained about immigration from the IPM-tLTRE framework, by combining sensitivity analysis and simulation.


Subjects
Models, Biological; Population Dynamics; Seals, Earless; Animals; Seals, Earless/physiology; Female; Animal Migration; Bayes Theorem; Computer Simulation
16.
BMC Med Res Methodol ; 24(1): 26, 2024 Jan 27.
Article in English | MEDLINE | ID: mdl-38281017

ABSTRACT

BACKGROUND: The rapidly growing burden of non-communicable diseases (NCDs) among people living with HIV in sub-Saharan Africa (SSA) has expanded the number of multidisease models predicting future care needs and health system priorities. Usefulness of these models depends on their ability to replicate real-life data and be readily understood and applied by public health decision-makers; yet existing simulation models of HIV comorbidities are computationally expensive and require large numbers of parameters and long run times, which hinders their utility in resource-constrained settings. METHODS: We present a novel, user-friendly emulator that can efficiently approximate complex simulators of long-term HIV and NCD outcomes in Africa. We describe how to implement the emulator via a tutorial based on publicly available data from Kenya. Emulator parameters relating to incidence and prevalence of HIV, hypertension and depression were derived from our own agent-based simulation model and other published literature. Gaussian processes were used to fit the emulator to simulator estimates, assuming presence of noise for design points. Bayesian posterior predictive checks and leave-one-out cross validation confirmed the emulator's descriptive accuracy. RESULTS: In this example, our emulator resulted in a 13-fold (95% Confidence Interval (CI): 8-22) improvement in computing time compared to that of more complex chronic disease simulation models. One emulator run took 3.00 seconds (95% CI: 1.65-5.28) on a 64-bit operating system laptop with 8.00 gigabytes (GB) of Random Access Memory (RAM), compared to > 11 hours for 1000 simulator runs on a high-performance computing cluster with 1500 GBs of RAM. Pareto k estimates were < 0.70 for all emulations, which demonstrates sufficient predictive accuracy of the emulator. CONCLUSIONS: The emulator presented in this tutorial offers a practical and flexible modelling tool that can help inform health policy-making in countries with a generalized HIV epidemic and growing NCD burden. Future emulator applications could be used to forecast the changing burden of HIV, hypertension and depression over an extended (> 10 year) period, estimate longer-term prevalence of other co-occurring conditions (e.g., postpartum depression among women living with HIV), and project the impact of nationally-prioritized interventions such as national health insurance schemes and differentiated care models.


Subjects
HIV Infections; Hypertension; Noncommunicable Diseases; Humans; Female; HIV Infections/epidemiology; HIV Infections/therapy; Noncommunicable Diseases/epidemiology; Noncommunicable Diseases/therapy; Bayes Theorem; Computer Simulation; Hypertension/epidemiology; Hypertension/therapy
17.
J Biopharm Stat ; : 1-11, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38557411

ABSTRACT

The incorporation of real-world data (RWD) into medical product development and evaluation has exhibited consistent growth. However, there is no universally adopted method for deciding how much information to borrow from external data. This paper proposes a study design methodology called Tree-based Monte Carlo (TMC) that dynamically integrates patients from various RWD sources to calculate the treatment effect based on the similarity between clinical trial and RWD. Initially, a propensity score is developed to gauge the resemblance between clinical trial data and each real-world dataset. Utilizing this similarity metric, we construct a hierarchical clustering tree that delineates varying degrees of similarity between each RWD source and the clinical trial data. Ultimately, a Gaussian process methodology is employed across this hierarchical clustering framework to synthesize the projected treatment effects of the external group. Simulation results show that our clustering tree successfully identifies similarity. Data sources exhibiting greater similarity with the clinical trial are accorded higher weights in the treatment estimation process, while less congruent sources receive comparatively lower emphasis. Compared with another Bayesian method, the meta-analytic predictive (MAP) prior, our proposed method's estimator is closer to the true value and has smaller bias.

18.
Phytochem Anal ; 35(6): 1345-1357, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38686612

ABSTRACT

INTRODUCTION: Nonstationary, nonlinear mass transfer in traditional Chinese medicine (TCM) extraction poses challenges to correlating process characteristics with quality parameters, particularly in defining clear parameter ranges for the process. OBJECTIVES: The aim of the study was to provide a solution for quality consistency analysis in TCM preparation processes. MATERIALS AND METHODS: Salvia miltiorrhiza was taken as an example for 15 batches of standard decoction. Using aqueous extract, alcoholic extract, and the content of salvianolic acid B as herb material key quality attributes, multiple nonlinear regression, Gaussian process regression, and artificial neural network models were employed to predict the key quality attributes including the paste yield, the content of salvianolic acid B, and the transfer rate. The evaluation criteria were root mean square error, mean absolute percentage error, and R2. RESULTS: The Gaussian process regression model had the best prediction effect on the paste yield, the content of salvianolic acid B, and the transfer rate, with R2 being 0.918, 0.934, and 0.919, respectively. Utilizing Gaussian process regression model confidence intervals, along with Shewhart control and intervals optimized through process capability index analysis, the quality control range of the standard decoction was determined as follows: paste yield, 25.14%-33.19%; salvianolic acid B content, 2.62%-4.78%; and transfer rate, 56.88%-64.80%. CONCLUSION: This study combined the preparation process of standard decoction with the Gaussian process regression model, accurately predicted the key quality attributes, and determined the quality parameter range by using process analysis tools, providing a new idea for the quality consistency standard of TCM processes.


Subjects
Benzofurans; Drugs, Chinese Herbal; Salvia miltiorrhiza; Salvia miltiorrhiza/chemistry; Drugs, Chinese Herbal/chemistry; Drugs, Chinese Herbal/standards; Drugs, Chinese Herbal/analysis; Benzofurans/analysis; Regression Analysis; Quality Control; Neural Networks, Computer; Normal Distribution; Depsides
19.
Sensors (Basel) ; 24(8)2024 Apr 10.
Article in English | MEDLINE | ID: mdl-38676048

ABSTRACT

In addition to the filter coefficients, the location of the microphone array is a crucial factor in improving the overall performance of a beamformer. Optimal microphone array placement can considerably enhance speech quality. However, the optimization problem with microphone configuration variables is non-convex and highly non-linear. The heuristic algorithms frequently employed are time-consuming and risk missing the optimal microphone array placement design. We extend the Bayesian optimization method to solve the microphone array configuration design problem. The proposed Bayesian optimization method does not depend on gradient and Hessian approximations and makes use of all the information available from prior evaluations. The Bayesian optimization method consists of Gaussian process regression and an acquisition function. Gaussian process regression gives the objective function a prior probabilistic model, which is exploited while integrating out uncertainty. The acquisition function uses the posterior distribution and the incumbent optimum to decide the next placement point to evaluate. Numerical experiments demonstrate that the Bayesian optimization method finds a similar or better microphone array placement than the hybrid descent method while significantly reducing computational time: in our numerical results it is at least four times faster at finding the optimal microphone array configuration.
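The GP-plus-acquisition-function loop described here can be sketched on a one-dimensional toy objective (a stand-in; a real beamformer objective would come from an acoustic model). Expected improvement is used as the acquisition function, which is an assumption since the abstract does not name one:

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

# Hypothetical 1-D placement-quality objective (to be maximized).
def objective(x):
    return -np.sin(3.0 * x) - x**2 + 0.7 * x

bounds = (-2.0, 2.0)
rng = np.random.default_rng(0)
X = rng.uniform(bounds[0], bounds[1], size=(4, 1))   # initial random placements
y = objective(X).ravel()

grid = np.linspace(bounds[0], bounds[1], 400).reshape(-1, 1)
for _ in range(15):
    # Posterior GP model of the objective given all evaluations so far.
    gpr = GaussianProcessRegressor(kernel=Matern(nu=2.5),
                                   normalize_y=True, alpha=1e-6).fit(X, y)
    mu, sd = gpr.predict(grid, return_std=True)
    # Expected-improvement acquisition over the candidate grid.
    best = y.max()
    zscore = (mu - best) / np.maximum(sd, 1e-12)
    ei = (mu - best) * norm.cdf(zscore) + sd * norm.pdf(zscore)
    x_next = grid[np.argmax(ei)]
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next)[0])

x_opt = float(X[np.argmax(y), 0])
```

No gradients or Hessians of the objective appear anywhere in the loop; every past evaluation feeds the GP posterior, which is the property the abstract emphasizes.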

20.
Sensors (Basel) ; 24(9)2024 Apr 24.
Article in English | MEDLINE | ID: mdl-38732793

ABSTRACT

During the implementation of the Internet of Things (IoT), the performance of communication and sensing antennas that are embedded in smart surfaces or smart devices can be affected by objects in their reactive near field due to detuning and antenna mismatch. Matching networks have been proposed to re-establish impedance matching when antennas become detuned due to environmental factors. In this work, the change in the reflection coefficient at the antenna, due to the presence of objects, is first characterized as a function of the frequency and object distance by applying Gaussian process regression on experimental data. Based on this characterization, for random object positions, it is shown through simulation that a dynamic environment can lower the reliability of a matching network by up to 90%, depending on the type of object, the probability distribution of the object distance, and the required bandwidth. As an alternative to complex and power-consuming real-time adaptive matching, a new, resilient network tuning strategy is proposed that takes into account these random variations. This new approach increases the reliability of the system by 10% to 40% in these dynamic environment scenarios.
