Results 1 - 6 of 6
1.
J Chem Inf Model; 63(11): 3288-3306, 2023 Jun 12.
Article in English | MEDLINE | ID: mdl-37208794

ABSTRACT

While polymerization-induced self-assembly (PISA) has become a preferred synthetic route toward amphiphilic block copolymer self-assemblies, predicting their phase behavior from experimental design is extremely challenging, requiring the time- and labor-intensive creation of empirical phase diagrams whenever self-assemblies of novel monomer pairs are sought for specific applications. To alleviate this burden, we develop here the first framework for a data-driven methodology for the probabilistic modeling of PISA morphologies, based on a selection and suitable adaptation of statistical machine learning methods. As the complexity of PISA precludes generating large volumes of training data with in silico simulations, we focus on interpretable low-variance methods that can be interrogated for conformity with chemical intuition and that promise to work well with the only 592 training data points we curated from the PISA literature. We found that among the evaluated linear models, generalized additive models, and rule and tree ensembles, all but the linear models show decent interpolation performance, with an estimated error rate of around 0.2 and an expected cross-entropy loss (surprisal) of about 1 bit when predicting the mixture of morphologies formed from monomer pairs already encountered in the training data. When considering extrapolation to new monomer combinations, the model performance is weaker, but the best model (random forest) still achieves highly nontrivial prediction performance (0.27 error rate, 1.6 bit surprisal), which renders it a good candidate to support the creation of empirical phase diagrams for new monomers and conditions. Indeed, we find in three case studies that, when used to actively learn phase diagrams, the model is able to select a smart set of experiments that lead to satisfactory phase diagrams after observing only relatively few data points (5-16) for the targeted conditions. The data set as well as all model training and evaluation code are publicly available through the GitHub repository of the last author.
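
Below is a minimal, illustrative Python sketch of the interpolation vs. extrapolation evaluation described above, using a random-forest classifier (the best-performing model class reported). The descriptors, placeholder data, and grouping scheme are assumptions for illustration only; they are not the authors' curated data set or exact protocol.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import GroupKFold, StratifiedKFold, cross_val_score

    rng = np.random.default_rng(0)

    # Placeholder stand-in for the 592 curated literature points: a few numeric
    # formulation descriptors (e.g. block degrees of polymerization, solids content)
    # and a morphology label (e.g. spheres / worms / vesicles / mixed phases).
    X = rng.normal(size=(592, 8))
    y = rng.integers(0, 4, size=592)
    monomer_pair = rng.integers(0, 40, size=592)   # identity of the monomer combination

    model = RandomForestClassifier(n_estimators=500, random_state=0)

    # Interpolation setting: test points may share a monomer pair with training points.
    interp = cross_val_score(model, X, y, cv=StratifiedKFold(5, shuffle=True, random_state=0))

    # Extrapolation setting: whole monomer pairs are held out, mimicking prediction
    # for previously unseen monomer combinations.
    extrap = cross_val_score(model, X, y, cv=GroupKFold(5), groups=monomer_pair)

    print(f"interpolation accuracy {interp.mean():.2f}, extrapolation accuracy {extrap.mean():.2f}")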


Subjects
Machine Learning , Polymerization , Polymers/chemistry , Linear Models
2.
Neuroimage; 263: 119592, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36031185

ABSTRACT

Neural processes are complex and difficult to image. This paper presents a new space-time-resolved brain imaging framework, called Neurophysiological Process Imaging (NPI), that identifies neurophysiological processes within the cerebral cortex at the macroscopic scale. By fitting uncoupled neural mass models to each electromagnetic source time-series using a novel nonlinear inference method, population-averaged membrane potentials and synaptic connection strengths are efficiently and accurately inferred and imaged across the whole cerebral cortex at the resolution afforded by source imaging. The efficiency of the framework allows the augmented source imaging results to be returned overnight using high-performance computing, suggesting that it can be used as a practical and novel imaging tool. To demonstrate the framework, it was applied to resting-state magnetoencephalographic source estimates. The results suggest that endogenous inputs to cingulate, occipital, and inferior frontal cortex are essential modulators of resting-state alpha power. Moreover, endogenous input and inhibitory and excitatory neural populations play varied roles in mediating alpha power in different resting-state sub-networks. The framework can be applied to arbitrary neural mass models and has broad applicability for imaging neural processes in different brain states.
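
As a rough illustration of the per-source inference idea (each source time-series is filtered independently with a nonlinear state estimator), here is a Python sketch of a generic extended Kalman filter applied to a toy two-state neural-mass-like model. The model, its parameter values, and the placeholder data are assumptions; this is not the paper's inference method or its actual neural mass model.

    import numpy as np

    # Toy two-state neural-mass-like model (alpha-function dynamics with a sigmoid
    # firing-rate nonlinearity), discretized with Euler steps. Parameter values are arbitrary.
    dt, a, b, r, v0 = 1e-3, 100.0, 5.0, 0.56, 6.0
    sig = lambda v: 1.0 / (1.0 + np.exp(-r * (v - v0)))
    dsig = lambda v: r * sig(v) * (1.0 - sig(v))

    def f(x):                    # state transition: x = [membrane potential v, its derivative z]
        v, z = x
        return np.array([v + dt * z,
                         z + dt * (a * b * sig(v) - 2 * a * z - a**2 * v)])

    def F(x):                    # Jacobian of f, needed by the extended Kalman filter
        v, z = x
        return np.eye(2) + dt * np.array([[0.0, 1.0],
                                          [a * b * dsig(v) - a**2, -2 * a]])

    H = np.array([[1.0, 0.0]])   # only the membrane potential is observed
    Q = np.eye(2) * 1e-4         # process noise covariance
    R = np.array([[1e-2]])       # observation noise covariance

    def ekf(y_series):
        """Run an extended Kalman filter over one source time-series."""
        x, P = np.zeros(2), np.eye(2)
        estimates = []
        for y in y_series:
            x, Fx = f(x), F(x)                 # predict (both evaluated at the prior estimate)
            P = Fx @ P @ Fx.T + Q
            S = H @ P @ H.T + R                # innovation covariance
            K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
            x = x + K @ (np.array([y]) - H @ x)
            P = (np.eye(2) - K @ H) @ P
            estimates.append(x.copy())
        return np.array(estimates)

    # Each source is filtered independently ("uncoupled" models), so the whole
    # cortex can be processed in parallel on a computing cluster.
    sources = np.random.randn(10, 1000) * 0.1  # placeholder source time-series
    states = [ekf(s) for s in sources]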


Subjects
Alpha Rhythm , Magnetic Resonance Imaging , Humans , Brain/diagnostic imaging , Brain/physiology , Magnetoencephalography , Brain Mapping
3.
J Chem Theory Comput; 20(20): 8886-8896, 2024 Oct 22.
Article in English | MEDLINE | ID: mdl-39356714

ABSTRACT

Graph neural networks (GNNs) have emerged as powerful tools for quantum chemical property prediction, leveraging the inherent graph structure of molecular systems. GNNs depend on an edge-to-node aggregation mechanism for combining edge representations into node representations. Unfortunately, existing learnable edge-to-node aggregation methods substantially increase the number of parameters and, thus, the computational cost relative to simple sum aggregation. Worse, as we report here, they often fail to improve predictive accuracy. We therefore propose a novel learnable edge-to-node aggregation mechanism that aims to improve the accuracy and parameter efficiency of GNNs in predicting molecular properties. The new mechanism, called "patch aggregation", is inspired by the Multi-Head Attention and Mixture of Experts machine learning techniques. We have incorporated the patch aggregation method into the specialized, state-of-the-art GNN models SchNet, DimeNet++, SphereNet, TensorNet, and VisNet and show that patch aggregation consistently outperforms existing learnable and nonlearnable aggregation techniques (sum, multilayer perceptron, softmax, and set transformer aggregation) in the prediction of molecular properties such as QM9 thermodynamic properties and MD17 molecular dynamics trajectory energies and forces. We also find that patch aggregation not only improves prediction accuracy but is also parameter-efficient, making it an attractive option for practical applications in which computational resources are limited. Further, we show that patch aggregation can be applied across different GNN models. Overall, patch aggregation is a powerful edge-to-node aggregation mechanism that improves the accuracy of molecular property predictions by GNNs.
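
The exact formulation of patch aggregation is not reproduced here, so the following PyTorch sketch only illustrates where a learnable edge-to-node aggregation sits in a GNN layer, using a simple gated weighted sum as a generic stand-in. The module name, dimensions, and gating scheme are assumptions, not the paper's mechanism.

    import torch
    import torch.nn as nn

    class GatedEdgeAggregation(nn.Module):
        """Learnable edge-to-node aggregation: each edge message is weighted by a
        learned gate before being summed into its destination node (a generic
        stand-in for the role played by patch aggregation)."""

        def __init__(self, edge_dim, node_dim):
            super().__init__()
            self.gate = nn.Sequential(nn.Linear(edge_dim, 1), nn.Sigmoid())
            self.proj = nn.Linear(edge_dim, node_dim)

        def forward(self, edge_feat, dst_index, num_nodes):
            # edge_feat: (num_edges, edge_dim); dst_index: (num_edges,) destination node ids
            messages = self.gate(edge_feat) * self.proj(edge_feat)
            out = torch.zeros(num_nodes, messages.size(-1), device=edge_feat.device)
            return out.index_add(0, dst_index, messages)   # sum gated messages per node

    # Usage sketch: 6 edges feeding 4 nodes, with 16-dimensional edge features.
    agg = GatedEdgeAggregation(edge_dim=16, node_dim=32)
    edge_feat = torch.randn(6, 16)
    dst_index = torch.tensor([0, 0, 1, 2, 3, 3])
    node_repr = agg(edge_feat, dst_index, num_nodes=4)     # shape (4, 32)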

4.
Article in English | MEDLINE | ID: mdl-38082665

ABSTRACT

This study characterizes the neurophysiological mechanisms underlying electromagnetic imaging signals using stability analysis. Researchers have proposed that transitions between conscious awake and anaesthetised states, and between other brain states more generally, may result from changes in system stability. The concept of stability in dynamical systems theory provides a mathematical framework to describe this possibility: the degree to which a system's trajectory in phase space is affected by small perturbations determines its stability. Previous studies either used linear or oscillator-based whole-brain models that cannot represent complex cerebrocortical dynamics, or used model parameters that were pre-assumed or inferred from data but did not change over time. This study proposes a nonlinear, neurophysiologically plausible whole-cortex modeling framework to analyze the stability of brain dynamics during the emergence and disappearance of consciousness, using time-varying parameters estimated from the data. Clinical relevance: Depth of anaesthesia is typically measured through changes in EEG statistics such as the bispectral index and spectral entropy. However, these monitors have been found to fail in preventing awareness during surgery and postoperative recall. Our whole-cortex stability analysis may be useful for measuring anaesthesia levels in clinical settings, as it changes with the level of consciousness and is independent of individual differences and anaesthetic agents. The proposed method can also be used to, for example, identify brain regions critical for consciousness, locate the epileptogenic zone, and investigate the dominance of extrinsic or intrinsic factors in brain functions.
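
A minimal Python sketch of the kind of stability analysis described above: track the dominant eigenvalue of a finite-difference Jacobian of the (time-varying) dynamics along a trajectory, so that a sign change in its real part signals a loss of local stability. The toy dynamics and the drifting parameter are illustrative assumptions, not the paper's whole-cortex model or its estimated parameters.

    import numpy as np

    def jacobian(f, x, eps=1e-6):
        """Finite-difference Jacobian of the vector field f at state x."""
        n = x.size
        J = np.zeros((n, n))
        fx = f(x)
        for i in range(n):
            dx = np.zeros(n)
            dx[i] = eps
            J[:, i] = (f(x + dx) - fx) / eps
        return J

    def toy_dynamics(x, c):
        """Toy excitatory-inhibitory rate model; c is a time-varying coupling
        parameter standing in for parameters estimated from data at each time."""
        e, i = x
        return np.array([-e + np.tanh(c * e - 2.0 * i),
                         -i + np.tanh(1.5 * e)])

    # Track local stability along a trajectory as the estimated parameter drifts
    # (e.g. during loss and recovery of consciousness): the dynamics are locally
    # stable while the largest real part of the Jacobian eigenvalues stays negative.
    x = np.array([0.1, 0.1])
    for t, c in enumerate(np.linspace(0.5, 3.0, 200)):
        x = x + 0.05 * toy_dynamics(x, c)               # Euler step of the dynamics
        lam = np.linalg.eigvals(jacobian(lambda s: toy_dynamics(s, c), x)).real.max()
        if t % 50 == 0:
            print(f"t={t:3d}  c={c:.2f}  max Re(lambda) = {lam:+.3f}")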


Subjects
Anesthesia , Anesthetics , Humans , Xenon , Electroencephalography/methods , Brain/physiology
5.
Int J Neural Syst; 33(5): 2350024, 2023 May.
Article in English | MEDLINE | ID: mdl-37103982

ABSTRACT

Recent work presented a framework for space-time-resolved neurophysiological process imaging that augments existing electromagnetic source imaging techniques. In particular, a nonlinear Analytic Kalman filter (AKF) has been developed to efficiently infer the states and parameters of neural mass models believed to underlie the generation of electromagnetic source currents. Unfortunately, because the initialization determines the performance of the Kalman filter and the ground truth is typically unavailable for initialization, this framework can produce suboptimal results unless significant effort is spent on tuning the initialization. Notably, the relation between the initialization and overall filter performance is only given implicitly and is expensive to evaluate, which makes conventional optimization techniques, e.g. gradient- or sampling-based methods, inapplicable. To address this problem, a novel efficient framework based on black-box optimization has been developed to find the optimal initialization by reducing the signal prediction error. Multiple state-of-the-art optimization methods were compared, and Gaussian process optimization stood out, decreasing the objective function by 82.1% and the parameter estimation error by 62.5% on average on simulated data, compared with no optimization. On 3.75 min of magnetoencephalography data with 4714 source channels, the framework took only 1.6 h and reduced the objective function by an average of 13.2%. This yields an improved method of neurophysiological process imaging that can be used to uncover complex underpinnings of brain dynamics.
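
The following Python sketch shows one way a Gaussian-process surrogate can drive black-box optimization of a filter initialization: fit a GP to the initializations evaluated so far and pick the next candidate with a lower-confidence-bound rule. The objective function here is a cheap placeholder for the expensive signal prediction error, and the search space, kernel, and acquisition rule are assumptions rather than the paper's exact setup.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import Matern

    rng = np.random.default_rng(0)

    def prediction_error(init):
        """Placeholder for the expensive black-box objective: run the Kalman filter
        from this initialization and return the signal prediction error."""
        return float(np.sum((init - np.array([0.3, -1.2, 0.8])) ** 2) + 0.01 * rng.normal())

    bounds = np.array([[-2.0, 2.0]] * 3)                        # search box for the initialization
    X = rng.uniform(bounds[:, 0], bounds[:, 1], size=(5, 3))    # initial random design
    y = np.array([prediction_error(x) for x in X])

    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

    for _ in range(25):
        gp.fit(X, y)
        # Lower-confidence-bound acquisition evaluated on random candidate points.
        cand = rng.uniform(bounds[:, 0], bounds[:, 1], size=(2000, 3))
        mu, sd = gp.predict(cand, return_std=True)
        nxt = cand[np.argmin(mu - 1.5 * sd)]
        X = np.vstack([X, nxt])
        y = np.append(y, prediction_error(nxt))

    best = X[np.argmin(y)]
    print("best initialization found:", best, "objective:", y.min())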


Subjects
Algorithms , Brain , Computer Simulation , Brain/diagnostic imaging , Brain/physiology
6.
Nat Commun; 11(1): 4428, 2020 Sep 04.
Article in English | MEDLINE | ID: mdl-32887879

ABSTRACT

Although machine learning (ML) models promise to substantially accelerate the discovery of novel materials, their performance is often still insufficient to draw reliable conclusions. Improved ML models are therefore actively researched, but their design is currently guided mainly by monitoring the average model test error. This can render different models indistinguishable although their performance differs substantially across materials, or it can make a model appear generally insufficient while it actually works well in specific sub-domains. Here, we present a method, based on subgroup discovery, for detecting domains of applicability (DA) of models within a materials class. The utility of this approach is demonstrated by analyzing three state-of-the-art ML models for predicting the formation energy of transparent conducting oxides. We find that, despite having a mutually indistinguishable and unsatisfactory average error, the models have DAs with distinctive features and notably improved performance.
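
To make the idea concrete, here is a small Python sketch of a single-condition subgroup search for a domain of applicability: scan simple feature-threshold selectors and score each by coverage times error reduction. Real subgroup discovery searches over conjunctions of conditions with a proper quality function; the data and the scoring rule below are placeholders, not the paper's method or results.

    import numpy as np

    rng = np.random.default_rng(0)

    # Placeholder data: per-material descriptors and the absolute prediction error
    # of some trained ML model on each material.
    X = rng.normal(size=(1000, 5))                      # material descriptors
    abs_err = np.abs(rng.normal(size=1000)) * (1.0 + 0.8 * (X[:, 2] > 0))

    global_err = abs_err.mean()
    best = (0.0, None)

    # Scan single-feature threshold selectors; score = coverage * error reduction
    # (a crude stand-in for a subgroup quality measure).
    for j in range(X.shape[1]):
        for thr in np.quantile(X[:, j], np.linspace(0.1, 0.9, 17)):
            for sel, name in [(X[:, j] <= thr, f"x{j} <= {thr:.2f}"),
                              (X[:, j] > thr, f"x{j} > {thr:.2f}")]:
                quality = sel.mean() * (global_err - abs_err[sel].mean())
                if quality > best[0]:
                    best = (quality, name)

    print(f"global MAE: {global_err:.3f}")
    print("best single-condition domain of applicability:", best[1])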
