Results 1 - 20 of 204
1.
Proc Natl Acad Sci U S A ; 120(7): e2216415120, 2023 Feb 14.
Article in English | MEDLINE | ID: mdl-36763529

ABSTRACT

Computational models have become a powerful tool in the quantitative sciences to understand the behavior of complex systems that evolve in time. However, they often contain a potentially large number of free parameters whose values cannot be obtained from theory but need to be inferred from data. This is especially the case for models in the social sciences, economics, or computational epidemiology. Yet, many current parameter estimation methods are mathematically involved and computationally slow to run. In this paper, we present a computationally simple and fast method to retrieve accurate probability densities for model parameters using neural differential equations. We present a pipeline comprising multiagent models acting as forward solvers for systems of ordinary or stochastic differential equations and a neural network that then extracts parameters from the data the model generates. Combined, the two create a powerful tool that can quickly estimate densities on model parameters, even for very large systems. We demonstrate the method on synthetic time series data of the SIR model of the spread of infection and perform an in-depth analysis of the Harris-Wilson model of economic activity on a network, which represents a nonconvex problem. For the latter, we apply our method both to synthetic data and to data of economic activity across Greater London. We find that our method calibrates the model orders of magnitude more accurately than a previous study of the same dataset using classical techniques, while running between 195 and 390 times faster.
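
To make the pipeline concrete, here is a minimal sketch of the forward-solver-plus-network idea for the SIR case: simulate trajectories for sampled parameters, then train a small network to invert the map from time series back to (beta, gamma). This is an illustration only; the paper recovers full probability densities with neural differential equations, whereas this sketch regresses point estimates, and all numerical choices are assumptions.

```python
# Sketch: SIR forward solves generate training pairs; a small neural network
# learns the inverse map from infected-curve time series to (beta, gamma).
import numpy as np
from scipy.integrate import solve_ivp
from sklearn.neural_network import MLPRegressor

def sir(t, y, beta, gamma):
    S, I, R = y
    return [-beta * S * I, beta * S * I - gamma * I, gamma * I]

rng = np.random.default_rng(0)
t_eval = np.linspace(0, 50, 60)
X, Y = [], []
for _ in range(500):
    beta, gamma = rng.uniform(0.2, 1.0), rng.uniform(0.05, 0.5)
    sol = solve_ivp(sir, (0, 50), [0.99, 0.01, 0.0],
                    args=(beta, gamma), t_eval=t_eval)
    X.append(sol.y[1])            # infected curve as the observable
    Y.append([beta, gamma])

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
net.fit(np.array(X), np.array(Y))
print(net.predict(np.array(X[:1])))   # recovered (beta, gamma) estimate
```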

2.
Proc Natl Acad Sci U S A ; 119(47): e2202075119, 2022 11 22.
Article in English | MEDLINE | ID: mdl-36375059

ABSTRACT

Traditional general circulation models, or GCMs (three-dimensional dynamical models whose unresolved terms are represented in equations with tunable parameters), have been a mainstay of climate research for several decades, and some of the pioneering studies have recently been recognized by a Nobel Prize in Physics. Yet, there is considerable debate around their continuing role in the future. Frequently mentioned limitations of GCMs are the structural error and uncertainty across models with different representations of unresolved scales, and the fact that the models are tuned to reproduce certain aspects of the observed Earth. We consider these shortcomings in the context of a future generation of models that may address them through substantially higher resolution and detail, or through the use of machine learning techniques to match them better to observations, theory, and process models. It is our contention that calibration, far from being a weakness of models, is an essential element in the simulation of complex systems, and contributes to our understanding of their inner workings. Models can be calibrated to reveal both fine-scale detail and the global response to external perturbations. New methods enable us to articulate and improve the connections between the different levels of abstract representation of climate processes, and our understanding resides in an entire hierarchy of models where GCMs will continue to play a central role for the foreseeable future.


Subject(s)
Climate Change, Climate, Forecasting, Computer Simulation, Physics
3.
Am J Physiol Heart Circ Physiol ; 327(2): H473-H503, 2024 08 01.
Article in English | MEDLINE | ID: mdl-38904851

ABSTRACT

Computational, or in silico, models are an effective, noninvasive tool for investigating cardiovascular function. These models can be used in the analysis of experimental and clinical data to identify possible mechanisms of (ab)normal cardiovascular physiology. Recent advances in computing power and data management have led to innovative and complex modeling frameworks that simulate cardiovascular function across multiple scales. While commonly used in multiple disciplines, there is a lack of concise guidelines for the implementation of computer models in cardiovascular research. In line with recent calls for more reproducible research, it is imperative that scientists adhere to credible practices when developing and applying computational models to their research. The goal of this manuscript is to provide a consensus document that identifies best practices for in silico computational modeling in cardiovascular research. These guidelines provide the necessary methods for mechanistic model development, model analysis, and formal model calibration using fundamentals from statistics. We outline rigorous practices for computational, mechanistic modeling in cardiovascular research and discuss its synergistic value to experimental and clinical data.


Subject(s)
Computer Simulation, Cardiovascular Models, Humans, Biomedical Research/standards, Animals, Cardiovascular Physiological Phenomena, Cardiovascular Diseases/physiopathology, Consensus
4.
Sensors (Basel) ; 24(3), 2024 Jan 31.
Article in English | MEDLINE | ID: mdl-38339637

ABSTRACT

Surface electromyogram (sEMG)-based gesture recognition has emerged as a promising avenue for developing intelligent prostheses for upper limb amputees. However, temporal variations in sEMG have rendered recognition models less efficient than anticipated. Cross-session calibration and larger amounts of training data can mitigate the impact of these variations, but how the amount of calibration and training data affects gesture recognition performance for amputees is still unknown. To assess these effects, we present four datasets for the evaluation of calibration data and examine the impact of the amount of training data on benchmark performance. Two amputees who had undergone amputations years prior were recruited, and seven sessions of data were collected from each of them for analysis. Ninapro DB6, a publicly available database containing data from ten healthy subjects across ten sessions, was also included in this study. The experimental results show that calibration data improved the average accuracy by 3.03%, 6.16%, and 9.73% for the two subjects and Ninapro DB6, respectively, compared to the baseline results. Moreover, increasing the number of training sessions proved more effective in improving accuracy than increasing the number of trials. In light of these findings, we propose three potential strategies to further enhance cross-session models. We consider these findings to be of great importance for the commercialization of intelligent prostheses, as they demonstrate the criticality of gathering calibration and cross-session training data, while also offering effective strategies to maximize the utilization of the entire dataset.


Asunto(s)
Amputados , Miembros Artificiales , Humanos , Electromiografía/métodos , Calibración , Reconocimiento de Normas Patrones Automatizadas/métodos , Extremidad Superior , Algoritmos
5.
Sensors (Basel) ; 24(9), 2024 Apr 30.
Article in English | MEDLINE | ID: mdl-38732969

ABSTRACT

The recent scientific literature abounds in proposals of seizure forecasting methods that exploit machine learning to automatically analyze electroencephalogram (EEG) signals. Deep learning algorithms seem to achieve a particularly remarkable performance, suggesting that the implementation of clinical devices for seizure prediction might be within reach. However, most of the research evaluated the robustness of automatic forecasting methods through randomized cross-validation techniques, while clinical applications require much more stringent validation based on patient-independent testing. In this study, we show that automatic seizure forecasting can be performed, to some extent, even on independent patients who have never been seen during the training phase, thanks to the implementation of a simple calibration pipeline that can fine-tune deep learning models, even on a single epileptic event recorded from a new patient. We evaluate our calibration procedure using two datasets containing EEG signals recorded from a large cohort of epileptic subjects, demonstrating that the forecast accuracy of deep learning methods can increase on average by more than 20%, and that performance improves systematically in all independent patients. We further show that our calibration procedure works best for deep learning models, but can also be successfully applied to machine learning algorithms based on engineered signal features. Although our method still requires at least one epileptic event per patient to calibrate the forecasting model, we conclude that focusing on realistic validation methods allows different machine learning approaches for seizure prediction to be compared more reliably, enabling the implementation of robust and effective forecasting systems that can be used in daily healthcare practice.
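
The calibration pipeline is described only at a high level, so the following is a hedged sketch of the general idea: freeze a pretrained network's feature layers and fine-tune the final layer on windows from a single seizure recording of the new patient. The architecture, tensor shapes, and training settings are invented stand-ins, not the study's model.

```python
# Sketch: single-event calibration by fine-tuning only the final layer of a
# (stand-in) pretrained seizure-forecasting network on one patient's windows.
import torch
import torch.nn as nn

model = nn.Sequential(                      # stand-in for a pretrained model
    nn.Flatten(), nn.Linear(18 * 256, 64), nn.ReLU(), nn.Linear(64, 2))
for p in model[1].parameters():             # freeze the feature layer
    p.requires_grad = False

x = torch.randn(32, 18, 256)                # 32 windows, 18 channels, 256 samples
y = torch.randint(0, 2, (32,))              # pre-ictal vs inter-ictal labels
opt = torch.optim.Adam((p for p in model.parameters() if p.requires_grad),
                       lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for _ in range(20):                         # a few epochs on the single event
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
```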


Subject(s)
Algorithms, Deep Learning, Electroencephalography, Seizures, Humans, Electroencephalography/methods, Seizures/diagnosis, Seizures/physiopathology, Calibration, Signal Processing, Computer-Assisted, Epilepsy/diagnosis, Epilepsy/physiopathology, Machine Learning
6.
Sensors (Basel) ; 24(2), 2024 Jan 14.
Article in English | MEDLINE | ID: mdl-38257613

ABSTRACT

The use of low-cost sensors (LCSs) for the mobile monitoring of oil and gas emissions is an understudied application of low-cost air quality monitoring devices. To assess the efficacy of low-cost sensors as a screening tool for the mobile monitoring of fugitive methane emissions stemming from well sites in eastern Colorado, we colocated an array of low-cost sensors (XPOD) with a reference-grade methane monitor (Aeris Ultra) on a mobile monitoring vehicle from 15 August through 27 September 2023. Fitting our low-cost sensor data with a bootstrap-aggregated random forest model, we found a high correlation between the reference and XPOD CH4 concentrations (r = 0.719) and a low experimental error (RMSD = 0.3673 ppm). Other calibration models, including multilinear regression and artificial neural networks (ANN), were either unable to distinguish individual methane spikes above baseline or had a significantly elevated error (RMSDANN = 0.4669 ppm) compared to the random forest model. Using out-of-bag predictor permutations, we found that the sensors showing the highest correlation with methane carried the greatest significance in our random forest model. As we reduced the percentage of colocation data used in the random forest model, errors did not increase significantly until a specific threshold (50 percent of the total calibration data). Using a peak-finding algorithm, we found that our model was able to predict 80 percent of methane spikes above 2.5 ppm throughout the duration of our field campaign, with a false response rate of 35 percent.
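
A minimal sketch of this style of colocation calibration, with synthetic data standing in for the XPOD/Aeris measurements: fit a random forest from sensor channels to reference CH4, report Pearson r and RMSD as in the abstract, and inspect permutation importances. Every numeric choice here is illustrative.

```python
# Sketch: random-forest calibration of a low-cost sensor array against a
# reference methane signal, scored with Pearson r, RMSD, and permutation
# importance (the study's out-of-bag predictor permutations are analogous).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 5))              # 5 low-cost sensor channels
y = 2.0 + 0.4 * X[:, 0] + 0.1 * X[:, 1] + rng.normal(0, 0.05, 1000)  # ppm

rf = RandomForestRegressor(n_estimators=200, oob_score=True, random_state=1)
rf.fit(X, y)
pred = rf.predict(X)
rmsd = np.sqrt(np.mean((pred - y) ** 2))
r = np.corrcoef(pred, y)[0, 1]
imp = permutation_importance(rf, X, y, n_repeats=10, random_state=1)
print(f"r={r:.3f} RMSD={rmsd:.4f} ppm", imp.importances_mean.round(3))
```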

7.
Sensors (Basel) ; 24(8), 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38676155

ABSTRACT

This study aims to enhance diagnostic capabilities for optimising the performance of the anaerobic sewage treatment lagoon at Melbourne Water's Western Treatment Plant (WTP) through a novel machine learning (ML)-based monitoring strategy. This strategy employs ML to make accurate probabilistic predictions of biogas performance by leveraging diverse real-life operational and inspection sensor and other measurement data for asset management, decision making, and structural health monitoring (SHM). The paper commences with data analysis and preprocessing of complex irregular datasets to facilitate efficient learning in an artificial neural network. Subsequently, a Bayesian mixture density neural network model incorporating an attention-based mechanism in bidirectional long short-term memory (BiLSTM) was developed. This probabilistic approach uses a distributional output layer based on a Gaussian mixture model and the Monte Carlo (MC) dropout technique to estimate data and model uncertainties, respectively. Furthermore, systematic hyperparameter optimisation revealed that the optimised model achieved a negative log-likelihood (NLL) of 0.074, significantly outperforming other configurations: approximately 9 times better than the average model performance (NLL = 0.753) and 22 times better than the worst performing model (NLL = 1.677). Key factors influencing the model's accuracy, such as the input window size and the number of hidden units in the BiLSTM layer, were identified, while the number of neurons in the fully connected layer was found to have no significant impact on accuracy. Moreover, model calibration using the expected calibration error was performed to correct the model's predictive uncertainty. The findings suggest that the inherent uncertainty in the data contributes significantly to the overall uncertainty of the model, highlighting the need for more high-quality data to enhance learning. This study lays the groundwork for applying ML in transforming high-value assets into intelligent structures and has broader implications for ML in asset management, SHM applications, and renewable energy sectors.
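
The headline metric is the negative log-likelihood under the mixture density output. Below is a minimal sketch of that scoring, with hypothetical mixture parameters standing in for the BiLSTM head's outputs (the attention mechanism and MC dropout are not reproduced).

```python
# Sketch: NLL of observations under per-sample Gaussian mixture predictions,
# the quantity reported as NLL = 0.074 in the abstract.
import numpy as np

def mixture_nll(y, w, mu, sigma):
    """y: (N,) targets; w, mu, sigma: (N, K) mixture parameters per sample."""
    comp = w * np.exp(-0.5 * ((y[:, None] - mu) / sigma) ** 2) / (
        sigma * np.sqrt(2 * np.pi))
    return -np.mean(np.log(comp.sum(axis=1) + 1e-12))

y = np.array([1.0, 2.0, 1.5])
w = np.array([[0.7, 0.3]] * 3)        # mixture weights (hypothetical)
mu = np.array([[1.0, 2.0]] * 3)       # component means
sigma = np.array([[0.3, 0.5]] * 3)    # component standard deviations
print(mixture_nll(y, w, mu, sigma))
```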


Subject(s)
Bayes Theorem, Biofuels, Neural Networks, Computer, Anaerobiosis, Calibration, Monte Carlo Method, Sewage, Machine Learning
8.
Biostatistics ; 23(3): 875-890, 2022 07 18.
Article in English | MEDLINE | ID: mdl-33616159

ABSTRACT

When validating a risk model in an independent cohort, some predictors may be missing for some subjects. Missingness can be unplanned or by design, as in case-cohort or nested case-control studies, in which some covariates are measured only in subsampled subjects. Weighting methods and imputation are used to handle missing data. We propose methods to increase the efficiency of weighting to assess calibration of a risk model (i.e. bias in model predictions), which is quantified by the ratio of the number of observed events, $\mathcal{O}$, to expected events, $\mathcal{E}$, computed from the model. We adjust known inverse probability weights by incorporating auxiliary information available for all cohort members. We use survey calibration that requires the weighted sum of the auxiliary statistics in the complete data subset to equal their sum in the full cohort. We show that a pseudo-risk estimate that approximates the actual risk value but uses only variables available for the entire cohort is an excellent auxiliary statistic to estimate $\mathcal{E}$. We derive analytic variance formulas for $\mathcal{O}/\mathcal{E}$ with adjusted weights. In simulations, weight adjustment with pseudo-risk was much more efficient than inverse probability weighting and yielded consistent estimates even when the pseudo-risk was a poor approximation. Multiple imputation was often efficient but yielded biased estimates when the imputation model was misspecified. Using these methods, we assessed calibration of an absolute risk model for second primary thyroid cancer in an independent cohort.
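
As a worked illustration of the O/E check with adjusted weights, here is a sketch that calibrates inverse probability weights against a single auxiliary statistic (the pseudo-risk), forcing its weighted subset sum to match the full-cohort sum. The data, sampling fraction, and one-constraint ratio adjustment are simplifications of the paper's survey-calibration setup.

```python
# Sketch: weighted observed/expected (O/E) ratio with weights calibrated so
# the weighted pseudo-risk total in the complete-data subset matches the
# full-cohort total (a one-variable version of survey calibration).
import numpy as np

rng = np.random.default_rng(2)
n_full = 5000
pseudo_risk = rng.uniform(0.01, 0.2, n_full)     # available for everyone
in_subset = rng.random(n_full) < 0.2             # covariates measured here only
w = 1.0 / 0.2                                    # inverse sampling probability

# calibrate: scale weights so the weighted pseudo-risk sum matches the cohort
scale = pseudo_risk.sum() / (w * pseudo_risk[in_subset]).sum()
w_cal = w * scale

model_risk = pseudo_risk[in_subset] * rng.uniform(0.9, 1.1, in_subset.sum())
events = rng.random(in_subset.sum()) < model_risk
O = (w_cal * events).sum()                       # weighted observed events
E = (w_cal * model_risk).sum()                   # weighted expected events
print(f"O/E = {O / E:.3f}")                      # ~1 indicates good calibration
```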


Subject(s)
Calibration, Bias, Case-Control Studies, Cohort Studies, Computer Simulation, Humans, Probability
9.
J Theor Biol ; 559: 111377, 2023 02 21.
Article in English | MEDLINE | ID: mdl-36470468

ABSTRACT

The Lotka-Volterra model is widely used to model interactions between two species. Here, we generate synthetic data mimicking competitive, mutualistic and antagonistic interactions between two tumor cell lines, and then use the Lotka-Volterra model to infer the interaction type. Structural identifiability of the Lotka-Volterra model is confirmed, and practical identifiability is assessed for three experimental designs: (a) use of a single data set, with a mixture of both cell lines observed over time; (b) a sequential design where growth rates and carrying capacities are estimated using data from experiments in which each cell line is grown in isolation, and then interaction parameters are estimated from an experiment involving a mixture of both cell lines; and (c) a parallel experimental design where all model parameters are fitted to data from two mixtures (containing both cell lines but with different initial ratios) simultaneously. Each design is tested on data generated from the Lotka-Volterra model with noise added, to determine efficacy in an ideal sense. In addition to assessing each design for practical identifiability, we investigate how the predictive power of the model (i.e., its ability to fit data for initial ratios other than those to which it was calibrated) is affected by the choice of experimental design. The parallel calibration procedure is found to be optimal and is further tested on in silico data generated from a spatially resolved cellular automaton (CA) model, which accounts for oxygen consumption and allows for variation in the intensity of the interaction between the two cell lines. We use this study to highlight the care that must be taken when interpreting parameter estimates for the spatially averaged Lotka-Volterra (LV) model when it is calibrated against data produced by the spatially resolved CA model: baseline competition for space and resources in the CA model may create a discrepancy between the interaction type used to generate the CA data and the interaction type inferred by the LV model.
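
A minimal sketch of design (a), assuming the standard competitive Lotka-Volterra parameterization (not necessarily the paper's notation): fit all six parameters to one noisy mixture time course and read the interaction type off the fitted interaction coefficients.

```python
# Sketch: least-squares calibration of a two-species Lotka-Volterra model to
# a single noisy mixture time course (design (a) in the abstract).
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def lv(t, y, r1, r2, K1, K2, a12, a21):
    N1, N2 = y
    return [r1 * N1 * (1 - (N1 + a12 * N2) / K1),
            r2 * N2 * (1 - (N2 + a21 * N1) / K2)]

t = np.linspace(0, 20, 40)
true = (0.5, 0.4, 1.0, 0.8, 0.6, 0.3)
data = solve_ivp(lv, (0, 20), [0.1, 0.1], args=true, t_eval=t).y
data = data + np.random.default_rng(3).normal(0, 0.02, data.shape)

def residuals(p):
    sim = solve_ivp(lv, (0, 20), [0.1, 0.1], args=tuple(p), t_eval=t).y
    return (sim - data).ravel()

fit = least_squares(residuals, x0=[0.4, 0.5, 0.9, 0.9, 0.4, 0.4],
                    bounds=(0, 2))
print(fit.x.round(2))   # signs/sizes of a12, a21 indicate interaction type
```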


Asunto(s)
Modelos Biológicos , Simbiosis , Línea Celular Tumoral
10.
Bull Math Biol ; 85(10): 90, 2023 08 31.
Article in English | MEDLINE | ID: mdl-37650951

ABSTRACT

Estimating model parameters is a crucial step in mathematical modelling and typically involves minimizing the disagreement between model predictions and experimental data. This calibration data can change throughout a study, particularly if modelling is performed simultaneously with the calibration experiments, or during an ongoing public health crisis as in the case of the COVID-19 pandemic. Consequently, the optimal parameter set, or maximum likelihood estimator (MLE), is a function of the experimental data set. Here, we develop a numerical technique to predict the evolution of the MLE as a function of the experimental data. We show that, when considering perturbations from an initial data set, our approach is significantly more computationally efficient than re-fitting model parameters, while producing acceptable model fits to the updated data. We use the continuation technique to develop an explicit functional relationship between fitted model parameters and experimental data that can be used to measure the sensitivity of the MLE to experimental data. We then leverage this technique to select between model fits with similar information criteria, determine a priori the experimental measurements to which the MLE is most sensitive, and suggest additional experimental measurements that can resolve parameter uncertainty.
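
For a least-squares likelihood, the continuation idea can be illustrated with the implicit function theorem: a data perturbation delta_y maps to a first-order MLE update delta_theta = (J^T J)^{-1} J^T delta_y, with J the model Jacobian at the current fit (Gauss-Newton Hessian approximation). A toy sketch with an invented exponential-decay model, not the paper's scheme:

```python
# Sketch: predict how a least-squares MLE moves when the data are perturbed,
# without re-fitting, via a first-order Gauss-Newton continuation step.
import numpy as np

def model(theta, t):
    return theta[0] * np.exp(-theta[1] * t)    # toy exponential-decay model

def jacobian(theta, t):
    e = np.exp(-theta[1] * t)
    return np.column_stack([e, -theta[0] * t * e])

t = np.linspace(0, 5, 30)
theta = np.array([2.0, 0.7])                   # current MLE for data y
delta_y = 0.01 * np.sin(t)                     # perturbation of the data

J = jacobian(theta, t)
delta_theta = np.linalg.solve(J.T @ J, J.T @ delta_y)
theta_new = theta + delta_theta                # predicted MLE, no re-fitting
print(theta_new)
```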


Subject(s)
COVID-19, Models, Biological, Humans, Pandemics, COVID-19/epidemiology, Mathematical Concepts, Calibration
11.
J Math Biol ; 86(2): 20, 2023 01 10.
Article in English | MEDLINE | ID: mdl-36625956

ABSTRACT

In this paper, we provide a simple ODE model with a generic nonlinear incidence rate function and incorporate two treatments, blocking virus binding and inhibiting virus replication, to investigate the impact of calibration on model predictions for SARS-CoV-2 infection dynamics. We derive conditions for infection eradication in the long-term dynamics using the basic reproduction number, and complement this characterization with the resilience and reactivity of the virus-free equilibrium at short times, which inform on the average time of recovery and the sensitivity to perturbations in the initial virus-free stage. We then calibrate the treatment model to clinical datasets for viral load in mild and severe cases and for immune cells in severe cases. Based on the analysis, the model calibrated to these different datasets predicts distinct scenarios: eradication with a non-reactive virus-free equilibrium, eradication with a reactive virus-free equilibrium, and failure of infection eradication. Moreover, severe cases generate richer dynamics and different outcomes under the same treatment. Calibration to different datasets can thus lead to diverse model predictions, but combining long- and short-term dynamical indicators allows the categorization of model predictions and determination of infection severity.


Subject(s)
COVID-19, Humans, Calibration, SARS-CoV-2, Models, Theoretical
12.
Article in English | MEDLINE | ID: mdl-37386340

ABSTRACT

Validation of a quantitative model is a critical step in establishing confidence in the model's suitability for whatever analysis it was designed for. While processes for validation are well established in the statistical sciences, the field of quantitative systems pharmacology (QSP) has taken a more piecemeal approach to defining and demonstrating validation. Although classical statistical methods can be used in a QSP context, proper validation of a mechanistic systems model requires a more nuanced approach to what precisely is being validated, and to what role said validation plays in the larger context of the analysis. In this review, we summarize current thinking on QSP validation in the scientific community, contrast the aims of statistical validation in several contexts (including inference, pharmacometrics analysis, and machine learning) with the challenges faced in QSP analysis, and use examples from published QSP models to define different stages or levels of validation, any of which may be sufficient depending on the context at hand.

13.
Sensors (Basel) ; 23(4), 2023 Feb 07.
Article in English | MEDLINE | ID: mdl-36850449

ABSTRACT

Satellite remote sensing provides a unique opportunity for calibrating land surface models thanks to its direct measurements of various hydrological variables and its extensive spatial and temporal coverage. This study applies terrestrial water storage (TWS) estimates from the Gravity Recovery and Climate Experiment (GRACE) mission, as well as soil moisture products from the Advanced Microwave Scanning Radiometer-Earth Observing System (AMSR-E), to calibrate a land surface model using multi-objective evolutionary algorithms. For this purpose, the non-dominated sorting genetic algorithm (NSGA) is used to improve the model's parameters. The calibration is carried out for two years, 2003 and 2010 (the calibration period), in Australia, and the impact is further monitored over 2011 (the forecasting period). A new combined objective function based on the observations' uncertainty is developed to efficiently improve the model parameters for consistent and reliable forecasting skill. Evaluation of the results against independent measurements shows that the calibrated model parameters lead to better model simulations in both the calibration and the forecasting period.
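
The combined objective is described only in outline, so here is a hedged sketch of the general idea: normalize each observation type's misfit by its uncertainty so GRACE TWS and AMSR-E soil moisture contribute on comparable scales. The uncertainties and data below are invented, and the NSGA optimizer itself is not reproduced.

```python
# Sketch: uncertainty-weighted combined objective for multi-observation
# calibration of a land surface model (minimized over model parameters).
import numpy as np

def combined_objective(sim_tws, obs_tws, sig_tws, sim_sm, obs_sm, sig_sm):
    rmse_tws = np.sqrt(np.mean(((sim_tws - obs_tws) / sig_tws) ** 2))
    rmse_sm = np.sqrt(np.mean(((sim_sm - obs_sm) / sig_sm) ** 2))
    return rmse_tws + rmse_sm     # each term is dimensionless

rng = np.random.default_rng(4)
obs_tws, obs_sm = rng.normal(0, 50, 24), rng.uniform(0.1, 0.4, 24)
print(combined_objective(obs_tws + 5, obs_tws, 20.0,
                         obs_sm + 0.02, obs_sm, 0.05))
```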

14.
J Environ Manage ; 326(Pt B): 116712, 2023 Jan 15.
Article in English | MEDLINE | ID: mdl-36402022

ABSTRACT

Controlling non-point source pollution is often difficult and costly. Therefore, focusing on areas that contribute the most, so-called critical source areas (CSAs), can have economic and ecological benefits. CSAs are often determined using a modelling approach, yet it has proved difficult to calibrate the models in regions with limited data availability. Since identifying CSAs is based on the relative contributions of sub-basins to the total load, it has been suggested that uncalibrated models could be used to identify CSAs to overcome data scarcity issues. Here, we use the SWAT model to study the extent to which an uncalibrated model can be applied to determine CSAs. We classify and rank sub-basins to identify CSAs for sediment, total nitrogen (TN), and total phosphorus (TP) in the Fengyu River Watershed (China) with and without model calibration. The results show high similarity (81%-93%) between the identified sediment and TP CSA number and locations before and after calibration both on the yearly and seasonal scale. For TN alone, the results show moderate similarity on the yearly scale (73%). This may be because, in our study area, TN is determined more by groundwater flow after calibration than by surface water flow. We conclude that CSA identification with the uncalibrated model for TP is always good because its CSA number and locations changed least, and for sediment, it is generally satisfactory. The use of the uncalibrated model for TN is acceptable, as its CSA locations did not change after calibration; however, the TN CSA number changed by over 60% compared to the figures before calibration on both yearly and seasonal scales. Therefore, we advise using an uncalibrated model to identify CSAs for TN only if water yield composition changes are expected to be limited. This study shows that CSAs can be identified based on relative loading estimates with uncalibrated models in data-deficient regions.
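
A small sketch of the core comparison, under invented loads: rank sub-basins by simulated contribution, flag the top 20% as CSAs, and measure the overlap between uncalibrated and calibrated runs (the similarity statistic reported in the abstract). The threshold and data are illustrative, not SWAT outputs.

```python
# Sketch: CSA identification as a top-quantile ranking of sub-basin loads,
# compared between uncalibrated and calibrated model runs.
import numpy as np

rng = np.random.default_rng(5)
load_uncal = rng.lognormal(0, 1, 50)               # per-sub-basin TP loads
load_cal = load_uncal * rng.lognormal(0, 0.2, 50)  # calibration shifts loads

k = int(0.2 * 50)                                  # top 20% are flagged as CSAs
csa_uncal = set(np.argsort(load_uncal)[-k:])
csa_cal = set(np.argsort(load_cal)[-k:])
similarity = len(csa_uncal & csa_cal) / k
print(f"CSA overlap: {similarity:.0%}")
```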


Subject(s)
Non-Point Source Pollution, Water Pollutants, Chemical, Water Pollutants, Chemical/analysis, Rivers, Phosphorus/analysis, Nitrogen/analysis, China, Nutrients, Water, Environmental Monitoring
15.
Brief Bioinform ; 21(1): 24-35, 2020 Jan 17.
Article in English | MEDLINE | ID: mdl-30239570

ABSTRACT

Computational and mathematical modelling has become a valuable tool for investigating biological systems. Modelling enables prediction of how biological components interact to deliver system-level properties and extrapolation of biological system performance to contexts and experimental conditions where this is unknown. A model's value hinges on knowing that it faithfully represents the biology under the contexts of use, or clearly ascertaining otherwise and thus motivating further model refinement. These qualities are evaluated through calibration, typically formulated as identifying model parameter values that align model and biological behaviours as measured through a metric applied to both. Calibration is critical to modelling but is often underappreciated. A failure to appropriately calibrate risks unrepresentative models that generate erroneous insights. Here, we review a suite of strategies to more rigorously challenge a model's representation of a biological system. All are motivated by features of biological systems, and illustrative examples are drawn from the modelling literature. We examine the calibration of a model against distributions of biological behaviours or outcomes, not only average values. We argue for calibration even where model parameter values are experimentally ascertained. We explore how single metrics can be non-distinguishing for complex systems, with multiple-component dynamic and interaction configurations giving rise to the same metric output. Under these conditions, calibration is insufficiently constraining and the model non-identifiable: multiple solutions to the calibration problem exist. We draw an analogy to curve fitting and argue that calibrating a biological model against a single experiment or context is akin to curve fitting against a single data point. We also explore how metrics that quantify heavily emergent properties, though useful for communicating model results, may not be suitable for use in calibration. Lastly, we consider the role of sensitivity and uncertainty analysis in calibration and the interpretation of model results. Our goal in this manuscript is to encourage a deeper consideration of calibration, and of how to increase its capacity to either deliver faithful models or demonstrate them otherwise.
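
As a sketch of the first strategy (calibrating against distributions rather than averages), one can score a candidate parameter set by a distributional distance, for example the Kolmogorov-Smirnov statistic between simulated replicates and observed outcomes. The toy model below is an assumption for illustration only.

```python
# Sketch: distribution-level calibration metric; candidate parameters are
# scored by the KS distance between simulated and observed distributions
# rather than by the difference of their means.
import numpy as np
from scipy.stats import ks_2samp

def simulate(param, n_replicates=200, rng=np.random.default_rng(6)):
    return rng.gamma(shape=param, scale=1.0, size=n_replicates)  # toy model

observed = np.random.default_rng(7).gamma(shape=2.0, scale=1.0, size=150)
for param in [1.0, 2.0, 3.0]:
    stat = ks_2samp(simulate(param), observed).statistic
    print(f"param={param}: KS distance={stat:.3f}")   # lower fits better
```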

16.
Biotechnol Bioeng ; 119(6): 1426-1438, 2022 06.
Article in English | MEDLINE | ID: mdl-35119107

ABSTRACT

Partial nitritation-anammox is a resource-efficient pathway for nitrogen removal from wastewater. However, the advantages of this nitrogen removal technology may be counteracted by the emission of N2O, a potent greenhouse gas. In this study, mathematical modelling was applied to analyse N2O formation and emission dynamics and to develop N2O mitigation strategies for a one-stage partial nitritation-anammox granular sludge reactor. Dynamic model calibration for such a full-scale reactor was performed, applying a one-dimensional biofilm model and including several N2O formation pathways. Simultaneous calibration of liquid phase concentrations and N2O emissions leads to an improved model fit compared to their consecutive calibration. The model could quantitatively predict the average N2O emissions and qualitatively characterize the N2O dynamics, adjusting only seven parameter values. The model was validated with N2O data from an independent data set at different aeration conditions. Nitrifier nitrification was identified as the dominant N2O formation pathway. Off-gas recirculation as a potential N2O emission reduction strategy was tested by simulation and indeed showed some improvement, albeit at the cost of higher aeration energy consumption.


Subject(s)
Bioreactors, Sewage, Anaerobic Ammonia Oxidation, Nitrification, Nitrogen, Oxidation-Reduction, Wastewater
17.
J Theor Biol ; 545: 111104, 2022 07 21.
Article in English | MEDLINE | ID: mdl-35337794

ABSTRACT

New experimental data have shown how the periodic exposure of cells to low oxygen levels (i.e., cyclic hypoxia) impacts their progress through the cell-cycle. Cyclic hypoxia has been detected in tumours and linked to poor prognosis and treatment failure. While fluctuating oxygen environments can be reproduced in vitro, the range of oxygen cycles that can be tested is limited. By contrast, mathematical models can be used to predict the response to a wide range of cyclic dynamics. Accordingly, in this paper we develop a mechanistic model of the cell-cycle that can be combined with in vitro experiments to better understand the link between cyclic hypoxia and cell-cycle dysregulation. A distinguishing feature of our model is the inclusion of impaired DNA synthesis and cell-cycle arrest due to periodic exposure to severely low oxygen levels. Our model decomposes the cell population into five compartments, and a time-dependent delay accounts for the variability in the duration of the S phase, which increases in severe hypoxia due to reduced rates of DNA synthesis. We calibrate our model against experimental data and show that it recapitulates the observed cell-cycle dynamics. We use the calibrated model to investigate the response of cells to oxygen cycles not yet tested experimentally. When the re-oxygenation phase is sufficiently long, our model predicts that cyclic hypoxia simply slows cell proliferation, since cells spend more time in the S phase. In contrast, cycles with short periods of re-oxygenation are predicted to inhibit proliferation, with cells exiting the cell-cycle and arresting in the G2 phase. While model predictions on short time scales (about a day) are fairly accurate (i.e., confidence intervals are small), the predictions become more uncertain over longer periods. Hence, we use our model to inform experimental design that can lead to improved model parameter estimates and validate model predictions.


Asunto(s)
Hipoxia , Oxígeno , Hipoxia de la Célula/fisiología , ADN/metabolismo , Humanos , Modelos Teóricos , Oxígeno/metabolismo
18.
Anim Cogn ; 25(2): 463-472, 2022 Apr.
Article in English | MEDLINE | ID: mdl-34664156

ABSTRACT

Social insects are classic examples of cooperation and coordination. For instance, laboratory studies of colony relocation, or house-hunting, have investigated how workers coordinate their efforts to swiftly move the colony to the best nesting site available while preserving colony integrity, i.e. avoiding a split. However, several studies have shown that, in some other contexts, individuals may use private rather than social information and may act solitarily rather than in a coordinated way. Here, we study resource allocation by a mature ant colony when it reproduces by fissioning into several colonies. This task is very different from house-hunting in that colony fission seeks to split the colony. We develop a simple individual-based model to test whether colony fission and resource allocation may be carried out by workers acting solitarily with no coordination. Our model reproduces well the pattern of allocation observed in nature (number and size of new colonies). This does not show that workers neither communicate nor coordinate. Rather, it suggests that independent decision making may be an important component of the process of resource allocation.


Subject(s)
Insects, Social Behavior, Animals, Resource Allocation
19.
Environ Sci Technol ; 56(18): 13485-13498, 2022 09 20.
Article in English | MEDLINE | ID: mdl-36052879

ABSTRACT

There is a growing realization that the complexity of model ensemble studies depends not only on the models used but also on the experience and approach of the modelers who calibrate and validate results, which remain a source of uncertainty. Here, we applied a multi-criteria decision-making method to investigate the rationale applied by modelers in a model ensemble study in which 12 different process-based biogeochemical model types were compared across five successive calibration stages. The modelers shared a common level of agreement about the importance of the variables used to initialize their models for calibration. However, we found inconsistency among modelers when judging the importance of input variables across different calibration stages. The level of subjective weighting attributed by modelers to calibration data decreased sequentially as the extent and number of variables provided increased. In this context, the perceived importance attributed to variables such as the fertilization rate, irrigation regime, soil texture, pH, and initial levels of soil organic carbon and nitrogen stocks was statistically different when classified according to model types. The importance attributed to input variables such as experimental duration, gross primary production, and net ecosystem exchange varied significantly according to the length of the modeler's experience. We argue that the gradual access to input data across the five calibration stages negatively influenced the consistency of the interpretations made by the modelers, with cognitive bias in "trial-and-error" calibration routines. Our study highlights that overlooking human and social attributes is critical in the outcomes of modeling and model intercomparison studies. While the complexity of the processes captured in the model algorithms and parameterization is important, we contend that (1) the modeler's assumptions on the extent to which parameters should be altered and (2) the modeler's perceptions of the importance of model parameters are just as critical to obtaining a quality model calibration as numerical or analytical details.


Subject(s)
Carbon, Soil, Ecosystem, Humans, Nitrogen, Uncertainty
20.
Environ Res ; 212(Pt E): 113554, 2022 09.
Article in English | MEDLINE | ID: mdl-35644493

ABSTRACT

Anaerobic ammonia oxidation (Anammox) is an innovative technology for cost-efficient nitrogen removal without intensive aeration. However, effective control of the competition between nitrite oxidizing bacteria (XNOB) and Anammox bacteria (XANA) for nitrite is a key challenge for broad application of single-stage Anammox processes in real wastewater treatment. Therefore, a real-time aeration scheme was proposed to determine dissolved oxygen (DO) based on nitrite concentration for effective control of XNOB growth while maintaining XANA activity in a single-stage Anammox process. In this study, a non-steady-state mathematical model was developed and calibrated using previously reported lab-scale Anammox results to investigate the efficiency of the proposed real-time aeration scheme in enhancing the Anammox process. Based on the calibrated model simulation results, a DO of about 0.10 mg-O2/L was found to be ideal for maintaining effective nitrite production by ammonia oxidizing bacteria (XAOB) while slowing down the growth of XNOB. If DO is too low (e.g., 0.01 mg-O2/L or lower), the overall rate of ammonia removal is limited by the slow growth of XAOB. On the other hand, high DO (e.g., 1.0 mg-O2/L or higher) inhibits the growth of XANA, resulting in dominance of XAOB and XNOB. According to the simulation results, nitrite concentration was found to be a rate-limiting parameter for effective nitrogen removal in single-stage Anammox processes. We also found that nitrite concentration can be used as a real-time switch for aeration in a single-stage Anammox process. An aeration scheme based on real-time nitrite concentration was accordingly proposed and examined to control the competition between XANA and XNOB. In the model simulation, XANA activity was successfully maintained because the scheme prevented an outgrowth of XNOB, allowing energy-efficient nitrogen removal using single-stage Anammox processes.
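
A hedged sketch of the real-time switch described: choose the DO setpoint from measured nitrite with simple hysteresis, holding DO near the favorable low level (about 0.10 mg-O2/L) and throttling aeration when nitrite accumulates. The thresholds and control logic below are an illustrative reading of the abstract, not the calibrated model's values.

```python
# Sketch: nitrite-driven aeration switch with a deadband. High nitrite is
# taken as a signal to throttle aeration (starving NOB of oxygen); low
# nitrite restores DO so AOB keep producing nitrite for the Anammox bacteria.
def aeration_setpoint(no2_mg_l, current_do, lo=0.3, hi=0.8):
    """Return a DO setpoint (mg-O2/L) from nitrite concentration (mg-N/L)."""
    if no2_mg_l > hi:
        return 0.01        # nitrite accumulating: throttle aeration
    if no2_mg_l < lo:
        return 0.10        # nitrite scarce: restore the favorable DO level
    return current_do      # inside the deadband: hold the current setpoint

print(aeration_setpoint(0.9, 0.10))  # -> 0.01
print(aeration_setpoint(0.1, 0.01))  # -> 0.10
```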


Subject(s)
Nitrites, Water Purification, Ammonia, Bioreactors/microbiology, Nitrogen, Oxidation-Reduction, Oxygen, Sewage, Wastewater/analysis, Water Purification/methods