Results 1 - 14 of 14
1.
Neural Netw ; 166: 85-104, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37480771

ABSTRACT

Artificial Intelligence and Machine Learning have been widely used in various fields of mathematical computing, physical modeling, computational science, communication science, and stochastic analysis. Approaches based on Deep Artificial Neural Networks (DANNs) are very popular nowadays. Depending on the learning task, the exact form of a DANN is determined by its multi-layer architecture, activation functions, and the so-called loss function. However, for the majority of deep learning approaches based on DANNs, the kernel structure of neural signal processing remains the same: the node response is encoded as a linear superposition of neural activity, while the non-linearity is triggered by the activation functions. In the current paper, we propose analyzing the neural signal processing in DANNs from the point of view of homogeneous chaos theory as known from polynomial chaos expansion (PCE). From the PCE perspective, the (linear) response at each node of a DANN can be seen as a first-degree multi-variate polynomial of single neurons from the previous layer, i.e. a linear weighted sum of monomials. From this point of view, the conventional DANN structure relies implicitly (but erroneously) on a Gaussian distribution of neural signals. Additionally, this view reveals that, by design, DANNs do not necessarily fulfill any orthogonality or orthonormality condition for the majority of data-driven applications. Therefore, the prevailing handling of neural signals in DANNs can lead to redundant representations, as any neural signal may contain partial information from other neural signals. To tackle that challenge, we propose employing the data-driven generalization of PCE theory known as arbitrary polynomial chaos (aPC) to construct corresponding multi-variate orthonormal representations at each node of a DANN. In doing so, we generalize the conventional structure of DANNs to Deep arbitrary polynomial chaos neural networks (DaPC NNs). They decompose the neural signals that travel through the multi-layer structure by an adaptive construction of data-driven multi-variate orthonormal bases for each layer. Moreover, the introduced DaPC NN provides an opportunity to go beyond the linear weighted superposition of single neurons at each node. Inheriting the fundamentals of PCE theory, the DaPC NN offers an additional possibility to account for high-order neural effects reflecting simultaneous interactions in multi-layer networks. Introducing the high-order weighted superposition at each node of the network mitigates the need to introduce non-linearity via activation functions and hence reduces the room for potential subjectivity in the modeling procedure, although the current DaPC NN framework places no theoretical restrictions on the use of activation functions. The current paper also summarizes relevant properties of DaPC NNs inherited from aPC, such as analytical expressions for statistical quantities and sensitivity indices at each node. We also offer an analytical form of the partial derivatives that can be used in various training algorithms. Technically, DaPC NNs require training procedures similar to those of conventional DANNs, and all trained weights automatically determine the corresponding multi-variate data-driven orthonormal bases for all layers of the DaPC NN. The paper uses three test cases to illustrate the performance of the DaPC NN, comparing it with a conventional DANN and with a plain aPC expansion. Evidence of convergence over training-data size against validation data sets demonstrates that the DaPC NN systematically outperforms the conventional DANN. Overall, the suggested re-formulation of the kernel network structure in terms of homogeneous chaos theory is not limited to any particular architecture or any particular definition of the loss function. The DaPC NN Matlab Toolbox is available online, and users are invited to adapt it to their own needs.
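
The core construction behind aPC, an orthonormal basis built from the data's own distribution, can be sketched in a few lines. This is an illustrative sketch only, not the authors' Matlab Toolbox: it applies modified Gram-Schmidt to the monomials 1, x, ..., x^d under the empirical inner product, which is what makes the basis "data-driven"; the function name is invented for this example.

```python
import numpy as np

def apc_basis_eval(samples, degree):
    """Evaluate data-driven orthonormal polynomials at the sample points.

    Modified Gram-Schmidt on the monomials 1, x, ..., x^degree under the
    empirical inner product <f, g> = mean(f(x_i) * g(x_i)).
    """
    samples = np.asarray(samples, dtype=float)
    n = len(samples)
    V = np.vander(samples, degree + 1, increasing=True)  # columns: x^0 .. x^degree
    P = np.empty_like(V)
    for k in range(degree + 1):
        v = V[:, k].copy()
        for j in range(k):
            v -= (v @ P[:, j] / n) * P[:, j]   # remove projections on earlier basis
        P[:, k] = v / np.sqrt(v @ v / n)       # normalize so that E[p_k^2] = 1
    return P
```

By construction, the resulting polynomials are orthonormal with respect to the empirical measure of whatever samples are supplied, with no Gaussian assumption.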


Subject(s)
Artificial Intelligence , Nonlinear Dynamics , Neural Networks, Computer , Algorithms , Neurons
2.
Entropy (Basel) ; 22(8)2020 Aug 13.
Article in English | MEDLINE | ID: mdl-33286660

ABSTRACT

Gaussian process emulators (GPEs) are a machine learning approach that replicates computationally demanding models using training runs of that model. Constructing such a surrogate is very challenging and, in the context of Bayesian inference, the training runs should be well invested. The current paper offers a fully Bayesian view on GPEs for Bayesian inference, accompanied by Bayesian active learning (BAL). We introduce three BAL strategies that adaptively identify training sets for the GPE using information-theoretic arguments. The first strategy relies on Bayesian model evidence, which indicates the GPE's quality of matching the measurement data; the second is based on relative entropy, which indicates the relative information gain for the GPE; and the third is founded on information entropy, which indicates the missing information in the GPE. We illustrate the performance of our three strategies using analytical and carbon-dioxide benchmarks. The paper shows evidence of convergence against a reference solution and demonstrates quantification of post-calibration uncertainty by comparing the three introduced strategies. We conclude that the Bayesian model evidence-based and relative entropy-based strategies outperform the entropy-based strategy because the latter can be misleading during BAL. The relative entropy-based strategy demonstrates superior performance to the Bayesian model evidence-based strategy.
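
The first strategy, selection by Bayesian model evidence, can be sketched as scoring each candidate training point by a Monte Carlo estimate of the evidence under the GPE's predictive distribution at that point. This is a heavily simplified illustration (scalar observation, candidates treated independently), not the paper's implementation; the function and argument names are invented for this sketch.

```python
import numpy as np

def bal_select(candidates, gpe_mean, gpe_std, observed, noise_std,
               n_draws=500, seed=0):
    """Pick the candidate whose GPE predictive draws best explain the data,
    measured by a Monte Carlo estimate of Bayesian model evidence."""
    rng = np.random.default_rng(seed)
    scores = np.empty(len(candidates))
    for i, (m, s) in enumerate(zip(gpe_mean, gpe_std)):
        draws = rng.normal(m, s, n_draws)          # samples from the GPE prediction
        lik = np.exp(-0.5 * ((observed - draws) / noise_std) ** 2)
        scores[i] = lik.mean()                     # MC estimate of the evidence
    return candidates[int(np.argmax(scores))]
```

In a full BAL loop, the selected point would be evaluated with the expensive model, the GPE retrained, and the selection repeated.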

3.
Ground Water ; 58(1): 93-109, 2020 01.
Article in English | MEDLINE | ID: mdl-30906991

ABSTRACT

Hyporheic exchange is the interaction of river water and groundwater, and it is difficult to predict. One of the largest contributions to predictive uncertainty for hyporheic exchange has been attributed to the representation of heterogeneous subsurface properties. Our study evaluates the trade-offs between intrinsic (irreducible) and epistemic (reducible) model errors when choosing between homogeneous and highly complex subsurface parameter structures. We modeled the Steinlach River Test Site in Southwest Germany using a fully coupled surface water-groundwater model to simulate hyporheic exchange and to assess the predictive errors and uncertainties of transit time distributions. A highly parameterized model was built, treated as a "virtual reality," and used as a reference. We found that if the parameter structure is too simple, it will be limited by intrinsic model errors. By increasing subsurface complexity through the addition of zones or heterogeneity, we can begin to exchange intrinsic for epistemic errors. Thus, the appropriate level of detail to represent the subsurface depends on the acceptable range of intrinsic structural errors for the given modeling objectives and the available site data. We found that a zonated model is capable of reproducing the transit time distributions of a more detailed model, but only if the geological structures are known. An interpolated heterogeneous parameter field (cf. pilot points) showed the best trade-off between the two errors, indicating fitness for practical applications. Parameter fields generated by multiple-point geostatistics (MPS) produce transit time distributions with the largest uncertainties; however, these are reducible by additional hydrogeological data, particularly flux measurements.


Subject(s)
Groundwater , Rivers , Fresh Water , Germany , Water Movements
4.
Front Artif Intell ; 3: 52, 2020.
Article in English | MEDLINE | ID: mdl-33733169

ABSTRACT

Methods for the sequential design of computer experiments typically consist of two phases. In the first phase, the exploratory phase, a space-filling initial design is used to estimate the hyperparameters of a Gaussian process emulator (GPE) and to provide some initial global exploration of the model function. In the second phase, more design points are added one by one to improve the GPE and to solve the actual problem at hand (e.g., Bayesian optimization, estimation of failure probabilities, solving Bayesian inverse problems). In this article, we investigate whether hyperparameters can be estimated without a separate exploratory phase. Such an approach leaves the hyperparameters uncertain in the first iterations, so the acquisition function (which tells where to evaluate the model function next) and the GPE-based estimator need to be adapted to non-Gaussian random fields. Numerical experiments are performed exemplarily on a sequential method for solving Bayesian inverse problems. These experiments show that hyperparameters can indeed be estimated without an exploratory phase, and the resulting method works almost as efficiently as if the hyperparameters had been known beforehand. This means that the estimation of hyperparameters should not be the reason for including an exploratory phase. Furthermore, we show numerical examples where these results allow us to eliminate the exploratory phase, making the sequential design method both faster (requiring fewer model evaluations) and easier to use (requiring fewer choices by the user).

5.
J Environ Manage ; 249: 109364, 2019 Nov 01.
Article in English | MEDLINE | ID: mdl-31404854

ABSTRACT

Recent studies have suggested hybrid green-blue-gray infrastructures (HGBGI) as the most promising urban drainage systems, since they can simultaneously combine the reliability, resilience, and acceptability of gray infrastructures (networks of pipes) with the multi-functionality, sustainability, and adaptability of green-blue infrastructures (GBI). Combining GBI and gray measures for designing new urban drainage systems forms a nonlinear, multimodal, mixed integer-real optimization problem that is highly constrained and intractable. To address this, this study presents a simulation-optimization framework to optimize urban drainage systems considering HGBGI alternatives and different degrees of centralization. The proposed framework begins with characterizing the site under design and drawing the base graph. Then, different layouts with different degrees of centralization are generated and hydraulically designed using a recent algorithm called the hanging gardens algorithm. After introducing the feasible GBIs to the model, a second optimization is performed to find the optimum distribution of GBIs that minimizes the total life cycle costs of GBIs and pipe networks. Finally, the resilience and sustainability of the different scenarios are evaluated using several design storms, providing material for final assessment and decision-making. The performance of the proposed framework is evaluated using a real large-scale case study, a part of the city of Ahvaz in Iran. Results are presented and discussed, with recommendations for future studies.


Subject(s)
Models, Theoretical , Cities , Color , Iran , Reproducibility of Results
6.
Ground Water ; 57(3): 378-391, 2019 05.
Article in English | MEDLINE | ID: mdl-30069873

ABSTRACT

This study determines the aspects of river bathymetry that have the greatest influence on the predictive biases when simulating hyporheic exchange. To investigate this, we build a highly parameterized HydroGeoSphere model of the Steinlach River Test Site in southwest Germany as a reference. This model is then modified with simpler bathymetries, evaluating the changes to hyporheic exchange fluxes and transit time distributions. Results indicate that simulating hyporheic exchange with a high-resolution detailed bathymetry using a three-dimensional fully coupled model leads to nested multiscale hyporheic exchange systems. A poorly resolved bathymetry will underestimate the small-scale hyporheic exchange, biasing the simulated hyporheic exchange towards larger scales, thus leading to overestimates of hyporheic exchange residence times. This can lead to gross biases in the estimation of a catchment's capacity to attenuate pollutants when extrapolated to account for all meanders along an entire river within a watershed. The detailed river slope alone is not enough to accurately simulate the locations and magnitudes of losing and gaining river reaches. Thus, local bedforms in terms of bathymetric highs and lows within the river are required. Bathymetry surveying campaigns can be more effective by prioritizing bathymetry measurements along the thalweg and gegenweg of a meandering channel. We define the gegenweg as the line that connects the shallowest points in successive cross-sections along a river opposite to the thalweg under average flow conditions. Incorporating local bedforms will likely capture the nested nature of hyporheic exchange, leading to more physically meaningful simulations of hyporheic exchange fluxes and transit times.


Subject(s)
Groundwater , Rivers , Germany
7.
J Contam Hydrol ; 195: 11-22, 2016 12.
Article in English | MEDLINE | ID: mdl-27866081

ABSTRACT

This study addresses the delineation of areas that contribute baseflow to a stream reach, also known as stream capture zones. Such areas can be delineated using standard well capture zone delineation methods, with three important differences: (1) natural gradients are smaller compared to those produced by supply wells and are therefore subject to greater numerical errors, (2) stream discharge varies seasonally, and (3) stream discharge varies spatially. This study focuses on model-related uncertainties due to model characteristics, discretization schemes, delineation methods, and particle tracking algorithms. The methodology is applied to the Alder Creek watershed in southwestern Ontario. Four different model codes are compared: HydroGeoSphere, WATFLOW, MODFLOW, and FEFLOW. In addition, two delineation methods are compared: reverse particle tracking and reverse transport, where the latter considers local-scale parameter uncertainty by using a macrodispersion term to produce a capture probability plume. The results from this study indicate that different models can calibrate acceptably well to the same data and produce very similar distributions of hydraulic head, but can produce different capture zones. The stream capture zone is found to be highly sensitive to the particle tracking algorithm. It was also found that particle tracking by itself, if applied to complex systems such as the Alder Creek watershed, would require considerable subjective judgement in the delineation of stream capture zones. Reverse transport is an alternative and more reliable approach that provides probability intervals for the baseflow contribution areas, taking uncertainty into account. The two approaches can be used together to enhance the confidence in the final outcome.


Subject(s)
Groundwater/chemistry , Models, Theoretical , Rivers/chemistry , Water Movements , Algorithms , Hydrology , Ontario , Uncertainty
8.
Ground Water ; 54(6): 861-870, 2016 11.
Article in English | MEDLINE | ID: mdl-27144615

ABSTRACT

Monitoring networks are expensive to establish and to maintain. In this paper, we extend an existing data-worth estimation method from the suite of PEST utilities with a global optimization method for optimal sensor placement (called optimal design) in groundwater monitoring networks. Design optimization can include multiple simultaneous sensor locations and multiple sensor types; both location and sensor type are treated simultaneously as decision variables. Our method combines linear uncertainty quantification with a modified genetic algorithm for discrete multi-location, multi-type search. The efficiency of the global optimization is enhanced by an archive of past samples and by parallel computing. We demonstrate our methodology for a groundwater monitoring network at the Steinlach experimental site, south-western Germany, which has been established to monitor river-groundwater exchange processes. The target of the optimization is the best possible exploration for minimum variance in predicting the mean travel time of the hyporheic exchange. Our results demonstrate that the information gain of monitoring network designs can be explored efficiently, and with easily accessible tools, prior to taking new field measurements or installing additional measurement points. The proposed methods proved to be efficient and can be applied to model-based optimal design of any type of monitoring network in approximately linear systems. Our key contributions are (1) the use of easy-to-implement tools for an otherwise complex task and (2) the consideration of data-worth interdependencies in the simultaneous optimization of multiple sensor locations and sensor types.
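
To make the linear-uncertainty-quantification step concrete, the sketch below scores candidate sensor subsets by the posterior variance of a prediction of interest in a linearized model. An exhaustive search over subsets stands in for the paper's modified genetic algorithm, and all names and matrices are illustrative assumptions, not the PEST utilities themselves.

```python
import numpy as np
from itertools import combinations

def best_design(J, Cp, r, y_sens, k):
    """Score every k-sensor subset by linear uncertainty quantification.

    J:      sensitivity (Jacobian) of each candidate observation to the parameters
    Cp:     prior parameter covariance
    r:      observation error variances
    y_sens: sensitivity of the scalar prediction of interest to the parameters
    """
    best, best_var = None, np.inf
    for subset in combinations(range(J.shape[0]), k):
        idx = list(subset)
        Js, Rs_inv = J[idx], np.diag(1.0 / r[idx])
        # posterior parameter covariance of the linearized inverse problem
        post = np.linalg.inv(np.linalg.inv(Cp) + Js.T @ Rs_inv @ Js)
        var = y_sens @ post @ y_sens       # predictive variance of the target
        if var < best_var:
            best, best_var = subset, var
    return best, best_var
```

A genetic algorithm replaces the exhaustive loop when the number of candidate locations and types makes enumeration infeasible; the scoring of each design stays the same.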


Subject(s)
Environmental Monitoring , Groundwater , Germany , Uncertainty
9.
Environ Int ; 79: 85-105, 2015 Jun.
Article in English | MEDLINE | ID: mdl-25801101

ABSTRACT

Anthropogenic Trace Compounds (ATCs), which continuously grow in number and concentration, are an emerging issue for water quality in both natural and technical environments. The complex web of exposure pathways, as well as the variety in the chemical structure and potency of ATCs, represents immense challenges for future research and policy initiatives. This review summarizes current trends and identifies knowledge gaps in innovative, effective monitoring and management strategies, while addressing research questions concerning ATC occurrence, fate, detection, and toxicity. We highlight the progressing sensitivity of chemical analytics and the challenges in harmonizing sampling protocols and methods, as well as the need for ATC indicator substances to enable cross-nationally valid monitoring routines. Secondly, the status quo in ecotoxicology is described to advocate for a better implementation of long-term tests, for addressing toxicity at the community, environmental, and human-health levels, and for adapting the various test levels and endpoints. Moreover, we discuss potential sources of ATCs and the current removal efficiency of wastewater treatment plants (WWTPs) to indicate the most effective intervention points and elimination strategies. Knowledge gaps in the transport and/or retention of ATCs during their passage through surface waters and groundwaters are further emphasized in relation to their physico-chemical properties, abiotic conditions, and biological interactions, in order to highlight fundamental research needs. Finally, we demonstrate the importance and remaining challenges of an appropriate ATC risk assessment, since this will greatly assist in identifying the most urgent calls for action, selecting the most promising measures, and evaluating the success of implemented management strategies.


Subject(s)
Environmental Monitoring/methods , Trace Elements/analysis , Water Pollutants, Chemical/analysis , Water Pollution, Chemical/analysis , Ecosystem , Humans , Risk Assessment/methods , Trace Elements/toxicity , Water Pollution, Chemical/legislation & jurisprudence , Water Pollution, Chemical/prevention & control
10.
Water Resour Res ; 50(12): 9484-9513, 2014 Dec.
Article in English | MEDLINE | ID: mdl-25745272

ABSTRACT

Bayesian model selection or averaging objectively ranks a number of plausible, competing conceptual models based on Bayes' theorem. It implicitly performs an optimal trade-off between performance in fitting the available data and minimum model complexity. The procedure requires determining the Bayesian model evidence (BME), which is the likelihood of the observed data integrated over each model's parameter space. The computation of this integral is highly challenging because it is as high-dimensional as the number of model parameters. Three classes of techniques to compute BME are available, each with its own challenges and limitations: (1) exact and fast analytical solutions are limited by strong assumptions; (2) numerical evaluation quickly becomes infeasible for expensive models; (3) approximations known as information criteria (ICs), such as the AIC, BIC, or KIC (Akaike, Bayesian, or Kashyap information criterion, respectively), yield contradictory results with regard to model ranking. Our study features a theory-based intercomparison of these techniques. We further assess their accuracy in a simplistic synthetic example in which an exact analytical solution exists for some scenarios. In more challenging scenarios, we use a brute-force Monte Carlo integration method as a reference. We continue this analysis with a real-world application of hydrological model selection. This is a first-time benchmarking of the various methods for BME evaluation against true solutions. Results show that BME values from ICs are often heavily biased and that the choice of approximation method substantially influences the accuracy of model ranking. For reliable model selection, bias-free numerical methods should be preferred over ICs whenever computationally feasible.
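
The brute-force Monte Carlo reference can be sketched directly from the definition: BME is the prior-averaged likelihood, so it can be estimated by drawing from the prior and averaging likelihoods (in log space for numerical stability). The function names below are illustrative, not from the paper.

```python
import numpy as np

def log_bme_mc(prior_sampler, log_likelihood, n=200_000, seed=0):
    """Brute-force Monte Carlo estimate of log BME = log E_prior[L(theta)]."""
    rng = np.random.default_rng(seed)
    theta = prior_sampler(rng, n)            # draws from the prior
    ll = log_likelihood(theta)               # log-likelihood of the data per draw
    m = ll.max()                             # log-sum-exp trick for stability
    return m + np.log(np.mean(np.exp(ll - m)))
```

For a conjugate Gaussian toy problem (prior theta ~ N(0, 1), observation y = 0 with unit noise), the evidence is available in closed form, N(0 | 0, 2), which makes the estimator easy to verify.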

11.
J Contam Hydrol ; 138-139: 22-39, 2012 Sep.
Article in English | MEDLINE | ID: mdl-22796625

ABSTRACT

We study plumes originating from continuous sources that require a dissolved reaction partner for their degradation. The length of such plumes is typically controlled by transverse mixing. While analytical expressions have been derived for homogeneous flow fields, incomplete characterization of the hydraulic conductivity field causes uncertainty in predicting plume lengths in heterogeneous domains. In this context, we analyze the effects of three sources of uncertainty: (i) The uncertainty of the effective mixing rate along the plume fringes due to spatially varying flow focusing, (ii) the uncertainty of the volumetric discharge through (and thus total mass flux leaving) the source area, and (iii) different parameterizations of the Darcy-scale transverse dispersion coefficient. The first two are directly related to heterogeneity of hydraulic conductivity. In this paper, we derive semi-analytical expressions for the probability distribution of plume lengths at different levels of complexity. The results are compared to numerical Monte Carlo simulations. Uncertainties in mixing and in the source strength result in a statistical distribution of possible plume lengths. For unconditional random hydraulic conductivity fields, plume lengths may vary by more than one order of magnitude even for moderate degrees of heterogeneity. Our results show that the uncertainty of volumetric flux through the source is the most relevant contribution to the variance of the plume length. The choice of different parameterizations for the local dispersion coefficient leads to differences in the mean estimated plume length.
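
The effect of the first two uncertainty sources on the distribution of plume lengths can be illustrated with a minimal Monte Carlo experiment. The scaling used below, plume length proportional to source discharge times squared source width over transverse dispersion, is a schematic stand-in for the paper's semi-analytical expressions, and all parameter values are invented for illustration.

```python
import numpy as np

def plume_length_samples(n=10_000, seed=1):
    """Monte Carlo sketch of an uncertain plume-length distribution."""
    rng = np.random.default_rng(seed)
    w = 1.0                                     # geometric source width [m]
    q = rng.lognormal(np.log(1e-5), 0.5, n)     # uncertain specific discharge [m/s]
    Dt = rng.lognormal(np.log(1e-9), 0.3, n)    # uncertain transverse dispersion [m^2/s]
    return q * w**2 / Dt                        # schematic plume lengths [m]
```

Even with these moderate lognormal spreads, the resulting plume lengths vary over roughly an order of magnitude, consistent with the qualitative finding that source-flux uncertainty dominates the variance.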


Asunto(s)
Agua Subterránea/química , Movimientos del Agua , Contaminantes del Agua/química , Simulación por Computador , Modelos Químicos , Método de Montecarlo , Procesos Estocásticos , Incertidumbre
12.
IEEE Trans Vis Comput Graph ; 17(12): 1949-58, 2011 Dec.
Article in English | MEDLINE | ID: mdl-22034312

ABSTRACT

A new type of glyph is introduced to visualize unsteady flow with static images, allowing easier analysis of time-dependent phenomena compared to animated visualization. Adopting the visual metaphor of radar displays, this glyph represents flow direction by angle and time by radius in spherical coordinates. Dense seeding of flow radar glyphs over the flow domain naturally lends itself to multi-scale visualization: zoomed-out views show aggregated overviews, while zooming in enables detailed analysis of spatial and temporal characteristics. Uncertainty visualization is supported by extending the glyph to display possible ranges of flow directions. The paper focuses on 2D flow but includes a discussion of 3D flow as well. Examples from CFD and the field of stochastic hydrogeology show that it is easy to discriminate regions of different spatio-temporal flow behavior and regions of different uncertainty variations in space and time. The examples also demonstrate that parameter studies can be analyzed, because the glyph design facilitates comparative visualization. Finally, different variants of interactive GPU-accelerated implementations are discussed.
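
The glyph's geometric mapping, flow direction to angle and time to radius, is simple to state in code. A minimal 2D sketch under that radar metaphor, with illustrative names; the actual glyph rendering and uncertainty bands from the paper are not reproduced here.

```python
import numpy as np

def flow_radar_glyph(angles, r_max=1.0):
    """Vertex positions of one flow-radar glyph.

    The flow direction at time step t becomes a point at angle angles[t],
    with radius growing linearly in time, so a full time series collapses
    into a single static curve.
    """
    t = np.linspace(0.0, r_max, len(angles))       # time mapped to radius
    return np.column_stack((t * np.cos(angles), t * np.sin(angles)))
```

A steady flow traces a straight radial line, while an unsteady flow traces a spiral-like curve, which is what makes time-dependent behavior readable from a static image.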

13.
J Contam Hydrol ; 116(1-4): 24-34, 2010 Jul 30.
Article in English | MEDLINE | ID: mdl-20541835

ABSTRACT

The initial width of contaminant plumes is known to have a key influence on expected plume development, dispersion, and travel time statistics. In past studies, the initial plume width has been perceived as identical to the geometric width of a contaminant source or injection volume. A recent study on optimal sampling layouts (Nowak et al., 2009) showed that a significant portion of the uncertainty in predicting plume migration stems from the uncertain total hydraulic flux through the source area. This result points towards a missing link between source geometry and plume statistics, which we denote as the effective source width. We define the effective source width as the ratio between the actual and expected hydraulic fluxes, multiplied by the geometric source width. The actual hydraulic flux through the source area is given by individual realizations, while the expected one represents the mean over the ensemble. The effective source width is a stochastic quantity that may strongly differ from the geometric source width for geometrically small sources; the two become identical only in the limit of wide sources (approaching ergodicity). We derive its stochastic ensemble moments in order to explore the dependency on source scale. We show that, if the effective source width is known rather than the geometric width, predictions of plume development gain greatly in predictive power. This is illustrated on plume statistics such as the distribution of plume length, average width, transverse dispersion, total mass flux, and overall concentration variance. The analysis is limited to 2D depth-averaged systems, but the implications hold for 3D cases.


Asunto(s)
Contaminación Ambiental/análisis , Modelos Teóricos , Contaminantes del Agua/análisis , Método de Montecarlo , Procesos Estocásticos , Incertidumbre , Movimientos del Agua
14.
J Contam Hydrol ; 80(3-4): 130-48, 2005 Nov 15.
Article in English | MEDLINE | ID: mdl-16115697

ABSTRACT

Vertical transverse mixing is known to be a controlling factor in the natural attenuation of extended biodegradable plumes originating from continuously emitting sources. We perform conservative and reactive tracer tests in a quasi two-dimensional, 14 m long sand box in order to quantify vertical mixing in heterogeneous media. The filling mimics natural sediments, including a distribution of different hydro-facies made of different sand mixtures, and micro-structures within the sand lenses. We quantify the concentration distribution of the conservative tracer by analyzing digital images taken at steady state during the tracer-dye experiment. Heterogeneity causes plume meandering, leading to distorted concentration profiles. Without knowledge of the velocity distribution, it is not possible to determine meaningful vertical dispersion coefficients from the concentration profiles. Using the stream-line pattern resulting from an inverse model of previous experiments in the sand box, we can correct for the plume meandering. The resulting vertical dispersion coefficient is approximately 4 × 10⁻⁹ m²/s. We observe no distinct increase in the vertical dispersion coefficient with increasing travel distance, indicating that heterogeneity has hardly any impact on vertical transverse mixing. In the reactive tracer test, we continuously inject an alkaline solution over a certain height into a domain that is otherwise occupied by an acidic solution. The outline of the alkaline plume is visualized by adding a pH indicator to both solutions. From the height and length of the reactive plume, we estimate a transverse dispersion coefficient of approximately 3 × 10⁻⁹ m²/s. Overall, the vertical transverse dispersion coefficients are less than an order of magnitude larger than pore diffusion coefficients and hardly increase due to heterogeneity. Thus, for the assessment of natural attenuation, we conclude that reactive plumes might become very large if they are controlled by vertical dispersive mixing.


Asunto(s)
Contaminación Ambiental/prevención & control , Agua Dulce/química , Modelos Teóricos , Movimientos del Agua , Contaminantes Químicos del Agua/análisis , Colorantes/análisis , Sedimentos Geológicos/análisis