Results 1 - 6 of 6
1.
Sci Total Environ; 912: 169120, 2024 Feb 20.
Article in English | MEDLINE | ID: mdl-38070558

ABSTRACT

Multi-hazard events, characterized by the simultaneous, cascading, or cumulative occurrence of multiple natural hazards, pose a significant threat to human lives and assets, primarily because of the cumulative and cascading effects that arise when natural hazards interact across space and time. Identifying such events is challenging, however, owing to the complex nature of natural hazard interactions and the limited availability of multi-hazard observations. This study presents an approach for identifying multi-hazard events during the past 123 years (1900-2023) using the EM-DAT global disaster database. Leveraging the 'associated hazard' information in EM-DAT, multi-hazard events are detected and assessed in relation to their frequency, impact on human lives and assets, and reporting trends. The interactions between various combinations of natural hazard pairs are explored, reclassifying them into four categories: preconditioned/triggering, multivariate, temporally compounding, and spatially compounding multi-hazard events. The results show that, globally, approximately 19% of the 16,535 disasters recorded in EM-DAT can be classified as multi-hazard events, yet these events are disproportionately responsible for nearly 59% of the estimated global economic losses. Conversely, single-hazard events resulted in higher fatalities than multi-hazard events. The largest proportion of multi-hazard events is associated with floods, storms, and earthquakes. Landslides emerge as the predominant secondary hazard within multi-hazard pairs, primarily triggered by floods, storms, and earthquakes, and the majority of multi-hazard events exhibit preconditioned/triggering or multivariate characteristics. Multi-hazard events are more prevalent in Asia and North America, whilst temporal overlaps of multiple hazards predominate in Europe. These results can be used to increase the integration of multi-hazard thinking in risk assessments, emergency management response plans, and mitigation policies at both national and international levels.
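
As an illustration of the pairing logic described in this abstract, the sketch below filters records that list an associated hazard and maps primary/secondary pairs onto the four categories. The column names (`Disaster Type`, `Associated Dis`) and the category mapping are assumptions made for the example, not the actual EM-DAT schema or the study's full classification.

```python
# Hypothetical sketch: pairing primary and associated hazards from an
# EM-DAT-style export and mapping pairs to multi-hazard categories.
import pandas as pd

# Assumed (hypothetical) column names for an EM-DAT-style table.
records = pd.DataFrame({
    "Disaster Type":  ["Flood", "Storm", "Earthquake", "Drought"],
    "Associated Dis": ["Landslide", "Flood", "Landslide", None],
})

# Only records that report an associated hazard qualify as multi-hazard events.
multi = records.dropna(subset=["Associated Dis"]).copy()

# Illustrative (not exhaustive) mapping of hazard pairs to the four categories.
CATEGORY = {
    ("Flood", "Landslide"):      "preconditioned/triggering",
    ("Earthquake", "Landslide"): "preconditioned/triggering",
    ("Storm", "Flood"):          "multivariate",
}

multi["category"] = [
    CATEGORY.get((p, a), "temporally/spatially compounding")
    for p, a in zip(multi["Disaster Type"], multi["Associated Dis"])
]

print(multi["category"].value_counts())
print(f"Share of multi-hazard events: {len(multi) / len(records):.0%}")
```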

2.
Neural Netw; 118: 338-351, 2019 Oct.
Article in English | MEDLINE | ID: mdl-31369950

ABSTRACT

This paper describes a robust and computationally feasible method to train Neural Networks and quantify the uncertainty of their predictions. Specifically, we propose a back-propagation algorithm for Neural Networks with interval predictions. To maintain numerical stability, we propose minimising the maximum error over each batch at every step. Our approach can accommodate incertitude in the training data, and therefore adversarial examples from a commonly used attack model can be trivially accounted for. We present results on a test function example and a more realistic engineering test case. The reliability of the predictions of these networks is guaranteed by the non-convex Scenario approach to chance-constrained optimisation, which takes place after training and is hence robust to the performance of the optimiser. A key result is that, by using minibatches of size M, the complexity of the proposed approach scales as O(M·N_iter) and does not depend upon the number of training data points, unlike other Interval Predictor Model methods. In addition, troublesome penalty function methods are avoided. To the authors' knowledge, this contribution presents the first computationally feasible approach for dealing with convex set based epistemic uncertainty in huge datasets.


Subjects
Machine Learning, Neural Networks (Computer), Data Accuracy, Humans, Uncertainty
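
A minimal sketch of the min-max training idea from this abstract, under stated assumptions: the network, data, and interval parameterisation (centre plus half-width) are placeholders rather than the authors' implementation, and the small width term is added here only to keep the illustrative intervals from growing without bound.

```python
# Minimal sketch (not the authors' code): a network predicts an interval
# [lower, upper] for each input and is trained by minimising the worst-case
# error over each minibatch of size M, so the cost per step scales with M only.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 2))  # -> (centre, raw width)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

x = torch.linspace(-1, 1, 256).unsqueeze(1)           # toy training inputs
y = torch.sin(3 * x) + 0.1 * torch.randn_like(x)       # noisy targets

for step in range(2000):
    idx = torch.randint(0, len(x), (32,))               # minibatch of size M = 32
    out = net(x[idx])
    centre = out[:, :1]
    half_width = torch.nn.functional.softplus(out[:, 1:])
    lower, upper = centre - half_width, centre + half_width

    # Error is zero when the target lies inside the interval; otherwise it is
    # the distance to the nearest bound. A small width term keeps intervals tight.
    violation = torch.relu(lower - y[idx]) + torch.relu(y[idx] - upper)
    loss = (violation + 0.05 * half_width).max()         # minimise the batch maximum

    opt.zero_grad()
    loss.backward()
    opt.step()
```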
3.
R Soc Open Sci; 6(12): 191986, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31903220

ABSTRACT

[This corrects the article DOI: 10.1098/rsos.180687.].

4.
R Soc Open Sci; 5(11): 180687, 2018 Nov.
Article in English | MEDLINE | ID: mdl-30564387

ABSTRACT

A new validation metric is proposed that combines the use of a threshold based on the uncertainty in the measurement data with a normalized relative error, and that is robust in the presence of large variations in the data. The outcome from the metric is the probability that a model's predictions are representative of the real world based on the specific conditions and confidence level pertaining to the experiment from which the measurements were acquired. Relative error metrics are traditionally designed for use with a series of data values, but orthogonal decomposition has been employed to reduce the dimensionality of data matrices to feature vectors so that the metric can be applied to fields of data. Three previously published case studies are employed to demonstrate the efficacy of this quantitative approach to the validation process in the discipline of structural analysis, for which historical data were available; however, the concept could be applied to a wide range of disciplines and sectors where modelling and simulation play a pivotal role.
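
The sketch below only mirrors the ingredients named in this abstract (orthogonal decomposition of data fields into feature vectors, a normalised relative error, and a threshold derived from measurement uncertainty); the synthetic data, the 10% threshold, and the way the probability is computed are assumptions for illustration, not the published metric.

```python
# Illustrative sketch only, not the published formula.
import numpy as np

rng = np.random.default_rng(0)
measured  = rng.normal(size=(50, 200))             # 50 measured fields, flattened
predicted = measured + rng.normal(scale=0.05, size=measured.shape)

# Orthogonal decomposition: project both data sets onto the dominant
# measurement modes to obtain low-dimensional feature vectors.
_, _, modes = np.linalg.svd(measured - measured.mean(axis=0), full_matrices=False)
k = 5
f_meas = measured @ modes[:k].T
f_pred = predicted @ modes[:k].T

# Normalised relative error per field, compared with a threshold derived from
# the measurement uncertainty (assumed here, for illustration, to be 10%).
rel_err   = np.linalg.norm(f_pred - f_meas, axis=1) / np.linalg.norm(f_meas, axis=1)
threshold = 0.10
probability = np.mean(rel_err <= threshold)
print(f"Fraction of fields within threshold: {probability:.2f}")
```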

5.
Neural Netw; 96: 80-90, 2017 Dec.
Article in English | MEDLINE | ID: mdl-28987979

ABSTRACT

Artificial Neural Networks (ANNs) are commonly used in place of expensive models to reduce the computational burden of uncertainty quantification, reliability and sensitivity analyses. An ANN with a selected architecture is trained with the back-propagation algorithm on a small set of data representative of the input/output relationship of the underlying model of interest. However, ANNs with different performance may be obtained from the same training data because of the random initialization of the weight parameters in each network, leading to uncertainty in selecting the best-performing ANN. Moreover, using cross-validation to select the ANN with the highest R2 value can bias the prediction, because R2 cannot determine whether the prediction made by an ANN is biased. Additionally, R2 does not indicate whether a model is adequate, as it is possible to have a low R2 for a good model and a high R2 for a bad model. Hence, in this paper, we propose an approach to improve the robustness of predictions made by ANNs. The approach is based on a systematic combination of identically trained ANNs, coupling the Bayesian framework with model averaging. Additionally, the uncertainties of the robust prediction derived from the approach are quantified in terms of confidence intervals. To demonstrate the applicability of the proposed approach, two synthetic numerical examples are presented. Finally, the proposed approach is used to perform reliability and sensitivity analyses on a process simulation model of a UK nuclear effluent treatment plant developed by the National Nuclear Laboratory (NNL), treated in this study as a black box from which a set of training data is drawn as a test case. This model has been extensively validated against plant and experimental data and used to support the UK effluent discharge strategy.


Subjects
Neural Networks (Computer), Nonlinear Dynamics, Algorithms, Bayes Theorem, Reproducibility of Results
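
A rough sketch of the ensemble idea in this abstract, under stated assumptions: several ANNs of identical architecture are trained from different random initialisations, a Gaussian likelihood-based weight stands in for the Bayesian model-averaging weight, and ensemble percentiles stand in for the derived confidence intervals. None of this is the paper's exact formulation.

```python
# Rough sketch under stated assumptions (not the paper's exact method).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(3 * X[:, 0]) * X[:, 1] + 0.05 * rng.normal(size=200)

models, log_like = [], []
for seed in range(10):                       # identical architecture, different init
    m = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=seed)
    m.fit(X, y)
    resid = y - m.predict(X)
    sigma2 = resid.var() + 1e-12
    # Gaussian log-likelihood of training residuals (assumed basis for the weights).
    log_like.append(-0.5 * len(y) * (np.log(2 * np.pi * sigma2) + 1.0))
    models.append(m)

log_like = np.asarray(log_like)
w = np.exp(log_like - log_like.max())        # normalised weights standing in for
w /= w.sum()                                 # posterior model probabilities

X_new = rng.uniform(-1, 1, size=(5, 2))
preds = np.array([m.predict(X_new) for m in models])     # shape (10, 5)
mean = w @ preds                                          # model-averaged prediction
lo, hi = np.percentile(preds, [2.5, 97.5], axis=0)        # rough 95% interval
print(np.c_[mean, lo, hi])
```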
6.
Finite Elem Anal Des; 51(1): 31-48, 2012 Apr.
Article in English | MEDLINE | ID: mdl-22474398

ABSTRACT

The aim of this paper is to demonstrate that stochastic analyses can be performed on large and complex models at affordable cost. Stochastic analyses offer a much more realistic approach to the analysis and design of components and systems, although they are generally computationally demanding; hence, efficient approaches and high performance computing are required to reduce the execution time. A general-purpose software framework is presented that integrates deterministic solvers (e.g. finite element solvers), efficient algorithms for uncertainty management, and high performance computing. The software is intended for a wide range of applications, including optimization analysis, life-cycle management, reliability and risk analysis, fatigue and fracture simulation, and robust design. The applicability of the proposed tools for practical applications is demonstrated by means of a number of case studies of industrial interest involving detailed models.
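
The framework itself is not reproduced here; the sketch below only illustrates the generic pattern such software automates, namely wrapping a black-box deterministic solver in a sampling loop whose runs are distributed across workers. The solver, input distributions, and failure threshold are placeholders.

```python
# Generic sketch of the pattern: a deterministic black-box solver wrapped in a
# Monte Carlo loop with parallel evaluation. The solver is a toy placeholder.
from concurrent.futures import ProcessPoolExecutor
import numpy as np

def solver(params):
    """Placeholder for an expensive deterministic run (e.g. a FE analysis)."""
    load, stiffness = params
    return load / stiffness                  # toy 'displacement' response

def sample_inputs(n, rng):
    # Assumed uncertain inputs: load ~ N(100, 10), stiffness ~ lognormal around 50.
    return np.column_stack([rng.normal(100, 10, n),
                            rng.lognormal(np.log(50), 0.1, n)])

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    samples = sample_inputs(1000, rng)
    with ProcessPoolExecutor() as pool:      # distribute runs across cores
        responses = np.fromiter(pool.map(solver, samples), dtype=float)
    # First-pass uncertainty results: statistics and a simple failure probability.
    print("mean =", responses.mean(), "std =", responses.std())
    print("P(response > 2.5) =", (responses > 2.5).mean())
```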
