Results 1-18 of 18
1.
Value Health; 27(8): 1073-1084, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38641056

ABSTRACT

OBJECTIVES: Health economic (HE) models are often considered "black boxes" because they are not publicly available and lack transparency, which prevents independent scrutiny. Additionally, validation efforts and the validation status of HE models are not systematically reported. Methods to validate HE models in the absence of their full underlying code are therefore urgently needed to improve health policy making. This study aimed to develop and test a generic dashboard to systematically explore the workings of HE models and validate their model parameters and outcomes. METHODS: The Probabilistic Analysis Check dashBOARD (PACBOARD) was developed using insights from the literature, health economists, and a data scientist. The functionalities of PACBOARD are (1) exploring and validating model parameters and outcomes using standardized validation tests and interactive plots, (2) visualizing and investigating the relationship between model parameters and outcomes using metamodeling, and (3) predicting HE outcomes using the fitted metamodel. To test PACBOARD, 2 mock HE models were developed, and errors were introduced into these models, e.g., negative cost inputs and utility values exceeding 1. PACBOARD's metamodeling predictions of incremental net monetary benefit were validated against the original model's outcomes. RESULTS: PACBOARD automatically identified all errors introduced in the erroneous HE models. Metamodel predictions were accurate compared with the original model outcomes. CONCLUSIONS: PACBOARD is a unique dashboard aimed at improving the feasibility and transparency of validation efforts for HE models. PACBOARD allows users to explore the workings of HE models using metamodeling based on the models' parameters and outcomes.


Subjects
Economic Models, Humans, Cost-Benefit Analysis, Statistical Models, Medical Economics, Reproducibility of Results, Health Policy
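As an illustration of the kind of standardized validation tests described above, the following sketch flags negative cost inputs and out-of-range utility values in probabilistic-analysis samples and fits a simple metamodel of incremental net monetary benefit. The column-name conventions (`cost_`, `utility_`) and the linear metamodel are assumptions for illustration, not PACBOARD's actual implementation.

```python
# Hypothetical sketch of dashboard-style parameter checks and a metamodel fit;
# column names and checks are illustrative assumptions, not taken from the paper.
import pandas as pd
from sklearn.linear_model import LinearRegression

def validate_psa_samples(psa: pd.DataFrame) -> pd.DataFrame:
    """Return one row per failed check on probabilistic-analysis samples."""
    issues = []
    for col in psa.filter(like="cost_").columns:
        bad = psa[col] < 0
        if bad.any():
            issues.append({"parameter": col, "check": "negative cost", "n": int(bad.sum())})
    for col in psa.filter(like="utility_").columns:
        bad = (psa[col] < 0) | (psa[col] > 1)
        if bad.any():
            issues.append({"parameter": col, "check": "utility outside [0, 1]", "n": int(bad.sum())})
    return pd.DataFrame(issues)

def fit_inmb_metamodel(params: pd.DataFrame, inmb: pd.Series) -> LinearRegression:
    """Metamodel relating model parameters to incremental net monetary benefit."""
    return LinearRegression().fit(params.values, inmb.values)
```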
2.
Sensors (Basel); 22(15), 2022 Jul 25.
Article in English | MEDLINE | ID: mdl-35898063

ABSTRACT

Software-defined networking (SDN) is an innovative network architecture that splits the control and management planes from the data plane. It helps simplify network management and programmability, along with offering several other benefits. Due to its programmability features, SDN is gaining popularity in both academia and industry. However, this emerging paradigm has been facing diverse kinds of challenges during the SDN implementation process and with respect to the adoption of existing technologies. This paper evaluates several existing approaches in SDN and compares and analyzes the findings. The paper is organized into seven categories, namely network testing and verification, flow rule installation mechanisms, network security and management issues related to SDN implementation, memory management studies, SDN simulators and emulators, SDN programming languages, and SDN controller platforms. Each category is significant for the implementation of SDN networks. During the implementation process, network testing and verification is very important to avoid packet violations and network inefficiencies. Similarly, consistent flow rule installation, especially in the case of a policy change at the controller, needs to be carefully implemented. Effective network security and memory management, at both the network control and data planes, play a vital role in SDN. Furthermore, SDN simulation tools, controller platforms, and programming languages help academia and industry to implement and test their developed network applications. We also compare the existing SDN studies in detail in terms of classification and discuss their benefits and limitations. Finally, future research guidelines are provided.

3.
Geohealth; 6(6): e2021GH000567, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35765413

ABSTRACT

Anthropogenic emissions and ambient fine particulate matter (PM2.5) concentrations have declined in recent years across China. However, PM2.5 exposure remains high, ozone (O3) exposure is increasing, and the public health impacts are substantial. We used emulators to explore how emission changes (averaged per sector over all species) have contributed to changes in air quality and public health in China over 2010-2020. We show that PM2.5 exposure peaked in 2012 at 52.8 µg m⁻³, with contributions of 31% from industry and 22% from residential emissions. In 2020, PM2.5 exposure had declined by 36% to 33.5 µg m⁻³, with the contributions from industry and residential sources reduced to 15% and 17%, respectively. The PM2.5 disease burden decreased by only 9% over 2012-2020, partly due to an aging population with greater susceptibility to air pollution. Most of the reduction in PM2.5 exposure and the associated public health benefits occurred due to reductions in industrial (58%) and residential (29%) emissions. Reducing national PM2.5 exposure below the World Health Organization Interim Target 2 (25 µg m⁻³) would require a further 80% reduction in residential and industrial emissions, highlighting the challenges that remain to improve air quality in China.

4.
Geophys Res Lett; 49(20): e2022GL099788, 2022 Oct 28.
Article in English | MEDLINE | ID: mdl-36589268

ABSTRACT

The IPCC's scientific assessment of the timing of net-zero emissions and of 2030 emission reduction targets consistent with limiting warming to 1.5°C or 2°C rests on large scenario databases. Updates to this assessment, such as between the IPCC's Special Report on Global Warming of 1.5°C (SR1.5) and the Sixth Assessment Report (AR6), are the result of intertwined, sometimes opaque, factors. Here we isolate one factor: the Earth System Model emulators used to estimate the global warming implications of scenarios. We show that warming projections using AR6-calibrated emulators are consistent, to within around 0.1°C, with projections made by the emulators used in SR1.5. The consistency is due to two almost compensating changes: the increase in assessed historical warming between SR1.5 (based on AR5) and AR6, and a reduction in projected warming due to improved agreement between the emulators' response to emissions and the assessment to which they are calibrated.

5.
Med Decis Making; 42(1): 28-42, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34098793

ABSTRACT

BACKGROUND: Metamodeling may substantially reduce the computational expense of individual-level state transition simulation models (IL-STM) for calibration, uncertainty quantification, and health policy evaluation. However, because of the lack of guidance and readily available computer code, metamodels are still not widely used in health economics and public health. In this study, we provide guidance on how to choose a metamodel for uncertainty quantification. METHODS: We built a simulation study to evaluate the prediction accuracy and computational expense of metamodels for uncertainty quantification, using life-years gained (LYG) by treatment as the IL-STM outcome. We analyzed how metamodel accuracy changes with the characteristics of the simulation model, using a linear model (LM), Gaussian process regression (GP), generalized additive models (GAMs), and artificial neural networks (ANNs). Finally, we tested these metamodels in a case study consisting of a probabilistic analysis of a lung cancer IL-STM. RESULTS: In a scenario with low uncertainty in model parameters (i.e., small confidence intervals) and sufficient numbers of simulated life histories and simulation model runs, the commonly used metamodels (LM, ANNs, GAMs, and GP) have similarly good accuracy, with errors smaller than 1% for predicting LYG. With a higher level of uncertainty in model parameters, the prediction accuracy of GP and ANNs is superior to that of LM. In the case study, we found that, in the worst case, the best metamodel had an error of about 2.1%. CONCLUSION: To obtain good prediction accuracy in an efficient way, we recommend starting with LM and, if the resulting accuracy is insufficient, trying ANNs and eventually GP regression.


Subjects
Neural Networks (Computer), Computer Simulation, Humans, Linear Models, Normal Distribution, Uncertainty
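A minimal sketch of the kind of metamodel comparison described in this study, assuming the probabilistic-analysis parameter draws and the IL-STM outcome (e.g., LYG) are available as arrays. The placeholder data and the scikit-learn models below are illustrative, not the authors' code; GAMs are omitted to keep the sketch short.

```python
# Compare metamodel accuracy for predicting an IL-STM outcome from PSA draws.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_percentage_error

rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 6))                              # placeholder parameter draws
y = 10 + X @ rng.normal(size=6) + 0.1 * rng.normal(size=500)  # placeholder LYG outcome

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

metamodels = {
    "LM": LinearRegression(),
    "GP": GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True),
    "ANN": MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0),
}
for name, m in metamodels.items():
    m.fit(X_tr, y_tr)
    err = mean_absolute_percentage_error(y_te, m.predict(X_te))
    print(f"{name}: mean absolute percentage error = {100 * err:.2f}%")
```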
6.
Front Artif Intell; 4: 673062, 2021.
Article in English | MEDLINE | ID: mdl-34151255

ABSTRACT

Weak gravitational lensing mass maps play a crucial role in understanding the evolution of structures in the Universe and in our ability to constrain cosmological models. The prediction of these mass maps is based on expensive N-body simulations, which can create a computational bottleneck for cosmological analyses. Simulation-based emulators of map summary statistics, such as the matter power spectrum and its covariance, are starting to play an increasingly important role, as the analytical predictions are expected to reach their precision limits for upcoming experiments. Creating an emulator of the cosmological mass maps themselves, rather than of their summary statistics, is a more challenging task. Modern deep generative models, such as Generative Adversarial Networks (GANs), have demonstrated their potential to achieve this goal. Most existing GAN approaches produce simulations for a fixed value of the cosmological parameters, which limits their practical applicability. We propose a novel conditional GAN model that is able to generate mass maps for any pair of matter density Ωm and matter clustering strength σ8, the parameters with the largest impact on the evolution of structures in the Universe, for a given source galaxy redshift distribution n(z). Our results show that our conditional GAN can interpolate efficiently within the space of simulated cosmologies and generate maps anywhere inside this space with good visual quality and high statistical accuracy. We perform an extensive quantitative comparison of the N-body and GAN-generated maps using a range of metrics: pixel histograms, peak counts, power spectra, bispectra, Minkowski functionals, correlation matrices of the power spectra, the Multi-Scale Structural Similarity Index (MS-SSIM), and our equivalent of the Fréchet Inception Distance. We find very good agreement on these metrics, with typical differences <5% at the center of the simulation grid and slightly worse agreement for cosmologies at the grid edges. The agreement for the bispectrum is slightly worse, at the <20% level. This contribution is a step toward building emulators of mass maps directly, capturing both the cosmological signal and its variability. We make the code and the data publicly available.
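A minimal, hypothetical sketch of a generator network conditioned on (Ωm, σ8), in the spirit of the conditional GAN described above. The architecture, sizes, and PyTorch implementation are illustrative assumptions and do not reproduce the authors' model or training procedure.

```python
# Toy conditional generator: noise + (Omega_m, sigma_8) -> flattened mass map.
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    def __init__(self, latent_dim: int = 64, map_pixels: int = 64 * 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + 2, 256),   # noise concatenated with (Omega_m, sigma_8)
            nn.ReLU(),
            nn.Linear(256, 512),
            nn.ReLU(),
            nn.Linear(512, map_pixels),       # flattened mass map
        )

    def forward(self, z: torch.Tensor, cosmo: torch.Tensor) -> torch.Tensor:
        # cosmo has shape (batch, 2): columns are Omega_m and sigma_8.
        return self.net(torch.cat([z, cosmo], dim=1))

# Sample one map for a cosmology inside the simulated parameter grid.
gen = ConditionalGenerator()
z = torch.randn(1, 64)
cosmo = torch.tensor([[0.3, 0.8]])            # (Omega_m, sigma_8)
fake_map = gen(z, cosmo).reshape(1, 64, 64)
```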

7.
Front Physiol; 12: 662314, 2021.
Article in English | MEDLINE | ID: mdl-34113262

ABSTRACT

Purpose: Bayesian calibration is generally superior to standard direct-search algorithms in that it estimates the full joint posterior distribution of the calibrated parameters. However, there are many barriers to using Bayesian calibration in health decision sciences, stemming from the need to program complex models in probabilistic programming languages and the associated computational burden of applying Bayesian calibration. In this paper, we propose to use artificial neural networks (ANNs) as one practical solution to these challenges. Methods: Bayesian Calibration using Artificial Neural Networks (BayCANN) involves (1) training an ANN metamodel on a sample of model inputs and outputs, and (2) calibrating the trained ANN metamodel instead of the full model in a probabilistic programming language to obtain the joint posterior distribution of the calibrated parameters. We illustrate BayCANN using a colorectal cancer natural history model. We conduct a confirmatory simulation analysis by first obtaining parameter estimates from the literature and then using them to generate adenoma prevalence and cancer incidence targets. We compare the performance of BayCANN in recovering these "true" parameter values against performing a Bayesian calibration directly on the simulation model using an incremental mixture importance sampling (IMIS) algorithm. Results: We were able to apply BayCANN using only a dataset of the model inputs and outputs and minor modifications of BayCANN's code. In this example, BayCANN was slightly more accurate in recovering the true posterior parameter estimates than IMIS. Obtaining the dataset of samples and running BayCANN took 15 min, compared with 80 min for IMIS. In applications involving computationally more expensive simulations (e.g., microsimulations), BayCANN may offer higher relative speed gains. Conclusions: BayCANN uses only a dataset of model inputs and outputs to obtain the calibrated joint parameter distributions. Thus, it can be adapted to models of various levels of complexity with minor or no changes to its structure. In addition, BayCANN's efficiency can be especially useful in computationally expensive models. To facilitate BayCANN's wider adoption, we provide BayCANN's open-source implementation in R and Stan.
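A hedged sketch of the two BayCANN steps described above: train an ANN metamodel on simulator inputs and outputs, then calibrate the cheap metamodel instead of the simulator. The placeholder simulator, targets, noise model, and the simple importance-weighting stand-in for the Stan-based Bayesian machinery are assumptions for illustration; the paper's open-source implementation uses R and Stan.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

# (1) Training data: parameter draws theta and simulator outputs phi(theta),
#     e.g. adenoma prevalence and cancer incidence. Placeholders here.
theta = rng.uniform(size=(2000, 4))
phi = np.column_stack([theta.sum(axis=1), theta[:, 0] * theta[:, 1]])

ann = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000, random_state=1)
ann.fit(theta, phi)

# (2) Calibrate against observed targets using the cheap ANN in place of the
#     simulator: simple importance weighting of prior draws under an assumed
#     Gaussian observation model (a stand-in for the full Bayesian machinery).
targets = np.array([2.0, 0.25])
sigma = np.array([0.1, 0.05])                  # assumed observation noise
prior_draws = rng.uniform(size=(50_000, 4))
pred = ann.predict(prior_draws)
log_lik = -0.5 * (((pred - targets) / sigma) ** 2).sum(axis=1)
weights = np.exp(log_lik - log_lik.max())
idx = rng.choice(len(prior_draws), size=5000, p=weights / weights.sum())
posterior_draws = prior_draws[idx]             # approximate posterior sample
```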

8.
Brain Cogn; 145: 105628, 2020 Nov.
Article in English | MEDLINE | ID: mdl-33007685

ABSTRACT

Our study was designed to test a recent proposal by Cayol and Nazir (2020), according to which language processing takes advantage of motor system "emulators". An emulator is a brain mechanism that learns the causal relationship between an action and its sensory consequences. Emulators predict the outcome of a motor command in terms of its sensory reafference and serve to monitor ongoing movements. For the purpose of motor planning/learning, emulators can "run offline", decoupled from sensory input and motor output. Such offline simulations are equivalent to mental imagery (Grush, 2004). If language processing can profit from the associative-memory network of emulators, mental-imagery aptitude should predict language skills. However, this should hold only for language content that is imageable. We tested this assumption in typically developing adolescents using two motor-imagery paradigms: one that measured participants' error in estimating their motor ability, and another that measured the time taken to perform a mental simulation. When the time to perform a mental simulation is taken as the measure, mental-imagery aptitude does indeed selectively predict word-definition performance for highly imageable words. These results provide an alternative position on the question of why language processes recruit modality-specific brain regions and support the often-hypothesized link between language and motor skills.


Subjects
Aptitude, Language, Memory, Adolescent, Brain, Humans, Imagination, Motor Skills
9.
J Cogn; 3(1): 35, 2020 Sep 30.
Article in English | MEDLINE | ID: mdl-33043245

ABSTRACT

Whether language comprehension requires the participation of brain structures that evolved for perception and action has been a subject of intense debate. While brain-imaging evidence for the involvement of such modality-specific regions has grown, the fact that lesions to these structures do not necessarily erase word knowledge has invited the conclusion that language-induced activity in these structures might not be essential for word recognition. Why language processing recruits these structures remains unanswered, however. Here, we examine the original findings from a slightly different perspective. We first consider the 'original' function of structures in modality-specific brain regions that are recruited by language activity. We propose that these structures help elaborate 'internal forward models' in motor control (cf. emulators). Emulators are brain systems that capture the relationship between an action and its sensory consequences. During language processing, emulators could thus allow access to associative memories. We further postulate the existence of a linguistic system that exploits, in a rule-based manner, emulators and other nonlinguistic brain systems to gain complementary (and redundant) information during language processing. Emulators are therefore just one of several sources of information. We emphasize that whether a given word-form triggers activity in modality-specific brain regions depends on the linguistic context and not on the word-form as such. The role of modality-specific systems in language processing is thus not to help understand words but to model the verbally depicted situation by supplying memorized context information. We present a model derived from these assumptions and provide predictions and perspectives for future research.

10.
Med Decis Making; 40(3): 348-363, 2020 Apr.
Article in English | MEDLINE | ID: mdl-32428428

ABSTRACT

Metamodels can be used to reduce the computational burden associated with computationally demanding analyses of simulation models, although applications within health economics are still scarce. Besides a lack of awareness of their potential within health economics, the absence of guidance on the conceivably complex and time-consuming process of developing and validating metamodels may contribute to their limited uptake. To address these issues, this article introduces metamodeling to the wider health economic audience and presents a process for applying metamodeling in this context, including suitable methods and directions for their selection and use. General (i.e., non-health economic specific) metamodeling literature, clinical prediction modeling literature, and a previously published literature review were exploited to consolidate a process and to identify candidate metamodeling methods. Methods were considered applicable to health economics if they are able to account for mixed (i.e., continuous and discrete) input parameters and continuous outcomes. Six steps were identified as relevant for applying metamodeling methods within health economics: 1) the identification of a suitable metamodeling technique, 2) simulation of data sets according to a design of experiments, 3) fitting of the metamodel, 4) assessment of metamodel performance, 5) conducting the required analysis using the metamodel, and 6) verification of the results. Different methods are discussed to support each step, including their characteristics, directions for use, key references, and relevant R and Python packages. To address challenges regarding metamodeling methods selection, a first guide was developed toward using metamodels to reduce the computational burden of analyses of health economic models. This guidance may increase applications of metamodeling in health economics, enabling increased use of state-of-the-art analyses (e.g., value of information analysis) with computationally burdensome simulation models.


Subjects
Computer Simulation/standards, Medical Economics/standards, Mathematical Computing, Decision Support Techniques, Humans
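An illustrative walk-through of steps 2-4 of the six-step process described above (design of experiments, metamodel fitting, performance assessment), under an assumed placeholder simulator and parameter bounds; this is a sketch of the workflow, not code endorsed by the guidance.

```python
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.metrics import r2_score

def run_health_economic_model(params: np.ndarray) -> np.ndarray:
    """Placeholder for the expensive simulation model (one output per row)."""
    return params[:, 0] * 2.0 - params[:, 1] ** 2 + 0.5 * params[:, 2]

# Step 2: simulate data sets according to a Latin hypercube design of experiments.
sampler = qmc.LatinHypercube(d=3, seed=0)
design = qmc.scale(sampler.random(n=200), [0, 0, 0], [1, 5, 10])
outputs = run_health_economic_model(design)

# Step 3: fit the metamodel (here a Gaussian process) on most of the runs.
gp = GaussianProcessRegressor(normalize_y=True).fit(design[:160], outputs[:160])

# Step 4: assess metamodel performance on held-out runs before using it for the
# required analysis (step 5) and verifying results against the original model (step 6).
print("validation R^2:", r2_score(outputs[160:], gp.predict(design[160:])))
```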
11.
Front Artif Intell; 3: 52, 2020.
Article in English | MEDLINE | ID: mdl-33733169

ABSTRACT

Methods for sequential design of computer experiments typically consist of two phases. In the first phase, the exploratory phase, a space-filling initial design is used to estimate the hyperparameters of a Gaussian process emulator (GPE) and to provide some initial global exploration of the model function. In the second phase, more design points are added one by one to improve the GPE and to solve the actual problem at hand (e.g., Bayesian optimization, estimation of failure probabilities, solving Bayesian inverse problems). In this article, we investigate whether hyperparameters can be estimated without a separate exploratory phase. Such an approach leaves the hyperparameters uncertain in the first iterations, so the acquisition function (which tells where to evaluate the model function next) and the GPE-based estimator need to be adapted to non-Gaussian random fields. Numerical experiments are performed, as an example, on a sequential method for solving Bayesian inverse problems. These experiments show that hyperparameters can indeed be estimated without an exploratory phase and that the resulting method works almost as efficiently as if the hyperparameters had been known beforehand. This means that the estimation of hyperparameters should not be the reason for including an exploratory phase. Furthermore, we show numerical examples where these results allow us to eliminate the exploratory phase, making the sequential design method both faster (requiring fewer model evaluations) and easier to use (requiring fewer choices by the user).
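A simplified sketch of sequential design without a separate exploratory phase: the GP hyperparameters are re-estimated after every new model evaluation, and the next point is chosen by an uncertainty-based acquisition over random candidates. The placeholder model function and the plain Gaussian assumption are simplifications; the article's adaptation of the acquisition function and estimator to non-Gaussian random fields is not reproduced here.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def model_function(x: np.ndarray) -> np.ndarray:
    """Placeholder for the expensive model function being emulated."""
    return np.sin(3 * x[:, 0]) + 0.1 * x[:, 0]

rng = np.random.default_rng(0)
X = rng.uniform(0, 2, size=(2, 1))            # minimal initial design, no exploratory phase
y = model_function(X)

for _ in range(20):
    gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
    gp.fit(X, y)                              # hyperparameters re-estimated every iteration
    candidates = rng.uniform(0, 2, size=(500, 1))
    _, std = gp.predict(candidates, return_std=True)
    x_next = candidates[[np.argmax(std)]]     # acquisition: largest predictive uncertainty
    X = np.vstack([X, x_next])
    y = np.concatenate([y, model_function(x_next)])
```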

12.
Math Biosci; 318: 108273, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31647934

ABSTRACT

Mathematical modelling is a useful technique to help elucidate the connection between non-transmural ischaemia and ST elevation and depression of the ECG. Generally, models represent non-transmural ischaemia using an ischaemic zone that extends from the endocardium partway to the epicardium. However, recent experimental work has suggested that ischaemia typically arises within the heart wall. This work examines the effect of modelling cardiac ischaemia in the left ventricle using two different models: subendocardial ischaemia and partial thickness ischaemia, representing the first and second scenarios, respectively. We found that it is possible, only in the model of subendocardial ischaemia, to see a single minimum on the epicardial surface above the ischaemic region, and this only occurs for low ischaemic thicknesses. This may help to explain the rarity of ST depression that is located over the ischaemic region. It was also found that, in both models, the epicardial potential distribution is most sensitive to the proximity of the ischaemic region to the epicardium, rather than to the thickness of the ischaemic region. Since proximity does not indicate the thickness of the ischaemic region, this suggests a reason why it may be difficult to determine the degree of ischaemia using the ST segment of the ECG.


Subjects
Electrophysiological Phenomena/physiology, Cardiovascular Models, Myocardial Ischemia/physiopathology, Pericardium/physiopathology, Humans
13.
Med Decis Making; 39(4): 405-413, 2019 May.
Article in English | MEDLINE | ID: mdl-31179833

ABSTRACT

Background. Microsimulation models have been extensively used in the field of cancer modeling. However, there is substantial uncertainty regarding estimates from these models, for example, overdiagnosis in prostate cancer. This is usually not thoroughly examined due to the high computational effort required. Objective. To quantify uncertainty in model outcomes due to uncertainty in model parameters, using a computationally efficient emulator (Gaussian process regression) instead of the model. Methods. We use a microsimulation model of prostate cancer (microsimulation screening analysis [MISCAN]) to simulate individual life histories. We analyze the effect of parametric uncertainty on overdiagnosis with probabilistic sensitivity analyses (ProbSAs). To minimize the number of MISCAN runs needed for ProbSAs, we emulate MISCAN, using data pairs of parameter values and outcomes to fit a Gaussian process regression model. We evaluate to what extent the emulator accurately reproduces MISCAN by computing its prediction error. Results. Using an emulator instead of MISCAN, we may reduce the computation time necessary to run a ProbSA by more than 85%. The average relative prediction error of the emulator for overdiagnosis equaled 1.7%. We predicted that 42% of screen-detected men are overdiagnosed, with an associated empirical confidence interval between 38% and 48%. Sensitivity analyses show that the accuracy of the emulator is sensitive to which model parameters are included in the training runs. Conclusions. For a computationally expensive simulation model with a large number of parameters, we show it is possible to conduct a ProbSA, within a reasonable computation time, by using a Gaussian process regression emulator instead of the original simulation model.


Subjects
Patient Simulation, Prostatic Neoplasms/classification, Uncertainty, Adult, Humans, Male, Statistical Models, Normal Distribution, Prostatic Neoplasms/physiopathology
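A hedged sketch of the emulation strategy described above: fit a Gaussian process regression emulator on a limited set of (parameter draw, outcome) pairs, check its relative prediction error, and then run the probabilistic sensitivity analysis on the cheap emulator. The placeholder simulator and parameter ranges are assumptions, not MISCAN.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

def run_simulator(params: np.ndarray) -> np.ndarray:
    """Placeholder for expensive microsimulation runs returning % overdiagnosed."""
    return 40 + 10 * params[:, 0] - 5 * params[:, 1] + rng.normal(0, 0.5, len(params))

train_params = rng.uniform(size=(100, 3))           # affordable number of training runs
train_out = run_simulator(train_params)

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(train_params, train_out)

# Relative prediction error on a few extra simulator runs.
check_params = rng.uniform(size=(20, 3))
check_out = run_simulator(check_params)
rel_err = np.mean(np.abs(gp.predict(check_params) - check_out) / check_out)
print(f"mean relative prediction error: {100 * rel_err:.1f}%")

# Probabilistic sensitivity analysis on the emulator: many draws at negligible cost.
psa_draws = rng.uniform(size=(10_000, 3))
overdiagnosis = gp.predict(psa_draws)
print("95% empirical interval:", np.percentile(overdiagnosis, [2.5, 97.5]))
```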
14.
Expert Rev Pharmacoecon Outcomes Res; 19(2): 181-187, 2019 Apr.
Article in English | MEDLINE | ID: mdl-30426801

ABSTRACT

INTRODUCTION: Metamodels, also known as meta-models, surrogate models, or emulators, are used in several fields of research to negate runtime issues with analyzing computationally demanding simulation models. This study introduces metamodeling and presents the results of a review of metamodeling applications in health economics. AREAS COVERED: A scoping review was performed to identify studies that applied metamodeling methods in a health economic context. After search and selection, 13 publications were found to employ metamodeling methods in health economics. Metamodels were used to perform value of information analysis (n = 5, 38%), deterministic sensitivity analysis (n = 4, 31%), model calibration (n = 1, 8%), probabilistic sensitivity analysis (n = 1, 8%), or optimization (n = 1, 8%). One study was found to extrapolate a simulation model to other countries (n = 1, 8%). The applied metamodeling techniques varied considerably between studies, with linear regression being the most frequently applied (n = 7, 54%). EXPERT COMMENTARY: Although it has great potential to enable computationally demanding analyses of health economic models, metamodeling in health economics is still in its infancy, as illustrated by the limited number of applications and the relatively simple metamodeling methods applied. Comprehensive guidance specific to health economics is needed to provide modelers with the information and tools needed to utilize the full potential of metamodels.


Subjects
Delivery of Health Care/economics, Medical Economics, Economic Models, Computer Simulation, Decision Making, Humans, Linear Models
15.
Comput Biol Med; 102: 288-299, 2018 Nov 1.
Article in English | MEDLINE | ID: mdl-29914695

ABSTRACT

Although computational studies are increasingly used to gain insight into diseases such as myocardial ischaemia, there is still considerable uncertainty about the values for many of the parameters in these studies. This is particularly true for the bidomain conductivity values that are used in normal tissue and, even more so, in ischaemic tissue, when modelling ischaemia. In this work, we extended a previous study that used a half-ellipsoidal model and a realistic model to study subendocardial ischaemia during the ST segment, so that we could simulate both early and late stage ischaemia. We found that, for both stages of ischaemia, there was still the same connection between the degree of ischaemia and the development of features such as minima and maxima in the epicardial potential distribution (EPD), although the magnitudes of the potentials were very often less, which may be significant in terms of detecting them experimentally. Using uncertainty quantification associated with the ischaemic region conductivities, we also determined that the EPD features were sensitive to the ischaemic region extracellular normal and longitudinal conductivities during early stage ischaemia, whereas, during late stage ischaemia, the intracellular longitudinal conductivity was the most significant. However, since we again found that these effects were minor compared with the effects of fibre rotation angle and ischaemic depth, this might suggest that it is not necessary to use different conductivity values inside and outside the ischaemic region when modelling ST segment subendocardial ischaemia, unless the magnitudes of the potentials are an important part of the study.


Subjects
Cardiac Arrhythmias/diagnostic imaging, Myocardial Ischemia/diagnostic imaging, Pericardium/physiology, Action Potentials, Computer Simulation, Electrocardiography, Heart, Heart Conduction System/physiology, Humans, Ischemia, Cardiovascular Models, Normal Distribution, Regression Analysis
16.
Comput Biol Med; 95: 75-89, 2018 Apr 1.
Article in English | MEDLINE | ID: mdl-29459293

ABSTRACT

There is considerable interest in simulating ischaemia in the ventricle and its effect on the electrocardiogram, because a better understanding of the connection between the two may lead to improvements in diagnosis of myocardial ischaemia. In this work we studied subendocardial ischaemia, in a simplified half-ellipsoidal bidomain model of a ventricle, and its effect on ST segment epicardial potential distributions (EPDs). We found that the EPD changed as the ischaemic depth increased, from a single minimum (min1) over the ischaemic region to a maximum (max) there, with min1 over the border of the region. Lastly, a second minimum (min2) developed on the opposite side of the ischaemic region, in addition to min1 and max. We replicated these results in a realistic ventricular model and showed that the min1 only case could be found for ischaemic depths of up to around 35% of the ventricular wall. In addition, we systematically examined the sensitivity of EPD parameters, such as the potentials and positions of min1, max and min2, to various inputs to the half-ellipsoidal model, such as fibre rotation angle, ischaemic depth and conductivities. We found that the EPD parameters were not sensitive to the blood or transverse bidomain conductivities and were most sensitive to either ischaemic depth and/or fibre rotation angle. This allowed us to conclude that the asynchronous development of the two minima might provide a way of distinguishing between low and high thickness subendocardial ischaemia, and that this method may well be valid despite variability in the population.


Subjects
Electrocardiography, Cardiovascular Models, Myocardial Ischemia/physiopathology, Pericardium/physiopathology, Humans, Myocardial Ischemia/pathology, Pericardium/pathology
17.
Med Biol Eng Comput; 56(5): 761-780, 2018 May.
Article in English | MEDLINE | ID: mdl-28933043

ABSTRACT

Reduced blood flow in the coronary arteries can lead to damaged heart tissue (myocardial ischaemia). Although one method for detecting myocardial ischaemia involves changes in the ST segment of the electrocardiogram, the relationship between these changes and subendocardial ischaemia is not fully understood. In this study, we modelled ST-segment epicardial potentials in a slab model of cardiac ventricular tissue, with a central ischaemic region, using the bidomain model, which considers conduction longitudinal, transverse and normal to the cardiac fibres. We systematically quantified the effect of uncertainty in the input parameters (fibre rotation angle, ischaemic depth, blood conductivity and six bidomain conductivities) on outputs that characterise the epicardial potential distribution. We found that three typical types of epicardial potential distributions (one minimum over the central ischaemic region, a tripole of minima, and two minima flanking a central maximum) could all occur for a wide range of ischaemic depths. In addition, the positions of the minima were affected by both the fibre rotation angle and the ischaemic depth, but not by changes in the conductivity values. We also showed that the magnitude of ST depression is affected only by changes in the longitudinal and normal conductivities, but not by the transverse conductivities.


Subjects
Cardiovascular Models, Myocardial Ischemia/pathology, Uncertainty, Action Potentials/physiology, Algorithms, Animals, Computer Simulation, Heart Conduction System/physiology, Humans, Least-Squares Analysis, Pericardium/pathology
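A sketch of the kind of systematic input-uncertainty quantification described above, using variance-based Sobol indices over assumed parameter ranges and a placeholder output function standing in for the bidomain simulation; the SALib-based workflow and the reduced parameter set are illustrative assumptions, not the authors' method.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Reduced to three of the nine inputs to keep the sketch short; bounds are assumed.
problem = {
    "num_vars": 3,
    "names": ["fibre_rotation_angle", "ischaemic_depth", "blood_conductivity"],
    "bounds": [[0.0, 120.0], [0.1, 0.9], [0.5, 1.0]],
}

def epd_feature(x: np.ndarray) -> np.ndarray:
    """Placeholder for an epicardial-potential-distribution output (e.g. ST depression)."""
    return np.sin(np.radians(x[:, 0])) * x[:, 1] + 0.05 * x[:, 2]

X = saltelli.sample(problem, 1024)      # Saltelli sampling scheme
Y = epd_feature(X)
Si = sobol.analyze(problem, Y)          # first-order (S1) and total (ST) Sobol indices
for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
    print(f"{name}: S1 = {s1:.2f}, ST = {st:.2f}")
```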
18.
Proc Math Phys Eng Sci; 473(2200): 20170026, 2017 Apr.
Article in English | MEDLINE | ID: mdl-28484339

ABSTRACT

Statistical methods constitute a useful approach to understand and quantify the uncertainty that governs complex tsunami mechanisms. Numerical experiments may often have a high computational cost. This forms a limiting factor for performing uncertainty and sensitivity analyses, where numerous simulations are required. Statistical emulators, as surrogates of these simulators, can provide predictions of the physical process in a much faster and computationally inexpensive way. They offer a practical solution for exploring thousands of scenarios that would otherwise be numerically expensive and difficult to achieve. In this work, we build a statistical emulator of the deterministic codes used to simulate submarine sliding and tsunami generation at the Rockall Bank, NE Atlantic Ocean, in two stages. First, we calibrate, against observations of the landslide deposits, the parameters used in the landslide simulations. This calibration is performed under a Bayesian framework using Gaussian process (GP) emulators to approximate the landslide model and the discrepancy function between model and observations. Distributions of the calibrated input parameters are obtained as a result of the calibration. In a second step, a GP emulator is built to mimic the coupled landslide-tsunami numerical process. The emulator propagates the uncertainties in the distributions of the calibrated input parameters inferred from the first step to the outputs. As a result, a quantification of the uncertainty in the maximum free-surface elevation at specified locations is obtained.
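A simplified sketch of the second stage described above: a GP emulator trained on runs of the coupled landslide-tsunami code propagates the calibrated input-parameter distributions to the maximum free-surface elevation at one location. The training data, kernel choice, and posterior samples below are placeholder assumptions, not values from the paper.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern, WhiteKernel

rng = np.random.default_rng(0)

# Emulator training set: (landslide parameters -> max free-surface elevation)
# from the expensive coupled simulator (placeholder function here).
train_x = rng.uniform(size=(60, 3))
train_y = 2.0 * train_x[:, 0] + np.sin(4 * train_x[:, 1]) - train_x[:, 2]

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5) + WhiteKernel(), normalize_y=True)
gp.fit(train_x, train_y)

# Calibrated input distributions from the first (Bayesian calibration) stage,
# represented here as posterior samples.
posterior_samples = rng.normal(loc=[0.5, 0.4, 0.6], scale=0.05, size=(10_000, 3))
elev_mean, elev_std = gp.predict(posterior_samples, return_std=True)

# Spread of the emulator mean across posterior samples (the GP predictive std
# would add emulator uncertainty on top of this input uncertainty).
print("mean:", elev_mean.mean(), "95% interval:", np.percentile(elev_mean, [2.5, 97.5]))
```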
