ABSTRACT
In order to deal with an increasingly complex world, we need ever more sophisticated computational models that can help us make decisions wisely and understand the potential consequences of choices. But creating a model requires far more than just raw data and technical skills: it requires a close collaboration between model commissioners, developers, users and reviewers. Good modelling requires its users and commissioners to understand more about the whole process, including the different kinds of purpose a model can have and the different technical bases. This paper offers a guide to the process of commissioning, developing and deploying models across a wide range of domains from public policy to science and engineering. It provides two checklists to help potential modellers, commissioners and users ensure they have considered the most significant factors that will determine success. We conclude there is a need to reinforce modelling as a discipline, so that misconstruction is less likely; to increase understanding of modelling in all domains, so that the misuse of models is reduced; and to bring commissioners closer to modelling, so that the results are more useful.
ABSTRACT
This paper is a critical recasting of some of Robert Rosen's thought. It is argued that much of the thrust of Rosen's work can be better understood when recast in terms of the context dependency of causal models. Recast in this way, his thought does not lead to the abandonment of formal modelling and a descent into relativism, but to a more careful and rigorous science of complex systems. This recasting also sheds light on several aspects of modelling, including the need for multiple models, the nature of modelling noise, and why adaptive systems pose particular problems for modellers. In this way, I hope to allay researchers' fear that, by taking Rosen's criticisms seriously, they would have to abandon the realm of acceptable science.
Subjects
Theoretical Models

ABSTRACT
Modellers of complex biological or social systems are often faced with an invidious choice: to use simple models with few mechanisms that can be fully analysed, or to construct complicated models that include all the features thought relevant. The former ensures rigour, the latter relevance. We discuss a method that combines these two approaches, beginning with a complex model and then modelling that complicated model with simpler models. The resulting "chain" of models ensures some rigour and some relevance. We illustrate this process on a complex model of voting intentions, constructing a reduced model that agrees well with the predictions of the full model. Experiments with variations of the simpler model yield additional insights that are hidden by the complexity of the full model. This approach also facilitated collaboration between social scientists and physicists: the complex model was specified based on the social science literature, and the simpler model was constrained to agree (in core aspects) with the complicated model.
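The chain-of-models idea described above can be illustrated with a minimal sketch. Everything here is a hypothetical stand-in, not the paper's actual voting-intentions model: the "complex" model is a small agent-based simulation with peer influence and noise, and the "simple" model is a one-variable mean-field reduction that tracks only the population mean. The sketch checks that the reduced model's prediction agrees, in this core aspect, with the complicated one.

```python
import random

random.seed(0)

def complex_model(n_agents=500, n_steps=50, sample_size=5, noise=0.02):
    """'Complex' model: each agent holds a voting intention in [0, 1] and
    repeatedly shifts it toward the mean intention of a random sample of
    peers, with idiosyncratic noise. Returns the final population mean.
    (A hypothetical stand-in for a full agent-based model.)"""
    intentions = [random.random() for _ in range(n_agents)]
    for _ in range(n_steps):
        updated = []
        for x in intentions:
            peers = random.sample(intentions, sample_size)
            target = sum(peers) / sample_size
            x = x + 0.3 * (target - x) + random.gauss(0, noise)
            updated.append(min(1.0, max(0.0, x)))  # clip to [0, 1]
        intentions = updated
    return sum(intentions) / n_agents

def simple_model(initial_mean=0.5):
    """'Simple' model: a mean-field reduction. Peer-averaging preserves the
    population mean in expectation and the noise is zero-mean, so the
    reduced model predicts the mean stays at its initial value."""
    return initial_mean

complex_mean = complex_model()
simple_mean = simple_model()
print(f"complex: {complex_mean:.3f}, simple: {simple_mean:.3f}")
```

The point of the chain is that the simple model is validated against the complex one rather than directly against data: once the two agree on the core quantity, variations and analysis can be done on the tractable reduced model.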