Do you have to be a clairvoyant to be a good modeller?

By Roland Mathijssen

What is a model?

At ESI, we are constantly working with models and talking about them. This often sounds very complex, and in a way it is: we sometimes build very complex models. The basic questions, however, are: why do we make models, and which problems do we solve with them?

When we ask this, we start to touch upon an almost philosophical question. What is a model?

Once you start to ask that, you come to the notion that we - as human beings, or even as living creatures - want to predict the future. We want to know what will happen if we do something, such as crossing the road. Or we want to know where we can find food, or which other creatures will see us as food. To survive, you need to be a clairvoyant, some sort of Nostradamus. This may sound dramatic, but we are able to do it to a certain extent. Every living creature has methods to respond to its surroundings in ways that usually suit its needs. This is only possible because of the models in our heads or systems, which direct us to what is hopefully a correct choice.

Mental model

So, we have models in our heads. If you see a hazy sky, with dark clouds instead of nice white ones or the sun, you may choose to take an umbrella with you, as it may rain soon. If you're holding a tray with glasses of beer, you know that you have to keep it steady - because if you don't, gravity can cause the glasses to fall to the floor, spilling the precious beer. You have a mental model of gravity in your head, or at least of the effects of gravity, without needing to know precisely how gravity works (even physicists are still trying to figure this out!) or even the actual value of the gravitational acceleration. You don't need all that detail to know that you have to keep your tray steady.

Modelling complex systems

And this is also important when we start modelling complex systems and reasoning about them. We want to predict how the system we are designing will behave, even though it doesn't really exist yet. We do not want to first design and build the full system, only to find out after a lengthy process that it doesn't work as desired. We want to know this upfront. So we want to be true clairvoyants, the Nostradamuses of the high-tech industry. We try to make our models as simple as possible while still giving the right amount of information and enough accuracy to trust that the new system will work as expected.

There is a nice book by Margaret Heffernan on this way of reasoning with models: Uncharted: How to Map the Future. However, the book also points out that predictions are quite difficult, if not impossible. Since every model is only an abstraction of reality and can never contain all the aspects of the system or, more importantly, its environment, reality can always differ from what the model predicts. Or, as Heffernan phrases it: history will never repeat itself.

And this is true, of course. Trying to predict the future using the past always involves the risk that you have missed important aspects of the past, or that new aspects have popped up, resulting in a future that is different from the one you predicted.

And this is perhaps what makes modelling the most difficult task for us at ESI: you need to be a true clairvoyant, a Nostradamus, to know which aspects of the system and its environment are important and where the model has its limitations.