Editorial: This Year’s Model

By Ric Parkin

Overload, 19(102):2, April 2011

Design and development require us to think about the world. Ric Parkin considers some ways of coping with the complexity.

How do you predict the future? Lacking a time machine or a handy crystal ball, we have to resort to more mundane methods. These usually involve making a model, perhaps unconsciously, that can usefully answer relevant questions. Sometimes these models can be of breathtaking simplicity: want to predict what the weather will be like tomorrow? Look out of the window – there’s a good chance that it’ll be the same as today! This is not actually that bad a model – in many places the weather tends to change only slowly over time (and in many tropical climes stays pretty much constant over the whole year). There are also often stable weather patterns which persist for many days, making prediction easy – a high pressure over continental Europe can remain there for days, stretching into weeks.

The weather-obsessed UK is actually one of the hardest places to predict – it sits at the end of the northern jet stream, a high altitude band of fast winds which starts over north Africa and encircles the globe moving slowly northward, across India, Japan, the US, and finally dissipating above the UK. But as this ‘river in the sky’ is buffeted around, it moves to the north or south of the UK. As many Atlantic weather systems are guided by the jet stream, if it’s sending them towards you then you know that the weather will be very changeable, with bands of rain followed by clear spells. If it’s dumping them over Iceland while a European high pressure extends over the UK, you’ll have very stable weather – hot clear weeks over summer. There – we’ve just extended our mental model to allow for even better predictions, by understanding some of the processes that affect the result we’re interested in. To go further you might start writing proper mathematical or computer models of atmospheric circulation, to try to predict the finer details of how and when these large scale features change.

So what’s this got to do with software? Well, we use models a lot as well. Sometimes they’re what we’re programming, but most likely they’re more subtle than that. One will be a model of people’s intuition: if you’re designing a user interface it is a good idea to understand how a user thinks about what they want to do, and how your interface will fit into that ‘narrative’. A poor interface will cause them to come to a shuddering halt as they work out what they need to do; a good interface by contrast meshes well with their model and allows them to carry out their work with little impediment. A good interface should appear ‘transparent’ to the user – they just use it without consciously thinking.

One example I’ve had of this was when I was working on a program with a very strong visual aspect – networks of information were represented by icons with links between them, and you could just pick up and move the icons. One problem came at the edges of the screen – we wanted an ‘auto-scroll’ feature to reveal more of the virtual sheet of paper. To make things more complex, you could drag from one window and drop into another, so the obvious solution of scrolling when you dragged outside didn’t work. To start with I tried having a ‘sensitive zone’ just inside the window which would activate the scrolling. Unfortunately people found they couldn’t control it. It would start scrolling when they were doing something else, or scroll too fast so they overshot their target, or too slowly so they sat there waiting. I got a lot of bug reports and many change requests suggesting all sorts of ideas, usually contradictory, sometimes wanting a complete rethink, and sometimes wanting many options to ‘control’ it. (I’m of the opinion that many such options only give the appearance of control, to hide the problem of things not working properly – they just transfer the job of fixing it onto the user!) In this case I persevered, and using the many complaints as input on what didn’t work, finally came up with a simple but effective solution: a delay before scrolling started, long enough that it wouldn’t get triggered accidentally, plus a fast outer and a slower inner sensitive zone, where scroll speed increased the further out you went – gently to start with, then quickly towards the edge. Suddenly people could control the scrolling, its simplicity made it easy to predict, and very quickly it became automatic. All the change requests dried up – it had become invisible.
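The final behaviour can be sketched in a few lines. This is an illustrative reconstruction, not the original code: the delay, zone width and speed curve are made-up numbers, but the shape – a trigger delay plus a speed that ramps up gently then quickly towards the edge – is the idea described above.

```python
DELAY_MS = 400   # ignore brief, accidental incursions into the zone
ZONE_PX = 40     # width of the sensitive band just inside the window edge

def scroll_speed(dist_from_edge_px, hover_ms):
    """Pixels-per-tick to scroll, given how far the cursor is from the
    window edge and how long it has hovered inside the sensitive zone."""
    if hover_ms < DELAY_MS or dist_from_edge_px >= ZONE_PX:
        return 0  # not triggered yet, or outside the zone entirely
    # Depth into the zone: 0.0 at the inner boundary, 1.0 at the edge.
    depth = 1.0 - dist_from_edge_px / ZONE_PX
    # Quadratic ramp: slow near the inner boundary, fast at the edge.
    return int(2 + 30 * depth ** 2)
```

The delay stops the feature firing when the user is merely passing through the zone, and the graded speed gives them fine control near the inner boundary while still letting them cover ground quickly at the edge.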

Another type of interface is an API used by programmers. These too should strive to mesh with the mental model of what an interface should do (and should not), otherwise confusion, frustration and bugs become the norm. Arun Saha’s article in this issue deals with exactly this problem.

What other models do we use? Task time estimation is a very common one, but how do we do it? We don’t just guess: we use our experience to come up with a reasonable estimate based on various factors. A start would be a quick guess at how much work is involved, perhaps based on a comparison with a similar task we’ve done before. We can also make adjustments based on knowing how difficult it is to change code in the relevant area – a good example of this is date and time processing, which ought to be simple and yet we still see problems [BBC]. I have a theory as to why this particular example is so error prone – it seems superficially simple, so people dive in, and yet when you look at the details needed for various applications there are many subtle complications: calendar changes (and countries changing at different times), time zones (including historical changes), summer time change rules (and exceptions), all the way down to leap seconds, the varying spin of the earth, and time dilation due to General Relativistic effects!
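A small illustration of how quickly ‘simple’ date arithmetic goes wrong, using Python’s standard zoneinfo time zone database (this particular date is the UK’s 2011 clocks-forward weekend; the point, not the language, is what matters):

```python
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo  # Python 3.9+; may need the tzdata package

london = ZoneInfo("Europe/London")
# Midday the day before UK clocks went forward (27 March 2011, 01:00).
before = datetime(2011, 3, 26, 12, 0, tzinfo=london)
after = before + timedelta(days=1)  # arithmetic works on the wall clock

# Same wall-clock time both days, but only 23 real hours have elapsed.
elapsed = after.astimezone(timezone.utc) - before.astimezone(timezone.utc)
print(elapsed)  # 23:00:00
```

Anyone who assumed ‘a day is 24 hours’ in a scheduling calculation has just acquired a bug that only shows up twice a year.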

One factor that isn’t captured by a simple estimate is the spread of possible outcomes – ‘two months’ sounds definite, but in reality it’ll normally be ‘around two months: a week earlier if all goes well, but could be three months if we find problems. Four if they’re bad.’ It’s hard to plan with that sort of uncertainty. But models can come to our rescue here – since the worst case is really bad while the best case is only a bit better, on average the most likely time will be a bit worse than the simple estimate. If we use the expected time rather than the estimated time, we’ve taken some of the inevitable problems into account.
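One common way of turning such a spread into an expected time is the three-point (PERT) weighted average. The 1/6 weighting is the standard PERT convention rather than anything from this article, and the figures below are the text’s example converted roughly to months:

```python
def pert_expected(optimistic, most_likely, pessimistic):
    """Beta/PERT weighted mean: the mode dominates, but the long
    pessimistic tail pulls the expectation above the simple estimate."""
    return (optimistic + 4 * most_likely + pessimistic) / 6

# 'Two months; a week early if all goes well; four months if it's bad.'
expected = pert_expected(1.75, 2.0, 4.0)
print(expected)  # ~2.29 months
```

The skewed spread moves the expected time about a week and a half beyond the headline ‘two months’ – exactly the ‘a bit worse than the simple estimate’ effect described above.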

However, this assumes that the chance of a problem in one task is independent of the chance of a problem in another. While this may be true for relatively separate tasks, quite often the tasks will be related in some way, perhaps by being in the same area of nasty buggy code. In that case they are no longer independent and our model is going to be wrong: if Task A is late, then the chance of Task B being late is higher than we suspected, so we have been optimistic.

We do have some hope though – if we suspect a group of tasks are not independent, then we can use the actual time taken for some of them to adjust our estimates for the later ones. For example, Task A’s expected time was 1 month, but it took a week longer. Assuming Task B is dependent on the same issues that caused that delay, we could adjust its estimate from 2 months to 2 months and 2 weeks. This is very similar to Bayesian inference [Bayes], where you adjust a probability based on new information gained from a non-independent observation. (This is a strangely counter-intuitive subject, but can be very powerful.)
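That proportional adjustment can be written down directly. This is a deliberately crude sketch – a stand-in for a full Bayesian update – under the assumption that overruns scale with task size:

```python
def adjusted_estimate(estimate, related_estimates, related_actuals):
    """Scale a task's estimate by the overrun ratio observed on
    related (non-independent) tasks that have already finished."""
    ratio = sum(related_actuals) / sum(related_estimates)
    return estimate * ratio

# Task A: estimated 1 month (~4 weeks), actually took 5 weeks.
# Task B's 2-month (~8 week) estimate becomes 10 weeks: 2 weeks longer.
print(adjusted_estimate(8, [4], [5]))  # 10.0
```

As more related tasks complete, the ratio is based on more evidence, so – in the spirit of Bayesian updating – later estimates get adjusted with increasing confidence.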

I would be interested to hear if anyone has tried this sort of adjustment – I suspect it could work if Task A leaves the code with the same latent problems; refactoring to leave it in a better state, however, will reduce the adjustment needed. Perhaps an iterative adjustment is needed: in the light of Task A adjust Task B, and in the light of that adjust Task C, and so on.

This shows one of the fundamental things to be aware of in models – they are a simplification of the world in order to make a prediction, and you should be aware of what you have simplified and assumed, and when those assumptions break down. In this issue we have an article by Alex Yakyma on an attempt to build a model of software complexity, and it generated quite a bit of discussion for exactly these reasons – what were the model’s assumptions and were they reasonable, were the factors really independent, and so on. All good questions – it would be interesting to see what effects changing one of these assumptions would have, or a more detailed look at how task estimates combine under various assumptions, or some other aspect of software development.


Being a bi-monthly magazine, there are always a few notable anniversaries coming and going, but this issue has had a few interesting ones. There’s been much in the media about the 5th anniversary of Twitter. That’s not very long, and yet it has become quite pervasive. To take two recent examples – the recent ‘Arab Spring’ wave of demonstrations seems to have been organised on an ad hoc basis by ordinary people using modern decentralised communications. Even when the mainstream news was controlled by a government, people were reporting events themselves via mobile video, Twitter updates and Facebook groups. The speed at which these events unfolded was remarkable, based in no small part on cheap, fast mobile phones and computers.

There was also an uglier side to Twitter in the news. You may have heard of a 13-year-old called Rebecca Black. She’d recorded a song and video [Black] which went viral and has had (at the last count) 66.9 million views on YouTube (whose 6th anniversary is in April [Youtube]) and over 1.1 million comments, and was a top trend on Twitter. Unfortunately a lot of the reaction was not just negative but downright nasty. I won’t comment on the song, but it seems the speed of modern commenting, plus the ability to be anonymous (or just one of a crowd), can bring out the vicious side of some people. This is not a new phenomenon either – there have been flame wars on email and newsgroups since they were invented, and with ‘fast reaction’ communication like Twitter it’s even easier to fire off an ill-thought-through, or even nasty, message. Perhaps a return to ‘slow’ communication would help? There’s been an add-in for Gmail for some time now that forces you to answer some simple sums before it’ll send a message, on the theory that if you’re tired and/or drunk enough to fail, you’ll probably regret the email [Gmail].

More important to many of us, I suspect: we have just passed the 30th anniversary of the Sinclair ZX81 [ZX81]. This was the time when many people were getting their first glimpse of home computing, even if most didn’t know what to do with it! But there were many who didn’t care, and just loved playing with getting this funny black box to do strange things. With only 1K of RAM, which was shared between data, code and video memory, applications were limited (you could get a ‘RAM pack’ to extend memory by a whole 16K, but these were notoriously wobbly – some swore by Blu-Tack; I found a fabric plaster stabilised it enough). But that curse was also a blessing – it forced people to be extremely clever at finding neat ways of getting the most out of it, which some have suggested led to the UK having so many ingenious programmers.

Mobile phones are even older – the first call was made on 3rd April 1973 by Martin Cooper [Cooper], who was leading the research team at Motorola to build one. Who did he call? His rival at AT&T, to tell him he’d got one working first.


[Bayes] http://en.wikipedia.org/wiki/Bayesian_statistics

[BBC] http://www.bbc.co.uk/news/technology-12104890 and http://www.bbc.co.uk/news/technology-12878517

[Black] http://en.wikipedia.org/wiki/Friday_(Rebecca_Black_song)

[Cooper] http://inventors.about.com/cs/inventorsalphabet/a/martin_cooper.htm

[Gmail] http://gmailblog.blogspot.com/2008/10/new-in-labs-stop-sending-mail-you-later.html

[Youtube] First ever video: http://www.youtube.com/watch?v=jNQXAC9IVRw

[ZX81] http://en.wikipedia.org/wiki/ZX81
