
Afterwood

Overload Journal #141 - October 2017 + Process Topics   Author: Chris Oldwood
Too soon! Chris Oldwood reviews optimisation in the development process.

The most famous quote about optimisation, at least in programming circles, is almost certainly “premature optimisation is the root of all evil”. When I was growing up, this was attributed to Donald Knuth, but he ’fessed up and said he was just quoting Sir Tony Hoare, although Sir Tony seemed reluctant to claim ownership. According to the fount of all knowledge, Wikipedia, things appear to have been straightened out now and Sir Tony has graciously accepted attribution. That quote was originally about the performance of code and should ideally be presented in greater detail so as not to lose the context in which it was said. The surrounding lines – “We should forget about small efficiencies, say about 97% of the time: […]. Yet we should not pass up our opportunities in that critical 3%.” – remind us that there is a time and a place for optimisation.

Of course, we’re all agile these days and do not pander to speculative requirements – we only consider performance when there is a clear business need to. Poppycock! Herb Sutter and Andrei Alexandrescu certainly didn’t believe such nonsense and popularised the antonym ‘pessimization’ in the process. Item 9 in their excellent book C++ Coding Standards: 101 Rules, Guidelines, and Best Practices tells us not to prematurely pessimize either; i.e. we shouldn’t go out of our way to avoid appearing to prematurely optimise by simply writing stupid code – choosing a sensible design or algorithm is not premature optimisation. For most of us, the sorting algorithm was a problem solved years ago and the out-of-the-box Quicksort variation (something else attributable to Sir Tony Hoare) that we get in our language runtime is almost certainly an excellent starting point.

Another favourite quote on the subject of optimisation comes from Rob Pike, who tells us “fancy algorithms are slow when N is small, and N is usually small”. While there are many new products which aim to scale to the heady heights of Twitter and Facebook, most are destined for a user base many orders of magnitude smaller. Whilst it’s all very interesting to read up on how a company like Facebook has to design its own hardware to deal with its scaling issues, that’s definitely something for the back pocket in case you really do end up on The Next Big Thing rather than an architectural stance which you should adopt by default.

On Twitter, John Carmack once extrapolated from Sir Tony and observed that performance is not the only thing which we can be accused of prematurely optimising: “you can prematurely optimize maintainability, flexibility, security, and robustness [too]”. Although I didn’t realise it at first, I eventually discovered that my own C++ unit testing framework, in which I thought I was being super clever by eliding all those really difficult bits, like naming, was actually a big mistake. By focusing so heavily on the short-term goals I had written a framework that was optimised for writing tests, but not reading them. As such, many of my earliest unit tests were shockingly incoherent mere days later and not worth the (virtual) paper they were written on. Every time I revisited them I probably spent orders of magnitude more time trying to understand them than it would have taken in the first place to slow down and write them more lucidly.

Outside the codebase, the development process is another area where it’s all too easy to end up optimising for the wrong audience. The primary measure of progress should be working software, and yet far too much effort can be put into finding proxies for this metric. Teams that choose internally to track their work using tools and metrics to help them improve their rate of delivery are laudable, whereas imposing complex work tracking processes on a team to make it easier to measure progress from afar is unproductive. For example, the Gemba Walk – the practice of management directly observing the work – allows those doing the work to dedicate more of their time to generating value rather than finding arbitrary ways to represent their progress.

Tooling is a common area where a rift between those who do and those who manage arises. For example, it’s not uncommon to find effort duplicated between the source code or version control system and the work tracking tool because the management wants it in a form they can easily consume, even if that comes at the expense of more developer time. I’ve seen code diffs and commit comments pasted into change request Word documents for review, and acceptance criteria in Gherkin synchronised between JIRA stories and test code, because those in control get to call the shots on the format of any handover so they can minimise their own workload.

Engineering is all about trade-offs, both within the product itself and the process used to deliver it. As software engineers, we might find it easier to trade off the dimensions of time and space in the guise of CPU cycles and memory and find it harder to weigh up, or more likely, control, trading off time between our present and future selves. The increasing recognition of metaphors like Technical Debt and management styles such as Servant Leadership will continue to help raise the profile of some of the more common sources of tension, but we still need to be on the lookout for those moments where our apparent cleverness may really be a rod for our own backs.
