

Overload Journal #49 - Jun 2002: Journal Editorial. Author: John Merrells

In a recent issue of CVu, Pete Goodliffe wrote an excellent article surveying the history of popular methodologies. In this editorial I would like to chart my own personal journey through that history to discover where I now stand.

At the start of my computing experience, before any formal computing education, my methodology was to just hack everything into existence. I was programming, rather than engineering. I would assign myself a project, perhaps a simple line drawing program, and then set about coding. My methodology was to start with the part of the program that I knew least about. That way, if that piece defeated me, I would have wasted the least amount of time on the project. But, once the puzzle was solved, I would decide that the rest of the program was pretty obvious, and therefore there wasn't much point in completing it, so I might as well just move onto the next project.

Much later, at university, I learned about SSADM [SSADM] and various lecturer-devised methodologies that seemed very suitable for building large payroll applications. They didn't seem to offer much for my day-to-day software engineering projects. My experience was with very small teams (a team of one), short term (midnight-6am), with a well-defined customer (me and my friends). My products were games. I spent four years porting and enhancing a variety of bulletin boards, chat systems, and multi-user dungeons. [MUD]

After graduation, I went straight off into the games industry proper, at that time predominantly the vocation of small groups of underpaid youths in cramped quarters having a whale of a time making up crazy stuff all day. My chums and I were ensconced in a one-room, fourth-floor, lower Chelsea studio that was reputed to have once been the office of a popular beat combo named the Rolling Stones. [PBC] Our methodology? Write a proposal, tout it about to some publishers, code like crazy, beg for more money, code like crazy, beg for more money, etc.

I had to grow up a bit after that, and went to work on some terminal emulation software. It was all OO, so I dusted off my Booch and OMT books. OOP had always seemed pretty natural for me, so the methodologies made sense, but the process didn't do much for me. I just went through the motions. The diagramming notations were great though; a common language for communicating ideas efficiently and effectively.

Then I joined a small research and development company that was staffing a project. They were an independent start-up that used the RAD [RAD] approach to produce a demonstration product. A dozen engineers spent a year building a working demo to show off to investors and customers. This was a great success: Mr Gates said he thought it was 'very cool' and wanted to be a customer, and the company was sold to an American corporation.

They were a largish, established telecommunications supplier, so the system had to deliver telecom-level reliability. The project was blue sky: a paradigm shift in voice messaging. It was to be the next-generation system to replace two aging product lines. There was plenty of money to invest, and plenty of time to do it right. This was an ideal opportunity to try a more formal engineering approach.

The project was already staffed with three teams of seven when I joined. We proceeded to write documents for six months. Yes, six months. We wrote functional specifications for the user-facing components and detailed design documents for all the system's components. Each document was based on a standard template to ensure that every engineer considered issues such as performance, testing, and internationalization.

As each document was completed it went to a five-person review team, each person having a distinct role in the review: an author, two readers, a scribe, and a moderator. The readers would raise defects against the document, which were recorded by the scribe. The moderator ensured that the correct process was followed, and that nobody got too upset. The review process would repeat until all defects had been dealt with to the satisfaction of the readers.

This was very formal, and more than some could take (me included). When no one was looking I would Alt-Tab out of Microsoft Word/Rational Rose and hack some code until I was happy my design was going to work. But this alternating between the abstract and the concrete was a valuable way to ground myself and prevent me from designing a castle in the air.

This is all very well, but did it work? Yes, and no. Yes, the software was developed as designed, and pretty much on time, without any serious unforeseen complications arising. And the software has provided a very solid foundation for the past eight years of functional enhancements. To my continued amazement my original code is still largely intact.

But the project did fail in a couple of respects. Firstly, for political expediency, management, against the advice of engineering, signed off on a poor quality design for one component. None of the engineers wanted to build the component, so a contractor was brought in. He did his best, but at the end of the project we had to junk his work and redesign and rebuild it from scratch. Secondly, the business has never been much of a success. The product still doesn't have many customers. But, it did help the company sell itself, again, to an even bigger company.

That was a natural point for me to move on, and, having had a taste of America, I decided to move to Silicon Valley to do the Internet thing. I thought I had missed the big boom of software development, and didn't see another one coming down the pike. I can recall thinking that $35 was far too much to register a domain name, and, in any case, that would be a pollution of the namespace. How would the inter-network be paid for now that the US government had backed out? By attaching an advert to each packet, or message, or something, we joked. Chuckle. Sigh.

I ended up at Netscape working on a server product. I had never experienced such contrasts between projects. That was where I realised the inverse relationship between code quality and profitability. I went from a four-year project with a formally designed object-oriented architecture written in the most advanced C++, which made very little money, to a project with twelve-month cycles with little design or expressible architecture written in the most awful C I'd ever witnessed, which, of course, made more money than you could imagine. There was little consistency anywhere, there was no data encapsulation, everything messed with everything else's privates; it was an orgy of code.

Their success came from moving quickly. If it worked it shipped; it didn't need to look pretty. We had internal customers who were very demanding and vocal. We had a very small and tightly knit team that intimately understood the problem domain. We had a program of continuous build and test cycles. We had no formal process of any kind, just gut feel and rule of thumb. If a concept was too hard to explain on a whiteboard in a couple of minutes, then an email was needed; if that didn't suffice, then a document was needed. All team communication, within and between teams, was via mailing lists and internal web sites. When Kent Beck's XP book appeared, the content seemed very familiar to us. [Beck]

Of course, as the project matured and grew larger, it became exponentially harder to add new features to the code base. We spent increasing amounts of time rewriting swathes of code to componentise the system, and to try to cram the genie back into the bottle.

I've now come full circle, and am back to the team of one, working at home, often in the evenings, on distributed semi-structured database systems. Not MUDs this time, but XML databases. Oddly enough, they have quite a lot in common.


So, what I've learned on this journey is to solve the hardest problems first, and that CASE tools draw nice diagrams. That writing documents works, if you have lots of time, and that having a good architecture in place provides a long-term development foundation. And, finally, that solving real problems is better than writing nice code, and that team communication is critical to project success.

New Reader

Richard Blundell has been dabbling with computers for a quarter of a century, but has been developing software professionally for half that time. His beginnings were in Basic and 6502/Z80 machine code on Commodore Pets and TRS-80s. He progressed through the likes of Pascal, Fortran, C, and even PostScript, on PCs, Vaxen, and Sparc systems. Today he is Principal Developer for a management consultancy company in Surrey, developing business visualisation software in C++ for the Windows platform. His interests include security/cryptography, DIY, extreme programming, gardening, algorithms, and Linux.


[SSADM] Structured Systems Analysis and Design Method, a methodology favoured by the public sector in the UK.

[MUD] Essex Mud, Abermud, TinyMud, LPMud, Hunt, Sun of Bullet.

[PBC] To my knowledge the phrase 'popular beat combo' was coined by a barrister in response to a judge's query as to who 'The Beatles' were. 'I believe they are a popular beat combo m'lud'.

[RAD] Rapid Application Development.

[Beck] Kent Beck, Extreme Programming Explained, Addison-Wesley.
