The way one goes about developing and delivering a software project depends critically on the scale of the project. There is no "one size fits all" approach. As a trivial example to illustrate this, no one would consider writing a test harness for a "hello world" program. (Actually, I have tried this question out on some TDD proponents over the last year, and I have found only one who insists that they would do so.)
Why shouldn't one write a test harness for "hello world"? As in all design questions, it is a matter of trade-offs: there is a cost to doing it (a test harness for this program is typically more complex than the program itself) and a benefit (a test harness that confirms the program works). In this case the cost doesn't justify the benefit; as one respondent put it, "I can just run the program to see that it works".
OK, you might say that is a silly example, but it reflects how the habits of those working in software development form. Their first programs are that simple, and they write them accordingly. As they become more accomplished they tackle bigger and bigger problems the same way - usually long past the point at which this is the most effective approach. Because they can still succeed, the need for change isn't apparent to them. Typically it takes both a failure of catastrophic proportions and an open mind before they appreciate the need for a different approach. Even so, old habits die hard and resisting the temptation to fix an urgent problem with "a quick 'low risk' hack" requires determination and vigilance.
This form of inertia (clinging to approaches that have become inappropriate) isn't restricted to the individual developer - one only has to look at the recent G8 response to climate change to see it operating at the scale of nations. But that example is outside the scope of this publication. What is relevant here is that it also applies to larger software development units: teams, departments and whole organisations.
There are many ways to attempt to characterise the scale of a project:
- the number of developers;
- the value (and risk) to the business;
- the count of function points (or use cases, or stories…);
- the size of the codebase;
- the length of time the project is active;
- the range of technologies employed.
All of these affect the effectiveness of different methods of working. I hope no-one would claim that what is appropriate for one developer working on a low profile project for a few days is appropriate for a couple of dozen working on a high profile project for a couple of years. The choice of appropriate methods is an important decision and may need revision as a project progresses. Alistair Cockburn provides a lucid discussion of his research into the effect of scale on development process in "Agile Software Development".
It is very easy for an organisation that is accustomed to running projects of one size to continue to deploy the same practices on a project for which they are inappropriate. And, as with our developer in the first example, it often takes a significant failure before the assumptions that underlie this decision can be questioned. In fact, the practices are often habituated to an extent that means they are not even examined as a potential cause.
Opinions as to the cause of failure all too often fail to bear close examination, or are superficial - they identify a mistake by an individual without following up and discovering the circumstances that made that mistake inevitable. (There are many examples of this cited in "High-Pressure Steam Engines and Computer Software" - http://www.safeware-eng.com/index.php/publications/HiPreStEn.) If we look at those cases where there has been a thorough examination of failing projects, it has been found that the individual errors of engineers and operators were compounded by inappropriate working and management practices. (Commonly cited examples include: Three Mile Island, Chernobyl, Challenger, Bhopal, and Flixborough.)
A typical development department will begin by delivering small projects, and its approach to the development and deployment of software is appropriate to this circumstance. In a small project everyone involved in development (which will often be the one and only developer) can be expected to have an understanding of each of the elements of the project, their interactions and the context in which these elements work. They will also have an understanding of the status of any ongoing work.
Sooner or later the department will begin to undertake medium (or even large) scale projects - I'll explain what I mean by "medium" and "large" in a moment. The point I want to make first is that there is nothing obvious to alert an organisation that a new project with an extra developer or two, lasting a little longer, using an extra technology or two, and with a bit more functionality than usual requires a different approach.
It would be great to have a rule that took the size of a project and gave the appropriate development practices. Sadly, it isn't that simple. There are just too many factors affecting both the size of the project and the level of risk that an organisation is prepared to accept. In practice, I find that it is only by observing as the project progresses that a useful measure emerges. But it is rare to find an organisation that will take early indicators on board and make changes when it is not clear that anything will go wrong.
The distinction I use is based upon whether the whole project will "fit in the head" of all developers (small project), a few of the developers (medium project), or none of the developers (large project). It may not be immediately apparent, but a project can move between these categories during its lifecycle - and not just in one direction. (One project I am aware of moved from small to medium to large and then back to medium, and if the current initiatives succeed may yet make it back to small.)
In a typical medium sized project there will be one or more people with this general understanding and a number of people with specialised knowledge of particular areas. It is possible to run such a project without taking account of these specialisations and, most of the time, avoid disaster. Disaster strikes when, firstly, a subsystem reaches a state that requires specialised knowledge to work on it; secondly, that specialised knowledge is lost to the team; and thirdly, work is undertaken that involves that subsystem. There are development practices that mitigate these risks but, like any insurance scheme, they have a cost that eliminates them from consideration on a small project.
In a large project no-one will have an understanding of the whole thing in detail - those with a general understanding of the overall structure will rely on the expertise of those with a detailed knowledge of individual components and vice versa. Any significant development activity will cross boundaries of expertise - and, in consequence, will entail a risk of changing things without the specialised knowledge that makes the change safe. (There are ways to mitigate this risk, but not with a "small project" mentality.)
It is rare, therefore, to find a "large project" running smoothly with "small project" practices - typically many pieces of work will be failing. But even with this clue that something is systemically wrong, the reaction is often inappropriate - after all, the developer(s) implementing any piece of work that went wrong clearly made mistakes. And this brings me back to the arguments presented in "High-Pressure Steam Engines and Computer Software" (this really is a good paper - if you are not familiar with it, go and read it). Working practices should be chosen both to minimise the likelihood of mistakes and to ensure that any mistakes are detected and corrected before they derail the project.
So what do you do when you are on a project that is showing symptoms of a mismatch between the working practices and the nature of the project? If you are a single developer working on a three-day project then it is probably easy to decide not to allocate work based on "SecondBestResource". (Indeed, if you succeed in employing this pattern, then you probably have worse problems than the project failing!) But problems can be subtle - is the cost of setting up and maintaining a build server for the project really justified? (Even if it is required for conformance to departmental policy!)
On a larger project it is much harder to institute change - not least because changes need to be negotiated with other project members (who will not be inclined to change unless you first convince them that there is a need). But even when you've successfully convinced everyone that a build server would be a good idea, someone needs to spend time setting it up and maintaining it. And this is often the sticking point - there are "brownie points" to be had implementing features the customer will use, and the customer couldn't care less about version control, test infrastructure, or the internal review of work items. In these circumstances who would want to be seen spending time on this stuff? It requires co-operation to tackle these issues.
There are two basic strategies for co-operation: either someone takes responsibility and orchestrates the activities of others, or everyone takes responsibility for dealing with the problems she or he identifies.
Both can work for small and medium sized projects and, in many cases, it is easier to get one person to take responsibility than to ensure that everyone does - which can make the first strategy easier to implement. However, as the size of a project increases, it becomes harder and harder for one person to keep track of everything that needs doing and on large projects it becomes impossible. There are, of course, ways to scale the first strategy - break down the project's issues into groups (by sub-team, by technology, by geography, or whatever) and ensure that someone takes responsibility for each group. However, this always seems to leave some issues that everyone disowns.
The strategy of everyone taking responsibility scales a lot better, provided everyone co-operates. The difficulty is getting everyone to "buy into" this approach to begin with. It takes trust - and at the beginning of a project this has typically not been earned. It can be very difficult to convince everyone that "freeloaders" will not be a problem - until they've participated in a team that works this way. What this fear overlooks is that the team is a small enough social unit that "freeloaders" are quickly identified and dealt with along with other problems.
As a member of a project one should behave as one believes others ought to behave. The worst thing that can be done on encountering a problem is to ignore it on the basis that "someone else" should deal with it. The next worst thing is to bury it in a write-only "issues list" in the hope that one day someone will deal with it. If everyone behaves like that then nobody deals with anything.
Everyone - including you and me - who encounters a problem has a responsibility to do something about it: either deal with it, or find someone better qualified who agrees to take responsibility.
Overload Journal #68 - Aug 2005 - Project Management - Journal Editorial