
Integrating Testers Into An Agile Team

Overload Journal #104 - August 2011 - Process Topics. Author: Allan Kelly
Agile has usually concentrated on how to organise developers. Allan Kelly shows how testers fit in.

When a Software Tester first looks at Agile process descriptions, they see something missing: Testers and, for that matter, Testing.

It is not so much that Agile processes and teams ignore testing, it’s more that they ignore Testers. This is for two reasons. First, Agile originally came from Developers, so it is very developer-centric and does not pay enough attention to roles such as Tester, Product Manager and Business Analyst. Second, Developers – like many others in the software world – don’t really want to admit the need for Testers. Developers like to think they can do it right and nobody needs to check on them. Call it arrogance if you like.

Yet testing itself is taken very, very seriously. One of the tenets of Agile software development is a commitment to higher quality. If the thing Developers produce is of better quality then there is less rework to be done, so there is less disruption and overall productivity rises. This is the Quality is Free argument advanced by Philip Crosby in the 1980s [Crosby80].

If proof is required, look at Capers Jones’ book Applied Software Measurement [Jones08]. Jones describes how IBM first discovered in the early 1970s that the projects with the highest quality (the lowest number of bugs, with bugs found and removed earliest) also had the shortest delivery schedules and, as a consequence, the lowest costs. Jones’ data, collected over four decades, supports this assertion.

Thus the Agile approach to testing is to inject more quality early in the process. Practices such as automated unit tests (whether we call it Test Driven or Test First Development), automated acceptance tests, pair programming and continuous integration all serve to improve the initial quality of the software.
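As a sketch of what ‘test first’ means in practice (the function, its name and its behaviour here are hypothetical, invented for illustration), the test is written, and seen to fail, before the production code exists:

```python
# Hypothetical example of test-first development: the test functions are
# written first and fail; price_with_vat is then written to make them pass.
def test_standard_rate():
    assert price_with_vat(10000) == 12000          # 100.00 net -> 120.00 gross

def test_zero_rate():
    assert price_with_vat(5000, rate_percent=0) == 5000

# Production code, written after the tests above (prices kept in integer
# pence to avoid floating-point rounding surprises).
def price_with_vat(net_pence: int, rate_percent: int = 20) -> int:
    return net_pence + (net_pence * rate_percent) // 100

test_standard_rate()
test_zero_rate()
```

Run on every build by the continuous integration system, tests like these catch regressions at the moment they are introduced rather than in an end-of-project test phase.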

If these practices are not adopted, teams have little or no chance of making a successful release at the end of an iteration. If quality is not built into the code from the start of the iteration it cannot be retrofitted at the end. In order to make regular scheduled releases of software it is necessary to abolish the open-ended test-fix-test cycle at the end of development. See Figure 1 for how we imagine testing fits with development, and Figure 2 for the reality.

Figure 1
Figure 2

Software developers, and in particular the managers of development teams, seem capable of a specific form of doublethink. When it first becomes clear that a project is going to be late, they become optimistic about later phases. When the schedule must change, the first thing to be cut is the test phase. Optimism wins and the time allowed for testing is reduced – ‘this time we’ll be better’ – while the final date is held fixed.

When a project is eventually closed and people look back they inevitably say ‘We should have allowed more time for testing’, yet a few weeks later when the next project starts to slip the first place they cut is testing again.

If, through Agile techniques, we manage to fix this problem we are left with the question: where do Testers fit in?

Furthermore, if we remove traditional requirements documents, then how do Testers know what to test? Let me answer the second question in the expectation that an answer to the first question will emerge.

Acceptance criteria v. acceptance tests
In a recent e-mail to Paul Grenyer and me, Rachel Davies pointed out that there is an often overlooked difference between Acceptance Criteria and Acceptance Tests:

Acceptance criteria are part of what you agree the story should cover. These can be simply notes about what scenarios are agreed to be in scope for the iteration. These are what need to be reviewed and agreed before committing to delivering a story in the next iteration.

Acceptance tests are scripts – manual or automated – that detail specific steps to test a feature. I suggest writing these in the same iteration as the code, before or in parallel with developing it; I do not suggest writing them in a planning meeting. Nor am I suggesting that you write all the acceptance tests for all the stories at the beginning of the iteration. Rather, when work starts on a story, the first thing to do is to get really clear about which tests should pass, following an Acceptance Test Driven Development approach – either by writing one acceptance test at a time or the whole set for the story.
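To make the distinction concrete, here is a hypothetical example (the story, the discount code and the Basket class are all invented for illustration). The acceptance criterion agreed at planning might simply be ‘the code SAVE10 takes 10% off the basket total’; the acceptance test written when work starts on the story spells out the specific steps:

```python
# Minimal stand-in for the system under test (hypothetical).
class Basket:
    def __init__(self):
        self.total_pence = 0

    def add(self, pence: int):
        self.total_pence += pence

    def apply_discount(self, code: str):
        if code == "SAVE10":                      # agreed scenario for this story
            self.total_pence -= self.total_pence // 10

# The acceptance test: the agreed scenario as explicit Given/When/Then steps.
def test_save10_takes_ten_percent_off():
    basket = Basket()
    basket.add(20000)                             # Given a basket worth 200.00
    basket.apply_discount("SAVE10")               # When the code SAVE10 is applied
    assert basket.total_pence == 18000            # Then the total is 180.00

test_save10_takes_ten_percent_off()
```

The criterion fits on a card and can be agreed before the iteration; the test nails down the exact steps and values, and is written alongside the code.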

How do I know what to test?

Dialogue over document is the norm for requirements in Agile teams. Rather than trying to write everything down in some kind of binding contract, ideas are captured but details are deferred until development. This just-in-time approach to requirements elaboration relies on conversations between the Product Owner and Developers.

Testers need to be involved in the same dialogue as the Developers and Product Owner. Problems that remain are not so much because there is no document but rather because there is a problem with the requirements, or because Testers are cut out of the loop altogether. Either way, there is a problem that needs to be addressed and documentation is not necessarily the answer.

(I’m using the term Product Owner here. In most organizations this person goes by the title of Business Analyst or Product Manager; the term Product Owner originates from Scrum. Whatever they are called, this is the person who decides what should be in the product and how it should behave.)

Traditionally, team members focused on the document because, despite its imperfections, it was the only thing available. The deeper problem is often that nobody proactively owns the business need. All too often it is assumed that developers know best. The document was expected to answer three questions:

  1. What needs testing?
  2. How should the functionality perform?
  3. How should the functionality be tested?

Tests need to be created and run on a just-in-time basis. While tests need to be created before a task is marked as done – otherwise how do you know it is done? – they should not be created too far in advance.

When tests are created too far in advance two problems emerge. First, work may be removed or postponed, and creating tests for work that is removed is a waste of time. Second, work is only clearly defined when it is about to begin: at any time until then, information may arrive which changes what is to be done, so any tests written in advance may need changing.

The mechanism which tells Developers what to work on next is the same mechanism that should tell Testers what needs testing. This could be your iteration planning meeting, or it could be the visual tracking system used by the team. Either way, it should be clear that code is about to be cut, therefore tests need to be created. It should also be clear that the code has been developed and the tests are ready to run.

Before the code progresses to automated acceptance tests Testers may be involved in keeping developers honest. They may ask the developer for evidence that the work has been performed in a quality manner: Were unit tests written? Was the code developed using pair programming? Or has a second developer reviewed the code?

Testers should have an automated acceptance test ready by the time developers finish their coding. This means Testers, like Developers, need to understand what the Product Owner expects from the software. Again the dialogue over document principle applies.

Testers need to be part of the dialogue; they need to be part of the conversation when the Product Owner and Developer discuss the work so they can understand what is required at the end. If the work is contentious, or poorly understood, they may take notes and confirm these with the Product Owner later. They can then devise tests to tell when these requirements are met.

In this way Testers can understand what needs testing and how it should perform; they then use their professional skills to work out how to go about testing it.

It is worth adding here: automate, automate, automate. Tests which are not automated are expensive to run and therefore don’t get run as often as they could. Automated tests are cheap to run and so help maintain quality into the future.

The testers’ role

The Testers’ role is to close the loop, following on from the Product Owner’s role, not the Developers’. They need to understand the problem the Product Owner has, and which the Developer is solving, so they can check the problem is solved.

If Testers are having a problem knowing how something should behave, it is probably a question of Product Ownership. When this role is understaffed or done badly, people take short-cuts and it becomes difficult to know how things should be. One of the reasons many people like a signed-off document is that it ensures this process is done – or at least, it ensures something is done. But freezing things far too early, and for too long, reduces flexibility and inhibits change.

As software quality improves there may well be less need for Testers. However the need will never go away completely. There are some things that still require human validation – perhaps a GUI design, or a page layout, or printer alignment.

Three levels

There is a recurring model of testing found in many organizations and shown in Figure 3.

Figure 3

Many more organizations have a similar model but with one, or even two, levels missing. Instead they agonize about the level they ‘should’ do.

  • The innermost ring is Unit Testing. This is the Developers’ responsibility; the Testers’ role here is largely to keep Developers honest and ensure they have written and run the tests. Unfortunately this level is the one most commonly absent, which is a shame because it is probably the most important ring of the three.
  • The second ring is variously called System Testing, Integration Testing, System Integration Testing or just SIT. This is the realm of the professional tester. Like Unit Testing, the ideal is that this ring is completely automated. When this is possible Testers can write their tests before Developers begin work, and the tests then become the Developers’ completion criteria.
  • The outermost ring differs depending on the nature of the customers. When customers are outside the organization, companies conduct Beta Test Programmes; when they are inside, it is usually called User Acceptance Testing (UAT). As the name implies, this is where actual users get involved.

    Sometimes, when quality is very low the UAT/Beta cycle becomes effectively a second SIT loop to catch more of the bugs which slipped through the first.

    Because this ring is about customers it is more difficult to automate. Testing is only part of the objective: one of the less spoken aims of this ring is to win customers’ acceptance of the product, to make them feel involved and give them a voice. Unfortunately, this voice is often heard too late to make any real difference.

    Another problem with this loop is that actual users – particularly in corporations – are either too busy or do not value this loop. Therefore the loop ends up being run by professional testers again – in the extreme these are testers attached to the user group not the development group. When this happens the secondary objective of the loop is entirely lost because actual users are not involved.

Personally I believe it should be possible to merge the two outer loops. Firstly, UAT should not be used to catch bugs which slip through SIT: quality should be high enough to stop them at source or catch them the first time. Secondly, customer/user voices should be heard earlier, and repeatedly, in the development process at a time when they can influence the process.

Capers Jones, by the way, says that on average each round of testing removes 30% of all defects, which probably explains the three-tier model. If one ring is to be removed then defect detection and removal rates need to improve in the other layers.
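The arithmetic behind that figure can be sketched quickly (taking the 30% per-round removal rate at face value):

```python
# If each round of testing removes 30% of the defects remaining, then the
# fraction of defects left after n rounds is 0.7 ** n.
remaining = 1.0
for tier in ("unit", "system/integration", "UAT/beta"):
    remaining *= 1 - 0.30
    print(f"after {tier} testing, {remaining:.1%} of defects remain")
```

Even three tiers together still leave roughly a third of all defects (0.7³ ≈ 34%), which is why a tier can only be dropped safely if the removal rates of the remaining tiers improve.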

The future

In the short run there is a lot of software out there and quality isn’t going to improve overnight. Removing the need for testing, and reducing the number of testers, may well be an aspirational goal for a team but it is going to be a while before the goal is met in the majority of cases. Even then someone will still need to write the acceptance tests.

If this weren’t enough, the success of Agile should make more work for everyone. Agile teams are more productive and add more business value. Therefore the organization will succeed, therefore there will be more business, and thus there will be more work to do.

Over time as quality increases Testers’ roles will change. The role will be less test centric and more quality assurance centric. Instead of bashing a keyboard to check the software doesn’t crash Testers will increasingly close the loop with customers and Product Owners to ensure that what is delivered satisfies their need.

References

[Crosby80] Crosby, P. B. 1980. Quality is Free: The Art of Making Quality Certain. New American Library.

[Jones08] Jones, C. 2008. Applied Software Measurement. McGraw Hill.
