Adding Stakeholder Metrics to Agile Projects (Article in Overload 68)
Tom, Alan,
Whilst I found this article interesting, and accept that Tom has found "Evo" to be a useful process for developing software, there are a few comments in the article that indicate a lack of awareness of other agile processes.
In the detail of the process description, item 1 (gather critical goals from stakeholders), the article says: "By contrast, the typical agile model focuses on a user/customer 'in the next room'. Good enough if they were the only stakeholder, but disastrous for most real projects." Most agile processes view the "Customer" as a role, and whoever fills that role has the responsibility of obtaining the key goals from all stakeholders, not just one; this role is often filled by the project manager. Most agile processes also recommend that the Customer is in the same room as the rest of the team, not the next one.
- TG:
In that case we should call it a 'Stakeholder Representative'. My experience is that:
- People are very incomplete (not just in Agile methods) in defining the many stakeholders and their requirements.
- I would not trust a representative to define their goals clearly, nor would I trust them to give feedback from real evolutionary deliveries; those should be trialled by real stakeholders as far as possible.
- Keep in mind that even on small projects we find there are 20 to 30 or more interesting stakeholders, each with a set of requirements.
- AW:
I agree with you on the importance of identifying who the real stakeholders are, and ensuring they actively participate in the development process.
Scott Ambler has an article on the subject of Active Stakeholder Participation, at: http://www.agilemodeling.com/essays/activeStakeholderParticipation.htm
- TG:
Great paper - I heartily agree with him and wish more of these thoughts got into agile culture.
Item 2 says: "Using Evo, a project is initially defined in terms of clearly stated, quantified, critical objectives. Agile methods do not have any such quantification concept." Again, this flies in the face of my experience with agile methods - most agile processes recommend that requirements are expressed in the form of executable acceptance tests; I cannot think of a more quantified objective than a hard pass/fail from an automated test.
- TG:
This remark just proves my point! A testable requirement is NOT identical with a quantified requirement. Please study the sample chapter "How to Quantify Any Quality" (http://books.elsevier.com/bookscat/samples/0750665076/0750665076.pdf) to understand the concept of quantification.
- AW:
If my comment proves your point, I guess I didn't explain myself correctly, or I really don't understand where you're coming from.
Mike Cohn says in User Stories Applied that stories need to be testable: a bad story is "a user must find the software easy to use", or "a user must never have to wait very long for a screen to appear"; he then goes on to point out that better stories are "novice users are able to complete common workflows without training", and "new screens appear within two seconds in 95% of cases". The first is still about usability, so needs manual testing with real "novice users", but the second can be automated. Agile practices prefer automated tests to manual ones.
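For illustration, the second of those stories could be turned into an automated check along these lines (a minimal Python sketch; open_screen is a hypothetical hook into the application under test, and the two-second/95% figures are Mike's):

    import time

    def open_screen(name):
        # Hypothetical hook: drive the real application to open the named
        # screen. Stubbed here so the sketch stands alone.
        ...

    def test_new_screens_appear_within_two_seconds():
        timings = []
        for _ in range(100):
            start = time.perf_counter()
            open_screen("order entry")
            timings.append(time.perf_counter() - start)
        timings.sort()
        # The 95th percentile of 100 sorted samples is the value at index 94.
        assert timings[94] < 2.0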
Do Mike's examples better reflect your quantification goals?
- TG:
YES, we are getting there. I found his chapter at http://www.mountaingoatsoftware.com/articles/usa_sample.pdf and he is moving in the right direction. I never saw things like that from any other agile source. I wonder how frequently this is applied in practice? The term 'story' is misleading here, since he is really speaking about what others would call requirements. But OK!
Thanks for pointing this excellent source out to me; I will certainly quote it to others in the agile field! I wrote him an email and copied you. Nice website.
Though item 5 doesn't explicitly say anything about what agile processes do or don't do, the implication is that they don't do what is recommended here. Popular agile processes recommend that the person or people in the Customer role write the acceptance tests; it is their responsibility to ensure that what they specify represents the needs of the stakeholders.
- TG:
Craig Larman (Agile and Iterative Development: A Manager's Guide) has carefully studied my Evo method and compared it to other agile methods in his 2003 book.
The difference here between what I am trying to say and what a reader might wrongly guess I am saying is probably due to misunderstandings. A good picture of my Evo method in practice is the Overload 65 paper by Trond Johansen of FIRM. Part of this, and deeper arguments, can be found by downloading my XP4 talk "WHAT IS MISSING FROM THE CONVENTIONAL AGILE AND eXtreme METHODS? …", available at http://xpday4.xpday.org/slides.php
- AW:
I've just looked at the slides; I think they explain your point better than the Overload article.
Finally, the Summary says "A number of agile methods have appeared ... They have all missed two central, fundamental points; namely quantification and feedback". Quite the contrary: these are fundamental principles of agile methods.
- TG:
The summary must be interpreted in the light of the preceding detail. I agree there is some feedback and some quantification in agile methods, but my point was that there is no quantification of primary quality and performance objectives (merely of time and story burn rates). The main reasons for a project are NOT quantified in any of the other agile methods. Consequently the feedback is not based on measurement of step results compared to estimated effect (see the FIRM example above to see how this works). I do apologize that my paper might not have been long enough or detailed enough to make its points clear to people from a non-quantified quality culture (most programmers). But I hope the above resources help to clarify.
The relentless testing at both unit level (TDD) and acceptance test level is designed to get fast feedback as to the state of the project, both in terms of regressions, and in terms of progress against the goals.
- TG:
I agree, but this is not the quantified feedback I was referring to.
The iterative cycle is designed to allow fast feedback from the stakeholders as to whether the software is doing what they want. It also provides feedback on the rate of development, and how much progress is being made, which allows for realistic estimates of project completion to be made.
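As a toy illustration of that estimation feedback (Python, with all numbers invented for the example):

    # Observed velocity: story points completed in each of the last four
    # iterations, plus the points still remaining in the backlog.
    completed_per_iteration = [18, 22, 20, 21]
    remaining_points = 120

    velocity = sum(completed_per_iteration) / len(completed_per_iteration)
    print(f"average velocity: {velocity:.1f} points/iteration")
    print(f"estimated iterations remaining: {remaining_points / velocity:.1f}")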
- TG:
There is no quantified feedback in conventional agile methods as to progress towards the quantified goals. For example, FIRM set a release 9.5 target of 80% intuitiveness and met it. Such concepts are entirely alien to the rest of the agile world. They cannot even conceive of quantification of such usability attributes. The rate of story burn is a very crude measure of progress, and a story burn rate says nothing about the actual impact on most quality levels.
- AW:
I have no idea what it can possibly mean for something to have "80% intuitiveness", so I guess you're right that "they cannot even conceive of quantification of such usability attributes". Could you explain in more detail?
Story burn rates give you real measures of progress in terms of money spent, features completed, and estimated project completion time. If you want to know details about how close you are to specific goals, then you need to look at what the features are that have been completed.
Automated acceptance tests provide the "quantification" Tom seeks: the stakeholders specify desired results, and the acceptance tests tell them whether the software meets their targets. These don't have to be functional tests, they can be non-functional too - if it is important that the time for a given operation is less than a specified limit, the team is encouraged to write a test for that. Likewise, if it is important that a certain number of simultaneous users are allowed, or the system must handle a certain amount of data, then there should be tests for those things, too.
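A sketch of what such a non-functional test might look like (Python; handle_request is a hypothetical entry point into the system, and the limits are illustrative rather than prescriptive):

    import time
    from concurrent.futures import ThreadPoolExecutor

    def handle_request(user_id):
        # Hypothetical entry point: call the real system here and report
        # success. Stubbed so the sketch stands alone.
        return True

    def test_fifty_simultaneous_users():
        start = time.perf_counter()
        with ThreadPoolExecutor(max_workers=50) as pool:
            results = list(pool.map(handle_request, range(50)))
        elapsed = time.perf_counter() - start
        assert all(results)   # every request succeeded
        assert elapsed < 5.0  # and within an agreed overall time budget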
- TG:
Well, this sounds like moving in the right direction. But it is nowhere near enough for real systems; it is not done at all as far as I have seen, and it is not taught in the agile textbooks. Automated acceptance tests DO NOT by any stretch of the imagination provide the quantification I seek! Nowhere near!
- AW:
I agree that this is not necessarily focused on in the agile textbooks. They generally skip over it by saying that the acceptance tests define what the application should do. It is up to the team to realize that if the performance matters, then they need a test for that; if the usability matters, they need a test for that, etc.
However, agile processes are not just defined by the textbooks. They are defined by the community of people who use them, and are continually evolving; there's recently been discussion of performance testing on the agile-testing mailing list, for example, and there is a whole mailing list dedicated to the discussion of usability, and how to achieve it with agile methods.
- TG:
Show me a real case study that, like FIRM, tracks 25 simultaneous quality attributes over 10 evo steps to a single release cycle, and in addition spends one week a month working towards about 20 additional internal stakeholder measures of quality (I am happy to supply illustrations for the latter; they are recent).
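To give a flavour of the bookkeeping involved, here is a rough sketch (Python, with invented numbers - not FIRM's actual data, and only two attributes where a real project would track 25 or more):

    # Target level for each quality attribute, and the measured level
    # after each evolutionary delivery step.
    targets = {"intuitiveness": 80.0, "productivity": 95.0}
    steps = [
        {"intuitiveness": 40.0, "productivity": 70.0},
        {"intuitiveness": 62.0, "productivity": 85.0},
        {"intuitiveness": 81.0, "productivity": 96.0},
    ]

    for n, measured in enumerate(steps, start=1):
        report = ", ".join(
            f"{attr}: {100 * measured[attr] / goal:.0f}% of goal"
            for attr, goal in targets.items()
        )
        print(f"step {n}: {report}")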
- AW:
Examples of the tracked quality attributes would be useful.
Just to recap, whilst I don't disagree with Tom's recommendations, he does other agile methods an injustice in suggesting that these are new recommendations: much of what he suggests is already a key part of agile methods.
- TG:
I maintain they are new, if you understand what I mean by quantification. I do apologize if I have done conventional agile methods an injustice; I have no such intent. But of course I cannot know undocumented private practices! I have made these assertions to several agile conferences, and not one participant who heard and saw what I said suggested that I had misrepresented the current state of agile methods. I think the main problem here is things like believing that testing of the conventional kind constitutes measurement of quantified qualities. I hope that a study of the above links will help clarify this. My apologies if my paper has not immediately conveyed what I intended to some readers. I hope this extensive reply to Anthony will enable readers to progress their understanding of my intent.
- AW:
I now feel that I understand what you were getting at. Most agile methodologies are very generic in their descriptions, and just identify that the customer representative should write acceptance tests for the stories, with no explicit focus on what the tests should look like, or what should be tested. Your key recommendation, as I understand it, is that when writing stories and acceptance tests, then the customer should focus on the benefits the stakeholders are expecting to see, and identify strict quantifiable measurements for these benefits, which can then be used for evaluating the software, even if these benefits are not obviously quantifiable, like "improved usability".
If I have understood correctly, then I think this is indeed an important focus, and not something that the descriptions of most agile methods focus on in any great depth. As a reader, I would appreciate an article explaining how to achieve such focus, and how to quantify such things as "ease of use" and "intuitiveness".
- TG:
See CE [1] chapter 51, but here are some others: (See "How to Quantify Quality" in this Overload. -AG)
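As a rough illustration only (not taken from CE - the scale, meter, and numbers here are invented), a quantified 'intuitiveness' requirement might be recorded and checked like this:

    # Planguage-style parameters expressed as plain data: a Scale saying
    # what is measured, a Meter saying how, and Past/Goal benchmark levels.
    intuitiveness = {
        "scale": "% of common tasks a novice completes unaided at first attempt",
        "meter": "usability-lab session with 10 novice users per release",
        "past": 55.0,  # level measured on the previous release
        "goal": 80.0,  # target level for the next release
    }

    def meets_goal(measured_level, requirement):
        return measured_level >= requirement["goal"]

    print(meets_goal(76.0, intuitiveness))  # False: 76% falls short of the 80% goal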
- AW:
Primarily, my reaction was to the tone of your Overload article, which struck me as very adversarial - Evo good, other agile methods bad. From my point of view, everything you have suggested falls on the shoulders of the agile Customer; it is their responsibility to identify the stakeholders and their needs, and to write acceptance tests that reflect those needs. If your article had started with this in mind, and presented the same information as a way of assisting the agile Customer in fulfilling their responsibilities, it would have come across better to me; I felt that the slides from your XPDay presentation did a better job of this than the Overload article.
- TG:
It is of course the responsibility of the customer and stakeholders to articulate what they want. But they are not professionally trained to do it well and they usually do it badly. So one possibility is that 'we' train ourselves to help them articulate what they really want, and not expect too much clarity from them. But we do expect them to know their business and be able to have a dialogue leading to clarity with professional assistance.
- AW:
Again, thank you for taking the time to respond to my comments. I have found the discussion most enlightening.
- TG:
Me too - thanks Anthony. Hope to meet you someday soon, maybe at XP5?
[1] CE = "Competitive Engineering: A Handbook for Systems Engineering, Requirements Engineering, and Software Engineering Using Planguage" by Tom Gilb, published by Elsevier.