Disclaimer: as usual, opinions within this article are those of ‘No Bugs’ Bunny, and do not necessarily coincide with the opinions of the translator and Overload editors; please also keep in mind that translation difficulties from Lapine (like those described in [Loganberry]) might have prevented us from providing an exact translation. In addition, both translator and Overload expressly disclaim all responsibility for any action or inaction resulting from reading this article.
Laynt Preenahlarny naylte vao aisi nao?
Was Laburnum a good or bad rabbit?
Users and developers a.k.a. Elil and Naylte
Program users and program developers are two camps which are traditionally not that fond of each other (to put it mildly). Users tend to think that developers are stupid idiots doing nothing more than intentionally inserting bugs into programs; advanced users are often even more annoying to developers, arguing that certain features (the ones they want) can be added without any problems in two days (whereas from the developer’s perspective it will take two months and will break a dozen other features that millions of other users rely on). Developers, on the other hand, tend to forget about users altogether, and if forced to speak on the subject will rarely characterize users any better than ‘mindless creatures without brain or purpose’.
The user has the upper hand, whether we like it or not
On the surface, it may seem that this mutual dislike between users and developers is symmetrical in nature, but in fact it is not. If the users don’t value the product (in whatever way they define value), they won’t use it and the whole project will be a failure. And as it is the user who eventually decides if the project is successful, the relationship between users and developers is an inherently asymmetrical one, with users having the upper hand. Obviously developers have the option to ignore users, but in a modern economy if suppliers (in our case – developers) don’t have a monopoly and ignore the needs of their consumers (in our case – users), the chances of success of the supplier/developer become infinitesimally small. In a market economy suppliers exist for only one purpose – to satisfy the needs of their consumers, and if the supplier ignores these needs – it dies, usually sooner rather than later.
Here I need to mention that for the purposes of this article the term ‘user’ does not necessarily mean an end-user. For example, if you’re writing a software library, your user is the guy who uses your library. The same guy is usually a developer of another product, and is therefore himself a supplier for another developer or for an end-user. This kind of multi-tier supplier–consumer relationship is nothing new, and goes back at least a thousand years, to the time when the carpenter acted both as the producer of a house for the end-user and as a consumer of nails produced by the blacksmith.
Relevance of business requirements
In traditional (non-agile) development models users rarely interact with developers directly. In non-agile teams, as well as in many agile ones, tasks usually come to developers (or business analysts) in the form of business requirements. Unfortunately, all too often these requirements are not clear enough. Even worse, there are often requirements which are not really relevant to keeping users happy. In such cases the impact on development can easily be devastating – if developers are forced to do something outright stupid, one cannot possibly expect them to work with enthusiasm.
The big question here is how to distinguish relevant business requirements from irrelevant ones. The answer is quite straightforward: whatever is related to keeping users happy is potentially relevant. Applying this principle to practical situations can lead to non-trivial results, so let’s consider an example. Suppose an application is being developed for a mobile phone. One potentially valid business requirement in this case is ‘our application should run on an iPhone’, and if developers try to fight it (on any grounds) they’re most likely out of their depth. It is worth noting that this requirement should be specified exactly as ‘our application should run on an iPhone’, and not as ‘our application should use iOS’ – even if using iOS eventually turns out to be the only way to run the application on an iPhone, it is an ‘implementation detail’, and therefore a decision which should be made at the architectural level rather than at the business level. As an alternative example, if the product is a software library then the requirement ‘it should be portable to iOS’ is a perfectly valid one – in this case the OS becomes a characteristic which can be observed by the product’s user.
It’s so 1990-ish
One issue which often emerges within development teams is the question: ‘Hey, why don’t we use this new cool technology? C++ is so 1990-ish!’ My usual answer (perfectly consistent with the logic described above) is that ‘cool’ doesn’t have any standing in my books, that we should think about the user first, and that with this new cool technology the user will suffer in one way or another. Usually this kind of explanation about overall project success and being user-oriented does help, but recently I’ve run into a counter-argument: ‘Hey, you’re talking about the importance of the end-user, but the end-user clearly wants something ‘cool’ – look at the iPhone and iPad! So why don’t you allow us to use cool stuff?!’
While this logic is still flawed, illustrating why needs a bit more explanation. When users use the word ‘cool’ they’re completely within their rights to ask for whatever they want, and developers should listen to them. In other words, within ‘userland’ (a.k.a. ‘managerland’ and ‘marketingland’ – not to be confused with the *nix ‘userland’) the word ‘cool’ is a perfectly legitimate argument, and hence a valid business requirement which developers must learn to live with. But when developers start to use the word ‘cool’ to describe technology which their users do not care about, it has nothing to do with users and should therefore be given much lower priority. There are two completely separate worlds: one is ‘userland’, the other is ‘developerland’, and ‘cool’ only has standing within ‘userland’. While this may seem ‘unfair’ to developers, it is a direct result of the asymmetry described above, with users having the upper hand.
Developers and user interfaces
Another area of everlasting conflict between users and developers is user interfaces, with many a fight over usability. One of my fellow-rabbits even uses the special term ‘developer’s UI’ to describe one which was convenient to write but is hardly usable. The worst example I’ve personally seen to date was a certain fax machine (I will not name the company here, but anybody who’s seen it should recognize it easily). It was a nightmare UI to deal with, despite having all the necessary features. For example, after a fax had been sent it showed the notification ‘N pages sent ok’ – but why did this disappear after a few seconds? Did they expect me to be right next to the machine all the time to catch a glimpse of it? Or why, if the sending had failed, did it go into one of two different, but visually very similar, modes – one with a retry being scheduled and another with the whole thing aborted? And why, in order to cancel the retry, did you need to go three levels deep into the menus, under the heading ‘memory settings’? I am a developer myself and I perfectly understand why it was written this way – for a developer it is so much easier to design a UI around the implementation (or even worse, around an unsuitable existing implementation) – but as a user I clearly have difficulties finding non-foul words to describe the experience; needless to say, the chances of me buying another fax machine from the same company are on the order of me voluntarily paying a visit to a pre-heated farmer’s oven.
While technically speaking it is not the job of a software developer to design a UI (ideally, the task should belong to business analysts), whenever a developer (who wants the project to succeed) is implementing a UI – whether inventing it him/herself or implementing a specification – s/he should think about the user who will use the product. While it doesn’t help 100% of the time – an average user can have expectations which are very different from those of an average developer – it can still help to avoid at least the most blatant problems. Just don’t forget to discuss it with the business analyst before deviating from the existing specification – not only might it save you some trouble, it can sometimes be useful for the project and the end-user too.
Eating our own rabbit food
It should be mentioned that it is often difficult to think about your own code from the point of view of a user, especially when it is already written. In this respect it is very similar to testing your own code, which fellow-rabbits know to be very difficult. One reason for this difficulty is that such testing puts you in a position of perceived conflict of interest: if you find a bug or other flaw (which is your job as a tester), it means that you have made a mistake as a developer. While this conflict of interest is usually only perceived and not real, it still often leads to situations where the developer/tester subconsciously avoids testing scenarios which could be dangerous. Another (probably even bigger) problem is that during such testing the developer tends to concentrate on the areas which he thinks are of interest from the point of view of the implementation; while such ‘white-box testing’ is indeed useful, it tends to differ from the usage patterns of real users.
One obvious way to deal with these issues is to have an independent QA department; another technique which helps is known as ‘eating your own rabbit food’ (or ‘eating your own dog food’ among some lesser species). This means that the company should use its own products as much as possible, to experience them as a user. While this technique alone does not provide any guarantees, it certainly can be a good tool to improve the overall user experience.
The manager’s perspective (team-leads included)
In this ‘user vs developer’ conflict, managers find that being between the user and the developer is very similar to being between a hammer and an anvil. This applies to all levels of management, from the top level down to team leads. From one side there is pressure to make the product successful (and to achieve that by making users happy); from the other side there is an obvious lack of understanding (and therefore inertia, if not outright opposition) from the developers. It is indeed a difficult problem for management, but it can be solved (as described, for example, as early as [Parkinson60]) by promoting a culture where everybody works towards a well-defined goal – project success (and therefore making the user happy). How to achieve this is not a trivial management task (it goes much further than simple stock options and other incentives), but it is certainly do-able. One notable example of succeeding at this is Louis Gerstner’s highly successful restructuring of IBM in the 1990s; while re-establishing a customer-oriented culture obviously wasn’t the only change which led to this success, the cultural shift was certainly a significant part of Gerstner’s plan. As several fellow-rabbits who had a chance to work at IBM have told me, it was Gerstner who allowed IBM integrators to use non-IBM solutions when it was necessary to make customers happy. And as we can see 10 years down the road, it was a highly successful strategy.
The developer’s perspective
One question some developers ask is ‘OK, you have shown that project success depends on the user, but why should I care?’ Unfortunately (consistent with [Parkinson60]), there is no good answer to this question, except that an organization where nobody cares about results is inevitably doomed. If all you want in this life is to be able to pay your bills, then caring about results (and therefore about the user) is not strictly necessary. Still, as the experience of the whole rabbit community shows, people on successful projects have a much higher chance of being kept on even during a crisis, and of receiving higher raises when the economy is booming, so thinking about the user often pays off even in a direct monetary sense.
Going a bit further with this analysis: if you’re working for a company (department, project, etc.) where management and developers don’t care about the eventual success of what they’re doing, it often means that the company is likely to fail. Working for a company which is doomed to failure is never a good thing. It is bad for your personal bottom line, not really helpful for your career, and can be devastating for your self-esteem. In short, a developer who can do better than fail should aim to avoid such workplaces, and try to get into an environment where a culture of project success is predominant at all levels. It will certainly require more effort, but has the potential to be much more rewarding, both financially and emotionally.
[Loganberry] David ‘Loganberry’, Frithaes! – an Introduction to Colloquial Lapine!, http://www.scribd.com/doc/97067/Conlang-Lapine
Overload Journal #103 – June 2011