I'm dreadful at testing code, as I rarely bother to do it. Yeah, yeah, I know that I should. It's just a mental blind spot that I have.
Have you ever experienced the phenomenon of reading what you intended to write instead of what you have actually written? It's the same situation when you have a piece of broken code that you stare and stare at but can't see any errors. You then show it to a peer, claiming that the compiler is busted, the microprocessor is messing up, and the laws of physics no longer hold true. They look at it for ten seconds, go 'ugh', and point at 'if(x=y)...'. I've extrapolated this mental assumption upwards from the level of syntactic correctness to the level of semantic correctness: I assume that because I intended the code to work, it must actually work. Thankfully, thanks to some introspection, and to a few people pointing out, politely and not so politely, that it would be nice if I tested my code properly, I am aware of my shortcomings in this area.
But I'm not the only person who is bad at testing; the majority of development organizations are bad at it too. I have never discovered a development organization that conducts excellent testing. Most organizations are content to test to the 'good enough' level. I don't believe the individuals involved intend to build a shoddy product; there's just something wrong with the way most people conduct testing.
Testing has always been a hard problem that few software engineering organizations put much intellectual effort into solving. As evidence, consider that most test groups spend much of their time executing test cases by hand. I've experienced developer groups where the prevailing attitude was that all responsibility for testing lay with the Quality Assurance group. The developers throw the release candidate over a wall; the testers kick it about for a while, and then throw it back if they find something wrong with it. The cycle then continues, seemingly endlessly, until the QA manager and the engineering manager resolve their differences over a drunken fistfight in the car park.
A common solution to poor product quality is to throw more testers at the problem. Microsoft demonstrates that this doesn't work. They have a very high ratio of testers to developers, yet still produce 'good enough' quality products. Are they unwilling to produce quality products? I don't think so.
Another problem is that the role of test engineer is not well regarded by the software engineering profession at large. Most testers I've worked with don't want to be test engineers. They want to be developers. For them the test group is a stepping stone into the sustaining or development engineering groups. I'm in favour of career progression, but when everybody wants out and nobody wants in, there's a problem.
Is the software quality problem due to testers just not being very good at testing? I don't think so. I've encountered some very smart, highly productive, dedicated, and motivated testers.
I've talked to colleagues with an Electrical Engineering background about their experience of testing in hardware development organizations. They have told me that test engineering is a well-respected role, staffed by engineers just as qualified as those in the development organization, and that testers are involved in all phases of the development process to ensure that the product can be efficiently tested for correctness.
The point to learn here is that testability has to be designed into the system, and the engineering group bears the responsibility for doing that. Typically the development engineering team doesn't think much further than unit testing. More upfront thought needs to be put into integration and system testing at the specification and design stages of development.
I've tried to reflect this lesson in my current project. I've thought about testing right from the beginning of the development process. There's a line item in the product requirements that states that the product must be easy to test. The design and implementation follow through on this by exposing interfaces specifically for testing. In this case our user interface is a C++ API, so the test interface we chose was a scripting language. The scripting API includes methods that expose some of the internals as text strings, so that the test cases can make assertions about the internal state of the system.
I don't know the solution to the software quality problem, and I'm not sure if anyone really does, but there are a few small things that we could be doing to improve matters. In summary, designing testability into the system allows us to fully leverage the QA group and therefore improve software quality.
In Overload 50 I mistakenly published a draft version of Pete Goodliffe's STL-style Circular Buffers article. There were a few minor errors in the text; these have been corrected in the online version, which can be found on Pete's website.
Overload Journal #51 - Oct 2002 + Project Management + Journal Editorial