Automated Test Hell, or There and Back Again

By Wojciech Seliga

Almost two years ago we realised we were hitting a wall with large-scale automated testing of Atlassian JIRA - the product we have been developing in an agile way for almost a decade. The cost of running and maintaining tests was growing exponentially. We analysed the problems and possible solutions and started an improvement programme. Since then we have implemented many significant changes and learnt quite a few unexpected things.

This session shares the findings from our journey - escaping from Test Hell back to (almost) normality. If you are interested in hearing what problems you can (and probably will) face when you have thousands of automated tests at all levels of abstraction (unit, functional, integration, UI, performance) across multiple platforms, and what solutions can be applied to remedy them - this presentation is for you.

You will learn that there are no sacred cows and that your biggest problems may be hiding anywhere - including the least suspected places.

The session describes our story - the story of the JIRA team (a joint effort of the Atlassian teams in Sydney and the Spartez team in Gdansk), which started religiously (or maybe not quite) writing automated tests of all kinds (unit, functional, integration, performance) more than 10 years ago, and which painfully learnt what it means to maintain thousands of tests.

The session also explains what the XP promise of a flat cost-of-change curve (over time) really means in practice (for us, of course).

Although we use Java, the session does not focus on any particular language. It concentrates on engineering practices and reveals various pitfalls in writing, maintaining and running automated tests. You will hear about Continuous Integration, SCM (git), build systems, dependencies, compilation times, coverage, and more.

I have delivered similar presentations at a few conferences: part I (Getting There) at 33rd Degree and GeeCON in 2012, and part II (Back Again) at 33rd Degree and XPDays 2013.