Monday, August 24, 2009

The Agile Testing Quadrants

I think one of the areas where our department has the most room to improve is testing. We have had a couple of regressions lately that seem to prove the point that a more robust testing scheme must be put in place. That is why I decided to focus my first day at the Agile 2009 Conference on Agile testing.

I attended a presentation by Janet Gregory called "Using the Agile Testing Quadrants to Plan Your Testing Efforts". Right off the bat, in the first five minutes, I realized that the presentation was intended for an audience of testers! That reminded me that software teams usually have both developers and testers. At this point I got into a discussion with the people at my table about the developer/tester ratio. For most of them, the ratio was between 2 and 4 developers for 1 tester. Another interesting point is that some companies have a separate Scrum team for testers.

The presentation was mostly about showing the contrast between the "old" approach, where most of the effort is spent on manual functional tests, and the Agile approach, where most of the effort is spent on automated/unit testing. The presenter insisted on the need to plan the tests and to allocate a budget for building the test infrastructure. She also insisted that the testers should give the tests to the developers very early in the process, even before the coding actually takes place.
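That last point — testers handing executable tests to developers before coding starts — is essentially a test-first workflow. A minimal sketch in Python of what that can look like; `apply_discount` and the discount rule are hypothetical examples of mine, not something from the presentation:

```python
# Hypothetical example of a test-first handoff: the tester writes the
# executable expectations first, the developer then writes the code to
# make them pass. apply_discount() and the 10% rule are assumptions.

def apply_discount(price, customer_is_loyal):
    """Implementation written *after* the tests, to make them pass."""
    if customer_is_loyal:
        return round(price * 0.90, 2)  # assumed rule: loyal customers get 10% off
    return price

# The "tests first" part: expectations the developer receives up front.
def test_loyal_customer_gets_ten_percent_off():
    assert apply_discount(100.0, customer_is_loyal=True) == 90.0

def test_regular_customer_pays_full_price():
    assert apply_discount(100.0, customer_is_loyal=False) == 100.0

test_loyal_customer_gets_ten_percent_off()
test_regular_customer_pays_full_price()
```

The point is less the tooling than the ordering: the expectations exist, and are runnable, before the first line of production code.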

The "Agile Testing Quadrants" themselves (from the title of the presentation) are a way to classify tests along two dimensions: whether a test is technology-facing or business-facing, and whether it supports the team or critiques the product. Roughly: Q1 is technology-facing tests that support the team (unit and component tests, automated); Q2 is business-facing tests that support the team (functional and story tests); Q3 is business-facing tests that critique the product (exploratory and usability testing, mostly manual); Q4 is technology-facing tests that critique the product (performance and security testing, tool-based). The diagram from Janet Gregory's presentation is probably the best way to explain this:

The presenter also mentioned some tools (mostly for web development, though).


  1. Some companies have entire departments consisting only of testers, usually called the QA department.

    Unfortunately, I feel we are trying to have individuals play all these roles at once (developers, designers and testers), and this makes us lose our focus. OK, we have developer testing, offshore testing and the normative ATPs, yet all of these gates feel out of sync and we still hit problems at integration.

    Hopefully a better strategy will emerge with our new QA guy, whose task, in my opinion, will be quite big.

  2. I had the chance to talk with the new QA guy because he participates in our "Development Process Improvement Committee" meetings. From what I understood, his role will be to measure/enforce/improve our adherence to the new development process that will be put in place soon. I don't think he will be performing QA in the sense of "running tests".

    But I'm not sure yet what his exact role will be; this should be clarified in the coming weeks.

  3. This comment has been removed by the author.

  4. When I was working for Lockheed Martin, they also had the equivalent of our ATP, used to demonstrate conformance to all requirements, but it was written by the testers group and no development started before it was fully written.

    It was a kind of test-driven approach, but with acceptance testing. Also, during the development of features, the test group ran these tests as a sign of progression or regression. The ultimate goal was to be able to perform all the steps from this ATP. Every developer knew these steps and was able to run them, because it was also our testing procedure.

    Sure, here we have people running ATPs on site, but usually what happens is that somebody finds that something is broken, and that is when the panic starts.

    I think it would be better to have a test group in each department (e.g. a CGF test group) that would be able to tell you in real time that with build version X, ATP step number Y is failing.

    These huge ATPs could be broken into smaller ones for these groups and run locally, without the need for site time and much earlier in the process.
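The idea in this last comment — knowing in real time that with build version X, ATP step number Y is failing — can be sketched as a small runner that executes the ATP steps in order and reports the first failure per build. A hedged sketch in Python; the step names and checks are invented placeholders, and real steps would of course drive the system under test:

```python
# Hypothetical sketch: run ATP steps in order and report, for a given
# build, which step (if any) is the first to fail. Step descriptions
# and check functions here are placeholders.

def run_atp(build_version, steps):
    """steps: list of (step_number, description, check_fn) tuples."""
    for number, description, check in steps:
        if not check():
            return f"build {build_version}: ATP step {number} FAILED ({description})"
    return f"build {build_version}: all {len(steps)} ATP steps passed"

# Example steps; the third one simulates a regression.
steps = [
    (1, "system starts up", lambda: True),
    (2, "scenario loads", lambda: True),
    (3, "entities respond to commands", lambda: False),
]

print(run_atp("X", steps))
```

Wired into a nightly build, a report like this is what would let a local test group flag "build X breaks step Y" long before anyone is on site.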