Agile Principle #9: Agile Testing Is Not For Dummies!
In agile development, testing is integrated throughout the lifecycle: the software is tested continuously throughout its development.
Agile development does not have a separate test phase as such. Developers are much more heavily engaged in testing, writing automated repeatable unit tests to validate their code.
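As a hedged illustration of what such automated, repeatable unit tests can look like, here is a minimal sketch using Python's built-in unittest framework (the `add` function and test names are invented for this example, not anything from the article):

```python
import unittest

def add(a, b):
    """A trivial piece of production code under test (illustrative only)."""
    return a + b

class TestAdd(unittest.TestCase):
    """Repeatable, automated checks a developer writes alongside the code."""

    def test_adds_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_adds_negative_numbers(self):
        self.assertEqual(add(-1, -1), -2)
```

Run with `python -m unittest`; because the tests are automated, exactly the same checks can be repeated on every build with no manual effort.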
Apart from being geared towards better quality software, this is also important to support the principle of small, iterative, incremental releases.
With automated repeatable unit tests, testing can be done as part of the build, ensuring that all features are working correctly each time the build is produced. And builds should be regular, at least daily, so integration is done as you go too.
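As a sketch of what "testing as part of the build" can mean in practice — every name here is invented for illustration — a build step can run the test suite and fail the build (non-zero exit code) if any test fails:

```python
import sys
import unittest

class SmokeTest(unittest.TestCase):
    """Illustrative stand-in for the project's real test suite."""

    def test_sanity(self):
        self.assertEqual(1 + 1, 2)

def build_passes():
    """Run the suite; the build is only 'green' if every test passes."""
    suite = unittest.TestLoader().loadTestsFromTestCase(SmokeTest)
    result = unittest.TextTestRunner(verbosity=0).run(suite)
    return result.wasSuccessful()

if __name__ == "__main__":
    # A failing test breaks the build immediately, rather than surfacing
    # weeks later in a separate test phase.
    sys.exit(0 if build_passes() else 1)
```

Wiring this into a daily (or per-commit) build is what keeps the software continuously integrated and in a releasable state.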
The purpose of these principles is to keep the software in releasable condition throughout the development, so it can be shipped whenever it's appropriate.
The XP (eXtreme Programming) agile methodology goes further still. XP recommends test driven development, writing tests before writing the software.
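A minimal, hedged sketch of that test-first rhythm in Python (the `fizzbuzz` function is an invented example, not from the article): the tests are written first and fail, then just enough code is written to make them pass.

```python
import unittest

# Step 1 (red): write the tests before any production code exists.
# Running them at this point fails, because fizzbuzz is not yet written.
class TestFizzBuzz(unittest.TestCase):
    def test_multiples_of_three(self):
        self.assertEqual(fizzbuzz(3), "Fizz")

    def test_multiples_of_five(self):
        self.assertEqual(fizzbuzz(5), "Buzz")

    def test_multiples_of_both(self):
        self.assertEqual(fizzbuzz(15), "FizzBuzz")

    def test_other_numbers(self):
        self.assertEqual(fizzbuzz(7), "7")

# Step 2 (green): write just enough code to make the tests pass.
def fizzbuzz(n):
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)
```

The tests then double as a safety net for the third step, refactoring, since any regression shows up immediately.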
But testing shouldn't only be done by developers throughout the development. There is still a very important role for professional testers; as we all know, "developers can't test for toffee!" :-)
The role of a tester can change considerably in agile development, into a role more akin to quality assurance than purely testing. There are considerable advantages in having testers involved from the outset.
This is compounded further by the lightweight approach to requirements in agile development, and the emphasis on conversation and collaboration to clarify requirements more than the traditional approach of specifications and documentation.
Although requirements can be clarified in some detail in agile development (as long as they are done just-in-time and not all up-front), it is quite possible for this to result in some ambiguity and/or some cases where not all team members have the same understanding of the requirements.
So what does this mean for an agile tester? A common concern from testers moving to an agile development approach - particularly from those moving from a much more formal environment - is that they don't know precisely what they're testing for. They don't have a detailed spec to test against, so how can they possibly test it?
Even in a more traditional development environment, I always argued that testers could test that software meets a spec, and yet the product could still be poor quality, maybe because the requirement was poorly specified or because it was clearly written but just not a very good idea in the first place! A spec does not necessarily make the product good!
In agile development, there's a belief that sometimes - maybe even often - these things are only really evident when the software can be seen running. By delivering small incremental releases and by measuring progress only by working software, the acid test is seeing the software and only then can you really judge for sure whether or not it's good quality.
Agile testing therefore calls for more judgement from a tester, the application of more expertise about what's good and what's not, the ability to be more flexible, and the confidence to work more from your own knowledge of what good looks like. It's certainly not just a case of following a test script, making sure the software does what it says in the spec.
And for these reasons, agile testing is not for dummies!
For further reading about agile principles, see 10 Key Principles of Agile Software Development.
27 April 2007 11:59
I came across this netcast on agile testing which might be of interest:
http://parlezuml.com/blog/?postid=394
30 June 2007 14:00
Agile is a good methodology, but it hardly re-invents any testing rules. Testing should always be involved from the outset; any software development manual from the 1980s onwards will say that. Developers seem to have forgotten their own training.
Modern software development happens in tighter, more rapid cycles - it's that simple. What is generally missed out from Agile, and which is the whole point, is involvement by the client, or the end-user if no such client exists. Making software quickly is not the focus of Agile; it's retaining control and ensuring that what a client wants is what they get, and that changes can be made mid-stream with minimal impact.
Have a look at usabilitymustdie.com, as it contains some good reality-checks for developers. Not Agile specific, but it succinctly summarises the crisis in modern development. Development has become far too smug and self-congratulatory, to the point of dictating to users what they want instead of LISTENING.
19 July 2007 09:58
I must admit that after 18 years in a waterfall approach, it would be a struggle to change to an Agile approach. However, I can see the benefits of throwing off the shackles that bind us as testers and starting to have more impact on what is produced.
We have come a long way from throwing code at testers at the end of the dev cycle, and testers only being used to match results versus spec. I enjoy being involved in decision making, helping to shape how a product looks and works, and it makes it more of a collaborative effort, encouraging team building.
You can use bits of Agile even in a waterfall project, by being proactive as a tester, challenging the spec (nicely!), making comments on design and usability etc.
In some ways the thought of changing my whole approach to Agile scares me but in another way I'd love to give it a go!
8 January 2008 05:56
I'd recommend reading "Conventional Software Testing on an Extreme Programming Team".
12 January 2009 20:39
I think the QA/risk mitigation aspect of testing becomes overstated as you use richer web frameworks with less custom code. For example, CI is clearly a necessity for developing reliable and secure banking systems in Java/C++, but for lightweight modular web development (e.g. customisation of typical web publishing apps) mandatory CI is a victory of process over end product. This is a clear case of the tail wagging the dog and violates the first principle of the Agile Manifesto. Over-adherence to process in the name of 'agile' can easily become a kind of techno-bureaucracy that flies against the spirit of agile/lean.
That's not to say that developers shouldn't use tests or automate them where convenient. The big value is in TDD _as a coding technique_ to keep developers focused on features (also helping achieve the 80/20), and dev assurance that encourages refactoring.
16 December 2010 05:00
Brilliant stuff! What you have to say is really important and I am glad you took the time to share it. I hope that I can learn more about testing. In my view, software testing provides an objective, independent view of the software that allows the business to appreciate and understand the risks of software implementation. Thanks for sharing your opinion. I have yet to find anything as enlightening as this on the web.