Agile Testing versus Waterfall Test Phases
I have recently been asked by a tester how agile testing compares to the various test phases in more traditional, waterfall development projects.
For instance, after the code has been unit tested, are there several testing phases, such as system, integration and regression testing?
Although these layers of testing do exist in agile projects, agile testing is integrated throughout the lifecycle, with each feature being fully tested as it's developed, rather than most of the testing being done at the end of all the development.
Here's how traditional test phases typically fit in with an agile testing approach:
- Unit testing is still completed by developers as usual, but ideally there's a much stronger emphasis on automated testing at the code/unit level.
- In eXtreme Programming (XP), there is also a strong emphasis on test-driven development, which is the practice of writing tests before writing code. This can start simply with tests (or 'confirmations') being identified when a 'user story' is written, and can go as far as actually writing automated unit tests before writing any code (there's a short illustrative sketch after this list).
- System testing and integration testing are rolled together. As there is at least a daily build, and ideally continuous integration, features can be tested as they are developed, in an integrated environment. As in waterfall, this stage of testing is ideally carried out by professional testers, as we all know developers can't test for toffee! Importantly, though, each feature is tested as it's developed, not at the end of the sprint or iteration, and certainly not at the end of the project.
- Towards the end of each sprint, when all features for the iteration have been completed (i.e. developed and tested in an integrated environment), there needs to be time for a short regression test before releasing the software. Regression testing should be short because automated testing and test-driven development, with features tested continuously in an integrated environment, should not leave many surprises. Hopefully it should be more like a 'road test'.
- Finally, on a very large project, where a release must span multiple sprints to be of any practical value, a 'stabilisation' sprint may be worthwhile to make sure everything is okay before release. This should, however, be short, and the need for a stabilisation sprint should be avoided if at all possible by trying to deliver releasable quality in each and every sprint along the way. If it is required, this sprint should be all about reducing defects prior to launch, with the scope of development frozen at that point.
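To make the test-first idea above a little more concrete, here's a minimal sketch using Python's standard unittest module. The user story, its 'confirmations' and the calculate_discount function are all invented for illustration; in strict test-driven development the tests would be written and run (and fail) before the discount logic was implemented.

```python
import unittest

# Hypothetical function under test, invented purely for this illustration.
# In test-driven development, the tests below would be written and run
# (and fail) before this body existed.
def calculate_discount(order_total):
    """Give a 10% discount on orders of 100 or more."""
    if order_total >= 100:
        return round(order_total * 0.10, 2)
    return 0.0

class DiscountConfirmations(unittest.TestCase):
    """The user story's 'confirmations', expressed as automated unit tests."""

    def test_no_discount_below_threshold(self):
        self.assertEqual(calculate_discount(99.99), 0.0)

    def test_ten_percent_discount_at_threshold(self):
        self.assertEqual(calculate_discount(100.00), 10.00)

if __name__ == "__main__":
    unittest.main()
```

The same tests then become part of the automated suite that runs on every build, which is what keeps the regression testing at the end of the sprint short.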
Kelly.
Photo by BaylorBear78
3 September 2009 15:35
I agree with all the statements in the article.
Nevertheless, we're facing some issues applying them.
In two-week sprints, each user story tends to be completed at the end of the sprint. Then our tester verifies the user story.
But the defects detected pile up at the end of the sprint, and we need to plan debugging time at the start of the next sprint.
Any idea how to tackle this and close all user stories (debugging included) within the sprint?
Thank you
David
23 November 2009 16:03
Hi David - yes, it is a problem that all testers encounter in a sprint or any other form of iteration. Ideally the time needed for fixing defects should form part of sprint planning. Some teams use a finger-in-the-wind guesstimate, dedicating 15% of the sprint to bug fixing. This can be refined as metrics are gathered over subsequent iterations.
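To illustrate how that initial 15% figure might be refined once a few sprints' worth of metrics exist, here's a minimal sketch; all the numbers (hours logged against defect fixes, sprint capacity) are invented for the example and assume the team tracks time spent on bug fixing:

```python
# Hypothetical figures, purely for illustration: hours spent fixing defects
# versus total hours delivered in each of the last few sprints.
defect_hours = [18, 25, 20]
total_hours = [160, 160, 160]

# Refine the initial 15% guess using the observed average.
observed_ratio = sum(defect_hours) / sum(total_hours)

sprint_capacity = 160  # hours available in the coming two-week sprint
bug_fix_allowance = sprint_capacity * observed_ratio

print(f"Reserve roughly {bug_fix_allowance:.0f} hours for defect fixing "
      f"({observed_ratio:.0%} of capacity)")
```

With these made-up figures the team would reserve about 21 hours (roughly 13% of capacity) in the next sprint planning, and keep adjusting the allowance as more sprints' data comes in.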