Agile Testing: The Changing Role of Testers
Agile development can be difficult for Testers, because it contradicts so much of what many of them have been taught as 'best practice'.
Testers might typically have gone through some recognised training such as ISEB certification.
ISEB testing qualifications, for example, specifically acknowledge iterative-incremental development models, including agile methods.
However, many testers will have undertaken their training, or gained their experience, when the waterfall model was more prevalent, and consequently may have spent years practising the V-model. With the V-model, system testing maps directly onto the system's specification, and testing is conducted once the software is complete.
Put simply, a Tester's life in traditional development methods was reasonably straightforward. Give a Tester a spec and a finished piece of software, and they can check it works as specified.
Forget whether what was specified was really what the user wanted, or whether all the requirements were adequately captured; if it meets the spec, the quality is supposedly good.
With agile development methods, a tester's life is rather more complicated.
First of all, there are no big documents specifying every detail of the requirements and functionality to test against - only small pieces of documentation per feature, with further details captured verbally through collaboration.
Secondly, the software is tested early and throughout the lifecycle, while it is still being developed. In other words, it is a moving target.
Put like that, agile testing can be a real challenge.
Add to that the idea of writing test cases up-front, before the software is developed, so acceptance tests form part of the requirements analysis.
Add to that the idea that some tests will be automated at code level and implemented by developers.
Add to that a much greater emphasis on automated regression testing, because feature-level testing was completed while the code was still being developed.
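To illustrate the first of those ideas, the acceptance criteria for a user story might be captured as automated tests before the feature is built. The following is only a minimal sketch - the story, the function name and the figures are invented purely for illustration, and a pytest-style test runner is assumed:

```python
# Illustrative sketch only: acceptance criteria for a hypothetical
# "bulk discount" user story, written as automated tests *before* the
# feature exists. The tests fail until developers implement the
# behaviour, at which point they double as regression tests.

import pytest


def bulk_discount(order_total: float, item_count: int) -> float:
    """The implementation developers would write to make the tests pass."""
    if item_count < 0:
        raise ValueError("item_count cannot be negative")
    if item_count >= 10:
        return round(order_total * 0.90, 2)  # 10% off orders of 10+ items
    return order_total


def test_ten_or_more_items_gets_ten_percent_off():
    # Acceptance criterion agreed verbally with the product owner
    assert bulk_discount(100.00, 10) == 90.00


def test_fewer_than_ten_items_pays_full_price():
    assert bulk_discount(100.00, 9) == 100.00


def test_negative_item_count_is_rejected():
    # An edge case a Test Analyst would typically add during analysis
    with pytest.raises(ValueError):
        bulk_discount(100.00, -1)
```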
To cope with such demands, an agile tester's role must change.
Some of the differences I've observed are as follows:
• With User Stories, there's little difference between a requirements scenario and a test case. Some requirements are implied from the test cases.
• If test cases are written up-front, there’s a fine line between requirements analysis and test analysis.
• A Business Analyst may have special skills in workshop facilitation, interviewing, gathering business requirements, etc. But, regarding functional analysis, Tester and Analyst roles start to converge.
• If unit tests are automated, a tester needs to work with developers to ensure that the tests are complete and appropriate, and that all important scenarios have been identified.
• Testers need to avoid duplicating coverage that has already been adequately unit tested when they and others come to system test the features later in the lifecycle.
• Automated regression tests need to be written and maintained.
• The tester's involvement is increasingly important throughout the entire development lifecycle, not just in the latter stages.
• The role of tester is more aptly described as Test Analyst, and is more of a general QA role for the team, not necessarily the person who devises and executes all the tests.
27 February 2008 05:16
"a Tester's life in traditional development methods was reasonably straight-forward. Give a Tester a spec and a finished piece of software, and they can check it works as specified." How different is it in Agile? I am a Tester and working on Agile. The only difference is I do not get a spec but I too get a piece of software that needs to be tested again and again. You have written the Agile is more challenging for Testers but in what way? I can be reached at [email protected]
I would like to have more email exchange with you as I am still not convinced what is the challenge for a Tester in Agile. - Ravikiran Padki
4 March 2008 02:06
Yes, I agree that the tester role needs to change. The article does not really address the way in which this can be done. My feeling is that not only must the tester's role change, but also the testing role of everyone else in the team. In my mind, the fundamental change should be in what the tester (System Tester, Test Analyst) focuses their testing on. Historically, testers acted as "were the specs implemented" cops, to make sure specs were implemented correctly. Agile calls for far more discipline from all involved. Surely between BAs and Developers they should know whether specs were implemented correctly. This can be done through unit and integration testing and through prototype reviews.
This will leave testers to focus on an often overlooked aspect of testing, the stability of the system: the "the user will never do that" test scenarios. I am of the opinion that this is where testers in agile can play the best role. How do we do this without extensive specs? We use experience-based methods. Exploratory testing, together with technical and business knowledge, will truly bring the best out of future test engineers. Couple this with test automation tools and you could have a winning combination even in an agile framework.
19 March 2008 11:05
I work in a waterfall approach and have not done any Agile testing. I am however interested in the concept and attended a Sigist conference in London yesterday which had an interesting speaker - James Lyndsay - talking about Testing in an Agile environment.
The theme is around the tester role changing (promoted by developers mainly!), and in effect that in the future testers working in a scrum will find AND FIX bugs!
I have a problem with that part of Agile.
I don't believe that testers can effectively fix bugs just like that. As a tester, you would need to learn the programming language in order to do this. If you moved around projects where each one uses a new technology, then the training requirements are huge. This benefits consultants and training providers, but not necessarily the company. Why train a tester in .NET if you have x developers who are already skilled in that area? The danger with this is that testers could become jacks of all trades and masters of none. Is that what we want?
I have mailed James with this point, and used an analogy of a house builder. A team of people work together to complete a house within x weeks (a sprint), each of whom has different skills, but they work together to complete the job. Bricklayers lay bricks but don't install the electrics!
I await his reply.
On a separate point, he did not cover integration testing. If there are 5 sprints, and each one is tested independently, at what point does someone tackle the integration of that code? What if there are inputs and outputs to be checked? And how does end-to-end testing fit in, given that it is not part of a development sprint because the development is already complete?
The Agile approach as a tester raises a number of questions for me at this stage. I would be interested in other people's experiences.
20 March 2008 18:34
Hi Steve
Interesting.
Personally speaking, I don't see professional system testers fixing bugs. I don't think that's good, first and foremost because of the skills issues you mention.
The only time I would see this potentially working - and even then I'm not sure it's a great idea - is when you have no testers on your team and the "tester" is actually a developer testing another developer's code. In this case it may make some sense, but even so there is some value in the second pair of eyes that is lost if the "tester" also fixes defects.
Regarding your question about integration: In agile development, integration is continuous throughout every sprint as each feature is developed.
In a project spanning several sprints, there is still room for a regression testing period at the end. I referred to this in another blog post as a "stabilisation sprint" at the end of the project. I would only think this is necessary on a very large project. On smaller projects, or in individual business-as-usual sprints, regression testing would be a brief period at the end of the sprint, once all features are signed off as *done*.
With continuous integration and testing throughout, on a low complexity product this would be more like a road test. On a high complexity product it would obviously need to be more involved.
Regression testing is still needed in agile development though, even if the code has 100% automated unit test coverage.
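To give a feel for the difference, a system-level regression "road test" exercises the running application rather than re-checking individual units. The following is only a rough sketch - the base URL, paths and expected content are made up, and a real suite would normally use a proper API or browser-driving test framework:

```python
# Rough sketch of a coarse-grained automated regression check run
# against a deployed test build at the end of a sprint. All URLs and
# expected content below are hypothetical.

import sys
import urllib.request

BASE_URL = "http://test-env.example.com"  # hypothetical test environment

CHECKS = [
    # (path, expected HTTP status, text expected somewhere in the body)
    ("/login", 200, "Sign in"),
    ("/catalogue", 200, "Products"),
    ("/api/orders/healthcheck", 200, "ok"),
]


def run_checks() -> int:
    """Return the number of failing checks."""
    failures = 0
    for path, expected_status, expected_text in CHECKS:
        try:
            with urllib.request.urlopen(BASE_URL + path, timeout=10) as resp:
                body = resp.read().decode("utf-8", errors="replace")
                ok = resp.status == expected_status and expected_text in body
        except Exception:  # HTTP errors, timeouts, connection failures
            ok = False
        print(f"{'PASS' if ok else 'FAIL'}  {path}")
        if not ok:
            failures += 1
    return failures


if __name__ == "__main__":
    sys.exit(1 if run_checks() else 0)
```

However thorough the unit tests are, a pass here tells you the assembled, deployed system still hangs together.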
Kelly.
21 April 2008 07:08
As a test manager defining test strategies, I am keen to identify a template that specifies the prescriptive elements of an agile testing project. Test templates are usually the building blocks for a test project, but with the diminishing focus on documentation in agile development, the need for traditional documentation goes away - except for the test strategy. The test strategy says up front what is supposed to be done, and you need something like that even for an agile development in order to establish clear, unambiguous ground rules for all members of the team.
7 August 2008 15:25
To Steve W: any reply from James on this?
27 November 2008 14:24
As it happens, I sent Steve a reply to his email on 19 March, and spent just under an hour chatting with him on the phone a few days later. I'd post my reply here, but it needs Steve's initial set of questions to do it justice.
I think that it is unlikely, and undesirable, that a tester will be required to fix any and all bugs. I think that it is likely, and desirable, that simple-to-fix bugs will be fixed as they are found.
In my practice, this happens when I sit down with a coder: we find the bug in data, configuration, or code, write a better test (and run it to check it fails), make the change, test it, and (if satisfied) make it part of the release and run the usual large set of confirmatory tests on the whole release.
This won't happen to all - or even most - bugs. Some bugs will be logged and prioritised before fixing, some will need a group of people to fix, some won't get fixed, some will cause arguments and fundamental changes to the design. Your mileage may vary.
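For what it's worth, here is a minimal sketch of that "write a better test first" step - the function and the bug are invented for illustration, with pytest-style tests:

```python
# Invented example: bug report says a search for "widget " (trailing
# space) returns nothing, even though "widget" matches.


def normalise_query(query: str) -> str:
    # The fix: strip surrounding whitespace as well as lower-casing.
    # Before the fix this only lower-cased, so the new test below
    # failed - which is exactly what we want to see before changing code.
    return query.strip().lower()


def test_existing_behaviour_still_holds():
    assert normalise_query("Widget") == "widget"


def test_trailing_whitespace_is_ignored():
    # New test written while pairing with the coder: run it, watch it
    # fail against the old code, then make the one-line change above.
    assert normalise_query("widget ") == "widget"
```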
Anonymous - or anyone else on the list - feel free to contact me via my website / skype / email / etc.
The talk went with a paper: www.workroom-productions.com/papers.html. Have a read.
6 May 2010 10:26
In Agile, I prefer that the tester takes two roles: Tester and Business Analyst. The tester communicates with the product owner to get requirements, writes the test cases, and presents the requirements to the coders. The test cases should be finished before the coder implements, so the coder can review them and also use them during implementation. The tester still reports bugs and asks the coders to fix them. The product owner will do UAT after the tester completes testing.
What do you think?
6 May 2010 11:19
Hi Son. I definitely agree with you; that's why in my last two companies we have made a conscious decision to describe testers as Test Analysts.
Kelly.