User Stories Should Be *Testable*
The *T* in the INVEST acronym (a handy way to remember and assess what makes a good User Story) stands for Testable.
The most common forms of User Story that are not testable are big User Stories, known as Epics, or non-functional User Stories...
An Epic may be a User Story that really comprises multiple User Stories, or one that is simply very complex.
An example from my previous job: "As a user, I want to calculate the cost of repairing a crashed car". This is really an Epic, and it is not Testable as it stands.
Non-functional stories are things like "As a user, I want the system to be fast", "As a user, I want the system to be secure", "As a user, I want the system to be easy to use". None of these is well written from a Test perspective. Perhaps they could be re-phrased to make them more Testable? For instance, "web pages should generally load within 2 or 3 seconds", or "the system should comply with OWASP security standards"...
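A re-phrased non-functional story like the page-load target becomes something you can check directly. Here is a minimal sketch; the `load_page` function and the 3-second budget are illustrative assumptions, not something from the post:

```python
import time

def load_page(url):
    """Stand-in for a real HTTP fetch; it just simulates some work here."""
    time.sleep(0.01)  # pretend network + render time
    return "<html>ok</html>"

# Testable version of "web pages should generally load within 2 or 3 seconds":
start = time.monotonic()
load_page("https://example.com/home")
elapsed = time.monotonic() - start
assert elapsed < 3.0, f"page took {elapsed:.1f}s, budget is 3s"
```

The point is simply that "within 2 or 3 seconds" gives the test a pass/fail line, where "fast" does not.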
Let's take my recent example of a User Story and see if it's Testable.
First of all, it's small and fairly independent, which makes it inherently more testable to start with. This is a simple example so it should be easy to test. The scenarios are: Successful login, Failed login (user id), Failed login (password), Failed login (expired account), Failed login (authentication system unavailable), Remember me (if successful and ticked).
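To illustrate, the login scenarios above might be captured as automated checks roughly like this. The `authenticate` function, its status strings, and the sample accounts are hypothetical, purely for illustration; the "Remember me" scenario is omitted since it depends on the UI:

```python
# Hypothetical accounts used to drive each scenario.
VALID_USERS = {
    "alice": {"password": "s3cret", "expired": False},
    "carol": {"password": "letmein", "expired": True},
}

def authenticate(user_id, password, auth_available=True):
    """Stand-in authentication service; returns a status per scenario."""
    if not auth_available:
        return "auth-unavailable"
    account = VALID_USERS.get(user_id)
    if account is None:
        return "unknown-user"
    if account["expired"]:
        return "account-expired"
    if account["password"] != password:
        return "bad-password"
    return "success"

# One assertion per scenario from the user story:
assert authenticate("alice", "s3cret") == "success"        # successful login
assert authenticate("bob", "s3cret") == "unknown-user"     # failed login (user id)
assert authenticate("alice", "wrong") == "bad-password"    # failed login (password)
assert authenticate("carol", "letmein") == "account-expired"  # failed login (expired)
assert authenticate("alice", "s3cret", auth_available=False) == "auth-unavailable"
```

Each scenario named in the story maps to exactly one check, which is what makes the story Testable.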
Importantly, identify the tests up-front, before the User Story is implemented. If developers know how the User Story will be tested, maybe they will write it to pass? :-)
Kelly.
7 April 2008 23:44
Hi Kelly,
I'm a follower of your blog and I have learnt a lot from it. Thanks for the contribution.
I've made an odd post at my company's blog, and I invite you to take a look too! :)
Hope you have fun!
www.seatecnologia.com.br/blog
or
http://www.youtube.com/watch?v=g_Y-eHsADrw
8 April 2008 01:20
The great thing about user stories is that they are written in plain English, rather than the traditional "1.1 The system shall do..." format. Written this way, business users can understand what they should be able to do when performing acceptance testing. +1 for user stories.
8 April 2008 12:48
At the end of a Sprint what should be produced is clean code, i.e. fully tested, bug-free code. When implementing user stories, developers should not have to be told that the user stories will be tested, as testing should be a part of their process.
We use two types of testing - programmatic tests created by the developers, and acceptance testing, performed on the application by the user after the Sprint review.
I have also seen teams where one member's sole responsibility is to create tests for the code produced by others. Whether a dedicated tester is absolutely required is debatable; however, some teams coming from a more traditional methodology will have one.
Regardless, the development team should be testing to cover the application perspective, and the product owner should test from the user perspective. That way, all bases are covered.
15 May 2008 15:36
One thing that I do have a concern with is the lack of testing detail. Following a traditional waterfall approach, you have a test case and then a script that tells you what data you are entering and what the expected result is. When writing user stories you write high-level test conditions, but do not have much detail. I guess it is not a problem if the test is very simple, but if there are a number of variables where different results occur depending on a value entered, then surely the inputs should be recorded. I also wonder how Agile as a methodology stands up to auditing. I have worked for banks where the tests are audited due to regulatory requirements, and there is an expectation that the test inputs are fully defined, with expected results and, more importantly, some proof of the result! Where in Agile does proof of test results for auditing sit?
15 May 2008 17:18
Hi Steve
You make some very fair points that I know are a concern to a lot of testers getting used to agile.
In part, the point is really a philosophical one. In traditional waterfall projects, a tester has to think of the data to enter to drive out the test scenarios. In agile, the tester still has to think of the same things, just not necessarily *all* up-front; they can do it when they come to execute the test cases for the user story.
Specific data input that drives different scenarios can be captured in the test cases on the user story where appropriate. For example, to test a credit card payment you might have: 1) test visa card, 2) test amex card, 3) test expired card. You could potentially put the data with the test cases, but I'd suggest only including the essential differences in the data that drive it down a different scenario. And only if somehow it's helpful to do that up-front rather than when the story is ready to be tested.
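To sketch that idea: only the data differences that drive a different scenario are recorded with each test case. The `charge` function and its result strings below are invented for illustration:

```python
def charge(card_type, expired):
    """Stand-in payment function; returns an outcome per scenario."""
    if expired:
        return "declined-expired"
    if card_type not in ("visa", "amex"):
        return "declined-unsupported"
    return "approved"

# Test cases mirror the user story: only the essential data differences
# (card type, expiry) are captured, not full card details.
cases = [
    ("visa", False, "approved"),          # 1) test visa card
    ("amex", False, "approved"),          # 2) test amex card
    ("visa", True, "declined-expired"),   # 3) test expired card
]

for card_type, expired, expected in cases:
    assert charge(card_type, expired) == expected
```

Any further detail (specific card numbers, amounts) can live in the test management system, added when the story is ready to be tested.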
If you're using a test management system, further details can be held in there, with each test case scripted in more detail if and where that's appropriate. At least then it's repeatable; although in agile, testing would ideally be largely automated anyway, so it may not be necessary.
With or without this extra detail, it's still good to capture the basic test cases before development, as part of getting the user story written.
If you have a requirement to capture evidence of test results for audit purposes, you can still do so in your test management system, when the tests are executed. And you can still use your test management system to see how much of the testing has been covered for the sprint, and how much has passed.
Basically the process is largely the same as in traditional projects. But you do it piecemeal when it is needed, i.e. just in time, instead of all up-front.
Kelly.
1 August 2009 22:50
Hi Steve,
Having worked in agile teams for a while now, I have to agree with Kelly's response that 'the process is largely the same'. Another advantage of writing test cases 'just in time' is that they stay fresh in your mind. How many times have you written test cases, sometimes months up front, and by the time you test, forgotten the logic behind them?
4 November 2009 19:44
As a QA Analyst for over a decade, I do not agree with the test examples in most User Stories. A test should be an action and an expected result, whereas the acceptance tests commonly given in User Story examples are really Waterfall requirements statements without the word "shall". Seriously! Look at it for a while and you will see what I mean. User Stories are leveraging the test section in order to further refine requirements, but it is no more a test than a requirement is a test.
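For what it's worth, a test in that action-plus-expected-result form might look like the following sketch. The lock-after-three-failures rule and the `Account` class are hypothetical, purely to show the shape of action versus expected result:

```python
class Account:
    """Toy account that locks after three consecutive failed logins."""

    def __init__(self):
        self.failures = 0
        self.locked = False

    def login(self, ok):
        if self.locked:
            return "locked"
        if ok:
            self.failures = 0
            return "success"
        self.failures += 1
        if self.failures >= 3:
            self.locked = True
        return "failed"

# Action: attempt to log in three times with bad credentials.
acct = Account()
for _ in range(3):
    acct.login(ok=False)

# Expected result: the account is now locked, even with good credentials.
assert acct.locked
assert acct.login(ok=True) == "locked"
```

Contrast this with a requirements-style statement like "the account locks after three failed attempts", which describes the rule but prescribes no action and no observable result to check.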
Why create test cases in advance? Well, I find that it helps me prepare on many levels, but that is quite a lengthy subject, not to mention that generating tests is in and of itself a form of testing (static testing). If you can catch defects in requirements, you have stopped them at the least expensive point of the process. I will also note that finding data values on the fly can be extremely time-consuming. It helps to find out how dev has laid the tables out, etc., and test writing is a great way of figuring that out.
One other note: I believe that it is inherently not a good practice for developers to have access to the QA tests. Developers should never test the way that QA tests. Also, the prospect of unknown audits can often lead us to do more complete work than if we know the audit will not cover something.
Do you need a dedicated tester on the team? Hmmm...well I just logged 150 defects on a release that was Unit tested before I got it, and this was created by senior developers. If you want to be less accountable for the work you do and you don't mind more bugs going to the end user, then by all means get rid of the people who specialize in it. :)