Agile Release Planning
A software release may result from multiple iterations (or 'Sprints' in Scrum).
Sprint Planning is about planning what's included in the next iteration, whereas Release Planning is about planning multiple Sprints in order to predict when a release (or releases) might be delivered.
Release Planning is a very simple way of doing some top-down planning. It's much less complex than a traditional project plan on a Gantt chart, so it's much quicker to do and, I would say, no more or less accurate.
First of all, let's assume you already have your Product Backlog (feature list), with all your User Stories set out in priority order. Let's also assume that you've estimated your Product Backlog, ideally using Story Points.
If you already have an established team doing Scrum or XP (eXtreme Programming), use the team's known Velocity to divide the Product Backlog into Sprints.
However, if the team is not already using Scrum or XP, you need to estimate the team's Velocity. To do this, you must first make an assumption about the team size that is likely for the release. Then, you need to decide on your Sprint duration and, ideally with the input of the team, decide how many of the initial User Stories you think could reasonably be achieved in a Sprint. Add up the Story Points for these items. Using this number of Story Points as the team's estimated Velocity, divide the Product Backlog into Sprints.
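For illustration, here's a minimal sketch in Python of that chunking step, using hypothetical story names, point values and a hypothetical Velocity of 20. It simply fills each Sprint with the next stories in priority order until the Velocity is reached:

```python
# A minimal sketch of dividing an estimated Product Backlog into Sprints.
# All story names, point values and the Velocity are hypothetical examples.

def plan_sprints(backlog, velocity):
    """Greedily chunk a priority-ordered backlog into Sprints by Story Points."""
    sprints, current, points = [], [], 0
    for story, story_points in backlog:
        # Start a new Sprint when the next story would exceed the Velocity.
        if current and points + story_points > velocity:
            sprints.append(current)
            current, points = [], 0
        current.append(story)
        points += story_points
    if current:
        sprints.append(current)
    return sprints

backlog = [("User registration", 5), ("Login", 3), ("Search products", 8),
           ("Shopping basket", 8), ("Checkout", 13), ("Order history", 5)]

for i, sprint in enumerate(plan_sprints(backlog, velocity=20), start=1):
    print(f"Sprint {i}: {sprint}")
```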
If the project team is not already established, add a 'Sprint Zero' at the beginning to get things ready for the first Sprint, for instance getting the team organised, briefing meetings, setting up development and test environments, preparing the first set of User Stories, etc.
If it's a large or complex release, add a 'Stabilisation Sprint' (or more than one Sprint if appropriate) at the end to stabilise the release. By this, for instance, I mean stop adding new features, complete regression testing, bring the defect count down to an acceptable level, prepare for deployment, etc.
If the predicted end date is not acceptable for the project's objectives, alter the assumption about the team size (and associated costs!) and re-calculate.
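Purely as a sketch with hypothetical numbers, the date prediction itself is just arithmetic: count the Sprints needed for the total Story Points at the assumed Velocity, add Sprint Zero and any Stabilisation Sprints, and multiply by the Sprint length. Changing the team-size assumption changes the assumed Velocity, and you simply re-calculate:

```python
# A sketch of predicting a release date from the Sprint count.
# The backlog size, Velocity, start date and Sprint length are all
# hypothetical assumptions, not figures from any real project.
from datetime import date, timedelta
import math

def predicted_release_date(start, total_points, velocity, sprint_weeks=2,
                           sprint_zero=True, stabilisation_sprints=1):
    sprints = math.ceil(total_points / velocity)
    sprints += (1 if sprint_zero else 0) + stabilisation_sprints
    return start + timedelta(weeks=sprints * sprint_weeks), sprints

start = date(2008, 3, 3)
end, n = predicted_release_date(start, total_points=180, velocity=20)
print(f"{n} Sprints, estimated release {end}")

# If that date isn't acceptable, revise the team-size assumption (and the
# associated cost!), re-estimate the Velocity, and re-calculate.
end, n = predicted_release_date(start, total_points=180, velocity=30)
print(f"{n} Sprints, estimated release {end}")
```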
And there you have it! A Release Plan that provides an outline of the Sprints, what's included in each, and an estimated date for the release.
27 February 2008 16:45
Isn't this somehow going back to a waterfall approach when trying to plan what will be released and when?
I can see a customer not willing to make an application public until some critical stories are developed. In any case, delivering a "potentially shippable" product to the customer at the end of each sprint gives the customer the freedom to consider the application releasable, or not (whenever that may be).
27 February 2008 18:40
You had me until the "Stabilisation Sprint". If you get to the end and you need such a sprint, or sprints, then you were fooling yourself during the earlier sprints when you claimed the work item was "done". "Done" means tested and ready to ship. Either you were fooling yourself, or the Scrum Master was not doing their job very well.
27 February 2008 21:47
Hi, this is my response to Nathan...
Re your first question, "isn't this somehow going back to waterfall when trying to plan what will be released and when?" - an agile approach doesn't necessarily mean a lack of any planning.
Even if there's a release plan, the scope can be varied throughout the development, with features being added and dropped and the release plan being adjusted if necessary.
Realistically, it's unlikely many customers, internal or external, will commit to any significant project funding without some idea of what's likely to be released and when.
A key difference, though, is that it's not based on tasks and hours and dependencies all mapped out in detail on a Gantt chart that attempts to capture all the deliverables. It's just based on the number of Story Points the team thinks it can do in each iteration and chunking up the features on the initial Product Backlog.
In any event, you're right about the product being potentially shippable at the end of each sprint. That certainly is ideal, but my comment on this point was about "releases that are large or complex", and therefore it may well be impractical to release the product until core features are present, even if the completed features are production quality at the end of each sprint.
Kelly.
27 February 2008 21:58
Here is my response to 'jbisotti'...
You questioned the need for a stabilisation sprint. For small releases, I would completely agree with you.
But let's be honest, in practice on a large or complex project with a large team developing a ground-up product and spanning several sprints, there always remains the possibility of making changes that affect features that were already signed off earlier in the project.
Now of course I realise that if you have a great design with little dependence between features, automated unit tests with 100% code coverage, automated regression testing, and do not suffer from human error, this risk might be low. But in my experience, in practice this isn't usually the case for large projects and some regression testing and rectification is usually needed once the scope is completed.
And on a large release, like a ground-up development, this last sprint (in our case they're short sprints of a couple of weeks) might be needed just to do this regression testing, resolve any final issues, complete final load testing, prepare for deployment, etc, etc.
You might not agree with the term "stabilisation sprint"; maybe it's just the last sprint of the project.
If it's not needed, then clearly that's great! I'm certainly not saying it's compulsory. And I certainly wouldn't want to see it after a small business-as-usual sprint.
But, on a big project involving a large team and spanning several sprints, I'm just acknowledging what I think is probably the reality for many project teams, however experienced they might be in agile development.
Kelly.
4 March 2010 08:10
jbisotti, Kelly,
I do understand the perspective from both of you and tend to agree more with Kelly; the reality is that in each sprint we typically prioritise functional test cases for acceptance.
Although performance testing may be an equal part of acceptance, unless all the sprints are complete (and a release may involve several teams and sprints), a one-shot performance test may be the best way to find the actual performance that can be expected in Production. And the larger the amount of functionality packed into the release, the more time this takes, which may not fit completely into an earlier sprint.
Vulnerability testing is another activity that is best done in a stabilization Sprint.
The cost of doing perfect regression testing incrementally in each sprint is much higher than the cost of a stabilization Sprint.
23 July 2010 11:42
I think the question of a "stability" sprint is more of a semantic debate. You could easily have stated it as a list of "lower priority" user stories, such as: "As a clerk, I want to get the results of my query back in under 2 seconds, so that I can complete 20 widgets per hour." In "stability" sprint terms, that just means performance tuning. The same might be done with vulnerability tightening or other types of things that may be best left for the "end". Also, it is a simple reality that as more code is piled in, the risk / chance of regression issues grows exponentially, so (as suggested) larger projects will be more likely to need some final "stability" tweaking. As someone else mentioned, it is just too time consuming to do complete regression testing during every sprint once the amount of "new" code gets to a certain size. That is not to say that the result shouldn't be potentially shippable, but a little "clean-up" at the end of a major "release" is reasonable and indeed responsible.