Agile Business Conference 2008 – Agile Development: Enterprise Delivery
Now I'm in a session called 'Agile Development: Enterprise Delivery', being presented by BJSS. They're using agile development techniques on an enterprise-scale project, where the customer wants a fixed budget and a fixed delivery date, and the scope is also fixed because the project's replacing an existing system...
Now we all know that's a challenge in software development, whatever methodology!
The project is to deliver a high-volume (5,000-10,000 orders per second), high-performance trading system, with thousands of users and a big-bang delivery. It's a significant revenue generator, and the project is a 1-2 year programme. Sounds quite scary, but so far doesn't sound very agile!
Added to the scale of the project and the complexity of the solution, the customer is also more used to a traditional, waterfall approach to software development projects. The customer thinks they need something more agile to help mitigate some of the risks, but they also want to have their cake and eat it: they want to know exactly what they're going to get and when, because the deadline needs to be publicly advertised.
One way they're mitigating some of the risks is to deal with the most difficult, risky things first. As usual in agile, they're also of course breaking the project into small iterations, testing early, and taking a very collaborative approach to the project. Likewise, user acceptance testing is a continuous process throughout the delivery.
They're very risk focused, asking every day "what can we do to reduce the risks?". It's a very architecture-centric approach. They did a PoC (Proof of Concept) first, in order to test performance, new technologies and some key aspects of the architecture.
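A performance PoC like this typically boils down to a throughput measurement: push a large batch of orders through the system and count how many it handles per second. A minimal sketch (Python purely for illustration; `process_order` and the numbers are hypothetical stand-ins, not details from the BJSS project):

```python
import time

def process_order(order):
    # Stand-in for real order handling / matching logic.
    return order["qty"] * order["price"]

def orders_per_second(n=100_000):
    """Measure raw throughput of process_order over n synthetic orders."""
    orders = [{"qty": 10, "price": 1.5}] * n
    start = time.perf_counter()
    for order in orders:
        process_order(order)
    elapsed = time.perf_counter() - start
    return n / elapsed
```

A real PoC would of course measure the end-to-end system under realistic load rather than a single function, but the principle is the same: establish early whether the 5-10,000 orders per second target is achievable with the chosen architecture.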
They didn't go into every detail on the architecture up-front, but they did have a high level view of the overall architecture, in order to understand the solution context and the key interfaces.
The project is use-case driven. Iterations are delivered to the users, even if they have little functionality. User representatives can actually use what has been built throughout the development, even if there's not much to the solution yet.
The project is structured into small, autonomous "cells" (multi-disciplined, agile teams). An architecture team was formed first, with other teams coming on board as soon as possible. The high-level architecture allowed the solution to be divided into distinct parts, allocated to the vertical teams.
The project is very *team* focused. They are committed to the approach. They have clear roles and responsibilities. They are honest with each other and they happily help each other. They are empowered to make decisions. They prefer face to face communication, and they work hard to keep the team motivated.
The team leaders for each team meet every day in a Scrum of Scrums to keep the leadership group joined up and aware of progress and issues.
They document only what is necessary, and only in sufficient detail to meet the purpose. Documentation has a short lifetime. Instead they tend to focus more on workshops, collaboration, whiteboarding sessions and short design notes.
They have one rule - Never break the build! They automate as much as possible. They run *all* the automated unit tests on *every* build. The team fixes any problems as and when they arise.
In terms of quality, their mantra is 'keep things simple'. They test early and often. They turn on all warnings and fix everything they find. They measure test coverage. And customers test continuously as releases drop.
They choose their tools carefully, depending on the needs and nature of the project, making sure they have tools to help measure test coverage, handle refactoring, automate tests and builds, and so on.
Although the development teams are working in a very agile way, from a project management perspective, they do keep a strong focus on the bigger picture and the end game. Use-cases move between iterations quite regularly, but they measure progress using some key metrics. For example they track % complete (using earned value), velocity, efficiency, defect count/detection rate, and test progress. These metrics are used for communication with the customer, and also for forward planning.
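The earned-value style metrics come down to a couple of one-line formulas: % complete credits only finished work, and velocity is points delivered per iteration, which together support forward planning. The numbers below are illustrative assumptions, not figures from the project:

```python
def percent_complete(earned_points, total_points):
    """Earned-value style % complete: only fully accepted use-cases count."""
    return 100.0 * earned_points / total_points

def velocity(points_per_iteration):
    """Average points delivered per iteration so far."""
    return sum(points_per_iteration) / len(points_per_iteration)

# Illustrative numbers only.
earned, total = 120, 400
pct = percent_complete(earned, total)   # 30.0 -> 30% complete
v = velocity([25, 30, 32, 33])          # 30.0 points per iteration
iterations_left = (total - earned) / v  # forward planning estimate
```

Because only accepted use-cases earn value, the % complete figure can't be inflated by half-finished work, which makes it a more honest number to put in front of a fixed-price customer.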
Regular demonstrations and iterative delivery are key to the software delivery. But they also have some traditional project governance sitting over the overall programme, e.g. status reports, RAG status, and a Gantt chart showing the iterations and any non-development aspects of the project. They also do resource profiles, track risks and issues, and manage the project budget.
So, in their experience, you can have your cake and eat it too!
Kelly.