Agile Release Planning – the preparation phase

Before I start, I want you to read the following statement:

“The most important role of the PO, the project manager and the team is to create a product that someone wants to buy, from which the company can earn money and, in return, pay you something back. In the process of creation, you need to respect the law and make sure that people are treated with respect and dignity. Everything else is a second priority.”

Some time back, I spoke about the “Iron Triangle” and how critical it is to know upfront which project levers (functionality, date or resources) are at your disposal for adjustment as you start with the planning. Failure to ask this of the stakeholders is a sign of immaturity or lack of confidence, or both.

Once that is clear, you look into the feature list presented by the PO at the kick-off of the project. That list can’t be longer than 10-15 items. This list must survive the so-called elevator test (from Geoffrey Moore’s classic Crossing the Chasm), which goes like this:

  • For <target customer>
  • Who <statement of the need or opportunity>
  • The <product name> is a <product category>
  • That <key benefit, compelling reason to buy>
  • Unlike <primary competitive alternative>
  • Our product <statement of primary differentiation>

These items are not “epics”; these are your FTPETs (a name I just made up: Features That Passed the Elevator Test). For the stakeholders and the outside community, these are the only sellable items, and you need to track them separately throughout the project lifecycle, next to all other working items (stories, epics, tasks, subtasks, whatever – those terms are too technical for the rest of the folks).

In the first phase of the project (between M1 and M2 from the project milestones blog), you need to organize a workshop. For more info, I recommend the book “Practices for Scaling Lean & Agile Development” by Craig Larman and Bas Vodde. Short version: go to a separate location with the key guys, make sure you have enough whiteboards around (and a camera to capture drawings and discussions), and do the following (extract from the book):

  • Envision business case, strategy, high-level list of features (major scope), and constraints for the release.
  • Identify ten or twenty percent of the features that deserve deep analysis immediately.
  • Choose a subset that will yield broad and deep information about the overall release work. There are always some key features that, if deeply analyzed, give you overarching information about the big picture.
  • These may be the most complex, the most architecturally influential, the highest-priority features, or features with the most obscurity. From the viewpoint of information theory, they represent a subset that, if analyzed, is most likely to have lots of surprising information—the most valuable kind (end of extract)
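The selection step above can be sketched as a simple scoring exercise. This is only an illustration: the feature names, the 1-5 rating scale and the equal weighting of the four dimensions are my assumptions, not something from Larman and Vodde’s book.

```python
# Hypothetical sketch: pick the ~20% of features most likely to yield
# surprising information, scored on the dimensions listed in the extract.
# All feature names and ratings below are invented.

def pick_for_deep_analysis(features, fraction=0.2):
    """Return the top `fraction` of features by information-value score."""
    def score(f):
        # Each attribute rated 1-5 by the workshop participants.
        return (f["complexity"] + f["architectural_impact"]
                + f["priority"] + f["obscurity"])
    ranked = sorted(features, key=score, reverse=True)
    count = max(1, round(len(features) * fraction))
    return ranked[:count]

features = [
    {"name": "single sign-on", "complexity": 5, "architectural_impact": 5,
     "priority": 4, "obscurity": 3},
    {"name": "export to PDF", "complexity": 2, "architectural_impact": 1,
     "priority": 2, "obscurity": 1},
    {"name": "audit trail", "complexity": 3, "architectural_impact": 4,
     "priority": 3, "obscurity": 4},
    {"name": "dark theme", "complexity": 1, "architectural_impact": 1,
     "priority": 1, "obscurity": 1},
    {"name": "multi-tenant storage", "complexity": 5, "architectural_impact": 5,
     "priority": 5, "obscurity": 5},
]

print([f["name"] for f in pick_for_deep_analysis(features)])
# 20% of 5 features = 1 feature: the highest-scoring one
```

In practice the “scores” come out of the workshop discussion, not a spreadsheet; the point is only that a rough ranking, not a precise formula, decides what gets deep analysis first.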

Once the workshop is over, create a small sketch plan on A4 paper. If you need A3, you are already in trouble. Divide it into four quarters (Q1 to Q4) and draw the activity diagram, similar to story mapping, but taking into account only the major items, including FTPETs and the most important non-feature tasks (whatever those are). Small tip: for God’s sake, don’t use week numbers instead of quarters; most people outside of project circles can’t actually tell you which w## they are in. It would also give the wrong impression that your precision is in weeks – and it is not. This paper will be your basis for a discussion with the R&D folks and key architects with whom you are going to make a more detailed plan. Right now, you need this paper as a tool to check the dependencies and get first feedback from R&D. After a few sanity checks on the whiteboard, you will know whether this plan makes sense, nothing more. If people can’t tell you at least this, you either have not done a great workshop, or you are surrounded by the wrong people (tip in the latter case: if you can’t change the team around you, change the team around you).

Depending on the team size, and whether the team is co-located or not, you will need to split development into one or more feature teams. Each feature team must have a feature owner – call it a scrum master, whatever. He should lead feature discussions with the PO (to come up with more detailed user stories), lead technical discussions with the R&D folks, and make sure that all ideas are captured in the wiki, with drawings, mockups and design choices. Here is one example of a feature mockup in the wiki:

Once the key items have passed this phase, ask the scrum masters and devs to add all KEY user stories, FTPETs and major tasks to the system (in our case, JIRA). By then, about 30%-50% of all features are well understood. Reminder: after the workshop you tackled 20% of the features; a few weeks later you are at 20% of the time span of the release. This is your milestone M2. Spending 20% of the time on analysis is a rule of thumb, BUT you should not exceed it by much. If time is the most important dimension, everything that has not been discussed at this stage and still looks complex to solve must be out of that release (or at least be given a very low probability of success in that release). Here is what you can do next: create a feature list with the following items:

  • Key (JIRA or something similar)
  • Summary
  • Probability
  • Rationale (Grow, Harden, Customer etc)
  • Priority (P1, P2 or P3)
  • Comment
  • Size (relative) in story points
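Kept as a flat list, the same fields can be sketched in a few lines of code. The field names mirror the bullet list above; all sample values are invented for illustration.

```python
# Minimal sketch of the feature-tracking list described above.
from dataclasses import dataclass

@dataclass
class Feature:
    key: str            # JIRA key or similar
    summary: str
    probability: float  # probability of success in this release
    rationale: str      # Grow, Harden, Customer, ...
    priority: str       # P1, P2 or P3
    comment: str
    size: int           # relative size in story points

features = [
    Feature("PRJ-101", "Single sign-on", 0.9, "Customer", "P1", "", 13),
    Feature("PRJ-102", "Audit trail",    0.6, "Harden",   "P2", "risky", 8),
    Feature("PRJ-103", "New dashboard",  0.8, "Grow",     "P1", "", 5),
]

total_points = sum(f.size for f in features)
print(total_points)  # 26
```

The keys and numbers are placeholders; the real list lives in JIRA or a wiki table, but the structure is the same.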

In my previous blog, I talked about story point estimation. Now you can see how I use the points (R&D can still use them for other things). The size of a feature is given in story points as a relative number. I can sum all features together and provide a quick overview to the PO of where the majority of the effort in this release will be spent (like in the picture above, which is made by one simple wiki macro). It also gives me a tool to provide honest, professional feedback to the stakeholders about the probability of success of any given feature. Please also note that for the test and doc effort estimation I use T-shirt sizes (S, M, L, XL).

The Rationale category is important information for the PO and stakeholders, as based on this info we can all see in one pie chart whether we are making this release as a result of the push effect (bringing new functionality that is unique to the product), the pull effect (customers asking for new features), or whether we are busy with tasks such as improved scalability, some major redesign, or refactoring (a result of the “Demo Driven Development” syndrome).
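As a sketch of that pie chart, here is how the rationale breakdown could be computed from the feature list. The keys, rationales and sizes below are made up; in our setup this is what the wiki macro effectively does.

```python
# Aggregate story points by Rationale to see whether the release is
# push (Grow), pull (Customer) or hardening work. Sample data is invented.
from collections import defaultdict

features = [
    {"key": "PRJ-101", "rationale": "Customer", "size": 13},
    {"key": "PRJ-102", "rationale": "Harden",   "size": 8},
    {"key": "PRJ-103", "rationale": "Grow",     "size": 5},
    {"key": "PRJ-104", "rationale": "Customer", "size": 3},
]

by_rationale = defaultdict(int)
for f in features:
    by_rationale[f["rationale"]] += f["size"]

total = sum(by_rationale.values())
for rationale, points in sorted(by_rationale.items()):
    print(f"{rationale}: {points} pts ({100 * points / total:.0f}%)")
```

One glance at those percentages tells everyone whether the release is driven by customers, by product strategy, or by technical debt.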

Once you have your skeleton plan and 30%-50% of all features well understood (with the most complex ones in that bucket), you are ready to go – you have your milestone M2. Now the fun starts! Stay tuned for the next blog, it’s coming…


P.S. A comment on the first paragraph:

When I became a manager some years back, I was told that I would be measured by “whether the project is delivered on time, within the agreed budget and with all features implemented”. I believe now, as I did then, that this is the worst possible message (and metric) you can ever give to a young manager.


The Effort Estimation Fallacy, or Effort-(Only-)Driven Project Planning

Countless books, blogs and studies have been written about the effort estimation process. Unless we talk about repetitive, so-called “blue collar” jobs, estimating the time required to complete any complex activity has proven hard to do right (these are my favorite studies: “Research suggests people underestimate numerical guesses when leaning left”, “Impact of irrelevant and misleading information on the effort estimate.”). My take on this is that the problem is not easy to solve because our work involves solving non-linear problems (or because estimating the effort to solve a problem is itself a non-linear problem). Before joining the big company, I had fun dealing with some of these problems in another field, and to my mind, from my early days of working as a developer in a dev team, everything that had to do with planning and effort estimation resembled the simulated annealing process, where convergence to the global optimum sometimes requires taking random (not to say wrong) steps. In my previous blog, I talked about the “Cone of Uncertainty” that should shrink over time, which in a way is similar to the idea of cooling in the simulated annealing algorithm.

I went to this length explaining my “academic view” on this because I would like you to remember two things. First, if you ever find yourself reading about, or trying to find, a magic formula for effort estimation – you are wasting your time. Techniques such as “function point analysis” were invented by folks who never wrote a single line of code. Secondly, the fact that estimation is not a simple thing to do doesn’t mean you should not do it; I am only saying: whatever you do, be careful about the data you get out of this process.

Many agile gurus, like Craig Larman, talk instead about the need for trust and transparency between stakeholders and R&D, in order to make the project planning process more transparent and more engaging. Moreover, as the success or failure of the project often becomes the sole responsibility of the project manager after the initial roadmap phase (which is wrong in many ways), discussion of effort estimation becomes the central point of project planning – generating mistrust between R&D and the rest of the community at an early stage of the project. WRONG!!

First of all, as mentioned in my previous blog, project roadmap planning must first be about the product strategy and the activities required to make the release a success. Once that is clear, and the first few iterations are completed, people will have a better idea of what is probably achievable in the current release. Before that phase is completed, any estimation or feedback from R&D must be taken with caution. At the very beginning of the project, the focus of the team, including R&D and the PO, should not be on effort estimation, but rather on understanding the requirements, the complexity of the problem and the different possible paths that can lead to a solution. Once this is clear, we are ready for the planning phase.

But before I explain how I do it (in the end, this is a blog about agileME), I still feel like introducing a little bit of methodology, sorry for that. When it comes to getting a number out of the estimation process, there are today two main schools of thought: you either estimate in time, or in story points.

Some time back, Mike Cohn came up with a story point planning strategy: check out his book, the video link from the Google talk, or the slides. In short, you start with an iterative estimation strategy called planning poker. Here are the steps:

  • Each estimator is given a deck of cards, each card has a valid estimate written on it (Fibonacci series)
  • Customer/Product owner reads a story and it’s discussed briefly
  • Each estimator selects a card that’s his or her estimate
  • Cards are turned over so all can see them
  • Discuss differences (especially outliers)
  • Re-estimate until estimates converge
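The re-estimation loop above can be sketched in a few lines. Note the “all picks within one adjacent card” convergence rule is my own simplification for illustration, not part of Cohn’s method, and the sample rounds are invented.

```python
# Sketch of planning-poker convergence, assuming estimators pick
# from a Fibonacci deck and re-estimate until picks agree closely.
DECK = [1, 2, 3, 5, 8, 13, 21]

def converged(estimates):
    """Treat estimates as converged when all picks are the same or adjacent cards."""
    positions = sorted({DECK.index(e) for e in estimates})
    return positions[-1] - positions[0] <= 1

round1 = [3, 13, 5]   # the outlier at 13 triggers a discussion
round2 = [5, 8, 5]    # after discussion the picks narrow
print(converged(round1), converged(round2))  # False True
```

The interesting output of the game is not the final number but the discussion the outlier forces, which is exactly the point I come back to later.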

Finally, fill the backlog with tasks, subtasks and story points. Once you have done so, as Mike says, you continue like this:

  • Determine how many iterations you have
  • Estimate velocity as a range (number of story points team can finish in one iteration)
  • Multiply low velocity × number of iterations – these are your “Will Have” items
  • Multiply high velocity × number of iterations and subtract the “Will Have” items (in story points) – these are your “Might Have” items
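The arithmetic in those last two steps is easy to garble in a spreadsheet, so here it is as a tiny function. The velocity range and iteration count are made-up numbers.

```python
# Mike Cohn's range arithmetic from the steps above.

def release_buckets(low_velocity, high_velocity, iterations):
    """Split the release scope into "Will Have" and "Might Have" story points."""
    will_have = low_velocity * iterations
    might_have = high_velocity * iterations - will_have
    return will_have, might_have

will, might = release_buckets(low_velocity=20, high_velocity=25, iterations=6)
print(will, might)  # 120 30
```

So with a velocity of 20-25 points over 6 iterations, the first 120 points of the backlog are “Will Have” and the next 30 are “Might Have”; everything below that line is out.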

Picture taken from Mike Cohn’s presentation on the iteration planning

It sounds simple, but there are a few “gotchas” that have hit every single scrum team I have seen so far:

  • it makes people focus too much on poker games too early in the project (you miss the big picture).
  • it feels like project management is all about story point estimation and simple counting (to be frank, a few agile folks warn about this trap, but still it happens way too often)
  • if you base your planning only on the total number of story points, then you need to estimate every single little thing in story points. I hate this, and it is contrary to the Pareto rule from the previous blog. Moreover, you may wonder whether to estimate “framework things”. Then you have academic discussions on whether to estimate refactoring, etc.; endless. Before you know it, people spend weeks doing estimation without doing any real work, contrary to everything agile should bring. If you have an inexperienced scrum master who needs to provide roadmap planning, you will be doing poker games like crazy and end up with 800 stories in your backlog before you even start doing anything (the flat-backlog blackout syndrome)
  • people don’t learn from their own estimations. As mentioned earlier, estimations are always wrong, but if you estimate in time, at least after a while you know whether you are a pessimist or an optimist.
  • you can’t make much out of story points if the team is new to this
  • we are told that story points are about the complexity of a feature, BUT everyone has the time dimension in mind anyway (strange how few people are willing to admit this; I don’t know why). For fun, check out the TED video “Why we think it’s OK to cheat and steal (sometimes)” by Dan Ariely, and pay attention to the part of the experiment that uses tokens as an intermediate currency before they are exchanged for real money.
  • if someone tells me that he needs 5 points to make something, I have no clue when the stuff will be done; moreover, the guy doesn’t feel like making any commitment to the rest of the team. (You may wonder what this has to do with my previous statement about avoiding a “commitment from R&D only” situation with the stakeholders.) This is different: it is about a guy making his best guess of how long it is going to take him to finish a task within one iteration. If there is no time information present, you never know whether the guy is in trouble during development, or whether he knows what he is talking about, and, if you like, it is hard to make peer pressure work (I am a hard-core XP guy on this, to be frank).


Now, you may wonder why project managers like story point planning. I think for the same reason they liked waterfall: it gives them a fake feeling of having things under control once they have their numbers. As if by magic, story points remove all the uncertainty of making the project a success.

If by now you think that I am not using story points at all, you would be wrong :). I do use them, but in a different way. First of all, a story point discussion is a very good way of discussing complexity – indeed, it is engaging. It also provides people with a “token platform” to discuss different aspects of the problem, and that level of indirection at that point in time is a good thing. The only real question is what you do with these discussions and the numbers that come out of them. Here is a little tip: while listening to the discussions, I am interested in two things:

  1. How much the opinions differ
  2. How complex the feature is

The complexity number gives you an idea about the nature of the probability distribution of the estimate for a given feature, while the differing opinions give you the range (or sigma) of the “Bell Curve” – if and only if the complexity is low (only in that case can we talk about a bell curve).
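As a sketch of the two signals I listen for, here they are as code. The card values are invented, and using the median as “complexity” and the standard deviation as “spread” is just one possible way to formalize what is really a judgment call in the room.

```python
# Read two signals from a round of poker estimates:
# a central "complexity" number and the spread (sigma) of opinions.
import statistics

def read_the_room(estimates):
    complexity = statistics.median(estimates)  # typical card on the table
    spread = statistics.stdev(estimates)       # rough "sigma" of opinions
    return complexity, spread

low_risk  = read_the_room([3, 3, 5, 3])    # agreement on a small card
high_risk = read_the_room([3, 13, 21, 5])  # wide disagreement, big cards
print(low_risk, high_risk)
```

A small median with a small spread is the only case where the bell-curve reasoning holds; a large spread tells you the feature needs another discussion, not another estimate.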

If you are faced with a project that is not evolving much (and I guess this is where most of the “story points guys” are writing their studies and books), you are in a rather predictable environment, and story-point-only project planning might work well. BUT if you are faced with projects in which features exhibit a high level of skewness, you had better do things differently. One thing that strikes me most about story point articles is how they tend to plot the distribution of story points per iteration (like in the picture above), and it always comes out as a bell curve, while I would be more interested in plotting estimated vs. actual points required to finish any given story. Well, the point is, you can’t, as this is never captured. Again, you may think: why the hell would someone need that, since the global distribution comes out bell-curved anyway? That is only true if all the features follow the same distribution, and that is not the case in more complex projects! Just ask yourself: how many times have you actually seen a burn-down chart work? Really?
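If you did capture estimated vs. actual points per story, a first sanity check could look like this sketch (all the numbers are invented):

```python
# Compare estimated vs. actual points per story - the data that, as argued
# above, teams almost never capture.
stories = [  # (estimated, actual) pairs, invented for illustration
    (5, 8), (3, 3), (8, 13), (2, 2), (5, 5),
]

ratios = [actual / estimated for estimated, actual in stories]
bias = sum(ratios) / len(ratios)
print(f"average actual/estimated ratio: {bias:.2f}")
# > 1.0 means the team is optimistic, < 1.0 means pessimistic
```

Even this crude ratio per story reveals the skewness: the small stories land on target while the complex ones blow up, which is exactly what a single aggregated bell curve hides.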

By now, you probably know why most project managers are either depressed, lunatics or alcoholics: they need to deal with quite some uncertainty every single day and are still expected to always provide accurate planning. But there is a way out of this; more about it in my next blog post.