Countless books, blogs and studies have been written about the effort estimation process. Unless we talk about repetitive, so-called “blue collar” jobs, estimating the time required to complete any complex activity has proven hard to do right (these are my favorite studies: “Research suggests people underestimate numerical guesses when leaning left“, “Impact of irrelevant and misleading information on the effort estimate.”). My take on this is that the problem is not easy to solve because our work consists of solving non-linear problems (or, if you like, because estimating the effort to solve a problem is itself a non-linear problem). Before joining the big company, I had fun dealing with some of these problems in another field, and to my mind, from the early days of working as a developer in a dev team, everything that had to do with planning and effort estimation resembled the simulated annealing process, where converging to the global maximum sometimes requires taking random (not to say wrong) steps. In my previous blog, I talked about the “Cone of Uncertainty” that should shrink over time, which in a way is similar to the idea of cooling in the simulated annealing algorithm.
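For the curious, the annealing analogy can be made concrete with a toy sketch (my own illustration, nothing from the studies above): a hill climber that, while “hot”, sometimes accepts a worse position on purpose, and gets pickier as the temperature cools – much like a plan that tolerates wrong turns early and converges later.

```python
import math
import random

def simulated_annealing(f, x0, temp=10.0, cooling=0.95, steps=500):
    """Maximize f, sometimes accepting worse moves while the temperature is high."""
    random.seed(42)  # deterministic for the example
    x, best = x0, x0
    for _ in range(steps):
        candidate = x + random.uniform(-1, 1)  # a random (not to say wrong) step
        delta = f(candidate) - f(x)
        # Always accept improvements; accept regressions with probability
        # exp(delta / temp) -- likely while hot, rare once cooled down.
        if delta > 0 or random.random() < math.exp(delta / temp):
            x = candidate
        if f(x) > f(best):
            best = x
        temp *= cooling  # cooling: the "Cone of Uncertainty" shrinking
    return best

# A bumpy objective: a greedy climber can get stuck on a local bump,
# while annealing tends to wander out of it.
f = lambda x: -(x - 2) ** 2 + 1.5 * math.sin(5 * x)
print(simulated_annealing(f, x0=-5.0))
```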
I went to this length explaining my “academic view” on this because I would like you to remember two things. First, if you ever find yourself reading about, or trying to find, a magic formula for effort estimation – you are wasting your time. Techniques such as “function point analysis” have been invented by folks who never wrote a single line of code. Secondly, the fact that estimation is not a simple thing to do doesn’t mean you should not do it; I am only saying: whatever you do, be careful about the data you get out of this process.
Many agile gurus, like Craig Larman, prefer to talk about the need for trust and transparency between stakeholders and R&D, in order to make the project planning process more transparent and more engaging. Moreover, as the success or failure of the project is often solely the responsibility of the project manager after the initial roadmap phase (which is wrong in many ways), discussions about effort estimation become the central point of project planning – generating mistrust between R&D and the rest of the community at an early stage of the project. WRONG!!
First of all, as mentioned in my previous blog, project roadmap planning must first be about the product strategy and the activities required to make the release a success. Once that is clear, and the first few iterations are completed, people will have a better idea of what is probably achievable in the current release. Before that phase is completed, any estimation or feedback from R&D must be taken with caution. At the very beginning of the project, the focus of the team, including R&D and the PO, should not be on effort estimation, but rather on understanding the requirements, the complexity of the problem and the different paths that can lead to a solution. Once this is clear, we are ready for the planning phase.
But before I explain how I do it (in the end, this is a blog about agileME), I still feel like introducing a little bit of methodology, sorry for that. When it comes to getting a number out of the estimation process, there are today two main schools of thought: you either estimate in time, or in story points.
Some time back, Mike Cohn came up with a story point planning strategy: check out his book, the video of his Google talk, or the slides. In short, you first go through an iterative estimation strategy called planning poker (the “poker game”). Here are the steps:
- Each estimator is given a deck of cards, each card has a valid estimate written on it (Fibonacci series)
- Customer/Product owner reads a story and it’s discussed briefly
- Each estimator selects a card that’s his or her estimate
- Cards are turned over so all can see them
- Discuss differences (especially outliers)
- Re-estimate until estimates converge
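The loop above can be sketched in a few lines of code (a toy model of mine, not anything from Mike’s book): raw gut-feeling estimates get snapped to the nearest card, and a large spread flags that another discussion round is needed.

```python
# Card deck with valid estimates (Fibonacci-style values).
DECK = [1, 2, 3, 5, 8, 13, 21]

def nearest_card(estimate):
    """Snap a raw gut-feeling estimate to the closest card in the deck."""
    return min(DECK, key=lambda card: abs(card - estimate))

def poker_round(raw_estimates):
    """One round: everyone shows a card; a big spread means 'discuss and redo'."""
    cards = [nearest_card(e) for e in raw_estimates]
    converged = max(cards) <= 2 * min(cards)  # crude convergence rule (my assumption)
    return cards, converged

print(poker_round([2.6, 3.5, 12]))  # -> ([3, 3, 13], False): discuss the outlier
print(poker_round([5, 5, 8]))       # -> ([5, 5, 8], True): close enough, move on
```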
Finally fill the backlog with tasks, subtasks and story points. Once you have done so, as Mike says, you continue like this:
- Determine how many iterations you have
- Estimate velocity as a range (number of story points team can finish in one iteration)
- Multiply low velocity × number of iterations – These are “Will Have” items
- Multiply high velocity × number of iterations and subtract from this the number of “Will Have” story points – these are the “Might Have” items
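That arithmetic is trivial, but worth making explicit; here is a minimal sketch (the velocities and iteration count are made-up numbers):

```python
def release_forecast(low_velocity, high_velocity, iterations):
    """Split the backlog into 'Will Have' and 'Might Have' story point budgets."""
    will_have = low_velocity * iterations                 # almost surely done
    might_have = high_velocity * iterations - will_have   # done if all goes well
    return will_have, might_have

# E.g. a team finishing 20-28 points per iteration, over 5 iterations:
print(release_forecast(low_velocity=20, high_velocity=28, iterations=5))
# -> (100, 40): the top 100 backlog points are "Will Have",
#    the next 40 are "Might Have"
```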
Picture taken from Mike Cohn’s presentation on the iteration planning
It sounds simple, but there are a few “gotchas” that have hit every single scrum team I have seen so far:
- it makes people focus too much on poker games too early in the project (you miss the big picture).
- it feels like project management is all about story point estimation and simple counting (to be frank, a few agile folks warn about this trap, but still it happens way too often)
- if you base your planning only on the total number of story points, then you need to estimate every single little thing in story points. I hate this, and it goes against the Pareto rule from the previous blog. Moreover, you may wonder whether to estimate “framework things”. Then you have academic discussions on whether to estimate refactoring, etc. – endless. Before you know it, people spend weeks doing estimation without doing any real work, contrary to everything agile should bring. If you have an inexperienced scrum master who needs to provide a roadmap plan, you will be doing poker games like crazy and end up with 800 stories in your backlog before you even start doing anything (the flat backlog blackout syndrome)
- people don’t learn from their own estimations. As mentioned earlier, estimations are always wrong, but if you estimate in time, at least after a while you know whether you are a pessimist or an optimist.
- you can’t make much out of the story points if the team is new to this
- we are told that story points are about the complexity of a feature, BUT everyone has the time dimension in mind anyway (strange how few people are willing to admit this, I don’t know why). For fun, check out the TED video “Why we think it’s OK to cheat and steal (sometimes)” by Dan Ariely, and pay attention to the part of the experiment which uses tokens as an intermediate currency before they are exchanged for real money.
- if someone tells me that he needs 5 points to make something, I have no clue when this stuff will be done; moreover, the guy doesn’t feel like making any commitment to the rest of the team. (You may wonder how this fits with my earlier statement about avoiding the “commitment from R&D only” situation towards the stakeholders.) This is different: it is about a guy making his best guess at how much time it is going to take him to finish a task within one iteration. If there is no time information present, you never know whether the guy is in trouble during the development, or whether he knows what he is talking about, and, if you like, it is hard to make peer pressure work (I am a hard core XP guy on this, to be frank).
Now, you may wonder why project managers like story point planning. I think it is for the same reason they liked waterfall: it gives them a fake feeling of having things under control once the numbers are in. As if by magic, story points remove all the uncertainty of making the project a success.
If by now you think that I am not using story points at all, you would be wrong :). I do use them, but in a different way. First of all, a story point discussion is a very good way of discussing complexities – indeed, it is engaging. It also provides people with a “token platform” to discuss different aspects of the problem, and that level of indirection, at that point in time, is a good thing. The only real question is what you do with these discussions and the numbers that come out of them. Here is a little tip: while listening to the discussions, I am interested in two things:
- How much opinions differ
- How complex the feature is
The complexity number gives you an idea about the nature of the probability distribution of the estimate for a given feature, while the spread of opinions gives you the range (or sigma) of the “Bell Curve” – if, and only if, the complexity appears to be low (only in that case can we talk about a bell curve).
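As a toy illustration (the estimates here are invented by me), the spread can simply be read off as the standard deviation of the cards on the table – but, as said, only trust it when complexity is low:

```python
import statistics

def estimate_stats(estimates):
    """Mean and sigma of the team's estimates for one feature."""
    return statistics.mean(estimates), statistics.stdev(estimates)

low_complexity = [3, 3, 5, 3]    # opinions agree: a narrow, trustworthy sigma
high_complexity = [2, 8, 21, 5]  # opinions diverge: the sigma means little

print(estimate_stats(low_complexity))   # -> (3.5, 1.0)
print(estimate_stats(high_complexity))
```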
If you are faced with a project that is not evolving that much – and I guess this is where most of the “story points guys” are writing their studies and books – you are in a rather predictable environment, and story-point-only project planning might work well. BUT, if you are faced with projects in which features exhibit a high level of skewness, you had better do things differently. One thing that strikes me the most about story point articles is how they plot the distribution of story points per iteration (like in the picture above) – and always come up with a bell curve – while I would be more interested in plotting estimated vs. actual points required to finish any given story. Well, the point is, you can’t, as this is never captured. Again, you may think: why the hell would someone need that, since the global distribution comes out bell curved anyway? That is only true if all the features follow the same distribution, and that is not the case in more complex projects! Just ask yourself, how many times have you actually seen a burn down chart working? Really?
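If I were to capture it, the bookkeeping would be as simple as this sketch (all story names and numbers are invented): record estimated and actual points per story, and look at the ratio per feature rather than at the flattering global curve.

```python
# Per-story record of estimated vs. actual story points (invented data).
stories = [
    ("login form",     3, 3),
    ("search rewrite", 5, 13),  # the skewed kind that the global curve hides
    ("css cleanup",    2, 1),
]

for name, estimated, actual in stories:
    ratio = actual / estimated
    print(f"{name}: estimated {estimated}, actual {actual}, ratio {ratio:.1f}")
```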
By now, you probably know why most project managers are either depressed, lunatics or alcoholics: they need to deal with quite some uncertainty every single day, and still always provide accurate planning. But there is a way out of this – more about it in my next blog post.