Agile Project Management – the project follow-up

XP was the first development framework in which customer feedback was embedded in the complete project lifecycle (see Kent Beck’s books, Ron Jeffries’ blog and articles, or check this list). In the picture below, you can find the “one slide explains it all” that I often used back in 2002–03, when people were asking me about the “extreme stuff” we were doing:

(Side note: I can’t remember anymore where I found this drawing.)

User story: “As a <role>, I want <goal/desire> so that <benefit>”

Early on, I noticed a few interesting things about user stories:

  • First, it is an excellent way of introducing the customer into the picture, similar to the “elevator test” idea – and I liked that.
  • It is an excellent way of preparing demos after every iteration (and the fast feedback that comes as a result of it).
  • It is the best way to do acceptance testing (check this guy out and his blogs/books on this link) – a minimal sketch of what I mean follows this list.
  • It is not always that great as a tool for project/release tracking (one interesting observation: in the XP days we used to talk about a “tracker” role for someone responsible for story/planning tracking – later replaced by my favorite term, “scrum master”).
  • If the feature is rather complex, simple story splitting is not the best way to capture the work that needs to be accomplished.
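
To make that acceptance-testing point a bit more concrete, here is a minimal sketch of how a story in the template above can be turned into an executable acceptance test. The shopping-cart story, the ShoppingCart class and its API are invented purely for illustration; any pytest-style runner would pick up the test function.

```python
# A minimal, hypothetical example: the story
# "As a returning customer, I want to see my order total
#  so that I know what I will pay before checkout."
# expressed as an executable acceptance test.

class ShoppingCart:
    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)


def test_returning_customer_sees_order_total():
    cart = ShoppingCart()
    cart.add("book", 12.0)
    cart.add("coffee", 3.5)
    assert cart.total() == 15.5   # the <benefit>: the customer knows what to pay
```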

While working on my first XP projects, I was a fanatic about data gathering – everything you can think of. We estimated tasks in days, hours, even minutes. We were all booking 8 hours a day – no exceptions. We would book coffee time, meetings, coding, demos… I kept doing this for years. No matter what we were doing, or who was doing it, I kept arriving at similar observations:

  • 30% of all time in a project is “wasted” on meetings, coffee breaks and kick-offs (one more reason why I got so “freaky” about making agile meetings more efficient).
  • Almost all small stories were implemented on time or faster.
  • Most of the big stories (or epics, as they like to call them these days) and some small features showed a strange distribution, and this is the “funny part”: the deviation ranged from 20–30% to about 3–4 times the original effort (in both directions, although the majority of estimations proved to be too optimistic). It didn’t necessarily correlate with any point in time of the project, or with how well the original estimation/analysis was done. Just one more thing for the record: I’ve had the pleasure of working with some of the most experienced developers you can imagine, which makes these graphs even more interesting (see the small sketch right after this list).
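
If you wonder what that looks like in practice, here is a minimal sketch with invented numbers (not the real project data) of the kind of ratio I was computing for every story:

```python
# A minimal sketch with made-up numbers: compare booked hours against the
# original estimate per story, the way the observations above were collected.
stories = [
    # (name, estimated_hours, actual_hours)
    ("small story A", 8, 7),
    ("small story B", 16, 16),
    ("big story / epic", 80, 260),   # the "funny part": ~3x the estimate
    ("small story C", 12, 15),
]

for name, estimate, actual in stories:
    ratio = actual / estimate
    verdict = "on time or faster" if ratio <= 1.0 else f"{ratio:.1f}x the estimate"
    print(f"{name:18s} -> {verdict}")
```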
Here is an example of two releases:

That was at the time when I didn’t bother reading about the “Cone of Uncertainty” or the “Iron Triangle”. I was more like a chef in the kitchen, tweaking ideas rather than estimations, trying to figure out the best way forward…

From then onwards, any time I see a project where the burn-down chart behaves nicely, I get suspicious. It tells me one of the following: either the estimation of all work was done with great accuracy and all development is happening smoothly (with no grey swans around), OR someone is too busy making burn-down charts look nice.

While I was occupied with my projects – planning, tracking, coding, support, bug fixing and studying the data that was in my hands – I must admit I completely missed out on the scrum movement, which was to become the dominant agile force… What surprised me at first was how much some of my conclusions differed from the slides you can find in any scrum training.

One thing I realized, and that is what I wrote in one of my previous blogs, is that trying to be too smart upfront really doesn’t work – even if you have your stories. You need to code as soon as you can, but not before that (you may hear me talking about “invisible lines” now – this is the moment when other people’s eyes start rolling). I also found another interesting thing: burn-down charts don’t work for me, but burn-up charts do the magic! Let me show you two graphs, followed by some important explanations:

(The graphs are created using the JIRA dashboard feature – these two graphs are a must for all my projects.)

  • Developers must have subtasks for everything they do, and they are free to add more tasks any time they find it necessary. Every subtask is assigned at kick-off.
  • At the beginning, all FTPETs are estimated in either days or weeks.
  • If an FTPET is deemed complex, we address it early on (sometimes even before M0). During this process we do a light analysis, which results in a set of subtasks. These subtasks are then estimated separately.
  • About 15% of the effort is kept for refactoring. This is necessary to keep the code clean and robust in the long term. Oh yes, I hear you saying: refactoring should always happen and it should not be given a “task”. Well, I am saying the same, with one exception – I know that people will not estimate this effort when they estimate their work. I don’t really know why, but that’s the way it is. So I book it anyway (some nasty readers might call it my secret buffer – at least it is not that secret any more 🙂 ).
  • All technical challenges are addressed in separate “spikes”, and most of the time they are not part of any release planning.
  • Estimations are only about the work; they don’t include any overhead (meetings, maintenance, holidays, coffee time…).
  • Since R&D logs 8 hours every day, information about overhead and other non-dev tasks can be taken into account in future planning (about 30% of R&D effort goes to overhead).
  • The team is closing subtasks at a steady pace – follow the green line (interesting note: subtasks are not all of the same size, so why do they look like they are? This is an XP trick: every item should be finished within one or two iterations, not more than that). Since my iterations are normally 1 week, it turns out that the “story point” size in this graph is, on average, about 3–4 days! Multiply this by the number of dev guys and… hey, you have your velocity?!! Sounds like scrum, no?
  • Some FTPETs and other big features are divided into subtasks only later in the project (we have a “good idea” of how much they will take, but no time to split them all upfront – the Pareto rule). You can see that by following the red line (a small sketch of how both lines come out of the subtask data follows this list).
  • The number of subtasks doubled between M1 and GA, but all FTPETs were closed on time.
  • Most of the FTPETs were known to the R&D team 3–4 months prior to M0.
  • Any time you add something, you make sure nothing else is broken (automation, TDD, whatever). That helps you answer the major question better: are we hitting the date, or the feature content?
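
For what it is worth, here is a minimal sketch of how those two lines come out of plain subtask data. This is not the JIRA gadget itself; the dates and the data structure are made up for illustration.

```python
# A minimal sketch (not the JIRA gadget): derive the red line (subtasks created)
# and the green line (subtasks closed) from plain subtask data. All dates are invented.
from datetime import date

subtasks = [
    # (created, closed or None if still open)
    (date(2013, 1, 7),  date(2013, 1, 11)),
    (date(2013, 1, 7),  date(2013, 1, 18)),
    (date(2013, 2, 4),  date(2013, 2, 8)),    # split off an FTPET later on
    (date(2013, 2, 18), None),                # still open
]

def burn_up(subtasks, day):
    created = sum(1 for c, _ in subtasks if c <= day)         # red line
    closed = sum(1 for _, d in subtasks if d and d <= day)    # green line
    return created, closed

for week_start in (date(2013, 1, 7), date(2013, 1, 21),
                   date(2013, 2, 11), date(2013, 3, 4)):
    red, green = burn_up(subtasks, week_start)
    print(week_start, "created:", red, "closed:", green)
```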

So, what do you do with these two graphs, and how do I do the project follow-up in 5 minutes every day? It is rather simple:

You monitor the list of new subtasks that come in after M1 – this is the red line (at M0 you should have about 30–50% of the subtasks identified – check the previous post). The curve should start slowing down after a while. If it doesn’t, you are in trouble; the good news is – you know it.

By looking at the green line, you only see that people are working, and you can make a “projection” in your head of when it should meet the red line – but again, this is not the same as working with a burn-down chart. A burn-down chart makes managers freak out, and instead of being focused on the work, they spend a big effort trying to make it look nice. Moreover, it makes them feel like they need a better estimation process next time – the trap I talked about in this blog: Estimation Fallacy and Estimation (only) Driven Project Management. With the burn-down chart, ideally, the red line stays constant – since it expresses the cumulative effort of that release. In my case, it only represents the new subtasks that are created over time. I can stay focused on adding value, I am flexible to add and remove things as long as we are all on the same page (PO, me and the team) during the project lifecycle, and it is brutally transparent – to everyone.
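
The “projection in your head” is really just back-of-the-envelope arithmetic. Here is a minimal sketch of it, with all numbers invented:

```python
# A minimal sketch of the mental projection: when does the green line (closed)
# catch up with the red line (created)? All numbers are made up.
total_created = 180      # red line today: subtasks created so far
total_closed = 130       # green line today: subtasks closed so far
closed_per_week = 12     # recent closing pace
new_per_week = 2         # the red line should be slowing down by now

remaining = total_created - total_closed
net_pace = closed_per_week - new_per_week
if net_pace <= 0:
    print("New subtasks appear faster than you close them – you are in trouble (but at least you know it).")
else:
    print(f"Roughly {remaining / net_pace:.1f} weeks until the green line meets the red one.")
```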

Good, this was a long blog post, and by now you should know how you can do the project follow-up in 5 minutes. But be cautious: project management is not about task follow-up ONLY! In my next blog, which I might call “How many balls do you think you can juggle?”, I will be talking about some other aspects of project management during the development phase. Stay tuned.

 

About Veselin Pizurica

Veselin Pizurica is an entrepreneur, CTO and founder at http://www.waylay.io/, a cloud software company that empowers enterprises and the public sector to benefit from the Internet of Things. waylay enables smarter real-time decisions, proactive monitoring and automation based on integration with existing line-of-business applications. Veselin has more than 15 years of experience in software product development (Agile Development, Lean Startup Movement). He has designed and implemented products in various domains, such as Cloud Computing, Semantic Web, Artificial Intelligence, IoT/M2M, Signal and Image Processing, Pattern Recognition, Home Networking, Mobile Management, MPLS, xDSL and Fiber technologies. Skills: project/product management, people management, agile development, telecommunications, SW platforms, cloud computing technology, coding in Java/Ruby/Python/JavaScript/HTML5. Veselin is also the author and co-author of 12 patent applications in the domain of DSL, home networks and cloud computing.
