Agile Project Management – the project follow-up

XP was the first development framework in which customer feedback was embedded in the complete project lifecycle (see Kent Beck’s books, Ron Jeffries’ blog and articles, or check this list). In the picture below you can find the “one slide explains it all” that I often used back in 2002–03, when people were asking me about the “extreme stuff” we were doing:

(Side note: I can’t remember anymore where I found this drawing.)

User story: “As a <role>, I want <goal/desire> so that <benefit>”

Early on, I noticed a few interesting things about user stories:

  • First, they are an excellent way of introducing the customer into the picture, similar to the “elevator test” idea – and I liked that.
  • They are an excellent way of preparing demos after every iteration (and getting the fast feedback that comes as a result of it).
  • They are the best way to do acceptance testing (check this guy out and his blogs/books on this link) – see the sketch after this list.
  • They are not always that great as a tool for project/release tracking (one interesting observation: in the XP days we used to talk about a “tracker” role for someone responsible for story/planning tracking – later replaced by my favorite term, “scrum master”).
  • If the feature is rather complex, simple story splitting is not the best way to capture the work that needs to be accomplished.

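To make the acceptance-testing point concrete, here is a minimal sketch of how a user story can be turned into an automated acceptance test. The story, the `search_tickets` function and the thresholds are all invented for illustration – this is the idea, not a recipe:

```python
import unittest

# Hypothetical story: "As a support engineer, I want to search tickets by
# customer name so that I can answer calls faster."
def search_tickets(tickets, customer):
    """Toy implementation, only here to make the example runnable."""
    return [t for t in tickets if t["customer"] == customer]

class SearchTicketsAcceptanceTest(unittest.TestCase):
    def test_support_engineer_can_search_by_customer_name(self):
        tickets = [
            {"id": 1, "customer": "ACME"},
            {"id": 2, "customer": "Globex"},
        ]
        result = search_tickets(tickets, "ACME")
        # The benefit in the story ("answer calls faster") is verified
        # indirectly: the right tickets come back, and nothing else.
        self.assertEqual([t["id"] for t in result], [1])

if __name__ == "__main__":
    unittest.main()
```

The nice part is that the test reads almost like the story itself, which is exactly why stories work so well for acceptance testing and demos.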
While working on my first XP projects, I was a fanatic about data gathering – everything you can think of. We estimated tasks in days, hours, even minutes. We were all booking 8 hours a day – no exceptions. We would book coffee time, meetings, coding, demos… I kept doing this for years. No matter what we were doing or who was doing it, I always arrived at similar observations:

  • About 30% of all time in a project is “wasted” on meetings, coffee breaks and kick-offs (one more reason why I got so “freaky” about making agile meetings more efficient).
  • Almost all small stories were implemented on time or faster.
  • Most of the big stories (or epics, as they like to call them these days) and some small features showed a strange distribution, and this is the “funny part”: the actual effort ranged from 20–30% off to about 3–4 times the original estimate (in both directions, although the majority of estimates proved to be too optimistic). It didn’t necessarily correlate with any point in time in the project, or with how well the original estimation/analysis was done. Just one more thing for the record: I’ve had the pleasure of working with some of the most experienced developers you can imagine, which makes these graphs even more interesting.
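To give an idea of how I crunched this kind of data, here is a tiny sketch of the ratio calculation behind that observation. The numbers below are invented; the real data lived in our time-tracking logs:

```python
# Hypothetical logged data: (estimated, actual) effort in hours per story.
logged = {
    "small story A": (8, 7),
    "small story B": (16, 18),
    "big story C":   (40, 130),   # ~3x over the original estimate
    "big story D":   (80, 60),    # finished earlier than estimated
}

for name, (estimated, actual) in logged.items():
    ratio = actual / estimated
    print(f"{name:14s}  estimated={estimated:4d}h  actual={actual:4d}h  ratio={ratio:.2f}")
```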
Here is an example from two releases:

That was at a time when I didn’t bother reading about the “Cone of Uncertainty” or the “Iron Triangle”. I was more like a chef in the kitchen, tweaking ideas rather than estimations, trying to figure out the best way forward…

From then onwards, any time I see a project where the burn-down chart behaves nicely, I get suspicious. It tells me one of the following: either the estimation of all work was done with great accuracy and all development is happening smoothly (with no grey swans around), OR someone is too busy making burn-down charts look nice.

While I was occupied with my projects – planning, tracking, coding, support, bug fixing and studying the data in my hands – I must admit I completely missed out on the scrum movement, which was to become the dominant agile force… What surprised me at first was how much some of my conclusions differed from the slides you can find in any scrum training.

One thing I realized, and it is something I wrote about in one of the previous blogs, is that trying to be too smart upfront really doesn’t work – even if you have your stories. You need to code as soon as you can, but not before that (you may hear me talking about “invisible lines” now; this is the moment when other people’s eyes start rolling). I also found another interesting thing: burn-down charts don’t work for me, but burn-up charts do the magic! Let me show you two graphs, with some important explanations to follow:

(The graphs are created using the JIRA dashboard feature – these two graphs are a must for all my projects.)
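My graphs come straight from JIRA, but to show the idea behind them, here is a minimal sketch of the same two lines – cumulative created vs. cumulative resolved subtasks – drawn with matplotlib. All the weekly counts are invented, purely to show the shape:

```python
import matplotlib.pyplot as plt

# Invented weekly counts, just to illustrate the shape of the two lines.
weeks = list(range(1, 13))
created_per_week  = [20, 15, 10, 8, 6, 5, 4, 3, 2, 1, 0, 0]   # scope growth (red line)
resolved_per_week = [4, 6, 7, 7, 8, 8, 8, 8, 8, 7, 6, 5]      # work done (green line)

def cumulative(values):
    total, out = 0, []
    for v in values:
        total += v
        out.append(total)
    return out

plt.plot(weeks, cumulative(created_per_week), "r-", label="subtasks created (scope)")
plt.plot(weeks, cumulative(resolved_per_week), "g-", label="subtasks resolved (work done)")
plt.xlabel("week")
plt.ylabel("cumulative subtasks")
plt.title("Burn-up: the green line chases the red one")
plt.legend()
plt.show()
```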

  • Developers must have subtasks for everything they do, and they are free to add more subtasks any time they find it necessary. Every subtask is assigned at kick-off.
  • At the beginning, all FTPETs are estimated in either days or weeks.
  • If an FTPET is deemed to be complex, we address it early on (sometimes even before M0). During this process we do a light analysis which results in a set of subtasks. These subtasks are then estimated separately.
  • About 15% of the effort is kept for refactoring. This is necessary to keep the code clean and robust in the long term. Oh yes, I hear you saying: refactoring should always happen and should not be given a “task”. Well, I am saying the same, with one exception – I know that people will not estimate this effort when they estimate their work. I don’t really know why, but that’s the way it is. So I book it anyway (some nasty readers might call it my secret buffer – at least it is not that secret anymore 🙂).
  • All technical challenges are addressed in separate “spikes” and most of the time are not part of any release planning.
  • Estimations cover only the work; they don’t include any overhead (meetings, maintenance, holidays, coffee time…).
  • Since R&D logs 8 hours every day, information about overhead and other non-dev tasks can be taken into account in future planning (about 30% of R&D effort goes to overhead).
  • The team is closing subtasks at a steady pace – follow the green line (interesting note: subtasks are not all of the same size, so why do they look as if they are? This is an XP trick: every item should be finished within one or two iterations, no more than that). Since my iterations are normally 1 week, it turns out that the “story point” size in this graph is on average about 3–4 days! Multiply this by the number of dev guys and… hey, you have your velocity?!! Sounds like scrum, no? (See the capacity sketch after this list.)
  • Some FTPETs and other big features are divided into subtasks only later in the project (we have a “good idea” of how much they would take, and no time to split them all – the Pareto rule). You can see that by following the red line.
  • The number of subtasks doubled between M1 and GA, but all FTPETs were closed on time.
  • Most of the FTPETs were known to the R&D team 3–4 months prior to M0.
  • Any time you add something, you make sure nothing else is broken (automation, TDD, whatever). That lets you answer the major question better – are we hitting the date, or the feature content?
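The velocity arithmetic from the list above, written down as a small sketch. The team size and the average subtask size are illustrative, the 15% and 30% figures are the ones I quoted, and the way the shares combine is my own simplification:

```python
# Rough capacity math behind the bullets above (all numbers illustrative).
devs              = 5       # number of developers on the release
iteration_days    = 5       # my iterations are normally 1 week
avg_subtask_days  = 3.5     # subtasks end up being roughly 3-4 days each
overhead_share    = 0.30    # ~30% of logged time goes to meetings, coffee, ...
refactoring_share = 0.15    # the "secret buffer" booked for refactoring

gross_dev_days   = devs * iteration_days
feature_dev_days = gross_dev_days * (1 - overhead_share) * (1 - refactoring_share)
velocity         = feature_dev_days / avg_subtask_days   # subtasks per iteration

print(f"Gross capacity per iteration : {gross_dev_days:.1f} dev-days")
print(f"Available for feature work   : {feature_dev_days:.1f} dev-days")
print(f"Expected pace                : ~{velocity:.1f} subtasks per iteration")
```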

So, what do you do with these two graphs, and how do I do project follow-up in 5 minutes every day? It is rather simple:

You monitor the list of new subtasks created after M1 – this is the red line (at M0 you should have about 30%–50% of the subtasks identified – check the previous post). The curve should start slowing down after a while. If it doesn’t, you are in trouble; the good news is – you know it.

By looking at the green line, you only see that people are working, and you can make a “projection” in your head of when it should meet the red line – but again, this is not exactly the same as working with a burn-down chart. Burn-down charts make managers freak out, and instead of being focused on the work, they spend a big effort trying to make them look nice. Moreover, it makes them feel like they need a better estimation process next time – the trap I talked about in the blog post Estimation Fallacy and Estimation (only) Driven Project Management. With the burn-down chart, ideally, the red line stays constant – since it expresses the cumulative effort of that release. In my case, it only represents new subtasks as they are created over time. I can stay focused on adding value, I am flexible enough to add and remove things as long as we are all on the same page (PO, me and the team) during the project lifecycle, and it is brutally transparent – to everyone.
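The 5-minute projection I do in my head can also be written down. Here is a rough sketch of estimating when the green line catches the red one, assuming the scope growth keeps slowing down; the starting numbers and the decay factor are invented:

```python
# Rough "meet point" projection (all numbers invented for illustration).
open_subtasks         = 120   # red line minus green line today
closure_per_week      = 8     # current pace of the green line
new_subtasks_per_week = 3     # current pace of the red line
scope_decay           = 0.8   # assumed weekly slowdown of new-subtask creation

weeks = 0
while open_subtasks > 0 and weeks < 100:
    open_subtasks += new_subtasks_per_week - closure_per_week
    new_subtasks_per_week *= scope_decay
    weeks += 1

if open_subtasks > 0:
    print("Scope is growing faster than it is being closed - you are in trouble, but at least you know it.")
else:
    print(f"The two lines should meet in roughly {weeks} weeks.")
```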

Good, this was a long blog post, and by now you should know how you can do project follow-up in 5 minutes. But be cautious: project management is not about task follow-up ONLY! In my next blog, which I might call “How many balls do you think you can juggle?”, I will be talking about some other aspects of project management during the development phase. Stay tuned.

 

Agile Release Planning – the preparation phase

Before I start, I want you to read the following statement:

“The most important role of the PO, the project manager and the team is to create a product that someone wants to buy, from which the company can earn some money and, in return, pay you something back. In the process of creation, you need to respect the law and make sure that people are treated with respect and dignity. Anything else is a second priority.”

Some time back, I spoke about the “Iron Triangle” and how critical it is to know upfront which project levers (functionality, date or resources) are at your disposal for adjustment as you start planning. Failing to ask the stakeholders about this is a sign of immaturity or lack of confidence, or both.

Once that is clear, you look into the feature list presented by the PO at the kick-start of the project. That list can’t be longer than 10–15 items. The list must survive the so-called elevator test (from Geoffrey Moore’s classic Crossing the Chasm), which goes like this:

  • For <target customer>
  • Who <statement of the need or opportunity>
  • The <product name> is a <product category>
  • That <key benefit, compelling reason to buy>
  • Unlike <primary competitive alternative>
  • Our product <statement of primary differentiation>

These items are not “epics”; these are your FTPETs (I just made that name up now: Features That Passed the Elevator Test). For the stakeholders and the outside community, these are the only sellable items, and you need to track them separately throughout the project lifecycle, next to all other working items (stories, epics, tasks, subtasks, whatever – those terms are too technical for the rest of the folks).

In the first phase of the project (between M1 and M2 from the project milestones blog), you need to organize a workshop. For more info, I recommend the book “Practices for Scaling Lean & Agile Development“ by Craig Larman and Bas Vodde. Short version: go to a separate location with the key guys, make sure you have enough whiteboards around (and a camera to capture drawings and discussions) and do the following (extract from the book):

  • Envision business case, strategy, high-level list of features (major scope), and constraints for the release.
  • Identify ten or twenty percent of the features that deserve deep analysis immediately.
  • Choose a subset that will yield broad and deep information about the overall release work. There are always some key features that, if deeply analyzed, give you overarching information about the big picture.
  • These may be the most complex, the most architecturally influential, the highest-priority features, or features with the most obscurity. From the viewpoint of information theory, they represent a subset that, if analyzed, is most likely to have lots of surprising information—the most valuable kind (end of ref)

Once the workshop is over, create a small sketch plan on A4 paper. If you need A3, you are already in trouble. Divide it into 4 quarters (Q1 to Q4) and draw the activity diagram, similar to story mapping, but taking into account only the major items, including FTPETs and the most important non-feature tasks (whatever those are). Small tip: for god’s sake, don’t use week numbers instead of quarters; most people outside of project circles can’t actually tell you which w## they are in. It would also give the wrong impression that your precision will be in weeks – and it is not. This paper will be your basis for a discussion with the R&D folks and key architects with whom you are going to make a more detailed plan. Right now, you need this paper as a tool to check the dependencies and get first feedback from R&D. After a few sanity checks on the whiteboard, you will know whether this plan makes sense, nothing more. If people can’t tell you at least this, you either have not done a great workshop, or you are surrounded by the wrong people (a tip for the latter case: if you can’t change the team around you, change the team around you).

Depending on the team size, and whether the team is co-located or not, you will need to split development into one or more feature teams. Each feature team must have a feature owner – call it a scrum master, whatever. He should lead feature discussions with the PO (to come up with more detailed user stories), lead technical discussions with the R&D folks, and make sure that all ideas are captured in the wiki, with drawings, mockups and design choices. Here is one example of a feature mockup in the wiki:

Once the key items have passed this phase, you need to ask the scrum masters and devs to add all KEY user stories, FTPETs and major tasks in the system (in our case JIRA). By then, about 30%–50% of all features are well understood. Reminder: after the workshop you tackled 20% of the features; a few weeks later you are at 20% of the time span of the release. This is your milestone M2. Spending 20% of the time on analysis is a rule of thumb, BUT you should not exceed it by much. If time is the most important dimension, everything that has not been discussed at this stage and still looks complex to solve must be out of that release (or at least be given a very low probability of success in that release). Here is what you can do next: create a feature list with the following items:

  • Key (JIRA or something similar)
  • Summary
  • Probability
  • Rationale (Grow, Harden, Customer etc)
  • Priority (P1, P2 or P3)
  • Comment
  • Size (relative) in story points

In my previous blog, I talked about story point estimation. Now you can see how I use them (R&D can still use them for other things). The size of a feature is a story point given as a relative number. I can sum all stories together and provide a quick overview to the PO of where the majority of the effort of this release will be spent (like in the picture above, which is made by one simple wiki macro). It also provides me with a tool to give honest/professional feedback to the stakeholders about the probability of success of any given feature. Please also note that for the test and doc effort estimation I use T-shirt sizes (S, M, L, XL).

The Rationale category is important information for the PO and the stakeholders, as based on this info we can all see in one pie chart whether we are making this release as a result of the push effect (bringing new functionality that is unique to the product), the pull effect (customers asking for new features), or whether we are busy with tasks such as improved scalability, a major redesign, or refactoring (resulting from the “Demo Driven Development” syndrome).
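The wiki macro does this for me, but the idea behind the overview is nothing more than summing the relative sizes per Rationale category. A sketch with invented features (the keys and sizes are made up) could look like this:

```python
from collections import defaultdict

# Invented feature list entries: (key, rationale, size in story points).
features = [
    ("PROJ-101", "Grow",     13),   # push: new, unique functionality
    ("PROJ-102", "Customer",  8),   # pull: customers asked for it
    ("PROJ-103", "Harden",    5),   # scalability / redesign / refactoring
    ("PROJ-104", "Customer",  3),
]

effort_by_rationale = defaultdict(int)
for key, rationale, size in features:
    effort_by_rationale[rationale] += size

total = sum(effort_by_rationale.values())
for rationale, size in sorted(effort_by_rationale.items(), key=lambda x: -x[1]):
    print(f"{rationale:10s} {size:3d} SP  ({100 * size / total:.0f}% of the release)")
```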

Once you have your skeleton plan and 30%–50% of all features well understood (including the most complex ones in this bucket), you are ready to go – you have your milestone M2. Now the fun starts! Stay tuned for the next blog, it’s coming…

 

P.S. Comment on the first paragraph:

When I became a manager some years back, I was told that I would be measured by “whether the project is delivered on time, within the agreed budget and with all features implemented”. I believe now, as I did at that time, that this is the worst possible message (and metric) you can ever give to a young manager.