Tuesday, April 7, 2009

Huitale Kanban

First, I want to thank Henrik for publishing ScrumVsKanban, as it helps me describe our way of writing software and getting stuff done. Hopefully we can co-operate at some point to make this all a bit clearer to the community, as there obviously is some interest.

I need to stress that I am in no way saying that Scrum is bad or that you should not do it; I have seen Scrum working in various teams already, but found Kanban more appropriate for Huitale's internal development :)

We stopped sprinting and doing Scrum after 26 sprints at Huitale. Here are some of the "whys":
  • We felt that splitting stories is artificial
  • Estimating sucks, even though it is 20% correct
  • What do we need iterations for?
  • We can release every day; the process needs to serve us, not vice versa
  • What if we have subcontractors and things just take more than 2 weeks?
  • Stories are a bit weak; all the teams I have worked with have had trouble with stories (sure, you can do something else in Scrum as well...)
  • Splitting is about focusing, but it is also about losing some information (see pitfall)
Iterations
  • YES, we need them for reflective improvement (do we need to wait?)
  • NO, we do not need them for demo (why wait?)
  • NO, we do not need them to split stories (leads to problems)
So we do Kanban instead. By Henrik's description we are "Kanban team #3".

“We trigger a planning meeting whenever we start running out of stuff to do. We trigger a release whenever there is a MMF (minimum marketable feature set) ready for release. We trigger a spontaneous quality circle whenever we bump into the same problem the second time. We also do a more in-depth retrospective every fourth week.”

In practice this means:

  • No sprints nor iterations
  • No sprint planning
  • No complexity estimation (no story points)
  • Flow is more important than iterations
We value getting a Minimum Marketable Feature out more than getting out multiple stories that have no meaning for the business on their own (as they are parts of something more meaningful).

Huitale Way

Here is a picture demonstrating our way.



Some acronyms explained
PBL = Product backlog
NS = Not Started
IP = In Progress

As you see, we take 7 items from the backlog to our Kanban board. Why 7? Because that is the number of items we can keep in our heads at any given time.

Here is a picture of our actual Kanban board


The red numbers are our queue size limits, and as you see we allow two items to be In Progress. We have also defined ways to cope with bugs, track waste, etc., but I have left them out for simplicity.
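
To make the queue limits concrete, here is a minimal sketch of how WIP limits like the red numbers could be enforced in code. The class, column names, and limits are illustrative (the In Progress limit of 2 and backlog pull of 7 come from the post); this is not our actual tooling, which is a physical board.

```python
# Minimal sketch of a Kanban board that enforces queue size (WIP) limits.
# Column names and limits mirror the board described in the post.

class KanbanBoard:
    def __init__(self, limits):
        # limits: column name -> maximum number of items allowed
        self.limits = limits
        self.columns = {name: [] for name in limits}

    def add(self, column, item):
        # refuse to exceed the column's WIP limit
        if len(self.columns[column]) >= self.limits[column]:
            raise ValueError(f"WIP limit reached for '{column}'")
        self.columns[column].append(item)

    def move(self, item, src, dst):
        # check the destination limit before touching the source column
        if len(self.columns[dst]) >= self.limits[dst]:
            raise ValueError(f"WIP limit reached for '{dst}'")
        self.columns[src].remove(item)
        self.columns[dst].append(item)

board = KanbanBoard({"Not Started": 7, "In Progress": 2, "Done": 100})
board.add("Not Started", "MMF: user signup")
board.move("MMF: user signup", "Not Started", "In Progress")
```

The point of the limit check is that the board itself pushes back: you cannot start a third item while two are In Progress, so the bottleneck becomes visible immediately.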

Here is an (imaginary) example of a Minimum Marketable Feature (the yellow card on the previous Kanban board).


We add data to the MMF as it moves across the board. In addition to the board and the item itself, we have a so-called Engineering Board where we keep all the relevant data per In Progress item.

How do we demo?

We have applied Scrum-like demos, but we do a demonstration per story. If the story is accepted by the Product Owner, it will be deployed to production the next day, thanks to our capability to release every day.

What metrics do we follow?

We gather the following metrics:
  • Kanban board cycle time (from Not Started to Done per MMF)
  • Overall cycle time (from Idea to Done)
  • Number of defects
  • Waste
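
The two cycle times above fall out of simple timestamps on each item. Here is an illustrative sketch; the field names and dates are assumptions for the example, not our actual tracking format.

```python
from datetime import date

# Sketch: computing the two cycle times named above from per-item
# timestamps. Field names and dates are made up for illustration.

def cycle_time_days(start, end):
    return (end - start).days

item = {
    "idea": date(2009, 3, 1),          # idea logged
    "not_started": date(2009, 3, 20),  # pulled onto the Kanban board
    "done": date(2009, 4, 10),         # accepted and released
}

# Kanban board cycle time: Not Started -> Done
board_cycle = cycle_time_days(item["not_started"], item["done"])
# Overall cycle time: Idea -> Done
overall_cycle = cycle_time_days(item["idea"], item["done"])
```

The gap between the two numbers is interesting in itself: it shows how long an idea waits in the backlog before anyone touches it.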

How do we plan or even roadmap?


Based on our cycle times we get average wait times per MMF (we call it the Disneyland Wait Time, as it won't be precise). From there we can tell how long it takes to get stuff done. To get something done faster, our Product Owner simply reprioritizes. All data is naturally empirical, and currently our wait time is 3 weeks. So 7 items in Not Started means the 7th item will be done after 21 weeks (7 x 3) if In Progress is empty.
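
The Disneyland Wait Time arithmetic is simple enough to write down. This sketch is purely the back-of-the-envelope calculation from the paragraph above (3 weeks per MMF, items finishing one after another, In Progress empty); the function name is mine.

```python
# "Disneyland Wait Time": with an average of 3 weeks per MMF and an
# empty In Progress column, the Nth item in the Not Started queue
# finishes after roughly N * 3 weeks. An estimate, not a promise.

AVG_WEEKS_PER_MMF = 3

def expected_wait_weeks(queue_position, avg=AVG_WEEKS_PER_MMF):
    # queue_position is 1-based: 1 = the next item to be pulled
    return queue_position * avg

print(expected_wait_weeks(7))  # 7th item in the queue -> 21 weeks
```

This is all the Product Owner needs for prioritization: moving an item to position 1 means it ships in about one average cycle time.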

Previously we also tried some sizing based on t-shirt sizes (S, M, L, XL). Each size was tracked (cycle time for each size adjusted based on empirical data) and the sizes were relative (3xS = M, 3xM = L, 3xL = XL). I would recommend new teams start with sizing and then see if it works for them.
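
The relative sizing above composes multiplicatively: each size is three of the previous, so an XL is 27 S-units. A small sketch of that conversion (the factor table and function are illustrative; the per-size cycle times would be adjusted from empirical data as described):

```python
# Relative t-shirt sizing as described: 3xS = M, 3xM = L, 3xL = XL,
# so each size is 3x the previous one when expressed in S-units.

SIZE_FACTOR = {"S": 1, "M": 3, "L": 9, "XL": 27}

def estimated_weeks(size, weeks_per_s):
    # weeks_per_s: the empirically measured cycle time of an S item
    return SIZE_FACTOR[size] * weeks_per_s

print(estimated_weeks("L", 1))  # an L is ~9 S-units
```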

How do we retrospect?

I think cadence is good for reflective improvement, so we have kept the "iterations" for retrospectives. Naturally the team members are actively improving any practice "on the spot", as there is no reason to wait for the retrospective to take place in order to reflect. However, I feel that we need the pulse for retrospectives - it reminds us in case we forget to do it.


Do not hesitate to contact me (marko.taipale at huitale.com) in case this raises some questions or you have similar experiences to share. You can also drop by our office to see it for yourself - we have already had some visitors ;)

11 comments:

Ari Tanninen said...

So how and when do you have retrospectives?

It might also be interesting to include a segment on release planning (even if you don't really do it) and how you use velocity.

Marko Taipale said...

Good idea. I will add those into the original post.

About the retros. I guess we are figuring out better ways to do it at the moment, though I could say that the move from Scrum to Kanban maybe demonstrates our ability to inspect and adapt.

For the time being we are having a retrospective 4 days per week during breakfast, over some good cappuccino and pulla :) Call it a daily 30-minute retro.

Henrik Kniberg said...

Very interesting, thanks for contributing this!

agilemanager said...

Great article! Thanks for sharing!

David

Machiel said...

I would like to know how you can visualize the current productivity of the team using cycle time. Do you track the cycle time day-to-day? How do you do that? Can you see, like on a burn-down, when the team gets stuck?

Marko Taipale said...

@machiel If my team produces a feature every third day, then the productivity is "a feature / 3 days". If you want to visualize it you could use a CFD, see http://www.redmine.org/attachments/685/cumulative-flow-features.png

We track progress on items day by day, and if an item goes over its due date (we have a Service Level Agreement per size) then we mark the item (a visual aid), which also signals issues on the board.

In Kanban it is also very easy to see if the team is blocked just by looking at the queues (a bottleneck is found by looking at inventories). See http://www.gamasutra.com/view/feature/3847/beyond_scrum_lean_and_kanban_for_.php?page=4 (Leveling Workflow), where you can see a bottleneck at Level Design.

We do not recalculate cycle time every day but rather every month, as an average over the sized items.
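
A cumulative flow diagram like the one linked above just plots, per day, how many items have cumulatively entered each state. A minimal sketch with made-up daily counts (real data would come from the board history):

```python
# Sketch of the data behind a cumulative flow diagram. Each list is
# the cumulative count of items that have entered that state by the
# end of each day. The numbers are invented for illustration.

days = ["Mon", "Tue", "Wed", "Thu", "Fri"]
cumulative = {
    "Started": [3, 4, 5, 6, 7],
    "Done":    [1, 1, 2, 4, 5],
}

# WIP on each day is the vertical gap between the two curves;
# a widening gap signals a bottleneck (growing inventory).
wip = [s - d for s, d in zip(cumulative["Started"], cumulative["Done"])]
print(wip)  # [2, 3, 3, 2, 2]
```

On Tuesday and Wednesday the gap widens (nothing got Done on Tuesday), which is exactly the "look at the inventories" signal mentioned above.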

Your questions will make me blog some more, so hold your horses :)

JM said...

Can you elaborate a bit more on your "engineering board" and how exactly it relates to your process? Thanks!

Marko Taipale said...

@JM It's just a whiteboard (or a wiki page when we are not colocated) that contains all the information related to a SINGLE backlog item. The idea is to slap all the possible information about the backlog item on it so that we share the knowledge while everyone is "building the knowledge" about the item. In our case the information includes things like diagrams (UML and whatnot), screenshots, layout drawings, various possible error scenarios, etc.

Too often agile teams tend to restrict themselves to post-its only, so this is an explicit way of communicating "you can do more than post-its" and "you should be sharing information once you discover it". :o)

İnanç Gümüş said...

Hi Marko,

Thanks for sharing your thoughts.

I wonder, do you have more metrics? I would also like to know about the waste metric you listed. How do you calculate it?

Thanks

Marko Taipale said...

@İnanç

I have listed some metrics in a more recent post:
http://huitale.blogspot.com/2010/03/huitale-way-our-value-stream-map.html

We have tons of metrics for the code, but I am not sure if you are looking for those.

Waste metrics depend on the form of waste.

A few definitions for the types of waste:
http://en.wikipedia.org/wiki/Lean_software_development
http://en.wikipedia.org/wiki/Lean_Services#The_Service_Wastes
http://en.wikipedia.org/wiki/Lean_manufacturing#Types_of_waste

Let me give you a few examples:

1) Unused features are identified by gathering usage statistics for each feature we launch. We follow up on the usage for a few months, and if there is no traction we drop the feature.

2) Delay can be measured in the form of cycle time for each "step" in our process. If a step takes longer than it "usually takes", we look into it and try to figure out (5 whys) what the problem is. If it is a special cause we might not do anything about it. If not, then we try to fix it.

3) Duplication in software is easy to measure. We use code analyzers for that as part of our Continuous Integration.

.. and so on.