Wednesday, December 21, 2011

You can tune a methodology, but you can't tuna fish (tune a fish)

Last week was our first pass at introducing a new methodology to the client teams. Prior to our rollout there was significant concern throughout the group that we were going to impose so much new process that innovation and creativity would be stifled.

I was a bit surprised by this, not only because I had been consistent in my verbal commitment to the team that we would implement only enough process to keep the whole team working and productive, but also because I figured by now they'd have seen glimpses of my deep and abiding loathing for any activity that isn't directly related to delivering quality software. If nothing else, I'd have hoped they would have noticed my highly refined sense of laziness when it comes to anything that doesn't move working software out the door.

But I digress.

In talking with the team I heard stories about a previous attempt to install Scrum as a working methodology for the team. It wasn't successful. That lack of success wasn't because of any fundamental flaw in Scrum; it was because the previous implementer didn't recognize that there was already a working methodology in place, tried to impose new process where things were already working, and failed to recognize where the existing process could use some shoring up.

What may have made a difference for our former Scrum Master was a realization that there is no such thing as a singular "correct" methodology. For every team delivering software there are nuances to the team, their environment and their problem domain that require some level of "tuning" to get their chosen methodology working just right for the team. Since I'm eager to get to the details of our "build-a-methodology" activity, I'll save my "Agile certifications only prove someone can keep their butt in a chair for three days" rant for another time.

Knowing that the team had a failed methodology adoption under their belts (not to mention the attendant "methodology is bad" jitters), a different approach seemed to be the best course of action.

As luck would have it, I had just the thing sitting in my toolbox.

In the book "Agile Software Development" (Cockburn, 2007), Alistair describes a technique for methodology tuning. I don't want to spoil the ending for you (no, it wasn't the butler), but here are the highlights of the technique in bullet form:

1. Examine one example of each work product
2. Request a short history of the project to date
3. Ask what should be changed next time
4. Ask what should be repeated
5. Identify priorities
6. Look for any holes

Currently we have five distinct teams working in the Mobile group at Walmart Labs: four teams focused on client applications (iPhone, iPad, Android, Mobile Web) and a services team that provides functional APIs to the client teams. No single methodology is going to work for all of these teams, even if we ignore the fact that most of the teams are distributed and working within the machinery of the larger organization (no, we really didn't ignore those factors).

Methodology tuning is deceptive in that it seems to be a relatively straightforward activity. Steps 1 through 5 are easily performed by anyone with a heartbeat and rudimentary conversational skills (e.g. upper management). Step 6? Well, that's kinda the secret sauce - the step that moves this activity from a "Shu" level interaction all the way up the scale to the "Ri" level.

Let's take the "should change" elements from our exercise as an illustration. Here's the list of things that the team identified as being painful and worth avoiding:

* Different roles (QA, Dev, UXD) did not interact well
* Business drove scope creep
* Requirements not clearly stated and in some cases showed up late
* Lack of visibility into development progress
* Confusion around roles and responsibilities
* Key activities missed
* Design changes close to release

Out of this list two major patterns emerge. First, the team was not prepared for late changes and/or requirement creep. Second, there was considerable confusion over who was responsible for what, along with little visibility into overall progress towards delivery.

A novice methodologist looking over this list may conclude the following:

1) Requirements need to be more clearly stated and should not be allowed to change beyond a certain point in the development process. 

2) A clear statement of roles and responsibilities needs to be created for the team.

3) Team members need to report their current status and progress more clearly.

All of the above changes seem to address the "holes" as surfaced by the discussion with the team, but each proposed change either imposes more work on the team or reduces their ability to change their mind about what work should be done.

Contrast that with the proposed changes I worked out with the team:

1) No task performed by the team may exceed one day of effort to complete.

By reducing the maximum size of tasks that the team can perform to a single day, we create a natural tension that pushes the product team to spend more time on the requirements, both in decomposition (smaller stories = easier-to-estimate tasks) and in simply thinking more about the details of what the requirements should be (more thinking = fewer late surprises). It also discourages scope creep by exposing the cost of "that little extra" being slipped in by the business.
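To make the one-day cap concrete, here's a minimal sketch of how a planning check might flag oversized tasks. The task names, hour figures, and the script itself are hypothetical - we didn't run a tool like this, it's just an illustration of the rule:

```python
# Hypothetical sketch: flag tasks whose estimate exceeds the one-day cap.
MAX_TASK_HOURS = 8  # one working day

def oversized_tasks(tasks):
    """Return tasks that need more decomposition before they enter an iteration."""
    return [t for t in tasks if t["estimate_hours"] > MAX_TASK_HOURS]

planned = [
    {"name": "Wire up product search API", "estimate_hours": 6},
    {"name": "Redesign checkout flow", "estimate_hours": 24},
]

for task in oversized_tasks(planned):
    print(f"Decompose further: {task['name']} ({task['estimate_hours']}h)")
```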

2) During release planning the stakeholders will use risk analysis as part of organizing the work of the team. 

The most common source of significant changes late in the development process for a product is responding to bad news. By identifying the features in an application that have the highest "risk" (Risk Reduction) we can organize work in such a way that bad news (if it shows up) is delivered early, giving the team more time to come up with a plan "B".
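As a rough illustration of risk-first ordering, here's a small sketch that sorts a backlog so the riskiest features land in the earliest iterations. The feature names and risk scores are made up, and a real planning session would weigh value and dependencies as well:

```python
# Hypothetical sketch: tackle the riskiest features first so bad news arrives
# while there's still time to execute a plan "B".
features = [
    {"name": "Store locator", "risk": 2},
    {"name": "Barcode scanning", "risk": 8},
    {"name": "Saved shopping lists", "risk": 4},
]

# Highest risk first; lower-risk work fills the later iterations.
release_order = sorted(features, key=lambda f: f["risk"], reverse=True)

for f in release_order:
    print(f"{f['name']} (risk {f['risk']})")
```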

3) No iteration may have more than 50% of its work be "must have" features

Setting this rule on iteration planning has two purposes. First, it again puts pressure on the stakeholders to be clear about what "must" be delivered versus what can be cut if the team runs into difficulty, reducing the chance of scope creep showing up late (scope creep is almost always accompanied by claims of "we have to have this, can't ship without it").
Second, having this 50% buffer in place allows for mid-iteration course corrections. That's not a recommended approach in that it violates the "focus" principle, but it's a useful tool for teams still working on building trust with stakeholders - trust that they can course-correct as much as they see fit after the current iteration.
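Here's a minimal sketch of the 50% check applied to a single iteration's stories. The story titles, point values, and the idea of checking it in code are all hypothetical - in practice this was a conversation during planning, not a script:

```python
# Hypothetical sketch: warn when more than half of an iteration's work is "must have".
def must_have_ratio(stories):
    total = sum(s["points"] for s in stories)
    must = sum(s["points"] for s in stories if s["priority"] == "must")
    return must / total if total else 0.0

iteration = [
    {"title": "Checkout with gift card", "points": 8, "priority": "must"},
    {"title": "Wish list sharing", "points": 3, "priority": "nice"},
    {"title": "In-store map", "points": 5, "priority": "must"},
]

if must_have_ratio(iteration) > 0.5:
    # 13 of 16 points are must-haves, so this plan goes back to the stakeholders.
    print("Too many must-haves: renegotiate before committing to the iteration.")
```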

4) Tasks, iterations and releases all must have clearly defined rules of what "done" means.

Having clear rules about what it means to be done with a task, an iteration, or even the release greatly facilitates the process of providing visibility into the current status of a development effort. It also goes a long way towards solving the problem of different roles not interacting effectively, because in almost every case part of the definition of "done" for tasks and even iterations involves direct interaction between different roles on the project. For example, in our project an iteration isn't done until two things happen - QA approves the work performed in the iteration, and the product of the current iteration is delivered directly to stakeholders.
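One way to picture these rules is as an explicit checklist per level of work. The sketch below is illustrative; only the QA sign-off and stakeholder delivery checks for iterations come from our actual definition, the rest of the criteria are assumptions:

```python
# Hypothetical sketch: explicit "done" criteria for each level of work.
DEFINITION_OF_DONE = {
    "task": ["code reviewed", "unit tests pass"],                         # assumed criteria
    "iteration": ["QA approves the work", "delivered to stakeholders"],   # from the post
    "release": ["regression suite green", "submitted to the app store"],  # assumed criteria
}

def is_done(level, completed_checks):
    """Work at a given level is done only when every check for that level is satisfied."""
    return all(check in completed_checks for check in DEFINITION_OF_DONE[level])

print(is_done("iteration", {"QA approves the work"}))  # False - not yet delivered
```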

Contrary to appearances, the point of this compare and contrast exercise isn't to strut my agile plumage. Take any 5 agile gurus, and they'll have at least 10 different opinions about how best to address the pains of the team, 20 if you let them iterate.

The real point was to illustrate the value of having a methodology tuning tool in your agile toolbox. As stated previously, no two products and/or teams will ever be the same; why would we expect our methodology to be any different?

3 comments:

  1. Good post! I really liked your description of how you went through the methodology tuning process.
    However, there seems to be an unchallenged assumption in your post: "more/better methodology will solve the problem."

    From your description it seemed to me that you actually focused on the people aspects (communication, a common view of the risks, etc.), not the methodology.

  2. Hi Vasco,

    Actually, from my perspective, it is the people aspects that "are" the methodology. Of course this assumes a minimalist definition of methodology - something along the lines of "How a team decides to work to deliver working software".

    Perhaps you could share your perspective on what you consider the methodology to be?

    Thanks for taking the time to read and comment!

  3. Great post, but I can't find the sadism. Is it hiding behind Waldo somewhere? Perhaps if they had to survive AC for 3 days, that may be a form of torture that brings you some delight.
