Wednesday, December 21, 2011

You can tune a methodology, but you can't tuna fish (tune a fish)

Last week was our first pass at introducing a new methodology to the client teams. Prior to our rollout there was significant concern throughout the group that we were going to impose so much new process that innovation and creativity would be stifled.

I was a bit surprised by this, not only because I had been consistent in my verbal commitment to the team that we would implement only enough process to keep everyone working and productive, but also because I figured by now they'd have seen glimpses of my deep and abiding loathing for any activity that isn't directly related to delivering quality software. If nothing else, I'd have hoped they would have noticed my highly refined sense of laziness when it comes to anything not directly in service of shipping working software.

But I digress.

In talking with the team I heard stories about a previous attempt to install Scrum as a working methodology for the team. It wasn't successful. This lack of success wasn't because of any fundamental flaw in Scrum; it was because the previous implementer didn't recognize that a working methodology was already in place, imposed new process where things were already working, and failed to shore up the places where the existing process actually needed help.

What may have made a difference for our former Scrum Master was a realization that there is no such thing as a singular "correct" methodology. For every team delivering software there are nuances to the team, their environment, and their problem domain that require some level of "tuning" to get their chosen methodology working just right. Since I'm eager to get to the details of our "build-a-methodology" activity, I'll save my "Agile certifications only prove someone can keep their butt in a chair for three days" rant for another time.

Knowing that the team had a failed methodology adoption under their belts (not to mention the attendant "methodology is bad" jitters), a different approach seemed to be the best course of action.

As luck would have it, I had just the thing sitting in my toolbox.

In the book "Agile Software Development" (Cockburn, 2007), Alistair describes a technique for methodology tuning. I don't want to spoil the ending for you (no, it wasn't the butler), but here are the highlights of the technique in bullet form:

1) Examine one example of each work product
2) Request a short history of the project to date
3) Ask what should be changed next time
4) Ask what should be repeated
5) Identify priorities
6) Look for any holes

Currently we have five distinct teams working in the Mobile group at Walmart Labs: four focused on client applications (iPhone, iPad, Android, Mobile Web) and one services team that provides functional APIs to the client teams. No single methodology is going to work for all of these teams, even if we ignore the fact that most of them are distributed and working within the machinery of the larger organization (no, we really didn't ignore those factors).

Methodology tuning is deceptive in that it seems to be a relatively straightforward activity. Steps 1 through 5 are easily performed by anyone with a heartbeat and rudimentary conversational skills (e.g. upper management). Step 6? Well, that's kinda the secret sauce - the step that moves this activity from a "Shu" level interaction all the way up the scale to the "Ri" level.

Let's take the "should change" elements from our exercise as an illustration. Here's the list of things that the team identified as being painful and worth avoiding:

* Different roles (QA, Dev, UXD) did not interact well
* Business drove scope creep
* Requirements not clearly stated and in some cases showed up late
* Lack of visibility into development progress
* Confusion around roles and responsibilities
* Key activities missed
* Design changes close to release

Out of this list two major patterns emerge. First, the team was not prepared for late changes and/or requirement creep. Second, there was considerable confusion over who was responsible for what, along with a lack of visibility into overall progress towards delivery.

A novice methodologist looking over this list may conclude the following:

1) Requirements need to be more clearly stated and should not be allowed to change beyond a certain point in the development process. 

2) A clear statement of roles and responsibilities needs to be created for the team.

3) Team members need to report their current status and progress on their work more clearly.

All of the above changes seem to address the "holes" as surfaced by the discussion with the team, but each proposed change either imposes more work on the team or reduces their ability to change their mind about what work should be done.

Contrast that with the proposed changes I worked out with the team:

1) No task performed by the team may exceed one day of effort to complete.

Capping tasks at a single day creates a natural tension, pushing the product team to spend more time on the requirements, both in decomposition (smaller stories make for easier-to-estimate tasks) and in simply thinking more about the details of what the requirements should be (more thinking means fewer late surprises). It also curbs scope creep by exposing the cost of "that little extra" being slipped in by the business.

2) During release planning the stakeholders will use risk analysis as part of organizing the work of the team. 

The most common source of significant changes late in the development process is responding to bad news. By identifying the features in an application that carry the highest risk and scheduling them early (classic risk reduction), we can organize the work so that bad news, if it shows up, is delivered early, giving the team more time to come up with a plan B.

3) No iteration may have more than 50% of its work be "must have" features.

Setting this rule on iteration planning has two purposes. First, it again puts pressure on the stakeholders to be clear about what "must" be delivered versus what can be cut if the team runs into difficulty, reducing the chance of scope creep showing up late (scope creep is almost always accompanied by claims of "we have to have this, we can't ship without it").
Second, having this 50% buffer in place does allow for mid-iteration course corrections. That isn't a recommended practice, since it violates the "focus" principle, but it is a useful tool while the team is still earning the stakeholders' trust that they can course-correct as much as they see fit once the current iteration ends.

4) Tasks, iterations and releases all must have clearly defined rules of what "done" means.

Having clear rules about what it means to be done with a task, an iteration, or even the release greatly facilitates providing visibility into the current status of a development effort. It also goes a long way towards solving the problem of different roles not interacting effectively, because in almost every case part of the definition of "done" for tasks and iterations involves direct interaction between different roles on the project. For example, in our project an iteration isn't done until two things happen: QA approves the work performed in the iteration, and the product of the iteration is delivered directly to stakeholders.
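To make the mechanics concrete, here is a minimal sketch of how rules like these could be checked against a proposed iteration plan. This is not tooling we actually use, and every name in it (Story, check_iteration, the risk scores) is a hypothetical illustration; it's just one way of showing that the rules are simple enough to verify mechanically.

```python
# Hypothetical sketch: checking an iteration plan against the rules above.
# None of these names or numbers come from our real systems.
from dataclasses import dataclass, field


@dataclass
class Story:
    title: str
    effort_days: float                 # estimated effort for the task
    must_have: bool                    # stakeholder-declared "can't ship without it"
    risk: int                          # 1 (low) .. 5 (high) chance of surfacing bad news
    done_criteria: list[str] = field(default_factory=list)


def check_iteration(stories: list[Story]) -> list[str]:
    """Return a list of rule violations for a proposed iteration plan."""
    problems = []

    # Rule 1: no task may exceed one day of effort.
    for s in stories:
        if s.effort_days > 1.0:
            problems.append(f"'{s.title}' is {s.effort_days} days of effort; split it further")

    # Rule 3: at most 50% of the iteration's work may be "must have".
    total = sum(s.effort_days for s in stories)
    must = sum(s.effort_days for s in stories if s.must_have)
    if total and must / total > 0.5:
        problems.append(f"must-have work is {must / total:.0%} of the iteration; the cap is 50%")

    # Rule 4: every task needs an explicit definition of "done".
    for s in stories:
        if not s.done_criteria:
            problems.append(f"'{s.title}' has no definition of done")

    return problems


def order_by_risk(backlog: list[Story]) -> list[Story]:
    # Rule 2: schedule the riskiest work first so bad news arrives early.
    return sorted(backlog, key=lambda s: s.risk, reverse=True)


if __name__ == "__main__":
    plan = [
        Story("Checkout API spike", 1.0, must_have=True, risk=5,
              done_criteria=["QA sign-off", "demoed to stakeholders"]),
        Story("Polish cart animations", 0.5, must_have=False, risk=1,
              done_criteria=["QA sign-off"]),
        Story("Rewrite payment flow", 3.0, must_have=True, risk=4),  # violates rules 1, 3, and 4
    ]
    for issue in check_iteration(order_by_risk(plan)):
        print(issue)
```

The value isn't in the script itself; it's that rules this concrete leave very little room for the "confusion around roles and responsibilities" and "lack of visibility" items from the team's pain list.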

Contrary to appearances, the point of this compare and contrast exercise isn't to strut my agile plumage. Take any 5 agile gurus, and they'll have at least 10 different opinions about how best to address the pains of the team, 20 if you let them iterate.

The real point was to illustrate the value of having a methodology tuning tool in your agile toolbox. As stated previously, no two products or teams will ever be the same; why would we expect our methodologies to be any different?

Monday, December 5, 2011

Team building: Recruiting life in the Big City

As part of my new responsibilities with the mobile division of Walmart Labs, I am working directly with our in-house recruiter on finding talented candidates to join our group. As recruitment for our group had started well in advance of my joining the team, my first task was to sort through a relatively hefty pile of resumes to find promising candidates for a number of different roles within our team.

It wasn't long before I had two piles in front of me (figuratively speaking, no actual tree-killing stacks were made). The first pile, containing the vast majority of the resumes I had reviewed, held the rejects. The second, a meager stack at best, held the candidates interesting enough to warrant an actual conversation.

In reflecting on the two stacks, I couldn't help but apply a mental label to each. The first pile (by now leaning precariously over the trash icon on my desktop) became the "Cogs". The second became the "Players".

Cogs (to my mind) are the people in the software business who are there to do a job, draw a paycheck, and that's about the extent of it. Their resumes were an exercise in HR pre-screening hurdle jumping, and very little else. Of course it is entirely possible that there was much more to these candidates, but if so it certainly didn't shine through.

In contrast to the Cogs, the Players (think sports, not some guy in a red velvet smoking jacket) deliver. They are craftspeople, always working to improve their skills. They are engaged with their professional communities. And they make a difference wherever they work.

Frankly, I was rather surprised that I had a pile of Players in the first place. Silicon Valley is an extremely competitive market for talented individuals, and large companies that don't have a reputation for technical excellence tend not to attract this sort of candidate.

So I went looking for our team's recruiter to find out how these interesting resumes had snuck in. I had been looking forward to this conversation because the recruiter in question had handled my own recruitment, and I was interested in hearing her opinion of what it was like to bring someone like me onto the team. For the record, no, I wasn't that bad, but I definitely did not follow the game plan on my way in.

Talking with Kathleen as her "customer" was a very different experience than talking with her as her recruit. I learned very quickly that her path to the company was similar to mine - she too had been pursued by a colleague and wasn't exactly overwhelmed by the opportunity. In talking about the recruiting and onboarding process, it was clear that her level of frustration with the hiring process was even greater than mine. I had only been through it once, whereas this was her day-to-day reality.

Our first task was to adjust the screening process so that the candidates coming in were better qualified for the team we are building. More Players, fewer Cogs. At first I thought this would be as simple as changing the job descriptions to better reflect what we were doing, so I stated it as I saw it: "We are creating a true entrepreneurial space within a larger company that will be a showcase for how agile teams can deliver without becoming mired in process and red tape."

I could immediately tell by the amused/pitying look on Kathleen's face that I had said something wrong. With a shake of her head she said to me, "You do know that every single large company competing for talent out here says that exact same thing?" As soon as she said it, I realized how naive I had been in my thinking. Of course companies competing for a limited pool of technical talent would craft the most appealing message to get the attention of the community, regardless of the reality of how the company actually worked.

I had already known that recruiting for our team was going to be challenging. Much like the initial resistance felt by Kathleen and myself, the very first obstacle we would have to overcome with the type of players we were looking for was the fact that Walmart hasn't exactly been on the short list of bleeding edge mobile technology adopters. But this newest revelation made it clear that we were going to have to come up with something a bit better than a dazzlingly worded job description.

Fortunately, we do have something a bit better. Us.

More specifically, our connection to our respective communities and other players we have worked with in the past.

Let's assume for a moment that I am a representative example of the type of player we're looking for. Yeah, I know - I'm as surprised as you that I can say that with a straight face, but just work with me for a minute, will ya?

Had I seen a posting for my current position somewhere on the intar-webs, I would have spotted the corporate logo out of the corner of my eye and moved right along. This isn't an indictment of my current employer - you can't argue with success, and Walmart is as successful as you can get. But there was nothing interesting in that space for me, mainly because the place I prefer to work is one where my contribution to the success of the organization goes beyond a few percentage points on a graph somewhere. I (much like other Players) want to have a visible and lasting impact on the organization I'm working with.

I now know that the space for this sort of contribution does exist, because the people who reached out to me to join the company are cut from the same mold and have created a space that will allow Players to do what they do best. It was this personal contact from people I knew and respected that shifted my perspective on the job from "part of the machinery" to "being part of changing the game".

Moving forward, the key to success in our recruiting efforts will be this same sort of outreach to our respective communities. We will know it is working by the number of times we hear comments in the interview process like "I applied for the position because I heard about what your team is doing and I want to be a part of it."

Now that I think about it, we are already starting to hear exactly that.