Friday, February 3, 2012

A Tale of Two Burn Charts

Within the last month we've launched two teams at Walmart Labs using methodology tuning as our means of incrementally introducing Agile. The new "process" introduced for each team was more or less the same:

1) Team will work in iterations
2) Team will create tasks from the user stories allocated to the iteration, and provide work estimates for tasks based on a simple time scale: 2 hours, 4 hours, 1 day.
3) Tasks are defined as "done" when code/assets/other is checked in and any acceptance criteria for the task have been validated. The iteration is "done" when the working code is deployed to stakeholders and QA has signed off on all tasks/stories within the iteration.
4) QA will work directly with the team, recording and completing their tasks in the same manner as the rest of the team, in addition to performing acceptance evaluation as needed on other tasks.

A few differences of note between the teams: 

1) Team A opted to try out 3-week iterations; Team B went for 2-week iterations.
2) Team A had an initial training session on iteration planning separate from the actual iteration planning exercise; Team B had training (hastily) incorporated into the iteration planning.
3) Team A was primarily on-site; Team B was widely distributed.

For both teams, the stories for the iteration were derived from existing sources, namely feature lists not articulated in typical story form. This was not an oversight: the teams did not have shared experience in working with Agile-style stories, so rather than impose an additional training burden on them we collectively agreed to work with the existing resources and revisit the story issue at the next reflection session.

One of the specific "pains" the overall group was suffering was a lack of visibility into what was happening in the midst of development. This manifested itself in predictable ways: release schedule slips, lack of stakeholder knowledge of feature changes, etc. In addition to the methodology changes the teams agreed to above, I made it a point to generate daily burn charts on the tasks the teams were performing during the iteration.
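For the curious, the arithmetic behind these charts is simple enough to sketch in a few lines. The following is my own illustration (the function shape and inputs are assumptions, not the actual tracking tool we used):

```python
from datetime import date, timedelta

def burn_series(total_tasks, start, end, completions, additions):
    """Daily ideal vs. actual remaining-task counts for an iteration.

    completions/additions map a date to the number of tasks
    completed or newly discovered on that day.
    """
    days = (end - start).days
    ideal, actual = [], []
    remaining = total_tasks
    for i in range(days + 1):
        day = start + timedelta(days=i)
        # actual burn: tasks discovered mid-iteration push the line up,
        # completed tasks pull it down
        remaining += additions.get(day, 0) - completions.get(day, 0)
        actual.append(remaining)
        # ideal burn: a straight line from total_tasks down to zero
        ideal.append(total_tasks - round(total_tasks * i / days))
    return ideal, actual
```

Plotting the ideal series against the actual series, with the per-day additions along the bottom, gives you the three lines described in the chart legends.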

Rather than provide a day-by-day running commentary of the iteration, let's skip to the end to see if the butler really did do it:

Team A Task Burn Chart:
Red line denotes ideal burn rate for the iteration
Green line denotes actual team completion of tasks
Blue line on bottom of graph denotes new tasks introduced during the iteration


Team B Task Burn Chart:
Red line denotes ideal burn rate for the iteration
Green line denotes actual team completion of tasks
Blue line on bottom of graph denotes new tasks introduced during the iteration


Based on the burn charts alone, guess which team had delivered working software at the end of their iteration. Go ahead - I'll give you a minute to make your guess.

...

Ready? It was neither. 

Surprised? Don't be. Burn charts are a tool for understanding what is happening during the course of an iteration, not a predictor of success. Even experienced teams with a track record of successful delivery can fail to deliver at the end of an iteration. For teams using burn charts for the first time, the charts serve only one useful purpose: raising the team's awareness of the kinds of underlying problems a burn chart can reveal once the team has some history of delivering in iterations.

Let's take a little walking tour of what occurred during each iteration for the two teams:

Stop #1 (Team A and B)
Note that early in the iteration both teams discovered a number of tasks not previously captured: Team A added 5 new tasks, Team B added 12. Considering that the purpose of the iteration is to provide a period of focus for the team by not allowing changes to the work during that time, adding new tasks certainly seems to indicate that new features showed up during the iteration.

What actually happened was that both teams realized that tasks were missing from the stories they had agreed to deliver. As a result they added the new tasks to the iteration without realizing that they had invalidated their earlier work estimates and increased the risk that they would not deliver on time.
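To see why quietly adding tasks invalidates the plan, here's a back-of-the-envelope sketch (the planned task count and iteration length are hypothetical; the post doesn't record the real numbers):

```python
def projected_slip_days(planned_tasks, added_tasks, iteration_days):
    """Extra days implied by mid-iteration task additions, assuming the
    team keeps completing tasks at the originally planned rate."""
    planned_rate = planned_tasks / iteration_days  # tasks per day
    return added_tasks / planned_rate

# Hypothetical numbers: Team B's 12 extra tasks against a 40-task,
# 10-working-day iteration
print(projected_slip_days(40, 12, 10))  # -> 3.0 extra days
```

Three extra days against a ten-day iteration, and nobody has told the stakeholders yet. That's the risk the teams took on without noticing.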

Stop #2 (Team B)
After an initial addition of 12 tasks, Team B continued adding tasks along the way, twice peaking at 4 new tasks in a single day. What is interesting about this influx of new tasks is that the team didn't detect the problem until it showed up in the burn chart, even though they had already diagnosed it in the initial "methodology building" reflection session.

Stop #3 (Team B)
Notice that throughout the burn chart there were only three days where the net number of open tasks declined. A casual observer might interpret this as "the team isn't doing anything", but that wasn't the case here. Two factors were behind this seeming lack of progress. First, the lead developer (who had taken primary responsibility for managing task state within the tracking tool) was on vacation for a week. Second, in his absence the rest of the team continued to get work done but didn't update their progress in the tracking tool.

Stop #4 (Team A)
If you look closely at the end of the burn chart for Team A, you'll notice a small spike in new tasks two days before the end of the iteration. In this case the tasks in question were not likely to delay the completion of the iteration, but they did represent a lack of understanding of the importance of focus during the iteration. Much like Team B at Stop #2, Team A had not yet developed an aversion to changing work mid-iteration.

Stop #5 (Team A)
Look at the end of the burn chart. Notice how it concludes with 4 tasks remaining open? This wasn't a reporting oversight: 4 actual tasks were not completed at the end of the iteration, and all 4 were work needing to be performed by QA. It would be easy to claim that QA naturally lags behind development and accept the iteration as complete, but one of the rules of our newly minted methodology was that QA would perform their work alongside the other team roles, not trailing them.

End of Tour
Even though there was a huge disparity in the perceived progress of the two teams, the reality is that both teams finished the development work more or less on time. Team A had less work remaining to complete their iteration because of closer communication between Dev and QA. Team B had more work remaining because QA was not as aware of progress and started their work later. Team B also had to spend time correlating completed tasks with their relevant stories (OK, feature lists) to communicate to stakeholders what user-centric functionality had been completed in the iteration.

The moral of our story is that neither team failed - both executed against their tasks rather well, at least from the development perspective. Visibility definitely improved, and both teams gained concrete awareness of previously hidden inefficiencies (the effect of late QA involvement, the lack of visibility of progress). Did they achieve total success in all of their corrective actions? No, nor was there ever an expectation that they should. The teams did learn something useful from the changes in their work and have already applied this knowledge to their current iterations to continue the process of improvement.

Monday, January 16, 2012

Crystal Base Jumping

I've just returned from the Blue Skies Boogie, an annual skydiving event that happens every January in Mesquite, NV. For the non-skydivers out there, think of a "boogie" in the same terms as a really cool software conference, with gravity as the keynote speaker.

As is my wont, I spent some time pondering the metaphoric similarities between my chosen passions - in this case software development and skydiving. It reminded me of a topic I had intended to write about a couple of years ago but never got around to until now.

Several months ago I picked up a book on BASE Jumping - the extreme sport of jumping off of anything that isn't actually flying around in the air and living to tell the tale. Since I'm already an avid skydiver, BASE jumping seemed a pretty natural next step in my nonstop quest to torture my poor worried mother half to death (at least that's how she puts it).

BASE (Building, Antenna, Span, Earth) jumping is to skydiving as skydiving is to climbing down a ladder. Sure, both will do the job of getting you down to the ground, but there's a world of difference in the ride.

A skydive is driven by a simple metric: altitude. At specific altitudes there are actions that must be taken in order for you to survive a skydive, let alone land unharmed. The penalty for failing to perform these actions quickly and decisively is as steep as it gets.

BASE jumping is no different in that it is ruled by the same altitude metric. It also requires actions that must be taken in order to survive. What makes the critical difference between the two is the time scale in which these actions must be performed. 

On a typical skydive you may have up to a minute of freefall time before you reach the altitude where you must deploy your parachute. For a BASE jump, 7 or 8 seconds is about the maximum amount of time you get. Not a whole lot of room for dilly-dallying, if you know what I mean.

I know, I know. I promised that this post would have something to do with software development. I promise we're getting there.

A few years ago a company I was working for was entertaining the leadership of an important medical society. We had previously been discussing the possibility of licensing some of our medical content for an educational application being developed for them by a third party. We were all under the impression that the purpose of the meeting was to close a content licensing deal, and were quite surprised to hear the representative from this society apologize to us: even after a year of development, their educational application was not going to be delivered in time, thus eliminating the need for our content.

During a break in the conversation one of my peers said "What if we went ahead and built it for them? We seem to be pretty good at this web application stuff, right?". It was one of those perfect questions - the kind where you can just feel the world around you come into focus a little brighter and sharper than it was before.

The only catch was that the application had to be ready in time for a July launch, which at the time of this meeting was less than 3 months away. Now before you start thinking to yourself "This guy is a real glutton for punishment!", may I remind you to take a moment to review the name of this blog? Are you really shocked to hear that I've pointed said Agile Sadism at myself from time to time?

Anyway, it took us a little over a month to work out the contractual details of the project, leaving us with a grand total of 7 weeks left to build and deliver a working application.

OK, this is the clever bit where I tie our analogy into the story. Recall the relative metrics for skydives versus BASE jumps? If the previous team "cratered" their project after a year, how could we possibly think that we could nail our BASE jump after only 7 weeks?

Editor's note: The term "cratered" is a skydiving term used to indicate a landing where the skydiver failed to deploy a parachute. The management of this blog does not endorse this as an effective method of completing a skydive, even if you like to make a big entrance.

Successfully transitioning a methodology from a more "normal" delivery period is no more of an accident than successfully transitioning from skydiving to BASE jumping. Fortunately for us, Alistair Cockburn's Crystal provided a meticulously detailed plan for our Crystal Base Jump. Although we may be straining the limits of what text can be posted in a single blog entry, I think it is important enough to post here in its entirety for the sake of others facing similar extreme projects.

Here it is:

Put smart, experienced people in a room and get out of their way.

That's it.

Actually, that's not it. Our success was predicated on the choice of the people in the room. Although my claim to intelligence and expertise is questionable, my primary teammate Nate Jones certainly knew what he was doing.

What was most interesting wasn't how we were working, it was the rate at which we'd adapt how we were working. Our "methodology" could literally change within the space of a few hours depending on the current circumstances of the project. It is my belief that what really made the difference for us beyond our ability to deliver software is that we shared a common domain language for our methodology, allowing us to shift process with minimal signaling - usually just naming a technique would do the trick.

I know, you read all the way down here, thinking that there'd be all sorts of information on how to do your own "Crystal Base Jump" imparted. I don't want to leave you completely empty handed, so I'll close with a quick discussion of some of the key techniques we used to help achieve success. 

Customer Negation - We all have deep familiarity with the phrase "The customer is always right". Well, this isn't the case in a Crystal Base Jump. It's not that they are wrong; it's simply that we couldn't afford to commit any time to cycling on customer feedback. At the outset of the effort we identified their "without which, not" features that had to be delivered, and spent some time understanding the amount of play we had with the implementation of those features. Once we had clarity on these features we literally went dark on the customer for the remainder of the development period. The next time they saw anything related to the application was a day or two before we went live. This practice of "customer negation" was actually spelled out in the legal contract.

Trim the Tail - I first heard about this technique from Jeff Patton, who I am sure will correct me if I am improperly identifying him as the originator of the technique. Traditionally coders have a tendency to consider a given feature or set of features as an "all or none" proposition - either they are delivered at a specific level of fidelity or they aren't considered "done". 

In Trim the Tail you look at feature implementation as more of a staged effort. Initial delivery of the feature would be at the "Yugo" level: it gets the job done, but it ain't pretty. Subsequent work in the same feature area would enhance the feature(s) to the intended level of completion. In the case of our Crystal Base Jump project, not only did we have a number of user-centric features delivered in a minimal state, some elements of our technology stack were also Yugos at delivery time.

Walking Skeleton - If Trim the Tail is a tactical tool, Walking Skeleton is the strategic one. Also learned from Jeff Patton, this practice focuses on the implementation of a minimalist set of working features across the breadth (all major functional areas of the application) and depth (all layers of the technology stack).

Use of the Walking Skeleton technique gives you early visibility into the complexity of implementing the full feature set as well as early experience in how stable the technology stack is for the application. Walking Skeleton is a great insurance policy against schedule devouring features and late discovery of technology stack instability.

Customer Proxying - Customer Proxying comes into play in situations where for whatever reason the team does not have easy or frequent access to actual customers. In our case the lack of customer access was quite intentional, but it didn't mean that we didn't care about what was important to them. 

In our case team members had considerable experience with thinking in terms of customer Personas and were able to channel these personas as needed during the development process.

Incremental UI - Although this technique is in theory a natural extension of Trim the Tail / Walking Skeleton, it does bear mentioning because of the specific positive impact it had on our project. Introduced by Nate Jones, this technique is the most elegant incremental delivery of UI fidelity I've ever seen. 

It goes something like this.

1. Paper prototyping to confirm basic user story fulfillment
2. Clickable prototypes to confirm navigation and user story details
3. Incremental integration of application stack against clickable prototypes with low fidelity UI formatting
4. Pretty pixel UI formatting and UI finalization

The advantage of the technique was that from the very start we had a consistent walkthrough across the user interface that was the basis for easily demonstrating application development progress. This incremental delivery kept the UI and application stack development highly cohesive while allowing each to proceed more or less independently of each other.

To be sure, there were other techniques and tools we used in the course of our Crystal Base Jump, but these were the ones that stood out the most during the heat of battle. 

Final thoughts on Crystal Base Jumping. Don't do it. Seriously. Much like the real thing, you'll fail unless you have the right experience, skills, and team. Unlike the real thing, the price you'll pay for failure isn't death, but a failed project after such a Herculean effort is going to hurt. A lot.

Monday, January 2, 2012

Fixing New Year's Resolutions with Agile

December 31st, 2011. 11:25PM MST: 

For what seemed to be the 17 millionth time I was asked about my New Year's resolutions for 2012. Before I could summon up the energy for yet another foaming-at-the-mouth diatribe against the futility of this custom I realized that my mouth-foaming was sounding suspiciously familiar.

It's an exceedingly rare occurrence when you actually listen to your own rant. Was it the champagne that had elevated my consciousness to this rarified state? Or perhaps it was the euphoria of observing all of the lovely ladies in attendance at this party that had perfected the art and science of the "little black dress"? Whatever it was, it was working.

Here's what I heard when I actually started paying attention to my little rant:

A) New Years resolutions almost always fail
B) Why only make these life-altering changes once a year?
C) Even if you do make progress towards a resolution it is considered a failure if total success isn't achieved.
D) Resolutions are a conspiracy perpetrated by the fitness and weight loss military-industrial complex, which channels the funds gained from the brainwashed "Resolutionist" masses into advertising for fast food and big screen TV's, which in turn subvert yet more of the masses into their nefarious vicious consumerism cycle.

OK. I made that last one up. I really have to find a way to turn the channel when one of the cable stations plays a "Conspiracy Theory" movie marathon late at night.

Anyway, if you look at the first three points, they seem awfully familiar to anyone that has been on big software projects, don't they? Let's take a closer look:

1) New Year's resolutions almost always fail = Big (as in budget and/or time to delivery) software projects almost always fail
2) Why only make these life-altering changes once a year = Why limit releases to once a year or so?
3) Progress towards a resolution is forgotten if the full resolution isn't achieved = New features/improvements implemented early sit unused until the whole release is delivered

The epiphany for me isn't some new insight into the software business (I'm sure some of the repeat visitors here would be shocked if I ever come up with some new insight into the software business).

No, the epiphany is in how we deal with our New Year's resolutions. Forget this dysfunctional "all or nothing" tradition we keep torturing ourselves with. Let's take a hint from the software business and do our New Year's resolutions the Agile way.

Since we're talking about a team size of one, we can really strip Agile down to bare metal:

1) Iterate/deliver frequently
2) Reflect and adjust

Let's test this out. Say that under the old repressive regime of "Yearly Resolutions" I'd set a big goal for myself - such as learning how to become an Ultimate Pickup Artist (UPA). Although there is a strong romantic appeal to throwing myself into such a noble goal with abandon, what I'm really interested in is results. After all, what if I'm really not cut out to be an ultra-babe magnet?

With an Agile approach to my New Year's resolution of becoming a UPA, I now have an extensive toolbox of techniques and practices to bring to bear. One of the first I'd reach for is a little gem called "fail fast".

In this case, fail fast means two things. First, can I bear uttering inane pickup lines with a straight and sincere facial expression? Second, do I have what it takes to endure the wrath of women offended by lewd suggestions in pursuit of the small percentage that are either over-medicated or have otherwise taken leave of their senses to the point where they'd be smitten with a tawdry pickup line?

Right there is more than enough work for a first iteration. Assuming monthly iterations, the theme for January would be to "fail fast" and my work product would be to practice my pickup line delivery and insensitivity to criticism and/or physical assault. Heck, with a bit of careful planning I may even be able to work towards both goals at the same time.

By the end of January I'd have ample data to reflect on whether I'm socially and morally corrupt enough to be a truly stellar UPA. 

If it seems I have what it takes, I'd be ready to take my next incremental steps in February towards my goal.

If not, I'll take comfort in the fact that I didn't waste a lot of time and energy finding out that, contrary to appearances, I do have some vestige of a conscience lurking somewhere, and can instead focus on pursuing a more appropriate personal resolution, such as honing my gender sensitivity skills.

In conclusion, although you may not agree with my choice of resolutions, you can't argue with success. If you really want to make those New Year's resolutions permanent, you gotta go Agile!

Editor's note: No actual females were harmed during the manufacturing of this blog entry. Our brave volunteer test reader (Ghennipher Weeks) did suffer some emotional trauma in the line of duty, but with just a few short months of intensive therapy, she should be just fine.

Wednesday, December 21, 2011

You can tune a methodology, but you can't tuna fish (tune a fish)

Last week was our first pass at introducing a new methodology to the client teams. Prior to our rollout there was significant concern throughout the group that we were going to impose so much new process that innovation and creativity would be stifled.

I was a bit surprised by this, not only because I had been consistent in my verbal commitment to the team that we would implement only just enough process to keep the whole team working and productive, but also because I figured by now they'd have seen glimpses of my deep and abiding loathing for any activity that isn't directly related to delivering quality software. If nothing else, I'd have hoped they would have noticed my highly refined sense of laziness when it comes to doing things not directly related to delivering working software.

But I digress.

In talking with the team I heard stories about a previous attempt to install Scrum as a working methodology for the team. It wasn't successful. This lack of success wasn't because of any fundamental flaw in Scrum; it was because the previous implementer didn't recognize that a working methodology was already in place, tried to impose new process where things were already working, and failed to recognize where the existing process could use some shoring up.

What may have made a difference for our former Scrum Master was a realization that there is no such thing as a singular "correct" methodology. For every team delivering software there are nuances to the team, their environment and their problem domain that require some level of "tuning" to get their chosen methodology working just right for the team. Since I'm eager to get to the details of our "build-a-methodology" activity, I'll save my "Agile certifications only prove someone can keep their butt in a chair for three days" rant for another time.

Knowing that the team had a failed methodology adoption under their belts (not to mention the attendant "methodology is bad" jitters), a different approach seemed to be the best course of action.

As luck would have it, I had just the thing sitting in my toolbox.

In the book "Agile Software Development" (Cockburn, 2007), Alistair describes a technique for methodology tuning. I don't want to spoil the ending for you (no, it wasn't the butler), but here are the highlights of the technique in bullet form:

* Examine one example of each work product
* Request a short history of the project to date
* Ask what should be changed next time
* Ask what should be repeated
* Identify priorities
* Look for any holes

Currently we have 5 distinct teams working in the Mobile group at Walmart Labs: 4 teams focused on client applications (iPhone, iPad, Android, Mobile Web) and a services team that provides functional APIs to the client teams. No single methodology is going to work for all of these teams, even if we ignore the fact that most of the teams are distributed and are working within the machinery of the larger organization (no, we really didn't ignore those factors).

Methodology tuning is deceptive in that it seems to be a relatively straightforward activity. Steps 1 through 5 are easily performed by anyone with a heartbeat and rudimentary conversational skills (e.g. upper management). Step 6? Well, that's kinda the secret sauce - the step that moves this activity from a "Shu" level interaction all the way up the scale to the "Ri" level.

Let's take the "should change" elements from our exercise as an illustration. Here's the list of things that the team identified as being painful and worth avoiding:

* Different roles (QA, Dev, UXD) did not interact well
* Business drove scope creep
* Requirements not clearly stated and in some cases showed up late
* Lack of visibility into development progress
* Confusion around roles and responsibilities
* Key activities missed
* Design changes close to release

Out of this list two major patterns emerge. First, the team was not prepared for late changes and/or requirement creep. Second, there was a considerable amount of confusion over who was responsible for what and visibility into overall progress towards delivery. 

A novice methodologist looking over this list may conclude the following:

1) Requirements need to be more clearly stated and should not be allowed to change beyond a certain point in the development process. 

2) A clear statement of roles and responsibilities needs to be created for the team.

3) Team members need to report more clearly their current status and progress on their work.

All of the above changes seem to address the "holes" as surfaced by the discussion with the team, but each proposed change either imposes more work on the team or reduces their ability to change their mind about what work should be done.

Contrast that with the proposed changes I worked out with the team:

1) Any task performed by the team cannot exceed 1 day of effort to complete. 

By reducing the maximum size of tasks the team can perform to a single day, we create a natural tension on the product team to spend more time on the requirements, both in decomposition (smaller stories = easier-to-estimate tasks) and in simply thinking more about the details of what the requirements should be (more thinking = fewer late surprises). It also curbs scope creep by exposing the cost of "that little extra" being slipped in by the business.
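A sketch of how mechanical this check can be at planning time (the task names and the 8-hour working day are my assumptions):

```python
MAX_TASK_HOURS = 8  # "1 day of effort" on our 2h / 4h / 1-day scale

def oversized_tasks(tasks):
    """Return the names of tasks whose estimates break the one-day cap.
    `tasks` is a list of (name, estimated_hours) pairs."""
    return [name for name, hours in tasks if hours > MAX_TASK_HOURS]

plan = [
    ("build login form", 4),
    ("integrate payment service", 24),  # too big: needs decomposition
    ("write smoke tests", 8),
]
print(oversized_tasks(plan))  # -> ['integrate payment service']
```

Anything the check flags goes back to the product team for decomposition before the iteration starts, which is exactly where the extra thinking about requirements happens.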

2) During release planning the stakeholders will use risk analysis as part of organizing the work of the team. 

The most common source of significant changes late in the development process for a product is responding to bad news. By identifying the features in an application that have the highest "risk" (Risk Reduction) we can organize work in such a way that bad news (if it shows up) is delivered early, giving the team more time to come up with a plan "B".

3) No iteration may have more than 50% of its work be "must have" features

Setting this rule on iteration planning has two purposes. First, it again puts pressure on the stakeholders to be clear about what "must" be delivered versus what can be cut if the team runs into difficulty, reducing the chance of scope creep showing up late (scope creep is almost always accompanied by claims of "we have to have this, can't ship without it").
Second, having this 50% buffer in place does allow for mid-iteration course corrections. That's not a recommended approach, in that it violates the "focus" principle, but it is a useful tool for teams still working on building trust with the stakeholders that the stakeholders can course-correct as much as they see fit after the current iteration.
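Like the one-day task cap, the 50% rule is easy to check at planning time. A minimal sketch (measuring "work" by estimated hours is just one possible interpretation):

```python
def must_have_within_buffer(tasks, limit=0.5):
    """True if "must have" work is at most half the iteration's total.
    `tasks` is a list of (estimated_hours, is_must_have) pairs."""
    total = sum(hours for hours, _ in tasks)
    must = sum(hours for hours, is_must in tasks if is_must)
    return must <= total * limit

# 8 must-have hours out of 20 total: within the buffer
print(must_have_within_buffer([(8, True), (8, False), (4, False)]))  # -> True
# 16 must-have hours out of 24 total: over the line
print(must_have_within_buffer([(16, True), (8, False)]))  # -> False
```

When the check fails, the conversation goes back to the stakeholders: which of these "must haves" actually must?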

4) Tasks, iterations and releases all must have clearly defined rules of what "done" means.

Having clear rules about what it means to be done with a task, an iteration, or even the release greatly facilitates providing visibility into the current status of a development effort. It also goes a long way towards solving the problem of different roles not interacting effectively, because in almost every case part of the definition of "done" for tasks and even iterations involves direct interaction between different roles on the project. For example, in our project an iteration isn't done until two things happen: QA approves the work performed in the iteration, and the product of the current iteration is delivered directly to stakeholders.

Contrary to appearances, the point of this compare and contrast exercise isn't to strut my agile plumage. Take any 5 agile gurus, and they'll have at least 10 different opinions about how best to address the pains of the team, 20 if you let them iterate.

The real point was to illustrate the value of having a methodology tuning tool in your agile toolbox. As stated previously, no two products and/or teams will ever be the same - why would we expect our methodologies to be any different?

Monday, December 5, 2011

Team building: Recruiting life in the Big City

As part of my new responsibilities with the mobile division of Walmart Labs, I am working directly with our in-house recruiter on finding talented candidates to join our group. As recruitment for our group had started well in advance of my joining the team, my first task was to sort through a relatively hefty pile of resumes to find promising candidates for a number of different roles within our team.

It wasn't long before I had two piles in front of me (figuratively speaking, no actual tree-killing stacks were made). The first pile, containing the vast majority of the resumes I had reviewed, was the rejects. The second, a meager stack at best, held the candidates interesting enough to warrant an actual conversation.

In reflecting on the two stacks, I couldn't help but apply a mental label to each. The first pile (by now leaning precariously over the trash icon on my desktop) became the "Cogs". The second were the "Players".

Cogs (to my mind) are the people in the software business who are there to do a job, draw a paycheck, and that's about the extent of it. Their resumes were an exercise in HR pre-screening hurdle jumping, and very little else. Of course it is entirely possible that there was much more to these candidates, but if so it certainly didn't shine through.

In contrast to the Cogs, the Players (think sports, not some guy in a red velvet smoking jacket) deliver. They are craftspersons, always working to improve their skills. They are engaged with their professional community. And they make a difference wherever they work.

Frankly, I was rather surprised that I had a pile of Players in the first place. Silicon Valley is an extremely competitive market for talented individuals, and large companies that don't have a reputation for technical excellence tend to not attract these sort of candidates.

So I set off to find our team's recruiter to find out how these interesting resumes had snuck in. I had been looking forward to this conversation because the recruiter in question had handled my recruitment, and I was interested in hearing her opinion of what it was like to bring in someone like myself to the team. For the record, no, I wasn't that bad, but I definitely did not follow the game plan on my way in.

Talking with Kathleen as her "customer" was a very different experience than as her recruit. I learned very quickly that her path to the company was similar to mine: she too had been pursued by a colleague and wasn't exactly overwhelmed by the opportunity. In talking about the recruiting and onboarding process it was clear that her level of frustration with the hiring process was even greater than mine. I had only been through it once, whereas this was her day-to-day reality.

Our first task was to adjust the screening process so that the candidates coming in were better qualified for the team that we are building. More Players, fewer Cogs. At first I thought that this would be as simple as changing the job descriptions to better reflect what we were doing, so I stated it as I saw it: "We are creating a true entrepreneurial space within a larger company that will be a showcase for how agile teams can deliver without becoming mired in process and red tape."

I could immediately tell by the amused/pitying look on Kathleen's face that I had said something wrong. With a shake of her head she said to me "You do know that every single large company that is competing for talent out here says that exact same thing?". As soon as she said it, I realized how naive I had been in my thinking. Of course companies competing for a limited pool of technical resources would craft the most appealing message to get the attention of the community, regardless of the reality of how the company really worked.

I had already known that recruiting for our team was going to be challenging. Much like the initial resistance felt by Kathleen and myself, the very first obstacle we would have to overcome with the type of players we were looking for was the fact that Walmart hasn't exactly been on the short list of bleeding edge mobile technology adopters. But this newest revelation made it clear that we were going to have to come up with something a bit better than a dazzlingly worded job description.

Fortunately, we do have something a bit better. Us.

More specifically, our connection to our respective communities and other players we have worked with in the past.

Let's assume for a moment that I am a representative example of the type of player we're looking for. Yeah, I know - I'm as surprised as you that I can say that with a straight face, but just work with me for a minute, will ya?

Had I seen a posting for my current position somewhere on the intar-webs, I would have spotted the corporate logo out of the corner of my eye and moved right along. This isn't an indictment of my current employer - you can't argue with success, and Walmart is as successful as you can get. But there was nothing interesting in that space for me, mainly because the place I prefer to work is a place where my contribution to the success of the organization goes beyond a few percentage points on a graph somewhere. I (much like other players) want to have a visible and lasting impact on the organization I am working with.

I now know that the space for this sort of contribution does exist, because the people that reached out to me to join the company are from the same mold and have created a space that will allow players to do what they do best. It was this personal contact by people I knew and respected that shifted my perspective on the job from "part of the machinery" to "being part of changing the game".

Moving forward, the key to success in our recruiting efforts will be this same sort of outreach to our respective communities. We will know right away when it is working by the number of times we hear in the interviewing process comments like "I applied for the position because I heard about what your team is doing and I want to be a part of it."

Now that I think about it, we are already starting to hear exactly that.

Wednesday, November 9, 2011

Jammin'

Last night I had the pleasure of participating in a "Jazz Dialog" event conceived and hosted by Alistair Cockburn. For those that are having a hard time seeing what those two things have to do with each other, I'll attempt to 'splain.

One of the key principles of Jazz music is the element of improvisation. No two renditions of a song played by the same players will be exactly the same, because the players are not just engaged in playing the song, they are also engaged in playing with each other. It is this element of improvisation that keeps Jazz music fresh and interesting for players and listeners alike.

Imagine that you are in the audience of an impromptu Jazz jam session that has the likes of Miles Davis or Dizzy Gillespie sitting in with the group. If you are a neophyte Jazz aficionado, you may feel shifts in your emotional reaction to the music, but not really know why or how it is happening. A more seasoned Jazz fan may notice that there is some sort of interplay above or below the level of the actual music between the players, but may not be able to interpret how that interplay is affecting the evolution of the song. 

To truly understand the forces shaping the music, you'd need an expert observer who is paying attention to the players, not the music. They would notice that Dizzy seemed a bit more subdued than usual at the outset of the song, and would watch Miles repeatedly challenge Dizzy to step up his energy level by intentionally magnifying his own play. Without this expert observation of the interaction of the players, you may notice that the song started out slow and then picked up energy as it went on, without ever realizing that it was the direct result of the interplay of the players above and below the music.

So what does that have to do with dialog? I'm glad you asked. 

For over a decade now a group has been meeting in Salt Lake on a monthly basis to talk about software. The group was originally focused on discussing Object Oriented software development, but around the time I showed up in 2002 the charter of the group had shifted to discussion of all things Agile (and agile) related. Attendance fluctuates, but there is a core group of attendees that keep coming back to sit in on these conversations.

Why? After 10 years, you'd think we'd be all talked out. But yet we keep coming back, because the monthly discussions aren't just discussions, they are our "Jazz Dialog" jam sessions. And much like a good jam session, we can take the same old tunes (such as "estimation accuracy"), which seem to be in an advanced state of expired equine violence (beating a dead… you get the point), and still get something new out of the conversation.

Because it isn't the topic that is important, it is the dance of the dialog.

I had an "Aha" from the Jazz Dialog last night - it was about the implications of our roundtable "jam sessions". Three in particular stood out to me:

1) The people that keep coming back to the roundtable sessions over the years? Dialog Jazz musicians. We all love a good jam.

2) I believe that the people who come to the roundtable looking for answers to questions or challenges tend not to stick around. They subconsciously (or consciously) pick up on the fact that the dialog itself is more important than the outcome of individual discussions, and thus look elsewhere for help.

3) Multiple attempts to export the roundtable to other locations have failed over the years because they duplicate the format but not the musicians.

Of course one good "aha" leads to another, so here is my second: there is an inverse correlation between the amount of interest you have in a particular topic and your ability to look past the topic to observe the flow of the dialog. I don't think I'd be satisfied with just committing to metering my engagement with a topic in order to observe the dialog, so this means that I need to practice getting better at tracking a dialog while I'm in the middle of it.

Monday, November 7, 2011

The high cost of failing to introspect

Disclaimer: I am not authorized to speak in any way, shape or form for Walmart, Walmart.com or Walmart Labs. The opinions expressed here in this blog are entirely my own and, if history is any sort of indicator, half-baked.

I've been at my new position as Director of Engineering for the Mobile group within Walmart Labs for roughly 10 days now, which usually would mean that I'm barely qualified to find my way to the bathroom unaided. But since I'd like to make a good impression on my bosses, I've done my best to hit the ground running as fast as I can.

Right into a brick wall, as it turns out.

From the moment I walked in the door I was aware of a certain amount of tension between individual client application teams and a services team that was responsible for providing back end services for those client teams. Knowing that this was the most visible area of organizational pain within my new professional home, I shifted to investigative mode.

The first theory that I tested was that there were personality and/or work ethic issues between the teams. On the surface this seemed to be a good one to start with given that the services team had been in place prior to the leadership change that brought in our "Lean Startup" oriented management and client teams.

It didn't take long to discard this theory. When I first met with the services team, their sense of frustration and anxiety over the recent changes around them was clearly evident. But what was interesting was that their passion about getting the job done was equally evident, which meant that the real problem wasn't in that room. Since I was fresh out of theories, the next step was for me to sit with the team and watch them do their job.

It worked. 

The specific job I was most interested in watching was how the team went about handling the production deployment process. As luck would have it the team was set to do a production deployment that same night. In an effort to understand what I'd be seeing during the production deployment that evening, I sat down with one of the team members and had him outline the steps in the process for me.

An hour later we had to get another team member to come in and fill in some of the details that the first team member was unsure of. 30 minutes after that the two of them had to go get yet another guy to fill in a couple of blanks that the others didn't have enough information about. Are you starting to get the picture? 

Watching for myself, the process was every bit as complex as the description. Responsibility handoffs were "over the wall" style, meaning that the person shepherding the changes through would have to start from scratch every time a new person entered the deployment process. Scheduled event times meant little to outside teams, as there was no visibility on our side into what else was on the deployment schedule, or what was causing a deployment schedule to slip when deployments were being pushed back. Even tools created to facilitate the deployment sometimes caused more problems in the deployment than they solved.
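The tribal-knowledge problem above (three people needed just to describe the steps) is exactly what an explicit runbook addresses. As a hedged sketch only (these steps and owners are invented, not Walmart's actual process), even a trivially simple structure makes every step, its owner, and its status visible, so each handoff doesn't start from scratch:

```python
# Hypothetical sketch: a deployment runbook with explicit steps, owners,
# and status. The steps below are invented for illustration; the point is
# that "go find the guy who knows step 7" becomes something written down.

steps = [
    {"step": "freeze branch",        "owner": "release lead", "done": True},
    {"step": "build release bundle", "owner": "build eng",    "done": True},
    {"step": "stage to pre-prod",    "owner": "ops",          "done": False},
    {"step": "smoke test",           "owner": "QA",           "done": False},
    {"step": "push to production",   "owner": "ops",          "done": False},
]

def handoff_summary(steps):
    """What a newcomer to the deployment needs to know: how much is
    finished, what comes next, and who owns it."""
    remaining = [s for s in steps if not s["done"]]
    if not remaining:
        return "deployment complete"
    nxt = remaining[0]
    done_count = len(steps) - len(remaining)
    return f"{done_count}/{len(steps)} steps done; next: {nxt['step']} ({nxt['owner']})"

print(handoff_summary(steps))  # 2/5 steps done; next: stage to pre-prod (ops)
```

Nothing about this requires tooling; a shared wiki page with the same columns would have saved the ninety minutes it took three engineers to reconstruct the process from memory.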

Translation: we were losing more than 50% of the services team's productivity to the "cost" of interacting with the production release process.

What was most amazing to me was the high degree of tolerance within the larger organization of the pain of this deployment process. Now understand that I come from a much smaller team, where knowledgeable and trained people took the place of process. In a larger company we don't always have this luxury, thus the reason why the process had been created in the first place. It was clear that every single individual working within the process was working hard and doing the best they could within their specific responsibilities, but had become accustomed to the fact that this was how things worked, and it wasn't going to change. In some cases they didn't even seem to recognize this process as being "painful" from an organizational perspective. Speaking as the "Agile Sadist", I was surprised at the amount of organizational and individual professional pain that was being endured.

You're probably thinking that my next question would be "How could this have happened?". Surprisingly, it isn't. I am sure that the actual story of how the current status quo evolved would be interesting and informative, but I believe that the actual cause is far easier to diagnose - a lack of sufficient introspection.

Introspection allows you the opportunity to review what is and isn't working, and to make changes based on that information. It is one of Alistair's core principles of the Crystal family of methodologies, and rightly so. It provides a thoughtful change agent, one that is based on the first-hand knowledge and experience of the team that is performing the work.

Without introspection there is no good way to measure whether changes are needed in the first place, or whether those already made are working to address the original issue. In the case of our production release process you can discern a number of causative elements if you look closely enough, but the changes implemented to address them in most cases not only failed to solve the original issue, but actually spawned new issues by their introduction.

Again, understand that this is not a rant against how bad production deployment seems to be at the 'ol workplace. It has certainly worked for them to this point, and allowed them to reach their business goals. But the business goals of our mobile group are different, and require a much more "agile" release capability. To be honest, it is exciting to see the possibilities for improvement here, and you can be sure that whatever we end up doing, introspection will be a key part of it.