Wednesday, December 21, 2011

You can tune a methodology, but you can't tuna fish (tune a fish)

Last week was our first pass at introducing a new methodology to the client teams. Prior to our rollout there was significant concern throughout the group that we were going to impose so much new process that innovation and creativity would be stifled.

I was a bit surprised by this, not only because I had been consistent in my verbal commitment to the team that we would implement only just enough process to keep everyone working and productive, but also because I figured by now they'd have seen glimpses of my deep and abiding loathing for any activity that isn't directly related to delivering quality software. If nothing else, I'd have hoped they would have noticed my highly refined sense of laziness when it comes to anything else.

But I digress.

In talking with the team I heard stories about a previous attempt to install Scrum as a working methodology for the team. It wasn't successful. This lack of success wasn't because of any fundamental flaw in Scrum; it was because the previous implementer didn't recognize that there was already a working methodology in place, tried to impose new process where things were already working, and failed to recognize where the existing process could use some shoring up.

What may have made a difference for our former Scrum Master was a realization that there is no such thing as a singular "correct" methodology. For every team delivering software there are nuances to the team, their environment and their problem domain that require some level of "tuning" to get their chosen methodology working just right for the team. Since I'm eager to get to the details of our "build-a-methodology" activity, I'll save my "Agile certifications only prove someone can keep their butt in a chair for three days" rant for another time.

Knowing that the team had a failed methodology adoption under their belts (not to mention the attendant "methodology is bad" jitters), a different approach seemed to be the best course of action.

As luck would have it, I had just the thing sitting in my toolbox.

In the book "Agile Software Development" (Cockburn, 2007), Alistair describes a technique for methodology tuning. I don't want to spoil the ending for you (no, it wasn't the butler), but here are the highlights of the technique in bullet form:

* Examine one example of each work product
* Request a short history of the project to date
* Ask what should be changed next time
* Ask what should be repeated
* Identify priorities
* Look for any holes

Currently we have five distinct teams working in the Mobile group at Walmart Labs: four focused on client applications (iPhone, iPad, Android, Mobile Web) and a services team that provides functional APIs to the client teams. No single methodology is going to work for all of these teams, even if we ignore the fact that most of the teams are distributed and are working within the machinery of the larger organization (no, we really didn't ignore those factors).

Methodology tuning is deceptive in that it seems to be a relatively straightforward activity. Steps 1 through 5 are easily performed by anyone with a heartbeat and rudimentary conversational skills (e.g. upper management). Step 6? Well, that's kinda the secret sauce - the step that moves this activity from a "Shu" level interaction all the way up the scale to the "Ri" level.

Let's take the "should change" elements from our exercise as an illustration. Here's the list of things that the team identified as being painful and worth avoiding:

* Different roles (QA, Dev, UXD) did not interact well
* Business drove scope creep
* Requirements not clearly stated and in some cases showed up late
* Lack of visibility into development progress
* Confusion around roles and responsibilities
* Key activities missed
* Design changes close to release

Out of this list two major patterns emerge. First, the team was not prepared for late changes and/or requirement creep. Second, there was considerable confusion over who was responsible for what, and little visibility into overall progress towards delivery.

A novice methodologist looking over this list may conclude the following:

1) Requirements need to be more clearly stated and should not be allowed to change beyond a certain point in the development process. 

2) A clear statement of roles and responsibilities needs to be created for the team.

3) Team members need to report their current status and progress on their work more clearly.

All of the above changes seem to address the "holes" as surfaced by the discussion with the team, but each proposed change either imposes more work on the team or reduces their ability to change their mind about what work should be done.

Contrast that with the proposed changes I worked out with the team:

1) No task performed by the team may exceed one day of effort to complete.

Reducing the maximum size of a task to a single day creates a natural tension on the product team to spend more time on the requirements, both in decomposition (smaller stories = easier to estimate tasks) and in simply thinking more about the details of what the requirements should be (more thinking = fewer late surprises). It also curbs scope creep by exposing the cost of "that little extra" being slipped in by the business.
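
To make the rule concrete, here's a minimal sketch (in Python, with invented task and field names - this isn't our actual tooling) of the kind of planning-time check the rule implies:

```python
# Minimal sketch of a planning-time check for the one-day task rule.
# The Task structure and the sample plan are hypothetical, purely for illustration.
from dataclasses import dataclass

@dataclass
class Task:
    title: str
    estimate_days: float  # the team's own estimate, in days

MAX_TASK_DAYS = 1.0

def oversized_tasks(tasks):
    """Return tasks that exceed the one-day cap and need further decomposition."""
    return [t for t in tasks if t.estimate_days > MAX_TASK_DAYS]

plan = [
    Task("Add 'ship to store' toggle to checkout screen", 0.5),
    Task("Rework product search results caching", 3.0),  # too big: split it
]

for task in oversized_tasks(plan):
    print(f"Decompose before accepting: {task.title} ({task.estimate_days} days)")
```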

2) During release planning the stakeholders will use risk analysis as part of organizing the work of the team. 

The most common source of significant changes late in the development process for a product is responding to bad news. By identifying the features in an application that have the highest "risk" (Risk Reduction) we can organize work in such a way that bad news (if it shows up) is delivered early, giving the team more time to come up with a plan "B".
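
For the curious, here's a rough sketch of what "risk first" ordering can look like on paper. The scoring scale and feature names are made up for illustration; the real work happens in conversation with stakeholders, not in a script:

```python
# Rough sketch of risk-first ordering for release planning: tackle the riskiest
# features early so bad news arrives while there is still time for a plan "B".
# The 1-5 scoring scale and the feature list are assumptions for illustration.
features = [
    {"name": "Store locator map",         "risk": 2, "value": 3},
    {"name": "In-store barcode scanning", "risk": 5, "value": 4},
    {"name": "Saved shopping lists",      "risk": 1, "value": 2},
]

# Highest risk first; value breaks ties so we don't front-load low-value work.
ordered = sorted(features, key=lambda f: (-f["risk"], -f["value"]))

for position, feature in enumerate(ordered, start=1):
    print(f"{position}. {feature['name']} (risk={feature['risk']})")
```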

3) No iteration may have more than 50% of its work be "must have" features.

Setting this rule on iteration planning has two purposes. First, it again puts pressure on the stakeholders to be clear about what "must" be delivered versus what can be cut if the team runs into difficulty, reducing the chance of scope creep showing up late (scope creep is almost always accompanied by claims of "we have to have this, can't ship without it").
Second, having this "50%" buffer in place allows for mid-iteration course corrections. That's not a recommended approach, in that it violates the "focus" principle, but the buffer is a useful tool for teams still building trust with stakeholders that they can course-correct as much as they see fit after the current iteration.
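
Purely as illustration, the rule reduces to simple arithmetic at iteration planning time. The story data below is invented:

```python
# Sketch of the 50% rule as an iteration-planning check: no more than half of
# the planned effort may be tagged "must have". Story fields are hypothetical.
stories = [
    {"title": "Checkout crash fix", "points": 5, "must_have": True},
    {"title": "New loyalty banner", "points": 3, "must_have": False},
    {"title": "Wish list sharing",  "points": 5, "must_have": False},
]

total = sum(s["points"] for s in stories)
must_have = sum(s["points"] for s in stories if s["must_have"])

if total and must_have / total > 0.5:
    print(f"Replan: {must_have}/{total} points are 'must have' (limit is 50%)")
else:
    print(f"OK: {must_have}/{total} points are 'must have', buffer preserved")
```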

4) Tasks, iterations and releases all must have clearly defined rules of what "done" means.

Having clear rules about what it means to be done with a task, an iteration, or even the release greatly facilitates providing visibility into the current status of a development effort. It also goes a long way towards solving the problem of different roles not interacting effectively, because in almost every case part of the definition of "done" for tasks and even iterations involves direct interaction between different roles on the project. For example, in our project an iteration isn't done until two things happen - QA approves the work performed in the iteration, and the product of the current iteration is delivered directly to stakeholders.
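
One way to keep those rules honest is to write them down as an explicit, checkable list. The sketch below is illustrative only - the criteria shown are examples in the spirit of the ones above, not our actual working agreement:

```python
# Sketch of "done" expressed as explicit, checkable rules rather than tribal
# knowledge. The criteria are example entries, not a real team's agreement.
DEFINITION_OF_DONE = {
    "task": ["code reviewed", "unit tests passing", "merged to main branch"],
    "iteration": ["QA approves the iteration's work", "build delivered to stakeholders"],
    "release": ["release notes written", "stakeholder acceptance recorded"],
}

def is_done(level, completed):
    """True only when every criterion for the given level has been met."""
    return all(criterion in completed for criterion in DEFINITION_OF_DONE[level])

print(is_done("iteration", {"QA approves the iteration's work"}))  # False: not delivered yet
```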

Contrary to appearances, the point of this compare and contrast exercise isn't to strut my agile plumage. Take any 5 agile gurus, and they'll have at least 10 different opinions about how best to address the pains of the team, 20 if you let them iterate.

The real point was to illustrate the value of having a methodology tuning tool in your agile toolbox. As stated previously, no two products and/or teams will ever be the same; why would we expect their methodologies to be?

Monday, December 5, 2011

Team building: Recruiting life in the Big City

As part of my new responsibilities with the mobile division of Walmart Labs, I am working directly with our in-house recruiter on finding talented candidates to join our group. As recruitment for our group had started well in advance of my joining the team, my first task was to sort through a relatively hefty pile of resumes to find promising candidates for a number of different roles within our team.

It wasn't long before I had two piles in front of me (figuratively speaking, no actual tree-killing stacks were made). The first pile, containing the vast majority of the resumes I had reviewed, was the rejects. The second, a meager stack at best, held the candidates interesting enough to warrant an actual conversation.

In reflecting on the two stacks, I couldn't help but apply a mental label to each. The first pile (by now leaning precariously over the trash icon on my desktop) became the "Cogs". The second were the "Players".

Cogs (to my mind) are the people in the software business who are there to do a job, draw a paycheck, and that's about the extent of it. Their resumes were an exercise in HR pre-screening hurdle jumping, and very little else. Of course it is entirely possible that there was much more to these candidates, but if so it certainly didn't shine through.

In contrast to the Cogs, the Players (think sports, not some guy in a red velvet smoking jacket) deliver. They are craftspersons, always working to improve their skills. They are engaged with their professional community. And they make a difference wherever they work.

Frankly, I was rather surprised that I had a pile of Players in the first place. Silicon Valley is an extremely competitive market for talented individuals, and large companies that don't have a reputation for technical excellence tend to not attract these sort of candidates.

So I set off to find our team's recruiter to find out how these interesting resumes had snuck in. I had been looking forward to this conversation because the recruiter in question had handled my recruitment, and I was interested in hearing her opinion of what it was like to bring in someone like myself to the team. For the record, no, I wasn't that bad, but I definitely did not follow the game plan on my way in.

Talking with Kathleen as her "customer" was a very different experience than being her recruit. I learned very quickly that her path to the company was similar to mine - she too had been pursued by a colleague and wasn't exactly overwhelmed by the opportunity. In talking about the recruiting and onboarding process it was clear that her level of frustration in dealing with the hiring process was even greater than mine. I had only been through it once, whereas this was her day-to-day reality.

Our first task was to adjust the screening process so that the candidates coming in were better qualified for the team we are building. More players, fewer cogs. At first I thought that this would be as simple as changing the job descriptions to better reflect what we were doing, so I stated it as I saw it:  "We are creating a true entrepreneurial space within a larger company that will be a showcase for how agile teams can deliver without becoming mired in process and red tape."

I could immediately tell by the amused/pitying look on Kathleen's face that I had said something wrong. With a shake of her head she said to me "You do know that every single large company that is competing for talent out here says that exact same thing?". As soon as she said it, I realized how naive I had been in my thinking. Of course companies competing for a limited pool of technical talent would craft the most appealing message to get the attention of the community, regardless of how the company really worked.

I had already known that recruiting for our team was going to be challenging. Much like the initial resistance felt by Kathleen and myself, the very first obstacle we would have to overcome with the type of players we were looking for was the fact that Walmart hasn't exactly been on the short list of bleeding edge mobile technology adopters. But this newest revelation made it clear that we were going to have to come up with something a bit better than a dazzlingly worded job description.

Fortunately, we do have something a bit better. Us.

More specifically, our connection to our respective communities and other players we have worked with in the past.

Let's assume for a moment that I am a representative example of the type of player we're looking for. Yeah, I know - I'm as surprised as you that I can say that with a straight face, but just work with me for a minute, will ya?

Had I seen a posting for my current position somewhere on the intar-webs, I would have spotted the corporate logo out of the corner of my eye and moved right along. This isn't an indictment of my current employer - you can't argue with success, and Walmart is as successful as you can get. But there was nothing interesting in that space for me, mainly because the place I prefer to work is one where my contribution to the success of the organization goes beyond a few percentage points on a graph somewhere. I (much like other players) want to have a visible and lasting impact on the organization we are working with.

I now know that the space for this sort of contribution does exist, because the people that reached out to me to join the company are from the same mold and have created a space that will allow players to do what they do best. It was this personal contact by people I knew and respected that shifted my perspective on the job from "part of the machinery" to "being part of changing the game".

Moving forward, the key to success in our recruiting efforts will be this same sort of outreach to our respective communities. We will know right away that it is working by the number of times we hear comments in the interviewing process like "I applied for the position because I heard about what your team is doing and I want to be a part of it."

Now that I think about it, we are already starting to hear exactly that.

Wednesday, November 9, 2011

Jammin'

Last night I had the pleasure of participating in a "Jazz Dialog" event conceived and hosted by Alistair Cockburn. For those that are having a hard time seeing what those two things have to do with each other, I'll attempt to 'splain.

One of the key principles of Jazz music is the element of improvisation. No two renditions of a song played by the same players will be exactly the same, because the players are not just engaged in playing the song, they are also engaged in playing with each other. It is this element of improvisation that keeps Jazz music fresh and interesting for players and listeners alike.

Imagine that you are in the audience of an impromptu Jazz jam session that has the likes of Miles Davis or Dizzy Gillespie sitting in with the group. If you are a neophyte Jazz aficionado, you may feel shifts in your emotional reaction to the music, but not really know why or how it is happening. A more seasoned Jazz fan may notice that there is some sort of interplay above or below the level of the actual music between the players, but may not be able to interpret how that interplay is affecting the evolution of the song. 

To truly understand the forces shaping the music, you'd need an expert observer who is paying attention to the players, and not the music. They would notice that Dizzy seemed a bit more subdued than usual at the outset of the song and watch Miles repeatedly challenge Dizzy to step up his energy level by intentionally magnifying his own play. Without this expert observation of the interaction of the players you may notice that the song started out slow and then picked up energy as it went on, without ever realizing that it was the direct result of the interplay of the players above and below the music.

So what does that have to do with dialog? I'm glad you asked. 

For over a decade now a group has been meeting in Salt Lake on a monthly basis to talk about software. The group was originally focused on discussing Object Oriented software development, but around the time I showed up in 2002 the charter of the group had shifted to discussion of all things Agile (and agile) related. Attendance at the group fluctuates, but there is a core group of attendees that keep coming back to sit in on these conversations.

Why? After 10 years, you'd think we'd be all talked out. And yet we keep coming back, because the monthly discussions aren't just discussions, they are our "Jazz Dialog" jam sessions. And much like a good jam session, we can take the same old tunes (such as "estimation accuracy") which seem to be in an advanced state of expired equine violence (beating a dead… you get the point) and still get something new out of the conversation.

Because it isn't the topic that is important, it is the dance of the dialog.

I had an "Aha" from the Jazz Dialog last night - it was about the implications of our roundtable "jam sessions". Three in particular stood out to me:

1) The people that keep coming back to the roundtable sessions over the years? Dialog Jazz musicians. We all love a good jam.

2) I believe that the people that come to the roundtable looking for answers to questions or challenges tend to not stick around because they subconsciously (or consciously) pick up on the fact that the dialog itself is more important than the outcome of individual discussions and thus look elsewhere for help.

3) Multiple attempts to export the roundtable to other locations have failed over the years because they duplicate the format but not the musicians.

Of course one good "aha" leads to another, so my second is that there is an inverse correlation between the amount of interest you have in a particular topic and your ability to look past the topic to observe the flow of the dialog. I don't think I'd be satisfied with just committing to metering my engagement with a topic in order to observe the dialog, so this means that I need to practice getting better at tracking a dialog while I'm in the middle of it.

Monday, November 7, 2011

The high cost of failing to introspect


Disclaimer: I am not authorized to speak in any way, shape or form for Walmart, Walmart.com and Walmart Labs. The opinions expressed in this blog are entirely my own, and if history is any sort of indicator, half-baked.

I've been at my new position as Director of Engineering for the Mobile group within Walmart Labs for roughly 10 days now, which usually would mean that I'm barely qualified to find my way to the bathroom unaided. But since I'd like to make a good impression on my bosses I've done my best to hit the ground running as fast as I can.

Right into a brick wall, as it turns out.

From the moment I walked in the door I was aware of a certain amount of tension between the individual client application teams and the services team that was responsible for providing back end services for those client teams. Knowing that this was the most visible area of organizational pain within my new professional home, I shifted into investigative mode.

The first theory that I tested was that there were personality and/or work ethic issues between the teams. On the surface this seemed to be a good one to start with given that the services team had been in place prior to the leadership change that brought in our "Lean Startup" oriented management and client teams.

It didn't take long to discard this theory. When I first met with the services team their sense of frustration and anxiety over the recent changes around them was clearly evident. But what was interesting was that their passion for getting the job done was equally evident, which meant that the real problem wasn't in that room. Since I was fresh out of theories, the next step was for me to sit with the team and watch them do their job.

It worked. 

The specific job I was most interested in watching was how the team went about handling the production deployment process. As luck would have it the team was set to do a production deployment that same night. In an effort to understand what I'd be seeing during the production deployment that evening, I sat down with one of the team members and had him outline the steps in the process for me.

An hour later we had to get another team member to come in and fill in some of the details that the first team member was unsure of. 30 minutes after that the two of them had to go get yet another guy to fill in a couple of blanks that the others didn't have enough information about. Are you starting to get the picture? 

Watching for myself, the process was every bit as complex as the description. Responsibility handoffs were "over the wall" style, meaning that the person shepherding the changes through would have to start from scratch every time a new person entered the deployment process. Scheduled event times meant little to outside teams, as there was no visibility on our side into what else was on the deployment schedule, or what was causing the schedule to slip when deployments were being pushed back. Even tools created to facilitate the deployment sometimes caused more problems than they solved.

Translation: we were losing more than 50% of the productivity of the services team to the "cost" of interacting with the production release process.

What was most amazing to me was the high degree of tolerance within the larger organization of the pain of this deployment process. Now understand that I come from a much smaller team, where knowledgeable and trained people took the place of process. In a larger company we don't always have this luxury, thus the reason why the process had been created in the first place. It was clear that every single individual working within the process was working hard and doing the best they could within their specific responsibilities, but had become accustomed to the fact that this was how things worked, and it wasn't going to change. In some cases they didn't even seem to recognize this process as being "painful" from an organizational perspective. Speaking as the "Agile Sadist", I was surprised at the amount of organizational and individual professional pain that was being endured.

You're probably thinking that my next question would be "How could this have happened?". Surprisingly, it isn't. I am sure that the actual story of how the current status quo evolved would be interesting and informative, but I believe that the actual cause is far easier to diagnose - a lack of sufficient introspection.

Introspection allows you the opportunity to review what is and isn't working, and to make changes based on that information. It is one of Alistair's core principles of the Crystal family of methodologies, and rightly so. It provides a thoughtful change agent, one that is based on first-hand knowledge and experience of the team that is performing the work.

Without introspection there is no good way to tell whether changes are needed in the first place, or whether the changes you've applied are addressing the original issue. In the case of our production release process you can discern a number of causative elements if you look closely enough, but the changes implemented to address them in most cases not only failed to solve the original issue, but actually spawned new issues of their own.

Again, understand that this is not a rant against how bad production deployment seems to be at the 'ol workplace. It has certainly worked for them to this point, and allowed them to reach their business goals. But the business goals of our mobile group are different, and require a much more "agile" release capability. To be honest, it is exciting to see the possibilities for improvement here, and you can be sure that whatever we end up doing, introspection will be a key part of it.

Monday, October 31, 2011

Reflecting on my career

It's a beautiful cloudless day, especially here at 36,000 feet. I'm on a flight headed for San Francisco and my brand new position as Director of Engineering for the Mobile division within Walmart Labs.

When I'm not on the job one of my passions is skydiving. As you might imagine, there isn't a lot of room for reflection when you're free falling at 125 miles per hour, but since I've chosen to stay with the plane for my full flight today it seems to be a perfect moment for a retrospective on the last segment of my career with my previous employer, Amirsys Inc.

I joined Amirsys 6 years ago, initially as Technology Manager, then as Director of Technology and finally Chief of Technology. Amirsys was more than a job for me – it was a home, one that gave me a golden opportunity to test out a number of theories on making software development better. Thanks to my close association with Dr. Alistair Cockburn and the Salt Lake Agile Roundtable discussion group, I had no lack of theories, practices and experiences to draw from.

One of the techniques learned from the SL Agile group that I introduced to Amirsys was a retrospective format that focuses on three key questions:
  1. What worked? (Things that we did that worked well)
  2. What didn't? (Things that didn't work out well)
  3. Try? (What are we going to do differently)
This format worked very well for our teams at Amirsys, so I don't see any reason why I shouldn't apply it to my own career to see if I might actually learn something. 

What worked:

Agile Sadism. Creating and/or raising visibility on organizational “pain” was an effective tool in inducing cultural change within Amirsys, especially in the early days of adoption. Ultimately we created a team culture that was focused on rapid and repeatable delivery of value, value being defined primarily as revenue generating software. 

Note to self: It's probably best not to actually tell your peers and your direct report that you are actively engaging in benign corporate sadism.

“Special Operations” team model. Amirsys as a company had a clearly stated goal, namely to reach a specific company valuation by creating a number of complementary revenue generating “fronts”. What we didn't know early on was exactly what these fronts were going to be. Forming the development teams around a "special operations" military model of small, cross-trained teams allowed us to repurpose our output at any time based on business priorities by shifting teams and team members to where they were needed.

Just In Time team growth. Over time it was clear that our products would grow in revenue to the point where they would justify the services of dedicated team roles. Our “multiple front” business strategy made it difficult to determine when any one product would require a specific dedicated team role, so rather than hire for the role on an individual product basis I introduced a “Just In Time” role growth plan across all products. When we reached a point where the lack of a dedicated role was slowing the progress of all teams we hired people capable of performing that role across multiple products. Once a role was established we would hire additional people for that role based on overall workload.

Protecting the team. I believe it was Kay Johansen that drew a picture once that has stuck with me for many years. Essentially the picture was a team surrounded by a wall called "management". The purpose of that wall was to allow in positive elements - resources, praise, etc. and block the negative elements - decision thrashing, micromanagement, unnecessary process. The not-so politically correct term for the role of a manager in this context was a "shit shield". I took that picture to heart in working with my team at Amirsys and felt I did well - our annual team member turnover was less than 10% for the 6 years I was with the company.

What didn't work well:

4 Dimensional Trick Shot. Alistair Cockburn gets the credit for this term. Remember the old Disney "Flatland" cartoon that showed the peculiarities of living with different numbers of dimensions? Good. Now picture Goofy, standing in front of a four dimensional pool table. After carefully lining up his shot, he fires the cue ball right off into the middle of nothing. Three days later, when everyone has gotten bored and gone home, every ball on the table neatly drops into a pocket save one, which ends up landing directly on top of his head. Yes, that was me. Goofy, I mean - not the ball.

Knowing that we had multiple product fronts to generate revenue on and not enough time or resources to have a dedicated platform team, I opted to have the teams build the platform as they were delivering individual products. Imagine my dismay when the CEO and my peers became frustrated with my tendency to call shots like "Enterprise-wide authentication and subscription management off the side rail and into the 'project that seems to be getting no resources right now' pocket, Tuesday afternoon, two months from now". Lesson learned for me: no more trick shots into dimensions that the rest of the business can't see.

"Racking up Technical Debt". Early in my tenure at Amirsys the leadership of the business indicated that we had a specific time frame to increase revenue in advance of a financial event. Believing this to be the case, I decided to bias the efforts of the development team towards delivering revenue-generating products and features as opposed to clearing known technical debt. Of course what ended up happening is that factors converged to perpetually keep the date of that event 18 months or so into the future. It tok me a good three years to really recognize the situation we were in and put more effort into selling the rest of the business on clearing technical debt, by which time our technical debt had accumulated to the point where significant effort was required to clear the most pressing issues. Lesson learned: Don't trade consistent effort on clearing technical debt for squeezing out a few more features. If a specific goal requires pausing on clearing technical debt, time-box it.

What do you mean, "I don't know when we'll be done"? When we finally got serious about clearing technical debt, the number one target was a nine-year-old application that had grown so fragile that any work within it took at least 4X the effort it would have when the application was new. By this time we could either refactor the existing application in place, or end all non-critical support of the legacy application and kick off a brand new effort. Even though refactoring in place is much more difficult than a ground-up rebuild, I chose that option to allow the business to continue to add features to the product as it was being refactored. Although I warned the business about the tradeoff between predictability of completion and ongoing feature development, when the effort stretched into several months the business became frustrated with my inability to give a hard completion date. Lesson learned for me: tolerance for uncertainty on the part of the business as a whole is a critical part of the "Refactor or Replace" decision.

Relationships. It seems strange to be discussing relationships in the context of running a software team, but ultimately I believe that this is one of my greatest failures. The key relationships that I failed to properly manage were the ones with my peers and with the CEO and President of the company, who I reported to. At the heart of my relationship failures was a lack of effective communication - about status, about progress, about challenges. I believe that this lack of transparency and collaboration on my part ultimately led to a shifting of my responsibilities that took me out of the day-to-day operations. Lesson learned for me: The team is not just my direct reports. It is also my peers and who I report to.

Try:

Sunshine, sunshine, sunshine. Given that every single one of my "Didn't work" points had some element of communication and visibility involved, it is clear that I need to put more emphasis on "Sunshine" (one of the 8 principles of Crystal that discusses transparency and visibility). More specifically it is my intent to:

1) Define clear metrics that best report progress and status to the whole team, from leadership on down
2) Create and maintain ambient data collection mechanisms that gather metrics data organically from the working teams and feed it into highly visible info radiators (a rough sketch of what this might look like follows this list)
3) Commit to interacting with the whole team on an ongoing basis to ensure clarity of meaning from the metric data
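
To make "ambient" a bit more concrete, here is the sort of thing I have in mind - deriving a metric (cycle time) from timestamps the team already records as work moves across the board, then boiling it down to a line an info radiator can display. The data shape below is hypothetical:

```python
# Rough sketch of "ambient" metrics: derive cycle time from timestamps the team
# already records, then publish a one-line summary for an info radiator.
# The task records and field names are hypothetical, not from a real tracker.
from datetime import date
from statistics import mean

finished_tasks = [
    {"title": "Cart badge count",   "started": date(2011, 12, 12), "done": date(2011, 12, 13)},
    {"title": "Push token refresh", "started": date(2011, 12, 12), "done": date(2011, 12, 15)},
]

cycle_times = [(t["done"] - t["started"]).days for t in finished_tasks]

# One line a wall display can show without anyone filing a status report.
print(f"Average cycle time this iteration: {mean(cycle_times):.1f} days "
      f"across {len(finished_tasks)} finished tasks")
```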

Mentoring. It probably comes as no surprise that I consider myself an advanced practitioner of Agile. That being the case, it is rather ironic that I haven't put much effort into encouraging the growth of others on my teams in the same subject. By putting more emphasis on mentoring in all directions (direct reports, peers, my superiors) I should be able to multiply my effectiveness without increasing the amount of actual work I am required to do. Of course this also requires me to let go to the point where others are able to try different things, make mistakes and learn.

KISS, YAGNI, etc. From an internal dialog with my ego: "Yes, I do think I am a talented software architect, and yes, I definitely want the world to know this. But do I have to prove it by setting up those 4-dimensional bank shots just to show that my secret master plan is a work of genius? Can't I just be happy with putting more emphasis on clear metrics for success and let the team decide for themselves how best to deliver based on those metrics?"

Simply put, less focus on delivering expected value six months from now out of work being done now, and more emphasis on delivering actual value now. By setting up clear metrics for success around the work product of the teams I believe we will deliver more value now while maintaining clarity for the whole team on what we are building and how well we are performing at delivering it.

Monday, September 19, 2011

Stories from the "Knowledge Acquisition" curve, part 2: Social Risk

For the three of you out there that are actually reading this blog (your check for next month is on its way), I'm continuing in the theme of knowledge acquisition stories. For those of you that didn't see the first post, you can find it here.


Social Risk is the focus of this installment. Seems like a strange risk to be listed as important enough to be worth devoting team resources to understanding, doesn't it? Rest assured, it does exist and in my opinion is the most prevalent cause of project delivery failures. 

We've been conditioned since the late 60's to think of software development as an "engineering" discipline, complete with its own set of laws and physics that remain inviolate regardless of circumstances. This definition works right up to the point where you run into the humans involved with the project. There's no formula for calculating the degree of political sabotage expected from a hostile stakeholder or the percentage of bad code expected from an inept developer.

One of the most important accomplishments of the Agile movement (in my humble opinion) has been the acknowledgement of humans as an integral part of the development effort. This is certainly a big step forward from the "engineering" mindset. But what hasn't quite become apparent to the adopters of Agile is the implications of humans as an integral part of the process. Simply put, humans are messy from an engineering point of view.


In most teams the elements of social risk are more background noise than roadblocks. Established teams doing more or less the same work that they have always done have adapted to the unique social elements in their world. But introduce change to the team (technology, personnel, stakeholders) and you now have a new social dynamic that will affect the team's ability to deliver.

For new teams the social risk is much higher - everything is an unknown. Does the team work well together? Are there hidden agendas among the stakeholders? Does the team have the ability to do the work they are tasked with?

Social risks can be tough to identify. It really does require a working knowledge of human psychology, something that you don't often see on the hiring requirements for development types. As with anything, experience and stories from others can help uncover these risks in your own team, so let's get to a story, shall we?

Some time ago a company I worked for acquired the rights to an existing free web application that would help pathologists choose specific cellular "stains" that would allow them to identify what type of cancer cells were in a tissue sample. Our plan was to take the underlying data and build a new application from the ground up that would provide greater functionality and allow us to charge for the service.

Two of the stakeholders for this project were very senior and well respected pathologists, one of whom was the creator of the application we were acquiring. As stakeholders for the new product, they had some very specific opinions about how the product should work, and worked closely with the team to make sure that the new application reflected their experience in pathology.

What the team didn't realize at the time was that although our stakeholders were absolutely correct in their perspective on how a pathologist should work, it turns out that pathologists actually want to work in different ways, ways that weren't supported in the first production release of the application.

The story does have a happy ending - as soon as we realized that there was significant pushback from the user community the team shifted into a weekly release cycle process and communicated extensively with the user base on progress and priorities until they had the application working to the user community's satisfaction.

The team actually learned two things in the retrospective. The first was that we didn't understand the motivation on the part of the stakeholders to improve how people in their field do their work. Had we recognized that sooner, it would have allowed the team to translate statements like "This is how a pathologist works" into "This is how we think a pathologist should work". The second was a re-affirmation of the importance of the Crystal principle "Easy access to expert users".

The moral of the story for me was that hidden stakeholder motivations are sufficiently disruptive to projects to warrant taking the time to discover and expose them to the team. This doesn't lessen the disruption - exposure of hidden motivations can be quite challenging. But the principle of "Fail Early" covers this nicely. If the warping of the project by a stakeholder's agenda is sufficient to ultimately cause failure, it is better to fail as early as possible.

Wednesday, September 7, 2011

Stories from the "Knowledge Acquisition" curve, part 1

If you've ever attended any of Alistair Cockburn's recent presentations on the nature of software "engineering", you'll have heard him discuss the early part of the design process as a game of knowledge acquisition (link - scroll down to the section titled "Chapter 2 of the Story, 2010").

Essentially what he is saying is that it is a successful strategy to treat initial design as a "pay to learn" period. No, you're not handing out piles of cash to trench-coated agents that are slipping you unpublished USDA reports. The nature of your coin is the effort of the team. What you are paying for is information about the success and/or failure of the product you are designing.

Although there is no simple formula to identify exactly what nuggets of knowledge are needed to determine the success and/or failure of a product, Alistair does provide us with a map to the territory. Here are his 4 categories of knowledge worth "paying" to learn:
  1. Business Risk (Are we building the right thing?)
  2. Social Risk (Can our team build it?)
  3. Technical Risk (Will our solution work?)
  4. Cost/Schedule Risk (Do we understand the cost / timing?)
As the title implies, the point of this blog entry (and subsequent ones) is to tell stories about how this game of knowledge acquisition was played. Depending on where you are in your Shu Ha Ri path you will hopefully gain something useful, even if it is the certainty that yours truly isn't playing the "knowledge acquisition" game with a full deck.

This first story is about gaining knowledge about Cost/Schedule risks. At the risk of sounding boastful, I've chosen to tell this story first because it was the impetus for Alistair to add the Cost/Schedule risk category to the list above.

Our story begins with the decision to create a new application that would allow resident physicians to study for their board examinations. Medical board examinations are typically case-centric, presenting the examinee with specific medical cases that they must then identify, diagnose and discuss treatment options for.

As you might guess, the key to providing a useful learning product is having high quality content, something that Amirsys is known for within the medical community. Not only did we have existing content, but the content itself was structured in such a way that re-using existing content for new purposes (such as this learning application) is something that we are very good at.

Our skill in reuse of existing content wasn't an accident. Early on in my career with Amirsys I was surprised to discover that the timeline for the creation of high quality medical content significantly exceeded software development timelines. In other words, the most significant cost of developing a new product wasn't the development work, but the content creation for that product.

Considering that we had a pretty good idea of the costs associated with content creation, any new products requiring new content had to have a pretty convincing ROI to justify the content creation costs. Either that, or we had to come up with product ideas that wouldn't require new content creation. Thus the idea for this learning application was born.

At the time of product conception we had data from roughly 60,000 medical cases in our content repository. In order for a learning product to be successful you need to present both content to learn from and assessment about that content (questions). There was no lack of learning content, but at that time we had no authored questions over that content. 

The big break came when it was pointed out that our cases were structured in such a way that it may be possible to automatically create relevant questions about the cases without requiring a medical expert to perform any authoring. On the strength of this realization the development of the product was given the green light.
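
To give a flavor of the idea (and only a flavor - the field names and sample data below are invented, and our real case structure was far richer), here's a sketch of how a structured case can be turned into a multiple-choice question without an author in the loop:

```python
# Illustrative sketch of auto-generated assessment questions: because each case
# is structured (findings, diagnosis, and so on), a question stem and its
# distractors can be assembled without an expert authoring each one by hand.
# The field names and sample data are hypothetical, purely for illustration.
import random

cases = [
    {"findings": "Key imaging findings for case 1", "diagnosis": "Diagnosis A"},
    {"findings": "Key imaging findings for case 2", "diagnosis": "Diagnosis B"},
    {"findings": "Key imaging findings for case 3", "diagnosis": "Diagnosis C"},
]

def build_question(case, all_cases, num_choices=3):
    """Turn one structured case into a multiple-choice question."""
    distractors = [c["diagnosis"] for c in all_cases if c is not case]
    choices = random.sample(distractors, num_choices - 1) + [case["diagnosis"]]
    random.shuffle(choices)
    return {"stem": f"{case['findings']} - what is the most likely diagnosis?",
            "choices": choices,
            "answer": case["diagnosis"]}

print(build_question(cases[0], cases))
```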

Shortly after work began on the product I was participating in a conversation about an unrelated product when one of our content authors happened to mention that the cases they authored didn't have all of the content I had assumed to be required for authoring. Given how dependent we were on this content being sufficiently complete to allow for the automated generation of assessment questions, this was a significant risk to the delivery of this product.

Unfortunately there was no easy way to evaluate the suitability of the content outside of actually showing it within the context of the new product and having a medical expert interact with it. Considering that the success of the new product hinged on the fact that we didn't have to explicitly author all of these new questions we had to do something to "acquire knowledge" about the extent of this new risk.

Our solution was to build a walking skeleton of the application that would allow a user to "take a quiz". No bells and whistles - just the presentation of these auto-generated questions and the ability to select an answer and see if the answer was right or wrong. It took members of the team approximately two weeks to build this tool and put it in front of our medical experts.
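
The skeleton really was about that simple. Here's a rough sketch of its core loop, assuming questions shaped like the ones in the sketch above (the real tool obviously had an actual UI in front of it):

```python
# Bare-bones sketch of the walking skeleton described above: show each question,
# accept an answer, report right or wrong, and nothing else. Question
# dictionaries are assumed to have the shape used in the earlier sketch.
def run_quiz(questions):
    for q in questions:
        print(q["stem"])
        for index, choice in enumerate(q["choices"], start=1):
            print(f"  {index}. {choice}")
        picked = int(input("Your answer (number): "))
        if q["choices"][picked - 1] == q["answer"]:
            print("Correct!\n")
        else:
            print(f"Incorrect - the answer was {q['answer']}\n")

# Usage: run_quiz(generated_questions) - just enough to put real content
# in front of a medical expert and watch what happens.
```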

Within 10 minutes it was clear that there was a significant amount of inconsistency in our case data, to the point where a paying customer would not receive a useful learning experience from the content. Armed with the knowledge that our content was not in the condition we had assumed it was, the business needed to make a decision.

Fortunately the story has a happy ending. It was determined that although our case content wasn't consistent enough to auto-generate assessment questions, it was good enough to allow us to significantly speed up the authoring process by "prepopulating" assessments that authors could then edit rather than create from scratch.

Had we not known about Alistair's pattern of "Paying for Knowledge" it is entirely possible that we would not have discovered our content challenges until it was too late to meet key delivery dates without disrupting other revenue generating activities. As it turned out the product was delivered on time, with the only negative consequence being a smaller set of content shipping with the new product.

As of the time of this writing this product has been generating revenue for close to a year now and has given us the tools to build similar products for external customers who have similar expert content learning application needs.

Stay tuned for Part 2 - A tale of Social Risk

Tuesday, June 7, 2011

Shu Ha Ri and Distributed Cognition

Last week I attended Alistair Cockburn's "Advanced Agile Master Class" here in Salt Lake. I've made no secret of the fact that I am one of Alistair's minions, but having me as a minion comes with a price - you have to keep me interested by challenging me to learn and grow whether I like it or not.

Alistair's class didn't disappoint. I will have more to say about the class in subsequent postings, but for now I wanted to focus on a key piece of knowledge that I walked away from the class with. At the outset of the class Alistair had asked all of us to identify what we wanted out of it. My goal was to come up with two different ideas that I could immediately put to use in my own organization. Two may not seem like a lot, but if you're around Alistair and the Salt Lake Agile Roundtable group for any length of time you'd be amazed at how many great ideas just loiter around making a nuisance of themselves until you give in and try them, and I was looking for something fresh and new.

I'll save you the cliff-hanger. I had my two before the first day was out. What was interesting is how the realization of one of these two (Distributed Cognition) came about in the context of Alistair's class.

Distributed Cognition (Wikipedia Entry) implies that the common thinking about a specific domain isn't limited to an individual but is rather shared across a group interacting within that domain. Consider the defensive unit of a professional football team. In order for the defense to do their job they have to instantly react to their observations of what the offense is doing, all without relying on explicit communication between team members.

When you discuss defensive units you frequently hear terms like "on the same page" or "the unit is meshing well". These terms are useful in giving some indication of how far along the team is in creating a shared cognition of their domain.

Fortunately for us, business does not move at quite the same pace as professional football. But there is no doubt that business does move fast. Fast enough that "being on the same page" becomes increasingly important as the business grows beyond a few guys in a room.

The reminder of the theory of Distributed Cognition was timely for me. Entering into the class one of the key challenges I had been facing (and not necessarily succeeding at) at my company was the increasingly complex challenge of keeping everyone in the company on the same page as our company grew. The tricks we used to keep everyone on the same page three years ago were failing to get the job done as we added more people, projects and responsibilities.

For those of you paying attention, Distributed Cognition isn't a specific practice, like unit testing or stand-ups. It is a theory. As with any theory, there are a number of practices that are derived from it that can be applied to different situations. This is important because I wasn't looking for a specific practice - I was looking for a theory.

One of the key points that Alistair kept hammering home is that the Advanced Agile class isn't about practices. It is about theory and self-awareness. This is all part of his Shu-Ha-Ri pattern of mastership for our software craft. For those of you too lazy to click the link, the short version is: Shu - specific practices, Ha - theory behind practices, Ri - self awareness to recognize the right theory and pick the appropriate practice.

Had the class focused only on practices, we *might* have been able to learn 4-5 well enough to take away and implement elsewhere. Since we instead focused on theory and self-awareness, I now have a name for a whole set of practices that I can discover and try out to see which best suits the needs of my company. Prior to the class I was aware of the theory, but it was the environment of the class itself that fostered the moment of self-awareness that indicated the applicability of Distributed Cognition to my organizational challenges.

Did everyone who was in the class get this level of understanding from the training? Good question. In some of the sidebar discussions I had with Dennis Stevens (Dennis's Blog) it was clear that our experiences in the class were similar in regards to the understanding derived from the class, but it would be interesting to hear from others that have been in the class. If you have, please take a moment to post a comment here about your experience with Alistair's "Advanced Agile" class.

Tuesday, May 10, 2011

Why isn't Agile "winning"? (part 2)

In Part 1 of this post I made two assertions:
  1. Our capitalistic business community is quick to recognize and adopt practices that have a clear impact on the bottom line.
  2. That same community is not so great at adopting complex practices, and as a result settles for mimicry of the process rather than true adoption.
Is this really true? Good question. In the first part of this posting we discussed a few examples of how companies failed to successfully adopt complex practices. Now let's look at an example of how a company properly declined to try to replicate a successful complex practice.

In 2007 FastCompany published this article about a GE jet engine manufacturing plant in Durham, NC. It's definitely worth a full read, but in the interests of brevity I will summarize the salient points:
  1. This plant successfully competes against other engine manufacturing plants both inside and outside of GE, including off-shore manufacturers that have labor costs that are pennies on the dollar compared to US labor costs.
  2. Only 1/4 of engines from this plant ever have a single identified defect (almost always cosmetic). 3/4 of the engines produced here are defect-free.
  3. There are no management layers. There is a Plant Manager, and the manufacturing teams. Within the manufacturing teams there is only one worker classification.
In 2008 this plant made a decision to start manufacturing a new engine that was in increasing demand and already being manufactured by other GE plants. Within 9 weeks of that decision the plant shipped its first engine, at a cost approximately 12% less than that of other GE plants that had been manufacturing the engine for years.

In the world of manufacturing a 12% cost reduction is a phenomenal achievement. To achieve that with the very first engine shipped is unheard of.

So why isn't GE doing everything in their power to replicate how the Durham plant operates with all of their other manufacturing plants? Is the senior management in GE too incompetent to recognize that a 12% decrease in manufacturing costs is a good thing? Not likely.

At the end of the article the Plant Manager of a different manufacturing plant is quoted as saying, "I think what they have discovered in Durham is the value of the human being. Here we have people that turn wrenches. In Durham they have people that think."

It's clear that GE understands why and how the Durham facility does what it does. But it is equally clear that GE isn't rushing to replicate the Durham model elsewhere. Why? Because that model is too complex to replicate. You cannot achieve the same results by mimicking the process. You also have to replicate the people. Remove the people that make the process work and you have a disaster in the making.

GE is smart enough to know that a manufacturing disaster leads directly to massive real world costs. Have a few jet engines fail in flight due to manufacturing defects and GE is quickly out of the jet engine business. But what about other practices that do not translate so clearly to the bottom line?

Let's look at social media engagement. There is a lot of pressure on business today to "get into" the social networking space lest they be left behind. Engaging social networking is a tricky business because you cannot control the conversation, you can only participate in it. Last year Nestle decided that they were going to squash the practice of Facebook users using an altered Nestle logo as their profile picture. Their subsequent handling of the conversation has become legendary in the social networking sphere as a perfect example of how not to interact with your community.

How did Nestle find itself in this position in the first place? Complexity. They "knew" that they needed to be engaging in social networking lest they be left behind by their competitors. Unfortunately they did not properly understand the complexities of engagement with customers via social networking until too late. Fear of losing ground in the marketplace led to imperfect mimicry of the complex practice of engaging social networking. Nestle would have been better off completely avoiding any social networking engagement.

Agile is without question a complex practice. For every story you read about how Agile has become a mainstream practice there is another that laments the fact that Agile adoptions are mimicry at best and not true adoptions. William Pietri (whom I have joyously crossed swords with in other forums) has a great post about this lack of true adoption. The telling quote from the article:

"I think there is a vast army of supposedly Agile teams and companies that have adopted the look and the lingo while totally missing the point."

As we've seen, mimicry of a practice is not the practice. Without the actual practice you cannot reap the benefits. If we cannot clearly show the benefits of Agile within our own software industry are we going to convince the Boardroom to flock to Agile just because we say so?

Of course! If ignoring actual results in favor of a convincing sales pitch is good enough for politicians, it's certainly good enough for us. Vote Agile! Change you can believe in (this time we mean it)!

Seriously, I believe that there are two paths to establishing Agile as a valid winning business strategy. Let's take a quick look at these paths:
  1. Clean up our act. We've done a great job of selling the software industry on the idea that being Agile starts and ends with attending a three-day certification seminar, where the only critical evaluation of your Agile skills takes the form of a one-page written test, the answers for which are usually dictated by the course facilitator. If we're going to sell Agile, we need a proper means of effectively measuring Agile. When someone comes to me claiming to be a Certified Scrum Master, I want to see proof that they have the ability to eliminate obstacles or conduct stakeholder feature negotiations effectively, not proof that they were capable of keeping their butt in a chair for three days. If an Agile coach shows up and promises full Agile adoption in anything less than a year of ongoing engagement with the team I want to see how well their previous clients have adopted Agile.
  2. Beat them at their own game. There is no question that there are teams that are truly Agile, that deliver value frequently and consistently. There are enough of them now in the software industry that they have already become the "shining example" of success that the rest of the industry is attempting to duplicate. There was a tipping point when there were enough of these successful teams in the software industry talking about their success to allow the concept of Agile to become mainstream within the software industry. There are businesses out there right now that are winning at business in an Agile manner whether they know it or not. If we can properly market these businesses as Agile, how many more are needed to reach our tipping point?
Sadly, although I am passionate about the first option, I am enough of a realist to recognize that there is no way we'll convince the hordes of coaches, trainers and even certification bodies out there to clean up their act. There's far too much money at stake for them to do anything different. And unless their customers realize that they aren't getting what they are paying for, there is no impetus for change.

It's the second option that I think has the best chance for success. There is a part of me that feels like giving up on the dream of true mainstream adoption of Agile (for software or business in general) is a personal failing on my part. But the reality is that Agile is a complex practice, and nothing we can do will reduce that complexity. Complex practices are doomed to be understood and exploited by only a small percentage of businesses and at best mimicked imperfectly by the majority.

So if we want to see Agile win acceptance outside of the team room we need to take an active hand in creating the success stories that lead to the mainstream acceptance tipping point. If we're happy with what we are achieving in the team room it's time to take those successes and bring them into the board room so that other parts of the business can start to realize the value of Agile as a means of conducting business.

Wednesday, February 23, 2011

Why isn't agile "winning"? (part 1)

Over the weekend I received a comment from John Rusk (www.agilekiwi.com) about one of my "Ahas" from 10 Years Agile. He thought that my musing about whether Agile really matters or not was more of a reflection on how companies do business as a whole as opposed to a question about Agile.

His comment (which I do agree with in principle) caused me to realize that I hadn't quite captured the essence of the thought that led to that "Aha". Since I believe that this is one of the key issues facing the ongoing adoption of Agile both inside and outside of software development, I think it's worthy of its own posting. Well, that plus this is my blog, so the voting on topics is a bit skewed...

Let's start with some ground rule assumptions:

  1. "It's a Dog Eat Dog World" - The capitalistic economic model creates an evolutionary "survival of the fittest" environment for companies.
  2. "Show Me The Money" - The measurement of success for companies operating in this environment is money (revenue / profitability).
  3. "Keeping Up With The (Dow) Joneses" - Business practices that have a positive impact on money tend to become adopted by other companies (akin to how beneficial genetic mutations spread through a population).
  4. "Blink And You Missed It" - The rate of propagation of these "mutations" can be exceedingly rapid (days and weeks versus years) so it is good business to pay attention to what others are doing.

Assuming all of the above is true, you have an environment that is highly tuned to detect, adopt and innovate business practices that directly influence the bottom line. A couple of examples for your amusement:
  • The Web - Say what you will about the insanity of the dot-com bubble, there was a solid business "mutation" at the root of it - the realization that the web provided companies with a unique new channel to customers. Two business behemoths of today trace their birth back to the heady days of the dot-com era - Amazon and Google.
  • Lean Manufacturing - Most discussions of the origins of Lean credit Toyota for taking earlier concepts espoused by Henry Ford and others and adapting them to the dynamic challenges of the modern manufacturing industry. Today you would be hard-pressed to find any manufacturer that does not use Lean practices to some degree. This practice has been so successful that it has actually "mutated" into the Agile movement.
Interestingly enough, both of these examples of business practice "mutations" also contain an important cautionary lesson about what may inhibit the adoption of Agile outside of software development. Consider the following:

Dot-bombs. For every success story coming out of the dot-com frenzy, there are hundreds of failures. In most cases the failures were entirely predictable in that they failed to "Show Me The Money". What would possibly motivate otherwise sane business minds to deviate from focusing on money as a measure of success?

Henry Ford's Lean - According to Taiichi Ohno (credited with developing TPS, the principal progenitor of Lean), he "learned it all from Henry Ford's book". If that were truly the case, why did the company that Henry Ford founded fail to "Keep Up With The (Dow) Joneses"? You'd think somebody at Ford would have at least one copy of Henry's book around, right?

In both cases the underlying cause seems to be a lack of understanding of the complexity of the business "mutation" that they were either pursuing or ignoring. 

In the case of the Dot Com bubble, a few extremely high profile IPOs (theGlobe.com) and company sales (Hotmail) demonstrated to the market that there was a vast new frontier of business opportunity on the internet. Unfortunately for the market as a whole, there was a lack of realization that the valuations of these high profile deals were based on a speculative model (% of market share) and not a traditional valuation formula (revenue times X). Establishing the value of a company based on market share alone is quintessential "betting on the come". This is not to say that it does not have its place - Microsoft's purchase of Hotmail is a classic example of where this sort of valuation makes sense.
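
To make the gap between those two valuation approaches concrete, here's a quick back-of-the-envelope sketch in Python. The numbers are entirely made up for illustration - they aren't taken from any actual deal:

    # Back-of-the-envelope valuation sketch (illustrative numbers only)
    annual_revenue = 2_000_000         # $2M in actual revenue
    revenue_multiple = 5               # the "revenue times X" traditional multiple

    projected_market = 10_000_000_000  # $10B projected total market (speculative)
    hoped_for_share = 0.10             # the 10% share you're "betting on the come" for

    traditional_valuation = annual_revenue * revenue_multiple   # $10,000,000
    speculative_valuation = projected_market * hoped_for_share  # $1,000,000,000

    print(f"Traditional: ${traditional_valuation:,}")
    print(f"Speculative: ${speculative_valuation:,.0f}")

With made-up numbers like these the speculative figure comes out a hundred times larger than the traditional one, and it only holds up if both the projected market and the hoped-for share actually materialize - which is exactly the bet most of the dot-coms lost.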

But the larger business community simply saw that the valuations of internet companies were shooting through the roof, and rather than taking the time to understand the complexity of this new frontier, companies settled for mimicking the outward appearance of moving to the internet. We all know how that played out.

With Ford and Toyota, the complexity comes from a different angle. Henry Ford was definitely on to something with his fledgling "Lean" practices. What Henry (and Ford as a company) failed to grasp is that the challenges facing manufacturers are not just static (waste in manufacturing processes) but also dynamic (a supplier is late delivering critical parts). Unfortunately for Ford and other US manufacturers, the next several decades would not provide them with the sort of environment that would force them to innovate to improve quality and productivity. This wasn't the case in Japan, where a weak post-war economy forced manufacturers away from reliance on mass-production economies of scale in order to remain in business.

When Japanese autos did show up on US soil, the US auto manufacturers initially failed to understand the different practices that Japanese manufacturers were following. By staying at a low price point and steadily improving quality, the Japanese manufacturers remained in the market when, by US manufacturer standards, they should have been out of business. US manufacturers like Ford failed the "Keeping Up With The (Dow) Joneses" principle, which set them up to be hit squarely by "Blink And You Missed It". By the time they realized that something was up, Japanese manufacturers had surpassed them in quality and were taking away significant market share - a state of affairs that persists to this day.

Hopefully by this point we have established that:
  1. The business community as a whole is quick to recognize and adopt practices that have a clear impact on "success", namely money.
  2. The business community as a whole is not so great at recognizing when a practice that has a clear impact on money is complex. As a result the tendency is to mimic the appearance of the practice as opposed to adopting the actual practice.
So what does that have to do with Agile? Good question. If you haven't already worked out where this is going, you'll have to wait until the next posting when I actually try to answer the question of "Why isn't Agile 'winning'?".

Wednesday, February 16, 2011

My "Ahas" from 10 Years Agile

The Salt Lake Agile Roundtable discussion group (Yahoo Discussion Group) has a little tradition that I've become quite fond of over the years - at the end of the meeting everyone in attendance has to provide at least one "Aha" moment (something from the discussion that stuck with you). Not only does it reinforce important concepts through repetition, but once you know you'll need to communicate these to others you start to pay more attention to the conversation. Which in turn gives you more "Ahas", and so on.

Once I discovered that I had somehow connived my way onto the invitation list for the recently completed "10 Years Agile" conference (#10yrsagile for Twitter fans), it was inevitable that there'd be some "Aha" moments from the conference that I'd want to share with whoever would listen.

At this point it's probably worth mentioning that my particular style of Agile is the Crystal family of methodologies developed by Dr. Alistair Cockburn. I became a fan of Alistair back in 2002 when we were speaking at the same local conference on software development, and over time that relationship has developed to the point where I consider him both my mentor and one of my closest friends.

I do have a real point to my blatant name-dropping. Alistair had asked me to publish my "Ahas" somewhere so he could do whatever it is that he wants to do with them. So with that in mind, here are my "Ahas" from 10 Years Agile.

1) Leadership and Vision - There was a lot of discussion around the concept of "leadership" in the Agile community. That makes sense, but it seems to me that discussing leadership without a parallel conversation about vision isn't going to get us very far. If anything, perhaps we should focus first on defining what our shared vision is, and then address the issue of leadership. Since it's clear that we've already got leadership in play (see "elephants" below), I am not so sure that more or different leadership is going to make as much of a difference as we'd like.

2) Defining and realizing "Value" - One of the most prevalent issues in the discussion was value - defining it, quantifying it, increasing it, you name it. The specific "Aha" that occurred to me was that we may be over-simplifying our discussion of value in relation to an Agile organization. The value of Agile means different things as you move between individual <=> team <=> organization, and the ways that you measure value for each are going to vary greatly. Job satisfaction as a value measurement means a lot to an individual, but not much to the organization. Return on investment is a key value for the organization, but how important is that to the individual?

3) Practices and Tools outside of development - One of the more interesting frustrations I had during the course of the conference was the seemingly pervasive attitude of "Agile is for software". At one point in the discussion a participant claimed that a team isn't Agile unless they have followed specific software testing practices. Later in the conference there was a heated debate about whether the term "engineering" should show up in one of the outputs of the conference. I am not debating the need for specific practices that will improve work products. What irks me is this: if we expect Agile to become a tool for the organization as a whole, don't the tools and practices of the CEO or the Marketing team deserve the same importance?

4) Elephants in the room - Of course we're talking about the prevalence of the "certification" entities as the most visible public face of Agile to the world. It was clear that there wasn't a lot of love in the room for the value of certifications as they exist today in the Agile world. What didn't seem to be so clear to others (or maybe I just didn't have the right conversations) was the fact that there isn't much we can do about them other than work with them or put out a better product. Take an organization like the Scrum Alliance. Are they going to change their business practices based on outcry from the rest of the Agile community? Not a chance. By the best definition of success we have in a capitalist system, they are successful because they are making money. If we don't like what they are doing, there's a proven formula for dealing with this. Compete with them and force them to change by way of a superior offering.

5) From Certification to Evaluation - The real problem that I see with #4 above is the fact that the business community is accepting certifications as if they somehow predict job performance or improve the chances of successful adoption. Unless you can show me the failure rate for people attempting to earn whatever title you're granting, I'm not the least bit interested in that certification. What amazes me is that the certification bodies themselves aren't more concerned about this. Without critical evaluation of the knowledge and skills being trained, how does the certification organization know how effective its educational effort is?

6) Does Agile really make a difference? - Not so much a question that showed up at the conference, but one that kept running around in my head, kicking over garbage cans and spray-painting the cat. As mentioned in #4 above, a capitalistic society provides some very clear measures of success, and is as Darwinian an environment as you can get for principles and practices that affect the bottom line. It's clear we've made excellent progress over the past 10 years, but it's still not so clear to me why businesses aren't beating down the door to really adopt Agile throughout the enterprise and not just be buzzword compliant. If we expect Agile to move from the dev team out to the larger organization we're going to have to be much clearer about defining and proving the real value of Agile as a better means of doing business.

Monday, February 14, 2011

"Agile Sadist". Really? REALLY?!?

Yes, really.

No, I don't show up at work decked out in a full leather jumpsuit, whip in hand, impatiently waiting for the first hapless developer to come scuttling by. If my professional colleagues and co-workers can be believed, I'm actually not that bad to work with on a daily basis, at least as long as there aren't any Dilbert-esque shenanigans going on in the organization.

Before I dive into why I'm risking guilt by association with one of the more "colorful" genres of adult entertainment, let's clarify something about what Agile means, at least to me. Around 4 or 5 years ago I was sitting with Jeff Patton, discussing whatever it was at the time that was interesting to us, when an "aha" showed up in our conversation. It seemed important enough at the time to warrant writing down, so we did:

It says: "Agile adoption isn't process adoption - it's culture adoption"

Culture is defined as "the behaviors and beliefs characteristic of a particular social, ethnic, or age group". Practices that do not align with underlying cultural principles are at best ineffective. Take the example of the stand-up meeting. How well does the stand-up give early warning of trouble if there isn't sufficient personal safety within the culture to allow someone to bring bad news to the attention of the group?

But I digress. Back to the point.

The "Sadist" term was chosen (very carefully, I might add) for two reasons. The first, and most important, was a realization that individual and organizational behavioral change isn't just influenced through positive means. Whether we like it or not, humans have evolved more mechanisms for changing behavior based on negative stimuli than we have for positive behavior. It doesn't take a three day certification seminar to teach you that fire isn't something that should be handled with bare skin.

Before you jump to conclusions here, understand that I am not advocating that your team be used for practice by novice hibachi chefs if they fail to deliver a release on time. But pain and discomfort, whether physical, emotional or intellectual, is an effective tool for changing behavior.

Not exactly a happy thought, is it? Rest assured that unless the organization that you're trying to effect change in is the military or some sort of terrorist group, physical pain isn't on the menu. To be honest, even the term "pain" isn't on the menu. When I talk about using "pain" as a means of effecting cultural adoption, what I really mean is "discomfort".

Here's an example. Once upon a time there was a small company that had more than one stakeholder for a specific software product in the same office as the development team. Each time a stakeholder needed something in the software, they'd go to an individual on the team and demand that it be done now, regardless of what was being worked on. Suffice it to say that there was a fair bit of thrashing going on.

To change this "back-channel feature request" behavior, I decided to make sure that the discomfort of shifting priorities was felt directly by the stakeholders and not just the development team. The stage was set by getting the stakeholders to agree to discuss any changes with the rest of the stakeholders before they could go into the current iteration. As each stakeholder had been bitten by changes before, they readily agreed to the change. I think it was a day later that a stakeholder came in with a request. Their argument of "it'll just take a minute" didn't dissuade the team from calling a stakeholder discussion on the spot. Not only did the stakeholders deny the change, they admonished the originating stakeholder for disrupting the team.

It didn't take many of these negotiations to change the behavior of the stakeholders.

The second reason for choosing "Sadist" is a bit more personal. More than once in my life I have been accused of unnecessary hyperbole to get a point across. Guilty as charged. What I've learned though is that hyperbole is a good tool for generating interesting discussions. If I had fashioned myself as an "Agile Organizational Discomfort Enhancer", I'd probably lose interest in what I had to say before I'd even finished forming my response. But as a self-professed "Agile Sadist", I can honestly say that there has been no lack of interesting discussions to be found in the Agile community.

Oh, and another interesting thing about hyperbole - it's sort of a hybrid intelligence/awareness test. Most people don't look past the hyperbole to see what is really there. For the few that do, it's nice to have them announce themselves by saying:

 "That's interesting. What do you really mean by that?".