Monday, April 2, 2012

Care and feeding of feedback loops

So, I have to brag just a bit. Frequent readers will recall that from time to time I post about the teams I am currently working with. For those of you suffering from "too long, didn't read" syndrome, I'll sum up the back story.

The mobile applications group I work with consists of five development teams - one back-end services team and four client platform teams (iOS x 2, Android, Mobile Web). Although each of these teams has some really top-notch people, they were challenged from a process perspective when I joined the group.

Our Android team was the first to launch with a structured methodology based on Alistair Cockburn's "methodology tuning" technique. The team is in the final stages of their first release using their shiny new methodology, and speaking personally, I really am quite proud of what the team has accomplished.

One accomplishment in particular was interesting enough that it warranted a write-up here. As mentioned above, the team is in the final stages of delivering the first release of a shopping application for a UK-based grocery store chain. As this was the first time our group was releasing a native mobile application for the UK market, the leadership team decided that formal usability testing was warranted.

An interesting thing about performing usability testing for an application to be deployed in the UK: the testing took place in - you guessed it - the UK. With the exception of the Product Owner, the whole development team is based in the US. Ideally we would have sent the whole team over to observe the usability testing, but given the distance the team decided to send their UX Designer over to represent the group.

The choice was a good one. The UX Designer took copious notes during the usability testing, noting equally the things that users were pleased with and the things that didn't work so well. These notes were handed off to the team at the close of the first day of testing, and the team immediately started poring over them.

Here's the good part. Once the team was through the UX Designer's notes, they decided that they were going to fix a number of issues immediately and have the updated application ready for users at the start of the second day of usability testing. And they delivered.

Note that I didn't say that the Product Owner requested the fixes, nor did the team responsible for usability testing. The decision to make the changes in time for the next day of testing was made entirely within the team. For those looking for some measure of team ownership of an application, you could do a lot worse than using this as a benchmark.

Day 2 of usability testing dawns, and a new, tested version of the application has been delivered to the usability lab. Prior to the team working in formalized iterations, it is questionable whether they would have felt confident enough to turn around a new release of the application within a single night. Several iterations of practice at delivering a working application to stakeholders (the team's definition of "done - done" at the end of an iteration) gave them the experience and confidence to crank out a new release in short order.

I know what you're thinking. "Wonderful, they can iterate and release quickly. Look how 'agile' they are. Whoopie.", right?

Well, yes. They are 'agile'. Right on the front page of the Agile Manifesto it says "Responding to change over following a plan". All of the iteration / release conventions the agile community holds dear are rooted in this principle.

But there's more to the story than that. Right below that "respond to change" line is "Customer collaboration over contract negotiation". What seems to be lost on most of our industry is the fact that the word "collaboration" means interact, not guess. Teams that interact directly with customers on a frequent basis are getting the single most important feedback there is - actual feedback from real customers using the application.

Because the team had improved their ability to deliver frequently, they put themselves in a position to take the best possible advantage of that critical feedback loop. Prior to this it was difficult (not impossible, but still difficult) for the US-based development team to interact directly with intended customers in the UK. In interviewing the team after the fact, it was clear they understood the value of that customer interaction, thanks to one of their own being on the spot.

Without the team directly involving themselves in the usability testing they would have received the resulting reports in due course, but they would have missed out on the opportunity to close that critical feedback loop with their customers - a feedback loop that was particularly difficult for the team to close prior to this usability testing event.

I think that the best news of all is that I can't take any credit for the team taking advantage of user testing in the way they did. I didn't even know what their plan was until I heard about it after the fact from the team project manager, Michael Buckland. Michael is much more than the project manager for the team - he's also taken over agile coaching responsibilities for the team, and has done a great job in that role. So much so that I may even forgive him at some point for being such a Scrum fanatic. 

Monday, March 12, 2012

Who says I ain't agile?

*sigh*

I really should know better by now, but I remain surprised that people in the Agile business seem to miss the fact that there is more than one agile methodology when they claim "You're not agile unless...".

The current subject of my angst is a blog posting (10 Signs that Your Team Isn't Really Agile). For the record, I actually do think it is a thought provoking commentary for teams that are beginning the adoption of Scrum and looking for guidance on whether they are coloring within the lines or not. 

Where I take issue with the article is the fact that the author cites examples of practices and techniques that are primarily associated with Scrum and claims that if you're not following them as prescribed, you're not agile. 

Well, I'm not quite ready to turn in my agile secret decoder ring just yet.

I chose to dissect this specific article because right off the bat it was a particularly egregious violation of the agile equivalency principle. Yes, I know it's a math term, just bear with my nerdiness for a moment here. 

Right in the opening paragraph the author states "With all the hype in the software industry about Agile, Scrum, and so on...". Or, in formulaic terms:

Agile = Scrum (false - Scrum is an instance of agile, not a peer)
Scrum = So on (false - 'So on' is undefined)

OK. Now that I've appeased my inner math nerd, let's get to the interesting part, dissecting the 10 points raised in the article. 

Important Note: It is entirely possible that I misinterpreted what the author wrote, so I'd invite you to read the original blog so you have both sides of the story. 

1. Your team is not responsible for the whole story.

In the article the author discusses both the fact that Agile teams should be cross-functional and that no value is delivered to the customer until the full story is delivered. All good so far. But then the author makes the statement that although it is possible to have a successful development process without cross-functional teams, it simply isn't agile.

Not quite.

Every agile methodology in current use has some means identified for scaling up teams. Many studies have concluded that the optimal Agile team size is somewhere between 4 and 8 members. Teams working on larger projects (especially within enterprise settings) will frequently own different parts of a single user story. As long as the teams deliver on their parts and collaborate as needed to ensure the full story is delivered, there is nothing un-agile about shared ownership of a story.

2. Testing is done by another team.

In the description it says "code complete is not done". Looks good so far. Next it states that if QA is another team the team isn't cross-functional. No argument here. Work that isn't tested and integrated isn't considered done. Also looks good.

But that's it. It isn't stated directly, but the implication certainly seems to be "if QA is a separate team, you're not Agile".

In checking the Agile Manifesto, I don't see a thing in there that says QA has to be on the same team. Is it a good idea? Absolutely - the whole point of frequent iterations is to maximize the feedback that the team can then act on to correct/improve their work. QA has pretty important feedback.

But to claim that a team isn't Agile unless QA is integrated with the team simply isn't true. A separate QA team does require more investment in communication and does slow feedback loops, but it does not prevent the team from delivering working software on a frequent basis, nor does it prevent the team from adjusting priorities as needed.

3. You are not INVESTing in your user stories.

INVEST is an acronym that identifies properties of a good user story. It stands for Independent (of other stories), Negotiable (in scope to fit in a sprint), Valuable (to end user), Estimable (by team), Sized appropriately (for a sprint), Testable.
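
To make the acronym concrete, here's a quick sketch with a hypothetical story - invented by me for illustration, not taken from the article - annotated against each INVEST property:

# A hypothetical user story (illustrative only), annotated per INVEST.
story = {
    "title": "Shopper filters search results by dietary need",
    "narrative": "As a shopper with a gluten allergy, I want to filter "
                 "search results to gluten-free products, so that I don't "
                 "have to read every label.",
    "acceptance_criteria": [
        "A gluten-free filter appears on the search results screen",
        "Only products flagged gluten-free are shown when it is enabled",
    ],
    "estimate_points": 3,  # the team's own sizing (illustrative)
}
# Independent: delivers value without waiting on any other story.
# Negotiable:  the exact filter UI is open to discussion with the customer.
# Valuable:    the narrative names a concrete benefit to a real user.
# Estimable:   the team understands it well enough to size it.
# Sized:       small enough to complete within a single sprint.
# Testable:    the acceptance criteria can be verified directly.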

It's a good acronym. But it assumes that the team is a) working in user stories, b) those stories are expressed in INVEST form, and c) the team is good enough at generating these stories so that the team can perform useful work on them.

What if you're not working in user stories? There are any number of alternatives (use cases, requirements documents, etc.) that seem to work just fine for many teams. Sadly, the author misses a golden opportunity to sell the ultimate cross-functional team solution - having the customer as part of the team itself. User stories are a record of a conversation (usually between the customer and product-oriented roles). If all of the roles (dev, QA, etc.) are right there for that conversation, it is less important to have a detailed record of it, especially if the customer is ON the team and can re-engage in that conversation as needed.

4. Tasks are assigned to team members by a manager

Blog: "In a truly agile team, each member can pick which tasks he or she will work on next, as long as it pertains to the next most valuable user story."

Yes, some teams do indeed work this way, assuming that the team members have a good understanding of what the "next most valuable" story is. Not all teams work this way. 

Many highly agile teams trust their project and product managers to do their job and properly queue up work for the team. Why do they trust product and project management to queue work properly? Because the team was directly involved in the requirements gathering and prioritization activity as it happened and was part of the decision-making process.

5. Team is told how much work to commit to

No argument here. If the team did not generate the estimates of work, they aren't valid, regardless of how much the stakeholders hope or expect to the contrary.

6. You are reporting rather than discussing your progress

There are not one but two assumptions made here about what an agile team should be doing. The article first mentions stand-ups and then goes on to claim that the questions inherent in the format are not there for the manager to keep tabs on the team, but to elicit the other team members' thoughts.

Stand-ups are a communication technique - specifically, a replacement technique for teams that do not have a better means of communication (e.g. osmotic communication). The three questions mentioned in the article are a further refinement of the stand-up technique, intended to teach teams what is important to discuss in the context of the whole team.

Don't get me wrong, I am a fan of stand-up meetings, and of the same question format outlined in the article. But to claim that a team isn't agile if they aren't doing both highlights a lack of recognition that there are other, equally valid communication techniques.

7. Not focused on completing the most important user story

This one is too vague to dissect in detail. The article never defines "most important"; it only states that if you don't focus on the most important task (however defined), you're not being agile.

In my experience there are many different ways to define "most important" - revenue, customer request, production fault, technical debt - the list goes on and on.

At any given moment you may have several "most important" tasks to choose from to work on, depending on the stakeholder perspective you are using. If your means of selecting work ignores less visible priorities (e.g. technical debt), you run the risk of making life much harder for the team as the application ages.

8. You changed to agile last year, and haven't changed a thing since

No argument here, especially considering the condition set up by the sign is a team new to agile. Teams that have been doing agile for some time may go for long periods without significant changes to their process, but adjustments to process for novice teams are critical to long term success. 

9. You are not ready to release on time with an incomplete scope

Overall, I agree with the author on this issue as well. In the article the author focuses on the value of done-done story completion and proper layering of the stories so that if the team isn't done with the current layer, they can deliver the previously completed layer and still realize business value.

But consider this. Not all software projects are driving to a specific delivery date. Take the gaming software company Blizzard Inc - their long-standing policy has been to deliver only when they have reached a certain level of feature completion. There is nothing inherently un-agile about basing your release on feature completion as opposed to a specific date, as long as your team is aware of and managing the trade-offs this approach presents.

10. You are not getting customer feedback every sprint.

This one is a bit misleading. My expectation on reading the sign was that the importance of frequent customer feedback (e.g. every sprint) was the key. But the author's explanation focuses on whether the team incorporates feedback from the demo into the upcoming sprint. If not, you're not agile.

Regarding the assertion that you are not agile if you are not incorporating that feedback into what you are building, I'm in total agreement.

I think the point could have been better reinforced if the explanation focused on the value of feedback from actual customers. Customer feedback is the single most important feedback loop there is. If you aren't getting direct feedback from the people (institutions, whatever) you expect to buy your product, how do you know if you are building the right thing? Don't assume that your customer research teams or marketing group are an adequate proxy for your customers - find a way to get real customers in a room with the team and watch them use the software.

In closing, as strange as it may look from the dissection above, I appreciate the fact that the author posted this article. Obviously we disagree considerably on where the boundaries of agile are, but without postings like his we miss the opportunity for dialog on what exactly this agile thing is that we are so passionate about.

Friday, February 17, 2012

Picturing cultural change

In a significant departure from my typically over-verbose style, I'm doing a quick vignette today on an interaction I saw occur with one of the teams that I am currently working with on their transition to Agile. 

The backstory on this is that one of the persistent pains for this team is a disconnect with QA, so the team agreed to shift their working pattern (culture) to incorporate QA as a direct part of the team, with a much more prevalent role related to work in the current iteration. 

What the team knew was that this should give them much shorter feedback loops on whether something needed to be fixed, and it should also greatly shorten the amount of time that the project would go dark in QA at the end of an iteration/release.

What teams experienced with multiple roles working together know about this closer collaboration: when you have close association with the work that a specific role is doing on a day-to-day basis, you learn about that role to the point where you naturally take on some of their work.

Rather than trying to communicate to the team every single advantage to having all of the roles working together closely, I kept the focus on the direct pain the team was trying to address. I knew that sooner or later the secondary effect of role "spreading" would happen, but not when.

The following picture is a screen shot of a conversation that took place between one of the UX designers on the project (Val G) and one of the developers on the project (Thomas H). I was impressed enough at how fast the team shifted their culture to realize this secondary benefit of close collaboration that I thought it warranted a blogging. 

Here's the conversation:


Friday, February 3, 2012

A Tale of Two Burn Charts

Within the last month we've launched two teams at Walmart Labs using methodology tuning as our means of incrementally introducing Agile. The new "process" introduced for each team was more or less the same:

1) Team will work in iterations
2) Team will create tasks from the user stories allocated to the iteration, and provide work estimates for tasks based on a simple time scale - 2 hours, 4 hours, 1 day.
3) Tasks are defined as "done" when code/assets/other is checked in and any acceptance criteria for the task have been validated. Iteration is "done" when the working code is deployed to stakeholders and QA has signed off on all tasks/stories within the iteration.
4) QA will work directly with the team, recording and completing their tasks in the same manner as the rest of the team, in addition to acceptance evaluation as needed on other tasks.

A few differences of note between the teams: 

1) Team A opted to try out 3-week iterations; Team B went for 2.
2) Team A had an initial training session on iteration planning separate from the actual iteration planning exercise; Team B had training (hastily) incorporated into the iteration planning.
3) Team A was primarily on-site; Team B was widely distributed.

For each team, the stories for the iteration were derived from existing sources, namely feature lists not articulated in typical story form. This was not an oversight: the teams did not have shared experience in working with Agile-style stories, so rather than impose an additional training burden on them we collectively agreed to work with the existing resources and revisit the story issue at the next reflection session.

One of the specific "pains" that the overall group was suffering was a lack of visibility into what was happening in the midst of development. This manifested itself in predictable ways - release schedule slips, lack of stakeholder knowledge of feature changes, etc. In addition to the methodology changes listed above that the teams agreed to, I made it a point to generate daily burn charts on the tasks the teams were performing during the iteration.
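
For the curious, here's a minimal sketch of the sort of script that can generate these charts. The day-by-day numbers and variable names below are purely illustrative (the real data came out of the team's tracking tool), but the three plotted series correspond to the legends on the charts that follow:

import matplotlib.pyplot as plt

# One entry per working day of a 3-week iteration (illustrative numbers):
# tasks still open at end of day, and new tasks added during that day.
days = list(range(1, 16))
open_tasks = [40, 44, 43, 41, 38, 36, 35, 33, 30, 26, 22, 18, 14, 9, 4]
new_tasks = [0, 5, 1, 0, 0, 1, 2, 0, 0, 0, 0, 0, 2, 1, 0]

# Ideal burn: a straight line from the initial task count down to zero.
initial = open_tasks[0]
ideal = [initial - initial * (d - 1) / (len(days) - 1) for d in days]

plt.plot(days, ideal, color="red", label="Ideal burn rate")      # red line
plt.plot(days, open_tasks, color="green", label="Open tasks")    # green line
plt.plot(days, new_tasks, color="blue", label="New tasks added") # blue line
plt.xlabel("Iteration day")
plt.ylabel("Tasks")
plt.title("Task burn chart")
plt.legend()
plt.show()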

Rather than provide a day-by-day running commentary of the iteration, let's skip to the end to see if the butler really did do it:

Team A Task Burn Chart:
Red line denotes ideal burn rate for the iteration
Green line denotes actual team completion of tasks
Blue line on bottom of graph denotes new tasks introduced during the iteration


Team B Task Burn Chart:
Red line denotes ideal burn rate for the iteration
Green line denotes actual team completion of tasks
Blue line on bottom of graph denotes new tasks introduced during the iteration


Based on the burn charts alone, guess which team had delivered working software at the end of their iteration. Go ahead - I'll give you a minute to make your guess.

...

Ready? It was neither. 

Surprised? Don't be. Burn charts are a tool for understanding what is happening during the course of an iteration, not a predictor of success. Even with experienced teams that have a track record of successful delivery, it is entirely possible to fail to deliver at the end of an iteration. For teams using burn charts for the first time, the charts serve only one useful purpose: raising the team's awareness of the sorts of underlying problems a burn chart can indicate once the team has some history of delivering in iterations.

Let's take a little walking tour of what occurred during each iteration for the two teams:

Stop #1 (Team A and B)
Note that early in the iteration both teams discovered a number of tasks not previously captured. For Team A it was 5 new tasks; Team B added 12. Considering that the purpose of the iteration is to provide a period of focus for the team by not allowing changes to the work during that time, adding new tasks certainly seems to indicate that new features showed up during the iteration.

What actually happened was that both teams realized that there were tasks missing from the stories they had agreed to deliver. As a result they added the new tasks to the iteration without realizing that they had invalidated their earlier work estimates and increased the risk that they would not deliver on time.

Stop #2 (Team B)
After an initial addition of 12 tasks, Team B continued adding tasks along the way, twice peaking at 4 new tasks in a single day. What is interesting about this influx of new tasks is that the team didn't detect the problem until it showed up in the burn chart, even though they had already diagnosed it in the initial "methodology building" reflection session.

Stop #3 (Team B)
Notice that throughout the burn chart there were only three days where the number of net tasks declined. A casual observer might interpret this as "the team isn't doing anything", but that wasn't the case here. Two factors were behind this seeming lack of progress. First, the lead developer (who had taken primary responsibility for managing task state within the tracking tool) was on vacation for a week. In his absence the rest of the team continued to get work done, but didn't update their progress in the tracking tool. Second, QA was not as aware of development progress and started their work later, which left QA tasks open well after the development work they trailed was finished.

Stop #4 (Team A)
If you look close to the end of the burn chart for Team A, you'll notice a small spike in new tasks two days before the end of the iteration. In this case the tasks in question were not likely to cause a delay in the completion of the iteration but they did represent a lack of understanding of the importance of focus during the iteration. Much like Team B on Stop #2, Team A had not yet developed an aversion to changing work during the iteration.

Stop #5 (Team A)
Look at the end of the burn chart. Notice how it concludes with 4 tasks remaining open? This wasn't a reporting oversight: there were 4 actual tasks that were not completed at the end of the iteration. All 4 of these tasks were work needing to be performed by QA. It would be easy to claim that QA lags behind development and accept that the iteration is complete, but one of the rules of our newly minted methodology was that QA would be performing their work alongside the other team roles, not trailing it.

End of Tour
Even though there was a huge disparity in the perceived progress of the two teams, the reality is that both teams finished the development work more or less on time. Team A had less overall work remaining to complete their iteration because of closer communication between Dev and QA. Team B had more work remaining because QA was not as aware of progress and started their work later. Team B also had to spend time correlating completed tasks with their relevant stories (OK, feature lists) to communicate to stakeholders what user-centric functionality had been completed in the iteration.

The moral of our story is that neither team failed - both executed against their tasks rather well, at least from the development perspective. Visibility definitely improved, as did both teams' awareness of previously hidden inefficiencies (the effect of late QA involvement, the lack of visibility into progress). Did they achieve total success in all of their corrective actions? No, nor was there ever an expectation that they should. The teams did learn something useful from the changes in their work and have already applied this knowledge to their current iterations to continue the process of improvement.

Monday, January 16, 2012

Crystal Base Jumping

I've just returned from the Blue Skies Boogie, an annual skydiving event that happens every January in Mesquite, NV. For the non-skydivers out there, think of a "boogie" in the same terms as a really cool software conference, with gravity as the keynote speaker.

As is my wont, I spent some time pondering the metaphoric similarities between my chosen passions - in this case software development and skydiving. It reminded me of a topic I had intended to write about a couple of years ago but never got around to until now.

Several months ago I picked up a book on BASE Jumping - the extreme sport of jumping off of anything that isn't actually flying around in the air and living to tell the tale. Since I'm already an avid skydiver, BASE jumping seemed a pretty natural next step in my nonstop quest to torture my poor worried mother half to death (at least that's how she puts it).

BASE (Building, Antenna, Span, Earth) jumping is to skydiving as skydiving is to climbing down a ladder. Sure, both will do the job of getting you down to the ground, but there's a world of difference in the ride.

A skydive is driven by a simple metric: altitude. At specific altitudes there are actions that must be taken in order for you to survive a skydive, let alone land unharmed. The penalty for failing to perform these actions quickly and decisively is as steep as it gets.

BASE jumping is no different in that it is ruled by the same altitude metric. It also requires actions that must be taken in order to survive. What makes the critical difference between the two is the time scale in which these actions must be performed. 

On a typical skydive you may have up to a minute of freefall time before you reach the altitude where you must deploy your parachute. For a BASE jump, 7 or 8 seconds is about the maximum amount of time you get. Not a whole lot of room for dilly-dallying, if you know what I mean.

I know, I know. I promised that this post would have something to do with software development. I promise we're getting there.

A few years ago a company I was working for was entertaining the leadership of an important medical society. We had previously been discussing the possibility of licensing some of our medical content for an educational application being developed for them by a third party. We were all under the impression that the purpose of the meeting was to close a content licensing deal, and were quite surprised to hear the representative from this society apologize to us because, even after a year of development, their educational application was not going to be delivered in time, thus eliminating the need for our content.

During a break in the conversation one of my peers said "What if we went ahead and built it for them? We seem to be pretty good at this web application stuff, right?". It was one of those perfect questions - the kind when you can just feel the world around you come into focus a little brighter and sharper than it was before.

The only catch was that the application had to be ready in time for a July launch, which at the time of this meeting was less than 3 months away. Now before you start thinking to yourself "This guy is a real glutton for punishment!", may I remind you to take a moment to review the name of this blog? Are you really shocked to hear that I've pointed said Agile Sadism at myself from time to time?

Anyway, it took us a little over a month to work out the contractual details of the project, leaving us with a grand total of 7 weeks left to build and deliver a working application.

OK, this is the clever bit where I tie our analogy into the story. Recall the relative metrics for skydives versus BASE jumps? If the previous team "cratered" their project after a year, how could we possibly think that we could nail our BASE jump after only 7 weeks?

Editor's note: The term "cratered" is a skydiving term used to indicate a landing where the skydiver failed to deploy a parachute. The management of this blog does not endorse this as an effective method of completing a skydive, even if you like to make a big entrance.

Successfully transitioning a methodology from a more "normal" delivery period to an extreme one isn't any more of an accident than successfully transitioning from skydiving to BASE jumping. Fortunately for us, Alistair Cockburn's Crystal provided a meticulously detailed plan for our Crystal Base Jump. Although we may be straining the limits of what text can be posted in a single blog entry, I think it is important enough to post it here in its entirety for the sake of others facing similar extreme projects.

Here it is:

Put smart, experienced people in a room and get out of their way.

That's it.

Actually, that's not it. Our success was predicated on the choice of the people that were in the room. Although my claim to intelligence and expertise is questionable, my primary teammate, Nate Jones, certainly knew what he was doing.

What was most interesting wasn't how we were working, it was the rate at which we'd adapt how we were working. Our "methodology" could literally change within the space of a few hours depending on the current circumstances of the project. It is my belief that what really made the difference for us beyond our ability to deliver software is that we shared a common domain language for our methodology, allowing us to shift process with minimal signaling - usually just naming a technique would do the trick.

I know, you read all the way down here, thinking that there'd be all sorts of information on how to do your own "Crystal Base Jump" imparted. I don't want to leave you completely empty handed, so I'll close with a quick discussion of some of the key techniques we used to help achieve success. 

Customer Negation - We all have deep familiarity with the phrase "The customer is always right". Well, this isn't the case in a Crystal Base Jump. It's not that they were wrong; we simply couldn't afford to commit any time to cycling on customer feedback. At the outset of the effort we identified their "without which, not" features that had to be delivered, and spent some time understanding the amount of play we had with the implementation of those features. Once we had clarity on these features we literally went dark on the customer for the remainder of the development period. The next time they saw anything related to the application was a day or two before we went live. This practice of "customer negation" was actually spelled out in the legal contract.

Trim the Tail - I first heard about this technique from Jeff Patton, who I am sure will correct me if I am improperly identifying him as the originator of the technique. Traditionally coders have a tendency to consider a given feature or set of features as an "all or none" proposition - either they are delivered at a specific level of fidelity or they aren't considered "done". 

In Trim the Tail you look at feature implementation as more of a staged effort. Initial delivery of the feature would be at the "Yugo" level: it gets the job done, but ain't pretty. Subsequent work in the same feature area would enhance the feature(s) to the intended level of completion. In the case of our Crystal Base Jump project, not only did we have a number of user-centric features delivered in a minimal state, some elements of our technology stack were also Yugos at delivery time.

Walking Skeleton - If Trim the Tail is a tactical tool, Walking Skeleton is the strategic one. Also learned from Jeff Patton, this practice focuses on the implementation of a minimalist set of working features across the breadth (all major functional areas of the application) and depth (all layers of the technology stack).

Use of the Walking Skeleton technique gives you early visibility into the complexity of implementing the full feature set as well as early experience in how stable the technology stack is for the application. Walking Skeleton is a great insurance policy against schedule devouring features and late discovery of technology stack instability.

Customer Proxying - Customer Proxying comes into play in situations where for whatever reason the team does not have easy or frequent access to actual customers. In our case the lack of customer access was quite intentional, but it didn't mean that we didn't care about what was important to them. 

In our case team members had considerable experience with thinking in terms of customer Personas and were able to channel these personas as needed during the development process.

Incremental UI - Although this technique is in theory a natural extension of Trim the Tail / Walking Skeleton, it does bear mentioning because of the specific positive impact it had on our project. Introduced by Nate Jones, this technique is the most elegant incremental delivery of UI fidelity I've ever seen. 

It goes something like this.

1. Paper prototyping to confirm basic user story fulfillment
2. Clickable prototypes to confirm navigation and user story details
3. Incremental integration of application stack against clickable prototypes with low fidelity UI formatting
4. Pretty pixel UI formatting and UI finalization

The advantage of the technique was that from the very start we had a consistent walkthrough across the user interface that was the basis for easily demonstrating application development progress. This incremental delivery kept the UI and application stack development highly cohesive while allowing each to proceed more or less independently of each other.

To be sure, there were other techniques and tools we used in the course of our Crystal Base Jump, but these were the ones that stood out the most during the heat of battle. 

Final thoughts on Crystal Base Jumping. Don't do it. Seriously. Much like the real thing, you'll fail unless you have the right experience, skills, and team. Unlike the real thing, the price you'll pay for failure isn't death, but a failed project after such a Herculean effort is going to hurt. A lot.

Monday, January 2, 2012

Fixing New Year's Resolutions with Agile

December 31st, 2011. 11:25PM MST: 

For what seemed to be the 17 millionth time I was asked about my New Year's resolutions for 2012. Before I could summon up the energy for yet another foaming-at-the-mouth diatribe against the futility of this custom I realized that my mouth-foaming was sounding suspiciously familiar.

It's an exceedingly rare occurrence when you actually listen to your own rant. Was it the champagne that had elevated my consciousness to this rarified state? Or perhaps it was the euphoria of observing all of the lovely ladies in attendance at this party that had perfected the art and science of the "little black dress"? Whatever it was, it was working.

Here's what I heard when I actually started paying attention to my little rant:

A) New Year's resolutions almost always fail
B) Why only make these life-altering changes once a year?
C) Even if you do make progress towards a resolution it is considered a failure if total success isn't achieved.
D) Resolutions are a conspiracy perpetrated by the fitness and weight loss military-industrial complex, which channels the funds gained from the brainwashed "Resolutionist" masses into advertising for fast food and big screen TV's, which in turn subvert yet more of the masses into their nefarious vicious consumerism cycle.

OK. I made that last one up. I really have to find a way to turn the channel when one of the cable stations plays a "Conspiracy Theory" movie marathon late at night.

Anyway, if you look at the first three points, they seem awfully familiar to anyone that has been on big software projects, don't they? Let's take a closer look:

1) New Year's resolutions almost always fail = Big (as in budget and/or time to delivery) software projects almost always fail
2) Why only make these life-altering changes once a year = Why limit releases to once a year or so?
3) Progress towards a resolution is forgotten if the full resolution isn't achieved = New features/improvements implemented early sit unused until the whole release is delivered

The epiphany for me isn't some new insight into the software business (I'm sure some of the repeat visitors here would be shocked if I ever come up with some new insight into the software business).

No, the epiphany is in how we deal with our New Year's resolutions. Forget this dysfunctional "all or nothing" tradition we keep torturing ourselves with. Let's take a hint from the software business and do our New Year's resolutions the Agile way.

Since we're talking about a team size of one, we can really strip Agile down to bare metal:

1) Iterate/deliver frequently
2) Reflect and adjust

Let's test this out. Say that under the old repressive regime of "Yearly Resolutions" I'd set a big goal for myself - such as learning how to become an Ultimate Pickup Artist (UPA). Although there is a strong romantic appeal to throwing myself into such a noble goal with abandon, what I'm really interested in is results. After all, what if I'm really not cut out to be an ultra-babe magnet?

With an Agile approach to my New Year's resolution of becoming a UPA, I now have an extensive toolbox of techniques and practices to bring to bear. One of the first I'd reach for is a little gem called "fail fast".

In this case, fail fast means two things. First, can I bear uttering inane pickup lines with a straight and sincere facial expression? Second, do I have what it takes to endure the wrath of women offended by lewd suggestions in pursuit of the small percentage that are either over-medicated or have otherwise taken leave of their senses to the point where they'd be smitten with a tawdry pickup line?

Right there is more than enough work for a first iteration. Assuming monthly iterations, the theme for January would be to "fail fast" and my work product would be to practice my pickup line delivery and insensitivity to criticism and/or physical assault. Heck, with a bit of careful planning I may even be able to work towards both goals at the same time.

By the end of January I'd have ample data to reflect on whether I'm socially and morally corrupt enough to be a truly stellar UPA. 

If it seems I have what it takes, I'd be ready to take my next incremental steps in February towards my goal.

 If not, I'll take comfort in the fact that I didn't waste a lot of time and energy in finding out that contrary to appearances, I do have some vestige of a conscience lurking somewhere and can instead focus on pursuing a more appropriate personal resolution, such as honing my gender sensitivity skills.

In conclusion, although you may not agree with my choice of resolutions, you can't argue with success. If you really want to make those New Year's resolutions permanent, you gotta go Agile!

Editor's note: No actual females were harmed during the manufacturing of this blog entry. Our brave volunteer test reader (Ghennipher Weeks) did suffer some emotional trauma in the line of duty, but with just a few short months of intensive therapy, she should be just fine.