Tuesday, December 10, 2013

Learning against my will. Oh, and there was this Lean Startup Conference thing...

Sometimes this learning thing isn't all it is cracked up to be.

Take today's example. I found myself down at the Miller Business Innovation Center to watch their local webcast of the Lean Startup Conference happening this week in San Francisco. I was all set to settle in, have a couple of epiphanies, and then knock off for the day with a new "I learned something" badge earned.

Wrong.

I put at least some of the blame on Alistair and the Salt Lake Agile Roundtable gang. It was from them that I picked up the habits of a) always being ready to articulate at least one learning moment from any learning event, and b) actually writing down your learning moments so they are reinforced while they are strongest.

I knew I was in trouble about 5 minutes after the first session started. It was Kathryn Minshew - "Getting Users Out of Nowhere". Normally I'd coast through a topic like this with my mental cruise control on. Not because it isn't interesting or relevant. It is. But that's not an area where I've often been at a loss for ideas, so I hadn't pursued learning more.

Well, you know the universe loves moments like this. Or at least the universe that I inhabit certainly does. Since I'm a fan of Norse mythology I can picture with amazing clarity Odin sharpening up Gungnir while Thor is off to the side giving Mjolnir a last bit of polish while glaring in my general direction.

Ok. Maybe it wasn't Götterdämmerung. But what did happen was that a part of my brain that doesn't like the mental cruise control slammed on the brakes and said "Hang on! Did you hear that?!?"

In this particular case it was Kathryn's discussion of how to become your own PR machine. Upon hearing that I dutifully recorded the "aha" moment (yes, it's below if you really want to know). With that aha safely recorded I patted myself on the back and prepared to settle back into cruise control.

Never got there. That same part of my brain that slammed me out of cruise control said "What if she said something else interesting before this?". Muttering vague obscenities under my breath I called up my mental playback department to find out if there had been anything else interesting that I had missed the first time through.

And of course there was. The other tidbit was about making it insanely easy for others to engage in word of mouth advertising for you.

After admitting to myself that I had let a learning moment actually slip by there was no chance that I'd be able to descend back into mental cruise control. I was saddled up for the full ride.

And what a ride it was. I recall actually being irritated by the end of the webcast at the audacity of the conference organizers to schedule that many impactful sessions in a row. Whatever happened to tossing a few stiffs in there so you could have time to wander over to the mental dressing room and see how well the new ideas fit?

I'm glad to say my irritation was short-lived. Yes, the learning experience had a lot to do with my mood alteration. So did this parting shot from my mental brake-slammer:

 "Gee, you're not getting OLD, are you? Only OLD people stop learning"

In case you thought you were going to get out of this blog entry without having to hear me blather on about the actual things I learned (or re-learned), think again.

I will give you this warning: context counts in learning moments. If you don't have a shared context for a new idea, it won't have the same impact. I'll do my best to convey the context, but whatever I have to say will be no substitute for having your own learning experience with these presentations. Hopefully they will be posted publicly at some point so you can do just that if you missed the conference.

What I learned (or re-learned) today from the Lean Startup Conference:

Kathryn Minshew (The Muse) - Acquiring Your First Users Out of Thin Air

  • "Ask for word of mouth from your customers and make it insanely easy to do". It was the insanely easy part that got me thinking.
  • "Tell a good, clear, easy-to-report story". When you're working with bloggers / news outlets to get your message out stories are much more effective than data or marketing statements. Those stories should whenever possible relate to current events / trends
  • (Regarding her advice to offer to write posts / articles for blogs and news outlets) "Harvard Law Review rejected my offered stories for a year before they accepted one. Persistence matters."

Alexis Ringwald (LearnUp) - How to Build the Product When You're Not the User
  • Her team spent 5 months visiting people in unemployment lines learning what the people there needed.
  • Having a "Contact Us" button was not enough - they built a way to stay engaged with their users.
  • The whole team recently took a field trip to Mendota, CA (39% unemployment) to continue to learn about what was and was not working.

Daina Linton (Fashion Metric) - Work With Customers Before You Write Any Code
  • Their initial idea (having a fashion expert available to advise while shopping) wasn't what people needed. Using open-ended questioning, they learned that what men needed was a way to find clothes that actually fit.
  • By using a Concierge model (human in the loop) as opposed to building applications they were able to go from validating the new idea to confirming a return rate of 0% on shirts sold via their fitting model. 
  • Their initial product was a web site offering a better shirt-fitting experience. The only call to action was a request for permission to have a 10-minute phone conversation about what the customer needed.

Christie George (New Media Ventures) - Funding for Lean Impact
  • Applying the principles of "Fail Fast" and "Fail Forward" to social challenges means that real people are failed. Make sure the failure stays where it has positive value.
  • Social change takes a long time (interracial marriage as an example). Social startups can accelerate those feedback loops only so much.
  • Social funders (charitable organizations) are very risk averse. They need to find a way to incentivize risk-taking.
  • Create a culture where the truth is incentivized (example given was Unreasonable Institute's practice of reporting their investment decision failures).

Ari Gesher (Palantir) - Preparing for Catastrophic Success
  • The hard things to deal with are getting real estate and leadership. Both require at least a 6 month horizon. Only the leadership part is interesting (his words in the presentation).
  • Want internal leaders? Pick your leaders, then give them a few people to lead immediately. Support them with a forum where they can discuss their challenges and problems. Then let them lead.
  • In the midst of growth is no time to build command and control leadership. Leaders need to be communication nodes, facilitating communication for their teams, coordinating with other teams.
  • Culture is an emergent property of the people you hire. Thinking "Will this person fit the culture?" is not as important as thinking "How will this person enhance the culture?"
  • Indoctrinate culture from the day people start. Don't leave it up to chance.
  • Final thoughts - "If it hurts, you're doing it right"

Steve Blank (Stanford / Berkeley / Columbia) - Evidence-based Entrepreneurship
  • Steve scared me a bit. His presentation centered around the fact that there are now actionable metrics around startups, which on the surface seems a good thing. But there's no lack of cautionary tales of metrics gone wrong...
  • Startups are not smaller versions of companies. They are temporary organizations that exist to find a scalable business model. Only when they find that model can they become "companies"
  • His template for a startup is a Business Model Canvas + Customer Development Lab + Agile Engineering and Dev methodology
  • His Lean Startup class at Stanford is experiential - you will talk to at least 100 customers during the 10 week course.
  • The NSF approached him to put together a class for them. That grew into a metrics framework for observing the progress of teams.
  • Having that metrics framework allowed Steve to come up with the "Investment Readiness Measurement" - a means of determining what pre-funding stage a startup is in. Modeled on the NASA Technology Readiness Levels.

Nikhil Arora & Alejandro Velez (Back To The Roots) - Using Kickstarter to Run an MVP
  • After starting with a "Grow your own mushroom" home kit they decided to bring an aquaponics idea to market - a system where fish waste was used to grow a small herb garden. Turned to Kickstarter to fund it. Had $248,000 in funding within 30 days.
  • They learned along the way the cost of rushing to market (a faulty pump, unclear packaging). The lesson: it was less expensive to deal with those things before going to market, not after.

Keya Dannenbaum (ElectNext) - Learning to Be an Organization that Pivots
  • One of the most common statements about entrepreneurship is "Follow Your Passion". It's wrong. She followed this up with data on the effect of emotional passion (it burns out) versus dedicated practice. Of the two, practice is what matters, not passion.
  • Their first MVP took 18 months to develop, and it is no longer in use. Their second took 6 months, and their third took a few weeks. Only the third has proven out a repeatable revenue model, and the only "product" is Google Docs and an email address database.
  • But those first two products are not failures. They were vital practice for their team to get better at what matters as a startup. Previously there was a lot of fear associated with pivots. Over time the pivots became something embraced - it meant progress towards a goal.

John Shook (Lean Enterprise Institute) - Lean Startup--From Toyota City to Fremont to You
  • John started off by following the lead of the MIT research team that spawned the Lean Institute. He wanted to work for a Japanese car manufacturer to learn their management methods. Only one would hire him - Toyota. At the time he was the only westerner working at a Japanese auto plant.
  • His statement about Lean to the whole group was powerful. "This (Lean) is a Learn By Doing process. You learn by doing it. With respect to everyone here, attending conferences and reading books will not get you there".
  • In 1 year the factory selected for the joint GM/Toyota venture went from last in quality to first (UAW rated)

Brad Smith (Intuit) - Lean Leadership Lessons
  • This was by far my favorite session. Of course I am defining "favorite" as "The one story I'd most like to find out if they are full of BS or really meant what they said".
  • Creating an innovation culture was a deliberate effort and takes buy-in across the whole company. The function of management is most importantly to "Find a way to get to Yes".
  • They host an annual incubation week - teams are allowed to work on anything they see fit, company has resources deployed to assist them (legal, etc).
  • Their projects are classified into three tiers - Horizon 1 (immediate revenue generators), Horizon 2 (established business models with actual customers, needing to be curated into a full offering), and Horizon 3 (the idea/MVP area).
  • Their team cultures are different based on their tier of product. And they encourage that to be the case.
  • Most interesting to me was the acknowledgement that the Horizon 3 products require a more risk-tolerant environment to thrive, and Intuit delivers it to them.
  • Hugh Molotsi has been their primary idea guy over the years and has a seven-figure incentive package to keep him from going anywhere else. What other company is that proactive about keeping its best idea people?


Monday, November 18, 2013

Of Liaisons, ball bearings, Fetzer valves and failures

There's a scene from the movie "Fletch" where Chevy Chase, posing as an aircraft engine mechanic, is being grilled about why he'd need ball bearings. His reply: "Come on guys, it's all ball bearings these days!"

No, I'm not going to talk about ball bearings or even Fetzer valves (inside joke for the Fletch fans out there).

To paraphrase, "Come on guys! It's all patterns these days!"

But before I get to that, a bit of history. First, the earth cooled. Then dinosaurs roamed the planet.

Too far back? OK. We'll just set the wayback machine to 1977.

An architect by the name of Christopher Alexander published a book titled "A Pattern Language: Towns, Buildings, Construction". Although reports vary on the influence it had on the architecture industry, it is commonly cited as the inspiration for design patterns in software architecture, interaction design and many other fields.

So what was the big deal? He introduced the concept of a pattern language. Pattern languages gave a structure to identifying common problems, along with their best-practice solution(s), in specific disciplines. This allowed valuable information about common problems and solutions to be communicated much more rapidly and reliably than ever before.

As a software architecture-ish kinda guy, I cut my teeth on the Gang of Four's Design Patterns book. Then as I started looking around at other disciplines I realized that they had their own pattern languages, and if I took the time to understand them I'd have a much better understanding of the problems and problem-solving approaches of those other disciplines - think of it as an abstract version of the Rosetta Stone.

Imagine my delight when I discovered that there were organizational patterns - a pattern language devoted to solving problems in organizational structures, something we have to do from time to time in this business.

Odds are good that you're very familiar with at least one of these, even if you don't know it as an organizational pattern. Ever hear anyone say "We're running a skunk works team"?

Yep. It's an organizational pattern.

During a recent Agile Roots 2014 planning session I was talking with Lory Maddox (RN extraordinaire and mother of Ruby guru and Clean Coder evangelist Pat Maddox) about her current job, which is taking new patient interaction innovations and operationalizing them to work within the highly regulated environment of clinical medicine.

As she was telling a story about how her team averted a logistics nightmare by stopping the release of a new communication solution that had not yet been adapted for deployment into that highly regulated environment, I realized that her team (whether they realized it or not) was implementing an organizational design pattern I like to call "Operationalizing".

Operationalizing: A team with deep experience in a specific (usually highly regulated) environment that has the skills and the responsibility for adapting new solutions (software, process, etc) in such a way that the new solution can be deployed successfully within the target environment.

Note: I have not found definitive literature on the name of this pattern, so I get to call it what I like for now. Comment if you can point me to something authoritative.

I know what you're thinking. No, we haven't gotten to the actual point yet. We're close, I promise! Don't get me wrong, the concept is very interesting, but it isn't the one I wanted to talk about. So why did I even take the time to mention it?

Well, because thinking about how the Operationalizing pattern would play out was the trigger for putting my finger on how I've failed to properly apply the Liaison pattern.

Let's go back to the SkunkWorks pattern for a moment (quietly, we don't want Lockheed suing us).

The SkunkWorks pattern is a very common organizational approach these days to attempt to solve the problem of innovation. Look at any large company you'd care to point to. Odds are very high that they have at least one SkunkWorks team running, maybe more. My former employer Walmart was no exception.

When I was with Walmart this was the organizational pattern that we used to restructure the Mobile Applications group. Or was it?

On the surface, yes. But we broke a few key rules of the SkunkWorks pattern - we did not completely segregate the team from the rest of the organization, nor did we completely decouple our infrastructure from it. Although I'm of the opinion that we could have done more to segregate the team, the same was not true of the infrastructure. Because we were building live mobile apps that customers were using to buy real things from Walmart, we didn't have a lot of choices regarding our production infrastructure.

And this is where the Liaison pattern comes into the story. If you were paying attention a few paragraphs back, you'll have noticed that I used the dreaded "F" word in reference to myself.

One of the first external teams I met with was the InfoSec group - the team responsible for ensuring that all production code met industry and company security standards. As you'd expect in any large organization, the InfoSec team was a single siloed team that evaluated and validated the output of several teams, including our Mobile Applications group.

Here's the sad part. In my first meeting with them I explained how we were departing from the previous practice of months-long release cycles to a much faster cadence. When they indicated that their process didn't allow for that fast a cycle, I told them about the Liaison pattern and how it could be used to great success. After confirming that they all liked the idea, I ensured failure by making the foolish assumption that they'd actually do something about it.

What I should have done in that situation was to put "find a way to get InfoSec to give us a liaison" at the top of my priority list.

Back to the Liaison pattern. Never heard of it? You'd have to be a student of military organizational logistics to have come across the same definition that I have. In military circles, a liaison is a person (or persons) who is a member of one group within a military organization but is assigned to another group for the express purpose of helping that group interact effectively with their "home" group. The pattern is most often implemented in situations where one group operates at a much faster pace than the other (e.g. Special Operations teams conducting operations in the same areas as conventional forces).

Examples: Forward Air Controller, Liaison Officer

Had I made it a priority to work with InfoSec to have a member of their team assigned to ours, it probably would not have made much of a difference in the early days of that engagement, mainly because we were successful at pushing back on their process thanks to Executive Sponsorship (yet another organizational pattern). Where it would have made a huge difference was when that executive sponsorship shifted and we were no longer able to keep InfoSec from slowing down our cycle time.

By the way, that whole slowdown thing? That's an organizational pattern too - Organizational Antibodies. I'll be talking about that one in a future posting.

The moral of the story - Organizational Patterns! Learn them, learn to use them if you care about being effective. Most importantly, learn how not using them will have a negative impact on your team(s).

Tuesday, October 29, 2013

Getting political with Agile Sadism

I was on my way home from a medical appointment the other day and for some unknown reason decided that I needed to have a little talk radio background noise. After tuning into the local NPR station I was all set to disengage my thinking brain from my ears when I heard the interviewee say "Without common pain there is no chance to achieve real change in an organization."

Cue cinematic double-take record scratching sound.

Did I really just hear that? On an NPR talking heads program?

After a little research I discovered that the talking head in question was no less than Mike Leavitt - former Governor of Utah (among many other positions I didn't bother to go look up) - and that the point he was making came from his new book, "Finding Allies, Building Alliances".

For those of you worried about being bored to death by a book report, fear not. I have yet to pick up the book. You'll just have to get by with being bored to death by my thoughts on what I learned from that interview.

If you haven't looked at the title of the blog recently, this would be a good time to do so. Go ahead - I'll give you a minute.

Done? Great. As you'd imagine, the fact that pain is an effective tool in the political arena isn't a huge surprise. What was a surprise was that he acknowledged it as such. An even bigger surprise was that he went on to describe a situation where his approach failed.

See why I was surprised? Hearing a politician (or at least a former one) acknowledge not only the fact that he's applied "Machiavellian" techniques to get things done but actually admit that he experienced failure was the true eye-opener for me.

Sounds a lot like a reflection session, doesn't it?

What he "learned" from his failure was that not all of the stakeholders in a particular situation felt a common pain. As a result that same stakeholder ended up derailing the effort because they were better off maintaining the status quo.

Of course we'd never see such a thing in the software development industry, would we?

It is always instructional to see how similar problems are solved in different contexts. As this was the first time I've heard of a formalized approach to solving difficult problems in the political world I wanted to see what could be applied to my industry. Fortunately I was able to find the "Too Long; Didn't Read" book summary to get the big picture:

1. A Common Pain—a shared problem that motivates different people/groups to work together in ways that could otherwise seem counterintuitive. 

Absolutely. Of course no virtuous person would ever consider actually manufacturing these common pains to get things done, right?

2. A Convener of Stature—a respected and influential presence who can bring people to the table and, when necessary, keep them there.

If I were translating these points into the software domain (which of course I am), I'd say this is the equivalent of the concept of an Executive Sponsor.

3. Representatives of Substance—collaborative participants must bring the right mix of experience and expertise for legitimacy and have the authority to make decisions.

This was a good reminder to me that the twin characteristics of recognized experience and decision making authority are key for all participants.

4. Committed Leaders—individuals who possess the skill, creativity, dedication and tenacity to move an alliance forward even when it hits the inevitable rough patches.

Since problems within the software domain tend to be smaller in scope than the challenges government groups face, I'd say that we'd see this be the same group as in point 2.

5. A Clearly Defined Purpose—a driving idea that keeps people on task rather than being sidetracked by complexity, ambiguity and other distraction.

Definitely an important point for our industry considering how common it is for our teams to have split responsibilities.

6. A Formal Charter—established rules that help resolve differences and avoid stalemates. 

I've decided I don't know how I feel about this one yet. My initial reaction is that establishing rules is counter-productive to solving a problem quickly, but as I thought about it more I realized that the teams that I'd point to as not needing rules were actually operating under a self-generated set of rules learned from prior experiences.

7. The Northbound Train—an intuitive confidence that an alliance will get to its destination, achieve something of unique value, and that those who aren’t on board will be disadvantaged.

This was probably my most valuable takeaway from this list. When you think about it, it is nothing more than a "pain" generated by the problem solving process. But it is also a particularly wide-reaching pain that, if properly applied, can reduce the "rejection rate" of the solution across the wider organization.

8. A Common Information Base—keeps everyone in the loop and avoids divisive secrets and opaqueness.

This is not exactly news for our software world (Information Radiators, Sunshine, etc.), but it is a good reminder for me, as I have a tendency not to put as much effort into these tools/processes as they deserve.

So I'm sure at some point I'll pick up the book and read it. It's probably worth your time to do the same, especially if you are (as I am) a student of seeing similar problems solved in different environments.

Monday, April 2, 2012

Care and feeding of feedback loops

So, I have to brag just a bit. Frequent readers will recall that from time to time I post about the teams I am currently working with. For those of you suffering from "too long, didn't read" syndrome, I'll sum up the back story.

The mobile applications group I work with consists of five development teams - one back-end services team and four client platform teams (iOS x 2, Android, Mobile Web). Although each of these teams has some really top-notch people, they were challenged from a process perspective when I joined the group.

Our Android team was the first to launch with a structured methodology based on Alistair Cockburn's "methodology tuning" technique. The team is in the final stages of their first release using their shiny new methodology and speaking personally, I really am quite proud of what the team has accomplished. 

One accomplishment in particular was interesting enough that it warranted a write-up here. As mentioned above, the team is in the final stages of delivering the first release of a shopping application for the UK-based grocery store chain. As this was the first native mobile application our group released for the UK market, the leadership team decided that formal usability testing was warranted.

The interesting thing about performing usability testing for an application to be deployed in the UK is that the testing took place in - you guessed it - the UK. With the exception of the Product Owner, the whole development team is based in the US. Ideally we would have sent the whole team over to observe the usability testing, but given the distance the team decided to send their UX Designer to represent the group.

The choice was a good one. The UX Designer took copious notes during the usability testing, noting equally the things users were pleased with and the things that didn't work so well. These notes were transferred to the team at the close of the first day of testing, and the team immediately started poring over them.

Here's the good part. Once the team was through the UX Designer's notes, they decided that they were going to fix a number of issues immediately and have the updated application ready for users at the start of the second day of usability testing. And they delivered.

Note that I didn't say that the Product Owner requested the fixes, nor did the team responsible for usability testing. The decision to make the changes in time for the next day of testing was made entirely within the team. For those looking for some measure of team ownership of an application, you could do a lot worse than using this as a benchmark.

Day 2 of usability testing dawns, and a new, tested version of the application has been delivered to the usability lab. Before the team worked in formalized iterations, it's questionable whether they would have felt confident turning around a new release of the application within a single night. Several iterations of practice at delivering a working application to stakeholders (the team's definition of "done-done" at the end of an iteration) gave them the experience and confidence to crank out a new release in short order.

I know what you're thinking. "Wonderful, they can iterate and release quickly. Look how 'agile' they are. Whoopie.", right?

Well, yes. They are 'agile'. Right on the front page of the Agile Manifesto it says "Responding to change over following a plan". All of the iteration and release conventions the agile community holds dear are rooted in this principle.

But there's more to the story than that. Right above that "responding to change" line is "Customer collaboration over contract negotiation". What seems to be lost on most of our industry is the fact that the word "collaboration" means interact, not guess. Teams that interact directly with customers on a frequent basis are getting the single most important feedback there is - actual feedback from real customers using the application.

Because the team had improved their ability to deliver frequently, they put themselves in a position to take the best possible advantage of that critical feedback loop. Prior to this it was difficult (not impossible, but still difficult) for the US-based development team to interact directly with intended customers in the UK. Interviewing the team after the fact made it clear how much they valued that customer interaction, thanks to one of their own being on the spot.

Without the team directly involving themselves in the usability testing they would have received the resulting reports in due course, but they would have missed the opportunity to close that critical feedback loop with their customers - a feedback loop that was particularly difficult for the team to close prior to this usability testing event.

I think that the best news of all is that I can't take any credit for the team taking advantage of user testing in the way they did. I didn't even know what their plan was until I heard about it after the fact from the team project manager, Michael Buckland. Michael is much more than the project manager for the team - he's also taken over agile coaching responsibilities for the team, and has done a great job in that role. So much so that I may even forgive him at some point for being such a Scrum fanatic. 

Monday, March 12, 2012

Who says I ain't agile?

*sigh*

I really should know better by now, but I still remain surprised that people in the Agile business seem to miss the fact that there is more than one agile methodology when they are claiming "You're not agile unless...".

The current subject of my angst is a blog posting (10 Signs that Your Team Isn't Really Agile). For the record, I actually do think it is a thought provoking commentary for teams that are beginning the adoption of Scrum and looking for guidance on whether they are coloring within the lines or not. 

Where I take issue with the article is the fact that the author cites examples of practices and techniques that are primarily associated with Scrum and claims that if you're not following them as prescribed, you're not agile. 

Well, I'm not quite ready to turn in my agile secret decoder ring just yet.

I chose to dissect this specific article because right off the bat it was a particularly egregious violation of the agile equivalency principle. Yes, I know it's a math term, just bear with my nerdiness for a moment here. 

Right in the opening paragraph the author states "With all the hype in the software industry about Agile, Scrum, and so on...". Or, in formulaic terms:

Agile = Scrum (false - Scrum is an instance of agile, not a peer)
Scrum = So on (false - 'So on' is undefined)

OK. Now that I've appeased my inner math nerd, let's get to the interesting part, dissecting the 10 points raised in the article. 

Important Note: It is entirely possible that I misinterpreted what the author wrote, so I'd invite you to read the original blog so you have both sides of the story. 

1. Your team is not responsible for the whole story.

In the article the author discusses both the fact that Agile teams should be cross-functional and that no value is delivered to the customer until the full story is delivered. All good so far. But then the author makes the statement that although it is possible to have a successful development process without cross functional teams, it simply isn't agile.

Not quite.

Every agile methodology in current use has some means identified for scaling up teams. Many studies have concluded that the optimal agile team size is somewhere between 4 and 8 members. Teams working on larger projects (especially in enterprise settings) will frequently own different parts of a single user story. As long as the teams deliver on their parts and collaborate as needed to ensure the full story is delivered, there is nothing un-agile about shared ownership of a story.

2. Testing is done by another team.

In the description it says "code complete is not done". Looks good so far. Next it states that if QA is a separate team, the team isn't cross-functional. No argument here. Work that isn't tested and integrated isn't considered done. Also looks good.

But that's it. It isn't stated directly, but the implication certainly seems to be "if QA is a separate team, you're not Agile".

In checking the Agile Manifesto, I don't see a thing in there about QA having to be on the same team. Is it a good idea? Absolutely - the whole point of frequent iterations is to maximize feedback that the team can then act on to correct and improve their work. QA has pretty important feedback.

But to claim that a team isn't Agile unless QA is integrated with the team simply isn't true. It does require more investment in communication and does slow feedback loops, but it does not prevent the team from delivering working software on a frequent basis, nor does it prevent the team from adjusting priorities as needed.

3. You are not INVESTing in your user stories.

INVEST is an acronym that identifies properties of a good user story. It stands for Independent (of other stories), Negotiable (in scope to fit in a sprint), Valuable (to end user), Estimable (by team), Sized appropriately (for a sprint), Testable.

It's a good acronym. But it assumes a) that the team is working in user stories, b) that those stories are expressed in INVEST form, and c) that the team is good enough at generating these stories that it can perform useful work on them.

What if you're not working in user stories? There are any number of alternatives (use cases, requirements documents, etc) that seem to work just fine for many teams. Sadly, the author misses a golden opportunity to sell the ultimate cross-functional team solution - having the customer as part of the team itself. User stories are a record of a conversation (usually between the customer and product oriented roles). If all of the roles (dev, QA, etc.) are right there for that conversation it is less important to have a detailed record of the conversation, especially if the customer is ON the team and can re-engage in that conversation as needed.

4. Tasks are assigned to team members by a manager

Blog: "In a truly agile team, each member can pick which tasks he or she will work on next, as long as it pertains to the next most valuable user story.". 

Yes, some teams do indeed work this way, assuming that the team members have a good understanding of what the "next most valuable" story is. Not all teams work this way. 

Many highly agile teams trust their project and product managers to do their jobs and properly queue up work for the team. Why do they extend that trust? Because they were directly involved in the requirements gathering and prioritization activity as it happened and were part of the decision making process.

5. Team is told how much work to commit to

No argument here. If the team did not generate the estimates of work, they aren't valid, regardless of how much the stakeholders hope or expect to the contrary.

6. You are reporting rather than discussing your progress

There are not one but two assumptions made here about what an agile team should be doing. The article first mentions stand-ups and then goes on to claim that the questions inherent in the format are not there for the manager to keep tabs on the team, but to elicit other team members' thoughts.

Stand-ups are a communication technique - specifically, a replacement technique for teams that do not have a better means of communication (e.g. osmotic communication). The three questions mentioned in the article are a further refinement of stand-ups, teaching teams what is important to discuss in the context of the whole team.

Don't get me wrong, I am a fan of stand-up meetings and of the question format outlined in the article. But to claim that a team isn't agile if they aren't doing both highlights a lack of recognition that there are other, equally valid communication techniques.

7. Not focused on completing the most important user story

This one is too vague to dissect in detail. There is no definition of "most important" in the article, only the statement that if you don't focus on the most important task (however defined), you're not being agile.

In my experience there are many different ways to define "most important" - revenue, customer request, production fault, technical debt - the list goes on and on.

At any given moment you may have several "most important" tasks to choose from to work on, depending on the stakeholder perspective you are using. If your means of selecting work ignores less visible priorities (e.g. technical debt), you run the risk of making life much harder for the team as the application ages.

8. You changed to agile last year, and haven't changed a thing since

No argument here, especially considering the condition set up by the sign is a team new to agile. Teams that have been doing agile for some time may go for long periods without significant changes to their process, but adjustments to process for novice teams are critical to long term success. 

9. You are not ready to release on time with an incomplete scope

Overall, I agree with the author on this issue as well. The article focuses on the value of done-done story completion and proper layering of the stories, so that if the team isn't done with the current layer they can deliver the previously completed layer and still realize business value.

But consider this. Not all software projects are driving toward a specific delivery date. Take the gaming company Blizzard Entertainment - their long-standing policy has been to ship only when they have reached a certain level of feature completion. There is nothing inherently un-agile about basing your release on feature completion instead of a specific date, as long as your team is aware of and managing the trade-offs this approach presents.

10. You are not getting customer feedback every sprint.

This one is a bit misleading. My expectation on reading the sign was that the importance of frequent customer feedback (e.g. every sprint) was the key. But the author's explanation focuses on whether the team incorporates feedback from the demo into the upcoming sprint. If not, you're not agile.

Regarding the assertion that you are not agile if you are not incorporating that feedback into what you are building I'm in total agreement. 

I think the point could have been better reinforced if the explanation focused on the value of feedback from actual customers. Customer feedback is the single most important feedback loop there is. If you aren't getting direct feedback from the people (institutions, whatever) you expect to buy your product, how do you know you are building the right thing? Don't assume that your customer research teams or marketing group are an adequate proxy for your customers - find a way to get real customers in a room with the team and watch them use the software.

In closing, as strange as it may look from the dissection above, I appreciate the fact that the author posted this article. Obviously we disagree considerably on where the boundaries of agile are, but without postings like his we miss the opportunity for dialog on what exactly this agile thing we are so passionate about is.

Friday, February 17, 2012

Picturing cultural change

In a significant departure from my typically over-verbose style, I'm doing a quick vignette today on an interaction I saw occur with one of the teams that I am currently working with on their transition to Agile. 

The backstory here is that one of the persistent pains for this team was a disconnect with QA, so the team agreed to shift their working pattern (culture) to incorporate QA as a direct part of the team, with a much more prevalent role in the current iteration's work.

What the team knew was that this should give them much shorter feedback loops on whether something needed to be fixed, and it should also greatly shorten the amount of time the project would go dark in QA at the end of an iteration/release.

What teams experienced with multiple roles working together know about this closer collaboration is that when you have close association with the work a specific role is doing on a day-to-day basis, you learn about that role to the point where you naturally take on some of its work.

Rather than trying to communicate to the team every single advantage of having all of the roles working together closely, I kept the focus on the direct pain the team was trying to address. I knew that sooner or later the secondary effect of role "spreading" would happen, but not when.

The following picture is a screenshot of a conversation between one of the UX designers on the project (Val G) and one of the developers (Thomas H). I was impressed enough at how fast the team shifted their culture to realize this secondary benefit of close collaboration that I thought it warranted a blog post.

Here's the conversation:


Friday, February 3, 2012

A Tale of Two Burn Charts

Within the last month we've launched two teams at Walmart Labs using methodology tuning as our means of incrementally introducing Agile. The new "process" introduced for each team was more or less the same:

1) Team will work in iterations.
2) Team will create tasks from the user stories allocated to the iteration, and provide work estimates for tasks based on a simple time scale - 2 hours, 4 hours, 1 day.
3) Tasks are defined as "done" when code/assets/other are checked in and any acceptance criteria for the task have been validated. The iteration is "done" when working code is deployed to stakeholders and QA has signed off on all tasks/stories within the iteration.
4) QA will work directly with the team, recording and completing their tasks in the same manner as the rest of the team, in addition to performing acceptance evaluation as needed on other tasks.

A few differences of note between the teams: 

1) Team A opted to try out 3 week iterations, Team B went for 2
2) Team A had an initial training session on iteration planning separate from the actual iteration planning exercise, Team B had training (hastily) incorporated into the iteration planning.
3) Team A was primarily on-site, Team B was widely distributed.

For each team, the stories for the iteration were derived from existing sources, namely feature lists not articulated in typical story form. This was not an oversight; the teams did not have shared experience working with Agile-style stories, so rather than impose an additional training burden on them we collectively agreed to work with the existing resources and revisit the story issue at the next reflection session.

One of the specific "pains" that the overall group was suffering was a lack of visibility into what was happening in the midst of development. This manifest itself in predictable ways - release schedule slips, lack of stakeholder knowledge of feature changes, etc. In addition to the methodology changes that the team agreed to listed above I made it a point to generate daily burn charts on the tasks the teams were performing during the iteration.

Rather than provide a day-by-day running commentary of the iteration, let's skip to the end to see if the butler really did do it:

Team A Task Burn Chart:
Red line denotes ideal burn rate for the iteration
Green line denotes actual team completion of tasks
Blue line on bottom of graph denotes new tasks introduced during the iteration


Team B Task Burn Chart:
Red line denotes ideal burn rate for the iteration
Green line denotes actual team completion of tasks
Blue line on bottom of graph denotes new tasks introduced during the iteration
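
For the curious, here's roughly how charts like these get put together. What follows is a minimal sketch in Python with matplotlib; the day-by-day numbers are invented for illustration (loosely shaped like Team A's iteration), while the real charts were fed from the teams' tracking tool.

    import matplotlib.pyplot as plt

    # A hypothetical 15-day iteration that starts with 40 open tasks.
    # All of the numbers below are invented for illustration.
    days = list(range(16))  # day 0 through day 15
    starting_tasks = 40

    # Ideal burn: a straight line from the starting task count down to zero.
    ideal = [starting_tasks * (1 - day / 15) for day in days]

    # Actual open tasks, snapshotted daily from the tracking tool.
    actual = [40, 40, 44, 43, 41, 39, 37, 35, 32, 28, 24, 19, 14, 11, 7, 4]

    # New tasks added each day (the line along the bottom of the chart).
    added = [0, 0, 4, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0]

    plt.plot(days, ideal, color="red", label="Ideal burn rate")
    plt.plot(days, actual, color="green", label="Actual open tasks")
    plt.plot(days, added, color="blue", label="New tasks added")
    plt.xlabel("Iteration day")
    plt.ylabel("Task count")
    plt.title("Task burn chart")
    plt.legend()
    plt.show()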


Based on the burn charts alone, guess which team had delivered working software at the end of their iteration. Go ahead - I'll give you a minute to make your guess.

...

Ready? It was neither. 

Surprised? Don't be. Burn charts are a tool for understanding what is happening during the course of an iteration, not a predictor of success. Even experienced teams with a track record of successful delivery can fail to deliver at the end of an iteration. For teams using burn charts for the first time, the charts serve only one useful purpose: raising the team's awareness of the sorts of underlying problems a burn chart can reveal once the team has some history of delivering in iterations.

Let's take a little walking tour of what occurred during each iteration for the two teams:

Stop #1 (Team A and B)
Note that early in the iteration both teams discovered a number of tasks not previously captured. For Team A it was 5 new tasks; Team B added 12. Considering that the purpose of the iteration is to provide a period of focus for the team by not allowing changes to the work during that time, adding new tasks certainly seems to indicate that new features showed up during the iteration.

What actually happened was that both teams realized there were tasks missing from the stories they had agreed to deliver, and as a result they added the new tasks to the iteration without realizing they had invalidated their earlier work estimates and increased the risk that they would not deliver on time.
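
To put some numbers on why this matters, here's a back-of-the-envelope sketch. Only Team B's 12 added tasks come from the story above; the task count, iteration length, and implied velocity are invented for illustration.

    # Effect of mid-iteration scope growth on a (hypothetical) plan.
    planned_tasks = 40                          # tasks estimated at iteration planning
    iteration_days = 10
    velocity = planned_tasks / iteration_days   # the plan implies 4 tasks/day

    added_tasks = 12                            # discovered after the iteration started
    projected_days = (planned_tasks + added_tasks) / velocity

    print(f"Planned: {iteration_days} days, now projected: {projected_days:.0f} days")
    # Planned: 10 days, now projected: 13 days - a 30% overrun before any work slips.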

Stop #2 (Team B)
After an initial addition of 12 tasks, Team B continued adding tasks along the way, twice peaking at 4 new tasks in a single day. What is interesting about this influx of new tasks is that the team didn't detect the problem until it showed up in the burn chart, even though they had already diagnosed it in the initial "methodology building" reflection session.

Stop #3 (Team B)
Notice that throughout the burn chart there were only three days where the number of net open tasks declined. A casual observer might interpret this as "the team isn't doing anything", but that wasn't the case here. Two factors were behind this seeming lack of progress. First, the lead developer (who had taken primary responsibility for managing task state within the tracking tool) was on vacation for a week. In his absence the rest of the team continued to get work done but didn't update their progress in the tracking tool. Second, QA started their work late, so completed development tasks stayed open until QA could sign off on them (more on this at the end of the tour).

Stop #4 (Team A)
If you look close to the end of the burn chart for Team A, you'll notice a small spike in new tasks two days before the end of the iteration. In this case the tasks in question were not likely to delay the completion of the iteration, but they did represent a lack of understanding of the importance of focus during the iteration. Much like Team B at Stop #2, Team A had not yet developed an aversion to changing work mid-iteration.

Stop #5 (Team A)
Look at the end of the burn chart. Notice how it concludes with 4 tasks remaining open? This wasn't a reporting oversight - there were 4 actual tasks not completed at the end of the iteration, all of them work to be performed by QA. It would be easy to claim that QA always lags behind development and accept the iteration as complete, but one of the rules of our newly minted methodology was that QA would perform their work alongside the other team roles, not trailing them.

End of Tour
Even though there was a huge disparity in the perceived progress of the two teams, the reality is that both teams finished the development work more or less on time. Team A had less work remaining at the end of their iteration because of closer communication between Dev and QA. Team B had more work remaining because QA was not as aware of progress and started their work later. Team B also had to spend time correlating completed tasks with their relevant stories (OK, feature lists) to communicate to stakeholders what user-facing functionality had been completed in the iteration.

The moral of our story is that neither team failed - both executed against their tasks rather well, at least from the development perspective. Visibility definitely improved, including concrete visibility for both teams into previously hidden inefficiencies (the effect of late QA involvement, the lack of visibility of progress). Did they achieve total success in all of their corrective actions? No, nor was there ever an expectation that they should. The teams learned something useful from the changes in their work and have already applied that knowledge to their current iterations, continuing the process of improvement.