Monday, September 19, 2011

Stories from the "Knowledge Acquisition" curve, part 2: Social Risk

For the three of you out there who are actually reading this blog (your check for next month is on its way), I'm continuing the theme of knowledge acquisition stories. For those of you who didn't see the first post, you can find it here.


Social Risk is the focus of this installment. It seems like a strange risk to consider important enough to be worth devoting team resources to understanding, doesn't it? Rest assured, it does exist, and in my opinion it is the most prevalent cause of project delivery failures.

We've been conditioned since the late '60s to think of software development as an "engineering" discipline, complete with its own set of laws and physics that remain inviolate regardless of circumstances. This definition works right up to the point where you run into the humans involved with the project. There's no formula for calculating the degree of political sabotage expected from a hostile stakeholder or the percentage of bad code expected from an inept developer.

One of the most important accomplishments of the Agile movement (in my humble opinion) has been the acknowledgement of humans as an integral part of the development effort. This is certainly a big step forward from the "engineering" mindset. But the implications of having humans as an integral part of the process haven't quite become apparent to adopters of Agile. Simply put, humans are messy from an engineering point of view.


In most teams the elements of social risk are more background noise than roadblocks. Established teams doing more or less the same work that they have always done have adapted to the unique social elements in their world. But introduce change to the team (technology, personnel, stakeholders) and you now have a new social dynamic that will affect the team's ability to deliver.

For new teams the social risk is much higher - everything is an unknown. Does the team work well together? Are there hidden agendas among the stakeholders? Does the team have the ability to do the work they are tasked with?

Social risks can be tough to identify. It really does require a working knowledge of human psychology, something you don't often see in the hiring requirements for development types. As with anything, experience and stories from others can help uncover these risks in your own team, so let's get to a story, shall we?

Some time ago a company I worked for acquired the rights to an existing free web application that would help pathologists choose specific cellular "stains" that would allow them to identify what type of cancer cells were in a tissue sample. Our plan was to take the underlying data and build a new application from the ground up that would provide greater functionality and allow us to charge for the service.

Two of the stakeholders for this project were very senior and well respected pathologists, one of whom was the creator of the application we were acquiring. As stakeholders for the new product, they had some very specific opinions about how the product should work, and worked closely with the team to make sure that the new application reflected their experience in pathology.

What the team didn't realize at the time was that although our stakeholders were absolutely correct in their perspective on how a pathologist should work, pathologists actually want to work in a variety of different ways, ways that weren't supported in the first production release of the application.

The story does have a happy ending - as soon as we realized that there was significant pushback from the user community, the team shifted into a weekly release cycle and communicated extensively with the user base on progress and priorities until the application worked to the user community's satisfaction.

The team actually learned two things in the retrospective. The first was that we didn't understand the stakeholders' motivation to improve how people in their field do their work. Had we recognized that sooner, the team could have translated statements like "This is how a pathologist works" into "This is how we think a pathologist should work". The second was a re-affirmation of the importance of the Crystal principle "Easy access to expert users".

The moral of the story for me was that hidden stakeholder motivations were sufficiently disruptive to projects to warrant taking the time to discover and expose them to the team. This doesn't lessen the disruption - exposing hidden motivations can be quite challenging. But the principle of "Fail Early" covers this nicely. If the warping of the project by a stakeholder's agenda is sufficient to ultimately cause failure, it is better to fail as early as possible.

Wednesday, September 7, 2011

Stories from the "Knowledge Acquisition" curve, part 1

If you've ever attended any of Alistair Cockburn's recent presentations on the nature of software "engineering", you'll have heard him discuss the early part of the design process as a game of knowledge acquisition (link - scroll down to the section titled "Chapter 2 of the Story, 2010").

Essentially what he is saying is that it is a successful strategy to treat initial design as a "pay to learn" period. No, you're not handing out piles of cash to trench-coated agents who are slipping you unpublished USDA reports. The nature of your coin is the effort of the team. What you are paying for is information about the success and/or failure of the product you are designing.

Although there is no simple formula to identify exactly which nuggets of knowledge are needed to determine the success and/or failure of a product, Alistair does provide us with a map of the territory. Here are his four categories of knowledge worth "paying" to learn:
  1. Business Risk (Are we building the right thing?)
  2. Social Risk (Can our team build it?)
  3. Technical Risk (Will our solution work?)
  4. Cost/Schedule Risk (Do we understand the cost / timing?)
As the title implies, the point of this entry (and subsequent ones) is to tell stories about how this game of knowledge acquisition was played. Depending on where you are on your Shu Ha Ri path, you will hopefully gain something useful, even if it is the certainty that yours truly isn't playing the "knowledge acquisition" game with a full deck.

This first story is about gaining knowledge about Cost/Schedule risks. At the risk of sounding boastful, I've chosen to tell this story first because it was the impetus for Alistair to add the Cost/Schedule risk category to the above list.

Our story begins with the decision to create a new application that would allow medical residents to study for their board examinations. Medical board examinations are typically case-centric, presenting the examinee with specific medical cases to identify, diagnose, and discuss treatment options for.

As you might guess, the key to providing a useful learning product is having high quality content, something that Amirsys is known for within the medical community. Not only did we have existing content, but the content itself was structured in such a way that re-using existing content for new purposes (such as this learning application) is something that we are very good at.

Our skill in reuse of existing content wasn't an accident. Early on in my career with Amirsys I was surprised to discover that the timeline for the creation of high quality medical content significantly exceeded software development timelines. In other words, the most significant cost of developing a new product wasn't the development work, but the content creation for that product.

Considering that we had a pretty good idea of the costs associated with content creation, any new product requiring new content had to have a convincing ROI to justify those costs. Either that, or we had to come up with product ideas that wouldn't require new content creation. Thus the idea for this learning application was born.

At the time of product conception we had data from roughly 60,000 medical cases in our content repository. For a learning product to be successful, you need to present both content to learn from and assessments about that content (questions). There was no lack of learning content, but at that time we had no authored questions over that content.

The big break came when it was pointed out that our cases were structured in such a way that it may be possible to automatically create relevant questions about the cases without requiring a medical expert to perform any authoring. On the strength of this realization the development of the product was given the green light.
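
To make that realization concrete: the post doesn't describe the actual generation logic, but a minimal sketch of the idea might look like the following. The shape of the case record and all field names are hypothetical; the real Amirsys content schema isn't described here.

    import random

    # Hypothetical structured case record. The field names are invented
    # for illustration; the post doesn't describe the actual schema.
    case = {
        "diagnosis": "Renal cell carcinoma",
        "key_findings": ["enhancing renal mass", "thrombus in the renal vein"],
        "differential": ["Oncocytoma", "Angiomyolipoma", "Renal lymphoma"],
    }

    def generate_question(case, num_choices=4):
        """Build a multiple-choice question from a structured case: the
        diagnosis is the correct answer, and entries from the differential
        list serve as distractors - no expert authoring required."""
        distractors = random.sample(case["differential"], num_choices - 1)
        choices = distractors + [case["diagnosis"]]
        random.shuffle(choices)
        stem = ("A case presents with: " + "; ".join(case["key_findings"])
                + ". What is the most likely diagnosis?")
        return {"stem": stem, "choices": choices, "answer": case["diagnosis"]}

The appeal, of course, was that a transformation like this scales across all 60,000 cases at essentially zero authoring cost - provided the underlying fields are consistently populated, which is exactly the assumption that was about to come under fire.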

Shortly after work began on the product, I was participating in a conversation about an unrelated product when one of our content authors happened to mention that the cases they authored didn't always include all of the content I had assumed was required. Given how dependent we were on this content being complete enough to allow for the automated generation of assessment questions, this was a significant risk to the delivery of the product.

Unfortunately there was no easy way to evaluate the suitability of the content outside of actually showing it within the context of the new product and having a medical expert interact with it. Considering that the success of the new product hinged on not having to explicitly author all of these questions, we had to do something to "acquire knowledge" about the extent of this new risk.

Our solution was to build a walking skeleton of the application that would allow a user to "take a quiz". No bells and whistles - just the presentation of these auto-generated questions and the ability to select an answer and see if the answer was right or wrong. It took members of the team approximately two weeks to build this tool and put it in front of our medical experts.
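
As a sketch only (the real skeleton was a web application put in front of medical experts, not a console script), the essential loop needed little more than this, reusing the hypothetical generate_question helper from the earlier sketch:

    def run_quiz(cases):
        """Minimal 'take a quiz' loop: show an auto-generated question,
        accept an answer, and say whether it was right or wrong. No bells,
        no whistles, no scoring, no persistence."""
        for c in cases:
            q = generate_question(c)
            print(q["stem"])
            for i, choice in enumerate(q["choices"], start=1):
                print(f"  {i}. {choice}")
            picked = q["choices"][int(input("Your answer: ")) - 1]
            if picked == q["answer"]:
                print("Correct!")
            else:
                print("Incorrect. The answer was " + q["answer"] + ".")

    run_quiz([case])  # the sample case from the earlier sketch

The point of keeping it this thin was speed: the skeleton existed only to put real content in front of experts as quickly as possible, not to preview the final product.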

Within 10 minutes it was clear that there was a significant amount of inconsistency in our case data, to the point where a paying customer would not receive a useful learning experience from the content. Armed with the knowledge that our content was not in the condition we had assumed it was, the business needed to make a decision.

Fortunately the story has a happy ending. It was determined that although our case content wasn't consistent enough to auto-generate assessment questions, it was good enough to allow us to significantly speed up the authoring process by "prepopulating" assessments that authors could then edit rather than create from scratch.

Had we not known about Alistair's pattern of "Paying for Knowledge", it is entirely possible that we would not have discovered our content challenges until it was too late to meet key delivery dates, or until we were forced to disrupt other revenue-generating activities. As it turned out, the product was delivered on time, with the only negative consequence being a smaller set of content shipping with the new product.

As of this writing, the product has been generating revenue for close to a year and has given us the tools to build similar products for external customers with similar expert-content learning application needs.

Stay tuned for Part 2 - A tale of Social Risk