Monday, September 19, 2011

Stories from the "Knowledge Acquisition" curve, part 2: Social Risk

For the three of you out there who are actually reading this blog (your check for next month is on its way), I'm continuing the theme of knowledge acquisition stories. For those of you who didn't see the first post, you can find it here.


Social Risk is the focus of this installment. Seems like a strange risk to list as important enough to be worth devoting team resources to understanding, doesn't it? Rest assured, it does exist, and in my opinion it is the most prevalent cause of project delivery failures.

We've been conditioned since the late 60s to think of software development as an "engineering" discipline, complete with its own set of laws and physics that remain inviolate regardless of circumstances. This definition works right up to the point where you run into the humans involved with the project. There's no formula for calculating the degree of political sabotage expected from a hostile stakeholder or the percentage of bad code expected from an inept developer.

One of the most important accomplishments of the Agile movement (in my humble opinion) has been the acknowledgement of humans as an integral part of the development effort. This is certainly a big step forward from the "engineering" mindset. But what hasn't quite become apparent to the adopters of Agile are the implications of having humans as an integral part of the process. Simply put, humans are messy from an engineering point of view.


In most teams the elements of social risk are more background noise than roadblocks. Established teams doing more or less the same work that they have always done have adapted to the unique social elements in their world. But introduce change to the team (technology, personnel, stakeholders) and you now have a new social dynamic that will affect the team's ability to deliver.

For new teams the social risk is much higher - everything is an unknown. Does the team work well together? Are there hidden agendas among the stakeholders? Does the team have the ability to do the work they are tasked with?

Social risks can be tough to identify. It really does require a working knowledge of human psychology, something that you don't often see on the hiring requirements for development types. As with anything, experience and stories from others can help uncover these risks in your own team, so let's get to a story, shall we?

Some time ago a company I worked for acquired the rights to an existing free web application that would help pathologists choose specific cellular "stains" that would allow them to identify what type of cancer cells were in a tissue sample. Our plan was to take the underlying data and build a new application from the ground up that would provide greater functionality and allow us to charge for the service.

Two of the stakeholders for this project were very senior and well respected pathologists, one of whom was the creator of the application we were acquiring. As stakeholders for the new product, they had some very specific opinions about how the product should work, and worked closely with the team to make sure that the new application reflected their experience in pathology.

What the team didn't realize at the time was that although our stakeholders were absolutely correct in their perspective on how a pathologist should work, pathologists actually want to work in different ways - ways that weren't supported in the first production release of the application.

The story does have a happy ending - as soon as we realized there was significant pushback from the user community, the team shifted to a weekly release cycle and communicated extensively with the user base on progress and priorities until the application was working to the user community's satisfaction.

The team actually learned two things in the retrospective. The first was that we didn't understand the stakeholders' motivation to improve how people in their field do their work. Had we recognized that sooner, the team could have translated statements like "This is how a pathologist works" into "This is how we think a pathologist should work". The second was a re-affirmation of the importance of the Crystal principle of "Easy access to expert users".

The moral of the story for me was that hidden stakeholder motivations are sufficiently disruptive to projects to warrant taking the time to discover and expose them to the team. This doesn't lessen the disruption - exposing hidden motivations can be quite challenging. But the principle of "Fail Early" covers this nicely: if the warping of the project by a stakeholder's agenda is sufficient to ultimately cause failure, it is better to fail as early as possible.

2 comments:

  1. I wanted to make one clarification: you'd better make sure your stakeholder can fail before you fail early. There is one stakeholder who could never hear about the failures, so we had to succeed early to change our plans. It was quite infantile to have to work this way, but some people can't deal with failures, and you'd better learn that before you call that baby ugly. Show that they were a winner and then make up an excuse for how it was their idea to make the change. Some people just aren't built to be sadists.

  2. Hi Jeremy,

    That's a great illustration of another social risk - the inability to bring bad news to a stakeholder. Definitely a cost worth paying as early as possible. Teams can't work effectively if they don't have personal safety.
