Thursday, April 18, 2013

DevOps – How do you measure team success?

In an earlier blog post we talked about the importance of re-structuring your Dev & Ops organisation to remove silos, and in another post we touched on how incentives and metrics influence how the teams work together.

"Organisationally we remained siloed however – we were incentivised in different ways (Operations emphasising availability, Development emphasising feature delivery), we remained in essentially a waterfall delivery model and Ops VS Dev was a constant struggle for manpower & resources. All the usual problems that the DevOps movement is trying to address."

So how do we create alignment in incentives across our merged DevOps team, and what metrics should we be tracking to measure success?

Jesse Robbins from Opscode wants us to "make more awesome" and proposes "time to value" as a measure of DevOps success.

Rich Steer gets excited about the "time to value" concept here – http://richsteer.com/time-to-value/ – but I've yet to see a truly operational definition of how it works.

As someone points out in the comments on Rich's article, when do you start and stop the "Time to Value" clock?

  • Do you start the clock when the business first has the idea (timing being vague and nebulous) or when it becomes a Story in the backlog (more quantifiable but late)?
  • How do you measure "value" (which is intangible) and how long after "deployment" do you decide "value" has been achieved, if it ever does? Or does "deployment=value"?
  • Some changes will have more "value" than others, so how do you weight the metric to account for that? (One possible weighting is sketched below.)
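
To show how much hangs on those choices, here's a minimal sketch in Python of one possible operational definition – the clock starts at backlog entry and stops at deployment, weighted by business value. The story names, dates and weights are all invented for illustration:

    from datetime import datetime

    # One possible "time to value" definition: the clock starts when a
    # story enters the backlog and stops at deployment. All data invented.
    stories = [
        # (story, backlog_entry, deployed, business_value_weight)
        ("checkout-redesign", datetime(2013, 3, 1), datetime(2013, 4, 12), 5),
        ("search-tweaks",     datetime(2013, 4, 2), datetime(2013, 4, 16), 2),
    ]

    for name, entered, shipped, weight in stories:
        print(f"{name}: {(shipped - entered).days} days to value (weight {weight})")

    # Weighting by business value addresses the third question above:
    # high-value changes count for more in the aggregate number.
    total_weight = sum(w for _, _, _, w in stories)
    weighted_avg = sum((s - e).days * w for _, e, s, w in stories) / total_weight
    print(f"weighted average time to value: {weighted_avg:.1f} days")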

So if "time to value" is a bit nebulous what other metrics are people proposing?

Puppet Labs' State of DevOps report focuses on improvement in four key metrics – Change Frequency, Change Lead Time, Change Failure Rate and Mean Time To Recover (MTTR). We suggest that classic metrics like Availability, Performance (page load time) and Mean Time Between Failures (MTBF) should be in there too, to give a better-rounded measure of overall site performance.
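
Unlike "time to value", these four are straightforward to compute from records most teams already keep. A rough sketch in Python, with invented deployment and incident data (the report itself gathers its figures by survey):

    from datetime import datetime, timedelta

    deployments = [
        # (change committed, change deployed, did the change fail?)
        (datetime(2013, 4, 1, 9),   datetime(2013, 4, 1, 15),  False),
        (datetime(2013, 4, 3, 10),  datetime(2013, 4, 4, 11),  True),
        (datetime(2013, 4, 8, 9),   datetime(2013, 4, 8, 16),  False),
        (datetime(2013, 4, 10, 14), datetime(2013, 4, 11, 10), False),
    ]
    incidents = [  # (outage start, service restored)
        (datetime(2013, 4, 4, 11), datetime(2013, 4, 4, 13)),
    ]
    period_days = 14

    change_frequency = len(deployments) / period_days  # deploys per day
    lead_time = sum((dep - com for com, dep, _ in deployments), timedelta()) / len(deployments)
    failure_rate = sum(1 for _, _, failed in deployments if failed) / len(deployments)
    mttr = sum((end - start for start, end in incidents), timedelta()) / len(incidents)

    print(f"Change frequency:    {change_frequency:.2f} deploys/day")
    print(f"Change lead time:    {lead_time}")
    print(f"Change failure rate: {failure_rate:.0%}")
    print(f"MTTR:                {mttr}")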

ThoughtWorks have re-worked the old "function point" or Agile "story point" concept by adding a business spin, coming up with "Business Value Points". (As an aside, for why the points-based approach might be a bad idea, read this article on Agile Story Points and why they might be harming your velocity, not helping.)

I'm not convinced that we can find a single metric that can effectively incentivise or measure a DevOps team's performance – I suspect we'll have to create a weighted equation of some type to derive a synthetic metric, or use a "balanced scorecard" approach to weigh different metrics from different perspectives.

Regardless of the metrics we pick, we have to ensure that they incentivise the behaviour we want to reward because, as we know from the Met Police and the NHS, the wrong incentives result in the wrong behaviour.

This is where an interesting piece of management theory comes into play – it's called "Vroom's Expectancy Theory", and it outlines three factors I always like to keep in mind when I am setting team incentive metrics.

Basically, it boils down to three simple things.

"I am only going to be motivated to work harder IF…":

  1. Expectancy = my effort will really make a difference to the overall performance. (Conversely, "why bust a gut if, no matter how hard I work, it doesn't make a difference to the overall performance?" This is why getting the weighting right in the overall equation is critical.)

  2. Instrumentality = if we achieve our performance goals, I will get the reward/bonus. (i.e. do I trust my boss enough to deliver on the bonus and not weasel out with some BS about the "state of the economy"?)

  3. Valence = the reward is something I care about enough to make the incremental effort worthwhile. (e.g. it might be an order of magnitude harder to go from 99.9% to 99.99% availability, but if that is only worth an extra 1% in my bonus, is it really worth all that effort?)
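
Vroom's model is usually written multiplicatively – Motivation = Expectancy × Instrumentality × Valence – and the multiplication is the whole point: if any one factor is zero, motivation is zero. A toy back-of-envelope version in Python (the scores are invented):

    def motivational_force(expectancy, instrumentality, valence):
        """Vroom's multiplicative model; each factor scored 0.0 to 1.0."""
        return expectancy * instrumentality * valence

    # A sysadmin who believes crappy code quality will sink the target
    # no matter how hard they work (low expectancy):
    print(motivational_force(expectancy=0.1, instrumentality=0.9, valence=0.8))  # 0.072
    # The same person once the metric weighting gives their work real leverage:
    print(motivational_force(expectancy=0.8, instrumentality=0.9, valence=0.8))  # 0.576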

The real key here is finding metrics that EVERYONE in the DevOps team can contribute to, and ensuring that overall the metric balances out the contributions from the different roles within the team.

If you get the "balance" wrong, it can lead to frustration for the team members.

For example, a systems administrator might think "no matter how hard I work (say, on infrastructure automation) the crappy code quality will still make us miss our targets, but I'm not a developer and I have few (if any) ways to improve that…".

Conversely, from a developer perspective: "I can write awesome code, but if the ops guys keep messing up the server configuration and causing downtime then what's the point?"

As a thought experiment, I tried to think of what sort of "equation" might be the basis for a meaningful synthetic metric.

What I came up with was this:

DevOps_Incentive_Bonus_Metric = Development Velocity − ((100 − Availability %) × 100) + (Apdex(3s) × 100) + Net Promoter Score ?
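
And a toy implementation of it in Python, to check the numbers roughly balance. Everything here is invented, and I've read the availability term as a downtime penalty (100 points per 1% of downtime); Apdex(3s) is the standard Apdex score with a 3-second threshold:

    def apdex(satisfied, tolerating, total):
        """Standard Apdex: (satisfied + tolerating / 2) / total samples."""
        return (satisfied + tolerating / 2) / total

    def bonus_metric(velocity, availability_pct, apdex_score, nps):
        downtime_penalty = (100 - availability_pct) * 100  # 100 points per 1% downtime
        return velocity - downtime_penalty + apdex_score * 100 + nps

    # Hypothetical quarter: velocity of 40 story points, 99.95% availability,
    # 700 satisfied / 200 tolerating out of 1000 sampled page loads at the
    # 3-second threshold, and a Net Promoter Score of 30.
    score = bonus_metric(velocity=40, availability_pct=99.95,
                         apdex_score=apdex(700, 200, 1000), nps=30)
    print(score)  # 40 - 5 + 80 + 30 = 145

Even this toy version surfaces the weighting problem – one story point of velocity is worth the same as one point of NPS – which is exactly the kind of balance the team would need to argue about.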

If we take my sample equation, let's see how it might work if we turn it into "user stories" for our DevOps team:

As an "Operations Person"

I want "to create automated test environment build scripts using Chef"

So that "Developers can work faster and increase their Development Velocity"

As a "Development Person"

I want "to incorporate page load time non-functional requirements into my user stories

So that "we only ship code that meets the 3 second page load time objective"

As an "Operations Person"

I want "to create a load-balanced production environment"

So that "we can remove any single point of failures and improve availability"

As a "Developer Person"

I want "to remove any dependency on per-server session state"

So that "my code works better in a load-balanced environment and improves availability"

As a "Developer Person"

I want "ship awesome feature XYZ"

So that "customers are delighted and increase our Net Promoter Score"

These "team user stories" seems to work pretty well  to me (but I am sure that many of you could come up with something better!).

So the questions for the DevOps community at large are:

  1. What metrics do you use to motivate and measure the success of your DevOps team?
  2. How are those metrics shared across the different roles within the team?
  3. How well are those metrics working for your DevOps team?

I am sure lots of people would like to know, so please give us your thoughts in the comments or link over to your blogs etc.

-TheOpsMgr




