Measuring Productivity

When I was a young engineer managing a small electronics and software development team, my boss asked a question that I couldn’t answer then, or for many years afterwards. And I imagine you may have the same trouble I did.

It isn’t that his question was wrong or illogical – it wasn’t. There was simply no good way to arrive at an answer that would satisfy it. And any answer, loaded with the qualifying statements I needed to add, was unlikely to be received well by my boss, who was not a software expert and had little or no experience with software engineering or its practices.

But he was a manager, and the question interested him from that point of view. And it was a simple question, so he expected a simple answer.

So, to the question. My boss came to me and said that he had been reading that software coders in Freedonia (with apologies to the Marx Brothers) are much more efficient than coders in the US – in fact, they can produce 400 Lines of Code (LOC) per day. How many LOC do your engineers produce? How do you measure their productivity?

Well, first of all, we didn’t measure LOC. And even if we had, my engineers were creating different types of software while also doing Systems Engineering work, Software Quality work, and so on, so it was meaningless to try to compare a LOC metric to the Freedonia coders – especially when the metric was not properly defined.

Today we have all sorts of measures and metrics for software development that were not widely known or used back then, and some of them hint at productivity. Most, however, when used properly, really show us how well the team is working and, by extension, where process improvement may have its largest return. There are Velocity, Burn Down, Customer Satisfaction measures, and so on – the list goes on. These measures are most useful for telling us where to focus our attention on things that may positively or negatively affect productivity, rather than for directly comparing against another team or industry. That said, when measures are compared to others doing similar work, any significant difference should elicit the question “What are they doing that we aren’t?” – and hopefully the answer isn’t just that they measure it differently!
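To make at least one of those terms concrete: Velocity is usually nothing more than the story points a team actually completes each sprint, averaged over a few recent sprints to smooth out the noise. Here is a minimal sketch in Python – the function name and the numbers are mine for illustration, not any particular team’s data:

```python
# Minimal velocity sketch (illustrative numbers, not real team data).
# Velocity here is simply the story points a team completes per sprint,
# averaged over the last few sprints to smooth out sprint-to-sprint noise.

def average_velocity(points_per_sprint: list[float], window: int = 3) -> float:
    """Average of the most recent `window` sprints' completed story points."""
    recent = points_per_sprint[-window:]
    return sum(recent) / len(recent)

if __name__ == "__main__":
    # Hypothetical completed story points for the last six sprints.
    print(average_velocity([21, 18, 25, 23, 20, 27]))  # -> 23.33...
```

Useful for planning the next sprint; much less useful, for all the reasons above, for comparing your team to someone else’s.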

For me, the answer wasn’t any of these. What mattered was how well we were performing against our estimates, since those estimates were used in the bid for the work. If we weren’t profitable, our productivity was generally questioned, and since it is hard to manage what you don’t measure, that brings us back to the original question. Our answer was to use Earned Value to compare against our work plan, and as a side benefit it provided useful real-time feedback on how work was progressing against that plan. So our definition of productivity was tied to our variance from the plan.

I’ll get into Earned Value in another blog post, and there are many good resources on the web for EV. I know you may be thinking: just inflate your initial estimates and productivity will improve. However, since we wouldn’t get any more work by pricing ourselves out of the market, there was always intense pressure to improve and to be optimistic in our estimates. The fog of optimism will be another future blog post. Earned Value worked for us, and not just for software engineering, but for all engineering disciplines where standard work is not applicable.
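Before that future post, though, the core idea fits in a few lines. Earned Value boils down to comparing what you planned to have spent by now (Planned Value), the planned cost of the work you actually completed (Earned Value), and what you actually spent (Actual Cost). Here is a minimal sketch in Python – the function name and figures are mine for illustration, not our actual tooling:

```python
# Minimal Earned Value sketch (illustrative numbers, not real project data).
# PV: budgeted cost of the work scheduled to date
# EV: budgeted cost of the work actually completed to date
# AC: actual cost of the work completed to date

def earned_value_summary(pv: float, ev: float, ac: float) -> dict:
    """Return the standard EVM variances and performance indices."""
    return {
        "schedule_variance": ev - pv,   # negative => behind plan
        "cost_variance": ev - ac,       # negative => over budget
        "spi": ev / pv,                 # < 1.0 => earning less than planned
        "cpi": ev / ac,                 # < 1.0 => costing more than planned
    }

if __name__ == "__main__":
    # Hypothetical status: 300 hours of work planned to date, 260 hours'
    # worth of planned work actually completed, 290 hours actually spent.
    print(earned_value_summary(pv=300, ev=260, ac=290))
```

An SPI or CPI drifting below 1.0 is the early warning that variance to the plan – our working definition of productivity – is slipping, while the work is still in progress and something can be done about it.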

Stay tuned…