
The Productivity Myth

I am often asked if using Scrum can make software developers more productive. My response is always the same: ‘Can you please define productivity in a software development context?’ Often people cannot. Often, they quote the standard definition: ‘It is the amount of output per unit of input.’ My response is then: ‘Great. And how do you define output?’ This would be easy if we worked on an orange farm. We could measure how many oranges each worker picked per hour. The one who picked 150 is clearly more productive than the one who picked 120. The output is clearly defined. This is not the case in software development, or knowledge work more generally. The output here is often pretty nebulous. It is very hard to define, never mind measure.

Measuring Developer Output

All this poses an interesting question. Is it possible to measure the productivity of software developers? Well here are some of the things companies have tried:

Lines of Code (LOC) – This is a bad idea for several reasons. Firstly, developers can build a feature using many lines of code, or a few. The best developers will do it with very little code, and this is often the far more elegant solution: one which will run faster and be much easier to maintain. We want to encourage this kind of simplicity. Anyone can pad a solution out with extra lines of code, and incentivising that is a recipe for a bloated, unmanageable codebase. Lines of code is a bad proxy for output.
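To make the point concrete, here is a minimal sketch (an invented example, not from any real codebase) of two functionally identical implementations of the same small feature. A LOC metric would score the first developer as roughly four times more “productive”:

```python
def sum_evens_verbose(numbers):
    """Sums the even numbers in a list, padded out across many lines."""
    total = 0
    for n in numbers:
        is_even = (n % 2 == 0)
        if is_even:
            value_to_add = n
            total = total + value_to_add
    return total


def sum_evens_concise(numbers):
    """The same feature, expressed in one line of logic."""
    return sum(n for n in numbers if n % 2 == 0)


# Identical behaviour, very different line counts.
assert sum_evens_verbose([1, 2, 3, 4, 5, 6]) == sum_evens_concise([1, 2, 3, 4, 5, 6])
```

The concise version is the one you want in your codebase, yet it is the one a LOC metric punishes.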

Number of features – This also poses a challenge. A feature can take 2 hours to develop or 2 weeks, so it is not fair simply to count them. We need to accept that some features will take far more effort and carry higher complexity than others. We must not penalise people for taking on the more challenging ones.

Velocity – This is a planning tool, not an accurate measure of the size of a feature. It is trivially easy to game this number because developers are the ones who estimate the features. Up the estimates and, bingo, productivity has increased with zero extra output.
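The gaming is simple arithmetic. A sketch with invented sprint numbers:

```python
# The same five features, estimated honestly and then with every
# estimate doubled. (Numbers invented purely for illustration.)
honest_estimates = [3, 2, 5, 3, 2]
inflated_estimates = [e * 2 for e in honest_estimates]

honest_velocity = sum(honest_estimates)      # 15 points per sprint
inflated_velocity = sum(inflated_estimates)  # 30 points per sprint

# Velocity has "doubled", yet exactly the same work was delivered.
print(honest_velocity, inflated_velocity)
```

Nothing about the delivered software changed; only the self-reported numbers did.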

Function Points – This is slightly better than velocity, but there really is still no accurate way of counting function points. Various tools seem to disagree with each other. Even if function points could accurately measure developer output in terms of features, this misses several big considerations.

The first is quality. Is a developer who codes ten function points per day with an average of twenty-five bugs more, or less, productive than one who codes seven function points per day with almost no bugs? There is no point in churning through large numbers of new features if they are full of bugs. That failure demand will come back to dramatically slow your progress down the line.

The above options also miss another trick. Consider this: developers do more than just write code. No, I don’t mean playing table tennis and drinking coffee. There are other valuable things that we expect from developers that are equally tricky to measure. For example:

  • thinking about the overall architecture
  • refactoring messy code
  • automating, and speeding up, acceptance tests
  • researching new technologies
  • learning new skills
  • upskilling other team members
  • generating new, innovative ideas to improve the product

Most agree that these activities are valuable, but how do we measure them? If we only measure the output of features, we will only ever get people working on features. All the other things that great developers do will fall by the wayside, and that will be bad for the product. You get what you measure, normally to the detriment of many other valuable activities. Be careful what you wish for.

Output vs Outcomes

So, as you can see, none of the great minds out there has yet managed to define developer output. Given that, measuring the productivity of a developer is clearly not something we should try to do. To make things even more interesting, I would argue that output is the wrong thing to measure in the first place. Consider this: Amazon, one of the most customer-centric organisations on the planet, makes hundreds of changes to its site each week. Through relentless A/B testing, it has discovered that only 30% of changes add value. That means 70% of changes either make no difference or make things worse. At LinkedIn, 80% of changes are waste. For Etsy, it is 90%. There is no point in increasing the speed at which you travel if you are heading in the wrong direction. I would argue that finding a cheap and fast way to eliminate that waste is far more valuable to an organisation than trying to optimise the rate at which developers develop features that may, or may not, improve the quality of your product.

What we need to focus on are outcomes. Has this change increased conversion, increased sales, or grown our customer base? Is the developer who finished ten features that are never used more productive than the developer who finished one killer feature that made the company £1m? After all, wouldn’t it be great if we could get the best outcomes with the least effort? Isn’t that the most efficient way to run an organisation?

Individuals vs Teams

My final point is one I raise with HR departments in almost every organisation I work with. Incentivising individuals leads to poor team performance. In systems thinking we call this local optimisation: it almost always sub-optimises the whole. Real teams have a shared purpose and common goals, and are mutually accountable for achieving those goals. The right thing to do for the team is not always the right thing to do for the individual. I would much rather have team players than selfish rock-star developers. We should focus on measuring team and product outcomes.

Summary

As you can see, there is no effective way to measure developer output, and so there can be no effective way to measure developer productivity. The question is: why do we even want to? In the context of knowledge work, it simply does not mean anything. It is far better to focus on team performance and on whether those teams are achieving the desired business outcomes. Trying to measure anything else is just not a productive use of managers’ time.