Identifying Reliable, Objective Software Development Metrics
Reliable software development metrics, on which managers can confidently base decisions about the most effective way to deliver projects, are a necessity in this competitive industry. Whether choosing between outsourcers or an in-house team, executives need to be sure they select the right option and can demonstrate to stakeholders that they have done so.
But isn’t it obvious that managers need software development productivity metrics? Who hasn’t heard “You can’t manage what you can’t measure”? Surely software development is no exception?
While we can be sure of the above, can we rely on the currently available means of monitoring productivity and the metrics these provide?
Software development managers might use one or more of these means to track productivity:
Counting Lines of Code
Taking the number of lines of code produced as a measure of productivity rests on a series of problematic assumptions, chiefly that the number of lines a developer writes is proportional to progress through the project. This implies that each line of code requires the same amount of work to produce. Another assumption is that each line contributes equally towards fulfilling the functions the software is required to perform.
As with any KPI, the outcome managers express as desirable will change their staff's behavior: developers will produce as many lines of code as they can. In reality, the opposite is more desirable; functionality should be achieved with the least code possible so that the software runs as efficiently as possible. Lines of Code is therefore a highly flawed metric for judging software development productivity directly, and managers need to be wary that other KPIs can distort staff behavior in similar ways.
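To see how easily a Lines of Code metric misleads, consider a minimal, hypothetical Python sketch: two implementations with identical behavior differ threefold in line count, so the "less productive" developer by this metric has arguably written the better code.

```python
# Two hypothetical snippets with identical behaviour: sum the even numbers in a list.
VERBOSE = """
def sum_evens(numbers):
    total = 0
    for n in numbers:
        if n % 2 == 0:
            total += n
    return total
"""

CONCISE = """
def sum_evens(numbers):
    return sum(n for n in numbers if n % 2 == 0)
"""

def loc(src):
    # The naive metric: count non-blank lines of source.
    return len([line for line in src.splitlines() if line.strip()])

# Load both implementations and confirm they behave identically.
ns_verbose, ns_concise = {}, {}
exec(VERBOSE, ns_verbose)
exec(CONCISE, ns_concise)
assert ns_verbose["sum_evens"]([1, 2, 3, 4]) == ns_concise["sum_evens"]([1, 2, 3, 4])

print(loc(VERBOSE), loc(CONCISE))  # prints: 6 2
```

By this metric the verbose version looks three times as "productive", even though the concise one delivers the same functionality with less code to maintain.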
Counting Function Points
Having established that an increase in Lines of Code is not necessarily proportional to an increase in functionality, should managers instead monitor the amount of functionality that developers and teams deliver? Each Function Point represents the fulfilment of one step towards producing software with the required functionality. These steps are already recorded in the tools the development team uses, such as JIRA or Subversion, so developers are not required to spend time on any further administration.
The functionality of the software is central to the purpose of creating it. In this vein, Function Point counting might seem the ideal measure of productivity. However, each function is different: some are straightforward, while others present tougher challenges for developers to solve. Further, developers should be incentivized to deliver a Function Point without causing issues that create further work for the team, and they should not be encouraged to cherry-pick easier tasks; such behaviors are not in line with good teamwork, especially under the Agile methodology. So one Function Point, or a particular number of them, cannot be fairly compared with another, and different programming languages require different amounts of work. To track productivity and calculate efficiency, managers need a consistent expression of effort throughout all stages of work.
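One standard attempt to address this variability is to weight each function by type and assessed complexity before counting. The sketch below uses the commonly published IFPUG weights for unadjusted Function Points with an entirely hypothetical inventory; note that even a weighted count still assumes every point of the same class demands the same effort, which is exactly the assumption the text questions.

```python
# Commonly published IFPUG weights for unadjusted Function Points,
# indexed by function type and assessed complexity.
WEIGHTS = {
    "external_input":     {"simple": 3, "average": 4,  "complex": 6},
    "external_output":    {"simple": 4, "average": 5,  "complex": 7},
    "external_inquiry":   {"simple": 3, "average": 4,  "complex": 6},
    "internal_file":      {"simple": 7, "average": 10, "complex": 15},
    "external_interface": {"simple": 5, "average": 7,  "complex": 10},
}

def unadjusted_fp(counts):
    """counts: {(function_type, complexity): number_of_functions}"""
    return sum(WEIGHTS[ftype][cplx] * n for (ftype, cplx), n in counts.items())

# Hypothetical inventory for a small system.
counts = {
    ("external_input", "simple"): 4,    # 4 * 3  = 12
    ("external_output", "average"): 2,  # 2 * 5  = 10
    ("internal_file", "complex"): 1,    # 1 * 15 = 15
}
print(unadjusted_fp(counts))  # prints: 37
```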
Story Points or Use-Case Points (UCPs)
Story Points represent a stage in the project and can be seen as a higher-level version of Function Points. Use-Case Points are higher-level still: the first step in breaking down the high-level software requirement, or Use Case. While Function Points vary in size, Agile teams typically ascribe a size value to Story Points [3]. This appreciation of the different amounts of work required is part of why McKinsey evangelize UCPs as a way of tracking progress [4], without requiring administration beyond the usual activities of team management.
The approach fits with the Burn-Down Charts used in Agile to monitor a project's progress through its steps. But the completion of each step doesn't reveal any insight into the actual amount of work it required. Managers still only have their original estimations to work with, which may not be accurate. They may become aware of gross inaccuracies in estimation due to unforeseen challenges, but won't be aware of, say, a Story Point marked in the plan as a '5' that should have been a '3' but was executed slowly.
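That gap can be made concrete with a small, entirely hypothetical sketch: two completed stories carry the same planned size, so a Burn-Down Chart treats them identically, even though the actual hours spent differ sharply.

```python
# Hypothetical sprint data: (story, planned_points, actual_hours_spent).
completed = [
    ("checkout flow", 5, 12.0),   # roughly in line with its estimate
    ("audit logging", 5, 41.5),   # same planned size, far slower in practice
]

# A burn-down chart only ever sees the planned points burned down...
burned_down = sum(points for _, points, _ in completed)

# ...so the realized cost per point, which is what managers would need
# to spot slow execution, stays invisible in the chart.
hours_per_point = {story: hours / points for story, points, hours in completed}

print(burned_down)       # prints: 10 -- identical progress either way
print(hours_per_point)
```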
Given the unreliability of the software development metrics currently in use, as outlined above, how can organizations be sure they select the best way to deliver software projects, and that those projects are on track? Finding a suitable means was the foundation of the research, started in 2001, that resulted in BlueOptima's proven solution for measuring the effort invested in (and thus required by) any software project, Story Point, or Function Point.
At the heart of the solution is Coding Effort: a measure, expressed in hours, of the work that software developers contribute towards code repositories. This metric offers a basis on which to evaluate the efficacy of developers and teams within your organization, across various outsourcers, and for individuals and teams within the outsourcers you use. Thus, the options can be compared fairly in terms of cost-efficiency. When managers perceive project progress to be faster or slower than planned, this may simply be because they forecast more or less time than the project required; using Coding Effort as the basis for assessing value for money removes this factor of inaccurate estimation.
With experience, organizations gain intelligence about the Coding Effort required for different types of Function Points, Story Points, or projects. Factoring in the time per unit of effort and the cost per unit of time that internal or external developers can supply affords well-informed estimations of time and cost for each option, letting managers confidently choose the most cost-effective way to proceed. Once a project is under way, BlueOptima provides insight into the rate at which developers exert Coding Effort, so that productivity drags can be resolved and progress monitored.
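As a worked illustration of that estimation step, the sketch below uses entirely hypothetical figures: an estimated Coding Effort for a project, plus each sourcing option's effort-hours delivered per week and cost per effort-hour, yields a comparable time and cost for every option.

```python
# Hypothetical: estimated Coding Effort required for the project, in hours.
REQUIRED_EFFORT_HOURS = 1200.0

# Hypothetical sourcing options: effort-hours supplied per week,
# and cost per effort-hour.
options = {
    "in-house team": {"effort_hours_per_week": 60.0, "cost_per_effort_hour": 95.0},
    "outsourcer A":  {"effort_hours_per_week": 90.0, "cost_per_effort_hour": 80.0},
    "outsourcer B":  {"effort_hours_per_week": 75.0, "cost_per_effort_hour": 70.0},
}

def forecast(effort_hours, option):
    """Return (calendar weeks, total cost) for delivering the given effort."""
    weeks = effort_hours / option["effort_hours_per_week"]
    cost = effort_hours * option["cost_per_effort_hour"]
    return weeks, cost

forecasts = {name: forecast(REQUIRED_EFFORT_HOURS, o) for name, o in options.items()}
cheapest = min(forecasts, key=lambda name: forecasts[name][1])

print(cheapest)  # prints: outsourcer B  (1200 * 70 = 84,000, the lowest cost)
```

Because every option is priced against the same effort estimate, the comparison is unaffected by how optimistic or pessimistic the original schedule was.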